Distributed Tree Management

The fundamental unit of the AMR mesh is a node, i.e. a rectangular patch. First we define $\mathcal{N}$ as the set of all nodes and $\mathcal{P}$ as the set of all processors. We also define the functions $l(n)$, $p(n)$, and $g(n)$ to identify the level of a node, the mpi_id of the processor containing the node's data, and the generation of the node. Using these definitions we can construct the set of all nodes on level $l$ and generation $g$ as

$\mathcal{N}_l^g = \{\, n \in \mathcal{N} : l(n) = l,\ g(n) = g \,\}$
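As a rough illustration of this bookkeeping, the following Python sketch defines a node record and the level/generation selection; the field names and the helper `nodes_on` are assumptions made for illustration, not the actual data structures of the code.

{{{#!python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Node:
    """Metadata for one rectangular AMR patch (hypothetical layout)."""
    level: int            # l(n): refinement level
    mpi_id: int           # p(n): rank of the processor holding the node's data
    generation: int       # g(n): regrid generation this node belongs to
    lo: Tuple[int, ...]   # lower cell-index bounds of the patch
    hi: Tuple[int, ...]   # upper cell-index bounds of the patch

def nodes_on(all_nodes, level, generation):
    """Return N_l^g: all nodes on the given level and generation."""
    return [n for n in all_nodes
            if n.level == level and n.generation == generation]
}}}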

For the AMR calculation to proceed, every node must be aware of any nodes with which it might need to exchange data. This is generally based on some distance between the nodes' boundaries. Measuring in cell widths along each dimension, we can define an inter-node distance between nodes m and n as

$R_{mn} = \max_i \Big( \max\big( lo_m^i - hi_n^i,\ lo_n^i - hi_m^i \big) \Big)$

where $lo^i$ and $hi^i$ are the lower and upper cell-index bounds of a node along dimension $i$.

Note that if grids are adjacent then $R_{mn} = 1$, and if grids overlap then $R_{mn} \le 0$.
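A minimal sketch of this distance computation, using the hypothetical `Node` layout above (the exact formula is an assumption consistent with the adjacency/overlap convention):

{{{#!python
def internode_distance(m, n):
    """R_mn in cell widths: 1 for adjacent patches, <= 0 for overlapping ones."""
    return max(max(m.lo[i] - n.hi[i], n.lo[i] - m.hi[i])
               for i in range(len(m.lo)))
}}}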

Different stages of the algorithm require different numbers of ghost zones. The hyperbolic advance, for instance, will typically require between 2 and 4 ghost zones (mbcH). After a hydro advance, the region of the grid containing valid data shrinks by mbcH. If we need to take 2 hydro steps on this level and finish with an updated region the size of the node, we must start out with data that extends 2*mbcH beyond the node. We therefore need to begin by collecting data from all previous nodes with an inter-node distance ≤ 2*mbcH. Since this data will come from overlapping nodes of the previous generation, we'll call it the previous overlap data.

The collection of nodes from which we need to receive this previous data is called the Previous Overlap Node Group (PONG):

$PONG_n = \{\, m \in \mathcal{N}_{l(n)}^{g(n)-1} : R_{nm} \le 2\,\mathrm{mbc}_H \,\}$
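A sketch of how this group could be gathered with the illustrative helpers above (function names are assumptions):

{{{#!python
def previous_overlap_node_group(n, all_nodes, mbcH):
    """PONG_n: previous-generation nodes within 2*mbcH of node n."""
    return [m for m in nodes_on(all_nodes, n.level, n.generation - 1)
            if internode_distance(n, m) <= 2 * mbcH]
}}}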

After completing one hydro step, we now need to synchronize fluxes with adjacent neighboring nodes. These Neighbor Node Groups (NNG) are defined as nodes with an inter-node distance of 1, or more formally

$NNG_n = \{\, m \in \mathcal{N}_{l(n)}^{g(n)} : R_{nm} = 1 \,\}$
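Continuing the sketch (again with assumed helper names):

{{{#!python
def neighbor_node_group(n, all_nodes):
    """NNG_n: same-generation nodes adjacent to node n (R_nm == 1)."""
    return [m for m in nodes_on(all_nodes, n.level, n.generation)
            if m is not n and internode_distance(n, m) == 1]
}}}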

After synchronizing fluxes and internal data from child grids, we should update our external data with refined data from surrounding grids. Now that we have completed one step, if we need to take a second step we only need to fill in ghost data from current grids that are within mbcH, or more generally within a distance equal to the number of remaining steps times mbcH. We will call these the Current Overlap Node Group (CONG), or more formally:

$CONG_n = \{\, m \in \mathcal{N}_{l(n)}^{g(n)} : R_{nm} \le \mathrm{mbc}_H \,\}$
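A corresponding sketch, parameterized by the number of remaining steps (an assumed generalization of the mbcH criterion):

{{{#!python
def current_overlap_node_group(n, all_nodes, mbcH, steps_remaining=1):
    """CONG_n: same-generation nodes within steps_remaining * mbcH of node n."""
    return [m for m in nodes_on(all_nodes, n.level, n.generation)
            if m is not n
            and internode_distance(n, m) <= steps_remaining * mbcH]
}}}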

Of course we should resynchronize our fluxes with our neighbors - but these will be the same neighbors as before.

Finally we need to exchange our data with the new set of grids, or our Future Overlap Node Group (FONG), using the same criterion as the next generation's PONG so that the inclusion criteria remain symmetric:

$FONG_n = \{\, m \in \mathcal{N}_{l(n)}^{g(n)+1} : R_{nm} \le 2\,\mathrm{mbc}_H \,\}$
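A sketch mirroring the PONG selection above (helper names are assumptions):

{{{#!python
def future_overlap_node_group(n, all_nodes, mbcH):
    """FONG_n: next-generation nodes within 2*mbcH of node n."""
    return [m for m in nodes_on(all_nodes, n.level, n.generation + 1)
            if internode_distance(n, m) <= 2 * mbcH]
}}}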

It is likely that our Current Overlaps and Neighbors will contain many of the same nodes. In fact, provided that mbcH ≥ 1, every neighbor is also a current overlap ($NNG_n \subseteq CONG_n$), so we can use our current overlaps in place of our neighbors; however, this may occasionally lead to sync-flux sub-messages being sent to processors containing non-adjacent grids.

Each node group also defines a processor group, made up of the processors on which the members of the node groups of a processor's nodes reside. For example, the Previous Overlap Processor Group for processor $p$ can be defined as

$POPG_p = \{\, p(m) : m \in PONG_n,\ n \in \mathcal{N},\ p(n) = p \,\}$

The Neighbor Processor Group (NPG), the Current Overlap Processor Group (COPG), and the Future Overlap Processor Group (FOPG) are defined analogously:

$NPG_p = \{\, p(m) : m \in NNG_n,\ n \in \mathcal{N},\ p(n) = p \,\}$

$COPG_p = \{\, p(m) : m \in CONG_n,\ n \in \mathcal{N},\ p(n) = p \,\}$

$FOPG_p = \{\, p(m) : m \in FONG_n,\ n \in \mathcal{N},\ p(n) = p \,\}$
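A small sketch of turning node groups into processor groups, given the nodes owned by a rank and any of the node-group functions above (names are assumptions):

{{{#!python
def processor_group(local_nodes, node_group_of):
    """Ranks owning the members of the node groups of this rank's nodes,
    e.g. POPG_p when node_group_of builds the PONG of each local node."""
    return {m.mpi_id for n in local_nodes for m in node_group_of(n)}
}}}

For example, `processor_group(my_nodes, lambda n: previous_overlap_node_group(n, all_nodes, mbcH))` would give the POPG of the calling rank under these assumed helpers.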

Since there is no coordinating processor managing the transfers, the sending and receiving of data needs to be a cooperative endeavor. So if processor 'p' is in the POPG of processor 'q', then processor 'q' should be in the FOPG of processor 'p'. Fortunately, as long as the inclusion criteria for the node groups are symmetric, the processor groups will be symmetric. Of course, before a remote node can be put in a node group, the processor must know about the remote node. Since we implement a nested grid structure, any node belonging in a node's node group must be a child of some member of that node's parent's node group. So if parent nodes exchange their children, they can determine whether or not the children of the nodes in their node groups belong in the node groups of their own children. Of course this requires that parents exchange their children with the members of their node groups (or at least the children that might belong to those members' children's node groups).
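The nesting property means a child's node group can be built by testing only the children of its parent's node group, rather than searching the whole tree. A sketch of that narrowing, assuming each node record also carries a list of its children (an attribute not shown in the earlier sketch):

{{{#!python
def child_node_group(child, parent_node_group, criterion):
    """Build a child's node group from the children of the nodes in its
    parent's node group; criterion(child, candidate) tests inclusion,
    e.g. lambda a, b: internode_distance(a, b) == 1 for an NNG."""
    candidates = [c for m in parent_node_group for c in m.children]
    return [c for c in candidates if criterion(child, c)]
}}}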
