wiki:AstroBearAmr

AMR Explained

At first glance the AMR routine in amr_control.f90 is a little intimidating, but it can be understood by examining the most basic features first.

Please note that this is an explanation of AstroBEAR's AMR implementation, not adaptive mesh refinement itself. Readers who are unfamiliar with AMR might wish to look over some of the AMR resources we have on the How AstroBEAR Works page.


Round 1: A Simple, Single-Processor AMR Algorithm

First we'll start with a simplified version of the AMR algorithm. This version focuses only on the most essential features:

RECURSIVE SUBROUTINE AMR(n)
   INTEGER :: n, nSteps, step
   nSteps = 2
   CALL InitInfos(n)
   CALL ProlongateParentsData(n)
   DO step=1,nSteps
      levels(n)%step=step
      IF (step == 2) CALL UpdateOverlaps(n)
      CALL ApplyOverlaps(n,step)
      CALL ApplyPhysicalBCs(n)
      CALL SetErrFlags(n)
      IF (step == 2) CALL AgeNodesChildren(n)
      CALL BackupNodes(n+1)
      CALL CreateChildrens(n)
      IF (step == 1) THEN
         CALL InheritOldNodeOverlapsChildren(n)
         CALL InheritNewNodeOverlapsChildren(n)
      ELSE
         CALL InheritOverlapsOldChildren(n)
         CALL InheritOverlapsNewChildren(n)
      END IF
      CALL InheritNeighborsChildren(n)
      CALL AdvanceGrids(n)
      CALL AMR(n+1)
      CALL ApplyChildrenData(n)
      CALL SyncFluxes(n)
      CALL AccumulateFluxes(n)
      IF (step == 2) CALL NullifyNeighbors(n)
   END DO
   CALL CoarsenDataForParents(n)
END SUBROUTINE AMR

In this example, the parameter n represents the current level of the operation.

Remember as we step through the AMR() subroutine that it is recursive: at each step on level n (inside the DO nSteps loop), AMR(n) calls itself on the next level up via AMR(n+1). This is because for each step on a level n >= 0 there are two steps on level n+1, and the levels above the base level are regridded at each step.

IMPORTANT: Because AMR() is recursive, each call to AMR() at level n assumes that certain steps were carried out by the call at level n-1. This can make the structure of AMR() a little confusing, especially once parallelization is included.
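
Because each call on level n takes two steps and recurses into level n+1 before finishing each of its own steps, level n+1 ends up advancing twice for every advance of level n. The following self-contained sketch (illustrative only, not AstroBEAR code) mimics this recursion and prints the order in which the level advances occur:

PROGRAM subcycle_demo
   ! Minimal stand-in for the AMR() recursion: every level takes two steps
   ! per parent step, so for each step on the base level, level n advances
   ! 2**n times.
   IMPLICIT NONE
   INTEGER, PARAMETER :: MaxLevel = 2
   CALL AMR(0)
CONTAINS
   RECURSIVE SUBROUTINE AMR(n)
      INTEGER, INTENT(IN) :: n
      INTEGER :: step
      DO step = 1, 2
         WRITE(*,'(A,I0,A,I0)') 'Advancing level ', n, ', step ', step
         IF (n < MaxLevel) CALL AMR(n+1)   ! recurse before finishing this step
      END DO
   END SUBROUTINE AMR
END PROGRAM subcycle_demo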

AMR(n) calls two subroutines before stepping through the simulation to finish initializing level n's data:

  • InitInfos(n) — Allocates grid data (InfoDef) structures for the grids on level n. Note that the tree structure and grid dimensions were created on the previous level n-1; InitInfos() just creates the data structures they reference.
  • ProlongateParentsData(n) — Populates the level n data structures with prolongated data from their parents on level n-1.
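
To make the prolongation step concrete, here is a deliberately simplified 1D sketch of what ProlongateParentsData() accomplishes, assuming a refinement ratio of 2 and plain direct injection. The routine name and array layout are invented for illustration; the real routine operates on InfoDef structures and may use higher-order interpolation.

SUBROUTINE ProlongateDirectInjection(coarse, nc, fine)
   ! Hypothetical 1D illustration: each parent (level n-1) cell fills its
   ! two child (level n) cells with the same value.
   IMPLICIT NONE
   INTEGER, INTENT(IN)  :: nc           ! number of coarse cells
   REAL,    INTENT(IN)  :: coarse(nc)   ! parent data
   REAL,    INTENT(OUT) :: fine(2*nc)   ! child data (refinement ratio 2)
   INTEGER :: i
   DO i = 1, nc
      fine(2*i-1) = coarse(i)
      fine(2*i)   = coarse(i)
   END DO
END SUBROUTINE ProlongateDirectInjection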

Once the data has been constructed, we can begin the work of advancing the simulation. Each level n will take two steps for every single step taken by n-1, but the process is slightly different on each step for levels above the base level.

Step 1

  • ApplyOverlaps(n, step) — After initializing level n with prolongated data from level n-1, AMR() copies over data from the previous generation of grids on level n. This higher-resolution data is preferable to data prolongated from the previous level, so AMR() uses it wherever it is available.
  • ApplyPhysicalBCs(n) — Apply physical boundary conditions to level n.
  • SetErrFlags(n) — Determine which regions to refine. Refinement regions are determined by the physical processes involved, as well as specific conditions imposed by the problem modules.
  • BackupNodes(n+1) — Caches the nodes on the child level n+1. We are about to create the new level n+1 nodes, and the data referenced by the backed-up nodes will be used when ApplyOverlaps(n+1) is called (see above).
  • CreateChildrens(n) — This routine creates child nodes on level n+1 using the refinement flags set on level n. The use of "Childrens" here is not a typo; CreateChildrens() is so named because it applies the CreateChildren(Info) subroutine to each grid on level n.
  • InheritOldNodeOverlapsChildren(n) — Nested grids mean that spatial relationships (overlaps and neighbors) are inherited from parent grids. This routine passes information about the previous generation of n+1 grids to the new grids created by CreateChildrens(n). This routine is only executed on step 1 of the AMR() execution loop.
  • InheritNewNodeOverlapsChildren(n) — The children of previous level n grids will also need to send their data to the children of new level n grids.
  • InheritNeighborsChildren(n) — The children of neighboring grids on level n will likely be neighbors on level n+1. This routine passes neighbor information from level n to level n+1.
  • AdvanceGrids(n) — Performs the hyperbolic advance step on the grids of level n. This is where our numerical solvers come in.
  • AMR(n+1) — Launches AMR routine on the child level
  • ApplyChildrenData(n) — The inverse of ProlongateParentsData(), this routine restricts data from the child grids onto their parent grids, providing a more accurate solution on the coarser level.
  • SyncFluxes(n) — To enforce mass conservation and the DivB constraint, the fluxes at grid boundaries need to be synchronized.
  • AccumulateFluxes(n) — Accumulates the fluxes used on level n to send back to the parent grids on level n-1.
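
The restriction performed by ApplyChildrenData() can be pictured with an equally simplified 1D sketch, again assuming a refinement ratio of 2: every coarse cell covered by a child is overwritten with the conservative average of its fine cells. The names and array shapes below are illustrative only.

SUBROUTINE RestrictAverage(fine, nc, coarse)
   ! Hypothetical 1D illustration of restriction: coarse data under refined
   ! regions is replaced by the average of the corresponding fine data.
   IMPLICIT NONE
   INTEGER, INTENT(IN)    :: nc
   REAL,    INTENT(IN)    :: fine(2*nc)   ! child (level n+1) data
   REAL,    INTENT(INOUT) :: coarse(nc)   ! parent (level n) data
   INTEGER :: i
   DO i = 1, nc
      coarse(i) = 0.5 * (fine(2*i-1) + fine(2*i))
   END DO
END SUBROUTINE RestrictAverage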

Step 2

  • UpdateOverlaps(n) — On the second step we don't need to receive overlap data from the previous generation of grids. We do, however, need to 'ghost' data with our current overlap grids or neighbors. So we treat our neighbors as our current overlaps; in effect, we use the same node list for neighbor operations and overlap operations. This is the reason why we use the NullifyNeighbors() routine later on instead of deleting the node list.
  • ApplyOverlaps(n, step) - Brings over ghost data from the neighbor grids, which are the current overlaps. This neighbor grid data has been advanced by the AdvanceGrids() subroutine, and thus is more accurate than the extrapolated data within the grid's ghost zones.
  • ApplyPhysicalBCs(n) — Apply physical boundary conditions to level n.
  • SetErrFlags(n) — Determine which regions to refine. Refinement regions are determined by the physical processes involved, as well as specific conditions imposed by the problem modules.
  • AgeNodesChildren(n) — Because the nested grids give us inherited relationships, we need to back up the relationships connecting us to the previous child grids on level n+1, as well as the nodes themselves.
  • BackupNodes(n+1) — Caches the nodes on the child level n+1. We are about to create the new level n+1 nodes, and the data referenced by the backed-up nodes will be used when ApplyOverlaps(n+1) is called (see above).
  • CreateChildrens(n) — This routine creates child nodes on level n+1 using the refinement flags set on level n. The use of "Childrens" here is not a typo; CreateChildrens() is so named because it applies the CreateChildren(Info) subroutine to each grid on level n.
  • InheritOverlapsOldChildren(n) — Nested grids mean that spatial relationships (overlaps/neighbors) are inherited from parent grids. On the second step the previous generation of level n+1 grids are the old children of the current generation of level n grids.
  • InheritOverlapsNewChildren(n) — This inherits the relationships going the other way. The old children of level n grids will need to send their data to the new children of level n grids.
  • InheritNeighborsChildren(n) — The children of neighboring grids on level n will likely be neighbors on level n+1. This routine passes neighbor information from level n to level n+1.
  • AdvanceGrids(n) — Performs the hyperbolic advance step on the grids of level n. This is where our numerical solvers come in.
  • AMR(n+1) — Launches AMR routine on the child level
  • ApplyChildrenData(n) — The inverse of ProlongateParentsData(), this routine restricts data from the child grids onto their parent grids, providing a more accurate solution on the coarser level.
  • SyncFluxes(n) — To enforce mass conservation and the DivB constraint, the fluxes at grid boundaries need to be synchronized.
  • AccumulateFluxes(n) — Accumulates the fluxes used on level n to send back to the parent grids on level n-1.
  • NullifyNeighbors(n) — On the second step, each node's neighbor list and overlap list is pointing to the same list (the node's neighbor list). This routine nullifies the neighbor list pointers without destroying the nodes they point to. In effect, this turns the current generation's neighbor lists into the next generation's overlap lists. On the next step, the BackUpNodes() routine will destroy the overlap nodes.
  • CoarsenDataForParents(n) — Now that both advance steps are complete for this level, it's time to coarsen the cell-centered data back down to the level n-1 grids.
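
To see why the flux bookkeeping matters, consider a coarse cell whose right face lies on a coarse-fine boundary. The sketch below (the names and sign convention are illustrative, not AstroBEAR's actual routines) re-updates that cell so the flux it used through the shared face matches the time-averaged fine fluxes, which is what SyncFluxes() and AccumulateFluxes() make possible:

SUBROUTINE RefluxCoarseCell(q, f_coarse, f_fine1, f_fine2, dt, dx)
   ! Hypothetical 1D flux correction at a coarse-fine interface on the right
   ! face of a coarse cell. The coarse update originally used f_coarse; its
   ! contribution is replaced by the average of the two fine-step fluxes.
   IMPLICIT NONE
   REAL, INTENT(INOUT) :: q          ! conserved quantity in the coarse cell
   REAL, INTENT(IN)    :: f_coarse   ! flux used by the coarse update
   REAL, INTENT(IN)    :: f_fine1    ! fine flux through the face, step 1
   REAL, INTENT(IN)    :: f_fine2    ! fine flux through the face, step 2
   REAL, INTENT(IN)    :: dt, dx     ! coarse time step and cell width
   REAL :: f_fine_avg
   f_fine_avg = 0.5 * (f_fine1 + f_fine2)   ! two fine steps per coarse step
   q = q + (dt/dx) * (f_coarse - f_fine_avg)
END SUBROUTINE RefluxCoarseCell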


Round 2: Refined Bookkeeping and Elliptic Solvers

Now that we have constructed a simple AMR routine, we will add a few routines to improve MHD calculations and simplify AMR tree management. We will also add sink particles and the elliptic solver step to the code, expanding the capabilities of our AMR algorithm.

RECURSIVE SUBROUTINE AMR(n)
   INTEGER :: n, nSteps, step
   nSteps = 2
   CALL InitInfos(n)
   CALL ProlongateParentsData(n)
   CALL ChildMaskOverlaps(n)
   DO step=1,nSteps
      levels(n)%step=step
      IF (step == 2) CALL UpdateOverlaps(n)
      CALL ApplyOverlaps(n,step)
      CALL ProlongationFixups(n)
      IF (lParticles) CALL ParticleUpdate(n)
      CALL ApplyPhysicalBCs(n)
      CALL SetErrFlags(n)
      IF (step == 2) CALL AgeNodesChildren(n)
      CALL BackupNodes(n+1)
      CALL CreateChildrens(n)
      IF (step == 1) THEN
         CALL InheritOldNodeOverlapsChildren(n)
         CALL InheritNewNodeOverlapsChildren(n)
      ELSE
         CALL InheritOverlapsOldChildren(n)
         CALL InheritOverlapsNewChildren(n)
      END IF
      CALL InheritNeighborsChildren(n)
      CALL AdvanceGrids(n)
      IF (lElliptic) CALL Elliptic(n)
      CALL PrintAdvance(n)
      CALL AMR(n+1)
      CALL ApplyChildrenData(n)
      CALL RestrictionFixups(n)
      CALL AfterFixups(n)
      CALL UpdateChildMasks(n)
      CALL SyncFluxes(n)
      CALL AccumulateFluxes(n)
      IF (step == 2) CALL NullifyNeighbors(n)
   END DO
   CALL CoarsenDataForParents(n)
END SUBROUTINE AMR
  • ParticleUpdate(n) — If there are sink particles then we update the particles here.
  • Elliptic(n) — If elliptic equations are being used, then the elliptic step is performed here.
  • PrintAdvance(n) — Just prints the 'Advancing level n …' line to the standard output (stdout). This routine has two collective communications in it, so it can be a bottleneck on a cluster with slow network connections between its nodes.
  • ProlongationFixups(n) — It is better to complete the prolongation of the aux fields after receiving overlaps. This guarantees that child grids have divergence-free fields consistent with both their neighbors and their parents.
  • ChildMaskOverlaps(n) — This sets the ChildMask array values to 0 for ghost cells that are refined by neighbors.
  • UpdateChildMasks(n) — This sets the ChildMask array values to 1 in grid cells that are refined by the grid's own children. It also sets ChildMask to NEIGHBOR_CHILD in grid cells that are refined by a neighbor's children.
  • RestrictionFixups(n) — This updates cell-centered representations of aux fields after receiving restricted data from children.
  • AfterFixups(n) — This allows for user-defined routines to be applied after a grid has been fully updated. This is not to be confused with the AfterStep() routine, which is executed after the hyperbolic step is completed.
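
A toy version of the bookkeeping behind UpdateChildMasks() is sketched below. It only marks cells refined by the grid's own children and flattens everything to 1D index ranges; the actual routine also handles the NEIGHBOR_CHILD case and full multi-dimensional extents.

SUBROUTINE MarkOwnChildren(ChildMask, nCells, nChildren, cLo, cHi)
   ! Illustrative 1D sketch: flag every coarse cell that lies under one of
   ! this grid's own children (0 = unrefined, 1 = refined by own child).
   IMPLICIT NONE
   INTEGER, INTENT(IN)    :: nCells, nChildren
   INTEGER, INTENT(INOUT) :: ChildMask(nCells)
   INTEGER, INTENT(IN)    :: cLo(nChildren), cHi(nChildren)   ! index range covered by each child
   INTEGER :: ic
   DO ic = 1, nChildren
      ChildMask(cLo(ic):cHi(ic)) = 1
   END DO
END SUBROUTINE MarkOwnChildren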


Round 3: The Maximum Level of Refinement

Up to this point we've assumed we are on an intermediate level of the AMR tree, with a level above us and a level below us. What is different if we are on the highest level MaxLevel?

RECURSIVE SUBROUTINE AMR(n)
   INTEGER :: n, nSteps, step
   nSteps = 2
   CALL InitInfos(n)
   CALL ProlongateParentsData(n)
   CALL ChildMaskOverlaps(n)
   DO step=1,nSteps
      levels(n)%step=step
      IF (step == 2) CALL UpdateOverlaps(n)
      CALL ApplyOverlaps(n,step)
      CALL ProlongationFixups(n)
      IF (lParticles) CALL ParticleUpdate(n)
      CALL ApplyPhysicalBCs(n)
      IF (n < MaxLevel) THEN
         CALL SetErrFlags(n)
         IF (step == 2) CALL AgeNodesChildren(n)
         CALL BackupNodes(n+1)
         CALL CreateChildrens(n)
         IF (step == 1) THEN
            CALL InheritOldNodeOverlapsChildren(n)
            CALL InheritNewNodeOverlapsChildren(n)
         ELSE
            CALL InheritOverlapsOldChildren(n)
            CALL InheritOverlapsNewChildren(n)
         END IF
         CALL InheritNeighborsChildren(n)
      END IF
      CALL AdvanceGrids(n)
      IF (lElliptic) CALL Elliptic(n)
      CALL PrintAdvance(n)
      IF (n < MaxLevel) CALL AMR(n+1)
      IF (n < MaxLevel) CALL ApplyChildrenData(n)
      CALL RestrictionFixups(n)
      CALL AfterFixups(n)
      IF (n < MaxLevel) CALL UpdateChildMasks(n)
      CALL SyncFluxes(n)
      CALL AccumulateFluxes(n)
      IF (step == 2) CALL NullifyNeighbors(n)
   END DO
   CALL CoarsenDataForParents(n)
END SUBROUTINE AMR

Nodes on MaxLevel will not have any children, and thus MaxLevel is the stopping case for our recursive AMR algorithm. AMR() should therefore not call itself on MaxLevel + 1. Similarly, none of the functions that only apply to creating, initializing, or responding to child nodes should be called on MaxLevel. This means no error flags, no child grid creation, and no restriction operations.

In the code block above, we've applied the conditional

IF (n < MaxLevel)

to the following routines that deal with child nodes. This prevents these routines from being executed on the highest allowable level of refinement:

  • SetErrFlags(n)
  • AgeNodesChildren(n)
  • BackupNodes(n+1)
  • CreateChildrens(n)
  • Inherit*Overlaps(n) (all the different permutations of the InheritOverlaps functionality).
  • AMR(n+1)
  • ApplyChildrenData(n)
  • UpdateChildMasks(n)


Round 4: Behavior on the Lower Levels

At the other end of the AMR hierarchy is level 0, the base grid. This is the coarsest resolution in the problem, the source of the "root time step" against which the higher level timesteps are judged. The base grid has no parent grid, so it shouldn't execute any restriction or fixup routines that would map its data to a lower level.

Unfortunately, this is also where things get complicated. AstroBEAR doesn't have any grids below level 0, but the AMR tree structure in AstroBEAR does have levels below 0. This is because AstroBEAR needs to distribute processors as well as grids, and it is easier to do so with a root node below the base level. In addition, the problem domain might be split into multiple subdomains with different conditions, some of which may span more than one processor on the base level. Adding an additional level of nodes makes it easier for AstroBEAR to keep track of these regions.

RECURSIVE SUBROUTINE AMR(n)
   INTEGER :: n, nSteps, step
   IF (n <= 0) nSteps=1
   IF (n >  0) nSteps = 2
   IF (n > -2) THEN
      CALL InitInfos(n)
      CALL ProlongateParentsData(n)
      IF (n > -1) CALL ChildMaskOverlaps(n)
   END IF
   DO step=1,nSteps
      levels(n)%step=step
      IF (step == 2) CALL UpdateOverlaps(n)
      IF (n > -2) CALL ApplyOverlaps(n,step)
      IF (n > 0) CALL ProlongationFixups(n)
      IF (n > -1 .AND. lParticles) CALL ParticleUpdate(n)
      IF (n > -1) CALL ApplyPhysicalBCs(n)
      IF (n < MaxLevel) THEN
         IF (n > -1) THEN
            CALL SetErrFlags(n)
         END IF
         IF (step == 2 .OR. n == -2) THEN
            CALL AgeNodesChildren(n)
         END IF
         CALL BackupNodes(n+1)
         CALL CreateChildrens(n)
         IF (n == -2) THEN
            CALL InheritOverlapsOldChildren(n)
            CALL InheritNeighborsChildren(n)
            CALL InheritOverlapsNewChildren(n)
         ELSE
            IF (step == 1) THEN
               CALL InheritOldNodeOverlapsChildren(n)
               CALL InheritNewNodeOverlapsChildren(n)
               CALL InheritNeighborsChildren(n)
            ELSE
               CALL InheritOverlapsOldChildren(n)
               CALL InheritNeighborsChildren(n)
               CALL InheritOverlapsNewChildren(n)
            END IF
         END IF
      END IF
      IF (n > -1) THEN
         CALL AdvanceGrids(n)
         IF (lElliptic) CALL Elliptic(n)
         CALL PrintAdvance(n)
      END IF
      IF (n < MaxLevel) CALL AMR(n+1)
      IF (n < MaxLevel) CALL ApplyChildrenData(n)
      IF (n > -1) THEN
         CALL RestrictionFixups(n)
         CALL AfterFixups(n)
      END IF
      IF (n > -1) THEN
         IF (n < MaxLevel) CALL UpdateChildMasks(n)
         CALL SyncFluxes(n)
      END IF
      IF (n > 0) CALL AccumulateFluxes(n)
      IF (step == 2) CALL NullifyNeighbors(n)
   END DO
   IF (n > -2) CALL CoarsenDataForParents(n)
END SUBROUTINE AMR

Levels 0 and below

Level 0, the base level, represents the lowest level of hydrodynamic data. The grids on this layer have no parent grids, and thus have no need to prolongate or restrict data to them. Consequently, the following subroutines do not need to be called at the base level or below:

  • ProlongationFixups(n)
  • AccumulateFluxes(n)

Were it not for costmap data, these levels would not need to call

  • ProlongateParentsData(n)
  • CoarsenDataForParents(n)

Levels -1 and below

Levels -1 and below do not need to call any routines related solely to hydrodynamic variables. In addition to the routines above, this includes:

  • ParticleUpdate
  • ApplyPhysicalBCs
  • SetErrFlags
  • AdvanceGrids
  • Elliptic
  • PrintAdvance
  • RestrictionFixups
  • AfterFixups
  • SyncFluxes

Additionally, since the entire domain is refined at the root level, levels below 0 do not need to maintain the ChildMask array. So these levels do not need to call:

  • ChildMaskOverlaps
  • UpdateChildMasks

Level -2

The level -2 grid is persistent, so it does not need to be initialized or overlapped. Consequently it does not need to call

  • InitInfos
  • ApplyOverlaps

Additionally, the level -2 grid has no parent nodes, so there is no need to call the parent-related routines

  • ProlongateParentsData
  • CoarsenDataForParents

Finally, since the level -2 grid is persistent, it behaves like a higher-level grid between steps, so it always calls

  • AgeNodesChildren
  • InheritOverlapsOldChildren
  • InheritOverlapsNewChildren
  • InheritNeighborsChildren

Communication

Data

There are essentially four basic data routines that involve sharing data between grids:

  • ProlongateParentsData - Parent to Child (Inter-Level)
  • ApplyChildrenData - Child to Parent (Inter-Level)
  • ApplyOverlaps - Old Grids to Current Grids (Intra-Level)
  • SyncFluxes - Current Grids to Current Grids (Intra-Level)

For parallel applications this requires some degree of communication. In order to overlap computation with communication, it is good to post the sends as soon as the data is available, and to do as much computation as possible before waiting for the receives to complete. Exactly when the receives are first posted and when the sends are checked for completion is somewhat arbitrary; it is reasonable to post the receives before the matching sends are expected, and to complete the sends sometime after the receives should have finished.

For each operation there is likely to be a degree of local sharing between grids, so the basic approach is to post the receives, then the sends, perform the local sharing, and only then wait on the receives to complete, followed by the sends. Sometimes the posting of the receives is shifted earlier, and the completion of the sends is put off until later. For example, the parallel version of ApplyOverlaps is:

  CALL PostRecvOverlaps
  ...
  CALL PostSendOverlaps
  CALL ApplyOverlaps
  CALL CompRecvOverlaps
  ...
  CALL CompSendOverlaps
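
A generic sketch of this pattern using non-blocking MPI is shown below. The routine and variable names are placeholders rather than AstroBEAR's actual communication layer; the point is simply that the receive and send are posted first, local work proceeds, and only afterwards do we wait for completion.

SUBROUTINE OverlapExchangeSketch(sendbuf, recvbuf, n, partner, comm)
   ! Hypothetical non-blocking exchange of overlap data with one partner rank.
   USE mpi
   IMPLICIT NONE
   INTEGER, INTENT(IN)       :: n, partner, comm
   REAL(KIND=8), INTENT(IN)  :: sendbuf(n)
   REAL(KIND=8), INTENT(OUT) :: recvbuf(n)
   INTEGER :: req(2), ierr, statuses(MPI_STATUS_SIZE,2)
   ! Post the receive ("PostRecvOverlaps"), then the send ("PostSendOverlaps").
   CALL MPI_IRECV(recvbuf, n, MPI_DOUBLE_PRECISION, partner, 0, comm, req(1), ierr)
   CALL MPI_ISEND(sendbuf, n, MPI_DOUBLE_PRECISION, partner, 0, comm, req(2), ierr)
   ! ... purely local overlap work goes here ("ApplyOverlaps" on local grids) ...
   ! Complete the receive and the send ("CompRecvOverlaps"/"CompSendOverlaps").
   CALL MPI_WAITALL(2, req, statuses, ierr)
END SUBROUTINE OverlapExchangeSketch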

Tree

In a similar manner there are six tree operations that require some communication between nodes:

  • CreateChildren
  • InheritNeighborsChildren
  • InheritOldNodeOverlapsChildren
  • InheritNewNodeOverlapsChildren
  • InheritOverlapsOldChildren
  • InheritOverlapsNewChildren

As with the data operations, each of these requires four communication calls in order to overlap computation with communication. In all of these cases, it is a node's children that are being communicated, since this is the only tree data that is created locally.

Threading

There are several options for parallelizing the hydro advance across levels. Currently there are three basic approaches:

  • Threading the Advances - The advance on each level can be done independently, although higher-level threads should be given higher priorities.
  • Threading the AMR levels - Each AMR level can also be treated as an independent thread. Unfortunately, this approach requires threads to communicate with threads on other processors, which requires MPI to be extremely thread-safe.
  • PseudoThreading - This is essentially careful scheduling of the advances to mimic the switching that would occur under a threaded implementation. It has the advantage of not requiring any external libraries.

For more information on threading see the Scrambler Threading page.
