Changes between Version 9 and Version 10 of AstroBearAmr


Timestamp: 05/23/11 12:12:42 (14 years ago)
Author: Brandon Shroyer

Legend: a line with only a v9 number was removed in this change, a line with only a v10 number was added, and a line with both numbers is unchanged.
  • AstroBearAmr

    v9 v10  
212 212
213 213  [[BR]]
214      = Round 4: Behavior on the Lower Levels =
    214  == Round 4: Behavior on the Lower Levels ==
215 215
216 216  At the other end of the AMR hierarchy is level 0, the base grid.  This is the coarsest resolution in the problem, the source of the "root time step" against which the higher-level timesteps are judged.  The base grid has no parent grid, so it shouldn't execute any restriction or fixup routines that would map its data to a lower level.
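To make the "root time step" relationship concrete, here is a minimal sketch of how the level timesteps nest beneath it, assuming a fixed refinement ratio of 2 between levels; the program and variable names are illustrative, not AstroBEAR identifiers:

{{{
! Sketch: each refined level takes RefinementRatio times as many steps
! as the level above it, each RefinementRatio times smaller, so every
! level advances the same total time over one root time step.
program timestep_hierarchy
   implicit none
   integer, parameter :: RefinementRatio = 2   ! assumed fixed ratio
   real :: root_dt, level_dt
   integer :: level

   root_dt = 1.0   ! time step taken by the level-0 (base) grid
   do level = 0, 3
      level_dt = root_dt / real(RefinementRatio**level)
      print *, 'level', level, 'takes dt =', level_dt
   end do
end program timestep_hierarchy
}}}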
     
284 284  }}}
285 285  ==== Levels 0 and below ====
286
287 286  Level 0, the base level, represents the lowest level of hydrodynamic data.  The grids on this layer have no parent grids, and thus have no need to prolongate or restrict data to them.  Consequently, the following subroutines do not need to be called at the base level or below:
288 287   * {{{ProlongationFixups(n)}}}
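The guard this implies can be sketched as follows; the loop, the {{{BaseLevel}}} parameter, and the stub body are illustrative assumptions rather than AstroBEAR's actual code, and the other routines in this list would be guarded the same way:

{{{
! Sketch: parent-directed routines are skipped at the base level and
! below, since those levels have no parent grids to exchange data with.
program base_level_guard
   implicit none
   integer, parameter :: BaseLevel = 0
   integer :: n
   do n = -2, 2
      if (n > BaseLevel) call ProlongationFixups(n)   ! only levels with parents
   end do
contains
   subroutine ProlongationFixups(n)   ! stand-in for the real routine
      integer, intent(in) :: n
      print *, 'ProlongationFixups on level', n
   end subroutine ProlongationFixups
end program base_level_guard
}}}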
     
310 309
311 310  ==== Level -2 ====
312      The Level 2 grid is persistent so it does not need to be initialized or overlapped.  So it does not need to call
313       * !InitInfos
314       * !ApplyOverlaps
315      Additionally the level 2 grid has no parent nodes so there is no need to call parent-related routines
316       * !ProlongateParentsData
317       * !CoarsenDataForParents
    311  Level -2 is the root level, which ties together all the different subdomains.  There is only ever one node on level -2, and it is always on the processor with MPI rank 0 (i.e., the master processor).
    312
    313  The Level -2 node is both persistent and parentless.  As such, level -2 never calls:
    314   * {{{InitInfos(n)}}}
    315   * {{{ApplyOverlaps(n)}}}
    316   * {{{ProlongateParentsData(n)}}}
    317   * {{{CoarsenDataForParents(n)}}}
    318
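These exclusions can be sketched as a pair of guards; the flag conditions and the wrapper subroutine are illustrative assumptions (in particular, whether any other level counts as persistent is not specified here):

{{{
! Sketch: a persistent node is never re-initialized or re-overlapped,
! and a parentless node has no parent to exchange data with.
! Level -2 is both, so it skips all four calls.
subroutine node_guards(n)
   implicit none
   integer, intent(in) :: n
   logical :: persistent, parentless
   persistent = (n == -2)   ! assumed condition, for illustration only
   parentless = (n == -2)   ! no level exists below -2
   if (.not. persistent) then
      call InitInfos(n)
      call ApplyOverlaps(n)
   end if
   if (.not. parentless) then
      call ProlongateParentsData(n)
      call CoarsenDataForParents(n)
   end if
end subroutine node_guards
}}}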
318 319  Finally, since level -2 is persistent, it behaves like a higher-level grid in between steps, so it always calls
319       * !AgeNodesChildren
320       * !InheritOverlapsOldChildren
321       * !InheritOverlapsNewChildren
322       * !InheritNeighborsNewChildren
323
324      = Communication =
325      == Data ==
    320  Level -2 behaves like a higher-level grid in between steps.  Since it is persistent, it always calls the following routines:
    321   * {{{AgeNodesChildren(n)}}}
    322   * {{{InheritOverlapsOldChildren(n)}}}
    323   * {{{InheritOverlapsNewChildren(n)}}}
    324   * {{{InheritNeighborsNewChildren(n)}}}
    325
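In sketch form, the between-step sequence for a persistent node is simply the four calls in order; the wrapper subroutine here is an illustrative assumption:

{{{
! Sketch: hand the current children off to the next generation.
subroutine between_steps(n)
   implicit none
   integer, intent(in) :: n
   call AgeNodesChildren(n)             ! current children become old children
   call InheritOverlapsOldChildren(n)
   call InheritOverlapsNewChildren(n)
   call InheritNeighborsNewChildren(n)
end subroutine between_steps
}}}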
    326  [[BR]]
    327  == Communication ==
    328  == Data ==
326 329  There are essentially four basic data routines that involve sharing of data between grids
327 330   * !ProlongateParentsData - Parent to Child (Inter-Level)
     
329 332   * !ApplyOverlaps - Old Grids to Current Grids (Intra-Level)
330 333   * !SyncFluxes - Current Grids to Current Grids (Intra-Level)
331      For parallel applications this requires some degree of communication.  In order to overlap computation with communication, it is good to post the sends as soon as the data is available - and to do as much computation as possible until having to wait for the receives to complete.  When the sends are checked for completion and when the receives are first posted is somewhat arbitrary.  It is reasonable to post the receives before you expect the sends to post and to complete the sends sometime after you expect the receives to have finished.
    334  For parallel applications this requires some degree of communication.  In order to overlap computation with communication, it is good to post the sends as soon as the data is available - and to do as much computation as possible until having to wait for the receives to complete.  When the sends are checked for completion and when the receives are first posted is somewhat arbitrary.  It is reasonable to post the receives before you expect the sends to post and to complete the sends sometime after you expect the receives to have finished.
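The posting pattern described here can be sketched with non-blocking MPI calls; the buffers, ranks, tag, and the {{{do_local_sharing}}} call are illustrative assumptions, not AstroBEAR's actual message layout:

{{{
! Sketch: post receives early, post sends as soon as data is ready,
! overlap local work with the communication in flight, then wait.
subroutine overlap_exchange(sendbuf, recvbuf, dest, src, comm)
   use mpi
   implicit none
   real, intent(in)    :: sendbuf(:)
   real, intent(inout) :: recvbuf(:)
   integer, intent(in) :: dest, src, comm
   integer :: sreq, rreq, ierr

   ! post the receive before the matching send is expected to arrive
   call MPI_Irecv(recvbuf, size(recvbuf), MPI_REAL, src, 0, comm, rreq, ierr)
   ! post the send as soon as the data is available
   call MPI_Isend(sendbuf, size(sendbuf), MPI_REAL, dest, 0, comm, sreq, ierr)

   call do_local_sharing()   ! hypothetical local work to hide latency

   ! wait on the receive first, since its data is needed next ...
   call MPI_Wait(rreq, MPI_STATUS_IGNORE, ierr)
   ! ... and complete the send sometime after
   call MPI_Wait(sreq, MPI_STATUS_IGNORE, ierr)
end subroutine overlap_exchange
}}}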
332 335
333 336  For each operation there is likely to be a degree of local sharing between grids.  The basic approach therefore is to post the receives followed by the sends.  Then perform the local sharing before waiting on the receives to complete, and then the sends.  Sometimes the posting of the receives is shifted earlier, and the completion of the sends is put off until later.  For example the parallel version of !ApplyOverlaps is