Changes between Version 9 and Version 10 of AstroBearAmr
Timestamp: 05/23/11 12:12:42
AstroBearAmr
[[BR]]
== Round 4: Behavior on the Lower Levels ==

At the other end of the AMR hierarchy is level 0, the base grid. This is the coarsest resolution in the problem, and the source of the "root time step" against which the higher-level timesteps are judged. The base grid has no parent grid, so it should not execute any restriction or fixup routines that would map its data to a lower level.

…

==== Levels 0 and below ====
Level 0, the base level, represents the lowest level of hydrodynamic data. The grids on this layer have no parent grids, and thus have no need to prolongate or restrict data to them. Consequently, the following subroutines do not need to be called at the base level or below:
 * {{{ProlongationFixups(n)}}}

…

==== Level -2 ====
Level -2 is the root level, which ties together all the different subdomains. There is only ever one node on level -2, and it always resides on the processor with MPI rank 0 (i.e., the master processor).

The level -2 node is both persistent and parentless. As such, level -2 never calls:
 * {{{InitInfos(n)}}}
 * {{{ApplyOverlaps(n)}}}
 * {{{ProlongateParentsData(n)}}}
 * {{{CoarsenDataForParents(n)}}}

In between steps, level -2 behaves like a higher-level grid. Since it is persistent, it always calls the following routines (the level-by-level gating is sketched after this list):
 * {{{AgeNodesChildren(n)}}}
 * {{{InheritOverlapsOldChildren(n)}}}
 * {{{InheritOverlapsNewChildren(n)}}}
 * {{{InheritNeighborsNewChildren(n)}}}
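Taken together, the rules in this and the preceding subsections amount to a set of level guards around the per-level calls. The fragment below is only an illustrative Fortran sketch of that gating, not the actual AstroBEAR driver, and the ordering of the calls within a step is not meant to be accurate; the {{{LevelIsPersistent}}} predicate is hypothetical and stands in for however the code distinguishes persistent levels.

{{{
! Illustrative sketch of which routines are gated on which levels, as
! described above.  Not the actual AstroBEAR driver, and the ordering of
! the calls within a step is not meant to be accurate.
subroutine AdvanceLevelSketch(n)
   implicit none
   integer, intent(in) :: n                 ! AMR level (0 = base grid, -2 = root)
   logical, external :: LevelIsPersistent   ! hypothetical predicate, not an AstroBEAR routine

   ! Parent-related routines are skipped at the base level and below,
   ! since those levels have no parent grids.
   if (n > 0) then
      call ProlongateParentsData(n)
      call CoarsenDataForParents(n)
      call ProlongationFixups(n)
   end if

   ! Persistent levels (such as the root level -2) are never
   ! re-initialized or overlapped.
   if (.not. LevelIsPersistent(n)) then
      call InitInfos(n)
      call ApplyOverlaps(n)
   end if

   ! In between steps the level hands tree information down to its
   ! children, just as the higher levels do.
   call AgeNodesChildren(n)
   call InheritOverlapsOldChildren(n)
   call InheritOverlapsNewChildren(n)
   call InheritNeighborsNewChildren(n)
end subroutine AdvanceLevelSketch
}}}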
[[BR]]
== Communication ==
=== Data ===
There are essentially four basic data routines that involve sharing of data between grids:
 * !ProlongateParentsData - Parent to Child (Inter-Level)
 * !CoarsenDataForParents - Child to Parent (Inter-Level)
 * !ApplyOverlaps - Old Grids to Current Grids (Intra-Level)
 * !SyncFluxes - Current Grids to Current Grids (Intra-Level)
For parallel applications this requires some degree of communication. In order to overlap computation with communication, it is good to post the sends as soon as the data is available, and to do as much computation as possible before having to wait for the receives to complete. When the sends are checked for completion and when the receives are first posted is somewhat arbitrary. It is reasonable to post the receives before you expect the matching sends to be posted, and to complete the sends sometime after you expect the receives to have finished.

For each operation there is likely to be a degree of local sharing between grids on the same processor. The basic approach is therefore to post the receives followed by the sends, then perform the local sharing before waiting on the receives to complete, and finally on the sends. Sometimes the posting of the receives is shifted earlier, and the completion of the sends is put off until later. For example, the parallel version of !ApplyOverlaps follows this pattern.
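As a generic illustration of this receive-early, send-late pattern (not the actual !ApplyOverlaps implementation), a minimal Fortran/MPI sketch might look like the following; the buffer layout, neighbor lists, and the {{{DoLocalOverlaps}}} placeholder are hypothetical names, not AstroBEAR routines.

{{{
! Generic sketch of the post-receives / post-sends / local work / wait
! pattern described above.  Not AstroBEAR code: the buffers, neighbor
! lists, and DoLocalOverlaps placeholder are hypothetical.
subroutine OverlapCommSketch(nrecv, nsend, recv_ranks, send_ranks, &
                             recv_bufs, send_bufs, bufsize)
   use mpi
   implicit none
   integer, intent(in) :: nrecv, nsend, bufsize
   integer, intent(in) :: recv_ranks(nrecv), send_ranks(nsend)
   real(8), intent(inout) :: recv_bufs(bufsize, nrecv)
   real(8), intent(in)    :: send_bufs(bufsize, nsend)
   integer :: recv_reqs(nrecv), send_reqs(nsend)
   integer :: i, ierr

   ! 1. Post the receives first, so matching sends can complete promptly.
   do i = 1, nrecv
      call MPI_Irecv(recv_bufs(:, i), bufsize, MPI_DOUBLE_PRECISION, &
                     recv_ranks(i), 0, MPI_COMM_WORLD, recv_reqs(i), ierr)
   end do

   ! 2. Post the sends as soon as the data is available.
   do i = 1, nsend
      call MPI_Isend(send_bufs(:, i), bufsize, MPI_DOUBLE_PRECISION, &
                     send_ranks(i), 0, MPI_COMM_WORLD, send_reqs(i), ierr)
   end do

   ! 3. Do the purely local (on-processor) grid-to-grid sharing while the
   !    messages are in flight.
   ! call DoLocalOverlaps()

   ! 4. Wait for the receives, use the received data, and only then make
   !    sure the sends have completed.
   call MPI_Waitall(nrecv, recv_reqs, MPI_STATUSES_IGNORE, ierr)
   ! ... apply the received overlap data to the local grids ...
   call MPI_Waitall(nsend, send_reqs, MPI_STATUSES_IGNORE, ierr)
end subroutine OverlapCommSketch
}}}

Posting all the receives before any sends also lets the MPI library deliver incoming messages directly into the user buffers rather than into internal unexpected-message buffers, which is one reason the receives are moved as early as possible.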