Scrambler
AstroBEAR 2.0 In Brief
The growing size of scientific simulations can no longer be accommodated simply by increasing the number of nodes in a cluster. Completing larger jobs without increasing the wall time requires a decrease in the workload per processor (i.e., increased parallelism). Unfortunately, increased parallelism often leads to increased communication time. Minimizing the cost of this communication requires efficient parallel algorithms to manage the distributed AMR structure and calculations.
AstroBEAR's strength lies in its distributed tree structure. Many AMR codes replicate the entire AMR tree on each computational node. This approach incurs a heavy communication cost, as every structural change to the tree must be broadcast to all processors. AstroBEAR, on the other hand, keeps only as much of the tree as its local grids need in order to communicate with processors holding nearby grid regions. This reduces both memory usage and communication time, leaving AstroBEAR well positioned to take advantage of low-memory architectures such as BlueGene systems and GPUs.
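To make the contrast concrete, here is a minimal sketch (not AstroBEAR's actual data structures, which are written in Fortran) of the distributed-tree idea: each rank stores the patches it owns plus a small cache of the remote neighbor patches its grids exchange data with, so a tree change is sent only to the handful of ranks that cache the affected patch. Names such as `TreeNode`, `LocalTree`, and `ranks_to_notify` are illustrative assumptions.

```cpp
// Sketch of a per-rank view of a distributed AMR tree (illustrative only).
#include <cstdio>
#include <map>
#include <set>
#include <vector>

struct TreeNode {
    int id;          // global identifier of the grid patch
    int level;       // AMR refinement level
    int owner_rank;  // rank that owns this patch
};

// What one rank holds: its own patches plus cached remote neighbors,
// instead of a full replica of the global tree.
struct LocalTree {
    int my_rank;
    std::vector<TreeNode> local;               // patches this rank computes
    std::map<int, TreeNode> remote_neighbors;  // only patches adjacent to ours

    // When a local patch changes (e.g., is refined), only the ranks that
    // hold neighboring patches need to hear about it -- not every rank.
    std::set<int> ranks_to_notify() const {
        std::set<int> ranks;
        for (const auto& [id, node] : remote_neighbors)
            ranks.insert(node.owner_rank);
        return ranks;
    }
};

int main() {
    LocalTree t{1,
                {{10, 2, 1}, {11, 2, 1}},               // two locally owned patches
                {{12, {12, 2, 0}}, {13, {13, 2, 3}}}};  // cached neighbor patches
    // A refinement event here is sent to 2 ranks, not all N ranks in the job.
    for (int r : t.ranks_to_notify())
        std::printf("notify rank %d of tree change\n", r);
}
```

The memory savings follow the same logic: the per-rank tree footprint scales with the number of neighboring patches rather than with the total patch count, which is what makes low-memory nodes viable.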
Walk Throughs
- Adaptive Mesh Refinement implementation in Scrambler
- How Scrambler manages the distributed tree
- How Scrambler manages the distributed control
- How Scrambler schedules and performs communication
- How Scrambler balances the work load
- How Scrambler handles threading
- Performance issues related to Scrambler
- Stencils
- SuperGrids