HEDLA Jet meeting 12/15/2020 -- Baowei
1. setup and parameters
![]() |
Box: 60x60 mm
radius_wire = 0.25 mm
distance_between_centers = 7.5 mm
MagField_direction = 0, 1, 0
WindMaterial = 27
rhoWind = 1e18 1/cc
velWind = 6e1 km/s
BWind = 1T ! varies, see table
TempWind = 12 eV or 1.39e5 K
dx = 15.625e-4 cm
ratio_sizeWire_dx = 32
runTime: 426 ns, or ~0.4 domain crossing times, or 3.4 wire-distance crossing times
imomentumProtect = 1
Boundary run for wires:
lBoundary = .true. ! Use a boundary (instead of a high-density barrier)
DiffusionFactor = 1 ! Factor to multiply velocity and finest-level dx by for determining the diff_alpha2 parameter; shouldn't be larger than 1
MagneticDiffusionLength = 0.3 !cm
MagneticDiffusionLengthWire = 0.0 !cm
Clump run for wires:
lBoundary = .false. ! Use a high-density barrier instead of a boundary
DiffusionFactor = 1 ! Factor to multiply velocity and finest-level dx by for determining the diff_alpha2 parameter; shouldn't be larger than 1
MagneticDiffusionLength = 0.3 !cm
MagneticDiffusionLengthWire = 0.0 !cm
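As a quick sanity check (my arithmetic, not part of the original notes), the quoted run time can be compared against the quoted crossing times:

```python
# Sanity-check the quoted crossing times for this setup:
# 60 mm box, 7.5 mm wire spacing, 60 km/s wind, 426 ns run time.
box = 6.0          # cm (60 mm)
d_wire = 0.75      # cm (7.5 mm)
v_wind = 6.0e6     # cm/s (60 km/s)
t_run = 426e-9     # s

t_box = box / v_wind       # domain crossing time (1 microsecond)
t_wire = d_wire / v_wind   # wire-distance crossing time (125 ns)

print(round(t_run / t_box, 2))    # ~0.43 domain crossings
print(round(t_run / t_wire, 2))   # ~3.41 wire-distance crossings
```

Both ratios match the "~0.4 domain crossing" and "3.4 wire distance crossing" figures above.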
2. Results: Click for movies
runs | 298 ns | 426 ns |
setup 2, boundary | ![]() | ![]() |
setup 2, clump | ![]() | ![]() |
Note
Each picture shows lineouts at the three locations shown here. Each plot shows:
- ram + thermal pressure
- magnetic pressure
- total pressure
- density
- magnetic field
- velocity (in x)
- Temperature
The values are scaled as follows:
Density | 1e17 cm-3 |
Pressure | Mbar |
B | Tesla |
T | eV |
v | 10 km/s |
Note that the time range of the middle panel varies between runs!
runs | 298 ns | 426 ns |
setup 2, line-outs, boundary | ![]() | ![]() |
setup 2, line-outs clump | ![]() | ![]() |
More movies
boundary run | rhoScaled;Zoomed rhoScaled;Temp; mach;magPressure; |
clump | rhoScaled;Zoomed rhoScaled;Temp; mach;magPressure; |
HEDLA Jet meeting 11/20/2020
![]() | summary |
![]() | frame12; setup 2 By=1 boundary-run;setup 2 By=1 clump-run; |
1. Setup 1
![]() | setup1 |
Box: 160x160 mm
radius_wire = 2 mm
distance_between_centers = 9 mm
MagField_direction = 0, 0, -1
rhoWire = 2.86e19 g/cc
tempWire = 1.39e3 K
WindMaterial = 27
rhoWind = 2.86e+17 1/cc
velWind = 6e1 km/s
BWind ! varies, see table
TempWind = 12 eV or 1.39e5 K
rhoAmb = 2.86e+17 1/cc
tempAmb = 1.39e5 K
velAmb = 6e1 km/s
dx: 0.3125 mm
runTime: 4.92 ms, or 1.8 domain crossing times, or 32.8 wire-distance crossing times
imomentumProtect = 1
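A cross-check I added (not in the original notes): the plasma-beta values in the run tables below follow from the ratio of thermal to magnetic pressure for the quoted wind density and temperature.

```python
# beta = n*k_B*T / (B^2 / (2*mu0)), in SI units, for the quoted wind:
# n = 2.86e17 cm^-3, T = 12 eV.
mu0 = 4.0e-7 * 3.141592653589793   # vacuum permeability, T*m/A
eV = 1.602176634e-19               # joules per eV

n = 2.86e17 * 1.0e6                # cm^-3 -> m^-3
p_th = n * 12.0 * eV               # thermal pressure, Pa

for B in (0.1, 1.0, 5.0):          # field strengths from the tables, tesla
    p_mag = B**2 / (2.0 * mu0)     # magnetic pressure, Pa
    print(B, round(p_th / p_mag, 3))
```

This reproduces the tabulated beta = 138, 1.38, and 0.055.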
Runs | Results | diffusion parameter |
hydro, no cooling | rho;Temp;mach; | 2 |
hydro, Al cooling | rho;Temp;Mach; | 2 |
Bz=0.1T, beta=138, no cooling | rho; Temp; Mach; mag pressure; | 2 |
Bz=0.1T, beta=138, Al cooling | rho; Temp; Mach; mag pressure; | 2 |
Bz=1T, beta=1.38, no cooling | rho; Temp; Mach; mag pressure; | 1 |
Bz=1T, beta=1.38, Al cooling | rho; Temp; Mach; mag pressure; | 1 |
Bz=5T, beta=0.055, no cooling | rho; Temp; Mach; mag pressure; | 1 |
Bz=5T, beta=0.055, Al cooling | rho; Temp; Mach; mag pressure; | 1 |
2. Setup 2
![]() | setup2 |
Box: 20x20 mm
radius_wire = 0.25 mm
distance_between_centers = 7.5 mm
MagField_direction = 0, -1, 0
rhoWire = 2.86e19 g/cc
WindMaterial = 27
rhoWind = 2.86E+17 1/cc
velWind = 6e1 km/s
BWind ! varies, see table
TempWind = 12 eV or 1.39e5 K
dx: 0.03125 mm
runTime: ~90 ns (0.924 ms), or ~0.4 (3.6) domain crossing times, or ~1 (9.6) wire-distance crossing times
imomentumProtect = 1
Runs | Results | diffusion parameter |
hydro, no cooling | rho;Temp;mach; | 2 |
hydro, Al cooling | rho;Temp;Mach; | 2 |
By=0.1T, beta=138, no cooling | rho; Temp; Mach; mag pressure; | 1 |
By=0.1T, beta=138, Al cooling | rho; Temp; Mach; mag pressure; | 1 |
By=1T, beta=1.38, no cooling | rho; Temp; Mach; mag pressure; | 1 |
By=1T, beta=1.38, Al cooling | rho; Temp; Mach; mag pressure; | 1 |
By=5T, beta=0.055, no cooling | rho; Temp; Mach; mag pressure; | 1 |
By=5T, beta=0.055, Al cooling | rho; Temp; Mach; mag pressure; | 1 |
3. Setup 3
![]() | setup3 |
Box: 20x20 mm
radius_wire = 0.25 mm
distance_between_centers = 3.0 mm
MagField_direction = 0, 0, -1
rhoWire = 2.86e19 g/cc
WindMaterial = 27
rhoWind = 2.86E+17 1/cc
velWind = 6e6 cm/s
BWind ! varies, see table
TempWind = 12 eV or 1.39e5 K
rhoAmb = 2.86e+17 1/cc
tempAmb = 1.39e5 K
velAmb = 6e1 km/s
dx: 0.3125 mm
runTime: 0.924 ms, or 3.6 domain crossing times, or 9.6 wire-distance crossing times
imomentumProtect = 1
Runs | Results | diffusion parameter |
hydro, no cooling | rho;Temp;mach; | 1 |
hydro, Al cooling | rho;Temp;Mach; | 1 |
Bz=0.1T, beta=138, no cooling | rho; Temp; Mach; mag pressure; | 1 |
Bz=0.1T, beta=138, Al cooling | rho; Temp; Mach; mag pressure; | 1 |
Bz=1T, beta=1.38, no cooling | rho; Temp; Mach; mag pressure; | 1 |
Bz=1T, beta=1.38, Al cooling | rho; Temp; Mach; mag pressure; | 1 |
Bz=5T, beta=0.055, no cooling | rho; Temp; Mach; mag pressure; | 1 |
Bz=5T, beta=0.055, Al cooling | rho; Temp; Mach; mag pressure; | 1 |
sample data files | global.data;physics.data; solver.data; scales.data; problem.data |
Documents from Danny
In the table, each column represents a different simulation. The parameters in dark blue are dimensions and simulation setup instructions. The parameters in light blue are experimentally measured parameters for the plasma and obstacle. The rest of the parameters (white) are calculated using formulas from refs 1 and 2. There is a separate PowerPoint with the setup images, which shows the layout of the obstacles and the direction of the magnetic field; the relevant figure is referenced in cell 5.

The first two simulations we are interested in are a comparison of the same initial setup with and without radiative cooling. In our experiments we observe that the plasma temperature does not drop much below 12 eV. However, a calculation of the cooling time suggests that within our experimental time frame we should see cooling. This could be due to a heating mechanism, such as ohmic heating. We are therefore interested to know whether a simulation with a realistic cooling time, or one with no radiative cooling at all, better resembles our data. The cooling time I have suggested for simulation 1 is taken from the Al cooling curves presented in ref 3, using the ni, Te and Z-bar values shown in the table. For simulation 2 I have suggested the same experiment with no radiative cooling. The experimental setup is one that we have used several times and have very good data for. Once we see the results from these two simulations we will have other suggestions, but we would like to understand the role of radiative cooling in the simulations first.
Simulations requests for AstroBEAR | ||
Simulations requests for AstroBEAR | Figures for AstroBEAR simulations | Notes on simulation suggestions for AstroBEAR |
Meeting update
- Frontera Pathway proposal due this week. Working on weak scaling test.
- MHD outflow clumps: Fixed the high-density tail issue. Needs suggestion/comments for the MHD runs.
Meeting update 06/22/2020
MHDCollidingFlows runs with the new analytic cooling and the ambient tracer. Need a better way to handle the parameters Temp0 and the tracer in the cooling code, though. Also need to double-check the scales.
    A_alpha = alpha
    !cs = sqrt(gamma*Boltzmann*TempScale/Xmu/muH/amu)
    !power = .5d0*(1d0-2d0*beta)
    !A_alpha = alpha*4.76e-20*& ! (ergs*cm^3/s/K^.5)
    !          (3d0/16d0*Xmu*muH*amu/Boltzmann*(cs*vx)**2)**(power)
    A_beta = beta
Current version of the code -- need a better way of handling Temp0 and the tracer!

    FUNCTION AnalyticCoolingStrength2(q, Temp)
      REAL(KIND=qPREC) :: AnalyticCoolingStrength2
      REAL(KIND=qPrec) :: q(:)
      ! Local declarations
      REAL(KIND=qPrec) :: Temp, Temp0, T0
      Temp0 = 100000 ! 10^5 Kelvin
      T0 = Temp/Temp0
      AnalyticCoolingStrength2 = q(1)**2 * A_alpha*T0**A_beta*ScaleCool
      ! only for ambient tracer > 1/3, which is the scaled value of the initial ambient density
      if( q(9) <= 1d0/3d0 ) AnalyticCoolingStrength2 = 0d0
    END FUNCTION AnalyticCoolingStrength2
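For reference, the same logic as a Python sketch (a hypothetical port, not AstroBEAR source; the default values of `a_alpha`, `a_beta`, and `scale_cool` are placeholders):

```python
# Sketch of the analytic cooling strength above: rho^2 * A_alpha * (T/Temp0)^A_beta,
# gated off wherever the ambient tracer is at or below 1/3.
def analytic_cooling_strength2(q, temp, a_alpha=1.0, a_beta=0.5,
                               temp0=1.0e5, scale_cool=1.0):
    """q[0] is density; q[8] is the ambient tracer (Fortran q(9))."""
    if q[8] <= 1.0 / 3.0:      # no cooling unless the ambient tracer > 1/3
        return 0.0
    t0 = temp / temp0
    return q[0]**2 * a_alpha * t0**a_beta * scale_cool

print(analytic_cooling_strength2([2.0] + [0.0]*7 + [0.5], 1.0e5))  # 4.0
print(analytic_cooling_strength2([2.0] + [0.0]*7 + [0.2], 1.0e5))  # 0.0
```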
Colliding Jets 2.10.2020
See this page for details about:
- Fiducial runs with no cooling
- Al cooling table from Eddie
Updated plans for 100TB NAS storage
Updated plan from the October plans
Three 4-bay NAS DiskStations with twelve 8TB or 10TB hard drives
- Can use one DiskStation and four hard drives for archiving data, then turn it off
- Can connect two or three together as JBOD for saving data with redundancy
- Fast ordering & delivery: can order directly from Newegg (with a p-card or through the purchasing system)
Plan A: ~$3600 for 96TB
3 4-bay NAS DiskStation | 3 X $299.99 |
12 Seagate Desktop SATA 6.0Gb/s 3.5" Internal Hard Disk Drive | 12 X $217.99 |
96TB | $3515.85 |
Plan B: ~$4600 for 120TB
3 4-bay NAS DiskStation | 3 X $299.99 |
12 Seagate IronWolf 10TB NAS Hard Drive | 12 X $304.51 |
120TB | $4554.09 |
Buying storage for archiving/saving data
I. Multiple 8TB external hard drive + USB 3.0 port + (RAID)
pros: 1. relatively cheap and flexible
cons: 1. performance could be horrible 2. not quite reliable
- 56TB
7 x Seagate Expansion 8TB Desktop External Hard Drive | https://www.amazon.com/Seagate-Expansion-Desktop-External-STEB8000100/dp/B01HAPGEIE/ref=sr_1_2?s=electronics&rps=1&ie=UTF8&qid=1540229650&sr=1-2&keywords=16tb+external+hard+drive&refinements=p_n_feature_two_browse-bin%3A5446816011%2Cp_85%3A2470955011&dpID=41mDnJ8-plL&preST=_SY300_QL70_&dpSrc=srch | 7X150=$1050 |
1 x Sabrent 60W 7-Port USB 3.0 Hub | https://www.amazon.com/Sabrent-Charging-Individual-Switches-HB-B7C3/dp/B0797NWDCB/ref=sr_1_8?rps=1&ie=UTF8&qid=1540316819&sr=8-8&keywords=7+port+hub+usb3&refinements=p_85%3A2470955011 | $40 |
Total | 56TB or 48TB with RAID redundancy | $1090 |
II. Network-attached Storage (NAS) —QNAP
pros: high performance and stable
cons: cost
- 40TB
QNAP TS-431P2-1G-US Diskless System Network Storage | https://www.newegg.com/Product/Product.aspx?Item=N82E16822107986&ignorebbr=1 | $330 |
4 x 10TB Seagate SkyHawk Surveillance Hard Drive | https://www.newegg.com/Product/Product.aspx?Item=N82E16822178999 | 4x320=$1280 |
Total | 40TB | $1610 |
- 60TB
QNAP TS-669L-US Diskless System High-performance 6-bay NAS Server for SMBs | https://www.newegg.com/Product/Product.aspx?Item=9SIA0AJ2U04041 | $1000 |
6 x 10TB Seagate SkyHawk Surveillance Hard Drive | https://www.newegg.com/Product/Product.aspx?Item=N82E16822178999 | 6*320=$1920 |
Total | 60TB | $2920 |
- 100TB
QNAP REXP-1000-PRO SAS/SATA/SSD RAID Expansion Enclosure for Turbo NAS | https://www.newegg.com/Product/Product.aspx?Item=9SIA0ZX7MN0982 | $1250 |
10 x 10TB Seagate SkyHawk Surveillance Hard Drive | https://www.newegg.com/Product/Product.aspx?Item=N82E16822178999 | 10*320=$3200 |
Total | 100TB | $4450 |
- 120TB
QNAP High Performance 12 bay (8+4) NAS/iSCSI IP-SAN. Intel Skylake Core i3-6100 3.7 GHz Dual core, 8GB RAM, 10G-ready | https://www.newegg.com/Product/Product.aspx?Item=9SIA25V4S75250 | $2000 |
12 x 10TB Seagate SkyHawk Surveillance Hard Drive | https://www.newegg.com/Product/Product.aspx?Item=N82E16822178999 | 12*320=$3840 |
Total | 120TB | $5840 |
III. Cloud space: Amazon Glacier
pros: 1. charged every month
cons: 1. hard to predict total fees
$48 per TB per year + Other fees(retrieval,request,Data transfer etc)
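For comparison (my arithmetic, using the prices quoted on this page): per-TB cost of the two NAS plans above versus Glacier's recurring charge.

```python
# One-time NAS cost per TB, and how many years of Glacier ($48/TB/yr)
# the same money would buy at the same capacity.
plans = {
    "Plan A, 96 TB": (3515.85, 96),
    "Plan B, 120 TB": (4554.09, 120),
}
glacier = 48.0   # USD per TB per year

for name, (price, tb) in plans.items():
    per_tb = price / tb
    years = price / (tb * glacier)
    print(name, round(per_tb, 2), "USD/TB,", round(years, 2), "Glacier-years")
```

Either NAS plan costs roughly what ten months of Glacier would at the same capacity, before counting Glacier's retrieval, request, and transfer fees.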
pnStudy: conical wind with wedge tip --2nd way
Instead of making the wedge tip the same as the ambient as in blog:bliu08012018, I tried Bruce's idea of "at t=0, simply replicate the flow at the edge of the launch surface into the wedge". This seems to work much better than the scheme in blog:bliu08012018.
t=0 | ![]() |
t=400y | ![]() |
Movie up to 400y | movie |
pnStudy: conical wind with wedge tip
Added a wedge tip to test whether it solves the "piston/knots" issue along the y-axis
- The wedge tip is added using a wedge angle and lines tangent to the edge of the original conical-wind launching region (a circle in the test example). The new launching region is the circle plus a wedge, with Info%q set to the initial ambient values in that area. The wedge area is marked in red in the following picture
30 deg vs 15 deg vs 10 deg | ![]() |
- Compare the result with(left panel)/without Wedge tip (wedge angle = 15 deg)
150 yr | ![]() | no wedge tip; 15 deg wedge tip |
- Conclusion: this seems to make things worse.
Meeting update -- 03/06/2018
- JetClump module with 3D MHD compiles and runs
- Tested the post-processing python code for polarization map on the 3D MHD JetClump results.
- Matlab code for plotting polarization map.
- Details can be found on this page
Meeting update -- 02/12/2018
- Laurence Sabin's Visit
- Time
- Projects
The main goal would be using the hydrodynamical model that you already published with Bruce and add the "magnetic component" to fit our observations. It would also be interesting to combine this with Martin's (synthetic) polarization maps to reflect what we are observing with the SMA, CARMA and ALMA.
A second point, that was raised during the PNe conference Adam and I attended last year, is the determination of the minimum field's intensity needed to trigger the shaping. This is a very important and rather unknown aspect: I am working on the measurement of photospheric magnetic fields, on Post-AGBs and PPNe, and so far the (longitudinal) values found are quite low and might not be enough to actually launch any material !!
- Current JetClump Module: 2D MHD (Toroidal magnetic field in Clump and Jet ) runs OK. 3D MHD untested.
Artificial knots for outflow models with spherical nozzles
The following is from Bruce's email. Just putting it here to see if there are any comments/ideas:
Thin knots seem to arise in many outflow models along the y-axis shortly after the launch of a jet. In brief, I’m convinced that the biggest cause of such knots is the shape of the nozzle’s surface (a sphere). A flat or highly conical nozzle will suppress the knots.
The simplest flow is that of a cylindrical jet at the origin moving into an ambient medium of constant density on a Cartesian grid. In principle, such a flow has no way to deviate from a simple cylindrical flow unless shears (at the edge) or kink instabilities develop (they don’t).
Heavy flows: This is obviously the case if the flow density > ambient density. The flow is simply a telephone pole flying through something like a vacuum.
Light flows: If the flow density < ambient density then the flow will interact strongly with the dense medium through which it pushes. Even so, there is no a priori expectation that a dense, thin knot will develop almost immediately along the y axis. But it does: that's what I find in the sims using the present version of AstroBEAR. See the attached figure, where I move the viewing window at the same speed as the head of the flow.
Notes:
- the spatial units in the graph should be multiplied by two if the basic cell size = 500 AU. I had to mess with the scaling factors in VisIt (0.25 instead of 0.5) to get a good display. That is, the basic cell in the figure will have dimensions of 250 AU.
- I used Nlevel=5 in these sims. Changing it by ± 1 has no effect.
The panels show a light flow of density 10^2 and speed 200 km/s moving into a uniform ambient medium of density 10^4. The bottom panel shows the geometry at t=0. You are looking at the nozzle (round) and the (unit) flow vectors that will emerge through its surface at t=0+. The vectors are perfectly vertical. The nozzle's surface isn't a perfect sphere, but that doesn't matter much.
The vectors along the inside edges of the gas displaced by the round jet (the "swept-up, compressed rim") almost immediately start to curve towards the y axis. This is exactly what should happen when the flow strikes the inner edge of the rim of displaced gas obliquely. The flow along the rim starts to converge towards the y axis. This convergence forms an incipient knot in 100 y (the nozzle crossing time). The knot rapidly becomes longer and denser as mass continues to flow into it.
It’s what you will get if you put a squishy ball bearing between the jaws of a closing scissors.
My point is that artificial knots are inevitable using spherical nozzles. The formation of this axial knot can be suppressed if the nozzle were a flat surface or a long, thin cone. The flow from a flat nozzle would displace and sweep up a flat plug (a disc) whose speed decreases as ambient gas is incorporated into it. The only way to completely avoid any axial knot is to introduce a flow with a sharply conical head, like the nose cone of a rocket.
Baowei knows through bitter experience that forming a flat nozzle is difficult in AstroBEAR. It’s even more difficult to make a nozzle shaped like a nose cone. But you might think about it. (Of course, some axial knots might form after a simple jet starts to break up or become unstable and pinch. Such knots are ‘real’, not artificial.)
Of course, no one has any idea what a nozzle looks like on large size scales. Zhou's sims might provide some guidance on this. They look highly conical to me.
This email sounds like its just about details of flow geometries. It’s really more about model outcomes. There’s potentially important science at stake.
Meeting Update --09/11/17
- XSEDE renewal proposal for Binary due on 10/15; working on code multi-threading optimization & scaling tests.
- Current allocation usage
- Wire paper
Meeting Update --08/03/17
- IOPP invoice for the OH231 paper
- XSEDE Resources
- Current resources: about 26,000 node-hours on Stampede2 and 180,000 CPU-hours on Comet (details)
- Stampede2 currently only has KNL co-processors. details on this page
- Users
- has been working with Eric & Jason's students.
- Coding
- Updates to JetClump and pnStudy module: distribution of ambient density #438; total mass and momentum on the grids.
- OpenMP optimization for the Common Envelope module: details on this page
- Wire Turbulence
- rearranged some figures and redid figure 1
- re-wrote introduction and method/model part. added more references
Meeting Update --6/2/17
- Disk Space
- archived MachStems data. Currently on bluehive, /scratch/afrank_lab has 4.4TB available.
- WireTurbulence
- redid Mach plot in blog:bliu05152017
- High res runs with tracers on Bluestreak: hydro 104/200 frames done, mhd 76/200 frames.
- Helped Bo work on his WindTunnel & StarAmbient modules. Some of the test results: wind tunnel; star ambience test1; star ambience test2
Meeting Update --5/15/17
- Disk Space
- received 12TB external hard disks from Erica.
- archiving Planetary Atmosphere data on Bluehive. Will clean ~2.9TB space.
- received several 500GB/1TB hard disks with total size ~5TB for clover from Dave. Will use them for archiving also.
- grassdata/ is mainly occupied by the WT data. So will not change it for now.
Meeting update 05/09/2017
- 1. Poster Print fee/grant account for Dave
- 2. Updated Bluehive space: 39TB at $97 per TB per year: blog:bliu04182017 . New external disk?
- 3. Wire Turbulence poster and paper conclusions: 1) The turbulence generated is mainly solenoidal and follows the -5/3 Kolmogorov law for both hydro and MHD velocities. 2) The driving factor b is ~1/3, as solenoidal turbulence dominates, which makes the Mach number > 1 for both hydro and MHD runs.
Meeting Update --4/18/17
- CIRC poster Session
- Deadline for registering: Next Friday 04/28
- Link for registering https://registration.circ.rochester.edu/postersession
- Will do a poster for Wire Turbulence
- I will register some of our old posters next Monday. If you have a poster or will do a poster and need help, please let me know.
- Bluehive space under afrank name and will be charged (97$ per TB)
Eddie | 4TB |
Zhuo | 5TB |
Luke | 24TB |
afrank_lab | 6TB |
Total | 39TB |
Shall we move Eddie's account to his current group instead?
- Code & Users
- One user's request for the binary mass-transfer code has been put on hold.
- XSEDE Machine usage
- TG-AST160054, expiration date 2017-9-21: Stampede (-3517 out of 50000 SUs, 0% remaining); Comet (43427 out of 50000 SUs, 86% remaining)
- TG-AST120060, expiration date 2017-12-31: Stampede (960802 out of 980222 SUs, 98% remaining); Comet (858187 out of 980222 SUs, 86% remaining)
- Wire Turbulence
- Velocity Spectra — supersonic for solenoidal and subsonic for compressive turbulence
- Redid Mach number vs b
![]() |
- Hydro pressure histogram
frame 199 | ![]() |
On Visit | ![]() |
Meeting Update --3/23/17
- Wire Turbulence
- Schematic Diagram:
- redid the spectra analysis with wave-number range [2 20] compared with [2 40]: the linear-fit slope along the x direction doesn't change much, while the y & z directions go much lower (-1.2). Details.
Meeting Update --2/22/17
- Users
- Proposal for Jason's student?
- Laurence Sabin
- WT
- New figures added in the paper
Mach number Vs b | ![]() |
Wind/Grid tracer ratio PDF with tracers | ![]() |
- Tracers and Gaussian-2 fit for the density PDF: redid the Gaussian-2 fit of the density PDF with tracers. Tried the Gaussian-2 fit on simple test data to understand the fit parameters, mainly how the sigmas of the two-Gaussian fit relate to the sigmas of the individual components. While there is no obvious relation between the sigmas, the sample data & fit clearly show two peaks which match the individual component peaks. This can't be found in the WT data with tracers, so interpreting the Gaussian-2 fit of the WT density PDF as grid & wind material is probably not proper.
redid tracers figure | ![]() |
original figure without tracers | ![]() |
Test Gaussian 2 fit with simple data | ![]() |
Meeting Update --01/26/17
- Contact User of Toronto?
- Book Vista for the new semester?
- Wire Turbulence
- Al Cooling: tried new parameters based on the aluminum table (density and temperature range). 2D runs show that the cooling intensity is too small compared with the post-shock energy, so the cooling is too slow compared with the downstream velocity. Details can be found on this page.
- Tracers and Gaussian2 fit: Added tracers for grid and wind material. Gaussian 1 won't do a good fit for the PDF of either grid or wind material only. Details can be found on this page
- Paper: reading and writing..
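The "cooling too slow compared with the downstream velocity" comparison above can be made concrete as a cooling-length estimate; a sketch with placeholder numbers (none of these are the actual run parameters):

```python
# Cooling length l_cool = v_down * t_cool, with
# t_cool = (3/2) n k_B T / (n^2 Lambda(T)) for optically thin cooling.
k_B = 1.380649e-16    # erg/K

def cooling_length(n, T, Lambda, v_down):
    """n in cm^-3, T in K, Lambda in erg cm^3 s^-1, v_down in cm/s; returns cm."""
    t_cool = 1.5 * n * k_B * T / (n**2 * Lambda)   # seconds
    return v_down * t_cool

# If l_cool comes out much larger than the post-shock region, cooling is
# dynamically unimportant -- the situation described above for the Al table.
print(cooling_length(n=1e18, T=1.4e5, Lambda=1e-23, v_down=6e6), "cm")
```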
Meeting Update --01/06/17
- Visitors and Users
- UCSB visitor next week: volunteers needed
- RIT user: compiling issues with *.cpp.f90 files with GNU Fortran 4.8? Possible problems for our visitor and other users
- Bruce: OH231 module and Hen3–401 paper
- Toronto user requesting the Binary code: NK cooling table.
- XSEDE allocations
- 2M CPU-hours on XSEDE machines (1M on Stampede and 1M on Comet) available for production runs. Current allocations can be found here
- Comparison of the Stampede and Comet machines can be found on this page
- Wire Turbulence
- Gaussian-2 fit for the density PDF: separating wire and wind material using temperature. Details can be found here.
![]() |
- Tracers (to do)
- Al Cooling (testing)
- Paper (working on)
Meeting Update --12/13/16
- Wire Turbulence.
- Merging the Aluminum cooling table into the code
Table
Temperature (T) range 1-100 K, dT 0.1 K; density range in the table in 1/cm3
Current Isothermal run without cooling
Wire temperature: 3.75E-12 K
Wire density: 4.8E26 1/cm3
Wind temperature: 1.5E-8 K
Wind density: 4.8E23 1/cm3
- 2D
- Bruce's OH231 runs
Meeting Update --11/29/16
- 2D Wire Turbulence
- Wind cannot pass the wire: 2D mhd movie
- Metal Cooling
- Eddie is looking for the cooling table & code for Aluminum + Argon.
- 3D Wire Turbulence with Analytic Cooling
- Analytic Cooling Parameters and Cooling Length
- hydro results
no Cooling | ![]() | movie |
Cooling Length 1 a | ![]() | movie |
Cooling Length 0.5 a | movie |
Wire Turbulence
- 3D result with bar grid
- hydro: no turbulence along z direction movie
- mhd: found turbulence along the z direction. version 1 code movie; version 2 code movie;
- deviation of velocity
![]() |
- 2D with bar grid
- memory allocation error for mhd on Bluestreak. Runs OK on Bluehive
- 2D hydro movie; 2D mhd movie
Cooling Test Results for ThermalPulse module
2 or 3 levels of AMR
DMcooling with floorTemp=100K | DMcooling with floorTemp=500K | |
density | ![]() | ![]() |
Temp | ![]() | ![]() |
Velocity | ![]() | ![]() |
Movies | density; temperature; velocity | density; temperature; velocity |
Meeting Update --09/13/16
- XSEDE proposal
- Wire Turbulence
- prepared and uploaded figures to shareLatex
- more detailed pictures on this page.
- ThermalPulse module with high temperature inside the envelope.
- Overshoot expansion-velocity problem with the temperature inside the envelope
frame 1 temp with DMcooling (minTemp 1000K) | ![]() |
overshoot expansion velocity | ![]() |
low temp velocity | ![]() |
- ClumpJet/pnStudy module
- fixed two bugs related to the conical wind nozzle in August.
- waiting for confirmation from Bruce's new runs.
Meeting Update --7/27/16
Wire Turbulence
Meeting Update --7/20/16
- Schedule for the Visitor
- office: 476?
- No hotel shuttle on weekends. Transporting (Zhuo?)
- will send out schedule draft soon
- New external hard disk for archiving (~160USD for 4TB, 250USD for 6TB)
- Wire Turbulence
- Mixture distribution: two types of materials (wire & wind)? — scatter plot: https://astrobear.pas.rochester.edu/trac/wiki/u/bliu/mixtureDistribution
Meeting Update --07/07/16
- Wire Turbulence
- Updates of figures
- Updates of figures
1) square-root (standard?) velocity variance: does the different variance for x & yz come from the x-direction flow?
2) redid all PDF figures with the new area-weighted histogram data from VisIt: still cannot do the time average due to the different x-axis/frequency values between frames
![]()
3) redid the energy with total pressure instead of thermal energy, although the total pressure is very small due to gamma = 1.001. Is it pressure, rather than thermal energy, that matters here?
4) magnetic energy plot: will do
5) physical meaning of the Gaussian-2 fit: a mixture of two types of turbulence, or turbulence with two components?
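On the time-averaging difficulty in item 2) above: the average becomes straightforward if every frame is binned onto the same histogram edges. A sketch of that workflow in numpy (my suggestion, not the existing VisIt script):

```python
import numpy as np

edges = np.linspace(-4.0, 4.0, 65)            # shared bin edges for all frames

def frame_pdf(values, weights):
    """Area-weighted PDF of one frame on the common bins."""
    hist, _ = np.histogram(values, bins=edges, weights=weights, density=True)
    return hist

# Stand-in data: five "frames" of 1000 samples each with unit weights.
rng = np.random.default_rng(0)
frames = [frame_pdf(rng.normal(size=1000), np.ones(1000)) for _ in range(5)]
mean_pdf = np.mean(frames, axis=0)            # now well-defined bin by bin
print(mean_pdf.shape)                         # (64,)
```

With shared edges, the per-frame PDFs live on a common x-axis and can be averaged bin by bin.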
- Post processing Spectra Results
- hydro spectra data: running
- linear fit: will do
Meeting Update --06/23/16
- Wire Turbulence
- Variance:
![]() |
- density PDF
![]() |
From the Gaussian fit of the density PDF, the Mach number can be calculated for different values of the driving parameter b (with 1/3 ≤ b ≤ 1), using the relation of Federrath 2015:
b | 1 | 2/3 | 0.53 | 1/3 |
Mach | 0.53 | 0.80 | 1.0 | 1.60 |
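Mach numbers of this kind follow from the lognormal density-variance relation sigma_s^2 = ln(1 + b^2 M^2), i.e. M = sqrt(exp(sigma_s^2) - 1)/b. A sketch (my reconstruction, inferring the fitted amplitude from the b = 0.53, Mach = 1.0 entry):

```python
import math

# Amplitude sqrt(exp(sigma_s^2) - 1) inferred from the b = 0.53, M = 1.0 row;
# for a fixed fitted width sigma_s, M then scales as 1/b.
amplitude = 0.53

for b in (1.0, 2.0 / 3.0, 0.53, 1.0 / 3.0):
    M = amplitude / b
    print(round(b, 2), round(M, 2))

# Recover sigma_s itself if needed:
sigma_s = math.sqrt(math.log(1.0 + amplitude**2))
```

Note that M must increase as b decreases at fixed fitted width, which is why M > 1 corresponds to the smaller b values.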
- Other Variance and PDF plots, see Figures
- OH231 for Bruce
- Updated code with low density in nozzle area.
- test result which looks good to Bruce.
Current total quotas for afrank group
Total 27.5 TB with 97USD per TB per year or ~2667.5USD per year
BlueHive | BG/Q | |
Eddie | 4TB | 4TB |
Erica | 0TB | 12.5 TB |
Zhuo | 1TB | 0 TB |
afrank_lab | 6TB | 0 TB |
Total | 27.5 TB |
Meeting Update --06/03/16
- Wire Turbulence
- OH231 module for Bruce
- The module simulates a soft-edged clump plowing through ambient gas left by a conical wind. The conical wind needs to be turned off after some time.
- Fixed the cooling issue for conical wind. ticket:445
- Need to set the nozzle area empty after the conical wind is turned off — currently the code stops pumping in material after the CW is off but keeps updating the physical values, and it forms a bubble. The clump could start from the nozzle area. Forcing the density & velocity in the nozzle to be low causes the code to choke. Haven't figured out a good way to do it.
round-nozzle of conical wind | ![]() |
current result with conical wind turned off | ![]() |
movie | bubble |
Meeting Update 05/11/2016
- Wire Turbulence
- fixed a bug in the script extracting the spectra data, which changes the Box 1 results. The updated results for frame 195 can be found in blog:bliu05032016 . The x-y axes are both in log scale, except for Box 1, which has a different x-range to show the delta function.
- Results for other frames
Box 1 | Box 2 | Box 3 | |
frame 1 | ![]() | ![]() | ![]() |
frame 3 | ![]() | ![]() | ![]() |
frame 7 | ![]() | ![]() | ![]() |
short-range Box 5; short-range Box 9; short-range Box 10
- Debugging code for Bruce's module (#445)
Wire Turbulence Spectra-MHD
Implemented the Spectra object with 10 boxes/windows (each of size Ly by Lz by 1/10 Lx; the wire is around the center of box 2) in the wireTurbulence module. Worked on scripts to extract the data and make plots. Here are the test results at frame 195 of the MHD run for each box:
box | spectra | zoomed-in |
1 | ![]() | ![]() |
2 | ![]() | ![]() |
3 | ![]() | ![]() |
4 | ![]() | ![]() |
5 | ![]() | ![]() |
6 | ![]() | ![]() |
7 | ![]() | ![]() |
8 | ![]() | ![]() |
9 | ![]() | ![]() |
10 | ![]() | ![]() |
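The per-box spectrum computation can be sketched as follows (my numpy reconstruction, not the AstroBEAR Spectra object itself): FFT one component of the field in a sub-box and bin the power into shells of wavenumber magnitude.

```python
import numpy as np

def shell_spectrum(field):
    """Shell-summed power spectrum of a cubic 3D array; returns E[k] for integer k."""
    n = field.shape[0]
    fk = np.fft.fftn(field) / field.size          # normalized so power sums to mean(field^2)
    power = np.abs(fk) ** 2
    k1d = np.fft.fftfreq(n, d=1.0 / n)            # integer wavenumbers 0..n/2, -n/2..-1
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2).round().astype(int)
    return np.bincount(kmag.ravel(), weights=power.ravel())

rng = np.random.default_rng(1)
data = rng.standard_normal((16, 16, 16))          # stand-in for one velocity component
E = shell_spectrum(data)
print(E.shape)                                    # one entry per k shell
```

By Parseval's theorem, E sums to the mean square of the field, which is a useful check on the extraction script.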
Meeting Update 04/19/2016
Wire Turbulence
- Analyzing data for hydro and MHD (up to 197 frames) runs.
- Haven't finished with the MHD data yet. Current results of log(density) along the middle section, and related plots, can be found on this page. Will do the remaining plots.
- different pattern in the density pseudo color plots along mid-y and mid-z section for MHD.
- both the hydro and MHD velocity plots show a strange data point in the center. Will check whether it comes from the analysis method or the chombo data
- Density pseudo color plots show MHD run crashed at frame 196. So the restart will need to go from frame 195..
3D volume rendering movies for "Hot Planet Winds Near a Star"
High Res Movies
Versions | mov | avi | mp4 | mpeg |
fixed | fixed mov | fixed avi | fixed mp4 | fixed mpeg |
rotate | rot mov | rot avi | rot mp4 | rot mpeg |
Flow Texture
Old Versions
Versions | gif | mov | avi | mp4 | mpeg |
1 | fixed | fixed mov | fixed avi | fixed mp4 | fixed mpeg |
2 | rotate | rot mov | rot avi | rot mp4 | rot mpeg |
3 | rotate | rot mov | rot avi | rot mp4 | rot mpeg |
Make high-res 3D volume-rendering images using VisIt 2.8.2 on Bluehive
Jonathan installed VisIt 2.8.2 on Bluehive with SLIVR (a GPU-supported volume-rendering library, http://www.visitusers.org/index.php?title=Volume_Rendering#SLIVR ). This makes 3D volume rendering a lot faster and fancier: you can use 2D transfer functions, and you can manipulate the pictures in real time since rendering is GPU-accelerated. Here's a short introductory movie.
![]() |
![]() |
To set the limits (maximum and minimum values) of the variable and to change the opacity, you have to switch to the 1D transfer function, as shown in the following two images:
![]() |
![]() |
Here are some AstroBEAR results:
![]() |
![]() |
Meeting update
* New problem module for OH231
- Launch the conical wind first for some time, then launch the clump with the results of the 1st step as background.
- compiled and ran. Some minor problems to be fixed.
- testing results.
* Rotating problem module for M2-9
- trying to reproduce the results in the Garcia-Arredondo 2004 paper
- latest updates
Planetary Wind and Mass Loss Rate for HD209458b
1. AstroBEAR code and Set-up
In this study we use the AstroBEAR code (Cunningham et al. 2009) to perform 3D hydrodynamic and magnetohydrodynamic numerical simulations and model the "Hot Jupiter" HD209458b (Ballester et al. 2007). AstroBEAR is a fully parallelized AMR MHD multi-physics code which currently includes modules for the treatment of self-gravity, ionization dynamics, chemistry, heat conduction, viscosity, resistivity, and radiation transport via flux-limited diffusion. For our simulations we use a polytropic equation of state.
In this part we only focus on the planetary wind (hydrodynamic) for HD209458b without considering the star and stellar wind. We present the simulation results of planetary wind launching using the AstroBEAR code and calculate the mass loss rate of the planet using the density and velocity from the simulation data.
2. Parameters and Initial Conditions
The mass of HD209458b is given in units of the Jupiter mass (Wang & Ford 2011), and its radius in units of the Jupiter radius (Southworth 2010). We use a fixed value for the temperature of the planet. The resulting escape parameter measures the strength of the planetary wind; for this value a Parker-type, thermally driven hydrodynamic wind is expected (for comparison, the Sun with its corona also falls in this regime).

We choose an initial density for the planet atmosphere, and for the initial temperature we use two set-ups: 1) set the outer boundary of the planet to a uniform temperature (no temperature profile; spherically launched wind), and 2) set the outer boundary of the planet to an azimuthally varying temperature that peaks at the sub-solar point (with temperature profile). For the 2nd case we use an initial set-up similar to that of Stone & Proga (2009). The parameters we use for HD209458b are summarized in Table 1.
Table 1. Parameters for HD209458b
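As a concrete sketch of how the escape parameter is evaluated (the cgs constants are standard; the planet numbers passed in below are illustrative placeholders, not the actual Table 1 entries):

```python
# Standard cgs constants
G   = 6.674e-8    # gravitational constant [cm^3 g^-1 s^-2]
k_B = 1.381e-16   # Boltzmann constant [erg/K]
m_H = 1.673e-24   # hydrogen mass [g]
M_J = 1.898e30    # Jupiter mass [g]
R_J = 7.149e9     # Jupiter radius [cm]

def escape_parameter(M_p, R_p, T_p):
    """lambda = G*M_p*m_H / (k_B*T_p*R_p): the ratio of gravitational
    binding energy to thermal energy at the planet's surface."""
    return G * M_p * m_H / (k_B * T_p * R_p)

# Illustrative HD209458b-like inputs (placeholders, not the Table 1 values):
lam = escape_parameter(0.7 * M_J, 1.4 * R_J, 1.0e4)
```

A lambda of order ten, as this kind of input gives, is the regime in which a thermally driven wind is launched.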
3. Resolutions
In our simulations the planet is treated as an internal boundary and its physical quantities are held fixed during the simulation. Our computational domain is a cube with a fixed base-grid resolution plus several levels of AMR, which brings the finest resolution up to a set number of zones per planet radius.
4. Planetary Wind Results and Mass Loss Rate
In Figure 1, we show the 3D simulation results for both the without-temperature-profile and with-temperature-profile cases. For the without-temperature-profile case (top panels in Fig. 1), the planet temperature launches a spherical thermal wind and the Mach=1 contour is approximately spherical (a circle in the 2D cross section). For the with-temperature-profile case (bottom panels in Fig. 1), there is flow from the dayside to the nightside and the Mach=1 contour shows a weak shock between the two sides.
in color | ![]() |
in gray | ![]() |
Fig. 1 Steady-state planetary wind solution, cross section in the xy-plane, for simulations without (top) and with (bottom) the temperature profile. Flow and density are shown on the left and thermal structure and M=1 contours on the right. The small circle at the center shows the radius of the planet.
The mass loss rate can be calculated by integrating the outgoing mass flux rho*v over a closed surface surrounding the planet. With the planet temperature T_p we can also solve the problem analytically in 1D with Parker's wind solution (Parker 1958), which gives its own mass loss rate. The estimated mass loss rates for HD209458b can be found in Table 2.
Methods | Mass Loss Rate |
3D Simulation Without Temperature Profile | |
3D Simulation With Temperature Profile | |
Analytic Parker wind Solution |
Table 2. Estimated Mass Loss Rate for HD209458b
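The 1D analytic entry in Table 2 comes from solving Parker's transcendental wind equation; below is a minimal bisection sketch (assuming an isothermal wind; `parker_mach` and the bracketing choices are mine, not code from the paper or AstroBEAR):

```python
import math

def parker_mach(x):
    """Mach number M of Parker's isothermal wind at radius x = r/r_c,
    where r_c = G*M_star/(2*c_s^2) is the critical (sonic) radius.
    Solves the transonic branch of
        M^2 - ln(M^2) = 4*ln(x) + 4/x - 3
    by bisection: subsonic inside r_c, supersonic outside."""
    f = lambda M: M * M - math.log(M * M) - 4.0 * math.log(x) - 4.0 / x + 3.0
    lo, hi = (1e-8, 1.0) if x < 1.0 else (1.0, 100.0)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        # f decreases with M on the subsonic branch and increases on the
        # supersonic branch, so the bracket update flips between them
        if (f(mid) > 0.0) == (x < 1.0):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The mass loss rate then follows from mass conservation, Mdot = 4*pi*r^2*rho(r)*v(r), evaluated at any radius.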
5. References
Ballester, G. E., Sing, D. K., & Herbert, F. 2007, Nature, 445, 511
Cunningham, A. J., Frank, A., Varniere, P., Mitran, S., & Jones, T. W. 2009, ApJS, 182, 519
Southworth, J. 2010, MNRAS, 408, 1689
Wang, J. & Ford, E. B. 2011, MNRAS, 418, 1822
Parker, E. N. 1958, ApJ, 128, 664
HD209458b: PlanetaryWind and Sonic Surface
When trying to produce a high-res picture of the HD planetary wind with a larger planet, I found that the sonic surface looks different for L=60 and L=40, as shown in blog:bliu10192015.
I was initially inclined to think L=60 is more correct(?), as
- The sonic surface for L=60 looks more like the 2D PW simulations I did before (compare with the 2D sonic surface).
- Early frames of L=40 have a similar sonic surface, as shown in this picture
Suspecting this is resolution related, I did several restarts with higher AMR levels. Here are some of the results
L=40 4-AMR outside the planet | ![]() |
L=40 5-AMR outside the planet | ![]() |
Will try higher res for L=60 to see if the results become consistent.
Higher-res results for L=60 show a similar sonic surface to those of L=40. So we conclude the higher-resolution sonic surface is more correct, although it looks slightly different from our 2.5D results
L=40 4-AMR outside the planet | ![]() | movie |
M2-9: 3D pnStudy with rotating/spinning conical wind
- Test the 3D pnStudy module by adding a rotational velocity to the conical wind:
Non-rotate | ![]() | 4AMR non Rotate movie; 4AMR non Rotate 2D slice movie;
Rotate with 150y period | ![]() | 4AMR Rotate movie; 4AMR Rotate 2D slice movie;
- The data file for this run:
tamb       = 1d3   ! ambient temp, 1cu = 0.1K (100K=1000cu)
namb       = 4e4   ! ambient central density cm^-3. Usually 400 for 1/r^2 or torus.
stratified = f     ! true = add a 1/r^2 background 'AGB stellar wind'
torus      = f     ! true - add torus to the background
torusalpha = 0.7   ! alpha and beta specify the geometry
torusbeta  = 10d0  ! see Frank & Mellema, 1994ApJ...430..800F
rings      = f     ! true - add radial density modulations to AGB wind
!
! FLOW DESCRIPTION SECTION, values apply at origin at t=0
outflowType = 2     ! TYPE OF FLOW 1 cyl jet, 2 conical wind, 3 is clump
njet        = 4d2   ! flow density at launch zone, 1cu = 1cm^-3
Rjet        = 2d0   ! flow radius at launch zone, 1cu = 500AU (outflowType=1 only)
vjet        = 2e7   ! flow velocity, 1cu = cm/s (100km/s=1e7cu)
tjet        = 1d3   ! flow temp, 1cu = 0.1K (100K=1000cu)
tt          = 0.0d0 ! flow accel time, 1cu = 8250y (0.02 = 165y)
open_angle  = 90d0  ! conical flow open angle (deg)
tf          = 15d0  ! conical flow Gaussian taper (deg) for njet and vjet; 0 = disable
sigma       = 0d0   ! toroidal.magnetic.energy / kinetic.energy, example 0.6
HD29: planetary wind and mass loss
1. Planetary Wind With Temperature Profile
I have been trying to get a high-res picture of the HD29 planetary wind with a temperature profile. This picture shows a shock from the night side and a back-flow.
frame 40 | ![]() |
frame 48 | ![]() |
The shock disappeared as time went on, although I still need to double-check whether this comes from the restarting problem
frame 52 (2AMR) | ![]() | movie for this run |
frame 76 (3AMR) | ![]() | movie for this run |
2. Estimate of the Mass Loss Rate
- 1). The mass loss rate is estimated by calculating the flux out of the cube's surfaces as shown below:
![]() |
where the size of the box is 0.4x the orbital separation between the star and the planet, or 0.019 AU, and the planet sits at the center of the box.
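A sketch of that surface integral on a uniform grid (NumPy; the array names, cgs units, and the uniform-spacing assumption are mine, not the actual analysis script):

```python
import numpy as np

def mass_loss_rate(rho, vx, vy, vz, dx):
    """Sum the outward mass flux rho*(v . n) over the six faces of the
    cubic box bounding the arrays (rho in g/cm^3, velocities in cm/s,
    dx the uniform cell size in cm); returns Mdot in g/s."""
    dA = dx * dx
    mdot = 0.0
    # x faces: outward normal is -x on the low face, +x on the high face
    mdot += np.sum(-rho[0, :, :] * vx[0, :, :]) * dA
    mdot += np.sum(rho[-1, :, :] * vx[-1, :, :]) * dA
    # y faces
    mdot += np.sum(-rho[:, 0, :] * vy[:, 0, :]) * dA
    mdot += np.sum(rho[:, -1, :] * vy[:, -1, :]) * dA
    # z faces
    mdot += np.sum(-rho[:, :, 0] * vz[:, :, 0]) * dA
    mdot += np.sum(rho[:, :, -1] * vz[:, :, -1]) * dA
    return mdot
```

For a uniform flow passing straight through the box the inflow and outflow faces cancel and Mdot is zero, as expected; only the net outflow launched inside the box contributes.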
- 2). The mass loss rate for the no-temperature-profile case is 3.82E+09 gram per sec, see blog:bliu09222015
- 3). For the with-temperature-profile case, I checked the mass loss rate at different frames
frame | Picture | Mass loss rate |
Frame 48 | ![]() | 1.33E+09 gram per sec |
Frame 70 3AMR | ![]() | 1.55E+09 gram per sec |
Frame 76 3AMR | ![]() | 1.56E+09 gram per sec |
HD209458b: PlanetaryWind Tests
- No co-rotation
With Temp profile old | ![]() | movie | Mass loss Rate: 2.13E+09 gram per sec |
With Temp profile | ![]() | movie | Mass loss Rate: 3.04E+09 gram per sec |
No Temp profile | ![]() | movie | Mass loss Rate: 3.82E+09 gram per sec |
- Very Low Ambient Density
rho_ambient=1e-25 | 5 zones per radius |
- Wind on all boundaries
![]() | 20 zones per radius |
HD209458b: PlanetaryWind
L=1 a | ![]() | 12 zones per radius |
L=0.4 a | ![]() | 12 zones per radius |
L=0.8 a | ![]() | 12 zones per radius |
OutflowWind: planetary wind with cooling
- 1. low-res result
With analytic cooling; still running…
![]() | 8 zones per radius |
- 2. zCooling is not working with the current version of the code. Haven't figured out how to fix it yet…
- 3. Higher Resolution runs
qTolerance = .10,.30,.30,1d30,1d30,1d30,1d30,1d30,1d30
![]() |
OutflowWind: planetary wind only (large window and low-res)
Quantities in C.U.; the standoff distance is infinite as there is no stellar wind. Type III in Matsakos' classification.
![]() | 8 zones per radius movie |
OutflowWind: Co-rot planetary wind
- 3D view
- volume rendering for results in blog:bliu08182015
- Larger box
For the setup in blog:bliu08182015, the radius of the star is big (~78 CU). A larger box would bring the star into the domain, as shown in the following picture.
![]() |
Not sure if this is OK or whether different parameters should be used, since the left and top boundaries have persistInBoundaries. Here's the result (32 zones per radius, and 8 zones per radius for the first 80 frames) for a larger box that still does not include the star:
![]() | L=200 movie |
- Restart Issues with High CFL
- This happens when restarting from multi-core chombo files; chombo files from single-core runs work OK. I suspect there's a bug in the new parallel hdf writing code.
- Currently using an older version of the code with the latest problem module, which works fine.
pnStudy: 3D Results from IAC 08192015
3D data with Nlevel=4 and DM cooling, up to 493 y
- Ran for 2.5 hrs on 160 cores on TeideHPC
- Problem.data
tamb       = 1d3   ! ambient temp, 1cu = 0.1K (100K=1000cu)
namb       = 4e3   ! ambient density in cell above launch surface. 1cu = 1cm^-3
stratified = t     ! true = add a 1/r^2 background 'AGB stellar wind'
torus      = f     ! true - add torus to the background
torusalpha = 0.7   ! alpha and beta specify the geometry
torusbeta  = 10d0  ! see Frank & Mellema, 1994ApJ...430..800F
rings      = f     ! true - add radial density modulations to AGB wind
StraTorus  = 2     ! 1 for Martin's way used before Jan 6th 2015, tracer might not work correctly!!
                   ! 2 for Baowei's way, updated from Martin's code according to Bruce's request
!
! FLOW DESCRIPTION SECTION, values apply at origin at t=0
outflowType = 2     ! TYPE OF FLOW 1 cyl jet, 2 conical wind, 3 is clump
njet        = 4d2   ! flow density at launch zone, 1cu = 1cm^-3
Rjet        = 1d0   ! flow radius at launch zone, 1cu = 500AU (or clump radius)
vjet        = 2e7   ! flow velocity, 1cu = cm/s (100km/s=1e7cu)
tjet        = 1d3   ! flow temp, 1cu = 0.1K (100K=1000cu)
tt          = 0.0d0 ! flow accel time, 1cu = 8250y (0.02 = 165y)
open_angle  = 90d0  ! conical flow open angle (deg) (outflowType=2 cones only)
tf          = 30d0  ! conical flow Gaussian taper (deg) for njet and vjet; 0 = disable
sigma       = 0d0   ! toroidal.magnetic.energy / kinetic.energy, example 0.6
3D from IAC | ![]() | 3D movie |
same run in 2D | ![]() | 2D movie |
OutflowWind: Corot PW results with or without Outflow-only for xhigh and ylow
Planetary-wind-only results in the co-rotating frame, with and without the xhigh and ylow boundaries set to Outflow-only
Outflow_only for xhigh and ylow | ![]() | 2D slice movie;Sonic Surface movie;Vol movie |
No Outflow_only for xhigh and ylow | ![]() | movie |
OutflowWind: Planetary Wind only in Co-rot frame
- Latest high-res results:
Lambda=5 Omega=0.5 | movie 32 zones per radius |
Lambda=5 Omega=1 | movie 32 zones per radius |
Lambda=2.5 Omega=1 | movie 32 zones per radius |
details can be found in the last part of this page
- Found a bug when doing restarts. Haven't figured out how to fix it yet…
OutflowWind: Co-rotating Planetary Wind & Parameters
1. Testing results about Co-rotating frame
![]() | movie |
Details in this page
2. Planetary wind only in Co-rotating frame
Only low-res results so far… Getting a lot of high-CFL restarts for the high-res runs, whether restarting from a low-res frame or starting from the beginning directly.
low-res Movie for Lambda=5.3 Omega=0.5 ;
low-res Movie for Lambda=10.6 Omega=0.5
Details in this page
3. Compare Parameters with Matsakos
check this page — not quite finished yet..
OutflowWind: Parker wind & Paper Figures
1. low density stellar wind
StellarEnvelope%omega = 0d0 ! Assume planet is tidally locked
...
lStellarGravity = false
movie of stellar density 10^-5;
Other densities and details can be found here —part 4.
2. Paper Figures
Will upload the high-res pictures to this page
Meeting update
- OutflowWind Parker solution:
- The result of Parker wind of the star looks stable as shown in the movies of blog:bliu07092015.
- The planet still has problems: the radius is too small / the resolution too low, so the temperature profile cannot be seen. Running with a larger radius.
- OutflowWind: high-res 3D co-rotating
- In the development branch, Jonathan updated the 3D temperature profile in the outflow object by moving the sun/day side to the +z direction instead of +x. But there's currently a problem in the temperature profile, as shown below… Working on debugging it…
![]() |
![]() |
- high-res runs with smaller omega on BG/Q: fixed some compiling bugs on BG/Q. Running the old version of the code while debugging the issue above…
- 3D pnStudy
OutflowWind: Redo stellar wind with new Parker solution
This is to redo the stellar wind Parker solution (as in blog:bliu07012015) with Jonathan's updated Parker wind data (blog:johannjc06252015) and compare with the old one… Other than the initial data, the new run has a larger box (extending beyond the sonic surface), while the boundary of the old one is inside the supersonic region…
density | ![]() | ![]() | |
Vx | ![]() | ![]() |
2D results:
pnStudy: clump bubble
Modified the density for area r<Rjet from
if (outflowType == clump .AND. r2.lt.Rjet) then
   q(i,j,k,1) = namb/nScale + &
                njet/nScale*(1d0-(r2/Rjet)**2) ![cu]
   ...
end if
to
if (outflowType == clump .AND. r2.lt.Rjet) then
   q(i,j,k,1) = njet/nScale
   ...
end if
so that the density inside equals njet, rather than the stratified ambient value…
And the results look like this
time | rho_scaled | |
t=0 | ![]() | |
t= 0.003 C.U. | ![]() |
pnStudy: test different cooling after merging
This is to test the pnStudy module in 3D after merging with the latest development branch. The data files are modified from a 2.5D run (located at /home/balick/tap30/t30AGBn4e2v200namb4e3/). Not sure whether it's physically correct; this just tests that the code can run with different cooling settings in physics.data…
The 2.5D result shows density and temperature, while the 3D results show middle-section slices of density only.
2.5D result from Bruce | ![]() |
3D; no cooling | ![]() |
3D; analytic cooling | ![]() |
3D; DM cooling | ![]() |
3D; II cooling | ![]() |
OutflowWind: Parker solution for stellar wind
Currently the star is not stable
density | ![]() | density line plot movie; |
velocity | ![]() | Vx along x movie |
temperature | temperature movie |
Update 6/29/15 -- Baowei
XSEDE Proposal Renew
- highest priority. Due July 15th.
- Current documents
OutflowWind module
- 3D co-rotating frame: fixed some bugs and latest results— blog:bliu06222015
- Parker wind solution for the star: merged with Jonathan's setup and running a test. Will post results soon
pnStudy
- Merged with current development branch for Eddie's cooling stuff. Compiled. Running test
- will do 3D runs with the new merge version on bluehive and install the new code on Spain machines
OutflowWind:bug fix for planetParticle and pointGravity position
1. bug and fix
The co-rotating-frame code used for the results in blog:bliu06222015 (and earlier) hard-coded the position of the particle and point-gravity objects as (0,0,0). Since the origin in the co-rotating frame should be the center of mass, and the planet currently sits at (200,0,0) (this will be made more sophisticated), this is a bug:
IF (.NOT. lRestart) THEN
   CALL CreateParticle(PlanetParticle)
   PlanetParticle%q(1)   = planet_mass
   PlanetParticle%xloc   = 0
   PlanetParticle%radius = radius
   CALL CreatePointGravityObject(PlanetParticle%PointGravityObj)
   PlanetParticle%lFixed = .true.
   PointGravityObj => PlanetParticle%PointGravityObj
   PointGravityObj%mass = planet_mass
   PointGravityObj%x0   = PlanetParticle%xloc
The fix is to set the PlanetParticle position to Outflow%position —
PlanetParticle%xloc=Outflow%position
2. Low-Res Results after the fix
After the bug fix, the low-res results with omega=0 look promising. The tight-sonic-surface problem in blog:bliu06222015 seems gone, but the gravity/Mach-number plot inside the planet is still not quite right; trying higher resolution.
![]() | ![]() |
![]() | log mach plot |
The plot of log|vx| shows that the mismatched gravity plots for 2D and 3D seem to come from the low resolution of the 3D plot.
3. 3D corotating with omega=0.5
![]() |
OutflowWind: spline soft for Point gravity
This is to try to understand and fix the tight sonic surface found in the 3D co-rotating frame with omega=0, shown in the 3rd part of blog:bliu06112015_2, and the differing line plots for the 3D co-rotating and 2.5D runs shown in the 4th part of the same blog post… In the 2.5D and 3D simulations so far, the problem module hard-coded PLUMMERSOFT and a soft radius of 1 for the point gravity.
SPLINESOFT  = 1, & !g ~ r/r^2 for r < r_soft and then goes to 0 as r -> 0
PLUMMERSOFT = 2    !g ~ (r^2+r_soft^2)^(-3/2) r
1. Code modification: Splinesoft
In the 3D case we used r_soft=1 while in 2.5D we used r_soft=2, so g in 3D is about twice that in 2.5D.
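The factor of two can be checked directly from the Plummer form g ~ r*(r^2 + r_soft^2)^(-3/2) (a quick sketch; evaluating at r = 2 code units as a representative radius near the planet surface is my assumption):

```python
def plummer_g(r, r_soft):
    """Plummer-softened point gravity, g ~ r * (r^2 + r_soft^2)^(-3/2);
    the G*M prefactor cancels in the ratio below."""
    return r * (r * r + r_soft * r_soft) ** -1.5

# 3D runs hard-coded r_soft = 1 while the 2.5D runs used r_soft = 2;
# at r = 2 (an assumed representative radius) the 3D g is about twice larger:
ratio = plummer_g(2.0, 1.0) / plummer_g(2.0, 2.0)
```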
To handle point gravity more properly, we change the code to use SPLINESOFT instead. Here's the new update to the OutflowWind module
outflow_radius      = 0                  ! outflow radius is 0 always
outflow_thickness   = planet_radius
pointgravity_soft   = 1                  ! splinesoft always
pointgravity_r_soft = 0.5*planet_radius
2. Testing Results with Splinesoft
New Results with SplineSoft (low res) | Old Results with PlummerSoft (high res) | |
2.5D no stellar wind | ![]() | ![]() |
2.5D stellar wind | ![]() | ![]() |
3D Co-rotating | ![]() | ![]() |
3D Co-rot and 2D plots compare | ![]() ![]() | ![]() |
OutflowWind: new set up for Co-rotating
small window with new parameters:
Tiny stellar wind density to check the planetary wind close to the planet surface.
2. No rotation omega=0
density omega=0; 20 zones per radius
temperature omega=0; 20 zones per radius
temperature omega=0; lambda=10
- No rotation omega=0 High Res longer
Set stellar wind with very low density and run with high resolution. To check if it's possible to reproduce the planetary wind in 2.5D.
density omega=0; 40-80 zones per radius
Temperature omega=0; 40-80 zones per radius
pressure omega=0; 40-80 zones per radius
- Line plots of density, temperature, and Mach number on the dayside (from the left boundary to the critical radius)
Same run data as 3 above.
3D:
gamma = 1.01
3dpressure = (gamma-1)*(E - 0.5*(px^2+py^2+pz^2)/rho)
vScale = sqrt(vx^2+vy^2+vz^2)
soundSpeed = sqrt(gamma*3dpressure/rho)
mach = vScale/soundSpeed
2D:
gamma = 1.01
2dpressure = (gamma-1)*(E - 0.5*(px^2+py^2)/rho)
soundSpeed = sqrt(gamma*2dpressure/rho)
mach = sqrt(vx^2+vy^2)/soundSpeed
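The 2D and 3D expressions above reduce to the same recipe; a minimal Python version of the post-processing formulas (a sketch, not AstroBEAR code):

```python
import math

def mach_number(E, rho, mom, gamma=1.01):
    """Mach number from conserved variables, per the expressions above:
    E is total energy density, rho mass density, and mom the momentum
    density components (px, py) in 2D or (px, py, pz) in 3D."""
    ke = 0.5 * sum(m * m for m in mom) / rho          # kinetic energy density
    pressure = (gamma - 1.0) * (E - ke)
    sound_speed = math.sqrt(gamma * pressure / rho)
    speed = math.sqrt(sum(m * m for m in mom)) / rho  # |v| = |p| / rho
    return speed / sound_speed
```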
OutflowWind: small wind density
change the density to be
Co-rotating OutflowWind -- tiny rotating
In the latest high-res co-rotating-frame results (blog:bliu06032015), the planetary wind seems unable to expand. This test checks the effect of a tiny co-rotation speed, omega=1d-5. All other parameters are the same as in blog:bliu06032015, namely rho_sw=4d-1…
Co-rotating Outflow Wind
The setwind subroutine in the current co-rotating frame sets up the stellar wind density proportional to 1/r^2, where r is about 200 C.U. for our runs. So to make the stellar wind density the same as in the 2.5D and 3D runs, we have to set it 40000 times larger in these co-rotating runs; otherwise the standoff distance and bow-shock size will be very large, as we saw in our former runs… Using the larger stellar wind density, the Mach 5 run seems promising. (We didn't change the planet density for this run, still 1 g/cc, and also used the larger planet as before.)
Movies
Density movie -- 16 zones per radius
Temperature movie -- 16 zones per radius
Density movie -- 32 zones per radius and twice the stellar wind density
Temperature movie -- 32 zones per radius and twice the stellar wind density
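The 40000x factor quoted above is just the inverse-square falloff evaluated at the planet's orbit; a tiny sketch (the normalization and `r_ref` are placeholders):

```python
def wind_density(rho_ref, r, r_ref=1.0):
    """Stellar wind density falling off as 1/r^2 from the star, with
    rho_ref the density at the (placeholder) reference radius r_ref."""
    return rho_ref * (r_ref / r) ** 2

# At the planet's orbit, r ~ 200 C.U., the density is down by a factor
# 200^2 = 40000, so the launch density must be boosted by that much:
boost = 1.0 / wind_density(1.0, 200.0)
```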
PlanetWind:larger planet radius
Doubled Rp and Mp to keep lambda the same, with a new refinement setup. Here are the results of a 2.5D test run with gamma=1.01, lambda=5.3, Rp=4, Mp=1Mj, mach=5 and a maximum of 3 levels of AMR
Mesh | ![]() | mesh movie;stellar wind mesh movie |
Planet wind | ![]() | zoomed-in movie to check sonic surface;stellar wind movie |
Meeting Update 05/18/2015
- Planetary Wind
- Refinement outside the outflow. Resolution of 16 zones per radius; large window with size 200. movie up to 29 frames; movie with rho_min fixed
- PN results from Bruce.
- XSEDE allocation
- Gordon & Comet almost gone. Stampede ~250,000 or 11.3% left. Details in wiki:ProjectRuns
Ideas for Central Installation of AstroBEAR code
The motivation is to set up a central installation of the code so a user won't need to re-compile the code every time they switch problem modules. — Feel free to put your ideas here…
I. Binary Folder
- Compile all problem modules and generate an executable file for each module.
- Create a bin folder which contains all executable files (can be links)
- Sample Data files folder which contains all the data files for each module
II. AstroBEAR Library
- compile the compute engine as a library
- each problem module we have now, or a user-developed problem module, will link against the library and build
Meeting Update 04/27/2015 -- Baowei
- Planetary Wind
- wiki:u/bliu/PlanetaryWind
- Back flow for gamma=1.01, mach=0.8
- Standoff distance vs. ns
- Eddie's runs
- status of runs
d2.5_M15 | 22/50 |
d4.5_M10 | 46/50 |
d4.5_M15 | 23/50 |
d6.5_M10 | 48/50 |
d6.5_M15 | 23/50 |
- transferring from Gordon to BH. ~5TB total.
- current space usage on local machines — blog:bliu04242015
- XSEDE Allocation
- Current SUs — wiki:ProjectRuns
- New SDSC Machine — wiki:Czarships/ExternalResources/Machines (~47000 computing cores, standard queue: 1700 cores for 48 hours, fast, currently free)
Current Disk Usage
/clover: 11 T
  johannjc 3.0 T
  shuleli  2.5 T
/alfalfa: 5.3 T
  shuleli  1.6 T
  martinhe 1.1 T
  johannjc 422 G
  bliu     262 G
  ehansen  17 G
  ckrau    13 G
/bamboo: 13 T
  madams   3.2 T
  shuleli  2.4 T
  erica    2.2 T
  johannjc 1.1 T
  bliu     973 G
/grass: 5.5 T
  erica    3.4 T
  shuleli  762 G
  johannjc 119 G
Update 04/20/2015 - Baowei
- Planet Wind: high resolution results for different lambda and different ns/np — smaller ns/np makes the planet wind relax faster. https://astrobear.pas.rochester.edu/trac/wiki/u/bliu/PlanetaryWind
Meeting Update 04/13/2015
- OutflowWind rotating frame
- gamma=1.01; Planet is ~0.1 AU away from mass center. Star orbital angular velocity 0.5
- Results: density; temperature
- 2.5D ambient as stellar wind double-check
- gamma=1.01; set ambient density,velocity and temperature same as the stellar wind
- low res results — no turbulence and falling back at the tail: density and temperature; zoomed in
- High res results:3AMR
Meeting Update 04/06/2015
- OutflowWind module
- gamma=5/3 no wind
- Rotate:
1)reproduce corotate binary;
2) Rotate_no_wind; does Rotate work for 2.5D?
3)Weak_wind_along x?;
4) set the wind(stellar) direction along y
Meeting Update 03/31/2015
- OutflowWind
- the blowout found in the 2D run blog:bliu03232015 was caused by a too-short outflow_duration. Results after fixing that:
![]() | movie |
- Worked with users: #438
- Working on 3D anisotropic conductivity solver
Meeting Update 03/23/2015
- OutflowWind: density blowout in 2D. For a 2D run I did last week, the density blows out after some run time…
nDim = 2                  ! number of dimensions for this problem (1-3)
GmX  = 100,400,0          ! Base grid resolution [x,y,z]
MaxLevel = 1 !5           ! Maximum level for this simulation (0 is fixed grid)
LastStaticLevel = -1      ! Use static AMR for levels through LastStaticLevel [-1]
GxBounds = 0d0,-200d0,0d0,100d0,200d0,0.d0 ! Problem boundaries in computational units, format:
- Working with SUNY user and on anisotropic conductivity solver.
Meeting Update 03/09/2015 -- Baowei
- OutflowWind
- latest results for 2D high-res and 3D low-res after the sonic-surface fix are here: blog:bliu02282015_2
- got the 3D high-res data; haven't analyzed it yet.
- Ablative RT
- meet with LLE next week?
- XSEDE Allocation
- expires in June 30 2015
- Current resources: Stampede ~700,000 SUs(32%) left, Gordon 520,000 SUs (74%) left, Trestles 100,000 SUs (100%) left. Details can be found at: wiki:ProjectRuns
OutflowWind: sonic surfaces check
- Fixed the bugs that caused the stellar wind to be subsonic instead of supersonic. Here are the low-res results (for the frames after the stellar wind kicked in)
![]() | movie |
- 2.5D High Res (3 levels AMR) and longer runtime
- 3D Low res run
Meeting Update 02/28/15
Tried to solve the issues of the small standoff radius and turbulence with 2 levels of AMR shown in blog:bliu02232015. Found and fixed bugs in the OutflowWinds problem module. Here are the first-cut results with 2 levels of AMR after the bug fixing; parameters are the same as in the Stone&Proga paper. More results are coming. Will also redo the moving-ambient case…
![]() | movie |
![]() |
OutflowWind: moving ambient and standoff radius
1. Moving Ambient
Check the idea of having the ambient move from the beginning at the same speed as the stellar wind; the stellar wind then kicks in after some time t0, just as in the static-ambient case… The following results are for an ambient density of 1e-4 g/cc with speed 1.638e3 km/s, the stellar wind speed calculated from the Stone&Proga paper. They show that the planet wind couldn't expand from the top…
![]() ![]() | density movie; temperature movie |
2. Standoff Radius
Check the standoff radius for different wind velocities. The ambient has zero initial velocity. The results show a standoff radius of around 7 cu for a subsonic wind and 6 cu for a supersonic wind.
v=8.92e2 km/s, 50X100, 1AMR | ![]() ![]() | density;temperature |
v=1e2 km/s, 25X100, 0AMR | ![]() ![]() | density;temperature |
v=1e0 km/s, 25X100, 0AMR | ![]() ![]() | density;temperature |
v=1e0 km/s, 25X100, 2AMR | ![]() ![]() | density;temperature |
OutflowWind: 3D
Higher interpolation order + H_viscosity; Wind_velocity=8.19e2 km/s
OutflowWind: 2.5D Higher order Interpolation
Linear Interpolation + H_viscosity
With Wind (vel 8.19e2km/s)
Zoomed in Temperature --fixed colorbar
With No Wind
1. symmetric profile
2. asymmetric profile
Meeting Update 02/09/2015
- 2D OutflowWind module: fixed a bug in the wind object concerning the velocity direction when the wind is applied in the +y direction. The following are the new results. The sonic surface looks very different from Stone&Proga's.
Fig 6 reproduced | ![]() ![]() | density;temperature |
Stone&Proga | ![]() |
- Ablative RT growth rate result from Rui:
Re-running the job to a longer time.
OutflowWind: Tests with Wind
Updated the problem module with an lWind flag to turn the stellar wind on/off. The stellar wind is applied after a time t_wind > 0, to allow the planet with the asymmetric temperature profile to relax to a stable state before the wind kicks in. The wind is applied along the -y direction for 2.5D and the -x direction for 3D…
- Stellar Wind Speed: from the parameters given in Stone&Proga, converted into AstroBEAR units, the calculated stellar wind speed is way too high for the code with the current parameters…
- 2.5D results
density;temperature | ||
![]() ![]() | density; temperature;zoomed density;zoomed temperature | |
![]() ![]() | density; temperature | |
Stone & Proga | ![]() |
- Interface growth rate for Ablative RT: regenerated 50-frame txt files for Rui, as 200- and 100-frame files take too long to read in…
- New user: helped Karan set up on a local machine and walked him through using the code.
Meeting Update 01/26/2015 -- Baowei
- OutflowWind: fixed a bug found when running the symmetric profile for different parameter values. Here are the updated results for figures 1, 2 and small/large lambda… Looks better.
Fig1 from AstroBEAR updated | ![]() ![]() | |
Fig 1 in Stone&Proga | ![]() | |
Fig2 from AstroBEAR updated | ![]() | density symmetric lambda=5; temperature symmetric lambda=5; density, asymmetric lambda=5;temperature, asymmetric lambda=5;density symmetric lambda=50; temperature symmetric lambda=50; |
Fig2 Stone&Proga | ![]() |
- Growth rate analysis for AblativeRT: Rui's script takes too long to run on the 3D data. Regenerating a smaller data set for him… Will modify his script to read HDF5 directly…
Meeting Update 01/20/2015 -- Baowei
- OutflowWind
The 2D tests with an asymmetric temperature show the night-side density is higher than in the spherically symmetric case, while in Stone&Proga09 it's lower… Also tried different values (from 0.01 to 1000); the results are similar…
- night side density blog:bliu01132015
- large density;temperature; total energy :
- Update from LLE: still working on checking the growth rate of interface for 3D. Have to copy the data over to their machines as the gdl on alfalfa doesn't work well with their idl code…
- resistivity & viscosity development: Got the modules for astrobear1.0 from Shule. Will take a look…
OutflowWind: 2D Tests
Deprecated the point_mass and planet_radius parameters: point_mass is now planet_mass, the mass of the planet; rho is the atmosphere density and radius is the position of the outflow boundary. Trying to reproduce figure 2 in the Stone&Proga paper…
For this set-up, the night side (theta=PI) is off and doesn't show the inflow, and the density at the night side is higher… Not sure if it's a parameter issue…
0.6 Jupiter Mass
Fig 2 Reproduced | ![]() | symmetric density; symmetric temperature; asymmetric density; asymmetric temperature |
Fig 2 in Stone&Proga09 | ![]() |
In the corresponding Fig 1, the temperature values match the Stone&Proga09 paper.
Fig 1 reproduced | ![]() ![]() |
Fig 1 Stone&Proga09 | ![]() |
OutflowWind: 2.5D Tests
Test the OutflowWind module and make sure the results agree with Stone&Proga 2009.
- Test with no initial temperature profile
In the version of the outflow object in the code I use, some lines involving energy have E=pressure while others have E=gamma7*pressure. I modified them all to E=gamma7*pressure, though this needs double-checking with Jonathan…. Also added an lTempProfile flag to the Outflow object. The following are 2.5D test results for high (10000K) and low (100K) temperature with a point mass of 0.1 Jupiter mass. The ambient temperatures are 100K and 1K correspondingly. The results show the planet material escapes at 10000K but stays bound to the point mass at 100K.
T=10000K | Density Movie;Temperature Movie |
T=100K | Density Movie;Temperature Movie |
- Parameter comparison with Stone&Proga 2009
To compare with the results and parameters used in the Stone&Proga paper, we need to double-check planet_density together with planet_mass, planet_radius, point_mass(?) and the temperatures. The current code specifies the density, mass & radius separately (in the current default data files the density value is ~35% off) and uses the planet mass to calculate the important parameter lambda of Stone&Proga (another important one is the planet density) around the origin. I'm modifying the code to set only the planet mass & radius and then calculate the density, which is probably a better way and hopefully more accurate for comparing with the paper. Not sure if I should include the point mass when calculating the density…
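Computing the density from the mass and radius, as proposed, is just the mean-density formula; a one-line sketch (cgs; whether the point mass should be included is the open question above):

```python
import math

def planet_density(planet_mass, planet_radius):
    """Mean density rho = M / (4/3 * pi * R^3), so that only the planet
    mass and radius need to be specified in the data files."""
    return planet_mass / (4.0 / 3.0 * math.pi * planet_radius ** 3)
```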
3D OutflowWind Module
Fixed bugs in 3D OutflowWind Module. Here's the low-res results (middle section). High-res testing is still running and will be posted soon…
Temperature | ![]() |
Density | ![]() |
Movies: Temperature and Velocity; Density and Velocity; Long Time: Temperature and Velocity; Long Time: Density and Velocity
Globus gridftp for transferring big data
I was using this very convenient tool to transfer ~2TB of data from Gordon at SDSC to the CIRC machines: as simple as dragging files from A to B, without worrying about disconnects. Here are the basic steps:
- Sign up an account on globus.org
- Sign in
- Transfer Files with Endpoints: click "Transfer Files" at the top right, set up the path and endpoints as shown below, pick the files, and use the arrows (blue triangles) to start the transfer.
1) Endpoints for CIRC machines
For CIRC machines, the endpoint is "univofrochester#circ" and you log in with your Bluehive/BlueStreak username and password. The default path is your home directory on Bluehive, but you can also access your bh scratch or bgq scratch. For example, for me: /scratch/bliu17 for bh scratch and /gpfs/fs2/bgqscratch/bliu17 for bgq scratch.
2) Endpoints for XSEDE machines:
xsede#gordon for Gordon at SDSC ( path example: /oasis/scratch/bliu/ for my scratch ) xsede#stampede for Stampede at TACC ( path example:/scratch/01688/bliu for my scratch ) More information can be found here:https://www.xsede.org/data-transfers
3) Endpoint for your own machine:
You can also set up an endpoint for your own laptop. Click "Manage Endpoints" at the top right, then "add Globus Connect Personal" and follow the instructions.
- You will receive an email when the transfer completes or runs into problems.
pnStudy: 45 deg taper and an issue with 2.5D MHD
- Divergence problem with 2.5D MHD in pnStudy. Tried to test Bruce's idea of a 45 deg taper:
Martin has the option to launch collimated flows with a Gaussian taper To run this you configure problem.data with an opening angle=90 (spherical flow) which Martin modulates with a gaussian of the form exp-(latitude/user-specified-gaussian angle)^2. You find that modulation function somewhere. I should think that its form can be changed from exp-((phi)/W)^2, where phi is the latitude angle and W is the user-specified gaussian width called "tf" in problem.data to this: exp-((phi-PA)/W)^2, where PA is the user-specified flow angle PA might as well default to 45 deg for cones and gaussian taper sims (that is, Outflowtype=2).
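The modulation Bruce describes can be sketched directly (a toy version; the function and argument names are illustrative, not the actual variables in Martin's code):

```python
import math

def gaussian_taper(phi_deg, W_deg, PA_deg=0.0):
    """Gaussian modulation exp(-((phi - PA)/W)^2) of the conical flow,
    where phi is the latitude, W the taper width ("tf" in problem.data),
    and PA the proposed flow position angle (PA = 45 deg would center
    the modulation on a 45-degree cone, per the suggestion above)."""
    return math.exp(-(((phi_deg - PA_deg) / W_deg) ** 2))
```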
But found this magnetic-field divergence issue on bluehive:
![]()
Movie
Need more time to confirm the results and dig into why…
- 2.5D Result from alfalfa
![]() | 2.5D Movie |
- Test for the 45 deg taper with 2D runs (taper15n4e2v200namb4e4)
Fixing a bug just found. Will update results soon…
pnStudy: finest refinement with rectangle shape
This is a revisit for the new refinement for pnStudy module (see blog:bliu11182014). This is Bruce's idea:
"My sense is that the wiggles develop only after the rim emerges from the fixed high-res region of the sim at 0 < x < 0.5 and 0 < y < 0.5. To check this out, is there a way to reconfigure the high-res zone so that it is three times thinner and taller? "
Below are the results; the mesh is kept to show that the rectangular refinement works. The results clearly show the difference in the regions along the x and y axes:
With Mesh | No Mesh |
![]() | ![]() |
movie with mesh | movie without mesh |
pnStudy: new refinement
I implemented the new refinement idea we discussed yesterday: refine only the outflow launch region to the highest level and see if it helps with the weird wiggles close to the y-axis. This run has 6 levels of AMR refinement (equivalent to 7 levels, as the box is half the size of the previous runs) on the outflow object only, with a maximum of 3 AMR levels outside the outflow. It also uses Rho as the refinement variable instead of px and py. Just as Bruce said, the resolution doesn't help much.
![]() |
pnStudy: Mesh for Spherical AGB wind with 5-level AMR
Trying to figure out how the grid size affects the wiggles that develop near the x and y axes in Bruce's run with a spherical wind into a spherical AGB environment. From the image below, the AMR seems to work as expected. While the finest refinement is clearly around the spherical wind area, things look better for larger Rjet than for Rjet=0.5.
- Reproduce Bruce's results with Rjet=0.5 (sphAGBn10v200namb4e4)
Zoom-in image | |
![]() | ![]() |
- Rjet=5000AU(or 10 cu) and other parameters are same
t=0 | ![]() |
t=100y | ![]() |
pnStudy: Ambient Temperature close to Jet drops
Trying to figure out why the ambient temperature close to the jet drops in the pnStudy module (blog:bliu11102014). This wasn't seen before (in Keira's runs). Here I tried to reproduce one of Bruce's runs and one of Keira's. In Bruce's run the ambient temperature is 1000K, and the temperature close to the jet drops to 100K around 660y. In Keira's run the ambient temperature is 100K, and the temperature in the same area doesn't drop, as listed in 1 and 2 below.
A. Cooling Floor Temperature
Jonathan helped me figure out that the cooling plays a role here. The floor temperature for the cooling is 100K; when the gas reaches 100K, the cooling shuts off. That's why the drop is not seen in Keira's case. But whenever the ambient temperature is higher than 100K, there is a drop.
B. Misleading comments in problem.data
It was also found that the comments for the temperature parameters in problem.data were misleading. Here's the current problem.data:
```
! BACKGROUND or "AMBIENT" SECTION. Values apply to origin
tamb = 1d3    ! ambient temp, 1cu = 0.1K (100K=1000cu)
...
tjet = 1000d0 ! flow temp, 1cu = 0.1K (100K=1000cu)
```
Both of these should be in Kelvin, and 1 cu = 10K:
```
! BACKGROUND or "AMBIENT" SECTION. Values apply to origin
tamb = 1d3    ! ambient temp in Kelvin (1cu = 10K)
...
tjet = 1000d0 ! flow temp in Kelvin (1cu = 10K)
```
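A quick sanity check of the corrected comment (TempScale = 10, so 1 cu = 10 K, matching the TEMPSCALE entry in Scales.data below); the helper name is mine:

```python
TEMPSCALE = 10.0  # from Scales.data: 1 computational unit = 10 K

def kelvin_to_cu(t_kelvin):
    """Convert a temperature in Kelvin to computational units."""
    return t_kelvin / TEMPSCALE

# tamb = 1d3 Kelvin is 100 cu, not "100K = 1000cu" as the old comment claimed:
print(kelvin_to_cu(1000.0))  # 100.0
```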
C. Results
- Reproduce Keira's run with current code
Main Parameters | Baowei's Run | Keira's Run |
tamb = 1d2; namb = 4e2; outflowType = 1; njet = 1d2; Rjet = 1d0; vjet = 2e7; tjet = 1d1; | ![]() | ![]() |
- Reproduce Bruce's run
Main Parameters | Baowei's Run | Bruce's Run |
tamb = 1d3; namb = 4e4; outflowType = 1; njet = 4d4; Rjet = 2d0; vjet = 2e7; tjet = 1000d0; | ![]() | ![]() |
- Test for different ambient temperatures; all other parameters are the same
Tamb =1d2 | ![]() |
Tamb =2d2 | ![]() |
Tamb =3d2 | ![]() |
Tamb =4d2 | ![]() |
Tamb =5d2 | ![]() |
pnStudy: jet & ambient Temperature
The ambient temperature is set to 1000K initially. As the jet moves, the ambient temperature at the bottom drops to 100K. Results are similar for both the new code (vJet does not depend on Rjet) and the old code (v=vJet/Rjet); the jet velocity in the old code is half that of the new code due to the Rjet dependence.
New code(5AMR) | Old code (3AMR) | |
t=0 | ![]() | ![]() |
t=600y | ![]() | ![]() |
movie | New code: jet & ambient Temperature | Old code: jet & ambient Temperature
pnStudy:Test new velocity setup for jet
Test of setting the jet velocity so that it does not depend on Rjet (blog:bliu11032014):
```fortran
v=vjet/velscale*(/0d0,1d0,0d0/)/fact*timef
! v=vjet/velscale*(/0d0,1d0,0d0/)/Rjet*fact*timef !10 mar 2014
```
New Results (3 levels AMR) | Old Results (3 levels AMR) | |
Rjet=2, vJet=200 | ![]() ![]() | |
Rjet=2, vJet=100 | ![]() ![]() | ![]() ![]() |
No-Rjet_dependence Code:Rjet=2,Vjet=200;
pnStudy: Jet velocity Vs Jet Radius
In the pnStudy, the velocity of the jet is set according to the radius:
```fortran
!======== J E T =========
IF (outflowType == collimated) then
   q(i,j,k,itracer4)=1d0
   fact=1d0                                   !10 mar 2014
   !fact=exp(-(x**2+z**2)/jet_width**2)       ! b 4 10 mar 2014
   qjet(1)=njet/nScale*fact
   v=vjet/velscale*(/0d0,1d0,0d0/)/Rjet*fact*timef  !10 mar 2014
   !v=vjet/velscale*(/0d0,y,0d0/)/Rjet*fact*timef   !b 4 10 mar 2014
   qjet(imom(1:nDim))=v(1:nDim)*qjet(1)*&     ! ramp up velocity
        mom_flow  !5 may 2014, time dependent mom flux requested by bbalick.
   qjet(iE)=qjet(1)*tjet/TempScale*gamma7
```
This means the jet velocity is not vJet (the value given in problem.data) whenever Rjet != 1.
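The effective launch speed implied by the v = vjet/Rjet line can be sketched as a toy check (with timef = fact = 1; the helper name is mine, not part of the module):

```python
def effective_speed_kms(vjet_cgs, rjet_cu):
    """Launch speed in km/s implied by v = vjet/Rjet (timef = fact = 1)."""
    return vjet_cgs / rjet_cu / 1.0e5  # cm/s -> km/s

# vjet = 2e7 cm/s = 200 km/s in problem.data:
speeds = {r: effective_speed_kms(2e7, r) for r in (1.0, 2.0, 4.0)}
print(speeds)  # {1.0: 200.0, 2.0: 100.0, 4.0: 50.0}
```

This reproduces the measured values below: 200 km/s for Rjet=1, 100 km/s for Rjet=2, and 50 km/s for Rjet=4.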
- Rjet=2d0, vJet=2e7 as in problem.data
```
outflowType = 1    ! TYPE OF FLOW 1 cyl jet, 2 conical wind, 3 is clump
njet  = 4d4        ! flow density at launch zone, 1cu = 1cm^-3
Rjet  = 2d0        ! flow radius at launch zone, 1cu = 500AU
vjet  = 2e7        ! flow velocity, 1cu = cm/s (100km/s=1e7cu)
tjet  = 1000d0     ! flow temp, 1cu = 0.1K (100K=1000cu)
tt    = 0.0d0      ! flow accel time, 1cu = 8250y (0.02 = 165y)
open_angle = 00d0  ! conical flow open angle (deg)
tf    = 15d0       ! conical flow Gaussian taper (deg) for njet and vjet; 0 = disable
sigma = 0d0        ! toroidal.magnetic.energy / kinetic.energy, example 0.6
```
Here's the Scales.data file:
```
TIMESCALE = 260347122628.507
LSCALE    = 7.479899800000000E+015
MSCALE    = 2.099937121547526E+026
RSCALE    = 5.017864740000001E-022
VELSCALE  = 28730.4876830661
PSCALE    = 4.141950900000000E-013
NSCALE    = 300.000000000000
BSCALE    = 2.281431350619136E-006
TEMPSCALE = 10.0000000000000
SCALEGRAV = 2.269614763656989E-006
```
The velocity plot shows the jet velocity is 100 km/s instead of 200 km/s. This can also be verified from the distance the jet travels in 660 yrs:
t=0 | t=660 y |
![]() | ![]() |
- Rjet=1d0, vJet=2e7 as in problem.data
```
outflowType = 1    ! TYPE OF FLOW 1 cyl jet, 2 conical wind, 3 is clump
njet  = 4d4        ! flow density at launch zone, 1cu = 1cm^-3
Rjet  = 1d0        ! flow radius at launch zone, 1cu = 500AU
vjet  = 2e7        ! flow velocity, 1cu = cm/s (100km/s=1e7cu)
tjet  = 1000d0     ! flow temp, 1cu = 0.1K (100K=1000cu)
tt    = 0.0d0      ! flow accel time, 1cu = 8250y (0.02 = 165y)
open_angle = 00d0  ! conical flow open angle (deg)
tf    = 15d0       ! conical flow Gaussian taper (deg) for njet and vjet; 0 = disable
sigma = 0d0        ! toroidal.magnetic.energy / kinetic.energy, example 0.6
```
Scales.data is the same.
The velocity plot shows the jet velocity is 200 km/s, as expected. This can also be verified from the distance the jet travels in 660 yrs:
t=0 | t=660 y |
![]() | ![]() |
- Rjet=4d0, vJet=2e7 as in problem.data
```
outflowType = 1    ! TYPE OF FLOW 1 cyl jet, 2 conical wind, 3 is clump
njet  = 4d4        ! flow density at launch zone, 1cu = 1cm^-3
Rjet  = 4d0        ! flow radius at launch zone, 1cu = 500AU
vjet  = 2e7        ! flow velocity, 1cu = cm/s (100km/s=1e7cu)
tjet  = 1000d0     ! flow temp, 1cu = 0.1K (100K=1000cu)
tt    = 0.0d0      ! flow accel time, 1cu = 8250y (0.02 = 165y)
open_angle = 00d0  ! conical flow open angle (deg)
tf    = 15d0       ! conical flow Gaussian taper (deg) for njet and vjet; 0 = disable
sigma = 0d0        ! toroidal.magnetic.energy / kinetic.energy, example 0.6
```
Scales.data is the same.
The velocity plot shows the jet velocity is 50 km/s instead of 200 km/s. This can also be verified from the distance the jet travels in 660 yrs:
t=0 | t=660 y |
![]() | ![]() |
Candidate movies to show on Collaboratory Wall for the film
These are the candidate movies I've received so far:
- Eddies high rez 2.5D MHD jet simulations (the one with all 4 jets evolving in [SII] and Halpha) https://astrobear.pas.rochester.edu/trac/blog/ehansen09292013
- Some of bruce/kira's simulations of PN lobe evolution
a) 3D: http://www.pas.rochester.edu/~bliu/pnStudy/rhoPN_3d.gif
b) 2D: http://www.pas.rochester.edu/~bliu/pnStudy/2Dclump_bl.gif
data set to bring up directly from visit
- Some of Zhuo's simulations of fall back shells and binary evolution
https://astrobear.pas.rochester.edu/trac/wiki/u/zchen/simulations
https://astrobear.pas.rochester.edu/trac/wiki/u/zchen/3Dsimulations
- A rotating version of a SHAPE visualization of one of Bruce/Kira simulation? https://astrobear.pas.rochester.edu/trac/blog/crl618Figures http://www.pas.rochester.edu/~martinhe/2012/crl/f4.
- Magnetic Tower
- Accretion Disks
http://www.pas.rochester.edu/~martinhe/2012/binary/10lines2.gif
http://www.pas.rochester.edu/~martinhe/2012/binary/20lines2.gif
http://www.pas.rochester.edu/~martinhe/2011/binary/gene-4.gif
http://www.pas.rochester.edu/~martinhe/2011/binary/20mar1144.gif
http://www.pas.rochester.edu/~martinhe/2011/binary/40au-bb5-3d.gif
- Youtube channel:
Meeting Update 09/22/2014 -- Baowei
- worked with users from SUNY Oswego and Laurence's student to install AstroBEAR on their machines: ran into issues with the compilers and libraries there.
- configure file (ticket #255): the first version with the development branch works on local machines and hopefully on most other machines.
1) The problem module can be set with the option "--with-module=". The module list will be shown in the README and INSTALL documents. This option is required; an error is reported if no module is given.
2) Check the hdf5, fftw3 and hypre libraries. The paths can be set with the options "--with-hdf5=", "--with-fftw3=" and "--with-hypre=". These options are optional. If a library is not found, an error is reported along with help information about downloading and installing that library.
3) A new run_dir folder will be created. If the folder already exists, a backup "run_dir_Currenttime/" is made to avoid erasing previous runs. After compilation, all necessary data files and the executable "astrobear" are copied into the run_dir/ folder, and an out/ subfolder is created. Will add the pbs and slurm sample scripts to make run_dir/ really ready to go on all machines.
4) pthreads is there but hasn't been tested.
5) Haven't included the IBM xl compilers, OpenMP, etc. yet, but planning to.
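The run_dir backup behavior described in 3) can be sketched as follows. This is only a sketch of the idea; the real configure script is shell, and the timestamp format is my assumption:

```python
import os
import shutil
import tempfile
from datetime import datetime

def prepare_run_dir(base):
    """Create run_dir/, backing up an existing one as run_dir_<timestamp>/."""
    run_dir = os.path.join(base, "run_dir")
    if os.path.isdir(run_dir):
        stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        shutil.move(run_dir, run_dir + "_" + stamp)  # keep the previous runs
    os.makedirs(os.path.join(run_dir, "out"))        # fresh run_dir/out/
    return run_dir

# Demo in a throwaway directory:
base = tempfile.mkdtemp()
first = prepare_run_dir(base)
second = prepare_run_dir(base)  # the first run_dir gets backed up, not erased
backups = [d for d in os.listdir(base) if d.startswith("run_dir_")]
print(len(backups))  # 1
```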
- OpenMP optimization (ticket #361): on it…
- Trying to install ParaView on Bluehive: currently getting errors with the qt4 library and VTK.
Science Meeting Update 09/08/14 -- Baowei
- Ablative RT
- Coarse grid + 1 level AMR: Movie with no Mesh; Movie with Mesh
- Coarse grid + 3 level AMR:Movie with no Mesh;
- Compare with results with original resolution and 0 level AMR in blog:bliu08182014 (Still waiting for the growth rate).
- The way to get coarse grid with AstroBEAR — blog:bliu08282014
Use AstroBEAR to transfer initial data of 3D Ablative RT from fine grid to coarse grid
The initial grid spacing of the data from LLE is too fine, and AstroBEAR runs slowly with such a base grid. Here is how to transfer the initial data to a grid twice as coarse using AstroBEAR.
- 1. Set the Base grid resolution to half and AMR level to be 1 in global.data
```
GmX = 50, 601, 50  ! 100, 1205, 100  ! Base grid resolution [x,y,z]
MaxLevel = 1       ! Maximum level for this simulation (0 is fixed grid)
```
- 2. Set ErrFlag to 1 everywhere if not restarting (i.e., when reading in the 3D txt data).
```fortran
SUBROUTINE ProblemSetErrFlag(Info)
   !! @brief Sets error flags according to problem-specific conditions.
   !! @param Info A grid structure.
   TYPE (InfoDef) :: Info
   ! If we need to generate coarse grid data (with 1 level of AMR),
   ! set ErrFlag everywhere to 1.
   ! (Note: logicals must be compared with .eqv./.not., not .eq.)
   if (InitialProfile .eq. 3 .AND. .not. lRestart) then
      Info%ErrFlag(:,:,:) = 1
   end if
END SUBROUTINE ProblemSetErrFlag
```
- 3. Read the txt data into the level 1 grid instead of level 0. The level 0 grid also needs to be initialized to avoid protections.
```fortran
DO i = 1, mx
   DO j = 1, my
      DO k = 1, mz
         read(11,*) pos(1), pos(2), pos(3), rho
         read(12,*) pos(1), pos(2), pos(3), p
         read(13,*) pos(1), pos(2), pos(3), vx
         read(14,*) pos(1), pos(2), pos(3), vy
         read(15,*) pos(1), pos(2), pos(3), vz
         rho = 5.0*rho/rScale
         p   = 1.25E+14*p/pScale
         vz  = 5E+6*vz/VelScale
         vy  = 5E+6*vy/VelScale
         vx  = 5E+6*vx/VelScale
         if (Info%level .eq. 0) then
            ! Initialize level 0 with harmless values to avoid protections.
            Info%q(i,j,k,1)  = 1.0
            Info%q(i,j,k,2)  = 0.0
            Info%q(i,j,k,3)  = 0.0
            Info%q(i,j,k,4)  = 0.0
            Info%q(i,j,k,iE) = 0.0
         end if
         if (Info%level .eq. 1) then
            ! Put the actual LLE data on level 1.
            Info%q(i,j,k,1) = rho
            Info%q(i,j,k,2) = rho*vx
            Info%q(i,j,k,3) = rho*vy
            Info%q(i,j,k,4) = rho*vz
            energy = 0.5*rho*(vx**2+vy**2+vz**2) + p/(gamma-1d0)
            Info%q(i,j,k,iE) = energy
         end if
      end do
   end do
end do
```
- 4. Run the program from start. Frame 0 will have level=1 grid everywhere.
- 5. Restart from frame 0 (ErrFlag will now be 0). Frame 1, reached after a tiny step (or any frame other than frame 0), will only have level 1 at the interface.
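The fine-to-coarse restriction that steps 1-5 achieve through AMR restarts can be sketched independently. This is a conceptual pure-Python analogue of conservative 2x coarsening, not the AstroBEAR implementation:

```python
def restrict(fine):
    """Average each 2x2x2 block of fine cells into one coarse cell."""
    n = len(fine)   # assume a cubic n x n x n nested list, n even
    m = n // 2
    coarse = [[[0.0] * m for _ in range(m)] for _ in range(m)]
    for i in range(m):
        for j in range(m):
            for k in range(m):
                s = sum(fine[2*i+di][2*j+dj][2*k+dk]
                        for di in (0, 1) for dj in (0, 1) for dk in (0, 1))
                coarse[i][j][k] = s / 8.0   # block average conserves the total
    return coarse

# A tiny 4^3 "grid" with values 0..63:
vals = iter(range(64))
fine = [[[float(next(vals)) for _ in range(4)] for _ in range(4)] for _ in range(4)]
coarse = restrict(fine)
print(len(coarse), len(coarse[0]), len(coarse[0][0]))  # 2 2 2
```

Because each coarse value is the block mean, total mass (sum times cell volume) is preserved, which is the property the restart trick relies on.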
Meeting Update 08/05/2014 -- Baowei
- 3D Ablative RT
- Extended the 2D data to 3D: expected exactly the same results as in 2D. Tried putting gravity along different directions and found the code works as expected only when gravity is along y. Running a job with gravity along y.
Gravity comparison | ![]() ![]() |
y-direction Movies | density; temperature |
- Reran the 3D conduction front tests along different directions to a longer time (as long as the 2D tests), since the total mass plots look OK. Didn't find anything wrong, although the x- and z-direction cases take a few more cycles to converge than the y-direction at later times: conduction front
Adjusting Gravity in 3D Ablative RT module
Gravity value with different bottom heat flux for the current code
The heat flux at the bottom seems too low. Tried arbitrarily larger fluxes. Gravity increases first, then drops. Something seems very wrong.
flb=6.0E+21 (calculated) | flb=6.8E+21 | flb=7.13E+21 | |
gravity & total mass | ![]() | ![]() | ![]() |
Int(P+rho*V2) | ![]() | ![]() | ![]() |
Compare with 2D Case
1. Initial Profile
The initial profiles are close enough, except for the momentum along the gravity direction.
Rho&T | ![]() |
py or pz | ![]() |
2. heat diffusion check
Turn hydro off. Compare the 2D (gravity along -y direction) and 3D (gravity along -z direction). The solver seems OK.
2D, flb=0 | 2D, flb=6E21 |
3D, flb=0 | 3D, flb=6E21 |
3. Pure hydro test
Turn the heat diffusion off. Compare the density and the momentum along the gravity direction (py for 2D, pz for 3D); the plots are along the center line.
2D rho | 2D, momentum | 2D, pressure |
3D rho | 3D, momentum | 3D, pressure |
Here's a picture to compare the py (2D) and pz (3D) — both along the center line at time=1.345E-5 (cu)
4. compare top and bottom integral of P+Rho*V2
The integrals of (pressure + rho*v^2) were calculated with the integrate query (2D) and the weighted variable sum query (3D) in VisIt.
Bottom | Top |
![]() | ![]() |
Check the derivative of momentum
2D | 3D |
![]() | ![]() |
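The top/bottom integrals compared above can also be reproduced outside VisIt by summing (P + rho*v^2) over the boundary rows. A sketch with made-up arrays; the function name and the assumed cell size dx are mine:

```python
def boundary_momentum_flux(p, rho, v, dx=1.0):
    """Integral of (P + rho*v^2) over the bottom and top rows of 2D arrays."""
    def row_integral(row_p, row_rho, row_v):
        return sum(pi + ri * vi * vi
                   for pi, ri, vi in zip(row_p, row_rho, row_v)) * dx
    bottom = row_integral(p[0], rho[0], v[0])
    top = row_integral(p[-1], rho[-1], v[-1])
    return bottom, top

# Toy 3x3 state: uniform pressure 1, density 2, velocity 0.5;
# each cell contributes 1 + 2*0.25 = 1.5.
p   = [[1.0] * 3 for _ in range(3)]
rho = [[2.0] * 3 for _ in range(3)]
v   = [[0.5] * 3 for _ in range(3)]
print(boundary_momentum_flux(p, rho, v))  # (4.5, 4.5)
```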
Disk Space Usage
- Local Machines
Machine | Size | Jonathan | Shule | Baowei | Martin | Eddie | Erica |
bambooData | 12 TB | 4 TB | 3.4 TB | 1.8 TB | 1.2 TB | ? | 0.5 TB |
alfalfaData | 4.2 TB | 1.8 TB | 0.7 TB | 0.06TB | 1.1 TB | 0.4 TB | 0.2 TB |
grassData | currently inaccessible |
- CIRC: 200 GB to ~1 TB per user
- XSEDE
- Ranch ( archival, only a single copy ): 12 TB
- Oasis (mounted, Data will be retained for three months beyond the end of the project): 3TB
Meeting Update 07/07/2014 -- Baowei
- Ablative RT
- Gave 2D text data to Rui. Haven't got update from him.
- Still debugging on 3D code.
- Users
- Worked with Guilherme of SUNY Oswego to install openmpi and hypre.
Science Meeting Update 06/09/14 -- Baowei
Ablative RT
- 2D: fixed an error in periodic boundary condition. new result
- 3D: http://astrobear.pas.rochester.edu/trac/wiki/u/bliu
Ablative RT growth rate with max(Vx)
Vx, Linear-Linear | ![]() |
Vx, Log-Linear | ![]() |
The fitted equation is from Takabe's paper (http://scitation.aip.org/content/aip/journal/pof1/28/12/10.1063/1.865099): gamma = alpha*sqrt(k*g) - beta*k*v_a, with alpha ~ 0.9 and beta ~ 3, where k is the perturbation wavenumber, g the acceleration, and v_a the ablation velocity.
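Takabe's growth-rate fit is commonly quoted as gamma = alpha*sqrt(k*g) - beta*k*v_a with alpha ~ 0.9 and beta ~ 3 (check the paper before relying on these coefficients). A quick evaluation; the numbers below are illustrative only, not values from these runs:

```python
import math

def takabe_growth_rate(k, g, v_a, alpha=0.9, beta=3.0):
    """Ablative RT growth rate fit: gamma = alpha*sqrt(k*g) - beta*k*v_a."""
    return alpha * math.sqrt(k * g) - beta * k * v_a

# Illustrative numbers only: k in 1/cm, g in cm/s^2, ablation velocity in cm/s.
gamma_classical = takabe_growth_rate(k=1e4, g=1e15, v_a=0.0)  # no ablation
gamma_ablative  = takabe_growth_rate(k=1e4, g=1e15, v_a=1e5)  # ablation-stabilized
print(gamma_ablative < gamma_classical)  # True
```

The second term shows why ablation suppresses the classical sqrt(k*g) growth, and can even stabilize short wavelengths entirely when beta*k*v_a exceeds alpha*sqrt(k*g).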
Meeting Update 06/02/2014 -- Baowei
- Ablative RT
To study the perturbation growth rate, the difference between the front positions along the center line and the edge is calculated and plotted versus time. This seems different from the three stages of the normal RT instability: the exponential growth stage ends when the bubble starts. Not sure whether this is just due to ablation or something is wrong.
Middle line, Linear-Linear, 200 Extended zones | ![]() |
Middle line, Log-Linear, 200 Extended zones | ![]() |
Quarter line, Linear-Linear, 200 Extended zones | ![]() |
Middle line, Linear-Linear,5 Extended zones | ![]() |
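The growth rate implied by the log-linear plots above can be extracted by fitting a line to log(amplitude) versus time. A minimal sketch with synthetic data; the function name and numbers are mine:

```python
import math

def fit_growth_rate(times, amplitudes):
    """Least-squares slope of log(amplitude) vs time = exponential growth rate."""
    logs = [math.log(a) for a in amplitudes]
    n = len(times)
    tbar = sum(times) / n
    lbar = sum(logs) / n
    num = sum((t - tbar) * (l - lbar) for t, l in zip(times, logs))
    den = sum((t - tbar) ** 2 for t in times)
    return num / den

# Synthetic front-position difference growing as 0.01 * exp(2.5 t):
times = [0.1 * i for i in range(20)]
amps = [0.01 * math.exp(2.5 * t) for t in times]
print(round(fit_growth_rate(times, amps), 6))  # 2.5
```

On real lineout data, the fit window should be restricted to the linear (exponential) stage, before the bubble saturates.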
- PN Jets: test Martin's 2.5D module on Stampede
- New users Report: Czarships/NewUsers/AskingForCode
Meeting Update 05/19/2014 -- Baowei
- Science
- 2D Ablative RT: tried a lower tolerance (blog:johannjc05132014) with max iteration 10000; the maximum # of iterations was reached and negative temperature found at frame 316. Compare with frame 356 using the previous tolerance and max iteration 1000.
Rho | ![]() |
Temperature | ![]() |
- Help Rui set up on bluehive2. He's analyzing the front rate using his code
- 3D Ablative RT: still working on transferring the hdf4 data from Rui into text form.
science meeting update 05/05/14 -- Baowei
- Ablative RT
- Rui is running the 2D RT on LLE machines. He will check the growth rate with his Matlab program when he gets the data (find the front by checking the slope, and subtract the speed of the whole body). He will send me the 3D results and let me try the 3D.
- Still working on the hypre choking issue. http://astrobear.pas.rochester.edu/trac/blog/bliu05012014
Science Meeting Update 03/31/14 -- Baowei
- Ablative RT: the ablation results with smaller extended zones look OK according to Rui (3:ticket:377, 5:ticket:377). Didn't get the bubble from the 2D runs. Will meet with Rui tomorrow to discuss an RT simulation to benchmark the growth rates and bubble velocity.
Science Meeting Update 03/24/14 -- Baowei
- Ablative RT with adjusting gravity
- Met with the LLE people last week and worked on putting the adjusting gravity into the code. First-cut results are shown here: 1:ticket:377. The shell stays stable for about 4 ns. The gravity is not quite accurate, especially when the front density drops due to ablation, because of the extended zones.
Meeting Update 03/18/2014 -- Baowei
- Tickets
- Users & Resources
- Wiki updates: tried to update trac with new plugins, which caused some problems for our users this past weekend and yesterday. Sorry about that. It works now. Rich, Jonathan and Baowei will meet on Thursday to discuss the trac issues.
- XSEDE proposal writing telecon: Time: 3/21 Fri 3:00pm ET, Location: Adam's office. Questions needed.
- Science
- Equations for the Ablative RT initial profile: #345
- Read Betti's paper (Growth rate of the ablative RT instability in ICF Phys. of Plasma 5, 1446 1998)
Trac wiki links refresher -- from Rich
Rich suggests us to use attachment instead of absolute URLs to create links in documents. So instead of things like
http://astrobear.pas.rochester.edu/trac/attachment/wiki/u/ehansen/Bvec_movie.gif
it's better to use
[attachment:Bvec_movie.gif:wiki:u/ehansen]
This dynamic way also has a convenient direct download link next to the file attachment link.
Rich found a workaround to make those old posts which use the former way still work but to be on the safe side we should start using the dynamic way to do the links.
Here's Rich's original email:
Hi Baowei, and folks:

For what it is worth… I would take the time to read the Trac Links page here: http://trac.edgewall.org/wiki/TracLinks It provides very helpful information on creating links in documents you create on the blog, wiki, etc that are *dynamic* rather than hardcoded, absolute URLs (e.g. http://astrobear.pas.rochester.edu/trac/attachment/wiki/u/ehansen/Bvec_movie.gif)

Taking this as our example, say we wanted to link to an attachment on another wiki page in a blog post.

*** The incorrect way of doing this would be: [http://astrobear.pas.rochester.edu/trac/attachment/wiki/u/ehansen/Bvec_movie.gif Eddie's Bvec Movie]

*** The correct way would be: [attachment:Bvec_movie.gif:wiki:u/ehansen]

Where:
- 'attachment:' is a keyword indicating you are referencing a Wiki file attachment
- 'wiki' is a keyword referring to the wiki module of Trac
- 'Bvec_movie.gif' is the literal referring to the filename of the attachment
- 'u/ehansen/' is the wiki page containing this attachment. Do NOT LEAD OR END this reference with a "/", i.e "/u/ehansen/" is incorrect.

Your links at this point will be created automatically and correctly and even include a handy 'direct download' attachment link in the page next to the file attachment link. If anything changes on the server, which is what seems to have happened today, your links are broken. I have placed a workaround redirect to fix those broken links. Still, I highly recommend you all follow the Trac best-practices for making links.

Rich
SuperMIC Vs. Stampede
SuperMIC | Stampede | |
Computing Nodes | 360 | 6400 |
Processor | Each computing node has two 2.8GHz 10-Core Ivy Bridge-EP E5-2680 Xeon Processors | Each computing node has two 2.7 GHz 8-core Xeon E5-2680 (Sandy Bridge) processors |
Co-Processors | Each computing node has two Intel Xeon Phi 7120P 61-core Coprocessors(1.238GHz,16GB) | Each compute node is configured with an Intel Xeon Phi 5110p 61-core coprocessor(1.05GHz,8GB) |
Memory | 64GB DDR3 1866MHz Ram | 32GB DDR3 1600MHz Ram , with an additional 8GB of memory on each Xeon Phi coprocessor card |
Hybrid Compute Nodes | 20 Hybrid nodes, each with two Processors + one Coprocessor + One NVIDIA Tesla K20X 6GB GPU | 128 compute nodes with NVIDIA Kepler K20 5GB GPU |
XSEDE Proposal Writing Webinar
Summary of the XSEDE Webinar "Writing a Successful XSEDE Allocation Proposal" I attended last week
- The full recorded session can be found here: https://meeting.austin.utexas.edu/p3pmvkq0mjg/ .
- Questions I asked and the speaker's answer:
- Research collaborations (typically how many SUs applied for / how many awarded)? Research Collaborations are those large projects with multiple PIs (site standard). Typically 15-16 million SUs. Currently there are about 800 research requests per year in total, with 4.0 billion SUs requested per year and 1.8 billion awarded.
- Is it better to submit one big proposal asking for a lot of SUs, or several smaller proposals each asking for a small amount? One PI is not allowed to apply with different projects as PI; they recommend combining different projects from the same group into one. Sounds like a big proposal is OK?
- Is there a way to run scaling tests for our own code on these new machines? Transfer SUs. Some of the machines are very similar, so you don't have to do scaling tests on all of them. For example, SuperMIC (the newest NSF-funded supercomputer, located at LSU, in production April 1st 2014) is similar to Stampede.
- Important points I caught that we might have missed before
- Justification of SUs: clear simple calculation, log/simple wall time?
- local compute resources in details: referees may know some of your big machines.
- research team in detail: how many faculty, staff, postdocs, graduate and undergraduate students; ability to complete the plan.
- Publications acknowledging XSEDE and/or feature stories on the XSEDE website: productive; do the PI and Co-PIs publish together?
- There are groups that are awarded 90% of their request.
- Ranch (TACC) and XWFS (The XSEDE-Wide File System) can be requested for storage resources without need to request computing at the same time.
Meeting Update 03/03/2014 -- Baowei
- Tickets
- new: 16 tickets from Jonathan (355-370). 13 of them are for AstroBEAR3.0 and have been assigned.
- closed: none
- Users
- worked with the visitor from Rice on his own module: ambient and clump objects with an added shock; it compiled and ran OK. Talked about 3D cylindrical clumps and tracers, and about computational resources.
- Ablative RT: got positive response from LLE but still waiting for the confirmation in detail.
- Resources
- got a call from the director of User Service at TACC when looking for a person to contact about the XSEDE proposals. Found two possible candidates to speak with.
- Worked on
- Testing script: worked on Eddie's new testing script with overlay object
- Parallel hdf5
- Science
- reading articles about stability behavior of the front (Piriz and Tahir, 2013, etc.)
Ablative RT with Riccardo's initial profile
- Riccardo's initial profile and BCs: Riccardo uses a zero heat flux top boundary condition. hypre chokes due to the rapidly increasing temperature at the top.
Rho | ![]() |
T | ![]() |
P | ![]() |
Vy | ![]() |
- Riccardo's initial profile and non-zero heat flux at the top
The front holds stable around 1ns then is pushed up. It's pushed out around 2.8 ns:
Rho | ![]() |
T | ![]() |
Vy | ![]() |
Meeting Update 02/24/2014 -- Baowei
- Tickets
- new: #347(CF memory errors on Bluestreak), #348(Asymmetric outflow), #349(Array Bounds mismatch in CalcSinkAcc), #350(Plotting artifacts due to visit / chombo files), #351(Pthreads = 2 breaks compiler on BS), #352(Better I/O for parallelization), #353(Use new table interface for all tables), #354(initializing fields to nan)
- closed: several old tickets(like #287, #289, #312, #313,…details can be found http://astrobear.pas.rochester.edu/trac/astrobear/blog/bliu02182014), #349, #352
- Science
- Ablative RT: #345. Still waiting for the time scale with the fixed gravity constant from Rui. Checked Betti's initial profile and BCs with the AstroBEAR code; found that the temperature at the top jumps very high, probably due to piled-up heat flux, which chokes hypre (http://astrobear.pas.rochester.edu/trac/astrobear/ticket/345#comment:3). Jonathan suggested extending the domain in y with Betti's BCs. Working on that.
Debug meeting report -- 02/18/2014
- Marvin's MHD disk
- Eddie's 3D Pulsed Jets
- Erica's Colliding Flows
- Baowei's Ablative RT
- Shule's Fedderath/Krumholz Accretion
Meeting Update 02/17/2014 -- Baowei
- Tickets
- Users:
- checked with Andy of Rice: AstroBEAR and Visit run well on Rice resources.
- set up Marvin on bluehive and bgq
- Erica's reservation on bgq
- Resources
- project & teragrid resources: ProjectRuns
- group reservation of half bgq machine for weeks
- linking to cloverdata/ from other local machines fails, possibly due to the failure of one disk on clover. The testing and backup scripts need to be updated correspondingly.
- Science
- Ablative RT: aiming for a stable time of 3-4 nanoseconds according to the LLE people. Tried different top BCs; the hypre-choking problem is fixed (details at #345). Still need to make the front stay longer.
Meeting Update 02/10/2014 -- Baowei
- Tickets
- new: #335(stray momentum flows), #336(Compiling error on bamboo and bluehive with hypre flag = 0), #337(Memory usage), #338(fix comment in scrambler 3.0 in refinements.f90), #339(Making astrobear capable of using dependencies), #340(Organizing modules in the source code), #341(Difference between colliding flows and molecular cloud formation), #342(compiling error on bluestreak)
- closed: none
- Users
- Mark: XSEDE startup allocation: stampede/kraken
- New one asking for the code: Yunnan University(to simulate problems of AGNs or SNRs)
- Resources:
- XSEDE: 1.4 million SUs left on Kraken.
- Worked on
- Ablative RT (#331): with Shule's or Betti's BCs, it can run 1E-10 seconds before hypre chokes. By fixing the values at the top right boundary, it runs up to 6E-9 seconds with an oscillating front: #331,comment:22. Is this long enough?
- QPBC(#317): summary of what I tried:
- The divergence comes from Az — got different values running with multi-processors
- Run with 1, 2 processors, vector potential values are same.
- Run with 3,4,5 processors, vector potential values are same.
- new subgrid scheme with minimum grid number=1,2,4: vector potential values are same as the old subgrid scheme. But for minimum grid number =8, values are different
- Only happens with AMR runs
- #336(Compiling error on bamboo and bluehive with hypre flag = 0)
Meeting Update 02/03/2014 -- Baowei
- Ticket
- new: none
- closed: none
- Resources
- grass is on.
- Worked on
- Ablative RT (#331): fixed bugs related to restart and multi-core running. Flux testing with both hydro & hypre passed (0.1% difference from the ideal case). Revisited the time scale with the corrected dimensions: the target final time should be 1E-8 s; hypre chokes at around 2% of this time: http://astrobear.pas.rochester.edu/trac/astrobear/ticket/331?cnum_edit=14#comment:14, possibly due to the top boundary conditions. Tried a 1st-order-in-time CN scheme [ThermalConduction], but it doesn't help much.
- Quasi Periodic boundaries in a quarter plane (#317): haven't had much time to work on this, but the large divergence seems to come from Az; the x & y directions are OK: http://astrobear.pas.rochester.edu/trac/astrobear/ticket/317#comment:10
Meeting Update 01/27/2014 --Baowei
- Tickets
- new: #334(Help running on bluestreak)
- closed: none
- Resources
- grass: one disk is dead. Rich is wiping the disks and rebuilding the array with the remaining 7. One spare disk (1TB) might be needed in the future.
- microphone of the laptop: Lost the plastic cover (outside the chip of USB) two weeks ago. The chip seems working OK. Mike Culver is helping us to wrap it again.
- Worked on
- #331 (Ablative RT): the failure (dE/dt !=0) of test with both hydro and diffusion seems due to hypre: http://astrobear.pas.rochester.edu/trac/astrobear/ticket/331#comment:5
- #317 (Quasi Periodic boundaries in a quarter plane): there is a bug related to the number of cores (exchange of data?). As suggested by Jonathan, I added the divergence as a diagnostic variable. Haven't been able to track down the bug yet: http://astrobear.pas.rochester.edu/trac/astrobear/ticket/317#comment:8
Meeting Update 01/21/2014 -- Baowei
- Tickets
- worked on
- Ablative RT module: fixed a bug in the open top boundary (ThermalConduction). By lowering the hypre tolerance, the hydro-off results match the analytic value (#331). Working on double-checking the hydro boundaries.
- Compiling Erica's code on BlueStreak.
Summary for the current status of the Ablative RT project
- Jonathan suggested testing the flux with hydro off. Ideally we can extend the test to 3 cases.
- Currently we still have a problem with the hydro-off test; the result will be posted in the third part.
- When doing the test we found a mismatch in the bottom flux. Putting everything in computational units, the bottom flux calculated from Kappa1 and the temperature is 2.32E-4, while the bottom flux converted from Betti's data is 3.48E-4 (5.876e18 W/m2 → 5.876e21 erg/s/cm2, then divided by fluxScale = pScale*velScale). The difference is a factor of 1.5, which is 1/(gamma-1), or gamma7 in the code, as in Jonathan's blog post.
- One way of understanding this is that Kappa1 ~ K0/Cv, where Cv is the specific heat taken from Betti; AstroBEAR defines the corresponding quantities differently, which is where the gamma7 factor enters.
- One easy fix is to multiply both sides of the equation by gamma7. Since the code already had a gamma7 on the left side of the equation (which is a bug), we just include gamma7 in Kappa1, as Jonathan mentioned in his blog. Kappa1 then becomes 1.5 times larger than before.
- Since we still have a problem with the hydro-off test, I can only show the effect of this 1.5 factor with bug-ridden results
without gamma7 | ![]() ![]() |
with gamma7 | ![]() ![]() |
- Results for the flux test: the way I ran the temperature limiting-case tests is to multiply the temperature by a small factor, with hydro on and off.
- limiting case with hydro off, temperature multiplied by 1e-17
- limiting case with hydro on, temperature multiplied by 1e-13 (limited by the BCs in BeforeStep; cannot go too small)
![]() | ![]() |
- normal hydro-off test
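The 1.5 mismatch factor discussed above is just gamma7 = 1/(gamma-1) for gamma = 5/3. A one-line check using the two flux values quoted in the text:

```python
gamma = 5.0 / 3.0
gamma7 = 1.0 / (gamma - 1.0)  # 1.5 for an ideal monatomic gas

flux_from_kappa1 = 2.32e-4    # computational units, from Kappa1 and T
flux_from_betti = 3.48e-4     # converted from Betti's 5.876e18 W/m^2
ratio = flux_from_betti / flux_from_kappa1
print(ratio, gamma7)  # both ~1.5
```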
Meeting update 01/13/2014 -- Baowei
- Tickets
- new: #330(too many restarts for 3D pulsed jets)
- closed: none
- Resources
- grass: new controller card needed to rebuild the raid http://www.newegg.com/Product/Product.aspx?Item=N82E16816124070
- clover: got a wiki backup problem. Working on it with Rich
- Ablative RT
- There's an inconsistency in gamma7 (1/(gamma-1)) in the flux part, between the equations and the bottom flux. A gamma7 has to be included when calculating the energy and flux to match the cgs values from Betti's data. Jonathan posted a blog explaining this here: http://astrobear.pas.rochester.edu/trac/astrobear/wiki/ThermalConduction. The limiting-case test passed, but the non-hydro test for the energy increase ratio still doesn't match the heat flux.
Unit conversions for the Ablative RT problem
Equation solved in Betti's code (SI units, with temperature in Joules)
Betti's document defines K0 in terms of Cv, the normal specific heat capacity, with kB the Boltzmann constant appearing in the flux. To convert to cgs units, 1 W/m2 = 1e3 erg/s/cm2, so the 5.876e18 W/m2 in Betti's data becomes 5.876e21 erg/s/cm2. In AstroBEAR, Cv is defined with a factor of 1/(gamma-1) = gamma7; comparing the two definitions of Cv gives the gamma7 = 1.5 (for gamma = 5/3) ratio between Kappa1 in the code and the value implied by Betti's data.
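A quick numerical check of the conversion and the gamma7 factor described above (a sketch only; the flux values are the ones quoted in this post):

```python
# Check the SI -> cgs flux conversion and the gamma7 factor discussed above.
gamma = 5.0 / 3.0
gamma7 = 1.0 / (gamma - 1.0)        # "gamma7" in the code; = 1.5 for gamma = 5/3

# 1 W/m^2 = 1e7 erg/s spread over 1e4 cm^2 = 1e3 erg s^-1 cm^-2
flux_si = 5.876e18                  # W/m^2, from Betti's data
flux_cgs = flux_si * 1.0e3          # -> 5.876e21 erg/s/cm^2

# Bottom fluxes in computational units, from the discussion above:
flux_code = 2.32e-4                 # computed from Kappa1 and the temperature
flux_betti = 3.48e-4                # converted from Betti's data
ratio = flux_betti / flux_code      # ~= 1.5 = gamma7
print(gamma7, flux_cgs, ratio)
```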
Meeting Update 01/08/2014 -- Baowei
- Resources
- grass: got a problem with the array card; the raid is degraded. Dave is backing up the data (will take several more days). Very likely we need a new array card ($300-500), which will support larger disks (10-20 TB vs. the current 3.3 TB), or a new machine.
- XSEDE: 0.64M SUs on Stampede for Martin's renewed allocation and 2.0M SUs left on Kraken for Adam's allocation. The renewal of Adam's allocation will need to be submitted before March 30th.
- Skype account: will need Adam's credit card since it's a recurring charge.
- Worked on
- installed AstroBEAR3.0 code on Rice's machines: DaVinCI and STIC (for Andy Liao) —schedule for Andy to come over?
- Ablative RT: http://astrobear.pas.rochester.edu/trac/astrobear/ticket/309#comment:50
Meeting Update 12/16/2013 -- Baowei
- Tickets
- new: none
- closed: #328
- Resources
- Skype premium account: need to set up a recurring payment and may need Adam's credit card to do that
- Worked on
- #309: Checked the expected time for the front to move: http://astrobear.pas.rochester.edu/trac/astrobear/ticket/309#comment:44, http://astrobear.pas.rochester.edu/trac/astrobear/ticket/309#comment:45. Worked with Jonathan to check the matrices/vectors fed into hypre. The maximum iteration count was reached when the matrices are ill-conditioned. Tried a new way to set the flux boundaries (no nonlinear solver used). Got different but similar results at short times: http://astrobear.pas.rochester.edu/trac/astrobear/ticket/309#comment:47
- #328
- Will take next week off/work from home: I've accumulated too many vacation hours, which cannot be carried over to next year.
Meeting Update 12/09/2013 -- Baowei
- Tickets
- new: #328(Seg fault in Planetary Atmospheres module (AstroBEAR 3.0))
- closed: none
- Resources
- working with Frank on the Skype account
- Asked Dave to check the laptop; the wireless worked on the 4th floor. Will show him again if it still doesn't work on the 3rd floor.
- New Users
- Andy from Rice (modeling of proposed magnetized shock experiments at LLE)
- from Western Kentucky University (modeling plasma jets of blazars)
- Worked on
- #309: got the same result with both Betti's data and my data. Checking the BCs:
http://astrobear.pas.rochester.edu/trac/astrobear/ticket/309#comment:37, http://astrobear.pas.rochester.edu/trac/astrobear/ticket/309#comment:39, http://astrobear.pas.rochester.edu/trac/astrobear/ticket/309#comment:41
- new youtube movies from Martin: http://www.youtube.com/user/URAstroBEAR
Meeting Update 12/03/2013 -- Baowei
- Tickets
- Worked on
- ticket #309: fixed the problem causing hypre choking. Found new problems related to the boundary conditions: http://astrobear.pas.rochester.edu/trac/astrobear/ticket/309#comment:36
- working on transferring and uploading movies to youtube from Martin
Meeting Update 11/11/2013 -- Baowei
- Users
- new users: from Instituto de Astrofisica de Andalucia (formation and X-ray emission from planetary nebulae and Wolf-Rayet nebulae) and from the Institute of Astronomy and Astrophysics in Taiwan (code comparison).
- Resources
- alfalfa for Zhuo?
- the intel fortran compilers on local workstations were not working properly last Friday due to software updates, but are fixed now
- Worked on
- ticket #309: fixed a new bug (http://astrobear.pas.rochester.edu/trac/astrobear/ticket/309#comment:27) and reran the tests. Obtained hydro test results very close to Reale's paper with the average particle mass set to half the solar abundance (http://astrobear.pas.rochester.edu/trac/astrobear/ticket/309#comment:28). The new code with the Ablative RT module (my data files) produces NaNs in hypre.
- new user & local users
- Will attend Supercomputing Conference (SC13 Denver, CO) with Jonathan next week.
Meeting Update 11/04/2013 -- Baowei
- Tickets
- new: #311(Implement energy & momentum conserving self gravity), #312(Array bound mismatch in ProlongateCellCenteredData), #313(NaNs in Info%q when creating displaced disk), #314(Link broken), #315(Bugs in Binary), #316(Bugs in Binary), #317(Quasi Periodic boundaries in a quarter plane), #318(Array bound mismatch in SyncHydroFlux), #319(Invalid pointer assignment in ObjectListRemove), #320(Usage of uninitialized variable levels(0)%gmbc(1) in CreateAmbient), #321(Implementing simple line transfer in AstroBEAR)
- closed: #315 duplicate
- Resources
- New machine to replace Alethea (for Joe)?
- Worked on
- ticket #316 (Joe's jobs on bluehive), bugs found by Marvin with the gfortran compiler (#312, #313, #318) — all our local machines & Teragrid use ifort, which is more tolerant of out-of-bounds array accesses. I tried running the test suites with gfortran on alfalfa and found more small bugs plus a fatal compile-time error with HDF5_gfortran. Still working on it.
- ticket #309 (Conduction Front Test with hydro)
Meeting Update 10/21/2013 -- Baowei
- Tickets
- new: none
- closed: none
- Resources
- Submitted Teragrid renewal proposal last Tuesday
- Worked on
- ticket #309: tried to figure out the bug causing the different hypre matrix with the same dt_diff for subcycle 1 and 2. The bug was found and fixed in subroutine DiffusionSetBoxValues when setting BCs with ghost zones.
- Teragrid proposal
Journal Club 10/15 Agenda
- Discuss the conduction front test: ticket #309
- Erica's BE sphere model
Meeting Update 10/14/2013 -- Baowei
- Tickets
- Resources
- Teragrid renewal proposal due tomorrow
- Worked on
- ticket #309: fixed a bug in the subcycling boundary conditions — the hypre choking issue is solved and the results compare better with the analytic solution. May still have problems with the boundary conditions
- proposal and progress report
SESAME Table subroutines for AstroBEAR
S2GETI, S2EOSI - These routines are to be incorporated into a hydro code which uses the inverted form of an equation of state. Density and internal energy are the independent variables, and pressure and temperature are the dependent variables.
- S2GETI is used to get data from the library
CALL S2GETI (IR, IDS2, TBLS, LCNT, LU, IFL)
  IR    material region number
  IDS2  Sesame material number
  TBLS  name of array designated for storage of tables
  LCNT  current word in array TBLS
  LU    unit number for library
  IFL   error flag
- Subroutine S2EOSI is used to compute an EOS point. That is, it computes the pressure, temperature, and their derivatives for a given density and internal energy.
CALL S2EOSI (IR, TBLS, R, E, P, T)
  IR    material region number
  TBLS  name of array which contains the EOS tables
  R     density in Mg/m3
  E     internal energy in MJ/kg
  P, T  pressure and temperature vectors:
        P(1), T(1)  pressure in GPa, temperature in Kelvins
        P(2), T(2)  density derivatives, (dP/dr)_E, (dT/dr)_E
        P(3), T(3)  energy derivatives, (dP/dE)_r, (dT/dE)_r
For certain materials, the library also has tables of the pressure, temperature, density, and internal energy along the vapor-liquid coexistence curve. This information is needed in reactor safety problems. Routines S2GET and S2GETI can be modified to access the coexistence data, and routine LA401A can be used to compute the thermodynamic quantities.
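As a rough illustration of what an inverted-EOS lookup like S2EOSI does — taking density and internal energy as the independent variables and interpolating pressure from a table — here is a bilinear-interpolation sketch. The grids, values, and the helper name `eos_pressure` are all made up for illustration; this is not the actual SESAME data or interface:

```python
# Hypothetical sketch of an inverted-EOS table lookup: P = P(rho, e).
# The grid nodes and pressure values below are invented for illustration.
import bisect

rho_grid = [1.0, 2.0, 4.0]        # density nodes (Mg/m^3)
e_grid = [0.5, 1.0, 2.0]          # internal-energy nodes (MJ/kg)
# P_table[i][j] = pressure (GPa) at (rho_grid[i], e_grid[j])
P_table = [[0.1, 0.3, 0.8],
           [0.4, 0.9, 2.0],
           [1.1, 2.2, 4.5]]

def eos_pressure(rho, e):
    """Bilinear interpolation of pressure on the (rho, e) grid."""
    # locate the cell containing (rho, e), clamped to the table edges
    i = min(max(bisect.bisect_right(rho_grid, rho) - 1, 0), len(rho_grid) - 2)
    j = min(max(bisect.bisect_right(e_grid, e) - 1, 0), len(e_grid) - 2)
    tr = (rho - rho_grid[i]) / (rho_grid[i + 1] - rho_grid[i])
    te = (e - e_grid[j]) / (e_grid[j + 1] - e_grid[j])
    return ((1 - tr) * (1 - te) * P_table[i][j]
            + tr * (1 - te) * P_table[i + 1][j]
            + (1 - tr) * te * P_table[i][j + 1]
            + tr * te * P_table[i + 1][j + 1])
```

At a table node the interpolant reproduces the stored value exactly, and inside a cell it blends the four surrounding nodes; derivative outputs like P(2), P(3) would follow from differentiating the same formula.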
Meeting Update 10/07/2013 -- Baowei
- Tickets
- New: #309 (Test Ablative RT module)
- Closed: none
- Users
- new one from the University of Kiel asking for the code: "I want to simulate the circumnuclear disk around SgrA*"
- Resources
- working with Dave backing up the wiki from Botwin to Clover
- Teragrid progress report: due next Tuesday
![]() | ![]() |
- Worked on
- Testing the code for Ablative RT module: #309
- Called Los Alamos and left messages Friday & Monday: http://t1web.lanl.gov/newweb_dir/t1sesamereginfo.html Haven't got a response yet.
Meeting Update 09/30/2013 -- Baowei
- Tickets
- new: #308(MolecularCloudFormation error when increasing flow size)
- closed: none
- Resources
- Martin's allocation expired. I tried to burn the remaining SUs on stampede with 3D Pulsed Jets.
- Worked on:
- Scaling Test and Progress Report
- Ablative RT: http://astrobear.pas.rochester.edu/trac/astrobear/blog/bliu09272013
- 3D Pulsed Jets on Stampede: obtained 7 frames with 21X105X21 + 5 AMR hydro in 2 hours and 4 frames with 21X105X21 + 6 AMR hydro in 4 hours
- AstroBEAR1.0 with SESAME table?
Meeting Update 09/23/2013 -- Baowei
- Tickets
- new: #307 (BE module bug? ) from Andrew
- closed: none
- Users:
- Wrote to Clemson?
- Resources
- INCITE program of Argonne:
1) Computing time: more than five billion core-hours will be allocated for Calendar Year (CY) 2014. Average awards per project for CY 2014 are expected to be on the order of 50 million core-hours for Titan and 100 million core-hours for Mira, but could be much higher.
2) INCITE proposals are accepted between mid-April and the end of June.
2014 INCITE Call for Proposals is now closed
3) Request for Information for next year's call: https://proposals.doeleadershipcomputing.org/allocations/incite/
4) Proposal preparation instructions: https://proposals.doeleadershipcomputing.org/allocations/incite/instructions.do
- Wiki was slow occasionally last week: hopefully fixed by Rich. Still need more memory on Botwin?
- Worked on
- Ablative RT: http://astrobear.pas.rochester.edu/trac/astrobear/blog/bliu09202013
- Sesame Table: finished the registration table. Need Jonathan to check it before mailing it out. Shall I contact Jacques?
- Progress report for renewal: set up the sections according to the "XSEDE successful proposal guide". Haven't included much other than the usage part (http://www.pas.rochester.edu/~bliu/Proposals/Progress_Report.pdf)
Standard output for 2.5D Pulsed Jet runs on Stampede
Meeting Update 09/16/2013 -- Baowei
- Tickets
- new: none
- closed: none
- Resources
- Mercurial repository & /cloverdata/ : working with Rich to get /cloverdata/ back and get the repository a standard name
- Teragrid: two weeks left for Martin's allocation Stampede 8% | 37,083 SUs remaining Kraken 22% | 260,046 SUs remaining
- Renewal report
- Worked on
- 2.5D Pulsed Jets runs
- Multi-threaded scaling test of AstroBEAR3.0 on Blue streak
- Read about SESAME TABLE
Meeting Update 09/09/2013 -- Baowei
- Tickets
- new: none
- closed: none
- Resources
- Teragrid Proposal Renewal: waiting for the progress report paragraphs and publications with work using Martin's allocation — https://astrobear.pas.rochester.edu/trac/astrobear/blog/bliu09042013
- Martin's allocation usage: Stampede 12% | 53,227 SUs remaining ; Kraken 23% | 262,082 SUs remaining
- Worked on 2.5D Pulsed Jets — https://astrobear.pas.rochester.edu/trac/astrobear/wiki/PulsedJets. Waiting for the green light to complete the rest of the MHD runs with 7-level AMR
- hydra: completed with data located at /bamboodata/bliu/Eddie/PulsedJet_2.5D/Fiducial/hydra/
- MHD Beta=5: 47 frames
- MHD Beta=1: 41 frames
- MHD Beta=0.4 : 23 frames
Acknowledgement for publications of work on Teragrid machines
Publications resulting from XSEDE Support are required for renewals and progress reports. Papers, presentations, and other publications that feature work that relied on XSEDE resources, services or expertise should include the following acknowledgement:
This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number OCI-1053575.
Meeting Update 09/03/2013 -- Baowei
- Tickets
- new: none
- closed: none
- Resources
- Wiki on Botwin
- Teragrid: MAGNETIC TOWERS AND BINARY-FORMED DISKS Stampede 13% | 56,652 SUs remaining Kraken 28% | 324,717 SUs remaining
Meeting Update 08/19/2013 -- Baowei
- Tickets
- new: none
- closed: #303 (Changing of gamma in global.data causes oddities)
- Users
- New user from Chalmers University of Technology of Sweden (AGB and Pre-PNe outflow modeling)
- Josh of Clemson's r-theta polar coordinate code
- Thank everybody for your help during Keira's visiting
- Resources
- Archived Shule's data on /bamboodata/: 5TB available
- Martin's Teragrid: progress report for renewing
Teragrid Allocation Policy
- EXTENSIONS
At the end of each 12-month allocation period, any unused compute SUs will be forfeited.
- Extensions of allocation periods beyond the normal 12-month duration
- Reasons for Extensions: encounter problems in consuming the allocation. For example, unexpected staffing changes
- Length of extension: 1-6 months
- Procedure: a brief note for the reason through POPS via the XSEDE User Portal.
- RENEWAL
If a PI wishes to continue computing after the expiration of the current allocation period, he/she should submit a Renewal request.
- In most cases, they should submit this request approximately one year after their initial request submission, so that it can be reviewed and awarded to avoid any interruption. (July 15th for Martin's allocation and April 15th for Adam's allocation)
- Procedure: Progress Report (3 pages)
Meeting Update 08/12/2013 -- Baowei
- Tickets
- new: #306 (2.5D MHD diverging wind running out of memory)
- closed: none
- Users
- Keira's visit
- New users asked for the code: Open University, UK, Educational in connection with the undergraduate course S383 Relativistic Universe
- Resources
- Grass needs a 1TB new hard drive
- New Kraken allocation: 86% | 2,954,282 SUs remaining (used by Shule & Baowei), Old Kraken allocation (45% | 516,370 SUs remaining)
- Archiving data?
- Worked on
- Pulsed Jets: tried to find the best production-run setup (resolution vs. processor count). 96X480X96 + 5 AMR runs slowly on 1200 cores of Kraken, so I changed the resolution to 192X960X192 + 3 AMR and got the first several frames for both the MHD (beta=5) and hydra runs. Movies will be attached soon. The highest refinement level covers the whole jet for the first several frames, which makes it run extremely slowly at the beginning, but hopefully it will run much faster later on, once the highest refinement follows the bow shock only.
hydra | ![]() | ![]() |
Meeting Update 08/05/2013 -- Baowei
- Tickets
- new: none
- closed: none
- Users
- Visitor's schedule: http://www.pas.rochester.edu/~bliu/Visitors/ScheduleforKeiraBrooks.pdf
- new users got the code: a student from East China University of Science and Technology ("to study the transport of charge carriers in the solar cells") and a student from the University of Costa Rica ("run simulations for my thesis project on helmet streamers on the sun")
- Resources
- Archived Martin's data for magnetic tower
- Worked on
- Ticket #302 (Cooling length refinement, run on kraken with 96X480X96+5AMR goes slow)
Meeting Update 07/29/2013 -- Baowei
- Tickets
- new: #304(Problem keeping hydro static equilibrium with sink particle)
- closed: none
- User
- Called Bruce. Schedule for the visit of Bruce's student?
- Student at Universidad de Chile asked to download 2.0
- Resources
Meeting Update 07/23/2013 -- Baowei
- Tickets
- Users
- Met with LLE folks: will merge AstroBEAR3.0 with Shule's work and give to Rui
- Ian installed AstroBEAR2.0 on Pleiades. Asked for reference.
- Gave Bruce the materials he needed for Teragrid proposal
- Resources
- Computing time: Kraken old grant 51% (585,530 SUs) remaining, new grant 100% (3,422,821 SUs) remaining, Stampede 15% (65,694 SUs) remaining. Updated the page https://astrobear.pas.rochester.edu/trac/astrobear/wiki/ProjectRuns
- Archive the data: local machines are pretty full. For the first archiving, move to bluehive and zip the data files there?
- Code management
- AstroBEAR3.0 is on its way and will pull into the current scrambler folder
- current scrambler will move to branch 2.0
Convert animated gif file to videos on bamboo
- Command lines converting animated gif to different video formats
Example: Eddie's 2.5D emission jets (gif)
New formats: avi, mp4, mov, mpeg/mpg
- Convert gif to jpg files first
convert old.gif old-%05d.jpg
- Convert jpg files to avi
avconv -i old-%05d.jpg new.avi
- Convert avi to mp4, mov
avconv -i new.avi new.mp4 avconv -i new.avi new.mov
- Convert mp4 to mpeg/mpg
Procedures for backing up your data
- Procedures for backing up your old computing data
- create a folder on /media/tmp070813 and name it with username_date. For example: bliu_07092013
- MOVE — NOT COPY the data you want to back up to the folder you created
- Make detailed notes about what these data are and save it as username_date.txt. For example: bliu_07092013.txt
- I will tar these data to the 4TB hard drive once everybody is done moving their data. And I will clean everything on /media/tmp070813 after that.
- Old data will be backed-up & cleaned twice every year. Our first backup date is Aug 1st 2013
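The steps above can be sketched as a dry run (a hypothetical example only: a temp directory stands in for /media/tmp070813, and the file names are placeholders):

```python
# Hypothetical dry run of the backup procedure, using a temp dir in place
# of /media/tmp070813.
import shutil
import tempfile
from pathlib import Path

backup_root = Path(tempfile.mkdtemp())       # stands in for /media/tmp070813
dest = backup_root / "bliu_07092013"         # username_date folder
dest.mkdir()

sample = backup_root / "old_run.dat"         # placeholder data file
sample.write_text("old run data\n")
shutil.move(str(sample), str(dest / "old_run.dat"))   # MOVE, not copy

# detailed notes about what the data are, saved as username_date.txt
(dest / "bliu_07092013.txt").write_text(
    "old_run.dat: sample output from a finished run (safe to archive)\n")
```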
Meeting Update 07/08/2013 -- Baowei
- Tickets
- Users
- Trying to arrange a meeting with LLE
- Wrote to Ian
- Storage issue/Equipment
- Working with Dave to get 1 TB extra space on bamboo for Shule to move his stuff to local: /media/tmp070813/ on bamboo
- Looking for an easy way to back up the data to the new 4TB harddrive
- Poster tubes have been shipped to Adam. Hopefully will get them before Friday.
Meeting Update 07/01/2013 -- Baowei
- Tickets
- New: #294(Refinement Artifacts), #295(Global Co-rotating Frame module stops running), #296(Self-Gravity Needs Investigation), #297(code terminates on Kraken immediately after starting when self gravity is turned on), #298(Problem running on Bluehive), #299(astrobear successfully compiled on bluehive but can't run by using PBS)
- Closed: #293(Problem running Hydro Static Disk)
- Users
- Ian: Install AstroBEAR on Pleiades
- Equipment
- Got the hard drive dock + 4TB hard disk.
Meeting Update 06/24/2013 -- Baowei
- Tickets
- Teragrid Proposal
- Worked on
- OoO
- will be out of office most of the time this week.
Teragrid Proposal of July 1
- Allocations:
- Requested: 7100000
- Awarded: 3422821
- Referee reviews:
- This is a new request for 7 million SUs on Kraken to study astrophysical flows by a large team of researchers (5 PIs, including several early-career scientists), supported by a large number of awards (5, including 1 NSF award). They made use of a start-up grant to analyze the performance of their code, AstroBEAR, which is adaptive mesh, and the proposing team is the same as the development team of this code. Provided is the strong scaling for the resolution they plan to run (128 + 4 levels AMR) on the target resource (Kraken), and it demonstrates good scalability. They make the point that the AMR code is 100x faster than the equivalent fixed-grid computation, so their strategy of using AMR is very helpful for this research. Overall a good proposal. There were a few shortcomings. I would have liked to also see weak scaling for that resolution, or a smaller size, as well as some justification of why they chose the resolution they did. They do not provide information about the experience of their team, but it seems they have expertise covering HPC aspects and consider code optimization. They also didn't mention whether they have local computing resources. They don't describe what beta is, how the various angles lead to different results, why they chose the angles they did, or how that will lead to a successful investigation. They mention they will save 150 frames of each run, but it is not clear whether that is only a subsample of the total frames that will be run. They do not give units for the runtime in Fig 3, and don't give the walltime for a frame; they simply say it takes 6000 SUs. On balance, I would not recommend the full allocation, but they have made a case for an award of about 50% of their request.
- This is a good proposal with all of the relevant information present. However, I could not find any previous usage by the group, except for some roaming allocation. The code is appropriate for the proposed computations and the scaling is fine. Because of the short track record I'm hesitant to recommend full funding; I recommend granting half of the request. Kraken: 3.5 MSU; Storage: 2500
- Important Factors
- Funding: NSF supported Computational Research Plans have priority over all non-NSF supported components.
- In the (usual) case where both non-NSF and NSF funding are involved, the Recommended Allocation is split into NSF and non-NSF portions. The non-NSF portion of a Recommended Allocation is reduced by the global fraction times an additional factor (greater than 1).
Meeting Update 06/17/2013 -- Baowei
- Tickets
- new: none
- closed: none
- Users
- send the code to Ian
- Things to buy (as discussed in the past couple of weeks)
- Hard drive dock
- Poster tubes
- Video card for Alfalfa
- Meetings
- SC13 Nov 16 - Nov 22, Denver CO. Registration opens July 17th. Technical Program poster. Poster submissions due July 31, 2013.
- ASP Annual Meeting 2014
- Worked on
- optimize with OpenMP (#285)
- local users
Meeting Update 06/10/2013 -- Baowei
- Tickets
- Users
- wrote to Uppsala University user, no response received yet.
- Promotional video
- uploaded to youtube channel: http://www.youtube.com/watch?v=epKYY1POl0s
- Worked on
- testing the OpenMP optimized code on bluehive
- local users
- reading planet atmosphere papers
Meeting Update 06/03/2013 -- Baowei
- Tickets
- new: none
- closed: none
- Users:
- New: Dan (REU student); a user from the University of Waterloo (for research); a user from the University of Birmingham, UK ("To investigate using AstroBEAR for simulations of colliding winds in binary systems and galaxy superwinds. I've used VH-1 a lot over the years. http://adsabs.harvard.edu/abs/1992ApJ...386..265S http://adsabs.harvard.edu/abs/2000MNRAS.314..511S and want to investigate AMR/MHD some more.")
- worked on
- wiki latex plugin update: colorful equations: https://astrobear.pas.rochester.edu/trac/astrobear/wiki/FluxLimitedDiffusion
Meeting Update 05/28/2013 -- Baowei
- Tickets
- new: #288(Krumholz accretion creates multiple particles when using particle refinement buffer)
- closed: none
- Users
- new one from Uppsala University Sweden asked for the code
- another meeting with LLE
- Wiki
- new latex plugin (stable version): the current way to write a latex equation is
[[latex($ $)]]
instead of
[[latex($ $)]]
or
{{{#Latex }}}
I can do the transform for you if you have wiki pages with equations that couldn't show correctly
- Worked on
- 3D Colliding Jets: https://astrobear.pas.rochester.edu/trac/astrobear/blog/bliu05242013
- Single jets negative temperature:
1) 0AMR runs to the end on 16 cores of bamboo:
2) 16X160+2AMR on 16 cores of bamboo: http://www.pas.rochester.edu/~bliu/RAGA/m16X160_2AMR.gif
3) 16X160+4AMR on 16 cores of bamboo: got negative Temperature at frame 3
4) 16X160+4AMR on 1 core of bamboo: got negative temperature at frame 59
5) working on tracing back a revision which worked for Eddie
3D Colliding Jets
Got many restart requests due to NaNs in the flux before the jets meet.
- Jets meet at frame 30
- freezes at 78, restart from 77 still freezes
filling fractions = 0.969 0.930
Current efficiency = 86%
Cell updates/second = 1990 4487 44%
Wall Time Remaining = 151.1 kyr at frame 77.9 of 100
AMR Speed-Up Factor = 0.1904E+04
Advanced level 2 to tnext= 0.2234E+01 with dt= 0.2327E-11 CFL= 0.3892E-01 max speed= 0.4181E+10
Advanced level 2 to tnext= 0.2234E+01 with dt= 0.2327E-11 CFL= 0.3892E-01 max speed= 0.4181E+10
Advanced level 1 to tnext= 0.2234E+01 with dt= 0.4654E-11 CFL= 0.2994E-01 max speed= 0.3217E+10
Advanced level 2 to tnext= 0.2234E+01 with dt= 0.2327E-11 CFL= 0.3892E-01 max speed= 0.4181E+10
Advanced level 2 to tnext= 0.2234E+01 with dt= 0.2327E-11 CFL= 0.3892E-01 max speed= 0.4181E+10
Advanced level 1 to tnext= 0.2234E+01 with dt= 0.4654E-11 CFL= 0.2998E-01 max speed= 0.3221E+10
Advanced level 0 to tnext= 0.2234E+01 with dt= 0.9307E-11 CFL= 0.1964E-01 max speed= 0.2110E+10
Info allocations = 1.5 gb 130.4 mb
message allocations = ------ 35.5 mb
sweep allocations = ------ 54.3 mb
filling fractions = 0.969 0.930
Current efficiency = 86%
Cell updates/second = 1990 4487 44%
Wall Time Remaining = 136.9 kyr at frame 77.9 of 100
Meeting Update 05/20/2013 -- Baowei
- Users
- Met with LLE: most of the time was spent discussing the ablative RT results. Arijit showed results with perturbations from Betti's code. Rui showed ablation-balance results from his 3D code. Shule will summarize what he's been doing.
- Very positive Feedback from Josh of Clemson.
- Wrote to a user in China asking for feedback, no reply yet
- The Clemson and LLE users asked their computational scientists/system administrators to install AstroBEAR on their machines. Reviewing my instructions for users to install:
To install and run it, you will need a Linux system, MPI, and libraries like fftw3, hdf5 and maybe hypre. You may find more details on our wiki page: https://astrobear.pas.rochester.edu/trac/astrobear/wiki/UserGuide
Editing Makefile.inc sounds too complicated for a typical Linux user? Considering it usually takes me about 2 hours to install all the required libraries & the code on a new system, and sometimes I need the system admin's help for job submission, there might be a big barrier for new users to pass before they see how good our code is. A simpler, easier-to-install version? Make a configure file high priority?
- Got Zhuo accounts and key to the office. Walked him through the whole process of getting, compiling and running the code on local machines
- Tickets
- New: #286(Memory Allocation Error on BlueStreak), #287(Virtual Memory Error on Kraken)
- Closed: none
- Machines & More disk space
- Move alfalfa & bamboo to 476?
- suggestions from Dave&Rich for disk space: https://astrobear.pas.rochester.edu/trac/astrobear/blog/bliu05152013
- Worked on optimization (#285)
- switch all the FORALLs to distributed DO LOOPs in sweep_scheme.f90, no optimization found
- results are shown here: https://astrobear.pas.rochester.edu/trac/astrobear/blog/bliu05132013_2
Archives & Hard drive docks -- better solution for space issue?
Dave & Rich use Hard drive docks which might be a good solution for our current space shortage. Here's some thoughts:
- They are very cheap compared with other options ($300-500), and it's very easy to expand the storage.
- We can archive the data to these hard drives once the paper is published. We can create a folder on our local machines for data to be archived, with a file keeping records of what data are moved into the folder. Whoever moves data to the folder is responsible for updating those records. Every three to four months, I will archive the folder to the hard drive and then clean it out.
optimization with OpenMP on Blue Gene/Q
Replace the vectorized FORALL loop with parallelized DO loops in sweep_scheme.f90. An example is to replace:
DO i=mB(1,1), mB(1,2)
   FORALL(j=mB(2,1):mB(2,2),k=mB(3,1):mB(3,2))
      beforesweepstep_%data(beforesweepstep_%x(i),j,k,1,1:NrHydroVars) = &
         Info%q(index+i,j,k,1:NrHydroVars)
   END FORALL
END DO
by
!$OMP PARALLEL DO PRIVATE(k,j,i) COLLAPSE(3)
DO k=mB(3,1),mB(3,2)
   DO j=mB(2,1),mB(2,2)
      DO i=mB(1,1), mB(1,2)
         beforesweepstep_%data(1:NrHydroVars,1,i,j,beforesweepstep_%x(k)) = Info%q(i,j,index+k,1:NrHydroVars)
      END DO
   END DO
END DO
!$OMP END PARALLEL DO
Testing results on Blue Streak are
- 128^3 + 4 AMR; current revision running time on 512 cores: 224.57 s (tasks per node = 16)
Tasks per node | OMP_NUM_THREADS | Total Running Time
---|---|---|
1 | 32 | 3375.17
2 | 16 | 2019.94
4 | 8 | 1265.58
8 | 4 | 1052.74
16 | 2 | 907.62
32 | 1 | 1151.02
Tasks per node | OMP_NUM_THREADS | Total Running Time
---|---|---|
1 | 64 | >3600
2 | 32 | 2039.45
4 | 16 | 1181.27
8 | 8 | 946.2
16 | 4 | 741.81
32 | 2 | 737.68
64 | 1 | 877.07
- 32^3 + 4 AMR; current revision running time on 512 cores: 33.26 s (tasks per node = 16)
Tasks per node | OMP_NUM_THREADS | Total Running Time
---|---|---|
1 | 64 | 191.42
2 | 32 | 122.68
4 | 16 | 82.43
8 | 8 | 70.78
16 | 4 | 72.78
32 | 2 | 85.65
64 | 1 | 129.95
Tasks per node | OMP_NUM_THREADS | Total Running Time
---|---|---|
1 | 32 | 164.59
2 | 16 | 105.90
4 | 8 | 86.67
8 | 4 | 79.62
16 | 2 | 84.98
32 | 1 | 128.47
The job submission script on Blue Streak is like
#!/bin/bash
#SBATCH -J strongTest
#SBATCH --nodes=32
#SBATCH --ntasks-per-node=4
#SBATCH -p debug
#SBATCH -t 01:00:00
module purge
module load mpi-xl
module load hdf5-1.8.8-MPI-XL
module load fftw-3.3.2-MPI-XL
module load hypre-2.8.0b-MPI-XL
ulimit -s unlimited
export OMP_NUM_THREADS=16
srun astrobear > strong_4ThreadsperNode_X16.log
Swapping the DO loop order to i, j, k — the difference in running time is small compared with the k, j, i case:
!$OMP PARALLEL DO PRIVATE(i,j,k) COLLAPSE(3)
DO i=mB(1,1), mB(1,2)
   DO j=mB(2,1),mB(2,2)
      DO k=mB(3,1),mB(3,2)
         beforesweepstep_%data(1:NrHydroVars,1,i,j,beforesweepstep_%x(k)) = Info%q(i,j,index+k,1:NrHydroVars)
      END DO
   END DO
END DO
!$OMP END PARALLEL DO
Tasks per node | OMP_NUM_THREADS | Total Running Time
---|---|---|
1 | 16 | >3600
2 | 16 | 2099.57
4 | 16 | 1208.53
8 | 8 | 912.56
16 | 4 | 758.78
16 | 2 | 969.74
16 | 1 | 1436.98
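For picking the best node configuration from tables like these, a quick helper (a sketch only; the timings below are hand-copied from the 128^3 + 4 AMR, 64-way tasks-times-threads table above, skipping the ">3600" entry):

```python
# Timings (seconds) copied from the 64 tasks-x-threads table above;
# find the (tasks per node, OMP_NUM_THREADS) combination with minimum runtime.
timings = {
    (2, 32): 2039.45,
    (4, 16): 1181.27,
    (8, 8): 946.2,
    (16, 4): 741.81,
    (32, 2): 737.68,
    (64, 1): 877.07,
}
best = min(timings, key=timings.get)
print(best, timings[best])
```

For that table the minimum is at 32 tasks per node with 2 threads each, narrowly beating 16 tasks with 4 threads.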
meeting update 05/13/2013 -- Baowei
- Tickets
- Users:
- will meet with Ruka and set accounts for him
- CIRC poster session
- Worked
- #285 (Optimize AstroBEAR 2.0 on Blue gene/Q)
Meeting Update 05/09/2013 -- Baowei
- CIRC poster session
- Time: start at 10 a.m. — 9:45 am if you have a poster
- Location: Goergen Hall
- Users
- New: from the United States Naval Academy (interested in learning more about astrophysics, including possible future involvement in mass wave detection devices such as LIGO and LISA, with the hope of performing and enhancing universe dynamics simulation techniques) and Xiamen University (astrophysical simulations)
- Tickets
- New: none
- Closed: none
- Worked on
- scaling test of AstroBEAR on the whole machine of BlueStreak
- poster for CIRC poster session: http://www.pas.rochester.edu/~bliu/Posters/poster_circ.pdf
- code optimization with OpenMP. (#285) testing it on Stampede.
Meeting Update 04/30/2013 -- Baowei
- Users
- Met with Arijit of LLE. Got VisIt running. Set up the 3DRT problem on LLE machines. Walked him through Jonathan's blog "Finding the RT Instability growth rate": https://astrobear.pas.rochester.edu/trac/astrobear/blog/johannjc04252013
- Tickets
- New #285 (Optimize AstroBEAR 2.0 on Blue gene/Q)
- Worked on
- Registered CIRC poster session
- Scaling test and optimization on Blue Streak(Ticket #285)
- Strong scaling of current revision code (done up to 2048 cores)
- Enabled parallelization of program code by turning on qsmp. Compiled and in queue for testing. (#285)
- Working on replacing vectorized FORALL loops with parallelized DO loops and corresponding modifications like swapping indices etc. in sweep_scheme.f90.
Meeting Update 04/22/2013 -- Baowei
- Tickets
- CIRC poster session
- Registration: http://www.circ.rochester.edu/poster_submission.html
- deadline, April 26th, this Friday
- Users:
- Next meeting time with LLE: training of visit and next step.
- Worked
- Optimization on stampede: https://astrobear.pas.rochester.edu/trac/astrobear/blog/bliu04042013
- New strong scaling test on stampede (O3):
- In summary:
- with level 3 optimization, stampede is about twice as fast as kraken on fewer than 1000 cores, which is fair — 1 SU on stampede = 2 SUs on kraken
- Kraken has better strong scaling up to 5000 cores. (The maximum for stampede normal queue is 4096 cores)
- Current usage: stampede (40% | 169,343 SUs left), kraken (99% | 1,134,191 SUs left)
- Working on
- optimization and scaling on blue hive and blue streak
- AstroBEAR poster for CIRC symposium
Meeting Update 04/16/2013 -- Baowei
- Teragrid proposal submitted
- Users
- Christine: needs the Stream object and 2.5D; asks about progress in implementing a radiative transfer feature in AstroBEAR; will come back in May.
- New one asking for the code from VSSC (Vikram Sarabhai Space Centre?)
- Tickets
- New: none
- Closed: (none)
Meeting Update 04/08/2013 -- Baowei
- Users
- New user from Indonesia National Institute of Aeronautics and Space: Modelling solar phenomena, especially coronal mass ejection
- Jonathan and Baowei met with the CS graduate student working on optimizing MUSCL with openMP on Friday.
- XSEDE proposal
- scaling test on kraken & stampede: Kraken is much faster with AstroBEAR (https://astrobear.pas.rochester.edu/trac/astrobear/blog/bliu04042013)
- still working on the computing-resource application: 80% on kraken?
- proposal — merge the feedback ladder proposal and the colliding flows paper together: http://www.pas.rochester.edu/~bliu/Proposals/XSEDE_proposal.pdf
- A revision runs on Kraken: http://www.pas.rochester.edu/~bliu/AstroBEAR/krakenbear.tar.gz
Strong scaling Test -- Kraken & Stampede
- From the last run on Stampede (https://astrobear.pas.rochester.edu/trac/astrobear/blog/bliu03272013), hypre doesn't affect the scaling.
- This run uses a short final time and accounts for the profiling time and IO time.
- stampede
Runtime | ![]() |
Non-ghost zone portion | ![]() |
- kraken
Runtime | ![]() |
Non-ghost zone portion | ![]() |
- Data
- stampede Non optimization
Cores | Wall Time | Non-ghost zone portion |
---|---|---|
128 | 549.82 | 57% |
256 | 360.35 | 48% |
512 | 235.72 | 41% |
- stampede O3
Cores | Wall Time | Non-ghost zone portion |
---|---|---|
128 | 54.0 | 57% |
256 | 33.5 | 48% |
512 | 20.4 | 41% |
1024 | 13.31 | 35% |
2048 | 10.63 | 29% |
4096 | 8.66 | 23% |
- Kraken
Cores | Wall Time | Non-ghost zone portion |
---|---|---|
120 | 118.18 | 56% |
240 | 69.00 | 50% |
480 | 40.99 | 42% |
1008 | 28.24 | 35% |
2016 | 16.88 | 29% |
4996 | 11.25 | 22% |
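As a quick sanity check on these Kraken numbers, the implied strong-scaling speedup and parallel efficiency (relative to the 120-core run) can be computed directly from the wall times in the table above. A minimal sketch:

```python
# Strong-scaling efficiency relative to the smallest Kraken run,
# using the wall times (in seconds) tabulated above.
kraken = {120: 118.18, 240: 69.00, 480: 40.99,
          1008: 28.24, 2016: 16.88, 4996: 11.25}

base_cores, base_time = 120, kraken[120]
for cores, t in sorted(kraken.items()):
    speedup = base_time / t
    efficiency = speedup * base_cores / cores
    print(f"{cores:5d} cores: speedup {speedup:5.1f}x, efficiency {efficiency:4.0%}")
```

At ~5000 cores the efficiency drops to roughly a quarter, consistent with the shrinking non-ghost-zone fraction in the same table.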
- Configuration of Kraken & Stampede
Kraken | Stampede | |
---|---|---|
Computing Nodes | 9408 | 6400 |
Core per Node | 12 | 16 |
Processor | 2.6 GHz AMD Opteron | 2.7GHz Xeon E5-2680 (Coprocessors Xeon Phi SE10P 1.1 GHz) |
Memory per Node | 16 GB | 32 GB |
- Standard output
- 128 cores
Total Runtime = 550.3116700649261475 seconds.
Info allocations = ------ 280.6 mb
message allocations = ------ 36.2 mb
sweep allocations = ------ 49.8 mb
filling fractions = 0.012 0.644 0.900 0.000
Current efficiency = 82% 16% 98%
Cell updates/second = 973 1721 57%
Wall Time Remaining = ------
AMR Speed-Up Factor = 0.1039E+04
- 256 cores
Total Runtime = 360.3508758544921875 seconds.
Info allocations = ------ 200.6 mb
message allocations = ------ 32.4 mb
sweep allocations = ------ 59.7 mb
filling fractions = 0.012 0.665 0.898 0.000
Current efficiency = 77% 21% 98%
Cell updates/second = 735 1525 48%
Wall Time Remaining = ------
AMR Speed-Up Factor = 0.7988E+03
- 512 cores
Total Runtime = 549.8217809200286865 seconds.
Info allocations = ------ 280.6 mb
message allocations = ------ 36.2 mb
sweep allocations = ------ 49.8 mb
filling fractions = 0.012 0.644 0.900 0.000
Current efficiency = 82% 16% 98%
Cell updates/second = 974 1722 57%
Wall Time Remaining = ------
AMR Speed-Up Factor = 0.1040E+04
- Standard output on Kraken
- 120 cores
Total Runtime = 118.1762299537658691 seconds.
Info allocations = ------ 257.6 mb
message allocations = ------ 41.2 mb
sweep allocations = ------ 63.1 mb
filling fractions = 0.012 0.668 0.895 0.000
Current efficiency = 75%
Cell updates/second = 4837 8572 56%
Wall Time Remaining = ------
AMR Speed-Up Factor = 0.8210E+03
- 240 cores
Total Runtime = 68.9995868206024170 seconds.
Info allocations = ------ 164.1 mb
message allocations = ------ 32.8 mb
sweep allocations = ------ 58.4 mb
filling fractions = 0.012 0.654 0.897 0.000
Current efficiency = 70%
Cell updates/second = 4099 8226 50%
Wall Time Remaining = ------
AMR Speed-Up Factor = 0.7070E+03
- 480 cores
Total Runtime = 40.9853310585021973 seconds.
Info allocations = ------ 122.0 mb
message allocations = ------ 24.2 mb
sweep allocations = ------ 30.1 mb
filling fractions = 0.011 0.706 0.901 0.000
Current efficiency = 68%
Cell updates/second = 3414 8082 42%
Wall Time Remaining = ------
AMR Speed-Up Factor = 0.5956E+03
Meeting Update 04/01/2013 -- Baowei
- Teragrid Proposal
- Scaling on Stampede: https://astrobear.pas.rochester.edu/trac/astrobear/blog/bliu03272013
- Scaling on Kraken hasn't finished yet.
- Current computing resources
- Stampede: 61% | 256,868 SUs left
- Kraken: 99% | 1,138,233 SUs left
- Tickets
- New: none
- Closed: none
- Working on testing box.net for storage.
Strong Scaling Test on Stampede
- Run Time
- With hypre
Num of Cores | Run Time (secs) |
1024 | 7963.7 |
2048 | 5862.3 |
4096 | 4005.8 |
- Without hypre
Num of Cores | Run Time (secs) |
1024 | 7401.0 |
2048 | 5436.6 |
4096 | 4126.2/4025.2 |
- Scaling Test Result
Runtime | ![]() |
Runtime Considering Efficiency | ![]() |
Cell Updates Per Second | ![]() |
- Standard output of last advance
- 1024 cores:
Info allocations = 79.8 gb 110.0 mb
message allocations = ------ 32.0 mb
sweep allocations = ------ 29.9 mb
filling fractions = 0.017 0.597 0.855 0.000
Current efficiency = 66% 31% 97%
Cell updates/second = 437 1215 36%
Wall Time Remaining = ------
AMR Speed-Up Factor = 0.3331E+03
- 2048 cores
Info allocations = 106.9 gb 85.5 mb
message allocations = ------ 64.0 mb
sweep allocations = ------ 30.7 mb
filling fractions = 0.017 0.591 0.848 0.000
Current efficiency = 58% 39% 97%
Cell updates/second = 298 1011 29%
Wall Time Remaining = ------
AMR Speed-Up Factor = 0.2305E+03
- 4096 cores
Info allocations = 147.4 gb 61.2 mb
message allocations = ------ 128.0 mb
sweep allocations = ------ 20.1 mb
filling fractions = 0.016 0.619 0.846 0.000
Current efficiency = 47% 50% 97%
Cell updates/second = 187 785 24%
Wall Time Remaining = ------
AMR Speed-Up Factor = 0.1753E+03
- Standard output of last advance (No self-gravity)
- 1024
Info allocations = 67.0 gb 93.0 mb
message allocations = ------ 32.0 mb
sweep allocations = ------ 25.2 mb
filling fractions = 0.017 0.597 0.852 0.000
Current efficiency = 69%
Cell updates/second = 466 1299 36%
Wall Time Remaining = ------
AMR Speed-Up Factor = 0.4099E+03
- 2048
Info allocations = 92.0 gb 71.2 mb
message allocations = ------ 64.0 mb
sweep allocations = ------ 22.5 mb
filling fractions = 0.016 0.620 0.848 0.000
Current efficiency = 61%
Cell updates/second = 322 1097 29%
Wall Time Remaining = ------
AMR Speed-Up Factor = 0.2873E+03
- 4096
Info allocations = 125.3 gb 51.0 mb
message allocations = ------ 128.0 mb
sweep allocations = ------ 19.9 mb
filling fractions = 0.016 0.616 0.849 0.000
Current efficiency = 50%
Cell updates/second = 206 865 24%
Wall Time Remaining = ------
AMR Speed-Up Factor = 0.1884E+03
Info allocations = 125.6 gb 54.7 mb
message allocations = ------ 128.0 mb
sweep allocations = ------ 19.9 mb
filling fractions = 0.016 0.620 0.845 0.000
Current efficiency = 51%
Cell updates/second = 211 882 24%
Wall Time Remaining = ------
AMR Speed-Up Factor = 0.1919E+03
- CPU hours
- 1 frame: 4600 SUs
- 50 frames: 230,000 SUs
- 4~5 runs: 1,150,000 SUs (on stampede)
- Current Allocation: 416,000 SUs (on stampede), 1,138,234 SUs (on Kraken)
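The SU totals above follow from simple multiplication; a sketch of the arithmetic, with the numbers taken from the list above (5 runs is the upper end of the planned 4~5):

```python
# SU budget arithmetic for the planned Stampede runs.
su_per_frame = 4600        # cost of one frame
frames_per_run = 50
runs = 5                   # upper end of the 4~5 runs planned

su_per_run = su_per_frame * frames_per_run   # SUs for one run
su_total = su_per_run * runs                 # SUs for all runs
stampede_allocation = 416_000                # current Stampede allocation

print(f"per run: {su_per_run:,} SUs, total: {su_total:,} SUs")
print(f"shortfall on Stampede: {su_total - stampede_allocation:,} SUs")
```

The shortfall is why the Kraken allocation matters for these runs.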
Meeting Update 03/25/2013 -- Baowei
- Tickets
- New: none
- closed: #273(Install AstroBEAR on Stampede of TACC)
- Machines:
- Xsede current usage: stampede 25% (315,584 SUs left), kraken 0% (1,138,234 SUs left)
- Disk Usage
- Raid alert from Clover: Rich's working on it
- Simple speed test on blue streak, blue hive and stampede (#7 of Top500) and hypre 2.8/2.9
Module | Blue Streak(16 cores) | Blue hive(8 cores) | Stampede(16 cores) |
---|---|---|---|
BonnorEbertSphere | 794 secs | 334 secs | 545 secs |
Bondi | 899 secs | 107 secs | 626 secs |
MolecularCloudFormation | 392 secs | 48 secs | 269 secs |
Didn't see a big difference between hypre 2.8 and 2.9 on blue streak
- New wiki header picture
- Working on scaling test on stampede (with MolecularCloudFormation)
- Will take a day off on Tuesday for moving.
Current Disk Usage
Machine | Total | Used | Use% |
---|---|---|---|
clover | 11T | 9.7T | 95% |
grass | 5.3T | 3.7T | 75% |
alfalfa | 5.3T | 4.9T | 97% |
bamboo | 13T | 12T | 97% |
Meeting Update 03/18/2013 -- Baowei
- New users from Download form
- Helsinki, UR(student for CS course)
- A better way of handling users who use the Download form?
- Equipment & Local machines
- Mouse, Mac to VGA port
- Clover becomes unstable: move wiki to botwin and use other machines for backup
- Grass has a lot of issues also. New machine for Erica (bamboo?)
- Tickets
- New: #280 (Strange message submitting to Bhive afrank queue)
- New movies on youtube channel: http://www.youtube.com/user/URAstroBEAR
- Worked on #273, testing hypre 2.9 and running script for testing suite on blue streak.
Meeting Update 03/11/2013 -- Baowei
- Tickets:
- New: None
- Closed: #274 (Question on AMR)
- New user from China asking the code through Download form.
- Projects for CS course:
- Asking for videos for youtube channel & Promo movie:
- New movies uploaded to youtube: http://www.youtube.com/user/URAstroBEAR
- Out of Memory Issue on Blue Streak
- Latest revision installed on Blue Streak: hypre-2.9.0b-MPI-XL-No-Global-Partition with optimization flag O3
- Teragrid allocation
- Stampede: 416,000 SUs:https://astrobear.pas.rochester.edu/trac/astrobear/blog/bliu03082013
- Kraken: 1,138,234 SUs
About Stampede Time and Slurm
Some useful information about stampede queue:
Queue Name | Max Runtime | Max Nodes/Procs |
---|---|---|
normal | 24 hrs | 256 / 4000 |
development | 4 hrs | 16 /256 |
large | 24 hrs | 1024 /16000 |
Details can be found at http://www.tacc.utexas.edu/user-services/user-guides/stampede-user-guide#running-slurm-queue
Our total allocation is 416,000 SUs (CPU hours). If all our jobs run on 1000 CPUs, we have a little more than TWO weeks of continuous running in total.
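The two-week figure is easy to verify, assuming 1 SU equals one core-hour:

```python
# Convert the Stampede allocation into wall-clock time
# when running on 1000 cores continuously.
allocation_sus = 416_000          # total allocation, core-hours
cores = 1000

wall_hours = allocation_sus / cores
wall_days = wall_hours / 24
print(f"{wall_hours:.0f} hours, i.e. about {wall_days:.1f} days")
```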
To submit a job to 1000 cpus in normal queue on Stampede for 24 hours, here's an example
#SBATCH -A TG-AST120060
#SBATCH -J bearRun
#SBATCH -n 1000
#SBATCH -c 1
#SBATCH -p normal
#SBATCH -t 24:00:00
The system can decide how many nodes to use, or you can specify the number with the -N option.
The complete slurm script can be downloaded from ticket #273.
Meeting Update 03/04/2013 -- Baowei
- New Revision:
- 1241:70f57b1e4434
- Mainly bug fixes found on blue streak
- Tickets
- Stampede is all set (#273)
- Trac/wiki updates
- History of AstroBEAR (Upload to youtube?)
Meeting Update 02/25/2013 --Baowei
- New revision 1239:7aab6defde61 in Scrambler:
- working bov2jpeg from Jonathan.
- Currently has a problem with my bamboo account. So didn't run tests on bamboo.
- Tickets
- New: #276 (Update the testing suite scripts for HydroWaves IsoHydroWaves IsoMHDWaves MHDWaves on blue hive and blue streak)
- Weekly testing runs
- Results: Debugging
- New Hardware? (wireless mouse, Adapter Cable: Apple Mini DisplayPort)
Meeting Update 02/18/2013 -- Baowei
- Weekly testing suite runs: Debugging
- Ran the 30 testing modules twice on blue hive. Each time, 5 (different) modules failed due to the job queue system — the jobs died silently before running. They should all pass, since it's the same revision as last week. Working on modifying the script to handle this case.
- Users:
- Created Wiki accounts for Andrew.
- Rui asked questions about hypre in AstroBEAR (Jonathan)
- Worked on ticket #273 (Install AstroBEAR on Stampede of TACC). Currently getting an error with the newly installed hdf5 and the qprec type.
Meeting Update 02/11/2013 -- Baowei
- Wiki
- Set yearly backup
- Updated Doxygen
- New Revision
- 1237:82b26a9a1a33 and 1238:604fb418ad9a checked in: just some updates with running scripts for blue streak.
- Weekly tests on blue hive and blue streak all passed. Added a naive weekly update of the testing results to Debugging so everyone can see them; last week's testing status won't be accurate for now.
- Users
- Contacted IO (Jonathan has the email) and Rui of LLE (the student Arijit is trying AstroBEAR now). Let them know about the latest revision in the developing branch.
Meeting Update 02/04/2013 -- Baowei
- Golden Version AstroBEAR & Blue streak
- New publication with AstroBEAR: [AstroBearPublication]
- Tickets:
- AstroBEAR Video Meeting with Erica, Brendan and Will
- Worked on:
- Working on building problems with MHD, following the documentation page on using magnetic fields: Field Advection, clumps with rotation and magnetic field: https://clover.pas.rochester.edu/trac/astrobear/blog/bliu01222013
- Updated documentation page: [ModulesOnAstroBear], [MultiPhysics] and [ClumpObjects]
- Debugged and checked in code. Ticket: #275
Meeting Update 01/28/2013 -- Baowei
- Users
- New users: gave the latest revision of the code to Tony Piro and Christian Ott from Caltech
- Yan asked questions about visit
- Wiki: tried to install AccountManagement plugins but failed
- Following writing your own problem module, worked on build modules: https://clover.pas.rochester.edu/trac/astrobear/blog/bliu01222013
- Working on testing and checking in 2D MUSCL code and installing AstroBEAR on Stampede
Build Problem Module
Following the User Guide: https://clover.pas.rochester.edu/trac/astrobear/wiki/ModulesOnAstroBear , I built two problem modules
- Simple Clump Module (with objects)
- Documentation is very clear and easy to follow. The code is straightforward; it only needs a little background knowledge: ghost zones (parallel programming), gamma7 = 1.0/(gamma-1) (energy-pressure relation), namelists (Fortran), and data files
- code compiled with AstroBEAR
- Need to update "problem.data" from the template to get it to run, filling in "rho=, radius=, velocity=…."
- data files for user to try and run?
- Result:ClumpMovie_0AMR
- Simple Clump Module (without objects)
- Result with 4 nodes: Result:ClumpOld_0AMR
- RTInstability
- Fixed grids: Movie_NoAMR
- 1 level AMR: Movie1LevelAMR
- Clump with Toroidal magnetic field
- Result:rhoBxMovie
- Clump with Rotation
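As background for the gamma7 = 1.0/(gamma-1) factor mentioned in the module notes above: it comes from the ideal-gas relation between pressure and internal energy density, e = p/(gamma-1). A minimal illustration (not AstroBEAR code; the numbers are arbitrary):

```python
# Ideal-gas energy-pressure relation behind the gamma7 factor:
# internal energy density e = p / (gamma - 1) = gamma7 * p.
gamma = 5.0 / 3.0              # monatomic ideal gas
gamma7 = 1.0 / (gamma - 1.0)   # = 1.5 for gamma = 5/3

p = 2.0                        # pressure in arbitrary code units
e_internal = gamma7 * p        # internal energy density

# Total energy density adds the kinetic part: E = e + 0.5 * rho * v**2
rho, v = 1.0, 0.5
E_total = e_internal + 0.5 * rho * v**2
print(gamma7, e_internal, E_total)
```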
Meeting Update 01/21/2013 -- Baowei
- Trac backup & Blue Streak Queue
- Trac backup conflicted with the new NameTag plugin, which is required by Discussion. The backup is back to normal after the plugin was removed, but the forum is down.
- Blue Streak queue system works now?
- MUSCL and sweep schemes in AstroBEAR presentation
- Working on moving from Ranger to Stampede (#273)
Meeting Update 01/14/2013 -- Baowei
- Wiki & Machines
- wiki documentation
- Trac backup failing
- Blue Streak: currently has issues with job submitting and scheduling
- 2D: NaN errors and segmentation faults; working on it…
- working on 1D Euler Solver with MUSCL Scheme (Sod Shock Tube, Lax-Friedrichs / Riemann solver for fluxes):
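For reference, a minimal first-order version of such a solver is sketched below: a finite-volume update of the 1D Euler equations with a (global) Lax-Friedrichs flux, run on the Sod shock tube. This shows only the flux step; the full MUSCL scheme adds slope reconstruction and a half-step prediction on top of it, and the details here (grid size, CFL number) are illustrative assumptions, not the actual AstroBEAR implementation.

```python
import numpy as np

gamma = 1.4
N = 200
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]

# Sod shock tube initial conditions: (rho, v, p) jump at x = 0.5
rho = np.where(x < 0.5, 1.0, 0.125)
v = np.zeros(N)
p = np.where(x < 0.5, 1.0, 0.1)

# Conserved variables U = (rho, rho*v, E), shape (3, N)
U = np.stack([rho, rho * v, p / (gamma - 1) + 0.5 * rho * v**2])

def flux(U):
    """Physical flux of the 1D Euler equations."""
    rho, mom, E = U
    v = mom / rho
    p = (gamma - 1) * (E - 0.5 * rho * v**2)
    return np.stack([mom, mom * v + p, (E + p) * v])

t, t_end, cfl = 0.0, 0.2, 0.8
while t < t_end:
    rho, mom, E = U
    v = mom / rho
    p = (gamma - 1) * (E - 0.5 * rho * v**2)
    a = np.max(np.abs(v) + np.sqrt(gamma * p / rho))   # fastest signal speed
    dt = min(cfl * dx / a, t_end - t)

    F = flux(U)
    # Lax-Friedrichs flux at the interface between cells i and i+1
    Fhat = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * a * (U[:, 1:] - U[:, :-1])
    U[:, 1:-1] -= dt / dx * (Fhat[:, 1:] - Fhat[:, :-1])   # boundary cells held fixed
    t += dt

print("density range:", U[0].min(), U[0].max())
```

The Lax-Friedrichs flux is the most diffusive choice; swapping in an HLL or exact Riemann solver at the interfaces (as the note above mentions) changes only the Fhat line.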
Meeting Update 01/07/2013 -- Baowei
- Christine's visit
- Jan 14th (Monday): office, account, group meeting(?), lunch(?)
- Volunteers?
- Golden Version — Current revision: 1201:72d442594ac2, includes
- updated scripts for routine test suites running on blue hive and blue streak
- MHD convergence tests
- The routine test running of MHDWaves failed on blue streak
- Project runs for Teragrid
- Machines:
- AstroBEAR and required libraries installed on LLE cyclone. Tested with RTInstability by Rui
- checking local machines (bamboo unreachable, clover trac backup failed). will restart if necessary.
- Wiki
- installed Discussion plugin
- Working on MUSCL
Christine's Visit
==============
Attached is an ApJ letter that I've published on my experiment and the astrophysical motivation for it. What I would like to do with AstroBEAR is create the Cataclysmic Variable (CV) detailed in the first paragraph of the introduction. The CV related to this work is considered non-magnetic because the secondary star in the binary system donates mass to the white dwarf (WD) in the orbital plane, (not along field lines to a polar axis).
The localized area of interest is the collision region at which the accreting stream impacts the formed accretion disk around the WD. I've also attached an ApJ paper that did simulations on this (comparing isothermal and adiabatic EOS's) because they detail parameters of the CV system. You will see in my paper the connection of this astrophysical system to my laser experiment, but to re-iterate the "problem I want to solve" is the dynamics of the shocked stream at the accretion disk edge. I have new data, which I showed to your group in July, that hasn't been published yet, but shows perhaps some stagnation and a diverted stream in an oblique shock scenario. This corresponds most likely to some optical thickness in the shock system, so it would be interesting to do the CV simulation looking more closely at the shocks that form in it and how mass moves around the collision region, as a function of scale height of the disk.
I can offer more details with questions/concerns, but my paper in combination with the Armitage and Livio one (sections 1 through 2.3) offer a good overview of the system.
I am currently on a different experiment in Livermore, but I am going to start exploring the AstroBEAR wiki, etc, this week! Thank you for setting up an account for me.
=====================
APJ letter http://www.pas.rochester.edu/~bliu/Christine/ApJL452485p4.pdf
Armitage and Livio http://www.pas.rochester.edu/~bliu/Christine/36557.pdf
Meeting Update 12/18/2012 -- Baowei
- Golden Version & Machines
- Youtube Channel for AstroBEAR
- Worked on
- coding and testing Revision 1174:9b0df1e0242b
- Help install for new machines and new users
- Testing Shule's AblativeRT module
Meeting Update 12/10/2012 -- Baowei
- Golden Version & Blue Streak
- Updated Information about Performance of Blue Streak https://clover.pas.rochester.edu/trac/astrobear/blog/johannjc11132012#comment-5
- Considering the speed difference between Blue Gene/Q and /P, the current AstroBEAR code should be able to run faster than it does now. Testing individual libraries.
- Tickets
- New ticket: #270(standard out "walltime remaning" accuracy)
- Worked on
- Ticket #265
- parallel hdf5
- MUSCL-Hancock scheme
Meeting Update 12/03/2012 - Baowei
- Golden Version & Blue Streak
- AstroBEAR runs a little slower on Blue Gene/P than on blue streak: https://clover.pas.rochester.edu/trac/astrobear/blog/johannjc11132012#comment-4 ;
- Tried to compile AstroBEAR with the fftw3 of IBMmath but haven't succeeded yet.
- Tickets:
- Users:
- IO's chombo files: http://www.pas.rochester.edu/~bliu/Yat_Tien_UCLA/CND_chombo/
- Christine will visit us during the week of Jan 14th
Meeting Update 11/26/2012 - Baowei
- Golden Version & Blue Streak
- AstroBEAR performance on blue streak: https://clover.pas.rochester.edu/trac/astrobear/blog/johannjc11132012#comment-1
- Still have problems with modules GravitationalCascade (#265), IsotropicTurbulence (#266) and SlowMolecularCloudFormation
- Users
- IO will skype in during the meeting
- Will ask Rich for Shaz's local account.
- Start working on MUSCL
Meeting Update 11/19/2012 - Baowei
- Golden Version & Blue Streak
- Tickets
- New: #265 (GravitationalCascade failed on blue streak), #266 (IsotropicTurbulence terminated on blue streak), #267 (Issues with BlueStreak and optimization above O3)
- Start working on #264 (Parallel IO)
- Outside Users
- Yat-Tien made some progress running the code. He will post a blog and call in to the meeting next Monday (11/26)
- Shazrene asked to download the code. Account on local machines for her?
- Download page?
Baowei's Meeting Update -- 11/12/2012
- Golden Version
- checked in 1162:851b4b9a604e with the copyright header — thanks to Ivan
! Copyright (C) 2003-2012 Department of Physics and Astronomy,
! University of Rochester,
! Rochester, NY
!
! global_declarations.f90 is part of AstroBEAR.
!
! AstroBEAR is free software: you can redistribute it and/or modify
! it under the terms of the GNU General Public License as published by
! the Free Software Foundation, either version 3 of the License, or
! (at your option) any later version.
!
! AstroBEAR is distributed in the hope that it will be useful,
! but WITHOUT ANY WARRANTY; without even the implied warranty of
! MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
! GNU General Public License for more details.
!
! You should have received a copy of the GNU General Public License
! along with AstroBEAR. If not, see <http://www.gnu.org/licenses/>.
- Outside users: created wiki and local machine accounts for Scott Lucchini; created a wiki account for Shazrene Mohamed. IO: waiting for the results so we can start posting the beta-version results in a blog. Attended the LLE meeting.
- Blue streak
- Currently has some problems with the queue system. Won't do an update until after Thanksgiving.
- Post-processing with plots: all but one work: https://clover.pas.rochester.edu/trac/astrobear/wiki/u/MyCurentTests
- Working on slow-cell-updating
- Tickets:
- New #264 (Parallel hdf5 IO)
Baowei's Meeting Update 11/05/12
- Golden Version & Blue Streak
- tested and checked in 1156:86ff14bc8662 and 1161:f528b4a4b297 .
- Ivan found an easier way to handle the preprocessor issue on blue streak. The makefile.bluestreak and *.F files are now deprecated in rev 1161:f528b4a4b297, and global_declaration.f90 with an absolute path is deprecated as well. So compiling and running AstroBEAR on blue streak is now just like on Blue Gene/P. (Ticket #245)
- Tried installing mercurial on blue streak
- Copyright year for Golden Version AstroBEAR
- Ivan and Jonathan are running jobs on blue streak.
- Tickets
- New: none
- Closed: #251(wiki error), #262(Big/Little Endian on blue streak)
Meeting Update 10/29/2012 - Baowei
- Golden Version & Blue Streak
- Tested the testing modules on blue streak: bear2fix can extract BE/LE data automatically (Ticket #262), so the lDataFromBlueGene flag and the convertfrombluegene functions are no longer needed.
- Still working on running all tests on blue streak
- Need a big project to run on blue streak
- Tickets
- New: #263(Implementing cylindrical coordinates to reconstruction step)
- Teragrid
- Added Erica to the allocation
Meeting Update 10/22/2012 - Baowei
- Golden Version Status
- New revision 1144:3bc7b231aa1a with memory leak fix in main repo.
- Set a cron job that will notify about new revision.
- Unstable Endian issues with bear2fix when running testing suites on blue streak (#262)
- Tickets:
- New: #261(Download page for Golden Version AstroBEAR), #262 (Big/Little Endian on blue streak)
- Attended Matlab workshop
- Added Eddie to Teragrid Allocation
Meeting Update 10/15/2012 - Baowei
- Golden Version
- checked in revision 1125:2c3e80f15c86, which passed all tests on local machines and bluehive. All modules run on blue streak after installing hypre (#245) and fixing several problems in makefile and Makefile.inc for bgq. Working on post-processing issues, which do not affect the runs.
- Folks can pull from the main repo and keep their astrobear updated to it. If you fix something, please make sure you run buildproblem on local machines before checking in.
- testing results can be viewed with USER_MACHINE. For example "CurrentTests(bliu_grass, width=250px)" with double "and" will show the current testing results I run on grass.
- Testings for the main repo will run weekly from now on.
- Tickets
- Trac updates
- Installed Collaps and BackLinksMenu macros
- Tried: wikipage to pdf plugins but failed
Meeting Update 10/08/2012 - Baowei
- Golden Version Status
Merged with Ivan's revision last Friday and Jonathan's revision on Sunday — so I was wrong about the final merge last week. Hopefully we are close to the final merge… The following table summarizes the testing results from the weekend, with Ivan's code.
Machines | Testing Results |
---|---|
Clover | Goes Very slow. Takes hours. Stop testing on clover |
Alfalfa | All passed |
Bamboo | All passed |
Grass | All passed |
Bluehive | Modules using bear2fix process testing chombos passed. Found an issue running modules which don't use bear2fix (generate plots like BrioWuShockTubes). Updated the testing suite script for these modules. |
Blue Gene/P | No reservation. Didn't run tests on it |
Blue Streak | Found a bunch of bugs in the code and data files but all fixed. Updated the testing suite script for modules with plots testing results. Got a slightly different chombo files when running Bondi which failed the test. Working on it |
With Jonathan's revision, I found a segmentation fault error running Basic Disk. Working on it.
- Trac 1.0
- Installed a new LaTeX plugin. Tried multiple times, but the old plugin won't work with the current version of Trac. The way of writing equations is slightly different (single pair of ). Details can be found in Ticket #254. Old wiki pages with equations may look strange. We will probably go back to the old plugin when the new version comes.
- Reinstalled Mercurial plugin.
Baowei's Meeting Update -- Oct 1 2012
- Golden Version
- Did final merge. Running final tests on local machines and blue hive, bluegene. Will check in devel_branch when all tests pass.
- Total 28 testing modules.
- Things to do: GPL license, configure file, testing pages on wiki for each local machine, weekly testing runs, download page.
- Blue Streak
- Installed AstroBEAR and necessary libraries with IBM XL compilers — makefile modified. (Ticket #245)
- Ran successfully with the Bondi testing module
- Will try to run all testing modules and scaling tests
- Trac
- updated to 1.0.1
- More plugins need to be installed (#254…)
- UCLA visitor
- Reported a compiling error. Solved.
Baowei's Meeting Update 09/24/12
- Golden Version
- Test passed on grass, alfalfa and bamboo with the modules we have
- Still missing three modules, including IonizationTest and Rotating Collapse
- XSEDE Allocation
- Approved, but for far less than requested:
Resource | Requested | Awarded |
---|---|---|
NICS Cray XT5 (Kraken) | 4,000,000 | 1,138,234 |
TACC Sun Constellation Cluster (Ranger) | 4,000,000 | 1,326,290 |
- Update to the local machines
- New OS up
- Will discuss with Rich about updating trac and re-installing Wiki plugins
- Blue Gene Q
- will attend piloting user meeting this afternoon
- Tickets:
- Yat Tien's visit
- Thank everybody's work for the training
- Returned the keys
- Emailed Rich to delete the account
Baowei's Meeting Update -- Sep 17 2012
- Golden Version Status
- Merged with everyone's code, current testing status https://clover.pas.rochester.edu/trac/astrobear/wiki/u/MyCurentTests
- Jonathan modified the test-running script so testing results will be sent to an individual folder instead of a common current-test folder
- Modules needed more work:
Module | Status |
---|---|
BE_stuff | does not pass test; updated wiki page linked to the testing page |
BasicDisk | updated page linked to the testing page |
MomentumConservation | updated page linked to the testing page |
MultiClumps | about 17 mins on 8 cores of alfalfa; updated page linked to the testing page |
SingleClump | does not pass test; updated page linked to the testing page |
SlowMolecularCloudFormation | updated page linked to the testing page |
ThermalInstability | updated page linked to the testing page |
- Makefile.inc files for local machines need to be double-checked
- Missed the testing page on wiki
- Yat Tien's visit
- Followed the training schedule
- Will install astrobear on a UCLA machine so he has a machine to use when he leaves here.
- Request to restart our local workstations
- Start using Ubuntu Linux 12.04 instead of the older version 10.04
- Deadline Friday Sep 21
- Planning to set up a time to restart all of them with Rich present
- New Tickets
- Two new papers from Martin on the publication page:
- Blue Streak (the Q)
- Installed AstroBEAR. Got a data file open/read problem when running — could be related to the file system.
- Teragrid Proposal — No news
- The review starts on Sep 1st, and the allocation should begin on Oct 1st if approved.
Baowei's Meeting Update 09/10/12
- Golden Version Status:
- Progress: https://clover.pas.rochester.edu/trac/astrobear/wiki/GoldenVersion
- Merging with Eddie and Shule's code.
- Schedule for Yat-Tien's visit
Baowei's Meeting Update 09/05/12
- Golden Version Status:
- Progress: https://clover.pas.rochester.edu/trac/astrobear/wiki/GoldenVersion
- Merged Testing Modules from Baowei and Jonathan. All test passed. Checked into the main repo: https://clover.pas.rochester.edu/trac/astrobear/wiki/u/bliu#no1
- Will try to do the first merging-all late this week or early next week
- Tickets
- New Ticket: #246 (update wiki page for data files: https://clover.pas.rochester.edu/trac/astrobear/wiki/DataFileTutorial)
- Reopened Ticket: #240 (Segmentation fault on Kure) — description modified, reassign to Jonathan
- Closed ticket: #244
- astro-sim.org
- Martin sent an email to Steffen Brinkman, the webmaster, as they know each other.
Meeting Update 08/28/2012 - Baowei
- Golden Version Status:
- Tickets:
- New: #243 (heat conduction and resistivity solver), #244 (Segmentation fault with UniformCollapse), #245 (Installing libs and astrobear on Blue Streak—the Q)
- Yat-Tien from UCLA
- Will arrive at 630pm on Sep 12th
- Talk/dinner before training—asked by Yat-Tien (Volunteers needed)
- Prepare training (temporary account on local machines, modules)
- Yat-Tien's Project: http://www.pas.rochester.edu/~bliu/Yat_Tien_UCLA/Yat-Tien_UCLA.pdf
- Worked on: #237 (test modules), #240 (Molecular clouds), #244 (Segmentation fault with UniformCollapse), #245 (Installing libs and astrobear on Blue Streak—the Q)
Baowei's Meeting Update 08/07/12
- Golden Version Status:
- Tickets:
- New tickets: #239, #240
- Closed tickets: #147, #171, #198, #202, #232, #233, #239
- https://clover.pas.rochester.edu/trac/astrobear/query?status=accepted&status=assigned&status=new&status=reopened&group=priority&col=id&col=summary&col=status&col=owner&col=type&col=priority&col=due_date&col=component&col=ttd&col=reporter&order=priority&report=28
- Will be on vacation on Thursday (Aug 9th) and Friday (Aug 10th)
Baowei's Meeting Update 07/31/12
- Modification to the code: Merged everybody's work. A Good Starting Point for our golden version
https://clover.pas.rochester.edu/trac/astrobear/log/Scrambler/?action=stop_on_copy&mode=stop_on_copy&rev=af45abad226a79da4317e74180e992b28a8f7524&stop_rev=b789b34104cb&limit=200
Project Management with Mercurial Branches
On Tuesday, Jonathan, Eddie and I had a little discussion about the project management of our code, especially after we have our golden version. I played with some of the ideas using these Mercurial extensions: branches, tags, graphic log view and transplant. The following summarizes some of my trials.
Introduction to Our new Mercurial tools
- hg branch: create a branch under the same repository and check which branch you are working on. Users can specify which branch to pull the code from
- hg tag: attach a tag to a revision (this creates a new revision). It seems a tag cannot be attached to individual files.
- hg glog or hg view: view the whole developing tree/revision structure
- hg transplant: cherry-picking the code
Schemes of Project Management
According to Karl Fogel's book at http://producingoss.com/ (especially Chapter 7, Packaging, Releasing, and Daily Development; the Release Branches section), I think two branches work better than one, and taking a snapshot of the tree is not a good way to get a stable version. Two branches balance checking in developers' work as soon as possible against the risk of checking in controversial/unstable code.
- Default branch/trunk: for development mainline, every developer can check in his/her code as long as it passes our NEW strong testing suite.
- A Release Candidate Branch: only check in clean and ready code (including bug fixing) which means
- Pass our NEW strong testing suite
- No controversial opinions from other developers
- All developers agree on checking to that release.
- Developers should sync their development branches with the trunk frequently, several times a day according to Karl Fogel.
We can have more branches like branches for release 1.0 and release 2.0. And with Mercurial-transplant, bug-fixing for 1.0 only can be cherry picked from the trunk to 1.0 branch only (Examples shown below).
My original thinking was to create a branch under the same repository for each developer. Then with hg glog or hg view, the development tree of the whole group can easily be seen. One drawback: such a tree could become very complicated, and people could easily get confused about which branch they are working on (see the example for Scheme I).
The second thing I tried was to make two branches under the same repo. This was basically Jonathan's idea, though the check-in procedures for the two branches followed what I described above, which differs from Jonathan's way. This scheme worked OK; the tree got messed up a bit when I made a mistake cherry-picking code (revision 15 in the example for Scheme II).
The third scheme I tried was to keep the two branches in two repos: one for development and one for release (there could be more if we have more release candidates). Here the two branches had much cleaner, separate tree structures, and cherry-picking was easier. I did make a mistake cherry-picking code when I tried it, but it didn't show up in the development tree (see the example for Scheme III).
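A sketch of the two-repo layout (the repository URLs and revision number are made up for illustration):

```
hg clone https://clover/astrobear-dev      # trunk: all day-to-day development
hg clone https://clover/astrobear-1.0.x    # release repo: receives only approved fixes
cd astrobear-1.0.x
hg transplant -s ../astrobear-dev 23       # cherry-pick one approved fix from the trunk repo
hg push                                    # publish the release repo
```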
The latter two schemes are both used in open-source software project management, according to Karl Fogel, and all three schemes can be realized with Mercurial.
Schemes | Setup | Pros | Cons |
---|---|---|---|
I | 1. Branches under the same repository 2. Default branch/trunk for the development mainline 3. Individual branches for each developer 4. Release branch for the stable version | With a single hg command, the development structure/stage of the whole group can easily be seen, so it is easy to check each developer's latest revision and make sure his/her branch doesn't lag too far behind | 1. Too many branches to handle in one repository, so it is very easy to update the wrong branch 2. Developers could easily pull the whole repo instead of just their own branch |
II | 1. Branches under the same repository 2. Default branch/trunk for the development mainline 3. Release branch for the stable version | 1. Fairly simple revision structure 2. Clear view of the development line and the stable version | Wrong cherry-picking to the release branch can mess up the revision structure of the whole repo a bit |
III | Release branch and default branch live in different repos | Very clean revision structure for the trunk and especially for the release branch | No view of the whole revision structure in one place |
- Scheme I
`hg glog` summary for Scheme I (newest first):

- r14 (tip): Shule merge his branch to trunk (merge of r12 and r13)
- r13 (branch shule): shule added feature 2
- r12: Shule merged his branch with trunk/developing branch (merge of r7 and r11)
- r11 (branch shule): created module 2 by shule
- r10 (branch 1.0.x): Added tag RELEASE_1_0_X for changeset f37e3dfa00ff
- r9 (branch 1.0.x, tag RELEASE_1_0_X): Created branch for release 1.0
- r8 (branch shule): Shule merged with the developing/default branch (merge of r3 and r7)
- r7: Eddie merged feature 1 to the developing/default branch (merge of r4 and r6)
- r6 (branch eddie): Eddie added feature1
- r5 (branch eddie): Eddie merged with the developing/default branch (merge of r2 and r4)
- r4: Merged with Shule's branch (merge of r0 and r3)
- r3 (branch shule): Shule's 1st modification to Module 1
- r2 (branch eddie): Created Branch for Eddie
- r1 (branch shule): Created Branch for Shule
- r0: Initial commit of TAstroBEAR
- Scheme II
`hg glog` summary for Scheme II (newest first):

- r19 (branch 1.0.X, tip): Eddie fixed a bug in module 1 (merge of r16 and r17)
- r18: Eddie merged his branch with bugfixing in module 1 with trunk (merge of r17 and r13)
- r17: Eddie fixed a bug in module 1
- r16 (branch 1.0.X): Shule fixed a bug in feature 1 (merge of r15 and r13)
- r15 (branch 1.0.X): backout the mis cherrypicking from the default branch
- r14 (branch 1.0.X): Eddie modified the feature1.f90
- r13: Shule fixed a bug in feature 1
- r12 (branch 1.0.X): Eddie modified feature 1
- r11: Eddie merged his branch with the main branch (merge of r10 and r9)
- r10: Eddie modified the feature1.f90
- r9: merged after putting the tag RELEASE_2_0_X (merge of r8 and r7)
- r8: Added tag module2.f90, RELEASE_2_0_X for changeset 46982b65c965
- r7: Shule created module 2
- r6 (branch 1.0.X): Added tag RELEASE_1_0_X for changeset 115372a64138
- r5 (branch 1.0.X, tag RELEASE_1_0_X): created a branch for 1.0.X
- r4 (tags RELEASE_2_0_X, module2.f90): Shule merged his branch with the trunk (merge of r3 and r2)
- r3: Shule's 2nd modification to Module 1
- r2: Eddie added feature 1
- r1: Shule's 1st modification to module1.f90
- r0: Initial commit of TAstroBEAR
- Scheme III
- trunk
`hg glog` summary for the trunk (newest first):

- r11 (tip): Eddie fixed 1st bug in module 2
- r10: Eddie fixed 1st bug in module 1
- r9: Added tag RELEASE_2_0_X for changeset 0a89508c8ca2
- r8 (tag RELEASE_2_0_X): Shule created module 2
- r7: Shule merged his branch with the trunk (merge of r6 and r5)
- r6: Shule made 2nd modification to module 1
- r5: Eddie fixed the 1st bug in feature 1
- r4: Eddie created feature 2
- r3: Added tag RELEASE_1_0_X for changeset d2bb8f24ef20
- r2 (tag RELEASE_1_0_X): Shule created feature 1
- r1: Eddie made the 1st modification to module 1
- r0: Initial commit of scheme3
- Release Branch 1.0.x
`hg glog` summary for the 1.0.x release branch (newest first):

- r6 (tip): Eddie fixed 1st bug in module 1
- r5: Shule made 2nd modification to module 1
- r4: Eddie fixed the 1st bug in feature 1
- r3: Added tag RELEASE_1_0_X for changeset d2bb8f24ef20
- r2 (tag RELEASE_1_0_X): Shule created feature 1
- r1: Eddie made the 1st modification to module 1
- r0: Initial commit of scheme3
Baowei's Meeting Update 07/24/12
- Golden Version AstroBEAR
- Tickets:
- Closed: #226
- New: #229, #230, #231
- Current Status of Tickets: https://clover.pas.rochester.edu/trac/astrobear/query?status=accepted&status=assigned&status=new&status=reopened&group=priority&col=id&col=summary&col=status&col=owner&col=type&col=priority&col=due_date&col=component&col=ttd&col=reporter&order=priority&report=28
New Revision 951:b789b34104cb in the official repository
I just pushed 951:b789b34104cb to the official repository. This revision includes the bug fixes for Blue Gene related to tickets #207 and #226.
An update list can be found at
https://clover.pas.rochester.edu/trac/astrobear/log/Scrambler/?action=stop_on_copy&mode=stop_on_copy&rev=b789b34104cbef9b3c0d260dc86175ee7b9042eb&stop_rev=950%3A7afdab9ebdfd&limit=200
Details of modification to the code can be found at:
https://clover.pas.rochester.edu/trac/astrobear/changeset?reponame=Scrambler&new=b789b34104cbef9b3c0d260dc86175ee7b9042eb%40&old=7afdab9ebdfdce698c5c6ce0283054919a5ce314%40
Test results can be found at:
https://clover.pas.rochester.edu/trac/astrobear/wiki/u/bliu#no1
Meeting Update 07/17/2012 - Baowei
- Golden Version AstroBEAR:
- tickets:
Meeting Update 07/10/2012 - Baowei
- Tickets:
- Added due date feature to tickets
- Created a new Milestone:
https://clover.pas.rochester.edu/trac/astrobear/milestone/Build%20Golden%20Version%20of%20AstroBEAR%202.0
- Current ticket status:
- Worked on
- Teragrid Proposal:
Baowei's Meeting Update 07/03/12
- Tickets:
- New Tickets: #228
- Closed Tickets: #223
- Current Open Tickets: https://clover.pas.rochester.edu/trac/astrobear/query?status=accepted&status=assigned&status=new&status=reopened&group=priority&col=id&col=summary&col=status&col=owner&col=type&col=priority&col=component&col=ttd&col=reporter&order=priority&report=28
- Worked on
- Teragrid Proposal: http://www.pas.rochester.edu/~bliu/Proposals/XSEDE_proposal.pdf
- Kraken Scaling Tests: #202
- #226: Reserved 256 cores on blue gene for Thu and Fri
- Currently 103rd on the Top 500!
http://www.pas.rochester.edu/~bliu/BlueGeneQ-delivery/P1010020.JPG
Baowei's Meeting Update 06/26/12
- Tickets:
- Modification to the main repo:
https://clover.pas.rochester.edu/trac/astrobear/log/Scrambler/?action=stop_on_copy&mode=stop_on_copy&rev=7afdab9ebdfdce698c5c6ce0283054919a5ce314&stop_rev=948%3A9cb35866cda0&limit=200
- AstroBEAR running status:
https://clover.pas.rochester.edu/trac/astrobear/wiki/ProjectStatistics
- Will work on:
- Proposal for Teragrid allocation
- More scaling test on Kraken
Baowei's Meeting Update 06/19/12
- Tickets:
- New Tickets: #221, #222
- Tickets Fixed/Closed: #214, #217, #219, #221
- Tickets waiting for verification of being fixed: #218, #222
- Tickets changed to lower priority: #215
- Modification to the official repository:
https://clover.pas.rochester.edu/trac/astrobear/log/Scrambler/?action=stop_on_copy&mode=stop_on_copy&rev=9cb35866cda06e8f69cce5ef787af702df0f1db9&stop_rev=946%3Aff6bdbea174a&limit=200
- AstroBEAR running status:
https://clover.pas.rochester.edu/trac/astrobear/wiki/ProjectStatistics
New Revision 948:9cb35866cda0 in the main repository
I just pushed revision 948:9cb35866cda0 to the main repository. It contains a modification to restarts in which the master sends a message to each worker when it is ready to receive and process the data. Without this, the master can get overrun with messages on some platforms like Kraken.
An update list can be found at https://clover.pas.rochester.edu/trac/astrobear/log/Scrambler/?action=stop_on_copy&mode=stop_on_copy&rev=9cb35866cda06e8f69cce5ef787af702df0f1db9&stop_rev=946%3Aff6bdbea174a&limit=200
Details of modification to the code can be found at: https://clover.pas.rochester.edu/trac/astrobear/changeset?old_path=%2FScrambler&old=9cb35866cda06e8f69cce5ef787af702df0f1db9&new_path=%2FScrambler&new=946%3Aff6bdbea174a
Test results can be found at:
https://clover.pas.rochester.edu/trac/astrobear/wiki/u/bliu#no1
Baowei's Meeting Update 06/06/12
- Tickets:
- Modification to the official repository: https://clover.pas.rochester.edu/trac/astrobear/log/Scrambler/?action=stop_on_copy&mode=stop_on_copy&rev=ff6bdbea174a9cd103a295811387c04bab5b50c2&stop_rev=936%3Aaac36d619caa&limit=200
- Working on Teragrid allocation application information
- We need thoughts about how to build a version of AstroBEAR that works stably for most users and on most machines, which is our goal.
Currently our new official revisions mostly come from bug fixes for tickets from Martin and Jonathan, while other group members may have their own revisions working well for themselves. What would be a good way to merge these revisions together to gain more features and better performance without introducing bugs? And how do we build a stable official version of AstroBEAR? I'm preparing a document to collect information about the revisions/modules and machines each group member is working with.
https://clover.pas.rochester.edu/trac/astrobear/wiki/ProjectStatistics
- Photos/Videos for Bluegene/Q gallery:
Bluegene/Q is coming in late June. There will be a gallery with a big TV monitor showing research photos/videos to visitors. The university communication team is working on it and we don't have the details yet, but I think we should start thinking about how to prepare for this gallery show.
New Revision 946:ff6bdbea174a in the official repository
New Revision 946:ff6bdbea174a passed the test and was just pushed into the official repository. This revision includes fixes for the restart problems with self-gravity.
An update list can be found at
https://clover.pas.rochester.edu/trac/astrobear/log/Scrambler/?action=stop_on_copy&mode=stop_on_copy&rev=ff6bdbea174a9cd103a295811387c04bab5b50c2&stop_rev=936%3Aaac36d619caa&limit=200
Details of modification to the code can be found at:
https://clover.pas.rochester.edu/trac/astrobear/changeset?old_path=%2FScrambler&old=ff6bdbea174a9cd103a295811387c04bab5b50c2&new_path=%2FScrambler&new=936%3Aaac36d619caaea174a9cd103a295811387c04bab5b50c2
Test results can be found at:
https://clover.pas.rochester.edu/trac/astrobear/wiki/u/bliu#no1
Baowei's Meeting Update 05/29/12
- Tickets:
- Modifications to the official scrambler last week:
- Created a folder /cloverdata/trac/astrobear/doc/talks/ for talks. Folks who gave talks can upload their talks to the folder or just send the file/link to me.
New Revision 936:aac36d619caa checked in
Revision 936 was just pushed into the official repository on May 29th, 2012. It includes several fixes for errors in revision 919. A modification list can be found at: https://clover.pas.rochester.edu/trac/astrobear/log/Scrambler/?action=stop_on_copy&mode=stop_on_copy&rev=aac36d619caacf1eda6eb785046514dcc8c5e87c&stop_rev=916%3A47468f693d6f&limit=200
Update details can be found at: https://clover.pas.rochester.edu/trac/astrobear/changeset?old_path=%2FScrambler&old=aac36d619caacf1eda6eb785046514dcc8c5e87c&new_path=%2FScrambler&new=916%3A47468f693d6f
Meeting Update 05/15/2012 - Baowei
- Worked on Ticket #192.
https://clover.pas.rochester.edu/trac/astrobear/ticket/192
- Working on scaling test on Ranger Ticket #193.
Baowei's Meeting Update 05/08/12
Baowei's Meeting Update 04/24/12
- Added a new global flag, lUseOriginalNewSubGrids, to choose between the old and new subgrid-generating algorithms; testing and checking in the code.
- Work on running AstroBEAR on Ranger's normal queue:
https://clover.pas.rochester.edu/trac/astrobear/ticket/188
https://clover.pas.rochester.edu/trac/astrobear/ticket/197
Baowei's Meeting Update 04/10/12
Ticket 185: new grid-generating algorithm
I used the test suites to test the performance of the new algorithm. The new algorithm outperforms the old one for most modules. Results can be found at: https://clover.pas.rochester.edu/trac/astrobear/ticket/185.
Ticket 188: Install AstroBEAR on Ranger
Ranger's standard environment is too old for the AstroBEAR code. I compiled AstroBEAR on Ranger with a newer testing environment but ran into trouble submitting jobs with it. Details can be found at: https://clover.pas.rochester.edu/trac/astrobear/ticket/188
The Afrank Queue on Bluehive
This might not be a big issue for the time being. When I ran a benchmark on Bluehive, I found that the afrank queue used Ethernet instead of Infiniband, which is not how it is supposed to be configured, so a lot of time was wasted waiting on communications when running with multiple nodes. Russell is looking into the issue.
Meeting Update 04/03/2012 - Baowei
1. Ticket #169 links to papers using Astrobear on the wiki
I created the wiki page at:
https://clover.pas.rochester.edu/trac/astrobear/wiki/AstroBearPublication
The ticket was closed, but anyone who finds a new paper can send it to me or edit the page directly.
2. Ticket # 185 Compare new grid-generating algorithm and the old one using AstroBEAR modules
Results can be found at the page: https://clover.pas.rochester.edu/trac/astrobear/ticket/185
The new algorithm works as expected and is faster than the general case of the old one.
Baowei's Meeting Update for Mar 27 2012
1. Tickets:
All active tickets were processed, either assigned to new owners or closed. Details can be found at https://clover.pas.rochester.edu/trac/astrobear/blog/bliu03252012.
I'm trying out Trac tickets as a way of managing my projects.
2. Optimization: New Grid-Generating Algorithm
I was working on comparing the new grid-generating algorithm with the old one using real AstroBEAR modules, but ran into bug fixing such as ticket #184: https://clover.pas.rochester.edu/trac/astrobear/ticket/184. The preliminary results were promising: so far the time used by the new algorithm was comparable with the best case of the old one. Will work on pictures…
Tickets
Jonathan and I went through all the active tickets we have. Several tickets were closed, and most of the still-active ones were assigned to new owners.
Here's a summary of what we have
Owners | Tickets |
---|---|
johannjc | #182 Spectra processing objects #179 Current test suite does not test Isothermal solver |
shuleli | #126 strange field behavior on amr edges with uniform gravity #121 3D, AMR, large sims abort possibly due to memory problem #151 Thermal Diffusion #152 Implementing Magnetic Resistivity #153 Implement Viscosity |
bliu | #183 Optimization for Grid Generation Algorithm #173 Interpolation options can trigger protections #71 Adaptive message block sizes #174 Point gravity and outflow properties stored in chombo #176 Adding additional tests #127 Oscillations in PPM MHD #150 Implementing Self Gravity in Cylindrical Coordinates #154 Sink Particles in 2D and Cylindrical #179 Current test suite does not test Isothermal solver #169 We should post links to papers using Astrobear on the wiki |
ehansen | #147 porting over 'NEQCooling' #171 If the initial conditions trigger protection we should stop the run and print an appropriate message |
erica | |
No Owner | #82 Create suppression file for valgrind's I/O errors #87 Improve parallel performance on Chombo HDF5 writes #92 Incorporate MPI_PACK_SIZE() into packing algorithm #155 Roe Solver |
A more detailed report can be found here https://clover.pas.rochester.edu/trac/astrobear/report/8
Meeting Update 03/20/2012 - Baowei
New Algorithm for Patch Refinement
The new algorithm is good at refining line-shaped areas (https://clover.pas.rochester.edu/trac/astrobear/blog/bliu03112012) but not ring-shaped areas (https://clover.pas.rochester.edu/trac/astrobear/blog/bliu03112012#comment-5), where the old algorithm probably works better.
Here's the result for the 2D search (checking splitting points along both x and y):
![]() | ![]() |
Now working on a new recursive 3D algorithm.
Recursive Inflection Algorithm for Patch Refinement
This new(er) algorithm combines the old algorithm and the splitting-cost-check algorithm, and so gets the good parts of both. The following are results for some ErrFlag patterns:
![]() | ![]() |
![]() | ![]() |
![]() | ![]() |
Meeting Update 03/13/2012 - Baowei
Wiki Page for Ticket Assignment Procedure
https://clover.pas.rochester.edu/trac/astrobear/wiki/TicketAssignmentPage
Current Active Tickets
Total Active Tickets | 18 (including the two just closed) |
---|---|
astrobear | 15 |
bear2fix | 1 |
wiki | 1 |
Over 6 weeks | 4 |
Over 4 weeks | 8 |
Over 2 weeks | 10 |
Testing Results for new patch refinement algorithm
https://clover.pas.rochester.edu/trac/astrobear/blog/bliu03112012
Optimization: New Algorithm for Patch Refinement
The testing results of the old/current algorithm for refinement patches, which includes a bug, can be found here:
https://clover.pas.rochester.edu/trac/astrobear/blog/bliu03052012
The new algorithm calculates the splitting cost at each position (along one direction for the time being) and finds the optimal (lowest-cost) position. The following shows the testing results of the new algorithm; they clearly show that the new algorithm fixes the bug in the old one.
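The idea can be sketched in a few lines (a toy illustration, not the AstroBEAR implementation; `flags` stands for a 2D boolean array of error-flagged cells, and the cost of a candidate split is taken here to be the combined bounding-box area of the two resulting patches):

```python
import numpy as np

def bbox_area(flags):
    """Area of the tight bounding box around flagged cells (0 if none)."""
    rows = np.any(flags, axis=1)
    cols = np.any(flags, axis=0)
    if not rows.any():
        return 0
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return (r1 - r0 + 1) * (c1 - c0 + 1)

def best_split(flags):
    """Try every split position along one direction and return the one
    whose two sub-patches have the smallest combined bounding-box area."""
    costs = [(bbox_area(flags[:i]) + bbox_area(flags[i:]), i)
             for i in range(1, flags.shape[0])]
    return min(costs)  # (cost, split_row)

# A diagonal flag pattern: splitting in the middle halves the wasted area.
diag = np.eye(8, dtype=bool)
print(best_split(diag), "vs unsplit cost", bbox_area(diag))
```

For the 8x8 diagonal pattern the unsplit bounding box covers all 64 cells, while the best split (at the midpoint) covers only two 4x4 boxes, which is the behavior described above for the diagonal-patch tests.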
1. 16X16 Diagonal patch
![]() | ![]() |
2. 32X32 Diagonal patch
![]() |
3. Random patch 1
![]() | ![]() |
4. Random patch 2
![]() | ![]() |
5. Random patch 3
![]() | ![]() |
6. Random patch 4
![]() | ![]() |
Optimization: Running Time VS Desired Filling Ratio for Refinement Area
Jonathan and I were trying to do some optimization on the refinement patches, since the smaller the patches are, the more resources (memory and computing time) they need:
https://clover.pas.rochester.edu/trac/astrobear/blog/bliu02272012
The current algorithm works as follows: if the filling ratio is less than the desired ratio, it cuts the refinement patch into smaller ones, according to the inflections of ErrFlags, until the filling ratio exceeds the desired ratio.
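That loop can be sketched as follows (a toy illustration, not the AstroBEAR code: it splits at the midpoint rather than at ErrFlag inflections, purely to show the recursion; `flags` is again a 2D boolean array of flagged cells):

```python
import numpy as np

def filling_ratio(flags):
    """Fraction of cells in the patch that are actually flagged."""
    return flags.sum() / flags.size

def refine(flags, desired):
    """Recursively halve a patch until each piece's filling ratio reaches
    the desired value (or the patch can't be split any further).
    Returns the list of accepted patch shapes."""
    if filling_ratio(flags) >= desired or flags.shape[0] < 2:
        return [flags.shape]                       # accept this patch
    half = flags.shape[0] // 2                     # midpoint split (toy choice)
    return refine(flags[:half], desired) + refine(flags[half:], desired)

# A sparse diagonal patch keeps getting cut until the pieces are thin.
print(refine(np.eye(4, dtype=bool), 0.5))
```

Note how a low desired ratio accepts the big patch immediately, while a high desired ratio drives the recursion toward many small patches, which is exactly the running-time trade-off shown in the figures below.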
In Figure 1 the filling ratio of the patch (big box) is 40%. When the desired filling ratio is larger than 40%, AMR goes to smaller patches; we can see in Figure 2 that the running time increases once the desired filling ratio goes beyond 40%.
Figure 1 |
---|
![]() |
Figure 2 |
---|
![]() |
For the refinement patch in Figure 3, however, the current algorithm couldn't find smaller patches even when the filling ratio was less than the desired one.
Figure 3 |
---|
![]() |
Figure 4 |
---|
![]() |
Big Ratio of Data Memory Over Data File Size for AMR
The following summarizes our understanding of the large ratio of data memory to data file size for AMR, inspired by the results from Jonathan's memory-checking tools; a typical value for the ratio could be 30–80.
With AMR, extra ghost data are needed to do the interpolation, and the ghost data can be large when the refined patches are small.
Take a 3D problem with two-step updates for example. For a cubic patch of side m cells,

Data memory / data file size = 2 × (m + 16)³ / m³,

where 2 comes from the copy we save for later restart and 16 comes from the ghost data.

So when m = 8, we have

2 × ((8 + 16) / 8)³ = 2 × 3³ = 54,

i.e. the data memory is 54 times the size of the data file.
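As a quick numeric check of this estimate (a sketch, not AstroBEAR code; it assumes the memory of a cubic patch of side m cells scales as 2 × ((m + 16)/m)³, with the 2 from the restart copy and the 16 from the ghost data, as described above):

```python
def memory_to_file_ratio(m, ghost=16, copies=2):
    """Estimated (data memory) / (data file size) for a cubic AMR patch
    of side m cells: ghost cells pad each dimension, and an extra copy
    of the data is kept for restarts."""
    return copies * ((m + ghost) / m) ** 3

# The ratio grows quickly as patches shrink.
for m in (8, 16, 32):
    print(f"m = {m:2d}: ratio = {memory_to_file_ratio(m):.2f}")
```

For m = 8 this gives the ratio of 54 quoted above, while for m = 32 it drops below 7, which illustrates why many small patches inflate the memory footprint.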
The smaller the patches on the finer AMR levels, the bigger the ratio. So Figure 1 gives a much smaller ratio than Figure 2, even though the total patch sizes are the same. How these AMR patches are distributed depends on the specific problem and calculation.
Figure 1 |
---|
![]() |
Figure 2 |
---|
![]() |
New Development Procedure for AstroBEAR code
I updated the development procedure page according to Jonathan's suggestion:
https://clover.pas.rochester.edu/trac/astrobear/wiki/DevelopmentProcedure
As Adam asked, we will have two people (Baowei and Eddie) in charge of testing the code and checking test-passed code in to the official repository. So whenever you have code you want to check in, just notify me or Eddie to run the tests. If the tests pass, we will upload the results to the test repository and ask you and everyone else to verify; after that, we will push the code to the official repository. If the tests fail, we will point you to the reference and simulation images, as well as the information needed to reproduce the failed test, and leave it to you to determine why the test failed and to fix any bugs.
Cloud based shared file space
Jonathan and I are piloting a shared file system called Box.net. It currently has a 2 GB single-file limit, so it's probably better for documents than for data files. It can sync documents on your computer while keeping historical versions, and I find it convenient when two or more people collaborate on the same documents. If you are interested or have better ideas for using it, please let me know.
Computing Resources
Quotations for New Machine (https://clover.pas.rochester.edu/trac/astrobear/blog/johannjc01182012)
ASA: 2 Xeon 2.4 GHz quad-core processors, 24 GB memory, 16 TB hard disk https://www.pas.rochester.edu/~bliu/ComputingResources/ASA_Computers.pdf
Aberdeen: 2 Xeon 2.4 GHz quad-core processors, 24 GB memory, 16 TB hard disk https://www.pas.rochester.edu/~bliu/ComputingResources/Aberdeen.pdf
Pogo: 1 Xeon 1.6 GHz quad-core processor, 3 GB memory, 14.5 TB hard disk https://www.pas.rochester.edu/~bliu/ComputingResources/Pogo_linux.pdf
Current Load of my Teragrid allocation https://www.pas.rochester.edu/~bliu/ComputingResources/Teragrid_Load.png
AstroBEAR Virtual Memory
AstroBEAR uses a huge amount of virtual memory (compared with its data and text memory) when running with multiple processors:
One Processor:
http://www.pas.rochester.edu/~bliu/Jan_24_2012/AstroBEAR/bear_1n1p1t.png
Four Processors: http://www.pas.rochester.edu/~bliu/Jan_24_2012/AstroBEAR/bear_2n8p4t.png
To understand the problem, I tried a very simple Hello World program. Here are the results from TotalView:
One Processor: http://www.pas.rochester.edu/~bliu/Jan_24_2012/HelloWorld/1n1p1t.png
Four Processors: http://www.pas.rochester.edu/~bliu/Jan_24_2012/HelloWorld/1n4p4t.png
It's fair to say that the large virtual memory usage is not related to the AstroBEAR code; it's more related to OpenMPI and the system. I saw online resources arguing that virtual memory includes memory for shared libraries, which depends on the other processes running, and that makes sense to me, especially since I ran the Hello World program with the same setup at different times and found it using different amounts of virtual memory:
http://www.pas.rochester.edu/~bliu/Jan_24_2012/HelloWorld/1n1p1t_2ndRun.png http://www.pas.rochester.edu/~bliu/Jan_24_2012/HelloWorld/1n1p1t_3run.png
I'm reading more on virtual memory and shared libraries.