Posts by author bliu

HEDLA Jet meeting 12/15/2020 -- Baowei

1. setup and parameters

Box: 60x60 mm
radius_wire = 0.25 mm
distance_between_centers = 7.5 mm
MagField_direction = 0, 1, 0

WindMaterial = 27
rhoWind = 1e18 1/cc
velWind = 6e1 km/s
BWind = 1 T ! varies, see table
TempWind = 12 eV or 1.39e5 K

dx= 15.625e-4 cm
ratio_sizeWire_dx= 32
runTime: 426 ns, or ~0.4 domain crossing times, or 3.4 wire-distance crossing times
imomentumProtect=1
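As a quick sanity check, the quoted run time can be compared with the crossing times implied by the box size, wire spacing, and wind speed listed above (a sketch; values are copied from the parameter list):

```python
# Sanity check of the quoted run time against the crossing times.
box = 60e-3          # m   (60x60 mm box)
d_wire = 7.5e-3      # m   (distance_between_centers)
v_wind = 60e3        # m/s (velWind = 6e1 km/s)
run_time = 426e-9    # s   (426 ns)

t_domain = box / v_wind      # one domain crossing time (1 us)
t_wire = d_wire / v_wind     # one wire-distance crossing time (0.125 us)

print(round(run_time / t_domain, 2))  # ~0.43 domain crossings
print(round(run_time / t_wire, 2))    # ~3.41 wire-distance crossings
```

Both ratios reproduce the "~0.4 domain crossing" and "3.4 wire distance crossing" figures quoted above.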

Boundary & Clump for wires

  lBoundary=.true.  ! Use a boundary (instead of a high density barrier)
  DiffusionFactor = 1 !Factor to multiply velocity and finest level dx by for determining the diff_alpha2 parameter...  Shouldn't be larger than 1....
  MagneticDiffusionLength = 0.3 !cm
  MagneticDiffusionLengthWire = 0.0 !cm

  lBoundary=.false.  ! Use a high-density barrier (clump) instead of a boundary
  DiffusionFactor = 1 !Factor to multiply velocity and finest level dx by for determining the diff_alpha2 parameter...  Shouldn't be larger than 1....
  MagneticDiffusionLength = 0.3 !cm
  MagneticDiffusionLengthWire = 0.0 !cm

2. Results: Click for movies

runs 298 ns 426 ns
setup 2, boundary; ;
setup 2, clump; ;

Note: each picture shows lineouts at the three locations shown here

Each plot shows

  • ram + thermal pressure
  • magnetic pressure
  • total pressure
  • density
  • magnetic field
  • velocity (in x)
  • Temperature

And the values are scaled as follows

Density 1e17 cm^-3
Pressure Mbar
B Tesla
T eV
v 10 km/s
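The eV/Kelvin pair quoted for the wind temperature (12 eV or 1.39e5 K) follows from the standard conversion T[K] = T[eV] * e / kB; a minimal check:

```python
# Conversion behind "TempWind = 12 eV or 1.39e5 K".
e_charge = 1.602176634e-19   # J per eV (elementary charge)
kB = 1.380649e-23            # J/K (Boltzmann constant)

def eV_to_K(T_eV):
    """Temperature in eV -> Kelvin: T[K] = T[eV] * e / kB (~11604.5 K/eV)."""
    return T_eV * e_charge / kB

print(f"{eV_to_K(12):.3g}")  # -> 1.39e+05
```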


Note: the middle panel time range varies!

runs 298 ns 426 ns
setup 2, line-outs, boundary;;
setup 2, line-outs clump; ;


More movies

boundary run rhoScaled;Zoomed rhoScaled;Temp; mach;magPressure;
clump rhoScaled;Zoomed rhoScaled;Temp; mach;magPressure;

HEDLA Jet meeting 11/20/2020

summary
frame12; setup 2 By=1 boundary-run;setup 2 By=1 clump-run;

1. Setup 1

setup1
Box: 160x160 mm
radius_wire = 2 mm
distance_between_centers = 9 mm
MagField_direction = 0, 0, -1

rhoWire = 2.86e19 1/cc
tempWire = 1.39e3 K

WindMaterial = 27
rhoWind = 2.86e+17 1/cc
velWind = 6e1 km/s
BWind ! varies see table
TempWind = 12 eV or 1.39e5 K

rhoAmb = 2.86e+17 1/cc
tempAmb = 1.39e5 K
velAmb = 6e1 km/s

dx: 0.3125mm
runTime: 4.92 µs, or 1.8 domain crossing times, or 32.8 wire-distance crossing times
imomentumProtect=1
Runs Results diffusion parameter
hydro, no cooling rho;Temp;mach; 2
hydro, Al cooling rho;Temp;Mach; 2
Bz=0.1T, beta=138, no cooling rho; Temp; Mach; mag pressure; 2
Bz=0.1T, beta=138, Al cooling rho; Temp; Mach; mag pressure; 2
Bz=1T, beta=1.38, no cooling rho; Temp; Mach; mag pressure; 1
Bz=1T, beta=1.38, Al cooling rho; Temp; Mach; mag pressure; 1
Bz=5T, beta=0.055, no cooling rho; Temp; Mach; mag pressure; 1
Bz=5T, beta=0.055, Al cooling rho; Temp; Mach; mag pressure; 1
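The beta values in the table can be reproduced from the wind parameters, assuming beta = n·kB·T / (B²/2µ0) with the quoted wind density and temperature (a sketch; whether n counts one or both species is my assumption, chosen because it matches the table):

```python
# Reproduce the quoted plasma-beta values from the wind parameters.
import math

mu0 = 4e-7 * math.pi           # vacuum permeability, SI
e_charge = 1.602176634e-19     # J per eV

def plasma_beta(n_cc, T_eV, B_tesla):
    """beta = thermal pressure / magnetic pressure."""
    p_th = n_cc * 1e6 * T_eV * e_charge    # n in cm^-3 -> m^-3
    p_mag = B_tesla**2 / (2 * mu0)
    return p_th / p_mag

for B in (0.1, 1.0, 5.0):
    print(B, round(plasma_beta(2.86e17, 12.0, B), 3))
# 0.1 T -> ~138, 1 T -> ~1.38, 5 T -> ~0.055, matching the table
```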


2. Setup 2

setup2
Box: 20x20 mm
radius_wire = 0.25 mm
distance_between_centers = 7.5 mm
MagField_direction = 0, -1, 0

rhoWire = 2.86e19 1/cc

WindMaterial = 27
rhoWind = 2.86E+17 1/cc
velWind = 6e1 km/s
BWind !varies see table
TempWind = 12 eV or 1.39e5 K

dx: 0.03125mm
runTime: ~90 ns (0.924 µs), or ~0.4 (3.6) domain crossing times, or ~1 (9.6) wire-distance crossing times
imomentumProtect=1


Runs Results diffusion parameter
hydro, no cooling rho;Temp;mach; 2
hydro, Al cooling rho;Temp;Mach; 2
By=0.1T, beta=138, no cooling rho; Temp; Mach; mag pressure; 1
By=0.1T, beta=138, Al cooling rho; Temp; Mach; mag pressure; 1
By=1T, beta=1.38, no cooling rho; Temp; Mach; mag pressure; 1
By=1T, beta=1.38, Al cooling rho; Temp; Mach; mag pressure; 1
By=5T, beta=0.055, no cooling rho; Temp; Mach; mag pressure; 1
By=5T, beta=0.055, Al cooling rho; Temp; Mach; mag pressure; 1


3. Setup 3

setup3
Box: 20x20 mm
radius_wire = 0.25 mm
distance_between_centers = 3.0 mm
MagField_direction = 0, 0, -1

rhoWire = 2.86e19 1/cc

WindMaterial = 27
rhoWind = 2.86E+17 1/cc
velWind = 6e6 cm/s
BWind ! varies see table
TempWind = 12 eV or 1.39e5 K

rhoAmb = 2.86e+17 1/cc
tempAmb = 1.39e5 K
velAmb = 6e1 km/s

dx: 0.3125mm
runTime: 0.924 µs, or 3.6 domain crossing times, or 9.6 wire-distance crossing times
imomentumProtect=1


Runs Results diffusion parameter
hydro, no cooling rho;Temp;mach; 1
hydro, Al cooling rho;Temp;Mach; 1
Bz=0.1T, beta=138, no cooling rho; Temp; Mach; mag pressure; 1
Bz=0.1T, beta=138, Al cooling rho; Temp; Mach; mag pressure; 1
Bz=1T, beta=1.38, no cooling rho; Temp; Mach; mag pressure; 1
Bz=1T, beta=1.38, Al cooling rho; Temp; Mach; mag pressure; 1
Bz=5T, beta=0.055, no cooling rho; Temp; Mach; mag pressure; 1
Bz=5T, beta=0.055, Al cooling rho; Temp; Mach; mag pressure; 1


sample data files: global.data; physics.data; solver.data; scales.data; problem.data


Documents from Danny

In the table each column represents a different simulation. The parameters in dark blue are dimensions and simulation setup instructions. The parameters in light blue are experimentally measured parameters for the plasma and obstacle. The rest of the parameters (white) are calculated using formulas from refs 1 and 2. There is a separate PowerPoint with the setup images which shows the layout of the obstacles and the direction of the magnetic field; the relevant figure is referenced in cell 5.
The first two simulations we are interested in are a comparison of the same initial setup with and without radiative cooling. In our experiments we observe that the plasma temperature does not drop much below 12 eV. However, a calculation of the cooling time suggests that within our experimental time frame we should see cooling. This could be due to a heating mechanism, such as ohmic heating. We are therefore interested to know whether a simulation with a realistic cooling time or one with no radiative cooling at all better resembles our data. The cooling time I have suggested for simulation 1 is taken from the Al cooling curves presented in ref 3 using the ni, Te and Z̄ values shown in the table. For simulation 2 I have suggested the same experiment with no radiative cooling. The experimental setup is one that we have used several times and have very good data for.
Once we see the results from these two simulations we will have other suggestions, but we would like to understand the role of radiative cooling in the simulations first.

Simulations requests for AstroBEAR
Simulations requests for AstroBEAR; Figures for AstroBEAR simulations; Notes on simulation suggestions for AstroBEAR

Meeting update

  • MHD outflow clumps: Fixed the high-density tail issue. Needs suggestion/comments for the MHD runs.

update 10/12/2020

Meeting update 06/22/2020

MHDCollidingFlows runs with the new analytic cooling and the ambient tracer. Need a better way to handle the parameters Temp0 and tracer in the cooling code, though. Also need to double-check the scales.

Mach15 result

      A_alpha = alpha
      !cs=sqrt(gamma*Boltzmann*TempScale/Xmu/muH/amu)
      !power=.5d0*(1d0-2d0*beta)
      !A_alpha=alpha*4.76e-20*&! (ergs*cm^3/s/K^.5)
      !   (3d0/16d0*Xmu*muH*amu/Boltzmann*(cs*vx)**2)**(power)
      A_beta = beta

Current version of the code -- need better ways of handling Temp0 and tracer!

  FUNCTION AnalyticCoolingStrength2(q, Temp)
     REAL(KIND=qPREC) :: AnalyticCoolingStrength2
     REAL(KIND=qPrec) :: q(:)
     REAL(KIND=qPrec) :: Temp    ! dummy argument
     ! Local declarations
     REAL(KIND=qPrec) :: Temp0, T0

     Temp0 = 100000  !10^5 Kelvin
     T0 = Temp/Temp0
     AnalyticCoolingStrength2=q(1)**2 * A_alpha*T0**A_beta*ScaleCool

     !! Cooling applies only where the ambient tracer > 1/3, the scaled value of the initial ambient density
     if( q(9) <= 1d0/3d0 ) AnalyticCoolingStrength2=0d0


  END FUNCTION AnalyticCoolingStrength2
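A minimal Python transcription of the Fortran function above, to make the logic explicit. A_alpha, A_beta, and ScaleCool are module-level parameters in the Fortran code; the default values here are illustrative only, not the production settings:

```python
# Sketch of AnalyticCoolingStrength2: cooling ~ rho^2 * (T/1e5 K)^beta,
# zeroed outside the ambient material.
def analytic_cooling_strength2(q, temp, A_alpha=1.0, A_beta=0.5, scale_cool=1.0):
    """q[0] is density, q[8] the ambient tracer (q(1), q(9) in the Fortran)."""
    temp0 = 1e5                    # K, reference temperature
    t0 = temp / temp0
    strength = q[0] ** 2 * A_alpha * t0 ** A_beta * scale_cool
    # Cooling applies only where the ambient tracer exceeds 1/3,
    # the scaled value of the initial ambient density.
    if q[8] <= 1.0 / 3.0:
        return 0.0
    return strength
```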

Colliding Jets 2.10.2020

See this page for details about:

  1. Fiducial runs with no cooling
  2. Al cooling table from Eddie

2.5D MHD Colliding Flows

sigma = B*B/(rho*v*v)

sigma = B*B/(rho*v*v) = 0
sigma = 6e-7
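Written with explicit parentheses, the magnetization above is sigma = B²/(ρv²), the ratio of magnetic to kinetic energy scales. A minimal helper (units and any factors of 4π depend on the convention, which the post doesn't state; inputs below are arbitrary):

```python
# Magnetization sigma = B^2 / (rho * v^2), with explicit parentheses.
def magnetization(B, rho, v):
    return B * B / (rho * v * v)

print(magnetization(2.0, 1.0, 1.0))  # -> 4.0
```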

IAU_B1 List

Updated plans for 100TB NAS storage

Updated plan from the October plans

Three 4-bay NAS DiskStations with twelve 8 TB or 10 TB hard drives

  1. Can use one DiskStation and four hard drives for archiving data and turn it off
  2. Can connect two or three together as JBOD for saving data with redundancy.
  3. Fast order & delivery: can order directly from Newegg (with p-card, or through the system)

Plan A: ~$3600 for 96TB

3 4-bay NAS DiskStation 3 X $299.99
12 Seagate Desktop SATA 6.0Gb/s 3.5" Internal Hard Disk Drive 12 X $217.99
96TB $3515.85

Plan B: ~$4600 for 120TB

3 4-bay NAS DiskStation 3 X $299.99
12 Seagate IronWolf 10TB NAS Hard Drive 12 X $304.51
120TB $4554.09
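An arithmetic check of the plan totals above, from the listed unit prices:

```python
# Check of the plan totals (3 DiskStations plus 12 drives each).
plan_a = 3 * 299.99 + 12 * 217.99   # Plan A: 12 x 8 TB drives
plan_b = 3 * 299.99 + 12 * 304.51   # Plan B: 12 x 10 TB drives

print(round(plan_a, 2), round(plan_b, 2))             # 3515.85 4554.09
print(round(plan_a / 96, 2), round(plan_b / 120, 2))  # $/TB: 36.62 vs 37.95
```

Both totals match the quoted figures; per TB, the two plans are within about $1.50 of each other.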

Buying storage for archiving/saving data

I. Multiple 8TB external hard drives + USB 3.0 port + (RAID)

pros: 1. relatively cheap and flexible

cons: 1. performance could be horrible 2. not quite reliable

  1. 56TB
7 x Seagate Expansion 8TB Desktop External Hard Drive https://www.amazon.com/Seagate-Expansion-Desktop-External-STEB8000100/dp/B01HAPGEIE/ref=sr_1_2?s=electronics&rps=1&ie=UTF8&qid=1540229650&sr=1-2&keywords=16tb+external+hard+drive&refinements=p_n_feature_two_browse-bin%3A5446816011%2Cp_85%3A2470955011&dpID=41mDnJ8-plL&preST=_SY300_QL70_&dpSrc=srch 7X150=$1050
1 x Sabrent 60W 7-Port USB 3.0 Hub https://www.amazon.com/Sabrent-Charging-Individual-Switches-HB-B7C3/dp/B0797NWDCB/ref=sr_1_8?rps=1&ie=UTF8&qid=1540316819&sr=8-8&keywords=7+port+hub+usb3&refinements=p_85%3A2470955011 $40
Total 56TB or 48TB with RAID redundancy $1090

II. Network-attached Storage (NAS) —QNAP

pros: high performance and stable

cons: cost

  1. 40TB
QNAP TS-431P2-1G-US Diskless System Network Storage https://www.newegg.com/Product/Product.aspx?Item=N82E16822107986&ignorebbr=1 $330
4 x 10TB Seagate SkyHawk Surveillance Hard Drive https://www.newegg.com/Product/Product.aspx?Item=N82E16822178999 4x320=$1280
Total 40TB $1610
  2. 60TB
QNAP TS-669L-US Diskless System High-performance 6-bay NAS Server for SMBs https://www.newegg.com/Product/Product.aspx?Item=9SIA0AJ2U04041 $1000
6 x 10TB Seagate SkyHawk Surveillance Hard Drive https://www.newegg.com/Product/Product.aspx?Item=N82E16822178999 6*320=$1920
Total 60TB $2920
  3. 100TB
QNAP REXP-1000-PRO SAS/SATA/SSD RAID Expansion Enclosure for Turbo NAS https://www.newegg.com/Product/Product.aspx?Item=9SIA0ZX7MN0982 $1250
10 x 10TB Seagate SkyHawk Surveillance Hard Drive https://www.newegg.com/Product/Product.aspx?Item=N82E16822178999 10*320=$3200
Total 100TB $4450
  4. 120TB
QNAP High Performance 12 bay (8+4) NAS/iSCSI IP-SAN. Intel Skylake Core i3-6100 3.7 GHz Dual core, 8GB RAM, 10G-ready https://www.newegg.com/Product/Product.aspx?Item=9SIA25V4S75250 $2000
12 x 10TB Seagate SkyHawk Surveillance Hard Drive https://www.newegg.com/Product/Product.aspx?Item=N82E16822178999 12*320=$3840
Total 120TB $5840

III. Cloud space: Amazon Glacier

pros: 1. pay monthly as you go

cons: 1. hard to predict total fees

$48 per TB per year + other fees (retrieval, requests, data transfer, etc.)

pnStudy: conical wind with wedge tip --2nd way

Instead of making the wedge tip the same as the ambient as in blog:bliu08012018, tried Bruce's idea of "at t=0 simply replicate the flow at the edge of the launch surface into the wedge". It seems to work much better than the scheme in blog:bliu08012018.

t=0 http://www.pas.rochester.edu/~bliu/pnStudy/WedgeTip/logRho_15Deg_noWedge_2nd_frame0.png
t=400y http://www.pas.rochester.edu/~bliu/pnStudy/WedgeTip/logRho_15Deg_noWedge_2nd_frame50.png
Movie up to 400y movie

pnStudy: conical wind with wedge tip

Add a wedge tip to test if it solves the "piston/knots" issue along the y-axis

  1. The wedge tip is added using a wedge angle and a tangent line to the edge of the original conical wind launching region (a circle in the testing example). The new launching region is now the circle + a wedge, with Info%q set to the initial ambient values in that area. The wedge area is marked in red in the following picture
30 deg vs 15 deg vs 10 deg http://www.pas.rochester.edu/~bliu/pnStudy/WedgeTip/Wedge_30Deg_15Deg_10Deg.png
  2. Compare the results with (left panel) and without the wedge tip (wedge angle = 15 deg)
150 yr http://www.pas.rochester.edu/~bliu/pnStudy/WedgeTip/Wedge15DegVsNoWedge.png no wedge tip; 15 deg wedge tip

comparison movie up to 300 y

  3. Conclusion: it seems to make things worse.

MHDJetClump Module

Low-field testing results with no magnetic field for Clump

Bstar=0, compare with Bruce's hydro result (right panel) http://www.pas.rochester.edu/~bliu/pnStudy/MHD_OH231/hydroTest/frame8_highres.png movie
Bstar=1e-5 gauss, http://www.pas.rochester.edu/~bliu/pnStudy/MHD_OH231/logRhoBz_Bstar_1e-5_frame8.png movie
Bstar=1e-4 gauss, http://www.pas.rochester.edu/~bliu/pnStudy/MHD_OH231/logRhoBz_Bstar_1e-4_frame8.png movie

Meeting update 04/23/18

Tentative schedule for Laurence's visit: pdf

  1. talk?
  2. lunch&dinner

Meeting update -- 03/06/2018

  • JetClump module with 3D MHD compiles and runs
  • Tested the post-processing python code for polarization map on the 3D MHD JetClump results.
  • Matlab code for plotting polarization map.

Meeting update -- 02/12/2018

  • Laurence Sabin's Visit
    1. Time
    2. Projects

The main goal would be using the hydrodynamical model that you already published with Bruce and
 add the "magnetic component" to fit our observations.

It would also be interesting to combine this with Martin's (synthetic) polarization maps 
to reflect what we are observing with the SMA, CARMA and ALMA.

A second point, that was raised during the PNe conference Adam and I attended 
last year, is the determination of the minimum field's intensity needed to 

trigger the shaping. This is a very important and rather unknown aspect:  
I am working on the measurement of photospheric magnetic fields, on Post-AGBs and PPNe, and so far the
 (longitudinal) values found are quite low and might not be enough to actually launch any material !!  

  1. Current JetClump Module: 2D MHD (Toroidal magnetic field in Clump and Jet ) runs OK. 3D MHD untested.

Artificial knots for outflow models with spherical nozzles

The following is from Bruce's email. Just want to put it here and see if there are any comments/ideas:

Thin knots seem to arise in many outflow models along the y-axis shortly after the launch of a jet. In brief, I’m convinced that the biggest cause of such knots is the shape of the nozzle’s surface (a sphere). A flat or highly conical nozzle will suppress the knots.

The simplest flow is that of a cylindrical jet at the origin moving into an ambient medium of constant density on a Cartesian grid. In principle, such a flow has no way to deviate from a simple cylindrical flow unless shears (at the edge) or kink instabilities develop (they don’t).

Heavy flows: This is obviously the case if the flow density > ambient density. The flow is simply a telephone pole flying through something like a vacuum.

Light flows: If the flow density < ambient density then the flow will interact strongly with the dense medium through which it pushes. Even so, there is no a priori expectation that a dense, thin knot will develop almost immediately along the y axis. But it does: that’s what I find in the sims using the present version of AstroBEAR. See the attached figure where I move the viewing window at the same speed as the head of the flow.

Notes:

  1. the spatial units in the graph should be multiplied by two if the basic cell size = 500 AU. I had to mess with the scaling factors in VisIt (0.25 instead of 0.5) to get a good display. That is, the basic cell in the figure will have dimensions of 250 AU.
  2. I used Nlevel=5 in these sims. Changing it by ± 1 has no effect.

The panels show a light flow of density 10^2 and speed 200 km/s moving into a uniform ambient medium of density 10^4. The bottom panel shows the geometry at t=0. You are looking at the nozzle (round) and (unit) flow vectors that will emerge through its surface at t=0+. The vectors are perfectly vertical. The nozzle’s surface isn’t a perfect sphere, but that doesn’t matter much.

The vectors along the inside edges of the gas displaced by the round jet (the “swept-up, compressed rim”) almost immediately start to curve towards the y axis. This is exactly what should happen when the flow strikes the inner edge of the rim of displaced gas obliquely. The flow along the rim starts to converge towards the y axis. This convergence forms an incipient knot in 100 y (the nozzle crossing time). The knot rapidly becomes longer and denser as mass continues to flow into it.

It’s what you will get if you put a squishy ball bearing between the jaws of a closing scissors.

My point is that artificial knots are inevitable using spherical nozzles. The formation of this axial knot can be suppressed if the nozzle were a flat surface or a long and thin cone. The flow from a flat nozzle would displace and sweep up a flat plug (a disc) whose speed decreases as ambient gas is incorporated into it. The only way to completely avoid any axial knot is to introduce a flow with a sharply conical head, like the nose cone of a rocket.

Baowei knows through bitter experience that forming a flat nozzle is difficult in AstroBEAR. It’s even more difficult to make a nozzle shaped like a nose cone. But you might think about it. (Of course, some axial knots might form after a simple jet starts to break up or become unstable and pinch. Such knots are ‘real’, not artificial.)

Of course, no one has any idea what a nozzle looks like on large size scales. Zhou’s sims might provide some guidance on this. They look highly conical to me.

This email sounds like it’s just about details of flow geometries. It’s really more about model outcomes. There’s potentially important science at stake.

Meeting Update --09/11/17

Meeting Update --08/03/17

  • IOPP invoice for the OH231 paper
  • XSEDE Resources
    1. Current resources: about 26000 Node-hour on stampede2 and 180000 cpu-hours on comet details
    2. Stampede2 currently only has KNL co-processors. details on this page
  • Users
    1. has been working with Eric & Jason's students.
  • Coding
    1. Updates to JetClump and pnStudy module: distribution of ambient density #438; total mass and momentum on the grids.
    2. OpenMP optimization for Common Envelope module: details on this page
  • Wire Turbulence
    1. rearranged some figures and redid figure 1
    2. re-wrote introduction and method/model part. added more references

Meeting Update --6/2/17

  • Disk Space
    1. archived MachStems data. Currently on bluehive, /scratch/afrank_lab has 4.4TB available.

Meeting Update --5/15/17

  • Disk Space
    1. received 12TB external hard disks from Erica.
    2. archiving Planetary Atmosphere data on Bluehive. Will clean ~2.9TB space.
    3. received several 500GB/1TB hard disks with total size ~5TB for clover from Dave. Will use them for archiving also.
    4. grassdata/ is mainly occupied by the WT data. So will not change it for now.
  • Wire Turbulence http://www.pas.rochester.edu/~bliu/wireTurbulence/PostProcessing/PNGs_sol/Mach4.png

Meeting update 05/09/2017

  • 1. Poster Print fee/grant account for Dave
  • 2. Updated Bluehive space: 39TB at $97 per TB per year: blog:bliu04182017 . New external disk?
  • 3. Wire Turbulence poster and paper conclusions: 1) The turbulence generated is mainly solenoidal and follows the -5/3 Kolmogorov law for both hydro and MHD velocities. 2) The driving factor is ~1/3 as solenoidal turbulence dominates, which makes the Mach number > 1 for both hydro and MHD runs.

Meeting Update --4/18/17

  • CIRC poster Session
    1. Deadline for registering: Next Friday 04/28
    2. Link for registering https://registration.circ.rochester.edu/postersession
    3. Will do a poster for Wire Turbulence
    4. I will register some of our old posters next Monday. If you have a poster or will do a poster and need help, please let me know.
  • Bluehive space under the afrank name, which will be charged ($97 per TB per year)
Eddie 4TB
Zhuo 5TB
Luke 24TB
afrank_lab 6TB
Total 39TB
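The quota total and the resulting annual charge follow directly from the table above:

```python
# Per-user Bluehive quota (TB) and the annual charge at $97 per TB per year.
quota_tb = {"Eddie": 4, "Zhuo": 5, "Luke": 24, "afrank_lab": 6}
total = sum(quota_tb.values())
print(total, total * 97)  # 39 TB -> $3783 per year
```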

Shall we move Eddie's account to his current group instead?

  • Code & Users
    1. One user asking for the mass-transfer-between-binaries code has been put on hold.
  • XSEDE Machine usage
    1. TG-AST160054, expiration date 2017-9-21: Stampede (-3517 out of 50000 SUs, 0% remaining); Comet (43427 out of 50000 SUs, 86% remaining)
    2. TG-AST120060, expiration date 2017-12-31: Stampede (960802 out of 980222 SUs, 98% remaining); Comet (858187 out of 980222 SUs, 86% remaining)
  • Wire Turbulence
    1. Velocity Spectra — supersonic for solenoidal and subsonic for compressive turbulence
http://www.pas.rochester.edu/~bliu/wireTurbulence/PostProcessing/PNGs_sol/spectra_withsol.png
  2. Redid Mach number vs b
http://www.pas.rochester.edu/~bliu/wireTurbulence/PostProcessing/PNGs_sol/Mach2.png
  3. Hydro pressure histogram
frame 199 http://www.pas.rochester.edu/~bliu/wireTurbulence/Figures3/logP_frame199_hydro.png
In VisIt http://www.pas.rochester.edu/~bliu/wireTurbulence/Figures3/hydro_box4.png

Meeting Update --3/23/17

  • Wire Turbulence
    1. Schematic Diagram: http://www.pas.rochester.edu/~bliu/wireTurbulence/schematic3.png
    2. Redid the spectra analysis with wavenumber range [2, 20] compared with [2, 40]. The linear-fit slope along the x direction doesn't change much, while the y & z directions go much lower (-1.2). Details.
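The slope fit described above can be sketched as a least-squares line through log E(k) vs log k over a chosen wavenumber window. Here synthetic k^-5/3 data stand in for the extracted spectra (so the fit recovers -5/3 exactly regardless of window; real data would not):

```python
# Least-squares slope of log10(E) vs log10(k) over a wavenumber window.
import math

def fit_slope(ks, Es, kmin, kmax):
    pts = [(math.log10(k), math.log10(E))
           for k, E in zip(ks, Es) if kmin <= k <= kmax]
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

ks = list(range(2, 41))
Es = [k ** (-5.0 / 3.0) for k in ks]   # synthetic Kolmogorov spectrum
print(round(fit_slope(ks, Es, 2, 20), 3))  # -1.667
print(round(fit_slope(ks, Es, 2, 40), 3))  # -1.667
```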

Meeting Update --2/22/17

  • Users
    1. Proposal for Jason's student?
    2. Laurence Sabin
  • WT
    1. New figures added in the paper
Mach number vs b http://www.pas.rochester.edu/~bliu/wireTurbulence/Tracers/Mach.png
Wind/Grid tracer ratio PDF with tracers http://www.pas.rochester.edu/~bliu/wireTurbulence/Tracers/WGratio.png
  2. Tracers and Gaussian 2 fit for density PDF: redid the Gaussian 2 fit of the density PDF with tracers. Tried the Gaussian 2 fit with simple test data to understand the GS2 fit parameters, mainly the relations between the sigmas of the GS2 fit and the sigmas of the individual components. While there are no obvious relations between the sigmas, the sample data & fit clearly show two peaks which match the individual component peaks. This can't be found in the WT data with tracers, so interpreting the Gaussian 2 fit of the WT density PDF as grid & wind material probably isn't proper.

redid tracers figure http://www.pas.rochester.edu/~bliu/wireTurbulence/Tracers/GridWind_redo_g2.png
original figure without tracers http://www.pas.rochester.edu/~bliu/wireTurbulence/Figures3/both_g1_4p.png
Test Gaussian 2 fit with simple data http://www.pas.rochester.edu/~bliu/wireTurbulence/Tracers/FootSize_g2Test.png

Meeting Update --01/26/17

  • Contact User of Toronto?
  • Book Vista for the new semester?
  • Wire Turbulence
    1. Al Cooling: tried new parameters based on the aluminum table (density and temperature range). 2D runs show that the cooling intensity is too small compared with the post-shock energy, so the cooling is too slow compared with the downstream velocity. Details can be found on this page.
    2. Tracers and Gaussian2 fit: Added tracers for grid and wind material. Gaussian 1 won't do a good fit for the PDF of either grid or wind material only. Details can be found on this page
http://www.pas.rochester.edu/~bliu/wireTurbulence/Tracers/logWG0200.png http://www.pas.rochester.edu/~bliu/wireTurbulence/Tracers/GridWind.png
  3. Paper: reading and writing.

Meeting Update --01/06/17

  • Visitors and Users
    1. UCSB visitor next week: volunteers needed
    2. RIT user: compiling issues with *.cpp.f90 files under GNU Fortran 4.8? Possible problems for our visitor and other users
    3. Bruce: OH231 module and Hen3–401 paper
    4. Toronto user requesting the Binary code: NK cooling table.
  • XSEDE allocations
    1. 2M CPU-hours on XSEDE machines (1M on Stampede and 1M on Comet) available for production runs. Current allocations can be found here
    2. Comparison of the Stampede and Comet machines can be found on this page
  • Wire Turbulence
    1. Gaussian 2 fit for density PDF: separating wire and wind material using temperature. Details can be found here.
http://www.pas.rochester.edu/~bliu/wireTurbulence/seperatePDF/Hist_2X4.png
  2. Tracers (to do)
  3. Al Cooling (testing)
  4. Paper (working on)

Meeting Update --12/13/16

  • Wire Turbulence.
    1. Merging Aluminum Cooling Table in the code

Table

Temperature (T) range 1-100 K
dT 0.1 K
Density range in the table - 1/cm3

Current Isothermal run without cooling

Wire Temperature 3.75E-12 K
Wire Density 4.8E26 1/cm3
Wind Temperature 1.5E-8 K
Wind Density 4.8E23 1/cm3

details

  2. 2D

http://www.pas.rochester.edu/~bliu/wireTurbulence/2D/wT2D.png

  • Bruce's OH231 runs

Meeting Update --11/29/16

  • Metal Cooling
    1. Eddie's looking for the cooling table&code for Aluminum+Argon.
no Cooling http://www.pas.rochester.edu/~bliu/wireTurbulence/newRuns/proj_hydro_wire_f200.png movie
Cooling Length 1 a http://www.pas.rochester.edu/~bliu/wireTurbulence/Cooling/hydro_anacooling_1a_frame200.png movie
Cooling Length 0.5 a movie

Wire Turbulence

http://www.pas.rochester.edu/~bliu/wireTurbulence/newRuns/sqrtV_V1.png

  • 2D with bar grid

  1. memory allocation error for mhd on Bluestreak. Runs OK on Bluehive
  2. 2D hydro movie; 2D mhd movie

Meeting Update --10/19/16

  • Wire Turbulence
    1. New wire configuration test
setup http://www.pas.rochester.edu/~bliu/wireTurbulence/newRuns/3Dvol_frame0.png
hydro http://www.pas.rochester.edu/~bliu/wireTurbulence/newRuns/logRho_hydro_frame199.png movie
mhd http://www.pas.rochester.edu/~bliu/wireTurbulence/newRuns/logRho_mhd_frame200.png movie

Cooling Test Results for ThermalPulse module

2 or 3 levels of AMR

DMcooling with floorTemp=100K DMcooling with floorTemp=500K
density http://www.pas.rochester.edu/~bliu/Matthias/rhoScaled_DM100K_fr200.png http://www.pas.rochester.edu/~bliu/Matthias/rhoScaled_DM500K_f200.png
Temp http://www.pas.rochester.edu/~bliu/Matthias/Temp_DM100K_fr200.png http://www.pas.rochester.edu/~bliu/Matthias/Temp_DM500K_f200.png
Velocity http://www.pas.rochester.edu/~bliu/Matthias/vel_DM100K_fr200.png http://www.pas.rochester.edu/~bliu/Matthias/vel_DM500K_f200.png
Movies density; temperature; velocity density; temperature; velocity

Meeting Update --09/13/16

  • XSEDE proposal
  • ThermalPulse module with high temperature inside the envelope.
    1. Overshoot expansion velocity problem with high temperature inside the envelope
frame 1 temp with DMcooling (minTemp 1000K) http://www.pas.rochester.edu/~bliu/Matthias/temp.png
overshoot expansion velocity http://www.pas.rochester.edu/~bliu/Matthias/overshoot_1000K.png
low temp velocity http://www.pas.rochester.edu/~bliu/Matthias/vel_DMcooling.png
  2. fixed two bugs related to the conical wind nozzle in August.
  3. waiting for confirmation from Bruce's new runs.

Meeting Update --7/27/16

Meeting Update --7/20/16

  • Schedule for the Visitor
    1. office: 476?
    2. No hotel shuttle on weekends. Transportation (Zhuo?)
    3. will send out schedule draft soon
  • New external hard disk for archiving (~$160 for 4TB, ~$250 for 6TB)

Meeting Update --07/07/16

1) square-root (standard?) velocity variance: different variance for x & yz comes from the x-direction flow?

http://www.pas.rochester.edu/~bliu/wireTurbulence/Figures2/sqrtV.png

2) redid all PDF figures with the new area-weighted histogram data from VisIt: still cannot do the time average due to the different x-axis/frequency values

http://www.pas.rochester.edu/~bliu/wireTurbulence/Figures2/PDF_both_g1_2.png

3) redid the energy with total pressure instead of thermal energy, although the total pressure is very small due to gamma=1.001. Does pressure rather than thermal energy matter here?

4) magnetic energy with plot: will do

5) physical meaning of Gaussian 2 fit: Mixture of two types of turbulence or Turbulence with two components

Meeting Update --06/23/16

  • Wire Turbulence
    1. Variance:
http://www.pas.rochester.edu/~bliu/wireTurbulence/Figures/rhoVy2Vz2.png

  2. density PDF
http://www.pas.rochester.edu/~bliu/wireTurbulence/Figures/PDF_hydro_g1.png

From the Gaussian fit, using the relation between the PDF width and the Mach number with 1/3 ≤ b ≤ 1 (according to Federrath 15), the Mach number can be calculated for different values of b:

b 1 2/3 0.53 1/3
Mach 0.53 0.80 1.0 1.60
  3. Other Variance and PDF plots, see Figures
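A sketch of the arithmetic behind the b-Mach table, assuming the linear relation sigma_rho/rho0 = b * M (so M = sigma / b). The fitted width sigma ≈ 0.53 is inferred here from the b = 0.53, Mach = 1.0 entry, not quoted directly in the post:

```python
# Mach number from the density-PDF width via M = sigma / b.
sigma = 0.53   # fitted PDF width (inferred, see lead-in)

def mach_from_b(b, sigma=sigma):
    return sigma / b

for b in (1.0, 2.0 / 3.0, 0.53, 1.0 / 3.0):
    print(round(b, 2), round(mach_from_b(b), 2))
# b = 1 -> 0.53, 2/3 -> ~0.8, 0.53 -> 1.0, 1/3 -> ~1.6
```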


  • OH231 for Bruce
    1. Updated code with low density in nozzle area.
    2. test result which looks good to Bruce.

Current total quotas for afrank group

Total 27.5 TB at 97 USD per TB per year, or ~2667.5 USD per year

BlueHive BG/Q
Eddie 4TB 4TB
Erica 0TB 12.5 TB
Zhuo 1TB 0 TB
afrank_lab 6TB 0 TB
Total 27.5 TB

Meeting Update --06/03/16

  • Wire Turbulence
    1. hydro data: redid the velocity plots with averaged data for boxes and created volume-weighted Mach number for each Box. See updates
    2. MHD data: spectrum for velocity and magnetic field. See updates
  • OH231 module for Bruce
    1. The module simulates a soft-edged clump plowing through an ambient gas left by a conical wind. The conical wind needs to be turned off after some time.
    2. Fixed the cooling issue for conical wind. ticket:445
    3. Need to set the nozzle area empty after the conical wind is turned off — currently the code stops pumping in material after the CW is off but keeps updating the physical values, and it forms a bubble. The clump could start from the nozzle area. Forcing the density & velocity in the nozzle to be low causes the code to choke. Haven't figured out a good way to do it.
round-nozzle of conical wind https://www.pas.rochester.edu/~bliu/pnStudy/OH231/conical_wind/CW_nozzle.png
current result with conical wind turned off https://www.pas.rochester.edu/~bliu/pnStudy/OH231/conical_wind/CW_off.jpg
movie bubble

Meeting Update 05/11/2016

  • Wire Turbulence
    1. fixed a bug in the script extracting the spectra data, which affected the Box 1 results. The updated results for frame 195 can be found in blog:bliu05032016 . The x-y axes are both in log scale, except for Box 1 which has a different x-range for showing the delta function.
    2. Results for other frames
Box 1 Box 2 Box 3
frame 1 https://www.pas.rochester.edu/~bliu/wireTurbulence/plots/pngs_long/plot_frame_00001_box_1.png https://www.pas.rochester.edu/~bliu/wireTurbulence/plots/pngs_long/plot_frame_00001_box_2.png https://www.pas.rochester.edu/~bliu/wireTurbulence/plots/pngs_long/plot_frame_00001_box_3.png
frame 3 https://www.pas.rochester.edu/~bliu/wireTurbulence/plots/pngs_long/plot_frame_00003_box_1.png https://www.pas.rochester.edu/~bliu/wireTurbulence/plots/pngs_long/plot_frame_00003_box_2.png https://www.pas.rochester.edu/~bliu/wireTurbulence/plots/pngs_long/plot_frame_00003_box_3.png
frame 7 https://www.pas.rochester.edu/~bliu/wireTurbulence/plots/pngs_long/plot_frame_00007_box_1.png https://www.pas.rochester.edu/~bliu/wireTurbulence/plots/pngs_long/plot_frame_00007_box_2.png https://www.pas.rochester.edu/~bliu/wireTurbulence/plots/pngs_long/plot_frame_00007_box_3.png
  3. some movies: Box 5; Box 9; Box 10;

short-range Box 5; short-range Box 9; short-range Box 10

  • Debugging code for Bruce's module (#445)

Wire Turbulence Spectra-MHD

Implemented the Spectra object with 10 boxes/windows (each of size Ly × Lz × Lx/10; the wire is around the center of box 2) in the wireTurbulence module. Worked on scripts to extract the data and make plots. Here are the testing results of frame 195 of the MHD runs for each box:

box spectra zoomed-in
1 https://www.pas.rochester.edu/~bliu/wireTurbulence/plots/pngs_long/plot_frame_00195_box_1.png https://www.pas.rochester.edu/~bliu/wireTurbulence/plots/pngs_short/plot_frame_00195_box_1.png
2 https://www.pas.rochester.edu/~bliu/wireTurbulence/plots/pngs_long/plot_frame_00195_box_2.png https://www.pas.rochester.edu/~bliu/wireTurbulence/plots/pngs_short/plot_frame_00195_box_2.png
3 https://www.pas.rochester.edu/~bliu/wireTurbulence/plots/pngs_long/plot_frame_00195_box_3.png https://www.pas.rochester.edu/~bliu/wireTurbulence/plots/pngs_short/plot_frame_00195_box_3.png
4 https://www.pas.rochester.edu/~bliu/wireTurbulence/plots/pngs_long/plot_frame_00195_box_4.png https://www.pas.rochester.edu/~bliu/wireTurbulence/plots/pngs_short/plot_frame_00195_box_4.png
5 https://www.pas.rochester.edu/~bliu/wireTurbulence/plots/pngs_long/plot_frame_00195_box_5.png https://www.pas.rochester.edu/~bliu/wireTurbulence/plots/pngs_short/plot_frame_00195_box_5.png
6 https://www.pas.rochester.edu/~bliu/wireTurbulence/plots/pngs_long/plot_frame_00195_box_6.png https://www.pas.rochester.edu/~bliu/wireTurbulence/plots/pngs_short/plot_frame_00195_box_6.png
7 https://www.pas.rochester.edu/~bliu/wireTurbulence/plots/pngs_long/plot_frame_00195_box_7.png https://www.pas.rochester.edu/~bliu/wireTurbulence/plots/pngs_short/plot_frame_00195_box_7.png
8 https://www.pas.rochester.edu/~bliu/wireTurbulence/plots/pngs_long/plot_frame_00195_box_8.png https://www.pas.rochester.edu/~bliu/wireTurbulence/plots/pngs_short/plot_frame_00195_box_8.png
9 https://www.pas.rochester.edu/~bliu/wireTurbulence/plots/pngs_long/plot_frame_00195_box_9.png https://www.pas.rochester.edu/~bliu/wireTurbulence/plots/pngs_short/plot_frame_00195_box_9.png
10 https://www.pas.rochester.edu/~bliu/wireTurbulence/plots/pngs_long/plot_frame_00195_box_10.png https://www.pas.rochester.edu/~bliu/wireTurbulence/plots/pngs_short/plot_frame_00195_box_10.png
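For reference, the per-box spectra above can be sketched as a shell-averaged power spectrum once a box has been extracted to a uniform numpy array. This is a generic sketch (the function name and array layout are hypothetical, not the actual wireTurbulence scripts):

```python
import numpy as np

def shell_spectrum(v, dx=1.0):
    """Shell-averaged 1D power spectrum of a 3D field v
    (e.g. one velocity component inside an analysis box)."""
    n = v.shape[0]                      # assume a cubic box
    vk = np.fft.fftn(v) / v.size        # normalized 3D FFT
    power = np.abs(vk)**2               # power per mode
    k = np.fft.fftfreq(n, d=dx) * n     # integer wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    kbins = np.arange(0.5, n//2, 1.0)   # shells around k = 1, 2, ...
    which = np.digitize(kmag.ravel(), kbins)
    spec = np.bincount(which, weights=power.ravel(),
                       minlength=len(kbins) + 1)
    return np.arange(1, len(kbins)), spec[1:len(kbins)]
```

A single-mode test field peaks in the shell of its wavenumber, which is a quick sanity check before running this on real box data.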

Meeting Update 04/19/2016

Wire Turbulence

  1. Analyzing data for the hydro and MHD runs (up to 197 frames).
  2. Haven't finished with the MHD data yet. Current results of log(density) along the middle section and plots for , , and can be found in this page. Will do the remaining plots.
  3. There is a different pattern in the density pseudocolor plots along the mid-y and mid-z sections for MHD.
  4. Both the hydro and MHD velocity plots show a strange data point in the center. Will check whether it comes from the analysis method or the chombo data.
  5. Density pseudocolor plots show the MHD run crashed at frame 196, so the restart will need to go from frame 195.

wire turbulence - hydro

Plots and movie for <Vx>; (<Vx^2> - <Vx>^2); <Vy^2>; <Vz^2>

frame 100 http://www.pas.rochester.edu/~bliu/wireTurbulence/V2_100.png
frame 170 http://www.pas.rochester.edu/~bliu/wireTurbulence/V2_170.png

movie
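The mean and variance quantities above can be computed from an extracted velocity cube in a few lines of numpy. A sketch, assuming a hypothetical (nx, ny, nz) array layout with the average taken over each y-z plane (not the actual analysis script):

```python
import numpy as np

def turbulent_moments(vx):
    """Mean flow <Vx> and turbulent variance <Vx^2> - <Vx>^2 of one
    velocity component, averaged over each y-z plane at fixed x."""
    mean = vx.mean(axis=(1, 2))                  # <Vx>(x)
    var = (vx**2).mean(axis=(1, 2)) - mean**2    # <Vx^2> - <Vx>^2
    return mean, var
```

The same pattern gives <Vy^2> and <Vz^2> by passing the other components.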

Turbulent Wire velocity plots

Hydro run frame 200

https://www.pas.rochester.edu/~bliu/wireTurbulence/hydro_v2_vs_x_frame200.png
https://www.pas.rochester.edu/~bliu/wireTurbulence/hydro_frame200.png

Collecting data for other frames …

3D volume rendering movies for "Hot Planet Winds Near a Star"

Make high-res 3D volume-rendering image using VisIT2.8.2 on Bluehive

Jonathan installed VisIT 2.8.2 on Bluehive with SLIVR (a GPU-supported volume rendering library, http://www.visitusers.org/index.php?title=Volume_Rendering#SLIVR ). This makes 3D volume rendering a lot faster and fancier: you can use 2D transfer functions and manipulate the picture in real time since it's GPU-accelerated. Here's a short introductory movie.

http://www.pas.rochester.edu/~bliu/OutflowWind/HD209458b/3DVol_Silvr/slivr_1.png
http://www.pas.rochester.edu/~bliu/OutflowWind/HD209458b/3DVol_Silvr/slivr_2.png


To set the limits (maximum and minimum value) of the variable and to change the opacity, one has to switch to the 1D transfer function, as shown in the following two images:

http://www.pas.rochester.edu/~bliu/OutflowWind/HD209458b/3DVol_Silvr/setLimits.png
http://www.pas.rochester.edu/~bliu/OutflowWind/HD209458b/3DVol_Silvr/opacity.png


And here are some AstroBEAR results

http://www.pas.rochester.edu/~bliu/OutflowWind/HD209458b/3DVol_Silvr/3Dhot20100.png
http://www.pas.rochester.edu/~bliu/OutflowWind/HD209458b/3DVol_Silvr/3Dcold20100.png

meeting update 01/25/16

  • Interaction Wind Streamlines for different alpha
http://www.pas.rochester.edu/~bliu/OutflowWind/Figures_Interaction/streamLines_alpha.png

More other figures can be found in this page

Meeting update

* New problem module for OH231

  1. Launch a conical wind first for some time, then launch the clump with the result of the 1st step as background.
  2. Compiled and ran. Some minor problems remain to be fixed.
  3. testing results.

* Rotating problem module for M2-9

  1. Trying to reproduce the results in the Garcia-Arredondo 2004 paper
  2. latest updates

Planetary Wind and Mass Loss Rate for HD209458b

1. AstroBEAR code and Set-up

In this study we use the AstroBEAR code (Cunningham et al. 2009) to perform 3D hydrodynamic and magnetohydrodynamic numerical simulations and model the "Hot Jupiter" HD209458b (Ballester et al. 2007). AstroBEAR is a fully parallelized AMR MHD multi-physics code which currently includes modules for the treatment of self-gravity, ionization dynamics, chemistry, heat conduction, viscosity, resistivity and radiation transport via flux-limited diffusion. For our simulations we use a polytropic equation of state (the polytropic index is an input parameter) and we assume isothermal conditions.

In this part we only focus on the planetary wind (hydrodynamic) for HD209458b without considering the star and stellar wind. We present the simulation results of planetary wind launching using the AstroBEAR code and calculate the mass loss rate of the planet using the density and velocity from the simulation data.

2. Parameters and Initial Conditions

The mass of HD209458b is , where is the Jupiter mass (Wang et al 2002), and the radius is , where is the Jupiter radius (Southworth et al 2010). We use for the temperature of the planet.

measures the strength of the planetary wind. For this , a Parker-type thermally driven hydrodynamic wind is expected. As a comparison, the Sun with its corona has .

We use as the initial density for the planet atmosphere. For the initial temperature, we use two setups: 1) set the outer boundary of the planet with temperature (without temperature profile, i.e. a spherically-launching wind), and 2) set the outer boundary of the planet with an azimuthally varying temperature , where is the sub-solar point (with temperature profile). For the 2nd case, we use an initial setup similar to that of Stone & Proga (2009). The parameters we use for HD209458b are summarized in Table 1.

Table 1. Parameters for HD209458b

3. Resolutions

In our simulations the planet is treated as an internal boundary whose physical quantities are held fixed during the simulation. Our computational domain consists of a cube of size with resolution for the base grid, and levels of AMR are used in total. This brings the finest resolution up to zones per .
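The effective finest resolution follows from the base grid and the number of factor-2 AMR levels. A small sketch with hypothetical numbers (not the actual values used in this study):

```python
def finest_dx(box_size, base_cells, amr_levels):
    """Finest cell size for factor-2 AMR refinement: each level
    halves the cell size of the level below."""
    return box_size / (base_cells * 2**amr_levels)

# Hypothetical example: a cube 40 planetary radii on a side,
# a 64^3 base grid, and 4 AMR levels
dx = finest_dx(40.0, 64, 4)    # in units of the planet radius
zones_per_radius = 1.0 / dx    # effective finest resolution
```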

4. Planetary Wind Results and Mass Loss Rate

In Figure 1 we show the 3D simulation results for both the without-temperature-profile and with-temperature-profile cases. For the without-temperature-profile case (top panels in Fig. 1), the planet temperature launches a spherical thermal wind, and the Mach=1 contour is approximately spherical (a circle in the 2D cross section). For the with-temperature-profile case (bottom panels in Fig. 1), there is flow from the dayside to the nightside, and the Mach=1 contour shows a weak shock between the two sides.

in color http://www.pas.rochester.edu/~bliu/OutflowWind/HD209458b/MassLoss/Finals/L40_color0250.png
in gray http://www.pas.rochester.edu/~bliu/OutflowWind/HD209458b/MassLoss/Finals/L40_gray0200.png

Fig. 1 Steady-state planetary wind solution: cross sections in the xy-plane for simulations without (top) and with (bottom) the temperature profile. Flow and density are shown on the left; thermal structure and M=1 contours are shown on the right. The small circle at the center shows the radius of the planet.

The mass loss rate can be calculated by integrating . From our 3D simulation data, the mass loss rate for the without-temperature-profile case is , compared with for the with-temperature-profile case. We see the temperature profile gives a mass loss about of the value for the spherically-launching wind.

With the planet temperature , we can also solve the problem analytically in 1D with Parker's wind solution (Parker 1958). Parker's wind solution gives a mass loss rate of . The estimated mass loss rates for HD209458b are listed in Table 2.

Methods Mass Loss Rate
3D Simulation Without Temperature Profile
3D Simulation With Temperature Profile
Analytic Parker wind Solution

Table 2. Estimated Mass Loss Rate for HD209458b
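The 1D analytic estimate above comes from Parker's isothermal transonic wind. A minimal generic sketch of solving it numerically (this is a standard textbook solver, not the code actually used for Table 2):

```python
import math

def parker_u(x):
    """v/c_s of the transonic isothermal Parker wind at radius
    x = r/r_c, where r_c = G M / (2 c_s^2) is the critical radius.
    Solves  u^2 - ln u^2 = 4 ln x + 4/x - 3  by bisection on the
    transonic branch (subsonic inside r_c, supersonic outside)."""
    rhs = 4.0 * math.log(x) + 4.0 / x - 3.0
    f = lambda u: u * u - math.log(u * u) - rhs
    lo, hi = (1e-8, 1.0) if x < 1.0 else (1.0, 100.0)
    for _ in range(200):                 # simple bisection
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

Given u(r) and the density profile, the mass loss rate follows from the constancy of 4*pi*r^2*rho(r)*v(r) with radius.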


5. References

Ballester, G. E., King, D. K., & Herbert, F. 2007, Nature, 445, 511

Cunningham A., Frank, A., Varniere, P., Mitran, S., Jones, T. W., 2009, ApJS, 182, 519

Southworth, J. 2010, MNRAS, 408, 1689

Wang, J. & Ford, E. B., 2011, MNRAS, 418, 1822

Parker, E. N. 1958, ApJ, 128, 664

HD209458b: PlanetaryWind and Sonic Surface

When trying to produce a high-res picture of the HD209458b planetary wind with a larger planet, I found different-looking sonic surfaces for L=60 and L=40, as shown in blog:bliu10192015.

I tended to think the L=60 one is more correct(?) because

  • The sonic surface of L=60 looks more like the 2D PW simulations I did before (compare with the 2D sonic surface).
  • Early frames of L=40 show a similar sonic surface, as seen in this picture

http://www.pas.rochester.edu/~bliu/OutflowWind/HD209458b/SonicSurface/L40_4AMR_frame115.png

Thinking this was resolution related, I did several restarts with higher AMR levels. Here are some of the results

L=40 4-AMR outside the planet http://www.pas.rochester.edu/~bliu/OutflowWind/HD209458b/SonicSurface/L40_4AMR_amb.png
L=40 5-AMR outside the planet http://www.pas.rochester.edu/~bliu/OutflowWind/HD209458b/SonicSurface/L40_5AMR_amb.png

movie

Will try higher res for L=60 to see if the results are consistent.

Higher-res results for L=60 show a similar sonic surface to that of L=40. So we conclude the sonic surface at higher resolution is more correct, although it looks slightly different from our 2.5D results.

L=60 4-AMR outside the planet http://www.pas.rochester.edu/~bliu/OutflowWind/HD209458b/SonicSurface/sonicSurface_L60_4AMR.png movie

meeting update 10/19/15

High-res PW Picture for HD209458b:

L=60 http://www.pas.rochester.edu/~bliu/OutflowWind/HD209458b/MassLoss/HD209_L60_high.png movie 1.56E+09 gram per sec
L=40 http://www.pas.rochester.edu/~bliu/OutflowWind/HD209458b/MassLoss/HD209_L40_high.png movie 1.42E+09 gram per sec

M2-9: 3D pnStudy with rotating/spinning conical wind

  • Test the 3D pnStudy module by adding a rotating velocity to the conical wind:
Non-rotate http://www.pas.rochester.edu/~bliu/pnStudy/M2-9/rhoV_M29_3D_noRot.png 4AMR non Rotate movie; 4AMR non Rotate 2D slice movie;
Rotate with 150y period http://www.pas.rochester.edu/~bliu/pnStudy/M2-9/M29_3D_rho_rot_f44.png 4AMR Rotate movie; 4AMR Rotate 2D slice movie;
  • The data file for this run:
tamb = 1d3           ! ambient temp, 1cu = 0.1K (100K=1000cu)     
namb = 4e4           ! ambient central density cm^-3. Usually 400 for 1/r^2 or torus.
stratified = f       ! true = add a 1/r^2 background 'AGB stellar wind'
torus      = f       ! true - add torus to the background
torusalpha = 0.7     ! alpha and beta specify the geometry
torusbeta  = 10d0    ! see Frank & Mellema, 1994ApJ...430..800F
rings      = f       ! true - add radial density modulations to AGB wind
!
!     FLOW  DESCRIPTION SECTION, values apply at origin at t=0
outflowType  = 2    ! TYPE OF FLOW    1 cyl jet, 2 conical wind, 3 is clump 
njet  = 4d2         ! flow density at launch zone, 1cu = 1cm^-3
Rjet  = 2d0         ! flow radius at launch zone, 1cu = 500AU (outflowType=1 only)
vjet  = 2e7         ! flow velocity , 1cu = cm/s (100km/s=1e7cu)
tjet  = 1d3         ! flow temp, 1cu = 0.1K (100K=1000cu)
tt    = 0.0d0       ! flow accel time, 1cu = 8250y (0.02 = 165y)
open_angle = 90d0    ! conical flow open angle (deg)
tf    = 15d0        ! conical flow Gaussian taper (deg) for njet and vjet; 0= disable
sigma = 0d0         ! !toroidal.magnetic.energy / kinetic.energy, example 0.6

HD209458b: Planetary Wind and Mass Loss

1. Planetary Wind With Temperature Profile

I have been trying to get a high-res picture of the HD209458b planetary wind with the temperature profile. This picture shows there's a shock on the night side and a back-flow.

frame 40 http://www.pas.rochester.edu/~bliu/OutflowWind/HD209458b/MassLoss/RhoV0040.png
frame 48 http://www.pas.rochester.edu/~bliu/OutflowWind/HD209458b/MassLoss/RhoV0048.png

The shock disappeared as time went on, although I still need to double-check whether this comes from the restarting problem

frame 52 (2AMR) http://www.pas.rochester.edu/~bliu/OutflowWind/HD209458b/MassLoss/RhoV0052.png movie for this run
frame 76 (3AMR) http://www.pas.rochester.edu/~bliu/OutflowWind/HD209458b/MassLoss/RhoV0052.png movie for this run

2. Estimate of the Mass Loss Rate

  • 1). The mass loss rate is estimated by calculating the flux out of the cube's surfaces as shown below:
http://www.pas.rochester.edu/~bliu//OutflowWind/HD209458b/MassLoss/integral.png

where the size of the box is 0.4 × the orbital separation between the star and the planet, or 0.019 AU, and the planet sits at the center of the box.

  • 2). The mass loss rate for the no-temperature-profile case is 3.82E+09 gram per sec; see blog:bliu09222015
  • 3). For the with-temperature-profile case, I checked the mass loss rate at different frames
frame Picture Mass loss rate
Frame 48 http://www.pas.rochester.edu/~bliu//OutflowWind/HD209458b/MassLoss/RhoV0048.png 1.33E+09 gram per sec
Frame 70 3AMR http://www.pas.rochester.edu/~bliu//OutflowWind/HD209458b/MassLoss/rhoV0070.png 1.55E+09 gram per sec
Frame 76 3AMR http://www.pas.rochester.edu/~bliu//OutflowWind/HD209458b/MassLoss/rhoV0076.png 1.56E+09 gram per sec
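The face-flux estimate in 1) above can be sketched for a uniform-grid cube as follows. This is a generic sketch (rho and the velocity components are hypothetical numpy arrays; the actual extraction from the chombo data is omitted):

```python
import numpy as np

def box_mass_flux(rho, vx, vy, vz, dx):
    """Net mass flux out of a cube: the surface integral of
    rho * v . dA approximated by summing rho*v_n*dA over the six
    faces of a uniform (n, n, n) grid with cell size dx."""
    dA = dx * dx
    flux  = np.sum(rho[-1] * vx[-1]) - np.sum(rho[0] * vx[0])                  # x faces
    flux += np.sum(rho[:, -1] * vy[:, -1]) - np.sum(rho[:, 0] * vy[:, 0])      # y faces
    flux += np.sum(rho[:, :, -1] * vz[:, :, -1]) - np.sum(rho[:, :, 0] * vz[:, :, 0])  # z faces
    return flux * dA
```

A uniform flow gives zero net flux (everything that enters leaves), which is a useful sanity check of the sign conventions.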

HD209458b: PlanetaryWind Tests

  • No co-rotation
With Temp profile old http://www.pas.rochester.edu/~bliu//OutflowWind/HD209458b/rhoV_HD_omega0_40zones_old.png movie Mass loss Rate: 2.13E+09 gram per sec
With Temp profile http://www.pas.rochester.edu/~bliu//OutflowWind/HD209458b/rhoV_HD_omega0_40zones.png movie Mass loss Rate: 3.04E+09 gram per sec
No Temp profile http://www.pas.rochester.edu/~bliu//OutflowWind/HD209458b/rhoV_omega0_noTempProfile.png movie Mass loss Rate: 3.82E+09 gram per sec



  • Very Low Ambient Density
rho_ambient=1e-25, 5 zones per radii

  • Wind on all boundaries
http://www.pas.rochester.edu/~bliu//OutflowWind/HD209458b/rhoV_windAllBD_20zones.png 20-zones-per-radii

HD209458b: PlanetaryWind

OutflowWind: planetary wind with cooling

  • 1. low-res result

; with analytic cooling; Still running…

http://www.pas.rochester.edu/~bliu/OutflowWind/3D_PlanetWind_Corot/rhoV_planetaryWind_lambeda2.5_Omega1_8zones_L600_analyticCooling.png 8 zones per radii
  • 2. zCooling is not working with the current version of the code. Haven't figured out how to fix it yet…
  • 3. Higher Resolution runs

qTolerance              = .10,.30,.30,1d30,1d30,1d30,1d30,1d30,1d30 

Grid refinement for 2 levels of AMR

OutflowWind: planetary wind only (large window and low-res)

in C.U. or is infinite as there's no stellar wind. always. Type III in Matsakos.

http://www.pas.rochester.edu/~bliu/OutflowWind/3D_PlanetWind_Corot/rhoV_lambeda2.5_Omega1_8zones_L600_Outwind_only.png 8-zone-per-radii movie

OutflowWind: Co-rot planetary wind

  • Larger box: for the setup in blog:bliu08182015, the radius of the star is big (~78 CU). A large box will bring the star in, as shown in the following picture.
http://www.pas.rochester.edu/~bliu/OutflowWind/3D_PlanetWind_Corot/PW_corot_largerBox.png

Not sure if this is OK or if we should use different parameters, since the left and top boundaries have persistInBoundaries. Here's the result (32 zones per radii, and 8 zones per radii for the first 80 frames) of a larger box that does not include the star:

http://www.pas.rochester.edu/~bliu/OutflowWind/3D_PlanetWind_Corot/rhoV_stellarRho1E-11_lambeda2.5_Omega1_32zones_L200.png L=200 movie

  • Restart Issues with High CFL
  1. This happens when restarting from multi-core chombo files; chombo files from single-core runs work OK. I suspect there's a bug in the new parallel hdf writing code.
  2. Currently using an older version of the code with the latest problem module, which works fine.

pnStudy: 3D Results from IAC 08192015

3D data with Nlevel=4 and DM cooling, up to 493 y

  1. Ran for 2.5 hrs on 160 on TeideHPC
  2. Problem.data

tamb = 1d3           ! ambient temp, 1cu = 0.1K (100K=1000cu)
namb = 4e3           ! ambient density in cell above launch surface. 1cu = 1cm^-3
stratified = t       ! true = add a 1/r^2 background 'AGB stellar wind'
torus      = f       ! true - add torus to the background
torusalpha = 0.7     ! alpha and beta specify the geometry
torusbeta  = 10d0    ! see Frank & Mellema, 1994ApJ...430..800F
rings      = f       ! true - add radial density modulations to AGB wind
StraTorus  = 2       ! 1 for Martin's way used before Jan 6th 2015, tracer might not work correctly!!
                     ! 2 for Baowei's way, updated from Martin's code according to Bruce's request
!
!     FLOW  DESCRIPTION SECTION, values apply at origin at t=0
outflowType  = 2    ! TYPE OF FLOW    1 cyl jet, 2 conical wind, 3 is clump
njet  = 4d2         ! flow density at launch zone, 1cu = 1cm^-3
Rjet  = 1d0         ! flow radius at launch zone, 1cu = 500AU (or clump radius)
vjet  = 2e7         ! flow velocity , 1cu = cm/s (100km/s=1e7cu)
tjet  = 1d3         ! flow temp, 1cu = 0.1K (100K=1000cu)
tt    = 0.0d0       ! flow accel time, 1cu = 8250y (0.02 = 165y)
open_angle = 90d0   ! conical flow open angle (deg)  (outflowType=2 cones only)
tf    = 30d0        ! conical flow Gaussian taper (deg) for njet and vjet; 0= disable
sigma = 0d0         ! !toroidal.magnetic.energy / kinetic.energy, example 0.6


3D from IAC https://www.pas.rochester.edu/~bliu/pnStudy/IAC_Data/08192015/rho_iac08092015_493y.png 3D movie
same run in 2D https://www.pas.rochester.edu/~bliu/pnStudy/IAC_Data/08192015/rho_iac08092015_493y_2Dcompare.png 2D movie

OutflowWind: Corot PW results with or without Outflow-only for xhigh and ylow

Results of the planetary wind only in the co-rotating frame, with and without the xhigh and ylow boundaries set to Outflow_only

Outflow_only for xhigh and ylow http://www.pas.rochester.edu/~bliu/OutflowWind/3D_PlanetWind_Corot/rhoV_Outflow_only.png 2D slice movie;Sonic Surface movie;Vol movie
No Outflow_only for xhigh and ylow http://www.pas.rochester.edu/~bliu/OutflowWind/3D_PlanetWind_Corot/rhoV_Extrapolate.png movie

OutflowWind: Planetary Wind only in Co-rot frame

  1. Latest high-res results:
Lambda=5 Omega=0.5 movie 32 zones per radii
Lambda=5 Omega=1 movie 32 zones per radii
Lambda=2.5 Omega=1 movie 32 zones per radii

details can be found in the last part of this page

  1. A bug was found when doing restarts. Haven't found how to fix it yet…

OutflowWind: Co-rotating Planetary Wind & Parameters

1. Testing results about Co-rotating frame

http://www.pas.rochester.edu/~bliu/OutflowWind/3DrotDebug/rhoV_stellarRho2E-5_Omega0.5_32zones_2.png movie

Details in this page

2. Planetary wind only in Co-rotating frame

Only low-res results so far… Getting a lot of High CFL restarts for the high-res runs, whether restarting from a low-res frame or starting from the beginning directly…

low-res Movie for Lambda=5.3 Omega=0.5 ; low-res Movie for Lambda=10.6 Omega=0.5

Details in this page

3. Compare Parameters with Matsakos

check this page — not quite finished yet..

OutflowWind: Parker wind & Paper Figures

1. low density stellar wind

StellarEnvelope%omega=0d0 ! Assume planet is tidally locked
...
lStellarGravity=false


movie of stellar density 10^-5;

Other densities and details can be found here —part 4.

2. Paper Figures

Will upload the high-res pictures to this page

Meeting update

  • OutflowWind Parker solution:
    1. The result of Parker wind of the star looks stable as shown in the movies of blog:bliu07092015.
    2. The planet still has problems: the radius is too small / resolution too low, and the temperature profile cannot be seen. Running with a larger radius
  • OutflowWind: high-res 3D co-rotating
    1. In the development branch, Jonathan updated the 3D temperature profile in the outflow object by moving the sun/day side to the +z direction instead of the +x direction. But there's currently a problem in the temperature profile, as shown below… Working on debugging it.
http://www.pas.rochester.edu/~bliu/OutflowWind/3D_zCorot/temp_xz.png
http://www.pas.rochester.edu/~bliu/OutflowWind/3D_zCorot/temp_yz.png

  1. High-res runs with smaller omega on BG/Q: fixed some compiling bugs on bg/q. Running the old version of the code while debugging the above…
  • 3D pnStudy
  1. Sent the merged code to Bruce & the IAC users after fixing some bugs and running scaling tests on Bluehive: #440, #441.

OutflowWind: Redo stellar wind with new Parker solution

This is to redo the stellar wind Parker solution (as in blog:bliu07012015) with Jonathan's updated Parker wind data (blog:johannjc06252015) and compare with the old one… Other than the initial data, the new run has a larger box (larger than the sonic surface), while the boundary of the old one is inside the supersonic region…

density http://www.pas.rochester.edu/~bliu/OutflowWind/ParkWind/logRho_parkSolution_new.png;density plot http://www.pas.rochester.edu/~bliu/OutflowWind/ParkWind/Rho_parksolution.png;density plot
Vx http://www.pas.rochester.edu/~bliu/OutflowWind/ParkWind/Vx_parkSolution_new.png;Vx plot http://www.pas.rochester.edu/~bliu/OutflowWind/ParkWind/Vx_parksolution.png;Vx plot


2D results:

2D slicing rho; 2D slicing Temp

pnStudy: clump bubble

Modified the density for area r<Rjet from

if (outflowType == clump .AND. r2.lt.Rjet) then
 q(i,j,k,1) = namb/nScale+&
               njet/nscale*(1d0-(r2/Rjet)**2) ![cu]
...
end if

to

if (outflowType == clump .AND. r2.lt.Rjet) then
 q(i,j,k,1) = njet/nScale
...
end if

to make the density inside equal to njet, rather than the stratified/tapered value it had before…

And the results look like this

time rho_scaled
t=0 http://www.pas.rochester.edu/~bliu/pnStudy/ticket440/rhoScale0000.png
t= 0.003 C.U. http://www.pas.rochester.edu/~bliu/pnStudy/ticket440/rhoScale0001.png

clump bubble movie

pnStudy: test different cooling after merging

This is to test the pnStudy module in 3D after merging with the latest development branch. The data files are modified from a 2.5D run (located at /home/balick/tap30/t30AGBn4e2v200namb4e3/). Not sure whether it's physically correct, but this just tests that the code can run with different cooling parameters set in physics.data…

The 2.5D result shows density and temperature, while the 3D results show the middle section of density only.

2.5D result from Bruce http://www.pas.rochester.edu/~bliu/pnStudy/Cooling/t30AGBn4e2v200namb4e3_2D.jpg
3D; no cooling http://www.pas.rochester.edu/~bliu/pnStudy/Cooling/rho_noCooling_78y.png
3D; analytic cooling http://www.pas.rochester.edu/~bliu/pnStudy/Cooling/rho_AnalyticCool_78y.png
3D; DM cooling http://www.pas.rochester.edu/~bliu/pnStudy/Cooling/rho_DMCooling_78y.png
3D; II cooling http://www.pas.rochester.edu/~bliu/pnStudy/Cooling/rho_IICooling_78y.png

OutflowWind: Parker solution for stellar wind

Currently the star is not stable

density movie

density http://www.pas.rochester.edu/~bliu/OutflowWind/ParkWind/Rho_parksolution.png density line plot movie;
velocity http://www.pas.rochester.edu/~bliu/OutflowWind/ParkWind/Vx_parksolution.png Vx along x movie
temperature temperature movie

Update 6/29/15 -- Baowei

XSEDE Proposal Renew

  1. highest priority. Due July 15th.
  2. Current documents


OutflowWind module

  1. 3D co-rotating frame: fixed some bugs and latest results— blog:bliu06222015
  2. Parker wind solution for the star: merged with Jonathan's setup and running a test. Will post results soon


pnStudy

  1. Merged with current development branch for Eddie's cooling stuff. Compiled. Running test
  2. will do 3D runs with the new merge version on bluehive and install the new code on Spain machines

OutflowWind:bug fix for planetParticle and pointGravity position

1. bug and fix

The co-rotating-frame code used for the results in blog:bliu06222015 and earlier hard-coded the position of the particle and the pointgravity object as (0,0,0). But the origin in the co-rotating frame should be the center of mass, and the planet sits at (200,0,0) (currently; this will be made more sophisticated). So this is a bug:

IF(.NOT. lRestart) THEN
      CALL CreateParticle(PlanetParticle)
      PlanetParticle%q(1)=planet_mass
      PlanetParticle%xloc=0
      PlanetParticle%radius=radius
     CALL CreatePointGravityObject(PlanetParticle%PointGravityObj)
      PlanetParticle%lFixed=.true.
      PointGravityObj=>PlanetParticle%PointGravityObj
      PointGravityObj%mass=planet_mass
      PointGravityObj%x0=PlanetParticle%xloc

The fix is to set the PlanetParticle position to Outflow%position —

      PlanetParticle%xloc=Outflow%position


2. Low-Res Results after the fix

After the bug fix, the low-res results with omega=0 look promising. The tight-sonic-surface problem in blog:bliu06222015 seems gone. But the gravity/Mach number plot inside the planet is still not quite right — trying higher res.

http://www.pas.rochester.edu/~bliu/OutflowWind/ForAdam/splineSoft/rhoV_splineSoft_particlePosFix_5ZonePerR.png;low res density movie 5-zone-per-radii http://www.pas.rochester.edu/~bliu/OutflowWind/ForAdam/splineSoft/TV_splineSoft_particlePosFix_5ZonePerR.png;low res temperature movie 5-zone-per-radii
http://www.pas.rochester.edu/~bliu/OutflowWind/ForAdam/splineSoft/splineSoft_particlePosFix.png log mach plot


The plot of log|vx| shows that the mismatch between the 2D and 3D gravity plots seems to come from the low resolution of the 3D plot. http://www.pas.rochester.edu/~bliu/OutflowWind/ForAdam/splineSoft/logVx_splineSoft_particlePosFix_5ZonePerR.png

3. 3D corotating with omega=0.5

http://www.pas.rochester.edu/~bliu/OutflowWind/ForAdam/splineSoft/RhoV_3Drho_omega0.5_spline_bugfix_frame58.png

movie for 8 or 16 zones per radii

OutflowWind: spline soft for Point gravity

This is to try to understand/fix the tight sonic surface found in the 3D co-rotating frame with omega=0 (3rd part of blog:bliu06112015_2) and the different line plots for the 3D co-rotating and 2.5D runs (4th part of the same post)… In the earlier 2.5D and 3D simulations the problem module hard-coded PLUMMERSOFT and soft radius=1 for the point gravity.

        SPLINESOFT = 1, & !g ~ r/r^2 for r < r_soft and then goes to 0 as r-> 0 
        PLUMMERSOFT = 2 !g ~ (r^2+r_soft^2)^(-3/2) r


1. Code modification: Splinesoft

Since in the 3D case we use r=1 but in 2.5D we use r=2, the g in 3D is about twice the g in 2.5D.

To handle the point gravity more consistently, we changed the code to use SPLINESOFT instead. Here's the new update to the OutflowWind module:

        outflow_radius=0 ! outflow radius is 0 always
        outflow_thickness=planet_radius 
        pointgravity_soft=1 !splinesoft always
        pointgravity_r_soft=0.5*planet_radius
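For illustration, the two softening laws quoted above can be written out. The Plummer form follows the code comment exactly; the "spline" form shown here is just one simple kernel with the stated limits (Keplerian outside r_soft, going to 0 as r → 0), and the actual AstroBEAR spline kernel may differ in detail:

```python
def g_plummer(r, r_soft, gm=1.0):
    """Plummer-softened point gravity: g ~ GM r (r^2 + r_soft^2)^(-3/2)."""
    return gm * r / (r * r + r_soft * r_soft)**1.5

def g_spline(r, r_soft, gm=1.0):
    """A simple softened form in the spirit of SPLINESOFT:
    GM/r^2 outside r_soft, dropping linearly to zero inside
    (continuous at r = r_soft)."""
    if r >= r_soft:
        return gm / (r * r)
    return gm * r / r_soft**3
```

This makes the dependence on r_soft explicit, which is the point of the r=1 vs r=2 comparison above.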

2. Testing Results with Splinesoft

New Results with SplineSoft (low res) Old Results with PlummerSoft (high res)
2.5D no stellar wind http://www.pas.rochester.edu/~bliu/OutflowWind/ForAdam/splineSoft/rhoT_lambda5.3_Rp2_NoWind_8zones.png;movie 8 zones per radii http://www.pas.rochester.edu/~bliu/OutflowWind/ForAdam//Finals/lambda5.3_gamma1.01_ns1e-5_d20.png;high res movie
2.5D stellar wind http://www.pas.rochester.edu/~bliu/OutflowWind/ForAdam/splineSoft/rhoT_lambda5.3_Rp2_stellarWind_8zones.png;movie 8 zones per radii http://www.pas.rochester.edu/~bliu/OutflowWind/ForAdam/Finals/rhoT_gamma_1.01_Mach5_rho1e-5_high_zoomed_frame300.png;high res movie
3D Co-rotating http://www.pas.rochester.edu/~bliu/OutflowWind/ForAdam/splineSoft/Rho_3Drot_omega0_lambda5_lowres_4.9d.png; density 5 to 10 zones per radii http://www.pas.rochester.edu/~bliu/OutflowWind/ForAdam/newSetup/rhoV_rot0000.png; density with 40 to 80 zones per radii
3D Co-rot and 2D plots compare http://www.pas.rochester.edu/~bliu/OutflowWind/ForAdam/splineSoft/plotCompare2dn3d_lambda5_spline.png;http://www.pas.rochester.edu/~bliu/OutflowWind/ForAdam/splineSoft/plotCompare2dn3d_lambda5_spline_normal.png http://www.pas.rochester.edu/~bliu/OutflowWind/ForAdam/newSetup/plotCompare2dn3d_lambda5_highres.png

OutflowWind: new set up for Co-rotating

1. Small window with new parameters:

http://www.pas.rochester.edu/~bliu/OutflowWind/ForAdam/newSetup/newSetup.png

A tiny stellar wind density is used to check the planetary wind close to the planet surface.

movie for 40 zones per radii

2. No rotation omega=0

density omega=0; 20 zones per radii

temperature omega=0; 20 zones per radii

density omega=0; lambda=10

temperature omega=0; lambda=10

  3. No rotation omega=0, high res, longer

Set the stellar wind to a very low density and run at high resolution, to check whether it's possible to reproduce the 2.5D planetary wind.

http://www.pas.rochester.edu/~bliu/OutflowWind/ForAdam/newSetup/rhoV_rot0000.png http://www.pas.rochester.edu/~bliu/OutflowWind/ForAdam//Finals/lambda5.3_gamma1.01_ns1e-5_d4.png

density omega=0; 40-80 zones per radii
Temperature omega=0; 40-80 zones per radii
pressure omega=0; 40-80 zones per radii


  4. Line plots of density, temperature, Mach number and on the dayside (from the left boundary to the critical radius)


Same run data as 3 above.

http://www.pas.rochester.edu/~bliu/OutflowWind/ForAdam/newSetup/plotCompare2dn3d_lambda5_highres.png

3D:

gamma=1.01
3dpressure=(gamma-1)*(E-0.5*(px^2+py^2+pz^2)/rho)
vScale=sqrt(vx^2+vy^2+vz^2)
soundSpeed=sqrt(gamma*3dpressure/rho)
mach=vScale/soundSpeed

2D:

gamma=1.01
2dpressure=(gamma-1)*(E-0.5*(px^2+py^2)/rho)
soundSpeed=sqrt(gamma*2dpressure/rho)
mach=sqrt(vx^2+vy^2)/soundSpeed
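The same Mach-number recipe as a numpy sketch, assuming px, py, pz are momentum densities (so v = |p|/rho rather than the primitive velocities used in the expressions above):

```python
import numpy as np

def mach_3d(rho, px, py, pz, E, gamma=1.01):
    """Mach number from conserved variables, mirroring the 3D
    expressions: p = (gamma-1)(E - |p|^2 / 2 rho), cs = sqrt(gamma p / rho)."""
    p = (gamma - 1.0) * (E - 0.5 * (px**2 + py**2 + pz**2) / rho)
    v = np.sqrt(px**2 + py**2 + pz**2) / rho
    cs = np.sqrt(gamma * p / rho)
    return v / cs
```

The 2D version just drops the pz term.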

OutflowWind: small wind density

Changed the density to be smaller so that the density flux at the planet surface is a reasonable value.

8 zones per radii

8 zones per radii, tiny omega

Co-rotating OutflowWind -- tiny rotating

In the latest high-res results in the co-rotating frame (blog:bliu06032015), the planetary wind seems unable to expand. This test examines the effect of a tiny co-rotation speed, omega=1d-5. All other parameters are the same as in blog:bliu06032015, namely rho_sw=4d-1…

density of 8 zones per radii;

temperature of 8 zones per radii

Co-rotating Outflow Wind

The setwind subroutine in the current co-rotating frame sets up the stellar wind density proportional to , where is about 200 C.U. for our runs. So to make the stellar wind density the same as in the 2.5D and 3D runs, we have to set it 40000 times larger in these co-rotating runs. Otherwise the stand-off distance and bow-shock size will be very large, as we saw in our former runs…

With the larger stellar wind density, the result for the , Mach 5 run looks promising — we didn't change the density of the planet for this run (still 1 g/cc) and also used the larger planet as in blog:bliu06022015

http://www.pas.rochester.edu/~bliu/OutflowWind/ForAdam/largerPlanet/rho0015_corotating_highStellarDensity.png

Movies

Density movie -- 16 zones per radius

Temperature movie -- 16 zones per radius

Density movie -- 32 zones per radius and twice higher stellar wind density

Temperature movie -- 32 zones per radius and twice higher stellar wind density

PlanetWind:larger planet radius

Doubled Rp and Mp to keep lambda the same, with a new refinement setup. Here are the results of a 2.5D test run with gamma=1.01, lambda=5.3, Rp=4, Mp=1Mj, mach=5 and at most 3 levels of AMR

Mesh http://www.pas.rochester.edu/~bliu/OutflowWind/ForAdam/largerPlanet/mesh0008.png mesh movie;stellar wind mesh movie
Planet wind http://www.pas.rochester.edu/~bliu/OutflowWind/ForAdam/largerPlanet/rhoT0008.png zoomed-in movie to check sonic surface;stellar wind movie

Meeting Update 05/18/2015

  • PN results from Bruce.

;

  • XSEDE allocation
    1. Gordon & Comet almost gone. Stampede ~250,000 or 11.3% left. Details in wiki:ProjectRuns

Ideas for Central Installation of AstroBEAR code

The motivation is to set up a central installation of the code so a user won't need to recompile every time they switch problem modules. — Feel free to put your ideas here…

I. Binary Folder

  1. Compile all problem modules and generate an executable file for each module.
  2. Create a bin folder which contains all executable files (can be links)
  3. Sample Data files folder which contains all the data files for each module

II. AstroBEAR Library

  1. compile the compute engine as a library
  2. each existing problem module, or a user-developed one, links against the library and builds

Photos for the 2015 CIRC Poster Session

http://www.pas.rochester.edu/~bliu/PosterSession/2015/P1010004.JPG http://www.pas.rochester.edu/~bliu/PosterSession/2015/P1010005.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/2015/P1010006.JPG http://www.pas.rochester.edu/~bliu/PosterSession/2015/P1010007.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/2015/P1010013.JPG http://www.pas.rochester.edu/~bliu/PosterSession/2015/P1010026.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/2015/P1010029.JPG http://www.pas.rochester.edu/~bliu/PosterSession/2015/P1010032.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/2015/P1010036.JPG http://www.pas.rochester.edu/~bliu/PosterSession/2015/P1010037.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/2015/P1010038.JPG http://www.pas.rochester.edu/~bliu/PosterSession/2015/P1010039.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/2015/P1010040.JPG http://www.pas.rochester.edu/~bliu/PosterSession/2015/P1010044.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/2015/P1010047.JPG

Meeting Update 04/27/2015 -- Baowei

  • For Bruce
  • Eddie's runs
    1. status of runs
d2.5_M15 22/50
d4.5_M10 46/50
d4.5_M15 23/50
d6.5_M10 48/50
d6.5_M15 23/50
  1. transferring from Gordon to BH. ~5TB total.
  2. current space usage on local machines — blog:bliu04242015

Current Disk Usage

/clover:  11 T
   johannjc    3.0 T
   shuleli     2.5 T

/alfalfa: 5.3 T
   shuleli     1.6 T
   martinhe    1.1 T
   johannjc    422 G
   bliu        262 G
   ehansen     17 G
   ckrau       13 G


/bamboo:  13 T
   madams       3.2 T
   shuleli      2.4 T
   erica        2.2 T
   johannjc     1.1 T
   bliu         973 G

/grass: 5.5 T
   erica        3.4 T
   shuleli      762 G
   johannjc     119 G

Update 04/20/2015 - Baowei

Meeting Update 04/13/2015

  • 2.5D ambient as stellar wind double-check
    1. gamma=1.01; set the ambient density, velocity and temperature the same as the stellar wind
    2. low-res results (no turbulence, and falling back at the tail): density and temperature; zoomed in
    3. High res results:3AMR

Meeting Update 04/06/2015

1) reproduce corotate binary;
2) Rotate_no_wind; does rotation work for 2.5D?
3) Weak_wind along x?;
4) set the (stellar) wind direction along y

Meeting Update 03/31/2015

http://www.pas.rochester.edu/~bliu/OutflowWind/blowOut/rhoT0300_recheck.png movie
  1. gamma=5/3; gamma=5/3,zoomed
  • Worked with users: #438
  • Working on 3D anisotropic conductivity solver

Meeting Update 03/23/2015

  1. OutflowWind: density blowout in 2D. For a 2D run I did last week, the density blows out after running for some time.

http://www.pas.rochester.edu/~bliu/OutflowWind/blowOut/rhoT0300.png

movie


nDim     = 2                            ! number of dimensions for this problem (1-3)
GmX      = 100,400,0                    ! Base grid resolution [x,y,z]
MaxLevel = 1 !5                         ! Maximum level for this simulation (0 is fixed grid)
LastStaticLevel = -1                    ! Use static AMR for levels through LastStaticLevel [-1]
GxBounds = 0d0,-200d0,0d0,100d0,200d0,0.d0      ! Problem boundaries in computational units,format:



  1. Working with SUNY user and on anisotropic conductivity solver.

OutflowWind: Outflow Tracer

Added a tracer to the outflow object:

http://www.pas.rochester.edu/~bliu/OutflowWind/Tracers/tracer0300.png http://www.pas.rochester.edu/~bliu/OutflowWind/Tracers/zoomedtracer0300.png


movie zoomed in movie

Meeting Update 03/09/2015 -- Baowei

  • OutflowWind
    1. latest results for the 2D high-res and 3D low-res runs after the sonic-surface fix are here: blog:bliu02282015_2
    2. got the 3D high-res data; haven't analyzed it yet.
  • Ablative RT
    1. meet with LLE next week?
  • XSEDE Allocation
    1. expires on June 30, 2015
    2. Current resources: Stampede ~700,000 SUs (32%) left, Gordon 520,000 SUs (74%) left, Trestles 100,000 SUs (100%) left. Details can be found at wiki:ProjectRuns

OutflowWind: sonic surfaces check

  1. Fixed bugs causing the stellar wind to be subsonic instead of supersonic. Here are the low-res results (for the frames after the stellar wind kicked in):
http://www.pas.rochester.edu/~bliu/OutflowWind/bugFix_022815/rhoT0200_sonicCheck.png movie
  2. 2.5D high res (3 levels of AMR) and longer runtime

movie; Zoomed in movie

  3. 3D low res run

density; temperature with sonic surface

Meeting Update 02/28/15

Tried to solve the issues with the small standoff radius and turbulence seen with 2 levels of AMR in blog:bliu02232015. Found and fixed bugs in the OutflowWinds problem module. Here are the first-cut results with 2 levels of AMR after the bug fix; the parameters are the same as in the Stone & Proga paper. More results are coming. Will also redo the moving-ambient case…

http://www.pas.rochester.edu/~bliu/OutflowWind/bugFix_022815/rhoT0200.png movie
http://www.pas.rochester.edu/~bliu/OutflowWind/bugFix_022815/rhoT_0172.png

OutflowWind: moving ambient and standoff radius

1. Moving Ambient

Check the idea of setting the ambient moving from the beginning at the same speed as the stellar wind. The stellar wind then kicks in after some time t0, just as in the case of a stationary ambient… The following results are for an ambient density of 1e-4 g/cc with a speed of 1.638e3 km/s, the stellar wind speed calculated from the Stone & Proga paper. They show the planet wind couldn't expand from the top…

http://www.pas.rochester.edu/~bliu/OutflowWind/movingAmbient/zoomedRho0073.png; http://www.pas.rochester.edu/~bliu/OutflowWind/movingAmbient/zoomedT0073.png density movie; temperature movie

2. Standoff Radius

Check the standoff radius for different wind velocities. The ambient has zero initial velocity. The results show the standoff radius is around 7 cu for a subsonic wind and 6 cu for a supersonic wind.

v=8.92e2 km/s, 50X100, 1AMR http://www.pas.rochester.edu/~bliu/OutflowWind/standOffcheck/Rho_50X100_1AMR_v8.92d2.png; http://www.pas.rochester.edu/~bliu/OutflowWind/standOffcheck/T_50X100_1AMR_v8.92d2.png density;temperature
v=1e2 km/s, 25X100, 0AMR http://www.pas.rochester.edu/~bliu/OutflowWind/standOffcheck/Rho_25X100_0AMR_v1d2.png; http://www.pas.rochester.edu/~bliu/OutflowWind/standOffcheck/T_25X100_0AMR_v1d2.png density;temperature
v=1e0 km/s, 25X100, 0AMR http://www.pas.rochester.edu/~bliu/OutflowWind/standOffcheck/Rho_25X100_0AMR_1d0.png; http://www.pas.rochester.edu/~bliu/OutflowWind/standOffcheck/T_25X100_0AMR_1d0.png density;temperature
v=1e0 km/s, 25X100, 2AMR http://www.pas.rochester.edu/~bliu/OutflowWind/standOffcheck/Rho_25X100_2AMR_v1d2.png; http://www.pas.rochester.edu/~bliu/OutflowWind/standOffcheck/T_25X100_2AMR_v1d2.png density;temperature
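For reference, a back-of-envelope way to estimate the standoff radius (not the code's diagnostic; a sketch assuming a 1/r² planetary wind with constant speed) is to balance the planetary-wind ram pressure against the incoming wind's ram pressure, which gives a smaller standoff for a faster wind, consistent with the 7 cu vs 6 cu trend above:

```python
import math

# Back-of-envelope standoff estimate: solve
#   rho_p0*(r0/R)**2 * v_p**2 = rho_w * v_w**2
# for R, assuming a planetary wind rho_p(r) = rho_p0*(r0/r)**2 of constant speed v_p.
def standoff_radius(rho_p0, v_p, r0, rho_w, v_w):
    return r0 * math.sqrt((rho_p0 * v_p**2) / (rho_w * v_w**2))

# A faster ambient wind pushes the standoff radius inward (arbitrary units):
assert standoff_radius(1.0, 1.0, 1.0, 1e-4, 10.0) > standoff_radius(1.0, 1.0, 1.0, 1e-4, 20.0)
```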

OutflowWind: 3D

Higher interpolation order + H_viscosity; Wind_velocity=8.19e2 km/s

density without wind

temperature without wind

density with wind

temperature with wind

OutflowWind: 2.5D Higher-Order Interpolation

Linear Interpolation + H_Viscosity

With Wind (vel 8.19e2km/s)

Zoomed In density

Zoomed in Temperature --fixed colorbar

Density

Temperature

With No Wind

1. symmetric profile

density temperature

2. asymmetric profile

density temperature

Meeting Update 02/09/2015

  • 2D OutflowWind module Fixed a bug in the wind object in the velocity direction when the wind is applied in the +y direction. The following are the new results. The sonic surface looks very different from Stone&Proga's.
Fig 6 reproduced http://www.pas.rochester.edu/~bliu/OutflowWind/2DwithWind/zoomedRhoV_lambda5_wind1.623e3kmps.png;http://www.pas.rochester.edu/~bliu/OutflowWind/2DwithWind/zoomedTV_lambda5_wind1.623e3kmps.png density;temperature
Stone&Proga http://www.pas.rochester.edu/~bliu/OutflowWind/2DwithWind/Fig6.png
  • Ablative RT Growth rate result from Rui:

http://www.pas.rochester.edu/~bliu/AblativeRT/3Dcase/ThickTarget/astrobear-bubble_thickT.jpg

Re-running the job to a longer time.

OutflowWind: Tests with Wind

Updated the problem module with an lWind flag to turn the stellar wind on/off. The stellar wind is applied after a time t_wind>0 to allow the planet with the asymmetric temperature profile to relax to a stable state before the wind kicks in. The wind is applied along the -y direction for 2.5D and the -x direction for…

  • Stellar Wind Speed In Stone&Proga and . Correspondingly in AstroBEAR, , . For case, , so the calculated stellar wind speed . The speed is way too high for the code with the current parameters…
  • 2.5D results
density;temperature
http://www.pas.rochester.edu/~bliu/OutflowWind/2DwithWind/zommedRhoV0136.png;http://www.pas.rochester.edu/~bliu/OutflowWind/2DwithWind/zommedTV0136.png density; temperature;zoomed density;zoomed temperature
http://www.pas.rochester.edu/~bliu/OutflowWind/2DwithWind/zoomedRhoV0200_windSpeed16.3kmps.png;http://www.pas.rochester.edu/~bliu/OutflowWind/2DwithWind/zoomedTV0200_windSpeed16.3kmps.png density; temperature
Stone & Proga http://www.pas.rochester.edu/~bliu/OutflowWind/2DwithWind/Fig6.png
  • Interface growth rate for Ablative RT Regenerated 50-frame txt files for Rui, as the 200- and 100-frame files take too long to read in…
  • New user Helped Karan set up on a local machine and walked him through using the code.

Meeting Update 01/26/2015 -- Baowei

  • OutflowWind fixed a bug found when running the symmetric profile for different parameter values. Here are the updated results for figures 1, 2 and small/large lambda… Looks better.
Fig1 from AstroBEAR updated http://www.pas.rochester.edu/~bliu/OutflowWind/2DTests/fig1_lambda5_rev.png;http://www.pas.rochester.edu/~bliu/OutflowWind/2DTests/fig1T_lambda5_rev.png
Fig 1 in Stone&Proga http://www.pas.rochester.edu/~bliu/OutflowWind/2DTests/Fig1_StoneProga09.png
Fig2 from AstroBEAR updated http://www.pas.rochester.edu/~bliu/OutflowWind/2DTests/fig2_lambda5_rev.png density symmetric lambda=5; temperature symmetric lambda=5; density, asymmetric lambda=5;temperature, asymmetric lambda=5;density symmetric lambda=50; temperature symmetric lambda=50;
Fig2 Stone&Proga http://www.pas.rochester.edu/~bliu/OutflowWind/2DTests/Fig2_StoneProga09.png
  • Growth rate analysis for AblativeRT. Rui's script takes too long to run on the 3D data. Regenerating a smaller data set for him… Will modify his script to read hdf5 directly…

Meeting Update 01/20/2015 -- Baowei

  • OutflowWind The 2D tests with asymmetric temperature show the night-side density is higher than in the spherically-symmetric case, while in Stone&Proga09 it's lower… Also tried different values (from 0.01 to 1000); the results are similar…
    1. night side density blog:bliu01132015
    2. large : density;temperature; total energy
  • Update from LLE: still working on checking the interface growth rate for 3D. Had to copy the data over to their machines, as the gdl on alfalfa doesn't work well with their idl code…
  • resistivity & viscosity development: Got the modules for astrobear1.0 from Shule. Will take a look…

OutflowWind: 2D Tests

Deprecated the point_mass and planet_radius parameters: point_mass is now planet_mass, the mass of the planet; rho is the air density and radius is the position of the outflow boundary. Trying to reproduce figure 2 in the Stone&Proga paper…

For this setup, the night side (theta=PI) is off and doesn't show the inflow, and the density on the night side is higher… Not sure if it's a parameter issue…

0.6 Jupiter Mass

Fig 2 Reproduced http://www.pas.rochester.edu/~bliu/OutflowWind/2DTests/repfigure2.png symmetric density; symmetric temperature; asymmetric density; asymmetric temperature
Fig 2 in Stone&Proga09 http://www.pas.rochester.edu/~bliu/OutflowWind/2DTests/Fig2_StoneProga09.png

Corresponding Fig 1, with the temperature in the paper's units; the temperature values match the Stone&Proga09 paper.

Fig 1 reproduced http://www.pas.rochester.edu/~bliu/OutflowWind/2DTests/Lambda5/density_0.6JptMass_lambda5_asymmetric_10d.png;http://www.pas.rochester.edu/~bliu/OutflowWind/2DTests/Lambda5/temp_0.6JptMass_lambda5_asymmetric_10d_redo.png
Fig 1 Stone&Proga09 http://www.pas.rochester.edu/~bliu/OutflowWind/2DTests/Fig1_StoneProga09.png

OutflowWind: 2.5D Tests

Test the OutflowWind module and make sure the results agree with Stone&Proga 2009.

  1. Test with no initial temperature profile

In the version of the outflow object in the code I use, some lines involving energy have E=pressure while others have E=gamma7*pressure. Modified them all to E=gamma7*pressure, though this needs double-checking with Jonathan… Added an lTempProfile flag in the Outflow object. The following are 2.5D test results for high (10000K) and low (100K) temperatures with point mass = 0.1 Jupiter mass. The ambient temperatures are 100K and 1K correspondingly. The results show the planet material escapes at 10000K but stays bound to the point mass at 100K.

T=10000K Density Movie;Temperature Movie
T=100K Density Movie;Temperature Movie
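For what it's worth, the coarse-grid reader elsewhere in this blog builds the total energy as E = ½ρ|v|² + p/(γ-1), which suggests gamma7 here plays the role of 1/(γ-1) (my inference, pending the double-check with Jonathan):

```python
# Total energy as assembled elsewhere in this blog's Fortran:
#   energy = 0.5*rho*(vx**2+vy**2+vz**2) + p/(gamma-1)
# i.e. the gamma7 factor presumably corresponds to 1/(gamma-1).
def total_energy(rho, v, p, gamma=5.0/3.0):
    return 0.5 * rho * sum(c * c for c in v) + p / (gamma - 1.0)

print(total_energy(1.0, (0.0, 0.0, 0.0), 1.0))  # pure thermal case: p/(gamma-1)
```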
  2. Parameter comparison with Stone&Proga 2009

To compare with the results and parameters used in the Stone&Proga paper, we need to double-check the planet_density together with planet_mass, planet_radius, point_mass(?) and the temperatures. The current code specifies the density, mass & radius separately (in the current default data files, the density value is ~35% off) and uses the planet mass to calculate the important parameter in Stone&Proga (another important one is the planet density) around the origin. I'm modifying the code to set only the planet mass & radius and then calculate the density, which is probably a better way and hopefully more accurate for comparing with the paper. Not sure if I should include the point mass when calculating the density…
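The "mass & radius in, density out" change amounts to using the mean density (a sketch; whether to fold the point mass into the mass here is the open question above):

```python
import math

# Sketch: derive the mean planet density from mass and radius instead of
# specifying all three independently (the route by which the ~35% mismatch
# in the default data files arose).
def mean_density(mass, radius):
    return mass / ((4.0 / 3.0) * math.pi * radius**3)

print(mean_density(1.0, 1.0))  # 3/(4*pi), about 0.2387
```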

3D OutflowWind Module

Fixed bugs in the 3D OutflowWind module. Here are the low-res results (middle section). The high-res test is still running; results will be posted soon…

Temperature http://www.pas.rochester.edu/~bliu/OutflowWind/1stCut/TV0001.png
Density http://www.pas.rochester.edu/~bliu/OutflowWind/1stCut/rhoV0001.png

Movies: Temperature and Velocity; Density and Velocity; Long Time: Temperature and Velocity; Long Time: Density and Velocity

Globus gridftp for transferring big data

I used this very convenient tool when transferring ~2TB of data from Gordon at SDSC to the CIRC machines: it's as simple as dragging the files from A to B, without worrying about disconnects. Here are the basic steps:

  1. Sign up for an account on globus.org


  2. Sign in


  3. Transfer Files with Endpoints: click "Transfer Files" at the top right. Set up the path and endpoints as shown below, pick the files, and use the arrows (blue triangles) to start the transfer.

1) Endpoints for CIRC machines

For CIRC machines, the endpoint is "univofrochester#circ" and you need to log in with your username and password for Bluehive/BlueStreak. The default path is your home directory on Bluehive, but you can also access your bh scratch or bgq scratch. For example, for me: /scratch/bliu17 is bh scratch and /gpfs/fs2/bgqscratch/bliu17 is bgq scratch.

2) Endpoints for XSEDE machines:

xsede#gordon for Gordon at SDSC (path example: /oasis/scratch/bliu/ for my scratch); xsede#stampede for Stampede at TACC (path example: /scratch/01688/bliu for my scratch). More information can be found here: https://www.xsede.org/data-transfers

3) Endpoint for your own machine:

You can also set up an endpoint for your own laptop. Click "Manage Endpoints" at the top right, then "add Globus Connect Personal", and follow the instructions.


  4. You will receive emails once the transfer is done or runs into problems.

pnStudy: M2-9

Results from Bruce:

pnStudy: 45 Deg Taper and Issue with 2.5D MHD

  • Divergence problem with 2.5D MHD in pnStudy. Tried to test Bruce's idea of a 45 deg taper:

   Martin has the option to launch collimated flows with a Gaussian taper  To run this you configure problem.data with an opening angle=90 (spherical flow) which Martin modulates with a gaussian of the form exp-(latitude/user-specified-gaussian angle)^2.  You find that modulation function somewhere.  I should think that its form can be changed from
   exp-((phi)/W)^2,                   where phi is the latitude angle and
                                            W is the user-specified gaussian width called "tf" in problem.data
to this:
   exp-((phi-PA)/W)^2,                where PA is the user-specified flow angle

PA might as well default to 45 deg for cones and gaussian taper sims (that is, Outflowtype=2).
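Bruce's proposed modulation can be sketched as a function (phi, PA and W in degrees; the variable names are illustrative, not the module's):

```python
import math

# Gaussian taper with a user-specified flow angle PA:
#   modulation = exp(-((phi - PA)/W)**2)
# which reduces to the current exp(-(phi/W)**2) form when PA = 0.
def taper(phi_deg, pa_deg=45.0, w_deg=15.0):
    return math.exp(-((phi_deg - pa_deg) / w_deg) ** 2)

assert taper(45.0) == 1.0        # the peak now sits at the flow angle
assert taper(0.0) < taper(45.0)  # the pole is no longer the maximum
```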

But found this magnetic-divergence issue on bluehive:

https://www.pas.rochester.edu/~bliu/pnStudy/45degTaper/divergenceMag_377y.png Movie

Need more time to confirm the results and dig into why…

  • 2.5D Result from alfalfa
https://www.pas.rochester.edu/~bliu/pnStudy/45degTaper/45deg2.5D_471y_fromAlfalfa.png 2.5D Movie
  • Test for 45 deg taper with 2D runs (taper15n4e2v200namb4e4)

Fixing a bug just found. Will update results soon…

pnStudy: finest refinement with rectangle shape

This is a revisit for the new refinement for pnStudy module (see blog:bliu11182014). This is Bruce's idea:

"My sense is that the wiggles develop only after the rim emerges from the fixed high-res region of the sim at 0 < x < 0.5 and 0 < y < 0.5. To check this out, is there a way to reconfigure the high-res zone so that it is three times thinner and taller? "

Below are the results; the mesh is kept to show the rectangular refinement works. The results clearly show the difference in the areas along the x and y axes:

With Mesh No Mesh
http://www.pas.rochester.edu/~bliu/pnStudy/newRef/mesh_recRef0021.png http://www.pas.rochester.edu/~bliu/pnStudy/newRef/nomesh_recRef0021.png
movie with mesh movie without mesh

pnStudy: new refinement

I implemented the new refinement idea we discussed yesterday: refine only the outflow launch region to the highest level and see if it helps with the weird wiggles close to the y-axis. This run has 6 levels of AMR refinement (equivalent to 7 levels, as the box is ½ the size of earlier runs) only on the outflow object, and a maximum of 3 AMR levels outside the outflow. It also uses Rho as the refinement variable instead of px and py… Just as Bruce said, the resolution doesn't help much.

http://www.pas.rochester.edu/~bliu/pnStudy/newRef/RhoRefqTol30021.png

Movie

pnStudy: Mesh for Spherical AGB wind with 5-level AMR

Trying to figure out how the grid size affects the wiggles developing near the x and y axes in Bruce's run with spherical winds into a spherical AGB environment. From the image below it seems the AMR works as expected. While the finest refinement is clearly around the spherical wind area, things seem better for larger Rjet than for Rjet=0.5.

  • Reproduce Bruce's results with Rjet=0.5 (sphAGBn10v200namb4e4)
Zoom-in image
http://www.pas.rochester.edu/~bliu/pnStudy/Wiggles/sphAGB_dens_5AMRmesh_2.png http://www.pas.rochester.edu/~bliu/pnStudy/Wiggles/sphAGB_dens_5AMRmesh_3.png
  • Rjet=5000AU (or 10 cu), with all other parameters the same
t=0 http://www.pas.rochester.edu/~bliu/pnStudy/Wiggles/Mess_Rjet10cu_t0.png
t=100y http://www.pas.rochester.edu/~bliu/pnStudy/Wiggles/Mess_Rjet10cu_t21.png

Mesh movie for Rjet=5000AU (or 10 cu) and zoomed in on the edge

pnStudy: Ambient Temperature close to Jet drops

Trying to figure out why the ambient temperature close to the jet drops in the pnStudy module (blog:bliu11102014). This wasn't seen before (in Keira's runs). Here I tried to reproduce one of Bruce's runs and one of Keira's. For Bruce's run, the ambient temperature is 1000K, and the temperature close to the jet drops to 100K around 660y. For Keira's run, the ambient temperature is 100K, and the temperature of the similar area doesn't drop, as listed in 1 and 2 below.

A. Cooling Floor Temperature

Jonathan helped me figure out that cooling plays a role here. The floor temperature for the cooling is 100K: when the gas reaches 100K, the cooling shuts off. That's why Keira's case doesn't show the drop, but whenever the ambient temperature is higher than 100K, there is a drop.

B. Misleading comments in problem.data

It was also found that the comments for the temperature parameters in problem.data were misleading. Here's the current problem.data:

!      BACKGROUND or “AMBIENT” SECTION. Values apply to origin
tamb = 1d3           ! ambient temp, 1cu = 0.1K (100K=1000cu)     

...

tjet  = 1000d0      ! flow temp, 1cu = 0.1K (100K=1000cu)

Both of these should be in Kelvin, with 1 cu = 10 K:

!      BACKGROUND or “AMBIENT” SECTION. Values apply to origin
tamb = 1d3           ! ambient temp in Kelvin ( 1cu = 10K)

...

tjet  = 1000d0      ! flow temp in Kelvin ( 1cu = 10K)
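With TEMPSCALE = 10 (see the Scales.data listing later in this blog), the Kelvin-to-computational-units conversion is just a divide by 10:

```python
# 1 cu = 10 K, per TEMPSCALE = 10.0 in Scales.data
TEMPSCALE = 10.0

def kelvin_to_cu(t_kelvin):
    return t_kelvin / TEMPSCALE

assert kelvin_to_cu(1000.0) == 100.0  # tamb = 1d3 K is 100 cu, not 1000 cu
```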

C. Results

  1. Reproduce Keira's run with current code
Main Parameters Baowei's Run Keira's Run
tamb = 1d2; namb = 4e2; outflowType = 1; njet = 1d2; Rjet = 1d0; vjet = 2e7; tjet = 1d1; http://www.pas.rochester.edu/~bliu/pnStudy/AmbientTempIssue/jetTemperature_repKeira.png https://dl.dropboxusercontent.com/u/35223511/Research/final%20wiki/vel_temps/jet_10k_veltemp0000.png
  2. Reproduce Bruce's run
Main Parameters Baowei's Run Bruce's Run
tamb = 1d3; namb = 4e4; outflowType = 1; njet = 4d4; Rjet = 2d0; vjet = 2e7; tjet = 1000d0; http://www.pas.rochester.edu/~bliu/pnStudy/AmbientTempIssue/jetTemperature_repBruce.png http://www.pas.rochester.edu/~bliu/pnStudy/AmbientTempIssue/ambientTemp_fromBruce.jpg
  3. Test for different ambient temperatures, with all other parameters the same
Tamb =1d2 http://www.pas.rochester.edu/~bliu/pnStudy/AmbientTempIssue/jetTemperature_Tabt100.png
Tamb =2d2 http://www.pas.rochester.edu/~bliu/pnStudy/AmbientTempIssue/jetTemperature_Tabt200.png
Tamb =3d2 http://www.pas.rochester.edu/~bliu/pnStudy/AmbientTempIssue/jetTemperature_Tabt300.png
Tamb =4d2 http://www.pas.rochester.edu/~bliu/pnStudy/AmbientTempIssue/jetTemperature_Tabt400.png
Tamb =5d2 http://www.pas.rochester.edu/~bliu/pnStudy/AmbientTempIssue/jetTemperature_Tabt500.png

pnStudy: jet & ambient Temperature

The ambient temperature is set to 1000K initially. As the jet moves, the ambient temperature at the bottom drops to 100K. Results are similar for both the new code (vJet does not depend on Rjet) and the old code (v=vJet/Rjet…); the jet velocity in the old code is half that of the new code due to the Rjet dependence.

New code(5AMR) Old code (3AMR)
t=0 https://www.pas.rochester.edu/~bliu/pnStudy/jetTemp_5AMR_t0.png https://www.pas.rochester.edu/~bliu/pnStudy/jetTemp_3AMR_old_t0.png
t=600y https://www.pas.rochester.edu/~bliu/pnStudy/jetTemp_5AMR_t660.png https://www.pas.rochester.edu/~bliu/pnStudy/jetTemp_3AMR_old_t660.png
movie New code:jet&Abient Temperature Old code:jet&Abient Temperature

pnStudy:Test new velocity setup for jet

Test the jet velocity setup that does not depend on Rjet (blog:bliu11032014):

    v=vjet/velscale*(/0d0,1d0,0d0/)/fact*timef 
    ! v=vjet/velscale*(/0d0,1d0,0d0/)/Rjet*fact*timef !10 mar 2014
New Results (3 levels AMR) Old Results (3 levels AMR)
Rjet=2, vJet=200 http://www.pas.rochester.edu/~bliu/pnStudy/JetVelocity/new_R2v200_t0.png; http://www.pas.rochester.edu/~bliu/pnStudy/JetVelocity/new_R2v200_t660.png
Rjet=2, vJet=100 http://www.pas.rochester.edu/~bliu/pnStudy/JetVelocity/new_R2v100_t0.png; http://www.pas.rochester.edu/~bliu/pnStudy/JetVelocity/new_R2v100_t660.png http://www.pas.rochester.edu/~bliu/pnStudy/JetVelocity/Old_rJet2_vJet200_0y_3AMR.png; http://www.pas.rochester.edu/~bliu/pnStudy/JetVelocity/Old_rJet2_vJet200_660y_3AMR.png

No-Rjet_dependence Code:Rjet=2,Vjet=200;

No-Rjet_dependence Code:Rjet=2,Vjet=100

Rjet_dependence Code:Rjet=2,Vjet=100 effectively

pnStudy: Jet velocity Vs Jet Radius

In pnStudy, the velocity of the jet is set according to the jet radius:

!======== J E T=========:
IF (outflowType == collimated) then
  q(i,j,k,itracer4)=1d0
  fact=1d0 !10 mar 2014
  !fact=exp(-(x**2+z**2)/jet_width**2) ! b 4 10 mar 2014
  qjet(1)=njet/nScale*fact
  v=vjet/velscale*(/0d0,1d0,0d0/)/Rjet*fact*timef !10 mar 2014
  !v=vjet/velscale*(/0d0,y,0d0/)/Rjet*fact*timef !b 4 10 mar 2014
  qjet(imom(1:nDim))=v(1:nDim)*qjet(1)*& !ramp up velocity
  mom_flow !5 may 2014, time dependent mom flux requested by bbalick. 
  qjet(iE)=qjet(1)*tjet/TempScale*gamma7

This makes the jet velocity differ from vJet (the value given in problem.data) whenever Rjet != 1.
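So the effective launch speed is vjet/Rjet. A quick sketch (Python; vjet in cm/s as in problem.data) reproduces the speeds measured in the three runs below:

```python
# Effective jet speed when the code sets v = vjet/Rjet:
def effective_vjet_kms(vjet_cms, rjet_cu):
    return vjet_cms / 1e5 / rjet_cu  # cm/s -> km/s, then the Rjet division

assert effective_vjet_kms(2e7, 2.0) == 100.0  # the Rjet=2 run
assert effective_vjet_kms(2e7, 1.0) == 200.0  # the Rjet=1 run
assert effective_vjet_kms(2e7, 4.0) == 50.0   # the Rjet=4 run
```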

  1. Rjet=2d0, vJet=2e7 as in problem.data
outflowType  = 1    ! TYPE OF FLOW    1 cyl jet, 2 conical wind, 3 is clump
njet  = 4d4         ! flow density at launch zone, 1cu = 1cm^-3
Rjet  = 2d0         ! flow radius at launch zone, 1cu = 500AU
vjet  = 2e7         ! flow velocity , 1cu = cm/s (100km/s=1e7cu)
tjet  = 1000d0      ! flow temp, 1cu = 0.1K (100K=1000cu)
tt    = 0.0d0       ! flow accel time, 1cu = 8250y (0.02 = 165y)
open_angle = 00d0   ! conical flow open angle (deg)
tf    = 15d0         ! conical flow Gaussian taper (deg) for njet and vjet; 0= disable
sigma = 0d0         ! !toroidal.magnetic.energy / kinetic.energy, example 0.6

Here's the file of Scales.data

 TIMESCALE       =   260347122628.507     ,
 LSCALE  =  7.479899800000000E+015,
 MSCALE  =  2.099937121547526E+026,
 RSCALE  =  5.017864740000001E-022,
 VELSCALE        =   28730.4876830661     ,
 PSCALE  =  4.141950900000000E-013,
 NSCALE  =   300.000000000000     ,
 BSCALE  =  2.281431350619136E-006,
 TEMPSCALE       =   10.0000000000000     ,
 SCALEGRAV       =  2.269614763656989E-006

The velocity plot shows the jet velocity is 100 km/s instead of 200 km/s. This can also be confirmed from the distance the jet travels during 660 yrs:

t=0 t=660 y
http://www.pas.rochester.edu/~bliu/pnStudy/JetVelocity/rJet2_vJet200_0y_rep.png http://www.pas.rochester.edu/~bliu/pnStudy/JetVelocity/rJet2_vJet200_660y_rep.png
  2. Rjet=1d0, vJet=2e7 as in problem.data
outflowType  = 1    ! TYPE OF FLOW    1 cyl jet, 2 conical wind, 3 is clump
njet  = 4d4         ! flow density at launch zone, 1cu = 1cm^-3
Rjet  = 1d0         ! flow radius at launch zone, 1cu = 500AU
vjet  = 2e7         ! flow velocity , 1cu = cm/s (100km/s=1e7cu)
tjet  = 1000d0      ! flow temp, 1cu = 0.1K (100K=1000cu)
tt    = 0.0d0       ! flow accel time, 1cu = 8250y (0.02 = 165y)
open_angle = 00d0   ! conical flow open angle (deg)
tf    = 15d0         ! conical flow Gaussian taper (deg) for njet and vjet; 0= disable
sigma = 0d0         ! !toroidal.magnetic.energy / kinetic.energy, example 0.6

Scales.data is the same.

The velocity plot shows the jet velocity is 200 km/s, as expected. This can also be confirmed from the distance the jet travels during 660 yrs:

t=0 t=660 y
http://www.pas.rochester.edu/~bliu/pnStudy/JetVelocity/rJet1_v200_0y_3AMR.png http://www.pas.rochester.edu/~bliu/pnStudy/JetVelocity/rJet1_v200_660y_3AMR..png
  3. Rjet=4d0, vJet=2e7 as in problem.data
outflowType  = 1    ! TYPE OF FLOW    1 cyl jet, 2 conical wind, 3 is clump
njet  = 4d4         ! flow density at launch zone, 1cu = 1cm^-3
Rjet  = 4d0         ! flow radius at launch zone, 1cu = 500AU
vjet  = 2e7         ! flow velocity , 1cu = cm/s (100km/s=1e7cu)
tjet  = 1000d0      ! flow temp, 1cu = 0.1K (100K=1000cu)
tt    = 0.0d0       ! flow accel time, 1cu = 8250y (0.02 = 165y)
open_angle = 00d0   ! conical flow open angle (deg)
tf    = 15d0         ! conical flow Gaussian taper (deg) for njet and vjet; 0= disable
sigma = 0d0         ! !toroidal.magnetic.energy / kinetic.energy, example 0.6

Scales.data is the same.

The velocity plot shows the jet velocity is 50 km/s instead of 200 km/s. This can also be confirmed from the distance the jet travels during 660 yrs:

t=0 t=660 y
http://www.pas.rochester.edu/~bliu/pnStudy/JetVelocity/rJet4_vJet200_0y.png http://www.pas.rochester.edu/~bliu/pnStudy/JetVelocity/rJet4_vJet200_600y.png

Candidate movies to show on Collaboratory Wall for the film

These are the candidate movies I've received so far:

  1. Eddie's high-res 2.5D MHD jet simulations (the one with all 4 jets evolving in [SII] and Halpha) https://astrobear.pas.rochester.edu/trac/blog/ehansen09292013
  2. Some of Bruce/Kira's simulations of PN lobe evolution

a) 3D: http://www.pas.rochester.edu/~bliu/pnStudy/rhoPN_3d.gif

b) 2D: http://www.pas.rochester.edu/~bliu/pnStudy/2Dclump_bl.gif

http://www.pas.rochester.edu/~bliu/pnStudy/2Dmix.gif

data set to bring up directly in VisIt

  3. Some of Zhuo's simulations of fallback shells and binary evolution

https://astrobear.pas.rochester.edu/trac/wiki/u/zchen/simulations

https://astrobear.pas.rochester.edu/trac/wiki/u/zchen/3Dsimulations

  4. A rotating version of a SHAPE visualization of one of the Bruce/Kira simulations? https://astrobear.pas.rochester.edu/trac/blog/crl618Figures http://www.pas.rochester.edu/~martinhe/2012/crl/f4.
  5. Magnetic Tower

http://www.pas.rochester.edu/~bliu/adiabat-side-slower.mp4

https://astrobear.pas.rochester.edu/trac/blog/tower

https://www.youtube.com/watch?v=5inCYmHNGN0

https://www.youtube.com/watch?v=EvauxELBHGY

https://www.youtube.com/watch?v=rMnNRlz9JBY

  6. Accretion Disks

https://www.youtube.com/watch?v=fXYOz8RLVFs

http://www.pas.rochester.edu/~martinhe/2012/binary/10lines2.gif

http://www.pas.rochester.edu/~martinhe/2012/binary/20lines2.gif

http://www.pas.rochester.edu/~martinhe/2012/17sep12.gif

http://www.pas.rochester.edu/~martinhe/2011/binary/gene-4.gif

http://www.pas.rochester.edu/~martinhe/2011/binary/20mar1144.gif

http://www.pas.rochester.edu/~martinhe/2011/binary/40au-bb5-3d.gif

http://www.pas.rochester.edu/~martinhe/disk.pdf

  7. YouTube channel:

https://www.youtube.com/user/URAstroBEAR

Meeting Update 09/22/2014 -- Baowei

  1. Worked with users from SUNY Oswego and Laurence's student to install AstroBEAR on their machines; ran into issues with the compilers and libraries on their machines.
  2. configure script (ticket #255): the first version on the development branch works on local machines and hopefully most other machines.

1) The problem module is set with the "--with-module=" option. The module list will be shown in the README and INSTALL documents. This option is required; an error is reported if no module is given.

2) Checks for the hdf5, fftw3 and hypre libraries. The paths can be set with the "--with-hdf5=", "--with-fftw3=" and "--with-hypre=" options. These are optional; if a library is not found, it reports an error and provides help information about downloading and installing the library.

3) A new run_dir folder is created. If the folder exists, a backup "run_dir_Currenttime/" is made to avoid erasing previous runs. After compilation, all necessary data files and the executable "astrobear" are copied to the run_dir/ folder, and an out/ subfolder is also created. Will add the pbs and slurm sample scripts to make run_dir/ really ready to go on all machines.

4) pthreads support is there but hasn't been tested.

5) Haven't included the IBM xl compilers, OpenMP, etc. yet, but planning to.

  3. OpenMP optimization (ticket #361): on it…
  4. Trying to install ParaView on Bluehive: currently getting errors with the qt4 library and VTK.

Science Meeting Update 09/08/14 -- Baowei

  1. Coarse grid + 3 levels of AMR: Movie with no Mesh;

Movie with Mesh

  2. Compare with the results at the original resolution with 0 levels of AMR in blog:bliu08182014 (still waiting for the growth rate).
  3. The way to get a coarse grid with AstroBEAR: blog:bliu08282014

Use AstroBEAR to transfer 3D Ablative RT initial data from a fine grid to a coarse grid

The initial grid spacing of the data from LLE is too small, and AstroBEAR runs slowly on such a base grid. Here's how to transfer the initial data to a twice-coarser grid with AstroBEAR.

  • 1. Set the base grid resolution to half and the AMR level to 1 in global.data
GmX      = 50, 601, 50 !100,1205, 100                   ! Base grid resolution [x,y,z]
MaxLevel = 1                            ! Maximum level for this simulation (0 is fixed grid)
  • 2. Set ErrFlag to 1 everywhere if not restarting (i.e. when reading in the 3D txt data).
  SUBROUTINE ProblemSetErrFlag(Info)
    !! @brief Sets error flags according to problem-specific conditions.
    !! @param Info A grid structure.

    TYPE (InfoDef) :: Info

    ! If we need to generate coarse-grid data (with 1 level of AMR),
    ! set ErrFlag everywhere to 1.
    ! Note: comparing a LOGICAL with .eq. is nonstandard; use .NOT. instead.
    if (InitialProfile .eq. 3 .AND. .NOT. lRestart) then
       Info%ErrFlag(:,:,:) = 1
    end if

  END SUBROUTINE ProblemSetErrFlag
  • 3. Read the txt data to level 1 grid instead of level 0. Level 0 grid needs to be initialized also to avoid protections.
        DO i = 1, mx
        DO j = 1, my
        DO k = 1, mz
            read(11,*) pos(1), pos(2), pos(3), rho
            read(12,*) pos(1), pos(2), pos(3), p
            read(13,*) pos(1), pos(2), pos(3), vx
            read(14,*) pos(1), pos(2), pos(3), vy
            read(15,*) pos(1), pos(2), pos(3), vz

            ! Rescale from the LLE data units to computational units
            rho = 5.0*rho/rScale
            p   = 1.25E+14*p/pScale
            vx  = 5E+6*vx/VelScale
            vy  = 5E+6*vy/VelScale
            vz  = 5E+6*vz/VelScale

            ! Level 0 must be initialized too, to avoid protections
            if (Info%level .eq. 0) then
               Info%q(i,j,k,1)  = 1.0
               Info%q(i,j,k,2)  = 0.0
               Info%q(i,j,k,3)  = 0.0
               Info%q(i,j,k,4)  = 0.0
               Info%q(i,j,k,iE) = 0.0
            end if

            ! The actual data go on level 1
            if (Info%level .eq. 1) then
               Info%q(i,j,k,1)  = rho
               Info%q(i,j,k,2)  = rho*vx
               Info%q(i,j,k,3)  = rho*vy
               Info%q(i,j,k,4)  = rho*vz
               energy = 0.5*rho*(vx**2+vy**2+vz**2) + p/(gamma-1d0)
               Info%q(i,j,k,iE) = energy
            end if

        end do
        end do
        end do
  • 4. Run the program from start. Frame 0 will have level=1 grid everywhere.
http://www.pas.rochester.edu/~bliu/AblativeRT/3Dcase/CoarseGrid/test/CoarseGrid_1AMR_test_frame0.png
  • 5. Restart from Frame 0 (ErrFlag will then be 0). Frame 1 after a tiny step (or any frame other than frame 0) will only have level 1 refinement at the interface.
http://www.pas.rochester.edu/~bliu/AblativeRT/3Dcase/CoarseGrid/test/CoarseGrid_1AMR_test_dt.png
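Incidentally, the fine-to-coarse transfer this recipe produces is equivalent to an AMR restriction: each 2x2x2 block of fine cells averages down to one coarse cell. A minimal NumPy sketch (illustrative only, not AstroBEAR code):

```python
import numpy as np

def restrict(fine):
    """Average each 2x2x2 block of fine cells into one coarse cell."""
    nx, ny, nz = fine.shape
    assert nx % 2 == 0 and ny % 2 == 0 and nz % 2 == 0
    # Reshape splits each axis into (coarse index, sub-cell index),
    # then averaging over the sub-cell axes does the restriction.
    return fine.reshape(nx // 2, 2, ny // 2, 2, nz // 2, 2).mean(axis=(1, 3, 5))

# A uniform field is unchanged by restriction:
fine = np.full((4, 4, 4), 5.0)
print(restrict(fine).shape)  # (2, 2, 2)
```

This conserves the cell-averaged quantities, which is why restarting from the restricted frame is safe.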

Science Meeting Update 08/18/14 -- Baowei

  1. thick target:
http://www.pas.rochester.edu/~bliu/AblativeRT/3Dcase/2ndCut/ThickSec0499.png
  2. thin target: movie
http://www.pas.rochester.edu/~bliu/AblativeRT/3Dcase/2ndCut/ThinSec0340.png

Meeting Update 08/05/2014 -- Baowei

  • 3D Ablative RT
    1. Extended the 2D data to 3D: we expect exactly the same results as in 2D. Tried putting gravity along different directions and found the code works as expected only when gravity is along y. Running a job with gravity along y.
Gravity comparison http://www.pas.rochester.edu/~bliu/AblativeRT/3Dcase/checkGravity/3d2dgmass_galongyz.png http://www.pas.rochester.edu/~bliu/AblativeRT/3Dcase/checkGravity/3d2dgmass_galongyz2.png
y-direction Movies density; temperature
    2. Reran the 3D conduction front tests along different directions to a longer time (as long as the 2D tests), since the total mass plots look OK. Didn't find anything wrong, although the x- and z-direction cases take a few more cycles to converge than the y-direction at later times: conduction front

Adjusting Gravity in 3D Ablative RT module

Gravity value with different bottom heat flux for the current code

Heat flux at the bottom seems too low. Tried somewhat larger fluxes: gravity increases first, then drops. Something seems very wrong.

flb=6.0E+21 (calculated) flb=6.8E+21 flb=7.13E+21
gravity & total mass http://www.pas.rochester.edu/~bliu/AblativeRT/3Dcase/checkGravity/3dgravity_thin_6.0.png http://www.pas.rochester.edu/~bliu/AblativeRT/3Dcase/checkGravity/3dgravity_thin6.8.png http://www.pas.rochester.edu/~bliu/AblativeRT/3Dcase/checkGravity/3dgravity_thin7.13.png
Int(P+rho*V2) http://www.pas.rochester.edu/~bliu/AblativeRT/3Dcase/checkGravity/3d_integralsurface_thin_6.0.png http://www.pas.rochester.edu/~bliu/AblativeRT/3Dcase/checkGravity/3d_integralsurface_thin_6.8.png http://www.pas.rochester.edu/~bliu/AblativeRT/3Dcase/checkGravity/3d_integralsurface_thin_7.13.png

Compare with 2D Case

1. Initial Profile

The initial profiles are close enough, except for the momentum along the gravity direction.

Rho&T http://www.pas.rochester.edu/~bliu/AblativeRT/3Dcase/2DCompare/initPro2D3DCompare.png
py or pz http://www.pas.rochester.edu/~bliu/AblativeRT/3Dcase/2DCompare/initialMomentumCompare.png

2. heat diffusion check

Turn hydro off. Compare the 2D (gravity along -y direction) and 3D (gravity along -z direction). The solver seems OK.

2D, flb=0 2D, flb=6E21
3D, flb=0 3D, flb=6E21

3. Pure hydro test

Turn the heat diffusion off. Compare the density and the momentum along the gravity direction (py for 2D and pz for 3D); the plots are along the center line.

2D rho 2D, momentum 2D, pressure
3D rho 3D, momentum 3D, pressure

Here's a picture to compare the py (2D) and pz (3D) — both along the center line at time=1.345E-5 (cu) http://www.pas.rochester.edu/~bliu/AblativeRT/3Dcase/2DCompare/Momemtum_tf.png

4. compare top and bottom integral of P+Rho*V2

The integrals (pressure + rho*v2) were calculated with the Integrate (2D) and Weighted Variable Sum (3D) queries in VisIt.

Bottom Top
http://www.pas.rochester.edu/~bliu/AblativeRT/3Dcase/2DCompare/Bottom_int_PplusRhoV2.png http://www.pas.rochester.edu/~bliu/AblativeRT/3Dcase/2DCompare/Top_int_PplusRhoV2.png

Check the derivative of momentum

2D 3D
http://www.pas.rochester.edu/~bliu/AblativeRT/3Dcase/2DCompare/dTotalPydt_2D.png http://www.pas.rochester.edu/~bliu/AblativeRT/3Dcase/2DCompare/dTotalPzdt_3D.png

Meeting Update 07/21/2014 -- Baowei

  • Ablative RT
    1. Thick Target:
Initial profile http://www.pas.rochester.edu/~bliu/AblativeRT/3Dcase/initialProfile/init_Profile.png
Movies Rho; RhoPlot;TPlot
Gravity http://www.pas.rochester.edu/~bliu/AblativeRT/3Dcase/checkGravity/3dgravity_thick.png
  2. Thin Target
Initial profile http://www.pas.rochester.edu/~bliu/AblativeRT/3Dcase/ThinTarget/initialProfile_thin/init_Profile_thinTarget.png
Movies RhoPlot;TPlot
Gravity http://www.pas.rochester.edu/~bliu/AblativeRT/3Dcase/checkGravity/3dgravity_thin.png
  3. 2D Gravity

http://www.pas.rochester.edu/~bliu/AblativeRT/3Dcase/checkGravity/2dgravity_st.png

Disk Space Usage

  • Local Machines
Machine Size Jonathan Shule Baowei Martin Eddie Erica
bambooData 12 TB 4 TB 3.4 TB 1.8 TB 1.2 TB ? 0.5 TB
alfalfaData 4.2 TB 1.8 TB 0.7 TB 0.06TB 1.1 TB 0.4 TB 0.2 TB
grassData currently inaccessible
  • CIRC: 200 GB ~ 1 TB per user
  • XSEDE
    1. Ranch ( archival, only a single copy ): 12 TB
    2. Oasis (mounted, Data will be retained for three months beyond the end of the project): 3TB

Meeting Update 07/07/2014 -- Baowei

  • Ablative RT
    1. Gave the 2D text data to Rui. Haven't gotten an update from him yet.
    2. Still debugging the 3D code.
  • Users
    1. Worked with Guilherme of SUNY Oswego installing openmpi and hypre.

Meeting Update 06/23/2014 -- Baowei

  • 3D heat diffusion solver: #243
  • PN study: 3D; will try on darter for scaling test

Science Meeting Update 06/09/14 -- Baowei

Ablative RT

  1. 2D: fixed an error in the periodic boundary condition: new result
  2. 3D: http://astrobear.pas.rochester.edu/trac/wiki/u/bliu

Ablative RT growth rate with max(Vx)

Vx, Linear-Linear http://www.pas.rochester.edu/~bliu/AblativeRT/growthRate/vx.png
Vx, Log-Linear http://www.pas.rochester.edu/~bliu/AblativeRT/growthRate/GrowthLogVx.png

The growth rate is compared against the formula from Takabe's paper (http://scitation.aip.org/content/aip/journal/pof1/28/12/10.1063/1.865099), which in its commonly quoted form is gamma = alpha*sqrt(k*g) - beta*k*v_a,

with k the perturbation wavenumber, g the acceleration, v_a the ablation velocity, and alpha ~ 0.9, beta ~ 3.
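As a sanity check, that scaling is easy to evaluate; the coefficients and the sample numbers below are illustrative assumptions, not fitted values from these runs:

```python
import math

def takabe_growth_rate(k, g, v_a, alpha=0.9, beta=3.0):
    """Takabe-style ablative RT growth rate: gamma = alpha*sqrt(k*g) - beta*k*v_a."""
    return alpha * math.sqrt(k * g) - beta * k * v_a

# Illustrative numbers only (not this simulation's parameters):
k = 2 * math.pi / 20e-4   # wavenumber of a 20-micron perturbation, 1/cm
g = 1.0e14                # acceleration, cm/s^2
v_a = 1.0e4               # ablation velocity, cm/s
print(takabe_growth_rate(k, g, v_a))  # positive => unstable; ablation reduces gamma
```

The second (ablative) term is what distinguishes this from the classical RT rate alpha*sqrt(k*g).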

Meeting Update 06/02/2014 -- Baowei

  • Ablative RT

To study the perturbation growth rate, the difference between the front positions along the center line and the edge is calculated and plotted versus time. This looks different from the three stages of the normal RT instability: the exponential growth stage ends when the bubble starts. Not sure whether that is just due to ablation or whether something is wrong.

Middle line, Linear-Linear, 200 Extended zones http://www.pas.rochester.edu/~bliu/AblativeRT/growthRate/PertGrowth_060214.png
Middle line, Log-Linear, 200 Extended zones http://www.pas.rochester.edu/~bliu/AblativeRT/growthRate/PertGrowthLog_060214.png
Quarter line, Linear-Linear, 200 Extended zones http://www.pas.rochester.edu/~bliu/AblativeRT/growthRate/PertGrowth025_060214.png
Middle line, Linear-Linear,5 Extended zones http://www.pas.rochester.edu/~bliu/AblativeRT/growthRate/PertGrowth_5Extended_060214.png
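To turn amplitude-versus-time curves like these into a growth rate, one can least-squares fit log(amplitude) against time over the exponential stage; a sketch with synthetic data (the real amplitudes come from the front-position differences above):

```python
import numpy as np

def fit_growth_rate(t, amplitude):
    """Least-squares slope of log(amplitude) vs t, i.e. gamma in a(t) = a0*exp(gamma*t)."""
    gamma, log_a0 = np.polyfit(t, np.log(amplitude), 1)
    return gamma

# Synthetic exponential growth with gamma = 0.8 (arbitrary units)
t = np.linspace(0.0, 5.0, 50)
a = 1e-3 * np.exp(0.8 * t)
print(fit_growth_rate(t, a))  # ~0.8
```

For real data the fit window must be restricted to the linear (exponential-growth) stage, before bubble saturation.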
  • PN Jets: test Martin's 2.5D module on Stampede

movie

Meeting Update 05/19/2014 -- Baowei

  • Science
    1. 2D Ablative RT: tried a lower tolerance (according to blog:johannjc05132014) with max iteration 10000; the maximum number of iterations was reached and negative temperature was found at frame 316, compared with frame 356 using the previous tolerance and max iteration 1000.
Rho
Temperature
  2. Helped Rui set up on bluehive2. He's analyzing the front rate using his code.
  3. 3D Ablative RT: still working on transferring the hdf4 data from Rui to text form.

Science Meeting Update 05/12/14 -- Baowei

*Ablative RT

  1. 2D with 200 (10%) extended ghost zones: the bubble appears later compared with 5 extended ghost zones.
Rho
T
Vy
  2. 3D: working on the 3D data from Rui

science meeting update 05/05/14 -- Baowei

  • Ablative RT
    1. Rui is running 2D RT on LLE machines. He will check the growth rate with his MATLAB program when he gets the data (find the front by checking the slope, then subtract the speed of the whole body). He will send me the 3D results and let me try 3D.
    2. Still working on the hypre choking issue. http://astrobear.pas.rochester.edu/trac/blog/bliu05012014

2D Ablative RT

The maximum subcycling number is reached at around frame 278 (diffcfl=0.04 and Subcycling=100). Here are the plots along Lx/2. Rho and T at the top can be tiny but are all positive. The Vy plot stays stable until around frame 270.

Rho
T
Vx
Vy

Science Meeting Update 04/28/14 -- Baowei

*Ablative RT

  1. checked the growth rate of the front along the middle line of x, which Rui thought was oversimplified. Gave Rui the 2D code working with AMR so he can try it on LLE's machines.

Science Meeting Update 04/21/14 -- Baowei

  • 2D Ablative RT
Density
Temperature

Scaling Tests on Stampede

The code's scaling starts to drop when running on more than 1024 cores on Stampede. This is consistent with the Jet module (tested for the 10/15/2013 proposal and only shown up to 1024 cores, where scaling is better) and Colliding Flows.

Colliding Flows Strong Scaling with Colliding Flows on Stampede
Triggered Star Formation (512 hydro)
Jets Strong Scaling with Jets module on Stampede
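Strong-scaling efficiency for tables like these is the speedup relative to the smallest run divided by the core-count ratio; a small helper with hypothetical timings (not the measured ones):

```python
def strong_scaling_efficiency(cores, wall_times):
    """Efficiency relative to the smallest run: (T_ref * N_ref) / (T_N * N)."""
    ref_cores, ref_time = cores[0], wall_times[0]
    return [ref_time * ref_cores / (t * n) for n, t in zip(cores, wall_times)]

# Hypothetical timings showing efficiency falling off past 1024 cores
cores = [128, 256, 512, 1024, 2048]
times = [800.0, 410.0, 215.0, 120.0, 95.0]
print(strong_scaling_efficiency(cores, times))
```

An efficiency near 1.0 means ideal strong scaling; the drop past 1024 cores in these runs shows up as values well below 1.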

Science Meeting Update 03/31/14 -- Baowei

  • Ablative RT: the ablation results with smaller extended zones look OK according to Rui (3:ticket:377, 5:ticket:377). Didn't get the bubble from the 2D runs. Will meet with Rui tomorrow to discuss an RT simulation to benchmark the growth rates and bubble velocity.

Science Meeting Update 03/24/14 -- Baowei

  • Ablative RT with adjusting gravity
    1. Met with LLE people last week and worked on adding adjustable gravity to the code. First-cut results are shown here: 1:ticket:377. The shell stays stable for about 4 ns. The gravity is not quite accurate, especially when the front density drops due to ablation, because of the extended zones.

Meeting Update 03/18/2014 -- Baowei

  • Tickets
    1. new: #371 (I want to add radius dependence in gravity), #372(duplicate to #371), #373 (Unwanted reproduction of sink particles), #374 (Finish point gravity alpha implementation), #375 (Verify MUSCL scheme works in Mutli-Dimensions and in AMR)
    2. closed: #371
  • Users & Resources
    1. Wiki updates: tried to update Trac with new plugins, which caused some problems for our users this past weekend and yesterday. Sorry about that. It works now. Rich, Jonathan and Baowei will meet on Thursday to discuss the Trac issues.
    2. XSEDE proposal writing telecon: Time: 3/21 Fri 3:00pm ET, Location: Adam's office (?), need questions
  • Science
    1. Equations for the Ablative RT initial profile: #345
    2. Read Betti's paper (Growth rate of the ablative RT instability in ICF Phys. of Plasma 5, 1446 1998)

Trac wiki links refresher -- from Rich

Rich suggests using attachment links instead of absolute URLs to create links in documents. So instead of things like

http://astrobear.pas.rochester.edu/trac/attachment/wiki/u/ehansen/Bvec_movie.gif

it's better to use

[attachment:Bvec_movie.gif:wiki:u/ehansen]

This dynamic way also has a convenient direct download link next to the file attachment link.

Rich found a workaround so that old posts using the former style still work, but to be on the safe side we should start using the dynamic style for links.

Here's Rich's original email:

Hi Baowei, and folks:

For what it is worth…

I would take the time to read the Trac Links page here: http://trac.edgewall.org/wiki/TracLinks

It provides very helpful information on creating links in documents you create on the blog, wiki, etc 
that are *dynamic* rather than hardcoded, absolute URLS (e.g. 

http://astrobear.pas.rochester.edu/trac/attachment/wiki/u/ehansen/Bvec_movie.gif)


Taking this as our example, say we wanted to link to an attachment on another wiki page in a blog post.

*** The incorrect way of doing this would be:
[http://astrobear.pas.rochester.edu/trac/attachment/wiki/u/ehansen/Bvec_movie.gif Eddie's Bvec Movie]

*** The correct way would be:
[attachment:Bvec_movie.gif:wiki:u/ehansen]

Where:
  * 'attachment:' is a keyword indicating you are referencing a Wiki file attachment
  * 'wiki' is a keyword referring to the wiki module of Trac
  * 'Bvec_movie.gif' is the literal referring to the filename of the attachment
  * 'u/ehansen/' is the wiki page containing this attachment.
     Do NOT LEAD OR END this reference with a "/", i.e "/u/ehansen/" is incorrect.

Your links at this point will be created automatically and correctly and even include a handy
 'direct download' attachment link in the page next to the file attachment link.

If anything changes on the server, which is what seems to have happened today, your links 
are broken. I have placed a workaround redirect to fix those broken links. Still, I highly 
recommend you all follow the Trac best-practices for making links.


Rich

SuperMIC Vs. Stampede

SuperMIC Stampede
Computing Nodes 360 6400
Processor Each computing node has two 2.8GHz 10-Core Ivy Bridge-EP E5-2680 Xeon Processors Each computing node has two 2.7 GHz 8-core Xeon E5-2680 (Sandy Bridge) processors
Co-Processors Each computing node has two Intel Xeon Phi 7120P 61-core Coprocessors(1.238GHz,16GB) Each compute node is configured with an Intel Xeon Phi 5110p 61-core coprocessor(1.05GHz,8GB)
Memory 64GB DDR3 1866MHz Ram 32GB DDR3 1600MHz Ram , with an additional 8GB of memory on each Xeon Phi coprocessor card
Hybrid Compute Nodes 20 Hybrid nodes, each with two Processors + one Coprocessor + One NVIDIA Tesla K20X 6GB GPU 128 compute nodes with NVIDIA Kepler K20 5GB GPU

XSEDE Proposal Writing Webinar

Summary of the XSEDE Webinar "Writing a Successful XSEDE Allocation Proposal" I attended last week

  • The full recorded session can be found here: https://meeting.austin.utexas.edu/p3pmvkq0mjg/ .
  • Questions I asked and the speaker's answer:
    1. Research collaborations (typically how many SUs are requested and how many awarded)? Research Collaborations are large projects with multiple PIs. Typically 15~16 million SUs. Currently there are about 800 research requests in total per year, requesting 4.0 billion SUs per year, with 1.8 billion awarded.
    2. Is it better to submit one big proposal asking for a lot of SUs, or several smaller proposals each asking for a small amount? One PI is not allowed to apply as PI on multiple projects; they recommend combining different projects from the same group into one. Sounds like a big proposal is OK?
    3. Is there a way to run scaling tests for our own code on these new machines? Transfer SUs. Some of the machines are very similar, so you don't have to do scaling tests on all of them. For example, SuperMIC (the newest NSF-funded supercomputer, located at LSU, in production April 1st, 2014) is similar to Stampede.
  • Important points I caught that we might have missed before
    1. Justification of SUs: a clear, simple calculation; log or simple wall times?
    2. Local compute resources in detail: referees may know some of your big machines.
    3. The research team in detail: how many faculty, staff, postdocs, graduate and undergraduate students; the ability to complete the plan.
    4. Publications acknowledging XSEDE and/or feature stories on the XSEDE website: is the group productive? Do the PI and Co-PIs publish together?
    5. There are groups that are awarded 90% of their request.
    6. Ranch (TACC) and XWFS (The XSEDE-Wide File System) can be requested for storage resources without need to request computing at the same time.
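On the SU-justification point, the "clear simple calculation" usually reduces to cores x wall-clock hours x number of runs, summed over the production plan; a sketch with made-up numbers:

```python
def total_sus(runs):
    """Sum cores * wall-hours * run count over a list of planned run types."""
    return sum(cores * hours * count for cores, hours, count in runs)

# (cores, wall-hours per run, number of runs) -- hypothetical plan
plan = [
    (512, 24, 10),   # production runs
    (1024, 12, 4),   # high-resolution runs
    (128, 2, 20),    # scaling / test runs
]
print(total_sus(plan))  # 177152
```

Listing the plan this way in the proposal makes the total easy for referees to audit.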

Meeting Update 03/03/2014 -- Baowei

  • Tickets
    1. new: 16 tickets from Jonathan (355-370). 13 of them are for AstroBEAR 3.0 and have been assigned.
    2. closed: none
  • Users
    1. worked with the visitor from Rice on his own module, with ambient and clump objects and an added shock; it compiled and ran OK. Talked about 3D cylindrical clumps, tracers, and computational resources.
    2. Ablative RT: got a positive response from LLE but still waiting for detailed confirmation.

  • Resources
    1. got a call from the director of User Service at TACC when looking for a person to contact about the XSEDE proposals. Found two possible candidates to speak with.
  • Worked on
    1. Testing script: worked on Eddie's new testing script with overlay object
    2. Parallel hdf5

  • Science
    1. reading articles about stability behavior of the front (Piriz and Tahir, 2013, etc.)

Ablative RT with Riccardo's initial profile

  • Riccardo's initial profile and BCs: Riccardo uses a zero heat flux top boundary condition. hypre chokes due to the rapidly increasing temperature at the top.
Rho
T
P
Vy
  • Riccardo's initial profile and non-zero heat flux at the top

The front holds stable for around 1 ns, then is pushed up. It's pushed out at around 2.8 ns:

Rho
T
Vy

Meeting Update 02/24/2014 -- Baowei

  • Science
    1. Ablative RT: #345. Still waiting for the time scale with a fixed gravity constant from Rui. Checked Betti's initial profile and BCs with the AstroBEAR code and found the temperature at the top jumps very high, probably due to piled-up heat flux, which chokes hypre (http://astrobear.pas.rochester.edu/trac/astrobear/ticket/345#comment:3). Jonathan suggested extending the y domain with Betti's BCs. Working on that.

Debug meeting report -- 02/18/2014

  • Baowei's Ablative RT
    1. related tickets: #309, #331, #345
    2. current status: found one fix for the hypre choking, waiting for confirmation from LLE
  • Jonathan's "others"
    1. related tickets: #311(Implement energy & momentum conserving self gravity), #321(Implementing simple line transfer in AstroBEAR), #325(Investigate Grapevine Load Balancing)
    2. AB3?

Meeting Update 02/17/2014 -- Baowei

  • Tickets
    1. new: #343(@ Doc), #344(Standard out and machine query differ for cpu count), #345(Test Ablative RT module -III), #346 (Add elliptic steps to efficiency calculation for standard out)
    2. closed: #317(Quasi Periodic boundaries in a quarter plane)
  • Users:
    1. checked with Andy of Rice: AstroBEAR and Visit run well on Rice resources.
    2. set up Marvin on bluehive and bgq
    3. Erica's reservation on bgq
  • Resources
    1. project & teragrid resources: ProjectRuns
    2. group reservation of half bgq machine for weeks
    3. link to cloverdata/ from other local machines fails possibly due to the failure of one disk on clover. Testing script and backup scripts need to be updated correspondingly.
  • Science
    1. Ablative RT: aiming at a stable time of 3~4 nanoseconds according to LLE people. Tried different top BCs; the hypre-choking problem is fixed (details at #345). Still need to make the front stay stable longer.

Meeting Update 02/10/2014 -- Baowei

  • Tickets
    1. new: #335(stray momentum flows), #336(Compiling error on bamboo and bluehive with hypre flag = 0), #337(Memory usage), #338(fix comment in scrambler 3.0 in refinements.f90), #339(Making astrobear capable of using dependencies), #340(Organizing modules in the source code), #341(Difference between colliding flows and molecular cloud formation), #342(compiling error on bluestreak)
    2. closed: none
  • Users
    1. Mark: XSEDE startup allocation: stampede/kraken
    2. New one asking for the code: Yunnan University(to simulate problems of AGNs or SNRs)
  • Resources:
    1. XSEDE: 1.4 million SUs left on Kraken.
  • Worked on
    1. Ablative RT (#331): With Shule's or Betti's BCs, it can run 1E-10 seconds before hypre chokes. By fixing the values at the top right boundary, it runs up to 6E-9 seconds with an oscillating front: #331,comment:22. Is this long enough?
    2. QPBC(#317): summary of what I tried:
      1. The divergence comes from Az: got different values when running on multiple processors.
      2. Runs with 1 or 2 processors give the same vector potential values.
      3. Runs with 3, 4 or 5 processors give the same vector potential values.
      4. New subgrid scheme with minimum grid number = 1, 2, 4: vector potential values are the same as with the old subgrid scheme. But for minimum grid number = 8, the values differ.
      5. It only happens with AMR runs.
    3. #336(Compiling error on bamboo and bluehive with hypre flag = 0)

Meeting Update 02/03/2014 -- Baowei

  • Ticket
    1. new: none
    2. closed: none
  • Resources
    1. grass is on.

Meeting Update 01/27/2014 --Baowei

  • Tickets
    1. new: #334(Help running on bluestreak)
    2. closed: none
  • Resources
    1. grass: One disk is dead. Rich is wiping the disks and rebuilding the array with the remaining 7. One spare disk (1 TB) might be needed in the future.
    2. microphone of the laptop: the plastic cover (outside the USB chip) was lost two weeks ago. The chip seems to work OK. Mike Culver is helping us wrap it again.

Meeting Update 01/21/2014 -- Baowei

  • Tickets
    1. new: #331(Test Ablative RT module -II), #332(Amr speed-up factor), #333(Filling fractions I/O should be checked)
    2. closed: none
  • worked on
    1. Ablative RT module: fixed a bug in the open top boundary (ThermalConduction). By lowering the hypre tolerance, hydro-off results match the analytic value (#331). Working on double-checking the hydro boundaries.
    2. Compiling Erica's code on BlueStreak.

Summary for the current status of the Ablative RT project

  • Jonathan suggested testing the flux with hydro off. Ideally we can extend the test to the 3D case.
    1. Currently we still have problems with the hydro-off test; results will be posted in the third part.
  • When doing the test we found a mismatch between the bottom flux and the equation:
    1. Putting everything in computational units, the bottom flux calculated from Kappa1 and Temperature is 2.32E-4, while the bottom flux converted from Betti's data is 3.48E-4 (5.876e18 W/m2 → 5.876e21 erg/s/cm2, then divided by fluxScale = pScale*velScale). The difference is a factor of 1.5, which is 1/(gamma-1), or gamma7 in the code, as in Jonathan's blog post.
    2. One way of understanding this is that Kappa1 ~ K0/Cv in Betti's formulation, while AstroBEAR defines the corresponding coefficient differently.
    3. One easy fix is to multiply both sides of the equation by gamma7. Since the code already had a gamma7 on the left side of the equation (which was a bug), we just include gamma7 in Kappa1, as Jonathan mentioned in his blog. Kappa1 then becomes 1.5 times larger than before.
    4. Since we still have problems with the hydro-off test, I can only show the effect of this 1.5 factor with the bug still present:
without gamma7
with gamma7
  • Results for the flux test: I ran the temperature-limiting-case tests by multiplying the temperature by a small factor, with hydro on and off.
    1. limiting case with hydro off, multiplied by 1e-17
    2. limiting case with hydro on, multiplied by 1e-13 (limited by the BCs in BeforeStep; cannot go too small)
    3. normal hydro-off test

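A quick numerical check of the factor discussed above: 1/(gamma-1) for gamma = 5/3 is exactly 1.5, and it matches the ratio of the two bottom-flux values in computational units:

```python
gamma = 5.0 / 3.0
gamma7 = 1.0 / (gamma - 1.0)          # the gamma7 factor in the code: 1.5

flux_from_kappa1 = 2.32e-4            # bottom flux from Kappa1 and T (computational units)
flux_from_betti = 3.48e-4             # bottom flux converted from Betti's data

print(gamma7)                              # 1.5
print(flux_from_betti / flux_from_kappa1)  # 1.5
```

So folding gamma7 into Kappa1 exactly closes the mismatch between the two flux values.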
Meeting update 01/13/2014 -- Baowei

  • Tickets
    1. new: #330(too many restarts for 3D pulsed jets)
    2. closed: none
  • Ablative RT
    1. there's an inconsistency in gamma7 (1/(gamma-1)) for the flux part between the equations and the bottom flux. A gamma7 has to be included when calculating the energy and flux to match the cgs values from Betti's data. Jonathan posted a blog explaining this here: http://astrobear.pas.rochester.edu/trac/astrobear/wiki/ThermalConduction . The limiting-case test passed, but in the non-hydro test the energy increase rate still doesn't match the heat flux.

Unit conversions for the Ablative RT problem

Equation solved in Betti's code (SI units and Temperature in Joules)

where is Boltzmann constant and as in Betti's document and is the normal specific heat capacity. And the flux is

To convert this to cgs units we write

That is

In AstroBEAR we define and

Comparing the definition of and we have

In Betti's data, and so
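Although the equations above lost their symbols in transcription, the heat-flux conversion used elsewhere in these notes (5.876e18 W/m2 → 5.876e21 erg/s/cm2) can be verified from 1 W = 1e7 erg/s and 1 m2 = 1e4 cm2:

```python
# 1 W/m^2 = (1e7 erg/s) / (1e4 cm^2) = 1e3 erg/s/cm^2
W_PER_M2_TO_ERG_PER_S_CM2 = 1e7 / 1e4

q0_si = 5.876e18                         # bottom heat flux in SI, W/m^2 (Betti's data)
q0_cgs = q0_si * W_PER_M2_TO_ERG_PER_S_CM2
print(q0_cgs)                            # 5.876e+21 erg/s/cm^2
```

Dividing the cgs value by fluxScale = pScale*velScale then gives the computational-unit flux quoted in the gamma7 discussion.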

Meeting Update 01/08/2014 -- Baowei

  • Tickets
    1. new: #329(test ticket system)
    2. closed: #329
  • Resources
    1. grass: problem with the array card and the RAID is degraded. Dave is backing up the data (it will take several more days). Very likely we need a new array card ($300~500), which would support larger disks (10~20 TB vs. the current 3.3 TB), or a new machine.
    2. XSEDE: 0.64M SUs on Stampede for Martin's renewed allocation and 2.0M SUs left on Kraken for Adam's allocation. The renewal of Adam's allocation will need to be submitted before March 30th.
    3. Skype account: will need Adam's credit card since it's a recurring charge.

Meeting Update 12/16/2013 -- Baowei

  • Tickets
    1. new: none
    2. closed: #328
  • Resources
    1. Skype premium account: need to set up a recurring payment and may need Adam's credit card to do that

Meeting Update 12/09/2013 -- Baowei

  • Tickets
    1. new: #328(Seg fault in Planetary Atmospheres module (AstroBEAR 3.0))
    2. closed: none
  • Resources
    1. working with Frank on the Skype account
    2. Asked Dave to check the laptop; the wireless worked on the 4th floor. Will show him again if it still doesn't work on the 3rd floor.
  • New Users
    1. Andy from Rice (modeling of proposed magnetized shock experiments at LLE)
    2. from Western Kentucky University (modeling plasma jets of blazars)
  • Worked on
    1. #309: got same result with both Betti's data and my data. checking on the BCs:

http://astrobear.pas.rochester.edu/trac/astrobear/ticket/309#comment:37, http://astrobear.pas.rochester.edu/trac/astrobear/ticket/309#comment:39, http://astrobear.pas.rochester.edu/trac/astrobear/ticket/309#comment:41

  2. New YouTube movies from Martin: http://www.youtube.com/user/URAstroBEAR

Meeting Update 12/03/2013 -- Baowei

  • Tickets
    1. new: #325(Investigate Grapevine Load Balancing), #326(seg fault on BH with colliding flows), #327(time variable)
    2. closed: #327

Meeting Update 11/11/2013 -- Baowei

  • Tickets
    1. new: #322(internal compiler error with gfortran)
    2. closed: #316(Bugs in Binary)
  • Users
    1. new users: from the Instituto de Astrofisica de Andalucia (formation and X-ray emission from planetary nebulae and Wolf-Rayet nebulae) and from the Institute of Astronomy and Astrophysics in Taiwan (code comparison).
  • Resources
    1. alfalfa for Zhuo?
    2. the Intel Fortran compiler on the local workstations was not working properly last Friday due to software updates, but it is fixed now

Meeting Update 11/04/2013 -- Baowei

  • Resources
    1. New machine to replace Alethea (for Joe)?
  • Worked on
    1. ticket #316 (Joe's jobs on bluehive), bugs found by Marvin with the gfortran compiler (#312, #313, #318). All our local machines & Teragrid use ifort as the Fortran compiler, which is more tolerant about array bounds checking. I tried running the test suites with gfortran on alfalfa and found more small bugs and a fatal compile-time error with HDF5_gfortran. Still working on it.
    2. ticket #309 (Conduction Front Test with hydro)

Meeting Update 10/28/2013 -- Baowei

  • Tickets
    1. new: #310 (Implement Ionization Table to AstroBEAR)
    2. closed: none
  • Worked on
    1. Conduction Front test: #309
    2. EOS: #310

Meeting Update10/21/2013 -- Baowei

  • Tickets
    1. new: none
    2. closed: none
  • Resources
    1. Submitted Teragrid renewal proposal last Tuesday
  • Worked on
    1. ticket #309: tried to figure out the bug causing different hypre matrices with the same dt_diff for subcycles 1 and 2. The bug was found and fixed in subroutine DiffusionSetBoxValues, in the setting of BCs with ghost zones.
    2. Teragrid proposal

Journal Club 10/15 Agenda

  • Discuss the conduction front test: ticket #309
  • Erica's BE sphere model

Meeting Update 10/14/2013 -- Baowei

  • Tickets
    1. new: none
    2. closed: #304(Problem keeping hydro static equilibrium with sink particle), #307(BE module bug)
  • Resources
    1. Teragrid renewal proposal due tomorrow
  • Worked on
    1. ticket #309: fixed a bug in the subcycling boundary conditions. The hypre choking issue is solved and the results compare better with the analytic solution. There may still be problems with the boundary conditions.
    2. proposal and progress report

SESAME Table subroutines for AstroBEAR

S2GETI, S2EOSI - These routines are to be incorporated into a hydro code which uses the inverted form of an equation of state. Density and internal energy are the independent variables, and pressure and temperature are the dependent variables.

  1. S2GETI is used to get data from the library
CALL S2GETI (IR, IDS2, TBLS, LCNT, LU, IFL)

IR material region number

IDS2 Sesame material number

TBLS name of array designated for storage of tables

LCNT current word in array TBLS

LU unit number for library

IFL error flag

  2. Subroutine S2EOSI is used to compute an EOS point. That is, it computes the pressure, temperature, and their derivatives for a given density and internal energy.
CALL S2EOSI (IR, TBLS, R, E, P, T)

IR material region number

TBLS name of array which contains the EOS tables

R density in Mg/m3

E internal energy in MJ/kg

P, T pressure, temperature vectors

P(1), T(1) pressure in GPa, temperature in Kelvins

P(2), T(2) density derivatives, (dP/drho)_E, (dT/drho)_E

P(3), T(3) energy derivatives, (dP/dE)_rho, (dT/dE)_rho.

For certain materials, the library also has tables of the pressure, temperature, density, and internal energy along the vapor-liquid coexistence curve. This information is needed in reactor safety problems. Routines S2GET and S2GETI can be modified to access the coexistence data, and routine LA401A can be used to compute the thermodynamic quantities.
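Conceptually, an inverted-EOS lookup like S2EOSI is interpolation on a 2-D (density, internal energy) table; a toy bilinear sketch (not the SESAME routines, and the table values here are made up):

```python
import numpy as np

def eos_lookup(rho_grid, e_grid, p_table, rho, e):
    """Bilinear interpolation of pressure on a (rho, e) table."""
    i = np.searchsorted(rho_grid, rho) - 1   # bracketing cell in density
    j = np.searchsorted(e_grid, e) - 1       # bracketing cell in energy
    fr = (rho - rho_grid[i]) / (rho_grid[i + 1] - rho_grid[i])
    fe = (e - e_grid[j]) / (e_grid[j + 1] - e_grid[j])
    return ((1 - fr) * (1 - fe) * p_table[i, j]
            + fr * (1 - fe) * p_table[i + 1, j]
            + (1 - fr) * fe * p_table[i, j + 1]
            + fr * fe * p_table[i + 1, j + 1])

# Made-up table with p = rho * e, for which bilinear interpolation is exact
rho_grid = np.array([1.0, 2.0, 3.0])
e_grid = np.array([1.0, 2.0, 3.0])
p_table = rho_grid[:, None] * e_grid[None, :]
print(eos_lookup(rho_grid, e_grid, p_table, 1.5, 1.5))  # 2.25
```

The real SESAME routines also return the derivative vectors P(2..3), T(2..3), which a hydro code needs for sound speeds and Newton iterations.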

Meeting Update 10/07/2013 -- Baowei

  • Tickets
    1. New: #309 (Test Ablative RT module)
    2. Closed: none
  • Users
    1. new one from University of Kiel asking for the code:" I want to simulate the circumnuclear disk around SgrA*"
  • Resources
    1. working with Dave backing up the wiki from Botwin to Clover
    2. Teragrid progress report: due next Tuesday

Meeting Update 09/30/2013 -- Baowei

  • Resources
    1. Martin's allocation expired. I tried to burn the remaining SUs on Stampede with 3D Pulsed Jets.

Ablative RT

Kappa=3.68e-14 Kappa=0.736e-14
Betti
Liu

Meeting Update 09/23/2013 -- Baowei

  • Tickets
    1. new: #307 (BE module bug? ) from Andrew
    2. closed: none
  • Users:
    1. Wrote to Clemson?
  • Resources
    1. INCITE program of Argonne:

1) Computing time: more than five billion core-hours will be allocated for Calendar Year (CY) 2014. Average awards per project for CY 2014 are expected to be on the order of 50 million core-hours for Titan and 100 million core-hours for Mira, but could be much higher.

2) INCITE proposals are accepted between mid-April and the end of June.

2014 INCITE Call for Proposals is now closed

3) Request for Information for next year's call: https://proposals.doeleadershipcomputing.org/allocations/incite/

4) Proposal preparation instructions: https://proposals.doeleadershipcomputing.org/allocations/incite/instructions.do

  2. The wiki was occasionally slow last week: hopefully fixed by Rich. Do we still need more memory on Botwin?

Initial Conditions for Ablative RT

  • Betti's data

  • Shule's data
R       = 4.7904d26
Cv      = 7.186d26
Kappa   = 3.734d69
g       = 1.0d14
q0      = -5.876d18
T0      =  2.42528d-16
rho0    = 68.1622919147237
v0      = -272144.604867564
nFlux       = 2.5

  • Baowei's data
  1. With Shule's Parameters


  1. With slightly-changed parameters
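The parameter lists above use Fortran double-precision exponents ("d"), which Python's float() does not accept. A small illustrative helper for anyone post-processing these values outside Fortran (not part of AstroBEAR):

```python
# Convert Fortran double-precision literals like '3.734d69' to floats.

def fortran_float(s):
    """Convert a Fortran-style literal ('4.7904d26') to a Python float."""
    return float(s.lower().replace('d', 'e'))

# A few of Shule's parameters from the list above:
params = {
    "R":     fortran_float("4.7904d26"),
    "Kappa": fortran_float("3.734d69"),
    "g":     fortran_float("1.0d14"),
    "q0":    fortran_float("-5.876d18"),
    "T0":    fortran_float("2.42528d-16"),
}
```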

Standard output for 2.5D Pulsed Jet runs on Stampede

Meeting Update 09/16/2013 -- Baowei

  • Tickets
    1. new: none
    2. closed: none
  • Resources
    1. Mercurial repository & /cloverdata/ : working with Rich to get /cloverdata/ back and get the repository a standard name
    2. Teragrid: two weeks left for Martin's allocation. Stampede 8% | 37,083 SUs remaining; Kraken 22% | 260,046 SUs remaining
    3. Renewal report
  • Worked on
    1. 2.5 Pulsed Jets Runs
    2. Multi-threaded scaling test of AstroBEAR3.0 on Blue streak
    3. Read about SESAME TABLE

Meeting Update 09/09/2013 -- Baowei

  • Tickets
    1. new: none
    2. closed: none
  1. hydra: completed with data located at /bamboodata/bliu/Eddie/PulsedJet_2.5D/Fiducial/hydra/

hydra

  1. MHD Beta=5: 47 frames

MDH 5

  1. MHD Beta=1: 41 frames

MHD 1

  1. MHD Beta=0.4 : 23 frames

MHD 0.4

Acknowledgement for publications of work on Teragrid machines

Publications resulting from XSEDE support must be reported in renewals and progress reports. Papers, presentations, and other publications that feature work that relied on XSEDE resources, services, or expertise should include the following acknowledgement:

This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number OCI-1053575.

Meeting Update 09/03/2013 -- Baowei

  • Tickets
    1. new: none
    2. closed: none
  • Resources
    1. Wiki on Botwin
    2. Teragrid: MAGNETIC TOWERS AND BINARY-FORMED DISKS. Stampede 13% | 56,652 SUs remaining; Kraken 28% | 324,717 SUs remaining

2.5D Pulsed Jet

On bamboo

  • Hydra
    1. New version: 5AMR on 16 cores Density turbulence seems gone
      hydra 5AMR New version
  1. New version: 7AMR on 256 cores

hydra 7AMR new version

256 cores frame 5, 7 AMR on 256 cores
128 cores frame 5, 7 AMR on 128 cores

  1. 5AMR

hydra 5AMR

  1. 7AMR

frame 6, 7 AMR

hydra 7AMR

Meeting Update 08/26/2013 -- Baowei

  • Tickets
    1. new: none
    2. closed: #306 (2.5D MHD diverging wind running out of memory)
  • Resources:
    1. moving wiki to Botwin
  • Working on
    1. Strong scaling of AstroBEAR3.0 on Bluestreak
    2. Powerpoint on AstroBEAR. Will give a presentation on AstroBEAR to CIRC staff on Wednesday
    3. 3D testing run with ErrorFlag buffer:
      testing buffered code

Meeting Update 08/19/2013 -- Baowei

  • Tickets
    1. new: none
    2. closed: #303 (Changing of gamma in global.data causes oddities)
  • Users
    1. New user from Chalmers University of Technology of Sweden (AGB and Pre-PNe outflow modeling)
    2. Josh of Clemson's r-theta polar coordinate code

r-theta Coordinate from Josh

  1. Thanks to everybody for your help during Keira's visit
  • Resources
    1. Archived Shule's data on /bamboodata/: 5TB available
    2. Martin's Teragrid: progress report for renewing

Teragrid Allocation Policy

  • EXTENSIONS At the end of each 12-month allocation period, any unused compute SUs will be forfeited.
    1. Extensions of allocation periods beyond the normal 12-month duration
    2. Reasons for Extensions: encounter problems in consuming the allocation. For example, unexpected staffing changes
    3. Length of extension: 1-6 months
    4. Procedure: a brief note for the reason through POPS via the XSEDE User Portal.
  • RENEWAL If a PI wishes to continue computing after the expiration of the current allocation period, he/she should submit a Renewal request.
    1. In most cases, they should submit this request approximately one year after their initial request submission, so that it can be reviewed and awarded to avoid any interruption. (July 15th for Martin's allocation and April 15th for Adam's allocation)
    2. Procedure: Progress Report (3 pages)

Meeting Update 08/12/2013 -- Baowei

  • Tickets
    1. new: #306 (2.5D MHD diverging wind running out of memory)
    2. closed: none
  • Users
    1. Keira's visit
    2. New users asked for the code: Open University, UK, Educational in connection with the undergraduate course S383 Relativistic Universe
  • Resources
    1. Grass needs a 1TB new hard drive
    2. New Kraken allocation: 86% | 2,954,282 SUs remaining (used by Shule & Baowei), Old Kraken allocation (45% | 516,370 SUs remaining)
    3. Archiving data?
  • Worked on
    1. Pulsed Jets: Tried to find the best production run setup (resolution vs. processor number). 96X480X96 + 5AMR runs slowly on 1200 cores of Kraken. Changed the resolution to 192X960X192 + 3AMR and got the first several frames for both the MHD (beta=5) and Hydra runs. Movies will be attached soon. The highest level of refinement covers the whole jet for the first several frames, which makes the run extremely slow at the beginning, but hopefully it will run much faster later on, once the highest refinement tracks only the bow shock.
hydra 192X960X192 + 3AMR Mesh grid of frame 11 hydra

Meeting Update 08/05/2013 -- Baowei

  • Tickets
    1. new: none
    2. closed: none
  • Resources
    1. Archived Martin's data for magnetic tower
  • Worked on
    1. Ticket #302 (Cooling length refinement, run on kraken with 96X480X96+5AMR goes slow)

Meeting Update 07/29/2013 -- Baowei

  • Tickets
    1. new: #304(Problem keeping hydro static equilibrium with sink particle)
    2. closed: none
  • User
    1. Called Bruce. Schedule for the visit of Bruce's student?
    2. Student at Universidad de Chile asked to download 2.0
  • Worked on
    1. #289 (3D pulsed jet runs too slow on Kraken): runs with the same setup shows that problem is solved with the new sub-sampling scheme
    2. #302 (Temperature flashes in Pulsed Jet runs): attached new result of beta=5 with sub-sampling

Meeting Update 07/23/2013 -- Baowei

  • Tickets
    1. New: #300 (Install AstroBEAR2.0 on Pleiades of NASA), #301 (make density protections better), #302 (Temperature flashes in Pulsed Jet runs), #303 (Changing of gamma in global.data causes oddities)
    2. Closed: #295 (Global Co-rotating Frame module stops running), #300
  • Users
    1. Met with LLE folks: will merge AstroBEAR3.0 with Shule's work and give to Rui
    2. Ian installed AstroBEAR2.0 on Pleiades. Asked for reference.
    3. Gave Bruce the materials he needed for Teragrid proposal
  • Resources
    1. Computing time: Kraken old grant 51% (585,530 SUs) remaining, new grant 100% (3,422,821 SUs) remaining, Stampede 15% (65,694 SUs) remaining. Updated the page https://astrobear.pas.rochester.edu/trac/astrobear/wiki/ProjectRuns
    2. Archive the data: local machines are pretty full. For the first archiving, move to bluehive and zip the data files there?
  • Code management
    1. AstroBEAR3.0 is on its way and will pull into the current scrambler folder
    2. current scrambler will move to branch 2.0

Convert animated gif file to videos on bamboo

  • Command lines converting animated gif to different video formats

Example: Eddie's 2.5D emission jets (gif)
New formats: avi, mp4, mov, mpeg/mpg

  • Convert gif to jpg files first
    convert old.gif old-%05d.jpg
    
  • Convert jpg files to avi
    avconv -i old-%05d.jpg new.avi
    
  • Convert avi to mp4, mov
    avconv -i new.avi new.mp4 
    avconv -i new.avi new.mov
    
    
  • Convert mp4 to mpeg/mpg
    1. Converting avi to mpg directly causes problems: new2.mpg
      avconv -i new.avi new.mpg
      
    2. Converting from mp4 works OK new.mpg
      avconv -i new.mp4 -c:v mpeg2video -q:v 2 -c:a libmp3lame new.mpg
      

2.5D emission
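The conversion chain above can be collected in one place. A sketch that only builds the command lines (file names are placeholders) without running them; execute each with subprocess.run(cmd) on a machine that has ImageMagick (convert) and avconv installed:

```python
# Dry-run sketch of the gif -> jpg -> avi -> mp4/mov -> mpg chain above.
# Nothing is executed here; the list mirrors the commands in the post.

def conversion_commands(gif="old.gif", base="new"):
    jpg_pattern = "old-%05d.jpg"   # numbered frames extracted from the gif
    return [
        ["convert", gif, jpg_pattern],
        ["avconv", "-i", jpg_pattern, base + ".avi"],
        ["avconv", "-i", base + ".avi", base + ".mp4"],
        ["avconv", "-i", base + ".avi", base + ".mov"],
        # avi -> mpg directly caused problems, so go through the mp4
        ["avconv", "-i", base + ".mp4", "-c:v", "mpeg2video",
         "-q:v", "2", "-c:a", "libmp3lame", base + ".mpg"],
    ]

cmds = conversion_commands()
```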

Procedures for backing up your data

  • Procedures for backing up your old computing data
    1. create a folder on /media/tmp070813 and name it with username_date. For example: bliu_07092013
    2. MOVE — NOT COPY the data you want to back up to the folder you created
    3. Make detailed notes about what these data are and save it as username_date.txt. For example: bliu_07092013.txt
    4. I will tar these data to the 4TB hard drive once everybody is done moving their data. And I will clean everything on /media/tmp070813 after that.
    5. Old data will be backed-up & cleaned twice every year. Our first backup date is Aug 1st 2013

Meeting Update 07/08/2013 -- Baowei

  • Tickets
    1. new: none
    2. closed: #294(Refinement Artifacts), #298(Problem running on Bluehive), #299(astrobear successfully compiled on bluehive but can't run by using PBS.)
  • Users
    1. Trying to arrange a meeting with LLE
    2. Wrote to Ian
  • Storage issue/Equipment
    1. Working with Dave to get 1 TB extra space on bamboo for Shule to move his stuff to local: /media/tmp070813/ on bamboo
    2. Looking for an easy way to back up the data to the new 4TB harddrive
    3. Poster tubes have been shipped to Adam. Hopefully will get them before Friday.
  • Worked on
    1. #294 (New revision 1244:c8d7ee38391f), #298, #299
    2. testing & debug code of AstroBEAR3.0

Meeting Update 07/01/2013 -- Baowei

  • Tickets
    1. New: #294(Refinement Artifacts), #295(Global Co-rotating Frame module stops running), #296(Self-Gravity Needs Investigation), #297(code terminates on Kraken immediately after starting when self gravity is turned on), #298(Problem running on Bluehive), #299(astrobear successfully compiled on bluehive but can't run by using PBS)
    2. Closed: #293(Problem running Hydro Static Disk)
  • Users
    1. Ian: Install AstroBEAR on Pleiades
  • Equipment
    1. Got the hard drive dock + 4TB hard disk.

Meeting Update 06/24/2013 -- Baowei

  • Tickets
    1. new: #289(3D pulsed jet runs too slow on Kraken), #290(Problem with profile.data file), #291(Problem viewing data in Visit), #292(test), #293(Problem running Hydro Static Disk)
    2. closed: #292(test)
  • Worked on
    1. changed the pointer in riemann_solver.f90 to arrays with dimension MaxVars
    2. tested and fixed bugs in newer OpenMP optimized code
    3. #290, #293
  • OoO
  1. will be out of office most of the time this week.

Teragrid Proposal of July 1

  • Allocations:
    1. Requested: 7100000
    2. Awarded: 3422821
  • Referee reviews:
    1. This is a new request for 7 million SUs on Kraken to study astrophysical flows by a large team of researchers (5 PIs, including several early-career scientists) supported by a large number of awards (5, including 1 NSF award). They made use of a start-up grant to analyze the performance of their code, AstroBEAR, which is adaptive mesh, and the proposing team is the same as the development team of this code. Provided is the strong scaling for the resolution they plan to run (128 + 4 levels AMR) on the target resource (Kraken), and it demonstrated good scalability. They make the point that the AMR code is 100x faster than the equivalent fixed-grid computation, so their strategy of using AMR is very helpful for this research. Overall a good proposal. There were a few shortcomings. I would have liked to have also seen a weak scaling for that resolution, or a smaller size, as well as some justification as to why they chose the resolution they did. They do not provide information about the experience of their team, but it seems they have expertise covering HPC aspects and consider code optimization. They also didn't mention whether they have local computing resources. They don't describe what beta is, how the various angles lead to different results, why they chose the angles they did, or how that will lead to a successful investigation. They mention they will save 150 frames of each run, but it is not clear whether that is only a subsample of the total frames that will be run. They do not give units for the runtime in Fig 3, and don't give walltime for how long a frame takes; they simply say it takes 6000 SUs. On balance, I would not recommend full allocation, but they have made a case for an award of about 50% of their request.
    2. This is a good proposal with all of the relevant information present. However, I could not find any previous usage by the group, except for some roaming allocation. The code is appropriate for the proposed computations and the scaling is fine. Because of the short track record I'm hesitant to recommend full funding. I recommend granting half of the request. Kraken: 3.5 MSU Storage: 2500
  • Important Factors
    1. Funding: NSF supported Computational Research Plans have priority over all non-NSF supported components.
    2. In the (usual) case where both non-NSF and NSF funding are involved, the Recommended Allocation is split into NSF and non-NSF portions.
    3. The non-NSF portion of a Recommended Allocation is reduced by the global fraction times an additional factor (greater than 1).

Meeting Update 06/17/2013 -- Baowei

  • Tickets
    1. new: none
    2. closed: none
  • Users
    1. send the code to Ian
  • Things to buy (as discussed in the past couple of weeks)
    1. Hard drive dock
    2. Poster tubes
    3. Video card for Alfalfa
  • Meetings
    1. SC13 Nov 16 - Nov 22, Denver CO. Registration opens July 17th. Technical Program poster. Poster submissions due July 31, 2013.
    2. ASP Annual Meeting 2014
  • Worked on
    1. optimize with OpenMP (#285)
    2. local users

Meeting Update 06/10/2013 -- Baowei

  • Tickets
    1. new: none
    2. closed: #277(Trouble restarting on bluestreak), #280(Strange message submitting to Bhive afrank queue)
  • Users
    1. wrote to Uppsala University user, no response received yet.
  • Worked on
    1. testing the OpenMP optimized code on bluehive
    2. local users
    3. reading planet atmosphere papers

Meeting Update 06/03/2013 -- Baowei

  • Tickets
    1. new: none
    2. closed: none
  • Users:
    1. New: Dan (REU student); a user from the University of Waterloo (for research); a user from the University of Birmingham, UK ("To investigate using Astrobear for simulations of colliding winds in binary systems and galaxy superwinds. I've used VH-1 a lot over the years: http://adsabs.harvard.edu/abs/1992ApJ...386..265S http://adsabs.harvard.edu/abs/2000MNRAS.314..511S and want to investigate AMR/MHD some more.")

  • worked on
  1. wiki latex plugin update: colorful equations: https://astrobear.pas.rochester.edu/trac/astrobear/wiki/FluxLimitedDiffusion

Meeting Update 05/28/2013 -- Baowei

  • Tickets
    1. new: #288(Krumholz accretion creates multiple particles when using particle refinement buffer)
    2. closed: none
  • Users
    1. new one from Uppsala University Sweden asked for the code
    2. another meeting with LLE (?)
  • Wiki
    1. new latex plugin (stable version): the current way to write a latex equation is
[[latex($ $)]]

instead of

[[latex($ $)]]

or

{{{#Latex }}} 

I can do the conversion for you if you have wiki pages with equations that don't show correctly

  1. Single jets negative temperature:
    1) 0AMR runs to the end on 16 cores of bamboo:
    2) 16X160+2AMR on 16 cores of bamboo: http://www.pas.rochester.edu/~bliu/RAGA/m16X160_2AMR.gif
    3) 16X160+4AMR on 16 cores of bamboo: got negative temperature at frame 3
    4) 16X160+4AMR on 1 core of bamboo: got negative temperature at frame 59
    5) working on tracing back a revision which worked for Eddie

3D Colliding Jets

Got many restart requests due to NaNs in the flux before the jets meet.

  1. Jets meet at frame 30

http://www.pas.rochester.edu/~bliu/PulsedJets/colliding0009.png

  1. movie: http://www.pas.rochester.edu/~bliu/PulsedJets/colliding.gif
  1. freezes at frame 78; restarting from frame 77 still freezes:
     filling fractions   =   0.969  0.930
     Current efficiency  =  86% 
     Cell updates/second =       1990      4487  44%
     Wall Time Remaining =   151.1 kyr at frame   77.9 of    100
     AMR Speed-Up Factor =       0.1904E+04
         Advanced level  2 to tnext= 0.2234E+01 with dt= 0.2327E-11 CFL= 0.3892E-01 max speed= 0.4181E+10
         Advanced level  2 to tnext= 0.2234E+01 with dt= 0.2327E-11 CFL= 0.3892E-01 max speed= 0.4181E+10
       Advanced level  1 to tnext= 0.2234E+01 with dt= 0.4654E-11 CFL= 0.2994E-01 max speed= 0.3217E+10
         Advanced level  2 to tnext= 0.2234E+01 with dt= 0.2327E-11 CFL= 0.3892E-01 max speed= 0.4181E+10
         Advanced level  2 to tnext= 0.2234E+01 with dt= 0.2327E-11 CFL= 0.3892E-01 max speed= 0.4181E+10
       Advanced level  1 to tnext= 0.2234E+01 with dt= 0.4654E-11 CFL= 0.2998E-01 max speed= 0.3221E+10
     Advanced level  0 to tnext= 0.2234E+01 with dt= 0.9307E-11 CFL= 0.1964E-01 max speed= 0.2110E+10
     Info allocations    =     1.5 gb  130.4 mb
     message allocations =   ------     35.5 mb
     sweep allocations   =   ------     54.3 mb
     filling fractions   =   0.969  0.930
     Current efficiency  =  86% 
     Cell updates/second =       1990      4487  44%
     Wall Time Remaining =   136.9 kyr at frame   77.9 of    100
    

Meeting Update 05/20/2013 -- Baowei

  • Users
    1. Met with LLE: most of the time was spent discussing the ablative RT results. Arijit showed results with the perturbation from Betti's code. Rui showed the ablation-balance results from his 3D code. Shule will summarize what he's been doing.
    2. Very positive Feedback from Josh of Clemson.
    3. Wrote to a user in China asking for feedback, no reply yet
    4. The Clemson and LLE users asked a computational scientist/system administrator to install AstroBEAR on their machines. Review my instructions for users to install:

To install and run it, you will need a Linux system, MPI, and libraries like fftw3, hdf5 and maybe hypre. You may find more details on our wiki page:
https://astrobear.pas.rochester.edu/trac/astrobear/wiki/UserGuide

Edit Makefile.inc. Sounds too complicated for a typical Linux user? Considering it usually takes me about 2 hours to install all the required libraries and the code on a new system, and sometimes I need to ask a system admin for help with job submission, there may be a big barrier for new users to pass before they learn how good our code is. A simpler, easier-to-install version? Make a configure file a high priority?

  1. Got Zhuo his accounts and a key to the office. Walked him through the whole process of getting, compiling, and running the code on local machines
  • Tickets
    1. New: #286(Memory Allocation Error on BlueStreak), #287(Virtual Memory Error on Kraken)
    2. Closed: none

Archives & Hard drive docks -- better solution for space issue?

Dave & Rich use hard drive docks, which might be a good solution for our current space shortage. Here are some thoughts:

  1. They are very cheap compared to other options (less than $300–500), and it's very easy to expand the storage.
  1. We can archive the data to these hard drives once a paper is published. We can create a folder on our local machines for the data to be archived, and keep a file recording what data are moved into the folder for archiving. Whoever moves data to the folder is responsible for adding those records to the file. Every three or four months, I will archive the folder to the hard drive and clean the whole folder.

optimization with OpenMP on Blue Gene/Q

Replace the vectorized FORALL loop with parallelized DO loops in sweep_scheme.f90. An example is to replace:

        DO i=mB(1,1), mB(1,2)
               FORALL(j=mB(2,1):mB(2,2),k=mB(3,1):mB(3,2))
                  beforesweepstep_%data(beforesweepstep_%x(i),j,k,1,1:NrHydroVars) = &
                       Info%q(index+i,j,k,1:NrHydroVars)
               END FORALL
            END DO

by

    !$OMP PARALLEL DO PRIVATE(k,j,i) COLLAPSE(3)
            DO k=mB(3,1),mB(3,2)
               DO j=mB(2,1),mB(2,2)
                  DO i=mB(1,1), mB(1,2)
                     beforesweepstep_%data(1:NrHydroVars,1,i,j,beforesweepstep_%x(k)) = Info%q(i,j,index+k,1:NrHydroVars)
                  END DO
               END DO
            END DO
                !$OMP END PARALLEL DO

Testing results on Blue Streak are

  1. 128^3 + 4AMR, Current Revision Running Time on 512 cores: 224.57 (Tasks per node=16)
Tasks per node OMP_NUM_THREADS Total Running Time
1 32 3375.17
2 16 2019.94
4 8 1265.58
8 4 1052.74
16 2 907.62
32 1 1151.02

Tasks per node OMP_NUM_THREADS Total Running Time
1 64 >3600
2 32 2039.45
4 16 1181.27
8 8 946.2
16 4 741.81
32 2 737.68
64 1 877.07
  1. 32^3 + 4 AMR, Current Revision Running Time on 512 cores: 33.26 (Tasks per node=16)
Tasks per node OMP_NUM_THREADS Total Running Time
1 64 191.42
2 32 122.68
4 16 82.43
8 8 70.78
16 4 72.78
32 2 85.65
64 1 129.95
Tasks per node OMP_NUM_THREADS Total Running Time
1 32 164.59
2 16 105.90
4 8 86.67
8 4 79.62
16 2 84.98
32 1 128.47
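The 128^3 + 4AMR measurements (the 64-thread-capable table above) boil down to a choice of MPI tasks versus OpenMP threads per node. A small sketch picking the fastest split from those numbers; the (1, 64) run is omitted because it exceeded the walltime (">3600"):

```python
# Wall times (seconds) for 128^3 + 4AMR on 512 Blue Streak cores,
# keyed by (MPI tasks per node, OMP_NUM_THREADS), copied from the
# table in the post above.
times = {
    (2, 32): 2039.45, (4, 16): 1181.27, (8, 8): 946.2,
    (16, 4): 741.81, (32, 2): 737.68, (64, 1): 877.07,
}
best = min(times, key=times.get)
# Hybrid MPI+OpenMP wins: (32 tasks, 2 threads) and (16, 4) both beat
# pure MPI (64, 1) and the thread-heavy configurations.
```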

The job submission script on Blue Streak looks like:

#!/bin/bash
#SBATCH -J strongTest
#SBATCH --nodes=32 
#SBATCH --ntasks-per-node=4
#SBATCH -p debug 
#SBATCH -t 01:00:00

module purge
module load mpi-xl
module load hdf5-1.8.8-MPI-XL
module load fftw-3.3.2-MPI-XL
module load hypre-2.8.0b-MPI-XL

ulimit -s unlimited
export OMP_NUM_THREADS=16
#1node 8 processors
srun astrobear > strong_4ThreadsperNode_X16.log
                                      

Swapping the DO loop order to i, j, k makes only a small difference in running time compared with the k, j, i case:

                !$OMP PARALLEL DO PRIVATE(i,j,k) COLLAPSE(3)
            DO i=mB(1,1), mB(1,2)
               DO j=mB(2,1),mB(2,2)
                  DO k=mB(3,1),mB(3,2)
                     beforesweepstep_%data(1:NrHydroVars,1,i,j,beforesweepstep_%x(k)) = Info%q(i,j,index+k,1:NrHydroVars)
                  END DO
               END DO
            END DO
                !$OMP END PARALLEL DO
Tasks per node OMP_NUM_THREADS Total Running Time
1 16 >3600
2 16 2099.57
4 16 1208.53
8 8 912.56
16 4 758.78
16 2 969.74
16 1 1436.98

meeting update 05/13/2013 -- Baowei

  • Tickets
    1. new: none
    2. closed: #283(compiler on Kraken complains about long lines), #284(trouble submitting jobs on bluestreak)
  • Users:
    1. will meet with Ruka and set up accounts for him
  • Worked
    1. #285 (Optimize AstroBEAR 2.0 on Blue gene/Q)

Photos for 2013 CIRC Poster Session

Some pictures for this year's CIRC poster session. Thanks for participating.

http://www.pas.rochester.edu/~bliu/PosterSession/2013/P5100037.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/2013/P5100065.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/2013/P5100012.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/2013/P5100007.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/2013/P5100003.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/2013/P5100049.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/2013/P5100062.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/2013/P5100059.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/2013/P5100039.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/2013/P5100066.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/2013/P5100047.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/2013/P5100053.JPG

Meeting Update 05/09/2013 -- Baowei

  • CIRC poster session
    1. Time: start at 10 a.m. — 9:45 am if you have a poster
    2. Location: Goergen Hall
  • Users
    1. New: from the United States Naval Academy (interested in learning more about astrophysics, including possible future involvement in gravitational-wave detection devices such as LIGO and LISA, with the hope of performing and enhancing universe dynamics simulation techniques) and Xiamen University (astrophysical simulations)
  • Tickets
    1. New: none
    2. Closed: none

Meeting Update 04/30/2013 -- Baowei

  • Tickets
    1. New #285 (Optimize AstroBEAR 2.0 on Blue gene/Q)
  • Worked on
    1. Registered CIRC poster session
    2. Scaling test and optimization on Blue Streak(Ticket #285)
      1. Strong scaling of current revision code (done up to 2048 cores)
      2. Enabled parallelization of program code by turning on qsmp. Compiled; in queue for testing. (#285)
      3. Working on replacing vectorized FORALL loops with parallelized DO loops and corresponding modifications like swapping indices etc. in sweep_scheme.f90.

Meeting Update 04/22/2013 -- Baowei

  • Tickets
    1. New: #282 (mpispawn error on stampede), #283 (compiler on Kraken complains about long lines), #284(trouble submitting jobs on bluestreak)
    2. Closed: #282
  • Users:
    1. Next meeting time with LLE: training of visit and next step.
  • Worked
    1. Optimization on stampede: https://astrobear.pas.rochester.edu/trac/astrobear/blog/bliu04042013
    2. New strong scaling test on stampede (O3):http://www.pas.rochester.edu/~bliu/Scaling/strongScalingOnStampedeO3.png
    3. In summary:
      1. with level 3 optimization, Stampede is about twice as fast as Kraken on fewer than 1000 cores, which is fair: 1 SU on Stampede = 2 SUs on Kraken
      2. Kraken has better strong scaling up to 5000 cores. (The maximum for stampede normal queue is 4096 cores)
      3. Current usage: stampede (40% | 169,343 SUs left), kraken (99% | 1,134,191 SUs left)

  • Working on
    1. optimization and scaling on blue hive and blue streak
    2. AstroBEAR poster for CIRC symposium

Meeting Update 04/16/2013 -- Baowei

  • Users
    1. Christine: needs the Stream Object and the 2.5D setup; asks about the progress in implementing a radiative transfer feature in AstroBEAR; will come back in May.
    2. New one asking for the code from VSSC (Vikram Sarabhai Space Centre?)
  • Tickets
    1. New: none
    2. Closed: (none)
  • Worked on
    1. New scaling on Kraken http://www.pas.rochester.edu/~bliu/Scaling/strongScalingOnKraken5000.png
    2. storage space: 1) cloud-based space on box.net: total size 50 TB (whole university), max file size 5 GB; 2) CIRC scratch: no backup
    3. Working on testing libraries for AstroBEAR on stampede & bluestreak.

Meeting Update 04/08/2013 -- Baowei

  • Users
    1. New user from Indonesia National Institute of Aeronautics and Space: Modelling solar phenomena, especially coronal mass ejection
    2. Jonathan and Baowei met with the CS graduate student working on optimizing MUSCL with openMP on Friday.
  • Tickets
    1. New: #281 (Compile and Run AstroBEAR on Kraken)
    2. Closed: #281

Strong scaling Test -- Kraken & Stampede

  • This run uses a short final time and includes the profiling and I/O time.
    1. stampede
Runtime http://www.pas.rochester.edu/~bliu/Stampede/strongScalingOnStampedeNew.png
Non-ghost zone portion http://www.pas.rochester.edu/~bliu/Stampede/strongScalingOnStampedeNoGhostUpdate.png
  1. kraken
Runtime http://www.pas.rochester.edu/~bliu/Kraken/strongScalingOnKraken.png
Non-ghost zone portion http://www.pas.rochester.edu/~bliu/Kraken/strongScalingOnKrakenOnlyNonGhost.png
  • Data
    1. stampede Non optimization
Cores Wall Time Non-ghost zone portion
128 549.82 57%
256 360.35 48%
512 235.72 41%

stampede O3

Cores Wall Time Non-ghost zone portion
128 54.0 57%
256 33.5 48%
512 20.4 41%
1024 13.31 35%
2048 10.63 29%
4096 8.66 23%
  1. Kraken
Cores Wall Time Non-ghost zone portion
120 118.18 56%
240 69.00 50%
480 40.99 42%
1008 28.24 35%
2016 16.88 29%
4996 11.25 22%
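The Kraken wall times above imply a strong-scaling efficiency relative to the smallest (120-core) run. A quick computation:

```python
# Strong-scaling efficiency eff(n) = (T_ref * n_ref) / (T(n) * n),
# using the Kraken wall times from the table above (120-core reference).
kraken = {120: 118.18, 240: 69.00, 480: 40.99,
          1008: 28.24, 2016: 16.88, 4996: 11.25}

ref_n = 120
ref_t = kraken[ref_n]
eff = {n: (ref_t * ref_n) / (t * n) for n, t in kraken.items()}
# Efficiency falls from 100% at 120 cores to roughly 25% at ~5000 cores,
# tracking the shrinking non-ghost-zone fraction in the same table.
```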
  • Configuration of Kraken & Stampede
Kraken Stampede
Computing Nodes 9408 6400
Core per Node 12 16
Processor 2.6 GHz AMD Opteron 2.7GHz Xeon E5-2680 (Coprocessors Xeon Phi SE10P 1.1 GHz)
Memory per Node 16 GB 32 GB
  • Standard output on Stampede
    1. 128 cores
    Total Runtime =      550.3116700649261475 seconds.
     Info allocations    =   ------    280.6 mb
     message allocations =   ------     36.2 mb
     sweep allocations   =   ------     49.8 mb
     filling fractions   =   0.012  0.644  0.900  0.000
     Current efficiency  =  82%  16%  98% 
     Cell updates/second =        973      1721  57%
     Wall Time Remaining =   ------   
     AMR Speed-Up Factor =       0.1039E+04
    
  1. 256 cores
    Total Runtime =      360.3508758544921875 seconds.
     Info allocations    =   ------    200.6 mb
     message allocations =   ------     32.4 mb
     sweep allocations   =   ------     59.7 mb
     filling fractions   =   0.012  0.665  0.898  0.000
     Current efficiency  =  77%  21%  98% 
     Cell updates/second =        735      1525  48%
     Wall Time Remaining =   ------   
     AMR Speed-Up Factor =       0.7988E+03
    
  1. 512 cores
    Total Runtime =      549.8217809200286865 seconds.
     Info allocations    =   ------    280.6 mb
     message allocations =   ------     36.2 mb
     sweep allocations   =   ------     49.8 mb
     filling fractions   =   0.012  0.644  0.900  0.000
     Current efficiency  =  82%  16%  98% 
     Cell updates/second =        974      1722  57%
     Wall Time Remaining =   ------   
     AMR Speed-Up Factor =       0.1040E+04
    
  • Standard output on Kraken
    1. 120 cores
      Total Runtime =      118.1762299537658691 seconds.
       Info allocations    =   ------    257.6 mb
       message allocations =   ------     41.2 mb
       sweep allocations   =   ------     63.1 mb
       filling fractions   =   0.012  0.668  0.895  0.000
       Current efficiency  =  75% 
       Cell updates/second =       4837      8572  56%
       Wall Time Remaining =   ------   
       AMR Speed-Up Factor =       0.8210E+03
      
  1. 240 cores
    Total Runtime =       68.9995868206024170 seconds.
     Info allocations    =   ------    164.1 mb
     message allocations =   ------     32.8 mb
     sweep allocations   =   ------     58.4 mb
     filling fractions   =   0.012  0.654  0.897  0.000
     Current efficiency  =  70% 
     Cell updates/second =       4099      8226  50%
     Wall Time Remaining =   ------   
     AMR Speed-Up Factor =       0.7070E+03
    
  2. 480
    Total Runtime =       40.9853310585021973 seconds.
     Info allocations    =   ------    122.0 mb
     message allocations =   ------     24.2 mb
     sweep allocations   =   ------     30.1 mb
     filling fractions   =   0.011  0.706  0.901  0.000
     Current efficiency  =  68% 
     Cell updates/second =       3414      8082  42%
     Wall Time Remaining =   ------   
     AMR Speed-Up Factor =       0.5956E+03
    

Meeting Update 04/01/2013 -- Baowei

  • Current computing resources
    1. Stampede: 61% | 256,868 SUs left
    2. Kraken: 99% | 1,138,233 SUs left
  • Tickets
    1. New: none
    2. Closed: none
  • Working on testing box.net for storage.

Strong Scaling Test on Stampede

  • Run Time
    1. With hypre
Num of Cores Run Time (secs)
1024 7963.7
2048 5862.3
4096 4005.8

2. Without hypre

Num of Cores Run Time (secs)
1024 7401.0
2048 5436.6
4096 4126.2/4025.2
  • Scaling Test Result
Runtime http://www.pas.rochester.edu/~bliu/Stampede/strongScalingOnStampedeLog.png
Runtime Considering Efficiency http://www.pas.rochester.edu/~bliu/Stampede/SC_withEff_Stampede.png
Cell Updates Per Second http://www.pas.rochester.edu/~bliu/Stampede/SC_CellUpStampede.png
  • Standard output of last advance
    1. 1024 cores:
      Info allocations    =    79.8 gb  110.0 mb
       message allocations =   ------     32.0 mb
       sweep allocations   =   ------     29.9 mb
       filling fractions   =   0.017  0.597  0.855  0.000
       Current efficiency  =  66%  31%  97% 
       Cell updates/second =        437      1215  36%
       Wall Time Remaining =   ------   
       AMR Speed-Up Factor =       0.3331E+03
      
    2. 2048 cores
      Info allocations    =   106.9 gb   85.5 mb
       message allocations =   ------     64.0 mb
       sweep allocations   =   ------     30.7 mb
       filling fractions   =   0.017  0.591  0.848  0.000
       Current efficiency  =  58%  39%  97% 
       Cell updates/second =        298      1011  29%
       Wall Time Remaining =   ------   
       AMR Speed-Up Factor =       0.2305E+03
      
    3. 4096 cores
       Info allocations    =   147.4 gb   61.2 mb
       message allocations =   ------    128.0 mb
       sweep allocations   =   ------     20.1 mb
       filling fractions   =   0.016  0.619  0.846  0.000
       Current efficiency  =  47%  50%  97% 
       Cell updates/second =        187       785  24%
       Wall Time Remaining =   ------   
       AMR Speed-Up Factor =       0.1753E+03
      
  • Standard output of last advance (No self-gravity)
    1. 1024
       Info allocations    =    67.0 gb   93.0 mb
       message allocations =   ------     32.0 mb
       sweep allocations   =   ------     25.2 mb
       filling fractions   =   0.017  0.597  0.852  0.000
       Current efficiency  =  69%
       Cell updates/second =        466      1299  36%
       Wall Time Remaining =   ------
       AMR Speed-Up Factor =       0.4099E+03
      
  1. 2048
     Info allocations    =    92.0 gb   71.2 mb
     message allocations =   ------     64.0 mb
     sweep allocations   =   ------     22.5 mb
     filling fractions   =   0.016  0.620  0.848  0.000
     Current efficiency  =  61%
     Cell updates/second =        322      1097  29%
     Wall Time Remaining =   ------
     AMR Speed-Up Factor =       0.2873E+03
    
  1. 4096
     Info allocations    =   125.3 gb   51.0 mb
     message allocations =   ------    128.0 mb
     sweep allocations   =   ------     19.9 mb
     filling fractions   =   0.016  0.616  0.849  0.000
     Current efficiency  =  50%
     Cell updates/second =        206       865  24%
     Wall Time Remaining =   ------
     AMR Speed-Up Factor =       0.1884E+03
    
Info allocations    =   125.6 gb   54.7 mb
 message allocations =   ------    128.0 mb
 sweep allocations   =   ------     19.9 mb
 filling fractions   =   0.016  0.620  0.845  0.000
 Current efficiency  =  51% 
 Cell updates/second =        211       882  24%
 Wall Time Remaining =   ------   
 AMR Speed-Up Factor =       0.1919E+03
  • CPU hours
    1. 1 frame: 4600 SUs
    2. 50 frames: 230,000 SUs
    3. 4~5 runs: 1,150,000 SUs (on stampede)
    4. Current Allocation: 416,000 SUs (on stampede), 1,138,234 SUs (on Kraken)
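The SU arithmetic in that list is easy to verify (a sketch using the measured ~4600 SUs per frame from above):

```python
# SU budget arithmetic from the bullet list above.
sus_per_frame = 4600
frames_per_run = 50
sus_per_run = sus_per_frame * frames_per_run   # 230,000 SUs for one 50-frame run
five_runs = 5 * sus_per_run                    # 1,150,000 SUs for 5 runs
print(sus_per_run, five_runs)
```

So the planned 4~5 runs would need roughly triple the current 416,000-SU Stampede allocation.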

Meeting Update 03/25/2013 -- Baowei

  • Tickets
    1. New: none
    2. closed: #273(Install AstroBEAR on Stampede of TACC)
  • Machines:
    1. Xsede current usage: stampede 25% (315,584 SUs left), kraken 0% (1,138,234 SUs left)
    2. Disk Usage
    3. Raid alert from Clover: Rich is working on it
    4. Simple speed test on blue streak, blue hive and stampede (#7 of Top500) and hypre 2.8/2.9
Module Blue Streak(16 cores) Blue hive(8 cores) Stampede(16 cores)
BonnorEbertSphere 794 secs 334 secs 545 secs
Bondi 899 secs 107 secs 626 secs
MolecularCloudFormation 392 secs 48 secs 269 secs

Didn't see a big difference between hypre 2.8 and 2.9 on blue streak

  • New wiki header picture
  • Will take a day off on Tuesday for moving.

Current Disk Usage

Machine Total Used Use%
clover 11T 9.7T 95%
grass 5.3T 3.7T 75%
alfalfa 5.3T 4.9T 97%
bamboo 13T 12T 97%

Meeting Update 03/18/2013 -- Baowei

  • New users from Download form
    1. Helsinki; UR (student for a CS course)
    2. A better way of handling users who come through the Download form?
  • Equipment & Local machines
    1. Mouse, Mac to VGA port
    2. Clover has become unstable: move the wiki to botwin and use other machines for backup
    3. Grass has a lot of issues as well. New machine for Erica (bamboo?)
  • Tickets
    1. New: #280 (Strange message submitting to Bhive afrank queue)
  • Worked on #273, testing hypre 2.9 and running script for testing suite on blue streak.

Meeting Update 03/11/2013 -- Baowei

  • Tickets:
    1. New: None
    2. Closed: #274 (Question on AMR)
  • New user from China requested the code through the Download form.
  • Out of Memory Issue on Blue Streak
    1. Latest revision installed on Blue Streak: hypre-2.9.0b-MPI-XL-No-Global-Partition with optimization flag O3
  • Teragrid allocation
    1. Stampede: 416,000 SUs: https://astrobear.pas.rochester.edu/trac/astrobear/blog/bliu03082013
    2. Kraken: 1,138,234 SUs

About Stampede Time and Slurm

Some useful information about stampede queue:

Queue Name Max Runtime Max Nodes/Procs
normal 24 hrs 256 / 4000
development 4 hrs 16 / 256
large 24 hrs 1024 / 16000

Details can be found at http://www.tacc.utexas.edu/user-services/user-guides/stampede-user-guide#running-slurm-queue

Our total allocation is 416,000 SUs (CPU hours). If all our jobs run on 1000 CPUs, that gives us a little more than TWO weeks of wall-clock time in total.
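The arithmetic behind that estimate (a rough sketch that ignores node-granularity charging):

```python
# Wall-clock budget: 416,000 SUs (core-hours) spent at 1000 cores.
total_sus = 416_000
cores = 1000
hours = total_sus // cores    # wall-clock hours available
days = hours / 24             # a bit more than two weeks
print(hours, round(days, 1))
```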

To submit a job to 1000 CPUs in the normal queue on Stampede for 24 hours, here's an example:

#SBATCH -A TG-AST120060   # account/allocation to charge
#SBATCH -J bearRun        # job name
#SBATCH -n 1000           # total number of MPI tasks
#SBATCH -c 1              # cores per task
#SBATCH -p normal         # queue (partition)
#SBATCH -t 24:00:00       # max walltime

The system can decide how many nodes to allocate for you, or you can specify the number of nodes with the -N option.

The complete slurm script can be downloaded from ticket #273.

Meeting Update 03/04/2013 -- Baowei

  • New Revision:
    1. 1241:70f57b1e4434
    2. Mainly bug fixes found on blue streak
  • Tickets
    1. New: #277 (Trouble restarting on bluestreak), #278 (test), #279 (Compiling error on Blue Streak in particle_info_ops.f90)
    2. Closed: #278, #279
  • Stampede is all set (#273)
  • Trac/wiki updates
    1. Download page: Download with Contact Form plugin
    2. download plugin for wiki users
    3. Ticket email notifying plugin
    4. TracMath still doesn't work

Meeting Update 02/25/2013 --Baowei

  • New revision 1239:7aab6defde61 in Scrambler:
    1. working bov2jpeg from Jonathan.
    2. Currently having a problem with my bamboo account, so I didn't run tests on bamboo.

  • New Hardware? (wireless mouse, Adapter Cable: Apple Mini DisplayPort)
  • Worked on
    1. Clean up the users on wiki and astrolocals google group: #272 (Clean up forum/discussion board)
    2. Setup Stampede of Teragrid for AstroBEAR: #273(Install AstroBEAR on Stampede of TACC)
    3. #276

Meeting Update 02/18/2013 -- Baowei

  • Weekly testing suite runs: Debugging
    1. Ran the 30 testing modules twice on blue hive. Each time, 5 (different) modules failed due to the job queue system — the jobs died silently before running. They should all pass, as it's the same revision as last week. Working on modifying the script to handle this case.
  • Users:
    1. Created Wiki accounts for Andrew.
    2. Rui asked questions about hypre in AstroBEAR (Jonathan)
  • Worked on ticket #273 (Install AstroBEAR on Stampede of TACC). Currently getting an error with the newly installed hdf5 and the qprec type.

Meeting Update 02/11/2013 -- Baowei

  • Wiki
    1. Set yearly backup
    2. Updated Doxygen

  • New Revision
    1. 1237:82b26a9a1a33 and 1238:604fb418ad9a checked in: just some updates with running scripts for blue streak.
    2. Weekly tests on blue hive and blue streak all passed. Added a naive weekly update of the testing to Debugging, so everyone can see the weekly testing results — last week's testing status won't be accurate for now.
  • Users
    1. Contacted IO (Jonathan has the email) and Rui of LLE (the student Ariji is trying AstroBEAR now). Let them know about the latest revision in the development branch.

Meeting Update 02/04/2013 -- Baowei

  • Golden Version AstroBEAR & Blue streak
    1. New Revision in /clover/data/scrambler: 1235:f61e035a8ee7 including important bug fixes in cooling.f90 (#275) and clumps.f90
    2. Blue Streak: mercurial installed on blue streak. (Ticket #245)
  • Tickets:
    1. New: #275: Segmentation fault found with RadShock testing module on Blue hive
    2. Closed: #234 and #235(both for Golden Version Test Modules), #245(Preparing Environment for AstroBEAR on Blue Streak (Q)), #256(GNU General Public License Licence for AstroBEAR), #275
  • AstroBEAR Vedio Meeting with Erica, Brendan and Will

Meeting Update 01/28/2013 -- Baowei

  • Users
    1. New users: gave the latest revision of the code to Tony Piro and Christian Ott from Caltech
    2. Yan asked questions about visit
  • Working on testing and checking in 2D MUSCL code and installing AstroBEAR on Stampede

Build Problem Module

Following the User Guide: https://clover.pas.rochester.edu/trac/astrobear/wiki/ModulesOnAstroBear , I built two problem modules

  • Simple Clump Module (with objects)
    1. Documentation is very clear and easy to follow. The code is straightforward — maybe need a little bit background knowledge: Ghost zones (Parallel programming), gamma7=1.0/(gamma-1)(Energy and pressure relations), Namelist (Fortran) and data files
    2. code compiled with AstroBEAR
    3. Need to update "problem.data" from the template to get it to run — with "rho=, radius=, velocity=…."
    4. data files for user to try and run?
    5. Result:ClumpMovie_0AMR

http://www.pas.rochester.edu/~bliu/ProblemModule/clump.png

  • Simple Clump Module (without objects)
    1. Result with 4 nodes: Result:ClumpOld_0AMR

http://www.pas.rochester.edu/~bliu/ProblemModule/oldStyle4node0018.png

http://www.pas.rochester.edu/~bliu/ProblemModule/mag100015.png

  • Clump with Rotation
  1. rho: rhoMovie

http://www.pas.rochester.edu/~bliu/ProblemModule/Clump_omega2.00015.png

Meeting Update 01/21/2013 -- Baowei

  • Trac backup & Blue Streak Queue
  1. The Trac backup conflicted with the new NameTag plugin, which is required by Discussion. The Trac backup went back to normal after the plugin was removed, but the forum is down.
  2. The Blue Streak queue system works now?
  • Working on moving from Ranger to Stampede (#273)

1D Sod shock Tube with Different Schemes

http://www.pas.rochester.edu/~bliu/MUSCL/rho_4.png
http://www.pas.rochester.edu/~bliu/MUSCL/vx_4.png
http://www.pas.rochester.edu/~bliu/MUSCL/P_4.png

Meeting Update 01/14/2013 -- Baowei

  • Wiki & Machines
    1. wiki documentation
    2. Trac backup failing
    3. Blue Streak: currently has issues with job submitting and scheduling
  • MUSCL in AstroBEAR
    1. 1D Sod Shock Tube
      http://www.pas.rochester.edu/~bliu/MUSCL/rho.png
      http://www.pas.rochester.edu/~bliu/MUSCL/vx.png
      http://www.pas.rochester.edu/~bliu/MUSCL/P.png
    2. 1D Rad Shock
      http://www.pas.rochester.edu/~bliu/MUSCL/RadShockMUSCL.png
      http://www.pas.rochester.edu/~bliu/MUSCL/RadShockSweep.png
      http://www.pas.rochester.edu/~bliu/MUSCL/RadShockplots.png


  1. 2D: NaN errors and segmentation faults; working on it
  • Working on a 1D Euler solver with the MUSCL scheme (Sod shock tube, Lax-Friedrichs / Riemann solver for fluxes)
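The reconstruction step of a MUSCL scheme can be sketched as follows. This is a minimal illustration of the standard minmod-limited piecewise-linear reconstruction applied to each conserved variable before computing Lax-Friedrichs or Riemann fluxes; it is not AstroBEAR's actual implementation:

```python
import numpy as np

def minmod(a, b):
    # minmod slope limiter: zero at extrema (sign change), otherwise the
    # smaller-magnitude one-sided slope; keeps the scheme TVD.
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def muscl_reconstruct(u):
    # Piecewise-linear reconstruction of interface states for interior cells.
    du_left = u[1:-1] - u[:-2]    # backward difference
    du_right = u[2:] - u[1:-1]    # forward difference
    slope = minmod(du_left, du_right)
    uL = u[1:-1] + 0.5 * slope    # state at the right face of each interior cell
    uR = u[1:-1] - 0.5 * slope    # state at the left face
    return uL, uR
```

On smooth data the limiter returns the centered slope's smaller neighbor; at a discontinuity it drops to first order, which is what suppresses the oscillations visible in unlimited schemes.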

Meeting Update 01/07/2013 -- Baowei

  • Christine's visit
    1. Jan 14th (Monday): office, account, group meeting(?), lunch(?)
    2. Volunteers?
  • Golden Version — Current revision: 1201:72d442594ac2, includes
    1. updated scripts for routine test suites running on blue hive and blue streak
    2. MHD convergence tests
    3. The routine test running of MHDWaves failed on blue streak
  • Machines:
    1. AstroBEAR and required libraries installed on LLE cyclone. Tested with RTInstability by Rui
    2. checking local machines (bamboo unreachable, clover trac backup failed). will restart if necessary.
  • Wiki
    1. installed Discussion plugin
  • Working on MUSCL

Christine's Visit

==============

Attached is an ApJ letter that I've published on my experiment and the astrophysical motivation for it. What I would like to do with AstroBEAR is create the Cataclysmic Variable (CV) detailed in the first paragraph of the introduction. The CV related to this work is considered non-magnetic because the secondary star in the binary system donates mass to the white dwarf (WD) in the orbital plane, (not along field lines to a polar axis).

The localized area of interest is the collision region at which the accreting stream impacts the formed accretion disk around the WD. I've also attached an ApJ paper that did simulations on this (comparing isothermal and adiabatic EOS's) because they detail parameters of the CV system. You will see in my paper the connection of this astrophysical system to my laser experiment, but to re-iterate the "problem I want to solve" is the dynamics of the shocked stream at the accretion disk edge. I have new data, which I showed to your group in July, that hasn't been published yet, but shows perhaps some stagnation and a diverted stream in an oblique shock scenario. This corresponds most likely to some optical thickness in the shock system, so it would be interesting to do the CV simulation looking more closely at the shocks that form in it and how mass moves around the collision region, as a function of scale height of the disk.

I can offer more details with questions/concerns, but my paper in combination with the Armitage and Livio one (sections 1 through 2.3) offer a good overview of the system.

I am currently on a different experiment in Livermore, but I am going to start exploring the AstroBEAR wiki, etc, this week! Thank you for setting up an account for me.

=====================

APJ letter http://www.pas.rochester.edu/~bliu/Christine/ApJL452485p4.pdf

Armitage and Livio http://www.pas.rochester.edu/~bliu/Christine/36557.pdf

Meeting Update 12/18/2012 -- Baowei

  • Golden Version & Machines
    1. New revision in devel branch: 1174:9b0df1e0242b including bug fixes on Blue streak (mainly ticket #265, #266).
    2. New machines on which the Golden Version of AstroBEAR was installed: Palmetto at Clemson (more than 15,000 cores), Cyclone at LLE
    3. New users: Jake, Rui Yan, Alijit
  • Worked on
    1. coding and testing Revision 1174:9b0df1e0242b
    2. Help install for new machines and new users
    3. Testing Shule's AblativeRT module

Meeting Update 12/10/2012 -- Baowei

  • Tickets
    1. New ticket: #270(standard out "walltime remaning" accuracy)
  • Worked on
    1. Ticket #265
    2. parallel hdf5
    3. MUSCL-Hancock scheme

Meeting Update 12/03/2012 - Baowei

  • Tickets:
    1. New: #268 (Grid anomalies when ithreaded = 0), #269 (CND simulation test (include: program, standardout and output))

Meeting Update 11/26/2012 - Baowei

  • Users
    1. IO will skype in during the meeting
    2. Will ask Rich for Shaz's local account.
  • Start working on MUSCL

Meeting Update 11/19/2012 - Baowei

  • Golden Version & Blue Streak
    1. Found two modules failed on blue streak (ticket #265, #266)
    2. AstroBEAR running slow on Blue streak (see Jonathan's post on Blue streak performance)
    3. Queue of Blue streak doesn't work properly and the system seems unstable
  • Outside Users
    1. Yat-Tien made some progress running the code. He will post the blog and call-in meeting next Monday (11/26)
    2. Shazrene asked to download the code. Account on local machines for her?
    3. Download page?

Baowei's Meeting Update -- 11/12/2012

  • Golden Version
  1. checked in 1162:851b4b9a604e with the Copyright notice — thanks to Ivan
!    Copyright (C) 2003-2012 Department of Physics and Astronomy,
!                            University of Rochester,
!                            Rochester, NY
!
!    global_declarations.f90 is part of AstroBEAR.
!
!    AstroBEAR is free software: you can redistribute it and/or modify    
!    it under the terms of the GNU General Public License as published by 
!    the Free Software Foundation, either version 3 of the License, or    
!    (at your option) any later version.
!
!    AstroBEAR is distributed in the hope that it will be useful, 
!    but WITHOUT ANY WARRANTY; without even the implied warranty of
!    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
!    GNU General Public License for more details.
!
!    You should have received a copy of the GNU General Public License
!    along with AstroBEAR.  If not, see <http://www.gnu.org/licenses/>.
  2. Outside users: created wiki and local machine accounts for Scott Lucchini; created a wiki account for Shazrene Mohamed. IO: waiting for the result so that we can start posting the beta version results in a blog. Attended the LLE meeting.
  • Blue streak
  1. Currently has some problems with the queue system; won't do an update until after Thanksgiving.
  2. Post-processing with plots, all but one working: https://clover.pas.rochester.edu/trac/astrobear/wiki/u/MyCurentTests
  3. Working on slow-cell-updating
  • Tickets:
  1. New #264 (Parallel hdf5 IO)

Baowei's Meeting Update 11/05/12

  • Golden Version & Blue Streak
    1. tested and checked in 1156:86ff14bc8662 and 1161:f528b4a4b297 .
    2. Ivan found an easier way to handle the preprocessor issue on blue streak. The makefile.bluestreak and *.F files are now deprecated in rev 1161:f528b4a4b297; global_declaration.f90 with the absolute path is deprecated as well. So compiling and running AstroBEAR on blue streak works just like on Blue Gene/P now. (Ticket #245)
    3. Tried installing mercurial on blue streak
    4. Copyright year for Golden Version AstroBEAR
    5. Ivan and Jonathan are running jobs on blue streak.

Meeting Update 10/29/2012 - Baowei

  • Golden Version & Blue Streak
    1. Tested with the testing modules on blue streak; bear2fix can extract BE/LE data automatically (Ticket #262). The flag lDataFromBlueGene and the convertfrombluegene functions are no longer needed.
    2. Still working on running all tests on blue streak
    3. Need a big project to run on blue streak
  • Tickets
    1. New: #263(Implementing cylindrical coordinates to reconstruction step)
  • Teragrid
    1. Added Erica to the allocation

Meeting Update 10/22/2012 - Baowei

  • Golden Version Status
    1. New revision 1144:3bc7b231aa1a with memory leak fix in main repo.
    2. Set a cron job that will notify about new revision.
    3. Unstable Endian issues with bear2fix when running testing suites on blue streak (#262)
  • Tickets:
    1. New: #261(Download page for Golden Version AstroBEAR), #262 (Big/Little Endian on blue streak)
  • Attended Matlab workshop
  • Added Eddie to Teragrid Allocation

Meeting Update 10/15/2012 - Baowei

  • Golden Version
    1. checked in revision 1125:2c3e80f15c86 which passed all tests on local machines and bluehive. All modules run on blue streak after hypre installed (#245) and fixed several problems in makefile and Makefile.inc for bgq. Working on post-processing issues which will not affect the running.
  2. Folks can pull from the main repo and keep your astrobear updated to it. If you fix something, please make sure you run buildproblem on local machines before checking in.
  3. Testing results can be viewed with USER_MACHINE. For example, "CurrentTests(bliu_grass, width=250px)" wrapped in double square brackets will show the current testing results I run on grass.
  4. Testing for the main repo will run weekly from now on.
  • Tickets
    1. New: #260 (Sims on Kraken going slow)
    2. Closed: #250 (Unified interface for module objects), #258 (wiki page for BE)
  • Trac updates
    1. Installed Collaps and BackLinksMenu macros
    2. Tried: wikipage to pdf plugins but failed

Meeting Update 10/08/2012 - Baowei

  • Golden Version Status

Merged with Ivan's revision last Friday and Jonathan's revision on Sunday — so I was wrong about the final merge last week. Hopefully we are close to the final merge… The following table summarizes the testing results from the weekend — with Ivan's code.

Machines Testing Results
Clover Goes Very slow. Takes hours. Stop testing on clover
Alfalfa All passed
Bamboo All passed
Grass All passed
Bluehive Modules using bear2fix process testing chombos passed. Found an issue running modules which don't use bear2fix (generate plots like BrioWuShockTubes). Updated the testing suite script for these modules.
Blue Gene/P No reservation. Didn't run tests on it
Blue Streak Found a bunch of bugs in the code and data files, but all are fixed. Updated the testing suite script for modules with plot-based test results. Got slightly different chombo files when running Bondi, which failed the test. Working on it

With Jonathan's revision, I found a segmentation fault error running Basic Disk. Working on it.

  • Trac 1.0
    1. Installed a new LaTeX plugin. Tried multiple times, but the old plugin won't work with the current version of Trac. The way of writing equations is slightly different (single pair of ). Details can be found at Ticket #254. Old wiki pages with equations may look strange. We will probably go back to the old plugin when the new version comes.
    2. Reinstalled Mercurial plugin.
  • Tickets
    1. New #258 (wiki page for BE module) #259(seg fault on BH)
    2. Closed #254

Baowei's Meeting Update -- Oct 1 2012

  • Golden Version
    1. Did the final merge. Running final tests on local machines, blue hive, and bluegene. Will check in to devel_branch when all tests pass.
    2. Total 28 testing modules.
    3. Things to do: GPL license, configure file, testing pages on wiki for each local machine, weekly testing runs, download page.
  • Tickets
    1. New tickets: #254(wiki update), #255(configure), #256(licence), #257 (tag files)
  • Blue Streak
    1. Installed AstroBEAR and necessary libraries with IBM XL compilers — makefile modified. (Ticket #245)
    2. Ran successfully with the Bondi testing module
    3. Will try to run all testing modules and scaling tests
  • Trac
    1. updated to 1.0.1
    2. More plugins need to be installed (#254…)
  • UCLA visitor
    1. Reported a compiling error. Solved.

Baowei's Meeting Update 09/24/12

  • Golden Version
    1. Test passed on grass, alfalfa and bamboo with the modules we have
    2. Still missing three modules, including IonizationTest and Rotating Collapse
  • XSEDE Allocation
    1. Approved, but for far less than requested:
Resource Requested Awarded
NICS Cray XT5 (Kraken) 4,000,000 SUs 1,138,234 SUs
TACC Sun Constellation Cluster (Ranger) 4,000,000 SUs 1,326,290 SUs
  • Update to the local machines
    1. New OS up
    2. Will discuss with Rich about updating trac and re-installing Wiki plugins

  • Blue Gene Q
    1. will attend piloting user meeting this afternoon
  • Tickets:
    1. closed: #236, #237, #238 (Golden version)
    2. New: #251 (wiki update), #252 (Self-gravity restart), #253 (automatic testing run)
  • Yat Tien's visit
    1. Thank everybody's work for the training
    2. Returned the keys
    3. Emailed Rich to delete the account

Baowei's Meeting Update -- Sep 17 2012

Module Status
BE_stuff does not pass test; updated wiki page linked to the testing page
BasicDisk updated page linked to the testing page
MomentumConservation updated page linked to the testing page
MultiClumps about 17 mins on 8 cores of alfalfa; updated page linked to the testing page
SingleClump does not pass test; updated page linked to the testing page
SlowMolecularCloudFormation updated page linked to the testing page
ThermalInstability updated page linked to the testing page
  1. Makefile.inc files for local machines need to be double-checked
  2. Missed the testing page on wiki
  • Yat Tien's visit
    1. Followed the training schedule
    2. Will install astrobear on a UCLA machine so he has a machine to use when he leaves here.
  • Request to restart our local workstations
    1. Start using Ubuntu Linux 12.04 instead of the older version 10.04
    2. Deadline Friday Sep 21
    3. Planning to set up a time to restart all of them with Rich present
  • New Tickets
    1. #248 (Unified nomenclature) — completed
    2. #249 (Isotropic turbulence has an empty global.data) —fixed

  • Blue Streak (the Q)
    1. Installed AstroBEAR. Got a data file open/read problem when running — could be related to the file system.
  • Teragrid Proposal — No news
    1. The review starts on Sep 1st, and the allocation should begin on Oct 1st if approved.

Baowei's Meeting Update 09/10/12

Baowei's Meeting Update 09/05/12

  • astro-sim.org
    1. Martin sent an email to Steffen Brinkman, the webmaster, as they know each other.
  • Worked on:
    1. #237 Merging with Jonathan's testing modules
    2. #240 re-run with the attached problem.f90 on alfalfa
    3. #244 Segmentation fault running Uniform Collapse testing module, fixed
    4. #245 Installing libs for AstroBEAR on Blue Streak: Currently get trouble installing parallel version hdf5.

Meeting Update 08/28/2012 - Baowei

  • Tickets:
    1. New: #243 (heat conduction and resistivity solver), #244 (Segmentation fault with UniformCollapse), #245 (Installing libs and astrobear on Blue Streak—the Q)
  • Worked on: #237 (test modules), #240 (Molecular clouds), #244 (Segmentation fault with UniformCollapse), #245 (Installing libs and astrobear on Blue Streak—the Q)

Meeting Update 08/14/2012 - Baowei

Baowei's Meeting Update 08/07/12

Baowei's Meeting Update 07/31/12

Project Management with Mercurial Branches

On Tuesday, Jonathan, Eddie and I had a little discussion about the project management of our code — especially after we have our golden version. I played with some of these ideas using the following Mercurial extensions: branches, tags, graphic log view, and transplant. The notes below summarize some of my experiments.

Introduction to Our new Mercurial tools

  1. hg branch: create a branch under the same repository and check which branch you are working on. Users can specify which branch they want to pull the code from
  2. hg tag: attach a tag to a revision; this creates a new revision. It seems a tag cannot be attached to individual files.
  3. hg glog or hg view: view the whole development tree/revision structure
  4. hg transplant: cherry-pick code between branches
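As a non-runnable transcript sketch of the cherry-picking workflow (the changeset number, branch name, and path here are hypothetical):

```
$ hg update 1.0.x                # switch the working copy to the release branch
$ hg transplant -s ../trunk 17   # cherry-pick changeset 17 (a bug fix) from the trunk repo
$ hg glog                        # inspect where the transplanted changeset landed
```

This is exactly the operation that, done against the wrong revision, produced the "mis cherrypicking" backout visible in the Scheme II tree below.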

Schemes of Project Management

According to Karl Fogel's book (http://producingoss.com/), especially Chapter 7 (Packaging, Releasing, and Daily Development; the Release Branches section), I think two branches work better than one, and taking a snapshot of the tree is not a good way to get a stable version. Two branches balance checking in each developer's work as soon as possible against the risk of checking in controversial/unstable code.

  • Default branch/trunk: for development mainline, every developer can check in his/her code as long as it passes our NEW strong testing suite.
  • A Release Candidate Branch: only check in clean and ready code (including bug fixing) which means
    1. Pass our NEW strong testing suite
    2. No controversial opinions from other developers
    3. All developers agree on checking it in to that release.
  • Developers should update their development branches to the trunk frequently — several times a day, according to Karl Fogel.

We can have more branches, like branches for release 1.0 and release 2.0. And with Mercurial transplant, bug fixes relevant only to 1.0 can be cherry-picked from the trunk to the 1.0 branch (examples shown below).


My original thinking was to create a branch under the same repository for each developer. Then, with hg glog or hg view, the development tree of our whole group could be easily seen. But this tree could get very complicated, and people could easily get confused about which branch they are working on (see the example for Scheme I).

The second thing I tried was to make two branches under the same repo. This was basically Jonathan's idea, though the check-in procedures for the two branches were based on what I mentioned above, which differs from Jonathan's way. This scheme worked OK. The tree could get messed up a bit when I made a mistake cherry-picking code (revision 15 in the example for Scheme II).

The third scheme I tried was to have two branches in two repos: one for development, one for release — there could be more if we have more release candidates. This gave much cleaner, clearly separated tree structures for the two branches, and cherry-picking was easier. I did make a mistake cherry-picking the code when I tried it, but it didn't show up in the development tree (see the example for Scheme III).

Both of the latter two schemes are used in open-source project management, according to Karl Fogel, and all three schemes can be realized with Mercurial.

Scheme I: branches under the same repository; default branch/trunk for the development mainline; individual branches for each developer; release branch for the stable version.
  Pros: with a single hg command, the development structure/stage of the whole group can easily be seen, so it is easy to check each developer's development revision so that his/her branch won't lag behind too far.
  Cons: 1. Too many branches to handle under the repository, so it is very easy to update the wrong branch. 2. Developers could easily pull the whole repo instead of their own branch.

Scheme II: branches under the same repository; default branch/trunk for the development mainline; release branch for the stable version.
  Pros: 1. Fairly simple revision structures. 2. Clear view of the development line and the stable version.
  Cons: wrong cherry-picking to the release branch could mess up the whole repo's revision structure a bit.

Scheme III: release branch and default branch are under different repos.
  Pros: very clean revision structure for the trunk, and especially for the release branch.
  Cons: no view of the whole revision structure.


  • Scheme 1
o    changeset:   14:7e05196f1647
|\   tag:         tip
| |  parent:      12:d07c7f335631
| |  parent:      13:567d336f6d7a
| |  user:        bliu <bliu@pas.rochester.edu>
| |  date:        Thu Jul 26 13:23:44 2012 -0400
| |  summary:     Shule merge his branch to trunk
| |
| o  changeset:   13:567d336f6d7a
| |  branch:      shule
| |  parent:      11:5935bfa52647
| |  user:        bliu <bliu@pas.rochester.edu>
| |  date:        Thu Jul 26 13:22:27 2012 -0400
| |  summary:     shule added feature 2
| |
o |  changeset:   12:d07c7f335631
|\|  parent:      7:75d40ef53d13
| |  parent:      11:5935bfa52647
| |  user:        bliu <bliu@pas.rochester.edu>
| |  date:        Thu Jul 26 13:19:05 2012 -0400
| |  summary:     Shule merged his branch with trunk/developing branch
| |
| o  changeset:   11:5935bfa52647
| |  branch:      shule
| |  parent:      8:ec480f53ab7f
| |  user:        bliu <bliu@pas.rochester.edu>
| |  date:        Thu Jul 26 13:16:57 2012 -0400
| |  summary:     created module 2 by shule
| |
| | o  changeset:   10:c160040af250
| | |  branch:      1.0.x
| | |  user:        bliu <bliu@pas.rochester.edu>
| | |  date:        Thu Jul 26 13:14:18 2012 -0400
| | |  summary:     Added tag RELEASE_1_0_X for changeset f37e3dfa00ff
| | |
+---o  changeset:   9:f37e3dfa00ff
| |    branch:      1.0.x
| |    tag:         RELEASE_1_0_X
| |    parent:      7:75d40ef53d13
| |    user:        bliu <bliu@pas.rochester.edu>
| |    date:        Thu Jul 26 12:06:15 2012 -0400
| |    summary:     Created branch for release 1.0
| |
| o  changeset:   8:ec480f53ab7f
|/|  branch:      shule
| |  parent:      3:ffed79d15d20
| |  parent:      7:75d40ef53d13
| |  user:        bliu <bliu@pas.rochester.edu>
| |  date:        Thu Jul 26 12:00:14 2012 -0400
| |  summary:     Shule merged with the developing/default branch
| |
o |    changeset:   7:75d40ef53d13
|\ \   parent:      4:ecfeaa2d72e6
| | |  parent:      6:61b0ac6b5c20
| | |  user:        bliu <bliu@pas.rochester.edu>
| | |  date:        Thu Jul 26 11:56:58 2012 -0400
| | |  summary:     Eddie merged feature 1 to the developing/default branch
| | |
| @ |  changeset:   6:61b0ac6b5c20
| | |  branch:      eddie
| | |  user:        bliu <bliu@pas.rochester.edu>
| | |  date:        Thu Jul 26 11:55:17 2012 -0400
| | |  summary:     Eddie added feature1
| | |
| o |  changeset:   5:55e842fae920
|/| |  branch:      eddie
| | |  parent:      2:0fb3775dcc73
| | |  parent:      4:ecfeaa2d72e6
| | |  user:        bliu <bliu@pas.rochester.edu>
| | |  date:        Thu Jul 26 11:41:25 2012 -0400
| | |  summary:     Eddie merged with the developing/default branch
| | |
o---+  changeset:   4:ecfeaa2d72e6
| | |  parent:      0:c70162535253
| | |  parent:      3:ffed79d15d20
| | |  user:        bliu <bliu@pas.rochester.edu>
| | |  date:        Thu Jul 26 11:35:16 2012 -0400
| | |  summary:     Merged with Shule's branch
| | |
| | o  changeset:   3:ffed79d15d20
| | |  branch:      shule
| | |  parent:      1:d145928ca80e
| | |  user:        bliu <bliu@pas.rochester.edu>
| | |  date:        Thu Jul 26 11:31:06 2012 -0400
| | |  summary:     Shule's 1st modification to Module 1
| | |
| o |  changeset:   2:0fb3775dcc73
|/ /   branch:      eddie
| |    parent:      0:c70162535253
| |    user:        bliu <bliu@pas.rochester.edu>
| |    date:        Thu Jul 26 11:26:57 2012 -0400
| |    summary:     Created Branch for Eddie
| |
| o  changeset:   1:d145928ca80e
|/   branch:      shule
|    user:        bliu <bliu@pas.rochester.edu>
|    date:        Thu Jul 26 11:26:28 2012 -0400
|    summary:     Created Branch for Shule
|
o  changeset:   0:c70162535253
   user:        bliu <bliu@pas.rochester.edu>
   date:        Thu Jul 26 11:25:49 2012 -0400
   summary:     Initial commit of TAstroBEAR
  • Scheme II
@    changeset:   19:c687cd2b699d
|\   branch:      1.0.X
| |  tag:         tip
| |  parent:      16:3d2c4134f5e6
| |  parent:      17:b8e25dc8c496
| |  user:        bliu <bliu@pas.rochester.edu>
| |  date:        Thu Jul 26 16:32:18 2012 -0400
| |  summary:     Eddie fixed a bug in module 1
| |
| | o  changeset:   18:d95fe8c2bae7
| |/|  parent:      17:b8e25dc8c496
| | |  parent:      13:250de7722735
| | |  user:        bliu <bliu@pas.rochester.edu>
| | |  date:        Thu Jul 26 17:36:04 2012 -0400
| | |  summary:     Eddie merged his branch with bugfixing in module 1 with trunk
| | |
| o |  changeset:   17:b8e25dc8c496
| | |  parent:      11:257c587657ca
| | |  user:        bliu <bliu@pas.rochester.edu>
| | |  date:        Thu Jul 26 16:32:18 2012 -0400
| | |  summary:     Eddie fixed a bug in module 1
| | |
o---+  changeset:   16:3d2c4134f5e6
| | |  branch:      1.0.X
| | |  parent:      15:750c823af31f
| | |  parent:      13:250de7722735
| | |  user:        bliu <bliu@pas.rochester.edu>
| | |  date:        Thu Jul 26 16:37:23 2012 -0400
| | |  summary:     Shule fixed a bug in feature 1
| | |
o | |  changeset:   15:750c823af31f
| | |  branch:      1.0.X
| | |  user:        bliu <bliu@pas.rochester.edu>
| | |  date:        Thu Jul 26 17:22:19 2012 -0400
| | |  summary:     backout the mis cherrypicking from the default branch
| | |
o | |  changeset:   14:c8d8de75b020
| | |  branch:      1.0.X
| | |  parent:      12:0b29e0bacf7d
| | |  user:        bliu <bliu@pas.rochester.edu>
| | |  date:        Thu Jul 26 15:33:31 2012 -0400
| | |  summary:     Eddie modified the feature1.f90
| | |
| | o  changeset:   13:250de7722735
| |/   parent:      11:257c587657ca
| |    user:        bliu <bliu@pas.rochester.edu>
| |    date:        Thu Jul 26 16:37:23 2012 -0400
| |    summary:     Shule fixed a bug in feature 1
| |
o |  changeset:   12:0b29e0bacf7d
| |  branch:      1.0.X
| |  parent:      6:cd151c2100ad
| |  user:        bliu <bliu@pas.rochester.edu>
| |  date:        Thu Jul 26 16:30:23 2012 -0400
| |  summary:     Eddie modified feature 1
| |
| o    changeset:   11:257c587657ca
| |\   parent:      10:a22779a560a7
| | |  parent:      9:ae21e6d2d4ba
| | |  user:        bliu <bliu@pas.rochester.edu>
| | |  date:        Thu Jul 26 15:43:15 2012 -0400
| | |  summary:     Eddie merge his branch with the main branch
| | |
| | o  changeset:   10:a22779a560a7
| | |  parent:      7:cf843a15dbf1
| | |  user:        bliu <bliu@pas.rochester.edu>
| | |  date:        Thu Jul 26 15:33:31 2012 -0400
| | |  summary:     Eddie modified the feature1.f90
| | |
| o |  changeset:   9:ae21e6d2d4ba
| |\|  parent:      8:94bdb895f471
| | |  parent:      7:cf843a15dbf1
| | |  user:        bliu <bliu@pas.rochester.edu>
| | |  date:        Thu Jul 26 15:39:53 2012 -0400
| | |  summary:     merged after putting the tag RELEASE_2_0_X
| | |
| o |  changeset:   8:94bdb895f471
| | |  parent:      4:46982b65c965
| | |  user:        bliu <bliu@pas.rochester.edu>
| | |  date:        Thu Jul 26 15:30:45 2012 -0400
| | |  summary:     Added tag module2.f90, RELEASE_2_0_X for changeset 46982b65c965
| | |
| | o  changeset:   7:cf843a15dbf1
| |/   parent:      4:46982b65c965
| |    user:        bliu <bliu@pas.rochester.edu>
| |    date:        Thu Jul 26 15:08:58 2012 -0400
| |    summary:     Shule created module 2
| |
o |  changeset:   6:cd151c2100ad
| |  branch:      1.0.X
| |  user:        bliu <bliu@pas.rochester.edu>
| |  date:        Thu Jul 26 15:04:21 2012 -0400
| |  summary:     Added tag RELEASE_1_0_X for changeset 115372a64138
| |
o |  changeset:   5:115372a64138
|/   branch:      1.0.X
|    tag:         RELEASE_1_0_X
|    user:        bliu <bliu@pas.rochester.edu>
|    date:        Thu Jul 26 15:03:31 2012 -0400
|    summary:     created a branch for 1.0.X
|
o    changeset:   4:46982b65c965
|\   tag:         RELEASE_2_0_X
| |  tag:         module2.f90
| |  parent:      3:bc41a66a492a
| |  parent:      2:3d05829cc1c1
| |  user:        bliu <bliu@pas.rochester.edu>
| |  date:        Thu Jul 26 15:01:45 2012 -0400
| |  summary:     Shule merged his branch with the trunk
| |
| o  changeset:   3:bc41a66a492a
| |  parent:      1:cab73a420c5b
| |  user:        bliu <bliu@pas.rochester.edu>
| |  date:        Thu Jul 26 15:01:03 2012 -0400
| |  summary:     Shule's 2nd modification to Module 1
| |
o |  changeset:   2:3d05829cc1c1
|/   user:        bliu <bliu@pas.rochester.edu>
|    date:        Thu Jul 26 14:54:08 2012 -0400
|    summary:     Eddie added feature 1
|
o  changeset:   1:cab73a420c5b
|  user:        bliu <bliu@pas.rochester.edu>
|  date:        Thu Jul 26 14:00:31 2012 -0400
|  summary:     Shule's 1st modification to module1.f90
|
o  changeset:   0:096f4d37e0fd
   user:        bliu <bliu@pas.rochester.edu>
   date:        Thu Jul 26 13:59:23 2012 -0400
   summary:     Initial commit of TAstroBEAR
  • Scheme III
    1. trunk
      @  changeset:   11:320adc42a728
      |  tag:         tip
      |  user:        bliu <bliu@pas.rochester.edu>
      |  date:        Fri Jul 27 10:08:26 2012 -0400
      |  summary:     Eddie fixed 1st bug in module 2
      |
      o  changeset:   10:ba38a00a54d2
      |  user:        bliu <bliu@pas.rochester.edu>
      |  date:        Fri Jul 27 10:07:38 2012 -0400
      |  summary:     Eddie fixed 1st bug in module 1
      |
      o  changeset:   9:b0a88811e50d
      |  user:        bliu <bliu@pas.rochester.edu>
      |  date:        Fri Jul 27 10:04:24 2012 -0400
      |  summary:     Added tag RELEASE_2_0_X for changeset 0a89508c8ca2
      |
      o  changeset:   8:0a89508c8ca2
      |  tag:         RELEASE_2_0_X
      |  user:        bliu <bliu@pas.rochester.edu>
      |  date:        Fri Jul 27 10:03:30 2012 -0400
      |  summary:     Shule created module 2
      |
      o    changeset:   7:1b91941fb44b
      |\   parent:      6:4aac04e39e42
      | |  parent:      5:957430289152
      | |  user:        bliu <bliu@pas.rochester.edu>
      | |  date:        Fri Jul 27 10:01:33 2012 -0400
      | |  summary:     Shule merged his branch with the trunk
      | |
      | o  changeset:   6:4aac04e39e42
      | |  parent:      2:d2bb8f24ef20
      | |  user:        bliu <bliu@pas.rochester.edu>
      | |  date:        Fri Jul 27 09:51:18 2012 -0400
      | |  summary:     Shule made 2nd modification to module 1
      | |
      o |  changeset:   5:957430289152
      | |  user:        bliu <bliu@pas.rochester.edu>
      | |  date:        Fri Jul 27 09:52:24 2012 -0400
      | |  summary:     Eddie fixed the 1st bug in feature 1
      | |
      o |  changeset:   4:2bf0d822ec6f
      | |  user:        bliu <bliu@pas.rochester.edu>
      | |  date:        Fri Jul 27 09:49:03 2012 -0400
      | |  summary:     Eddie created feature 2
      | |
      o |  changeset:   3:307471c2a37e
      |/   user:        bliu <bliu@pas.rochester.edu>
      |    date:        Fri Jul 27 09:44:54 2012 -0400
      |    summary:     Added tag RELEASE_1_0_X for changeset d2bb8f24ef20
      |
      o  changeset:   2:d2bb8f24ef20
      |  tag:         RELEASE_1_0_X
      |  user:        bliu <bliu@pas.rochester.edu>
      |  date:        Fri Jul 27 09:43:34 2012 -0400
      |  summary:     Shule created feature 1
      |
      o  changeset:   1:abd9168ebcbc
      |  user:        bliu <bliu@pas.rochester.edu>
      |  date:        Fri Jul 27 09:40:19 2012 -0400
      |  summary:     Eddie made the 1st modification to module 1
      |
      o  changeset:   0:98888bad8a2a
         user:        bliu <bliu@pas.rochester.edu>
         date:        Fri Jul 27 09:38:33 2012 -0400
         summary:     Initial committment of shceme3
      
  1. Release Branch 1.0.x
@  changeset:   6:ecb6fe41824f
|  tag:         tip
|  user:        bliu <bliu@pas.rochester.edu>
|  date:        Fri Jul 27 10:07:38 2012 -0400
|  summary:     Eddie fixed 1st bug in module 1
|
o  changeset:   5:898bd6d22ad9
|  user:        bliu <bliu@pas.rochester.edu>
|  date:        Fri Jul 27 09:51:18 2012 -0400
|  summary:     Shule made 2nd modification to module 1
|
o  changeset:   4:c4c15931c578
|  user:        bliu <bliu@pas.rochester.edu>
|  date:        Fri Jul 27 09:52:24 2012 -0400
|  summary:     Eddie fixed the 1st bug in feature 1
|
o  changeset:   3:307471c2a37e
|  user:        bliu <bliu@pas.rochester.edu>
|  date:        Fri Jul 27 09:44:54 2012 -0400
|  summary:     Added tag RELEASE_1_0_X for changeset d2bb8f24ef20
|
o  changeset:   2:d2bb8f24ef20
|  tag:         RELEASE_1_0_X
|  user:        bliu <bliu@pas.rochester.edu>
|  date:        Fri Jul 27 09:43:34 2012 -0400
|  summary:     Shule created feature 1
|
o  changeset:   1:abd9168ebcbc
|  user:        bliu <bliu@pas.rochester.edu>
|  date:        Fri Jul 27 09:40:19 2012 -0400
|  summary:     Eddie made the 1st modification to module 1
|
o  changeset:   0:98888bad8a2a
   user:        bliu <bliu@pas.rochester.edu>
   date:        Fri Jul 27 09:38:33 2012 -0400
   summary:     Initial committment of shceme3

Baowei's Meeting Update 07/24/12

New Revision 951:b789b34104cb in the official repository

Meeting Update 07/17/2012 - Baowei

Meeting Update 07/10/2012 - Baowei

Baowei's Meeting Update 07/03/12

Baowei's Meeting Update 06/26/12

  • Worked on:
    1. Weak scaling test on Ranger: #193
    2. Hybrid scaling test on Kraken: #202
    3. #226
  • Will work on:
    1. Proposal for Teragrid allocation
    2. More scaling test on Kraken

Baowei's Meeting Update 06/19/12



New Revision 948:9cb35866cda0 in the main repository

I just pushed Revision 948:9cb35866cda0 to the main repository. It contains a modification to restarts: the master sends a message to each worker when it is ready to receive and process that worker's data. Without this, the master can get overrun with messages on some platforms like Kraken.
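The pacing idea can be sketched with a toy model, using Python threads and queues to stand in for MPI ranks and messages (all names here are illustrative, not AstroBEAR's actual code): the master invites one worker at a time, so it is never flooded.

```python
import queue
import threading

def worker(rank, ready_q, data_q):
    """Each worker waits for the master's 'ready' message before sending."""
    ready_q.get()                         # block until the master invites us
    data_q.put((rank, "restart data from rank %d" % rank))

def master(num_workers):
    """Invite one worker at a time so incoming messages never pile up."""
    ready_qs = [queue.Queue() for _ in range(num_workers)]
    data_q = queue.Queue()
    threads = [threading.Thread(target=worker, args=(r, ready_qs[r], data_q))
               for r in range(num_workers)]
    for t in threads:
        t.start()
    received = []
    for r in range(num_workers):
        ready_qs[r].put("ready")          # tell exactly one worker to send
        received.append(data_q.get())     # process its data before the next
    for t in threads:
        t.join()
    return received

print(master(4))
```

In the real code the same pacing applies to MPI messages; the point is simply that each worker's send is matched by an explicit readiness signal from the master.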

An update list can be found at https://clover.pas.rochester.edu/trac/astrobear/log/Scrambler/?action=stop_on_copy&mode=stop_on_copy&rev=9cb35866cda06e8f69cce5ef787af702df0f1db9&stop_rev=946%3Aff6bdbea174a&limit=200

Details of modification to the code can be found at: https://clover.pas.rochester.edu/trac/astrobear/changeset?old_path=%2FScrambler&old=9cb35866cda06e8f69cce5ef787af702df0f1db9&new_path=%2FScrambler&new=946%3Aff6bdbea174a

Test results can be found at:

https://clover.pas.rochester.edu/trac/astrobear/wiki/u/bliu#no1

Baowei's Meeting Update 06/06/12

  • Working on Teragrid allocation application information
  • We need ideas about how to build a version of AstroBEAR that works stably for most users and on most machines, which is our goal.
    Currently our new official revisions mostly come from bug fixes for tickets from Martin and Jonathan, while other group members may have their own revisions that work well for them. What would be a good way to merge these revisions together so we gain features and performance but not bugs? And how do we build a stable official version of AstroBEAR? I'm preparing a document to collect information about the revisions/modules and machines each group member is working with.

https://clover.pas.rochester.edu/trac/astrobear/wiki/ProjectStatistics

Bluegene/Q is coming in late June. There will be a gallery with a big TV monitor showing research photos/videos to visitors. The university communications team is working on it, and we don't have the details yet, but I think we should start thinking about how to prepare for this gallery show.

New Revision 946:ff6bdbea174a in the official repository

Baowei's Meeting Update 05/29/12

  • Modifications to the official scrambler last week:

https://clover.pas.rochester.edu/trac/astrobear/log/Scrambler/?action=stop_on_copy&mode=stop_on_copy&rev=aac36d619caacf1eda6eb785046514dcc8c5e87c&stop_rev=916%3A47468f693d6f&limit=200

  • Created a folder /cloverdata/trac/astrobear/doc/talks/ for talks. Folks who gave talks can upload them to the folder or just send the file/link to me.

New Revision 936:aac36d619caa checked in

Meeting Update 05/15/2012 - Baowei

  • Working on scaling test on Ranger Ticket #193.

CIRC Poster Session

I hope everyone had a great time at the CIRC Poster Session last Friday. Here are some photos for our group. Thanks for being there.

http://www.pas.rochester.edu/~bliu/PosterSession/AdamGroup/P1010013.JPG http://www.pas.rochester.edu/~bliu/PosterSession/AdamGroup/P1010014.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/AdamGroup/P1010016.JPG http://www.pas.rochester.edu/~bliu/PosterSession/AdamGroup/P1010024.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/AdamGroup/P1010025.JPG http://www.pas.rochester.edu/~bliu/PosterSession/AdamGroup/P1010031.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/AdamGroup/P1010032.JPG http://www.pas.rochester.edu/~bliu/PosterSession/AdamGroup/P1010046.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/AdamGroup/P1010047.JPG http://www.pas.rochester.edu/~bliu/PosterSession/AdamGroup/P1010049.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/AdamGroup/P1010050.JPG http://www.pas.rochester.edu/~bliu/PosterSession/AdamGroup/P1010051.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/AdamGroup/P1010064.JPG http://www.pas.rochester.edu/~bliu/PosterSession/AdamGroup/P1010066.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/AdamGroup/P1010070.JPG

Baowei's Meeting Update 05/08/12

http://www.pas.rochester.edu/~bliu/Scaling/ranger.png

http://www.pas.rochester.edu/~bliu/Scaling/rangerSpeedup.png

  • Worked on runtime errors that occurred when testing Revision 894: Ticket #200. Checked in Revision 894 for the new subgrid-generating algorithm: Tickets #183, #185.
  • Working on unique TAGs for messaging stages to allow Max_levels to be larger than 10: Ticket #71, Ticket #192

Baowei's Meeting Update 04/24/12

  1. Added a new global flag lUseOriginalNewSubGrids to choose between the old and new subgrid-generating algorithms; testing and checking in the code.
  1. Working on running AstroBEAR on Ranger's normal queue:

https://clover.pas.rochester.edu/trac/astrobear/ticket/188
https://clover.pas.rochester.edu/trac/astrobear/ticket/197
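For reference, the switch would presumably appear in a problem's data file alongside the other global flags (the placement and default value here are my assumptions; only the flag name comes from the ticket work):

```
lUseOriginalNewSubGrids = .true.   ! use the original subgrid-generating algorithm
                                   ! set to .false. to test the new one
```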

Baowei's Meeting Update 04/10/12

Ticket 185: new grid-generating algorithm

I used the test suites to test the performance of the new algorithm. The new algorithm outperforms the old one for most modules. Results can be found at: https://clover.pas.rochester.edu/trac/astrobear/ticket/185.

Ticket 188: Install AstroBEAR on Ranger

Ranger's standard environment is too old for the AstroBEAR code. I compiled AstroBEAR on Ranger with a newer testing environment but ran into trouble submitting jobs with it. Details can be found at: https://clover.pas.rochester.edu/trac/astrobear/ticket/188

The Afrank Queue on Bluehive

This might not be a big issue for the time being. When I ran a benchmark on Bluehive, I found that the afrank queue used Ethernet instead of Infiniband, which is not how it is supposed to be configured, so a lot of time was wasted waiting on communication when running with multiple nodes. Russell is checking the issue.

Meeting Update 04/03/2012 - Baowei

1. Ticket #169 links to papers using Astrobear on the wiki

I created the wiki page at: https://clover.pas.rochester.edu/trac/astrobear/wiki/AstroBearPublication The ticket was closed, but anyone who finds a new paper can send it to me or edit the page directly.

2. Ticket # 185 Compare new grid-generating algorithm and the old one using AstroBEAR modules

Results can be found at the page: https://clover.pas.rochester.edu/trac/astrobear/ticket/185

The new algorithm works as expected and is faster than the old one in the general case.

Baowei's Meeting Update for Mar 27 2012

1. Tickets:
All active tickets were processed: either assigned to new owners or closed. Details can be found at https://clover.pas.rochester.edu/trac/astrobear/blog/bliu03252012.
I'm trying out Trac tickets as a way of managing my projects.

2. Optimization: New Grid-Generating Algorithm
I was comparing the new grid-generating algorithm with the old one using real AstroBEAR modules but got sidetracked into bug fixes such as Ticket 184: https://clover.pas.rochester.edu/trac/astrobear/ticket/184. The preliminary results were promising: the time used by the new algorithm so far was comparable to the best case of the old one. Will work on pictures…

Tickets

Jonathan and I went through all the active tickets we have. Several tickets were closed, and most of the still-active ones were assigned to new owners. http://www.pas.rochester.edu/~bliu/Tickets/TicketStatistics.png


Here's a summary of what we have

Owner     Tickets
johannjc  #182 Spectra processing objects
          #179 Current test suite does not test Isothermal solver
shuleli   #126 strange field behavior on amr edges with uniform gravity
          #121 3D, AMR, large sims abort possibly due to memory problem
          #151 Thermal Diffusion
          #152 Implementing Magnetic Resistivity
          #153 Implement Viscosity
bliu      #183 Optimization for Grid Generation Algorithm
          #173 Interpolation options can trigger protections
          #71 Adaptive message block sizes
          #174 Point gravity and outflow properties stored in chombo
          #176 Adding additional tests
          #127 Oscillations in PPM MHD
          #150 Implementing Self Gravity in Cylindrical Coordinates
          #154 Sink Particles in 2D and Cylindrical
          #179 Current test suite does not test Isothermal solver
          #169 We should post links to papers using Astrobear on the wiki
ehansen   #147 porting over 'NEQCooling'
          ?#171 If the initial conditions trigger protection we should stop the run and print an appropriate message
erica
No Owner  #82 Create suppression file for valgrind's I/O errors
          #87 Improve parallel performance on Chombo HDF5 writes
          #92 Incorporate MPI_PACK_SIZE() into packing algorithm
          #155 Roe Solver
A more detailed report can be found here https://clover.pas.rochester.edu/trac/astrobear/report/8

Meeting Update 03/20/2012 - Baowei

New Algorithm for Patch Refinement

The new algorithm is good at refining line-shaped areas (https://clover.pas.rochester.edu/trac/astrobear/blog/bliu03112012), but not ring-shaped areas (https://clover.pas.rochester.edu/trac/astrobear/blog/bliu03112012#comment-5), where the old algorithm probably works better.

Here's the result for 2D-search (checking splitting point along both x and y)

http://www.pas.rochester.edu/~bliu/Optimization/NewAlgo/16X16_2.png http://www.pas.rochester.edu/~bliu/Optimization/NewAlgo/16X16_2_refinement.png



Now working on a new recursive 3D algorithm.

Recursive Inflection Algorithm for Patch Refinement

This newer algorithm combines the old algorithm with the splitting-cost-check algorithm, and so keeps the good parts of both. The following are results for some ErrFlag patterns:

http://www.pas.rochester.edu/~bliu/Optimization/recursiveInflection/diagonal.png http://www.pas.rochester.edu/~bliu/Optimization/recursiveInflection/diagonal_refinement.png
http://www.pas.rochester.edu/~bliu/Optimization/recursiveInflection/Ring.png http://www.pas.rochester.edu/~bliu/Optimization/recursiveInflection/ring_refinement.png
http://www.pas.rochester.edu/~bliu/Optimization/recursiveInflection/ring2.png http://www.pas.rochester.edu/~bliu/Optimization/recursiveInflection/ring2_refinement.png
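In toy 1-D form, the combined idea can be sketched like this (illustrative names and a simple length-based cost, not the actual 3-D implementation): recursively check every split of a flagged region and split only where two child boxes are cheaper than one.

```python
def bounding(flags):
    """Tight bounds of the flagged cells, or None if none are flagged."""
    if not any(flags):
        return None
    return flags.index(True), len(flags) - 1 - flags[::-1].index(True)

def cost(flags):
    """Cost of covering a run with one box: its trimmed length."""
    b = bounding(flags)
    return 0 if b is None else b[1] - b[0] + 1

def split_boxes(flags, offset=0):
    """Recursively split wherever two child boxes beat the single box."""
    b = bounding(flags)
    if b is None:
        return []
    best_i, best_cost = None, cost(flags)
    for i in range(b[0] + 1, b[1] + 1):
        c = cost(flags[:i]) + cost(flags[i:])
        if c < best_cost:
            best_i, best_cost = i, c
    if best_i is None:                    # no split helps: keep one box
        return [(offset + b[0], offset + b[1])]
    return (split_boxes(flags[:best_i], offset) +
            split_boxes(flags[best_i:], offset + best_i))

flags = [True, False, False, True, True, False, False, True]
print(split_boxes(flags))   # → [(0, 0), (3, 4), (7, 7)]
```

When no split lowers the cost, the region is kept whole, which is what lets this hybrid handle the ring-shaped patterns the pure cost check struggled with.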

Meeting Update 03/13/2012 - Baowei

Wiki Page for Ticket Assignment Procedure

https://clover.pas.rochester.edu/trac/astrobear/wiki/TicketAssignmentPage

Current Active Tickets

Total Active Tickets  18 (including the two just closed)
  astrobear           15
  bear2fix             1
  wiki                 1
Over 6 weeks           4
Over 4 weeks           8
Over 2 weeks          10


Testing Results for new patch refinement algorithm

https://clover.pas.rochester.edu/trac/astrobear/blog/bliu03112012

Optimization: New Algorithm for Patch Refinement

The testing results of the old/current algorithm for refinement patches, which contains a bug, can be found here:
https://clover.pas.rochester.edu/trac/astrobear/blog/bliu03052012

The new algorithm calculates the splitting cost at each candidate position (along one direction for the time being) and picks the position with the lowest cost. The testing results below clearly show that the new algorithm fixes the bug in the old one.
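A minimal 1-D sketch of the cost check (the names and the cost function here are illustrative, not the actual code): try every split position and keep the one whose two child boxes have the lowest combined cost.

```python
def box_cost(flags):
    """Cost of covering a run of ErrFlags with one box: its full length,
    i.e. flagged cells plus the unflagged filler cells it would refine."""
    if not any(flags):
        return 0
    first = flags.index(True)
    last = len(flags) - 1 - flags[::-1].index(True)
    return last - first + 1

def best_split(flags):
    """Try every split position along one direction and return
    (lowest cost, split index), with index None if no split helps."""
    best = (box_cost(flags), None)        # cost of keeping one big box
    for i in range(1, len(flags)):
        cost = box_cost(flags[:i]) + box_cost(flags[i:])
        if cost < best[0]:
            best = (cost, i)
    return best

# A diagonal-like pattern: two separated flagged runs.
flags = [True, True, False, False, False, False, True, True]
print(best_split(flags))   # → (4, 2)
```

Here the cost of a box is just its trimmed length, so splitting between the two flagged runs drops the cost from 8 cells to 4.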

1. 16X16 Diagonal patch

http://www.pas.rochester.edu/~bliu/Optimization/NewAlgo/diagonal.png http://www.pas.rochester.edu/~bliu/Optimization/NewAlgo/diagonal_refinement.png

2. 32X32 Diagonal patch

http://www.pas.rochester.edu/~bliu/Optimization/NewAlgo/diagonal_refinement32.png

3. Random patch 1

http://www.pas.rochester.edu/~bliu/Optimization/NewAlgo/random_1.png http://www.pas.rochester.edu/~bliu/Optimization/NewAlgo/random1_refinement.png

4. Random patch 2

http://www.pas.rochester.edu/~bliu/Optimization/NewAlgo/random_2.png http://www.pas.rochester.edu/~bliu/Optimization/NewAlgo/random2_refinement.png

5. Random patch 3

http://www.pas.rochester.edu/~bliu/Optimization/NewAlgo/random_3.png http://www.pas.rochester.edu/~bliu/Optimization/NewAlgo/random3_refinement.png

6. Random patch 4

http://www.pas.rochester.edu/~bliu/Optimization/NewAlgo/random_4.png http://www.pas.rochester.edu/~bliu/Optimization/NewAlgo/random4_refinement.png

Optimization: Running Time VS Desired Filling Ratio for Refinement Area

Jonathan and I were trying to do some optimization on the refinement patches, since smaller patches need more resources (memory and computing time):

https://clover.pas.rochester.edu/trac/astrobear/blog/bliu02272012

The current algorithm works as follows: if a patch's filling ratio is less than the desired ratio, it cuts the refinement patch into smaller ones at the inflections of the ErrFlags until each patch's filling ratio is larger than the desired ratio.
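A toy 1-D version of that loop (illustrative names; a midpoint split stands in for the real inflection search): keep a box if its filling ratio meets the desired ratio, otherwise split and recurse.

```python
def refine(flags, desired_ratio):
    """Recursively split a 1-D run of ErrFlags until every returned box
    (lo, hi) meets the desired filling ratio (toy version)."""
    if not any(flags):
        return []
    # Trim unflagged padding so the box is as tight as possible.
    lo = flags.index(True)
    hi = len(flags) - 1 - flags[::-1].index(True)
    box = flags[lo:hi + 1]
    ratio = sum(box) / len(box)
    if ratio >= desired_ratio or len(box) == 1:
        return [(lo, hi)]               # box is good enough: keep it
    mid = len(box) // 2                 # stand-in for an inflection search
    left = refine(box[:mid], desired_ratio)
    right = refine(box[mid:], desired_ratio)
    return ([(lo + a, lo + b) for a, b in left] +
            [(lo + mid + a, lo + mid + b) for a, b in right])

flags = [True, False, False, False, True, True]
print(refine(flags, 0.6))   # → [(0, 0), (4, 5)]
```

With a desired ratio above this box's 50% filling, the single 6-cell box is cut into two tight boxes; with a lower desired ratio it would be kept whole.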

In Figure 1 the filling ratio of the patch (big box) is 40%. When the desired filling ratio is larger than 40%, AMR goes to smaller patches, and in Figure 2 we can see the running time increase once the desired filling ratio goes beyond 40%.

Figure 1
http://www.pas.rochester.edu/~bliu/Optimization/patch1.png


Figure 2
http://www.pas.rochester.edu/~bliu/Optimization/timeVSfilling.png



For the refinement patch in Figure 3, however, the current algorithm couldn't find smaller patches even when the filling ratio was less than the desired one.

Figure 3
http://www.pas.rochester.edu/~bliu/Optimization/patch2.png


Figure 4
http://www.pas.rochester.edu/~bliu/Optimization/timeVSfilling_2.png

Big Ratio of Data Memory Over Data File Size for AMR

The following summarizes our understanding of the large ratio of data memory to data file size for AMR, inspired by the results from Jonathan's memory-checking tools; a typical value for the ratio is 30~80.

With AMR, extra ghost data are needed to do the interpolation, and the ghost data can be large relative to the patch when the refined patches are small.

Take a 3D problem with two-step updates for example. For an n^3 patch,

  Data Memory / File Size = 2 (n + 16)^3 / n^3,

where 2 comes from the copy we save for later restart and 16 comes from the ghost data. So when n = 8, we have

  2 (8 + 16)^3 / 8^3 = 2 x 3^3 = 54.

Or the Data Memory is 54 times the size of the data file.
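One reconstruction consistent with the quoted factors (a factor of 2 from the saved restart copy, 16 extra cells per dimension from ghost data, and the stated ratio of 54; n is the patch edge length, my notation) is ratio = 2 (n + 16)^3 / n^3:

```python
def mem_ratio(n, ghost=16, copies=2):
    """Data-memory-to-file-size ratio for an n^3 patch: `copies` stored
    copies, each padded by `ghost` extra cells per dimension."""
    return copies * (n + ghost) ** 3 / n ** 3

print(mem_ratio(8))    # small 8^3 patch: 2 * (24/8)^3 = 54.0
print(mem_ratio(64))   # larger patches dilute the ghost overhead
```

The formula makes the trend explicit: as the patches shrink, the ghost padding dominates and the ratio blows up, while for large patches it falls back toward the number of stored copies.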


The smaller the fine AMR patches, the bigger the ratio. So Figure 1 will give a much smaller ratio than Figure 2, even though the total patch sizes are the same. How the AMR patches are distributed depends on the specific problem and calculation.

Figure 1
http://www.pas.rochester.edu/~bliu/MemRatio/2Dlattice2L.png


Figure 2
http://www.pas.rochester.edu/~bliu/MemRatio/2Dlattice2L_2.png

New Development Procedure for AstroBEAR code

I updated the development procedure page according to Jonathan's suggestion:
https://clover.pas.rochester.edu/trac/astrobear/wiki/DevelopmentProcedure

As Adam asked, we will have two people (Baowei and Eddie) in charge of testing the code and checking test-passed code in to the official repository. Whenever you have code you want to check in, just notify me or Eddie to run the tests. If the tests pass, we will upload the results to the test repository and ask you/everyone else to verify; after that, we will push the code to the official repository. If a test fails, we will point you to the reference and simulation images, along with the information needed to reproduce the failed test, and leave it to you to determine why it failed and to fix any bugs.

Cloud based shared file space

Jonathan and I are piloting a shared file system called Box.net. Currently it has a 2GB single-file limit, so it's probably better for documents than for data files. It can sync documents on your computer while keeping historical versions, and I find it convenient when two or more people collaborate on the same documents. If you are interested or have better ideas for using it, please let me know.

http://www.pas.rochester.edu/~bliu/Box.net/Box.net.png

Computing Resources

Quotations for New Machine (https://clover.pas.rochester.edu/trac/astrobear/blog/johannjc01182012)

ASA: 2 Xeon 2.4GHz Quad-Core Processors, 24GB Memory, 16TB Harddisk https://www.pas.rochester.edu/~bliu/ComputingResources/ASA_Computers.pdf

AberDeen: 2 Xeon 2.4GHz Quad-Core Processors, 24GB Memory, 16TB Harddisk https://www.pas.rochester.edu/~bliu/ComputingResources/Aberdeen.pdf

Pogo: 1 Xeon 1.6GHz Quad-Core Processor, 3GB Memory, 14.5TB Harddisk https://www.pas.rochester.edu/~bliu/ComputingResources/Pogo_linux.pdf


Current Load of my Teragrid allocation https://www.pas.rochester.edu/~bliu/ComputingResources/Teragrid_Load.png

AstroBEAR Virtual Memory

AstroBEAR uses a huge amount of virtual memory (compared with its data and text memory) when running with multiple processors:

One Processor:

http://www.pas.rochester.edu/~bliu/Jan_24_2012/AstroBEAR/bear_1n1p1t.png

Four Processors: http://www.pas.rochester.edu/~bliu/Jan_24_2012/AstroBEAR/bear_2n8p4t.png

To understand the problem, I tried a very simple Hello World program. Here are the results from TotalView:

One Processor: http://www.pas.rochester.edu/~bliu/Jan_24_2012/HelloWorld/1n1p1t.png

Four Processors: http://www.pas.rochester.edu/~bliu/Jan_24_2012/HelloWorld/1n4p4t.png

It's fair to say that the large-virtual-memory issue is not related to the AstroBEAR code; it's more related to OpenMPI and the system. Online resources argue that virtual memory includes memory for shared libraries, which depends on the other processes running. That makes sense to me, especially since I ran the Hello World program with the same setup at different times and found it using different amounts of virtual memory:

http://www.pas.rochester.edu/~bliu/Jan_24_2012/HelloWorld/1n1p1t_2ndRun.png http://www.pas.rochester.edu/~bliu/Jan_24_2012/HelloWorld/1n1p1t_3run.png

I'm reading more on virtual memory and shared libraries.