Magnetized Triggered Star Formation - Comparison of Azimuthal Angle
Low resolution Comparison (256 per radius)
High resolution Comparison (512 per radius)
Movies:
Magnetized Triggered Star Formation
Top panel: 0.36 Myr; bottom panel: 0.47 Myr
Left to right: beta = 12.8, 204.8, 5120
Movies:
Figures for the HEDLA paper, more TSF shock study
Here are the figures for the HEDLA paper. The magnetized TSF run has a poloidal field aligned with the axis, with beta = 12. The beta = 6 and toroidal beta = 12 data are not yet visualized.
Fig1: slice through the disk (side on view). Left panel: pseudo color = density, contour = magnetic pressure. Right panel: contour = density, grey streamlines: field lines.
Fig2: slice through the disk (top down view). Left panel: pseudo color = density, contour = magnetic pressure. Right panel: contour = density, grey streamlines: field lines.
Fig3: 3D view of the disk and the jet. (a) and (b) are the same frame (t = 0.36 Myrs), but with different opacity schemes to show: (a) jet feature (b) overall morphology including the disk obscured by the cloud material surrounding it.
Fig4: line plots of stellar mass, accretion rate, mixing ratio, circumstellar bound mass.
The following are the triggering runs with the new shock structure, with shell speed = 20 km/s and wind speed = 20 km/s; no stars formed:
Here is a comparison between shell = 10 km/s, wind = 100 km/s and shell = 10 km/s, wind = 10 km/s:
Meeting Update 0721
I have run three triggering runs with different shock velocities and a realistic shock structure. The shocks below differ only in velocity. However, the density and temperature structures used here are based on the Vs = 150 km/s shock, so they are not accurate for the other two runs.
1) Vs = 150 km/s, run to 37 kyr (about 4.5 crushing times), no collapse. This is understandable, as the shell we use is less dense and thinner than Boss's. If their shell needs < 80 km/s to trigger, we may need an even smaller shock speed.
2) Vs = 50 km/s, run to 100 kyr (about 4 crushing times), no collapse. This differs from Boss's < 80 km/s criterion, but again, the shock structure is wildly different.
3) Vs = 10 km/s, run to just past the first crushing time; collapse is triggered at the last frame. This is similar to our previous Mach 3 simulation, but with a very different shock structure.
Our next step should be to pin down the exact threshold velocity for collapse, and to work out how the shock structure should change by the time the shock has decelerated to that velocity. This would mean triggering happens not within the 1~2 pc range we assumed before, but much farther from the source of the shock.
TSF shock structure
plots of density, velocity and temperature.
shell density: 10^4 /cc, velocity: 50 km/s, temperature: 10 K, width: 0.005 pc
wind density: 1.0 /cc, velocity: 150 km/s, temperature: 10^6 K, gradually slowing down (you can see the slight slope of the wind velocity in the final frame; however, the linear decrease spans 20 pc and our box is only 0.5 pc, so no noticeable slowdown can be observed).
density: log-linear plot, normalized to the shell density (10^4 /cc)
temperature: log-linear plot, normalized to the wind temperature (10^6 K)
velocity: linear-linear plot, normalized to the wind velocity (150 km/s)
Under the new shell and wind speeds, the clump crushing time is around 10 kyr, so the following movie shows the shock structure over one clump crushing time.
This looks OK in general; however, one thing I'm not sure about is the expanding buffer region between the shell and the wind.
movie:
http://www.pas.rochester.edu/~shuleli/tsf_new_shock/shock_structure.gif
same assumption as above, with 0.035 pc shell:
http://www.pas.rochester.edu/~shuleli/tsf_new_shock/shock2.gif
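As a sanity check on the ~10 kyr figure above, the crushing time can be estimated as t_cc = sqrt(chi) * r_c / v_s (Klein, McKee & Colella 1994); the density contrast chi and clump radius below are illustrative assumptions, not the exact run parameters:

```python
# Clump crushing time t_cc = sqrt(chi) * r_c / v_s (Klein, McKee & Colella 1994).
# chi (clump/ambient density contrast) and r_c are assumed values for illustration.
PC_CM = 3.086e18   # parsec in cm
KYR_S = 3.156e10   # kyr in seconds

def crushing_time_kyr(chi, r_c_pc, v_s_kms):
    """Crushing time in kyr for contrast chi, clump radius [pc], shock speed [km/s]."""
    return chi ** 0.5 * r_c_pc * PC_CM / (v_s_kms * 1.0e5) / KYR_S

# e.g. chi = 100, r_c = 0.05 pc, v_s = 50 km/s gives roughly 10 kyr:
print(round(crushing_time_kyr(100.0, 0.05, 50.0), 1))
```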
New simulation setup
Here's a new simulation setup for the next round of TSF sims (rho and temperature scales are log10 based):
For these runs, the wind velocity is subject to an exponential decrease: since the velocity-distance plot is linear (Chevalier 1974), we can compute the time dependence from:
dv/ds = -c (where c is the slope, counting from the shell front to the left)
thus dv/dt = (dv/ds)(ds/dt) = (dv/ds) v = -cv
so v decays exponentially in t: v = v0 exp(-ct), where v0 is the initial wind velocity (150 km/s) and c is the slope in the velocity-distance plot, about 7.884 if we count t in millions of years. With these values the wind drops to subsonic at about 0.11 million years. In the code, the wind injection is shut off when the Mach number of the wind drops below 1.
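A minimal sketch of this shutoff criterion; the sound speed used for the Mach condition is an assumed value here, chosen only to reproduce the ~0.11 Myr figure:

```python
import math

# v(t) = v0 * exp(-c*t); injection shuts off when v drops below the sound
# speed (Mach 1).  c_s = 63 km/s is an assumed value, not a run parameter.
def wind_shutoff_time_myr(v0_kms, c_per_myr, sound_speed_kms):
    """Time [Myr] at which the wind Mach number drops below 1."""
    return math.log(v0_kms / sound_speed_kms) / c_per_myr

print(round(wind_shutoff_time_myr(150.0, 7.884, 63.0), 2))  # ~0.11 Myr
```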
Also, I found that in the previous paper the Mach 1.5 run uses 2x ambient density while the Mach 3 run uses 4x ambient density. So I'm inclined to redo the Mach 3 run to make the comparison even for the revision. The rest of the revision is done.
Hydro Mach 10 shock TSF with linear velocity profile wind
See here:
http://www.pas.rochester.edu/~shuleli/tsf_paper/velpro1.gif
The wind is initially Mach 10; it continues for 0.2 Myr, then linearly slows down to 0 over another 0.2 Myr.
Shock Wire Simulations
Meeting Update Apr 21 2014
TSF
Finished the low-res MTSF runs on Kraken. The contained poloidal case is slow (only about 10 frames per day after particle formation) on 1200 cores at 256 zones per radius resolution. The toroidal case is much faster (30 frames per day).
The images: MTSF Contained Toroidal, Beta = 12, Column Density
movie
Cut-through with inverse beta contour:
movie
Here we see behavior consistent with the hydro cases. The cut movie takes a few hours to make; it should be done by tonight.
MTSF Contained Poloidal, Beta = 12, Column Density
movie
Cut-through with inverse beta contour:
movie
It is very clear that the material ejection region overlaps the magnetic cocoon.
I think the comparison is promising; the next step will be to choose proper parameters to start production runs.
Meeting Update Mar 31 2014
Triggered Star Formation on Contained Poloidal Field with Mach = 1.5, Beta = 3
http://www.pas.rochester.edu/~shuleli/mtsf/polar3.gif
Comparing to the Beta = 12 and 6 cases:
http://www.pas.rochester.edu/~shuleli/mtsf/betacomp.gif
We definitely see more "eruption" from the center of the core due to the field compression near the star. I will go on to production runs with different contained field cases adding rotation.
Triggered Star Formation on Uniform Field with Mach = 1.5, Beta = 6 (no star formed)
http://www.pas.rochester.edu/~shuleli/mtsf/uniform6.gif
It could be interesting to study the triggered collapse (how likely, and the mixing) of magnetically subcritical cores, where the mass-to-flux ratio is below the critical value:
Here we don't need a BE sphere or rotation, just cores with the right mass (~ solar mass), a uniform magnetic field (1~10 uG), and an array of different Mach numbers and field orientations.
Meeting Update 0217
- Laser Simulations
Simulation setup (2D):
With 2 mm ablator (contour shows the target material tracer):
With 10 um ablator initial setup:
At 100 ns:
With 10 um ablator, added 8 Tesla vertical magnetic field:
At 100 ns:
Without ablator, hydro, single shock with the same shock speed (115 um/ns) initial setup:
At 100 ns:
Without ablator, hydro, single shock with half the shock speed (57 um/ns):
At 200 ns (the time slider in the image should be doubled):
Without ablator, added 8 Tesla vertical field, single shock with the same shock speed (115 um/ns):
At 100 ns:
- TSF
Revising the final version of the paper. Should be done by the end of this week.
- Magnetized TSF
Since we've decided to tackle the magnetized TSF next, I have migrated the low-res runs from Kraken to bluestreak. We should get the set of production runs we need after these runs are done.
Meeting Update Feb 10 2014
- Laser stuff: the rad transfer seems to be working fine. Here's what I got after applying a 1e14 W/cm^2 rad flux at the left surface of the simulation box for 1 ns:
At 2 ns, it becomes:
Note that the heat-up is lower than in Peter's simulations, where the region just in front of the target is at about 0.03 keV. To achieve the same temperature, we need to apply the flux for more than 50 ns, by which time the target gets heated up far more than in Peter's simulation.
We will apply shocks to this initial setup (the 2D runs take several hours on 8 cores, so we should be able to get the results we want in a day or two). The current shock speed I get from the setup is about 100 um/ns when it reaches the target. It'd be nice to know the pressure that drives the shock so we can simulate the driving directly.
Also, I have started revising the resistive shock clump paper. My goal is to get it submitted by the end of April.
- TSF: finished revising the paper. There are some minor tweaks needed, but we are nearly done with this one. I reviewed some of the simulation papers by Boss and Vanhala last week; I think the next paper should investigate the parameter space for disk formation, with two sets of runs:
Mach ranging from 5 to 40 with K = 0.1 to study the impact of M on disk formation;
Mach 10 or 20 shock (this is what they proposed to be able to achieve injection) with K ranging from 0 to 1.
We should bring mixing into focus, as it is one of the key reasons people are interested in this problem.
From my previous paper, these two sets should total about 1 million CPU hours on Kraken.
The other two ideas are (1) magnetized TSF, as we've seen interesting results from the prototype runs, and (2) studying different sink particle algorithms in the context of TSF.
Meeting Update Feb 3 2014
- The Berkeley group sent me the project proposal upon my request. Most of my time last week was spent reading the proposal along with documentation (implementation and performance) on the ORION code. I'll briefly talk about it during the meeting. (Not sure if it is OK to post it online.)
- The new NLUF simulation is set up as below:
The radiation flux on the left side of the box (orange zone only) should launch a shock of 115 um/ns. The shock front will be at a temperature of 20 eV, with a precursor of width 1 mm, when hitting the target. The target and the ambient are both at room temperature. The sound crossing time for the target is on the ms time scale, much longer than the intended simulation time scale (less than 1 us), so pressure balancing should not be an issue.
The vertical magnetic field is about 6 Tesla, based on the latest design.
- TSF: Currently working on Adam's version while waiting for Eric's comments.
NLUF summary
Previous simulation setup:
http://www.pas.rochester.edu/~shuleli/NLUF_sep/setup.png
Temperature setup:
0.026 eV in the ambient and 0.0012 eV in the target initially; 0.026~0.8 eV in the ambient and 0.0012~0.3 eV in the target after preheating by radiation. The Spitzer resistivity has a floor temperature: any temperature below 0.26 eV (3000 K) is treated as 0.26 eV when calculating resistivity. This means that only after preheating will the hot end of the target start to diffuse magnetic field.
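A minimal sketch of the floor behavior; the prefactor below is a placeholder (the real Spitzer formula carries Z and the Coulomb logarithm), so only the relative behavior is meaningful:

```python
# Spitzer resistivity scales as T^(-3/2); temperatures below the 0.26 eV
# (~3000 K) floor are clamped before evaluating it.
T_FLOOR_EV = 0.26

def spitzer_resistivity(T_ev, prefactor=1.0):
    return prefactor * max(T_ev, T_FLOOR_EV) ** -1.5

# the cold target (0.0012 eV) sits at the floor value:
print(spitzer_resistivity(0.0012) == spitzer_resistivity(0.26))  # True
# preheated material (0.8 eV) has a lower resistivity than the floor value:
print(spitzer_resistivity(0.8) < spitzer_resistivity(0.26))      # True
```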
http://astrobear.pas.rochester.edu/trac/astrobear/attachment/blog/shuleli09072013/hydronew.png
Comparison animation for beta = 1, 10 and infinite:
http://www.pas.rochester.edu/~shuleli/NLUF_sep/movieall.gif
AstroBEAR has resistivity implemented and calculates the amount of magnetic diffusion at every simulation cell given its density, temperature, and ionization fraction. We have since obtained ionization tables of Argon and SiO2 (Z(rho,T)) from Jacques at LLE, and have implemented code to read the tables and compute local resistivity with the correct Z value.
Other design suggestions: eliminate the backwall shock and put an ablator (density wall) in front.
First cut of AstroBEAR GPU
Diffusion test movie here:
http://www.pas.rochester.edu/~shuleli/gpupro/magneticdiffusion.gif
Features:
Users can specify the block and thread setup in a data file. In the CUDA code, we use a simple setup:
dim3 grid(XBLOCK,YBLOCK,ZBLOCK);
dim3 block(mx/XBLOCK,my/YBLOCK,mz/ZBLOCK);
Two different setups can be used: one uses cudaMalloc to pack the 3D data into a 1D array and does the calculation on the 1D array (more pressure on global memory due to its poor data locality); the other uses cudaMalloc3D (not working yet; I think I did something wrong when dereferencing the pitched pointers). They use different kernels. It would be ideal to get the Malloc3D setup working.
Launching two different kernels:
JKernel<<<grid,block>>> to calculate edge-centered current
auxKernel<<<grid,block>>> to convert the currents into aux fields and fluxes
Launching multiple kernels seems to have a small overhead (3~5 cycles) unless new data is copied, but it would be nice to explore inter-block communication options, since we do need all the currents updated before updating the aux fields. I'm not sure if there is a hardware-supported barrier on Tesla, but one can easily use an atomic add on a global memory address to implement a rudimentary barrier between blocks (no sense reversal is needed here, since on return from a kernel launch all blocks are synchronized and their metadata destroyed).
Real-time image processing: I have a code written (with a C# front end, though) that does real-time processing and image rendering on screen on the GPU using OpenGL. It would be nice to hook this up to the GPU kernel and render simulation results in real time (ideally a webapp that takes simple user inputs and displays simulation results live). For the above diffusion test, each frame takes 10 ms to compute and 10 ms to process (the total is not 20 ms, since I spawn two threads: one does the computation, the other is only interrupted when new data is available). It could exceed 16 ms if hydrodynamics is involved, so some fine tuning is necessary.
This week: will be working on finishing the TSF paper this week, and some job applications. After the paper is out, I may go back to explore the GPU stuff a bit more.
Meeting Update 1203
Been writing the triggering paper mostly; here's a directory of paper figures. I have been making the 3D rendered movies for the rotating disk cases; they are not finished yet.
http://www.pas.rochester.edu/~shuleli/tsf_paper/paper_figs/
Resubmitted the NLUF jobs to bluehive with Jonathan's extrapolation routine. Should get some results this week if things go OK. This uses the old design parameters Peter sent around a while ago (set up another conference call with the NLUF team?).
Meeting Update 1125
Triggered Star Formation, with Federrath accretion + Mach 3, high res.
Created two particles off-center…
http://www.pas.rochester.edu/~shuleli/tsf_feed/tsf1125.gif
Meeting Update 1118
- ART problem: I think Baowei is getting a mismatch between the bottom temperature of his profile and the given bottom heat flux (the bottom temperature should give a flux approximately equal to the q0 given by Riccardo).
- Wrote a routine that reads Jacques' Argon and SiO2 ionization fraction tables and wired it to the resistivity routine. Current features:
- reads Table.txt and, from cell density and temperature, infers the ionization fraction Z by averaging the lower and upper bounds (this should work better with Jonathan's extrapolation routine, which I haven't received yet, btw).
- looks at the tracer field and determines which table to use (whichever material has the higher fraction). It may be worthwhile to do another average.
- from Z, calculates the desired physical quantity in that cell using the Spitzer equations (resistivity and heat conduction).
I also modified the heat conduction routine to do the same thing, so that only ionized electrons conduct heat, and only along B field directions (the anisotropy is determined by the gyroradius). So I think we are more sophisticated than before with the explicit routines, except they don't do subcycles.
- Got contacted by UIUC comp astro group, read some of their papers (they do GRMHD, I found the magnetized disk simulations particularly interesting).
- TSF paper (short version) should be out to Adam and Eric this week. The isothermal runs got a memory error, see below.
http://www.pas.rochester.edu/~shuleli/TSF.e3864249
- Started high-res magnetized TSF (Krumholz) runs on bluestreak. Through the end of this year, I'll mostly be dedicated to writing (the short TSF letter, the resistive clumps paper, and potentially the longer TSF paper) while waiting for these magnetized runs.
Meeting Update 1104
http://www.pas.rochester.edu/~shuleli/tsf_feed/fediso.gif
density movie with gamma = 1.000001.
Federrath and Krumholz accretion temperature
Will add density to these later.
Krumholz:
http://www.pas.rochester.edu/~shuleli/tsf_feed/krumtmp1.gif
Federrath:
some new images and bondi radius
http://www.pas.rochester.edu/~shuleli/tsf_feed
The explosion happens at frames 36~38. If we take an estimate of the temperature surrounding the star around that time (< 200 K) and the stellar mass at the same time (about 0.82 solar masses), we get a Bondi radius of about 868 au (0.075 on the grid of the plot), which corresponds to 38 grid points.
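For reference, the Bondi radius estimate R_B = GM/c_s^2 can be sketched as below; mu = 2 (molecular gas) is an assumption here, so this lands near, not exactly at, the quoted 868 au:

```python
# Bondi radius R_B = G*M / c_s^2 with isothermal sound speed
# c_s = sqrt(k_B*T / (mu*m_H)).  mu = 2 is an illustrative assumption.
G_CGS = 6.674e-8
K_B = 1.381e-16
M_H = 1.673e-24
MSUN = 1.989e33
AU_CM = 1.496e13

def bondi_radius_au(m_solar, T_kelvin, mu=2.0):
    cs2 = K_B * T_kelvin / (mu * M_H)
    return G_CGS * m_solar * MSUN / cs2 / AU_CM

print(round(bondi_radius_au(0.82, 200.0)))  # roughly 880 au
```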
Journal Club 1022 Agenda
- triggered star formation: different accretion algorithms.
- Erica's paper
Meeting Update 1021
Federrath Accretion with rotation, full movie:
http://www.pas.rochester.edu/~shuleli/tsf_paper/fedf.gif
Krumholz Accretion restart:
http://www.pas.rochester.edu/~shuleli/tsf_paper/krumf.gif
The snowplow phase of a supernova expansion typically lasts 2 million years. If we use the shell speed u_sh ~ t^(-4/3) as the characteristic, we can estimate the time needed for the shell to slow down from Mach = 1.5 to an acoustic wave (Mach = 1). This period is about 0.8 million years. Therefore, one estimate of the duration of the wind in our simulation could be 0.8 million years, at the end of which both accretion routines have a disk remaining.
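A quick sketch of that estimate: with u_sh ~ t^(-4/3), the deceleration time from Mach M1 to M2 is t1*((M1/M2)^(3/4) - 1). Taking t1 ~ 2 Myr as the shell age at Mach 1.5 (an assumption; the post only quotes the 2 Myr snowplow duration) recovers roughly the 0.8 Myr figure:

```python
# Time to decelerate from Mach M1 to Mach M2 under u_sh ~ t^(-4/3):
# dt = t1 * ((M1/M2)**(3/4) - 1), with t1 the shell age at Mach M1.
def slowdown_time_myr(t1_myr, mach_start, mach_end):
    return t1_myr * ((mach_start / mach_end) ** 0.75 - 1.0)

print(round(slowdown_time_myr(2.0, 1.5, 1.0), 2))  # ~0.71 Myr, i.e. ~0.8 Myr
```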
Meeting Update 1014
Newest movie from the Federrath run:
Journal Club 1008 Agenda
Topics this week:
- More discussion on the physical conduction front test.
http://astrobear.pas.rochester.edu/trac/astrobear/ticket/309
- Erica's BE sphere model
- Adam's paper review
Meeting Update 1007
TSF project:
Mach = 3 Movie: http://www.pas.rochester.edu/~shuleli/tsf_paper/mach3final.gif
Federrath accretion, Mach = 1.5, parallel rotation:
http://www.pas.rochester.edu/~shuleli/tsf_paper/fedrot.gif
We should be able to run the Federrath sim to the end this week.
Also started MHD sims on bluestreak. The first run is Krumholz Mach = 1.5, no rotation, beta = 4.
Journal Club 1001
Agenda:
- Testing the implicit diffusion.
- Erica's research topics.
(I'll be leaving after 4:30pm so let's discuss the diffusion stuff first).
1, Conduction front test with c.u. with uniform rho = 1. (found the simulation is slower than the analytic solution)
2, Conduction front with different densities.
3, Conduction front test with real units (in the paper already).
4, Conduction front test with hydro turned on
5, RT nonablative test with c.u.
Meeting Update 0930
- TSF: Mach 3.5 finished. I have queued up the Mach 1.5 run with Federrath accretion + parallel rotation at high res on Kraken. I have been writing up the simulation results section, describing the 2D column density images. The discussion section will be a bit tricky (how long will this paper be? If we use the same format as previous papers: introduction → simulation results → discussion of line plots → math model, I can see it easily going over 10 pages).
- RT: We can now confirm that the ODE-generated initial condition in SI units is not stable. The next step is to first retrieve the computational-unit test results: the conduction front problem as described in Reale et al. (paper attached), and the RT (nonablative) growth rate problem. If those still work, then we can confirm something is wrong in the scaling. We should then use computational-unit data to generate the ODE initial condition and test it, to bypass whatever scaling the code does.
- Been polishing up my resume and statement of purpose. The nearest fellowship deadline is the 15th of this month; I want to have them ready this week. Major tech companies' internship applications for next summer are open now, so I'm preparing for that too.
Conduction Front Analytic test
The self-similar solution of the temperature of a thermal conduction front (without hydro) is described by the following set of equations:
where
here
Also notice that the above equations only give the profile left to
I wrote a quick, thrown-together executable that returns Tc and x0 at a given time. For example,
./cond 0.001 100.0 0.1
would return "at time 0.1, Tc is 36.2443, x0 is 1.68733".
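For intuition (this is not the executable above), here is a minimal explicit integration of the nonlinear conduction equation dT/dt = d/dx(T^n dT/dx) with n = 2.5; the grid, initial pulse, and run time are arbitrary choices. It illustrates the finite front position x0: cells beyond the front stay at exactly zero:

```python
# Explicit integration of dT/dt = d/dx(T**n * dT/dx), n = 2.5 (no hydro).
n = 2.5
nx, dx = 200, 0.01
T = [0.0] * nx
T[nx // 2] = 1.0                 # hot pulse in the middle

t, t_end = 0.0, 0.001
while t < t_end:
    dt = 0.2 * dx * dx / max(T) ** n        # explicit stability limit
    flux = [0.0] * (nx + 1)                 # zero-flux boundaries
    for i in range(1, nx):
        Tface = 0.5 * (T[i - 1] + T[i])     # face-centered conductivity T**n
        flux[i] = -(Tface ** n) * (T[i] - T[i - 1]) / dx
    for i in range(nx):
        T[i] -= dt * (flux[i + 1] - flux[i]) / dx
    t += dt

front = sum(1 for Ti in T if Ti > 1e-8) * dx   # width of the heated region
print(0.0 < front < nx * dx)                   # finite front: True
```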
Meeting Update 0923
TSF figures
Stellar Mass:
Accretion Rate:
Mixing Ratio:
TSF movies
Column density non-rotation:
http://www.pas.rochester.edu/~shuleli/tsf_paper//norot.gif
Column density parallel-rotation:
http://www.pas.rochester.edu/~shuleli/tsf_paper//parrot.gif
Column density perpendicular-rotation:
http://www.pas.rochester.edu/~shuleli/tsf_paper//perrot.gif
Journal Club 0917 Agenda
- RT instability and the initial conditions.
http://www.pas.rochester.edu/~shuleli/0615/RT.pdf
- Implementing the SESAME table in AstroBEAR. What do we need for the NLUF simulations? What is worth putting in the code?
- Frequent restart problem on Kraken.
Boundary Conditions (variables labeled ≥ 1 are known):
Set1:
Set2:
NLUF movies
Hydro
http://www.pas.rochester.edu/~shuleli/NLUF_sep/hydro.gif
Beta = 10
http://www.pas.rochester.edu/~shuleli/NLUF_sep/beta10.gif
Beta = 1
http://www.pas.rochester.edu/~shuleli/NLUF_sep/beta1.gif
Beta = 1 (different floor temperature)
http://www.pas.rochester.edu/~shuleli/NLUF_sep/difwind.gif
(new movies posted)
http://www.pas.rochester.edu/~shuleli/NLUF_sep
Meeting Update 0908
I finished four NLUF runs:
- hydro
- beta = 1 with Spitzer resistivity (temperature cut-off at 3000k)
- beta = 10 with Spitzer resistivity (temperature cut-off at 3000k)
- beta = 1 with Spitzer resistivity (temperature cut-off at 1000k)
The animations should all be up tonight.
The final TSF run is currently at frame 83 of 200. I keep getting restarts because of high CFL numbers. This run should finish this week.
NLUF stuff 3
The results from the new runs with the correct temperature setup: 0.026 eV in the ambient and 0.0012 eV in the target initially; 0.026~0.8 eV in the ambient and 0.0012~0.3 eV in the target after preheating by radiation. The Spitzer resistivity has a floor temperature: any temperature below 0.26 eV (3000 K) is treated as 0.26 eV when calculating resistivity. This means that only after preheating will the hot end of the target start to diffuse magnetic field.
NLUF stuff 2
Update: note that these runs are really beta = 1000 and 100. The correct runs, with beta = 10 and 1 and a cold initial setup that matches the experiment, will be posted tomorrow.
Resolution: 64 zones per clump radius. All the MHD runs have Spitzer resistivity (~T^(-3/2)) turned on. There is no post-shock B field imposed; only the initial ambient is magnetized.
Density comparison for hydro, beta = 10, and beta = 1 (notice the beta = 1 run is under-refined)
Beta comparison for beta = 10 and beta = 1 at the end of the runs (the scale is 100, 1000, 1e+4, etc.)
The comparison of density for the two low-res AMR runs, beta = 10 and beta = 1.
The comparison of initial and final temperatures for the beta = 10 run (in eV).
Meeting Update 0903
- The TSF simulations are taking longer than I expected. Currently the only unfinished run is the perpendicular-rotation Mach 1.5 one, which should finish within the next week. But we should start thinking about what to say in the upcoming paper (meeting this week?).
- Coded up a rudimentary ambipolar diffusion explicit solver using the single fluid model described here:
http://iopscience.iop.org/0067-0049/181/2/413/fulltext/
We can run simple oblique shock tests to verify it in the coming weeks. It could be helpful for some future MHD turbulence sims, etc. Two-fluid algorithms can be found in the recent literature, for instance:
http://www.sciencedirect.com/science/article/pii/S1384107611001321
but they require substantially more coding effort.
- Since the current code takes a very long time solving fast diffusion problems with the explicit solver (especially the Spitzer diffusivity problem in the NLUF project), I have picked up the code for explicit subcycling and a persistent data structure for communications during subcycling. The alternative is to add hypre support for implicit solving, but that requires an additional divergence cleaning algorithm implemented for both resistivity and ambipolar diffusion.
- Currently going through the process of polishing my CV and writing a statement of purpose for postdoc fellowship applications. Some deadlines fall at the end of September~October, so there is not much time left. I will also apply for summer 2014 internships at tech companies later this semester, and possibly a job next fall, so I'm reviewing interview problems (mostly coding and algorithms).
- NLUF project: first shots will be taken this week, I will be participating in the post-shot meeting on Thursday.
NLUF stuff
Draft of initial condition setup (shock speed = 80um/ns coming from the left).
In the rad-hydro simulation, the radiation precursor would hit the target first, followed by the thick ablator front:
Here's a cut of density using Spitzer resistivity at 60ns. The shock is coming from left of speed 80um/ns (about Mach 60):
One problem is that the post-shock temperature I'm getting in this simulation is about 600 eV (which is correct for the adiabatic jump condition). This is very different from Peter's earlier rad-hydro simulation (only ~10 eV). Radiative cooling needs to be included in the simulation. Our in-development radiative transfer could be used to generate the initial condition:
Marshak shock
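The ~600 eV figure is in the same ballpark as the adiabatic strong-shock estimate T2 = (3/16) mu m_H v_s^2 / k_B (gamma = 5/3). The mean molecular weight below (mu = 40, atomic argon) is an assumption for illustration; the actual value depends on composition and ionization state:

```python
# Adiabatic strong-shock temperature T2 = (3/16) * mu * m_H * v_s**2 / k_B
# (gamma = 5/3).  mu = 40 is an illustrative assumption.
M_H = 1.673e-24     # g
K_B = 1.381e-16     # erg/K
K_PER_EV = 1.1605e4

def postshock_temp_ev(v_s_kms, mu):
    v = v_s_kms * 1.0e5
    return (3.0 / 16.0) * mu * M_H * v * v / K_B / K_PER_EV

print(round(postshock_temp_ev(80.0, 40.0)))  # a few hundred eV
```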
Journal Club 0820 Agenda
- Review operator splitting method (papers attached).
- Test problems for Federrath and Krumholz accretion algorithms.
- Ruka's PN simulations.
- Introduction of radiation transport interface.
- Incorporating devs from outside of our group.
Meeting Update 0819
- Triggered Star Formation
A comparison between Krumholz and Federrath accretion algorithms:
movie
We should be able to finish all runs needed for the first paper of this project by the end of August. Two things to think about: (1) what to say in the paper? (2) we should start accumulating low resolution simulations (on bluestreak since it's free) for the next step we want to go (magnetic field?).
- Ablative RT
The following is a movie showing the evolution of density and temperature over 5 instability growth times (assuming a single mode, t = 1/sqrt(k*g), where k = 2*pi/l and l is the width of the box). 5 growth times = 1 sound crossing time after fixing the scaling.
movie
The next step is to continue checking the input: (1) the boundary condition I've been using is slightly different from Riccardo's, which may explain the fluctuation on the boundary; (2) the effect of varying g; (3) using the "homebrew" initial condition.
- NLUF
Draft of initial condition setup (shock speed = 80km/s coming from the left).
In the rad-hydro simulation, the radiation precursor would hit the target first, followed by the thick ablator front:
I have not decided how to mimic this setup; the simplest thing to do is run with a single shock. The shocked runs should take a few hours each. I also checked the stability of the setup across the simulation time (6 ns, as used in the rad-hydro simulations), since the setup is not in pressure equilibrium:
density movie
temperature movie
Another thing that we may put in is a constant pressure on the boundary to simulate the incoming laser pulse.
fixed length scale in the RT data files
After fixing a length scaling problem in the original data files, I was able to get a better equilibrium. The original parameters still need to be tuned (kappa needs to be reduced by a factor of 5). The following is a movie showing the evolution of density and temperature over 5 instability growth times (assuming a single mode, t = 1/sqrt(k*g), where k = 2*pi/l and l is the width of the box).
movie
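For reference, the single-mode growth time t = 1/sqrt(k*g) with k = 2*pi/l can be computed as below; the box width and acceleration are placeholder values, not the ones in the data files:

```python
import math

# Single-mode RT growth time t = 1/sqrt(k*g), k = 2*pi/l (l = box width).
def rt_growth_time_s(box_width_cm, g_cgs):
    k = 2.0 * math.pi / box_width_cm
    return 1.0 / math.sqrt(k * g_cgs)

print(rt_growth_time_s(0.02, 1.0e15))  # seconds (~1.8e-9 for these values)
```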
Journal Club 0813 Agenda
- Erica's 2D fluid code.
- Eddie's jet simulations.
- Adam's paper review.
Meeting Update 0812
- Ablative RT: after experimenting with various parameters, it seems the conductivity we were using previously is too high. The conductivity that can keep the ablation front in the same place should be about 2e-17 (the original parameter is about 3e-14). Here is a comparison between different conductivities (red: 2e-17, green: 3e-17, blue: 5e-17, white: 7e-17):
http://www.pas.rochester.edu/~shuleli/rtnew/diff4.gif
The temperature and density evolution with kappa = 2e-17:
http://www.pas.rochester.edu/~shuleli/rtnew/rt2e17.gif
Comparison between kappa = 2e-17 and non-conductive:
http://www.pas.rochester.edu/~shuleli/rtnew/3comp.gif
Comparison between kappa = 2e-17, flux = 5e+21 (blue) and kappa = 2e-17, flux = 5e+18 (red)
http://www.pas.rochester.edu/~shuleli/rtnew/2flux.gif
- The triggered star formation simulation: non-rotational case has finished:
http://www.pas.rochester.edu/~shuleli/rtnew/tsfnr.gif
The parallel rotation case with K = 0.1 is at frame 78 of 200, should finish this week.
http://www.pas.rochester.edu/~shuleli/rtnew/tsfrot1.gif
Ablative RT tests
I ported conduction to AstroBEAR 3.0; here's the result of the verified 1D conduction front test:
click for animation
The ablative RT problem does work better in 3.0 than before. It can hold up for half an instability growth time (t = 1/sqrt(Akg)), compared to less than 1/20 of a growth time previously. There's no spiking anymore. Here, I use Rui's value of the conductivity, 7e+5 cm^2/s at the bottom. The heat flux is about 5.876e+21 erg/(cm^2*s).
click for animation
click for animation
The major problem now seems to be that the flux on the boundary is not treated correctly. An earlier version, AstroBEAR 1.0, does maintain the temperature gradient at the lower boundary, so I'm thinking about looking at how that version treated the boundary flux.
The thermal conduction module is turned on by setting the following parameters in physics.data:
iDiffusion = 2    !Turns on implicit diffusion
kappa1 = 3.e-13   !conductivity (cgs)
flb = 17.628e+21  !boundary flux (cgs)
ndiff = 2.5       !diffusion index
diffcfl = 0.03    !diffusion cfl
If you are getting a Hypre solver error, try tuning down diffcfl or increasing the number of protection cycles (currently set to 100) in elliptic/diffusion/diffusion.f90:
dt_protection = dt_remaining/100.0
This protection can be parameterized in the future official release.
The diffusion scales are treated in physics/physics_control.f90:
ScaleDiff = lScale*SQRT(pScale*rScale)/(TempScale**ndiff)
kappa1 = kappa1/ScaleDiff
ScaleFlux = TempScale*SQRT(pScale*rScale)
The hydrostatic bottom boundary condition is treated in problem.f90:
IF(Info%xBounds(2,1)==GxBounds(2,1).AND.Gmthbc(2,1)==1)THEN
  DO i=1,mx
    rho1=Info%q(i,1,1,1); vy1=Info%q(i,1,1,3)/rho1; vx1=Info%q(i,1,1,2)/rho1
    p1=(Info%q(i,1,1,iE)-0.5*(SUM(Info%q(i,1,1,2:3)**2))/Info%q(i,1,1,1))*(gamma-1d0)
    t1=p1/rho1
    CALL NonLinearsolver(t0,heatflux,t1,ndiff)
    !t0=3519.10130483276
    vx0=vx1
    aa=t0+gx; bb=gx*rho1-p1-rho1*vy1**2; cc=rho1**2*vy1**2
    root1=-0.5*bb/aa+0.5*sqrt(bb**2-4d0*aa*cc)/aa
    root2=-0.5*bb/aa-0.5*sqrt(bb**2-4d0*aa*cc)/aa
    rho0=max(root1,root2)
    vy0=rho1*vy1/rho0
    p0=rho0*t0
    Info%q(i,0,1,iTemp)=p0/rho0
    Info%q(i,0,1,1)=rho0; Info%q(i,0,1,2)=rho0*vx0; Info%q(i,0,1,3)=rho0*vy0
    Info%q(i,0,1,iE)=p0/(gamma-1d0)+0.5*rho0*(vx0**2+vy0**2)
  END DO
END IF
Meeting Update 0805
- TSF new images.
Federrath accretion + rotation (K = 0.1):
animation: click
Krumholz accretion + no rotation, 512 zones per radius:
animation: click
Krumholz accretion + no rotation, 512 zones per radius, Zoom-in:
animation: click
- Meeting with LLE folks this Wednesday, will hand them the code.
Journal Club 0730 Agenda
- Ruka's PN simulations.
https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/smurugan/pn2013
- Sink particle algorithm wrap up.
- Learning to code self-gravity solver (Poisson equation).
Meeting Update 0729
- Krumholz accretion + patch refinement worked with a factor added to the initial mass equation. I have done a run on Kraken with 320*192*192 base resolution + a 2-level patch (effective resolution of 512 zones per clump radius), with a factor of 0.25 added to drho. The rotating runs should look interesting with the increased resolution.
The star also grows to a greater mass; I'm not sure if this is because of the initial mass (with the 0.25 factor, the initial mass is now about 10 units instead of 0.01) or the increased resolution. We can try a factor > 0.25 to test the difference.
- Bluestreak has some compilation problems, and it seems I'm not on the new grant, so currently I can only use the old grant hours. Each of the high-resolution runs takes about 2 days on 1080 cores on Kraken, so we need about 50,000 CPU hours per run. 6 runs (as planned) will cost about 10% of the new allocation.
- Currently working on merging 3.0 with thermal diffusion, will meet with LLE people this week.
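A quick sanity check of the CPU-hour estimate above (the new-grant total is taken from the Journal Club 0702 post):

```python
cores = 1080
days_per_run = 2
cpu_hours_per_run = cores * days_per_run * 24   # 51,840, i.e. about 50,000
runs = 6
total = cpu_hours_per_run * runs                # roughly 311,000 CPU hours
new_allocation = 3_422_821                      # Kraken new-grant SUs
fraction = total / new_allocation               # about 9%, i.e. "~10%"
```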
Meeting Update 0723
- Attended PPVI meeting. The poster is very successful, also talked to many colleagues.
- Received Kraken token from NICS. I've sent out the activation form, the newly allocated hours can be used for the Triggering project once my account is activated.
- Met with LLE folks. Currently working on merging the diffusion solver with AstroBEAR 3.0, the tested dev version of which I just received from Baowei.
- Chris Mckee mentioned to me his earlier Triggering work with Nakamura. He was talking about this 2005 paper:
http://arxiv.org/pdf/astro-ph/0511016v1.pdf
It has some interesting points. I may do a journal club presentation on it in the coming weeks.
- Meetings before the end of this year?
PPVI poster
Final revision of PPVI poster (attached).
Journal Club Meeting 0709
Agenda
In this week's journal club, we'll have:
- Discuss the poster format.
http://www.pas.rochester.edu/~shuleli/tsf_project/poster.pdf
- Adam discussing a paper.
http://adsabs.harvard.edu/abs/2012ApJ...747...22H
http://adsabs.harvard.edu/abs/2010ApJ...709...27W
http://adsabs.harvard.edu/abs/2010ApJ...720L..26L
- Jonathan's post on evanescent solutions to self-gravity:
https://astrobear.pas.rochester.edu/trac/astrobear/blog/johannjc07042013
Shock triggered star formation, rotational vs nonrotational, different Mach
We have the following runs:
Krumholz accretion:
Mach = 1.5: non-rotational (case NR), rotation axis along the shock normal (case RA), rotation axis perp to shock normal (case RP). For the rotational cases, K = Omega*tff = 0.1.
Mach = 3.16: non-rotational
Federrath accretion:
Mach = 1.5, 2.5, 3.16 non-rotational
Mach = 1.5, beta = 4 magnetized (field perpendicular to shock normal)
The shock compression phase prior to the formation of the star looks very similar in all cases. The star first forms at about 2 cloud crushing times for Mach = 1.5 and 1.6 cloud crushing times for Mach = 3.16. Initially, the cloud with a newly formed star embedded looks like this:
For Mach = 1.5, different rotation cases at 2 crushing time:
For Mach = 1.5, different rotation cases at 2.5 crushing time:
For Mach = 1.5, different rotation cases at 3.6 crushing time:
Comparison of Star Mass evolution for different rotations:
Comparison of Accretion Rate for different rotations:
Comparison of Wind Material Mixing evolution for different rotations:
Movies
case NR:see previous post
case RA:
http://www.pas.rochester.edu/~shuleli/tsf_project/tsf_rot1.gif
case RP:
http://www.pas.rochester.edu/~shuleli/tsf_project/tsf_rot2.gif
Mach = 3.16:
http://www.pas.rochester.edu/~shuleli/tsf_project/mach3.gif
Mach = 1.5, K = 1 (lateral expansion too strong to be contained by the shock):
http://www.pas.rochester.edu/~shuleli/tsf_project/K1.gif
Comparison of Star Mass evolution for different Mach:
Comparison of Accretion Rate for different Mach:
Comparison of Wind Material Mixing evolution for different Mach:
Journal Club Meeting 0702
Agenda
- Computation resources.
- Kraken (old grant): 1,138,234 SUs, 923,481 SUs remaining
- Kraken (new grant): 3,422,821 SUs, 3,422,821 SUs remaining
- Stampede: 416,000 SUs, 65,694 SUs remaining
https://astrobear.pas.rochester.edu/trac/astrobear/wiki/ProjectRuns
- Discuss sink particle in the code:
Federrath (section 2):
http://arxiv.org/pdf/1001.4456v2.pdf
Krumholz (section 2.2 to 2.4):
http://www.ucolick.org/~krumholz/papers/krumholz04.pdf
- Adam's talk.
Journal Club Meeting 0625
Agenda
Jeans instability:
https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/JeansInstability
Ruka's jet problem, and comfort zone of AstroBEAR:
https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/smurugan/pn2013
Zhuo's BH accretion paper review:
http://arxiv.org/abs/astro-ph/0406166
Meeting Update 0624
I've redone some of the tests mentioned last week, all from frame 0 (some of the runs in the table I posted last week are from restarts). This is what I found:
res | accrete? | second particle created? |
80+1 | yes | no |
80+2 | yes | no |
80+3 | yes | no |
80+4 | no | yes |
80+5 | no | yes |
160+1 | yes | no |
160+2 | yes | no |
160+3 | no | yes |
160+4 | no | yes |
320+0 | yes | no |
320+1 | no | yes |
320+2 | no | yes |
320+3 | no | yes |
The 80+4 and 80+5 runs that worked last week were restarts from frame 50 (the frame right before particle creation) of a previous 80 base grid run. From the new runs, it looks like our issue is entirely about resolution: anything beyond 640 effective resolution cannot accrete after creating the first particle, and will create more particles to "compensate". The old restart runs were done on Bluestreak; the starting and finishing frames look like:
Particle mass dependence I then did a few tests switching between Federrath and Krumholz. The idea is to use Federrath for the first few frames after the particle is created, then switch to Krumholz to prevent "explosion". However, at 80+5 and 160+4 resolution, even if we switch to Krumholz after the particle mass grows to 10 (10% of the initial cloud), the particle fails to accrete.
Bluestreak performance issue. I was out of town from last Thursday until Sunday, but I did manage to get a reservation on Bluestreak last Saturday and did some runs. I did two runs on Bluestreak featuring essentially the same setup: using frame 51 from an old run and the 80+4 setup, both run for 20 frames (to frame 71). Stampede took 6 hours on 1024 cores; Bluestreak took almost the full 24-hour reservation on 8192 cores. The full simulation is about 200 frames, with frames 50 to 150 being more intensive. Bluestreak needs nearly 10 days to finish one entire run, while Stampede needs about 2 days on 1024 cores. The output chombo files are 2.5 GB each, and I've seen the old BGP handle this problem size much faster. I think the valid option now is to use Kraken for these runs (2048 cores, estimated 2 days per run).
Journal Club Meeting 0618
Agenda:
In this week's journal club, we will cover the following topics:
Ruka' simulations:
https://astrobear.pas.rochester.edu/trac/astrobear/blog/smurugan06172013
Eddie's Ticket on 3D pulsed jet:
https://astrobear.pas.rochester.edu/trac/astrobear/ticket/289
False particle creation in triggered star formation (ticket 288):
https://astrobear.pas.rochester.edu/trac/astrobear/ticket/288
https://astrobear.pas.rochester.edu/trac/astrobear/blog/shuleli06172013
Zhuo's paper reading on Bondi-Hoyle-Lyttleton accretion:
http://arxiv.org/abs/astro-ph/0406166
Meeting Update Jun 17
Have done a set of test runs of TSF with Krumholz accretion in patched refinement:
res | initial mass | accrete? | massive particle created? |
80+1 | 15 | yes | no |
80+2 | 3.58 | yes | no |
80+3 | 1.02 | yes | no |
80+4 | 0.65 | yes | no |
80+5 | 0.47 | yes | no |
320+0 | 40 | yes | no |
320+1 | 0.27 | no | yes |
320+2 | 6.52e-3 | no | yes |
320+3 | 4.35e-4 | no | yes |
As shown above, the creation of an unphysical massive particle (usually with mass ~1e+4) seems to be correlated with whether the first created particle is able to accrete. Strangely, at any resolution above 320+1, the particle accretes significantly more slowly (usually at roughly 1e-4 times the normal rate).
The 80+4 run provides 256 zones per cloud radius; while not ideal, it's decently fast for a high-res simulation. We can go with 80+4, or do some more testing this week on 160 base resolution + 3 levels of refinement, for the conference poster. For a paper, I think we should still shoot for 512 zones per cloud radius, because that seems to be what most recent papers on shock-cloud interaction have been using. Either do 80+5 or 160+4 (slow), or figure out what's wrong with 320+3.
We need at least three runs: Mach = 1.5, Mach = 3.5, Mach = 1.5 rotating, these should finish in about a week.
I will be out of town for a short break from this Thursday till Sunday.
Journal Club Meeting 0611
Agenda
The Responses of Magnetically Sub-Critical Cores to Shocks http://arxiv.org/abs/1305.1716
Cross sections for tidal capture binary formation and stellar merger http://articles.adsabs.harvard.edu/full/1986ApJ...310..176L
Meeting Update Jun 10
TSF We are close to using up our CPU hours on Stampede. Martin suggested we move the runs elsewhere. The current plan is to debug the particle refinement bug on Bluestreak.
Papers Finished writing the MUIV proceedings.
NLUF Finished implementing the Spitzer resistivity. We should be good to start running the experiment settings. One problem is to find a good place for the runs.
Travel May need to go to NYC again to submit some supporting material.
Meeting Update Jun 3
Conference Did the VISA interview in NYC last weekend. We need to order some poster tubes if we don't have some yet ($10 on Amazon).
Star Formation Jonathan did a quick fix for the patched particle refinement. It looks better now for the uniform collapse problem: only 12 particles are created and no more are added afterwards (similar behavior to the Federrath accretion). One remaining problem is that the particles are not symmetric, unlike in the Federrath case.
movie
Using this fix, I was able to rerun the TSF problem with Krumholz accretion. Currently at frame 54:
One problem I found is that similar to the uniform collapse, it creates two particles both of which are off-centered (only slightly). The position of particles are (x=1.3, y=1.5, z=1.5 being the center):
I also noticed the run being much slower than the previous Federrath run with the same resolution setting.
NLUF Completed the implementation of Spitzer resistivity in the explicit solver. One concern is that the time stepping will be dominated by the highest resistivity in the grid, i.e. inside the clump, where the temperature is 100 times colder, resulting in a resistivity that is 1000 times higher. Therefore even if the environment has a high magnetic Reynolds number, the overall time step will likely be very small.
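The time-step worry follows directly from the Spitzer scaling eta ∝ T^(-3/2); a one-line check of the clump-vs-ambient factor quoted above:

```python
def spitzer_eta_ratio(T_clump, T_ambient):
    """Spitzer resistivity scales as T**-1.5, so the clump-to-ambient
    resistivity ratio is (T_clump/T_ambient)**-1.5."""
    return (T_clump / T_ambient) ** -1.5

ratio = spitzer_eta_ratio(1.0, 100.0)   # clump 100x colder -> 1000x more resistive
```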
Meeting Update May 28
Need a letter demonstrating the covering of trip expense to Germany by tomorrow.
TSF Checked one thing we discussed last week: the initial cloud mass is about 1.15 solar masses. In the Krumholz runs, the final mass of the star is about 0.95 solar masses, as indicated in my previous blog post. The star thus takes about 82% of the initial cloud mass; 3% of the star mass is wind material, so about 80% of the initial cloud material ends up in the star.
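The bookkeeping above, spelled out (numbers from this post):

```python
cloud_mass = 1.15        # initial cloud mass, solar masses
star_mass = 0.95         # final star mass, solar masses
wind_fraction = 0.03     # fraction of the star mass that is wind material

star_fraction = star_mass / cloud_mass                        # ~0.83 ("about 82%")
cloud_in_star = star_mass * (1 - wind_fraction) / cloud_mass  # ~0.80
```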
Rotational TSF Got the rotating TSF set up and running. The parameter is similar to those defined in Banerjee:
What we have:
A1 | 3.78e-13 | 0.1 | 0.0081 |
A2 | 7.56e-13 | 0.2 | 0.032 |
O1 | 3.78e-13 | 0.1 | 0.0081 |
O2 | 7.56e-13 | 0.2 | 0.032 |
A1 is currently running. I think we should at least have A1 and O1 (preferably high res) finished before the conference. After that we can focus on the rest and writing the paper. MHD supported cloud likely require a different cloud setup so that should be another paper.
NLUF From the discussion this morning, it seems that being able to simulate the local change of resistivity, at least to a certain degree (via an analytic model), is important. The expected parameters I got:
Material: Argon.
Three problems:
- the preshock material is cold and less conductive, while the postshock material has a higher temperature and is thus more conductive; this is not currently simulated.
- the conductivity inside the ball tends to be low because of the low temperature; however, it is the field there that matters when shocked. If the field inside the ball diffuses quickly into the ambient as a result of the high resistivity, we may not get the desired behavior.
- Whether to look down the axis or orthogonal to the axis.
Journal Club Meeting 0521
Agenda
- Review Tom Jones et al. 1996:
http://www.pas.rochester.edu/~shuleli/0521/34308.web.pdf
- Rethink the sink particle creation. References:
http://adsabs.harvard.edu/abs/2004ApJ...611..399K
http://adsabs.harvard.edu/abs/2010ApJ...713..269F
https://astrobear.pas.rochester.edu/trac/astrobear/blog/johannjc03062012
- Look at more frames from my triggered collapse sim:
Full length movie:
http://www.pas.rochester.edu/~shuleli/0521/m1k.gif
Star Mass (solar mass):
Accretion Rate (g/s):
Mixing Ratio (one in a thousand):
- Review Erica's BE collapse problem.
Meeting Update May 20
Restarted the triggered star formation run after finding that the original run had created more than one zero-mass particle. After tweaking the code, it almost works now, at least for the AMR runs. The following is the Mach = 1.5 run with Krumholz accretion:
Wrote a small cpp program to strip the sink particle data as curve files and plot:
Interestingly, the mass of the star is very close to that of the initial cloud, I have not yet examined how much wind material is contained inside the star.
http://www.pas.rochester.edu/~shuleli/0520/m1k.gif
Later I found some weird behavior in the uniform collapse problem. Both the Federrath and Krumholz accretion create more particles than expected:
http://www.pas.rochester.edu/~shuleli/0520/ucf.gif
http://www.pas.rochester.edu/~shuleli/0520/uck.gif
This is also observable if we turn off adaptive refining and use the buffered zone refining instead. The density distribution is slightly different:
Options are:
(1) Complete the AMR runs that look good.
(2) Find what's wrong with buffered refining, and focus on high res runs.
Meeting Update 0509
Most of last week I was working on my graph spec project.
http://www.pas.rochester.edu/~shuleli/0331/magneticbeta.png
http://www.pas.rochester.edu/~shuleli/0331/magneticbeta2.png
http://www.pas.rochester.edu/~shuleli/0331/magneticbeta3.png
I will be working on the NLUF project, new calculations based on the previous simulations (magnetic beta amplification vs compression ratio) to check the new design.
Meeting Update Apr.30
- Star Formation: Had a two-hour discussion with Eric last Friday. There were some good new ideas, and in the end we agreed that the observed phenomenon is possible but needs further verification. One idea we discussed is that the explosions on both sides of the bowshock (both expansions have a halo-like feature, but the one on the right-hand side is delayed) could be triggered by depressurization of two different regions. The left-hand-side expansion can be triggered by the runaway outer halo: an approximate calculation suggests the spherical expansion is possible, since the right-to-left flow produced by colliding flows that miss the star has a ram pressure similar to the incoming wind's but a much higher thermal pressure. The right-hand-side expansion can be triggered by a region not centered on the star but displaced to the right, where compression is highest (converging flow coming from the back of the clump, plus the incoming colliding flow attracted by the core that misses the star). I am currently making new plots according to our discussion.
- Krumholz accretion: fixed how the particle is initialized (the original Krumholz accretion forms zero-mass particles, and the accretion rate depends on the particle mass). A new restart run of Mach = 1.5 triggered star formation with the fixed Krumholz accretion has been submitted.
- Resistive Paper: finished the By Rm = 10 run; one run left (Bx Rm = 10). The paper will feature Rm = infinity, 300, 30, 10 runs for both Bx and By.
Meeting Update 0416
High Res TSF Run:
Movie:
http://www.pas.rochester.edu/~shuleli/0331/set1cut.gif
Meeting Update 0408
Triggered Star Formation:
Particle plane vertical slice movie
3 slice movie
clump tracer movie
wind tracer movie
Krumholz Accretion:
wrote the code to turn on Krumholz accretion when particle is formed. Mach 3.5 and 2.5 jobs resubmitted to Stampede. Results this week.
Star Formation Jamboree:
Registered and submitted the abstract. The talk (if there is one) will be on the recent triggered star formation stuff.
Meeting Update Apr.1
TSF TSF with Mach 1.5 shock:
Movie
TSF with Mach 2.5 shock:
Movie
TSF with Mach 3.6 shock:
Movie
Particle Mass Evolution:
Mach 5 shock cannot form a star.
Some questions not answered in Boss: 1. How does the timing of star formation depend on the incoming shock speed? (Boss focuses on whether formation is possible under certain conditions.) From the three simulations above, there seems to be an inverse linear correlation between shock speed and collapse time. 2. What is the end mass of the star after a long time? There seems to be a saturation of the end mass when the incoming shock is slow (Mach 1.5 and 2.5), and the end mass again seems to be inversely correlated with the incoming shock speed when it is not saturated. The behavior can be better characterized with more simulations in the parameter space (Mach 3 and 4). I think we can write a paper by simply characterizing these behaviors and maybe devising an approximate model…
Weird Visit behavior?
So when I tried to plot the contours and the 3-slice, I got strange things… In the image output, the contour does not show transparency correctly, and the 3-slice outputs a yellow-colored image… The plots look fine in the viewing window. I tried all versions installed on the local machine and got the same thing.
Conferences Just got the email from MU4 on the proceedings, I started revising the proceeding according to their requirements. Star formation local meeting: I'd like to do a talk on recent stuff.
Column density
Mach 1.5:
Mach 1.5 Movie
Mach 3.6:
Mach 3.6 Movie
Mach 3.6 with beta = 4:
Mach 3.6 with beta = 4 Movie
Magnetized TSF
Shock triggered star formation comparison: Mach 1.5 vs Mach 3.61
Here's a comparison between Mach 1.5 shock-triggered star formation (left column) and Mach 3.61 shock-triggered star formation (right column). The time slices are taken at 1, 1.2, and 2.36 cloud crushing times respectively (1 cloud crushing time = the time for the transmitted shock to travel halfway through the cloud).
The Mach 3.61 case seems to be compressed faster due to gravity accelerating the compression (a possibility: compression by a faster shock → stronger gravitational drag on the uncompressed material by the accretion core → accelerated compression). Both cases seem to demonstrate some angular momentum behavior surrounding the core, which is perhaps something to explore later. The movies:
Mach1.5
Mach3.6
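For reference, the crushing-time normalization used above can be estimated from the transmitted shock speed, v_t ≈ v_s/sqrt(chi) for a cloud/ambient density contrast chi. A sketch with illustrative numbers (the chi and R below are placeholders, not the actual run parameters):

```python
import math

def cloud_crushing_time(radius_pc, v_shock_kms, chi):
    """t_cc = (R/2) / v_transmitted, with v_transmitted ~ v_shock/sqrt(chi),
    matching the definition above (shock crosses half the cloud). Returns kyr."""
    PC_IN_KM = 3.086e13
    v_trans = v_shock_kms / math.sqrt(chi)          # transmitted shock speed, km/s
    t_sec = (radius_pc / 2.0) * PC_IN_KM / v_trans
    return t_sec / 3.156e10                         # seconds -> kyr

# e.g. a 0.05 pc cloud, 6.7 km/s shock, density contrast 100:
t_cc = cloud_crushing_time(0.05, 6.7, 100.0)        # a few tens of kyr
```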
Fixed temperature
Found a place in the code that was not setting the correct ambient temperature, which in turn changed the wind speed. After rerunning the Mach 3.6 shock (corresponding to 6.7 km/s) last night on Stampede, here's what I found:
The velocity has the particle velocity subtracted. The Mach 1.5 shock can now get star formation too, as expected. Movies tomorrow.
Volume rendered cloud. Formed sink particle presented by the sphere at the center.
The compressed cloud exhibits a pancake shape, and the star forms at the center of the "pancake". The edge of the "pancake" is constantly being shredded by the wind. Material from outside the "pancake" rains down onto it, but the central region has acquired angular momentum, with the rotation axis aligned with the shock propagation direction. The rotational velocity V_phi is actually much greater than the infall velocity V_r.
Meeting Update 0318
Finally I was able to run on Stampede. Here's a movie of triggered star formation with a Mach 3.6 shock (10 times postshock density) on the Boss cloud: 16 zones per cloud radius, 3 levels of AMR. It takes about 6000 CPU hours to finish.
http://www.pas.rochester.edu/~shuleli/0318/tsf.gif
The grid looks like:
I tried 5 levels (effective 512 zones, same as Boss), but it takes too long to finish under the current refinement criterion (6 hours per frame).
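The effective-resolution bookkeeping here is just base zones times two per AMR level, as the 16 → 512 numbers imply (assuming the usual factor-of-2 refinement ratio):

```python
def effective_zones(base_zones, levels, refine_factor=2):
    """Zones per cloud radius at the finest AMR level,
    assuming a refinement ratio of 2 per level."""
    return base_zones * refine_factor ** levels

three_levels = effective_zones(16, 3)   # 128 zones per radius
five_levels = effective_zones(16, 5)    # 512 zones per radius, same as Boss
```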
I'm currently working on testing the new implicit solver with modifications Jonathan and I did 2 weeks ago. The tested newer version should roll out sometime this week.
Meeting Update Mar 11
- To prepare for implementing FLD, I walked Jonathan through the implicit diffusion solver / subcycling code in my latest development version of AstroBEAR. We modified some of the communication routines and, following our discussion, changed how the boundary temperature and temperature derivatives are ghosted to a more reasonable scheme. This may have something to do with the AMR noise we saw in previous tests (see earlier blog post). After the modifications, I found NaNs while testing the code, so I have been working on ensuring the correctness of the modified code. Hopefully it will behave better on the ablative RT problems.
- TSF: working on getting Stampede to work. I was able to compile the code successfully, but unable to run it on either the development or the normal queue. The error message has been attached to ticket 273.
Meeting Update Mar.4
TSF: For GLMYG setup, we can derive from the information in the paper that mu = 1.0357. The following profile can be derived too:
region | density (g/cc) | parts/cc | temperature (K) |
center | 9.8e-20 | 5.7e+4 | 20 |
edge | 6.831e-21 | 3.973e+3 | 20 |
ambient | 1.725e-22 | 1.00328e+2 | 792 |
For isothermal, crossing time = 715 kyr, shock crossing time = 7.15 kyr. We study the cloud stability in the following three settings:
isothermal:
http://www.pas.rochester.edu/~shuleli/0304/gn_iso.gif
adiabatic (initialized with the isothermal profile):
http://www.pas.rochester.edu/~shuleli/0304/gn_addi.gif
with DM cooling (initialized by isothermal profile):
http://www.pas.rochester.edu/~shuleli/0304/gn_cool.gif
Boss setup, Mach = 3:
http://www.pas.rochester.edu/~shuleli/0304/bo_mach3.gif
Boss setup, Mach = 5:
http://www.pas.rochester.edu/~shuleli/0304/bo_mach5.gif
Boss setup, Mach = 10:
http://www.pas.rochester.edu/~shuleli/0304/bo_mach10.gif
Next steps:
- run GLMYG setup with wind in low res setting.
- move to Stampede to finish the high-resolution runs. I got access to Stampede last week, but there seems to be a permission problem when compiling the code.
FLD: I'm currently working on the implicit code that will be handed to Jonathan to do flux limited diffusion.
LLE: You can now view the LLE presentation about resistive clumps online:
http://www.pas.rochester.edu/~shuleli/0304/lle3113.pptx
(This is written in PowerPoint 2013. For users of PowerPoint 2007 or newer, the pptx version is recommended. For OpenOffice and Keynote users, a converted ppt version will be uploaded soon.)
Meeting Update Feb.25
TSF test run with Boss' BE sphere:
When the incoming shock has Mach 1, the ram pressure from the shock matches the thermal pressure at the cloud edge, since pressure remains the same across an isothermal shock. We get a slow flow of material onto the sphere. The cloud density evolution in the simulation:
http://www.pas.rochester.edu/~shuleli/tsf/tsf02.gif
When the incoming shock has Mach 3.16, the ram pressure is about 100 times the thermal pressure at the cloud edge. We get a shock speed of about 5.78 km/s. This shock should theoretically trigger the cloud collapse as indicated in Boss 2010, but there should be very little injection. The cloud density evolution:
http://www.pas.rochester.edu/~shuleli/tsf/tsf01.gif
When the incoming shock has Mach 20, the ram pressure is about 1.6e+5 times the thermal pressure at the cloud edge. In this case the shock is about 37 km/s, which falls into the triggered-injection window of Boss 2010.
movie coming soon
I'll do the high res version of the above runs on stampede this week.
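The ram-pressure ratios quoted above (Mach 3.16 → ~100, Mach 20 → ~1.6e5) are consistent with an M^4 scaling for an isothermal shock: the post-shock density is ~M^2 times the preshock value and moves at ~M times the sound speed. This is my reading of the numbers, not a formula taken from Boss; a quick check:

```python
def ram_to_thermal(mach):
    """Isothermal shock: post-shock density ~ M**2 * rho0 moving at ~M*cs,
    so rho_post*v**2 / (rho0*cs**2) scales as M**4."""
    return mach ** 4

r1 = ram_to_thermal(3.16)   # close to the "about 100" quoted for Mach 3.16
r2 = ram_to_thermal(20.0)   # 1.6e5, as quoted for Mach 20
```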
Gritschneder's setup: it seems the BE module only treats the isothermal case, so I need some input from Jonathan or Erica.
Papers: finished writing the MFU-IV draft.
Update Feb.19
TSF I set up a set of simulations with the Boss cloud perturbed by an incoming wind:
- Pressure matched: no density jump, Mach = 1 (this is the case where the ram pressure from the wind equals the thermal pressure at the edge of the cloud)
- Density matched: the density of the wind matches that of the edge of the cloud, Mach = 1
- Boss condition: same as the density matched case but Mach = 10
- Magnetic Boss condition: same as the Boss condition but with a vertical magnetic field embedded in the domain (beta ~ 4)
Some of them should finish this week. I have also been going through some of the literature. The Gritschneder 2012 paper is interesting: they studied the triggered collapse of a marginally stable BE cloud with 10 solar masses and a radius of 0.21 pc, central density about 10^4, with DM cooling. The shock velocity is considerably higher than that of Boss 2010, ranging from 100 km/s to 200 km/s, which is far outside the velocity window Boss 2010 proposed for successful injection and collapse (20~70 km/s). This raises the question of how this "successful window" depends on the cloud radius and mass, for which I have not yet seen a complete set of simulations or modeling. They attribute the observed abundance of Al26 to shock injection, which is similar to Boss' work.
http://arxiv.org/abs/1111.0012
The drawbacks are that these simulations are in 2D, no magnetic field or rotation, which they mentioned in the text:
" As this is an idealized case and the main focus of this study is the enrichment with SRLs, we don’t take any rotational or turbulent motion (e.g. Walch et al. 2010) inside the cloud core into account. While this initial rotation might be important in the later phases of the collapse - especially
in determining the final disk size - its influence will be very small in the initial phase here, as the rotational velocities are orders of magnitude smaller than the shock speeds."
"For example, the core could be initially rotating. We assume here that the high shock velocities dominate over the rotation, but this should be further investigated. Furthermore, we neglect magnetic fields. They are of course present and should be involved in various processes, e.g. the details of the shock front and especially later on in the disk formation and the removement of angular momentum."
It would be easy to set up simulations using their settings with weak rotation (beta = 0.01~0.1) and weak-to-medium field strength (beta = 4~100) in 3D. One challenge is replicating their resolution in 3D, which is effectively 512 zones per cloud radius; this needs an effective resolution of roughly 1200*1200*6000 (self-gravity with reflecting boundary conditions?).
Also been going through some of the rotating/magnetic BE cloud papers:
http://arxiv.org/abs/0901.2127
http://arxiv.org/abs/astro-ph/0408277
http://adsabs.harvard.edu/abs/1993ApJ...417..220G
http://adsabs.harvard.edu/abs/1993ApJ...417..243G
Papers I have been writing up the draft for the MFU-IV proceeding.
Meeting Update Jan.28
- Did magneto-thermal instability runs to test implicit solver in astrobear. The magnetic pressure profile at 1 computational time (corresponding to 200 sound crossing time) is shown below.
- Made resistivity documentation page. The final section, "how to turn on resistivity" could be moved into user's guide. I'll make similar pages for viscosity and thermal conduction this week.
- Done revising the poster for the Cancun meeting.
Meeting Update Jan.21
Made movies for Gianluca's visit. Resistive images Here's a list for paper figures:
- density cut-through with field lines (black&white and color)
- field pressure cut-through (normalized to the initial field pressure therefore tells us amplification)
- total field energy evolution line plot (tells whether the stretching amplification dominates or field diffusion dominates?)
- bright spot: maximum field pressure amplification line plot (corresponding to Gianluca's Zeeman measuring idea)
Submitted a set of jobs testing fixed-grid thermal conduction in 3D by running the 3D RT instability. The cases and domain resolutions are:
non-conductive | weakly conductive | strongly conductive |
32*32*160 | 32*32*160 | 32*32*160 |
64*64*320 | 64*64*320 | 64*64*320 |
128*128*640 | 128*128*640 | 128*128*640 |
Each simulation contains 1 wavelength in the x and y directions and 5 wavelengths in the z direction. This set of simulations will be run on Bluehive on 64 cores.
Finished the Cancun poster:
poster
They informed me that I am on the waiting list for an oral presentation in case someone withdraws, so I also prepared slides for the talk. The talk is likely to be twice as long as the HEDLA one, so I added more material from the clump paper (instabilities, magnetic field and mixing ratio evolution, mathematical modeling).
Meeting Update Jan.14
Not much to update since I was visiting relatives/traveling most of the time last week. Here's a recap on where I'm at for the ongoing projects.
Heat conduction solver: The conduction-front and RT (not quantitative) tests are done, and 3D compatibility has been added. Next up is a 3D RT test to check the additions. After that, our collaborators can step in and play with the code on their own problems. A remaining task is adding 1D compatibility (probably not relevant for the collaboration, but good to have). I will also touch base with Jonathan about the heat-flux conservation routine he suggested I add.
Triggered star formation: Last time I got a stable cloud with 1/100 pc in radius and 1 solar mass. I will submit some jobs with this cloud + shock to bluestreak this week.
Resistive clump: I'm currently preparing the first draft. For this week, I will make a short presentation from the draft (several pages pdf) for Gianluca's visit.
Meeting Update Dec.18
Triggered Star Formation Found a place in my problem module where the clump radius was hard-coded as a number rather than taken by reference. This caused a discrepancy between the clump radius in the problem module and the one in the problem.data file I used to run the simulations. Upon fixing this bug, I was able to get different distributions matching Boss' numbers by tuning xi, the clump radius, and the central density (previously I always got one setup, because changes to the clump radius were only in problem.data and not reflected in the problem module). In this way I can get fairly stable setups with the same density distribution and temperature inside the clump as Boss', although I still have to artificially increase or decrease the ambient density to their values. The following is one such setting at 1/10 of a clump crossing time (this setup is confirmed to hold for at least one crossing time). The ambient density is reduced by a factor of 100. The xi in this case is about 6.78, so it is still in the "unstable" regime. Currently working on preliminary runs with shocks.
2D:
http://www.pas.rochester.edu/~shuleli/meeting_1210/bss.gif
Cut:
http://www.pas.rochester.edu/~shuleli/meeting_1210/bsc.gif
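The xi ≈ 6.78 quoted above can be checked against the Bonnor-Ebert critical value xi_crit ≈ 6.45 (center-to-edge density contrast ≈ 14.1) with a minimal isothermal Lane-Emden integrator. This is a standalone sketch, not the BE module actually used in the code:

```python
import math

def be_density_contrast(xi_max, n_steps=10000):
    """Integrate the isothermal Lane-Emden equation
        psi'' + (2/xi)*psi' = exp(-psi),  psi(0) = psi'(0) = 0,
    with classic RK4, starting from the series expansion near the center.
    Returns the center-to-edge density contrast rho_c/rho(xi_max) = exp(psi)."""
    xi = 1e-3
    psi = xi*xi/6.0          # series: psi ~ xi^2/6 near the origin
    dpsi = xi/3.0
    h = (xi_max - xi)/n_steps

    def deriv(x, p, dp):
        return dp, math.exp(-p) - 2.0*dp/x

    for _ in range(n_steps):
        k1p, k1d = deriv(xi, psi, dpsi)
        k2p, k2d = deriv(xi + h/2, psi + h/2*k1p, dpsi + h/2*k1d)
        k3p, k3d = deriv(xi + h/2, psi + h/2*k2p, dpsi + h/2*k2d)
        k4p, k4d = deriv(xi + h, psi + h*k3p, dpsi + h*k3d)
        psi += h/6*(k1p + 2*k2p + 2*k3p + k4p)
        dpsi += h/6*(k1d + 2*k2d + 2*k3d + k4d)
        xi += h
    return math.exp(psi)

contrast = be_density_contrast(6.78)   # past the critical xi ~ 6.45
```

At xi = 6.45 this returns a contrast of about 14, the familiar critical value; at 6.78 the contrast is somewhat higher, consistent with the cloud sitting in the unstable regime.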
LLE stuff Ported the implicit solver from a year-old version to the latest gold version. Got the code to compile and run on fixed grids; it still needs work in AMR, plus testing (considering the self-similar heat conduction test as a 1D quantitative test). A future minor change could be making self-gravity and implicit heat conduction work together (currently you cannot turn on both at the same time). I expect to have a fully functioning version by our next meeting.
Meeting Update Dec.10
cloud stability Using my setup to run the cloud stability for one crossing time:
http://www.pas.rochester.edu/~shuleli/meeting_1210/mysetup.gif
We can see that the cloud is breathing, with no significant expansion or contraction. There is a movie of the cut-through density here:
http://www.pas.rochester.edu/~shuleli/meeting_1210/mysetupcut.gif
This setup is obtained by feeding Boss 2010's central and ambient densities to the BE module in the code. The problem is that the density at the edge of the clump does not match Boss' number: we get a density contrast of 1000 at the cloud edge (radius = 0.058 pc) where they have 100. Also, our temperature is 124 K inside the cloud while theirs is 10 K.
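One way to see where the temperature discrepancy can come from: for an isothermal BE sphere, fixing the central density, the physical radius and the dimensionless radius xi also fixes the sound speed, via R = c_s xi / sqrt(4 pi G rho_c). A back-of-envelope check with Boss' numbers (my own estimate; the mean molecular weight mu = 2.33 is my assumption, not a number from the post):

```python
import math

# Hedged back-of-envelope: the temperature implied by fixing rho_c,
# the physical radius R and the dimensionless radius xi of an
# isothermal Bonnor-Ebert sphere:
#   c_s^2 = 4*pi*G*rho_c*R^2 / xi^2,   T = mu*m_H*c_s^2 / k_B.
# mu = 2.33 (molecular gas) is an assumption for illustration.

G   = 6.674e-8        # cm^3 g^-1 s^-2
m_H = 1.6726e-24      # g
k_B = 1.3807e-16      # erg/K
pc  = 3.0857e18       # cm

def be_temperature(rho_c, R_pc, xi, mu=2.33):
    R = R_pc * pc
    cs2 = 4.0 * math.pi * G * rho_c * R**2 / xi**2
    return mu * m_H * cs2 / k_B
```

With Boss' rho_c = 6.2e-19 g/cc, R = 0.058 pc and xi = 6.78 this gives roughly 10 K, i.e. Boss' value; since T scales as R^2/xi^2 at fixed rho_c, a 124 K isothermal temperature corresponds to a different effective R/xi combination in the module, which may be where the discrepancy enters.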
If we increase the cloud radius to 4 times the original, the edge density drops to Boss' value, but the cloud then seems to collapse in under 1 crossing time:
http://www.pas.rochester.edu/~shuleli/meeting_1210/myset2.gif
To match their profile exactly, I tried increasing the density contrast (making the profile steeper). Doing this makes the implied ambient density 10 times lower than the original (1e-23 g/cc vs 1e-22 g/cc). I pad the ambient with Boss' ambient density and match the temperature at the cloud edge. This results in a profile exactly the same as theirs, with the same in-cloud temperature of 10 K. However, this setup collapses in under 1/10 of a crossing time (about 1 shock crossing time):
http://www.pas.rochester.edu/~shuleli/meeting_1210/bosssetup.gif
Here is a movie for the cut-through density:
http://www.pas.rochester.edu/~shuleli/meeting_1210/bosscut.gif
If we do not pad the ambient with the original value and just use what the code calculates, we get:
http://www.pas.rochester.edu/~shuleli/meeting_1210/boss2.gif
At this point, I think using our own setup should be fine, since it produces a fairly stable cloud over a crossing time scale. However, since the density at the edge of the cloud is about 1000 times the ambient and only 1.5 times below the central density, the incoming wind will have a peak density close to the cloud's central density if we still want to match the wind density to the cloud edge density. I'm not sure how important this condition is to the result. I think we need to run some wind-cloud interaction simulations next to work out how to set up the wind.
Papers Finished the revised version of the clump paper and the referee response. The only small issue is that in comments 5 and 6 the referee suggested restructuring the material, which we did not do in the revision (we argued instead that our current organization is best).
http://www.pas.rochester.edu/~shuleli/meeting_1210/paper_shule_1210.pdf
http://www.pas.rochester.edu/~shuleli/meeting_1210/referee_response_1210.pdf
Meeting Update Dec.3
Did the visa interview last week and passed. The internet connection here is fairly good, so I can work as usual. I'll be back in about 4 weeks.
Triggered Star Formation: Cloud Stability Did the Boss setup stability test with pressure matching on the boundary. I changed xi so that the central density and the ambient density match their numbers exactly (6.2e-19 and 3.6e-22). One thing I found is that with this setup, our code gives a different isothermal temperature (124 K) compared to theirs (10 K). Also, at the clump radius (0.058 pc), the density jump appears to be a factor of 1000 rather than 100. The following movie is of a run to about 1.3e+5 years, which is roughly the shock crossing time scale (1e+5 yrs). The sound crossing time is 1e+6 yrs, so we need to wait a bit longer to see the long-term evolution.
http://www.pas.rochester.edu/~shuleli/collapse/bss1.gif
Resistive Paper Been writing the introduction section. I'm currently going through the previous literature so that I can correctly place the paper's goal.
Clump Paper Will be sending Adam and Eric the final revision this week.
HEDLA paper was accepted last week.
Registered for the Cancun meeting in February. I used the previous clump paper's abstract.
Meeting Update Nov. 19
Working on the triggered star formation project. By changing the ambient density, collapse can be triggered for a supercritical BE sphere. The BE sphere has a cut at 0.15 pc; the density at the outer edge is always lower than the BE profile. See the following movies:
20 times denser ambient:
http://www.pas.rochester.edu/~shuleli/collapse/heavy20_2d.gif
line cut:
http://www.pas.rochester.edu/~shuleli/collapse/heavy20_line.gif
10 times denser ambient:
http://www.pas.rochester.edu/~shuleli/collapse/heavy10_2d.gif
For any ambient density enhancement factor below about 8, the sphere simply dissipates:
5 times denser ambient:
http://www.pas.rochester.edu/~shuleli/collapse/heavy5_2d.gif
normal ambient:
http://www.pas.rochester.edu/~shuleli/collapse/normal.gif
normal ambient line cut:
http://www.pas.rochester.edu/~shuleli/collapse/normal_line.gif
ambient 5 times lighter:
http://www.pas.rochester.edu/~shuleli/collapse/light5.gif
ambient 10 times lighter:
http://www.pas.rochester.edu/~shuleli/collapse/light10.gif
Currently running the collapse setup with the wind running through it.
Meeting Update Nov. 12
triggered star formation Finished some runs of a pure BE sphere with the Boss setup and no shock, to see how the BE sphere behaves. Surprisingly, all of them expand. The original Boss 2010 setup cuts off the BE profile at 0.058 pc, where the density at the outer edge of the cloud is 10 times the ambient. Using the shock crossing time as the time scale, we see the following expansion:
http://www.pas.rochester.edu/~shuleli/clump_paper/cut_small.gif
Without the cut-off, we can zoom out and let the density at the outer edge relax to the ambient value, resulting in a much larger cloud. The cloud then holds up better, although it still expands:
http://www.pas.rochester.edu/~shuleli/clump_paper/be_large.gif
Or we can set the ambient density higher than what's suggested in Boss' paper:
http://www.pas.rochester.edu/~shuleli/clump_paper/zoom_linear.gif
In their 2010 paper, they state that in the 2.5D isothermal case the BE sphere "does not collapse, but instead oscillates around its initial equilibrium structure, over a time period of at least 1e+6 yr". 1e+6 yr is about 10 shock crossing times in our simulations. I'm running a simulation with a wind as in the 2010 paper and a cloud as in the first movie.
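A plausible reading of why the truncated sphere relaxes outward (my inference, not something verified against the runs): cutting the BE profile where the edge density is 10 times the ambient, at a comparable temperature, leaves an order-of-magnitude pressure jump at the surface. The trivial check, assuming an ideal gas:

```python
# Sketch: pressure imbalance at a truncated BE edge, assuming an ideal
# gas and (as an illustration) comparable cloud and ambient
# temperatures.  The factor of 10 is the contrast quoted in the post,
# not simulation output.

def pressure_ratio(rho_edge_over_amb, T_edge_over_amb=1.0):
    # ideal gas: P ~ rho * T
    return rho_edge_over_amb * T_edge_over_amb

ratio = pressure_ratio(10.0)   # edge overpressured by ~10x
```

Unless the ambient temperature is raised by the same factor (or the edge padded, as above) to restore balance, the cut surface starts far from pressure equilibrium and relaxes on a sound crossing time.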
resistive clumps Finished some of the unfinished strong magnetic field runs (beta = 1). We now have weakly and strongly magnetized, parallel and perpendicular runs with Reynolds numbers 100, 1000 and infinity. It seems I also have data from a weak parallel Reynolds number = 10 run done on Kraken, but I haven't had time to look at it. I've done the visualizations for all the clump density plots. As an example, see the beta = 4 density comparison at 2.5 crushing times:
http://www.pas.rochester.edu/~shuleli/clump_paper/Resistive_MHDClump.png
papers I revised the previous paper and the referee response according to Adam's comments. I'll give them a final read tonight and send them to Adam and Eric.
Meeting Update 1028
Triggered Star Formation: Implemented cooling function described in Neufeld & Kaufman 1993:
http://articles.adsabs.harvard.edu//full/1993ApJ...418..263N/0000265.000.html
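For context, a tabulated cooling curve like this typically enters an explicit code as a subcycled source term de/dt = -n^2 Lambda(T) with a temperature floor. A minimal sketch with a placeholder table (the Lambda values below are invented for illustration, not Neufeld & Kaufman's rates):

```python
import math

# Sketch of a subcycled optically-thin cooling source term,
#   de/dt = -n^2 * Lambda(T),
# with log-log interpolation on a *placeholder* table.  The Lambda
# values are invented for illustration; a real run would read the
# Neufeld & Kaufman (1993) rates.

T_tab   = [10.0, 100.0, 1000.0, 10000.0]   # K
Lam_tab = [1e-27, 1e-26, 1e-25, 1e-24]     # erg cm^3/s (placeholder)

def lam(T):
    T = min(max(T, T_tab[0]), T_tab[-1])
    for i in range(len(T_tab) - 1):
        if T <= T_tab[i + 1]:
            f = (math.log(T) - math.log(T_tab[i])) / \
                (math.log(T_tab[i + 1]) - math.log(T_tab[i]))
            return math.exp(math.log(Lam_tab[i]) * (1 - f)
                            + math.log(Lam_tab[i + 1]) * f)

def cool(e, n, dt, T_floor=10.0, k_B=1.3807e-16, gamma=5.0 / 3.0):
    """Advance internal energy density e (erg/cc) over dt, subcycling
    so that no substep removes more than 10% of e; stop at the floor."""
    t = 0.0
    while t < dt:
        T = (gamma - 1.0) * e / (n * k_B)
        if T <= T_floor:
            break
        rate = n * n * lam(T)              # erg cm^-3 s^-1
        sub = min(dt - t, 0.1 * e / rate)
        e -= rate * sub
        t += sub
    return e
```

The subcycle limiter plays the same role as switching the cooling off at the floor temperature: gas asymptotes to the floor instead of overshooting it in one large step.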
Tried to rerun the problem on bluegene, but the job always terminates with the following message:
RestoreRelationships::StrictFindBackupNode() error: node not associated.
StrictFindBackupNode(box%MPI_id = 596) failed on box [ 49 27 11 96 28 12]
StrictFindBackupNode(box%MPI_id = 550) failed on box [ 49 11 13 96 12 14]
StrictFindBackupNode(box%MPI_id = 578) failed on box [ 49 29 9 96 30 10]
Clump Paper: Revised the paper based on referee's comment, wrote up the response to the referee:
Revised paper:
http://www.pas.rochester.edu/~shuleli/1028/paper_shule_0823.pdf
Response to referee:
http://www.pas.rochester.edu/~shuleli/1028/referee_response.pdf
Also wrote up a referee response to the HEDLA proceeding.
Resistive Paper: Wrote up the "Method" and "Simulation Setup" section.
Meeting Update 1007
Triggered Star Formation I've been running simulations according to Boss' setup:
http://www.pas.rochester.edu/~shuleli/triggered/bosspaper2.pdf
I'll post three new movies Monday morning.
Qual I'm currently making the ppt, 15 pages done (total 45 pages?).
Papers Done with the revision of HEDLA proceeding.
http://www.pas.rochester.edu/~shuleli/triggered/shuleconf2012_AF.pdf
Got the referee's report for the ApJ paper, which is pretty positive. The following points require some thought (the remaining points are trivial to address):
2) On page 7 they state that they have used the Dalgarno and McCray (1972) cooling function. This is a bit old: it would have been more realistic to use Wolfire et al. (1994) to get a cooling function. They could also have included heating to get an equilibrium temperature (as in Van Loo et al. MNRAS, 406, 1260, 2010), rather than just switching off the cooling at 50K.
3) The choice of parameters for the cloud on page 8 seems somewhat eccentric. A density of 100 and a temperature of 100 is fine since this is roughly what Wolfire et al. would predict for thermal equilibrium, but the size seems very small. The translucent clumps in molecular clouds have these properties, but they are much bigger (~1 pc instead of 150 a.u.). Objects this small are only interesting if they have much higher densities.
The cloud size only affects the ratio of the cooling time to the other timescales. A more realistic simulation would have a very short cooling time: so short that one might as well assume that the gas is in thermal equilibrium.
Meeting Update 1008
Star formation: I've been doing more triggered star formation runs. The following three runs are at Mach 3.5 with density contrast 13:
Hydro
Bx beta=4
By beta=4
The By run has not finished yet. Even these runs, with lower Mach number and higher density contrast, still do not form sink particles. For the Hydro and Bx cases, the images are at 1 crushing time, and the simulations actually run to 2 crushing times. We see that Hydro and Bx do not differ much. They stretch quite a lot at about 1 crushing time, which I hope can be mitigated by the presence of By. This week I will use Boss' original data and see if we can reproduce the collapse he found. From there, we should be able to add a magnetic field.
Qual: Wrote the brief for the qual. It was sent to the committee members last Friday.
Papers: Almost finished revising the HEDLA proceeding, except for some figures to tweak. It should be ready in 2 or 3 days.
Resistive Paper: Obtained an account on Kraken. Will try compiling and running the last two jobs for the resistive paper on it this week.
MHD sink particles: Andy's comment:
I also simply remove mass inside a volume of r ~ 4*dx.
Physically this means the gas switches from flux-frozen (or as close
to flux-frozen as the code can maintain with finite numerical
dissipation) to perfectly resistive inside the sink region. The
motivation for this is that the mass to flux ratio in young stars is
vastly larger than the mass to flux ratio inferred from observations
and in simulations from infrared dark cores. Flux freezing must break
down somewhere because observationally only a small fraction of the
magnetic flux from the core winds up in the star. Resistive effects,
like ambipolar diffusion, are conjectured to be the dominant effect
for this, operating on small scales close to the surface of the
protostar. We assume that the magnetic dissipation scale is much,
much smaller than the grid scale and therefore throw flux freezing out
the window at the smallest scale that we can actually resolve (~ 4
dx). One consequence of doing accretion this way is that flux tubes
that get pulled into the sink region become buoyant after you remove
mass from them, leading to interchange instabilities in the flow. If
the assumption of "small scale resistivity" is correct, I believe
these interchange instabilities are also correct, or as correct as one
can resolve.
http://adsabs.harvard.edu/abs/2011MNRAS.415.1228P
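On a grid, the accretion scheme Andy describes amounts to draining density inside r ~ 4*dx while leaving the cell-centered field alone, so the sink region becomes flux-rich and its flux tubes buoyant. A toy sketch of that bookkeeping (hypothetical helper, not AstroBEAR's actual sink machinery):

```python
import numpy as np

# Toy version of "accrete mass, not flux" inside a sink region of
# radius r_acc ~ 4*dx: density is drained toward a floor, while the
# magnetic field in those cells is deliberately left untouched.
# Illustrative only.

def accrete(rho, bx, by, sink_ij, dx, r_acc, frac=0.5, rho_floor=1e-3):
    ny, nx = rho.shape
    y, x = np.mgrid[0:ny, 0:nx]
    r = np.hypot((x - sink_ij[1]) * dx, (y - sink_ij[0]) * dx)
    mask = r <= r_acc
    removed = np.where(mask, frac * np.maximum(rho - rho_floor, 0.0), 0.0)
    rho = rho - removed                   # mass goes to the sink particle
    # bx, by unchanged: flux freezing is abandoned inside the sink region
    return rho, removed.sum() * dx * dx   # new density, accreted mass
```

Because mass leaves but flux stays, the mass-to-flux ratio inside the sink region drops every accretion step, which is exactly what makes the drained flux tubes buoyant and drives the interchange instabilities mentioned above.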
Meeting Update 1001
Sweet-Parker Problem
The AMR issue is mostly fixed; it was due to a typo in the code. However, the error generated on the AMR boundaries depends on resolution. Here is a comparison between a successful 2-level AMR run (top) and a higher resolution 1-level AMR run (bottom). As one can see, the higher resolution run generates errors on the inner boundaries.
Here is a movie for a successful 2 level AMR run:
http://www.pas.rochester.edu/~shuleli/multi_test/spmovie.gif
Magnetic Island Generation
A magnetic field in a sheared pinch configuration can generate many resistive instabilities under perturbation.
The magnetic island formation is induced by perturbing the sheared pinch with a sinusoidal perturbation. X points and O points form due to the non-zero resistivity, and dense regions separated by the X and O points appear, hence "islands". The growth rate and the size of the islands depend on the resistivity and the strength of the perturbation.
Full movie:
http://www.pas.rochester.edu/~shuleli/multi_test/mihr.gif
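The sheared configuration behind these island runs is commonly initialized as a field reversal plus a small sinusoidal flux perturbation that seeds the X and O points. A sketch of such an initial condition, written through a flux function so the field is divergence-free by construction (parameters are illustrative, not the run's actual values):

```python
import numpy as np

# Sheared-field ("sheet pinch") initial condition with a sinusoidal
# perturbation, built from a flux function psi so that B = curl(psi z)
# is divergence-free by construction:
#   psi0 = B0 * a * ln cosh(y/a)  ->  B_x = B0 tanh(y/a),  B_y = 0
#   psi1 = eps * cos(k x) * exp(-(y/a)^2)   (seeds the X/O points)

def sheet_pinch(nx=128, ny=128, Lx=2 * np.pi, Ly=2 * np.pi,
                B0=1.0, a=0.5, eps=0.01, m=1):
    x = np.linspace(0, Lx, nx, endpoint=False)
    y = np.linspace(-Ly / 2, Ly / 2, ny, endpoint=False)
    X, Y = np.meshgrid(x, y, indexing="ij")
    k = 2 * np.pi * m / Lx
    psi = (B0 * a * np.log(np.cosh(Y / a))
           + eps * np.cos(k * X) * np.exp(-(Y / a) ** 2))
    bx = np.gradient(psi, y, axis=1)       # B_x =  dpsi/dy
    by = -np.gradient(psi, x, axis=0)      # B_y = -dpsi/dx
    return X, Y, bx, by
```

Because both field components come from the same discrete flux function, the central-difference divergence vanishes to round-off, which is convenient when the run is meant to isolate resistive (rather than div-B cleaning) effects.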
Triggered Star Formation
I completed a run with the BE sphere defined in the existing module. The density contrast of this BE sphere is 5; the shock is Mach 5, Gamma = 1, 3D.
We do see post-shock material bound by self-gravity, but no sink particles form. The dense material is eventually torn apart by the incoming flow. The full movie is here:
http://www.pas.rochester.edu/~shuleli/multi_test/tsf1.gif
This one looks promising: with a very low density contrast, the post-shock material is still well bound by gravity. Next steps: (1) reduce the shock Mach number. In the Boss paper they did Mach 1.5, 2.5 and 5; a slower shock should compress the material enough to trigger collapse while not being strong enough to fragment it. (2) Increase the density contrast so that more material is available during the compression. (3) Add cooling and magnetic fields, which can help a lot. Even in the presented setup, a By field would strongly confine the material and prevent the tearing along the x direction. (4) Reproduce Boss' result.
MHD Sink Particle
A paper uses MHD sink particle to produce magnetic field expulsion:
http://arxiv.org/abs/1105.5739
Their method:
"When the matter from a cell is added to a sink particle, the magnetic flux from the cell cannot be added to the sink as well, on both physical and numerical grounds. Physically, the addition of the magnetic flux to the sink particle would make the stellar field strength much higher than observed (which is, of course, the well known “magnetic flux problem”). Numerically, the sink particle cannot hold a large magnetic flux, which would produce a large, unbalanced magnetic force in the host cell of the sink particle. The needed decoupling of the magnetic field from matter during sink particle mass accretion makes the creation of the DEMS unavoidable in an ideal MHD simulation of the protostellar phase of star formation"
Adam, Eric and I had a short email discussion last week about their method. It should be easy to implement in the code, but is it OK for our application (triggered star formation)?
Resistive Paper
The Reynolds number = 10 runs are running now. They are very slow because the code does not yet have efficient subcycling for the multiphysics. Maybe I will move them to the TeraGrid machines to finish.
This week
Continue the projects presented above, write up the Qual brief (deadline this Friday), finish the revision for HEDLA proceeding.
Meeting Update 0924
Implemented multiphysics: resistivity, conductivity, viscosity.
The switches are in physics.data:
Gaussian profile heat conduction test Initial condition: Gaussian temperature profile, with the magnetic field along the diagonal direction. MHD heat conduction is turned on.
In the following figure (density), top left is single processor fixed grid, bottom left is 8 processors fixed grid, top right is 8 processors AMR.
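One reason a Gaussian profile makes a good conduction test is that for constant, isotropic conduction it has an exact self-similar solution to compare against: a Gaussian whose variance grows as 2*kappa*t. Below is a 1D sketch of that quantitative check (explicit update, constant kappa, my own illustration; the actual test above is the anisotropic MHD version):

```python
import math

# 1D explicit heat conduction of a Gaussian, checked against the exact
# solution T(x,t) = (s0/s(t)) * exp(-x^2 / (2 s(t)^2)),
# with s(t)^2 = s0^2 + 2*kappa*t.  Isotropic, constant kappa.

def gaussian_conduction(n=201, L=10.0, s0=0.5, kappa=1.0, t_end=0.05):
    dx = L / (n - 1)
    x = [-L / 2 + i * dx for i in range(n)]
    T = [math.exp(-xi * xi / (2 * s0 * s0)) for xi in x]
    dt = 0.4 * dx * dx / kappa           # stable: dt <= 0.5 dx^2/kappa
    t = 0.0
    while t < t_end - 1e-12:
        step = min(dt, t_end - t)
        lam = kappa * step / (dx * dx)
        T = [T[0]] + [T[i] + lam * (T[i + 1] - 2 * T[i] + T[i - 1])
                      for i in range(1, n - 1)] + [T[-1]]
        t += step
    s2 = s0 * s0 + 2 * kappa * t_end
    exact = [s0 / math.sqrt(s2) * math.exp(-xi * xi / (2 * s2)) for xi in x]
    err = max(abs(a - b) for a, b in zip(T, exact))
    return T, err
```

Since the exact answer is known at every point and time, this kind of test catches both single-processor bugs and parallel/AMR discrepancies like those shown in the figure.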
Petschek reconnection test The following figure shows the Petschek shock from a reconnection spot at the center. The colored variable is the kinetic energy in log scale; the magnetic field is illustrated by white lines.
See a movie here:
http://www.pas.rochester.edu/~shuleli/multi_test/sp1.gif
There is a bug in the AMR case. See the following figure (colored by kinetic energy): you can see instabilities at the corners of the inner boundaries.
KH instability with Braginskii viscosity. Please refer to a previous post of mine for the images.
Multiphysics is interlaced: depending on the total number of steps (maxlevel) taken, the code shuffles the order of the calls to resistivity, conductivity and viscosity. Multiple components: tested conduction + resistivity and conduction + viscosity. Both ran well.
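The interlacing can be pictured as rotating the call order with the step count, so no single operator always acts first and splitting errors don't systematically favor one term. A minimal sketch of such a scheduler (the operator names are placeholders, not the actual AstroBEAR routines):

```python
# Minimal sketch of interlaced operator splitting: rotate the order of
# the multiphysics calls each step.  The operators are placeholders
# for the real resistivity/conduction/viscosity solvers.

def interlaced_order(ops, step):
    """Return the operator call order for a given step number."""
    k = step % len(ops)
    return ops[k:] + ops[:k]

def advance(state, ops, nsteps):
    for step in range(nsteps):
        for op in interlaced_order(ops, step):
            state = op(state)
    return state
```

With ops = [resistivity, conduction, viscosity], step 0 calls them in that order, step 1 starts from conduction, step 2 from viscosity, and so on cyclically.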
Potential tests for multiphysics.
Conduction: MTI, Parker instability
Resistivity: Hartmann flow, Petschek reconnection
Viscosity: Hartmann flow, MRI
New generic data transfer interface. This will be used for anything involving subcycling in AMR. The explicit solvers mentioned above and their implicit versions will use this new interface to speed up the multiphysics calculation. The implementation is nearly finished, but I haven't had time to debug or test it. It will be the emphasis for the next few weeks and for the next release.
Papers. Clump Paper: Submitted to ApJ. Resistive paper: waiting for the last two runs to finish.
This week. Fix bugs, test multiple components further, merge the code (this version is my own 1020 branch, derived from the 1010 main repo), start writing the resistive paper draft, start writing the qual brief.
Meeting Update 0910
What I've been doing:
The Bx, By runs with R# = inf, 1000, 100 (the same as posted last week, just twice the frame rate) are almost finished. The only unfinished one is the Bx R# = 100 run, currently at 80%.
The Hydro run for the resistive paper is finished.
Implementing Subcycling and persistent communication mechanism for Astrobear 2.0
Finished 9 out of 11 test modules.
Finished the color version of the ApJ clump paper. Ready to submit, though I still have to make a low resolution version for astro-ph.
Revising the HEDLA proceeding paper. Probably will get it done this week.
Meeting Update 0828
What I've been doing:
Revised the ApJ paper: one more appendix on the initial field setup, one more paragraph on instabilities with an additional figure to support it, miscellaneous modifications according to Eric and Adam's previous suggestions, and also the referee response from the HEDLA proceeding. (Both Eric and I think it is good enough to submit, so we're waiting on Adam's feedback.)
Making test modules for the gold version: 50% done.
Adapting the multiphysics code to the gold version: currently working on the communication stuff of the subcycle control routines.
Resistive paper: low res runs are shown in a previous blog post; high res runs are currently running on bluegene. (We could have had some movies to look at, but unfortunately I found something wrong in the job that was running last Friday, so I started everything over and the new jobs only began running this Monday.) It looks like we can finish them by the end of September. Following Gianluca's advice, we will do R = 10, 100, 1000 and infinite for both the Bx and By configurations, plus one hydro case and one Mach 10 case for comparison (the jobs use the same experimental setup data as the NLUF proposal, where the Mach number is 5).
Qual: I'm planning on an October date. Eric Blackman and Chuang Ren both have time and have agreed to help. Currently waiting for Eric Mamajek's response (Dan will not be available in that time frame). I will probably start writing the brief in early September.
Resistive Clump Simulation Results (low res)
After fixing bugs (previously the code did not diffuse the field lines correctly due to a sign mistake) and adding the magnetic energy flux, the resistive solver works better now. About the energy flux:
Previously we diffused the magnetic field with the resistive term of the induction equation, but we did not explicitly treat the change of energy due to this movement of magnetic field lines: the total energy of each cell stayed the same before and after the resistive diffusion. This is not right, since the total energy per cell should change while the thermal energy stays the same. In the new code, we calculate the energy flux resulting from the resistive diffusion explicitly at the cell faces, and from this flux we update the total energy in each cell using the divergence theorem. With this calculation, a cell that the field is diffusing away from no longer gets instantly heated as before. There can be an inaccuracy from the finite grid size in this calculation; it can be measured by checking whether the thermal energy contained in a cell is conserved. At low resolution (16 zones per clump radius), this error reaches 0.3%. The solver is completely divergence free: during resistive diffusion, the magnetic field divergence is kept at zero.
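In one dimension, the update described above reduces to two conservative face-flux updates, one for the field and one for the total energy. The sketch below (constant resistivity, periodic boundaries, my own illustration rather than the production solver) conserves both the total field and the total energy to machine precision, while the Ohmic heating shows up as a global rise in thermal energy:

```python
import numpy as np

# 1D sketch of resistive diffusion with an explicit magnetic energy
# flux: B_y is diffused via face fluxes F = eta * dB/dx, and the TOTAL
# energy is updated with the face flux S = -eta * B * dB/dx, so with
# periodic BCs sum(B) and sum(E) are conserved while the thermal
# energy E - B^2/2 picks up the Ohmic heating.  Constant eta.

def resistive_step(by, etot, eta, dx, dt):
    dbdx = (np.roll(by, -1) - by) / dx        # gradient at face i+1/2
    b_face = 0.5 * (np.roll(by, -1) + by)     # field at face i+1/2
    F = eta * dbdx                            # flux of B_y
    S = -eta * b_face * dbdx                  # flux of total energy
    by_new = by + dt * (F - np.roll(F, 1)) / dx
    etot_new = etot - dt * (S - np.roll(S, 1)) / dx
    return by_new, etot_new
```

Because both updates are written as face-flux differences, the global sums are conserved exactly; the thermal energy E - B^2/2 then rises globally by exactly the resistively dissipated magnetic energy, which is the behavior the new solver is checking for.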
The next figure shows the "Bx" (magnetic field parallel to the shock normal) case at around 3 clump crushing times. The Rm labeled in the figure is the magnetic Reynolds number. The plot at the bottom is the case where the fluid is perfectly conducting (Rm infinite).
Here is a movie for it:
http://www.pas.rochester.edu/~shuleli/Resistive_Clump/denpara_lr.gif
The following figure shows the "By" (magnetic field perpendicular to the shock normal) case at around 3 clump crushing times.
Here is a movie for it:
http://www.pas.rochester.edu/~shuleli/Resistive_Clump/denperp_lr.gif
We can see that although the two field configurations still show quite a difference at Rm = 100, the difference is beginning to diminish. We will be doing production runs at high resolution (64 zones per clump radius) on the bluegene cluster in the next few weeks.
Meeting Update 0814
I've been working on the resistive MHD simulations for the next paper. Here's an image of the R_M = 100 case at low res:
We can see differences between this and the ideal MHD case (to be verified at high res).
http://www.pas.rochester.edu/~shuleli/RMag1/rm1.gif
Meeting Update 0807
Finished the second revision of the clump paper. Eric has commented on it after his second read, and his suggestions are put into the latest version. Currently waiting for Adam's second read and responses. Hopefully it can be submitted very soon.
http://www.pas.rochester.edu/~shuleli/paperpics_clump1/Paper_shule_revised.pdf
Currently working on the testing modules for the gold version. I have generated the low res reference frames for the Magnetized Clump problem (using the 3D toroidal aligned geometry), and am working on the other problem modules since I'm not familiar with some of them.
Talked with Laura about the Qual. We need:
(1) set up the committee with four people: Adam, an experimentalist, a faculty member from the same area, and a faculty member from a different area (can be from another department).
(2) set up a date, time and room.
(3) submit a 10 page handout outlining the research and goals; each faculty member should get one 2 weeks prior to the actual qual.
(4) prepare a presentation.
Meeting Update 0724
I've been porting the resistive solvers to the newest revision, and working on the scaling and setup for the new project: the resistive clump simulations. The setup of our "standard" case is as follows:
ambient density = 1 mg/cc
atomic mass = 12 (carbon)
ambient temperature = 5 ev
density contrast = 100
clump radius = 0.1
tube radius = 0.3
shock speed = 40 km/s (mach 5)
magnetic beta = 4
magnetic reynolds number
simulation time = 4 crushing times
resolution = 64 zones per clump radii
adiabatic
with two additional runs:
with cooling
A magnetic Reynolds number of 10 implies that each hydro step requires about 4 resistive steps. Therefore the simulations should not require subcycling the resistive steps, but they will be demanding.
I'm currently running a low resolution version of the runs (24 zones per clump radius) to check things, but the production runs should be doable on bluegene (the previous paper's 192-zone runs took 4 days with cooling).
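The step-count estimate comes from comparing the advective CFL step with the explicit diffusive stability limit: dt_hydro ≈ C*dx/v and dt_res ≈ dx^2/(2*eta), with eta = v*L/Rm. A quick sketch of that bookkeeping (the CFL number and the choice of length scale L are my assumptions, so the exact count only roughly reproduces the ~4 quoted above):

```python
# Bookkeeping sketch: how many explicit resistive steps fit inside one
# hydro step.  dt_hydro = C*dx/v (advective CFL), dt_res = dx^2/(2*eta)
# (1D diffusive stability), eta = v*L/Rm.  The CFL number C = 0.5 and
# the identification of L with the clump radius are assumptions here.

def resistive_steps_per_hydro(Rm, zones_per_L, cfl=0.5):
    # ratio = dt_hydro / dt_res = 2 * cfl * (L/dx) / Rm
    return 2.0 * cfl * zones_per_L / Rm
```

With Rm = 10 this gives 2.4 steps at 24 zones per clump radius and 6.4 at 64 zones, bracketing the quoted ~4; the key scaling is that the explicit resistive cost grows linearly with resolution at fixed Rm.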
Meeting 0720
Presentation: http://www.pas.rochester.edu/~shuleli/HedlaPro/0720meeting.ppt
Movies: http://www.pas.rochester.edu/~shuleli/HedlaMovie/torxvr.gif
http://www.pas.rochester.edu/~shuleli/HedlaMovie/torzvr.gif
http://www.pas.rochester.edu/~shuleli/HedlaMovie/polxvr.gif
http://www.pas.rochester.edu/~shuleli/HedlaMovie/polzvr.gif
Meeting Update 0710
Been working on the resistive stuff. Also updated the paper with quantitative analysis:
http://www.pas.rochester.edu/~shuleli/paperpics_clump1/clumppaper.pdf
Meeting Update 0702
All runs are finished and looking good. New Figures for the paper are here (too many):
link
fig03: strong toroidal (beta = 0.25), aligned with the shock.
fig04: strong toroidal (beta = 0.25), perpendicular to the shock.
fig05: strong poloidal (beta = 0.25), aligned with the shock.
fig06: strong poloidal (beta = 0.25), perpendicular to the shock.
fig07: weak toroidal (beta = 0.25), aligned with the shock.
fig08: weak toroidal (beta = 0.25), perpendicular to the shock.
fig12: kinetic energy transfer and total magnetic energy for the strong contained field cases.
fig13: kinetic energy transfer and total magnetic energy for the weak contained field cases.
fig14: mixing ratio at tcc and 3tcc for the strong contained field cases.
fig15: mixing ratio at tcc and 3tcc for the weak contained field cases.
Volume rendered movie for one special case: strong poloidal perpendicular to demonstrate the clump morphology evolution:
movie
The mixing ratio behaves very differently when the contained field pressure is changed: in the strong contained field cases, the mixing ratios initially look almost identical but develop differences as the simulations go on, while in the weak contained field cases there is a sharp difference between the toroidal and poloidal configurations.
Movie of mixing ratio for the strong contained case:
movie
Movie of mixing ratio for the weak contained case:
movie
Meeting Update 0626
I have run the low res versions of the poloidal contained cases as discussed, to determine the beta to be used in the "strong" cases. From the 4 simulations (low res, only the first 0.5 crushing times) of the beta = 0.25 and 0.5, aligned and perpendicular setups, beta = 0.25 seems viable, although you can observe obvious distortions at the beginning. Since we are more interested in the behavior after 1 crushing time, the initial distortion at beta = 0.25 should be no big deal. I then went on to run the cases needed for the paper on bluegene: beta = 0.25 toroidal perpendicular, beta = 1 toroidal perpendicular, and beta = 0.25 poloidal aligned and perpendicular. The first is done (see the result below), the second is at 20%, and the last two should be finished by Friday.
http://www.pas.rochester.edu/~shuleli/paperpics_clump1/Artical_File.pdf
I restructured the discussion part of the paper to have subsections for the different betas. It seems like a lot, with more than 10 3D figures, and the figure coding is a bit confusing:
I still have queued runs of 3D clumps with more progressive beta values:
beta = 0.4, beta = 0.5, beta = 0.6.
Although we may not use these runs in the current paper, they may be useful if we want to investigate the role of beta more thoroughly in the future (probably another paper): for the toroidal case, there seems to be a "threshold" beta around 0.6 that determines whether the clump collapses or opens up, even in the perpendicular cases.
I've picked up the resistive clump runs from earlier this year. One problem we saw is an artificial effect at the head of the clump where the field piles up in the B_y case. I'm currently checking whether the subcycling is working correctly by comparing the result with a non-subcycled version.
Meeting Update 0619
I'll talk about the clump paper related stuff.
Meeting Update 0606
Mostly working on NLUF proposal stuff and Hedla proceedings.
Current status:
Just received Eric's comments on the proceedings; working on making changes to the manuscript. Will send the revised version once Adam gets back with his comments.
Also got some time to work on the conduction module. It now works on a parallel fixed grid, but AMR gives NaN fluxes; I haven't had time to look into what's wrong.
Meeting /0522/
Working on the clump paper. Sections 2,3,4 draft finished:
Now I need to work on the introduction and discussion sections. Appendix?
two more runs for this paper are submitted (expected to run this week):
toroidal only with beta_avg = 1.
These can be used to examine the effect of different beta_avg and can be compared to the poloidal only runs because they have the same beta_avg.
More runs on the magnetized wind case:
Been working on the diffusion hypre solver. Took out the anisotropic part (left to the explicit solver) to make the code simpler. Implemented the conduction front test, which has a self-similar solution for any diffusion index. Now trying to use it to check that the solver is quantitatively correct, as well as the AMR part of the code (currently encountering NaN issues).
Update/0515/
Working on the paper. Outline:
1.introduction
2.physics description, including calculation of time scales
3.simulation setup
4.results
5.discussion
Have been working on parts 3 and 4 mostly. Also wrote up some of part 2; the time scale part needs more time though.
Meanwhile, finished several magnetized wind runs: the wind magnetic energy density is 0.02, 0.1 and 0.5 times the peak magnetic energy density inside the clump.
Some results:
Update/0507/
Some other results that were intended for HEDLA.
Started running the magnetized wind case.
Some Potential Hedla Movies
Toroidal Only, aligned with the shock sequence:
Movie:
http://www.pas.rochester.edu/~shuleli/HedlaMovie/torxvr.gif
Poloidal Only, aligned with the shock sequence:
During the expansion, a shaft-shaped "core" forms along the axis due to the high field concentration there (harder to deform). This core is then gradually deformed (more slowly than the cloud expansion) by the higher total pressure. At the same time, we see red dots of dense, cold material in the residue; they probably contain tangled magnetic field, which makes them harder to fragment further.
Volume Rendered Movie:
http://www.pas.rochester.edu/~shuleli/HedlaMovie/polxvr.gif
We also study the shocked behavior when the clump contains a small scale field (with a length scale much smaller than the clump radius). The following sequence shows a clump containing a magnetic field with a flat magnetic power spectrum P_B(k) ~ k^0, initially (left) and after 1 cloud crushing time (right).
We will also do a more realistic power spectrum with spectral index -5/3 (Kolmogorov type).
Movie:
http://www.pas.rochester.edu/~shuleli/HedlaMovie/tp1vr.gif
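One standard way to generate such a small-scale, divergence-free random field with a prescribed spectral index is to draw a vector potential in Fourier space and take its curl. The 2D sketch below is my own illustration of the technique (spectral-shell normalization conventions are glossed over, and the actual runs are 3D):

```python
import numpy as np

# Sketch: a divergence-free 2D random field with tunable spectral
# index, built from a vector potential A_z drawn in Fourier space with
# |A(k)| ~ k^(index/2 - 1) so that |B(k)| = k*|A(k)| ~ k^(index/2).
# index = 0 gives the flat case described above, index = -5/3 a
# Kolmogorov-like field.

def random_divfree_field(n=64, index=0.0, seed=1):
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n) * n
    ky = np.fft.fftfreq(n) * n
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    k = np.hypot(KX, KY)
    amp = np.zeros_like(k)
    nz = k > 0
    amp[nz] = k[nz] ** (index / 2.0 - 1.0)
    phase = np.exp(2j * np.pi * rng.random((n, n)))
    A = np.real(np.fft.ifft2(amp * phase))   # vector potential A_z
    # B = curl(A z):  B_x = dA/dy,  B_y = -dA/dx
    bx = np.gradient(A, axis=1)
    by = -np.gradient(A, axis=0)
    return bx, by
```

Because both components come from the same discrete potential, the central-difference divergence vanishes to round-off, so the tangled field can be dropped into the clump without seeding div-B errors.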
Meeting Update 04.10
Toroidal/Poloidal only runs finished.
Toroidal only, perpendicular to the shock:
http://www.pas.rochester.edu/~shuleli/HedlaMovie/dm_z.gif
Toroidal only, aligned with the shock:
http://www.pas.rochester.edu/~shuleli/HedlaMovie/torx.gif
Poloidal only, perpendicular with the shock:
http://www.pas.rochester.edu/~shuleli/HedlaMovie/polp.gif
Poloidal only, aligned with the shock:
http://www.pas.rochester.edu/~shuleli/HedlaMovie/polx.gif
Toroidal + Poloidal running, at frame 42:
Notice it is not symmetric: this is not noise, but the fact that the field configuration loses its mirror symmetry when a toroidal component is added to the poloidal one.
Bx, By, Bz amplitudes are plotted below.
I will meet with Ruka tomorrow to help him make the movies.
Meeting Update 03.26
Resources:
http://www.rcsg.rice.edu/home/
So the Rice machines seem to require a NetID, which is only for students or staff members at Rice.
http://www.rcsg.rice.edu/apply/
I guess we need to contact one of Pat's students, or Pat himself, to set up a guest account.
clump sims:
Z Polarized Clumps with different beta:
Density:
http://www.pas.rochester.edu/~shuleli/SingleModeMovies/betacom.gif
Field Energy:
http://www.pas.rochester.edu/~shuleli/SingleModeMovies/fmcom.gif
Field Components:
http://www.pas.rochester.edu/~shuleli/SingleModeMovies/bcomp.gif
X Polarized Clumps with low beta:
http://www.pas.rochester.edu/~shuleli/SingleModeMovies/xplb.gif
Field Components:
Uniform Bx with reflective boundaries:
http://www.pas.rochester.edu/~shuleli/SingleModeMovies/refbx.gif
Uniform Bx and By with reflective boundaries (snapshot at different time):
Meeting Update 03.19
Clumps
For the uniform field clumps, I've discussed with Ruka the 2D images and line plots we want to have in the paper. He now has the data and instructions to make the images.
Here's his trial animation:
http://www.pas.rochester.edu/~smurugan/SYMP.gif
Done some "single mode" magnetic clump runs. Here I define the "polarization" as the direction in which the poloidal field points. We need to discuss two distinct polarizations: aligned with the shock and perpendicular to the shock. Since the poloidal and toroidal components can have different ratios inside the clump, we actually have one more free parameter. In the paper the shock travels along the X direction, so X polarization is the aligned case and Y or Z polarization is the perpendicular case.
Here I show the animation when the clump is polarized on Z direction, poloidal vs toroidal ratio is 1, volume averaged beta = 3.15:
http://www.pas.rochester.edu/~shuleli/SingleMode/zpol.gif
The poloidal vs toroidal field strength is shown in the following animation:
http://www.pas.rochester.edu/~shuleli/SingleMode/ztvp.gif
Next is the X polarized clump:
http://www.pas.rochester.edu/~shuleli/SingleMode/xpol.gif
Comparing the density evolution, the Z polarized clump is delayed:
The higher resolution runs are about to start with volume averaged magnetic field strength equivalent to beta = 0.5 and 8, corresponding to the uniform field case.
BE Dynamo Generation
Now the subcycled runs work on 1 processor. Here I show the animation of the viscous MHD KH instability when the Braginskii viscosity cycles twice per hydro step:
Meeting Update 03.13
Clumps:
The 4 uniform field runs are finished (fixed grid). I will meet with Ruka next Monday to discuss the analysis to do on the data.
The single mode runs will start shortly.
RT stuff:
Ported the implicit thermal diffusion code over to the new version of the code. I have been doing RT tests using Riccardo's data, which generate the same outputs as we've seen before (a spike growing at the interface).
In the next few days I'll either use my own RT profile or invent some new quantitative tests to see whether the algorithm or the data is the cause.
Subcyclings:
I have been working on the subcycling part of the code with isotropic thermal diffusion. The code currently runs fine but the results do not look correct. I think we can get it done faster than we expected, except for the persistent send/recv part. It will also help with the RT work if we can compare subcycled thermal diffusion against implicit thermal diffusion.
Meeting Update 03.07
clump project
So I ran some cooling shock problems using the clump settings with the clumps removed. The shock front looked fine when hviscosity and periodic boundaries were used.
So the clump runs are being restarted from frame 0 with those settings. The following animations show Bx Strong, Bx Weak, By Strong, By Weak respectively. They are not finished yet, but are running; the final frame should be 60. The By runs are almost finished, and the Bx runs are halfway through.
Bx Strong
cloud collapse project
Eric and I thought it would be very interesting to do the BE collapse problem mentioned in last week's journal club:
a paper about dynamo generation at various scales in the cloud collapse problem and the impact of a realistic Prandtl number.
The runs will be similar to those of the Federrath 2011 paper, but with physical viscosity put in: Braginskii when the gyrofrequency >> collision frequency, Spitzer when the opposite is true. The resistivity will be Spitzer, which we already have.
Based on these thoughts, I implemented Braginskii viscosity into the code. The numerical method is based on Parrish et al. 2012 paper:
http://arxiv.org/abs/1201.0754
They did the HBI/MTI simulations with explicit Braginskii viscosity.
I then went on to test 2D Braginskii viscosity in an MHD KH instability problem using James Stone's parameters:
MHD Kelvin Helmholtz Instability with Braginskii type Viscosity
MHD Kelvin Helmholtz Instability with Braginskii type Viscosity, Comparison
It would be best to have the code switch between Spitzer and Braginskii viscosity based on the ratio of the local gyrofrequency to the collision frequency. Unfortunately we do not have that yet, though it is not far away.
code dev
A problem we discussed before: when considering the energy equation for the operator-split induction equation, do we need to apply the induction flux to the energy equation explicitly as well?
I now doubt that explicitly calculating the energy equation is the correct way. So I added some additional calculations to apply the energy flux induced by multiphysics processes in both the Braginskii viscosity and resistivity solvers.
I think we are at a point where sub-cycling is becoming a bottleneck. Some thought is needed. The problem is:
How to do subcyclings in AMR without a significant increase in the size of ghost zones and at the same time maintain accuracy.
Meeting Update 02.27
Clumps:
See the project page for updated runs and images:
https://clover.pas.rochester.edu/trac/astrobear/wiki/AstroBearProjects/MagnetizedClumps
I have 6 jobs with 1000 CPUs, 2 days each, that should start within this week (3 of them are at the top of the queue). So we can for sure finish the uniform field runs this week. And maybe we can try some AMR runs to see if we can find the bug.
a) 1000 CPUs may run better because of the larger memory.
b) the newer code, in which Jonathan added some fixes last weekend, may work better (probably not related to the memory bug though).
c) do simplified runs to test the behavior: hydro only, or 1 level of refinement.
Meeting Update
fixed grid sim: http://pas.rochester.edu/~shuleli/cloudbyhigh.gif
AMR: ran into the "fail to allocation" bug similar to that of ticket 121 and ticket 129 after about 20 hours.
Resistive: Implementing the tensor form.
Meeting Update 02.14
More on resistivity, conduction implementation.
See "Multiphysics in AstroBEAR" project page.
More results of higher resolution Shocked clumps, as well as the resistive shocked clumps.
See "Shocked Clumps with Magnetic Field" project page.
Meeting Update 02.07
Presentations
The LLE presentation is at:
http://www.pas.rochester.edu/~shuleli/magclumps2.pdf
Clumps
The beta maps of a bx case vs. a by case.
We can see a wake of low-beta, low-density gas hiding behind the clump in the bx case; however, the clump material is totally exposed to erosion by the shock.
In the by case, there is a cylindrical cavity filled with strong magnetic field (low beta) surrounding the clump's "tail". Clump material is partly protected by this field. The field covering the boundary flow region is amplified by stretching. There is also an amplification region at the head of the clump initially, due to compression. Stretching is the more effective amplification mechanism in this scenario. This is partly why the contained field case does not do much in terms of evolution: its only working amplification mechanism is compression by the transmitted shock, which is slowed down by a factor of 10.
The following shows a comparison of the by cases between resistive and nonresistive runs at approximately 1 crushing time. The resistivity here is isotropic and constant. We also have a microphysical resistivity that depends on the number density and temperature and is nonlinear; we may want to try that out later.
We can see that some bluish regions disappear, and the region behind the transmitted shock has a higher beta. The clump tail is less obvious in the resistive case. The field diffusion speed is 2 orders of magnitude smaller than the shock speed, so we can expect things to differ more when the resistivity is higher. Then again, the question is: "is the realistic microphysical resistivity high enough to trigger a noticeable effect?"
The high-res runs (50 zones per radius) have been sitting in the Blue Gene queue for five days.
They are currently near the top of the queue. The three queued jobs are bx strong, by weak, by strong respectively. Hopefully we can finish these plus the "single mode" contained field case within this month. Since these runs need an ending time twice that of the AAS runs (we need at least 4 crushing times), I expect at least 48 hours on 512 processes for each run.
Multiphysics
Flux-limited thermal conduction: saw strange behavior while running the test from the Parrish & Stone paper. Switched to the simple "ring test" and found that the inner boundaries do strange things when run in parallel. See the following picture.
Future Meetings, Abstracts?
Meeting Update 01.24
Been working on the flux limiting explicit thermal conduction and the presentations.
see separate project pages "Magnetized Clumps" and "Magnetized Clumps with Shocks".
Meeting Update 01.17
I'll talk about my poster and the AAS meeting. Currently working on the paper on the magnetized clump with thermal conduction. I'd also like to discuss what we will present at the HEDLA meeting, coming up April 30th.
Meeting Update 12.21
See Magnetized Clumps project page for updates.
Meeting Update 12.12
New runs on Magnetized Clump Project page. Scroll down to bottom to find five new animations.
Meeting Update 12.05
See Magnetized Clump project page (Scroll down to bottom) for updates.
Meeting Update 11.28
Magnetic cloud shocked by mach 3 shock. See project page bottom.
Meeting Update 11.21
See the Magnetized Clump Project page for the updates. As always, scroll down to bottom for new images and animations.
Meeting Update 11.16
New plots and animations on the magnetized clump project page. (scroll down to bottom)
Meeting Update 11.08
New Harris sheet test; magnetized clump simulations done. For images and animations, see the project page (scroll down to the bottom).
https://clover.pas.rochester.edu/trac/astrobear/wiki/AstroBearProjects/MagnetizedClumps
Meeting Update 11.01
I will talk about the various clump simulations I updated on the magnetized clump project page (scroll down to bottom for the update):
https://clover.pas.rochester.edu/trac/astrobear/wiki/AstroBearProjects/MagnetizedClumps
Meeting Update 10.25
The following plots show the AstroBEAR test of various settings in the current sheet outflow problem.
The Workman setting has central symmetry, with the reconnection sites located at the two edges. The positive radius of curvature of the field lines at the reconnection site results in an inflow to the X point.
The first panel shows its kinetic energy distribution.
The Semenov setting is achieved by applying a resistivity function that varies with position. This resistivity gradient bends the magnetic field lines with a negative radius of curvature, so a vertical outflow from the X point is created. The resulting Sweet-Parker region expands throughout the simulation until the flow speed at its boundary reaches the Alfven speed. Panels 2 and 3 show the kinetic energy and the vertical flow momentum. Panel 4 shows the kinetic energy of the Semenov outflow with an enhanced resistivity, for comparison.
Meeting Update Week 10.18
Updated the Magnetized Cloud Project page.
The parallel AMR now works after fixing the convergence issue, and it passed the interface test. The Harris sheet test requires more than 8 processors, so I am planning to run it on bh or bg.
Also, there is a table on the project page listing the runs needed for the magnetized cloud simulation. It seems we need to reduce the number of runs: there are 45 runs in total.
Meeting Update 10.10
So I finished the resistive/viscous implementation. It can run the Run Test (tests on the project page) without problems, works in parallel and with AMR, and is compatible with both 2D and 3D.
The only problem is that the original implementation needs a whole set of communication routines to be written, which seem to need some generalization/organization. So this working version uses an asymmetric solver as a workaround, and it cannot do subcycling (we may not want that for the next paper anyway).
I think we are at a point to discuss the detailed runs we want to take.
Resistive MHD test
So I've decided to use the Harris current sheet as the test problem. It's essentially an instability with an easy-to-identify flow pattern, and it is also a good research problem if we want to dig into it. I will talk about this test in tomorrow's meeting.
Meeting Update 09.27
Mainly working on the multiphysics solver. Some changes to the scheme I talked about last week:
(1) the viscosity solver now uses an 8-cell stencil instead of a 12-cell one in 2D.
(2) both solvers now use allocated arrays to store the source terms (viscous force and current) instead of recalculating the source on each cell, which saves repeated computations.
(3) the source term flux from the multiphysics solver now goes into the fixup fluxes, giving better momentum and divergence conservation between levels.
(4) we are able to switch between the super-stepping approach ESHSSHSE (2 additional ghost zones) and the more symmetric approach ESHSEESHSE (4 ghost zones).
(5) viscosity and resistivity use the same ghost zones; the solver swaps their order depending on which AMR step it is in.
(6) working on a 1D interface problem to make sure the AMR works correctly. After that, we will do an instability problem that can go into the future paper.
viscosity and resistivity refinement limitations
The explicit viscosity solver should meet the stability requirement:

dt <= dx^2 / (2 d nu)

where nu and c_s are the viscosity and the sound speed, and d is the dimensionality. Requiring this viscous step not to drop below the hydro CFL step dt ~ dx / c_s leads to the relation:

dx >~ nu / c_s ~ lambda_ii

where lambda_ii is the ion-ion mean free path (since nu ~ v_th lambda_ii and v_th ~ c_s).
This makes sense: viscosity is a collective behavior of ion-ion collisions, so refining below one ion-ion mean free path does not make sense.
The resistivity solver has a similar limitation, with the magnetic diffusivity in place of the viscosity.
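These limits can be put numerically; the sketch below is an illustrative estimate (the function names and safety factors are my assumptions), showing that the number of diffusion substeps needed per hydro step starts to climb once dx is refined below ~ nu/c_s:

```python
import math

def diffusion_dt(dx, nu, safety=0.5, ndim=3):
    """Explicit stability limit for a diffusion operator: dt <= dx^2/(2*d*nu)."""
    return safety * dx * dx / (2.0 * ndim * nu)

def hydro_dt(dx, cs, cfl=0.5):
    """Hydro CFL step (bulk velocity ignored for this estimate)."""
    return cfl * dx / cs

def n_subcycles(dx, nu, cs):
    """Explicit diffusion substeps needed per hydro step.  The count grows
    like nu/(cs*dx): it exceeds 1 once dx drops below ~nu/cs, which for
    Braginskii viscosity is roughly the ion-ion mean free path."""
    return max(1, math.ceil(hydro_dt(dx, cs) / diffusion_dt(dx, nu)))
```

With nu << cs*dx a single diffusion step covers the hydro step; refining dx by a factor of 10 past nu/cs costs roughly 10 extra substeps per hydro step.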
Meeting Update 09.16
I was mainly working on the array ghosting scheme Jonathan and I talked about last week, which stores the ghosting info needed for each separate field variable in an array and tracks this array during the advance. It is more general than our current method and can easily be extended to algorithms with multiple physical processes. But while testing the multiphysics functions, which now have resistivity and viscosity built in, it appears that for conservative variables we always need all the hydro vars to go into the multiphysics calculation, which makes it inefficient to track the number of ghost zones required for each hydrodynamic field entry. Another idea is a simplified mbc array which is 2 by 1, with the first dimension taking values from 1 to 4, essentially tracking the hydro vars, aux, elliptic vars and tracers. The other approach is to return to our current method but add a new smbc to define the ghost zones required by the source term solver, while folding in the functionality of egmbc.
For the integration of the solver into the code, it seems we cannot mechanically calculate the dqdt for each multiphysics process and lump them into SrcDeriv for the RK update, since that update requires time subcycling. Subcycling is not a problem for cooling and self-gravity, which do not consume additional ghost zones on each advance, but it is a problem for diffusion, resistivity and viscosity: these three processes require additional peripheral zones to update the internal data. Diffusion and resistivity need 1, viscosity needs 2. This means that subcycling 10 times in the source term advance would require 20 more ghost zones for the hydro vars when doing viscosity, which is not acceptable. Therefore we are looking at a totally independent explicit update module, similar to the implicit thermal conduction one, that resides outside the current source term update. Then we would do the update in the order E-S1-S2-H-S2,
where E is the elliptic update, S1 is the one-step explicit source term update, and S2 is the current source term explicit update that does subcycling. In this scheme the physics solved in S1 is not integrated into the flux calculation or the subcycling process, which may introduce inaccuracy, but that is the price we have to pay if we want to keep the code simple and restrict communication. Alternatively, we can calculate a slope in S1 and then use it to update the hydro vars during each subcycling step in S2.
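The E-S1-S2-H-S2 ordering, including the slope variant mentioned at the end, can be sketched on a toy scalar state (all routine names here are placeholders, not AstroBEAR routines; each stub just adds a constant so the ordering is visible):

```python
# Toy state: a single float.  The S1 slope is computed once, outside the
# subcycle loop, and folded into each S2 substep, so S1's physics never
# needs extra ghost zones per substep.

def elliptic_update(s, dt):          return s + 1.0        # E: e.g. self-gravity
def source_slope(s, dt):             return 0.5            # S1: one-step dqdt
def subcycled_source(s, dt, slope):  return s + slope * dt # S2: explicit substep
def hydro_update(s, dt):             return s + 10.0       # H: hyperbolic update

def advance(state, dt=1.0, n_sub=2):
    """E - S1 - S2 - H - S2 operator splitting on a toy state."""
    state = elliptic_update(state, dt)
    slope = source_slope(state, dt)              # S1 evaluated once
    for _ in range(n_sub):                       # first S2 pass
        state = subcycled_source(state, dt / n_sub, slope)
    state = hydro_update(state, dt)
    for _ in range(n_sub):                       # trailing S2 pass
        state = subcycled_source(state, dt / n_sub, slope)
    return state
```

The point of the sketch is only the call order and the single slope evaluation; the real updates are PDE solves over grid patches.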
Meeting Update 09.12
Implemented viscosity in full tensor form for an isotropic fluid, which should be enough for our applications. Also implemented the realistic viscosity, which depends on the ion mean free path and the averaged thermal velocity. The scheme now takes full account of the gradient of the viscous tensor induced by cell-to-cell temperature differences (through the different ion thermal velocities), which gives us the ability to solve problems with viscosity at temperature fronts. We may want to implement a table that looks up the cross section for different ion species; currently the ion mean free path defaults to the hydrogen case. One thing worth considering is multiple ion species present at the same time: the code currently defaults to a single ion type, and I am not yet sure how to get the averaged cross section, and thus the viscosity tensor, in that case. For a detailed description, see the updated resistive MHD project page. On the same page there is also a comparison test between hydro RT vs viscous RT vs diffusive RT. The viscosity and diffusivity are both linear constants, purposely set to comparable values.
Meeting Update 09.06
Both viscosity and resistivity have been hooked into the code using an operator splitting approach. The Hartmann flow module is finished, but it still needs a function to calculate the analytical solution at any given time and do the comparison. The R-B module is also in the code now. For the resistivity module, I am trying to find a resistivity + hydro only test problem with an analytic solution that does not require the rotating cylinder; the Hartmann flow requires viscosity and the R-B instability requires thermal flux, so neither is resistive-only.
Meeting Update
I am implementing the viscosity and resistivity:
The resistivity solver is similar to the talk I gave last week: it implements a scalar resistivity that depends on the local temperature. We could be more sophisticated and allow the resistivity to be a tensor (remember, the field diffusivity across and along the magnetic field differs by a factor of about 2), but I have not yet found any MHD code that fully implements this. The viscosity is calculated similarly; currently a scalar viscosity is used. We can implement a viscosity tensor later, but it will be slightly more complicated.
Both the viscosity and resistivity modules are finished; they compute the source terms induced by these two physical processes at cell centers and face centers respectively. They also calculate a preferred subcycling time based on the CFL number. I have not decided how to implement the subcycling, since, as we discussed before, the energy flow induced by resistive and viscous diffusion is likely ignored if we subcycle only the operator-split diffusion equation. If we instead limit the hydro time step to these time steps, we get the wanted temporal resolution for the energy evolution, but that means a slower code.
The next step is to develop a coupling method to integrate these source terms into our MHD solver. I discussed a bit of this with Jonathan last week. Hopefully we can get something running this week.
Also finished editing the paper. It is ready for submission now.
meeting update 0807
The resistive MHD is now in the code. I've added material to the project page on the wiki: https://clover.pas.rochester.edu/trac/astrobear/wiki/AstroBearProjects/resistiveMHD
There are simulations and some discussion there. This should be the basis of our upcoming magnetized cloud project, together with the force-free field work.
meeting update
I've been working on the RT problem and the implementation of the resistive MHD this weekend. I've also updated the project wiki page.
Magnetized Clump Project Page
I created a page for the magnetized cloud/clump project. I will be mainly talking about it in the meeting.
https://clover.pas.rochester.edu/trac/astrobear/wiki/AstroBearProjects/MagnetizedClumps