astrobear - Blog
https://bluehound.circ.rochester.edu/astrobear/blog
Update with 1024 cores (madams, Mon, 20 Oct 2014 14:40:36 GMT)
https://bluehound.circ.rochester.edu/astrobear/blog/madams10202014
<p>
I ran the Beta 10 No Shear case on Stampede with 1024 cores. Here is the result (see <strong>Table</strong> below):
</p>
<p>
<span class="trac-mathjax" style="display:none">\overline{t_{0}} = 87.67 \text{ min} \rightarrow r_{0} = \frac{1,440 \text{ min}}{\overline{t_{0}}} \approx 16.43 \text{ frames per day}</span>
</p>
<p>
So if we're at frame 246, we have 154 frames left. Dividing 154 by our rate gives 9.4 days (225.6 hrs) to run this simulation out. Thus, 225.6 * 1024 = <strong>231,014.4 cpu hrs.</strong> Multiplying this by 4, as we have 4 runs, yields approximately <strong>924,057.6 cpu hrs</strong> total. This is not much different from the total result from <a class="ext-link" href="https://astrobear.pas.rochester.edu/trac/blog/madams10162014"><span class="icon"></span>last week</a>. It does not seem economical to run these on 1024 cores; in my opinion we might as well run them on 2048 cores, as they'll finish faster with little to no change in cpu hrs.
</p>
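<p>
The arithmetic above can be sketched as follows; this is a minimal re-derivation, with the average frame time taken from the <strong>Table</strong> and the frame counts from the text (small differences from the quoted totals come from rounding 9.4 days):
</p>

```python
# Sketch of the 1024-core estimate; avg frame time and frame count
# are taken from the Table and text of this post.
avg_frame_min = 87.67              # averaged minutes per frame (Table)
rate = 1440 / avg_frame_min        # frames produced per day
frames_left = 154                  # at frame 246 of 400
days = frames_left / rate          # wall-clock days to finish one run
cpu_hrs = days * 24 * 1024         # cpu hrs for one run on 1024 cores
print(f"{rate:.2f} frames/day, {days:.1f} days, "
      f"{4 * cpu_hrs:,.0f} cpu hrs for 4 runs")
```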
<p>
<em>Perhaps we should choose just a few cases on 2048 cores?</em>
</p>
<p>
If on 2048 cores we estimate 34.85 frames a day (the average of the rates from the last blog post) with approximately 164 frames left (also averaged from the last blog post), that implies approximately 4.7 days per simulation, or 113 hours. This is approximately 231,304 cpu hrs per run. With 3 runs, that is 693,911 cpu hrs; with 2 runs, 462,607 cpu hrs.
</p>
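<p>
The 2048-core scenario can be checked the same way; the rate and frames remaining below are the averages quoted from the last blog post:
</p>

```python
# Sketch of the 2048-core scenario; rate and frames-remaining
# are the averages quoted from the previous blog post.
rate = 34.85                       # avg frames per day on 2048 cores
frames_left = 164                  # avg frames remaining per run
hours = frames_left / rate * 24    # wall-clock hours per run
for n_runs in (2, 3):
    total = n_runs * hours * 2048  # cpu hrs for n_runs runs
    print(f"{n_runs} runs: {total:,.0f} cpu hrs")
# → 2 runs: 462,607 cpu hrs; 3 runs: 693,911 cpu hrs
```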
<p>
<em>Perhaps we could split the runs between machines?</em> However, we aimed to use Stampede because it is so fast compared to the likes of <a class="missing wiki">BlueStreak</a>.
</p>
<table class="wiki">
<tr><td> <strong>Run (Current Frame)</strong> </td><td> <strong>Interval Sampled</strong> </td><td> <strong>Time (mins)</strong> </td><td style="text-align: right"> <strong>Avg. time (mins)</strong>
</td></tr><tr><td> b10s0 (246) </td><td> 23:30 - 00:50 </td><td> 80 </td><td>
</td></tr><tr><td> </td><td> 10:27 - 11:58 </td><td> 91 </td><td>
</td></tr><tr><td> </td><td> 19:44 - 21:16 </td><td> 92 </td><td>
</td></tr><tr><td> </td><td> </td><td> </td><td> 87.67
</td></tr></table>
<p>
<strong>Table.</strong> The Beta 10 Shear 0 run, showing the current frame, the wall-clock intervals sampled, and the per-frame times averaged for the calculations above.
</p>
Tags: data-management, stampede

cpu hrs for CF runs on Stampede (madams, Thu, 16 Oct 2014 14:30:15 GMT)
https://bluehound.circ.rochester.edu/astrobear/blog/madams10162014
<p>
Running the <a class="wiki" href="https://bluehound.circ.rochester.edu/astrobear/wiki/CollidingFlows">CollidingFlows</a> problem out from frame 200 to 400 to double the simulation time and see whether we can observe any more sink formation. Given that this run is really computationally intensive, I've done a quick calculation of cpu hrs based on some current runs I am doing on Stampede. All runs are in the normal queue for 24 hrs on 2048 cores. The table below gives the current frame number at which I collected this data. The average time for our code to produce a frame is (subscripts denote the shear angle of the run):
</p>
<p>
<span class="trac-mathjax" style="display:none">\overline{t_{0}} = 44.\bar{3} \text{ min}</span>
</p>
<p>
<span class="trac-mathjax" style="display:none">\overline{t_{15}} = 46 \text{ min}</span>
</p>
<p>
<span class="trac-mathjax" style="display:none">\overline{t_{30}} = 47 \text{ min}</span>
</p>
<p>
<span class="trac-mathjax" style="display:none">\overline{t_{60}} = 32 \text{ min}</span>
</p>
<p>
Given that there are 1,440 minutes in a day, we'd produce the following number of frames per day:
</p>
<p>
<span class="trac-mathjax" style="display:none">r_{0} = \frac{1,440 \text{ min}}{\overline{t_{0}}} \approx 32.5 \text{ frames per day}</span>
</p>
<p>
<span class="trac-mathjax" style="display:none">r_{15} = \frac{1,440 \text{ min}}{\overline{t_{15}}} \approx 31.3 \text{ frames per day}</span>
</p>
<p>
<span class="trac-mathjax" style="display:none">r_{30} = \frac{1,440 \text{ min}}{\overline{t_{30}}} \approx 30.6 \text{ frames per day}</span>
</p>
<p>
<span class="trac-mathjax" style="display:none">r_{60} = \frac{1,440 \text{ min}}{\overline{t_{60}}} \approx 45 \text{ frames per day}</span>
</p>
<p>
The difference between the current frame and the last frame (400) for beta10 shear 0, 15, 30, and 60 is 179, 182, 159, and 136 frames respectively, so we're looking at running these out for approximately 5-6 days on 2048 cores. Specifically: b10s0: 5.5 days, b10s15: 5.8 days, b10s30: 5.2 days, and b10s60: 3 days. With 24 hours in a day and 2048 cores, this puts us at a total of <strong>957,973 cpu hrs</strong>. THAT IS INSANE.
</p>
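<p>
A quick sketch of how that grand total falls out; frame times come from the table and frames remaining from the text, so the total differs from the quoted 957,973 cpu hrs only by rounding in the original:
</p>

```python
# Aggregate the per-run estimates on 2048 cores.
runs = {                # run: (avg minutes per frame, frames remaining)
    "b10s0":  (44.3, 179),
    "b10s15": (46.0, 182),
    "b10s30": (47.0, 159),
    "b10s60": (32.0, 136),
}
total = 0.0
for name, (t_avg, left) in runs.items():
    days = left * t_avg / 1440       # frames left / (frames per day)
    total += days * 24 * 2048        # cpu hrs for this run
    print(f"{name}: {days:.1f} days")
print(f"total: {total:,.0f} cpu hrs")
```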
<p>
After a quick discussion with Erica and Baowei, I've come up with the following short-term plan: once these jobs stop later today, I'll submit 1 job to the normal queue on 1,000 cores. For this run I'll make the same calculation and see whether it is more economical when multiplied by 4. Baowei has also suggested throwing runs on Gordon, another machine owned by the Texans. We have a lot of SUs there, so he is currently setting me up. We currently have only 1,551,296 SUs available on Stampede, so running all of our jobs for this problem there could be quite precarious.
</p>
<table class="wiki">
<tr><td> <strong>Run (Current Frame)</strong> </td><td> <strong>Interval Sampled</strong> </td><td> <strong>Time (mins)</strong> </td><td style="text-align: right"> <strong>Avg. time (mins)</strong>
</td></tr><tr><td> b10s0 (221) </td><td> 16:07 - 16:55 </td><td> 48 </td><td>
</td></tr><tr><td> </td><td> 02:58 - 03:38 </td><td> 40 </td><td>
</td></tr><tr><td> </td><td> 07:13 - 07:58 </td><td> 45 </td><td>
</td></tr><tr><td> </td><td> </td><td> </td><td> 44.3
</td></tr><tr><td> b10s15 (218) </td><td> 18:03 - 18:48 </td><td> 45 </td><td>
</td></tr><tr><td> </td><td> 02:19 - 03:03 </td><td> 44 </td><td>
</td></tr><tr><td> </td><td> 11:05 - 11:54 </td><td> 49 </td><td>
</td></tr><tr><td> </td><td> </td><td> </td><td> 46
</td></tr><tr><td> b10s30 (241) </td><td> 17:57 - 18:40 </td><td> 43 </td><td>
</td></tr><tr><td> </td><td> 00:26 - 01:23 </td><td> 57 </td><td>
</td></tr><tr><td> </td><td> 07:03 - 07:44 </td><td> 41 </td><td>
</td></tr><tr><td> </td><td> </td><td> </td><td> 47
</td></tr><tr><td> b10s60 (264) </td><td> 17:40 - 18:07 </td><td> 27 </td><td>
</td></tr><tr><td> </td><td> 00:04 - 00:38 </td><td> 34 </td><td>
</td></tr><tr><td> </td><td> 07:43 - 08:18 </td><td> 35 </td><td>
</td></tr><tr><td> </td><td> </td><td> </td><td> 32
</td></tr></table>
<p>
<strong>Table.</strong> Each run, showing the current frame, the wall-clock intervals sampled, and the per-frame times averaged for the calculations above.
</p>
Tags: data-management, stampede