wiki:TestSuite

Version 51 (modified by ehansen, 13 years ago)

AstroBEAR Test Suite

The basic design philosophy is as follows. There exist four levels of interactivity and completeness when running the tests —

  1. Verification tests, run via verifytests.s (Est. time: ? min)
  2. Individual tests, run via their own runtest.s files, located in their respective folders (within modules/tests/).
  3. General tests, run via alltests.s (Est. time: ? min).
  4. Showcase tests, run via show.s (Est. time: ? min)

The above scripts are all located in the code's main directory and are intended to be executed there after a successful build of the code. Test results vary by test. For simple tests the result may consist of a "pass" or "fail". For other tests, or in cases where a failure is reported, the result may need to include certain figures, such as comparison figures (comparing the obtained result against the expected result).

Verification tests (verifytests.s) should be run by developers after ANY changes to the code, before checking them in. For simplicity, all verification tests run mpibear on the current machine using 2 processors. More scripting will be needed to run the larger showcase simulations, which may need to run in parallel on 2+ cores in order to finish in a reasonable amount of time. General tests are meant to be automated, periodic (overnight) tests with no user interaction, which will post their results on the web.

  • This will require scripts which correctly run the executable on a given platform.
  • The run should be designed to produce data and exit in a repeatable fashion (i.e., not requiring the user to stop the simulation after a certain time).
  • Upon completion, the script needs to invoke bear2fix which, via the input-file option, processes the data and produces a result.
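
The three requirements above can be sketched as a minimal driver script. The command names (mpibear, bear2fix) and the bear2fix input files come from this page; the mpirun flags and the run_cmd tracing helper are illustrative assumptions, not documented options.

```shell
#!/bin/sh
# Sketch of a per-test driver: run the code to completion, then post-process.
# run_cmd only traces the commands so the flow is visible; a real script
# would execute them directly.
run_cmd() { echo "would run: $*"; }

run_cmd mpirun -np 2 ./mpibear                # produce data and exit repeatably
run_cmd ./bear2fix "< bear2fix.data.test"     # post-process via the input-file option
```

In a real driver the simulation must stop on its own (e.g., at a fixed final frame set in global.data), so the run is repeatable without user intervention.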

Old BasicTestingProcedure of verification tests.

Showcase tests are meant to illustrate whether the code functions as expected in a general sense when running well-understood simulations. These tests produce images showing the end state of the simulation. Inspection of these images by the user should confirm that the simulation ran correctly, or provide a starting point for diagnosing what is going wrong.


Current testing (6/30/11, under revision)

At the moment we do post-processing tests of AstroBEAR 2.0. Basically, the tests consist of comparing the flow variables in a chombo file produced by the test modules against those in a reference chombo file (see below). All test-related files are located in modules/tests/. Each test has its own directory, which contains:

  • the problem's data files: global.data, modules.data, problem.data, profile.data, solver.data, communication.data, io.data and physics.data,
  • the problem's module problem.f90,
  • two data files used by BearToFix to produce test reports: bear2fix.data.test and bear2fix.data.img,
  • a "ref" directory which contains the reference chombo file.

The reference chombo file (CHref). This file, chombo00001.hdf, has been produced with the code and verified by someone in our research group using well-documented quantitative analytical or numerical studies. Reference chombos are included with the current distribution of the code, and information about them can be found in their corresponding test wiki pages.

Running tests. This is done using two shell scripts:

  • postprocess.s, which iterates over each of the test problems and calls the go.s script,
  • go.s, which basically cd's into the current test directory, executes BearToFix (see below), converts the images from ps to png, and copies them back into the image directory. These images will then be linked to the corresponding test report wiki pages.
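
The postprocess.s / go.s division of labor can be sketched as a loop over the test directories. The modules/tests layout is from this page; the function name and echo tracing are illustrative, and the actual per-test work (bear2fix, ps-to-png conversion) is left as a comment.

```shell
#!/bin/sh
# Sketch: iterate over every test directory and hand each one to a per-test
# script, as postprocess.s does with go.s.
postprocess_all() {
  for t in "$1"/*/ ; do
    [ -d "$t" ] || continue
    echo "postprocessing $(basename "$t")"
    # ( cd "$t" && sh ../../go.s )   # real step: run bear2fix, convert ps -> png
  done
}

postprocess_all modules/tests
```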

Test reports. Once the test runs are finished, BearToFix is executed. It performs the error computations and produces two png images showing the reference result and the current result of the run. BearToFix is executed twice: the first time it reads bear2fix.data.img to set the color tables for the test report images, while the second time it reads bear2fix.data.test to perform the error computation and produce the test report images.
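
The two-pass execution just described can be traced as follows. The two input-file names are from this page; run_pass is only a tracing stand-in for the actual BearToFix invocation.

```shell
#!/bin/sh
# Sketch of the two BearToFix passes; run_pass echoes instead of executing.
run_pass() { echo "bear2fix reading $1"; }

run_pass bear2fix.data.img    # pass 1: set color tables for the report images
run_pass bear2fix.data.test   # pass 2: compute errors, produce report images
```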

Test error calculations. These are done in BearToFix via L2 norms and L-infinity norms (reference to be added soon).

To run BearToFix yourself (e.g., to further adjust the test report parameters), select operation 11 and test 4 (Generic Test), and input the maximum mean error tolerance (see below) allowed for each fluid variable. BearToFix will then compute the errors and produce the test report images. You should then copy these images to the test images directory (reference to be added soon).

If the test fails, BearToFix outputs the current errors as well as suggested new tolerances (5% higher than the current errors). The user can then copy and paste these values into the bear2fix.data.test file and run BearToFix again. Inspection of the test report wiki page should show why the test failed.
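
The tolerance suggestion rule above (measured error plus 5%) is simple enough to check by hand. A minimal sketch, assuming nothing about BearToFix's actual output format:

```shell
#!/bin/sh
# Compute a suggested tolerance: the measured mean error raised by 5%.
suggest_tol() { awk -v e="$1" 'BEGIN { printf "%.6f", e * 1.05 }'; }

suggest_tol 0.02    # a measured mean error of 0.02 suggests 0.021000
```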

The test log file is located in the modules/tests directory and briefly keeps track of test results. We plan to add the log info to the test report images through the convert command, so the user can see the images and the error estimates at once.
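
That planned convert step might look like the sketch below, assuming ImageMagick is available. The file names and the annotation text are illustrative, not part of the current scripts.

```shell
#!/bin/sh
# Sketch: stamp a log line onto the bottom of a test report image with
# ImageMagick's convert. All arguments here are hypothetical examples.
annotate_report() {
  # $1 = input image, $2 = log text, $3 = output image
  convert "$1" -gravity south -annotate +0+5 "$2" "$3"
}
# Example (illustrative file names):
# annotate_report fieldloop.png "mean rel. error 0.021 (tol 0.025)" fieldloop_annotated.png
```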

List of tests

Name | Description | Variations | Est. time | Status
Compile Flags | checks that compilation completes for different combinations of compile flags | 8 possible combinations | ??? | In Development (Matt)
2D Field Loop Advection (TestSuite/FieldLoop2D) | Advects a loop of magnetic field diagonally across the grid | Includes a restart test | Approx. 30 minutes (2-proc, grass) | Implemented
Uniform Collapse (TestSuite/UniformCollapse) | The uniform collapse of a sphere. | None | Approx. 5 minutes (4-proc, alfalfa) | Implemented
Analytic Cooling (TestSuite/RadiativeShocks) | The position of a strong shock oscillates depending on the gas cooling rate; cooling is prescribed analytically. | 3 different sets of parameters | Approx. 6 minutes (2-proc, grass) | Implemented
1D Sod Shock Tube (TestSuite/SodShockTube) | Runs the 1D Sod shock tube with all combinations of integration schemes | None | 60 sec. (single-proc, grass) | In Development (Eddie)
2D Rayleigh-Taylor Instability (TestSuite/RayleighTaylor2D) | A heavy fluid above a light fluid with uniform gravity is perturbed. | None | Approx. 11 minutes (4-proc, alfalfa) | Implemented
Bondi-Hoyle Accretion (TestSuite/Bondi) | Spherically symmetric accretion of gas | None | ??? | In Development (Ruka)
Orbiting Particles (TestSuite/OrbitingParticles) | 2 particles orbiting each other | None | ??? | In Development (Eddie)

The results for every test (apart from the Compile Flags Test) are of the same form:

Two postprocessing error files are produced which contain absolute errors and relative errors:

absolute errors (vector): |CHnew(q,x) - CHref(q,x)|

relative errors (vector): |CHnew(q,x) - CHref(q,x)|/|CHref(q,x)|,

where CHnew is the final chombo file that was produced by the new simulations (the one that's been tested), CHref is the reference chombo file (see above), q=(density, x-momentum, y-momentum, …) and x is the cell position vector.
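
For a single variable q at a single cell, the two error definitions above reduce to elementary arithmetic. A minimal sketch with invented values (real tests read the chombo HDF5 files; plain numbers stand in here):

```shell
#!/bin/sh
# Compute the absolute and relative error between a new value and a
# reference value, per the definitions above.
abs_rel_err() { awk -v new="$1" -v ref="$2" 'BEGIN {
  a = new - ref; if (a < 0) a = -a          # |CHnew - CHref|
  r = a / (ref < 0 ? -ref : ref)            # |CHnew - CHref| / |CHref|
  printf "%.4f %.4f", a, r }'; }

abs_rel_err 1.05 1.00    # prints "0.0500 0.0500"
```

Note that the relative error is undefined where CHref is zero, so a real implementation must mask or floor such cells.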

CASTRO tests

  • Scalability
    • 64³ weak scaling test
      • No gravity
      • Multipole gravity
      • Poisson gravity
  • hydrodynamic solver
    • Sod Shock Tube (Sod, 1978)
    • Double rarefaction (Toro, 1997)
    • Strong shock (Toro, 1997)
  • hydrodynamics, geometry
    • 1D Sedov-Taylor blast wave (Sedov, 1959)
    • 2D cylindrical Sedov-Taylor blast wave (Sedov, 1959)
    • 3D Cartesian Sedov-Taylor blast wave (Sedov, 1959)
  • hydrodynamics, gravity
    • Split piecewise linear, Rayleigh-Taylor (Taylor, 1950)
    • Unsplit piecewise linear, Rayleigh-Taylor (Taylor, 1950)
    • Split PPM (old limiter), Rayleigh-Taylor (Taylor, 1950)
    • Unsplit PPM (old limiter), Rayleigh-Taylor (Taylor, 1950)
    • Split PPM (new limiter), Rayleigh-Taylor (Taylor, 1950)
    • Unsplit PPM (new limiter), Rayleigh-Taylor (Taylor, 1950)