Reputation: 332
What would be a sensible approach to adding some basic unit tests to a large body of existing Fortran 90 code that is developed solely on a locked-down system, where there is no opportunity to install any third-party framework? I'm pretty much limited to standard Linux tools. At the moment the code base is tested at the full-system level using a very limited set of tests, but this is extremely time-consuming (multiple days to run) and so is rarely used during development.
Ideally I'm looking to incrementally add targeted testing to key systems, rather than completely overhauling the whole code base in a single attempt.
Taking the example module below, and assuming an implementation of assert-type macros as detailed in Assert in Fortran:
MODULE foo
CONTAINS
  FUNCTION bar() RESULT (some_output)
    INTEGER :: some_output
    some_output = 0
  END FUNCTION bar
END MODULE foo
A couple of possible methods spring to mind, but there may be technical or administrative challenges to implementing these of which I am not aware:
A separate test module for each module, as below, with a single main test runner that calls each test function within each module:
MODULE foo_test
CONTAINS
  SUBROUTINE bar_test()
    ! ....
  END SUBROUTINE bar_test
END MODULE foo_test
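With that layout, a single runner program can call each module's test routines in turn; a minimal sketch (the runner name and the final status line are illustrative, not part of the code above):

```fortran
PROGRAM test_runner
  USE foo_test
  IMPLICIT NONE
  ! Call each module's test routines in turn.  If the assert macros
  ! report failures instead of aborting, one bad test will not stop
  ! the remaining ones from running.
  CALL bar_test()
  ! CALL baz_test(), qux_test(), ... for other modules
  PRINT *, 'test run complete'
END PROGRAM test_runner
```

New tests can then be added incrementally: each new module gets a `_test` companion, plus one USE and one CALL line in the runner.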
A similar approach to the above, but with individual executables for each test. The obvious benefit is that a single failure will not terminate all the tests, but it may be harder to manage a large set of test executables, and it may require a large amount of extra code.
Use the preprocessor to include test main program(s) within each module's source file. For example, gfortran will apply C/C++-style macros to Fortran 90 (e.g. #define SUBNAME(x) s ## x), so a build script could automatically build and run the test mains stored between preprocessor delimiters in the main code file.
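As a sketch of that third option: gfortran runs the C preprocessor on files compiled with -cpp (or named with a capital .F90 extension), so a test main can live in the same file as the module, guarded by a macro (UNIT_TEST is an illustrative name, not an established convention):

```fortran
MODULE foo
CONTAINS
  FUNCTION bar() RESULT (some_output)
    INTEGER :: some_output
    some_output = 0
  END FUNCTION bar
END MODULE foo

#ifdef UNIT_TEST
! Only compiled when building the test executable, e.g.
!   gfortran -cpp -DUNIT_TEST foo.F90 -o foo_test
PROGRAM foo_main
  USE foo
  IMPLICIT NONE
  IF (bar() == 0) THEN
    PRINT *, 'PASSED'
  ELSE
    PRINT *, 'FAILED'
  END IF
END PROGRAM foo_main
#endif
```

Without -DUNIT_TEST the same file compiles to just the module, so the production build is unaffected.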
I have tried using some of the existing Fortran frameworks (as documented in Why do the unit test frameworks in Fortran rely on Ruby instead of Fortran itself?), but for this particular project there is no possibility of installing additional tools on the system I am using.
Upvotes: 5
Views: 1635
Reputation: 3819
In my opinion, assert mechanisms are not the main concern for unit testing in Fortran. As mentioned in the answer you linked, there exist several unit test frameworks for Fortran, like funit and FRUIT.
However, I think the main issue is the resolution of dependencies. You might have a huge project with many interdependent modules, and your test should cover one of the modules while using many others. Thus, you need to find these dependencies and build the unit test accordingly. Everything boils down to compiling executables, and the advantages of assertions are very limited, as you will need to define your tests and do the comparisons yourself anyway.
We build our Fortran applications with Waf, which comes with a unit-testing utility itself. Now, I don't know if this would be possible for you to use, but its only requirement is Python, which should be available on almost any platform. One shortcoming is that the tests rely on a return code, which is not easily obtained from Fortran in a portable way before Fortran 2008 (which recommends making STOP codes available as the process exit status). So I modified the checking for success in our projects: instead of checking the return code, I expect the test to write a specific string, and check for that in the output:
def summary(bld):
    """
    Get the test results from the last line of output::

        Fortran applications can not return arbitrary return codes in
        a standardized way; instead we use the last line of output to
        decide the outcome of a test: it has to state "PASSED" to count
        as a successful test.
        Otherwise it is considered a failed test. Non-zero return codes
        that might still happen are also considered failures.

    Display an execution summary::

        def build(bld):
            bld(features='cxx cxxprogram test', source='main.c', target='app')
            from waflib.extras import utest_results
            bld.add_post_fun(utest_results.summary)
    """
    from waflib import Logs
    import sys
    lst = getattr(bld, 'utest_results', [])
    # Check for the PASSED keyword in the last line of stdout, to
    # decide on the actual success/failure of the test.
    nlst = []
    for (f, code, out, err) in lst:
        ncode = code
        if not code:
            if sys.version_info[0] > 2:
                lines = out.decode('ascii').splitlines()
            else:
                lines = out.splitlines()
            if lines:
                ncode = lines[-1].strip() != 'PASSED'
            else:
                ncode = True
        nlst.append([f, ncode, out, err])
    lst = nlst
    # Print the pass/fail counts, as in Waf's standard unit test summary.
    if lst:
        total = len(lst)
        tfail = len([x for x in lst if x[1]])
        Logs.pprint('CYAN', 'execution summary')
        Logs.pprint('CYAN', '  tests that pass %d/%d' % (total - tfail, total))
        for (f, code, out, err) in lst:
            if not code:
                Logs.pprint('CYAN', '    %s' % f)
        Logs.pprint('CYAN', '  tests that fail %d/%d' % (tfail, total))
        for (f, code, out, err) in lst:
            if code:
                Logs.pprint('CYAN', '    %s' % f)
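On the Fortran side, a test following this convention only has to print PASSED as the last line of its output when everything checks out; a minimal sketch against the question's foo module (the program name is a placeholder):

```fortran
PROGRAM bar_test
  USE foo
  IMPLICIT NONE
  IF (bar() /= 0) THEN
    PRINT *, 'FAILED: bar() did not return 0'
    STOP
  END IF
  PRINT *, 'PASSED'
END PROGRAM bar_test
```

Note that the `lines[-1].strip()` comparison in the checking code also tolerates the leading blank that list-directed output (PRINT *) inserts before PASSED.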
I also add tests by convention: in the build script, just a directory has to be provided, and all files within that directory ending in _test.f90 are assumed to be unit tests, which we then try to build and run:
def utests(bld, use, path='utests'):
    """
    Define the unit tests from the programs found in the utests directory.
    """
    from waflib import Options
    for utest in bld.path.ant_glob(path + '/*_test.f90'):
        # search_procs_in_file is a project helper (defined elsewhere in the
        # build script) that reads the number of MPI processes a test needs.
        nprocs = search_procs_in_file(utest.abspath())
        if int(nprocs) > 0:
            bld(
                features = 'fc fcprogram test',
                source = utest,
                use = use,
                ut_exec = [Options.options.mpicmd, '-n', nprocs,
                           utest.change_ext('').abspath()],
                target = utest.change_ext(''))
        else:
            bld(
                features = 'fc fcprogram test',
                source = utest,
                use = use,
                target = utest.change_ext(''))
You can find unit tests defined like that in the Aotus library, where they are utilized in the wscript via:
from waflib.extras import utest_results
utest_results.utests(bld, 'aotus')
It is then also possible to build only a subset of the unit tests, for example by running
./waf build --target=aot_table_test
in Aotus. Our test coverage is a little meagre, but I think this infrastructure actually fares pretty well. A test can simply make use of all the modules in the project, and it can be easily compiled without further ado.
Now I don't know whether this is suitable for you or not, but I would think more about the integration of your tests into your build environment than about the assertion stuff. It is definitely a good idea to have a test routine in each module, which can then be easily called from a test program. I would aim for one executable per module you want to test, where each of these modules can of course contain several tests.
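That layout could be sketched as follows (all names here are illustrative): the test routine lives inside the module, next to the code it exercises, and a tiny per-module driver turns its result into PASSED/FAILED output:

```fortran
MODULE foo
CONTAINS
  FUNCTION bar() RESULT (some_output)
    INTEGER :: some_output
    some_output = 0
  END FUNCTION bar

  ! Test routine kept inside the module, so it can also
  ! exercise module-private helpers.
  SUBROUTINE foo_run_tests(nfail)
    INTEGER, INTENT(OUT) :: nfail
    nfail = 0
    IF (bar() /= 0) nfail = nfail + 1
    ! ... further checks, each incrementing nfail on failure
  END SUBROUTINE foo_run_tests
END MODULE foo

PROGRAM foo_test
  USE foo
  IMPLICIT NONE
  INTEGER :: nfail
  CALL foo_run_tests(nfail)
  IF (nfail == 0) THEN
    PRINT *, 'PASSED'
  ELSE
    PRINT *, 'FAILED: ', nfail, ' check(s)'
  END IF
END PROGRAM foo_test
```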
Upvotes: 3