Unit Testing

Automated testing is an important part of Parasol's build process, and can help to ensure that new changes do not have unexpected consequences for existing code.

We use our own command-line script, Flute, to run tests and evaluate their success. Flute can be run directly from the terminal, but is primarily intended for use in CMake build scripts to ensure that tests are being run as part of the build and release process.

A typical run of Flute from the terminal can look like this:

parasol tools/flute.fluid file=src/fluid/tests/test_array.fluid --log-warning

Usage in CMake is straightforward with the following template:

flute_test (target_name "flute_file.fluid")

The target_name is a unique name for the test and flute_file.fluid is a reference to the test script.

Try to avoid bundling all your tests into one file. Ideally, your tests will be grouped into common categories that can be broken out across a number of files. Use as many calls to flute_test() as necessary, and give each a useful target_name for CMake's test report.
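
For example, a project might register one target per category of tests (the file names below are purely illustrative):

# One Flute target per test category
flute_test (core_strings "tests/test_strings.fluid")
flute_test (core_arrays "tests/test_array.fluid")
flute_test (net_http "tests/test_http.fluid")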

Test Scripts

Flute uses annotations to identify test functions and control test execution. When Flute loads a test script, it scans for annotated functions and executes them according to their configuration.

Basic Test Structure

The simplest test file consists of functions annotated with @Test:

@Test function testBasicFunctionality()
   result = myFunction(10)
   assert(result is 20, "Expected 20, got " .. result)
end

@Test function testEdgeCases()
   assert(myFunction(0) is 0, "Zero case failed")
   assert(myFunction(-5) is -10, "Negative case failed")
end

The @Test Annotation

The @Test annotation marks a function as a test case. It supports several optional arguments:

Argument   Description
name       Custom display name for the test.
priority   Numeric value controlling test execution order (lower values run first).
timeout    Maximum time in seconds before the test is marked as failed.
labels     Tags for filtering tests (reserved for future use).

Examples:

@Test function testSimple()
   -- Basic test with no options
end

@Test(priority=1) function testFirst()
   -- Runs before tests with higher priority values
end

@Test(priority=2, name='Custom Test Name') function testSecond()
   -- Runs after priority=1 tests, displayed with custom name
end
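
The timeout argument follows the same calling pattern as the other options. A brief sketch, where the five second limit and slowCalculation() are illustrative:

@Test(timeout=5) function testSlowOperation()
   -- Marked as failed if it does not complete within five seconds
   result = slowCalculation()
   assert(result, "slowCalculation() returned no result")
end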

Lifecycle Annotations

Flute provides lifecycle hooks for setup and teardown operations:

Annotation    Description
@BeforeAll    Runs once before any tests execute. Receives a State table with folder (script directory).
@AfterAll     Runs once after all tests complete.
@BeforeEach   Runs before each individual test.
@AfterEach    Runs after each individual test.

Example with lifecycle hooks:

@BeforeAll function setup(State)
   -- State.folder contains the directory of the test script
   global glTestResource = loadResource(State.folder .. "test_data.xml")
end

@AfterAll function cleanup()
   glTestResource = nil
end

@BeforeEach function resetState()
   glTestCounter = 0
end

@Test function testWithSetup()
   -- glTestResource is available, glTestCounter is reset
   glTestCounter++
   assert(glTestCounter is 1)
end
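
@AfterEach is not shown above, but follows the same pattern and is a convenient place to discard anything created by the preceding test. A minimal sketch, assuming a hypothetical glTempFile set up during a test:

@AfterEach function tearDown()
   -- Release any temporary state left behind by the previous test
   if glTempFile then
      glTempFile = nil
   end
end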

Conditional Test Execution

@Disabled

Skip a test entirely with an optional reason:

@Test; @Disabled(reason='Requires further development')
function testNotReady()
   -- This test will be skipped
end

@Requires

Conditionally run tests based on system capabilities:

Requirement   Description
network       Network module availability.
ssl           SSL support in the network module.
audio         Audio module availability.
display       Display/graphics module availability.

Examples:

@Test; @Requires(ssl=true)
function testSSLCommunication()
   -- Only runs if SSL is available
end

@Test; @Requires(network=true)
function testNetworkFeature()
   -- Only runs if network module is available
end

@Test; @Requires(display=false)
function testHeadlessOnly()
   -- Only runs when display is NOT available (headless mode)
end

Assertions and Error Handling

Tests use assertions to verify expected behaviour. A test passes if it completes without throwing an exception.

The assert() function raises an exception if the condition is false:

@Test function testAssertions()
   result = computeValue()
   assert(result is 42, "Expected 42, got " .. result)

   -- Multiple assertions in one test
   assert(result > 0, "Result should be positive")
   assert(type(result) is "number", "Result should be a number")
end

Use error() to explicitly fail a test with a message:

@Test function testExplicitFailure()
   if someCondition then
      error("Test failed: unexpected condition encountered")
   end
end

Use pcall() to test that code correctly raises exceptions:

@Test function testExpectedError()
   status, err = pcall(function()
      functionThatShouldFail()
   end)
   assert(not status, "Expected function to raise an error")
end
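
If the content of the error matters, the second value returned by pcall() can be checked as well. A short sketch, where the 'invalid input' message is illustrative:

@Test function testErrorMessage()
   status, err = pcall(function()
      functionThatShouldFail()
   end)
   assert(not status, "Expected function to raise an error")
   assert(string.find(tostring(err), "invalid input"), "Unexpected error message: " .. tostring(err))
end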

Logging Output

Use logOutput() to record diagnostic information during test execution:

@Test function testWithLogging()
   logOutput("Starting computation...")
   result = complexCalculation()
   logOutput("Result: " .. result)
   assert(result is expected)
end

Best Practices

  1. One concern per test - Each test function should verify a single behaviour or scenario
  2. Descriptive names - Function names should clearly describe what is being tested
  3. Independent tests - Tests should not depend on the execution order of other tests
  4. Clean up resources - Use @AfterAll or @AfterEach to release resources
  5. Use priority sparingly - Only set priorities when test order genuinely matters
  6. Test edge cases - Include tests for boundary conditions and error scenarios (see the sketch below)
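
As an example of points 3 and 6, the test below builds all of its own input and covers boundary and failure conditions in one place (parseInteger() is a placeholder function):

@Test function testParseBoundaries()
   -- Independent of other tests; exercises boundary and error conditions
   assert(parseInteger("0") is 0, "Zero case failed")
   assert(parseInteger("-1") is -1, "Negative case failed")
   status = pcall(function() parseInteger("not a number") end)
   assert(not status, "Invalid input should raise an error")
end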