...

This involves running your Job from either the DataStage Designer (thick client) or DataStage Flow Designer (browser-based client) with the reserved MettleCI Unit Test environment variable set to its 'Interception' value. This causes MettleCI to 'sniff' the data flowing down the input and output links referenced in your unit test specification and record it into the unit test data file associated with each link. This permits the capture of structured and unstructured data from both batch and streaming data sources. Note that the data appearing on your output links is also captured, as the current definition of 'expected' output, into the relevant output data files.
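The same run can also be triggered outside the Designer clients. The sketch below assumes IBM's `dsjob` utility is available on the engine tier; the variable name `$MCI_UNIT_TEST`, the project name, and the job name are placeholders, not the actual reserved names — substitute the reserved MettleCI Unit Test variable and the names from your own project.

```shell
# Hedged sketch: run a Job in interception mode from the command line.
# PROJECT, JOB, and $MCI_UNIT_TEST are placeholders, not real names.
PROJECT=dstage1
JOB=CustomerLoad

if command -v dsjob >/dev/null 2>&1; then
  # Pass the reserved environment variable as a job parameter
  # set to its 'Interception' value, then wait for job completion.
  dsjob -run -param '$MCI_UNIT_TEST=Interception' -jobstatus "$PROJECT" "$JOB"
else
  echo "dsjob not found; command shown for illustration only"
fi
```

Running the Job this way has the same effect as the Designer route: MettleCI intercepts the data on the links named in your unit test specification.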

High Volume Test Data

Capturing large volumes of test data will prevent that data from being edited interactively using MettleCI Workbench’s Unit Test Editor. In these circumstances you will be presented with a message of the form:

...

You should also consider the impact of high volumes of data on your Git repository. Cloud-hosted Git services may limit the size of individual files you can commit, or the total volume of data in the repository. Most DataStage and MettleCI users have self-hosted Git repositories, which may not impose the same restrictions, but storing high volumes of data still carries a performance cost: when you commit changes to Git using Workbench, the repository must be cloned first, which takes longer the more data the repository holds. The same is true for your CI/CD pipeline.
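Before committing captured test data, it can help to check how much data the repository already carries. The commands below are standard Git and make no assumptions about your hosting provider; the cut-off of ten blobs is illustrative.

```shell
# Summarise the repository's object store, with human-readable sizes.
git count-objects -vH

# List the ten largest blobs ever committed, largest first:
# walk every object reachable from any ref, look up each object's
# type and size, keep only blobs, and sort by size descending.
git rev-list --objects --all \
  | git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' \
  | awk '/^blob/ {print $3, $4}' \
  | sort -rn \
  | head -10
```

If oversized unit test data files show up here, they inflate every clone even after being deleted from the working tree, so it is worth catching them before they enter history.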

...

Step By Step Guide

This process is identical to the Execute Unit Test process, except that you select the 'Interception' value for the Unit Testing parameter. Note that running a job in interception mode overwrites any existing unit test data files.

...