A unit test specification’s input and output path names (CSV file names) are, by default, formed from the stage and link names of the input/output being intercepted (stage-link.csv). These names can be anything, though, as long as they are used consistently within a given test case. What matters are the stage and link names, as the harness needs these to identify which inputs and outputs to replace with CSV files. For jobs that do not have containers, using the stage and link names as-is is sufficient, but containers add potential ambiguity.
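
For illustration, here is a minimal sketch of a given: entry, using the sf_orders stage and ln_filter link from the example further below. The path value shown is the default, but it could be renamed to anything, provided the same name is used consistently within the test case:

Code Block
---
given:
- stage: "sf_orders"                # must match the job's stage name
  link: "ln_filter"                 # must match the link being intercepted
  path: "sf_orders-ln_filter.csv"   # default name: stage-link.csv; any consistent name works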

This ambiguity is resolved by prefixing the container invocation name, separated by a dot, to the front of the stage name, giving a name of the form containerInvocation.stage-link.csv. In the case of nested containers, the outer container name is prepended in front of the inner one, to as many levels as necessary to model the nesting accurately.
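
As a sketch, assuming a hypothetical container invocation named cFlagCustomers containing the stage ds_cust, the corresponding entry and CSV name would take this form (a further nesting level, say a hypothetical outer invocation cOuter, would yield cOuter.cFlagCustomers.ds_cust):

Code Block
---
given:
- stage: "cFlagCustomers.ds_cust"              # containerInvocation.stage
  link: "ln_cust"
  path: "cFlagCustomers.ds_cust-ln_cust.csv"   # containerInvocation.stage-link.csv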

Info

Note: Container inputs and outputs themselves are not modeled or intercepted since they are really just connectors with no physically manifested input or output.

For example, consider this container, which has container input/output (which does not need to be modeled in the YAML) as well as actual physical input/output, which we do need to intercept and test against.

...

Here is a job using it:

...


If we generate a unit test spec for the monolithic (container-free) version of this job, it looks like this. As you can see, all the stages are present, as expected. Note particularly the stage ds_cust in the given: and ds_flaggedCust in the then: sections; these are the stages inside the container that we need to manually add to the containerized job's test spec:

Code Block
---
given:
- stage: "sf_orders"
  link: "ln_filter"
  path: "sf_orders-ln_filter.csv"
- stage: "ds_cust"
  link: "ln_cust"
  path: "ds_cust-ln_cust.csv"
when:
  job: "monolithic_v1"
  controller: null
  parameters: {}
then:
- stage: "ds_flaggedCust"
  link: "ln_flagged"
  path: "ds_flaggedCust-ln_flagged.csv"
  cluster: null
  ignore: null
- stage: "sf_samples"
  link: "ln_samples"
  path: "sf_samples-ln_samples.csv"
  cluster: null
  ignore: null
- stage: "sf_addressedOrders"
  link: "ln_addressed"
  path: "sf_addressedOrders-ln_addressed.csv"
  cluster: null
  ignore: null
- stage: "sf_summary"
  link: "ln_summary"
  path: "sf_summary-ln_summary.csv"
  cluster: null
  ignore: null

...

Info

Note: You do not always need to “undo” containers like this, but doing so can make it clearer how to add the missing stages your first few times.

The YAML we need can be created by taking the originally generated YAML and adding the appropriate stage/link/path entries for the container's internal stages.
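
As a sketch only (the container invocation name cFlagCustomers is hypothetical here; substitute the real invocation name from your job design), the added entries would follow the container-prefixed naming described above:

Code Block
---
given:
- stage: "cFlagCustomers.ds_cust"          # added: input stage inside the container
  link: "ln_cust"
  path: "ds_cust-ln_cust.csv"              # can reuse the CSV from the monolithic spec
then:
- stage: "cFlagCustomers.ds_flaggedCust"   # added: output stage inside the container
  link: "ln_flagged"
  path: "ds_flaggedCust-ln_flagged.csv"
  cluster: null
  ignore: null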

Here is the YAML for the ProcessOrders job after we modify it:

...