
Because Hashed File stages are not supported in Parallel jobs, an alternative solution must be provided to support the use of data stored in Hashed Files in a Parallel job. For each Hashed File stage encountered in a Server job, S2PX will:

  • Translate the Hashed File stage to its Parallel equivalent (a DRS stage), and

  • Generate a new, entirely separate utility Server job intended for the one-off loading of Hashed File data into a DRS-compatible database table.

This page discusses the structure and usage of these Hashed File migration jobs.

Behaviour

These migration jobs will read data from Hashed Files and write it to database tables (each job will drop the target table if it already exists, then create it). There will be a one-to-one relationship between Hashed Files and tables. There will also be a one-to-one relationship between original jobs (those that include Hashed File stages) and migration jobs.

See Hashed File Database Tables to understand how the database tables will be named and the structure of the data in the tables.
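
As a rough illustration of the behaviour described above, the sketch below mimics the drop-then-create-then-load pattern each migration job follows. It uses Python with SQLite purely for demonstration; the real jobs write through a DRS Connector, and the actual table names and column layout are those defined on the Hashed File Database Tables page (the two-column key/value shape below is an assumption for illustration only).

    import sqlite3

    def migrate_hashed_file(conn, table, rows):
        # The migration job drops the table if it already exists, then recreates it,
        # so each run produces a fresh copy of the Hashed File's data.
        cur = conn.cursor()
        cur.execute(f'DROP TABLE IF EXISTS "{table}"')
        cur.execute(f'CREATE TABLE "{table}" (hash_key TEXT, hash_value TEXT)')  # assumed layout
        cur.executemany(f'INSERT INTO "{table}" VALUES (?, ?)', rows)
        conn.commit()

    # One table per Hashed File (hypothetical names and data).
    conn = sqlite3.connect(":memory:")
    migrate_hashed_file(conn, "S2PX_HF_CUSTOMER_LOOKUP",
                        [("C001", "Smith|VIC"), ("C002", "Jones|NSW")])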

Only Hashed Files that had at least one input link will be migrated.

Each migration job will be wrapped in a Sequence that includes a single Activity Stage. This Sequence exists to receive the parameters that the original job would have received and then supply the necessary parameters to the migration job. For example, the Sequence will generate and provide a couple of parameters to the migration job, which the job will use to set the database table name.
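
A hypothetical sketch of that hand-off follows. The parameter names and the table-naming rule below are placeholders (the real rules are those on the Hashed File Database Tables page); the sketch only illustrates the Sequence receiving the original job's parameters and deriving extra ones for the migration job.

    def sequence_parameters(original_params, job_name, hashed_file_name):
        # The wrapper Sequence passes through whatever the original job received,
        # plus derived values the migration job uses to build the table name.
        derived = {
            "HF_TABLE_PREFIX": "S2PX",                            # assumed prefix
            "HF_TABLE_SUFFIX": f"{job_name}_{hashed_file_name}",  # assumed naming rule
        }
        return {**original_params, **derived}

    print(sequence_parameters({"SourceDir": "/data/in"}, "LoadCustomers", "hfCustLookup"))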

Structure

Migration job

Each migration job will comprise one or more separate ‘data flows’. Each flow will comprise a Hashed File stage with one or more output links; each output link will be followed by a Transformer (to massage the data being written into the S2PX format used for storage in Hashed File database tables) and then a DRS Connector. Here’s an example of one of these flows:

Each Hashed File stage (with at least one input link) in the original job will lead to the generation of a Hashed File stage (of the same name) in the migration job. The number of input links that the original Hashed File stage has will match the number of output links that the Hashed File stage has in the migration job.
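
The sketch below restates that generation rule in code. The stage representation is invented for illustration; S2PX's internal model is not documented here.

    def build_migration_flows(original_hashed_file_stages):
        flows = []
        for stage in original_hashed_file_stages:
            if not stage["input_links"]:
                continue  # only Hashed Files with at least one input link are migrated
            flows.append({
                "hashed_file_stage": stage["name"],         # same name as the original
                "output_links": len(stage["input_links"]),  # one output link per original input link
                "per_link_chain": ["Transformer", "DRS Connector"],
            })
        return flows

    print(build_migration_flows([
        {"name": "hfCustLookup", "input_links": ["ln_write"]},
        {"name": "hfRefOnly", "input_links": []},  # read-only: not migrated
    ]))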

Sequence

The wrapper Sequence will simply include one Activity Stage that will call the migration job. Example:

Usage

To migrate the Hashed Files, each Sequence provided should be called in the same manner as the original job. Therefore, if the original job was always called with a fixed set of parameter values (or none at all), the migration Sequence should be called once with that same set of parameter values. If the original job was run multiple times with differing sets of parameter values, the Sequence should be called once with each set of parameter values. This will ensure that all Hashed Files are migrated appropriately.
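
For example, one way to drive this (assuming the DataStage dsjob command-line interface is available on the engine tier, and using placeholder project, Sequence, and parameter names) would be to run the Sequence once per historical parameter set:

    import subprocess

    # Each entry is one set of parameter values the original job was run with.
    parameter_sets = [
        {"SourceDir": "/data/regionA"},
        {"SourceDir": "/data/regionB"},
    ]

    for params in parameter_sets:
        cmd = ["dsjob", "-run", "-wait"]
        for name, value in params.items():
            cmd += ["-param", f"{name}={value}"]
        cmd += ["MyProject", "MigrateLoadCustomersSeq"]  # placeholder project/Sequence names
        subprocess.run(cmd, check=True)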

Examples

Original Server job

Hashed File migration job
