A common query is: If S2PX is generating Parallel jobs, why don’t those jobs run in Parallel mode? (i.e. Why are all stages configured to run sequentially?)

...

The design philosophy behind S2PX means that it…

  • prioritises the delivery of a working Parallel job (if at all possible) above all else,

  • does not attempt to prematurely optimise your job designs, and

  • does not attempt to guess the job designer’s intentions, but seeks to replicate, as far as technically possible, the job design unambiguously specified in each Server job.

The fundamental reason S2PX-generated Parallel jobs run in Sequential mode is that Server job designs don’t provide all the context required to identify the keys necessary to support hashed partitioning. S2PX could have attempted to interpret existing designs and guess at your partitioning keys, but this would always involve a degree of ambiguity, and that ambiguity could easily result in a non-functioning job, which is counter to our primary guiding principle.
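To make the ambiguity concrete, here is a minimal sketch, in plain Python rather than anything DataStage- or S2PX-specific, of a hash-partitioned aggregation; the data, column names, and toy hash function are all invented for illustration:

```python
from collections import defaultdict

def simple_hash(value: str) -> int:
    # Deterministic stand-in for a real partitioner's hash function.
    return sum(value.encode())

def hash_partition(rows, key, n_partitions):
    """Distribute rows across partitions by hashing the chosen key column."""
    partitions = [[] for _ in range(n_partitions)]
    for row in rows:
        partitions[simple_hash(row[key]) % n_partitions].append(row)
    return partitions

def totals_by_account(partition):
    """Aggregate within one partition, as each parallel node would."""
    totals = defaultdict(float)
    for row in partition:
        totals[row["account"]] += row["amount"]
    return dict(totals)

rows = [
    {"account": "A", "region": "EU",   "amount": 10.0},
    {"account": "A", "region": "APAC", "amount": 5.0},
    {"account": "B", "region": "EU",   "amount": 7.0},
]

# Partitioning on the aggregation key keeps each account on one node,
# so every per-partition total is final and correct.
print([totals_by_account(p) for p in hash_partition(rows, "account", 2)])
# -> [{'B': 7.0}, {'A': 15.0}]

# Partitioning on the wrong key splits account "A" across two nodes,
# producing two partial totals instead of one: a silently wrong result.
print([totals_by_account(p) for p in hash_partition(rows, "region", 2)])
# -> [{'A': 10.0, 'B': 7.0}, {'A': 5.0}]
```

A person reading the Server job would know that account is the business key here, but nothing in the job design itself says so, and partitioning on a plausible-looking wrong key corrupts the result silently rather than failing loudly.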

For this reason, optimisation is left in your hands once S2PX has delivered a working conversion. You can replace SDRS stages with Data Set stages easily enough.

You can also identify which jobs are inhibiting the end-to-end performance of your DataStage solution (which we strongly recommend you confirm using Critical Path Analysis) and only optimise those jobs when there is a demonstrated need.
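As a rough illustration of why Critical Path Analysis is worth doing first, the sketch below computes the duration-weighted longest chain through an invented job-dependency graph (plain Python; the job names, runtimes, and dependencies are hypothetical, not output from any real CPA tool):

```python
from functools import lru_cache

# Hypothetical job runtimes (minutes) and prerequisites; all names and
# numbers here are invented purely for illustration.
duration = {"extract": 12, "load_ref": 3, "transform": 45, "publish": 8}
depends_on = {
    "extract": [],
    "load_ref": [],
    "transform": ["extract", "load_ref"],
    "publish": ["transform"],
}

@lru_cache(maxsize=None)
def finish_time(job):
    """Earliest finish time: own duration plus the slowest prerequisite."""
    preds = depends_on[job]
    return duration[job] + (max(finish_time(p) for p in preds) if preds else 0)

def critical_path():
    """Walk back from the last-finishing job via its slowest prerequisites."""
    job = max(duration, key=finish_time)
    path = [job]
    while depends_on[job]:
        job = max(depends_on[job], key=finish_time)
        path.append(job)
    return list(reversed(path))

print(critical_path())         # ['extract', 'transform', 'publish']
print(finish_time("publish"))  # 65 -- the chain that gates end-to-end runtime
```

In this toy graph, load_ref never appears on the critical path, so parallelising it buys nothing end-to-end; transform is where optimisation effort would actually pay off.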