
A common query is: If S2PX is generating Parallel jobs, why don’t those jobs run in Sequential mode?

The design philosophy behind S2PX means that it…

  • prioritises the delivery of a working Parallel job (if at all possible) above all else

  • does not attempt to prematurely optimise your design

  • does not attempt to guess the job Designer’s intentions, but seeks to replicate, as far as technically possible, the job design unambiguously specified in each Server job

The fundamental reason is that Server job designs don’t provide all the context required to identify the keys necessary to support hashed partitioning.

S2PX could have attempted to interpret existing designs and guess your partitioning keys, but this will always involve a degree of ambiguity, and that ambiguity could easily result in a non-functioning job. For this reason, S2PX leaves its generated Parallel jobs running in Sequential mode rather than risk a broken design.
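To see why the choice of partitioning key matters, here is a minimal illustrative sketch (not S2PX or DataStage code; the column names and partition count are invented). Hash partitioning sends each row to a partition based on the hashed values of its key columns, so two different key choices can scatter related rows differently:

```python
from zlib import crc32

# Invented sample rows: two orders for the same customer, one for another.
rows = [
    {"cust_id": 101, "order_id": 9001, "amount": 25.0},
    {"cust_id": 101, "order_id": 9002, "amount": 40.0},
    {"cust_id": 202, "order_id": 9003, "amount": 15.0},
]

def partition(row, keys, n_partitions=4):
    """Assign a row to a partition by hashing the named key columns."""
    key_bytes = "|".join(str(row[k]) for k in keys).encode()
    return crc32(key_bytes) % n_partitions

# Partitioning on cust_id keeps a customer's orders on the same partition;
# partitioning on order_id may scatter them. An aggregation grouped on
# cust_id only works correctly under the first scheme.
by_customer = [partition(r, ["cust_id"]) for r in rows]
by_order = [partition(r, ["order_id"]) for r in rows]
```

Nothing in a Server job's design says which of these keys the developer intended, which is exactly the ambiguity S2PX refuses to guess at.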

You can replace SDRS stages with Data Set stages easily enough.

You can identify which jobs are inhibiting the end-to-end performance of your DataStage solution (which we strongly recommend you confirm using Critical Path Analysis) and only optimise those jobs where there is a demonstrable benefit.
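The idea behind Critical Path Analysis can be sketched as a longest-path calculation over the job dependency graph. This is a hypothetical illustration (the job names, runtimes, and dependencies are invented): only jobs on the longest chain bound end-to-end runtime, so optimising anything off that chain gains nothing.

```python
from functools import lru_cache

# Invented example: job runtimes (minutes) and downstream dependencies.
runtimes = {"extract": 10, "load_a": 40, "load_b": 5, "report": 8}
successors = {
    "extract": ["load_a", "load_b"],
    "load_a": ["report"],
    "load_b": ["report"],
    "report": [],
}

@lru_cache(maxsize=None)
def longest_from(job):
    """Total runtime of the longest chain of jobs starting at `job`."""
    tails = [longest_from(s) for s in successors[job]]
    return runtimes[job] + max(tails, default=0)

# The critical path is extract -> load_a -> report (10 + 40 + 8 = 58).
# Optimising load_b, which is off that path, would not shorten the run.
critical_total = longest_from("extract")
```

In practice you would build the graph from your scheduler's actual dependencies and measured runtimes, then focus optimisation effort only on jobs that appear on the resulting path.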
