Your Parallel Jobs

  • They're Parallel jobs, so startup time is longer, meaning low-volume jobs could run more slowly.

  • All Parallel stages are set to run in Sequential execution mode, as there's no way for S2PX to infer the data characteristics needed to generate partitioning specifications. Where there is a business case to do so, Developers can easily remedy this in the generated Parallel jobs.

  • S2PX potentially generates multiple Parallel jobs for each Server job, along with a coordinating Job Sequence. This decomposition is required for stage synchronisation, and each Parallel job introduces an unavoidable startup overhead. (See link)

  • Replacing Hashed Files with the DRS stage means bulk record reads could be slower, but lookups (when they utilise database indices) could be faster.

  • Anything else?
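The startup-overhead and low-volume points above can be illustrated with a small back-of-envelope model. This is a sketch using made-up timings (the 5-second startup and 60-second work figures are assumptions, not measured S2PX numbers):

```python
# Back-of-envelope model of the overhead introduced when one Server job
# is decomposed into several Parallel jobs run by a coordinating
# Job Sequence. All timings are illustrative assumptions.

def total_runtime(n_jobs: int, startup_s: float, work_s: float) -> float:
    """Total elapsed time when the same work is split across n_jobs
    Parallel jobs, each paying its own fixed startup cost."""
    return n_jobs * startup_s + work_s

# One job vs. the same work decomposed into four Parallel jobs,
# assuming ~5 s startup per job and 60 s of actual processing.
before = total_runtime(1, 5.0, 60.0)  # 65.0 s
after = total_runtime(4, 5.0, 60.0)   # 80.0 s
overhead_pct = 100 * (after - before) / before
print(f"{after - before:.0f} s extra ({overhead_pct:.0f}% slower)")
```

The fixed startup term dominates when `work_s` is small, which is why low-volume jobs are the ones most likely to appear slower after conversion; for long-running, high-volume jobs the same overhead is proportionally negligible.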
