...

  • As you’ll already be aware, the startup time for Parallel Jobs is longer than for Server Jobs, so low-volume jobs may run more slowly. Higher-volume jobs, however, are likely to perform better thanks to the inherent performance benefits of the Parallel engine.

  • For each Server job, S2PX could potentially generate /wiki/spaces/S2PX/pages/1525579787 (each with an unavoidable startup overhead), along with a coordinating Job Sequence. This decomposition is unavoidable because of the need to synchronise stages.

  • Because hashed files are replaced by the DRS stage, bulk record reads could be slower, but lookups (when they utilise database indices) could be faster.

  • All S2PX-generated jobs use Parallel stages defined to run in Sequential mode, as there is no way for S2PX to infer the data necessary to generate partitioning specifications. Developers can easily remediate this in the generated Parallel jobs where there is a business case to do so.

...