Your S2PX-generated Parallel Jobs may take longer to execute than the Server Jobs from which they were derived. There are a number of potential reasons for this…
As you’ll already be aware, Parallel Jobs have a longer startup time than Server Jobs, so low-volume jobs may run more slowly. Higher-volume jobs, however, are likely to perform better thanks to the inherent performance benefits of the Parallel engine.
All S2PX-generated jobs use Parallel stages defined to run in Sequential mode, because S2PX has no way to infer the data characteristics needed to generate partitioning specifications. Developers can easily remedy this in the generated Parallel jobs where there is a business case to do so.
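To make the trade-off concrete, here is a minimal Python sketch (purely illustrative, not DataStage syntax; all names are hypothetical) of what a key-based partitioning specification achieves: the same record set is either handled as one sequential stream, or hash-partitioned by key into streams that the Parallel engine could process concurrently on separate nodes.

```python
# Hypothetical sketch: hash partitioning vs Sequential mode.
# In Sequential mode a single worker processes every record; with a
# partitioning specification, records are spread across partitions that
# can be processed in parallel.

def hash_partition(records, key, n_partitions):
    """Assign each record to a partition based on a hash of its key column."""
    partitions = [[] for _ in range(n_partitions)]
    for rec in records:
        partitions[hash(rec[key]) % n_partitions].append(rec)
    return partitions

records = [{"cust_id": i, "amount": i * 10} for i in range(8)]

# Sequential mode: one stream carries all the records.
sequential_stream = records

# Partitioned: the same records spread across 4 streams, each of which
# the Parallel engine could execute on its own node.
streams = hash_partition(records, "cust_id", 4)

# Partitioning redistributes the records but neither drops nor duplicates any.
assert sum(len(s) for s in streams) == len(sequential_stream)
```

The same-key-same-partition property is what makes key-based partitioning safe for stages that aggregate or join on that key.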
For each Server job, S2PX could potentially generate /wiki/spaces/S2PX/pages/1525579787 (each with an unavoidable startup overhead), along with a coordinating Job Sequence. This decomposition cannot be avoided: it is required to preserve stage synchronisation.
Because Hashed Files are replaced by DRS stages, bulk record reads could be slower, but lookups could be faster where they can exploit database indices.
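The lookup side of this trade-off can be sketched with a database example (illustrative only, using SQLite; table and index names are hypothetical, and this is not what a DRS stage literally executes): a keyed reference lookup against an indexed table avoids a full scan, which is why index-backed lookups can outperform the Hashed File they replace.

```python
# Illustrative only: a keyed reference lookup against an indexed database
# table, analogous to a DRS-stage lookup replacing a Hashed File lookup.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ref (cust_id INTEGER, region TEXT)")
conn.execute("CREATE INDEX idx_ref_cust ON ref (cust_id)")
conn.executemany(
    "INSERT INTO ref VALUES (?, ?)",
    [(i, f"region-{i % 3}") for i in range(1000)],
)

# Keyed lookup: the index lets the database locate the row directly
# instead of scanning all 1000 rows.
row = conn.execute(
    "SELECT region FROM ref WHERE cust_id = ?", (42,)
).fetchone()

# The query plan confirms the index is used for the keyed lookup.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT region FROM ref WHERE cust_id = 42"
).fetchone()
```

Bulk reads show the opposite effect: `SELECT * FROM ref` must stream every row back from the database, which is typically slower than reading a local Hashed File sequentially.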