The generated jobs are Parallel, so startup time is longer; low-volume jobs may therefore run more slowly than their Server equivalents.
All Parallel stages are defined to run in Sequential mode, as there's no way S2PX can infer the data characteristics needed to generate partitioning specifications. Where there is a business case to do so, developers can easily remediate this in the generated Parallel jobs.
S2PX potentially generates multiple Parallel jobs for each Server job, along with a coordinating Job Sequence. This decomposition is unavoidable due to stage synchronisation, and each Parallel job introduces an unavoidable startup overhead. (See link)
Hashed files being replaced by the DRS stage means bulk record reads could be slower, but lookups (when they utilise database indices) could be faster.