
Summary

Hashed File stages represent a hashed file, a file that uses a hashing algorithm to distribute records across one or more groups on disk. The Hashed File Stage can be used to access UniVerse files.

IBM documentation

Server Stage: Hashed File Stages - IBM Documentation

Parallel Stage: DRS Connector stage - IBM Documentation

Conversion Notes

Key points you should know:

  • Server Hashed File stages are converted to the Parallel DRS Connector stage, which means that Server jobs that stored their data in Hashed Files are converted to Parallel jobs that read from / write to the storage mechanisms supported by the DRS Connector (Oracle, DB2, or ODBC).

  • Hashed files which are loaded or maintained by the S2PX solution are stored in your database using an S2PX-specific format.

  • Any persistent data stored in Hashed Files (i.e. data that enables one execution of a job to communicate changes to subsequent executions of that job) will need to be loaded into your DRS-compatible data storage platform; a minimal loading sketch follows this list.
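The exact S2PX storage format is product-specific, so the following is only an illustrative sketch. It assumes the hashed file has been exported to a pipe-delimited text file with one key field and one data field per record, an ODBC DSN named S2PX_TARGET pointing at the DRS-compatible database, and a hypothetical two-column table HF_LOOKUP; substitute the definitions your S2PX installation actually uses.

```python
# Illustrative only: load a delimited export of a hashed file into a
# DRS-compatible database over ODBC. The DSN, table name, and column
# layout are assumptions, not the actual S2PX format.
import csv

import pyodbc

def load_hashed_file_export(export_path, dsn="S2PX_TARGET"):
    conn = pyodbc.connect(f"DSN={dsn}")
    cursor = conn.cursor()
    with open(export_path, newline="") as f:
        for key, record in csv.reader(f, delimiter="|"):
            # One hashed-file record becomes one row, keyed the same
            # way the Server job keyed the hashed file.
            cursor.execute(
                "INSERT INTO HF_LOOKUP (RECORD_KEY, RECORD_DATA) VALUES (?, ?)",
                key, record,
            )
    conn.commit()
    conn.close()
```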

Structural changes

The Hashed File stage can be used to both read and write, sometimes simultaneously within the same job design, which is not a pattern supported by the DRS stage. The way these patterns are handled is described in Hashed File Job Restructuring.

See Parallel Job Structural Differences.
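The restructuring itself is performed by the conversion, but the underlying split is easy to illustrate. The sketch below (plain Python with hypothetical names; it is not the S2PX implementation) shows how a single pass that reads from and writes to the same store becomes a read-only pass over a stable snapshot followed by a separate write phase, which is the ordering a DRS-backed job requires:

```python
# Illustrative only: the simultaneous read/write pattern of a Hashed
# File stage, split into the two-phase form a DRS-backed job needs.
def run_job(input_rows, lookup):
    staged = {}
    output = []
    # Phase 1: read-only pass. Reads see a stable snapshot of the
    # reference data; writes are staged rather than applied.
    for row in input_rows:
        row["ref"] = lookup.get(row["key"])   # read from the snapshot
        staged[row["key"]] = row["value"]     # stage the write
        output.append(row)
    # Phase 2: apply the staged writes once reading has finished, so
    # no read can observe a row written in the same pass.
    lookup.update(staged)
    return output
```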

Server features not supported

Feature: Clear file before writing

Asset Query: Hashed File Not Clearing Before Writing

Comment: This query detects Hashed Files that do not have the 'Clear file before writing' option checked. This means that the data storage layer replacing your hashed file will require an appropriate mechanism for preserving existing records between job executions (a sketch of both behaviours follows).
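Where the option is checked, the database equivalent of clearing the hashed file is a delete of the target table before loading; where it is unchecked, the load must preserve the rows already present. A minimal sketch of both cases, reusing the hypothetical HF_LOOKUP table and pyodbc cursor from the earlier example:

```python
# Illustrative only: database equivalents of the 'Clear file before
# writing' option, against the hypothetical HF_LOOKUP table.
def write_records(cursor, records, clear_before_writing):
    if clear_before_writing:
        # Option checked: empty the table first, just as the hashed
        # file was emptied at the start of each run.
        cursor.execute("DELETE FROM HF_LOOKUP")
        cursor.executemany(
            "INSERT INTO HF_LOOKUP (RECORD_KEY, RECORD_DATA) VALUES (?, ?)",
            records,
        )
    else:
        # Option unchecked: existing rows must survive the run, so
        # update keys that already exist and insert the ones that don't.
        for key, data in records:
            cursor.execute(
                "UPDATE HF_LOOKUP SET RECORD_DATA = ? WHERE RECORD_KEY = ?",
                data, key,
            )
            if cursor.rowcount == 0:
                cursor.execute(
                    "INSERT INTO HF_LOOKUP (RECORD_KEY, RECORD_DATA) VALUES (?, ?)",
                    key, data,
                )
```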

See Also
