Team,
I am working on a project to regularly process terabytes of legacy System z (mainframe) data (e.g. VSAM, Physical Sequential) with DataStage. DataStage will be co-located on System z, running under z/Linux. Communication between z/OS and z/Linux will be via
HiperSockets.
For Physical Sequential (PS) datasets we plan to use the Linux
zdsfs command to mount z/OS DASD for direct read by the
Sequential File Stage. The zdsfs approach comes with some limitations:
- an individual dataset can't span z/OS volumes (multi-volume datasets aren't supported)
- z/OS catalog services and auditing (security) mechanisms are bypassed
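Once a PS dataset is visible through the zdsfs mount, reading it still means handling fixed-length EBCDIC records rather than line-delimited text. A minimal sketch of that read loop is below; the mount path, LRECL of 80, RECFM=FB, and the cp037 code page are all assumptions for illustration and should be checked against the dataset's actual DCB attributes:

```python
# Sketch: read fixed-length (RECFM=FB) records from a PS dataset exposed
# through a zdsfs mount. LRECL and the cp037 EBCDIC code page are assumed
# for illustration; verify against the dataset's DCB attributes.
import io

LRECL = 80          # assumed record length (RECFM=FB)
CODEPAGE = "cp037"  # assumed EBCDIC code page

def read_records(stream, lrecl=LRECL, codepage=CODEPAGE):
    """Yield one decoded, right-trimmed string per fixed-length record."""
    while True:
        raw = stream.read(lrecl)
        if not raw:
            break
        if len(raw) < lrecl:
            raise ValueError("truncated final record: %d bytes" % len(raw))
        yield raw.decode(codepage).rstrip()

# Usage against a zdsfs-mounted dataset (the path is hypothetical):
# with open("/mnt/zos/FRANK.TEST.DATA", "rb") as f:
#     for rec in read_records(f):
#         process(rec)
```

In practice the Sequential File Stage would do this read itself; the sketch is just to show what the raw bytes under the mount look like to anything reading them directly.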
For VSAM datasets we plan to use the Complex Flat File (CFF) Stage. It appears from the documentation that VSAM can't be read natively with CFF from DataStage, so the data would first have to be copied to z/Linux. Even with HiperSockets, copying terabytes of data via FTP from z/OS to z/Linux creates a lot of data movement and will require vast amounts of persistent storage on z/Linux.
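One way to at least avoid the persistent-storage cost would be to stream the FTP transfer and process records as they arrive rather than landing the whole file first. A rough sketch, assuming the z/OS FTP server can serve the data as fixed-length binary records (host, credentials, dataset name, LRECL, and code page below are all placeholders):

```python
# Sketch: stream a dataset over FTP and process records on the fly instead
# of landing terabytes on z/Linux disk first. Host, credentials, dataset
# name, LRECL, and code page are placeholders; whether the z/OS FTP server
# can serve a given VSAM cluster this way needs to be verified.
import ftplib

LRECL = 80          # assumed record length
CODEPAGE = "cp037"  # assumed EBCDIC code page

class RecordStreamer:
    """Accumulate FTP chunks and hand complete fixed-length records to a handler."""
    def __init__(self, handler, lrecl=LRECL):
        self.buf = b""
        self.lrecl = lrecl
        self.handler = handler

    def feed(self, chunk):
        self.buf += chunk
        while len(self.buf) >= self.lrecl:
            record, self.buf = self.buf[:self.lrecl], self.buf[self.lrecl:]
            self.handler(record.decode(CODEPAGE).rstrip())

def stream_dataset(host, user, password, dsn, handler):
    ftp = ftplib.FTP(host)
    ftp.login(user, password)
    ftp.sendcmd("TYPE I")  # binary transfer, no server-side ASCII translation
    streamer = RecordStreamer(handler)
    ftp.retrbinary("RETR '%s'" % dsn, streamer.feed)
    ftp.quit()
```

This trades disk for a longer-running pipeline and doesn't remove the data movement itself, which is why I'd still prefer a way for CFF to reach the VSAM data in place if one exists.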
I am writing to perform a sanity check that this is the best approach to this challenge.
TIA,
Frank
------------------------------
Frank Fillmore
------------------------------
#DataIntegration