IBM Data Management Community
Hi Naveen ... DVM is a component of the IBM Open Data Analytics for z/OS (IzODA) bundle, so any Spark client should be able to access the DVM server. Your use case seems to be moving data (ETL) from the mainframe to Hadoop. I am not sure of the size of the tables, but would you be able to do either of the following:
1) run a `SELECT * FROM tab` with an `INSERT INTO` a local table, or a CTAS (CREATE TABLE AS SELECT) into the Apache Hive metastore
2) use Apache Sqoop with the DVM JDBC driver to connect to DVM and import the data into local storage. Apache Flume allows scheduling this as a service, I believe.
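For option 1, a minimal sketch of the CTAS form on the Hive side might look like the following. All table and database names here are hypothetical placeholders; the exact name the DVM table is exposed under depends on how the virtual table is mapped into your Spark/Hive catalog.

```sql
-- Sketch only: materialize a copy of the DVM-backed table in Hive.
-- "hive_db.local_copy" and "mainframe_tab" are placeholder names.
CREATE TABLE hive_db.local_copy
STORED AS PARQUET
AS SELECT * FROM mainframe_tab;
```

Writing the copy as Parquet (or another columnar format) is usually preferable to plain text for downstream analytics, but any Hive storage format works with CTAS.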
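For option 2, a Sqoop import over JDBC could be sketched roughly as below. The JDBC URL, driver class, credentials, and table name are all placeholders, not the actual DVM values; check the DVM documentation for the correct JDBC driver class and connection string, and make sure the driver JAR is on Sqoop's classpath.

```shell
# Sketch only: import a table from DVM into HDFS via Sqoop.
# <dvm-jdbc-url> and <dvm-jdbc-driver-class> are placeholders
# to be replaced with the values from the DVM JDBC documentation.
sqoop import \
  --connect "<dvm-jdbc-url>" \
  --driver "<dvm-jdbc-driver-class>" \
  --username myuser -P \
  --table TAB \
  --target-dir /data/tab \
  --num-mappers 4
```

`--num-mappers` controls how many parallel JDBC connections Sqoop opens; for large mainframe tables you may want to tune this together with a `--split-by` column.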