For the first time, I am attempting to use distributed Optim TDM to access data in a Hadoop Hive database via an ODBC DSN. I was able to set up the new DSN using the Cloudera ODBC Driver for Apache Hive, create a new DB alias within the Optim Configuration UI, and create a new access definition for a specific table within the Optim UI (PR0TOOL).

Within my new access definition, I can successfully bring up the "Table Specifications" and see the column definitions, so I know Optim can "see" the table (see attached screen print). However, when I try to run an Extract using this AD, I immediately get an error. In the log file I see these messages:
ErrMsg : COL_CMTransformCompile failed
Line : 4456 Function: XfbLclBuildColumnTransform Module: ..\SrcRes\XFBCMAIN.c
RetCode: XFBERR_COL_CM_TRANSFORM_COMPILE_FAILED(10250) Transform Compile Failed
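In case it helps with diagnosis: one way to rule the DSN itself in or out is to query the table over the same ODBC connection from outside Optim. Below is a minimal pyodbc sketch of that kind of check; the DSN name, credentials, and database/table names are placeholders, not my actual values.

import pyodbc

# Placeholder DSN and credentials -- substitute real values.
# autocommit=True avoids transaction calls that Hive does not support.
conn = pyodbc.connect("DSN=MyHiveDSN;UID=myuser;PWD=mypass", autocommit=True)
cur = conn.cursor()

# Pull back a few rows plus the column metadata Optim should be seeing.
cur.execute("SELECT * FROM mydb.mytable LIMIT 5")
for name, type_code, *_ in cur.description:
    print(name, type_code)
for row in cur.fetchall():
    print(row)

conn.close()

If a query like this succeeds, the driver and DSN are presumably fine and the failure is somewhere in the Extract/transform step itself.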
My question is: does anyone here have experience accessing Hadoop Hive with TDM, and if so, can you share anything special you needed to do to make it work?
For example, I know that in order to access a database in an RDBMS like Oracle or SQL Server, you must install a set of Optim stored procedures into the database. I have not done that here. Are there stored procedures that need to be installed into Hive, and if so, what are they and where do I find them?
Thanks for any help you can provide!
Rob Searson, Progressive
------------------------------
Rob Searson
Application Developer
Progressive Insurance
Mayfield Village OH
------------------------------
#InfoSphereOptim #Optim