Decision Optimization

  • 1.  Optimization model deployment - Column type mismatch for column (OPL - DB2)

    Posted Thu February 09, 2023 09:27 AM

    Hi!!!

    I have an OPL model and I am working on its deployment on CP4D. I have created the job for running the deployed model. The model has an output that is a tuple defined by two data types: a string (Pdo, its name in the tuple) and a float (varX, its name in the tuple). The output set of the model is called Results and it will be written to a table in DB2. In DB2 we have a table also called Results; it has two columns with the same names as above, defined as varchar and decimal respectively.

    I have created the connection to the DB, and I created the data asset connected to the table. When I create the job for the integration and define the sources for both the inputs and outputs of the model, I get the following message: "This data asset does not satisfy the schema required by the deployment. Column type mismatch for column Pdo. Expected: string, received varchar." Does anyone know how I can handle this situation, taking into account that DB2 does not have a string data type like the one that exists in OPL?
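    For illustration, the output structure described above might look roughly like this in OPL (the index set Items and the decision variable x are hypothetical placeholders, not taken from the actual model):

        // Output tuple: a string field and a float field, matching the
        // VARCHAR and DECIMAL columns of the DB2 table "Results".
        tuple resultT {
          string Pdo;
          float  varX;
        }

        {string} Items = ...;            // hypothetical index set, read from data
        dvar float+ x[Items];            // hypothetical decision variable

        // Output set written to the DB2 table, filled after the solve.
        {resultT} Results = { <i, x[i]> | i in Items };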



    ------------------------------
    Danilo Abril Hernandez
    ------------------------------


  • 2.  RE: Optimization model deployment - Column type mismatch for column (OPL - DB2)

    Posted Fri February 10, 2023 05:58 AM

    The opinion of our CP4D specialists is that it should just be a warning.

    Is that the case, or are you experiencing issues?



    ------------------------------
    Vincent Beraudier
    ------------------------------



  • 3.  RE: Optimization model deployment - Column type mismatch for column (OPL - DB2)

    Posted Fri February 10, 2023 08:01 AM

    Hi Vincent

    Thanks for your answer. It is not just a warning. At first I thought so too, so I proceeded to run the job, but the run failed and I got the following message in the log file:

            "errors": [{
              "code": "PROCESSING",
              "message": "Unexpected CAMS communication issue accessing attachment 'results.csv' for job '8d869751-dab7-4c3e-ad62-2627aa315814' due to 'CDICO9999E: JDBC driver error occurred: Sql syntax error: \"Pdo\" is not valid in the context where it is used.. SQLCODE=-206, SQLSTATE=42703, DRIVER=4.28.11'."
            }]

    In my experience the string and varchar data types are compatible and should not generate any conflict, but I was wrong.

    Thank you in advance for your help!!!

    Danilo



    ------------------------------
    Danilo Abril Hernandez
    ------------------------------



  • 4.  RE: Optimization model deployment - Column type mismatch for column (OPL - DB2)

    Posted Fri February 10, 2023 11:18 AM

    We are looking at this on our side to check how we use DB2 for output.

    We think there may be an issue in your payload: it contains "id": "stats.csv", but the output table is Results, which will produce results.csv, so an error will certainly be raised. Can you retry after changing the payload to "id": "results.csv"?



    ------------------------------
    Vincent Beraudier
    ------------------------------



  • 5.  RE: Optimization model deployment - Column type mismatch for column (OPL - DB2)

    Posted Fri February 10, 2023 02:47 PM

    Vincent

    Thank you for your help. We ran several tests and the problem is related to the column names. We changed "Pdo" to "PDO" in the output tuple and it works! CP4D wrote the set "Results" into the table in DB2. The best practice is to use upper-case names for all elements of the tuples.
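    A likely explanation: DB2 folds unquoted identifiers to upper case, so the table's columns are actually stored as PDO and VARX, and the generated SQL referencing the mixed-case name "Pdo" fails with SQLCODE=-206. A minimal sketch of the renamed tuple, with both fields upper-cased per the best practice above:

        tuple resultT {
          string PDO;    // matches the upper-case column name in DB2
          float  VARX;   // matches the upper-case column name in DB2
        }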

    Now I have another question. I have noticed that every time I run the job, CP4D deletes all the records in the table. In my project I want to keep the results for each data set I use to run the mathematical model (this is unrelated to my previous test). I will use a column as an Id to identify each scenario, but if CP4D deletes the whole content of the tables before it writes to them, I do not know how I can keep the results of previous runs. Any comments?

    Thanks again for your help,

    Danilo



    ------------------------------
    Danilo Abril Hernandez
    ------------------------------



  • 6.  RE: Optimization model deployment - Column type mismatch for column (OPL - DB2)

    Posted Mon February 13, 2023 05:21 AM

    The publish mode can be controlled for some data sources via the write_mode key in the location dictionary, so you should use something like:

    "output_data_references": [{
      "connection": {},
      "id": "stats.csv",
      "location": {
        "href": "https://api.dataplatform.cloud.ibm.com/v2/assets/6ab33fd5-eaec-48d9-a391-b2904db9aaf0?space_id=7e48d0d4-703e-4459-984a-3168ed23b6fa",
        "id": "6ab33fd5-eaec-48d9-a391-b2904db9aaf0",
        "write_mode": "append"
      },
      "type": "data_asset"
    }]

    write_mode values can be append, create, replace, truncate afaik.
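    If the goal is to keep the results of previous runs (as in the scenario-Id use case above), "append" should leave the existing rows in place, whereas "replace" and "truncate" clear the table before writing.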



    ------------------------------
    Vincent Beraudier
    ------------------------------



  • 7.  RE: Optimization model deployment - Column type mismatch for column (OPL - DB2)

    Posted Mon February 13, 2023 04:35 PM

    Hi Vincent

    Thank you for your help. I am working in the CP4D UI and my model is written in OPL. Where, inside my deployment, would I edit the "write_mode"?

    Thanks again for your time and help,

    Danilo



    ------------------------------
    Danilo Abril Hernandez
    ------------------------------