Thanks for the details.
I understand that you have a small amount of data (an ID) that you would like to pass as a parameter to your job, and that this ID will change with each run. As I mentioned, you will have to make this information available to the job via an input table. There are three ways to provide data to a Decision Optimization job in WML: references to remote data, data assets, and inline data. The simplest for you seems to be to send the information as inline data: you include an inline payload in the job-creation request. You can find an example using Python in the section 'Inline tabular data' at
https://medium.com/@AlainChabrier/inline-data-in-do-for-wml-jobs-e5e966e0a16c.
The example looks like this:
"input_data": [
{
"id": "inline_data.csv",
"fields" : ["name", "value"],
"values" : [
["id", "the-id-value"]
]
}
]
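If you build this request in Python, a small helper can assemble the payload from the ID. This is only a sketch: `make_inline_input` is a hypothetical helper name, and the commented-out call assumes the ibm-watson-machine-learning Python client mentioned below.

```python
def make_inline_input(run_id):
    """Build the 'input_data' entry that passes `run_id` to the job
    as a one-row key/value table named 'inline_data.csv'."""
    return {
        "input_data": [
            {
                "id": "inline_data.csv",
                "fields": ["name", "value"],
                "values": [
                    ["id", run_id],
                ],
            }
        ]
    }

# This dict goes into the 'decision_optimization' part of the
# job-creation request, e.g. with the WML Python client:
#   client.deployments.create_job(deployment_id, meta_props={
#       client.deployments.DecisionOptimizationMetaNames.INPUT_DATA:
#           make_inline_input("the-id-value")["input_data"],
#   })
```

Because the payload is just a dict, you can check it locally before sending anything to WML.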
The example above assumes that you have tabular data, organised as key-value pairs, which seems fine for an ID. But when a table is not appropriate, you should know that you can send any file you want using base64 encoding. Here's an example:
'decision_optimization': {
    'input_data': [{
        'id': 'one-var.lp',
        'content': 'bWluaW1pemUgeApzdAogICB4ID49IDIKZW5k'
    }]
}
With such a payload, a file named 'one-var.lp' will be created and the decoded content will be written to it. Here is that content:
$ echo "bWluaW1pemUgeApzdAogICB4ID49IDIKZW5k" | base64 -d
minimize x
st
x >= 2
end
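Producing such an entry from Python is straightforward with the standard-library base64 module. A minimal sketch (`make_inline_file` is just an illustrative helper name):

```python
import base64

def make_inline_file(file_id, text):
    """Base64-encode `text` and wrap it as an inline 'input_data' entry,
    so the job receives a file named `file_id` with that content."""
    content = base64.b64encode(text.encode("utf-8")).decode("ascii")
    return {"id": file_id, "content": content}

entry = make_inline_file("one-var.lp", "minimize x\nst\n   x >= 2\nend")
# entry["content"] == "bWluaW1pemUgeApzdAogICB4ID49IDIKZW5k"
```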
It is then up to your Python scripts to read the content of that file and act on it. At https://github.com/nodet/dowml/blob/11f44f01e37db84f09d5be87f983617504b906fe/src/dowml/dowmllib.py#L910, you will find some Python code that does this.
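As a minimal sketch of that model-side step: `read_id` is just an illustrative helper, and it assumes the inline table is materialised in the job's working directory as a CSV file whose header row is the 'fields' list from the payload, which is how the dowml code linked above reads its inputs.

```python
import csv
import os
import tempfile

def read_id(path="inline_data.csv"):
    """Return the value stored under the 'id' key in the key/value
    input table, or None if the key is missing."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):  # header row: name,value
            if row["name"] == "id":
                return row["value"]
    return None

# Quick local demonstration with a file written by hand:
demo = os.path.join(tempfile.mkdtemp(), "inline_data.csv")
with open(demo, "w", newline="") as f:
    csv.writer(f).writerows([["name", "value"], ["id", "the-id-value"]])
print(read_id(demo))  # the-id-value
```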
The links below are not specific to this use case, but I hope they may still be useful to you.
With respect to using Python to submit WML jobs, I can suggest the following:
- An example of code to submit existing Decision Optimization models using Python: dowml. Note that this code doesn't use inline tabular data, though.
- The documentation for the WML Python API used in dowml: https://ibm-wml-api-pyclient.mybluemix.net/
- The documentation for DO in WML: https://dataplatform.cloud.ibm.com/docs/content/DO/wml_cpd_home.html?context=cpdaas

If you would rather use a command line, please refer to cpdctl:
- Here's the documentation.
- An example of what it looks like to use cpdctl to send DO models.
I hope this helps.
------------------------------
Xavier
------------------------------
Original Message:
Sent: Wed October 06, 2021 08:46 PM
From: Anilkumar Lingaraj Biradar
Subject: Error as "Invalid request entity: input_data_references in the decision_optimization field can not be an empty array."
Thank you for the reply.
In our scenario, ideally, when the user clicks the Submit button in the UI, the model pipeline is invoked with an ID based on the selection; the pipeline then makes an API call to pull the data, and this pulled data is used as input to run the model.
Thus, I need to deploy the model to the cloud and call the deployed model programmatically, passing different IDs to get different input data for the model. Initially, to try this out, I am running the model with a default ID, calling the database to get the input data, running the optimization model, and getting the output; later I will run the model programmatically. But I am not able to create the job, as mentioned above. Could you please suggest how to proceed?
Also, if you could provide any useful links for creating and running jobs through the IBM CLI or through Python programming, that would help.
------------------------------
Anilkumar Lingaraj Biradar
Original Message:
Sent: Wed October 06, 2021 03:25 AM
From: Xavier Nodet
Subject: Error as "Invalid request entity: input_data_references in the decision_optimization field can not be an empty array."
I suspect that this comes from your model not having any input table. Each input table in the Scenario will become a 'reference to input data' in the deployed job. If you don't have any input table, then the list of data reference inputs is indeed empty. And the reason it's not allowed to create such a job is that it wouldn't have any way to change its input, always leading to the exact same result, which would be pretty useless.
You mention that the input data is "passed through API to the model". But I'm not sure how this can happen on a model that's deployed from a Scenario. To the best of my knowledge, such models only use input tables, and have no means of passing data via the REST request to run the job. You may want to provide more details on this, in case I missed something.
------------------------------
Xavier
Original Message:
Sent: Tue October 05, 2021 06:05 PM
From: Anilkumar Lingaraj Biradar
Subject: Error as "Invalid request entity: input_data_references in the decision_optimization field can not be an empty array."
While creating a job in IBM Cloud for the optimization model, I am getting the error "Invalid request entity: input_data_references in the decision_optimization field can not be an empty array." I am not passing the input data, as it's passed to the model through an API. Could you let me know how to resolve this? Thank you.
------------------------------
Anilkumar Lingaraj Biradar
------------------------------
#DecisionOptimization