The latter is correct. Spark is only used for input/output; CPLEX doesn't provide a direct integration with Spark, so it won't distribute a single solve across the cluster. On the other hand, even though CPLEX knows nothing about Spark, you can certainly build a platform that distributes
multiple independent solves across a cluster, each solve using a single node.
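To illustrate the pattern, here is a minimal, hypothetical sketch. It uses Python's `concurrent.futures` as a stand-in for a cluster's workers and a trivial placeholder function instead of a real CPLEX model; on an actual Spark cluster you would ship each scenario's data to an executor and run a docplex `model.solve()` there, one solve per node.

```python
from concurrent.futures import ThreadPoolExecutor

def solve_scenario(scenario):
    """Placeholder for a per-node CPLEX solve.

    On a real cluster this would build a docplex model for `scenario`
    and call model.solve() locally on one node; here it just returns
    a dummy 'objective' so the pattern is runnable.
    """
    demand, capacity = scenario
    return min(demand, capacity)  # stand-in for the optimal objective value

if __name__ == "__main__":
    scenarios = [(10, 7), (5, 9), (12, 12)]
    # Each scenario is solved independently by a separate worker,
    # mirroring "one solve per node" across a cluster.
    with ThreadPoolExecutor(max_workers=3) as pool:
        objectives = list(pool.map(solve_scenario, scenarios))
    print(objectives)  # [7, 5, 12]
```

The key point is that the scenarios are independent, so the platform only has to schedule whole solves; CPLEX itself never needs to know it is running inside a cluster.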
Note that DOcplexcloud is being replaced with
Decision Optimization for Watson Studio. You can find more information in the
DO for WS section of
https://medium.com/@AlainChabrier/decision-optimization-education-a50cada93856.
------------------------------
Xavier Nodet
Program Manager, Development
CPLEX Optimization Studio
------------------------------
Original Message:
Sent: Thu April 30, 2020 06:48 AM
From: Edward Umpfenbach
Subject: Spark Cplex
Hello. It's been a little while since I developed an optimization app and I'm trying to come up to speed on the newest IBM capability, specifically DOcplexcloud. I see evidence from some of the marketing materials that the newest tools integrate with Spark. My question -- Is it correct to say that if I had access to Cplex in a spark-enabled cluster environment, that these new optimization center tools + spark would handle distributing the solve in parallel across the cluster?
For example, this video:
https://www.youtube.com/watch?v=6X4FbMP18fI
It's unclear if spark is actually participating in the solve or just the data input/output?
------------------------------
Edward Umpfenbach
------------------------------
#DecisionOptimization