As many already know, the long-standing approach to handling failures in Flow services has been this structure:
SEQUENCE (exit on SUCCESS)
…SEQUENCE (exit on FAILURE)
…SEQUENCE (exit on DONE/FAILURE)
where the two nested SEQUENCEs serve as the try and catch blocks, respectively.
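Spelled out a little more fully (step names here are just placeholders), the classic pattern looks roughly like this, with pub.flow:getLastError in the catch branch to pick up the error details:
SEQUENCE main (exit on SUCCESS)
…SEQUENCE try (exit on FAILURE)
……(the steps that can fail, e.g. the HTTP call)
…SEQUENCE catch (exit on DONE/FAILURE)
……pub.flow:getLastError
……(recovery / notification steps)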
Version 10.3 introduced the TRY/CATCH/FINALLY steps to support a more “Java-like” exception handling mechanism. I’ll defer to the docs for the descriptions of the whys and the hows.
Figuring that TRY/CATCH would behave much the same as the nested SEQUENCE approach, we started using it. However, there appears to be at least one behavioral difference: the pipeline in the CATCH block is the pipeline as it existed at the moment of the error. This increases the chance of variable collisions (what I call “pipeline litter”). In one case, an HTTP call returned an error, for which our wrapper service throws an exception. The top-level service that called the wrapper contains the TRY/CATCH. Because of the thrown exception, the “loadAs” var from the HTTP call was left in the pipeline. The CATCH block then called a different HTTP endpoint to create a help desk ticket, and the stray loadAs value made that call behave differently than expected, which caused errors in later steps.
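To illustrate (service and variable names simplified), the shape of it was roughly:
topLevelService
…TRY
……wrapperService (calls pub.client:http with loadAs set; the endpoint returns an error and the wrapper throws an exception)
…CATCH
……pub.client:http to the help-desk endpoint (the leftover loadAs from the failed call is still in the pipeline and gets picked up here)
……(later steps fail because the response isn’t in the expected form)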
Long story short (too late?): does anyone know of a way to restrict the pipeline context in a CATCH so that it behaves like the nested SEQUENCE approach? That is, only the vars that existed before the block was entered would be present, and if you want the data as it existed in the try block at the time of the exception, you call getLastError to retrieve it. Perhaps I’m missing something simple.
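For comparison, the obvious manual workaround would be something like this at the top of the CATCH (a sketch only, using pub.flow:getLastError and pub.flow:clearPipeline from WmPublic), but it means hand-maintaining a preserve list in every CATCH rather than getting real scoping:
CATCH
…pub.flow:getLastError
…pub.flow:clearPipeline (preserve = lastError plus the vars that existed before the TRY)
…(recovery steps, e.g. the help-desk HTTP call)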
#webMethods#Integration-Server-and-ESB#Flow-and-Java-services