Governance, Risk, and Compliance (GRC)


 Objects successfully created in a Job are not found after the job completes successfully

Nirupama Mallavarupu posted Tue February 11, 2025 03:03 PM

Hi,

We are creating objects dynamically through a custom job. The logs show the object creation going through successfully, and we are even able to find the resources post-creation. However, when the job completes, it looks like all the objects are rolled back and none of them can be found.

Have you seen anything similar in your experience, where objects that were created can no longer be found afterwards?

Thanks for any help or insight into this!

Niru

Claire Terblanche IBM Champion

Hi Nirupama

We also have several cases where object instances are created through a custom action; however, we do not have any issues with finding them once the job completes. Is it possible that the view you are using has a filter and the new object instances are filtered out? Were you able to see the newly created object instances previously, or has this been the case since you deployed the custom job?

Regards

Claire

Richard Schoen IBM Champion

Is this IBM i you are talking about?

If so, there are usually temporary file objects created for a process and destroyed at the end of the process, especially when you have a batch job.

Aayush Modi

We had the same issue. This is what IBM Support says:

"If there is no error being generated, then there is not much we can do. I would recommend trying a restart in case something caused the scheduler to get tied up or crash, but if the code is simply not doing what you expect, then I can't really help with that"

JAMES Nadziejko

If the job is running on a z/OS machine, then you should look at the DD name for these phantom objects and check the JCL (Job Control Language) for this parameter:

//                              DISP=

If you do not see it, the default is DISP=(NEW,PASS,PASS), which means the dataset is created as new and, if the job step ends with return code zero, passed to the next step as a temporary dataset. You should see a dataset name that begins with && followed by system-generated specifics. This is passed from step to step until the job ends, at which point the last disposition state will PASS the temporary file to the proverbial "bit bucket" and you will never see it again.

If you want to make the job restartable from the top, you can add a first job step that invokes PGM=IEFBR14 and, in that step, allocate a DDNAME of your choice with the dataset name you prefer, using a disposition of (MOD,DELETE,DELETE). This allocates the dataset (creating it if it does not already exist) and deletes it whether the job ends normally or fails.

When you get to the step in your job where you want to create the data for this same dataset, use a disposition of (NEW,CATLG,DELETE). With this disposition coded, the dataset will be cataloged, and will only be deleted (sent to the bit bucket) if the job fails (see the third parameter, DELETE). If you want to read the data in a subsequent job step, you can use DISP=SHR or DISP=OLD, as the other two positions default to KEEP,KEEP, so your data will still be there when the job ends normally. This also allows you to restart the job from the top and avoid a possible "NOT CATLGD 2" condition, which you may experience as well.

This technique will only work on a z/OS platform. If your platform is not z/OS, S/390, or MVS, then you need to consult the publication that supports your particular platform for an example that provides a similar solution.
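The technique above can be sketched in JCL roughly as follows (program, DD, and dataset names here are placeholders, not from the original post):

```jcl
//* Step 1: IEFBR14 does nothing; the DD's disposition does the work.
//* (MOD,DELETE,DELETE) allocates MY.OUTPUT.DATA if needed and deletes
//* it either way, so a rerun always starts clean.
//CLEANUP  EXEC PGM=IEFBR14
//SCRATCH  DD  DSN=MY.OUTPUT.DATA,DISP=(MOD,DELETE,DELETE),
//             UNIT=SYSDA,SPACE=(TRK,(1,1))
//* Step 2: create the data. (NEW,CATLG,DELETE) catalogs the dataset on
//* success and deletes it only if this step fails.
//CREATE   EXEC PGM=MYWRITER
//OUTDD    DD  DSN=MY.OUTPUT.DATA,DISP=(NEW,CATLG,DELETE),
//             UNIT=SYSDA,SPACE=(CYL,(5,5)),
//             DCB=(RECFM=FB,LRECL=80)
//* Step 3: read it back. DISP=SHR defaults to KEEP,KEEP, so the
//* dataset survives the end of the job.
//READ     EXEC PGM=MYREADER
//INDD     DD  DSN=MY.OUTPUT.DATA,DISP=SHR
```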

Nirupama Mallavarupu

Thank you so much for all of your answers! I really appreciate the brainstorming here.

@Claire Terblanche The same job was creating these objects, and they were viewed successfully in the object views; however, it has now completely stopped surfacing them. There are no filters that are blocking these out. There are also no security filters, so it is a mystery where these objects are disappearing to. Is it possible these are getting rolled back?

@Richard Schoen Yes, this is an IBM internal service. Where can we see the temporary file objects? Also, even if we are persisting them, why would they be temporary? Thanks for any follow-up insights.

@Aayush Modi Thank you for the feedback. How did you fix your issue? 
@JAMES Nadziejko We are not using IBM Z.

Chris Jones

Hi

I've had a very similar sounding problem a few years ago. 

In that case, the issue was that the custom scheduled job was running for longer than the transaction timeout period and so was automatically getting rolled back. I think this timeout is 5 minutes by default (it might be 10 though - can't recall). You should be able to see something in the aurora or WebSphere system logs indicating the timeout/rollback.

Is your scheduled job running for longer than 5 minutes? (or maybe 10?). If it's definitely running for less than 5 minutes then this won't be the problem.

If this is the problem, the fix is to amend the scheduled job code to set a longer timeout period for the process, or to set no timeout at all (which is what I do).
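To see why a timeout would produce exactly the symptom described - creation logged as successful, objects visible during the run, nothing left afterwards - here is a toy, platform-agnostic Python sketch. This is not OpenPages API; the `Transaction` class and the 0.05 s timeout are invented purely to illustrate the rollback behaviour:

```python
import time

class TransactionTimeout(Exception):
    pass

class Transaction:
    """Toy transaction: work is staged, then committed or rolled back."""

    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.staged = []
        self.start = time.monotonic()

    def create_object(self, name):
        # The "created" log line is emitted here, even though the work
        # may later be rolled back.
        print(f"created {name}")
        self.staged.append(name)

    def commit(self, committed_store):
        # If the transaction outlived its timeout, all staged work is
        # discarded instead of being committed.
        if time.monotonic() - self.start > self.timeout_s:
            self.staged.clear()
            raise TransactionTimeout("transaction rolled back")
        committed_store.extend(self.staged)

store = []                       # the persistent store
tx = Transaction(timeout_s=0.05)
tx.create_object("risk-1")       # "succeeds" and is visible inside the tx
time.sleep(0.1)                  # the job runs past the timeout
try:
    tx.commit(store)
except TransactionTimeout:
    pass                         # job still "completes" from its own view

print(store)                     # the created object is nowhere to be found
```

The point is that the creation call and the log message both succeed inside the transaction; it is only at commit time that the timeout silently throws everything away.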

Chris

Nirupama Mallavarupu

@Chris Jones Thank you for sharing your experience and the tips you have provided. Our job is not long-running - it completes successfully, with no errors, within 2 minutes. We have not yet found any "rolled back" transaction in the logs.

We will try increasing the transaction timeout period in any case.

Thanks again for your help and suggestions,

Niru

Chris Jones

Hi

I wonder if you are getting a DB deadlock which is being resolved by terminating/rolling back the transaction.

Are there other things running whilst your custom job is running?

Chris