Data Integration

  • 1.  The "osh" process crashed while one of our DataStage jobs was running.

    Posted Wed February 24, 2021 11:04 AM
    Hello.

    The "osh" process crashed while one of our DataStage jobs was running. The error alert we received was:
    Process osh: Memory resources exhausted
    Memory exhausted on osh
    Process osh crashed with status "Segmentation fault"

    Is it necessary to change the DataStage Java heap size parameters? Or where can I check...

    Has anyone ever experienced this?

    Thank you.

    ------------------------------
    Rafael De Oliveira Leite Nery
    ------------------------------

    #DataIntegration


  • 2.  RE: The "osh" process crashed while one of our DataStage jobs was running.

    Posted Thu February 25, 2021 04:55 PM
    You should be able to set the JVM parameter at the job level, if this is the only job having the issue.
    If your DataStage runs on a VM, I would check resources at the cluster level where your host lives. Typically, if other hosts are running jobs on the VM cluster, it will be harder for your host to get what it needs from what's left. Short answer: it may be a VM configuration issue.

    ------------------------------
    DAS MINJUR
    ------------------------------



  • 3.  RE: The "osh" process crashed while one of our DataStage jobs was running.

    Posted Fri February 26, 2021 06:38 AM
    Edited by System Fri January 20, 2023 04:37 PM

    Hello.

    How do I guide the developer to set the JVM parameter at the job level?


    Thank you.



    ------------------------------
    Rafael De Oliveira Leite Nery
    ------------------------------



  • 4.  RE: The "osh" process crashed while one of our DataStage jobs was running.

    Posted Fri February 26, 2021 08:10 PM

    Hello - 

    1. If you have never set job-level parameters, the link below can be useful.




    2. To set the Java heap memory, use CC_JVM_OPTIONS as shown below:

    Set the environment variable CC_JVM_OPTIONS to include the -Xmx parameter as documented here. For example, to set the maximum memory for the IDoc connector stage to 512 MB, specify:

    CC_JVM_OPTIONS = -Xmx512M


    Increase it to 1024M or even 2048M, based on your need, ONLY for that job by setting it as a job-level parameter.
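    As a sketch of the same idea from the command line (the project and job names here, MyProject and MyJob, are hypothetical placeholders for your own), the job-level override can also be supplied at run time with the dsjob CLI, provided the job exposes $CC_JVM_OPTIONS as a parameter:

    ```shell
    # Load the DataStage engine environment first (path assumes a
    # default InformationServer install location).
    cd /opt/IBM/InformationServer/Server/DSEngine
    . ./dsenv

    # Run the job with a one-off 1 GB max heap for its JVM-based
    # connector stages; other runs of the job are unaffected.
    bin/dsjob -run \
        -param '$CC_JVM_OPTIONS=-Xmx1024M' \
        -jobstatus \
        MyProject MyJob
    ```

    The -jobstatus flag makes dsjob wait for the run to finish and return the job's final status, which is handy when scripting a retest after raising the heap.
    
    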



    Hope this helps

    Das Minjur







  • 5.  RE: The "osh" process crashed while one of our DataStage jobs was running.

    Posted Fri February 26, 2021 06:42 AM
    Edited by System Fri January 20, 2023 04:41 PM
    Today at the same time (13:00 EST) we saw errors on the client interactions. Dynatrace reported the following:

    Crash: osh on hostserver1.ibm.com
    2021-02-12 13:01
    Executable Path: /opt/IBM/InformationServer/Server/PXEngine/bin/osh
    Fault Location: liborchcorex86_64.so!APT_SYSabortSleep(int)+0x11d
    Fault Module Path: /opt/IBM/InformationServer/Server/PXEngine/lib/liborchcorex86_64.so
    Fault Module Version: MD5: 59b4824245bb10528d927f6853556989
    Process Ids: 24626, 24627, 24624
    Signal: Segmentation fault
    Additional artifacts: CallStacks_osh_24626.txt and metrics.json

    Crash: Dump Event "abort" (00020000) received, osh on hostserver1.ibm.com
    2021-02-12 13:00
    Executable Path: [not
    Fault Location: Dump Event "abort" (00020000) received
    Process Ids: 24228, 24247, 24236
    Additional artifacts:
    #DataIntegration


  • 6.  RE: The "osh" process crashed while one of our DataStage jobs was running.

    Posted Fri February 26, 2021 06:56 AM
    Our DataStage does not run on a VM.

    ------------------------------
    Rafael De Oliveira Leite Nery
    ------------------------------