Content Management and Capture


IBM Content Collector: how to tune it efficiently?

  • 1.  IBM Content Collector: how to tune it efficiently?

    Posted 3 days ago
    Hello all,

    I'm not sure whether this forum is the best spot to post about IBM Content Collector, but it's worth a try.

    So my question is actually two questions:
    1. What are the best settings to tune in order to achieve an efficient document injection rate between ICC and FileNet P8 5.5.x?

      Currently we have a virtual server with 2 cores, configured with 16 threads and a queue size of 128; the document injection rate is about 10.8 documents/sec at 100% CPU consumption for a batch of 30,000 documents. This is good, but I think we can do better with the same resources.
      1. Typically, the Task Routing Engine service consumes about 45% CPU and the File System Connector almost 40%, while the P8 Connector consumes only 7%. Why do the Task Routing Engine and File System Connector take so much CPU?
      2. Memory (RAM) usage stays at about 1/3 of available memory during the injection round.
      3. Note that our ICC routes pick up documents from a folder on a NAS share and then move them into OK/KO folders on the same NAS share.
      4. Does anyone have relevant experience optimizing ICC architecture and the document injection rate?
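    For scale, the figures above work out roughly as follows (a back-of-the-envelope sketch in Python, using only the numbers quoted in the post):

```python
# Figures observed in the post above
batch_size = 30_000          # documents per batch
rate = 10.8                  # documents ingested per second
threads = 16                 # configured worker threads

batch_minutes = batch_size / rate / 60   # how long one batch takes
per_thread_rate = rate / threads         # effective throughput per thread

print(f"batch duration : {batch_minutes:.1f} min")      # ~46.3 min
print(f"per-thread rate: {per_thread_rate:.3f} doc/s")  # ~0.675 doc/s per thread
```

    At roughly 0.675 documents per second per thread, each document spends well over a second in the pipeline, which is consistent with the NAS move operations (point 3) being a significant part of the per-document cost.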

    2. We are trying to monitor ICC in depth with Dynatrace, an APM tool. Currently, however, Dynatrace does not recognize the processes, so we cannot drill into and introspect ICC processes such as the Task Routing Engine, File System Connector, or P8 Connector. Has anyone tried to introspect ICC with performance tooling? If so, any advice on this topic?
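    Until the APM tool recognizes the ICC services, a lightweight fallback is to sample the per-process `% Processor Time` counters yourself, e.g. with Windows `typeperf`, and parse its CSV output. A minimal sketch follows; the host name, process instance name, and sample output below are assumptions for illustration, not the real ICC process names:

```python
import csv
import io

def parse_typeperf_csv(text):
    """Parse typeperf CSV output into {counter_path: [sampled values]}."""
    reader = csv.reader(io.StringIO(text))
    header = next(reader)              # first column is the timestamp
    counters = header[1:]
    series = {c: [] for c in counters}
    for row in reader:
        for counter, value in zip(counters, row[1:]):
            if value:                  # typeperf emits empty fields on sampling errors
                series[counter].append(float(value))
    return series

# Sample output in the shape typeperf produces, e.g. from a command like:
#   typeperf "\Process(<icc-process>)\% Processor Time" -sc 3 -o out.csv
# (the instance name below is hypothetical -- check Task Manager for the real one)
sample = r'''"(PDH-CSV 4.0)","\\ICCHOST\Process(AFUTaskRoutingEngine)\% Processor Time"
"04/01/2024 10:00:01.000","45.2"
"04/01/2024 10:00:02.000","44.8"
"04/01/2024 10:00:03.000","46.1"
'''

series = parse_typeperf_csv(sample)
for counter, values in series.items():
    print(counter, "avg %.1f%%" % (sum(values) / len(values)))  # avg 45.4%
```

    Sampling each ICC service this way at least confirms where the CPU goes over time, even before the APM agent can attach to the processes.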
    I will open a ticket with IBM Support about this, but I wondered whether anyone has seen this kind of behaviour before.

    Thanks for your help.

    Best regards,
    Florian Kiebel

    ------------------------------
    Florian KIEBEL
    Practice Leader
    Amexio
    ------------------------------


  • 2.  RE: IBM Content Collector: how to tune it efficiently?

    Posted 3 days ago
    Hi Florian,

    Did you check the performance counters provided by the Listeners?
    https://www.ibm.com/support/knowledgecenter/SSAE9L_4.0.1/com.ibm.content.collector.doc/monitoring/r_afu_pf_counters.htm 

    FileNet and ICC, including these Listener performance and capacity metrics, can be monitored in real time by IBM ECM System Monitor: IBM ECM System Monitor Overview - maximize the service quality of your ECM applications

    I hope that helps a bit.

    Regards,
    Roland

    ------------------------------
    Roland Merkt
    Sr Manager EIM
    CENIT
    ------------------------------



  • 3.  RE: IBM Content Collector: how to tune it efficiently?

    Posted 2 days ago
    Hi Florian,

    Have you checked whether virus scanning software is running on the server? If so, it is worth whitelisting ICC's working directories so that the virus scanner isn't re-scanning every file ICC processes. If the CPU is running at 100%, that would suggest the server needs more CPU (assuming no other processes, such as virus scanning, are consuming it).

    Regards
    Phil
    IBM Automation Expert Labs

    ------------------------------
    PHILIP RIMMINGTON
    ------------------------------



  • 4.  RE: IBM Content Collector: how to tune it efficiently?

    Posted 14 hours ago
    Hi... this is a basic recommendation but a very important one. A couple of years ago I ran into this situation in a customer's production environment. After disabling AV scanning of ICC's work folders, the server's performance returned to normal, and it has been working ever since with no additional tuning or resources.


    ------------------------------
    Sergio Salinas
    ------------------------------



  • 5.  RE: IBM Content Collector: how to tune it efficiently?

    Posted 2 days ago
    There are a couple of ICC performance- and sizing-related documents on Seismic. Please check them out and let me know if you have any remaining questions.

    ------------------------------
    Jörg Stolzenberg
    ------------------------------