Cognos Analytics

  • 1.  Cognos Tuning - more Processes & less connections or vice versa

    Posted Wed March 17, 2021 11:35 AM
    Generally speaking, would it be better to increase the Maximum # Processes that Cognos can spawn and lower the # of affinity connections, or vice versa?

    We're currently in the process of tuning our Cognos 11.0.11 environment for production workloads. Our heaviest job contains 521 reports of varying complexity that are batch processed overnight. Concurrent running of reports is enabled within the job. Our distributed server architecture runs on a dedicated virtual host with the following servers & specs:

    1 IIS/web server - 2 cores / 4 GB memory
    2 Dispatcher servers - 4 cores / 16 GB memory each
    2 Content Manager servers (1 is on standby for failover) - 4 virtual cores / 8 GB memory each
    1 Content Store DB server - 4 cores / 4 GB memory
    1 OLTP DB server - 12 cores / 32 GB memory

    Currently, our Batch Report Service settings for the non-peak period (which is when this job runs) look like this:

    Number of high affinity connections for the batch report service during non-peak period: 1
    Number of low affinity connections for the batch report service during non-peak period: 2
    Maximum number of processes for the batch report service during non-peak period: 4

    When I've watched the servers while the job is running, it doesn't look like our Dispatcher servers are being taxed unreasonably (30-50% CPU utilization; 60% memory). Our OLTP server, on the other hand, gets hit pretty hard; I see near-max CPU utilization across all 12 cores at times, depending on which reports are executing. We are running on SQL Server 2017 and have MAXDOP set to 4 with a Cost Threshold of 50.
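
    For anyone who wants to double-check those SQL Server settings outside of SSMS, a quick query against sys.configurations works. This is just a sketch; the driver and server names are placeholders for your environment:

        import pyodbc  # assumes the Microsoft ODBC driver is installed

        # Connection details are placeholders; substitute your OLTP server.
        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=oltp-server;DATABASE=master;Trusted_Connection=yes"
        )
        cur = conn.cursor()
        cur.execute(
            "SELECT name, value_in_use FROM sys.configurations "
            "WHERE name IN ('max degree of parallelism', "
            "'cost threshold for parallelism')"
        )
        for name, value in cur.fetchall():
            print(f"{name}: {value}")  # expect 4 and 50 per the settings above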

    Based on the current tuning settings above, I've observed up to 4 of the BiBus processes running on each Dispatcher server and up to 16 reports executing concurrently within the job at any given time.
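
    To spell out the arithmetic behind that observation, here's a back-of-the-envelope sketch. It's simple multiplication of the settings above (my own reasoning, not an official Cognos formula):

        # Rough model of batch report concurrency from the non-peak settings.
        max_processes = 4  # Maximum number of processes, per dispatcher
        low_affinity = 2   # low affinity connections per process (batch reports)
        dispatchers = 2    # dispatcher servers

        per_server = max_processes * low_affinity
        total = per_server * dispatchers
        print(per_server, total)  # 8 per server, 16 total -- matches what I saw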

    Also, I know we have some poorly written reports, as well as some that probably can be removed from the job. I'm trying to work with users to address these problems, but that's a much longer-term battle that I also don't have as much control over. It would seem to me that right now our current bottleneck might be processing power on the OLTP server, but I'm very interested in any feedback about tuning I can do on the Cognos side of things to help the situation. Thank you

    ------------------------------
    Brad Chance
    BI Developer
    Park National Bank
    Newark OH
    ------------------------------

    #CognosAnalyticswithWatson


  • 2.  RE: Cognos Tuning - more Processes & less connections or vice versa

    Posted Wed March 17, 2021 12:33 PM
    Hello Brad,

    As a rule of thumb, we found that the number of cores on the app servers, multiplied by connectionsHighAffinity, should match the BIBus limits set by "Maximum number of processes" x "Number of high affinity connections".

    So, for example, for your machine (4 virtual cores / 8 GB each):
    - your machine has 4 cores, multiplied by 2 = 8
    - 4 maxProcessesInteractiveReporting * 1 connectionsHighAffinity = 4 "users running reports in parallel"

    I would change this to:
    - 4 maxProcessesInteractiveReporting * 2 connectionsHighAffinity = 8 "users running reports in parallel"

    connectionsHighAffinity = number of threads of report processes inside each "BiBus"
    connectionsLowAffinity = navigation in portal, polling

    What really makes me worry is the 16 GB RAM. Normally I would allow at least 4 GB of RAM for each BiBus handling two threads. I believe you could run out of memory as you raise your process limits.
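
    To make the rule of thumb concrete, here is a small sketch of the arithmetic. This is my heuristic, not anything from the IBM documentation:

        # Heuristic sizing for BiBus processes on a 4-core dispatcher.
        cores = 4
        max_processes = 4
        high_affinity = 2  # proposed value

        parallel_users = max_processes * high_affinity
        target = cores * 2  # rule of thumb: about two report threads per core
        print(parallel_users, target)  # 8 vs. 8 -- balanced

        # RAM check: I allow at least 4 GB per BiBus handling two threads.
        ram_needed_gb = max_processes * 4
        print(ram_needed_gb)  # 16 GB -- tight if the box only has 16 GB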

    We used JMeter to test performance, with a test case running two reports (one with a 30 sec execution time, one with 2 min) and then ramping up users over 30 minutes, recurrently calling the same reports.

    On our machine (8 cores, 80 GB RAM) we could handle 100 real concurrent users with maxProcessesInteractiveReporting: 10 and connectionsHighAffinity: 2.

    Here is a screenshot of the results JMeter produced:
    [screenshot: number of threads over time]

    We could determine exactly where the concurrency limits were by having JMeter ramp up users calling those reports and watching how CPU and RAM usage behave at the limits.
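
    If you want to reproduce this, the raw numbers come out of JMeter's JTL results file. Here is a minimal sketch for summarizing it, assuming the default CSV output (the exact columns depend on your jmeter.properties):

        import csv
        from collections import defaultdict

        # Summarize a JMeter JTL (CSV) file: max thread count and average
        # response time per minute. Assumes the default CSV columns
        # 'timeStamp' (epoch ms), 'elapsed' (ms) and 'allThreads'.
        buckets = defaultdict(list)
        with open("results.jtl", newline="") as f:
            for row in csv.DictReader(f):
                minute = int(row["timeStamp"]) // 60000
                buckets[minute].append((int(row["elapsed"]), int(row["allThreads"])))

        for minute in sorted(buckets):
            samples = buckets[minute]
            avg_ms = sum(e for e, _ in samples) / len(samples)
            threads = max(t for _, t in samples)
            print(f"minute {minute}: {threads} threads, avg {avg_ms:.0f} ms")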

    From your figures for CPU and RAM usage, I think you could even go a little higher and set maxProcessesInteractiveReporting to 5 or 6, or maybe even 8.

    Seeing that your DB server is getting hit hard, I would analyze the long-running SQLs and prepare materialized views for some of them.
    This will surely speed up your performance and bring down CPU usage. With a JMeter test plan at hand, you could easily test again and again after each change.
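
    To find those long-running SQLs, the SQL Server plan cache statistics are a good starting point. Another rough sketch; the connection details are placeholders for your OLTP server, and the DMV query needs VIEW SERVER STATE permission:

        import pyodbc  # assumes the Microsoft ODBC driver is installed

        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=oltp-server;DATABASE=master;Trusted_Connection=yes"
        )
        # Top statements by average elapsed time (total_elapsed_time is in
        # microseconds, so divide by 1000 for milliseconds).
        sql = """
            SELECT TOP 10
                   qs.total_elapsed_time / qs.execution_count / 1000 AS avg_ms,
                   qs.execution_count,
                   st.text
            FROM sys.dm_exec_query_stats AS qs
            CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
            ORDER BY avg_ms DESC
        """
        for avg_ms, execs, text in conn.cursor().execute(sql):
            print(f"{avg_ms} ms avg over {execs} runs: {text[:80]}")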

    Hope this helped a little.




    ------------------------------
    Ralf Roeber
    https://linkedin.com/in/ralf-roeber-470425a/
    ------------------------------



  • 3.  RE: Cognos Tuning - more Processes & less connections or vice versa

    Posted Wed March 17, 2021 09:00 PM
    Thanks, Ralf, for your feedback. Our Dispatcher/App servers actually have 4 cores & 16 GB of memory. I too was concerned about memory usage, which is why I've purposely kept the Max # of Processes set to 4, as my understanding is that each process can consume up to 4 GB of memory. So a 4-core server running 4 processes should hopefully never exceed the 16 GB of memory.

    I did some experiments with my heavy report job today and simply adjusted the # of Low Affinity connections for the Batch Report Service. I started with 5 and saw that, at any given time, up to 40 reports could be trying to execute concurrently. So I'm assuming the math here is 5 Low Affinity * 4 Max Processes = 20 connections per server, for a total of 40 connections. The job took 1hr 9mins to run at these settings. Next, I tuned the Low Affinity down to 2 and left the Max Processes at 4. When I ran the job again, I saw up to 16 total reports running concurrently (2*4=8 connections * 2 servers = 16) and the job took 30 mins to run. Then I performed yet another test, this time keeping the Low Affinity at 2 and lowering the Max Processes to 2. The result was up to 8 reports executing concurrently (2*2=4 connections * 2 servers = 8) and, to my surprise, the job took 29 mins to run.
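
    To summarize the three runs (the runtimes are what I measured; the concurrency figures are the same multiplication as above):

        # My three test runs: (Low Affinity, Max Processes) -> observed runtime.
        servers = 2
        tests = [
            (5, 4, "1hr 9mins"),  # 5*4*2 = 40 concurrent -- slowest run
            (2, 4, "30 mins"),    # 2*4*2 = 16 concurrent
            (2, 2, "29 mins"),    # 2*2*2 = 8 concurrent -- nearly identical
        ]
        for low, procs, runtime in tests:
            concurrent = low * procs * servers
            print(f"low={low} procs={procs}: {concurrent} concurrent, {runtime}")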

    So it would seem, based on this testing, that if I exceed 16 total connections then I'm hurting job performance, presumably because I'm allowing too many reports to try to run concurrently against our OLTP server. This would also seem to suggest that our OLTP server is likely the bottleneck for any additional performance gains from a hardware perspective at the moment. That said, I also completely agree that we need to better analyze our queries for performance improvements, as I suspect there's a lot to be gained there.

    I've not heard of JMeter before but will definitely look into it further, thanks for sharing!

    ------------------------------
    Brad Chance
    BI Developer
    Park National Bank
    Newark OH
    ------------------------------