Developing and testing an application programming interface (API) can be tough at times, and the effort rarely goes smoothly. For instance, a dependency API might go offline, or our own API's endpoint might show sluggish behavior.
This can lead to delays in our work and make our apps less reliable. On the bright side, there's a solution: API mocking. It can help us work around these problems and make our development process faster and smoother.
But what are these “Mock Services”?
In the real world, you might simply not have access to everything you need while testing the services associated with a given use case. The user acceptance testing (UAT) phase in particular faces challenges such as dependency on external services, data availability, and environment stability.
In such scenarios, you need to construct mock objects that simulate the behavior of the real ones and exhibit authentic characteristics. Mock services can provide consistent, controlled responses without relying on the actual service providers. The IBM DevOps Integration Tester and API solution lets you build your own mock services and embed business logic in them, thereby delivering more sophisticated responses as well.
Once you have made up your mind to build and test some mock services, resource costs can become a challenge. Very often, as the mock services (stubs) move into the testing phase, you come across odd behavior in the system resources being consumed: high CPU and memory utilization.
These observations can be a result of project requirements that demand an infrastructure commitment supporting, say, 500 transactions per second (TPS), while the deployment cannot withstand even 100 TPS, sometimes not even 50 TPS. You could also observe CPU consumption climbing until it suddenly reaches 90% while a normal load test is configured.
You could also notice that when the RTCP process is killed, the mock service usage statistics immediately drop from 90% to 5%, or that even though all stubs are stopped, CPU usage still reflects more than 70%.
In general, the idea is to first understand which process is consuming the memory. By default, the stub is configured with a few lined-up operations, and with each operation executed, the message.log file (located under the DevOpsTestControlPanel\logs directory) gets written to or updated, consuming CPU. Alternatively, this could also be a result of one of the following:
· Users are continually launching more stubs, or existing ones aren't freeing up memory for some reason.
· Stubs are being stopped by users but are not physically shutting down.
· Stubs continue to run but consume ever-increasing amounts of memory.
· Stubs iterate over input data to process and return a response; when many threads start processing simultaneously, more memory is required, because each thread needs a certain amount of memory to process its input document.
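As a quick first check on the message.log angle, you can measure how fast the file is actually growing. This is a sketch only: `log_growth` is a hypothetical helper, and you would point it at the message.log path quoted above (adjusted for your installation).

```shell
#!/bin/sh
# log_growth: report how many bytes a file grew over an interval.
# A hypothetical diagnostic helper -- point it at the message.log file
# under your DevOpsTestControlPanel logs directory.
log_growth() {
    f="$1"
    interval="${2:-5}"          # seconds to wait between the two samples
    s1=$(wc -c < "$f")
    sleep "$interval"
    s2=$(wc -c < "$f")
    echo $((s2 - s1))           # bytes written during the interval
}
```

A rapidly growing byte count while the stubs are otherwise idle is a strong hint that logging itself is consuming the CPU.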
Debugging such scenarios is a bit tedious, so it is a good approach to quickly validate the following.
1) Check if the stub is producing too much logging information to its console.
By default, the logging level is set to "Debug". This can result in excessive CPU usage, because the logging requires processing. It can also result in high memory usage, because the console output is retained in memory.
2) Reduce the messages retained in memory.
Exit from “DevOps Integration Tester and API” (DITA).
Invoke DITA's Library Manager and add one of the following entries to the JVM Arguments, on its own line:
-Dcom.ghc.ghTester.gui.console.trim=true (trims to 300000 characters)
or specify the character limit
-Dcom.ghc.ghTester.gui.console.trim=5000 (trims to 5000 characters)
3) Stubs aren't executed directly. When a stub is started, the DevOps Virtualization Control Panel (RTCP) communicates with an agent to tell the agent to run the stub. The agent does this by spawning a RunTests process. This RunTests process is also known as the "engine".
An engine can run one or more stubs. The Details tab shows which stubs are being executed by a specific engine.
The Engines tab will show the PID of the "engine" or RunTests process.
It's convoluted, but the process is:
- Navigate to the infrastructure page in RTCP.
- Look at the details of each agent to see which stubs are being run.
- When you find the stub for a specific engine, find its PID in the Engines tab; call it <PID>.
- On Linux, open a terminal, execute "ps -ef | grep <PID>", and see how much memory/CPU that process is using.
- On Windows, open Task Manager and view the memory usage statistics per process under the “Processes” tab.
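On Linux, the per-process check can be scripted. This is a minimal sketch, assuming a procps-style `ps`; in practice you would pass in the engine PID taken from the Engines tab (the demo line samples the current shell's own PID instead, since no real engine is running here).

```shell
#!/bin/sh
# engine_usage: print PID, resident memory (KB), and CPU share for one process.
# The "=" after each field name suppresses the column headers.
# In real use, pass the engine PID taken from the Engines tab.
engine_usage() {
    ps -o pid=,rss=,pcpu= -p "$1"
}

# Demo with the current shell's own PID, as no real engine is running here.
engine_usage "$$"
```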
Overall, try to get a snapshot of the RunTests.exe processes at different stages. This should indicate how many of them there are and ideally list each with its process ID and memory usage. Alternatively, you may publish all the stubs from the project/root level instead of publishing them individually; the idea is that all the RunTests.exe processes will then use the same JRE instead of each spawning a new JRE process.
On the other side, underneath the OS, Prunsrv.exe creates a conhost process that runs the RTCP Java process, which in turn runs another Java process, Kairos.
So the intent is to know how many of these are still running after clicking Stop on the stub service and waiting for a normal shutdown.
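The counting itself can be scripted. The sketch below is an illustration only: it assumes the engine process reports its command name as `RunTests`, as described above. Run it before stopping the stubs and again after a normal shutdown, then compare the two snapshots.

```shell
#!/bin/sh
# count_engines: count processes whose command name starts with "RunTests".
# "RunTests" is the engine process name described in this article; adjust
# the pattern if your installation names the process differently.
count_engines() {
    ps -eo comm= | grep -c '^RunTests' || true
}

echo "engines still running: $(count_engines)"
```

A non-zero count long after every stub has been stopped points at engines that were stopped logically but never physically shut down.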
Conclusion:
A lot also depends on fine-tuning the performance of the stubs themselves. Consider things such as:
- What is the stub doing and how complex is it?
- What else is running on the server? – Other stubs, other processes?
- Is the stub using test data, and if yes, what kind of test data source?
In general, tweaks such as those listed below should help to a certain extent.
1) Turn down or disable the logging level.
2) Minimize Business Logic steps that the stubs are involved with.
3) Minimize the number of Events configured in the stub as well.
4) Reduce the test data lookup activities.
5) If the stub has many Events specified, re-order them to put the common ones at the top (to reduce message comparisons).