Description of Test Assets
UI Tests are created, edited, and executed through the DevOps Test UI (DTUI) UI Test perspective in the Eclipse environment. You can use the UI Test perspective for Web, Mobile, and Windows tests. In addition, you can manage Selenium Java™ tests and Appium Java™ tests. See the product documentation for further information.
VU Schedules are the "controller" of load emulation in DevOps Test Performance (DTP). Traditionally a single-user test is recorded, refined and then included in a schedule where you can emulate a real-life multi-user workload distributed across 1-N agent machines. See product documentation for further information.
MajordomoService is the name of the service used to manage work on DevOps Test Performance agents.
Background
Recent history has seen an increased melding of test automation products in the form of DTUI UI Tests run on agents through DTP VU schedules. In one instance, a customer discovered that their application-under-test conveyed a significant amount of binary data in its network traffic. Binary data being notoriously difficult to correlate, they decided to record UI Tests for their load testing efforts. Another customer found that their traditional load testing efforts were complicated by client-side authentication measures. They, too, concluded that UI Tests were the path to success.
Load testing through UI Tests is an uncommon use case, so there are some things to keep in mind.
Considerations
Overloading the Workbench
Have you ever tried to open 30 browser instances on a single machine? Unless you are working with a tremendously robust system, you will experience noticeable resource drain on that machine. Be aware of this. Limit the number of users you expect to use on a single machine. Use agents to distribute the user load across many machines. A mantra you will hear repeated from load testers is “stress the server, not the agents.”
Test Modifications
This entire process relies upon the fact that the UI Test perspective framework is built upon the foundation of the Performance Test perspective. Just as VU Schedules are a DTP construct, other DTP features are available as you work with UI Test scripts; you must be familiar with the mechanics involved. Become familiar with splitting UI actions with help from the user documentation: https://www.ibm.com/docs/en/rft/11.0.0?topic=tests-splitting-ui-actions.
- Use Transactions to capture load times. When setting up a transaction, to ensure that it captures accurate end-to-end response times, start the transaction when a button or link in a page is clicked, and stop it when a verification point succeeds.
This example ties back to the Split UI Test Actions feature referenced earlier. Our original sample test is a Google search for "HCL", followed by a few clicks on the result set.
How do we put a Transaction into this test?
The test has two clear "transactions": the search itself, followed by the perusal of the results.
- Timeouts for actions and verifications in a test script are ideally set to 60 seconds or less so that timeout errors don't skew overall averages and other statistics.
- Ensure that there is sufficient delay between transactions to mimic think times. Avoid using think times associated with actions contained within transactions that are meant to capture load time, lest they artificially contribute to the transaction's elapsed time.
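To make the timing rules above concrete, here is a minimal sketch in plain Python (not the DevOps Test API) of a transaction timer that starts at the user action and stops when the verification point succeeds, with think time deliberately kept outside the measured window:

```python
import time

class Transaction:
    """Minimal model of a load-test transaction timer (illustrative only;
    not the product API). Start it at the user action, stop it when the
    verification point succeeds."""
    def __init__(self, name):
        self.name = name
        self._start = None
        self.elapsed = None

    def start(self):
        self._start = time.monotonic()

    def stop(self):
        self.elapsed = time.monotonic() - self._start

def run_search():
    # Think time comes BEFORE the transaction starts, so it does not
    # inflate the measured elapsed time.
    time.sleep(0.2)            # simulated think time
    txn = Transaction("search")
    txn.start()                # the moment the "button" is clicked
    time.sleep(0.05)           # simulated server response + page render
    txn.stop()                 # the moment the verification point succeeds
    return txn

txn = run_search()
# Elapsed time covers only the click-to-verification window (~0.05 s),
# not the 0.2 s think time.
```

The point of the sketch is the ordering: had the think time been placed between start() and stop(), the reported transaction time would overstate the server's actual response time.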
An interesting complication arises if you need to pass parameters to the browser at its start, for example:
Chrome --foo --bar=2
If you are running a standalone UI Test, you can specify these parameters as part of your run configuration. Running the UI Test within a VU Schedule bypasses the dialog that allows for this. Instead, parameters can be passed to the browser in two ways:
- Test Variables within the UI Test.
- A variable definition file, sometimes referred to as a varinit file. Details on creation of varinit files can be found in the DevOps Test UI Product Documentation.
Associating a varinit file with a User Group within the VU Schedule expands the scope of the variable definitions such that variables in the varinit file apply to all the UI Tests in that User Group.
Either method amounts to defining these two variables:
webui.browserparam.selected=true
browser.parameters=-foo;-bar=2
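As the definition above shows, browser.parameters holds a semicolon-delimited list of flags. A small sketch (hypothetical helper; the product performs this split internally) shows how such a value maps to the argument list handed to the browser executable:

```python
def split_browser_parameters(value):
    """Split a semicolon-delimited browser.parameters value into the
    argument list passed to the browser executable. Illustrative only;
    assumes the ';' delimiter shown in the variable definition above."""
    return [p for p in value.split(";") if p]

args = split_browser_parameters("-foo;-bar=2")
# → ['-foo', '-bar=2']
```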
Schedule Configuration
Use ramp-up times of at least 5 seconds between virtual users so that user actions are distributed in time. In general, the gentler the slope of the ramp-up, the better the chances for stable playback.
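The staggering effect of a ramp-up can be sketched as a list of start-time offsets, one per virtual user (a simple illustration, not a product setting):

```python
def ramp_up_offsets(num_users, stagger_seconds=5):
    """Start-time offset in seconds for each virtual user, spacing
    user starts at least stagger_seconds apart so that user actions
    are distributed in time."""
    return [i * stagger_seconds for i in range(num_users)]

ramp_up_offsets(4)  # → [0, 5, 10, 15]
```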
Majordomo Configuration
When installed with default settings, MajordomoService runs as a service on a Windows machine. This has benefits for running traditional DTP load tests in a schedule. When running UI Tests in a schedule, however, Majordomo should be run as a process. The product provides several batch files (*.bat) to help manage this.
Open Windows File Explorer and navigate to your Majordomo directory; by default it is C:\Program Files\IBM\DevOpsTest\Majordomo.
The first thing to notice is the .bat files. If you do not see the extension (the .bat part), your system is configured such that well-known file extensions (.bat, .exe, .txt) are hidden. You can use the Type column to determine the file type.
- NGAStart.bat - Starts MajordomoService as a service. "NGA" stands for Next Generation Agent.
- NGAStop.bat - Stops MajordomoService as a service.
- Majordomo.bat - Starts Majordomo as a process with -Djdk.nativeDigest=false, for use with traditional HTTP test scenarios.
- Majordomo_webui.bat - Starts Majordomo as a process, for use with UI test scenarios.
To run a VU Schedule containing Web UI tests, stop the MajordomoService service using NGAStop.bat, and start the Majordomo process using Majordomo_webui.bat. In some environments, running these .bat files may require elevated privileges.
Testing Philosophy
Recommended testing efforts follow an incremental paradigm. Start with 5 virtual users on each agent. Increase the user load by 5 users after each schedule run, to a maximum of 20-25 users per agent. This allows you to determine agent capacity, i.e., the user load at which a schedule can run successfully without losing connections to browser instances or incurring timeouts. Adhere to this capacity when testing on each agent. If many agents share a similar configuration, you can perform the capacity testing once for all of them.
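The incremental plan above can be sketched as a sequence of per-agent user loads for successive schedule runs (a simple illustration of the 5-by-5 progression, not a product feature):

```python
def capacity_plan(start=5, step=5, ceiling=25):
    """Per-agent user loads for successive schedule runs: start small
    and increase by a fixed step until the per-agent ceiling."""
    return list(range(start, ceiling + 1, step))

capacity_plan()  # → [5, 10, 15, 20, 25]
```

Once a run at some load in this sequence starts losing browser connections or timing out, the previous load is your agent capacity.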
Discard test runs that contain a high number of timeouts or verdict errors, as the elapsed times captured there reflect the timeouts rather than the actual elapsed time. If playback timeouts are high, re-examine the agent capacity and apportion virtual users accordingly. As a rule, the fewer VUs per agent, the better the consistency and accuracy of the results.
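A discard rule of this kind might be sketched as a simple ratio check. The 5% threshold below is a hypothetical figure chosen for illustration, not a product default; pick a tolerance appropriate to your own results:

```python
def should_discard_run(timeout_count, total_transactions,
                       max_timeout_ratio=0.05):
    """Flag a run whose timeout ratio exceeds a tolerance.
    The 5% default is a hypothetical threshold, not a product value."""
    return timeout_count / total_transactions > max_timeout_ratio

should_discard_run(timeout_count=12, total_transactions=100)  # → True
should_discard_run(timeout_count=2, total_transactions=100)   # → False
```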