Software testing is crucial in software engineering: it ensures quality, reliability, and security, and it builds confidence and customer satisfaction.
Software testing encompasses various types, including functional testing such as unit, integration, system, and acceptance testing. System testing (commonly called system verification testing) is one of the most important test phases: it helps ensure system reliability and validates how different components integrate with each other. For enterprise-level products, designing and testing different customer use cases can get genuinely complex once error conditions, negative paths, corner cases, and various failure conditions are considered. At times, building a test bed for such complex scenarios takes a lot of time.
AI assistant tools are being adopted by businesses and industries everywhere. One such offering from IBM is IBM watsonx Code Assistant (WCA), which is widely used today by development communities for purposes such as unit test generation, code generation, explaining code snippets, documenting functions, and translating code from one language to another. WCA can be used in several ways to improve the testing process, particularly for automated testing, test case generation, bug detection, and code quality improvement.
We tried different use cases with WCA, such as generating documentation for a piece of code, generating code from instructions written as comments, and using the Explain functionality to understand complex code. Exploring these capabilities further, we were able to use WCA to strengthen our system verification scenarios and to understand the varied data sets these scenarios need to be tested against. One such use case we picked up was “Testing product stability during Machine Config Operator (MCO) rollouts”. We automated running MCO rollouts and, with the help of WCA, tested the code for different error conditions with varied data sets, which would otherwise have been done with limited data and scenarios.
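For context, below is a minimal sketch of the kind of check this automation performs: polling a MachineConfigPool until a rollout completes. The function name, the timeout values, and the use of the oc CLI via subprocess are our own illustrative choices, not the actual suite code.

```python
import subprocess
import time

def wait_for_mcp_updated(pool: str = "worker", timeout: int = 1800, interval: int = 30) -> bool:
    """Poll a MachineConfigPool until its Updated condition is True, or time out."""
    jsonpath = '{.status.conditions[?(@.type=="Updated")].status}'
    deadline = time.time() + timeout
    while time.time() < deadline:
        # Query the pool's Updated condition; a non-zero return code is treated
        # as "not ready yet" and retried until the deadline.
        result = subprocess.run(
            ["oc", "get", "mcp", pool, "-o", f"jsonpath={jsonpath}"],
            capture_output=True, text=True,
        )
        if result.returncode == 0 and result.stdout.strip() == "True":
            return True
        time.sleep(interval)
    return False
```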
Goal 1: Refactor automation code
Automation suite: test_svt_odf_runmco.py
Prompt used: “Refactor Automation code @test_svt_odf_runmco.py”
Suggestions provided: It pointed out where the code needed to be refined to align with better coding practices.
It produced much cleaner code in terms of parameter definitions, which helped save reviewer time.
It helped remove hard-coded variable values.
It helped improve error messages for different failure conditions (see the sketch after this list).
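To illustrate the kind of refactor WCA suggested, here is a hedged before/after sketch; the test function names and the get_mcp_status helper are hypothetical, not the actual suite code.

```python
import subprocess

def get_mcp_status(pool: str) -> str:
    """Hypothetical helper: return the Updated condition of a MachineConfigPool."""
    jsonpath = '{.status.conditions[?(@.type=="Updated")].status}'
    result = subprocess.run(
        ["oc", "get", "mcp", pool, "-o", f"jsonpath={jsonpath}"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# Before: hard-coded pool name and an uninformative failure message.
def test_rollout_before():
    assert get_mcp_status("worker") == "True", "failed"

# After: the pool is a parameter and the assertion explains what went wrong.
def test_rollout_after(pool: str = "worker"):
    status = get_mcp_status(pool)
    assert status == "True", (
        f"MachineConfigPool '{pool}' reported Updated={status!r}, expected 'True'"
    )
```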
Goal 2: Identify potential bugs in test_svt_odf_runmco.py
Automation suite: test_svt_odf_runmco.py
Prompt used: “Identify potential bugs in the code @test_svt_odf_runmco.py”
Suggestions provided: It pointed out scenarios where the code would likely fail because those cases were not handled.
It flagged that defining global parameters is poor coding practice, which helped save reviewer time.
It pointed out that there was very limited or no error handling in the test functions; once these error conditions are handled, the code can be tested for different failure conditions (see the sketch after this list).
It also pointed out possible race conditions in the automation code that could cause failures.
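As a hedged sketch of the missing error handling WCA flagged, the snippet below wraps a cluster query so that failures surface a clear error instead of crashing the test; the helper name is hypothetical, and using an explicit timeout (rather than a fixed sleep) is one common way to avoid the race conditions it mentioned.

```python
import subprocess

def get_node_ready_status(node: str) -> str:
    """Hypothetical helper: return a node's Ready condition, with clear errors."""
    jsonpath = '{.status.conditions[?(@.type=="Ready")].status}'
    try:
        result = subprocess.run(
            ["oc", "get", "node", node, "-o", f"jsonpath={jsonpath}"],
            capture_output=True, text=True, check=True, timeout=60,
        )
    except subprocess.CalledProcessError as err:
        # Surface the CLI's stderr so the failure condition is diagnosable.
        raise RuntimeError(f"Failed to query node '{node}': {err.stderr.strip()}") from err
    except subprocess.TimeoutExpired as err:
        raise RuntimeError(f"Timed out querying node '{node}'") from err
    return result.stdout.strip()
```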
Goal 3: Suggest test scenarios for test_svt_odf_runmco.py
Once a scenario is written, it is important to test it against all possible data sets.
Automation suite: test_svt_odf_runmco.py
Prompt used: “Suggest test scenarios for @test_svt_odf_runmco.py”
Suggestions provided: It listed the scenarios against which this code needs to be tested.
It suggested testing the code under different node health conditions.
It suggested testing the code under different operator conditions.
It pointed out that only a few pods were validated, and that various pod health statuses should be covered instead.
It also recommended validating the code with different numbers of ODF nodes (see the parametrized sketch after this list).
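One straightforward way to cover such data sets is pytest's parametrization, sketched below; the scenario values and the run_mco_rollout stand-in are illustrative assumptions, not the actual suite code.

```python
import pytest

def run_mco_rollout(odf_nodes: int, pod_phase: str) -> bool:
    # Stand-in for the real rollout driver; the actual helper would provision
    # the cluster, set pod health, run the MCO rollout, and report success.
    return True

@pytest.mark.parametrize("odf_nodes", [3, 4, 6])                         # vary ODF node count
@pytest.mark.parametrize("pod_phase", ["Running", "Pending", "Failed"])  # vary pod health
def test_mco_rollout_datasets(odf_nodes, pod_phase):
    assert run_mco_rollout(odf_nodes, pod_phase), (
        f"Rollout failed with {odf_nodes} ODF nodes and pods in phase {pod_phase}"
    )
```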
Goal 4: Identify possible negative test scenarios in test_svt_odf_runmco.py
Automation suite: test_svt_odf_runmco.py
Prompt used: “Identify possible negative test scenarios in the code test_svt_odf_runmco.py”
Suggestions provided: It suggested different negative conditions to test:
Test with unhealthy nodes.
Test with an incomplete MCP (MachineConfigPool).
Test with operators in a non-running state.
Test with different pod deployments (see the sketch after this list).
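A negative test along these lines asserts that the automation fails loudly rather than silently. The sketch below is a minimal hypothetical example, assuming a start_rollout driver that refuses to proceed on an unhealthy node.

```python
import pytest

def start_rollout(node_ready: bool) -> None:
    # Stand-in for the real rollout driver: abort when a node is unhealthy.
    if not node_ready:
        raise RuntimeError("Node is NotReady; aborting MCO rollout")

def test_rollout_rejects_unhealthy_node():
    # The negative path should raise a clear, matchable error.
    with pytest.raises(RuntimeError, match="NotReady"):
        start_rollout(node_ready=False)
```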
The above is a proof of concept (POC), done for a single use case, of using WCA to enhance system verification testing; WCA proposed validating with different data sets, error conditions, and failure conditions. When testing automation code, tests are sometimes run with a limited set of data, and broader coverage needs a comprehensive understanding of the system.
Having WCA by our side helped:
Reduce manual review time and effort, provided WCA's suggestions are addressed before raising PRs.
Provide a rich set of data sets against which the code needs to be validated, which helps reduce the failure percentage when the automation runs through daily build pipelines.
Ensure the code is tested against error conditions that we otherwise tend to ignore.
A few improvement areas for WCA:
It takes time to evaluate a prompt and produce output.
Context-based search, if it goes wrong, can give erroneous results.
Note: This is useful not only for the test community; it can also be leveraged by developers to strengthen their unit test validations.
References:
• Getting started — watsonx Code Assistant
• https://cloud.ibm.com/docs/watsonx-code-assistant?topic=watsonx-code-assistant-wca-generate-code#wca-generate-code-best-practices
• https://w3.ibm.com/w3publisher/wca-ibm/documentations#introduction_video
Acknowledgements: p.llamas@ibm.com, pallavi.s@in.ibm.com