Modern language models offer the potential to automate aspects of engineering that in the past have been resistant to automation. This seems especially promising for engineering artifacts and the processes around them that are (or can be) expressed in text.
Our starting point: the left side of the System V
We often overlay the capabilities of IBM Engineering Lifecycle Management on the System V, which is a good representation of the digital thread information model that enables teams to align their work across disciplines:
Figure 1: IBM Engineering Lifecycle Management products aligned around the System V
Our focus for the first wave of AI-powered automations for the ELM products is on the left side of the V: specifically, on requirements processes and on accelerating and democratizing model-based systems engineering (MBSE). We are focusing here for two reasons: (1) these areas are the most unbounded (it's hard to know when you are done creating, reviewing, and so on); and (2) if we can help teams arrive at higher-quality requirements and models faster, the benefits flow across the whole engineering lifecycle.
What can an assistant do?
One of the things we’ve been working on is a Requirements Intelligence Assistant: what if you could ask questions of your module in natural language and get useful answers? How easy can we make it to set up and use?
To help answer these questions, we put together a simple demonstrator that you can use with your own DOORS Next server and some free watsonx services available on IBM Cloud. The assistant sits next to your requirements module in the mini-dashboard and answers questions and requests related to your requirements (Figure 2). It uses RAG-style prompting to limit its answers to the requirements text.
Figure 2: The Requirements Intelligence Assistant in the DOORS Next web UI
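To make "RAG-style prompting" concrete, here is a minimal sketch of how such a prompt might be assembled. The function name and prompt wording below are illustrative, not the demonstrator's actual code:

```python
# Illustrative sketch of RAG-style prompt assembly: the module's requirement
# texts become the context, and the instructions constrain the model to
# answer only from that text. Not the demonstrator's actual code.

def build_prompt(requirements: list[str], question: str) -> str:
    context = "\n".join(f"- {req}" for req in requirements)
    return (
        "You are an assistant answering questions about a requirements module.\n"
        "Answer using only the requirements listed below. If the answer is not\n"
        "in the requirements, say that you cannot find it.\n\n"
        f"Requirements:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )
```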
A Python Flask application ("Requirements Intelligence application" in Figure 3) receives requests from the user interacting with the assistant and in turn calls watsonx services on IBM Cloud.
Figure 3: The parts of the DOORS Next Requirements Intelligence Assistant
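As a rough sketch of how such a Flask relay might look: the route, environment variable names, and payload details below are assumptions based on the public watsonx.ai REST API rather than the demonstrator's actual code, and it reuses the hypothetical `build_prompt` helper from the earlier sketch.

```python
# Minimal sketch of a Flask relay between the DOORS Next assistant widget
# and watsonx.ai. Route and environment variable names are illustrative;
# see the repo's readme for the demonstrator's actual configuration.
import os

import requests
from flask import Flask, jsonify, request

from prompt_sketch import build_prompt  # hypothetical: the helper sketched earlier

app = Flask(__name__)

WX_URL = os.environ["WATSONX_URL"]            # e.g. https://us-south.ml.cloud.ibm.com
WX_PROJECT = os.environ["WATSONX_PROJECT_ID"]
WX_API_KEY = os.environ["WATSONX_API_KEY"]
MODEL_ID = os.environ.get("MODEL_ID", "ibm/granite-13b-instruct-v2")


def iam_token() -> str:
    """Exchange the IBM Cloud API key for a short-lived IAM bearer token."""
    resp = requests.post(
        "https://iam.cloud.ibm.com/identity/token",
        data={
            "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
            "apikey": WX_API_KEY,
        },
    )
    resp.raise_for_status()
    return resp.json()["access_token"]


@app.post("/ask")
def ask():
    """Receive a question plus requirement texts; return the model's answer."""
    body = request.get_json()
    prompt = build_prompt(body["requirements"], body["question"])
    resp = requests.post(
        f"{WX_URL}/ml/v1/text/generation?version=2023-05-29",
        headers={"Authorization": f"Bearer {iam_token()}"},
        json={
            "model_id": MODEL_ID,
            "input": prompt,
            "project_id": WX_PROJECT,
            "parameters": {"max_new_tokens": 200},
        },
    )
    resp.raise_for_status()
    return jsonify(answer=resp.json()["results"][0]["generated_text"])
```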
Ready to try it yourself?
The files you need are available in the GitHub repo referenced below. The detailed readme will guide you through the configuration steps; the numbers in the orange circles in Figure 4 correspond to the steps in the readme.
Figure 4: The readme steps overlaid on the parts of the DOORS Next Requirements Intelligence Assistant
You need to supply the following:
- A DOORS Next V7.x server with a requirements module you’d like to query. We recommend using a non-production server.
- A workstation or server to host the Requirements Intelligence server. DOORS Next must have network access to the Requirements Intelligence server, and the Requirements Intelligence server must have access to the IBM Cloud services.
- An hour or two of your time.
Some details you might find helpful:
- The primary text of the whole module is included as part of the prompt, so your data will travel to IBM watsonx services on IBM Cloud. Choose a module whose requirements are suitable for that.
- This is a demonstrator, not a product, so expect some rough edges. For example, data in requirement attributes is not queried, and responses are limited in length.
- Every LLM has a token context window, and its size varies by model, so don't use a module that is too large; the primary module we tested with has about 60 requirements. A rough way to check a module's size is sketched below.
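There is no exact client-side token count without the model's tokenizer, but a common rough approximation of about four characters per token for English text can flag oversized modules before you send them. This is a heuristic, not an exact count for any particular model:

```python
# Tokenizer-free size heuristic: roughly four characters per token for
# English text. An approximation, not an exact count for any model.

def estimated_tokens(requirements: list[str]) -> int:
    return sum(len(r) for r in requirements) // 4


def fits_context(requirements: list[str], context_window: int,
                 reserve_for_answer: int = 500) -> bool:
    """Leave headroom in the window for the generated answer."""
    return estimated_tokens(requirements) + reserve_for_answer < context_window


# Example: ~60 short requirements fit easily in an 8K-token window.
print(fits_context(["The system shall log all user actions."] * 60, 8192))  # True
```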
Using and extending this demonstrator
After asking questions and getting answers, you may want to experiment further. For example, you can:
- Change the LLM specified in the `.env` file. For example, change `granite-13b-instruct-v2` to `llama-3-70b-instruct`, `mistral-large`, or another model available in your watsonx.ai service.
- Change the Assistant's system prompt in ./src/prompt.py. For example, to get longer responses, change prompt point 4 ("Your response should include 2-5 sentences.") and perhaps adjust `max_new_tokens` and `min_new_tokens`. A sketch of reading such settings appears after this list.
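If you script such experiments, the settings might be read along these lines. The variable names here are illustrative assumptions; check the demonstrator's `.env` and ./src/prompt.py for the real ones:

```python
# Illustrative sketch of reading the model choice and generation limits from
# a .env file with python-dotenv. Variable names are assumptions; check the
# demonstrator's .env and ./src/prompt.py for the real ones.
import os

from dotenv import load_dotenv

load_dotenv()  # loads key=value pairs from ./.env into the environment

MODEL_ID = os.getenv("MODEL_ID", "granite-13b-instruct-v2")
GEN_PARAMS = {
    "max_new_tokens": int(os.getenv("MAX_NEW_TOKENS", "200")),
    "min_new_tokens": int(os.getenv("MIN_NEW_TOKENS", "20")),
}
```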
This demonstrator is provided under the Eclipse Public License - v 2.0. You are welcome to experiment and extend it.
In summary
We have used this demonstrator to help us understand the promise and perils of using a generative AI assistant with requirements. You can expect the features we make available in our products to be more productized than this demonstrator in many respects.
We hope you gain similar insights from setting it up and using it. We think watsonx is a good choice for our AI-powered automations, and for the ones you build too. We hope this demonstrator gives you a useful starting point.
References
Acknowledgements
Thanks to Devang Parikh and Bhawana Gupta for their contributions to this post.