Gen AI is hot, and so are its vulnerabilities
#IBMSecurity #GenerativeAI #IBMQRadar #AI
Unlocking the potential of Generative AI while safeguarding its usage is essential for a future where innovation and security go hand in hand.
Generative AI has been at the forefront of every debate on technological advancements over the last few years and has finally become a reality. Today, Gen AI is used in many functions such as Research and Development, Marketing, Design, Finance and Accounting, Human Resource Management, and Operations across many industries such as Information Technology, Finance, Healthcare, Media and Entertainment, etc.
While there is no denying the great value of adopting Gen AI tools and technologies, the biggest concern is their potential misuse. Gartner defines AI TRiSM as a framework for AI Trust, Risk, and Security Management. In a recent article, Gartner suggests monitoring AI tools as an important driver for adopting AI TRiSM. Unauthorized use of Gen AI tools can lead to compliance issues and significant data breaches. Because Gen AI models often handle sensitive data, security breaches can have severe impacts. Organizations can adopt the AI TRiSM framework to establish security protocols and measures that prevent unauthorized access.
Besides securing the usage of Gen AI tools, monitoring them brings several other benefits, such as:
- Performance Optimization: By tracking how these tools are used, organizations can identify performance bottlenecks and optimize resource allocation, ensuring the tools run efficiently.
- Cost Management: Gen AI tools can be computationally intensive and expensive to run. Monitoring usage helps manage costs and enables proactive scaling decisions.
QRadar Gen AI Content Extension has been officially released
As a first step toward achieving these objectives, IBM has officially released the first version of the QRadar Generative AI Content Extension, which is available for all QRadar customers to download free of cost. You can download the content extension from IBM App Exchange here.
The content extension provides a set of dashboards designed to detect and monitor the usage of Gen AI tools within a SOC environment.
As of today, the dashboard supports monitoring:
- ChatGPT
- OpenAI
- GitHub Copilot
- Stable Diffusion
- Bing
- Google AI
- Claude AI
- DreamStudio AI
We plan to add support for many more Gen AI tools in future versions of these dashboards.
The dashboard lets you monitor:
- Pie chart view of the usage of all Gen AI tools
- Event Count Timeline across all Gen AI tools
- Top 10 Source and Destination IPs
- Location of Source IPs
- Top 10 users for each Gen AI tool
- Top 10 Source and Destination Countries for Username
- Top 10 Events Associated with Username
- and much more.
Comprehensive Monitoring of Gen AI tools is crucial to stop attacks proactively
One example of how these dashboards can help a SOC is the event count timeline across all Gen AI tools, which shows the usage pattern of each Gen AI tool over time. If a SOC analyst observes a sudden spike in the event count for a particular Gen AI tool that is an extreme outlier compared to the others, it might indicate a potential data leak.
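For teams that want to automate this kind of spike check outside the dashboard, a simple outlier test over the hourly counts is often enough as a first pass. The sketch below is illustrative only; the counts would come from whatever export or query mechanism you already use, and the ratio threshold is an assumption to tune for your environment.

```python
from statistics import median

def find_spikes(hourly_counts, ratio_threshold=5.0):
    """Flag hours whose event count dwarfs the median of all other hours.

    hourly_counts: list of (hour_label, count) pairs for one Gen AI tool,
    e.g. read off the event count timeline widget.
    """
    spikes = []
    for i, (hour, count) in enumerate(hourly_counts):
        baseline = [c for j, (_, c) in enumerate(hourly_counts) if j != i]
        if not baseline:
            continue
        base = median(baseline)
        if base > 0 and count / base >= ratio_threshold:
            spikes.append((hour, count))
    return spikes

# Example: a sudden burst at 13:00 stands out against a quiet baseline.
timeline = [("09:00", 40), ("10:00", 35), ("11:00", 42), ("12:00", 38), ("13:00", 410)]
print(find_spikes(timeline))  # [('13:00', 410)]
```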
Another example is the pie chart view of the Gen AI tools, which lets a SOC understand the overall frequency distribution of usage across all the Gen AI tools. If one or two Gen AI tools are clearly the most popular in the environment, the organization can put appropriate usage policies in place for those tools so that users do not intentionally or unintentionally expose the organization’s Sensitive Personal Information or Confidential Information to them.
The age of generative AI is still new, and new laws and regulations are emerging every day. For instance, many organizations require their data to be stored in a data center within its country of origin. The General Data Protection Regulation (GDPR) imposes restrictions on transferring and storing personal data outside the European Economic Area (EEA).
The ‘Generative AI Tools Top 10 Destination IPs’ widget allows a SOC to monitor the IPs that host the Gen AI tools. If a server hosting a particular Gen AI tool resides in a geographical location where the organization’s data cannot reside under local laws, a SOC might want to raise an alarm and follow the appropriate mitigation steps as per the security protocol.
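As a rough illustration of that check, the sketch below matches destination IPs against a data-residency allow-list. The country lookup is a stand-in dictionary and the IPs, countries, and policy are assumptions; in practice you would resolve each IP with whatever GeoIP source your environment already uses.

```python
# Minimal sketch of a data-residency check on the "Top 10 Destination IPs" list.
ALLOWED_COUNTRIES = {"DE", "FR", "IE", "NL"}  # e.g. an EEA-only residency policy (illustrative)

GEO_LOOKUP = {                                # placeholder for a real GeoIP resolver
    "203.0.113.10": "US",
    "198.51.100.7": "DE",
}

def residency_violations(destination_ips):
    """Return (ip, country) pairs whose hosting country is outside policy."""
    return [
        (ip, GEO_LOOKUP.get(ip, "unknown"))
        for ip in destination_ips
        if GEO_LOOKUP.get(ip, "unknown") not in ALLOWED_COUNTRIES
    ]

print(residency_violations(["203.0.113.10", "198.51.100.7"]))
# [('203.0.113.10', 'US')]
```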
Similarly, the ‘Generative AI Tools Source Location’ and ‘Generative AI Tools Top 10 Users’ widgets let SOCs monitor the users accessing the Gen AI tools. These widgets help SOCs keep track of where the Gen AI tools are being accessed from and by which user. A user who generates an extremely high event count for a Gen AI tool, or whose source or destination IP maps to a suspicious geographical location, might be performing malicious activity and may warrant further investigation.
Another interesting example is how SOCs can monitor the top 10 users for each Gen AI tool to gain actionable insights. Organizations can survey these top users to learn how they use the tools in their day-to-day operations and what challenges they face, and then create guardrails around the usage of those tools. Another helpful insight is a user who is an extreme outlier compared to the other top 10 users in the environment; this might be an indicator of potential misuse or other malicious activity.
Investigate further uninterrupted with drilled-down views
SOC analysts can either monitor the usage of all Gen AI tools at a high level or drill down to a particular Gen AI tool or user for a closer look.
They can also filter the dashboard visualizations by timespan, specified as the number of hours in the past.
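For analysts who want the same time-bounded view outside the dashboard, a rough equivalent is an ad-hoc AQL search scoped with a LAST n HOURS clause submitted to QRadar’s Ariel search REST API. The sketch below is illustrative only: the console address and token are placeholders, and any filtering down to Gen AI traffic would depend on how events are classified in your deployment.

```python
# Illustrative sketch: an ad-hoc, time-bounded query mirroring the dashboard's
# timespan filter. Add a WHERE clause matching however Gen AI traffic is
# classified in your environment (field names vary by deployment).
import requests

CONSOLE = "qradar.example.com"                      # placeholder console address
HEADERS = {"SEC": "<api-token>", "Accept": "application/json"}

aql = (
    "SELECT username, COUNT(*) AS event_count "
    "FROM events "
    "GROUP BY username "
    "LAST 4 HOURS"                                  # equivalent of the dashboard timespan filter
)

resp = requests.post(
    f"https://{CONSOLE}/api/ariel/searches",
    headers=HEADERS,
    params={"query_expression": aql},
)
search = resp.json()
print(search.get("search_id"))  # poll /api/ariel/searches/{search_id}/results for the rows
```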
One example of how these drilled-down views might be useful is the pie chart view of a single Gen AI tool, which gives a SOC team a comprehensive overview of how that particular tool is being utilized across the environment. This visibility can help identify potential outliers that do not meet the acceptable usage policy. For instance, if an organization has approved a specific tokenized version of GitHub Copilot that prevents unsanctioned or sensitive data from being shared, but a user employs an unapproved version of the same tool, that usage creates a risk of data loss and might require appropriate Data Loss Prevention (DLP) protocols to be enabled with immediate effect.
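A minimal sketch of that kind of check is shown below: observed destination hosts for a tool are matched against an allow-list of sanctioned endpoints. All hostnames here are hypothetical placeholders, not real Copilot endpoints.

```python
# Sketch of an allow-list check for sanctioned Gen AI endpoints. Substitute the
# endpoints your organization has actually approved (for example, an
# enterprise-managed Copilot deployment).

APPROVED_ENDPOINTS = {
    "copilot.enterprise.example.com",   # hypothetical sanctioned deployment
}

def unapproved_usage(events):
    """Return (username, hostname) pairs that bypass the approved endpoints.

    events: iterable of dicts with at least 'username' and 'destination_host',
    e.g. rows exported from the drilled-down Copilot view.
    """
    return [
        (e["username"], e["destination_host"])
        for e in events
        if e["destination_host"] not in APPROVED_ENDPOINTS
    ]

sample = [
    {"username": "alice", "destination_host": "copilot.enterprise.example.com"},
    {"username": "bob",   "destination_host": "api.githubcopilot.example.net"},
]
print(unapproved_usage(sample))  # [('bob', 'api.githubcopilot.example.net')]
```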
Another example of how drilled-down views might be useful is that of a particular user. If, while monitoring the overall view of all Gen AI tools, a SOC analyst wants to investigate suspicious behavior further, they can drill down into a particular user’s activity. For instance, if a user is associated with many different source IPs, and those source IPs map to many different geographical locations across the globe, it might indicate a Distributed Denial of Service (DDoS) or botnet attack.
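A rough way to automate that drill-down check is to count distinct source IPs and source countries per user and flag accounts that exceed simple thresholds, as in the hedged sketch below; the field names and thresholds are assumptions to adapt to your data.

```python
# Count distinct source IPs and countries per user and flag widely dispersed
# accounts for further investigation. Thresholds are illustrative defaults.
from collections import defaultdict

def flag_dispersed_users(events, max_ips=10, max_countries=3):
    """events: iterable of dicts with 'username', 'source_ip', 'source_country'."""
    ips = defaultdict(set)
    countries = defaultdict(set)
    for e in events:
        ips[e["username"]].add(e["source_ip"])
        countries[e["username"]].add(e["source_country"])
    return [
        user
        for user in ips
        if len(ips[user]) > max_ips or len(countries[user]) > max_countries
    ]
```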
Please refer to the official documentation for more information or feel free to reach out to your IBM field representative to learn more.
We encourage you to download and install the latest version of the content extension and try it out yourself. We hope it makes a difference in your SOC environment. But this is just the beginning, as IBM stays committed to enabling a secure future for Gen AI. We will continue to expand our out-of-the-box coverage for Gen AI tools in upcoming versions of the content pack. As Gen AI technology evolves and frameworks and regulations strengthen across geographies, we are committed to adopting them to offer our customers a safe and secure future. We would love to hear your feedback and suggestions.