IBM Guardium

Building Trust in AI: Lessons from IBM and HCL on Security and Governance

By Anshul Garg posted 21 hours ago


A few weeks ago, I had the privilege of hosting a webinar with experts on AI security and AI governance: Amit Mishra, Global Lead for Data Privacy & Security, HCL Technologies; Assaf Miron, Sr. Product Lead, Cloud & Emerging Capabilities, IBM; and Neil Leblanc, Go-to-Market Lead, watsonx.governance, IBM. It was a candid conversation that covered key aspects of securing and governing AI, and the critical role underlying data security plays in that model.

The conversation was packed with real-world insights, and I want to use this blog to share highlights for those who couldn’t join live. You can also watch the replay here.

The Problem: Shadow AI, Compliance Mandates, and Vulnerabilities

AI adoption is racing ahead, but trust and security haven’t kept pace. Many organizations are grappling with:

- Shadow AI – AI models and agents deployed without visibility or governance.
- Vulnerabilities – Misconfigurations, unsecured prompts, and lack of penetration testing leave AI models exposed.
- Compliance mandates – From the EU AI Act to ISO frameworks, companies must prove they are handling AI responsibly.
- Safe usage – Ensuring AI outputs are reliable, unbiased, and do not leak sensitive data.

As we saw in a live poll during the session, most attendees are wrestling with a combination of these challenges. Trustworthy AI is no longer optional—it is the foundation for scaling AI responsibly.

An Example from Practice: Unified AI Inventory

One of the most powerful examples shared in the webinar was how creating a unified AI inventory, covering both sanctioned and shadow AI, helps organizations take back control.

By discovering every model, mapping its owners, and aligning each to governance frameworks like NIST, MITRE ATLAS, and the EU AI Act, companies can break down silos between security and governance teams. This simple step turns fragmented oversight into a holistic risk picture.

The Solution: Securing AI Across the Lifecycle

The speakers emphasized that securing AI is not a one-time event. It requires a framework for securing AI and a lifecycle approach:

1. Discover – Identify all AI models, including shadow AI and embedded AI.
2. Assess – Conduct vulnerability assessments, red teaming, and penetration testing to protect against adversarial attacks.
3. Manage Compliance – Map AI risks to established frameworks (ISO, NIST, EU AI Act).
4. Control – Apply policy-driven controls for prompts, data usage, and outputs once models are in production.
5. Data Security as Fuel – Tools like IBM Guardium were highlighted as critical for securing the data that fuels AI, simplifying compliance, and enabling real-time risk scoring.
6. Unified Governance and Security – Bringing security and governance teams together in a shared console to manage risks collectively.
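As a small illustration of step 4, policy-driven prompt controls can be as simple as screening prompts against a set of named policy rules before they reach a model. The patterns and policy names below are hypothetical examples for this post, not rules from any IBM product.

```python
import re

# Illustrative policy rules (hypothetical patterns, not a real product's ruleset)
POLICIES = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of policies a prompt violates; an empty list means allow."""
    return [name for name, pattern in POLICIES.items() if pattern.search(prompt)]

print(check_prompt("Summarize Q3 results"))                  # []
print(check_prompt("Email jane.doe@example.com the report")) # ['email']
```

In practice such checks would run on outputs as well as prompts, and real deployments lean on classifiers and data-security tooling rather than regexes alone, but the control pattern (named policies, evaluated per interaction, with an auditable verdict) is the same.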

Together, these elements help ensure that AI is not only innovative but also safe, compliant, and trustworthy.

Steps Organizations Should Take

From the insights shared, here’s a practical roadmap for organizations looking to implement trustworthy AI:

1. Start with visibility: Build an AI inventory and include shadow AI in scope.
2. Secure the data: Protect the information assets that feed AI models—this is the foundation.
3. Secure the model: Integrate security testing throughout development and deployment.
4. Secure the usage: Enforce prompt security, guardrails, and output monitoring.
5. Embed compliance: Align with global frameworks and ensure auditability.
6. Unify governance and security: Break silos and manage AI risks holistically.
7. Move fast: This cannot be a five-year plan—implement controls and quick wins now to keep pace with AI’s adoption curve.

Closing Thoughts

What stood out to me most was the urgency to act and the need to bring governance and security together. AI is being deployed faster than traditional governance and security processes can adapt. But by taking a lifecycle approach—securing data, models, and usage, while embedding governance—organizations can unlock AI’s potential without compromising trust.

I’m incredibly grateful to my colleagues from IBM and HCL who joined me for this important discussion. If you missed it, I encourage you to watch the replay here. It’s a practical guide for anyone serious about securing their AI journey.
