AI Governance is an overloaded term! So where do you start?

By Douglas Stauber posted 2 days ago

  
AI Governance is heating up. 🌡🌡🌡

But AI Governance is an overloaded term. It refers to a variety of activities that ensure AI is responsible, trusted, and compliant with internal and external requirements. So where do you start?

Here are 10 activities under the AI Governance umbrella and the order in which they should be pursued. Keep in mind, this is not one-size-fits-all: the list should be adjusted for each organization and industry.

1. Start by defining internal policies. AI is the manifestation of people's decisions. These decisions must be clearly defined before AI efforts are started.
2. Ensure the data used by and generated by AI is secured, with appropriate data governance and, where needed, de-identification measures.
3. Measure AI metadata. Monitor quality and safety metrics, from hallucination detection to fairness (see the first sketch after this list).
4. Perform AI Risk Assessments to distribute AI Governance resources accordingly. Naturally, low-risk internal development efforts need far less attention than external production applications.
5. Provide transparency by documenting the results of AI efforts. This aligns internal stakeholders enterprise-wide, helps eliminate silos, and ensures a consistent approach to AI.
6. Transparency will result in questions. Prepare to answer them with explainability methods that justify why certain decisions were made and show how models function (see the second sketch after this list).
7. By this stage, organizations will be anxious to put AI applications into production. But there are dozens of AI regulatory standards worldwide (and growing). Mitigate risk before production by performing internal audits with regulatory risk assessments.
8. AI sustainability refers to the internal ($) and external (environmental) costs of running AI applications. Monitoring these costs enables informed decisions about which AI applications are put into production, and how.
9. Consent means gaining permission from those who contributed the data behind the model. Depending on the industry, this may jump up in priority significantly.
10. Usage Governance - who should be using AI to do their work? Usage Governance requires users to seek permission from a central authority before using AI for certain tasks.
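To make step 3 a bit more concrete, here is a minimal sketch of one common fairness check, the disparate impact ratio. Everything in it (column names, group labels, the ~0.8 review threshold, the toy data) is an illustrative assumption, not a prescribed watsonx.governance workflow or API.

```python
# A minimal sketch of one fairness check from step 3: the disparate impact
# ratio (favorable-outcome rate for a monitored group divided by the rate
# for a reference group). Column names, group labels, and the synthetic
# scores below are illustrative assumptions only.
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str,
                     monitored: str, reference: str) -> float:
    """Ratio of favorable-outcome rates; values below ~0.8 often trigger review."""
    monitored_rate = df.loc[df[group_col] == monitored, outcome_col].mean()
    reference_rate = df.loc[df[group_col] == reference, outcome_col].mean()
    return monitored_rate / reference_rate

# Synthetic scoring results (1 = favorable decision).
scores = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
    "approved": [1,   0,   1,   0,   1,   1,   1,   0],
})
print(f"Disparate impact: {disparate_impact(scores, 'gender', 'approved', 'F', 'M'):.2f}")
# Prints 0.67 for this toy data: the monitored group is approved about
# two-thirds as often as the reference group.
```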
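And for step 6, an equally minimal sketch of one widely used explainability method, permutation feature importance: shuffle each input feature and see how much model accuracy drops. The model and data here are synthetic stand-ins assumed purely for illustration.

```python
# A minimal sketch of one explainability method from step 6: permutation
# feature importance on a trained classifier. The model and data are
# synthetic stand-ins, not a specific watsonx.governance integration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops;
# larger drops mean the model leans more heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```

Outputs like these are also the kind of evidence that feeds the documentation in step 5 and the internal audits in step 7.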

At a macro level, most organizations engaged in AI have defined internal policies, have a data governance solution, and are interested in consistently measuring AI metadata. Of course, there is a wide range of variation.

Have I left anything out? Please leave a comment.

#watsonx.governance