Original Message:
Sent: Wed January 15, 2025 07:55 AM
From: Daniel Toczala
Subject: Is it even possible to build AI models without human bias.
Fantastic observation, Mik - which leads us to the core of any "predictive" AI system. A system used to predict things (like loan eligibility) is SUPPOSED to be biased: it's supposed to choose or predict which group a particular individual will fall into. The key is eliminating the biases that do not (or should not) impact the prediction, and only considering the things that DO legitimately bias a prediction - the factors that most heavily influence a particular outcome. In your AI model predicting loan repayment, there is a bias towards individuals with larger incomes, since those people tend to have more available resources to pay off a loan. A data scientist would call this a feature of the model, and they will look for a combination of features and weights that makes the model more accurate - more biased - towards choosing individuals who can pay back a loan.
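To make the "features and weights" idea concrete, here is a minimal sketch of how a scoring model combines them. The feature names and weight values are purely illustrative assumptions, not taken from any real loan model:

```python
import math

# Hypothetical, hand-picked weights for illustration only.
# A data scientist would learn these from training data.
WEIGHTS = {
    "income_norm": 1.8,          # higher (normalized) income -> more repayment capacity
    "debt_ratio": -2.1,          # more existing debt relative to income -> higher risk
    "years_employed_norm": 0.9,  # longer employment history -> more stability
}
BIAS_TERM = -0.4  # intercept

def repayment_probability(applicant: dict) -> float:
    """Combine features and weights into one score, then squash to [0, 1]."""
    score = BIAS_TERM + sum(WEIGHTS[k] * applicant.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-score))

strong = repayment_probability(
    {"income_norm": 0.9, "debt_ratio": 0.1, "years_employed_norm": 0.8})
weak = repayment_probability(
    {"income_norm": 0.2, "debt_ratio": 0.8, "years_employed_norm": 0.1})
```

The point of the sketch: the model is deliberately "biased" towards applicants whose legitimate features (income, debt load) predict repayment - and notice that no protected attribute (name, gender, address) appears among the features at all.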
So be careful when discussing bias in AI models. Some of it may be inherent in the data, some of it may come from organizations or societal institutions, and some of it is just "human" bias (I like people who read my blog posts and articles....). The point here is that you should worry about, and always consider, bias in your data and models. You just shouldn't follow that path to its very vanilla and un-useful end. You want fairness and a lack of discrimination - what I might call a "known set of acceptable biases".
Watsonx environments give you the tools to track this "known set of acceptable biases" over time, and to detect things like model drift. It's important to do this - AI will never be perfect, but we can make it more transparent and continually seek to improve what it does for us.
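For readers wondering what "tracking drift" looks like in practice, here is a generic sketch (not watsonx-specific) of one common drift metric, the Population Stability Index, which compares a feature's binned distribution at training time against what the model sees in production. The bin fractions below are made-up numbers for illustration:

```python
import math

def psi(expected_fracs, actual_fracs, eps=1e-4):
    """Population Stability Index between two binned distributions.

    Common rule of thumb: PSI < 0.1 is stable, 0.1-0.25 is moderate
    drift worth watching, and > 0.25 is major drift.
    """
    total = 0.0
    for e, a in zip(expected_fracs, actual_fracs):
        e = max(e, eps)  # guard against empty bins
        a = max(a, eps)
        total += (a - e) * math.log(a / e)
    return total

# Fraction of applicants per income bucket (low / mid / high), illustrative.
train_dist = [0.25, 0.50, 0.25]     # what the model was trained on
stable_dist = [0.24, 0.52, 0.24]    # production data, little change
shifted_dist = [0.10, 0.40, 0.50]   # production data skewing high-income

psi_stable = psi(train_dist, stable_dist)
psi_shifted = psi(train_dist, shifted_dist)
```

If the incoming population drifts away from the training population, the model's "acceptable biases" may no longer be acceptable - which is exactly why this kind of monitoring needs to run continuously, not once at deployment.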
------------------------------
Daniel Toczala
Community Leader and Customer Success Manager - Watson
dtoczala@us.ibm.com
Original Message:
Sent: Wed January 15, 2025 12:54 AM
From: Mik Clarke
Subject: Is it even possible to build AI models without human bias.
It's entirely possible for training data or documents to be both true and biased.
Many organizations that generated the data gathered over the last few decades have been operating with social or institutional biases in place, so the bias ends up embedded in the data.
Question is, how do you remove the bias from the data without also removing its value as training data?
The point of training is to get the AI to make fair decisions based only on legitimate grounds. But how many elements in the data are truly unbiased? Names? Culturally biased. Addresses? Often culturally and/or socio-economically biased. Gender? Maybe, although non-conforming values will be scarce (and thus probably biased). Income? Socio-economically biased. Strip all of that out and you're left with 'someone asked for a loan', which means you might as well make the decision at random...
------------------------------
Mik Clarke
Original Message:
Sent: Tue January 14, 2025 07:14 PM
From: J A Hansen Esq IV
Subject: Is it even possible to build AI models without human bias.
Investigation is critical.
------------------------------
J A Hansen Esq IV
Original Message:
Sent: Mon January 13, 2025 04:48 AM
From: Oscar Dubbeldam
Subject: Is it even possible to build AI models without human bias.
Isn't every model based on documents written by humans? So yes, we can filter out irregularities. But how do we know that the content of a document consists of true or reliable facts? A careful screening of each document embedded in the model would be necessary, but is that possible?
------------------------------
Oscar Dubbeldam
CEO / CTO
Migrato
Strijen
+31650734796
Original Message:
Sent: Thu December 26, 2024 11:33 AM
From: Ramkumar Yaragarla
Subject: Is it even possible to build AI models without human bias.
There is a possibility that societal biases unintentionally seep into the learning algorithms. I do not have real-world examples to share right now. Bias is definitely a big challenge.
------------------------------
Ramkumar Yaragarla
------------------------------