AI Automation for AI Fairness¶
When AI models contribute to high-impact decisions such as whether or not someone gets a loan, we want them to be fair. Unfortunately, in current practice, AI models are often optimized primarily for accuracy, with little consideration for fairness. This blog post gives a hands-on example of how AI Automation can help build AI models that are both accurate and fair. It is written for data scientists who have some familiarity with Python. No prior knowledge of AI Automation or AI Fairness is required; we will introduce the relevant concepts as we get to them.
Bias in data leads to bias in models. AI models are increasingly consulted for consequential decisions about people, in domains including credit lending, hiring and retention, criminal justice, and medicine. Often, the model is trained from past decisions made by humans. If the decisions used for training were discriminatory, then your trained model will be too, unless you are careful. Being careful about unwanted bias is part of your job as a data scientist. Fortunately, you do not have to grapple with this issue alone. You can consult others about ethics. You can also ask yourself how your AI model may affect your (or your institution's) reputation. And ultimately, you must follow applicable laws and regulations.
AI Fairness can be measured via several metrics, and you need to select the appropriate metrics based on the circumstances. For illustration purposes, this blog post uses one particular fairness metric called disparate impact. Disparate impact is defined as the ratio of the rate of favorable outcome for the unprivileged group to that of the privileged group. To make this definition more concrete, consider the case where a favorable outcome means getting a loan, the unprivileged group is women, and the privileged group is men. Then if your AI model were to let women get a loan in 30% of the cases and men in 60% of the cases, the disparate impact would be 30% / 60% = 0.5, indicating a gender bias towards men. The ideal value for disparate impact is 1, and you could define fairness for this metric as a band around 1, e.g., from 0.8 to 1.25.
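To make the arithmetic concrete, here is a minimal sketch of that computation; the function and variable names are ours for illustration and are not part of AIF360's API.

```python
def disparate_impact(rate_unprivileged, rate_privileged):
    """Ratio of favorable-outcome rates: unprivileged over privileged."""
    return rate_unprivileged / rate_privileged

di = disparate_impact(0.30, 0.60)  # women get the loan 30% of the time, men 60%
print(di)                 # 0.5, indicating a bias towards men
print(0.8 <= di <= 1.25)  # False: outside the example fairness band around 1
```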
To get the best performance out of your AI model, you must experiment with its configuration. This means searching a high-dimensional space where some options are categorical, some are continuous, and some are even conditional. No configuration is optimal for all domains, let alone all metrics, and searching them all by hand is impossible. In fact, in a high-dimensional space, even exhaustively enumerating all the valid combinations soon becomes impractical. Fortunately, you can use tools to automate the search, making you more productive at finding good models quickly. These productivity and quality improvements compound when you have to repeat the search.
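As a sketch of what such a search space can look like, here is one written with the hyperopt library, which we introduce below; the classifiers and parameter ranges are illustrative, not a recommendation.

```python
from hyperopt import hp

# Categorical, continuous, and conditional options in one search space.
# Parameters nested under one branch of hp.choice are conditional: they
# only exist when that branch is picked.
search_space = hp.choice('classifier', [
    {'model': 'logistic_regression',
     'C': hp.loguniform('lr_C', -5, 5),                          # continuous
     'solver': hp.choice('lr_solver', ['lbfgs', 'liblinear'])},  # categorical
    {'model': 'xgboost',
     'max_depth': hp.quniform('xgb_max_depth', 2, 10, 1),        # discrete, cast to int
     'learning_rate': hp.loguniform('xgb_learning_rate', -5, 0)},
])
```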
AI Automation is a technology that assists data scientists in building AI models by automating some of the tedious steps. One AI Automation technique is algorithm selection, which automatically chooses among alternative algorithms for a particular task. Another AI Automation technique is hyperparameter tuning, which automatically configures the arguments of AI algorithms. You can use AI Automation to optimize for a variety of metrics. This blog post shows you how to use AI Automation to optimize both for accuracy and for fairness as measured by disparate impact.
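As a preview, here is a minimal sketch of what these two techniques can look like with Lale, which the next paragraph introduces. The operator choices are illustrative, `train_X` and `train_y` are assumed to hold a training set, and module paths may vary across Lale versions.

```python
# Assumes: train_X, train_y hold a training set loaded elsewhere.
from lale.lib.lale import Hyperopt, NoOp
from lale.lib.sklearn import PCA, LogisticRegression, RandomForestClassifier

# `|` declares a choice (algorithm selection); `>>` chains pipeline steps.
planned = (PCA | NoOp) >> (LogisticRegression | RandomForestClassifier)

# auto_configure searches over both the choices and their hyperparameters.
trained = planned.auto_configure(
    train_X, train_y, optimizer=Hyperopt, max_evals=10)
```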
This blog post is generated from a Jupyter notebook that uses the following open-source Python libraries. AIF360 is a collection of fairness metrics and bias mitigation algorithms. The pandas, scikit-learn, and XGBoost libraries support data analysis and machine learning with data structures and a comprehensive collection of AI algorithms. The hyperopt library implements both algorithm selection and hyperparameter tuning for AI automation. And Lale is a library for semi-automated data science; this blog post uses Lale as the backbone for putting the other libraries together.
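If you want to follow along, the stack can typically be installed with pip; the imports below use the names these packages expose.

```python
# Typical install: pip install aif360 pandas scikit-learn xgboost hyperopt lale
import aif360
import hyperopt
import lale
import pandas
import sklearn   # the import name of scikit-learn
import xgboost
```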
Our starting point is a dataset and a task. For illustration purposes, we picked credit-g, also known as the German Credit dataset. Each row describes a person with several features that may help evaluate them as a potential loan applicant. The task is to classify each person as either a good or a bad credit risk. We will use AIF360 to load the dataset.
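A sketch of the loading step is below; the keyword arguments follow AIF360's StandardDataset conventions, and the choice of 'sex' as the protected attribute matches the gender example above.

```python
from aif360.datasets import GermanDataset

# AIF360 expects the raw credit-g data files to be downloaded into its data
# directory first; the loader prints instructions if they are missing.
dataset = GermanDataset(
    protected_attribute_names=['sex'],     # gender as the protected attribute
    privileged_classes=[['male']],         # men form the privileged group
    features_to_drop=['personal_status'])  # 'sex' is derived from this column
```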