
We need to talk about Bias

By Tiago Moura posted Mon August 27, 2018 07:04 PM

A couple of weeks ago, my girlfriend sent me an email about the Algorithmic Justice League and other initiatives that aim to combat bias in artificial intelligence solutions and prevent exclusionary and discriminatory experiences for users. Bias is one of the biggest challenges in AI research, not only because of its technical aspects but also because ethics plays an important role in the discussion.

We are living in a world full of discrimination and inequality, and we are replicating it in our solutions as bias. Can you imagine how bad a person feels when her face is not recognized simply because of the color of her skin? Or when, in a contest, an AI judges white-skinned people prettier than black-skinned people (http://bit.ly/2MKMsca)?

Last year I attended Colaboramerica 2017, and many of the discussions around AI ended up on how we could reduce or even eliminate bias in our solutions. People highlighted many causes, ranging from the data to the metrics we use to evaluate models. On the other hand, some people argued that AI mimics human beings, and since we are biased, maybe we only need to learn how to deal with it. At Index this year, I heard a Facebook researcher say that we need to learn how to use bias for good.

There is no doubt about the important role data plays in this kind of solution, but we cannot justify everything with it. We need to bring the discussion to our research table and stop chasing bigger aggregate numbers day after day; instead, we can look outside and evaluate the real-world value of a solution. We are creating “intelligences” that speak English, recognize white people, and define behaviors based on who is already using the internet. There are so many people outside this group. Solving these biases will not be an easy task, but we cannot ignore them; we need to talk about them.
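
One concrete way to stop chasing a single aggregate number is to break every evaluation down by group. Here is a minimal sketch in Python of what I mean; the groups, labels, and predictions are invented for illustration only:

```python
# Minimal sketch: disaggregated evaluation instead of one aggregate score.
# All data and column names below are hypothetical, for illustration only.
import pandas as pd

results = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B"],  # e.g., a demographic attribute
    "label":      [1,   0,   1,   0,   1,   1],    # ground truth
    "prediction": [1,   0,   1,   0,   0,   1],    # model output
})

overall = (results["label"] == results["prediction"]).mean()
print(f"Overall accuracy: {overall:.2f}")  # 0.83 -- looks like a "big number"

# Accuracy per group shows who the model actually works for.
per_group = (
    results.assign(correct=results["label"] == results["prediction"])
           .groupby("group")["correct"].mean()
)
print(per_group)  # A: 1.00, B: 0.50 -- the aggregate was hiding this gap
```

A model can post an impressive overall score while failing half of one group; only the disaggregated view makes that visible.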

It is true that we need to build better datasets that truly represent our society. While that work is underway, however, we have other ways to minimize bias. I had a great conversation with a Spotify data scientist in San Francisco this year about how we could handle algorithmic bias by promoting team diversity inside companies. The main point was that team diversity will, at some point, balance the bias in the software, because hypothesis creation and evaluation will start from many different points of view. Over time, the differences between team members will be reflected in the algorithm, eliminating or at least minimizing the effect of bias.
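
Measuring how far a dataset drifts from the society it should represent is at least a starting point. A tiny sketch, with made-up group names and reference shares:

```python
# Minimal sketch: compare group shares in a training set against
# reference population shares. All numbers here are hypothetical.
from collections import Counter

samples = ["A"] * 80 + ["B"] * 15 + ["C"] * 5   # group of each training example
reference = {"A": 0.50, "B": 0.30, "C": 0.20}   # assumed population shares

counts = Counter(samples)
total = sum(counts.values())
for group, target in reference.items():
    share = counts[group] / total
    status = "under" if share < target else "over"
    print(f"{group}: dataset {share:.0%} vs population {target:.0%} ({status}-represented)")
```

No check like this fixes a dataset by itself, but it makes the gap explicit instead of invisible.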

People have been fighting against social distortions for centuries. If AI has the power to drive a huge revolution in our society, we can't repeat the mistakes made before. In a time of exponential change, making decisions based on patterns from the past is to deny the mindset change we need. We are living at the beginning of the revolution and there is a lot to come; let's make it right and put bias in focus. Maybe then this revolution can create a genuinely better society, without (or with fewer) privileges, racism, and many other social problems. Because if we don't, AI will probably only increase the size of the problems we already have.

To start our discussion: what kind of AI bias have you already noticed? What are you doing to overcome it? Leave a comment here or reach out to me on Twitter (@thvmoura).


#IBMChampion

Comments

Mon September 03, 2018 10:17 AM

Like all human creations, artificial intelligence is a reflection of our potential and our limitations.

Two years ago, ProPublica published very interesting research on the biases of software used in the North American penal system, which has been accused of being systematically racist and harming the black population: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
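
The heart of that analysis can be shown in miniature: compare error rates, especially false positives, across groups. The sketch below uses invented counts, not the actual COMPAS data; it only shows the shape of the check:

```python
# Minimal sketch of the kind of disparity ProPublica reported: the false
# positive rate (flagged high-risk but did not reoffend) differs by group.
# The counts below are invented for illustration, not real COMPAS data.

def false_positive_rate(false_pos: int, true_neg: int) -> float:
    """FPR = FP / (FP + TN): share of non-reoffenders flagged as high-risk."""
    return false_pos / (false_pos + true_neg)

# Hypothetical confusion-matrix counts per group: (false positives, true negatives)
groups = {"group_1": (45, 55), "group_2": (23, 77)}

for name, (fp, tn) in groups.items():
    print(f"{name}: false positive rate = {false_positive_rate(fp, tn):.0%}")
# group_1: 45% vs group_2: 23% -- the same tool imposes a very different error burden
```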

There is also a very interesting book about it: Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O'Neil (https://weaponsofmathdestructionbook.com/).


The issue of solutions is very complex. Personally, I think it is essential to move toward co-design with the people directly affected by the problem the software is meant to address. For example, in the case I mentioned about the North American justice system, the people deprived of liberty who are judged by the program had no say in its development. That is deeply arbitrary.

The other important point is transparency: knowing the logic behind the programs so that they stop being "black boxes" and we can understand why the software generates a given result (which could then be improved). The problem is that, because of intellectual property concerns, this usually becomes impossible.
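
That said, some transparency can be recovered from the outside with model-agnostic techniques that only need query access to the model. A minimal sketch using permutation importance on synthetic data (the model, features, and numbers are all illustrative assumptions):

```python
# Minimal sketch: peeking inside a "black box" via permutation importance --
# shuffle one feature at a time and watch how much the score drops.
# The synthetic data and model below are invented purely for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                   # three hypothetical features
y = (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)   # outcome driven mostly by feature 0

model = RandomForestClassifier(random_state=0).fit(X, y)  # our "black box"

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {importance:.3f}")
# feature_0 dominates: without reading any source code, we learn what drives the model.
```

It does not replace true transparency, since it explains behavior rather than the vendor's logic, but it gives auditors something to work with when the code stays closed.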