AI’s bias problem is human



Thomson Reuters Blogs 

08.13.2018

AI is only as good as – and a reflection of – the data we train it on.

A 2016 investigation by ProPublica determined that COMPAS—AI-driven software that models the risk of recidivism in offenders—was biased against people of color. A study on Google’s advertising platform, which is powered by deep-learning algorithms, found that men were shown ads for high-paying jobs more often than women were, while another study found similar issues with LinkedIn’s job ads.

Tech stories abound these days about areas as disparate as facial recognition, loan approval, credit scoring, and health care being vulnerable to AI bias. The problem?

AI is trained on data sets that may contain inherent bias, and teams developing AI may be insufficiently diverse to recognize bias in their models. In short, the problem is human(s). Fortunately, the solution is also human.

Tonya Custis, PhD, is a Research Director in our Center for AI and Cognitive Computing. She recently sat down to talk about the challenges and opportunities around AI bias and diversity – and how AI can make better lawyers.

Can AI be biased?

Tonya Custis: Artificial intelligence can absolutely be biased. AI is only as good as – and a reflection of – the data we train it on. Facial recognition software that is trained only on white males performs really poorly on black females. With this example you can see how unintended choices in the training data can produce a really bad consequence. It might have societal implications you are not even aware of.
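To make that concrete, here is a minimal sketch (in Python, on entirely synthetic, invented data) of the kind of per-group audit Custis describes: a classifier trained on a skewed dataset looks accurate on the overrepresented group and falls to roughly chance on the underrepresented one. The groups, features, and numbers are all hypothetical.

```python
# Hypothetical illustration: a model trained on skewed data can look fine
# overall while failing badly on an underrepresented group. All data is
# synthetic; no real dataset or system is modeled here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Training set: group A heavily overrepresented, as with face datasets
# dominated by one demographic.
n_a, n_b = 950, 50
X_a = rng.normal(loc=0.0, scale=1.0, size=(n_a, 5))
X_b = rng.normal(loc=1.5, scale=1.0, size=(n_b, 5))   # group B looks different
y_a = (X_a[:, 0] > 0).astype(int)
y_b = (X_b[:, 1] > 1.5).astype(int)   # B's label depends on a different feature

X = np.vstack([X_a, X_b])
y = np.concatenate([y_a, y_b])

model = LogisticRegression().fit(X, y)

# Evaluate on fresh samples from each group separately.
X_a_test = rng.normal(0.0, 1.0, size=(500, 5))
X_b_test = rng.normal(1.5, 1.0, size=(500, 5))
y_a_test = (X_a_test[:, 0] > 0).astype(int)
y_b_test = (X_b_test[:, 1] > 1.5).astype(int)

print("accuracy on well-represented group A:",
      accuracy_score(y_a_test, model.predict(X_a_test)))
print("accuracy on underrepresented group B:",
      accuracy_score(y_b_test, model.predict(X_b_test)))
```

Because group A dominates the training data, the model learns A's pattern and scores well on A while performing near chance on B – exactly the kind of gap that an aggregate accuracy number hides.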

How can AI tools be trained to minimize unwanted bias?

Custis: The first factor in training AI tools to minimize unwanted bias is just awareness. Scientists who are training the models need to know that the model they have chosen for the task is appropriate for that task. They need to see all sides of the problem and make sure they are actually solving the problem they want to solve.

The next factor is diversity. The more diverse your team is, the better you will be able to anticipate the different things that might come up after the model is trained.

What is “explainable AI”?

Custis: Explainable AI is a current trend in AI research to make algorithms more interpretable or transparent to users. Currently, machine learning methods are often incorporated into systems as a black box. People can't really understand what's going on inside them. This is both good and bad.

Sometimes it’s delightful and people are happy with the right answer. Other times the algorithm isn’t correct … so you get a wrong answer and people want to know why. Why is it suggesting this?

The goal of explainable AI, then, is to give an audit trail into the factors and features that weighed into the algorithm's decision to produce that output.
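One simple, hypothetical illustration of such an audit trail: for a linear model, each feature's contribution to a single decision is just its coefficient times its value. The feature names and data below are invented; explainability techniques such as LIME or SHAP approximate this kind of breakdown for genuinely black-box models.

```python
# Toy "audit trail": for a linear model, the decision score decomposes into
# intercept + sum(coefficient * feature value), so we can report how much
# each feature pushed the decision. Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
feature_names = ["prior_offenses", "age", "employment_years"]  # invented

X = rng.normal(size=(200, 3))
# Hypothetical ground truth: outcome driven mostly by the first feature.
y = (1.5 * X[:, 0] - 0.5 * X[:, 2]
     + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

x = X[0]                                  # one individual's feature vector
score = model.decision_function([x])[0]   # = intercept + contributions.sum()
contributions = model.coef_[0] * x        # per-feature pull on the decision

print(f"decision score: {score:+.3f} (positive => predicted positive class)")
for name, c in sorted(zip(feature_names, contributions),
                      key=lambda t: -abs(t[1])):
    print(f"  {name:>18}: {c:+.3f}")
```

Printing the contributions sorted by magnitude gives a user a direct answer to "why is it suggesting this?" for this simple model class.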

Will “robot lawyers” replace human lawyers?

Custis: Robot lawyers will not replace human lawyers anytime soon. Lawyers do a lot of specialized intellectual tasks. They craft complex arguments. These are tasks that we are a very long way from mastering with AI. We can't really train computers right now to do tasks that people aren't good at, or that we can't get good training data for. Since the very nature of being a lawyer involves doing a lot of things people don't agree on, it is difficult to get good training signals for some of those tasks.

Our aim really is to help lawyers do their legal research faster and better. We can automate the easy parts and routine tasks that people have to do. We can augment that with the things that humans aren’t so good at, like sifting through a lot of information fast. Helping with those first stages of legal research will allow more time for attorneys to do the things that they actually went to law school for, the reason they probably became a lawyer, the things they enjoy. They’ll have the time to do those things better because we helped them do the easier tasks faster.
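As a rough sketch of that "sifting through a lot of information fast" step, the snippet below ranks a handful of invented case summaries against a query using TF-IDF similarity. The documents and query are placeholders, and production legal research systems are of course far more sophisticated.

```python
# Minimal retrieval sketch: rank documents against a query by TF-IDF
# cosine similarity. The case summaries are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Breach of contract claim arising from late delivery of goods.",
    "Negligence action for injuries sustained on commercial premises.",
    "Dispute over a non-compete clause in an employment agreement.",
]
query = "employment contract non-compete enforceability"

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([query])

scores = cosine_similarity(query_vector, doc_vectors)[0]
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.3f}  {doc}")
```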


Learn more

Watch Tonya’s complete interview, and explore more insights on how AI will impact the legal profession.

