Three Laws of Robotics

Cash or Credit? 

 

The potential of AI has always been a sort of Roddenberry-esque utopia in which bias is eliminated and logic and reason are paramount. But in practice, it doesn’t always work that way. 

In theory, AI can reduce humans’ subjective interpretation of data, because machine learning algorithms can learn to consider only the variables that improve their predictive accuracy. Unlike human decisions, decisions made by AI could in principle be examined, critiqued, and transformed for the greater good. As Andrew McAfee of MIT puts it, “If you want the bias out, get the algorithms in.”

The difficulty comes when humans program bias into the AI. Take the Apple Card, for instance. Or don’t, if you’re a woman. Last week, users began to notice the card was offering significantly smaller lines of credit to women.

It was excoriated. It was defended. Wozniak went on TV!

It was a big deal.    

Apple’s response was strange and opaque: the company said the algorithm had been vetted by a third party and that gender wasn’t even part of the equation, so it was impossible for it to be discriminatory.

OK, but it…

That brings up an interesting point and a common problem when you’re talking about AI bias: even if a variable is never given to the AI (or, in this case, the algorithm) as an input, the system can still behave with bias toward it through other inputs that correlate with it. So if “likes blue socks” happens to correlate statistically with “eats green olives,” it doesn’t matter whether we feed the algorithm “eats green olives” or not; it’ll learn it anyway.
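A toy simulation makes the point concrete. In this sketch (all names and numbers are invented for illustration), a protected attribute is deliberately withheld from the model, but an innocuous-looking feature correlates with it; predicting from the proxy alone still recovers the hidden attribute far better than chance:

```python
import random

random.seed(0)

# Hypothetical toy data: "group" is a protected attribute we deliberately
# withhold. "proxy" is an innocuous-looking feature (think "likes blue
# socks") that happens to correlate with group membership.
n = 10_000
rows = []
for _ in range(n):
    group = random.random() < 0.5                        # hidden attribute
    proxy = random.random() < (0.8 if group else 0.2)    # correlated feature
    rows.append((group, proxy))

# A "model" that only ever sees the proxy still tracks the withheld
# attribute: here, simply predict group = proxy.
accuracy = sum(1 for g, p in rows if g == p) / n
print(f"accuracy predicting withheld attribute from proxy alone: {accuracy:.2f}")
```

Leaving the sensitive column out of the spreadsheet doesn’t remove it from the model; it just hides where the model learned it from.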

Humans have biases. And when we build, we “bake in” those biases. In conventional software, bias usually hides in rules we write without realizing it. In AI, it usually manifests in choosing training datasets we don’t realize are biased.

The big problem underlying it all is that we trust the output as though it came from a wise sage. The computer said it, so it must just be scientifically true. When a human makes a bad call, we’re willing to acknowledge their mistakes. When the algorithm makes a bad call, we just assume that we’re wrong. After all, data don’t lie. Do they?

Foundation

 

Machine learning experts are rising to the challenge and confronting AI bias. This week, a team of researchers presented a paper outlining new ways to correct for bias before it becomes a problem.

“People are looking at how AI systems are being deployed and they’re seeing they are not always being fair or safe,” says Emma Brunskill, an assistant professor at Stanford and one of the paper’s authors. “We’re worried right now that people may lose faith in some forms of AI, and therefore the potential benefits of AI might not be realized.” 

The team’s process involves building an algorithm with built-in boundaries on the results it can produce. “We need to make sure that it’s easy to use a machine learning algorithm responsibly, to avoid unsafe or unfair behavior,” says Philip Thomas, who also worked on the project. 

In one example, an algorithm predicts college students’ GPAs from entrance exam results. This is a relatively common practice, and one that can result in significant gender bias, because women tend to do better in school than their entrance exam scores would predict. Following the researchers’ principles, the new algorithm is limited in how much it may over- or under-predict GPAs for male and female students. 
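A minimal sketch of that idea, with an assumed interface and invented toy numbers (this is not the authors’ actual code): before deploying a GPA predictor, check that its mean prediction error per group stays within a fairness bound, and refuse to deploy otherwise.

```python
def mean(xs):
    return sum(xs) / len(xs)

def passes_fairness_check(records, predict, epsilon=0.05):
    """records: list of (features, actual_gpa, group); predict: features -> gpa.
    Returns True only if no group is over- or under-predicted by more than
    epsilon relative to any other group."""
    errors = {}
    for features, gpa, group in records:
        errors.setdefault(group, []).append(predict(features) - gpa)
    per_group = [mean(errs) for errs in errors.values()]
    return max(per_group) - min(per_group) <= epsilon

# Toy data: (exam_score, actual_gpa, group). As in the paper's example,
# women's GPAs run higher than exam scores alone would suggest.
records = [
    ((80,), 3.0, "m"), ((90,), 3.4, "m"),
    ((80,), 3.3, "f"), ((90,), 3.7, "f"),
]

# A naive predictor fit to the men's pattern ignores that gap...
naive = lambda f: 3.0 + (f[0] - 80) * 0.04

# ...so it under-predicts women by 0.3 on average and fails the check.
print(passes_fairness_check(records, naive))  # prints: False
```

The real Seldonian framework does this probabilistically, with statistical confidence that the constraint holds on unseen data; this sketch only shows the shape of the contract: the algorithm must return “no solution” rather than an unfair one.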

The research team calls these Seldonian algorithms — a reference to Hari Seldon, the protagonist of Isaac Asimov’s Foundation series. Asimov is also the author who gave us the now woven-into-human-consciousness Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

This isn’t the only team working on ways to reduce bias in algorithms and AI, and as time goes on we’ll see more sophisticated and nuanced methods for reducing human error and increasing the Spockish logic of the machines. 

Ben Evans wisely points out that nearly everything people find worrying about AI right now is something we said about databases a few decades ago. That doesn’t make things like racial and gender bias any less genuinely scary, but we need to be conscious of the way we treat technology as “unbiased truth” simply because it’s based on data. And our sense of what’s catastrophic now is shaped by our limited understanding of the technology and how to control it.

We are babes in the wood, and we’ll grow. We’ll learn. So will the machines.

 

 

#ai #algorithms #Apple #bias #Stanford