The Value of Saying ‘I Don’t Know’

How can algorithmic systems ask for help in the face of uncertainty?

By Katharine Gammon


When people think of safety in artificial intelligence, they might think about ensuring that self-driving cars don’t collide at an intersection, or protecting their children from “smart” toys.

When Brian Ziebart, an associate professor of computer science at UIC, thinks of AI safety, he conjures a different scenario: how to train machines to know when it’s unwise to make decisions based only on the past. Ziebart knows what artificial intelligence needs to understand: that data from the past don’t always hold true in a changing future.

Imagine this scenario: if we only ever responded to emails from peers and friends, and that experience informed our script for “how to reply to a message,” how would that affect our approach to fielding an email from a boss? If a child were always served vegetables that were eaten with a fork, learning exclusively from that experience how to consume food, what would that child do if she were served tomato soup?

Ziebart and his colleagues have a technical term for this problem: covariate shift. In plain language, it’s about the world being unpredictable. What kind of trouble can result if an AI system only produces output based on past training, even if circumstances have since changed?

Risks like these are innate to AI because intelligent systems often get to choose the examples that they learn from—introducing potential bias into that learning, as Ziebart’s research shows. If a system keeps trying to learn new things from data that are available but not representative, it can create a cycle that raises the specter of incorrect future predictions.

Ziebart’s former graduate student Anqi Liu (PhD ’18) explains it this way: “In traditional machine learning, when the new data are the same as your previous data, you can have a guarantee that what you learned from historical data can be applied to new data without errors. But that is not true when the new data are not the same as the historical data. You can’t blindly apply the method from historical data, because it won’t work.”
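
To make the mismatch Liu describes concrete, here is a minimal sketch in Python, using only synthetic data and scikit-learn rather than the specific methods from Ziebart’s lab: a model fit on one slice of the input space keeps working on new data from that same slice, but its error jumps when the inputs shift to a region it has never seen.

```python
# A minimal illustration of covariate shift with synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def sample(low, high, n=500):
    """Inputs drawn uniformly from [low, high]; the true relationship is nonlinear."""
    x = rng.uniform(low, high, size=(n, 1))
    y = np.sin(x).ravel() + rng.normal(scale=0.05, size=n)
    return x, y

# Historical data cover only a narrow slice of the input space,
# where a straight line happens to be a good approximation.
x_train, y_train = sample(-1.0, 1.0)
model = LinearRegression().fit(x_train, y_train)

# New data from the same slice: the learned rule still works.
x_same, y_same = sample(-1.0, 1.0)
# New data from a different slice: the same rule quietly fails.
x_shift, y_shift = sample(2.0, 4.0)

print("error on familiar inputs:  ", np.mean((model.predict(x_same) - y_same) ** 2))
print("error after covariate shift:", np.mean((model.predict(x_shift) - y_shift) ** 2))
```

Nothing warns the model that the second batch of inputs is different; it simply extrapolates its straight line and is badly wrong, which is exactly the failure Liu describes.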

In cases like these, where the future doesn’t necessarily replicate the past, a person confronted with the question, “What will happen next?” may have an intuitive human response, which is to say “I don’t know.” Ziebart and Liu, now a postdoctoral researcher at the California Institute of Technology in Pasadena, are working to foster that instinct in AI.

 


“It is much more reasonable to let the user know about the uncertainty rather than give the wrong prediction. The algorithm should stop and say ‘I don’t know’ when it’s not confident or when this is too different from what it has previously seen.”

 

Anqi Liu (PhD '18)  |  Postdoctoral researcher, California Institute of Technology


Situations of this nature are becoming more common as AI is implemented in complex areas of our daily lives. Autonomous vehicles, for example, rely on computer-vision systems to recognize hazards ahead, such as a tree in the roadway or people crossing. If the car’s system bases its decisions only on past data, it can make mistakes in new environments, such as when it’s foggy or dark outside. Its errors cascade from there, creating a dangerous situation.

As Ziebart and Liu point out, the solution for an AI system facing uncertainty is for the algorithm to abstain from making a decision and defer to a friendly human for help—in short, to admit that it doesn’t know. After all, that’s what humans often do when faced with an unfamiliar situation. If AI systems admit they don’t know what to do, “humans can give the correction, and the algorithm can take that input and then update itself for the future,” Liu says. She adds that the mere act of holding off on a decision “is a way that a system can figure out what it needs to learn from humans.”
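
One simple way to build in that kind of abstention, sketched below with a probabilistic classifier and a hand-picked confidence threshold (the predict_or_defer helper and the threshold value are illustrative assumptions, and Ziebart and Liu’s own criteria are more principled), is to return a label only when the model’s most confident guess clears the threshold and to hand everything else to a person.

```python
# Sketch of a classifier that abstains and defers to a human when unsure.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic two-class training data: two well-separated clusters.
X_train = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y_train = np.array([0] * 200 + [1] * 200)
model = LogisticRegression().fit(X_train, y_train)

def predict_or_defer(model, x, threshold=0.9):
    """Return a label when confident; otherwise admit uncertainty and defer."""
    probs = model.predict_proba(x.reshape(1, -1))[0]
    if probs.max() >= threshold:
        return int(probs.argmax())
    return "I don't know -- please ask a human"

print(predict_or_defer(model, np.array([2.5, 2.0])))  # deep inside a familiar cluster
print(predict_or_defer(model, np.array([0.0, 0.0])))  # near the boundary, so it defers
```

A probability threshold like this is only a rough proxy for “too different from what it has previously seen,” since standard classifiers can be confidently wrong on inputs far from their training data, which is part of why deciding when to abstain is a research problem in its own right.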

In general, Ziebart would like to see machine-learning algorithms employ a healthy dose of skepticism and caution. “We like to ask, ‘What is the worst possible case about what is unknown?’” he says.
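
That worst-case mindset can be shown with a toy decision table, using made-up costs and scenarios rather than anything from Ziebart’s actual formulation: instead of betting that the world still looks like the training data, pick the option whose worst outcome is least bad.

```python
# Toy minimax rule: choose the option whose worst plausible outcome is least bad.
# The decisions, scenarios, and costs below are invented purely for illustration.
losses = {
    "act on past data": {"world unchanged": 0.0, "world shifted": 10.0},
    "defer to a human": {"world unchanged": 1.0, "world shifted": 1.0},
}

def worst_case(decision):
    """The largest loss this decision could incur across the unknown scenarios."""
    return max(losses[decision].values())

best = min(losses, key=worst_case)
print(best, "-> worst-case loss:", worst_case(best))  # deferring caps the damage
```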

Ziebart’s interest in these issues traces back to his study of computer engineering at the University of Illinois Urbana-Champaign. Back then, he was working on ubiquitous computing—operating systems that could control multiple systems in one place—and was hooked by what technology might lie around the corner. “Controlling your lights automatically seemed like magic in 2000-2004, though it’s less impressive now,” he says with a laugh. He started to turn his attention to machine-learning tasks: systems that would proactively learn from the user how to automate things for them, eventually reducing or eliminating the need for the user to interact with the system. Imagine a home that would learn when its occupant wanted the lights turned on or off, instead of having to wait for the resident to say so.

Machine learning, by its nature, conjures issues of bias: tendencies that can get built into an algorithm’s decision-making depending on which inputs it learns from. Ziebart points out that if a system takes in biased data, it’s likely to produce still more bias once it’s developed and deployed.

As an example, Ziebart offers medical school admissions, an area in which schools have devised computer systems that use past admissions decisions to build classifiers that rate future applicants. If those past decisions reflected historic biases, the systems would be likely to keep sidelining the same kinds of people.
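
A toy version of that feedback loop appears below, built entirely from synthetic numbers and scikit-learn’s logistic regression rather than any real admissions system: because the invented historical decisions quietly penalized one group, the classifier trained on them gives two equally qualified applicants different odds.

```python
# Sketch of how biased historical decisions get baked into a learned classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000

score = rng.normal(0, 1, n)        # an applicant's qualification signal
group = rng.integers(0, 2, n)      # a group label that should be irrelevant
# Invented historical decisions: qualification matters, but group 1 was penalized.
past_admit = (score - 1.0 * group + rng.normal(0, 0.3, n)) > 0

X = np.column_stack([score, group])
model = LogisticRegression().fit(X, past_admit)

# Two applicants with identical scores, differing only in group membership.
print("group 0 admit probability:", model.predict_proba([[0.5, 0]])[0, 1])
print("group 1 admit probability:", model.predict_proba([[0.5, 1]])[0, 1])
```

The model is never told to discriminate; it simply learns to reproduce the pattern in its training labels.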

That’s an important issue, Ziebart says, as artificial intelligence tools come to play a role in decisions such as hiring or giving loans. “Machine-learning systems have the capability to be much more fair in the ultimate decisions they provide, but unless they are well designed, I don’t know if people will actually trust them.”

Creating algorithms that avoid bias and that recognize when to admit “I don’t know” is key to enabling AI to contribute positively to our future. “People should know that we can fix those problems and make AI better,” Liu says.

Ziebart sees these obstacles in AI’s path as surmountable—and common in the evolution of a complex and widely applicable new technology. “Progress in AI is very uneven,” he explains. “It’s natural to see something and extrapolate all the things that could be easily done, but that extrapolation doesn’t naturally hold. So a lot of the time, there will be a big advance, but things don’t transfer smoothly in the way you might think.”