Solving AI's (over)confidence problem

Nisar Ahmed and Eric Frew

CU Boulder researchers are developing artificial intelligence systems so computers can recognize and explain their own limitations to users.

It takes on an important issue people face with each other every day.

"We all have different competencies and we know our own limitations. If I'm asked to complete a task, I generally know if I can do it. Machines aren't programmed like that," said Nisar Ahmed, an assistant professor in the Ann and H.J. Smead Department of Aerospace Engineering Sciences at the University of Colorado Boulder.

Ahmed is serving as principal investigator at CU Boulder on a new, multi-university grant from the Defense Advanced Research Projects Agency.

The $3.9 million grant, which is being led by the and also includes the University of Texas at Austin, seeks to build "competency-aware machine learning": essentially, machine learning systems that, when given a task, can tell you if they'll be able to do it and also explain why.

It is an area with broad and serious applications, according to Eric Frew, a CU Boulder aerospace professor serving as a co-investigator on the project.

"Do you trust this drone to deliver a package of medicine, or do you take it in your own car, which will take three times as long to get there? If you're a soldier, do you trust a drone to go over a hill and search for an enemy? Will it be thorough enough?" Frew said.

Ahmed notes that the engineers who design drones generally understand their capabilities and limitations, but end users naturally will not have the same level of knowledge. A drone that can tell the operator whether it is likely to complete a task successfully should be easier to trust.

The work is focused on unmanned aerial vehicles but has applications to ground robots and other AI systems.

"It's a combination of aerospace, computer science, and a little bit of psychology," Ahmed said. "It's very interdisciplinary."

The goal is not to pre-program drones with every possible mission or obstacle they could face, but rather to develop a learning-based AI that has a base level of knowledge, can think abstractly in new situations, and can explain its decisions, just like people do.

"Humans are generally better than machines at adapting to unknowns, taking an unforeseen problem they have never faced before and comparing it to past events to find solutions. Machines haven't been programmed like that up to now," Ahmed said.

Frew compares it to a situation understood by nearly all American adults: getting a driver's license.

"We don't test you on every possible circumstance you could face as a driver. We give you a driving test that covers a handful of situations, plus a knowledge test, and then trust that you can use reasoning behind the wheel," Frew said.

Over the course of the grant, the researchers will develop new competency-awareness assessment algorithms for AI systems, and then put them to the test using drones.
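To make the idea concrete, here is a minimal sketch of what a competency self-assessment could look like. Everything in it is hypothetical: the `CompetencyAwareDrone` class, the `Task` fields, and the hand-set operating limits stand in for the learned competency models the project aims to build; the project's actual algorithms are not described in this article.

```python
# Hypothetical sketch: a system that, given a task, reports whether it
# expects to succeed AND explains why not. Hand-set limits stand in for
# learned competency bounds; this is not the project's actual method.

from dataclasses import dataclass

@dataclass
class Task:
    wind_speed: float   # m/s at the delivery site
    visibility: float   # km
    range_km: float     # distance to target, km

class CompetencyAwareDrone:
    MAX_WIND = 10.0        # m/s the airframe can handle
    MIN_VISIBILITY = 1.0   # km needed for visual navigation
    MAX_RANGE = 8.0        # km of usable endurance

    def assess(self, task: Task) -> tuple[bool, list[str]]:
        """Return (expects_success, list of reasons for any shortfall)."""
        reasons = []
        if task.wind_speed > self.MAX_WIND:
            reasons.append(f"wind {task.wind_speed} m/s exceeds limit {self.MAX_WIND}")
        if task.visibility < self.MIN_VISIBILITY:
            reasons.append(f"visibility {task.visibility} km below minimum {self.MIN_VISIBILITY}")
        if task.range_km > self.MAX_RANGE:
            reasons.append(f"range {task.range_km} km exceeds endurance {self.MAX_RANGE}")
        return (not reasons, reasons)

drone = CompetencyAwareDrone()
ok, why = drone.assess(Task(wind_speed=12.0, visibility=0.5, range_km=5.0))
print(ok)    # False: two limits are violated
print(why)   # human-readable explanation of the shortfalls
```

The point of the sketch is the interface, not the thresholds: instead of silently attempting a mission, the system answers "can you do this?" with a yes/no and an explanation, which is what would let an operator decide whether to trust it.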

"We're working on a problem that has mostly gone unnoticed in the computing, machine learning, and AI world, but gets at questions a lot of people have about trust. Will this robot do what I tell it to? Can it?" Ahmed said. "By developing systems that are aware that they have lots of answers, but don't have all the answers all the time and can tell us that, it should make them easier to use. I'm very excited about the possibilities."