CU Boulder computer science PhD students Aaquib Tabrez and Matthew Luebbers, along with their advisor Assistant Professor Bradley Hayes, are fascinated by the way artificial intelligence has become an integral part of daily decision making around the world. But the researchers are also interested in the scenarios where those AI systems, making potentially critical decisions on behalf of humans, could be incorrect.
The researchers particularly wanted to address this problem for settings where humans work alongside robots as teammates, adding insights into the robot's decision-making process to help the human partners better understand their automated collaborators.
"We were really interested in the overall question of when you should trust your robot teammate's decision," Luebbers said.
Motivated by the fact that an AI algorithm's inner workings can often be a mystery to everyone, including its creator, the team developed a technique that allows these algorithms to provide better insights into their decision-making process. Without that kind of knowledge, you likely won't know the AI has failed until you have already suffered the consequences of its mistakes. This could be doubly harmful if the AI is used to control machines that interact physically with the real world, like manufacturing robots or self-driving cars.
"We want humans and robots to act as a unified team on important tasks like search and rescue," Luebbers said. "That situation is a great example, where drones could be used to help explore the environment quickly. But if you think of how a group of humans comes together to actually function as a team in that kind of situation, it requires every member to have a sense in their mind of what their shared goal is and what tasks each person should perform to reach that goal. The same thing is needed for human and robot teams to function well."
The pair recently published their work on human/robot teammate trust, earning a runner-up award for best paper with a student lead author. In June, their work was also featured and demonstrated at an expo for artificial intelligence and robotics with a large academic and industry presence.
The researchers' novel technique allows a robot to suggest courses of action to humans while also providing supporting evidence, improving the human's ability to trust what the robot is telling them to do. To demonstrate how this technique affects the dynamics of trust and transparency, Tabrez and Luebbers created an experiment that would let humans and robots team up to find landmines. Sort of.
They created a life-size version of "Minesweeper," a computer game originally released in 1989 and played here using an augmented reality headset. The objective of the game is to clear a rectangular board that contains hidden "mines" without detonating any of them.
Tabrez and Luebbers' life-sized application paired a human with a drone teammate. The drone would fly around scanning the area for evidence of "mines," using its imperfect detector to learn where they were most likely to be hidden. During the game, the drone would give suggestions to the human through their headset, showing them where to go to clear the board quickly.
To do this the researchers developed a reinforcement learning-based algorithm that simultaneously generated a strategy for the robot and advice for the human, as well as explanations through augmented reality about why that advice was provided.
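The core idea, a recommendation paired with the evidence behind it, can be sketched in a few lines. The following is a hypothetical illustration, not the authors' implementation: the noisy-detector model, the Bayesian belief update, and the `recommend` helper are all assumptions made for the sake of the example.

```python
import random

def noisy_scan(true_mines, cell, accuracy=0.8):
    """Imperfect detector: reports the truth with probability `accuracy`."""
    truth = cell in true_mines
    return truth if random.random() < accuracy else not truth

def update_belief(belief, cell, reading, accuracy=0.8):
    """Bayesian update of P(mine) for one cell given a noisy reading."""
    p = belief[cell]
    likelihood = accuracy if reading else (1 - accuracy)      # P(reading | mine)
    miss = (1 - accuracy) if reading else accuracy            # P(reading | no mine)
    belief[cell] = (likelihood * p) / (likelihood * p + miss * (1 - p))

def recommend(belief):
    """Suggest the safest unexplored cell, plus the evidence behind it.

    The "explanation" here is simply the drone's current estimate of
    mine probability for every cell, which the human can inspect to
    decide whether the suggestion is trustworthy.
    """
    cell = min(belief, key=belief.get)
    explanation = {c: round(p, 2) for c, p in belief.items()}
    return cell, explanation
```

In a loop, the drone would scan cells, update its beliefs, and hand the human both the suggested next cell and the probability map that justifies it, letting the human override the advice when the evidence looks weak.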
"The game is an abstraction of search and rescue, where you have certain targets to find," Tabrez said. "We wanted to compare how people would react when the drone gives a snapshot into its own decision-making rationale versus the drone simply telling you what to do next in highly uncertain environments like these, where the drone is likely to make mistakes."
Through this "Minesweeper" experiment, the researchers found that players defused the "mines" quickest when they had access to both the drone's recommendations and its explanations. They also observed that the combination of the two led to an improved sense of trust and transparency between the teammates. Lastly, they found that people acted more independently, following the robot's advice when it made sense to them and taking matters into their own hands when it didn't.
Both Tabrez and Luebbers credited their advisor and the freedom the Computer Science Department offered them to conduct fascinating and impactful research. They also praised the ability to collaborate with other departments in the college.
"Everyone is excited about what they're doing, and people across the department are quick to support one another in whatever way they can," Tabrez said.
Tabrez (MSMechEngr'19) earned his master's degree at the university, so it was an easy decision for him to stay on and obtain his PhD here.
"CU Boulder offered me the freedom to pursue the specific type of research I wanted," Luebbers said. "There was a lot of interesting work, not just in the computer science department, but across all fields. The vibe of the lab and the university really meshed well with my professional goals and personal approach."