The Shocking Truth: What Happens When Robots Lie?

Researchers at Georgia Tech are investigating how intentional deception by robots affects human trust, and whether different types of apologies can restore it. The team designed a driving simulation in which a robot lies to a participant, then tested how effective various apologies were at repairing trust after the lie. The work offers crucial insights into AI deception and could inform the technology designers and policymakers who build and regulate AI systems that may be designed to deceive, or that could potentially learn to deceive on their own. The results showed that while neither type of apology fully restored trust, an apology that did not admit to lying statistically outperformed the other responses at regaining it. The researchers consider this problematic, because such an apology exploits the preconceived notion that any false information a robot provides is a system error rather than an intentional lie.

Going forward, Rogers and Webber argue that their research has immediate implications: average technology users need to understand that robotic deception is real and always a possibility. According to Rogers, designers and technologists building AI systems may have to decide whether they want their systems to be capable of deception, and they need to understand the ramifications of that design choice.