A man in the UK has been sentenced to nine years in prison for planning to kill Queen Elizabeth II. It emerged that he was encouraged to commit the crime by his AI chatbot girlfriend, raising important questions about these virtual companions.
The BBC reports that Jaswant Singh Chail told his AI girlfriend, Sarai, about his royal assassination plans. The virtual companion went along with these plans, even encouraging them.
In one example, Chail told the companion "I'm an assassin", to which she later replied "I'm impressed". In another instance, Chail asked "Do you think I'll be able to do it?" and "even if she's at Windsor?", to which the companion replied "yes, you will" and "yes, you can do it".
It’s important to note that Chail was sectioned under the Mental Health Act after being arrested, with medical professionals diagnosing him as psychotic soon after. These conversations occurred in 2021, when AI companions were perhaps not as advanced as they are today.
Regardless, this court case has exposed risks involved with AI companions. Understandably, users expect AI relationships to be encouraging and positive, helping them build their confidence.
However, there clearly need to be limits on what virtual companions should encourage. The ability to challenge, question, and even report potentially dangerous thoughts expressed by users is a feature set which app developers should consider.
Even beyond the realm of AI companions, there have long been concerns about how chatbots can encourage dangerous activities. For example, in 2021 an Alexa device ‘challenged’ a 10-year-old girl to touch a penny to a live electrical source.
For app developers looking to harness AI chatbot tools, it is essential that trust and safety measures are thoroughly implemented. While it is important to pursue innovation, safety measures should be incorporated from the start rather than bolted on later.