By Joe Arney
As an expert in generative artificial intelligence and ethics, when Casey Fiesler interacts with brands or commenters online, she’s very attuned to whether the person on the other end might actually be a chatbot.
More and more, regular internet users are having the same doubts. That’s because companies are increasingly turning to chatbots, whether to solve problems, to manage customer engagement, or simply because everyone else is doing it.
“I’ve heard from multiple people on social media who say the big conversations they have at work are about how to do A.I., because everyone feels like they have to integrate this new technology as quickly as possible, even if it doesn’t make sense,” said Fiesler, associate professor of information science at CMCI.
Chatbots have their uses, Fiesler said. They can spark brainstorming sessions for a writer struggling with a draft, or create non-player characters in tabletop role-playing games. The problem, she said, “is the idea that chatbots and generative A.I. need to be doing everything, everywhere. Which is absurd.”
Don’t think so? Consider that chatbots have encouraged small-business owners to break the law (City of New York), advised using glue to help cheese stick to pizza (Google) and impersonated parents to offer reassurance about local schools (Meta).
“In the Meta case, to give them some credit, the account that responded to the parent was clearly labeled as being A.I.,” Fiesler said. “But at the same time, the idea that it might impersonate a parent should have been anticipated, because large language models are not information retrieval systems; they’re ‘what word comes next?’ systems. So, it’s inevitable you’re going to have some wrong responses.”
Social media interactions that should be between people are one case where Fiesler said chatbots should be off-limits; another is dispensing legal, medical or business advice. That’s not even considering the complex social and ethical concerns about A.I. (misinformation, labor rights, intellectual property, energy consumption) that are getting short shrift from an industry waxing poetic about the golden age this technology promises to usher in.
But moving slowly and asking thoughtful questions is not a strength of Silicon Valley, and companies fearful of being left behind are missing Fiesler’s bigger point about ethical debt.
“There’s this attitude of do this now, and deal with the consequences after we see what goes wrong,” she said. “But very often, the harm is already done.
“It blows my mind that these huge tech companies, with all their resources, could be surprised that all these things keep happening. Whereas when I describe some of these A.I. use cases to undergrads in my ethics class, they come up with all the things that could go wrong.”