‘A new superpower:’ U of G professor points to security dangers of AI chatbots

By Justin Koehler

As artificial intelligence (AI) adoption accelerates across companies and businesses, one professor at the University of Guelph is raising concerns about the ability of attackers to extract company secrets through chatbots.

Dr. Ali Dehghantanha, a cybersecurity professor and Canada Research Chair in Cybersecurity and Threat Intelligence at the university, said he was able to gain access to sensitive client data and internal project information from a Fortune 500 company, all by speaking with an AI chatbot.

He said he was able to do so in under an hour.

He also said he was able to draft an email containing real project information that appeared to come from the company’s CEO, and that it would have been just as easy to send it to every employee in the business.

“The chatbot had access to far more client and project data than it needed, and there were no systems in place to notice when the AI was being manipulated,” Dehghantanha said. “The case is not unique.”

He said he used carefully worded prompts, role-playing scenarios and multi-step conversations to manipulate the AI chatbot into taking actions it should not have been permitted to take.

“The more connected an AI assistant is, the bigger its attack surface,” he mentioned. “If you connect it to sensitive data without serious safeguards, you’re effectively giving every employee and potentially every attacker a new superpower. Would you give a new intern the keys to every filing cabinet and not watch the door?”

Dehghantanha said these dangers will only grow as AI chatbots become more deeply embedded in workplaces and everyday settings.

He said the tools are already in use at online retailers, financial institutions, small businesses and even government offices.

He acknowledged that the productivity benefits of integrating AI models and assistants across industries are substantial, but said those gains come with significantly greater security risk for the companies adopting them.

“The best defence isn’t just writing new policies, it’s stress-testing the AI like a real adversary would,” Dehghantanha stated. “Give it only the minimum access it needs. Monitor for unusual behaviour in real time. And most importantly, build AI security into your risk strategy from day one.”
