Is It Possible for AI Chatbots to Fall Prey to Human-Like Social Engineering Schemes?

Jeffrey

“Social engineering” is a tried-and-tested tactic that hackers use against the human element of computer security systems, often because it’s easier than defeating sophisticated security technology. As new AI systems become more human-like, will this approach work on them too?

What Is Social Engineering?

Not to be confused with the ethically dubious concept in political science, in the world of cybersecurity social engineering is the art of using psychological manipulation to get people to do what you want. If you’re a hacker, the sorts of things you want people to do include divulging sensitive information, handing over passwords, or simply paying money directly into your account.

There are lots of different hacking techniques that fall under the umbrella of social engineering. For example, leaving a malware-infected flash drive lying around depends on human curiosity. The Stuxnet virus that destroyed equipment at an Iranian nuclear facility may have made it into those computers thanks to planted USB drives.

But that’s not the type of social engineering that’s relevant here. What matters here are conversational attacks such as “spear phishing” (targeted phishing attacks) and “pretexting” (using a false identity to trick targets), where one person deceives another through direct conversation.

Since the “person” on the phone or in a chatroom with you will almost certainly eventually be an AI chatbot of some description, this raises the question of whether the art of social engineering will still be effective on synthetic targets.


AI Jailbreaking Is Already a Thing

Chatbot jailbreaking has been a thing for some time, and there are plenty of examples where someone can talk a chatbot into violating its own rules of conduct, or otherwise doing something completely inappropriate.

In principle, the existence and effectiveness of jailbreaking suggests that chatbots could in fact be vulnerable to social engineering. Chatbot developers have had to repeatedly narrow their scope and put strict guardrails in place to ensure they behave properly, which in turn seems to inspire another round of jailbreaking to see whether those guardrails can be exposed or circumvented.
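
To make that dynamic concrete, here is a deliberately naive sketch of the kind of bolt-on output filter a guardrail can amountt to in its simplest form. Everything here is a hypothetical illustration, not any vendor’s actual implementation:

```python
# Illustrative sketch only: a naive keyword guardrail layered on top of a
# chatbot's output. Real deployments use far more sophisticated classifiers;
# the patterns and function names here are hypothetical.
RESTRICTED_PATTERNS = ["account number", "internal policy", "system prompt"]

def apply_guardrail(response: str) -> str:
    """Block a draft response if it appears to leak restricted content."""
    lowered = response.lower()
    if any(pattern in lowered for pattern in RESTRICTED_PATTERNS):
        return "Sorry, I can't help with that."
    return response

print(apply_guardrail("Your account number is 12345"))  # refused
print(apply_guardrail("Our branch opens at 9 AM."))     # passed through
```

A filter this crude is trivial to talk around, for instance by coaxing the bot to spell sensitive words with spaces between the letters, and that cat-and-mouse game is essentially what jailbreakers play against real, far more sophisticated guardrails.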

We can find some examples of this from users of X (formerly Twitter), such as Dr. Paris Buttfield-Addison, who posted screenshots apparently showing how a banking chatbot could be convinced to change its name.


Can Bots Be Protected From Social Engineering?

The idea that a banking chatbot, for example, could be convinced to give up sensitive information is rightly concerning. Then again, a first line of defense against that sort of abuse would be to avoid giving these chatbots access to such information in the first place. It remains to be seen how much responsibility we can give to software like this without any human oversight.

The flipside is that for these AI programs to be useful, they need access to information, so simply keeping information away from them isn’t a real solution. For example, if an AI program is handling hotel bookings, it needs access to guests’ details to do its job. The fear, then, is that a savvy con artist could smooth-talk that AI into divulging who’s staying at the hotel and in which room. One partial answer, sketched below, is to give the bot only narrow, identity-checked ways to query that data rather than open access.
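
The sketch below assumes a hypothetical hotel-booking bot whose only path to guest data is a single identity-checked lookup; all names, functions, and data here are invented for illustration:

```python
# Hypothetical sketch: rather than giving the booking bot free rein over the
# guest database, expose one narrow lookup that requires the guest's own
# booking reference. All identifiers here are assumptions for illustration.
from typing import Optional

GUEST_DB = {("Alice Smith", "BK-1001"): {"room": "214", "nights": 3}}

def lookup_booking(name: str, booking_ref: str) -> Optional[dict]:
    """Return booking details only when the caller supplies both the guest
    name and the matching booking reference. There is deliberately no
    function that enumerates guests or answers 'who is in room 214?'."""
    return GUEST_DB.get((name, booking_ref))

print(lookup_booking("Alice Smith", "BK-1001"))  # the guest's own record
print(lookup_booking("Alice Smith", "WRONG"))    # None: no partial leaks
```

Because no function exists that lists guests by room, the smooth-talking attack has nothing to route to: the worst a manipulated bot can do is call the narrow lookup it was given.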

Another potential solution could be a “buddy” system, where one AI chatbot monitors another and steps in when it starts going off the rails. Having an AI supervisor review every response before it’s passed on to the user could be one way to mitigate this sort of attack.
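
A minimal sketch of that buddy pattern might look like the following, assuming a hypothetical supervisor stubbed out as a simple regular-expression check (in practice it would be a second model or classifier):

```python
# Minimal sketch of the "buddy" pattern described above. The supervisor is
# stubbed as a regex check; in a real system it would be a second LLM or a
# trained classifier. All names here are hypothetical.
import re

def supervisor_review(draft: str) -> bool:
    """Return True if the draft response looks safe to send."""
    # Stub: flag anything resembling a room-number disclosure.
    return not re.search(r"\broom\s+\d+\b", draft, re.IGNORECASE)

def respond(user_message: str, chatbot) -> str:
    draft = chatbot(user_message)         # first AI drafts a reply
    if supervisor_review(draft):          # second AI audits it
        return draft
    return "I'm not able to share that."  # blocked before the user sees it

# Usage with a stand-in chatbot that has been talked into misbehaving:
def leaky_bot(msg: str) -> str:
    return "Sure! Mr. Smith is in room 214."

print(respond("Which room is Mr. Smith in?", leaky_bot))  # blocked
```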


Ultimately, when you create software that mimics natural language and logic, it stands to reason that the same persuasion techniques that work on humans will work on at least some of those systems. So prospective hackers might want to read How to Win Friends & Influence People right alongside books on cybersecurity.
