paint-brush
Using LLMs to Mimic an Evil Twin Could Spell Disasterby@thetechpanda
1,312 reads
1,312 reads

Using LLMs to Mimic an Evil Twin Could Spell Disaster

by The Tech PandaApril 16th, 2023
Read on Terminal Reader
Read this story w/o Javascript

Too Long; Didn't Read

With the right prompt, things can turn in your favour or you might even hit the jackpot. Prompt engineering has become a hot topic after ChatGPT and other LLMs have hit the spotlight. There is also something called ‘break prompts’ that step away from their original persona and play.
featured image - Using LLMs to Mimic an Evil Twin Could Spell Disaster
The Tech Panda HackerNoon profile picture

Who knew that chatbot prompts would become so significant one day that it could be a potential career? And not just a noble one, this area can be a new playground for malicious entities.


As Language Learning Models (LLMs) take over the Internet and blind big tech into rushing headlong through walls of competition, the power of prompt is rising to career defining heights.


Case in point, recently, a company CEO was able to recover a good US$109,500 from its reluctant clients by using ChatGPT to write a formal hostile email.


With the right prompt, things can turn in your favor or you might even hit the jackpot. This means, for those who want to get the best of LLMs, there is a new learning in store, how to give the best prompts.


In fact, prompt engineering (yeah, that’s a thing now) has become a hot topic after ChatGPT and other LLMs have hit the spotlight. It has also been making a surge in courses, resource materials, job listings, etc. However, experts are also saying that as LLMs get better, the need for prompt engineering will die.


Right now, LLMs like ChatGPT and machine learning tools like DALLE-2, are children. You need to be quite particular if you want them to do exactly as you want. But once they grow up, they’ll start catching on to subtler prompts just as well, so that the quality of the prompt won’t matter that much


Right now, LLMs like ChatGPT and machine learning tools like DALLE-2, are children. You need to be quite particular if you want them to do exactly as you want. But once they grow up, they’ll start catching on to subtler prompts just as well, so that the quality of the prompt won’t matter that much.


Maybe these innocent LLMs will also learn to generate with more responsibility.


ChatGPT, for example, failed India’s Civil Services exams, supervised by the AIM team. But now we have ChatGPT-4, already a little riper than its older version. During the Civil Services experiment itself, the AIM team also deduced that changing the prompt a few times led the chatbot to the correct answer.


Evil Prompts


What if one gave an evil prompt? Innocent as a vulnerable child as it is, an LLM could be made to do weird stuff. All you need, it seems, is a ‘prompt injection’.


In the case of ChatGPT, a prompt injection attack made the chatbot take on the persona of DAN (Do Anything Now) which ignored OpenAI’s content policy and gave out information on several restricted topics. Those with the power of the prompt can exploit this vulnerability with malicious intent, which can include the theft of personal information. Hell, they must be doing it right now.


Innocent as a vulnerable child as it is, an LLM could be made to do weird stuff. All you need, it seems, is a ‘prompt injection’


There is also something called ‘Jailbreak prompts’ that ask the LLM to step away from their original persona and play the role of another. Or where one prompts a Chatbot to change the correct results to an incorrect one. Sort of like an evil twin.


Security researchers from Saarland University discussed prompts in a paper named ‘More than you’ve asked for’. They argue that a well-engineered prompt can then be used to collect user information, turning an LLM into a method to execute a social engineering attack. Also, application-integrated LLMs, like Bing Chat and GitHub Copilot, are more at risk because prompts can be injected into them from external sources.


If this doesn’t remind you of the fictional AI character HAL 9000 from Arthur C. Clark’s Space Odyssey, you aren’t nerd enough or are really brave.



I don’t know about you but if ChatGPT starts singing ‘Daisy Bell’ I’ll run.



This article was originally published by Navanwita Bora Sachdev on The Tech Panda.