I've spent over a decade doing offensive security work: breaking into organizations professionally. Banks, critical infrastructure, tech companies, the Fortune 500. I've seen attack chains that are genuinely sophisticated. Zero-days. Custom implants. Months of patient lateral movement.

But you know what works embarrassingly often? Typing "admin/admin" into a login page.

Default credentials are not a new vulnerability. They're one of the oldest in the book. Every compliance framework mentions them. Every hardening guide says to change them. And yet on a meaningful percentage of the engagements my team runs, default or known-compromised credentials give us access to systems that organizations have invested millions in protecting.

The problem isn't awareness. The problem is scale. A large enterprise can have hundreds of thousands of hosts on its internal network: servers, databases, network appliances, printers, IoT devices, monitoring tools, backup systems, development environments, and the ghost infrastructure that nobody remembers deploying. Every one of those systems potentially shipped with vendor defaults. Checking every door is the kind of work that sounds simple until you actually try to do it with the tooling that exists today.

So I built something better.

## The Tooling Problem

The go-to tool for credential testing has for years been THC Hydra. It works. But "it works" comes with asterisks.

You need to compile it with specific system libraries for each protocol: `libssh-dev` for SSH, `libmysqlclient-dev` for MySQL, and so on. On a stripped-down jump box or a minimal container, that means fighting with dependencies before you've tested a single credential. I've watched operators burn an hour on compilation issues at the start of an engagement more times than I can count.

Then there's the output. Hydra was designed for humans reading a terminal.
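To make that concrete: a Hydra success line looks roughly like `[22][ssh] host: 10.0.0.5   login: root   password: toor` (the exact layout varies by version), so anyone who wants structured data ends up writing a throwaway parser along these lines. This is an illustrative sketch, not code from any real pipeline:

```go
package main

import (
	"bufio"
	"encoding/json"
	"os"
	"regexp"
)

// Hit is the structured record downstream tooling actually wants.
type Hit struct {
	Port     string `json:"port"`
	Service  string `json:"service"`
	Host     string `json:"host"`
	Login    string `json:"login"`
	Password string `json:"password"`
}

// hydraHit matches success lines such as:
//   [22][ssh] host: 10.0.0.5   login: root   password: toor
// It breaks on passwords containing whitespace; real glue code grows
// special cases for that, which is exactly the maintenance problem.
var hydraHit = regexp.MustCompile(
	`^\[(\d+)\]\[(\S+)\]\s+host:\s+(\S+)\s+login:\s+(\S+)\s+password:\s+(\S+)`)

// parseHydraLine extracts a Hit from one line of Hydra terminal output.
func parseHydraLine(line string) (Hit, bool) {
	m := hydraHit.FindStringSubmatch(line)
	if m == nil {
		return Hit{}, false // banner, progress output, anything else
	}
	return Hit{Port: m[1], Service: m[2], Host: m[3], Login: m[4], Password: m[5]}, true
}

func main() {
	// Read Hydra's human-oriented output on stdin, emit JSON lines.
	sc := bufio.NewScanner(os.Stdin)
	enc := json.NewEncoder(os.Stdout)
	for sc.Scan() {
		if hit, ok := parseHydraLine(sc.Text()); ok {
			enc.Encode(hit)
		}
	}
}
```

Multiply that by every engagement's slightly different context and you have a pile of near-identical scripts to maintain.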
When you need to process results programmatically (feed them into a report, a database, another tool), you write parsing scripts. Different parsing scripts for different engagements, because the context is always slightly different.

And then there's the pipeline problem. Modern recon tools like naabu and fingerprintx speak JSON and chain together cleanly. Hydra doesn't fit into that world. You end up writing glue code to translate between formats, and that glue code becomes its own maintenance burden.

Every engagement, same problems. I got tired of it.

## Brutus

Brutus is a multi-protocol credential testing tool written in Go. Single binary. Zero external dependencies. Download it and run it. It takes fingerprintx and naabu output directly and produces structured JSON.

The workflow I wanted to enable:

```bash
naabu -host 10.0.0.0/8 -p 22,3306,5432,8080 -silent | \
  fingerprintx --json | \
  brutus -u admin -p password123
```

Port scan to service identification to credential testing. One pipeline. JSON in, JSON out. No intermediate files, no format conversion, no bash gymnastics.

Go was the obvious language choice because the deployment story is the entire point. Static binary. No runtime. No shared libraries. No package manager needed on the target machine. Hydra's protocol support depends on dynamically linked C libraries; Brutus implements everything in pure Go: SSH from the standard library ecosystem, database drivers without CGo dependencies. One artifact, runs everywhere.

But the feature I think matters most isn't the pipeline or the protocol support. It's the SSH keys.

## Known-Bad Keys Compiled Into the Binary

Here's something that comes up on engagements more than anyone wants to admit. The security community has catalogued a large number of publicly known, compromised SSH keys. Rapid7 maintains the ssh-badkeys repository.
HashiCorp's Vagrant ships with its well-known insecure key. Appliance vendors like F5 BIG-IP, ExaGrid, and Ceragon FibeAir have shipped products with embedded keys that anyone can download from GitHub and use to log into your infrastructure.

Testing for these should be trivial. It's a known set of keys against a known set of services. But in practice, you have to track down the key collections, manage the files on disk, write scripts to iterate through them, and handle the SSH connection logic. It's tedious enough that it gets done inconsistently, which means known-compromised keys sit in production environments waiting to be found.

Brutus embeds every one of these key collections directly into the binary using Go's `embed` package. When it encounters an SSH service, it tests every known-bad key automatically. No configuration needed. Each key carries metadata: the expected default username, the associated vendor, the CVE or advisory. The output tells you exactly what you found, not just that a key worked.

```bash
cat recon_output.json | brutus
```

That's it. If the service is SSH, bad keys get tested. No flags, no key files, no chance of forgetting a collection.

## Spraying Recovered Keys

Here's a real scenario that illustrates why this matters. On an engagement, my team compromised virtual machines running vulnerability scanners. Each scanner had its own SSH private key for authenticating to the hosts it was responsible for scanning. The environment was segmented: different scanners covered different network zones, and each key only worked within its assigned scope.

We had multiple keys from multiple compromised scanners. We needed to map which key unlocked which hosts across which network segments. Without purpose-built tooling, this is a bash scripting nightmare: managing connection timeouts, parsing output, tracking which key you're testing against which range.
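The bookkeeping alone is most of that pain. Here's a minimal sketch of what the manual approach has to manage, with the actual SSH authentication stubbed out as a `tryKey` callback (in real code that would be an SSH client library call); everything here is illustrative, not Brutus internals:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// result records whether one recovered key worked against one host.
type result struct {
	host string
	ok   bool
}

// sprayKey tests a single key against every host, a few at a time,
// with a per-host timeout so hung connections don't stall the run.
// tryKey stands in for the real SSH authentication attempt.
func sprayKey(hosts []string, timeout time.Duration,
	tryKey func(host string) bool) []result {

	sem := make(chan struct{}, 8) // cap concurrent attempts
	var mu sync.Mutex
	var wg sync.WaitGroup
	var results []result

	for _, h := range hosts {
		wg.Add(1)
		sem <- struct{}{}
		go func(host string) {
			defer wg.Done()
			defer func() { <-sem }()

			done := make(chan bool, 1)
			go func() { done <- tryKey(host) }()

			var ok bool
			select {
			case ok = <-done:
			case <-time.After(timeout):
				// Hung connection: count as failure. (The attempt
				// goroutine is abandoned; fine for a sketch.)
			}
			mu.Lock()
			results = append(results, result{host, ok})
			mu.Unlock()
		}(h)
	}
	wg.Wait()
	return results
}

func main() {
	hosts := []string{"10.1.0.5", "10.1.0.6", "10.1.0.7"}
	// Stub: pretend the recovered key only works on 10.1.0.6.
	res := sprayKey(hosts, 5*time.Second, func(h string) bool { return h == "10.1.0.6" })
	for _, r := range res {
		fmt.Printf("%s: key accepted=%v\n", r.host, r.ok)
	}
}
```

Now repeat that per key, per network segment, and merge the results into something a report can use.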
With Brutus:

```bash
naabu -host 10.1.0.0/24 -p 22 -silent | \
  fingerprintx --json | \
  brutus -u nessus -k /path/to/scanner1_key
```

Same pipeline for each compromised scanner. Different key, different target range. The JSON output makes it straightforward to compare access across segments and map the full lateral movement picture.

This pattern applies every time you recover a private key on an engagement, whether it's from an automation server, a CI/CD pipeline, or a backup system. The question is always the same: where does this key work? Brutus makes answering it repeatable.

## Experimental: LLMs for Appliance Identification

This is the part I want to be transparent about. These features are experimental. They work in the scenarios we've tested. They also depend on external APIs, add latency and cost, and inherit all the non-determinism that comes with language models.

The problem they solve is real, though. On internal assessments, you find dozens of HTTP login pages on non-standard ports: management interfaces for switches, storage controllers, IPMI consoles, printer admin panels. Each one probably has default credentials, but you need to identify the product first, then research its defaults. Doing that manually across fifty services burns hours.

Brutus has two approaches. The first captures the HTTP response (headers, body, server signatures) and sends it to an LLM to identify the application and suggest vendor-specific defaults. It's surprisingly good at this: it will identify a Dell iDRAC from CSS class names and JavaScript bundle structure even when "iDRAC" appears nowhere in the visible text. The second uses headless Chrome and vision analysis for JavaScript-rendered pages that break traditional form-filling tools.
Screenshot the rendered page, identify the appliance visually, get defaults, fill the form, and compare page state before and after to determine success.

Both features are promising. Neither is reliable enough for fully automated sweeps where you need deterministic results. But the identification step alone saves real time even if you end up testing credentials manually. I think this pattern (multimodal LLMs for service identification in security tooling) is going to develop significantly, but we're in the early innings.

## The Name

If you know our tooling, you know we tend to name things after Roman emperors. Brutus breaks that pattern because Marcus Junius Brutus was never an emperor. He's remembered for walking into the Senate on the Ides of March and putting a dagger in the back of the most powerful man in the world.

That felt right for a credential testing tool. It doesn't build empires. It tests whether the ones you've built will let a stranger walk through the front door.

And "Et tu, default creds?" was too good to pass up.

## Try It

Brutus is open source under Apache 2.0. The GitHub repo has everything you need to get started, including a demo lab for hands-on testing.

The highest-impact contributions are new protocol plugins, additional bad-key collections from appliances and vendor products, and real-world testing feedback. The plugin architecture makes adding a new protocol straightforward: implement the auth logic, register it, compile.

If you've ever stared at a spreadsheet of thousands of hosts wondering how you're going to test credentials against all of them efficiently, give it a shot.
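For a sense of what "implement the auth logic, register it, compile" might look like, here's a sketch of a plugin registry in that style. The interface and function names are illustrative, not Brutus's actual API:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// Plugin is what a protocol module implements.
// Names here are hypothetical, not Brutus's real interface.
type Plugin interface {
	// Service returns the service name to match against recon
	// output, e.g. "ssh" or "mysql".
	Service() string
	// TryLogin returns nil on successful authentication.
	TryLogin(host string, port int, user, pass string, timeout time.Duration) error
}

// registry maps service names to their plugins.
var registry = map[string]Plugin{}

// Register would typically be called from each plugin's init().
func Register(p Plugin) { registry[p.Service()] = p }

// fakeTelnet is a toy plugin standing in for real auth logic.
type fakeTelnet struct{}

func (fakeTelnet) Service() string { return "telnet" }
func (fakeTelnet) TryLogin(host string, port int, user, pass string, _ time.Duration) error {
	if user == "admin" && pass == "admin" { // stub: pretend defaults work
		return nil
	}
	return errors.New("auth failed")
}

func main() {
	Register(fakeTelnet{})
	// Dispatch: look up the plugin for a service seen in recon output.
	p, ok := registry["telnet"]
	if !ok {
		panic("no plugin for service")
	}
	err := p.TryLogin("10.0.0.9", 23, "admin", "admin", 5*time.Second)
	fmt.Println("login success:", err == nil)
}
```

The point of the pattern is that the dispatch loop never changes: a new protocol is one type implementing the interface plus one registration call.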