Legacy CMSes, tiny IT teams, and the same five security mistakes on repeat.
Most of the security work I do is not glamorous.
There are no red team hoodies, no zero-days, no dramatic “we hacked the mainframe” moments. Instead, there’s a lot of curl, headers, and very old PHP.
I’m based in Chicago and I’ve been building web apps for 15+ years. At some point last year, I decided to scratch an itch: I wanted a small open-source tool that would quickly tell me how “healthy” a public website is, security‑wise. Nothing fancy – just:
- does HTTPS really work?
- are the obvious leaks closed?
- are we doing anything about scripts from half the internet?
Naturally, I pointed this tool at the kinds of domains most people forget about: small-town websites, school districts and county portals in the U.S. The stuff that runs on old CMS installs and “if the page loads, it’s probably fine” energy.
On one of the first school district sites I checked, I found a `phpinfo.php` page still sitting on a test subdomain that was never cleaned up. The homepage looked perfectly normal. Under the hood, it was basically a live x‑ray of the entire stack.
After a few dozen scans like that, a pattern emerged. Different vendors, different hosting, different logos… but almost exactly the same mistakes.
This post is not about the tool. It’s about those recurring mistakes and what they say about how we build (and maintain) public‑facing sites.
Pattern #1: HTTPS is there, but not really trusted
Almost every site I tested redirected to HTTPS. That’s a huge improvement compared to a decade ago.
But in a surprising number of cases, there was no HSTS at all.
From the browser’s point of view, that first visit is still “HTTP is allowed.” An on‑path attacker sitting on the same Wi‑Fi can quietly downgrade it, inject whatever they want, and then forward the request to the real site. The user sees a familiar URL and a working page. Nothing screams “you’re being messed with.”
The fix is literally one header:
```
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
```
Yet it’s missing on many sites simply because nobody ever picked it up as a requirement. The vendor shipped “it works on HTTPS,” the client said “great, thanks,” and that was the end of the story.
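What setting it looks like depends on the stack. As a sketch, assuming nginx terminates TLS, it's one line in the server block. I'd start with a short max-age and skip preload until every subdomain really serves HTTPS:

```nginx
# Hypothetical nginx config; example.gov is a placeholder domain.
server {
    listen 443 ssl;
    server_name example.gov;

    # "always" makes nginx send the header on error responses too.
    # While testing, use a short max-age (e.g. 300) so mistakes stay reversible.
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
}
```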
Pattern #2: No CSP on pages full of third‑party scripts
Most of the sites I looked at have had multiple lives.
Different contractors added:
- analytics
- chat widgets
- form providers
- random “we need this by Friday” embeds
A few years later, it’s a soup of inline scripts and external JS that nobody really owns.
Without a Content Security Policy (CSP), the browser will happily execute whatever scripts come back from those third‑party domains. If one of them gets compromised, injected code runs on a government domain, in a context where users are used to trusting what they see.
I get why CSP is often missing. It looks scary, and the first attempt tends to break something. But even a simple starter policy like this:
```
Content-Security-Policy:
  default-src 'self';
  script-src 'self' 'unsafe-inline' https://www.googletagmanager.com https://www.google-analytics.com;
  style-src 'self' 'unsafe-inline';
  img-src 'self' data:;
  frame-ancestors 'none';
```
already cuts down the blast radius a lot.
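And if even that feels risky, browsers support a report-only mode: ship the policy under a different header name, collect violation reports, and only enforce it once the reports look sane. A minimal sketch (the /csp-reports endpoint is hypothetical, and report-uri is technically deprecated in favor of report-to but still widely supported):

```
Content-Security-Policy-Report-Only:
  default-src 'self';
  report-uri /csp-reports
```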
The problem isn’t the lack of silver‑bullet tech. It’s that no one ever said, “This site must have a CSP,” so it just never happened.
Pattern #3: Leaky files and forgotten test stuff
This one still makes me pause every time I see it.
From the scans, I kept running into things like:
- `.env` files with database credentials
- `.git/` directories exposing repo history
- `phpinfo.php` on “temporary” subdomains
- backup archives sitting in the web root
- random `/old/`, `/test/` or `/backup/` directories that were meant to be removed “later”
Attackers don’t need to be creative here. They just need a script that walks common paths.
And again, the fixes are boring:
- block access to known‑sensitive patterns at the web server or WAF (see the sketch after this list)
- add “no test artifacts / backups in web root” to your deployment checklist
- schedule a periodic scan specifically looking for these paths
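For the first item, assuming nginx, a couple of location blocks already go a long way. I'd return 404 rather than 403 so a probe can't even confirm that a blocked file exists:

```nginx
# Refuse dotfiles (.env, .git/, .htaccess, ...) except ACME challenges.
location ~ /\.(?!well-known) {
    return 404;
}

# Refuse common backup/dump extensions; adjust if you intentionally serve archives.
location ~* \.(sql|bak|old|orig|tar|gz|zip)$ {
    return 404;
}
```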
This isn’t deep, subtle technical debt. It’s more like leaving keys in the door because nobody added “check the door” to their routine.
Pattern #4: Session cookies that trust too much
Another repeat offender: session cookies with defaults from a different era.
I’d often see things like:
- no `Secure` flag
- no `HttpOnly`
- no `SameSite`
- combined with a lack of basic headers (`X-Frame-Options`, `X-Content-Type-Options`, etc.)
In most modern stacks, you can fix 80% of this in one central place. On PHP/Laravel, for example, tweaking session and cookie settings in the framework config, plus a few headers at the web server level, already raises the bar.
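As a sketch, in Laravel the relevant knobs live in config/session.php; the key names below are from the stock config, and the values are the hardened choices:

```php
<?php
// config/session.php (excerpt): hardened cookie defaults.
return [
    // Only send the session cookie over HTTPS.
    'secure' => true,

    // Keep the cookie out of reach of JavaScript.
    'http_only' => true,

    // Don't attach the cookie to most cross-site requests.
    // 'strict' is stronger but can break logins via external links.
    'same_site' => 'lax',
];
```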
Yet it rarely happens by default, especially on sites that were “finished” years ago and only get touched when content changes.
Pattern #5: Client‑side libraries from another decade
Finally, the frontend. A lot of these sites still rely on:
- old jQuery (1.x / 2.x)
- outdated Bootstrap
- abandoned plugins from who‑knows‑where
I have sympathy for this one. If you’re the only IT person in a small organization, with no staging environment and no time, touching a core JS lib feels risky. If everything “kind of works,” why poke it?
The problem is that this code often has known CVEs and no upstream support. At some point, not touching it becomes the bigger risk.
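A low-risk first step is simply finding out what you're running. retire.js, an open-source scanner for known-vulnerable JavaScript libraries, can do that without touching the site (assuming Node.js is available; the path is a placeholder):

```sh
npx retire --path ./public
```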
The teams that do move forward usually do it in small steps: upgrade the core library, test the critical user journeys, be ready to roll back. That’s boring, methodical work — exactly the kind of work that gets deprioritized when there’s always another fire to put out.
Why these mistakes keep repeating
After a while, I stopped being surprised by individual issues and started thinking more about the environment that produces them.
A few patterns stood out:
- No clear owner. The website lives between “IT” and “communications.” Vendors maintain it, departments own the content, security is “everyone’s job” and therefore nobody’s.
- No baseline. There’s no short, agreed list of “every public site we run must at least do X, Y and Z” (enforce HTTPS, have CSP, block obvious leaks, etc.).
- No feedback loop. Once the site is launched, nobody regularly scans it for regressions or new issues. It just quietly ages in place.
You don’t need a nation‑state adversary for things to go wrong here. A misconfigured plugin, a compromised third‑party script or a sloppy backup can be enough.
So what can we actually do?
If you’re a developer or consultant working with public‑sector clients (or any small org with similar constraints), here’s what I’d suggest.
None of this requires a new platform. It does require instilling a bit of discipline around boring things:
- Write down a tiny baseline. Literally a one‑page document that says: “Every public‑facing site we run must:
  - enforce HTTPS with HSTS after a transition period,
  - have at least a basic CSP,
  - block access to config/test/backup artifacts,
  - use secure/HTTP‑only/SameSite cookies,
  - avoid severely outdated client‑side libraries.”
- Make it part of the contract. If you’re the vendor, include this baseline in your proposal. If you’re the client, ask for it. Make it part of “done,” not a nice‑to‑have.
- Automate the checks. Use whatever you like: curl, custom scripts, open‑source scanners, CI jobs. The exact tool matters less than the fact that it runs regularly and someone looks at the results (a minimal sketch follows this list).
- Prioritize by impact, not aesthetics. A site that handles payments or logins should get attention before the news archive. Align your energy with where real risk lives.
- Plan for small, continuous upgrades. Make it normal to upgrade libraries, tweak headers and refine CSP in small increments instead of waiting 5–10 years for a “big redesign.”
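To make the third point concrete, here's a minimal PHP sketch of the kind of recurring check I mean. The domain is a placeholder, and a real scanner checks far more than three headers:

```php
<?php
// check_headers.php: flag missing baseline headers on a list of sites.
// Assumes PHP CLI with allow_url_fopen enabled; example.gov is a placeholder.
$sites = ['https://example.gov'];
$required = [
    'Strict-Transport-Security',
    'Content-Security-Policy',
    'X-Content-Type-Options',
];

foreach ($sites as $url) {
    $headers = get_headers($url, true); // associative array of response headers
    if ($headers === false) {
        echo "$url: request failed\n";
        continue;
    }
    // Header names are case-insensitive, so normalize before the lookup.
    $lower = array_change_key_case($headers, CASE_LOWER);
    foreach ($required as $name) {
        $status = isset($lower[strtolower($name)]) ? 'ok' : 'MISSING';
        echo "$url  $name: $status\n";
    }
}
```

Run something like this from cron or a CI job and the “no feedback loop” problem from the previous section mostly solves itself.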
None of this will make headlines. That’s kind of the point. Quiet, unglamorous work on “boring” security details is what keeps a lot of people safe without them ever noticing.
Closing thoughts
I started scanning these sites with CivicMeshFlow mostly because I was curious. I kept going because I realized how many residents depend on them every day — to pay bills, read school announcements, check on local services.
Most of the issues I see aren’t there because people don’t care, but because nobody ever gave them time, tools or a baseline.
If you build or maintain public‑facing sites, especially for small organizations, you’re probably closer to this problem than you think. And you have more influence than you might realize: sometimes, all it takes to start changing the pattern is one person asking, “What’s our baseline?” and being willing to write it down.
Author bio (for HackerNoon):
I’m Nick Tkachenko, founder and CTO of CivicMeshFlow, an open‑source project focused on improving the security of small local government websites in the U.S. Based in Chicago, I’ve been building PHP/Python apps for 15+ years. Learn more at https://civicmeshflow.com
