Hi there, my fellow people on the internet. Hope you're doing well and your codebase isn't on fire (yet).
So here's the thing. Over the past year I've been watching something unfold that genuinely worries me. Everyone and their dog is using AI to write code now. Copilot, Cursor, Claude Code, ChatGPT, you name it. Vibe coding is real, and the productivity gains are no joke. I've used these tools myself while building Kira at Offgrid Security, and I'm not about to pretend they aren't useful.
But I've also spent a decade in security, building endpoint protection at Microsoft, securing cloud infrastructure at Atlassian, and now running my own security company. And that lens makes it impossible for me to look at AI-generated code and not ask my favorite question: what can go wrong?
Turns out, a lot.
The Numbers Don't Lie (And They Aren't Pretty)
Veracode recently published its 2025 GenAI Code Security Report after testing code from over 100 large language models. The headline finding? AI-generated code introduced security flaws in 45% of test cases. Not edge cases. Not obscure languages. Common OWASP Top 10 vulnerabilities across Java, Python, JavaScript, and C#.
Java was the worst offender with a 72% security failure rate. Cross-Site Scripting had an 86% failure rate. Let that sink in.
And here's the part that surprised even me: bigger, newer models don't do any better. Security performance has stayed flat even as models have gotten dramatically better at writing code that compiles and runs. They've learned syntax. They haven't learned security.
Apiiro's independent research across Fortune 50 companies backed this up, finding 2.74x more vulnerabilities in AI-generated code compared to human-written code. That's not a rounding error. That's a systemic problem.
Why Does This Keep Happening?
If you think about how LLMs learn to code, it makes total sense. They're trained on massive amounts of publicly available code from GitHub, Stack Overflow, tutorials, blog posts. The thing is, a huge chunk of that code is insecure. Old patterns, missing input validation, hardcoded credentials, SQL queries built with string concatenation. If the training data is full of bad habits, the model will confidently reproduce those bad habits.
The other piece is that LLMs don't understand your threat model. They don't know your application's architecture, your trust boundaries, your authentication flow. When you ask for an API endpoint, the model will happily generate one that accepts input without validation, because you didn't tell it to validate. And honestly, most developers don't include security constraints in their prompts. That's the whole premise of vibe coding: tell the AI what you want, trust it to figure out the how.
The problem is that "the how" often skips the security bits entirely.
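To make the most common failure mode concrete, here's a minimal, hypothetical illustration. The vulnerable function is exactly the string-concatenation pattern that's overrepresented in training data; the second is the parameterized query a security-conscious reviewer would ask for. I'm using sqlite3 just to keep the demo self-contained:

```python
# Hypothetical illustration of the string-concatenation pattern LLMs often
# reproduce, next to the parameterized version a reviewer would expect.
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: attacker-controlled input is spliced into the SQL text.
    # A username like "x' OR '1'='1" returns every row in the table.
    query = "SELECT id, username FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized: the driver treats the input as data, never as SQL.
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, username TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)",
                     [(1, "alice"), (2, "bob")])
    payload = "x' OR '1'='1"
    print(len(find_user_unsafe(conn, payload)))  # 2 -- leaks both rows
    print(len(find_user_safe(conn, payload)))    # 0 -- matches nothing
```

Both functions compile, both run, both "work" on the happy path. Only one survives contact with an attacker, and the model has no way of knowing you care about the difference unless something tells it.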
I categorize these into three buckets:
The Obvious Stuff - missing input sanitization, SQL injection, XSS. These are the classics that have been plaguing us for two decades and LLMs are very good at reintroducing them because they're overrepresented in training data.
The Subtle Stuff - business logic flaws, missing access controls, race conditions. The code looks correct. It passes basic tests. But it's missing the guardrails that a security-conscious developer would add. This is harder to catch because there's no obvious "bad pattern" to scan for.
The Novel Stuff - hallucinated dependencies (packages that don't exist but an attacker could register), overly complex dependency trees for simple tasks, and the reintroduction of deprecated or known-vulnerable libraries. This one is uniquely AI-flavored and it's growing fast.
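To make the hallucinated-dependency risk in that last bucket concrete, here's a sketch of the kind of check you can run yourself. The npm registry endpoint is the real public one; the helper names and the sample manifest are mine, and this is illustrative, not anything kira-lite ships:

```python
# Hedged sketch: sanity-check that dependencies an AI assistant added
# actually exist on the public npm registry. A 404 means the package may
# be hallucinated -- and a free name an attacker could register.
import json
import urllib.error
import urllib.request

def dependency_names(package_json: str) -> list[str]:
    """Collect every dependency name from a package.json string."""
    manifest = json.loads(package_json)
    names = []
    for section in ("dependencies", "devDependencies"):
        names.extend(manifest.get(section, {}))
    return sorted(names)

def exists_on_npm(name: str) -> bool:
    """Return True if the npm registry knows this package (HTTP 200)."""
    url = f"https://registry.npmjs.org/{name}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False

if __name__ == "__main__":
    # "left-padz" is a made-up name standing in for a hallucinated package.
    sample = '{"dependencies": {"express": "^4.18.0", "left-padz": "1.0.0"}}'
    for name in dependency_names(sample):
        print(name)  # when online, check each with exists_on_npm(name)
```

Trivial to script, and it catches a class of problem that no amount of code review will spot by eyeballing an import statement.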
So What Do We Do About It?
We need security measures that scale with the speed and volume of code being shipped.
Older security tools ran after every build. We need something faster, something earlier in the development lifecycle.
Something that guides the AI toward writing secure code from the start, rather than scanning for mistakes after you've already got your features working.
Here is one such tool, free for the community to try: kira-lite, an MCP (Model Context Protocol) server that plugs directly into your AI-powered development workflow. If you haven't been following the MCP ecosystem, here's the quick version: MCP is a standard protocol that lets AI assistants connect to external tools and data sources. Think of it as giving your AI coding assistant the ability to call out to specialized services while it's working.
The idea behind kira-lite is straightforward. Instead of generating code and hoping for the best, your AI assistant can call kira-lite during the development process to scan for security issues before the code is even written to disk. It sits in the workflow, not after it.
Here's how you'd set it up:
Prerequisites
You just need Node.js >= 18 and an MCP-compatible AI coding assistant. That's it. No API keys, no accounts, no external servers.
Quick Start
```bash
npx -y @offgridsec/kira-lite-mcp
```
No global install needed. The `-y` flag tells npx to skip the install confirmation prompt, and npx fetches the package for you on first run.
Setup by IDE / Tool
Here's how to set it up depending on what you're using:
Claude Code (CLI)
Claude Code needs three things: register the MCP server, add a CLAUDE.md so Claude knows to scan automatically, and add settings for auto-permissions.
Step 1 — Register the MCP server (one-time, global)
```bash
claude mcp add --scope user kira-lite -- npx -y @offgridsec/kira-lite-mcp
```
Step 2 — Per-project setup
Navigate to your project directory, then copy the config files that ship with the package:
macOS / Linux:
```bash
cd /path/to/your-project
# Resolve the package path via the npx cache
PKG=$(node -e "console.log(require.resolve('@offgridsec/kira-lite-mcp/config/CLAUDE.md').replace('/config/CLAUDE.md',''))")
# Copy CLAUDE.md (tells Claude to scan before every write)
cp "$PKG/config/CLAUDE.md" .
# Copy settings (auto-allows kira-lite tools + adds post-write hook)
mkdir -p .claude
cp "$PKG/config/settings.local.json" .claude/settings.local.json
```
Windows (PowerShell):
```powershell
cd C:\path\to\your-project
$pkg = Split-Path (node -e "console.log(require.resolve('@offgridsec/kira-lite-mcp/package.json'))")
Copy-Item "$pkg\config\CLAUDE.md" .
New-Item -ItemType Directory -Force .claude | Out-Null
Copy-Item "$pkg\config\settings.local.json" .claude\settings.local.json
```
Tip: If the package isn't cached yet, run `npx -y @offgridsec/kira-lite-mcp` once first to download it.
Already have a CLAUDE.md or .claude/settings.local.json? Don't overwrite them — merge manually.
Step 3 — Verify
Start a new Claude Code session in your project and run /mcp. You should see kira-lite listed with 7 tools. Claude will now automatically scan code before every write.
Cursor
Step 1 — Add MCP server
Open Cursor Settings → MCP and add a new server, or add this to your ~/.cursor/mcp.json (global) or .cursor/mcp.json (per-project):
```json
{
  "mcpServers": {
    "kira-lite": {
      "command": "npx",
      "args": ["-y", "@offgridsec/kira-lite-mcp"]
    }
  }
}
```
Step 2 — Add project rules (recommended)
Create or append to your .cursorrules file in the project root:
```
# SECURITY SCANNING — REQUIRED ON EVERY CODE CHANGE
You MUST call the `scan_code` MCP tool before EVERY code change. No exceptions.
1. Before writing any code, call `scan_code` with the code you are about to write
2. If findings are returned, fix them and call `scan_code` again
3. Only write the code after scan returns clean
4. For edits to existing files, use `scan_diff` with original and new code
5. If scan returns critical or high findings, DO NOT write the code
6. Tell the user what you found and what you fixed
```
Step 3 — Verify
Open Cursor, start an Agent chat, and ask: "What MCP tools do you have?" You should see the kira-lite tools listed.
Windsurf
Step 1 — Add MCP server
Open Windsurf Settings → MCP and add a new server, or add this to your ~/.codeium/windsurf/mcp_config.json:
```json
{
  "mcpServers": {
    "kira-lite": {
      "command": "npx",
      "args": ["-y", "@offgridsec/kira-lite-mcp"]
    }
  }
}
```
Step 2 — Add project rules (recommended)
Create or append to your .windsurfrules file in the project root with the same scanning rules as Cursor above.
Step 3 — Verify
Start a Cascade session and ask: "What MCP tools do you have?" You should see the kira-lite tools listed.
VS Code (Copilot / GitHub Copilot)
Add this to your .vscode/mcp.json (per-project) or user-level MCP settings:
```json
{
  "servers": {
    "kira-lite": {
      "command": "npx",
      "args": ["-y", "@offgridsec/kira-lite-mcp"]
    }
  }
}
```
Then create or append the scanning rules to your .github/copilot-instructions.md file. Open Copilot Chat in Agent mode to verify.
OpenAI Codex CLI
Add this to your ~/.codex/config.yaml or project-level codex.yaml:
```yaml
mcp_servers:
  - name: kira-lite
    command: npx
    args: ["-y", "@offgridsec/kira-lite-mcp"]
```
Then add the scanning rules to your AGENTS.md file in the project root.
Other MCP-Compatible Clients
For any MCP-compatible tool not listed above, add this to your client's MCP configuration:
```json
{
  "kira-lite": {
    "command": "npx",
    "args": ["-y", "@offgridsec/kira-lite-mcp"]
  }
}
```
The exact file location varies by client — check your tool's MCP documentation. Then add the scanning rules to whatever instructions/rules file your tool supports.
Quick Reference: Instructions File by IDE
| IDE / Tool | Instructions File | MCP Config File |
|---|---|---|
| Claude Code | `CLAUDE.md` | registered via `claude mcp add` (plus `.claude/settings.local.json`) |
| Cursor | `.cursorrules` | `~/.cursor/mcp.json` or `.cursor/mcp.json` |
| Windsurf | `.windsurfrules` | `~/.codeium/windsurf/mcp_config.json` |
| VS Code (Copilot) | `.github/copilot-instructions.md` | `.vscode/mcp.json` |
| OpenAI Codex CLI | `AGENTS.md` | `~/.codex/config.yaml` or `codex.yaml` |
Why MCP and Why Now?
I've been thinking about this a lot. The traditional security tooling model is built around gates and checkpoints. Write code, commit, run CI pipeline, scanner finds issues, developer goes back to fix. It works, but it's slow and creates friction that developers (understandably) resent.
With MCP, the security tool becomes a collaborator rather than a gatekeeper. The AI assistant can proactively check its own work. It can call scan_code before presenting a snippet to you, catch the SQL injection in the Python function or the missing authentication check on the API endpoint, and fix it in the same conversation. No context switch. No waiting for CI. No separate dashboard to check.
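To make that flow concrete: MCP tool invocations ride over JSON-RPC, so a scan request from the assistant looks roughly like this. The `tools/call` method comes from the MCP spec; the exact argument names for scan_code are my guess, not the published schema:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "scan_code",
    "arguments": {
      "language": "python",
      "code": "query = \"SELECT * FROM users WHERE id = \" + user_id"
    }
  }
}
```

The assistant fires this before it ever shows you the snippet, reads the findings that come back, and regenerates the code in the same turn if anything is flagged.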
With Claude Code, you can even set it up so that every edit is automatically scanned. Drop a CLAUDE.md file in your project that tells Claude to call scan_code before every write operation, and you've essentially got a security co-pilot riding shotgun on every line of AI-generated code.
This isn't a magic bullet. I want to be clear about that. No tool catches everything, and the security landscape for AI-generated code is evolving faster than any single solution can keep up with. But the shift from "scan after the fact" to "scan during generation" is significant. It's the difference between finding the fire after it's spread and catching the spark.
Things I'd Recommend Right Now
Whether you use kira-lite or not, here are some things I'd strongly suggest if your team is using AI coding assistants:
Don't trust, verify. Treat AI-generated code the same way you'd treat code from a new contractor who doesn't know your codebase. Review it. Question it. Don't assume it's handling edge cases or security concerns just because it compiles.
Add security context to your prompts. If you're asking an AI to write an API endpoint, explicitly say "include input validation, authentication checks, and parameterized queries." It won't add these by default.
Automate scanning in the loop. Whether it's through an MCP server like kira-lite, a SAST tool in your CI pipeline, or both, don't ship AI-generated code without automated security analysis. The volume of code being generated is too high for manual review alone.
Watch your dependencies. AI assistants love adding packages. Check that those packages actually exist, are maintained, and don't have known vulnerabilities. Package hallucination is a real attack vector now. Tools like kira-lite's dependency scanner can automatically check your lockfiles against CVE databases, which saves you from manually auditing every npm install your AI assistant decides to run.
Educate your team. The developers using AI tools need to understand that "working code" and "secure code" are not the same thing. This isn't about slowing people down. It's about building awareness so they know what to look for.
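The dependency advice above is easy to automate, too. Here's a hedged sketch, not kira-lite's implementation, that asks the public OSV.dev vulnerability database about a pinned npm package; the endpoint and request shape follow OSV's published v1 API, and the package in the comment is just a placeholder:

```python
# Hedged sketch: query OSV.dev for known vulnerabilities affecting a
# pinned npm dependency. The API endpoint is OSV's real public v1 API.
import json
import urllib.request

def osv_query(name: str, version: str) -> dict:
    """Ask OSV.dev for advisories affecting name@version (npm ecosystem)."""
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": "npm"},
    }).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

def advisory_ids(osv_response: dict) -> list[str]:
    """Pull the advisory IDs out of an OSV query response."""
    return [v["id"] for v in osv_response.get("vulns", [])]

if __name__ == "__main__":
    # Example only -- requires network access; package is a placeholder:
    # report = osv_query("lodash", "4.17.15")
    # print(advisory_ids(report))
    pass
```

Loop that over every name@version pair in your lockfile and you have a crude but honest answer to "did my AI assistant just add something with a known CVE?"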
The Road Ahead
I genuinely believe AI is going to transform how we build software. But we're in this weird in-between phase where the tools are powerful enough to generate massive amounts of code and not yet smart enough to make that code secure by default.
That gap is where the next wave of security work lives. It's where I'm spending all my time right now, and honestly, it's one of the most interesting problems I've worked on in my career.
If you're working in this space too, or if you're a developer trying to figure out how to use AI tools without accidentally introducing a bunch of CVEs, I'd love to chat. Hit me up on LinkedIn.
Will be back soon with more on this topic. There's a lot more to unpack, especially around how agent-based workflows are creating entirely new attack surfaces.
Keep hacking till then ();
