…and why “no-build, native ES modules” can quietly expand your attack surface if you’re not careful.
The seductive myth of a framework exodus
You’ve probably seen the headline:
“Developers are ditching frameworks for Vanilla JavaScript.”
It’s a compelling story. Everyone’s burnt out on React hooks discourse, upgrade churn, build tools, and megabyte bundles. Native browser APIs are better than ever. AI can scaffold code for you. And honestly, a clean index.html + a couple of <script type="module"> tags does feel refreshing after years of pipelines.
But there’s a difference between a cultural mood and an actual industry shift.
Yes, Vanilla JS is having a moment — especially in small projects, content-driven sites, and performance-obsessed setups. But the idea that frameworks are being abandoned misunderstands why they exist, how large teams work, and what actually happens when you throw unbundled ES modules into a real production environment.
This piece is a rebuttal to the “frameworks are over” narrative — and a warning: naive “no-build, native module” setups can introduce very real security risks if you don’t treat them with the same rigor you apply to your build chain.
Framework fatigue is real — but not a funeral
Framework fatigue isn’t imaginary. Developers are tired:
- Endless churn: React → hooks → Suspense → Server Components; Vue 2 → Vue 3; meta-framework of the month.
- Toolchain sprawl: bundlers, linters, formatters, dev servers, test runners, SSR layers, adapters.
- Onboarding overhead: “To render a button, first learn our 12-layer architecture.”
But fatigue doesn’t logically lead to “ditch frameworks.”
It leads to a different conclusion:
- We want leaner, clearer frameworks.
- We want better defaults, fewer layers, and tighter interoperability with the platform.
- We want to use frameworks where they’re justified, not everywhere by default.
Frameworks arose to solve genuine problems:
- Shared architectural patterns across teams
- State management for complex, long-lived UIs
- Routing, data fetching and caching
- SSR/SSG and streaming for performance and SEO
- Dev tooling (hot reload, type integration, testing)
- Predictable component models across browsers and devices
These problems didn’t vanish in 2025 just because native APIs got nicer.
Native APIs are powerful — but they don’t replace architecture
Modern browsers are legitimately impressive now:
- `fetch` is standard.
- `IntersectionObserver`, `MutationObserver`, `ResizeObserver`… all built in.
- ES modules, import maps, Web Components, Shadow DOM.
- CSS has grown its own superpowers: container queries, `:has()`, etc.
So, can you build rich interfaces with Vanilla JS + native APIs? Absolutely. For certain projects, it’s not just possible — it’s ideal.
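To make that concrete, here is a minimal sketch of what the platform gives you on its own; the `lazy-card` element and its behavior are hypothetical, not taken from any library:

```html
<script type="module">
  // A self-contained custom element: Shadow DOM for encapsulated markup and
  // styles, IntersectionObserver to fade content in when it scrolls into view.
  class LazyCard extends HTMLElement {
    constructor() {
      super();
      this.attachShadow({ mode: "open" }).innerHTML = `
        <style>:host { display: block; opacity: 0; transition: opacity 0.4s; }</style>
        <slot></slot>
      `;
    }
    connectedCallback() {
      const observer = new IntersectionObserver((entries) => {
        if (entries.some((entry) => entry.isIntersecting)) {
          this.style.opacity = "1"; // inline style overrides the :host rule
          observer.disconnect();
        }
      });
      observer.observe(this);
    }
  }
  customElements.define("lazy-card", LazyCard);
</script>

<lazy-card>No framework, no build step, fully encapsulated.</lazy-card>
```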
But there’s a hard line between capability and architecture.
Native APIs don’t give you:
- Opinionated app structure: Where does state live? How does it flow?
- Routing and data coordination: URL → data → UI → transitions.
- SSR/SSG strategies: SEO, initial render speed, personalization.
- Team-scale conventions: How do 20 developers avoid 20 patterns?
- Optimization heuristics: When and how to batch DOM updates?
- Cross-browser, cross-device guarantees: Polyfills and guardrails.
Web Components are a powerful primitive, but:
- They’re low level.
- They don’t prescribe state management.
- They don’t address streaming rendering or hydration.
- They don’t give you a cohesive ecosystem on their own.
That’s exactly the space frameworks live in.
Performance isn’t just “less JavaScript = faster”
The “framework tax” is a real thing. Over-engineered SPAs for news articles, massive bundles for static marketing pages — that’s all fair criticism.
But the argument “Vanilla JS is always faster because it’s less JavaScript” is too shallow.
Modern frameworks have quietly evolved:
- Tree-shaking and code splitting are default, not exotic.
- Compiler-driven frameworks (Svelte, Solid, Qwik, etc.) remove runtime overhead.
- Partial hydration and islands architecture reduce JS on the critical path.
- React Server Components eliminate whole classes of client-side code.
- Static extraction (Next, Astro, Remix-style) generates HTML first, enhances later.
Meanwhile, hand-rolled Vanilla JS can easily become:
- DOM-manipulation soup
- Hard-to-trace performance bugs
- Multiple competing patterns as teams grow
- A graveyard of ad-hoc optimizations
Performance is less about “framework vs no framework” and more about:
- Right tool for the job
- Doing less on the client
- Being thoughtful with data flow and rendering
- Avoiding unnecessary abstractions — including hand-made ones
A minimal framework with strong defaults can outperform a sprawling DIY vanilla architecture any day.
AI doesn’t kill frameworks — it upgrades them
Another claim is that AI assistants make frameworks unnecessary because they can generate Vanilla JS quickly.
The reality: AI thrives on structure and convention.
Frameworks provide:
- A common vocabulary (`useState`, `props`, `computed`, etc.)
- Predictable file layouts and idioms
- Stable patterns that AI can recognize and refactor
This means AI can:
- Scaffold React/Vue/Svelte components instantly
- Wire up routing, forms, and data fetching with standard patterns
- Refactor legacy patterns to modern idioms
- Suggest performance optimizations within established constraints
Contrast that with a large Vanilla JS codebase where every team does things slightly differently. AI has less structure to lean on and more room for subtle misinterpretation.
AI doesn’t erase the need for frameworks; it reduces the cognitive cost of using them well.
The “no-build, native ES module” dream — and its security shadow
Now to the interesting part: unbundled native ES modules in production.
The vision is seductive:
- No bundler.
- No massive build pipeline.
- Just `type="module"` and `import` from URLs or relative files.
- Microfrontends, decoupled modules, fast deploys.
This actually aligns nicely with the broader trend toward microfrontends and no-build architectures. But there’s a less discussed angle:
Every time you move responsibility from a controlled build pipeline to runtime resolution, you change your security story.
Let’s unpack some specific risks.
1. CDN sprawl and origin trust
Unbundled ES modules are often loaded from multiple origins:
<script type="module">import { something } from "https://cdn.example.com/some-lib/v1/index.js";
import { helper } from "https://another-cdn.com/helpers.js";
</script>
Risks:
- Expanded trust boundary: Every CDN, subdomain and path becomes part of your trusted compute surface.
- Compromised third party: A single compromised CDN/library update can inject malicious logic into your app without any build-time visibility.
- Dynamic imports: `import()` used with URLs built at runtime can be abused for injection if not tightly controlled (see the sketch after this list).
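If you do rely on `import()`, one way to keep it tightly controlled is an explicit allowlist. The module names and URLs below are hypothetical, a sketch rather than a prescription:

```html
<script type="module">
  // Hypothetical allowlist: only known, pinned module URLs can be loaded at runtime.
  const ALLOWED_MODULES = new Map([
    ["charts", "https://cdn.example.com/charts/1.4.2/index.js"],
    ["editor", "https://cdn.example.com/editor/2.0.0/index.js"],
  ]);

  async function loadFeature(name) {
    const url = ALLOWED_MODULES.get(name);
    if (!url) {
      // Refuse anything unknown; never build the URL from the requested name.
      throw new Error(`Unknown module: ${name}`);
    }
    return import(url);
  }

  // The argument comes from application logic, never straight from user input.
  const charts = await loadFeature("charts");
</script>
```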
Framework setups usually centralize dependencies behind a lockfile and a build step; shifting to runtime resolution can make that control much more diffuse.
2. Weak or absent integrity guarantees
Without a build step, teams often skip Subresource Integrity (SRI) and tight Content Security Policy (CSP) configurations.
- No SRI on module imports: If you’re pulling modules directly from a CDN without SRI hashes, you’re trusting that resource to remain safe and unchanged, indefinitely.
- Loose CSP: If your CSP allows `script-src https://cdn.example.com` broadly, a compromised file anywhere on that CDN host can become executable in your origin.
Bundled builds, by contrast:
- Embed fixed, fingerprinted assets.
- Are often hosted on the same origin.
- Can be locked down with stricter CSP (`script-src 'self'` plus hashes).
With native ES modules loaded directly from multiple origins, you must be deliberate about:
- SRI attributes on `<script>` and `<link>` where possible.
- Tightly scoped CSP, ideally with import maps limiting hostnames and routes (a sketch of both follows this list).
- No dynamic construction of module URLs from user-controlled values.
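For example, here is a hedged sketch of the first two points, assuming a single named CDN; the URL, version, and hash are placeholders you would replace with your own:

```html
<!-- Usually set as an HTTP response header; shown as a meta tag for brevity. -->
<meta http-equiv="Content-Security-Policy"
      content="script-src 'self' https://cdn.example.com;">

<!-- The integrity hash pins the exact bytes of the remote module. -->
<script type="module"
        src="https://cdn.example.com/some-lib/1.4.2/index.js"
        integrity="sha384-REPLACE_WITH_THE_REAL_HASH"
        crossorigin="anonymous"></script>
```

Keep in mind that SRI only covers the file it is attached to; modules that this entry point statically imports are fetched without that guarantee, so the shallower the remote dependency graph, the easier it is to pin.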
3. Dependency drift and silent changes
With unbundled ES modules:
- It’s easy to reference remote versions like `/latest/` or `/v1/`.
- It’s tempting to “just point to the CDN” instead of pinning versions and building locally.
Consequence: you lose deterministic builds.
- A library can introduce a breaking change or a vulnerable version.
- That change is picked up immediately by users loading your app.
- No CI, no build, no review — just silent behavioral changes in production.
Framework-driven bundlers are usually wired into CI:
- Dependencies are pinned in a lockfile.
- PRs update versions explicitly.
- Builds are tested before deployment.
If you’re going no-build with ES modules, you need equivalent rigor in a different place:
- Strong version pinning (no `/latest`), as in the sketch after this list.
- A mirrored/internal registry rather than live CDNs where possible.
- Monitoring and alerts for changes in remote resources.
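A sketch of what that pinning can look like with an import map; the package names, versions, and hostnames here are hypothetical:

```html
<script type="importmap">
{
  "imports": {
    "charts": "https://cdn.example.com/charts/1.4.2/index.js",
    "helpers": "https://modules.internal.example/helpers/3.2.1/index.js"
  }
}
</script>

<script type="module">
  // Application code imports by bare specifier; the exact version lives in
  // one reviewed place above, so a bump shows up as a one-line diff.
  import { renderChart } from "charts";
  renderChart(document.querySelector("#sales"));
</script>
```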
4. CSP, import maps, and attack surface creep
Import maps and ES modules give you new power — and new footguns.
Consider an import map like this:
<script type="importmap">
{
"imports": {
"lib": "https://cdn.example.com/lib/v1/index.js"
}
}
</script>
If an attacker finds any injection vector that lets them modify this block (or inject a new one), they can:
- Redirect imports to a malicious endpoint.
- Hijack core logic without touching your original source.
CSP can help, but:
- Many apps use overly permissive CSPs to “just get things working.”
- Few teams revisit CSP once shipped.
- Adding microfrontends/modules often leads to “temporarily relaxing CSP” that becomes permanent.
No-build architectures demand more disciplined CSP than many teams currently practice, not less.
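One discipline worth considering, sketched here under the assumption that your server can mint a fresh nonce per response: require a nonce on every script element, so an injected import map that lacks it should simply be refused. The nonce value below is a placeholder.

```html
<!-- Normally an HTTP header; the nonce must be regenerated on every response. -->
<meta http-equiv="Content-Security-Policy"
      content="script-src 'self' 'nonce-PLACEHOLDER' https://cdn.example.com;">

<!-- Carries the expected nonce, so the browser executes it. -->
<script type="importmap" nonce="PLACEHOLDER">
{
  "imports": {
    "lib": "https://cdn.example.com/lib/1.0.0/index.js"
  }
}
</script>
```

Import maps are script elements, so they are subject to the same script-src checks as other inline scripts, which is exactly the behavior you want here.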
5. Observability blind spots
When everything goes through a build pipeline:
- You can statically analyze bundles.
- You can scan for vulnerable dependencies.
- You can trace what code ends up in which asset.
With unbundled ES modules:
- Code is fetched from many paths at runtime.
- The effective dependency graph is constructed in the browser.
- Security scanners and SCA tools may miss dynamic imports or CDN-pulled modules.
Unless you intentionally:
- Log module usage,
- Track dependency URLs,
- And incorporate runtime scanning,
…you’ll likely lose visibility into what’s actually executing in the client.
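Even a small amount of client-side telemetry closes part of that gap. A rough sketch, assuming a hypothetical /telemetry/loaded-modules endpoint on your own origin, that records which script URLs the browser actually fetched:

```html
<script type="module">
  // Report every script-like resource the page fetches, including modules
  // pulled in at runtime, so the list can be compared against an expected
  // set of URLs on the server side.
  const seen = new Set();

  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      if (/\.m?js(\?|$)/.test(entry.name) && !seen.has(entry.name)) {
        seen.add(entry.name);
        // Hypothetical reporting endpoint; swap in whatever telemetry you use.
        navigator.sendBeacon("/telemetry/loaded-modules", entry.name);
      }
    }
  }).observe({ type: "resource", buffered: true });
</script>
```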
Vanilla JS vs frameworks is the wrong fight
Stepping back, the real story isn’t:
Frameworks bad, Vanilla JS good.
It’s more:
We abused frameworks where they weren’t needed,
frameworks grew heavy in response to real complexity,
the platform has caught up,
and now we have more options than ever.
Vanilla JS (plus modern browser APIs) is fantastic for:
- Content-heavy sites with light interactivity
- Embeddable widgets and micro-interactions
- Highly performance-sensitive surfaces
- Teams that genuinely value minimalism and know the tradeoffs
Frameworks remain essential for:
- Complex, long-lived products
- Multi-team frontends
- Rich stateful applications
- SSR/SSG-heavy experiences
- Strong conventions and onboarding
- Mature tooling, testing and observability pipelines
And the security story is this:
- Bundled, framework-driven pipelines give you a central choke point to control, scan, and reason about what’s shipped.
- Unbundled, native ES-module architectures give you flexibility and simplicity — but only if you replace build-time safety with runtime discipline.
Without that discipline, your “no-build, framework-free” setup can quietly become more fragile and more exposed than the “bloated” framework app you were trying to escape.
The real reset: intentional choices, not pendulum swings
The framework hangover is actually a wake-up call, but not the one the slogans suggest.
It’s not:
Rip out your framework and go raw Vanilla JS.
It’s closer to:
Use frameworks where their structure, tooling, and ecosystem bring real value.
Use the platform directly where it’s enough.
And wherever you land — with or without a build step — treat security, observability and maintainability as first-class concerns.
In 2026, writing Vanilla JS doesn’t mean you’re going backwards.
And using a framework doesn’t mean you’re stuck in the past.
Progress isn’t about picking a side in “Vanilla vs Frameworks.”
It’s about mastering both and being honest about the tradeoffs of each, including the security ones.
