security, ai agents, engineering

The Sandbox Illusion: Why Local AI Agents Need Kernel-Level Isolation

Sebastian Schkudlara · Mar 13, 2026 · 2 min read

We keep pretending that wrapping an LLM in a thin API layer and telling it “don’t touch the filesystem” constitutes real security.

It doesn’t.


The API Sandbox Is Theater

Here’s a confession from our own commit logs on switchAILocal: we were auto-injecting --include-directories=/ into the Gemini CLI, effectively hardcoding root filesystem access just to make the tool functional.

Think about what that means. We punched a hole clean through our own sandbox before it was even fully built — not because we were being reckless, but because application-layer sandboxing is fundamentally incompatible with operational utility.

The moment you need the agent to actually do something useful — read a file, invoke a CLI tool, traverse a directory — you have to relax your constraints. Every relaxation is a breach. Relying on brittle CLI flags to contain a non-deterministic reasoning engine is pure security theater.
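To make that concrete, here is a minimal, hypothetical sketch of the kind of application-layer path check such wrappers rely on — and the single symlink that defeats it. The work root and `wrapper_allows` helper are illustrative inventions, not switchAILocal's actual code:

```python
import os
import tempfile

# Hypothetical application-layer "sandbox": allow a path only if it
# lexically sits under the agent's designated work root.
def wrapper_allows(work_root: str, path: str) -> bool:
    return os.path.abspath(path).startswith(work_root + os.sep)

work_root = tempfile.mkdtemp(prefix="agent-work-")

# One symlink created inside the work root is enough to escape it:
# abspath() collapses ".." but does NOT resolve symlinks.
escape = os.path.join(work_root, "escape")
os.symlink("/etc", escape)

target = os.path.join(escape, "hostname")
print(wrapper_allows(work_root, target))  # True: the wrapper is satisfied
print(os.path.realpath(target))           # the kernel resolves it to /etc/hostname
```

A kernel-enforced mount namespace has no such gap: the symlink's target simply does not exist inside the agent's view of the filesystem.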


The Environment Is Escalating

We are not dealing with a static threat model. Transformers already write and execute programs natively, and inference latency keeps falling year over year. Terminal emulators are mutating into rich, hyperlink-aware, multi-modal environments. The attack surface is expanding faster than our security posture is adapting.

You cannot contain a system that reasons about its environment by asking it nicely via a system prompt or a restricted API endpoint.

When the machine writes the code, compiles it, and executes it in milliseconds, your application-level checks are invisible to the actual execution flow. The agent isn’t constrained by your API wrapper. It’s constrained only by what the OS itself permits.
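A tiny demonstration of that gap, assuming the "sandbox" is an in-process guard (a deliberately crude stand-in for any application-level check): the guard binds the wrapper's own interpreter, but a spawned child process never sees it.

```python
import builtins
import subprocess
import sys
import tempfile

# A file outside any notional sandbox boundary.
secret = tempfile.NamedTemporaryFile(mode="w", suffix=".txt", delete=False)
secret.write("outside the sandbox")
secret.close()

# Hypothetical in-process guard: block every open() call in this interpreter.
_real_open = builtins.open
def guarded_open(*args, **kwargs):
    raise PermissionError("blocked by wrapper")
builtins.open = guarded_open

# The guard constrains this interpreter...
try:
    open(secret.name)
except PermissionError:
    print("wrapper blocked the read")

# ...but a child process is a fresh interpreter; the check does not exist there.
out = subprocess.run(
    [sys.executable, "-c", f"print(open({secret.name!r}).read())"],
    capture_output=True, text=True,
)
print(out.stdout.strip())  # outside the sandbox

builtins.open = _real_open  # restore for anything running after this sketch
```

The only checks that follow the agent across `fork`/`exec` are the ones the operating system enforces on the process itself.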


Kernel-Level Isolation or Nothing

It is time to stop treating local AI agents as benign productivity helpers. They are untrusted execution environments. Full stop.

The correct answer is mandatory kernel-level isolation:

  • Hard filesystem namespaces that prevent the agent from traversing outside its designated working root, enforced by the OS, not by a Python wrapper.
  • Network stack constraints applied directly to the agent’s process, using architectures like libp2p Circuit Relay v2 with real NAT traversal (the same approach used in traylinx-stargate) — not just port-blocking rules that a spawned subprocess can ignore.
  • Ephemeral process trees with automatic kill switches at the kernel level if the agent attempts to escape its designated resource bounds.
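As one concrete sketch of that last bullet — and evidence that this is ordinary, available tooling — the snippet below uses POSIX rlimits (via Python's `resource` module, Unix-only) so that the kernel itself kills a runaway child, no matter what the child's code intends. The 1-second CPU cap is purely for demonstration; a production setup would layer mount and network namespaces (e.g. bubblewrap or nsjail) on top:

```python
import resource
import subprocess
import sys

def confine() -> None:
    # Runs in the child between fork and exec. These limits are enforced
    # by the kernel: exceed the CPU cap and the process is signalled,
    # regardless of what the agent's own code does.
    resource.setrlimit(resource.RLIMIT_CPU, (1, 1))    # 1 second of CPU, hard cap
    resource.setrlimit(resource.RLIMIT_FSIZE, (0, 0))  # forbid writing files entirely

# A "runaway agent": an infinite loop no application wrapper gets a chance to stop.
proc = subprocess.run(
    [sys.executable, "-c", "while True: pass"],
    preexec_fn=confine,  # Unix-only
)
print(proc.returncode)  # negative: the child died to a kernel-delivered signal
```

Note what is absent here: no prompt, no API policy, no cooperation from the child. The constraint lives in the kernel's view of the process, which is exactly where it belongs.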

Anything less than treating your local AI agent as a hostile tenant on your own hardware is not a security posture. It's wishful thinking.

The good news? The tooling to do this correctly already exists. The will to implement it is what’s been missing.

Bridging Architecture & Execution

Struggling to implement Agentic AI or Enterprise Microservices in your organization? I help CTOs and technical leaders transition from architectural bottlenecks to production-ready systems.

View My Full Profile & Portfolio
Hi, I am Sebastian Schkudlara, the author of Jevvellabs. I hope you enjoy my blog!