If you build tools for AI agents, you need to get used to the feeling of your work becoming obsolete before you’ve even finished writing the documentation.
Just a few weeks ago, I drafted a post about an incredible new breakthrough: an “Intelligent MCP Server” for Google Drive. It was going to revolutionize how agents interact with the filesystem, bringing human-like path intuition (/Client/Budgt automatically resolving to the right folder) and cumulative intelligence.
I never published that draft. Why? Because the entire architectural paradigm shifted underneath my feet. The MCP server approach is already looking like a legacy system.
The Problem with the “Plugin” Paradigm
The Model Context Protocol (MCP) was a massive step forward. It standardized how LLMs communicate with external tools. But it inherently treats AI agents like standard web applications communicating with a REST API backend.
When an agent needs to access Google Drive via an MCP server, it hits a rigid interface. If it gets a 403 Forbidden error, or if a folder structure changes, it often hits a wall. We tried to build “self-healing” logic into the MCP server itself, but that only made the server increasingly bloated and complex.
We were trying to teach the tool how to be smart, instead of teaching the agent how to use the tool.
The Shift to Portable Agent Skills
This brings us to the new paradigm: Portable Agent Skills.
Instead of a bulky Python MCP server running as a background daemon, Google Workspace integration is now handled by a single, blazing-fast Rust CLI (gws) paired with markdown-based Agent Skills.
The gws CLI doesn’t even have hard-coded commands. It reads Google’s API Discovery Service at runtime and builds its command surface dynamically. The Agent Skills are literally just SKILL.md files that teach the AI how to use those gws bash commands.
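To make the dynamic-surface idea concrete, here is a minimal sketch of the mechanism: deriving subcommands from a discovery-style document instead of hard-coding them. The JSON below is a tiny local stand-in for Google’s real discovery document, and the printed command names are illustrative, not actual gws output.

```shell
# Tiny local stand-in for a Google API discovery document.
cat > discovery.json <<'EOF'
{"resources": {"files": {"methods": {"create": {}, "list": {}}}}}
EOF

# Derive the command surface from the document rather than hard-coding it.
# Prints one line per derived subcommand, e.g. "gws drive files create".
python3 - <<'EOF'
import json

doc = json.load(open("discovery.json"))
for resource, spec in doc["resources"].items():
    for method in spec["methods"]:
        print(f"gws drive {resource} {method}")
EOF
```

When the upstream API gains a method, a CLI built this way picks it up on the next run with no recompile.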
Here is why this seemingly simple shift changes everything:
1. The Agent Owns the Logic, Not the Tool
In the old MCP approach, we put the business logic inside the server. With the new gws skills, the agent just gets a raw CLI. The knowledge of how to use the API lives purely in plain text, inside a SKILL.md file. If the agent needs to upload a document, it reads the skill, learns the gws drive files create syntax, and executes the bash command itself. It doesn’t rely on a bloated middleware server to hold its hand.
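As a sketch, a skill file for the upload case might look something like this. Only gws drive files create comes from this post; the flag names and the retry hint are assumptions for illustration. Writing it via a heredoc keeps the example runnable:

```shell
# Hypothetical SKILL.md: teaches the agent Drive uploads in plain text.
# The flags (--file, --parent) are illustrative assumptions, not
# documented gws syntax.
cat > SKILL.md <<'EOF'
# Skill: Upload a file to Google Drive

When asked to upload a local file, run:

    gws drive files create --file <local-path> --parent <folder-id>

If the command returns 403 Forbidden, re-authenticate and retry once.
EOF
```

Note that the skill is pure prose plus example invocations: there is no schema, no compiled endpoint, nothing to redeploy when the instructions change.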
2. Infinite Autonomous Extension
This is the real killer feature. With a rigid MCP server, if you wanted the agent to perform a new multi-step action (e.g., “Find all emails from Steve and save their attachments to Drive”), a human developer had to write and compile a new endpoint in the server’s backend.
With the CLI + Skills paradigm, the agent can write its own bash or Python scripts that orchestrate gws commands, and save them as permanent new “recipes”. The system is infinitely extensible by the agent itself, without a human developer ever touching the core Rust binary.
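A saved recipe for the “emails from Steve” example might look like the script below. Everything here is hypothetical: the gmail/drive subcommands and flags are assumptions, and a stub shell function stands in for the real gws binary so the sketch runs as-is.

```shell
# Stub standing in for the real gws binary; it fakes two attachment
# names for the gmail query and echoes every other invocation.
gws() {
    case "$1" in
        gmail) printf 'report.pdf\nbudget.xlsx\n' ;;
        *) echo "stub: gws $*" ;;
    esac
}

# Hypothetical recipe: save Steve's attachments to a Drive folder.
# Subcommands and flags are illustrative, not documented gws syntax.
# Prints one stub line per attachment.
for attachment in $(gws gmail messages attachments --from steve); do
    gws drive files create --file "$attachment" --parent inbox-archive
done
```

The point is the shape, not the syntax: two gws invocations glued together with plain shell become a reusable multi-step action, and the agent can author and save such scripts without anyone recompiling the CLI.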
3. Context De-pollution
An MCP server exposes all its tools all the time. A Skill is loaded dynamically only when the agent realizes it needs Google Drive access. This keeps the agent’s context window clean and laser-focused.
The Takeaway for Builders
The shift from the google_drive_mcp draft to the current google-drive-mcp skill happened in a matter of weeks.
If you are building hard-coded integrations for AI agents today, pause and ask yourself: Are you building a rigid tool that expects a dumb client? Or are you building a flexible skill that empowers an intelligent, autonomous worker?
The era of building for AI is ending. We are now in the era of teaching AI to build for itself. If your architecture doesn’t support that, it’s already a legacy system.
Sebastian Schkudlara
The Unix Agent Paradigm: Why We Must Kill the AI Daemon