MCP is becoming the universal adapter for AI tooling — drop a config file in the right place and every compliant agent instantly gains the context it needs to work effectively in your project.
Model Context Protocol (MCP) is rapidly becoming the connective tissue of the modern AI development ecosystem. As AI agents proliferate across editors, terminals, and cloud environments, MCP provides a standardized way for those agents to reach out to external tools, services, and context providers — turning a general-purpose AI assistant into a deeply integrated development partner.
At its core, MCP defines how an AI agent discovers and communicates with MCP servers: lightweight processes or remote endpoints that expose capabilities such as code generation assistance, documentation lookup, live app inspection, or platform-specific tooling. Whether an agent is embedded in a code editor, a CLI, or a browser-based IDE, the protocol remains the same. What developers should know is where each AI agent looks to find its MCP server configurations — let's dive in.
Why MCP Tooling Matters
Without a shared tooling protocol, every AI agent would need its own bespoke integration layer for every tool it wanted to use. MCP solves this by acting as a universal adapter. The benefits are significant:
- Consistency: developers describe their tooling once and reuse that configuration across multiple agents.
- Composability: multiple MCP servers can be wired up simultaneously, each handling a different domain (e.g., one for platform documentation, one for live app introspection).
- Portability: configurations can be committed to source control and shared across teams, ensuring every developer — and every agent — operates with the same capabilities.
- Extensibility: new tools and platforms can expose MCP servers without waiting for first-party agent support.
For platform maintainers, shipping an MCP server means that any compliant AI agent can immediately become a first-class citizen in their ecosystem. Agents don't need to guess — MCP tools provide the grounding needed to bring contextual AI to developers.
Standardized MCP Server Configurations
MCP server configurations follow a consistent logical structure regardless of the agent consuming them. Each entry declares either a remote server (reachable via a URL) or a local server (launched via a command and arguments). This maps cleanly to two deployment patterns over standard transport protocols:
- Remote MCP servers are always-on endpoints, typically hosted by a platform vendor. They require no local installation and are ideal for documentation, code intelligence, and cloud-connected tooling.
- Local MCP servers are spawned on-demand by the agent. They run as child processes on the developer's machine and are well-suited for tasks that require access to the local environment, such as inspecting a running app or reading from the local file system.
Configuration Scope
MCP configurations can also be scoped at two levels:
- Project level — committed inside the repository (e.g., .github/, .cursor/, .codex/). Every developer who clones the project gets the same agent capabilities automatically. This is the recommended approach for team and open-source projects.
- Developer level — stored in user-level configuration files outside the repository (e.g., global settings in an IDE or a user-scoped config directory). Useful for personal tooling preferences that shouldn't be enforced across the team.
Uno Platform MCP Tools
Uno Platform is the flexible open source stack for building modern .NET cross-platform apps, complete with rich design & AI tools. For developers to have confidence in agentic workflows, AI agents should be grounded in the latest documentation and have the ability to interact with a running app.
Uno Platform MCP tools illustrate both remote and local patterns:
- uno — a remote MCP server hosted at https://mcp.platform.uno/v1. Provides platform documentation, API references, and code generation assistance.
- uno-app — a local MCP server launched via the Uno Dev Server. Connects to a running Uno application and enables live app inspection, diagnostics, and hot-reload-aware tooling.
In JSON format (used by most editor-based agents):
{
"mcpServers": {
"uno": {
"url": "https://mcp.platform.uno/v1"
},
"uno-app": {
"command": "dnx",
"args": ["-y", "uno.devserver", "--mcp-app"]
}
}
}

In TOML format (used by CLI-oriented agents like Codex):

[mcp_servers.uno]
url = "https://mcp.platform.uno/v1"

[mcp_servers.uno-app]
command = "dotnet"
args = ["dnx", "-y", "uno.devserver", "--mcp-app"]
The logical intent is identical — only the serialization format and file locations differ. Once configured, most IDEs list the Uno Platform MCP tools in their UI; in CLI agents, the /mcp command shows the configured servers. Let's walk through how popular AI agents find these configurations.
GitHub Copilot
Configuration file: mcp.json · Locations: .github/, .vs/, .vscode/
GitHub Copilot supports MCP server configurations scoped to a workspace, searching for mcp.json in several well-known directories depending on the environment:
- .github/mcp.json — the canonical project-level location, recognized by Copilot across GitHub's tooling surface.
- .vscode/mcp.json — picked up when working inside Visual Studio Code with the Copilot extension.
- .vs/mcp.json — picked up when working inside Visual Studio (Windows).
This multi-location support means teams can target different configurations to different editor environments within the same repository, or maintain a single canonical file in .github/ that works everywhere. Committing mcp.json to a Uno Platform repository ensures every contributor working with GitHub Copilot automatically has access to Uno Platform's documentation and live app tooling — no manual setup required.
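As a sketch, a project-level .github/mcp.json for a Uno Platform repository can reuse the same mcpServers structure shown earlier (exact schema details can vary slightly between Copilot surfaces, so verify against your environment):

```json
{
  "mcpServers": {
    "uno": {
      "url": "https://mcp.platform.uno/v1"
    },
    "uno-app": {
      "command": "dotnet",
      "args": ["dnx", "-y", "uno.devserver", "--mcp-app"]
    }
  }
}
```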
Claude Code
Configuration file: mcp.json · Location: .claude/
Claude Code is Anthropic's CLI-based agentic coding tool. It reads standard MCP configurations from an mcp.json file placed inside a .claude/ directory at the project root.
Registering Uno Platform MCP tools globally with Claude Code is straightforward:
claude mcp add --scope user --transport http uno https://mcp.platform.uno/v1
claude mcp add --scope user --transport stdio "uno-app" -- dotnet dnx -y uno.devserver --mcp-app
Whether registered globally as above or committed as .claude/mcp.json in the project, Claude Code automatically attaches the Uno MCP servers when invoked inside the project directory. The remote uno server gives Claude Code access to platform knowledge and XAML/C# code generation assistance, while the local uno-app server lets it interact with running Uno applications during agentic sessions.
Codex
Configuration file: config.toml · Location: .codex/
OpenAI's Codex CLI agent uses TOML rather than JSON for its configuration. MCP servers are declared inside a config.toml file stored in a .codex/ directory at the project root.
Registering Uno Platform MCP tools with Codex:
codex mcp add "uno" --url "https://mcp.platform.uno/v1" codex mcp add "uno-app" -- dotnet dnx -y uno.devserver --mcp-app
The TOML structure maps directly to the same logical model as the JSON equivalents — each [mcp_servers.<name>] block declares either a URL-based remote server or a command-based local server. Codex spawns the local uno-app server on demand and connects to the remote uno endpoint at the start of each session. Committing .codex/config.toml makes the Uno Platform MCP toolset available to every team member using Codex, with zero additional configuration required.
Cursor
Configuration file: mcp.json · Location: .cursor/
Cursor stores project-level MCP configurations in a .cursor/mcp.json file at the repository root. The format mirrors the JSON structure used by other editor-based agents, making it straightforward to keep configurations in sync across tools.
Cursor also supports a user-level MCP configuration through its settings UI, which applies globally across all projects. The project-level .cursor/mcp.json takes precedence when present — the right choice for team-shared tooling. With these servers registered, Cursor's AI features — including inline completions, chat, and Composer — can all leverage Uno Platform context when working inside a Uno Platform project.
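Following the same per-agent pattern the article uses elsewhere, a project-level .cursor/mcp.json wiring up both Uno Platform servers would look like this (the dotnet dnx launch command mirrors the entries shown earlier):

```json
{
  "mcpServers": {
    "uno": {
      "url": "https://mcp.platform.uno/v1"
    },
    "uno-app": {
      "command": "dotnet",
      "args": ["dnx", "-y", "uno.devserver", "--mcp-app"]
    }
  }
}
```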
Antigravity
Configuration file: mcp_config.json · Location: project root or agent workspace directory
Google's Antigravity agent uses an mcp_config.json file for its MCP server declarations. Unlike the agents above, Antigravity negotiates MCP roots a little differently (note the --force-roots-fallback argument in the local server entry below), so it's worth checking the agent's documentation for any schema nuances.
{
"mcpServers": {
"uno": {
"url": "https://mcp.platform.uno/v1"
},
"uno-app": {
"command": "dotnet",
"args": [
"dnx",
"-y",
"uno.devserver",
"--mcp-app",
"--force-roots-fallback"
]
}
}
}
With the JSON configuration in place, Antigravity gains access to Uno Platform's full MCP surface — enabling it to answer platform-specific questions, generate idiomatic Uno/XAML code, and interact with a live running application during an agentic workflow.
Summary
The table below captures where each agent looks for its MCP configuration at a glance:
| Agent | File | Location | Format |
|---|---|---|---|
| GitHub Copilot | mcp.json | .github/, .vscode/, .vs/ | JSON |
| Claude Code | mcp.json | .claude/ | JSON |
| Codex | config.toml | .codex/ | TOML |
| Cursor | mcp.json | .cursor/ | JSON |
| Antigravity | mcp_config.json | project root or workspace | JSON |
While file names and paths differ slightly, the underlying concepts of MCP configuration are consistent: drop a config file in the right place, declare your MCP servers, and every supported AI agent immediately gains the tools it needs to work effectively in your project.
As the MCP ecosystem matures, configuration-as-code for AI tooling is set to become routine. AI agents in IDEs and CLIs may have nuanced ways to read configurations, but the overall standardization of MCP settings is a clear win for developers everywhere.