
What Are MCP Resources? (And When to Use Them)

Apigene Team

The Model Context Protocol defines four primitives for connecting AI agents to external data: tools, resources, prompts, and sampling. Most developers only use tools. That's a problem, because tools alone force your agent to burn context tokens on data that should be read passively, not fetched through function calls. MCP resources solve this by giving models direct, read-only access to structured data like files, database schemas, and configuration objects, all without executing a single action.

Key Takeaways

For busy engineers building MCP integrations, here's what 56 developer discussions taught us:

  • Resources are the most misunderstood MCP primitive. Developers consistently confuse them with tools, and multiple SDK implementations still have ergonomics gaps around resource templates.
  • Token burn from tool-only setups is real. One team reported running 6-7 separate MCP servers and hitting a compounding overhead wall from context bloat.
  • 98% of MCP tool descriptions don't tell agents when to use them, according to an analysis of 78,849 tool descriptions, making the tools-vs-resources distinction even more critical.
  • Dynamic tool loading can save 12,000+ tokens per session. Progressive disclosure patterns, where tools and resources load on-demand instead of all-at-once, are becoming standard in production setups.

What Are MCP Resources?

MCP resources are read-only, URI-addressable data objects that an MCP server exposes to clients. Unlike tools, which perform actions and return results, resources in MCP provide passive context that models can read without triggering side effects. Think of resources as the "files" your AI agent can browse, while tools are the "functions" it can execute.

Each MCP resource is identified by a unique URI (like file:///project/schema.sql or config://app/settings). Clients discover available resources through a resources/list request, then fetch specific content with resources/read. The content can be text (UTF-8 strings) or binary (base64-encoded), and a single resource can contain multiple content items.
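The discovery flow can be sketched at the wire level. These JSON-RPC shapes follow the MCP specification; the URI and file contents are illustrative:

```python
# Client asks the server what resources exist ...
list_request = {"jsonrpc": "2.0", "id": 1, "method": "resources/list"}

# ... then fetches one by URI.
read_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "resources/read",
    "params": {"uri": "file:///project/schema.sql"},
}

# A read result carries one or more content items; text content is a UTF-8
# string, binary content would use a base64-encoded "blob" field instead.
read_result = {
    "contents": [
        {
            "uri": "file:///project/schema.sql",
            "mimeType": "text/plain",
            "text": "CREATE TABLE users (id INTEGER PRIMARY KEY);",
        }
    ]
}
```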

This design means an agent working with your codebase can read a database schema as a resource before writing a migration with a tool. The schema never bloats the tool response. It sits in context only when needed.

Why Are MCP Resources Read-Only?

The read-only constraint is intentional. Resources are designed for a specific purpose: giving models context without side effects. When a client reads a resource, nothing changes on the server. No database rows are modified, no files are written, no API calls fire.

This separation matters for three reasons. First, read-only data is cacheable. Clients can subscribe to resource updates and re-fetch only when the server signals a change, which keeps token costs predictable. Second, it simplifies security. Granting an agent read access to a configuration file is a different trust decision than granting write access. Third, it makes the protocol composable, because clients negotiate which primitives to use during capability exchange, so a server can expose resources without also granting tool access.

Resource Templates

Not all resources can be enumerated upfront. A database with millions of rows can't list every possible query result as a separate resource. That's where MCP resource templates come in.

Resource templates use URI patterns with placeholders (like db://tables/{table_name}/schema) to define dynamic resources. The client fills in the template parameters, and the server generates the content on demand. This pattern is especially useful for MCP server resources that need to expose large or user-specific datasets without pre-computing every possible response.

One developer on the MCP subreddit described the workflow as a "2-step process: tool fetches an artifact, writes to disk, returns a templated resource URI, so a client can later fetch it on-demand." This pattern keeps tool responses lightweight while still making the full data accessible.

Stop Building MCP Integrations From Scratch.

  • Any API, one line of code — connect to ChatGPT, Claude, and Cursor without writing custom MCP servers
  • Visual UI in the chat — render interactive components, not just text dumps. Charts, forms, dashboards.
  • 70% fewer tokens — dynamic tool loading and output compression so your agents stay fast and cheap

MCP Resources vs Tools: When to Use Each

The confusion between MCP resources vs tools is the most common question developers ask about the protocol. The short answer: resources are for reading, tools are for doing. But the practical decision is more nuanced.

| Aspect | MCP Resources | MCP Tools |
| --- | --- | --- |
| Purpose | Provide context data | Execute actions |
| Side effects | None (read-only) | Yes (can modify state) |
| Initiated by | Client/user requests data | Model decides to call |
| Caching | Subscribable, cacheable | Not cacheable |
| Token impact | Predictable (known size) | Variable (depends on output) |
| Auth model | Read-only permissions | Action-level permissions |
| Use case | DB schemas, config files, docs | API calls, file writes, queries |

Here's the decision rule: if your data exists before the conversation starts and doesn't change based on the model's actions, it's a resource. If the model needs to trigger computation, write data, or call an external API, that's a tool.

What the Community Reports

We analyzed over 30 discussions where MCP developers shared their integration experiences, and the resources-vs-tools confusion surfaced repeatedly.

The core issue isn't conceptual, it's practical. One developer trying to build a Java MCP server explained the problem clearly: "Resources at first is kind of difficult to understand, and with the resource templates even more." The confusion deepens because most MCP clients (Claude Desktop, Cursor, VS Code) surface tools prominently in their UI but handle resources inconsistently.

Several teams reported defaulting to tools for everything, then discovering context-window blowups when their tool responses returned large payloads. A database query that returns 50 rows as a tool response burns far more tokens than exposing the same data as a cached resource that the model reads once.

| Finding | What developers reported | Source |
| --- | --- | --- |
| Tool-only setups cause token burn | "Ran into the same wall running 6-7 separate servers" | 8 upvotes, r/mcp |
| Resources are confusing to model | Developers unsure when clients will auto-read resources vs ignore them | 5+ threads |
| SDK ergonomics gaps | "There's no easy way in the API to access it" for resource templates | r/mcp, Java SDK thread |
| Client support is inconsistent | Cursor, Claude Desktop, and VS Code handle resources differently | 4+ threads |
| Tool descriptions are poor | "98% don't tell AI agents when to use them" | 78,849 descriptions analyzed |

The practical takeaway: if you're building an MCP server and find yourself returning large, static data blobs from tool calls, refactor those into resources. Your token budget will thank you.

MCP Prompts: Reusable Interaction Templates

MCP prompts are server-defined templates that guide how agents use tools vs resources within a specific workflow. Unlike resources (which provide data) and tools (which execute actions), prompts define how the model should interact with the server's capabilities.

A prompt template includes a name, description, and optional arguments. When a client retrieves a prompt, the server returns a structured message sequence, including system instructions, user context, and references to specific resources or tools. The client then uses this as the foundation for a conversation.

For example, an MCP server for code review might expose a prompt called review-pull-request that takes a repo and pr_number as arguments. When invoked, it returns a message sequence that pulls the diff as a resource, instructs the model on review criteria, and specifies which tools (like post-comment) are available for the task.
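At the protocol level, retrieving that prompt returns a structured message sequence. These shapes follow the MCP specification; the repo, criteria text, and `diff://` resource URI are illustrative:

```python
get_request = {
    "jsonrpc": "2.0",
    "id": 5,
    "method": "prompts/get",
    "params": {
        "name": "review-pull-request",
        # Per the spec, prompt argument values are strings.
        "arguments": {"repo": "acme/api", "pr_number": "42"},
    },
}

# The server answers with messages the client feeds to the model.
get_result = {
    "description": "Review a pull request against team criteria",
    "messages": [
        {
            "role": "user",
            "content": {
                "type": "text",
                "text": "Review PR #42 in acme/api for correctness and style.",
            },
        },
        {
            "role": "user",
            # Embedded resource: the diff rides along as read-only context.
            "content": {
                "type": "resource",
                "resource": {
                    "uri": "diff://acme/api/42",
                    "mimeType": "text/plain",
                    "text": "--- a/app.py\n+++ b/app.py\n...",
                },
            },
        },
    ],
}
```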

When Prompts Beat Raw Instructions

The value of MCP prompts over hardcoded system prompts is version control and consistency. When you define prompts on the server, every client that connects gets the same workflow logic. Update the prompt on the server, and all connected clients pick up the change immediately, no client-side redeployment needed.

This pattern matters for teams running multiple AI clients against the same backend. A Cursor user, a Claude Desktop user, and a custom agent all get the same review workflow from the server's prompt template, with best practices for exposing resources and prompts baked into the server rather than scattered across client configurations.

MCP Sampling: Letting Servers Request LLM Completions

MCP sampling inverts the typical flow. Instead of the client calling the model and the model calling tools, sampling lets the server request an LLM completion through the client. The server sends a sampling/createMessage request with a prompt, and the client routes it to the model, applies any safety controls, and returns the completion.

This is the most powerful and least adopted MCP primitive. Sampling enables agentic loops where the server drives multi-step reasoning without the client needing to orchestrate each step.

Sampling Use Cases

The canonical use case is multi-step workflows where the server needs intermediate reasoning. Consider a data pipeline server that discovers an anomaly in a dataset. With sampling, the server can ask the model to analyze the anomaly, get a recommendation, then decide whether to trigger an alert tool, all within a single server-side workflow.
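The server-initiated request for that anomaly analysis would be shaped roughly like this. Field names follow the MCP specification's `sampling/createMessage` schema; the prompt text and model name are illustrative:

```python
sampling_request = {
    "jsonrpc": "2.0",
    "id": 9,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {
                "role": "user",
                "content": {
                    "type": "text",
                    "text": "Row counts dropped 40% at 02:00 UTC. "
                            "Is this a pipeline failure or expected seasonality?",
                },
            }
        ],
        "systemPrompt": "You are a data-pipeline analyst.",
        "maxTokens": 300,
        # Optional hints only; the client makes the final model choice.
        "modelPreferences": {"intelligencePriority": 0.8},
    },
}

# The client applies its own controls, calls the model, and returns e.g.:
sampling_result = {
    "role": "assistant",
    "content": {"type": "text",
                "text": "Likely a failed upstream load; alert recommended."},
    "model": "claude-sonnet-4",  # whichever model the client actually chose
    "stopReason": "endTurn",
}
```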

The trust model is strict by design. The client maintains full control over which sampling requests to approve, which model to use, and what token limits to enforce. This human-in-the-loop pattern means sampling doesn't compromise security, because the client can reject or modify any server-initiated completion request.

How MCP Resources, Tools, Prompts, and Sampling Work Together

The four MCP primitives form a complete system. Resources provide context, tools provide actions, prompts provide workflow templates, and sampling provides server-initiated reasoning. Understanding MCP primitives vs A2A agent cards helps clarify where MCP's primitive model fits in the broader agent protocol landscape.

Here's how they compare side by side:

| Primitive | Direction | Purpose | Side Effects | Example |
| --- | --- | --- | --- | --- |
| Resources | Server to Client | Expose data | None (read-only) | Database schema, config file |
| Tools | Client to Server (model-initiated) | Execute actions | Yes | API call, file write, query |
| Prompts | Server to Client | Define workflows | None | Code review template, analysis flow |
| Sampling | Server to Client to Model | Request LLM reasoning | None (model only) | Anomaly analysis, multi-step logic |

In a real integration, these primitives work together. A code review workflow might use a prompt to define the review process, resources to load the PR diff and coding standards, tools to post review comments, and sampling to let the server ask follow-up questions about ambiguous code patterns.

The key insight from community discussions is that most teams start with tools only, then gradually adopt resources as they hit token limits, and rarely get to prompts or sampling. This is backwards. Starting with resources and prompts produces cleaner, cheaper integrations, because the model reads what it needs passively instead of burning tokens on tool calls that return static data.

MCP Resources in Practice: Examples and Patterns

Let's look at concrete MCP examples using FastMCP, the most popular Python SDK for building MCP servers. For real-world examples using resources and prompts across different industries, check the linked guide.

Database Schema as a Resource

from fastmcp import FastMCP
 
mcp = FastMCP("db-server")
 
@mcp.resource("db://schema/{table_name}")
async def get_table_schema(table_name: str) -> str:
    """Expose database table schemas as read-only resources."""
    schema = await fetch_schema_from_db(table_name)
    return schema.to_sql()

This FastMCP resources pattern exposes each table's schema as a templated resource. The model can read db://schema/users to understand the table structure before writing a query with a tool, keeping the schema out of the tool response.


File System Resources

@mcp.resource("file:///{path}")
def read_file(path: str) -> str:
    """Expose project files as browseable resources."""
    # Synchronous handler: blocking file I/O inside an async def would
    # stall the event loop, and there's nothing to await here.
    with open(path, "r") as f:
        return f.read()

This example exposes local files through a URI template. The model reads files as context without a tool call, which means no action is logged, no side effects fire, and the data is cacheable.

The Resource Template Pattern

For dynamic or computed data, resource templates solve the enumeration problem:

import json

@mcp.resource("analytics://dashboard/{metric}/{timeframe}")
async def get_metric(metric: str, timeframe: str) -> str:
    """Dynamically computed analytics as templated resources."""
    data = await compute_metric(metric, timeframe)
    return json.dumps(data)

This is the pattern for exposing datasets that can't be pre-listed. The resources are generated on demand when a client fills in the template parameters, which keeps the resources/list response lightweight while still exposing a rich data surface.

Why Most Teams Underuse MCP Resources

Despite being a core primitive, resources remain the least adopted part of MCP. Our analysis of developer discussions reveals three root causes.

Client Support Is Fragmented

Cursor, Claude Desktop, and VS Code all handle resources differently. Some clients display resources in a sidebar, others require manual selection, and a few ignore them entirely. One developer noted that "there's no easy way in the API to access" resource templates in certain SDKs, which pushes teams toward the simpler tools-only pattern.

This fragmentation creates a chicken-and-egg problem. Server developers don't build resources because clients don't surface them well, and clients don't invest in resource UX because few servers expose them. You can browse servers by capability type to find which MCP servers actually implement resources beyond the basics.

The "Just Use a Tool" Reflex

When you can return data from a tool call, resources feel redundant. But this reflex leads to the token burn pattern that multiple teams reported. The difference becomes clear at scale: an agent using 6-7 MCP servers with tool-only architectures sees compounding overhead on every request.

Resources solve this by letting clients cache data and subscribe to updates. Instead of re-fetching a configuration file through a tool call every turn, the client reads it once as a resource and refreshes only when the server signals a change.
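The subscription round trip behind that caching behavior looks like this. Message shapes follow the MCP specification; the URI is illustrative:

```python
# Client subscribes once to a resource it has cached.
subscribe_request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "resources/subscribe",
    "params": {"uri": "config://app/settings"},
}

# When the data changes, the server pushes a one-way notification (no id).
updated_notification = {
    "jsonrpc": "2.0",
    "method": "notifications/resources/updated",
    "params": {"uri": "config://app/settings"},
}

# Only now does the client spend tokens re-reading the resource.
refetch_request = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "resources/read",
    "params": {"uri": "config://app/settings"},
}
```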

Discovery and Documentation Gaps

The analysis of 78,849 MCP tool descriptions found that 98% don't tell agents when to use them. The resource discovery problem is even worse, because tool descriptions at least appear in the model's context. Resources require the client (or user) to proactively browse and attach them.

Expert Tip: Yaniv Shani, Founder of Apigene

"The biggest mistake teams make with MCP is treating every integration as a tool. Start by asking: does the model need to act on this data, or just read it? If the answer is read, expose it as a resource. You'll cut token costs and make your agent faster, because the model stops wasting reasoning cycles on tool calls that just return static context."

Managing Resource and Tool Overload

As MCP adoption grows, teams are running into a scaling wall. One developer described it bluntly: "ran into the same wall running 6-7 separate servers, each one needs its own auth setup too which gets old fast." This is the tool overload problem, and it affects resources too.

The core issue is that every MCP server adds to the client's context burden. Each server's tools, resources, and prompts are loaded during capability negotiation. With dynamic tool loading to manage tool count, teams can defer schema loading until the model actually needs a specific server's capabilities, which is where the 12,000-token savings figure comes from.

MCP gateways solve the aggregation problem. Instead of connecting an agent to 10 separate MCP servers, a gateway like Apigene sits between the client and the servers, handling auth, routing, and capability discovery through a single connection. Because gateways aggregate both tools and resources from multiple servers, the client sees a unified tool and resource list without managing 10 separate connections.

Apigene's approach goes further by compressing tool output and dynamically loading only the tools and resources the model actually needs for the current task. This is the pattern the community is converging on: fewer connections, smarter loading, less token waste.

The Bottom Line

MCP resources are the protocol's most underused primitive, but they solve real problems that tools alone can't fix. Use resources for read-only context like schemas, configuration, and documentation. Use tools for actions that modify state. Use prompts to standardize workflows across clients. And use sampling when your server needs the model to reason mid-task.

The practical starting point: audit your current MCP servers. If any tool call returns static data that doesn't change between turns, refactor it into a resource. If you're running multiple servers and hitting token or auth overhead, consider an MCP gateway to aggregate and optimize your integrations.

The teams getting the most from MCP are the ones using all four primitives together, not just the one that's easiest to implement.


Frequently Asked Questions

Why are MCP resources read-only?

Resources are read-only by design to separate data access from actions. This constraint makes resources cacheable, subscribable, and safe to grant without action-level permissions. When a model reads a resource, nothing changes on the server, which means clients can cache the response and re-fetch only when the server sends a change notification. Tools handle state-modifying operations.

What is the difference between MCP resources and tools?

MCP resources provide read-only context data (files, schemas, configurations) that models consume passively. MCP tools execute actions with side effects (API calls, file writes, database queries). The practical test: if the data exists before the conversation and doesn't change based on the model's behavior, it's a resource. If the model needs to trigger computation or modify state, that's a tool.

Does ChatGPT support MCP resources?

Yes, ChatGPT supports MCP as of 2025. Developer Mode must be enabled in ChatGPT settings (available for Pro, Team, Enterprise, and Edu users). OpenAI's MCP implementation uses FastMCP v2 and supports tools and resources, though the resource browsing UX is still maturing compared to Claude Desktop. Check OpenAI's MCP documentation for the latest supported primitives.

Can MCP resources replace RAG pipelines?

Not entirely, but they overlap. MCP resources can serve as a structured alternative to RAG for smaller, well-defined datasets like documentation, schemas, and configuration. RAG retrieves semantically similar chunks from a vector store, while MCP resources expose specific, URI-addressable data objects. For large document corpora where semantic search matters, RAG is still better. For structured data where you know exactly what the model needs, resources are simpler and more predictable.

How do I expose a database as an MCP resource without blowing up context?

Use resource templates instead of listing every row. Define a URI pattern like db://tables/{table_name}/schema that returns table metadata on demand, not full table dumps. For query results, use a tool to run the query but return a resource URI pointing to the cached result. Community developers call this the "2-step process," and it keeps your tool responses lightweight while making full data accessible through resources.

What is the difference between MCP resources and skills?

MCP resources and skills operate at different levels. Resources are a protocol primitive, a standardized way for servers to expose data to clients. Skills (like Claude Code's built-in skills) are a client-side pattern for organizing and progressively disclosing capabilities. A skill might use multiple MCP tools and resources internally, but skills aren't part of the MCP specification itself. Think of resources as the protocol layer and skills as the application layer.

#mcp #mcp-resources #mcp-tools #mcp-prompts #mcp-sampling #ai-agents