
Remote MCP Servers: Setup, Hosting & Best Options (2026)

Apigene Team
10 min read

Most MCP servers run locally. You install them with npx, they communicate over stdio, and they only work on the machine where they're running. That's fine for development. It breaks the moment you need a team to share tools, a ChatGPT connector to reach your server, or a production agent to access tools without a developer's laptop running 24/7.

Remote MCP servers solve this by hosting the server on a network-accessible endpoint. Your AI agents connect over HTTP instead of stdio, and the server runs independently of any single machine. But going remote introduces challenges that local servers don't have: authentication, transport protocols, TLS, tunneling, and the question of who manages all of it.

We analyzed 53 developer discussions about deploying remote MCP servers and found consistent patterns in what works, what breaks, and what teams wish they'd known before starting.

Key Takeaways

For busy engineering leads deploying MCP servers for team or production use, here's what 53 developer discussions revealed:

  • Streamable HTTP is replacing SSE as the recommended transport for remote MCP servers. The MCP spec was updated in 2025, and most clients now support the new transport natively.
  • OAuth 2.1 with Dynamic Client Registration is required for ChatGPT MCP connectors. Claude handles auth differently. Each client has its own requirements.
  • Tunneling (Cloudflare Tunnel, ngrok) is the fastest path from local to remote, but production teams need proper hosting with TLS and auth.
  • An MCP gateway eliminates per-server hosting by providing a single remote endpoint that routes to all your tools with auth translation built in.

What Are Remote MCP Servers?

Remote MCP servers are Model Context Protocol servers hosted on network-accessible endpoints rather than running as local processes. Instead of communicating over stdio (standard input/output), they use HTTP-based transport (Streamable HTTP or SSE) so AI agents can connect from anywhere.

This shift to remote matters because the MCP ecosystem is moving from "developer tool" to "team infrastructure." When one engineer's local MCP server connects to the company database, only that engineer's AI agent can use it. When a remote MCP server hosts the same connection, every authorized agent on the team can access it.

The list of available remote MCP servers has grown significantly in 2026. Major vendors including Atlassian, Stripe, Salesforce, Slack, and GitHub now ship their own hosted remote MCP servers with Streamable HTTP endpoints. This means teams no longer need to build and host remote servers for common integrations. The remaining challenge is managing auth, transport, and configuration across these vendor endpoints, especially when connecting from multiple AI clients with different requirements.
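The difference shows up directly in client configuration. A rough sketch of what the two styles look like side by side (the exact schema varies by client; the `mcpServers` shape below follows the common Claude Desktop convention, and the package name and URL are placeholders):

```json
{
  "mcpServers": {
    "local-tools": {
      "command": "npx",
      "args": ["-y", "@example/mcp-server"]
    },
    "remote-tools": {
      "url": "https://mcp.example.com/mcp"
    }
  }
}
```

A local entry tells the client which process to spawn and speak stdio to; a remote entry is just a URL the client connects to over HTTP, with auth negotiated separately.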

Stop Building MCP Integrations From Scratch.

  • Any API, one line of code — connect to ChatGPT, Claude, and Cursor without writing custom MCP servers
  • Visual UI in the chat — render interactive components, not just text dumps. Charts, forms, dashboards.
  • 70% fewer tokens — dynamic tool loading and output compression so your agents stay fast and cheap

Local vs Remote: When to Switch

| Factor | Local (stdio) | Remote (HTTP) |
| --- | --- | --- |
| Setup complexity | Low (single command) | Medium (server, TLS, auth) |
| Team access | One developer only | Any authorized agent |
| Client support | All MCP clients | Requires HTTP transport support |
| Auth required | No (local trust) | Yes (OAuth, API keys, or tokens) |
| Uptime | Only when laptop is running | 24/7 with proper hosting |
| Best for | Development, prototyping | Production, team sharing, ChatGPT connectors |

Switch to remote when: (1) more than one person needs the same tool, (2) you're connecting ChatGPT or other cloud-based clients, or (3) you need tools available 24/7 without a laptop running. Most teams hit at least one of these requirements within weeks of their first MCP deployment.

How to Set Up a Remote MCP Server

Step 1: Choose Your Transport

The MCP spec supports two HTTP-based transports for remote servers:

Streamable HTTP is the current recommendation. It uses a single HTTP endpoint (/mcp) and supports bidirectional communication through standard HTTP requests and Server-Sent Events for streaming responses. Most new MCP servers and clients support it natively.

SSE (Server-Sent Events) is the older transport. It uses separate endpoints for sending messages and receiving events. Some existing servers still use it, but the spec is moving away from SSE toward Streamable HTTP.
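Under either transport, the wire format is JSON-RPC 2.0. A minimal sketch of the initialize handshake a client and a Streamable HTTP server perform against the single /mcp endpoint (field names follow the MCP spec; the protocol version string and client/server names here are illustrative):

```python
import json

def make_initialize_request(request_id: int) -> dict:
    """Build the JSON-RPC initialize request a client POSTs to /mcp."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-03-26",  # illustrative spec revision
            "capabilities": {},
            "clientInfo": {"name": "example-client", "version": "0.1.0"},
        },
    }

def handle_initialize(request: dict) -> dict:
    """Server-side sketch: echo the protocol version, declare capabilities."""
    return {
        "jsonrpc": "2.0",
        "id": request["id"],
        "result": {
            "protocolVersion": request["params"]["protocolVersion"],
            "capabilities": {"tools": {}},
            "serverInfo": {"name": "example-server", "version": "0.1.0"},
        },
    }

req = make_initialize_request(1)
resp = handle_initialize(req)
print(json.dumps(resp["result"]["serverInfo"]))
```

Cold-start problems on serverless hosts (discussed below) hit exactly this exchange: if the first POST to /mcp times out while the function boots, clients report failures during initialize before any tool is ever called.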

One developer who built multiple remote servers shared a post titled "Everything I learned building a remote MCP server," covering auth, OAuth, session management, and troubleshooting. The post drew 30+ comments and highlighted that transport choice affects which clients can connect.

Step 2: Host Your Server

Common hosting options for remote MCP servers:

Cloudflare Workers is the fastest path to production-grade remote hosting. Cloudflare published a detailed guide for deploying Streamable HTTP MCP servers on Workers. It handles TLS, scaling, and global distribution automatically.

Cloud Run / AWS Lambda / Azure Functions work for serverless deployments, but developers report issues with cold starts affecting the MCP initialization handshake. One thread documented "huge delay during initialize/list_tool" on AWS AgentCore.

VPS (DigitalOcean, Hetzner, etc.) gives full control but requires managing TLS certificates, reverse proxy configuration, and process management yourself.

Tunneling (development only): Cloudflare Tunnel, ngrok, or Tailscale expose a local server to the internet for testing. Multiple threads described using tunnels to test ChatGPT connectors before deploying to production hosting.
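For reference, the tunnel workflow is usually a single command against your locally running server (assuming it listens on port 8000; the public URL each tool prints is ephemeral):

```shell
# Expose a local MCP server via a Cloudflare quick tunnel
# (prints a randomly assigned trycloudflare.com URL)
cloudflared tunnel --url http://localhost:8000

# Or with ngrok (prints a public forwarding URL)
ngrok http 8000
```

Both give you a TLS-terminated public endpoint in seconds, which is exactly why they are the standard way to test ChatGPT connectors before committing to production hosting.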

Step 3: Configure Authentication

This is where most developers get stuck. Different AI clients implement MCP auth differently:

  • ChatGPT requires full OAuth 2.1 with Dynamic Client Registration. Bearer tokens are not accepted.
  • Claude (web and desktop) expects OAuth but implements discovery paths differently between platforms.
  • Cursor/VS Code typically use local connections or API key auth.

The community's most common complaint: "I built a remote MCP server, auth works in the inspector, but ChatGPT/Claude won't connect." The cause is usually a mismatch between the client's expected auth flow and what the server implements. Teams that support multiple clients often end up maintaining separate auth configurations for each one, which is why many production deployments route through a gateway that handles auth translation automatically.
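To see where the flows diverge, it helps to look at what ChatGPT's flow requires the server to publish. A sketch of the discovery document served at /.well-known/oauth-authorization-server (field names come from RFC 8414; the issuer and endpoint paths are placeholders, and a real server would back these endpoints with actual OAuth logic):

```python
def oauth_metadata(issuer: str) -> dict:
    """Sketch of RFC 8414 authorization-server metadata for an MCP server.

    ChatGPT's connector flow fetches this document, then calls
    registration_endpoint (RFC 7591 Dynamic Client Registration) to
    register itself before starting the OAuth 2.1 authorization code flow.
    """
    return {
        "issuer": issuer,
        "authorization_endpoint": f"{issuer}/authorize",
        "token_endpoint": f"{issuer}/token",
        "registration_endpoint": f"{issuer}/register",  # required for DCR
        "response_types_supported": ["code"],
        "grant_types_supported": ["authorization_code", "refresh_token"],
        "code_challenge_methods_supported": ["S256"],  # PKCE, mandatory in OAuth 2.1
    }

meta = oauth_metadata("https://mcp.example.com")
print(meta["registration_endpoint"])
```

A server that serves this document but omits registration_endpoint, or that only checks bearer tokens, will pass manual inspector tests and still fail ChatGPT's connector setup.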

What the Community Reports: Remote MCP Pain Points

We analyzed 53 developer discussions about deploying remote MCP servers. Here are the most frequent issues:

| Issue | Thread Count | Common Fix |
| --- | --- | --- |
| OAuth/auth failures across clients | 14 | Implement full OAuth 2.1 + DCR for ChatGPT, separate flow for Claude |
| TLS/certificate problems | 8 | Use Cloudflare Tunnel or managed hosting with automatic TLS |
| Transport confusion (SSE vs Streamable HTTP) | 7 | Default to Streamable HTTP for new servers |
| Tunneling works but production hosting doesn't | 6 | Cold starts, IAP, or proxy configuration issues |
| Self-signed certs rejected by Claude | 5 | Must use valid certificates from a CA |
| Server works in inspector but not in client | 5 | Client-specific auth discovery paths differ |

One developer captured the frustration: "After successful auth, Claude never connects to my remote MCP server." The thread earned 16 comments and revealed that Cloudflare Tunnel's "Block AI Training Bots" setting was silently blocking Claude's connection attempts.

Explore 251+ MCP Integrations

Discover official and remote-only MCP servers from leading vendors. Connect AI agents to powerful tools and services.


Best Remote MCP Servers Available Now

Several vendors now offer production-ready remote MCP servers. The fastest way to access all of them is through Apigene's MCP Gateway, which provides a single remote endpoint with auth translation, dynamic tool loading, and output compression across 251+ vendor-verified servers. Instead of hosting and configuring each remote server individually, you connect once to the gateway and access everything.

For teams that prefer direct connections, here are the most established vendor-hosted remote MCP servers:

  • Atlassian was one of the first major vendors to ship a remote MCP server for Jira and Confluence
  • Stripe provides remote access to payment processing data
  • HubSpot offers CRM and marketing automation through remote MCP
  • Slack enables team messaging access for AI agents
  • Salesforce provides enterprise CRM data through remote MCP
  • GitHub gives agents access to repos, PRs, and issues remotely

The full list of remote-capable servers is available at apigene.ai/mcp/official, filtered by remote/streamable HTTP support.

The Gateway Alternative: Skip Per-Server Hosting

Hosting individual remote MCP servers works for 1-3 tools. At 10+ tools, you're managing 10 separate servers with 10 auth configurations, 10 TLS certificates, and 10 monitoring setups.

An MCP gateway like Apigene eliminates this by providing a single remote endpoint that routes to all your tools. Your AI agents connect to one URL. The gateway handles:

  • Auth translation so ChatGPT's OAuth, Claude's connector auth, and Cursor's local config all work through one endpoint
  • Tool aggregation across multiple backend servers
  • Dynamic tool loading so only relevant tools appear per session
  • Output compression that reduces token costs by up to 70%

Instead of hosting 10 remote MCP servers, you host zero and connect through the gateway.
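Conceptually, the gateway's aggregation layer is a router keyed on tool names. A toy sketch of the idea (this is not Apigene's implementation; the namespacing scheme and backend URLs are invented for illustration):

```python
def route_tool_call(tool_name: str, backends: dict) -> str:
    """Resolve a namespaced tool name ('namespace.tool') to the
    backend MCP server that hosts it."""
    namespace, _, _tool = tool_name.partition(".")
    if namespace not in backends:
        raise KeyError(f"no backend registered for {namespace!r}")
    return backends[namespace]

# Placeholder backend URLs; a real gateway would also hold per-backend
# credentials and translate the caller's auth before forwarding.
backends = {
    "github": "https://github-mcp.example.com/mcp",
    "stripe": "https://stripe-mcp.example.com/mcp",
}
print(route_tool_call("github.create_issue", backends))
```

The agent only ever sees the gateway's URL and one auth handshake; the fan-out to individual backends, each with its own credentials and TLS, happens behind that single endpoint.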

Expert Tip — Yaniv Shani, Founder of Apigene

"If you're setting up your first remote MCP server, start with Cloudflare Workers and Streamable HTTP. It's the fastest path to a working remote endpoint. But plan for what happens when you need server #5 and #6 and #7. That's when the per-server hosting model breaks down and a gateway becomes the obvious next step."

The Bottom Line

Remote MCP servers are essential for team use, production deployments, and connecting cloud-based AI clients like ChatGPT. The setup requires choosing the right transport (Streamable HTTP), handling per-client auth (OAuth 2.1 for ChatGPT, different flows for Claude), and managing hosting infrastructure.

For teams scaling past a few remote servers, an MCP gateway removes the per-server hosting burden entirely. One endpoint, one auth layer, all your tools.


Frequently Asked Questions

What is a remote MCP server?

A remote MCP server is a Model Context Protocol server hosted on a network-accessible endpoint instead of running as a local process. It uses HTTP-based transport (Streamable HTTP or SSE) so AI agents can connect from anywhere over the internet. This lets teams share tools, connect cloud-based clients like ChatGPT, and keep servers running 24/7 without depending on a developer's laptop.

How do I connect ChatGPT to a remote MCP server?

ChatGPT requires remote MCP servers to implement OAuth 2.1 with Dynamic Client Registration. Your server needs a publicly accessible HTTPS endpoint, proper OAuth discovery (/.well-known/oauth-authorization-server), and a registration endpoint. Bearer tokens alone won't work. Many developers use Cloudflare Workers or a gateway like Apigene to handle the OAuth requirements automatically.

What is the difference between SSE and Streamable HTTP for MCP?

SSE (Server-Sent Events) was the original HTTP transport for MCP, using separate endpoints for sending and receiving messages. Streamable HTTP is the newer, recommended transport that uses a single /mcp endpoint with bidirectional communication. New MCP servers should use Streamable HTTP. Most modern clients support both, but the spec is moving away from SSE.

Can I use ngrok or Cloudflare Tunnel for remote MCP?

Yes, for development and testing. Tunnels expose your local MCP server to the internet with a public URL, which is useful for testing ChatGPT connectors or sharing with teammates temporarily. For production, use proper hosting (Cloudflare Workers, Cloud Run, or a VPS) with permanent URLs, valid TLS certificates, and production-grade auth. Tunnels can drop connections and have rate limits that break production workflows.

Where can I find a list of remote MCP servers?

Apigene's official MCP directory at apigene.ai/mcp/official lists 251+ vendor-verified servers, many of which support remote connections via Streamable HTTP. You can filter by transport type to see only remote-capable servers. Other directories like mcpservers.org and PulseMCP also list remote servers, though with less verification.

Do I need to host my own remote MCP server?

Not necessarily. Many vendors now offer hosted remote MCP servers (Atlassian, Stripe, Slack, Salesforce). For custom tools, you can build and host your own on Cloudflare Workers or any cloud platform. Alternatively, an MCP gateway like Apigene provides a single remote endpoint that routes to multiple tools without requiring you to host individual servers. The gateway handles auth, routing, and TLS centrally.

#mcp #remote-mcp #mcp-server #hosting #streamable-http #ai-agents