ChainContext Dev Team

Introducing ChainContext - MCP servers for every smart contract

Why we built a no-code MCP server builder for Web3 teams, what you can ship on day one, and how the MCP spec maps to smart contracts.

Most Web3 teams we talk to have the same unfinished side quest: make their smart contracts first-class citizens inside the LLM tools their users already live in - Claude, Cursor, ChatGPT, the dozens of agent frameworks being spun up every week. The problem is that the gap between “here is a verified contract at 0xabc…” and “my agent can read its state and build transactions against it” is a full engineering project, not a weekend ticket.

That gap keeps getting built over and over, by every team that wants in. We built ChainContext so nobody has to build it again.

What ChainContext is

ChainContext is a no-code builder for hosted MCP servers, targeted at smart-contract-backed applications. You paste a contract address, pick a recipe, review the generated tools, and deploy. What comes back is a production MCP endpoint your users can plug into Claude, Cursor, or any MCP-compatible agent runtime, with the tool shapes and output formats already tuned for the way LLMs actually consume data.

The short version of what you ship on day one:

  • A hosted MCP endpoint at a stable URL, no infrastructure of your own
  • Tools auto-derived from your ABI, with descriptions, typed inputs, and reshaped outputs
  • Resources, prompts, and sampling - the full MCP spec, not a subset
  • Built-in observability: per-tool latency, usage events, error rates, cost
  • Write tools done safely by default: unsigned transactions returned to the user’s wallet, never signed by the server
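To make “tools auto-derived from your ABI” concrete: in MCP’s wire format, a tool is just a name, a description, and a JSON Schema for its inputs. Here is a sketch of what a generated read tool could look like - the tool and field names are illustrative, not ChainContext’s actual output:

```python
import json

# Illustrative MCP tool definition for a read function derived from an
# ERC-20 ABI entry. Per the MCP spec, a tool is described by a name, a
# human-readable description, and a JSON Schema ("inputSchema") for its
# arguments. The specific names below are made up for this example.
get_balance_tool = {
    "name": "get_token_balance",
    "description": "Return the ERC-20 token balance of an address, "
                   "decoded and scaled by the token's decimals.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "holder": {
                "type": "string",
                "description": "Address to query, as a 0x-prefixed hex string.",
            },
        },
        "required": ["holder"],
    },
}

print(json.dumps(get_balance_tool, indent=2))
```

The description and the field-level descriptions are what the model actually reads when deciding whether and how to call the tool, which is why they get generated and are editable rather than left blank.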

Why now

MCP went from a niche Anthropic spec to something every agent runtime speaks in under a year. Claude Desktop ships with first-class MCP support. Cursor added it. ChatGPT opened a connector protocol around the same shape. OpenAI-compatible tool runners converge on the same JSON-RPC surface. It is becoming the USB of LLM context, and standards moments do not come around often.

At the same time, agents are getting cheap enough and reliable enough that the end-user appetite is actually there. People want to ask their assistant “how much DAI does treasury X hold” and get an answer grounded in on-chain state, not in a hallucinated snapshot from training data. They want their assistant to draft the swap, the vote, the claim, and hand it to their wallet to sign. None of that works unless someone writes the MCP server for the contract.

The Web3 teams with the most to gain here - protocol teams, wallet teams, DeFi apps, DAOs with governance contracts - are mostly not MCP experts. They should not have to become one. Running a protocol is already a full-time job.

Who this is for

Three rough audiences:

  • Protocol and app teams who want their contracts to be callable from inside every agent their users touch, without owning the hosting, the observability, or the MCP spec churn.
  • Wallet and dev-tool teams who already have a trusted place in the Web3 stack and want to plug MCP into their surface without building it from scratch.
  • Agent builders and indie devs who want to ship a smart-contract-aware agent this week, not next quarter.

Whether you maintain one contract or twenty, and whether you are on EVM today or eyeing Solana, the shape of the problem is the same: you have a machine-readable interface in the ABI, and you need it to be LLM-readable in MCP.

How the MCP spec maps to smart contracts

MCP gives you three primitives: tools, resources, and prompts. Smart contracts fit into all three, and a good server uses all three.

Tools are the model’s callable verbs. For read functions, each tool returns decoded, reshaped state - balances, positions, quotes, pool ratios, vote weights. For write functions, each tool returns an unsigned transaction payload ready for the user’s wallet to sign. The LLM never touches a private key.
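The “unsigned transaction payload” shape is worth making concrete. A minimal sketch of what a write tool might return, assuming an ERC-20 transfer (the function name and payload fields here are illustrative, not ChainContext’s API; the `a9059cbb` selector is the standard one for `transfer(address,uint256)`):

```python
# Sketch of a write tool's return value: an unsigned transaction the
# user's wallet can sign. Field names follow common Ethereum tx JSON,
# but this is an illustration, not ChainContext's actual output shape.
def build_erc20_transfer(token: str, to: str, amount: int, chain_id: int = 1) -> dict:
    """ABI-encode transfer(address,uint256) and wrap it as an unsigned tx."""
    selector = "a9059cbb"  # first 4 bytes of keccak256("transfer(address,uint256)")
    padded_to = to.lower().removeprefix("0x").rjust(64, "0")
    padded_amount = format(amount, "064x")
    return {
        "to": token,                 # the token contract, not the recipient
        "data": "0x" + selector + padded_to + padded_amount,
        "value": "0x0",              # no ETH attached to an ERC-20 transfer
        "chainId": chain_id,
        # no "from" and no signature: the wallet fills those in at signing time
    }

tx = build_erc20_transfer(
    token="0x6B175474E89094C44Da98b954EedeAC495271d0F",  # DAI
    to="0x000000000000000000000000000000000000dEaD",
    amount=10**18,  # 1 DAI (18 decimals)
)
```

Because the payload carries no signature and the server holds no key, the worst a compromised or confused model can do is propose a transaction the user then inspects in their wallet.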

Resources are the model’s readable context. Things like a protocol’s current parameters, a pool’s live state, the full list of supported collateral assets, the abridged whitepaper. Resources are what the model reads passively to ground its answers, before it decides which tool to call. A good resource set often matters more for answer quality than a good tool set.

Prompts are the pre-canned templates your users invoke by name. “Audit my position”, “Generate a vote recommendation”, “Explain this transaction.” Prompts let you bake protocol-specific reasoning into the surface without shipping a custom client.
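On the wire, all three primitives are plain listings the client fetches via the MCP `tools/list`, `resources/list`, and `prompts/list` methods. A sketch of what the resource and prompt halves might look like for a lending protocol - the URIs, names, and descriptions are invented for this example:

```python
# Illustrative resources/list and prompts/list entries for a lending
# protocol's MCP server. The URI scheme and names here are made up;
# the field structure follows the MCP spec's resource and prompt shapes.
resources = [
    {
        "uri": "protocol://params/current",
        "name": "Current protocol parameters",
        "description": "Live interest-rate model, LTV limits, and reserve factors.",
        "mimeType": "application/json",
    },
    {
        "uri": "protocol://collateral/supported",
        "name": "Supported collateral assets",
        "mimeType": "application/json",
    },
]

prompts = [
    {
        "name": "audit_my_position",
        "description": "Walk through a user's position and flag liquidation risk.",
        "arguments": [
            {"name": "address", "description": "Position owner", "required": True},
        ],
    },
]
```

Note that resources are addressed by URI and read passively, while prompts are invoked by name with arguments - which is exactly the tools/resources/prompts split described above.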

Most first-generation contract MCP servers we see ship tools only. Adding resources and prompts is what takes a server from technically functional to actually useful, and it is a lot of the reason ChainContext exists.

What the five-minute flow looks like

You drop in a contract address - verified on Etherscan or any block explorer we support. We fetch the ABI, resolve proxy patterns (EIP-1967 and the common transparent-proxy variants), and generate a candidate tool for every function.

You pick a recipe - an opinionated starting point for common contract types. “ERC-20 token dashboard”, “Uniswap V3 pool analytics”, “DAO voting assistant”, “NFT collection explorer”. Each recipe is just a curated selection of functions plus sensible output shaping. If none of them fit, start blank and pick functions yourself.

You review the generated tools. Names are derived to be verb-first and readable. Descriptions are seeded from NatSpec comments when you have them and from heuristics when you do not. Output schemas are reshaped into a handful of well-named fields instead of raw on-chain returns. You edit anything that does not look right - our field guide on tool design covers what to tune and why.
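The naming step is mechanical enough to sketch. One plausible heuristic - not ChainContext’s actual rules - is snake-casing the ABI name, prefixing reads with a verb when the Solidity name lacks one, and marking writes as transaction builders:

```python
import re

# Illustrative verb-first naming heuristic for ABI functions.
# This is a guess at the kind of rule involved, not the product's code.
def tool_name(fn_name: str, state_mutability: str) -> str:
    # camelCase -> snake_case: insert "_" before each interior capital
    snake = re.sub(r"(?<!^)(?=[A-Z])", "_", fn_name).lower()
    if state_mutability in ("view", "pure"):
        # reads become get_*, unless the ABI name already leads with a verb
        return snake if snake.startswith(("get_", "is_", "has_")) else f"get_{snake}"
    return f"build_{snake}_tx"  # writes return unsigned transactions
```

So `balanceOf` becomes `get_balance_of`, `getReserves` stays `get_reserves`, and a nonpayable `transfer` becomes `build_transfer_tx` - names a model can read as verbs rather than as raw Solidity identifiers.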

You set the network - mainnet, a testnet, or a custom RPC. You pick the transport your clients expect - HTTP with SSE, Streamable HTTP, stdio for local-first agents. You click deploy.

What lands is a real MCP endpoint. Connection snippets for Claude Desktop, Cursor, and any other runtime are generated for you. A .well-known/mcp.json descriptor lets automated discovery tools (and curious users) find your server without being told where to look.

Production-grade from day one

A demo MCP server is easy. A production one has to answer for five things, which we bake in at the platform level so your team does not have to:

  • Reliability: retries, circuit breakers, health checks, RPC failover. A flaky server is a server agents will learn to avoid.
  • Observability: every call is recorded as a structured usage event - tool name, latency, input hash, outcome. You get a dashboard for trends and a feed for live calls.
  • Cost control: rate limits per user, per tool, and per project. Abuse does not take your whole server down.
  • Auth when you need it: API keys, SIWE (sign in with Ethereum) for user-gated endpoints, OAuth for the surfaces that want it.
  • Safe writes: the default for any transaction-building tool is “return an unsigned payload, let the user’s wallet sign”. No private keys near the LLM, no custodial surprises.
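The reliability bullet hides the most plumbing. A minimal sketch of retry-with-backoff combined with RPC failover - the structure of the idea, not our production code, and the endpoint handling is deliberately simplified:

```python
import time

# Minimal sketch of retry-with-backoff plus RPC failover. Real code
# would catch narrower exceptions, track per-endpoint health, and add
# jitter; this shows only the control flow the reliability bullet means.
def call_with_failover(fn, endpoints, attempts=3, base_delay=0.25):
    """Try fn(endpoint) against each endpoint, backing off between rounds."""
    last_error = None
    for attempt in range(attempts):
        for endpoint in endpoints:
            try:
                return fn(endpoint)
            except Exception as exc:
                last_error = exc  # remember the failure, try the next endpoint
        time.sleep(base_delay * (2 ** attempt))  # 0.25s, 0.5s, 1s, ...
    raise last_error
```

A circuit breaker adds one more layer on top of this: an endpoint that keeps failing gets skipped entirely for a cooldown window instead of being retried on every call.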

What comes next

We are going to be writing here a lot. The posts we have queued up cover the sharp edges of the MCP spec as it evolves, block design patterns for common contract shapes, prompt engineering for on-chain data, the agent-wallet UX that makes write tools actually shippable to non-crypto-native users, and the performance work under the hood when you start caring about p99 tool-call latency.

The first one, already live, is “From ABI to MCP server in 5 minutes” - a walkthrough of the product flow with concrete examples of what the generated tools look like. The second is “Designing MCP tools for on-chain data”, our field guide on the post-generation polish that separates “works” from “works reliably”.

If you want the rest as it ships, the RSS feed is your friend. If you want to skip the reading and deploy a server, paste any verified contract address into the new-server flow and you will have an MCP endpoint you can connect to Claude before your coffee goes cold.

We are early. The product gets sharper every week. We would rather be in your hands and iterating than in stealth and guessing - so go break it, and tell us what you find.


ChainContext Dev Team

Engineering

We build ChainContext - a no-code MCP server builder for Web3 teams. Posts here are collective notes from our engineering and product work.
