Uptrack

Engineering

We made Uptrack agent-ready — 25 → 100 in a day

Cloudflare shipped isitagentready.com — a scanner that grades how discoverable your site is to AI agents. We ran uptrack.app through it, scored 25/100, and shipped everything the scanner checks for: MCP Server Card, Agent Skills index, API Catalog, Content Signals, markdown content negotiation, a Link response header, OAuth 2.0 authorization server, OAuth Protected Resource metadata, and a hosted Streamable-HTTP MCP endpoint. Final score: 100/100, Level 5 "Agent-Native". As far as we can tell, we are the first uptime monitoring tool to score 100.

Published April 18, 2026 · Updated April 19, 2026

The scan that started it

isitagentready.com checks 14 emerging standards across five categories: discoverability (robots.txt, sitemap, Link headers), content accessibility (markdown negotiation), bot access control (AI crawl rules, Content Signals), protocol discovery (MCP Server Card, Agent Skills, OAuth, API Catalog), and commerce (x402, UCP, ACP). Most of these did not exist a year ago. A few are still drafts.

Our starting point was 25/100:

  • Discoverability 2/3: robots.txt and sitemap.xml, but no Link response header.
  • Content 0/1: Accept: text/markdown on / returned 500.
  • Bot Access 1/2: wildcard robots rules pass, but no Content Signals.
  • API/Auth/MCP 0/6: no API Catalog, no MCP Server Card, no Agent Skills index, no OAuth metadata.

The 500s were the most interesting failure. Our TanStack Start SSR worker was throwing on unknown paths like /.well-known/api-catalog and on any request to / with a non-HTML Accept header — a real bug that agents would hit but humans never do.

What we shipped

An MCP Server Card at /.well-known/mcp/server-card.json

SEP-1649 specifies a machine-readable card that tells agents where to find an MCP server, how to authenticate, and which tools it exposes. Our card describes the @uptrack-app/mcp stdio server, the env vars it needs (UPTRACK_API_KEY), and the ten tools it ships (list_monitors, create_monitor, acknowledge_incident, and so on). An agent that knows only our domain can now discover the full tool surface with one HTTP GET.
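A trimmed sketch of what such a card can look like. The values and exact field names here are illustrative rather than a copy of our real card or of the SEP-1649 draft schema, which is still evolving:

```json
{
  "name": "uptrack",
  "description": "Uptime monitoring: monitors, incidents, status pages",
  "mcp": {
    "transport": "stdio",
    "package": "@uptrack-app/mcp",
    "env": ["UPTRACK_API_KEY"]
  },
  "tools": ["list_monitors", "create_monitor", "acknowledge_incident"]
}
```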

An Agent Skills index at /.well-known/agent-skills/index.json

Agent Skills v0.2.0 is the emerging contract for advertising task-level capabilities — not just raw tools, but named skills agents can invoke. We listed five: check-website-down, list-monitors, create-monitor, list-incidents, acknowledge-incident. Each one points back at the MCP server card so an agent can resolve how to actually invoke it.
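Sketched, the index is a small JSON file. Field names below are illustrative (the v0.2.0 draft may differ), but the shape is the point: named skills, each resolvable back to the server card:

```json
{
  "version": "0.2.0",
  "skills": [
    {
      "id": "check-website-down",
      "description": "Run a multi-region availability check against a URL",
      "mcp": { "serverCard": "/.well-known/mcp/server-card.json" }
    },
    {
      "id": "acknowledge-incident",
      "description": "Acknowledge an open incident",
      "mcp": { "serverCard": "/.well-known/mcp/server-card.json" }
    }
  ]
}
```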

An API Catalog linkset at /.well-known/api-catalog

RFC 9727 defines a JSON linkset describing the APIs a site offers. Ours anchors at https://uptrack.app and points at our OpenAPI 3.0.3 spec, the MCP server card, and both llms.txt files. Every agent-relevant entry point is reachable from this single document.
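A minimal sketch of that linkset, using the RFC 9727/RFC 9264 structure. The /openapi.json path is an assumption for illustration, not necessarily our exact URL:

```json
{
  "linkset": [
    {
      "anchor": "https://uptrack.app",
      "service-desc": [
        {
          "href": "https://uptrack.app/openapi.json",
          "type": "application/vnd.oai.openapi+json"
        }
      ],
      "service-doc": [
        { "href": "https://uptrack.app/llms.txt", "type": "text/markdown" }
      ]
    }
  ]
}
```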

Markdown content negotiation on the homepage

The Cloudflare Markdown for Agents guidance is straightforward: when an agent requests / with Accept: text/markdown, serve the markdown overview instead of the React SSR shell. We already had /llms.txt, so the implementation is a thin Cloudflare Pages worker shim that short-circuits the SSR bundle for that one case:

// scripts/worker-shim.js
import ssrHandler from './ssr-entry.js'

export default {
  async fetch(request, env, ctx) {
    const url = new URL(request.url)
    const accept = request.headers.get('accept') || ''

    // Agents that ask for markdown on the homepage get /llms.txt
    // instead of the React SSR shell.
    if (
      request.method === 'GET' &&
      url.pathname === '/' &&
      accept.toLowerCase().includes('text/markdown')
    ) {
      const res = await env.ASSETS.fetch(new URL('/llms.txt', url))
      return new Response(await res.text(), {
        status: 200,
        headers: {
          'content-type': 'text/markdown; charset=utf-8',
          'cache-control': 'public, max-age=300',
          // Tell caches the body depends on the Accept header.
          vary: 'Accept',
        },
      })
    }

    // Everything else falls through to the normal SSR handler.
    return ssrHandler.fetch(request, env, ctx)
  },
}

The same shim also attaches the Link response header on HTML responses and forwards everything else untouched.
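The header-building half can be sketched as a small helper. The function name and the rel values for the llms files are ours for illustration, not the shim's actual code (rel="api-catalog" is the relation RFC 9727 registers):

```javascript
// Hypothetical helper (not the shim's actual code): build an RFC 8288
// Link header value from a list of { url, rel, type } entries.
function buildLinkHeader(entries) {
  return entries
    .map(({ url, rel, type }) =>
      `<${url}>; rel="${rel}"` + (type ? `; type="${type}"` : ''))
    .join(', ')
}

// Two of the six entries the homepage advertises:
const link = buildLinkHeader([
  { url: 'https://uptrack.app/llms.txt', rel: 'alternate', type: 'text/markdown' },
  { url: 'https://uptrack.app/.well-known/api-catalog', rel: 'api-catalog' },
])
// link === '<https://uptrack.app/llms.txt>; rel="alternate"; type="text/markdown", <https://uptrack.app/.well-known/api-catalog>; rel="api-catalog"'
```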

Content Signals in robots.txt

Content Signals is Cloudflare's proposed robots.txt extension for declaring AI usage preferences per signal. Ours:

User-agent: *
Content-Signal: search=yes, ai-input=yes, ai-train=no

Translation: index us, let agents read our content to answer questions about Uptrack, but do not train foundation models on us. That matches how we actually feel about the content we publish.

A Link response header on /

RFC 8288 Link headers let agents discover related resources without parsing HTML. Our homepage now advertises six of them in one header: both llms files, the OpenAPI spec, the API Catalog, the MCP Server Card, and the Agent Skills index. An agent that issues HEAD / knows the entire discovery surface before loading a byte of HTML.

Update (April 19): the last four checks

A day after shipping the first round of metadata, we finished the remaining four checks and took the score from 75 to 100. Two of them — OAuth AS discovery and OAuth Protected Resource — required a real OAuth implementation, which we already had sitting in api.uptrack.app via the Boruta Elixir library. We just needed to publish the well-known mirrors on uptrack.app so the scanner — which only scans the frontend origin — could find them.
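The protected-resource half is a small JSON document at /.well-known/oauth-protected-resource. Field names follow RFC 9728; the values here are an illustrative sketch, not our exact file:

```json
{
  "resource": "https://api.uptrack.app",
  "authorization_servers": ["https://api.uptrack.app"],
  "bearer_methods_supported": ["header"]
}
```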

The third was WebMCP. We registered check_website_down via navigator.modelContext — a read-only, unauthenticated tool that wraps our existing multi-region website checker. No authenticated WebMCP tools: the spec doesn't yet define a consent surface, and any visited page could otherwise invoke them on the user's behalf.
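A heavily hedged sketch of that registration: WebMCP is an early draft, so the exact navigator.modelContext surface may change, and the endpoint path and descriptor fields below are illustrative rather than our production code:

```javascript
// Illustrative sketch only -- WebMCP is an early draft and the exact
// navigator.modelContext API may differ from what eventually ships.
const checkWebsiteDownTool = {
  name: 'check_website_down',
  description: 'Run a multi-region availability check against a URL',
  inputSchema: {
    type: 'object',
    properties: { url: { type: 'string' } },
    required: ['url'],
  },
  // Hypothetical endpoint path; the real checker URL is not shown here.
  async execute({ url }) {
    const res = await fetch('/api/public/check?url=' + encodeURIComponent(url))
    return res.json()
  },
}

// Guarded so pages in browsers without WebMCP support are unaffected.
if (globalThis.navigator?.modelContext?.registerTool) {
  navigator.modelContext.registerTool(checkWebsiteDownTool)
}
```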

The fourth check was the hosted Streamable-HTTP MCP endpoint at api.uptrack.app/mcp, and the real reason to ship it and OAuth together wasn't the score: it was the distribution channel. With an OAuth AS plus a hosted Streamable-HTTP MCP endpoint, Uptrack becomes installable in Claude.ai, ChatGPT, VS Code, Cursor, and Windsurf in two clicks — not one npm install, API key, env var, and JSON config. That's the order-of-magnitude UX win.
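Two clicks covers the hosted connectors; editor clients typically take a JSON config instead. A sketch of a remote Streamable-HTTP entry, following the common mcpServers convention (the exact shape varies by client):

```json
{
  "mcpServers": {
    "uptrack": {
      "type": "http",
      "url": "https://api.uptrack.app/mcp"
    }
  }
}
```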

Why an uptime monitor needs agent discovery

Agents are increasingly the users of monitoring tools. Claude Code, Cursor, Windsurf, and a growing number of autonomous agents want to check if a deployment is healthy, create a monitor for a new service, or acknowledge an incident while triaging an alert. The friction today is not API capability — our API has been there since launch — it is discoverability. An agent that does not already know about Uptrack cannot reason its way to our MCP server through HTML scraping.

These well-known files remove that friction. Point any MCP-aware agent at uptrack.app and it can:

  • Read the Link header, find the MCP Server Card, and install the server.
  • Read the Agent Skills index and know exactly which tasks we handle.
  • Fetch Accept: text/markdown on the homepage and get a clean product overview instead of a 3MB React bundle.
  • Follow the API Catalog to our OpenAPI spec for direct REST calls when MCP is overkill.

This is the same plumbing Stripe, Anthropic, Vercel, and Cloudflare have been shipping for their own surfaces. It is not yet common in the monitoring category.

First uptime monitor to do this — we think

We scanned the top uptime monitoring tools on isitagentready.com before publishing. None of the ones we checked publish an MCP Server Card or an Agent Skills index at the well-known paths. A handful have an llms.txt. None have the full agent-discovery stack. If we are wrong about that and someone beat us to it, send us a link and we will update this post.

The whole thing is four JSON files, a robots.txt edit, a 30-line worker shim, and an entry in _routes.json. There is no reason for any SaaS not to ship it.
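The _routes.json piece controls which paths invoke the Pages worker at all. A hedged sketch (the include/exclude lists here are illustrative, not our exact file) that serves the well-known JSON and llms files as static assets while the shim keeps handling everything else:

```json
{
  "version": 1,
  "include": ["/*"],
  "exclude": ["/.well-known/*", "/llms.txt"]
}
```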

What's next

The next logical step is making more of our tools usable by agents without an API key. Our "Is Website Down?" tool already runs a multi-region check against any URL, and as of the April 19 update it is exposed as a read-only, unauthenticated WebMCP tool. We will keep tracking the WebMCP spec as navigator.modelContext moves toward stable browsers, and add authenticated tools once it defines a consent surface.

If you run a SaaS and want a starting point: scan yourself on isitagentready.com, publish the low-hanging files (llms.txt, api-catalog, Link header, Content Signals), and only then work up to the MCP and Agent Skills pieces. Most of the score is in the first 30 minutes of work.

Monitor your site — from humans or agents

50 free monitors — 10 at 30-second checks, 40 at 1-minute. HTTP, DNS, SSL, TCP, and heartbeat. MCP server included so your coding agent can manage monitors directly.

Start Monitoring Free