Use Cases
Monitor Cloudflare Workers, Vercel Edge, and serverless functions from the outside
You cannot SSH into an edge function. There is no server to log into, no process to inspect, no htop to run. External monitoring is the only way to know if your serverless code is actually working.
April 10, 2026
Edge functions fail silently — and you have zero access
Traditional servers give you tools when things break. You SSH in, check logs, restart the process, inspect memory usage. Serverless and edge functions strip all of that away by design. Your code runs in an ephemeral isolate on someone else's infrastructure, across dozens of regions you cannot see.
This tradeoff is usually worth it — zero ops, automatic scaling, global distribution. But it means your only window into production behavior is what comes back over HTTP. If your Cloudflare Worker starts returning 500s in the Sydney PoP but works fine in Frankfurt, you will not know unless something is checking from the outside.
Deploys succeed, dashboards show green, and users in specific regions silently get error pages. This is the baseline failure mode for every serverless platform.
Cloudflare Workers: D1, R2, KV — each can fail independently
A Cloudflare Worker is not one thing. A typical production Worker depends on multiple bindings — D1 for SQL, R2 for object storage, KV for key-value lookups, Durable Objects for state. Each binding is a separate service that can fail independently while the Worker itself stays "up."
D1 database outages
D1 is SQLite at the edge, replicated across Cloudflare's network. Replication lag or regional failures can cause reads to return stale data or writes to fail. The Worker returns 200, but the response body contains an error message instead of real data.
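This failure mode is easy to sketch. The handler below swallows a D1 error and returns 200 with an error body; the `D1Like` interface and the binding stub are stand-ins for illustration, not the real D1 binding API (which uses `prepare().first()`):

```typescript
// Minimal sketch of the failure mode above: a handler that swallows a D1
// error and returns 200 with an error body. D1Like is a stand-in, not the
// real Cloudflare D1 binding API.
interface D1Like {
  first(query: string): Promise<unknown>;
}

async function handleStatus(db: D1Like): Promise<Response> {
  try {
    await db.first("SELECT 1");
    return new Response(JSON.stringify({ status: "ok", database: "connected" }), { status: 200 });
  } catch {
    // The trap: the error is caught and the HTTP status stays 200, so a
    // status-code check sees a healthy endpoint. Only inspecting the body
    // reveals the failure.
    return new Response(JSON.stringify({ status: "error", database: "unreachable" }), { status: 200 });
  }
}

// Simulate a D1 outage with a binding stub that always rejects.
const brokenDb: D1Like = {
  first: async () => { throw new Error("D1 storage error"); },
};

const res = await handleStatus(brokenDb);
console.log(res.status);       // 200 — looks healthy to a status-only check
console.log(await res.text()); // {"status":"error","database":"unreachable"}
```

A monitor that only checks status codes passes this endpoint every time; only a check that inspects the response body fails it.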
KV inconsistency
KV is eventually consistent with a propagation delay that can exceed 60 seconds. If your Worker reads a config value from KV that was just updated, different regions serve different versions. Feature flags, rate limits, and auth tokens stored in KV can all silently diverge.
R2 storage errors
R2 is S3-compatible object storage. Workers that serve files from R2 can hit rate limits or encounter transient errors that the Worker catches and returns as a 200 with a fallback response. Without checking the response body, you would never know the actual asset was not served.
Cloudflare's own status page reports at the platform level. It will not tell you that D1 reads are failing in Asia Pacific while working in Europe. Only external checks against your actual endpoints, from multiple regions, catch this.
Vercel and Netlify: deploy succeeds, function errors silently
Vercel Edge Functions and Netlify Edge Functions both run on globally distributed runtimes. Deployment is atomic — your new code goes live everywhere at once. But "deployed successfully" and "working correctly" are very different things.
A common failure pattern: you deploy a Vercel Edge Function that imports an npm package which uses a Node.js API not available in the Edge Runtime (like fs or net). The build passes because the bundler does not execute the code. The deployment succeeds. The function crashes at runtime with an error that only appears in Vercel's log drain — which you might not have configured, or which might be delayed by minutes.
# Deploy succeeds
$ vercel --prod
→ Production: https://app.vercel.app (Ready)
# But the edge function is broken at runtime
$ curl https://app.vercel.app/api/auth
→ 500 Internal Server Error
→ "This module is not available in the Edge Runtime"
# Without monitoring, you find out from users
Vercel's monitoring dashboard shows function invocations and errors, but it does not alert you proactively. You have to be looking at the dashboard at the right moment. An external HTTP monitor catches the 500 within 30 seconds of deployment.
Cold starts and regional failures are invisible without checks
Cold starts are the defining reliability problem of serverless. When a function has not been invoked recently, the runtime spins up a new isolate. On Cloudflare Workers this is fast (under 5ms typically). On AWS Lambda with a Function URL, cold starts can exceed 3 seconds for Java or .NET runtimes. On Deno Deploy, cold starts are generally under 100ms but spike during regional capacity issues.
AWS Lambda Function URLs
Lambda Function URLs expose your function as an HTTP endpoint without API Gateway. Cold starts on these URLs hit users directly — there is no caching layer to absorb them. If your function has heavy initialization (database connections, ML model loading), the first request after idle can time out at the client while Lambda reports a successful invocation.
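The cold-start cost of heavy initialization can be simulated in miniature. This is not AWS SDK code; the 300 ms delay stands in for the seconds of real setup (database connections, model loading) that run once at module scope:

```typescript
// Simulation of cold vs. warm invocations: expensive setup runs once,
// so only the first request after idle pays for it.
const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

let connection: string | null = null;
async function init(): Promise<string> {
  if (connection === null) {
    await sleep(300); // stands in for seconds of real initialization
    connection = "db-connection";
  }
  return connection;
}

async function handler(): Promise<number> {
  const start = Date.now();
  await init(); // slow only on the cold start
  return Date.now() - start;
}

const coldMs = await handler();
const warmMs = await handler();
console.log(coldMs >= 300, warmMs < 100); // true true
```

The asymmetry is the point: platform metrics average the cold request away, but the user who triggered it experienced the full delay.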
Deno Deploy
Deno Deploy runs on 35+ regions. A regional outage means requests routed to that PoP fail while others succeed. Deno's status page may show "degraded performance" but your users in that region experience full downtime. Monitoring from multiple regions catches asymmetric failures.
Supabase Edge Functions
Supabase Edge Functions run on Deno Deploy under the hood. They add another failure point: the Supabase client initialization. If your Edge Function connects to your Supabase database and the connection string is wrong or the database is paused (free tier), the function starts but every request that touches the database fails.
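One way to make that failure visible to an external check is a health route that actually touches the database and maps failure to a non-200 status. The client shape below is stubbed for illustration; a real function would issue a cheap query through supabase-js:

```typescript
// Health route sketch: a DB probe failure becomes a 503, so even a
// status-code-only monitor catches a paused or misconfigured database.
interface DbClient {
  ping(): Promise<void>; // stand-in for a cheap real query
}

async function health(client: DbClient): Promise<Response> {
  try {
    await client.ping();
    return new Response(JSON.stringify({ status: "ok", database: "connected" }), { status: 200 });
  } catch {
    return new Response(JSON.stringify({ status: "error", database: "unreachable" }), { status: 503 });
  }
}

// A paused free-tier database behaves like a connection that never succeeds.
const paused: DbClient = { ping: async () => { throw new Error("connection refused"); } };
console.log((await health(paused)).status); // 503
```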
The common thread: the function process is alive, the platform reports it as deployed, but actual HTTP requests return errors. The only reliable signal is making real HTTP requests and inspecting what comes back.
Logs are scattered across regions and hard to debug
Every serverless platform has its own logging system, and none of them are designed for real-time incident detection. Cloudflare Workers exposes logs through wrangler tail, which only shows live requests — there is no persistent log storage unless you configure Logpush. Vercel logs are per-function and retained for limited periods depending on your plan. AWS Lambda logs go to CloudWatch, split across the region the function executed in.
When an edge function runs in 30 regions, the error logs are in 30 different places. Aggregating them requires a third-party log drain (Datadog, Axiom, Baselime) configured correctly and delivering with acceptable latency. Even then, turning log entries into alerts requires writing queries, setting thresholds, and maintaining that pipeline.
External HTTP monitoring skips all of that. You do not need to aggregate logs from 30 regions to know your function is broken. You need one check that hits the endpoint and tells you whether the response is correct.
How Uptrack monitors serverless and edge functions
Uptrack runs HTTP checks against your function endpoints at 30-second intervals. Here is what that catches for each platform:
HTTP status code checks
The baseline. If your Cloudflare Worker returns a 500, your Vercel Edge Function returns a 502, or your Lambda Function URL returns a 504 (gateway timeout from a cold start), Uptrack flags it immediately. Most serverless failures manifest as non-200 status codes, and 30-second checks catch them before user reports accumulate.
Keyword matching on response body
This is where serverless monitoring gets interesting. Many edge functions return 200 even when something is wrong — they catch errors internally and return a fallback or error message in the body. With keyword matching, you can verify the response contains expected content.
# Your Cloudflare Worker API endpoint
GET https://api.yourapp.com/v1/status
# Healthy response (200 OK)
{"status": "ok", "database": "connected", "cache": "warm"}
# Broken response (still 200 OK — Worker caught the error)
{"status": "error", "database": "unreachable", "cache": "cold"}
# Uptrack keyword match: require "status":"ok" in body
# Catches the broken response even though HTTP status is 200Response time thresholds
Cold starts show up as response time spikes. If your Cloudflare Worker normally responds in 20ms but starts taking 2 seconds, something changed — maybe a binding is slow, maybe the isolate is being recreated on every request due to a deployment issue. Monitoring response time trends surfaces these degradations before they become outages.
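The threshold logic itself is simple enough to sketch. The numbers below are illustrative, with a simulated endpoint standing in for a real HTTP request:

```typescript
const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// Time a probe and flag it as degraded if it exceeds a latency threshold.
async function timedCheck(probe: () => Promise<void>, thresholdMs: number) {
  const start = Date.now();
  await probe();
  const elapsedMs = Date.now() - start;
  return { elapsedMs, degraded: elapsedMs > thresholdMs };
}

// A Worker that normally answers in ~20ms vs. one hit by a cold start.
const fast = await timedCheck(() => sleep(20), 500);
const slow = await timedCheck(() => sleep(600), 500);
console.log(fast.degraded, slow.degraded); // false true
```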
Practical setup for a typical stack
Example: SaaS app on Cloudflare Workers + D1 + R2
Monitor 1: https://api.yourapp.com/health — 30s interval, keyword match on "database":"connected". Catches Worker crashes and D1 failures.
Monitor 2: https://yourapp.com — 30s interval, keyword match on a string in your HTML (your app name or a meta tag). Catches static asset and CDN issues.
Monitor 3: https://api.yourapp.com/v1/files/test — 1-minute interval, expects 200. Catches R2 binding failures.
Three monitors, all within the free tier. You are covered for the three most common Cloudflare Workers failure modes: Worker crashes, D1 outages, and R2 errors.
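A sketch of the /health endpoint Monitor 1 would target: one route probes both bindings, so a single keyword check covers Worker crashes, D1 failures, and R2 errors at once. The binding interfaces are stubbed here; the real Workers bindings differ slightly (D1 uses `env.DB.prepare(sql).first()`, R2 uses `env.BUCKET.head(key)`):

```typescript
// Stubbed binding shapes; real Workers bindings differ slightly.
interface Env {
  DB: { first(query: string): Promise<unknown> };
  BUCKET: { head(key: string): Promise<unknown> };
}

async function health(env: Env): Promise<Response> {
  const report = { database: "connected", storage: "reachable" };
  let ok = true;
  try { await env.DB.first("SELECT 1"); } catch { report.database = "unreachable"; ok = false; }
  try { await env.BUCKET.head("healthcheck"); } catch { report.storage = "unreachable"; ok = false; }
  // Non-200 on any binding failure, plus a body the keyword match can verify.
  return new Response(JSON.stringify({ status: ok ? "ok" : "error", ...report }),
                      { status: ok ? 200 : 503 });
}

// With healthy bindings the body contains "status":"ok" for keyword matching.
const healthy: Env = {
  DB: { first: async () => 1 },
  BUCKET: { head: async () => ({}) },
};
console.log(await (await health(healthy)).text());
// {"status":"ok","database":"connected","storage":"reachable"}
```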
Why 30-second checks catch what 5-minute checks miss
Serverless failures are often transient. A cold start spike lasts 10 seconds. A regional routing issue resolves in 2 minutes. A D1 replication delay clears in 45 seconds. If your monitoring only checks every 5 minutes, these incidents happen and resolve between checks. You never see them — but your users did.
With 30-second checks, you capture the pattern. Three consecutive failures at 30-second intervals means something was broken for at least 90 seconds. That is a real incident. You get the alert, check your platform's dashboard, and see the spike in the error rate that would otherwise have been averaged away by the time you looked.
For serverless functions that handle authentication, payments, or API responses, even a 60-second outage matters. Thirty-second checks give you the resolution to see it.
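The arithmetic behind that claim can be made explicit. For an outage lasting d seconds and checks every T seconds with random phase, the chance that at least one check lands inside the outage is min(1, d/T):

```typescript
// Probability that a fixed-interval check observes an outage of a given
// duration, assuming the check phase is uniformly random.
const catchProbability = (outageSec: number, intervalSec: number): number =>
  Math.min(1, outageSec / intervalSec);

console.log(catchProbability(90, 30));  // 1 — a 90-second outage always hits a 30s check
console.log(catchProbability(90, 300)); // 0.3 — 5-minute checks miss it 70% of the time
```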
Start monitoring your edge functions
50 free monitors — 10 at 30-second checks, 40 at 1-minute. Keyword matching on response bodies to catch silent failures. Alerts via Slack, Discord, email, and webhooks. No credit card required.
Start Monitoring Free