Monitor Supabase, Neon, and Turso — your database can scale to zero and not come back
Serverless databases save you money by spinning down when idle. The problem is what happens when they try to spin back up — and can't.
April 10, 2026 · 10 min read
The serverless database landscape in 2026
If you're building on Supabase, Neon, Turso, Upstash, or PlanetScale, you've already made a bet: your database doesn't need to be running all the time. Scale to zero when nobody's using it. Pay only for what you consume. It's the promise of serverless infrastructure.
But that promise has a cost nobody talks about until it hits production. When your database scales to zero, it needs to scale back up when the next request arrives. That cold start can fail. The compute node might not be available. The connection might time out. The service itself might be having an incident.
And if your monitoring doesn't understand serverless databases, you won't know about it until your users tell you.
Neon: 150ms cold starts that can become 15 seconds
Neon's architecture separates compute from storage. When your database scales to zero, the compute node shuts down completely. The next connection triggers a cold start — Neon advertises ~150ms for this, and under normal conditions that's roughly accurate.
But "normal conditions" isn't the scenario you need monitoring for. Here's what actually happens in the wild:
Compute contention
During peak hours, Neon's compute pool can be constrained. Your cold start that normally takes 150ms now takes 3-5 seconds. If your application has a 2-second database timeout, the connection fails entirely.
Region outages
Neon operates in specific AWS regions. If us-east-1 has capacity issues, your database literally cannot start. It's scaled to zero, and it's staying there until AWS resolves the issue.
Branching failures
Neon's branching feature creates copy-on-write database branches for preview deployments. These branches scale to zero aggressively. If your CI/CD pipeline depends on a branch database being available, a cold start failure breaks your entire deployment process.
What you need: an HTTP check on your application's health endpoint that actually queries the database. Not a TCP ping to the Neon connection string — a real SELECT 1 through your app layer. If that takes longer than your configured threshold, you know before your users do.
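As a concrete sketch, here's what that can look like for a Hono app in front of Neon. The `db` client, route path, and threshold are illustrative assumptions (any Postgres driver works); the only requirement is that the query really travels through your app layer:

```typescript
// Illustrative health check (names are assumptions, not from Neon's docs):
// measure a real query through the app, not a TCP ping to the host.
type DbHealth = { db: 'ok' | 'slow'; latency_ms: number }

// Pure helper: classify a measured query latency against a threshold.
export function classifyLatency(latencyMs: number, thresholdMs = 2000): DbHealth {
  return { db: latencyMs > thresholdMs ? 'slow' : 'ok', latency_ms: latencyMs }
}

// The route itself, sketched (assumes a `db.query` client such as
// @neondatabase/serverless and a Hono `app`):
//
//   app.get('/api/health', async (c) => {
//     const start = Date.now()
//     await db.query('SELECT 1')  // exercises the full cold-start path
//     const health = classifyLatency(Date.now() - start)
//     return c.json(health, health.db === 'ok' ? 200 : 503)
//   })
```

Returning a non-200 on slow responses matters: it lets a plain status-code monitor catch latency degradation without parsing the body.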
Supabase: five services, five failure modes
Supabase isn't just Postgres. It's Postgres, Auth (GoTrue), Realtime (Phoenix channels), Storage (S3-compatible), and Edge Functions (Deno Deploy). Each runs as an independent service. Each can fail independently.
We've seen this pattern from Supabase users: the database is fine, but Auth is returning 500s. Users can't log in. The health check that only pings Postgres says everything is green. Meanwhile, signups are broken and nobody knows.
Here's what a proper Supabase monitoring setup looks like:
Monitor 1: App health endpoint (tests DB query)
→ GET https://your-app.com/api/health
→ Expects: 200 + {"db": "ok", "latency_ms": <50}
Monitor 2: Supabase Auth
→ GET https://your-project.supabase.co/auth/v1/health
→ Expects: 200
Monitor 3: Supabase Realtime
→ GET https://your-project.supabase.co/realtime/v1/health
→ Expects: 200
Monitor 4: Supabase Storage
→ HEAD https://your-project.supabase.co/storage/v1/bucket/your-bucket
→ Expects: 200 (with service_role key in header)
Monitor 5: Edge Functions
→ GET https://your-project.supabase.co/functions/v1/health-check
→ Expects: 200

That's five monitors for one Supabase project. On Uptrack's free tier, you'd still have 45 monitors left for everything else.
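All five endpoints hang off the same project base URL, so you can derive them programmatically. A small sketch (the bucket name and function name are placeholders, matching the monitors above):

```typescript
// Derive the Supabase health-check URLs listed above from a project ref.
// `your-bucket` and `health-check` are placeholders; swap in your own names.
export function supabaseHealthUrls(projectRef: string): Record<string, string> {
  const base = `https://${projectRef}.supabase.co`
  return {
    auth: `${base}/auth/v1/health`,
    realtime: `${base}/realtime/v1/health`,
    storage: `${base}/storage/v1/bucket/your-bucket`, // use HEAD + service_role key
    functions: `${base}/functions/v1/health-check`,   // your deployed function
  }
}
```

This is handy for scripting a one-off status sweep, or for generating monitor definitions instead of typing five URLs by hand.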
Fauna and PlanetScale: the graveyard of serverless databases
Fauna shut down in 2025. Not "deprecated" — shut down. Teams that had built their entire data layer on Fauna's proprietary FQL had to migrate everything, under pressure, to a completely different database. The teams that had uptime monitoring caught the early signs: increased latency, intermittent 503s, degraded API responses. They had weeks of warning. The teams without monitoring found out from a blog post.
PlanetScale killed its free tier in early 2024. Overnight, thousands of hobby projects and startups in development lost their database. The migration window was short. Teams scrambled to move to Neon, Turso, or self-hosted MySQL.
The lesson: serverless database providers are startups too. They pivot, they cut tiers, they shut down. Monitoring your database health endpoint gives you the earliest possible signal that something is changing — before the deprecation notice hits your inbox.
Turso: when edge replicas fall out of sync
Turso's value proposition is edge-local reads. Your SQLite database is replicated to dozens of edge locations, and reads hit the nearest replica at ~5ms latency. Writes go to the primary and propagate out.
The failure mode nobody warns you about: replication lag. An edge replica falls behind the primary. Your user writes data, then immediately reads it back — but the read hits a stale replica and returns old data. From the user's perspective, their write disappeared.
This is especially nasty because it's not a binary up/down failure. The database is "up" — it's just serving stale data. A standard health check won't catch it.
What you actually need to monitor:
```typescript
// Health endpoint that detects replication lag
app.get('/api/health', async (c) => {
  const start = Date.now()

  // Write a timestamp to the primary
  await db.execute(
    'INSERT OR REPLACE INTO _health (key, value) VALUES (?, ?)',
    ['last_check', Date.now().toString()]
  )

  // Read it back (hits edge replica)
  const result = await db.execute(
    'SELECT value FROM _health WHERE key = ?',
    ['last_check']
  )

  const latency = Date.now() - start
  const replicationLag = Date.now() - parseInt(result.rows[0].value)

  return c.json({
    db: 'ok',
    latency_ms: latency,
    replication_lag_ms: replicationLag,
    stale: replicationLag > 5000
  })
})
```

Monitor that endpoint. If replication_lag_ms exceeds your threshold, you know your edge replicas are behind — before users start reporting "ghost" data loss.
Upstash: Redis and Kafka with serverless caveats
Upstash provides serverless Redis and Kafka endpoints, billed per request. It's popular for rate limiting, caching, and session storage in edge deployments — Vercel, Cloudflare Workers, Netlify.
The serverless model means there's no persistent connection. Every operation is an HTTP request to the Upstash REST API. That's great for edge compatibility, but it introduces a failure mode traditional Redis doesn't have: HTTP endpoint availability.
If you're using Upstash Redis for rate limiting, an outage doesn't just mean slower responses — it means your rate limiter is gone. Depending on your fallback logic, that could mean either rejecting all requests (fail closed) or allowing unlimited traffic (fail open). Neither is good.
Monitor the Upstash REST endpoint directly. A simple GET to your app's health endpoint that performs a Redis PING via the Upstash SDK tells you immediately if the connection is healthy.
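A minimal sketch of that health check, assuming the `@upstash/redis` SDK and its standard environment variables. The status mapping is pulled into a pure function so it's testable without a network call:

```typescript
// Map an Upstash PING reply to a health status. Pure, so it can be
// unit-tested offline; the live check is sketched below.
export function pingStatus(pong: unknown): 'ok' | 'degraded' {
  return pong === 'PONG' ? 'ok' : 'degraded'
}

// Live check sketch (assumes @upstash/redis is installed and the standard
// UPSTASH_REDIS_REST_URL / UPSTASH_REDIS_REST_TOKEN env vars are set):
//
//   import { Redis } from '@upstash/redis'
//   const redis = Redis.fromEnv()
//   const start = Date.now()
//   const pong = await redis.ping() // one HTTPS request, no persistent socket
//   return { redis: pingStatus(pong), latency_ms: Date.now() - start }
```

Wire that into your app's /api/health response and you'll know your rate limiter is down before it fails open (or closed) in production.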
What to monitor: the checklist
Regardless of which serverless database you're using, here's the monitoring setup that catches problems before users report them:
Application health endpoint with DB query. Don't just ping the database host. Build a /api/health endpoint in your app that runs an actual query. SELECT 1 is fine for connectivity. A count on a core table is better — it proves the schema is intact too.
Latency thresholds, not just up/down. A 200 response that took 8 seconds is worse than a quick 503. Set your monitor to flag responses above a latency threshold — 2 seconds for cold starts, 500ms for warm.
Auth and API services separately. If you're on Supabase, monitor GoTrue independently. If you're using Neon with their Auth offering, same thing. Authentication failures have outsized user impact.
Check frequency that matches your cold start window. If your database scales to zero after 5 minutes of inactivity, a 10-minute check interval means every single check hits a cold start. Use 30-second checks to keep the database warm and detect issues during the brief window where it's spinning up.
Multi-region checks. Your Neon database is in us-east-1, but your users are global. A check from Europe tells you what European users experience — including cross-region latency, DNS resolution time, and TLS handshake overhead.
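Put together, the latency-threshold idea from the checklist reduces to a small verdict function. The 500ms and 2000ms numbers are the suggestions from above, not universal constants; tune them to your app:

```typescript
// Evaluate one check result against the checklist's suggested thresholds.
// warmMs/coldMs are assumptions from the text above, not fixed rules.
export type CheckVerdict = 'up' | 'degraded' | 'down'

export function evaluateCheck(
  status: number,
  latencyMs: number,
  opts = { warmMs: 500, coldMs: 2000 }
): CheckVerdict {
  if (status !== 200) return 'down'
  if (latencyMs > opts.coldMs) return 'down'     // a 200 this slow is still an outage
  if (latencyMs > opts.warmMs) return 'degraded' // a warm database should answer faster
  return 'up'
}
```

The three-state verdict is the point: "degraded" is the early-warning band where cold starts are getting slower but users haven't started timing out yet.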
The hidden benefit: monitoring keeps your database warm
Here's something most monitoring guides won't tell you: a 30-second health check that queries your database prevents it from scaling to zero in the first place.
Neon's default scale-to-zero timeout is 5 minutes of inactivity. If your monitor hits the database every 30 seconds, the compute node never idles long enough to shut down. Your users never hit a cold start. The monitoring check itself becomes a keep-alive.
This works for Neon, Supabase (paused projects on free tier), and Turso. The trade-off: because the compute never idles, you consume compute hours around the clock, not just the near-zero cost of the queries themselves. A SELECT 1 every 30 seconds is essentially free per query, but on metered plans the always-on compute can add up, so check your tier's compute-hour allowance before relying on this.
For most projects the math still works out: the same 30-second check buys you monitoring AND warm starts. That's a trade worth making.
How to set this up with Uptrack
Uptrack runs HTTP checks from three continents (Europe, Asia, and North America) with consensus-based alerting. That means a CDN blip in one region won't page you at 3am — all three regions must agree your endpoint is down before an alert fires.
For a typical serverless database setup, here's what we'd recommend:
Monitor: "Production DB Health"
URL: https://your-app.com/api/health
Interval: 30 seconds
Expected: HTTP 200 + response body contains "ok"
Timeout: 10 seconds (generous, covers cold starts)
Monitor: "Supabase Auth"
URL: https://your-project.supabase.co/auth/v1/health
Interval: 1 minute
Expected: HTTP 200
Monitor: "API Latency"
URL: https://your-app.com/api/health
Interval: 30 seconds
Expected: HTTP 200 within 2000ms

The 30-second checks double as keep-alive pings. The 10-second timeout accommodates worst-case cold starts without masking real outages. Multi-region consensus filters out false positives from transient network issues.
The bottom line
Serverless databases are an excellent choice for most applications. The cost savings are real. The developer experience is better than managing your own Postgres. But the "serverless" part means you've traded operational control for convenience — and that trade requires monitoring to be safe.
Fauna proved that a database provider can disappear. PlanetScale proved that pricing tiers can vanish overnight. Neon's cold starts prove that scale-to-zero has real latency implications. Turso's edge replicas prove that "up" doesn't always mean "correct."
Monitor the endpoints your users actually hit. Check frequently enough to catch cold start failures. Use multi-region checks to distinguish between real outages and network noise. And keep your database warm while you're at it.
Monitor your serverless database stack
50 free monitors — 10 at 30-second checks, 40 at 1-minute. No credit card required.
Start Monitoring Free