Webhook payload reference
When an incident happens, Uptrack sends a POST request to your webhook URL with a JSON payload. This page documents every field you will receive for each event type.
Event types
Uptrack sends three types of webhook events:
| Event | When it fires |
|---|---|
| monitor.down | A monitor fails its check and an incident is created |
| monitor.up | A monitor recovers and the incident is resolved |
| ssl.expiring | An SSL certificate will expire within the configured threshold |
monitor.down
Sent when a monitor fails its health check and a new incident is created.
```json
{
  "event": "monitor.down",
  "timestamp": "2026-04-01T08:23:14Z",
  "monitor": {
    "id": "mon_7kQ3xR9pWm",
    "name": "Production API",
    "url": "https://api.example.com/health",
    "check_interval": 60,
    "protocol": "https"
  },
  "incident": {
    "id": "inc_Nv4bL2xHcT",
    "started_at": "2026-04-01T08:23:14Z",
    "cause": "HTTP 503 Service Unavailable"
  },
  "check": {
    "status_code": 503,
    "response_time_ms": 2340,
    "region": "eu-west",
    "error": null
  }
}
```

Field descriptions
| Field | Type | Description |
|---|---|---|
| event | string | Always "monitor.down" for this event type |
| timestamp | string | ISO 8601 timestamp of when the event was emitted |
| monitor.id | string | Unique monitor identifier |
| monitor.name | string | Human-readable monitor name |
| monitor.url | string | The URL being monitored |
| monitor.check_interval | integer | Check frequency in seconds (60, 120, or 180) |
| monitor.protocol | string | "https", "http", or "tcp" |
| incident.id | string | Unique incident identifier |
| incident.started_at | string | ISO 8601 timestamp of when the incident started |
| incident.cause | string | Human-readable reason for the failure |
| check.status_code | integer or null | HTTP status code, or null if the request timed out or DNS resolution failed |
| check.response_time_ms | integer or null | Response time in milliseconds, or null if the target was unreachable |
| check.region | string | Region the check ran from (e.g. "eu-west", "us-east") |
| check.error | string or null | Low-level error message if the request did not complete (e.g. "timeout", "dns_resolution_failed") |
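The three nullable check fields travel together: when the request never completed, status_code and response_time_ms are null and error carries the reason. A minimal sketch of a consumer-side helper (describe_check is a hypothetical name, not part of Uptrack) that builds a log line while handling both shapes:

```python
def describe_check(check: dict) -> str:
    """Summarize a check object, tolerating the null fields on failed requests."""
    # status_code, response_time_ms, and error may each be None when the
    # request timed out or DNS resolution failed.
    if check["status_code"] is not None:
        detail = f"HTTP {check['status_code']} in {check['response_time_ms']}ms"
    else:
        detail = f"no response ({check['error']})"
    return f"{check['region']}: {detail}"

print(describe_check({"status_code": 503, "response_time_ms": 2340,
                      "region": "eu-west", "error": None}))
# eu-west: HTTP 503 in 2340ms
print(describe_check({"status_code": None, "response_time_ms": None,
                      "region": "us-east", "error": "timeout"}))
# us-east: no response (timeout)
```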
monitor.up
Sent when a monitor recovers and the open incident is resolved.
```json
{
  "event": "monitor.up",
  "timestamp": "2026-04-01T08:31:47Z",
  "monitor": {
    "id": "mon_7kQ3xR9pWm",
    "name": "Production API",
    "url": "https://api.example.com/health",
    "check_interval": 60,
    "protocol": "https"
  },
  "incident": {
    "id": "inc_Nv4bL2xHcT",
    "started_at": "2026-04-01T08:23:14Z",
    "resolved_at": "2026-04-01T08:31:47Z",
    "duration_seconds": 513
  },
  "check": {
    "status_code": 200,
    "response_time_ms": 187,
    "region": "eu-west",
    "error": null
  }
}
```

Additional fields
| Field | Type | Description |
|---|---|---|
| incident.resolved_at | string | ISO 8601 timestamp of when the incident was resolved |
| incident.duration_seconds | integer | Total downtime duration in seconds |
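duration_seconds is simply the gap between started_at and resolved_at. A sketch of recomputing it from the two ISO 8601 strings, useful as a sanity check when storing incidents (note that Python's fromisoformat only accepts a trailing "Z" from 3.11 on, hence the replace):

```python
from datetime import datetime

def downtime_seconds(started_at: str, resolved_at: str) -> int:
    """Recompute incident.duration_seconds from the two ISO 8601 timestamps."""
    # Python < 3.11 rejects a trailing "Z" in fromisoformat, so map it to +00:00.
    start = datetime.fromisoformat(started_at.replace("Z", "+00:00"))
    end = datetime.fromisoformat(resolved_at.replace("Z", "+00:00"))
    return int((end - start).total_seconds())

# Matches duration_seconds in the sample payload above
print(downtime_seconds("2026-04-01T08:23:14Z", "2026-04-01T08:31:47Z"))  # 513
```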
The monitor and check objects have the same shape as in monitor.down. The incident.cause field is omitted on recovery events.
ssl.expiring
Sent when an SSL certificate is approaching its expiration date.
```json
{
  "event": "ssl.expiring",
  "timestamp": "2026-04-01T06:00:00Z",
  "monitor": {
    "id": "mon_7kQ3xR9pWm",
    "name": "Production API",
    "url": "https://api.example.com/health",
    "check_interval": 60,
    "protocol": "https"
  },
  "ssl": {
    "issuer": "Let's Encrypt Authority X3",
    "subject": "api.example.com",
    "expires_at": "2026-04-15T12:00:00Z",
    "days_remaining": 14
  }
}
```

Field descriptions
| Field | Type | Description |
|---|---|---|
| ssl.issuer | string | Certificate authority that issued the certificate |
| ssl.subject | string | Domain the certificate covers |
| ssl.expires_at | string | ISO 8601 expiration timestamp of the certificate |
| ssl.days_remaining | integer | Number of days until the certificate expires |
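days_remaining is derived from expires_at relative to when the event is sent, so you can recompute it on your side if you store only expires_at. A sketch, assuming whole-day truncation:

```python
from datetime import datetime, timezone

def days_remaining(expires_at, now=None):
    """Whole days between now and the certificate's expiry timestamp."""
    expiry = datetime.fromisoformat(expires_at.replace("Z", "+00:00"))
    now = now or datetime.now(timezone.utc)
    # timedelta.days truncates partial days, matching an integer day count
    return (expiry - now).days

# Using the event's own timestamp as "now" reproduces the sample payload
sent = datetime.fromisoformat("2026-04-01T06:00:00+00:00")
print(days_remaining("2026-04-15T12:00:00Z", now=sent))  # 14
```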
Request headers
Every webhook request includes these headers:
```
Content-Type: application/json
User-Agent: Uptrack-Webhook/1.0
X-Uptrack-Event: monitor.down
```

| Header | Type | Description |
|---|---|---|
| Content-Type | string | Always "application/json" |
| User-Agent | string | Always "Uptrack-Webhook/1.0" |
| X-Uptrack-Event | string | The event type ("monitor.down", "monitor.up", or "ssl.expiring") |
Verifying webhook requests
To confirm a request is from Uptrack, check the User-Agent header:
```js
// Verify the request is from Uptrack
if (req.headers['user-agent'] !== 'Uptrack-Webhook/1.0') {
  return res.status(403).send('Forbidden')
}
```

Note that the User-Agent header can be spoofed, so treat this check as a first filter only. For additional security, restrict incoming requests to Uptrack's IP ranges or use a secret token in your webhook URL (e.g. https://your-server.com/webhooks/uptrack?token=your-secret).
Code examples
Receive and parse Uptrack webhook payloads in your language of choice.
Node.js (Express)
```js
const express = require('express')
const app = express()

app.use(express.json())

app.post('/webhooks/uptrack', (req, res) => {
  const { event, monitor, incident, check, ssl } = req.body

  switch (event) {
    case 'monitor.down':
      console.log(`[DOWN] ${monitor.name} — ${incident.cause}`)
      // Page your on-call team, create a ticket, etc.
      break
    case 'monitor.up':
      console.log(`[UP] ${monitor.name} — was down for ${incident.duration_seconds}s`)
      // Resolve the ticket, notify the channel
      break
    case 'ssl.expiring':
      console.log(`[SSL] ${monitor.name} — cert expires in ${ssl.days_remaining} days`)
      // Open a renewal task
      break
  }

  res.status(200).send('OK')
})

app.listen(3000)
```

Python (Flask)
```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/webhooks/uptrack', methods=['POST'])
def uptrack_webhook():
    data = request.get_json()
    event = data['event']
    monitor = data['monitor']

    if event == 'monitor.down':
        incident = data['incident']
        print(f"[DOWN] {monitor['name']} — {incident['cause']}")
    elif event == 'monitor.up':
        incident = data['incident']
        print(f"[UP] {monitor['name']} — down for {incident['duration_seconds']}s")
    elif event == 'ssl.expiring':
        ssl_info = data['ssl']
        print(f"[SSL] {monitor['name']} — {ssl_info['days_remaining']} days left")

    return jsonify(status='ok'), 200
```

Elixir (Phoenix)
```elixir
defmodule MyAppWeb.WebhookController do
  use MyAppWeb, :controller

  def uptrack(conn, %{"event" => "monitor.down"} = params) do
    monitor = params["monitor"]
    incident = params["incident"]
    IO.puts("[DOWN] #{monitor["name"]} — #{incident["cause"]}")
    json(conn, %{status: "ok"})
  end

  def uptrack(conn, %{"event" => "monitor.up"} = params) do
    monitor = params["monitor"]
    incident = params["incident"]
    IO.puts("[UP] #{monitor["name"]} — down for #{incident["duration_seconds"]}s")
    json(conn, %{status: "ok"})
  end

  def uptrack(conn, %{"event" => "ssl.expiring"} = params) do
    monitor = params["monitor"]
    ssl = params["ssl"]
    IO.puts("[SSL] #{monitor["name"]} — #{ssl["days_remaining"]} days left")
    json(conn, %{status: "ok"})
  end
end
```

Testing webhooks locally
Use one of these tools to inspect webhook payloads before writing any code:
webhook.site
Go to webhook.site to get a unique URL. Paste it as your webhook URL in Uptrack, then trigger a test alert. You will see the exact headers and payload in your browser.
ngrok
Run ngrok http 3000 to expose your local server. Use the generated URL as your webhook endpoint in Uptrack. Requests hit your local code so you can debug with breakpoints.
Integration ideas
Custom Slack formatting
Parse the webhook payload and post a rich Slack message with Block Kit. Include the monitor name, downtime duration, and a direct link to your status page.
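A sketch of building such a message for a monitor.down payload, using standard Slack Block Kit structure (the function name and emoji choice are illustrative, and you would still need to POST the result to your Slack webhook or API endpoint):

```python
def slack_blocks(payload: dict) -> dict:
    """Build a minimal Slack Block Kit body for a monitor.down event."""
    monitor, incident = payload["monitor"], payload["incident"]
    return {
        "blocks": [
            # A header block requires plain_text; sections may use mrkdwn.
            {"type": "header",
             "text": {"type": "plain_text",
                      "text": f":red_circle: {monitor['name']} is down"}},
            {"type": "section",
             "text": {"type": "mrkdwn",
                      "text": f"*Cause:* {incident['cause']}\n"
                              f"*Since:* {incident['started_at']}"}},
        ]
    }
```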
PagerDuty trigger
Forward monitor.down events to PagerDuty's Events API v2 to page your on-call engineer. Auto-resolve when monitor.up arrives.
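One way to sketch that mapping: build the Events API v2 request body from the webhook, reusing incident.id as the dedup_key so the later monitor.up event resolves the same PagerDuty alert. This constructs the body only; sending it (a POST to PagerDuty's enqueue endpoint with your integration's routing key) is left out:

```python
def to_pagerduty(event: dict, routing_key: str) -> dict:
    """Map an Uptrack webhook payload to a PagerDuty Events API v2 body."""
    action = "trigger" if event["event"] == "monitor.down" else "resolve"
    body = {
        "routing_key": routing_key,
        "event_action": action,
        # Reusing incident.id lets monitor.up resolve the alert it opened.
        "dedup_key": event["incident"]["id"],
    }
    if action == "trigger":
        body["payload"] = {
            "summary": f"{event['monitor']['name']}: {event['incident']['cause']}",
            "source": event["monitor"]["url"],
            "severity": "critical",
        }
    return body
```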
Zapier / Make
Use a Zapier "Catch Hook" trigger to receive Uptrack webhooks, then route them to Google Sheets, Jira, Linear, or any of 5,000+ apps without writing code.
Incident log database
Store every webhook payload in a database table. Query historical incidents, calculate MTTR, or build your own uptime reporting dashboard.
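For example, MTTR falls out directly from the duration_seconds field of stored monitor.up payloads. A minimal sketch (mttr_seconds is a hypothetical helper, not an Uptrack API):

```python
def mttr_seconds(incidents: list) -> float:
    """Mean time to resolution across stored monitor.up payloads."""
    durations = [i["incident"]["duration_seconds"] for i in incidents]
    return sum(durations) / len(durations) if durations else 0.0

print(mttr_seconds([
    {"incident": {"duration_seconds": 513}},
    {"incident": {"duration_seconds": 87}},
]))  # 300.0
```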
Start receiving webhook alerts
5 monitors free — all at 30-second checks. No credit card required.
Get Started Free