What is Latency?
Definition
Latency is the time delay between sending a request and receiving a response. In web monitoring, it is typically measured in milliseconds (ms) and represents how long it takes for a server to process and return a response to an HTTP request.
Several factors contribute to latency: network distance between the client and server, DNS resolution time, the TCP/TLS handshake, server processing time, and response transfer time. Response times under 200ms are generally considered good, while anything above 1 second feels noticeably slow.
Latency is different from bandwidth. You can have a high-bandwidth connection with high latency (like satellite internet) or a low-bandwidth connection with low latency. For web applications, latency usually matters more than bandwidth because most API responses are small.
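As a concrete illustration, here is a minimal Python sketch of measuring end-to-end latency for a single HTTP request. The function name and timeout are illustrative, and it reports the whole request/response cycle as one number rather than breaking out the individual phases (DNS, handshake, server time, transfer):

```python
import time
import urllib.request

def measure_latency_ms(url: str, timeout: float = 10.0) -> float:
    """Time one HTTP GET from request start until the full body is read.

    Captures the end-to-end delay (DNS + handshake + server processing
    + transfer) as a single value in milliseconds.
    """
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()  # include response transfer time, not just time-to-first-byte
    return (time.perf_counter() - start) * 1000.0
```

Reading the full body matters: stopping at the first byte would ignore response transfer time, which is one of the components listed above.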
Why it matters
High latency degrades the user experience even when your service is technically "up." Widely cited industry studies suggest that every 100ms of added latency can reduce conversion rates by roughly 1%. Users do not distinguish between "slow" and "broken."
Latency is also an early warning signal. A gradual increase in response times often precedes a full outage. If your API responses go from 200ms to 2 seconds over a week, something is wrong — a memory leak, a growing database query, or degrading infrastructure.
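A drift like that can be caught by comparing recent samples against an earlier baseline. Here is a minimal sketch; the window sizes and threshold factor are arbitrary assumptions for illustration, not any particular product's detection algorithm:

```python
def latency_degrading(samples_ms, baseline_n=50, recent_n=10, factor=2.0):
    """Flag degradation when the average of the most recent samples
    exceeds `factor` times the baseline average.

    `samples_ms` is a chronological list of response times in ms.
    Window sizes and factor are illustrative, not tuned values.
    """
    if len(samples_ms) < baseline_n + recent_n:
        return False  # not enough history to judge a trend
    baseline = sum(samples_ms[:baseline_n]) / baseline_n
    recent = sum(samples_ms[-recent_n:]) / recent_n
    return recent >= factor * baseline
```

With this rule, a service that held steady at 200ms and then drifted to 2 seconds would be flagged well before it failed outright.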
How Uptrack helps
Uptrack measures response time for every check and tracks it over time. You can see latency trends across hours, days, and weeks, making it easy to spot degradation before it becomes an outage.
With 30-second check intervals, you get a dense dataset of response time measurements. This high-frequency data makes latency spikes visible instead of hiding them between 5-minute samples.
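To see why the check interval matters, consider how many probes land inside a short spike. A toy calculation, with made-up spike timings:

```python
def probes_inside_spike(interval_s, spike_start_s, spike_len_s, horizon_s):
    """Count probes (fired every `interval_s` seconds, starting at t=0)
    that fall inside a latency spike of `spike_len_s` seconds."""
    return sum(
        1
        for t in range(0, horizon_s, interval_s)
        if spike_start_s <= t < spike_start_s + spike_len_s
    )
```

For a 90-second spike starting at t=100s within a 10-minute window, 30-second probes land inside it three times (t=120, 150, 180), while 5-minute probes miss it entirely (t=0 and t=300 both fall outside the spike).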
Start monitoring your sites now
20 monitors free — 10 at 30s, 10 at 1min. No credit card required.
Start Monitoring Free