Gregory Komissarov

Introducing TCP-Level Telemetry: See Beyond HTTP

Most monitoring tools stop at HTTP status codes. Oack now captures TCP-level metrics — RTT, retransmits, congestion window — so you can diagnose network issues before they become outages.

product · tcp · telemetry

Most monitoring tools tell you what happened — a 500 error, a timeout, a slow response. But they rarely tell you why. Was it the application? The database? Or something deeper in the network stack?

Today we’re launching TCP-level telemetry in Oack, giving you visibility into metrics that were previously invisible to monitoring tools.

What we capture

Every probe now collects real-time TCP socket statistics:

  • Round-trip time (RTT) — actual network latency, not just TTFB
  • Retransmits — packet loss that slows everything down
  • Congestion window — how much data the kernel is willing to send
  • RTO (Retransmission Timeout) — early warning of network degradation

The full picture: dashboard vs. API

The Oack web interface shows a curated subset of TCP metrics — the ones most useful for day-to-day troubleshooting. But under the hood, every probe captures the complete tcp_info struct returned by the operating system. If you query the API, you get all of the raw numbers the kernel reported on that connection: receive window, send window, path MTU, advmss, delivery rate, and dozens more.

Note: tcp_info is exposed through a Linux-specific socket option (getsockopt with TCP_INFO), so the full TCP statistics are only available from network testers running on Linux. Checkers on macOS and FreeBSD still collect HTTP timing and basic connectivity data, but the deep kernel-level TCP metrics require a Linux host.
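As a sketch of what reading those kernel statistics looks like, here is a minimal Python decoder for a raw tcp_info buffer. The field offsets follow the stable prefix of struct tcp_info in linux/tcp.h; this is an illustration under those assumptions, not Oack's actual implementation.

```python
import socket
import struct

# Byte offsets of selected u32 fields in Linux's struct tcp_info
# (stable prefix from linux/tcp.h; "_us" fields are microseconds).
TCP_INFO_OFFSETS = {
    "rto_us": 8,       # retransmission timeout
    "lost": 32,        # segments currently considered lost
    "retrans": 36,     # segments currently being retransmitted
    "pmtu": 60,        # path MTU
    "rtt_us": 68,      # smoothed round-trip time
    "rttvar_us": 72,   # RTT variance
    "snd_cwnd": 80,    # congestion window, in segments
}

def parse_tcp_info(raw: bytes) -> dict:
    """Decode selected fields from a raw tcp_info buffer (native byte order)."""
    return {name: struct.unpack_from("=I", raw, off)[0]
            for name, off in TCP_INFO_OFFSETS.items()}

def read_tcp_info(sock: socket.socket) -> dict:
    """Fetch and decode tcp_info for a connected socket (Linux only)."""
    raw = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, 192)
    return parse_tcp_info(raw)
```

On any connected TCP socket on Linux, `read_tcp_info(sock)["rtt_us"]` gives the kernel's smoothed RTT for that connection without any extra packets on the wire.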

Why RTT matters more than you think

Consider a common scenario: your API response time spikes from 200ms to 2 seconds. Traditional monitoring shows a slow response. But is it the app, or the network?

With TCP telemetry, you can immediately see if RTT jumped (network issue), retransmits spiked (packet loss), or congestion window collapsed (bandwidth problem). No more guessing.
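To make that triage concrete, here is a toy heuristic comparing two metric samples. The 2x thresholds and field names are illustrative assumptions, not Oack defaults.

```python
def diagnose(before: dict, after: dict) -> str:
    """Compare two TCP metric samples and guess the likely cause of a slowdown.
    Thresholds are illustrative; real alerting would use baselines over time."""
    if after["rtt_us"] > 2 * before["rtt_us"]:
        return "network latency (RTT jumped)"
    if after["retrans"] > before["retrans"]:
        return "packet loss (retransmits rising)"
    if after["snd_cwnd"] < before["snd_cwnd"] / 2:
        return "bandwidth (congestion window collapsed)"
    return "likely application-side"
```

Feeding it a baseline sample and the sample taken during the spike turns "the API is slow" into a specific first suspect.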

RTT is especially valuable when the target’s response is small enough to fit in a single TCP segment. In that case, the HTTP timing breakdown can’t distinguish network latency from server processing time — the “wait” and “receive” phases collapse into one, because the entire reply arrives in the same packet. But the kernel-level RTT tells you exactly how long a round trip over the TCP stack takes between the network tester and the target. Subtract the RTT from the total wait time, and you know how much of the delay was actually spent processing the request on the target’s side.
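The subtraction described above is simple but worth pinning down. A sketch, with units in milliseconds and the function name a hypothetical:

```python
def server_processing_ms(wait_ms: float, rtt_ms: float) -> float:
    """Estimate server-side processing time: total wait minus one network
    round trip. Clamped at zero, since measurement noise can push the
    difference slightly negative."""
    return max(wait_ms - rtt_ms, 0.0)

# Example: a 180 ms wait with a 30 ms kernel RTT leaves ~150 ms of server work.
```

If the result is near zero, the delay was almost entirely network; if it dominates the wait, the target itself is slow.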

How to use it

TCP metrics appear automatically on every probe detail page. No configuration needed — if you’re already monitoring with Oack, you have TCP telemetry.

Navigate to any probe, scroll to the TCP Statistics section, and you’ll see real-time charts for all four metrics. Click any data point to correlate with HTTP response times on the same timeline.

Built from scratch, built to extend

Our network testers are not wrappers around curl or existing HTTP libraries. We wrote them from scratch specifically for this project. That means we control the full request lifecycle — from DNS resolution and TCP handshake through TLS negotiation to response parsing — and we can instrument every layer.

This architecture makes it straightforward to add new protocols, new metrics, and new behaviors. If there’s a metric you need that we don’t expose yet, or a protocol you’d like us to support — let us know. We can add it quickly.

What’s next

We’re working on TCP-based alerting — get notified when retransmits exceed a threshold, even if HTTP responses still return 200. Because by the time HTTP fails, your users have already noticed.

Start monitoring with Oack

Get TCP telemetry, 5-second alerts, and global coverage — free to start.

Get started free