
fix(status): measure bandwidth at post-send, not at enqueue#466

Merged
stackia merged 2 commits into main from fix/bandwidth-actual-send-rate on May 5, 2026
Conversation

@stackia
Owner

@stackia stackia commented May 5, 2026

Summary

Per-client bandwidth on the status page tracked the rate at which upstream data was queued into the client send buffer, not the rate at which it was actually delivered to the client. For slow clients, the displayed bandwidth stayed pinned at the upstream supply rate until the queue hit its high-water mark (HWM) and backpressure kicked in — overstating true client throughput.
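
For context, this is roughly how the 1s tick turns the byte counter into the displayed number — a minimal sketch, with illustrative field and function names rather than the actual rtp2httpd definitions:

```c
/* Sketch only: how a 1s tick turns a byte counter into displayed
 * bandwidth. Field and function names here are illustrative, not the
 * actual rtp2httpd definitions. */
#include <stdint.h>

typedef struct
{
    uint64_t total_bytes_sent; /* the counter this PR relocates */
    uint64_t last_tick_bytes;  /* snapshot taken on the previous tick */
    uint64_t bandwidth_bps;    /* what the status page displays */
} stream_status_sketch_t;

/* Called once per second; whatever total_bytes_sent measures --
 * enqueued vs. actually sent -- is exactly what gets displayed. */
static void tick_bandwidth(stream_status_sketch_t *st)
{
    uint64_t delta = st->total_bytes_sent - st->last_tick_bytes;
    st->bandwidth_bps = delta * 8; /* bytes over 1 s -> bits per second */
    st->last_tick_bytes = st->total_bytes_sent;
}
```

The tick itself doesn't change in this PR — only where the counter it samples gets incremented.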

Fix

Move the byte counter from the three upstream-receive sites in stream_handle_fd_event() (src/stream.c) to inside connection_handle_write()'s zerocopy_send() loop (src/connection.c). The counter now advances only when the kernel TCP stack accepts bytes, so the 1s tick in stream_tick() reports the real client receive rate even before the send queue fills.

When the client is slow, sendmsg() starts returning EAGAIN once the kernel TCP send buffer is full, and bytes_sent stays at 0 until the receiver acks more data — making the displayed bandwidth converge to the actual link rate. For fast clients (kernel buffer never fills), behavior is unchanged.

Side benefit: total_bytes_sent_cumulative and client_bytes_cumulative no longer over-count bytes that were enqueued but lost on disconnect.
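
The shape of the new accounting, as a sketch — the queue layout, helper names, and the zerocopy_send() signature here are assumptions for illustration, not the actual connection.c code:

```c
/* Minimal model of the post-send accounting described above; not the
 * actual rtp2httpd connection.c. zerocopy_send() is assumed to behave
 * like a non-blocking send(): bytes accepted, or -1 with errno set. */
#include <errno.h>
#include <stdint.h>
#include <sys/types.h>

typedef struct send_buf
{
    const uint8_t *data;
    size_t len;                 /* bytes remaining in this buffer */
    struct send_buf *next;
} send_buf_t;

typedef struct
{
    int fd;                     /* client socket, O_NONBLOCK */
    send_buf_t *queue;          /* pending client send buffers */
    uint64_t *total_bytes_sent; /* the relocated status counter */
} connection_sketch_t;

ssize_t zerocopy_send(int fd, const void *data, size_t len); /* assumed */

static int handle_write_sketch(connection_sketch_t *conn)
{
    while (conn->queue != NULL)
    {
        send_buf_t *buf = conn->queue;
        ssize_t n = zerocopy_send(conn->fd, buf->data, buf->len);

        if (n < 0)
        {
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                return 0; /* kernel TCP buffer full: stop, count nothing */
            return -1;    /* hard error: caller tears the connection down */
        }

        /* Advance the counter only for bytes the kernel accepted, so
         * the 1s tick sees the real client receive rate. */
        *conn->total_bytes_sent += (uint64_t)n;

        buf->data += n;
        buf->len -= (size_t)n;
        if (buf->len > 0)
            return 0;            /* partial write: resume on next EPOLLOUT */

        conn->queue = buf->next; /* buffer fully sent; drop it */
    }
    return 0;
}
```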

Test plan

  • Builds clean (cmake --build build, no warnings)
  • Slow-client functional test: throttle loopback (tc qdisc add dev lo root tbf rate 1mbit ...), connect a normal curl to a stream, confirm status-page bandwidth converges to ~1 Mbit/s rather than upstream rate
  • Fast-client functional test: confirm normal stream still reports the upstream rate (unchanged)
  • e2e suite: e2e/run-e2e.sh

Per-client bandwidth on the status page was tracking the rate at which
upstream data was queued into the client send buffer, not the rate at
which the kernel actually delivered it. For slow clients, this stayed
pinned at the upstream supply rate until the queue hit HWM and
backpressure kicked in — overstating real client throughput.

Move the byte counter from the three upstream-receive sites in
stream_handle_fd_event() to inside connection_handle_write()'s
zerocopy_send() loop. The counter now advances only when the kernel
TCP stack accepts bytes, so the 1s tick in stream_tick() reports the
real client receive rate even before the queue fills.

Side benefit: total_bytes_sent_cumulative no longer over-counts bytes
that were enqueued but lost on disconnect.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Copilot AI review requested due to automatic review settings May 5, 2026 14:10
@github-actions
Contributor

github-actions Bot commented May 5, 2026

Azure Static Web Apps: Your stage site is ready! Visit it here: https://thankful-water-0a297bf00-466.eastasia.1.azurestaticapps.net


@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: eaa129b097


Comment thread: src/connection.c
Contributor

Copilot AI left a comment


Pull request overview

This PR moves client traffic accounting from upstream receive time to the client socket write path so /status reports bytes/bandwidth based on what the server actually manages to send. In the rtp2httpd codebase, that directly affects per-client status metrics and disconnected-client totals.

Changes:

  • Remove enqueue-time total_bytes_sent updates from RTSP/HTTP-proxy receive paths in src/stream.c.
  • Add post-send total_bytes_sent updates in connection_handle_write() after zerocopy_send().
  • Aim to make slow-client bandwidth reporting match actual downstream delivery.

Review findings:

  • Blocking: multicast/FCC paths still increment total_bytes_sent at enqueue time elsewhere, so this new write-path counter double-counts those stream types.
  • No automated regression coverage appears to validate /status bytesSent / currentBandwidth behavior.

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.

  • src/stream.c: Removes queue-time byte accounting from RTSP and HTTP proxy event handlers.
  • src/connection.c: Adds write-time byte accounting in the shared client send loop.


Comment thread: src/connection.c
Comment thread: src/connection.c
The previous commit only removed enqueue-time `total_bytes_sent`
increments from stream.c, missing four equivalent sites in
fcc_handle_unicast_media(), fcc_handle_mcast_active() (×2), and
mcast_session_handle_event().  For multicast/FCC streams, bytes were
being counted at both enqueue (these sites) and post-send (the new
connection.c site), so the displayed bandwidth was roughly doubled
(reported ~17 Mbit/s for an 8 Mbit/s stream).

Drop the four leftover increments so post-send is the sole counter.
The local `flushed_bytes` accumulator in fcc_handle_mcast_active() is
preserved — it still drives the diagnostic "Flushed pending buffer
chain" log line.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
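
For illustration, the rough shape of one of the four removed sites — a hypothetical before/after modeled on the identifiers above, not the actual diff:

```c
/* Hypothetical shape of one removed enqueue-time site (e.g. in
 * mcast_session_handle_event()); names are illustrative only. */
#include <stdint.h>
#include <sys/socket.h>
#include <sys/types.h>

extern void queue_to_client(int client_fd, const uint8_t *data, size_t len);

static void handle_mcast_packet_sketch(int mcast_fd, int client_fd,
                                       uint64_t *total_bytes_sent)
{
    uint8_t pkt[2048];
    ssize_t n = recv(mcast_fd, pkt, sizeof(pkt), 0);
    if (n > 0)
    {
        queue_to_client(client_fd, pkt, (size_t)n); /* enqueue for send */

        /* REMOVED: *total_bytes_sent += (uint64_t)n;
         * With the post-send counter in connection_handle_write(), this
         * enqueue-time increment counted every byte twice. */
        (void)total_bytes_sent;
    }
}
```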

@stackia stackia force-pushed the fix/bandwidth-actual-send-rate branch from 45a919f to 7f591b9 Compare May 5, 2026 14:21

@stackia stackia merged commit 279b1be into main May 5, 2026
10 checks passed
@stackia stackia deleted the fix/bandwidth-actual-send-rate branch May 5, 2026 14:25