proto: add runtime set_stream_receive_window and FlowControlStats #2582
JavaPerformance wants to merge 3 commits into quinn-rs:main
Conversation
Add `Connection::set_stream_receive_window()` for runtime adjustment of the per-stream receive window. This completes the runtime window adjustment API — connection-level `set_receive_window()` and `set_send_window()` already exist.

The implementation is minimal because `stream_receive_window` is stored as a single field on `StreamsState` and passed by value to `max_stream_data()` on each flow control decision. Changing it immediately affects the next `MAX_STREAM_DATA` frame for any stream without iterating per-stream state. Only expansion is safe at the QUIC protocol level: `MAX_STREAM_DATA` is a one-way ratchet and previously advertised limits cannot be revoked.

Also adds `FlowControlStats` to `ConnectionStats` with:

- `send_credit_remaining`: peer's `MAX_DATA` minus data sent
- `recv_credit_remaining`: our `MAX_DATA` minus data received
- `streams_blocked_by_conn_fc`: streams blocked on connection flow control

These complement the existing `FrameStats.data_blocked` and `stream_data_blocked` counters (which count how many times the peer was blocked) with real-time utilization snapshots.
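For illustration, a minimal usage sketch assuming the API lands as proposed here (the per-stream setter and its `VarInt` argument are from this PR; the connection-level setter already exists in quinn):

```rust
use quinn::{Connection, VarInt};

// Grow the receive windows of an established connection at runtime.
// Only growing is safe: limits already advertised to the peer cannot be revoked.
fn grow_windows(conn: &Connection, window: VarInt) {
    conn.set_receive_window(window);        // connection-level (existing API)
    conn.set_stream_receive_window(window); // per-stream (added by this PR)
}
```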
Ralith
left a comment
> Applications that monitor path characteristics and adapt windows need to adjust both connection-level and stream-level windows at runtime.
Why not configure whatever upper bound you're willing to tolerate up front?
///
/// This is the peer's advertised `MAX_DATA` limit minus the sum of data offsets across all
/// send streams. When this reaches zero, the connection is send-blocked at the flow control
/// level and a `DATA_BLOCKED` frame will be emitted.
> and a `DATA_BLOCKED` frame will be emitted.
This is false. Quinn does not currently implement sending DATA_BLOCKED.
pub send_credit_remaining: u64,
/// Remaining receive credit on the connection
///
/// This is our advertised `MAX_DATA` limit minus the sum of data received across all receive
s/data/max offsets/ to match the precision above, perhaps?
/// Only expansion is safe at the QUIC protocol level. Shrinking does not revoke previously
/// advertised limits; it only reduces what is advertised on future updates.
pub(crate) fn set_stream_receive_window(&mut self, window: u64) {
    self.stream_receive_window = window;
Should this also trigger transmission of new MAX_STREAM_DATA frames where appropriate? If not, document why.
Good question. Two reasons: configuring a very large window up front felt like it wastes memory on connections that never need it, and over-advertising credit on a low-BDP path lets the sender burst more data than the path can absorb in one RTT, which invites bufferbloat.
`set_receive_window` already exists for the connection-level case, and the same reasoning applies at the stream level. On the doc comments:
- Remove false claim that DATA_BLOCKED frame is emitted (Quinn does not currently implement sending DATA_BLOCKED)
- Use "max offsets" instead of "data received" for recv_credit precision
- Document why set_stream_receive_window does not trigger explicit MAX_STREAM_DATA transmission (max_stream_data() reads the current value on every flow control evaluation)
That was clear, but why is that necessarily okay? There's no guarantee that flow control for a specific stream will be refreshed on any particular time scale.
Where is that memory wasted? These are upper bounds; we shouldn't actually commit the memory if we don't have data to store. If you're worried about badly behaved peers forcing excess memory consumption with synthetic workloads, such peers could just as well spoof your heuristic by faking high latency.
You're right — if a stream is idle, the new window won't be advertised until data flows again. In our use case this is acceptable because we only grow the window on connections that are actively transferring (we measure BDP from observed goodput). An idle stream doesn't need a larger window advertised. That said, if the expectation is that the change should take effect promptly even on idle streams, I can look into scheduling explicit MAX_STREAM_DATA updates.
Fair point — the receiver doesn't pre-allocate. The cost is on the sender side: a large advertised window lets the sender have more data in flight, which consumes sender-side send buffer memory. But you're right that this is the sender's problem, not the receiver's. The stronger motivation is the second one: over-advertising credit on a low-BDP path means the sender can burst far more data than the path can sustain in one RTT. That creates large queues at intermediate routers and inflates latency. Starting conservative and growing based on measured path capacity keeps the advertised credit proportional to what the network can absorb without bufferbloat. That said, for many applications, configuring the maximum tolerable window up front and letting the peer figure out pacing is perfectly fine. The runtime setter is most useful for long-lived connections (hours/days) where path characteristics change, or for relay proxies that handle heterogeneous paths and want to minimize aggregate in-flight data across many connections.
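A rough sketch of the adaptation loop described above (hypothetical: the goodput measurement and the 2x headroom policy are made up for illustration; only the two setters are quinn API, one of them added by this PR):

```rust
use std::time::Duration;
use quinn::{Connection, VarInt};

/// Hypothetical policy: grow the advertised windows toward the measured
/// bandwidth-delay product, never shrinking what has already been offered.
fn maybe_grow_windows(
    conn: &Connection,
    current_window: &mut u64,
    goodput_bytes_per_sec: u64,
    rtt: Duration,
) {
    // BDP = goodput * RTT; leave 2x headroom so flow control stays off the critical path.
    let bdp = goodput_bytes_per_sec.saturating_mul(rtt.as_millis() as u64) / 1000;
    let target = bdp.saturating_mul(2);
    if target > *current_window {
        *current_window = target;
        let window = VarInt::try_from(target).unwrap_or(VarInt::MAX);
        conn.set_receive_window(window);
        conn.set_stream_receive_window(window); // added by this PR
    }
}
```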
I'm happy relying on this assumption for simplicity so long as we document it, so there's no uncertainty if we need to revisit it in the future.
This sounds a lot like trying to implement congestion control at the application layer. Would it make more sense to use the congestion controller for this, and let senders set their own internal buffer sizes according to their preference?
Will do — I'll add a doc note that the new window only takes effect on the next natural flow control update, and idle streams won't see it until they become active again.
Fair criticism — that argument was weaker than I made it sound. The congestion controller should handle pacing, you're right. The simpler and honest motivation is API completeness; I'll update the PR description to focus on that rather than the bufferbloat rationale.
Document that idle streams won't see the updated window until they become active again, per reviewer feedback.
Ralith
left a comment
Please include tests that exercise the flow control update transmit path for both growing and shrinking the window. Of particular interest is the case where the window is reduced immediately after sending a flow control update.
/// value and advertise accordingly.
///
/// Only expansion is safe at the QUIC protocol level. Shrinking does not revoke previously
/// advertised limits; it only reduces what is advertised on future updates.
This comment is a good description of what the behavior should be, but I'm not convinced it's accurate to what the behavior is as written. For example, if a flow control credit update is scheduled for a stream whose receive window has been shrunk, the new value will be computed based on the latest stream_receive_window value, when instead it should be the high water mark.
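To make the concern concrete, a sketch of the clamping behavior described here, with hypothetical names rather than quinn-proto's actual types:

```rust
/// Hypothetical per-stream receive state.
struct RecvStream {
    /// Data the application has consumed so far.
    bytes_read: u64,
    /// High water mark: the largest limit we have already advertised.
    sent_max_stream_data: u64,
}

/// Limit to carry in the next MAX_STREAM_DATA frame. Clamping to the high water
/// mark keeps a shrunken `stream_receive_window` from producing a limit below
/// one the peer already holds.
fn next_max_stream_data(stream: &RecvStream, stream_receive_window: u64) -> u64 {
    let computed = stream.bytes_read.saturating_add(stream_receive_window);
    computed.max(stream.sent_max_stream_data)
}
```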
Summary
Two related additions:
1. `Connection::set_stream_receive_window(VarInt)`

   Runtime adjustment of the per-stream receive window. This completes the runtime window adjustment API — `set_receive_window()` (connection-level) and `set_send_window()` already exist.

2. `FlowControlStats` on `ConnectionStats`

Motivation
Stream window setter
On high-BDP paths (e.g., 250ms RTT cross-region), the default stream receive window can become a throughput bottleneck. Applications that monitor path characteristics and adapt windows need to adjust both connection-level and stream-level windows at runtime. Connection-level is already possible; stream-level is not.
The implementation is minimal because `stream_receive_window` is stored as a single field on `StreamsState` and passed by value to `max_stream_data()` on each flow control decision. Changing it immediately affects the next `MAX_STREAM_DATA` frame for any stream without iterating per-stream state.
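For illustration, a simplified sketch of that mechanism (a hypothetical stand-in, not quinn-proto's actual code): the window is a single field read whenever a stream's credit is recomputed, so a runtime change is picked up on the next evaluation without touching per-stream state.

```rust
/// Simplified stand-in for the streams state described above.
struct StreamsState {
    stream_receive_window: u64,
    // per-stream receive state elided
}

impl StreamsState {
    /// Runtime setter: a single field write, O(1) regardless of stream count.
    fn set_stream_receive_window(&mut self, window: u64) {
        self.stream_receive_window = window;
    }

    /// Evaluated each time a MAX_STREAM_DATA frame is considered for a stream,
    /// so it always sees the current window value.
    fn max_stream_data(&self, bytes_read: u64) -> u64 {
        bytes_read.saturating_add(self.stream_receive_window)
    }
}
```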
Only expansion is safe: `MAX_STREAM_DATA` is a one-way ratchet per RFC 9000 — previously advertised limits cannot be revoked. The doc comment makes this clear.

Flow control stats
`FrameStats` already exposes `data_blocked` and `stream_data_blocked` frame counts, which tell you how many times the peer was blocked. `FlowControlStats` complements this with real-time utilization:

- `send_credit_remaining` — how much headroom before we hit the peer's `MAX_DATA`
- `recv_credit_remaining` — how much headroom before the peer exhausts our `MAX_DATA`
- `streams_blocked_by_conn_fc` — number of streams currently waiting on connection-level flow control
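A minimal monitoring sketch, assuming the stats land as described in this PR (the `flow_control` field and its members are new; the 64 KiB threshold is arbitrary):

```rust
use quinn::Connection;

/// Sample flow control utilization and flag pressure in either direction.
fn check_flow_control(conn: &Connection) {
    let fc = conn.stats().flow_control;

    // Send side: credit granted by the peer's MAX_DATA that we have not yet used.
    if fc.send_credit_remaining == 0 || fc.streams_blocked_by_conn_fc > 0 {
        eprintln!(
            "send-blocked: {} streams waiting on connection-level flow control",
            fc.streams_blocked_by_conn_fc
        );
    }

    // Receive side: credit we have granted that the peer has not yet used.
    // A small value means the peer is about to be blocked by our MAX_DATA.
    if fc.recv_credit_remaining < 64 * 1024 {
        eprintln!("peer low on credit: {} bytes remaining", fc.recv_credit_remaining);
    }
}
```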
This enables applications to detect flow control pressure before it becomes a bottleneck, rather than only after `DATA_BLOCKED` frames have been sent.

Changes
- `quinn-proto/src/connection/streams/state.rs` — `FlowControlStats` struct, `set_stream_receive_window()`, `flow_control_stats()`
- `quinn-proto/src/connection/streams/mod.rs` — re-export
- `quinn-proto/src/connection/mod.rs` — expose on `Connection`, populate in `stats()`
- `quinn-proto/src/connection/stats.rs` — add `flow_control` field to `ConnectionStats`
- `quinn-proto/src/lib.rs` — crate-level re-export
- `quinn/src/connection.rs` — public `set_stream_receive_window()`
All new types use `#[non_exhaustive]`. Backwards-compatible.

Related