
wip: trying to debug why we have an extra data frame in the client streaming st case #3148

Closed
wants to merge 6 commits

Conversation

@mgodave (Contributor) commented Dec 19, 2024

Problem

The blocking ServiceTalk server sends an extra empty data frame on failure even when there are no trailers. This causes issues for the grpc-java client.

Changes

I added a filter and a context key to signal the cases where we can avoid sending the extra empty data frame. In the "BROKEN AFTER CHANGE" case below I expect not to see the extra data frame, since it matches the previous case, but I assume I'm missing a detail about how ST works to be effective in debugging this right now.

Results

Before Change:

OUTBOUND HEADERS: streamId=3 headers=GrpcHttp2OutboundHeaders[:authority: 127.0.0.1:62451, :path: /grpc.netty.Compat/clientStreamingCall, :method: POST, :scheme: http, content-type: application/grpc, te: trailers, user-agent: grpc-java-netty/1.64.1, grpc-accept-encoding: gzip] streamDependency=0 weight=16 exclusive=false padding=0 endStream=false
OUTBOUND DATA: streamId=3 padding=0 endStream=false length=7 bytes=00000000020801
OUTBOUND DATA: streamId=3 padding=0 endStream=true length=0 bytes=
INBOUND HEADERS: streamId=3 headers=GrpcHttp2ResponseHeaders[:status: 200, server: servicetalk-grpc/, content-type: application/grpc+proto, grpc-status: 12, grpc-message: Method grpc.netty.Compat/clientStreamingCall is unimplemented] padding=0 endStream=false
INBOUND DATA: streamId=3 padding=0 endStream=true length=0 bytes=
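The `bytes=00000000020801` payload in the OUTBOUND DATA frame above is a gRPC length-prefixed message. As a quick illustration (a hypothetical decoder, not grpc-java or ServiceTalk code), it splits into a 1-byte compression flag, a 4-byte big-endian length, and the protobuf message bytes:

```python
import struct

def decode_grpc_frame(data: bytes):
    """Split a gRPC length-prefixed data frame into (compressed_flag, message)."""
    compressed = data[0]                         # 1-byte compression flag
    (length,) = struct.unpack(">I", data[1:5])   # 4-byte big-endian message length
    message = data[5:5 + length]
    assert len(message) == length, "truncated frame"
    return compressed, message

# The OUTBOUND DATA payload from the log above
frame = bytes.fromhex("00000000020801")
flag, msg = decode_grpc_frame(frame)
# flag == 0 (uncompressed), msg == b"\x08\x01" (a 2-byte protobuf message)
```

The problematic frame is the one after it: `length=0 bytes=` with `endStream=true`, i.e. an empty DATA frame whose only purpose is to carry the end-of-stream flag.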

After Change:

### WORKING AFTER CHANGE (grpc -> ST blocking)
OUTBOUND HEADERS: streamId=3 headers=GrpcHttp2OutboundHeaders[:authority: 127.0.0.1:62421, :path: /grpc.netty.Compat/clientStreamingCall, :method: POST, :scheme: http, content-type: application/grpc, te: trailers, user-agent: grpc-java-netty/1.64.1, grpc-accept-encoding: gzip] streamDependency=0 weight=16 exclusive=false padding=0 endStream=false
OUTBOUND DATA: streamId=3 padding=0 endStream=false length=7 bytes=00000000020801
OUTBOUND DATA: streamId=3 padding=0 endStream=true length=0 bytes=
INBOUND HEADERS: streamId=3 headers=GrpcHttp2ResponseHeaders[:status: 200, server: servicetalk-grpc/, content-type: application/grpc+proto, grpc-status: 12, grpc-message: Method grpc.netty.Compat/clientStreamingCall is unimplemented, content-length: 0] padding=0 endStream=true
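The two logs differ in how the stream is terminated. Before the change, the server sends HEADERS with `endStream=false` followed by an empty DATA frame with `endStream=true`; after the change, the status arrives in a single HEADERS frame with `endStream=true`, which gRPC clients recognize as a "Trailers-Only" response. A rough sketch of that distinction (a toy classifier, not grpc-java internals):

```python
def classify_response(frames):
    """frames: list of (kind, end_stream) tuples in arrival order."""
    first_kind, first_end = frames[0]
    if first_kind == "HEADERS" and first_end:
        return "trailers-only"   # single-frame response (After Change)
    if any(kind == "DATA" for kind, _ in frames):
        return "headers+data"    # extra empty DATA frame (Before Change)
    return "headers+trailers"

before = [("HEADERS", False), ("DATA", True)]  # Before Change log
after = [("HEADERS", True)]                    # After Change log
```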

@mgodave (Contributor, Author) commented Dec 20, 2024

Closed in favor of #3152

idelpivnitskiy added a commit that referenced this pull request Dec 23, 2024
…any (#3151)

Motivation:

Behavior was discovered as part of debugging #3148.
Currently, `BlockingStreamingHttpService` allocates trailers upfront and concatenates them after the payload body when protocol allows. In result, aggregation of `BlockingStreamingHttpResponse` does not lead to a single HTTP/2 frame because we always expect to receive trailers. However, adding trailers after users close payload writer is race. There are no guarantees that new trailers will be written to the network after close. Therefore, we can inspect the state after close and decide if we need to append trailers or not.

Modifications:

- Use `scanWithMapper` inside `BlockingStreamingHttpService` instead of always concatenating payload body with trailers;
- Create trailers on demand inside `BufferHttpPayloadWriter`, only when users touch them;
- Clarify behavior in `HttpPayloadWriter`'s javadoc for trailers-related methods;
- Enhance `BlockingStreamingToStreamingServiceTest` to assert new expectations;

Result:

In `BlockingStreamingHttpService`, trailers are allocated and attached only if users touch trailers before closing `HttpPayloadWriter`.
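The resulting behavior can be modeled with a small sketch (hypothetical Python, not the actual `BufferHttpPayloadWriter` implementation): trailers are allocated lazily on first access, and `close()` only yields trailers to append if they were actually touched beforehand.

```python
class PayloadWriter:
    """Toy model of trailers-on-demand: allocate only when touched."""

    def __init__(self):
        self._trailers = None
        self.closed = False

    def trailers(self):
        # Lazily allocate trailers on first access, mirroring the change
        if self._trailers is None:
            self._trailers = {}
        return self._trailers

    def close(self):
        self.closed = True
        # None means no trailers were touched: the response can be
        # aggregated into a single HTTP/2 frame
        return self._trailers

w1 = PayloadWriter()
t1 = w1.close()                      # trailers never touched -> None

w2 = PayloadWriter()
w2.trailers()["grpc-status"] = "0"   # touched before close
t2 = w2.close()                      # trailers must be appended
```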