Conversation
Adds a new body send mode for gRPC traffic. Also adds a safe way for the ext_proc server to return OK status without losing data in FULL_DUPLEX_STREAMED and GRPC modes. See grpc/proposal#484 for context.

Risk Level: Low
Testing: N/A
Docs Changes: Included in PR
Release Notes: N/A
Platform Specific Features: N/A

Signed-off-by: Mark D. Roth <[email protected]>
Co-authored-by: Adi (Suissa) Peleg <[email protected]>

Mirrored from https://github.com/envoyproxy/envoy @ 7b3a632333b587c784aff65e72ff618ff034f331
> On the ext_proc stream, the events sent and received must be in the same
> order as on the data plane. For client-to-server events, the order must
> be headers, followed by zero or more messages, followed by a half-close.
> For server-to-client events, the order must be headers, followed by zero
I assume server-to-client headers are optional in trailers-only?
> - [request_headers](https://github.com/envoyproxy/envoy/blob/cdd19052348f7f6d85910605d957ba4fe0538aec/api/envoy/service/ext_proc/v3/external_processor.proto#L76)
>   and
>   [response_headers](https://github.com/envoyproxy/envoy/blob/cdd19052348f7f6d85910605d957ba4fe0538aec/api/envoy/service/ext_proc/v3/external_processor.proto#L81).
>   Populated when sending client headers or server headers, respectively.
I know we talked about this, but what's the story around which headers should be included vs. not? Are we going to define the specific headers? It would be good to call out exactly whether method/scheme/path/te/user-agent/message-type, etc., are supposed to be included. In Go, many of these are added by the transport on the way out or removed by the transport on the way in. It would be great if we could specify: "only the things set by the application [plus X, Y, and Z, which some libraries may need to manually synthesize before sending to the ext_proc server]".
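For illustration, here is a minimal Go sketch (hypothetical values, not part of the proposal) of what "only the things set by the application" means on a grpc-go client: the application only controls metadata attached to the outgoing context, while pseudo-headers (:method, :scheme, :path) and transport headers such as te, user-agent, and content-type are synthesized by the transport and never visible at this layer.

```go
// Sketch only: application-set headers on a grpc-go client are just the
// metadata attached to the outgoing context. Pseudo-headers (:method,
// :scheme, :path) and transport headers (te, user-agent, content-type)
// are added by the HTTP/2 transport on the way out and never appear here.
package main

import (
	"context"

	"google.golang.org/grpc/metadata"
)

func withAppHeaders(ctx context.Context) context.Context {
	// Hypothetical key/value pairs; these are the only headers the
	// application explicitly provides.
	return metadata.AppendToOutgoingContext(ctx,
		"x-request-id", "abc123",
		"x-tenant", "example",
	)
}
```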
> For client messages, may be true if the client sent a half-close at
> the same time as the last message. For server messages, will always
> be false.
I don't think Go will be able to observe this scenario on the server. It has a RecvMsg() method on the ServerStream that only returns a message or an io.EOF error at stream end. There is no way to observe a message and an end-stream together. Instead Go will need to send the request message and the end stream separately. (Or we'll have to design a new interception API and add in the required plumbing between our transport and core grpc layers.)
Is this distinction a must-have feature for the in-progress Rust design?
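To make that constraint concrete, here is a minimal Go sketch (hypothetical helper, not an API from the proposal) of a server-side receive loop: RecvMsg returns either a message or io.EOF, so the half-close can only be observed as a separate, later event.

```go
// Sketch only: a generic receive loop over a grpc.ServerStream. RecvMsg
// yields a message or io.EOF, never both, so a Go-based implementation
// would have to report the last message and the half-close as two events.
package main

import (
	"io"

	"google.golang.org/grpc"
)

func recvLoop(stream grpc.ServerStream, newMsg func() any, onMsg func(any), onHalfClose func()) error {
	for {
		m := newMsg()
		err := stream.RecvMsg(m)
		if err == io.EOF {
			// The half-close is observed here, strictly after the last
			// message was returned by the previous RecvMsg call.
			onHalfClose()
			return nil
		}
		if err != nil {
			return err
		}
		onMsg(m) // forwarded as its own event; end_of_stream not yet known
	}
}
```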
> If true, indicates that a half-close should be sent after the
> message. Honored only on client-to-server messages.
This puts the stream into an interesting state where the client application may continue sending messages but the filter needs to ignore them. In Go, on the client, it can simply ignore calls to SendMsg(), but on the server (in all languages with a blocking API, so maybe just Go/Rust?), it means we need to spawn a job to read from the client send stream and discard the results so the client doesn't block on flow control.
Actually, in Rust we may implement it a bit differently, at least with my current design: we would give the handler an owned RecvStream to poll, and the gRPC library will already have to handle the case where the application drops it early, before returning from the handler. In that case, we probably want the same behavior of discarding any received messages, rather than letting the client application back up on flow control.
(I assume it's better to drain it, otherwise it could lead to deadlocks.)
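As a concrete illustration of the draining behavior, a minimal Go sketch (hypothetical helper): once the filter has injected the half-close upstream, it keeps calling RecvMsg in a separate goroutine and discards the results so the client is not stalled by flow control.

```go
// Sketch only: drain and discard remaining client messages after the
// filter has already sent the half-close upstream, so the client does
// not back up on flow control.
package main

import (
	"google.golang.org/grpc"
)

func drainClient(stream grpc.ServerStream, newMsg func() any) {
	go func() {
		for {
			m := newMsg()
			if err := stream.RecvMsg(m); err != nil {
				// io.EOF on client half-close, or a stream error;
				// either way there is nothing left to drain.
				return
			}
			// Message intentionally discarded.
		}
	}()
}
```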
> #### Server-Side Metrics
>
> The following server-side metrics will be exported:
Are there no labels that could be used on the server?
> gRPC framing.
> - It would further spread use of the gRPC framing to ext_proc servers,
>   which will make it awkward for gRPC to support other transports with
>   different framing in the future.
It also fails to represent the real data on the stream: the "is_compressed" flag wouldn't be set correctly, and the exact compressed bytes, if applicable, can't really be known.
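For reference, a minimal Go sketch of the standard gRPC length-prefixed framing (illustration only): the one-byte compressed flag and four-byte length precede each message, which is why a filter that decodes and re-serializes messages cannot faithfully reproduce the original compressed bytes or the flag.

```go
// Sketch only: the standard gRPC length-prefixed message frame is a
// 1-byte compressed flag followed by a 4-byte big-endian length and the
// message bytes. A filter that decodes and re-encodes messages cannot
// reproduce the original compressed payload, so the flag would not
// reflect what was actually on the wire.
package main

import "encoding/binary"

func frame(msg []byte, compressed bool) []byte {
	buf := make([]byte, 5+len(msg))
	if compressed {
		buf[0] = 1
	}
	binary.BigEndian.PutUint32(buf[1:5], uint32(len(msg)))
	copy(buf[5:], msg)
	return buf
}
```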