
A93: xDS ExtProc Support #484

Open

markdroth wants to merge 27 commits into grpc:master from markdroth:xds_ext_proc

Conversation

@markdroth
Member

No description provided.

markdroth marked this pull request as ready for review on September 18, 2025 at 22:50
yanavlasov pushed a commit to envoyproxy/envoy that referenced this pull request Dec 16, 2025
Adds a new body send mode for gRPC traffic. Also
adds a safe way for the ext_proc server to return OK status without
losing data in FULL_DUPLEX_STREAMED and GRPC modes. See
grpc/proposal#484 for context.
Risk Level: Low
Testing: N/A
Docs Changes: Included in PR
Release Notes: N/A
Platform Specific Features: N/A

---------

Signed-off-by: Mark D. Roth <[email protected]>
Co-authored-by: Adi (Suissa) Peleg <[email protected]>
update-envoy bot added a commit to envoyproxy/data-plane-api that referenced this pull request Dec 16, 2025

Mirrored from https://github.com/envoyproxy/envoy @ 7b3a632333b587c784aff65e72ff618ff034f331

On the ext_proc stream, the events sent and received must be in the same
order as on the data plane. For client-to-server events, the order must
be headers, followed by zero or more messages, followed by a half-close.
For server-to-client events, the order must be headers, followed by zero
or more messages, followed by trailers.

Member

I assume server-to-client headers are optional in trailers-only?
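
For illustration, a minimal Go sketch of the trailers-only case being asked about (the `exampleServer` type and its `Get` method are hypothetical, not from the proposal): a handler that fails before writing any headers or messages, which gRPC encodes as a Trailers-Only response, leaving no distinct server-to-client headers event to report.

```go
package extprocsketch

import (
	"context"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	"google.golang.org/protobuf/types/known/emptypb"
)

// exampleServer and Get are hypothetical; they only exist to show the shape
// of a trailers-only response.
type exampleServer struct{}

// Get fails before any headers or messages are written. gRPC sends this as a
// Trailers-Only response: the client sees only trailers, so a data plane has
// no separate server-to-client headers to forward to the ext_proc server.
func (s *exampleServer) Get(ctx context.Context, req *emptypb.Empty) (*emptypb.Empty, error) {
	return nil, status.Error(codes.PermissionDenied, "rejected before any response data")
}
```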

- [request_headers](https://github.com/envoyproxy/envoy/blob/cdd19052348f7f6d85910605d957ba4fe0538aec/api/envoy/service/ext_proc/v3/external_processor.proto#L76)
and
[response_headers](https://github.com/envoyproxy/envoy/blob/cdd19052348f7f6d85910605d957ba4fe0538aec/api/envoy/service/ext_proc/v3/external_processor.proto#L81).
Populated when sending client headers or server headers, respectively.

Member

I know we talked about this, but what's the story around which headers should be included vs. not? Are we going to define the specific headers? It would be good to call out exactly whether method/scheme/path/te/user-agent/message-type/etc. are supposed to be included or not. In Go, many of these things are added by the transport on the way out or removed by the transport on the way in. It would be great if we could specify: "only the things set by the application [plus X, Y, and Z, which some libraries may need to manually synthesize before sending to the ext_proc server]"
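
To make the ambiguity concrete, here is a hypothetical sketch (the `transportHeaders` set and the split it encodes are assumptions, not the proposal's definition) of a data plane stripping transport-level headers before forwarding the rest to the ext_proc server:

```go
package extprocsketch

// transportHeaders lists headers typically set or consumed by the HTTP/2 and
// gRPC transport rather than by the application. Which of these the data
// plane forwards, strips, or synthesizes is exactly what the proposal would
// need to pin down; this particular split is only an assumption.
var transportHeaders = map[string]bool{
	":method": true, ":scheme": true, ":path": true, ":authority": true,
	"te": true, "content-type": true, "user-agent": true,
	"grpc-timeout": true, "grpc-encoding": true, "grpc-accept-encoding": true,
}

// applicationHeaders keeps only the headers the application set itself.
func applicationHeaders(all map[string][]string) map[string][]string {
	out := make(map[string][]string, len(all))
	for name, values := range all {
		if !transportHeaders[name] {
			out[name] = values
		}
	}
	return out
}
```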

Comment on lines +433 to +435
For client messages, may be true if the client sent a half-close at
the same time as the last message. For server messages, will always
be false.

Member

I don't think Go will be able to observe this scenario on the server. It has a RecvMsg() method on the ServerStream that only returns a message or an io.EOF error at stream end. There is no way to observe a message and an end-stream together. Instead Go will need to send the request message and the end stream separately. (Or we'll have to design a new interception API and add in the required plumbing between our transport and core grpc layers.)

Is this distinction a must-have feature for the in-progress Rust design?
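
As a concrete illustration of the constraint described above, a minimal Go sketch (using `structpb.Struct` as a stand-in for the real request type): the handler only learns about the half-close when `RecvMsg` returns `io.EOF` on a later call, never together with the final message.

```go
package extprocsketch

import (
	"io"

	"google.golang.org/grpc"
	"google.golang.org/protobuf/types/known/structpb"
)

// readClientMessages shows why Go cannot report "message + end_of_stream" as
// a single event: RecvMsg returns either a message (nil error) or io.EOF once
// the client half-closes, but never both at once.
func readClientMessages(stream grpc.ServerStream) error {
	for {
		msg := &structpb.Struct{} // stand-in for the real request type
		switch err := stream.RecvMsg(msg); err {
		case nil:
			// A message arrived; whether the client also half-closed is
			// not observable at this point.
		case io.EOF:
			// The half-close surfaces on its own, so a Go data plane would
			// send it to the ext_proc server as a separate event.
			return nil
		default:
			return err
		}
	}
}
```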

Comment on lines +532 to +533
If true, indicates that a half-close should be sent after the
message. Honored only on client-to-server messages.

Member

This puts the stream into an interesting state where the client application may continue sending messages but the filter needs to ignore them. In Go, on the client, it can simply ignore calls to SendMsg(), but on the server (in all languages with a blocking API, so maybe just Go/Rust?), it means we need to spawn a job to read from the client send stream and discard the results so the client doesn't block on flow control.

Actually in Rust we may implement it a bit differently, at least with my current design -- we would be giving the handler an owned RecvStream to poll and the gRPC library will already have to handle the situation where the application drops it early before returning from the handler -- in that case, we probably want to do this behavior of discarding any received messages vs. the alternative of letting the client application back up on flow control.

(I assume it's better to drain it, otherwise it could lead to deadlocks.)
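
A minimal sketch of the draining behavior discussed above (`discardRemaining` is a hypothetical helper, not part of any proposed API): after the half-close has been injected, a goroutine keeps reading and discarding the client's remaining messages so the client is never left blocked on flow control.

```go
package extprocsketch

import (
	"google.golang.org/grpc"
	"google.golang.org/protobuf/types/known/structpb"
)

// discardRemaining keeps reading from the client's send stream after the
// filter has already injected a half-close toward the server, dropping every
// message so that the client does not stall on flow control. Without the
// drain, a client that keeps sending could back up indefinitely.
func discardRemaining(stream grpc.ServerStream) {
	go func() {
		for {
			msg := &structpb.Struct{} // stand-in for the real request type
			if err := stream.RecvMsg(msg); err != nil {
				return // io.EOF or a stream error ends the drain
			}
			// Message intentionally dropped: the upstream already saw a
			// half-close, so there is nothing left to forward.
		}
	}()
}
```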


#### Server-Side Metrics

The following server-side metrics will be exported:

Member

Are there no labels that could be used on the server?

gRPC framing.
- It would further spread use of the gRPC framing to ext_proc servers,
which will make it awkward for gRPC to support other transports with
different framing in the future.

Member

It also fails to represent the real data on the stream: the "is_compressed" flag wouldn't be set correctly, and the exact compressed bytes, if applicable, can't really be known.
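
To show why synthesized framing is only a reconstruction, here is a sketch of the standard gRPC length-prefixed message frame (1-byte compressed flag, 4-byte big-endian length, then the payload); `frameMessage` is illustrative only. Given just the decoded message, the data plane cannot know the original flag value or the compressed bytes.

```go
package extprocsketch

import "encoding/binary"

// frameMessage builds a gRPC length-prefixed message frame around a payload.
// A data plane that only holds the decoded message cannot know whether the
// original frame was compressed or what its compressed bytes were, so the
// flag and payload passed in here would have to be reconstructed guesses.
func frameMessage(payload []byte, compressed bool) []byte {
	buf := make([]byte, 5+len(payload))
	if compressed {
		buf[0] = 1 // compressed flag from the original frame, if known
	}
	binary.BigEndian.PutUint32(buf[1:5], uint32(len(payload)))
	copy(buf[5:], payload)
	return buf
}
```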
