pax_global_header00006660000000000000000000000064135163577310014525gustar00rootroot0000000000000052 comment=045159ad57f3781d409358e3ade910a018c16b30 grpc-go-1.22.1/000077500000000000000000000000001351635773100131465ustar00rootroot00000000000000grpc-go-1.22.1/.github/000077500000000000000000000000001351635773100145065ustar00rootroot00000000000000grpc-go-1.22.1/.github/ISSUE_TEMPLATE/000077500000000000000000000000001351635773100166715ustar00rootroot00000000000000grpc-go-1.22.1/.github/ISSUE_TEMPLATE/bug.md000066400000000000000000000007551351635773100177770ustar00rootroot00000000000000--- name: Bug Report about: Create a report to help us improve labels: 'Type: Bug' --- Please see the FAQ in our main README.md, then answer the questions below before submitting your issue. ### What version of gRPC are you using? ### What version of Go are you using (`go version`)? ### What operating system (Linux, Windows, …) and version? ### What did you do? If possible, provide a recipe for reproducing the error. ### What did you expect to see? ### What did you see instead? grpc-go-1.22.1/.github/ISSUE_TEMPLATE/feature.md000066400000000000000000000004471351635773100206530ustar00rootroot00000000000000--- name: Feature Request about: Suggest an idea for gRPC-Go labels: 'Type: Feature' --- Please see the FAQ in our main README.md before submitting your issue. ### Use case(s) - what problem will this feature solve? ### Proposed Solution ### Alternatives Considered ### Additional Context grpc-go-1.22.1/.github/ISSUE_TEMPLATE/question.md000066400000000000000000000002351351635773100210620ustar00rootroot00000000000000--- name: Question about: Ask a question about gRPC-Go labels: 'Type: Question' --- Please see the FAQ in our main README.md before submitting your issue. grpc-go-1.22.1/.github/lock.yml000066400000000000000000000000461351635773100161610ustar00rootroot00000000000000daysUntilLock: 180 lockComment: false grpc-go-1.22.1/.travis.yml000066400000000000000000000021721351635773100152610ustar00rootroot00000000000000language: go matrix: include: - go: 1.12.x env: VET=1 GO111MODULE=on - go: 1.12.x env: RACE=1 GO111MODULE=on - go: 1.12.x env: RUN386=1 - go: 1.12.x env: GRPC_GO_RETRY=on - go: 1.11.x env: GO111MODULE=on - go: 1.10.x - go: 1.9.x - go: 1.9.x env: GAE=1 go_import_path: google.golang.org/grpc before_install: - if [[ "${GO111MODULE}" = "on" ]]; then mkdir "${HOME}/go"; export GOPATH="${HOME}/go"; fi - if [[ -n "${RUN386}" ]]; then export GOARCH=386; fi - if [[ "${TRAVIS_EVENT_TYPE}" = "cron" && -z "${RUN386}" ]]; then RACE=1; fi - if [[ "${TRAVIS_EVENT_TYPE}" != "cron" ]]; then VET_SKIP_PROTO=1; fi install: - try3() { eval "$*" || eval "$*" || eval "$*"; } - try3 'if [[ "${GO111MODULE}" = "on" ]]; then go mod download; else make testdeps; fi' - if [[ "${GAE}" = 1 ]]; then source ./install_gae.sh; make testappenginedeps; fi - if [[ "${VET}" = 1 ]]; then ./vet.sh -install; fi script: - set -e - if [[ "${VET}" = 1 ]]; then ./vet.sh; fi - if [[ "${GAE}" = 1 ]]; then make testappengine; exit 0; fi - if [[ "${RACE}" = 1 ]]; then make testrace; exit 0; fi - make test grpc-go-1.22.1/AUTHORS000066400000000000000000000000141351635773100142110ustar00rootroot00000000000000Google Inc. grpc-go-1.22.1/CONTRIBUTING.md000066400000000000000000000054471351635773100154110ustar00rootroot00000000000000# How to contribute We definitely welcome your patches and contributions to gRPC! 
If you are new to github, please start by reading [Pull Request howto](https://help.github.com/articles/about-pull-requests/) ## Legal requirements In order to protect both you and ourselves, you will need to sign the [Contributor License Agreement](https://identity.linuxfoundation.org/projects/cncf). ## Guidelines for Pull Requests How to get your contributions merged smoothly and quickly. - Create **small PRs** that are narrowly focused on **addressing a single concern**. We often times receive PRs that are trying to fix several things at a time, but only one fix is considered acceptable, nothing gets merged and both author's & review's time is wasted. Create more PRs to address different concerns and everyone will be happy. - The grpc package should only depend on standard Go packages and a small number of exceptions. If your contribution introduces new dependencies which are NOT in the [list](https://godoc.org/google.golang.org/grpc?imports), you need a discussion with gRPC-Go authors and consultants. - For speculative changes, consider opening an issue and discussing it first. If you are suggesting a behavioral or API change, consider starting with a [gRFC proposal](https://github.com/grpc/proposal). - Provide a good **PR description** as a record of **what** change is being made and **why** it was made. Link to a github issue if it exists. - Don't fix code style and formatting unless you are already changing that line to address an issue. PRs with irrelevant changes won't be merged. If you do want to fix formatting or style, do that in a separate PR. - Unless your PR is trivial, you should expect there will be reviewer comments that you'll need to address before merging. We expect you to be reasonably responsive to those comments, otherwise the PR will be closed after 2-3 weeks of inactivity. - Maintain **clean commit history** and use **meaningful commit messages**. PRs with messy commit history are difficult to review and won't be merged. Use `rebase -i upstream/master` to curate your commit history and/or to bring in latest changes from master (but avoid rebasing in the middle of a code review). - Keep your PR up to date with upstream/master (if there are merge conflicts, we can't really merge your change). - **All tests need to be passing** before your change can be merged. We recommend you **run tests locally** before creating your PR to catch breakages early on. - `make all` to test everything, OR - `make vet` to catch vet errors - `make test` to run the tests - `make testrace` to run tests in race mode - optional `make testappengine` to run tests with appengine - Exceptions to the rules can be made if there's a compelling reason for doing so. grpc-go-1.22.1/Documentation/000077500000000000000000000000001351635773100157575ustar00rootroot00000000000000grpc-go-1.22.1/Documentation/compression.md000066400000000000000000000067571351635773100206610ustar00rootroot00000000000000# Compression The preferred method for configuring message compression on both clients and servers is to use [`encoding.RegisterCompressor`](https://godoc.org/google.golang.org/grpc/encoding#RegisterCompressor) to register an implementation of a compression algorithm. See `grpc/encoding/gzip/gzip.go` for an example of how to implement one. Once a compressor has been registered on the client-side, RPCs may be sent using it via the [`UseCompressor`](https://godoc.org/google.golang.org/grpc#UseCompressor) `CallOption`. 
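For illustration, a minimal client-side sketch (assuming the registered `gzip` compressor package and the `helloworld` `Greeter` client from the examples directory) might look like:

```go
import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/encoding/gzip" // importing this package registers the "gzip" compressor

	pb "google.golang.org/grpc/examples/helloworld/helloworld"
)

func sayHelloCompressed(conn *grpc.ClientConn) {
	client := pb.NewGreeterClient(conn)
	// UseCompressor asks gRPC to compress this request with the registered "gzip" compressor.
	reply, err := client.SayHello(context.Background(), &pb.HelloRequest{Name: "gzip"}, grpc.UseCompressor(gzip.Name))
	if err != nil {
		log.Fatalf("SayHello failed: %v", err)
	}
	log.Printf("reply: %s", reply.GetMessage())
}
```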
Remember that `CallOption`s may be turned into defaults for all calls from a `ClientConn` by using the [`WithDefaultCallOptions`](https://godoc.org/google.golang.org/grpc#WithDefaultCallOptions) `DialOption`. If `UseCompressor` is used and the corresponding compressor has not been installed, an `Internal` error will be returned to the application before the RPC is sent. Server-side, registered compressors will be used automatically to decode request messages and encode the responses. Servers currently always respond using the same compression method specified by the client. If the corresponding compressor has not been registered, an `Unimplemented` status will be returned to the client. ## Deprecated API There is a deprecated API for setting compression as well. It is not recommended for use. However, if you were previously using it, the following section may be helpful in understanding how it works in combination with the new API. ### Client-Side There are two legacy functions and one new function to configure compression: ```go func WithCompressor(grpc.Compressor) DialOption {} func WithDecompressor(grpc.Decompressor) DialOption {} func UseCompressor(name) CallOption {} ``` For outgoing requests, the following rules are applied in order: 1. If `UseCompressor` is used, messages will be compressed using the compressor named. * If the compressor named is not registered, an Internal error is returned back to the client before sending the RPC. * If UseCompressor("identity"), no compressor will be used, but "identity" will be sent in the header to the server. 1. If `WithCompressor` is used, messages will be compressed using that compressor implementation. 1. Otherwise, outbound messages will be uncompressed. For incoming responses, the following rules are applied in order: 1. If `WithDecompressor` is used and it matches the message's encoding, it will be used. 1. If a registered compressor matches the response's encoding, it will be used. 1. Otherwise, the stream will be closed and an `Unimplemented` status error will be returned to the application. ### Server-Side There are two legacy functions to configure compression: ```go func RPCCompressor(grpc.Compressor) ServerOption {} func RPCDecompressor(grpc.Decompressor) ServerOption {} ``` For incoming requests, the following rules are applied in order: 1. If `RPCDecompressor` is used and that decompressor matches the request's encoding: it will be used. 1. If a registered compressor matches the request's encoding, it will be used. 1. Otherwise, an `Unimplemented` status will be returned to the client. For outgoing responses, the following rules are applied in order: 1. If `RPCCompressor` is used, that compressor will be used to compress all response messages. 1. If compression was used for the incoming request and a registered compressor supports it, that same compression method will be used for the outgoing response. 1. Otherwise, no compression will be used for the outgoing response. grpc-go-1.22.1/Documentation/concurrency.md000066400000000000000000000030521351635773100206330ustar00rootroot00000000000000# Concurrency In general, gRPC-go provides a concurrency-friendly API. What follows are some guidelines. ## Clients A [ClientConn][client-conn] can safely be accessed concurrently. Using [helloworld][helloworld] as an example, one could share the `ClientConn` across multiple goroutines to create multiple `GreeterClient` types. In this case, RPCs would be sent in parallel. 
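As a sketch (assuming a server address `address`, the generated `helloworld` package imported as `pb`, and the standard `context`, `fmt`, `log`, and `sync` imports), sharing one `ClientConn` across goroutines might look like:

```go
conn, err := grpc.Dial(address, grpc.WithInsecure())
if err != nil {
	log.Fatalf("did not connect: %v", err)
}
defer conn.Close()

var wg sync.WaitGroup
for i := 0; i < 5; i++ {
	wg.Add(1)
	go func(i int) {
		defer wg.Done()
		// Creating a stub per goroutine is cheap; all stubs share the same ClientConn,
		// so these RPCs are sent in parallel over the shared connection.
		client := pb.NewGreeterClient(conn)
		if _, err := client.SayHello(context.Background(), &pb.HelloRequest{Name: fmt.Sprintf("worker-%d", i)}); err != nil {
			log.Printf("RPC %d failed: %v", i, err)
		}
	}(i)
}
wg.Wait()
```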
## Streams When using streams, one must take care to avoid calling either `SendMsg` or `RecvMsg` multiple times against the same [Stream][stream] from different goroutines. In other words, it's safe to have a goroutine calling `SendMsg` and another goroutine calling `RecvMsg` on the same stream at the same time. But it is not safe to call `SendMsg` on the same stream in different goroutines, or to call `RecvMsg` on the same stream in different goroutines. ## Servers Each RPC handler attached to a registered server will be invoked in its own goroutine. For example, [SayHello][say-hello] will be invoked in its own goroutine. The same is true for service handlers for streaming RPCs, as seen in the route guide example [here][route-guide-stream]. [helloworld]: https://github.com/grpc/grpc-go/blob/master/examples/helloworld/greeter_client/main.go#L43 [client-conn]: https://godoc.org/google.golang.org/grpc#ClientConn [stream]: https://godoc.org/google.golang.org/grpc#Stream [say-hello]: https://github.com/grpc/grpc-go/blob/master/examples/helloworld/greeter_server/main.go#L41 [route-guide-stream]: https://github.com/grpc/grpc-go/blob/master/examples/route_guide/server/server.go#L126 grpc-go-1.22.1/Documentation/encoding.md000066400000000000000000000123521351635773100200720ustar00rootroot00000000000000# Encoding The gRPC API for sending and receiving is based upon *messages*. However, messages cannot be transmitted directly over a network; they must first be converted into *bytes*. This document describes how gRPC-Go converts messages into bytes and vice-versa for the purposes of network transmission. ## Codecs (Serialization and Deserialization) A `Codec` contains code to serialize a message into a byte slice (`Marshal`) and deserialize a byte slice back into a message (`Unmarshal`). `Codec`s are registered by name into a global registry maintained in the `encoding` package. ### Implementing a `Codec` A typical `Codec` will be implemented in its own package with an `init` function that registers itself, and is imported anonymously. For example: ```go package proto import "google.golang.org/grpc/encoding" func init() { encoding.RegisterCodec(protoCodec{}) } // ... implementation of protoCodec ... ``` For an example, gRPC's implementation of the `proto` codec can be found in [`encoding/proto`](https://godoc.org/google.golang.org/grpc/encoding/proto). ### Using a `Codec` By default, gRPC registers and uses the "proto" codec, so it is not necessary to do this in your own code to send and receive proto messages. To use another `Codec` from a client or server: ```go package myclient import _ "path/to/another/codec" ``` `Codec`s, by definition, must be symmetric, so the same desired `Codec` should be registered in both client and server binaries. On the client-side, to specify a `Codec` to use for message transmission, the `CallOption` `CallContentSubtype` should be used as follows: ```go response, err := myclient.MyCall(ctx, request, grpc.CallContentSubtype("mycodec")) ``` As a reminder, all `CallOption`s may be converted into `DialOption`s that become the default for all RPCs sent through a client using `grpc.WithDefaultCallOptions`: ```go myclient := grpc.Dial(ctx, target, grpc.WithDefaultCallOptions(grpc.CallContentSubtype("mycodec"))) ``` When specified in either of these ways, messages will be encoded using this codec and sent along with headers indicating the codec (`content-type` set to `application/grpc+`). 
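For illustration only, a toy `Codec` that could back the hypothetical "mycodec" name used above might serialize messages as JSON (a sketch, not the built-in proto codec):

```go
package mycodec

import (
	"encoding/json"

	"google.golang.org/grpc/encoding"
)

func init() {
	encoding.RegisterCodec(codec{})
}

// codec is a toy Codec that (de)serializes messages as JSON.
type codec struct{}

func (codec) Marshal(v interface{}) ([]byte, error)      { return json.Marshal(v) }
func (codec) Unmarshal(data []byte, v interface{}) error { return json.Unmarshal(data, v) }
func (codec) Name() string                               { return "mycodec" }
```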
On the server-side, using a `Codec` is as simple as registering it into the global registry (i.e. `import`ing it). If a message is encoded with the content sub-type supported by a registered `Codec`, it will be used automatically for decoding the request and encoding the response. Otherwise, for backward-compatibility reasons, gRPC will attempt to use the "proto" codec. In an upcoming change (tracked in [this issue](https://github.com/grpc/grpc-go/issues/1824)), such requests will be rejected with status code `Unimplemented` instead. ## Compressors (Compression and Decompression) Sometimes, the resulting serialization of a message is not space-efficient, and it may be beneficial to compress this byte stream before transmitting it over the network. To facilitate this operation, gRPC supports a mechanism for performing compression and decompression. A `Compressor` contains code to compress and decompress by wrapping `io.Writer`s and `io.Reader`s, respectively. (The form of `Compress` and `Decompress` were chosen to most closely match Go's standard package [implementations](https://golang.org/pkg/compress/) of compressors. Like `Codec`s, `Compressor`s are registered by name into a global registry maintained in the `encoding` package. ### Implementing a `Compressor` A typical `Compressor` will be implemented in its own package with an `init` function that registers itself, and is imported anonymously. For example: ```go package gzip import "google.golang.org/grpc/encoding" func init() { encoding.RegisterCompressor(compressor{}) } // ... implementation of compressor ... ``` An implementation of a `gzip` compressor can be found in [`encoding/gzip`](https://godoc.org/google.golang.org/grpc/encoding/gzip). ### Using a `Compressor` By default, gRPC does not register or use any compressors. To use a `Compressor` from a client or server: ```go package myclient import _ "google.golang.org/grpc/encoding/gzip" ``` `Compressor`s, by definition, must be symmetric, so the same desired `Compressor` should be registered in both client and server binaries. On the client-side, to specify a `Compressor` to use for message transmission, the `CallOption` `UseCompressor` should be used as follows: ```go response, err := myclient.MyCall(ctx, request, grpc.UseCompressor("gzip")) ``` As a reminder, all `CallOption`s may be converted into `DialOption`s that become the default for all RPCs sent through a client using `grpc.WithDefaultCallOptions`: ```go myclient := grpc.Dial(ctx, target, grpc.WithDefaultCallOptions(grpc.UseCompresor("gzip"))) ``` When specified in either of these ways, messages will be compressed using this compressor and sent along with headers indicating the compressor (`content-coding` set to ``). On the server-side, using a `Compressor` is as simple as registering it into the global registry (i.e. `import`ing it). If a message is compressed with the content coding supported by a registered `Compressor`, it will be used automatically for decompressing the request and compressing the response. Otherwise, the request will be rejected with status code `Unimplemented`. grpc-go-1.22.1/Documentation/gomock-example.md000066400000000000000000000151221351635773100212120ustar00rootroot00000000000000# Mocking Service for gRPC [Example code unary RPC](https://github.com/grpc/grpc-go/tree/master/examples/helloworld/mock_helloworld) [Example code streaming RPC](https://github.com/grpc/grpc-go/tree/master/examples/route_guide/mock_routeguide) ## Why? 
To test client-side logic without the overhead of connecting to a real server. Mocking enables users to write light-weight unit tests to check functionalities on client-side without invoking RPC calls to a server. ## Idea: Mock the client stub that connects to the server. We use Gomock to mock the client interface (in the generated code) and programmatically set its methods to expect and return pre-determined values. This enables users to write tests around the client logic and use this mocked stub while making RPC calls. ## How to use Gomock? Documentation on Gomock can be found [here](https://github.com/golang/mock). A quick reading of the documentation should enable users to follow the code below. Consider a gRPC service based on following proto file: ```proto //helloworld.proto package helloworld; message HelloRequest { string name = 1; } message HelloReply { string name = 1; } service Greeter { rpc SayHello (HelloRequest) returns (HelloReply) {} } ``` The generated file helloworld.pb.go will have a client interface for each service defined in the proto file. This interface will have methods corresponding to each rpc inside that service. ```Go type GreeterClient interface { SayHello(ctx context.Context, in *HelloRequest, opts ...grpc.CallOption) (*HelloReply, error) } ``` The generated code also contains a struct that implements this interface. ```Go type greeterClient struct { cc *grpc.ClientConn } func (c *greeterClient) SayHello(ctx context.Context, in *HelloRequest, opts ...grpc.CallOption) (*HelloReply, error){ // ... // gRPC specific code here // ... } ``` Along with this the generated code has a method to create an instance of this struct. ```Go func NewGreeterClient(cc *grpc.ClientConn) GreeterClient ``` The user code uses this function to create an instance of the struct greeterClient which then can be used to make rpc calls to the server. We will mock this interface GreeterClient and use an instance of that mock to make rpc calls. These calls instead of going to server will return pre-determined values. To create a mock we’ll use [mockgen](https://github.com/golang/mock#running-mockgen). From the directory ``` examples/helloworld/ ``` run ``` mockgen google.golang.org/grpc/examples/helloworld/helloworld GreeterClient > mock_helloworld/hw_mock.go ``` Notice that in the above command we specify GreeterClient as the interface to be mocked. The user test code can import the package generated by mockgen along with library package gomock to write unit tests around client-side logic. ```Go import "github.com/golang/mock/gomock" import hwmock "google.golang.org/grpc/examples/helloworld/mock_helloworld" ``` An instance of the mocked interface can be created as: ```Go mockGreeterClient := hwmock.NewMockGreeterClient(ctrl) ``` This mocked object can be programmed to expect calls to its methods and return pre-determined values. For instance, we can program mockGreeterClient to expect a call to its method SayHello and return a HelloReply with message “Mocked RPC”. ```Go mockGreeterClient.EXPECT().SayHello( gomock.Any(), // expect any value for first parameter gomock.Any(), // expect any value for second parameter ).Return(&helloworld.HelloReply{Message: “Mocked RPC”}, nil) ``` gomock.Any() indicates that the parameter can have any value or type. We can indicate specific values for built-in types with gomock.Eq(). 
However, if the test code needs to specify the parameter to have a proto message type, we can replace gomock.Any() with an instance of a struct that implements gomock.Matcher interface. ```Go type rpcMsg struct { msg proto.Message } func (r *rpcMsg) Matches(msg interface{}) bool { m, ok := msg.(proto.Message) if !ok { return false } return proto.Equal(m, r.msg) } func (r *rpcMsg) String() string { return fmt.Sprintf("is %s", r.msg) } ... req := &helloworld.HelloRequest{Name: "unit_test"} mockGreeterClient.EXPECT().SayHello( gomock.Any(), &rpcMsg{msg: req}, ).Return(&helloworld.HelloReply{Message: "Mocked Interface"}, nil) ``` ## Mock streaming RPCs: For our example we consider the case of bi-directional streaming RPCs. Concretely, we'll write a test for RouteChat function from the route guide example to demonstrate how to write mocks for streams. RouteChat is a bi-directional streaming RPC, which means calling RouteChat returns a stream that can __Send__ and __Recv__ messages to and from the server, respectively. We'll start by creating a mock of this stream interface returned by RouteChat and then we'll mock the client interface and set expectation on the method RouteChat to return our mocked stream. ### Generating mocking code: Like before we'll use [mockgen](https://github.com/golang/mock#running-mockgen). From the `examples/route_guide` directory run: `mockgen google.golang.org/grpc/examples/route_guide/routeguide RouteGuideClient,RouteGuide_RouteChatClient > mock_route_guide/rg_mock.go` Notice that we are mocking both client(`RouteGuideClient`) and stream(`RouteGuide_RouteChatClient`) interfaces here. This will create a file `rg_mock.go` under directory `mock_route_guide`. This file contains all the mocking code we need to write our test. In our test code, like before, we import the this mocking code along with the generated code ```go import ( rgmock "google.golang.org/grpc/examples/route_guide/mock_routeguide" rgpb "google.golang.org/grpc/examples/route_guide/routeguide" ) ``` Now considering a test that takes the RouteGuide client object as a parameter, makes a RouteChat rpc call and sends a message on the resulting stream. Furthermore, this test expects to see the same message to be received on the stream. ```go var msg = ... // Creates a RouteChat call and sends msg on it. // Checks if the received message was equal to msg. func testRouteChat(client rgb.RouteChatClient) error{ ... } ``` We can inject our mock in here by simply passing it as an argument to the method. Creating mock for stream interface: ```go stream := rgmock.NewMockRouteGuide_RouteChatClient(ctrl) } ``` Setting Expectations: ```go stream.EXPECT().Send(gomock.Any()).Return(nil) stream.EXPECT().Recv().Return(msg, nil) ``` Creating mock for client interface: ```go rgclient := rgmock.NewMockRouteGuideClient(ctrl) ``` Setting Expectations: ```go rgclient.EXPECT().RouteChat(gomock.Any()).Return(stream, nil) ``` grpc-go-1.22.1/Documentation/grpc-auth-support.md000066400000000000000000000062031351635773100217060ustar00rootroot00000000000000# Authentication As outlined in the [gRPC authentication guide](https://grpc.io/docs/guides/auth.html) there are a number of different mechanisms for asserting identity between an client and server. We'll present some code-samples here demonstrating how to provide TLS support encryption and identity assertions as well as passing OAuth2 tokens to services that support it. 
# Enabling TLS on a gRPC client ```Go conn, err := grpc.Dial(serverAddr, grpc.WithTransportCredentials(credentials.NewClientTLSFromCert(nil, ""))) ``` # Enabling TLS on a gRPC server ```Go creds, err := credentials.NewServerTLSFromFile(certFile, keyFile) if err != nil { log.Fatalf("Failed to generate credentials %v", err) } lis, err := net.Listen("tcp", ":0") server := grpc.NewServer(grpc.Creds(creds)) ... server.Serve(lis) ``` # OAuth2 For an example of how to configure client and server to use OAuth2 tokens, see [here](https://github.com/grpc/grpc-go/tree/master/examples/features/authentication). ## Validating a token on the server Clients may use [metadata.MD](https://godoc.org/google.golang.org/grpc/metadata#MD) to store tokens and other authentication-related data. To gain access to the `metadata.MD` object, a server may use [metadata.FromIncomingContext](https://godoc.org/google.golang.org/grpc/metadata#FromIncomingContext). With a reference to `metadata.MD` on the server, one needs to simply lookup the `authorization` key. Note, all keys stored within `metadata.MD` are normalized to lowercase. See [here](https://godoc.org/google.golang.org/grpc/metadata#New). It is possible to configure token validation for all RPCs using an interceptor. A server may configure either a [grpc.UnaryInterceptor](https://godoc.org/google.golang.org/grpc#UnaryInterceptor) or a [grpc.StreamInterceptor](https://godoc.org/google.golang.org/grpc#StreamInterceptor). ## Adding a token to all outgoing client RPCs To send an OAuth2 token with each RPC, a client may configure the `grpc.DialOption` [grpc.WithPerRPCCredentials](https://godoc.org/google.golang.org/grpc#WithPerRPCCredentials). Alternatively, a client may also use the `grpc.CallOption` [grpc.PerRPCCredentials](https://godoc.org/google.golang.org/grpc#PerRPCCredentials) on each invocation of an RPC. To create a `credentials.PerRPCCredentials`, use [oauth.NewOauthAccess](https://godoc.org/google.golang.org/grpc/credentials/oauth#NewOauthAccess). Note, the OAuth2 implementation of `grpc.PerRPCCredentials` requires a client to use [grpc.WithTransportCredentials](https://godoc.org/google.golang.org/grpc#WithTransportCredentials) to prevent any insecure transmission of tokens. # Authenticating with Google ## Google Compute Engine (GCE) ```Go conn, err := grpc.Dial(serverAddr, grpc.WithTransportCredentials(credentials.NewClientTLSFromCert(nil, "")), grpc.WithPerRPCCredentials(oauth.NewComputeEngine())) ``` ## JWT ```Go jwtCreds, err := oauth.NewServiceAccountFromFile(*serviceAccountKeyFile, *oauthScope) if err != nil { log.Fatalf("Failed to create JWT credentials: %v", err) } conn, err := grpc.Dial(serverAddr, grpc.WithTransportCredentials(credentials.NewClientTLSFromCert(nil, "")), grpc.WithPerRPCCredentials(jwtCreds)) ``` grpc-go-1.22.1/Documentation/grpc-metadata.md000066400000000000000000000165051351635773100210210ustar00rootroot00000000000000# Metadata gRPC supports sending metadata between client and server. This doc shows how to send and receive metadata in gRPC-go. ## Background Four kinds of service method: - [Unary RPC](https://grpc.io/docs/guides/concepts.html#unary-rpc) - [Server streaming RPC](https://grpc.io/docs/guides/concepts.html#server-streaming-rpc) - [Client streaming RPC](https://grpc.io/docs/guides/concepts.html#client-streaming-rpc) - [Bidirectional streaming RPC](https://grpc.io/docs/guides/concepts.html#bidirectional-streaming-rpc) And concept of [metadata](https://grpc.io/docs/guides/concepts.html#metadata). 
## Constructing metadata A metadata can be created using package [metadata](https://godoc.org/google.golang.org/grpc/metadata). The type MD is actually a map from string to a list of strings: ```go type MD map[string][]string ``` Metadata can be read like a normal map. Note that the value type of this map is `[]string`, so that users can attach multiple values using a single key. ### Creating a new metadata A metadata can be created from a `map[string]string` using function `New`: ```go md := metadata.New(map[string]string{"key1": "val1", "key2": "val2"}) ``` Another way is to use `Pairs`. Values with the same key will be merged into a list: ```go md := metadata.Pairs( "key1", "val1", "key1", "val1-2", // "key1" will have map value []string{"val1", "val1-2"} "key2", "val2", ) ``` __Note:__ all the keys will be automatically converted to lowercase, so "key1" and "kEy1" will be the same key and their values will be merged into the same list. This happens for both `New` and `Pairs`. ### Storing binary data in metadata In metadata, keys are always strings. But values can be strings or binary data. To store binary data value in metadata, simply add "-bin" suffix to the key. The values with "-bin" suffixed keys will be encoded when creating the metadata: ```go md := metadata.Pairs( "key", "string value", "key-bin", string([]byte{96, 102}), // this binary data will be encoded (base64) before sending // and will be decoded after being transferred. ) ``` ## Retrieving metadata from context Metadata can be retrieved from context using `FromIncomingContext`: ```go func (s *server) SomeRPC(ctx context.Context, in *pb.SomeRequest) (*pb.SomeResponse, err) { md, ok := metadata.FromIncomingContext(ctx) // do something with metadata } ``` ## Sending and receiving metadata - client side Client side metadata sending and receiving examples are available [here](../examples/features/metadata/client/main.go). ### Sending metadata There are two ways to send metadata to the server. The recommended way is to append kv pairs to the context using `AppendToOutgoingContext`. This can be used with or without existing metadata on the context. When there is no prior metadata, metadata is added; when metadata already exists on the context, kv pairs are merged in. ```go // create a new context with some metadata ctx := metadata.AppendToOutgoingContext(ctx, "k1", "v1", "k1", "v2", "k2", "v3") // later, add some more metadata to the context (e.g. in an interceptor) ctx := metadata.AppendToOutgoingContext(ctx, "k3", "v4") // make unary RPC response, err := client.SomeRPC(ctx, someRequest) // or make streaming RPC stream, err := client.SomeStreamingRPC(ctx) ``` Alternatively, metadata may be attached to the context using `NewOutgoingContext`. However, this replaces any existing metadata in the context, so care must be taken to preserve the existing metadata if desired. This is slower than using `AppendToOutgoingContext`. An example of this is below: ```go // create a new context with some metadata md := metadata.Pairs("k1", "v1", "k1", "v2", "k2", "v3") ctx := metadata.NewOutgoingContext(context.Background(), md) // later, add some more metadata to the context (e.g. 
in an interceptor) md, _ := metadata.FromOutgoingContext(ctx) newMD := metadata.Pairs("k3", "v3") ctx = metadata.NewContext(ctx, metadata.Join(metadata.New(send), newMD)) // make unary RPC response, err := client.SomeRPC(ctx, someRequest) // or make streaming RPC stream, err := client.SomeStreamingRPC(ctx) ``` ### Receiving metadata Metadata that a client can receive includes header and trailer. #### Unary call Header and trailer sent along with a unary call can be retrieved using function [Header](https://godoc.org/google.golang.org/grpc#Header) and [Trailer](https://godoc.org/google.golang.org/grpc#Trailer) in [CallOption](https://godoc.org/google.golang.org/grpc#CallOption): ```go var header, trailer metadata.MD // variable to store header and trailer r, err := client.SomeRPC( ctx, someRequest, grpc.Header(&header), // will retrieve header grpc.Trailer(&trailer), // will retrieve trailer ) // do something with header and trailer ``` #### Streaming call For streaming calls including: - Server streaming RPC - Client streaming RPC - Bidirectional streaming RPC Header and trailer can be retrieved from the returned stream using function `Header` and `Trailer` in interface [ClientStream](https://godoc.org/google.golang.org/grpc#ClientStream): ```go stream, err := client.SomeStreamingRPC(ctx) // retrieve header header, err := stream.Header() // retrieve trailer trailer := stream.Trailer() ``` ## Sending and receiving metadata - server side Server side metadata sending and receiving examples are available [here](../examples/features/metadata/server/main.go). ### Receiving metadata To read metadata sent by the client, the server needs to retrieve it from RPC context. If it is a unary call, the RPC handler's context can be used. For streaming calls, the server needs to get context from the stream. #### Unary call ```go func (s *server) SomeRPC(ctx context.Context, in *pb.someRequest) (*pb.someResponse, error) { md, ok := metadata.FromIncomingContext(ctx) // do something with metadata } ``` #### Streaming call ```go func (s *server) SomeStreamingRPC(stream pb.Service_SomeStreamingRPCServer) error { md, ok := metadata.FromIncomingContext(stream.Context()) // get context from stream // do something with metadata } ``` ### Sending metadata #### Unary call To send header and trailer to client in unary call, the server can call [SendHeader](https://godoc.org/google.golang.org/grpc#SendHeader) and [SetTrailer](https://godoc.org/google.golang.org/grpc#SetTrailer) functions in module [grpc](https://godoc.org/google.golang.org/grpc). These two functions take a context as the first parameter. 
It should be the RPC handler's context or one derived from it: ```go func (s *server) SomeRPC(ctx context.Context, in *pb.someRequest) (*pb.someResponse, error) { // create and send header header := metadata.Pairs("header-key", "val") grpc.SendHeader(ctx, header) // create and set trailer trailer := metadata.Pairs("trailer-key", "val") grpc.SetTrailer(ctx, trailer) } ``` #### Streaming call For streaming calls, header and trailer can be sent using function `SendHeader` and `SetTrailer` in interface [ServerStream](https://godoc.org/google.golang.org/grpc#ServerStream): ```go func (s *server) SomeStreamingRPC(stream pb.Service_SomeStreamingRPCServer) error { // create and send header header := metadata.Pairs("header-key", "val") stream.SendHeader(header) // create and set trailer trailer := metadata.Pairs("trailer-key", "val") stream.SetTrailer(trailer) } ``` grpc-go-1.22.1/Documentation/keepalive.md000066400000000000000000000033651351635773100202550ustar00rootroot00000000000000# Keepalive gRPC sends http2 pings on the transport to detect if the connection is down. If the ping is not acknowledged by the other side within a certain period, the connection will be close. Note that pings are only necessary when there's no activity on the connection. For how to configure keepalive, see https://godoc.org/google.golang.org/grpc/keepalive for the options. ## What should I set? It should be sufficient for most users to set [client parameters](https://godoc.org/google.golang.org/grpc/keepalive) as a [dial option](https://godoc.org/google.golang.org/grpc#WithKeepaliveParams). ## What will happen? (The behavior described here is specific for gRPC-go, it might be slightly different in other languages.) When there's no activity on a connection (note that an ongoing stream results in __no activity__ when there's no message being sent), after `Time`, a ping will be sent by the client and the server will send a ping ack when it gets the ping. Client will wait for `Timeout`, and check if there's any activity on the connection during this period (a ping ack is an activity). ## What about server side? Server has similar `Time` and `Timeout` settings as client. Server can also configure connection max-age. See [server parameters](https://godoc.org/google.golang.org/grpc/keepalive#ServerParameters) for details. ### Enforcement policy [Enforcement policy](https://godoc.org/google.golang.org/grpc/keepalive#EnforcementPolicy) is a special setting on server side to protect server from malicious or misbehaving clients. Server sends GOAWAY with ENHANCE_YOUR_CALM and close the connection when bad behaviors are detected: - Client sends too frequent pings - Client sends pings when there's no stream and this is disallowed by server config grpc-go-1.22.1/Documentation/log_levels.md000066400000000000000000000030261351635773100204350ustar00rootroot00000000000000# Log Levels This document describes the different log levels supported by the grpc-go library, and under what conditions they should be used. ### Info Info messages are for informational purposes and may aid in the debugging of applications or the gRPC library. Examples: - The name resolver received an update. - The balancer updated its picker. - Significant gRPC state is changing. At verbosity of 0 (the default), any single info message should not be output more than once every 5 minutes under normal operation. ### Warning Warning messages indicate problems that are non-fatal for the application, but could lead to unexpected behavior or subsequent errors. 
Examples: - Resolver could not resolve target name. - Error received while connecting to a server. - Lost or corrupt connection with remote endpoint. ### Error Error messages represent errors in the usage of gRPC that cannot be returned to the application as errors, or internal gRPC-Go errors that are recoverable. Internal errors are detected during gRPC tests and will result in test failures. Examples: - Invalid arguments passed to a function that cannot return an error. - An internal error that cannot be returned or would be inappropriate to return to the user. ### Fatal Fatal errors are severe internal errors that are unrecoverable. These lead directly to panics, and are avoided as much as possible. Example: - Internal invariant was violated. - User attempted an action that cannot return an error gracefully, but would lead to an invalid state if performed. grpc-go-1.22.1/Documentation/proxy.md000066400000000000000000000011331351635773100174600ustar00rootroot00000000000000# Proxy HTTP CONNECT proxies are supported by default in gRPC. The proxy address can be specified by the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY (or the lowercase versions thereof). ## Custom proxy Currently, proxy support is implemented in the default dialer. It does one more handshake (a CONNECT handshake in the case of HTTP CONNECT proxy) on the connection before giving it to gRPC. If the default proxy doesn't work for you, replace the default dialer with your custom proxy dialer. This can be done using [`WithDialer`](https://godoc.org/google.golang.org/grpc#WithDialer).grpc-go-1.22.1/Documentation/rpc-errors.md000066400000000000000000000045511351635773100204040ustar00rootroot00000000000000# RPC Errors All service method handlers should return `nil` or errors from the `status.Status` type. Clients have direct access to the errors. Upon encountering an error, a gRPC server method handler should create a `status.Status`. In typical usage, one would use [status.New][new-status] passing in an appropriate [codes.Code][code] as well as a description of the error to produce a `status.Status`. Calling [status.Err][status-err] converts the `status.Status` type into an `error`. As a convenience method, there is also [status.Error][status-error] which obviates the conversion step. Compare: ``` st := status.New(codes.NotFound, "some description") err := st.Err() // vs. err := status.Error(codes.NotFound, "some description") ``` ## Adding additional details to errors In some cases, it may be necessary to add details for a particular error on the server side. The [status.WithDetails][with-details] method exists for this purpose. Clients may then read those details by first converting the plain `error` type back to a [status.Status][status] and then using [status.Details][details]. ## Example The [example][example] demonstrates the API discussed above and shows how to add information about rate limits to the error message using `status.Status`. 
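As a rough sketch of that pattern (assuming the `QuotaFailure` message from `google.golang.org/genproto/googleapis/rpc/errdetails` and the `helloworld` service types used in the example), attaching details on the server and reading them on the client might look like:

```
// Server side: attach a QuotaFailure detail before returning the error.
func (s *server) SayHello(ctx context.Context, in *pb.HelloRequest) (*pb.HelloReply, error) {
	st := status.New(codes.ResourceExhausted, "request limit exceeded")
	ds, err := st.WithDetails(&errdetails.QuotaFailure{
		Violations: []*errdetails.QuotaFailure_Violation{{
			Subject:     "name: " + in.Name,
			Description: "Limit one greeting per person",
		}},
	})
	if err != nil {
		return nil, st.Err() // fall back to the plain status if details cannot be attached
	}
	return nil, ds.Err()
}

// Client side: convert the error back to a status and inspect its details.
func logQuotaDetails(err error) {
	st := status.Convert(err)
	for _, d := range st.Details() {
		if quota, ok := d.(*errdetails.QuotaFailure); ok {
			log.Printf("quota violations: %v", quota.Violations)
		}
	}
}
```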
To run the example, first start the server: ``` $ go run examples/rpc_errors/server/main.go ``` In a separate session, run the client: ``` $ go run examples/rpc_errors/client/main.go ``` On the first run of the client, all is well: ``` 2018/03/12 19:39:33 Greeting: Hello world ``` Upon running the client a second time, the client exceeds the rate limit and receives an error with details: ``` 2018/03/19 16:42:01 Quota failure: violations: exit status 1 ``` [status]: https://godoc.org/google.golang.org/grpc/status#Status [new-status]: https://godoc.org/google.golang.org/grpc/status#New [code]: https://godoc.org/google.golang.org/grpc/codes#Code [with-details]: https://godoc.org/google.golang.org/grpc/status#Status.WithDetails [details]: https://godoc.org/google.golang.org/grpc/status#Status.Details [status-err]: https://godoc.org/google.golang.org/grpc/status#Status.Err [status-error]: https://godoc.org/google.golang.org/grpc/status#Error [example]: https://github.com/grpc/grpc-go/tree/master/examples/features/errors grpc-go-1.22.1/Documentation/server-reflection-tutorial.md000066400000000000000000000076561351635773100236160ustar00rootroot00000000000000# gRPC Server Reflection Tutorial gRPC Server Reflection provides information about publicly-accessible gRPC services on a server, and assists clients at runtime to construct RPC requests and responses without precompiled service information. It is used by gRPC CLI, which can be used to introspect server protos and send/receive test RPCs. ## Enable Server Reflection gRPC-go Server Reflection is implemented in package [reflection](https://github.com/grpc/grpc-go/tree/master/reflection). To enable server reflection, you need to import this package and register reflection service on your gRPC server. For example, to enable server reflection in `example/helloworld`, we need to make the following changes: ```diff --- a/examples/helloworld/greeter_server/main.go +++ b/examples/helloworld/greeter_server/main.go @@ -40,6 +40,7 @@ import ( "google.golang.org/grpc" pb "google.golang.org/grpc/examples/helloworld/helloworld" + "google.golang.org/grpc/reflection" ) const ( @@ -61,6 +62,8 @@ func main() { } s := grpc.NewServer() pb.RegisterGreeterServer(s, &server{}) + // Register reflection service on gRPC server. + reflection.Register(s) if err := s.Serve(lis); err != nil { log.Fatalf("failed to serve: %v", err) } ``` An example server with reflection registered can be found at `examples/features/reflection/server`. ## gRPC CLI After enabling Server Reflection in a server application, you can use gRPC CLI to check its services. gRPC CLI is only available in c++. Instructions on how to use gRPC CLI can be found at [command_line_tool.md](https://github.com/grpc/grpc/blob/master/doc/command_line_tool.md). 
To build gRPC CLI: ```sh git clone https://github.com/grpc/grpc cd grpc make grpc_cli cd bins/opt # grpc_cli is in directory bins/opt/ ``` ## Use gRPC CLI to check services First, start the helloworld server in grpc-go directory: ```sh $ cd $ go run examples/features/reflection/server/main.go ``` Open a new terminal and make sure you are in the directory where grpc_cli lives: ```sh $ cd /bins/opt ``` ### List services `grpc_cli ls` command lists services and methods exposed at a given port: - List all the services exposed at a given port ```sh $ ./grpc_cli ls localhost:50051 ``` output: ```sh grpc.examples.echo.Echo grpc.reflection.v1alpha.ServerReflection helloworld.Greeter ``` - List one service with details `grpc_cli ls` command inspects a service given its full name (in the format of \.\). It can print information with a long listing format when `-l` flag is set. This flag can be used to get more details about a service. ```sh $ ./grpc_cli ls localhost:50051 helloworld.Greeter -l ``` output: ```sh filename: helloworld.proto package: helloworld; service Greeter { rpc SayHello(helloworld.HelloRequest) returns (helloworld.HelloReply) {} } ``` ### List methods - List one method with details `grpc_cli ls` command also inspects a method given its full name (in the format of \.\.\). ```sh $ ./grpc_cli ls localhost:50051 helloworld.Greeter.SayHello -l ``` output: ```sh rpc SayHello(helloworld.HelloRequest) returns (helloworld.HelloReply) {} ``` ### Inspect message types We can use`grpc_cli type` command to inspect request/response types given the full name of the type (in the format of \.\). - Get information about the request type ```sh $ ./grpc_cli type localhost:50051 helloworld.HelloRequest ``` output: ```sh message HelloRequest { optional string name = 1[json_name = "name"]; } ``` ### Call a remote method We can send RPCs to a server and get responses using `grpc_cli call` command. - Call a unary method ```sh $ ./grpc_cli call localhost:50051 SayHello "name: 'gRPC CLI'" ``` output: ```sh message: "Hello gRPC CLI" ``` grpc-go-1.22.1/Documentation/versioning.md000066400000000000000000000022631351635773100204670ustar00rootroot00000000000000# Versioning and Releases Note: This document references terminology defined at http://semver.org. ## Release Frequency Regular MINOR releases of gRPC-Go are performed every six weeks. Patch releases to the previous two MINOR releases may be performed on demand or if serious security problems are discovered. ## Versioning Policy The gRPC-Go versioning policy follows the Semantic Versioning 2.0.0 specification, with the following exceptions: - A MINOR version will not _necessarily_ add new functionality. - MINOR releases will not break backward compatibility, except in the following circumstances: - An API was marked as EXPERIMENTAL upon its introduction. - An API was marked as DEPRECATED in the initial MAJOR release. - An API is inherently flawed and cannot provide correct or secure behavior. In these cases, APIs MAY be changed or removed without a MAJOR release. Otherwise, backward compatibility will be preserved by MINOR releases. For an API marked as DEPRECATED, an alternative will be available (if appropriate) for at least three months prior to its removal. 
## Release History Please see our release history on GitHub: https://github.com/grpc/grpc-go/releases grpc-go-1.22.1/LICENSE000066400000000000000000000261361351635773100141630ustar00rootroot00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. 
Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. grpc-go-1.22.1/Makefile000066400000000000000000000020571351635773100146120ustar00rootroot00000000000000all: vet test testrace build: deps go build google.golang.org/grpc/... clean: go clean -i google.golang.org/grpc/... deps: go get -d -v google.golang.org/grpc/... proto: @ if ! which protoc > /dev/null; then \ echo "error: protoc not installed" >&2; \ exit 1; \ fi go generate google.golang.org/grpc/... test: testdeps go test -cpu 1,4 -timeout 7m google.golang.org/grpc/... testappengine: testappenginedeps goapp test -cpu 1,4 -timeout 7m google.golang.org/grpc/... testappenginedeps: goapp get -d -v -t -tags 'appengine appenginevm' google.golang.org/grpc/... testdeps: go get -d -v -t google.golang.org/grpc/... testrace: testdeps go test -race -cpu 1,4 -timeout 7m google.golang.org/grpc/... updatedeps: go get -d -v -u -f google.golang.org/grpc/... updatetestdeps: go get -d -v -t -u -f google.golang.org/grpc/... vet: vetdeps ./vet.sh vetdeps: ./vet.sh -install .PHONY: \ all \ build \ clean \ deps \ proto \ test \ testappengine \ testappenginedeps \ testdeps \ testrace \ updatedeps \ updatetestdeps \ vet \ vetdeps grpc-go-1.22.1/README.md000066400000000000000000000101151351635773100144230ustar00rootroot00000000000000# gRPC-Go [![Build Status](https://travis-ci.org/grpc/grpc-go.svg)](https://travis-ci.org/grpc/grpc-go) [![GoDoc](https://godoc.org/google.golang.org/grpc?status.svg)](https://godoc.org/google.golang.org/grpc) [![GoReportCard](https://goreportcard.com/badge/grpc/grpc-go)](https://goreportcard.com/report/github.com/grpc/grpc-go) The Go implementation of [gRPC](https://grpc.io/): A high performance, open source, general RPC framework that puts mobile and HTTP/2 first. For more information see the [gRPC Quick Start: Go](https://grpc.io/docs/quickstart/go.html) guide. Installation ------------ To install this package, you need to install Go and setup your Go workspace on your computer. The simplest way to install the library is to run: ``` $ go get -u google.golang.org/grpc ``` With Go module support (Go 1.11+), simply `import "google.golang.org/grpc"` in your source code and `go [build|run|test]` will automatically download the necessary dependencies ([Go modules ref](https://github.com/golang/go/wiki/Modules)). If you are trying to access grpc-go from within China, please see the [FAQ](#FAQ) below. Prerequisites ------------- gRPC-Go requires Go 1.9 or later. Documentation ------------- - See [godoc](https://godoc.org/google.golang.org/grpc) for package and API descriptions. - Documentation on specific topics can be found in the [Documentation directory](Documentation/). - Examples can be found in the [examples directory](examples/). Performance ----------- Performance benchmark data for grpc-go and other languages is maintained in [this dashboard](https://performance-dot-grpc-testing.appspot.com/explore?dashboard=5652536396611584&widget=490377658&container=1286539696). Status ------ General Availability [Google Cloud Platform Launch Stages](https://cloud.google.com/terms/launch-stages). FAQ --- #### I/O Timeout Errors The `golang.org` domain may be blocked from some countries. 
`go get` usually produces an error like the following when this happens: ``` $ go get -u google.golang.org/grpc package google.golang.org/grpc: unrecognized import path "google.golang.org/grpc" (https fetch: Get https://google.golang.org/grpc?go-get=1: dial tcp 216.239.37.1:443: i/o timeout) ``` To build Go code, there are several options: - Set up a VPN and access google.golang.org through that. - Without Go module support: `git clone` the repo manually: ``` git clone https://github.com/grpc/grpc-go.git $GOPATH/src/google.golang.org/grpc ``` You will need to do the same for all of grpc's dependencies in `golang.org`, e.g. `golang.org/x/net`. - With Go module support: it is possible to use the `replace` feature of `go mod` to create aliases for golang.org packages. In your project's directory: ``` go mod edit -replace=google.golang.org/grpc=github.com/grpc/grpc-go@latest go mod tidy go mod vendor go build -mod=vendor ``` Again, this will need to be done for all transitive dependencies hosted on golang.org as well. Please refer to [this issue](https://github.com/golang/go/issues/28652) in the golang repo regarding this concern. #### Compiling error, undefined: grpc.SupportPackageIsVersion Please update proto package, gRPC package and rebuild the proto files: - `go get -u github.com/golang/protobuf/{proto,protoc-gen-go}` - `go get -u google.golang.org/grpc` - `protoc --go_out=plugins=grpc:. *.proto` #### How to turn on logging The default logger is controlled by the environment variables. Turn everything on by setting: ``` GRPC_GO_LOG_VERBOSITY_LEVEL=99 GRPC_GO_LOG_SEVERITY_LEVEL=info ``` #### The RPC failed with error `"code = Unavailable desc = transport is closing"` This error means the connection the RPC is using was closed, and there are many possible reasons, including: 1. mis-configured transport credentials, connection failed on handshaking 1. bytes disrupted, possibly by a proxy in between 1. server shutdown It can be tricky to debug this because the error happens on the client side but the root cause of the connection being closed is on the server side. Turn on logging on __both client and server__, and see if there are any transport errors. grpc-go-1.22.1/backoff.go000066400000000000000000000022211351635773100150650ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // See internal/backoff package for the backoff implementation. This file is // kept for the exported types and API backward compatibility. package grpc import ( "time" ) // DefaultBackoffConfig uses values specified for backoff in // https://github.com/grpc/grpc/blob/master/doc/connection-backoff.md. var DefaultBackoffConfig = BackoffConfig{ MaxDelay: 120 * time.Second, } // BackoffConfig defines the parameters for the default gRPC backoff strategy. type BackoffConfig struct { // MaxDelay is the upper bound of backoff delay. 
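	//
	// A minimal sketch of overriding it at dial time (illustrative only; it
	// assumes grpc.WithBackoffConfig is available in the grpc-go version in
	// use, and "target" is a placeholder address):
	//
	//	conn, err := grpc.Dial(target,
	//		grpc.WithBackoffConfig(grpc.BackoffConfig{MaxDelay: 10 * time.Second}),
	//	)
	//	if err != nil {
	//		// handle the dial error
	//	}
	//	defer conn.Close()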
MaxDelay time.Duration } grpc-go-1.22.1/balancer.go000066400000000000000000000255541351635773100152570ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "context" "net" "sync" "google.golang.org/grpc/codes" "google.golang.org/grpc/credentials" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/naming" "google.golang.org/grpc/status" ) // Address represents a server the client connects to. // // Deprecated: please use package balancer. type Address struct { // Addr is the server address on which a connection will be established. Addr string // Metadata is the information associated with Addr, which may be used // to make load balancing decision. Metadata interface{} } // BalancerConfig specifies the configurations for Balancer. // // Deprecated: please use package balancer. type BalancerConfig struct { // DialCreds is the transport credential the Balancer implementation can // use to dial to a remote load balancer server. The Balancer implementations // can ignore this if it does not need to talk to another party securely. DialCreds credentials.TransportCredentials // Dialer is the custom dialer the Balancer implementation can use to dial // to a remote load balancer server. The Balancer implementations // can ignore this if it doesn't need to talk to remote balancer. Dialer func(context.Context, string) (net.Conn, error) } // BalancerGetOptions configures a Get call. // // Deprecated: please use package balancer. type BalancerGetOptions struct { // BlockingWait specifies whether Get should block when there is no // connected address. BlockingWait bool } // Balancer chooses network addresses for RPCs. // // Deprecated: please use package balancer. type Balancer interface { // Start does the initialization work to bootstrap a Balancer. For example, // this function may start the name resolution and watch the updates. It will // be called when dialing. Start(target string, config BalancerConfig) error // Up informs the Balancer that gRPC has a connection to the server at // addr. It returns down which is called once the connection to addr gets // lost or closed. // TODO: It is not clear how to construct and take advantage of the meaningful error // parameter for down. Need realistic demands to guide. Up(addr Address) (down func(error)) // Get gets the address of a server for the RPC corresponding to ctx. // i) If it returns a connected address, gRPC internals issues the RPC on the // connection to this address; // ii) If it returns an address on which the connection is under construction // (initiated by Notify(...)) but not connected, gRPC internals // * fails RPC if the RPC is fail-fast and connection is in the TransientFailure or // Shutdown state; // or // * issues RPC on the connection otherwise. // iii) If it returns an address on which the connection does not exist, gRPC // internals treats it as an error and will fail the corresponding RPC. 
// // Therefore, the following is the recommended rule when writing a custom Balancer. // If opts.BlockingWait is true, it should return a connected address or // block if there is no connected address. It should respect the timeout or // cancellation of ctx when blocking. If opts.BlockingWait is false (for fail-fast // RPCs), it should return an address it has notified via Notify(...) immediately // instead of blocking. // // The function returns put which is called once the rpc has completed or failed. // put can collect and report RPC stats to a remote load balancer. // // This function should only return the errors Balancer cannot recover by itself. // gRPC internals will fail the RPC if an error is returned. Get(ctx context.Context, opts BalancerGetOptions) (addr Address, put func(), err error) // Notify returns a channel that is used by gRPC internals to watch the addresses // gRPC needs to connect. The addresses might be from a name resolver or remote // load balancer. gRPC internals will compare it with the existing connected // addresses. If the address Balancer notified is not in the existing connected // addresses, gRPC starts to connect the address. If an address in the existing // connected addresses is not in the notification list, the corresponding connection // is shutdown gracefully. Otherwise, there are no operations to take. Note that // the Address slice must be the full list of the Addresses which should be connected. // It is NOT delta. Notify() <-chan []Address // Close shuts down the balancer. Close() error } // RoundRobin returns a Balancer that selects addresses round-robin. It uses r to watch // the name resolution updates and updates the addresses available correspondingly. // // Deprecated: please use package balancer/roundrobin. func RoundRobin(r naming.Resolver) Balancer { return &roundRobin{r: r} } type addrInfo struct { addr Address connected bool } type roundRobin struct { r naming.Resolver w naming.Watcher addrs []*addrInfo // all the addresses the client should potentially connect mu sync.Mutex addrCh chan []Address // the channel to notify gRPC internals the list of addresses the client should connect to. next int // index of the next address to return for Get() waitCh chan struct{} // the channel to block when there is no connected address available done bool // The Balancer is closed. } func (rr *roundRobin) watchAddrUpdates() error { updates, err := rr.w.Next() if err != nil { grpclog.Warningf("grpc: the naming watcher stops working due to %v.", err) return err } rr.mu.Lock() defer rr.mu.Unlock() for _, update := range updates { addr := Address{ Addr: update.Addr, Metadata: update.Metadata, } switch update.Op { case naming.Add: var exist bool for _, v := range rr.addrs { if addr == v.addr { exist = true grpclog.Infoln("grpc: The name resolver wanted to add an existing address: ", addr) break } } if exist { continue } rr.addrs = append(rr.addrs, &addrInfo{addr: addr}) case naming.Delete: for i, v := range rr.addrs { if addr == v.addr { copy(rr.addrs[i:], rr.addrs[i+1:]) rr.addrs = rr.addrs[:len(rr.addrs)-1] break } } default: grpclog.Errorln("Unknown update.Op ", update.Op) } } // Make a copy of rr.addrs and write it onto rr.addrCh so that gRPC internals gets notified. 
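	// The copy is built while rr.mu is held, so the slice sent on the channel
	// is never mutated afterwards. The non-blocking receive below drains any
	// stale update first, keeping the 1-element buffered rr.addrCh holding
	// only the most recent address list.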
open := make([]Address, len(rr.addrs)) for i, v := range rr.addrs { open[i] = v.addr } if rr.done { return ErrClientConnClosing } select { case <-rr.addrCh: default: } rr.addrCh <- open return nil } func (rr *roundRobin) Start(target string, config BalancerConfig) error { rr.mu.Lock() defer rr.mu.Unlock() if rr.done { return ErrClientConnClosing } if rr.r == nil { // If there is no name resolver installed, it is not needed to // do name resolution. In this case, target is added into rr.addrs // as the only address available and rr.addrCh stays nil. rr.addrs = append(rr.addrs, &addrInfo{addr: Address{Addr: target}}) return nil } w, err := rr.r.Resolve(target) if err != nil { return err } rr.w = w rr.addrCh = make(chan []Address, 1) go func() { for { if err := rr.watchAddrUpdates(); err != nil { return } } }() return nil } // Up sets the connected state of addr and sends notification if there are pending // Get() calls. func (rr *roundRobin) Up(addr Address) func(error) { rr.mu.Lock() defer rr.mu.Unlock() var cnt int for _, a := range rr.addrs { if a.addr == addr { if a.connected { return nil } a.connected = true } if a.connected { cnt++ } } // addr is only one which is connected. Notify the Get() callers who are blocking. if cnt == 1 && rr.waitCh != nil { close(rr.waitCh) rr.waitCh = nil } return func(err error) { rr.down(addr, err) } } // down unsets the connected state of addr. func (rr *roundRobin) down(addr Address, err error) { rr.mu.Lock() defer rr.mu.Unlock() for _, a := range rr.addrs { if addr == a.addr { a.connected = false break } } } // Get returns the next addr in the rotation. func (rr *roundRobin) Get(ctx context.Context, opts BalancerGetOptions) (addr Address, put func(), err error) { var ch chan struct{} rr.mu.Lock() if rr.done { rr.mu.Unlock() err = ErrClientConnClosing return } if len(rr.addrs) > 0 { if rr.next >= len(rr.addrs) { rr.next = 0 } next := rr.next for { a := rr.addrs[next] next = (next + 1) % len(rr.addrs) if a.connected { addr = a.addr rr.next = next rr.mu.Unlock() return } if next == rr.next { // Has iterated all the possible address but none is connected. break } } } if !opts.BlockingWait { if len(rr.addrs) == 0 { rr.mu.Unlock() err = status.Errorf(codes.Unavailable, "there is no address available") return } // Returns the next addr on rr.addrs for failfast RPCs. addr = rr.addrs[rr.next].addr rr.next++ rr.mu.Unlock() return } // Wait on rr.waitCh for non-failfast RPCs. if rr.waitCh == nil { ch = make(chan struct{}) rr.waitCh = ch } else { ch = rr.waitCh } rr.mu.Unlock() for { select { case <-ctx.Done(): err = ctx.Err() return case <-ch: rr.mu.Lock() if rr.done { rr.mu.Unlock() err = ErrClientConnClosing return } if len(rr.addrs) > 0 { if rr.next >= len(rr.addrs) { rr.next = 0 } next := rr.next for { a := rr.addrs[next] next = (next + 1) % len(rr.addrs) if a.connected { addr = a.addr rr.next = next rr.mu.Unlock() return } if next == rr.next { // Has iterated all the possible address but none is connected. break } } } // The newly added addr got removed by Down() again. 
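			// Re-arm the wait channel and keep looping until a connected
			// address appears or ctx is cancelled.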
if rr.waitCh == nil { ch = make(chan struct{}) rr.waitCh = ch } else { ch = rr.waitCh } rr.mu.Unlock() } } } func (rr *roundRobin) Notify() <-chan []Address { return rr.addrCh } func (rr *roundRobin) Close() error { rr.mu.Lock() defer rr.mu.Unlock() if rr.done { return errBalancerClosed } rr.done = true if rr.w != nil { rr.w.Close() } if rr.waitCh != nil { close(rr.waitCh) rr.waitCh = nil } if rr.addrCh != nil { close(rr.addrCh) } return nil } // pickFirst is used to test multi-addresses in one addrConn in which all addresses share the same addrConn. // It is a wrapper around roundRobin balancer. The logic of all methods works fine because balancer.Get() // returns the only address Up by resetTransport(). type pickFirst struct { *roundRobin } grpc-go-1.22.1/balancer/000077500000000000000000000000001351635773100147155ustar00rootroot00000000000000grpc-go-1.22.1/balancer/balancer.go000066400000000000000000000343101351635773100170140ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package balancer defines APIs for load balancing in gRPC. // All APIs in this package are experimental. package balancer import ( "context" "encoding/json" "errors" "net" "strings" "google.golang.org/grpc/connectivity" "google.golang.org/grpc/credentials" "google.golang.org/grpc/internal" "google.golang.org/grpc/metadata" "google.golang.org/grpc/resolver" "google.golang.org/grpc/serviceconfig" ) var ( // m is a map from name to balancer builder. m = make(map[string]Builder) ) // Register registers the balancer builder to the balancer map. b.Name // (lowercased) will be used as the name registered with this builder. If the // Builder implements ConfigParser, ParseConfig will be called when new service // configs are received by the resolver, and the result will be provided to the // Balancer in UpdateClientConnState. // // NOTE: this function must only be called during initialization time (i.e. in // an init() function), and is not thread-safe. If multiple Balancers are // registered with the same name, the one registered last will take effect. func Register(b Builder) { m[strings.ToLower(b.Name())] = b } // unregisterForTesting deletes the balancer with the given name from the // balancer map. // // This function is not thread-safe. func unregisterForTesting(name string) { delete(m, name) } func init() { internal.BalancerUnregister = unregisterForTesting } // Get returns the resolver builder registered with the given name. // Note that the compare is done in a case-insensitive fashion. // If no builder is register with the name, nil will be returned. func Get(name string) Builder { if b, ok := m[strings.ToLower(name)]; ok { return b } return nil } // SubConn represents a gRPC sub connection. // Each sub connection contains a list of addresses. gRPC will // try to connect to them (in sequence), and stop trying the // remainder once one connection is successful. // // The reconnect backoff will be applied on the list, not a single address. 
// For example, try_on_all_addresses -> backoff -> try_on_all_addresses. // // All SubConns start in IDLE, and will not try to connect. To trigger // the connecting, Balancers must call Connect. // When the connection encounters an error, it will reconnect immediately. // When the connection becomes IDLE, it will not reconnect unless Connect is // called. // // This interface is to be implemented by gRPC. Users should not need a // brand new implementation of this interface. For the situations like // testing, the new implementation should embed this interface. This allows // gRPC to add new methods to this interface. type SubConn interface { // UpdateAddresses updates the addresses used in this SubConn. // gRPC checks if currently-connected address is still in the new list. // If it's in the list, the connection will be kept. // If it's not in the list, the connection will gracefully closed, and // a new connection will be created. // // This will trigger a state transition for the SubConn. UpdateAddresses([]resolver.Address) // Connect starts the connecting for this SubConn. Connect() } // NewSubConnOptions contains options to create new SubConn. type NewSubConnOptions struct { // CredsBundle is the credentials bundle that will be used in the created // SubConn. If it's nil, the original creds from grpc DialOptions will be // used. CredsBundle credentials.Bundle // HealthCheckEnabled indicates whether health check service should be // enabled on this SubConn HealthCheckEnabled bool } // ClientConn represents a gRPC ClientConn. // // This interface is to be implemented by gRPC. Users should not need a // brand new implementation of this interface. For the situations like // testing, the new implementation should embed this interface. This allows // gRPC to add new methods to this interface. type ClientConn interface { // NewSubConn is called by balancer to create a new SubConn. // It doesn't block and wait for the connections to be established. // Behaviors of the SubConn can be controlled by options. NewSubConn([]resolver.Address, NewSubConnOptions) (SubConn, error) // RemoveSubConn removes the SubConn from ClientConn. // The SubConn will be shutdown. RemoveSubConn(SubConn) // UpdateBalancerState is called by balancer to notify gRPC that some internal // state in balancer has changed. // // gRPC will update the connectivity state of the ClientConn, and will call pick // on the new picker to pick new SubConn. UpdateBalancerState(s connectivity.State, p Picker) // ResolveNow is called by balancer to notify gRPC to do a name resolving. ResolveNow(resolver.ResolveNowOption) // Target returns the dial target for this ClientConn. // // Deprecated: Use the Target field in the BuildOptions instead. Target() string } // BuildOptions contains additional information for Build. type BuildOptions struct { // DialCreds is the transport credential the Balancer implementation can // use to dial to a remote load balancer server. The Balancer implementations // can ignore this if it does not need to talk to another party securely. DialCreds credentials.TransportCredentials // CredsBundle is the credentials bundle that the Balancer can use. CredsBundle credentials.Bundle // Dialer is the custom dialer the Balancer implementation can use to dial // to a remote load balancer server. The Balancer implementations // can ignore this if it doesn't need to talk to remote balancer. 
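	//
	// An illustrative value (a sketch only; any function with this signature
	// will do):
	//
	//	func(ctx context.Context, addr string) (net.Conn, error) {
	//		return (&net.Dialer{}).DialContext(ctx, "tcp", addr)
	//	}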
Dialer func(context.Context, string) (net.Conn, error) // ChannelzParentID is the entity parent's channelz unique identification number. ChannelzParentID int64 // Target contains the parsed address info of the dial target. It is the same resolver.Target as // passed to the resolver. // See the documentation for the resolver.Target type for details about what it contains. Target resolver.Target } // Builder creates a balancer. type Builder interface { // Build creates a new balancer with the ClientConn. Build(cc ClientConn, opts BuildOptions) Balancer // Name returns the name of balancers built by this builder. // It will be used to pick balancers (for example in service config). Name() string } // ConfigParser parses load balancer configs. type ConfigParser interface { // ParseConfig parses the JSON load balancer config provided into an // internal form or returns an error if the config is invalid. For future // compatibility reasons, unknown fields in the config should be ignored. ParseConfig(LoadBalancingConfigJSON json.RawMessage) (serviceconfig.LoadBalancingConfig, error) } // PickOptions contains addition information for the Pick operation. type PickOptions struct { // FullMethodName is the method name that NewClientStream() is called // with. The canonical format is /service/Method. FullMethodName string } // DoneInfo contains additional information for done. type DoneInfo struct { // Err is the rpc error the RPC finished with. It could be nil. Err error // Trailer contains the metadata from the RPC's trailer, if present. Trailer metadata.MD // BytesSent indicates if any bytes have been sent to the server. BytesSent bool // BytesReceived indicates if any byte has been received from the server. BytesReceived bool // ServerLoad is the load received from server. It's usually sent as part of // trailing metadata. // // The only supported type now is *orca_v1.LoadReport. ServerLoad interface{} } var ( // ErrNoSubConnAvailable indicates no SubConn is available for pick(). // gRPC will block the RPC until a new picker is available via UpdateBalancerState(). ErrNoSubConnAvailable = errors.New("no SubConn is available") // ErrTransientFailure indicates all SubConns are in TransientFailure. // WaitForReady RPCs will block, non-WaitForReady RPCs will fail. ErrTransientFailure = errors.New("all SubConns are in TransientFailure") ) // Picker is used by gRPC to pick a SubConn to send an RPC. // Balancer is expected to generate a new picker from its snapshot every time its // internal state has changed. // // The pickers used by gRPC can be updated by ClientConn.UpdateBalancerState(). type Picker interface { // Pick returns the SubConn to be used to send the RPC. // The returned SubConn must be one returned by NewSubConn(). // // This functions is expected to return: // - a SubConn that is known to be READY; // - ErrNoSubConnAvailable if no SubConn is available, but progress is being // made (for example, some SubConn is in CONNECTING mode); // - other errors if no active connecting is happening (for example, all SubConn // are in TRANSIENT_FAILURE mode). // // If a SubConn is returned: // - If it is READY, gRPC will send the RPC on it; // - If it is not ready, or becomes not ready after it's returned, gRPC will // block until UpdateBalancerState() is called and will call pick on the // new picker. The done function returned from Pick(), if not nil, will be // called with nil error, no bytes sent and no bytes received. 
// // If the returned error is not nil: // - If the error is ErrNoSubConnAvailable, gRPC will block until UpdateBalancerState() // - If the error is ErrTransientFailure: // - If the RPC is wait-for-ready, gRPC will block until UpdateBalancerState() // is called to pick again; // - Otherwise, RPC will fail with unavailable error. // - Else (error is other non-nil error): // - The RPC will fail with unavailable error. // // The returned done() function will be called once the rpc has finished, // with the final status of that RPC. If the SubConn returned is not a // valid SubConn type, done may not be called. done may be nil if balancer // doesn't care about the RPC status. Pick(ctx context.Context, opts PickOptions) (conn SubConn, done func(DoneInfo), err error) } // Balancer takes input from gRPC, manages SubConns, and collects and aggregates // the connectivity states. // // It also generates and updates the Picker used by gRPC to pick SubConns for RPCs. // // HandleSubConnectionStateChange, HandleResolvedAddrs and Close are guaranteed // to be called synchronously from the same goroutine. // There's no guarantee on picker.Pick, it may be called anytime. type Balancer interface { // HandleSubConnStateChange is called by gRPC when the connectivity state // of sc has changed. // Balancer is expected to aggregate all the state of SubConn and report // that back to gRPC. // Balancer should also generate and update Pickers when its internal state has // been changed by the new state. // // Deprecated: if V2Balancer is implemented by the Balancer, // UpdateSubConnState will be called instead. HandleSubConnStateChange(sc SubConn, state connectivity.State) // HandleResolvedAddrs is called by gRPC to send updated resolved addresses to // balancers. // Balancer can create new SubConn or remove SubConn with the addresses. // An empty address slice and a non-nil error will be passed if the resolver returns // non-nil error to gRPC. // // Deprecated: if V2Balancer is implemented by the Balancer, // UpdateClientConnState will be called instead. HandleResolvedAddrs([]resolver.Address, error) // Close closes the balancer. The balancer is not required to call // ClientConn.RemoveSubConn for its existing SubConns. Close() } // SubConnState describes the state of a SubConn. type SubConnState struct { ConnectivityState connectivity.State // TODO: add last connection error } // ClientConnState describes the state of a ClientConn relevant to the // balancer. type ClientConnState struct { ResolverState resolver.State // The parsed load balancing configuration returned by the builder's // ParseConfig method, if implemented. BalancerConfig serviceconfig.LoadBalancingConfig } // V2Balancer is defined for documentation purposes. If a Balancer also // implements V2Balancer, its UpdateClientConnState method will be called // instead of HandleResolvedAddrs and its UpdateSubConnState will be called // instead of HandleSubConnStateChange. type V2Balancer interface { // UpdateClientConnState is called by gRPC when the state of the ClientConn // changes. UpdateClientConnState(ClientConnState) // UpdateSubConnState is called by gRPC when the state of a SubConn // changes. UpdateSubConnState(SubConn, SubConnState) // Close closes the balancer. The balancer is not required to call // ClientConn.RemoveSubConn for its existing SubConns. Close() } // ConnectivityStateEvaluator takes the connectivity states of multiple SubConns // and returns one aggregated connectivity state. // // It's not thread safe. 
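//
// A minimal sketch of how a balancer typically uses it (illustrative only;
// the field names b.state, b.csEvltr, b.cc and b.picker are assumed):
//
//	b.state = b.csEvltr.RecordTransition(oldState, newState)
//	b.cc.UpdateBalancerState(b.state, b.picker)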
type ConnectivityStateEvaluator struct { numReady uint64 // Number of addrConns in ready state. numConnecting uint64 // Number of addrConns in connecting state. numTransientFailure uint64 // Number of addrConns in transientFailure. } // RecordTransition records state change happening in subConn and based on that // it evaluates what aggregated state should be. // // - If at least one SubConn in Ready, the aggregated state is Ready; // - Else if at least one SubConn in Connecting, the aggregated state is Connecting; // - Else the aggregated state is TransientFailure. // // Idle and Shutdown are not considered. func (cse *ConnectivityStateEvaluator) RecordTransition(oldState, newState connectivity.State) connectivity.State { // Update counters. for idx, state := range []connectivity.State{oldState, newState} { updateVal := 2*uint64(idx) - 1 // -1 for oldState and +1 for new. switch state { case connectivity.Ready: cse.numReady += updateVal case connectivity.Connecting: cse.numConnecting += updateVal case connectivity.TransientFailure: cse.numTransientFailure += updateVal } } // Evaluate. if cse.numReady > 0 { return connectivity.Ready } if cse.numConnecting > 0 { return connectivity.Connecting } return connectivity.TransientFailure } grpc-go-1.22.1/balancer/base/000077500000000000000000000000001351635773100156275ustar00rootroot00000000000000grpc-go-1.22.1/balancer/base/balancer.go000066400000000000000000000132371351635773100177330ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package base import ( "context" "google.golang.org/grpc/balancer" "google.golang.org/grpc/connectivity" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/resolver" ) type baseBuilder struct { name string pickerBuilder PickerBuilder config Config } func (bb *baseBuilder) Build(cc balancer.ClientConn, opt balancer.BuildOptions) balancer.Balancer { return &baseBalancer{ cc: cc, pickerBuilder: bb.pickerBuilder, subConns: make(map[resolver.Address]balancer.SubConn), scStates: make(map[balancer.SubConn]connectivity.State), csEvltr: &balancer.ConnectivityStateEvaluator{}, // Initialize picker to a picker that always return // ErrNoSubConnAvailable, because when state of a SubConn changes, we // may call UpdateBalancerState with this picker. picker: NewErrPicker(balancer.ErrNoSubConnAvailable), config: bb.config, } } func (bb *baseBuilder) Name() string { return bb.name } type baseBalancer struct { cc balancer.ClientConn pickerBuilder PickerBuilder csEvltr *balancer.ConnectivityStateEvaluator state connectivity.State subConns map[resolver.Address]balancer.SubConn scStates map[balancer.SubConn]connectivity.State picker balancer.Picker config Config } func (b *baseBalancer) HandleResolvedAddrs(addrs []resolver.Address, err error) { panic("not implemented") } func (b *baseBalancer) UpdateClientConnState(s balancer.ClientConnState) { // TODO: handle s.ResolverState.Err (log if not nil) once implemented. // TODO: handle s.ResolverState.ServiceConfig? 
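	// The code below diffs the resolver's new address list against b.subConns:
	// a SubConn is created (and Connect()ed) for each new address, and SubConns
	// whose address disappeared are removed; their entry in b.scStates is kept
	// until the Shutdown state change arrives.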
grpclog.Infoln("base.baseBalancer: got new ClientConn state: ", s) // addrsSet is the set converted from addrs, it's used for quick lookup of an address. addrsSet := make(map[resolver.Address]struct{}) for _, a := range s.ResolverState.Addresses { addrsSet[a] = struct{}{} if _, ok := b.subConns[a]; !ok { // a is a new address (not existing in b.subConns). sc, err := b.cc.NewSubConn([]resolver.Address{a}, balancer.NewSubConnOptions{HealthCheckEnabled: b.config.HealthCheck}) if err != nil { grpclog.Warningf("base.baseBalancer: failed to create new SubConn: %v", err) continue } b.subConns[a] = sc b.scStates[sc] = connectivity.Idle sc.Connect() } } for a, sc := range b.subConns { // a was removed by resolver. if _, ok := addrsSet[a]; !ok { b.cc.RemoveSubConn(sc) delete(b.subConns, a) // Keep the state of this sc in b.scStates until sc's state becomes Shutdown. // The entry will be deleted in HandleSubConnStateChange. } } } // regeneratePicker takes a snapshot of the balancer, and generates a picker // from it. The picker is // - errPicker with ErrTransientFailure if the balancer is in TransientFailure, // - built by the pickerBuilder with all READY SubConns otherwise. func (b *baseBalancer) regeneratePicker() { if b.state == connectivity.TransientFailure { b.picker = NewErrPicker(balancer.ErrTransientFailure) return } readySCs := make(map[resolver.Address]balancer.SubConn) // Filter out all ready SCs from full subConn map. for addr, sc := range b.subConns { if st, ok := b.scStates[sc]; ok && st == connectivity.Ready { readySCs[addr] = sc } } b.picker = b.pickerBuilder.Build(readySCs) } func (b *baseBalancer) HandleSubConnStateChange(sc balancer.SubConn, s connectivity.State) { panic("not implemented") } func (b *baseBalancer) UpdateSubConnState(sc balancer.SubConn, state balancer.SubConnState) { s := state.ConnectivityState grpclog.Infof("base.baseBalancer: handle SubConn state change: %p, %v", sc, s) oldS, ok := b.scStates[sc] if !ok { grpclog.Infof("base.baseBalancer: got state changes for an unknown SubConn: %p, %v", sc, s) return } b.scStates[sc] = s switch s { case connectivity.Idle: sc.Connect() case connectivity.Shutdown: // When an address was removed by resolver, b called RemoveSubConn but // kept the sc's state in scStates. Remove state for this sc here. delete(b.scStates, sc) } oldAggrState := b.state b.state = b.csEvltr.RecordTransition(oldS, s) // Regenerate picker when one of the following happens: // - this sc became ready from not-ready // - this sc became not-ready from ready // - the aggregated state of balancer became TransientFailure from non-TransientFailure // - the aggregated state of balancer became non-TransientFailure from TransientFailure if (s == connectivity.Ready) != (oldS == connectivity.Ready) || (b.state == connectivity.TransientFailure) != (oldAggrState == connectivity.TransientFailure) { b.regeneratePicker() } b.cc.UpdateBalancerState(b.state, b.picker) } // Close is a nop because base balancer doesn't have internal state to clean up, // and it doesn't need to call RemoveSubConn for the SubConns. func (b *baseBalancer) Close() { } // NewErrPicker returns a picker that always returns err on Pick(). func NewErrPicker(err error) balancer.Picker { return &errPicker{err: err} } type errPicker struct { err error // Pick() always returns this err. 
} func (p *errPicker) Pick(ctx context.Context, opts balancer.PickOptions) (balancer.SubConn, func(balancer.DoneInfo), error) { return nil, nil, p.err } grpc-go-1.22.1/balancer/base/base.go000066400000000000000000000043571351635773100171010ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package base defines a balancer base that can be used to build balancers with // different picking algorithms. // // The base balancer creates a new SubConn for each resolved address. The // provided picker will only be notified about READY SubConns. // // This package is the base of round_robin balancer, its purpose is to be used // to build round_robin like balancers with complex picking algorithms. // Balancers with more complicated logic should try to implement a balancer // builder from scratch. // // All APIs in this package are experimental. package base import ( "google.golang.org/grpc/balancer" "google.golang.org/grpc/resolver" ) // PickerBuilder creates balancer.Picker. type PickerBuilder interface { // Build takes a slice of ready SubConns, and returns a picker that will be // used by gRPC to pick a SubConn. Build(readySCs map[resolver.Address]balancer.SubConn) balancer.Picker } // NewBalancerBuilder returns a balancer builder. The balancers // built by this builder will use the picker builder to build pickers. func NewBalancerBuilder(name string, pb PickerBuilder) balancer.Builder { return NewBalancerBuilderWithConfig(name, pb, Config{}) } // Config contains the config info about the base balancer builder. type Config struct { // HealthCheck indicates whether health checking should be enabled for this specific balancer. HealthCheck bool } // NewBalancerBuilderWithConfig returns a base balancer builder configured by the provided config. func NewBalancerBuilderWithConfig(name string, pb PickerBuilder, config Config) balancer.Builder { return &baseBuilder{ name: name, pickerBuilder: pb, config: config, } } grpc-go-1.22.1/balancer/grpclb/000077500000000000000000000000001351635773100161665ustar00rootroot00000000000000grpc-go-1.22.1/balancer/grpclb/grpc_lb_v1/000077500000000000000000000000001351635773100202045ustar00rootroot00000000000000grpc-go-1.22.1/balancer/grpclb/grpc_lb_v1/load_balancer.pb.go000066400000000000000000001012451351635773100237040ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: grpc/lb/v1/load_balancer.proto package grpc_lb_v1 // import "google.golang.org/grpc/balancer/grpclb/grpc_lb_v1" import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import duration "github.com/golang/protobuf/ptypes/duration" import timestamp "github.com/golang/protobuf/ptypes/timestamp" import ( context "golang.org/x/net/context" grpc "google.golang.org/grpc" ) // Reference imports to suppress errors if they are not otherwise used. 
var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type LoadBalanceRequest struct { // Types that are valid to be assigned to LoadBalanceRequestType: // *LoadBalanceRequest_InitialRequest // *LoadBalanceRequest_ClientStats LoadBalanceRequestType isLoadBalanceRequest_LoadBalanceRequestType `protobuf_oneof:"load_balance_request_type"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *LoadBalanceRequest) Reset() { *m = LoadBalanceRequest{} } func (m *LoadBalanceRequest) String() string { return proto.CompactTextString(m) } func (*LoadBalanceRequest) ProtoMessage() {} func (*LoadBalanceRequest) Descriptor() ([]byte, []int) { return fileDescriptor_load_balancer_12026aec3f0251ba, []int{0} } func (m *LoadBalanceRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_LoadBalanceRequest.Unmarshal(m, b) } func (m *LoadBalanceRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_LoadBalanceRequest.Marshal(b, m, deterministic) } func (dst *LoadBalanceRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_LoadBalanceRequest.Merge(dst, src) } func (m *LoadBalanceRequest) XXX_Size() int { return xxx_messageInfo_LoadBalanceRequest.Size(m) } func (m *LoadBalanceRequest) XXX_DiscardUnknown() { xxx_messageInfo_LoadBalanceRequest.DiscardUnknown(m) } var xxx_messageInfo_LoadBalanceRequest proto.InternalMessageInfo type isLoadBalanceRequest_LoadBalanceRequestType interface { isLoadBalanceRequest_LoadBalanceRequestType() } type LoadBalanceRequest_InitialRequest struct { InitialRequest *InitialLoadBalanceRequest `protobuf:"bytes,1,opt,name=initial_request,json=initialRequest,proto3,oneof"` } type LoadBalanceRequest_ClientStats struct { ClientStats *ClientStats `protobuf:"bytes,2,opt,name=client_stats,json=clientStats,proto3,oneof"` } func (*LoadBalanceRequest_InitialRequest) isLoadBalanceRequest_LoadBalanceRequestType() {} func (*LoadBalanceRequest_ClientStats) isLoadBalanceRequest_LoadBalanceRequestType() {} func (m *LoadBalanceRequest) GetLoadBalanceRequestType() isLoadBalanceRequest_LoadBalanceRequestType { if m != nil { return m.LoadBalanceRequestType } return nil } func (m *LoadBalanceRequest) GetInitialRequest() *InitialLoadBalanceRequest { if x, ok := m.GetLoadBalanceRequestType().(*LoadBalanceRequest_InitialRequest); ok { return x.InitialRequest } return nil } func (m *LoadBalanceRequest) GetClientStats() *ClientStats { if x, ok := m.GetLoadBalanceRequestType().(*LoadBalanceRequest_ClientStats); ok { return x.ClientStats } return nil } // XXX_OneofFuncs is for the internal use of the proto package. 
func (*LoadBalanceRequest) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _LoadBalanceRequest_OneofMarshaler, _LoadBalanceRequest_OneofUnmarshaler, _LoadBalanceRequest_OneofSizer, []interface{}{ (*LoadBalanceRequest_InitialRequest)(nil), (*LoadBalanceRequest_ClientStats)(nil), } } func _LoadBalanceRequest_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*LoadBalanceRequest) // load_balance_request_type switch x := m.LoadBalanceRequestType.(type) { case *LoadBalanceRequest_InitialRequest: b.EncodeVarint(1<<3 | proto.WireBytes) if err := b.EncodeMessage(x.InitialRequest); err != nil { return err } case *LoadBalanceRequest_ClientStats: b.EncodeVarint(2<<3 | proto.WireBytes) if err := b.EncodeMessage(x.ClientStats); err != nil { return err } case nil: default: return fmt.Errorf("LoadBalanceRequest.LoadBalanceRequestType has unexpected type %T", x) } return nil } func _LoadBalanceRequest_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*LoadBalanceRequest) switch tag { case 1: // load_balance_request_type.initial_request if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(InitialLoadBalanceRequest) err := b.DecodeMessage(msg) m.LoadBalanceRequestType = &LoadBalanceRequest_InitialRequest{msg} return true, err case 2: // load_balance_request_type.client_stats if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(ClientStats) err := b.DecodeMessage(msg) m.LoadBalanceRequestType = &LoadBalanceRequest_ClientStats{msg} return true, err default: return false, nil } } func _LoadBalanceRequest_OneofSizer(msg proto.Message) (n int) { m := msg.(*LoadBalanceRequest) // load_balance_request_type switch x := m.LoadBalanceRequestType.(type) { case *LoadBalanceRequest_InitialRequest: s := proto.Size(x.InitialRequest) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *LoadBalanceRequest_ClientStats: s := proto.Size(x.ClientStats) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } type InitialLoadBalanceRequest struct { // The name of the load balanced service (e.g., service.googleapis.com). Its // length should be less than 256 bytes. // The name might include a port number. How to handle the port number is up // to the balancer. 
Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *InitialLoadBalanceRequest) Reset() { *m = InitialLoadBalanceRequest{} } func (m *InitialLoadBalanceRequest) String() string { return proto.CompactTextString(m) } func (*InitialLoadBalanceRequest) ProtoMessage() {} func (*InitialLoadBalanceRequest) Descriptor() ([]byte, []int) { return fileDescriptor_load_balancer_12026aec3f0251ba, []int{1} } func (m *InitialLoadBalanceRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_InitialLoadBalanceRequest.Unmarshal(m, b) } func (m *InitialLoadBalanceRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_InitialLoadBalanceRequest.Marshal(b, m, deterministic) } func (dst *InitialLoadBalanceRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_InitialLoadBalanceRequest.Merge(dst, src) } func (m *InitialLoadBalanceRequest) XXX_Size() int { return xxx_messageInfo_InitialLoadBalanceRequest.Size(m) } func (m *InitialLoadBalanceRequest) XXX_DiscardUnknown() { xxx_messageInfo_InitialLoadBalanceRequest.DiscardUnknown(m) } var xxx_messageInfo_InitialLoadBalanceRequest proto.InternalMessageInfo func (m *InitialLoadBalanceRequest) GetName() string { if m != nil { return m.Name } return "" } // Contains the number of calls finished for a particular load balance token. type ClientStatsPerToken struct { // See Server.load_balance_token. LoadBalanceToken string `protobuf:"bytes,1,opt,name=load_balance_token,json=loadBalanceToken,proto3" json:"load_balance_token,omitempty"` // The total number of RPCs that finished associated with the token. NumCalls int64 `protobuf:"varint,2,opt,name=num_calls,json=numCalls,proto3" json:"num_calls,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ClientStatsPerToken) Reset() { *m = ClientStatsPerToken{} } func (m *ClientStatsPerToken) String() string { return proto.CompactTextString(m) } func (*ClientStatsPerToken) ProtoMessage() {} func (*ClientStatsPerToken) Descriptor() ([]byte, []int) { return fileDescriptor_load_balancer_12026aec3f0251ba, []int{2} } func (m *ClientStatsPerToken) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ClientStatsPerToken.Unmarshal(m, b) } func (m *ClientStatsPerToken) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ClientStatsPerToken.Marshal(b, m, deterministic) } func (dst *ClientStatsPerToken) XXX_Merge(src proto.Message) { xxx_messageInfo_ClientStatsPerToken.Merge(dst, src) } func (m *ClientStatsPerToken) XXX_Size() int { return xxx_messageInfo_ClientStatsPerToken.Size(m) } func (m *ClientStatsPerToken) XXX_DiscardUnknown() { xxx_messageInfo_ClientStatsPerToken.DiscardUnknown(m) } var xxx_messageInfo_ClientStatsPerToken proto.InternalMessageInfo func (m *ClientStatsPerToken) GetLoadBalanceToken() string { if m != nil { return m.LoadBalanceToken } return "" } func (m *ClientStatsPerToken) GetNumCalls() int64 { if m != nil { return m.NumCalls } return 0 } // Contains client level statistics that are useful to load balancing. Each // count except the timestamp should be reset to zero after reporting the stats. type ClientStats struct { // The timestamp of generating the report. Timestamp *timestamp.Timestamp `protobuf:"bytes,1,opt,name=timestamp,proto3" json:"timestamp,omitempty"` // The total number of RPCs that started. 
NumCallsStarted int64 `protobuf:"varint,2,opt,name=num_calls_started,json=numCallsStarted,proto3" json:"num_calls_started,omitempty"` // The total number of RPCs that finished. NumCallsFinished int64 `protobuf:"varint,3,opt,name=num_calls_finished,json=numCallsFinished,proto3" json:"num_calls_finished,omitempty"` // The total number of RPCs that failed to reach a server except dropped RPCs. NumCallsFinishedWithClientFailedToSend int64 `protobuf:"varint,6,opt,name=num_calls_finished_with_client_failed_to_send,json=numCallsFinishedWithClientFailedToSend,proto3" json:"num_calls_finished_with_client_failed_to_send,omitempty"` // The total number of RPCs that finished and are known to have been received // by a server. NumCallsFinishedKnownReceived int64 `protobuf:"varint,7,opt,name=num_calls_finished_known_received,json=numCallsFinishedKnownReceived,proto3" json:"num_calls_finished_known_received,omitempty"` // The list of dropped calls. CallsFinishedWithDrop []*ClientStatsPerToken `protobuf:"bytes,8,rep,name=calls_finished_with_drop,json=callsFinishedWithDrop,proto3" json:"calls_finished_with_drop,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ClientStats) Reset() { *m = ClientStats{} } func (m *ClientStats) String() string { return proto.CompactTextString(m) } func (*ClientStats) ProtoMessage() {} func (*ClientStats) Descriptor() ([]byte, []int) { return fileDescriptor_load_balancer_12026aec3f0251ba, []int{3} } func (m *ClientStats) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ClientStats.Unmarshal(m, b) } func (m *ClientStats) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ClientStats.Marshal(b, m, deterministic) } func (dst *ClientStats) XXX_Merge(src proto.Message) { xxx_messageInfo_ClientStats.Merge(dst, src) } func (m *ClientStats) XXX_Size() int { return xxx_messageInfo_ClientStats.Size(m) } func (m *ClientStats) XXX_DiscardUnknown() { xxx_messageInfo_ClientStats.DiscardUnknown(m) } var xxx_messageInfo_ClientStats proto.InternalMessageInfo func (m *ClientStats) GetTimestamp() *timestamp.Timestamp { if m != nil { return m.Timestamp } return nil } func (m *ClientStats) GetNumCallsStarted() int64 { if m != nil { return m.NumCallsStarted } return 0 } func (m *ClientStats) GetNumCallsFinished() int64 { if m != nil { return m.NumCallsFinished } return 0 } func (m *ClientStats) GetNumCallsFinishedWithClientFailedToSend() int64 { if m != nil { return m.NumCallsFinishedWithClientFailedToSend } return 0 } func (m *ClientStats) GetNumCallsFinishedKnownReceived() int64 { if m != nil { return m.NumCallsFinishedKnownReceived } return 0 } func (m *ClientStats) GetCallsFinishedWithDrop() []*ClientStatsPerToken { if m != nil { return m.CallsFinishedWithDrop } return nil } type LoadBalanceResponse struct { // Types that are valid to be assigned to LoadBalanceResponseType: // *LoadBalanceResponse_InitialResponse // *LoadBalanceResponse_ServerList LoadBalanceResponseType isLoadBalanceResponse_LoadBalanceResponseType `protobuf_oneof:"load_balance_response_type"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *LoadBalanceResponse) Reset() { *m = LoadBalanceResponse{} } func (m *LoadBalanceResponse) String() string { return proto.CompactTextString(m) } func (*LoadBalanceResponse) ProtoMessage() {} func (*LoadBalanceResponse) Descriptor() ([]byte, []int) { return fileDescriptor_load_balancer_12026aec3f0251ba, 
[]int{4} } func (m *LoadBalanceResponse) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_LoadBalanceResponse.Unmarshal(m, b) } func (m *LoadBalanceResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_LoadBalanceResponse.Marshal(b, m, deterministic) } func (dst *LoadBalanceResponse) XXX_Merge(src proto.Message) { xxx_messageInfo_LoadBalanceResponse.Merge(dst, src) } func (m *LoadBalanceResponse) XXX_Size() int { return xxx_messageInfo_LoadBalanceResponse.Size(m) } func (m *LoadBalanceResponse) XXX_DiscardUnknown() { xxx_messageInfo_LoadBalanceResponse.DiscardUnknown(m) } var xxx_messageInfo_LoadBalanceResponse proto.InternalMessageInfo type isLoadBalanceResponse_LoadBalanceResponseType interface { isLoadBalanceResponse_LoadBalanceResponseType() } type LoadBalanceResponse_InitialResponse struct { InitialResponse *InitialLoadBalanceResponse `protobuf:"bytes,1,opt,name=initial_response,json=initialResponse,proto3,oneof"` } type LoadBalanceResponse_ServerList struct { ServerList *ServerList `protobuf:"bytes,2,opt,name=server_list,json=serverList,proto3,oneof"` } func (*LoadBalanceResponse_InitialResponse) isLoadBalanceResponse_LoadBalanceResponseType() {} func (*LoadBalanceResponse_ServerList) isLoadBalanceResponse_LoadBalanceResponseType() {} func (m *LoadBalanceResponse) GetLoadBalanceResponseType() isLoadBalanceResponse_LoadBalanceResponseType { if m != nil { return m.LoadBalanceResponseType } return nil } func (m *LoadBalanceResponse) GetInitialResponse() *InitialLoadBalanceResponse { if x, ok := m.GetLoadBalanceResponseType().(*LoadBalanceResponse_InitialResponse); ok { return x.InitialResponse } return nil } func (m *LoadBalanceResponse) GetServerList() *ServerList { if x, ok := m.GetLoadBalanceResponseType().(*LoadBalanceResponse_ServerList); ok { return x.ServerList } return nil } // XXX_OneofFuncs is for the internal use of the proto package. 
func (*LoadBalanceResponse) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _LoadBalanceResponse_OneofMarshaler, _LoadBalanceResponse_OneofUnmarshaler, _LoadBalanceResponse_OneofSizer, []interface{}{ (*LoadBalanceResponse_InitialResponse)(nil), (*LoadBalanceResponse_ServerList)(nil), } } func _LoadBalanceResponse_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*LoadBalanceResponse) // load_balance_response_type switch x := m.LoadBalanceResponseType.(type) { case *LoadBalanceResponse_InitialResponse: b.EncodeVarint(1<<3 | proto.WireBytes) if err := b.EncodeMessage(x.InitialResponse); err != nil { return err } case *LoadBalanceResponse_ServerList: b.EncodeVarint(2<<3 | proto.WireBytes) if err := b.EncodeMessage(x.ServerList); err != nil { return err } case nil: default: return fmt.Errorf("LoadBalanceResponse.LoadBalanceResponseType has unexpected type %T", x) } return nil } func _LoadBalanceResponse_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*LoadBalanceResponse) switch tag { case 1: // load_balance_response_type.initial_response if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(InitialLoadBalanceResponse) err := b.DecodeMessage(msg) m.LoadBalanceResponseType = &LoadBalanceResponse_InitialResponse{msg} return true, err case 2: // load_balance_response_type.server_list if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(ServerList) err := b.DecodeMessage(msg) m.LoadBalanceResponseType = &LoadBalanceResponse_ServerList{msg} return true, err default: return false, nil } } func _LoadBalanceResponse_OneofSizer(msg proto.Message) (n int) { m := msg.(*LoadBalanceResponse) // load_balance_response_type switch x := m.LoadBalanceResponseType.(type) { case *LoadBalanceResponse_InitialResponse: s := proto.Size(x.InitialResponse) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *LoadBalanceResponse_ServerList: s := proto.Size(x.ServerList) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } type InitialLoadBalanceResponse struct { // This is an application layer redirect that indicates the client should use // the specified server for load balancing. When this field is non-empty in // the response, the client should open a separate connection to the // load_balancer_delegate and call the BalanceLoad method. Its length should // be less than 64 bytes. LoadBalancerDelegate string `protobuf:"bytes,1,opt,name=load_balancer_delegate,json=loadBalancerDelegate,proto3" json:"load_balancer_delegate,omitempty"` // This interval defines how often the client should send the client stats // to the load balancer. Stats should only be reported when the duration is // positive. 
ClientStatsReportInterval *duration.Duration `protobuf:"bytes,2,opt,name=client_stats_report_interval,json=clientStatsReportInterval,proto3" json:"client_stats_report_interval,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *InitialLoadBalanceResponse) Reset() { *m = InitialLoadBalanceResponse{} } func (m *InitialLoadBalanceResponse) String() string { return proto.CompactTextString(m) } func (*InitialLoadBalanceResponse) ProtoMessage() {} func (*InitialLoadBalanceResponse) Descriptor() ([]byte, []int) { return fileDescriptor_load_balancer_12026aec3f0251ba, []int{5} } func (m *InitialLoadBalanceResponse) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_InitialLoadBalanceResponse.Unmarshal(m, b) } func (m *InitialLoadBalanceResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_InitialLoadBalanceResponse.Marshal(b, m, deterministic) } func (dst *InitialLoadBalanceResponse) XXX_Merge(src proto.Message) { xxx_messageInfo_InitialLoadBalanceResponse.Merge(dst, src) } func (m *InitialLoadBalanceResponse) XXX_Size() int { return xxx_messageInfo_InitialLoadBalanceResponse.Size(m) } func (m *InitialLoadBalanceResponse) XXX_DiscardUnknown() { xxx_messageInfo_InitialLoadBalanceResponse.DiscardUnknown(m) } var xxx_messageInfo_InitialLoadBalanceResponse proto.InternalMessageInfo func (m *InitialLoadBalanceResponse) GetLoadBalancerDelegate() string { if m != nil { return m.LoadBalancerDelegate } return "" } func (m *InitialLoadBalanceResponse) GetClientStatsReportInterval() *duration.Duration { if m != nil { return m.ClientStatsReportInterval } return nil } type ServerList struct { // Contains a list of servers selected by the load balancer. The list will // be updated when server resolutions change or as needed to balance load // across more servers. The client should consume the server list in order // unless instructed otherwise via the client_config. Servers []*Server `protobuf:"bytes,1,rep,name=servers,proto3" json:"servers,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ServerList) Reset() { *m = ServerList{} } func (m *ServerList) String() string { return proto.CompactTextString(m) } func (*ServerList) ProtoMessage() {} func (*ServerList) Descriptor() ([]byte, []int) { return fileDescriptor_load_balancer_12026aec3f0251ba, []int{6} } func (m *ServerList) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ServerList.Unmarshal(m, b) } func (m *ServerList) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ServerList.Marshal(b, m, deterministic) } func (dst *ServerList) XXX_Merge(src proto.Message) { xxx_messageInfo_ServerList.Merge(dst, src) } func (m *ServerList) XXX_Size() int { return xxx_messageInfo_ServerList.Size(m) } func (m *ServerList) XXX_DiscardUnknown() { xxx_messageInfo_ServerList.DiscardUnknown(m) } var xxx_messageInfo_ServerList proto.InternalMessageInfo func (m *ServerList) GetServers() []*Server { if m != nil { return m.Servers } return nil } // Contains server information. When the drop field is not true, use the other // fields. type Server struct { // A resolved address for the server, serialized in network-byte-order. It may // either be an IPv4 or IPv6 address. IpAddress []byte `protobuf:"bytes,1,opt,name=ip_address,json=ipAddress,proto3" json:"ip_address,omitempty"` // A resolved port number for the server. 
Port int32 `protobuf:"varint,2,opt,name=port,proto3" json:"port,omitempty"` // An opaque but printable token for load reporting. The client must include // the token of the picked server into the initial metadata when it starts a // call to that server. The token is used by the server to verify the request // and to allow the server to report load to the gRPC LB system. The token is // also used in client stats for reporting dropped calls. // // Its length can be variable but must be less than 50 bytes. LoadBalanceToken string `protobuf:"bytes,3,opt,name=load_balance_token,json=loadBalanceToken,proto3" json:"load_balance_token,omitempty"` // Indicates whether this particular request should be dropped by the client. // If the request is dropped, there will be a corresponding entry in // ClientStats.calls_finished_with_drop. Drop bool `protobuf:"varint,4,opt,name=drop,proto3" json:"drop,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Server) Reset() { *m = Server{} } func (m *Server) String() string { return proto.CompactTextString(m) } func (*Server) ProtoMessage() {} func (*Server) Descriptor() ([]byte, []int) { return fileDescriptor_load_balancer_12026aec3f0251ba, []int{7} } func (m *Server) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Server.Unmarshal(m, b) } func (m *Server) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Server.Marshal(b, m, deterministic) } func (dst *Server) XXX_Merge(src proto.Message) { xxx_messageInfo_Server.Merge(dst, src) } func (m *Server) XXX_Size() int { return xxx_messageInfo_Server.Size(m) } func (m *Server) XXX_DiscardUnknown() { xxx_messageInfo_Server.DiscardUnknown(m) } var xxx_messageInfo_Server proto.InternalMessageInfo func (m *Server) GetIpAddress() []byte { if m != nil { return m.IpAddress } return nil } func (m *Server) GetPort() int32 { if m != nil { return m.Port } return 0 } func (m *Server) GetLoadBalanceToken() string { if m != nil { return m.LoadBalanceToken } return "" } func (m *Server) GetDrop() bool { if m != nil { return m.Drop } return false } func init() { proto.RegisterType((*LoadBalanceRequest)(nil), "grpc.lb.v1.LoadBalanceRequest") proto.RegisterType((*InitialLoadBalanceRequest)(nil), "grpc.lb.v1.InitialLoadBalanceRequest") proto.RegisterType((*ClientStatsPerToken)(nil), "grpc.lb.v1.ClientStatsPerToken") proto.RegisterType((*ClientStats)(nil), "grpc.lb.v1.ClientStats") proto.RegisterType((*LoadBalanceResponse)(nil), "grpc.lb.v1.LoadBalanceResponse") proto.RegisterType((*InitialLoadBalanceResponse)(nil), "grpc.lb.v1.InitialLoadBalanceResponse") proto.RegisterType((*ServerList)(nil), "grpc.lb.v1.ServerList") proto.RegisterType((*Server)(nil), "grpc.lb.v1.Server") } // Reference imports to suppress errors if they are not otherwise used. var _ context.Context var _ grpc.ClientConn // This is a compile-time assertion to ensure that this generated file // is compatible with the grpc package it is being compiled against. const _ = grpc.SupportPackageIsVersion4 // LoadBalancerClient is the client API for LoadBalancer service. // // For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream. type LoadBalancerClient interface { // Bidirectional rpc to get a list of servers. 
BalanceLoad(ctx context.Context, opts ...grpc.CallOption) (LoadBalancer_BalanceLoadClient, error) } type loadBalancerClient struct { cc *grpc.ClientConn } func NewLoadBalancerClient(cc *grpc.ClientConn) LoadBalancerClient { return &loadBalancerClient{cc} } func (c *loadBalancerClient) BalanceLoad(ctx context.Context, opts ...grpc.CallOption) (LoadBalancer_BalanceLoadClient, error) { stream, err := c.cc.NewStream(ctx, &_LoadBalancer_serviceDesc.Streams[0], "/grpc.lb.v1.LoadBalancer/BalanceLoad", opts...) if err != nil { return nil, err } x := &loadBalancerBalanceLoadClient{stream} return x, nil } type LoadBalancer_BalanceLoadClient interface { Send(*LoadBalanceRequest) error Recv() (*LoadBalanceResponse, error) grpc.ClientStream } type loadBalancerBalanceLoadClient struct { grpc.ClientStream } func (x *loadBalancerBalanceLoadClient) Send(m *LoadBalanceRequest) error { return x.ClientStream.SendMsg(m) } func (x *loadBalancerBalanceLoadClient) Recv() (*LoadBalanceResponse, error) { m := new(LoadBalanceResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } // LoadBalancerServer is the server API for LoadBalancer service. type LoadBalancerServer interface { // Bidirectional rpc to get a list of servers. BalanceLoad(LoadBalancer_BalanceLoadServer) error } func RegisterLoadBalancerServer(s *grpc.Server, srv LoadBalancerServer) { s.RegisterService(&_LoadBalancer_serviceDesc, srv) } func _LoadBalancer_BalanceLoad_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(LoadBalancerServer).BalanceLoad(&loadBalancerBalanceLoadServer{stream}) } type LoadBalancer_BalanceLoadServer interface { Send(*LoadBalanceResponse) error Recv() (*LoadBalanceRequest, error) grpc.ServerStream } type loadBalancerBalanceLoadServer struct { grpc.ServerStream } func (x *loadBalancerBalanceLoadServer) Send(m *LoadBalanceResponse) error { return x.ServerStream.SendMsg(m) } func (x *loadBalancerBalanceLoadServer) Recv() (*LoadBalanceRequest, error) { m := new(LoadBalanceRequest) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } var _LoadBalancer_serviceDesc = grpc.ServiceDesc{ ServiceName: "grpc.lb.v1.LoadBalancer", HandlerType: (*LoadBalancerServer)(nil), Methods: []grpc.MethodDesc{}, Streams: []grpc.StreamDesc{ { StreamName: "BalanceLoad", Handler: _LoadBalancer_BalanceLoad_Handler, ServerStreams: true, ClientStreams: true, }, }, Metadata: "grpc/lb/v1/load_balancer.proto", } func init() { proto.RegisterFile("grpc/lb/v1/load_balancer.proto", fileDescriptor_load_balancer_12026aec3f0251ba) } var fileDescriptor_load_balancer_12026aec3f0251ba = []byte{ // 752 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x84, 0x55, 0xdd, 0x6e, 0x23, 0x35, 0x14, 0xee, 0x90, 0x69, 0x36, 0x39, 0x29, 0x34, 0xeb, 0x85, 0x65, 0x92, 0xdd, 0x6d, 0x4b, 0x24, 0x56, 0x11, 0x2a, 0x13, 0x52, 0xb8, 0x00, 0x89, 0x0b, 0x48, 0xab, 0x2a, 0x2d, 0xbd, 0x88, 0x9c, 0x4a, 0x45, 0x95, 0x90, 0x99, 0xc9, 0xb8, 0xa9, 0x55, 0xc7, 0x1e, 0x3c, 0x4e, 0x2a, 0xae, 0x79, 0x1f, 0xc4, 0x2b, 0x20, 0x5e, 0x0c, 0x8d, 0xed, 0x49, 0xa6, 0x49, 0xa3, 0xbd, 0xca, 0xf8, 0x9c, 0xcf, 0xdf, 0xf9, 0xfd, 0x1c, 0x38, 0x98, 0xaa, 0x74, 0xd2, 0xe3, 0x71, 0x6f, 0xd1, 0xef, 0x71, 0x19, 0x25, 0x24, 0x8e, 0x78, 0x24, 0x26, 0x54, 0x85, 0xa9, 0x92, 0x5a, 0x22, 0xc8, 0xfd, 0x21, 0x8f, 0xc3, 0x45, 0xbf, 0x7d, 0x30, 0x95, 0x72, 0xca, 0x69, 0xcf, 0x78, 0xe2, 0xf9, 0x5d, 0x2f, 0x99, 0xab, 0x48, 0x33, 0x29, 0x2c, 0xb6, 0x7d, 0xb8, 0xee, 0xd7, 0x6c, 0x46, 0x33, 0x1d, 
0xcd, 0x52, 0x0b, 0xe8, 0xfc, 0xeb, 0x01, 0xba, 0x92, 0x51, 0x32, 0xb0, 0x31, 0x30, 0xfd, 0x63, 0x4e, 0x33, 0x8d, 0x46, 0xb0, 0xcf, 0x04, 0xd3, 0x2c, 0xe2, 0x44, 0x59, 0x53, 0xe0, 0x1d, 0x79, 0xdd, 0xc6, 0xc9, 0x97, 0xe1, 0x2a, 0x7a, 0x78, 0x61, 0x21, 0x9b, 0xf7, 0x87, 0x3b, 0xf8, 0x13, 0x77, 0xbf, 0x60, 0xfc, 0x11, 0xf6, 0x26, 0x9c, 0x51, 0xa1, 0x49, 0xa6, 0x23, 0x9d, 0x05, 0x1f, 0x19, 0xba, 0xcf, 0xcb, 0x74, 0xa7, 0xc6, 0x3f, 0xce, 0xdd, 0xc3, 0x1d, 0xdc, 0x98, 0xac, 0x8e, 0x83, 0x37, 0xd0, 0x2a, 0xb7, 0xa2, 0x48, 0x8a, 0xe8, 0x3f, 0x53, 0xda, 0xe9, 0x41, 0x6b, 0x6b, 0x26, 0x08, 0x81, 0x2f, 0xa2, 0x19, 0x35, 0xe9, 0xd7, 0xb1, 0xf9, 0xee, 0xfc, 0x0e, 0xaf, 0x4a, 0xb1, 0x46, 0x54, 0x5d, 0xcb, 0x07, 0x2a, 0xd0, 0x31, 0xa0, 0x27, 0x41, 0x74, 0x6e, 0x75, 0x17, 0x9b, 0x7c, 0x45, 0x6d, 0xd1, 0x6f, 0xa0, 0x2e, 0xe6, 0x33, 0x32, 0x89, 0x38, 0xb7, 0xd5, 0x54, 0x70, 0x4d, 0xcc, 0x67, 0xa7, 0xf9, 0xb9, 0xf3, 0x4f, 0x05, 0x1a, 0xa5, 0x10, 0xe8, 0x7b, 0xa8, 0x2f, 0x3b, 0xef, 0x3a, 0xd9, 0x0e, 0xed, 0x6c, 0xc2, 0x62, 0x36, 0xe1, 0x75, 0x81, 0xc0, 0x2b, 0x30, 0xfa, 0x0a, 0x5e, 0x2e, 0xc3, 0xe4, 0xad, 0x53, 0x9a, 0x26, 0x2e, 0xdc, 0x7e, 0x11, 0x6e, 0x6c, 0xcd, 0x79, 0x01, 0x2b, 0xec, 0x1d, 0x13, 0x2c, 0xbb, 0xa7, 0x49, 0x50, 0x31, 0xe0, 0x66, 0x01, 0x3e, 0x77, 0x76, 0xf4, 0x1b, 0x7c, 0xbd, 0x89, 0x26, 0x8f, 0x4c, 0xdf, 0x13, 0x37, 0xa9, 0xbb, 0x88, 0x71, 0x9a, 0x10, 0x2d, 0x49, 0x46, 0x45, 0x12, 0x54, 0x0d, 0xd1, 0xfb, 0x75, 0xa2, 0x1b, 0xa6, 0xef, 0x6d, 0xad, 0xe7, 0x06, 0x7f, 0x2d, 0xc7, 0x54, 0x24, 0x68, 0x08, 0x5f, 0x3c, 0x43, 0xff, 0x20, 0xe4, 0xa3, 0x20, 0x8a, 0x4e, 0x28, 0x5b, 0xd0, 0x24, 0x78, 0x61, 0x28, 0xdf, 0xad, 0x53, 0xfe, 0x92, 0xa3, 0xb0, 0x03, 0xa1, 0x5f, 0x21, 0x78, 0x2e, 0xc9, 0x44, 0xc9, 0x34, 0xa8, 0x1d, 0x55, 0xba, 0x8d, 0x93, 0xc3, 0x2d, 0x6b, 0x54, 0x8c, 0x16, 0x7f, 0x36, 0x59, 0xcf, 0xf8, 0x4c, 0xc9, 0xf4, 0xd2, 0xaf, 0xf9, 0xcd, 0xdd, 0x4b, 0xbf, 0xb6, 0xdb, 0xac, 0x76, 0xfe, 0xf3, 0xe0, 0xd5, 0x93, 0xfd, 0xc9, 0x52, 0x29, 0x32, 0x8a, 0xc6, 0xd0, 0x5c, 0x49, 0xc1, 0xda, 0xdc, 0x04, 0xdf, 0x7f, 0x48, 0x0b, 0x16, 0x3d, 0xdc, 0xc1, 0xfb, 0x4b, 0x31, 0x38, 0xd2, 0x1f, 0xa0, 0x91, 0x51, 0xb5, 0xa0, 0x8a, 0x70, 0x96, 0x69, 0x27, 0x86, 0xd7, 0x65, 0xbe, 0xb1, 0x71, 0x5f, 0x31, 0x23, 0x26, 0xc8, 0x96, 0xa7, 0xc1, 0x5b, 0x68, 0xaf, 0x49, 0xc1, 0x72, 0x5a, 0x2d, 0xfc, 0xed, 0x41, 0x7b, 0x7b, 0x2a, 0xe8, 0x3b, 0x78, 0xfd, 0xe4, 0x49, 0x21, 0x09, 0xe5, 0x74, 0x1a, 0xe9, 0x42, 0x1f, 0x9f, 0x96, 0xd6, 0x5c, 0x9d, 0x39, 0x1f, 0xba, 0x85, 0xb7, 0x65, 0xed, 0x12, 0x45, 0x53, 0xa9, 0x34, 0x61, 0x42, 0x53, 0xb5, 0x88, 0xb8, 0x4b, 0xbf, 0xb5, 0xb1, 0xd0, 0x67, 0xee, 0x31, 0xc2, 0xad, 0x92, 0x96, 0xb1, 0xb9, 0x7c, 0xe1, 0xee, 0x76, 0x7e, 0x02, 0x58, 0x95, 0x8a, 0x8e, 0xe1, 0x85, 0x2d, 0x35, 0x0b, 0x3c, 0x33, 0x59, 0xb4, 0xd9, 0x13, 0x5c, 0x40, 0x2e, 0xfd, 0x5a, 0xa5, 0xe9, 0x77, 0xfe, 0xf2, 0xa0, 0x6a, 0x3d, 0xe8, 0x1d, 0x00, 0x4b, 0x49, 0x94, 0x24, 0x8a, 0x66, 0x99, 0x29, 0x69, 0x0f, 0xd7, 0x59, 0xfa, 0xb3, 0x35, 0xe4, 0x6f, 0x41, 0x1e, 0xdb, 0xe4, 0xbb, 0x8b, 0xcd, 0xf7, 0x16, 0xd1, 0x57, 0xb6, 0x88, 0x1e, 0x81, 0x6f, 0xd6, 0xce, 0x3f, 0xf2, 0xba, 0x35, 0x6c, 0xbe, 0xed, 0xfa, 0x9c, 0xc4, 0xb0, 0x57, 0x6a, 0xb8, 0x42, 0x18, 0x1a, 0xee, 0x3b, 0x37, 0xa3, 0x83, 0x72, 0x1d, 0x9b, 0xcf, 0x54, 0xfb, 0x70, 0xab, 0xdf, 0x4e, 0xae, 0xeb, 0x7d, 0xe3, 0x0d, 0x6e, 0xe0, 0x63, 0x26, 0x4b, 0xc0, 0xc1, 0xcb, 0x72, 0xc8, 0x51, 0xde, 0xf6, 0x91, 0x77, 0xdb, 0x77, 0x63, 0x98, 0x4a, 0x1e, 0x89, 0x69, 0x28, 0xd5, 0xb4, 0x67, 0xfe, 0x51, 0x8a, 0x99, 0x9b, 0x13, 0x8f, 0xcd, 0x0f, 0xe1, 0x31, 0x59, 0xf4, 0xe3, 0xaa, 
0x19, 0xd9, 0xb7, 0xff, 0x07, 0x00, 0x00, 0xff, 0xff, 0x81, 0x14, 0xee, 0xd1, 0x7b, 0x06, 0x00, 0x00, } grpc-go-1.22.1/balancer/grpclb/grpclb.go000066400000000000000000000357401351635773100177770ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ //go:generate ./regenerate.sh // Package grpclb defines a grpclb balancer. // // To install grpclb balancer, import this package as: // import _ "google.golang.org/grpc/balancer/grpclb" package grpclb import ( "context" "errors" "strconv" "sync" "time" durationpb "github.com/golang/protobuf/ptypes/duration" "google.golang.org/grpc" "google.golang.org/grpc/balancer" lbpb "google.golang.org/grpc/balancer/grpclb/grpc_lb_v1" "google.golang.org/grpc/connectivity" "google.golang.org/grpc/credentials" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/internal" "google.golang.org/grpc/internal/backoff" "google.golang.org/grpc/resolver" ) const ( lbTokeyKey = "lb-token" defaultFallbackTimeout = 10 * time.Second grpclbName = "grpclb" ) var ( // defaultBackoffConfig configures the backoff strategy that's used when the // init handshake in the RPC is unsuccessful. It's not for the clientconn // reconnect backoff. // // It has the same value as the default grpc.DefaultBackoffConfig. // // TODO: make backoff configurable. defaultBackoffConfig = backoff.Exponential{ MaxDelay: 120 * time.Second, } errServerTerminatedConnection = errors.New("grpclb: failed to recv server list: server terminated connection") ) func convertDuration(d *durationpb.Duration) time.Duration { if d == nil { return 0 } return time.Duration(d.Seconds)*time.Second + time.Duration(d.Nanos)*time.Nanosecond } // Client API for LoadBalancer service. // Mostly copied from generated pb.go file. // To avoid circular dependency. type loadBalancerClient struct { cc *grpc.ClientConn } func (c *loadBalancerClient) BalanceLoad(ctx context.Context, opts ...grpc.CallOption) (*balanceLoadClientStream, error) { desc := &grpc.StreamDesc{ StreamName: "BalanceLoad", ServerStreams: true, ClientStreams: true, } stream, err := c.cc.NewStream(ctx, desc, "/grpc.lb.v1.LoadBalancer/BalanceLoad", opts...) if err != nil { return nil, err } x := &balanceLoadClientStream{stream} return x, nil } type balanceLoadClientStream struct { grpc.ClientStream } func (x *balanceLoadClientStream) Send(m *lbpb.LoadBalanceRequest) error { return x.ClientStream.SendMsg(m) } func (x *balanceLoadClientStream) Recv() (*lbpb.LoadBalanceResponse, error) { m := new(lbpb.LoadBalanceResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func init() { balancer.Register(newLBBuilder()) } // newLBBuilder creates a builder for grpclb. func newLBBuilder() balancer.Builder { return newLBBuilderWithFallbackTimeout(defaultFallbackTimeout) } // newLBBuilderWithFallbackTimeout creates a grpclb builder with the given // fallbackTimeout. 
If no response is received from the remote balancer within // fallbackTimeout, the backend addresses from the resolved address list will be // used. // // Only call this function when a non-default fallback timeout is needed. func newLBBuilderWithFallbackTimeout(fallbackTimeout time.Duration) balancer.Builder { return &lbBuilder{ fallbackTimeout: fallbackTimeout, } } type lbBuilder struct { fallbackTimeout time.Duration } func (b *lbBuilder) Name() string { return grpclbName } func (b *lbBuilder) Build(cc balancer.ClientConn, opt balancer.BuildOptions) balancer.Balancer { // This generates a manual resolver builder with a random scheme. This // scheme will be used to dial to remote LB, so we can send filtered address // updates to remote LB ClientConn using this manual resolver. scheme := "grpclb_internal_" + strconv.FormatInt(time.Now().UnixNano(), 36) r := &lbManualResolver{scheme: scheme, ccb: cc} lb := &lbBalancer{ cc: newLBCacheClientConn(cc), target: opt.Target.Endpoint, opt: opt, fallbackTimeout: b.fallbackTimeout, doneCh: make(chan struct{}), manualResolver: r, subConns: make(map[resolver.Address]balancer.SubConn), scStates: make(map[balancer.SubConn]connectivity.State), picker: &errPicker{err: balancer.ErrNoSubConnAvailable}, clientStats: newRPCStats(), backoff: defaultBackoffConfig, // TODO: make backoff configurable. } var err error if opt.CredsBundle != nil { lb.grpclbClientConnCreds, err = opt.CredsBundle.NewWithMode(internal.CredsBundleModeBalancer) if err != nil { grpclog.Warningf("lbBalancer: client connection creds NewWithMode failed: %v", err) } lb.grpclbBackendCreds, err = opt.CredsBundle.NewWithMode(internal.CredsBundleModeBackendFromBalancer) if err != nil { grpclog.Warningf("lbBalancer: backend creds NewWithMode failed: %v", err) } } return lb } type lbBalancer struct { cc *lbCacheClientConn target string opt balancer.BuildOptions usePickFirst bool // grpclbClientConnCreds is the creds bundle to be used to connect to grpclb // servers. If it's nil, use the TransportCredentials from BuildOptions // instead. grpclbClientConnCreds credentials.Bundle // grpclbBackendCreds is the creds bundle to be used for addresses that are // returned by grpclb server. If it's nil, don't set anything when creating // SubConns. grpclbBackendCreds credentials.Bundle fallbackTimeout time.Duration doneCh chan struct{} // manualResolver is used in the remote LB ClientConn inside grpclb. When // resolved address updates are received by grpclb, filtered updates will be // send to remote LB ClientConn through this resolver. manualResolver *lbManualResolver // The ClientConn to talk to the remote balancer. ccRemoteLB *grpc.ClientConn // backoff for calling remote balancer. backoff backoff.Strategy // Support client side load reporting. Each picker gets a reference to this, // and will update its content. clientStats *rpcStats mu sync.Mutex // guards everything following. // The full server list including drops, used to check if the newly received // serverList contains anything new. Each generate picker will also have // reference to this list to do the first layer pick. fullServerList []*lbpb.Server // Backend addresses. It's kept so the addresses are available when // switching between round_robin and pickfirst. backendAddrs []resolver.Address // All backends addresses, with metadata set to nil. This list contains all // backend addresses in the same order and with the same duplicates as in // serverlist. 
When generating picker, a SubConn slice with the same order // but with only READY SCs will be generated. backendAddrsWithoutMetadata []resolver.Address // Roundrobin functionalities. state connectivity.State subConns map[resolver.Address]balancer.SubConn // Used to new/remove SubConn. scStates map[balancer.SubConn]connectivity.State // Used to filter READY SubConns. picker balancer.Picker // Support fallback to resolved backend addresses if there's no response // from remote balancer within fallbackTimeout. remoteBalancerConnected bool serverListReceived bool inFallback bool // resolvedBackendAddrs is resolvedAddrs minus remote balancers. It's set // when resolved address updates are received, and read in the goroutine // handling fallback. resolvedBackendAddrs []resolver.Address } // regeneratePicker takes a snapshot of the balancer, and generates a picker from // it. The picker // - always returns ErrTransientFailure if the balancer is in TransientFailure, // - does two layer roundrobin pick otherwise. // Caller must hold lb.mu. func (lb *lbBalancer) regeneratePicker(resetDrop bool) { if lb.state == connectivity.TransientFailure { lb.picker = &errPicker{err: balancer.ErrTransientFailure} return } if lb.state == connectivity.Connecting { lb.picker = &errPicker{err: balancer.ErrNoSubConnAvailable} return } var readySCs []balancer.SubConn if lb.usePickFirst { for _, sc := range lb.subConns { readySCs = append(readySCs, sc) break } } else { for _, a := range lb.backendAddrsWithoutMetadata { if sc, ok := lb.subConns[a]; ok { if st, ok := lb.scStates[sc]; ok && st == connectivity.Ready { readySCs = append(readySCs, sc) } } } } if len(readySCs) <= 0 { // If there are no ready SubConns, always re-pick. This is to avoid drops // unless at least one SubConn is ready. Otherwise we may drop more // often than we want because of drops + re-picks (which become re-drops). // // This doesn't seem to be necessary after the connecting check above. // Kept for safety. lb.picker = &errPicker{err: balancer.ErrNoSubConnAvailable} return } if lb.inFallback { lb.picker = newRRPicker(readySCs) return } if resetDrop { lb.picker = newLBPicker(lb.fullServerList, readySCs, lb.clientStats) return } prevLBPicker, ok := lb.picker.(*lbPicker) if !ok { lb.picker = newLBPicker(lb.fullServerList, readySCs, lb.clientStats) return } prevLBPicker.updateReadySCs(readySCs) } // aggregateSubConnStates calculates the aggregated state of SubConns in // lb.SubConns. These SubConns are subconns in use (when switching between // fallback and grpclb). lb.scStates contains states for all SubConns, including // those in cache (SubConns are cached for 10 seconds after remove). // // The aggregated state is: // - If at least one SubConn in Ready, the aggregated state is Ready; // - Else if at least one SubConn in Connecting, the aggregated state is Connecting; // - Else the aggregated state is TransientFailure. 
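// For example, if the SubConns in use are in states {READY, CONNECTING}, the // aggregated state is Ready; if they are in {CONNECTING, TRANSIENT_FAILURE}, it is // Connecting; only when no SubConn is Ready or Connecting does the aggregate // become TransientFailure. 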
func (lb *lbBalancer) aggregateSubConnStates() connectivity.State { var numConnecting uint64 for _, sc := range lb.subConns { if state, ok := lb.scStates[sc]; ok { switch state { case connectivity.Ready: return connectivity.Ready case connectivity.Connecting: numConnecting++ } } } if numConnecting > 0 { return connectivity.Connecting } return connectivity.TransientFailure } func (lb *lbBalancer) HandleSubConnStateChange(sc balancer.SubConn, s connectivity.State) { panic("not used") } func (lb *lbBalancer) UpdateSubConnState(sc balancer.SubConn, scs balancer.SubConnState) { s := scs.ConnectivityState if grpclog.V(2) { grpclog.Infof("lbBalancer: handle SubConn state change: %p, %v", sc, s) } lb.mu.Lock() defer lb.mu.Unlock() oldS, ok := lb.scStates[sc] if !ok { if grpclog.V(2) { grpclog.Infof("lbBalancer: got state changes for an unknown SubConn: %p, %v", sc, s) } return } lb.scStates[sc] = s switch s { case connectivity.Idle: sc.Connect() case connectivity.Shutdown: // When an address was removed by resolver, b called RemoveSubConn but // kept the sc's state in scStates. Remove state for this sc here. delete(lb.scStates, sc) } // Force regenerate picker if // - this sc became ready from not-ready // - this sc became not-ready from ready lb.updateStateAndPicker((oldS == connectivity.Ready) != (s == connectivity.Ready), false) // Enter fallback when the aggregated state is not Ready and the connection // to remote balancer is lost. if lb.state != connectivity.Ready { if !lb.inFallback && !lb.remoteBalancerConnected { // Enter fallback. lb.refreshSubConns(lb.resolvedBackendAddrs, true, lb.usePickFirst) } } } // updateStateAndPicker re-calculate the aggregated state, and regenerate picker // if overall state is changed. // // If forceRegeneratePicker is true, picker will be regenerated. func (lb *lbBalancer) updateStateAndPicker(forceRegeneratePicker bool, resetDrop bool) { oldAggrState := lb.state lb.state = lb.aggregateSubConnStates() // Regenerate picker when one of the following happens: // - caller wants to regenerate // - the aggregated state changed if forceRegeneratePicker || (lb.state != oldAggrState) { lb.regeneratePicker(resetDrop) } lb.cc.UpdateBalancerState(lb.state, lb.picker) } // fallbackToBackendsAfter blocks for fallbackTimeout and falls back to use // resolved backends (backends received from resolver, not from remote balancer) // if no connection to remote balancers was successful. func (lb *lbBalancer) fallbackToBackendsAfter(fallbackTimeout time.Duration) { timer := time.NewTimer(fallbackTimeout) defer timer.Stop() select { case <-timer.C: case <-lb.doneCh: return } lb.mu.Lock() if lb.inFallback || lb.serverListReceived { lb.mu.Unlock() return } // Enter fallback. lb.refreshSubConns(lb.resolvedBackendAddrs, true, lb.usePickFirst) lb.mu.Unlock() } // HandleResolvedAddrs sends the updated remoteLB addresses to remoteLB // clientConn. The remoteLB clientConn will handle creating/removing remoteLB // connections. 
func (lb *lbBalancer) HandleResolvedAddrs(addrs []resolver.Address, err error) { panic("not used") } func (lb *lbBalancer) handleServiceConfig(gc *grpclbServiceConfig) { lb.mu.Lock() defer lb.mu.Unlock() newUsePickFirst := childIsPickFirst(gc) if lb.usePickFirst == newUsePickFirst { return } if grpclog.V(2) { grpclog.Infof("lbBalancer: switching mode, new usePickFirst: %+v", newUsePickFirst) } lb.refreshSubConns(lb.backendAddrs, lb.inFallback, newUsePickFirst) } func (lb *lbBalancer) UpdateClientConnState(ccs balancer.ClientConnState) { if grpclog.V(2) { grpclog.Infof("lbBalancer: UpdateClientConnState: %+v", ccs) } gc, _ := ccs.BalancerConfig.(*grpclbServiceConfig) lb.handleServiceConfig(gc) addrs := ccs.ResolverState.Addresses if len(addrs) <= 0 { return } var remoteBalancerAddrs, backendAddrs []resolver.Address for _, a := range addrs { if a.Type == resolver.GRPCLB { a.Type = resolver.Backend remoteBalancerAddrs = append(remoteBalancerAddrs, a) } else { backendAddrs = append(backendAddrs, a) } } if lb.ccRemoteLB == nil { if len(remoteBalancerAddrs) <= 0 { grpclog.Errorf("grpclb: no remote balancer address is available, should never happen") return } // First time receiving resolved addresses, create a cc to remote // balancers. lb.dialRemoteLB(remoteBalancerAddrs[0].ServerName) // Start the fallback goroutine. go lb.fallbackToBackendsAfter(lb.fallbackTimeout) } // cc to remote balancers uses lb.manualResolver. Send the updated remote // balancer addresses to it through manualResolver. lb.manualResolver.UpdateState(resolver.State{Addresses: remoteBalancerAddrs}) lb.mu.Lock() lb.resolvedBackendAddrs = backendAddrs if lb.inFallback { // This means we received a new list of resolved backends, and we are // still in fallback mode. Need to update the list of backends we are // using to the new list of backends. lb.refreshSubConns(lb.resolvedBackendAddrs, true, lb.usePickFirst) } lb.mu.Unlock() } func (lb *lbBalancer) Close() { select { case <-lb.doneCh: return default: } close(lb.doneCh) if lb.ccRemoteLB != nil { lb.ccRemoteLB.Close() } lb.cc.close() } grpc-go-1.22.1/balancer/grpclb/grpclb_config.go000066400000000000000000000031711351635773100213150ustar00rootroot00000000000000/* * * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package grpclb import ( "encoding/json" "google.golang.org/grpc" "google.golang.org/grpc/balancer/roundrobin" "google.golang.org/grpc/serviceconfig" ) const ( roundRobinName = roundrobin.Name pickFirstName = grpc.PickFirstBalancerName ) type grpclbServiceConfig struct { serviceconfig.LoadBalancingConfig ChildPolicy *[]map[string]json.RawMessage } func (b *lbBuilder) ParseConfig(lbConfig json.RawMessage) (serviceconfig.LoadBalancingConfig, error) { ret := &grpclbServiceConfig{} if err := json.Unmarshal(lbConfig, ret); err != nil { return nil, err } return ret, nil } func childIsPickFirst(sc *grpclbServiceConfig) bool { if sc == nil { return false } childConfigs := sc.ChildPolicy if childConfigs == nil { return false } for _, childC := range *childConfigs { // If round_robin exists before pick_first, return false if _, ok := childC[roundRobinName]; ok { return false } // If pick_first is before round_robin, return true if _, ok := childC[pickFirstName]; ok { return true } } return false } grpc-go-1.22.1/balancer/grpclb/grpclb_config_test.go000066400000000000000000000051731351635773100223600ustar00rootroot00000000000000/* * * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpclb import ( "encoding/json" "errors" "fmt" "reflect" "strings" "testing" "google.golang.org/grpc/serviceconfig" ) func Test_Parse(t *testing.T) { tests := []struct { name string s string want serviceconfig.LoadBalancingConfig wantErr error }{ { name: "empty", s: "", want: nil, wantErr: errors.New("unexpected end of JSON input"), }, { name: "success1", s: `{"childPolicy":[{"pick_first":{}}]}`, want: &grpclbServiceConfig{ ChildPolicy: &[]map[string]json.RawMessage{ {"pick_first": json.RawMessage("{}")}, }, }, }, { name: "success2", s: `{"childPolicy":[{"round_robin":{}},{"pick_first":{}}]}`, want: &grpclbServiceConfig{ ChildPolicy: &[]map[string]json.RawMessage{ {"round_robin": json.RawMessage("{}")}, {"pick_first": json.RawMessage("{}")}, }, }, }, } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { if got, err := (&lbBuilder{}).ParseConfig(json.RawMessage(tt.s)); !reflect.DeepEqual(got, tt.want) || !strings.Contains(fmt.Sprint(err), fmt.Sprint(tt.wantErr)) { t.Errorf("parseFullServiceConfig() = %+v, %+v, want %+v, ", got, err, tt.want, tt.wantErr) } }) } } func Test_childIsPickFirst(t *testing.T) { tests := []struct { name string s string want bool }{ { name: "pickfirst_only", s: `{"childPolicy":[{"pick_first":{}}]}`, want: true, }, { name: "pickfirst_before_rr", s: `{"childPolicy":[{"pick_first":{}},{"round_robin":{}}]}`, want: true, }, { name: "rr_before_pickfirst", s: `{"childPolicy":[{"round_robin":{}},{"pick_first":{}}]}`, want: false, }, } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { gc, err := (&lbBuilder{}).ParseConfig(json.RawMessage(tt.s)) if err != nil { t.Fatalf("Parse(%v) = _, %v; want _, nil", tt.s, err) } if got := childIsPickFirst(gc.(*grpclbServiceConfig)); got != tt.want { t.Errorf("childIsPickFirst() = %v, want %v", got, tt.want) } }) } } 
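// A minimal sketch of how a grpclb service config is parsed and how the order of // entries in childPolicy selects between pick_first and round_robin; the JSON // literal below is only an assumed example input, not a value used by the tests above: // // gc, err := (&lbBuilder{}).ParseConfig(json.RawMessage(`{"childPolicy":[{"round_robin":{}},{"pick_first":{}}]}`)) // if err == nil { // usePF := childIsPickFirst(gc.(*grpclbServiceConfig)) // false, because round_robin appears before pick_first // _ = usePF // } 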
grpc-go-1.22.1/balancer/grpclb/grpclb_picker.go000066400000000000000000000132761351635773100213340ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpclb import ( "context" "sync" "sync/atomic" "google.golang.org/grpc/balancer" lbpb "google.golang.org/grpc/balancer/grpclb/grpc_lb_v1" "google.golang.org/grpc/codes" "google.golang.org/grpc/internal/grpcrand" "google.golang.org/grpc/status" ) // rpcStats is same as lbmpb.ClientStats, except that numCallsDropped is a map // instead of a slice. type rpcStats struct { // Only access the following fields atomically. numCallsStarted int64 numCallsFinished int64 numCallsFinishedWithClientFailedToSend int64 numCallsFinishedKnownReceived int64 mu sync.Mutex // map load_balance_token -> num_calls_dropped numCallsDropped map[string]int64 } func newRPCStats() *rpcStats { return &rpcStats{ numCallsDropped: make(map[string]int64), } } // toClientStats converts rpcStats to lbpb.ClientStats, and clears rpcStats. func (s *rpcStats) toClientStats() *lbpb.ClientStats { stats := &lbpb.ClientStats{ NumCallsStarted: atomic.SwapInt64(&s.numCallsStarted, 0), NumCallsFinished: atomic.SwapInt64(&s.numCallsFinished, 0), NumCallsFinishedWithClientFailedToSend: atomic.SwapInt64(&s.numCallsFinishedWithClientFailedToSend, 0), NumCallsFinishedKnownReceived: atomic.SwapInt64(&s.numCallsFinishedKnownReceived, 0), } s.mu.Lock() dropped := s.numCallsDropped s.numCallsDropped = make(map[string]int64) s.mu.Unlock() for token, count := range dropped { stats.CallsFinishedWithDrop = append(stats.CallsFinishedWithDrop, &lbpb.ClientStatsPerToken{ LoadBalanceToken: token, NumCalls: count, }) } return stats } func (s *rpcStats) drop(token string) { atomic.AddInt64(&s.numCallsStarted, 1) s.mu.Lock() s.numCallsDropped[token]++ s.mu.Unlock() atomic.AddInt64(&s.numCallsFinished, 1) } func (s *rpcStats) failedToSend() { atomic.AddInt64(&s.numCallsStarted, 1) atomic.AddInt64(&s.numCallsFinishedWithClientFailedToSend, 1) atomic.AddInt64(&s.numCallsFinished, 1) } func (s *rpcStats) knownReceived() { atomic.AddInt64(&s.numCallsStarted, 1) atomic.AddInt64(&s.numCallsFinishedKnownReceived, 1) atomic.AddInt64(&s.numCallsFinished, 1) } type errPicker struct { // Pick always returns this err. err error } func (p *errPicker) Pick(ctx context.Context, opts balancer.PickOptions) (balancer.SubConn, func(balancer.DoneInfo), error) { return nil, nil, p.err } // rrPicker does roundrobin on subConns. It's typically used when there's no // response from remote balancer, and grpclb falls back to the resolved // backends. // // It guaranteed that len(subConns) > 0. type rrPicker struct { mu sync.Mutex subConns []balancer.SubConn // The subConns that were READY when taking the snapshot. 
subConnsNext int } func newRRPicker(readySCs []balancer.SubConn) *rrPicker { return &rrPicker{ subConns: readySCs, subConnsNext: grpcrand.Intn(len(readySCs)), } } func (p *rrPicker) Pick(ctx context.Context, opts balancer.PickOptions) (balancer.SubConn, func(balancer.DoneInfo), error) { p.mu.Lock() defer p.mu.Unlock() sc := p.subConns[p.subConnsNext] p.subConnsNext = (p.subConnsNext + 1) % len(p.subConns) return sc, nil, nil } // lbPicker does two layers of picks: // // First layer: roundrobin on all servers in serverList, including drops and backends. // - If it picks a drop, the RPC will fail as being dropped. // - If it picks a backend, do a second layer pick to pick the real backend. // // Second layer: roundrobin on all READY backends. // // It's guaranteed that len(serverList) > 0. type lbPicker struct { mu sync.Mutex serverList []*lbpb.Server serverListNext int subConns []balancer.SubConn // The subConns that were READY when taking the snapshot. subConnsNext int stats *rpcStats } func newLBPicker(serverList []*lbpb.Server, readySCs []balancer.SubConn, stats *rpcStats) *lbPicker { return &lbPicker{ serverList: serverList, subConns: readySCs, subConnsNext: grpcrand.Intn(len(readySCs)), stats: stats, } } func (p *lbPicker) Pick(ctx context.Context, opts balancer.PickOptions) (balancer.SubConn, func(balancer.DoneInfo), error) { p.mu.Lock() defer p.mu.Unlock() // Layer one roundrobin on serverList. s := p.serverList[p.serverListNext] p.serverListNext = (p.serverListNext + 1) % len(p.serverList) // If it's a drop, return an error and fail the RPC. if s.Drop { p.stats.drop(s.LoadBalanceToken) return nil, nil, status.Errorf(codes.Unavailable, "request dropped by grpclb") } // If not a drop but there's no ready subConns. if len(p.subConns) <= 0 { return nil, nil, balancer.ErrNoSubConnAvailable } // Return the next ready subConn in the list, also collect rpc stats. sc := p.subConns[p.subConnsNext] p.subConnsNext = (p.subConnsNext + 1) % len(p.subConns) done := func(info balancer.DoneInfo) { if !info.BytesSent { p.stats.failedToSend() } else if info.BytesReceived { p.stats.knownReceived() } } return sc, done, nil } func (p *lbPicker) updateReadySCs(readySCs []balancer.SubConn) { p.mu.Lock() defer p.mu.Unlock() p.subConns = readySCs p.subConnsNext = p.subConnsNext % len(readySCs) } grpc-go-1.22.1/balancer/grpclb/grpclb_remote_balancer.go000066400000000000000000000240611351635773100231730ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package grpclb import ( "context" "fmt" "io" "net" "reflect" "time" timestamppb "github.com/golang/protobuf/ptypes/timestamp" "google.golang.org/grpc" "google.golang.org/grpc/balancer" lbpb "google.golang.org/grpc/balancer/grpclb/grpc_lb_v1" "google.golang.org/grpc/connectivity" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/internal" "google.golang.org/grpc/internal/channelz" "google.golang.org/grpc/metadata" "google.golang.org/grpc/resolver" ) // processServerList updates balaner's internal state, create/remove SubConns // and regenerates picker using the received serverList. func (lb *lbBalancer) processServerList(l *lbpb.ServerList) { if grpclog.V(2) { grpclog.Infof("lbBalancer: processing server list: %+v", l) } lb.mu.Lock() defer lb.mu.Unlock() // Set serverListReceived to true so fallback will not take effect if it has // not hit timeout. lb.serverListReceived = true // If the new server list == old server list, do nothing. if reflect.DeepEqual(lb.fullServerList, l.Servers) { if grpclog.V(2) { grpclog.Infof("lbBalancer: new serverlist same as the previous one, ignoring") } return } lb.fullServerList = l.Servers var backendAddrs []resolver.Address for i, s := range l.Servers { if s.Drop { continue } md := metadata.Pairs(lbTokeyKey, s.LoadBalanceToken) ip := net.IP(s.IpAddress) ipStr := ip.String() if ip.To4() == nil { // Add square brackets to ipv6 addresses, otherwise net.Dial() and // net.SplitHostPort() will return too many colons error. ipStr = fmt.Sprintf("[%s]", ipStr) } addr := resolver.Address{ Addr: fmt.Sprintf("%s:%d", ipStr, s.Port), Metadata: &md, } if grpclog.V(2) { grpclog.Infof("lbBalancer: server list entry[%d]: ipStr:|%s|, port:|%d|, load balancer token:|%v|", i, ipStr, s.Port, s.LoadBalanceToken) } backendAddrs = append(backendAddrs, addr) } // Call refreshSubConns to create/remove SubConns. If we are in fallback, // this is also exiting fallback. lb.refreshSubConns(backendAddrs, false, lb.usePickFirst) } // refreshSubConns creates/removes SubConns with backendAddrs, and refreshes // balancer state and picker. // // Caller must hold lb.mu. func (lb *lbBalancer) refreshSubConns(backendAddrs []resolver.Address, fallback bool, pickFirst bool) { lb.inFallback = fallback opts := balancer.NewSubConnOptions{} if !fallback { opts.CredsBundle = lb.grpclbBackendCreds } lb.backendAddrs = backendAddrs lb.backendAddrsWithoutMetadata = nil if lb.usePickFirst != pickFirst { // Remove all SubConns when switching modes. for a, sc := range lb.subConns { if lb.usePickFirst { lb.cc.cc.RemoveSubConn(sc) } else { lb.cc.RemoveSubConn(sc) } delete(lb.subConns, a) } lb.usePickFirst = pickFirst } if lb.usePickFirst { var sc balancer.SubConn for _, sc = range lb.subConns { break } if sc != nil { sc.UpdateAddresses(backendAddrs) sc.Connect() return } // This bypasses the cc wrapper with SubConn cache. sc, err := lb.cc.cc.NewSubConn(backendAddrs, opts) if err != nil { grpclog.Warningf("grpclb: failed to create new SubConn: %v", err) return } sc.Connect() lb.subConns[backendAddrs[0]] = sc lb.scStates[sc] = connectivity.Idle return } // addrsSet is the set converted from backendAddrsWithoutMetadata, it's used to quick // lookup for an address. addrsSet := make(map[resolver.Address]struct{}) // Create new SubConns. 
for _, addr := range backendAddrs { addrWithoutMD := addr addrWithoutMD.Metadata = nil addrsSet[addrWithoutMD] = struct{}{} lb.backendAddrsWithoutMetadata = append(lb.backendAddrsWithoutMetadata, addrWithoutMD) if _, ok := lb.subConns[addrWithoutMD]; !ok { // Use addrWithMD to create the SubConn. sc, err := lb.cc.NewSubConn([]resolver.Address{addr}, opts) if err != nil { grpclog.Warningf("grpclb: failed to create new SubConn: %v", err) continue } lb.subConns[addrWithoutMD] = sc // Use the addr without MD as key for the map. if _, ok := lb.scStates[sc]; !ok { // Only set state of new sc to IDLE. The state could already be // READY for cached SubConns. lb.scStates[sc] = connectivity.Idle } sc.Connect() } } for a, sc := range lb.subConns { // a was removed by resolver. if _, ok := addrsSet[a]; !ok { lb.cc.RemoveSubConn(sc) delete(lb.subConns, a) // Keep the state of this sc in b.scStates until sc's state becomes Shutdown. // The entry will be deleted in HandleSubConnStateChange. } } // Regenerate and update picker after refreshing subconns because with // cache, even if SubConn was newed/removed, there might be no state // changes (the subconn will be kept in cache, not actually // newed/removed). lb.updateStateAndPicker(true, true) } func (lb *lbBalancer) readServerList(s *balanceLoadClientStream) error { for { reply, err := s.Recv() if err != nil { if err == io.EOF { return errServerTerminatedConnection } return fmt.Errorf("grpclb: failed to recv server list: %v", err) } if serverList := reply.GetServerList(); serverList != nil { lb.processServerList(serverList) } } } func (lb *lbBalancer) sendLoadReport(s *balanceLoadClientStream, interval time.Duration) { ticker := time.NewTicker(interval) defer ticker.Stop() for { select { case <-ticker.C: case <-s.Context().Done(): return } stats := lb.clientStats.toClientStats() t := time.Now() stats.Timestamp = ×tamppb.Timestamp{ Seconds: t.Unix(), Nanos: int32(t.Nanosecond()), } if err := s.Send(&lbpb.LoadBalanceRequest{ LoadBalanceRequestType: &lbpb.LoadBalanceRequest_ClientStats{ ClientStats: stats, }, }); err != nil { return } } } func (lb *lbBalancer) callRemoteBalancer() (backoff bool, _ error) { lbClient := &loadBalancerClient{cc: lb.ccRemoteLB} ctx, cancel := context.WithCancel(context.Background()) defer cancel() stream, err := lbClient.BalanceLoad(ctx, grpc.WaitForReady(true)) if err != nil { return true, fmt.Errorf("grpclb: failed to perform RPC to the remote balancer %v", err) } lb.mu.Lock() lb.remoteBalancerConnected = true lb.mu.Unlock() // grpclb handshake on the stream. initReq := &lbpb.LoadBalanceRequest{ LoadBalanceRequestType: &lbpb.LoadBalanceRequest_InitialRequest{ InitialRequest: &lbpb.InitialLoadBalanceRequest{ Name: lb.target, }, }, } if err := stream.Send(initReq); err != nil { return true, fmt.Errorf("grpclb: failed to send init request: %v", err) } reply, err := stream.Recv() if err != nil { return true, fmt.Errorf("grpclb: failed to recv init response: %v", err) } initResp := reply.GetInitialResponse() if initResp == nil { return true, fmt.Errorf("grpclb: reply from remote balancer did not include initial response") } if initResp.LoadBalancerDelegate != "" { return true, fmt.Errorf("grpclb: Delegation is not supported") } go func() { if d := convertDuration(initResp.ClientStatsReportInterval); d > 0 { lb.sendLoadReport(stream, d) } }() // No backoff if init req/resp handshake was successful. 
return false, lb.readServerList(stream) } func (lb *lbBalancer) watchRemoteBalancer() { var retryCount int for { doBackoff, err := lb.callRemoteBalancer() select { case <-lb.doneCh: return default: if err != nil { if err == errServerTerminatedConnection { grpclog.Info(err) } else { grpclog.Warning(err) } } } // Trigger a re-resolve when the stream errors. lb.cc.cc.ResolveNow(resolver.ResolveNowOption{}) lb.mu.Lock() lb.remoteBalancerConnected = false lb.fullServerList = nil // Enter fallback when connection to remote balancer is lost, and the // aggregated state is not Ready. if !lb.inFallback && lb.state != connectivity.Ready { // Entering fallback. lb.refreshSubConns(lb.resolvedBackendAddrs, true, lb.usePickFirst) } lb.mu.Unlock() if !doBackoff { retryCount = 0 continue } timer := time.NewTimer(lb.backoff.Backoff(retryCount)) select { case <-timer.C: case <-lb.doneCh: timer.Stop() return } retryCount++ } } func (lb *lbBalancer) dialRemoteLB(remoteLBName string) { var dopts []grpc.DialOption if creds := lb.opt.DialCreds; creds != nil { if err := creds.OverrideServerName(remoteLBName); err == nil { dopts = append(dopts, grpc.WithTransportCredentials(creds)) } else { grpclog.Warningf("grpclb: failed to override the server name in the credentials: %v, using Insecure", err) dopts = append(dopts, grpc.WithInsecure()) } } else if bundle := lb.grpclbClientConnCreds; bundle != nil { dopts = append(dopts, grpc.WithCredentialsBundle(bundle)) } else { dopts = append(dopts, grpc.WithInsecure()) } if lb.opt.Dialer != nil { dopts = append(dopts, grpc.WithContextDialer(lb.opt.Dialer)) } // Explicitly set pickfirst as the balancer. dopts = append(dopts, grpc.WithBalancerName(grpc.PickFirstBalancerName)) wrb := internal.WithResolverBuilder.(func(resolver.Builder) grpc.DialOption) dopts = append(dopts, wrb(lb.manualResolver)) if channelz.IsOn() { dopts = append(dopts, grpc.WithChannelzParentID(lb.opt.ChannelzParentID)) } // DialContext using manualResolver.Scheme, which is a random scheme // generated when init grpclb. The target scheme here is not important. // // The grpc dial target will be used by the creds (ALTS) as the authority, // so it has to be set to remoteLBName that comes from resolver. cc, err := grpc.DialContext(context.Background(), remoteLBName, dopts...) if err != nil { grpclog.Fatalf("failed to dial: %v", err) } lb.ccRemoteLB = cc go lb.watchRemoteBalancer() } grpc-go-1.22.1/balancer/grpclb/grpclb_test.go000066400000000000000000001076111351635773100210330ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package grpclb import ( "context" "errors" "fmt" "io" "net" "strconv" "strings" "sync" "sync/atomic" "testing" "time" durationpb "github.com/golang/protobuf/ptypes/duration" "google.golang.org/grpc" "google.golang.org/grpc/balancer" lbgrpc "google.golang.org/grpc/balancer/grpclb/grpc_lb_v1" lbpb "google.golang.org/grpc/balancer/grpclb/grpc_lb_v1" "google.golang.org/grpc/codes" "google.golang.org/grpc/credentials" _ "google.golang.org/grpc/grpclog/glogger" "google.golang.org/grpc/internal/leakcheck" "google.golang.org/grpc/metadata" "google.golang.org/grpc/peer" "google.golang.org/grpc/resolver" "google.golang.org/grpc/resolver/manual" "google.golang.org/grpc/serviceconfig" "google.golang.org/grpc/status" testpb "google.golang.org/grpc/test/grpc_testing" ) var ( lbServerName = "bar.com" beServerName = "foo.com" lbToken = "iamatoken" // Resolver replaces localhost with fakeName in Next(). // Dialer replaces fakeName with localhost when dialing. // This will test that custom dialer is passed from Dial to grpclb. fakeName = "fake.Name" ) type serverNameCheckCreds struct { mu sync.Mutex sn string expected string } func (c *serverNameCheckCreds) ServerHandshake(rawConn net.Conn) (net.Conn, credentials.AuthInfo, error) { if _, err := io.WriteString(rawConn, c.sn); err != nil { fmt.Printf("Failed to write the server name %s to the client %v", c.sn, err) return nil, nil, err } return rawConn, nil, nil } func (c *serverNameCheckCreds) ClientHandshake(ctx context.Context, addr string, rawConn net.Conn) (net.Conn, credentials.AuthInfo, error) { c.mu.Lock() defer c.mu.Unlock() b := make([]byte, len(c.expected)) errCh := make(chan error, 1) go func() { _, err := rawConn.Read(b) errCh <- err }() select { case err := <-errCh: if err != nil { fmt.Printf("Failed to read the server name from the server %v", err) return nil, nil, err } case <-ctx.Done(): return nil, nil, ctx.Err() } if c.expected != string(b) { fmt.Printf("Read the server name %s want %s", string(b), c.expected) return nil, nil, errors.New("received unexpected server name") } return rawConn, nil, nil } func (c *serverNameCheckCreds) Info() credentials.ProtocolInfo { c.mu.Lock() defer c.mu.Unlock() return credentials.ProtocolInfo{} } func (c *serverNameCheckCreds) Clone() credentials.TransportCredentials { c.mu.Lock() defer c.mu.Unlock() return &serverNameCheckCreds{ expected: c.expected, } } func (c *serverNameCheckCreds) OverrideServerName(s string) error { c.mu.Lock() defer c.mu.Unlock() c.expected = s return nil } // fakeNameDialer replaces fakeName with localhost when dialing. // This will test that custom dialer is passed from Dial to grpclb. func fakeNameDialer(ctx context.Context, addr string) (net.Conn, error) { addr = strings.Replace(addr, fakeName, "localhost", 1) return (&net.Dialer{}).DialContext(ctx, "tcp", addr) } // merge merges the new client stats into current stats. // // It's a test-only method. rpcStats is defined in grpclb_picker. 
func (s *rpcStats) merge(cs *lbpb.ClientStats) { atomic.AddInt64(&s.numCallsStarted, cs.NumCallsStarted) atomic.AddInt64(&s.numCallsFinished, cs.NumCallsFinished) atomic.AddInt64(&s.numCallsFinishedWithClientFailedToSend, cs.NumCallsFinishedWithClientFailedToSend) atomic.AddInt64(&s.numCallsFinishedKnownReceived, cs.NumCallsFinishedKnownReceived) s.mu.Lock() for _, perToken := range cs.CallsFinishedWithDrop { s.numCallsDropped[perToken.LoadBalanceToken] += perToken.NumCalls } s.mu.Unlock() } func mapsEqual(a, b map[string]int64) bool { if len(a) != len(b) { return false } for k, v1 := range a { if v2, ok := b[k]; !ok || v1 != v2 { return false } } return true } func atomicEqual(a, b *int64) bool { return atomic.LoadInt64(a) == atomic.LoadInt64(b) } // equal compares two rpcStats. // // It's a test-only method. rpcStats is defined in grpclb_picker. func (s *rpcStats) equal(o *rpcStats) bool { if !atomicEqual(&s.numCallsStarted, &o.numCallsStarted) { return false } if !atomicEqual(&s.numCallsFinished, &o.numCallsFinished) { return false } if !atomicEqual(&s.numCallsFinishedWithClientFailedToSend, &o.numCallsFinishedWithClientFailedToSend) { return false } if !atomicEqual(&s.numCallsFinishedKnownReceived, &o.numCallsFinishedKnownReceived) { return false } s.mu.Lock() defer s.mu.Unlock() o.mu.Lock() defer o.mu.Unlock() return mapsEqual(s.numCallsDropped, o.numCallsDropped) } type remoteBalancer struct { sls chan *lbpb.ServerList statsDura time.Duration done chan struct{} stats *rpcStats } func newRemoteBalancer(intervals []time.Duration) *remoteBalancer { return &remoteBalancer{ sls: make(chan *lbpb.ServerList, 1), done: make(chan struct{}), stats: newRPCStats(), } } func (b *remoteBalancer) stop() { close(b.sls) close(b.done) } func (b *remoteBalancer) BalanceLoad(stream lbgrpc.LoadBalancer_BalanceLoadServer) error { req, err := stream.Recv() if err != nil { return err } initReq := req.GetInitialRequest() if initReq.Name != beServerName { return status.Errorf(codes.InvalidArgument, "invalid service name: %v", initReq.Name) } resp := &lbpb.LoadBalanceResponse{ LoadBalanceResponseType: &lbpb.LoadBalanceResponse_InitialResponse{ InitialResponse: &lbpb.InitialLoadBalanceResponse{ ClientStatsReportInterval: &durationpb.Duration{ Seconds: int64(b.statsDura.Seconds()), Nanos: int32(b.statsDura.Nanoseconds() - int64(b.statsDura.Seconds())*1e9), }, }, }, } if err := stream.Send(resp); err != nil { return err } go func() { for { var ( req *lbpb.LoadBalanceRequest err error ) if req, err = stream.Recv(); err != nil { return } b.stats.merge(req.GetClientStats()) } }() for { select { case v := <-b.sls: resp = &lbpb.LoadBalanceResponse{ LoadBalanceResponseType: &lbpb.LoadBalanceResponse_ServerList{ ServerList: v, }, } case <-stream.Context().Done(): return stream.Context().Err() } if err := stream.Send(resp); err != nil { return err } } } type testServer struct { testpb.TestServiceServer addr string fallback bool } const testmdkey = "testmd" func (s *testServer) EmptyCall(ctx context.Context, in *testpb.Empty) (*testpb.Empty, error) { md, ok := metadata.FromIncomingContext(ctx) if !ok { return nil, status.Error(codes.Internal, "failed to receive metadata") } if !s.fallback && (md == nil || md["lb-token"][0] != lbToken) { return nil, status.Errorf(codes.Internal, "received unexpected metadata: %v", md) } grpc.SetTrailer(ctx, metadata.Pairs(testmdkey, s.addr)) return &testpb.Empty{}, nil } func (s *testServer) FullDuplexCall(stream testpb.TestService_FullDuplexCallServer) error { return nil } func 
startBackends(sn string, fallback bool, lis ...net.Listener) (servers []*grpc.Server) { for _, l := range lis { creds := &serverNameCheckCreds{ sn: sn, } s := grpc.NewServer(grpc.Creds(creds)) testpb.RegisterTestServiceServer(s, &testServer{addr: l.Addr().String(), fallback: fallback}) servers = append(servers, s) go func(s *grpc.Server, l net.Listener) { s.Serve(l) }(s, l) } return } func stopBackends(servers []*grpc.Server) { for _, s := range servers { s.Stop() } } type testServers struct { lbAddr string ls *remoteBalancer lb *grpc.Server backends []*grpc.Server beIPs []net.IP bePorts []int lbListener net.Listener beListeners []net.Listener } func newLoadBalancer(numberOfBackends int) (tss *testServers, cleanup func(), err error) { var ( beListeners []net.Listener ls *remoteBalancer lb *grpc.Server beIPs []net.IP bePorts []int ) for i := 0; i < numberOfBackends; i++ { // Start a backend. beLis, e := net.Listen("tcp", "localhost:0") if e != nil { err = fmt.Errorf("failed to listen %v", err) return } beIPs = append(beIPs, beLis.Addr().(*net.TCPAddr).IP) bePorts = append(bePorts, beLis.Addr().(*net.TCPAddr).Port) beListeners = append(beListeners, newRestartableListener(beLis)) } backends := startBackends(beServerName, false, beListeners...) // Start a load balancer. lbLis, err := net.Listen("tcp", "localhost:0") if err != nil { err = fmt.Errorf("failed to create the listener for the load balancer %v", err) return } lbLis = newRestartableListener(lbLis) lbCreds := &serverNameCheckCreds{ sn: lbServerName, } lb = grpc.NewServer(grpc.Creds(lbCreds)) ls = newRemoteBalancer(nil) lbgrpc.RegisterLoadBalancerServer(lb, ls) go func() { lb.Serve(lbLis) }() tss = &testServers{ lbAddr: net.JoinHostPort(fakeName, strconv.Itoa(lbLis.Addr().(*net.TCPAddr).Port)), ls: ls, lb: lb, backends: backends, beIPs: beIPs, bePorts: bePorts, lbListener: lbLis, beListeners: beListeners, } cleanup = func() { defer stopBackends(backends) defer func() { ls.stop() lb.Stop() }() } return } func TestGRPCLB(t *testing.T) { defer leakcheck.Check(t) r, cleanup := manual.GenerateAndRegisterManualResolver() defer cleanup() tss, cleanup, err := newLoadBalancer(1) if err != nil { t.Fatalf("failed to create new load balancer: %v", err) } defer cleanup() be := &lbpb.Server{ IpAddress: tss.beIPs[0], Port: int32(tss.bePorts[0]), LoadBalanceToken: lbToken, } var bes []*lbpb.Server bes = append(bes, be) sl := &lbpb.ServerList{ Servers: bes, } tss.ls.sls <- sl creds := serverNameCheckCreds{ expected: beServerName, } ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() cc, err := grpc.DialContext(ctx, r.Scheme()+":///"+beServerName, grpc.WithTransportCredentials(&creds), grpc.WithContextDialer(fakeNameDialer)) if err != nil { t.Fatalf("Failed to dial to the backend %v", err) } defer cc.Close() testC := testpb.NewTestServiceClient(cc) r.UpdateState(resolver.State{Addresses: []resolver.Address{{ Addr: tss.lbAddr, Type: resolver.GRPCLB, ServerName: lbServerName, }}}) if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}, grpc.WaitForReady(true)); err != nil { t.Fatalf("%v.EmptyCall(_, _) = _, %v, want _, ", testC, err) } } // The remote balancer sends response with duplicates to grpclb client. 
func TestGRPCLBWeighted(t *testing.T) { defer leakcheck.Check(t) r, cleanup := manual.GenerateAndRegisterManualResolver() defer cleanup() tss, cleanup, err := newLoadBalancer(2) if err != nil { t.Fatalf("failed to create new load balancer: %v", err) } defer cleanup() beServers := []*lbpb.Server{{ IpAddress: tss.beIPs[0], Port: int32(tss.bePorts[0]), LoadBalanceToken: lbToken, }, { IpAddress: tss.beIPs[1], Port: int32(tss.bePorts[1]), LoadBalanceToken: lbToken, }} portsToIndex := make(map[int]int) for i := range beServers { portsToIndex[tss.bePorts[i]] = i } creds := serverNameCheckCreds{ expected: beServerName, } ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() cc, err := grpc.DialContext(ctx, r.Scheme()+":///"+beServerName, grpc.WithTransportCredentials(&creds), grpc.WithContextDialer(fakeNameDialer)) if err != nil { t.Fatalf("Failed to dial to the backend %v", err) } defer cc.Close() testC := testpb.NewTestServiceClient(cc) r.UpdateState(resolver.State{Addresses: []resolver.Address{{ Addr: tss.lbAddr, Type: resolver.GRPCLB, ServerName: lbServerName, }}}) sequences := []string{"00101", "00011"} for _, seq := range sequences { var ( bes []*lbpb.Server p peer.Peer result string ) for _, s := range seq { bes = append(bes, beServers[s-'0']) } tss.ls.sls <- &lbpb.ServerList{Servers: bes} for i := 0; i < 1000; i++ { if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}, grpc.WaitForReady(true), grpc.Peer(&p)); err != nil { t.Fatalf("%v.EmptyCall(_, _) = _, %v, want _, ", testC, err) } result += strconv.Itoa(portsToIndex[p.Addr.(*net.TCPAddr).Port]) } // The generated result will be in format of "0010100101". if !strings.Contains(result, strings.Repeat(seq, 2)) { t.Errorf("got result sequence %q, want patten %q", result, seq) } } } func TestDropRequest(t *testing.T) { defer leakcheck.Check(t) r, cleanup := manual.GenerateAndRegisterManualResolver() defer cleanup() tss, cleanup, err := newLoadBalancer(2) if err != nil { t.Fatalf("failed to create new load balancer: %v", err) } defer cleanup() tss.ls.sls <- &lbpb.ServerList{ Servers: []*lbpb.Server{{ IpAddress: tss.beIPs[0], Port: int32(tss.bePorts[0]), LoadBalanceToken: lbToken, Drop: false, }, { IpAddress: tss.beIPs[1], Port: int32(tss.bePorts[1]), LoadBalanceToken: lbToken, Drop: false, }, { Drop: true, }}, } creds := serverNameCheckCreds{ expected: beServerName, } ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() cc, err := grpc.DialContext(ctx, r.Scheme()+":///"+beServerName, grpc.WithTransportCredentials(&creds), grpc.WithContextDialer(fakeNameDialer)) if err != nil { t.Fatalf("Failed to dial to the backend %v", err) } defer cc.Close() testC := testpb.NewTestServiceClient(cc) r.UpdateState(resolver.State{Addresses: []resolver.Address{{ Addr: tss.lbAddr, Type: resolver.GRPCLB, ServerName: lbServerName, }}}) var ( i int p peer.Peer ) const ( // Poll to wait for something to happen. Total timeout 1 second. Sleep 1 // ms each loop, and do at most 1000 loops. sleepEachLoop = time.Millisecond loopCount = int(time.Second / sleepEachLoop) ) // Make a non-fail-fast RPC and wait for it to succeed. for i = 0; i < loopCount; i++ { if _, err := testC.EmptyCall(ctx, &testpb.Empty{}, grpc.WaitForReady(true), grpc.Peer(&p)); err == nil { break } time.Sleep(sleepEachLoop) } if i >= loopCount { t.Fatalf("timeout waiting for the first connection to become ready. EmptyCall(_, _) = _, %v, want _, ", err) } // Make RPCs until the peer is different. 
So we know both connections are // READY. for i = 0; i < loopCount; i++ { var temp peer.Peer if _, err := testC.EmptyCall(ctx, &testpb.Empty{}, grpc.WaitForReady(true), grpc.Peer(&temp)); err == nil { if temp.Addr.(*net.TCPAddr).Port != p.Addr.(*net.TCPAddr).Port { break } } time.Sleep(sleepEachLoop) } if i >= loopCount { t.Fatalf("timeout waiting for the second connection to become ready") } // More RPCs until drop happens. So we know the picker index, and the // expected behavior of following RPCs. for i = 0; i < loopCount; i++ { if _, err := testC.EmptyCall(ctx, &testpb.Empty{}, grpc.WaitForReady(true)); status.Code(err) == codes.Unavailable { break } time.Sleep(sleepEachLoop) } if i >= loopCount { t.Fatalf("timeout waiting for drop. EmptyCall(_, _) = _, %v, want _, ", err) } select { case <-ctx.Done(): t.Fatal("timed out", ctx.Err()) default: } for _, failfast := range []bool{true, false} { for i := 0; i < 3; i++ { // 1st RPCs pick the first item in server list. They should succeed // since they choose the non-drop-request backend according to the // round robin policy. if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}, grpc.WaitForReady(!failfast)); err != nil { t.Errorf("%v.EmptyCall(_, _) = _, %v, want _, ", testC, err) } // 2nd RPCs pick the second item in server list. They should succeed // since they choose the non-drop-request backend according to the // round robin policy. if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}, grpc.WaitForReady(!failfast)); err != nil { t.Errorf("%v.EmptyCall(_, _) = _, %v, want _, ", testC, err) } // 3rd RPCs should fail, because they pick last item in server list, // with Drop set to true. if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}, grpc.WaitForReady(!failfast)); status.Code(err) != codes.Unavailable { t.Errorf("%v.EmptyCall(_, _) = _, %v, want _, %s", testC, err, codes.Unavailable) } } } // Make one more RPC to move the picker index one step further, so it's not // 0. The following RPCs will test that drop index is not reset. If picker // index is at 0, we cannot tell whether it's reset or not. if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}, grpc.WaitForReady(true)); err != nil { t.Errorf("%v.EmptyCall(_, _) = _, %v, want _, ", testC, err) } tss.backends[0].Stop() // This last pick was backend 0. Closing backend 0 doesn't reset drop index // (for level 1 picking), so the following picks will be (backend1, drop, // backend1), instead of (backend, backend, drop) if drop index was reset. 
time.Sleep(time.Second) for i := 0; i < 3; i++ { var p peer.Peer if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}, grpc.WaitForReady(true), grpc.Peer(&p)); err != nil { t.Errorf("%v.EmptyCall(_, _) = _, %v, want _, ", testC, err) } if want := tss.bePorts[1]; p.Addr.(*net.TCPAddr).Port != want { t.Errorf("got peer: %v, want peer port: %v", p.Addr, want) } if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}, grpc.WaitForReady(true)); status.Code(err) != codes.Unavailable { t.Errorf("%v.EmptyCall(_, _) = _, %v, want _, %s", testC, err, codes.Unavailable) } if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}, grpc.WaitForReady(true), grpc.Peer(&p)); err != nil { t.Errorf("%v.EmptyCall(_, _) = _, %v, want _, ", testC, err) } if want := tss.bePorts[1]; p.Addr.(*net.TCPAddr).Port != want { t.Errorf("got peer: %v, want peer port: %v", p.Addr, want) } } } // When the balancer in use disconnects, grpclb should connect to the next address from resolved balancer address list. func TestBalancerDisconnects(t *testing.T) { defer leakcheck.Check(t) r, cleanup := manual.GenerateAndRegisterManualResolver() defer cleanup() var ( tests []*testServers lbs []*grpc.Server ) for i := 0; i < 2; i++ { tss, cleanup, err := newLoadBalancer(1) if err != nil { t.Fatalf("failed to create new load balancer: %v", err) } defer cleanup() be := &lbpb.Server{ IpAddress: tss.beIPs[0], Port: int32(tss.bePorts[0]), LoadBalanceToken: lbToken, } var bes []*lbpb.Server bes = append(bes, be) sl := &lbpb.ServerList{ Servers: bes, } tss.ls.sls <- sl tests = append(tests, tss) lbs = append(lbs, tss.lb) } creds := serverNameCheckCreds{ expected: beServerName, } ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() cc, err := grpc.DialContext(ctx, r.Scheme()+":///"+beServerName, grpc.WithTransportCredentials(&creds), grpc.WithContextDialer(fakeNameDialer)) if err != nil { t.Fatalf("Failed to dial to the backend %v", err) } defer cc.Close() testC := testpb.NewTestServiceClient(cc) r.UpdateState(resolver.State{Addresses: []resolver.Address{{ Addr: tests[0].lbAddr, Type: resolver.GRPCLB, ServerName: lbServerName, }, { Addr: tests[1].lbAddr, Type: resolver.GRPCLB, ServerName: lbServerName, }}}) var p peer.Peer if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}, grpc.WaitForReady(true), grpc.Peer(&p)); err != nil { t.Fatalf("%v.EmptyCall(_, _) = _, %v, want _, ", testC, err) } if p.Addr.(*net.TCPAddr).Port != tests[0].bePorts[0] { t.Fatalf("got peer: %v, want peer port: %v", p.Addr, tests[0].bePorts[0]) } lbs[0].Stop() // Stop balancer[0], balancer[1] should be used by grpclb. // Check peer address to see if that happened. for i := 0; i < 1000; i++ { if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}, grpc.WaitForReady(true), grpc.Peer(&p)); err != nil { t.Fatalf("%v.EmptyCall(_, _) = _, %v, want _, ", testC, err) } if p.Addr.(*net.TCPAddr).Port == tests[1].bePorts[0] { return } time.Sleep(time.Millisecond) } t.Fatalf("No RPC sent to second backend after 1 second") } func TestFallback(t *testing.T) { balancer.Register(newLBBuilderWithFallbackTimeout(100 * time.Millisecond)) defer balancer.Register(newLBBuilder()) defer leakcheck.Check(t) r, cleanup := manual.GenerateAndRegisterManualResolver() defer cleanup() tss, cleanup, err := newLoadBalancer(1) if err != nil { t.Fatalf("failed to create new load balancer: %v", err) } defer cleanup() // Start a standalone backend. 
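// The standalone backend is advertised with Type resolver.Backend (not GRPCLB), so it
// should only be used as the fallback target when the remote balancer cannot be reached
// within the fallback timeout configured at the top of this test (100ms).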
beLis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Failed to listen %v", err) } defer beLis.Close() standaloneBEs := startBackends(beServerName, true, beLis) defer stopBackends(standaloneBEs) be := &lbpb.Server{ IpAddress: tss.beIPs[0], Port: int32(tss.bePorts[0]), LoadBalanceToken: lbToken, } var bes []*lbpb.Server bes = append(bes, be) sl := &lbpb.ServerList{ Servers: bes, } tss.ls.sls <- sl creds := serverNameCheckCreds{ expected: beServerName, } ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() cc, err := grpc.DialContext(ctx, r.Scheme()+":///"+beServerName, grpc.WithTransportCredentials(&creds), grpc.WithContextDialer(fakeNameDialer)) if err != nil { t.Fatalf("Failed to dial to the backend %v", err) } defer cc.Close() testC := testpb.NewTestServiceClient(cc) r.UpdateState(resolver.State{Addresses: []resolver.Address{{ Addr: "invalid.address", Type: resolver.GRPCLB, ServerName: lbServerName, }, { Addr: beLis.Addr().String(), Type: resolver.Backend, ServerName: beServerName, }}}) var p peer.Peer if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}, grpc.WaitForReady(true), grpc.Peer(&p)); err != nil { t.Fatalf("_.EmptyCall(_, _) = _, %v, want _, ", err) } if p.Addr.String() != beLis.Addr().String() { t.Fatalf("got peer: %v, want peer: %v", p.Addr, beLis.Addr()) } r.UpdateState(resolver.State{Addresses: []resolver.Address{{ Addr: tss.lbAddr, Type: resolver.GRPCLB, ServerName: lbServerName, }, { Addr: beLis.Addr().String(), Type: resolver.Backend, ServerName: beServerName, }}}) var backendUsed bool for i := 0; i < 1000; i++ { if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}, grpc.WaitForReady(true), grpc.Peer(&p)); err != nil { t.Fatalf("%v.EmptyCall(_, _) = _, %v, want _, ", testC, err) } if p.Addr.(*net.TCPAddr).Port == tss.bePorts[0] { backendUsed = true break } time.Sleep(time.Millisecond) } if !backendUsed { t.Fatalf("No RPC sent to backend behind remote balancer after 1 second") } // Close backend and remote balancer connections, should use fallback. tss.beListeners[0].(*restartableListener).stopPreviousConns() tss.lbListener.(*restartableListener).stopPreviousConns() time.Sleep(time.Second) var fallbackUsed bool for i := 0; i < 1000; i++ { if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}, grpc.WaitForReady(true), grpc.Peer(&p)); err != nil { t.Fatalf("%v.EmptyCall(_, _) = _, %v, want _, ", testC, err) } if p.Addr.String() == beLis.Addr().String() { fallbackUsed = true break } time.Sleep(time.Millisecond) } if !fallbackUsed { t.Fatalf("No RPC sent to fallback after 1 second") } // Restart backend and remote balancer, should not use backends. 
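// In other words, once both listeners are restarted and the balancer re-sends the
// server list, RPCs are expected to move off the fallback address and back to the
// backend behind the remote balancer; the loop below waits for that to happen.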
tss.beListeners[0].(*restartableListener).restart() tss.lbListener.(*restartableListener).restart() tss.ls.sls <- sl time.Sleep(time.Second) var backendUsed2 bool for i := 0; i < 1000; i++ { if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}, grpc.WaitForReady(true), grpc.Peer(&p)); err != nil { t.Fatalf("%v.EmptyCall(_, _) = _, %v, want _, ", testC, err) } if p.Addr.(*net.TCPAddr).Port == tss.bePorts[0] { backendUsed2 = true break } time.Sleep(time.Millisecond) } if !backendUsed2 { t.Fatalf("No RPC sent to backend behind remote balancer after 1 second") } } func TestGRPCLBPickFirst(t *testing.T) { const pfc = `{"loadBalancingConfig":[{"grpclb":{"childPolicy":[{"pick_first":{}}]}}]}` svcCfg, err := serviceconfig.Parse(pfc) if err != nil { t.Fatalf("Error parsing config %q: %v", pfc, err) } defer leakcheck.Check(t) r, cleanup := manual.GenerateAndRegisterManualResolver() defer cleanup() tss, cleanup, err := newLoadBalancer(3) if err != nil { t.Fatalf("failed to create new load balancer: %v", err) } defer cleanup() beServers := []*lbpb.Server{{ IpAddress: tss.beIPs[0], Port: int32(tss.bePorts[0]), LoadBalanceToken: lbToken, }, { IpAddress: tss.beIPs[1], Port: int32(tss.bePorts[1]), LoadBalanceToken: lbToken, }, { IpAddress: tss.beIPs[2], Port: int32(tss.bePorts[2]), LoadBalanceToken: lbToken, }} portsToIndex := make(map[int]int) for i := range beServers { portsToIndex[tss.bePorts[i]] = i } creds := serverNameCheckCreds{ expected: beServerName, } ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() cc, err := grpc.DialContext(ctx, r.Scheme()+":///"+beServerName, grpc.WithTransportCredentials(&creds), grpc.WithContextDialer(fakeNameDialer)) if err != nil { t.Fatalf("Failed to dial to the backend %v", err) } defer cc.Close() testC := testpb.NewTestServiceClient(cc) var ( p peer.Peer result string ) tss.ls.sls <- &lbpb.ServerList{Servers: beServers[0:3]} // Start with sub policy pick_first. r.UpdateState(resolver.State{ Addresses: []resolver.Address{{ Addr: tss.lbAddr, Type: resolver.GRPCLB, ServerName: lbServerName, }}, ServiceConfig: svcCfg, }) result = "" for i := 0; i < 1000; i++ { if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}, grpc.WaitForReady(true), grpc.Peer(&p)); err != nil { t.Fatalf("_.EmptyCall(_, _) = _, %v, want _, ", err) } result += strconv.Itoa(portsToIndex[p.Addr.(*net.TCPAddr).Port]) } if seq := "00000"; !strings.Contains(result, strings.Repeat(seq, 100)) { t.Errorf("got result sequence %q, want patten %q", result, seq) } tss.ls.sls <- &lbpb.ServerList{Servers: beServers[2:]} result = "" for i := 0; i < 1000; i++ { if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}, grpc.WaitForReady(true), grpc.Peer(&p)); err != nil { t.Fatalf("_.EmptyCall(_, _) = _, %v, want _, ", err) } result += strconv.Itoa(portsToIndex[p.Addr.(*net.TCPAddr).Port]) } if seq := "22222"; !strings.Contains(result, strings.Repeat(seq, 100)) { t.Errorf("got result sequence %q, want patten %q", result, seq) } tss.ls.sls <- &lbpb.ServerList{Servers: beServers[1:]} result = "" for i := 0; i < 1000; i++ { if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}, grpc.WaitForReady(true), grpc.Peer(&p)); err != nil { t.Fatalf("_.EmptyCall(_, _) = _, %v, want _, ", err) } result += strconv.Itoa(portsToIndex[p.Addr.(*net.TCPAddr).Port]) } if seq := "22222"; !strings.Contains(result, strings.Repeat(seq, 100)) { t.Errorf("got result sequence %q, want patten %q", result, seq) } // Switch sub policy to roundrobin. 
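// An empty service config carries no grpclb child policy, so the balancer should fall
// back to its default round_robin sub-policy; the expected pick sequence then changes
// from a single repeated index to the interleaved "121212" and "012012012" patterns below.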
grpclbServiceConfigEmpty, err := serviceconfig.Parse(`{}`) if err != nil { t.Fatalf("Error parsing config %q: %v", grpclbServiceConfigEmpty, err) } r.UpdateState(resolver.State{ Addresses: []resolver.Address{{ Addr: tss.lbAddr, Type: resolver.GRPCLB, ServerName: lbServerName, }}, ServiceConfig: grpclbServiceConfigEmpty, }) result = "" for i := 0; i < 1000; i++ { if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}, grpc.WaitForReady(true), grpc.Peer(&p)); err != nil { t.Fatalf("_.EmptyCall(_, _) = _, %v, want _, ", err) } result += strconv.Itoa(portsToIndex[p.Addr.(*net.TCPAddr).Port]) } if seq := "121212"; !strings.Contains(result, strings.Repeat(seq, 100)) { t.Errorf("got result sequence %q, want patten %q", result, seq) } tss.ls.sls <- &lbpb.ServerList{Servers: beServers[0:3]} result = "" for i := 0; i < 1000; i++ { if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}, grpc.WaitForReady(true), grpc.Peer(&p)); err != nil { t.Fatalf("%v.EmptyCall(_, _) = _, %v, want _, ", testC, err) } result += strconv.Itoa(portsToIndex[p.Addr.(*net.TCPAddr).Port]) } if seq := "012012012"; !strings.Contains(result, strings.Repeat(seq, 2)) { t.Errorf("got result sequence %q, want patten %q", result, seq) } } type failPreRPCCred struct{} func (failPreRPCCred) GetRequestMetadata(ctx context.Context, uri ...string) (map[string]string, error) { if strings.Contains(uri[0], failtosendURI) { return nil, fmt.Errorf("rpc should fail to send") } return nil, nil } func (failPreRPCCred) RequireTransportSecurity() bool { return false } func checkStats(stats, expected *rpcStats) error { if !stats.equal(expected) { return fmt.Errorf("stats not equal: got %+v, want %+v", stats, expected) } return nil } func runAndGetStats(t *testing.T, drop bool, runRPCs func(*grpc.ClientConn)) *rpcStats { defer leakcheck.Check(t) r, cleanup := manual.GenerateAndRegisterManualResolver() defer cleanup() tss, cleanup, err := newLoadBalancer(1) if err != nil { t.Fatalf("failed to create new load balancer: %v", err) } defer cleanup() servers := []*lbpb.Server{{ IpAddress: tss.beIPs[0], Port: int32(tss.bePorts[0]), LoadBalanceToken: lbToken, }} if drop { servers = append(servers, &lbpb.Server{ LoadBalanceToken: lbToken, Drop: drop, }) } tss.ls.sls <- &lbpb.ServerList{Servers: servers} tss.ls.statsDura = 100 * time.Millisecond creds := serverNameCheckCreds{expected: beServerName} ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() cc, err := grpc.DialContext(ctx, r.Scheme()+":///"+beServerName, grpc.WithTransportCredentials(&creds), grpc.WithPerRPCCredentials(failPreRPCCred{}), grpc.WithContextDialer(fakeNameDialer)) if err != nil { t.Fatalf("Failed to dial to the backend %v", err) } defer cc.Close() r.UpdateState(resolver.State{Addresses: []resolver.Address{{ Addr: tss.lbAddr, Type: resolver.GRPCLB, ServerName: lbServerName, }}}) runRPCs(cc) time.Sleep(1 * time.Second) stats := tss.ls.stats return stats } const ( countRPC = 40 failtosendURI = "failtosend" ) func TestGRPCLBStatsUnarySuccess(t *testing.T) { defer leakcheck.Check(t) stats := runAndGetStats(t, false, func(cc *grpc.ClientConn) { testC := testpb.NewTestServiceClient(cc) // The first non-failfast RPC succeeds, all connections are up. 
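// Warming up with a single wait-for-ready RPC keeps the fail-fast RPCs that follow from
// failing while the connection is still being established, so the reported stats below
// should reflect only the intended outcomes.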
if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}, grpc.WaitForReady(true)); err != nil { t.Fatalf("%v.EmptyCall(_, _) = _, %v, want _, ", testC, err) } for i := 0; i < countRPC-1; i++ { testC.EmptyCall(context.Background(), &testpb.Empty{}) } }) if err := checkStats(stats, &rpcStats{ numCallsStarted: int64(countRPC), numCallsFinished: int64(countRPC), numCallsFinishedKnownReceived: int64(countRPC), }); err != nil { t.Fatal(err) } } func TestGRPCLBStatsUnaryDrop(t *testing.T) { defer leakcheck.Check(t) stats := runAndGetStats(t, true, func(cc *grpc.ClientConn) { testC := testpb.NewTestServiceClient(cc) // The first non-failfast RPC succeeds, all connections are up. if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}, grpc.WaitForReady(true)); err != nil { t.Fatalf("%v.EmptyCall(_, _) = _, %v, want _, ", testC, err) } for i := 0; i < countRPC-1; i++ { testC.EmptyCall(context.Background(), &testpb.Empty{}) } }) if err := checkStats(stats, &rpcStats{ numCallsStarted: int64(countRPC), numCallsFinished: int64(countRPC), numCallsFinishedKnownReceived: int64(countRPC) / 2, numCallsDropped: map[string]int64{lbToken: int64(countRPC) / 2}, }); err != nil { t.Fatal(err) } } func TestGRPCLBStatsUnaryFailedToSend(t *testing.T) { defer leakcheck.Check(t) stats := runAndGetStats(t, false, func(cc *grpc.ClientConn) { testC := testpb.NewTestServiceClient(cc) // The first non-failfast RPC succeeds, all connections are up. if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}, grpc.WaitForReady(true)); err != nil { t.Fatalf("%v.EmptyCall(_, _) = _, %v, want _, ", testC, err) } for i := 0; i < countRPC-1; i++ { cc.Invoke(context.Background(), failtosendURI, &testpb.Empty{}, nil) } }) if err := checkStats(stats, &rpcStats{ numCallsStarted: int64(countRPC), numCallsFinished: int64(countRPC), numCallsFinishedWithClientFailedToSend: int64(countRPC - 1), numCallsFinishedKnownReceived: 1, }); err != nil { t.Fatal(err) } } func TestGRPCLBStatsStreamingSuccess(t *testing.T) { defer leakcheck.Check(t) stats := runAndGetStats(t, false, func(cc *grpc.ClientConn) { testC := testpb.NewTestServiceClient(cc) // The first non-failfast RPC succeeds, all connections are up. stream, err := testC.FullDuplexCall(context.Background(), grpc.WaitForReady(true)) if err != nil { t.Fatalf("%v.FullDuplexCall(_, _) = _, %v, want _, ", testC, err) } for { if _, err = stream.Recv(); err == io.EOF { break } } for i := 0; i < countRPC-1; i++ { stream, err = testC.FullDuplexCall(context.Background()) if err == nil { // Wait for stream to end if err is nil. for { if _, err = stream.Recv(); err == io.EOF { break } } } } }) if err := checkStats(stats, &rpcStats{ numCallsStarted: int64(countRPC), numCallsFinished: int64(countRPC), numCallsFinishedKnownReceived: int64(countRPC), }); err != nil { t.Fatal(err) } } func TestGRPCLBStatsStreamingDrop(t *testing.T) { defer leakcheck.Check(t) stats := runAndGetStats(t, true, func(cc *grpc.ClientConn) { testC := testpb.NewTestServiceClient(cc) // The first non-failfast RPC succeeds, all connections are up. stream, err := testC.FullDuplexCall(context.Background(), grpc.WaitForReady(true)) if err != nil { t.Fatalf("%v.FullDuplexCall(_, _) = _, %v, want _, ", testC, err) } for { if _, err = stream.Recv(); err == io.EOF { break } } for i := 0; i < countRPC-1; i++ { stream, err = testC.FullDuplexCall(context.Background()) if err == nil { // Wait for stream to end if err is nil. 
for { if _, err = stream.Recv(); err == io.EOF { break } } } } }) if err := checkStats(stats, &rpcStats{ numCallsStarted: int64(countRPC), numCallsFinished: int64(countRPC), numCallsFinishedKnownReceived: int64(countRPC) / 2, numCallsDropped: map[string]int64{lbToken: int64(countRPC) / 2}, }); err != nil { t.Fatal(err) } } func TestGRPCLBStatsStreamingFailedToSend(t *testing.T) { defer leakcheck.Check(t) stats := runAndGetStats(t, false, func(cc *grpc.ClientConn) { testC := testpb.NewTestServiceClient(cc) // The first non-failfast RPC succeeds, all connections are up. stream, err := testC.FullDuplexCall(context.Background(), grpc.WaitForReady(true)) if err != nil { t.Fatalf("%v.FullDuplexCall(_, _) = _, %v, want _, ", testC, err) } for { if _, err = stream.Recv(); err == io.EOF { break } } for i := 0; i < countRPC-1; i++ { cc.NewStream(context.Background(), &grpc.StreamDesc{}, failtosendURI) } }) if err := checkStats(stats, &rpcStats{ numCallsStarted: int64(countRPC), numCallsFinished: int64(countRPC), numCallsFinishedWithClientFailedToSend: int64(countRPC - 1), numCallsFinishedKnownReceived: 1, }); err != nil { t.Fatal(err) } } grpc-go-1.22.1/balancer/grpclb/grpclb_test_util_test.go000066400000000000000000000032251351635773100231230ustar00rootroot00000000000000/* * * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpclb import ( "net" "sync" ) type tempError struct{} func (*tempError) Error() string { return "grpclb test temporary error" } func (*tempError) Temporary() bool { return true } type restartableListener struct { net.Listener addr string mu sync.Mutex closed bool conns []net.Conn } func newRestartableListener(l net.Listener) *restartableListener { return &restartableListener{ Listener: l, addr: l.Addr().String(), } } func (l *restartableListener) Accept() (conn net.Conn, err error) { conn, err = l.Listener.Accept() if err == nil { l.mu.Lock() if l.closed { conn.Close() l.mu.Unlock() return nil, &tempError{} } l.conns = append(l.conns, conn) l.mu.Unlock() } return } func (l *restartableListener) Close() error { return l.Listener.Close() } func (l *restartableListener) stopPreviousConns() { l.mu.Lock() l.closed = true tmp := l.conns l.conns = nil l.mu.Unlock() for _, conn := range tmp { conn.Close() } } func (l *restartableListener) restart() { l.mu.Lock() l.closed = false l.mu.Unlock() } grpc-go-1.22.1/balancer/grpclb/grpclb_util.go000066400000000000000000000150001351635773100210170ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
* See the License for the specific language governing permissions and * limitations under the License. * */ package grpclb import ( "fmt" "sync" "time" "google.golang.org/grpc/balancer" "google.golang.org/grpc/connectivity" "google.golang.org/grpc/resolver" ) // The parent ClientConn should re-resolve when grpclb loses connection to the // remote balancer. When the ClientConn inside grpclb gets a TransientFailure, // it calls lbManualResolver.ResolveNow(), which calls parent ClientConn's // ResolveNow, and eventually results in re-resolve happening in parent // ClientConn's resolver (DNS for example). // // parent // ClientConn // +-----------------------------------------------------------------+ // | parent +---------------------------------+ | // | DNS ClientConn | grpclb | | // | resolver balancerWrapper | | | // | + + | grpclb grpclb | | // | | | | ManualResolver ClientConn | | // | | | | + + | | // | | | | | | Transient | | // | | | | | | Failure | | // | | | | | <--------- | | | // | | | <--------------- | ResolveNow | | | // | | <--------- | ResolveNow | | | | | // | | ResolveNow | | | | | | // | | | | | | | | // | + + | + + | | // | +---------------------------------+ | // +-----------------------------------------------------------------+ // lbManualResolver is used by the ClientConn inside grpclb. It's a manual // resolver with a special ResolveNow() function. // // When ResolveNow() is called, it calls ResolveNow() on the parent ClientConn, // so when grpclb client lose contact with remote balancers, the parent // ClientConn's resolver will re-resolve. type lbManualResolver struct { scheme string ccr resolver.ClientConn ccb balancer.ClientConn } func (r *lbManualResolver) Build(_ resolver.Target, cc resolver.ClientConn, _ resolver.BuildOption) (resolver.Resolver, error) { r.ccr = cc return r, nil } func (r *lbManualResolver) Scheme() string { return r.scheme } // ResolveNow calls resolveNow on the parent ClientConn. func (r *lbManualResolver) ResolveNow(o resolver.ResolveNowOption) { r.ccb.ResolveNow(o) } // Close is a noop for Resolver. func (*lbManualResolver) Close() {} // UpdateState calls cc.UpdateState. func (r *lbManualResolver) UpdateState(s resolver.State) { r.ccr.UpdateState(s) } const subConnCacheTime = time.Second * 10 // lbCacheClientConn is a wrapper balancer.ClientConn with a SubConn cache. // SubConns will be kept in cache for subConnCacheTime before being removed. // // Its new and remove methods are updated to do cache first. type lbCacheClientConn struct { cc balancer.ClientConn timeout time.Duration mu sync.Mutex // subConnCache only keeps subConns that are being deleted. subConnCache map[resolver.Address]*subConnCacheEntry subConnToAddr map[balancer.SubConn]resolver.Address } type subConnCacheEntry struct { sc balancer.SubConn cancel func() abortDeleting bool } func newLBCacheClientConn(cc balancer.ClientConn) *lbCacheClientConn { return &lbCacheClientConn{ cc: cc, timeout: subConnCacheTime, subConnCache: make(map[resolver.Address]*subConnCacheEntry), subConnToAddr: make(map[balancer.SubConn]resolver.Address), } } func (ccc *lbCacheClientConn) NewSubConn(addrs []resolver.Address, opts balancer.NewSubConnOptions) (balancer.SubConn, error) { if len(addrs) != 1 { return nil, fmt.Errorf("grpclb calling NewSubConn with addrs of length %v", len(addrs)) } addrWithoutMD := addrs[0] addrWithoutMD.Metadata = nil ccc.mu.Lock() defer ccc.mu.Unlock() if entry, ok := ccc.subConnCache[addrWithoutMD]; ok { // If entry is in subConnCache, the SubConn was being deleted. 
// cancel function will never be nil. entry.cancel() delete(ccc.subConnCache, addrWithoutMD) return entry.sc, nil } scNew, err := ccc.cc.NewSubConn(addrs, opts) if err != nil { return nil, err } ccc.subConnToAddr[scNew] = addrWithoutMD return scNew, nil } func (ccc *lbCacheClientConn) RemoveSubConn(sc balancer.SubConn) { ccc.mu.Lock() defer ccc.mu.Unlock() addr, ok := ccc.subConnToAddr[sc] if !ok { return } if entry, ok := ccc.subConnCache[addr]; ok { if entry.sc != sc { // This could happen if NewSubConn was called multiple times for the // same address, and those SubConns are all removed. We remove sc // immediately here. delete(ccc.subConnToAddr, sc) ccc.cc.RemoveSubConn(sc) } return } entry := &subConnCacheEntry{ sc: sc, } ccc.subConnCache[addr] = entry timer := time.AfterFunc(ccc.timeout, func() { ccc.mu.Lock() if entry.abortDeleting { return } ccc.cc.RemoveSubConn(sc) delete(ccc.subConnToAddr, sc) delete(ccc.subConnCache, addr) ccc.mu.Unlock() }) entry.cancel = func() { if !timer.Stop() { // If stop was not successful, the timer has fired (this can only // happen in a race). But the deleting function is blocked on ccc.mu // because the mutex was held by the caller of this function. // // Set abortDeleting to true to abort the deleting function. When // the lock is released, the deleting function will acquire the // lock, check the value of abortDeleting and return. entry.abortDeleting = true } } } func (ccc *lbCacheClientConn) UpdateBalancerState(s connectivity.State, p balancer.Picker) { ccc.cc.UpdateBalancerState(s, p) } func (ccc *lbCacheClientConn) close() { ccc.mu.Lock() // Only cancel all existing timers. There's no need to remove SubConns. for _, entry := range ccc.subConnCache { entry.cancel() } ccc.mu.Unlock() } grpc-go-1.22.1/balancer/grpclb/grpclb_util_test.go000066400000000000000000000123401351635773100220620ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package grpclb import ( "fmt" "sync" "testing" "time" "google.golang.org/grpc/balancer" "google.golang.org/grpc/resolver" ) type mockSubConn struct { balancer.SubConn } type mockClientConn struct { balancer.ClientConn mu sync.Mutex subConns map[balancer.SubConn]resolver.Address } func newMockClientConn() *mockClientConn { return &mockClientConn{ subConns: make(map[balancer.SubConn]resolver.Address), } } func (mcc *mockClientConn) NewSubConn(addrs []resolver.Address, opts balancer.NewSubConnOptions) (balancer.SubConn, error) { sc := &mockSubConn{} mcc.mu.Lock() defer mcc.mu.Unlock() mcc.subConns[sc] = addrs[0] return sc, nil } func (mcc *mockClientConn) RemoveSubConn(sc balancer.SubConn) { mcc.mu.Lock() defer mcc.mu.Unlock() delete(mcc.subConns, sc) } const testCacheTimeout = 100 * time.Millisecond func checkMockCC(mcc *mockClientConn, scLen int) error { mcc.mu.Lock() defer mcc.mu.Unlock() if len(mcc.subConns) != scLen { return fmt.Errorf("mcc = %+v, want len(mcc.subConns) = %v", mcc.subConns, scLen) } return nil } func checkCacheCC(ccc *lbCacheClientConn, sccLen, sctaLen int) error { ccc.mu.Lock() defer ccc.mu.Unlock() if len(ccc.subConnCache) != sccLen { return fmt.Errorf("ccc = %+v, want len(ccc.subConnCache) = %v", ccc.subConnCache, sccLen) } if len(ccc.subConnToAddr) != sctaLen { return fmt.Errorf("ccc = %+v, want len(ccc.subConnToAddr) = %v", ccc.subConnToAddr, sctaLen) } return nil } // Test that SubConn won't be immediately removed. func TestLBCacheClientConnExpire(t *testing.T) { mcc := newMockClientConn() if err := checkMockCC(mcc, 0); err != nil { t.Fatal(err) } ccc := newLBCacheClientConn(mcc) ccc.timeout = testCacheTimeout if err := checkCacheCC(ccc, 0, 0); err != nil { t.Fatal(err) } sc, _ := ccc.NewSubConn([]resolver.Address{{Addr: "address1"}}, balancer.NewSubConnOptions{}) // One subconn in MockCC. if err := checkMockCC(mcc, 1); err != nil { t.Fatal(err) } // No subconn being deleted, and one in CacheCC. if err := checkCacheCC(ccc, 0, 1); err != nil { t.Fatal(err) } ccc.RemoveSubConn(sc) // One subconn in MockCC before timeout. if err := checkMockCC(mcc, 1); err != nil { t.Fatal(err) } // One subconn being deleted, and one in CacheCC. if err := checkCacheCC(ccc, 1, 1); err != nil { t.Fatal(err) } // Should all become empty after timeout. var err error for i := 0; i < 2; i++ { time.Sleep(testCacheTimeout) err = checkMockCC(mcc, 0) if err != nil { continue } err = checkCacheCC(ccc, 0, 0) if err != nil { continue } } if err != nil { t.Fatal(err) } } // Test that NewSubConn with the same address of a SubConn being removed will // reuse the SubConn and cancel the removing. func TestLBCacheClientConnReuse(t *testing.T) { mcc := newMockClientConn() if err := checkMockCC(mcc, 0); err != nil { t.Fatal(err) } ccc := newLBCacheClientConn(mcc) ccc.timeout = testCacheTimeout if err := checkCacheCC(ccc, 0, 0); err != nil { t.Fatal(err) } sc, _ := ccc.NewSubConn([]resolver.Address{{Addr: "address1"}}, balancer.NewSubConnOptions{}) // One subconn in MockCC. if err := checkMockCC(mcc, 1); err != nil { t.Fatal(err) } // No subconn being deleted, and one in CacheCC. if err := checkCacheCC(ccc, 0, 1); err != nil { t.Fatal(err) } ccc.RemoveSubConn(sc) // One subconn in MockCC before timeout. if err := checkMockCC(mcc, 1); err != nil { t.Fatal(err) } // One subconn being deleted, and one in CacheCC. if err := checkCacheCC(ccc, 1, 1); err != nil { t.Fatal(err) } // Recreate the old subconn, this should cancel the deleting process. 
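// NewSubConn with the same address finds the entry in subConnCache, calls its cancel()
// to stop the pending removal timer, and hands back the cached SubConn, so the counts
// checked below should stay at one SubConn with nothing pending deletion.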
sc, _ = ccc.NewSubConn([]resolver.Address{{Addr: "address1"}}, balancer.NewSubConnOptions{}) // One subconn in MockCC. if err := checkMockCC(mcc, 1); err != nil { t.Fatal(err) } // No subconn being deleted, and one in CacheCC. if err := checkCacheCC(ccc, 0, 1); err != nil { t.Fatal(err) } var err error // Should not become empty after 2*timeout. time.Sleep(2 * testCacheTimeout) err = checkMockCC(mcc, 1) if err != nil { t.Fatal(err) } err = checkCacheCC(ccc, 0, 1) if err != nil { t.Fatal(err) } // Call remove again, will delete after timeout. ccc.RemoveSubConn(sc) // One subconn in MockCC before timeout. if err := checkMockCC(mcc, 1); err != nil { t.Fatal(err) } // One subconn being deleted, and one in CacheCC. if err := checkCacheCC(ccc, 1, 1); err != nil { t.Fatal(err) } // Should all become empty after timeout. for i := 0; i < 2; i++ { time.Sleep(testCacheTimeout) err = checkMockCC(mcc, 0) if err != nil { continue } err = checkCacheCC(ccc, 0, 0) if err != nil { continue } } if err != nil { t.Fatal(err) } } grpc-go-1.22.1/balancer/grpclb/regenerate.sh000077500000000000000000000017411351635773100206510ustar00rootroot00000000000000#!/bin/bash # Copyright 2018 gRPC authors. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. set -eux -o pipefail TMP=$(mktemp -d) function finish { rm -rf "$TMP" } trap finish EXIT pushd "$TMP" mkdir -p grpc/lb/v1 curl https://raw.githubusercontent.com/grpc/grpc-proto/master/grpc/lb/v1/load_balancer.proto > grpc/lb/v1/load_balancer.proto protoc --go_out=plugins=grpc,paths=source_relative:. -I. grpc/lb/v1/*.proto popd rm -f grpc_lb_v1/*.pb.go cp "$TMP"/grpc/lb/v1/*.pb.go grpc_lb_v1/ grpc-go-1.22.1/balancer/internal/000077500000000000000000000000001351635773100165315ustar00rootroot00000000000000grpc-go-1.22.1/balancer/internal/wrr/000077500000000000000000000000001351635773100173435ustar00rootroot00000000000000grpc-go-1.22.1/balancer/internal/wrr/random.go000066400000000000000000000031511351635773100211520ustar00rootroot00000000000000/* * * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package wrr import "google.golang.org/grpc/internal/grpcrand" // weightedItem is a wrapped weighted item that is used to implement weighted random algorithm. type weightedItem struct { Item interface{} Weight int64 } // randomWRR is a struct that contains weighted items implement weighted random algorithm. type randomWRR struct { items []*weightedItem sumOfWeights int64 } // NewRandom creates a new WRR with random. 
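// The weighted random pick in Next() draws a value in [0, sumOfWeights) and walks the
// items, subtracting each weight until the value goes negative; the probability of
// returning an item is therefore proportional to its weight.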
func NewRandom() WRR { return &randomWRR{} } var grpcrandInt63n = grpcrand.Int63n func (rw *randomWRR) Next() (item interface{}) { if rw.sumOfWeights == 0 { return nil } // Random number in [0, sum). randomWeight := grpcrandInt63n(rw.sumOfWeights) for _, item := range rw.items { randomWeight = randomWeight - item.Weight if randomWeight < 0 { return item.Item } } return rw.items[len(rw.items)-1].Item } func (rw *randomWRR) Add(item interface{}, weight int64) { rItem := &weightedItem{Item: item, Weight: weight} rw.items = append(rw.items, rItem) rw.sumOfWeights += weight } grpc-go-1.22.1/balancer/internal/wrr/wrr.go000066400000000000000000000015661351635773100205140ustar00rootroot00000000000000/* * * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package wrr // WRR defines an interface that implements weighted round robin. type WRR interface { // Add adds an item with weight to the WRR set. Add(item interface{}, weight int64) // Next returns the next picked item. // // Next needs to be thread safe. Next() interface{} } grpc-go-1.22.1/balancer/internal/wrr/wrr_test.go000066400000000000000000000043141351635773100215450ustar00rootroot00000000000000/* * * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package wrr import ( "errors" "math" "math/rand" "testing" "github.com/google/go-cmp/cmp" ) const iterCount = 10000 func equalApproximate(a, b float64) error { opt := cmp.Comparer(func(x, y float64) bool { delta := math.Abs(x - y) mean := math.Abs(x+y) / 2.0 return delta/mean < 0.05 }) if !cmp.Equal(a, b, opt) { return errors.New(cmp.Diff(a, b)) } return nil } func testWRRNext(t *testing.T, newWRR func() WRR) { tests := []struct { name string weights []int64 }{ { name: "1-1-1", weights: []int64{1, 1, 1}, }, { name: "1-2-3", weights: []int64{1, 2, 3}, }, { name: "5-3-2", weights: []int64{5, 3, 2}, }, { name: "17-23-37", weights: []int64{17, 23, 37}, }, } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { var sumOfWeights int64 w := newWRR() for i, weight := range tt.weights { w.Add(i, weight) sumOfWeights += weight } results := make(map[int]int) for i := 0; i < iterCount; i++ { results[w.Next().(int)]++ } wantRatio := make([]float64, len(tt.weights)) for i, weight := range tt.weights { wantRatio[i] = float64(weight) / float64(sumOfWeights) } gotRatio := make([]float64, len(tt.weights)) for i, count := range results { gotRatio[i] = float64(count) / iterCount } for i := range wantRatio { if err := equalApproximate(gotRatio[i], wantRatio[i]); err != nil { t.Errorf("%v not equal %v", i, err) } } }) } } func TestRandomWRRNext(t *testing.T) { testWRRNext(t, NewRandom) } func init() { r := rand.New(rand.NewSource(0)) grpcrandInt63n = r.Int63n } grpc-go-1.22.1/balancer/roundrobin/000077500000000000000000000000001351635773100170765ustar00rootroot00000000000000grpc-go-1.22.1/balancer/roundrobin/roundrobin.go000066400000000000000000000047311351635773100216130ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package roundrobin defines a roundrobin balancer. Roundrobin balancer is // installed as one of the default balancers in gRPC, users don't need to // explicitly install this balancer. package roundrobin import ( "context" "sync" "google.golang.org/grpc/balancer" "google.golang.org/grpc/balancer/base" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/internal/grpcrand" "google.golang.org/grpc/resolver" ) // Name is the name of round_robin balancer. const Name = "round_robin" // newBuilder creates a new roundrobin balancer builder. 
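// The builder is registered under the name "round_robin" in init() below, so a client
// selects this balancer by name when dialing, for example (target being whatever
// resolver target the application dials, as the tests in this package do):
//
//	cc, err := grpc.Dial(target, grpc.WithInsecure(), grpc.WithBalancerName(roundrobin.Name))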
func newBuilder() balancer.Builder { return base.NewBalancerBuilderWithConfig(Name, &rrPickerBuilder{}, base.Config{HealthCheck: true}) } func init() { balancer.Register(newBuilder()) } type rrPickerBuilder struct{} func (*rrPickerBuilder) Build(readySCs map[resolver.Address]balancer.SubConn) balancer.Picker { grpclog.Infof("roundrobinPicker: newPicker called with readySCs: %v", readySCs) if len(readySCs) == 0 { return base.NewErrPicker(balancer.ErrNoSubConnAvailable) } var scs []balancer.SubConn for _, sc := range readySCs { scs = append(scs, sc) } return &rrPicker{ subConns: scs, // Start at a random index, as the same RR balancer rebuilds a new // picker when SubConn states change, and we don't want to apply excess // load to the first server in the list. next: grpcrand.Intn(len(scs)), } } type rrPicker struct { // subConns is the snapshot of the roundrobin balancer when this picker was // created. The slice is immutable. Each Get() will do a round robin // selection from it and return the selected SubConn. subConns []balancer.SubConn mu sync.Mutex next int } func (p *rrPicker) Pick(ctx context.Context, opts balancer.PickOptions) (balancer.SubConn, func(balancer.DoneInfo), error) { p.mu.Lock() sc := p.subConns[p.next] p.next = (p.next + 1) % len(p.subConns) p.mu.Unlock() return sc, nil, nil } grpc-go-1.22.1/balancer/roundrobin/roundrobin_test.go000066400000000000000000000354601351635773100226550ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package roundrobin_test import ( "context" "fmt" "net" "sync" "testing" "time" "google.golang.org/grpc" "google.golang.org/grpc/balancer/roundrobin" "google.golang.org/grpc/codes" _ "google.golang.org/grpc/grpclog/glogger" "google.golang.org/grpc/internal/leakcheck" "google.golang.org/grpc/peer" "google.golang.org/grpc/resolver" "google.golang.org/grpc/resolver/manual" "google.golang.org/grpc/status" testpb "google.golang.org/grpc/test/grpc_testing" ) type testServer struct { testpb.TestServiceServer } func (s *testServer) EmptyCall(ctx context.Context, in *testpb.Empty) (*testpb.Empty, error) { return &testpb.Empty{}, nil } func (s *testServer) FullDuplexCall(stream testpb.TestService_FullDuplexCallServer) error { return nil } type test struct { servers []*grpc.Server addresses []string } func (t *test) cleanup() { for _, s := range t.servers { s.Stop() } } func startTestServers(count int) (_ *test, err error) { t := &test{} defer func() { if err != nil { t.cleanup() } }() for i := 0; i < count; i++ { lis, err := net.Listen("tcp", "localhost:0") if err != nil { return nil, fmt.Errorf("failed to listen %v", err) } s := grpc.NewServer() testpb.RegisterTestServiceServer(s, &testServer{}) t.servers = append(t.servers, s) t.addresses = append(t.addresses, lis.Addr().String()) go func(s *grpc.Server, l net.Listener) { s.Serve(l) }(s, lis) } return t, nil } func TestOneBackend(t *testing.T) { defer leakcheck.Check(t) r, cleanup := manual.GenerateAndRegisterManualResolver() defer cleanup() test, err := startTestServers(1) if err != nil { t.Fatalf("failed to start servers: %v", err) } defer test.cleanup() cc, err := grpc.Dial(r.Scheme()+":///test.server", grpc.WithInsecure(), grpc.WithBalancerName(roundrobin.Name)) if err != nil { t.Fatalf("failed to dial: %v", err) } defer cc.Close() testc := testpb.NewTestServiceClient(cc) // The first RPC should fail because there's no address. ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond) defer cancel() if _, err := testc.EmptyCall(ctx, &testpb.Empty{}); err == nil || status.Code(err) != codes.DeadlineExceeded { t.Fatalf("EmptyCall() = _, %v, want _, DeadlineExceeded", err) } r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: test.addresses[0]}}}) // The second RPC should succeed. if _, err := testc.EmptyCall(context.Background(), &testpb.Empty{}); err != nil { t.Fatalf("EmptyCall() = _, %v, want _, ", err) } } func TestBackendsRoundRobin(t *testing.T) { defer leakcheck.Check(t) r, cleanup := manual.GenerateAndRegisterManualResolver() defer cleanup() backendCount := 5 test, err := startTestServers(backendCount) if err != nil { t.Fatalf("failed to start servers: %v", err) } defer test.cleanup() cc, err := grpc.Dial(r.Scheme()+":///test.server", grpc.WithInsecure(), grpc.WithBalancerName(roundrobin.Name)) if err != nil { t.Fatalf("failed to dial: %v", err) } defer cc.Close() testc := testpb.NewTestServiceClient(cc) // The first RPC should fail because there's no address. ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond) defer cancel() if _, err := testc.EmptyCall(ctx, &testpb.Empty{}); err == nil || status.Code(err) != codes.DeadlineExceeded { t.Fatalf("EmptyCall() = _, %v, want _, DeadlineExceeded", err) } var resolvedAddrs []resolver.Address for i := 0; i < backendCount; i++ { resolvedAddrs = append(resolvedAddrs, resolver.Address{Addr: test.addresses[i]}) } r.UpdateState(resolver.State{Addresses: resolvedAddrs}) var p peer.Peer // Make sure connections to all servers are up. 
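// The balancer rebuilds its picker each time a SubConn changes state, so poll with short
// sleeps until every backend has been seen at least once; only then is the round robin
// ordering check that follows meaningful.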
for si := 0; si < backendCount; si++ { var connected bool for i := 0; i < 1000; i++ { if _, err := testc.EmptyCall(context.Background(), &testpb.Empty{}, grpc.Peer(&p)); err != nil { t.Fatalf("EmptyCall() = _, %v, want _, ", err) } if p.Addr.String() == test.addresses[si] { connected = true break } time.Sleep(time.Millisecond) } if !connected { t.Fatalf("Connection to %v was not up after more than 1 second", test.addresses[si]) } } for i := 0; i < 3*backendCount; i++ { if _, err := testc.EmptyCall(context.Background(), &testpb.Empty{}, grpc.Peer(&p)); err != nil { t.Fatalf("EmptyCall() = _, %v, want _, ", err) } if p.Addr.String() != test.addresses[i%backendCount] { t.Fatalf("Index %d: want peer %v, got peer %v", i, test.addresses[i%backendCount], p.Addr.String()) } } } func TestAddressesRemoved(t *testing.T) { defer leakcheck.Check(t) r, cleanup := manual.GenerateAndRegisterManualResolver() defer cleanup() test, err := startTestServers(1) if err != nil { t.Fatalf("failed to start servers: %v", err) } defer test.cleanup() cc, err := grpc.Dial(r.Scheme()+":///test.server", grpc.WithInsecure(), grpc.WithBalancerName(roundrobin.Name)) if err != nil { t.Fatalf("failed to dial: %v", err) } defer cc.Close() testc := testpb.NewTestServiceClient(cc) // The first RPC should fail because there's no address. ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond) defer cancel() if _, err := testc.EmptyCall(ctx, &testpb.Empty{}); err == nil || status.Code(err) != codes.DeadlineExceeded { t.Fatalf("EmptyCall() = _, %v, want _, DeadlineExceeded", err) } r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: test.addresses[0]}}}) // The second RPC should succeed. if _, err := testc.EmptyCall(context.Background(), &testpb.Empty{}); err != nil { t.Fatalf("EmptyCall() = _, %v, want _, ", err) } r.UpdateState(resolver.State{Addresses: []resolver.Address{}}) for i := 0; i < 1000; i++ { ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond) defer cancel() if _, err := testc.EmptyCall(ctx, &testpb.Empty{}, grpc.WaitForReady(true)); status.Code(err) == codes.DeadlineExceeded { return } time.Sleep(time.Millisecond) } t.Fatalf("No RPC failed after removing all addresses, want RPC to fail with DeadlineExceeded") } func TestCloseWithPendingRPC(t *testing.T) { defer leakcheck.Check(t) r, cleanup := manual.GenerateAndRegisterManualResolver() defer cleanup() test, err := startTestServers(1) if err != nil { t.Fatalf("failed to start servers: %v", err) } defer test.cleanup() cc, err := grpc.Dial(r.Scheme()+":///test.server", grpc.WithInsecure(), grpc.WithBalancerName(roundrobin.Name)) if err != nil { t.Fatalf("failed to dial: %v", err) } testc := testpb.NewTestServiceClient(cc) var wg sync.WaitGroup for i := 0; i < 3; i++ { wg.Add(1) go func() { defer wg.Done() // This RPC blocks until cc is closed. 
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) if _, err := testc.EmptyCall(ctx, &testpb.Empty{}); status.Code(err) == codes.DeadlineExceeded { t.Errorf("RPC failed because of deadline after cc is closed; want error the client connection is closing") } cancel() }() } cc.Close() wg.Wait() } func TestNewAddressWhileBlocking(t *testing.T) { defer leakcheck.Check(t) r, cleanup := manual.GenerateAndRegisterManualResolver() defer cleanup() test, err := startTestServers(1) if err != nil { t.Fatalf("failed to start servers: %v", err) } defer test.cleanup() cc, err := grpc.Dial(r.Scheme()+":///test.server", grpc.WithInsecure(), grpc.WithBalancerName(roundrobin.Name)) if err != nil { t.Fatalf("failed to dial: %v", err) } defer cc.Close() testc := testpb.NewTestServiceClient(cc) // The first RPC should fail because there's no address. ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond) defer cancel() if _, err := testc.EmptyCall(ctx, &testpb.Empty{}); err == nil || status.Code(err) != codes.DeadlineExceeded { t.Fatalf("EmptyCall() = _, %v, want _, DeadlineExceeded", err) } r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: test.addresses[0]}}}) // The second RPC should succeed. ctx, cancel = context.WithTimeout(context.Background(), 2*time.Second) defer cancel() if _, err := testc.EmptyCall(ctx, &testpb.Empty{}); err != nil { t.Fatalf("EmptyCall() = _, %v, want _, nil", err) } r.UpdateState(resolver.State{Addresses: []resolver.Address{}}) var wg sync.WaitGroup for i := 0; i < 3; i++ { wg.Add(1) go func() { defer wg.Done() // This RPC blocks until NewAddress is called. testc.EmptyCall(context.Background(), &testpb.Empty{}) }() } time.Sleep(50 * time.Millisecond) r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: test.addresses[0]}}}) wg.Wait() } func TestOneServerDown(t *testing.T) { defer leakcheck.Check(t) r, cleanup := manual.GenerateAndRegisterManualResolver() defer cleanup() backendCount := 3 test, err := startTestServers(backendCount) if err != nil { t.Fatalf("failed to start servers: %v", err) } defer test.cleanup() cc, err := grpc.Dial(r.Scheme()+":///test.server", grpc.WithInsecure(), grpc.WithBalancerName(roundrobin.Name), grpc.WithWaitForHandshake()) if err != nil { t.Fatalf("failed to dial: %v", err) } defer cc.Close() testc := testpb.NewTestServiceClient(cc) // The first RPC should fail because there's no address. ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond) defer cancel() if _, err := testc.EmptyCall(ctx, &testpb.Empty{}); err == nil || status.Code(err) != codes.DeadlineExceeded { t.Fatalf("EmptyCall() = _, %v, want _, DeadlineExceeded", err) } var resolvedAddrs []resolver.Address for i := 0; i < backendCount; i++ { resolvedAddrs = append(resolvedAddrs, resolver.Address{Addr: test.addresses[i]}) } r.UpdateState(resolver.State{Addresses: resolvedAddrs}) var p peer.Peer // Make sure connections to all servers are up. 
for si := 0; si < backendCount; si++ { var connected bool for i := 0; i < 1000; i++ { if _, err := testc.EmptyCall(context.Background(), &testpb.Empty{}, grpc.Peer(&p)); err != nil { t.Fatalf("EmptyCall() = _, %v, want _, ", err) } if p.Addr.String() == test.addresses[si] { connected = true break } time.Sleep(time.Millisecond) } if !connected { t.Fatalf("Connection to %v was not up after more than 1 second", test.addresses[si]) } } for i := 0; i < 3*backendCount; i++ { if _, err := testc.EmptyCall(context.Background(), &testpb.Empty{}, grpc.Peer(&p)); err != nil { t.Fatalf("EmptyCall() = _, %v, want _, ", err) } if p.Addr.String() != test.addresses[i%backendCount] { t.Fatalf("Index %d: want peer %v, got peer %v", i, test.addresses[i%backendCount], p.Addr.String()) } } // Stop one server, RPCs should roundrobin among the remaining servers. backendCount-- test.servers[backendCount].Stop() // Loop until see server[backendCount-1] twice without seeing server[backendCount]. var targetSeen int for i := 0; i < 1000; i++ { if _, err := testc.EmptyCall(context.Background(), &testpb.Empty{}, grpc.Peer(&p)); err != nil { targetSeen = 0 t.Logf("EmptyCall() = _, %v, want _, ", err) // Due to a race, this RPC could possibly get the connection that // was closing, and this RPC may fail. Keep trying when this // happens. continue } switch p.Addr.String() { case test.addresses[backendCount-1]: targetSeen++ case test.addresses[backendCount]: // Reset targetSeen if peer is server[backendCount]. targetSeen = 0 } // Break to make sure the last picked address is server[-1], so the following for loop won't be flaky. if targetSeen >= 2 { break } } if targetSeen != 2 { t.Fatal("Failed to see server[backendCount-1] twice without seeing server[backendCount]") } for i := 0; i < 3*backendCount; i++ { if _, err := testc.EmptyCall(context.Background(), &testpb.Empty{}, grpc.Peer(&p)); err != nil { t.Fatalf("EmptyCall() = _, %v, want _, ", err) } if p.Addr.String() != test.addresses[i%backendCount] { t.Errorf("Index %d: want peer %v, got peer %v", i, test.addresses[i%backendCount], p.Addr.String()) } } } func TestAllServersDown(t *testing.T) { defer leakcheck.Check(t) r, cleanup := manual.GenerateAndRegisterManualResolver() defer cleanup() backendCount := 3 test, err := startTestServers(backendCount) if err != nil { t.Fatalf("failed to start servers: %v", err) } defer test.cleanup() cc, err := grpc.Dial(r.Scheme()+":///test.server", grpc.WithInsecure(), grpc.WithBalancerName(roundrobin.Name), grpc.WithWaitForHandshake()) if err != nil { t.Fatalf("failed to dial: %v", err) } defer cc.Close() testc := testpb.NewTestServiceClient(cc) // The first RPC should fail because there's no address. ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond) defer cancel() if _, err := testc.EmptyCall(ctx, &testpb.Empty{}); err == nil || status.Code(err) != codes.DeadlineExceeded { t.Fatalf("EmptyCall() = _, %v, want _, DeadlineExceeded", err) } var resolvedAddrs []resolver.Address for i := 0; i < backendCount; i++ { resolvedAddrs = append(resolvedAddrs, resolver.Address{Addr: test.addresses[i]}) } r.UpdateState(resolver.State{Addresses: resolvedAddrs}) var p peer.Peer // Make sure connections to all servers are up. 
for si := 0; si < backendCount; si++ { var connected bool for i := 0; i < 1000; i++ { if _, err := testc.EmptyCall(context.Background(), &testpb.Empty{}, grpc.Peer(&p)); err != nil { t.Fatalf("EmptyCall() = _, %v, want _, ", err) } if p.Addr.String() == test.addresses[si] { connected = true break } time.Sleep(time.Millisecond) } if !connected { t.Fatalf("Connection to %v was not up after more than 1 second", test.addresses[si]) } } for i := 0; i < 3*backendCount; i++ { if _, err := testc.EmptyCall(context.Background(), &testpb.Empty{}, grpc.Peer(&p)); err != nil { t.Fatalf("EmptyCall() = _, %v, want _, ", err) } if p.Addr.String() != test.addresses[i%backendCount] { t.Fatalf("Index %d: want peer %v, got peer %v", i, test.addresses[i%backendCount], p.Addr.String()) } } // All servers are stopped, failfast RPC should fail with unavailable. for i := 0; i < backendCount; i++ { test.servers[i].Stop() } time.Sleep(100 * time.Millisecond) for i := 0; i < 1000; i++ { if _, err := testc.EmptyCall(context.Background(), &testpb.Empty{}); status.Code(err) == codes.Unavailable { return } time.Sleep(time.Millisecond) } t.Fatalf("Failfast RPCs didn't fail with Unavailable after all servers are stopped") } grpc-go-1.22.1/balancer/xds/000077500000000000000000000000001351635773100155135ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/edsbalancer/000077500000000000000000000000001351635773100177565ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/edsbalancer/balancergroup.go000066400000000000000000000305631351635773100231400ustar00rootroot00000000000000/* * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package edsbalancer import ( "context" "sync" "google.golang.org/grpc/balancer" "google.golang.org/grpc/balancer/base" "google.golang.org/grpc/balancer/internal/wrr" "google.golang.org/grpc/balancer/xds/internal" orcapb "google.golang.org/grpc/balancer/xds/internal/proto/udpa/data/orca/v1/orca_load_report" "google.golang.org/grpc/balancer/xds/lrs" "google.golang.org/grpc/connectivity" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/resolver" ) type pickerState struct { weight uint32 picker balancer.Picker state connectivity.State } // balancerGroup takes a list of balancers, and make then into one balancer. // // Note that this struct doesn't implement balancer.Balancer, because it's not // intended to be used directly as a balancer. It's expected to be used as a // sub-balancer manager by a high level balancer. // // Updates from ClientConn are forwarded to sub-balancers // - service config update // - Not implemented // - address update // - subConn state change // - find the corresponding balancer and forward // // Actions from sub-balances are forwarded to parent ClientConn // - new/remove SubConn // - picker update and health states change // - sub-pickers are grouped into a group-picker // - aggregated connectivity state is the overall state of all pickers. 
// - resolveNow type balancerGroup struct { cc balancer.ClientConn mu sync.Mutex idToBalancer map[internal.Locality]balancer.Balancer scToID map[balancer.SubConn]internal.Locality loadStore lrs.Store pickerMu sync.Mutex // All balancer IDs exist as keys in this map. If an ID is not in map, it's // either removed or never added. idToPickerState map[internal.Locality]*pickerState } func newBalancerGroup(cc balancer.ClientConn, loadStore lrs.Store) *balancerGroup { return &balancerGroup{ cc: cc, scToID: make(map[balancer.SubConn]internal.Locality), idToBalancer: make(map[internal.Locality]balancer.Balancer), idToPickerState: make(map[internal.Locality]*pickerState), loadStore: loadStore, } } // add adds a balancer built by builder to the group, with given id and weight. // // weight should never be zero. func (bg *balancerGroup) add(id internal.Locality, weight uint32, builder balancer.Builder) { if weight == 0 { grpclog.Errorf("balancerGroup.add called with weight 0, locality: %v. Locality is not added to balancer group", id) return } bg.mu.Lock() if _, ok := bg.idToBalancer[id]; ok { bg.mu.Unlock() grpclog.Warningf("balancer group: adding a balancer with existing ID: %s", id) return } bg.mu.Unlock() bgcc := &balancerGroupCC{ id: id, group: bg, } b := builder.Build(bgcc, balancer.BuildOptions{}) bg.mu.Lock() bg.idToBalancer[id] = b bg.mu.Unlock() bg.pickerMu.Lock() bg.idToPickerState[id] = &pickerState{ weight: weight, // Start everything in IDLE. It's doesn't affect the overall state // because we don't count IDLE when aggregating (as opposite to e.g. // READY, 1 READY results in overall READY). state: connectivity.Idle, } bg.pickerMu.Unlock() } // remove removes the balancer with id from the group, and closes the balancer. // // It also removes the picker generated from this balancer from the picker // group. It always results in a picker update. func (bg *balancerGroup) remove(id internal.Locality) { bg.mu.Lock() // Close balancer. if b, ok := bg.idToBalancer[id]; ok { b.Close() delete(bg.idToBalancer, id) } // Remove SubConns. for sc, bid := range bg.scToID { if bid == id { bg.cc.RemoveSubConn(sc) delete(bg.scToID, sc) } } bg.mu.Unlock() bg.pickerMu.Lock() // Remove id and picker from picker map. This also results in future updates // for this ID to be ignored. delete(bg.idToPickerState, id) // Update state and picker to reflect the changes. bg.cc.UpdateBalancerState(buildPickerAndState(bg.idToPickerState)) bg.pickerMu.Unlock() } // changeWeight changes the weight of the balancer. // // newWeight should never be zero. // // NOTE: It always results in a picker update now. This probably isn't // necessary. But it seems better to do the update because it's a change in the // picker (which is balancer's snapshot). func (bg *balancerGroup) changeWeight(id internal.Locality, newWeight uint32) { if newWeight == 0 { grpclog.Errorf("balancerGroup.changeWeight called with newWeight 0. Weight is not changed") return } bg.pickerMu.Lock() defer bg.pickerMu.Unlock() pState, ok := bg.idToPickerState[id] if !ok { return } if pState.weight == newWeight { return } pState.weight = newWeight // Update state and picker to reflect the changes. bg.cc.UpdateBalancerState(buildPickerAndState(bg.idToPickerState)) } // Following are actions from the parent grpc.ClientConn, forward to sub-balancers. // SubConn state change: find the corresponding balancer and then forward. 
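// Illustrative usage sketch (not part of the original file): how a parent
// balancer might drive a balancerGroup. The locality values, weights and
// addresses are hypothetical; any registered balancer.Builder can stand in
// for rrBuilder.
//
//	bg := newBalancerGroup(parentClientConn, loadStore)
//	lid := internal.Locality{Region: "r", Zone: "z", SubZone: "a"}
//	bg.add(lid, 2, rrBuilder)          // create a sub-balancer with weight 2
//	bg.handleResolvedAddrs(lid, addrs) // push the locality's endpoints
//	bg.changeWeight(lid, 3)            // reweigh the locality
//	bg.remove(lid)                     // tear the locality down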
func (bg *balancerGroup) handleSubConnStateChange(sc balancer.SubConn, state connectivity.State) { grpclog.Infof("balancer group: handle subconn state change: %p, %v", sc, state) bg.mu.Lock() var b balancer.Balancer if id, ok := bg.scToID[sc]; ok { if state == connectivity.Shutdown { // Only delete sc from the map when state changed to Shutdown. delete(bg.scToID, sc) } b = bg.idToBalancer[id] } bg.mu.Unlock() if b == nil { grpclog.Infof("balancer group: balancer not found for sc state change") return } if ub, ok := b.(balancer.V2Balancer); ok { ub.UpdateSubConnState(sc, balancer.SubConnState{ConnectivityState: state}) } else { b.HandleSubConnStateChange(sc, state) } } // Address change: forward to balancer. func (bg *balancerGroup) handleResolvedAddrs(id internal.Locality, addrs []resolver.Address) { bg.mu.Lock() b, ok := bg.idToBalancer[id] bg.mu.Unlock() if !ok { grpclog.Infof("balancer group: balancer with id %q not found", id) return } if ub, ok := b.(balancer.V2Balancer); ok { ub.UpdateClientConnState(balancer.ClientConnState{ResolverState: resolver.State{Addresses: addrs}}) } else { b.HandleResolvedAddrs(addrs, nil) } } // TODO: handleServiceConfig() // // For BNS address for slicer, comes from endpoint.Metadata. It will be sent // from parent to sub-balancers as service config. // Following are actions from sub-balancers, forward to ClientConn. // newSubConn: forward to ClientConn, and also create a map from sc to balancer, // so state update will find the right balancer. // // One note about removing SubConn: only forward to ClientConn, but not delete // from map. Delete sc from the map only when state changes to Shutdown. Since // it's just forwarding the action, there's no need for a removeSubConn() // wrapper function. func (bg *balancerGroup) newSubConn(id internal.Locality, addrs []resolver.Address, opts balancer.NewSubConnOptions) (balancer.SubConn, error) { sc, err := bg.cc.NewSubConn(addrs, opts) if err != nil { return nil, err } bg.mu.Lock() bg.scToID[sc] = id bg.mu.Unlock() return sc, nil } // updateBalancerState: create an aggregated picker and an aggregated // connectivity state, then forward to ClientConn. func (bg *balancerGroup) updateBalancerState(id internal.Locality, state connectivity.State, picker balancer.Picker) { grpclog.Infof("balancer group: update balancer state: %v, %v, %p", id, state, picker) bg.pickerMu.Lock() defer bg.pickerMu.Unlock() pickerSt, ok := bg.idToPickerState[id] if !ok { // All state starts in IDLE. If ID is not in map, it's either removed, // or never existed. grpclog.Infof("balancer group: pickerState not found when update picker/state") return } pickerSt.picker = newLoadReportPicker(picker, id, bg.loadStore) pickerSt.state = state bg.cc.UpdateBalancerState(buildPickerAndState(bg.idToPickerState)) } func (bg *balancerGroup) close() { bg.mu.Lock() for _, b := range bg.idToBalancer { b.Close() } // Also remove all SubConns. 
for sc := range bg.scToID { bg.cc.RemoveSubConn(sc) } bg.mu.Unlock() } func buildPickerAndState(m map[internal.Locality]*pickerState) (connectivity.State, balancer.Picker) { var readyN, connectingN int readyPickerWithWeights := make([]pickerState, 0, len(m)) for _, ps := range m { switch ps.state { case connectivity.Ready: readyN++ readyPickerWithWeights = append(readyPickerWithWeights, *ps) case connectivity.Connecting: connectingN++ } } var aggregatedState connectivity.State switch { case readyN > 0: aggregatedState = connectivity.Ready case connectingN > 0: aggregatedState = connectivity.Connecting default: aggregatedState = connectivity.TransientFailure } if aggregatedState == connectivity.TransientFailure { return aggregatedState, base.NewErrPicker(balancer.ErrTransientFailure) } return aggregatedState, newPickerGroup(readyPickerWithWeights) } // RandomWRR constructor, to be modified in tests. var newRandomWRR = wrr.NewRandom type pickerGroup struct { length int w wrr.WRR } // newPickerGroup takes pickers with weights, and group them into one picker. // // Note it only takes ready pickers. The map shouldn't contain non-ready // pickers. // // TODO: (bg) confirm this is the expected behavior: non-ready balancers should // be ignored when picking. Only ready balancers are picked. func newPickerGroup(readyPickerWithWeights []pickerState) *pickerGroup { w := newRandomWRR() for _, ps := range readyPickerWithWeights { w.Add(ps.picker, int64(ps.weight)) } return &pickerGroup{ length: len(readyPickerWithWeights), w: w, } } func (pg *pickerGroup) Pick(ctx context.Context, opts balancer.PickOptions) (conn balancer.SubConn, done func(balancer.DoneInfo), err error) { if pg.length <= 0 { return nil, nil, balancer.ErrNoSubConnAvailable } p := pg.w.Next().(balancer.Picker) return p.Pick(ctx, opts) } const ( serverLoadCPUName = "cpu_utilization" serverLoadMemoryName = "mem_utilization" ) type loadReportPicker struct { balancer.Picker id internal.Locality loadStore lrs.Store } func newLoadReportPicker(p balancer.Picker, id internal.Locality, loadStore lrs.Store) *loadReportPicker { return &loadReportPicker{ Picker: p, id: id, loadStore: loadStore, } } func (lrp *loadReportPicker) Pick(ctx context.Context, opts balancer.PickOptions) (conn balancer.SubConn, done func(balancer.DoneInfo), err error) { conn, done, err = lrp.Picker.Pick(ctx, opts) if lrp.loadStore != nil && err == nil { lrp.loadStore.CallStarted(lrp.id) td := done done = func(info balancer.DoneInfo) { lrp.loadStore.CallFinished(lrp.id, info.Err) if load, ok := info.ServerLoad.(*orcapb.OrcaLoadReport); ok { lrp.loadStore.CallServerLoad(lrp.id, serverLoadCPUName, load.CpuUtilization) lrp.loadStore.CallServerLoad(lrp.id, serverLoadMemoryName, load.MemUtilization) for n, d := range load.RequestCostOrUtilization { lrp.loadStore.CallServerLoad(lrp.id, n, d) } } if td != nil { td(info) } } } return } // balancerGroupCC implements the balancer.ClientConn API and get passed to each // sub-balancer. It contains the sub-balancer ID, so the parent balancer can // keep track of SubConn/pickers and the sub-balancers they belong to. // // Some of the actions are forwarded to the parent ClientConn with no change. // Some are forward to balancer group with the sub-balancer ID. 
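// Sketch of the weighted-random selection that pickerGroup performs over the
// READY sub-pickers (illustration only; the pickers and weights below are
// made up):
//
//	w := newRandomWRR()               // wrr.NewRandom unless overridden in tests
//	w.Add(pickerA, 2)                 // locality A, weight 2
//	w.Add(pickerB, 1)                 // locality B, weight 1
//	p := w.Next().(balancer.Picker)   // A is returned roughly twice as often as B
//	sc, done, err := p.Pick(ctx, opts)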
type balancerGroupCC struct { id internal.Locality group *balancerGroup } func (bgcc *balancerGroupCC) NewSubConn(addrs []resolver.Address, opts balancer.NewSubConnOptions) (balancer.SubConn, error) { return bgcc.group.newSubConn(bgcc.id, addrs, opts) } func (bgcc *balancerGroupCC) RemoveSubConn(sc balancer.SubConn) { bgcc.group.cc.RemoveSubConn(sc) } func (bgcc *balancerGroupCC) UpdateBalancerState(state connectivity.State, picker balancer.Picker) { bgcc.group.updateBalancerState(bgcc.id, state, picker) } func (bgcc *balancerGroupCC) ResolveNow(opt resolver.ResolveNowOption) { bgcc.group.cc.ResolveNow(opt) } func (bgcc *balancerGroupCC) Target() string { return bgcc.group.cc.Target() } grpc-go-1.22.1/balancer/xds/edsbalancer/balancergroup_test.go000066400000000000000000000367721351635773100242070ustar00rootroot00000000000000/* * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package edsbalancer import ( "context" "reflect" "testing" "google.golang.org/grpc/balancer" "google.golang.org/grpc/balancer/roundrobin" "google.golang.org/grpc/balancer/xds/internal" orcapb "google.golang.org/grpc/balancer/xds/internal/proto/udpa/data/orca/v1/orca_load_report" "google.golang.org/grpc/connectivity" "google.golang.org/grpc/resolver" ) var ( rrBuilder = balancer.Get(roundrobin.Name) testBalancerIDs = []internal.Locality{{Region: "b1"}, {Region: "b2"}, {Region: "b3"}} testBackendAddrs = []resolver.Address{{Addr: "1.1.1.1:1"}, {Addr: "2.2.2.2:2"}, {Addr: "3.3.3.3:3"}, {Addr: "4.4.4.4:4"}} ) // 1 balancer, 1 backend -> 2 backends -> 1 backend. func TestBalancerGroup_OneRR_AddRemoveBackend(t *testing.T) { cc := newTestClientConn(t) bg := newBalancerGroup(cc, nil) // Add one balancer to group. bg.add(testBalancerIDs[0], 1, rrBuilder) // Send one resolved address. bg.handleResolvedAddrs(testBalancerIDs[0], testBackendAddrs[0:1]) // Send subconn state change. sc1 := <-cc.newSubConnCh bg.handleSubConnStateChange(sc1, connectivity.Connecting) bg.handleSubConnStateChange(sc1, connectivity.Ready) // Test pick with one backend. p1 := <-cc.newPickerCh for i := 0; i < 5; i++ { gotSC, _, _ := p1.Pick(context.Background(), balancer.PickOptions{}) if !reflect.DeepEqual(gotSC, sc1) { t.Fatalf("picker.Pick, got %v, want %v", gotSC, sc1) } } // Send two addresses. bg.handleResolvedAddrs(testBalancerIDs[0], testBackendAddrs[0:2]) // Expect one new subconn, send state update. sc2 := <-cc.newSubConnCh bg.handleSubConnStateChange(sc2, connectivity.Connecting) bg.handleSubConnStateChange(sc2, connectivity.Ready) // Test roundrobin pick. p2 := <-cc.newPickerCh want := []balancer.SubConn{sc1, sc2} if err := isRoundRobin(want, func() balancer.SubConn { sc, _, _ := p2.Pick(context.Background(), balancer.PickOptions{}) return sc }); err != nil { t.Fatalf("want %v, got %v", want, err) } // Remove the first address. 
bg.handleResolvedAddrs(testBalancerIDs[0], testBackendAddrs[1:2]) scToRemove := <-cc.removeSubConnCh if !reflect.DeepEqual(scToRemove, sc1) { t.Fatalf("RemoveSubConn, want %v, got %v", sc1, scToRemove) } bg.handleSubConnStateChange(scToRemove, connectivity.Shutdown) // Test pick with only the second subconn. p3 := <-cc.newPickerCh for i := 0; i < 5; i++ { gotSC, _, _ := p3.Pick(context.Background(), balancer.PickOptions{}) if !reflect.DeepEqual(gotSC, sc2) { t.Fatalf("picker.Pick, got %v, want %v", gotSC, sc2) } } } // 2 balancers, each with 1 backend. func TestBalancerGroup_TwoRR_OneBackend(t *testing.T) { cc := newTestClientConn(t) bg := newBalancerGroup(cc, nil) // Add two balancers to group and send one resolved address to both // balancers. bg.add(testBalancerIDs[0], 1, rrBuilder) bg.handleResolvedAddrs(testBalancerIDs[0], testBackendAddrs[0:1]) sc1 := <-cc.newSubConnCh bg.add(testBalancerIDs[1], 1, rrBuilder) bg.handleResolvedAddrs(testBalancerIDs[1], testBackendAddrs[0:1]) sc2 := <-cc.newSubConnCh // Send state changes for both subconns. bg.handleSubConnStateChange(sc1, connectivity.Connecting) bg.handleSubConnStateChange(sc1, connectivity.Ready) bg.handleSubConnStateChange(sc2, connectivity.Connecting) bg.handleSubConnStateChange(sc2, connectivity.Ready) // Test roundrobin on the last picker. p1 := <-cc.newPickerCh want := []balancer.SubConn{sc1, sc2} if err := isRoundRobin(want, func() balancer.SubConn { sc, _, _ := p1.Pick(context.Background(), balancer.PickOptions{}) return sc }); err != nil { t.Fatalf("want %v, got %v", want, err) } } // 2 balancers, each with more than 1 backends. func TestBalancerGroup_TwoRR_MoreBackends(t *testing.T) { cc := newTestClientConn(t) bg := newBalancerGroup(cc, nil) // Add two balancers to group and send one resolved address to both // balancers. bg.add(testBalancerIDs[0], 1, rrBuilder) bg.handleResolvedAddrs(testBalancerIDs[0], testBackendAddrs[0:2]) sc1 := <-cc.newSubConnCh sc2 := <-cc.newSubConnCh bg.add(testBalancerIDs[1], 1, rrBuilder) bg.handleResolvedAddrs(testBalancerIDs[1], testBackendAddrs[2:4]) sc3 := <-cc.newSubConnCh sc4 := <-cc.newSubConnCh // Send state changes for both subconns. bg.handleSubConnStateChange(sc1, connectivity.Connecting) bg.handleSubConnStateChange(sc1, connectivity.Ready) bg.handleSubConnStateChange(sc2, connectivity.Connecting) bg.handleSubConnStateChange(sc2, connectivity.Ready) bg.handleSubConnStateChange(sc3, connectivity.Connecting) bg.handleSubConnStateChange(sc3, connectivity.Ready) bg.handleSubConnStateChange(sc4, connectivity.Connecting) bg.handleSubConnStateChange(sc4, connectivity.Ready) // Test roundrobin on the last picker. p1 := <-cc.newPickerCh want := []balancer.SubConn{sc1, sc2, sc3, sc4} if err := isRoundRobin(want, func() balancer.SubConn { sc, _, _ := p1.Pick(context.Background(), balancer.PickOptions{}) return sc }); err != nil { t.Fatalf("want %v, got %v", want, err) } // Turn sc2's connection down, should be RR between balancers. bg.handleSubConnStateChange(sc2, connectivity.TransientFailure) p2 := <-cc.newPickerCh // Expect two sc1's in the result, because balancer1 will be picked twice, // but there's only one sc in it. want = []balancer.SubConn{sc1, sc1, sc3, sc4} if err := isRoundRobin(want, func() balancer.SubConn { sc, _, _ := p2.Pick(context.Background(), balancer.PickOptions{}) return sc }); err != nil { t.Fatalf("want %v, got %v", want, err) } // Remove sc3's addresses. 
bg.handleResolvedAddrs(testBalancerIDs[1], testBackendAddrs[3:4]) scToRemove := <-cc.removeSubConnCh if !reflect.DeepEqual(scToRemove, sc3) { t.Fatalf("RemoveSubConn, want %v, got %v", sc3, scToRemove) } bg.handleSubConnStateChange(scToRemove, connectivity.Shutdown) p3 := <-cc.newPickerCh want = []balancer.SubConn{sc1, sc4} if err := isRoundRobin(want, func() balancer.SubConn { sc, _, _ := p3.Pick(context.Background(), balancer.PickOptions{}) return sc }); err != nil { t.Fatalf("want %v, got %v", want, err) } // Turn sc1's connection down. bg.handleSubConnStateChange(sc1, connectivity.TransientFailure) p4 := <-cc.newPickerCh want = []balancer.SubConn{sc4} if err := isRoundRobin(want, func() balancer.SubConn { sc, _, _ := p4.Pick(context.Background(), balancer.PickOptions{}) return sc }); err != nil { t.Fatalf("want %v, got %v", want, err) } // Turn last connection to connecting. bg.handleSubConnStateChange(sc4, connectivity.Connecting) p5 := <-cc.newPickerCh for i := 0; i < 5; i++ { if _, _, err := p5.Pick(context.Background(), balancer.PickOptions{}); err != balancer.ErrNoSubConnAvailable { t.Fatalf("want pick error %v, got %v", balancer.ErrNoSubConnAvailable, err) } } // Turn all connections down. bg.handleSubConnStateChange(sc4, connectivity.TransientFailure) p6 := <-cc.newPickerCh for i := 0; i < 5; i++ { if _, _, err := p6.Pick(context.Background(), balancer.PickOptions{}); err != balancer.ErrTransientFailure { t.Fatalf("want pick error %v, got %v", balancer.ErrTransientFailure, err) } } } // 2 balancers with different weights. func TestBalancerGroup_TwoRR_DifferentWeight_MoreBackends(t *testing.T) { cc := newTestClientConn(t) bg := newBalancerGroup(cc, nil) // Add two balancers to group and send two resolved addresses to both // balancers. bg.add(testBalancerIDs[0], 2, rrBuilder) bg.handleResolvedAddrs(testBalancerIDs[0], testBackendAddrs[0:2]) sc1 := <-cc.newSubConnCh sc2 := <-cc.newSubConnCh bg.add(testBalancerIDs[1], 1, rrBuilder) bg.handleResolvedAddrs(testBalancerIDs[1], testBackendAddrs[2:4]) sc3 := <-cc.newSubConnCh sc4 := <-cc.newSubConnCh // Send state changes for both subconns. bg.handleSubConnStateChange(sc1, connectivity.Connecting) bg.handleSubConnStateChange(sc1, connectivity.Ready) bg.handleSubConnStateChange(sc2, connectivity.Connecting) bg.handleSubConnStateChange(sc2, connectivity.Ready) bg.handleSubConnStateChange(sc3, connectivity.Connecting) bg.handleSubConnStateChange(sc3, connectivity.Ready) bg.handleSubConnStateChange(sc4, connectivity.Connecting) bg.handleSubConnStateChange(sc4, connectivity.Ready) // Test roundrobin on the last picker. p1 := <-cc.newPickerCh want := []balancer.SubConn{sc1, sc1, sc2, sc2, sc3, sc4} if err := isRoundRobin(want, func() balancer.SubConn { sc, _, _ := p1.Pick(context.Background(), balancer.PickOptions{}) return sc }); err != nil { t.Fatalf("want %v, got %v", want, err) } } // totally 3 balancers, add/remove balancer. func TestBalancerGroup_ThreeRR_RemoveBalancer(t *testing.T) { cc := newTestClientConn(t) bg := newBalancerGroup(cc, nil) // Add three balancers to group and send one resolved address to both // balancers. 
bg.add(testBalancerIDs[0], 1, rrBuilder) bg.handleResolvedAddrs(testBalancerIDs[0], testBackendAddrs[0:1]) sc1 := <-cc.newSubConnCh bg.add(testBalancerIDs[1], 1, rrBuilder) bg.handleResolvedAddrs(testBalancerIDs[1], testBackendAddrs[1:2]) sc2 := <-cc.newSubConnCh bg.add(testBalancerIDs[2], 1, rrBuilder) bg.handleResolvedAddrs(testBalancerIDs[2], testBackendAddrs[1:2]) sc3 := <-cc.newSubConnCh // Send state changes for both subconns. bg.handleSubConnStateChange(sc1, connectivity.Connecting) bg.handleSubConnStateChange(sc1, connectivity.Ready) bg.handleSubConnStateChange(sc2, connectivity.Connecting) bg.handleSubConnStateChange(sc2, connectivity.Ready) bg.handleSubConnStateChange(sc3, connectivity.Connecting) bg.handleSubConnStateChange(sc3, connectivity.Ready) p1 := <-cc.newPickerCh want := []balancer.SubConn{sc1, sc2, sc3} if err := isRoundRobin(want, func() balancer.SubConn { sc, _, _ := p1.Pick(context.Background(), balancer.PickOptions{}) return sc }); err != nil { t.Fatalf("want %v, got %v", want, err) } // Remove the second balancer, while the others two are ready. bg.remove(testBalancerIDs[1]) scToRemove := <-cc.removeSubConnCh if !reflect.DeepEqual(scToRemove, sc2) { t.Fatalf("RemoveSubConn, want %v, got %v", sc2, scToRemove) } p2 := <-cc.newPickerCh want = []balancer.SubConn{sc1, sc3} if err := isRoundRobin(want, func() balancer.SubConn { sc, _, _ := p2.Pick(context.Background(), balancer.PickOptions{}) return sc }); err != nil { t.Fatalf("want %v, got %v", want, err) } // move balancer 3 into transient failure. bg.handleSubConnStateChange(sc3, connectivity.TransientFailure) // Remove the first balancer, while the third is transient failure. bg.remove(testBalancerIDs[0]) scToRemove = <-cc.removeSubConnCh if !reflect.DeepEqual(scToRemove, sc1) { t.Fatalf("RemoveSubConn, want %v, got %v", sc1, scToRemove) } p3 := <-cc.newPickerCh for i := 0; i < 5; i++ { if _, _, err := p3.Pick(context.Background(), balancer.PickOptions{}); err != balancer.ErrTransientFailure { t.Fatalf("want pick error %v, got %v", balancer.ErrTransientFailure, err) } } } // 2 balancers, change balancer weight. func TestBalancerGroup_TwoRR_ChangeWeight_MoreBackends(t *testing.T) { cc := newTestClientConn(t) bg := newBalancerGroup(cc, nil) // Add two balancers to group and send two resolved addresses to both // balancers. bg.add(testBalancerIDs[0], 2, rrBuilder) bg.handleResolvedAddrs(testBalancerIDs[0], testBackendAddrs[0:2]) sc1 := <-cc.newSubConnCh sc2 := <-cc.newSubConnCh bg.add(testBalancerIDs[1], 1, rrBuilder) bg.handleResolvedAddrs(testBalancerIDs[1], testBackendAddrs[2:4]) sc3 := <-cc.newSubConnCh sc4 := <-cc.newSubConnCh // Send state changes for both subconns. bg.handleSubConnStateChange(sc1, connectivity.Connecting) bg.handleSubConnStateChange(sc1, connectivity.Ready) bg.handleSubConnStateChange(sc2, connectivity.Connecting) bg.handleSubConnStateChange(sc2, connectivity.Ready) bg.handleSubConnStateChange(sc3, connectivity.Connecting) bg.handleSubConnStateChange(sc3, connectivity.Ready) bg.handleSubConnStateChange(sc4, connectivity.Connecting) bg.handleSubConnStateChange(sc4, connectivity.Ready) // Test roundrobin on the last picker. p1 := <-cc.newPickerCh want := []balancer.SubConn{sc1, sc1, sc2, sc2, sc3, sc4} if err := isRoundRobin(want, func() balancer.SubConn { sc, _, _ := p1.Pick(context.Background(), balancer.PickOptions{}) return sc }); err != nil { t.Fatalf("want %v, got %v", want, err) } bg.changeWeight(testBalancerIDs[0], 3) // Test roundrobin with new weight. 
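// With weights 3 (locality 0: sc1, sc2) and 1 (locality 1: sc3, sc4), one
// weighted round-robin cycle should hand six picks to locality 0 and two to
// locality 1, i.e. sc1 and sc2 three times each and sc3 and sc4 once each,
// which is the sequence asserted below (in any order).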
p2 := <-cc.newPickerCh want = []balancer.SubConn{sc1, sc1, sc1, sc2, sc2, sc2, sc3, sc4} if err := isRoundRobin(want, func() balancer.SubConn { sc, _, _ := p2.Pick(context.Background(), balancer.PickOptions{}) return sc }); err != nil { t.Fatalf("want %v, got %v", want, err) } } func TestBalancerGroup_LoadReport(t *testing.T) { testLoadStore := newTestLoadStore() cc := newTestClientConn(t) bg := newBalancerGroup(cc, testLoadStore) backendToBalancerID := make(map[balancer.SubConn]internal.Locality) // Add two balancers to group and send two resolved addresses to both // balancers. bg.add(testBalancerIDs[0], 2, rrBuilder) bg.handleResolvedAddrs(testBalancerIDs[0], testBackendAddrs[0:2]) sc1 := <-cc.newSubConnCh sc2 := <-cc.newSubConnCh backendToBalancerID[sc1] = testBalancerIDs[0] backendToBalancerID[sc2] = testBalancerIDs[0] bg.add(testBalancerIDs[1], 1, rrBuilder) bg.handleResolvedAddrs(testBalancerIDs[1], testBackendAddrs[2:4]) sc3 := <-cc.newSubConnCh sc4 := <-cc.newSubConnCh backendToBalancerID[sc3] = testBalancerIDs[1] backendToBalancerID[sc4] = testBalancerIDs[1] // Send state changes for both subconns. bg.handleSubConnStateChange(sc1, connectivity.Connecting) bg.handleSubConnStateChange(sc1, connectivity.Ready) bg.handleSubConnStateChange(sc2, connectivity.Connecting) bg.handleSubConnStateChange(sc2, connectivity.Ready) bg.handleSubConnStateChange(sc3, connectivity.Connecting) bg.handleSubConnStateChange(sc3, connectivity.Ready) bg.handleSubConnStateChange(sc4, connectivity.Connecting) bg.handleSubConnStateChange(sc4, connectivity.Ready) // Test roundrobin on the last picker. p1 := <-cc.newPickerCh var ( wantStart []internal.Locality wantEnd []internal.Locality wantCost []testServerLoad ) for i := 0; i < 10; i++ { sc, done, _ := p1.Pick(context.Background(), balancer.PickOptions{}) locality := backendToBalancerID[sc] wantStart = append(wantStart, locality) if done != nil && sc != sc1 { done(balancer.DoneInfo{ ServerLoad: &orcapb.OrcaLoadReport{ CpuUtilization: 10, MemUtilization: 5, RequestCostOrUtilization: map[string]float64{"pi": 3.14}, }, }) wantEnd = append(wantEnd, locality) wantCost = append(wantCost, testServerLoad{name: serverLoadCPUName, d: 10}, testServerLoad{name: serverLoadMemoryName, d: 5}, testServerLoad{name: "pi", d: 3.14}) } } if !reflect.DeepEqual(testLoadStore.callsStarted, wantStart) { t.Fatalf("want started: %v, got: %v", testLoadStore.callsStarted, wantStart) } if !reflect.DeepEqual(testLoadStore.callsEnded, wantEnd) { t.Fatalf("want ended: %v, got: %v", testLoadStore.callsEnded, wantEnd) } if !reflect.DeepEqual(testLoadStore.callsCost, wantCost) { t.Fatalf("want cost: %v, got: %v", testLoadStore.callsCost, wantCost) } } grpc-go-1.22.1/balancer/xds/edsbalancer/edsbalancer.go000066400000000000000000000255441351635773100225620ustar00rootroot00000000000000/* * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ // Package edsbalancer implements a balancer to handle EDS responses. 
package edsbalancer import ( "context" "encoding/json" "net" "reflect" "strconv" "sync" "google.golang.org/grpc/balancer" "google.golang.org/grpc/balancer/roundrobin" "google.golang.org/grpc/balancer/xds/internal" edspb "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/eds" endpointpb "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/endpoint/endpoint" percentpb "google.golang.org/grpc/balancer/xds/internal/proto/envoy/type/percent" "google.golang.org/grpc/balancer/xds/lrs" "google.golang.org/grpc/codes" "google.golang.org/grpc/connectivity" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/resolver" "google.golang.org/grpc/status" ) type localityConfig struct { weight uint32 addrs []resolver.Address } // EDSBalancer does load balancing based on the EDS responses. Note that it // doesn't implement the balancer interface. It's intended to be used by a high // level balancer implementation. // // The localities are picked as weighted round robin. A configurable child // policy is used to manage endpoints in each locality. type EDSBalancer struct { balancer.ClientConn bg *balancerGroup subBalancerBuilder balancer.Builder lidToConfig map[internal.Locality]*localityConfig loadStore lrs.Store pickerMu sync.Mutex drops []*dropper innerPicker balancer.Picker // The picker without drop support. innerState connectivity.State // The state of the picker. } // NewXDSBalancer create a new EDSBalancer. func NewXDSBalancer(cc balancer.ClientConn, loadStore lrs.Store) *EDSBalancer { xdsB := &EDSBalancer{ ClientConn: cc, subBalancerBuilder: balancer.Get(roundrobin.Name), lidToConfig: make(map[internal.Locality]*localityConfig), loadStore: loadStore, } // Don't start balancer group here. Start it when handling the first EDS // response. Otherwise the balancer group will be started with round-robin, // and if users specify a different sub-balancer, all balancers in balancer // group will be closed and recreated when sub-balancer update happens. return xdsB } // HandleChildPolicy updates the child balancers handling endpoints. Child // policy is roundrobin by default. If the specified balancer is not installed, // the old child balancer will be used. // // HandleChildPolicy and HandleEDSResponse must be called by the same goroutine. func (xdsB *EDSBalancer) HandleChildPolicy(name string, config json.RawMessage) { // name could come from cdsResp.GetLbPolicy().String(). LbPolicy.String() // are all UPPER_CASE with underscore. // // No conversion is needed here because balancer package converts all names // into lower_case before registering/looking up. xdsB.updateSubBalancerName(name) // TODO: (eds) send balancer config to the new child balancers. } func (xdsB *EDSBalancer) updateSubBalancerName(subBalancerName string) { if xdsB.subBalancerBuilder.Name() == subBalancerName { return } newSubBalancerBuilder := balancer.Get(subBalancerName) if newSubBalancerBuilder == nil { grpclog.Infof("EDSBalancer: failed to find balancer with name %q, keep using %q", subBalancerName, xdsB.subBalancerBuilder.Name()) return } xdsB.subBalancerBuilder = newSubBalancerBuilder if xdsB.bg != nil { // xdsB.bg == nil until the first EDS response is handled. There's no // need to update balancer group before that. for id, config := range xdsB.lidToConfig { // TODO: (eds) add support to balancer group to support smoothly // switching sub-balancers (keep old balancer around until new // balancer becomes ready). 
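// Until smooth switching is supported, changing the child policy below simply
// rebuilds each locality: the old sub-balancer is removed, a new one is added
// with the new builder (keeping the locality's existing weight), and the
// locality's addresses are replayed so the new sub-balancer can create its
// SubConns.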
xdsB.bg.remove(id) xdsB.bg.add(id, config.weight, xdsB.subBalancerBuilder) xdsB.bg.handleResolvedAddrs(id, config.addrs) } } } // updateDrops compares new drop policies with the old. If they are different, // it updates the drop policies and send ClientConn an updated picker. func (xdsB *EDSBalancer) updateDrops(dropPolicies []*edspb.ClusterLoadAssignment_Policy_DropOverload) { var ( newDrops []*dropper dropsChanged bool ) for i, dropPolicy := range dropPolicies { percentage := dropPolicy.GetDropPercentage() var ( numerator = percentage.GetNumerator() denominator uint32 ) switch percentage.GetDenominator() { case percentpb.FractionalPercent_HUNDRED: denominator = 100 case percentpb.FractionalPercent_TEN_THOUSAND: denominator = 10000 case percentpb.FractionalPercent_MILLION: denominator = 1000000 } newDrops = append(newDrops, newDropper(numerator, denominator, dropPolicy.GetCategory())) // The following reading xdsB.drops doesn't need mutex because it can only // be updated by the code following. if dropsChanged { continue } if i >= len(xdsB.drops) { dropsChanged = true continue } if oldDrop := xdsB.drops[i]; numerator != oldDrop.numerator || denominator != oldDrop.denominator { dropsChanged = true } } if dropsChanged { xdsB.pickerMu.Lock() xdsB.drops = newDrops if xdsB.innerPicker != nil { // Update picker with old inner picker, new drops. xdsB.ClientConn.UpdateBalancerState(xdsB.innerState, newDropPicker(xdsB.innerPicker, newDrops, xdsB.loadStore)) } xdsB.pickerMu.Unlock() } } // HandleEDSResponse handles the EDS response and creates/deletes localities and // SubConns. It also handles drops. // // HandleCDSResponse and HandleEDSResponse must be called by the same goroutine. func (xdsB *EDSBalancer) HandleEDSResponse(edsResp *edspb.ClusterLoadAssignment) { // Create balancer group if it's never created (this is the first EDS // response). if xdsB.bg == nil { xdsB.bg = newBalancerGroup(xdsB, xdsB.loadStore) } // TODO: Unhandled fields from EDS response: // - edsResp.GetPolicy().GetOverprovisioningFactor() // - locality.GetPriority() // - lbEndpoint.GetMetadata(): contains BNS name, send to sub-balancers // - as service config or as resolved address // - if socketAddress is not ip:port // - socketAddress.GetNamedPort(), socketAddress.GetResolverName() // - resolve endpoint's name with another resolver xdsB.updateDrops(edsResp.GetPolicy().GetDropOverloads()) // Filter out all localities with weight 0. // // Locality weighted load balancer can be enabled by setting an option in // CDS, and the weight of each locality. Currently, without the guarantee // that CDS is always sent, we assume locality weighted load balance is // always enabled, and ignore all weight 0 localities. // // In the future, we should look at the config in CDS response and decide // whether locality weight matters. newEndpoints := make([]*endpointpb.LocalityLbEndpoints, 0, len(edsResp.Endpoints)) for _, locality := range edsResp.Endpoints { if locality.GetLoadBalancingWeight().GetValue() == 0 { continue } newEndpoints = append(newEndpoints, locality) } // newLocalitiesSet contains all names of localitis in the new EDS response. // It's used to delete localities that are removed in the new EDS response. newLocalitiesSet := make(map[internal.Locality]struct{}) for _, locality := range newEndpoints { // One balancer for each locality. 
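// For each locality kept from the new response, the loop below builds the
// internal.Locality key, joins every endpoint's IP and port into a
// resolver.Address, and then either registers a new sub-balancer with the
// balancer group or forwards weight/address changes to the existing one.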
l := locality.GetLocality() if l == nil { grpclog.Warningf("xds: received LocalityLbEndpoints with Locality") continue } lid := internal.Locality{ Region: l.Region, Zone: l.Zone, SubZone: l.SubZone, } newLocalitiesSet[lid] = struct{}{} newWeight := locality.GetLoadBalancingWeight().GetValue() var newAddrs []resolver.Address for _, lbEndpoint := range locality.GetLbEndpoints() { socketAddress := lbEndpoint.GetEndpoint().GetAddress().GetSocketAddress() newAddrs = append(newAddrs, resolver.Address{ Addr: net.JoinHostPort(socketAddress.GetAddress(), strconv.Itoa(int(socketAddress.GetPortValue()))), }) } var weightChanged, addrsChanged bool config, ok := xdsB.lidToConfig[lid] if !ok { // A new balancer, add it to balancer group and balancer map. xdsB.bg.add(lid, newWeight, xdsB.subBalancerBuilder) config = &localityConfig{ weight: newWeight, } xdsB.lidToConfig[lid] = config // weightChanged is false for new locality, because there's no need to // update weight in bg. addrsChanged = true } else { // Compare weight and addrs. if config.weight != newWeight { weightChanged = true } if !reflect.DeepEqual(config.addrs, newAddrs) { addrsChanged = true } } if weightChanged { config.weight = newWeight xdsB.bg.changeWeight(lid, newWeight) } if addrsChanged { config.addrs = newAddrs xdsB.bg.handleResolvedAddrs(lid, newAddrs) } } // Delete localities that are removed in the latest response. for lid := range xdsB.lidToConfig { if _, ok := newLocalitiesSet[lid]; !ok { xdsB.bg.remove(lid) delete(xdsB.lidToConfig, lid) } } } // HandleSubConnStateChange handles the state change and update pickers accordingly. func (xdsB *EDSBalancer) HandleSubConnStateChange(sc balancer.SubConn, s connectivity.State) { xdsB.bg.handleSubConnStateChange(sc, s) } // UpdateBalancerState overrides balancer.ClientConn to wrap the picker in a // dropPicker. func (xdsB *EDSBalancer) UpdateBalancerState(s connectivity.State, p balancer.Picker) { xdsB.pickerMu.Lock() defer xdsB.pickerMu.Unlock() xdsB.innerPicker = p xdsB.innerState = s // Don't reset drops when it's a state change. xdsB.ClientConn.UpdateBalancerState(s, newDropPicker(p, xdsB.drops, xdsB.loadStore)) } // Close closes the balancer. func (xdsB *EDSBalancer) Close() { if xdsB.bg != nil { xdsB.bg.close() } } type dropPicker struct { drops []*dropper p balancer.Picker loadStore lrs.Store } func newDropPicker(p balancer.Picker, drops []*dropper, loadStore lrs.Store) *dropPicker { return &dropPicker{ drops: drops, p: p, loadStore: loadStore, } } func (d *dropPicker) Pick(ctx context.Context, opts balancer.PickOptions) (conn balancer.SubConn, done func(balancer.DoneInfo), err error) { var ( drop bool category string ) for _, dp := range d.drops { if dp.drop() { drop = true category = dp.category break } } if drop { if d.loadStore != nil { d.loadStore.CallDropped(category) } return nil, nil, status.Errorf(codes.Unavailable, "RPC is dropped") } // TODO: (eds) don't drop unless the inner picker is READY. Similar to // https://github.com/grpc/grpc-go/issues/2622. return d.p.Pick(ctx, opts) } grpc-go-1.22.1/balancer/xds/edsbalancer/edsbalancer_test.go000066400000000000000000000462111351635773100236130ustar00rootroot00000000000000/* * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package edsbalancer import ( "context" "fmt" "net" "reflect" "strconv" "testing" typespb "github.com/golang/protobuf/ptypes/wrappers" "google.golang.org/grpc/balancer" "google.golang.org/grpc/balancer/roundrobin" "google.golang.org/grpc/balancer/xds/internal" addresspb "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/core/address" basepb "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/core/base" edspb "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/eds" endpointpb "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/endpoint/endpoint" percentpb "google.golang.org/grpc/balancer/xds/internal/proto/envoy/type/percent" "google.golang.org/grpc/connectivity" "google.golang.org/grpc/resolver" ) var ( testClusterNames = []string{"test-cluster-1", "test-cluster-2"} testSubZones = []string{"I", "II", "III", "IV"} testEndpointAddrs = []string{"1.1.1.1:1", "2.2.2.2:2", "3.3.3.3:3", "4.4.4.4:4"} ) type clusterLoadAssignmentBuilder struct { v *edspb.ClusterLoadAssignment } func newClusterLoadAssignmentBuilder(clusterName string, dropPercents []uint32) *clusterLoadAssignmentBuilder { var drops []*edspb.ClusterLoadAssignment_Policy_DropOverload for i, d := range dropPercents { drops = append(drops, &edspb.ClusterLoadAssignment_Policy_DropOverload{ Category: fmt.Sprintf("test-drop-%d", i), DropPercentage: &percentpb.FractionalPercent{ Numerator: d, Denominator: percentpb.FractionalPercent_HUNDRED, }, }) } return &clusterLoadAssignmentBuilder{ v: &edspb.ClusterLoadAssignment{ ClusterName: clusterName, Policy: &edspb.ClusterLoadAssignment_Policy{ DropOverloads: drops, }, }, } } func (clab *clusterLoadAssignmentBuilder) addLocality(subzone string, weight uint32, addrsWithPort []string) { var lbEndPoints []*endpointpb.LbEndpoint for _, a := range addrsWithPort { host, portStr, err := net.SplitHostPort(a) if err != nil { panic("failed to split " + a) } port, err := strconv.Atoi(portStr) if err != nil { panic("failed to atoi " + portStr) } lbEndPoints = append(lbEndPoints, &endpointpb.LbEndpoint{ HostIdentifier: &endpointpb.LbEndpoint_Endpoint{ Endpoint: &endpointpb.Endpoint{ Address: &addresspb.Address{ Address: &addresspb.Address_SocketAddress{ SocketAddress: &addresspb.SocketAddress{ Protocol: addresspb.SocketAddress_TCP, Address: host, PortSpecifier: &addresspb.SocketAddress_PortValue{ PortValue: uint32(port)}}}}}}}, ) } clab.v.Endpoints = append(clab.v.Endpoints, &endpointpb.LocalityLbEndpoints{ Locality: &basepb.Locality{ Region: "", Zone: "", SubZone: subzone, }, LbEndpoints: lbEndPoints, LoadBalancingWeight: &typespb.UInt32Value{Value: weight}, }) } func (clab *clusterLoadAssignmentBuilder) build() *edspb.ClusterLoadAssignment { return clab.v } // One locality // - add backend // - remove backend // - replace backend // - change drop rate func TestEDS_OneLocality(t *testing.T) { cc := newTestClientConn(t) edsb := NewXDSBalancer(cc, nil) // One locality with one backend. 
clab1 := newClusterLoadAssignmentBuilder(testClusterNames[0], nil) clab1.addLocality(testSubZones[0], 1, testEndpointAddrs[:1]) edsb.HandleEDSResponse(clab1.build()) sc1 := <-cc.newSubConnCh edsb.HandleSubConnStateChange(sc1, connectivity.Connecting) edsb.HandleSubConnStateChange(sc1, connectivity.Ready) // Pick with only the first backend. p1 := <-cc.newPickerCh for i := 0; i < 5; i++ { gotSC, _, _ := p1.Pick(context.Background(), balancer.PickOptions{}) if !reflect.DeepEqual(gotSC, sc1) { t.Fatalf("picker.Pick, got %v, want %v", gotSC, sc1) } } // The same locality, add one more backend. clab2 := newClusterLoadAssignmentBuilder(testClusterNames[0], nil) clab2.addLocality(testSubZones[0], 1, testEndpointAddrs[:2]) edsb.HandleEDSResponse(clab2.build()) sc2 := <-cc.newSubConnCh edsb.HandleSubConnStateChange(sc2, connectivity.Connecting) edsb.HandleSubConnStateChange(sc2, connectivity.Ready) // Test roundrobin with two subconns. p2 := <-cc.newPickerCh want := []balancer.SubConn{sc1, sc2} if err := isRoundRobin(want, func() balancer.SubConn { sc, _, _ := p2.Pick(context.Background(), balancer.PickOptions{}) return sc }); err != nil { t.Fatalf("want %v, got %v", want, err) } // The same locality, delete first backend. clab3 := newClusterLoadAssignmentBuilder(testClusterNames[0], nil) clab3.addLocality(testSubZones[0], 1, testEndpointAddrs[1:2]) edsb.HandleEDSResponse(clab3.build()) scToRemove := <-cc.removeSubConnCh if !reflect.DeepEqual(scToRemove, sc1) { t.Fatalf("RemoveSubConn, want %v, got %v", sc1, scToRemove) } edsb.HandleSubConnStateChange(scToRemove, connectivity.Shutdown) // Test pick with only the second subconn. p3 := <-cc.newPickerCh for i := 0; i < 5; i++ { gotSC, _, _ := p3.Pick(context.Background(), balancer.PickOptions{}) if !reflect.DeepEqual(gotSC, sc2) { t.Fatalf("picker.Pick, got %v, want %v", gotSC, sc2) } } // The same locality, replace backend. clab4 := newClusterLoadAssignmentBuilder(testClusterNames[0], nil) clab4.addLocality(testSubZones[0], 1, testEndpointAddrs[2:3]) edsb.HandleEDSResponse(clab4.build()) sc3 := <-cc.newSubConnCh edsb.HandleSubConnStateChange(sc3, connectivity.Connecting) edsb.HandleSubConnStateChange(sc3, connectivity.Ready) scToRemove = <-cc.removeSubConnCh if !reflect.DeepEqual(scToRemove, sc2) { t.Fatalf("RemoveSubConn, want %v, got %v", sc2, scToRemove) } edsb.HandleSubConnStateChange(scToRemove, connectivity.Shutdown) // Test pick with only the third subconn. p4 := <-cc.newPickerCh for i := 0; i < 5; i++ { gotSC, _, _ := p4.Pick(context.Background(), balancer.PickOptions{}) if !reflect.DeepEqual(gotSC, sc3) { t.Fatalf("picker.Pick, got %v, want %v", gotSC, sc3) } } // The same locality, different drop rate, dropping 50%. clab5 := newClusterLoadAssignmentBuilder(testClusterNames[0], []uint32{50}) clab5.addLocality(testSubZones[0], 1, testEndpointAddrs[2:3]) edsb.HandleEDSResponse(clab5.build()) // Picks with drops. p5 := <-cc.newPickerCh for i := 0; i < 100; i++ { _, _, err := p5.Pick(context.Background(), balancer.PickOptions{}) // TODO: the dropping algorithm needs a design. When the dropping algorithm // is fixed, this test also needs fix. 
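// The assertion below assumes the deterministic testWRR (see util_test.go)
// has been installed in place of the random WRR, so a dropper built as
// {drop: 50, keep: 50} drops exactly the first half of the picks and lets
// the second half through.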
if i < 50 && err == nil { t.Errorf("The first 50%% picks should be drops, got error ") } else if i > 50 && err != nil { t.Errorf("The second 50%% picks should be non-drops, got error %v", err) } } } // 2 locality // - start with 2 locality // - add locality // - remove locality // - address change for the locality // - update locality weight func TestEDS_TwoLocalities(t *testing.T) { cc := newTestClientConn(t) edsb := NewXDSBalancer(cc, nil) // Two localities, each with one backend. clab1 := newClusterLoadAssignmentBuilder(testClusterNames[0], nil) clab1.addLocality(testSubZones[0], 1, testEndpointAddrs[:1]) clab1.addLocality(testSubZones[1], 1, testEndpointAddrs[1:2]) edsb.HandleEDSResponse(clab1.build()) sc1 := <-cc.newSubConnCh edsb.HandleSubConnStateChange(sc1, connectivity.Connecting) edsb.HandleSubConnStateChange(sc1, connectivity.Ready) sc2 := <-cc.newSubConnCh edsb.HandleSubConnStateChange(sc2, connectivity.Connecting) edsb.HandleSubConnStateChange(sc2, connectivity.Ready) // Test roundrobin with two subconns. p1 := <-cc.newPickerCh want := []balancer.SubConn{sc1, sc2} if err := isRoundRobin(want, func() balancer.SubConn { sc, _, _ := p1.Pick(context.Background(), balancer.PickOptions{}) return sc }); err != nil { t.Fatalf("want %v, got %v", want, err) } // Add another locality, with one backend. clab2 := newClusterLoadAssignmentBuilder(testClusterNames[0], nil) clab2.addLocality(testSubZones[0], 1, testEndpointAddrs[:1]) clab2.addLocality(testSubZones[1], 1, testEndpointAddrs[1:2]) clab2.addLocality(testSubZones[2], 1, testEndpointAddrs[2:3]) edsb.HandleEDSResponse(clab2.build()) sc3 := <-cc.newSubConnCh edsb.HandleSubConnStateChange(sc3, connectivity.Connecting) edsb.HandleSubConnStateChange(sc3, connectivity.Ready) // Test roundrobin with three subconns. p2 := <-cc.newPickerCh want = []balancer.SubConn{sc1, sc2, sc3} if err := isRoundRobin(want, func() balancer.SubConn { sc, _, _ := p2.Pick(context.Background(), balancer.PickOptions{}) return sc }); err != nil { t.Fatalf("want %v, got %v", want, err) } // Remove first locality. clab3 := newClusterLoadAssignmentBuilder(testClusterNames[0], nil) clab3.addLocality(testSubZones[1], 1, testEndpointAddrs[1:2]) clab3.addLocality(testSubZones[2], 1, testEndpointAddrs[2:3]) edsb.HandleEDSResponse(clab3.build()) scToRemove := <-cc.removeSubConnCh if !reflect.DeepEqual(scToRemove, sc1) { t.Fatalf("RemoveSubConn, want %v, got %v", sc1, scToRemove) } edsb.HandleSubConnStateChange(scToRemove, connectivity.Shutdown) // Test pick with two subconns (without the first one). p3 := <-cc.newPickerCh want = []balancer.SubConn{sc2, sc3} if err := isRoundRobin(want, func() balancer.SubConn { sc, _, _ := p3.Pick(context.Background(), balancer.PickOptions{}) return sc }); err != nil { t.Fatalf("want %v, got %v", want, err) } // Add a backend to the last locality. clab4 := newClusterLoadAssignmentBuilder(testClusterNames[0], nil) clab4.addLocality(testSubZones[1], 1, testEndpointAddrs[1:2]) clab4.addLocality(testSubZones[2], 1, testEndpointAddrs[2:4]) edsb.HandleEDSResponse(clab4.build()) sc4 := <-cc.newSubConnCh edsb.HandleSubConnStateChange(sc4, connectivity.Connecting) edsb.HandleSubConnStateChange(sc4, connectivity.Ready) // Test pick with two subconns (without the first one). p4 := <-cc.newPickerCh // Locality-1 will be picked twice, and locality-2 will be picked twice. // Locality-1 contains only sc2, locality-2 contains sc3 and sc4. So expect // two sc2's and sc3, sc4. 
want = []balancer.SubConn{sc2, sc2, sc3, sc4} if err := isRoundRobin(want, func() balancer.SubConn { sc, _, _ := p4.Pick(context.Background(), balancer.PickOptions{}) return sc }); err != nil { t.Fatalf("want %v, got %v", want, err) } // Change weight of the locality[1]. clab5 := newClusterLoadAssignmentBuilder(testClusterNames[0], nil) clab5.addLocality(testSubZones[1], 2, testEndpointAddrs[1:2]) clab5.addLocality(testSubZones[2], 1, testEndpointAddrs[2:4]) edsb.HandleEDSResponse(clab5.build()) // Test pick with two subconns different locality weight. p5 := <-cc.newPickerCh // Locality-1 will be picked four times, and locality-2 will be picked twice // (weight 2 and 1). Locality-1 contains only sc2, locality-2 contains sc3 and // sc4. So expect four sc2's and sc3, sc4. want = []balancer.SubConn{sc2, sc2, sc2, sc2, sc3, sc4} if err := isRoundRobin(want, func() balancer.SubConn { sc, _, _ := p5.Pick(context.Background(), balancer.PickOptions{}) return sc }); err != nil { t.Fatalf("want %v, got %v", want, err) } // Change weight of the locality[1] to 0, it should never be picked. clab6 := newClusterLoadAssignmentBuilder(testClusterNames[0], nil) clab6.addLocality(testSubZones[1], 0, testEndpointAddrs[1:2]) clab6.addLocality(testSubZones[2], 1, testEndpointAddrs[2:4]) edsb.HandleEDSResponse(clab6.build()) // Test pick with two subconns different locality weight. p6 := <-cc.newPickerCh // Locality-1 will be not be picked, and locality-2 will be picked. // Locality-2 contains sc3 and sc4. So expect sc3, sc4. want = []balancer.SubConn{sc3, sc4} if err := isRoundRobin(want, func() balancer.SubConn { sc, _, _ := p6.Pick(context.Background(), balancer.PickOptions{}) return sc }); err != nil { t.Fatalf("want %v, got %v", want, err) } } func TestClose(t *testing.T) { edsb := NewXDSBalancer(nil, nil) // This is what could happen when switching between fallback and eds. This // make sure it doesn't panic. edsb.Close() } func init() { balancer.Register(&testConstBalancerBuilder{}) } var errTestConstPicker = fmt.Errorf("const picker error") type testConstBalancerBuilder struct{} func (*testConstBalancerBuilder) Build(cc balancer.ClientConn, opts balancer.BuildOptions) balancer.Balancer { return &testConstBalancer{cc: cc} } func (*testConstBalancerBuilder) Name() string { return "test-const-balancer" } type testConstBalancer struct { cc balancer.ClientConn } func (tb *testConstBalancer) HandleSubConnStateChange(sc balancer.SubConn, state connectivity.State) { tb.cc.UpdateBalancerState(connectivity.Ready, &testConstPicker{err: errTestConstPicker}) } func (tb *testConstBalancer) HandleResolvedAddrs([]resolver.Address, error) { tb.cc.UpdateBalancerState(connectivity.Ready, &testConstPicker{err: errTestConstPicker}) } func (*testConstBalancer) Close() { } type testConstPicker struct { err error sc balancer.SubConn } func (tcp *testConstPicker) Pick(ctx context.Context, opts balancer.PickOptions) (conn balancer.SubConn, done func(balancer.DoneInfo), err error) { if tcp.err != nil { return nil, nil, tcp.err } return tcp.sc, nil, nil } // Create XDS balancer, and update sub-balancer before handling eds responses. // Then switch between round-robin and test-const-balancer after handling first // eds response. func TestEDS_UpdateSubBalancerName(t *testing.T) { cc := newTestClientConn(t) edsb := NewXDSBalancer(cc, nil) t.Logf("update sub-balancer to test-const-balancer") edsb.HandleChildPolicy("test-const-balancer", nil) // Two localities, each with one backend. 
clab1 := newClusterLoadAssignmentBuilder(testClusterNames[0], nil) clab1.addLocality(testSubZones[0], 1, testEndpointAddrs[:1]) clab1.addLocality(testSubZones[1], 1, testEndpointAddrs[1:2]) edsb.HandleEDSResponse(clab1.build()) p0 := <-cc.newPickerCh for i := 0; i < 5; i++ { _, _, err := p0.Pick(context.Background(), balancer.PickOptions{}) if !reflect.DeepEqual(err, errTestConstPicker) { t.Fatalf("picker.Pick, got err %q, want err %q", err, errTestConstPicker) } } t.Logf("update sub-balancer to round-robin") edsb.HandleChildPolicy(roundrobin.Name, nil) sc1 := <-cc.newSubConnCh edsb.HandleSubConnStateChange(sc1, connectivity.Connecting) edsb.HandleSubConnStateChange(sc1, connectivity.Ready) sc2 := <-cc.newSubConnCh edsb.HandleSubConnStateChange(sc2, connectivity.Connecting) edsb.HandleSubConnStateChange(sc2, connectivity.Ready) // Test roundrobin with two subconns. p1 := <-cc.newPickerCh want := []balancer.SubConn{sc1, sc2} if err := isRoundRobin(want, func() balancer.SubConn { sc, _, _ := p1.Pick(context.Background(), balancer.PickOptions{}) return sc }); err != nil { t.Fatalf("want %v, got %v", want, err) } t.Logf("update sub-balancer to test-const-balancer") edsb.HandleChildPolicy("test-const-balancer", nil) for i := 0; i < 2; i++ { scToRemove := <-cc.removeSubConnCh if !reflect.DeepEqual(scToRemove, sc1) && !reflect.DeepEqual(scToRemove, sc2) { t.Fatalf("RemoveSubConn, want (%v or %v), got %v", sc1, sc2, scToRemove) } edsb.HandleSubConnStateChange(scToRemove, connectivity.Shutdown) } p2 := <-cc.newPickerCh for i := 0; i < 5; i++ { _, _, err := p2.Pick(context.Background(), balancer.PickOptions{}) if !reflect.DeepEqual(err, errTestConstPicker) { t.Fatalf("picker.Pick, got err %q, want err %q", err, errTestConstPicker) } } t.Logf("update sub-balancer to round-robin") edsb.HandleChildPolicy(roundrobin.Name, nil) sc3 := <-cc.newSubConnCh edsb.HandleSubConnStateChange(sc3, connectivity.Connecting) edsb.HandleSubConnStateChange(sc3, connectivity.Ready) sc4 := <-cc.newSubConnCh edsb.HandleSubConnStateChange(sc4, connectivity.Connecting) edsb.HandleSubConnStateChange(sc4, connectivity.Ready) p3 := <-cc.newPickerCh want = []balancer.SubConn{sc3, sc4} if err := isRoundRobin(want, func() balancer.SubConn { sc, _, _ := p3.Pick(context.Background(), balancer.PickOptions{}) return sc }); err != nil { t.Fatalf("want %v, got %v", want, err) } } func TestDropPicker(t *testing.T) { const pickCount = 12 var constPicker = &testConstPicker{ sc: testSubConns[0], } tests := []struct { name string drops []*dropper }{ { name: "no drop", drops: nil, }, { name: "one drop", drops: []*dropper{ newDropper(1, 2, ""), }, }, { name: "two drops", drops: []*dropper{ newDropper(1, 3, ""), newDropper(1, 2, ""), }, }, { name: "three drops", drops: []*dropper{ newDropper(1, 3, ""), newDropper(1, 4, ""), newDropper(1, 2, ""), }, }, } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { p := newDropPicker(constPicker, tt.drops, nil) // scCount is the number of sc's returned by pick. The opposite of // drop-count. 
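// Worked example for the "two drops" case (rates 1/3 then 1/2) with
// pickCount = 12: the expected number of successful picks computed below is
// 12 * (3-1)/3 * (2-1)/2 = 12 * 2/3 * 1/2 = 4.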
var ( scCount int wantCount = pickCount ) for _, dp := range tt.drops { wantCount = wantCount * int(dp.denominator-dp.numerator) / int(dp.denominator) } for i := 0; i < pickCount; i++ { _, _, err := p.Pick(context.Background(), balancer.PickOptions{}) if err == nil { scCount++ } } if scCount != (wantCount) { t.Errorf("drops: %+v, scCount %v, wantCount %v", tt.drops, scCount, wantCount) } }) } } func TestEDS_LoadReport(t *testing.T) { testLoadStore := newTestLoadStore() cc := newTestClientConn(t) edsb := NewXDSBalancer(cc, testLoadStore) backendToBalancerID := make(map[balancer.SubConn]internal.Locality) // Two localities, each with one backend. clab1 := newClusterLoadAssignmentBuilder(testClusterNames[0], nil) clab1.addLocality(testSubZones[0], 1, testEndpointAddrs[:1]) clab1.addLocality(testSubZones[1], 1, testEndpointAddrs[1:2]) edsb.HandleEDSResponse(clab1.build()) sc1 := <-cc.newSubConnCh edsb.HandleSubConnStateChange(sc1, connectivity.Connecting) edsb.HandleSubConnStateChange(sc1, connectivity.Ready) backendToBalancerID[sc1] = internal.Locality{ SubZone: testSubZones[0], } sc2 := <-cc.newSubConnCh edsb.HandleSubConnStateChange(sc2, connectivity.Connecting) edsb.HandleSubConnStateChange(sc2, connectivity.Ready) backendToBalancerID[sc2] = internal.Locality{ SubZone: testSubZones[1], } // Test roundrobin with two subconns. p1 := <-cc.newPickerCh var ( wantStart []internal.Locality wantEnd []internal.Locality ) for i := 0; i < 10; i++ { sc, done, _ := p1.Pick(context.Background(), balancer.PickOptions{}) locality := backendToBalancerID[sc] wantStart = append(wantStart, locality) if done != nil && sc != sc1 { done(balancer.DoneInfo{}) wantEnd = append(wantEnd, backendToBalancerID[sc]) } } if !reflect.DeepEqual(testLoadStore.callsStarted, wantStart) { t.Fatalf("want started: %v, got: %v", testLoadStore.callsStarted, wantStart) } if !reflect.DeepEqual(testLoadStore.callsEnded, wantEnd) { t.Fatalf("want ended: %v, got: %v", testLoadStore.callsEnded, wantEnd) } } grpc-go-1.22.1/balancer/xds/edsbalancer/test_util_test.go000066400000000000000000000216601351635773100233650ustar00rootroot00000000000000/* * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package edsbalancer import ( "context" "fmt" "testing" "google.golang.org/grpc" "google.golang.org/grpc/balancer" "google.golang.org/grpc/balancer/xds/internal" "google.golang.org/grpc/connectivity" "google.golang.org/grpc/resolver" ) var ( testSubConns = []*testSubConn{{id: "sc1"}, {id: "sc2"}, {id: "sc3"}, {id: "sc4"}} ) type testSubConn struct { id string } func (tsc *testSubConn) UpdateAddresses([]resolver.Address) { panic("not implemented") } func (tsc *testSubConn) Connect() { } // Implement stringer to get human friendly error message. func (tsc *testSubConn) String() string { return tsc.id } type testClientConn struct { t *testing.T // For logging only. newSubConnAddrsCh chan []resolver.Address // The last 10 []Address to create subconn. newSubConnCh chan balancer.SubConn // The last 10 subconn created. 
removeSubConnCh chan balancer.SubConn // The last 10 subconn removed. newPickerCh chan balancer.Picker // The last picker updated. newStateCh chan connectivity.State // The last state. subConnIdx int } func newTestClientConn(t *testing.T) *testClientConn { return &testClientConn{ t: t, newSubConnAddrsCh: make(chan []resolver.Address, 10), newSubConnCh: make(chan balancer.SubConn, 10), removeSubConnCh: make(chan balancer.SubConn, 10), newPickerCh: make(chan balancer.Picker, 1), newStateCh: make(chan connectivity.State, 1), } } func (tcc *testClientConn) NewSubConn(a []resolver.Address, o balancer.NewSubConnOptions) (balancer.SubConn, error) { sc := testSubConns[tcc.subConnIdx] tcc.subConnIdx++ tcc.t.Logf("testClientConn: NewSubConn(%v, %+v) => %s", a, o, sc) select { case tcc.newSubConnAddrsCh <- a: default: } select { case tcc.newSubConnCh <- sc: default: } return sc, nil } func (tcc *testClientConn) RemoveSubConn(sc balancer.SubConn) { tcc.t.Logf("testClientCOnn: RemoveSubConn(%p)", sc) select { case tcc.removeSubConnCh <- sc: default: } } func (tcc *testClientConn) UpdateBalancerState(s connectivity.State, p balancer.Picker) { tcc.t.Logf("testClientConn: UpdateBalancerState(%v, %p)", s, p) select { case <-tcc.newStateCh: default: } tcc.newStateCh <- s select { case <-tcc.newPickerCh: default: } tcc.newPickerCh <- p } func (tcc *testClientConn) ResolveNow(resolver.ResolveNowOption) { panic("not implemented") } func (tcc *testClientConn) Target() string { panic("not implemented") } type testServerLoad struct { name string d float64 } type testLoadStore struct { callsStarted []internal.Locality callsEnded []internal.Locality callsCost []testServerLoad } func newTestLoadStore() *testLoadStore { return &testLoadStore{} } func (*testLoadStore) CallDropped(category string) { panic("not implemented") } func (tls *testLoadStore) CallStarted(l internal.Locality) { tls.callsStarted = append(tls.callsStarted, l) } func (tls *testLoadStore) CallFinished(l internal.Locality, err error) { tls.callsEnded = append(tls.callsEnded, l) } func (tls *testLoadStore) CallServerLoad(l internal.Locality, name string, d float64) { tls.callsCost = append(tls.callsCost, testServerLoad{name: name, d: d}) } func (*testLoadStore) ReportTo(ctx context.Context, cc *grpc.ClientConn) { panic("not implemented") } // isRoundRobin checks whether f's return value is roundrobin of elements from // want. But it doesn't check for the order. Note that want can contain // duplicate items, which makes it weight-round-robin. // // Step 1. the return values of f should form a permutation of all elements in // want, but not necessary in the same order. E.g. if want is {a,a,b}, the check // fails if f returns: // - {a,a,a}: third a is returned before b // - {a,b,b}: second b is returned before the second a // // If error is found in this step, the returned error contains only the first // iteration until where it goes wrong. // // Step 2. the return values of f should be repetitions of the same permutation. // E.g. if want is {a,a,b}, the check failes if f returns: // - {a,b,a,b,a,a}: though it satisfies step 1, the second iteration is not // repeating the first iteration. // // If error is found in this step, the returned error contains the first // iteration + the second iteration until where it goes wrong. func isRoundRobin(want []balancer.SubConn, f func() balancer.SubConn) error { wantSet := make(map[balancer.SubConn]int) // SubConn -> count, for weighted RR. 
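// Each occurrence of a SubConn in want contributes one "credit" to wantSet;
// the first iteration below spends one credit per pick and fails as soon as
// any SubConn is picked more often than it appears in want.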
for _, sc := range want { wantSet[sc]++ } // The first iteration: makes sure f's return values form a permutation of // elements in want. // // Also keep the returns values in a slice, so we can compare the order in // the second iteration. gotSliceFirstIteration := make([]balancer.SubConn, 0, len(want)) for range want { got := f() gotSliceFirstIteration = append(gotSliceFirstIteration, got) wantSet[got]-- if wantSet[got] < 0 { return fmt.Errorf("non-roundrobin want: %v, result: %v", want, gotSliceFirstIteration) } } // The second iteration should repeat the first iteration. var gotSliceSecondIteration []balancer.SubConn for i := 0; i < 2; i++ { for _, w := range gotSliceFirstIteration { g := f() gotSliceSecondIteration = append(gotSliceSecondIteration, g) if w != g { return fmt.Errorf("non-roundrobin, first iter: %v, second iter: %v", gotSliceFirstIteration, gotSliceSecondIteration) } } } return nil } // testClosure is a test util for TestIsRoundRobin. type testClosure struct { r []balancer.SubConn i int } func (tc *testClosure) next() balancer.SubConn { ret := tc.r[tc.i] tc.i = (tc.i + 1) % len(tc.r) return ret } func TestIsRoundRobin(t *testing.T) { var ( sc1 = testSubConns[0] sc2 = testSubConns[1] sc3 = testSubConns[2] ) testCases := []struct { desc string want []balancer.SubConn got []balancer.SubConn pass bool }{ { desc: "0 element", want: []balancer.SubConn{}, got: []balancer.SubConn{}, pass: true, }, { desc: "1 element RR", want: []balancer.SubConn{sc1}, got: []balancer.SubConn{sc1, sc1, sc1, sc1}, pass: true, }, { desc: "1 element not RR", want: []balancer.SubConn{sc1}, got: []balancer.SubConn{sc1, sc2, sc1}, pass: false, }, { desc: "2 elements RR", want: []balancer.SubConn{sc1, sc2}, got: []balancer.SubConn{sc1, sc2, sc1, sc2, sc1, sc2}, pass: true, }, { desc: "2 elements RR different order from want", want: []balancer.SubConn{sc2, sc1}, got: []balancer.SubConn{sc1, sc2, sc1, sc2, sc1, sc2}, pass: true, }, { desc: "2 elements RR not RR, mistake in first iter", want: []balancer.SubConn{sc1, sc2}, got: []balancer.SubConn{sc1, sc1, sc1, sc2, sc1, sc2}, pass: false, }, { desc: "2 elements RR not RR, mistake in second iter", want: []balancer.SubConn{sc1, sc2}, got: []balancer.SubConn{sc1, sc2, sc1, sc1, sc1, sc2}, pass: false, }, { desc: "2 elements weighted RR", want: []balancer.SubConn{sc1, sc1, sc2}, got: []balancer.SubConn{sc1, sc1, sc2, sc1, sc1, sc2}, pass: true, }, { desc: "2 elements weighted RR different order", want: []balancer.SubConn{sc1, sc1, sc2}, got: []balancer.SubConn{sc1, sc2, sc1, sc1, sc2, sc1}, pass: true, }, { desc: "3 elements RR", want: []balancer.SubConn{sc1, sc2, sc3}, got: []balancer.SubConn{sc1, sc2, sc3, sc1, sc2, sc3, sc1, sc2, sc3}, pass: true, }, { desc: "3 elements RR different order", want: []balancer.SubConn{sc1, sc2, sc3}, got: []balancer.SubConn{sc3, sc2, sc1, sc3, sc2, sc1}, pass: true, }, { desc: "3 elements weighted RR", want: []balancer.SubConn{sc1, sc1, sc1, sc2, sc2, sc3}, got: []balancer.SubConn{sc1, sc2, sc3, sc1, sc2, sc1, sc1, sc2, sc3, sc1, sc2, sc1}, pass: true, }, { desc: "3 elements weighted RR not RR, mistake in first iter", want: []balancer.SubConn{sc1, sc1, sc1, sc2, sc2, sc3}, got: []balancer.SubConn{sc1, sc2, sc1, sc1, sc2, sc1, sc1, sc2, sc3, sc1, sc2, sc1}, pass: false, }, { desc: "3 elements weighted RR not RR, mistake in second iter", want: []balancer.SubConn{sc1, sc1, sc1, sc2, sc2, sc3}, got: []balancer.SubConn{sc1, sc2, sc3, sc1, sc2, sc1, sc1, sc1, sc3, sc1, sc2, sc1}, pass: false, }, } for _, tC := range testCases { 
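// Each case replays the recorded picks through a testClosure, so isRoundRobin
// consumes them exactly as it would consume picks from a live picker.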
t.Run(tC.desc, func(t *testing.T) { err := isRoundRobin(tC.want, (&testClosure{r: tC.got}).next) if err == nil != tC.pass { t.Errorf("want pass %v, want %v, got err %v", tC.pass, tC.want, err) } }) } } grpc-go-1.22.1/balancer/xds/edsbalancer/util.go000066400000000000000000000022401351635773100212600ustar00rootroot00000000000000/* * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package edsbalancer import "google.golang.org/grpc/balancer/internal/wrr" type dropper struct { // Drop rate will be numerator/denominator. numerator uint32 denominator uint32 w wrr.WRR category string } func newDropper(numerator, denominator uint32, category string) *dropper { w := newRandomWRR() w.Add(true, int64(numerator)) w.Add(false, int64(denominator-numerator)) return &dropper{ numerator: numerator, denominator: denominator, w: w, category: category, } } func (d *dropper) drop() (ret bool) { return d.w.Next().(bool) } grpc-go-1.22.1/balancer/xds/edsbalancer/util_test.go000066400000000000000000000052241351635773100223240ustar00rootroot00000000000000/* * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package edsbalancer import ( "sync" "testing" "google.golang.org/grpc/balancer/internal/wrr" ) // testWRR is a deterministic WRR implementation. // // The real implementation does random WRR. testWRR makes the balancer behavior // deterministic and easier to test. // // With {a: 2, b: 3}, the Next() results will be {a, a, b, b, b}. type testWRR struct { itemsWithWeight []struct { item interface{} weight int64 } length int mu sync.Mutex idx int // The index of the item that will be picked count int64 // The number of times the current item has been picked. 
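// Illustration of how this deterministic WRR drives the code under test:
// newDropper(2, 3, "") adds true with weight 2 and false with weight 1, so
// Next() (and hence drop()) yields true, true, false, repeating. TestDropper
// below relies on this to expect exactly 4 drops out of 6 calls for the 2/3
// case.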
} func newTestWRR() wrr.WRR { return &testWRR{} } func (twrr *testWRR) Add(item interface{}, weight int64) { twrr.itemsWithWeight = append(twrr.itemsWithWeight, struct { item interface{} weight int64 }{item: item, weight: weight}) twrr.length++ } func (twrr *testWRR) Next() interface{} { twrr.mu.Lock() iww := twrr.itemsWithWeight[twrr.idx] twrr.count++ if twrr.count >= iww.weight { twrr.idx = (twrr.idx + 1) % twrr.length twrr.count = 0 } twrr.mu.Unlock() return iww.item } func init() { newRandomWRR = newTestWRR } func TestDropper(t *testing.T) { const repeat = 2 type args struct { numerator uint32 denominator uint32 } tests := []struct { name string args args }{ { name: "2_3", args: args{ numerator: 2, denominator: 3, }, }, { name: "4_8", args: args{ numerator: 4, denominator: 8, }, }, { name: "7_20", args: args{ numerator: 7, denominator: 20, }, }, } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { d := newDropper(tt.args.numerator, tt.args.denominator, "") var ( dCount int wantCount = int(tt.args.numerator) * repeat loopCount = int(tt.args.denominator) * repeat ) for i := 0; i < loopCount; i++ { if d.drop() { dCount++ } } if dCount != (wantCount) { t.Errorf("with numerator %v, denominator %v repeat %v, got drop count: %v, want %v", tt.args.numerator, tt.args.denominator, repeat, dCount, wantCount) } }) } } grpc-go-1.22.1/balancer/xds/internal/000077500000000000000000000000001351635773100173275ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/const.go000066400000000000000000000014611351635773100210060ustar00rootroot00000000000000/* * * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package internal var ( // GrpcHostname is the metadata key for specifying the grpc service name when sending xDS requests // from grpc to the traffic director. GrpcHostname = "TRAFFICDIRECTOR_GRPC_HOSTNAME" ) grpc-go-1.22.1/balancer/xds/internal/internal.go000066400000000000000000000025521351635773100214760ustar00rootroot00000000000000/* * * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package internal import ( "fmt" basepb "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/core/base" ) // Locality is xds.Locality without XXX fields, so it can be used as map // keys. // // xds.Locality cannot be map keys because one of the XXX fields is a slice. // // This struct should only be used as map keys. Use the proto message directly // in all other places. 
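// For example (an illustration, not part of the original comment),
// per-locality call counters can be kept in a map[Locality]int64 and converted
// back with ToProto() only when the locality has to appear in an outgoing xDS
// message.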
type Locality struct { Region string Zone string SubZone string } func (lamk Locality) String() string { return fmt.Sprintf("%s-%s-%s", lamk.Region, lamk.Zone, lamk.SubZone) } // ToProto convert Locality to the proto representation. func (lamk Locality) ToProto() *basepb.Locality { return &basepb.Locality{ Region: lamk.Region, Zone: lamk.Zone, SubZone: lamk.SubZone, } } grpc-go-1.22.1/balancer/xds/internal/internal_test.go000066400000000000000000000032201351635773100225260ustar00rootroot00000000000000/* * * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package internal import ( "reflect" "strings" "testing" "unicode" "github.com/google/go-cmp/cmp" basepb "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/core/base" ) const ignorePrefix = "XXX_" func ignore(name string) bool { if !unicode.IsUpper([]rune(name)[0]) { return true } return strings.HasPrefix(name, ignorePrefix) } // A reflection based test to make sure internal.Locality contains all the // fields (expect for XXX_) from the proto message. func TestLocalityMatchProtoMessage(t *testing.T) { want1 := make(map[string]string) for ty, i := reflect.TypeOf(Locality{}), 0; i < ty.NumField(); i++ { f := ty.Field(i) if ignore(f.Name) { continue } want1[f.Name] = f.Type.Name() } want2 := make(map[string]string) for ty, i := reflect.TypeOf(basepb.Locality{}), 0; i < ty.NumField(); i++ { f := ty.Field(i) if ignore(f.Name) { continue } want2[f.Name] = f.Type.Name() } if !reflect.DeepEqual(want1, want2) { t.Fatalf("internal type and proto message have different fields:\n%+v", cmp.Diff(want1, want2)) } } grpc-go-1.22.1/balancer/xds/internal/proto/000077500000000000000000000000001351635773100204725ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/envoy/000077500000000000000000000000001351635773100216325ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/envoy/api/000077500000000000000000000000001351635773100224035ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/envoy/api/v2/000077500000000000000000000000001351635773100227325ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/envoy/api/v2/auth/000077500000000000000000000000001351635773100236735ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/envoy/api/v2/auth/cert/000077500000000000000000000000001351635773100246305ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/envoy/api/v2/auth/cert/cert.pb.go000077500000000000000000001351371351635773100265310ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. 
// source: envoy/api/v2/auth/cert.proto package auth import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import wrappers "github.com/golang/protobuf/ptypes/wrappers" import base "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/core/base" import config_source "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/core/config_source" import _ "google.golang.org/grpc/balancer/xds/internal/proto/validate" // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type TlsParameters_TlsProtocol int32 const ( TlsParameters_TLS_AUTO TlsParameters_TlsProtocol = 0 TlsParameters_TLSv1_0 TlsParameters_TlsProtocol = 1 TlsParameters_TLSv1_1 TlsParameters_TlsProtocol = 2 TlsParameters_TLSv1_2 TlsParameters_TlsProtocol = 3 TlsParameters_TLSv1_3 TlsParameters_TlsProtocol = 4 ) var TlsParameters_TlsProtocol_name = map[int32]string{ 0: "TLS_AUTO", 1: "TLSv1_0", 2: "TLSv1_1", 3: "TLSv1_2", 4: "TLSv1_3", } var TlsParameters_TlsProtocol_value = map[string]int32{ "TLS_AUTO": 0, "TLSv1_0": 1, "TLSv1_1": 2, "TLSv1_2": 3, "TLSv1_3": 4, } func (x TlsParameters_TlsProtocol) String() string { return proto.EnumName(TlsParameters_TlsProtocol_name, int32(x)) } func (TlsParameters_TlsProtocol) EnumDescriptor() ([]byte, []int) { return fileDescriptor_cert_f82beca1d890b9d7, []int{0, 0} } type TlsParameters struct { TlsMinimumProtocolVersion TlsParameters_TlsProtocol `protobuf:"varint,1,opt,name=tls_minimum_protocol_version,json=tlsMinimumProtocolVersion,proto3,enum=envoy.api.v2.auth.TlsParameters_TlsProtocol" json:"tls_minimum_protocol_version,omitempty"` TlsMaximumProtocolVersion TlsParameters_TlsProtocol `protobuf:"varint,2,opt,name=tls_maximum_protocol_version,json=tlsMaximumProtocolVersion,proto3,enum=envoy.api.v2.auth.TlsParameters_TlsProtocol" json:"tls_maximum_protocol_version,omitempty"` CipherSuites []string `protobuf:"bytes,3,rep,name=cipher_suites,json=cipherSuites,proto3" json:"cipher_suites,omitempty"` EcdhCurves []string `protobuf:"bytes,4,rep,name=ecdh_curves,json=ecdhCurves,proto3" json:"ecdh_curves,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *TlsParameters) Reset() { *m = TlsParameters{} } func (m *TlsParameters) String() string { return proto.CompactTextString(m) } func (*TlsParameters) ProtoMessage() {} func (*TlsParameters) Descriptor() ([]byte, []int) { return fileDescriptor_cert_f82beca1d890b9d7, []int{0} } func (m *TlsParameters) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_TlsParameters.Unmarshal(m, b) } func (m *TlsParameters) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_TlsParameters.Marshal(b, m, deterministic) } func (dst *TlsParameters) XXX_Merge(src proto.Message) { xxx_messageInfo_TlsParameters.Merge(dst, src) } func (m *TlsParameters) XXX_Size() int { return xxx_messageInfo_TlsParameters.Size(m) } func (m *TlsParameters) XXX_DiscardUnknown() { xxx_messageInfo_TlsParameters.DiscardUnknown(m) } var xxx_messageInfo_TlsParameters proto.InternalMessageInfo func (m *TlsParameters) GetTlsMinimumProtocolVersion() 
TlsParameters_TlsProtocol { if m != nil { return m.TlsMinimumProtocolVersion } return TlsParameters_TLS_AUTO } func (m *TlsParameters) GetTlsMaximumProtocolVersion() TlsParameters_TlsProtocol { if m != nil { return m.TlsMaximumProtocolVersion } return TlsParameters_TLS_AUTO } func (m *TlsParameters) GetCipherSuites() []string { if m != nil { return m.CipherSuites } return nil } func (m *TlsParameters) GetEcdhCurves() []string { if m != nil { return m.EcdhCurves } return nil } type TlsCertificate struct { CertificateChain *base.DataSource `protobuf:"bytes,1,opt,name=certificate_chain,json=certificateChain,proto3" json:"certificate_chain,omitempty"` PrivateKey *base.DataSource `protobuf:"bytes,2,opt,name=private_key,json=privateKey,proto3" json:"private_key,omitempty"` Password *base.DataSource `protobuf:"bytes,3,opt,name=password,proto3" json:"password,omitempty"` OcspStaple *base.DataSource `protobuf:"bytes,4,opt,name=ocsp_staple,json=ocspStaple,proto3" json:"ocsp_staple,omitempty"` SignedCertificateTimestamp []*base.DataSource `protobuf:"bytes,5,rep,name=signed_certificate_timestamp,json=signedCertificateTimestamp,proto3" json:"signed_certificate_timestamp,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *TlsCertificate) Reset() { *m = TlsCertificate{} } func (m *TlsCertificate) String() string { return proto.CompactTextString(m) } func (*TlsCertificate) ProtoMessage() {} func (*TlsCertificate) Descriptor() ([]byte, []int) { return fileDescriptor_cert_f82beca1d890b9d7, []int{1} } func (m *TlsCertificate) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_TlsCertificate.Unmarshal(m, b) } func (m *TlsCertificate) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_TlsCertificate.Marshal(b, m, deterministic) } func (dst *TlsCertificate) XXX_Merge(src proto.Message) { xxx_messageInfo_TlsCertificate.Merge(dst, src) } func (m *TlsCertificate) XXX_Size() int { return xxx_messageInfo_TlsCertificate.Size(m) } func (m *TlsCertificate) XXX_DiscardUnknown() { xxx_messageInfo_TlsCertificate.DiscardUnknown(m) } var xxx_messageInfo_TlsCertificate proto.InternalMessageInfo func (m *TlsCertificate) GetCertificateChain() *base.DataSource { if m != nil { return m.CertificateChain } return nil } func (m *TlsCertificate) GetPrivateKey() *base.DataSource { if m != nil { return m.PrivateKey } return nil } func (m *TlsCertificate) GetPassword() *base.DataSource { if m != nil { return m.Password } return nil } func (m *TlsCertificate) GetOcspStaple() *base.DataSource { if m != nil { return m.OcspStaple } return nil } func (m *TlsCertificate) GetSignedCertificateTimestamp() []*base.DataSource { if m != nil { return m.SignedCertificateTimestamp } return nil } type TlsSessionTicketKeys struct { Keys []*base.DataSource `protobuf:"bytes,1,rep,name=keys,proto3" json:"keys,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *TlsSessionTicketKeys) Reset() { *m = TlsSessionTicketKeys{} } func (m *TlsSessionTicketKeys) String() string { return proto.CompactTextString(m) } func (*TlsSessionTicketKeys) ProtoMessage() {} func (*TlsSessionTicketKeys) Descriptor() ([]byte, []int) { return fileDescriptor_cert_f82beca1d890b9d7, []int{2} } func (m *TlsSessionTicketKeys) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_TlsSessionTicketKeys.Unmarshal(m, b) } func (m *TlsSessionTicketKeys) XXX_Marshal(b []byte, deterministic bool) ([]byte, 
error) { return xxx_messageInfo_TlsSessionTicketKeys.Marshal(b, m, deterministic) } func (dst *TlsSessionTicketKeys) XXX_Merge(src proto.Message) { xxx_messageInfo_TlsSessionTicketKeys.Merge(dst, src) } func (m *TlsSessionTicketKeys) XXX_Size() int { return xxx_messageInfo_TlsSessionTicketKeys.Size(m) } func (m *TlsSessionTicketKeys) XXX_DiscardUnknown() { xxx_messageInfo_TlsSessionTicketKeys.DiscardUnknown(m) } var xxx_messageInfo_TlsSessionTicketKeys proto.InternalMessageInfo func (m *TlsSessionTicketKeys) GetKeys() []*base.DataSource { if m != nil { return m.Keys } return nil } type CertificateValidationContext struct { TrustedCa *base.DataSource `protobuf:"bytes,1,opt,name=trusted_ca,json=trustedCa,proto3" json:"trusted_ca,omitempty"` VerifyCertificateSpki []string `protobuf:"bytes,3,rep,name=verify_certificate_spki,json=verifyCertificateSpki,proto3" json:"verify_certificate_spki,omitempty"` VerifyCertificateHash []string `protobuf:"bytes,2,rep,name=verify_certificate_hash,json=verifyCertificateHash,proto3" json:"verify_certificate_hash,omitempty"` VerifySubjectAltName []string `protobuf:"bytes,4,rep,name=verify_subject_alt_name,json=verifySubjectAltName,proto3" json:"verify_subject_alt_name,omitempty"` RequireOcspStaple *wrappers.BoolValue `protobuf:"bytes,5,opt,name=require_ocsp_staple,json=requireOcspStaple,proto3" json:"require_ocsp_staple,omitempty"` RequireSignedCertificateTimestamp *wrappers.BoolValue `protobuf:"bytes,6,opt,name=require_signed_certificate_timestamp,json=requireSignedCertificateTimestamp,proto3" json:"require_signed_certificate_timestamp,omitempty"` Crl *base.DataSource `protobuf:"bytes,7,opt,name=crl,proto3" json:"crl,omitempty"` AllowExpiredCertificate bool `protobuf:"varint,8,opt,name=allow_expired_certificate,json=allowExpiredCertificate,proto3" json:"allow_expired_certificate,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *CertificateValidationContext) Reset() { *m = CertificateValidationContext{} } func (m *CertificateValidationContext) String() string { return proto.CompactTextString(m) } func (*CertificateValidationContext) ProtoMessage() {} func (*CertificateValidationContext) Descriptor() ([]byte, []int) { return fileDescriptor_cert_f82beca1d890b9d7, []int{3} } func (m *CertificateValidationContext) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_CertificateValidationContext.Unmarshal(m, b) } func (m *CertificateValidationContext) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_CertificateValidationContext.Marshal(b, m, deterministic) } func (dst *CertificateValidationContext) XXX_Merge(src proto.Message) { xxx_messageInfo_CertificateValidationContext.Merge(dst, src) } func (m *CertificateValidationContext) XXX_Size() int { return xxx_messageInfo_CertificateValidationContext.Size(m) } func (m *CertificateValidationContext) XXX_DiscardUnknown() { xxx_messageInfo_CertificateValidationContext.DiscardUnknown(m) } var xxx_messageInfo_CertificateValidationContext proto.InternalMessageInfo func (m *CertificateValidationContext) GetTrustedCa() *base.DataSource { if m != nil { return m.TrustedCa } return nil } func (m *CertificateValidationContext) GetVerifyCertificateSpki() []string { if m != nil { return m.VerifyCertificateSpki } return nil } func (m *CertificateValidationContext) GetVerifyCertificateHash() []string { if m != nil { return m.VerifyCertificateHash } return nil } func (m *CertificateValidationContext) 
GetVerifySubjectAltName() []string { if m != nil { return m.VerifySubjectAltName } return nil } func (m *CertificateValidationContext) GetRequireOcspStaple() *wrappers.BoolValue { if m != nil { return m.RequireOcspStaple } return nil } func (m *CertificateValidationContext) GetRequireSignedCertificateTimestamp() *wrappers.BoolValue { if m != nil { return m.RequireSignedCertificateTimestamp } return nil } func (m *CertificateValidationContext) GetCrl() *base.DataSource { if m != nil { return m.Crl } return nil } func (m *CertificateValidationContext) GetAllowExpiredCertificate() bool { if m != nil { return m.AllowExpiredCertificate } return false } type CommonTlsContext struct { TlsParams *TlsParameters `protobuf:"bytes,1,opt,name=tls_params,json=tlsParams,proto3" json:"tls_params,omitempty"` TlsCertificates []*TlsCertificate `protobuf:"bytes,2,rep,name=tls_certificates,json=tlsCertificates,proto3" json:"tls_certificates,omitempty"` TlsCertificateSdsSecretConfigs []*SdsSecretConfig `protobuf:"bytes,6,rep,name=tls_certificate_sds_secret_configs,json=tlsCertificateSdsSecretConfigs,proto3" json:"tls_certificate_sds_secret_configs,omitempty"` // Types that are valid to be assigned to ValidationContextType: // *CommonTlsContext_ValidationContext // *CommonTlsContext_ValidationContextSdsSecretConfig // *CommonTlsContext_CombinedValidationContext ValidationContextType isCommonTlsContext_ValidationContextType `protobuf_oneof:"validation_context_type"` AlpnProtocols []string `protobuf:"bytes,4,rep,name=alpn_protocols,json=alpnProtocols,proto3" json:"alpn_protocols,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *CommonTlsContext) Reset() { *m = CommonTlsContext{} } func (m *CommonTlsContext) String() string { return proto.CompactTextString(m) } func (*CommonTlsContext) ProtoMessage() {} func (*CommonTlsContext) Descriptor() ([]byte, []int) { return fileDescriptor_cert_f82beca1d890b9d7, []int{4} } func (m *CommonTlsContext) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_CommonTlsContext.Unmarshal(m, b) } func (m *CommonTlsContext) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_CommonTlsContext.Marshal(b, m, deterministic) } func (dst *CommonTlsContext) XXX_Merge(src proto.Message) { xxx_messageInfo_CommonTlsContext.Merge(dst, src) } func (m *CommonTlsContext) XXX_Size() int { return xxx_messageInfo_CommonTlsContext.Size(m) } func (m *CommonTlsContext) XXX_DiscardUnknown() { xxx_messageInfo_CommonTlsContext.DiscardUnknown(m) } var xxx_messageInfo_CommonTlsContext proto.InternalMessageInfo func (m *CommonTlsContext) GetTlsParams() *TlsParameters { if m != nil { return m.TlsParams } return nil } func (m *CommonTlsContext) GetTlsCertificates() []*TlsCertificate { if m != nil { return m.TlsCertificates } return nil } func (m *CommonTlsContext) GetTlsCertificateSdsSecretConfigs() []*SdsSecretConfig { if m != nil { return m.TlsCertificateSdsSecretConfigs } return nil } type isCommonTlsContext_ValidationContextType interface { isCommonTlsContext_ValidationContextType() } type CommonTlsContext_ValidationContext struct { ValidationContext *CertificateValidationContext `protobuf:"bytes,3,opt,name=validation_context,json=validationContext,proto3,oneof"` } type CommonTlsContext_ValidationContextSdsSecretConfig struct { ValidationContextSdsSecretConfig *SdsSecretConfig `protobuf:"bytes,7,opt,name=validation_context_sds_secret_config,json=validationContextSdsSecretConfig,proto3,oneof"` } type 
CommonTlsContext_CombinedValidationContext struct { CombinedValidationContext *CommonTlsContext_CombinedCertificateValidationContext `protobuf:"bytes,8,opt,name=combined_validation_context,json=combinedValidationContext,proto3,oneof"` } func (*CommonTlsContext_ValidationContext) isCommonTlsContext_ValidationContextType() {} func (*CommonTlsContext_ValidationContextSdsSecretConfig) isCommonTlsContext_ValidationContextType() {} func (*CommonTlsContext_CombinedValidationContext) isCommonTlsContext_ValidationContextType() {} func (m *CommonTlsContext) GetValidationContextType() isCommonTlsContext_ValidationContextType { if m != nil { return m.ValidationContextType } return nil } func (m *CommonTlsContext) GetValidationContext() *CertificateValidationContext { if x, ok := m.GetValidationContextType().(*CommonTlsContext_ValidationContext); ok { return x.ValidationContext } return nil } func (m *CommonTlsContext) GetValidationContextSdsSecretConfig() *SdsSecretConfig { if x, ok := m.GetValidationContextType().(*CommonTlsContext_ValidationContextSdsSecretConfig); ok { return x.ValidationContextSdsSecretConfig } return nil } func (m *CommonTlsContext) GetCombinedValidationContext() *CommonTlsContext_CombinedCertificateValidationContext { if x, ok := m.GetValidationContextType().(*CommonTlsContext_CombinedValidationContext); ok { return x.CombinedValidationContext } return nil } func (m *CommonTlsContext) GetAlpnProtocols() []string { if m != nil { return m.AlpnProtocols } return nil } // XXX_OneofFuncs is for the internal use of the proto package. func (*CommonTlsContext) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _CommonTlsContext_OneofMarshaler, _CommonTlsContext_OneofUnmarshaler, _CommonTlsContext_OneofSizer, []interface{}{ (*CommonTlsContext_ValidationContext)(nil), (*CommonTlsContext_ValidationContextSdsSecretConfig)(nil), (*CommonTlsContext_CombinedValidationContext)(nil), } } func _CommonTlsContext_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*CommonTlsContext) // validation_context_type switch x := m.ValidationContextType.(type) { case *CommonTlsContext_ValidationContext: b.EncodeVarint(3<<3 | proto.WireBytes) if err := b.EncodeMessage(x.ValidationContext); err != nil { return err } case *CommonTlsContext_ValidationContextSdsSecretConfig: b.EncodeVarint(7<<3 | proto.WireBytes) if err := b.EncodeMessage(x.ValidationContextSdsSecretConfig); err != nil { return err } case *CommonTlsContext_CombinedValidationContext: b.EncodeVarint(8<<3 | proto.WireBytes) if err := b.EncodeMessage(x.CombinedValidationContext); err != nil { return err } case nil: default: return fmt.Errorf("CommonTlsContext.ValidationContextType has unexpected type %T", x) } return nil } func _CommonTlsContext_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*CommonTlsContext) switch tag { case 3: // validation_context_type.validation_context if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(CertificateValidationContext) err := b.DecodeMessage(msg) m.ValidationContextType = &CommonTlsContext_ValidationContext{msg} return true, err case 7: // validation_context_type.validation_context_sds_secret_config if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(SdsSecretConfig) err := b.DecodeMessage(msg) m.ValidationContextType = 
&CommonTlsContext_ValidationContextSdsSecretConfig{msg} return true, err case 8: // validation_context_type.combined_validation_context if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(CommonTlsContext_CombinedCertificateValidationContext) err := b.DecodeMessage(msg) m.ValidationContextType = &CommonTlsContext_CombinedValidationContext{msg} return true, err default: return false, nil } } func _CommonTlsContext_OneofSizer(msg proto.Message) (n int) { m := msg.(*CommonTlsContext) // validation_context_type switch x := m.ValidationContextType.(type) { case *CommonTlsContext_ValidationContext: s := proto.Size(x.ValidationContext) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *CommonTlsContext_ValidationContextSdsSecretConfig: s := proto.Size(x.ValidationContextSdsSecretConfig) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *CommonTlsContext_CombinedValidationContext: s := proto.Size(x.CombinedValidationContext) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } type CommonTlsContext_CombinedCertificateValidationContext struct { DefaultValidationContext *CertificateValidationContext `protobuf:"bytes,1,opt,name=default_validation_context,json=defaultValidationContext,proto3" json:"default_validation_context,omitempty"` ValidationContextSdsSecretConfig *SdsSecretConfig `protobuf:"bytes,2,opt,name=validation_context_sds_secret_config,json=validationContextSdsSecretConfig,proto3" json:"validation_context_sds_secret_config,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *CommonTlsContext_CombinedCertificateValidationContext) Reset() { *m = CommonTlsContext_CombinedCertificateValidationContext{} } func (m *CommonTlsContext_CombinedCertificateValidationContext) String() string { return proto.CompactTextString(m) } func (*CommonTlsContext_CombinedCertificateValidationContext) ProtoMessage() {} func (*CommonTlsContext_CombinedCertificateValidationContext) Descriptor() ([]byte, []int) { return fileDescriptor_cert_f82beca1d890b9d7, []int{4, 0} } func (m *CommonTlsContext_CombinedCertificateValidationContext) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_CommonTlsContext_CombinedCertificateValidationContext.Unmarshal(m, b) } func (m *CommonTlsContext_CombinedCertificateValidationContext) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_CommonTlsContext_CombinedCertificateValidationContext.Marshal(b, m, deterministic) } func (dst *CommonTlsContext_CombinedCertificateValidationContext) XXX_Merge(src proto.Message) { xxx_messageInfo_CommonTlsContext_CombinedCertificateValidationContext.Merge(dst, src) } func (m *CommonTlsContext_CombinedCertificateValidationContext) XXX_Size() int { return xxx_messageInfo_CommonTlsContext_CombinedCertificateValidationContext.Size(m) } func (m *CommonTlsContext_CombinedCertificateValidationContext) XXX_DiscardUnknown() { xxx_messageInfo_CommonTlsContext_CombinedCertificateValidationContext.DiscardUnknown(m) } var xxx_messageInfo_CommonTlsContext_CombinedCertificateValidationContext proto.InternalMessageInfo func (m *CommonTlsContext_CombinedCertificateValidationContext) GetDefaultValidationContext() *CertificateValidationContext { if m != nil { return m.DefaultValidationContext } return nil } func (m *CommonTlsContext_CombinedCertificateValidationContext) 
GetValidationContextSdsSecretConfig() *SdsSecretConfig { if m != nil { return m.ValidationContextSdsSecretConfig } return nil } type UpstreamTlsContext struct { CommonTlsContext *CommonTlsContext `protobuf:"bytes,1,opt,name=common_tls_context,json=commonTlsContext,proto3" json:"common_tls_context,omitempty"` Sni string `protobuf:"bytes,2,opt,name=sni,proto3" json:"sni,omitempty"` AllowRenegotiation bool `protobuf:"varint,3,opt,name=allow_renegotiation,json=allowRenegotiation,proto3" json:"allow_renegotiation,omitempty"` MaxSessionKeys *wrappers.UInt32Value `protobuf:"bytes,4,opt,name=max_session_keys,json=maxSessionKeys,proto3" json:"max_session_keys,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *UpstreamTlsContext) Reset() { *m = UpstreamTlsContext{} } func (m *UpstreamTlsContext) String() string { return proto.CompactTextString(m) } func (*UpstreamTlsContext) ProtoMessage() {} func (*UpstreamTlsContext) Descriptor() ([]byte, []int) { return fileDescriptor_cert_f82beca1d890b9d7, []int{5} } func (m *UpstreamTlsContext) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_UpstreamTlsContext.Unmarshal(m, b) } func (m *UpstreamTlsContext) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_UpstreamTlsContext.Marshal(b, m, deterministic) } func (dst *UpstreamTlsContext) XXX_Merge(src proto.Message) { xxx_messageInfo_UpstreamTlsContext.Merge(dst, src) } func (m *UpstreamTlsContext) XXX_Size() int { return xxx_messageInfo_UpstreamTlsContext.Size(m) } func (m *UpstreamTlsContext) XXX_DiscardUnknown() { xxx_messageInfo_UpstreamTlsContext.DiscardUnknown(m) } var xxx_messageInfo_UpstreamTlsContext proto.InternalMessageInfo func (m *UpstreamTlsContext) GetCommonTlsContext() *CommonTlsContext { if m != nil { return m.CommonTlsContext } return nil } func (m *UpstreamTlsContext) GetSni() string { if m != nil { return m.Sni } return "" } func (m *UpstreamTlsContext) GetAllowRenegotiation() bool { if m != nil { return m.AllowRenegotiation } return false } func (m *UpstreamTlsContext) GetMaxSessionKeys() *wrappers.UInt32Value { if m != nil { return m.MaxSessionKeys } return nil } type DownstreamTlsContext struct { CommonTlsContext *CommonTlsContext `protobuf:"bytes,1,opt,name=common_tls_context,json=commonTlsContext,proto3" json:"common_tls_context,omitempty"` RequireClientCertificate *wrappers.BoolValue `protobuf:"bytes,2,opt,name=require_client_certificate,json=requireClientCertificate,proto3" json:"require_client_certificate,omitempty"` RequireSni *wrappers.BoolValue `protobuf:"bytes,3,opt,name=require_sni,json=requireSni,proto3" json:"require_sni,omitempty"` // Types that are valid to be assigned to SessionTicketKeysType: // *DownstreamTlsContext_SessionTicketKeys // *DownstreamTlsContext_SessionTicketKeysSdsSecretConfig SessionTicketKeysType isDownstreamTlsContext_SessionTicketKeysType `protobuf_oneof:"session_ticket_keys_type"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *DownstreamTlsContext) Reset() { *m = DownstreamTlsContext{} } func (m *DownstreamTlsContext) String() string { return proto.CompactTextString(m) } func (*DownstreamTlsContext) ProtoMessage() {} func (*DownstreamTlsContext) Descriptor() ([]byte, []int) { return fileDescriptor_cert_f82beca1d890b9d7, []int{6} } func (m *DownstreamTlsContext) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_DownstreamTlsContext.Unmarshal(m, b) } func (m 
*DownstreamTlsContext) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_DownstreamTlsContext.Marshal(b, m, deterministic) } func (dst *DownstreamTlsContext) XXX_Merge(src proto.Message) { xxx_messageInfo_DownstreamTlsContext.Merge(dst, src) } func (m *DownstreamTlsContext) XXX_Size() int { return xxx_messageInfo_DownstreamTlsContext.Size(m) } func (m *DownstreamTlsContext) XXX_DiscardUnknown() { xxx_messageInfo_DownstreamTlsContext.DiscardUnknown(m) } var xxx_messageInfo_DownstreamTlsContext proto.InternalMessageInfo func (m *DownstreamTlsContext) GetCommonTlsContext() *CommonTlsContext { if m != nil { return m.CommonTlsContext } return nil } func (m *DownstreamTlsContext) GetRequireClientCertificate() *wrappers.BoolValue { if m != nil { return m.RequireClientCertificate } return nil } func (m *DownstreamTlsContext) GetRequireSni() *wrappers.BoolValue { if m != nil { return m.RequireSni } return nil } type isDownstreamTlsContext_SessionTicketKeysType interface { isDownstreamTlsContext_SessionTicketKeysType() } type DownstreamTlsContext_SessionTicketKeys struct { SessionTicketKeys *TlsSessionTicketKeys `protobuf:"bytes,4,opt,name=session_ticket_keys,json=sessionTicketKeys,proto3,oneof"` } type DownstreamTlsContext_SessionTicketKeysSdsSecretConfig struct { SessionTicketKeysSdsSecretConfig *SdsSecretConfig `protobuf:"bytes,5,opt,name=session_ticket_keys_sds_secret_config,json=sessionTicketKeysSdsSecretConfig,proto3,oneof"` } func (*DownstreamTlsContext_SessionTicketKeys) isDownstreamTlsContext_SessionTicketKeysType() {} func (*DownstreamTlsContext_SessionTicketKeysSdsSecretConfig) isDownstreamTlsContext_SessionTicketKeysType() { } func (m *DownstreamTlsContext) GetSessionTicketKeysType() isDownstreamTlsContext_SessionTicketKeysType { if m != nil { return m.SessionTicketKeysType } return nil } func (m *DownstreamTlsContext) GetSessionTicketKeys() *TlsSessionTicketKeys { if x, ok := m.GetSessionTicketKeysType().(*DownstreamTlsContext_SessionTicketKeys); ok { return x.SessionTicketKeys } return nil } func (m *DownstreamTlsContext) GetSessionTicketKeysSdsSecretConfig() *SdsSecretConfig { if x, ok := m.GetSessionTicketKeysType().(*DownstreamTlsContext_SessionTicketKeysSdsSecretConfig); ok { return x.SessionTicketKeysSdsSecretConfig } return nil } // XXX_OneofFuncs is for the internal use of the proto package. 
func (*DownstreamTlsContext) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _DownstreamTlsContext_OneofMarshaler, _DownstreamTlsContext_OneofUnmarshaler, _DownstreamTlsContext_OneofSizer, []interface{}{ (*DownstreamTlsContext_SessionTicketKeys)(nil), (*DownstreamTlsContext_SessionTicketKeysSdsSecretConfig)(nil), } } func _DownstreamTlsContext_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*DownstreamTlsContext) // session_ticket_keys_type switch x := m.SessionTicketKeysType.(type) { case *DownstreamTlsContext_SessionTicketKeys: b.EncodeVarint(4<<3 | proto.WireBytes) if err := b.EncodeMessage(x.SessionTicketKeys); err != nil { return err } case *DownstreamTlsContext_SessionTicketKeysSdsSecretConfig: b.EncodeVarint(5<<3 | proto.WireBytes) if err := b.EncodeMessage(x.SessionTicketKeysSdsSecretConfig); err != nil { return err } case nil: default: return fmt.Errorf("DownstreamTlsContext.SessionTicketKeysType has unexpected type %T", x) } return nil } func _DownstreamTlsContext_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*DownstreamTlsContext) switch tag { case 4: // session_ticket_keys_type.session_ticket_keys if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(TlsSessionTicketKeys) err := b.DecodeMessage(msg) m.SessionTicketKeysType = &DownstreamTlsContext_SessionTicketKeys{msg} return true, err case 5: // session_ticket_keys_type.session_ticket_keys_sds_secret_config if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(SdsSecretConfig) err := b.DecodeMessage(msg) m.SessionTicketKeysType = &DownstreamTlsContext_SessionTicketKeysSdsSecretConfig{msg} return true, err default: return false, nil } } func _DownstreamTlsContext_OneofSizer(msg proto.Message) (n int) { m := msg.(*DownstreamTlsContext) // session_ticket_keys_type switch x := m.SessionTicketKeysType.(type) { case *DownstreamTlsContext_SessionTicketKeys: s := proto.Size(x.SessionTicketKeys) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *DownstreamTlsContext_SessionTicketKeysSdsSecretConfig: s := proto.Size(x.SessionTicketKeysSdsSecretConfig) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } type SdsSecretConfig struct { Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` SdsConfig *config_source.ConfigSource `protobuf:"bytes,2,opt,name=sds_config,json=sdsConfig,proto3" json:"sds_config,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *SdsSecretConfig) Reset() { *m = SdsSecretConfig{} } func (m *SdsSecretConfig) String() string { return proto.CompactTextString(m) } func (*SdsSecretConfig) ProtoMessage() {} func (*SdsSecretConfig) Descriptor() ([]byte, []int) { return fileDescriptor_cert_f82beca1d890b9d7, []int{7} } func (m *SdsSecretConfig) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_SdsSecretConfig.Unmarshal(m, b) } func (m *SdsSecretConfig) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_SdsSecretConfig.Marshal(b, m, deterministic) } func (dst *SdsSecretConfig) XXX_Merge(src proto.Message) { xxx_messageInfo_SdsSecretConfig.Merge(dst, src) } func (m *SdsSecretConfig) XXX_Size() int { 
return xxx_messageInfo_SdsSecretConfig.Size(m) } func (m *SdsSecretConfig) XXX_DiscardUnknown() { xxx_messageInfo_SdsSecretConfig.DiscardUnknown(m) } var xxx_messageInfo_SdsSecretConfig proto.InternalMessageInfo func (m *SdsSecretConfig) GetName() string { if m != nil { return m.Name } return "" } func (m *SdsSecretConfig) GetSdsConfig() *config_source.ConfigSource { if m != nil { return m.SdsConfig } return nil } type Secret struct { Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` // Types that are valid to be assigned to Type: // *Secret_TlsCertificate // *Secret_SessionTicketKeys // *Secret_ValidationContext Type isSecret_Type `protobuf_oneof:"type"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Secret) Reset() { *m = Secret{} } func (m *Secret) String() string { return proto.CompactTextString(m) } func (*Secret) ProtoMessage() {} func (*Secret) Descriptor() ([]byte, []int) { return fileDescriptor_cert_f82beca1d890b9d7, []int{8} } func (m *Secret) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Secret.Unmarshal(m, b) } func (m *Secret) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Secret.Marshal(b, m, deterministic) } func (dst *Secret) XXX_Merge(src proto.Message) { xxx_messageInfo_Secret.Merge(dst, src) } func (m *Secret) XXX_Size() int { return xxx_messageInfo_Secret.Size(m) } func (m *Secret) XXX_DiscardUnknown() { xxx_messageInfo_Secret.DiscardUnknown(m) } var xxx_messageInfo_Secret proto.InternalMessageInfo func (m *Secret) GetName() string { if m != nil { return m.Name } return "" } type isSecret_Type interface { isSecret_Type() } type Secret_TlsCertificate struct { TlsCertificate *TlsCertificate `protobuf:"bytes,2,opt,name=tls_certificate,json=tlsCertificate,proto3,oneof"` } type Secret_SessionTicketKeys struct { SessionTicketKeys *TlsSessionTicketKeys `protobuf:"bytes,3,opt,name=session_ticket_keys,json=sessionTicketKeys,proto3,oneof"` } type Secret_ValidationContext struct { ValidationContext *CertificateValidationContext `protobuf:"bytes,4,opt,name=validation_context,json=validationContext,proto3,oneof"` } func (*Secret_TlsCertificate) isSecret_Type() {} func (*Secret_SessionTicketKeys) isSecret_Type() {} func (*Secret_ValidationContext) isSecret_Type() {} func (m *Secret) GetType() isSecret_Type { if m != nil { return m.Type } return nil } func (m *Secret) GetTlsCertificate() *TlsCertificate { if x, ok := m.GetType().(*Secret_TlsCertificate); ok { return x.TlsCertificate } return nil } func (m *Secret) GetSessionTicketKeys() *TlsSessionTicketKeys { if x, ok := m.GetType().(*Secret_SessionTicketKeys); ok { return x.SessionTicketKeys } return nil } func (m *Secret) GetValidationContext() *CertificateValidationContext { if x, ok := m.GetType().(*Secret_ValidationContext); ok { return x.ValidationContext } return nil } // XXX_OneofFuncs is for the internal use of the proto package. 
func (*Secret) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _Secret_OneofMarshaler, _Secret_OneofUnmarshaler, _Secret_OneofSizer, []interface{}{ (*Secret_TlsCertificate)(nil), (*Secret_SessionTicketKeys)(nil), (*Secret_ValidationContext)(nil), } } func _Secret_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*Secret) // type switch x := m.Type.(type) { case *Secret_TlsCertificate: b.EncodeVarint(2<<3 | proto.WireBytes) if err := b.EncodeMessage(x.TlsCertificate); err != nil { return err } case *Secret_SessionTicketKeys: b.EncodeVarint(3<<3 | proto.WireBytes) if err := b.EncodeMessage(x.SessionTicketKeys); err != nil { return err } case *Secret_ValidationContext: b.EncodeVarint(4<<3 | proto.WireBytes) if err := b.EncodeMessage(x.ValidationContext); err != nil { return err } case nil: default: return fmt.Errorf("Secret.Type has unexpected type %T", x) } return nil } func _Secret_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*Secret) switch tag { case 2: // type.tls_certificate if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(TlsCertificate) err := b.DecodeMessage(msg) m.Type = &Secret_TlsCertificate{msg} return true, err case 3: // type.session_ticket_keys if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(TlsSessionTicketKeys) err := b.DecodeMessage(msg) m.Type = &Secret_SessionTicketKeys{msg} return true, err case 4: // type.validation_context if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(CertificateValidationContext) err := b.DecodeMessage(msg) m.Type = &Secret_ValidationContext{msg} return true, err default: return false, nil } } func _Secret_OneofSizer(msg proto.Message) (n int) { m := msg.(*Secret) // type switch x := m.Type.(type) { case *Secret_TlsCertificate: s := proto.Size(x.TlsCertificate) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *Secret_SessionTicketKeys: s := proto.Size(x.SessionTicketKeys) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *Secret_ValidationContext: s := proto.Size(x.ValidationContext) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } func init() { proto.RegisterType((*TlsParameters)(nil), "envoy.api.v2.auth.TlsParameters") proto.RegisterType((*TlsCertificate)(nil), "envoy.api.v2.auth.TlsCertificate") proto.RegisterType((*TlsSessionTicketKeys)(nil), "envoy.api.v2.auth.TlsSessionTicketKeys") proto.RegisterType((*CertificateValidationContext)(nil), "envoy.api.v2.auth.CertificateValidationContext") proto.RegisterType((*CommonTlsContext)(nil), "envoy.api.v2.auth.CommonTlsContext") proto.RegisterType((*CommonTlsContext_CombinedCertificateValidationContext)(nil), "envoy.api.v2.auth.CommonTlsContext.CombinedCertificateValidationContext") proto.RegisterType((*UpstreamTlsContext)(nil), "envoy.api.v2.auth.UpstreamTlsContext") proto.RegisterType((*DownstreamTlsContext)(nil), "envoy.api.v2.auth.DownstreamTlsContext") proto.RegisterType((*SdsSecretConfig)(nil), "envoy.api.v2.auth.SdsSecretConfig") proto.RegisterType((*Secret)(nil), "envoy.api.v2.auth.Secret") proto.RegisterEnum("envoy.api.v2.auth.TlsParameters_TlsProtocol", TlsParameters_TlsProtocol_name, TlsParameters_TlsProtocol_value) } func init() { 
proto.RegisterFile("envoy/api/v2/auth/cert.proto", fileDescriptor_cert_f82beca1d890b9d7) } var fileDescriptor_cert_f82beca1d890b9d7 = []byte{ // 1295 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xbc, 0x56, 0x4d, 0x77, 0xdb, 0x44, 0x17, 0x8e, 0x6c, 0xc7, 0x75, 0xae, 0xdb, 0x54, 0x99, 0xf6, 0x3d, 0x51, 0xfc, 0x86, 0xd6, 0x55, 0xdb, 0x43, 0x16, 0x3d, 0x36, 0x75, 0x61, 0xc1, 0x57, 0xa1, 0x76, 0xe1, 0x84, 0x36, 0xd0, 0x22, 0x3b, 0x3d, 0xc0, 0x46, 0x4c, 0xe4, 0x49, 0x3c, 0x44, 0x5f, 0xcc, 0x8c, 0x9d, 0x98, 0x05, 0x0b, 0x96, 0x5d, 0x76, 0xcd, 0x86, 0x5f, 0xc0, 0x9e, 0x15, 0x7f, 0x80, 0x1f, 0xc2, 0xa6, 0x6b, 0x56, 0xc0, 0x99, 0x91, 0x14, 0x4b, 0x96, 0x5a, 0x1b, 0x4e, 0x0f, 0x3b, 0xcd, 0xdc, 0xb9, 0xcf, 0x33, 0x73, 0xef, 0x73, 0xef, 0x15, 0x6c, 0x13, 0x7f, 0x12, 0x4c, 0xdb, 0x38, 0xa4, 0xed, 0x49, 0xa7, 0x8d, 0xc7, 0x62, 0xd4, 0x76, 0x08, 0x13, 0xad, 0x90, 0x05, 0x22, 0x40, 0x1b, 0xca, 0xda, 0xc2, 0x21, 0x6d, 0x4d, 0x3a, 0x2d, 0x69, 0x6d, 0x64, 0x1d, 0x9c, 0x80, 0x91, 0xf6, 0x01, 0xe6, 0x24, 0x72, 0x68, 0xdc, 0xcc, 0x5b, 0x9d, 0xc0, 0x3f, 0xa4, 0x47, 0x36, 0x0f, 0xc6, 0xcc, 0x49, 0x8e, 0x5d, 0x39, 0x0a, 0x82, 0x23, 0x97, 0xb4, 0xd5, 0xea, 0x60, 0x7c, 0xd8, 0x3e, 0x61, 0x38, 0x0c, 0x09, 0xe3, 0xb1, 0x7d, 0x73, 0x82, 0x5d, 0x3a, 0xc4, 0x82, 0xb4, 0x93, 0x8f, 0xc8, 0x60, 0xfe, 0x58, 0x86, 0x0b, 0x03, 0x97, 0x3f, 0xc6, 0x0c, 0x7b, 0x44, 0x10, 0xc6, 0xd1, 0x14, 0xb6, 0x85, 0xcb, 0x6d, 0x8f, 0xfa, 0xd4, 0x1b, 0x7b, 0xb6, 0x3a, 0xe6, 0x04, 0xae, 0x3d, 0x21, 0x8c, 0xd3, 0xc0, 0x37, 0xb4, 0xa6, 0xb6, 0xb3, 0xde, 0xb9, 0xd5, 0xca, 0xbd, 0xa4, 0x95, 0xc1, 0x51, 0xab, 0xd8, 0xb7, 0x0b, 0xbf, 0xfc, 0xfe, 0x6b, 0x79, 0xf5, 0x07, 0xad, 0xa4, 0x6b, 0xd6, 0x96, 0x70, 0xf9, 0xa7, 0x11, 0x78, 0x62, 0x7f, 0x12, 0x41, 0x9f, 0x51, 0xe3, 0xd3, 0x62, 0xea, 0xd2, 0xab, 0xa0, 0x8e, 0xc0, 0xe7, 0xa9, 0xaf, 0xc3, 0x05, 0x87, 0x86, 0x23, 0xc2, 0x6c, 0x3e, 0xa6, 0x82, 0x70, 0xa3, 0xdc, 0x2c, 0xef, 0xac, 0x59, 0xe7, 0xa3, 0xcd, 0xbe, 0xda, 0x43, 0x57, 0xa1, 0x4e, 0x9c, 0xe1, 0xc8, 0x76, 0xc6, 0x6c, 0x42, 0xb8, 0x51, 0x51, 0x47, 0x40, 0x6e, 0xf5, 0xd4, 0x8e, 0xf9, 0x08, 0xea, 0x29, 0x6e, 0x74, 0x1e, 0x6a, 0x83, 0xbd, 0xbe, 0x7d, 0x6f, 0x7f, 0xf0, 0x48, 0x5f, 0x41, 0x75, 0x38, 0x37, 0xd8, 0xeb, 0x4f, 0x6e, 0xdb, 0x6f, 0xe8, 0xda, 0x6c, 0x71, 0x5b, 0x2f, 0xcd, 0x16, 0x1d, 0xbd, 0x3c, 0x5b, 0xdc, 0xd1, 0x2b, 0xe6, 0x1f, 0x25, 0x58, 0x1f, 0xb8, 0xbc, 0x47, 0x98, 0xa0, 0x87, 0xd4, 0xc1, 0x82, 0xa0, 0x07, 0xb0, 0xe1, 0xcc, 0x96, 0xb6, 0x33, 0xc2, 0x34, 0x4a, 0x4a, 0xbd, 0xf3, 0x5a, 0x36, 0x32, 0x52, 0x2d, 0xad, 0xfb, 0x58, 0xe0, 0xbe, 0x92, 0x8a, 0xa5, 0xa7, 0xfc, 0x7a, 0xd2, 0x0d, 0xdd, 0x85, 0x7a, 0xc8, 0xe8, 0x44, 0xe2, 0x1c, 0x93, 0xa9, 0x8a, 0xef, 0x42, 0x14, 0x88, 0x3d, 0x1e, 0x92, 0x29, 0x7a, 0x1b, 0x6a, 0x21, 0xe6, 0xfc, 0x24, 0x60, 0x43, 0xa3, 0xbc, 0x8c, 0xf3, 0xd9, 0x71, 0x49, 0x1d, 0x38, 0x3c, 0xb4, 0xb9, 0xc0, 0xa1, 0x4b, 0x8c, 0xca, 0x52, 0xd4, 0xd2, 0xa3, 0xaf, 0x1c, 0x90, 0x0d, 0xdb, 0x9c, 0x1e, 0xf9, 0x64, 0x68, 0xa7, 0xa3, 0x21, 0xa8, 0x47, 0xb8, 0xc0, 0x5e, 0x68, 0xac, 0x36, 0xcb, 0x8b, 0x01, 0x1b, 0x11, 0x44, 0x2a, 0xbc, 0x83, 0x04, 0xc0, 0xdc, 0x87, 0xcb, 0x03, 0x97, 0xf7, 0x09, 0x97, 0xfa, 0x18, 0x50, 0xe7, 0x98, 0x88, 0x87, 0x64, 0xca, 0xd1, 0xfb, 0x50, 0x39, 0x26, 0x53, 0x6e, 0x68, 0x4b, 0x10, 0xc4, 0xea, 0x7b, 0xa6, 0x95, 0x6a, 0x9a, 0xa5, 0xdc, 0xcc, 0xdf, 0x2a, 0xb0, 0x9d, 0xe2, 0x7b, 0x12, 0x95, 0x23, 0x0d, 0xfc, 0x5e, 0xe0, 0x0b, 0x72, 0x2a, 0xd0, 0x7b, 0x00, 0x82, 0x8d, 0xb9, 0x90, 0x2f, 0xc3, 0xcb, 0x25, 0x76, 0x2d, 0x76, 0xe8, 0x61, 0xb4, 0x0b, 0x9b, 
0x13, 0xc2, 0xe8, 0xe1, 0x34, 0x13, 0x16, 0x1e, 0x1e, 0xd3, 0x48, 0xd1, 0x5d, 0x5d, 0xde, 0xa8, 0xfe, 0x4c, 0xab, 0x99, 0x55, 0x56, 0x69, 0xde, 0xda, 0xb9, 0x65, 0xfd, 0x2f, 0x72, 0x48, 0x5d, 0xaa, 0x1f, 0x1e, 0xd3, 0x17, 0x20, 0x8d, 0x30, 0x1f, 0x19, 0xa5, 0x02, 0xa4, 0x0f, 0x77, 0xec, 0x02, 0xa4, 0x5d, 0xcc, 0x47, 0xe8, 0xad, 0x33, 0x24, 0x3e, 0x3e, 0xf8, 0x86, 0x38, 0xc2, 0xc6, 0xae, 0xb0, 0x7d, 0xec, 0x91, 0xb8, 0x84, 0x2e, 0x47, 0xe6, 0x7e, 0x64, 0xbd, 0xe7, 0x8a, 0xcf, 0xb0, 0x27, 0x85, 0x7e, 0x89, 0x91, 0x6f, 0xc7, 0x94, 0x11, 0x3b, 0xad, 0x94, 0x55, 0x15, 0x91, 0x46, 0x2b, 0xea, 0x78, 0xad, 0xa4, 0xe3, 0xb5, 0xba, 0x41, 0xe0, 0x3e, 0xc1, 0xee, 0x98, 0x58, 0x1b, 0xb1, 0xdb, 0xa3, 0x99, 0x5a, 0x8e, 0xe1, 0x46, 0x82, 0xf5, 0x52, 0xd5, 0x54, 0x17, 0x82, 0x5f, 0x8b, 0x71, 0xfa, 0x2f, 0x54, 0x0e, 0x6a, 0x43, 0xd9, 0x61, 0xae, 0x71, 0x6e, 0x99, 0xd4, 0xc9, 0x93, 0xe8, 0x1d, 0xd8, 0xc2, 0xae, 0x1b, 0x9c, 0xd8, 0xe4, 0x34, 0xa4, 0x2c, 0x7b, 0x39, 0xa3, 0xd6, 0xd4, 0x76, 0x6a, 0xd6, 0xa6, 0x3a, 0xf0, 0x51, 0x64, 0x4f, 0xb1, 0x9a, 0xcf, 0xcf, 0x81, 0xde, 0x0b, 0x3c, 0x2f, 0xf0, 0x65, 0x9f, 0x88, 0x35, 0xf4, 0x01, 0x80, 0x6c, 0xa4, 0xa1, 0x6c, 0x89, 0x3c, 0xd6, 0x50, 0x73, 0x51, 0xdb, 0xb4, 0xd6, 0x44, 0xbc, 0xe4, 0x68, 0x0f, 0x74, 0x09, 0x90, 0xba, 0x07, 0x57, 0x59, 0xaf, 0x77, 0xae, 0x15, 0xc3, 0xa4, 0xae, 0x64, 0x5d, 0x14, 0x99, 0x35, 0x47, 0xdf, 0x81, 0x39, 0x87, 0x66, 0xf3, 0x21, 0xb7, 0x39, 0x71, 0x18, 0x11, 0x76, 0x34, 0xcf, 0xb8, 0x51, 0x55, 0xf8, 0x66, 0x01, 0x7e, 0x7f, 0xc8, 0xfb, 0xea, 0x6c, 0x4f, 0x1d, 0x9d, 0x55, 0x95, 0xae, 0x59, 0x57, 0xb2, 0x64, 0x73, 0x47, 0x39, 0xfa, 0x1a, 0xd0, 0xe4, 0xac, 0xc6, 0x24, 0x97, 0x0c, 0x50, 0xdc, 0xac, 0xda, 0x05, 0x5c, 0x2f, 0xab, 0xcd, 0xdd, 0x15, 0x6b, 0x63, 0x92, 0x2b, 0x58, 0x01, 0x37, 0xf2, 0x0c, 0xf9, 0x07, 0xc6, 0x7a, 0x58, 0xe2, 0x7d, 0xbb, 0x2b, 0x56, 0x33, 0x47, 0x33, 0x77, 0x06, 0x3d, 0xd5, 0xe0, 0xff, 0x4e, 0xe0, 0x1d, 0x50, 0x29, 0xe6, 0x82, 0x17, 0xd6, 0x14, 0xdb, 0x6e, 0xd1, 0x0b, 0xe7, 0xd4, 0x22, 0x37, 0x14, 0xcc, 0x82, 0xa7, 0x6f, 0x25, 0x74, 0xf9, 0x9e, 0x75, 0x13, 0xd6, 0xb1, 0x1b, 0xfa, 0x67, 0x13, 0x3b, 0x99, 0x8d, 0x17, 0xe4, 0x6e, 0x32, 0x0e, 0x79, 0xe3, 0xa7, 0x12, 0xdc, 0x58, 0x86, 0x0c, 0x4d, 0xa1, 0x31, 0x24, 0x87, 0x78, 0xec, 0x8a, 0xa2, 0xa7, 0x69, 0xff, 0x2a, 0x79, 0xb1, 0x6a, 0x9e, 0x2a, 0xd5, 0x18, 0x31, 0x7c, 0x9e, 0xfa, 0xfb, 0x25, 0xb3, 0x59, 0x5a, 0x36, 0x9b, 0x19, 0xde, 0x85, 0x79, 0xed, 0x6e, 0xc1, 0x66, 0x01, 0xbf, 0x98, 0x86, 0xe4, 0x41, 0xa5, 0xb6, 0xaa, 0x57, 0xcd, 0x3f, 0x35, 0x40, 0xfb, 0x21, 0x17, 0x8c, 0x60, 0x2f, 0x55, 0xf2, 0x9f, 0x03, 0x72, 0x54, 0x62, 0x6d, 0x55, 0x6a, 0x99, 0x50, 0x5d, 0x5f, 0x42, 0x05, 0x96, 0xee, 0xcc, 0x77, 0x91, 0x6d, 0x28, 0x73, 0x9f, 0xaa, 0x97, 0xae, 0xc5, 0xaf, 0x60, 0xe5, 0x9d, 0xbf, 0x34, 0x4b, 0x6e, 0xa3, 0x36, 0x5c, 0x8a, 0x9a, 0x16, 0x23, 0x3e, 0x39, 0x0a, 0x04, 0x55, 0x37, 0x56, 0x95, 0x55, 0xb3, 0x90, 0x32, 0x59, 0x69, 0x0b, 0xfa, 0x18, 0x74, 0x0f, 0x9f, 0xda, 0x3c, 0x9a, 0xa8, 0xb6, 0x1a, 0xa2, 0xd1, 0xd8, 0xdf, 0xce, 0xf5, 0xdb, 0xfd, 0x4f, 0x7c, 0x71, 0xa7, 0x13, 0x75, 0xdc, 0x75, 0x0f, 0x9f, 0xc6, 0x63, 0x58, 0x0e, 0x60, 0xf3, 0x79, 0x19, 0x2e, 0xdf, 0x0f, 0x4e, 0xfc, 0xff, 0x22, 0x04, 0x5f, 0x40, 0x23, 0x99, 0x1b, 0x8e, 0x4b, 0x89, 0x2f, 0x32, 0xad, 0xb9, 0xb4, 0x70, 0x5a, 0x18, 0xb1, 0x77, 0x4f, 0x39, 0xa7, 0x7f, 0xe3, 0xde, 0x85, 0xfa, 0xd9, 0x44, 0xf2, 0x69, 0xdc, 0x90, 0x5e, 0x06, 0x05, 0xc9, 0xe0, 0xf1, 0x29, 0xfa, 0x12, 0x2e, 0x25, 0x61, 0x14, 0xea, 0xcf, 0x24, 0x1d, 0xcd, 0xd7, 0x8b, 0x3b, 0x74, 0xee, 0x4f, 0x46, 0x76, 0x33, 0x9e, 0xfb, 0xbd, 0x19, 
0xc3, 0xcd, 0x02, 0xe8, 0x82, 0x02, 0x58, 0xfd, 0x27, 0xed, 0x2c, 0xc7, 0x33, 0x2f, 0xfb, 0x06, 0x18, 0x45, 0xb4, 0x52, 0xf7, 0x26, 0x81, 0x8b, 0xf3, 0xdd, 0x0f, 0x41, 0x45, 0xfd, 0x3f, 0xc8, 0xe4, 0xae, 0x59, 0xea, 0x1b, 0xdd, 0x05, 0x90, 0xb7, 0xcc, 0xd4, 0xe7, 0xd5, 0x82, 0xe9, 0x1b, 0x41, 0x24, 0xbf, 0x4e, 0x7c, 0xc8, 0xa3, 0x0d, 0xf3, 0xe7, 0x12, 0x54, 0x23, 0x92, 0x42, 0xf8, 0x3d, 0xb8, 0x38, 0x37, 0xc4, 0x62, 0x8e, 0xc5, 0x13, 0x71, 0x77, 0xc5, 0x5a, 0xcf, 0x8e, 0xa9, 0x17, 0x65, 0xb0, 0xfc, 0x0a, 0x32, 0x58, 0x3c, 0xf1, 0x2a, 0xaf, 0x6e, 0xe2, 0x75, 0xab, 0x50, 0x91, 0x89, 0xe9, 0xbe, 0x09, 0x57, 0x69, 0x10, 0x21, 0x86, 0x2c, 0x38, 0x9d, 0xe6, 0xc1, 0xbb, 0x6b, 0x12, 0x5d, 0x4d, 0x80, 0xc7, 0xda, 0x57, 0x15, 0xb9, 0x75, 0x50, 0x55, 0xe2, 0xbe, 0xf3, 0x77, 0x00, 0x00, 0x00, 0xff, 0xff, 0x3c, 0x5e, 0x86, 0x05, 0x2a, 0x0f, 0x00, 0x00, } grpc-go-1.22.1/balancer/xds/internal/proto/envoy/api/v2/cds/000077500000000000000000000000001351635773100235035ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/envoy/api/v2/cds/cds.pb.go000077500000000000000000002276751351635773100252310ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: envoy/api/v2/cds.proto package envoy_api_v2 import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import any "github.com/golang/protobuf/ptypes/any" import duration "github.com/golang/protobuf/ptypes/duration" import _struct "github.com/golang/protobuf/ptypes/struct" import wrappers "github.com/golang/protobuf/ptypes/wrappers" import _ "google.golang.org/genproto/googleapis/api/annotations" import cert "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/auth/cert" import circuit_breaker "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/cluster/circuit_breaker" import outlier_detection "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/cluster/outlier_detection" import address "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/core/address" import base "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/core/base" import config_source "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/core/config_source" import health_check "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/core/health_check" import protocol "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/core/protocol" import discovery "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/discovery" import eds "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/eds" import percent "google.golang.org/grpc/balancer/xds/internal/proto/envoy/type/percent" import _ "google.golang.org/grpc/balancer/xds/internal/proto/validate" import ( context "golang.org/x/net/context" grpc "google.golang.org/grpc" ) // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. 
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type Cluster_DiscoveryType int32 const ( Cluster_STATIC Cluster_DiscoveryType = 0 Cluster_STRICT_DNS Cluster_DiscoveryType = 1 Cluster_LOGICAL_DNS Cluster_DiscoveryType = 2 Cluster_EDS Cluster_DiscoveryType = 3 Cluster_ORIGINAL_DST Cluster_DiscoveryType = 4 ) var Cluster_DiscoveryType_name = map[int32]string{ 0: "STATIC", 1: "STRICT_DNS", 2: "LOGICAL_DNS", 3: "EDS", 4: "ORIGINAL_DST", } var Cluster_DiscoveryType_value = map[string]int32{ "STATIC": 0, "STRICT_DNS": 1, "LOGICAL_DNS": 2, "EDS": 3, "ORIGINAL_DST": 4, } func (x Cluster_DiscoveryType) String() string { return proto.EnumName(Cluster_DiscoveryType_name, int32(x)) } func (Cluster_DiscoveryType) EnumDescriptor() ([]byte, []int) { return fileDescriptor_cds_1dff7e464f9f9a10, []int{0, 0} } type Cluster_LbPolicy int32 const ( Cluster_ROUND_ROBIN Cluster_LbPolicy = 0 Cluster_LEAST_REQUEST Cluster_LbPolicy = 1 Cluster_RING_HASH Cluster_LbPolicy = 2 Cluster_RANDOM Cluster_LbPolicy = 3 Cluster_ORIGINAL_DST_LB Cluster_LbPolicy = 4 Cluster_MAGLEV Cluster_LbPolicy = 5 ) var Cluster_LbPolicy_name = map[int32]string{ 0: "ROUND_ROBIN", 1: "LEAST_REQUEST", 2: "RING_HASH", 3: "RANDOM", 4: "ORIGINAL_DST_LB", 5: "MAGLEV", } var Cluster_LbPolicy_value = map[string]int32{ "ROUND_ROBIN": 0, "LEAST_REQUEST": 1, "RING_HASH": 2, "RANDOM": 3, "ORIGINAL_DST_LB": 4, "MAGLEV": 5, } func (x Cluster_LbPolicy) String() string { return proto.EnumName(Cluster_LbPolicy_name, int32(x)) } func (Cluster_LbPolicy) EnumDescriptor() ([]byte, []int) { return fileDescriptor_cds_1dff7e464f9f9a10, []int{0, 1} } type Cluster_DnsLookupFamily int32 const ( Cluster_AUTO Cluster_DnsLookupFamily = 0 Cluster_V4_ONLY Cluster_DnsLookupFamily = 1 Cluster_V6_ONLY Cluster_DnsLookupFamily = 2 ) var Cluster_DnsLookupFamily_name = map[int32]string{ 0: "AUTO", 1: "V4_ONLY", 2: "V6_ONLY", } var Cluster_DnsLookupFamily_value = map[string]int32{ "AUTO": 0, "V4_ONLY": 1, "V6_ONLY": 2, } func (x Cluster_DnsLookupFamily) String() string { return proto.EnumName(Cluster_DnsLookupFamily_name, int32(x)) } func (Cluster_DnsLookupFamily) EnumDescriptor() ([]byte, []int) { return fileDescriptor_cds_1dff7e464f9f9a10, []int{0, 2} } type Cluster_ClusterProtocolSelection int32 const ( Cluster_USE_CONFIGURED_PROTOCOL Cluster_ClusterProtocolSelection = 0 Cluster_USE_DOWNSTREAM_PROTOCOL Cluster_ClusterProtocolSelection = 1 ) var Cluster_ClusterProtocolSelection_name = map[int32]string{ 0: "USE_CONFIGURED_PROTOCOL", 1: "USE_DOWNSTREAM_PROTOCOL", } var Cluster_ClusterProtocolSelection_value = map[string]int32{ "USE_CONFIGURED_PROTOCOL": 0, "USE_DOWNSTREAM_PROTOCOL": 1, } func (x Cluster_ClusterProtocolSelection) String() string { return proto.EnumName(Cluster_ClusterProtocolSelection_name, int32(x)) } func (Cluster_ClusterProtocolSelection) EnumDescriptor() ([]byte, []int) { return fileDescriptor_cds_1dff7e464f9f9a10, []int{0, 3} } type Cluster_LbSubsetConfig_LbSubsetFallbackPolicy int32 const ( Cluster_LbSubsetConfig_NO_FALLBACK Cluster_LbSubsetConfig_LbSubsetFallbackPolicy = 0 Cluster_LbSubsetConfig_ANY_ENDPOINT Cluster_LbSubsetConfig_LbSubsetFallbackPolicy = 1 Cluster_LbSubsetConfig_DEFAULT_SUBSET Cluster_LbSubsetConfig_LbSubsetFallbackPolicy = 2 ) var Cluster_LbSubsetConfig_LbSubsetFallbackPolicy_name = map[int32]string{ 0: "NO_FALLBACK", 1: "ANY_ENDPOINT", 2: "DEFAULT_SUBSET", } var Cluster_LbSubsetConfig_LbSubsetFallbackPolicy_value = map[string]int32{ "NO_FALLBACK": 0, "ANY_ENDPOINT": 1, "DEFAULT_SUBSET": 2, } func (x 
Cluster_LbSubsetConfig_LbSubsetFallbackPolicy) String() string { return proto.EnumName(Cluster_LbSubsetConfig_LbSubsetFallbackPolicy_name, int32(x)) } func (Cluster_LbSubsetConfig_LbSubsetFallbackPolicy) EnumDescriptor() ([]byte, []int) { return fileDescriptor_cds_1dff7e464f9f9a10, []int{0, 4, 0} } type Cluster_RingHashLbConfig_HashFunction int32 const ( Cluster_RingHashLbConfig_XX_HASH Cluster_RingHashLbConfig_HashFunction = 0 Cluster_RingHashLbConfig_MURMUR_HASH_2 Cluster_RingHashLbConfig_HashFunction = 1 ) var Cluster_RingHashLbConfig_HashFunction_name = map[int32]string{ 0: "XX_HASH", 1: "MURMUR_HASH_2", } var Cluster_RingHashLbConfig_HashFunction_value = map[string]int32{ "XX_HASH": 0, "MURMUR_HASH_2": 1, } func (x Cluster_RingHashLbConfig_HashFunction) String() string { return proto.EnumName(Cluster_RingHashLbConfig_HashFunction_name, int32(x)) } func (Cluster_RingHashLbConfig_HashFunction) EnumDescriptor() ([]byte, []int) { return fileDescriptor_cds_1dff7e464f9f9a10, []int{0, 6, 0} } type Cluster struct { Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` AltStatName string `protobuf:"bytes,28,opt,name=alt_stat_name,json=altStatName,proto3" json:"alt_stat_name,omitempty"` // Types that are valid to be assigned to ClusterDiscoveryType: // *Cluster_Type // *Cluster_ClusterType ClusterDiscoveryType isCluster_ClusterDiscoveryType `protobuf_oneof:"cluster_discovery_type"` EdsClusterConfig *Cluster_EdsClusterConfig `protobuf:"bytes,3,opt,name=eds_cluster_config,json=edsClusterConfig,proto3" json:"eds_cluster_config,omitempty"` ConnectTimeout *duration.Duration `protobuf:"bytes,4,opt,name=connect_timeout,json=connectTimeout,proto3" json:"connect_timeout,omitempty"` PerConnectionBufferLimitBytes *wrappers.UInt32Value `protobuf:"bytes,5,opt,name=per_connection_buffer_limit_bytes,json=perConnectionBufferLimitBytes,proto3" json:"per_connection_buffer_limit_bytes,omitempty"` LbPolicy Cluster_LbPolicy `protobuf:"varint,6,opt,name=lb_policy,json=lbPolicy,proto3,enum=envoy.api.v2.Cluster_LbPolicy" json:"lb_policy,omitempty"` Hosts []*address.Address `protobuf:"bytes,7,rep,name=hosts,proto3" json:"hosts,omitempty"` // Deprecated: Do not use. 
LoadAssignment *eds.ClusterLoadAssignment `protobuf:"bytes,33,opt,name=load_assignment,json=loadAssignment,proto3" json:"load_assignment,omitempty"` HealthChecks []*health_check.HealthCheck `protobuf:"bytes,8,rep,name=health_checks,json=healthChecks,proto3" json:"health_checks,omitempty"` MaxRequestsPerConnection *wrappers.UInt32Value `protobuf:"bytes,9,opt,name=max_requests_per_connection,json=maxRequestsPerConnection,proto3" json:"max_requests_per_connection,omitempty"` CircuitBreakers *circuit_breaker.CircuitBreakers `protobuf:"bytes,10,opt,name=circuit_breakers,json=circuitBreakers,proto3" json:"circuit_breakers,omitempty"` TlsContext *cert.UpstreamTlsContext `protobuf:"bytes,11,opt,name=tls_context,json=tlsContext,proto3" json:"tls_context,omitempty"` CommonHttpProtocolOptions *protocol.HttpProtocolOptions `protobuf:"bytes,29,opt,name=common_http_protocol_options,json=commonHttpProtocolOptions,proto3" json:"common_http_protocol_options,omitempty"` HttpProtocolOptions *protocol.Http1ProtocolOptions `protobuf:"bytes,13,opt,name=http_protocol_options,json=httpProtocolOptions,proto3" json:"http_protocol_options,omitempty"` Http2ProtocolOptions *protocol.Http2ProtocolOptions `protobuf:"bytes,14,opt,name=http2_protocol_options,json=http2ProtocolOptions,proto3" json:"http2_protocol_options,omitempty"` ExtensionProtocolOptions map[string]*_struct.Struct `protobuf:"bytes,35,rep,name=extension_protocol_options,json=extensionProtocolOptions,proto3" json:"extension_protocol_options,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` TypedExtensionProtocolOptions map[string]*any.Any `protobuf:"bytes,36,rep,name=typed_extension_protocol_options,json=typedExtensionProtocolOptions,proto3" json:"typed_extension_protocol_options,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` DnsRefreshRate *duration.Duration `protobuf:"bytes,16,opt,name=dns_refresh_rate,json=dnsRefreshRate,proto3" json:"dns_refresh_rate,omitempty"` DnsLookupFamily Cluster_DnsLookupFamily `protobuf:"varint,17,opt,name=dns_lookup_family,json=dnsLookupFamily,proto3,enum=envoy.api.v2.Cluster_DnsLookupFamily" json:"dns_lookup_family,omitempty"` DnsResolvers []*address.Address `protobuf:"bytes,18,rep,name=dns_resolvers,json=dnsResolvers,proto3" json:"dns_resolvers,omitempty"` OutlierDetection *outlier_detection.OutlierDetection `protobuf:"bytes,19,opt,name=outlier_detection,json=outlierDetection,proto3" json:"outlier_detection,omitempty"` CleanupInterval *duration.Duration `protobuf:"bytes,20,opt,name=cleanup_interval,json=cleanupInterval,proto3" json:"cleanup_interval,omitempty"` UpstreamBindConfig *address.BindConfig `protobuf:"bytes,21,opt,name=upstream_bind_config,json=upstreamBindConfig,proto3" json:"upstream_bind_config,omitempty"` LbSubsetConfig *Cluster_LbSubsetConfig `protobuf:"bytes,22,opt,name=lb_subset_config,json=lbSubsetConfig,proto3" json:"lb_subset_config,omitempty"` // Types that are valid to be assigned to LbConfig: // *Cluster_RingHashLbConfig_ // *Cluster_OriginalDstLbConfig_ // *Cluster_LeastRequestLbConfig_ LbConfig isCluster_LbConfig `protobuf_oneof:"lb_config"` CommonLbConfig *Cluster_CommonLbConfig `protobuf:"bytes,27,opt,name=common_lb_config,json=commonLbConfig,proto3" json:"common_lb_config,omitempty"` TransportSocket *base.TransportSocket `protobuf:"bytes,24,opt,name=transport_socket,json=transportSocket,proto3" json:"transport_socket,omitempty"` Metadata *base.Metadata 
`protobuf:"bytes,25,opt,name=metadata,proto3" json:"metadata,omitempty"` ProtocolSelection Cluster_ClusterProtocolSelection `protobuf:"varint,26,opt,name=protocol_selection,json=protocolSelection,proto3,enum=envoy.api.v2.Cluster_ClusterProtocolSelection" json:"protocol_selection,omitempty"` UpstreamConnectionOptions *UpstreamConnectionOptions `protobuf:"bytes,30,opt,name=upstream_connection_options,json=upstreamConnectionOptions,proto3" json:"upstream_connection_options,omitempty"` CloseConnectionsOnHostHealthFailure bool `protobuf:"varint,31,opt,name=close_connections_on_host_health_failure,json=closeConnectionsOnHostHealthFailure,proto3" json:"close_connections_on_host_health_failure,omitempty"` DrainConnectionsOnHostRemoval bool `protobuf:"varint,32,opt,name=drain_connections_on_host_removal,json=drainConnectionsOnHostRemoval,proto3" json:"drain_connections_on_host_removal,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Cluster) Reset() { *m = Cluster{} } func (m *Cluster) String() string { return proto.CompactTextString(m) } func (*Cluster) ProtoMessage() {} func (*Cluster) Descriptor() ([]byte, []int) { return fileDescriptor_cds_1dff7e464f9f9a10, []int{0} } func (m *Cluster) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Cluster.Unmarshal(m, b) } func (m *Cluster) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Cluster.Marshal(b, m, deterministic) } func (dst *Cluster) XXX_Merge(src proto.Message) { xxx_messageInfo_Cluster.Merge(dst, src) } func (m *Cluster) XXX_Size() int { return xxx_messageInfo_Cluster.Size(m) } func (m *Cluster) XXX_DiscardUnknown() { xxx_messageInfo_Cluster.DiscardUnknown(m) } var xxx_messageInfo_Cluster proto.InternalMessageInfo func (m *Cluster) GetName() string { if m != nil { return m.Name } return "" } func (m *Cluster) GetAltStatName() string { if m != nil { return m.AltStatName } return "" } type isCluster_ClusterDiscoveryType interface { isCluster_ClusterDiscoveryType() } type Cluster_Type struct { Type Cluster_DiscoveryType `protobuf:"varint,2,opt,name=type,proto3,enum=envoy.api.v2.Cluster_DiscoveryType,oneof"` } type Cluster_ClusterType struct { ClusterType *Cluster_CustomClusterType `protobuf:"bytes,38,opt,name=cluster_type,json=clusterType,proto3,oneof"` } func (*Cluster_Type) isCluster_ClusterDiscoveryType() {} func (*Cluster_ClusterType) isCluster_ClusterDiscoveryType() {} func (m *Cluster) GetClusterDiscoveryType() isCluster_ClusterDiscoveryType { if m != nil { return m.ClusterDiscoveryType } return nil } func (m *Cluster) GetType() Cluster_DiscoveryType { if x, ok := m.GetClusterDiscoveryType().(*Cluster_Type); ok { return x.Type } return Cluster_STATIC } func (m *Cluster) GetClusterType() *Cluster_CustomClusterType { if x, ok := m.GetClusterDiscoveryType().(*Cluster_ClusterType); ok { return x.ClusterType } return nil } func (m *Cluster) GetEdsClusterConfig() *Cluster_EdsClusterConfig { if m != nil { return m.EdsClusterConfig } return nil } func (m *Cluster) GetConnectTimeout() *duration.Duration { if m != nil { return m.ConnectTimeout } return nil } func (m *Cluster) GetPerConnectionBufferLimitBytes() *wrappers.UInt32Value { if m != nil { return m.PerConnectionBufferLimitBytes } return nil } func (m *Cluster) GetLbPolicy() Cluster_LbPolicy { if m != nil { return m.LbPolicy } return Cluster_ROUND_ROBIN } // Deprecated: Do not use. 
func (m *Cluster) GetHosts() []*address.Address { if m != nil { return m.Hosts } return nil } func (m *Cluster) GetLoadAssignment() *eds.ClusterLoadAssignment { if m != nil { return m.LoadAssignment } return nil } func (m *Cluster) GetHealthChecks() []*health_check.HealthCheck { if m != nil { return m.HealthChecks } return nil } func (m *Cluster) GetMaxRequestsPerConnection() *wrappers.UInt32Value { if m != nil { return m.MaxRequestsPerConnection } return nil } func (m *Cluster) GetCircuitBreakers() *circuit_breaker.CircuitBreakers { if m != nil { return m.CircuitBreakers } return nil } func (m *Cluster) GetTlsContext() *cert.UpstreamTlsContext { if m != nil { return m.TlsContext } return nil } func (m *Cluster) GetCommonHttpProtocolOptions() *protocol.HttpProtocolOptions { if m != nil { return m.CommonHttpProtocolOptions } return nil } func (m *Cluster) GetHttpProtocolOptions() *protocol.Http1ProtocolOptions { if m != nil { return m.HttpProtocolOptions } return nil } func (m *Cluster) GetHttp2ProtocolOptions() *protocol.Http2ProtocolOptions { if m != nil { return m.Http2ProtocolOptions } return nil } func (m *Cluster) GetExtensionProtocolOptions() map[string]*_struct.Struct { if m != nil { return m.ExtensionProtocolOptions } return nil } func (m *Cluster) GetTypedExtensionProtocolOptions() map[string]*any.Any { if m != nil { return m.TypedExtensionProtocolOptions } return nil } func (m *Cluster) GetDnsRefreshRate() *duration.Duration { if m != nil { return m.DnsRefreshRate } return nil } func (m *Cluster) GetDnsLookupFamily() Cluster_DnsLookupFamily { if m != nil { return m.DnsLookupFamily } return Cluster_AUTO } func (m *Cluster) GetDnsResolvers() []*address.Address { if m != nil { return m.DnsResolvers } return nil } func (m *Cluster) GetOutlierDetection() *outlier_detection.OutlierDetection { if m != nil { return m.OutlierDetection } return nil } func (m *Cluster) GetCleanupInterval() *duration.Duration { if m != nil { return m.CleanupInterval } return nil } func (m *Cluster) GetUpstreamBindConfig() *address.BindConfig { if m != nil { return m.UpstreamBindConfig } return nil } func (m *Cluster) GetLbSubsetConfig() *Cluster_LbSubsetConfig { if m != nil { return m.LbSubsetConfig } return nil } type isCluster_LbConfig interface { isCluster_LbConfig() } type Cluster_RingHashLbConfig_ struct { RingHashLbConfig *Cluster_RingHashLbConfig `protobuf:"bytes,23,opt,name=ring_hash_lb_config,json=ringHashLbConfig,proto3,oneof"` } type Cluster_OriginalDstLbConfig_ struct { OriginalDstLbConfig *Cluster_OriginalDstLbConfig `protobuf:"bytes,34,opt,name=original_dst_lb_config,json=originalDstLbConfig,proto3,oneof"` } type Cluster_LeastRequestLbConfig_ struct { LeastRequestLbConfig *Cluster_LeastRequestLbConfig `protobuf:"bytes,37,opt,name=least_request_lb_config,json=leastRequestLbConfig,proto3,oneof"` } func (*Cluster_RingHashLbConfig_) isCluster_LbConfig() {} func (*Cluster_OriginalDstLbConfig_) isCluster_LbConfig() {} func (*Cluster_LeastRequestLbConfig_) isCluster_LbConfig() {} func (m *Cluster) GetLbConfig() isCluster_LbConfig { if m != nil { return m.LbConfig } return nil } func (m *Cluster) GetRingHashLbConfig() *Cluster_RingHashLbConfig { if x, ok := m.GetLbConfig().(*Cluster_RingHashLbConfig_); ok { return x.RingHashLbConfig } return nil } func (m *Cluster) GetOriginalDstLbConfig() *Cluster_OriginalDstLbConfig { if x, ok := m.GetLbConfig().(*Cluster_OriginalDstLbConfig_); ok { return x.OriginalDstLbConfig } return nil } func (m *Cluster) GetLeastRequestLbConfig() *Cluster_LeastRequestLbConfig { 
if x, ok := m.GetLbConfig().(*Cluster_LeastRequestLbConfig_); ok { return x.LeastRequestLbConfig } return nil } func (m *Cluster) GetCommonLbConfig() *Cluster_CommonLbConfig { if m != nil { return m.CommonLbConfig } return nil } func (m *Cluster) GetTransportSocket() *base.TransportSocket { if m != nil { return m.TransportSocket } return nil } func (m *Cluster) GetMetadata() *base.Metadata { if m != nil { return m.Metadata } return nil } func (m *Cluster) GetProtocolSelection() Cluster_ClusterProtocolSelection { if m != nil { return m.ProtocolSelection } return Cluster_USE_CONFIGURED_PROTOCOL } func (m *Cluster) GetUpstreamConnectionOptions() *UpstreamConnectionOptions { if m != nil { return m.UpstreamConnectionOptions } return nil } func (m *Cluster) GetCloseConnectionsOnHostHealthFailure() bool { if m != nil { return m.CloseConnectionsOnHostHealthFailure } return false } func (m *Cluster) GetDrainConnectionsOnHostRemoval() bool { if m != nil { return m.DrainConnectionsOnHostRemoval } return false } // XXX_OneofFuncs is for the internal use of the proto package. func (*Cluster) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _Cluster_OneofMarshaler, _Cluster_OneofUnmarshaler, _Cluster_OneofSizer, []interface{}{ (*Cluster_Type)(nil), (*Cluster_ClusterType)(nil), (*Cluster_RingHashLbConfig_)(nil), (*Cluster_OriginalDstLbConfig_)(nil), (*Cluster_LeastRequestLbConfig_)(nil), } } func _Cluster_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*Cluster) // cluster_discovery_type switch x := m.ClusterDiscoveryType.(type) { case *Cluster_Type: b.EncodeVarint(2<<3 | proto.WireVarint) b.EncodeVarint(uint64(x.Type)) case *Cluster_ClusterType: b.EncodeVarint(38<<3 | proto.WireBytes) if err := b.EncodeMessage(x.ClusterType); err != nil { return err } case nil: default: return fmt.Errorf("Cluster.ClusterDiscoveryType has unexpected type %T", x) } // lb_config switch x := m.LbConfig.(type) { case *Cluster_RingHashLbConfig_: b.EncodeVarint(23<<3 | proto.WireBytes) if err := b.EncodeMessage(x.RingHashLbConfig); err != nil { return err } case *Cluster_OriginalDstLbConfig_: b.EncodeVarint(34<<3 | proto.WireBytes) if err := b.EncodeMessage(x.OriginalDstLbConfig); err != nil { return err } case *Cluster_LeastRequestLbConfig_: b.EncodeVarint(37<<3 | proto.WireBytes) if err := b.EncodeMessage(x.LeastRequestLbConfig); err != nil { return err } case nil: default: return fmt.Errorf("Cluster.LbConfig has unexpected type %T", x) } return nil } func _Cluster_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*Cluster) switch tag { case 2: // cluster_discovery_type.type if wire != proto.WireVarint { return true, proto.ErrInternalBadWireType } x, err := b.DecodeVarint() m.ClusterDiscoveryType = &Cluster_Type{Cluster_DiscoveryType(x)} return true, err case 38: // cluster_discovery_type.cluster_type if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(Cluster_CustomClusterType) err := b.DecodeMessage(msg) m.ClusterDiscoveryType = &Cluster_ClusterType{msg} return true, err case 23: // lb_config.ring_hash_lb_config if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(Cluster_RingHashLbConfig) err := b.DecodeMessage(msg) m.LbConfig = &Cluster_RingHashLbConfig_{msg} return true, err case 34: // lb_config.original_dst_lb_config if wire != proto.WireBytes { 
return true, proto.ErrInternalBadWireType } msg := new(Cluster_OriginalDstLbConfig) err := b.DecodeMessage(msg) m.LbConfig = &Cluster_OriginalDstLbConfig_{msg} return true, err case 37: // lb_config.least_request_lb_config if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(Cluster_LeastRequestLbConfig) err := b.DecodeMessage(msg) m.LbConfig = &Cluster_LeastRequestLbConfig_{msg} return true, err default: return false, nil } } func _Cluster_OneofSizer(msg proto.Message) (n int) { m := msg.(*Cluster) // cluster_discovery_type switch x := m.ClusterDiscoveryType.(type) { case *Cluster_Type: n += 1 // tag and wire n += proto.SizeVarint(uint64(x.Type)) case *Cluster_ClusterType: s := proto.Size(x.ClusterType) n += 2 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } // lb_config switch x := m.LbConfig.(type) { case *Cluster_RingHashLbConfig_: s := proto.Size(x.RingHashLbConfig) n += 2 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *Cluster_OriginalDstLbConfig_: s := proto.Size(x.OriginalDstLbConfig) n += 2 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *Cluster_LeastRequestLbConfig_: s := proto.Size(x.LeastRequestLbConfig) n += 2 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } type Cluster_CustomClusterType struct { Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` TypedConfig *any.Any `protobuf:"bytes,2,opt,name=typed_config,json=typedConfig,proto3" json:"typed_config,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Cluster_CustomClusterType) Reset() { *m = Cluster_CustomClusterType{} } func (m *Cluster_CustomClusterType) String() string { return proto.CompactTextString(m) } func (*Cluster_CustomClusterType) ProtoMessage() {} func (*Cluster_CustomClusterType) Descriptor() ([]byte, []int) { return fileDescriptor_cds_1dff7e464f9f9a10, []int{0, 0} } func (m *Cluster_CustomClusterType) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Cluster_CustomClusterType.Unmarshal(m, b) } func (m *Cluster_CustomClusterType) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Cluster_CustomClusterType.Marshal(b, m, deterministic) } func (dst *Cluster_CustomClusterType) XXX_Merge(src proto.Message) { xxx_messageInfo_Cluster_CustomClusterType.Merge(dst, src) } func (m *Cluster_CustomClusterType) XXX_Size() int { return xxx_messageInfo_Cluster_CustomClusterType.Size(m) } func (m *Cluster_CustomClusterType) XXX_DiscardUnknown() { xxx_messageInfo_Cluster_CustomClusterType.DiscardUnknown(m) } var xxx_messageInfo_Cluster_CustomClusterType proto.InternalMessageInfo func (m *Cluster_CustomClusterType) GetName() string { if m != nil { return m.Name } return "" } func (m *Cluster_CustomClusterType) GetTypedConfig() *any.Any { if m != nil { return m.TypedConfig } return nil } type Cluster_EdsClusterConfig struct { EdsConfig *config_source.ConfigSource `protobuf:"bytes,1,opt,name=eds_config,json=edsConfig,proto3" json:"eds_config,omitempty"` ServiceName string `protobuf:"bytes,2,opt,name=service_name,json=serviceName,proto3" json:"service_name,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Cluster_EdsClusterConfig) Reset() { *m = Cluster_EdsClusterConfig{} } 
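// Illustrative sketch only (not part of the generated code): an EDS-type Cluster is
// typically built by pairing Cluster_EDS with an EdsClusterConfig whose EdsConfig
// points at the management server. The cluster and service names below are
// assumptions made purely for the example.
//
//	c := &Cluster{
//		Name:                 "example_cluster",
//		ClusterDiscoveryType: &Cluster_Type{Type: Cluster_EDS},
//		EdsClusterConfig: &Cluster_EdsClusterConfig{
//			EdsConfig:   &config_source.ConfigSource{},
//			ServiceName: "example_eds_service",
//		},
//		LbPolicy: Cluster_ROUND_ROBIN,
//	}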
func (m *Cluster_EdsClusterConfig) String() string { return proto.CompactTextString(m) } func (*Cluster_EdsClusterConfig) ProtoMessage() {} func (*Cluster_EdsClusterConfig) Descriptor() ([]byte, []int) { return fileDescriptor_cds_1dff7e464f9f9a10, []int{0, 1} } func (m *Cluster_EdsClusterConfig) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Cluster_EdsClusterConfig.Unmarshal(m, b) } func (m *Cluster_EdsClusterConfig) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Cluster_EdsClusterConfig.Marshal(b, m, deterministic) } func (dst *Cluster_EdsClusterConfig) XXX_Merge(src proto.Message) { xxx_messageInfo_Cluster_EdsClusterConfig.Merge(dst, src) } func (m *Cluster_EdsClusterConfig) XXX_Size() int { return xxx_messageInfo_Cluster_EdsClusterConfig.Size(m) } func (m *Cluster_EdsClusterConfig) XXX_DiscardUnknown() { xxx_messageInfo_Cluster_EdsClusterConfig.DiscardUnknown(m) } var xxx_messageInfo_Cluster_EdsClusterConfig proto.InternalMessageInfo func (m *Cluster_EdsClusterConfig) GetEdsConfig() *config_source.ConfigSource { if m != nil { return m.EdsConfig } return nil } func (m *Cluster_EdsClusterConfig) GetServiceName() string { if m != nil { return m.ServiceName } return "" } type Cluster_LbSubsetConfig struct { FallbackPolicy Cluster_LbSubsetConfig_LbSubsetFallbackPolicy `protobuf:"varint,1,opt,name=fallback_policy,json=fallbackPolicy,proto3,enum=envoy.api.v2.Cluster_LbSubsetConfig_LbSubsetFallbackPolicy" json:"fallback_policy,omitempty"` DefaultSubset *_struct.Struct `protobuf:"bytes,2,opt,name=default_subset,json=defaultSubset,proto3" json:"default_subset,omitempty"` SubsetSelectors []*Cluster_LbSubsetConfig_LbSubsetSelector `protobuf:"bytes,3,rep,name=subset_selectors,json=subsetSelectors,proto3" json:"subset_selectors,omitempty"` LocalityWeightAware bool `protobuf:"varint,4,opt,name=locality_weight_aware,json=localityWeightAware,proto3" json:"locality_weight_aware,omitempty"` ScaleLocalityWeight bool `protobuf:"varint,5,opt,name=scale_locality_weight,json=scaleLocalityWeight,proto3" json:"scale_locality_weight,omitempty"` PanicModeAny bool `protobuf:"varint,6,opt,name=panic_mode_any,json=panicModeAny,proto3" json:"panic_mode_any,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Cluster_LbSubsetConfig) Reset() { *m = Cluster_LbSubsetConfig{} } func (m *Cluster_LbSubsetConfig) String() string { return proto.CompactTextString(m) } func (*Cluster_LbSubsetConfig) ProtoMessage() {} func (*Cluster_LbSubsetConfig) Descriptor() ([]byte, []int) { return fileDescriptor_cds_1dff7e464f9f9a10, []int{0, 4} } func (m *Cluster_LbSubsetConfig) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Cluster_LbSubsetConfig.Unmarshal(m, b) } func (m *Cluster_LbSubsetConfig) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Cluster_LbSubsetConfig.Marshal(b, m, deterministic) } func (dst *Cluster_LbSubsetConfig) XXX_Merge(src proto.Message) { xxx_messageInfo_Cluster_LbSubsetConfig.Merge(dst, src) } func (m *Cluster_LbSubsetConfig) XXX_Size() int { return xxx_messageInfo_Cluster_LbSubsetConfig.Size(m) } func (m *Cluster_LbSubsetConfig) XXX_DiscardUnknown() { xxx_messageInfo_Cluster_LbSubsetConfig.DiscardUnknown(m) } var xxx_messageInfo_Cluster_LbSubsetConfig proto.InternalMessageInfo func (m *Cluster_LbSubsetConfig) GetFallbackPolicy() Cluster_LbSubsetConfig_LbSubsetFallbackPolicy { if m != nil { return m.FallbackPolicy } return 
Cluster_LbSubsetConfig_NO_FALLBACK } func (m *Cluster_LbSubsetConfig) GetDefaultSubset() *_struct.Struct { if m != nil { return m.DefaultSubset } return nil } func (m *Cluster_LbSubsetConfig) GetSubsetSelectors() []*Cluster_LbSubsetConfig_LbSubsetSelector { if m != nil { return m.SubsetSelectors } return nil } func (m *Cluster_LbSubsetConfig) GetLocalityWeightAware() bool { if m != nil { return m.LocalityWeightAware } return false } func (m *Cluster_LbSubsetConfig) GetScaleLocalityWeight() bool { if m != nil { return m.ScaleLocalityWeight } return false } func (m *Cluster_LbSubsetConfig) GetPanicModeAny() bool { if m != nil { return m.PanicModeAny } return false } type Cluster_LbSubsetConfig_LbSubsetSelector struct { Keys []string `protobuf:"bytes,1,rep,name=keys,proto3" json:"keys,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Cluster_LbSubsetConfig_LbSubsetSelector) Reset() { *m = Cluster_LbSubsetConfig_LbSubsetSelector{} } func (m *Cluster_LbSubsetConfig_LbSubsetSelector) String() string { return proto.CompactTextString(m) } func (*Cluster_LbSubsetConfig_LbSubsetSelector) ProtoMessage() {} func (*Cluster_LbSubsetConfig_LbSubsetSelector) Descriptor() ([]byte, []int) { return fileDescriptor_cds_1dff7e464f9f9a10, []int{0, 4, 0} } func (m *Cluster_LbSubsetConfig_LbSubsetSelector) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Cluster_LbSubsetConfig_LbSubsetSelector.Unmarshal(m, b) } func (m *Cluster_LbSubsetConfig_LbSubsetSelector) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Cluster_LbSubsetConfig_LbSubsetSelector.Marshal(b, m, deterministic) } func (dst *Cluster_LbSubsetConfig_LbSubsetSelector) XXX_Merge(src proto.Message) { xxx_messageInfo_Cluster_LbSubsetConfig_LbSubsetSelector.Merge(dst, src) } func (m *Cluster_LbSubsetConfig_LbSubsetSelector) XXX_Size() int { return xxx_messageInfo_Cluster_LbSubsetConfig_LbSubsetSelector.Size(m) } func (m *Cluster_LbSubsetConfig_LbSubsetSelector) XXX_DiscardUnknown() { xxx_messageInfo_Cluster_LbSubsetConfig_LbSubsetSelector.DiscardUnknown(m) } var xxx_messageInfo_Cluster_LbSubsetConfig_LbSubsetSelector proto.InternalMessageInfo func (m *Cluster_LbSubsetConfig_LbSubsetSelector) GetKeys() []string { if m != nil { return m.Keys } return nil } type Cluster_LeastRequestLbConfig struct { ChoiceCount *wrappers.UInt32Value `protobuf:"bytes,1,opt,name=choice_count,json=choiceCount,proto3" json:"choice_count,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Cluster_LeastRequestLbConfig) Reset() { *m = Cluster_LeastRequestLbConfig{} } func (m *Cluster_LeastRequestLbConfig) String() string { return proto.CompactTextString(m) } func (*Cluster_LeastRequestLbConfig) ProtoMessage() {} func (*Cluster_LeastRequestLbConfig) Descriptor() ([]byte, []int) { return fileDescriptor_cds_1dff7e464f9f9a10, []int{0, 5} } func (m *Cluster_LeastRequestLbConfig) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Cluster_LeastRequestLbConfig.Unmarshal(m, b) } func (m *Cluster_LeastRequestLbConfig) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Cluster_LeastRequestLbConfig.Marshal(b, m, deterministic) } func (dst *Cluster_LeastRequestLbConfig) XXX_Merge(src proto.Message) { xxx_messageInfo_Cluster_LeastRequestLbConfig.Merge(dst, src) } func (m *Cluster_LeastRequestLbConfig) XXX_Size() int { return 
xxx_messageInfo_Cluster_LeastRequestLbConfig.Size(m) } func (m *Cluster_LeastRequestLbConfig) XXX_DiscardUnknown() { xxx_messageInfo_Cluster_LeastRequestLbConfig.DiscardUnknown(m) } var xxx_messageInfo_Cluster_LeastRequestLbConfig proto.InternalMessageInfo func (m *Cluster_LeastRequestLbConfig) GetChoiceCount() *wrappers.UInt32Value { if m != nil { return m.ChoiceCount } return nil } type Cluster_RingHashLbConfig struct { MinimumRingSize *wrappers.UInt64Value `protobuf:"bytes,1,opt,name=minimum_ring_size,json=minimumRingSize,proto3" json:"minimum_ring_size,omitempty"` HashFunction Cluster_RingHashLbConfig_HashFunction `protobuf:"varint,3,opt,name=hash_function,json=hashFunction,proto3,enum=envoy.api.v2.Cluster_RingHashLbConfig_HashFunction" json:"hash_function,omitempty"` MaximumRingSize *wrappers.UInt64Value `protobuf:"bytes,4,opt,name=maximum_ring_size,json=maximumRingSize,proto3" json:"maximum_ring_size,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Cluster_RingHashLbConfig) Reset() { *m = Cluster_RingHashLbConfig{} } func (m *Cluster_RingHashLbConfig) String() string { return proto.CompactTextString(m) } func (*Cluster_RingHashLbConfig) ProtoMessage() {} func (*Cluster_RingHashLbConfig) Descriptor() ([]byte, []int) { return fileDescriptor_cds_1dff7e464f9f9a10, []int{0, 6} } func (m *Cluster_RingHashLbConfig) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Cluster_RingHashLbConfig.Unmarshal(m, b) } func (m *Cluster_RingHashLbConfig) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Cluster_RingHashLbConfig.Marshal(b, m, deterministic) } func (dst *Cluster_RingHashLbConfig) XXX_Merge(src proto.Message) { xxx_messageInfo_Cluster_RingHashLbConfig.Merge(dst, src) } func (m *Cluster_RingHashLbConfig) XXX_Size() int { return xxx_messageInfo_Cluster_RingHashLbConfig.Size(m) } func (m *Cluster_RingHashLbConfig) XXX_DiscardUnknown() { xxx_messageInfo_Cluster_RingHashLbConfig.DiscardUnknown(m) } var xxx_messageInfo_Cluster_RingHashLbConfig proto.InternalMessageInfo func (m *Cluster_RingHashLbConfig) GetMinimumRingSize() *wrappers.UInt64Value { if m != nil { return m.MinimumRingSize } return nil } func (m *Cluster_RingHashLbConfig) GetHashFunction() Cluster_RingHashLbConfig_HashFunction { if m != nil { return m.HashFunction } return Cluster_RingHashLbConfig_XX_HASH } func (m *Cluster_RingHashLbConfig) GetMaximumRingSize() *wrappers.UInt64Value { if m != nil { return m.MaximumRingSize } return nil } type Cluster_OriginalDstLbConfig struct { UseHttpHeader bool `protobuf:"varint,1,opt,name=use_http_header,json=useHttpHeader,proto3" json:"use_http_header,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Cluster_OriginalDstLbConfig) Reset() { *m = Cluster_OriginalDstLbConfig{} } func (m *Cluster_OriginalDstLbConfig) String() string { return proto.CompactTextString(m) } func (*Cluster_OriginalDstLbConfig) ProtoMessage() {} func (*Cluster_OriginalDstLbConfig) Descriptor() ([]byte, []int) { return fileDescriptor_cds_1dff7e464f9f9a10, []int{0, 7} } func (m *Cluster_OriginalDstLbConfig) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Cluster_OriginalDstLbConfig.Unmarshal(m, b) } func (m *Cluster_OriginalDstLbConfig) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Cluster_OriginalDstLbConfig.Marshal(b, m, deterministic) } func (dst *Cluster_OriginalDstLbConfig) 
XXX_Merge(src proto.Message) { xxx_messageInfo_Cluster_OriginalDstLbConfig.Merge(dst, src) } func (m *Cluster_OriginalDstLbConfig) XXX_Size() int { return xxx_messageInfo_Cluster_OriginalDstLbConfig.Size(m) } func (m *Cluster_OriginalDstLbConfig) XXX_DiscardUnknown() { xxx_messageInfo_Cluster_OriginalDstLbConfig.DiscardUnknown(m) } var xxx_messageInfo_Cluster_OriginalDstLbConfig proto.InternalMessageInfo func (m *Cluster_OriginalDstLbConfig) GetUseHttpHeader() bool { if m != nil { return m.UseHttpHeader } return false } type Cluster_CommonLbConfig struct { HealthyPanicThreshold *percent.Percent `protobuf:"bytes,1,opt,name=healthy_panic_threshold,json=healthyPanicThreshold,proto3" json:"healthy_panic_threshold,omitempty"` // Types that are valid to be assigned to LocalityConfigSpecifier: // *Cluster_CommonLbConfig_ZoneAwareLbConfig_ // *Cluster_CommonLbConfig_LocalityWeightedLbConfig_ LocalityConfigSpecifier isCluster_CommonLbConfig_LocalityConfigSpecifier `protobuf_oneof:"locality_config_specifier"` UpdateMergeWindow *duration.Duration `protobuf:"bytes,4,opt,name=update_merge_window,json=updateMergeWindow,proto3" json:"update_merge_window,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Cluster_CommonLbConfig) Reset() { *m = Cluster_CommonLbConfig{} } func (m *Cluster_CommonLbConfig) String() string { return proto.CompactTextString(m) } func (*Cluster_CommonLbConfig) ProtoMessage() {} func (*Cluster_CommonLbConfig) Descriptor() ([]byte, []int) { return fileDescriptor_cds_1dff7e464f9f9a10, []int{0, 8} } func (m *Cluster_CommonLbConfig) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Cluster_CommonLbConfig.Unmarshal(m, b) } func (m *Cluster_CommonLbConfig) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Cluster_CommonLbConfig.Marshal(b, m, deterministic) } func (dst *Cluster_CommonLbConfig) XXX_Merge(src proto.Message) { xxx_messageInfo_Cluster_CommonLbConfig.Merge(dst, src) } func (m *Cluster_CommonLbConfig) XXX_Size() int { return xxx_messageInfo_Cluster_CommonLbConfig.Size(m) } func (m *Cluster_CommonLbConfig) XXX_DiscardUnknown() { xxx_messageInfo_Cluster_CommonLbConfig.DiscardUnknown(m) } var xxx_messageInfo_Cluster_CommonLbConfig proto.InternalMessageInfo func (m *Cluster_CommonLbConfig) GetHealthyPanicThreshold() *percent.Percent { if m != nil { return m.HealthyPanicThreshold } return nil } type isCluster_CommonLbConfig_LocalityConfigSpecifier interface { isCluster_CommonLbConfig_LocalityConfigSpecifier() } type Cluster_CommonLbConfig_ZoneAwareLbConfig_ struct { ZoneAwareLbConfig *Cluster_CommonLbConfig_ZoneAwareLbConfig `protobuf:"bytes,2,opt,name=zone_aware_lb_config,json=zoneAwareLbConfig,proto3,oneof"` } type Cluster_CommonLbConfig_LocalityWeightedLbConfig_ struct { LocalityWeightedLbConfig *Cluster_CommonLbConfig_LocalityWeightedLbConfig `protobuf:"bytes,3,opt,name=locality_weighted_lb_config,json=localityWeightedLbConfig,proto3,oneof"` } func (*Cluster_CommonLbConfig_ZoneAwareLbConfig_) isCluster_CommonLbConfig_LocalityConfigSpecifier() {} func (*Cluster_CommonLbConfig_LocalityWeightedLbConfig_) isCluster_CommonLbConfig_LocalityConfigSpecifier() { } func (m *Cluster_CommonLbConfig) GetLocalityConfigSpecifier() isCluster_CommonLbConfig_LocalityConfigSpecifier { if m != nil { return m.LocalityConfigSpecifier } return nil } func (m *Cluster_CommonLbConfig) GetZoneAwareLbConfig() *Cluster_CommonLbConfig_ZoneAwareLbConfig { if x, ok := 
m.GetLocalityConfigSpecifier().(*Cluster_CommonLbConfig_ZoneAwareLbConfig_); ok { return x.ZoneAwareLbConfig } return nil } func (m *Cluster_CommonLbConfig) GetLocalityWeightedLbConfig() *Cluster_CommonLbConfig_LocalityWeightedLbConfig { if x, ok := m.GetLocalityConfigSpecifier().(*Cluster_CommonLbConfig_LocalityWeightedLbConfig_); ok { return x.LocalityWeightedLbConfig } return nil } func (m *Cluster_CommonLbConfig) GetUpdateMergeWindow() *duration.Duration { if m != nil { return m.UpdateMergeWindow } return nil } // XXX_OneofFuncs is for the internal use of the proto package. func (*Cluster_CommonLbConfig) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _Cluster_CommonLbConfig_OneofMarshaler, _Cluster_CommonLbConfig_OneofUnmarshaler, _Cluster_CommonLbConfig_OneofSizer, []interface{}{ (*Cluster_CommonLbConfig_ZoneAwareLbConfig_)(nil), (*Cluster_CommonLbConfig_LocalityWeightedLbConfig_)(nil), } } func _Cluster_CommonLbConfig_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*Cluster_CommonLbConfig) // locality_config_specifier switch x := m.LocalityConfigSpecifier.(type) { case *Cluster_CommonLbConfig_ZoneAwareLbConfig_: b.EncodeVarint(2<<3 | proto.WireBytes) if err := b.EncodeMessage(x.ZoneAwareLbConfig); err != nil { return err } case *Cluster_CommonLbConfig_LocalityWeightedLbConfig_: b.EncodeVarint(3<<3 | proto.WireBytes) if err := b.EncodeMessage(x.LocalityWeightedLbConfig); err != nil { return err } case nil: default: return fmt.Errorf("Cluster_CommonLbConfig.LocalityConfigSpecifier has unexpected type %T", x) } return nil } func _Cluster_CommonLbConfig_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*Cluster_CommonLbConfig) switch tag { case 2: // locality_config_specifier.zone_aware_lb_config if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(Cluster_CommonLbConfig_ZoneAwareLbConfig) err := b.DecodeMessage(msg) m.LocalityConfigSpecifier = &Cluster_CommonLbConfig_ZoneAwareLbConfig_{msg} return true, err case 3: // locality_config_specifier.locality_weighted_lb_config if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(Cluster_CommonLbConfig_LocalityWeightedLbConfig) err := b.DecodeMessage(msg) m.LocalityConfigSpecifier = &Cluster_CommonLbConfig_LocalityWeightedLbConfig_{msg} return true, err default: return false, nil } } func _Cluster_CommonLbConfig_OneofSizer(msg proto.Message) (n int) { m := msg.(*Cluster_CommonLbConfig) // locality_config_specifier switch x := m.LocalityConfigSpecifier.(type) { case *Cluster_CommonLbConfig_ZoneAwareLbConfig_: s := proto.Size(x.ZoneAwareLbConfig) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *Cluster_CommonLbConfig_LocalityWeightedLbConfig_: s := proto.Size(x.LocalityWeightedLbConfig) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } type Cluster_CommonLbConfig_ZoneAwareLbConfig struct { RoutingEnabled *percent.Percent `protobuf:"bytes,1,opt,name=routing_enabled,json=routingEnabled,proto3" json:"routing_enabled,omitempty"` MinClusterSize *wrappers.UInt64Value `protobuf:"bytes,2,opt,name=min_cluster_size,json=minClusterSize,proto3" json:"min_cluster_size,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` 
XXX_sizecache int32 `json:"-"` } func (m *Cluster_CommonLbConfig_ZoneAwareLbConfig) Reset() { *m = Cluster_CommonLbConfig_ZoneAwareLbConfig{} } func (m *Cluster_CommonLbConfig_ZoneAwareLbConfig) String() string { return proto.CompactTextString(m) } func (*Cluster_CommonLbConfig_ZoneAwareLbConfig) ProtoMessage() {} func (*Cluster_CommonLbConfig_ZoneAwareLbConfig) Descriptor() ([]byte, []int) { return fileDescriptor_cds_1dff7e464f9f9a10, []int{0, 8, 0} } func (m *Cluster_CommonLbConfig_ZoneAwareLbConfig) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Cluster_CommonLbConfig_ZoneAwareLbConfig.Unmarshal(m, b) } func (m *Cluster_CommonLbConfig_ZoneAwareLbConfig) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Cluster_CommonLbConfig_ZoneAwareLbConfig.Marshal(b, m, deterministic) } func (dst *Cluster_CommonLbConfig_ZoneAwareLbConfig) XXX_Merge(src proto.Message) { xxx_messageInfo_Cluster_CommonLbConfig_ZoneAwareLbConfig.Merge(dst, src) } func (m *Cluster_CommonLbConfig_ZoneAwareLbConfig) XXX_Size() int { return xxx_messageInfo_Cluster_CommonLbConfig_ZoneAwareLbConfig.Size(m) } func (m *Cluster_CommonLbConfig_ZoneAwareLbConfig) XXX_DiscardUnknown() { xxx_messageInfo_Cluster_CommonLbConfig_ZoneAwareLbConfig.DiscardUnknown(m) } var xxx_messageInfo_Cluster_CommonLbConfig_ZoneAwareLbConfig proto.InternalMessageInfo func (m *Cluster_CommonLbConfig_ZoneAwareLbConfig) GetRoutingEnabled() *percent.Percent { if m != nil { return m.RoutingEnabled } return nil } func (m *Cluster_CommonLbConfig_ZoneAwareLbConfig) GetMinClusterSize() *wrappers.UInt64Value { if m != nil { return m.MinClusterSize } return nil } type Cluster_CommonLbConfig_LocalityWeightedLbConfig struct { XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Cluster_CommonLbConfig_LocalityWeightedLbConfig) Reset() { *m = Cluster_CommonLbConfig_LocalityWeightedLbConfig{} } func (m *Cluster_CommonLbConfig_LocalityWeightedLbConfig) String() string { return proto.CompactTextString(m) } func (*Cluster_CommonLbConfig_LocalityWeightedLbConfig) ProtoMessage() {} func (*Cluster_CommonLbConfig_LocalityWeightedLbConfig) Descriptor() ([]byte, []int) { return fileDescriptor_cds_1dff7e464f9f9a10, []int{0, 8, 1} } func (m *Cluster_CommonLbConfig_LocalityWeightedLbConfig) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Cluster_CommonLbConfig_LocalityWeightedLbConfig.Unmarshal(m, b) } func (m *Cluster_CommonLbConfig_LocalityWeightedLbConfig) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Cluster_CommonLbConfig_LocalityWeightedLbConfig.Marshal(b, m, deterministic) } func (dst *Cluster_CommonLbConfig_LocalityWeightedLbConfig) XXX_Merge(src proto.Message) { xxx_messageInfo_Cluster_CommonLbConfig_LocalityWeightedLbConfig.Merge(dst, src) } func (m *Cluster_CommonLbConfig_LocalityWeightedLbConfig) XXX_Size() int { return xxx_messageInfo_Cluster_CommonLbConfig_LocalityWeightedLbConfig.Size(m) } func (m *Cluster_CommonLbConfig_LocalityWeightedLbConfig) XXX_DiscardUnknown() { xxx_messageInfo_Cluster_CommonLbConfig_LocalityWeightedLbConfig.DiscardUnknown(m) } var xxx_messageInfo_Cluster_CommonLbConfig_LocalityWeightedLbConfig proto.InternalMessageInfo type UpstreamBindConfig struct { SourceAddress *address.Address `protobuf:"bytes,1,opt,name=source_address,json=sourceAddress,proto3" json:"source_address,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } 
func (m *UpstreamBindConfig) Reset() { *m = UpstreamBindConfig{} } func (m *UpstreamBindConfig) String() string { return proto.CompactTextString(m) } func (*UpstreamBindConfig) ProtoMessage() {} func (*UpstreamBindConfig) Descriptor() ([]byte, []int) { return fileDescriptor_cds_1dff7e464f9f9a10, []int{1} } func (m *UpstreamBindConfig) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_UpstreamBindConfig.Unmarshal(m, b) } func (m *UpstreamBindConfig) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_UpstreamBindConfig.Marshal(b, m, deterministic) } func (dst *UpstreamBindConfig) XXX_Merge(src proto.Message) { xxx_messageInfo_UpstreamBindConfig.Merge(dst, src) } func (m *UpstreamBindConfig) XXX_Size() int { return xxx_messageInfo_UpstreamBindConfig.Size(m) } func (m *UpstreamBindConfig) XXX_DiscardUnknown() { xxx_messageInfo_UpstreamBindConfig.DiscardUnknown(m) } var xxx_messageInfo_UpstreamBindConfig proto.InternalMessageInfo func (m *UpstreamBindConfig) GetSourceAddress() *address.Address { if m != nil { return m.SourceAddress } return nil } type UpstreamConnectionOptions struct { TcpKeepalive *address.TcpKeepalive `protobuf:"bytes,1,opt,name=tcp_keepalive,json=tcpKeepalive,proto3" json:"tcp_keepalive,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *UpstreamConnectionOptions) Reset() { *m = UpstreamConnectionOptions{} } func (m *UpstreamConnectionOptions) String() string { return proto.CompactTextString(m) } func (*UpstreamConnectionOptions) ProtoMessage() {} func (*UpstreamConnectionOptions) Descriptor() ([]byte, []int) { return fileDescriptor_cds_1dff7e464f9f9a10, []int{2} } func (m *UpstreamConnectionOptions) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_UpstreamConnectionOptions.Unmarshal(m, b) } func (m *UpstreamConnectionOptions) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_UpstreamConnectionOptions.Marshal(b, m, deterministic) } func (dst *UpstreamConnectionOptions) XXX_Merge(src proto.Message) { xxx_messageInfo_UpstreamConnectionOptions.Merge(dst, src) } func (m *UpstreamConnectionOptions) XXX_Size() int { return xxx_messageInfo_UpstreamConnectionOptions.Size(m) } func (m *UpstreamConnectionOptions) XXX_DiscardUnknown() { xxx_messageInfo_UpstreamConnectionOptions.DiscardUnknown(m) } var xxx_messageInfo_UpstreamConnectionOptions proto.InternalMessageInfo func (m *UpstreamConnectionOptions) GetTcpKeepalive() *address.TcpKeepalive { if m != nil { return m.TcpKeepalive } return nil } func init() { proto.RegisterType((*Cluster)(nil), "envoy.api.v2.Cluster") proto.RegisterMapType((map[string]*_struct.Struct)(nil), "envoy.api.v2.Cluster.ExtensionProtocolOptionsEntry") proto.RegisterMapType((map[string]*any.Any)(nil), "envoy.api.v2.Cluster.TypedExtensionProtocolOptionsEntry") proto.RegisterType((*Cluster_CustomClusterType)(nil), "envoy.api.v2.Cluster.CustomClusterType") proto.RegisterType((*Cluster_EdsClusterConfig)(nil), "envoy.api.v2.Cluster.EdsClusterConfig") proto.RegisterType((*Cluster_LbSubsetConfig)(nil), "envoy.api.v2.Cluster.LbSubsetConfig") proto.RegisterType((*Cluster_LbSubsetConfig_LbSubsetSelector)(nil), "envoy.api.v2.Cluster.LbSubsetConfig.LbSubsetSelector") proto.RegisterType((*Cluster_LeastRequestLbConfig)(nil), "envoy.api.v2.Cluster.LeastRequestLbConfig") proto.RegisterType((*Cluster_RingHashLbConfig)(nil), "envoy.api.v2.Cluster.RingHashLbConfig") proto.RegisterType((*Cluster_OriginalDstLbConfig)(nil), 
"envoy.api.v2.Cluster.OriginalDstLbConfig") proto.RegisterType((*Cluster_CommonLbConfig)(nil), "envoy.api.v2.Cluster.CommonLbConfig") proto.RegisterType((*Cluster_CommonLbConfig_ZoneAwareLbConfig)(nil), "envoy.api.v2.Cluster.CommonLbConfig.ZoneAwareLbConfig") proto.RegisterType((*Cluster_CommonLbConfig_LocalityWeightedLbConfig)(nil), "envoy.api.v2.Cluster.CommonLbConfig.LocalityWeightedLbConfig") proto.RegisterType((*UpstreamBindConfig)(nil), "envoy.api.v2.UpstreamBindConfig") proto.RegisterType((*UpstreamConnectionOptions)(nil), "envoy.api.v2.UpstreamConnectionOptions") proto.RegisterEnum("envoy.api.v2.Cluster_DiscoveryType", Cluster_DiscoveryType_name, Cluster_DiscoveryType_value) proto.RegisterEnum("envoy.api.v2.Cluster_LbPolicy", Cluster_LbPolicy_name, Cluster_LbPolicy_value) proto.RegisterEnum("envoy.api.v2.Cluster_DnsLookupFamily", Cluster_DnsLookupFamily_name, Cluster_DnsLookupFamily_value) proto.RegisterEnum("envoy.api.v2.Cluster_ClusterProtocolSelection", Cluster_ClusterProtocolSelection_name, Cluster_ClusterProtocolSelection_value) proto.RegisterEnum("envoy.api.v2.Cluster_LbSubsetConfig_LbSubsetFallbackPolicy", Cluster_LbSubsetConfig_LbSubsetFallbackPolicy_name, Cluster_LbSubsetConfig_LbSubsetFallbackPolicy_value) proto.RegisterEnum("envoy.api.v2.Cluster_RingHashLbConfig_HashFunction", Cluster_RingHashLbConfig_HashFunction_name, Cluster_RingHashLbConfig_HashFunction_value) } // Reference imports to suppress errors if they are not otherwise used. var _ context.Context var _ grpc.ClientConn // This is a compile-time assertion to ensure that this generated file // is compatible with the grpc package it is being compiled against. const _ = grpc.SupportPackageIsVersion4 // ClusterDiscoveryServiceClient is the client API for ClusterDiscoveryService service. // // For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream. type ClusterDiscoveryServiceClient interface { StreamClusters(ctx context.Context, opts ...grpc.CallOption) (ClusterDiscoveryService_StreamClustersClient, error) DeltaClusters(ctx context.Context, opts ...grpc.CallOption) (ClusterDiscoveryService_DeltaClustersClient, error) FetchClusters(ctx context.Context, in *discovery.DiscoveryRequest, opts ...grpc.CallOption) (*discovery.DiscoveryResponse, error) } type clusterDiscoveryServiceClient struct { cc *grpc.ClientConn } func NewClusterDiscoveryServiceClient(cc *grpc.ClientConn) ClusterDiscoveryServiceClient { return &clusterDiscoveryServiceClient{cc} } func (c *clusterDiscoveryServiceClient) StreamClusters(ctx context.Context, opts ...grpc.CallOption) (ClusterDiscoveryService_StreamClustersClient, error) { stream, err := c.cc.NewStream(ctx, &_ClusterDiscoveryService_serviceDesc.Streams[0], "/envoy.api.v2.ClusterDiscoveryService/StreamClusters", opts...) 
if err != nil { return nil, err } x := &clusterDiscoveryServiceStreamClustersClient{stream} return x, nil } type ClusterDiscoveryService_StreamClustersClient interface { Send(*discovery.DiscoveryRequest) error Recv() (*discovery.DiscoveryResponse, error) grpc.ClientStream } type clusterDiscoveryServiceStreamClustersClient struct { grpc.ClientStream } func (x *clusterDiscoveryServiceStreamClustersClient) Send(m *discovery.DiscoveryRequest) error { return x.ClientStream.SendMsg(m) } func (x *clusterDiscoveryServiceStreamClustersClient) Recv() (*discovery.DiscoveryResponse, error) { m := new(discovery.DiscoveryResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func (c *clusterDiscoveryServiceClient) DeltaClusters(ctx context.Context, opts ...grpc.CallOption) (ClusterDiscoveryService_DeltaClustersClient, error) { stream, err := c.cc.NewStream(ctx, &_ClusterDiscoveryService_serviceDesc.Streams[1], "/envoy.api.v2.ClusterDiscoveryService/DeltaClusters", opts...) if err != nil { return nil, err } x := &clusterDiscoveryServiceDeltaClustersClient{stream} return x, nil } type ClusterDiscoveryService_DeltaClustersClient interface { Send(*discovery.DeltaDiscoveryRequest) error Recv() (*discovery.DeltaDiscoveryResponse, error) grpc.ClientStream } type clusterDiscoveryServiceDeltaClustersClient struct { grpc.ClientStream } func (x *clusterDiscoveryServiceDeltaClustersClient) Send(m *discovery.DeltaDiscoveryRequest) error { return x.ClientStream.SendMsg(m) } func (x *clusterDiscoveryServiceDeltaClustersClient) Recv() (*discovery.DeltaDiscoveryResponse, error) { m := new(discovery.DeltaDiscoveryResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func (c *clusterDiscoveryServiceClient) FetchClusters(ctx context.Context, in *discovery.DiscoveryRequest, opts ...grpc.CallOption) (*discovery.DiscoveryResponse, error) { out := new(discovery.DiscoveryResponse) err := c.cc.Invoke(ctx, "/envoy.api.v2.ClusterDiscoveryService/FetchClusters", in, out, opts...) if err != nil { return nil, err } return out, nil } // ClusterDiscoveryServiceServer is the server API for ClusterDiscoveryService service. 
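// Illustrative sketch only (not part of the generated code): a CDS client opens the
// bidirectional StreamClusters stream and exchanges DiscoveryRequest/DiscoveryResponse
// messages. The target address, dial options, and type URL below are assumptions made
// purely for the example.
//
//	cc, err := grpc.Dial("xds-management-server:443", grpc.WithInsecure())
//	if err != nil { /* handle error */ }
//	client := NewClusterDiscoveryServiceClient(cc)
//	stream, err := client.StreamClusters(context.Background())
//	if err != nil { /* handle error */ }
//	_ = stream.Send(&discovery.DiscoveryRequest{TypeUrl: "type.googleapis.com/envoy.api.v2.Cluster"})
//	resp, err := stream.Recv() // resp carries Cluster resources packed as Any messages
//
// On the server side, an implementation of ClusterDiscoveryServiceServer is registered
// with RegisterClusterDiscoveryServiceServer, as defined below.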
type ClusterDiscoveryServiceServer interface { StreamClusters(ClusterDiscoveryService_StreamClustersServer) error DeltaClusters(ClusterDiscoveryService_DeltaClustersServer) error FetchClusters(context.Context, *discovery.DiscoveryRequest) (*discovery.DiscoveryResponse, error) } func RegisterClusterDiscoveryServiceServer(s *grpc.Server, srv ClusterDiscoveryServiceServer) { s.RegisterService(&_ClusterDiscoveryService_serviceDesc, srv) } func _ClusterDiscoveryService_StreamClusters_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(ClusterDiscoveryServiceServer).StreamClusters(&clusterDiscoveryServiceStreamClustersServer{stream}) } type ClusterDiscoveryService_StreamClustersServer interface { Send(*discovery.DiscoveryResponse) error Recv() (*discovery.DiscoveryRequest, error) grpc.ServerStream } type clusterDiscoveryServiceStreamClustersServer struct { grpc.ServerStream } func (x *clusterDiscoveryServiceStreamClustersServer) Send(m *discovery.DiscoveryResponse) error { return x.ServerStream.SendMsg(m) } func (x *clusterDiscoveryServiceStreamClustersServer) Recv() (*discovery.DiscoveryRequest, error) { m := new(discovery.DiscoveryRequest) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func _ClusterDiscoveryService_DeltaClusters_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(ClusterDiscoveryServiceServer).DeltaClusters(&clusterDiscoveryServiceDeltaClustersServer{stream}) } type ClusterDiscoveryService_DeltaClustersServer interface { Send(*discovery.DeltaDiscoveryResponse) error Recv() (*discovery.DeltaDiscoveryRequest, error) grpc.ServerStream } type clusterDiscoveryServiceDeltaClustersServer struct { grpc.ServerStream } func (x *clusterDiscoveryServiceDeltaClustersServer) Send(m *discovery.DeltaDiscoveryResponse) error { return x.ServerStream.SendMsg(m) } func (x *clusterDiscoveryServiceDeltaClustersServer) Recv() (*discovery.DeltaDiscoveryRequest, error) { m := new(discovery.DeltaDiscoveryRequest) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func _ClusterDiscoveryService_FetchClusters_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(discovery.DiscoveryRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(ClusterDiscoveryServiceServer).FetchClusters(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/envoy.api.v2.ClusterDiscoveryService/FetchClusters", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(ClusterDiscoveryServiceServer).FetchClusters(ctx, req.(*discovery.DiscoveryRequest)) } return interceptor(ctx, in, info, handler) } var _ClusterDiscoveryService_serviceDesc = grpc.ServiceDesc{ ServiceName: "envoy.api.v2.ClusterDiscoveryService", HandlerType: (*ClusterDiscoveryServiceServer)(nil), Methods: []grpc.MethodDesc{ { MethodName: "FetchClusters", Handler: _ClusterDiscoveryService_FetchClusters_Handler, }, }, Streams: []grpc.StreamDesc{ { StreamName: "StreamClusters", Handler: _ClusterDiscoveryService_StreamClusters_Handler, ServerStreams: true, ClientStreams: true, }, { StreamName: "DeltaClusters", Handler: _ClusterDiscoveryService_DeltaClusters_Handler, ServerStreams: true, ClientStreams: true, }, }, Metadata: "envoy/api/v2/cds.proto", } func init() { proto.RegisterFile("envoy/api/v2/cds.proto", fileDescriptor_cds_1dff7e464f9f9a10) } var fileDescriptor_cds_1dff7e464f9f9a10 = 
[]byte{ // 2531 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xac, 0x59, 0x4b, 0x77, 0x1b, 0xb7, 0x15, 0xd6, 0x50, 0x74, 0x4c, 0x43, 0x7c, 0x8c, 0x20, 0x59, 0x1a, 0x53, 0x96, 0x2d, 0x33, 0xb6, 0xab, 0x3a, 0x2d, 0xd5, 0xca, 0x79, 0x9d, 0xb4, 0x49, 0x0f, 0x5f, 0xb2, 0xe4, 0x50, 0xa4, 0x0a, 0x52, 0x56, 0xd2, 0x9c, 0x1c, 0x04, 0x9c, 0x01, 0xc5, 0xa9, 0x86, 0x33, 0x93, 0x01, 0x46, 0x36, 0xbd, 0xe8, 0x49, 0xb3, 0xea, 0xbe, 0xab, 0xfe, 0x85, 0xf6, 0x1f, 0x74, 0xd5, 0x6d, 0xd7, 0xdd, 0x77, 0xd5, 0x45, 0xfb, 0x2f, 0x7a, 0x06, 0xc0, 0x50, 0x7c, 0x8c, 0x68, 0xa7, 0xa7, 0x2b, 0x11, 0xb8, 0xdf, 0xfd, 0x2e, 0xe6, 0xe2, 0xe2, 0xde, 0x0b, 0x08, 0x6c, 0x50, 0xf7, 0xd2, 0x1b, 0xed, 0x11, 0xdf, 0xde, 0xbb, 0xdc, 0xdf, 0x33, 0x2d, 0x56, 0xf6, 0x03, 0x8f, 0x7b, 0x30, 0x2b, 0xe6, 0xcb, 0xc4, 0xb7, 0xcb, 0x97, 0xfb, 0xc5, 0xfb, 0xd3, 0x28, 0x2f, 0xa0, 0x7b, 0xc4, 0xb2, 0x02, 0xca, 0x14, 0xbc, 0x78, 0x77, 0x0a, 0x40, 0x42, 0x3e, 0xd8, 0x33, 0x69, 0xc0, 0x13, 0xa5, 0x42, 0xbd, 0x47, 0x18, 0x55, 0xd2, 0x47, 0xf3, 0x52, 0xd3, 0x73, 0xfb, 0xf6, 0x39, 0x66, 0x5e, 0x18, 0x98, 0x34, 0x91, 0xc4, 0xb2, 0x99, 0xe9, 0x5d, 0xd2, 0x60, 0xa4, 0xa4, 0x0f, 0xe7, 0x49, 0x06, 0x94, 0x38, 0x7c, 0x80, 0xcd, 0x01, 0x35, 0x2f, 0x14, 0x6a, 0x67, 0x1e, 0x25, 0x04, 0xa6, 0xe7, 0x28, 0xc4, 0x93, 0x69, 0x84, 0x13, 0x32, 0x4e, 0x83, 0x3d, 0xd3, 0x0e, 0xcc, 0xd0, 0xe6, 0xb8, 0x17, 0x50, 0x72, 0x41, 0x03, 0x85, 0xfd, 0x49, 0x22, 0xd6, 0x0b, 0xb9, 0x63, 0xd3, 0x00, 0x5b, 0x94, 0x53, 0x93, 0xdb, 0x9e, 0xab, 0xd0, 0xd3, 0x9e, 0xa6, 0xb1, 0xa7, 0x8b, 0x86, 0x9c, 0xe7, 0x23, 0x9f, 0xee, 0xf9, 0x34, 0x30, 0xa9, 0x3b, 0x76, 0xdb, 0xb9, 0xe7, 0x9d, 0x3b, 0x54, 0xa8, 0x10, 0xd7, 0xf5, 0x38, 0x89, 0xe8, 0x62, 0xbd, 0x3b, 0x4a, 0x2a, 0x46, 0xbd, 0xb0, 0xbf, 0x47, 0xdc, 0xd8, 0x19, 0xf7, 0x66, 0x45, 0x56, 0x18, 0x90, 0x89, 0xa5, 0xdc, 0x9d, 0x95, 0x33, 0x1e, 0x84, 0x26, 0xbf, 0x4e, 0xfb, 0x65, 0x40, 0x7c, 0x9f, 0x06, 0xb1, 0xe1, 0xcd, 0x4b, 0xe2, 0xd8, 0x16, 0xe1, 0x74, 0x2f, 0xfe, 0x21, 0x05, 0xa5, 0xef, 0xdf, 0x03, 0x37, 0x6b, 0xd2, 0x0b, 0x70, 0x1b, 0xa4, 0x5d, 0x32, 0xa4, 0x86, 0xb6, 0xa3, 0xed, 0xde, 0xaa, 0xde, 0xfa, 0xeb, 0x7f, 0xfe, 0xb6, 0x9c, 0x0e, 0x52, 0x3b, 0x1a, 0x12, 0xd3, 0xb0, 0x04, 0x72, 0xc4, 0xe1, 0x98, 0x71, 0xc2, 0xb1, 0xc0, 0xdd, 0x8d, 0x70, 0x68, 0x85, 0x38, 0xbc, 0xc3, 0x09, 0x6f, 0x45, 0x98, 0x06, 0x48, 0x47, 0x4e, 0x31, 0x52, 0x3b, 0xda, 0x6e, 0x7e, 0xff, 0xdd, 0xf2, 0x64, 0x44, 0x96, 0x95, 0x9d, 0x72, 0x3d, 0x8e, 0x83, 0xee, 0xc8, 0xa7, 0x55, 0x10, 0xd9, 0xb9, 0xf1, 0xbd, 0x96, 0xd2, 0xb5, 0xc3, 0x25, 0x24, 0xd4, 0x61, 0x13, 0x64, 0xd5, 0xd6, 0x60, 0x41, 0xf7, 0x78, 0x47, 0xdb, 0x5d, 0xd9, 0xff, 0x51, 0x32, 0x5d, 0x2d, 0x64, 0xdc, 0x1b, 0xaa, 0x51, 0x44, 0x79, 0xb8, 0x84, 0x56, 0xcc, 0xab, 0x21, 0xec, 0x02, 0x48, 0x2d, 0x86, 0x63, 0x46, 0x19, 0xa8, 0xc6, 0xb2, 0xe0, 0x7c, 0x9c, 0xcc, 0xd9, 0xb0, 0x98, 0xfa, 0x59, 0x13, 0x68, 0xa4, 0xd3, 0x99, 0x19, 0xd8, 0x02, 0x05, 0xd3, 0x73, 0x5d, 0x6a, 0x72, 0xcc, 0xed, 0x21, 0xf5, 0x42, 0x6e, 0xa4, 0x05, 0xe5, 0x9d, 0xb2, 0xdc, 0x8c, 0x72, 0xbc, 0x19, 0xe5, 0xba, 0xda, 0x4a, 0xf5, 0xad, 0x7f, 0xd6, 0x52, 0x4f, 0x96, 0x50, 0x5e, 0x69, 0x77, 0xa5, 0x32, 0xec, 0x83, 0x07, 0xbe, 0x5c, 0x9d, 0x2b, 0x63, 0x10, 0xf7, 0xc2, 0x7e, 0x9f, 0x06, 0xd8, 0xb1, 0x87, 0x51, 0x18, 0x8f, 0x38, 0x65, 0xc6, 0x0d, 0x61, 0xe1, 0xee, 0x9c, 0x85, 0xd3, 0x23, 0x97, 0x3f, 0xdd, 0x7f, 0x41, 0x9c, 0x90, 0xa2, 0x6d, 0x5f, 0xac, 0x51, 0xb1, 0x54, 0x05, 0x49, 0x33, 0xe2, 0xa8, 0x46, 0x14, 0xf0, 0x19, 0xb8, 0xe5, 0xf4, 0xb0, 0xef, 0x39, 0xb6, 0x39, 0x32, 0xde, 0x11, 
0xfb, 0x74, 0x2f, 0xd9, 0x09, 0xcd, 0xde, 0x89, 0x40, 0x4d, 0x6e, 0x11, 0xca, 0x38, 0x6a, 0x16, 0xbe, 0x0f, 0x6e, 0x0c, 0x3c, 0xc6, 0x99, 0x71, 0x73, 0x67, 0x79, 0x77, 0x65, 0xbf, 0x38, 0x4d, 0x12, 0x1d, 0xd4, 0x72, 0x45, 0x26, 0x9c, 0x6a, 0xca, 0xd0, 0x90, 0x04, 0xc3, 0x26, 0x28, 0x38, 0x1e, 0xb1, 0x30, 0x61, 0xcc, 0x3e, 0x77, 0x87, 0xd4, 0xe5, 0xc6, 0x03, 0xf1, 0x51, 0xc9, 0xc1, 0xd2, 0xf4, 0x88, 0x55, 0x19, 0x43, 0x51, 0xde, 0x99, 0x1a, 0xc3, 0x1a, 0xc8, 0x4d, 0xa6, 0x0c, 0x66, 0x64, 0xc4, 0x5a, 0xee, 0x25, 0xac, 0xe5, 0x50, 0xe0, 0x6a, 0x11, 0x0c, 0x65, 0x07, 0x57, 0x03, 0x06, 0xbf, 0x02, 0x5b, 0x43, 0xf2, 0x0a, 0x07, 0xf4, 0xdb, 0x90, 0x32, 0xce, 0xf0, 0xf4, 0x36, 0x18, 0xb7, 0xde, 0xc2, 0xe7, 0xc6, 0x90, 0xbc, 0x42, 0x4a, 0xff, 0x64, 0xd2, 0xfd, 0xf0, 0x04, 0xe8, 0x33, 0x99, 0x88, 0x19, 0x40, 0x30, 0x3e, 0x9a, 0x59, 0x64, 0x1c, 0xce, 0x12, 0x5d, 0x55, 0x60, 0x54, 0x30, 0xa7, 0x27, 0xe0, 0x01, 0x58, 0xe1, 0x0e, 0x8b, 0x56, 0xc8, 0xe9, 0x2b, 0x6e, 0xac, 0x24, 0x91, 0x45, 0xd9, 0xbc, 0x7c, 0xea, 0x33, 0x1e, 0x50, 0x32, 0xec, 0x3a, 0xac, 0x26, 0xc1, 0x08, 0xf0, 0xf1, 0x6f, 0x78, 0x0e, 0xee, 0x9a, 0xde, 0x70, 0xe8, 0xb9, 0x78, 0xc0, 0xb9, 0x8f, 0xe3, 0xa4, 0x8a, 0x3d, 0x5f, 0xa4, 0x2c, 0x63, 0x3b, 0xe9, 0x80, 0x48, 0x57, 0x72, 0xee, 0x9f, 0x28, 0x78, 0x5b, 0xa2, 0xd1, 0x1d, 0xc9, 0x95, 0x20, 0x82, 0x5f, 0x81, 0xdb, 0xc9, 0x16, 0x72, 0x49, 0xc7, 0x7a, 0x6c, 0xe1, 0xe7, 0xb3, 0x26, 0xd6, 0x06, 0x09, 0xe4, 0x5f, 0x83, 0x8d, 0x68, 0x7a, 0x7f, 0x9e, 0x3d, 0xbf, 0x90, 0x7d, 0x7f, 0x96, 0x7d, 0x7d, 0x90, 0x30, 0x0b, 0xbf, 0x05, 0x45, 0xfa, 0x8a, 0x53, 0x97, 0x45, 0x07, 0x72, 0xce, 0xc4, 0xbb, 0x22, 0xda, 0x9e, 0x5e, 0x93, 0x43, 0x62, 0xbd, 0x19, 0xce, 0x86, 0xcb, 0x83, 0x11, 0x32, 0xe8, 0x35, 0x62, 0xf8, 0x7b, 0x0d, 0xec, 0x44, 0x59, 0xcf, 0xc2, 0x0b, 0x2c, 0x3f, 0x14, 0x96, 0x3f, 0x4e, 0xb6, 0x1c, 0x65, 0x3d, 0x6b, 0xb1, 0xf9, 0x6d, 0xbe, 0x08, 0x03, 0xdb, 0x40, 0xb7, 0x5c, 0x86, 0x03, 0xda, 0x0f, 0x28, 0x1b, 0xe0, 0x80, 0x70, 0x6a, 0xe8, 0x3f, 0x28, 0xbb, 0x59, 0x2e, 0x43, 0x52, 0x1b, 0x11, 0x4e, 0xe1, 0xd7, 0x60, 0x35, 0x22, 0x74, 0x3c, 0xef, 0x22, 0xf4, 0x71, 0x9f, 0x0c, 0x6d, 0x67, 0x64, 0xac, 0x8a, 0xec, 0xf3, 0xe8, 0x9a, 0x2a, 0xe1, 0xb2, 0xa6, 0x40, 0x1f, 0x08, 0xf0, 0x54, 0x12, 0x2a, 0x58, 0xd3, 0x42, 0xf8, 0x2b, 0x90, 0x93, 0xeb, 0x65, 0x9e, 0x73, 0x19, 0x1d, 0x31, 0xf8, 0xa6, 0x9c, 0x84, 0xb2, 0x62, 0x85, 0x0a, 0x0f, 0x3b, 0x60, 0x75, 0xae, 0x09, 0x30, 0xd6, 0x12, 0x4f, 0x80, 0x5a, 0x5f, 0x5b, 0xc2, 0xeb, 0x31, 0x1a, 0xe9, 0xde, 0xcc, 0x8c, 0x38, 0xfb, 0x0e, 0x25, 0x6e, 0xe8, 0x63, 0xdb, 0xe5, 0x34, 0xb8, 0x24, 0x8e, 0xb1, 0xfe, 0x43, 0xbc, 0x58, 0x50, 0xea, 0x47, 0x4a, 0x1b, 0xb6, 0xc1, 0x7a, 0xa8, 0x4e, 0x35, 0xee, 0xd9, 0xae, 0x15, 0x17, 0xb3, 0xdb, 0x82, 0x75, 0x3b, 0xe1, 0x73, 0xab, 0xb6, 0x6b, 0xa9, 0x1a, 0x06, 0x63, 0xd5, 0xab, 0x39, 0xd8, 0x02, 0xba, 0xd3, 0xc3, 0x2c, 0xec, 0x31, 0xca, 0x63, 0xb2, 0x0d, 0x41, 0xf6, 0xf0, 0xba, 0xa2, 0xd0, 0x11, 0x60, 0xc5, 0x99, 0x77, 0xa6, 0xc6, 0xf0, 0x0c, 0xac, 0x05, 0xb6, 0x7b, 0x8e, 0x07, 0x84, 0x0d, 0xb0, 0xd3, 0x8b, 0x29, 0x37, 0x17, 0x15, 0x5b, 0x64, 0xbb, 0xe7, 0x87, 0x84, 0x0d, 0x9a, 0x3d, 0x49, 0x72, 0xa8, 0x21, 0x3d, 0x98, 0x99, 0x83, 0xdf, 0x80, 0x0d, 0x2f, 0xb0, 0xcf, 0x6d, 0x97, 0x38, 0xd8, 0x62, 0x7c, 0x82, 0xbb, 0x24, 0xb8, 0x7f, 0x9c, 0xcc, 0xdd, 0x56, 0x3a, 0x75, 0xc6, 0x27, 0xe8, 0xd7, 0xbc, 0xf9, 0x69, 0x68, 0x82, 0x4d, 0x87, 0x12, 0xc6, 0xe3, 0x42, 0x30, 0x61, 0xe2, 0x91, 0x30, 0xf1, 0xe4, 0x1a, 0x8f, 0x44, 0x4a, 0x2a, 0xf9, 0x4f, 0xd8, 0x58, 0x77, 0x12, 0xe6, 0x23, 0x7f, 0xab, 0xa4, 0x7b, 0xc5, 0xbe, 0xb5, 0xc8, 0xdf, 
0x35, 0x81, 0x8e, 0xf5, 0xa3, 0xae, 0x61, 0x72, 0x0c, 0x8f, 0x81, 0xce, 0x03, 0xe2, 0x32, 0xdf, 0x0b, 0x38, 0x66, 0x9e, 0x79, 0x41, 0xb9, 0x61, 0x08, 0xbe, 0x52, 0x42, 0x30, 0x74, 0x63, 0x68, 0x47, 0x20, 0x51, 0x81, 0x4f, 0x4f, 0xc0, 0x8f, 0x40, 0x66, 0x48, 0x39, 0xb1, 0x08, 0x27, 0xc6, 0x1d, 0x41, 0xb3, 0x95, 0x40, 0x73, 0xac, 0x20, 0x68, 0x0c, 0x86, 0x5f, 0x03, 0x38, 0xce, 0x51, 0x8c, 0x3a, 0xea, 0x00, 0x15, 0xc5, 0x01, 0x2f, 0x5f, 0xf3, 0x65, 0xf2, 0x6f, 0x9c, 0x7a, 0x3a, 0xb1, 0x16, 0x5a, 0xf5, 0x67, 0xa7, 0xe0, 0x39, 0xd8, 0x1a, 0xc7, 0xfd, 0x44, 0x87, 0x14, 0x67, 0xc3, 0x7b, 0x49, 0xa9, 0x3e, 0x2e, 0x7f, 0x57, 0xc5, 0x78, 0x5c, 0xab, 0xc2, 0xeb, 0x44, 0xf0, 0x14, 0xec, 0x9a, 0x8e, 0xc7, 0xe8, 0x84, 0x15, 0x86, 0xa3, 0x12, 0xe9, 0x31, 0x8e, 0x55, 0xab, 0xd1, 0x27, 0xb6, 0x13, 0x06, 0xd4, 0xb8, 0xbf, 0xa3, 0xed, 0x66, 0xd0, 0xbb, 0x02, 0x7f, 0xc5, 0xc4, 0xda, 0xee, 0xa1, 0xc7, 0xb8, 0x6c, 0x37, 0x0e, 0x24, 0x14, 0x1e, 0x82, 0x07, 0x56, 0x40, 0x6c, 0x37, 0x91, 0x36, 0xa0, 0x43, 0x2f, 0x4a, 0x0d, 0x3b, 0x82, 0x6f, 0x5b, 0x00, 0xe7, 0xf8, 0x90, 0x04, 0x15, 0x2f, 0xc0, 0xea, 0x5c, 0xc3, 0xfb, 0xa6, 0xce, 0xfd, 0x23, 0x90, 0x95, 0x05, 0x45, 0x05, 0x5c, 0x4a, 0xb8, 0x6b, 0x7d, 0x2e, 0x07, 0x55, 0xdc, 0x11, 0x5a, 0x11, 0x48, 0x19, 0x5d, 0xc5, 0x10, 0xe8, 0xb3, 0x9d, 0x30, 0xfc, 0x0c, 0x00, 0xd1, 0x4d, 0x4b, 0x2a, 0x4d, 0x50, 0xdd, 0x4f, 0x08, 0x12, 0x09, 0xef, 0x88, 0xeb, 0x20, 0xba, 0x15, 0xb5, 0xcf, 0x52, 0xff, 0x01, 0xc8, 0x32, 0x1a, 0x5c, 0xda, 0x26, 0x95, 0xb7, 0x88, 0x94, 0xbc, 0x45, 0xa8, 0xb9, 0xe8, 0x16, 0x51, 0xb4, 0xc0, 0xf6, 0xc2, 0xea, 0x05, 0x75, 0xb0, 0x7c, 0x41, 0x47, 0xf2, 0x73, 0x51, 0xf4, 0x13, 0xfe, 0x14, 0xdc, 0xb8, 0x8c, 0x3a, 0x31, 0xf5, 0x6d, 0x9b, 0x73, 0xdf, 0xd6, 0x11, 0xd7, 0x25, 0x24, 0x51, 0x9f, 0xa4, 0x3e, 0xd6, 0x8a, 0x7d, 0x50, 0x7a, 0x73, 0xa1, 0x4c, 0x30, 0xf5, 0x64, 0xda, 0x54, 0xb2, 0x1b, 0x27, 0xec, 0xfc, 0x25, 0x0d, 0xf2, 0xd3, 0x59, 0x13, 0xfa, 0xa0, 0xd0, 0x27, 0x8e, 0xd3, 0x23, 0xe6, 0x45, 0xdc, 0x89, 0x6b, 0xe2, 0xa8, 0xfc, 0xe2, 0x6d, 0x92, 0xee, 0x78, 0x78, 0xa0, 0x38, 0x12, 0xda, 0xf4, 0x7c, 0x7f, 0x4a, 0x06, 0x3f, 0x03, 0x79, 0x8b, 0xf6, 0x49, 0x18, 0x5d, 0xe0, 0x84, 0xee, 0x9b, 0x1c, 0x95, 0x53, 0x70, 0x69, 0x09, 0x7e, 0x03, 0x74, 0x55, 0x24, 0xe4, 0xe9, 0xf6, 0x02, 0x66, 0x2c, 0x8b, 0x1a, 0xfb, 0xc1, 0x0f, 0x5a, 0x72, 0x47, 0x69, 0xa3, 0x02, 0x9b, 0x1a, 0x33, 0xb8, 0x0f, 0x6e, 0x3b, 0x9e, 0x49, 0x1c, 0x9b, 0x8f, 0xf0, 0x4b, 0x6a, 0x9f, 0x0f, 0x38, 0x26, 0x2f, 0x49, 0x40, 0xc5, 0xad, 0x2a, 0x83, 0xd6, 0x62, 0xe1, 0x99, 0x90, 0x55, 0x22, 0x51, 0xa4, 0xc3, 0x4c, 0xe2, 0x50, 0x3c, 0xa3, 0x29, 0xee, 0x49, 0x19, 0xb4, 0x26, 0x84, 0xcd, 0x29, 0x45, 0xf8, 0x10, 0xe4, 0x7d, 0xe2, 0xda, 0x26, 0x1e, 0x7a, 0x16, 0xc5, 0xc4, 0x95, 0x97, 0xa0, 0x0c, 0xca, 0x8a, 0xd9, 0x63, 0xcf, 0xa2, 0x15, 0x77, 0x54, 0x7c, 0x0c, 0xf4, 0xd9, 0x25, 0x43, 0x08, 0xd2, 0x17, 0x74, 0xc4, 0x0c, 0x6d, 0x67, 0x79, 0xf7, 0x16, 0x12, 0xbf, 0x4b, 0x6d, 0xb0, 0x91, 0xbc, 0x1b, 0xb0, 0x00, 0x56, 0x5a, 0x6d, 0x7c, 0x50, 0x69, 0x36, 0xab, 0x95, 0xda, 0xe7, 0xfa, 0x12, 0xd4, 0x41, 0xb6, 0xd2, 0xfa, 0x12, 0x37, 0x5a, 0xf5, 0x93, 0xf6, 0x51, 0xab, 0xab, 0x6b, 0x10, 0x82, 0x7c, 0xbd, 0x71, 0x50, 0x39, 0x6d, 0x76, 0x71, 0xe7, 0xb4, 0xda, 0x69, 0x74, 0xf5, 0x54, 0xb1, 0x07, 0xd6, 0x93, 0x0a, 0x0a, 0x7c, 0x0e, 0xb2, 0xe6, 0xc0, 0x8b, 0x4e, 0x8d, 0xe9, 0x85, 0x2e, 0x57, 0x07, 0x6f, 0xe1, 0xad, 0x44, 0x25, 0x82, 0x27, 0xa9, 0xdd, 0x14, 0x5a, 0x91, 0xca, 0xb5, 0x48, 0xb7, 0xf8, 0xcf, 0x14, 0xd0, 0x67, 0x8b, 0x2e, 0x7c, 0x01, 0x56, 0x87, 0xb6, 0x6b, 0x0f, 0xc3, 0x21, 0x16, 0x15, 0x9c, 0xd9, 0xaf, 
0xe9, 0x42, 0x2b, 0x1f, 0xbe, 0x2f, 0xad, 0x64, 0x23, 0x2b, 0x37, 0xf7, 0x6f, 0x18, 0xdf, 0x7d, 0xf7, 0x5d, 0x1a, 0x15, 0x14, 0x49, 0xc4, 0xdf, 0xb1, 0x5f, 0x53, 0x68, 0x81, 0x9c, 0x68, 0x06, 0xfa, 0xa1, 0x2b, 0x8b, 0xc2, 0xb2, 0x88, 0xf4, 0xa7, 0x6f, 0xd7, 0x0b, 0x94, 0xa3, 0xc1, 0x81, 0x52, 0x9d, 0x8a, 0xf0, 0xec, 0x60, 0x42, 0x22, 0x56, 0x4f, 0x5e, 0xcd, 0xac, 0x3e, 0xfd, 0x3f, 0xac, 0x5e, 0x92, 0xc4, 0xab, 0x2f, 0x95, 0x41, 0x76, 0x72, 0x05, 0x70, 0x05, 0xdc, 0xfc, 0xe2, 0x0b, 0x7c, 0x58, 0xe9, 0x1c, 0xea, 0x4b, 0x70, 0x15, 0xe4, 0x8e, 0x4f, 0xd1, 0xf1, 0x29, 0x12, 0x13, 0x78, 0x5f, 0xd7, 0x9e, 0xa7, 0x33, 0x29, 0x7d, 0xb9, 0xf8, 0x29, 0x58, 0x4b, 0x68, 0x3c, 0xe0, 0x63, 0x50, 0x08, 0x19, 0x95, 0xd7, 0xad, 0x01, 0x25, 0x16, 0x0d, 0x84, 0x83, 0x33, 0x28, 0x17, 0x32, 0x1a, 0x5d, 0x4a, 0x0e, 0xc5, 0x64, 0xf1, 0xdf, 0x69, 0x90, 0x9f, 0xae, 0xfb, 0xf0, 0x73, 0xb0, 0x29, 0xab, 0xcf, 0x08, 0xcb, 0xe8, 0xe5, 0x83, 0xa8, 0xb7, 0xf6, 0x1c, 0x4b, 0xed, 0xd1, 0x9a, 0xf2, 0x67, 0x94, 0xbe, 0xcb, 0x27, 0xf2, 0x4d, 0x0a, 0xdd, 0x56, 0x3a, 0x27, 0x91, 0x4a, 0x37, 0xd6, 0x80, 0x36, 0x58, 0x7f, 0xed, 0xb9, 0x54, 0x9e, 0xaf, 0x89, 0x46, 0x44, 0xa6, 0x84, 0x0f, 0xdf, 0xa6, 0x11, 0x29, 0xff, 0xc6, 0x73, 0xa9, 0x38, 0x84, 0xe3, 0x96, 0x67, 0x09, 0xad, 0xbe, 0x9e, 0x9d, 0x84, 0xbf, 0x03, 0x5b, 0x33, 0x67, 0x93, 0x5a, 0x13, 0x16, 0xe5, 0x23, 0xcc, 0xa7, 0x6f, 0x65, 0x71, 0xfa, 0x1c, 0x53, 0x6b, 0xc2, 0xb0, 0xe1, 0x5c, 0x23, 0x83, 0x47, 0x60, 0x2d, 0xf4, 0x2d, 0xc2, 0x29, 0x1e, 0xd2, 0xe0, 0x9c, 0xe2, 0x97, 0xb6, 0x6b, 0x79, 0x2f, 0xdf, 0xf8, 0x52, 0x83, 0x56, 0xa5, 0xd6, 0x71, 0xa4, 0x74, 0x26, 0x74, 0x8a, 0x7f, 0xd2, 0xc0, 0xea, 0xdc, 0x57, 0xc3, 0x5f, 0x82, 0x42, 0xe0, 0x85, 0x3c, 0x8a, 0x39, 0xea, 0x92, 0x9e, 0x43, 0x17, 0x6e, 0x48, 0x5e, 0x61, 0x1b, 0x12, 0x0a, 0x0f, 0x80, 0x3e, 0x8c, 0xba, 0x02, 0xf5, 0x34, 0x25, 0xa2, 0x36, 0xf5, 0xe6, 0xa8, 0x45, 0xf9, 0xa1, 0xed, 0x2a, 0x37, 0x45, 0x61, 0x5a, 0x2c, 0x02, 0xe3, 0x3a, 0xf7, 0x54, 0xb7, 0xc0, 0x9d, 0xf1, 0x16, 0xc4, 0x8f, 0xb4, 0x3e, 0x35, 0xed, 0xbe, 0x4d, 0x83, 0xd2, 0x19, 0xc8, 0x4d, 0x3d, 0xc7, 0x41, 0x00, 0xde, 0xe9, 0x74, 0x2b, 0xdd, 0xa3, 0x9a, 0xbe, 0x04, 0xf3, 0x00, 0x74, 0xba, 0xe8, 0xa8, 0xd6, 0xc5, 0xf5, 0x56, 0x47, 0xd7, 0xa2, 0x94, 0xd6, 0x6c, 0x3f, 0x3b, 0xaa, 0x55, 0x9a, 0x62, 0x22, 0x05, 0x6f, 0x82, 0xe5, 0x46, 0xbd, 0xa3, 0x2f, 0x47, 0xb9, 0xad, 0x8d, 0x8e, 0x9e, 0x1d, 0xb5, 0x22, 0x51, 0xa7, 0xab, 0xa7, 0x4b, 0xbf, 0x05, 0x99, 0xf8, 0xfd, 0x28, 0xd2, 0x43, 0xed, 0xd3, 0x56, 0x1d, 0xa3, 0x76, 0xf5, 0xa8, 0x25, 0x0f, 0x4e, 0xb3, 0x51, 0xe9, 0x74, 0x31, 0x6a, 0xfc, 0xfa, 0xb4, 0xd1, 0x89, 0x72, 0x61, 0x0e, 0xdc, 0x42, 0x47, 0xad, 0x67, 0xf2, 0x68, 0xa5, 0xa2, 0x65, 0xa0, 0x4a, 0xab, 0xde, 0x3e, 0xd6, 0x97, 0xe1, 0x1a, 0x28, 0x4c, 0x92, 0xe3, 0x66, 0x55, 0x4f, 0x47, 0x80, 0xe3, 0xca, 0xb3, 0x66, 0xe3, 0x85, 0x7e, 0xa3, 0xf4, 0x01, 0x28, 0xcc, 0xdc, 0x16, 0x61, 0x06, 0xa4, 0x2b, 0xa7, 0xdd, 0xb6, 0xbe, 0x14, 0x9d, 0xd8, 0x17, 0xef, 0xe3, 0x76, 0xab, 0xf9, 0xa5, 0xae, 0x89, 0xc1, 0x87, 0x72, 0x90, 0x2a, 0x75, 0x81, 0x71, 0x5d, 0x0f, 0x0a, 0xb7, 0xc0, 0xe6, 0x69, 0xa7, 0x81, 0x6b, 0xed, 0xd6, 0xc1, 0xd1, 0xb3, 0x53, 0xd4, 0xa8, 0xe3, 0x13, 0xd4, 0xee, 0xb6, 0x6b, 0xed, 0xa6, 0xbe, 0x14, 0x0b, 0xeb, 0xed, 0xb3, 0x56, 0xa7, 0x8b, 0x1a, 0x95, 0xe3, 0x2b, 0xa1, 0x56, 0x35, 0xc0, 0x46, 0xbc, 0x9d, 0xe3, 0x07, 0x6f, 0xf1, 0x8a, 0x59, 0x5d, 0x11, 0x2f, 0x6f, 0x72, 0x0b, 0x9e, 0xa7, 0x33, 0x59, 0x3d, 0xf7, 0x3c, 0x9d, 0x29, 0xe8, 0x7a, 0xe9, 0x0c, 0xc0, 0xd3, 0xf9, 0xab, 0x59, 0x05, 0xe4, 0xe5, 0x63, 0x3a, 0x56, 0xef, 0xf6, 0x2a, 0xb0, 0x16, 0x5d, 
0x6a, 0x73, 0x52, 0x43, 0x0d, 0x4b, 0x04, 0xdc, 0xb9, 0xb6, 0x0b, 0x86, 0x75, 0x90, 0xe3, 0xa6, 0x8f, 0x2f, 0x28, 0xf5, 0x89, 0x63, 0x5f, 0xd2, 0x05, 0xbd, 0x5c, 0xd7, 0xf4, 0x3f, 0x8f, 0x61, 0x28, 0xcb, 0x27, 0x46, 0xfb, 0x7f, 0x4f, 0x81, 0x4d, 0xe5, 0xc5, 0x71, 0x20, 0x75, 0x64, 0x2f, 0x07, 0xcf, 0x40, 0xbe, 0x23, 0x8d, 0x4b, 0x00, 0x83, 0x33, 0x0f, 0x73, 0x63, 0x0d, 0x55, 0xed, 0x8a, 0xf7, 0xaf, 0x95, 0x33, 0xdf, 0x73, 0x19, 0x2d, 0x2d, 0xed, 0x6a, 0x3f, 0xd3, 0xe0, 0x37, 0x20, 0x57, 0xa7, 0x0e, 0x27, 0x63, 0xde, 0x99, 0xc7, 0x43, 0x21, 0x9c, 0x23, 0x7f, 0xb8, 0x18, 0x34, 0x65, 0x81, 0x81, 0xdc, 0x01, 0xe5, 0xe6, 0xe0, 0xff, 0xb7, 0xf2, 0x07, 0xdf, 0xff, 0xe3, 0x5f, 0x7f, 0x4c, 0x6d, 0x95, 0x36, 0xa6, 0xfe, 0x15, 0xf2, 0x89, 0x8a, 0x15, 0xf6, 0x89, 0xf6, 0xa4, 0xfa, 0x1e, 0x28, 0xda, 0x9e, 0xe4, 0xf1, 0x03, 0xef, 0xd5, 0x68, 0x8a, 0xb2, 0x9a, 0xa9, 0x59, 0x4c, 0x04, 0xea, 0x89, 0xf6, 0x07, 0x4d, 0xeb, 0xbd, 0x23, 0x12, 0xc3, 0xd3, 0xff, 0x06, 0x00, 0x00, 0xff, 0xff, 0xe0, 0x16, 0x40, 0xfc, 0x0e, 0x1a, 0x00, 0x00, } grpc-go-1.22.1/balancer/xds/internal/proto/envoy/api/v2/cluster/000077500000000000000000000000001351635773100244135ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/envoy/api/v2/cluster/circuit_breaker/000077500000000000000000000000001351635773100275505ustar00rootroot00000000000000circuit_breaker.pb.go000077500000000000000000000206501351635773100335630ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/envoy/api/v2/cluster/circuit_breaker// Code generated by protoc-gen-go. DO NOT EDIT. // source: envoy/api/v2/cluster/circuit_breaker.proto package cluster import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import wrappers "github.com/golang/protobuf/ptypes/wrappers" import base "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/core/base" // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. 
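// NOTE (editorial, not generated): the sketch below is a hand-written,
// illustrative example and is not part of the protoc-gen-go output of
// circuit_breaker.pb.go. It assumes the CircuitBreakers and
// CircuitBreakers_Thresholds types defined later in this file; the
// threshold values are arbitrary placeholders, not recommended defaults.
func exampleCircuitBreakers() *CircuitBreakers {
	// One HIGH-priority threshold capping connections and requests.
	return &CircuitBreakers{
		Thresholds: []*CircuitBreakers_Thresholds{
			{
				Priority:       base.RoutingPriority_HIGH,
				MaxConnections: &wrappers.UInt32Value{Value: 1024},
				MaxRequests:    &wrappers.UInt32Value{Value: 4096},
			},
		},
	}
}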
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type CircuitBreakers struct { Thresholds []*CircuitBreakers_Thresholds `protobuf:"bytes,1,rep,name=thresholds,proto3" json:"thresholds,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *CircuitBreakers) Reset() { *m = CircuitBreakers{} } func (m *CircuitBreakers) String() string { return proto.CompactTextString(m) } func (*CircuitBreakers) ProtoMessage() {} func (*CircuitBreakers) Descriptor() ([]byte, []int) { return fileDescriptor_circuit_breaker_dc7392708e718eb5, []int{0} } func (m *CircuitBreakers) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_CircuitBreakers.Unmarshal(m, b) } func (m *CircuitBreakers) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_CircuitBreakers.Marshal(b, m, deterministic) } func (dst *CircuitBreakers) XXX_Merge(src proto.Message) { xxx_messageInfo_CircuitBreakers.Merge(dst, src) } func (m *CircuitBreakers) XXX_Size() int { return xxx_messageInfo_CircuitBreakers.Size(m) } func (m *CircuitBreakers) XXX_DiscardUnknown() { xxx_messageInfo_CircuitBreakers.DiscardUnknown(m) } var xxx_messageInfo_CircuitBreakers proto.InternalMessageInfo func (m *CircuitBreakers) GetThresholds() []*CircuitBreakers_Thresholds { if m != nil { return m.Thresholds } return nil } type CircuitBreakers_Thresholds struct { Priority base.RoutingPriority `protobuf:"varint,1,opt,name=priority,proto3,enum=envoy.api.v2.core.RoutingPriority" json:"priority,omitempty"` MaxConnections *wrappers.UInt32Value `protobuf:"bytes,2,opt,name=max_connections,json=maxConnections,proto3" json:"max_connections,omitempty"` MaxPendingRequests *wrappers.UInt32Value `protobuf:"bytes,3,opt,name=max_pending_requests,json=maxPendingRequests,proto3" json:"max_pending_requests,omitempty"` MaxRequests *wrappers.UInt32Value `protobuf:"bytes,4,opt,name=max_requests,json=maxRequests,proto3" json:"max_requests,omitempty"` MaxRetries *wrappers.UInt32Value `protobuf:"bytes,5,opt,name=max_retries,json=maxRetries,proto3" json:"max_retries,omitempty"` TrackRemaining bool `protobuf:"varint,6,opt,name=track_remaining,json=trackRemaining,proto3" json:"track_remaining,omitempty"` MaxConnectionPools *wrappers.UInt32Value `protobuf:"bytes,7,opt,name=max_connection_pools,json=maxConnectionPools,proto3" json:"max_connection_pools,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *CircuitBreakers_Thresholds) Reset() { *m = CircuitBreakers_Thresholds{} } func (m *CircuitBreakers_Thresholds) String() string { return proto.CompactTextString(m) } func (*CircuitBreakers_Thresholds) ProtoMessage() {} func (*CircuitBreakers_Thresholds) Descriptor() ([]byte, []int) { return fileDescriptor_circuit_breaker_dc7392708e718eb5, []int{0, 0} } func (m *CircuitBreakers_Thresholds) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_CircuitBreakers_Thresholds.Unmarshal(m, b) } func (m *CircuitBreakers_Thresholds) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_CircuitBreakers_Thresholds.Marshal(b, m, deterministic) } func (dst *CircuitBreakers_Thresholds) XXX_Merge(src proto.Message) { xxx_messageInfo_CircuitBreakers_Thresholds.Merge(dst, src) } func (m *CircuitBreakers_Thresholds) XXX_Size() int { return xxx_messageInfo_CircuitBreakers_Thresholds.Size(m) } func (m *CircuitBreakers_Thresholds) XXX_DiscardUnknown() { 
xxx_messageInfo_CircuitBreakers_Thresholds.DiscardUnknown(m) } var xxx_messageInfo_CircuitBreakers_Thresholds proto.InternalMessageInfo func (m *CircuitBreakers_Thresholds) GetPriority() base.RoutingPriority { if m != nil { return m.Priority } return base.RoutingPriority_DEFAULT } func (m *CircuitBreakers_Thresholds) GetMaxConnections() *wrappers.UInt32Value { if m != nil { return m.MaxConnections } return nil } func (m *CircuitBreakers_Thresholds) GetMaxPendingRequests() *wrappers.UInt32Value { if m != nil { return m.MaxPendingRequests } return nil } func (m *CircuitBreakers_Thresholds) GetMaxRequests() *wrappers.UInt32Value { if m != nil { return m.MaxRequests } return nil } func (m *CircuitBreakers_Thresholds) GetMaxRetries() *wrappers.UInt32Value { if m != nil { return m.MaxRetries } return nil } func (m *CircuitBreakers_Thresholds) GetTrackRemaining() bool { if m != nil { return m.TrackRemaining } return false } func (m *CircuitBreakers_Thresholds) GetMaxConnectionPools() *wrappers.UInt32Value { if m != nil { return m.MaxConnectionPools } return nil } func init() { proto.RegisterType((*CircuitBreakers)(nil), "envoy.api.v2.cluster.CircuitBreakers") proto.RegisterType((*CircuitBreakers_Thresholds)(nil), "envoy.api.v2.cluster.CircuitBreakers.Thresholds") } func init() { proto.RegisterFile("envoy/api/v2/cluster/circuit_breaker.proto", fileDescriptor_circuit_breaker_dc7392708e718eb5) } var fileDescriptor_circuit_breaker_dc7392708e718eb5 = []byte{ // 417 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x8c, 0x92, 0xc1, 0x6e, 0xd3, 0x40, 0x10, 0x86, 0xe5, 0xa6, 0xb4, 0xd5, 0x06, 0x25, 0xd2, 0x52, 0x21, 0x2b, 0xaa, 0x50, 0x94, 0x0b, 0x11, 0x87, 0x35, 0x72, 0xcf, 0x80, 0x48, 0xd4, 0x03, 0x97, 0xca, 0x32, 0xd0, 0x03, 0x12, 0xb2, 0x36, 0xee, 0xe0, 0xae, 0x6a, 0xef, 0x2c, 0xb3, 0xeb, 0xe0, 0xbc, 0x12, 0x8f, 0xc1, 0x73, 0xf0, 0x30, 0x28, 0xde, 0x24, 0xa6, 0x55, 0x0f, 0x3e, 0x7a, 0x66, 0xbe, 0x6f, 0xfc, 0xef, 0x2e, 0x7b, 0x03, 0x7a, 0x8d, 0x9b, 0x48, 0x1a, 0x15, 0xad, 0xe3, 0x28, 0x2f, 0x6b, 0xeb, 0x80, 0xa2, 0x5c, 0x51, 0x5e, 0x2b, 0x97, 0xad, 0x08, 0xe4, 0x3d, 0x90, 0x30, 0x84, 0x0e, 0xf9, 0x79, 0x3b, 0x2b, 0xa4, 0x51, 0x62, 0x1d, 0x8b, 0xdd, 0xec, 0xe4, 0xe2, 0xa1, 0x01, 0x09, 0xa2, 0x95, 0xb4, 0xe0, 0x99, 0xc9, 0xab, 0x02, 0xb1, 0x28, 0x21, 0x6a, 0xbf, 0x56, 0xf5, 0x8f, 0xe8, 0x17, 0x49, 0x63, 0x80, 0xac, 0xef, 0xcf, 0xfe, 0x1c, 0xb3, 0xf1, 0xd2, 0x6f, 0x5b, 0xf8, 0x65, 0x96, 0x27, 0x8c, 0xb9, 0x3b, 0x02, 0x7b, 0x87, 0xe5, 0xad, 0x0d, 0x83, 0xe9, 0x60, 0x3e, 0x8c, 0xdf, 0x8a, 0xa7, 0x96, 0x8b, 0x47, 0xa8, 0xf8, 0x72, 0xe0, 0xd2, 0xff, 0x1c, 0x93, 0xbf, 0x03, 0xc6, 0xba, 0x16, 0x7f, 0xcf, 0xce, 0x0c, 0x29, 0x24, 0xe5, 0x36, 0x61, 0x30, 0x0d, 0xe6, 0xa3, 0x78, 0xf6, 0x48, 0x8f, 0x04, 0x22, 0xc5, 0xda, 0x29, 0x5d, 0x24, 0xbb, 0xc9, 0xf4, 0xc0, 0xf0, 0x2b, 0x36, 0xae, 0x64, 0x93, 0xe5, 0xa8, 0x35, 0xe4, 0x4e, 0xa1, 0xb6, 0xe1, 0xd1, 0x34, 0x98, 0x0f, 0xe3, 0x0b, 0xe1, 0xe3, 0x8a, 0x7d, 0x5c, 0xf1, 0xf5, 0x93, 0x76, 0x97, 0xf1, 0x8d, 0x2c, 0x6b, 0x48, 0x47, 0x95, 0x6c, 0x96, 0x1d, 0xc3, 0xaf, 0xd9, 0xf9, 0x56, 0x63, 0x40, 0xdf, 0x2a, 0x5d, 0x64, 0x04, 0x3f, 0x6b, 0xb0, 0xce, 0x86, 0x83, 0x1e, 0x2e, 0x5e, 0xc9, 0x26, 0xf1, 0x60, 0xba, 0xe3, 0xf8, 0x07, 0xf6, 0x7c, 0xeb, 0x3b, 0x78, 0x8e, 0x7b, 0x78, 0x86, 0x95, 0x6c, 0x0e, 0x82, 0x77, 0x6c, 0xe8, 0x05, 0x8e, 0x14, 0xd8, 0xf0, 0x59, 0x0f, 0x9e, 0xb5, 0x7c, 0x3b, 0xcf, 0x5f, 0xb3, 0xb1, 0x23, 0x99, 0xdf, 0x67, 0x04, 0x95, 0x54, 0x5a, 0xe9, 0x22, 0x3c, 0x99, 0x06, 0xf3, 0xb3, 0x74, 0xd4, 0x96, 0xd3, 
0x7d, 0x75, 0x1f, 0xbc, 0x3b, 0xbf, 0xcc, 0x20, 0x96, 0x36, 0x3c, 0xed, 0x19, 0xbc, 0x3b, 0xc4, 0x64, 0xcb, 0x2d, 0xbe, 0xb3, 0x99, 0x42, 0x7f, 0x83, 0x86, 0xb0, 0xd9, 0x3c, 0xf9, 0x56, 0x16, 0x2f, 0x1e, 0x3e, 0x96, 0x64, 0x6b, 0x4f, 0x82, 0x6f, 0xa7, 0xbb, 0xfe, 0xef, 0xa3, 0x97, 0x57, 0x2d, 0xf6, 0xd1, 0x28, 0x71, 0x13, 0x8b, 0xa5, 0x2f, 0x5f, 0x7f, 0x5e, 0x9d, 0xb4, 0x3f, 0x72, 0xf9, 0x2f, 0x00, 0x00, 0xff, 0xff, 0xae, 0xe0, 0x40, 0x7b, 0x2c, 0x03, 0x00, 0x00, } grpc-go-1.22.1/balancer/xds/internal/proto/envoy/api/v2/cluster/outlier_detection/000077500000000000000000000000001351635773100301345ustar00rootroot00000000000000outlier_detection.pb.go000077500000000000000000000220231351635773100345270ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/envoy/api/v2/cluster/outlier_detection// Code generated by protoc-gen-go. DO NOT EDIT. // source: envoy/api/v2/cluster/outlier_detection.proto package envoy_api_v2_cluster import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import duration "github.com/golang/protobuf/ptypes/duration" import wrappers "github.com/golang/protobuf/ptypes/wrappers" import _ "google.golang.org/grpc/balancer/xds/internal/proto/validate" // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type OutlierDetection struct { Consecutive_5Xx *wrappers.UInt32Value `protobuf:"bytes,1,opt,name=consecutive_5xx,json=consecutive5xx,proto3" json:"consecutive_5xx,omitempty"` Interval *duration.Duration `protobuf:"bytes,2,opt,name=interval,proto3" json:"interval,omitempty"` BaseEjectionTime *duration.Duration `protobuf:"bytes,3,opt,name=base_ejection_time,json=baseEjectionTime,proto3" json:"base_ejection_time,omitempty"` MaxEjectionPercent *wrappers.UInt32Value `protobuf:"bytes,4,opt,name=max_ejection_percent,json=maxEjectionPercent,proto3" json:"max_ejection_percent,omitempty"` EnforcingConsecutive_5Xx *wrappers.UInt32Value `protobuf:"bytes,5,opt,name=enforcing_consecutive_5xx,json=enforcingConsecutive5xx,proto3" json:"enforcing_consecutive_5xx,omitempty"` EnforcingSuccessRate *wrappers.UInt32Value `protobuf:"bytes,6,opt,name=enforcing_success_rate,json=enforcingSuccessRate,proto3" json:"enforcing_success_rate,omitempty"` SuccessRateMinimumHosts *wrappers.UInt32Value `protobuf:"bytes,7,opt,name=success_rate_minimum_hosts,json=successRateMinimumHosts,proto3" json:"success_rate_minimum_hosts,omitempty"` SuccessRateRequestVolume *wrappers.UInt32Value `protobuf:"bytes,8,opt,name=success_rate_request_volume,json=successRateRequestVolume,proto3" json:"success_rate_request_volume,omitempty"` SuccessRateStdevFactor *wrappers.UInt32Value `protobuf:"bytes,9,opt,name=success_rate_stdev_factor,json=successRateStdevFactor,proto3" json:"success_rate_stdev_factor,omitempty"` ConsecutiveGatewayFailure *wrappers.UInt32Value `protobuf:"bytes,10,opt,name=consecutive_gateway_failure,json=consecutiveGatewayFailure,proto3" json:"consecutive_gateway_failure,omitempty"` EnforcingConsecutiveGatewayFailure *wrappers.UInt32Value `protobuf:"bytes,11,opt,name=enforcing_consecutive_gateway_failure,json=enforcingConsecutiveGatewayFailure,proto3" 
json:"enforcing_consecutive_gateway_failure,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *OutlierDetection) Reset() { *m = OutlierDetection{} } func (m *OutlierDetection) String() string { return proto.CompactTextString(m) } func (*OutlierDetection) ProtoMessage() {} func (*OutlierDetection) Descriptor() ([]byte, []int) { return fileDescriptor_outlier_detection_c374e0b25113dd85, []int{0} } func (m *OutlierDetection) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_OutlierDetection.Unmarshal(m, b) } func (m *OutlierDetection) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_OutlierDetection.Marshal(b, m, deterministic) } func (dst *OutlierDetection) XXX_Merge(src proto.Message) { xxx_messageInfo_OutlierDetection.Merge(dst, src) } func (m *OutlierDetection) XXX_Size() int { return xxx_messageInfo_OutlierDetection.Size(m) } func (m *OutlierDetection) XXX_DiscardUnknown() { xxx_messageInfo_OutlierDetection.DiscardUnknown(m) } var xxx_messageInfo_OutlierDetection proto.InternalMessageInfo func (m *OutlierDetection) GetConsecutive_5Xx() *wrappers.UInt32Value { if m != nil { return m.Consecutive_5Xx } return nil } func (m *OutlierDetection) GetInterval() *duration.Duration { if m != nil { return m.Interval } return nil } func (m *OutlierDetection) GetBaseEjectionTime() *duration.Duration { if m != nil { return m.BaseEjectionTime } return nil } func (m *OutlierDetection) GetMaxEjectionPercent() *wrappers.UInt32Value { if m != nil { return m.MaxEjectionPercent } return nil } func (m *OutlierDetection) GetEnforcingConsecutive_5Xx() *wrappers.UInt32Value { if m != nil { return m.EnforcingConsecutive_5Xx } return nil } func (m *OutlierDetection) GetEnforcingSuccessRate() *wrappers.UInt32Value { if m != nil { return m.EnforcingSuccessRate } return nil } func (m *OutlierDetection) GetSuccessRateMinimumHosts() *wrappers.UInt32Value { if m != nil { return m.SuccessRateMinimumHosts } return nil } func (m *OutlierDetection) GetSuccessRateRequestVolume() *wrappers.UInt32Value { if m != nil { return m.SuccessRateRequestVolume } return nil } func (m *OutlierDetection) GetSuccessRateStdevFactor() *wrappers.UInt32Value { if m != nil { return m.SuccessRateStdevFactor } return nil } func (m *OutlierDetection) GetConsecutiveGatewayFailure() *wrappers.UInt32Value { if m != nil { return m.ConsecutiveGatewayFailure } return nil } func (m *OutlierDetection) GetEnforcingConsecutiveGatewayFailure() *wrappers.UInt32Value { if m != nil { return m.EnforcingConsecutiveGatewayFailure } return nil } func init() { proto.RegisterType((*OutlierDetection)(nil), "envoy.api.v2.cluster.OutlierDetection") } func init() { proto.RegisterFile("envoy/api/v2/cluster/outlier_detection.proto", fileDescriptor_outlier_detection_c374e0b25113dd85) } var fileDescriptor_outlier_detection_c374e0b25113dd85 = []byte{ // 526 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x94, 0x94, 0xed, 0x6a, 0x13, 0x4d, 0x14, 0xc7, 0x9f, 0xa4, 0xef, 0x53, 0x78, 0x2c, 0x43, 0x4d, 0x26, 0xad, 0x88, 0x04, 0x04, 0x29, 0x32, 0x0b, 0x29, 0xbd, 0x80, 0xa6, 0x4d, 0xd5, 0x0f, 0x6a, 0x48, 0x34, 0x22, 0x2a, 0xc3, 0x64, 0x73, 0x12, 0x47, 0x76, 0x77, 0xd6, 0x79, 0xd9, 0x6e, 0xbc, 0xa4, 0x5e, 0x82, 0x9f, 0xbc, 0x1d, 0xaf, 0xc1, 0x2f, 0xb2, 0x3b, 0x79, 0xd9, 0xa4, 0x01, 0x93, 0x6f, 0x0b, 0x73, 0x7e, 0xbf, 0xff, 0xd9, 0x99, 0xc3, 0x41, 0xcf, 0x21, 0x4a, 0xe4, 0xd8, 0xe3, 0xb1, 0xf0, 0x92, 0x86, 0xe7, 0x07, 
0x56, 0x1b, 0x50, 0x9e, 0xb4, 0x26, 0x10, 0xa0, 0xd8, 0x00, 0x0c, 0xf8, 0x46, 0xc8, 0x88, 0xc6, 0x4a, 0x1a, 0x89, 0x8f, 0xf3, 0x6a, 0xca, 0x63, 0x41, 0x93, 0x06, 0x9d, 0x54, 0x9f, 0x3c, 0x1e, 0x49, 0x39, 0x0a, 0xc0, 0xcb, 0x6b, 0xfa, 0x76, 0xe8, 0x0d, 0xac, 0xe2, 0x73, 0xea, 0xfe, 0xf9, 0xad, 0xe2, 0x71, 0x0c, 0x4a, 0x4f, 0xce, 0xab, 0x09, 0x0f, 0xc4, 0x80, 0x1b, 0xf0, 0xa6, 0x1f, 0xee, 0xa0, 0xfe, 0x67, 0x0f, 0x1d, 0xbd, 0x75, 0xad, 0x5c, 0x4f, 0x3b, 0xc1, 0x2d, 0xf4, 0xc0, 0x97, 0x91, 0x06, 0xdf, 0x1a, 0x91, 0x00, 0xbb, 0x48, 0x53, 0x52, 0x7a, 0x52, 0x7a, 0x76, 0xd8, 0x78, 0x44, 0x5d, 0x0e, 0x9d, 0xe6, 0xd0, 0xf7, 0xaf, 0x22, 0x73, 0xde, 0xe8, 0xf1, 0xc0, 0x42, 0xe7, 0xff, 0x02, 0x74, 0x91, 0xa6, 0xf8, 0x12, 0xed, 0x8b, 0xc8, 0x80, 0x4a, 0x78, 0x40, 0xca, 0x39, 0x5f, 0xbb, 0xc7, 0x5f, 0x4f, 0xfe, 0xa3, 0x89, 0x7e, 0xfe, 0xfe, 0xb5, 0xb5, 0x73, 0x57, 0x2a, 0x9f, 0xfd, 0xd7, 0x99, 0x61, 0xb8, 0x8b, 0x70, 0x9f, 0x6b, 0x60, 0xf0, 0xcd, 0xb5, 0xc6, 0x8c, 0x08, 0x81, 0x6c, 0x6d, 0x22, 0x3b, 0xca, 0x04, 0xad, 0x09, 0xff, 0x4e, 0x84, 0x80, 0x3f, 0xa2, 0xe3, 0x90, 0xa7, 0x73, 0x67, 0x0c, 0xca, 0x87, 0xc8, 0x90, 0xed, 0x7f, 0xff, 0x63, 0xf3, 0x20, 0x33, 0x6f, 0x9f, 0x95, 0xc9, 0xa0, 0x83, 0x43, 0x9e, 0x4e, 0xbd, 0x6d, 0xa7, 0xc0, 0x3e, 0xaa, 0x41, 0x34, 0x94, 0xca, 0x17, 0xd1, 0x88, 0x2d, 0xdf, 0xe1, 0xce, 0x66, 0xfe, 0xea, 0xcc, 0x74, 0xb5, 0x78, 0xaf, 0x5f, 0x50, 0x65, 0x1e, 0xa2, 0xad, 0xef, 0x83, 0xd6, 0x4c, 0x71, 0x03, 0x64, 0x77, 0xb3, 0x84, 0xe3, 0x99, 0xa6, 0xeb, 0x2c, 0x1d, 0x6e, 0xb2, 0xeb, 0x39, 0x29, 0x4a, 0x59, 0x28, 0x22, 0x11, 0xda, 0x90, 0x7d, 0x95, 0xda, 0x68, 0xb2, 0xb7, 0xc6, 0x20, 0x54, 0xf5, 0x5c, 0xf7, 0xda, 0xd1, 0x2f, 0x33, 0x18, 0x7f, 0x42, 0xa7, 0x0b, 0x6a, 0x05, 0xdf, 0x2d, 0x68, 0xc3, 0x12, 0x19, 0xd8, 0x10, 0xc8, 0xfe, 0x1a, 0x6e, 0x52, 0x70, 0x77, 0x1c, 0xde, 0xcb, 0x69, 0xfc, 0x01, 0xd5, 0x16, 0xe4, 0xda, 0x0c, 0x20, 0x61, 0x43, 0xee, 0x1b, 0xa9, 0xc8, 0xc1, 0x1a, 0xea, 0x4a, 0x41, 0xdd, 0xcd, 0xe0, 0x9b, 0x9c, 0xc5, 0x9f, 0xd1, 0x69, 0xf1, 0x29, 0x47, 0xdc, 0xc0, 0x2d, 0x1f, 0xb3, 0x21, 0x17, 0x81, 0x55, 0x40, 0xd0, 0x1a, 0xea, 0x5a, 0x41, 0xf0, 0xc2, 0xf1, 0x37, 0x0e, 0xc7, 0x3f, 0xd0, 0xd3, 0xd5, 0x23, 0xb3, 0x9c, 0x73, 0xb8, 0xd9, 0xe3, 0xd6, 0x57, 0x8d, 0xcf, 0x62, 0x76, 0xb3, 0x87, 0xea, 0x42, 0xd2, 0x7c, 0xe3, 0xc4, 0x4a, 0xa6, 0x63, 0xba, 0x6a, 0xf9, 0x34, 0x1f, 0x2e, 0x2f, 0x88, 0x76, 0x16, 0xdd, 0x2e, 0xdd, 0x95, 0x2b, 0xad, 0xbc, 0xfe, 0x32, 0x16, 0xb4, 0xd7, 0xa0, 0x57, 0xae, 0xfe, 0x4d, 0xb7, 0xbf, 0x9b, 0x37, 0x77, 0xfe, 0x37, 0x00, 0x00, 0xff, 0xff, 0xfa, 0xbc, 0x0f, 0x9b, 0xfb, 0x04, 0x00, 0x00, } grpc-go-1.22.1/balancer/xds/internal/proto/envoy/api/v2/core/000077500000000000000000000000001351635773100236625ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/envoy/api/v2/core/address/000077500000000000000000000000001351635773100253075ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/envoy/api/v2/core/address/address.pb.go000077500000000000000000000526211351635773100276740ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: envoy/api/v2/core/address.proto package envoy_api_v2_core import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import wrappers "github.com/golang/protobuf/ptypes/wrappers" import base "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/core/base" import _ "google.golang.org/grpc/balancer/xds/internal/proto/validate" // Reference imports to suppress errors if they are not otherwise used. 
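// NOTE (editorial, not generated): the sketch below is a hand-written,
// illustrative example and is not part of the protoc-gen-go output of
// address.pb.go. It assumes the Address, SocketAddress, and oneof wrapper
// types (Address_SocketAddress, SocketAddress_PortValue) defined later in
// this file; the IP and port are arbitrary placeholders.
func exampleAddress() *Address {
	// A TCP socket address using the port_value branch of the port oneof.
	return &Address{
		Address: &Address_SocketAddress{
			SocketAddress: &SocketAddress{
				Protocol:      SocketAddress_TCP,
				Address:       "127.0.0.1",
				PortSpecifier: &SocketAddress_PortValue{PortValue: 15000},
			},
		},
	}
}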
var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type SocketAddress_Protocol int32 const ( SocketAddress_TCP SocketAddress_Protocol = 0 SocketAddress_UDP SocketAddress_Protocol = 1 ) var SocketAddress_Protocol_name = map[int32]string{ 0: "TCP", 1: "UDP", } var SocketAddress_Protocol_value = map[string]int32{ "TCP": 0, "UDP": 1, } func (x SocketAddress_Protocol) String() string { return proto.EnumName(SocketAddress_Protocol_name, int32(x)) } func (SocketAddress_Protocol) EnumDescriptor() ([]byte, []int) { return fileDescriptor_address_b91d58d2da3489da, []int{1, 0} } type Pipe struct { Path string `protobuf:"bytes,1,opt,name=path,proto3" json:"path,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Pipe) Reset() { *m = Pipe{} } func (m *Pipe) String() string { return proto.CompactTextString(m) } func (*Pipe) ProtoMessage() {} func (*Pipe) Descriptor() ([]byte, []int) { return fileDescriptor_address_b91d58d2da3489da, []int{0} } func (m *Pipe) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Pipe.Unmarshal(m, b) } func (m *Pipe) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Pipe.Marshal(b, m, deterministic) } func (dst *Pipe) XXX_Merge(src proto.Message) { xxx_messageInfo_Pipe.Merge(dst, src) } func (m *Pipe) XXX_Size() int { return xxx_messageInfo_Pipe.Size(m) } func (m *Pipe) XXX_DiscardUnknown() { xxx_messageInfo_Pipe.DiscardUnknown(m) } var xxx_messageInfo_Pipe proto.InternalMessageInfo func (m *Pipe) GetPath() string { if m != nil { return m.Path } return "" } type SocketAddress struct { Protocol SocketAddress_Protocol `protobuf:"varint,1,opt,name=protocol,proto3,enum=envoy.api.v2.core.SocketAddress_Protocol" json:"protocol,omitempty"` Address string `protobuf:"bytes,2,opt,name=address,proto3" json:"address,omitempty"` // Types that are valid to be assigned to PortSpecifier: // *SocketAddress_PortValue // *SocketAddress_NamedPort PortSpecifier isSocketAddress_PortSpecifier `protobuf_oneof:"port_specifier"` ResolverName string `protobuf:"bytes,5,opt,name=resolver_name,json=resolverName,proto3" json:"resolver_name,omitempty"` Ipv4Compat bool `protobuf:"varint,6,opt,name=ipv4_compat,json=ipv4Compat,proto3" json:"ipv4_compat,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *SocketAddress) Reset() { *m = SocketAddress{} } func (m *SocketAddress) String() string { return proto.CompactTextString(m) } func (*SocketAddress) ProtoMessage() {} func (*SocketAddress) Descriptor() ([]byte, []int) { return fileDescriptor_address_b91d58d2da3489da, []int{1} } func (m *SocketAddress) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_SocketAddress.Unmarshal(m, b) } func (m *SocketAddress) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_SocketAddress.Marshal(b, m, deterministic) } func (dst *SocketAddress) XXX_Merge(src proto.Message) { xxx_messageInfo_SocketAddress.Merge(dst, src) } func (m *SocketAddress) XXX_Size() int { return xxx_messageInfo_SocketAddress.Size(m) } func (m *SocketAddress) XXX_DiscardUnknown() { 
xxx_messageInfo_SocketAddress.DiscardUnknown(m) } var xxx_messageInfo_SocketAddress proto.InternalMessageInfo func (m *SocketAddress) GetProtocol() SocketAddress_Protocol { if m != nil { return m.Protocol } return SocketAddress_TCP } func (m *SocketAddress) GetAddress() string { if m != nil { return m.Address } return "" } type isSocketAddress_PortSpecifier interface { isSocketAddress_PortSpecifier() } type SocketAddress_PortValue struct { PortValue uint32 `protobuf:"varint,3,opt,name=port_value,json=portValue,proto3,oneof"` } type SocketAddress_NamedPort struct { NamedPort string `protobuf:"bytes,4,opt,name=named_port,json=namedPort,proto3,oneof"` } func (*SocketAddress_PortValue) isSocketAddress_PortSpecifier() {} func (*SocketAddress_NamedPort) isSocketAddress_PortSpecifier() {} func (m *SocketAddress) GetPortSpecifier() isSocketAddress_PortSpecifier { if m != nil { return m.PortSpecifier } return nil } func (m *SocketAddress) GetPortValue() uint32 { if x, ok := m.GetPortSpecifier().(*SocketAddress_PortValue); ok { return x.PortValue } return 0 } func (m *SocketAddress) GetNamedPort() string { if x, ok := m.GetPortSpecifier().(*SocketAddress_NamedPort); ok { return x.NamedPort } return "" } func (m *SocketAddress) GetResolverName() string { if m != nil { return m.ResolverName } return "" } func (m *SocketAddress) GetIpv4Compat() bool { if m != nil { return m.Ipv4Compat } return false } // XXX_OneofFuncs is for the internal use of the proto package. func (*SocketAddress) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _SocketAddress_OneofMarshaler, _SocketAddress_OneofUnmarshaler, _SocketAddress_OneofSizer, []interface{}{ (*SocketAddress_PortValue)(nil), (*SocketAddress_NamedPort)(nil), } } func _SocketAddress_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*SocketAddress) // port_specifier switch x := m.PortSpecifier.(type) { case *SocketAddress_PortValue: b.EncodeVarint(3<<3 | proto.WireVarint) b.EncodeVarint(uint64(x.PortValue)) case *SocketAddress_NamedPort: b.EncodeVarint(4<<3 | proto.WireBytes) b.EncodeStringBytes(x.NamedPort) case nil: default: return fmt.Errorf("SocketAddress.PortSpecifier has unexpected type %T", x) } return nil } func _SocketAddress_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*SocketAddress) switch tag { case 3: // port_specifier.port_value if wire != proto.WireVarint { return true, proto.ErrInternalBadWireType } x, err := b.DecodeVarint() m.PortSpecifier = &SocketAddress_PortValue{uint32(x)} return true, err case 4: // port_specifier.named_port if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } x, err := b.DecodeStringBytes() m.PortSpecifier = &SocketAddress_NamedPort{x} return true, err default: return false, nil } } func _SocketAddress_OneofSizer(msg proto.Message) (n int) { m := msg.(*SocketAddress) // port_specifier switch x := m.PortSpecifier.(type) { case *SocketAddress_PortValue: n += 1 // tag and wire n += proto.SizeVarint(uint64(x.PortValue)) case *SocketAddress_NamedPort: n += 1 // tag and wire n += proto.SizeVarint(uint64(len(x.NamedPort))) n += len(x.NamedPort) case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } type TcpKeepalive struct { KeepaliveProbes *wrappers.UInt32Value `protobuf:"bytes,1,opt,name=keepalive_probes,json=keepaliveProbes,proto3" 
json:"keepalive_probes,omitempty"` KeepaliveTime *wrappers.UInt32Value `protobuf:"bytes,2,opt,name=keepalive_time,json=keepaliveTime,proto3" json:"keepalive_time,omitempty"` KeepaliveInterval *wrappers.UInt32Value `protobuf:"bytes,3,opt,name=keepalive_interval,json=keepaliveInterval,proto3" json:"keepalive_interval,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *TcpKeepalive) Reset() { *m = TcpKeepalive{} } func (m *TcpKeepalive) String() string { return proto.CompactTextString(m) } func (*TcpKeepalive) ProtoMessage() {} func (*TcpKeepalive) Descriptor() ([]byte, []int) { return fileDescriptor_address_b91d58d2da3489da, []int{2} } func (m *TcpKeepalive) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_TcpKeepalive.Unmarshal(m, b) } func (m *TcpKeepalive) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_TcpKeepalive.Marshal(b, m, deterministic) } func (dst *TcpKeepalive) XXX_Merge(src proto.Message) { xxx_messageInfo_TcpKeepalive.Merge(dst, src) } func (m *TcpKeepalive) XXX_Size() int { return xxx_messageInfo_TcpKeepalive.Size(m) } func (m *TcpKeepalive) XXX_DiscardUnknown() { xxx_messageInfo_TcpKeepalive.DiscardUnknown(m) } var xxx_messageInfo_TcpKeepalive proto.InternalMessageInfo func (m *TcpKeepalive) GetKeepaliveProbes() *wrappers.UInt32Value { if m != nil { return m.KeepaliveProbes } return nil } func (m *TcpKeepalive) GetKeepaliveTime() *wrappers.UInt32Value { if m != nil { return m.KeepaliveTime } return nil } func (m *TcpKeepalive) GetKeepaliveInterval() *wrappers.UInt32Value { if m != nil { return m.KeepaliveInterval } return nil } type BindConfig struct { SourceAddress *SocketAddress `protobuf:"bytes,1,opt,name=source_address,json=sourceAddress,proto3" json:"source_address,omitempty"` Freebind *wrappers.BoolValue `protobuf:"bytes,2,opt,name=freebind,proto3" json:"freebind,omitempty"` SocketOptions []*base.SocketOption `protobuf:"bytes,3,rep,name=socket_options,json=socketOptions,proto3" json:"socket_options,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *BindConfig) Reset() { *m = BindConfig{} } func (m *BindConfig) String() string { return proto.CompactTextString(m) } func (*BindConfig) ProtoMessage() {} func (*BindConfig) Descriptor() ([]byte, []int) { return fileDescriptor_address_b91d58d2da3489da, []int{3} } func (m *BindConfig) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_BindConfig.Unmarshal(m, b) } func (m *BindConfig) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_BindConfig.Marshal(b, m, deterministic) } func (dst *BindConfig) XXX_Merge(src proto.Message) { xxx_messageInfo_BindConfig.Merge(dst, src) } func (m *BindConfig) XXX_Size() int { return xxx_messageInfo_BindConfig.Size(m) } func (m *BindConfig) XXX_DiscardUnknown() { xxx_messageInfo_BindConfig.DiscardUnknown(m) } var xxx_messageInfo_BindConfig proto.InternalMessageInfo func (m *BindConfig) GetSourceAddress() *SocketAddress { if m != nil { return m.SourceAddress } return nil } func (m *BindConfig) GetFreebind() *wrappers.BoolValue { if m != nil { return m.Freebind } return nil } func (m *BindConfig) GetSocketOptions() []*base.SocketOption { if m != nil { return m.SocketOptions } return nil } type Address struct { // Types that are valid to be assigned to Address: // *Address_SocketAddress // *Address_Pipe Address isAddress_Address `protobuf_oneof:"address"` 
XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Address) Reset() { *m = Address{} } func (m *Address) String() string { return proto.CompactTextString(m) } func (*Address) ProtoMessage() {} func (*Address) Descriptor() ([]byte, []int) { return fileDescriptor_address_b91d58d2da3489da, []int{4} } func (m *Address) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Address.Unmarshal(m, b) } func (m *Address) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Address.Marshal(b, m, deterministic) } func (dst *Address) XXX_Merge(src proto.Message) { xxx_messageInfo_Address.Merge(dst, src) } func (m *Address) XXX_Size() int { return xxx_messageInfo_Address.Size(m) } func (m *Address) XXX_DiscardUnknown() { xxx_messageInfo_Address.DiscardUnknown(m) } var xxx_messageInfo_Address proto.InternalMessageInfo type isAddress_Address interface { isAddress_Address() } type Address_SocketAddress struct { SocketAddress *SocketAddress `protobuf:"bytes,1,opt,name=socket_address,json=socketAddress,proto3,oneof"` } type Address_Pipe struct { Pipe *Pipe `protobuf:"bytes,2,opt,name=pipe,proto3,oneof"` } func (*Address_SocketAddress) isAddress_Address() {} func (*Address_Pipe) isAddress_Address() {} func (m *Address) GetAddress() isAddress_Address { if m != nil { return m.Address } return nil } func (m *Address) GetSocketAddress() *SocketAddress { if x, ok := m.GetAddress().(*Address_SocketAddress); ok { return x.SocketAddress } return nil } func (m *Address) GetPipe() *Pipe { if x, ok := m.GetAddress().(*Address_Pipe); ok { return x.Pipe } return nil } // XXX_OneofFuncs is for the internal use of the proto package. func (*Address) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _Address_OneofMarshaler, _Address_OneofUnmarshaler, _Address_OneofSizer, []interface{}{ (*Address_SocketAddress)(nil), (*Address_Pipe)(nil), } } func _Address_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*Address) // address switch x := m.Address.(type) { case *Address_SocketAddress: b.EncodeVarint(1<<3 | proto.WireBytes) if err := b.EncodeMessage(x.SocketAddress); err != nil { return err } case *Address_Pipe: b.EncodeVarint(2<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Pipe); err != nil { return err } case nil: default: return fmt.Errorf("Address.Address has unexpected type %T", x) } return nil } func _Address_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*Address) switch tag { case 1: // address.socket_address if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(SocketAddress) err := b.DecodeMessage(msg) m.Address = &Address_SocketAddress{msg} return true, err case 2: // address.pipe if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(Pipe) err := b.DecodeMessage(msg) m.Address = &Address_Pipe{msg} return true, err default: return false, nil } } func _Address_OneofSizer(msg proto.Message) (n int) { m := msg.(*Address) // address switch x := m.Address.(type) { case *Address_SocketAddress: s := proto.Size(x.SocketAddress) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *Address_Pipe: s := proto.Size(x.Pipe) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in 
oneof", x)) } return n } type CidrRange struct { AddressPrefix string `protobuf:"bytes,1,opt,name=address_prefix,json=addressPrefix,proto3" json:"address_prefix,omitempty"` PrefixLen *wrappers.UInt32Value `protobuf:"bytes,2,opt,name=prefix_len,json=prefixLen,proto3" json:"prefix_len,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *CidrRange) Reset() { *m = CidrRange{} } func (m *CidrRange) String() string { return proto.CompactTextString(m) } func (*CidrRange) ProtoMessage() {} func (*CidrRange) Descriptor() ([]byte, []int) { return fileDescriptor_address_b91d58d2da3489da, []int{5} } func (m *CidrRange) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_CidrRange.Unmarshal(m, b) } func (m *CidrRange) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_CidrRange.Marshal(b, m, deterministic) } func (dst *CidrRange) XXX_Merge(src proto.Message) { xxx_messageInfo_CidrRange.Merge(dst, src) } func (m *CidrRange) XXX_Size() int { return xxx_messageInfo_CidrRange.Size(m) } func (m *CidrRange) XXX_DiscardUnknown() { xxx_messageInfo_CidrRange.DiscardUnknown(m) } var xxx_messageInfo_CidrRange proto.InternalMessageInfo func (m *CidrRange) GetAddressPrefix() string { if m != nil { return m.AddressPrefix } return "" } func (m *CidrRange) GetPrefixLen() *wrappers.UInt32Value { if m != nil { return m.PrefixLen } return nil } func init() { proto.RegisterType((*Pipe)(nil), "envoy.api.v2.core.Pipe") proto.RegisterType((*SocketAddress)(nil), "envoy.api.v2.core.SocketAddress") proto.RegisterType((*TcpKeepalive)(nil), "envoy.api.v2.core.TcpKeepalive") proto.RegisterType((*BindConfig)(nil), "envoy.api.v2.core.BindConfig") proto.RegisterType((*Address)(nil), "envoy.api.v2.core.Address") proto.RegisterType((*CidrRange)(nil), "envoy.api.v2.core.CidrRange") proto.RegisterEnum("envoy.api.v2.core.SocketAddress_Protocol", SocketAddress_Protocol_name, SocketAddress_Protocol_value) } func init() { proto.RegisterFile("envoy/api/v2/core/address.proto", fileDescriptor_address_b91d58d2da3489da) } var fileDescriptor_address_b91d58d2da3489da = []byte{ // 667 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x94, 0x53, 0x4f, 0x4f, 0xdb, 0x48, 0x14, 0xcf, 0xc4, 0x01, 0x92, 0x17, 0x92, 0x0d, 0x73, 0xc1, 0x8a, 0xd8, 0x4d, 0x14, 0xb4, 0x52, 0x16, 0xed, 0x3a, 0xbb, 0x61, 0xb5, 0x77, 0x9c, 0x55, 0x01, 0x51, 0xb5, 0xae, 0x81, 0x5e, 0xad, 0x49, 0xf2, 0x92, 0x8e, 0x70, 0x3c, 0xa3, 0xb1, 0x71, 0xe1, 0x56, 0xf5, 0xd0, 0x43, 0xef, 0xfd, 0x2e, 0x55, 0x4f, 0x7c, 0x87, 0x7e, 0x82, 0x1e, 0xf9, 0x14, 0x54, 0x33, 0xb6, 0x83, 0xda, 0xb4, 0xa2, 0xbd, 0xcd, 0xbc, 0xf7, 0xfb, 0xfd, 0xe6, 0xf7, 0xfe, 0x0c, 0x74, 0x30, 0x4a, 0xc5, 0xf5, 0x80, 0x49, 0x3e, 0x48, 0x87, 0x83, 0x89, 0x50, 0x38, 0x60, 0xd3, 0xa9, 0xc2, 0x38, 0x76, 0xa4, 0x12, 0x89, 0xa0, 0x5b, 0x06, 0xe0, 0x30, 0xc9, 0x9d, 0x74, 0xe8, 0x68, 0x40, 0x7b, 0x67, 0x95, 0x33, 0x66, 0x31, 0x66, 0x84, 0xf6, 0x6f, 0x73, 0x21, 0xe6, 0x21, 0x0e, 0xcc, 0x6d, 0x7c, 0x39, 0x1b, 0xbc, 0x54, 0x4c, 0x4a, 0x54, 0xb9, 0x60, 0x7b, 0x3b, 0x65, 0x21, 0x9f, 0xb2, 0x04, 0x07, 0xc5, 0x21, 0x4b, 0xf4, 0x7e, 0x87, 0x8a, 0xc7, 0x25, 0xd2, 0x5f, 0xa1, 0x22, 0x59, 0xf2, 0xc2, 0x26, 0x5d, 0xd2, 0xaf, 0xb9, 0xb5, 0x0f, 0xb7, 0x37, 0x56, 0x45, 0x95, 0xbb, 0xc4, 0x37, 0xe1, 0xde, 0xc7, 0x32, 0x34, 0x4e, 0xc5, 0xe4, 0x02, 0x93, 0x83, 0xcc, 0x28, 0x7d, 0x06, 0x55, 0xa3, 0x30, 0x11, 0xa1, 0x21, 0x35, 0x87, 0x7f, 0x38, 0x2b, 0xae, 0x9d, 0x2f, 0x38, 0x8e, 0x97, 
0x13, 0x5c, 0xd0, 0xfa, 0x6b, 0xaf, 0x49, 0xb9, 0x45, 0xfc, 0xa5, 0x0c, 0xdd, 0x85, 0x8d, 0xbc, 0x0d, 0x76, 0xf9, 0x6b, 0x1b, 0x45, 0x86, 0xfe, 0x09, 0x20, 0x85, 0x4a, 0x82, 0x94, 0x85, 0x97, 0x68, 0x5b, 0x5d, 0xd2, 0x6f, 0xb8, 0x75, 0x8d, 0x5b, 0xdf, 0xab, 0xd8, 0x77, 0x77, 0xd6, 0x51, 0xc9, 0xaf, 0x69, 0xc0, 0x73, 0x9d, 0xa7, 0x1d, 0x80, 0x88, 0x2d, 0x70, 0x1a, 0xe8, 0x90, 0x5d, 0xd1, 0xaa, 0x1a, 0x60, 0x62, 0x9e, 0x50, 0x09, 0xdd, 0x85, 0x86, 0xc2, 0x58, 0x84, 0x29, 0xaa, 0x40, 0x47, 0xed, 0x35, 0x8d, 0xf1, 0x37, 0x8b, 0xe0, 0x13, 0xb6, 0xd0, 0x2a, 0x75, 0x2e, 0xd3, 0x7f, 0x83, 0x89, 0x58, 0x48, 0x96, 0xd8, 0xeb, 0x5d, 0xd2, 0xaf, 0xfa, 0xa0, 0x43, 0x23, 0x13, 0xe9, 0xed, 0x40, 0xb5, 0xa8, 0x8d, 0x6e, 0x80, 0x75, 0x36, 0xf2, 0x5a, 0x25, 0x7d, 0x38, 0xff, 0xdf, 0x6b, 0x11, 0x77, 0x1b, 0x9a, 0xc6, 0x72, 0x2c, 0x71, 0xc2, 0x67, 0x1c, 0x15, 0x5d, 0x7b, 0x7f, 0x7b, 0x63, 0x91, 0xde, 0x2d, 0x81, 0xcd, 0xb3, 0x89, 0x3c, 0x41, 0x94, 0x2c, 0xe4, 0x29, 0xd2, 0x43, 0x68, 0x5d, 0x14, 0x97, 0x40, 0x2a, 0x31, 0xc6, 0xd8, 0x34, 0xb7, 0x3e, 0xdc, 0x71, 0xb2, 0x09, 0x3b, 0xc5, 0x84, 0x9d, 0xf3, 0xe3, 0x28, 0xd9, 0x1f, 0x9a, 0x32, 0xfd, 0x5f, 0x96, 0x2c, 0xcf, 0x90, 0xe8, 0x08, 0x9a, 0xf7, 0x42, 0x09, 0x5f, 0xa0, 0xe9, 0xe8, 0x43, 0x32, 0x8d, 0x25, 0xe7, 0x8c, 0x2f, 0x90, 0x9e, 0x00, 0xbd, 0x17, 0xe1, 0x51, 0x82, 0x2a, 0x65, 0xa1, 0x69, 0xf9, 0x43, 0x42, 0x5b, 0x4b, 0xde, 0x71, 0x4e, 0xeb, 0x7d, 0x22, 0x00, 0x2e, 0x8f, 0xa6, 0x23, 0x11, 0xcd, 0xf8, 0x9c, 0x9e, 0x42, 0x33, 0x16, 0x97, 0x6a, 0x82, 0x41, 0x31, 0xf2, 0xac, 0xce, 0xee, 0x43, 0x4b, 0x94, 0xef, 0xce, 0x5b, 0xb3, 0x3b, 0x8d, 0x4c, 0xa3, 0xd8, 0xc9, 0xff, 0xa0, 0x3a, 0x53, 0x88, 0x63, 0x1e, 0x4d, 0xf3, 0x7a, 0xdb, 0x2b, 0x36, 0x5d, 0x21, 0xc2, 0xcc, 0xe4, 0x12, 0x4b, 0x1f, 0x69, 0x33, 0xfa, 0x8d, 0x40, 0xc8, 0x84, 0x8b, 0x28, 0xb6, 0xad, 0xae, 0xd5, 0xaf, 0x0f, 0x3b, 0xdf, 0x35, 0xf3, 0xd4, 0xe0, 0xf4, 0xfb, 0xf7, 0xb7, 0xb8, 0xf7, 0x8e, 0xc0, 0x46, 0xe1, 0xe5, 0x78, 0xa9, 0xf9, 0x93, 0x05, 0x1e, 0x95, 0x0a, 0xd9, 0x42, 0xea, 0x2f, 0xa8, 0x48, 0x2e, 0x8b, 0x11, 0x6e, 0x7f, 0x43, 0x40, 0x7f, 0xe1, 0xa3, 0x92, 0x6f, 0x60, 0x6e, 0x6b, 0xf9, 0x8d, 0x8a, 0x3d, 0x7b, 0x43, 0xa0, 0x36, 0xe2, 0x53, 0xe5, 0xb3, 0x68, 0x8e, 0xf4, 0x6f, 0x68, 0xe6, 0xf9, 0x40, 0x2a, 0x9c, 0xf1, 0xab, 0xd5, 0x4f, 0xdf, 0xc8, 0x01, 0x9e, 0xc9, 0xd3, 0x43, 0x80, 0x0c, 0x19, 0x84, 0x18, 0xfd, 0xc8, 0x26, 0xe5, 0x43, 0xda, 0xb3, 0xec, 0x57, 0xc4, 0xaf, 0x65, 0xdc, 0xc7, 0x18, 0xb9, 0xff, 0x40, 0x87, 0x8b, 0xcc, 0xbf, 0x54, 0xe2, 0xea, 0x7a, 0xb5, 0x14, 0x77, 0xf3, 0xa0, 0x78, 0x5a, 0x24, 0xc2, 0x23, 0xe3, 0x75, 0xa3, 0xbf, 0xff, 0x39, 0x00, 0x00, 0xff, 0xff, 0x03, 0x02, 0x9c, 0x89, 0x34, 0x05, 0x00, 0x00, } grpc-go-1.22.1/balancer/xds/internal/proto/envoy/api/v2/core/base/000077500000000000000000000000001351635773100245745ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/envoy/api/v2/core/base/base.pb.go000077500000000000000000001161531351635773100264470ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: envoy/api/v2/core/base.proto package core import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import any "github.com/golang/protobuf/ptypes/any" import _struct "github.com/golang/protobuf/ptypes/struct" import wrappers "github.com/golang/protobuf/ptypes/wrappers" import percent "google.golang.org/grpc/balancer/xds/internal/proto/envoy/type/percent" import _ "google.golang.org/grpc/balancer/xds/internal/proto/validate" // Reference imports to suppress errors if they are not otherwise used. 
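// NOTE (editorial, not generated): the sketch below is a hand-written,
// illustrative example and is not part of the protoc-gen-go output of
// base.pb.go. It assumes the HeaderValue, HeaderValueOption and Locality
// types defined later in this file; the header name and locality labels are
// placeholders.
func exampleHeaderValueOption() *HeaderValueOption {
	// Wrap a header key/value with an explicit append flag (false here).
	return &HeaderValueOption{
		Header: &HeaderValue{Key: "x-example-header", Value: "on"},
		Append: &wrappers.BoolValue{Value: false},
	}
}

func exampleLocality() *Locality {
	return &Locality{Region: "us-central1", Zone: "us-central1-a", SubZone: "rack-0"}
}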
var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type RoutingPriority int32 const ( RoutingPriority_DEFAULT RoutingPriority = 0 RoutingPriority_HIGH RoutingPriority = 1 ) var RoutingPriority_name = map[int32]string{ 0: "DEFAULT", 1: "HIGH", } var RoutingPriority_value = map[string]int32{ "DEFAULT": 0, "HIGH": 1, } func (x RoutingPriority) String() string { return proto.EnumName(RoutingPriority_name, int32(x)) } func (RoutingPriority) EnumDescriptor() ([]byte, []int) { return fileDescriptor_base_33c58439b08f821d, []int{0} } type RequestMethod int32 const ( RequestMethod_METHOD_UNSPECIFIED RequestMethod = 0 RequestMethod_GET RequestMethod = 1 RequestMethod_HEAD RequestMethod = 2 RequestMethod_POST RequestMethod = 3 RequestMethod_PUT RequestMethod = 4 RequestMethod_DELETE RequestMethod = 5 RequestMethod_CONNECT RequestMethod = 6 RequestMethod_OPTIONS RequestMethod = 7 RequestMethod_TRACE RequestMethod = 8 ) var RequestMethod_name = map[int32]string{ 0: "METHOD_UNSPECIFIED", 1: "GET", 2: "HEAD", 3: "POST", 4: "PUT", 5: "DELETE", 6: "CONNECT", 7: "OPTIONS", 8: "TRACE", } var RequestMethod_value = map[string]int32{ "METHOD_UNSPECIFIED": 0, "GET": 1, "HEAD": 2, "POST": 3, "PUT": 4, "DELETE": 5, "CONNECT": 6, "OPTIONS": 7, "TRACE": 8, } func (x RequestMethod) String() string { return proto.EnumName(RequestMethod_name, int32(x)) } func (RequestMethod) EnumDescriptor() ([]byte, []int) { return fileDescriptor_base_33c58439b08f821d, []int{1} } type SocketOption_SocketState int32 const ( SocketOption_STATE_PREBIND SocketOption_SocketState = 0 SocketOption_STATE_BOUND SocketOption_SocketState = 1 SocketOption_STATE_LISTENING SocketOption_SocketState = 2 ) var SocketOption_SocketState_name = map[int32]string{ 0: "STATE_PREBIND", 1: "STATE_BOUND", 2: "STATE_LISTENING", } var SocketOption_SocketState_value = map[string]int32{ "STATE_PREBIND": 0, "STATE_BOUND": 1, "STATE_LISTENING": 2, } func (x SocketOption_SocketState) String() string { return proto.EnumName(SocketOption_SocketState_name, int32(x)) } func (SocketOption_SocketState) EnumDescriptor() ([]byte, []int) { return fileDescriptor_base_33c58439b08f821d, []int{9, 0} } type Locality struct { Region string `protobuf:"bytes,1,opt,name=region,proto3" json:"region,omitempty"` Zone string `protobuf:"bytes,2,opt,name=zone,proto3" json:"zone,omitempty"` SubZone string `protobuf:"bytes,3,opt,name=sub_zone,json=subZone,proto3" json:"sub_zone,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Locality) Reset() { *m = Locality{} } func (m *Locality) String() string { return proto.CompactTextString(m) } func (*Locality) ProtoMessage() {} func (*Locality) Descriptor() ([]byte, []int) { return fileDescriptor_base_33c58439b08f821d, []int{0} } func (m *Locality) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Locality.Unmarshal(m, b) } func (m *Locality) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Locality.Marshal(b, m, deterministic) } func (dst *Locality) XXX_Merge(src proto.Message) { xxx_messageInfo_Locality.Merge(dst, src) } func (m *Locality) XXX_Size() int { return xxx_messageInfo_Locality.Size(m) } func (m 
*Locality) XXX_DiscardUnknown() { xxx_messageInfo_Locality.DiscardUnknown(m) } var xxx_messageInfo_Locality proto.InternalMessageInfo func (m *Locality) GetRegion() string { if m != nil { return m.Region } return "" } func (m *Locality) GetZone() string { if m != nil { return m.Zone } return "" } func (m *Locality) GetSubZone() string { if m != nil { return m.SubZone } return "" } type Node struct { Id string `protobuf:"bytes,1,opt,name=id,proto3" json:"id,omitempty"` Cluster string `protobuf:"bytes,2,opt,name=cluster,proto3" json:"cluster,omitempty"` Metadata *_struct.Struct `protobuf:"bytes,3,opt,name=metadata,proto3" json:"metadata,omitempty"` Locality *Locality `protobuf:"bytes,4,opt,name=locality,proto3" json:"locality,omitempty"` BuildVersion string `protobuf:"bytes,5,opt,name=build_version,json=buildVersion,proto3" json:"build_version,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Node) Reset() { *m = Node{} } func (m *Node) String() string { return proto.CompactTextString(m) } func (*Node) ProtoMessage() {} func (*Node) Descriptor() ([]byte, []int) { return fileDescriptor_base_33c58439b08f821d, []int{1} } func (m *Node) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Node.Unmarshal(m, b) } func (m *Node) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Node.Marshal(b, m, deterministic) } func (dst *Node) XXX_Merge(src proto.Message) { xxx_messageInfo_Node.Merge(dst, src) } func (m *Node) XXX_Size() int { return xxx_messageInfo_Node.Size(m) } func (m *Node) XXX_DiscardUnknown() { xxx_messageInfo_Node.DiscardUnknown(m) } var xxx_messageInfo_Node proto.InternalMessageInfo func (m *Node) GetId() string { if m != nil { return m.Id } return "" } func (m *Node) GetCluster() string { if m != nil { return m.Cluster } return "" } func (m *Node) GetMetadata() *_struct.Struct { if m != nil { return m.Metadata } return nil } func (m *Node) GetLocality() *Locality { if m != nil { return m.Locality } return nil } func (m *Node) GetBuildVersion() string { if m != nil { return m.BuildVersion } return "" } type Metadata struct { FilterMetadata map[string]*_struct.Struct `protobuf:"bytes,1,rep,name=filter_metadata,json=filterMetadata,proto3" json:"filter_metadata,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Metadata) Reset() { *m = Metadata{} } func (m *Metadata) String() string { return proto.CompactTextString(m) } func (*Metadata) ProtoMessage() {} func (*Metadata) Descriptor() ([]byte, []int) { return fileDescriptor_base_33c58439b08f821d, []int{2} } func (m *Metadata) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Metadata.Unmarshal(m, b) } func (m *Metadata) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Metadata.Marshal(b, m, deterministic) } func (dst *Metadata) XXX_Merge(src proto.Message) { xxx_messageInfo_Metadata.Merge(dst, src) } func (m *Metadata) XXX_Size() int { return xxx_messageInfo_Metadata.Size(m) } func (m *Metadata) XXX_DiscardUnknown() { xxx_messageInfo_Metadata.DiscardUnknown(m) } var xxx_messageInfo_Metadata proto.InternalMessageInfo func (m *Metadata) GetFilterMetadata() map[string]*_struct.Struct { if m != nil { return m.FilterMetadata } return nil } type RuntimeUInt32 struct { DefaultValue uint32 
`protobuf:"varint,2,opt,name=default_value,json=defaultValue,proto3" json:"default_value,omitempty"` RuntimeKey string `protobuf:"bytes,3,opt,name=runtime_key,json=runtimeKey,proto3" json:"runtime_key,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *RuntimeUInt32) Reset() { *m = RuntimeUInt32{} } func (m *RuntimeUInt32) String() string { return proto.CompactTextString(m) } func (*RuntimeUInt32) ProtoMessage() {} func (*RuntimeUInt32) Descriptor() ([]byte, []int) { return fileDescriptor_base_33c58439b08f821d, []int{3} } func (m *RuntimeUInt32) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_RuntimeUInt32.Unmarshal(m, b) } func (m *RuntimeUInt32) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_RuntimeUInt32.Marshal(b, m, deterministic) } func (dst *RuntimeUInt32) XXX_Merge(src proto.Message) { xxx_messageInfo_RuntimeUInt32.Merge(dst, src) } func (m *RuntimeUInt32) XXX_Size() int { return xxx_messageInfo_RuntimeUInt32.Size(m) } func (m *RuntimeUInt32) XXX_DiscardUnknown() { xxx_messageInfo_RuntimeUInt32.DiscardUnknown(m) } var xxx_messageInfo_RuntimeUInt32 proto.InternalMessageInfo func (m *RuntimeUInt32) GetDefaultValue() uint32 { if m != nil { return m.DefaultValue } return 0 } func (m *RuntimeUInt32) GetRuntimeKey() string { if m != nil { return m.RuntimeKey } return "" } type HeaderValue struct { Key string `protobuf:"bytes,1,opt,name=key,proto3" json:"key,omitempty"` Value string `protobuf:"bytes,2,opt,name=value,proto3" json:"value,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *HeaderValue) Reset() { *m = HeaderValue{} } func (m *HeaderValue) String() string { return proto.CompactTextString(m) } func (*HeaderValue) ProtoMessage() {} func (*HeaderValue) Descriptor() ([]byte, []int) { return fileDescriptor_base_33c58439b08f821d, []int{4} } func (m *HeaderValue) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_HeaderValue.Unmarshal(m, b) } func (m *HeaderValue) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_HeaderValue.Marshal(b, m, deterministic) } func (dst *HeaderValue) XXX_Merge(src proto.Message) { xxx_messageInfo_HeaderValue.Merge(dst, src) } func (m *HeaderValue) XXX_Size() int { return xxx_messageInfo_HeaderValue.Size(m) } func (m *HeaderValue) XXX_DiscardUnknown() { xxx_messageInfo_HeaderValue.DiscardUnknown(m) } var xxx_messageInfo_HeaderValue proto.InternalMessageInfo func (m *HeaderValue) GetKey() string { if m != nil { return m.Key } return "" } func (m *HeaderValue) GetValue() string { if m != nil { return m.Value } return "" } type HeaderValueOption struct { Header *HeaderValue `protobuf:"bytes,1,opt,name=header,proto3" json:"header,omitempty"` Append *wrappers.BoolValue `protobuf:"bytes,2,opt,name=append,proto3" json:"append,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *HeaderValueOption) Reset() { *m = HeaderValueOption{} } func (m *HeaderValueOption) String() string { return proto.CompactTextString(m) } func (*HeaderValueOption) ProtoMessage() {} func (*HeaderValueOption) Descriptor() ([]byte, []int) { return fileDescriptor_base_33c58439b08f821d, []int{5} } func (m *HeaderValueOption) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_HeaderValueOption.Unmarshal(m, b) } func (m *HeaderValueOption) XXX_Marshal(b []byte, deterministic bool) 
([]byte, error) { return xxx_messageInfo_HeaderValueOption.Marshal(b, m, deterministic) } func (dst *HeaderValueOption) XXX_Merge(src proto.Message) { xxx_messageInfo_HeaderValueOption.Merge(dst, src) } func (m *HeaderValueOption) XXX_Size() int { return xxx_messageInfo_HeaderValueOption.Size(m) } func (m *HeaderValueOption) XXX_DiscardUnknown() { xxx_messageInfo_HeaderValueOption.DiscardUnknown(m) } var xxx_messageInfo_HeaderValueOption proto.InternalMessageInfo func (m *HeaderValueOption) GetHeader() *HeaderValue { if m != nil { return m.Header } return nil } func (m *HeaderValueOption) GetAppend() *wrappers.BoolValue { if m != nil { return m.Append } return nil } type HeaderMap struct { Headers []*HeaderValue `protobuf:"bytes,1,rep,name=headers,proto3" json:"headers,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *HeaderMap) Reset() { *m = HeaderMap{} } func (m *HeaderMap) String() string { return proto.CompactTextString(m) } func (*HeaderMap) ProtoMessage() {} func (*HeaderMap) Descriptor() ([]byte, []int) { return fileDescriptor_base_33c58439b08f821d, []int{6} } func (m *HeaderMap) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_HeaderMap.Unmarshal(m, b) } func (m *HeaderMap) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_HeaderMap.Marshal(b, m, deterministic) } func (dst *HeaderMap) XXX_Merge(src proto.Message) { xxx_messageInfo_HeaderMap.Merge(dst, src) } func (m *HeaderMap) XXX_Size() int { return xxx_messageInfo_HeaderMap.Size(m) } func (m *HeaderMap) XXX_DiscardUnknown() { xxx_messageInfo_HeaderMap.DiscardUnknown(m) } var xxx_messageInfo_HeaderMap proto.InternalMessageInfo func (m *HeaderMap) GetHeaders() []*HeaderValue { if m != nil { return m.Headers } return nil } type DataSource struct { // Types that are valid to be assigned to Specifier: // *DataSource_Filename // *DataSource_InlineBytes // *DataSource_InlineString Specifier isDataSource_Specifier `protobuf_oneof:"specifier"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *DataSource) Reset() { *m = DataSource{} } func (m *DataSource) String() string { return proto.CompactTextString(m) } func (*DataSource) ProtoMessage() {} func (*DataSource) Descriptor() ([]byte, []int) { return fileDescriptor_base_33c58439b08f821d, []int{7} } func (m *DataSource) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_DataSource.Unmarshal(m, b) } func (m *DataSource) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_DataSource.Marshal(b, m, deterministic) } func (dst *DataSource) XXX_Merge(src proto.Message) { xxx_messageInfo_DataSource.Merge(dst, src) } func (m *DataSource) XXX_Size() int { return xxx_messageInfo_DataSource.Size(m) } func (m *DataSource) XXX_DiscardUnknown() { xxx_messageInfo_DataSource.DiscardUnknown(m) } var xxx_messageInfo_DataSource proto.InternalMessageInfo type isDataSource_Specifier interface { isDataSource_Specifier() } type DataSource_Filename struct { Filename string `protobuf:"bytes,1,opt,name=filename,proto3,oneof"` } type DataSource_InlineBytes struct { InlineBytes []byte `protobuf:"bytes,2,opt,name=inline_bytes,json=inlineBytes,proto3,oneof"` } type DataSource_InlineString struct { InlineString string `protobuf:"bytes,3,opt,name=inline_string,json=inlineString,proto3,oneof"` } func (*DataSource_Filename) isDataSource_Specifier() {} func (*DataSource_InlineBytes) 
isDataSource_Specifier() {} func (*DataSource_InlineString) isDataSource_Specifier() {} func (m *DataSource) GetSpecifier() isDataSource_Specifier { if m != nil { return m.Specifier } return nil } func (m *DataSource) GetFilename() string { if x, ok := m.GetSpecifier().(*DataSource_Filename); ok { return x.Filename } return "" } func (m *DataSource) GetInlineBytes() []byte { if x, ok := m.GetSpecifier().(*DataSource_InlineBytes); ok { return x.InlineBytes } return nil } func (m *DataSource) GetInlineString() string { if x, ok := m.GetSpecifier().(*DataSource_InlineString); ok { return x.InlineString } return "" } // XXX_OneofFuncs is for the internal use of the proto package. func (*DataSource) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _DataSource_OneofMarshaler, _DataSource_OneofUnmarshaler, _DataSource_OneofSizer, []interface{}{ (*DataSource_Filename)(nil), (*DataSource_InlineBytes)(nil), (*DataSource_InlineString)(nil), } } func _DataSource_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*DataSource) // specifier switch x := m.Specifier.(type) { case *DataSource_Filename: b.EncodeVarint(1<<3 | proto.WireBytes) b.EncodeStringBytes(x.Filename) case *DataSource_InlineBytes: b.EncodeVarint(2<<3 | proto.WireBytes) b.EncodeRawBytes(x.InlineBytes) case *DataSource_InlineString: b.EncodeVarint(3<<3 | proto.WireBytes) b.EncodeStringBytes(x.InlineString) case nil: default: return fmt.Errorf("DataSource.Specifier has unexpected type %T", x) } return nil } func _DataSource_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*DataSource) switch tag { case 1: // specifier.filename if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } x, err := b.DecodeStringBytes() m.Specifier = &DataSource_Filename{x} return true, err case 2: // specifier.inline_bytes if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } x, err := b.DecodeRawBytes(true) m.Specifier = &DataSource_InlineBytes{x} return true, err case 3: // specifier.inline_string if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } x, err := b.DecodeStringBytes() m.Specifier = &DataSource_InlineString{x} return true, err default: return false, nil } } func _DataSource_OneofSizer(msg proto.Message) (n int) { m := msg.(*DataSource) // specifier switch x := m.Specifier.(type) { case *DataSource_Filename: n += 1 // tag and wire n += proto.SizeVarint(uint64(len(x.Filename))) n += len(x.Filename) case *DataSource_InlineBytes: n += 1 // tag and wire n += proto.SizeVarint(uint64(len(x.InlineBytes))) n += len(x.InlineBytes) case *DataSource_InlineString: n += 1 // tag and wire n += proto.SizeVarint(uint64(len(x.InlineString))) n += len(x.InlineString) case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } type TransportSocket struct { Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` // Types that are valid to be assigned to ConfigType: // *TransportSocket_Config // *TransportSocket_TypedConfig ConfigType isTransportSocket_ConfigType `protobuf_oneof:"config_type"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *TransportSocket) Reset() { *m = TransportSocket{} } func (m *TransportSocket) String() string { return proto.CompactTextString(m) } func 
(*TransportSocket) ProtoMessage() {} func (*TransportSocket) Descriptor() ([]byte, []int) { return fileDescriptor_base_33c58439b08f821d, []int{8} } func (m *TransportSocket) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_TransportSocket.Unmarshal(m, b) } func (m *TransportSocket) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_TransportSocket.Marshal(b, m, deterministic) } func (dst *TransportSocket) XXX_Merge(src proto.Message) { xxx_messageInfo_TransportSocket.Merge(dst, src) } func (m *TransportSocket) XXX_Size() int { return xxx_messageInfo_TransportSocket.Size(m) } func (m *TransportSocket) XXX_DiscardUnknown() { xxx_messageInfo_TransportSocket.DiscardUnknown(m) } var xxx_messageInfo_TransportSocket proto.InternalMessageInfo func (m *TransportSocket) GetName() string { if m != nil { return m.Name } return "" } type isTransportSocket_ConfigType interface { isTransportSocket_ConfigType() } type TransportSocket_Config struct { Config *_struct.Struct `protobuf:"bytes,2,opt,name=config,proto3,oneof"` } type TransportSocket_TypedConfig struct { TypedConfig *any.Any `protobuf:"bytes,3,opt,name=typed_config,json=typedConfig,proto3,oneof"` } func (*TransportSocket_Config) isTransportSocket_ConfigType() {} func (*TransportSocket_TypedConfig) isTransportSocket_ConfigType() {} func (m *TransportSocket) GetConfigType() isTransportSocket_ConfigType { if m != nil { return m.ConfigType } return nil } func (m *TransportSocket) GetConfig() *_struct.Struct { if x, ok := m.GetConfigType().(*TransportSocket_Config); ok { return x.Config } return nil } func (m *TransportSocket) GetTypedConfig() *any.Any { if x, ok := m.GetConfigType().(*TransportSocket_TypedConfig); ok { return x.TypedConfig } return nil } // XXX_OneofFuncs is for the internal use of the proto package. 
func (*TransportSocket) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _TransportSocket_OneofMarshaler, _TransportSocket_OneofUnmarshaler, _TransportSocket_OneofSizer, []interface{}{ (*TransportSocket_Config)(nil), (*TransportSocket_TypedConfig)(nil), } } func _TransportSocket_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*TransportSocket) // config_type switch x := m.ConfigType.(type) { case *TransportSocket_Config: b.EncodeVarint(2<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Config); err != nil { return err } case *TransportSocket_TypedConfig: b.EncodeVarint(3<<3 | proto.WireBytes) if err := b.EncodeMessage(x.TypedConfig); err != nil { return err } case nil: default: return fmt.Errorf("TransportSocket.ConfigType has unexpected type %T", x) } return nil } func _TransportSocket_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*TransportSocket) switch tag { case 2: // config_type.config if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(_struct.Struct) err := b.DecodeMessage(msg) m.ConfigType = &TransportSocket_Config{msg} return true, err case 3: // config_type.typed_config if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(any.Any) err := b.DecodeMessage(msg) m.ConfigType = &TransportSocket_TypedConfig{msg} return true, err default: return false, nil } } func _TransportSocket_OneofSizer(msg proto.Message) (n int) { m := msg.(*TransportSocket) // config_type switch x := m.ConfigType.(type) { case *TransportSocket_Config: s := proto.Size(x.Config) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *TransportSocket_TypedConfig: s := proto.Size(x.TypedConfig) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } type SocketOption struct { Description string `protobuf:"bytes,1,opt,name=description,proto3" json:"description,omitempty"` Level int64 `protobuf:"varint,2,opt,name=level,proto3" json:"level,omitempty"` Name int64 `protobuf:"varint,3,opt,name=name,proto3" json:"name,omitempty"` // Types that are valid to be assigned to Value: // *SocketOption_IntValue // *SocketOption_BufValue Value isSocketOption_Value `protobuf_oneof:"value"` State SocketOption_SocketState `protobuf:"varint,6,opt,name=state,proto3,enum=envoy.api.v2.core.SocketOption_SocketState" json:"state,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *SocketOption) Reset() { *m = SocketOption{} } func (m *SocketOption) String() string { return proto.CompactTextString(m) } func (*SocketOption) ProtoMessage() {} func (*SocketOption) Descriptor() ([]byte, []int) { return fileDescriptor_base_33c58439b08f821d, []int{9} } func (m *SocketOption) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_SocketOption.Unmarshal(m, b) } func (m *SocketOption) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_SocketOption.Marshal(b, m, deterministic) } func (dst *SocketOption) XXX_Merge(src proto.Message) { xxx_messageInfo_SocketOption.Merge(dst, src) } func (m *SocketOption) XXX_Size() int { return xxx_messageInfo_SocketOption.Size(m) } func (m *SocketOption) XXX_DiscardUnknown() { xxx_messageInfo_SocketOption.DiscardUnknown(m) } var 
xxx_messageInfo_SocketOption proto.InternalMessageInfo func (m *SocketOption) GetDescription() string { if m != nil { return m.Description } return "" } func (m *SocketOption) GetLevel() int64 { if m != nil { return m.Level } return 0 } func (m *SocketOption) GetName() int64 { if m != nil { return m.Name } return 0 } type isSocketOption_Value interface { isSocketOption_Value() } type SocketOption_IntValue struct { IntValue int64 `protobuf:"varint,4,opt,name=int_value,json=intValue,proto3,oneof"` } type SocketOption_BufValue struct { BufValue []byte `protobuf:"bytes,5,opt,name=buf_value,json=bufValue,proto3,oneof"` } func (*SocketOption_IntValue) isSocketOption_Value() {} func (*SocketOption_BufValue) isSocketOption_Value() {} func (m *SocketOption) GetValue() isSocketOption_Value { if m != nil { return m.Value } return nil } func (m *SocketOption) GetIntValue() int64 { if x, ok := m.GetValue().(*SocketOption_IntValue); ok { return x.IntValue } return 0 } func (m *SocketOption) GetBufValue() []byte { if x, ok := m.GetValue().(*SocketOption_BufValue); ok { return x.BufValue } return nil } func (m *SocketOption) GetState() SocketOption_SocketState { if m != nil { return m.State } return SocketOption_STATE_PREBIND } // XXX_OneofFuncs is for the internal use of the proto package. func (*SocketOption) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _SocketOption_OneofMarshaler, _SocketOption_OneofUnmarshaler, _SocketOption_OneofSizer, []interface{}{ (*SocketOption_IntValue)(nil), (*SocketOption_BufValue)(nil), } } func _SocketOption_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*SocketOption) // value switch x := m.Value.(type) { case *SocketOption_IntValue: b.EncodeVarint(4<<3 | proto.WireVarint) b.EncodeVarint(uint64(x.IntValue)) case *SocketOption_BufValue: b.EncodeVarint(5<<3 | proto.WireBytes) b.EncodeRawBytes(x.BufValue) case nil: default: return fmt.Errorf("SocketOption.Value has unexpected type %T", x) } return nil } func _SocketOption_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*SocketOption) switch tag { case 4: // value.int_value if wire != proto.WireVarint { return true, proto.ErrInternalBadWireType } x, err := b.DecodeVarint() m.Value = &SocketOption_IntValue{int64(x)} return true, err case 5: // value.buf_value if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } x, err := b.DecodeRawBytes(true) m.Value = &SocketOption_BufValue{x} return true, err default: return false, nil } } func _SocketOption_OneofSizer(msg proto.Message) (n int) { m := msg.(*SocketOption) // value switch x := m.Value.(type) { case *SocketOption_IntValue: n += 1 // tag and wire n += proto.SizeVarint(uint64(x.IntValue)) case *SocketOption_BufValue: n += 1 // tag and wire n += proto.SizeVarint(uint64(len(x.BufValue))) n += len(x.BufValue) case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } type RuntimeFractionalPercent struct { DefaultValue *percent.FractionalPercent `protobuf:"bytes,1,opt,name=default_value,json=defaultValue,proto3" json:"default_value,omitempty"` RuntimeKey string `protobuf:"bytes,2,opt,name=runtime_key,json=runtimeKey,proto3" json:"runtime_key,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *RuntimeFractionalPercent) Reset() { *m = 
RuntimeFractionalPercent{} } func (m *RuntimeFractionalPercent) String() string { return proto.CompactTextString(m) } func (*RuntimeFractionalPercent) ProtoMessage() {} func (*RuntimeFractionalPercent) Descriptor() ([]byte, []int) { return fileDescriptor_base_33c58439b08f821d, []int{10} } func (m *RuntimeFractionalPercent) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_RuntimeFractionalPercent.Unmarshal(m, b) } func (m *RuntimeFractionalPercent) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_RuntimeFractionalPercent.Marshal(b, m, deterministic) } func (dst *RuntimeFractionalPercent) XXX_Merge(src proto.Message) { xxx_messageInfo_RuntimeFractionalPercent.Merge(dst, src) } func (m *RuntimeFractionalPercent) XXX_Size() int { return xxx_messageInfo_RuntimeFractionalPercent.Size(m) } func (m *RuntimeFractionalPercent) XXX_DiscardUnknown() { xxx_messageInfo_RuntimeFractionalPercent.DiscardUnknown(m) } var xxx_messageInfo_RuntimeFractionalPercent proto.InternalMessageInfo func (m *RuntimeFractionalPercent) GetDefaultValue() *percent.FractionalPercent { if m != nil { return m.DefaultValue } return nil } func (m *RuntimeFractionalPercent) GetRuntimeKey() string { if m != nil { return m.RuntimeKey } return "" } type ControlPlane struct { Identifier string `protobuf:"bytes,1,opt,name=identifier,proto3" json:"identifier,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ControlPlane) Reset() { *m = ControlPlane{} } func (m *ControlPlane) String() string { return proto.CompactTextString(m) } func (*ControlPlane) ProtoMessage() {} func (*ControlPlane) Descriptor() ([]byte, []int) { return fileDescriptor_base_33c58439b08f821d, []int{11} } func (m *ControlPlane) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ControlPlane.Unmarshal(m, b) } func (m *ControlPlane) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ControlPlane.Marshal(b, m, deterministic) } func (dst *ControlPlane) XXX_Merge(src proto.Message) { xxx_messageInfo_ControlPlane.Merge(dst, src) } func (m *ControlPlane) XXX_Size() int { return xxx_messageInfo_ControlPlane.Size(m) } func (m *ControlPlane) XXX_DiscardUnknown() { xxx_messageInfo_ControlPlane.DiscardUnknown(m) } var xxx_messageInfo_ControlPlane proto.InternalMessageInfo func (m *ControlPlane) GetIdentifier() string { if m != nil { return m.Identifier } return "" } func init() { proto.RegisterType((*Locality)(nil), "envoy.api.v2.core.Locality") proto.RegisterType((*Node)(nil), "envoy.api.v2.core.Node") proto.RegisterType((*Metadata)(nil), "envoy.api.v2.core.Metadata") proto.RegisterMapType((map[string]*_struct.Struct)(nil), "envoy.api.v2.core.Metadata.FilterMetadataEntry") proto.RegisterType((*RuntimeUInt32)(nil), "envoy.api.v2.core.RuntimeUInt32") proto.RegisterType((*HeaderValue)(nil), "envoy.api.v2.core.HeaderValue") proto.RegisterType((*HeaderValueOption)(nil), "envoy.api.v2.core.HeaderValueOption") proto.RegisterType((*HeaderMap)(nil), "envoy.api.v2.core.HeaderMap") proto.RegisterType((*DataSource)(nil), "envoy.api.v2.core.DataSource") proto.RegisterType((*TransportSocket)(nil), "envoy.api.v2.core.TransportSocket") proto.RegisterType((*SocketOption)(nil), "envoy.api.v2.core.SocketOption") proto.RegisterType((*RuntimeFractionalPercent)(nil), "envoy.api.v2.core.RuntimeFractionalPercent") proto.RegisterType((*ControlPlane)(nil), "envoy.api.v2.core.ControlPlane") proto.RegisterEnum("envoy.api.v2.core.RoutingPriority", 
RoutingPriority_name, RoutingPriority_value) proto.RegisterEnum("envoy.api.v2.core.RequestMethod", RequestMethod_name, RequestMethod_value) proto.RegisterEnum("envoy.api.v2.core.SocketOption_SocketState", SocketOption_SocketState_name, SocketOption_SocketState_value) } func init() { proto.RegisterFile("envoy/api/v2/core/base.proto", fileDescriptor_base_33c58439b08f821d) } var fileDescriptor_base_33c58439b08f821d = []byte{ // 1117 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x84, 0x55, 0xcd, 0x6e, 0xdb, 0x46, 0x10, 0x36, 0xf5, 0x67, 0x69, 0x28, 0xd9, 0xcc, 0x26, 0x48, 0x14, 0x37, 0x4e, 0x5c, 0xf5, 0x50, 0x23, 0x45, 0xa9, 0x56, 0x29, 0xd0, 0xb4, 0x37, 0xd3, 0xa2, 0x23, 0xa1, 0xb6, 0xa4, 0x50, 0x74, 0x5a, 0xe4, 0xa2, 0xae, 0xc4, 0x95, 0xb3, 0x08, 0xc3, 0x65, 0x97, 0x4b, 0xb5, 0xca, 0xa1, 0x08, 0x7a, 0x6c, 0x1e, 0xa5, 0xe8, 0xbd, 0xe8, 0x29, 0x40, 0x4f, 0x7d, 0x94, 0xbc, 0x45, 0xb1, 0x3f, 0x72, 0xe4, 0xd8, 0x68, 0x6e, 0xbb, 0xdf, 0x7c, 0xdf, 0x70, 0x66, 0x76, 0x66, 0x08, 0x77, 0x48, 0xb2, 0x60, 0xcb, 0x36, 0x4e, 0x69, 0x7b, 0xd1, 0x69, 0xcf, 0x18, 0x27, 0xed, 0x29, 0xce, 0x88, 0x9b, 0x72, 0x26, 0x18, 0xba, 0xa6, 0xac, 0x2e, 0x4e, 0xa9, 0xbb, 0xe8, 0xb8, 0xd2, 0xba, 0x73, 0xfb, 0x8c, 0xb1, 0xb3, 0x98, 0xb4, 0x15, 0x61, 0x9a, 0xcf, 0xdb, 0x38, 0x59, 0x6a, 0xf6, 0xce, 0x9d, 0xf7, 0x4d, 0x99, 0xe0, 0xf9, 0x4c, 0x18, 0xeb, 0xdd, 0xf7, 0xad, 0x3f, 0x73, 0x9c, 0xa6, 0x84, 0x67, 0xc6, 0x7e, 0x6b, 0x81, 0x63, 0x1a, 0x61, 0x41, 0xda, 0xab, 0x83, 0x31, 0x34, 0x75, 0x88, 0x62, 0x99, 0x92, 0x76, 0x4a, 0xf8, 0x8c, 0x24, 0xc6, 0x65, 0xeb, 0x31, 0x54, 0x8f, 0xd9, 0x0c, 0xc7, 0x54, 0x2c, 0xd1, 0x4d, 0xa8, 0x70, 0x72, 0x46, 0x59, 0xd2, 0xb4, 0xf6, 0xac, 0xfd, 0x5a, 0x60, 0x6e, 0x08, 0x41, 0xe9, 0x25, 0x4b, 0x48, 0xb3, 0xa0, 0x50, 0x75, 0x46, 0xb7, 0xa1, 0x9a, 0xe5, 0xd3, 0x89, 0xc2, 0x8b, 0x0a, 0xdf, 0xcc, 0xf2, 0xe9, 0x53, 0x96, 0x90, 0xd6, 0x3f, 0x16, 0x94, 0x06, 0x2c, 0x22, 0x68, 0x0b, 0x0a, 0x34, 0x32, 0xbe, 0x0a, 0x34, 0x42, 0x4d, 0xd8, 0x9c, 0xc5, 0x79, 0x26, 0x08, 0x37, 0xae, 0x56, 0x57, 0xf4, 0x00, 0xaa, 0x2f, 0x88, 0xc0, 0x11, 0x16, 0x58, 0x79, 0xb3, 0x3b, 0xb7, 0x5c, 0x9d, 0xab, 0xbb, 0xca, 0xd5, 0x1d, 0xab, 0x4a, 0x04, 0xe7, 0x44, 0xf4, 0x35, 0x54, 0x63, 0x13, 0x7a, 0xb3, 0xa4, 0x44, 0x1f, 0xb9, 0x97, 0x8a, 0xed, 0xae, 0xb2, 0x0b, 0xce, 0xc9, 0xe8, 0x13, 0x68, 0x4c, 0x73, 0x1a, 0x47, 0x93, 0x05, 0xe1, 0x99, 0x4c, 0xb7, 0xac, 0xa2, 0xa9, 0x2b, 0xf0, 0x89, 0xc6, 0x5a, 0x6f, 0x2c, 0xa8, 0x9e, 0xac, 0x3e, 0xf5, 0x03, 0x6c, 0xcf, 0x69, 0x2c, 0x08, 0x9f, 0x9c, 0x87, 0x69, 0xed, 0x15, 0xf7, 0xed, 0x4e, 0xfb, 0x8a, 0x2f, 0xae, 0x54, 0xee, 0x91, 0x92, 0xac, 0xae, 0x7e, 0x22, 0xf8, 0x32, 0xd8, 0x9a, 0x5f, 0x00, 0x77, 0x9e, 0xc2, 0xf5, 0x2b, 0x68, 0xc8, 0x81, 0xe2, 0x73, 0xb2, 0x34, 0xb5, 0x93, 0x47, 0xf4, 0x39, 0x94, 0x17, 0x38, 0xce, 0xf5, 0x2b, 0xfc, 0x4f, 0x7d, 0x34, 0xeb, 0xdb, 0xc2, 0x43, 0xab, 0xf5, 0x23, 0x34, 0x82, 0x3c, 0x11, 0xf4, 0x05, 0x39, 0xed, 0x27, 0xe2, 0x41, 0x47, 0x26, 0x1e, 0x91, 0x39, 0xce, 0x63, 0x31, 0x79, 0xe7, 0xab, 0x11, 0xd4, 0x0d, 0xf8, 0x44, 0x62, 0xe8, 0x3e, 0xd8, 0x5c, 0xab, 0x26, 0x32, 0x04, 0xf5, 0xb8, 0x5e, 0xed, 0xef, 0xb7, 0x6f, 0x8a, 0x25, 0x5e, 0xd8, 0xb3, 0x02, 0x30, 0xd6, 0xef, 0xc8, 0xb2, 0xf5, 0x18, 0xec, 0x1e, 0xc1, 0x11, 0xe1, 0x5a, 0x7a, 0x6f, 0x2d, 0x6a, 0xaf, 0x21, 0x25, 0x55, 0x5e, 0xd9, 0xb3, 0xf6, 0x5f, 0xbd, 0xb2, 0x74, 0x12, 0x1f, 0xaf, 0x27, 0x51, 0xf3, 0x6c, 0x49, 0xa9, 0xf0, 0x92, 0x22, 0x68, 0x4b, 0xeb, 0xb5, 0x05, 0xd7, 0xd6, 0x7c, 0x0e, 0x53, 0x21, 0x5b, 0xd0, 0x83, 0xca, 0x33, 0x05, 0x2a, 0xe7, 
0x76, 0xe7, 0xee, 0x15, 0x75, 0x5f, 0x53, 0x79, 0x20, 0x3d, 0x97, 0x7f, 0xb7, 0x0a, 0x8e, 0x15, 0x18, 0x25, 0xea, 0x40, 0x45, 0x4e, 0x4b, 0x12, 0x99, 0x12, 0xee, 0x5c, 0x2a, 0xa1, 0xc7, 0x58, 0xac, 0xf4, 0x81, 0x61, 0xb6, 0x7c, 0xa8, 0x69, 0xb7, 0x27, 0x38, 0x45, 0x0f, 0x61, 0x53, 0xbb, 0xca, 0xcc, 0xeb, 0x7f, 0x20, 0x8a, 0x60, 0x45, 0x6f, 0xfd, 0x61, 0x01, 0x74, 0xb1, 0xc0, 0x63, 0x96, 0xf3, 0x19, 0x41, 0x9f, 0x42, 0x75, 0x4e, 0x63, 0x92, 0xe0, 0x17, 0xc4, 0x14, 0xeb, 0x5d, 0x7d, 0x7b, 0x1b, 0xc1, 0xb9, 0x11, 0xb9, 0x50, 0xa7, 0x49, 0x4c, 0x13, 0x32, 0x99, 0x2e, 0x05, 0xc9, 0x54, 0xe0, 0x75, 0x43, 0x7e, 0x59, 0x70, 0x24, 0xd9, 0xd6, 0x04, 0x4f, 0xda, 0xd1, 0x17, 0xd0, 0x30, 0xfc, 0x4c, 0x70, 0x9a, 0x9c, 0x5d, 0x7a, 0xbd, 0xde, 0x46, 0x60, 0x3c, 0x8e, 0x15, 0xc1, 0x43, 0x50, 0xcb, 0x52, 0x32, 0xa3, 0x73, 0x4a, 0x38, 0x2a, 0xff, 0xf5, 0xf6, 0x4d, 0xd1, 0x6a, 0xfd, 0x69, 0xc1, 0x76, 0xc8, 0x71, 0x92, 0xa5, 0x8c, 0x8b, 0x31, 0x9b, 0x3d, 0x27, 0x02, 0xed, 0x42, 0xe9, 0xca, 0x70, 0x03, 0x05, 0xa3, 0x2f, 0xa1, 0x32, 0x63, 0xc9, 0x9c, 0x9e, 0x7d, 0xa0, 0x3d, 0x7b, 0x1b, 0x81, 0x21, 0xa2, 0x6f, 0xa0, 0x2e, 0xf7, 0x51, 0x34, 0x31, 0x42, 0x3d, 0xf7, 0x37, 0x2e, 0x09, 0x0f, 0x92, 0xa5, 0x4c, 0x53, 0x71, 0x0f, 0x15, 0xd5, 0x6b, 0x80, 0xad, 0x45, 0x13, 0x89, 0xb6, 0xfe, 0x2d, 0x40, 0x5d, 0x87, 0x69, 0xba, 0x65, 0x0f, 0xec, 0x88, 0x64, 0x33, 0x4e, 0xd5, 0xd5, 0x4c, 0xd1, 0x3a, 0x84, 0x6e, 0x40, 0x39, 0x26, 0x0b, 0x12, 0xab, 0x70, 0x8b, 0x81, 0xbe, 0xc8, 0x45, 0xa7, 0x92, 0x2c, 0x2a, 0x50, 0x67, 0xb6, 0x0b, 0x35, 0x9a, 0xac, 0xe6, 0x45, 0xae, 0x99, 0xa2, 0x7c, 0x21, 0x9a, 0x98, 0x69, 0xd9, 0x85, 0xda, 0x34, 0x9f, 0x1b, 0xb3, 0xdc, 0x23, 0x75, 0x69, 0x9e, 0xe6, 0x73, 0x6d, 0xfe, 0x1e, 0xca, 0x99, 0xc0, 0x82, 0x34, 0xe5, 0x18, 0x6c, 0x75, 0x3e, 0xbb, 0xa2, 0x61, 0xd6, 0x23, 0x37, 0x97, 0xb1, 0x94, 0x78, 0x37, 0xde, 0xf5, 0xb0, 0x3a, 0xfd, 0xa6, 0xba, 0x59, 0xfb, 0x6b, 0x1d, 0x81, 0xbd, 0xc6, 0x45, 0xd7, 0xa0, 0x31, 0x0e, 0x0f, 0x42, 0x7f, 0x32, 0x0a, 0x7c, 0xaf, 0x3f, 0xe8, 0x3a, 0x1b, 0x68, 0x1b, 0x6c, 0x0d, 0x79, 0xc3, 0xd3, 0x41, 0xd7, 0xb1, 0xd0, 0x75, 0xd8, 0xd6, 0xc0, 0x71, 0x7f, 0x1c, 0xfa, 0x83, 0xfe, 0xe0, 0x91, 0x53, 0xf0, 0xb6, 0xcc, 0x44, 0xae, 0xde, 0xfe, 0xb5, 0x05, 0x4d, 0xb3, 0x34, 0x8e, 0x38, 0x9e, 0xc9, 0xa0, 0x70, 0x3c, 0xd2, 0xbf, 0x0c, 0x34, 0x78, 0x7f, 0x7f, 0xe8, 0x61, 0xdc, 0x35, 0x59, 0xc9, 0xc7, 0x70, 0x2f, 0xa9, 0x2e, 0xcc, 0xe2, 0xc5, 0x55, 0x73, 0xef, 0xe2, 0xaa, 0xd1, 0x3f, 0x85, 0xf5, 0xfd, 0xe2, 0x42, 0xfd, 0x90, 0x25, 0x82, 0xb3, 0x78, 0x14, 0xe3, 0x84, 0xa0, 0xbb, 0x00, 0x34, 0x22, 0x89, 0x50, 0xed, 0x6a, 0xde, 0x75, 0x0d, 0xb9, 0xbf, 0x0f, 0xdb, 0x01, 0xcb, 0x05, 0x4d, 0xce, 0x46, 0x9c, 0x32, 0x2e, 0x97, 0xbd, 0x0d, 0x9b, 0x5d, 0xff, 0xe8, 0xe0, 0xf4, 0x38, 0x74, 0x36, 0x50, 0x15, 0x4a, 0xbd, 0xfe, 0xa3, 0x9e, 0x63, 0xdd, 0xff, 0x15, 0x1a, 0x01, 0xf9, 0x29, 0x27, 0x99, 0x38, 0x21, 0xe2, 0x19, 0x8b, 0xd0, 0x4d, 0x40, 0x27, 0x7e, 0xd8, 0x1b, 0x76, 0x27, 0xa7, 0x83, 0xf1, 0xc8, 0x3f, 0xec, 0x1f, 0xf5, 0x7d, 0x59, 0xc6, 0x4d, 0x28, 0x3e, 0xf2, 0x43, 0xc7, 0x52, 0x5a, 0xff, 0xa0, 0xeb, 0x14, 0xe4, 0x69, 0x34, 0x1c, 0x87, 0x4e, 0x51, 0x1a, 0x47, 0xa7, 0xa1, 0x53, 0x42, 0x00, 0x95, 0xae, 0x7f, 0xec, 0x87, 0xbe, 0x53, 0x96, 0x5f, 0x3c, 0x1c, 0x0e, 0x06, 0xfe, 0x61, 0xe8, 0x54, 0xe4, 0x65, 0x38, 0x0a, 0xfb, 0xc3, 0xc1, 0xd8, 0xd9, 0x44, 0x35, 0x28, 0x87, 0xc1, 0xc1, 0xa1, 0xef, 0x54, 0xbd, 0xaf, 0xe0, 0x1e, 0x65, 0xba, 0x6e, 0x29, 0x67, 0xbf, 0x2c, 0x2f, 0x37, 0x86, 0x57, 0xf3, 0x70, 0x46, 0x46, 0x72, 0x0c, 0x46, 0xd6, 0xd3, 0x92, 0x84, 0xa6, 0x15, 0x35, 0x15, 0x0f, 0xfe, 0x0b, 
0x00, 0x00, 0xff, 0xff, 0x42, 0xb6, 0x68, 0xfe, 0x73, 0x08, 0x00, 0x00, } grpc-go-1.22.1/balancer/xds/internal/proto/envoy/api/v2/core/config_source/000077500000000000000000000000001351635773100265075ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/envoy/api/v2/core/config_source/config_source.pb.go000077500000000000000000000435211351635773100322730ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: envoy/api/v2/core/config_source.proto package envoy_api_v2_core import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import duration "github.com/golang/protobuf/ptypes/duration" import wrappers "github.com/golang/protobuf/ptypes/wrappers" import grpc_service "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/core/grpc_service" import _ "google.golang.org/grpc/balancer/xds/internal/proto/validate" // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type ApiConfigSource_ApiType int32 const ( ApiConfigSource_UNSUPPORTED_REST_LEGACY ApiConfigSource_ApiType = 0 // Deprecated: Do not use. ApiConfigSource_REST ApiConfigSource_ApiType = 1 ApiConfigSource_GRPC ApiConfigSource_ApiType = 2 ApiConfigSource_DELTA_GRPC ApiConfigSource_ApiType = 3 ) var ApiConfigSource_ApiType_name = map[int32]string{ 0: "UNSUPPORTED_REST_LEGACY", 1: "REST", 2: "GRPC", 3: "DELTA_GRPC", } var ApiConfigSource_ApiType_value = map[string]int32{ "UNSUPPORTED_REST_LEGACY": 0, "REST": 1, "GRPC": 2, "DELTA_GRPC": 3, } func (x ApiConfigSource_ApiType) String() string { return proto.EnumName(ApiConfigSource_ApiType_name, int32(x)) } func (ApiConfigSource_ApiType) EnumDescriptor() ([]byte, []int) { return fileDescriptor_config_source_d368846786cd1f7d, []int{0, 0} } type ApiConfigSource struct { ApiType ApiConfigSource_ApiType `protobuf:"varint,1,opt,name=api_type,json=apiType,proto3,enum=envoy.api.v2.core.ApiConfigSource_ApiType" json:"api_type,omitempty"` ClusterNames []string `protobuf:"bytes,2,rep,name=cluster_names,json=clusterNames,proto3" json:"cluster_names,omitempty"` GrpcServices []*grpc_service.GrpcService `protobuf:"bytes,4,rep,name=grpc_services,json=grpcServices,proto3" json:"grpc_services,omitempty"` RefreshDelay *duration.Duration `protobuf:"bytes,3,opt,name=refresh_delay,json=refreshDelay,proto3" json:"refresh_delay,omitempty"` RequestTimeout *duration.Duration `protobuf:"bytes,5,opt,name=request_timeout,json=requestTimeout,proto3" json:"request_timeout,omitempty"` RateLimitSettings *RateLimitSettings `protobuf:"bytes,6,opt,name=rate_limit_settings,json=rateLimitSettings,proto3" json:"rate_limit_settings,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ApiConfigSource) Reset() { *m = ApiConfigSource{} } func (m *ApiConfigSource) String() string { return proto.CompactTextString(m) } func (*ApiConfigSource) ProtoMessage() {} func (*ApiConfigSource) Descriptor() ([]byte, []int) { return fileDescriptor_config_source_d368846786cd1f7d, []int{0} } func (m *ApiConfigSource) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ApiConfigSource.Unmarshal(m, 
b) } func (m *ApiConfigSource) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ApiConfigSource.Marshal(b, m, deterministic) } func (dst *ApiConfigSource) XXX_Merge(src proto.Message) { xxx_messageInfo_ApiConfigSource.Merge(dst, src) } func (m *ApiConfigSource) XXX_Size() int { return xxx_messageInfo_ApiConfigSource.Size(m) } func (m *ApiConfigSource) XXX_DiscardUnknown() { xxx_messageInfo_ApiConfigSource.DiscardUnknown(m) } var xxx_messageInfo_ApiConfigSource proto.InternalMessageInfo func (m *ApiConfigSource) GetApiType() ApiConfigSource_ApiType { if m != nil { return m.ApiType } return ApiConfigSource_UNSUPPORTED_REST_LEGACY } func (m *ApiConfigSource) GetClusterNames() []string { if m != nil { return m.ClusterNames } return nil } func (m *ApiConfigSource) GetGrpcServices() []*grpc_service.GrpcService { if m != nil { return m.GrpcServices } return nil } func (m *ApiConfigSource) GetRefreshDelay() *duration.Duration { if m != nil { return m.RefreshDelay } return nil } func (m *ApiConfigSource) GetRequestTimeout() *duration.Duration { if m != nil { return m.RequestTimeout } return nil } func (m *ApiConfigSource) GetRateLimitSettings() *RateLimitSettings { if m != nil { return m.RateLimitSettings } return nil } type AggregatedConfigSource struct { XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *AggregatedConfigSource) Reset() { *m = AggregatedConfigSource{} } func (m *AggregatedConfigSource) String() string { return proto.CompactTextString(m) } func (*AggregatedConfigSource) ProtoMessage() {} func (*AggregatedConfigSource) Descriptor() ([]byte, []int) { return fileDescriptor_config_source_d368846786cd1f7d, []int{1} } func (m *AggregatedConfigSource) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_AggregatedConfigSource.Unmarshal(m, b) } func (m *AggregatedConfigSource) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_AggregatedConfigSource.Marshal(b, m, deterministic) } func (dst *AggregatedConfigSource) XXX_Merge(src proto.Message) { xxx_messageInfo_AggregatedConfigSource.Merge(dst, src) } func (m *AggregatedConfigSource) XXX_Size() int { return xxx_messageInfo_AggregatedConfigSource.Size(m) } func (m *AggregatedConfigSource) XXX_DiscardUnknown() { xxx_messageInfo_AggregatedConfigSource.DiscardUnknown(m) } var xxx_messageInfo_AggregatedConfigSource proto.InternalMessageInfo type RateLimitSettings struct { MaxTokens *wrappers.UInt32Value `protobuf:"bytes,1,opt,name=max_tokens,json=maxTokens,proto3" json:"max_tokens,omitempty"` FillRate *wrappers.DoubleValue `protobuf:"bytes,2,opt,name=fill_rate,json=fillRate,proto3" json:"fill_rate,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *RateLimitSettings) Reset() { *m = RateLimitSettings{} } func (m *RateLimitSettings) String() string { return proto.CompactTextString(m) } func (*RateLimitSettings) ProtoMessage() {} func (*RateLimitSettings) Descriptor() ([]byte, []int) { return fileDescriptor_config_source_d368846786cd1f7d, []int{2} } func (m *RateLimitSettings) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_RateLimitSettings.Unmarshal(m, b) } func (m *RateLimitSettings) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_RateLimitSettings.Marshal(b, m, deterministic) } func (dst *RateLimitSettings) XXX_Merge(src proto.Message) { xxx_messageInfo_RateLimitSettings.Merge(dst, src) } func 
(m *RateLimitSettings) XXX_Size() int { return xxx_messageInfo_RateLimitSettings.Size(m) } func (m *RateLimitSettings) XXX_DiscardUnknown() { xxx_messageInfo_RateLimitSettings.DiscardUnknown(m) } var xxx_messageInfo_RateLimitSettings proto.InternalMessageInfo func (m *RateLimitSettings) GetMaxTokens() *wrappers.UInt32Value { if m != nil { return m.MaxTokens } return nil } func (m *RateLimitSettings) GetFillRate() *wrappers.DoubleValue { if m != nil { return m.FillRate } return nil } type ConfigSource struct { // Types that are valid to be assigned to ConfigSourceSpecifier: // *ConfigSource_Path // *ConfigSource_ApiConfigSource // *ConfigSource_Ads ConfigSourceSpecifier isConfigSource_ConfigSourceSpecifier `protobuf_oneof:"config_source_specifier"` InitialFetchTimeout *duration.Duration `protobuf:"bytes,4,opt,name=initial_fetch_timeout,json=initialFetchTimeout,proto3" json:"initial_fetch_timeout,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ConfigSource) Reset() { *m = ConfigSource{} } func (m *ConfigSource) String() string { return proto.CompactTextString(m) } func (*ConfigSource) ProtoMessage() {} func (*ConfigSource) Descriptor() ([]byte, []int) { return fileDescriptor_config_source_d368846786cd1f7d, []int{3} } func (m *ConfigSource) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ConfigSource.Unmarshal(m, b) } func (m *ConfigSource) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ConfigSource.Marshal(b, m, deterministic) } func (dst *ConfigSource) XXX_Merge(src proto.Message) { xxx_messageInfo_ConfigSource.Merge(dst, src) } func (m *ConfigSource) XXX_Size() int { return xxx_messageInfo_ConfigSource.Size(m) } func (m *ConfigSource) XXX_DiscardUnknown() { xxx_messageInfo_ConfigSource.DiscardUnknown(m) } var xxx_messageInfo_ConfigSource proto.InternalMessageInfo type isConfigSource_ConfigSourceSpecifier interface { isConfigSource_ConfigSourceSpecifier() } type ConfigSource_Path struct { Path string `protobuf:"bytes,1,opt,name=path,proto3,oneof"` } type ConfigSource_ApiConfigSource struct { ApiConfigSource *ApiConfigSource `protobuf:"bytes,2,opt,name=api_config_source,json=apiConfigSource,proto3,oneof"` } type ConfigSource_Ads struct { Ads *AggregatedConfigSource `protobuf:"bytes,3,opt,name=ads,proto3,oneof"` } func (*ConfigSource_Path) isConfigSource_ConfigSourceSpecifier() {} func (*ConfigSource_ApiConfigSource) isConfigSource_ConfigSourceSpecifier() {} func (*ConfigSource_Ads) isConfigSource_ConfigSourceSpecifier() {} func (m *ConfigSource) GetConfigSourceSpecifier() isConfigSource_ConfigSourceSpecifier { if m != nil { return m.ConfigSourceSpecifier } return nil } func (m *ConfigSource) GetPath() string { if x, ok := m.GetConfigSourceSpecifier().(*ConfigSource_Path); ok { return x.Path } return "" } func (m *ConfigSource) GetApiConfigSource() *ApiConfigSource { if x, ok := m.GetConfigSourceSpecifier().(*ConfigSource_ApiConfigSource); ok { return x.ApiConfigSource } return nil } func (m *ConfigSource) GetAds() *AggregatedConfigSource { if x, ok := m.GetConfigSourceSpecifier().(*ConfigSource_Ads); ok { return x.Ads } return nil } func (m *ConfigSource) GetInitialFetchTimeout() *duration.Duration { if m != nil { return m.InitialFetchTimeout } return nil } // XXX_OneofFuncs is for the internal use of the proto package. 
func (*ConfigSource) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _ConfigSource_OneofMarshaler, _ConfigSource_OneofUnmarshaler, _ConfigSource_OneofSizer, []interface{}{ (*ConfigSource_Path)(nil), (*ConfigSource_ApiConfigSource)(nil), (*ConfigSource_Ads)(nil), } } func _ConfigSource_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*ConfigSource) // config_source_specifier switch x := m.ConfigSourceSpecifier.(type) { case *ConfigSource_Path: b.EncodeVarint(1<<3 | proto.WireBytes) b.EncodeStringBytes(x.Path) case *ConfigSource_ApiConfigSource: b.EncodeVarint(2<<3 | proto.WireBytes) if err := b.EncodeMessage(x.ApiConfigSource); err != nil { return err } case *ConfigSource_Ads: b.EncodeVarint(3<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Ads); err != nil { return err } case nil: default: return fmt.Errorf("ConfigSource.ConfigSourceSpecifier has unexpected type %T", x) } return nil } func _ConfigSource_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*ConfigSource) switch tag { case 1: // config_source_specifier.path if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } x, err := b.DecodeStringBytes() m.ConfigSourceSpecifier = &ConfigSource_Path{x} return true, err case 2: // config_source_specifier.api_config_source if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(ApiConfigSource) err := b.DecodeMessage(msg) m.ConfigSourceSpecifier = &ConfigSource_ApiConfigSource{msg} return true, err case 3: // config_source_specifier.ads if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(AggregatedConfigSource) err := b.DecodeMessage(msg) m.ConfigSourceSpecifier = &ConfigSource_Ads{msg} return true, err default: return false, nil } } func _ConfigSource_OneofSizer(msg proto.Message) (n int) { m := msg.(*ConfigSource) // config_source_specifier switch x := m.ConfigSourceSpecifier.(type) { case *ConfigSource_Path: n += 1 // tag and wire n += proto.SizeVarint(uint64(len(x.Path))) n += len(x.Path) case *ConfigSource_ApiConfigSource: s := proto.Size(x.ApiConfigSource) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *ConfigSource_Ads: s := proto.Size(x.Ads) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } func init() { proto.RegisterType((*ApiConfigSource)(nil), "envoy.api.v2.core.ApiConfigSource") proto.RegisterType((*AggregatedConfigSource)(nil), "envoy.api.v2.core.AggregatedConfigSource") proto.RegisterType((*RateLimitSettings)(nil), "envoy.api.v2.core.RateLimitSettings") proto.RegisterType((*ConfigSource)(nil), "envoy.api.v2.core.ConfigSource") proto.RegisterEnum("envoy.api.v2.core.ApiConfigSource_ApiType", ApiConfigSource_ApiType_name, ApiConfigSource_ApiType_value) } func init() { proto.RegisterFile("envoy/api/v2/core/config_source.proto", fileDescriptor_config_source_d368846786cd1f7d) } var fileDescriptor_config_source_d368846786cd1f7d = []byte{ // 667 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x84, 0x53, 0xcd, 0x6e, 0x13, 0x3d, 0x14, 0xcd, 0x24, 0x69, 0x9b, 0x38, 0x69, 0x9b, 0xb8, 0xdf, 0xf7, 0x75, 0xbe, 0x0a, 0xb5, 0x21, 0x14, 0x29, 0x74, 0x31, 0x91, 0x52, 0x89, 0x0d, 0x02, 0x29, 0x7f, 0xb4, 0x88, 0x52, 0x82, 0x93, 0x22, 0xb1, 
0xb2, 0xdc, 0xc9, 0xcd, 0xd4, 0x62, 0x32, 0x63, 0x6c, 0x4f, 0x68, 0xb6, 0x2c, 0x78, 0x0b, 0xb6, 0x2c, 0x78, 0x02, 0xc4, 0xaa, 0xaf, 0xd3, 0xb7, 0x40, 0x9e, 0x4c, 0xa1, 0x6d, 0x52, 0x35, 0xab, 0xb9, 0xd7, 0xe7, 0x5c, 0x9f, 0x7b, 0x72, 0x8c, 0x1e, 0x43, 0x30, 0x09, 0xa7, 0x75, 0x26, 0x78, 0x7d, 0xd2, 0xa8, 0xbb, 0xa1, 0x84, 0xba, 0x1b, 0x06, 0x23, 0xee, 0x51, 0x15, 0x46, 0xd2, 0x05, 0x47, 0xc8, 0x50, 0x87, 0xb8, 0x1c, 0xc3, 0x1c, 0x26, 0xb8, 0x33, 0x69, 0x38, 0x06, 0xb6, 0xb5, 0x3b, 0xcf, 0xf4, 0xa4, 0x70, 0xa9, 0x02, 0x39, 0xe1, 0x57, 0xc4, 0xad, 0x6d, 0x2f, 0x0c, 0x3d, 0x1f, 0xea, 0x71, 0x75, 0x1a, 0x8d, 0xea, 0xc3, 0x48, 0x32, 0xcd, 0xc3, 0xe0, 0xae, 0xf3, 0xcf, 0x92, 0x09, 0x01, 0x52, 0x25, 0xe7, 0x9b, 0x13, 0xe6, 0xf3, 0x21, 0xd3, 0x50, 0xbf, 0xfa, 0x98, 0x1d, 0x54, 0xbf, 0x66, 0xd1, 0x7a, 0x53, 0xf0, 0x76, 0x2c, 0xb6, 0x1f, 0x6b, 0xc5, 0xef, 0x50, 0x8e, 0x09, 0x4e, 0xf5, 0x54, 0x80, 0x6d, 0x55, 0xac, 0xda, 0x5a, 0x63, 0xcf, 0x99, 0x13, 0xee, 0xdc, 0x62, 0x99, 0x7a, 0x30, 0x15, 0xd0, 0x42, 0xbf, 0x2e, 0x2f, 0x32, 0x4b, 0x5f, 0xac, 0x74, 0xc9, 0x22, 0x2b, 0x6c, 0xd6, 0xc4, 0x8f, 0xd0, 0xaa, 0xeb, 0x47, 0x4a, 0x83, 0xa4, 0x01, 0x1b, 0x83, 0xb2, 0xd3, 0x95, 0x4c, 0x2d, 0x4f, 0x8a, 0x49, 0xf3, 0xd8, 0xf4, 0x70, 0x1b, 0xad, 0x5e, 0x5f, 0x5d, 0xd9, 0xd9, 0x4a, 0xa6, 0x56, 0x68, 0x6c, 0x2f, 0xb8, 0xfc, 0x40, 0x0a, 0xb7, 0x3f, 0x83, 0x91, 0xa2, 0xf7, 0xb7, 0x50, 0xf8, 0x05, 0x5a, 0x95, 0x30, 0x92, 0xa0, 0xce, 0xe8, 0x10, 0x7c, 0x36, 0xb5, 0x33, 0x15, 0xab, 0x56, 0x68, 0xfc, 0xef, 0xcc, 0x1c, 0x72, 0xae, 0x1c, 0x72, 0x3a, 0x89, 0x83, 0xa4, 0x98, 0xe0, 0x3b, 0x06, 0x8e, 0x7b, 0x68, 0x5d, 0xc2, 0xa7, 0x08, 0x94, 0xa6, 0x9a, 0x8f, 0x21, 0x8c, 0xb4, 0xbd, 0x74, 0xcf, 0x84, 0x56, 0xd1, 0xac, 0xbc, 0xf2, 0xc3, 0xca, 0xee, 0xa5, 0x73, 0x29, 0xb2, 0x96, 0xf0, 0x07, 0x33, 0x3a, 0x1e, 0xa0, 0x0d, 0xc9, 0x34, 0x50, 0x9f, 0x8f, 0xb9, 0xa6, 0x0a, 0xb4, 0xe6, 0x81, 0xa7, 0xec, 0xe5, 0x78, 0xea, 0xee, 0x82, 0xe5, 0x08, 0xd3, 0x70, 0x64, 0xc0, 0xfd, 0x04, 0x4b, 0xca, 0xf2, 0x76, 0xab, 0x7a, 0x8c, 0x56, 0x12, 0xc7, 0xf1, 0x0e, 0xda, 0x3c, 0x39, 0xee, 0x9f, 0xf4, 0x7a, 0x6f, 0xc9, 0xa0, 0xdb, 0xa1, 0xa4, 0xdb, 0x1f, 0xd0, 0xa3, 0xee, 0x41, 0xb3, 0xfd, 0xa1, 0x94, 0xda, 0x4a, 0xe7, 0x2c, 0x9c, 0x43, 0x59, 0xd3, 0x2c, 0xc5, 0x5f, 0x07, 0xa4, 0xd7, 0x2e, 0xa5, 0xf1, 0x1a, 0x42, 0x9d, 0xee, 0xd1, 0xa0, 0x49, 0xe3, 0x3a, 0x53, 0xb5, 0xd1, 0x7f, 0x4d, 0xcf, 0x93, 0xe0, 0x31, 0x0d, 0xc3, 0xeb, 0x7f, 0x6c, 0xf5, 0x9b, 0x85, 0xca, 0x73, 0x92, 0xf0, 0x33, 0x84, 0xc6, 0xec, 0x9c, 0xea, 0xf0, 0x23, 0x04, 0x2a, 0x8e, 0x49, 0xa1, 0xf1, 0x60, 0xce, 0xa2, 0x93, 0x57, 0x81, 0xde, 0x6f, 0xbc, 0x67, 0x7e, 0x04, 0x24, 0x3f, 0x66, 0xe7, 0x83, 0x18, 0x8e, 0x5f, 0xa3, 0xfc, 0x88, 0xfb, 0x3e, 0x35, 0x6b, 0xd9, 0xe9, 0x3b, 0xb8, 0x9d, 0x30, 0x3a, 0xf5, 0x21, 0xe6, 0xb6, 0x4a, 0xc6, 0xe1, 0x02, 0xce, 0x3f, 0x4c, 0x25, 0x3f, 0x92, 0x33, 0x03, 0x8c, 0xac, 0xea, 0xf7, 0x34, 0x2a, 0xde, 0xc8, 0xef, 0x3f, 0x28, 0x2b, 0x98, 0x3e, 0x8b, 0x45, 0xe5, 0x0f, 0x53, 0x24, 0xae, 0x70, 0x0f, 0x95, 0x4d, 0xaa, 0x6f, 0x3c, 0xcb, 0xe4, 0xee, 0xea, 0xfd, 0xf1, 0x3e, 0x4c, 0x91, 0x75, 0x76, 0xeb, 0x9d, 0x3c, 0x47, 0x19, 0x36, 0x54, 0x49, 0xc0, 0x9e, 0x2c, 0x9a, 0xb1, 0xd0, 0xd0, 0xc3, 0x14, 0x31, 0x3c, 0xfc, 0x06, 0xfd, 0xcb, 0x03, 0xae, 0x39, 0xf3, 0xe9, 0x08, 0xb4, 0x7b, 0xf6, 0x27, 0x6f, 0xd9, 0xfb, 0x12, 0xbb, 0x91, 0xf0, 0x5e, 0x1a, 0x5a, 0x12, 0xb3, 0x56, 0x05, 0x6d, 0xde, 0xd8, 0x8d, 0x2a, 0x01, 0x2e, 0x1f, 0x71, 0x90, 0x78, 0xe9, 0xe7, 0xe5, 0x45, 0xc6, 0x6a, 0x3d, 0x45, 0x3b, 0x3c, 0x9c, 0xc9, 0x14, 0x32, 0x3c, 0x9f, 0xce, 0x2b, 0x6e, 0x95, 0xaf, 
0x0b, 0xed, 0x99, 0x8b, 0x7b, 0xd6, 0xe9, 0x72, 0xac, 0x60, 0xff, 0x77, 0x00, 0x00, 0x00, 0xff, 0xff, 0x4c, 0x68, 0x6a, 0x4f, 0xe5, 0x04, 0x00, 0x00, } grpc-go-1.22.1/balancer/xds/internal/proto/envoy/api/v2/core/grpc_service/000077500000000000000000000000001351635773100263355ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/envoy/api/v2/core/grpc_service/grpc_service.pb.go000077500000000000000000001537631351635773100317610ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: envoy/api/v2/core/grpc_service.proto package envoy_api_v2_core import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import any "github.com/golang/protobuf/ptypes/any" import duration "github.com/golang/protobuf/ptypes/duration" import empty "github.com/golang/protobuf/ptypes/empty" import _struct "github.com/golang/protobuf/ptypes/struct" import base "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/core/base" import _ "google.golang.org/grpc/balancer/xds/internal/proto/validate" // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type GrpcService struct { // Types that are valid to be assigned to TargetSpecifier: // *GrpcService_EnvoyGrpc_ // *GrpcService_GoogleGrpc_ TargetSpecifier isGrpcService_TargetSpecifier `protobuf_oneof:"target_specifier"` Timeout *duration.Duration `protobuf:"bytes,3,opt,name=timeout,proto3" json:"timeout,omitempty"` InitialMetadata []*base.HeaderValue `protobuf:"bytes,5,rep,name=initial_metadata,json=initialMetadata,proto3" json:"initial_metadata,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *GrpcService) Reset() { *m = GrpcService{} } func (m *GrpcService) String() string { return proto.CompactTextString(m) } func (*GrpcService) ProtoMessage() {} func (*GrpcService) Descriptor() ([]byte, []int) { return fileDescriptor_grpc_service_b85549433708d753, []int{0} } func (m *GrpcService) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_GrpcService.Unmarshal(m, b) } func (m *GrpcService) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_GrpcService.Marshal(b, m, deterministic) } func (dst *GrpcService) XXX_Merge(src proto.Message) { xxx_messageInfo_GrpcService.Merge(dst, src) } func (m *GrpcService) XXX_Size() int { return xxx_messageInfo_GrpcService.Size(m) } func (m *GrpcService) XXX_DiscardUnknown() { xxx_messageInfo_GrpcService.DiscardUnknown(m) } var xxx_messageInfo_GrpcService proto.InternalMessageInfo type isGrpcService_TargetSpecifier interface { isGrpcService_TargetSpecifier() } type GrpcService_EnvoyGrpc_ struct { EnvoyGrpc *GrpcService_EnvoyGrpc `protobuf:"bytes,1,opt,name=envoy_grpc,json=envoyGrpc,proto3,oneof"` } type GrpcService_GoogleGrpc_ struct { GoogleGrpc *GrpcService_GoogleGrpc `protobuf:"bytes,2,opt,name=google_grpc,json=googleGrpc,proto3,oneof"` } func (*GrpcService_EnvoyGrpc_) isGrpcService_TargetSpecifier() {} func (*GrpcService_GoogleGrpc_) isGrpcService_TargetSpecifier() {} func (m *GrpcService) GetTargetSpecifier() isGrpcService_TargetSpecifier { if m != nil { 
return m.TargetSpecifier } return nil } func (m *GrpcService) GetEnvoyGrpc() *GrpcService_EnvoyGrpc { if x, ok := m.GetTargetSpecifier().(*GrpcService_EnvoyGrpc_); ok { return x.EnvoyGrpc } return nil } func (m *GrpcService) GetGoogleGrpc() *GrpcService_GoogleGrpc { if x, ok := m.GetTargetSpecifier().(*GrpcService_GoogleGrpc_); ok { return x.GoogleGrpc } return nil } func (m *GrpcService) GetTimeout() *duration.Duration { if m != nil { return m.Timeout } return nil } func (m *GrpcService) GetInitialMetadata() []*base.HeaderValue { if m != nil { return m.InitialMetadata } return nil } // XXX_OneofFuncs is for the internal use of the proto package. func (*GrpcService) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _GrpcService_OneofMarshaler, _GrpcService_OneofUnmarshaler, _GrpcService_OneofSizer, []interface{}{ (*GrpcService_EnvoyGrpc_)(nil), (*GrpcService_GoogleGrpc_)(nil), } } func _GrpcService_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*GrpcService) // target_specifier switch x := m.TargetSpecifier.(type) { case *GrpcService_EnvoyGrpc_: b.EncodeVarint(1<<3 | proto.WireBytes) if err := b.EncodeMessage(x.EnvoyGrpc); err != nil { return err } case *GrpcService_GoogleGrpc_: b.EncodeVarint(2<<3 | proto.WireBytes) if err := b.EncodeMessage(x.GoogleGrpc); err != nil { return err } case nil: default: return fmt.Errorf("GrpcService.TargetSpecifier has unexpected type %T", x) } return nil } func _GrpcService_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*GrpcService) switch tag { case 1: // target_specifier.envoy_grpc if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(GrpcService_EnvoyGrpc) err := b.DecodeMessage(msg) m.TargetSpecifier = &GrpcService_EnvoyGrpc_{msg} return true, err case 2: // target_specifier.google_grpc if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(GrpcService_GoogleGrpc) err := b.DecodeMessage(msg) m.TargetSpecifier = &GrpcService_GoogleGrpc_{msg} return true, err default: return false, nil } } func _GrpcService_OneofSizer(msg proto.Message) (n int) { m := msg.(*GrpcService) // target_specifier switch x := m.TargetSpecifier.(type) { case *GrpcService_EnvoyGrpc_: s := proto.Size(x.EnvoyGrpc) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *GrpcService_GoogleGrpc_: s := proto.Size(x.GoogleGrpc) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } type GrpcService_EnvoyGrpc struct { ClusterName string `protobuf:"bytes,1,opt,name=cluster_name,json=clusterName,proto3" json:"cluster_name,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *GrpcService_EnvoyGrpc) Reset() { *m = GrpcService_EnvoyGrpc{} } func (m *GrpcService_EnvoyGrpc) String() string { return proto.CompactTextString(m) } func (*GrpcService_EnvoyGrpc) ProtoMessage() {} func (*GrpcService_EnvoyGrpc) Descriptor() ([]byte, []int) { return fileDescriptor_grpc_service_b85549433708d753, []int{0, 0} } func (m *GrpcService_EnvoyGrpc) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_GrpcService_EnvoyGrpc.Unmarshal(m, b) } func (m *GrpcService_EnvoyGrpc) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return 
xxx_messageInfo_GrpcService_EnvoyGrpc.Marshal(b, m, deterministic) } func (dst *GrpcService_EnvoyGrpc) XXX_Merge(src proto.Message) { xxx_messageInfo_GrpcService_EnvoyGrpc.Merge(dst, src) } func (m *GrpcService_EnvoyGrpc) XXX_Size() int { return xxx_messageInfo_GrpcService_EnvoyGrpc.Size(m) } func (m *GrpcService_EnvoyGrpc) XXX_DiscardUnknown() { xxx_messageInfo_GrpcService_EnvoyGrpc.DiscardUnknown(m) } var xxx_messageInfo_GrpcService_EnvoyGrpc proto.InternalMessageInfo func (m *GrpcService_EnvoyGrpc) GetClusterName() string { if m != nil { return m.ClusterName } return "" } type GrpcService_GoogleGrpc struct { TargetUri string `protobuf:"bytes,1,opt,name=target_uri,json=targetUri,proto3" json:"target_uri,omitempty"` ChannelCredentials *GrpcService_GoogleGrpc_ChannelCredentials `protobuf:"bytes,2,opt,name=channel_credentials,json=channelCredentials,proto3" json:"channel_credentials,omitempty"` CallCredentials []*GrpcService_GoogleGrpc_CallCredentials `protobuf:"bytes,3,rep,name=call_credentials,json=callCredentials,proto3" json:"call_credentials,omitempty"` StatPrefix string `protobuf:"bytes,4,opt,name=stat_prefix,json=statPrefix,proto3" json:"stat_prefix,omitempty"` CredentialsFactoryName string `protobuf:"bytes,5,opt,name=credentials_factory_name,json=credentialsFactoryName,proto3" json:"credentials_factory_name,omitempty"` Config *_struct.Struct `protobuf:"bytes,6,opt,name=config,proto3" json:"config,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *GrpcService_GoogleGrpc) Reset() { *m = GrpcService_GoogleGrpc{} } func (m *GrpcService_GoogleGrpc) String() string { return proto.CompactTextString(m) } func (*GrpcService_GoogleGrpc) ProtoMessage() {} func (*GrpcService_GoogleGrpc) Descriptor() ([]byte, []int) { return fileDescriptor_grpc_service_b85549433708d753, []int{0, 1} } func (m *GrpcService_GoogleGrpc) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_GrpcService_GoogleGrpc.Unmarshal(m, b) } func (m *GrpcService_GoogleGrpc) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_GrpcService_GoogleGrpc.Marshal(b, m, deterministic) } func (dst *GrpcService_GoogleGrpc) XXX_Merge(src proto.Message) { xxx_messageInfo_GrpcService_GoogleGrpc.Merge(dst, src) } func (m *GrpcService_GoogleGrpc) XXX_Size() int { return xxx_messageInfo_GrpcService_GoogleGrpc.Size(m) } func (m *GrpcService_GoogleGrpc) XXX_DiscardUnknown() { xxx_messageInfo_GrpcService_GoogleGrpc.DiscardUnknown(m) } var xxx_messageInfo_GrpcService_GoogleGrpc proto.InternalMessageInfo func (m *GrpcService_GoogleGrpc) GetTargetUri() string { if m != nil { return m.TargetUri } return "" } func (m *GrpcService_GoogleGrpc) GetChannelCredentials() *GrpcService_GoogleGrpc_ChannelCredentials { if m != nil { return m.ChannelCredentials } return nil } func (m *GrpcService_GoogleGrpc) GetCallCredentials() []*GrpcService_GoogleGrpc_CallCredentials { if m != nil { return m.CallCredentials } return nil } func (m *GrpcService_GoogleGrpc) GetStatPrefix() string { if m != nil { return m.StatPrefix } return "" } func (m *GrpcService_GoogleGrpc) GetCredentialsFactoryName() string { if m != nil { return m.CredentialsFactoryName } return "" } func (m *GrpcService_GoogleGrpc) GetConfig() *_struct.Struct { if m != nil { return m.Config } return nil } type GrpcService_GoogleGrpc_SslCredentials struct { RootCerts *base.DataSource `protobuf:"bytes,1,opt,name=root_certs,json=rootCerts,proto3" json:"root_certs,omitempty"` PrivateKey 
*base.DataSource `protobuf:"bytes,2,opt,name=private_key,json=privateKey,proto3" json:"private_key,omitempty"` CertChain *base.DataSource `protobuf:"bytes,3,opt,name=cert_chain,json=certChain,proto3" json:"cert_chain,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *GrpcService_GoogleGrpc_SslCredentials) Reset() { *m = GrpcService_GoogleGrpc_SslCredentials{} } func (m *GrpcService_GoogleGrpc_SslCredentials) String() string { return proto.CompactTextString(m) } func (*GrpcService_GoogleGrpc_SslCredentials) ProtoMessage() {} func (*GrpcService_GoogleGrpc_SslCredentials) Descriptor() ([]byte, []int) { return fileDescriptor_grpc_service_b85549433708d753, []int{0, 1, 0} } func (m *GrpcService_GoogleGrpc_SslCredentials) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_GrpcService_GoogleGrpc_SslCredentials.Unmarshal(m, b) } func (m *GrpcService_GoogleGrpc_SslCredentials) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_GrpcService_GoogleGrpc_SslCredentials.Marshal(b, m, deterministic) } func (dst *GrpcService_GoogleGrpc_SslCredentials) XXX_Merge(src proto.Message) { xxx_messageInfo_GrpcService_GoogleGrpc_SslCredentials.Merge(dst, src) } func (m *GrpcService_GoogleGrpc_SslCredentials) XXX_Size() int { return xxx_messageInfo_GrpcService_GoogleGrpc_SslCredentials.Size(m) } func (m *GrpcService_GoogleGrpc_SslCredentials) XXX_DiscardUnknown() { xxx_messageInfo_GrpcService_GoogleGrpc_SslCredentials.DiscardUnknown(m) } var xxx_messageInfo_GrpcService_GoogleGrpc_SslCredentials proto.InternalMessageInfo func (m *GrpcService_GoogleGrpc_SslCredentials) GetRootCerts() *base.DataSource { if m != nil { return m.RootCerts } return nil } func (m *GrpcService_GoogleGrpc_SslCredentials) GetPrivateKey() *base.DataSource { if m != nil { return m.PrivateKey } return nil } func (m *GrpcService_GoogleGrpc_SslCredentials) GetCertChain() *base.DataSource { if m != nil { return m.CertChain } return nil } type GrpcService_GoogleGrpc_GoogleLocalCredentials struct { XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *GrpcService_GoogleGrpc_GoogleLocalCredentials) Reset() { *m = GrpcService_GoogleGrpc_GoogleLocalCredentials{} } func (m *GrpcService_GoogleGrpc_GoogleLocalCredentials) String() string { return proto.CompactTextString(m) } func (*GrpcService_GoogleGrpc_GoogleLocalCredentials) ProtoMessage() {} func (*GrpcService_GoogleGrpc_GoogleLocalCredentials) Descriptor() ([]byte, []int) { return fileDescriptor_grpc_service_b85549433708d753, []int{0, 1, 1} } func (m *GrpcService_GoogleGrpc_GoogleLocalCredentials) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_GrpcService_GoogleGrpc_GoogleLocalCredentials.Unmarshal(m, b) } func (m *GrpcService_GoogleGrpc_GoogleLocalCredentials) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_GrpcService_GoogleGrpc_GoogleLocalCredentials.Marshal(b, m, deterministic) } func (dst *GrpcService_GoogleGrpc_GoogleLocalCredentials) XXX_Merge(src proto.Message) { xxx_messageInfo_GrpcService_GoogleGrpc_GoogleLocalCredentials.Merge(dst, src) } func (m *GrpcService_GoogleGrpc_GoogleLocalCredentials) XXX_Size() int { return xxx_messageInfo_GrpcService_GoogleGrpc_GoogleLocalCredentials.Size(m) } func (m *GrpcService_GoogleGrpc_GoogleLocalCredentials) XXX_DiscardUnknown() { xxx_messageInfo_GrpcService_GoogleGrpc_GoogleLocalCredentials.DiscardUnknown(m) } var 
xxx_messageInfo_GrpcService_GoogleGrpc_GoogleLocalCredentials proto.InternalMessageInfo type GrpcService_GoogleGrpc_ChannelCredentials struct { // Types that are valid to be assigned to CredentialSpecifier: // *GrpcService_GoogleGrpc_ChannelCredentials_SslCredentials // *GrpcService_GoogleGrpc_ChannelCredentials_GoogleDefault // *GrpcService_GoogleGrpc_ChannelCredentials_LocalCredentials CredentialSpecifier isGrpcService_GoogleGrpc_ChannelCredentials_CredentialSpecifier `protobuf_oneof:"credential_specifier"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *GrpcService_GoogleGrpc_ChannelCredentials) Reset() { *m = GrpcService_GoogleGrpc_ChannelCredentials{} } func (m *GrpcService_GoogleGrpc_ChannelCredentials) String() string { return proto.CompactTextString(m) } func (*GrpcService_GoogleGrpc_ChannelCredentials) ProtoMessage() {} func (*GrpcService_GoogleGrpc_ChannelCredentials) Descriptor() ([]byte, []int) { return fileDescriptor_grpc_service_b85549433708d753, []int{0, 1, 2} } func (m *GrpcService_GoogleGrpc_ChannelCredentials) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_GrpcService_GoogleGrpc_ChannelCredentials.Unmarshal(m, b) } func (m *GrpcService_GoogleGrpc_ChannelCredentials) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_GrpcService_GoogleGrpc_ChannelCredentials.Marshal(b, m, deterministic) } func (dst *GrpcService_GoogleGrpc_ChannelCredentials) XXX_Merge(src proto.Message) { xxx_messageInfo_GrpcService_GoogleGrpc_ChannelCredentials.Merge(dst, src) } func (m *GrpcService_GoogleGrpc_ChannelCredentials) XXX_Size() int { return xxx_messageInfo_GrpcService_GoogleGrpc_ChannelCredentials.Size(m) } func (m *GrpcService_GoogleGrpc_ChannelCredentials) XXX_DiscardUnknown() { xxx_messageInfo_GrpcService_GoogleGrpc_ChannelCredentials.DiscardUnknown(m) } var xxx_messageInfo_GrpcService_GoogleGrpc_ChannelCredentials proto.InternalMessageInfo type isGrpcService_GoogleGrpc_ChannelCredentials_CredentialSpecifier interface { isGrpcService_GoogleGrpc_ChannelCredentials_CredentialSpecifier() } type GrpcService_GoogleGrpc_ChannelCredentials_SslCredentials struct { SslCredentials *GrpcService_GoogleGrpc_SslCredentials `protobuf:"bytes,1,opt,name=ssl_credentials,json=sslCredentials,proto3,oneof"` } type GrpcService_GoogleGrpc_ChannelCredentials_GoogleDefault struct { GoogleDefault *empty.Empty `protobuf:"bytes,2,opt,name=google_default,json=googleDefault,proto3,oneof"` } type GrpcService_GoogleGrpc_ChannelCredentials_LocalCredentials struct { LocalCredentials *GrpcService_GoogleGrpc_GoogleLocalCredentials `protobuf:"bytes,3,opt,name=local_credentials,json=localCredentials,proto3,oneof"` } func (*GrpcService_GoogleGrpc_ChannelCredentials_SslCredentials) isGrpcService_GoogleGrpc_ChannelCredentials_CredentialSpecifier() { } func (*GrpcService_GoogleGrpc_ChannelCredentials_GoogleDefault) isGrpcService_GoogleGrpc_ChannelCredentials_CredentialSpecifier() { } func (*GrpcService_GoogleGrpc_ChannelCredentials_LocalCredentials) isGrpcService_GoogleGrpc_ChannelCredentials_CredentialSpecifier() { } func (m *GrpcService_GoogleGrpc_ChannelCredentials) GetCredentialSpecifier() isGrpcService_GoogleGrpc_ChannelCredentials_CredentialSpecifier { if m != nil { return m.CredentialSpecifier } return nil } func (m *GrpcService_GoogleGrpc_ChannelCredentials) GetSslCredentials() *GrpcService_GoogleGrpc_SslCredentials { if x, ok := 
m.GetCredentialSpecifier().(*GrpcService_GoogleGrpc_ChannelCredentials_SslCredentials); ok { return x.SslCredentials } return nil } func (m *GrpcService_GoogleGrpc_ChannelCredentials) GetGoogleDefault() *empty.Empty { if x, ok := m.GetCredentialSpecifier().(*GrpcService_GoogleGrpc_ChannelCredentials_GoogleDefault); ok { return x.GoogleDefault } return nil } func (m *GrpcService_GoogleGrpc_ChannelCredentials) GetLocalCredentials() *GrpcService_GoogleGrpc_GoogleLocalCredentials { if x, ok := m.GetCredentialSpecifier().(*GrpcService_GoogleGrpc_ChannelCredentials_LocalCredentials); ok { return x.LocalCredentials } return nil } // XXX_OneofFuncs is for the internal use of the proto package. func (*GrpcService_GoogleGrpc_ChannelCredentials) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _GrpcService_GoogleGrpc_ChannelCredentials_OneofMarshaler, _GrpcService_GoogleGrpc_ChannelCredentials_OneofUnmarshaler, _GrpcService_GoogleGrpc_ChannelCredentials_OneofSizer, []interface{}{ (*GrpcService_GoogleGrpc_ChannelCredentials_SslCredentials)(nil), (*GrpcService_GoogleGrpc_ChannelCredentials_GoogleDefault)(nil), (*GrpcService_GoogleGrpc_ChannelCredentials_LocalCredentials)(nil), } } func _GrpcService_GoogleGrpc_ChannelCredentials_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*GrpcService_GoogleGrpc_ChannelCredentials) // credential_specifier switch x := m.CredentialSpecifier.(type) { case *GrpcService_GoogleGrpc_ChannelCredentials_SslCredentials: b.EncodeVarint(1<<3 | proto.WireBytes) if err := b.EncodeMessage(x.SslCredentials); err != nil { return err } case *GrpcService_GoogleGrpc_ChannelCredentials_GoogleDefault: b.EncodeVarint(2<<3 | proto.WireBytes) if err := b.EncodeMessage(x.GoogleDefault); err != nil { return err } case *GrpcService_GoogleGrpc_ChannelCredentials_LocalCredentials: b.EncodeVarint(3<<3 | proto.WireBytes) if err := b.EncodeMessage(x.LocalCredentials); err != nil { return err } case nil: default: return fmt.Errorf("GrpcService_GoogleGrpc_ChannelCredentials.CredentialSpecifier has unexpected type %T", x) } return nil } func _GrpcService_GoogleGrpc_ChannelCredentials_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*GrpcService_GoogleGrpc_ChannelCredentials) switch tag { case 1: // credential_specifier.ssl_credentials if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(GrpcService_GoogleGrpc_SslCredentials) err := b.DecodeMessage(msg) m.CredentialSpecifier = &GrpcService_GoogleGrpc_ChannelCredentials_SslCredentials{msg} return true, err case 2: // credential_specifier.google_default if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(empty.Empty) err := b.DecodeMessage(msg) m.CredentialSpecifier = &GrpcService_GoogleGrpc_ChannelCredentials_GoogleDefault{msg} return true, err case 3: // credential_specifier.local_credentials if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(GrpcService_GoogleGrpc_GoogleLocalCredentials) err := b.DecodeMessage(msg) m.CredentialSpecifier = &GrpcService_GoogleGrpc_ChannelCredentials_LocalCredentials{msg} return true, err default: return false, nil } } func _GrpcService_GoogleGrpc_ChannelCredentials_OneofSizer(msg proto.Message) (n int) { m := msg.(*GrpcService_GoogleGrpc_ChannelCredentials) // credential_specifier switch x := 
m.CredentialSpecifier.(type) { case *GrpcService_GoogleGrpc_ChannelCredentials_SslCredentials: s := proto.Size(x.SslCredentials) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *GrpcService_GoogleGrpc_ChannelCredentials_GoogleDefault: s := proto.Size(x.GoogleDefault) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *GrpcService_GoogleGrpc_ChannelCredentials_LocalCredentials: s := proto.Size(x.LocalCredentials) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } type GrpcService_GoogleGrpc_CallCredentials struct { // Types that are valid to be assigned to CredentialSpecifier: // *GrpcService_GoogleGrpc_CallCredentials_AccessToken // *GrpcService_GoogleGrpc_CallCredentials_GoogleComputeEngine // *GrpcService_GoogleGrpc_CallCredentials_GoogleRefreshToken // *GrpcService_GoogleGrpc_CallCredentials_ServiceAccountJwtAccess // *GrpcService_GoogleGrpc_CallCredentials_GoogleIam // *GrpcService_GoogleGrpc_CallCredentials_FromPlugin CredentialSpecifier isGrpcService_GoogleGrpc_CallCredentials_CredentialSpecifier `protobuf_oneof:"credential_specifier"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *GrpcService_GoogleGrpc_CallCredentials) Reset() { *m = GrpcService_GoogleGrpc_CallCredentials{} } func (m *GrpcService_GoogleGrpc_CallCredentials) String() string { return proto.CompactTextString(m) } func (*GrpcService_GoogleGrpc_CallCredentials) ProtoMessage() {} func (*GrpcService_GoogleGrpc_CallCredentials) Descriptor() ([]byte, []int) { return fileDescriptor_grpc_service_b85549433708d753, []int{0, 1, 3} } func (m *GrpcService_GoogleGrpc_CallCredentials) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_GrpcService_GoogleGrpc_CallCredentials.Unmarshal(m, b) } func (m *GrpcService_GoogleGrpc_CallCredentials) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_GrpcService_GoogleGrpc_CallCredentials.Marshal(b, m, deterministic) } func (dst *GrpcService_GoogleGrpc_CallCredentials) XXX_Merge(src proto.Message) { xxx_messageInfo_GrpcService_GoogleGrpc_CallCredentials.Merge(dst, src) } func (m *GrpcService_GoogleGrpc_CallCredentials) XXX_Size() int { return xxx_messageInfo_GrpcService_GoogleGrpc_CallCredentials.Size(m) } func (m *GrpcService_GoogleGrpc_CallCredentials) XXX_DiscardUnknown() { xxx_messageInfo_GrpcService_GoogleGrpc_CallCredentials.DiscardUnknown(m) } var xxx_messageInfo_GrpcService_GoogleGrpc_CallCredentials proto.InternalMessageInfo type isGrpcService_GoogleGrpc_CallCredentials_CredentialSpecifier interface { isGrpcService_GoogleGrpc_CallCredentials_CredentialSpecifier() } type GrpcService_GoogleGrpc_CallCredentials_AccessToken struct { AccessToken string `protobuf:"bytes,1,opt,name=access_token,json=accessToken,proto3,oneof"` } type GrpcService_GoogleGrpc_CallCredentials_GoogleComputeEngine struct { GoogleComputeEngine *empty.Empty `protobuf:"bytes,2,opt,name=google_compute_engine,json=googleComputeEngine,proto3,oneof"` } type GrpcService_GoogleGrpc_CallCredentials_GoogleRefreshToken struct { GoogleRefreshToken string `protobuf:"bytes,3,opt,name=google_refresh_token,json=googleRefreshToken,proto3,oneof"` } type GrpcService_GoogleGrpc_CallCredentials_ServiceAccountJwtAccess struct { ServiceAccountJwtAccess *GrpcService_GoogleGrpc_CallCredentials_ServiceAccountJWTAccessCredentials 
`protobuf:"bytes,4,opt,name=service_account_jwt_access,json=serviceAccountJwtAccess,proto3,oneof"` } type GrpcService_GoogleGrpc_CallCredentials_GoogleIam struct { GoogleIam *GrpcService_GoogleGrpc_CallCredentials_GoogleIAMCredentials `protobuf:"bytes,5,opt,name=google_iam,json=googleIam,proto3,oneof"` } type GrpcService_GoogleGrpc_CallCredentials_FromPlugin struct { FromPlugin *GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin `protobuf:"bytes,6,opt,name=from_plugin,json=fromPlugin,proto3,oneof"` } func (*GrpcService_GoogleGrpc_CallCredentials_AccessToken) isGrpcService_GoogleGrpc_CallCredentials_CredentialSpecifier() { } func (*GrpcService_GoogleGrpc_CallCredentials_GoogleComputeEngine) isGrpcService_GoogleGrpc_CallCredentials_CredentialSpecifier() { } func (*GrpcService_GoogleGrpc_CallCredentials_GoogleRefreshToken) isGrpcService_GoogleGrpc_CallCredentials_CredentialSpecifier() { } func (*GrpcService_GoogleGrpc_CallCredentials_ServiceAccountJwtAccess) isGrpcService_GoogleGrpc_CallCredentials_CredentialSpecifier() { } func (*GrpcService_GoogleGrpc_CallCredentials_GoogleIam) isGrpcService_GoogleGrpc_CallCredentials_CredentialSpecifier() { } func (*GrpcService_GoogleGrpc_CallCredentials_FromPlugin) isGrpcService_GoogleGrpc_CallCredentials_CredentialSpecifier() { } func (m *GrpcService_GoogleGrpc_CallCredentials) GetCredentialSpecifier() isGrpcService_GoogleGrpc_CallCredentials_CredentialSpecifier { if m != nil { return m.CredentialSpecifier } return nil } func (m *GrpcService_GoogleGrpc_CallCredentials) GetAccessToken() string { if x, ok := m.GetCredentialSpecifier().(*GrpcService_GoogleGrpc_CallCredentials_AccessToken); ok { return x.AccessToken } return "" } func (m *GrpcService_GoogleGrpc_CallCredentials) GetGoogleComputeEngine() *empty.Empty { if x, ok := m.GetCredentialSpecifier().(*GrpcService_GoogleGrpc_CallCredentials_GoogleComputeEngine); ok { return x.GoogleComputeEngine } return nil } func (m *GrpcService_GoogleGrpc_CallCredentials) GetGoogleRefreshToken() string { if x, ok := m.GetCredentialSpecifier().(*GrpcService_GoogleGrpc_CallCredentials_GoogleRefreshToken); ok { return x.GoogleRefreshToken } return "" } func (m *GrpcService_GoogleGrpc_CallCredentials) GetServiceAccountJwtAccess() *GrpcService_GoogleGrpc_CallCredentials_ServiceAccountJWTAccessCredentials { if x, ok := m.GetCredentialSpecifier().(*GrpcService_GoogleGrpc_CallCredentials_ServiceAccountJwtAccess); ok { return x.ServiceAccountJwtAccess } return nil } func (m *GrpcService_GoogleGrpc_CallCredentials) GetGoogleIam() *GrpcService_GoogleGrpc_CallCredentials_GoogleIAMCredentials { if x, ok := m.GetCredentialSpecifier().(*GrpcService_GoogleGrpc_CallCredentials_GoogleIam); ok { return x.GoogleIam } return nil } func (m *GrpcService_GoogleGrpc_CallCredentials) GetFromPlugin() *GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin { if x, ok := m.GetCredentialSpecifier().(*GrpcService_GoogleGrpc_CallCredentials_FromPlugin); ok { return x.FromPlugin } return nil } // XXX_OneofFuncs is for the internal use of the proto package. 
func (*GrpcService_GoogleGrpc_CallCredentials) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _GrpcService_GoogleGrpc_CallCredentials_OneofMarshaler, _GrpcService_GoogleGrpc_CallCredentials_OneofUnmarshaler, _GrpcService_GoogleGrpc_CallCredentials_OneofSizer, []interface{}{ (*GrpcService_GoogleGrpc_CallCredentials_AccessToken)(nil), (*GrpcService_GoogleGrpc_CallCredentials_GoogleComputeEngine)(nil), (*GrpcService_GoogleGrpc_CallCredentials_GoogleRefreshToken)(nil), (*GrpcService_GoogleGrpc_CallCredentials_ServiceAccountJwtAccess)(nil), (*GrpcService_GoogleGrpc_CallCredentials_GoogleIam)(nil), (*GrpcService_GoogleGrpc_CallCredentials_FromPlugin)(nil), } } func _GrpcService_GoogleGrpc_CallCredentials_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*GrpcService_GoogleGrpc_CallCredentials) // credential_specifier switch x := m.CredentialSpecifier.(type) { case *GrpcService_GoogleGrpc_CallCredentials_AccessToken: b.EncodeVarint(1<<3 | proto.WireBytes) b.EncodeStringBytes(x.AccessToken) case *GrpcService_GoogleGrpc_CallCredentials_GoogleComputeEngine: b.EncodeVarint(2<<3 | proto.WireBytes) if err := b.EncodeMessage(x.GoogleComputeEngine); err != nil { return err } case *GrpcService_GoogleGrpc_CallCredentials_GoogleRefreshToken: b.EncodeVarint(3<<3 | proto.WireBytes) b.EncodeStringBytes(x.GoogleRefreshToken) case *GrpcService_GoogleGrpc_CallCredentials_ServiceAccountJwtAccess: b.EncodeVarint(4<<3 | proto.WireBytes) if err := b.EncodeMessage(x.ServiceAccountJwtAccess); err != nil { return err } case *GrpcService_GoogleGrpc_CallCredentials_GoogleIam: b.EncodeVarint(5<<3 | proto.WireBytes) if err := b.EncodeMessage(x.GoogleIam); err != nil { return err } case *GrpcService_GoogleGrpc_CallCredentials_FromPlugin: b.EncodeVarint(6<<3 | proto.WireBytes) if err := b.EncodeMessage(x.FromPlugin); err != nil { return err } case nil: default: return fmt.Errorf("GrpcService_GoogleGrpc_CallCredentials.CredentialSpecifier has unexpected type %T", x) } return nil } func _GrpcService_GoogleGrpc_CallCredentials_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*GrpcService_GoogleGrpc_CallCredentials) switch tag { case 1: // credential_specifier.access_token if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } x, err := b.DecodeStringBytes() m.CredentialSpecifier = &GrpcService_GoogleGrpc_CallCredentials_AccessToken{x} return true, err case 2: // credential_specifier.google_compute_engine if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(empty.Empty) err := b.DecodeMessage(msg) m.CredentialSpecifier = &GrpcService_GoogleGrpc_CallCredentials_GoogleComputeEngine{msg} return true, err case 3: // credential_specifier.google_refresh_token if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } x, err := b.DecodeStringBytes() m.CredentialSpecifier = &GrpcService_GoogleGrpc_CallCredentials_GoogleRefreshToken{x} return true, err case 4: // credential_specifier.service_account_jwt_access if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(GrpcService_GoogleGrpc_CallCredentials_ServiceAccountJWTAccessCredentials) err := b.DecodeMessage(msg) m.CredentialSpecifier = &GrpcService_GoogleGrpc_CallCredentials_ServiceAccountJwtAccess{msg} return true, err case 5: // credential_specifier.google_iam if wire != proto.WireBytes 
{ return true, proto.ErrInternalBadWireType } msg := new(GrpcService_GoogleGrpc_CallCredentials_GoogleIAMCredentials) err := b.DecodeMessage(msg) m.CredentialSpecifier = &GrpcService_GoogleGrpc_CallCredentials_GoogleIam{msg} return true, err case 6: // credential_specifier.from_plugin if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin) err := b.DecodeMessage(msg) m.CredentialSpecifier = &GrpcService_GoogleGrpc_CallCredentials_FromPlugin{msg} return true, err default: return false, nil } } func _GrpcService_GoogleGrpc_CallCredentials_OneofSizer(msg proto.Message) (n int) { m := msg.(*GrpcService_GoogleGrpc_CallCredentials) // credential_specifier switch x := m.CredentialSpecifier.(type) { case *GrpcService_GoogleGrpc_CallCredentials_AccessToken: n += 1 // tag and wire n += proto.SizeVarint(uint64(len(x.AccessToken))) n += len(x.AccessToken) case *GrpcService_GoogleGrpc_CallCredentials_GoogleComputeEngine: s := proto.Size(x.GoogleComputeEngine) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *GrpcService_GoogleGrpc_CallCredentials_GoogleRefreshToken: n += 1 // tag and wire n += proto.SizeVarint(uint64(len(x.GoogleRefreshToken))) n += len(x.GoogleRefreshToken) case *GrpcService_GoogleGrpc_CallCredentials_ServiceAccountJwtAccess: s := proto.Size(x.ServiceAccountJwtAccess) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *GrpcService_GoogleGrpc_CallCredentials_GoogleIam: s := proto.Size(x.GoogleIam) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *GrpcService_GoogleGrpc_CallCredentials_FromPlugin: s := proto.Size(x.FromPlugin) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } type GrpcService_GoogleGrpc_CallCredentials_ServiceAccountJWTAccessCredentials struct { JsonKey string `protobuf:"bytes,1,opt,name=json_key,json=jsonKey,proto3" json:"json_key,omitempty"` TokenLifetimeSeconds uint64 `protobuf:"varint,2,opt,name=token_lifetime_seconds,json=tokenLifetimeSeconds,proto3" json:"token_lifetime_seconds,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *GrpcService_GoogleGrpc_CallCredentials_ServiceAccountJWTAccessCredentials) Reset() { *m = GrpcService_GoogleGrpc_CallCredentials_ServiceAccountJWTAccessCredentials{} } func (m *GrpcService_GoogleGrpc_CallCredentials_ServiceAccountJWTAccessCredentials) String() string { return proto.CompactTextString(m) } func (*GrpcService_GoogleGrpc_CallCredentials_ServiceAccountJWTAccessCredentials) ProtoMessage() {} func (*GrpcService_GoogleGrpc_CallCredentials_ServiceAccountJWTAccessCredentials) Descriptor() ([]byte, []int) { return fileDescriptor_grpc_service_b85549433708d753, []int{0, 1, 3, 0} } func (m *GrpcService_GoogleGrpc_CallCredentials_ServiceAccountJWTAccessCredentials) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_GrpcService_GoogleGrpc_CallCredentials_ServiceAccountJWTAccessCredentials.Unmarshal(m, b) } func (m *GrpcService_GoogleGrpc_CallCredentials_ServiceAccountJWTAccessCredentials) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_GrpcService_GoogleGrpc_CallCredentials_ServiceAccountJWTAccessCredentials.Marshal(b, m, deterministic) } func (dst *GrpcService_GoogleGrpc_CallCredentials_ServiceAccountJWTAccessCredentials) XXX_Merge(src proto.Message) { 
xxx_messageInfo_GrpcService_GoogleGrpc_CallCredentials_ServiceAccountJWTAccessCredentials.Merge(dst, src) } func (m *GrpcService_GoogleGrpc_CallCredentials_ServiceAccountJWTAccessCredentials) XXX_Size() int { return xxx_messageInfo_GrpcService_GoogleGrpc_CallCredentials_ServiceAccountJWTAccessCredentials.Size(m) } func (m *GrpcService_GoogleGrpc_CallCredentials_ServiceAccountJWTAccessCredentials) XXX_DiscardUnknown() { xxx_messageInfo_GrpcService_GoogleGrpc_CallCredentials_ServiceAccountJWTAccessCredentials.DiscardUnknown(m) } var xxx_messageInfo_GrpcService_GoogleGrpc_CallCredentials_ServiceAccountJWTAccessCredentials proto.InternalMessageInfo func (m *GrpcService_GoogleGrpc_CallCredentials_ServiceAccountJWTAccessCredentials) GetJsonKey() string { if m != nil { return m.JsonKey } return "" } func (m *GrpcService_GoogleGrpc_CallCredentials_ServiceAccountJWTAccessCredentials) GetTokenLifetimeSeconds() uint64 { if m != nil { return m.TokenLifetimeSeconds } return 0 } type GrpcService_GoogleGrpc_CallCredentials_GoogleIAMCredentials struct { AuthorizationToken string `protobuf:"bytes,1,opt,name=authorization_token,json=authorizationToken,proto3" json:"authorization_token,omitempty"` AuthoritySelector string `protobuf:"bytes,2,opt,name=authority_selector,json=authoritySelector,proto3" json:"authority_selector,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *GrpcService_GoogleGrpc_CallCredentials_GoogleIAMCredentials) Reset() { *m = GrpcService_GoogleGrpc_CallCredentials_GoogleIAMCredentials{} } func (m *GrpcService_GoogleGrpc_CallCredentials_GoogleIAMCredentials) String() string { return proto.CompactTextString(m) } func (*GrpcService_GoogleGrpc_CallCredentials_GoogleIAMCredentials) ProtoMessage() {} func (*GrpcService_GoogleGrpc_CallCredentials_GoogleIAMCredentials) Descriptor() ([]byte, []int) { return fileDescriptor_grpc_service_b85549433708d753, []int{0, 1, 3, 1} } func (m *GrpcService_GoogleGrpc_CallCredentials_GoogleIAMCredentials) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_GrpcService_GoogleGrpc_CallCredentials_GoogleIAMCredentials.Unmarshal(m, b) } func (m *GrpcService_GoogleGrpc_CallCredentials_GoogleIAMCredentials) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_GrpcService_GoogleGrpc_CallCredentials_GoogleIAMCredentials.Marshal(b, m, deterministic) } func (dst *GrpcService_GoogleGrpc_CallCredentials_GoogleIAMCredentials) XXX_Merge(src proto.Message) { xxx_messageInfo_GrpcService_GoogleGrpc_CallCredentials_GoogleIAMCredentials.Merge(dst, src) } func (m *GrpcService_GoogleGrpc_CallCredentials_GoogleIAMCredentials) XXX_Size() int { return xxx_messageInfo_GrpcService_GoogleGrpc_CallCredentials_GoogleIAMCredentials.Size(m) } func (m *GrpcService_GoogleGrpc_CallCredentials_GoogleIAMCredentials) XXX_DiscardUnknown() { xxx_messageInfo_GrpcService_GoogleGrpc_CallCredentials_GoogleIAMCredentials.DiscardUnknown(m) } var xxx_messageInfo_GrpcService_GoogleGrpc_CallCredentials_GoogleIAMCredentials proto.InternalMessageInfo func (m *GrpcService_GoogleGrpc_CallCredentials_GoogleIAMCredentials) GetAuthorizationToken() string { if m != nil { return m.AuthorizationToken } return "" } func (m *GrpcService_GoogleGrpc_CallCredentials_GoogleIAMCredentials) GetAuthoritySelector() string { if m != nil { return m.AuthoritySelector } return "" } type GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin struct { Name string `protobuf:"bytes,1,opt,name=name,proto3" 
json:"name,omitempty"` // Types that are valid to be assigned to ConfigType: // *GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin_Config // *GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin_TypedConfig ConfigType isGrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin_ConfigType `protobuf_oneof:"config_type"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin) Reset() { *m = GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin{} } func (m *GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin) String() string { return proto.CompactTextString(m) } func (*GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin) ProtoMessage() {} func (*GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin) Descriptor() ([]byte, []int) { return fileDescriptor_grpc_service_b85549433708d753, []int{0, 1, 3, 2} } func (m *GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin.Unmarshal(m, b) } func (m *GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin.Marshal(b, m, deterministic) } func (dst *GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin) XXX_Merge(src proto.Message) { xxx_messageInfo_GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin.Merge(dst, src) } func (m *GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin) XXX_Size() int { return xxx_messageInfo_GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin.Size(m) } func (m *GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin) XXX_DiscardUnknown() { xxx_messageInfo_GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin.DiscardUnknown(m) } var xxx_messageInfo_GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin proto.InternalMessageInfo func (m *GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin) GetName() string { if m != nil { return m.Name } return "" } type isGrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin_ConfigType interface { isGrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin_ConfigType() } type GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin_Config struct { Config *_struct.Struct `protobuf:"bytes,2,opt,name=config,proto3,oneof"` } type GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin_TypedConfig struct { TypedConfig *any.Any `protobuf:"bytes,3,opt,name=typed_config,json=typedConfig,proto3,oneof"` } func (*GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin_Config) isGrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin_ConfigType() { } func (*GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin_TypedConfig) isGrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin_ConfigType() { } func (m *GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin) GetConfigType() isGrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin_ConfigType { if m != nil { return m.ConfigType } return nil } func (m 
*GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin) GetConfig() *_struct.Struct { if x, ok := m.GetConfigType().(*GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin_Config); ok { return x.Config } return nil } func (m *GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin) GetTypedConfig() *any.Any { if x, ok := m.GetConfigType().(*GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin_TypedConfig); ok { return x.TypedConfig } return nil } // XXX_OneofFuncs is for the internal use of the proto package. func (*GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin_OneofMarshaler, _GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin_OneofUnmarshaler, _GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin_OneofSizer, []interface{}{ (*GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin_Config)(nil), (*GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin_TypedConfig)(nil), } } func _GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin) // config_type switch x := m.ConfigType.(type) { case *GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin_Config: b.EncodeVarint(2<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Config); err != nil { return err } case *GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin_TypedConfig: b.EncodeVarint(3<<3 | proto.WireBytes) if err := b.EncodeMessage(x.TypedConfig); err != nil { return err } case nil: default: return fmt.Errorf("GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin.ConfigType has unexpected type %T", x) } return nil } func _GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin) switch tag { case 2: // config_type.config if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(_struct.Struct) err := b.DecodeMessage(msg) m.ConfigType = &GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin_Config{msg} return true, err case 3: // config_type.typed_config if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(any.Any) err := b.DecodeMessage(msg) m.ConfigType = &GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin_TypedConfig{msg} return true, err default: return false, nil } } func _GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin_OneofSizer(msg proto.Message) (n int) { m := msg.(*GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin) // config_type switch x := m.ConfigType.(type) { case *GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin_Config: s := proto.Size(x.Config) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin_TypedConfig: s := proto.Size(x.TypedConfig) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case nil: default: 
panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } func init() { proto.RegisterType((*GrpcService)(nil), "envoy.api.v2.core.GrpcService") proto.RegisterType((*GrpcService_EnvoyGrpc)(nil), "envoy.api.v2.core.GrpcService.EnvoyGrpc") proto.RegisterType((*GrpcService_GoogleGrpc)(nil), "envoy.api.v2.core.GrpcService.GoogleGrpc") proto.RegisterType((*GrpcService_GoogleGrpc_SslCredentials)(nil), "envoy.api.v2.core.GrpcService.GoogleGrpc.SslCredentials") proto.RegisterType((*GrpcService_GoogleGrpc_GoogleLocalCredentials)(nil), "envoy.api.v2.core.GrpcService.GoogleGrpc.GoogleLocalCredentials") proto.RegisterType((*GrpcService_GoogleGrpc_ChannelCredentials)(nil), "envoy.api.v2.core.GrpcService.GoogleGrpc.ChannelCredentials") proto.RegisterType((*GrpcService_GoogleGrpc_CallCredentials)(nil), "envoy.api.v2.core.GrpcService.GoogleGrpc.CallCredentials") proto.RegisterType((*GrpcService_GoogleGrpc_CallCredentials_ServiceAccountJWTAccessCredentials)(nil), "envoy.api.v2.core.GrpcService.GoogleGrpc.CallCredentials.ServiceAccountJWTAccessCredentials") proto.RegisterType((*GrpcService_GoogleGrpc_CallCredentials_GoogleIAMCredentials)(nil), "envoy.api.v2.core.GrpcService.GoogleGrpc.CallCredentials.GoogleIAMCredentials") proto.RegisterType((*GrpcService_GoogleGrpc_CallCredentials_MetadataCredentialsFromPlugin)(nil), "envoy.api.v2.core.GrpcService.GoogleGrpc.CallCredentials.MetadataCredentialsFromPlugin") } func init() { proto.RegisterFile("envoy/api/v2/core/grpc_service.proto", fileDescriptor_grpc_service_b85549433708d753) } var fileDescriptor_grpc_service_b85549433708d753 = []byte{ // 1052 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x9c, 0x55, 0x4f, 0x6f, 0x1b, 0x45, 0x14, 0xb7, 0x63, 0xa7, 0xad, 0xdf, 0xa6, 0x89, 0x33, 0x09, 0x89, 0xb3, 0x34, 0x25, 0x2a, 0x1c, 0x02, 0x82, 0xb5, 0x70, 0x41, 0x6a, 0xa4, 0x0a, 0x88, 0x9d, 0xb4, 0x4e, 0x49, 0xab, 0x68, 0x5d, 0xe8, 0x05, 0x69, 0x34, 0x19, 0x8f, 0x9d, 0x69, 0xd7, 0x3b, 0xab, 0xd9, 0x59, 0xb7, 0xee, 0x99, 0x6f, 0xc1, 0x97, 0xe0, 0x88, 0x38, 0xf5, 0x8c, 0xf8, 0x0e, 0x5c, 0xb8, 0xf4, 0x5b, 0xa0, 0xf9, 0xe3, 0xc4, 0x6b, 0x47, 0x75, 0xc8, 0x6d, 0xf7, 0xfd, 0xde, 0xef, 0xfd, 0xf9, 0xcd, 0x9b, 0x37, 0xf0, 0x19, 0x8b, 0x87, 0x62, 0x54, 0x27, 0x09, 0xaf, 0x0f, 0x1b, 0x75, 0x2a, 0x24, 0xab, 0xf7, 0x65, 0x42, 0x71, 0xca, 0xe4, 0x90, 0x53, 0x16, 0x24, 0x52, 0x28, 0x81, 0x56, 0x8d, 0x57, 0x40, 0x12, 0x1e, 0x0c, 0x1b, 0x81, 0xf6, 0xf2, 0xef, 0xcc, 0x12, 0x4f, 0x49, 0xea, 0x08, 0xfe, 0x56, 0x5f, 0x88, 0x7e, 0xc4, 0xea, 0xe6, 0xef, 0x34, 0xeb, 0xd5, 0x49, 0x3c, 0x72, 0xd0, 0xdd, 0x69, 0xa8, 0x9b, 0x49, 0xa2, 0xb8, 0x88, 0x1d, 0x7e, 0x67, 0x1a, 0x4f, 0x95, 0xcc, 0xa8, 0x72, 0xe8, 0xc7, 0xd3, 0x28, 0x1b, 0x24, 0x6a, 0x1c, 0x7a, 0x73, 0x48, 0x22, 0xde, 0x25, 0x8a, 0xd5, 0xc7, 0x1f, 0x16, 0xb8, 0xf7, 0x2f, 0x02, 0xef, 0xb1, 0x4c, 0x68, 0xc7, 0x76, 0x85, 0x8e, 0x00, 0x4c, 0xf9, 0x58, 0xf7, 0x5a, 0x2b, 0xee, 0x14, 0x77, 0xbd, 0xc6, 0x6e, 0x30, 0xd3, 0x64, 0x30, 0xc1, 0x09, 0x0e, 0x35, 0xaa, 0x0d, 0xed, 0x42, 0x58, 0x61, 0xe3, 0x1f, 0x74, 0x0c, 0x9e, 0x2d, 0xc9, 0xc6, 0x5a, 0x30, 0xb1, 0x3e, 0x9f, 0x13, 0xeb, 0xb1, 0x61, 0xb8, 0x60, 0xd0, 0x3f, 0xff, 0x43, 0xf7, 0xe1, 0xa6, 0xe2, 0x03, 0x26, 0x32, 0x55, 0x2b, 0x99, 0x48, 0x5b, 0x81, 0x45, 0x83, 0x71, 0xc3, 0xc1, 0x81, 0x93, 0x2b, 0x1c, 0x7b, 0xa2, 0x23, 0xa8, 0xf2, 0x98, 0x2b, 0x4e, 0x22, 0x3c, 0x60, 0x8a, 0x74, 0x89, 0x22, 0xb5, 0xc5, 0x9d, 0xd2, 0xae, 0xd7, 0xb8, 0x7b, 0x49, 0x1d, 0x6d, 0x46, 0xba, 0x4c, 0xfe, 0x4c, 0xa2, 0x8c, 0x85, 0x2b, 0x8e, 0xf7, 0xd4, 
0xd1, 0xfc, 0x3d, 0xa8, 0x9c, 0xf7, 0x89, 0xbe, 0x84, 0x25, 0x1a, 0x65, 0xa9, 0x62, 0x12, 0xc7, 0x64, 0xc0, 0x8c, 0x4e, 0x95, 0x66, 0xe5, 0xcf, 0xf7, 0xef, 0x4a, 0x65, 0xb9, 0xb0, 0x53, 0x0c, 0x3d, 0x07, 0x3f, 0x23, 0x03, 0xe6, 0xff, 0xb3, 0x02, 0x70, 0xd1, 0x17, 0xda, 0x05, 0x50, 0x44, 0xf6, 0x99, 0xc2, 0x99, 0xe4, 0xb3, 0xd4, 0x8a, 0x05, 0x7f, 0x92, 0x1c, 0x0d, 0x60, 0x8d, 0x9e, 0x91, 0x38, 0x66, 0x11, 0xa6, 0x92, 0x75, 0x59, 0xac, 0x2b, 0x4a, 0x9d, 0x92, 0x0f, 0xaf, 0xac, 0x64, 0xd0, 0xb2, 0x41, 0x5a, 0x17, 0x31, 0x42, 0x44, 0x67, 0x6c, 0xa8, 0x0b, 0x55, 0x4a, 0xa2, 0x7c, 0xae, 0x92, 0x51, 0x6b, 0xef, 0x7f, 0xe4, 0x22, 0x51, 0x2e, 0xd1, 0x0a, 0xcd, 0x1b, 0xd0, 0x17, 0xe0, 0xa5, 0x8a, 0x28, 0x9c, 0x48, 0xd6, 0xe3, 0x6f, 0x6a, 0xe5, 0xe9, 0xfe, 0x41, 0xa3, 0x27, 0x06, 0x44, 0x0f, 0xa0, 0x36, 0x51, 0x0c, 0xee, 0x11, 0xaa, 0x84, 0x1c, 0x59, 0xcd, 0x17, 0x35, 0x31, 0xdc, 0x98, 0xc0, 0x1f, 0x59, 0x58, 0x6b, 0x8e, 0xea, 0x70, 0x83, 0x8a, 0xb8, 0xc7, 0xfb, 0xb5, 0x1b, 0x46, 0xad, 0xcd, 0x99, 0x69, 0xe9, 0x98, 0xcb, 0x13, 0x3a, 0x37, 0xff, 0xef, 0x22, 0x2c, 0x77, 0xd2, 0x5c, 0xa5, 0x0f, 0x01, 0xa4, 0x10, 0x0a, 0x53, 0x26, 0x55, 0xea, 0xee, 0xc2, 0xf6, 0x25, 0x4a, 0x1c, 0x10, 0x45, 0x3a, 0x22, 0x93, 0x94, 0x85, 0x15, 0x4d, 0x68, 0x69, 0x7f, 0xf4, 0x1d, 0x78, 0x89, 0xe4, 0x43, 0xa2, 0x18, 0x7e, 0xc5, 0x46, 0xee, 0xd0, 0xe6, 0xd0, 0xc1, 0x31, 0x7e, 0x64, 0x23, 0x9d, 0x5d, 0x27, 0xc6, 0xf4, 0x8c, 0xf0, 0xd8, 0xcd, 0xfc, 0xbc, 0xec, 0x9a, 0xd0, 0xd2, 0xfe, 0x7e, 0x0d, 0x36, 0xec, 0xa1, 0x1c, 0x0b, 0x4a, 0x26, 0xbb, 0xf2, 0xff, 0x5a, 0x00, 0x34, 0x3b, 0x10, 0x88, 0xc2, 0x4a, 0x9a, 0xe6, 0xcf, 0xde, 0x76, 0xfc, 0xe0, 0xea, 0x67, 0x9f, 0xd7, 0xaf, 0x5d, 0x08, 0x97, 0xd3, 0xbc, 0xa2, 0xdf, 0xc3, 0xb2, 0x5b, 0x09, 0x5d, 0xd6, 0x23, 0x59, 0xa4, 0x9c, 0x2c, 0x1b, 0x33, 0xa7, 0x73, 0xa8, 0x97, 0x57, 0xbb, 0x10, 0xde, 0xb6, 0xc0, 0x81, 0x75, 0x47, 0x02, 0x56, 0x23, 0xdd, 0xd0, 0xd4, 0x8c, 0xea, 0x18, 0x3f, 0x5c, 0xbd, 0xce, 0xcb, 0x95, 0x69, 0x17, 0xc2, 0x6a, 0x34, 0x65, 0x6b, 0x6e, 0xc3, 0xfa, 0x45, 0x2a, 0x9c, 0x26, 0x8c, 0xf2, 0x1e, 0x67, 0x12, 0x2d, 0xfe, 0xf1, 0xfe, 0x5d, 0xa9, 0xe8, 0xff, 0x7a, 0x0b, 0x56, 0xa6, 0x26, 0x1e, 0x7d, 0x0a, 0x4b, 0x84, 0x52, 0x96, 0xa6, 0x58, 0x89, 0x57, 0x2c, 0xb6, 0x37, 0xbc, 0x5d, 0x08, 0x3d, 0x6b, 0x7d, 0xae, 0x8d, 0xe8, 0x18, 0x3e, 0x72, 0x4a, 0x50, 0x31, 0x48, 0x32, 0xc5, 0x30, 0x8b, 0xfb, 0x3c, 0x66, 0x73, 0x05, 0x59, 0xb3, 0x40, 0xcb, 0xb2, 0x0e, 0x0d, 0x09, 0x35, 0x60, 0xdd, 0x45, 0x93, 0xac, 0x27, 0x59, 0x7a, 0xe6, 0x52, 0x97, 0x5c, 0x6a, 0x64, 0xd1, 0xd0, 0x82, 0xb6, 0x82, 0xdf, 0x8a, 0xe0, 0xbb, 0xb7, 0x0c, 0x13, 0x4a, 0x45, 0x16, 0x2b, 0xfc, 0xf2, 0xb5, 0xc2, 0xb6, 0x4a, 0x73, 0x2f, 0xbd, 0xc6, 0x2f, 0xd7, 0xbe, 0xf8, 0x81, 0x73, 0xd9, 0xb7, 0xa1, 0x9f, 0xbc, 0x78, 0xbe, 0x6f, 0x02, 0xe7, 0x05, 0xdf, 0x4c, 0xf3, 0x5e, 0xaf, 0x95, 0xf5, 0x42, 0x02, 0xdc, 0xf2, 0xc7, 0x9c, 0x0c, 0xcc, 0x5d, 0xf7, 0x1a, 0xcf, 0xae, 0x5f, 0x8c, 0x85, 0x8e, 0xf6, 0x9f, 0xe6, 0xd3, 0x57, 0x6c, 0x8e, 0x23, 0x32, 0x40, 0x6f, 0xc1, 0xeb, 0x49, 0x31, 0xc0, 0x49, 0x94, 0xf5, 0x79, 0xec, 0xb6, 0xc6, 0x8b, 0xeb, 0x67, 0x1c, 0x3f, 0x1c, 0x13, 0xb6, 0x47, 0x52, 0x0c, 0x4e, 0x4c, 0x78, 0xfd, 0xb6, 0xf5, 0xce, 0xff, 0xfc, 0x0c, 0xee, 0xcd, 0x57, 0x0b, 0x6d, 0xc1, 0xad, 0x97, 0xa9, 0x88, 0xcd, 0x36, 0x31, 0x33, 0x15, 0xde, 0xd4, 0xff, 0x7a, 0x57, 0x7c, 0x03, 0x1b, 0xe6, 0xc0, 0x71, 0xc4, 0x7b, 0x4c, 0x3f, 0x7e, 0x38, 0x65, 0x54, 0xc4, 0x5d, 0xfb, 0x56, 0x94, 0xc3, 0x75, 0x83, 0x1e, 0x3b, 0xb0, 0x63, 0x31, 0x7f, 0x08, 0xeb, 0x97, 0xe9, 0x82, 0xea, 0xb0, 0x46, 0x32, 0x75, 0x26, 0x24, 0x7f, 0x6b, 
0xde, 0xd3, 0xc9, 0x39, 0x0e, 0x51, 0x0e, 0xb2, 0xa3, 0xf4, 0x15, 0x8c, 0xad, 0x6a, 0x84, 0x53, 0x16, 0x31, 0xbd, 0x87, 0x4d, 0xea, 0x4a, 0xb8, 0x7a, 0x8e, 0x74, 0x1c, 0xe0, 0xff, 0x5e, 0x84, 0xed, 0x0f, 0xca, 0x83, 0x10, 0x94, 0x2f, 0xde, 0xd5, 0xd0, 0x7c, 0xa3, 0xaf, 0xcf, 0x37, 0xfa, 0xc2, 0x07, 0x37, 0x7a, 0xbb, 0x30, 0xde, 0xe9, 0x68, 0x0f, 0x96, 0xd4, 0x28, 0x61, 0x5d, 0xec, 0x88, 0x76, 0x51, 0xac, 0xcf, 0x10, 0xf7, 0x63, 0x7d, 0xb3, 0x3c, 0xe3, 0xdb, 0x32, 0xae, 0xcd, 0xdb, 0xe0, 0x59, 0x12, 0xd6, 0xd6, 0x39, 0x6b, 0xa0, 0xb9, 0x05, 0x55, 0xf7, 0xa4, 0x4f, 0x43, 0x4f, 0xca, 0xb7, 0xca, 0xd5, 0xc5, 0xe6, 0xb7, 0xf0, 0x09, 0x17, 0x76, 0x98, 0x12, 0x29, 0xde, 0x8c, 0x66, 0xe7, 0xaa, 0x59, 0x9d, 0x18, 0xac, 0x13, 0x5d, 0xd9, 0x49, 0xf1, 0xf4, 0x86, 0x29, 0xf1, 0xfe, 0x7f, 0x01, 0x00, 0x00, 0xff, 0xff, 0x5c, 0x77, 0x13, 0x9a, 0x8c, 0x0a, 0x00, 0x00, } grpc-go-1.22.1/balancer/xds/internal/proto/envoy/api/v2/core/health_check/000077500000000000000000000000001351635773100262645ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/envoy/api/v2/core/health_check/health_check.pb.go000077500000000000000000001154051351635773100316260ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: envoy/api/v2/core/health_check.proto package envoy_api_v2_core import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import any "github.com/golang/protobuf/ptypes/any" import duration "github.com/golang/protobuf/ptypes/duration" import _struct "github.com/golang/protobuf/ptypes/struct" import wrappers "github.com/golang/protobuf/ptypes/wrappers" import base "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/core/base" import _range "google.golang.org/grpc/balancer/xds/internal/proto/envoy/type/range" import _ "google.golang.org/grpc/balancer/xds/internal/proto/validate" // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. 
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type HealthStatus int32 const ( HealthStatus_UNKNOWN HealthStatus = 0 HealthStatus_HEALTHY HealthStatus = 1 HealthStatus_UNHEALTHY HealthStatus = 2 HealthStatus_DRAINING HealthStatus = 3 HealthStatus_TIMEOUT HealthStatus = 4 HealthStatus_DEGRADED HealthStatus = 5 ) var HealthStatus_name = map[int32]string{ 0: "UNKNOWN", 1: "HEALTHY", 2: "UNHEALTHY", 3: "DRAINING", 4: "TIMEOUT", 5: "DEGRADED", } var HealthStatus_value = map[string]int32{ "UNKNOWN": 0, "HEALTHY": 1, "UNHEALTHY": 2, "DRAINING": 3, "TIMEOUT": 4, "DEGRADED": 5, } func (x HealthStatus) String() string { return proto.EnumName(HealthStatus_name, int32(x)) } func (HealthStatus) EnumDescriptor() ([]byte, []int) { return fileDescriptor_health_check_96ed99a3bbe98749, []int{0} } type HealthCheck struct { Timeout *duration.Duration `protobuf:"bytes,1,opt,name=timeout,proto3" json:"timeout,omitempty"` Interval *duration.Duration `protobuf:"bytes,2,opt,name=interval,proto3" json:"interval,omitempty"` IntervalJitter *duration.Duration `protobuf:"bytes,3,opt,name=interval_jitter,json=intervalJitter,proto3" json:"interval_jitter,omitempty"` IntervalJitterPercent uint32 `protobuf:"varint,18,opt,name=interval_jitter_percent,json=intervalJitterPercent,proto3" json:"interval_jitter_percent,omitempty"` UnhealthyThreshold *wrappers.UInt32Value `protobuf:"bytes,4,opt,name=unhealthy_threshold,json=unhealthyThreshold,proto3" json:"unhealthy_threshold,omitempty"` HealthyThreshold *wrappers.UInt32Value `protobuf:"bytes,5,opt,name=healthy_threshold,json=healthyThreshold,proto3" json:"healthy_threshold,omitempty"` AltPort *wrappers.UInt32Value `protobuf:"bytes,6,opt,name=alt_port,json=altPort,proto3" json:"alt_port,omitempty"` ReuseConnection *wrappers.BoolValue `protobuf:"bytes,7,opt,name=reuse_connection,json=reuseConnection,proto3" json:"reuse_connection,omitempty"` // Types that are valid to be assigned to HealthChecker: // *HealthCheck_HttpHealthCheck_ // *HealthCheck_TcpHealthCheck_ // *HealthCheck_GrpcHealthCheck_ // *HealthCheck_CustomHealthCheck_ HealthChecker isHealthCheck_HealthChecker `protobuf_oneof:"health_checker"` NoTrafficInterval *duration.Duration `protobuf:"bytes,12,opt,name=no_traffic_interval,json=noTrafficInterval,proto3" json:"no_traffic_interval,omitempty"` UnhealthyInterval *duration.Duration `protobuf:"bytes,14,opt,name=unhealthy_interval,json=unhealthyInterval,proto3" json:"unhealthy_interval,omitempty"` UnhealthyEdgeInterval *duration.Duration `protobuf:"bytes,15,opt,name=unhealthy_edge_interval,json=unhealthyEdgeInterval,proto3" json:"unhealthy_edge_interval,omitempty"` HealthyEdgeInterval *duration.Duration `protobuf:"bytes,16,opt,name=healthy_edge_interval,json=healthyEdgeInterval,proto3" json:"healthy_edge_interval,omitempty"` EventLogPath string `protobuf:"bytes,17,opt,name=event_log_path,json=eventLogPath,proto3" json:"event_log_path,omitempty"` AlwaysLogHealthCheckFailures bool `protobuf:"varint,19,opt,name=always_log_health_check_failures,json=alwaysLogHealthCheckFailures,proto3" json:"always_log_health_check_failures,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *HealthCheck) Reset() { *m = HealthCheck{} } func (m *HealthCheck) String() string { return proto.CompactTextString(m) } func (*HealthCheck) ProtoMessage() {} func (*HealthCheck) Descriptor() ([]byte, []int) { return fileDescriptor_health_check_96ed99a3bbe98749, []int{0} } func (m *HealthCheck) 
XXX_Unmarshal(b []byte) error { return xxx_messageInfo_HealthCheck.Unmarshal(m, b) } func (m *HealthCheck) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_HealthCheck.Marshal(b, m, deterministic) } func (dst *HealthCheck) XXX_Merge(src proto.Message) { xxx_messageInfo_HealthCheck.Merge(dst, src) } func (m *HealthCheck) XXX_Size() int { return xxx_messageInfo_HealthCheck.Size(m) } func (m *HealthCheck) XXX_DiscardUnknown() { xxx_messageInfo_HealthCheck.DiscardUnknown(m) } var xxx_messageInfo_HealthCheck proto.InternalMessageInfo func (m *HealthCheck) GetTimeout() *duration.Duration { if m != nil { return m.Timeout } return nil } func (m *HealthCheck) GetInterval() *duration.Duration { if m != nil { return m.Interval } return nil } func (m *HealthCheck) GetIntervalJitter() *duration.Duration { if m != nil { return m.IntervalJitter } return nil } func (m *HealthCheck) GetIntervalJitterPercent() uint32 { if m != nil { return m.IntervalJitterPercent } return 0 } func (m *HealthCheck) GetUnhealthyThreshold() *wrappers.UInt32Value { if m != nil { return m.UnhealthyThreshold } return nil } func (m *HealthCheck) GetHealthyThreshold() *wrappers.UInt32Value { if m != nil { return m.HealthyThreshold } return nil } func (m *HealthCheck) GetAltPort() *wrappers.UInt32Value { if m != nil { return m.AltPort } return nil } func (m *HealthCheck) GetReuseConnection() *wrappers.BoolValue { if m != nil { return m.ReuseConnection } return nil } type isHealthCheck_HealthChecker interface { isHealthCheck_HealthChecker() } type HealthCheck_HttpHealthCheck_ struct { HttpHealthCheck *HealthCheck_HttpHealthCheck `protobuf:"bytes,8,opt,name=http_health_check,json=httpHealthCheck,proto3,oneof"` } type HealthCheck_TcpHealthCheck_ struct { TcpHealthCheck *HealthCheck_TcpHealthCheck `protobuf:"bytes,9,opt,name=tcp_health_check,json=tcpHealthCheck,proto3,oneof"` } type HealthCheck_GrpcHealthCheck_ struct { GrpcHealthCheck *HealthCheck_GrpcHealthCheck `protobuf:"bytes,11,opt,name=grpc_health_check,json=grpcHealthCheck,proto3,oneof"` } type HealthCheck_CustomHealthCheck_ struct { CustomHealthCheck *HealthCheck_CustomHealthCheck `protobuf:"bytes,13,opt,name=custom_health_check,json=customHealthCheck,proto3,oneof"` } func (*HealthCheck_HttpHealthCheck_) isHealthCheck_HealthChecker() {} func (*HealthCheck_TcpHealthCheck_) isHealthCheck_HealthChecker() {} func (*HealthCheck_GrpcHealthCheck_) isHealthCheck_HealthChecker() {} func (*HealthCheck_CustomHealthCheck_) isHealthCheck_HealthChecker() {} func (m *HealthCheck) GetHealthChecker() isHealthCheck_HealthChecker { if m != nil { return m.HealthChecker } return nil } func (m *HealthCheck) GetHttpHealthCheck() *HealthCheck_HttpHealthCheck { if x, ok := m.GetHealthChecker().(*HealthCheck_HttpHealthCheck_); ok { return x.HttpHealthCheck } return nil } func (m *HealthCheck) GetTcpHealthCheck() *HealthCheck_TcpHealthCheck { if x, ok := m.GetHealthChecker().(*HealthCheck_TcpHealthCheck_); ok { return x.TcpHealthCheck } return nil } func (m *HealthCheck) GetGrpcHealthCheck() *HealthCheck_GrpcHealthCheck { if x, ok := m.GetHealthChecker().(*HealthCheck_GrpcHealthCheck_); ok { return x.GrpcHealthCheck } return nil } func (m *HealthCheck) GetCustomHealthCheck() *HealthCheck_CustomHealthCheck { if x, ok := m.GetHealthChecker().(*HealthCheck_CustomHealthCheck_); ok { return x.CustomHealthCheck } return nil } func (m *HealthCheck) GetNoTrafficInterval() *duration.Duration { if m != nil { return m.NoTrafficInterval } return nil } func (m *HealthCheck) 
GetUnhealthyInterval() *duration.Duration { if m != nil { return m.UnhealthyInterval } return nil } func (m *HealthCheck) GetUnhealthyEdgeInterval() *duration.Duration { if m != nil { return m.UnhealthyEdgeInterval } return nil } func (m *HealthCheck) GetHealthyEdgeInterval() *duration.Duration { if m != nil { return m.HealthyEdgeInterval } return nil } func (m *HealthCheck) GetEventLogPath() string { if m != nil { return m.EventLogPath } return "" } func (m *HealthCheck) GetAlwaysLogHealthCheckFailures() bool { if m != nil { return m.AlwaysLogHealthCheckFailures } return false } // XXX_OneofFuncs is for the internal use of the proto package. func (*HealthCheck) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _HealthCheck_OneofMarshaler, _HealthCheck_OneofUnmarshaler, _HealthCheck_OneofSizer, []interface{}{ (*HealthCheck_HttpHealthCheck_)(nil), (*HealthCheck_TcpHealthCheck_)(nil), (*HealthCheck_GrpcHealthCheck_)(nil), (*HealthCheck_CustomHealthCheck_)(nil), } } func _HealthCheck_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*HealthCheck) // health_checker switch x := m.HealthChecker.(type) { case *HealthCheck_HttpHealthCheck_: b.EncodeVarint(8<<3 | proto.WireBytes) if err := b.EncodeMessage(x.HttpHealthCheck); err != nil { return err } case *HealthCheck_TcpHealthCheck_: b.EncodeVarint(9<<3 | proto.WireBytes) if err := b.EncodeMessage(x.TcpHealthCheck); err != nil { return err } case *HealthCheck_GrpcHealthCheck_: b.EncodeVarint(11<<3 | proto.WireBytes) if err := b.EncodeMessage(x.GrpcHealthCheck); err != nil { return err } case *HealthCheck_CustomHealthCheck_: b.EncodeVarint(13<<3 | proto.WireBytes) if err := b.EncodeMessage(x.CustomHealthCheck); err != nil { return err } case nil: default: return fmt.Errorf("HealthCheck.HealthChecker has unexpected type %T", x) } return nil } func _HealthCheck_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*HealthCheck) switch tag { case 8: // health_checker.http_health_check if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(HealthCheck_HttpHealthCheck) err := b.DecodeMessage(msg) m.HealthChecker = &HealthCheck_HttpHealthCheck_{msg} return true, err case 9: // health_checker.tcp_health_check if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(HealthCheck_TcpHealthCheck) err := b.DecodeMessage(msg) m.HealthChecker = &HealthCheck_TcpHealthCheck_{msg} return true, err case 11: // health_checker.grpc_health_check if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(HealthCheck_GrpcHealthCheck) err := b.DecodeMessage(msg) m.HealthChecker = &HealthCheck_GrpcHealthCheck_{msg} return true, err case 13: // health_checker.custom_health_check if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(HealthCheck_CustomHealthCheck) err := b.DecodeMessage(msg) m.HealthChecker = &HealthCheck_CustomHealthCheck_{msg} return true, err default: return false, nil } } func _HealthCheck_OneofSizer(msg proto.Message) (n int) { m := msg.(*HealthCheck) // health_checker switch x := m.HealthChecker.(type) { case *HealthCheck_HttpHealthCheck_: s := proto.Size(x.HttpHealthCheck) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *HealthCheck_TcpHealthCheck_: s := proto.Size(x.TcpHealthCheck) n += 1 // tag and wire n += 
proto.SizeVarint(uint64(s)) n += s case *HealthCheck_GrpcHealthCheck_: s := proto.Size(x.GrpcHealthCheck) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *HealthCheck_CustomHealthCheck_: s := proto.Size(x.CustomHealthCheck) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } type HealthCheck_Payload struct { // Types that are valid to be assigned to Payload: // *HealthCheck_Payload_Text // *HealthCheck_Payload_Binary Payload isHealthCheck_Payload_Payload `protobuf_oneof:"payload"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *HealthCheck_Payload) Reset() { *m = HealthCheck_Payload{} } func (m *HealthCheck_Payload) String() string { return proto.CompactTextString(m) } func (*HealthCheck_Payload) ProtoMessage() {} func (*HealthCheck_Payload) Descriptor() ([]byte, []int) { return fileDescriptor_health_check_96ed99a3bbe98749, []int{0, 0} } func (m *HealthCheck_Payload) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_HealthCheck_Payload.Unmarshal(m, b) } func (m *HealthCheck_Payload) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_HealthCheck_Payload.Marshal(b, m, deterministic) } func (dst *HealthCheck_Payload) XXX_Merge(src proto.Message) { xxx_messageInfo_HealthCheck_Payload.Merge(dst, src) } func (m *HealthCheck_Payload) XXX_Size() int { return xxx_messageInfo_HealthCheck_Payload.Size(m) } func (m *HealthCheck_Payload) XXX_DiscardUnknown() { xxx_messageInfo_HealthCheck_Payload.DiscardUnknown(m) } var xxx_messageInfo_HealthCheck_Payload proto.InternalMessageInfo type isHealthCheck_Payload_Payload interface { isHealthCheck_Payload_Payload() } type HealthCheck_Payload_Text struct { Text string `protobuf:"bytes,1,opt,name=text,proto3,oneof"` } type HealthCheck_Payload_Binary struct { Binary []byte `protobuf:"bytes,2,opt,name=binary,proto3,oneof"` } func (*HealthCheck_Payload_Text) isHealthCheck_Payload_Payload() {} func (*HealthCheck_Payload_Binary) isHealthCheck_Payload_Payload() {} func (m *HealthCheck_Payload) GetPayload() isHealthCheck_Payload_Payload { if m != nil { return m.Payload } return nil } func (m *HealthCheck_Payload) GetText() string { if x, ok := m.GetPayload().(*HealthCheck_Payload_Text); ok { return x.Text } return "" } func (m *HealthCheck_Payload) GetBinary() []byte { if x, ok := m.GetPayload().(*HealthCheck_Payload_Binary); ok { return x.Binary } return nil } // XXX_OneofFuncs is for the internal use of the proto package. 
func (*HealthCheck_Payload) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _HealthCheck_Payload_OneofMarshaler, _HealthCheck_Payload_OneofUnmarshaler, _HealthCheck_Payload_OneofSizer, []interface{}{ (*HealthCheck_Payload_Text)(nil), (*HealthCheck_Payload_Binary)(nil), } } func _HealthCheck_Payload_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*HealthCheck_Payload) // payload switch x := m.Payload.(type) { case *HealthCheck_Payload_Text: b.EncodeVarint(1<<3 | proto.WireBytes) b.EncodeStringBytes(x.Text) case *HealthCheck_Payload_Binary: b.EncodeVarint(2<<3 | proto.WireBytes) b.EncodeRawBytes(x.Binary) case nil: default: return fmt.Errorf("HealthCheck_Payload.Payload has unexpected type %T", x) } return nil } func _HealthCheck_Payload_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*HealthCheck_Payload) switch tag { case 1: // payload.text if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } x, err := b.DecodeStringBytes() m.Payload = &HealthCheck_Payload_Text{x} return true, err case 2: // payload.binary if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } x, err := b.DecodeRawBytes(true) m.Payload = &HealthCheck_Payload_Binary{x} return true, err default: return false, nil } } func _HealthCheck_Payload_OneofSizer(msg proto.Message) (n int) { m := msg.(*HealthCheck_Payload) // payload switch x := m.Payload.(type) { case *HealthCheck_Payload_Text: n += 1 // tag and wire n += proto.SizeVarint(uint64(len(x.Text))) n += len(x.Text) case *HealthCheck_Payload_Binary: n += 1 // tag and wire n += proto.SizeVarint(uint64(len(x.Binary))) n += len(x.Binary) case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } type HealthCheck_HttpHealthCheck struct { Host string `protobuf:"bytes,1,opt,name=host,proto3" json:"host,omitempty"` Path string `protobuf:"bytes,2,opt,name=path,proto3" json:"path,omitempty"` Send *HealthCheck_Payload `protobuf:"bytes,3,opt,name=send,proto3" json:"send,omitempty"` Receive *HealthCheck_Payload `protobuf:"bytes,4,opt,name=receive,proto3" json:"receive,omitempty"` ServiceName string `protobuf:"bytes,5,opt,name=service_name,json=serviceName,proto3" json:"service_name,omitempty"` RequestHeadersToAdd []*base.HeaderValueOption `protobuf:"bytes,6,rep,name=request_headers_to_add,json=requestHeadersToAdd,proto3" json:"request_headers_to_add,omitempty"` RequestHeadersToRemove []string `protobuf:"bytes,8,rep,name=request_headers_to_remove,json=requestHeadersToRemove,proto3" json:"request_headers_to_remove,omitempty"` UseHttp2 bool `protobuf:"varint,7,opt,name=use_http2,json=useHttp2,proto3" json:"use_http2,omitempty"` ExpectedStatuses []*_range.Int64Range `protobuf:"bytes,9,rep,name=expected_statuses,json=expectedStatuses,proto3" json:"expected_statuses,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *HealthCheck_HttpHealthCheck) Reset() { *m = HealthCheck_HttpHealthCheck{} } func (m *HealthCheck_HttpHealthCheck) String() string { return proto.CompactTextString(m) } func (*HealthCheck_HttpHealthCheck) ProtoMessage() {} func (*HealthCheck_HttpHealthCheck) Descriptor() ([]byte, []int) { return fileDescriptor_health_check_96ed99a3bbe98749, []int{0, 1} } func (m *HealthCheck_HttpHealthCheck) XXX_Unmarshal(b []byte) error { 
return xxx_messageInfo_HealthCheck_HttpHealthCheck.Unmarshal(m, b) } func (m *HealthCheck_HttpHealthCheck) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_HealthCheck_HttpHealthCheck.Marshal(b, m, deterministic) } func (dst *HealthCheck_HttpHealthCheck) XXX_Merge(src proto.Message) { xxx_messageInfo_HealthCheck_HttpHealthCheck.Merge(dst, src) } func (m *HealthCheck_HttpHealthCheck) XXX_Size() int { return xxx_messageInfo_HealthCheck_HttpHealthCheck.Size(m) } func (m *HealthCheck_HttpHealthCheck) XXX_DiscardUnknown() { xxx_messageInfo_HealthCheck_HttpHealthCheck.DiscardUnknown(m) } var xxx_messageInfo_HealthCheck_HttpHealthCheck proto.InternalMessageInfo func (m *HealthCheck_HttpHealthCheck) GetHost() string { if m != nil { return m.Host } return "" } func (m *HealthCheck_HttpHealthCheck) GetPath() string { if m != nil { return m.Path } return "" } func (m *HealthCheck_HttpHealthCheck) GetSend() *HealthCheck_Payload { if m != nil { return m.Send } return nil } func (m *HealthCheck_HttpHealthCheck) GetReceive() *HealthCheck_Payload { if m != nil { return m.Receive } return nil } func (m *HealthCheck_HttpHealthCheck) GetServiceName() string { if m != nil { return m.ServiceName } return "" } func (m *HealthCheck_HttpHealthCheck) GetRequestHeadersToAdd() []*base.HeaderValueOption { if m != nil { return m.RequestHeadersToAdd } return nil } func (m *HealthCheck_HttpHealthCheck) GetRequestHeadersToRemove() []string { if m != nil { return m.RequestHeadersToRemove } return nil } func (m *HealthCheck_HttpHealthCheck) GetUseHttp2() bool { if m != nil { return m.UseHttp2 } return false } func (m *HealthCheck_HttpHealthCheck) GetExpectedStatuses() []*_range.Int64Range { if m != nil { return m.ExpectedStatuses } return nil } type HealthCheck_TcpHealthCheck struct { Send *HealthCheck_Payload `protobuf:"bytes,1,opt,name=send,proto3" json:"send,omitempty"` Receive []*HealthCheck_Payload `protobuf:"bytes,2,rep,name=receive,proto3" json:"receive,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *HealthCheck_TcpHealthCheck) Reset() { *m = HealthCheck_TcpHealthCheck{} } func (m *HealthCheck_TcpHealthCheck) String() string { return proto.CompactTextString(m) } func (*HealthCheck_TcpHealthCheck) ProtoMessage() {} func (*HealthCheck_TcpHealthCheck) Descriptor() ([]byte, []int) { return fileDescriptor_health_check_96ed99a3bbe98749, []int{0, 2} } func (m *HealthCheck_TcpHealthCheck) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_HealthCheck_TcpHealthCheck.Unmarshal(m, b) } func (m *HealthCheck_TcpHealthCheck) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_HealthCheck_TcpHealthCheck.Marshal(b, m, deterministic) } func (dst *HealthCheck_TcpHealthCheck) XXX_Merge(src proto.Message) { xxx_messageInfo_HealthCheck_TcpHealthCheck.Merge(dst, src) } func (m *HealthCheck_TcpHealthCheck) XXX_Size() int { return xxx_messageInfo_HealthCheck_TcpHealthCheck.Size(m) } func (m *HealthCheck_TcpHealthCheck) XXX_DiscardUnknown() { xxx_messageInfo_HealthCheck_TcpHealthCheck.DiscardUnknown(m) } var xxx_messageInfo_HealthCheck_TcpHealthCheck proto.InternalMessageInfo func (m *HealthCheck_TcpHealthCheck) GetSend() *HealthCheck_Payload { if m != nil { return m.Send } return nil } func (m *HealthCheck_TcpHealthCheck) GetReceive() []*HealthCheck_Payload { if m != nil { return m.Receive } return nil } type HealthCheck_RedisHealthCheck struct { Key string 
`protobuf:"bytes,1,opt,name=key,proto3" json:"key,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *HealthCheck_RedisHealthCheck) Reset() { *m = HealthCheck_RedisHealthCheck{} } func (m *HealthCheck_RedisHealthCheck) String() string { return proto.CompactTextString(m) } func (*HealthCheck_RedisHealthCheck) ProtoMessage() {} func (*HealthCheck_RedisHealthCheck) Descriptor() ([]byte, []int) { return fileDescriptor_health_check_96ed99a3bbe98749, []int{0, 3} } func (m *HealthCheck_RedisHealthCheck) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_HealthCheck_RedisHealthCheck.Unmarshal(m, b) } func (m *HealthCheck_RedisHealthCheck) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_HealthCheck_RedisHealthCheck.Marshal(b, m, deterministic) } func (dst *HealthCheck_RedisHealthCheck) XXX_Merge(src proto.Message) { xxx_messageInfo_HealthCheck_RedisHealthCheck.Merge(dst, src) } func (m *HealthCheck_RedisHealthCheck) XXX_Size() int { return xxx_messageInfo_HealthCheck_RedisHealthCheck.Size(m) } func (m *HealthCheck_RedisHealthCheck) XXX_DiscardUnknown() { xxx_messageInfo_HealthCheck_RedisHealthCheck.DiscardUnknown(m) } var xxx_messageInfo_HealthCheck_RedisHealthCheck proto.InternalMessageInfo func (m *HealthCheck_RedisHealthCheck) GetKey() string { if m != nil { return m.Key } return "" } type HealthCheck_GrpcHealthCheck struct { ServiceName string `protobuf:"bytes,1,opt,name=service_name,json=serviceName,proto3" json:"service_name,omitempty"` Authority string `protobuf:"bytes,2,opt,name=authority,proto3" json:"authority,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *HealthCheck_GrpcHealthCheck) Reset() { *m = HealthCheck_GrpcHealthCheck{} } func (m *HealthCheck_GrpcHealthCheck) String() string { return proto.CompactTextString(m) } func (*HealthCheck_GrpcHealthCheck) ProtoMessage() {} func (*HealthCheck_GrpcHealthCheck) Descriptor() ([]byte, []int) { return fileDescriptor_health_check_96ed99a3bbe98749, []int{0, 4} } func (m *HealthCheck_GrpcHealthCheck) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_HealthCheck_GrpcHealthCheck.Unmarshal(m, b) } func (m *HealthCheck_GrpcHealthCheck) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_HealthCheck_GrpcHealthCheck.Marshal(b, m, deterministic) } func (dst *HealthCheck_GrpcHealthCheck) XXX_Merge(src proto.Message) { xxx_messageInfo_HealthCheck_GrpcHealthCheck.Merge(dst, src) } func (m *HealthCheck_GrpcHealthCheck) XXX_Size() int { return xxx_messageInfo_HealthCheck_GrpcHealthCheck.Size(m) } func (m *HealthCheck_GrpcHealthCheck) XXX_DiscardUnknown() { xxx_messageInfo_HealthCheck_GrpcHealthCheck.DiscardUnknown(m) } var xxx_messageInfo_HealthCheck_GrpcHealthCheck proto.InternalMessageInfo func (m *HealthCheck_GrpcHealthCheck) GetServiceName() string { if m != nil { return m.ServiceName } return "" } func (m *HealthCheck_GrpcHealthCheck) GetAuthority() string { if m != nil { return m.Authority } return "" } type HealthCheck_CustomHealthCheck struct { Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` // Types that are valid to be assigned to ConfigType: // *HealthCheck_CustomHealthCheck_Config // *HealthCheck_CustomHealthCheck_TypedConfig ConfigType isHealthCheck_CustomHealthCheck_ConfigType `protobuf_oneof:"config_type"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` 
XXX_sizecache int32 `json:"-"` } func (m *HealthCheck_CustomHealthCheck) Reset() { *m = HealthCheck_CustomHealthCheck{} } func (m *HealthCheck_CustomHealthCheck) String() string { return proto.CompactTextString(m) } func (*HealthCheck_CustomHealthCheck) ProtoMessage() {} func (*HealthCheck_CustomHealthCheck) Descriptor() ([]byte, []int) { return fileDescriptor_health_check_96ed99a3bbe98749, []int{0, 5} } func (m *HealthCheck_CustomHealthCheck) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_HealthCheck_CustomHealthCheck.Unmarshal(m, b) } func (m *HealthCheck_CustomHealthCheck) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_HealthCheck_CustomHealthCheck.Marshal(b, m, deterministic) } func (dst *HealthCheck_CustomHealthCheck) XXX_Merge(src proto.Message) { xxx_messageInfo_HealthCheck_CustomHealthCheck.Merge(dst, src) } func (m *HealthCheck_CustomHealthCheck) XXX_Size() int { return xxx_messageInfo_HealthCheck_CustomHealthCheck.Size(m) } func (m *HealthCheck_CustomHealthCheck) XXX_DiscardUnknown() { xxx_messageInfo_HealthCheck_CustomHealthCheck.DiscardUnknown(m) } var xxx_messageInfo_HealthCheck_CustomHealthCheck proto.InternalMessageInfo func (m *HealthCheck_CustomHealthCheck) GetName() string { if m != nil { return m.Name } return "" } type isHealthCheck_CustomHealthCheck_ConfigType interface { isHealthCheck_CustomHealthCheck_ConfigType() } type HealthCheck_CustomHealthCheck_Config struct { Config *_struct.Struct `protobuf:"bytes,2,opt,name=config,proto3,oneof"` } type HealthCheck_CustomHealthCheck_TypedConfig struct { TypedConfig *any.Any `protobuf:"bytes,3,opt,name=typed_config,json=typedConfig,proto3,oneof"` } func (*HealthCheck_CustomHealthCheck_Config) isHealthCheck_CustomHealthCheck_ConfigType() {} func (*HealthCheck_CustomHealthCheck_TypedConfig) isHealthCheck_CustomHealthCheck_ConfigType() {} func (m *HealthCheck_CustomHealthCheck) GetConfigType() isHealthCheck_CustomHealthCheck_ConfigType { if m != nil { return m.ConfigType } return nil } func (m *HealthCheck_CustomHealthCheck) GetConfig() *_struct.Struct { if x, ok := m.GetConfigType().(*HealthCheck_CustomHealthCheck_Config); ok { return x.Config } return nil } func (m *HealthCheck_CustomHealthCheck) GetTypedConfig() *any.Any { if x, ok := m.GetConfigType().(*HealthCheck_CustomHealthCheck_TypedConfig); ok { return x.TypedConfig } return nil } // XXX_OneofFuncs is for the internal use of the proto package. 
func (*HealthCheck_CustomHealthCheck) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _HealthCheck_CustomHealthCheck_OneofMarshaler, _HealthCheck_CustomHealthCheck_OneofUnmarshaler, _HealthCheck_CustomHealthCheck_OneofSizer, []interface{}{ (*HealthCheck_CustomHealthCheck_Config)(nil), (*HealthCheck_CustomHealthCheck_TypedConfig)(nil), } } func _HealthCheck_CustomHealthCheck_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*HealthCheck_CustomHealthCheck) // config_type switch x := m.ConfigType.(type) { case *HealthCheck_CustomHealthCheck_Config: b.EncodeVarint(2<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Config); err != nil { return err } case *HealthCheck_CustomHealthCheck_TypedConfig: b.EncodeVarint(3<<3 | proto.WireBytes) if err := b.EncodeMessage(x.TypedConfig); err != nil { return err } case nil: default: return fmt.Errorf("HealthCheck_CustomHealthCheck.ConfigType has unexpected type %T", x) } return nil } func _HealthCheck_CustomHealthCheck_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*HealthCheck_CustomHealthCheck) switch tag { case 2: // config_type.config if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(_struct.Struct) err := b.DecodeMessage(msg) m.ConfigType = &HealthCheck_CustomHealthCheck_Config{msg} return true, err case 3: // config_type.typed_config if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(any.Any) err := b.DecodeMessage(msg) m.ConfigType = &HealthCheck_CustomHealthCheck_TypedConfig{msg} return true, err default: return false, nil } } func _HealthCheck_CustomHealthCheck_OneofSizer(msg proto.Message) (n int) { m := msg.(*HealthCheck_CustomHealthCheck) // config_type switch x := m.ConfigType.(type) { case *HealthCheck_CustomHealthCheck_Config: s := proto.Size(x.Config) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *HealthCheck_CustomHealthCheck_TypedConfig: s := proto.Size(x.TypedConfig) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } func init() { proto.RegisterType((*HealthCheck)(nil), "envoy.api.v2.core.HealthCheck") proto.RegisterType((*HealthCheck_Payload)(nil), "envoy.api.v2.core.HealthCheck.Payload") proto.RegisterType((*HealthCheck_HttpHealthCheck)(nil), "envoy.api.v2.core.HealthCheck.HttpHealthCheck") proto.RegisterType((*HealthCheck_TcpHealthCheck)(nil), "envoy.api.v2.core.HealthCheck.TcpHealthCheck") proto.RegisterType((*HealthCheck_RedisHealthCheck)(nil), "envoy.api.v2.core.HealthCheck.RedisHealthCheck") proto.RegisterType((*HealthCheck_GrpcHealthCheck)(nil), "envoy.api.v2.core.HealthCheck.GrpcHealthCheck") proto.RegisterType((*HealthCheck_CustomHealthCheck)(nil), "envoy.api.v2.core.HealthCheck.CustomHealthCheck") proto.RegisterEnum("envoy.api.v2.core.HealthStatus", HealthStatus_name, HealthStatus_value) } func init() { proto.RegisterFile("envoy/api/v2/core/health_check.proto", fileDescriptor_health_check_96ed99a3bbe98749) } var fileDescriptor_health_check_96ed99a3bbe98749 = []byte{ // 1166 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xa4, 0x96, 0xdf, 0x72, 0xdb, 0xc4, 0x17, 0xc7, 0xad, 0xd8, 0x89, 0xed, 0x63, 0x27, 0x91, 0xd7, 0xbf, 0x26, 0xaa, 0x7f, 0x81, 0x1a, 0x26, 0xc3, 0x64, 0x3a, 
0x83, 0x0c, 0x2e, 0x94, 0x29, 0x57, 0xc4, 0x49, 0x5a, 0xbb, 0xb4, 0x6e, 0x66, 0xeb, 0x94, 0xe9, 0x0c, 0x8c, 0xd8, 0x48, 0x27, 0xb6, 0xa8, 0xa2, 0x15, 0xab, 0x95, 0x5b, 0xbf, 0x04, 0x17, 0x3c, 0x46, 0x6f, 0xe0, 0x92, 0xe1, 0xaa, 0x8f, 0x43, 0x79, 0x0a, 0x46, 0x2b, 0xd9, 0xb1, 0xad, 0x30, 0x49, 0x86, 0x3b, 0xe9, 0x9c, 0xf3, 0xfd, 0xec, 0x59, 0x9d, 0x3f, 0x23, 0xd8, 0x45, 0x7f, 0xcc, 0x27, 0x2d, 0x16, 0xb8, 0xad, 0x71, 0xbb, 0x65, 0x73, 0x81, 0xad, 0x11, 0x32, 0x4f, 0x8e, 0x2c, 0x7b, 0x84, 0xf6, 0x2b, 0x33, 0x10, 0x5c, 0x72, 0x52, 0x53, 0x51, 0x26, 0x0b, 0x5c, 0x73, 0xdc, 0x36, 0xe3, 0xa8, 0xc6, 0x4e, 0x56, 0x78, 0xca, 0x42, 0x4c, 0x04, 0x8d, 0xad, 0xc4, 0x2b, 0x27, 0x01, 0xb6, 0x04, 0xf3, 0x87, 0x53, 0xfb, 0xed, 0x21, 0xe7, 0x43, 0x0f, 0x5b, 0xea, 0xed, 0x34, 0x3a, 0x6b, 0x31, 0x7f, 0x92, 0xba, 0x3e, 0x5c, 0x76, 0x39, 0x91, 0x60, 0xd2, 0xe5, 0x7e, 0xea, 0xdf, 0x59, 0xf6, 0x87, 0x52, 0x44, 0xb6, 0xfc, 0x37, 0xf5, 0x6b, 0xc1, 0x82, 0x00, 0x45, 0x98, 0xfa, 0xb7, 0xc7, 0xcc, 0x73, 0x1d, 0x26, 0xb1, 0x35, 0x7d, 0x48, 0x1c, 0x1f, 0xff, 0x5e, 0x87, 0x4a, 0x57, 0xdd, 0xf8, 0x20, 0xbe, 0x30, 0xd9, 0x87, 0xa2, 0x74, 0xcf, 0x91, 0x47, 0xd2, 0xd0, 0x9a, 0xda, 0x5e, 0xa5, 0x7d, 0xdb, 0x4c, 0xd0, 0xe6, 0x14, 0x6d, 0x1e, 0xa6, 0x89, 0x75, 0xaa, 0x7f, 0xbe, 0x7f, 0x97, 0x2f, 0xbe, 0xd5, 0x0a, 0x25, 0xed, 0x6e, 0x8e, 0x4e, 0x75, 0xe4, 0x00, 0x4a, 0xae, 0x2f, 0x51, 0x8c, 0x99, 0x67, 0xac, 0xdc, 0x8c, 0x31, 0x13, 0x92, 0x0e, 0x6c, 0x4e, 0x9f, 0xad, 0x9f, 0x5c, 0x29, 0x51, 0x18, 0xf9, 0x2b, 0x58, 0x74, 0x63, 0xaa, 0x78, 0xac, 0x04, 0xe4, 0x3e, 0x6c, 0x2f, 0x31, 0xac, 0x00, 0x85, 0x8d, 0xbe, 0x34, 0x48, 0x53, 0xdb, 0x5b, 0xa7, 0xb7, 0x16, 0x05, 0xc7, 0x89, 0x93, 0x3c, 0x85, 0x7a, 0xe4, 0x27, 0x6d, 0x30, 0xb1, 0xe4, 0x48, 0x60, 0x38, 0xe2, 0x9e, 0x63, 0x14, 0xd4, 0xf9, 0x3b, 0x99, 0xf3, 0x4f, 0x7a, 0xbe, 0xbc, 0xd7, 0x7e, 0xc1, 0xbc, 0x08, 0x29, 0x99, 0x09, 0x07, 0x53, 0x1d, 0xe9, 0x41, 0x2d, 0x0b, 0x5b, 0xbd, 0x06, 0x4c, 0xcf, 0xa0, 0xbe, 0x82, 0x12, 0xf3, 0xa4, 0x15, 0x70, 0x21, 0x8d, 0xb5, 0x6b, 0x10, 0x8a, 0xcc, 0x93, 0xc7, 0x5c, 0x48, 0x72, 0x04, 0xba, 0xc0, 0x28, 0x44, 0xcb, 0xe6, 0xbe, 0x8f, 0x76, 0xfc, 0xb9, 0x8c, 0xa2, 0x02, 0x34, 0x32, 0x80, 0x0e, 0xe7, 0x5e, 0x22, 0xdf, 0x54, 0x9a, 0x83, 0x99, 0x84, 0x7c, 0x0f, 0xb5, 0x91, 0x94, 0x81, 0x35, 0x3f, 0x23, 0x46, 0x49, 0x71, 0x4c, 0x33, 0x33, 0x24, 0xe6, 0x5c, 0x63, 0x99, 0x5d, 0x29, 0x83, 0xb9, 0xf7, 0x6e, 0x8e, 0x6e, 0x8e, 0x16, 0x4d, 0xe4, 0x25, 0xe8, 0xd2, 0x5e, 0x82, 0x97, 0x15, 0xfc, 0xd3, 0x2b, 0xe0, 0x03, 0x7b, 0x89, 0xbd, 0x21, 0x17, 0x2c, 0x71, 0xe2, 0x43, 0x11, 0xd8, 0x8b, 0xec, 0xca, 0xb5, 0x12, 0x7f, 0x24, 0x02, 0x7b, 0x29, 0xf1, 0xe1, 0xa2, 0x89, 0x9c, 0x42, 0xdd, 0x8e, 0x42, 0xc9, 0xcf, 0x17, 0xf9, 0xeb, 0x8a, 0xff, 0xd9, 0x15, 0xfc, 0x03, 0xa5, 0x5c, 0x3c, 0xa1, 0x66, 0x2f, 0x1b, 0xc9, 0x09, 0xd4, 0x7d, 0x6e, 0x49, 0xc1, 0xce, 0xce, 0x5c, 0xdb, 0x9a, 0x0d, 0x58, 0xf5, 0xaa, 0x01, 0x83, 0x78, 0xc0, 0x56, 0xdf, 0x6a, 0x2b, 0x77, 0x73, 0xb4, 0xe6, 0xf3, 0x41, 0x02, 0xe8, 0x4d, 0xe7, 0x6c, 0x00, 0x17, 0x2d, 0x7b, 0x41, 0xdd, 0xb8, 0x11, 0x75, 0x06, 0x98, 0x51, 0x7f, 0x80, 0xed, 0x0b, 0x2a, 0x3a, 0x43, 0xbc, 0x40, 0x6f, 0xde, 0x04, 0x7d, 0x6b, 0x46, 0x39, 0x72, 0x86, 0x38, 0xc3, 0xbf, 0x84, 0x5b, 0x97, 0xc3, 0xf5, 0x9b, 0xc0, 0xeb, 0x97, 0xa1, 0x77, 0x61, 0x03, 0xc7, 0xe8, 0x4b, 0xcb, 0xe3, 0x43, 0x2b, 0x60, 0x72, 0x64, 0xd4, 0x9a, 0xda, 0x5e, 0x99, 0x56, 0x95, 0xf5, 0x09, 0x1f, 0x1e, 0x33, 0x39, 0x22, 0x0f, 0xa1, 0xc9, 0xbc, 0xd7, 0x6c, 0x12, 0xaa, 0xb0, 0xf9, 0xa2, 0x5b, 0x67, 0xcc, 0xf5, 0x22, 0x81, 0xa1, 0x51, 0x6f, 0x6a, 0x7b, 0x25, 0xba, 
0x93, 0xc4, 0x3d, 0xe1, 0xc3, 0xb9, 0x62, 0x3e, 0x4c, 0x63, 0x1a, 0x2f, 0xa0, 0x78, 0xcc, 0x26, 0x1e, 0x67, 0x0e, 0xb9, 0x03, 0x05, 0x89, 0x6f, 0x92, 0xad, 0x5b, 0xee, 0x94, 0xe3, 0x3c, 0x0b, 0x62, 0xa5, 0xa9, 0x75, 0x73, 0x54, 0x39, 0x88, 0x01, 0x6b, 0xa7, 0xae, 0xcf, 0xc4, 0x44, 0x2d, 0xd5, 0x6a, 0x37, 0x47, 0xd3, 0xf7, 0x8e, 0x0e, 0xc5, 0x20, 0xa5, 0xac, 0xfe, 0xf1, 0xfe, 0x5d, 0x5e, 0x6b, 0xfc, 0x9d, 0x87, 0xcd, 0xa5, 0x81, 0x23, 0x04, 0x0a, 0x23, 0x1e, 0xa6, 0x07, 0x50, 0xf5, 0x4c, 0x3e, 0x80, 0x82, 0xba, 0xe3, 0xca, 0xd2, 0xa1, 0x54, 0x99, 0xc9, 0xd7, 0x50, 0x08, 0xd1, 0x77, 0xd2, 0xcd, 0xfb, 0xc9, 0x15, 0x8d, 0x9c, 0xde, 0x84, 0x2a, 0x0d, 0xf9, 0x06, 0x8a, 0x02, 0x6d, 0x74, 0xc7, 0x98, 0x2e, 0xce, 0xeb, 0xca, 0xa7, 0x32, 0xf2, 0x11, 0x54, 0x43, 0x14, 0x63, 0xd7, 0x46, 0xcb, 0x67, 0xe7, 0xa8, 0x56, 0x66, 0x99, 0x56, 0x52, 0x5b, 0x9f, 0x9d, 0x23, 0x39, 0x83, 0x2d, 0x81, 0x3f, 0x47, 0x18, 0xca, 0xb8, 0x08, 0x0e, 0x8a, 0xd0, 0x92, 0xdc, 0x62, 0x8e, 0x63, 0xac, 0x35, 0xf3, 0x7b, 0x95, 0xf6, 0xee, 0xe5, 0x67, 0x3a, 0x28, 0xd4, 0x82, 0x7b, 0x16, 0xa8, 0xa6, 0xa8, 0xc4, 0xf7, 0x5e, 0xfb, 0x55, 0xcb, 0xeb, 0x7f, 0x15, 0x69, 0x3d, 0x05, 0x26, 0x61, 0xe1, 0x80, 0xef, 0x3b, 0x0e, 0x79, 0x00, 0xb7, 0x2f, 0x39, 0x47, 0xe0, 0x39, 0x1f, 0xa3, 0x51, 0x6a, 0xe6, 0xf7, 0xca, 0x74, 0x6b, 0x59, 0x47, 0x95, 0x97, 0xfc, 0x1f, 0xca, 0xf1, 0xde, 0x8d, 0x77, 0x5d, 0x5b, 0xad, 0xdc, 0x12, 0x2d, 0x45, 0x21, 0xc6, 0xd5, 0x69, 0x93, 0x03, 0xa8, 0xe1, 0x9b, 0x00, 0x6d, 0x89, 0x8e, 0x15, 0x4a, 0x26, 0xa3, 0x10, 0x43, 0xa3, 0xac, 0x52, 0xdf, 0x4a, 0x53, 0x8f, 0xff, 0x21, 0xcc, 0x9e, 0x2f, 0xef, 0x7f, 0x41, 0xe3, 0x1f, 0x09, 0xaa, 0x4f, 0x05, 0xcf, 0xd3, 0xf8, 0xc6, 0x2f, 0x1a, 0x6c, 0x2c, 0x2e, 0xc0, 0x59, 0xe1, 0xb4, 0xff, 0x56, 0xb8, 0x15, 0x95, 0xc9, 0x4d, 0x0b, 0xd7, 0xd8, 0x05, 0x9d, 0xa2, 0xe3, 0x86, 0xf3, 0x19, 0xe9, 0x90, 0x7f, 0x85, 0x93, 0xb4, 0xf9, 0xe2, 0xc7, 0x06, 0x85, 0xcd, 0xa5, 0xd5, 0x9a, 0xa9, 0xb8, 0x96, 0xad, 0xf8, 0x0e, 0x94, 0x59, 0x24, 0x47, 0x5c, 0xb8, 0x32, 0x19, 0x84, 0x32, 0xbd, 0x30, 0x34, 0x7e, 0xd3, 0xa0, 0x96, 0xd9, 0xa7, 0x71, 0x97, 0x5f, 0xe0, 0x16, 0xba, 0x3c, 0x36, 0x93, 0xcf, 0x61, 0xcd, 0xe6, 0xfe, 0x99, 0x3b, 0x4c, 0xff, 0x56, 0xb6, 0x33, 0xeb, 0xe3, 0xb9, 0xfa, 0xd5, 0x8a, 0x27, 0x2e, 0x09, 0x24, 0x0f, 0xa0, 0x1a, 0xd7, 0xc5, 0xb1, 0x52, 0x61, 0x32, 0x20, 0xff, 0xcb, 0x08, 0xf7, 0xfd, 0x49, 0x37, 0x47, 0x2b, 0x2a, 0xf6, 0x40, 0x85, 0x76, 0xd6, 0xa1, 0x92, 0x88, 0xac, 0xd8, 0xda, 0xd9, 0x86, 0x8d, 0xf9, 0xf5, 0x81, 0x22, 0x1d, 0xe1, 0xc7, 0x85, 0x12, 0xe8, 0x15, 0x4a, 0x44, 0xfc, 0x21, 0x17, 0x36, 0xcc, 0xdd, 0x1f, 0xa1, 0x9a, 0xdc, 0x2e, 0xe9, 0x00, 0x52, 0x81, 0xe2, 0x49, 0xff, 0xdb, 0xfe, 0xb3, 0xef, 0xfa, 0x7a, 0x2e, 0x7e, 0xe9, 0x1e, 0xed, 0x3f, 0x19, 0x74, 0x5f, 0xea, 0x1a, 0x59, 0x87, 0xf2, 0x49, 0x7f, 0xfa, 0xba, 0x42, 0xaa, 0x50, 0x3a, 0xa4, 0xfb, 0xbd, 0x7e, 0xaf, 0xff, 0x48, 0xcf, 0xc7, 0x91, 0x83, 0xde, 0xd3, 0xa3, 0x67, 0x27, 0x03, 0xbd, 0xa0, 0x5c, 0x47, 0x8f, 0xe8, 0xfe, 0xe1, 0xd1, 0xa1, 0xbe, 0xda, 0xf9, 0x12, 0xee, 0xb8, 0x3c, 0xa9, 0x7a, 0x20, 0xf8, 0x9b, 0x49, 0xb6, 0x01, 0x3a, 0xfa, 0xdc, 0x07, 0x3e, 0x8e, 0xaf, 0x7b, 0xac, 0x9d, 0xae, 0xa9, 0x7b, 0xdf, 0xfb, 0x27, 0x00, 0x00, 0xff, 0xff, 0x31, 0x86, 0x71, 0x0b, 0x55, 0x0b, 0x00, 0x00, } grpc-go-1.22.1/balancer/xds/internal/proto/envoy/api/v2/core/protocol/000077500000000000000000000000001351635773100255235ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/envoy/api/v2/core/protocol/protocol.pb.go000077500000000000000000000337101351635773100303220ustar00rootroot00000000000000// Code 
generated by protoc-gen-go. DO NOT EDIT. // source: envoy/api/v2/core/protocol.proto package envoy_api_v2_core import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import duration "github.com/golang/protobuf/ptypes/duration" import wrappers "github.com/golang/protobuf/ptypes/wrappers" import _ "google.golang.org/grpc/balancer/xds/internal/proto/validate" // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type TcpProtocolOptions struct { XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *TcpProtocolOptions) Reset() { *m = TcpProtocolOptions{} } func (m *TcpProtocolOptions) String() string { return proto.CompactTextString(m) } func (*TcpProtocolOptions) ProtoMessage() {} func (*TcpProtocolOptions) Descriptor() ([]byte, []int) { return fileDescriptor_protocol_2e969372c85b867d, []int{0} } func (m *TcpProtocolOptions) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_TcpProtocolOptions.Unmarshal(m, b) } func (m *TcpProtocolOptions) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_TcpProtocolOptions.Marshal(b, m, deterministic) } func (dst *TcpProtocolOptions) XXX_Merge(src proto.Message) { xxx_messageInfo_TcpProtocolOptions.Merge(dst, src) } func (m *TcpProtocolOptions) XXX_Size() int { return xxx_messageInfo_TcpProtocolOptions.Size(m) } func (m *TcpProtocolOptions) XXX_DiscardUnknown() { xxx_messageInfo_TcpProtocolOptions.DiscardUnknown(m) } var xxx_messageInfo_TcpProtocolOptions proto.InternalMessageInfo type HttpProtocolOptions struct { IdleTimeout *duration.Duration `protobuf:"bytes,1,opt,name=idle_timeout,json=idleTimeout,proto3" json:"idle_timeout,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *HttpProtocolOptions) Reset() { *m = HttpProtocolOptions{} } func (m *HttpProtocolOptions) String() string { return proto.CompactTextString(m) } func (*HttpProtocolOptions) ProtoMessage() {} func (*HttpProtocolOptions) Descriptor() ([]byte, []int) { return fileDescriptor_protocol_2e969372c85b867d, []int{1} } func (m *HttpProtocolOptions) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_HttpProtocolOptions.Unmarshal(m, b) } func (m *HttpProtocolOptions) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_HttpProtocolOptions.Marshal(b, m, deterministic) } func (dst *HttpProtocolOptions) XXX_Merge(src proto.Message) { xxx_messageInfo_HttpProtocolOptions.Merge(dst, src) } func (m *HttpProtocolOptions) XXX_Size() int { return xxx_messageInfo_HttpProtocolOptions.Size(m) } func (m *HttpProtocolOptions) XXX_DiscardUnknown() { xxx_messageInfo_HttpProtocolOptions.DiscardUnknown(m) } var xxx_messageInfo_HttpProtocolOptions proto.InternalMessageInfo func (m *HttpProtocolOptions) GetIdleTimeout() *duration.Duration { if m != nil { return m.IdleTimeout } return nil } type Http1ProtocolOptions struct { AllowAbsoluteUrl *wrappers.BoolValue `protobuf:"bytes,1,opt,name=allow_absolute_url,json=allowAbsoluteUrl,proto3" json:"allow_absolute_url,omitempty"` AcceptHttp_10 bool 
`protobuf:"varint,2,opt,name=accept_http_10,json=acceptHttp10,proto3" json:"accept_http_10,omitempty"` DefaultHostForHttp_10 string `protobuf:"bytes,3,opt,name=default_host_for_http_10,json=defaultHostForHttp10,proto3" json:"default_host_for_http_10,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Http1ProtocolOptions) Reset() { *m = Http1ProtocolOptions{} } func (m *Http1ProtocolOptions) String() string { return proto.CompactTextString(m) } func (*Http1ProtocolOptions) ProtoMessage() {} func (*Http1ProtocolOptions) Descriptor() ([]byte, []int) { return fileDescriptor_protocol_2e969372c85b867d, []int{2} } func (m *Http1ProtocolOptions) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Http1ProtocolOptions.Unmarshal(m, b) } func (m *Http1ProtocolOptions) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Http1ProtocolOptions.Marshal(b, m, deterministic) } func (dst *Http1ProtocolOptions) XXX_Merge(src proto.Message) { xxx_messageInfo_Http1ProtocolOptions.Merge(dst, src) } func (m *Http1ProtocolOptions) XXX_Size() int { return xxx_messageInfo_Http1ProtocolOptions.Size(m) } func (m *Http1ProtocolOptions) XXX_DiscardUnknown() { xxx_messageInfo_Http1ProtocolOptions.DiscardUnknown(m) } var xxx_messageInfo_Http1ProtocolOptions proto.InternalMessageInfo func (m *Http1ProtocolOptions) GetAllowAbsoluteUrl() *wrappers.BoolValue { if m != nil { return m.AllowAbsoluteUrl } return nil } func (m *Http1ProtocolOptions) GetAcceptHttp_10() bool { if m != nil { return m.AcceptHttp_10 } return false } func (m *Http1ProtocolOptions) GetDefaultHostForHttp_10() string { if m != nil { return m.DefaultHostForHttp_10 } return "" } type Http2ProtocolOptions struct { HpackTableSize *wrappers.UInt32Value `protobuf:"bytes,1,opt,name=hpack_table_size,json=hpackTableSize,proto3" json:"hpack_table_size,omitempty"` MaxConcurrentStreams *wrappers.UInt32Value `protobuf:"bytes,2,opt,name=max_concurrent_streams,json=maxConcurrentStreams,proto3" json:"max_concurrent_streams,omitempty"` InitialStreamWindowSize *wrappers.UInt32Value `protobuf:"bytes,3,opt,name=initial_stream_window_size,json=initialStreamWindowSize,proto3" json:"initial_stream_window_size,omitempty"` InitialConnectionWindowSize *wrappers.UInt32Value `protobuf:"bytes,4,opt,name=initial_connection_window_size,json=initialConnectionWindowSize,proto3" json:"initial_connection_window_size,omitempty"` AllowConnect bool `protobuf:"varint,5,opt,name=allow_connect,json=allowConnect,proto3" json:"allow_connect,omitempty"` AllowMetadata bool `protobuf:"varint,6,opt,name=allow_metadata,json=allowMetadata,proto3" json:"allow_metadata,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Http2ProtocolOptions) Reset() { *m = Http2ProtocolOptions{} } func (m *Http2ProtocolOptions) String() string { return proto.CompactTextString(m) } func (*Http2ProtocolOptions) ProtoMessage() {} func (*Http2ProtocolOptions) Descriptor() ([]byte, []int) { return fileDescriptor_protocol_2e969372c85b867d, []int{3} } func (m *Http2ProtocolOptions) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Http2ProtocolOptions.Unmarshal(m, b) } func (m *Http2ProtocolOptions) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Http2ProtocolOptions.Marshal(b, m, deterministic) } func (dst *Http2ProtocolOptions) XXX_Merge(src proto.Message) { 
xxx_messageInfo_Http2ProtocolOptions.Merge(dst, src) } func (m *Http2ProtocolOptions) XXX_Size() int { return xxx_messageInfo_Http2ProtocolOptions.Size(m) } func (m *Http2ProtocolOptions) XXX_DiscardUnknown() { xxx_messageInfo_Http2ProtocolOptions.DiscardUnknown(m) } var xxx_messageInfo_Http2ProtocolOptions proto.InternalMessageInfo func (m *Http2ProtocolOptions) GetHpackTableSize() *wrappers.UInt32Value { if m != nil { return m.HpackTableSize } return nil } func (m *Http2ProtocolOptions) GetMaxConcurrentStreams() *wrappers.UInt32Value { if m != nil { return m.MaxConcurrentStreams } return nil } func (m *Http2ProtocolOptions) GetInitialStreamWindowSize() *wrappers.UInt32Value { if m != nil { return m.InitialStreamWindowSize } return nil } func (m *Http2ProtocolOptions) GetInitialConnectionWindowSize() *wrappers.UInt32Value { if m != nil { return m.InitialConnectionWindowSize } return nil } func (m *Http2ProtocolOptions) GetAllowConnect() bool { if m != nil { return m.AllowConnect } return false } func (m *Http2ProtocolOptions) GetAllowMetadata() bool { if m != nil { return m.AllowMetadata } return false } type GrpcProtocolOptions struct { Http2ProtocolOptions *Http2ProtocolOptions `protobuf:"bytes,1,opt,name=http2_protocol_options,json=http2ProtocolOptions,proto3" json:"http2_protocol_options,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *GrpcProtocolOptions) Reset() { *m = GrpcProtocolOptions{} } func (m *GrpcProtocolOptions) String() string { return proto.CompactTextString(m) } func (*GrpcProtocolOptions) ProtoMessage() {} func (*GrpcProtocolOptions) Descriptor() ([]byte, []int) { return fileDescriptor_protocol_2e969372c85b867d, []int{4} } func (m *GrpcProtocolOptions) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_GrpcProtocolOptions.Unmarshal(m, b) } func (m *GrpcProtocolOptions) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_GrpcProtocolOptions.Marshal(b, m, deterministic) } func (dst *GrpcProtocolOptions) XXX_Merge(src proto.Message) { xxx_messageInfo_GrpcProtocolOptions.Merge(dst, src) } func (m *GrpcProtocolOptions) XXX_Size() int { return xxx_messageInfo_GrpcProtocolOptions.Size(m) } func (m *GrpcProtocolOptions) XXX_DiscardUnknown() { xxx_messageInfo_GrpcProtocolOptions.DiscardUnknown(m) } var xxx_messageInfo_GrpcProtocolOptions proto.InternalMessageInfo func (m *GrpcProtocolOptions) GetHttp2ProtocolOptions() *Http2ProtocolOptions { if m != nil { return m.Http2ProtocolOptions } return nil } func init() { proto.RegisterType((*TcpProtocolOptions)(nil), "envoy.api.v2.core.TcpProtocolOptions") proto.RegisterType((*HttpProtocolOptions)(nil), "envoy.api.v2.core.HttpProtocolOptions") proto.RegisterType((*Http1ProtocolOptions)(nil), "envoy.api.v2.core.Http1ProtocolOptions") proto.RegisterType((*Http2ProtocolOptions)(nil), "envoy.api.v2.core.Http2ProtocolOptions") proto.RegisterType((*GrpcProtocolOptions)(nil), "envoy.api.v2.core.GrpcProtocolOptions") } func init() { proto.RegisterFile("envoy/api/v2/core/protocol.proto", fileDescriptor_protocol_2e969372c85b867d) } var fileDescriptor_protocol_2e969372c85b867d = []byte{ // 556 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x9c, 0x93, 0x4f, 0x6f, 0xd3, 0x4c, 0x10, 0xc6, 0xe5, 0x37, 0x2f, 0xa5, 0x6c, 0xff, 0xd0, 0xba, 0x51, 0x6b, 0x0a, 0x2a, 0x51, 0x00, 0x11, 0xf5, 0x60, 0xb7, 0xae, 0xc4, 0x89, 0x0b, 0x29, 0x2a, 0xe5, 0x80, 0xa8, 0xdc, 0x16, 0x4e, 0x68, 0xb5, 0x59, 
0x6f, 0x9a, 0x15, 0x1b, 0xcf, 0x6a, 0x3d, 0x4e, 0xd2, 0x7e, 0x34, 0x0e, 0x88, 0xaf, 0xc3, 0x99, 0x0f, 0x60, 0x64, 0xef, 0x26, 0x82, 0xa4, 0x12, 0x88, 0x53, 0xac, 0x99, 0xe7, 0xf9, 0x3d, 0x33, 0xf1, 0x98, 0xb4, 0x44, 0x36, 0x82, 0xeb, 0x88, 0x69, 0x19, 0x8d, 0xe2, 0x88, 0x83, 0x11, 0x91, 0x36, 0x80, 0xc0, 0x41, 0x85, 0xf5, 0x83, 0xbf, 0x59, 0x2b, 0x42, 0xa6, 0x65, 0x38, 0x8a, 0xc3, 0x4a, 0xb1, 0xbb, 0x77, 0x05, 0x70, 0xa5, 0x9c, 0xb2, 0x57, 0xf4, 0xa3, 0xb4, 0x30, 0x0c, 0x25, 0x64, 0xd6, 0xb2, 0xd8, 0x1f, 0x1b, 0xa6, 0xb5, 0x30, 0xb9, 0xeb, 0xef, 0x8c, 0x98, 0x92, 0x29, 0x43, 0x11, 0x4d, 0x1f, 0x6c, 0xa3, 0xdd, 0x24, 0xfe, 0x05, 0xd7, 0x67, 0x6e, 0x80, 0xf7, 0xba, 0x62, 0xe6, 0xed, 0x73, 0xb2, 0x75, 0x8a, 0x38, 0x5f, 0xf6, 0x5f, 0x92, 0x55, 0x99, 0x2a, 0x41, 0x51, 0x0e, 0x05, 0x14, 0x18, 0x78, 0x2d, 0xaf, 0xb3, 0x12, 0x3f, 0x08, 0x6d, 0x78, 0x38, 0x0d, 0x0f, 0x5f, 0xbb, 0xe1, 0x92, 0x95, 0x4a, 0x7e, 0x61, 0xd5, 0xed, 0xaf, 0x1e, 0x69, 0x56, 0xd4, 0xc3, 0x79, 0xec, 0x29, 0xf1, 0x99, 0x52, 0x30, 0xa6, 0xac, 0x97, 0x83, 0x2a, 0x50, 0xd0, 0xc2, 0x28, 0x07, 0xdf, 0x5d, 0x80, 0x77, 0x01, 0xd4, 0x07, 0xa6, 0x0a, 0x91, 0x6c, 0xd4, 0xae, 0x57, 0xce, 0x74, 0x69, 0x94, 0xff, 0x94, 0xac, 0x33, 0xce, 0x85, 0x46, 0x3a, 0x40, 0xd4, 0xf4, 0xf0, 0x20, 0xf8, 0xaf, 0xe5, 0x75, 0x96, 0x93, 0x55, 0x5b, 0xad, 0xd3, 0x0f, 0xfc, 0x17, 0x24, 0x48, 0x45, 0x9f, 0x15, 0x0a, 0xe9, 0x00, 0x72, 0xa4, 0x7d, 0x30, 0x33, 0x7d, 0xa3, 0xe5, 0x75, 0xee, 0x25, 0x4d, 0xd7, 0x3f, 0x85, 0x1c, 0x4f, 0xc0, 0x58, 0x5f, 0xfb, 0x47, 0xc3, 0x2e, 0x10, 0xcf, 0x2f, 0x70, 0x42, 0x36, 0x06, 0x9a, 0xf1, 0xcf, 0x14, 0x59, 0x4f, 0x09, 0x9a, 0xcb, 0x1b, 0xe1, 0xc6, 0x7f, 0xb4, 0x30, 0xfe, 0xe5, 0xdb, 0x0c, 0x8f, 0x62, 0xbb, 0xc0, 0x7a, 0xed, 0xba, 0xa8, 0x4c, 0xe7, 0xf2, 0x46, 0xf8, 0x9c, 0x6c, 0x0f, 0xd9, 0x84, 0x72, 0xc8, 0x78, 0x61, 0x8c, 0xc8, 0x90, 0xe6, 0x68, 0x04, 0x1b, 0xe6, 0xf5, 0x1a, 0x7f, 0xa0, 0x75, 0xef, 0x7f, 0xf9, 0xfe, 0xad, 0x41, 0xf6, 0x97, 0x83, 0xb2, 0x2c, 0xcb, 0xbb, 0x1d, 0x2f, 0x69, 0x0e, 0xd9, 0xe4, 0x78, 0xc6, 0x3a, 0xb7, 0x28, 0x5f, 0x91, 0x5d, 0x99, 0x49, 0x94, 0x4c, 0x39, 0x3a, 0x1d, 0xcb, 0x2c, 0x85, 0xb1, 0x1d, 0xbb, 0xf1, 0x17, 0x41, 0x9b, 0x55, 0xd0, 0xea, 0x3e, 0x71, 0x41, 0x65, 0xd9, 0x48, 0x76, 0x1c, 0xd2, 0x86, 0x7c, 0xac, 0x81, 0xf5, 0x4a, 0x48, 0xf6, 0xa6, 0x69, 0x1c, 0xb2, 0x4c, 0xf0, 0xea, 0x1f, 0xfb, 0x2d, 0xf1, 0xff, 0x7f, 0x4b, 0x7c, 0xe8, 0xb0, 0xc7, 0x33, 0xea, 0x2f, 0xa9, 0x4f, 0xc8, 0x9a, 0xbd, 0x28, 0x97, 0x19, 0xdc, 0x71, 0x67, 0x50, 0x15, 0x9d, 0xc3, 0x7f, 0x46, 0xd6, 0xad, 0x68, 0x28, 0x90, 0xa5, 0x0c, 0x59, 0xb0, 0x54, 0xab, 0xac, 0xf5, 0x9d, 0x2b, 0xb6, 0x91, 0x6c, 0xbd, 0x31, 0x9a, 0xcf, 0xbf, 0xf3, 0x4f, 0x64, 0xbb, 0xba, 0x99, 0x98, 0x4e, 0x3f, 0x5e, 0x0a, 0xb6, 0xe3, 0xde, 0xfc, 0xf3, 0x70, 0xe1, 0x2b, 0x0e, 0x6f, 0x3b, 0x9e, 0xa4, 0x39, 0xb8, 0xa5, 0xda, 0x8d, 0xc9, 0x63, 0x09, 0x16, 0xa1, 0x0d, 0x4c, 0xae, 0x17, 0x69, 0xdd, 0xb5, 0xa9, 0xa7, 0xfe, 0x3d, 0xf3, 0x7a, 0x4b, 0xf5, 0x28, 0x47, 0x3f, 0x03, 0x00, 0x00, 0xff, 0xff, 0x8f, 0x72, 0x53, 0x8f, 0x62, 0x04, 0x00, 0x00, } grpc-go-1.22.1/balancer/xds/internal/proto/envoy/api/v2/discovery/000077500000000000000000000000001351635773100247415ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/envoy/api/v2/discovery/discovery.pb.go000077500000000000000000000430431351635773100277060ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. 
// source: envoy/api/v2/discovery.proto package v2 import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import any "github.com/golang/protobuf/ptypes/any" import status "google.golang.org/genproto/googleapis/rpc/status" import base "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/core/base" // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type DiscoveryRequest struct { VersionInfo string `protobuf:"bytes,1,opt,name=version_info,json=versionInfo,proto3" json:"version_info,omitempty"` Node *base.Node `protobuf:"bytes,2,opt,name=node,proto3" json:"node,omitempty"` ResourceNames []string `protobuf:"bytes,3,rep,name=resource_names,json=resourceNames,proto3" json:"resource_names,omitempty"` TypeUrl string `protobuf:"bytes,4,opt,name=type_url,json=typeUrl,proto3" json:"type_url,omitempty"` ResponseNonce string `protobuf:"bytes,5,opt,name=response_nonce,json=responseNonce,proto3" json:"response_nonce,omitempty"` ErrorDetail *status.Status `protobuf:"bytes,6,opt,name=error_detail,json=errorDetail,proto3" json:"error_detail,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *DiscoveryRequest) Reset() { *m = DiscoveryRequest{} } func (m *DiscoveryRequest) String() string { return proto.CompactTextString(m) } func (*DiscoveryRequest) ProtoMessage() {} func (*DiscoveryRequest) Descriptor() ([]byte, []int) { return fileDescriptor_discovery_a1ffda4a09a0e500, []int{0} } func (m *DiscoveryRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_DiscoveryRequest.Unmarshal(m, b) } func (m *DiscoveryRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_DiscoveryRequest.Marshal(b, m, deterministic) } func (dst *DiscoveryRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_DiscoveryRequest.Merge(dst, src) } func (m *DiscoveryRequest) XXX_Size() int { return xxx_messageInfo_DiscoveryRequest.Size(m) } func (m *DiscoveryRequest) XXX_DiscardUnknown() { xxx_messageInfo_DiscoveryRequest.DiscardUnknown(m) } var xxx_messageInfo_DiscoveryRequest proto.InternalMessageInfo func (m *DiscoveryRequest) GetVersionInfo() string { if m != nil { return m.VersionInfo } return "" } func (m *DiscoveryRequest) GetNode() *base.Node { if m != nil { return m.Node } return nil } func (m *DiscoveryRequest) GetResourceNames() []string { if m != nil { return m.ResourceNames } return nil } func (m *DiscoveryRequest) GetTypeUrl() string { if m != nil { return m.TypeUrl } return "" } func (m *DiscoveryRequest) GetResponseNonce() string { if m != nil { return m.ResponseNonce } return "" } func (m *DiscoveryRequest) GetErrorDetail() *status.Status { if m != nil { return m.ErrorDetail } return nil } type DiscoveryResponse struct { VersionInfo string `protobuf:"bytes,1,opt,name=version_info,json=versionInfo,proto3" json:"version_info,omitempty"` Resources []*any.Any `protobuf:"bytes,2,rep,name=resources,proto3" json:"resources,omitempty"` Canary bool `protobuf:"varint,3,opt,name=canary,proto3" json:"canary,omitempty"` TypeUrl string 
`protobuf:"bytes,4,opt,name=type_url,json=typeUrl,proto3" json:"type_url,omitempty"` Nonce string `protobuf:"bytes,5,opt,name=nonce,proto3" json:"nonce,omitempty"` ControlPlane *base.ControlPlane `protobuf:"bytes,6,opt,name=control_plane,json=controlPlane,proto3" json:"control_plane,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *DiscoveryResponse) Reset() { *m = DiscoveryResponse{} } func (m *DiscoveryResponse) String() string { return proto.CompactTextString(m) } func (*DiscoveryResponse) ProtoMessage() {} func (*DiscoveryResponse) Descriptor() ([]byte, []int) { return fileDescriptor_discovery_a1ffda4a09a0e500, []int{1} } func (m *DiscoveryResponse) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_DiscoveryResponse.Unmarshal(m, b) } func (m *DiscoveryResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_DiscoveryResponse.Marshal(b, m, deterministic) } func (dst *DiscoveryResponse) XXX_Merge(src proto.Message) { xxx_messageInfo_DiscoveryResponse.Merge(dst, src) } func (m *DiscoveryResponse) XXX_Size() int { return xxx_messageInfo_DiscoveryResponse.Size(m) } func (m *DiscoveryResponse) XXX_DiscardUnknown() { xxx_messageInfo_DiscoveryResponse.DiscardUnknown(m) } var xxx_messageInfo_DiscoveryResponse proto.InternalMessageInfo func (m *DiscoveryResponse) GetVersionInfo() string { if m != nil { return m.VersionInfo } return "" } func (m *DiscoveryResponse) GetResources() []*any.Any { if m != nil { return m.Resources } return nil } func (m *DiscoveryResponse) GetCanary() bool { if m != nil { return m.Canary } return false } func (m *DiscoveryResponse) GetTypeUrl() string { if m != nil { return m.TypeUrl } return "" } func (m *DiscoveryResponse) GetNonce() string { if m != nil { return m.Nonce } return "" } func (m *DiscoveryResponse) GetControlPlane() *base.ControlPlane { if m != nil { return m.ControlPlane } return nil } type DeltaDiscoveryRequest struct { Node *base.Node `protobuf:"bytes,1,opt,name=node,proto3" json:"node,omitempty"` TypeUrl string `protobuf:"bytes,2,opt,name=type_url,json=typeUrl,proto3" json:"type_url,omitempty"` ResourceNamesSubscribe []string `protobuf:"bytes,3,rep,name=resource_names_subscribe,json=resourceNamesSubscribe,proto3" json:"resource_names_subscribe,omitempty"` ResourceNamesUnsubscribe []string `protobuf:"bytes,4,rep,name=resource_names_unsubscribe,json=resourceNamesUnsubscribe,proto3" json:"resource_names_unsubscribe,omitempty"` InitialResourceVersions map[string]string `protobuf:"bytes,5,rep,name=initial_resource_versions,json=initialResourceVersions,proto3" json:"initial_resource_versions,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` ResponseNonce string `protobuf:"bytes,6,opt,name=response_nonce,json=responseNonce,proto3" json:"response_nonce,omitempty"` ErrorDetail *status.Status `protobuf:"bytes,7,opt,name=error_detail,json=errorDetail,proto3" json:"error_detail,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *DeltaDiscoveryRequest) Reset() { *m = DeltaDiscoveryRequest{} } func (m *DeltaDiscoveryRequest) String() string { return proto.CompactTextString(m) } func (*DeltaDiscoveryRequest) ProtoMessage() {} func (*DeltaDiscoveryRequest) Descriptor() ([]byte, []int) { return fileDescriptor_discovery_a1ffda4a09a0e500, []int{2} } func (m *DeltaDiscoveryRequest) XXX_Unmarshal(b []byte) error { return 
xxx_messageInfo_DeltaDiscoveryRequest.Unmarshal(m, b) } func (m *DeltaDiscoveryRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_DeltaDiscoveryRequest.Marshal(b, m, deterministic) } func (dst *DeltaDiscoveryRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_DeltaDiscoveryRequest.Merge(dst, src) } func (m *DeltaDiscoveryRequest) XXX_Size() int { return xxx_messageInfo_DeltaDiscoveryRequest.Size(m) } func (m *DeltaDiscoveryRequest) XXX_DiscardUnknown() { xxx_messageInfo_DeltaDiscoveryRequest.DiscardUnknown(m) } var xxx_messageInfo_DeltaDiscoveryRequest proto.InternalMessageInfo func (m *DeltaDiscoveryRequest) GetNode() *base.Node { if m != nil { return m.Node } return nil } func (m *DeltaDiscoveryRequest) GetTypeUrl() string { if m != nil { return m.TypeUrl } return "" } func (m *DeltaDiscoveryRequest) GetResourceNamesSubscribe() []string { if m != nil { return m.ResourceNamesSubscribe } return nil } func (m *DeltaDiscoveryRequest) GetResourceNamesUnsubscribe() []string { if m != nil { return m.ResourceNamesUnsubscribe } return nil } func (m *DeltaDiscoveryRequest) GetInitialResourceVersions() map[string]string { if m != nil { return m.InitialResourceVersions } return nil } func (m *DeltaDiscoveryRequest) GetResponseNonce() string { if m != nil { return m.ResponseNonce } return "" } func (m *DeltaDiscoveryRequest) GetErrorDetail() *status.Status { if m != nil { return m.ErrorDetail } return nil } type DeltaDiscoveryResponse struct { SystemVersionInfo string `protobuf:"bytes,1,opt,name=system_version_info,json=systemVersionInfo,proto3" json:"system_version_info,omitempty"` Resources []*Resource `protobuf:"bytes,2,rep,name=resources,proto3" json:"resources,omitempty"` RemovedResources []string `protobuf:"bytes,6,rep,name=removed_resources,json=removedResources,proto3" json:"removed_resources,omitempty"` Nonce string `protobuf:"bytes,5,opt,name=nonce,proto3" json:"nonce,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *DeltaDiscoveryResponse) Reset() { *m = DeltaDiscoveryResponse{} } func (m *DeltaDiscoveryResponse) String() string { return proto.CompactTextString(m) } func (*DeltaDiscoveryResponse) ProtoMessage() {} func (*DeltaDiscoveryResponse) Descriptor() ([]byte, []int) { return fileDescriptor_discovery_a1ffda4a09a0e500, []int{3} } func (m *DeltaDiscoveryResponse) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_DeltaDiscoveryResponse.Unmarshal(m, b) } func (m *DeltaDiscoveryResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_DeltaDiscoveryResponse.Marshal(b, m, deterministic) } func (dst *DeltaDiscoveryResponse) XXX_Merge(src proto.Message) { xxx_messageInfo_DeltaDiscoveryResponse.Merge(dst, src) } func (m *DeltaDiscoveryResponse) XXX_Size() int { return xxx_messageInfo_DeltaDiscoveryResponse.Size(m) } func (m *DeltaDiscoveryResponse) XXX_DiscardUnknown() { xxx_messageInfo_DeltaDiscoveryResponse.DiscardUnknown(m) } var xxx_messageInfo_DeltaDiscoveryResponse proto.InternalMessageInfo func (m *DeltaDiscoveryResponse) GetSystemVersionInfo() string { if m != nil { return m.SystemVersionInfo } return "" } func (m *DeltaDiscoveryResponse) GetResources() []*Resource { if m != nil { return m.Resources } return nil } func (m *DeltaDiscoveryResponse) GetRemovedResources() []string { if m != nil { return m.RemovedResources } return nil } func (m *DeltaDiscoveryResponse) GetNonce() string { if m != nil { return m.Nonce } return 
"" } type Resource struct { Name string `protobuf:"bytes,3,opt,name=name,proto3" json:"name,omitempty"` Aliases []string `protobuf:"bytes,4,rep,name=aliases,proto3" json:"aliases,omitempty"` Version string `protobuf:"bytes,1,opt,name=version,proto3" json:"version,omitempty"` Resource *any.Any `protobuf:"bytes,2,opt,name=resource,proto3" json:"resource,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Resource) Reset() { *m = Resource{} } func (m *Resource) String() string { return proto.CompactTextString(m) } func (*Resource) ProtoMessage() {} func (*Resource) Descriptor() ([]byte, []int) { return fileDescriptor_discovery_a1ffda4a09a0e500, []int{4} } func (m *Resource) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Resource.Unmarshal(m, b) } func (m *Resource) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Resource.Marshal(b, m, deterministic) } func (dst *Resource) XXX_Merge(src proto.Message) { xxx_messageInfo_Resource.Merge(dst, src) } func (m *Resource) XXX_Size() int { return xxx_messageInfo_Resource.Size(m) } func (m *Resource) XXX_DiscardUnknown() { xxx_messageInfo_Resource.DiscardUnknown(m) } var xxx_messageInfo_Resource proto.InternalMessageInfo func (m *Resource) GetName() string { if m != nil { return m.Name } return "" } func (m *Resource) GetAliases() []string { if m != nil { return m.Aliases } return nil } func (m *Resource) GetVersion() string { if m != nil { return m.Version } return "" } func (m *Resource) GetResource() *any.Any { if m != nil { return m.Resource } return nil } func init() { proto.RegisterType((*DiscoveryRequest)(nil), "envoy.api.v2.DiscoveryRequest") proto.RegisterType((*DiscoveryResponse)(nil), "envoy.api.v2.DiscoveryResponse") proto.RegisterType((*DeltaDiscoveryRequest)(nil), "envoy.api.v2.DeltaDiscoveryRequest") proto.RegisterMapType((map[string]string)(nil), "envoy.api.v2.DeltaDiscoveryRequest.InitialResourceVersionsEntry") proto.RegisterType((*DeltaDiscoveryResponse)(nil), "envoy.api.v2.DeltaDiscoveryResponse") proto.RegisterType((*Resource)(nil), "envoy.api.v2.Resource") } func init() { proto.RegisterFile("envoy/api/v2/discovery.proto", fileDescriptor_discovery_a1ffda4a09a0e500) } var fileDescriptor_discovery_a1ffda4a09a0e500 = []byte{ // 656 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x8c, 0x54, 0x41, 0x6b, 0xdb, 0x4c, 0x10, 0x45, 0xb6, 0xe3, 0xd8, 0x63, 0x27, 0x24, 0xfb, 0xe5, 0x73, 0x14, 0x13, 0xa8, 0x6b, 0x28, 0x18, 0x02, 0x52, 0x51, 0x5b, 0x08, 0xa5, 0x87, 0x36, 0x75, 0x0f, 0xe9, 0x21, 0x04, 0x85, 0xe4, 0xd0, 0x8b, 0x58, 0xcb, 0x93, 0x20, 0xaa, 0xec, 0xaa, 0xbb, 0x92, 0xa8, 0xa0, 0xa7, 0xd2, 0x3f, 0xd6, 0x9f, 0xd5, 0x53, 0x8b, 0x56, 0x2b, 0x5b, 0x4a, 0x44, 0xf0, 0x4d, 0x33, 0xf3, 0x66, 0x76, 0xde, 0xcc, 0x1b, 0xc1, 0x31, 0xb2, 0x94, 0x67, 0x36, 0x8d, 0x02, 0x3b, 0x75, 0xec, 0x65, 0x20, 0x7d, 0x9e, 0xa2, 0xc8, 0xac, 0x48, 0xf0, 0x98, 0x93, 0xa1, 0x8a, 0x5a, 0x34, 0x0a, 0xac, 0xd4, 0x19, 0xd7, 0xb1, 0x3e, 0x17, 0x68, 0x2f, 0xa8, 0xc4, 0x02, 0x3b, 0x3e, 0xba, 0xe3, 0xfc, 0x2e, 0x44, 0x5b, 0x59, 0x8b, 0xe4, 0xd6, 0xa6, 0x4c, 0x97, 0x19, 0x1f, 0xea, 0x90, 0x88, 0x7c, 0x5b, 0xc6, 0x34, 0x4e, 0x64, 0x11, 0x98, 0xfe, 0x6c, 0xc1, 0xde, 0xbc, 0x7c, 0xd3, 0xc5, 0x6f, 0x09, 0xca, 0x98, 0x3c, 0x87, 0x61, 0x8a, 0x42, 0x06, 0x9c, 0x79, 0x01, 0xbb, 0xe5, 0xa6, 0x31, 0x31, 0x66, 0x7d, 0x77, 0xa0, 0x7d, 0xe7, 0xec, 0x96, 0x93, 0x13, 0xe8, 0x30, 0xbe, 0x44, 0xb3, 0x35, 0x31, 0x66, 0x03, 
0xe7, 0xd0, 0xaa, 0xb6, 0x69, 0xe5, 0x8d, 0x59, 0x17, 0x7c, 0x89, 0xae, 0x02, 0x91, 0x17, 0xb0, 0x2b, 0x50, 0xf2, 0x44, 0xf8, 0xe8, 0x31, 0x7a, 0x8f, 0xd2, 0x6c, 0x4f, 0xda, 0xb3, 0xbe, 0xbb, 0x53, 0x7a, 0x2f, 0x72, 0x27, 0x39, 0x82, 0x5e, 0x9c, 0x45, 0xe8, 0x25, 0x22, 0x34, 0x3b, 0xea, 0xc9, 0xed, 0xdc, 0xbe, 0x16, 0xa1, 0xae, 0x10, 0x71, 0x26, 0xd1, 0x63, 0x9c, 0xf9, 0x68, 0x6e, 0x29, 0xc0, 0x4e, 0xe9, 0xbd, 0xc8, 0x9d, 0xe4, 0x0d, 0x0c, 0x51, 0x08, 0x2e, 0xbc, 0x25, 0xc6, 0x34, 0x08, 0xcd, 0xae, 0xea, 0x8e, 0x58, 0x05, 0x7b, 0x4b, 0x44, 0xbe, 0x75, 0xa5, 0xd8, 0xbb, 0x03, 0x85, 0x9b, 0x2b, 0xd8, 0xf4, 0x8f, 0x01, 0xfb, 0x95, 0x21, 0x14, 0x15, 0x37, 0x99, 0x82, 0x03, 0xfd, 0x92, 0x82, 0x34, 0x5b, 0x93, 0xf6, 0x6c, 0xe0, 0x1c, 0x94, 0x8f, 0x95, 0x5b, 0xb0, 0x3e, 0xb0, 0xcc, 0x5d, 0xc3, 0xc8, 0x08, 0xba, 0x3e, 0x65, 0x54, 0x64, 0x66, 0x7b, 0x62, 0xcc, 0x7a, 0xae, 0xb6, 0x9e, 0x62, 0x7f, 0x00, 0x5b, 0x55, 0xd2, 0x85, 0x41, 0xe6, 0xb0, 0xe3, 0x73, 0x16, 0x0b, 0x1e, 0x7a, 0x51, 0x48, 0x19, 0x6a, 0xb6, 0xcf, 0x1a, 0x76, 0xf1, 0xb1, 0xc0, 0x5d, 0xe6, 0x30, 0x77, 0xe8, 0x57, 0xac, 0xe9, 0xdf, 0x36, 0xfc, 0x3f, 0xc7, 0x30, 0xa6, 0x8f, 0x54, 0x50, 0xae, 0xd8, 0xd8, 0x64, 0xc5, 0xd5, 0xee, 0x5b, 0xf5, 0xee, 0x4f, 0xc1, 0xac, 0x6f, 0xdf, 0x93, 0xc9, 0x42, 0xfa, 0x22, 0x58, 0xa0, 0xd6, 0xc1, 0xa8, 0xa6, 0x83, 0xab, 0x32, 0x4a, 0xde, 0xc1, 0xf8, 0x41, 0x66, 0xc2, 0xd6, 0xb9, 0x1d, 0x95, 0x6b, 0xd6, 0x72, 0xaf, 0xd7, 0x71, 0xf2, 0x03, 0x8e, 0x02, 0x16, 0xc4, 0x01, 0x0d, 0xbd, 0x55, 0x15, 0xbd, 0x3c, 0x69, 0x6e, 0xa9, 0x65, 0xbd, 0xaf, 0x93, 0x6a, 0x9c, 0x83, 0x75, 0x5e, 0x14, 0x71, 0x75, 0x8d, 0x1b, 0x5d, 0xe2, 0x13, 0x8b, 0x45, 0xe6, 0x1e, 0x06, 0xcd, 0xd1, 0x06, 0xc5, 0x76, 0x37, 0x51, 0xec, 0xf6, 0x46, 0x8a, 0x1d, 0x7f, 0x86, 0xe3, 0xa7, 0xda, 0x22, 0x7b, 0xd0, 0xfe, 0x8a, 0x99, 0x96, 0x6c, 0xfe, 0x99, 0x6b, 0x28, 0xa5, 0x61, 0x82, 0x7a, 0x3b, 0x85, 0xf1, 0xb6, 0x75, 0x6a, 0x4c, 0x7f, 0x1b, 0x30, 0x7a, 0xc8, 0x5c, 0x9f, 0x80, 0x05, 0xff, 0xc9, 0x4c, 0xc6, 0x78, 0xef, 0x35, 0x5c, 0xc2, 0x7e, 0x11, 0xba, 0xa9, 0xdc, 0xc3, 0xeb, 0xc7, 0xf7, 0x30, 0xaa, 0x8f, 0xb8, 0x6c, 0xb7, 0x7a, 0x11, 0x27, 0xb0, 0x2f, 0xf0, 0x9e, 0xa7, 0xb8, 0xf4, 0xd6, 0xd9, 0x5d, 0xb5, 0xdd, 0x3d, 0x1d, 0x70, 0x57, 0xe0, 0xc6, 0x5b, 0x98, 0xfe, 0x32, 0xa0, 0x57, 0x62, 0x08, 0x81, 0x4e, 0xae, 0x16, 0x75, 0x5f, 0x7d, 0x57, 0x7d, 0x13, 0x13, 0xb6, 0x69, 0x18, 0x50, 0x89, 0x52, 0xeb, 0xa6, 0x34, 0xf3, 0x88, 0x26, 0xa7, 0x79, 0x95, 0x26, 0x79, 0x09, 0xbd, 0xb2, 0x1f, 0xfd, 0x9f, 0x6b, 0x3e, 0xee, 0x15, 0xea, 0xcc, 0x81, 0x71, 0xc0, 0x0b, 0xc2, 0x91, 0xe0, 0xdf, 0xb3, 0x1a, 0xf7, 0xb3, 0xdd, 0xd5, 0x80, 0x2f, 0xf3, 0xf4, 0x4b, 0xe3, 0x4b, 0x2b, 0x75, 0x16, 0x5d, 0x55, 0xeb, 0xd5, 0xbf, 0x00, 0x00, 0x00, 0xff, 0xff, 0x98, 0x91, 0xb2, 0x9f, 0x08, 0x06, 0x00, 0x00, } grpc-go-1.22.1/balancer/xds/internal/proto/envoy/api/v2/eds/000077500000000000000000000000001351635773100235055ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/envoy/api/v2/eds/eds.pb.go000077500000000000000000000432731351635773100252230ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. 
// source: envoy/api/v2/eds.proto package envoy_api_v2 import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import duration "github.com/golang/protobuf/ptypes/duration" import wrappers "github.com/golang/protobuf/ptypes/wrappers" import _ "google.golang.org/genproto/googleapis/api/annotations" import discovery "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/discovery" import endpoint "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/endpoint/endpoint" import percent "google.golang.org/grpc/balancer/xds/internal/proto/envoy/type/percent" import _ "google.golang.org/grpc/balancer/xds/internal/proto/validate" import ( context "golang.org/x/net/context" grpc "google.golang.org/grpc" ) // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type ClusterLoadAssignment struct { ClusterName string `protobuf:"bytes,1,opt,name=cluster_name,json=clusterName,proto3" json:"cluster_name,omitempty"` Endpoints []*endpoint.LocalityLbEndpoints `protobuf:"bytes,2,rep,name=endpoints,proto3" json:"endpoints,omitempty"` NamedEndpoints map[string]*endpoint.Endpoint `protobuf:"bytes,5,rep,name=named_endpoints,json=namedEndpoints,proto3" json:"named_endpoints,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` Policy *ClusterLoadAssignment_Policy `protobuf:"bytes,4,opt,name=policy,proto3" json:"policy,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ClusterLoadAssignment) Reset() { *m = ClusterLoadAssignment{} } func (m *ClusterLoadAssignment) String() string { return proto.CompactTextString(m) } func (*ClusterLoadAssignment) ProtoMessage() {} func (*ClusterLoadAssignment) Descriptor() ([]byte, []int) { return fileDescriptor_eds_1a80fb8e78974562, []int{0} } func (m *ClusterLoadAssignment) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ClusterLoadAssignment.Unmarshal(m, b) } func (m *ClusterLoadAssignment) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ClusterLoadAssignment.Marshal(b, m, deterministic) } func (dst *ClusterLoadAssignment) XXX_Merge(src proto.Message) { xxx_messageInfo_ClusterLoadAssignment.Merge(dst, src) } func (m *ClusterLoadAssignment) XXX_Size() int { return xxx_messageInfo_ClusterLoadAssignment.Size(m) } func (m *ClusterLoadAssignment) XXX_DiscardUnknown() { xxx_messageInfo_ClusterLoadAssignment.DiscardUnknown(m) } var xxx_messageInfo_ClusterLoadAssignment proto.InternalMessageInfo func (m *ClusterLoadAssignment) GetClusterName() string { if m != nil { return m.ClusterName } return "" } func (m *ClusterLoadAssignment) GetEndpoints() []*endpoint.LocalityLbEndpoints { if m != nil { return m.Endpoints } return nil } func (m *ClusterLoadAssignment) GetNamedEndpoints() map[string]*endpoint.Endpoint { if m != nil { return m.NamedEndpoints } return nil } func (m *ClusterLoadAssignment) GetPolicy() *ClusterLoadAssignment_Policy { if m != nil { return m.Policy } return nil } type ClusterLoadAssignment_Policy struct { DropOverloads []*ClusterLoadAssignment_Policy_DropOverload 
`protobuf:"bytes,2,rep,name=drop_overloads,json=dropOverloads,proto3" json:"drop_overloads,omitempty"` OverprovisioningFactor *wrappers.UInt32Value `protobuf:"bytes,3,opt,name=overprovisioning_factor,json=overprovisioningFactor,proto3" json:"overprovisioning_factor,omitempty"` EndpointStaleAfter *duration.Duration `protobuf:"bytes,4,opt,name=endpoint_stale_after,json=endpointStaleAfter,proto3" json:"endpoint_stale_after,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ClusterLoadAssignment_Policy) Reset() { *m = ClusterLoadAssignment_Policy{} } func (m *ClusterLoadAssignment_Policy) String() string { return proto.CompactTextString(m) } func (*ClusterLoadAssignment_Policy) ProtoMessage() {} func (*ClusterLoadAssignment_Policy) Descriptor() ([]byte, []int) { return fileDescriptor_eds_1a80fb8e78974562, []int{0, 1} } func (m *ClusterLoadAssignment_Policy) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ClusterLoadAssignment_Policy.Unmarshal(m, b) } func (m *ClusterLoadAssignment_Policy) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ClusterLoadAssignment_Policy.Marshal(b, m, deterministic) } func (dst *ClusterLoadAssignment_Policy) XXX_Merge(src proto.Message) { xxx_messageInfo_ClusterLoadAssignment_Policy.Merge(dst, src) } func (m *ClusterLoadAssignment_Policy) XXX_Size() int { return xxx_messageInfo_ClusterLoadAssignment_Policy.Size(m) } func (m *ClusterLoadAssignment_Policy) XXX_DiscardUnknown() { xxx_messageInfo_ClusterLoadAssignment_Policy.DiscardUnknown(m) } var xxx_messageInfo_ClusterLoadAssignment_Policy proto.InternalMessageInfo func (m *ClusterLoadAssignment_Policy) GetDropOverloads() []*ClusterLoadAssignment_Policy_DropOverload { if m != nil { return m.DropOverloads } return nil } func (m *ClusterLoadAssignment_Policy) GetOverprovisioningFactor() *wrappers.UInt32Value { if m != nil { return m.OverprovisioningFactor } return nil } func (m *ClusterLoadAssignment_Policy) GetEndpointStaleAfter() *duration.Duration { if m != nil { return m.EndpointStaleAfter } return nil } type ClusterLoadAssignment_Policy_DropOverload struct { Category string `protobuf:"bytes,1,opt,name=category,proto3" json:"category,omitempty"` DropPercentage *percent.FractionalPercent `protobuf:"bytes,2,opt,name=drop_percentage,json=dropPercentage,proto3" json:"drop_percentage,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ClusterLoadAssignment_Policy_DropOverload) Reset() { *m = ClusterLoadAssignment_Policy_DropOverload{} } func (m *ClusterLoadAssignment_Policy_DropOverload) String() string { return proto.CompactTextString(m) } func (*ClusterLoadAssignment_Policy_DropOverload) ProtoMessage() {} func (*ClusterLoadAssignment_Policy_DropOverload) Descriptor() ([]byte, []int) { return fileDescriptor_eds_1a80fb8e78974562, []int{0, 1, 0} } func (m *ClusterLoadAssignment_Policy_DropOverload) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ClusterLoadAssignment_Policy_DropOverload.Unmarshal(m, b) } func (m *ClusterLoadAssignment_Policy_DropOverload) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ClusterLoadAssignment_Policy_DropOverload.Marshal(b, m, deterministic) } func (dst *ClusterLoadAssignment_Policy_DropOverload) XXX_Merge(src proto.Message) { xxx_messageInfo_ClusterLoadAssignment_Policy_DropOverload.Merge(dst, src) } func (m *ClusterLoadAssignment_Policy_DropOverload) 
XXX_Size() int { return xxx_messageInfo_ClusterLoadAssignment_Policy_DropOverload.Size(m) } func (m *ClusterLoadAssignment_Policy_DropOverload) XXX_DiscardUnknown() { xxx_messageInfo_ClusterLoadAssignment_Policy_DropOverload.DiscardUnknown(m) } var xxx_messageInfo_ClusterLoadAssignment_Policy_DropOverload proto.InternalMessageInfo func (m *ClusterLoadAssignment_Policy_DropOverload) GetCategory() string { if m != nil { return m.Category } return "" } func (m *ClusterLoadAssignment_Policy_DropOverload) GetDropPercentage() *percent.FractionalPercent { if m != nil { return m.DropPercentage } return nil } func init() { proto.RegisterType((*ClusterLoadAssignment)(nil), "envoy.api.v2.ClusterLoadAssignment") proto.RegisterMapType((map[string]*endpoint.Endpoint)(nil), "envoy.api.v2.ClusterLoadAssignment.NamedEndpointsEntry") proto.RegisterType((*ClusterLoadAssignment_Policy)(nil), "envoy.api.v2.ClusterLoadAssignment.Policy") proto.RegisterType((*ClusterLoadAssignment_Policy_DropOverload)(nil), "envoy.api.v2.ClusterLoadAssignment.Policy.DropOverload") } // Reference imports to suppress errors if they are not otherwise used. var _ context.Context var _ grpc.ClientConn // This is a compile-time assertion to ensure that this generated file // is compatible with the grpc package it is being compiled against. const _ = grpc.SupportPackageIsVersion4 // EndpointDiscoveryServiceClient is the client API for EndpointDiscoveryService service. // // For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream. type EndpointDiscoveryServiceClient interface { StreamEndpoints(ctx context.Context, opts ...grpc.CallOption) (EndpointDiscoveryService_StreamEndpointsClient, error) FetchEndpoints(ctx context.Context, in *discovery.DiscoveryRequest, opts ...grpc.CallOption) (*discovery.DiscoveryResponse, error) } type endpointDiscoveryServiceClient struct { cc *grpc.ClientConn } func NewEndpointDiscoveryServiceClient(cc *grpc.ClientConn) EndpointDiscoveryServiceClient { return &endpointDiscoveryServiceClient{cc} } func (c *endpointDiscoveryServiceClient) StreamEndpoints(ctx context.Context, opts ...grpc.CallOption) (EndpointDiscoveryService_StreamEndpointsClient, error) { stream, err := c.cc.NewStream(ctx, &_EndpointDiscoveryService_serviceDesc.Streams[0], "/envoy.api.v2.EndpointDiscoveryService/StreamEndpoints", opts...) if err != nil { return nil, err } x := &endpointDiscoveryServiceStreamEndpointsClient{stream} return x, nil } type EndpointDiscoveryService_StreamEndpointsClient interface { Send(*discovery.DiscoveryRequest) error Recv() (*discovery.DiscoveryResponse, error) grpc.ClientStream } type endpointDiscoveryServiceStreamEndpointsClient struct { grpc.ClientStream } func (x *endpointDiscoveryServiceStreamEndpointsClient) Send(m *discovery.DiscoveryRequest) error { return x.ClientStream.SendMsg(m) } func (x *endpointDiscoveryServiceStreamEndpointsClient) Recv() (*discovery.DiscoveryResponse, error) { m := new(discovery.DiscoveryResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func (c *endpointDiscoveryServiceClient) FetchEndpoints(ctx context.Context, in *discovery.DiscoveryRequest, opts ...grpc.CallOption) (*discovery.DiscoveryResponse, error) { out := new(discovery.DiscoveryResponse) err := c.cc.Invoke(ctx, "/envoy.api.v2.EndpointDiscoveryService/FetchEndpoints", in, out, opts...) 
if err != nil { return nil, err } return out, nil } // EndpointDiscoveryServiceServer is the server API for EndpointDiscoveryService service. type EndpointDiscoveryServiceServer interface { StreamEndpoints(EndpointDiscoveryService_StreamEndpointsServer) error FetchEndpoints(context.Context, *discovery.DiscoveryRequest) (*discovery.DiscoveryResponse, error) } func RegisterEndpointDiscoveryServiceServer(s *grpc.Server, srv EndpointDiscoveryServiceServer) { s.RegisterService(&_EndpointDiscoveryService_serviceDesc, srv) } func _EndpointDiscoveryService_StreamEndpoints_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(EndpointDiscoveryServiceServer).StreamEndpoints(&endpointDiscoveryServiceStreamEndpointsServer{stream}) } type EndpointDiscoveryService_StreamEndpointsServer interface { Send(*discovery.DiscoveryResponse) error Recv() (*discovery.DiscoveryRequest, error) grpc.ServerStream } type endpointDiscoveryServiceStreamEndpointsServer struct { grpc.ServerStream } func (x *endpointDiscoveryServiceStreamEndpointsServer) Send(m *discovery.DiscoveryResponse) error { return x.ServerStream.SendMsg(m) } func (x *endpointDiscoveryServiceStreamEndpointsServer) Recv() (*discovery.DiscoveryRequest, error) { m := new(discovery.DiscoveryRequest) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func _EndpointDiscoveryService_FetchEndpoints_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(discovery.DiscoveryRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(EndpointDiscoveryServiceServer).FetchEndpoints(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/envoy.api.v2.EndpointDiscoveryService/FetchEndpoints", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(EndpointDiscoveryServiceServer).FetchEndpoints(ctx, req.(*discovery.DiscoveryRequest)) } return interceptor(ctx, in, info, handler) } var _EndpointDiscoveryService_serviceDesc = grpc.ServiceDesc{ ServiceName: "envoy.api.v2.EndpointDiscoveryService", HandlerType: (*EndpointDiscoveryServiceServer)(nil), Methods: []grpc.MethodDesc{ { MethodName: "FetchEndpoints", Handler: _EndpointDiscoveryService_FetchEndpoints_Handler, }, }, Streams: []grpc.StreamDesc{ { StreamName: "StreamEndpoints", Handler: _EndpointDiscoveryService_StreamEndpoints_Handler, ServerStreams: true, ClientStreams: true, }, }, Metadata: "envoy/api/v2/eds.proto", } func init() { proto.RegisterFile("envoy/api/v2/eds.proto", fileDescriptor_eds_1a80fb8e78974562) } var fileDescriptor_eds_1a80fb8e78974562 = []byte{ // 665 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xac, 0x94, 0x4f, 0x6e, 0x13, 0x31, 0x14, 0xc6, 0xeb, 0x49, 0x1a, 0x52, 0x37, 0xb4, 0x95, 0x81, 0x76, 0x18, 0x85, 0x36, 0x8a, 0x40, 0xaa, 0x02, 0x9a, 0xa0, 0x54, 0xa8, 0xa8, 0xbb, 0x86, 0x36, 0x02, 0x54, 0x41, 0x34, 0x15, 0x08, 0x36, 0x04, 0x67, 0xc6, 0x0d, 0x16, 0x13, 0xdb, 0xd8, 0xce, 0xc0, 0x2c, 0xd8, 0xb0, 0x62, 0xcf, 0x2d, 0x7a, 0x04, 0x56, 0xac, 0xb8, 0x00, 0x57, 0x60, 0x83, 0xb8, 0x04, 0xf2, 0xfc, 0x6b, 0x87, 0xb6, 0x12, 0x0b, 0x76, 0x9e, 0x79, 0xef, 0xfd, 0xfc, 0xf9, 0x7b, 0xcf, 0x86, 0xab, 0x84, 0x45, 0x3c, 0xee, 0x62, 0x41, 0xbb, 0x51, 0xaf, 0x4b, 0x02, 0xe5, 0x0a, 0xc9, 0x35, 0x47, 0x8d, 0xe4, 0xbf, 0x8b, 0x05, 0x75, 0xa3, 0x9e, 0xd3, 0x2c, 0x65, 0x05, 0x54, 0xf9, 0x3c, 0x22, 0x32, 0x4e, 0x73, 0x9d, 0x9b, 0x65, 0x06, 
0x0b, 0x04, 0xa7, 0x4c, 0x17, 0x8b, 0x2c, 0xcb, 0x4e, 0xb3, 0x74, 0x2c, 0x48, 0x57, 0x10, 0xe9, 0x93, 0x22, 0xd2, 0x9c, 0x70, 0x3e, 0x09, 0x49, 0x02, 0xc0, 0x8c, 0x71, 0x8d, 0x35, 0xe5, 0x2c, 0x53, 0xe2, 0xac, 0x45, 0x38, 0xa4, 0x01, 0xd6, 0xa4, 0x9b, 0x2f, 0xb2, 0xc0, 0x7a, 0x56, 0x96, 0x7c, 0x8d, 0x67, 0x47, 0xdd, 0xf7, 0x12, 0x0b, 0x41, 0xa4, 0xba, 0x28, 0x1e, 0xcc, 0x64, 0x42, 0x4e, 0xe3, 0xed, 0xef, 0x35, 0x78, 0xed, 0x41, 0x38, 0x53, 0x9a, 0xc8, 0x03, 0x8e, 0x83, 0x5d, 0xa5, 0xe8, 0x84, 0x4d, 0x09, 0xd3, 0xe8, 0x0e, 0x6c, 0xf8, 0x69, 0x60, 0xc4, 0xf0, 0x94, 0xd8, 0xa0, 0x05, 0x36, 0x17, 0xfa, 0x0b, 0x5f, 0x7f, 0x7d, 0xab, 0x54, 0xa5, 0xd5, 0x02, 0xde, 0x62, 0x16, 0x7e, 0x82, 0xa7, 0x04, 0x3d, 0x84, 0x0b, 0xf9, 0x51, 0x95, 0x6d, 0xb5, 0x2a, 0x9b, 0x8b, 0xbd, 0x8e, 0x7b, 0xda, 0x3e, 0xb7, 0x70, 0xe2, 0x80, 0xfb, 0x38, 0xa4, 0x3a, 0x3e, 0x18, 0xef, 0xe7, 0x15, 0xde, 0x49, 0x31, 0x7a, 0x0d, 0x97, 0xcd, 0x7e, 0xc1, 0xe8, 0x84, 0x37, 0x9f, 0xf0, 0xb6, 0xcb, 0xbc, 0x73, 0x55, 0xbb, 0x46, 0x4c, 0x50, 0x70, 0xf7, 0x99, 0x96, 0xb1, 0xb7, 0xc4, 0x4a, 0x3f, 0x51, 0x1f, 0xd6, 0x04, 0x0f, 0xa9, 0x1f, 0xdb, 0xd5, 0x16, 0x38, 0x2b, 0xf4, 0x7c, 0xf0, 0x30, 0xa9, 0xf0, 0xb2, 0x4a, 0x67, 0x0c, 0xaf, 0x9c, 0xb3, 0x15, 0x5a, 0x81, 0x95, 0xb7, 0x24, 0x4e, 0xbd, 0xf2, 0xcc, 0x12, 0xdd, 0x83, 0xf3, 0x11, 0x0e, 0x67, 0xc4, 0xb6, 0x92, 0xbd, 0x36, 0x2e, 0x30, 0x25, 0xe7, 0x78, 0x69, 0xf6, 0x8e, 0x75, 0x1f, 0x38, 0xc7, 0x15, 0x58, 0x4b, 0xb7, 0x45, 0xaf, 0xe0, 0x52, 0x20, 0xb9, 0x18, 0x99, 0x89, 0x0b, 0x39, 0x0e, 0x72, 0x8f, 0xb7, 0xff, 0x5d, 0xba, 0xbb, 0x27, 0xb9, 0x78, 0x9a, 0xd5, 0x7b, 0x97, 0x83, 0x53, 0x5f, 0xc6, 0xf4, 0x35, 0x83, 0x16, 0x92, 0x47, 0x54, 0x51, 0xce, 0x28, 0x9b, 0x8c, 0x8e, 0xb0, 0xaf, 0xb9, 0xb4, 0x2b, 0x89, 0xee, 0xa6, 0x9b, 0x0e, 0x92, 0x9b, 0x0f, 0x92, 0xfb, 0xec, 0x11, 0xd3, 0x5b, 0xbd, 0xe7, 0x46, 0x6d, 0x36, 0x15, 0x1d, 0xab, 0x35, 0xe7, 0xad, 0xfe, 0xcd, 0x19, 0x24, 0x18, 0xf4, 0x12, 0x5e, 0xcd, 0x0f, 0x3b, 0x52, 0x1a, 0x87, 0x64, 0x84, 0x8f, 0x34, 0x91, 0x59, 0x0b, 0xae, 0x9f, 0xc1, 0xef, 0x65, 0x73, 0xda, 0x6f, 0x18, 0xf6, 0xa5, 0x63, 0x50, 0xed, 0x58, 0xf5, 0x39, 0x0f, 0xe5, 0x90, 0x43, 0xc3, 0xd8, 0x35, 0x08, 0xe7, 0x23, 0x6c, 0x9c, 0x3e, 0x1b, 0xba, 0x05, 0xeb, 0x3e, 0xd6, 0x64, 0xc2, 0x65, 0x7c, 0x76, 0x6a, 0x8b, 0x10, 0x1a, 0xc0, 0xe5, 0xc4, 0xd3, 0xec, 0x1e, 0xe2, 0x49, 0xde, 0xa3, 0x1b, 0x99, 0xa9, 0xe6, 0x96, 0xba, 0x03, 0x89, 0x7d, 0xa3, 0x03, 0x87, 0xc3, 0x34, 0xcf, 0x4b, 0x3a, 0x31, 0x2c, 0x8a, 0x1e, 0x57, 0xeb, 0x60, 0xc5, 0xea, 0xfd, 0x06, 0xd0, 0xce, 0x9b, 0xb8, 0x97, 0xbf, 0x0d, 0x87, 0x44, 0x46, 0xd4, 0x27, 0xe8, 0x05, 0x5c, 0x3e, 0xd4, 0x92, 0xe0, 0xe9, 0xc9, 0x10, 0xae, 0x97, 0x3b, 0x57, 0x94, 0x78, 0xe4, 0xdd, 0x8c, 0x28, 0xed, 0x6c, 0x5c, 0x18, 0x57, 0x82, 0x33, 0x45, 0xda, 0x73, 0x9b, 0xe0, 0x2e, 0x40, 0x33, 0xb8, 0x34, 0x20, 0xda, 0x7f, 0xf3, 0x1f, 0xc1, 0xed, 0x4f, 0x3f, 0x7e, 0x7e, 0xb1, 0x9a, 0xed, 0xb5, 0xd2, 0x33, 0xb7, 0x53, 0x5c, 0xc7, 0x1d, 0xd0, 0xe9, 0xdf, 0x86, 0x0e, 0xe5, 0x29, 0x48, 0x48, 0xfe, 0x21, 0x2e, 0x31, 0xfb, 0xf5, 0xfd, 0x40, 0x0d, 0x4d, 0x23, 0x87, 0xe0, 0x33, 0x00, 0xe3, 0x5a, 0xd2, 0xd4, 0xad, 0x3f, 0x01, 0x00, 0x00, 0xff, 0xff, 0x26, 0x36, 0x57, 0x86, 0x67, 0x05, 0x00, 0x00, } 
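For orientation only: the sketch below shows how the generated EndpointDiscoveryServiceClient above could be driven by hand to fetch a ClusterLoadAssignment over the v2 EDS stream. It is not part of the repository; the server address, cluster name, and error handling are illustrative assumptions, and the generated packages sit under an internal import path, so this is meant to show the call shapes rather than to be built outside this tree.

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	discoverypb "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/discovery"
	edspb "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/eds"
)

func main() {
	// Placeholder management-server address; real xDS servers typically also
	// expect node metadata and TLS, both omitted here for brevity.
	conn, err := grpc.Dial("xds-server.example.com:443", grpc.WithInsecure())
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	client := edspb.NewEndpointDiscoveryServiceClient(conn)
	stream, err := client.StreamEndpoints(context.Background())
	if err != nil {
		log.Fatalf("StreamEndpoints: %v", err)
	}
	// Request the load assignment for one (hypothetical) cluster.
	if err := stream.Send(&discoverypb.DiscoveryRequest{
		TypeUrl:       "type.googleapis.com/envoy.api.v2.ClusterLoadAssignment",
		ResourceNames: []string{"example-cluster"},
	}); err != nil {
		log.Fatalf("send: %v", err)
	}
	resp, err := stream.Recv()
	if err != nil {
		log.Fatalf("recv: %v", err)
	}
	log.Printf("received %d resources at version %q", len(resp.Resources), resp.VersionInfo)
}

The same bidirectional-stream pattern applies to FetchEndpoints (unary) and, further below, to the aggregated discovery service, only with different request routing on the server side.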
grpc-go-1.22.1/balancer/xds/internal/proto/envoy/api/v2/endpoint/000077500000000000000000000000001351635773100245525ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/envoy/api/v2/endpoint/endpoint/000077500000000000000000000000001351635773100263725ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/envoy/api/v2/endpoint/endpoint/endpoint.pb.go000077500000000000000000000367741351635773100311650ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: envoy/api/v2/endpoint/endpoint.proto package endpoint import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import wrappers "github.com/golang/protobuf/ptypes/wrappers" import address "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/core/address" import base "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/core/base" import health_check "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/core/health_check" import _ "google.golang.org/grpc/balancer/xds/internal/proto/validate" // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type Endpoint struct { Address *address.Address `protobuf:"bytes,1,opt,name=address,proto3" json:"address,omitempty"` HealthCheckConfig *Endpoint_HealthCheckConfig `protobuf:"bytes,2,opt,name=health_check_config,json=healthCheckConfig,proto3" json:"health_check_config,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Endpoint) Reset() { *m = Endpoint{} } func (m *Endpoint) String() string { return proto.CompactTextString(m) } func (*Endpoint) ProtoMessage() {} func (*Endpoint) Descriptor() ([]byte, []int) { return fileDescriptor_endpoint_d432c656305d64c3, []int{0} } func (m *Endpoint) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Endpoint.Unmarshal(m, b) } func (m *Endpoint) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Endpoint.Marshal(b, m, deterministic) } func (dst *Endpoint) XXX_Merge(src proto.Message) { xxx_messageInfo_Endpoint.Merge(dst, src) } func (m *Endpoint) XXX_Size() int { return xxx_messageInfo_Endpoint.Size(m) } func (m *Endpoint) XXX_DiscardUnknown() { xxx_messageInfo_Endpoint.DiscardUnknown(m) } var xxx_messageInfo_Endpoint proto.InternalMessageInfo func (m *Endpoint) GetAddress() *address.Address { if m != nil { return m.Address } return nil } func (m *Endpoint) GetHealthCheckConfig() *Endpoint_HealthCheckConfig { if m != nil { return m.HealthCheckConfig } return nil } type Endpoint_HealthCheckConfig struct { PortValue uint32 `protobuf:"varint,1,opt,name=port_value,json=portValue,proto3" json:"port_value,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Endpoint_HealthCheckConfig) Reset() { *m = Endpoint_HealthCheckConfig{} } func (m *Endpoint_HealthCheckConfig) String() string { return proto.CompactTextString(m) } func (*Endpoint_HealthCheckConfig) ProtoMessage() {} func (*Endpoint_HealthCheckConfig) Descriptor() ([]byte, []int) { return 
fileDescriptor_endpoint_d432c656305d64c3, []int{0, 0} } func (m *Endpoint_HealthCheckConfig) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Endpoint_HealthCheckConfig.Unmarshal(m, b) } func (m *Endpoint_HealthCheckConfig) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Endpoint_HealthCheckConfig.Marshal(b, m, deterministic) } func (dst *Endpoint_HealthCheckConfig) XXX_Merge(src proto.Message) { xxx_messageInfo_Endpoint_HealthCheckConfig.Merge(dst, src) } func (m *Endpoint_HealthCheckConfig) XXX_Size() int { return xxx_messageInfo_Endpoint_HealthCheckConfig.Size(m) } func (m *Endpoint_HealthCheckConfig) XXX_DiscardUnknown() { xxx_messageInfo_Endpoint_HealthCheckConfig.DiscardUnknown(m) } var xxx_messageInfo_Endpoint_HealthCheckConfig proto.InternalMessageInfo func (m *Endpoint_HealthCheckConfig) GetPortValue() uint32 { if m != nil { return m.PortValue } return 0 } type LbEndpoint struct { // Types that are valid to be assigned to HostIdentifier: // *LbEndpoint_Endpoint // *LbEndpoint_EndpointName HostIdentifier isLbEndpoint_HostIdentifier `protobuf_oneof:"host_identifier"` HealthStatus health_check.HealthStatus `protobuf:"varint,2,opt,name=health_status,json=healthStatus,proto3,enum=envoy.api.v2.core.HealthStatus" json:"health_status,omitempty"` Metadata *base.Metadata `protobuf:"bytes,3,opt,name=metadata,proto3" json:"metadata,omitempty"` LoadBalancingWeight *wrappers.UInt32Value `protobuf:"bytes,4,opt,name=load_balancing_weight,json=loadBalancingWeight,proto3" json:"load_balancing_weight,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *LbEndpoint) Reset() { *m = LbEndpoint{} } func (m *LbEndpoint) String() string { return proto.CompactTextString(m) } func (*LbEndpoint) ProtoMessage() {} func (*LbEndpoint) Descriptor() ([]byte, []int) { return fileDescriptor_endpoint_d432c656305d64c3, []int{1} } func (m *LbEndpoint) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_LbEndpoint.Unmarshal(m, b) } func (m *LbEndpoint) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_LbEndpoint.Marshal(b, m, deterministic) } func (dst *LbEndpoint) XXX_Merge(src proto.Message) { xxx_messageInfo_LbEndpoint.Merge(dst, src) } func (m *LbEndpoint) XXX_Size() int { return xxx_messageInfo_LbEndpoint.Size(m) } func (m *LbEndpoint) XXX_DiscardUnknown() { xxx_messageInfo_LbEndpoint.DiscardUnknown(m) } var xxx_messageInfo_LbEndpoint proto.InternalMessageInfo type isLbEndpoint_HostIdentifier interface { isLbEndpoint_HostIdentifier() } type LbEndpoint_Endpoint struct { Endpoint *Endpoint `protobuf:"bytes,1,opt,name=endpoint,proto3,oneof"` } type LbEndpoint_EndpointName struct { EndpointName string `protobuf:"bytes,5,opt,name=endpoint_name,json=endpointName,proto3,oneof"` } func (*LbEndpoint_Endpoint) isLbEndpoint_HostIdentifier() {} func (*LbEndpoint_EndpointName) isLbEndpoint_HostIdentifier() {} func (m *LbEndpoint) GetHostIdentifier() isLbEndpoint_HostIdentifier { if m != nil { return m.HostIdentifier } return nil } func (m *LbEndpoint) GetEndpoint() *Endpoint { if x, ok := m.GetHostIdentifier().(*LbEndpoint_Endpoint); ok { return x.Endpoint } return nil } func (m *LbEndpoint) GetEndpointName() string { if x, ok := m.GetHostIdentifier().(*LbEndpoint_EndpointName); ok { return x.EndpointName } return "" } func (m *LbEndpoint) GetHealthStatus() health_check.HealthStatus { if m != nil { return m.HealthStatus } return health_check.HealthStatus_UNKNOWN } func (m 
*LbEndpoint) GetMetadata() *base.Metadata { if m != nil { return m.Metadata } return nil } func (m *LbEndpoint) GetLoadBalancingWeight() *wrappers.UInt32Value { if m != nil { return m.LoadBalancingWeight } return nil } // XXX_OneofFuncs is for the internal use of the proto package. func (*LbEndpoint) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _LbEndpoint_OneofMarshaler, _LbEndpoint_OneofUnmarshaler, _LbEndpoint_OneofSizer, []interface{}{ (*LbEndpoint_Endpoint)(nil), (*LbEndpoint_EndpointName)(nil), } } func _LbEndpoint_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*LbEndpoint) // host_identifier switch x := m.HostIdentifier.(type) { case *LbEndpoint_Endpoint: b.EncodeVarint(1<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Endpoint); err != nil { return err } case *LbEndpoint_EndpointName: b.EncodeVarint(5<<3 | proto.WireBytes) b.EncodeStringBytes(x.EndpointName) case nil: default: return fmt.Errorf("LbEndpoint.HostIdentifier has unexpected type %T", x) } return nil } func _LbEndpoint_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*LbEndpoint) switch tag { case 1: // host_identifier.endpoint if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(Endpoint) err := b.DecodeMessage(msg) m.HostIdentifier = &LbEndpoint_Endpoint{msg} return true, err case 5: // host_identifier.endpoint_name if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } x, err := b.DecodeStringBytes() m.HostIdentifier = &LbEndpoint_EndpointName{x} return true, err default: return false, nil } } func _LbEndpoint_OneofSizer(msg proto.Message) (n int) { m := msg.(*LbEndpoint) // host_identifier switch x := m.HostIdentifier.(type) { case *LbEndpoint_Endpoint: s := proto.Size(x.Endpoint) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *LbEndpoint_EndpointName: n += 1 // tag and wire n += proto.SizeVarint(uint64(len(x.EndpointName))) n += len(x.EndpointName) case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } type LocalityLbEndpoints struct { Locality *base.Locality `protobuf:"bytes,1,opt,name=locality,proto3" json:"locality,omitempty"` LbEndpoints []*LbEndpoint `protobuf:"bytes,2,rep,name=lb_endpoints,json=lbEndpoints,proto3" json:"lb_endpoints,omitempty"` LoadBalancingWeight *wrappers.UInt32Value `protobuf:"bytes,3,opt,name=load_balancing_weight,json=loadBalancingWeight,proto3" json:"load_balancing_weight,omitempty"` Priority uint32 `protobuf:"varint,5,opt,name=priority,proto3" json:"priority,omitempty"` Proximity *wrappers.UInt32Value `protobuf:"bytes,6,opt,name=proximity,proto3" json:"proximity,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *LocalityLbEndpoints) Reset() { *m = LocalityLbEndpoints{} } func (m *LocalityLbEndpoints) String() string { return proto.CompactTextString(m) } func (*LocalityLbEndpoints) ProtoMessage() {} func (*LocalityLbEndpoints) Descriptor() ([]byte, []int) { return fileDescriptor_endpoint_d432c656305d64c3, []int{2} } func (m *LocalityLbEndpoints) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_LocalityLbEndpoints.Unmarshal(m, b) } func (m *LocalityLbEndpoints) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_LocalityLbEndpoints.Marshal(b, m, 
deterministic) } func (dst *LocalityLbEndpoints) XXX_Merge(src proto.Message) { xxx_messageInfo_LocalityLbEndpoints.Merge(dst, src) } func (m *LocalityLbEndpoints) XXX_Size() int { return xxx_messageInfo_LocalityLbEndpoints.Size(m) } func (m *LocalityLbEndpoints) XXX_DiscardUnknown() { xxx_messageInfo_LocalityLbEndpoints.DiscardUnknown(m) } var xxx_messageInfo_LocalityLbEndpoints proto.InternalMessageInfo func (m *LocalityLbEndpoints) GetLocality() *base.Locality { if m != nil { return m.Locality } return nil } func (m *LocalityLbEndpoints) GetLbEndpoints() []*LbEndpoint { if m != nil { return m.LbEndpoints } return nil } func (m *LocalityLbEndpoints) GetLoadBalancingWeight() *wrappers.UInt32Value { if m != nil { return m.LoadBalancingWeight } return nil } func (m *LocalityLbEndpoints) GetPriority() uint32 { if m != nil { return m.Priority } return 0 } func (m *LocalityLbEndpoints) GetProximity() *wrappers.UInt32Value { if m != nil { return m.Proximity } return nil } func init() { proto.RegisterType((*Endpoint)(nil), "envoy.api.v2.endpoint.Endpoint") proto.RegisterType((*Endpoint_HealthCheckConfig)(nil), "envoy.api.v2.endpoint.Endpoint.HealthCheckConfig") proto.RegisterType((*LbEndpoint)(nil), "envoy.api.v2.endpoint.LbEndpoint") proto.RegisterType((*LocalityLbEndpoints)(nil), "envoy.api.v2.endpoint.LocalityLbEndpoints") } func init() { proto.RegisterFile("envoy/api/v2/endpoint/endpoint.proto", fileDescriptor_endpoint_d432c656305d64c3) } var fileDescriptor_endpoint_d432c656305d64c3 = []byte{ // 571 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xac, 0x52, 0x51, 0x6f, 0xd3, 0x3c, 0x14, 0x5d, 0x96, 0x75, 0x6b, 0xdd, 0xf6, 0xfb, 0x54, 0x57, 0x13, 0x51, 0x99, 0xd8, 0x28, 0x03, 0x55, 0x7d, 0x70, 0x44, 0x87, 0x84, 0x84, 0x84, 0x80, 0x6c, 0x48, 0x45, 0x1a, 0x68, 0x32, 0x02, 0x24, 0x1e, 0x88, 0x9c, 0xc4, 0x6d, 0x2c, 0xd2, 0x38, 0x4a, 0xdc, 0x8e, 0xbe, 0xed, 0x77, 0xf1, 0xc4, 0xcf, 0xe0, 0x07, 0xec, 0x85, 0x5f, 0x31, 0x64, 0x27, 0x4e, 0xbb, 0xb5, 0xd3, 0x5e, 0x78, 0x73, 0xee, 0x3d, 0xe7, 0x9e, 0x7b, 0x4e, 0x2e, 0x38, 0xa4, 0xf1, 0x8c, 0xcf, 0x6d, 0x92, 0x30, 0x7b, 0x36, 0xb0, 0x69, 0x1c, 0x24, 0x9c, 0xc5, 0xa2, 0x7c, 0xa0, 0x24, 0xe5, 0x82, 0xc3, 0x5d, 0x85, 0x42, 0x24, 0x61, 0x68, 0x36, 0x40, 0xba, 0xd9, 0xd9, 0xbf, 0x46, 0xf6, 0x79, 0x4a, 0x6d, 0x12, 0x04, 0x29, 0xcd, 0xb2, 0x9c, 0xd7, 0xd9, 0x5b, 0x05, 0x78, 0x24, 0xa3, 0x45, 0xf7, 0x70, 0xb5, 0x1b, 0x52, 0x12, 0x89, 0xd0, 0xf5, 0x43, 0xea, 0x7f, 0x2f, 0x50, 0x0f, 0xc6, 0x9c, 0x8f, 0x23, 0x6a, 0xab, 0x2f, 0x6f, 0x3a, 0xb2, 0xcf, 0x53, 0x92, 0x24, 0x34, 0xd5, 0x1a, 0xf7, 0x66, 0x24, 0x62, 0x01, 0x11, 0xd4, 0xd6, 0x8f, 0xbc, 0xd1, 0xbd, 0x34, 0x40, 0xf5, 0x6d, 0xb1, 0x2a, 0x7c, 0x06, 0x76, 0x8a, 0xd5, 0x2c, 0xe3, 0xc0, 0xe8, 0xd5, 0x07, 0x1d, 0x74, 0xcd, 0x93, 0x54, 0x47, 0x6f, 0x72, 0x04, 0xd6, 0x50, 0x48, 0x40, 0x7b, 0x79, 0x23, 0xd7, 0xe7, 0xf1, 0x88, 0x8d, 0xad, 0x4d, 0x35, 0xe1, 0x29, 0x5a, 0x9b, 0x0a, 0xd2, 0x9a, 0x68, 0xa8, 0xa8, 0xc7, 0x92, 0x79, 0xac, 0x88, 0xb8, 0x15, 0xde, 0x2c, 0x75, 0x5e, 0x81, 0xd6, 0x0a, 0x0e, 0xf6, 0x01, 0x48, 0x78, 0x2a, 0xdc, 0x19, 0x89, 0xa6, 0x54, 0x2d, 0xdc, 0x74, 0xea, 0x3f, 0xff, 0xfc, 0x32, 0xb7, 0xfb, 0x5b, 0xd6, 0xd5, 0x95, 0x89, 0x6b, 0xb2, 0xfd, 0x59, 0x76, 0xbb, 0x97, 0x9b, 0x00, 0x9c, 0x7a, 0xa5, 0xd1, 0x97, 0xa0, 0xaa, 0x37, 0x29, 0x9c, 0xee, 0xdf, 0xb1, 0xe7, 0x70, 0x03, 0x97, 0x14, 0xf8, 0x18, 0x34, 0xf5, 0xdb, 0x8d, 0xc9, 0x84, 0x5a, 0x95, 0x03, 0xa3, 0x57, 0x1b, 0x6e, 0xe0, 0x86, 0x2e, 0x7f, 0x20, 0x13, 0x0a, 0x4f, 0x40, 0xb3, 0x08, 0x26, 0x13, 
0x44, 0x4c, 0x33, 0x15, 0xc9, 0x7f, 0x37, 0xa5, 0x54, 0xa8, 0xb9, 0xbb, 0x8f, 0x0a, 0x86, 0x1b, 0xe1, 0xd2, 0x17, 0x7c, 0x0e, 0xaa, 0x13, 0x2a, 0x48, 0x40, 0x04, 0xb1, 0x4c, 0xb5, 0xeb, 0xfd, 0x35, 0x03, 0xde, 0x17, 0x10, 0x5c, 0x82, 0xe1, 0x37, 0xb0, 0x1b, 0x71, 0x12, 0xb8, 0x1e, 0x89, 0x48, 0xec, 0xb3, 0x78, 0xec, 0x9e, 0x53, 0x36, 0x0e, 0x85, 0xb5, 0xa5, 0xa6, 0xec, 0xa1, 0xfc, 0x66, 0x90, 0xbe, 0x19, 0xf4, 0xe9, 0x5d, 0x2c, 0x8e, 0x06, 0x2a, 0x30, 0xa7, 0x21, 0x83, 0xdc, 0xe9, 0x57, 0xac, 0x0b, 0xa3, 0x67, 0xe0, 0xb6, 0x1c, 0xe4, 0xe8, 0x39, 0x5f, 0xd4, 0x18, 0xa7, 0x05, 0xfe, 0x0f, 0x79, 0x26, 0x5c, 0x16, 0xd0, 0x58, 0xb0, 0x11, 0xa3, 0x69, 0xf7, 0xf7, 0x26, 0x68, 0x9f, 0x72, 0x9f, 0x44, 0x4c, 0xcc, 0x17, 0x71, 0x2b, 0x0f, 0x51, 0x51, 0x2e, 0xf2, 0x5e, 0xe7, 0x41, 0x33, 0x71, 0x09, 0x86, 0x27, 0xa0, 0x11, 0x79, 0xae, 0x4e, 0x55, 0x26, 0x68, 0xf6, 0xea, 0x83, 0x87, 0xb7, 0xfc, 0xac, 0x85, 0x24, 0xae, 0x47, 0x4b, 0xf2, 0xb7, 0x26, 0x61, 0xfe, 0x93, 0x24, 0xe0, 0x13, 0x50, 0x4d, 0x52, 0xc6, 0x53, 0x69, 0xaf, 0xa2, 0xee, 0x10, 0x48, 0x52, 0xa5, 0x6f, 0x5a, 0x17, 0x06, 0x2e, 0x7b, 0xf0, 0x05, 0xa8, 0x25, 0x29, 0xff, 0xc1, 0x26, 0x12, 0xb8, 0x7d, 0xb7, 0x36, 0x5e, 0xc0, 0x9d, 0xd7, 0xe0, 0x11, 0xe3, 0xb9, 0x6f, 0x59, 0x9c, 0xaf, 0x8f, 0xc0, 0x69, 0x6a, 0xd7, 0x67, 0x72, 0xde, 0x99, 0xf1, 0xb5, 0xbc, 0x5a, 0x6f, 0x5b, 0x49, 0x1c, 0xfd, 0x0d, 0x00, 0x00, 0xff, 0xff, 0x5c, 0x64, 0xf8, 0xa2, 0xce, 0x04, 0x00, 0x00, } grpc-go-1.22.1/balancer/xds/internal/proto/envoy/api/v2/endpoint/load_report/000077500000000000000000000000001351635773100270645ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/envoy/api/v2/endpoint/load_report/load_report.pb.go000077500000000000000000000460311351635773100323340ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: envoy/api/v2/endpoint/load_report.proto package envoy_api_v2_endpoint import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import duration "github.com/golang/protobuf/ptypes/duration" import _struct "github.com/golang/protobuf/ptypes/struct" import address "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/core/address" import base "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/core/base" import _ "google.golang.org/grpc/balancer/xds/internal/proto/validate" // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. 
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type UpstreamLocalityStats struct { Locality *base.Locality `protobuf:"bytes,1,opt,name=locality,proto3" json:"locality,omitempty"` TotalSuccessfulRequests uint64 `protobuf:"varint,2,opt,name=total_successful_requests,json=totalSuccessfulRequests,proto3" json:"total_successful_requests,omitempty"` TotalRequestsInProgress uint64 `protobuf:"varint,3,opt,name=total_requests_in_progress,json=totalRequestsInProgress,proto3" json:"total_requests_in_progress,omitempty"` TotalErrorRequests uint64 `protobuf:"varint,4,opt,name=total_error_requests,json=totalErrorRequests,proto3" json:"total_error_requests,omitempty"` LoadMetricStats []*EndpointLoadMetricStats `protobuf:"bytes,5,rep,name=load_metric_stats,json=loadMetricStats,proto3" json:"load_metric_stats,omitempty"` UpstreamEndpointStats []*UpstreamEndpointStats `protobuf:"bytes,7,rep,name=upstream_endpoint_stats,json=upstreamEndpointStats,proto3" json:"upstream_endpoint_stats,omitempty"` Priority uint32 `protobuf:"varint,6,opt,name=priority,proto3" json:"priority,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *UpstreamLocalityStats) Reset() { *m = UpstreamLocalityStats{} } func (m *UpstreamLocalityStats) String() string { return proto.CompactTextString(m) } func (*UpstreamLocalityStats) ProtoMessage() {} func (*UpstreamLocalityStats) Descriptor() ([]byte, []int) { return fileDescriptor_load_report_5485b60725c658b8, []int{0} } func (m *UpstreamLocalityStats) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_UpstreamLocalityStats.Unmarshal(m, b) } func (m *UpstreamLocalityStats) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_UpstreamLocalityStats.Marshal(b, m, deterministic) } func (dst *UpstreamLocalityStats) XXX_Merge(src proto.Message) { xxx_messageInfo_UpstreamLocalityStats.Merge(dst, src) } func (m *UpstreamLocalityStats) XXX_Size() int { return xxx_messageInfo_UpstreamLocalityStats.Size(m) } func (m *UpstreamLocalityStats) XXX_DiscardUnknown() { xxx_messageInfo_UpstreamLocalityStats.DiscardUnknown(m) } var xxx_messageInfo_UpstreamLocalityStats proto.InternalMessageInfo func (m *UpstreamLocalityStats) GetLocality() *base.Locality { if m != nil { return m.Locality } return nil } func (m *UpstreamLocalityStats) GetTotalSuccessfulRequests() uint64 { if m != nil { return m.TotalSuccessfulRequests } return 0 } func (m *UpstreamLocalityStats) GetTotalRequestsInProgress() uint64 { if m != nil { return m.TotalRequestsInProgress } return 0 } func (m *UpstreamLocalityStats) GetTotalErrorRequests() uint64 { if m != nil { return m.TotalErrorRequests } return 0 } func (m *UpstreamLocalityStats) GetLoadMetricStats() []*EndpointLoadMetricStats { if m != nil { return m.LoadMetricStats } return nil } func (m *UpstreamLocalityStats) GetUpstreamEndpointStats() []*UpstreamEndpointStats { if m != nil { return m.UpstreamEndpointStats } return nil } func (m *UpstreamLocalityStats) GetPriority() uint32 { if m != nil { return m.Priority } return 0 } type UpstreamEndpointStats struct { Address *address.Address `protobuf:"bytes,1,opt,name=address,proto3" json:"address,omitempty"` Metadata *_struct.Struct `protobuf:"bytes,6,opt,name=metadata,proto3" json:"metadata,omitempty"` TotalSuccessfulRequests uint64 `protobuf:"varint,2,opt,name=total_successful_requests,json=totalSuccessfulRequests,proto3" json:"total_successful_requests,omitempty"` TotalRequestsInProgress uint64 
`protobuf:"varint,3,opt,name=total_requests_in_progress,json=totalRequestsInProgress,proto3" json:"total_requests_in_progress,omitempty"` TotalErrorRequests uint64 `protobuf:"varint,4,opt,name=total_error_requests,json=totalErrorRequests,proto3" json:"total_error_requests,omitempty"` LoadMetricStats []*EndpointLoadMetricStats `protobuf:"bytes,5,rep,name=load_metric_stats,json=loadMetricStats,proto3" json:"load_metric_stats,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *UpstreamEndpointStats) Reset() { *m = UpstreamEndpointStats{} } func (m *UpstreamEndpointStats) String() string { return proto.CompactTextString(m) } func (*UpstreamEndpointStats) ProtoMessage() {} func (*UpstreamEndpointStats) Descriptor() ([]byte, []int) { return fileDescriptor_load_report_5485b60725c658b8, []int{1} } func (m *UpstreamEndpointStats) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_UpstreamEndpointStats.Unmarshal(m, b) } func (m *UpstreamEndpointStats) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_UpstreamEndpointStats.Marshal(b, m, deterministic) } func (dst *UpstreamEndpointStats) XXX_Merge(src proto.Message) { xxx_messageInfo_UpstreamEndpointStats.Merge(dst, src) } func (m *UpstreamEndpointStats) XXX_Size() int { return xxx_messageInfo_UpstreamEndpointStats.Size(m) } func (m *UpstreamEndpointStats) XXX_DiscardUnknown() { xxx_messageInfo_UpstreamEndpointStats.DiscardUnknown(m) } var xxx_messageInfo_UpstreamEndpointStats proto.InternalMessageInfo func (m *UpstreamEndpointStats) GetAddress() *address.Address { if m != nil { return m.Address } return nil } func (m *UpstreamEndpointStats) GetMetadata() *_struct.Struct { if m != nil { return m.Metadata } return nil } func (m *UpstreamEndpointStats) GetTotalSuccessfulRequests() uint64 { if m != nil { return m.TotalSuccessfulRequests } return 0 } func (m *UpstreamEndpointStats) GetTotalRequestsInProgress() uint64 { if m != nil { return m.TotalRequestsInProgress } return 0 } func (m *UpstreamEndpointStats) GetTotalErrorRequests() uint64 { if m != nil { return m.TotalErrorRequests } return 0 } func (m *UpstreamEndpointStats) GetLoadMetricStats() []*EndpointLoadMetricStats { if m != nil { return m.LoadMetricStats } return nil } type EndpointLoadMetricStats struct { MetricName string `protobuf:"bytes,1,opt,name=metric_name,json=metricName,proto3" json:"metric_name,omitempty"` NumRequestsFinishedWithMetric uint64 `protobuf:"varint,2,opt,name=num_requests_finished_with_metric,json=numRequestsFinishedWithMetric,proto3" json:"num_requests_finished_with_metric,omitempty"` TotalMetricValue float64 `protobuf:"fixed64,3,opt,name=total_metric_value,json=totalMetricValue,proto3" json:"total_metric_value,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *EndpointLoadMetricStats) Reset() { *m = EndpointLoadMetricStats{} } func (m *EndpointLoadMetricStats) String() string { return proto.CompactTextString(m) } func (*EndpointLoadMetricStats) ProtoMessage() {} func (*EndpointLoadMetricStats) Descriptor() ([]byte, []int) { return fileDescriptor_load_report_5485b60725c658b8, []int{2} } func (m *EndpointLoadMetricStats) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_EndpointLoadMetricStats.Unmarshal(m, b) } func (m *EndpointLoadMetricStats) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_EndpointLoadMetricStats.Marshal(b, m, deterministic) 
} func (dst *EndpointLoadMetricStats) XXX_Merge(src proto.Message) { xxx_messageInfo_EndpointLoadMetricStats.Merge(dst, src) } func (m *EndpointLoadMetricStats) XXX_Size() int { return xxx_messageInfo_EndpointLoadMetricStats.Size(m) } func (m *EndpointLoadMetricStats) XXX_DiscardUnknown() { xxx_messageInfo_EndpointLoadMetricStats.DiscardUnknown(m) } var xxx_messageInfo_EndpointLoadMetricStats proto.InternalMessageInfo func (m *EndpointLoadMetricStats) GetMetricName() string { if m != nil { return m.MetricName } return "" } func (m *EndpointLoadMetricStats) GetNumRequestsFinishedWithMetric() uint64 { if m != nil { return m.NumRequestsFinishedWithMetric } return 0 } func (m *EndpointLoadMetricStats) GetTotalMetricValue() float64 { if m != nil { return m.TotalMetricValue } return 0 } type ClusterStats struct { ClusterName string `protobuf:"bytes,1,opt,name=cluster_name,json=clusterName,proto3" json:"cluster_name,omitempty"` ClusterServiceName string `protobuf:"bytes,6,opt,name=cluster_service_name,json=clusterServiceName,proto3" json:"cluster_service_name,omitempty"` UpstreamLocalityStats []*UpstreamLocalityStats `protobuf:"bytes,2,rep,name=upstream_locality_stats,json=upstreamLocalityStats,proto3" json:"upstream_locality_stats,omitempty"` TotalDroppedRequests uint64 `protobuf:"varint,3,opt,name=total_dropped_requests,json=totalDroppedRequests,proto3" json:"total_dropped_requests,omitempty"` DroppedRequests []*ClusterStats_DroppedRequests `protobuf:"bytes,5,rep,name=dropped_requests,json=droppedRequests,proto3" json:"dropped_requests,omitempty"` LoadReportInterval *duration.Duration `protobuf:"bytes,4,opt,name=load_report_interval,json=loadReportInterval,proto3" json:"load_report_interval,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ClusterStats) Reset() { *m = ClusterStats{} } func (m *ClusterStats) String() string { return proto.CompactTextString(m) } func (*ClusterStats) ProtoMessage() {} func (*ClusterStats) Descriptor() ([]byte, []int) { return fileDescriptor_load_report_5485b60725c658b8, []int{3} } func (m *ClusterStats) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ClusterStats.Unmarshal(m, b) } func (m *ClusterStats) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ClusterStats.Marshal(b, m, deterministic) } func (dst *ClusterStats) XXX_Merge(src proto.Message) { xxx_messageInfo_ClusterStats.Merge(dst, src) } func (m *ClusterStats) XXX_Size() int { return xxx_messageInfo_ClusterStats.Size(m) } func (m *ClusterStats) XXX_DiscardUnknown() { xxx_messageInfo_ClusterStats.DiscardUnknown(m) } var xxx_messageInfo_ClusterStats proto.InternalMessageInfo func (m *ClusterStats) GetClusterName() string { if m != nil { return m.ClusterName } return "" } func (m *ClusterStats) GetClusterServiceName() string { if m != nil { return m.ClusterServiceName } return "" } func (m *ClusterStats) GetUpstreamLocalityStats() []*UpstreamLocalityStats { if m != nil { return m.UpstreamLocalityStats } return nil } func (m *ClusterStats) GetTotalDroppedRequests() uint64 { if m != nil { return m.TotalDroppedRequests } return 0 } func (m *ClusterStats) GetDroppedRequests() []*ClusterStats_DroppedRequests { if m != nil { return m.DroppedRequests } return nil } func (m *ClusterStats) GetLoadReportInterval() *duration.Duration { if m != nil { return m.LoadReportInterval } return nil } type ClusterStats_DroppedRequests struct { Category string `protobuf:"bytes,1,opt,name=category,proto3" 
json:"category,omitempty"` DroppedCount uint64 `protobuf:"varint,2,opt,name=dropped_count,json=droppedCount,proto3" json:"dropped_count,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ClusterStats_DroppedRequests) Reset() { *m = ClusterStats_DroppedRequests{} } func (m *ClusterStats_DroppedRequests) String() string { return proto.CompactTextString(m) } func (*ClusterStats_DroppedRequests) ProtoMessage() {} func (*ClusterStats_DroppedRequests) Descriptor() ([]byte, []int) { return fileDescriptor_load_report_5485b60725c658b8, []int{3, 0} } func (m *ClusterStats_DroppedRequests) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ClusterStats_DroppedRequests.Unmarshal(m, b) } func (m *ClusterStats_DroppedRequests) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ClusterStats_DroppedRequests.Marshal(b, m, deterministic) } func (dst *ClusterStats_DroppedRequests) XXX_Merge(src proto.Message) { xxx_messageInfo_ClusterStats_DroppedRequests.Merge(dst, src) } func (m *ClusterStats_DroppedRequests) XXX_Size() int { return xxx_messageInfo_ClusterStats_DroppedRequests.Size(m) } func (m *ClusterStats_DroppedRequests) XXX_DiscardUnknown() { xxx_messageInfo_ClusterStats_DroppedRequests.DiscardUnknown(m) } var xxx_messageInfo_ClusterStats_DroppedRequests proto.InternalMessageInfo func (m *ClusterStats_DroppedRequests) GetCategory() string { if m != nil { return m.Category } return "" } func (m *ClusterStats_DroppedRequests) GetDroppedCount() uint64 { if m != nil { return m.DroppedCount } return 0 } func init() { proto.RegisterType((*UpstreamLocalityStats)(nil), "envoy.api.v2.endpoint.UpstreamLocalityStats") proto.RegisterType((*UpstreamEndpointStats)(nil), "envoy.api.v2.endpoint.UpstreamEndpointStats") proto.RegisterType((*EndpointLoadMetricStats)(nil), "envoy.api.v2.endpoint.EndpointLoadMetricStats") proto.RegisterType((*ClusterStats)(nil), "envoy.api.v2.endpoint.ClusterStats") proto.RegisterType((*ClusterStats_DroppedRequests)(nil), "envoy.api.v2.endpoint.ClusterStats.DroppedRequests") } func init() { proto.RegisterFile("envoy/api/v2/endpoint/load_report.proto", fileDescriptor_load_report_5485b60725c658b8) } var fileDescriptor_load_report_5485b60725c658b8 = []byte{ // 737 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xec, 0x55, 0x5b, 0x6e, 0xd3, 0x4c, 0x18, 0x95, 0x93, 0xb4, 0x4d, 0x27, 0xad, 0xd2, 0x7f, 0xd4, 0xfe, 0x49, 0xfd, 0x5f, 0x1a, 0x52, 0x21, 0xf2, 0x50, 0xd9, 0x55, 0x5a, 0x09, 0x04, 0x4f, 0xa4, 0x2d, 0xa2, 0xa2, 0xa0, 0xca, 0x11, 0x20, 0x21, 0x81, 0x35, 0xb5, 0xa7, 0xe9, 0x48, 0xb6, 0xc7, 0xcc, 0x8c, 0x0d, 0x59, 0x02, 0xaf, 0x2c, 0x81, 0x25, 0xf0, 0xc8, 0x13, 0xdb, 0x60, 0x09, 0xec, 0x02, 0x79, 0x2e, 0xce, 0xb5, 0x12, 0x0b, 0xe0, 0xcd, 0xfe, 0xce, 0x39, 0xfe, 0x6e, 0x67, 0xc6, 0xe0, 0x1e, 0x4e, 0x72, 0x3a, 0x76, 0x51, 0x4a, 0xdc, 0xbc, 0xef, 0xe2, 0x24, 0x4c, 0x29, 0x49, 0x84, 0x1b, 0x51, 0x14, 0xfa, 0x0c, 0xa7, 0x94, 0x09, 0x27, 0x65, 0x54, 0x50, 0xb8, 0x23, 0x89, 0x0e, 0x4a, 0x89, 0x93, 0xf7, 0x1d, 0x43, 0xb4, 0xf7, 0x66, 0xf4, 0x01, 0x65, 0xd8, 0x45, 0x61, 0xc8, 0x30, 0xe7, 0x4a, 0x67, 0xff, 0xbb, 0x48, 0xb8, 0x42, 0x1c, 0x6b, 0xf4, 0xff, 0x11, 0xa5, 0xa3, 0x08, 0xbb, 0xf2, 0xed, 0x2a, 0xbb, 0x76, 0xc3, 0x8c, 0x21, 0x41, 0x68, 0x62, 0xd4, 0xf3, 0x38, 0x17, 0x2c, 0x0b, 0x74, 0x4d, 0x76, 0x2b, 0x47, 0x11, 0x09, 0x91, 0xc0, 0xae, 0x79, 0x50, 0x40, 0xf7, 0x47, 0x15, 0xec, 0xbc, 0x4c, 0xb9, 0x60, 0x18, 0xc5, 0x17, 0x34, 0x40, 
0x11, 0x11, 0xe3, 0xa1, 0x40, 0x82, 0xc3, 0xfb, 0xa0, 0x1e, 0xe9, 0x40, 0xdb, 0xea, 0x58, 0xbd, 0x46, 0xff, 0x1f, 0x67, 0xa6, 0xb3, 0xa2, 0x42, 0xc7, 0x68, 0xbc, 0x92, 0x0c, 0x1f, 0x82, 0x5d, 0x41, 0x05, 0x8a, 0x7c, 0x9e, 0x05, 0x01, 0xe6, 0xfc, 0x3a, 0x8b, 0x7c, 0x86, 0xdf, 0x67, 0x98, 0x0b, 0xde, 0xae, 0x74, 0xac, 0x5e, 0xcd, 0x6b, 0x49, 0xc2, 0xb0, 0xc4, 0x3d, 0x0d, 0xc3, 0x47, 0xc0, 0x56, 0x5a, 0x23, 0xf0, 0x49, 0xe2, 0xa7, 0x8c, 0x8e, 0x8a, 0x39, 0xb5, 0xab, 0x53, 0x62, 0x23, 0x39, 0x4f, 0x2e, 0x35, 0x0c, 0x0f, 0xc1, 0xb6, 0x12, 0x63, 0xc6, 0x28, 0x9b, 0xe4, 0xac, 0x49, 0x19, 0x94, 0xd8, 0x59, 0x01, 0x95, 0xe9, 0xde, 0x80, 0xbf, 0xe4, 0xfe, 0x62, 0x2c, 0x18, 0x09, 0x7c, 0x5e, 0x34, 0xde, 0x5e, 0xe9, 0x54, 0x7b, 0x8d, 0xbe, 0xe3, 0x2c, 0x5d, 0xa3, 0x73, 0xa6, 0x1f, 0x2e, 0x28, 0x0a, 0x9f, 0x4b, 0x99, 0x1c, 0x97, 0xd7, 0x8c, 0x66, 0x03, 0x30, 0x04, 0xad, 0x4c, 0x0f, 0xd6, 0x37, 0x6a, 0x9d, 0x61, 0x4d, 0x66, 0x38, 0xb8, 0x25, 0x83, 0x59, 0x87, 0xc9, 0xa4, 0xbe, 0xbf, 0x93, 0x2d, 0x0b, 0x43, 0x1b, 0xd4, 0x53, 0x46, 0x28, 0x2b, 0xb6, 0xb4, 0xda, 0xb1, 0x7a, 0x9b, 0x5e, 0xf9, 0xde, 0xfd, 0x34, 0xb5, 0xdb, 0x59, 0xd5, 0x31, 0x58, 0xd3, 0xde, 0xd3, 0xab, 0xb5, 0x97, 0xac, 0xf6, 0xb1, 0x62, 0x78, 0x86, 0x0a, 0x8f, 0x40, 0x3d, 0xc6, 0x02, 0x85, 0x48, 0x20, 0x99, 0xab, 0xd1, 0x6f, 0x39, 0xca, 0x75, 0x8e, 0x71, 0x9d, 0x33, 0x94, 0xae, 0xf3, 0x4a, 0xe2, 0x1f, 0x37, 0xc8, 0x40, 0xf7, 0xab, 0x05, 0x5a, 0xb7, 0x90, 0xe1, 0x1e, 0x68, 0xe8, 0x94, 0x09, 0x8a, 0xb1, 0xdc, 0xc8, 0xba, 0x07, 0x54, 0xe8, 0x05, 0x8a, 0x31, 0x7c, 0x0a, 0xee, 0x24, 0x59, 0x3c, 0x99, 0xc2, 0x35, 0x49, 0x08, 0xbf, 0xc1, 0xa1, 0xff, 0x81, 0x88, 0x1b, 0x5d, 0xae, 0x9e, 0xe5, 0x7f, 0x49, 0x16, 0x9b, 0x86, 0x9e, 0x68, 0xda, 0x6b, 0x22, 0x6e, 0x54, 0x3e, 0x78, 0x00, 0x54, 0xe3, 0xa6, 0xc7, 0x1c, 0x45, 0x19, 0x96, 0x93, 0xb4, 0xbc, 0x2d, 0x89, 0x28, 0xe2, 0xab, 0x22, 0xde, 0xfd, 0x52, 0x03, 0x1b, 0x27, 0x51, 0xc6, 0x05, 0x66, 0xaa, 0xd2, 0x03, 0xb0, 0x11, 0xa8, 0xf7, 0xa9, 0x52, 0x07, 0xeb, 0xdf, 0x7e, 0x7e, 0xaf, 0xd6, 0x58, 0xa5, 0x63, 0x79, 0x0d, 0x0d, 0xcb, 0xb2, 0x0f, 0xc1, 0xb6, 0x61, 0x73, 0xcc, 0x72, 0x12, 0x60, 0xa5, 0x5a, 0x95, 0x0d, 0x42, 0x8d, 0x0d, 0x15, 0x24, 0x15, 0xe9, 0xd4, 0x99, 0x31, 0xf7, 0x89, 0xde, 0x43, 0xe5, 0xb7, 0xce, 0xcc, 0xcc, 0x15, 0x36, 0x00, 0x45, 0x61, 0x2b, 0x9f, 0xad, 0x4a, 0xdd, 0x9a, 0x9c, 0x9f, 0xd9, 0x5b, 0xee, 0x18, 0xfc, 0xad, 0x06, 0x12, 0x32, 0x9a, 0xa6, 0x38, 0x9c, 0xf8, 0x44, 0xd9, 0x4b, 0x79, 0xe8, 0x54, 0x81, 0xa5, 0x53, 0xde, 0x81, 0xad, 0x05, 0xbe, 0x32, 0xca, 0xd1, 0x2d, 0x05, 0x4e, 0x8f, 0xd1, 0x99, 0xfb, 0x9c, 0xd7, 0x0c, 0xe7, 0xbe, 0xff, 0x0c, 0x6c, 0x4f, 0xfd, 0x57, 0x7c, 0x92, 0x08, 0xcc, 0x72, 0x14, 0x49, 0xef, 0x36, 0xfa, 0xbb, 0x0b, 0xa7, 0xee, 0x54, 0xff, 0x0b, 0x3c, 0x58, 0xc8, 0x3c, 0xa9, 0x3a, 0xd7, 0x22, 0xfb, 0x2d, 0x68, 0xce, 0xd7, 0x7f, 0x17, 0xd4, 0x03, 0x24, 0xf0, 0x88, 0xb2, 0xf1, 0xe2, 0x0e, 0x4b, 0x08, 0xee, 0x83, 0x4d, 0xd3, 0x66, 0x40, 0xb3, 0x44, 0x68, 0x8f, 0x6d, 0xe8, 0xe0, 0x49, 0x11, 0x1b, 0x3c, 0x00, 0xfb, 0x84, 0xaa, 0xae, 0x53, 0x46, 0x3f, 0x8e, 0x97, 0x0f, 0x60, 0xd0, 0xbc, 0x28, 0x2b, 0xbb, 0x2c, 0xca, 0xbe, 0xb4, 0xae, 0x56, 0x65, 0xfd, 0x47, 0xbf, 0x02, 0x00, 0x00, 0xff, 0xff, 0xad, 0x11, 0xfc, 0x18, 0x5a, 0x07, 0x00, 0x00, } 
grpc-go-1.22.1/balancer/xds/internal/proto/envoy/service/000077500000000000000000000000001351635773100232725ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/envoy/service/discovery/000077500000000000000000000000001351635773100253015ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/envoy/service/discovery/v2/000077500000000000000000000000001351635773100256305ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/envoy/service/discovery/v2/ads/000077500000000000000000000000001351635773100263775ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/envoy/service/discovery/v2/ads/ads.pb.go000077500000000000000000000235071351635773100301070ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: envoy/service/discovery/v2/ads.proto package v2 import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import discovery "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/discovery" import ( context "golang.org/x/net/context" grpc "google.golang.org/grpc" ) // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type AdsDummy struct { XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *AdsDummy) Reset() { *m = AdsDummy{} } func (m *AdsDummy) String() string { return proto.CompactTextString(m) } func (*AdsDummy) ProtoMessage() {} func (*AdsDummy) Descriptor() ([]byte, []int) { return fileDescriptor_ads_e4cd1a296681dd94, []int{0} } func (m *AdsDummy) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_AdsDummy.Unmarshal(m, b) } func (m *AdsDummy) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_AdsDummy.Marshal(b, m, deterministic) } func (dst *AdsDummy) XXX_Merge(src proto.Message) { xxx_messageInfo_AdsDummy.Merge(dst, src) } func (m *AdsDummy) XXX_Size() int { return xxx_messageInfo_AdsDummy.Size(m) } func (m *AdsDummy) XXX_DiscardUnknown() { xxx_messageInfo_AdsDummy.DiscardUnknown(m) } var xxx_messageInfo_AdsDummy proto.InternalMessageInfo func init() { proto.RegisterType((*AdsDummy)(nil), "envoy.service.discovery.v2.AdsDummy") } // Reference imports to suppress errors if they are not otherwise used. var _ context.Context var _ grpc.ClientConn // This is a compile-time assertion to ensure that this generated file // is compatible with the grpc package it is being compiled against. const _ = grpc.SupportPackageIsVersion4 // AggregatedDiscoveryServiceClient is the client API for AggregatedDiscoveryService service. // // For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream. 
type AggregatedDiscoveryServiceClient interface { StreamAggregatedResources(ctx context.Context, opts ...grpc.CallOption) (AggregatedDiscoveryService_StreamAggregatedResourcesClient, error) DeltaAggregatedResources(ctx context.Context, opts ...grpc.CallOption) (AggregatedDiscoveryService_DeltaAggregatedResourcesClient, error) } type aggregatedDiscoveryServiceClient struct { cc *grpc.ClientConn } func NewAggregatedDiscoveryServiceClient(cc *grpc.ClientConn) AggregatedDiscoveryServiceClient { return &aggregatedDiscoveryServiceClient{cc} } func (c *aggregatedDiscoveryServiceClient) StreamAggregatedResources(ctx context.Context, opts ...grpc.CallOption) (AggregatedDiscoveryService_StreamAggregatedResourcesClient, error) { stream, err := c.cc.NewStream(ctx, &_AggregatedDiscoveryService_serviceDesc.Streams[0], "/envoy.service.discovery.v2.AggregatedDiscoveryService/StreamAggregatedResources", opts...) if err != nil { return nil, err } x := &aggregatedDiscoveryServiceStreamAggregatedResourcesClient{stream} return x, nil } type AggregatedDiscoveryService_StreamAggregatedResourcesClient interface { Send(*discovery.DiscoveryRequest) error Recv() (*discovery.DiscoveryResponse, error) grpc.ClientStream } type aggregatedDiscoveryServiceStreamAggregatedResourcesClient struct { grpc.ClientStream } func (x *aggregatedDiscoveryServiceStreamAggregatedResourcesClient) Send(m *discovery.DiscoveryRequest) error { return x.ClientStream.SendMsg(m) } func (x *aggregatedDiscoveryServiceStreamAggregatedResourcesClient) Recv() (*discovery.DiscoveryResponse, error) { m := new(discovery.DiscoveryResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func (c *aggregatedDiscoveryServiceClient) DeltaAggregatedResources(ctx context.Context, opts ...grpc.CallOption) (AggregatedDiscoveryService_DeltaAggregatedResourcesClient, error) { stream, err := c.cc.NewStream(ctx, &_AggregatedDiscoveryService_serviceDesc.Streams[1], "/envoy.service.discovery.v2.AggregatedDiscoveryService/DeltaAggregatedResources", opts...) if err != nil { return nil, err } x := &aggregatedDiscoveryServiceDeltaAggregatedResourcesClient{stream} return x, nil } type AggregatedDiscoveryService_DeltaAggregatedResourcesClient interface { Send(*discovery.DeltaDiscoveryRequest) error Recv() (*discovery.DeltaDiscoveryResponse, error) grpc.ClientStream } type aggregatedDiscoveryServiceDeltaAggregatedResourcesClient struct { grpc.ClientStream } func (x *aggregatedDiscoveryServiceDeltaAggregatedResourcesClient) Send(m *discovery.DeltaDiscoveryRequest) error { return x.ClientStream.SendMsg(m) } func (x *aggregatedDiscoveryServiceDeltaAggregatedResourcesClient) Recv() (*discovery.DeltaDiscoveryResponse, error) { m := new(discovery.DeltaDiscoveryResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } // AggregatedDiscoveryServiceServer is the server API for AggregatedDiscoveryService service. 
type AggregatedDiscoveryServiceServer interface { StreamAggregatedResources(AggregatedDiscoveryService_StreamAggregatedResourcesServer) error DeltaAggregatedResources(AggregatedDiscoveryService_DeltaAggregatedResourcesServer) error } func RegisterAggregatedDiscoveryServiceServer(s *grpc.Server, srv AggregatedDiscoveryServiceServer) { s.RegisterService(&_AggregatedDiscoveryService_serviceDesc, srv) } func _AggregatedDiscoveryService_StreamAggregatedResources_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(AggregatedDiscoveryServiceServer).StreamAggregatedResources(&aggregatedDiscoveryServiceStreamAggregatedResourcesServer{stream}) } type AggregatedDiscoveryService_StreamAggregatedResourcesServer interface { Send(*discovery.DiscoveryResponse) error Recv() (*discovery.DiscoveryRequest, error) grpc.ServerStream } type aggregatedDiscoveryServiceStreamAggregatedResourcesServer struct { grpc.ServerStream } func (x *aggregatedDiscoveryServiceStreamAggregatedResourcesServer) Send(m *discovery.DiscoveryResponse) error { return x.ServerStream.SendMsg(m) } func (x *aggregatedDiscoveryServiceStreamAggregatedResourcesServer) Recv() (*discovery.DiscoveryRequest, error) { m := new(discovery.DiscoveryRequest) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func _AggregatedDiscoveryService_DeltaAggregatedResources_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(AggregatedDiscoveryServiceServer).DeltaAggregatedResources(&aggregatedDiscoveryServiceDeltaAggregatedResourcesServer{stream}) } type AggregatedDiscoveryService_DeltaAggregatedResourcesServer interface { Send(*discovery.DeltaDiscoveryResponse) error Recv() (*discovery.DeltaDiscoveryRequest, error) grpc.ServerStream } type aggregatedDiscoveryServiceDeltaAggregatedResourcesServer struct { grpc.ServerStream } func (x *aggregatedDiscoveryServiceDeltaAggregatedResourcesServer) Send(m *discovery.DeltaDiscoveryResponse) error { return x.ServerStream.SendMsg(m) } func (x *aggregatedDiscoveryServiceDeltaAggregatedResourcesServer) Recv() (*discovery.DeltaDiscoveryRequest, error) { m := new(discovery.DeltaDiscoveryRequest) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } var _AggregatedDiscoveryService_serviceDesc = grpc.ServiceDesc{ ServiceName: "envoy.service.discovery.v2.AggregatedDiscoveryService", HandlerType: (*AggregatedDiscoveryServiceServer)(nil), Methods: []grpc.MethodDesc{}, Streams: []grpc.StreamDesc{ { StreamName: "StreamAggregatedResources", Handler: _AggregatedDiscoveryService_StreamAggregatedResources_Handler, ServerStreams: true, ClientStreams: true, }, { StreamName: "DeltaAggregatedResources", Handler: _AggregatedDiscoveryService_DeltaAggregatedResources_Handler, ServerStreams: true, ClientStreams: true, }, }, Metadata: "envoy/service/discovery/v2/ads.proto", } func init() { proto.RegisterFile("envoy/service/discovery/v2/ads.proto", fileDescriptor_ads_e4cd1a296681dd94) } var fileDescriptor_ads_e4cd1a296681dd94 = []byte{ // 236 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x7c, 0x90, 0xb1, 0x4e, 0xc3, 0x30, 0x10, 0x86, 0x31, 0x03, 0x42, 0x1e, 0x33, 0x41, 0x84, 0x40, 0x2a, 0x1d, 0x3a, 0x5d, 0x90, 0x99, 0x19, 0x5a, 0xe5, 0x01, 0xaa, 0x76, 0x63, 0x73, 0x93, 0x53, 0x64, 0x41, 0x7a, 0xc6, 0xe7, 0x58, 0xf8, 0x0d, 0x78, 0x59, 0xde, 0x01, 0x39, 0x86, 0x16, 0x01, 0x61, 0xbe, 0xef, 0xff, 0xff, 0xd3, 0x27, 0xe7, 0xb8, 0x0f, 0x14, 0x2b, 0x46, 0x17, 0x4c, 0x83, 0x55, 0x6b, 0xb8, 0xa1, 0x80, 
0x2e, 0x56, 0x41, 0x55, 0xba, 0x65, 0xb0, 0x8e, 0x3c, 0x15, 0xe5, 0x48, 0xc1, 0x27, 0x05, 0x07, 0x0a, 0x82, 0x2a, 0xaf, 0x72, 0x83, 0xb6, 0x26, 0x65, 0x8e, 0xa7, 0x31, 0x39, 0x93, 0xf2, 0x7c, 0xd9, 0x72, 0x3d, 0xf4, 0x7d, 0x54, 0xef, 0x42, 0x96, 0xcb, 0xae, 0x73, 0xd8, 0x69, 0x8f, 0x6d, 0xfd, 0x45, 0x6e, 0x73, 0x6b, 0xb1, 0x93, 0x97, 0x5b, 0xef, 0x50, 0xf7, 0x47, 0x66, 0x83, 0x4c, 0x83, 0x6b, 0x90, 0x8b, 0x6b, 0xc8, 0x2f, 0x68, 0x6b, 0x20, 0x28, 0x38, 0x84, 0x37, 0xf8, 0x32, 0x20, 0xfb, 0xf2, 0x66, 0xf2, 0xce, 0x96, 0xf6, 0x8c, 0xb3, 0x93, 0x85, 0xb8, 0x13, 0xc5, 0x93, 0xbc, 0xa8, 0xf1, 0xd9, 0xeb, 0xbf, 0x26, 0x6e, 0x7f, 0x54, 0x24, 0xee, 0xd7, 0xce, 0xfc, 0x7f, 0xe8, 0xfb, 0xd8, 0xea, 0x41, 0x2e, 0x0c, 0x65, 0xde, 0x3a, 0x7a, 0x8d, 0x30, 0x6d, 0x71, 0x95, 0x2c, 0xad, 0x93, 0xb1, 0xb5, 0x78, 0x3c, 0x0d, 0xea, 0x4d, 0x88, 0xdd, 0xd9, 0x68, 0xf0, 0xfe, 0x23, 0x00, 0x00, 0xff, 0xff, 0x7e, 0x66, 0x01, 0x47, 0xa3, 0x01, 0x00, 0x00, } grpc-go-1.22.1/balancer/xds/internal/proto/envoy/service/load_stats/000077500000000000000000000000001351635773100254275ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/envoy/service/load_stats/v2/000077500000000000000000000000001351635773100257565ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/envoy/service/load_stats/v2/lrs/000077500000000000000000000000001351635773100265565ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/envoy/service/load_stats/v2/lrs/lrs.pb.go000077500000000000000000000257571351635773100303300ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: envoy/service/load_stats/v2/lrs.proto package v2 import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import duration "github.com/golang/protobuf/ptypes/duration" import base "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/core/base" import load_report "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/endpoint/load_report" import _ "google.golang.org/grpc/balancer/xds/internal/proto/validate" import ( context "golang.org/x/net/context" grpc "google.golang.org/grpc" ) // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. 
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type LoadStatsRequest struct { Node *base.Node `protobuf:"bytes,1,opt,name=node,proto3" json:"node,omitempty"` ClusterStats []*load_report.ClusterStats `protobuf:"bytes,2,rep,name=cluster_stats,json=clusterStats,proto3" json:"cluster_stats,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *LoadStatsRequest) Reset() { *m = LoadStatsRequest{} } func (m *LoadStatsRequest) String() string { return proto.CompactTextString(m) } func (*LoadStatsRequest) ProtoMessage() {} func (*LoadStatsRequest) Descriptor() ([]byte, []int) { return fileDescriptor_lrs_0f8f2c1d40a1b9f7, []int{0} } func (m *LoadStatsRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_LoadStatsRequest.Unmarshal(m, b) } func (m *LoadStatsRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_LoadStatsRequest.Marshal(b, m, deterministic) } func (dst *LoadStatsRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_LoadStatsRequest.Merge(dst, src) } func (m *LoadStatsRequest) XXX_Size() int { return xxx_messageInfo_LoadStatsRequest.Size(m) } func (m *LoadStatsRequest) XXX_DiscardUnknown() { xxx_messageInfo_LoadStatsRequest.DiscardUnknown(m) } var xxx_messageInfo_LoadStatsRequest proto.InternalMessageInfo func (m *LoadStatsRequest) GetNode() *base.Node { if m != nil { return m.Node } return nil } func (m *LoadStatsRequest) GetClusterStats() []*load_report.ClusterStats { if m != nil { return m.ClusterStats } return nil } type LoadStatsResponse struct { Clusters []string `protobuf:"bytes,1,rep,name=clusters,proto3" json:"clusters,omitempty"` LoadReportingInterval *duration.Duration `protobuf:"bytes,2,opt,name=load_reporting_interval,json=loadReportingInterval,proto3" json:"load_reporting_interval,omitempty"` ReportEndpointGranularity bool `protobuf:"varint,3,opt,name=report_endpoint_granularity,json=reportEndpointGranularity,proto3" json:"report_endpoint_granularity,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *LoadStatsResponse) Reset() { *m = LoadStatsResponse{} } func (m *LoadStatsResponse) String() string { return proto.CompactTextString(m) } func (*LoadStatsResponse) ProtoMessage() {} func (*LoadStatsResponse) Descriptor() ([]byte, []int) { return fileDescriptor_lrs_0f8f2c1d40a1b9f7, []int{1} } func (m *LoadStatsResponse) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_LoadStatsResponse.Unmarshal(m, b) } func (m *LoadStatsResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_LoadStatsResponse.Marshal(b, m, deterministic) } func (dst *LoadStatsResponse) XXX_Merge(src proto.Message) { xxx_messageInfo_LoadStatsResponse.Merge(dst, src) } func (m *LoadStatsResponse) XXX_Size() int { return xxx_messageInfo_LoadStatsResponse.Size(m) } func (m *LoadStatsResponse) XXX_DiscardUnknown() { xxx_messageInfo_LoadStatsResponse.DiscardUnknown(m) } var xxx_messageInfo_LoadStatsResponse proto.InternalMessageInfo func (m *LoadStatsResponse) GetClusters() []string { if m != nil { return m.Clusters } return nil } func (m *LoadStatsResponse) GetLoadReportingInterval() *duration.Duration { if m != nil { return m.LoadReportingInterval } return nil } func (m *LoadStatsResponse) GetReportEndpointGranularity() bool { if m != nil { return m.ReportEndpointGranularity } return false } func init() { 
proto.RegisterType((*LoadStatsRequest)(nil), "envoy.service.load_stats.v2.LoadStatsRequest") proto.RegisterType((*LoadStatsResponse)(nil), "envoy.service.load_stats.v2.LoadStatsResponse") } // Reference imports to suppress errors if they are not otherwise used. var _ context.Context var _ grpc.ClientConn // This is a compile-time assertion to ensure that this generated file // is compatible with the grpc package it is being compiled against. const _ = grpc.SupportPackageIsVersion4 // LoadReportingServiceClient is the client API for LoadReportingService service. // // For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream. type LoadReportingServiceClient interface { StreamLoadStats(ctx context.Context, opts ...grpc.CallOption) (LoadReportingService_StreamLoadStatsClient, error) } type loadReportingServiceClient struct { cc *grpc.ClientConn } func NewLoadReportingServiceClient(cc *grpc.ClientConn) LoadReportingServiceClient { return &loadReportingServiceClient{cc} } func (c *loadReportingServiceClient) StreamLoadStats(ctx context.Context, opts ...grpc.CallOption) (LoadReportingService_StreamLoadStatsClient, error) { stream, err := c.cc.NewStream(ctx, &_LoadReportingService_serviceDesc.Streams[0], "/envoy.service.load_stats.v2.LoadReportingService/StreamLoadStats", opts...) if err != nil { return nil, err } x := &loadReportingServiceStreamLoadStatsClient{stream} return x, nil } type LoadReportingService_StreamLoadStatsClient interface { Send(*LoadStatsRequest) error Recv() (*LoadStatsResponse, error) grpc.ClientStream } type loadReportingServiceStreamLoadStatsClient struct { grpc.ClientStream } func (x *loadReportingServiceStreamLoadStatsClient) Send(m *LoadStatsRequest) error { return x.ClientStream.SendMsg(m) } func (x *loadReportingServiceStreamLoadStatsClient) Recv() (*LoadStatsResponse, error) { m := new(LoadStatsResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } // LoadReportingServiceServer is the server API for LoadReportingService service. 
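//
// A hedged implementation sketch follows (illustrative only; the lrsServer
// type, the example cluster name and the reporting logic are assumptions,
// not part of this generated file):
//
//	type lrsServer struct{}
//
//	// Register with a grpc.Server via RegisterLoadReportingServiceServer(srv, &lrsServer{}).
//	func (s *lrsServer) StreamLoadStats(stream LoadReportingService_StreamLoadStatsServer) error {
//		for {
//			req, err := stream.Recv() // *LoadStatsRequest carrying node and cluster stats
//			if err != nil {
//				return err
//			}
//			_ = req // record the reported load here
//			// Tell the client which clusters to report on.
//			if err := stream.Send(&LoadStatsResponse{Clusters: []string{"example-cluster"}}); err != nil {
//				return err
//			}
//		}
//	}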
type LoadReportingServiceServer interface { StreamLoadStats(LoadReportingService_StreamLoadStatsServer) error } func RegisterLoadReportingServiceServer(s *grpc.Server, srv LoadReportingServiceServer) { s.RegisterService(&_LoadReportingService_serviceDesc, srv) } func _LoadReportingService_StreamLoadStats_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(LoadReportingServiceServer).StreamLoadStats(&loadReportingServiceStreamLoadStatsServer{stream}) } type LoadReportingService_StreamLoadStatsServer interface { Send(*LoadStatsResponse) error Recv() (*LoadStatsRequest, error) grpc.ServerStream } type loadReportingServiceStreamLoadStatsServer struct { grpc.ServerStream } func (x *loadReportingServiceStreamLoadStatsServer) Send(m *LoadStatsResponse) error { return x.ServerStream.SendMsg(m) } func (x *loadReportingServiceStreamLoadStatsServer) Recv() (*LoadStatsRequest, error) { m := new(LoadStatsRequest) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } var _LoadReportingService_serviceDesc = grpc.ServiceDesc{ ServiceName: "envoy.service.load_stats.v2.LoadReportingService", HandlerType: (*LoadReportingServiceServer)(nil), Methods: []grpc.MethodDesc{}, Streams: []grpc.StreamDesc{ { StreamName: "StreamLoadStats", Handler: _LoadReportingService_StreamLoadStats_Handler, ServerStreams: true, ClientStreams: true, }, }, Metadata: "envoy/service/load_stats/v2/lrs.proto", } func init() { proto.RegisterFile("envoy/service/load_stats/v2/lrs.proto", fileDescriptor_lrs_0f8f2c1d40a1b9f7) } var fileDescriptor_lrs_0f8f2c1d40a1b9f7 = []byte{ // 429 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x8c, 0x92, 0x41, 0x8e, 0xd3, 0x30, 0x14, 0x86, 0x71, 0x0b, 0xa8, 0x78, 0x40, 0x80, 0x05, 0x6a, 0xa6, 0x83, 0x50, 0x55, 0x04, 0x04, 0x21, 0x6c, 0x14, 0xf6, 0xb3, 0x28, 0x20, 0x40, 0xaa, 0xd0, 0x90, 0xee, 0xd8, 0x54, 0x6e, 0xf2, 0x88, 0x2c, 0x05, 0xbf, 0x60, 0x3b, 0x16, 0xbd, 0x01, 0x6c, 0x58, 0x70, 0x1c, 0x56, 0x9c, 0x80, 0x7b, 0x70, 0x0b, 0x94, 0x38, 0x99, 0xc9, 0xb0, 0xa8, 0x66, 0x17, 0xeb, 0x7d, 0xff, 0xf3, 0xff, 0xff, 0x31, 0x7d, 0x08, 0xda, 0xe3, 0x4e, 0x58, 0x30, 0x5e, 0x65, 0x20, 0x4a, 0x94, 0xf9, 0xc6, 0x3a, 0xe9, 0xac, 0xf0, 0x89, 0x28, 0x8d, 0xe5, 0x95, 0x41, 0x87, 0xec, 0xa8, 0xc5, 0x78, 0x87, 0xf1, 0x33, 0x8c, 0xfb, 0x64, 0x76, 0x2f, 0xec, 0x90, 0x95, 0x6a, 0x44, 0x19, 0x1a, 0x10, 0x5b, 0x69, 0x21, 0x48, 0x67, 0x8f, 0xcf, 0x4d, 0x41, 0xe7, 0x15, 0x2a, 0xed, 0xc2, 0x4d, 0x06, 0x2a, 0x34, 0xae, 0x03, 0xef, 0x17, 0x88, 0x45, 0x09, 0xa2, 0x3d, 0x6d, 0xeb, 0x4f, 0x22, 0xaf, 0x8d, 0x74, 0x0a, 0x75, 0x37, 0x9f, 0x7a, 0x59, 0xaa, 0x5c, 0x3a, 0x10, 0xfd, 0x47, 0x18, 0x2c, 0xbe, 0x13, 0x7a, 0x6b, 0x85, 0x32, 0x5f, 0x37, 0x86, 0x52, 0xf8, 0x52, 0x83, 0x75, 0xec, 0x29, 0xbd, 0xac, 0x31, 0x87, 0x88, 0xcc, 0x49, 0x7c, 0x90, 0x4c, 0x79, 0x08, 0x20, 0x2b, 0xc5, 0x7d, 0xc2, 0x1b, 0x8f, 0xfc, 0x3d, 0xe6, 0x90, 0xb6, 0x10, 0x7b, 0x4b, 0x6f, 0x64, 0x65, 0x6d, 0x1d, 0x98, 0x90, 0x2a, 0x1a, 0xcd, 0xc7, 0xf1, 0x41, 0xf2, 0xe0, 0xbc, 0xaa, 0xf7, 0xce, 0x5f, 0x06, 0x36, 0xdc, 0x77, 0x3d, 0x1b, 0x9c, 0x16, 0x7f, 0x08, 0xbd, 0x3d, 0xf0, 0x62, 0x2b, 0xd4, 0x16, 0xd8, 0x23, 0x3a, 0xe9, 0x28, 0x1b, 0x91, 0xf9, 0x38, 0xbe, 0xb6, 0xa4, 0xbf, 0xfe, 0xfe, 0x1e, 0x5f, 0xf9, 0x49, 0x46, 0x13, 0x92, 0x9e, 0xce, 0xd8, 0x07, 0x3a, 0x1d, 0xf4, 0xa2, 0x74, 0xb1, 0x51, 0xda, 0x81, 0xf1, 0xb2, 0x8c, 0x46, 0x6d, 0x8e, 0x43, 0x1e, 0x4a, 0xe2, 0x7d, 0x49, 0xfc, 0x55, 0x57, 0x52, 0x7a, 0xb7, 0x51, 0xa6, 0xbd, 0xf0, 0x5d, 0xa7, 0x63, 0xc7, 0xf4, 0x28, 
0x6c, 0xdb, 0xf4, 0xf6, 0x37, 0x85, 0x91, 0xba, 0x2e, 0xa5, 0x51, 0x6e, 0x17, 0x8d, 0xe7, 0x24, 0x9e, 0xa4, 0x87, 0x01, 0x79, 0xdd, 0x11, 0x6f, 0xce, 0x80, 0xe4, 0x07, 0xa1, 0x77, 0x56, 0xc3, 0xcd, 0xeb, 0xf0, 0x06, 0x98, 0xa7, 0x37, 0xd7, 0xce, 0x80, 0xfc, 0x7c, 0x1a, 0x97, 0x3d, 0xe3, 0x7b, 0x9e, 0x09, 0xff, 0xff, 0x17, 0xcd, 0xf8, 0x45, 0xf1, 0xd0, 0xe2, 0xe2, 0x52, 0x4c, 0x9e, 0x93, 0xe5, 0x31, 0x7d, 0xa2, 0x30, 0x28, 0x2b, 0x83, 0x5f, 0x77, 0xfb, 0x96, 0x2c, 0x27, 0x2b, 0x63, 0x4f, 0x9a, 0xaa, 0x4e, 0xc8, 0xc7, 0x91, 0x4f, 0xbe, 0x11, 0xb2, 0xbd, 0xda, 0x56, 0xf7, 0xe2, 0x5f, 0x00, 0x00, 0x00, 0xff, 0xff, 0x44, 0xa2, 0xf5, 0xb3, 0xfa, 0x02, 0x00, 0x00, } grpc-go-1.22.1/balancer/xds/internal/proto/envoy/type/000077500000000000000000000000001351635773100226135ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/envoy/type/percent/000077500000000000000000000000001351635773100242535ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/envoy/type/percent/percent.pb.go000077500000000000000000000147711351635773100266570ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: envoy/type/percent.proto package envoy_type import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import _ "google.golang.org/grpc/balancer/xds/internal/proto/validate" // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type FractionalPercent_DenominatorType int32 const ( FractionalPercent_HUNDRED FractionalPercent_DenominatorType = 0 FractionalPercent_TEN_THOUSAND FractionalPercent_DenominatorType = 1 FractionalPercent_MILLION FractionalPercent_DenominatorType = 2 ) var FractionalPercent_DenominatorType_name = map[int32]string{ 0: "HUNDRED", 1: "TEN_THOUSAND", 2: "MILLION", } var FractionalPercent_DenominatorType_value = map[string]int32{ "HUNDRED": 0, "TEN_THOUSAND": 1, "MILLION": 2, } func (x FractionalPercent_DenominatorType) String() string { return proto.EnumName(FractionalPercent_DenominatorType_name, int32(x)) } func (FractionalPercent_DenominatorType) EnumDescriptor() ([]byte, []int) { return fileDescriptor_percent_cd85ccaca181f641, []int{1, 0} } type Percent struct { Value float64 `protobuf:"fixed64,1,opt,name=value,proto3" json:"value,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Percent) Reset() { *m = Percent{} } func (m *Percent) String() string { return proto.CompactTextString(m) } func (*Percent) ProtoMessage() {} func (*Percent) Descriptor() ([]byte, []int) { return fileDescriptor_percent_cd85ccaca181f641, []int{0} } func (m *Percent) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Percent.Unmarshal(m, b) } func (m *Percent) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Percent.Marshal(b, m, deterministic) } func (dst *Percent) XXX_Merge(src proto.Message) { xxx_messageInfo_Percent.Merge(dst, src) } func (m *Percent) XXX_Size() int { return xxx_messageInfo_Percent.Size(m) } func (m *Percent) XXX_DiscardUnknown() { xxx_messageInfo_Percent.DiscardUnknown(m) } var xxx_messageInfo_Percent 
proto.InternalMessageInfo func (m *Percent) GetValue() float64 { if m != nil { return m.Value } return 0 } type FractionalPercent struct { Numerator uint32 `protobuf:"varint,1,opt,name=numerator,proto3" json:"numerator,omitempty"` Denominator FractionalPercent_DenominatorType `protobuf:"varint,2,opt,name=denominator,proto3,enum=envoy.type.FractionalPercent_DenominatorType" json:"denominator,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *FractionalPercent) Reset() { *m = FractionalPercent{} } func (m *FractionalPercent) String() string { return proto.CompactTextString(m) } func (*FractionalPercent) ProtoMessage() {} func (*FractionalPercent) Descriptor() ([]byte, []int) { return fileDescriptor_percent_cd85ccaca181f641, []int{1} } func (m *FractionalPercent) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_FractionalPercent.Unmarshal(m, b) } func (m *FractionalPercent) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_FractionalPercent.Marshal(b, m, deterministic) } func (dst *FractionalPercent) XXX_Merge(src proto.Message) { xxx_messageInfo_FractionalPercent.Merge(dst, src) } func (m *FractionalPercent) XXX_Size() int { return xxx_messageInfo_FractionalPercent.Size(m) } func (m *FractionalPercent) XXX_DiscardUnknown() { xxx_messageInfo_FractionalPercent.DiscardUnknown(m) } var xxx_messageInfo_FractionalPercent proto.InternalMessageInfo func (m *FractionalPercent) GetNumerator() uint32 { if m != nil { return m.Numerator } return 0 } func (m *FractionalPercent) GetDenominator() FractionalPercent_DenominatorType { if m != nil { return m.Denominator } return FractionalPercent_HUNDRED } func init() { proto.RegisterType((*Percent)(nil), "envoy.type.Percent") proto.RegisterType((*FractionalPercent)(nil), "envoy.type.FractionalPercent") proto.RegisterEnum("envoy.type.FractionalPercent_DenominatorType", FractionalPercent_DenominatorType_name, FractionalPercent_DenominatorType_value) } func init() { proto.RegisterFile("envoy/type/percent.proto", fileDescriptor_percent_cd85ccaca181f641) } var fileDescriptor_percent_cd85ccaca181f641 = []byte{ // 277 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x92, 0x48, 0xcd, 0x2b, 0xcb, 0xaf, 0xd4, 0x2f, 0xa9, 0x2c, 0x48, 0xd5, 0x2f, 0x48, 0x2d, 0x4a, 0x4e, 0xcd, 0x2b, 0xd1, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0xe2, 0x02, 0xcb, 0xe8, 0x81, 0x64, 0xa4, 0xc4, 0xcb, 0x12, 0x73, 0x32, 0x53, 0x12, 0x4b, 0x52, 0xf5, 0x61, 0x0c, 0x88, 0x22, 0x25, 0x2b, 0x2e, 0xf6, 0x00, 0x88, 0x2e, 0x21, 0x7d, 0x2e, 0xd6, 0xb2, 0xc4, 0x9c, 0xd2, 0x54, 0x09, 0x46, 0x05, 0x46, 0x0d, 0x46, 0x27, 0xc9, 0x5d, 0x2f, 0x0f, 0x30, 0x8b, 0x08, 0x09, 0x49, 0x32, 0x80, 0x41, 0xa4, 0x83, 0x26, 0x03, 0x14, 0x04, 0x41, 0xd4, 0x29, 0x9d, 0x65, 0xe4, 0x12, 0x74, 0x2b, 0x4a, 0x4c, 0x2e, 0xc9, 0xcc, 0xcf, 0x4b, 0xcc, 0x81, 0x19, 0x23, 0xc3, 0xc5, 0x99, 0x57, 0x9a, 0x9b, 0x5a, 0x94, 0x58, 0x92, 0x5f, 0x04, 0x36, 0x8a, 0x37, 0x08, 0x21, 0x20, 0x14, 0xcd, 0xc5, 0x9d, 0x92, 0x9a, 0x97, 0x9f, 0x9b, 0x99, 0x07, 0x96, 0x67, 0x52, 0x60, 0xd4, 0xe0, 0x33, 0xd2, 0xd5, 0x43, 0x38, 0x55, 0x0f, 0xc3, 0x44, 0x3d, 0x17, 0x84, 0x86, 0x90, 0xca, 0x82, 0x54, 0x27, 0x2e, 0x90, 0xcb, 0x58, 0x9b, 0x18, 0x99, 0x04, 0x18, 0x83, 0x90, 0x4d, 0x53, 0xb2, 0xe5, 0xe2, 0x47, 0x53, 0x2b, 0xc4, 0xcd, 0xc5, 0xee, 0x11, 0xea, 0xe7, 0x12, 0xe4, 0xea, 0x22, 0xc0, 0x20, 0x24, 0xc0, 0xc5, 0x13, 0xe2, 0xea, 0x17, 0x1f, 0xe2, 0xe1, 0x1f, 0x1a, 0xec, 0xe8, 0xe7, 0x22, 
0xc0, 0x08, 0x92, 0xf6, 0xf5, 0xf4, 0xf1, 0xf1, 0xf4, 0xf7, 0x13, 0x60, 0x72, 0xd2, 0xe2, 0x92, 0xc8, 0xcc, 0x87, 0x38, 0xa5, 0xa0, 0x28, 0xbf, 0xa2, 0x12, 0xc9, 0x55, 0x4e, 0x3c, 0x50, 0xc7, 0x04, 0x80, 0x42, 0x2d, 0x80, 0x31, 0x89, 0x0d, 0x1c, 0x7c, 0xc6, 0x80, 0x00, 0x00, 0x00, 0xff, 0xff, 0x3d, 0x79, 0xd6, 0xbb, 0x7f, 0x01, 0x00, 0x00, } grpc-go-1.22.1/balancer/xds/internal/proto/envoy/type/range/000077500000000000000000000000001351635773100237075ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/envoy/type/range/range.pb.go000077500000000000000000000111651351635773100257410ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: envoy/type/range.proto package envoy_type import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type Int64Range struct { Start int64 `protobuf:"varint,1,opt,name=start,proto3" json:"start,omitempty"` End int64 `protobuf:"varint,2,opt,name=end,proto3" json:"end,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Int64Range) Reset() { *m = Int64Range{} } func (m *Int64Range) String() string { return proto.CompactTextString(m) } func (*Int64Range) ProtoMessage() {} func (*Int64Range) Descriptor() ([]byte, []int) { return fileDescriptor_range_b0dd53fd27ccc9b2, []int{0} } func (m *Int64Range) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Int64Range.Unmarshal(m, b) } func (m *Int64Range) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Int64Range.Marshal(b, m, deterministic) } func (dst *Int64Range) XXX_Merge(src proto.Message) { xxx_messageInfo_Int64Range.Merge(dst, src) } func (m *Int64Range) XXX_Size() int { return xxx_messageInfo_Int64Range.Size(m) } func (m *Int64Range) XXX_DiscardUnknown() { xxx_messageInfo_Int64Range.DiscardUnknown(m) } var xxx_messageInfo_Int64Range proto.InternalMessageInfo func (m *Int64Range) GetStart() int64 { if m != nil { return m.Start } return 0 } func (m *Int64Range) GetEnd() int64 { if m != nil { return m.End } return 0 } type DoubleRange struct { Start float64 `protobuf:"fixed64,1,opt,name=start,proto3" json:"start,omitempty"` End float64 `protobuf:"fixed64,2,opt,name=end,proto3" json:"end,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *DoubleRange) Reset() { *m = DoubleRange{} } func (m *DoubleRange) String() string { return proto.CompactTextString(m) } func (*DoubleRange) ProtoMessage() {} func (*DoubleRange) Descriptor() ([]byte, []int) { return fileDescriptor_range_b0dd53fd27ccc9b2, []int{1} } func (m *DoubleRange) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_DoubleRange.Unmarshal(m, b) } func (m *DoubleRange) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_DoubleRange.Marshal(b, m, deterministic) } func (dst *DoubleRange) XXX_Merge(src proto.Message) { xxx_messageInfo_DoubleRange.Merge(dst, src) } func (m *DoubleRange) XXX_Size() int { return 
xxx_messageInfo_DoubleRange.Size(m) } func (m *DoubleRange) XXX_DiscardUnknown() { xxx_messageInfo_DoubleRange.DiscardUnknown(m) } var xxx_messageInfo_DoubleRange proto.InternalMessageInfo func (m *DoubleRange) GetStart() float64 { if m != nil { return m.Start } return 0 } func (m *DoubleRange) GetEnd() float64 { if m != nil { return m.End } return 0 } func init() { proto.RegisterType((*Int64Range)(nil), "envoy.type.Int64Range") proto.RegisterType((*DoubleRange)(nil), "envoy.type.DoubleRange") } func init() { proto.RegisterFile("envoy/type/range.proto", fileDescriptor_range_b0dd53fd27ccc9b2) } var fileDescriptor_range_b0dd53fd27ccc9b2 = []byte{ // 154 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x12, 0x4b, 0xcd, 0x2b, 0xcb, 0xaf, 0xd4, 0x2f, 0xa9, 0x2c, 0x48, 0xd5, 0x2f, 0x4a, 0xcc, 0x4b, 0x4f, 0xd5, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0xe2, 0x02, 0x8b, 0xeb, 0x81, 0xc4, 0x95, 0x4c, 0xb8, 0xb8, 0x3c, 0xf3, 0x4a, 0xcc, 0x4c, 0x82, 0x40, 0xf2, 0x42, 0x22, 0x5c, 0xac, 0xc5, 0x25, 0x89, 0x45, 0x25, 0x12, 0x8c, 0x0a, 0x8c, 0x1a, 0xcc, 0x41, 0x10, 0x8e, 0x90, 0x00, 0x17, 0x73, 0x6a, 0x5e, 0x8a, 0x04, 0x13, 0x58, 0x0c, 0xc4, 0x54, 0x32, 0xe5, 0xe2, 0x76, 0xc9, 0x2f, 0x4d, 0xca, 0x49, 0xc5, 0xa2, 0x8d, 0x11, 0x8b, 0x36, 0x46, 0xb0, 0x36, 0x27, 0x13, 0x2e, 0x89, 0xcc, 0x7c, 0x3d, 0xb0, 0xed, 0x05, 0x45, 0xf9, 0x15, 0x95, 0x7a, 0x08, 0x87, 0x38, 0x71, 0x81, 0x8d, 0x0a, 0x00, 0x39, 0x30, 0x80, 0x31, 0x0a, 0xe2, 0xc4, 0x78, 0x90, 0x4c, 0x12, 0x1b, 0xd8, 0xd5, 0xc6, 0x80, 0x00, 0x00, 0x00, 0xff, 0xff, 0x7f, 0xad, 0x02, 0xe6, 0xcf, 0x00, 0x00, 0x00, } grpc-go-1.22.1/balancer/xds/internal/proto/udpa/000077500000000000000000000000001351635773100214235ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/udpa/data/000077500000000000000000000000001351635773100223345ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/udpa/data/orca/000077500000000000000000000000001351635773100232605ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/udpa/data/orca/v1/000077500000000000000000000000001351635773100236065ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/udpa/data/orca/v1/orca_load_report/000077500000000000000000000000001351635773100271245ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/udpa/data/orca/v1/orca_load_report/orca_load_report.pb.go000077500000000000000000000124501351635773100333760ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: udpa/data/orca/v1/orca_load_report.proto package v1 import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import _ "google.golang.org/grpc/balancer/xds/internal/proto/validate" // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. 
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type OrcaLoadReport struct { CpuUtilization float64 `protobuf:"fixed64,1,opt,name=cpu_utilization,json=cpuUtilization,proto3" json:"cpu_utilization,omitempty"` MemUtilization float64 `protobuf:"fixed64,2,opt,name=mem_utilization,json=memUtilization,proto3" json:"mem_utilization,omitempty"` Rps uint64 `protobuf:"varint,3,opt,name=rps,proto3" json:"rps,omitempty"` RequestCostOrUtilization map[string]float64 `protobuf:"bytes,4,rep,name=request_cost_or_utilization,json=requestCostOrUtilization,proto3" json:"request_cost_or_utilization,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"fixed64,2,opt,name=value,proto3"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *OrcaLoadReport) Reset() { *m = OrcaLoadReport{} } func (m *OrcaLoadReport) String() string { return proto.CompactTextString(m) } func (*OrcaLoadReport) ProtoMessage() {} func (*OrcaLoadReport) Descriptor() ([]byte, []int) { return fileDescriptor_orca_load_report_21c84c96f77315d6, []int{0} } func (m *OrcaLoadReport) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_OrcaLoadReport.Unmarshal(m, b) } func (m *OrcaLoadReport) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_OrcaLoadReport.Marshal(b, m, deterministic) } func (dst *OrcaLoadReport) XXX_Merge(src proto.Message) { xxx_messageInfo_OrcaLoadReport.Merge(dst, src) } func (m *OrcaLoadReport) XXX_Size() int { return xxx_messageInfo_OrcaLoadReport.Size(m) } func (m *OrcaLoadReport) XXX_DiscardUnknown() { xxx_messageInfo_OrcaLoadReport.DiscardUnknown(m) } var xxx_messageInfo_OrcaLoadReport proto.InternalMessageInfo func (m *OrcaLoadReport) GetCpuUtilization() float64 { if m != nil { return m.CpuUtilization } return 0 } func (m *OrcaLoadReport) GetMemUtilization() float64 { if m != nil { return m.MemUtilization } return 0 } func (m *OrcaLoadReport) GetRps() uint64 { if m != nil { return m.Rps } return 0 } func (m *OrcaLoadReport) GetRequestCostOrUtilization() map[string]float64 { if m != nil { return m.RequestCostOrUtilization } return nil } func init() { proto.RegisterType((*OrcaLoadReport)(nil), "udpa.data.orca.v1.OrcaLoadReport") proto.RegisterMapType((map[string]float64)(nil), "udpa.data.orca.v1.OrcaLoadReport.RequestCostOrUtilizationEntry") } func init() { proto.RegisterFile("udpa/data/orca/v1/orca_load_report.proto", fileDescriptor_orca_load_report_21c84c96f77315d6) } var fileDescriptor_orca_load_report_21c84c96f77315d6 = []byte{ // 321 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x8c, 0x91, 0x41, 0x4b, 0xc3, 0x30, 0x14, 0xc7, 0xed, 0x3a, 0x85, 0x65, 0x30, 0xb5, 0x0a, 0xce, 0x89, 0x58, 0x3c, 0xd5, 0x4b, 0xca, 0xf4, 0x22, 0x22, 0x0c, 0x26, 0x1e, 0x44, 0x61, 0x23, 0xe0, 0xc5, 0x4b, 0x89, 0x6d, 0x0e, 0xc5, 0x76, 0x2f, 0xbe, 0x26, 0xc1, 0x7a, 0xf0, 0xe6, 0x97, 0xf2, 0xe4, 0xd7, 0xf1, 0xe6, 0x47, 0x90, 0xb4, 0x13, 0x57, 0x86, 0x62, 0x2e, 0x79, 0x8f, 0xfc, 0xdf, 0xef, 0xbd, 0xfc, 0x1f, 0x09, 0x74, 0x22, 0x79, 0x98, 0x70, 0xc5, 0x43, 0xc0, 0x98, 0x87, 0x66, 0x58, 0xdd, 0x51, 0x06, 0x3c, 0x89, 0x50, 0x48, 0x40, 0x45, 0x25, 0x82, 0x02, 0x6f, 0xd3, 0x2a, 0xa9, 0x55, 0x52, 0xab, 0xa0, 0x66, 0x38, 0xd8, 0x31, 0x3c, 0x4b, 0x13, 0xae, 0x44, 0xf8, 0x1d, 0xd4, 0xda, 0xc3, 0x57, 0x97, 0xf4, 0x26, 0x18, 0xf3, 0x1b, 0xe0, 0x09, 0xab, 0x20, 0xde, 0x15, 0x59, 0x8f, 0xa5, 0x8e, 0xb4, 0x4a, 0xb3, 0xf4, 0x99, 0xab, 0x14, 0x66, 0x7d, 0xc7, 0x77, 0x02, 
0x67, 0xec, 0xbf, 0x7d, 0xbc, 0xbb, 0x5d, 0xaf, 0x73, 0xb4, 0x32, 0x3f, 0xf3, 0x7c, 0xb7, 0xce, 0x3e, 0x47, 0xac, 0x17, 0x4b, 0x7d, 0xfb, 0x53, 0x67, 0x51, 0xb9, 0xc8, 0x1b, 0xa8, 0xd6, 0x7f, 0x51, 0xb9, 0xc8, 0x17, 0x51, 0x1b, 0xc4, 0x45, 0x59, 0xf4, 0x5d, 0xdf, 0x09, 0xda, 0xcc, 0x86, 0xde, 0x0b, 0xd9, 0x43, 0xf1, 0xa8, 0x45, 0xa1, 0xa2, 0x18, 0x0a, 0x15, 0x01, 0x36, 0x1a, 0xb5, 0x7d, 0x37, 0xe8, 0x1e, 0x8f, 0xe8, 0x92, 0x19, 0xb4, 0xf9, 0x5f, 0xca, 0x6a, 0xc8, 0x05, 0x14, 0x6a, 0x82, 0x0b, 0x2d, 0x2f, 0x67, 0x0a, 0x4b, 0xd6, 0xc7, 0x5f, 0x9e, 0x07, 0xd7, 0x64, 0xff, 0xcf, 0x52, 0x3b, 0xf2, 0x83, 0x28, 0x2b, 0xf3, 0x3a, 0xcc, 0x86, 0xde, 0x36, 0x59, 0x35, 0x3c, 0xd3, 0xa2, 0x76, 0x81, 0xd5, 0xc9, 0x59, 0xeb, 0xd4, 0x19, 0x9f, 0x93, 0x83, 0x14, 0xa8, 0x98, 0x19, 0x28, 0x25, 0xc2, 0x53, 0xb9, 0x3c, 0xf6, 0x78, 0xab, 0x39, 0xf7, 0xd4, 0xee, 0x6f, 0xea, 0xdc, 0xb5, 0xcc, 0xf0, 0x7e, 0xad, 0x5a, 0xe6, 0xc9, 0x57, 0x00, 0x00, 0x00, 0xff, 0xff, 0x1c, 0x1a, 0xcf, 0x42, 0x24, 0x02, 0x00, 0x00, } grpc-go-1.22.1/balancer/xds/internal/proto/udpa/service/000077500000000000000000000000001351635773100230635ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/udpa/service/orca/000077500000000000000000000000001351635773100240075ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/udpa/service/orca/v1/000077500000000000000000000000001351635773100243355ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/udpa/service/orca/v1/orca/000077500000000000000000000000001351635773100252615ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/udpa/service/orca/v1/orca/orca.pb.go000077500000000000000000000177161351635773100271530ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: udpa/service/orca/v1/orca.proto package v1 import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import duration "github.com/golang/protobuf/ptypes/duration" import orca_load_report "google.golang.org/grpc/balancer/xds/internal/proto/udpa/data/orca/v1/orca_load_report" import _ "google.golang.org/grpc/balancer/xds/internal/proto/validate" import ( context "golang.org/x/net/context" grpc "google.golang.org/grpc" ) // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. 
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type OrcaLoadReportRequest struct { ReportInterval *duration.Duration `protobuf:"bytes,1,opt,name=report_interval,json=reportInterval,proto3" json:"report_interval,omitempty"` RequestCostNames []string `protobuf:"bytes,2,rep,name=request_cost_names,json=requestCostNames,proto3" json:"request_cost_names,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *OrcaLoadReportRequest) Reset() { *m = OrcaLoadReportRequest{} } func (m *OrcaLoadReportRequest) String() string { return proto.CompactTextString(m) } func (*OrcaLoadReportRequest) ProtoMessage() {} func (*OrcaLoadReportRequest) Descriptor() ([]byte, []int) { return fileDescriptor_orca_ca77e509304795c3, []int{0} } func (m *OrcaLoadReportRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_OrcaLoadReportRequest.Unmarshal(m, b) } func (m *OrcaLoadReportRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_OrcaLoadReportRequest.Marshal(b, m, deterministic) } func (dst *OrcaLoadReportRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_OrcaLoadReportRequest.Merge(dst, src) } func (m *OrcaLoadReportRequest) XXX_Size() int { return xxx_messageInfo_OrcaLoadReportRequest.Size(m) } func (m *OrcaLoadReportRequest) XXX_DiscardUnknown() { xxx_messageInfo_OrcaLoadReportRequest.DiscardUnknown(m) } var xxx_messageInfo_OrcaLoadReportRequest proto.InternalMessageInfo func (m *OrcaLoadReportRequest) GetReportInterval() *duration.Duration { if m != nil { return m.ReportInterval } return nil } func (m *OrcaLoadReportRequest) GetRequestCostNames() []string { if m != nil { return m.RequestCostNames } return nil } func init() { proto.RegisterType((*OrcaLoadReportRequest)(nil), "udpa.service.orca.v1.OrcaLoadReportRequest") } // Reference imports to suppress errors if they are not otherwise used. var _ context.Context var _ grpc.ClientConn // This is a compile-time assertion to ensure that this generated file // is compatible with the grpc package it is being compiled against. const _ = grpc.SupportPackageIsVersion4 // OpenRcaServiceClient is the client API for OpenRcaService service. // // For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream. type OpenRcaServiceClient interface { StreamCoreMetrics(ctx context.Context, in *OrcaLoadReportRequest, opts ...grpc.CallOption) (OpenRcaService_StreamCoreMetricsClient, error) } type openRcaServiceClient struct { cc *grpc.ClientConn } func NewOpenRcaServiceClient(cc *grpc.ClientConn) OpenRcaServiceClient { return &openRcaServiceClient{cc} } func (c *openRcaServiceClient) StreamCoreMetrics(ctx context.Context, in *OrcaLoadReportRequest, opts ...grpc.CallOption) (OpenRcaService_StreamCoreMetricsClient, error) { stream, err := c.cc.NewStream(ctx, &_OpenRcaService_serviceDesc.Streams[0], "/udpa.service.orca.v1.OpenRcaService/StreamCoreMetrics", opts...) 
if err != nil { return nil, err } x := &openRcaServiceStreamCoreMetricsClient{stream} if err := x.ClientStream.SendMsg(in); err != nil { return nil, err } if err := x.ClientStream.CloseSend(); err != nil { return nil, err } return x, nil } type OpenRcaService_StreamCoreMetricsClient interface { Recv() (*orca_load_report.OrcaLoadReport, error) grpc.ClientStream } type openRcaServiceStreamCoreMetricsClient struct { grpc.ClientStream } func (x *openRcaServiceStreamCoreMetricsClient) Recv() (*orca_load_report.OrcaLoadReport, error) { m := new(orca_load_report.OrcaLoadReport) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } // OpenRcaServiceServer is the server API for OpenRcaService service. type OpenRcaServiceServer interface { StreamCoreMetrics(*OrcaLoadReportRequest, OpenRcaService_StreamCoreMetricsServer) error } func RegisterOpenRcaServiceServer(s *grpc.Server, srv OpenRcaServiceServer) { s.RegisterService(&_OpenRcaService_serviceDesc, srv) } func _OpenRcaService_StreamCoreMetrics_Handler(srv interface{}, stream grpc.ServerStream) error { m := new(OrcaLoadReportRequest) if err := stream.RecvMsg(m); err != nil { return err } return srv.(OpenRcaServiceServer).StreamCoreMetrics(m, &openRcaServiceStreamCoreMetricsServer{stream}) } type OpenRcaService_StreamCoreMetricsServer interface { Send(*orca_load_report.OrcaLoadReport) error grpc.ServerStream } type openRcaServiceStreamCoreMetricsServer struct { grpc.ServerStream } func (x *openRcaServiceStreamCoreMetricsServer) Send(m *orca_load_report.OrcaLoadReport) error { return x.ServerStream.SendMsg(m) } var _OpenRcaService_serviceDesc = grpc.ServiceDesc{ ServiceName: "udpa.service.orca.v1.OpenRcaService", HandlerType: (*OpenRcaServiceServer)(nil), Methods: []grpc.MethodDesc{}, Streams: []grpc.StreamDesc{ { StreamName: "StreamCoreMetrics", Handler: _OpenRcaService_StreamCoreMetrics_Handler, ServerStreams: true, }, }, Metadata: "udpa/service/orca/v1/orca.proto", } func init() { proto.RegisterFile("udpa/service/orca/v1/orca.proto", fileDescriptor_orca_ca77e509304795c3) } var fileDescriptor_orca_ca77e509304795c3 = []byte{ // 300 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x74, 0x90, 0x4f, 0x4b, 0xc3, 0x40, 0x10, 0xc5, 0x49, 0x05, 0xa1, 0x2b, 0x54, 0x0d, 0x8a, 0xb5, 0x07, 0xad, 0x3d, 0x15, 0x94, 0x8d, 0xad, 0xf8, 0x05, 0x5a, 0x2f, 0x82, 0xda, 0x92, 0xde, 0xbc, 0x84, 0x69, 0x32, 0x96, 0x85, 0x34, 0x13, 0x67, 0x37, 0xab, 0xfd, 0x08, 0x7e, 0x6b, 0xc9, 0x6e, 0x7a, 0x10, 0xe2, 0x69, 0xff, 0xbc, 0xf7, 0x9b, 0xc7, 0x3c, 0x71, 0x5d, 0x65, 0x25, 0x44, 0x1a, 0xd9, 0xaa, 0x14, 0x23, 0xe2, 0x14, 0x22, 0x3b, 0x71, 0xa7, 0x2c, 0x99, 0x0c, 0x85, 0x67, 0xb5, 0x41, 0x36, 0x06, 0xe9, 0x04, 0x3b, 0x19, 0x8c, 0x1d, 0x96, 0x81, 0x81, 0x3f, 0x4c, 0x92, 0x13, 0x64, 0x09, 0x63, 0x49, 0x6c, 0x3c, 0x3f, 0xb8, 0xda, 0x10, 0x6d, 0x72, 0x8c, 0xdc, 0x6b, 0x5d, 0x7d, 0x44, 0x59, 0xc5, 0x60, 0x14, 0x15, 0x8d, 0x7e, 0x61, 0x21, 0x57, 0x19, 0x18, 0x8c, 0xf6, 0x17, 0x2f, 0x8c, 0x7e, 0x02, 0x71, 0xbe, 0xe0, 0x14, 0x5e, 0x08, 0xb2, 0xd8, 0x4d, 0x8c, 0xf1, 0xb3, 0x42, 0x6d, 0xc2, 0x99, 0x38, 0xf6, 0x11, 0x89, 0x2a, 0x0c, 0xb2, 0x85, 0xbc, 0x1f, 0x0c, 0x83, 0xf1, 0xd1, 0xf4, 0x52, 0xfa, 0x30, 0xb9, 0x0f, 0x93, 0x4f, 0x4d, 0x58, 0xdc, 0xf3, 0xc4, 0x73, 0x03, 0x84, 0x77, 0x22, 0x64, 0x3f, 0x2e, 0x49, 0x49, 0x9b, 0xa4, 0x80, 0x2d, 0xea, 0x7e, 0x67, 0x78, 0x30, 0xee, 0xc6, 0x27, 0x8d, 0x32, 0x27, 0x6d, 0xde, 0xea, 0xff, 0xe9, 0x97, 0xe8, 0x2d, 0x4a, 0x2c, 0xe2, 0x14, 0x56, 0xbe, 0x88, 0x10, 0xc5, 0xe9, 
0xca, 0x30, 0xc2, 0x76, 0x4e, 0x8c, 0xaf, 0x68, 0x58, 0xa5, 0x3a, 0xbc, 0x95, 0x6d, 0x65, 0xc9, 0xd6, 0x2d, 0x06, 0x37, 0xde, 0x5c, 0x77, 0xf8, 0x8f, 0xf3, 0x3e, 0x98, 0x3d, 0x8a, 0x91, 0x22, 0x89, 0x85, 0xa5, 0x5d, 0xc9, 0xf4, 0xbd, 0x6b, 0x0d, 0x98, 0x75, 0x6b, 0x6e, 0x59, 0xef, 0xbc, 0x0c, 0xde, 0x3b, 0x76, 0xb2, 0x3e, 0x74, 0x05, 0x3c, 0xfc, 0x06, 0x00, 0x00, 0xff, 0xff, 0xac, 0x32, 0x62, 0x96, 0xde, 0x01, 0x00, 0x00, } grpc-go-1.22.1/balancer/xds/internal/proto/validate/000077500000000000000000000000001351635773100222635ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/proto/validate/validate.pb.go000077500000000000000000002705711351635773100250220ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: validate/validate.proto package validate // import "google.golang.org/grpc/balancer/xds/internal/proto/validate" import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import descriptor "github.com/golang/protobuf/protoc-gen-go/descriptor" import duration "github.com/golang/protobuf/ptypes/duration" import timestamp "github.com/golang/protobuf/ptypes/timestamp" // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type FieldRules struct { // Types that are valid to be assigned to Type: // *FieldRules_Float // *FieldRules_Double // *FieldRules_Int32 // *FieldRules_Int64 // *FieldRules_Uint32 // *FieldRules_Uint64 // *FieldRules_Sint32 // *FieldRules_Sint64 // *FieldRules_Fixed32 // *FieldRules_Fixed64 // *FieldRules_Sfixed32 // *FieldRules_Sfixed64 // *FieldRules_Bool // *FieldRules_String_ // *FieldRules_Bytes // *FieldRules_Enum // *FieldRules_Message // *FieldRules_Repeated // *FieldRules_Map // *FieldRules_Any // *FieldRules_Duration // *FieldRules_Timestamp Type isFieldRules_Type `protobuf_oneof:"type"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *FieldRules) Reset() { *m = FieldRules{} } func (m *FieldRules) String() string { return proto.CompactTextString(m) } func (*FieldRules) ProtoMessage() {} func (*FieldRules) Descriptor() ([]byte, []int) { return fileDescriptor_validate_4e427f48c21fab34, []int{0} } func (m *FieldRules) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_FieldRules.Unmarshal(m, b) } func (m *FieldRules) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_FieldRules.Marshal(b, m, deterministic) } func (dst *FieldRules) XXX_Merge(src proto.Message) { xxx_messageInfo_FieldRules.Merge(dst, src) } func (m *FieldRules) XXX_Size() int { return xxx_messageInfo_FieldRules.Size(m) } func (m *FieldRules) XXX_DiscardUnknown() { xxx_messageInfo_FieldRules.DiscardUnknown(m) } var xxx_messageInfo_FieldRules proto.InternalMessageInfo type isFieldRules_Type interface { isFieldRules_Type() } type FieldRules_Float struct { Float *FloatRules `protobuf:"bytes,1,opt,name=float,oneof"` } type FieldRules_Double struct { Double *DoubleRules `protobuf:"bytes,2,opt,name=double,oneof"` } type FieldRules_Int32 struct { Int32 *Int32Rules `protobuf:"bytes,3,opt,name=int32,oneof"` } type FieldRules_Int64 struct { Int64 
*Int64Rules `protobuf:"bytes,4,opt,name=int64,oneof"` } type FieldRules_Uint32 struct { Uint32 *UInt32Rules `protobuf:"bytes,5,opt,name=uint32,oneof"` } type FieldRules_Uint64 struct { Uint64 *UInt64Rules `protobuf:"bytes,6,opt,name=uint64,oneof"` } type FieldRules_Sint32 struct { Sint32 *SInt32Rules `protobuf:"bytes,7,opt,name=sint32,oneof"` } type FieldRules_Sint64 struct { Sint64 *SInt64Rules `protobuf:"bytes,8,opt,name=sint64,oneof"` } type FieldRules_Fixed32 struct { Fixed32 *Fixed32Rules `protobuf:"bytes,9,opt,name=fixed32,oneof"` } type FieldRules_Fixed64 struct { Fixed64 *Fixed64Rules `protobuf:"bytes,10,opt,name=fixed64,oneof"` } type FieldRules_Sfixed32 struct { Sfixed32 *SFixed32Rules `protobuf:"bytes,11,opt,name=sfixed32,oneof"` } type FieldRules_Sfixed64 struct { Sfixed64 *SFixed64Rules `protobuf:"bytes,12,opt,name=sfixed64,oneof"` } type FieldRules_Bool struct { Bool *BoolRules `protobuf:"bytes,13,opt,name=bool,oneof"` } type FieldRules_String_ struct { String_ *StringRules `protobuf:"bytes,14,opt,name=string,oneof"` } type FieldRules_Bytes struct { Bytes *BytesRules `protobuf:"bytes,15,opt,name=bytes,oneof"` } type FieldRules_Enum struct { Enum *EnumRules `protobuf:"bytes,16,opt,name=enum,oneof"` } type FieldRules_Message struct { Message *MessageRules `protobuf:"bytes,17,opt,name=message,oneof"` } type FieldRules_Repeated struct { Repeated *RepeatedRules `protobuf:"bytes,18,opt,name=repeated,oneof"` } type FieldRules_Map struct { Map *MapRules `protobuf:"bytes,19,opt,name=map,oneof"` } type FieldRules_Any struct { Any *AnyRules `protobuf:"bytes,20,opt,name=any,oneof"` } type FieldRules_Duration struct { Duration *DurationRules `protobuf:"bytes,21,opt,name=duration,oneof"` } type FieldRules_Timestamp struct { Timestamp *TimestampRules `protobuf:"bytes,22,opt,name=timestamp,oneof"` } func (*FieldRules_Float) isFieldRules_Type() {} func (*FieldRules_Double) isFieldRules_Type() {} func (*FieldRules_Int32) isFieldRules_Type() {} func (*FieldRules_Int64) isFieldRules_Type() {} func (*FieldRules_Uint32) isFieldRules_Type() {} func (*FieldRules_Uint64) isFieldRules_Type() {} func (*FieldRules_Sint32) isFieldRules_Type() {} func (*FieldRules_Sint64) isFieldRules_Type() {} func (*FieldRules_Fixed32) isFieldRules_Type() {} func (*FieldRules_Fixed64) isFieldRules_Type() {} func (*FieldRules_Sfixed32) isFieldRules_Type() {} func (*FieldRules_Sfixed64) isFieldRules_Type() {} func (*FieldRules_Bool) isFieldRules_Type() {} func (*FieldRules_String_) isFieldRules_Type() {} func (*FieldRules_Bytes) isFieldRules_Type() {} func (*FieldRules_Enum) isFieldRules_Type() {} func (*FieldRules_Message) isFieldRules_Type() {} func (*FieldRules_Repeated) isFieldRules_Type() {} func (*FieldRules_Map) isFieldRules_Type() {} func (*FieldRules_Any) isFieldRules_Type() {} func (*FieldRules_Duration) isFieldRules_Type() {} func (*FieldRules_Timestamp) isFieldRules_Type() {} func (m *FieldRules) GetType() isFieldRules_Type { if m != nil { return m.Type } return nil } func (m *FieldRules) GetFloat() *FloatRules { if x, ok := m.GetType().(*FieldRules_Float); ok { return x.Float } return nil } func (m *FieldRules) GetDouble() *DoubleRules { if x, ok := m.GetType().(*FieldRules_Double); ok { return x.Double } return nil } func (m *FieldRules) GetInt32() *Int32Rules { if x, ok := m.GetType().(*FieldRules_Int32); ok { return x.Int32 } return nil } func (m *FieldRules) GetInt64() *Int64Rules { if x, ok := m.GetType().(*FieldRules_Int64); ok { return x.Int64 } return nil } func (m *FieldRules) GetUint32() 
*UInt32Rules { if x, ok := m.GetType().(*FieldRules_Uint32); ok { return x.Uint32 } return nil } func (m *FieldRules) GetUint64() *UInt64Rules { if x, ok := m.GetType().(*FieldRules_Uint64); ok { return x.Uint64 } return nil } func (m *FieldRules) GetSint32() *SInt32Rules { if x, ok := m.GetType().(*FieldRules_Sint32); ok { return x.Sint32 } return nil } func (m *FieldRules) GetSint64() *SInt64Rules { if x, ok := m.GetType().(*FieldRules_Sint64); ok { return x.Sint64 } return nil } func (m *FieldRules) GetFixed32() *Fixed32Rules { if x, ok := m.GetType().(*FieldRules_Fixed32); ok { return x.Fixed32 } return nil } func (m *FieldRules) GetFixed64() *Fixed64Rules { if x, ok := m.GetType().(*FieldRules_Fixed64); ok { return x.Fixed64 } return nil } func (m *FieldRules) GetSfixed32() *SFixed32Rules { if x, ok := m.GetType().(*FieldRules_Sfixed32); ok { return x.Sfixed32 } return nil } func (m *FieldRules) GetSfixed64() *SFixed64Rules { if x, ok := m.GetType().(*FieldRules_Sfixed64); ok { return x.Sfixed64 } return nil } func (m *FieldRules) GetBool() *BoolRules { if x, ok := m.GetType().(*FieldRules_Bool); ok { return x.Bool } return nil } func (m *FieldRules) GetString_() *StringRules { if x, ok := m.GetType().(*FieldRules_String_); ok { return x.String_ } return nil } func (m *FieldRules) GetBytes() *BytesRules { if x, ok := m.GetType().(*FieldRules_Bytes); ok { return x.Bytes } return nil } func (m *FieldRules) GetEnum() *EnumRules { if x, ok := m.GetType().(*FieldRules_Enum); ok { return x.Enum } return nil } func (m *FieldRules) GetMessage() *MessageRules { if x, ok := m.GetType().(*FieldRules_Message); ok { return x.Message } return nil } func (m *FieldRules) GetRepeated() *RepeatedRules { if x, ok := m.GetType().(*FieldRules_Repeated); ok { return x.Repeated } return nil } func (m *FieldRules) GetMap() *MapRules { if x, ok := m.GetType().(*FieldRules_Map); ok { return x.Map } return nil } func (m *FieldRules) GetAny() *AnyRules { if x, ok := m.GetType().(*FieldRules_Any); ok { return x.Any } return nil } func (m *FieldRules) GetDuration() *DurationRules { if x, ok := m.GetType().(*FieldRules_Duration); ok { return x.Duration } return nil } func (m *FieldRules) GetTimestamp() *TimestampRules { if x, ok := m.GetType().(*FieldRules_Timestamp); ok { return x.Timestamp } return nil } // XXX_OneofFuncs is for the internal use of the proto package. 
func (*FieldRules) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _FieldRules_OneofMarshaler, _FieldRules_OneofUnmarshaler, _FieldRules_OneofSizer, []interface{}{ (*FieldRules_Float)(nil), (*FieldRules_Double)(nil), (*FieldRules_Int32)(nil), (*FieldRules_Int64)(nil), (*FieldRules_Uint32)(nil), (*FieldRules_Uint64)(nil), (*FieldRules_Sint32)(nil), (*FieldRules_Sint64)(nil), (*FieldRules_Fixed32)(nil), (*FieldRules_Fixed64)(nil), (*FieldRules_Sfixed32)(nil), (*FieldRules_Sfixed64)(nil), (*FieldRules_Bool)(nil), (*FieldRules_String_)(nil), (*FieldRules_Bytes)(nil), (*FieldRules_Enum)(nil), (*FieldRules_Message)(nil), (*FieldRules_Repeated)(nil), (*FieldRules_Map)(nil), (*FieldRules_Any)(nil), (*FieldRules_Duration)(nil), (*FieldRules_Timestamp)(nil), } } func _FieldRules_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*FieldRules) // type switch x := m.Type.(type) { case *FieldRules_Float: b.EncodeVarint(1<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Float); err != nil { return err } case *FieldRules_Double: b.EncodeVarint(2<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Double); err != nil { return err } case *FieldRules_Int32: b.EncodeVarint(3<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Int32); err != nil { return err } case *FieldRules_Int64: b.EncodeVarint(4<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Int64); err != nil { return err } case *FieldRules_Uint32: b.EncodeVarint(5<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Uint32); err != nil { return err } case *FieldRules_Uint64: b.EncodeVarint(6<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Uint64); err != nil { return err } case *FieldRules_Sint32: b.EncodeVarint(7<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Sint32); err != nil { return err } case *FieldRules_Sint64: b.EncodeVarint(8<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Sint64); err != nil { return err } case *FieldRules_Fixed32: b.EncodeVarint(9<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Fixed32); err != nil { return err } case *FieldRules_Fixed64: b.EncodeVarint(10<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Fixed64); err != nil { return err } case *FieldRules_Sfixed32: b.EncodeVarint(11<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Sfixed32); err != nil { return err } case *FieldRules_Sfixed64: b.EncodeVarint(12<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Sfixed64); err != nil { return err } case *FieldRules_Bool: b.EncodeVarint(13<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Bool); err != nil { return err } case *FieldRules_String_: b.EncodeVarint(14<<3 | proto.WireBytes) if err := b.EncodeMessage(x.String_); err != nil { return err } case *FieldRules_Bytes: b.EncodeVarint(15<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Bytes); err != nil { return err } case *FieldRules_Enum: b.EncodeVarint(16<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Enum); err != nil { return err } case *FieldRules_Message: b.EncodeVarint(17<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Message); err != nil { return err } case *FieldRules_Repeated: b.EncodeVarint(18<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Repeated); err != nil { return err } case *FieldRules_Map: b.EncodeVarint(19<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Map); err != nil { return err } case *FieldRules_Any: b.EncodeVarint(20<<3 | proto.WireBytes) if err := 
b.EncodeMessage(x.Any); err != nil { return err } case *FieldRules_Duration: b.EncodeVarint(21<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Duration); err != nil { return err } case *FieldRules_Timestamp: b.EncodeVarint(22<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Timestamp); err != nil { return err } case nil: default: return fmt.Errorf("FieldRules.Type has unexpected type %T", x) } return nil } func _FieldRules_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*FieldRules) switch tag { case 1: // type.float if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(FloatRules) err := b.DecodeMessage(msg) m.Type = &FieldRules_Float{msg} return true, err case 2: // type.double if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(DoubleRules) err := b.DecodeMessage(msg) m.Type = &FieldRules_Double{msg} return true, err case 3: // type.int32 if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(Int32Rules) err := b.DecodeMessage(msg) m.Type = &FieldRules_Int32{msg} return true, err case 4: // type.int64 if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(Int64Rules) err := b.DecodeMessage(msg) m.Type = &FieldRules_Int64{msg} return true, err case 5: // type.uint32 if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(UInt32Rules) err := b.DecodeMessage(msg) m.Type = &FieldRules_Uint32{msg} return true, err case 6: // type.uint64 if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(UInt64Rules) err := b.DecodeMessage(msg) m.Type = &FieldRules_Uint64{msg} return true, err case 7: // type.sint32 if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(SInt32Rules) err := b.DecodeMessage(msg) m.Type = &FieldRules_Sint32{msg} return true, err case 8: // type.sint64 if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(SInt64Rules) err := b.DecodeMessage(msg) m.Type = &FieldRules_Sint64{msg} return true, err case 9: // type.fixed32 if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(Fixed32Rules) err := b.DecodeMessage(msg) m.Type = &FieldRules_Fixed32{msg} return true, err case 10: // type.fixed64 if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(Fixed64Rules) err := b.DecodeMessage(msg) m.Type = &FieldRules_Fixed64{msg} return true, err case 11: // type.sfixed32 if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(SFixed32Rules) err := b.DecodeMessage(msg) m.Type = &FieldRules_Sfixed32{msg} return true, err case 12: // type.sfixed64 if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(SFixed64Rules) err := b.DecodeMessage(msg) m.Type = &FieldRules_Sfixed64{msg} return true, err case 13: // type.bool if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(BoolRules) err := b.DecodeMessage(msg) m.Type = &FieldRules_Bool{msg} return true, err case 14: // type.string if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(StringRules) err := b.DecodeMessage(msg) m.Type = &FieldRules_String_{msg} return true, err case 15: // type.bytes if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(BytesRules) err := b.DecodeMessage(msg) m.Type = &FieldRules_Bytes{msg} return true, err case 16: // type.enum if 
wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(EnumRules) err := b.DecodeMessage(msg) m.Type = &FieldRules_Enum{msg} return true, err case 17: // type.message if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(MessageRules) err := b.DecodeMessage(msg) m.Type = &FieldRules_Message{msg} return true, err case 18: // type.repeated if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(RepeatedRules) err := b.DecodeMessage(msg) m.Type = &FieldRules_Repeated{msg} return true, err case 19: // type.map if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(MapRules) err := b.DecodeMessage(msg) m.Type = &FieldRules_Map{msg} return true, err case 20: // type.any if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(AnyRules) err := b.DecodeMessage(msg) m.Type = &FieldRules_Any{msg} return true, err case 21: // type.duration if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(DurationRules) err := b.DecodeMessage(msg) m.Type = &FieldRules_Duration{msg} return true, err case 22: // type.timestamp if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(TimestampRules) err := b.DecodeMessage(msg) m.Type = &FieldRules_Timestamp{msg} return true, err default: return false, nil } } func _FieldRules_OneofSizer(msg proto.Message) (n int) { m := msg.(*FieldRules) // type switch x := m.Type.(type) { case *FieldRules_Float: s := proto.Size(x.Float) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *FieldRules_Double: s := proto.Size(x.Double) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *FieldRules_Int32: s := proto.Size(x.Int32) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *FieldRules_Int64: s := proto.Size(x.Int64) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *FieldRules_Uint32: s := proto.Size(x.Uint32) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *FieldRules_Uint64: s := proto.Size(x.Uint64) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *FieldRules_Sint32: s := proto.Size(x.Sint32) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *FieldRules_Sint64: s := proto.Size(x.Sint64) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *FieldRules_Fixed32: s := proto.Size(x.Fixed32) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *FieldRules_Fixed64: s := proto.Size(x.Fixed64) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *FieldRules_Sfixed32: s := proto.Size(x.Sfixed32) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *FieldRules_Sfixed64: s := proto.Size(x.Sfixed64) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *FieldRules_Bool: s := proto.Size(x.Bool) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *FieldRules_String_: s := proto.Size(x.String_) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *FieldRules_Bytes: s := proto.Size(x.Bytes) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *FieldRules_Enum: s := proto.Size(x.Enum) n += 2 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *FieldRules_Message: s := proto.Size(x.Message) n += 2 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *FieldRules_Repeated: s := proto.Size(x.Repeated) n += 2 // tag and wire n += 
proto.SizeVarint(uint64(s)) n += s case *FieldRules_Map: s := proto.Size(x.Map) n += 2 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *FieldRules_Any: s := proto.Size(x.Any) n += 2 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *FieldRules_Duration: s := proto.Size(x.Duration) n += 2 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *FieldRules_Timestamp: s := proto.Size(x.Timestamp) n += 2 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } type FloatRules struct { Const *float32 `protobuf:"fixed32,1,opt,name=const" json:"const,omitempty"` Lt *float32 `protobuf:"fixed32,2,opt,name=lt" json:"lt,omitempty"` Lte *float32 `protobuf:"fixed32,3,opt,name=lte" json:"lte,omitempty"` Gt *float32 `protobuf:"fixed32,4,opt,name=gt" json:"gt,omitempty"` Gte *float32 `protobuf:"fixed32,5,opt,name=gte" json:"gte,omitempty"` In []float32 `protobuf:"fixed32,6,rep,name=in" json:"in,omitempty"` NotIn []float32 `protobuf:"fixed32,7,rep,name=not_in,json=notIn" json:"not_in,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *FloatRules) Reset() { *m = FloatRules{} } func (m *FloatRules) String() string { return proto.CompactTextString(m) } func (*FloatRules) ProtoMessage() {} func (*FloatRules) Descriptor() ([]byte, []int) { return fileDescriptor_validate_4e427f48c21fab34, []int{1} } func (m *FloatRules) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_FloatRules.Unmarshal(m, b) } func (m *FloatRules) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_FloatRules.Marshal(b, m, deterministic) } func (dst *FloatRules) XXX_Merge(src proto.Message) { xxx_messageInfo_FloatRules.Merge(dst, src) } func (m *FloatRules) XXX_Size() int { return xxx_messageInfo_FloatRules.Size(m) } func (m *FloatRules) XXX_DiscardUnknown() { xxx_messageInfo_FloatRules.DiscardUnknown(m) } var xxx_messageInfo_FloatRules proto.InternalMessageInfo func (m *FloatRules) GetConst() float32 { if m != nil && m.Const != nil { return *m.Const } return 0 } func (m *FloatRules) GetLt() float32 { if m != nil && m.Lt != nil { return *m.Lt } return 0 } func (m *FloatRules) GetLte() float32 { if m != nil && m.Lte != nil { return *m.Lte } return 0 } func (m *FloatRules) GetGt() float32 { if m != nil && m.Gt != nil { return *m.Gt } return 0 } func (m *FloatRules) GetGte() float32 { if m != nil && m.Gte != nil { return *m.Gte } return 0 } func (m *FloatRules) GetIn() []float32 { if m != nil { return m.In } return nil } func (m *FloatRules) GetNotIn() []float32 { if m != nil { return m.NotIn } return nil } type DoubleRules struct { Const *float64 `protobuf:"fixed64,1,opt,name=const" json:"const,omitempty"` Lt *float64 `protobuf:"fixed64,2,opt,name=lt" json:"lt,omitempty"` Lte *float64 `protobuf:"fixed64,3,opt,name=lte" json:"lte,omitempty"` Gt *float64 `protobuf:"fixed64,4,opt,name=gt" json:"gt,omitempty"` Gte *float64 `protobuf:"fixed64,5,opt,name=gte" json:"gte,omitempty"` In []float64 `protobuf:"fixed64,6,rep,name=in" json:"in,omitempty"` NotIn []float64 `protobuf:"fixed64,7,rep,name=not_in,json=notIn" json:"not_in,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *DoubleRules) Reset() { *m = DoubleRules{} } func (m *DoubleRules) String() string { return proto.CompactTextString(m) } func (*DoubleRules) ProtoMessage() {} func 
(*DoubleRules) Descriptor() ([]byte, []int) { return fileDescriptor_validate_4e427f48c21fab34, []int{2} } func (m *DoubleRules) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_DoubleRules.Unmarshal(m, b) } func (m *DoubleRules) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_DoubleRules.Marshal(b, m, deterministic) } func (dst *DoubleRules) XXX_Merge(src proto.Message) { xxx_messageInfo_DoubleRules.Merge(dst, src) } func (m *DoubleRules) XXX_Size() int { return xxx_messageInfo_DoubleRules.Size(m) } func (m *DoubleRules) XXX_DiscardUnknown() { xxx_messageInfo_DoubleRules.DiscardUnknown(m) } var xxx_messageInfo_DoubleRules proto.InternalMessageInfo func (m *DoubleRules) GetConst() float64 { if m != nil && m.Const != nil { return *m.Const } return 0 } func (m *DoubleRules) GetLt() float64 { if m != nil && m.Lt != nil { return *m.Lt } return 0 } func (m *DoubleRules) GetLte() float64 { if m != nil && m.Lte != nil { return *m.Lte } return 0 } func (m *DoubleRules) GetGt() float64 { if m != nil && m.Gt != nil { return *m.Gt } return 0 } func (m *DoubleRules) GetGte() float64 { if m != nil && m.Gte != nil { return *m.Gte } return 0 } func (m *DoubleRules) GetIn() []float64 { if m != nil { return m.In } return nil } func (m *DoubleRules) GetNotIn() []float64 { if m != nil { return m.NotIn } return nil } type Int32Rules struct { Const *int32 `protobuf:"varint,1,opt,name=const" json:"const,omitempty"` Lt *int32 `protobuf:"varint,2,opt,name=lt" json:"lt,omitempty"` Lte *int32 `protobuf:"varint,3,opt,name=lte" json:"lte,omitempty"` Gt *int32 `protobuf:"varint,4,opt,name=gt" json:"gt,omitempty"` Gte *int32 `protobuf:"varint,5,opt,name=gte" json:"gte,omitempty"` In []int32 `protobuf:"varint,6,rep,name=in" json:"in,omitempty"` NotIn []int32 `protobuf:"varint,7,rep,name=not_in,json=notIn" json:"not_in,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Int32Rules) Reset() { *m = Int32Rules{} } func (m *Int32Rules) String() string { return proto.CompactTextString(m) } func (*Int32Rules) ProtoMessage() {} func (*Int32Rules) Descriptor() ([]byte, []int) { return fileDescriptor_validate_4e427f48c21fab34, []int{3} } func (m *Int32Rules) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Int32Rules.Unmarshal(m, b) } func (m *Int32Rules) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Int32Rules.Marshal(b, m, deterministic) } func (dst *Int32Rules) XXX_Merge(src proto.Message) { xxx_messageInfo_Int32Rules.Merge(dst, src) } func (m *Int32Rules) XXX_Size() int { return xxx_messageInfo_Int32Rules.Size(m) } func (m *Int32Rules) XXX_DiscardUnknown() { xxx_messageInfo_Int32Rules.DiscardUnknown(m) } var xxx_messageInfo_Int32Rules proto.InternalMessageInfo func (m *Int32Rules) GetConst() int32 { if m != nil && m.Const != nil { return *m.Const } return 0 } func (m *Int32Rules) GetLt() int32 { if m != nil && m.Lt != nil { return *m.Lt } return 0 } func (m *Int32Rules) GetLte() int32 { if m != nil && m.Lte != nil { return *m.Lte } return 0 } func (m *Int32Rules) GetGt() int32 { if m != nil && m.Gt != nil { return *m.Gt } return 0 } func (m *Int32Rules) GetGte() int32 { if m != nil && m.Gte != nil { return *m.Gte } return 0 } func (m *Int32Rules) GetIn() []int32 { if m != nil { return m.In } return nil } func (m *Int32Rules) GetNotIn() []int32 { if m != nil { return m.NotIn } return nil } type Int64Rules struct { Const *int64 `protobuf:"varint,1,opt,name=const" 
json:"const,omitempty"` Lt *int64 `protobuf:"varint,2,opt,name=lt" json:"lt,omitempty"` Lte *int64 `protobuf:"varint,3,opt,name=lte" json:"lte,omitempty"` Gt *int64 `protobuf:"varint,4,opt,name=gt" json:"gt,omitempty"` Gte *int64 `protobuf:"varint,5,opt,name=gte" json:"gte,omitempty"` In []int64 `protobuf:"varint,6,rep,name=in" json:"in,omitempty"` NotIn []int64 `protobuf:"varint,7,rep,name=not_in,json=notIn" json:"not_in,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Int64Rules) Reset() { *m = Int64Rules{} } func (m *Int64Rules) String() string { return proto.CompactTextString(m) } func (*Int64Rules) ProtoMessage() {} func (*Int64Rules) Descriptor() ([]byte, []int) { return fileDescriptor_validate_4e427f48c21fab34, []int{4} } func (m *Int64Rules) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Int64Rules.Unmarshal(m, b) } func (m *Int64Rules) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Int64Rules.Marshal(b, m, deterministic) } func (dst *Int64Rules) XXX_Merge(src proto.Message) { xxx_messageInfo_Int64Rules.Merge(dst, src) } func (m *Int64Rules) XXX_Size() int { return xxx_messageInfo_Int64Rules.Size(m) } func (m *Int64Rules) XXX_DiscardUnknown() { xxx_messageInfo_Int64Rules.DiscardUnknown(m) } var xxx_messageInfo_Int64Rules proto.InternalMessageInfo func (m *Int64Rules) GetConst() int64 { if m != nil && m.Const != nil { return *m.Const } return 0 } func (m *Int64Rules) GetLt() int64 { if m != nil && m.Lt != nil { return *m.Lt } return 0 } func (m *Int64Rules) GetLte() int64 { if m != nil && m.Lte != nil { return *m.Lte } return 0 } func (m *Int64Rules) GetGt() int64 { if m != nil && m.Gt != nil { return *m.Gt } return 0 } func (m *Int64Rules) GetGte() int64 { if m != nil && m.Gte != nil { return *m.Gte } return 0 } func (m *Int64Rules) GetIn() []int64 { if m != nil { return m.In } return nil } func (m *Int64Rules) GetNotIn() []int64 { if m != nil { return m.NotIn } return nil } type UInt32Rules struct { Const *uint32 `protobuf:"varint,1,opt,name=const" json:"const,omitempty"` Lt *uint32 `protobuf:"varint,2,opt,name=lt" json:"lt,omitempty"` Lte *uint32 `protobuf:"varint,3,opt,name=lte" json:"lte,omitempty"` Gt *uint32 `protobuf:"varint,4,opt,name=gt" json:"gt,omitempty"` Gte *uint32 `protobuf:"varint,5,opt,name=gte" json:"gte,omitempty"` In []uint32 `protobuf:"varint,6,rep,name=in" json:"in,omitempty"` NotIn []uint32 `protobuf:"varint,7,rep,name=not_in,json=notIn" json:"not_in,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *UInt32Rules) Reset() { *m = UInt32Rules{} } func (m *UInt32Rules) String() string { return proto.CompactTextString(m) } func (*UInt32Rules) ProtoMessage() {} func (*UInt32Rules) Descriptor() ([]byte, []int) { return fileDescriptor_validate_4e427f48c21fab34, []int{5} } func (m *UInt32Rules) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_UInt32Rules.Unmarshal(m, b) } func (m *UInt32Rules) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_UInt32Rules.Marshal(b, m, deterministic) } func (dst *UInt32Rules) XXX_Merge(src proto.Message) { xxx_messageInfo_UInt32Rules.Merge(dst, src) } func (m *UInt32Rules) XXX_Size() int { return xxx_messageInfo_UInt32Rules.Size(m) } func (m *UInt32Rules) XXX_DiscardUnknown() { xxx_messageInfo_UInt32Rules.DiscardUnknown(m) } var xxx_messageInfo_UInt32Rules proto.InternalMessageInfo func (m 
*UInt32Rules) GetConst() uint32 { if m != nil && m.Const != nil { return *m.Const } return 0 } func (m *UInt32Rules) GetLt() uint32 { if m != nil && m.Lt != nil { return *m.Lt } return 0 } func (m *UInt32Rules) GetLte() uint32 { if m != nil && m.Lte != nil { return *m.Lte } return 0 } func (m *UInt32Rules) GetGt() uint32 { if m != nil && m.Gt != nil { return *m.Gt } return 0 } func (m *UInt32Rules) GetGte() uint32 { if m != nil && m.Gte != nil { return *m.Gte } return 0 } func (m *UInt32Rules) GetIn() []uint32 { if m != nil { return m.In } return nil } func (m *UInt32Rules) GetNotIn() []uint32 { if m != nil { return m.NotIn } return nil } type UInt64Rules struct { Const *uint64 `protobuf:"varint,1,opt,name=const" json:"const,omitempty"` Lt *uint64 `protobuf:"varint,2,opt,name=lt" json:"lt,omitempty"` Lte *uint64 `protobuf:"varint,3,opt,name=lte" json:"lte,omitempty"` Gt *uint64 `protobuf:"varint,4,opt,name=gt" json:"gt,omitempty"` Gte *uint64 `protobuf:"varint,5,opt,name=gte" json:"gte,omitempty"` In []uint64 `protobuf:"varint,6,rep,name=in" json:"in,omitempty"` NotIn []uint64 `protobuf:"varint,7,rep,name=not_in,json=notIn" json:"not_in,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *UInt64Rules) Reset() { *m = UInt64Rules{} } func (m *UInt64Rules) String() string { return proto.CompactTextString(m) } func (*UInt64Rules) ProtoMessage() {} func (*UInt64Rules) Descriptor() ([]byte, []int) { return fileDescriptor_validate_4e427f48c21fab34, []int{6} } func (m *UInt64Rules) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_UInt64Rules.Unmarshal(m, b) } func (m *UInt64Rules) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_UInt64Rules.Marshal(b, m, deterministic) } func (dst *UInt64Rules) XXX_Merge(src proto.Message) { xxx_messageInfo_UInt64Rules.Merge(dst, src) } func (m *UInt64Rules) XXX_Size() int { return xxx_messageInfo_UInt64Rules.Size(m) } func (m *UInt64Rules) XXX_DiscardUnknown() { xxx_messageInfo_UInt64Rules.DiscardUnknown(m) } var xxx_messageInfo_UInt64Rules proto.InternalMessageInfo func (m *UInt64Rules) GetConst() uint64 { if m != nil && m.Const != nil { return *m.Const } return 0 } func (m *UInt64Rules) GetLt() uint64 { if m != nil && m.Lt != nil { return *m.Lt } return 0 } func (m *UInt64Rules) GetLte() uint64 { if m != nil && m.Lte != nil { return *m.Lte } return 0 } func (m *UInt64Rules) GetGt() uint64 { if m != nil && m.Gt != nil { return *m.Gt } return 0 } func (m *UInt64Rules) GetGte() uint64 { if m != nil && m.Gte != nil { return *m.Gte } return 0 } func (m *UInt64Rules) GetIn() []uint64 { if m != nil { return m.In } return nil } func (m *UInt64Rules) GetNotIn() []uint64 { if m != nil { return m.NotIn } return nil } type SInt32Rules struct { Const *int32 `protobuf:"zigzag32,1,opt,name=const" json:"const,omitempty"` Lt *int32 `protobuf:"zigzag32,2,opt,name=lt" json:"lt,omitempty"` Lte *int32 `protobuf:"zigzag32,3,opt,name=lte" json:"lte,omitempty"` Gt *int32 `protobuf:"zigzag32,4,opt,name=gt" json:"gt,omitempty"` Gte *int32 `protobuf:"zigzag32,5,opt,name=gte" json:"gte,omitempty"` In []int32 `protobuf:"zigzag32,6,rep,name=in" json:"in,omitempty"` NotIn []int32 `protobuf:"zigzag32,7,rep,name=not_in,json=notIn" json:"not_in,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *SInt32Rules) Reset() { *m = SInt32Rules{} } func (m *SInt32Rules) String() string { return 
proto.CompactTextString(m) } func (*SInt32Rules) ProtoMessage() {} func (*SInt32Rules) Descriptor() ([]byte, []int) { return fileDescriptor_validate_4e427f48c21fab34, []int{7} } func (m *SInt32Rules) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_SInt32Rules.Unmarshal(m, b) } func (m *SInt32Rules) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_SInt32Rules.Marshal(b, m, deterministic) } func (dst *SInt32Rules) XXX_Merge(src proto.Message) { xxx_messageInfo_SInt32Rules.Merge(dst, src) } func (m *SInt32Rules) XXX_Size() int { return xxx_messageInfo_SInt32Rules.Size(m) } func (m *SInt32Rules) XXX_DiscardUnknown() { xxx_messageInfo_SInt32Rules.DiscardUnknown(m) } var xxx_messageInfo_SInt32Rules proto.InternalMessageInfo func (m *SInt32Rules) GetConst() int32 { if m != nil && m.Const != nil { return *m.Const } return 0 } func (m *SInt32Rules) GetLt() int32 { if m != nil && m.Lt != nil { return *m.Lt } return 0 } func (m *SInt32Rules) GetLte() int32 { if m != nil && m.Lte != nil { return *m.Lte } return 0 } func (m *SInt32Rules) GetGt() int32 { if m != nil && m.Gt != nil { return *m.Gt } return 0 } func (m *SInt32Rules) GetGte() int32 { if m != nil && m.Gte != nil { return *m.Gte } return 0 } func (m *SInt32Rules) GetIn() []int32 { if m != nil { return m.In } return nil } func (m *SInt32Rules) GetNotIn() []int32 { if m != nil { return m.NotIn } return nil } type SInt64Rules struct { Const *int64 `protobuf:"zigzag64,1,opt,name=const" json:"const,omitempty"` Lt *int64 `protobuf:"zigzag64,2,opt,name=lt" json:"lt,omitempty"` Lte *int64 `protobuf:"zigzag64,3,opt,name=lte" json:"lte,omitempty"` Gt *int64 `protobuf:"zigzag64,4,opt,name=gt" json:"gt,omitempty"` Gte *int64 `protobuf:"zigzag64,5,opt,name=gte" json:"gte,omitempty"` In []int64 `protobuf:"zigzag64,6,rep,name=in" json:"in,omitempty"` NotIn []int64 `protobuf:"zigzag64,7,rep,name=not_in,json=notIn" json:"not_in,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *SInt64Rules) Reset() { *m = SInt64Rules{} } func (m *SInt64Rules) String() string { return proto.CompactTextString(m) } func (*SInt64Rules) ProtoMessage() {} func (*SInt64Rules) Descriptor() ([]byte, []int) { return fileDescriptor_validate_4e427f48c21fab34, []int{8} } func (m *SInt64Rules) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_SInt64Rules.Unmarshal(m, b) } func (m *SInt64Rules) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_SInt64Rules.Marshal(b, m, deterministic) } func (dst *SInt64Rules) XXX_Merge(src proto.Message) { xxx_messageInfo_SInt64Rules.Merge(dst, src) } func (m *SInt64Rules) XXX_Size() int { return xxx_messageInfo_SInt64Rules.Size(m) } func (m *SInt64Rules) XXX_DiscardUnknown() { xxx_messageInfo_SInt64Rules.DiscardUnknown(m) } var xxx_messageInfo_SInt64Rules proto.InternalMessageInfo func (m *SInt64Rules) GetConst() int64 { if m != nil && m.Const != nil { return *m.Const } return 0 } func (m *SInt64Rules) GetLt() int64 { if m != nil && m.Lt != nil { return *m.Lt } return 0 } func (m *SInt64Rules) GetLte() int64 { if m != nil && m.Lte != nil { return *m.Lte } return 0 } func (m *SInt64Rules) GetGt() int64 { if m != nil && m.Gt != nil { return *m.Gt } return 0 } func (m *SInt64Rules) GetGte() int64 { if m != nil && m.Gte != nil { return *m.Gte } return 0 } func (m *SInt64Rules) GetIn() []int64 { if m != nil { return m.In } return nil } func (m *SInt64Rules) GetNotIn() []int64 { if m != nil { return m.NotIn 
} return nil } type Fixed32Rules struct { Const *uint32 `protobuf:"fixed32,1,opt,name=const" json:"const,omitempty"` Lt *uint32 `protobuf:"fixed32,2,opt,name=lt" json:"lt,omitempty"` Lte *uint32 `protobuf:"fixed32,3,opt,name=lte" json:"lte,omitempty"` Gt *uint32 `protobuf:"fixed32,4,opt,name=gt" json:"gt,omitempty"` Gte *uint32 `protobuf:"fixed32,5,opt,name=gte" json:"gte,omitempty"` In []uint32 `protobuf:"fixed32,6,rep,name=in" json:"in,omitempty"` NotIn []uint32 `protobuf:"fixed32,7,rep,name=not_in,json=notIn" json:"not_in,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Fixed32Rules) Reset() { *m = Fixed32Rules{} } func (m *Fixed32Rules) String() string { return proto.CompactTextString(m) } func (*Fixed32Rules) ProtoMessage() {} func (*Fixed32Rules) Descriptor() ([]byte, []int) { return fileDescriptor_validate_4e427f48c21fab34, []int{9} } func (m *Fixed32Rules) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Fixed32Rules.Unmarshal(m, b) } func (m *Fixed32Rules) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Fixed32Rules.Marshal(b, m, deterministic) } func (dst *Fixed32Rules) XXX_Merge(src proto.Message) { xxx_messageInfo_Fixed32Rules.Merge(dst, src) } func (m *Fixed32Rules) XXX_Size() int { return xxx_messageInfo_Fixed32Rules.Size(m) } func (m *Fixed32Rules) XXX_DiscardUnknown() { xxx_messageInfo_Fixed32Rules.DiscardUnknown(m) } var xxx_messageInfo_Fixed32Rules proto.InternalMessageInfo func (m *Fixed32Rules) GetConst() uint32 { if m != nil && m.Const != nil { return *m.Const } return 0 } func (m *Fixed32Rules) GetLt() uint32 { if m != nil && m.Lt != nil { return *m.Lt } return 0 } func (m *Fixed32Rules) GetLte() uint32 { if m != nil && m.Lte != nil { return *m.Lte } return 0 } func (m *Fixed32Rules) GetGt() uint32 { if m != nil && m.Gt != nil { return *m.Gt } return 0 } func (m *Fixed32Rules) GetGte() uint32 { if m != nil && m.Gte != nil { return *m.Gte } return 0 } func (m *Fixed32Rules) GetIn() []uint32 { if m != nil { return m.In } return nil } func (m *Fixed32Rules) GetNotIn() []uint32 { if m != nil { return m.NotIn } return nil } type Fixed64Rules struct { Const *uint64 `protobuf:"fixed64,1,opt,name=const" json:"const,omitempty"` Lt *uint64 `protobuf:"fixed64,2,opt,name=lt" json:"lt,omitempty"` Lte *uint64 `protobuf:"fixed64,3,opt,name=lte" json:"lte,omitempty"` Gt *uint64 `protobuf:"fixed64,4,opt,name=gt" json:"gt,omitempty"` Gte *uint64 `protobuf:"fixed64,5,opt,name=gte" json:"gte,omitempty"` In []uint64 `protobuf:"fixed64,6,rep,name=in" json:"in,omitempty"` NotIn []uint64 `protobuf:"fixed64,7,rep,name=not_in,json=notIn" json:"not_in,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Fixed64Rules) Reset() { *m = Fixed64Rules{} } func (m *Fixed64Rules) String() string { return proto.CompactTextString(m) } func (*Fixed64Rules) ProtoMessage() {} func (*Fixed64Rules) Descriptor() ([]byte, []int) { return fileDescriptor_validate_4e427f48c21fab34, []int{10} } func (m *Fixed64Rules) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Fixed64Rules.Unmarshal(m, b) } func (m *Fixed64Rules) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Fixed64Rules.Marshal(b, m, deterministic) } func (dst *Fixed64Rules) XXX_Merge(src proto.Message) { xxx_messageInfo_Fixed64Rules.Merge(dst, src) } func (m *Fixed64Rules) XXX_Size() int { return 
xxx_messageInfo_Fixed64Rules.Size(m) } func (m *Fixed64Rules) XXX_DiscardUnknown() { xxx_messageInfo_Fixed64Rules.DiscardUnknown(m) } var xxx_messageInfo_Fixed64Rules proto.InternalMessageInfo func (m *Fixed64Rules) GetConst() uint64 { if m != nil && m.Const != nil { return *m.Const } return 0 } func (m *Fixed64Rules) GetLt() uint64 { if m != nil && m.Lt != nil { return *m.Lt } return 0 } func (m *Fixed64Rules) GetLte() uint64 { if m != nil && m.Lte != nil { return *m.Lte } return 0 } func (m *Fixed64Rules) GetGt() uint64 { if m != nil && m.Gt != nil { return *m.Gt } return 0 } func (m *Fixed64Rules) GetGte() uint64 { if m != nil && m.Gte != nil { return *m.Gte } return 0 } func (m *Fixed64Rules) GetIn() []uint64 { if m != nil { return m.In } return nil } func (m *Fixed64Rules) GetNotIn() []uint64 { if m != nil { return m.NotIn } return nil } type SFixed32Rules struct { Const *int32 `protobuf:"fixed32,1,opt,name=const" json:"const,omitempty"` Lt *int32 `protobuf:"fixed32,2,opt,name=lt" json:"lt,omitempty"` Lte *int32 `protobuf:"fixed32,3,opt,name=lte" json:"lte,omitempty"` Gt *int32 `protobuf:"fixed32,4,opt,name=gt" json:"gt,omitempty"` Gte *int32 `protobuf:"fixed32,5,opt,name=gte" json:"gte,omitempty"` In []int32 `protobuf:"fixed32,6,rep,name=in" json:"in,omitempty"` NotIn []int32 `protobuf:"fixed32,7,rep,name=not_in,json=notIn" json:"not_in,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *SFixed32Rules) Reset() { *m = SFixed32Rules{} } func (m *SFixed32Rules) String() string { return proto.CompactTextString(m) } func (*SFixed32Rules) ProtoMessage() {} func (*SFixed32Rules) Descriptor() ([]byte, []int) { return fileDescriptor_validate_4e427f48c21fab34, []int{11} } func (m *SFixed32Rules) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_SFixed32Rules.Unmarshal(m, b) } func (m *SFixed32Rules) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_SFixed32Rules.Marshal(b, m, deterministic) } func (dst *SFixed32Rules) XXX_Merge(src proto.Message) { xxx_messageInfo_SFixed32Rules.Merge(dst, src) } func (m *SFixed32Rules) XXX_Size() int { return xxx_messageInfo_SFixed32Rules.Size(m) } func (m *SFixed32Rules) XXX_DiscardUnknown() { xxx_messageInfo_SFixed32Rules.DiscardUnknown(m) } var xxx_messageInfo_SFixed32Rules proto.InternalMessageInfo func (m *SFixed32Rules) GetConst() int32 { if m != nil && m.Const != nil { return *m.Const } return 0 } func (m *SFixed32Rules) GetLt() int32 { if m != nil && m.Lt != nil { return *m.Lt } return 0 } func (m *SFixed32Rules) GetLte() int32 { if m != nil && m.Lte != nil { return *m.Lte } return 0 } func (m *SFixed32Rules) GetGt() int32 { if m != nil && m.Gt != nil { return *m.Gt } return 0 } func (m *SFixed32Rules) GetGte() int32 { if m != nil && m.Gte != nil { return *m.Gte } return 0 } func (m *SFixed32Rules) GetIn() []int32 { if m != nil { return m.In } return nil } func (m *SFixed32Rules) GetNotIn() []int32 { if m != nil { return m.NotIn } return nil } type SFixed64Rules struct { Const *int64 `protobuf:"fixed64,1,opt,name=const" json:"const,omitempty"` Lt *int64 `protobuf:"fixed64,2,opt,name=lt" json:"lt,omitempty"` Lte *int64 `protobuf:"fixed64,3,opt,name=lte" json:"lte,omitempty"` Gt *int64 `protobuf:"fixed64,4,opt,name=gt" json:"gt,omitempty"` Gte *int64 `protobuf:"fixed64,5,opt,name=gte" json:"gte,omitempty"` In []int64 `protobuf:"fixed64,6,rep,name=in" json:"in,omitempty"` NotIn []int64 `protobuf:"fixed64,7,rep,name=not_in,json=notIn" 
json:"not_in,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *SFixed64Rules) Reset() { *m = SFixed64Rules{} } func (m *SFixed64Rules) String() string { return proto.CompactTextString(m) } func (*SFixed64Rules) ProtoMessage() {} func (*SFixed64Rules) Descriptor() ([]byte, []int) { return fileDescriptor_validate_4e427f48c21fab34, []int{12} } func (m *SFixed64Rules) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_SFixed64Rules.Unmarshal(m, b) } func (m *SFixed64Rules) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_SFixed64Rules.Marshal(b, m, deterministic) } func (dst *SFixed64Rules) XXX_Merge(src proto.Message) { xxx_messageInfo_SFixed64Rules.Merge(dst, src) } func (m *SFixed64Rules) XXX_Size() int { return xxx_messageInfo_SFixed64Rules.Size(m) } func (m *SFixed64Rules) XXX_DiscardUnknown() { xxx_messageInfo_SFixed64Rules.DiscardUnknown(m) } var xxx_messageInfo_SFixed64Rules proto.InternalMessageInfo func (m *SFixed64Rules) GetConst() int64 { if m != nil && m.Const != nil { return *m.Const } return 0 } func (m *SFixed64Rules) GetLt() int64 { if m != nil && m.Lt != nil { return *m.Lt } return 0 } func (m *SFixed64Rules) GetLte() int64 { if m != nil && m.Lte != nil { return *m.Lte } return 0 } func (m *SFixed64Rules) GetGt() int64 { if m != nil && m.Gt != nil { return *m.Gt } return 0 } func (m *SFixed64Rules) GetGte() int64 { if m != nil && m.Gte != nil { return *m.Gte } return 0 } func (m *SFixed64Rules) GetIn() []int64 { if m != nil { return m.In } return nil } func (m *SFixed64Rules) GetNotIn() []int64 { if m != nil { return m.NotIn } return nil } type BoolRules struct { Const *bool `protobuf:"varint,1,opt,name=const" json:"const,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *BoolRules) Reset() { *m = BoolRules{} } func (m *BoolRules) String() string { return proto.CompactTextString(m) } func (*BoolRules) ProtoMessage() {} func (*BoolRules) Descriptor() ([]byte, []int) { return fileDescriptor_validate_4e427f48c21fab34, []int{13} } func (m *BoolRules) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_BoolRules.Unmarshal(m, b) } func (m *BoolRules) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_BoolRules.Marshal(b, m, deterministic) } func (dst *BoolRules) XXX_Merge(src proto.Message) { xxx_messageInfo_BoolRules.Merge(dst, src) } func (m *BoolRules) XXX_Size() int { return xxx_messageInfo_BoolRules.Size(m) } func (m *BoolRules) XXX_DiscardUnknown() { xxx_messageInfo_BoolRules.DiscardUnknown(m) } var xxx_messageInfo_BoolRules proto.InternalMessageInfo func (m *BoolRules) GetConst() bool { if m != nil && m.Const != nil { return *m.Const } return false } type StringRules struct { Const *string `protobuf:"bytes,1,opt,name=const" json:"const,omitempty"` Len *uint64 `protobuf:"varint,19,opt,name=len" json:"len,omitempty"` MinLen *uint64 `protobuf:"varint,2,opt,name=min_len,json=minLen" json:"min_len,omitempty"` MaxLen *uint64 `protobuf:"varint,3,opt,name=max_len,json=maxLen" json:"max_len,omitempty"` LenBytes *uint64 `protobuf:"varint,20,opt,name=len_bytes,json=lenBytes" json:"len_bytes,omitempty"` MinBytes *uint64 `protobuf:"varint,4,opt,name=min_bytes,json=minBytes" json:"min_bytes,omitempty"` MaxBytes *uint64 `protobuf:"varint,5,opt,name=max_bytes,json=maxBytes" json:"max_bytes,omitempty"` Pattern *string `protobuf:"bytes,6,opt,name=pattern" 
json:"pattern,omitempty"` Prefix *string `protobuf:"bytes,7,opt,name=prefix" json:"prefix,omitempty"` Suffix *string `protobuf:"bytes,8,opt,name=suffix" json:"suffix,omitempty"` Contains *string `protobuf:"bytes,9,opt,name=contains" json:"contains,omitempty"` In []string `protobuf:"bytes,10,rep,name=in" json:"in,omitempty"` NotIn []string `protobuf:"bytes,11,rep,name=not_in,json=notIn" json:"not_in,omitempty"` // Types that are valid to be assigned to WellKnown: // *StringRules_Email // *StringRules_Hostname // *StringRules_Ip // *StringRules_Ipv4 // *StringRules_Ipv6 // *StringRules_Uri // *StringRules_UriRef WellKnown isStringRules_WellKnown `protobuf_oneof:"well_known"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *StringRules) Reset() { *m = StringRules{} } func (m *StringRules) String() string { return proto.CompactTextString(m) } func (*StringRules) ProtoMessage() {} func (*StringRules) Descriptor() ([]byte, []int) { return fileDescriptor_validate_4e427f48c21fab34, []int{14} } func (m *StringRules) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_StringRules.Unmarshal(m, b) } func (m *StringRules) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_StringRules.Marshal(b, m, deterministic) } func (dst *StringRules) XXX_Merge(src proto.Message) { xxx_messageInfo_StringRules.Merge(dst, src) } func (m *StringRules) XXX_Size() int { return xxx_messageInfo_StringRules.Size(m) } func (m *StringRules) XXX_DiscardUnknown() { xxx_messageInfo_StringRules.DiscardUnknown(m) } var xxx_messageInfo_StringRules proto.InternalMessageInfo func (m *StringRules) GetConst() string { if m != nil && m.Const != nil { return *m.Const } return "" } func (m *StringRules) GetLen() uint64 { if m != nil && m.Len != nil { return *m.Len } return 0 } func (m *StringRules) GetMinLen() uint64 { if m != nil && m.MinLen != nil { return *m.MinLen } return 0 } func (m *StringRules) GetMaxLen() uint64 { if m != nil && m.MaxLen != nil { return *m.MaxLen } return 0 } func (m *StringRules) GetLenBytes() uint64 { if m != nil && m.LenBytes != nil { return *m.LenBytes } return 0 } func (m *StringRules) GetMinBytes() uint64 { if m != nil && m.MinBytes != nil { return *m.MinBytes } return 0 } func (m *StringRules) GetMaxBytes() uint64 { if m != nil && m.MaxBytes != nil { return *m.MaxBytes } return 0 } func (m *StringRules) GetPattern() string { if m != nil && m.Pattern != nil { return *m.Pattern } return "" } func (m *StringRules) GetPrefix() string { if m != nil && m.Prefix != nil { return *m.Prefix } return "" } func (m *StringRules) GetSuffix() string { if m != nil && m.Suffix != nil { return *m.Suffix } return "" } func (m *StringRules) GetContains() string { if m != nil && m.Contains != nil { return *m.Contains } return "" } func (m *StringRules) GetIn() []string { if m != nil { return m.In } return nil } func (m *StringRules) GetNotIn() []string { if m != nil { return m.NotIn } return nil } type isStringRules_WellKnown interface { isStringRules_WellKnown() } type StringRules_Email struct { Email bool `protobuf:"varint,12,opt,name=email,oneof"` } type StringRules_Hostname struct { Hostname bool `protobuf:"varint,13,opt,name=hostname,oneof"` } type StringRules_Ip struct { Ip bool `protobuf:"varint,14,opt,name=ip,oneof"` } type StringRules_Ipv4 struct { Ipv4 bool `protobuf:"varint,15,opt,name=ipv4,oneof"` } type StringRules_Ipv6 struct { Ipv6 bool `protobuf:"varint,16,opt,name=ipv6,oneof"` } type StringRules_Uri struct { 
Uri bool `protobuf:"varint,17,opt,name=uri,oneof"` } type StringRules_UriRef struct { UriRef bool `protobuf:"varint,18,opt,name=uri_ref,json=uriRef,oneof"` } func (*StringRules_Email) isStringRules_WellKnown() {} func (*StringRules_Hostname) isStringRules_WellKnown() {} func (*StringRules_Ip) isStringRules_WellKnown() {} func (*StringRules_Ipv4) isStringRules_WellKnown() {} func (*StringRules_Ipv6) isStringRules_WellKnown() {} func (*StringRules_Uri) isStringRules_WellKnown() {} func (*StringRules_UriRef) isStringRules_WellKnown() {} func (m *StringRules) GetWellKnown() isStringRules_WellKnown { if m != nil { return m.WellKnown } return nil } func (m *StringRules) GetEmail() bool { if x, ok := m.GetWellKnown().(*StringRules_Email); ok { return x.Email } return false } func (m *StringRules) GetHostname() bool { if x, ok := m.GetWellKnown().(*StringRules_Hostname); ok { return x.Hostname } return false } func (m *StringRules) GetIp() bool { if x, ok := m.GetWellKnown().(*StringRules_Ip); ok { return x.Ip } return false } func (m *StringRules) GetIpv4() bool { if x, ok := m.GetWellKnown().(*StringRules_Ipv4); ok { return x.Ipv4 } return false } func (m *StringRules) GetIpv6() bool { if x, ok := m.GetWellKnown().(*StringRules_Ipv6); ok { return x.Ipv6 } return false } func (m *StringRules) GetUri() bool { if x, ok := m.GetWellKnown().(*StringRules_Uri); ok { return x.Uri } return false } func (m *StringRules) GetUriRef() bool { if x, ok := m.GetWellKnown().(*StringRules_UriRef); ok { return x.UriRef } return false } // XXX_OneofFuncs is for the internal use of the proto package. func (*StringRules) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _StringRules_OneofMarshaler, _StringRules_OneofUnmarshaler, _StringRules_OneofSizer, []interface{}{ (*StringRules_Email)(nil), (*StringRules_Hostname)(nil), (*StringRules_Ip)(nil), (*StringRules_Ipv4)(nil), (*StringRules_Ipv6)(nil), (*StringRules_Uri)(nil), (*StringRules_UriRef)(nil), } } func _StringRules_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*StringRules) // well_known switch x := m.WellKnown.(type) { case *StringRules_Email: t := uint64(0) if x.Email { t = 1 } b.EncodeVarint(12<<3 | proto.WireVarint) b.EncodeVarint(t) case *StringRules_Hostname: t := uint64(0) if x.Hostname { t = 1 } b.EncodeVarint(13<<3 | proto.WireVarint) b.EncodeVarint(t) case *StringRules_Ip: t := uint64(0) if x.Ip { t = 1 } b.EncodeVarint(14<<3 | proto.WireVarint) b.EncodeVarint(t) case *StringRules_Ipv4: t := uint64(0) if x.Ipv4 { t = 1 } b.EncodeVarint(15<<3 | proto.WireVarint) b.EncodeVarint(t) case *StringRules_Ipv6: t := uint64(0) if x.Ipv6 { t = 1 } b.EncodeVarint(16<<3 | proto.WireVarint) b.EncodeVarint(t) case *StringRules_Uri: t := uint64(0) if x.Uri { t = 1 } b.EncodeVarint(17<<3 | proto.WireVarint) b.EncodeVarint(t) case *StringRules_UriRef: t := uint64(0) if x.UriRef { t = 1 } b.EncodeVarint(18<<3 | proto.WireVarint) b.EncodeVarint(t) case nil: default: return fmt.Errorf("StringRules.WellKnown has unexpected type %T", x) } return nil } func _StringRules_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*StringRules) switch tag { case 12: // well_known.email if wire != proto.WireVarint { return true, proto.ErrInternalBadWireType } x, err := b.DecodeVarint() m.WellKnown = &StringRules_Email{x != 0} return true, err case 13: // well_known.hostname if 
wire != proto.WireVarint { return true, proto.ErrInternalBadWireType } x, err := b.DecodeVarint() m.WellKnown = &StringRules_Hostname{x != 0} return true, err case 14: // well_known.ip if wire != proto.WireVarint { return true, proto.ErrInternalBadWireType } x, err := b.DecodeVarint() m.WellKnown = &StringRules_Ip{x != 0} return true, err case 15: // well_known.ipv4 if wire != proto.WireVarint { return true, proto.ErrInternalBadWireType } x, err := b.DecodeVarint() m.WellKnown = &StringRules_Ipv4{x != 0} return true, err case 16: // well_known.ipv6 if wire != proto.WireVarint { return true, proto.ErrInternalBadWireType } x, err := b.DecodeVarint() m.WellKnown = &StringRules_Ipv6{x != 0} return true, err case 17: // well_known.uri if wire != proto.WireVarint { return true, proto.ErrInternalBadWireType } x, err := b.DecodeVarint() m.WellKnown = &StringRules_Uri{x != 0} return true, err case 18: // well_known.uri_ref if wire != proto.WireVarint { return true, proto.ErrInternalBadWireType } x, err := b.DecodeVarint() m.WellKnown = &StringRules_UriRef{x != 0} return true, err default: return false, nil } } func _StringRules_OneofSizer(msg proto.Message) (n int) { m := msg.(*StringRules) // well_known switch x := m.WellKnown.(type) { case *StringRules_Email: n += 1 // tag and wire n += 1 case *StringRules_Hostname: n += 1 // tag and wire n += 1 case *StringRules_Ip: n += 1 // tag and wire n += 1 case *StringRules_Ipv4: n += 1 // tag and wire n += 1 case *StringRules_Ipv6: n += 2 // tag and wire n += 1 case *StringRules_Uri: n += 2 // tag and wire n += 1 case *StringRules_UriRef: n += 2 // tag and wire n += 1 case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } type BytesRules struct { Const []byte `protobuf:"bytes,1,opt,name=const" json:"const,omitempty"` Len *uint64 `protobuf:"varint,13,opt,name=len" json:"len,omitempty"` MinLen *uint64 `protobuf:"varint,2,opt,name=min_len,json=minLen" json:"min_len,omitempty"` MaxLen *uint64 `protobuf:"varint,3,opt,name=max_len,json=maxLen" json:"max_len,omitempty"` Pattern *string `protobuf:"bytes,4,opt,name=pattern" json:"pattern,omitempty"` Prefix []byte `protobuf:"bytes,5,opt,name=prefix" json:"prefix,omitempty"` Suffix []byte `protobuf:"bytes,6,opt,name=suffix" json:"suffix,omitempty"` Contains []byte `protobuf:"bytes,7,opt,name=contains" json:"contains,omitempty"` In [][]byte `protobuf:"bytes,8,rep,name=in" json:"in,omitempty"` NotIn [][]byte `protobuf:"bytes,9,rep,name=not_in,json=notIn" json:"not_in,omitempty"` // Types that are valid to be assigned to WellKnown: // *BytesRules_Ip // *BytesRules_Ipv4 // *BytesRules_Ipv6 WellKnown isBytesRules_WellKnown `protobuf_oneof:"well_known"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *BytesRules) Reset() { *m = BytesRules{} } func (m *BytesRules) String() string { return proto.CompactTextString(m) } func (*BytesRules) ProtoMessage() {} func (*BytesRules) Descriptor() ([]byte, []int) { return fileDescriptor_validate_4e427f48c21fab34, []int{15} } func (m *BytesRules) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_BytesRules.Unmarshal(m, b) } func (m *BytesRules) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_BytesRules.Marshal(b, m, deterministic) } func (dst *BytesRules) XXX_Merge(src proto.Message) { xxx_messageInfo_BytesRules.Merge(dst, src) } func (m *BytesRules) XXX_Size() int { return xxx_messageInfo_BytesRules.Size(m) } func (m *BytesRules) 
XXX_DiscardUnknown() { xxx_messageInfo_BytesRules.DiscardUnknown(m) } var xxx_messageInfo_BytesRules proto.InternalMessageInfo func (m *BytesRules) GetConst() []byte { if m != nil { return m.Const } return nil } func (m *BytesRules) GetLen() uint64 { if m != nil && m.Len != nil { return *m.Len } return 0 } func (m *BytesRules) GetMinLen() uint64 { if m != nil && m.MinLen != nil { return *m.MinLen } return 0 } func (m *BytesRules) GetMaxLen() uint64 { if m != nil && m.MaxLen != nil { return *m.MaxLen } return 0 } func (m *BytesRules) GetPattern() string { if m != nil && m.Pattern != nil { return *m.Pattern } return "" } func (m *BytesRules) GetPrefix() []byte { if m != nil { return m.Prefix } return nil } func (m *BytesRules) GetSuffix() []byte { if m != nil { return m.Suffix } return nil } func (m *BytesRules) GetContains() []byte { if m != nil { return m.Contains } return nil } func (m *BytesRules) GetIn() [][]byte { if m != nil { return m.In } return nil } func (m *BytesRules) GetNotIn() [][]byte { if m != nil { return m.NotIn } return nil } type isBytesRules_WellKnown interface { isBytesRules_WellKnown() } type BytesRules_Ip struct { Ip bool `protobuf:"varint,10,opt,name=ip,oneof"` } type BytesRules_Ipv4 struct { Ipv4 bool `protobuf:"varint,11,opt,name=ipv4,oneof"` } type BytesRules_Ipv6 struct { Ipv6 bool `protobuf:"varint,12,opt,name=ipv6,oneof"` } func (*BytesRules_Ip) isBytesRules_WellKnown() {} func (*BytesRules_Ipv4) isBytesRules_WellKnown() {} func (*BytesRules_Ipv6) isBytesRules_WellKnown() {} func (m *BytesRules) GetWellKnown() isBytesRules_WellKnown { if m != nil { return m.WellKnown } return nil } func (m *BytesRules) GetIp() bool { if x, ok := m.GetWellKnown().(*BytesRules_Ip); ok { return x.Ip } return false } func (m *BytesRules) GetIpv4() bool { if x, ok := m.GetWellKnown().(*BytesRules_Ipv4); ok { return x.Ipv4 } return false } func (m *BytesRules) GetIpv6() bool { if x, ok := m.GetWellKnown().(*BytesRules_Ipv6); ok { return x.Ipv6 } return false } // XXX_OneofFuncs is for the internal use of the proto package. 
func (*BytesRules) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _BytesRules_OneofMarshaler, _BytesRules_OneofUnmarshaler, _BytesRules_OneofSizer, []interface{}{ (*BytesRules_Ip)(nil), (*BytesRules_Ipv4)(nil), (*BytesRules_Ipv6)(nil), } } func _BytesRules_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*BytesRules) // well_known switch x := m.WellKnown.(type) { case *BytesRules_Ip: t := uint64(0) if x.Ip { t = 1 } b.EncodeVarint(10<<3 | proto.WireVarint) b.EncodeVarint(t) case *BytesRules_Ipv4: t := uint64(0) if x.Ipv4 { t = 1 } b.EncodeVarint(11<<3 | proto.WireVarint) b.EncodeVarint(t) case *BytesRules_Ipv6: t := uint64(0) if x.Ipv6 { t = 1 } b.EncodeVarint(12<<3 | proto.WireVarint) b.EncodeVarint(t) case nil: default: return fmt.Errorf("BytesRules.WellKnown has unexpected type %T", x) } return nil } func _BytesRules_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*BytesRules) switch tag { case 10: // well_known.ip if wire != proto.WireVarint { return true, proto.ErrInternalBadWireType } x, err := b.DecodeVarint() m.WellKnown = &BytesRules_Ip{x != 0} return true, err case 11: // well_known.ipv4 if wire != proto.WireVarint { return true, proto.ErrInternalBadWireType } x, err := b.DecodeVarint() m.WellKnown = &BytesRules_Ipv4{x != 0} return true, err case 12: // well_known.ipv6 if wire != proto.WireVarint { return true, proto.ErrInternalBadWireType } x, err := b.DecodeVarint() m.WellKnown = &BytesRules_Ipv6{x != 0} return true, err default: return false, nil } } func _BytesRules_OneofSizer(msg proto.Message) (n int) { m := msg.(*BytesRules) // well_known switch x := m.WellKnown.(type) { case *BytesRules_Ip: n += 1 // tag and wire n += 1 case *BytesRules_Ipv4: n += 1 // tag and wire n += 1 case *BytesRules_Ipv6: n += 1 // tag and wire n += 1 case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } type EnumRules struct { Const *int32 `protobuf:"varint,1,opt,name=const" json:"const,omitempty"` DefinedOnly *bool `protobuf:"varint,2,opt,name=defined_only,json=definedOnly" json:"defined_only,omitempty"` In []int32 `protobuf:"varint,3,rep,name=in" json:"in,omitempty"` NotIn []int32 `protobuf:"varint,4,rep,name=not_in,json=notIn" json:"not_in,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *EnumRules) Reset() { *m = EnumRules{} } func (m *EnumRules) String() string { return proto.CompactTextString(m) } func (*EnumRules) ProtoMessage() {} func (*EnumRules) Descriptor() ([]byte, []int) { return fileDescriptor_validate_4e427f48c21fab34, []int{16} } func (m *EnumRules) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_EnumRules.Unmarshal(m, b) } func (m *EnumRules) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_EnumRules.Marshal(b, m, deterministic) } func (dst *EnumRules) XXX_Merge(src proto.Message) { xxx_messageInfo_EnumRules.Merge(dst, src) } func (m *EnumRules) XXX_Size() int { return xxx_messageInfo_EnumRules.Size(m) } func (m *EnumRules) XXX_DiscardUnknown() { xxx_messageInfo_EnumRules.DiscardUnknown(m) } var xxx_messageInfo_EnumRules proto.InternalMessageInfo func (m *EnumRules) GetConst() int32 { if m != nil && m.Const != nil { return *m.Const } return 0 } func (m *EnumRules) GetDefinedOnly() bool { if m != nil && m.DefinedOnly 
!= nil { return *m.DefinedOnly } return false } func (m *EnumRules) GetIn() []int32 { if m != nil { return m.In } return nil } func (m *EnumRules) GetNotIn() []int32 { if m != nil { return m.NotIn } return nil } type MessageRules struct { Skip *bool `protobuf:"varint,1,opt,name=skip" json:"skip,omitempty"` Required *bool `protobuf:"varint,2,opt,name=required" json:"required,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *MessageRules) Reset() { *m = MessageRules{} } func (m *MessageRules) String() string { return proto.CompactTextString(m) } func (*MessageRules) ProtoMessage() {} func (*MessageRules) Descriptor() ([]byte, []int) { return fileDescriptor_validate_4e427f48c21fab34, []int{17} } func (m *MessageRules) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_MessageRules.Unmarshal(m, b) } func (m *MessageRules) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_MessageRules.Marshal(b, m, deterministic) } func (dst *MessageRules) XXX_Merge(src proto.Message) { xxx_messageInfo_MessageRules.Merge(dst, src) } func (m *MessageRules) XXX_Size() int { return xxx_messageInfo_MessageRules.Size(m) } func (m *MessageRules) XXX_DiscardUnknown() { xxx_messageInfo_MessageRules.DiscardUnknown(m) } var xxx_messageInfo_MessageRules proto.InternalMessageInfo func (m *MessageRules) GetSkip() bool { if m != nil && m.Skip != nil { return *m.Skip } return false } func (m *MessageRules) GetRequired() bool { if m != nil && m.Required != nil { return *m.Required } return false } type RepeatedRules struct { MinItems *uint64 `protobuf:"varint,1,opt,name=min_items,json=minItems" json:"min_items,omitempty"` MaxItems *uint64 `protobuf:"varint,2,opt,name=max_items,json=maxItems" json:"max_items,omitempty"` Unique *bool `protobuf:"varint,3,opt,name=unique" json:"unique,omitempty"` Items *FieldRules `protobuf:"bytes,4,opt,name=items" json:"items,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *RepeatedRules) Reset() { *m = RepeatedRules{} } func (m *RepeatedRules) String() string { return proto.CompactTextString(m) } func (*RepeatedRules) ProtoMessage() {} func (*RepeatedRules) Descriptor() ([]byte, []int) { return fileDescriptor_validate_4e427f48c21fab34, []int{18} } func (m *RepeatedRules) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_RepeatedRules.Unmarshal(m, b) } func (m *RepeatedRules) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_RepeatedRules.Marshal(b, m, deterministic) } func (dst *RepeatedRules) XXX_Merge(src proto.Message) { xxx_messageInfo_RepeatedRules.Merge(dst, src) } func (m *RepeatedRules) XXX_Size() int { return xxx_messageInfo_RepeatedRules.Size(m) } func (m *RepeatedRules) XXX_DiscardUnknown() { xxx_messageInfo_RepeatedRules.DiscardUnknown(m) } var xxx_messageInfo_RepeatedRules proto.InternalMessageInfo func (m *RepeatedRules) GetMinItems() uint64 { if m != nil && m.MinItems != nil { return *m.MinItems } return 0 } func (m *RepeatedRules) GetMaxItems() uint64 { if m != nil && m.MaxItems != nil { return *m.MaxItems } return 0 } func (m *RepeatedRules) GetUnique() bool { if m != nil && m.Unique != nil { return *m.Unique } return false } func (m *RepeatedRules) GetItems() *FieldRules { if m != nil { return m.Items } return nil } type MapRules struct { MinPairs *uint64 `protobuf:"varint,1,opt,name=min_pairs,json=minPairs" json:"min_pairs,omitempty"` MaxPairs 
*uint64 `protobuf:"varint,2,opt,name=max_pairs,json=maxPairs" json:"max_pairs,omitempty"` NoSparse *bool `protobuf:"varint,3,opt,name=no_sparse,json=noSparse" json:"no_sparse,omitempty"` Keys *FieldRules `protobuf:"bytes,4,opt,name=keys" json:"keys,omitempty"` Values *FieldRules `protobuf:"bytes,5,opt,name=values" json:"values,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *MapRules) Reset() { *m = MapRules{} } func (m *MapRules) String() string { return proto.CompactTextString(m) } func (*MapRules) ProtoMessage() {} func (*MapRules) Descriptor() ([]byte, []int) { return fileDescriptor_validate_4e427f48c21fab34, []int{19} } func (m *MapRules) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_MapRules.Unmarshal(m, b) } func (m *MapRules) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_MapRules.Marshal(b, m, deterministic) } func (dst *MapRules) XXX_Merge(src proto.Message) { xxx_messageInfo_MapRules.Merge(dst, src) } func (m *MapRules) XXX_Size() int { return xxx_messageInfo_MapRules.Size(m) } func (m *MapRules) XXX_DiscardUnknown() { xxx_messageInfo_MapRules.DiscardUnknown(m) } var xxx_messageInfo_MapRules proto.InternalMessageInfo func (m *MapRules) GetMinPairs() uint64 { if m != nil && m.MinPairs != nil { return *m.MinPairs } return 0 } func (m *MapRules) GetMaxPairs() uint64 { if m != nil && m.MaxPairs != nil { return *m.MaxPairs } return 0 } func (m *MapRules) GetNoSparse() bool { if m != nil && m.NoSparse != nil { return *m.NoSparse } return false } func (m *MapRules) GetKeys() *FieldRules { if m != nil { return m.Keys } return nil } func (m *MapRules) GetValues() *FieldRules { if m != nil { return m.Values } return nil } type AnyRules struct { Required *bool `protobuf:"varint,1,opt,name=required" json:"required,omitempty"` In []string `protobuf:"bytes,2,rep,name=in" json:"in,omitempty"` NotIn []string `protobuf:"bytes,3,rep,name=not_in,json=notIn" json:"not_in,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *AnyRules) Reset() { *m = AnyRules{} } func (m *AnyRules) String() string { return proto.CompactTextString(m) } func (*AnyRules) ProtoMessage() {} func (*AnyRules) Descriptor() ([]byte, []int) { return fileDescriptor_validate_4e427f48c21fab34, []int{20} } func (m *AnyRules) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_AnyRules.Unmarshal(m, b) } func (m *AnyRules) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_AnyRules.Marshal(b, m, deterministic) } func (dst *AnyRules) XXX_Merge(src proto.Message) { xxx_messageInfo_AnyRules.Merge(dst, src) } func (m *AnyRules) XXX_Size() int { return xxx_messageInfo_AnyRules.Size(m) } func (m *AnyRules) XXX_DiscardUnknown() { xxx_messageInfo_AnyRules.DiscardUnknown(m) } var xxx_messageInfo_AnyRules proto.InternalMessageInfo func (m *AnyRules) GetRequired() bool { if m != nil && m.Required != nil { return *m.Required } return false } func (m *AnyRules) GetIn() []string { if m != nil { return m.In } return nil } func (m *AnyRules) GetNotIn() []string { if m != nil { return m.NotIn } return nil } type DurationRules struct { Required *bool `protobuf:"varint,1,opt,name=required" json:"required,omitempty"` Const *duration.Duration `protobuf:"bytes,2,opt,name=const" json:"const,omitempty"` Lt *duration.Duration `protobuf:"bytes,3,opt,name=lt" json:"lt,omitempty"` Lte *duration.Duration 
`protobuf:"bytes,4,opt,name=lte" json:"lte,omitempty"` Gt *duration.Duration `protobuf:"bytes,5,opt,name=gt" json:"gt,omitempty"` Gte *duration.Duration `protobuf:"bytes,6,opt,name=gte" json:"gte,omitempty"` In []*duration.Duration `protobuf:"bytes,7,rep,name=in" json:"in,omitempty"` NotIn []*duration.Duration `protobuf:"bytes,8,rep,name=not_in,json=notIn" json:"not_in,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *DurationRules) Reset() { *m = DurationRules{} } func (m *DurationRules) String() string { return proto.CompactTextString(m) } func (*DurationRules) ProtoMessage() {} func (*DurationRules) Descriptor() ([]byte, []int) { return fileDescriptor_validate_4e427f48c21fab34, []int{21} } func (m *DurationRules) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_DurationRules.Unmarshal(m, b) } func (m *DurationRules) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_DurationRules.Marshal(b, m, deterministic) } func (dst *DurationRules) XXX_Merge(src proto.Message) { xxx_messageInfo_DurationRules.Merge(dst, src) } func (m *DurationRules) XXX_Size() int { return xxx_messageInfo_DurationRules.Size(m) } func (m *DurationRules) XXX_DiscardUnknown() { xxx_messageInfo_DurationRules.DiscardUnknown(m) } var xxx_messageInfo_DurationRules proto.InternalMessageInfo func (m *DurationRules) GetRequired() bool { if m != nil && m.Required != nil { return *m.Required } return false } func (m *DurationRules) GetConst() *duration.Duration { if m != nil { return m.Const } return nil } func (m *DurationRules) GetLt() *duration.Duration { if m != nil { return m.Lt } return nil } func (m *DurationRules) GetLte() *duration.Duration { if m != nil { return m.Lte } return nil } func (m *DurationRules) GetGt() *duration.Duration { if m != nil { return m.Gt } return nil } func (m *DurationRules) GetGte() *duration.Duration { if m != nil { return m.Gte } return nil } func (m *DurationRules) GetIn() []*duration.Duration { if m != nil { return m.In } return nil } func (m *DurationRules) GetNotIn() []*duration.Duration { if m != nil { return m.NotIn } return nil } type TimestampRules struct { Required *bool `protobuf:"varint,1,opt,name=required" json:"required,omitempty"` Const *timestamp.Timestamp `protobuf:"bytes,2,opt,name=const" json:"const,omitempty"` Lt *timestamp.Timestamp `protobuf:"bytes,3,opt,name=lt" json:"lt,omitempty"` Lte *timestamp.Timestamp `protobuf:"bytes,4,opt,name=lte" json:"lte,omitempty"` Gt *timestamp.Timestamp `protobuf:"bytes,5,opt,name=gt" json:"gt,omitempty"` Gte *timestamp.Timestamp `protobuf:"bytes,6,opt,name=gte" json:"gte,omitempty"` LtNow *bool `protobuf:"varint,7,opt,name=lt_now,json=ltNow" json:"lt_now,omitempty"` GtNow *bool `protobuf:"varint,8,opt,name=gt_now,json=gtNow" json:"gt_now,omitempty"` Within *duration.Duration `protobuf:"bytes,9,opt,name=within" json:"within,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *TimestampRules) Reset() { *m = TimestampRules{} } func (m *TimestampRules) String() string { return proto.CompactTextString(m) } func (*TimestampRules) ProtoMessage() {} func (*TimestampRules) Descriptor() ([]byte, []int) { return fileDescriptor_validate_4e427f48c21fab34, []int{22} } func (m *TimestampRules) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_TimestampRules.Unmarshal(m, b) } func (m *TimestampRules) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) 
{ return xxx_messageInfo_TimestampRules.Marshal(b, m, deterministic) } func (dst *TimestampRules) XXX_Merge(src proto.Message) { xxx_messageInfo_TimestampRules.Merge(dst, src) } func (m *TimestampRules) XXX_Size() int { return xxx_messageInfo_TimestampRules.Size(m) } func (m *TimestampRules) XXX_DiscardUnknown() { xxx_messageInfo_TimestampRules.DiscardUnknown(m) } var xxx_messageInfo_TimestampRules proto.InternalMessageInfo func (m *TimestampRules) GetRequired() bool { if m != nil && m.Required != nil { return *m.Required } return false } func (m *TimestampRules) GetConst() *timestamp.Timestamp { if m != nil { return m.Const } return nil } func (m *TimestampRules) GetLt() *timestamp.Timestamp { if m != nil { return m.Lt } return nil } func (m *TimestampRules) GetLte() *timestamp.Timestamp { if m != nil { return m.Lte } return nil } func (m *TimestampRules) GetGt() *timestamp.Timestamp { if m != nil { return m.Gt } return nil } func (m *TimestampRules) GetGte() *timestamp.Timestamp { if m != nil { return m.Gte } return nil } func (m *TimestampRules) GetLtNow() bool { if m != nil && m.LtNow != nil { return *m.LtNow } return false } func (m *TimestampRules) GetGtNow() bool { if m != nil && m.GtNow != nil { return *m.GtNow } return false } func (m *TimestampRules) GetWithin() *duration.Duration { if m != nil { return m.Within } return nil } var E_Disabled = &proto.ExtensionDesc{ ExtendedType: (*descriptor.MessageOptions)(nil), ExtensionType: (*bool)(nil), Field: 919191, Name: "validate.disabled", Tag: "varint,919191,opt,name=disabled", Filename: "validate/validate.proto", } var E_Required = &proto.ExtensionDesc{ ExtendedType: (*descriptor.OneofOptions)(nil), ExtensionType: (*bool)(nil), Field: 919191, Name: "validate.required", Tag: "varint,919191,opt,name=required", Filename: "validate/validate.proto", } var E_Rules = &proto.ExtensionDesc{ ExtendedType: (*descriptor.FieldOptions)(nil), ExtensionType: (*FieldRules)(nil), Field: 919191, Name: "validate.rules", Tag: "bytes,919191,opt,name=rules", Filename: "validate/validate.proto", } func init() { proto.RegisterType((*FieldRules)(nil), "validate.FieldRules") proto.RegisterType((*FloatRules)(nil), "validate.FloatRules") proto.RegisterType((*DoubleRules)(nil), "validate.DoubleRules") proto.RegisterType((*Int32Rules)(nil), "validate.Int32Rules") proto.RegisterType((*Int64Rules)(nil), "validate.Int64Rules") proto.RegisterType((*UInt32Rules)(nil), "validate.UInt32Rules") proto.RegisterType((*UInt64Rules)(nil), "validate.UInt64Rules") proto.RegisterType((*SInt32Rules)(nil), "validate.SInt32Rules") proto.RegisterType((*SInt64Rules)(nil), "validate.SInt64Rules") proto.RegisterType((*Fixed32Rules)(nil), "validate.Fixed32Rules") proto.RegisterType((*Fixed64Rules)(nil), "validate.Fixed64Rules") proto.RegisterType((*SFixed32Rules)(nil), "validate.SFixed32Rules") proto.RegisterType((*SFixed64Rules)(nil), "validate.SFixed64Rules") proto.RegisterType((*BoolRules)(nil), "validate.BoolRules") proto.RegisterType((*StringRules)(nil), "validate.StringRules") proto.RegisterType((*BytesRules)(nil), "validate.BytesRules") proto.RegisterType((*EnumRules)(nil), "validate.EnumRules") proto.RegisterType((*MessageRules)(nil), "validate.MessageRules") proto.RegisterType((*RepeatedRules)(nil), "validate.RepeatedRules") proto.RegisterType((*MapRules)(nil), "validate.MapRules") proto.RegisterType((*AnyRules)(nil), "validate.AnyRules") proto.RegisterType((*DurationRules)(nil), "validate.DurationRules") proto.RegisterType((*TimestampRules)(nil), "validate.TimestampRules") 
proto.RegisterExtension(E_Disabled) proto.RegisterExtension(E_Required) proto.RegisterExtension(E_Rules) } func init() { proto.RegisterFile("validate/validate.proto", fileDescriptor_validate_4e427f48c21fab34) } var fileDescriptor_validate_4e427f48c21fab34 = []byte{ // 1634 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xa4, 0x98, 0xcb, 0x6e, 0xdb, 0xce, 0x15, 0xc6, 0x2b, 0xde, 0x44, 0x8d, 0xa5, 0x48, 0x9a, 0xd8, 0x0e, 0xe3, 0x5e, 0xe2, 0x68, 0x51, 0x38, 0x69, 0x60, 0xa5, 0x8e, 0x2b, 0x04, 0x09, 0x5a, 0xa0, 0x46, 0x1a, 0x34, 0x68, 0xd3, 0x14, 0x74, 0xb2, 0xe9, 0x46, 0xa0, 0xad, 0x11, 0x33, 0x30, 0x35, 0x64, 0x48, 0xca, 0xb6, 0x1e, 0x22, 0x6d, 0x77, 0x7d, 0x96, 0xae, 0xba, 0xef, 0x9b, 0x74, 0xdd, 0x6d, 0x17, 0xc5, 0xdc, 0x78, 0x39, 0xa4, 0xe5, 0xc5, 0x7f, 0xa7, 0x39, 0xe7, 0x3b, 0x33, 0x3f, 0x7c, 0x23, 0xce, 0x1c, 0x12, 0x3d, 0xba, 0x0e, 0x22, 0xba, 0x08, 0x72, 0x32, 0xd5, 0x3f, 0x8e, 0x93, 0x34, 0xce, 0x63, 0xec, 0xea, 0xf1, 0xc1, 0x61, 0x18, 0xc7, 0x61, 0x44, 0xa6, 0x22, 0x7e, 0xb1, 0x5e, 0x4e, 0x17, 0x24, 0xbb, 0x4c, 0x69, 0x92, 0xc7, 0xa9, 0xd4, 0x1e, 0xfc, 0xac, 0xa1, 0x58, 0xa7, 0x41, 0x4e, 0x63, 0xa6, 0xf2, 0x4f, 0x60, 0x3e, 0xa7, 0x2b, 0x92, 0xe5, 0xc1, 0x2a, 0x91, 0x82, 0xc9, 0xbf, 0x5d, 0x84, 0xde, 0x53, 0x12, 0x2d, 0xfc, 0x75, 0x44, 0x32, 0xfc, 0x02, 0xd9, 0xcb, 0x28, 0x0e, 0x72, 0xaf, 0x73, 0xd8, 0x39, 0xda, 0x39, 0xd9, 0x3d, 0x2e, 0xd8, 0xde, 0xf3, 0xb0, 0x10, 0xfd, 0xfe, 0x47, 0xbe, 0x14, 0xe1, 0x29, 0x72, 0x16, 0xf1, 0xfa, 0x22, 0x22, 0x9e, 0x21, 0xe4, 0x7b, 0xa5, 0xfc, 0x9d, 0x88, 0x6b, 0xbd, 0x92, 0xf1, 0xe9, 0x29, 0xcb, 0x5f, 0x9d, 0x78, 0x26, 0x9c, 0xfe, 0x03, 0x0f, 0x17, 0xd3, 0x0b, 0x91, 0x52, 0xcf, 0x4e, 0x3d, 0xab, 0x45, 0x3d, 0x3b, 0xad, 0xaa, 0x67, 0xa7, 0x1c, 0x66, 0x2d, 0x27, 0xb7, 0x21, 0xcc, 0x97, 0xda, 0xec, 0x4a, 0xa6, 0x0b, 0x66, 0xa7, 0x9e, 0xd3, 0x56, 0x50, 0x2e, 0xa0, 0x64, 0xbc, 0x20, 0x93, 0x2b, 0x74, 0x61, 0xc1, 0x79, 0x7d, 0x85, 0xac, 0x58, 0x21, 0x93, 0x2b, 0xb8, 0x6d, 0x05, 0x95, 0x15, 0xa4, 0x0c, 0x9f, 0xa0, 0xee, 0x92, 0xde, 0x92, 0xc5, 0xab, 0x13, 0xaf, 0x27, 0x2a, 0xf6, 0x2b, 0x1b, 0x20, 0x13, 0xba, 0x44, 0x0b, 0x8b, 0x9a, 0xd9, 0xa9, 0x87, 0x5a, 0x6b, 0xca, 0x65, 0xb4, 0x10, 0xff, 0x0a, 0xb9, 0x99, 0x5e, 0x68, 0x47, 0x14, 0x3d, 0xaa, 0xa0, 0x81, 0x95, 0x0a, 0x69, 0x59, 0x36, 0x3b, 0xf5, 0xfa, 0xed, 0x65, 0xe5, 0x62, 0x85, 0x14, 0x3f, 0x43, 0xd6, 0x45, 0x1c, 0x47, 0xde, 0x40, 0x94, 0x3c, 0x2c, 0x4b, 0xce, 0xe2, 0x38, 0xd2, 0x72, 0x21, 0x11, 0x8e, 0xe5, 0x29, 0x65, 0xa1, 0xf7, 0xa0, 0xe1, 0x98, 0x88, 0x97, 0x8e, 0x89, 0x21, 0xff, 0x8f, 0x5c, 0x6c, 0x72, 0x92, 0x79, 0x43, 0xf8, 0x1f, 0x39, 0xe3, 0xe1, 0xe2, 0x3f, 0x22, 0x44, 0x9c, 0x84, 0xb0, 0xf5, 0xca, 0x1b, 0x41, 0x92, 0xdf, 0xb1, 0xf5, 0xaa, 0x20, 0xe1, 0x12, 0x6e, 0xeb, 0x8a, 0x64, 0x59, 0x10, 0x12, 0x6f, 0x0c, 0x6d, 0xfd, 0x28, 0x13, 0x85, 0xad, 0x4a, 0xc8, 0xfd, 0x49, 0x49, 0x42, 0x82, 0x9c, 0x2c, 0x3c, 0x0c, 0xfd, 0xf1, 0x55, 0xa6, 0xf0, 0x47, 0x4b, 0xf1, 0xcf, 0x91, 0xb9, 0x0a, 0x12, 0xef, 0xa1, 0xa8, 0xc0, 0x95, 0x65, 0x82, 0x44, 0x8b, 0xb9, 0x80, 0xeb, 0x02, 0xb6, 0xf1, 0x76, 0xa1, 0xee, 0xb7, 0x6c, 0x53, 0xe8, 0x02, 0xb6, 0xe1, 0x18, 0xfa, 0x18, 0xf0, 0xf6, 0x20, 0xc6, 0x3b, 0x95, 0x29, 0x30, 0xb4, 0x14, 0xbf, 0x46, 0xbd, 0xe2, 0x74, 0xf0, 0xf6, 0x45, 0x9d, 0x57, 0xd6, 0x7d, 0xd6, 0x29, 0x5d, 0x58, 0x8a, 0xcf, 0x1c, 0x64, 0xe5, 0x9b, 0x84, 0x4c, 0xbe, 0x77, 0x10, 0x2a, 0xcf, 0x09, 0xbc, 0x8b, 0xec, 0xcb, 0x98, 0x65, 0xf2, 0x30, 0x31, 0x7c, 0x39, 0xc0, 0x0f, 0x90, 0x11, 0xe5, 0xe2, 0xc0, 0x30, 0x7c, 0x23, 0xca, 0xf1, 0x08, 0x99, 0x51, 
0x4e, 0xc4, 0x89, 0x60, 0xf8, 0xfc, 0x27, 0x57, 0x84, 0xb9, 0x78, 0xe8, 0x0d, 0xdf, 0x08, 0x85, 0x22, 0xcc, 0x89, 0x78, 0xac, 0x0d, 0x9f, 0xff, 0xe4, 0x0a, 0xca, 0x3c, 0xe7, 0xd0, 0xe4, 0x0a, 0xca, 0xf0, 0x1e, 0x72, 0x58, 0x9c, 0xcf, 0x29, 0xf3, 0xba, 0x22, 0x66, 0xb3, 0x38, 0xff, 0xc0, 0x26, 0x7f, 0xed, 0xa0, 0x9d, 0xca, 0x41, 0x54, 0x07, 0xea, 0x34, 0x81, 0x3a, 0x10, 0xa8, 0x03, 0x81, 0x3a, 0x10, 0xa8, 0x03, 0x81, 0x3a, 0x2d, 0x40, 0x1d, 0x0d, 0xc4, 0x0d, 0x2a, 0x4f, 0x8a, 0x3a, 0x8f, 0xdd, 0xe4, 0xb1, 0x21, 0x8f, 0x0d, 0x79, 0x6c, 0xc8, 0x63, 0x43, 0x1e, 0xbb, 0x85, 0xc7, 0x06, 0x3c, 0xea, 0xa1, 0xad, 0xf3, 0x98, 0x4d, 0x1e, 0x13, 0xf2, 0x98, 0x90, 0xc7, 0x84, 0x3c, 0x26, 0xe4, 0x31, 0x5b, 0x78, 0xcc, 0xea, 0x86, 0x7d, 0xb9, 0xcb, 0xa0, 0x41, 0x13, 0x68, 0x00, 0x81, 0x06, 0x10, 0x68, 0x00, 0x81, 0x06, 0x10, 0x68, 0xd0, 0x02, 0x34, 0x80, 0x40, 0xad, 0x0e, 0x59, 0x4d, 0x20, 0x0b, 0x02, 0x59, 0x10, 0xc8, 0x82, 0x40, 0x16, 0x04, 0xb2, 0x5a, 0x80, 0xac, 0x2a, 0xd0, 0xf9, 0x5d, 0x0e, 0x8d, 0x9b, 0x40, 0x63, 0x08, 0x34, 0x86, 0x40, 0x63, 0x08, 0x34, 0x86, 0x40, 0xe3, 0x16, 0xa0, 0x31, 0x04, 0x6a, 0x75, 0x08, 0x37, 0x81, 0x30, 0x04, 0xc2, 0x10, 0x08, 0x43, 0x20, 0x0c, 0x81, 0x70, 0x0b, 0x10, 0xd6, 0x40, 0x7f, 0xeb, 0xa0, 0x7e, 0xf5, 0x06, 0xab, 0x13, 0x75, 0x9b, 0x44, 0x5d, 0x48, 0xd4, 0x85, 0x44, 0x5d, 0x48, 0xd4, 0x85, 0x44, 0xdd, 0x16, 0xa2, 0x6e, 0x83, 0xa8, 0xd5, 0x23, 0xa7, 0x49, 0xe4, 0x40, 0x22, 0x07, 0x12, 0x39, 0x90, 0xc8, 0x81, 0x44, 0x4e, 0x0b, 0x91, 0xa3, 0x89, 0xfe, 0xde, 0x41, 0x83, 0xf3, 0xbb, 0x4d, 0x1a, 0x36, 0x91, 0x86, 0x10, 0x69, 0x08, 0x91, 0x86, 0x10, 0x69, 0x08, 0x91, 0x86, 0x2d, 0x48, 0xc3, 0x26, 0x52, 0xab, 0x4b, 0xa3, 0x26, 0xd2, 0x08, 0x22, 0x8d, 0x20, 0xd2, 0x08, 0x22, 0x8d, 0x20, 0xd2, 0xa8, 0x05, 0x69, 0xa4, 0x91, 0x9e, 0xa2, 0x5e, 0xd1, 0xa1, 0xd4, 0x69, 0x5c, 0x45, 0x33, 0xf9, 0x9f, 0x89, 0x76, 0x2a, 0x8d, 0x49, 0x5d, 0xd5, 0xd3, 0xcc, 0x9c, 0x91, 0x30, 0x71, 0xc1, 0xf3, 0xf3, 0x80, 0x30, 0xfc, 0x08, 0x75, 0x57, 0x94, 0xcd, 0x79, 0x54, 0x1e, 0x1b, 0xce, 0x8a, 0xb2, 0x3f, 0xaa, 0x44, 0x70, 0x2b, 0x12, 0xa6, 0x4a, 0x04, 0xb7, 0x3c, 0xf1, 0x63, 0xd4, 0x8b, 0x08, 0x9b, 0xcb, 0x66, 0x67, 0x57, 0xa4, 0xdc, 0x88, 0x30, 0xd1, 0xe5, 0xf0, 0x24, 0x9f, 0x4e, 0x26, 0xe5, 0x29, 0xe3, 0xae, 0x68, 0x25, 0x19, 0xdc, 0xaa, 0xa4, 0xad, 0x92, 0xc1, 0xad, 0x4c, 0x7a, 0xa8, 0x9b, 0x04, 0x79, 0x4e, 0x52, 0x26, 0xba, 0xe0, 0x9e, 0xaf, 0x87, 0x78, 0x1f, 0x39, 0x49, 0x4a, 0x96, 0xf4, 0x56, 0x74, 0xbb, 0x3d, 0x5f, 0x8d, 0x78, 0x3c, 0x5b, 0x2f, 0x79, 0xdc, 0x95, 0x71, 0x39, 0xc2, 0x07, 0xc8, 0xbd, 0x8c, 0x59, 0x1e, 0x50, 0x96, 0x89, 0xe6, 0xb5, 0xe7, 0x17, 0x63, 0x65, 0x38, 0x3a, 0x34, 0x8f, 0x7a, 0xc0, 0xf0, 0x1d, 0x11, 0x93, 0x86, 0xe3, 0x7d, 0x64, 0x93, 0x55, 0x40, 0x23, 0xd1, 0x5c, 0xba, 0xbc, 0x6d, 0x13, 0x43, 0xfc, 0x13, 0xe4, 0x7e, 0x8d, 0xb3, 0x9c, 0x05, 0x2b, 0x22, 0x9a, 0x48, 0x9e, 0x2a, 0x22, 0x78, 0x84, 0x0c, 0x9a, 0x88, 0x7e, 0x91, 0xc7, 0x0d, 0x9a, 0xe0, 0x5d, 0x64, 0xd1, 0xe4, 0xfa, 0x54, 0xf4, 0x84, 0x3c, 0x26, 0x46, 0x2a, 0x3a, 0x13, 0xcd, 0x9f, 0x8e, 0xce, 0x30, 0x46, 0xe6, 0x3a, 0xa5, 0xa2, 0xc7, 0xe3, 0x41, 0x3e, 0xc0, 0x8f, 0x51, 0x77, 0x9d, 0xd2, 0x79, 0x4a, 0x96, 0xa2, 0x8d, 0x73, 0xc5, 0x3b, 0x40, 0x4a, 0x7d, 0xb2, 0x3c, 0xeb, 0x23, 0x74, 0x43, 0xa2, 0x68, 0x7e, 0xc5, 0xe2, 0x1b, 0x36, 0xf9, 0x97, 0x81, 0x50, 0xd9, 0x67, 0xd6, 0x77, 0xbf, 0x0f, 0x76, 0x7f, 0xf0, 0x43, 0x76, 0xbf, 0xb2, 0x4d, 0xd6, 0x5d, 0xdb, 0x64, 0x8b, 0x45, 0x9b, 0xdb, 0xe4, 0xc8, 0x78, 0xcb, 0x36, 0x75, 0x45, 0x06, 0x6e, 0x93, 0x7b, 0x68, 0x1e, 0xf5, 0xc1, 0x36, 0xf5, 0x44, 0x4c, 0x6d, 0x93, 
0x34, 0x1c, 0xb5, 0x18, 0xbe, 0xd3, 0x6a, 0x78, 0xbf, 0x6a, 0x38, 0x70, 0xf0, 0x0a, 0xf5, 0x8a, 0xde, 0xfb, 0x8e, 0x7e, 0xe8, 0x29, 0xea, 0x2f, 0xc8, 0x92, 0x32, 0xb2, 0x98, 0xc7, 0x2c, 0xda, 0x08, 0xcb, 0x5c, 0x7f, 0x47, 0xc5, 0x3e, 0xb1, 0x68, 0xa3, 0xc0, 0xcd, 0x96, 0x76, 0xc7, 0xaa, 0xb6, 0x3b, 0xbf, 0x41, 0xfd, 0x6a, 0xeb, 0x8e, 0x31, 0xb2, 0xb2, 0x2b, 0x9a, 0xa8, 0x47, 0x5a, 0xfc, 0xe6, 0xfe, 0xa4, 0xe4, 0xdb, 0x9a, 0xa6, 0x64, 0xa1, 0x56, 0x2a, 0xc6, 0xbc, 0x5d, 0x1a, 0xd4, 0xda, 0x78, 0xfd, 0xe0, 0xd1, 0x9c, 0xac, 0x32, 0xd5, 0x13, 0xf0, 0x07, 0xef, 0x03, 0x1f, 0xeb, 0x07, 0x4f, 0x26, 0x8d, 0xe2, 0xc1, 0x93, 0xc9, 0x7d, 0xe4, 0xac, 0x19, 0xfd, 0xb6, 0x96, 0x47, 0x97, 0xeb, 0xab, 0x11, 0x7e, 0x8e, 0x6c, 0x59, 0xd0, 0x78, 0xe9, 0x2d, 0x5f, 0xd3, 0x7d, 0x29, 0x99, 0xfc, 0xb3, 0x83, 0x5c, 0xfd, 0x92, 0xa0, 0x51, 0x92, 0x80, 0xa6, 0x55, 0x94, 0x3f, 0xf3, 0xb1, 0x46, 0x91, 0xc9, 0x12, 0xa5, 0x48, 0xb2, 0x78, 0x9e, 0x25, 0x41, 0x9a, 0x69, 0x1a, 0x97, 0xc5, 0xe7, 0x62, 0x8c, 0x8f, 0x90, 0x75, 0x45, 0x36, 0xdb, 0x71, 0x84, 0x02, 0xbf, 0x40, 0xce, 0x75, 0x10, 0xad, 0xd5, 0x21, 0x73, 0x97, 0x56, 0x69, 0x26, 0x1f, 0x91, 0xab, 0xdf, 0x5b, 0x6a, 0x9e, 0x77, 0xea, 0x9e, 0xab, 0xad, 0x35, 0x5a, 0x8e, 0x0e, 0xb3, 0x72, 0x74, 0x4c, 0xfe, 0x63, 0xa0, 0x41, 0xed, 0xd5, 0x66, 0xeb, 0xa4, 0x53, 0xfd, 0x47, 0x93, 0xdf, 0x2d, 0x1e, 0x1f, 0xcb, 0xcf, 0x24, 0xc7, 0xfa, 0x33, 0x49, 0xf9, 0x96, 0xa4, 0xfe, 0x83, 0xcf, 0xc4, 0xad, 0x63, 0xde, 0xa7, 0xe6, 0x17, 0xd2, 0x2f, 0xe4, 0x85, 0x64, 0xdd, 0xa7, 0x15, 0x77, 0xd5, 0x33, 0x71, 0x57, 0xd9, 0xf7, 0xce, 0x1b, 0x8a, 0x79, 0xf9, 0x35, 0xe6, 0xdc, 0x3b, 0x6f, 0x28, 0xe7, 0x55, 0xb7, 0xd9, 0xf6, 0x79, 0x29, 0xc3, 0x2f, 0x0b, 0x43, 0xdd, 0xfb, 0xe4, 0xca, 0xeb, 0xff, 0x1a, 0xe8, 0x41, 0xfd, 0x75, 0x70, 0xab, 0xd9, 0x2f, 0xeb, 0x66, 0x1f, 0x34, 0xe6, 0x2f, 0xe7, 0x52, 0x6e, 0x3f, 0xaf, 0xb8, 0xbd, 0x4d, 0xce, 0xed, 0x7e, 0x51, 0xb5, 0x7b, 0x9b, 0x58, 0xf8, 0xfd, 0xbc, 0xe2, 0xf7, 0xd6, 0x99, 0x43, 0x31, 0x73, 0x69, 0xf8, 0xd6, 0x99, 0xb9, 0xe3, 0x7b, 0xc8, 0x89, 0xf2, 0x39, 0x8b, 0x6f, 0xc4, 0xa9, 0xea, 0xfa, 0x76, 0x94, 0xff, 0x29, 0xbe, 0xe1, 0xe1, 0x50, 0x86, 0x5d, 0x19, 0x0e, 0x45, 0xf8, 0x97, 0xc8, 0xb9, 0xa1, 0xf9, 0x57, 0x71, 0xb2, 0xde, 0xb3, 0x9f, 0x4a, 0xf8, 0xe6, 0xd7, 0xc8, 0x5d, 0xd0, 0x2c, 0xb8, 0x88, 0xc8, 0x02, 0x3f, 0x69, 0xc8, 0xd5, 0xb9, 0xf6, 0x29, 0xe1, 0x35, 0x99, 0xf7, 0x8f, 0xef, 0xaf, 0xe5, 0x2e, 0xe8, 0x92, 0x37, 0x6f, 0xcb, 0x1d, 0xc2, 0x3f, 0x6d, 0x94, 0x7f, 0x62, 0x24, 0x5e, 0x36, 0x8a, 0x75, 0xc1, 0x9b, 0x3f, 0x20, 0x3b, 0x15, 0xfb, 0xdc, 0xac, 0x14, 0x8f, 0x76, 0xbd, 0xf2, 0xce, 0x53, 0x4b, 0xcc, 0x71, 0xf6, 0x19, 0xed, 0x5d, 0xc6, 0xab, 0xe3, 0x68, 0xb3, 0xcc, 0x8f, 0x93, 0xf0, 0xba, 0x90, 0xfe, 0xe5, 0xad, 0x9a, 0x3b, 0x8c, 0xa3, 0x80, 0x85, 0xc7, 0x71, 0x1a, 0x4e, 0xc3, 0x34, 0xb9, 0x9c, 0x5e, 0x04, 0x51, 0xc0, 0x2e, 0x49, 0x3a, 0xbd, 0x5d, 0x64, 0x53, 0xca, 0xf8, 0xb5, 0x17, 0x44, 0xf2, 0x93, 0x66, 0xf1, 0xed, 0xf4, 0xff, 0x01, 0x00, 0x00, 0xff, 0xff, 0xdc, 0xca, 0x84, 0x71, 0x4f, 0x15, 0x00, 0x00, } grpc-go-1.22.1/balancer/xds/internal/regenerate_scripts/000077500000000000000000000000001351635773100232175ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/regenerate_scripts/envoy-proto-gen.sh000077500000000000000000000021241351635773100266250ustar00rootroot00000000000000#!/bin/bash set -ex DATA_PLANE_API_VERSION=1935b52f94f7889ad9f538a17250e78cffd0af27 git clone git@github.com:envoyproxy/data-plane-api.git git clone git@github.com:envoyproxy/protoc-gen-validate.git cd data-plane-api git checkout 
$DATA_PLANE_API_VERSION cp ../utils/WORKSPACE . bazel clean --expunge # We download a local copy of the protoc-gen-validate repo to be used by bazel # for customizing proto generated code import path. # And we do a simple grep here to get the release version of the # proto-gen-validate that gets used by data-plane-api. PROTOC_GEN_VALIDATE=v$(grep "PGV_RELEASE =" ./bazel/repository_locations.bzl | sed -r 's/.*([0-9]+\.[0-9]+\.[0-9]+).*/\1/') cd ../protoc-gen-validate git checkout $PROTOC_GEN_VALIDATE git apply ../utils/protoc-gen-validate.patch cd ../data-plane-api # cleanup.sh remove all gogo proto related imports and labels. ../utils/cleanup.sh git apply ../utils/data-plane-api.patch # proto-gen.sh build all packages required for grpc xds implementation and move # proto generated code to grpc/balancer/xds/internal/proto subdirectory. ../utils/proto-gen.sh grpc-go-1.22.1/balancer/xds/internal/regenerate_scripts/utils/000077500000000000000000000000001351635773100243575ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/internal/regenerate_scripts/utils/README.md000066400000000000000000000002201351635773100256300ustar00rootroot00000000000000Run ./envoy-proto-gen.sh to generate xds proto generated code for grpc usage. Make sure your $GOPATH is valid and you have grpc-go repo locally.grpc-go-1.22.1/balancer/xds/internal/regenerate_scripts/utils/WORKSPACE000066400000000000000000000034321351635773100256420ustar00rootroot00000000000000workspace(name = "envoy_api") local_repository( name = "com_lyft_protoc_gen_validate", path = "../protoc-gen-validate", ) load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository") load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive") http_archive( name = "io_bazel_rules_go", urls = ["https://github.com/bazelbuild/rules_go/releases/download/0.17.1/rules_go-0.17.1.tar.gz"], sha256 = "6776d68ebb897625dead17ae510eac3d5f6342367327875210df44dbe2aeeb19", ) git_repository( name = "com_github_golang_protobuf", remote = "https://github.com/golang/protobuf", commit = "aa810b61a9c79d51363740d207bb46cf8e620ed5", shallow_since = "1534281267 -0700", patches = [ "@io_bazel_rules_go//third_party:com_github_golang_protobuf-gazelle.patch", "@io_bazel_rules_go//third_party:com_github_golang_protobuf-extras.patch", ], patch_args = ["-p1"], ) load("//bazel:repositories.bzl", "api_dependencies") api_dependencies() load("@io_bazel_rules_go//go:deps.bzl", "go_rules_dependencies", "go_register_toolchains") go_rules_dependencies() go_register_toolchains() http_archive( name = "bazel_gazelle", urls = ["https://github.com/bazelbuild/bazel-gazelle/releases/download/0.17.0/bazel-gazelle-0.17.0.tar.gz"], sha256 = "3c681998538231a2d24d0c07ed5a7658cb72bfb5fd4bf9911157c0e9ac6a2687", ) load("@bazel_gazelle//:deps.bzl", "gazelle_dependencies") gazelle_dependencies() bind( name = "six", actual = "@six_archive//:six", ) http_archive( name = "six_archive", sha256 = "105f8d68616f8248e24bf0e9372ef04d3cc10104f1980f54d57b2ce73a5ad56a", build_file = "@com_google_protobuf//:six.BUILD", url = "https://pypi.python.org/packages/source/s/six/six-1.10.0.tar.gz#md5=34eed507548117b2ab523ab14b2f8b55", ) grpc-go-1.22.1/balancer/xds/internal/regenerate_scripts/utils/cleanup.sh000077500000000000000000000014711351635773100263500ustar00rootroot00000000000000#!/bin/bash find . -name '*.proto' -print0 | while IFS= read -r -d '' f do commands=( # Import mangling. -e 's#import "gogoproto/gogo.proto";##' # Remove references to gogo.proto extensions. 
-e 's#option (gogoproto\.[a-z_]\+) = \(true\|false\);##' -e 's#\(, \)\?(gogoproto\.[a-z_]\+) = \(true\|false\),\?##' # gogoproto removal can result in empty brackets. -e 's# \[\]##' # gogoproto removal can result in four spaces on a line by itself. -e '/^ $/d' ) sed -i "${commands[@]}" "$f" # gogoproto removal can leave a comma on the last element in a list. # This needs to run separately after all the commands above have finished # since it is multi-line and rewrites the output of the above patterns. sed -i -e '$!N; s#\(.*\),\([[:space:]]*\];\)#\1\2#; t; P; D;' "$f" done grpc-go-1.22.1/balancer/xds/internal/regenerate_scripts/utils/data-plane-api.patch000066400000000000000000000025351351635773100301620ustar00rootroot00000000000000diff --git a/bazel/api_build_system.bzl b/bazel/api_build_system.bzl index c68ccbd..e6cc8cb 100644 --- a/bazel/api_build_system.bzl +++ b/bazel/api_build_system.bzl @@ -7,7 +7,7 @@ _PY_SUFFIX = "_py" _CC_SUFFIX = "_cc" _GO_PROTO_SUFFIX = "_go_proto" _GO_GRPC_SUFFIX = "_go_grpc" -_GO_IMPORTPATH_PREFIX = "github.com/envoyproxy/data-plane-api/api/" +_GO_IMPORTPATH_PREFIX = "google.golang.org/grpc/balancer/xds/internal/proto/" def _Suffix(d, suffix): return d + suffix @@ -42,7 +42,7 @@ def api_py_proto_library(name, srcs = [], deps = [], has_services = 0): def api_go_proto_library(name, proto, deps = []): go_proto_library( name = _Suffix(name, _GO_PROTO_SUFFIX), - importpath = _Suffix(_GO_IMPORTPATH_PREFIX, name), + importpath = _Suffix(_GO_IMPORTPATH_PREFIX + native.package_name() + "/", name), proto = proto, visibility = ["//visibility:public"], deps = deps + [ @@ -60,7 +60,7 @@ def api_go_proto_library(name, proto, deps = []): def api_go_grpc_library(name, proto, deps = []): go_grpc_library( name = _Suffix(name, _GO_GRPC_SUFFIX), - importpath = _Suffix(_GO_IMPORTPATH_PREFIX, name), + importpath = _Suffix(_GO_IMPORTPATH_PREFIX + native.package_name() + "/", name), proto = proto, visibility = ["//visibility:public"], deps = deps + [ grpc-go-1.22.1/balancer/xds/internal/regenerate_scripts/utils/proto-gen.sh000077500000000000000000000020161351635773100266270ustar00rootroot00000000000000#!/bin/bash # packages is the collection of the packages that are required by xds for grpc. 
packages=( envoy/service/discovery/v2:ads_go_grpc envoy/api/v2:eds_go_grpc envoy/api/v2:cds_go_grpc envoy/api/v2/core:address_go_proto envoy/api/v2/core:base_go_proto envoy/api/v2/endpoint:endpoint_go_proto envoy/type:percent_go_proto envoy/service/load_stats/v2:lrs_go_grpc udpa/data/orca/v1:orca_load_report_go_proto udpa/service/orca/v1:orca_go_grpc ) if [ -z $GOPATH ]; then echo 'empty $GOPATH, exiting.'; exit 1 fi for i in ${packages[@]} do bazel build "$i" done dest="$PWD/../../proto/" rm -rf "$dest" srcs=( "find -L ./bazel-bin/envoy/ -name *.pb.go -print0" "find -L ./bazel-bin/udpa/ -name *.pb.go -print0" "find -L ./bazel-bin/ -name validate.pb.go -print0" ) for src in "${srcs[@]}" do eval "$src" | while IFS= read -r -d '' origin do target="${origin##*proto/}" final="$dest$target" mkdir -p "${final%*/*}" cp "$origin" "$dest$target" done done grpc-go-1.22.1/balancer/xds/internal/regenerate_scripts/utils/protoc-gen-validate.patch000066400000000000000000000015721351635773100312510ustar00rootroot00000000000000diff --git a/validate/BUILD b/validate/BUILD index af8c6c1..939d997 100644 --- a/validate/BUILD +++ b/validate/BUILD @@ -31,7 +31,7 @@ py_proto_library( go_proto_library( name = "go_default_library", - importpath = "github.com/lyft/protoc-gen-validate/validate", + importpath = "google.golang.org/grpc/balancer/xds/internal/proto/validate", proto = ":validate_proto", visibility = ["//visibility:public"], ) diff --git a/validate/validate.proto b/validate/validate.proto index 1c5e04a..7f5d4b0 100644 --- a/validate/validate.proto +++ b/validate/validate.proto @@ -1,7 +1,7 @@ syntax = "proto2"; package validate; -option go_package = "github.com/lyft/protoc-gen-validate/validate"; +option go_package = "google.golang.org/grpc/balancer/xds/internal/proto/validate"; option java_package = "com.lyft.pgv.validate"; import "google/protobuf/descriptor.proto"; grpc-go-1.22.1/balancer/xds/lrs/000077500000000000000000000000001351635773100163135ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/lrs/lrs.go000066400000000000000000000243721351635773100174520ustar00rootroot00000000000000/* * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ // Package lrs implements load reporting service for xds balancer. package lrs import ( "context" "sync" "sync/atomic" "time" "github.com/golang/protobuf/ptypes" structpb "github.com/golang/protobuf/ptypes/struct" "google.golang.org/grpc" "google.golang.org/grpc/balancer/xds/internal" basepb "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/core/base" loadreportpb "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/endpoint/load_report" lrsgrpc "google.golang.org/grpc/balancer/xds/internal/proto/envoy/service/load_stats/v2/lrs" lrspb "google.golang.org/grpc/balancer/xds/internal/proto/envoy/service/load_stats/v2/lrs" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/internal/backoff" ) const negativeOneUInt64 = ^uint64(0) // Store defines the interface for a load store. 
It keeps loads and can report // them to a server when requested. type Store interface { CallDropped(category string) CallStarted(l internal.Locality) CallFinished(l internal.Locality, err error) CallServerLoad(l internal.Locality, name string, d float64) ReportTo(ctx context.Context, cc *grpc.ClientConn) } type rpcCountData struct { // Only atomic accesses are allowed for the fields. succeeded *uint64 errored *uint64 inProgress *uint64 // Map from load name to load data (sum+count). Loading data from map is // atomic, but updating data takes a lock, which could cause contention when // multiple RPCs try to report loads for the same name. // // To fix the contention, shard this map. serverLoads sync.Map // map[string]*rpcLoadData } func newRPCCountData() *rpcCountData { return &rpcCountData{ succeeded: new(uint64), errored: new(uint64), inProgress: new(uint64), } } func (rcd *rpcCountData) incrSucceeded() { atomic.AddUint64(rcd.succeeded, 1) } func (rcd *rpcCountData) loadAndClearSucceeded() uint64 { return atomic.SwapUint64(rcd.succeeded, 0) } func (rcd *rpcCountData) incrErrored() { atomic.AddUint64(rcd.errored, 1) } func (rcd *rpcCountData) loadAndClearErrored() uint64 { return atomic.SwapUint64(rcd.errored, 0) } func (rcd *rpcCountData) incrInProgress() { atomic.AddUint64(rcd.inProgress, 1) } func (rcd *rpcCountData) decrInProgress() { atomic.AddUint64(rcd.inProgress, negativeOneUInt64) // atomic.Add(x, -1) } func (rcd *rpcCountData) loadInProgress() uint64 { return atomic.LoadUint64(rcd.inProgress) // InProgress count is not clear when reading. } func (rcd *rpcCountData) addServerLoad(name string, d float64) { loads, ok := rcd.serverLoads.Load(name) if !ok { tl := newRPCLoadData() loads, _ = rcd.serverLoads.LoadOrStore(name, tl) } loads.(*rpcLoadData).add(d) } // Data for server loads (from trailers or oob). Fields in this struct must be // updated consistently. // // The current solution is to hold a lock, which could cause contention. To fix, // shard serverLoads map in rpcCountData. type rpcLoadData struct { mu sync.Mutex sum float64 count uint64 } func newRPCLoadData() *rpcLoadData { return &rpcLoadData{} } func (rld *rpcLoadData) add(v float64) { rld.mu.Lock() rld.sum += v rld.count++ rld.mu.Unlock() } func (rld *rpcLoadData) loadAndClear() (s float64, c uint64) { rld.mu.Lock() s = rld.sum rld.sum = 0 c = rld.count rld.count = 0 rld.mu.Unlock() return } // lrsStore collects loads from xds balancer, and periodically sends load to the // server. type lrsStore struct { node *basepb.Node backoff backoff.Strategy lastReported time.Time drops sync.Map // map[string]*uint64 localityRPCCount sync.Map // map[internal.Locality]*rpcCountData } // NewStore creates a store for load reports. func NewStore(serviceName string) Store { return &lrsStore{ node: &basepb.Node{ Metadata: &structpb.Struct{ Fields: map[string]*structpb.Value{ internal.GrpcHostname: { Kind: &structpb.Value_StringValue{StringValue: serviceName}, }, }, }, }, backoff: backoff.Exponential{ MaxDelay: 120 * time.Second, }, lastReported: time.Now(), } } // Update functions are called by picker for each RPC. To avoid contention, all // updates are done atomically. // CallDropped adds one drop record with the given category to store. 
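// Illustrative sketch (separate from lrs.go): the lock-free counter pattern that
// rpcCountData above relies on. Success/error counts are bumped with
// atomic.AddUint64 and read-and-reset with atomic.SwapUint64 when a report is
// built, while the in-progress count is a gauge that is only loaded and carries
// over between reports. Names here are hypothetical; this is a standalone
// program, not part of the lrs package.
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

func main() {
	succeeded := new(uint64)
	inProgress := new(uint64)

	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			atomic.AddUint64(inProgress, 1)          // analogous to CallStarted
			atomic.AddUint64(succeeded, 1)           // analogous to CallFinished(nil)
			atomic.AddUint64(inProgress, ^uint64(0)) // add ^uint64(0) to decrement by one
		}()
	}
	wg.Wait()

	// Building a report: succeeded is swapped back to zero, inProgress is only read.
	fmt.Println(atomic.SwapUint64(succeeded, 0)) // 10
	fmt.Println(atomic.LoadUint64(inProgress))   // 0
}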
func (ls *lrsStore) CallDropped(category string) { p, ok := ls.drops.Load(category) if !ok { tp := new(uint64) p, _ = ls.drops.LoadOrStore(category, tp) } atomic.AddUint64(p.(*uint64), 1) } func (ls *lrsStore) CallStarted(l internal.Locality) { p, ok := ls.localityRPCCount.Load(l) if !ok { tp := newRPCCountData() p, _ = ls.localityRPCCount.LoadOrStore(l, tp) } p.(*rpcCountData).incrInProgress() } func (ls *lrsStore) CallFinished(l internal.Locality, err error) { p, ok := ls.localityRPCCount.Load(l) if !ok { // The map is never cleared, only values in the map are reset. So the // case where entry for call-finish is not found should never happen. return } p.(*rpcCountData).decrInProgress() if err == nil { p.(*rpcCountData).incrSucceeded() } else { p.(*rpcCountData).incrErrored() } } func (ls *lrsStore) CallServerLoad(l internal.Locality, name string, d float64) { p, ok := ls.localityRPCCount.Load(l) if !ok { // The map is never cleared, only values in the map are reset. So the // case where entry for CallServerLoad is not found should never happen. return } p.(*rpcCountData).addServerLoad(name, d) } func (ls *lrsStore) buildStats(clusterName string) []*loadreportpb.ClusterStats { var ( totalDropped uint64 droppedReqs []*loadreportpb.ClusterStats_DroppedRequests localityStats []*loadreportpb.UpstreamLocalityStats ) ls.drops.Range(func(category, countP interface{}) bool { tempCount := atomic.SwapUint64(countP.(*uint64), 0) if tempCount == 0 { return true } totalDropped += tempCount droppedReqs = append(droppedReqs, &loadreportpb.ClusterStats_DroppedRequests{ Category: category.(string), DroppedCount: tempCount, }) return true }) ls.localityRPCCount.Range(func(locality, countP interface{}) bool { tempLocality := locality.(internal.Locality) tempCount := countP.(*rpcCountData) tempSucceeded := tempCount.loadAndClearSucceeded() tempInProgress := tempCount.loadInProgress() tempErrored := tempCount.loadAndClearErrored() if tempSucceeded == 0 && tempInProgress == 0 && tempErrored == 0 { return true } var loadMetricStats []*loadreportpb.EndpointLoadMetricStats tempCount.serverLoads.Range(func(name, data interface{}) bool { tempName := name.(string) tempSum, tempCount := data.(*rpcLoadData).loadAndClear() if tempCount == 0 { return true } loadMetricStats = append(loadMetricStats, &loadreportpb.EndpointLoadMetricStats{ MetricName: tempName, NumRequestsFinishedWithMetric: tempCount, TotalMetricValue: tempSum, }, ) return true }) localityStats = append(localityStats, &loadreportpb.UpstreamLocalityStats{ Locality: &basepb.Locality{ Region: tempLocality.Region, Zone: tempLocality.Zone, SubZone: tempLocality.SubZone, }, TotalSuccessfulRequests: tempSucceeded, TotalRequestsInProgress: tempInProgress, TotalErrorRequests: tempErrored, LoadMetricStats: loadMetricStats, UpstreamEndpointStats: nil, // TODO: populate for per endpoint loads. }) return true }) dur := time.Since(ls.lastReported) ls.lastReported = time.Now() var ret []*loadreportpb.ClusterStats ret = append(ret, &loadreportpb.ClusterStats{ ClusterName: clusterName, UpstreamLocalityStats: localityStats, TotalDroppedRequests: totalDropped, DroppedRequests: droppedReqs, LoadReportInterval: ptypes.DurationProto(dur), }) return ret } // ReportTo makes a streaming lrs call to cc and blocks. // // It retries the call (with backoff) until ctx is canceled. 
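// Illustrative sketch (separate from lrs.go): a minimal standalone program
// exercising the exported surface shown above — NewStore, per-category
// CallDropped, and the blocking ReportTo loop. The target address and service
// name are placeholders, and the dial style (grpc.WithInsecure) matches the
// grpc-go 1.22 API; this is an assumed usage, not code from this repository.
package main

import (
	"context"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/balancer/xds/lrs"
)

func main() {
	// The service name ends up in the LRS request's node metadata.
	store := lrs.NewStore("grpc.service.example")

	// Picker-side accounting: drops are counted per category.
	store.CallDropped("throttle")

	// Reporting side: ReportTo blocks, streaming accumulated stats to the
	// management server (retrying with backoff) until the context is canceled.
	cc, err := grpc.Dial("localhost:0", grpc.WithInsecure())
	if err != nil {
		return
	}
	defer cc.Close()

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	store.ReportTo(ctx, cc)
}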
func (ls *lrsStore) ReportTo(ctx context.Context, cc *grpc.ClientConn) { c := lrsgrpc.NewLoadReportingServiceClient(cc) var ( retryCount int doBackoff bool ) for { select { case <-ctx.Done(): return default: } if doBackoff { backoffTimer := time.NewTimer(ls.backoff.Backoff(retryCount)) select { case <-backoffTimer.C: case <-ctx.Done(): backoffTimer.Stop() return } retryCount++ } doBackoff = true stream, err := c.StreamLoadStats(ctx) if err != nil { grpclog.Infof("lrs: failed to create stream: %v", err) continue } if err := stream.Send(&lrspb.LoadStatsRequest{ Node: ls.node, }); err != nil { grpclog.Infof("lrs: failed to send first request: %v", err) continue } first, err := stream.Recv() if err != nil { grpclog.Infof("lrs: failed to receive first response: %v", err) continue } interval, err := ptypes.Duration(first.LoadReportingInterval) if err != nil { grpclog.Infof("lrs: failed to convert report interval: %v", err) continue } if len(first.Clusters) != 1 { grpclog.Infof("lrs: received multiple clusters %v, expect one cluster", first.Clusters) continue } if first.ReportEndpointGranularity { // TODO: fixme to support per endpoint loads. grpclog.Infof("lrs: endpoint loads requested, but not supported by current implementation") continue } // No backoff afterwards. doBackoff = false retryCount = 0 ls.sendLoads(ctx, stream, first.Clusters[0], interval) } } func (ls *lrsStore) sendLoads(ctx context.Context, stream lrsgrpc.LoadReportingService_StreamLoadStatsClient, clusterName string, interval time.Duration) { tick := time.NewTicker(interval) defer tick.Stop() for { select { case <-tick.C: case <-ctx.Done(): return } if err := stream.Send(&lrspb.LoadStatsRequest{ Node: ls.node, ClusterStats: ls.buildStats(clusterName), }); err != nil { grpclog.Infof("lrs: failed to send report: %v", err) return } } } grpc-go-1.22.1/balancer/xds/lrs/lrs_test.go000066400000000000000000000341201351635773100205010ustar00rootroot00000000000000/* * * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package lrs import ( "context" "fmt" "io" "net" "reflect" "sort" "sync" "testing" "time" "github.com/golang/protobuf/proto" durationpb "github.com/golang/protobuf/ptypes/duration" structpb "github.com/golang/protobuf/ptypes/struct" "github.com/google/go-cmp/cmp" "google.golang.org/grpc" "google.golang.org/grpc/balancer/xds/internal" basepb "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/core/base" loadreportpb "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/endpoint/load_report" lrsgrpc "google.golang.org/grpc/balancer/xds/internal/proto/envoy/service/load_stats/v2/lrs" lrspb "google.golang.org/grpc/balancer/xds/internal/proto/envoy/service/load_stats/v2/lrs" "google.golang.org/grpc/codes" "google.golang.org/grpc/status" ) const testService = "grpc.service.test" var ( dropCategories = []string{"drop_for_real", "drop_for_fun"} localities = []internal.Locality{{Region: "a"}, {Region: "b"}} errTest = fmt.Errorf("test error") ) type rpcCountDataForTest struct { succeeded uint64 errored uint64 inProgress uint64 serverLoads map[string]float64 } func newRPCCountDataForTest(succeeded, errored, inprogress uint64, serverLoads map[string]float64) *rpcCountDataForTest { return &rpcCountDataForTest{ succeeded: succeeded, errored: errored, inProgress: inprogress, serverLoads: serverLoads, } } // Equal() is needed to compare unexported fields. func (rcd *rpcCountDataForTest) Equal(b *rpcCountDataForTest) bool { return rcd.inProgress == b.inProgress && rcd.errored == b.errored && rcd.succeeded == b.succeeded && reflect.DeepEqual(rcd.serverLoads, b.serverLoads) } // equalClusterStats sorts requests and clear report internal before comparing. func equalClusterStats(a, b []*loadreportpb.ClusterStats) bool { for _, t := range [][]*loadreportpb.ClusterStats{a, b} { for _, s := range t { sort.Slice(s.DroppedRequests, func(i, j int) bool { return s.DroppedRequests[i].Category < s.DroppedRequests[j].Category }) sort.Slice(s.UpstreamLocalityStats, func(i, j int) bool { return s.UpstreamLocalityStats[i].Locality.String() < s.UpstreamLocalityStats[j].Locality.String() }) for _, us := range s.UpstreamLocalityStats { sort.Slice(us.LoadMetricStats, func(i, j int) bool { return us.LoadMetricStats[i].MetricName < us.LoadMetricStats[j].MetricName }) } s.LoadReportInterval = nil } } return reflect.DeepEqual(a, b) } func Test_lrsStore_buildStats_drops(t *testing.T) { tests := []struct { name string drops []map[string]uint64 }{ { name: "one drop report", drops: []map[string]uint64{{ dropCategories[0]: 31, dropCategories[1]: 41, }}, }, { name: "two drop reports", drops: []map[string]uint64{{ dropCategories[0]: 31, dropCategories[1]: 41, }, { dropCategories[0]: 59, dropCategories[1]: 26, }}, }, { name: "no empty report", drops: []map[string]uint64{{ dropCategories[0]: 31, dropCategories[1]: 41, }, { dropCategories[0]: 0, // This shouldn't cause an empty report for category[0]. 
dropCategories[1]: 26, }}, }, } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { ls := NewStore(testService).(*lrsStore) for _, ds := range tt.drops { var ( totalDropped uint64 droppedReqs []*loadreportpb.ClusterStats_DroppedRequests ) for cat, count := range ds { if count == 0 { continue } totalDropped += count droppedReqs = append(droppedReqs, &loadreportpb.ClusterStats_DroppedRequests{ Category: cat, DroppedCount: count, }) } want := []*loadreportpb.ClusterStats{ { ClusterName: testService, TotalDroppedRequests: totalDropped, DroppedRequests: droppedReqs, }, } var wg sync.WaitGroup for c, count := range ds { for i := 0; i < int(count); i++ { wg.Add(1) go func(i int, c string) { ls.CallDropped(c) wg.Done() }(i, c) } } wg.Wait() if got := ls.buildStats(testService); !equalClusterStats(got, want) { t.Errorf("lrsStore.buildStats() = %v, want %v", got, want) t.Errorf("%s", cmp.Diff(got, want)) } } }) } } func Test_lrsStore_buildStats_rpcCounts(t *testing.T) { tests := []struct { name string rpcs []map[internal.Locality]struct { start, success, failure uint64 serverData map[string]float64 // Will be reported with successful RPCs. } }{ { name: "one rpcCount report", rpcs: []map[internal.Locality]struct { start, success, failure uint64 serverData map[string]float64 }{{ localities[0]: {8, 3, 1, nil}, }}, }, { name: "two localities one rpcCount report", rpcs: []map[internal.Locality]struct { start, success, failure uint64 serverData map[string]float64 }{{ localities[0]: {8, 3, 1, nil}, localities[1]: {15, 1, 5, nil}, }}, }, { name: "three rpcCount reports", rpcs: []map[internal.Locality]struct { start, success, failure uint64 serverData map[string]float64 }{{ localities[0]: {8, 3, 1, nil}, localities[1]: {15, 1, 5, nil}, }, { localities[0]: {8, 3, 1, nil}, }, { localities[1]: {15, 1, 5, nil}, }}, }, { name: "no empty report", rpcs: []map[internal.Locality]struct { start, success, failure uint64 serverData map[string]float64 }{{ localities[0]: {4, 3, 1, nil}, localities[1]: {7, 1, 5, nil}, }, { localities[0]: {0, 0, 0, nil}, // This shouldn't cause an empty report for locality[0]. localities[1]: {1, 1, 0, nil}, }}, }, { name: "two localities one report with server loads", rpcs: []map[internal.Locality]struct { start, success, failure uint64 serverData map[string]float64 }{{ localities[0]: {8, 3, 1, map[string]float64{"cpu": 15, "mem": 20}}, localities[1]: {15, 4, 5, map[string]float64{"net": 5, "disk": 0.8}}, }}, }, { name: "three reports with server loads", rpcs: []map[internal.Locality]struct { start, success, failure uint64 serverData map[string]float64 }{{ localities[0]: {8, 3, 1, map[string]float64{"cpu": 15, "mem": 20}}, localities[1]: {15, 4, 5, map[string]float64{"net": 5, "disk": 0.8}}, }, { localities[0]: {8, 3, 1, map[string]float64{"cpu": 1, "mem": 2}}, }, { localities[1]: {15, 4, 5, map[string]float64{"net": 13, "disk": 1.4}}, }}, }, } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { ls := NewStore(testService).(*lrsStore) // InProgress count doesn't get cleared at each buildStats, keep // them to carry over. 
inProgressCounts := make(map[internal.Locality]uint64) for _, counts := range tt.rpcs { var upstreamLocalityStats []*loadreportpb.UpstreamLocalityStats for l, count := range counts { tempInProgress := count.start - count.success - count.failure + inProgressCounts[l] inProgressCounts[l] = tempInProgress if count.success == 0 && tempInProgress == 0 && count.failure == 0 { continue } var loadMetricStats []*loadreportpb.EndpointLoadMetricStats for n, d := range count.serverData { loadMetricStats = append(loadMetricStats, &loadreportpb.EndpointLoadMetricStats{ MetricName: n, NumRequestsFinishedWithMetric: count.success, TotalMetricValue: d * float64(count.success), }, ) } upstreamLocalityStats = append(upstreamLocalityStats, &loadreportpb.UpstreamLocalityStats{ Locality: l.ToProto(), TotalSuccessfulRequests: count.success, TotalRequestsInProgress: tempInProgress, TotalErrorRequests: count.failure, LoadMetricStats: loadMetricStats, }) } // InProgress count doesn't get cleared at each buildStats, and // needs to be carried over to the next result. for l, c := range inProgressCounts { if _, ok := counts[l]; !ok { upstreamLocalityStats = append(upstreamLocalityStats, &loadreportpb.UpstreamLocalityStats{ Locality: l.ToProto(), TotalRequestsInProgress: c, }) } } want := []*loadreportpb.ClusterStats{ { ClusterName: testService, UpstreamLocalityStats: upstreamLocalityStats, }, } var wg sync.WaitGroup for l, count := range counts { for i := 0; i < int(count.success); i++ { wg.Add(1) go func(l internal.Locality, serverData map[string]float64) { ls.CallStarted(l) ls.CallFinished(l, nil) for n, d := range serverData { ls.CallServerLoad(l, n, d) } wg.Done() }(l, count.serverData) } for i := 0; i < int(count.failure); i++ { wg.Add(1) go func(l internal.Locality) { ls.CallStarted(l) ls.CallFinished(l, errTest) wg.Done() }(l) } for i := 0; i < int(count.start-count.success-count.failure); i++ { wg.Add(1) go func(l internal.Locality) { ls.CallStarted(l) wg.Done() }(l) } } wg.Wait() if got := ls.buildStats(testService); !equalClusterStats(got, want) { t.Errorf("lrsStore.buildStats() = %v, want %v", got, want) t.Errorf("%s", cmp.Diff(got, want)) } } }) } } type lrsServer struct { reportingInterval *durationpb.Duration mu sync.Mutex dropTotal uint64 drops map[string]uint64 rpcs map[internal.Locality]*rpcCountDataForTest } func (lrss *lrsServer) StreamLoadStats(stream lrsgrpc.LoadReportingService_StreamLoadStatsServer) error { req, err := stream.Recv() if err != nil { return err } if !proto.Equal(req, &lrspb.LoadStatsRequest{ Node: &basepb.Node{ Metadata: &structpb.Struct{ Fields: map[string]*structpb.Value{ internal.GrpcHostname: { Kind: &structpb.Value_StringValue{StringValue: testService}, }, }, }, }, }) { return status.Errorf(codes.FailedPrecondition, "unexpected req: %+v", req) } if err := stream.Send(&lrspb.LoadStatsResponse{ Clusters: []string{testService}, LoadReportingInterval: lrss.reportingInterval, }); err != nil { return err } for { req, err := stream.Recv() if err != nil { if err == io.EOF { return nil } return err } stats := req.ClusterStats[0] lrss.mu.Lock() lrss.dropTotal += stats.TotalDroppedRequests for _, d := range stats.DroppedRequests { lrss.drops[d.Category] += d.DroppedCount } for _, ss := range stats.UpstreamLocalityStats { l := internal.Locality{ Region: ss.Locality.Region, Zone: ss.Locality.Zone, SubZone: ss.Locality.SubZone, } counts, ok := lrss.rpcs[l] if !ok { counts = newRPCCountDataForTest(0, 0, 0, nil) lrss.rpcs[l] = counts } counts.succeeded += ss.TotalSuccessfulRequests 
counts.inProgress = ss.TotalRequestsInProgress counts.errored += ss.TotalErrorRequests for _, ts := range ss.LoadMetricStats { if counts.serverLoads == nil { counts.serverLoads = make(map[string]float64) } counts.serverLoads[ts.MetricName] = ts.TotalMetricValue / float64(ts.NumRequestsFinishedWithMetric) } } lrss.mu.Unlock() } } func setupServer(t *testing.T, reportingInterval *durationpb.Duration) (addr string, lrss *lrsServer, cleanup func()) { lis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("listen failed due to: %v", err) } svr := grpc.NewServer() lrss = &lrsServer{ reportingInterval: reportingInterval, drops: make(map[string]uint64), rpcs: make(map[internal.Locality]*rpcCountDataForTest), } lrsgrpc.RegisterLoadReportingServiceServer(svr, lrss) go svr.Serve(lis) return lis.Addr().String(), lrss, func() { svr.Stop() lis.Close() } } func Test_lrsStore_ReportTo(t *testing.T) { const intervalNano = 1000 * 1000 * 50 addr, lrss, cleanup := setupServer(t, &durationpb.Duration{ Seconds: 0, Nanos: intervalNano, }) defer cleanup() ls := NewStore(testService) cc, err := grpc.Dial(addr, grpc.WithInsecure()) if err != nil { t.Fatalf("failed to dial: %v", err) } ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() done := make(chan struct{}) go func() { ls.ReportTo(ctx, cc) close(done) }() drops := map[string]uint64{ dropCategories[0]: 13, dropCategories[1]: 14, } for c, d := range drops { for i := 0; i < int(d); i++ { ls.CallDropped(c) time.Sleep(time.Nanosecond * intervalNano / 10) } } rpcs := map[internal.Locality]*rpcCountDataForTest{ localities[0]: newRPCCountDataForTest(3, 1, 4, nil), localities[1]: newRPCCountDataForTest(1, 5, 9, map[string]float64{"pi": 3.14, "e": 2.71}), } for l, count := range rpcs { for i := 0; i < int(count.succeeded); i++ { go func(i int, l internal.Locality, count *rpcCountDataForTest) { ls.CallStarted(l) ls.CallFinished(l, nil) for n, d := range count.serverLoads { ls.CallServerLoad(l, n, d) } }(i, l, count) } for i := 0; i < int(count.inProgress); i++ { go func(i int, l internal.Locality) { ls.CallStarted(l) }(i, l) } for i := 0; i < int(count.errored); i++ { go func(i int, l internal.Locality) { ls.CallStarted(l) ls.CallFinished(l, errTest) }(i, l) } } time.Sleep(time.Nanosecond * intervalNano * 2) cancel() <-done lrss.mu.Lock() defer lrss.mu.Unlock() if !cmp.Equal(lrss.drops, drops) { t.Errorf("different: %v", cmp.Diff(lrss.drops, drops)) } if !cmp.Equal(lrss.rpcs, rpcs) { t.Errorf("different: %v", cmp.Diff(lrss.rpcs, rpcs)) } } grpc-go-1.22.1/balancer/xds/orca/000077500000000000000000000000001351635773100164375ustar00rootroot00000000000000grpc-go-1.22.1/balancer/xds/orca/orca.go000066400000000000000000000042271351635773100177170ustar00rootroot00000000000000/* * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ // Package orca implements Open Request Cost Aggregation. 
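// Illustrative sketch (separate from orca.go): a round trip of a load report
// through gRPC metadata using the ToMetadata/FromMetadata helpers defined in the
// package below. It is written as if it were an extra _test.go file in this
// package, because the generated OrcaLoadReport type lives under an internal/
// import path and is not importable from outside the grpc module; the
// utilization values are arbitrary.
package orca

import (
	"testing"

	orcapb "google.golang.org/grpc/balancer/xds/internal/proto/udpa/data/orca/v1/orca_load_report"
)

func TestMetadataRoundTrip(t *testing.T) {
	report := &orcapb.OrcaLoadReport{CpuUtilization: 0.4, MemUtilization: 0.6}
	// A server would attach this to its trailer metadata; a client-side picker
	// reads it back from the received trailers.
	md := ToMetadata(report)
	got := FromMetadata(md)
	if got.GetCpuUtilization() != report.GetCpuUtilization() || got.GetMemUtilization() != report.GetMemUtilization() {
		t.Fatalf("round trip mismatch: got %v, want %v", got, report)
	}
}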
package orca import ( "github.com/golang/protobuf/proto" orcapb "google.golang.org/grpc/balancer/xds/internal/proto/udpa/data/orca/v1/orca_load_report" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/internal/balancerload" "google.golang.org/grpc/metadata" ) const mdKey = "X-Endpoint-Load-Metrics-Bin" // toBytes converts a orca load report into bytes. func toBytes(r *orcapb.OrcaLoadReport) []byte { if r == nil { return nil } b, err := proto.Marshal(r) if err != nil { grpclog.Warningf("orca: failed to marshal load report: %v", err) return nil } return b } // ToMetadata converts a orca load report into grpc metadata. func ToMetadata(r *orcapb.OrcaLoadReport) metadata.MD { b := toBytes(r) if b == nil { return nil } return metadata.Pairs(mdKey, string(b)) } // fromBytes reads load report bytes and converts it to orca. func fromBytes(b []byte) *orcapb.OrcaLoadReport { ret := new(orcapb.OrcaLoadReport) if err := proto.Unmarshal(b, ret); err != nil { grpclog.Warningf("orca: failed to unmarshal load report: %v", err) return nil } return ret } // FromMetadata reads load report from metadata and converts it to orca. // // It returns nil if report is not found in metadata. func FromMetadata(md metadata.MD) *orcapb.OrcaLoadReport { vs := md.Get(mdKey) if len(vs) == 0 { return nil } return fromBytes([]byte(vs[0])) } type loadParser struct{} func (*loadParser) Parse(md metadata.MD) interface{} { return FromMetadata(md) } func init() { balancerload.SetParser(&loadParser{}) } grpc-go-1.22.1/balancer/xds/orca/orca_test.go000066400000000000000000000037751351635773100207650ustar00rootroot00000000000000/* * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package orca import ( "reflect" "strings" "testing" "github.com/golang/protobuf/proto" orcapb "google.golang.org/grpc/balancer/xds/internal/proto/udpa/data/orca/v1/orca_load_report" "google.golang.org/grpc/metadata" ) var ( testMessage = &orcapb.OrcaLoadReport{ CpuUtilization: 0.1, MemUtilization: 0.2, RequestCostOrUtilization: map[string]float64{"ttt": 0.4}, } testBytes, _ = proto.Marshal(testMessage) ) func TestToMetadata(t *testing.T) { tests := []struct { name string r *orcapb.OrcaLoadReport want metadata.MD }{{ name: "nil", r: nil, want: nil, }, { name: "valid", r: testMessage, want: metadata.MD{ strings.ToLower(mdKey): []string{string(testBytes)}, }, }} for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { if got := ToMetadata(tt.r); !reflect.DeepEqual(got, tt.want) { t.Errorf("ToMetadata() = %v, want %v", got, tt.want) } }) } } func TestFromMetadata(t *testing.T) { tests := []struct { name string md metadata.MD want *orcapb.OrcaLoadReport }{{ name: "nil", md: nil, want: nil, }, { name: "valid", md: metadata.MD{ strings.ToLower(mdKey): []string{string(testBytes)}, }, want: testMessage, }} for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { if got := FromMetadata(tt.md); !proto.Equal(got, tt.want) { t.Errorf("FromMetadata() = %v, want %v", got, tt.want) } }) } } grpc-go-1.22.1/balancer/xds/xds.go000066400000000000000000000472241351635773100166510ustar00rootroot00000000000000/* * * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package xds implements a balancer that communicates with a remote balancer using the Envoy xDS // protocol. package xds import ( "context" "encoding/json" "fmt" "reflect" "sync" "time" "github.com/golang/protobuf/proto" "google.golang.org/grpc/balancer" "google.golang.org/grpc/balancer/xds/edsbalancer" cdspb "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/cds" edspb "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/eds" "google.golang.org/grpc/balancer/xds/lrs" "google.golang.org/grpc/connectivity" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/resolver" "google.golang.org/grpc/serviceconfig" ) const ( defaultTimeout = 10 * time.Second xdsName = "xds_experimental" ) var ( // This field is for testing purpose. // TODO: if later we make startupTimeout configurable through BuildOptions(maybe?), then we can remove // this field and configure through BuildOptions instead. 
startupTimeout = defaultTimeout newEDSBalancer = func(cc balancer.ClientConn, loadStore lrs.Store) edsBalancerInterface { return edsbalancer.NewXDSBalancer(cc, loadStore) } ) func init() { balancer.Register(newXDSBalancerBuilder()) } type xdsBalancerBuilder struct{} func newXDSBalancerBuilder() balancer.Builder { return &xdsBalancerBuilder{} } func (b *xdsBalancerBuilder) Build(cc balancer.ClientConn, opts balancer.BuildOptions) balancer.Balancer { ctx, cancel := context.WithCancel(context.Background()) x := &xdsBalancer{ ctx: ctx, cancel: cancel, buildOpts: opts, startupTimeout: startupTimeout, connStateMgr: &connStateMgr{}, startup: true, grpcUpdate: make(chan interface{}), xdsClientUpdate: make(chan interface{}), timer: createDrainedTimer(), // initialized a timer that won't fire without reset loadStore: lrs.NewStore(opts.Target.Endpoint), } x.cc = &xdsClientConn{ updateState: x.connStateMgr.updateState, ClientConn: cc, } go x.run() return x } func (b *xdsBalancerBuilder) Name() string { return xdsName } func (b *xdsBalancerBuilder) ParseConfig(c json.RawMessage) (serviceconfig.LoadBalancingConfig, error) { var cfg xdsConfig if err := json.Unmarshal(c, &cfg); err != nil { return nil, fmt.Errorf("unable to unmarshal balancer config %s into xds config", string(c)) } return &cfg, nil } // edsBalancerInterface defines the interface that edsBalancer must implement to // communicate with xdsBalancer. // // It's implemented by the real eds balancer and a fake testing eds balancer. type edsBalancerInterface interface { // HandleEDSResponse passes the received EDS message from traffic director to eds balancer. HandleEDSResponse(edsResp *edspb.ClusterLoadAssignment) // HandleChildPolicy updates the eds balancer the intra-cluster load balancing policy to use. HandleChildPolicy(name string, config json.RawMessage) // HandleSubConnStateChange handles state change for SubConn. HandleSubConnStateChange(sc balancer.SubConn, state connectivity.State) // Close closes the eds balancer. Close() } // xdsBalancer manages xdsClient and the actual balancer that does load balancing (either edsBalancer, // or fallback LB). type xdsBalancer struct { cc balancer.ClientConn // *xdsClientConn buildOpts balancer.BuildOptions startupTimeout time.Duration xdsStaleTimeout *time.Duration connStateMgr *connStateMgr ctx context.Context cancel context.CancelFunc startup bool // startup indicates whether this xdsBalancer is in startup stage. inFallbackMonitor bool // xdsBalancer continuously monitor the channels below, and will handle events from them in sync. grpcUpdate chan interface{} xdsClientUpdate chan interface{} timer *time.Timer noSubConnAlert <-chan struct{} client *client // may change when passed a different service config config *xdsConfig // may change when passed a different service config xdsLB edsBalancerInterface fallbackLB balancer.Balancer fallbackInitData *resolver.State // may change when HandleResolved address is called loadStore lrs.Store } func (x *xdsBalancer) startNewXDSClient(u *xdsConfig) { // If the xdsBalancer is in startup stage, then we need to apply the startup timeout for the first // xdsClient to get a response from the traffic director. if x.startup { x.startFallbackMonitoring() } // Whenever service config gives a new traffic director name, we need to create an xds client to // connect to it. However, previous xds client should not be closed until the new one successfully // connects to the traffic director (i.e. get an ADS response from the traffic director). 
Therefore, // we let each new client to be responsible to close its immediate predecessor. In this way, // xdsBalancer does not to implement complex synchronization to achieve the same purpose. prevClient := x.client // haveGotADS is true means, this xdsClient has got ADS response from director in the past, which // means it can close previous client if it hasn't and it now can send lose contact signal for // fallback monitoring. var haveGotADS bool // set up callbacks for the xds client. newADS := func(ctx context.Context, resp proto.Message) error { if !haveGotADS { if prevClient != nil { prevClient.close() } haveGotADS = true } return x.newADSResponse(ctx, resp) } loseContact := func(ctx context.Context) { // loseContact signal is only useful when the current xds client has received ADS response before, // and has not been closed by later xds client. if haveGotADS { select { case <-ctx.Done(): return default: } x.loseContact(ctx) } } exitCleanup := func() { // Each xds client is responsible to close its predecessor if there's one. There are two paths // for a xds client to close its predecessor: // 1. Once it receives its first ADS response. // 2. It hasn't received its first ADS response yet, but its own successor has received ADS // response (which triggers the exit of it). Therefore, it needs to close its predecessor if // it has one. // Here the exitCleanup is for the 2nd path. if !haveGotADS && prevClient != nil { prevClient.close() } } x.client = newXDSClient(u.BalancerName, u.ChildPolicy == nil, x.buildOpts, x.loadStore, newADS, loseContact, exitCleanup) go x.client.run() } // run gets executed in a goroutine once xdsBalancer is created. It monitors updates from grpc, // xdsClient and load balancer. It synchronizes the operations that happen inside xdsBalancer. It // exits when xdsBalancer is closed. func (x *xdsBalancer) run() { for { select { case update := <-x.grpcUpdate: x.handleGRPCUpdate(update) case update := <-x.xdsClientUpdate: x.handleXDSClientUpdate(update) case <-x.timer.C: // x.timer.C will block if we are not in fallback monitoring stage. x.switchFallback() case <-x.noSubConnAlert: // x.noSubConnAlert will block if we are not in fallback monitoring stage. x.switchFallback() case <-x.ctx.Done(): if x.client != nil { x.client.close() } if x.xdsLB != nil { x.xdsLB.Close() } if x.fallbackLB != nil { x.fallbackLB.Close() } return } } } func (x *xdsBalancer) handleGRPCUpdate(update interface{}) { switch u := update.(type) { case *subConnStateUpdate: if x.xdsLB != nil { x.xdsLB.HandleSubConnStateChange(u.sc, u.state.ConnectivityState) } if x.fallbackLB != nil { if lb, ok := x.fallbackLB.(balancer.V2Balancer); ok { lb.UpdateSubConnState(u.sc, u.state) } else { x.fallbackLB.HandleSubConnStateChange(u.sc, u.state.ConnectivityState) } } case *balancer.ClientConnState: cfg, _ := u.BalancerConfig.(*xdsConfig) if cfg == nil { // service config parsing failed. should never happen. return } var fallbackChanged bool // service config has been updated. if !reflect.DeepEqual(cfg, x.config) { if x.config == nil { // The first time we get config, we just need to start the xdsClient. x.startNewXDSClient(cfg) x.config = cfg x.fallbackInitData = &resolver.State{ Addresses: u.ResolverState.Addresses, // TODO(yuxuanli): get the fallback balancer config once the validation change completes, where // we can pass along the config struct. } return } // With a different BalancerName, we need to create a new xdsClient. 
// If current or previous ChildPolicy is nil, then we also need to recreate a new xdsClient. // This is because with nil ChildPolicy xdsClient will do CDS request, while non-nil won't. if cfg.BalancerName != x.config.BalancerName || (cfg.ChildPolicy == nil) != (x.config.ChildPolicy == nil) { x.startNewXDSClient(cfg) } // We will update the xdsLB with the new child policy, if we got a different one and it's not nil. // The nil case will be handled when the CDS response gets processed, we will update xdsLB at that time. if x.xdsLB != nil && !reflect.DeepEqual(cfg.ChildPolicy, x.config.ChildPolicy) && cfg.ChildPolicy != nil { x.xdsLB.HandleChildPolicy(cfg.ChildPolicy.Name, cfg.ChildPolicy.Config) } if x.fallbackLB != nil && !reflect.DeepEqual(cfg.FallBackPolicy, x.config.FallBackPolicy) { x.fallbackLB.Close() x.buildFallBackBalancer(cfg) fallbackChanged = true } } if x.fallbackLB != nil && (!reflect.DeepEqual(x.fallbackInitData.Addresses, u.ResolverState.Addresses) || fallbackChanged) { x.updateFallbackWithResolverState(&resolver.State{ Addresses: u.ResolverState.Addresses, }) } x.config = cfg x.fallbackInitData = &resolver.State{ Addresses: u.ResolverState.Addresses, // TODO(yuxuanli): get the fallback balancer config once the validation change completes, where // we can pass along the config struct. } default: // unreachable path panic("wrong update type") } } func (x *xdsBalancer) handleXDSClientUpdate(update interface{}) { switch u := update.(type) { case *cdsResp: select { case <-u.ctx.Done(): return default: } x.cancelFallbackAndSwitchEDSBalancerIfNecessary() // TODO: Get the optional xds record stale timeout from OutlierDetection message. If not exist, // reset to 0. // x.xdsStaleTimeout = u.OutlierDetection.TO_BE_DEFINED_AND_ADDED x.xdsLB.HandleChildPolicy(u.resp.LbPolicy.String(), nil) case *edsResp: select { case <-u.ctx.Done(): return default: } x.cancelFallbackAndSwitchEDSBalancerIfNecessary() x.xdsLB.HandleEDSResponse(u.resp) case *loseContact: select { case <-u.ctx.Done(): return default: } // if we are already doing fallback monitoring, then we ignore new loseContact signal. if x.inFallbackMonitor { return } x.inFallbackMonitor = true x.startFallbackMonitoring() default: panic("unexpected xds client update type") } } type connStateMgr struct { mu sync.Mutex curState connectivity.State notify chan struct{} } func (c *connStateMgr) updateState(s connectivity.State) { c.mu.Lock() defer c.mu.Unlock() c.curState = s if s != connectivity.Ready && c.notify != nil { close(c.notify) c.notify = nil } } func (c *connStateMgr) notifyWhenNotReady() <-chan struct{} { c.mu.Lock() defer c.mu.Unlock() if c.curState != connectivity.Ready { ch := make(chan struct{}) close(ch) return ch } c.notify = make(chan struct{}) return c.notify } // xdsClientConn wraps around the balancer.ClientConn passed in from grpc. The wrapping is to add // functionality to get notification when no subconn is in READY state. // TODO: once we have the change that keeps both edsbalancer and fallback balancer alive at the same // time, we need to make sure to synchronize updates from both entities on the ClientConn. 
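//
// The interception happens in UpdateBalancerState below: the aggregated state
// reported by whichever child balancer is active is passed to
// connStateMgr.updateState before being forwarded to the real ClientConn. That
// is how xdsBalancer obtains the noSubConnAlert signal ("no SubConn is READY")
// that run() treats as a fallback trigger once contact with the traffic
// director has been lost.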
type xdsClientConn struct { updateState func(s connectivity.State) balancer.ClientConn } func (w *xdsClientConn) UpdateBalancerState(s connectivity.State, p balancer.Picker) { w.updateState(s) w.ClientConn.UpdateBalancerState(s, p) } type subConnStateUpdate struct { sc balancer.SubConn state balancer.SubConnState } func (x *xdsBalancer) HandleSubConnStateChange(sc balancer.SubConn, state connectivity.State) { grpclog.Error("UpdateSubConnState should be called instead of HandleSubConnStateChange") } func (x *xdsBalancer) HandleResolvedAddrs(addrs []resolver.Address, err error) { grpclog.Error("UpdateResolverState should be called instead of HandleResolvedAddrs") } func (x *xdsBalancer) UpdateSubConnState(sc balancer.SubConn, state balancer.SubConnState) { update := &subConnStateUpdate{ sc: sc, state: state, } select { case x.grpcUpdate <- update: case <-x.ctx.Done(): } } func (x *xdsBalancer) UpdateClientConnState(s balancer.ClientConnState) { select { case x.grpcUpdate <- &s: case <-x.ctx.Done(): } } type cdsResp struct { ctx context.Context resp *cdspb.Cluster } type edsResp struct { ctx context.Context resp *edspb.ClusterLoadAssignment } func (x *xdsBalancer) newADSResponse(ctx context.Context, resp proto.Message) error { var update interface{} switch u := resp.(type) { case *cdspb.Cluster: // TODO: EDS requests should use CDS response's Name. Store // `u.GetName()` in `x.clusterName` and use it in xds_client. if u.GetType() != cdspb.Cluster_EDS { return fmt.Errorf("unexpected service discovery type, got %v, want %v", u.GetType(), cdspb.Cluster_EDS) } update = &cdsResp{ctx: ctx, resp: u} case *edspb.ClusterLoadAssignment: // nothing to check update = &edsResp{ctx: ctx, resp: u} default: grpclog.Warningf("xdsBalancer: got a response that's neither CDS nor EDS, type = %T", u) } select { case x.xdsClientUpdate <- update: case <-x.ctx.Done(): case <-ctx.Done(): } return nil } type loseContact struct { ctx context.Context } func (x *xdsBalancer) loseContact(ctx context.Context) { select { case x.xdsClientUpdate <- &loseContact{ctx: ctx}: case <-x.ctx.Done(): case <-ctx.Done(): } } func (x *xdsBalancer) switchFallback() { if x.xdsLB != nil { x.xdsLB.Close() x.xdsLB = nil } x.buildFallBackBalancer(x.config) x.updateFallbackWithResolverState(x.fallbackInitData) x.cancelFallbackMonitoring() } func (x *xdsBalancer) updateFallbackWithResolverState(s *resolver.State) { if lb, ok := x.fallbackLB.(balancer.V2Balancer); ok { lb.UpdateClientConnState(balancer.ClientConnState{ResolverState: resolver.State{ Addresses: s.Addresses, // TODO(yuxuanli): get the fallback balancer config once the validation change completes, where // we can pass along the config struct. }}) } else { x.fallbackLB.HandleResolvedAddrs(s.Addresses, nil) } } // x.cancelFallbackAndSwitchEDSBalancerIfNecessary() will be no-op if we have a working xds client. // It will cancel fallback monitoring if we are in fallback monitoring stage. // If there's no running edsBalancer currently, it will create one and initialize it. Also, it will // shutdown the fallback balancer if there's one running. func (x *xdsBalancer) cancelFallbackAndSwitchEDSBalancerIfNecessary() { // xDS update will cancel fallback monitoring if we are in fallback monitoring stage. x.cancelFallbackMonitoring() // xDS update will switch balancer back to edsBalancer if we are in fallback. 
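	// A nil xdsLB means the eds balancer either has not been built yet or was
	// torn down when we switched to fallback; close the fallback balancer (if
	// any), build a fresh eds balancer, and re-apply the configured child
	// policy to it.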
if x.xdsLB == nil { if x.fallbackLB != nil { x.fallbackLB.Close() x.fallbackLB = nil } x.xdsLB = newEDSBalancer(x.cc, x.loadStore) if x.config.ChildPolicy != nil { x.xdsLB.HandleChildPolicy(x.config.ChildPolicy.Name, x.config.ChildPolicy.Config) } } } func (x *xdsBalancer) buildFallBackBalancer(c *xdsConfig) { if c.FallBackPolicy == nil { x.buildFallBackBalancer(&xdsConfig{ FallBackPolicy: &loadBalancingConfig{ Name: "round_robin", }, }) return } // builder will always be non-nil, since when parse JSON into xdsConfig, we check whether the specified // balancer is registered or not. builder := balancer.Get(c.FallBackPolicy.Name) x.fallbackLB = builder.Build(x.cc, x.buildOpts) } // There are three ways that could lead to fallback: // 1. During startup (i.e. the first xds client is just created and attempts to contact the traffic // director), fallback if it has not received any response from the director within the configured // timeout. // 2. After xds client loses contact with the remote, fallback if all connections to the backends are // lost (i.e. not in state READY). // 3. After xds client loses contact with the remote, fallback if the stale eds timeout has been // configured through CDS and is timed out. func (x *xdsBalancer) startFallbackMonitoring() { if x.startup { x.startup = false x.timer.Reset(x.startupTimeout) return } x.noSubConnAlert = x.connStateMgr.notifyWhenNotReady() if x.xdsStaleTimeout != nil { if !x.timer.Stop() { <-x.timer.C } x.timer.Reset(*x.xdsStaleTimeout) } } // There are two cases where fallback monitoring should be canceled: // 1. xDS client returns a new ADS message. // 2. fallback has been triggered. func (x *xdsBalancer) cancelFallbackMonitoring() { if !x.timer.Stop() { select { case <-x.timer.C: // For cases where some fallback condition happens along with the timeout, but timeout loses // the race, so we need to drain the x.timer.C. thus we don't trigger fallback again. default: // if the timer timeout leads us here, then there's no thing to drain from x.timer.C. } } x.noSubConnAlert = nil x.inFallbackMonitor = false } func (x *xdsBalancer) Close() { x.cancel() } func createDrainedTimer() *time.Timer { timer := time.NewTimer(0 * time.Millisecond) // make sure initially the timer channel is blocking until reset. if !timer.Stop() { <-timer.C } return timer } type xdsConfig struct { serviceconfig.LoadBalancingConfig BalancerName string ChildPolicy *loadBalancingConfig FallBackPolicy *loadBalancingConfig } // When unmarshalling json to xdsConfig, we iterate through the childPolicy/fallbackPolicy lists // and select the first LB policy which has been registered to be stored in the returned xdsConfig. 
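//
// For illustration only (the balancer name below is a placeholder; round_robin
// and pick_first are policies that are registered by grpc, as exercised in the
// tests in this package), a config such as
//
//	{
//	  "balancerName": "foo.bar.com",
//	  "childPolicy": [{"unknown_policy": {}}, {"round_robin": {}}],
//	  "fallbackPolicy": [{"pick_first": {}}]
//	}
//
// results in BalancerName "foo.bar.com", ChildPolicy {Name: "round_robin",
// Config: {}} (the unregistered "unknown_policy" entry is skipped), and
// FallBackPolicy {Name: "pick_first", Config: {}}.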
func (p *xdsConfig) UnmarshalJSON(data []byte) error { var val map[string]json.RawMessage if err := json.Unmarshal(data, &val); err != nil { return err } for k, v := range val { switch k { case "balancerName": if err := json.Unmarshal(v, &p.BalancerName); err != nil { return err } case "childPolicy": var lbcfgs []*loadBalancingConfig if err := json.Unmarshal(v, &lbcfgs); err != nil { return err } for _, lbcfg := range lbcfgs { if balancer.Get(lbcfg.Name) != nil { p.ChildPolicy = lbcfg break } } case "fallbackPolicy": var lbcfgs []*loadBalancingConfig if err := json.Unmarshal(v, &lbcfgs); err != nil { return err } for _, lbcfg := range lbcfgs { if balancer.Get(lbcfg.Name) != nil { p.FallBackPolicy = lbcfg break } } } } return nil } func (p *xdsConfig) MarshalJSON() ([]byte, error) { return nil, nil } type loadBalancingConfig struct { Name string Config json.RawMessage } func (l *loadBalancingConfig) MarshalJSON() ([]byte, error) { m := make(map[string]json.RawMessage) m[l.Name] = l.Config return json.Marshal(m) } func (l *loadBalancingConfig) UnmarshalJSON(data []byte) error { var cfg map[string]json.RawMessage if err := json.Unmarshal(data, &cfg); err != nil { return err } for name, config := range cfg { l.Name = name l.Config = config } return nil } grpc-go-1.22.1/balancer/xds/xds_client.go000066400000000000000000000206771351635773100202120ustar00rootroot00000000000000/* * * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package xds import ( "context" "sync" "time" "github.com/golang/protobuf/proto" "github.com/golang/protobuf/ptypes" structpb "github.com/golang/protobuf/ptypes/struct" "google.golang.org/grpc" "google.golang.org/grpc/balancer" "google.golang.org/grpc/balancer/xds/internal" cdspb "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/cds" basepb "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/core/base" discoverypb "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/discovery" edspb "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/eds" adsgrpc "google.golang.org/grpc/balancer/xds/internal/proto/envoy/service/discovery/v2/ads" "google.golang.org/grpc/balancer/xds/lrs" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/internal/backoff" "google.golang.org/grpc/internal/channelz" ) const ( cdsType = "type.googleapis.com/envoy.api.v2.Cluster" edsType = "type.googleapis.com/envoy.api.v2.ClusterLoadAssignment" endpointRequired = "endpoints_required" ) var ( defaultBackoffConfig = backoff.Exponential{ MaxDelay: 120 * time.Second, } ) // client is responsible for connecting to the specified traffic director, passing the received // ADS response from the traffic director, and sending notification when communication with the // traffic director is lost. 
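//
// Its lifecycle, as used by xdsBalancer.startNewXDSClient, is roughly:
//
//	c := newXDSClient(balancerName, cdsNeeded, buildOpts, loadStore,
//		newADS, loseContact, exitCleanup)
//	go c.run() // dial() the traffic director, then makeADSCall() in a loop
//	// ...
//	c.close() // cancels the context, closes the ClientConn, runs exitCleanup
//
// Each adsCallAttempt sends a CDS request (only when enableCDS is set) followed
// by an EDS request, and hands every validated response to the newADS callback.
// Broken streams are retried, with exponential backoff when the previous
// attempt never received a response, and the loseContact callback is invoked
// after each attempt ends (the xdsBalancer-side callback ignores it until at
// least one ADS response has been received).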
type client struct { ctx context.Context cancel context.CancelFunc cli adsgrpc.AggregatedDiscoveryServiceClient opts balancer.BuildOptions balancerName string // the traffic director name serviceName string // the user dial target name enableCDS bool newADS func(ctx context.Context, resp proto.Message) error loseContact func(ctx context.Context) cleanup func() backoff backoff.Strategy loadStore lrs.Store loadReportOnce sync.Once mu sync.Mutex cc *grpc.ClientConn } func (c *client) run() { c.dial() c.makeADSCall() } func (c *client) close() { c.cancel() c.mu.Lock() if c.cc != nil { c.cc.Close() } c.mu.Unlock() c.cleanup() } func (c *client) dial() { var dopts []grpc.DialOption if creds := c.opts.DialCreds; creds != nil { if err := creds.OverrideServerName(c.balancerName); err == nil { dopts = append(dopts, grpc.WithTransportCredentials(creds)) } else { grpclog.Warningf("xds: failed to override the server name in the credentials: %v, using Insecure", err) dopts = append(dopts, grpc.WithInsecure()) } } else { dopts = append(dopts, grpc.WithInsecure()) } if c.opts.Dialer != nil { dopts = append(dopts, grpc.WithContextDialer(c.opts.Dialer)) } // Explicitly set pickfirst as the balancer. dopts = append(dopts, grpc.WithBalancerName(grpc.PickFirstBalancerName)) if channelz.IsOn() { dopts = append(dopts, grpc.WithChannelzParentID(c.opts.ChannelzParentID)) } cc, err := grpc.DialContext(c.ctx, c.balancerName, dopts...) // Since this is a non-blocking dial, so if it fails, it due to some serious error (not network // related) error. if err != nil { grpclog.Fatalf("xds: failed to dial: %v", err) } c.mu.Lock() select { case <-c.ctx.Done(): cc.Close() default: // only assign c.cc when xds client has not been closed, to prevent ClientConn leak. c.cc = cc } c.mu.Unlock() } func (c *client) newCDSRequest() *discoverypb.DiscoveryRequest { cdsReq := &discoverypb.DiscoveryRequest{ Node: &basepb.Node{ Metadata: &structpb.Struct{ Fields: map[string]*structpb.Value{ internal.GrpcHostname: { Kind: &structpb.Value_StringValue{StringValue: c.serviceName}, }, }, }, }, TypeUrl: cdsType, } return cdsReq } func (c *client) newEDSRequest() *discoverypb.DiscoveryRequest { edsReq := &discoverypb.DiscoveryRequest{ Node: &basepb.Node{ Metadata: &structpb.Struct{ Fields: map[string]*structpb.Value{ internal.GrpcHostname: { Kind: &structpb.Value_StringValue{StringValue: c.serviceName}, }, endpointRequired: { Kind: &structpb.Value_BoolValue{BoolValue: c.enableCDS}, }, }, }, }, // TODO: the expected ResourceName could be in a different format from // dial target. (test_service.test_namespace.traffic_director.com vs // test_namespace:test_service). // // The solution today is to always include GrpcHostname in metadata, // with the value set to dial target. // // A future solution could be: always do CDS, get cluster name from CDS // response, and use it here. 
// `ResourceNames: []string{c.clusterName},` TypeUrl: edsType, } return edsReq } func (c *client) makeADSCall() { c.cli = adsgrpc.NewAggregatedDiscoveryServiceClient(c.cc) retryCount := 0 var doRetry bool for { select { case <-c.ctx.Done(): return default: } if doRetry { backoffTimer := time.NewTimer(c.backoff.Backoff(retryCount)) select { case <-backoffTimer.C: case <-c.ctx.Done(): backoffTimer.Stop() return } retryCount++ } firstRespReceived := c.adsCallAttempt() if firstRespReceived { retryCount = 0 doRetry = false } else { doRetry = true } c.loseContact(c.ctx) } } func (c *client) adsCallAttempt() (firstRespReceived bool) { firstRespReceived = false ctx, cancel := context.WithCancel(c.ctx) defer cancel() st, err := c.cli.StreamAggregatedResources(ctx, grpc.WaitForReady(true)) if err != nil { grpclog.Infof("xds: failed to initial ADS streaming RPC due to %v", err) return } if c.enableCDS { if err := st.Send(c.newCDSRequest()); err != nil { // current stream is broken, start a new one. grpclog.Infof("xds: ads RPC failed due to err: %v, when sending the CDS request ", err) return } } if err := st.Send(c.newEDSRequest()); err != nil { // current stream is broken, start a new one. grpclog.Infof("xds: ads RPC failed due to err: %v, when sending the EDS request", err) return } expectCDS := c.enableCDS for { resp, err := st.Recv() if err != nil { // current stream is broken, start a new one. grpclog.Infof("xds: ads RPC failed due to err: %v, when receiving the response", err) return } firstRespReceived = true resources := resp.GetResources() if len(resources) < 1 { grpclog.Warning("xds: ADS response contains 0 resource info.") // start a new call as server misbehaves by sending a ADS response with 0 resource info. return } if resp.GetTypeUrl() == cdsType && !c.enableCDS { grpclog.Warning("xds: received CDS response in custom plugin mode.") // start a new call as we receive CDS response when in EDS-only mode. return } var adsResp ptypes.DynamicAny if err := ptypes.UnmarshalAny(resources[0], &adsResp); err != nil { grpclog.Warningf("xds: failed to unmarshal resources due to %v.", err) return } switch adsResp.Message.(type) { case *cdspb.Cluster: expectCDS = false case *edspb.ClusterLoadAssignment: if expectCDS { grpclog.Warningf("xds: expecting CDS response, got EDS response instead.") return } } if err := c.newADS(c.ctx, adsResp.Message); err != nil { grpclog.Warningf("xds: processing new ADS message failed due to %v.", err) return } // Only start load reporting after ADS resp is received. // // Also, newADS() will close the previous load reporting stream, so we // don't have double reporting. c.loadReportOnce.Do(func() { if c.loadStore != nil { go c.loadStore.ReportTo(c.ctx, c.cc) } }) } } func newXDSClient(balancerName string, enableCDS bool, opts balancer.BuildOptions, loadStore lrs.Store, newADS func(context.Context, proto.Message) error, loseContact func(ctx context.Context), exitCleanup func()) *client { c := &client{ balancerName: balancerName, serviceName: opts.Target.Endpoint, enableCDS: enableCDS, opts: opts, newADS: newADS, loseContact: loseContact, cleanup: exitCleanup, backoff: defaultBackoffConfig, loadStore: loadStore, } c.ctx, c.cancel = context.WithCancel(context.Background()) return c } grpc-go-1.22.1/balancer/xds/xds_client_test.go000066400000000000000000000326461351635773100212500ustar00rootroot00000000000000/* * * Copyright 2019 gRPC authors. 
* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package xds import ( "context" "errors" "io" "net" "testing" "time" "github.com/golang/protobuf/proto" anypb "github.com/golang/protobuf/ptypes/any" durationpb "github.com/golang/protobuf/ptypes/duration" structpb "github.com/golang/protobuf/ptypes/struct" wrpb "github.com/golang/protobuf/ptypes/wrappers" "google.golang.org/grpc" "google.golang.org/grpc/balancer" "google.golang.org/grpc/balancer/xds/internal" cdspb "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/cds" addresspb "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/core/address" basepb "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/core/base" discoverypb "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/discovery" edspb "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/eds" endpointpb "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/endpoint/endpoint" adsgrpc "google.golang.org/grpc/balancer/xds/internal/proto/envoy/service/discovery/v2/ads" lrsgrpc "google.golang.org/grpc/balancer/xds/internal/proto/envoy/service/load_stats/v2/lrs" "google.golang.org/grpc/codes" "google.golang.org/grpc/resolver" "google.golang.org/grpc/status" ) var ( testServiceName = "test/foo" testCDSReq = &discoverypb.DiscoveryRequest{ Node: &basepb.Node{ Metadata: &structpb.Struct{ Fields: map[string]*structpb.Value{ internal.GrpcHostname: { Kind: &structpb.Value_StringValue{StringValue: testServiceName}, }, }, }, }, TypeUrl: cdsType, } testEDSReq = &discoverypb.DiscoveryRequest{ Node: &basepb.Node{ Metadata: &structpb.Struct{ Fields: map[string]*structpb.Value{ internal.GrpcHostname: { Kind: &structpb.Value_StringValue{StringValue: testServiceName}, }, endpointRequired: { Kind: &structpb.Value_BoolValue{BoolValue: true}, }, }, }, }, TypeUrl: edsType, } testEDSReqWithoutEndpoints = &discoverypb.DiscoveryRequest{ Node: &basepb.Node{ Metadata: &structpb.Struct{ Fields: map[string]*structpb.Value{ internal.GrpcHostname: { Kind: &structpb.Value_StringValue{StringValue: testServiceName}, }, endpointRequired: { Kind: &structpb.Value_BoolValue{BoolValue: false}, }, }, }, }, TypeUrl: edsType, } testCluster = &cdspb.Cluster{ Name: testServiceName, ClusterDiscoveryType: &cdspb.Cluster_Type{Type: cdspb.Cluster_EDS}, LbPolicy: cdspb.Cluster_ROUND_ROBIN, } marshaledCluster, _ = proto.Marshal(testCluster) testCDSResp = &discoverypb.DiscoveryResponse{ Resources: []*anypb.Any{ { TypeUrl: cdsType, Value: marshaledCluster, }, }, TypeUrl: cdsType, } testClusterLoadAssignment = &edspb.ClusterLoadAssignment{ ClusterName: testServiceName, Endpoints: []*endpointpb.LocalityLbEndpoints{ { Locality: &basepb.Locality{ Region: "asia-east1", Zone: "1", SubZone: "sa", }, LbEndpoints: []*endpointpb.LbEndpoint{ { HostIdentifier: &endpointpb.LbEndpoint_Endpoint{ Endpoint: &endpointpb.Endpoint{ Address: &addresspb.Address{ Address: &addresspb.Address_SocketAddress{ SocketAddress: &addresspb.SocketAddress{ Address: "1.1.1.1", PortSpecifier: 
&addresspb.SocketAddress_PortValue{ PortValue: 10001, }, ResolverName: "dns", }, }, }, HealthCheckConfig: nil, }, }, Metadata: &basepb.Metadata{ FilterMetadata: map[string]*structpb.Struct{ "xx.lb": { Fields: map[string]*structpb.Value{ "endpoint_name": { Kind: &structpb.Value_StringValue{ StringValue: "some.endpoint.name", }, }, }, }, }, }, }, }, LoadBalancingWeight: &wrpb.UInt32Value{ Value: 1, }, Priority: 0, }, }, } marshaledClusterLoadAssignment, _ = proto.Marshal(testClusterLoadAssignment) testEDSResp = &discoverypb.DiscoveryResponse{ Resources: []*anypb.Any{ { TypeUrl: edsType, Value: marshaledClusterLoadAssignment, }, }, TypeUrl: edsType, } testClusterLoadAssignmentWithoutEndpoints = &edspb.ClusterLoadAssignment{ ClusterName: testServiceName, Endpoints: []*endpointpb.LocalityLbEndpoints{ { Locality: &basepb.Locality{ SubZone: "sa", }, LoadBalancingWeight: &wrpb.UInt32Value{ Value: 128, }, Priority: 0, }, }, Policy: nil, } marshaledClusterLoadAssignmentWithoutEndpoints, _ = proto.Marshal(testClusterLoadAssignmentWithoutEndpoints) testEDSRespWithoutEndpoints = &discoverypb.DiscoveryResponse{ Resources: []*anypb.Any{ { TypeUrl: edsType, Value: marshaledClusterLoadAssignmentWithoutEndpoints, }, }, TypeUrl: edsType, } ) type testTrafficDirector struct { reqChan chan *request respChan chan *response } type request struct { req *discoverypb.DiscoveryRequest err error } type response struct { resp *discoverypb.DiscoveryResponse err error } func (ttd *testTrafficDirector) StreamAggregatedResources(s adsgrpc.AggregatedDiscoveryService_StreamAggregatedResourcesServer) error { for { req, err := s.Recv() if err != nil { ttd.reqChan <- &request{ req: nil, err: err, } if err == io.EOF { return nil } return err } ttd.reqChan <- &request{ req: req, err: nil, } if req.TypeUrl == edsType { break } } for { select { case resp := <-ttd.respChan: if resp.err != nil { return resp.err } if err := s.Send(resp.resp); err != nil { return err } case <-s.Context().Done(): return s.Context().Err() } } } func (ttd *testTrafficDirector) DeltaAggregatedResources(adsgrpc.AggregatedDiscoveryService_DeltaAggregatedResourcesServer) error { return status.Error(codes.Unimplemented, "") } func (ttd *testTrafficDirector) sendResp(resp *response) { ttd.respChan <- resp } func (ttd *testTrafficDirector) getReq() *request { return <-ttd.reqChan } func newTestTrafficDirector() *testTrafficDirector { return &testTrafficDirector{ reqChan: make(chan *request, 10), respChan: make(chan *response, 10), } } type testConfig struct { doCDS bool expectedRequests []*discoverypb.DiscoveryRequest responsesToSend []*discoverypb.DiscoveryResponse expectedADSResponses []proto.Message adsErr error svrErr error } func setupServer(t *testing.T) (addr string, td *testTrafficDirector, lrss *lrsServer, cleanup func()) { lis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("listen failed due to: %v", err) } svr := grpc.NewServer() td = newTestTrafficDirector() lrss = &lrsServer{ drops: make(map[string]uint64), reportingInterval: &durationpb.Duration{ Seconds: 60 * 60, // 1 hour, each test can override this to a shorter duration. 
Nanos: 0, }, } adsgrpc.RegisterAggregatedDiscoveryServiceServer(svr, td) lrsgrpc.RegisterLoadReportingServiceServer(svr, lrss) go svr.Serve(lis) return lis.Addr().String(), td, lrss, func() { svr.Stop() lis.Close() } } func (s) TestXdsClientResponseHandling(t *testing.T) { for _, test := range []*testConfig{ { doCDS: true, expectedRequests: []*discoverypb.DiscoveryRequest{testCDSReq, testEDSReq}, responsesToSend: []*discoverypb.DiscoveryResponse{testCDSResp, testEDSResp}, expectedADSResponses: []proto.Message{testCluster, testClusterLoadAssignment}, }, { doCDS: false, expectedRequests: []*discoverypb.DiscoveryRequest{testEDSReqWithoutEndpoints}, responsesToSend: []*discoverypb.DiscoveryResponse{testEDSRespWithoutEndpoints}, expectedADSResponses: []proto.Message{testClusterLoadAssignmentWithoutEndpoints}, }, } { testXdsClientResponseHandling(t, test) } } func testXdsClientResponseHandling(t *testing.T, test *testConfig) { addr, td, _, cleanup := setupServer(t) defer cleanup() adsChan := make(chan proto.Message, 10) newADS := func(ctx context.Context, i proto.Message) error { adsChan <- i return nil } client := newXDSClient(addr, test.doCDS, balancer.BuildOptions{Target: resolver.Target{Endpoint: testServiceName}}, nil, newADS, func(context.Context) {}, func() {}) defer client.close() go client.run() for _, expectedReq := range test.expectedRequests { req := td.getReq() if req.err != nil { t.Fatalf("ads RPC failed with err: %v", req.err) } if !proto.Equal(req.req, expectedReq) { t.Fatalf("got ADS request %T %v, expected: %T %v", req.req, req.req, expectedReq, expectedReq) } } for i, resp := range test.responsesToSend { td.sendResp(&response{resp: resp}) ads := <-adsChan if !proto.Equal(ads, test.expectedADSResponses[i]) { t.Fatalf("received unexpected ads response, got %v, want %v", ads, test.expectedADSResponses[i]) } } } func (s) TestXdsClientLoseContact(t *testing.T) { for _, test := range []*testConfig{ { doCDS: true, responsesToSend: []*discoverypb.DiscoveryResponse{}, }, { doCDS: false, responsesToSend: []*discoverypb.DiscoveryResponse{testEDSRespWithoutEndpoints}, }, } { testXdsClientLoseContactRemoteClose(t, test) } for _, test := range []*testConfig{ { doCDS: false, responsesToSend: []*discoverypb.DiscoveryResponse{testCDSResp}, // CDS response when in custom mode. }, { doCDS: true, responsesToSend: []*discoverypb.DiscoveryResponse{{}}, // response with 0 resources is an error case. 
}, { doCDS: true, responsesToSend: []*discoverypb.DiscoveryResponse{testCDSResp}, adsErr: errors.New("some ads parsing error from xdsBalancer"), }, } { testXdsClientLoseContactADSRelatedErrorOccur(t, test) } } func testXdsClientLoseContactRemoteClose(t *testing.T, test *testConfig) { addr, td, _, cleanup := setupServer(t) defer cleanup() adsChan := make(chan proto.Message, 10) newADS := func(ctx context.Context, i proto.Message) error { adsChan <- i return nil } contactChan := make(chan *loseContact, 10) loseContactFunc := func(context.Context) { contactChan <- &loseContact{} } client := newXDSClient(addr, test.doCDS, balancer.BuildOptions{Target: resolver.Target{Endpoint: testServiceName}}, nil, newADS, loseContactFunc, func() {}) defer client.close() go client.run() // make sure server side get the request (i.e stream created successfully on client side) td.getReq() for _, resp := range test.responsesToSend { td.sendResp(&response{resp: resp}) // make sure client side receives it <-adsChan } cleanup() select { case <-contactChan: case <-time.After(2 * time.Second): t.Fatal("time out when expecting lost contact signal") } } func testXdsClientLoseContactADSRelatedErrorOccur(t *testing.T, test *testConfig) { addr, td, _, cleanup := setupServer(t) defer cleanup() adsChan := make(chan proto.Message, 10) newADS := func(ctx context.Context, i proto.Message) error { adsChan <- i return test.adsErr } contactChan := make(chan *loseContact, 10) loseContactFunc := func(context.Context) { contactChan <- &loseContact{} } client := newXDSClient(addr, test.doCDS, balancer.BuildOptions{Target: resolver.Target{Endpoint: testServiceName}}, nil, newADS, loseContactFunc, func() {}) defer client.close() go client.run() // make sure server side get the request (i.e stream created successfully on client side) td.getReq() for _, resp := range test.responsesToSend { td.sendResp(&response{resp: resp}) } select { case <-contactChan: case <-time.After(2 * time.Second): t.Fatal("time out when expecting lost contact signal") } } func (s) TestXdsClientExponentialRetry(t *testing.T) { cfg := &testConfig{ svrErr: status.Errorf(codes.Aborted, "abort the stream to trigger retry"), } addr, td, _, cleanup := setupServer(t) defer cleanup() adsChan := make(chan proto.Message, 10) newADS := func(ctx context.Context, i proto.Message) error { adsChan <- i return nil } contactChan := make(chan *loseContact, 10) loseContactFunc := func(context.Context) { contactChan <- &loseContact{} } client := newXDSClient(addr, cfg.doCDS, balancer.BuildOptions{Target: resolver.Target{Endpoint: testServiceName}}, nil, newADS, loseContactFunc, func() {}) defer client.close() go client.run() var secondRetry, thirdRetry time.Time for i := 0; i < 3; i++ { // make sure server side get the request (i.e stream created successfully on client side) td.getReq() td.sendResp(&response{err: cfg.svrErr}) select { case <-contactChan: if i == 1 { secondRetry = time.Now() } if i == 2 { thirdRetry = time.Now() } case <-time.After(2 * time.Second): t.Fatal("time out when expecting lost contact signal") } } if thirdRetry.Sub(secondRetry) < 1*time.Second { t.Fatalf("interval between second and third retry is %v, expected > 1s", thirdRetry.Sub(secondRetry)) } } grpc-go-1.22.1/balancer/xds/xds_lrs_test.go000066400000000000000000000101301351635773100205520ustar00rootroot00000000000000/* * * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package xds import ( "io" "sync" "testing" "time" "github.com/golang/protobuf/proto" durationpb "github.com/golang/protobuf/ptypes/duration" structpb "github.com/golang/protobuf/ptypes/struct" "github.com/google/go-cmp/cmp" "google.golang.org/grpc/balancer" "google.golang.org/grpc/balancer/xds/internal" basepb "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/core/base" lrsgrpc "google.golang.org/grpc/balancer/xds/internal/proto/envoy/service/load_stats/v2/lrs" lrspb "google.golang.org/grpc/balancer/xds/internal/proto/envoy/service/load_stats/v2/lrs" "google.golang.org/grpc/codes" "google.golang.org/grpc/resolver" "google.golang.org/grpc/status" ) type lrsServer struct { mu sync.Mutex dropTotal uint64 drops map[string]uint64 reportingInterval *durationpb.Duration } func (lrss *lrsServer) StreamLoadStats(stream lrsgrpc.LoadReportingService_StreamLoadStatsServer) error { req, err := stream.Recv() if err != nil { return err } if !proto.Equal(req, &lrspb.LoadStatsRequest{ Node: &basepb.Node{ Metadata: &structpb.Struct{ Fields: map[string]*structpb.Value{ internal.GrpcHostname: { Kind: &structpb.Value_StringValue{StringValue: testServiceName}, }, }, }, }, }) { return status.Errorf(codes.FailedPrecondition, "unexpected req: %+v", req) } if err := stream.Send(&lrspb.LoadStatsResponse{ Clusters: []string{testServiceName}, LoadReportingInterval: lrss.reportingInterval, }); err != nil { return err } for { req, err := stream.Recv() if err != nil { if err == io.EOF { return nil } return err } stats := req.ClusterStats[0] lrss.mu.Lock() lrss.dropTotal += stats.TotalDroppedRequests for _, d := range stats.DroppedRequests { lrss.drops[d.Category] += d.DroppedCount } lrss.mu.Unlock() } } func (s) TestXdsLoadReporting(t *testing.T) { originalNewEDSBalancer := newEDSBalancer newEDSBalancer = newFakeEDSBalancer defer func() { newEDSBalancer = originalNewEDSBalancer }() builder := balancer.Get(xdsName) cc := newTestClientConn() lb, ok := builder.Build(cc, balancer.BuildOptions{Target: resolver.Target{Endpoint: testServiceName}}).(*xdsBalancer) if !ok { t.Fatalf("unable to type assert to *xdsBalancer") } defer lb.Close() addr, td, lrss, cleanup := setupServer(t) defer cleanup() const intervalNano = 1000 * 1000 * 50 lrss.reportingInterval = &durationpb.Duration{ Seconds: 0, Nanos: intervalNano, } cfg := &xdsConfig{ BalancerName: addr, ChildPolicy: &loadBalancingConfig{Name: fakeBalancerA}, // Set this to skip cds. 
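		// ChildPolicy being non-nil makes startNewXDSClient pass enableCDS=false
		// to newXDSClient, so only an EDS request (no CDS) is sent to the test
		// server above.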
} lb.UpdateClientConnState(balancer.ClientConnState{BalancerConfig: cfg}) td.sendResp(&response{resp: testEDSRespWithoutEndpoints}) var ( i int edsLB *fakeEDSBalancer ) for i = 0; i < 10; i++ { edsLB = getLatestEdsBalancer() if edsLB != nil { break } time.Sleep(100 * time.Millisecond) } if i == 10 { t.Fatal("edsBalancer instance has not been created and assigned to lb.xdsLB after 1s") } var dropCategories = []string{"drop_for_real", "drop_for_fun"} drops := map[string]uint64{ dropCategories[0]: 31, dropCategories[1]: 41, } for c, d := range drops { for i := 0; i < int(d); i++ { edsLB.loadStore.CallDropped(c) time.Sleep(time.Nanosecond * intervalNano / 10) } } time.Sleep(time.Nanosecond * intervalNano * 2) lrss.mu.Lock() defer lrss.mu.Unlock() if !cmp.Equal(lrss.drops, drops) { t.Errorf("different: %v %v %v", lrss.drops, drops, cmp.Diff(lrss.drops, drops)) } } grpc-go-1.22.1/balancer/xds/xds_test.go000066400000000000000000000517211351635773100177050ustar00rootroot00000000000000/* * * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package xds import ( "encoding/json" "reflect" "sync" "testing" "time" "github.com/golang/protobuf/proto" "google.golang.org/grpc/balancer" discoverypb "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/discovery" edspb "google.golang.org/grpc/balancer/xds/internal/proto/envoy/api/v2/eds" "google.golang.org/grpc/balancer/xds/lrs" "google.golang.org/grpc/connectivity" "google.golang.org/grpc/internal/grpctest" "google.golang.org/grpc/internal/leakcheck" "google.golang.org/grpc/resolver" ) var lbABuilder = &balancerABuilder{} func init() { balancer.Register(lbABuilder) balancer.Register(&balancerBBuilder{}) } type s struct{} func (s) Teardown(t *testing.T) { leakcheck.Check(t) } func Test(t *testing.T) { grpctest.RunSubTests(t, s{}) } const ( fakeBalancerA = "fake_balancer_A" fakeBalancerB = "fake_balancer_B" fakeBalancerC = "fake_balancer_C" ) var ( testBalancerNameFooBar = "foo.bar" testLBConfigFooBar = &xdsConfig{ BalancerName: testBalancerNameFooBar, ChildPolicy: &loadBalancingConfig{Name: fakeBalancerA}, FallBackPolicy: &loadBalancingConfig{Name: fakeBalancerA}, } specialAddrForBalancerA = resolver.Address{Addr: "this.is.balancer.A"} specialAddrForBalancerB = resolver.Address{Addr: "this.is.balancer.B"} // mu protects the access of latestFakeEdsBalancer mu sync.Mutex latestFakeEdsBalancer *fakeEDSBalancer ) type balancerABuilder struct { mu sync.Mutex lastBalancer *balancerA } func (b *balancerABuilder) Build(cc balancer.ClientConn, opts balancer.BuildOptions) balancer.Balancer { b.mu.Lock() b.lastBalancer = &balancerA{cc: cc, subconnStateChange: make(chan *scStateChange, 10)} b.mu.Unlock() return b.lastBalancer } func (b *balancerABuilder) Name() string { return string(fakeBalancerA) } func (b *balancerABuilder) getLastBalancer() *balancerA { b.mu.Lock() defer b.mu.Unlock() return b.lastBalancer } func (b *balancerABuilder) clearLastBalancer() { b.mu.Lock() defer b.mu.Unlock() b.lastBalancer = nil } type balancerBBuilder 
struct{} func (b *balancerBBuilder) Build(cc balancer.ClientConn, opts balancer.BuildOptions) balancer.Balancer { return &balancerB{cc: cc} } func (*balancerBBuilder) Name() string { return string(fakeBalancerB) } type balancerA struct { cc balancer.ClientConn subconnStateChange chan *scStateChange } func (b *balancerA) HandleSubConnStateChange(sc balancer.SubConn, state connectivity.State) { b.subconnStateChange <- &scStateChange{sc: sc, state: state} } func (b *balancerA) HandleResolvedAddrs(addrs []resolver.Address, err error) { _, _ = b.cc.NewSubConn(append(addrs, specialAddrForBalancerA), balancer.NewSubConnOptions{}) } func (b *balancerA) Close() {} type balancerB struct { cc balancer.ClientConn } func (balancerB) HandleSubConnStateChange(sc balancer.SubConn, state connectivity.State) { panic("implement me") } func (b *balancerB) HandleResolvedAddrs(addrs []resolver.Address, err error) { _, _ = b.cc.NewSubConn(append(addrs, specialAddrForBalancerB), balancer.NewSubConnOptions{}) } func (balancerB) Close() {} func newTestClientConn() *testClientConn { return &testClientConn{ newSubConns: make(chan []resolver.Address, 10), } } type testClientConn struct { newSubConns chan []resolver.Address } func (t *testClientConn) NewSubConn(addrs []resolver.Address, opts balancer.NewSubConnOptions) (balancer.SubConn, error) { t.newSubConns <- addrs return nil, nil } func (testClientConn) RemoveSubConn(balancer.SubConn) { } func (testClientConn) UpdateBalancerState(s connectivity.State, p balancer.Picker) { } func (testClientConn) ResolveNow(resolver.ResolveNowOption) {} func (testClientConn) Target() string { return testServiceName } type scStateChange struct { sc balancer.SubConn state connectivity.State } type fakeEDSBalancer struct { cc balancer.ClientConn edsChan chan *edspb.ClusterLoadAssignment childPolicy chan *loadBalancingConfig fallbackPolicy chan *loadBalancingConfig subconnStateChange chan *scStateChange loadStore lrs.Store } func (f *fakeEDSBalancer) HandleSubConnStateChange(sc balancer.SubConn, state connectivity.State) { f.subconnStateChange <- &scStateChange{sc: sc, state: state} } func (f *fakeEDSBalancer) Close() { mu.Lock() defer mu.Unlock() latestFakeEdsBalancer = nil } func (f *fakeEDSBalancer) HandleEDSResponse(edsResp *edspb.ClusterLoadAssignment) { f.edsChan <- edsResp } func (f *fakeEDSBalancer) HandleChildPolicy(name string, config json.RawMessage) { f.childPolicy <- &loadBalancingConfig{ Name: name, Config: config, } } func newFakeEDSBalancer(cc balancer.ClientConn, loadStore lrs.Store) edsBalancerInterface { lb := &fakeEDSBalancer{ cc: cc, edsChan: make(chan *edspb.ClusterLoadAssignment, 10), childPolicy: make(chan *loadBalancingConfig, 10), fallbackPolicy: make(chan *loadBalancingConfig, 10), subconnStateChange: make(chan *scStateChange, 10), loadStore: loadStore, } mu.Lock() latestFakeEdsBalancer = lb mu.Unlock() return lb } func getLatestEdsBalancer() *fakeEDSBalancer { mu.Lock() defer mu.Unlock() return latestFakeEdsBalancer } type fakeSubConn struct{} func (*fakeSubConn) UpdateAddresses([]resolver.Address) { panic("implement me") } func (*fakeSubConn) Connect() { panic("implement me") } func (s) TestXdsBalanceHandleResolvedAddrs(t *testing.T) { startupTimeout = 500 * time.Millisecond defer func() { startupTimeout = defaultTimeout }() builder := balancer.Get(xdsName) cc := newTestClientConn() lb, ok := builder.Build(cc, balancer.BuildOptions{Target: resolver.Target{Endpoint: testServiceName}}).(*xdsBalancer) if !ok { t.Fatalf("unable to type assert to *xdsBalancer") 
} defer lb.Close() addrs := []resolver.Address{{Addr: "1.1.1.1:10001"}, {Addr: "2.2.2.2:10002"}, {Addr: "3.3.3.3:10003"}} for i := 0; i < 3; i++ { lb.UpdateClientConnState(balancer.ClientConnState{ ResolverState: resolver.State{Addresses: addrs}, BalancerConfig: testLBConfigFooBar, }) select { case nsc := <-cc.newSubConns: if !reflect.DeepEqual(append(addrs, specialAddrForBalancerA), nsc) { t.Fatalf("got new subconn address %v, want %v", nsc, append(addrs, specialAddrForBalancerA)) } case <-time.After(2 * time.Second): t.Fatal("timeout when geting new subconn result") } addrs = addrs[:2-i] } } func (s) TestXdsBalanceHandleBalancerConfigBalancerNameUpdate(t *testing.T) { startupTimeout = 500 * time.Millisecond originalNewEDSBalancer := newEDSBalancer newEDSBalancer = newFakeEDSBalancer defer func() { startupTimeout = defaultTimeout newEDSBalancer = originalNewEDSBalancer }() builder := balancer.Get(xdsName) cc := newTestClientConn() lb, ok := builder.Build(cc, balancer.BuildOptions{Target: resolver.Target{Endpoint: testServiceName}}).(*xdsBalancer) if !ok { t.Fatalf("unable to type assert to *xdsBalancer") } defer lb.Close() addrs := []resolver.Address{{Addr: "1.1.1.1:10001"}, {Addr: "2.2.2.2:10002"}, {Addr: "3.3.3.3:10003"}} lb.UpdateClientConnState(balancer.ClientConnState{ ResolverState: resolver.State{Addresses: addrs}, BalancerConfig: testLBConfigFooBar, }) // verify fallback takes over select { case nsc := <-cc.newSubConns: if !reflect.DeepEqual(append(addrs, specialAddrForBalancerA), nsc) { t.Fatalf("got new subconn address %v, want %v", nsc, append(addrs, specialAddrForBalancerA)) } case <-time.After(2 * time.Second): t.Fatalf("timeout when geting new subconn result") } var cleanups []func() defer func() { for _, cleanup := range cleanups { cleanup() } }() // In the first iteration, an eds balancer takes over fallback balancer // In the second iteration, a new xds client takes over previous one. 
for i := 0; i < 2; i++ { addr, td, _, cleanup := setupServer(t) cleanups = append(cleanups, cleanup) workingLBConfig := &xdsConfig{ BalancerName: addr, ChildPolicy: &loadBalancingConfig{Name: fakeBalancerA}, FallBackPolicy: &loadBalancingConfig{Name: fakeBalancerA}, } lb.UpdateClientConnState(balancer.ClientConnState{ ResolverState: resolver.State{Addresses: addrs}, BalancerConfig: workingLBConfig, }) td.sendResp(&response{resp: testEDSRespWithoutEndpoints}) var j int for j = 0; j < 10; j++ { if edsLB := getLatestEdsBalancer(); edsLB != nil { // edsLB won't change between the two iterations select { case gotEDS := <-edsLB.edsChan: if !proto.Equal(gotEDS, testClusterLoadAssignmentWithoutEndpoints) { t.Fatalf("edsBalancer got eds: %v, want %v", gotEDS, testClusterLoadAssignmentWithoutEndpoints) } case <-time.After(time.Second): t.Fatal("haven't got EDS update after 1s") } break } time.Sleep(100 * time.Millisecond) } if j == 10 { t.Fatal("edsBalancer instance has not been created or updated after 1s") } } } // switch child policy, lb stays the same // cds->eds or eds -> cds, restart xdsClient, lb stays the same func (s) TestXdsBalanceHandleBalancerConfigChildPolicyUpdate(t *testing.T) { originalNewEDSBalancer := newEDSBalancer newEDSBalancer = newFakeEDSBalancer defer func() { newEDSBalancer = originalNewEDSBalancer }() builder := balancer.Get(xdsName) cc := newTestClientConn() lb, ok := builder.Build(cc, balancer.BuildOptions{Target: resolver.Target{Endpoint: testServiceName}}).(*xdsBalancer) if !ok { t.Fatalf("unable to type assert to *xdsBalancer") } defer lb.Close() var cleanups []func() defer func() { for _, cleanup := range cleanups { cleanup() } }() for _, test := range []struct { cfg *xdsConfig responseToSend *discoverypb.DiscoveryResponse expectedChildPolicy *loadBalancingConfig }{ { cfg: &xdsConfig{ ChildPolicy: &loadBalancingConfig{ Name: fakeBalancerA, Config: json.RawMessage("{}"), }, }, responseToSend: testEDSRespWithoutEndpoints, expectedChildPolicy: &loadBalancingConfig{ Name: string(fakeBalancerA), Config: json.RawMessage(`{}`), }, }, { cfg: &xdsConfig{ ChildPolicy: &loadBalancingConfig{ Name: fakeBalancerB, Config: json.RawMessage("{}"), }, }, expectedChildPolicy: &loadBalancingConfig{ Name: string(fakeBalancerB), Config: json.RawMessage(`{}`), }, }, { cfg: &xdsConfig{}, responseToSend: testCDSResp, expectedChildPolicy: &loadBalancingConfig{ Name: "ROUND_ROBIN", }, }, } { addr, td, _, cleanup := setupServer(t) cleanups = append(cleanups, cleanup) test.cfg.BalancerName = addr lb.UpdateClientConnState(balancer.ClientConnState{BalancerConfig: test.cfg}) if test.responseToSend != nil { td.sendResp(&response{resp: test.responseToSend}) } var i int for i = 0; i < 10; i++ { if edsLB := getLatestEdsBalancer(); edsLB != nil { select { case childPolicy := <-edsLB.childPolicy: if !reflect.DeepEqual(childPolicy, test.expectedChildPolicy) { t.Fatalf("got childPolicy %v, want %v", childPolicy, test.expectedChildPolicy) } case <-time.After(time.Second): t.Fatal("haven't got policy update after 1s") } break } time.Sleep(100 * time.Millisecond) } if i == 10 { t.Fatal("edsBalancer instance has not been created or updated after 1s") } } } // not in fallback mode, overwrite fallback info. // in fallback mode, update config or switch balancer. 
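// The test below covers both: it changes the fallback policy from balancer A to
// balancer B before fallback is triggered (only the stored fallback config and
// addresses are updated), then calls cleanup() to stop the test traffic
// director so the balancer falls back to B, and finally switches the fallback
// policy back to A and verifies A takes over with the resolved addresses.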
func (s) TestXdsBalanceHandleBalancerConfigFallBackUpdate(t *testing.T) { originalNewEDSBalancer := newEDSBalancer newEDSBalancer = newFakeEDSBalancer defer func() { newEDSBalancer = originalNewEDSBalancer }() builder := balancer.Get(xdsName) cc := newTestClientConn() lb, ok := builder.Build(cc, balancer.BuildOptions{Target: resolver.Target{Endpoint: testServiceName}}).(*xdsBalancer) if !ok { t.Fatalf("unable to type assert to *xdsBalancer") } defer lb.Close() addr, td, _, cleanup := setupServer(t) cfg := xdsConfig{ BalancerName: addr, ChildPolicy: &loadBalancingConfig{Name: fakeBalancerA}, FallBackPolicy: &loadBalancingConfig{Name: fakeBalancerA}, } lb.UpdateClientConnState(balancer.ClientConnState{BalancerConfig: &cfg}) addrs := []resolver.Address{{Addr: "1.1.1.1:10001"}, {Addr: "2.2.2.2:10002"}, {Addr: "3.3.3.3:10003"}} cfg2 := cfg cfg2.FallBackPolicy = &loadBalancingConfig{Name: fakeBalancerB} lb.UpdateClientConnState(balancer.ClientConnState{ ResolverState: resolver.State{Addresses: addrs}, BalancerConfig: &cfg2, }) td.sendResp(&response{resp: testEDSRespWithoutEndpoints}) var i int for i = 0; i < 10; i++ { if edsLB := getLatestEdsBalancer(); edsLB != nil { break } time.Sleep(100 * time.Millisecond) } if i == 10 { t.Fatal("edsBalancer instance has not been created and assigned to lb.xdsLB after 1s") } cleanup() // verify fallback balancer B takes over select { case nsc := <-cc.newSubConns: if !reflect.DeepEqual(append(addrs, specialAddrForBalancerB), nsc) { t.Fatalf("got new subconn address %v, want %v", nsc, append(addrs, specialAddrForBalancerB)) } case <-time.After(5 * time.Second): t.Fatalf("timeout when geting new subconn result") } cfg3 := cfg cfg3.FallBackPolicy = &loadBalancingConfig{Name: fakeBalancerA} lb.UpdateClientConnState(balancer.ClientConnState{ ResolverState: resolver.State{Addresses: addrs}, BalancerConfig: &cfg3, }) // verify fallback balancer A takes over select { case nsc := <-cc.newSubConns: if !reflect.DeepEqual(append(addrs, specialAddrForBalancerA), nsc) { t.Fatalf("got new subconn address %v, want %v", nsc, append(addrs, specialAddrForBalancerA)) } case <-time.After(2 * time.Second): t.Fatalf("timeout when geting new subconn result") } } func (s) TestXdsBalancerHandlerSubConnStateChange(t *testing.T) { originalNewEDSBalancer := newEDSBalancer newEDSBalancer = newFakeEDSBalancer defer func() { newEDSBalancer = originalNewEDSBalancer }() builder := balancer.Get(xdsName) cc := newTestClientConn() lb, ok := builder.Build(cc, balancer.BuildOptions{Target: resolver.Target{Endpoint: testServiceName}}).(*xdsBalancer) if !ok { t.Fatalf("unable to type assert to *xdsBalancer") } defer lb.Close() addr, td, _, cleanup := setupServer(t) defer cleanup() cfg := &xdsConfig{ BalancerName: addr, ChildPolicy: &loadBalancingConfig{Name: fakeBalancerA}, FallBackPolicy: &loadBalancingConfig{Name: fakeBalancerA}, } lb.UpdateClientConnState(balancer.ClientConnState{BalancerConfig: cfg}) td.sendResp(&response{resp: testEDSRespWithoutEndpoints}) expectedScStateChange := &scStateChange{ sc: &fakeSubConn{}, state: connectivity.Ready, } var i int for i = 0; i < 10; i++ { if edsLB := getLatestEdsBalancer(); edsLB != nil { lb.UpdateSubConnState(expectedScStateChange.sc, balancer.SubConnState{ConnectivityState: expectedScStateChange.state}) select { case scsc := <-edsLB.subconnStateChange: if !reflect.DeepEqual(scsc, expectedScStateChange) { t.Fatalf("got subconn state change %v, want %v", scsc, expectedScStateChange) } case <-time.After(time.Second): t.Fatal("haven't got subconn state 
change after 1s") } break } time.Sleep(100 * time.Millisecond) } if i == 10 { t.Fatal("edsBalancer instance has not been created and assigned to lb.xdsLB after 1s") } // lbAbuilder has a per binary record what's the last balanceA created. We need to clear the record // to make sure there's a new one created and get the pointer to it. lbABuilder.clearLastBalancer() cleanup() // switch to fallback // fallback balancer A takes over for i = 0; i < 10; i++ { if fblb := lbABuilder.getLastBalancer(); fblb != nil { lb.UpdateSubConnState(expectedScStateChange.sc, balancer.SubConnState{ConnectivityState: expectedScStateChange.state}) select { case scsc := <-fblb.subconnStateChange: if !reflect.DeepEqual(scsc, expectedScStateChange) { t.Fatalf("got subconn state change %v, want %v", scsc, expectedScStateChange) } case <-time.After(time.Second): t.Fatal("haven't got subconn state change after 1s") } break } time.Sleep(100 * time.Millisecond) } if i == 10 { t.Fatal("balancerA instance has not been created after 1s") } } func (s) TestXdsBalancerFallBackSignalFromEdsBalancer(t *testing.T) { originalNewEDSBalancer := newEDSBalancer newEDSBalancer = newFakeEDSBalancer defer func() { newEDSBalancer = originalNewEDSBalancer }() builder := balancer.Get(xdsName) cc := newTestClientConn() lb, ok := builder.Build(cc, balancer.BuildOptions{Target: resolver.Target{Endpoint: testServiceName}}).(*xdsBalancer) if !ok { t.Fatalf("unable to type assert to *xdsBalancer") } defer lb.Close() addr, td, _, cleanup := setupServer(t) defer cleanup() cfg := &xdsConfig{ BalancerName: addr, ChildPolicy: &loadBalancingConfig{Name: fakeBalancerA}, FallBackPolicy: &loadBalancingConfig{Name: fakeBalancerA}, } lb.UpdateClientConnState(balancer.ClientConnState{BalancerConfig: cfg}) td.sendResp(&response{resp: testEDSRespWithoutEndpoints}) expectedScStateChange := &scStateChange{ sc: &fakeSubConn{}, state: connectivity.Ready, } var i int for i = 0; i < 10; i++ { if edsLB := getLatestEdsBalancer(); edsLB != nil { lb.UpdateSubConnState(expectedScStateChange.sc, balancer.SubConnState{ConnectivityState: expectedScStateChange.state}) select { case scsc := <-edsLB.subconnStateChange: if !reflect.DeepEqual(scsc, expectedScStateChange) { t.Fatalf("got subconn state change %v, want %v", scsc, expectedScStateChange) } case <-time.After(time.Second): t.Fatal("haven't got subconn state change after 1s") } break } time.Sleep(100 * time.Millisecond) } if i == 10 { t.Fatal("edsBalancer instance has not been created and assigned to lb.xdsLB after 1s") } // lbAbuilder has a per binary record what's the last balanceA created. We need to clear the record // to make sure there's a new one created and get the pointer to it. 
lbABuilder.clearLastBalancer() cleanup() // switch to fallback // fallback balancer A takes over for i = 0; i < 10; i++ { if fblb := lbABuilder.getLastBalancer(); fblb != nil { lb.UpdateSubConnState(expectedScStateChange.sc, balancer.SubConnState{ConnectivityState: expectedScStateChange.state}) select { case scsc := <-fblb.subconnStateChange: if !reflect.DeepEqual(scsc, expectedScStateChange) { t.Fatalf("got subconn state change %v, want %v", scsc, expectedScStateChange) } case <-time.After(time.Second): t.Fatal("haven't got subconn state change after 1s") } break } time.Sleep(100 * time.Millisecond) } if i == 10 { t.Fatal("balancerA instance has not been created after 1s") } } func (s) TestXdsBalancerConfigParsingSelectingLBPolicy(t *testing.T) { js := json.RawMessage(`{ "balancerName": "fake.foo.bar", "childPolicy": [{"fake_balancer_C": {}}, {"fake_balancer_A": {}}, {"fake_balancer_B": {}}], "fallbackPolicy": [{"fake_balancer_C": {}}, {"fake_balancer_B": {}}, {"fake_balancer_A": {}}] }`) cfg, err := (&xdsBalancerBuilder{}).ParseConfig(js) if err != nil { t.Fatalf("unable to unmarshal balancer config into xds config: %v", err) } xdsCfg := cfg.(*xdsConfig) wantChildPolicy := &loadBalancingConfig{Name: string(fakeBalancerA), Config: json.RawMessage(`{}`)} if !reflect.DeepEqual(xdsCfg.ChildPolicy, wantChildPolicy) { t.Fatalf("got child policy %v, want %v", xdsCfg.ChildPolicy, wantChildPolicy) } wantFallbackPolicy := &loadBalancingConfig{Name: string(fakeBalancerB), Config: json.RawMessage(`{}`)} if !reflect.DeepEqual(xdsCfg.FallBackPolicy, wantFallbackPolicy) { t.Fatalf("got fallback policy %v, want %v", xdsCfg.FallBackPolicy, wantFallbackPolicy) } } func (s) TestXdsLoadbalancingConfigParsing(t *testing.T) { tests := []struct { name string s string want *xdsConfig }{ { name: "empty", s: "{}", want: &xdsConfig{}, }, { name: "success1", s: `{"childPolicy":[{"pick_first":{}}]}`, want: &xdsConfig{ ChildPolicy: &loadBalancingConfig{ Name: "pick_first", Config: json.RawMessage(`{}`), }, }, }, { name: "success2", s: `{"childPolicy":[{"round_robin":{}},{"pick_first":{}}]}`, want: &xdsConfig{ ChildPolicy: &loadBalancingConfig{ Name: "round_robin", Config: json.RawMessage(`{}`), }, }, }, } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { var cfg xdsConfig if err := json.Unmarshal([]byte(tt.s), &cfg); err != nil || !reflect.DeepEqual(&cfg, tt.want) { t.Errorf("test name: %s, parseFullServiceConfig() = %+v, err: %v, want %+v, ", tt.name, cfg, err, tt.want) } }) } } grpc-go-1.22.1/balancer_conn_wrappers.go000066400000000000000000000201441351635773100202050ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "fmt" "sync" "google.golang.org/grpc/balancer" "google.golang.org/grpc/connectivity" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/resolver" ) // scStateUpdate contains the subConn and the new state it changed to. 
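//
// Updates of this type are queued through scStateUpdateBuffer and drained by
// ccBalancerWrapper.watcher on a single goroutine, so the wrapped balancer's
// methods are always invoked sequentially and the balancer itself can stay
// lock-free.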
type scStateUpdate struct { sc balancer.SubConn state connectivity.State } // scStateUpdateBuffer is an unbounded channel for scStateChangeTuple. // TODO make a general purpose buffer that uses interface{}. type scStateUpdateBuffer struct { c chan *scStateUpdate mu sync.Mutex backlog []*scStateUpdate } func newSCStateUpdateBuffer() *scStateUpdateBuffer { return &scStateUpdateBuffer{ c: make(chan *scStateUpdate, 1), } } func (b *scStateUpdateBuffer) put(t *scStateUpdate) { b.mu.Lock() defer b.mu.Unlock() if len(b.backlog) == 0 { select { case b.c <- t: return default: } } b.backlog = append(b.backlog, t) } func (b *scStateUpdateBuffer) load() { b.mu.Lock() defer b.mu.Unlock() if len(b.backlog) > 0 { select { case b.c <- b.backlog[0]: b.backlog[0] = nil b.backlog = b.backlog[1:] default: } } } // get returns the channel that the scStateUpdate will be sent to. // // Upon receiving, the caller should call load to send another // scStateChangeTuple onto the channel if there is any. func (b *scStateUpdateBuffer) get() <-chan *scStateUpdate { return b.c } // ccBalancerWrapper is a wrapper on top of cc for balancers. // It implements balancer.ClientConn interface. type ccBalancerWrapper struct { cc *ClientConn balancer balancer.Balancer stateChangeQueue *scStateUpdateBuffer ccUpdateCh chan *balancer.ClientConnState done chan struct{} mu sync.Mutex subConns map[*acBalancerWrapper]struct{} } func newCCBalancerWrapper(cc *ClientConn, b balancer.Builder, bopts balancer.BuildOptions) *ccBalancerWrapper { ccb := &ccBalancerWrapper{ cc: cc, stateChangeQueue: newSCStateUpdateBuffer(), ccUpdateCh: make(chan *balancer.ClientConnState, 1), done: make(chan struct{}), subConns: make(map[*acBalancerWrapper]struct{}), } go ccb.watcher() ccb.balancer = b.Build(ccb, bopts) return ccb } // watcher balancer functions sequentially, so the balancer can be implemented // lock-free. func (ccb *ccBalancerWrapper) watcher() { for { select { case t := <-ccb.stateChangeQueue.get(): ccb.stateChangeQueue.load() select { case <-ccb.done: ccb.balancer.Close() return default: } if ub, ok := ccb.balancer.(balancer.V2Balancer); ok { ub.UpdateSubConnState(t.sc, balancer.SubConnState{ConnectivityState: t.state}) } else { ccb.balancer.HandleSubConnStateChange(t.sc, t.state) } case s := <-ccb.ccUpdateCh: select { case <-ccb.done: ccb.balancer.Close() return default: } if ub, ok := ccb.balancer.(balancer.V2Balancer); ok { ub.UpdateClientConnState(*s) } else { ccb.balancer.HandleResolvedAddrs(s.ResolverState.Addresses, nil) } case <-ccb.done: } select { case <-ccb.done: ccb.balancer.Close() ccb.mu.Lock() scs := ccb.subConns ccb.subConns = nil ccb.mu.Unlock() for acbw := range scs { ccb.cc.removeAddrConn(acbw.getAddrConn(), errConnDrain) } ccb.UpdateBalancerState(connectivity.Connecting, nil) return default: } ccb.cc.firstResolveEvent.Fire() } } func (ccb *ccBalancerWrapper) close() { close(ccb.done) } func (ccb *ccBalancerWrapper) handleSubConnStateChange(sc balancer.SubConn, s connectivity.State) { // When updating addresses for a SubConn, if the address in use is not in // the new addresses, the old ac will be tearDown() and a new ac will be // created. tearDown() generates a state change with Shutdown state, we // don't want the balancer to receive this state change. So before // tearDown() on the old ac, ac.acbw (acWrapper) will be set to nil, and // this function will be called with (nil, Shutdown). We don't need to call // balancer method in this case. 
if sc == nil { return } ccb.stateChangeQueue.put(&scStateUpdate{ sc: sc, state: s, }) } func (ccb *ccBalancerWrapper) updateClientConnState(ccs *balancer.ClientConnState) { if ccb.cc.curBalancerName != grpclbName { // Filter any grpclb addresses since we don't have the grpclb balancer. s := ccs.ResolverState for i := 0; i < len(s.Addresses); { if s.Addresses[i].Type == resolver.GRPCLB { copy(s.Addresses[i:], s.Addresses[i+1:]) s.Addresses = s.Addresses[:len(s.Addresses)-1] continue } i++ } } select { case <-ccb.ccUpdateCh: default: } ccb.ccUpdateCh <- ccs } func (ccb *ccBalancerWrapper) NewSubConn(addrs []resolver.Address, opts balancer.NewSubConnOptions) (balancer.SubConn, error) { if len(addrs) <= 0 { return nil, fmt.Errorf("grpc: cannot create SubConn with empty address list") } ccb.mu.Lock() defer ccb.mu.Unlock() if ccb.subConns == nil { return nil, fmt.Errorf("grpc: ClientConn balancer wrapper was closed") } ac, err := ccb.cc.newAddrConn(addrs, opts) if err != nil { return nil, err } acbw := &acBalancerWrapper{ac: ac} acbw.ac.mu.Lock() ac.acbw = acbw acbw.ac.mu.Unlock() ccb.subConns[acbw] = struct{}{} return acbw, nil } func (ccb *ccBalancerWrapper) RemoveSubConn(sc balancer.SubConn) { acbw, ok := sc.(*acBalancerWrapper) if !ok { return } ccb.mu.Lock() defer ccb.mu.Unlock() if ccb.subConns == nil { return } delete(ccb.subConns, acbw) ccb.cc.removeAddrConn(acbw.getAddrConn(), errConnDrain) } func (ccb *ccBalancerWrapper) UpdateBalancerState(s connectivity.State, p balancer.Picker) { ccb.mu.Lock() defer ccb.mu.Unlock() if ccb.subConns == nil { return } // Update picker before updating state. Even though the ordering here does // not matter, it can lead to multiple calls of Pick in the common start-up // case where we wait for ready and then perform an RPC. If the picker is // updated later, we could call the "connecting" picker when the state is // updated, and then call the "ready" picker after the picker gets updated. ccb.cc.blockingpicker.updatePicker(p) ccb.cc.csMgr.updateState(s) } func (ccb *ccBalancerWrapper) ResolveNow(o resolver.ResolveNowOption) { ccb.cc.resolveNow(o) } func (ccb *ccBalancerWrapper) Target() string { return ccb.cc.target } // acBalancerWrapper is a wrapper on top of ac for balancers. // It implements balancer.SubConn interface. type acBalancerWrapper struct { mu sync.Mutex ac *addrConn } func (acbw *acBalancerWrapper) UpdateAddresses(addrs []resolver.Address) { acbw.mu.Lock() defer acbw.mu.Unlock() if len(addrs) <= 0 { acbw.ac.tearDown(errConnDrain) return } if !acbw.ac.tryUpdateAddrs(addrs) { cc := acbw.ac.cc opts := acbw.ac.scopts acbw.ac.mu.Lock() // Set old ac.acbw to nil so the Shutdown state update will be ignored // by balancer. // // TODO(bar) the state transition could be wrong when tearDown() old ac // and creating new ac, fix the transition. 
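// Descriptive summary of the swap performed below (derived from the
// surrounding code, not an additional guarantee): detach the old addrConn
// from this wrapper (acbw = nil, so its Shutdown update is ignored), tear it
// down, create a fresh addrConn for the new address list, re-attach it, and
// call connect() unless the old addrConn was still idle.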
acbw.ac.acbw = nil acbw.ac.mu.Unlock() acState := acbw.ac.getState() acbw.ac.tearDown(errConnDrain) if acState == connectivity.Shutdown { return } ac, err := cc.newAddrConn(addrs, opts) if err != nil { grpclog.Warningf("acBalancerWrapper: UpdateAddresses: failed to newAddrConn: %v", err) return } acbw.ac = ac ac.mu.Lock() ac.acbw = acbw ac.mu.Unlock() if acState != connectivity.Idle { ac.connect() } } } func (acbw *acBalancerWrapper) Connect() { acbw.mu.Lock() defer acbw.mu.Unlock() acbw.ac.connect() } func (acbw *acBalancerWrapper) getAddrConn() *addrConn { acbw.mu.Lock() defer acbw.mu.Unlock() return acbw.ac } grpc-go-1.22.1/balancer_switching_test.go000066400000000000000000000401141351635773100203620ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "context" "fmt" "math" "testing" "time" "google.golang.org/grpc/balancer" "google.golang.org/grpc/balancer/roundrobin" "google.golang.org/grpc/connectivity" _ "google.golang.org/grpc/grpclog/glogger" "google.golang.org/grpc/internal" "google.golang.org/grpc/resolver" "google.golang.org/grpc/resolver/manual" "google.golang.org/grpc/serviceconfig" ) var _ balancer.Builder = &magicalLB{} var _ balancer.Balancer = &magicalLB{} // magicalLB is a ringer for grpclb. It is used to avoid circular dependencies on the grpclb package type magicalLB struct{} func (b *magicalLB) Name() string { return "grpclb" } func (b *magicalLB) Build(cc balancer.ClientConn, opts balancer.BuildOptions) balancer.Balancer { return b } func (b *magicalLB) HandleSubConnStateChange(balancer.SubConn, connectivity.State) {} func (b *magicalLB) HandleResolvedAddrs([]resolver.Address, error) {} func (b *magicalLB) Close() {} func init() { balancer.Register(&magicalLB{}) } func checkPickFirst(cc *ClientConn, servers []*server) error { var ( req = "port" reply string err error ) connected := false for i := 0; i < 5000; i++ { if err = cc.Invoke(context.Background(), "/foo/bar", &req, &reply); errorDesc(err) == servers[0].port { if connected { // connected is set to false if peer is not server[0]. So if // connected is true here, this is the second time we saw // server[0] in a row. Break because pickfirst is in effect. break } connected = true } else { connected = false } time.Sleep(time.Millisecond) } if !connected { return fmt.Errorf("pickfirst is not in effect after 5 second, EmptyCall() = _, %v, want _, %v", err, servers[0].port) } // The following RPCs should all succeed with the first server. for i := 0; i < 3; i++ { err = cc.Invoke(context.Background(), "/foo/bar", &req, &reply) if errorDesc(err) != servers[0].port { return fmt.Errorf("index %d: want peer %v, got peer %v", i, servers[0].port, err) } } return nil } func checkRoundRobin(cc *ClientConn, servers []*server) error { var ( req = "port" reply string err error ) // Make sure connections to all servers are up. 
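// Descriptive note on the check that follows: the first phase is a warm-up
// that retries RPCs until every server has answered at least once; the second
// phase then requires 3*len(servers) consecutive RPCs to hit the servers
// strictly in order, wrapping around, which is what a round-robin picker is
// expected to produce.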
for i := 0; i < 2; i++ { // Do this check twice, otherwise the first RPC's transport may still be // picked by the closing pickfirst balancer, and the test becomes flaky. for _, s := range servers { var up bool for i := 0; i < 5000; i++ { if err = cc.Invoke(context.Background(), "/foo/bar", &req, &reply); errorDesc(err) == s.port { up = true break } time.Sleep(time.Millisecond) } if !up { return fmt.Errorf("server %v is not up within 5 second", s.port) } } } serverCount := len(servers) for i := 0; i < 3*serverCount; i++ { err = cc.Invoke(context.Background(), "/foo/bar", &req, &reply) if errorDesc(err) != servers[i%serverCount].port { return fmt.Errorf("index %d: want peer %v, got peer %v", i, servers[i%serverCount].port, err) } } return nil } func (s) TestSwitchBalancer(t *testing.T) { r, rcleanup := manual.GenerateAndRegisterManualResolver() defer rcleanup() const numServers = 2 servers, _, scleanup := startServers(t, numServers, math.MaxInt32) defer scleanup() cc, err := Dial(r.Scheme()+":///test.server", WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("failed to dial: %v", err) } defer cc.Close() addrs := []resolver.Address{{Addr: servers[0].addr}, {Addr: servers[1].addr}} r.UpdateState(resolver.State{Addresses: addrs}) // The default balancer is pickfirst. if err := checkPickFirst(cc, servers); err != nil { t.Fatalf("check pickfirst returned non-nil error: %v", err) } // Switch to roundrobin. cc.updateResolverState(resolver.State{ServiceConfig: parseCfg(`{"loadBalancingPolicy": "round_robin"}`), Addresses: addrs}) if err := checkRoundRobin(cc, servers); err != nil { t.Fatalf("check roundrobin returned non-nil error: %v", err) } // Switch to pickfirst. cc.updateResolverState(resolver.State{ServiceConfig: parseCfg(`{"loadBalancingPolicy": "pick_first"}`), Addresses: addrs}) if err := checkPickFirst(cc, servers); err != nil { t.Fatalf("check pickfirst returned non-nil error: %v", err) } } // Test that balancer specified by dial option will not be overridden. func (s) TestBalancerDialOption(t *testing.T) { r, rcleanup := manual.GenerateAndRegisterManualResolver() defer rcleanup() const numServers = 2 servers, _, scleanup := startServers(t, numServers, math.MaxInt32) defer scleanup() cc, err := Dial(r.Scheme()+":///test.server", WithInsecure(), WithCodec(testCodec{}), WithBalancerName(roundrobin.Name)) if err != nil { t.Fatalf("failed to dial: %v", err) } defer cc.Close() addrs := []resolver.Address{{Addr: servers[0].addr}, {Addr: servers[1].addr}} r.UpdateState(resolver.State{Addresses: addrs}) // The init balancer is roundrobin. if err := checkRoundRobin(cc, servers); err != nil { t.Fatalf("check roundrobin returned non-nil error: %v", err) } // Switch to pickfirst. cc.updateResolverState(resolver.State{ServiceConfig: parseCfg(`{"loadBalancingPolicy": "pick_first"}`), Addresses: addrs}) // Balancer is still roundrobin. if err := checkRoundRobin(cc, servers); err != nil { t.Fatalf("check roundrobin returned non-nil error: %v", err) } } // First addr update contains grpclb. func (s) TestSwitchBalancerGRPCLBFirst(t *testing.T) { r, rcleanup := manual.GenerateAndRegisterManualResolver() defer rcleanup() cc, err := Dial(r.Scheme()+":///test.server", WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("failed to dial: %v", err) } defer cc.Close() // ClientConn will switch balancer to grpclb when receives an address of // type GRPCLB. 
r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: "backend"}, {Addr: "grpclb", Type: resolver.GRPCLB}}}) var isGRPCLB bool for i := 0; i < 5000; i++ { cc.mu.Lock() isGRPCLB = cc.curBalancerName == "grpclb" cc.mu.Unlock() if isGRPCLB { break } time.Sleep(time.Millisecond) } if !isGRPCLB { t.Fatalf("after 5 second, cc.balancer is of type %v, not grpclb", cc.curBalancerName) } // New update containing new backend and new grpclb. Should not switch // balancer. r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: "backend2"}, {Addr: "grpclb2", Type: resolver.GRPCLB}}}) for i := 0; i < 200; i++ { cc.mu.Lock() isGRPCLB = cc.curBalancerName == "grpclb" cc.mu.Unlock() if !isGRPCLB { break } time.Sleep(time.Millisecond) } if !isGRPCLB { t.Fatalf("within 200 ms, cc.balancer switched to !grpclb, want grpclb") } var isPickFirst bool // Switch balancer to pickfirst. r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: "backend"}}}) for i := 0; i < 5000; i++ { cc.mu.Lock() isPickFirst = cc.curBalancerName == PickFirstBalancerName cc.mu.Unlock() if isPickFirst { break } time.Sleep(time.Millisecond) } if !isPickFirst { t.Fatalf("after 5 second, cc.balancer is of type %v, not pick_first", cc.curBalancerName) } } // First addr update does not contain grpclb. func (s) TestSwitchBalancerGRPCLBSecond(t *testing.T) { r, rcleanup := manual.GenerateAndRegisterManualResolver() defer rcleanup() cc, err := Dial(r.Scheme()+":///test.server", WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("failed to dial: %v", err) } defer cc.Close() r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: "backend"}}}) var isPickFirst bool for i := 0; i < 5000; i++ { cc.mu.Lock() isPickFirst = cc.curBalancerName == PickFirstBalancerName cc.mu.Unlock() if isPickFirst { break } time.Sleep(time.Millisecond) } if !isPickFirst { t.Fatalf("after 5 second, cc.balancer is of type %v, not pick_first", cc.curBalancerName) } // ClientConn will switch balancer to grpclb when receives an address of // type GRPCLB. r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: "backend"}, {Addr: "grpclb", Type: resolver.GRPCLB}}}) var isGRPCLB bool for i := 0; i < 5000; i++ { cc.mu.Lock() isGRPCLB = cc.curBalancerName == "grpclb" cc.mu.Unlock() if isGRPCLB { break } time.Sleep(time.Millisecond) } if !isGRPCLB { t.Fatalf("after 5 second, cc.balancer is of type %v, not grpclb", cc.curBalancerName) } // New update containing new backend and new grpclb. Should not switch // balancer. r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: "backend2"}, {Addr: "grpclb2", Type: resolver.GRPCLB}}}) for i := 0; i < 200; i++ { cc.mu.Lock() isGRPCLB = cc.curBalancerName == "grpclb" cc.mu.Unlock() if !isGRPCLB { break } time.Sleep(time.Millisecond) } if !isGRPCLB { t.Fatalf("within 200 ms, cc.balancer switched to !grpclb, want grpclb") } // Switch balancer back. r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: "backend"}}}) for i := 0; i < 5000; i++ { cc.mu.Lock() isPickFirst = cc.curBalancerName == PickFirstBalancerName cc.mu.Unlock() if isPickFirst { break } time.Sleep(time.Millisecond) } if !isPickFirst { t.Fatalf("after 5 second, cc.balancer is of type %v, not pick_first", cc.curBalancerName) } } // Test that if the current balancer is roundrobin, after switching to grpclb, // when the resolved address doesn't contain grpclb addresses, balancer will be // switched back to roundrobin. 
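// Expected sequence of balancer switches in this test (informal summary):
//
//	round_robin (from service config)
//	  -> grpclb      (a GRPCLB-typed address shows up in the resolved list)
//	  -> round_robin (the GRPCLB address disappears again)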
func (s) TestSwitchBalancerGRPCLBRoundRobin(t *testing.T) { r, rcleanup := manual.GenerateAndRegisterManualResolver() defer rcleanup() cc, err := Dial(r.Scheme()+":///test.server", WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("failed to dial: %v", err) } defer cc.Close() sc := parseCfg(`{"loadBalancingPolicy": "round_robin"}`) r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: "backend"}}, ServiceConfig: sc}) var isRoundRobin bool for i := 0; i < 5000; i++ { cc.mu.Lock() isRoundRobin = cc.curBalancerName == "round_robin" cc.mu.Unlock() if isRoundRobin { break } time.Sleep(time.Millisecond) } if !isRoundRobin { t.Fatalf("after 5 second, cc.balancer is of type %v, not round_robin", cc.curBalancerName) } // ClientConn will switch balancer to grpclb when receives an address of // type GRPCLB. r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: "grpclb", Type: resolver.GRPCLB}}, ServiceConfig: sc}) var isGRPCLB bool for i := 0; i < 5000; i++ { cc.mu.Lock() isGRPCLB = cc.curBalancerName == "grpclb" cc.mu.Unlock() if isGRPCLB { break } time.Sleep(time.Millisecond) } if !isGRPCLB { t.Fatalf("after 5 second, cc.balancer is of type %v, not grpclb", cc.curBalancerName) } // Switch balancer back. r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: "backend"}}, ServiceConfig: sc}) for i := 0; i < 5000; i++ { cc.mu.Lock() isRoundRobin = cc.curBalancerName == "round_robin" cc.mu.Unlock() if isRoundRobin { break } time.Sleep(time.Millisecond) } if !isRoundRobin { t.Fatalf("after 5 second, cc.balancer is of type %v, not round_robin", cc.curBalancerName) } } // Test that if resolved address list contains grpclb, the balancer option in // service config won't take effect. But when there's no grpclb address in a new // resolved address list, balancer will be switched to the new one. func (s) TestSwitchBalancerGRPCLBServiceConfig(t *testing.T) { r, rcleanup := manual.GenerateAndRegisterManualResolver() defer rcleanup() cc, err := Dial(r.Scheme()+":///test.server", WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("failed to dial: %v", err) } defer cc.Close() r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: "backend"}}}) var isPickFirst bool for i := 0; i < 5000; i++ { cc.mu.Lock() isPickFirst = cc.curBalancerName == PickFirstBalancerName cc.mu.Unlock() if isPickFirst { break } time.Sleep(time.Millisecond) } if !isPickFirst { t.Fatalf("after 5 second, cc.balancer is of type %v, not pick_first", cc.curBalancerName) } // ClientConn will switch balancer to grpclb when receives an address of // type GRPCLB. addrs := []resolver.Address{{Addr: "grpclb", Type: resolver.GRPCLB}} r.UpdateState(resolver.State{Addresses: addrs}) var isGRPCLB bool for i := 0; i < 5000; i++ { cc.mu.Lock() isGRPCLB = cc.curBalancerName == "grpclb" cc.mu.Unlock() if isGRPCLB { break } time.Sleep(time.Millisecond) } if !isGRPCLB { t.Fatalf("after 5 second, cc.balancer is of type %v, not grpclb", cc.curBalancerName) } sc := parseCfg(`{"loadBalancingPolicy": "round_robin"}`) r.UpdateState(resolver.State{Addresses: addrs, ServiceConfig: sc}) var isRoundRobin bool for i := 0; i < 200; i++ { cc.mu.Lock() isRoundRobin = cc.curBalancerName == "round_robin" cc.mu.Unlock() if isRoundRobin { break } time.Sleep(time.Millisecond) } // Balancer should NOT switch to round_robin because resolved list contains // grpclb. if isRoundRobin { t.Fatalf("within 200 ms, cc.balancer switched to round_robin, want grpclb") } // Switch balancer back. 
r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: "backend"}}, ServiceConfig: sc}) for i := 0; i < 5000; i++ { cc.mu.Lock() isRoundRobin = cc.curBalancerName == "round_robin" cc.mu.Unlock() if isRoundRobin { break } time.Sleep(time.Millisecond) } if !isRoundRobin { t.Fatalf("after 5 second, cc.balancer is of type %v, not round_robin", cc.curBalancerName) } } // Test that when switching to grpclb fails because grpclb is not registered, // the fallback balancer will only get backend addresses, not the grpclb server // address. // // The test sends 3 server addresses (all backends) as resolved addresses, but // claims the first one is a grpclb server. All RPCs should be sent to the // other addresses, not the first one. func (s) TestSwitchBalancerGRPCLBWithGRPCLBNotRegistered(t *testing.T) { internal.BalancerUnregister("grpclb") defer balancer.Register(&magicalLB{}) r, rcleanup := manual.GenerateAndRegisterManualResolver() defer rcleanup() const numServers = 3 servers, _, scleanup := startServers(t, numServers, math.MaxInt32) defer scleanup() cc, err := Dial(r.Scheme()+":///test.server", WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("failed to dial: %v", err) } defer cc.Close() r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: servers[1].addr}, {Addr: servers[2].addr}}}) // The default balancer is pickfirst. if err := checkPickFirst(cc, servers[1:]); err != nil { t.Fatalf("check pickfirst returned non-nil error: %v", err) } // Try switching to grpclb by sending servers[0] as grpclb address. It's // expected that servers[0] will be filtered out, so it will not be used by // the balancer. // // If the filtering failed, servers[0] will be used for RPCs and the RPCs // will succeed. The following checks will catch this and fail. addrs := []resolver.Address{ {Addr: servers[0].addr, Type: resolver.GRPCLB}, {Addr: servers[1].addr}, {Addr: servers[2].addr}} r.UpdateState(resolver.State{Addresses: addrs}) // Still check for pickfirst, but only with server[1] and server[2]. if err := checkPickFirst(cc, servers[1:]); err != nil { t.Fatalf("check pickfirst returned non-nil error: %v", err) } // Switch to roundrobin, and check against server[1] and server[2]. cc.updateResolverState(resolver.State{ServiceConfig: parseCfg(`{"loadBalancingPolicy": "round_robin"}`), Addresses: addrs}) if err := checkRoundRobin(cc, servers[1:]); err != nil { t.Fatalf("check roundrobin returned non-nil error: %v", err) } } func parseCfg(s string) serviceconfig.Config { c, err := serviceconfig.Parse(s) if err != nil { panic(fmt.Sprintf("Error parsing config %q: %v", s, err)) } return c } grpc-go-1.22.1/balancer_test.go000066400000000000000000000603371351635773100163120ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package grpc import ( "context" "fmt" "math" "strconv" "sync" "testing" "time" "google.golang.org/grpc/codes" _ "google.golang.org/grpc/grpclog/glogger" "google.golang.org/grpc/naming" "google.golang.org/grpc/status" // V1 balancer tests use passthrough resolver instead of dns. // TODO(bar) remove this when removing v1 balaner entirely. _ "google.golang.org/grpc/resolver/passthrough" ) func pickFirstBalancerV1(r naming.Resolver) Balancer { return &pickFirst{&roundRobin{r: r}} } type testWatcher struct { // the channel to receives name resolution updates update chan *naming.Update // the side channel to get to know how many updates in a batch side chan int // the channel to notify update injector that the update reading is done readDone chan int } func (w *testWatcher) Next() (updates []*naming.Update, err error) { n := <-w.side if n == 0 { return nil, fmt.Errorf("w.side is closed") } for i := 0; i < n; i++ { u := <-w.update if u != nil { updates = append(updates, u) } } w.readDone <- 0 return } func (w *testWatcher) Close() { close(w.side) } // Inject naming resolution updates to the testWatcher. func (w *testWatcher) inject(updates []*naming.Update) { w.side <- len(updates) for _, u := range updates { w.update <- u } <-w.readDone } type testNameResolver struct { w *testWatcher addr string } func (r *testNameResolver) Resolve(target string) (naming.Watcher, error) { r.w = &testWatcher{ update: make(chan *naming.Update, 1), side: make(chan int, 1), readDone: make(chan int), } r.w.side <- 1 r.w.update <- &naming.Update{ Op: naming.Add, Addr: r.addr, } go func() { <-r.w.readDone }() return r.w, nil } func startServers(t *testing.T, numServers int, maxStreams uint32) ([]*server, *testNameResolver, func()) { var servers []*server for i := 0; i < numServers; i++ { s := newTestServer() servers = append(servers, s) go s.start(t, 0, maxStreams) s.wait(t, 2*time.Second) } // Point to server[0] addr := "localhost:" + servers[0].port return servers, &testNameResolver{ addr: addr, }, func() { for i := 0; i < numServers; i++ { servers[i].stop() } } } func (s) TestNameDiscovery(t *testing.T) { // Start 2 servers on 2 ports. numServers := 2 servers, r, cleanup := startServers(t, numServers, math.MaxUint32) defer cleanup() cc, err := Dial("passthrough:///foo.bar.com", WithBalancer(RoundRobin(r)), WithBlock(), WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("Failed to create ClientConn: %v", err) } defer cc.Close() req := "port" var reply string if err := cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err == nil || errorDesc(err) != servers[0].port { t.Fatalf("grpc.Invoke(_, _, _, _, _) = %v, want %s", err, servers[0].port) } // Inject the name resolution change to remove servers[0] and add servers[1]. var updates []*naming.Update updates = append(updates, &naming.Update{ Op: naming.Delete, Addr: "localhost:" + servers[0].port, }) updates = append(updates, &naming.Update{ Op: naming.Add, Addr: "localhost:" + servers[1].port, }) r.w.inject(updates) // Loop until the rpcs in flight talks to servers[1]. 
for { if err := cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err != nil && errorDesc(err) == servers[1].port { break } time.Sleep(10 * time.Millisecond) } } func (s) TestEmptyAddrs(t *testing.T) { servers, r, cleanup := startServers(t, 1, math.MaxUint32) defer cleanup() cc, err := Dial("passthrough:///foo.bar.com", WithBalancer(RoundRobin(r)), WithBlock(), WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("Failed to create ClientConn: %v", err) } defer cc.Close() var reply string if err := cc.Invoke(context.Background(), "/foo/bar", &expectedRequest, &reply); err != nil || reply != expectedResponse { t.Fatalf("grpc.Invoke(_, _, _, _, _) = %v, reply = %q, want %q, ", err, reply, expectedResponse) } // Inject name resolution change to remove the server so that there is no address // available after that. u := &naming.Update{ Op: naming.Delete, Addr: "localhost:" + servers[0].port, } r.w.inject([]*naming.Update{u}) // Loop until the above updates apply. for { time.Sleep(10 * time.Millisecond) ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond) if err := cc.Invoke(ctx, "/foo/bar", &expectedRequest, &reply); err != nil { cancel() break } cancel() } } func (s) TestRoundRobin(t *testing.T) { // Start 3 servers on 3 ports. numServers := 3 servers, r, cleanup := startServers(t, numServers, math.MaxUint32) defer cleanup() cc, err := Dial("passthrough:///foo.bar.com", WithBalancer(RoundRobin(r)), WithBlock(), WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("Failed to create ClientConn: %v", err) } defer cc.Close() // Add servers[1] to the service discovery. u := &naming.Update{ Op: naming.Add, Addr: "localhost:" + servers[1].port, } r.w.inject([]*naming.Update{u}) req := "port" var reply string // Loop until servers[1] is up for { if err := cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err != nil && errorDesc(err) == servers[1].port { break } time.Sleep(10 * time.Millisecond) } // Add servers[2] to the service discovery. u = &naming.Update{ Op: naming.Add, Addr: "localhost:" + servers[2].port, } r.w.inject([]*naming.Update{u}) // Loop until servers[2] is up. for { if err := cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err != nil && errorDesc(err) == servers[2].port { break } time.Sleep(10 * time.Millisecond) } // Check the incoming RPCs served in a round-robin manner. for i := 0; i < 10; i++ { if err := cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err == nil || errorDesc(err) != servers[i%numServers].port { t.Fatalf("Index %d: Invoke(_, _, _, _, _) = %v, want %s", i, err, servers[i%numServers].port) } } } func (s) TestCloseWithPendingRPC(t *testing.T) { servers, r, cleanup := startServers(t, 1, math.MaxUint32) defer cleanup() cc, err := Dial("passthrough:///foo.bar.com", WithBalancer(RoundRobin(r)), WithBlock(), WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("Failed to create ClientConn: %v", err) } defer cc.Close() var reply string if err := cc.Invoke(context.Background(), "/foo/bar", &expectedRequest, &reply, WaitForReady(true)); err != nil { t.Fatalf("grpc.Invoke(_, _, _, _, _) = %v, want %s", err, servers[0].port) } // Remove the server. updates := []*naming.Update{{ Op: naming.Delete, Addr: "localhost:" + servers[0].port, }} r.w.inject(updates) // Loop until the above update applies. 
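// Note on how "applies" is detected: once the only address has been removed,
// a wait-for-ready RPC with a 10ms deadline can no longer be dispatched and
// fails with codes.DeadlineExceeded, which is exactly what the loop below
// waits for.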
for { ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond) if err := cc.Invoke(ctx, "/foo/bar", &expectedRequest, &reply, WaitForReady(true)); status.Code(err) == codes.DeadlineExceeded { cancel() break } time.Sleep(10 * time.Millisecond) cancel() } // Issue 2 RPCs which should be completed with error status once cc is closed. var wg sync.WaitGroup wg.Add(2) go func() { defer wg.Done() var reply string if err := cc.Invoke(context.Background(), "/foo/bar", &expectedRequest, &reply, WaitForReady(true)); err == nil { t.Errorf("grpc.Invoke(_, _, _, _, _) = %v, want not nil", err) } }() go func() { defer wg.Done() var reply string time.Sleep(5 * time.Millisecond) if err := cc.Invoke(context.Background(), "/foo/bar", &expectedRequest, &reply, WaitForReady(true)); err == nil { t.Errorf("grpc.Invoke(_, _, _, _, _) = %v, want not nil", err) } }() time.Sleep(5 * time.Millisecond) cc.Close() wg.Wait() } func (s) TestGetOnWaitChannel(t *testing.T) { servers, r, cleanup := startServers(t, 1, math.MaxUint32) defer cleanup() cc, err := Dial("passthrough:///foo.bar.com", WithBalancer(RoundRobin(r)), WithBlock(), WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("Failed to create ClientConn: %v", err) } defer cc.Close() // Remove all servers so that all upcoming RPCs will block on waitCh. updates := []*naming.Update{{ Op: naming.Delete, Addr: "localhost:" + servers[0].port, }} r.w.inject(updates) for { var reply string ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond) if err := cc.Invoke(ctx, "/foo/bar", &expectedRequest, &reply, WaitForReady(true)); status.Code(err) == codes.DeadlineExceeded { cancel() break } cancel() time.Sleep(10 * time.Millisecond) } var wg sync.WaitGroup wg.Add(1) go func() { defer wg.Done() var reply string if err := cc.Invoke(context.Background(), "/foo/bar", &expectedRequest, &reply, WaitForReady(true)); err != nil { t.Errorf("grpc.Invoke(_, _, _, _, _) = %v, want ", err) } }() // Add a connected server to get the above RPC through. updates = []*naming.Update{{ Op: naming.Add, Addr: "localhost:" + servers[0].port, }} r.w.inject(updates) // Wait until the above RPC succeeds. wg.Wait() } func (s) TestOneServerDown(t *testing.T) { // Start 2 servers. numServers := 2 servers, r, cleanup := startServers(t, numServers, math.MaxUint32) defer cleanup() cc, err := Dial("passthrough:///foo.bar.com", WithBalancer(RoundRobin(r)), WithBlock(), WithInsecure(), WithCodec(testCodec{}), WithWaitForHandshake()) if err != nil { t.Fatalf("Failed to create ClientConn: %v", err) } defer cc.Close() // Add servers[1] to the service discovery. var updates []*naming.Update updates = append(updates, &naming.Update{ Op: naming.Add, Addr: "localhost:" + servers[1].port, }) r.w.inject(updates) req := "port" var reply string // Loop until servers[1] is up for { if err := cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err != nil && errorDesc(err) == servers[1].port { break } time.Sleep(10 * time.Millisecond) } var wg sync.WaitGroup numRPC := 100 sleepDuration := 10 * time.Millisecond wg.Add(1) go func() { time.Sleep(sleepDuration) // After sleepDuration, kill server[0]. servers[0].stop() wg.Done() }() // All non-failfast RPCs should not block because there's at least one connection available. for i := 0; i < numRPC; i++ { wg.Add(1) go func() { time.Sleep(sleepDuration) // After sleepDuration, invoke RPC. // server[0] is killed around the same time to make it racy between balancer and gRPC internals. 
cc.Invoke(context.Background(), "/foo/bar", &req, &reply, WaitForReady(true)) wg.Done() }() } wg.Wait() } func (s) TestOneAddressRemoval(t *testing.T) { // Start 2 servers. numServers := 2 servers, r, cleanup := startServers(t, numServers, math.MaxUint32) defer cleanup() cc, err := Dial("passthrough:///foo.bar.com", WithBalancer(RoundRobin(r)), WithBlock(), WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("Failed to create ClientConn: %v", err) } defer cc.Close() // Add servers[1] to the service discovery. var updates []*naming.Update updates = append(updates, &naming.Update{ Op: naming.Add, Addr: "localhost:" + servers[1].port, }) r.w.inject(updates) req := "port" var reply string // Loop until servers[1] is up for { if err := cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err != nil && errorDesc(err) == servers[1].port { break } time.Sleep(10 * time.Millisecond) } var wg sync.WaitGroup numRPC := 100 sleepDuration := 10 * time.Millisecond wg.Add(1) go func() { time.Sleep(sleepDuration) // After sleepDuration, delete server[0]. var updates []*naming.Update updates = append(updates, &naming.Update{ Op: naming.Delete, Addr: "localhost:" + servers[0].port, }) r.w.inject(updates) wg.Done() }() // All non-failfast RPCs should not fail because there's at least one connection available. for i := 0; i < numRPC; i++ { wg.Add(1) go func() { var reply string time.Sleep(sleepDuration) // After sleepDuration, invoke RPC. // server[0] is removed around the same time to make it racy between balancer and gRPC internals. if err := cc.Invoke(context.Background(), "/foo/bar", &expectedRequest, &reply, WaitForReady(true)); err != nil { t.Errorf("grpc.Invoke(_, _, _, _, _) = %v, want nil", err) } wg.Done() }() } wg.Wait() } func checkServerUp(t *testing.T, currentServer *server) { req := "port" port := currentServer.port cc, err := Dial("passthrough:///localhost:"+port, WithBlock(), WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("Failed to create ClientConn: %v", err) } defer cc.Close() var reply string for { if err := cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err != nil && errorDesc(err) == port { break } time.Sleep(10 * time.Millisecond) } } func (s) TestPickFirstEmptyAddrs(t *testing.T) { servers, r, cleanup := startServers(t, 1, math.MaxUint32) defer cleanup() cc, err := Dial("passthrough:///foo.bar.com", WithBalancer(pickFirstBalancerV1(r)), WithBlock(), WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("Failed to create ClientConn: %v", err) } defer cc.Close() var reply string if err := cc.Invoke(context.Background(), "/foo/bar", &expectedRequest, &reply); err != nil || reply != expectedResponse { t.Fatalf("grpc.Invoke(_, _, _, _, _) = %v, reply = %q, want %q, ", err, reply, expectedResponse) } // Inject name resolution change to remove the server so that there is no address // available after that. u := &naming.Update{ Op: naming.Delete, Addr: "localhost:" + servers[0].port, } r.w.inject([]*naming.Update{u}) // Loop until the above updates apply. 
for { time.Sleep(10 * time.Millisecond) ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond) if err := cc.Invoke(ctx, "/foo/bar", &expectedRequest, &reply); err != nil { cancel() break } cancel() } } func (s) TestPickFirstCloseWithPendingRPC(t *testing.T) { servers, r, cleanup := startServers(t, 1, math.MaxUint32) defer cleanup() cc, err := Dial("passthrough:///foo.bar.com", WithBalancer(pickFirstBalancerV1(r)), WithBlock(), WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("Failed to create ClientConn: %v", err) } defer cc.Close() var reply string if err := cc.Invoke(context.Background(), "/foo/bar", &expectedRequest, &reply, WaitForReady(true)); err != nil { t.Fatalf("grpc.Invoke(_, _, _, _, _) = %v, want %s", err, servers[0].port) } // Remove the server. updates := []*naming.Update{{ Op: naming.Delete, Addr: "localhost:" + servers[0].port, }} r.w.inject(updates) // Loop until the above update applies. for { ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond) if err := cc.Invoke(ctx, "/foo/bar", &expectedRequest, &reply, WaitForReady(true)); status.Code(err) == codes.DeadlineExceeded { cancel() break } time.Sleep(10 * time.Millisecond) cancel() } // Issue 2 RPCs which should be completed with error status once cc is closed. var wg sync.WaitGroup wg.Add(2) go func() { defer wg.Done() var reply string if err := cc.Invoke(context.Background(), "/foo/bar", &expectedRequest, &reply, WaitForReady(true)); err == nil { t.Errorf("grpc.Invoke(_, _, _, _, _) = %v, want not nil", err) } }() go func() { defer wg.Done() var reply string time.Sleep(5 * time.Millisecond) if err := cc.Invoke(context.Background(), "/foo/bar", &expectedRequest, &reply, WaitForReady(true)); err == nil { t.Errorf("grpc.Invoke(_, _, _, _, _) = %v, want not nil", err) } }() time.Sleep(5 * time.Millisecond) cc.Close() wg.Wait() } func (s) TestPickFirstOrderAllServerUp(t *testing.T) { // Start 3 servers on 3 ports. numServers := 3 servers, r, cleanup := startServers(t, numServers, math.MaxUint32) defer cleanup() cc, err := Dial("passthrough:///foo.bar.com", WithBalancer(pickFirstBalancerV1(r)), WithBlock(), WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("Failed to create ClientConn: %v", err) } defer cc.Close() // Add servers[1] and [2] to the service discovery. 
u := &naming.Update{ Op: naming.Add, Addr: "localhost:" + servers[1].port, } r.w.inject([]*naming.Update{u}) u = &naming.Update{ Op: naming.Add, Addr: "localhost:" + servers[2].port, } r.w.inject([]*naming.Update{u}) // Loop until all 3 servers are up checkServerUp(t, servers[0]) checkServerUp(t, servers[1]) checkServerUp(t, servers[2]) // Check the incoming RPCs served in server[0] req := "port" var reply string for i := 0; i < 20; i++ { if err := cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err == nil || errorDesc(err) != servers[0].port { t.Fatalf("Index %d: Invoke(_, _, _, _, _) = %v, want %s", 0, err, servers[0].port) } time.Sleep(10 * time.Millisecond) } // Delete server[0] in the balancer, the incoming RPCs served in server[1] // For test addrconn, close server[0] instead u = &naming.Update{ Op: naming.Delete, Addr: "localhost:" + servers[0].port, } r.w.inject([]*naming.Update{u}) // Loop until it changes to server[1] for { if err := cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err != nil && errorDesc(err) == servers[1].port { break } time.Sleep(10 * time.Millisecond) } for i := 0; i < 20; i++ { if err := cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err == nil || errorDesc(err) != servers[1].port { t.Fatalf("Index %d: Invoke(_, _, _, _, _) = %v, want %s", 1, err, servers[1].port) } time.Sleep(10 * time.Millisecond) } // Add server[0] back to the balancer, the incoming RPCs served in server[1] // Add is append operation, the order of Notify now is {server[1].port server[2].port server[0].port} u = &naming.Update{ Op: naming.Add, Addr: "localhost:" + servers[0].port, } r.w.inject([]*naming.Update{u}) for i := 0; i < 20; i++ { if err := cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err == nil || errorDesc(err) != servers[1].port { t.Fatalf("Index %d: Invoke(_, _, _, _, _) = %v, want %s", 1, err, servers[1].port) } time.Sleep(10 * time.Millisecond) } // Delete server[1] in the balancer, the incoming RPCs served in server[2] u = &naming.Update{ Op: naming.Delete, Addr: "localhost:" + servers[1].port, } r.w.inject([]*naming.Update{u}) for { if err := cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err != nil && errorDesc(err) == servers[2].port { break } time.Sleep(1 * time.Second) } for i := 0; i < 20; i++ { if err := cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err == nil || errorDesc(err) != servers[2].port { t.Fatalf("Index %d: Invoke(_, _, _, _, _) = %v, want %s", 2, err, servers[2].port) } time.Sleep(10 * time.Millisecond) } // Delete server[2] in the balancer, the incoming RPCs served in server[0] u = &naming.Update{ Op: naming.Delete, Addr: "localhost:" + servers[2].port, } r.w.inject([]*naming.Update{u}) for { if err := cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err != nil && errorDesc(err) == servers[0].port { break } time.Sleep(1 * time.Second) } for i := 0; i < 20; i++ { if err := cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err == nil || errorDesc(err) != servers[0].port { t.Fatalf("Index %d: Invoke(_, _, _, _, _) = %v, want %s", 0, err, servers[0].port) } time.Sleep(10 * time.Millisecond) } } func (s) TestPickFirstOrderOneServerDown(t *testing.T) { // Start 3 servers on 3 ports. 
numServers := 3 servers, r, cleanup := startServers(t, numServers, math.MaxUint32) defer cleanup() cc, err := Dial("passthrough:///foo.bar.com", WithBalancer(pickFirstBalancerV1(r)), WithBlock(), WithInsecure(), WithCodec(testCodec{}), WithWaitForHandshake()) if err != nil { t.Fatalf("Failed to create ClientConn: %v", err) } defer cc.Close() // Add servers[1] and [2] to the service discovery. u := &naming.Update{ Op: naming.Add, Addr: "localhost:" + servers[1].port, } r.w.inject([]*naming.Update{u}) u = &naming.Update{ Op: naming.Add, Addr: "localhost:" + servers[2].port, } r.w.inject([]*naming.Update{u}) // Loop until all 3 servers are up checkServerUp(t, servers[0]) checkServerUp(t, servers[1]) checkServerUp(t, servers[2]) // Check the incoming RPCs served in server[0] req := "port" var reply string for i := 0; i < 20; i++ { if err := cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err == nil || errorDesc(err) != servers[0].port { t.Fatalf("Index %d: Invoke(_, _, _, _, _) = %v, want %s", 0, err, servers[0].port) } time.Sleep(10 * time.Millisecond) } // server[0] down, incoming RPCs served in server[1], but the order of Notify still remains // {server[0] server[1] server[2]} servers[0].stop() // Loop until it changes to server[1] for { if err := cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err != nil && errorDesc(err) == servers[1].port { break } time.Sleep(10 * time.Millisecond) } for i := 0; i < 20; i++ { if err := cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err == nil || errorDesc(err) != servers[1].port { t.Fatalf("Index %d: Invoke(_, _, _, _, _) = %v, want %s", 1, err, servers[1].port) } time.Sleep(10 * time.Millisecond) } // up the server[0] back, the incoming RPCs served in server[1] p, _ := strconv.Atoi(servers[0].port) servers[0] = newTestServer() go servers[0].start(t, p, math.MaxUint32) defer servers[0].stop() servers[0].wait(t, 2*time.Second) checkServerUp(t, servers[0]) for i := 0; i < 20; i++ { if err := cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err == nil || errorDesc(err) != servers[1].port { t.Fatalf("Index %d: Invoke(_, _, _, _, _) = %v, want %s", 1, err, servers[1].port) } time.Sleep(10 * time.Millisecond) } // Delete server[1] in the balancer, the incoming RPCs served in server[0] u = &naming.Update{ Op: naming.Delete, Addr: "localhost:" + servers[1].port, } r.w.inject([]*naming.Update{u}) for { if err := cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err != nil && errorDesc(err) == servers[0].port { break } time.Sleep(1 * time.Second) } for i := 0; i < 20; i++ { if err := cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err == nil || errorDesc(err) != servers[0].port { t.Fatalf("Index %d: Invoke(_, _, _, _, _) = %v, want %s", 0, err, servers[0].port) } time.Sleep(10 * time.Millisecond) } } func (s) TestPickFirstOneAddressRemoval(t *testing.T) { // Start 2 servers. numServers := 2 servers, r, cleanup := startServers(t, numServers, math.MaxUint32) defer cleanup() cc, err := Dial("passthrough:///localhost:"+servers[0].port, WithBalancer(pickFirstBalancerV1(r)), WithBlock(), WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("Failed to create ClientConn: %v", err) } defer cc.Close() // Add servers[1] to the service discovery. 
var updates []*naming.Update updates = append(updates, &naming.Update{ Op: naming.Add, Addr: "localhost:" + servers[1].port, }) r.w.inject(updates) // Create a new cc to Loop until servers[1] is up checkServerUp(t, servers[0]) checkServerUp(t, servers[1]) var wg sync.WaitGroup numRPC := 100 sleepDuration := 10 * time.Millisecond wg.Add(1) go func() { time.Sleep(sleepDuration) // After sleepDuration, delete server[0]. var updates []*naming.Update updates = append(updates, &naming.Update{ Op: naming.Delete, Addr: "localhost:" + servers[0].port, }) r.w.inject(updates) wg.Done() }() // All non-failfast RPCs should not fail because there's at least one connection available. for i := 0; i < numRPC; i++ { wg.Add(1) go func() { var reply string time.Sleep(sleepDuration) // After sleepDuration, invoke RPC. // server[0] is removed around the same time to make it racy between balancer and gRPC internals. if err := cc.Invoke(context.Background(), "/foo/bar", &expectedRequest, &reply, WaitForReady(true)); err != nil { t.Errorf("grpc.Invoke(_, _, _, _, _) = %v, want nil", err) } wg.Done() }() } wg.Wait() } grpc-go-1.22.1/balancer_v1_wrapper.go000066400000000000000000000215311351635773100174140ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "context" "sync" "google.golang.org/grpc/balancer" "google.golang.org/grpc/connectivity" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/resolver" ) type balancerWrapperBuilder struct { b Balancer // The v1 balancer. } func (bwb *balancerWrapperBuilder) Build(cc balancer.ClientConn, opts balancer.BuildOptions) balancer.Balancer { bwb.b.Start(opts.Target.Endpoint, BalancerConfig{ DialCreds: opts.DialCreds, Dialer: opts.Dialer, }) _, pickfirst := bwb.b.(*pickFirst) bw := &balancerWrapper{ balancer: bwb.b, pickfirst: pickfirst, cc: cc, targetAddr: opts.Target.Endpoint, startCh: make(chan struct{}), conns: make(map[resolver.Address]balancer.SubConn), connSt: make(map[balancer.SubConn]*scState), csEvltr: &balancer.ConnectivityStateEvaluator{}, state: connectivity.Idle, } cc.UpdateBalancerState(connectivity.Idle, bw) go bw.lbWatcher() return bw } func (bwb *balancerWrapperBuilder) Name() string { return "wrapper" } type scState struct { addr Address // The v1 address type. s connectivity.State down func(error) } type balancerWrapper struct { balancer Balancer // The v1 balancer. pickfirst bool cc balancer.ClientConn targetAddr string // Target without the scheme. mu sync.Mutex conns map[resolver.Address]balancer.SubConn connSt map[balancer.SubConn]*scState // This channel is closed when handling the first resolver result. // lbWatcher blocks until this is closed, to avoid race between // - NewSubConn is created, cc wants to notify balancer of state changes; // - Build hasn't return, cc doesn't have access to balancer. startCh chan struct{} // To aggregate the connectivity state. 
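// (Rough reminder rather than a normative statement of the evaluator's
// behavior: it reports Ready if any SubConn is Ready, else Connecting if any
// SubConn is Connecting, else TransientFailure; the latest aggregate is
// cached in the state field below.)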
csEvltr *balancer.ConnectivityStateEvaluator state connectivity.State } // lbWatcher watches the Notify channel of the balancer and manages // connections accordingly. func (bw *balancerWrapper) lbWatcher() { <-bw.startCh notifyCh := bw.balancer.Notify() if notifyCh == nil { // There's no resolver in the balancer. Connect directly. a := resolver.Address{ Addr: bw.targetAddr, Type: resolver.Backend, } sc, err := bw.cc.NewSubConn([]resolver.Address{a}, balancer.NewSubConnOptions{}) if err != nil { grpclog.Warningf("Error creating connection to %v. Err: %v", a, err) } else { bw.mu.Lock() bw.conns[a] = sc bw.connSt[sc] = &scState{ addr: Address{Addr: bw.targetAddr}, s: connectivity.Idle, } bw.mu.Unlock() sc.Connect() } return } for addrs := range notifyCh { grpclog.Infof("balancerWrapper: got update addr from Notify: %v", addrs) if bw.pickfirst { var ( oldA resolver.Address oldSC balancer.SubConn ) bw.mu.Lock() for oldA, oldSC = range bw.conns { break } bw.mu.Unlock() if len(addrs) <= 0 { if oldSC != nil { // Teardown old sc. bw.mu.Lock() delete(bw.conns, oldA) delete(bw.connSt, oldSC) bw.mu.Unlock() bw.cc.RemoveSubConn(oldSC) } continue } var newAddrs []resolver.Address for _, a := range addrs { newAddr := resolver.Address{ Addr: a.Addr, Type: resolver.Backend, // All addresses from balancer are all backends. ServerName: "", Metadata: a.Metadata, } newAddrs = append(newAddrs, newAddr) } if oldSC == nil { // Create new sc. sc, err := bw.cc.NewSubConn(newAddrs, balancer.NewSubConnOptions{}) if err != nil { grpclog.Warningf("Error creating connection to %v. Err: %v", newAddrs, err) } else { bw.mu.Lock() // For pickfirst, there should be only one SubConn, so the // address doesn't matter. All states updating (up and down) // and picking should all happen on that only SubConn. bw.conns[resolver.Address{}] = sc bw.connSt[sc] = &scState{ addr: addrs[0], // Use the first address. s: connectivity.Idle, } bw.mu.Unlock() sc.Connect() } } else { bw.mu.Lock() bw.connSt[oldSC].addr = addrs[0] bw.mu.Unlock() oldSC.UpdateAddresses(newAddrs) } } else { var ( add []resolver.Address // Addresses need to setup connections. del []balancer.SubConn // Connections need to tear down. ) resAddrs := make(map[resolver.Address]Address) for _, a := range addrs { resAddrs[resolver.Address{ Addr: a.Addr, Type: resolver.Backend, // All addresses from balancer are all backends. ServerName: "", Metadata: a.Metadata, }] = a } bw.mu.Lock() for a := range resAddrs { if _, ok := bw.conns[a]; !ok { add = append(add, a) } } for a, c := range bw.conns { if _, ok := resAddrs[a]; !ok { del = append(del, c) delete(bw.conns, a) // Keep the state of this sc in bw.connSt until its state becomes Shutdown. } } bw.mu.Unlock() for _, a := range add { sc, err := bw.cc.NewSubConn([]resolver.Address{a}, balancer.NewSubConnOptions{}) if err != nil { grpclog.Warningf("Error creating connection to %v. 
Err: %v", a, err) } else { bw.mu.Lock() bw.conns[a] = sc bw.connSt[sc] = &scState{ addr: resAddrs[a], s: connectivity.Idle, } bw.mu.Unlock() sc.Connect() } } for _, c := range del { bw.cc.RemoveSubConn(c) } } } } func (bw *balancerWrapper) HandleSubConnStateChange(sc balancer.SubConn, s connectivity.State) { bw.mu.Lock() defer bw.mu.Unlock() scSt, ok := bw.connSt[sc] if !ok { return } if s == connectivity.Idle { sc.Connect() } oldS := scSt.s scSt.s = s if oldS != connectivity.Ready && s == connectivity.Ready { scSt.down = bw.balancer.Up(scSt.addr) } else if oldS == connectivity.Ready && s != connectivity.Ready { if scSt.down != nil { scSt.down(errConnClosing) } } sa := bw.csEvltr.RecordTransition(oldS, s) if bw.state != sa { bw.state = sa } bw.cc.UpdateBalancerState(bw.state, bw) if s == connectivity.Shutdown { // Remove state for this sc. delete(bw.connSt, sc) } } func (bw *balancerWrapper) HandleResolvedAddrs([]resolver.Address, error) { bw.mu.Lock() defer bw.mu.Unlock() select { case <-bw.startCh: default: close(bw.startCh) } // There should be a resolver inside the balancer. // All updates here, if any, are ignored. } func (bw *balancerWrapper) Close() { bw.mu.Lock() defer bw.mu.Unlock() select { case <-bw.startCh: default: close(bw.startCh) } bw.balancer.Close() } // The picker is the balancerWrapper itself. // It either blocks or returns error, consistent with v1 balancer Get(). func (bw *balancerWrapper) Pick(ctx context.Context, opts balancer.PickOptions) (sc balancer.SubConn, done func(balancer.DoneInfo), err error) { failfast := true // Default failfast is true. if ss, ok := rpcInfoFromContext(ctx); ok { failfast = ss.failfast } a, p, err := bw.balancer.Get(ctx, BalancerGetOptions{BlockingWait: !failfast}) if err != nil { return nil, nil, err } if p != nil { done = func(balancer.DoneInfo) { p() } defer func() { if err != nil { p() } }() } bw.mu.Lock() defer bw.mu.Unlock() if bw.pickfirst { // Get the first sc in conns. for _, sc := range bw.conns { return sc, done, nil } return nil, nil, balancer.ErrNoSubConnAvailable } sc, ok1 := bw.conns[resolver.Address{ Addr: a.Addr, Type: resolver.Backend, ServerName: "", Metadata: a.Metadata, }] s, ok2 := bw.connSt[sc] if !ok1 || !ok2 { // This can only happen due to a race where Get() returned an address // that was subsequently removed by Notify. In this case we should // retry always. return nil, nil, balancer.ErrNoSubConnAvailable } switch s.s { case connectivity.Ready, connectivity.Idle: return sc, done, nil case connectivity.Shutdown, connectivity.TransientFailure: // If the returned sc has been shut down or is in transient failure, // return error, and this RPC will fail or wait for another picker (if // non-failfast). return nil, nil, balancer.ErrTransientFailure default: // For other states (connecting or unknown), the v1 balancer would // traditionally wait until ready and then issue the RPC. Returning // ErrNoSubConnAvailable will be a slight improvement in that it will // allow the balancer to choose another address in case others are // connected. return nil, nil, balancer.ErrNoSubConnAvailable } } grpc-go-1.22.1/benchmark/000077500000000000000000000000001351635773100151005ustar00rootroot00000000000000grpc-go-1.22.1/benchmark/benchmain/000077500000000000000000000000001351635773100170245ustar00rootroot00000000000000grpc-go-1.22.1/benchmark/benchmain/main.go000066400000000000000000000632601351635773100203060ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. 
* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ /* Package main provides benchmark with setting flags. An example to run some benchmarks with profiling enabled: go run benchmark/benchmain/main.go -benchtime=10s -workloads=all \ -compression=gzip -maxConcurrentCalls=1 -trace=off \ -reqSizeBytes=1,1048576 -respSizeBytes=1,1048576 -networkMode=Local \ -cpuProfile=cpuProf -memProfile=memProf -memProfileRate=10000 -resultFile=result As a suggestion, when creating a branch, you can run this benchmark and save the result file "-resultFile=basePerf", and later when you at the middle of the work or finish the work, you can get the benchmark result and compare it with the base anytime. Assume there are two result files names as "basePerf" and "curPerf" created by adding -resultFile=basePerf and -resultFile=curPerf. To format the curPerf, run: go run benchmark/benchresult/main.go curPerf To observe how the performance changes based on a base result, run: go run benchmark/benchresult/main.go basePerf curPerf */ package main import ( "context" "encoding/gob" "flag" "fmt" "io" "io/ioutil" "log" "net" "os" "reflect" "runtime" "runtime/pprof" "strings" "sync" "sync/atomic" "time" "google.golang.org/grpc" bm "google.golang.org/grpc/benchmark" "google.golang.org/grpc/benchmark/flags" testpb "google.golang.org/grpc/benchmark/grpc_testing" "google.golang.org/grpc/benchmark/latency" "google.golang.org/grpc/benchmark/stats" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/internal/channelz" "google.golang.org/grpc/keepalive" "google.golang.org/grpc/test/bufconn" ) var ( workloads = flags.StringWithAllowedValues("workloads", workloadsAll, fmt.Sprintf("Workloads to execute - One of: %v", strings.Join(allWorkloads, ", ")), allWorkloads) traceMode = flags.StringWithAllowedValues("trace", toggleModeOff, fmt.Sprintf("Trace mode - One of: %v", strings.Join(allToggleModes, ", ")), allToggleModes) preloaderMode = flags.StringWithAllowedValues("preloader", toggleModeOff, fmt.Sprintf("Preloader mode - One of: %v", strings.Join(allToggleModes, ", ")), allToggleModes) channelzOn = flags.StringWithAllowedValues("channelz", toggleModeOff, fmt.Sprintf("Channelz mode - One of: %v", strings.Join(allToggleModes, ", ")), allToggleModes) compressorMode = flags.StringWithAllowedValues("compression", compModeOff, fmt.Sprintf("Compression mode - One of: %v", strings.Join(allCompModes, ", ")), allCompModes) networkMode = flags.StringWithAllowedValues("networkMode", networkModeNone, "Network mode includes LAN, WAN, Local and Longhaul", allNetworkModes) readLatency = flags.DurationSlice("latency", defaultReadLatency, "Simulated one-way network latency - may be a comma-separated list") readKbps = flags.IntSlice("kbps", defaultReadKbps, "Simulated network throughput (in kbps) - may be a comma-separated list") readMTU = flags.IntSlice("mtu", defaultReadMTU, "Simulated network MTU (Maximum Transmission Unit) - may be a comma-separated list") maxConcurrentCalls = flags.IntSlice("maxConcurrentCalls", defaultMaxConcurrentCalls, "Number of 
concurrent RPCs during benchmarks") readReqSizeBytes = flags.IntSlice("reqSizeBytes", defaultReqSizeBytes, "Request size in bytes - may be a comma-separated list") readRespSizeBytes = flags.IntSlice("respSizeBytes", defaultRespSizeBytes, "Response size in bytes - may be a comma-separated list") benchTime = flag.Duration("benchtime", time.Second, "Configures the amount of time to run each benchmark") memProfile = flag.String("memProfile", "", "Enables memory profiling output to the filename provided.") memProfileRate = flag.Int("memProfileRate", 512*1024, "Configures the memory profiling rate. \n"+ "memProfile should be set before setting profile rate. To include every allocated block in the profile, "+ "set MemProfileRate to 1. To turn off profiling entirely, set MemProfileRate to 0. 512 * 1024 by default.") cpuProfile = flag.String("cpuProfile", "", "Enables CPU profiling output to the filename provided") benchmarkResultFile = flag.String("resultFile", "", "Save the benchmark result into a binary file") useBufconn = flag.Bool("bufconn", false, "Use in-memory connection instead of system network I/O") enableKeepalive = flag.Bool("enable_keepalive", false, "Enable client keepalive. \n"+ "Keepalive.Time is set to 10s, Keepalive.Timeout is set to 1s, Keepalive.PermitWithoutStream is set to true.") ) const ( workloadsUnary = "unary" workloadsStreaming = "streaming" workloadsUnconstrained = "unconstrained" workloadsAll = "all" // Compression modes. compModeOff = "off" compModeGzip = "gzip" compModeNop = "nop" compModeAll = "all" // Toggle modes. toggleModeOff = "off" toggleModeOn = "on" toggleModeBoth = "both" // Network modes. networkModeNone = "none" networkModeLocal = "Local" networkModeLAN = "LAN" networkModeWAN = "WAN" networkLongHaul = "Longhaul" numStatsBuckets = 10 warmupCallCount = 10 warmuptime = time.Second ) var ( allWorkloads = []string{workloadsUnary, workloadsStreaming, workloadsUnconstrained, workloadsAll} allCompModes = []string{compModeOff, compModeGzip, compModeNop, compModeAll} allToggleModes = []string{toggleModeOff, toggleModeOn, toggleModeBoth} allNetworkModes = []string{networkModeNone, networkModeLocal, networkModeLAN, networkModeWAN, networkLongHaul} defaultReadLatency = []time.Duration{0, 40 * time.Millisecond} // if non-positive, no delay. defaultReadKbps = []int{0, 10240} // if non-positive, infinite defaultReadMTU = []int{0} // if non-positive, infinite defaultMaxConcurrentCalls = []int{1, 8, 64, 512} defaultReqSizeBytes = []int{1, 1024, 1024 * 1024} defaultRespSizeBytes = []int{1, 1024, 1024 * 1024} networks = map[string]latency.Network{ networkModeLocal: latency.Local, networkModeLAN: latency.LAN, networkModeWAN: latency.WAN, networkLongHaul: latency.Longhaul, } keepaliveTime = 10 * time.Second // this is the minimum allowed keepaliveTimeout = 1 * time.Second ) // runModes indicates the workloads to run. This is initialized with a call to // `runModesFromWorkloads`, passing the workloads flag set by the user. type runModes struct { unary, streaming, unconstrained bool } // runModesFromWorkloads determines the runModes based on the value of // workloads flag set by the user. 
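// For example (illustrative): passing -workloads=streaming yields
// runModes{streaming: true}, while -workloads=all enables all three modes.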
func runModesFromWorkloads(workload string) runModes { r := runModes{} switch workload { case workloadsUnary: r.unary = true case workloadsStreaming: r.streaming = true case workloadsUnconstrained: r.unconstrained = true case workloadsAll: r.unary = true r.streaming = true r.unconstrained = true default: log.Fatalf("Unknown workloads setting: %v (want one of: %v)", workloads, strings.Join(allWorkloads, ", ")) } return r } type startFunc func(mode string, bf stats.Features) type stopFunc func(count uint64) type ucStopFunc func(req uint64, resp uint64) type rpcCallFunc func(pos int) type rpcSendFunc func(pos int) type rpcRecvFunc func(pos int) type rpcCleanupFunc func() func unaryBenchmark(start startFunc, stop stopFunc, bf stats.Features, s *stats.Stats) { caller, cleanup := makeFuncUnary(bf) defer cleanup() runBenchmark(caller, start, stop, bf, s, workloadsUnary) } func streamBenchmark(start startFunc, stop stopFunc, bf stats.Features, s *stats.Stats) { caller, cleanup := makeFuncStream(bf) defer cleanup() runBenchmark(caller, start, stop, bf, s, workloadsStreaming) } func unconstrainedStreamBenchmark(start startFunc, stop ucStopFunc, bf stats.Features, s *stats.Stats) { var sender rpcSendFunc var recver rpcRecvFunc var cleanup rpcCleanupFunc if bf.EnablePreloader { sender, recver, cleanup = makeFuncUnconstrainedStreamPreloaded(bf) } else { sender, recver, cleanup = makeFuncUnconstrainedStream(bf) } defer cleanup() var req, resp uint64 go func() { // Resets the counters once warmed up <-time.NewTimer(warmuptime).C atomic.StoreUint64(&req, 0) atomic.StoreUint64(&resp, 0) start(workloadsUnconstrained, bf) }() bmEnd := time.Now().Add(bf.BenchTime + warmuptime) var wg sync.WaitGroup wg.Add(2 * bf.MaxConcurrentCalls) for i := 0; i < bf.MaxConcurrentCalls; i++ { go func(pos int) { defer wg.Done() for { t := time.Now() if t.After(bmEnd) { return } sender(pos) atomic.AddUint64(&req, 1) } }(i) go func(pos int) { defer wg.Done() for { t := time.Now() if t.After(bmEnd) { return } recver(pos) atomic.AddUint64(&resp, 1) } }(i) } wg.Wait() stop(req, resp) } // makeClient returns a gRPC client for the grpc.testing.BenchmarkService // service. The client is configured using the different options in the passed // 'bf'. Also returns a cleanup function to close the client and release // resources. 
func makeClient(bf stats.Features) (testpb.BenchmarkServiceClient, func()) { nw := &latency.Network{Kbps: bf.Kbps, Latency: bf.Latency, MTU: bf.MTU} opts := []grpc.DialOption{} sopts := []grpc.ServerOption{} if bf.ModeCompressor == compModeNop { sopts = append(sopts, grpc.RPCCompressor(nopCompressor{}), grpc.RPCDecompressor(nopDecompressor{}), ) opts = append(opts, grpc.WithCompressor(nopCompressor{}), grpc.WithDecompressor(nopDecompressor{}), ) } if bf.ModeCompressor == compModeGzip { sopts = append(sopts, grpc.RPCCompressor(grpc.NewGZIPCompressor()), grpc.RPCDecompressor(grpc.NewGZIPDecompressor()), ) opts = append(opts, grpc.WithCompressor(grpc.NewGZIPCompressor()), grpc.WithDecompressor(grpc.NewGZIPDecompressor()), ) } if bf.EnableKeepalive { opts = append(opts, grpc.WithKeepaliveParams(keepalive.ClientParameters{ Time: keepaliveTime, Timeout: keepaliveTimeout, PermitWithoutStream: true, }), ) } sopts = append(sopts, grpc.MaxConcurrentStreams(uint32(bf.MaxConcurrentCalls+1))) opts = append(opts, grpc.WithInsecure()) var lis net.Listener if bf.UseBufConn { bcLis := bufconn.Listen(256 * 1024) lis = bcLis opts = append(opts, grpc.WithContextDialer(func(ctx context.Context, address string) (net.Conn, error) { return nw.ContextDialer(func(context.Context, string, string) (net.Conn, error) { return bcLis.Dial() })(ctx, "", "") })) } else { var err error lis, err = net.Listen("tcp", "localhost:0") if err != nil { grpclog.Fatalf("Failed to listen: %v", err) } opts = append(opts, grpc.WithContextDialer(func(ctx context.Context, address string) (net.Conn, error) { return nw.ContextDialer((&net.Dialer{}).DialContext)(ctx, "tcp", lis.Addr().String()) })) } lis = nw.Listener(lis) stopper := bm.StartServer(bm.ServerInfo{Type: "protobuf", Listener: lis}, sopts...) conn := bm.NewClientConn("" /* target not used */, opts...) 
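// The dial target passed to NewClientConn is intentionally empty: the custom
// ContextDialer installed above decides the actual endpoint (either the
// in-memory bufconn listener or the TCP listener created for this benchmark).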
return testpb.NewBenchmarkServiceClient(conn), func() { conn.Close() stopper() } } func makeFuncUnary(bf stats.Features) (rpcCallFunc, rpcCleanupFunc) { tc, cleanup := makeClient(bf) return func(int) { unaryCaller(tc, bf.ReqSizeBytes, bf.RespSizeBytes) }, cleanup } func makeFuncStream(bf stats.Features) (rpcCallFunc, rpcCleanupFunc) { tc, cleanup := makeClient(bf) streams := make([]testpb.BenchmarkService_StreamingCallClient, bf.MaxConcurrentCalls) for i := 0; i < bf.MaxConcurrentCalls; i++ { stream, err := tc.StreamingCall(context.Background()) if err != nil { grpclog.Fatalf("%v.StreamingCall(_) = _, %v", tc, err) } streams[i] = stream } return func(pos int) { streamCaller(streams[pos], bf.ReqSizeBytes, bf.RespSizeBytes) }, cleanup } func makeFuncUnconstrainedStreamPreloaded(bf stats.Features) (rpcSendFunc, rpcRecvFunc, rpcCleanupFunc) { streams, req, cleanup := setupUnconstrainedStream(bf) preparedMsg := make([]*grpc.PreparedMsg, len(streams)) for i, stream := range streams { preparedMsg[i] = &grpc.PreparedMsg{} err := preparedMsg[i].Encode(stream, req) if err != nil { grpclog.Fatalf("%v.Encode(%v, %v) = %v", preparedMsg[i], req, stream, err) } } return func(pos int) { streams[pos].SendMsg(preparedMsg[pos]) }, func(pos int) { streams[pos].Recv() }, cleanup } func makeFuncUnconstrainedStream(bf stats.Features) (rpcSendFunc, rpcRecvFunc, rpcCleanupFunc) { streams, req, cleanup := setupUnconstrainedStream(bf) return func(pos int) { streams[pos].Send(req) }, func(pos int) { streams[pos].Recv() }, cleanup } func setupUnconstrainedStream(bf stats.Features) ([]testpb.BenchmarkService_StreamingCallClient, *testpb.SimpleRequest, rpcCleanupFunc) { tc, cleanup := makeClient(bf) streams := make([]testpb.BenchmarkService_StreamingCallClient, bf.MaxConcurrentCalls) for i := 0; i < bf.MaxConcurrentCalls; i++ { stream, err := tc.UnconstrainedStreamingCall(context.Background()) if err != nil { grpclog.Fatalf("%v.UnconstrainedStreamingCall(_) = _, %v", tc, err) } streams[i] = stream } pl := bm.NewPayload(testpb.PayloadType_COMPRESSABLE, bf.ReqSizeBytes) req := &testpb.SimpleRequest{ ResponseType: pl.Type, ResponseSize: int32(bf.RespSizeBytes), Payload: pl, } return streams, req, cleanup } // Makes a UnaryCall gRPC request using the given BenchmarkServiceClient and // request and response sizes. func unaryCaller(client testpb.BenchmarkServiceClient, reqSize, respSize int) { if err := bm.DoUnaryCall(client, reqSize, respSize); err != nil { grpclog.Fatalf("DoUnaryCall failed: %v", err) } } func streamCaller(stream testpb.BenchmarkService_StreamingCallClient, reqSize, respSize int) { if err := bm.DoStreamingRoundTrip(stream, reqSize, respSize); err != nil { grpclog.Fatalf("DoStreamingRoundTrip failed: %v", err) } } func runBenchmark(caller rpcCallFunc, start startFunc, stop stopFunc, bf stats.Features, s *stats.Stats, mode string) { // Warm up connection. for i := 0; i < warmupCallCount; i++ { caller(0) } // Run benchmark. start(mode, bf) var wg sync.WaitGroup wg.Add(bf.MaxConcurrentCalls) bmEnd := time.Now().Add(bf.BenchTime) var count uint64 for i := 0; i < bf.MaxConcurrentCalls; i++ { go func(pos int) { defer wg.Done() for { t := time.Now() if t.After(bmEnd) { return } start := time.Now() caller(pos) elapse := time.Since(start) atomic.AddUint64(&count, 1) s.AddDuration(elapse) } }(i) } wg.Wait() stop(count) } // benchOpts represents all configurable options available while running this // benchmark. This is built from the values passed as flags. 
type benchOpts struct { rModes runModes benchTime time.Duration memProfileRate int memProfile string cpuProfile string networkMode string benchmarkResultFile string useBufconn bool enableKeepalive bool features *featureOpts } // featureOpts represents options which can have multiple values. The user // usually provides a comma-separated list of options for each of these // features through command line flags. We generate all possible combinations // for the provided values and run the benchmarks for each combination. type featureOpts struct { enableTrace []bool readLatencies []time.Duration readKbps []int readMTU []int maxConcurrentCalls []int reqSizeBytes []int respSizeBytes []int compModes []string enableChannelz []bool enablePreloader []bool } // makeFeaturesNum returns a slice of ints of size 'maxFeatureIndex' where each // element of the slice (indexed by 'featuresIndex' enum) contains the number // of features to be exercised by the benchmark code. // For example: Index 0 of the returned slice contains the number of values for // enableTrace feature, while index 1 contains the number of value of // readLatencies feature and so on. func makeFeaturesNum(b *benchOpts) []int { featuresNum := make([]int, stats.MaxFeatureIndex) for i := 0; i < len(featuresNum); i++ { switch stats.FeatureIndex(i) { case stats.EnableTraceIndex: featuresNum[i] = len(b.features.enableTrace) case stats.ReadLatenciesIndex: featuresNum[i] = len(b.features.readLatencies) case stats.ReadKbpsIndex: featuresNum[i] = len(b.features.readKbps) case stats.ReadMTUIndex: featuresNum[i] = len(b.features.readMTU) case stats.MaxConcurrentCallsIndex: featuresNum[i] = len(b.features.maxConcurrentCalls) case stats.ReqSizeBytesIndex: featuresNum[i] = len(b.features.reqSizeBytes) case stats.RespSizeBytesIndex: featuresNum[i] = len(b.features.respSizeBytes) case stats.CompModesIndex: featuresNum[i] = len(b.features.compModes) case stats.EnableChannelzIndex: featuresNum[i] = len(b.features.enableChannelz) case stats.EnablePreloaderIndex: featuresNum[i] = len(b.features.enablePreloader) default: log.Fatalf("Unknown feature index %v in generateFeatures. maxFeatureIndex is %v", i, stats.MaxFeatureIndex) } } return featuresNum } // sharedFeatures returns a bool slice which acts as a bitmask. Each item in // the slice represents a feature, indexed by 'featureIndex' enum. The bit is // set to 1 if the corresponding feature does not have multiple value, so is // shared amongst all benchmarks. func sharedFeatures(featuresNum []int) []bool { result := make([]bool, len(featuresNum)) for i, num := range featuresNum { if num <= 1 { result[i] = true } } return result } // generateFeatures generates all combinations of the provided feature options. // While all the feature options are stored in the benchOpts struct, the input // parameter 'featuresNum' is a slice indexed by 'featureIndex' enum containing // the number of values for each feature. 
// For example, let's say the user sets -workloads=all and // -maxConcurrentCalls=1,100, this would end up with the following // combinations: // [workloads: unary, maxConcurrentCalls=1] // [workloads: unary, maxConcurrentCalls=1] // [workloads: streaming, maxConcurrentCalls=100] // [workloads: streaming, maxConcurrentCalls=100] // [workloads: unconstrained, maxConcurrentCalls=1] // [workloads: unconstrained, maxConcurrentCalls=100] func (b *benchOpts) generateFeatures(featuresNum []int) []stats.Features { // curPos and initialPos are two slices where each value acts as an index // into the appropriate feature slice maintained in benchOpts.features. This // loop generates all possible combinations of features by changing one value // at a time, and once curPos becomes equal to initialPos, we have explored // all options. var result []stats.Features var curPos []int initialPos := make([]int, stats.MaxFeatureIndex) for !reflect.DeepEqual(initialPos, curPos) { if curPos == nil { curPos = make([]int, stats.MaxFeatureIndex) } result = append(result, stats.Features{ // These features stay the same for each iteration. NetworkMode: b.networkMode, UseBufConn: b.useBufconn, EnableKeepalive: b.enableKeepalive, BenchTime: b.benchTime, // These features can potentially change for each iteration. EnableTrace: b.features.enableTrace[curPos[stats.EnableTraceIndex]], Latency: b.features.readLatencies[curPos[stats.ReadLatenciesIndex]], Kbps: b.features.readKbps[curPos[stats.ReadKbpsIndex]], MTU: b.features.readMTU[curPos[stats.ReadMTUIndex]], MaxConcurrentCalls: b.features.maxConcurrentCalls[curPos[stats.MaxConcurrentCallsIndex]], ReqSizeBytes: b.features.reqSizeBytes[curPos[stats.ReqSizeBytesIndex]], RespSizeBytes: b.features.respSizeBytes[curPos[stats.RespSizeBytesIndex]], ModeCompressor: b.features.compModes[curPos[stats.CompModesIndex]], EnableChannelz: b.features.enableChannelz[curPos[stats.EnableChannelzIndex]], EnablePreloader: b.features.enablePreloader[curPos[stats.EnablePreloaderIndex]], }) addOne(curPos, featuresNum) } return result } // addOne mutates the input slice 'features' by changing one feature, thus // arriving at the next combination of feature values. 'featuresMaxPosition' // provides the numbers of allowed values for each feature, indexed by // 'featureIndex' enum. func addOne(features []int, featuresMaxPosition []int) { for i := len(features) - 1; i >= 0; i-- { features[i] = (features[i] + 1) if features[i]/featuresMaxPosition[i] == 0 { break } features[i] = features[i] % featuresMaxPosition[i] } } // processFlags reads the command line flags and builds benchOpts. Specifying // invalid values for certain flags will cause flag.Parse() to fail, and the // program to terminate. // This *SHOULD* be the only place where the flags are accessed. All other // parts of the benchmark code should rely on the returned benchOpts. 
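// Note (summarizing the behavior below): when -networkMode is set to one of
// the predefined networks (Local, LAN, WAN, Longhaul), the latency, kbps and
// mtu feature values are overwritten with that network's parameters.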
func processFlags() *benchOpts { flag.Parse() if flag.NArg() != 0 { log.Fatal("Error: unparsed arguments: ", flag.Args()) } opts := &benchOpts{ rModes: runModesFromWorkloads(*workloads), benchTime: *benchTime, memProfileRate: *memProfileRate, memProfile: *memProfile, cpuProfile: *cpuProfile, networkMode: *networkMode, benchmarkResultFile: *benchmarkResultFile, useBufconn: *useBufconn, enableKeepalive: *enableKeepalive, features: &featureOpts{ enableTrace: setToggleMode(*traceMode), readLatencies: append([]time.Duration(nil), *readLatency...), readKbps: append([]int(nil), *readKbps...), readMTU: append([]int(nil), *readMTU...), maxConcurrentCalls: append([]int(nil), *maxConcurrentCalls...), reqSizeBytes: append([]int(nil), *readReqSizeBytes...), respSizeBytes: append([]int(nil), *readRespSizeBytes...), compModes: setCompressorMode(*compressorMode), enableChannelz: setToggleMode(*channelzOn), enablePreloader: setToggleMode(*preloaderMode), }, } // Re-write latency, kpbs and mtu if network mode is set. if network, ok := networks[opts.networkMode]; ok { opts.features.readLatencies = []time.Duration{network.Latency} opts.features.readKbps = []int{network.Kbps} opts.features.readMTU = []int{network.MTU} } return opts } func setToggleMode(val string) []bool { switch val { case toggleModeOn: return []bool{true} case toggleModeOff: return []bool{false} case toggleModeBoth: return []bool{false, true} default: // This should never happen because a wrong value passed to this flag would // be caught during flag.Parse(). return []bool{} } } func setCompressorMode(val string) []string { switch val { case compModeNop, compModeGzip, compModeOff: return []string{val} case compModeAll: return []string{compModeNop, compModeGzip, compModeOff} default: // This should never happen because a wrong value passed to this flag would // be caught during flag.Parse(). 
return []string{} } } func main() { opts := processFlags() before(opts) s := stats.NewStats(numStatsBuckets) featuresNum := makeFeaturesNum(opts) sf := sharedFeatures(featuresNum) var ( start = func(mode string, bf stats.Features) { s.StartRun(mode, bf, sf) } stop = func(count uint64) { s.EndRun(count) } ucStop = func(req uint64, resp uint64) { s.EndUnconstrainedRun(req, resp) } ) for _, bf := range opts.generateFeatures(featuresNum) { grpc.EnableTracing = bf.EnableTrace if bf.EnableChannelz { channelz.TurnOn() } if opts.rModes.unary { unaryBenchmark(start, stop, bf, s) } if opts.rModes.streaming { streamBenchmark(start, stop, bf, s) } if opts.rModes.unconstrained { unconstrainedStreamBenchmark(start, ucStop, bf, s) } } after(opts, s.GetResults()) } func before(opts *benchOpts) { if opts.memProfile != "" { runtime.MemProfileRate = opts.memProfileRate } if opts.cpuProfile != "" { f, err := os.Create(opts.cpuProfile) if err != nil { fmt.Fprintf(os.Stderr, "testing: %s\n", err) return } if err := pprof.StartCPUProfile(f); err != nil { fmt.Fprintf(os.Stderr, "testing: can't start cpu profile: %s\n", err) f.Close() return } } } func after(opts *benchOpts, data []stats.BenchResults) { if opts.cpuProfile != "" { pprof.StopCPUProfile() // flushes profile to disk } if opts.memProfile != "" { f, err := os.Create(opts.memProfile) if err != nil { fmt.Fprintf(os.Stderr, "testing: %s\n", err) os.Exit(2) } runtime.GC() // materialize all statistics if err = pprof.WriteHeapProfile(f); err != nil { fmt.Fprintf(os.Stderr, "testing: can't write heap profile %s: %s\n", opts.memProfile, err) os.Exit(2) } f.Close() } if opts.benchmarkResultFile != "" { f, err := os.Create(opts.benchmarkResultFile) if err != nil { log.Fatalf("testing: can't write benchmark result %s: %s\n", opts.benchmarkResultFile, err) } dataEncoder := gob.NewEncoder(f) dataEncoder.Encode(data) f.Close() } } // nopCompressor is a compressor that just copies data. type nopCompressor struct{} func (nopCompressor) Do(w io.Writer, p []byte) error { n, err := w.Write(p) if err != nil { return err } if n != len(p) { return fmt.Errorf("nopCompressor.Write: wrote %v bytes; want %v", n, len(p)) } return nil } func (nopCompressor) Type() string { return compModeNop } // nopDecompressor is a decompressor that just copies data. type nopDecompressor struct{} func (nopDecompressor) Do(r io.Reader) ([]byte, error) { return ioutil.ReadAll(r) } func (nopDecompressor) Type() string { return compModeNop } grpc-go-1.22.1/benchmark/benchmark.go000066400000000000000000000211571351635773100173670ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ //go:generate protoc -I grpc_testing --go_out=plugins=grpc:grpc_testing grpc_testing/control.proto grpc_testing/messages.proto grpc_testing/payloads.proto grpc_testing/services.proto grpc_testing/stats.proto /* Package benchmark implements the building blocks to setup end-to-end gRPC benchmarks. 
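A minimal usage sketch (illustrative only; error handling and most options are
elided, and the listener address is an assumption):

	lis, _ := net.Listen("tcp", "localhost:0")
	stop := StartServer(ServerInfo{Type: "protobuf", Listener: lis})
	defer stop()

	conn := NewClientConn(lis.Addr().String(), grpc.WithInsecure(), grpc.WithBlock())
	defer conn.Close()

	client := testpb.NewBenchmarkServiceClient(conn)
	if err := DoUnaryCall(client, 1, 1); err != nil {
		// handle the error
	}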
*/ package benchmark import ( "context" "fmt" "io" "log" "net" "google.golang.org/grpc" testpb "google.golang.org/grpc/benchmark/grpc_testing" "google.golang.org/grpc/codes" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/status" ) // Allows reuse of the same testpb.Payload object. func setPayload(p *testpb.Payload, t testpb.PayloadType, size int) { if size < 0 { grpclog.Fatalf("Requested a response with invalid length %d", size) } body := make([]byte, size) switch t { case testpb.PayloadType_COMPRESSABLE: case testpb.PayloadType_UNCOMPRESSABLE: grpclog.Fatalf("PayloadType UNCOMPRESSABLE is not supported") default: grpclog.Fatalf("Unsupported payload type: %d", t) } p.Type = t p.Body = body } // NewPayload creates a payload with the given type and size. func NewPayload(t testpb.PayloadType, size int) *testpb.Payload { p := new(testpb.Payload) setPayload(p, t, size) return p } type testServer struct { } func (s *testServer) UnaryCall(ctx context.Context, in *testpb.SimpleRequest) (*testpb.SimpleResponse, error) { return &testpb.SimpleResponse{ Payload: NewPayload(in.ResponseType, int(in.ResponseSize)), }, nil } func (s *testServer) StreamingCall(stream testpb.BenchmarkService_StreamingCallServer) error { response := &testpb.SimpleResponse{ Payload: new(testpb.Payload), } in := new(testpb.SimpleRequest) for { // use ServerStream directly to reuse the same testpb.SimpleRequest object err := stream.(grpc.ServerStream).RecvMsg(in) if err == io.EOF { // read done. return nil } if err != nil { return err } setPayload(response.Payload, in.ResponseType, int(in.ResponseSize)) if err := stream.Send(response); err != nil { return err } } } func (s *testServer) UnconstrainedStreamingCall(stream testpb.BenchmarkService_UnconstrainedStreamingCallServer) error { in := new(testpb.SimpleRequest) // Receive a message to learn response type and size. err := stream.RecvMsg(in) if err == io.EOF { // read done. return nil } if err != nil { return err } response := &testpb.SimpleResponse{ Payload: new(testpb.Payload), } setPayload(response.Payload, in.ResponseType, int(in.ResponseSize)) go func() { for { // Using RecvMsg rather than Recv to prevent reallocation of SimpleRequest. err := stream.RecvMsg(in) switch status.Code(err) { case codes.Canceled: case codes.OK: default: log.Fatalf("server recv error: %v", err) } } }() go func() { for { err := stream.Send(response) switch status.Code(err) { case codes.Unavailable: case codes.OK: default: log.Fatalf("server send error: %v", err) } } }() <-stream.Context().Done() return stream.Context().Err() } // byteBufServer is a gRPC server that sends and receives byte buffer. // The purpose is to benchmark the gRPC performance without protobuf serialization/deserialization overhead. type byteBufServer struct { respSize int32 } // UnaryCall is an empty function and is not used for benchmark. // If bytebuf UnaryCall benchmark is needed later, the function body needs to be updated. 
func (s *byteBufServer) UnaryCall(ctx context.Context, in *testpb.SimpleRequest) (*testpb.SimpleResponse, error) { return &testpb.SimpleResponse{}, nil } func (s *byteBufServer) StreamingCall(stream testpb.BenchmarkService_StreamingCallServer) error { for { var in []byte err := stream.(grpc.ServerStream).RecvMsg(&in) if err == io.EOF { return nil } if err != nil { return err } out := make([]byte, s.respSize) if err := stream.(grpc.ServerStream).SendMsg(&out); err != nil { return err } } } func (s *byteBufServer) UnconstrainedStreamingCall(stream testpb.BenchmarkService_UnconstrainedStreamingCallServer) error { for { var in []byte err := stream.(grpc.ServerStream).RecvMsg(&in) if err == io.EOF { return nil } if err != nil { return err } out := make([]byte, s.respSize) if err := stream.(grpc.ServerStream).SendMsg(&out); err != nil { return err } } } // ServerInfo contains the information to create a gRPC benchmark server. type ServerInfo struct { // Type is the type of the server. // It should be "protobuf" or "bytebuf". Type string // Metadata is an optional configuration. // For "protobuf", it's ignored. // For "bytebuf", it should be an int representing response size. Metadata interface{} // Listener is the network listener for the server to use Listener net.Listener } // StartServer starts a gRPC server serving a benchmark service according to info. // It returns a function to stop the server. func StartServer(info ServerInfo, opts ...grpc.ServerOption) func() { opts = append(opts, grpc.WriteBufferSize(128*1024)) opts = append(opts, grpc.ReadBufferSize(128*1024)) s := grpc.NewServer(opts...) switch info.Type { case "protobuf": testpb.RegisterBenchmarkServiceServer(s, &testServer{}) case "bytebuf": respSize, ok := info.Metadata.(int32) if !ok { grpclog.Fatalf("failed to StartServer, invalid metadata: %v, for Type: %v", info.Metadata, info.Type) } testpb.RegisterBenchmarkServiceServer(s, &byteBufServer{respSize: respSize}) default: grpclog.Fatalf("failed to StartServer, unknown Type: %v", info.Type) } go s.Serve(info.Listener) return func() { s.Stop() } } // DoUnaryCall performs an unary RPC with given stub and request and response sizes. func DoUnaryCall(tc testpb.BenchmarkServiceClient, reqSize, respSize int) error { pl := NewPayload(testpb.PayloadType_COMPRESSABLE, reqSize) req := &testpb.SimpleRequest{ ResponseType: pl.Type, ResponseSize: int32(respSize), Payload: pl, } if _, err := tc.UnaryCall(context.Background(), req); err != nil { return fmt.Errorf("/BenchmarkService/UnaryCall(_, _) = _, %v, want _, ", err) } return nil } // DoStreamingRoundTrip performs a round trip for a single streaming rpc. func DoStreamingRoundTrip(stream testpb.BenchmarkService_StreamingCallClient, reqSize, respSize int) error { pl := NewPayload(testpb.PayloadType_COMPRESSABLE, reqSize) req := &testpb.SimpleRequest{ ResponseType: pl.Type, ResponseSize: int32(respSize), Payload: pl, } if err := stream.Send(req); err != nil { return fmt.Errorf("/BenchmarkService/StreamingCall.Send(_) = %v, want ", err) } if _, err := stream.Recv(); err != nil { // EOF is a valid error here. if err == io.EOF { return nil } return fmt.Errorf("/BenchmarkService/StreamingCall.Recv(_) = %v, want ", err) } return nil } // DoByteBufStreamingRoundTrip performs a round trip for a single streaming rpc, using a custom codec for byte buffer. 
func DoByteBufStreamingRoundTrip(stream testpb.BenchmarkService_StreamingCallClient, reqSize, respSize int) error { out := make([]byte, reqSize) if err := stream.(grpc.ClientStream).SendMsg(&out); err != nil { return fmt.Errorf("/BenchmarkService/StreamingCall.(ClientStream).SendMsg(_) = %v, want ", err) } var in []byte if err := stream.(grpc.ClientStream).RecvMsg(&in); err != nil { // EOF is a valid error here. if err == io.EOF { return nil } return fmt.Errorf("/BenchmarkService/StreamingCall.(ClientStream).RecvMsg(_) = %v, want ", err) } return nil } // NewClientConn creates a gRPC client connection to addr. func NewClientConn(addr string, opts ...grpc.DialOption) *grpc.ClientConn { return NewClientConnWithContext(context.Background(), addr, opts...) } // NewClientConnWithContext creates a gRPC client connection to addr using ctx. func NewClientConnWithContext(ctx context.Context, addr string, opts ...grpc.DialOption) *grpc.ClientConn { opts = append(opts, grpc.WithWriteBufferSize(128*1024)) opts = append(opts, grpc.WithReadBufferSize(128*1024)) conn, err := grpc.DialContext(ctx, addr, opts...) if err != nil { grpclog.Fatalf("NewClientConn(%q) failed to create a ClientConn %v", addr, err) } return conn } grpc-go-1.22.1/benchmark/benchresult/000077500000000000000000000000001351635773100174165ustar00rootroot00000000000000grpc-go-1.22.1/benchmark/benchresult/main.go000066400000000000000000000111711351635773100206720ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ /* To format the benchmark result: go run benchmark/benchresult/main.go resultfile To see the performance change based on a old result: go run benchmark/benchresult/main.go resultfile_old resultfile It will print the comparison result of intersection benchmarks between two files. 
*/ package main import ( "encoding/gob" "fmt" "log" "os" "strings" "time" "google.golang.org/grpc/benchmark/stats" ) func createMap(fileName string) map[string]stats.BenchResults { f, err := os.Open(fileName) if err != nil { log.Fatalf("Read file %s error: %s\n", fileName, err) } defer f.Close() var data []stats.BenchResults decoder := gob.NewDecoder(f) if err = decoder.Decode(&data); err != nil { log.Fatalf("Decode file %s error: %s\n", fileName, err) } m := make(map[string]stats.BenchResults) for _, d := range data { m[d.RunMode+"-"+d.Features.String()] = d } return m } func intChange(title string, val1, val2 uint64) string { return fmt.Sprintf("%20s %12d %12d %8.2f%%\n", title, val1, val2, float64(int64(val2)-int64(val1))*100/float64(val1)) } func floatChange(title string, val1, val2 float64) string { return fmt.Sprintf("%20s %12.2f %12.2f %8.2f%%\n", title, val1, val2, float64(int64(val2)-int64(val1))*100/float64(val1)) } func timeChange(title string, val1, val2 time.Duration) string { return fmt.Sprintf("%20s %12s %12s %8.2f%%\n", title, val1.String(), val2.String(), float64(val2-val1)*100/float64(val1)) } func compareTwoMap(m1, m2 map[string]stats.BenchResults) { for k2, v2 := range m2 { if v1, ok := m1[k2]; ok { changes := k2 + "\n" changes += fmt.Sprintf("%20s %12s %12s %8s\n", "Title", "Before", "After", "Percentage") changes += intChange("TotalOps", v1.Data.TotalOps, v2.Data.TotalOps) changes += intChange("SendOps", v1.Data.SendOps, v2.Data.SendOps) changes += intChange("RecvOps", v1.Data.RecvOps, v2.Data.RecvOps) changes += intChange("Bytes/op", v1.Data.AllocedBytes, v2.Data.AllocedBytes) changes += intChange("Allocs/op", v1.Data.Allocs, v2.Data.Allocs) changes += floatChange("ReqT/op", v1.Data.ReqT, v2.Data.ReqT) changes += floatChange("RespT/op", v1.Data.RespT, v2.Data.RespT) changes += timeChange("50th-Lat", v1.Data.Fiftieth, v2.Data.Fiftieth) changes += timeChange("90th-Lat", v1.Data.Ninetieth, v2.Data.Ninetieth) changes += timeChange("99th-Lat", v1.Data.NinetyNinth, v2.Data.NinetyNinth) changes += timeChange("Avg-Lat", v1.Data.Average, v2.Data.Average) fmt.Printf("%s\n", changes) } } } func compareBenchmark(file1, file2 string) { compareTwoMap(createMap(file1), createMap(file2)) } func printline(benchName, total, send, recv, allocB, allocN, reqT, respT, ltc50, ltc90, l99, lAvg interface{}) { fmt.Printf("%-80v%12v%12v%12v%12v%12v%18v%18v%12v%12v%12v%12v\n", benchName, total, send, recv, allocB, allocN, reqT, respT, ltc50, ltc90, l99, lAvg) } func formatBenchmark(fileName string) { f, err := os.Open(fileName) if err != nil { log.Fatalf("Read file %s error: %s\n", fileName, err) } defer f.Close() var results []stats.BenchResults decoder := gob.NewDecoder(f) if err = decoder.Decode(&results); err != nil { log.Fatalf("Decode file %s error: %s\n", fileName, err) } if len(results) == 0 { log.Fatalf("No benchmark results in file %s\n", fileName) } fmt.Println("\nShared features:\n" + strings.Repeat("-", 20)) fmt.Print(results[0].Features.SharedFeatures(results[0].SharedFeatures)) fmt.Println(strings.Repeat("-", 35)) wantFeatures := results[0].SharedFeatures for i := 0; i < len(results[0].SharedFeatures); i++ { wantFeatures[i] = !wantFeatures[i] } printline("Name", "TotalOps", "SendOps", "RecvOps", "Alloc (B)", "Alloc (#)", "RequestT", "ResponseT", "L-50", "L-90", "L-99", "L-Avg") for _, r := range results { d := r.Data printline(r.RunMode+r.Features.PrintableName(wantFeatures), d.TotalOps, d.SendOps, d.RecvOps, d.AllocedBytes, d.Allocs, d.ReqT, d.RespT, d.Fiftieth, d.Ninetieth, 
d.NinetyNinth, d.Average) } } func main() { if len(os.Args) == 2 { formatBenchmark(os.Args[1]) } else { compareBenchmark(os.Args[1], os.Args[2]) } } grpc-go-1.22.1/benchmark/client/000077500000000000000000000000001351635773100163565ustar00rootroot00000000000000grpc-go-1.22.1/benchmark/client/main.go000066400000000000000000000140321351635773100176310ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ /* Package main provides a client used for benchmarking. Before running the client, the user would need to launch the grpc server. To start the server before running the client, you can run look for the command under the following file: benchmark/server/main.go After starting the server, the client can be run. An example of how to run this command is: go run benchmark/client/main.go -test_name=grpc_test If the server is running on a different port than 50051, then use the port flag for the client to hit the server on the correct port. An example for how to run this command on a different port can be found here: go run benchmark/client/main.go -test_name=grpc_test -port=8080 */ package main import ( "context" "flag" "fmt" "os" "runtime" "runtime/pprof" "sync" "time" "google.golang.org/grpc" "google.golang.org/grpc/benchmark" testpb "google.golang.org/grpc/benchmark/grpc_testing" "google.golang.org/grpc/benchmark/stats" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/internal/syscall" ) var ( port = flag.String("port", "50051", "Localhost port to connect to.") numRPC = flag.Int("r", 1, "The number of concurrent RPCs on each connection.") numConn = flag.Int("c", 1, "The number of parallel connections.") warmupDur = flag.Int("w", 10, "Warm-up duration in seconds") duration = flag.Int("d", 60, "Benchmark duration in seconds") rqSize = flag.Int("req", 1, "Request message size in bytes.") rspSize = flag.Int("resp", 1, "Response message size in bytes.") rpcType = flag.String("rpc_type", "unary", `Configure different client rpc type. 
Valid options are: unary; streaming.`) testName = flag.String("test_name", "", "Name of the test used for creating profiles.") wg sync.WaitGroup hopts = stats.HistogramOptions{ NumBuckets: 2495, GrowthFactor: .01, } mu sync.Mutex hists []*stats.Histogram ) func main() { flag.Parse() if *testName == "" { grpclog.Fatalf("test_name not set") } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseSize: int32(*rspSize), Payload: &testpb.Payload{ Type: testpb.PayloadType_COMPRESSABLE, Body: make([]byte, *rqSize), }, } connectCtx, connectCancel := context.WithDeadline(context.Background(), time.Now().Add(5*time.Second)) defer connectCancel() ccs := buildConnections(connectCtx) warmDeadline := time.Now().Add(time.Duration(*warmupDur) * time.Second) endDeadline := warmDeadline.Add(time.Duration(*duration) * time.Second) cf, err := os.Create("/tmp/" + *testName + ".cpu") if err != nil { grpclog.Fatalf("Error creating file: %v", err) } defer cf.Close() pprof.StartCPUProfile(cf) cpuBeg := syscall.GetCPUTime() for _, cc := range ccs { runWithConn(cc, req, warmDeadline, endDeadline) } wg.Wait() cpu := time.Duration(syscall.GetCPUTime() - cpuBeg) pprof.StopCPUProfile() mf, err := os.Create("/tmp/" + *testName + ".mem") if err != nil { grpclog.Fatalf("Error creating file: %v", err) } defer mf.Close() runtime.GC() // materialize all statistics if err := pprof.WriteHeapProfile(mf); err != nil { grpclog.Fatalf("Error writing memory profile: %v", err) } hist := stats.NewHistogram(hopts) for _, h := range hists { hist.Merge(h) } parseHist(hist) fmt.Println("Client CPU utilization:", cpu) fmt.Println("Client CPU profile:", cf.Name()) fmt.Println("Client Mem Profile:", mf.Name()) } func buildConnections(ctx context.Context) []*grpc.ClientConn { ccs := make([]*grpc.ClientConn, *numConn) for i := range ccs { ccs[i] = benchmark.NewClientConnWithContext(ctx, "localhost:"+*port, grpc.WithInsecure(), grpc.WithBlock()) } return ccs } func runWithConn(cc *grpc.ClientConn, req *testpb.SimpleRequest, warmDeadline, endDeadline time.Time) { for i := 0; i < *numRPC; i++ { wg.Add(1) go func() { defer wg.Done() caller := makeCaller(cc, req) hist := stats.NewHistogram(hopts) for { start := time.Now() if start.After(endDeadline) { mu.Lock() hists = append(hists, hist) mu.Unlock() return } caller() elapsed := time.Since(start) if start.After(warmDeadline) { hist.Add(elapsed.Nanoseconds()) } } }() } } func makeCaller(cc *grpc.ClientConn, req *testpb.SimpleRequest) func() { client := testpb.NewBenchmarkServiceClient(cc) if *rpcType == "unary" { return func() { if _, err := client.UnaryCall(context.Background(), req); err != nil { grpclog.Fatalf("RPC failed: %v", err) } } } stream, err := client.StreamingCall(context.Background()) if err != nil { grpclog.Fatalf("RPC failed: %v", err) } return func() { if err := stream.Send(req); err != nil { grpclog.Fatalf("Streaming RPC failed to send: %v", err) } if _, err := stream.Recv(); err != nil { grpclog.Fatalf("Streaming RPC failed to read: %v", err) } } } func parseHist(hist *stats.Histogram) { fmt.Println("qps:", float64(hist.Count)/float64(*duration)) fmt.Printf("Latency: (50/90/99 %%ile): %v/%v/%v\n", time.Duration(median(.5, hist)), time.Duration(median(.9, hist)), time.Duration(median(.99, hist))) } func median(percentile float64, h *stats.Histogram) int64 { need := int64(float64(h.Count) * percentile) have := int64(0) for _, bucket := range h.Buckets { count := bucket.Count if have+count >= need { percent := float64(need-have) / float64(count) return 
int64((1.0-percent)*bucket.LowBound + percent*bucket.LowBound*(1.0+hopts.GrowthFactor)) } have += bucket.Count } panic("should have found a bound") } grpc-go-1.22.1/benchmark/flags/000077500000000000000000000000001351635773100161745ustar00rootroot00000000000000grpc-go-1.22.1/benchmark/flags/flags.go000066400000000000000000000066671351635773100176360ustar00rootroot00000000000000/* * * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ /* Package flags provide convenience types and routines to accept specific types of flag values on the command line. */ package flags import ( "bytes" "flag" "fmt" "strconv" "strings" "time" ) // stringFlagWithAllowedValues represents a string flag which can only take a // predefined set of values. type stringFlagWithAllowedValues struct { val string allowed []string } // StringWithAllowedValues returns a flag variable of type // stringFlagWithAllowedValues configured with the provided parameters. // 'allowed` is the set of values that this flag can be set to. func StringWithAllowedValues(name, defaultVal, usage string, allowed []string) *string { as := &stringFlagWithAllowedValues{defaultVal, allowed} flag.CommandLine.Var(as, name, usage) return &as.val } // String implements the flag.Value interface. func (as *stringFlagWithAllowedValues) String() string { return as.val } // Set implements the flag.Value interface. func (as *stringFlagWithAllowedValues) Set(val string) error { for _, a := range as.allowed { if a == val { as.val = val return nil } } return fmt.Errorf("want one of: %v", strings.Join(as.allowed, ", ")) } type durationSliceValue []time.Duration // DurationSlice returns a flag representing a slice of time.Duration objects. func DurationSlice(name string, defaultVal []time.Duration, usage string) *[]time.Duration { ds := make([]time.Duration, len(defaultVal)) copy(ds, defaultVal) dsv := (*durationSliceValue)(&ds) flag.CommandLine.Var(dsv, name, usage) return &ds } // Set implements the flag.Value interface. func (dsv *durationSliceValue) Set(s string) error { ds := strings.Split(s, ",") var dd []time.Duration for _, n := range ds { d, err := time.ParseDuration(n) if err != nil { return err } dd = append(dd, d) } *dsv = durationSliceValue(dd) return nil } // String implements the flag.Value interface. func (dsv *durationSliceValue) String() string { var b bytes.Buffer for i, d := range *dsv { if i > 0 { b.WriteRune(',') } b.WriteString(d.String()) } return b.String() } type intSliceValue []int // IntSlice returns a flag representing a slice of ints. func IntSlice(name string, defaultVal []int, usage string) *[]int { is := make([]int, len(defaultVal)) copy(is, defaultVal) isv := (*intSliceValue)(&is) flag.CommandLine.Var(isv, name, usage) return &is } // Set implements the flag.Value interface. 
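// For example (illustrative): with a flag registered via IntSlice("kbps", ...),
// passing -kbps=1,2,3 on the command line yields []int{1, 2, 3}.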
func (isv *intSliceValue) Set(s string) error { is := strings.Split(s, ",") var ret []int for _, n := range is { i, err := strconv.Atoi(n) if err != nil { return err } ret = append(ret, i) } *isv = intSliceValue(ret) return nil } // String implements the flag.Value interface. func (isv *intSliceValue) String() string { var b bytes.Buffer for i, n := range *isv { if i > 0 { b.WriteRune(',') } b.WriteString(strconv.Itoa(n)) } return b.String() } grpc-go-1.22.1/benchmark/flags/flags_test.go000066400000000000000000000063711351635773100206650ustar00rootroot00000000000000/* * * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package flags import ( "flag" "reflect" "testing" "time" ) func TestStringWithAllowedValues(t *testing.T) { const defaultVal = "default" tests := []struct { args string allowed []string wantVal string wantErr bool }{ {"-workloads=all", []string{"unary", "streaming", "all"}, "all", false}, {"-workloads=disallowed", []string{"unary", "streaming", "all"}, defaultVal, true}, } for _, test := range tests { flag.CommandLine = flag.NewFlagSet("test", flag.ContinueOnError) var w = StringWithAllowedValues("workloads", defaultVal, "usage", test.allowed) err := flag.CommandLine.Parse([]string{test.args}) switch { case !test.wantErr && err != nil: t.Errorf("failed to parse command line args {%v}: %v", test.args, err) case test.wantErr && err == nil: t.Errorf("flag.Parse(%v) = nil, want non-nil error", test.args) default: if *w != test.wantVal { t.Errorf("flag value is %v, want %v", *w, test.wantVal) } } } } func TestDurationSlice(t *testing.T) { defaultVal := []time.Duration{time.Second, time.Nanosecond} tests := []struct { args string wantVal []time.Duration wantErr bool }{ {"-latencies=1s", []time.Duration{time.Second}, false}, {"-latencies=1s,2s,3s", []time.Duration{time.Second, 2 * time.Second, 3 * time.Second}, false}, {"-latencies=bad", defaultVal, true}, } for _, test := range tests { flag.CommandLine = flag.NewFlagSet("test", flag.ContinueOnError) var w = DurationSlice("latencies", defaultVal, "usage") err := flag.CommandLine.Parse([]string{test.args}) switch { case !test.wantErr && err != nil: t.Errorf("failed to parse command line args {%v}: %v", test.args, err) case test.wantErr && err == nil: t.Errorf("flag.Parse(%v) = nil, want non-nil error", test.args) default: if !reflect.DeepEqual(*w, test.wantVal) { t.Errorf("flag value is %v, want %v", *w, test.wantVal) } } } } func TestIntSlice(t *testing.T) { defaultVal := []int{1, 1024} tests := []struct { args string wantVal []int wantErr bool }{ {"-kbps=1", []int{1}, false}, {"-kbps=1,2,3", []int{1, 2, 3}, false}, {"-kbps=20e4", defaultVal, true}, } for _, test := range tests { flag.CommandLine = flag.NewFlagSet("test", flag.ContinueOnError) var w = IntSlice("kbps", defaultVal, "usage") err := flag.CommandLine.Parse([]string{test.args}) switch { case !test.wantErr && err != nil: t.Errorf("failed to parse command line args {%v}: %v", test.args, err) case test.wantErr && err == nil: t.Errorf("flag.Parse(%v) = nil, 
want non-nil error", test.args) default: if !reflect.DeepEqual(*w, test.wantVal) { t.Errorf("flag value is %v, want %v", *w, test.wantVal) } } } } grpc-go-1.22.1/benchmark/grpc_testing/000077500000000000000000000000001351635773100175705ustar00rootroot00000000000000grpc-go-1.22.1/benchmark/grpc_testing/control.pb.go000066400000000000000000001514731351635773100222120ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: control.proto package grpc_testing import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type ClientType int32 const ( ClientType_SYNC_CLIENT ClientType = 0 ClientType_ASYNC_CLIENT ClientType = 1 ) var ClientType_name = map[int32]string{ 0: "SYNC_CLIENT", 1: "ASYNC_CLIENT", } var ClientType_value = map[string]int32{ "SYNC_CLIENT": 0, "ASYNC_CLIENT": 1, } func (x ClientType) String() string { return proto.EnumName(ClientType_name, int32(x)) } func (ClientType) EnumDescriptor() ([]byte, []int) { return fileDescriptor_control_63d6a60a9ad7e299, []int{0} } type ServerType int32 const ( ServerType_SYNC_SERVER ServerType = 0 ServerType_ASYNC_SERVER ServerType = 1 ServerType_ASYNC_GENERIC_SERVER ServerType = 2 ) var ServerType_name = map[int32]string{ 0: "SYNC_SERVER", 1: "ASYNC_SERVER", 2: "ASYNC_GENERIC_SERVER", } var ServerType_value = map[string]int32{ "SYNC_SERVER": 0, "ASYNC_SERVER": 1, "ASYNC_GENERIC_SERVER": 2, } func (x ServerType) String() string { return proto.EnumName(ServerType_name, int32(x)) } func (ServerType) EnumDescriptor() ([]byte, []int) { return fileDescriptor_control_63d6a60a9ad7e299, []int{1} } type RpcType int32 const ( RpcType_UNARY RpcType = 0 RpcType_STREAMING RpcType = 1 ) var RpcType_name = map[int32]string{ 0: "UNARY", 1: "STREAMING", } var RpcType_value = map[string]int32{ "UNARY": 0, "STREAMING": 1, } func (x RpcType) String() string { return proto.EnumName(RpcType_name, int32(x)) } func (RpcType) EnumDescriptor() ([]byte, []int) { return fileDescriptor_control_63d6a60a9ad7e299, []int{2} } // Parameters of poisson process distribution, which is a good representation // of activity coming in from independent identical stationary sources. type PoissonParams struct { // The rate of arrivals (a.k.a. lambda parameter of the exp distribution). 
OfferedLoad float64 `protobuf:"fixed64,1,opt,name=offered_load,json=offeredLoad,proto3" json:"offered_load,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *PoissonParams) Reset() { *m = PoissonParams{} } func (m *PoissonParams) String() string { return proto.CompactTextString(m) } func (*PoissonParams) ProtoMessage() {} func (*PoissonParams) Descriptor() ([]byte, []int) { return fileDescriptor_control_63d6a60a9ad7e299, []int{0} } func (m *PoissonParams) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_PoissonParams.Unmarshal(m, b) } func (m *PoissonParams) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_PoissonParams.Marshal(b, m, deterministic) } func (dst *PoissonParams) XXX_Merge(src proto.Message) { xxx_messageInfo_PoissonParams.Merge(dst, src) } func (m *PoissonParams) XXX_Size() int { return xxx_messageInfo_PoissonParams.Size(m) } func (m *PoissonParams) XXX_DiscardUnknown() { xxx_messageInfo_PoissonParams.DiscardUnknown(m) } var xxx_messageInfo_PoissonParams proto.InternalMessageInfo func (m *PoissonParams) GetOfferedLoad() float64 { if m != nil { return m.OfferedLoad } return 0 } type UniformParams struct { InterarrivalLo float64 `protobuf:"fixed64,1,opt,name=interarrival_lo,json=interarrivalLo,proto3" json:"interarrival_lo,omitempty"` InterarrivalHi float64 `protobuf:"fixed64,2,opt,name=interarrival_hi,json=interarrivalHi,proto3" json:"interarrival_hi,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *UniformParams) Reset() { *m = UniformParams{} } func (m *UniformParams) String() string { return proto.CompactTextString(m) } func (*UniformParams) ProtoMessage() {} func (*UniformParams) Descriptor() ([]byte, []int) { return fileDescriptor_control_63d6a60a9ad7e299, []int{1} } func (m *UniformParams) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_UniformParams.Unmarshal(m, b) } func (m *UniformParams) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_UniformParams.Marshal(b, m, deterministic) } func (dst *UniformParams) XXX_Merge(src proto.Message) { xxx_messageInfo_UniformParams.Merge(dst, src) } func (m *UniformParams) XXX_Size() int { return xxx_messageInfo_UniformParams.Size(m) } func (m *UniformParams) XXX_DiscardUnknown() { xxx_messageInfo_UniformParams.DiscardUnknown(m) } var xxx_messageInfo_UniformParams proto.InternalMessageInfo func (m *UniformParams) GetInterarrivalLo() float64 { if m != nil { return m.InterarrivalLo } return 0 } func (m *UniformParams) GetInterarrivalHi() float64 { if m != nil { return m.InterarrivalHi } return 0 } type DeterministicParams struct { OfferedLoad float64 `protobuf:"fixed64,1,opt,name=offered_load,json=offeredLoad,proto3" json:"offered_load,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *DeterministicParams) Reset() { *m = DeterministicParams{} } func (m *DeterministicParams) String() string { return proto.CompactTextString(m) } func (*DeterministicParams) ProtoMessage() {} func (*DeterministicParams) Descriptor() ([]byte, []int) { return fileDescriptor_control_63d6a60a9ad7e299, []int{2} } func (m *DeterministicParams) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_DeterministicParams.Unmarshal(m, b) } func (m *DeterministicParams) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return 
xxx_messageInfo_DeterministicParams.Marshal(b, m, deterministic) } func (dst *DeterministicParams) XXX_Merge(src proto.Message) { xxx_messageInfo_DeterministicParams.Merge(dst, src) } func (m *DeterministicParams) XXX_Size() int { return xxx_messageInfo_DeterministicParams.Size(m) } func (m *DeterministicParams) XXX_DiscardUnknown() { xxx_messageInfo_DeterministicParams.DiscardUnknown(m) } var xxx_messageInfo_DeterministicParams proto.InternalMessageInfo func (m *DeterministicParams) GetOfferedLoad() float64 { if m != nil { return m.OfferedLoad } return 0 } type ParetoParams struct { InterarrivalBase float64 `protobuf:"fixed64,1,opt,name=interarrival_base,json=interarrivalBase,proto3" json:"interarrival_base,omitempty"` Alpha float64 `protobuf:"fixed64,2,opt,name=alpha,proto3" json:"alpha,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ParetoParams) Reset() { *m = ParetoParams{} } func (m *ParetoParams) String() string { return proto.CompactTextString(m) } func (*ParetoParams) ProtoMessage() {} func (*ParetoParams) Descriptor() ([]byte, []int) { return fileDescriptor_control_63d6a60a9ad7e299, []int{3} } func (m *ParetoParams) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ParetoParams.Unmarshal(m, b) } func (m *ParetoParams) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ParetoParams.Marshal(b, m, deterministic) } func (dst *ParetoParams) XXX_Merge(src proto.Message) { xxx_messageInfo_ParetoParams.Merge(dst, src) } func (m *ParetoParams) XXX_Size() int { return xxx_messageInfo_ParetoParams.Size(m) } func (m *ParetoParams) XXX_DiscardUnknown() { xxx_messageInfo_ParetoParams.DiscardUnknown(m) } var xxx_messageInfo_ParetoParams proto.InternalMessageInfo func (m *ParetoParams) GetInterarrivalBase() float64 { if m != nil { return m.InterarrivalBase } return 0 } func (m *ParetoParams) GetAlpha() float64 { if m != nil { return m.Alpha } return 0 } // Once an RPC finishes, immediately start a new one. // No configuration parameters needed. 
type ClosedLoopParams struct { XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ClosedLoopParams) Reset() { *m = ClosedLoopParams{} } func (m *ClosedLoopParams) String() string { return proto.CompactTextString(m) } func (*ClosedLoopParams) ProtoMessage() {} func (*ClosedLoopParams) Descriptor() ([]byte, []int) { return fileDescriptor_control_63d6a60a9ad7e299, []int{4} } func (m *ClosedLoopParams) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ClosedLoopParams.Unmarshal(m, b) } func (m *ClosedLoopParams) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ClosedLoopParams.Marshal(b, m, deterministic) } func (dst *ClosedLoopParams) XXX_Merge(src proto.Message) { xxx_messageInfo_ClosedLoopParams.Merge(dst, src) } func (m *ClosedLoopParams) XXX_Size() int { return xxx_messageInfo_ClosedLoopParams.Size(m) } func (m *ClosedLoopParams) XXX_DiscardUnknown() { xxx_messageInfo_ClosedLoopParams.DiscardUnknown(m) } var xxx_messageInfo_ClosedLoopParams proto.InternalMessageInfo type LoadParams struct { // Types that are valid to be assigned to Load: // *LoadParams_ClosedLoop // *LoadParams_Poisson // *LoadParams_Uniform // *LoadParams_Determ // *LoadParams_Pareto Load isLoadParams_Load `protobuf_oneof:"load"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *LoadParams) Reset() { *m = LoadParams{} } func (m *LoadParams) String() string { return proto.CompactTextString(m) } func (*LoadParams) ProtoMessage() {} func (*LoadParams) Descriptor() ([]byte, []int) { return fileDescriptor_control_63d6a60a9ad7e299, []int{5} } func (m *LoadParams) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_LoadParams.Unmarshal(m, b) } func (m *LoadParams) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_LoadParams.Marshal(b, m, deterministic) } func (dst *LoadParams) XXX_Merge(src proto.Message) { xxx_messageInfo_LoadParams.Merge(dst, src) } func (m *LoadParams) XXX_Size() int { return xxx_messageInfo_LoadParams.Size(m) } func (m *LoadParams) XXX_DiscardUnknown() { xxx_messageInfo_LoadParams.DiscardUnknown(m) } var xxx_messageInfo_LoadParams proto.InternalMessageInfo type isLoadParams_Load interface { isLoadParams_Load() } type LoadParams_ClosedLoop struct { ClosedLoop *ClosedLoopParams `protobuf:"bytes,1,opt,name=closed_loop,json=closedLoop,proto3,oneof"` } type LoadParams_Poisson struct { Poisson *PoissonParams `protobuf:"bytes,2,opt,name=poisson,proto3,oneof"` } type LoadParams_Uniform struct { Uniform *UniformParams `protobuf:"bytes,3,opt,name=uniform,proto3,oneof"` } type LoadParams_Determ struct { Determ *DeterministicParams `protobuf:"bytes,4,opt,name=determ,proto3,oneof"` } type LoadParams_Pareto struct { Pareto *ParetoParams `protobuf:"bytes,5,opt,name=pareto,proto3,oneof"` } func (*LoadParams_ClosedLoop) isLoadParams_Load() {} func (*LoadParams_Poisson) isLoadParams_Load() {} func (*LoadParams_Uniform) isLoadParams_Load() {} func (*LoadParams_Determ) isLoadParams_Load() {} func (*LoadParams_Pareto) isLoadParams_Load() {} func (m *LoadParams) GetLoad() isLoadParams_Load { if m != nil { return m.Load } return nil } func (m *LoadParams) GetClosedLoop() *ClosedLoopParams { if x, ok := m.GetLoad().(*LoadParams_ClosedLoop); ok { return x.ClosedLoop } return nil } func (m *LoadParams) GetPoisson() *PoissonParams { if x, ok := m.GetLoad().(*LoadParams_Poisson); ok { return x.Poisson } return nil } func (m 
*LoadParams) GetUniform() *UniformParams { if x, ok := m.GetLoad().(*LoadParams_Uniform); ok { return x.Uniform } return nil } func (m *LoadParams) GetDeterm() *DeterministicParams { if x, ok := m.GetLoad().(*LoadParams_Determ); ok { return x.Determ } return nil } func (m *LoadParams) GetPareto() *ParetoParams { if x, ok := m.GetLoad().(*LoadParams_Pareto); ok { return x.Pareto } return nil } // XXX_OneofFuncs is for the internal use of the proto package. func (*LoadParams) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _LoadParams_OneofMarshaler, _LoadParams_OneofUnmarshaler, _LoadParams_OneofSizer, []interface{}{ (*LoadParams_ClosedLoop)(nil), (*LoadParams_Poisson)(nil), (*LoadParams_Uniform)(nil), (*LoadParams_Determ)(nil), (*LoadParams_Pareto)(nil), } } func _LoadParams_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*LoadParams) // load switch x := m.Load.(type) { case *LoadParams_ClosedLoop: b.EncodeVarint(1<<3 | proto.WireBytes) if err := b.EncodeMessage(x.ClosedLoop); err != nil { return err } case *LoadParams_Poisson: b.EncodeVarint(2<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Poisson); err != nil { return err } case *LoadParams_Uniform: b.EncodeVarint(3<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Uniform); err != nil { return err } case *LoadParams_Determ: b.EncodeVarint(4<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Determ); err != nil { return err } case *LoadParams_Pareto: b.EncodeVarint(5<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Pareto); err != nil { return err } case nil: default: return fmt.Errorf("LoadParams.Load has unexpected type %T", x) } return nil } func _LoadParams_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*LoadParams) switch tag { case 1: // load.closed_loop if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(ClosedLoopParams) err := b.DecodeMessage(msg) m.Load = &LoadParams_ClosedLoop{msg} return true, err case 2: // load.poisson if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(PoissonParams) err := b.DecodeMessage(msg) m.Load = &LoadParams_Poisson{msg} return true, err case 3: // load.uniform if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(UniformParams) err := b.DecodeMessage(msg) m.Load = &LoadParams_Uniform{msg} return true, err case 4: // load.determ if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(DeterministicParams) err := b.DecodeMessage(msg) m.Load = &LoadParams_Determ{msg} return true, err case 5: // load.pareto if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(ParetoParams) err := b.DecodeMessage(msg) m.Load = &LoadParams_Pareto{msg} return true, err default: return false, nil } } func _LoadParams_OneofSizer(msg proto.Message) (n int) { m := msg.(*LoadParams) // load switch x := m.Load.(type) { case *LoadParams_ClosedLoop: s := proto.Size(x.ClosedLoop) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *LoadParams_Poisson: s := proto.Size(x.Poisson) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *LoadParams_Uniform: s := proto.Size(x.Uniform) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *LoadParams_Determ: s := proto.Size(x.Determ) n += 1 // tag and wire n += 
proto.SizeVarint(uint64(s)) n += s case *LoadParams_Pareto: s := proto.Size(x.Pareto) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } // presence of SecurityParams implies use of TLS type SecurityParams struct { UseTestCa bool `protobuf:"varint,1,opt,name=use_test_ca,json=useTestCa,proto3" json:"use_test_ca,omitempty"` ServerHostOverride string `protobuf:"bytes,2,opt,name=server_host_override,json=serverHostOverride,proto3" json:"server_host_override,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *SecurityParams) Reset() { *m = SecurityParams{} } func (m *SecurityParams) String() string { return proto.CompactTextString(m) } func (*SecurityParams) ProtoMessage() {} func (*SecurityParams) Descriptor() ([]byte, []int) { return fileDescriptor_control_63d6a60a9ad7e299, []int{6} } func (m *SecurityParams) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_SecurityParams.Unmarshal(m, b) } func (m *SecurityParams) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_SecurityParams.Marshal(b, m, deterministic) } func (dst *SecurityParams) XXX_Merge(src proto.Message) { xxx_messageInfo_SecurityParams.Merge(dst, src) } func (m *SecurityParams) XXX_Size() int { return xxx_messageInfo_SecurityParams.Size(m) } func (m *SecurityParams) XXX_DiscardUnknown() { xxx_messageInfo_SecurityParams.DiscardUnknown(m) } var xxx_messageInfo_SecurityParams proto.InternalMessageInfo func (m *SecurityParams) GetUseTestCa() bool { if m != nil { return m.UseTestCa } return false } func (m *SecurityParams) GetServerHostOverride() string { if m != nil { return m.ServerHostOverride } return "" } type ClientConfig struct { // List of targets to connect to. At least one target needs to be specified. ServerTargets []string `protobuf:"bytes,1,rep,name=server_targets,json=serverTargets,proto3" json:"server_targets,omitempty"` ClientType ClientType `protobuf:"varint,2,opt,name=client_type,json=clientType,proto3,enum=grpc.testing.ClientType" json:"client_type,omitempty"` SecurityParams *SecurityParams `protobuf:"bytes,3,opt,name=security_params,json=securityParams,proto3" json:"security_params,omitempty"` // How many concurrent RPCs to start for each channel. // For synchronous client, use a separate thread for each outstanding RPC. OutstandingRpcsPerChannel int32 `protobuf:"varint,4,opt,name=outstanding_rpcs_per_channel,json=outstandingRpcsPerChannel,proto3" json:"outstanding_rpcs_per_channel,omitempty"` // Number of independent client channels to create. // i-th channel will connect to server_target[i % server_targets.size()] ClientChannels int32 `protobuf:"varint,5,opt,name=client_channels,json=clientChannels,proto3" json:"client_channels,omitempty"` // Only for async client. Number of threads to use to start/manage RPCs. AsyncClientThreads int32 `protobuf:"varint,7,opt,name=async_client_threads,json=asyncClientThreads,proto3" json:"async_client_threads,omitempty"` RpcType RpcType `protobuf:"varint,8,opt,name=rpc_type,json=rpcType,proto3,enum=grpc.testing.RpcType" json:"rpc_type,omitempty"` // The requested load for the entire client (aggregated over all the threads). 
LoadParams *LoadParams `protobuf:"bytes,10,opt,name=load_params,json=loadParams,proto3" json:"load_params,omitempty"` PayloadConfig *PayloadConfig `protobuf:"bytes,11,opt,name=payload_config,json=payloadConfig,proto3" json:"payload_config,omitempty"` HistogramParams *HistogramParams `protobuf:"bytes,12,opt,name=histogram_params,json=histogramParams,proto3" json:"histogram_params,omitempty"` // Specify the cores we should run the client on, if desired CoreList []int32 `protobuf:"varint,13,rep,packed,name=core_list,json=coreList,proto3" json:"core_list,omitempty"` CoreLimit int32 `protobuf:"varint,14,opt,name=core_limit,json=coreLimit,proto3" json:"core_limit,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ClientConfig) Reset() { *m = ClientConfig{} } func (m *ClientConfig) String() string { return proto.CompactTextString(m) } func (*ClientConfig) ProtoMessage() {} func (*ClientConfig) Descriptor() ([]byte, []int) { return fileDescriptor_control_63d6a60a9ad7e299, []int{7} } func (m *ClientConfig) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ClientConfig.Unmarshal(m, b) } func (m *ClientConfig) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ClientConfig.Marshal(b, m, deterministic) } func (dst *ClientConfig) XXX_Merge(src proto.Message) { xxx_messageInfo_ClientConfig.Merge(dst, src) } func (m *ClientConfig) XXX_Size() int { return xxx_messageInfo_ClientConfig.Size(m) } func (m *ClientConfig) XXX_DiscardUnknown() { xxx_messageInfo_ClientConfig.DiscardUnknown(m) } var xxx_messageInfo_ClientConfig proto.InternalMessageInfo func (m *ClientConfig) GetServerTargets() []string { if m != nil { return m.ServerTargets } return nil } func (m *ClientConfig) GetClientType() ClientType { if m != nil { return m.ClientType } return ClientType_SYNC_CLIENT } func (m *ClientConfig) GetSecurityParams() *SecurityParams { if m != nil { return m.SecurityParams } return nil } func (m *ClientConfig) GetOutstandingRpcsPerChannel() int32 { if m != nil { return m.OutstandingRpcsPerChannel } return 0 } func (m *ClientConfig) GetClientChannels() int32 { if m != nil { return m.ClientChannels } return 0 } func (m *ClientConfig) GetAsyncClientThreads() int32 { if m != nil { return m.AsyncClientThreads } return 0 } func (m *ClientConfig) GetRpcType() RpcType { if m != nil { return m.RpcType } return RpcType_UNARY } func (m *ClientConfig) GetLoadParams() *LoadParams { if m != nil { return m.LoadParams } return nil } func (m *ClientConfig) GetPayloadConfig() *PayloadConfig { if m != nil { return m.PayloadConfig } return nil } func (m *ClientConfig) GetHistogramParams() *HistogramParams { if m != nil { return m.HistogramParams } return nil } func (m *ClientConfig) GetCoreList() []int32 { if m != nil { return m.CoreList } return nil } func (m *ClientConfig) GetCoreLimit() int32 { if m != nil { return m.CoreLimit } return 0 } type ClientStatus struct { Stats *ClientStats `protobuf:"bytes,1,opt,name=stats,proto3" json:"stats,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ClientStatus) Reset() { *m = ClientStatus{} } func (m *ClientStatus) String() string { return proto.CompactTextString(m) } func (*ClientStatus) ProtoMessage() {} func (*ClientStatus) Descriptor() ([]byte, []int) { return fileDescriptor_control_63d6a60a9ad7e299, []int{8} } func (m *ClientStatus) XXX_Unmarshal(b []byte) error { return 
xxx_messageInfo_ClientStatus.Unmarshal(m, b) } func (m *ClientStatus) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ClientStatus.Marshal(b, m, deterministic) } func (dst *ClientStatus) XXX_Merge(src proto.Message) { xxx_messageInfo_ClientStatus.Merge(dst, src) } func (m *ClientStatus) XXX_Size() int { return xxx_messageInfo_ClientStatus.Size(m) } func (m *ClientStatus) XXX_DiscardUnknown() { xxx_messageInfo_ClientStatus.DiscardUnknown(m) } var xxx_messageInfo_ClientStatus proto.InternalMessageInfo func (m *ClientStatus) GetStats() *ClientStats { if m != nil { return m.Stats } return nil } // Request current stats type Mark struct { // if true, the stats will be reset after taking their snapshot. Reset_ bool `protobuf:"varint,1,opt,name=reset,proto3" json:"reset,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Mark) Reset() { *m = Mark{} } func (m *Mark) String() string { return proto.CompactTextString(m) } func (*Mark) ProtoMessage() {} func (*Mark) Descriptor() ([]byte, []int) { return fileDescriptor_control_63d6a60a9ad7e299, []int{9} } func (m *Mark) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Mark.Unmarshal(m, b) } func (m *Mark) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Mark.Marshal(b, m, deterministic) } func (dst *Mark) XXX_Merge(src proto.Message) { xxx_messageInfo_Mark.Merge(dst, src) } func (m *Mark) XXX_Size() int { return xxx_messageInfo_Mark.Size(m) } func (m *Mark) XXX_DiscardUnknown() { xxx_messageInfo_Mark.DiscardUnknown(m) } var xxx_messageInfo_Mark proto.InternalMessageInfo func (m *Mark) GetReset_() bool { if m != nil { return m.Reset_ } return false } type ClientArgs struct { // Types that are valid to be assigned to Argtype: // *ClientArgs_Setup // *ClientArgs_Mark Argtype isClientArgs_Argtype `protobuf_oneof:"argtype"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ClientArgs) Reset() { *m = ClientArgs{} } func (m *ClientArgs) String() string { return proto.CompactTextString(m) } func (*ClientArgs) ProtoMessage() {} func (*ClientArgs) Descriptor() ([]byte, []int) { return fileDescriptor_control_63d6a60a9ad7e299, []int{10} } func (m *ClientArgs) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ClientArgs.Unmarshal(m, b) } func (m *ClientArgs) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ClientArgs.Marshal(b, m, deterministic) } func (dst *ClientArgs) XXX_Merge(src proto.Message) { xxx_messageInfo_ClientArgs.Merge(dst, src) } func (m *ClientArgs) XXX_Size() int { return xxx_messageInfo_ClientArgs.Size(m) } func (m *ClientArgs) XXX_DiscardUnknown() { xxx_messageInfo_ClientArgs.DiscardUnknown(m) } var xxx_messageInfo_ClientArgs proto.InternalMessageInfo type isClientArgs_Argtype interface { isClientArgs_Argtype() } type ClientArgs_Setup struct { Setup *ClientConfig `protobuf:"bytes,1,opt,name=setup,proto3,oneof"` } type ClientArgs_Mark struct { Mark *Mark `protobuf:"bytes,2,opt,name=mark,proto3,oneof"` } func (*ClientArgs_Setup) isClientArgs_Argtype() {} func (*ClientArgs_Mark) isClientArgs_Argtype() {} func (m *ClientArgs) GetArgtype() isClientArgs_Argtype { if m != nil { return m.Argtype } return nil } func (m *ClientArgs) GetSetup() *ClientConfig { if x, ok := m.GetArgtype().(*ClientArgs_Setup); ok { return x.Setup } return nil } func (m *ClientArgs) GetMark() *Mark { if x, ok := 
m.GetArgtype().(*ClientArgs_Mark); ok { return x.Mark } return nil } // XXX_OneofFuncs is for the internal use of the proto package. func (*ClientArgs) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _ClientArgs_OneofMarshaler, _ClientArgs_OneofUnmarshaler, _ClientArgs_OneofSizer, []interface{}{ (*ClientArgs_Setup)(nil), (*ClientArgs_Mark)(nil), } } func _ClientArgs_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*ClientArgs) // argtype switch x := m.Argtype.(type) { case *ClientArgs_Setup: b.EncodeVarint(1<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Setup); err != nil { return err } case *ClientArgs_Mark: b.EncodeVarint(2<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Mark); err != nil { return err } case nil: default: return fmt.Errorf("ClientArgs.Argtype has unexpected type %T", x) } return nil } func _ClientArgs_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*ClientArgs) switch tag { case 1: // argtype.setup if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(ClientConfig) err := b.DecodeMessage(msg) m.Argtype = &ClientArgs_Setup{msg} return true, err case 2: // argtype.mark if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(Mark) err := b.DecodeMessage(msg) m.Argtype = &ClientArgs_Mark{msg} return true, err default: return false, nil } } func _ClientArgs_OneofSizer(msg proto.Message) (n int) { m := msg.(*ClientArgs) // argtype switch x := m.Argtype.(type) { case *ClientArgs_Setup: s := proto.Size(x.Setup) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *ClientArgs_Mark: s := proto.Size(x.Mark) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } type ServerConfig struct { ServerType ServerType `protobuf:"varint,1,opt,name=server_type,json=serverType,proto3,enum=grpc.testing.ServerType" json:"server_type,omitempty"` SecurityParams *SecurityParams `protobuf:"bytes,2,opt,name=security_params,json=securityParams,proto3" json:"security_params,omitempty"` // Port on which to listen. Zero means pick unused port. Port int32 `protobuf:"varint,4,opt,name=port,proto3" json:"port,omitempty"` // Only for async server. Number of threads used to serve the requests. 
AsyncServerThreads int32 `protobuf:"varint,7,opt,name=async_server_threads,json=asyncServerThreads,proto3" json:"async_server_threads,omitempty"` // Specify the number of cores to limit server to, if desired CoreLimit int32 `protobuf:"varint,8,opt,name=core_limit,json=coreLimit,proto3" json:"core_limit,omitempty"` // payload config, used in generic server PayloadConfig *PayloadConfig `protobuf:"bytes,9,opt,name=payload_config,json=payloadConfig,proto3" json:"payload_config,omitempty"` // Specify the cores we should run the server on, if desired CoreList []int32 `protobuf:"varint,10,rep,packed,name=core_list,json=coreList,proto3" json:"core_list,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ServerConfig) Reset() { *m = ServerConfig{} } func (m *ServerConfig) String() string { return proto.CompactTextString(m) } func (*ServerConfig) ProtoMessage() {} func (*ServerConfig) Descriptor() ([]byte, []int) { return fileDescriptor_control_63d6a60a9ad7e299, []int{11} } func (m *ServerConfig) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ServerConfig.Unmarshal(m, b) } func (m *ServerConfig) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ServerConfig.Marshal(b, m, deterministic) } func (dst *ServerConfig) XXX_Merge(src proto.Message) { xxx_messageInfo_ServerConfig.Merge(dst, src) } func (m *ServerConfig) XXX_Size() int { return xxx_messageInfo_ServerConfig.Size(m) } func (m *ServerConfig) XXX_DiscardUnknown() { xxx_messageInfo_ServerConfig.DiscardUnknown(m) } var xxx_messageInfo_ServerConfig proto.InternalMessageInfo func (m *ServerConfig) GetServerType() ServerType { if m != nil { return m.ServerType } return ServerType_SYNC_SERVER } func (m *ServerConfig) GetSecurityParams() *SecurityParams { if m != nil { return m.SecurityParams } return nil } func (m *ServerConfig) GetPort() int32 { if m != nil { return m.Port } return 0 } func (m *ServerConfig) GetAsyncServerThreads() int32 { if m != nil { return m.AsyncServerThreads } return 0 } func (m *ServerConfig) GetCoreLimit() int32 { if m != nil { return m.CoreLimit } return 0 } func (m *ServerConfig) GetPayloadConfig() *PayloadConfig { if m != nil { return m.PayloadConfig } return nil } func (m *ServerConfig) GetCoreList() []int32 { if m != nil { return m.CoreList } return nil } type ServerArgs struct { // Types that are valid to be assigned to Argtype: // *ServerArgs_Setup // *ServerArgs_Mark Argtype isServerArgs_Argtype `protobuf_oneof:"argtype"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ServerArgs) Reset() { *m = ServerArgs{} } func (m *ServerArgs) String() string { return proto.CompactTextString(m) } func (*ServerArgs) ProtoMessage() {} func (*ServerArgs) Descriptor() ([]byte, []int) { return fileDescriptor_control_63d6a60a9ad7e299, []int{12} } func (m *ServerArgs) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ServerArgs.Unmarshal(m, b) } func (m *ServerArgs) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ServerArgs.Marshal(b, m, deterministic) } func (dst *ServerArgs) XXX_Merge(src proto.Message) { xxx_messageInfo_ServerArgs.Merge(dst, src) } func (m *ServerArgs) XXX_Size() int { return xxx_messageInfo_ServerArgs.Size(m) } func (m *ServerArgs) XXX_DiscardUnknown() { xxx_messageInfo_ServerArgs.DiscardUnknown(m) } var xxx_messageInfo_ServerArgs proto.InternalMessageInfo type isServerArgs_Argtype interface { 
isServerArgs_Argtype() } type ServerArgs_Setup struct { Setup *ServerConfig `protobuf:"bytes,1,opt,name=setup,proto3,oneof"` } type ServerArgs_Mark struct { Mark *Mark `protobuf:"bytes,2,opt,name=mark,proto3,oneof"` } func (*ServerArgs_Setup) isServerArgs_Argtype() {} func (*ServerArgs_Mark) isServerArgs_Argtype() {} func (m *ServerArgs) GetArgtype() isServerArgs_Argtype { if m != nil { return m.Argtype } return nil } func (m *ServerArgs) GetSetup() *ServerConfig { if x, ok := m.GetArgtype().(*ServerArgs_Setup); ok { return x.Setup } return nil } func (m *ServerArgs) GetMark() *Mark { if x, ok := m.GetArgtype().(*ServerArgs_Mark); ok { return x.Mark } return nil } // XXX_OneofFuncs is for the internal use of the proto package. func (*ServerArgs) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _ServerArgs_OneofMarshaler, _ServerArgs_OneofUnmarshaler, _ServerArgs_OneofSizer, []interface{}{ (*ServerArgs_Setup)(nil), (*ServerArgs_Mark)(nil), } } func _ServerArgs_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*ServerArgs) // argtype switch x := m.Argtype.(type) { case *ServerArgs_Setup: b.EncodeVarint(1<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Setup); err != nil { return err } case *ServerArgs_Mark: b.EncodeVarint(2<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Mark); err != nil { return err } case nil: default: return fmt.Errorf("ServerArgs.Argtype has unexpected type %T", x) } return nil } func _ServerArgs_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*ServerArgs) switch tag { case 1: // argtype.setup if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(ServerConfig) err := b.DecodeMessage(msg) m.Argtype = &ServerArgs_Setup{msg} return true, err case 2: // argtype.mark if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(Mark) err := b.DecodeMessage(msg) m.Argtype = &ServerArgs_Mark{msg} return true, err default: return false, nil } } func _ServerArgs_OneofSizer(msg proto.Message) (n int) { m := msg.(*ServerArgs) // argtype switch x := m.Argtype.(type) { case *ServerArgs_Setup: s := proto.Size(x.Setup) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *ServerArgs_Mark: s := proto.Size(x.Mark) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } type ServerStatus struct { Stats *ServerStats `protobuf:"bytes,1,opt,name=stats,proto3" json:"stats,omitempty"` // the port bound by the server Port int32 `protobuf:"varint,2,opt,name=port,proto3" json:"port,omitempty"` // Number of cores available to the server Cores int32 `protobuf:"varint,3,opt,name=cores,proto3" json:"cores,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ServerStatus) Reset() { *m = ServerStatus{} } func (m *ServerStatus) String() string { return proto.CompactTextString(m) } func (*ServerStatus) ProtoMessage() {} func (*ServerStatus) Descriptor() ([]byte, []int) { return fileDescriptor_control_63d6a60a9ad7e299, []int{13} } func (m *ServerStatus) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ServerStatus.Unmarshal(m, b) } func (m *ServerStatus) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return 
xxx_messageInfo_ServerStatus.Marshal(b, m, deterministic) } func (dst *ServerStatus) XXX_Merge(src proto.Message) { xxx_messageInfo_ServerStatus.Merge(dst, src) } func (m *ServerStatus) XXX_Size() int { return xxx_messageInfo_ServerStatus.Size(m) } func (m *ServerStatus) XXX_DiscardUnknown() { xxx_messageInfo_ServerStatus.DiscardUnknown(m) } var xxx_messageInfo_ServerStatus proto.InternalMessageInfo func (m *ServerStatus) GetStats() *ServerStats { if m != nil { return m.Stats } return nil } func (m *ServerStatus) GetPort() int32 { if m != nil { return m.Port } return 0 } func (m *ServerStatus) GetCores() int32 { if m != nil { return m.Cores } return 0 } type CoreRequest struct { XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *CoreRequest) Reset() { *m = CoreRequest{} } func (m *CoreRequest) String() string { return proto.CompactTextString(m) } func (*CoreRequest) ProtoMessage() {} func (*CoreRequest) Descriptor() ([]byte, []int) { return fileDescriptor_control_63d6a60a9ad7e299, []int{14} } func (m *CoreRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_CoreRequest.Unmarshal(m, b) } func (m *CoreRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_CoreRequest.Marshal(b, m, deterministic) } func (dst *CoreRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_CoreRequest.Merge(dst, src) } func (m *CoreRequest) XXX_Size() int { return xxx_messageInfo_CoreRequest.Size(m) } func (m *CoreRequest) XXX_DiscardUnknown() { xxx_messageInfo_CoreRequest.DiscardUnknown(m) } var xxx_messageInfo_CoreRequest proto.InternalMessageInfo type CoreResponse struct { // Number of cores available on the server Cores int32 `protobuf:"varint,1,opt,name=cores,proto3" json:"cores,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *CoreResponse) Reset() { *m = CoreResponse{} } func (m *CoreResponse) String() string { return proto.CompactTextString(m) } func (*CoreResponse) ProtoMessage() {} func (*CoreResponse) Descriptor() ([]byte, []int) { return fileDescriptor_control_63d6a60a9ad7e299, []int{15} } func (m *CoreResponse) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_CoreResponse.Unmarshal(m, b) } func (m *CoreResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_CoreResponse.Marshal(b, m, deterministic) } func (dst *CoreResponse) XXX_Merge(src proto.Message) { xxx_messageInfo_CoreResponse.Merge(dst, src) } func (m *CoreResponse) XXX_Size() int { return xxx_messageInfo_CoreResponse.Size(m) } func (m *CoreResponse) XXX_DiscardUnknown() { xxx_messageInfo_CoreResponse.DiscardUnknown(m) } var xxx_messageInfo_CoreResponse proto.InternalMessageInfo func (m *CoreResponse) GetCores() int32 { if m != nil { return m.Cores } return 0 } type Void struct { XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Void) Reset() { *m = Void{} } func (m *Void) String() string { return proto.CompactTextString(m) } func (*Void) ProtoMessage() {} func (*Void) Descriptor() ([]byte, []int) { return fileDescriptor_control_63d6a60a9ad7e299, []int{16} } func (m *Void) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Void.Unmarshal(m, b) } func (m *Void) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Void.Marshal(b, m, deterministic) } func (dst *Void) XXX_Merge(src proto.Message) { 
xxx_messageInfo_Void.Merge(dst, src) } func (m *Void) XXX_Size() int { return xxx_messageInfo_Void.Size(m) } func (m *Void) XXX_DiscardUnknown() { xxx_messageInfo_Void.DiscardUnknown(m) } var xxx_messageInfo_Void proto.InternalMessageInfo // A single performance scenario: input to qps_json_driver type Scenario struct { // Human readable name for this scenario Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` // Client configuration ClientConfig *ClientConfig `protobuf:"bytes,2,opt,name=client_config,json=clientConfig,proto3" json:"client_config,omitempty"` // Number of clients to start for the test NumClients int32 `protobuf:"varint,3,opt,name=num_clients,json=numClients,proto3" json:"num_clients,omitempty"` // Server configuration ServerConfig *ServerConfig `protobuf:"bytes,4,opt,name=server_config,json=serverConfig,proto3" json:"server_config,omitempty"` // Number of servers to start for the test NumServers int32 `protobuf:"varint,5,opt,name=num_servers,json=numServers,proto3" json:"num_servers,omitempty"` // Warmup period, in seconds WarmupSeconds int32 `protobuf:"varint,6,opt,name=warmup_seconds,json=warmupSeconds,proto3" json:"warmup_seconds,omitempty"` // Benchmark time, in seconds BenchmarkSeconds int32 `protobuf:"varint,7,opt,name=benchmark_seconds,json=benchmarkSeconds,proto3" json:"benchmark_seconds,omitempty"` // Number of workers to spawn locally (usually zero) SpawnLocalWorkerCount int32 `protobuf:"varint,8,opt,name=spawn_local_worker_count,json=spawnLocalWorkerCount,proto3" json:"spawn_local_worker_count,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Scenario) Reset() { *m = Scenario{} } func (m *Scenario) String() string { return proto.CompactTextString(m) } func (*Scenario) ProtoMessage() {} func (*Scenario) Descriptor() ([]byte, []int) { return fileDescriptor_control_63d6a60a9ad7e299, []int{17} } func (m *Scenario) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Scenario.Unmarshal(m, b) } func (m *Scenario) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Scenario.Marshal(b, m, deterministic) } func (dst *Scenario) XXX_Merge(src proto.Message) { xxx_messageInfo_Scenario.Merge(dst, src) } func (m *Scenario) XXX_Size() int { return xxx_messageInfo_Scenario.Size(m) } func (m *Scenario) XXX_DiscardUnknown() { xxx_messageInfo_Scenario.DiscardUnknown(m) } var xxx_messageInfo_Scenario proto.InternalMessageInfo func (m *Scenario) GetName() string { if m != nil { return m.Name } return "" } func (m *Scenario) GetClientConfig() *ClientConfig { if m != nil { return m.ClientConfig } return nil } func (m *Scenario) GetNumClients() int32 { if m != nil { return m.NumClients } return 0 } func (m *Scenario) GetServerConfig() *ServerConfig { if m != nil { return m.ServerConfig } return nil } func (m *Scenario) GetNumServers() int32 { if m != nil { return m.NumServers } return 0 } func (m *Scenario) GetWarmupSeconds() int32 { if m != nil { return m.WarmupSeconds } return 0 } func (m *Scenario) GetBenchmarkSeconds() int32 { if m != nil { return m.BenchmarkSeconds } return 0 } func (m *Scenario) GetSpawnLocalWorkerCount() int32 { if m != nil { return m.SpawnLocalWorkerCount } return 0 } // A set of scenarios to be run with qps_json_driver type Scenarios struct { Scenarios []*Scenario `protobuf:"bytes,1,rep,name=scenarios,proto3" json:"scenarios,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache 
int32 `json:"-"` } func (m *Scenarios) Reset() { *m = Scenarios{} } func (m *Scenarios) String() string { return proto.CompactTextString(m) } func (*Scenarios) ProtoMessage() {} func (*Scenarios) Descriptor() ([]byte, []int) { return fileDescriptor_control_63d6a60a9ad7e299, []int{18} } func (m *Scenarios) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Scenarios.Unmarshal(m, b) } func (m *Scenarios) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Scenarios.Marshal(b, m, deterministic) } func (dst *Scenarios) XXX_Merge(src proto.Message) { xxx_messageInfo_Scenarios.Merge(dst, src) } func (m *Scenarios) XXX_Size() int { return xxx_messageInfo_Scenarios.Size(m) } func (m *Scenarios) XXX_DiscardUnknown() { xxx_messageInfo_Scenarios.DiscardUnknown(m) } var xxx_messageInfo_Scenarios proto.InternalMessageInfo func (m *Scenarios) GetScenarios() []*Scenario { if m != nil { return m.Scenarios } return nil } func init() { proto.RegisterType((*PoissonParams)(nil), "grpc.testing.PoissonParams") proto.RegisterType((*UniformParams)(nil), "grpc.testing.UniformParams") proto.RegisterType((*DeterministicParams)(nil), "grpc.testing.DeterministicParams") proto.RegisterType((*ParetoParams)(nil), "grpc.testing.ParetoParams") proto.RegisterType((*ClosedLoopParams)(nil), "grpc.testing.ClosedLoopParams") proto.RegisterType((*LoadParams)(nil), "grpc.testing.LoadParams") proto.RegisterType((*SecurityParams)(nil), "grpc.testing.SecurityParams") proto.RegisterType((*ClientConfig)(nil), "grpc.testing.ClientConfig") proto.RegisterType((*ClientStatus)(nil), "grpc.testing.ClientStatus") proto.RegisterType((*Mark)(nil), "grpc.testing.Mark") proto.RegisterType((*ClientArgs)(nil), "grpc.testing.ClientArgs") proto.RegisterType((*ServerConfig)(nil), "grpc.testing.ServerConfig") proto.RegisterType((*ServerArgs)(nil), "grpc.testing.ServerArgs") proto.RegisterType((*ServerStatus)(nil), "grpc.testing.ServerStatus") proto.RegisterType((*CoreRequest)(nil), "grpc.testing.CoreRequest") proto.RegisterType((*CoreResponse)(nil), "grpc.testing.CoreResponse") proto.RegisterType((*Void)(nil), "grpc.testing.Void") proto.RegisterType((*Scenario)(nil), "grpc.testing.Scenario") proto.RegisterType((*Scenarios)(nil), "grpc.testing.Scenarios") proto.RegisterEnum("grpc.testing.ClientType", ClientType_name, ClientType_value) proto.RegisterEnum("grpc.testing.ServerType", ServerType_name, ServerType_value) proto.RegisterEnum("grpc.testing.RpcType", RpcType_name, RpcType_value) } func init() { proto.RegisterFile("control.proto", fileDescriptor_control_63d6a60a9ad7e299) } var fileDescriptor_control_63d6a60a9ad7e299 = []byte{ // 1179 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xa4, 0x56, 0x6f, 0x6f, 0xdb, 0xb6, 0x13, 0xb6, 0x1d, 0xdb, 0xb1, 0x4e, 0xb6, 0xe3, 0x1f, 0x7f, 0xe9, 0xa0, 0xa6, 0x69, 0x97, 0x6a, 0x1b, 0x16, 0x64, 0x40, 0x5a, 0x78, 0x05, 0xba, 0x62, 0x2f, 0x02, 0xc7, 0x33, 0xea, 0x00, 0x69, 0x96, 0xd1, 0x69, 0x87, 0xbe, 0x12, 0x18, 0x99, 0xb1, 0x85, 0xc8, 0xa2, 0x46, 0x52, 0x09, 0xf2, 0x15, 0xf6, 0x99, 0xf6, 0x39, 0xf6, 0x35, 0xf6, 0x15, 0x06, 0xfe, 0x91, 0x23, 0xb9, 0x06, 0x9a, 0x6d, 0xef, 0xc4, 0xbb, 0xe7, 0xe1, 0x91, 0xf7, 0xdc, 0x1d, 0x05, 0x9d, 0x90, 0x25, 0x92, 0xb3, 0xf8, 0x30, 0xe5, 0x4c, 0x32, 0xd4, 0x9e, 0xf1, 0x34, 0x3c, 0x94, 0x54, 0xc8, 0x28, 0x99, 0xed, 0x74, 0x53, 0x72, 0x17, 0x33, 0x32, 0x15, 0xc6, 0xbb, 0xe3, 0x0a, 0x49, 0xa4, 0x5d, 0xf8, 0x7d, 0xe8, 0x9c, 0xb3, 0x48, 0x08, 0x96, 0x9c, 0x13, 0x4e, 0x16, 0x02, 0x3d, 0x87, 0x36, 0xbb, 
0xba, 0xa2, 0x9c, 0x4e, 0x03, 0x45, 0xf2, 0xaa, 0x7b, 0xd5, 0xfd, 0x2a, 0x76, 0xad, 0xed, 0x94, 0x91, 0xa9, 0x4f, 0xa0, 0xf3, 0x3e, 0x89, 0xae, 0x18, 0x5f, 0x58, 0xce, 0xb7, 0xb0, 0x15, 0x25, 0x92, 0x72, 0xc2, 0x79, 0x74, 0x43, 0xe2, 0x20, 0x66, 0x96, 0xd6, 0x2d, 0x9a, 0x4f, 0xd9, 0x27, 0xc0, 0x79, 0xe4, 0xd5, 0x3e, 0x05, 0x8e, 0x23, 0xff, 0x07, 0xf8, 0xff, 0x4f, 0x54, 0x52, 0xbe, 0x88, 0x92, 0x48, 0xc8, 0x28, 0x7c, 0xf8, 0xe1, 0x7e, 0x81, 0xf6, 0x39, 0xe1, 0x54, 0x32, 0x4b, 0xf9, 0x0e, 0xfe, 0x57, 0x0a, 0x79, 0x49, 0x04, 0xb5, 0xbc, 0x5e, 0xd1, 0x71, 0x4c, 0x04, 0x45, 0xdb, 0xd0, 0x20, 0x71, 0x3a, 0x27, 0xf6, 0x54, 0x66, 0xe1, 0x23, 0xe8, 0x0d, 0x63, 0x26, 0x54, 0x00, 0x96, 0x9a, 0x6d, 0xfd, 0x3f, 0x6a, 0x00, 0x2a, 0x9e, 0x8d, 0x32, 0x00, 0x37, 0xd4, 0x90, 0x20, 0x66, 0x2c, 0xd5, 0xfb, 0xbb, 0xfd, 0x67, 0x87, 0x45, 0x1d, 0x0e, 0x57, 0xf7, 0x18, 0x57, 0x30, 0x84, 0x4b, 0x1b, 0x7a, 0x0d, 0x9b, 0xa9, 0x51, 0x42, 0x47, 0x77, 0xfb, 0x4f, 0xca, 0xf4, 0x92, 0x4c, 0xe3, 0x0a, 0xce, 0xd1, 0x8a, 0x98, 0x19, 0x39, 0xbc, 0x8d, 0x75, 0xc4, 0x92, 0x56, 0x8a, 0x68, 0xd1, 0xe8, 0x47, 0x68, 0x4e, 0x75, 0x92, 0xbd, 0xba, 0xe6, 0x3d, 0x2f, 0xf3, 0xd6, 0x08, 0x30, 0xae, 0x60, 0x4b, 0x41, 0xaf, 0xa0, 0x99, 0xea, 0x3c, 0x7b, 0x0d, 0x4d, 0xde, 0x59, 0x39, 0x6d, 0x41, 0x03, 0xc5, 0x32, 0xd8, 0xe3, 0x26, 0xd4, 0x95, 0x70, 0xfe, 0x25, 0x74, 0x27, 0x34, 0xcc, 0x78, 0x24, 0xef, 0x6c, 0x06, 0x9f, 0x81, 0x9b, 0x09, 0x1a, 0x28, 0x7e, 0x10, 0x12, 0x9d, 0xc1, 0x16, 0x76, 0x32, 0x41, 0x2f, 0xa8, 0x90, 0x43, 0x82, 0x5e, 0xc2, 0xb6, 0xa0, 0xfc, 0x86, 0xf2, 0x60, 0xce, 0x84, 0x0c, 0xd8, 0x0d, 0xe5, 0x3c, 0x9a, 0x52, 0x9d, 0x2b, 0x07, 0x23, 0xe3, 0x1b, 0x33, 0x21, 0x7f, 0xb6, 0x1e, 0xff, 0xf7, 0x06, 0xb4, 0x87, 0x71, 0x44, 0x13, 0x39, 0x64, 0xc9, 0x55, 0x34, 0x43, 0xdf, 0x40, 0xd7, 0x6e, 0x21, 0x09, 0x9f, 0x51, 0x29, 0xbc, 0xea, 0xde, 0xc6, 0xbe, 0x83, 0x3b, 0xc6, 0x7a, 0x61, 0x8c, 0xe8, 0x8d, 0xd2, 0x52, 0xd1, 0x02, 0x79, 0x97, 0x9a, 0x00, 0xdd, 0xbe, 0xb7, 0xaa, 0xa5, 0x02, 0x5c, 0xdc, 0xa5, 0x54, 0x69, 0x98, 0x7f, 0xa3, 0x11, 0x6c, 0x09, 0x7b, 0xad, 0x20, 0xd5, 0xf7, 0xb2, 0x92, 0xec, 0x96, 0xe9, 0xe5, 0xbb, 0xe3, 0xae, 0x28, 0xe7, 0xe2, 0x08, 0x76, 0x59, 0x26, 0x85, 0x24, 0xc9, 0x34, 0x4a, 0x66, 0x01, 0x4f, 0x43, 0x11, 0xa4, 0x94, 0x07, 0xe1, 0x9c, 0x24, 0x09, 0x8d, 0xb5, 0x5c, 0x0d, 0xfc, 0xb8, 0x80, 0xc1, 0x69, 0x28, 0xce, 0x29, 0x1f, 0x1a, 0x80, 0xea, 0x33, 0x7b, 0x05, 0x4b, 0x11, 0x5a, 0xa5, 0x06, 0xee, 0x1a, 0xb3, 0xc5, 0x09, 0x95, 0x55, 0x22, 0xee, 0x92, 0x30, 0xc8, 0x6f, 0x3c, 0xe7, 0x94, 0x4c, 0x85, 0xb7, 0xa9, 0xd1, 0x48, 0xfb, 0xec, 0x5d, 0x8d, 0x07, 0xbd, 0x84, 0x16, 0x4f, 0x43, 0x93, 0x9a, 0x96, 0x4e, 0xcd, 0xa3, 0xf2, 0xdd, 0x70, 0x1a, 0xea, 0xbc, 0x6c, 0x72, 0xf3, 0xa1, 0xf2, 0xa9, 0x34, 0xcf, 0x13, 0x02, 0x3a, 0x21, 0x2b, 0xf9, 0xbc, 0x6f, 0x25, 0x0c, 0xf1, 0x7d, 0x5b, 0x1d, 0x43, 0x3e, 0xbc, 0x82, 0x50, 0x6b, 0xe8, 0xb9, 0x6b, 0x5b, 0xc3, 0x60, 0x8c, 0xcc, 0xb8, 0x93, 0x16, 0x97, 0x68, 0x0c, 0xbd, 0x79, 0x24, 0x24, 0x9b, 0x71, 0xb2, 0xc8, 0xcf, 0xd0, 0xd6, 0xbb, 0x3c, 0x2d, 0xef, 0x32, 0xce, 0x51, 0xf6, 0x20, 0x5b, 0xf3, 0xb2, 0x01, 0x3d, 0x01, 0x27, 0x64, 0x9c, 0x06, 0x71, 0x24, 0xa4, 0xd7, 0xd9, 0xdb, 0xd8, 0x6f, 0xe0, 0x96, 0x32, 0x9c, 0x46, 0x42, 0xa2, 0xa7, 0x00, 0xd6, 0xb9, 0x88, 0xa4, 0xd7, 0xd5, 0xf9, 0x73, 0x8c, 0x77, 0x11, 0x49, 0xff, 0x28, 0xaf, 0xc5, 0x89, 0x24, 0x32, 0x13, 0xe8, 0x05, 0x34, 0xf4, 0x18, 0xb6, 0xa3, 0xe2, 0xf1, 0xba, 0xf2, 0x52, 0x50, 0x81, 0x0d, 0xce, 0xdf, 0x85, 0xfa, 0x3b, 0xc2, 0xaf, 0xd5, 0x88, 0xe2, 0x54, 0x50, 0x69, 0x3b, 0xc4, 0x2c, 0xfc, 0x0c, 0xc0, 0x70, 0x06, 0x7c, 
0x26, 0x50, 0x1f, 0x1a, 0x82, 0xca, 0x2c, 0x9f, 0x43, 0x3b, 0xeb, 0x36, 0x37, 0xd9, 0x19, 0x57, 0xb0, 0x81, 0xa2, 0x7d, 0xa8, 0x2f, 0x08, 0xbf, 0xb6, 0xb3, 0x07, 0x95, 0x29, 0x2a, 0xf2, 0xb8, 0x82, 0x35, 0xe2, 0xd8, 0x81, 0x4d, 0xc2, 0x67, 0xaa, 0x00, 0xfc, 0x3f, 0x6b, 0xd0, 0x9e, 0xe8, 0xe6, 0xb1, 0xc9, 0x7e, 0x03, 0x6e, 0xde, 0x62, 0xaa, 0x40, 0xaa, 0xeb, 0x7a, 0xc7, 0x10, 0x4c, 0xef, 0x88, 0xe5, 0xf7, 0xba, 0xde, 0xa9, 0xfd, 0x8b, 0xde, 0x41, 0x50, 0x4f, 0x19, 0x97, 0xb6, 0x47, 0xf4, 0xf7, 0x7d, 0x95, 0xe7, 0x67, 0x5b, 0x53, 0xe5, 0xf6, 0x54, 0xb6, 0xca, 0xcb, 0x6a, 0xb6, 0x56, 0xd4, 0x5c, 0x53, 0x97, 0xce, 0x3f, 0xae, 0xcb, 0x52, 0x35, 0x41, 0xb9, 0x9a, 0x94, 0x9e, 0xe6, 0x40, 0x0f, 0xd0, 0xb3, 0x28, 0xc0, 0x7f, 0xd4, 0x33, 0xca, 0xe5, 0x7c, 0x50, 0x95, 0xde, 0x43, 0xf3, 0x2a, 0x5d, 0x66, 0xbf, 0x56, 0xc8, 0xfe, 0x36, 0x34, 0xd4, 0xbd, 0xcc, 0x28, 0x6c, 0x60, 0xb3, 0xf0, 0x3b, 0xe0, 0x0e, 0x19, 0xa7, 0x98, 0xfe, 0x96, 0x51, 0x21, 0xfd, 0xaf, 0xa1, 0x6d, 0x96, 0x22, 0x65, 0x89, 0x79, 0x89, 0x0d, 0xa9, 0x5a, 0x24, 0x35, 0xa1, 0xfe, 0x81, 0x45, 0x53, 0xff, 0xaf, 0x1a, 0xb4, 0x26, 0x21, 0x4d, 0x08, 0x8f, 0x98, 0x8a, 0x99, 0x90, 0x85, 0x29, 0x36, 0x07, 0xeb, 0x6f, 0x74, 0x04, 0x9d, 0x7c, 0x00, 0x1a, 0x7d, 0x6a, 0x9f, 0xeb, 0x04, 0xdc, 0x0e, 0x8b, 0x6f, 0xc5, 0x97, 0xe0, 0x26, 0xd9, 0xc2, 0x8e, 0xc5, 0xfc, 0xe8, 0x90, 0x64, 0x0b, 0xc3, 0x51, 0x33, 0xda, 0x3e, 0x1b, 0x79, 0x84, 0xfa, 0xe7, 0xb4, 0xc1, 0x6d, 0x51, 0x6c, 0x15, 0x1b, 0xc1, 0xd8, 0xf2, 0xf9, 0xac, 0x22, 0x18, 0x8e, 0x50, 0xcf, 0xd5, 0x2d, 0xe1, 0x8b, 0x2c, 0x0d, 0x04, 0x0d, 0x59, 0x32, 0x15, 0x5e, 0x53, 0x63, 0x3a, 0xc6, 0x3a, 0x31, 0x46, 0xf5, 0x83, 0x73, 0x49, 0x93, 0x70, 0xae, 0xb4, 0x5c, 0x22, 0x4d, 0x65, 0xf7, 0x96, 0x8e, 0x1c, 0xfc, 0x1a, 0x3c, 0x91, 0x92, 0xdb, 0x24, 0x88, 0x59, 0x48, 0xe2, 0xe0, 0x96, 0xf1, 0x6b, 0x7d, 0x83, 0x2c, 0xc9, 0xab, 0xfc, 0x91, 0xf6, 0x9f, 0x2a, 0xf7, 0xaf, 0xda, 0x3b, 0x54, 0x4e, 0x7f, 0x00, 0x4e, 0x9e, 0x70, 0x81, 0x5e, 0x81, 0x23, 0xf2, 0x85, 0x7e, 0x43, 0xdd, 0xfe, 0x17, 0x2b, 0xf7, 0xb6, 0x6e, 0x7c, 0x0f, 0x3c, 0x78, 0x91, 0xcf, 0x28, 0xdd, 0xee, 0x5b, 0xe0, 0x4e, 0x3e, 0x9e, 0x0d, 0x83, 0xe1, 0xe9, 0xc9, 0xe8, 0xec, 0xa2, 0x57, 0x41, 0x3d, 0x68, 0x0f, 0x8a, 0x96, 0xea, 0xc1, 0x49, 0xde, 0x04, 0x25, 0xc2, 0x64, 0x84, 0x3f, 0x8c, 0x70, 0x91, 0x60, 0x2d, 0x55, 0xe4, 0xc1, 0xb6, 0xb1, 0xbc, 0x1d, 0x9d, 0x8d, 0xf0, 0xc9, 0xd2, 0x53, 0x3b, 0xf8, 0x0a, 0x36, 0xed, 0xbb, 0x84, 0x1c, 0x68, 0xbc, 0x3f, 0x1b, 0xe0, 0x8f, 0xbd, 0x0a, 0xea, 0x80, 0x33, 0xb9, 0xc0, 0xa3, 0xc1, 0xbb, 0x93, 0xb3, 0xb7, 0xbd, 0xea, 0x65, 0x53, 0xff, 0x12, 0x7f, 0xff, 0x77, 0x00, 0x00, 0x00, 0xff, 0xff, 0x75, 0x59, 0xf4, 0x03, 0x4e, 0x0b, 0x00, 0x00, } grpc-go-1.22.1/benchmark/grpc_testing/control.proto000066400000000000000000000112531351635773100223370ustar00rootroot00000000000000// Copyright 2016 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. 
syntax = "proto3"; import "payloads.proto"; import "stats.proto"; package grpc.testing; enum ClientType { SYNC_CLIENT = 0; ASYNC_CLIENT = 1; } enum ServerType { SYNC_SERVER = 0; ASYNC_SERVER = 1; ASYNC_GENERIC_SERVER = 2; } enum RpcType { UNARY = 0; STREAMING = 1; } // Parameters of poisson process distribution, which is a good representation // of activity coming in from independent identical stationary sources. message PoissonParams { // The rate of arrivals (a.k.a. lambda parameter of the exp distribution). double offered_load = 1; } message UniformParams { double interarrival_lo = 1; double interarrival_hi = 2; } message DeterministicParams { double offered_load = 1; } message ParetoParams { double interarrival_base = 1; double alpha = 2; } // Once an RPC finishes, immediately start a new one. // No configuration parameters needed. message ClosedLoopParams { } message LoadParams { oneof load { ClosedLoopParams closed_loop = 1; PoissonParams poisson = 2; UniformParams uniform = 3; DeterministicParams determ = 4; ParetoParams pareto = 5; }; } // presence of SecurityParams implies use of TLS message SecurityParams { bool use_test_ca = 1; string server_host_override = 2; } message ClientConfig { // List of targets to connect to. At least one target needs to be specified. repeated string server_targets = 1; ClientType client_type = 2; SecurityParams security_params = 3; // How many concurrent RPCs to start for each channel. // For synchronous client, use a separate thread for each outstanding RPC. int32 outstanding_rpcs_per_channel = 4; // Number of independent client channels to create. // i-th channel will connect to server_target[i % server_targets.size()] int32 client_channels = 5; // Only for async client. Number of threads to use to start/manage RPCs. int32 async_client_threads = 7; RpcType rpc_type = 8; // The requested load for the entire client (aggregated over all the threads). LoadParams load_params = 10; PayloadConfig payload_config = 11; HistogramParams histogram_params = 12; // Specify the cores we should run the client on, if desired repeated int32 core_list = 13; int32 core_limit = 14; } message ClientStatus { ClientStats stats = 1; } // Request current stats message Mark { // if true, the stats will be reset after taking their snapshot. bool reset = 1; } message ClientArgs { oneof argtype { ClientConfig setup = 1; Mark mark = 2; } } message ServerConfig { ServerType server_type = 1; SecurityParams security_params = 2; // Port on which to listen. Zero means pick unused port. int32 port = 4; // Only for async server. Number of threads used to serve the requests. 
int32 async_server_threads = 7; // Specify the number of cores to limit server to, if desired int32 core_limit = 8; // payload config, used in generic server PayloadConfig payload_config = 9; // Specify the cores we should run the server on, if desired repeated int32 core_list = 10; } message ServerArgs { oneof argtype { ServerConfig setup = 1; Mark mark = 2; } } message ServerStatus { ServerStats stats = 1; // the port bound by the server int32 port = 2; // Number of cores available to the server int32 cores = 3; } message CoreRequest { } message CoreResponse { // Number of cores available on the server int32 cores = 1; } message Void { } // A single performance scenario: input to qps_json_driver message Scenario { // Human readable name for this scenario string name = 1; // Client configuration ClientConfig client_config = 2; // Number of clients to start for the test int32 num_clients = 3; // Server configuration ServerConfig server_config = 4; // Number of servers to start for the test int32 num_servers = 5; // Warmup period, in seconds int32 warmup_seconds = 6; // Benchmark time, in seconds int32 benchmark_seconds = 7; // Number of workers to spawn locally (usually zero) int32 spawn_local_worker_count = 8; } // A set of scenarios to be run with qps_json_driver message Scenarios { repeated Scenario scenarios = 1; } grpc-go-1.22.1/benchmark/grpc_testing/messages.pb.go000066400000000000000000000701771351635773100223420ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: messages.proto package grpc_testing import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package // The type of payload that should be returned. type PayloadType int32 const ( // Compressable text format. PayloadType_COMPRESSABLE PayloadType = 0 // Uncompressable binary format. PayloadType_UNCOMPRESSABLE PayloadType = 1 // Randomly chosen from all other formats defined in this enum. PayloadType_RANDOM PayloadType = 2 ) var PayloadType_name = map[int32]string{ 0: "COMPRESSABLE", 1: "UNCOMPRESSABLE", 2: "RANDOM", } var PayloadType_value = map[string]int32{ "COMPRESSABLE": 0, "UNCOMPRESSABLE": 1, "RANDOM": 2, } func (x PayloadType) String() string { return proto.EnumName(PayloadType_name, int32(x)) } func (PayloadType) EnumDescriptor() ([]byte, []int) { return fileDescriptor_messages_5c70222ad96bf232, []int{0} } // Compression algorithms type CompressionType int32 const ( // No compression CompressionType_NONE CompressionType = 0 CompressionType_GZIP CompressionType = 1 CompressionType_DEFLATE CompressionType = 2 ) var CompressionType_name = map[int32]string{ 0: "NONE", 1: "GZIP", 2: "DEFLATE", } var CompressionType_value = map[string]int32{ "NONE": 0, "GZIP": 1, "DEFLATE": 2, } func (x CompressionType) String() string { return proto.EnumName(CompressionType_name, int32(x)) } func (CompressionType) EnumDescriptor() ([]byte, []int) { return fileDescriptor_messages_5c70222ad96bf232, []int{1} } // A block of data, to simply increase gRPC message size. type Payload struct { // The type of data in body. 
Type PayloadType `protobuf:"varint,1,opt,name=type,proto3,enum=grpc.testing.PayloadType" json:"type,omitempty"` // Primary contents of payload. Body []byte `protobuf:"bytes,2,opt,name=body,proto3" json:"body,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Payload) Reset() { *m = Payload{} } func (m *Payload) String() string { return proto.CompactTextString(m) } func (*Payload) ProtoMessage() {} func (*Payload) Descriptor() ([]byte, []int) { return fileDescriptor_messages_5c70222ad96bf232, []int{0} } func (m *Payload) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Payload.Unmarshal(m, b) } func (m *Payload) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Payload.Marshal(b, m, deterministic) } func (dst *Payload) XXX_Merge(src proto.Message) { xxx_messageInfo_Payload.Merge(dst, src) } func (m *Payload) XXX_Size() int { return xxx_messageInfo_Payload.Size(m) } func (m *Payload) XXX_DiscardUnknown() { xxx_messageInfo_Payload.DiscardUnknown(m) } var xxx_messageInfo_Payload proto.InternalMessageInfo func (m *Payload) GetType() PayloadType { if m != nil { return m.Type } return PayloadType_COMPRESSABLE } func (m *Payload) GetBody() []byte { if m != nil { return m.Body } return nil } // A protobuf representation for grpc status. This is used by test // clients to specify a status that the server should attempt to return. type EchoStatus struct { Code int32 `protobuf:"varint,1,opt,name=code,proto3" json:"code,omitempty"` Message string `protobuf:"bytes,2,opt,name=message,proto3" json:"message,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *EchoStatus) Reset() { *m = EchoStatus{} } func (m *EchoStatus) String() string { return proto.CompactTextString(m) } func (*EchoStatus) ProtoMessage() {} func (*EchoStatus) Descriptor() ([]byte, []int) { return fileDescriptor_messages_5c70222ad96bf232, []int{1} } func (m *EchoStatus) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_EchoStatus.Unmarshal(m, b) } func (m *EchoStatus) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_EchoStatus.Marshal(b, m, deterministic) } func (dst *EchoStatus) XXX_Merge(src proto.Message) { xxx_messageInfo_EchoStatus.Merge(dst, src) } func (m *EchoStatus) XXX_Size() int { return xxx_messageInfo_EchoStatus.Size(m) } func (m *EchoStatus) XXX_DiscardUnknown() { xxx_messageInfo_EchoStatus.DiscardUnknown(m) } var xxx_messageInfo_EchoStatus proto.InternalMessageInfo func (m *EchoStatus) GetCode() int32 { if m != nil { return m.Code } return 0 } func (m *EchoStatus) GetMessage() string { if m != nil { return m.Message } return "" } // Unary request. type SimpleRequest struct { // Desired payload type in the response from the server. // If response_type is RANDOM, server randomly chooses one from other formats. ResponseType PayloadType `protobuf:"varint,1,opt,name=response_type,json=responseType,proto3,enum=grpc.testing.PayloadType" json:"response_type,omitempty"` // Desired payload size in the response from the server. // If response_type is COMPRESSABLE, this denotes the size before compression. ResponseSize int32 `protobuf:"varint,2,opt,name=response_size,json=responseSize,proto3" json:"response_size,omitempty"` // Optional input payload sent along with the request. 
Payload *Payload `protobuf:"bytes,3,opt,name=payload,proto3" json:"payload,omitempty"` // Whether SimpleResponse should include username. FillUsername bool `protobuf:"varint,4,opt,name=fill_username,json=fillUsername,proto3" json:"fill_username,omitempty"` // Whether SimpleResponse should include OAuth scope. FillOauthScope bool `protobuf:"varint,5,opt,name=fill_oauth_scope,json=fillOauthScope,proto3" json:"fill_oauth_scope,omitempty"` // Compression algorithm to be used by the server for the response (stream) ResponseCompression CompressionType `protobuf:"varint,6,opt,name=response_compression,json=responseCompression,proto3,enum=grpc.testing.CompressionType" json:"response_compression,omitempty"` // Whether server should return a given status ResponseStatus *EchoStatus `protobuf:"bytes,7,opt,name=response_status,json=responseStatus,proto3" json:"response_status,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *SimpleRequest) Reset() { *m = SimpleRequest{} } func (m *SimpleRequest) String() string { return proto.CompactTextString(m) } func (*SimpleRequest) ProtoMessage() {} func (*SimpleRequest) Descriptor() ([]byte, []int) { return fileDescriptor_messages_5c70222ad96bf232, []int{2} } func (m *SimpleRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_SimpleRequest.Unmarshal(m, b) } func (m *SimpleRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_SimpleRequest.Marshal(b, m, deterministic) } func (dst *SimpleRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_SimpleRequest.Merge(dst, src) } func (m *SimpleRequest) XXX_Size() int { return xxx_messageInfo_SimpleRequest.Size(m) } func (m *SimpleRequest) XXX_DiscardUnknown() { xxx_messageInfo_SimpleRequest.DiscardUnknown(m) } var xxx_messageInfo_SimpleRequest proto.InternalMessageInfo func (m *SimpleRequest) GetResponseType() PayloadType { if m != nil { return m.ResponseType } return PayloadType_COMPRESSABLE } func (m *SimpleRequest) GetResponseSize() int32 { if m != nil { return m.ResponseSize } return 0 } func (m *SimpleRequest) GetPayload() *Payload { if m != nil { return m.Payload } return nil } func (m *SimpleRequest) GetFillUsername() bool { if m != nil { return m.FillUsername } return false } func (m *SimpleRequest) GetFillOauthScope() bool { if m != nil { return m.FillOauthScope } return false } func (m *SimpleRequest) GetResponseCompression() CompressionType { if m != nil { return m.ResponseCompression } return CompressionType_NONE } func (m *SimpleRequest) GetResponseStatus() *EchoStatus { if m != nil { return m.ResponseStatus } return nil } // Unary response, as configured by the request. type SimpleResponse struct { // Payload to increase message size. Payload *Payload `protobuf:"bytes,1,opt,name=payload,proto3" json:"payload,omitempty"` // The user the request came from, for verifying authentication was // successful when the client expected it. Username string `protobuf:"bytes,2,opt,name=username,proto3" json:"username,omitempty"` // OAuth scope. 
OauthScope string `protobuf:"bytes,3,opt,name=oauth_scope,json=oauthScope,proto3" json:"oauth_scope,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *SimpleResponse) Reset() { *m = SimpleResponse{} } func (m *SimpleResponse) String() string { return proto.CompactTextString(m) } func (*SimpleResponse) ProtoMessage() {} func (*SimpleResponse) Descriptor() ([]byte, []int) { return fileDescriptor_messages_5c70222ad96bf232, []int{3} } func (m *SimpleResponse) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_SimpleResponse.Unmarshal(m, b) } func (m *SimpleResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_SimpleResponse.Marshal(b, m, deterministic) } func (dst *SimpleResponse) XXX_Merge(src proto.Message) { xxx_messageInfo_SimpleResponse.Merge(dst, src) } func (m *SimpleResponse) XXX_Size() int { return xxx_messageInfo_SimpleResponse.Size(m) } func (m *SimpleResponse) XXX_DiscardUnknown() { xxx_messageInfo_SimpleResponse.DiscardUnknown(m) } var xxx_messageInfo_SimpleResponse proto.InternalMessageInfo func (m *SimpleResponse) GetPayload() *Payload { if m != nil { return m.Payload } return nil } func (m *SimpleResponse) GetUsername() string { if m != nil { return m.Username } return "" } func (m *SimpleResponse) GetOauthScope() string { if m != nil { return m.OauthScope } return "" } // Client-streaming request. type StreamingInputCallRequest struct { // Optional input payload sent along with the request. Payload *Payload `protobuf:"bytes,1,opt,name=payload,proto3" json:"payload,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *StreamingInputCallRequest) Reset() { *m = StreamingInputCallRequest{} } func (m *StreamingInputCallRequest) String() string { return proto.CompactTextString(m) } func (*StreamingInputCallRequest) ProtoMessage() {} func (*StreamingInputCallRequest) Descriptor() ([]byte, []int) { return fileDescriptor_messages_5c70222ad96bf232, []int{4} } func (m *StreamingInputCallRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_StreamingInputCallRequest.Unmarshal(m, b) } func (m *StreamingInputCallRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_StreamingInputCallRequest.Marshal(b, m, deterministic) } func (dst *StreamingInputCallRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_StreamingInputCallRequest.Merge(dst, src) } func (m *StreamingInputCallRequest) XXX_Size() int { return xxx_messageInfo_StreamingInputCallRequest.Size(m) } func (m *StreamingInputCallRequest) XXX_DiscardUnknown() { xxx_messageInfo_StreamingInputCallRequest.DiscardUnknown(m) } var xxx_messageInfo_StreamingInputCallRequest proto.InternalMessageInfo func (m *StreamingInputCallRequest) GetPayload() *Payload { if m != nil { return m.Payload } return nil } // Client-streaming response. type StreamingInputCallResponse struct { // Aggregated size of payloads received from the client. 
AggregatedPayloadSize int32 `protobuf:"varint,1,opt,name=aggregated_payload_size,json=aggregatedPayloadSize,proto3" json:"aggregated_payload_size,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *StreamingInputCallResponse) Reset() { *m = StreamingInputCallResponse{} } func (m *StreamingInputCallResponse) String() string { return proto.CompactTextString(m) } func (*StreamingInputCallResponse) ProtoMessage() {} func (*StreamingInputCallResponse) Descriptor() ([]byte, []int) { return fileDescriptor_messages_5c70222ad96bf232, []int{5} } func (m *StreamingInputCallResponse) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_StreamingInputCallResponse.Unmarshal(m, b) } func (m *StreamingInputCallResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_StreamingInputCallResponse.Marshal(b, m, deterministic) } func (dst *StreamingInputCallResponse) XXX_Merge(src proto.Message) { xxx_messageInfo_StreamingInputCallResponse.Merge(dst, src) } func (m *StreamingInputCallResponse) XXX_Size() int { return xxx_messageInfo_StreamingInputCallResponse.Size(m) } func (m *StreamingInputCallResponse) XXX_DiscardUnknown() { xxx_messageInfo_StreamingInputCallResponse.DiscardUnknown(m) } var xxx_messageInfo_StreamingInputCallResponse proto.InternalMessageInfo func (m *StreamingInputCallResponse) GetAggregatedPayloadSize() int32 { if m != nil { return m.AggregatedPayloadSize } return 0 } // Configuration for a particular response. type ResponseParameters struct { // Desired payload sizes in responses from the server. // If response_type is COMPRESSABLE, this denotes the size before compression. Size int32 `protobuf:"varint,1,opt,name=size,proto3" json:"size,omitempty"` // Desired interval between consecutive responses in the response stream in // microseconds. IntervalUs int32 `protobuf:"varint,2,opt,name=interval_us,json=intervalUs,proto3" json:"interval_us,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ResponseParameters) Reset() { *m = ResponseParameters{} } func (m *ResponseParameters) String() string { return proto.CompactTextString(m) } func (*ResponseParameters) ProtoMessage() {} func (*ResponseParameters) Descriptor() ([]byte, []int) { return fileDescriptor_messages_5c70222ad96bf232, []int{6} } func (m *ResponseParameters) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ResponseParameters.Unmarshal(m, b) } func (m *ResponseParameters) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ResponseParameters.Marshal(b, m, deterministic) } func (dst *ResponseParameters) XXX_Merge(src proto.Message) { xxx_messageInfo_ResponseParameters.Merge(dst, src) } func (m *ResponseParameters) XXX_Size() int { return xxx_messageInfo_ResponseParameters.Size(m) } func (m *ResponseParameters) XXX_DiscardUnknown() { xxx_messageInfo_ResponseParameters.DiscardUnknown(m) } var xxx_messageInfo_ResponseParameters proto.InternalMessageInfo func (m *ResponseParameters) GetSize() int32 { if m != nil { return m.Size } return 0 } func (m *ResponseParameters) GetIntervalUs() int32 { if m != nil { return m.IntervalUs } return 0 } // Server-streaming request. type StreamingOutputCallRequest struct { // Desired payload type in the response from the server. // If response_type is RANDOM, the payload from each response in the stream // might be of different types. 
This is to simulate a mixed type of payload // stream. ResponseType PayloadType `protobuf:"varint,1,opt,name=response_type,json=responseType,proto3,enum=grpc.testing.PayloadType" json:"response_type,omitempty"` // Configuration for each expected response message. ResponseParameters []*ResponseParameters `protobuf:"bytes,2,rep,name=response_parameters,json=responseParameters,proto3" json:"response_parameters,omitempty"` // Optional input payload sent along with the request. Payload *Payload `protobuf:"bytes,3,opt,name=payload,proto3" json:"payload,omitempty"` // Compression algorithm to be used by the server for the response (stream) ResponseCompression CompressionType `protobuf:"varint,6,opt,name=response_compression,json=responseCompression,proto3,enum=grpc.testing.CompressionType" json:"response_compression,omitempty"` // Whether server should return a given status ResponseStatus *EchoStatus `protobuf:"bytes,7,opt,name=response_status,json=responseStatus,proto3" json:"response_status,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *StreamingOutputCallRequest) Reset() { *m = StreamingOutputCallRequest{} } func (m *StreamingOutputCallRequest) String() string { return proto.CompactTextString(m) } func (*StreamingOutputCallRequest) ProtoMessage() {} func (*StreamingOutputCallRequest) Descriptor() ([]byte, []int) { return fileDescriptor_messages_5c70222ad96bf232, []int{7} } func (m *StreamingOutputCallRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_StreamingOutputCallRequest.Unmarshal(m, b) } func (m *StreamingOutputCallRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_StreamingOutputCallRequest.Marshal(b, m, deterministic) } func (dst *StreamingOutputCallRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_StreamingOutputCallRequest.Merge(dst, src) } func (m *StreamingOutputCallRequest) XXX_Size() int { return xxx_messageInfo_StreamingOutputCallRequest.Size(m) } func (m *StreamingOutputCallRequest) XXX_DiscardUnknown() { xxx_messageInfo_StreamingOutputCallRequest.DiscardUnknown(m) } var xxx_messageInfo_StreamingOutputCallRequest proto.InternalMessageInfo func (m *StreamingOutputCallRequest) GetResponseType() PayloadType { if m != nil { return m.ResponseType } return PayloadType_COMPRESSABLE } func (m *StreamingOutputCallRequest) GetResponseParameters() []*ResponseParameters { if m != nil { return m.ResponseParameters } return nil } func (m *StreamingOutputCallRequest) GetPayload() *Payload { if m != nil { return m.Payload } return nil } func (m *StreamingOutputCallRequest) GetResponseCompression() CompressionType { if m != nil { return m.ResponseCompression } return CompressionType_NONE } func (m *StreamingOutputCallRequest) GetResponseStatus() *EchoStatus { if m != nil { return m.ResponseStatus } return nil } // Server-streaming response, as configured by the request and parameters. type StreamingOutputCallResponse struct { // Payload to increase response size. 
Payload *Payload `protobuf:"bytes,1,opt,name=payload,proto3" json:"payload,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *StreamingOutputCallResponse) Reset() { *m = StreamingOutputCallResponse{} } func (m *StreamingOutputCallResponse) String() string { return proto.CompactTextString(m) } func (*StreamingOutputCallResponse) ProtoMessage() {} func (*StreamingOutputCallResponse) Descriptor() ([]byte, []int) { return fileDescriptor_messages_5c70222ad96bf232, []int{8} } func (m *StreamingOutputCallResponse) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_StreamingOutputCallResponse.Unmarshal(m, b) } func (m *StreamingOutputCallResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_StreamingOutputCallResponse.Marshal(b, m, deterministic) } func (dst *StreamingOutputCallResponse) XXX_Merge(src proto.Message) { xxx_messageInfo_StreamingOutputCallResponse.Merge(dst, src) } func (m *StreamingOutputCallResponse) XXX_Size() int { return xxx_messageInfo_StreamingOutputCallResponse.Size(m) } func (m *StreamingOutputCallResponse) XXX_DiscardUnknown() { xxx_messageInfo_StreamingOutputCallResponse.DiscardUnknown(m) } var xxx_messageInfo_StreamingOutputCallResponse proto.InternalMessageInfo func (m *StreamingOutputCallResponse) GetPayload() *Payload { if m != nil { return m.Payload } return nil } // For reconnect interop test only. // Client tells server what reconnection parameters it used. type ReconnectParams struct { MaxReconnectBackoffMs int32 `protobuf:"varint,1,opt,name=max_reconnect_backoff_ms,json=maxReconnectBackoffMs,proto3" json:"max_reconnect_backoff_ms,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ReconnectParams) Reset() { *m = ReconnectParams{} } func (m *ReconnectParams) String() string { return proto.CompactTextString(m) } func (*ReconnectParams) ProtoMessage() {} func (*ReconnectParams) Descriptor() ([]byte, []int) { return fileDescriptor_messages_5c70222ad96bf232, []int{9} } func (m *ReconnectParams) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ReconnectParams.Unmarshal(m, b) } func (m *ReconnectParams) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ReconnectParams.Marshal(b, m, deterministic) } func (dst *ReconnectParams) XXX_Merge(src proto.Message) { xxx_messageInfo_ReconnectParams.Merge(dst, src) } func (m *ReconnectParams) XXX_Size() int { return xxx_messageInfo_ReconnectParams.Size(m) } func (m *ReconnectParams) XXX_DiscardUnknown() { xxx_messageInfo_ReconnectParams.DiscardUnknown(m) } var xxx_messageInfo_ReconnectParams proto.InternalMessageInfo func (m *ReconnectParams) GetMaxReconnectBackoffMs() int32 { if m != nil { return m.MaxReconnectBackoffMs } return 0 } // For reconnect interop test only. // Server tells client whether its reconnects are following the spec and the // reconnect backoffs it saw. 
type ReconnectInfo struct { Passed bool `protobuf:"varint,1,opt,name=passed,proto3" json:"passed,omitempty"` BackoffMs []int32 `protobuf:"varint,2,rep,packed,name=backoff_ms,json=backoffMs,proto3" json:"backoff_ms,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ReconnectInfo) Reset() { *m = ReconnectInfo{} } func (m *ReconnectInfo) String() string { return proto.CompactTextString(m) } func (*ReconnectInfo) ProtoMessage() {} func (*ReconnectInfo) Descriptor() ([]byte, []int) { return fileDescriptor_messages_5c70222ad96bf232, []int{10} } func (m *ReconnectInfo) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ReconnectInfo.Unmarshal(m, b) } func (m *ReconnectInfo) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ReconnectInfo.Marshal(b, m, deterministic) } func (dst *ReconnectInfo) XXX_Merge(src proto.Message) { xxx_messageInfo_ReconnectInfo.Merge(dst, src) } func (m *ReconnectInfo) XXX_Size() int { return xxx_messageInfo_ReconnectInfo.Size(m) } func (m *ReconnectInfo) XXX_DiscardUnknown() { xxx_messageInfo_ReconnectInfo.DiscardUnknown(m) } var xxx_messageInfo_ReconnectInfo proto.InternalMessageInfo func (m *ReconnectInfo) GetPassed() bool { if m != nil { return m.Passed } return false } func (m *ReconnectInfo) GetBackoffMs() []int32 { if m != nil { return m.BackoffMs } return nil } func init() { proto.RegisterType((*Payload)(nil), "grpc.testing.Payload") proto.RegisterType((*EchoStatus)(nil), "grpc.testing.EchoStatus") proto.RegisterType((*SimpleRequest)(nil), "grpc.testing.SimpleRequest") proto.RegisterType((*SimpleResponse)(nil), "grpc.testing.SimpleResponse") proto.RegisterType((*StreamingInputCallRequest)(nil), "grpc.testing.StreamingInputCallRequest") proto.RegisterType((*StreamingInputCallResponse)(nil), "grpc.testing.StreamingInputCallResponse") proto.RegisterType((*ResponseParameters)(nil), "grpc.testing.ResponseParameters") proto.RegisterType((*StreamingOutputCallRequest)(nil), "grpc.testing.StreamingOutputCallRequest") proto.RegisterType((*StreamingOutputCallResponse)(nil), "grpc.testing.StreamingOutputCallResponse") proto.RegisterType((*ReconnectParams)(nil), "grpc.testing.ReconnectParams") proto.RegisterType((*ReconnectInfo)(nil), "grpc.testing.ReconnectInfo") proto.RegisterEnum("grpc.testing.PayloadType", PayloadType_name, PayloadType_value) proto.RegisterEnum("grpc.testing.CompressionType", CompressionType_name, CompressionType_value) } func init() { proto.RegisterFile("messages.proto", fileDescriptor_messages_5c70222ad96bf232) } var fileDescriptor_messages_5c70222ad96bf232 = []byte{ // 652 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xcc, 0x55, 0x4d, 0x6f, 0xd3, 0x40, 0x10, 0xc5, 0xf9, 0xee, 0x24, 0x4d, 0xa3, 0x85, 0x82, 0x5b, 0x54, 0x11, 0x99, 0x4b, 0x54, 0x89, 0x20, 0x05, 0x09, 0x24, 0x0e, 0xa0, 0xb4, 0x4d, 0x51, 0x50, 0x9a, 0x84, 0x75, 0x7b, 0xe1, 0x62, 0x6d, 0x9c, 0x8d, 0x6b, 0x11, 0x7b, 0x8d, 0x77, 0x8d, 0x9a, 0x1e, 0xb8, 0xf3, 0x83, 0xb9, 0xa3, 0x5d, 0x7f, 0xc4, 0x69, 0x7b, 0x68, 0xe1, 0xc2, 0x6d, 0xf7, 0xed, 0x9b, 0x97, 0x79, 0x33, 0xcf, 0x0a, 0x34, 0x3d, 0xca, 0x39, 0x71, 0x28, 0xef, 0x06, 0x21, 0x13, 0x0c, 0x35, 0x9c, 0x30, 0xb0, 0xbb, 0x82, 0x72, 0xe1, 0xfa, 0x8e, 0x31, 0x82, 0xea, 0x94, 0xac, 0x96, 0x8c, 0xcc, 0xd1, 0x2b, 0x28, 0x89, 0x55, 0x40, 0x75, 0xad, 0xad, 0x75, 0x9a, 0xbd, 0xbd, 0x6e, 0x9e, 0xd7, 0x4d, 0x48, 0xe7, 0xab, 0x80, 0x62, 0x45, 0x43, 0x08, 0x4a, 0x33, 0x36, 0x5f, 0xe9, 
0x85, 0xb6, 0xd6, 0x69, 0x60, 0x75, 0x36, 0xde, 0x03, 0x0c, 0xec, 0x4b, 0x66, 0x0a, 0x22, 0x22, 0x2e, 0x19, 0x36, 0x9b, 0xc7, 0x82, 0x65, 0xac, 0xce, 0x48, 0x87, 0x6a, 0xd2, 0x8f, 0x2a, 0xdc, 0xc2, 0xe9, 0xd5, 0xf8, 0x55, 0x84, 0x6d, 0xd3, 0xf5, 0x82, 0x25, 0xc5, 0xf4, 0x7b, 0x44, 0xb9, 0x40, 0x1f, 0x60, 0x3b, 0xa4, 0x3c, 0x60, 0x3e, 0xa7, 0xd6, 0xfd, 0x3a, 0x6b, 0xa4, 0x7c, 0x79, 0x43, 0x2f, 0x73, 0xf5, 0xdc, 0xbd, 0x8e, 0x7f, 0xb1, 0xbc, 0x26, 0x99, 0xee, 0x35, 0x45, 0xaf, 0xa1, 0x1a, 0xc4, 0x0a, 0x7a, 0xb1, 0xad, 0x75, 0xea, 0xbd, 0xdd, 0x3b, 0xe5, 0x71, 0xca, 0x92, 0xaa, 0x0b, 0x77, 0xb9, 0xb4, 0x22, 0x4e, 0x43, 0x9f, 0x78, 0x54, 0x2f, 0xb5, 0xb5, 0x4e, 0x0d, 0x37, 0x24, 0x78, 0x91, 0x60, 0xa8, 0x03, 0x2d, 0x45, 0x62, 0x24, 0x12, 0x97, 0x16, 0xb7, 0x59, 0x40, 0xf5, 0xb2, 0xe2, 0x35, 0x25, 0x3e, 0x91, 0xb0, 0x29, 0x51, 0x34, 0x85, 0x27, 0x59, 0x93, 0x36, 0xf3, 0x82, 0x90, 0x72, 0xee, 0x32, 0x5f, 0xaf, 0x28, 0xaf, 0x07, 0x9b, 0xcd, 0x1c, 0xaf, 0x09, 0xca, 0xef, 0xe3, 0xb4, 0x34, 0xf7, 0x80, 0xfa, 0xb0, 0xb3, 0xb6, 0xad, 0x36, 0xa1, 0x57, 0x95, 0x33, 0x7d, 0x53, 0x6c, 0xbd, 0x29, 0xdc, 0xcc, 0x46, 0xa2, 0xee, 0xc6, 0x4f, 0x68, 0xa6, 0xab, 0x88, 0xf1, 0xfc, 0x98, 0xb4, 0x7b, 0x8d, 0x69, 0x1f, 0x6a, 0xd9, 0x84, 0xe2, 0x4d, 0x67, 0x77, 0xf4, 0x02, 0xea, 0xf9, 0xc1, 0x14, 0xd5, 0x33, 0xb0, 0x6c, 0x28, 0xc6, 0x08, 0xf6, 0x4c, 0x11, 0x52, 0xe2, 0xb9, 0xbe, 0x33, 0xf4, 0x83, 0x48, 0x1c, 0x93, 0xe5, 0x32, 0x8d, 0xc5, 0x43, 0x5b, 0x31, 0xce, 0x61, 0xff, 0x2e, 0xb5, 0xc4, 0xd9, 0x5b, 0x78, 0x46, 0x1c, 0x27, 0xa4, 0x0e, 0x11, 0x74, 0x6e, 0x25, 0x35, 0x71, 0x5e, 0xe2, 0xe0, 0xee, 0xae, 0x9f, 0x13, 0x69, 0x19, 0x1c, 0x63, 0x08, 0x28, 0xd5, 0x98, 0x92, 0x90, 0x78, 0x54, 0xd0, 0x50, 0x65, 0x3e, 0x57, 0xaa, 0xce, 0xd2, 0xae, 0xeb, 0x0b, 0x1a, 0xfe, 0x20, 0x32, 0x35, 0x49, 0x0a, 0x21, 0x85, 0x2e, 0xb8, 0xf1, 0xbb, 0x90, 0xeb, 0x70, 0x12, 0x89, 0x1b, 0x86, 0xff, 0xf5, 0x3b, 0xf8, 0x02, 0x59, 0x4e, 0xac, 0x20, 0x6b, 0x55, 0x2f, 0xb4, 0x8b, 0x9d, 0x7a, 0xaf, 0xbd, 0xa9, 0x72, 0xdb, 0x12, 0x46, 0xe1, 0x6d, 0x9b, 0x0f, 0xfe, 0x6a, 0xfe, 0xcb, 0x98, 0x8f, 0xe1, 0xf9, 0x9d, 0x63, 0xff, 0xcb, 0xcc, 0x1b, 0x9f, 0x61, 0x07, 0x53, 0x9b, 0xf9, 0x3e, 0xb5, 0x85, 0x1a, 0x16, 0x47, 0xef, 0x40, 0xf7, 0xc8, 0x95, 0x15, 0xa6, 0xb0, 0x35, 0x23, 0xf6, 0x37, 0xb6, 0x58, 0x58, 0x1e, 0x4f, 0xe3, 0xe5, 0x91, 0xab, 0xac, 0xea, 0x28, 0x7e, 0x3d, 0xe3, 0xc6, 0x29, 0x6c, 0x67, 0xe8, 0xd0, 0x5f, 0x30, 0xf4, 0x14, 0x2a, 0x01, 0xe1, 0x9c, 0xc6, 0xcd, 0xd4, 0x70, 0x72, 0x43, 0x07, 0x00, 0x39, 0x4d, 0xb9, 0xd4, 0x32, 0xde, 0x9a, 0xa5, 0x3a, 0x87, 0x1f, 0xa1, 0x9e, 0x4b, 0x06, 0x6a, 0x41, 0xe3, 0x78, 0x72, 0x36, 0xc5, 0x03, 0xd3, 0xec, 0x1f, 0x8d, 0x06, 0xad, 0x47, 0x08, 0x41, 0xf3, 0x62, 0xbc, 0x81, 0x69, 0x08, 0xa0, 0x82, 0xfb, 0xe3, 0x93, 0xc9, 0x59, 0xab, 0x70, 0xd8, 0x83, 0x9d, 0x1b, 0xfb, 0x40, 0x35, 0x28, 0x8d, 0x27, 0x63, 0x59, 0x5c, 0x83, 0xd2, 0xa7, 0xaf, 0xc3, 0x69, 0x4b, 0x43, 0x75, 0xa8, 0x9e, 0x0c, 0x4e, 0x47, 0xfd, 0xf3, 0x41, 0xab, 0x30, 0xab, 0xa8, 0xbf, 0x9a, 0x37, 0x7f, 0x02, 0x00, 0x00, 0xff, 0xff, 0xc2, 0x6a, 0xce, 0x1e, 0x7c, 0x06, 0x00, 0x00, } grpc-go-1.22.1/benchmark/grpc_testing/messages.proto000066400000000000000000000110451351635773100224650ustar00rootroot00000000000000// Copyright 2016 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. 
// You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. // Message definitions to be used by integration test service definitions. syntax = "proto3"; package grpc.testing; // The type of payload that should be returned. enum PayloadType { // Compressable text format. COMPRESSABLE = 0; // Uncompressable binary format. UNCOMPRESSABLE = 1; // Randomly chosen from all other formats defined in this enum. RANDOM = 2; } // Compression algorithms enum CompressionType { // No compression NONE = 0; GZIP = 1; DEFLATE = 2; } // A block of data, to simply increase gRPC message size. message Payload { // The type of data in body. PayloadType type = 1; // Primary contents of payload. bytes body = 2; } // A protobuf representation for grpc status. This is used by test // clients to specify a status that the server should attempt to return. message EchoStatus { int32 code = 1; string message = 2; } // Unary request. message SimpleRequest { // Desired payload type in the response from the server. // If response_type is RANDOM, server randomly chooses one from other formats. PayloadType response_type = 1; // Desired payload size in the response from the server. // If response_type is COMPRESSABLE, this denotes the size before compression. int32 response_size = 2; // Optional input payload sent along with the request. Payload payload = 3; // Whether SimpleResponse should include username. bool fill_username = 4; // Whether SimpleResponse should include OAuth scope. bool fill_oauth_scope = 5; // Compression algorithm to be used by the server for the response (stream) CompressionType response_compression = 6; // Whether server should return a given status EchoStatus response_status = 7; } // Unary response, as configured by the request. message SimpleResponse { // Payload to increase message size. Payload payload = 1; // The user the request came from, for verifying authentication was // successful when the client expected it. string username = 2; // OAuth scope. string oauth_scope = 3; } // Client-streaming request. message StreamingInputCallRequest { // Optional input payload sent along with the request. Payload payload = 1; // Not expecting any payload from the response. } // Client-streaming response. message StreamingInputCallResponse { // Aggregated size of payloads received from the client. int32 aggregated_payload_size = 1; } // Configuration for a particular response. message ResponseParameters { // Desired payload sizes in responses from the server. // If response_type is COMPRESSABLE, this denotes the size before compression. int32 size = 1; // Desired interval between consecutive responses in the response stream in // microseconds. int32 interval_us = 2; } // Server-streaming request. message StreamingOutputCallRequest { // Desired payload type in the response from the server. // If response_type is RANDOM, the payload from each response in the stream // might be of different types. This is to simulate a mixed type of payload // stream. PayloadType response_type = 1; // Configuration for each expected response message. repeated ResponseParameters response_parameters = 2; // Optional input payload sent along with the request. 
Payload payload = 3; // Compression algorithm to be used by the server for the response (stream) CompressionType response_compression = 6; // Whether server should return a given status EchoStatus response_status = 7; } // Server-streaming response, as configured by the request and parameters. message StreamingOutputCallResponse { // Payload to increase response size. Payload payload = 1; } // For reconnect interop test only. // Client tells server what reconnection parameters it used. message ReconnectParams { int32 max_reconnect_backoff_ms = 1; } // For reconnect interop test only. // Server tells client whether its reconnects are following the spec and the // reconnect backoffs it saw. message ReconnectInfo { bool passed = 1; repeated int32 backoff_ms = 2; } grpc-go-1.22.1/benchmark/grpc_testing/payloads.pb.go000066400000000000000000000303341351635773100223360ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: payloads.proto package grpc_testing import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type ByteBufferParams struct { ReqSize int32 `protobuf:"varint,1,opt,name=req_size,json=reqSize,proto3" json:"req_size,omitempty"` RespSize int32 `protobuf:"varint,2,opt,name=resp_size,json=respSize,proto3" json:"resp_size,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ByteBufferParams) Reset() { *m = ByteBufferParams{} } func (m *ByteBufferParams) String() string { return proto.CompactTextString(m) } func (*ByteBufferParams) ProtoMessage() {} func (*ByteBufferParams) Descriptor() ([]byte, []int) { return fileDescriptor_payloads_3abc71de35f06c83, []int{0} } func (m *ByteBufferParams) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ByteBufferParams.Unmarshal(m, b) } func (m *ByteBufferParams) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ByteBufferParams.Marshal(b, m, deterministic) } func (dst *ByteBufferParams) XXX_Merge(src proto.Message) { xxx_messageInfo_ByteBufferParams.Merge(dst, src) } func (m *ByteBufferParams) XXX_Size() int { return xxx_messageInfo_ByteBufferParams.Size(m) } func (m *ByteBufferParams) XXX_DiscardUnknown() { xxx_messageInfo_ByteBufferParams.DiscardUnknown(m) } var xxx_messageInfo_ByteBufferParams proto.InternalMessageInfo func (m *ByteBufferParams) GetReqSize() int32 { if m != nil { return m.ReqSize } return 0 } func (m *ByteBufferParams) GetRespSize() int32 { if m != nil { return m.RespSize } return 0 } type SimpleProtoParams struct { ReqSize int32 `protobuf:"varint,1,opt,name=req_size,json=reqSize,proto3" json:"req_size,omitempty"` RespSize int32 `protobuf:"varint,2,opt,name=resp_size,json=respSize,proto3" json:"resp_size,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *SimpleProtoParams) Reset() { *m = SimpleProtoParams{} } func (m *SimpleProtoParams) String() string { return proto.CompactTextString(m) } func (*SimpleProtoParams) ProtoMessage() {} func 
(*SimpleProtoParams) Descriptor() ([]byte, []int) { return fileDescriptor_payloads_3abc71de35f06c83, []int{1} } func (m *SimpleProtoParams) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_SimpleProtoParams.Unmarshal(m, b) } func (m *SimpleProtoParams) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_SimpleProtoParams.Marshal(b, m, deterministic) } func (dst *SimpleProtoParams) XXX_Merge(src proto.Message) { xxx_messageInfo_SimpleProtoParams.Merge(dst, src) } func (m *SimpleProtoParams) XXX_Size() int { return xxx_messageInfo_SimpleProtoParams.Size(m) } func (m *SimpleProtoParams) XXX_DiscardUnknown() { xxx_messageInfo_SimpleProtoParams.DiscardUnknown(m) } var xxx_messageInfo_SimpleProtoParams proto.InternalMessageInfo func (m *SimpleProtoParams) GetReqSize() int32 { if m != nil { return m.ReqSize } return 0 } func (m *SimpleProtoParams) GetRespSize() int32 { if m != nil { return m.RespSize } return 0 } type ComplexProtoParams struct { XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ComplexProtoParams) Reset() { *m = ComplexProtoParams{} } func (m *ComplexProtoParams) String() string { return proto.CompactTextString(m) } func (*ComplexProtoParams) ProtoMessage() {} func (*ComplexProtoParams) Descriptor() ([]byte, []int) { return fileDescriptor_payloads_3abc71de35f06c83, []int{2} } func (m *ComplexProtoParams) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ComplexProtoParams.Unmarshal(m, b) } func (m *ComplexProtoParams) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ComplexProtoParams.Marshal(b, m, deterministic) } func (dst *ComplexProtoParams) XXX_Merge(src proto.Message) { xxx_messageInfo_ComplexProtoParams.Merge(dst, src) } func (m *ComplexProtoParams) XXX_Size() int { return xxx_messageInfo_ComplexProtoParams.Size(m) } func (m *ComplexProtoParams) XXX_DiscardUnknown() { xxx_messageInfo_ComplexProtoParams.DiscardUnknown(m) } var xxx_messageInfo_ComplexProtoParams proto.InternalMessageInfo type PayloadConfig struct { // Types that are valid to be assigned to Payload: // *PayloadConfig_BytebufParams // *PayloadConfig_SimpleParams // *PayloadConfig_ComplexParams Payload isPayloadConfig_Payload `protobuf_oneof:"payload"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *PayloadConfig) Reset() { *m = PayloadConfig{} } func (m *PayloadConfig) String() string { return proto.CompactTextString(m) } func (*PayloadConfig) ProtoMessage() {} func (*PayloadConfig) Descriptor() ([]byte, []int) { return fileDescriptor_payloads_3abc71de35f06c83, []int{3} } func (m *PayloadConfig) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_PayloadConfig.Unmarshal(m, b) } func (m *PayloadConfig) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_PayloadConfig.Marshal(b, m, deterministic) } func (dst *PayloadConfig) XXX_Merge(src proto.Message) { xxx_messageInfo_PayloadConfig.Merge(dst, src) } func (m *PayloadConfig) XXX_Size() int { return xxx_messageInfo_PayloadConfig.Size(m) } func (m *PayloadConfig) XXX_DiscardUnknown() { xxx_messageInfo_PayloadConfig.DiscardUnknown(m) } var xxx_messageInfo_PayloadConfig proto.InternalMessageInfo type isPayloadConfig_Payload interface { isPayloadConfig_Payload() } type PayloadConfig_BytebufParams struct { BytebufParams *ByteBufferParams `protobuf:"bytes,1,opt,name=bytebuf_params,json=bytebufParams,proto3,oneof"` } type 
PayloadConfig_SimpleParams struct { SimpleParams *SimpleProtoParams `protobuf:"bytes,2,opt,name=simple_params,json=simpleParams,proto3,oneof"` } type PayloadConfig_ComplexParams struct { ComplexParams *ComplexProtoParams `protobuf:"bytes,3,opt,name=complex_params,json=complexParams,proto3,oneof"` } func (*PayloadConfig_BytebufParams) isPayloadConfig_Payload() {} func (*PayloadConfig_SimpleParams) isPayloadConfig_Payload() {} func (*PayloadConfig_ComplexParams) isPayloadConfig_Payload() {} func (m *PayloadConfig) GetPayload() isPayloadConfig_Payload { if m != nil { return m.Payload } return nil } func (m *PayloadConfig) GetBytebufParams() *ByteBufferParams { if x, ok := m.GetPayload().(*PayloadConfig_BytebufParams); ok { return x.BytebufParams } return nil } func (m *PayloadConfig) GetSimpleParams() *SimpleProtoParams { if x, ok := m.GetPayload().(*PayloadConfig_SimpleParams); ok { return x.SimpleParams } return nil } func (m *PayloadConfig) GetComplexParams() *ComplexProtoParams { if x, ok := m.GetPayload().(*PayloadConfig_ComplexParams); ok { return x.ComplexParams } return nil } // XXX_OneofFuncs is for the internal use of the proto package. func (*PayloadConfig) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _PayloadConfig_OneofMarshaler, _PayloadConfig_OneofUnmarshaler, _PayloadConfig_OneofSizer, []interface{}{ (*PayloadConfig_BytebufParams)(nil), (*PayloadConfig_SimpleParams)(nil), (*PayloadConfig_ComplexParams)(nil), } } func _PayloadConfig_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*PayloadConfig) // payload switch x := m.Payload.(type) { case *PayloadConfig_BytebufParams: b.EncodeVarint(1<<3 | proto.WireBytes) if err := b.EncodeMessage(x.BytebufParams); err != nil { return err } case *PayloadConfig_SimpleParams: b.EncodeVarint(2<<3 | proto.WireBytes) if err := b.EncodeMessage(x.SimpleParams); err != nil { return err } case *PayloadConfig_ComplexParams: b.EncodeVarint(3<<3 | proto.WireBytes) if err := b.EncodeMessage(x.ComplexParams); err != nil { return err } case nil: default: return fmt.Errorf("PayloadConfig.Payload has unexpected type %T", x) } return nil } func _PayloadConfig_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*PayloadConfig) switch tag { case 1: // payload.bytebuf_params if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(ByteBufferParams) err := b.DecodeMessage(msg) m.Payload = &PayloadConfig_BytebufParams{msg} return true, err case 2: // payload.simple_params if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(SimpleProtoParams) err := b.DecodeMessage(msg) m.Payload = &PayloadConfig_SimpleParams{msg} return true, err case 3: // payload.complex_params if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(ComplexProtoParams) err := b.DecodeMessage(msg) m.Payload = &PayloadConfig_ComplexParams{msg} return true, err default: return false, nil } } func _PayloadConfig_OneofSizer(msg proto.Message) (n int) { m := msg.(*PayloadConfig) // payload switch x := m.Payload.(type) { case *PayloadConfig_BytebufParams: s := proto.Size(x.BytebufParams) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *PayloadConfig_SimpleParams: s := proto.Size(x.SimpleParams) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case 
*PayloadConfig_ComplexParams: s := proto.Size(x.ComplexParams) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } func init() { proto.RegisterType((*ByteBufferParams)(nil), "grpc.testing.ByteBufferParams") proto.RegisterType((*SimpleProtoParams)(nil), "grpc.testing.SimpleProtoParams") proto.RegisterType((*ComplexProtoParams)(nil), "grpc.testing.ComplexProtoParams") proto.RegisterType((*PayloadConfig)(nil), "grpc.testing.PayloadConfig") } func init() { proto.RegisterFile("payloads.proto", fileDescriptor_payloads_3abc71de35f06c83) } var fileDescriptor_payloads_3abc71de35f06c83 = []byte{ // 254 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0xe2, 0x2b, 0x48, 0xac, 0xcc, 0xc9, 0x4f, 0x4c, 0x29, 0xd6, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0xe2, 0x49, 0x2f, 0x2a, 0x48, 0xd6, 0x2b, 0x49, 0x2d, 0x2e, 0xc9, 0xcc, 0x4b, 0x57, 0xf2, 0xe2, 0x12, 0x70, 0xaa, 0x2c, 0x49, 0x75, 0x2a, 0x4d, 0x4b, 0x4b, 0x2d, 0x0a, 0x48, 0x2c, 0x4a, 0xcc, 0x2d, 0x16, 0x92, 0xe4, 0xe2, 0x28, 0x4a, 0x2d, 0x8c, 0x2f, 0xce, 0xac, 0x4a, 0x95, 0x60, 0x54, 0x60, 0xd4, 0x60, 0x0d, 0x62, 0x2f, 0x4a, 0x2d, 0x0c, 0xce, 0xac, 0x4a, 0x15, 0x92, 0xe6, 0xe2, 0x2c, 0x4a, 0x2d, 0x2e, 0x80, 0xc8, 0x31, 0x81, 0xe5, 0x38, 0x40, 0x02, 0x20, 0x49, 0x25, 0x6f, 0x2e, 0xc1, 0xe0, 0xcc, 0xdc, 0x82, 0x9c, 0xd4, 0x00, 0x90, 0x45, 0x14, 0x1a, 0x26, 0xc2, 0x25, 0xe4, 0x9c, 0x0f, 0x32, 0xac, 0x02, 0xc9, 0x34, 0xa5, 0x6f, 0x8c, 0x5c, 0xbc, 0x01, 0x10, 0xff, 0x38, 0xe7, 0xe7, 0xa5, 0x65, 0xa6, 0x0b, 0xb9, 0x73, 0xf1, 0x25, 0x55, 0x96, 0xa4, 0x26, 0x95, 0xa6, 0xc5, 0x17, 0x80, 0xd5, 0x80, 0x6d, 0xe1, 0x36, 0x92, 0xd3, 0x43, 0xf6, 0xa7, 0x1e, 0xba, 0x27, 0x3d, 0x18, 0x82, 0x78, 0xa1, 0xfa, 0xa0, 0x0e, 0x75, 0xe3, 0xe2, 0x2d, 0x06, 0xbb, 0x1e, 0x66, 0x0e, 0x13, 0xd8, 0x1c, 0x79, 0x54, 0x73, 0x30, 0x3c, 0xe8, 0xc1, 0x10, 0xc4, 0x03, 0xd1, 0x07, 0x35, 0xc7, 0x93, 0x8b, 0x2f, 0x19, 0xe2, 0x70, 0x98, 0x41, 0xcc, 0x60, 0x83, 0x14, 0x50, 0x0d, 0xc2, 0xf4, 0x1c, 0xc8, 0x49, 0x50, 0x9d, 0x10, 0x01, 0x27, 0x4e, 0x2e, 0x76, 0x68, 0xe4, 0x25, 0xb1, 0x81, 0x23, 0xcf, 0x18, 0x10, 0x00, 0x00, 0xff, 0xff, 0xb0, 0x8c, 0x18, 0x4e, 0xce, 0x01, 0x00, 0x00, } grpc-go-1.22.1/benchmark/grpc_testing/payloads.proto000066400000000000000000000021161351635773100224710ustar00rootroot00000000000000// Copyright 2016 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. 
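// Illustrative sketch (not part of the original sources): how the PayloadConfig
// oneof generated in payloads.pb.go above is typically populated and inspected.
// The import alias testpb, the import path, and the 1024-byte sizes are
// assumptions for illustration; the wrapper types (PayloadConfig_SimpleParams,
// PayloadConfig_BytebufParams) and the nil-safe getters are the ones generated
// above.
//
//	package main
//
//	import testpb "google.golang.org/grpc/benchmark/grpc_testing"
//
//	func main() {
//		cfg := &testpb.PayloadConfig{
//			Payload: &testpb.PayloadConfig_SimpleParams{
//				SimpleParams: &testpb.SimpleProtoParams{ReqSize: 1024, RespSize: 1024},
//			},
//		}
//		// The oneof is read back by switching on the wrapper type.
//		switch p := cfg.Payload.(type) {
//		case *testpb.PayloadConfig_SimpleParams:
//			_ = p.SimpleParams.GetReqSize() // 1024
//		case *testpb.PayloadConfig_BytebufParams:
//			_ = p.BytebufParams.GetRespSize()
//		default:
//			// no payload configuration was set
//		}
//	}
//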
syntax = "proto3"; package grpc.testing; message ByteBufferParams { int32 req_size = 1; int32 resp_size = 2; } message SimpleProtoParams { int32 req_size = 1; int32 resp_size = 2; } message ComplexProtoParams { // TODO (vpai): Fill this in once the details of complex, representative // protos are decided } message PayloadConfig { oneof payload { ByteBufferParams bytebuf_params = 1; SimpleProtoParams simple_params = 2; ComplexProtoParams complex_params = 3; } } grpc-go-1.22.1/benchmark/grpc_testing/services.pb.go000066400000000000000000000431301351635773100223430ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: services.proto package grpc_testing import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import ( context "golang.org/x/net/context" grpc "google.golang.org/grpc" ) // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package // Reference imports to suppress errors if they are not otherwise used. var _ context.Context var _ grpc.ClientConn // This is a compile-time assertion to ensure that this generated file // is compatible with the grpc package it is being compiled against. const _ = grpc.SupportPackageIsVersion4 // BenchmarkServiceClient is the client API for BenchmarkService service. // // For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream. type BenchmarkServiceClient interface { // One request followed by one response. // The server returns the client payload as-is. UnaryCall(ctx context.Context, in *SimpleRequest, opts ...grpc.CallOption) (*SimpleResponse, error) // One request followed by one response. // The server returns the client payload as-is. StreamingCall(ctx context.Context, opts ...grpc.CallOption) (BenchmarkService_StreamingCallClient, error) // Unconstrainted streaming. // Both server and client keep sending & receiving simultaneously. UnconstrainedStreamingCall(ctx context.Context, opts ...grpc.CallOption) (BenchmarkService_UnconstrainedStreamingCallClient, error) } type benchmarkServiceClient struct { cc *grpc.ClientConn } func NewBenchmarkServiceClient(cc *grpc.ClientConn) BenchmarkServiceClient { return &benchmarkServiceClient{cc} } func (c *benchmarkServiceClient) UnaryCall(ctx context.Context, in *SimpleRequest, opts ...grpc.CallOption) (*SimpleResponse, error) { out := new(SimpleResponse) err := c.cc.Invoke(ctx, "/grpc.testing.BenchmarkService/UnaryCall", in, out, opts...) if err != nil { return nil, err } return out, nil } func (c *benchmarkServiceClient) StreamingCall(ctx context.Context, opts ...grpc.CallOption) (BenchmarkService_StreamingCallClient, error) { stream, err := c.cc.NewStream(ctx, &_BenchmarkService_serviceDesc.Streams[0], "/grpc.testing.BenchmarkService/StreamingCall", opts...) 
if err != nil { return nil, err } x := &benchmarkServiceStreamingCallClient{stream} return x, nil } type BenchmarkService_StreamingCallClient interface { Send(*SimpleRequest) error Recv() (*SimpleResponse, error) grpc.ClientStream } type benchmarkServiceStreamingCallClient struct { grpc.ClientStream } func (x *benchmarkServiceStreamingCallClient) Send(m *SimpleRequest) error { return x.ClientStream.SendMsg(m) } func (x *benchmarkServiceStreamingCallClient) Recv() (*SimpleResponse, error) { m := new(SimpleResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func (c *benchmarkServiceClient) UnconstrainedStreamingCall(ctx context.Context, opts ...grpc.CallOption) (BenchmarkService_UnconstrainedStreamingCallClient, error) { stream, err := c.cc.NewStream(ctx, &_BenchmarkService_serviceDesc.Streams[1], "/grpc.testing.BenchmarkService/UnconstrainedStreamingCall", opts...) if err != nil { return nil, err } x := &benchmarkServiceUnconstrainedStreamingCallClient{stream} return x, nil } type BenchmarkService_UnconstrainedStreamingCallClient interface { Send(*SimpleRequest) error Recv() (*SimpleResponse, error) grpc.ClientStream } type benchmarkServiceUnconstrainedStreamingCallClient struct { grpc.ClientStream } func (x *benchmarkServiceUnconstrainedStreamingCallClient) Send(m *SimpleRequest) error { return x.ClientStream.SendMsg(m) } func (x *benchmarkServiceUnconstrainedStreamingCallClient) Recv() (*SimpleResponse, error) { m := new(SimpleResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } // BenchmarkServiceServer is the server API for BenchmarkService service. type BenchmarkServiceServer interface { // One request followed by one response. // The server returns the client payload as-is. UnaryCall(context.Context, *SimpleRequest) (*SimpleResponse, error) // One request followed by one response. // The server returns the client payload as-is. StreamingCall(BenchmarkService_StreamingCallServer) error // Unconstrainted streaming. // Both server and client keep sending & receiving simultaneously. 
UnconstrainedStreamingCall(BenchmarkService_UnconstrainedStreamingCallServer) error } func RegisterBenchmarkServiceServer(s *grpc.Server, srv BenchmarkServiceServer) { s.RegisterService(&_BenchmarkService_serviceDesc, srv) } func _BenchmarkService_UnaryCall_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(SimpleRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(BenchmarkServiceServer).UnaryCall(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/grpc.testing.BenchmarkService/UnaryCall", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(BenchmarkServiceServer).UnaryCall(ctx, req.(*SimpleRequest)) } return interceptor(ctx, in, info, handler) } func _BenchmarkService_StreamingCall_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(BenchmarkServiceServer).StreamingCall(&benchmarkServiceStreamingCallServer{stream}) } type BenchmarkService_StreamingCallServer interface { Send(*SimpleResponse) error Recv() (*SimpleRequest, error) grpc.ServerStream } type benchmarkServiceStreamingCallServer struct { grpc.ServerStream } func (x *benchmarkServiceStreamingCallServer) Send(m *SimpleResponse) error { return x.ServerStream.SendMsg(m) } func (x *benchmarkServiceStreamingCallServer) Recv() (*SimpleRequest, error) { m := new(SimpleRequest) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func _BenchmarkService_UnconstrainedStreamingCall_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(BenchmarkServiceServer).UnconstrainedStreamingCall(&benchmarkServiceUnconstrainedStreamingCallServer{stream}) } type BenchmarkService_UnconstrainedStreamingCallServer interface { Send(*SimpleResponse) error Recv() (*SimpleRequest, error) grpc.ServerStream } type benchmarkServiceUnconstrainedStreamingCallServer struct { grpc.ServerStream } func (x *benchmarkServiceUnconstrainedStreamingCallServer) Send(m *SimpleResponse) error { return x.ServerStream.SendMsg(m) } func (x *benchmarkServiceUnconstrainedStreamingCallServer) Recv() (*SimpleRequest, error) { m := new(SimpleRequest) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } var _BenchmarkService_serviceDesc = grpc.ServiceDesc{ ServiceName: "grpc.testing.BenchmarkService", HandlerType: (*BenchmarkServiceServer)(nil), Methods: []grpc.MethodDesc{ { MethodName: "UnaryCall", Handler: _BenchmarkService_UnaryCall_Handler, }, }, Streams: []grpc.StreamDesc{ { StreamName: "StreamingCall", Handler: _BenchmarkService_StreamingCall_Handler, ServerStreams: true, ClientStreams: true, }, { StreamName: "UnconstrainedStreamingCall", Handler: _BenchmarkService_UnconstrainedStreamingCall_Handler, ServerStreams: true, ClientStreams: true, }, }, Metadata: "services.proto", } // WorkerServiceClient is the client API for WorkerService service. // // For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream. type WorkerServiceClient interface { // Start server with specified workload. // First request sent specifies the ServerConfig followed by ServerStatus // response. After that, a "Mark" can be sent anytime to request the latest // stats. Closing the stream will initiate shutdown of the test server // and once the shutdown has finished, the OK status is sent to terminate // this RPC. 
RunServer(ctx context.Context, opts ...grpc.CallOption) (WorkerService_RunServerClient, error) // Start client with specified workload. // First request sent specifies the ClientConfig followed by ClientStatus // response. After that, a "Mark" can be sent anytime to request the latest // stats. Closing the stream will initiate shutdown of the test client // and once the shutdown has finished, the OK status is sent to terminate // this RPC. RunClient(ctx context.Context, opts ...grpc.CallOption) (WorkerService_RunClientClient, error) // Just return the core count - unary call CoreCount(ctx context.Context, in *CoreRequest, opts ...grpc.CallOption) (*CoreResponse, error) // Quit this worker QuitWorker(ctx context.Context, in *Void, opts ...grpc.CallOption) (*Void, error) } type workerServiceClient struct { cc *grpc.ClientConn } func NewWorkerServiceClient(cc *grpc.ClientConn) WorkerServiceClient { return &workerServiceClient{cc} } func (c *workerServiceClient) RunServer(ctx context.Context, opts ...grpc.CallOption) (WorkerService_RunServerClient, error) { stream, err := c.cc.NewStream(ctx, &_WorkerService_serviceDesc.Streams[0], "/grpc.testing.WorkerService/RunServer", opts...) if err != nil { return nil, err } x := &workerServiceRunServerClient{stream} return x, nil } type WorkerService_RunServerClient interface { Send(*ServerArgs) error Recv() (*ServerStatus, error) grpc.ClientStream } type workerServiceRunServerClient struct { grpc.ClientStream } func (x *workerServiceRunServerClient) Send(m *ServerArgs) error { return x.ClientStream.SendMsg(m) } func (x *workerServiceRunServerClient) Recv() (*ServerStatus, error) { m := new(ServerStatus) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func (c *workerServiceClient) RunClient(ctx context.Context, opts ...grpc.CallOption) (WorkerService_RunClientClient, error) { stream, err := c.cc.NewStream(ctx, &_WorkerService_serviceDesc.Streams[1], "/grpc.testing.WorkerService/RunClient", opts...) if err != nil { return nil, err } x := &workerServiceRunClientClient{stream} return x, nil } type WorkerService_RunClientClient interface { Send(*ClientArgs) error Recv() (*ClientStatus, error) grpc.ClientStream } type workerServiceRunClientClient struct { grpc.ClientStream } func (x *workerServiceRunClientClient) Send(m *ClientArgs) error { return x.ClientStream.SendMsg(m) } func (x *workerServiceRunClientClient) Recv() (*ClientStatus, error) { m := new(ClientStatus) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func (c *workerServiceClient) CoreCount(ctx context.Context, in *CoreRequest, opts ...grpc.CallOption) (*CoreResponse, error) { out := new(CoreResponse) err := c.cc.Invoke(ctx, "/grpc.testing.WorkerService/CoreCount", in, out, opts...) if err != nil { return nil, err } return out, nil } func (c *workerServiceClient) QuitWorker(ctx context.Context, in *Void, opts ...grpc.CallOption) (*Void, error) { out := new(Void) err := c.cc.Invoke(ctx, "/grpc.testing.WorkerService/QuitWorker", in, out, opts...) if err != nil { return nil, err } return out, nil } // WorkerServiceServer is the server API for WorkerService service. type WorkerServiceServer interface { // Start server with specified workload. // First request sent specifies the ServerConfig followed by ServerStatus // response. After that, a "Mark" can be sent anytime to request the latest // stats. 
Closing the stream will initiate shutdown of the test server // and once the shutdown has finished, the OK status is sent to terminate // this RPC. RunServer(WorkerService_RunServerServer) error // Start client with specified workload. // First request sent specifies the ClientConfig followed by ClientStatus // response. After that, a "Mark" can be sent anytime to request the latest // stats. Closing the stream will initiate shutdown of the test client // and once the shutdown has finished, the OK status is sent to terminate // this RPC. RunClient(WorkerService_RunClientServer) error // Just return the core count - unary call CoreCount(context.Context, *CoreRequest) (*CoreResponse, error) // Quit this worker QuitWorker(context.Context, *Void) (*Void, error) } func RegisterWorkerServiceServer(s *grpc.Server, srv WorkerServiceServer) { s.RegisterService(&_WorkerService_serviceDesc, srv) } func _WorkerService_RunServer_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(WorkerServiceServer).RunServer(&workerServiceRunServerServer{stream}) } type WorkerService_RunServerServer interface { Send(*ServerStatus) error Recv() (*ServerArgs, error) grpc.ServerStream } type workerServiceRunServerServer struct { grpc.ServerStream } func (x *workerServiceRunServerServer) Send(m *ServerStatus) error { return x.ServerStream.SendMsg(m) } func (x *workerServiceRunServerServer) Recv() (*ServerArgs, error) { m := new(ServerArgs) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func _WorkerService_RunClient_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(WorkerServiceServer).RunClient(&workerServiceRunClientServer{stream}) } type WorkerService_RunClientServer interface { Send(*ClientStatus) error Recv() (*ClientArgs, error) grpc.ServerStream } type workerServiceRunClientServer struct { grpc.ServerStream } func (x *workerServiceRunClientServer) Send(m *ClientStatus) error { return x.ServerStream.SendMsg(m) } func (x *workerServiceRunClientServer) Recv() (*ClientArgs, error) { m := new(ClientArgs) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func _WorkerService_CoreCount_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(CoreRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(WorkerServiceServer).CoreCount(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/grpc.testing.WorkerService/CoreCount", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(WorkerServiceServer).CoreCount(ctx, req.(*CoreRequest)) } return interceptor(ctx, in, info, handler) } func _WorkerService_QuitWorker_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(Void) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(WorkerServiceServer).QuitWorker(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/grpc.testing.WorkerService/QuitWorker", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(WorkerServiceServer).QuitWorker(ctx, req.(*Void)) } return interceptor(ctx, in, info, handler) } var _WorkerService_serviceDesc = grpc.ServiceDesc{ ServiceName: "grpc.testing.WorkerService", HandlerType: (*WorkerServiceServer)(nil), Methods: []grpc.MethodDesc{ { 
MethodName: "CoreCount", Handler: _WorkerService_CoreCount_Handler, }, { MethodName: "QuitWorker", Handler: _WorkerService_QuitWorker_Handler, }, }, Streams: []grpc.StreamDesc{ { StreamName: "RunServer", Handler: _WorkerService_RunServer_Handler, ServerStreams: true, ClientStreams: true, }, { StreamName: "RunClient", Handler: _WorkerService_RunClient_Handler, ServerStreams: true, ClientStreams: true, }, }, Metadata: "services.proto", } func init() { proto.RegisterFile("services.proto", fileDescriptor_services_e4655369b5d7f4d0) } var fileDescriptor_services_e4655369b5d7f4d0 = []byte{ // 271 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xac, 0x92, 0xc1, 0x4a, 0xc3, 0x40, 0x10, 0x86, 0x69, 0x0f, 0x42, 0x16, 0x53, 0x64, 0x4f, 0xba, 0xfa, 0x00, 0x9e, 0x82, 0x54, 0x5f, 0xc0, 0x06, 0x3d, 0x0a, 0x36, 0x54, 0x0f, 0x9e, 0xd6, 0x74, 0x88, 0x4b, 0x93, 0x99, 0x38, 0x33, 0x11, 0x7c, 0x02, 0x1f, 0xc1, 0xd7, 0x15, 0xb3, 0x56, 0x6a, 0xc8, 0xcd, 0x1e, 0xe7, 0xff, 0x86, 0x8f, 0xfd, 0x77, 0xd7, 0xcc, 0x04, 0xf8, 0x2d, 0x94, 0x20, 0x59, 0xcb, 0xa4, 0x64, 0x0f, 0x2b, 0x6e, 0xcb, 0x4c, 0x41, 0x34, 0x60, 0xe5, 0x66, 0x0d, 0x88, 0xf8, 0x6a, 0x4b, 0x5d, 0x5a, 0x12, 0x2a, 0x53, 0x1d, 0xc7, 0xf9, 0xc7, 0xd4, 0x1c, 0x2d, 0x00, 0xcb, 0x97, 0xc6, 0xf3, 0xa6, 0x88, 0x22, 0x7b, 0x6b, 0x92, 0x15, 0x7a, 0x7e, 0xcf, 0x7d, 0x5d, 0xdb, 0xd3, 0x6c, 0xd7, 0x97, 0x15, 0xa1, 0x69, 0x6b, 0x58, 0xc2, 0x6b, 0x07, 0xa2, 0xee, 0x6c, 0x1c, 0x4a, 0x4b, 0x28, 0x60, 0xef, 0x4c, 0x5a, 0x28, 0x83, 0x6f, 0x02, 0x56, 0xff, 0x74, 0x9d, 0x4f, 0x2e, 0x26, 0xf6, 0xc9, 0xb8, 0x15, 0x96, 0x84, 0xa2, 0xec, 0x03, 0xc2, 0x7a, 0x9f, 0xf2, 0xf9, 0xe7, 0xd4, 0xa4, 0x8f, 0xc4, 0x1b, 0xe0, 0xed, 0x35, 0xdc, 0x98, 0x64, 0xd9, 0xe1, 0xf7, 0x04, 0x6c, 0x8f, 0x07, 0x82, 0x3e, 0xbd, 0xe6, 0x4a, 0x9c, 0x1b, 0x23, 0x85, 0x7a, 0xed, 0xa4, 0x3f, 0x75, 0xd4, 0xe4, 0x75, 0x00, 0xd4, 0xa1, 0x26, 0xa6, 0x63, 0x9a, 0x48, 0x76, 0x34, 0x0b, 0x93, 0xe4, 0xc4, 0x90, 0x53, 0x87, 0x6a, 0x4f, 0x06, 0xcb, 0xc4, 0xbf, 0x4d, 0xdd, 0x18, 0xfa, 0x79, 0x90, 0x2b, 0x63, 0xee, 0xbb, 0xa0, 0xb1, 0xa6, 0xb5, 0x7f, 0x37, 0x1f, 0x28, 0xac, 0xdd, 0x48, 0xf6, 0x7c, 0xd0, 0x7f, 0x95, 0xcb, 0xaf, 0x00, 0x00, 0x00, 0xff, 0xff, 0x9a, 0xb4, 0x19, 0x36, 0x69, 0x02, 0x00, 0x00, } grpc-go-1.22.1/benchmark/grpc_testing/services.proto000066400000000000000000000045221351635773100225030ustar00rootroot00000000000000// Copyright 2016 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. // An integration test service that covers all the method signature permutations // of unary/streaming requests/responses. syntax = "proto3"; import "messages.proto"; import "control.proto"; package grpc.testing; service BenchmarkService { // One request followed by one response. // The server returns the client payload as-is. rpc UnaryCall(SimpleRequest) returns (SimpleResponse); // One request followed by one response. // The server returns the client payload as-is. 
rpc StreamingCall(stream SimpleRequest) returns (stream SimpleResponse); // Unconstrainted streaming. // Both server and client keep sending & receiving simultaneously. rpc UnconstrainedStreamingCall(stream SimpleRequest) returns (stream SimpleResponse); } service WorkerService { // Start server with specified workload. // First request sent specifies the ServerConfig followed by ServerStatus // response. After that, a "Mark" can be sent anytime to request the latest // stats. Closing the stream will initiate shutdown of the test server // and once the shutdown has finished, the OK status is sent to terminate // this RPC. rpc RunServer(stream ServerArgs) returns (stream ServerStatus); // Start client with specified workload. // First request sent specifies the ClientConfig followed by ClientStatus // response. After that, a "Mark" can be sent anytime to request the latest // stats. Closing the stream will initiate shutdown of the test client // and once the shutdown has finished, the OK status is sent to terminate // this RPC. rpc RunClient(stream ClientArgs) returns (stream ClientStatus); // Just return the core count - unary call rpc CoreCount(CoreRequest) returns (CoreResponse); // Quit this worker rpc QuitWorker(Void) returns (Void); } grpc-go-1.22.1/benchmark/grpc_testing/stats.pb.go000066400000000000000000000255721351635773100216700ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: stats.proto package grpc_testing import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. 
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type ServerStats struct { // wall clock time change in seconds since last reset TimeElapsed float64 `protobuf:"fixed64,1,opt,name=time_elapsed,json=timeElapsed,proto3" json:"time_elapsed,omitempty"` // change in user time (in seconds) used by the server since last reset TimeUser float64 `protobuf:"fixed64,2,opt,name=time_user,json=timeUser,proto3" json:"time_user,omitempty"` // change in server time (in seconds) used by the server process and all // threads since last reset TimeSystem float64 `protobuf:"fixed64,3,opt,name=time_system,json=timeSystem,proto3" json:"time_system,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ServerStats) Reset() { *m = ServerStats{} } func (m *ServerStats) String() string { return proto.CompactTextString(m) } func (*ServerStats) ProtoMessage() {} func (*ServerStats) Descriptor() ([]byte, []int) { return fileDescriptor_stats_8ba831c0cb3c3440, []int{0} } func (m *ServerStats) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ServerStats.Unmarshal(m, b) } func (m *ServerStats) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ServerStats.Marshal(b, m, deterministic) } func (dst *ServerStats) XXX_Merge(src proto.Message) { xxx_messageInfo_ServerStats.Merge(dst, src) } func (m *ServerStats) XXX_Size() int { return xxx_messageInfo_ServerStats.Size(m) } func (m *ServerStats) XXX_DiscardUnknown() { xxx_messageInfo_ServerStats.DiscardUnknown(m) } var xxx_messageInfo_ServerStats proto.InternalMessageInfo func (m *ServerStats) GetTimeElapsed() float64 { if m != nil { return m.TimeElapsed } return 0 } func (m *ServerStats) GetTimeUser() float64 { if m != nil { return m.TimeUser } return 0 } func (m *ServerStats) GetTimeSystem() float64 { if m != nil { return m.TimeSystem } return 0 } // Histogram params based on grpc/support/histogram.c type HistogramParams struct { Resolution float64 `protobuf:"fixed64,1,opt,name=resolution,proto3" json:"resolution,omitempty"` MaxPossible float64 `protobuf:"fixed64,2,opt,name=max_possible,json=maxPossible,proto3" json:"max_possible,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *HistogramParams) Reset() { *m = HistogramParams{} } func (m *HistogramParams) String() string { return proto.CompactTextString(m) } func (*HistogramParams) ProtoMessage() {} func (*HistogramParams) Descriptor() ([]byte, []int) { return fileDescriptor_stats_8ba831c0cb3c3440, []int{1} } func (m *HistogramParams) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_HistogramParams.Unmarshal(m, b) } func (m *HistogramParams) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_HistogramParams.Marshal(b, m, deterministic) } func (dst *HistogramParams) XXX_Merge(src proto.Message) { xxx_messageInfo_HistogramParams.Merge(dst, src) } func (m *HistogramParams) XXX_Size() int { return xxx_messageInfo_HistogramParams.Size(m) } func (m *HistogramParams) XXX_DiscardUnknown() { xxx_messageInfo_HistogramParams.DiscardUnknown(m) } var xxx_messageInfo_HistogramParams proto.InternalMessageInfo func (m *HistogramParams) GetResolution() float64 { if m != nil { return m.Resolution } return 0 } func (m *HistogramParams) GetMaxPossible() float64 { if m != nil { return m.MaxPossible } return 0 } // Histogram data based on grpc/support/histogram.c type HistogramData struct { Bucket 
[]uint32 `protobuf:"varint,1,rep,packed,name=bucket,proto3" json:"bucket,omitempty"` MinSeen float64 `protobuf:"fixed64,2,opt,name=min_seen,json=minSeen,proto3" json:"min_seen,omitempty"` MaxSeen float64 `protobuf:"fixed64,3,opt,name=max_seen,json=maxSeen,proto3" json:"max_seen,omitempty"` Sum float64 `protobuf:"fixed64,4,opt,name=sum,proto3" json:"sum,omitempty"` SumOfSquares float64 `protobuf:"fixed64,5,opt,name=sum_of_squares,json=sumOfSquares,proto3" json:"sum_of_squares,omitempty"` Count float64 `protobuf:"fixed64,6,opt,name=count,proto3" json:"count,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *HistogramData) Reset() { *m = HistogramData{} } func (m *HistogramData) String() string { return proto.CompactTextString(m) } func (*HistogramData) ProtoMessage() {} func (*HistogramData) Descriptor() ([]byte, []int) { return fileDescriptor_stats_8ba831c0cb3c3440, []int{2} } func (m *HistogramData) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_HistogramData.Unmarshal(m, b) } func (m *HistogramData) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_HistogramData.Marshal(b, m, deterministic) } func (dst *HistogramData) XXX_Merge(src proto.Message) { xxx_messageInfo_HistogramData.Merge(dst, src) } func (m *HistogramData) XXX_Size() int { return xxx_messageInfo_HistogramData.Size(m) } func (m *HistogramData) XXX_DiscardUnknown() { xxx_messageInfo_HistogramData.DiscardUnknown(m) } var xxx_messageInfo_HistogramData proto.InternalMessageInfo func (m *HistogramData) GetBucket() []uint32 { if m != nil { return m.Bucket } return nil } func (m *HistogramData) GetMinSeen() float64 { if m != nil { return m.MinSeen } return 0 } func (m *HistogramData) GetMaxSeen() float64 { if m != nil { return m.MaxSeen } return 0 } func (m *HistogramData) GetSum() float64 { if m != nil { return m.Sum } return 0 } func (m *HistogramData) GetSumOfSquares() float64 { if m != nil { return m.SumOfSquares } return 0 } func (m *HistogramData) GetCount() float64 { if m != nil { return m.Count } return 0 } type ClientStats struct { // Latency histogram. Data points are in nanoseconds. Latencies *HistogramData `protobuf:"bytes,1,opt,name=latencies,proto3" json:"latencies,omitempty"` // See ServerStats for details. 
TimeElapsed float64 `protobuf:"fixed64,2,opt,name=time_elapsed,json=timeElapsed,proto3" json:"time_elapsed,omitempty"` TimeUser float64 `protobuf:"fixed64,3,opt,name=time_user,json=timeUser,proto3" json:"time_user,omitempty"` TimeSystem float64 `protobuf:"fixed64,4,opt,name=time_system,json=timeSystem,proto3" json:"time_system,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ClientStats) Reset() { *m = ClientStats{} } func (m *ClientStats) String() string { return proto.CompactTextString(m) } func (*ClientStats) ProtoMessage() {} func (*ClientStats) Descriptor() ([]byte, []int) { return fileDescriptor_stats_8ba831c0cb3c3440, []int{3} } func (m *ClientStats) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ClientStats.Unmarshal(m, b) } func (m *ClientStats) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ClientStats.Marshal(b, m, deterministic) } func (dst *ClientStats) XXX_Merge(src proto.Message) { xxx_messageInfo_ClientStats.Merge(dst, src) } func (m *ClientStats) XXX_Size() int { return xxx_messageInfo_ClientStats.Size(m) } func (m *ClientStats) XXX_DiscardUnknown() { xxx_messageInfo_ClientStats.DiscardUnknown(m) } var xxx_messageInfo_ClientStats proto.InternalMessageInfo func (m *ClientStats) GetLatencies() *HistogramData { if m != nil { return m.Latencies } return nil } func (m *ClientStats) GetTimeElapsed() float64 { if m != nil { return m.TimeElapsed } return 0 } func (m *ClientStats) GetTimeUser() float64 { if m != nil { return m.TimeUser } return 0 } func (m *ClientStats) GetTimeSystem() float64 { if m != nil { return m.TimeSystem } return 0 } func init() { proto.RegisterType((*ServerStats)(nil), "grpc.testing.ServerStats") proto.RegisterType((*HistogramParams)(nil), "grpc.testing.HistogramParams") proto.RegisterType((*HistogramData)(nil), "grpc.testing.HistogramData") proto.RegisterType((*ClientStats)(nil), "grpc.testing.ClientStats") } func init() { proto.RegisterFile("stats.proto", fileDescriptor_stats_8ba831c0cb3c3440) } var fileDescriptor_stats_8ba831c0cb3c3440 = []byte{ // 341 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x84, 0x92, 0xc1, 0x4a, 0xeb, 0x40, 0x14, 0x86, 0x49, 0xd3, 0xf6, 0xb6, 0x27, 0xed, 0xbd, 0x97, 0x41, 0x24, 0x52, 0xd0, 0x1a, 0x5c, 0x74, 0x95, 0x85, 0xae, 0x5c, 0xab, 0xe0, 0xce, 0xd2, 0xe8, 0x3a, 0x4c, 0xe3, 0x69, 0x19, 0xcc, 0xcc, 0xc4, 0x39, 0x33, 0x12, 0x1f, 0x49, 0x7c, 0x49, 0xc9, 0x24, 0x68, 0x55, 0xd0, 0x5d, 0xe6, 0xfb, 0x7e, 0xe6, 0xe4, 0xe4, 0x0f, 0x44, 0x64, 0xb9, 0xa5, 0xb4, 0x32, 0xda, 0x6a, 0x36, 0xd9, 0x9a, 0xaa, 0x48, 0x2d, 0x92, 0x15, 0x6a, 0x9b, 0x28, 0x88, 0x32, 0x34, 0x4f, 0x68, 0xb2, 0x26, 0xc2, 0x8e, 0x61, 0x62, 0x85, 0xc4, 0x1c, 0x4b, 0x5e, 0x11, 0xde, 0xc7, 0xc1, 0x3c, 0x58, 0x04, 0xab, 0xa8, 0x61, 0x57, 0x2d, 0x62, 0x33, 0x18, 0xfb, 0x88, 0x23, 0x34, 0x71, 0xcf, 0xfb, 0x51, 0x03, 0xee, 0x08, 0x0d, 0x3b, 0x02, 0x9f, 0xcd, 0xe9, 0x99, 0x2c, 0xca, 0x38, 0xf4, 0x1a, 0x1a, 0x94, 0x79, 0x92, 0xdc, 0xc2, 0xbf, 0x6b, 0x41, 0x56, 0x6f, 0x0d, 0x97, 0x4b, 0x6e, 0xb8, 0x24, 0x76, 0x08, 0x60, 0x90, 0x74, 0xe9, 0xac, 0xd0, 0xaa, 0x9b, 0xb8, 0x43, 0x9a, 0x77, 0x92, 0xbc, 0xce, 0x2b, 0x4d, 0x24, 0xd6, 0x25, 0x76, 0x33, 0x23, 0xc9, 0xeb, 0x65, 0x87, 0x92, 0xd7, 0x00, 0xa6, 0xef, 0xd7, 0x5e, 0x72, 0xcb, 0xd9, 0x3e, 0x0c, 0xd7, 0xae, 0x78, 0x40, 0x1b, 0x07, 0xf3, 0x70, 0x31, 0x5d, 0x75, 0x27, 0x76, 0x00, 0x23, 0x29, 0x54, 0x4e, 0x88, 0xaa, 0xbb, 0xe8, 0x8f, 0x14, 0x2a, 0x43, 
0x54, 0x5e, 0xf1, 0xba, 0x55, 0x61, 0xa7, 0x78, 0xed, 0xd5, 0x7f, 0x08, 0xc9, 0xc9, 0xb8, 0xef, 0x69, 0xf3, 0xc8, 0x4e, 0xe0, 0x2f, 0x39, 0x99, 0xeb, 0x4d, 0x4e, 0x8f, 0x8e, 0x1b, 0xa4, 0x78, 0xe0, 0xe5, 0x84, 0x9c, 0xbc, 0xd9, 0x64, 0x2d, 0x63, 0x7b, 0x30, 0x28, 0xb4, 0x53, 0x36, 0x1e, 0x7a, 0xd9, 0x1e, 0x92, 0x97, 0x00, 0xa2, 0x8b, 0x52, 0xa0, 0xb2, 0xed, 0x47, 0x3f, 0x87, 0x71, 0xc9, 0x2d, 0xaa, 0x42, 0x20, 0xf9, 0xfd, 0xa3, 0xd3, 0x59, 0xba, 0xdb, 0x52, 0xfa, 0x69, 0xb7, 0xd5, 0x47, 0xfa, 0x5b, 0x5f, 0xbd, 0x5f, 0xfa, 0x0a, 0x7f, 0xee, 0xab, 0xff, 0xb5, 0xaf, 0xf5, 0xd0, 0xff, 0x34, 0x67, 0x6f, 0x01, 0x00, 0x00, 0xff, 0xff, 0xea, 0x75, 0x34, 0x90, 0x43, 0x02, 0x00, 0x00, } grpc-go-1.22.1/benchmark/grpc_testing/stats.proto000066400000000000000000000031461351635773100220170ustar00rootroot00000000000000// Copyright 2016 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. syntax = "proto3"; package grpc.testing; message ServerStats { // wall clock time change in seconds since last reset double time_elapsed = 1; // change in user time (in seconds) used by the server since last reset double time_user = 2; // change in server time (in seconds) used by the server process and all // threads since last reset double time_system = 3; } // Histogram params based on grpc/support/histogram.c message HistogramParams { double resolution = 1; // first bucket is [0, 1 + resolution) double max_possible = 2; // use enough buckets to allow this value } // Histogram data based on grpc/support/histogram.c message HistogramData { repeated uint32 bucket = 1; double min_seen = 2; double max_seen = 3; double sum = 4; double sum_of_squares = 5; double count = 6; } message ClientStats { // Latency histogram. Data points are in nanoseconds. HistogramData latencies = 1; // See ServerStats for details. double time_elapsed = 2; double time_user = 3; double time_system = 4; } grpc-go-1.22.1/benchmark/latency/000077500000000000000000000000001351635773100165375ustar00rootroot00000000000000grpc-go-1.22.1/benchmark/latency/latency.go000066400000000000000000000224771351635773100205410ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package latency provides wrappers for net.Conn, net.Listener, and // net.Dialers, designed to interoperate to inject real-world latency into // network connections. package latency import ( "bytes" "context" "encoding/binary" "fmt" "io" "net" "time" ) // Dialer is a function matching the signature of net.Dial. 
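// As an illustrative (hypothetical) assignment, the standard library's
// net.Dial satisfies this type directly:
//
//	var d Dialer = net.Dial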
type Dialer func(network, address string) (net.Conn, error) // TimeoutDialer is a function matching the signature of net.DialTimeout. type TimeoutDialer func(network, address string, timeout time.Duration) (net.Conn, error) // ContextDialer is a function matching the signature of // net.Dialer.DialContext. type ContextDialer func(ctx context.Context, network, address string) (net.Conn, error) // Network represents a network with the given bandwidth, latency, and MTU // (Maximum Transmission Unit) configuration, and can produce wrappers of // net.Listeners, net.Conn, and various forms of dialing functions. The // Listeners and Dialers/Conns on both sides of connections must come from this // package, but need not be created from the same Network. Latency is computed // when sending (in Write), and is injected when receiving (in Read). This // allows senders' Write calls to be non-blocking, as in real-world // applications. // // Note: Latency is injected by the sender specifying the absolute time data // should be available, and the reader delaying until that time arrives to // provide the data. This package attempts to counter-act the effects of clock // drift and existing network latency by measuring the delay between the // sender's transmission time and the receiver's reception time during startup. // No attempt is made to measure the existing bandwidth of the connection. type Network struct { Kbps int // Kilobits per second; if non-positive, infinite Latency time.Duration // One-way latency (sending); if non-positive, no delay MTU int // Bytes per packet; if non-positive, infinite } var ( //Local simulates local network. Local = Network{0, 0, 0} //LAN simulates local area network network. LAN = Network{100 * 1024, 2 * time.Millisecond, 1500} //WAN simulates wide area network. WAN = Network{20 * 1024, 30 * time.Millisecond, 1500} //Longhaul simulates bad network. Longhaul = Network{1000 * 1024, 200 * time.Millisecond, 9000} ) // Conn returns a net.Conn that wraps c and injects n's latency into that // connection. This function also imposes latency for connection creation. // If n's Latency is lower than the measured latency in c, an error is // returned. func (n *Network) Conn(c net.Conn) (net.Conn, error) { start := now() nc := &conn{Conn: c, network: n, readBuf: new(bytes.Buffer)} if err := nc.sync(); err != nil { return nil, err } sleep(start.Add(nc.delay).Sub(now())) return nc, nil } type conn struct { net.Conn network *Network readBuf *bytes.Buffer // one packet worth of data received lastSendEnd time.Time // time the previous Write should be fully on the wire delay time.Duration // desired latency - measured latency } // header is sent before all data transmitted by the application. type header struct { ReadTime int64 // Time the reader is allowed to read this packet (UnixNano) Sz int32 // Size of the data in the packet } func (c *conn) Write(p []byte) (n int, err error) { tNow := now() if c.lastSendEnd.Before(tNow) { c.lastSendEnd = tNow } for len(p) > 0 { pkt := p if c.network.MTU > 0 && len(pkt) > c.network.MTU { pkt = pkt[:c.network.MTU] p = p[c.network.MTU:] } else { p = nil } if c.network.Kbps > 0 { if congestion := c.lastSendEnd.Sub(tNow) - c.delay; congestion > 0 { // The network is full; sleep until this packet can be sent. 
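// As a rough illustration (hypothetical numbers): with Kbps=8 (1 KB/s)
// and Latency=1s, roughly one bandwidth-delay product (1024 bytes) plus a
// single extra packet can be outstanding before lastSendEnd runs more than
// c.delay ahead of the clock and the writer has to block here.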
sleep(congestion) tNow = tNow.Add(congestion) } } c.lastSendEnd = c.lastSendEnd.Add(c.network.pktTime(len(pkt))) hdr := header{ReadTime: c.lastSendEnd.Add(c.delay).UnixNano(), Sz: int32(len(pkt))} if err := binary.Write(c.Conn, binary.BigEndian, hdr); err != nil { return n, err } x, err := c.Conn.Write(pkt) n += x if err != nil { return n, err } } return n, nil } func (c *conn) Read(p []byte) (n int, err error) { if c.readBuf.Len() == 0 { var hdr header if err := binary.Read(c.Conn, binary.BigEndian, &hdr); err != nil { return 0, err } defer func() { sleep(time.Unix(0, hdr.ReadTime).Sub(now())) }() if _, err := io.CopyN(c.readBuf, c.Conn, int64(hdr.Sz)); err != nil { return 0, err } } // Read from readBuf. return c.readBuf.Read(p) } // sync does a handshake and then measures the latency on the network in // coordination with the other side. func (c *conn) sync() error { const ( pingMsg = "syncPing" warmup = 10 // minimum number of iterations to measure latency giveUp = 50 // maximum number of iterations to measure latency accuracy = time.Millisecond // req'd accuracy to stop early goodRun = 3 // stop early if latency within accuracy this many times ) type syncMsg struct { SendT int64 // Time sent. If zero, stop. RecvT int64 // Time received. If zero, fill in and respond. } // A trivial handshake if err := binary.Write(c.Conn, binary.BigEndian, []byte(pingMsg)); err != nil { return err } var ping [8]byte if err := binary.Read(c.Conn, binary.BigEndian, &ping); err != nil { return err } else if string(ping[:]) != pingMsg { return fmt.Errorf("malformed handshake message: %v (want %q)", ping, pingMsg) } // Both sides are alive and syncing. Calculate network delay / clock skew. att := 0 good := 0 var latency time.Duration localDone, remoteDone := false, false send := true for !localDone || !remoteDone { if send { if err := binary.Write(c.Conn, binary.BigEndian, syncMsg{SendT: now().UnixNano()}); err != nil { return err } att++ send = false } // Block until we get a syncMsg m := syncMsg{} if err := binary.Read(c.Conn, binary.BigEndian, &m); err != nil { return err } if m.RecvT == 0 { // Message initiated from other side. if m.SendT == 0 { remoteDone = true continue } // Send response. m.RecvT = now().UnixNano() if err := binary.Write(c.Conn, binary.BigEndian, m); err != nil { return err } continue } lag := time.Duration(m.RecvT - m.SendT) latency += lag avgLatency := latency / time.Duration(att) if e := lag - avgLatency; e > -accuracy && e < accuracy { good++ } else { good = 0 } if att < giveUp && (att < warmup || good < goodRun) { send = true continue } localDone = true latency = avgLatency // Tell the other side we're done. if err := binary.Write(c.Conn, binary.BigEndian, syncMsg{}); err != nil { return err } } if c.network.Latency <= 0 { return nil } c.delay = c.network.Latency - latency if c.delay < 0 { return fmt.Errorf("measured network latency (%v) higher than desired latency (%v)", latency, c.network.Latency) } return nil } // Listener returns a net.Listener that wraps l and injects n's latency in its // connections. func (n *Network) Listener(l net.Listener) net.Listener { return &listener{Listener: l, network: n} } type listener struct { net.Listener network *Network } func (l *listener) Accept() (net.Conn, error) { c, err := l.Listener.Accept() if err != nil { return nil, err } return l.network.Conn(c) } // Dialer returns a Dialer that wraps d and injects n's latency in its // connections. n's Latency is also injected to the connection's creation. 
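// A minimal usage sketch (address and error handling are illustrative only):
//
//	n := &Network{Kbps: 100 * 1024, Latency: 2 * time.Millisecond, MTU: 1500} // the LAN profile's values
//	dial := n.Dialer(net.Dial)
//	conn, err := dial("tcp", "localhost:50051")
//	// If err is nil, conn sees an extra ~2ms of simulated one-way latency.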
func (n *Network) Dialer(d Dialer) Dialer { return func(network, address string) (net.Conn, error) { conn, err := d(network, address) if err != nil { return nil, err } return n.Conn(conn) } } // TimeoutDialer returns a TimeoutDialer that wraps d and injects n's latency // in its connections. n's Latency is also injected to the connection's // creation. func (n *Network) TimeoutDialer(d TimeoutDialer) TimeoutDialer { return func(network, address string, timeout time.Duration) (net.Conn, error) { conn, err := d(network, address, timeout) if err != nil { return nil, err } return n.Conn(conn) } } // ContextDialer returns a ContextDialer that wraps d and injects n's latency // in its connections. n's Latency is also injected to the connection's // creation. func (n *Network) ContextDialer(d ContextDialer) ContextDialer { return func(ctx context.Context, network, address string) (net.Conn, error) { conn, err := d(ctx, network, address) if err != nil { return nil, err } return n.Conn(conn) } } // pktTime returns the time it takes to transmit one packet of data of size b // in bytes. func (n *Network) pktTime(b int) time.Duration { if n.Kbps <= 0 { return time.Duration(0) } return time.Duration(b) * time.Second / time.Duration(n.Kbps*(1024/8)) } // Wrappers for testing var now = time.Now var sleep = time.Sleep grpc-go-1.22.1/benchmark/latency/latency_test.go000066400000000000000000000245361351635773100215760ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package latency import ( "bytes" "fmt" "net" "reflect" "sync" "testing" "time" ) // bufConn is a net.Conn implemented by a bytes.Buffer (which is a ReadWriter). type bufConn struct { *bytes.Buffer } func (bufConn) Close() error { panic("unimplemented") } func (bufConn) LocalAddr() net.Addr { panic("unimplemented") } func (bufConn) RemoteAddr() net.Addr { panic("unimplemented") } func (bufConn) SetDeadline(t time.Time) error { panic("unimplemneted") } func (bufConn) SetReadDeadline(t time.Time) error { panic("unimplemneted") } func (bufConn) SetWriteDeadline(t time.Time) error { panic("unimplemneted") } func restoreHooks() func() { s := sleep n := now return func() { sleep = s now = n } } func TestConn(t *testing.T) { defer restoreHooks()() // Constant time. now = func() time.Time { return time.Unix(123, 456) } // Capture sleep times for checking later. var sleepTimes []time.Duration sleep = func(t time.Duration) { sleepTimes = append(sleepTimes, t) } wantSleeps := func(want ...time.Duration) { if !reflect.DeepEqual(want, sleepTimes) { t.Fatalf("sleepTimes = %v; want %v", sleepTimes, want) } sleepTimes = nil } // Use a fairly high latency to cause a large BDP and avoid sleeps while // writing due to simulation of full buffers. latency := 1 * time.Second c, err := (&Network{Kbps: 1, Latency: latency, MTU: 5}).Conn(bufConn{&bytes.Buffer{}}) if err != nil { t.Fatalf("Unexpected error creating connection: %v", err) } wantSleeps(latency) // Connection creation delay. // 1 kbps = 128 Bps. 
Divides evenly by 1 second using nanos. byteLatency := time.Duration(time.Second / 128) write := func(b []byte) { n, err := c.Write(b) if n != len(b) || err != nil { t.Fatalf("c.Write(%v) = %v, %v; want %v, nil", b, n, err, len(b)) } } write([]byte{1, 2, 3, 4, 5}) // One full packet pkt1Time := latency + byteLatency*5 write([]byte{6}) // One partial packet pkt2Time := pkt1Time + byteLatency write([]byte{7, 8, 9, 10, 11, 12, 13}) // Two packets pkt3Time := pkt2Time + byteLatency*5 pkt4Time := pkt3Time + byteLatency*2 // No reads, so no sleeps yet. wantSleeps() read := func(n int, want []byte) { b := make([]byte, n) if rd, err := c.Read(b); err != nil || rd != len(want) { t.Fatalf("c.Read(<%v bytes>) = %v, %v; want %v, nil", n, rd, err, len(want)) } if !reflect.DeepEqual(b[:len(want)], want) { t.Fatalf("read %v; want %v", b, want) } } read(1, []byte{1}) wantSleeps(pkt1Time) read(1, []byte{2}) wantSleeps() read(3, []byte{3, 4, 5}) wantSleeps() read(2, []byte{6}) wantSleeps(pkt2Time) read(2, []byte{7, 8}) wantSleeps(pkt3Time) read(10, []byte{9, 10, 11}) wantSleeps() read(10, []byte{12, 13}) wantSleeps(pkt4Time) } func TestSync(t *testing.T) { defer restoreHooks()() // Infinitely fast CPU: time doesn't pass unless sleep is called. tn := time.Unix(123, 0) now = func() time.Time { return tn } sleep = func(d time.Duration) { tn = tn.Add(d) } // Simulate a 20ms latency network, then run sync across that and expect to // measure 20ms latency, or 10ms additional delay for a 30ms network. slowConn, err := (&Network{Kbps: 0, Latency: 20 * time.Millisecond, MTU: 5}).Conn(bufConn{&bytes.Buffer{}}) if err != nil { t.Fatalf("Unexpected error creating connection: %v", err) } c, err := (&Network{Latency: 30 * time.Millisecond}).Conn(slowConn) if err != nil { t.Fatalf("Unexpected error creating connection: %v", err) } if c.(*conn).delay != 10*time.Millisecond { t.Fatalf("c.delay = %v; want 10ms", c.(*conn).delay) } } func TestSyncTooSlow(t *testing.T) { defer restoreHooks()() // Infinitely fast CPU: time doesn't pass unless sleep is called. tn := time.Unix(123, 0) now = func() time.Time { return tn } sleep = func(d time.Duration) { tn = tn.Add(d) } // Simulate a 10ms latency network, then attempt to simulate a 5ms latency // network and expect an error. slowConn, err := (&Network{Kbps: 0, Latency: 10 * time.Millisecond, MTU: 5}).Conn(bufConn{&bytes.Buffer{}}) if err != nil { t.Fatalf("Unexpected error creating connection: %v", err) } errWant := "measured network latency (10ms) higher than desired latency (5ms)" if _, err := (&Network{Latency: 5 * time.Millisecond}).Conn(slowConn); err == nil || err.Error() != errWant { t.Fatalf("Conn() = _, %q; want _, %q", err, errWant) } } func TestListenerAndDialer(t *testing.T) { defer restoreHooks()() tn := time.Unix(123, 0) startTime := tn mu := &sync.Mutex{} now = func() time.Time { mu.Lock() defer mu.Unlock() return tn } // Use a fairly high latency to cause a large BDP and avoid sleeps while // writing due to simulation of full buffers. n := &Network{Kbps: 2, Latency: 1 * time.Second, MTU: 10} // 2 kbps = .25 kBps = 256 Bps byteLatency := func(n int) time.Duration { return time.Duration(n) * time.Second / 256 } // Create a real listener and wrap it. l, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Unexpected error creating listener: %v", err) } defer l.Close() l = n.Listener(l) var serverConn net.Conn var scErr error scDone := make(chan struct{}) go func() { serverConn, scErr = l.Accept() close(scDone) }() // Create a dialer and use it. 
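	// Note that the 2*time.Second timeout below applies only to the underlying
	// net.DialTimeout call; the simulated latency is added afterwards by n.Conn.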
clientConn, err := n.TimeoutDialer(net.DialTimeout)("tcp", l.Addr().String(), 2*time.Second) if err != nil { t.Fatalf("Unexpected error dialing: %v", err) } defer clientConn.Close() // Block until server's Conn is available. <-scDone if scErr != nil { t.Fatalf("Unexpected error listening: %v", scErr) } defer serverConn.Close() // sleep (only) advances tn. Done after connections established so sync detects zero delay. sleep = func(d time.Duration) { mu.Lock() defer mu.Unlock() if d > 0 { tn = tn.Add(d) } } seq := func(a, b int) []byte { buf := make([]byte, b-a) for i := 0; i < b-a; i++ { buf[i] = byte(i + a) } return buf } pkt1 := seq(0, 10) pkt2 := seq(10, 30) pkt3 := seq(30, 35) write := func(c net.Conn, b []byte) { n, err := c.Write(b) if n != len(b) || err != nil { t.Fatalf("c.Write(%v) = %v, %v; want %v, nil", b, n, err, len(b)) } } write(serverConn, pkt1) write(serverConn, pkt2) write(serverConn, pkt3) write(clientConn, pkt3) write(clientConn, pkt1) write(clientConn, pkt2) if tn != startTime { t.Fatalf("unexpected sleep in write; tn = %v; want %v", tn, startTime) } read := func(c net.Conn, n int, want []byte, timeWant time.Time) { b := make([]byte, n) if rd, err := c.Read(b); err != nil || rd != len(want) { t.Fatalf("c.Read(<%v bytes>) = %v, %v; want %v, nil (read: %v)", n, rd, err, len(want), b[:rd]) } if !reflect.DeepEqual(b[:len(want)], want) { t.Fatalf("read %v; want %v", b, want) } if !tn.Equal(timeWant) { t.Errorf("tn after read(%v) = %v; want %v", want, tn, timeWant) } } read(clientConn, len(pkt1)+1, pkt1, startTime.Add(n.Latency+byteLatency(len(pkt1)))) read(serverConn, len(pkt3)+1, pkt3, tn) // tn was advanced by the above read; pkt3 is shorter than pkt1 read(clientConn, len(pkt2), pkt2[:10], startTime.Add(n.Latency+byteLatency(len(pkt1)+10))) read(clientConn, len(pkt2), pkt2[10:], startTime.Add(n.Latency+byteLatency(len(pkt1)+len(pkt2)))) read(clientConn, len(pkt3), pkt3, startTime.Add(n.Latency+byteLatency(len(pkt1)+len(pkt2)+len(pkt3)))) read(serverConn, len(pkt1), pkt1, tn) // tn already past the arrival time due to prior reads read(serverConn, len(pkt2), pkt2[:10], tn) read(serverConn, len(pkt2), pkt2[10:], tn) // Sleep awhile and make sure the read happens disregarding previous writes // (lastSendEnd handling). sleep(10 * time.Second) write(clientConn, pkt1) read(serverConn, len(pkt1), pkt1, tn.Add(n.Latency+byteLatency(len(pkt1)))) // Send, sleep longer than the network delay, then make sure the read happens // instantly. write(serverConn, pkt1) sleep(10 * time.Second) read(clientConn, len(pkt1), pkt1, tn) } func TestBufferBloat(t *testing.T) { defer restoreHooks()() // Infinitely fast CPU: time doesn't pass unless sleep is called. tn := time.Unix(123, 0) now = func() time.Time { return tn } // Capture sleep times for checking later. var sleepTimes []time.Duration sleep = func(d time.Duration) { sleepTimes = append(sleepTimes, d) tn = tn.Add(d) } wantSleeps := func(want ...time.Duration) error { if !reflect.DeepEqual(want, sleepTimes) { return fmt.Errorf("sleepTimes = %v; want %v", sleepTimes, want) } sleepTimes = nil return nil } n := &Network{Kbps: 8 /* 1KBps */, Latency: time.Second, MTU: 8} bdpBytes := (n.Kbps * 1024 / 8) * int(n.Latency/time.Second) // 1024 c, err := n.Conn(bufConn{&bytes.Buffer{}}) if err != nil { t.Fatalf("Unexpected error creating connection: %v", err) } wantSleeps(n.Latency) // Connection creation delay. 
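	// From here on, a write only sleeps once the 1024-byte bandwidth-delay
	// product (bdpBytes) plus one extra packet is already in flight, while a
	// read sleeps until the packet's scheduled arrival time.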
write := func(n int, sleeps ...time.Duration) { if wt, err := c.Write(make([]byte, n)); err != nil || wt != n { t.Fatalf("c.Write(<%v bytes>) = %v, %v; want %v, nil", n, wt, err, n) } if err := wantSleeps(sleeps...); err != nil { t.Fatalf("After writing %v bytes: %v", n, err) } } read := func(n int, sleeps ...time.Duration) { if rd, err := c.Read(make([]byte, n)); err != nil || rd != n { t.Fatalf("c.Read(_) = %v, %v; want %v, nil", rd, err, n) } if err := wantSleeps(sleeps...); err != nil { t.Fatalf("After reading %v bytes: %v", n, err) } } write(8) // No reads and buffer not full, so no sleeps yet. read(8, time.Second+n.pktTime(8)) write(bdpBytes) // Fill the buffer. write(1) // We can send one extra packet even when the buffer is full. write(n.MTU, n.pktTime(1)) // Make sure we sleep to clear the previous write. write(1, n.pktTime(n.MTU)) write(n.MTU+1, n.pktTime(1), n.pktTime(n.MTU)) tn = tn.Add(10 * time.Second) // Wait long enough for the buffer to clear. write(bdpBytes) // No sleeps required. } grpc-go-1.22.1/benchmark/primitives/000077500000000000000000000000001351635773100172735ustar00rootroot00000000000000grpc-go-1.22.1/benchmark/primitives/code_string_test.go000066400000000000000000000064361351635773100231720ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package primitives_test import ( "strconv" "testing" "google.golang.org/grpc/codes" ) type codeBench uint32 const ( OK codeBench = iota Canceled Unknown InvalidArgument DeadlineExceeded NotFound AlreadyExists PermissionDenied ResourceExhausted FailedPrecondition Aborted OutOfRange Unimplemented Internal Unavailable DataLoss Unauthenticated ) // The following String() function was generated by stringer. 
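// It stores every code name in one concatenated string and slices it using
// the cumulative offsets in _Code_index; for example, _Code_name[2:10] is
// "Canceled" for code 1. The benchmarks below compare this against a
// map-based lookup and the switch-based codes.Code.String.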
const _Code_name = "OKCanceledUnknownInvalidArgumentDeadlineExceededNotFoundAlreadyExistsPermissionDeniedResourceExhaustedFailedPreconditionAbortedOutOfRangeUnimplementedInternalUnavailableDataLossUnauthenticated" var _Code_index = [...]uint8{0, 2, 10, 17, 32, 48, 56, 69, 85, 102, 120, 127, 137, 150, 158, 169, 177, 192} func (i codeBench) String() string { if i >= codeBench(len(_Code_index)-1) { return "Code(" + strconv.FormatInt(int64(i), 10) + ")" } return _Code_name[_Code_index[i]:_Code_index[i+1]] } var nameMap = map[codeBench]string{ OK: "OK", Canceled: "Canceled", Unknown: "Unknown", InvalidArgument: "InvalidArgument", DeadlineExceeded: "DeadlineExceeded", NotFound: "NotFound", AlreadyExists: "AlreadyExists", PermissionDenied: "PermissionDenied", ResourceExhausted: "ResourceExhausted", FailedPrecondition: "FailedPrecondition", Aborted: "Aborted", OutOfRange: "OutOfRange", Unimplemented: "Unimplemented", Internal: "Internal", Unavailable: "Unavailable", DataLoss: "DataLoss", Unauthenticated: "Unauthenticated", } func (i codeBench) StringUsingMap() string { if s, ok := nameMap[i]; ok { return s } return "Code(" + strconv.FormatInt(int64(i), 10) + ")" } func BenchmarkCodeStringStringer(b *testing.B) { b.ResetTimer() for i := 0; i < b.N; i++ { c := codeBench(uint32(i % 17)) _ = c.String() } b.StopTimer() } func BenchmarkCodeStringMap(b *testing.B) { b.ResetTimer() for i := 0; i < b.N; i++ { c := codeBench(uint32(i % 17)) _ = c.StringUsingMap() } b.StopTimer() } // codes.Code.String() does a switch. func BenchmarkCodeStringSwitch(b *testing.B) { b.ResetTimer() for i := 0; i < b.N; i++ { c := codes.Code(uint32(i % 17)) _ = c.String() } b.StopTimer() } // Testing all codes (0<=c<=16) and also one overflow (17). func BenchmarkCodeStringStringerWithOverflow(b *testing.B) { b.ResetTimer() for i := 0; i < b.N; i++ { c := codeBench(uint32(i % 18)) _ = c.String() } b.StopTimer() } // Testing all codes (0<=c<=16) and also one overflow (17). func BenchmarkCodeStringSwitchWithOverflow(b *testing.B) { b.ResetTimer() for i := 0; i < b.N; i++ { c := codes.Code(uint32(i % 18)) _ = c.String() } b.StopTimer() } grpc-go-1.22.1/benchmark/primitives/context_test.go000066400000000000000000000051761351635773100223560ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package primitives_test import ( "context" "testing" "time" ) func BenchmarkCancelContextErrNoErr(b *testing.B) { ctx, cancel := context.WithCancel(context.Background()) for i := 0; i < b.N; i++ { if err := ctx.Err(); err != nil { b.Fatal("error") } } cancel() } func BenchmarkCancelContextErrGotErr(b *testing.B) { ctx, cancel := context.WithCancel(context.Background()) cancel() for i := 0; i < b.N; i++ { if err := ctx.Err(); err == nil { b.Fatal("error") } } } func BenchmarkCancelContextChannelNoErr(b *testing.B) { ctx, cancel := context.WithCancel(context.Background()) for i := 0; i < b.N; i++ { select { case <-ctx.Done(): b.Fatal("error: ctx.Done():", ctx.Err()) default: } } cancel() } func BenchmarkCancelContextChannelGotErr(b *testing.B) { ctx, cancel := context.WithCancel(context.Background()) cancel() for i := 0; i < b.N; i++ { select { case <-ctx.Done(): if err := ctx.Err(); err == nil { b.Fatal("error") } default: b.Fatal("error: !ctx.Done()") } } } func BenchmarkTimerContextErrNoErr(b *testing.B) { ctx, cancel := context.WithTimeout(context.Background(), 24*time.Hour) for i := 0; i < b.N; i++ { if err := ctx.Err(); err != nil { b.Fatal("error") } } cancel() } func BenchmarkTimerContextErrGotErr(b *testing.B) { ctx, cancel := context.WithTimeout(context.Background(), time.Microsecond) cancel() for i := 0; i < b.N; i++ { if err := ctx.Err(); err == nil { b.Fatal("error") } } } func BenchmarkTimerContextChannelNoErr(b *testing.B) { ctx, cancel := context.WithTimeout(context.Background(), 24*time.Hour) for i := 0; i < b.N; i++ { select { case <-ctx.Done(): b.Fatal("error: ctx.Done():", ctx.Err()) default: } } cancel() } func BenchmarkTimerContextChannelGotErr(b *testing.B) { ctx, cancel := context.WithTimeout(context.Background(), time.Microsecond) cancel() for i := 0; i < b.N; i++ { select { case <-ctx.Done(): if err := ctx.Err(); err == nil { b.Fatal("error") } default: b.Fatal("error: !ctx.Done()") } } } grpc-go-1.22.1/benchmark/primitives/primitives_test.go000066400000000000000000000150471351635773100230630ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package primitives_test contains benchmarks for various synchronization primitives // available in Go. 
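// They can be run in the usual way, for example (an illustrative invocation):
//
//	go test -bench=. -benchmem google.golang.org/grpc/benchmark/primitives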
package primitives_test import ( "fmt" "sync" "sync/atomic" "testing" "time" "unsafe" ) func BenchmarkSelectClosed(b *testing.B) { c := make(chan struct{}) close(c) x := 0 b.ResetTimer() for i := 0; i < b.N; i++ { select { case <-c: x++ default: } } b.StopTimer() if x != b.N { b.Fatal("error") } } func BenchmarkSelectOpen(b *testing.B) { c := make(chan struct{}) x := 0 b.ResetTimer() for i := 0; i < b.N; i++ { select { case <-c: default: x++ } } b.StopTimer() if x != b.N { b.Fatal("error") } } func BenchmarkAtomicBool(b *testing.B) { c := int32(0) x := 0 b.ResetTimer() for i := 0; i < b.N; i++ { if atomic.LoadInt32(&c) == 0 { x++ } } b.StopTimer() if x != b.N { b.Fatal("error") } } func BenchmarkAtomicValueLoad(b *testing.B) { c := atomic.Value{} c.Store(0) x := 0 b.ResetTimer() for i := 0; i < b.N; i++ { if c.Load().(int) == 0 { x++ } } b.StopTimer() if x != b.N { b.Fatal("error") } } func BenchmarkAtomicValueStore(b *testing.B) { c := atomic.Value{} v := 123 b.ResetTimer() for i := 0; i < b.N; i++ { c.Store(v) } b.StopTimer() } func BenchmarkMutex(b *testing.B) { c := sync.Mutex{} x := 0 b.ResetTimer() for i := 0; i < b.N; i++ { c.Lock() x++ c.Unlock() } b.StopTimer() if x != b.N { b.Fatal("error") } } func BenchmarkRWMutex(b *testing.B) { c := sync.RWMutex{} x := 0 b.ResetTimer() for i := 0; i < b.N; i++ { c.RLock() x++ c.RUnlock() } b.StopTimer() if x != b.N { b.Fatal("error") } } func BenchmarkRWMutexW(b *testing.B) { c := sync.RWMutex{} x := 0 b.ResetTimer() for i := 0; i < b.N; i++ { c.Lock() x++ c.Unlock() } b.StopTimer() if x != b.N { b.Fatal("error") } } func BenchmarkMutexWithDefer(b *testing.B) { c := sync.Mutex{} x := 0 b.ResetTimer() for i := 0; i < b.N; i++ { func() { c.Lock() defer c.Unlock() x++ }() } b.StopTimer() if x != b.N { b.Fatal("error") } } func BenchmarkMutexWithClosureDefer(b *testing.B) { c := sync.Mutex{} x := 0 b.ResetTimer() for i := 0; i < b.N; i++ { func() { c.Lock() defer func() { c.Unlock() }() x++ }() } b.StopTimer() if x != b.N { b.Fatal("error") } } func BenchmarkMutexWithoutDefer(b *testing.B) { c := sync.Mutex{} x := 0 b.ResetTimer() for i := 0; i < b.N; i++ { func() { c.Lock() x++ c.Unlock() }() } b.StopTimer() if x != b.N { b.Fatal("error") } } func BenchmarkAtomicAddInt64(b *testing.B) { var c int64 b.ResetTimer() for i := 0; i < b.N; i++ { atomic.AddInt64(&c, 1) } b.StopTimer() if c != int64(b.N) { b.Fatal("error") } } func BenchmarkAtomicTimeValueStore(b *testing.B) { var c atomic.Value t := time.Now() b.ResetTimer() for i := 0; i < b.N; i++ { c.Store(t) } b.StopTimer() } func BenchmarkAtomic16BValueStore(b *testing.B) { var c atomic.Value t := struct { a int64 b int64 }{ 123, 123, } b.ResetTimer() for i := 0; i < b.N; i++ { c.Store(t) } b.StopTimer() } func BenchmarkAtomic32BValueStore(b *testing.B) { var c atomic.Value t := struct { a int64 b int64 c int64 d int64 }{ 123, 123, 123, 123, } b.ResetTimer() for i := 0; i < b.N; i++ { c.Store(t) } b.StopTimer() } func BenchmarkAtomicPointerStore(b *testing.B) { t := 123 var up unsafe.Pointer b.ResetTimer() for i := 0; i < b.N; i++ { atomic.StorePointer(&up, unsafe.Pointer(&t)) } b.StopTimer() } func BenchmarkAtomicTimePointerStore(b *testing.B) { t := time.Now() var up unsafe.Pointer b.ResetTimer() for i := 0; i < b.N; i++ { atomic.StorePointer(&up, unsafe.Pointer(&t)) } b.StopTimer() } func BenchmarkStoreContentionWithAtomic(b *testing.B) { t := 123 var c unsafe.Pointer b.RunParallel(func(pb *testing.PB) { for pb.Next() { atomic.StorePointer(&c, unsafe.Pointer(&t)) } }) } func 
BenchmarkStoreContentionWithMutex(b *testing.B) { t := 123 var mu sync.Mutex var c int b.RunParallel(func(pb *testing.PB) { for pb.Next() { mu.Lock() c = t mu.Unlock() } }) _ = c } type dummyStruct struct { a int64 b time.Time } func BenchmarkStructStoreContention(b *testing.B) { d := dummyStruct{} dp := unsafe.Pointer(&d) t := time.Now() for _, j := range []int{100000000, 10000, 0} { for _, i := range []int{100000, 10} { b.Run(fmt.Sprintf("CAS/%v/%v", j, i), func(b *testing.B) { b.SetParallelism(i) b.RunParallel(func(pb *testing.PB) { n := &dummyStruct{ b: t, } for pb.Next() { for y := 0; y < j; y++ { } for { v := (*dummyStruct)(atomic.LoadPointer(&dp)) n.a = v.a + 1 if atomic.CompareAndSwapPointer(&dp, unsafe.Pointer(v), unsafe.Pointer(n)) { n = v break } } } }) }) } } var mu sync.Mutex for _, j := range []int{100000000, 10000, 0} { for _, i := range []int{100000, 10} { b.Run(fmt.Sprintf("Mutex/%v/%v", j, i), func(b *testing.B) { b.SetParallelism(i) b.RunParallel(func(pb *testing.PB) { for pb.Next() { for y := 0; y < j; y++ { } mu.Lock() d.a++ d.b = t mu.Unlock() } }) }) } } } type myFooer struct{} func (myFooer) Foo() {} type fooer interface { Foo() } func BenchmarkInterfaceTypeAssertion(b *testing.B) { // Call a separate function to avoid compiler optimizations. runInterfaceTypeAssertion(b, myFooer{}) } func runInterfaceTypeAssertion(b *testing.B, fer interface{}) { x := 0 b.ResetTimer() for i := 0; i < b.N; i++ { if _, ok := fer.(fooer); ok { x++ } } b.StopTimer() if x != b.N { b.Fatal("error") } } func BenchmarkStructTypeAssertion(b *testing.B) { // Call a separate function to avoid compiler optimizations. runStructTypeAssertion(b, myFooer{}) } func runStructTypeAssertion(b *testing.B, fer interface{}) { x := 0 b.ResetTimer() for i := 0; i < b.N; i++ { if _, ok := fer.(myFooer); ok { x++ } } b.StopTimer() if x != b.N { b.Fatal("error") } } grpc-go-1.22.1/benchmark/primitives/syncmap_test.go000066400000000000000000000064071351635773100223420ustar00rootroot00000000000000/* * * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package primitives_test import ( "sync" "sync/atomic" "testing" ) type incrementUint64Map interface { increment(string) result(string) uint64 } type mapWithLock struct { mu sync.Mutex m map[string]uint64 } func newMapWithLock() incrementUint64Map { return &mapWithLock{ m: make(map[string]uint64), } } func (mwl *mapWithLock) increment(c string) { mwl.mu.Lock() mwl.m[c]++ mwl.mu.Unlock() } func (mwl *mapWithLock) result(c string) uint64 { return mwl.m[c] } type mapWithAtomicFastpath struct { mu sync.RWMutex m map[string]*uint64 } func newMapWithAtomicFastpath() incrementUint64Map { return &mapWithAtomicFastpath{ m: make(map[string]*uint64), } } func (mwaf *mapWithAtomicFastpath) increment(c string) { mwaf.mu.RLock() if p, ok := mwaf.m[c]; ok { atomic.AddUint64(p, 1) mwaf.mu.RUnlock() return } mwaf.mu.RUnlock() mwaf.mu.Lock() if p, ok := mwaf.m[c]; ok { atomic.AddUint64(p, 1) mwaf.mu.Unlock() return } var temp uint64 = 1 mwaf.m[c] = &temp mwaf.mu.Unlock() } func (mwaf *mapWithAtomicFastpath) result(c string) uint64 { return atomic.LoadUint64(mwaf.m[c]) } type mapWithSyncMap struct { m sync.Map } func newMapWithSyncMap() incrementUint64Map { return &mapWithSyncMap{} } func (mwsm *mapWithSyncMap) increment(c string) { p, ok := mwsm.m.Load(c) if !ok { tp := new(uint64) p, _ = mwsm.m.LoadOrStore(c, tp) } atomic.AddUint64(p.(*uint64), 1) } func (mwsm *mapWithSyncMap) result(c string) uint64 { p, _ := mwsm.m.Load(c) return atomic.LoadUint64(p.(*uint64)) } func benchmarkIncrementUint64Map(b *testing.B, f func() incrementUint64Map) { const cat = "cat" benches := []struct { name string goroutineCount int }{ { name: " 1", goroutineCount: 1, }, { name: " 10", goroutineCount: 10, }, { name: " 100", goroutineCount: 100, }, { name: "1000", goroutineCount: 1000, }, } for _, bb := range benches { b.Run(bb.name, func(b *testing.B) { m := f() var wg sync.WaitGroup wg.Add(bb.goroutineCount) b.ResetTimer() for i := 0; i < bb.goroutineCount; i++ { go func() { for j := 0; j < b.N; j++ { m.increment(cat) } wg.Done() }() } wg.Wait() b.StopTimer() if m.result(cat) != uint64(bb.goroutineCount*b.N) { b.Fatalf("result is %d, want %d", m.result(cat), b.N) } }) } } func BenchmarkMapWithSyncMutexContetion(b *testing.B) { benchmarkIncrementUint64Map(b, newMapWithLock) } func BenchmarkMapWithAtomicFastpath(b *testing.B) { benchmarkIncrementUint64Map(b, newMapWithAtomicFastpath) } func BenchmarkMapWithSyncMap(b *testing.B) { benchmarkIncrementUint64Map(b, newMapWithSyncMap) } grpc-go-1.22.1/benchmark/run_bench.sh000077500000000000000000000077571351635773100174220ustar00rootroot00000000000000#!/bin/bash rpcs=(1) conns=(1) warmup=10 dur=10 reqs=(1) resps=(1) rpc_types=(unary) # idx[0] = idx value for rpcs # idx[1] = idx value for conns # idx[2] = idx value for reqs # idx[3] = idx value for resps # idx[4] = idx value for rpc_types idx=(0 0 0 0 0) idx_max=(1 1 1 1 1) inc() { for i in $(seq $((${#idx[@]}-1)) -1 0); do idx[${i}]=$((${idx[${i}]}+1)) if [ ${idx[${i}]} == ${idx_max[${i}]} ]; then idx[${i}]=0 else break fi done local fin fin=1 # Check to see if we have looped back to the beginning. 
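  # (idx acts like an odometer over the rpcs/conns/reqs/resps/rpc_types lists:
  # once every digit has wrapped back to 0, all combinations have been run.)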
for v in ${idx[@]}; do if [ ${v} != 0 ]; then fin=0 break fi done if [ ${fin} == 1 ]; then rm -Rf ${out_dir} clean_and_die 0 fi } clean_and_die() { rm -Rf ${out_dir} exit $1 } run(){ local nr nr=${rpcs[${idx[0]}]} local nc nc=${conns[${idx[1]}]} req_sz=${reqs[${idx[2]}]} resp_sz=${resps[${idx[3]}]} r_type=${rpc_types[${idx[4]}]} # Following runs one benchmark base_port=50051 delta=0 test_name="r_"${nr}"_c_"${nc}"_req_"${req_sz}"_resp_"${resp_sz}"_"${r_type}"_"$(date +%s) echo "================================================================================" echo ${test_name} while : do port=$((${base_port}+${delta})) # Launch the server in background ${out_dir}/server --port=${port} --test_name="Server_"${test_name}& server_pid=$(echo $!) # Launch the client ${out_dir}/client --port=${port} --d=${dur} --w=${warmup} --r=${nr} --c=${nc} --req=${req_sz} --resp=${resp_sz} --rpc_type=${r_type} --test_name="client_"${test_name} client_status=$(echo $?) kill -INT ${server_pid} wait ${server_pid} if [ ${client_status} == 0 ]; then break fi delta=$((${delta}+1)) if [ ${delta} == 10 ]; then echo "Continuous 10 failed runs. Exiting now." rm -Rf ${out_dir} clean_and_die 1 fi done } set_param(){ local argname=$1 shift local idx=$1 shift if [ $# -eq 0 ]; then echo "${argname} not specified" exit 1 fi PARAM=($(echo $1 | sed 's/,/ /g')) if [ ${idx} -lt 0 ]; then return fi idx_max[${idx}]=${#PARAM[@]} } while [ $# -gt 0 ]; do case "$1" in -r) shift set_param "number of rpcs" 0 $1 rpcs=(${PARAM[@]}) shift ;; -c) shift set_param "number of connections" 1 $1 conns=(${PARAM[@]}) shift ;; -w) shift set_param "warm-up period" -1 $1 warmup=${PARAM} shift ;; -d) shift set_param "duration" -1 $1 dur=${PARAM} shift ;; -req) shift set_param "request size" 2 $1 reqs=(${PARAM[@]}) shift ;; -resp) shift set_param "response size" 3 $1 resps=(${PARAM[@]}) shift ;; -rpc_type) shift set_param "rpc type" 4 $1 rpc_types=(${PARAM[@]}) shift ;; -h|--help) echo "Following are valid options:" echo echo "-h, --help show brief help" echo "-w warm-up duration in seconds, default value is 10" echo "-d benchmark duration in seconds, default value is 60" echo "" echo "Each of the following can have multiple comma separated values." echo "" echo "-r number of RPCs, default value is 1" echo "-c number of Connections, default value is 1" echo "-req req size in bytes, default value is 1" echo "-resp resp size in bytes, default value is 1" echo "-rpc_type valid values are unary|streaming, default is unary" exit 0 ;; *) echo "Incorrect option $1" exit 1 ;; esac done # Build server and client out_dir=$(mktemp -d oss_benchXXX) go build -o ${out_dir}/server $GOPATH/src/google.golang.org/grpc/benchmark/server/main.go && go build -o ${out_dir}/client $GOPATH/src/google.golang.org/grpc/benchmark/client/main.go if [ $? != 0 ]; then clean_and_die 1 fi while : do run inc done grpc-go-1.22.1/benchmark/server/000077500000000000000000000000001351635773100164065ustar00rootroot00000000000000grpc-go-1.22.1/benchmark/server/main.go000066400000000000000000000047071351635773100176710ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ /* Package main provides a server used for benchmarking. It launches a server which is listening on port 50051. An example to start the server can be found at: go run benchmark/server/main.go -test_name=grpc_test After starting the server, the client can be run separately and used to test qps and latency. */ package main import ( "flag" "fmt" "net" _ "net/http/pprof" "os" "os/signal" "runtime" "runtime/pprof" "time" "google.golang.org/grpc/benchmark" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/internal/syscall" ) var ( port = flag.String("port", "50051", "Localhost port to listen on.") testName = flag.String("test_name", "", "Name of the test used for creating profiles.") ) func main() { flag.Parse() if *testName == "" { grpclog.Fatalf("test name not set") } lis, err := net.Listen("tcp", ":"+*port) if err != nil { grpclog.Fatalf("Failed to listen: %v", err) } defer lis.Close() cf, err := os.Create("/tmp/" + *testName + ".cpu") if err != nil { grpclog.Fatalf("Failed to create file: %v", err) } defer cf.Close() pprof.StartCPUProfile(cf) cpuBeg := syscall.GetCPUTime() // Launch server in a separate goroutine. stop := benchmark.StartServer(benchmark.ServerInfo{Type: "protobuf", Listener: lis}) // Wait on OS terminate signal. ch := make(chan os.Signal, 1) signal.Notify(ch, os.Interrupt) <-ch cpu := time.Duration(syscall.GetCPUTime() - cpuBeg) stop() pprof.StopCPUProfile() mf, err := os.Create("/tmp/" + *testName + ".mem") if err != nil { grpclog.Fatalf("Failed to create file: %v", err) } defer mf.Close() runtime.GC() // materialize all statistics if err := pprof.WriteHeapProfile(mf); err != nil { grpclog.Fatalf("Failed to write memory profile: %v", err) } fmt.Println("Server CPU utilization:", cpu) fmt.Println("Server CPU profile:", cf.Name()) fmt.Println("Server Mem Profile:", mf.Name()) } grpc-go-1.22.1/benchmark/stats/000077500000000000000000000000001351635773100162365ustar00rootroot00000000000000grpc-go-1.22.1/benchmark/stats/histogram.go000066400000000000000000000145221351635773100205660ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package stats import ( "bytes" "fmt" "io" "log" "math" "strconv" "strings" ) // Histogram accumulates values in the form of a histogram with // exponentially increased bucket sizes. type Histogram struct { // Count is the total number of values added to the histogram. Count int64 // Sum is the sum of all the values added to the histogram. Sum int64 // SumOfSquares is the sum of squares of all values. SumOfSquares int64 // Min is the minimum of all the values added to the histogram. 
Min int64 // Max is the maximum of all the values added to the histogram. Max int64 // Buckets contains all the buckets of the histogram. Buckets []HistogramBucket opts HistogramOptions logBaseBucketSize float64 oneOverLogOnePlusGrowthFactor float64 } // HistogramOptions contains the parameters that define the histogram's buckets. // The first bucket of the created histogram (with index 0) contains [min, min+n) // where n = BaseBucketSize, min = MinValue. // Bucket i (i>=1) contains [min + n * m^(i-1), min + n * m^i), where m = 1+GrowthFactor. // The type of the values is int64. type HistogramOptions struct { // NumBuckets is the number of buckets. NumBuckets int // GrowthFactor is the growth factor of the buckets. A value of 0.1 // indicates that bucket N+1 will be 10% larger than bucket N. GrowthFactor float64 // BaseBucketSize is the size of the first bucket. BaseBucketSize float64 // MinValue is the lower bound of the first bucket. MinValue int64 } // HistogramBucket represents one histogram bucket. type HistogramBucket struct { // LowBound is the lower bound of the bucket. LowBound float64 // Count is the number of values in the bucket. Count int64 } // NewHistogram returns a pointer to a new Histogram object that was created // with the provided options. func NewHistogram(opts HistogramOptions) *Histogram { if opts.NumBuckets == 0 { opts.NumBuckets = 32 } if opts.BaseBucketSize == 0.0 { opts.BaseBucketSize = 1.0 } h := Histogram{ Buckets: make([]HistogramBucket, opts.NumBuckets), Min: math.MaxInt64, Max: math.MinInt64, opts: opts, logBaseBucketSize: math.Log(opts.BaseBucketSize), oneOverLogOnePlusGrowthFactor: 1 / math.Log(1+opts.GrowthFactor), } m := 1.0 + opts.GrowthFactor delta := opts.BaseBucketSize h.Buckets[0].LowBound = float64(opts.MinValue) for i := 1; i < opts.NumBuckets; i++ { h.Buckets[i].LowBound = float64(opts.MinValue) + delta delta = delta * m } return &h } // Print writes textual output of the histogram values. func (h *Histogram) Print(w io.Writer) { h.PrintWithUnit(w, 1) } // PrintWithUnit writes textual output of the histogram values . // Data in histogram is divided by a Unit before print. func (h *Histogram) PrintWithUnit(w io.Writer, unit float64) { avg := float64(h.Sum) / float64(h.Count) fmt.Fprintf(w, "Count: %d Min: %5.1f Max: %5.1f Avg: %.2f\n", h.Count, float64(h.Min)/unit, float64(h.Max)/unit, avg/unit) fmt.Fprintf(w, "%s\n", strings.Repeat("-", 60)) if h.Count <= 0 { return } maxBucketDigitLen := len(strconv.FormatFloat(h.Buckets[len(h.Buckets)-1].LowBound, 'f', 6, 64)) if maxBucketDigitLen < 3 { // For "inf". maxBucketDigitLen = 3 } maxCountDigitLen := len(strconv.FormatInt(h.Count, 10)) percentMulti := 100 / float64(h.Count) accCount := int64(0) for i, b := range h.Buckets { fmt.Fprintf(w, "[%*f, ", maxBucketDigitLen, b.LowBound/unit) if i+1 < len(h.Buckets) { fmt.Fprintf(w, "%*f)", maxBucketDigitLen, h.Buckets[i+1].LowBound/unit) } else { fmt.Fprintf(w, "%*s)", maxBucketDigitLen, "inf") } accCount += b.Count fmt.Fprintf(w, " %*d %5.1f%% %5.1f%%", maxCountDigitLen, b.Count, float64(b.Count)*percentMulti, float64(accCount)*percentMulti) const barScale = 0.1 barLength := int(float64(b.Count)*percentMulti*barScale + 0.5) fmt.Fprintf(w, " %s\n", strings.Repeat("#", barLength)) } } // String returns the textual output of the histogram values as string. func (h *Histogram) String() string { var b bytes.Buffer h.Print(&b) return b.String() } // Clear resets all the content of histogram. 
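// The bucket boundaries configured at creation time are kept; only Count,
// Sum, SumOfSquares, Min, Max and the per-bucket counts are reset.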
func (h *Histogram) Clear() { h.Count = 0 h.Sum = 0 h.SumOfSquares = 0 h.Min = math.MaxInt64 h.Max = math.MinInt64 for i := range h.Buckets { h.Buckets[i].Count = 0 } } // Opts returns a copy of the options used to create the Histogram. func (h *Histogram) Opts() HistogramOptions { return h.opts } // Add adds a value to the histogram. func (h *Histogram) Add(value int64) error { bucket, err := h.findBucket(value) if err != nil { return err } h.Buckets[bucket].Count++ h.Count++ h.Sum += value h.SumOfSquares += value * value if value < h.Min { h.Min = value } if value > h.Max { h.Max = value } return nil } func (h *Histogram) findBucket(value int64) (int, error) { delta := float64(value - h.opts.MinValue) var b int if delta >= h.opts.BaseBucketSize { // b = log_{1+growthFactor} (delta / baseBucketSize) + 1 // = log(delta / baseBucketSize) / log(1+growthFactor) + 1 // = (log(delta) - log(baseBucketSize)) * (1 / log(1+growthFactor)) + 1 b = int((math.Log(delta)-h.logBaseBucketSize)*h.oneOverLogOnePlusGrowthFactor + 1) } if b >= len(h.Buckets) { return 0, fmt.Errorf("no bucket for value: %d", value) } return b, nil } // Merge takes another histogram h2, and merges its content into h. // The two histograms must be created by equivalent HistogramOptions. func (h *Histogram) Merge(h2 *Histogram) { if h.opts != h2.opts { log.Fatalf("failed to merge histograms, created by inequivalent options") } h.Count += h2.Count h.Sum += h2.Sum h.SumOfSquares += h2.SumOfSquares if h2.Min < h.Min { h.Min = h2.Min } if h2.Max > h.Max { h.Max = h2.Max } for i, b := range h2.Buckets { h.Buckets[i].Count += b.Count } } grpc-go-1.22.1/benchmark/stats/stats.go000066400000000000000000000351531351635773100177320ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package stats tracks the statistics associated with benchmark runs. package stats import ( "bytes" "fmt" "log" "math" "runtime" "sort" "strconv" "sync" "time" ) // FeatureIndex is an enum for features that usually differ across individual // benchmark runs in a single execution. These are usually configured by the // user through command line flags. type FeatureIndex int // FeatureIndex enum values corresponding to individually settable features. const ( EnableTraceIndex FeatureIndex = iota ReadLatenciesIndex ReadKbpsIndex ReadMTUIndex MaxConcurrentCallsIndex ReqSizeBytesIndex RespSizeBytesIndex CompModesIndex EnableChannelzIndex EnablePreloaderIndex // MaxFeatureIndex is a place holder to indicate the total number of feature // indices we have. Any new feature indices should be added above this. MaxFeatureIndex ) // Features represent configured options for a specific benchmark run. This is // usually constructed from command line arguments passed by the caller. See // benchmark/benchmain/main.go for defined command line flags. This is also // part of the BenchResults struct which is serialized and written to a file. type Features struct { // Network mode used for this benchmark run. 
Could be one of Local, LAN, WAN // or Longhaul. NetworkMode string // UseBufCon indicates whether an in-memory connection was used for this // benchmark run instead of system network I/O. UseBufConn bool // EnableKeepalive indicates if keepalives were enabled on the connections // used in this benchmark run. EnableKeepalive bool // BenchTime indicates the duration of the benchmark run. BenchTime time.Duration // Features defined above are usually the same for all benchmark runs in a // particular invocation, while the features defined below could vary from // run to run based on the configured command line. These features have a // corresponding featureIndex value which is used for a variety of reasons. // EnableTrace indicates if tracing was enabled. EnableTrace bool // Latency is the simulated one-way network latency used. Latency time.Duration // Kbps is the simulated network throughput used. Kbps int // MTU is the simulated network MTU used. MTU int // MaxConcurrentCalls is the number of concurrent RPCs made during this // benchmark run. MaxConcurrentCalls int // ReqSizeBytes is the request size in bytes used in this benchmark run. ReqSizeBytes int // RespSizeBytes is the response size in bytes used in this benchmark run. RespSizeBytes int // ModeCompressor represents the compressor mode used. ModeCompressor string // EnableChannelz indicates if channelz was turned on. EnableChannelz bool // EnablePreloader indicates if preloading was turned on. EnablePreloader bool } // String returns all the feature values as a string. func (f Features) String() string { return fmt.Sprintf("networkMode_%v-bufConn_%v-keepalive_%v-benchTime_%v-"+ "trace_%v-latency_%v-kbps_%v-MTU_%v-maxConcurrentCalls_%v-"+ "reqSize_%vB-respSize_%vB-compressor_%v-channelz_%v-preloader_%v", f.NetworkMode, f.UseBufConn, f.EnableKeepalive, f.BenchTime, f.EnableTrace, f.Latency, f.Kbps, f.MTU, f.MaxConcurrentCalls, f.ReqSizeBytes, f.RespSizeBytes, f.ModeCompressor, f.EnableChannelz, f.EnablePreloader) } // SharedFeatures returns the shared features as a pretty printable string. // 'wantFeatures' is a bitmask of wanted features, indexed by FeaturesIndex. func (f Features) SharedFeatures(wantFeatures []bool) string { var b bytes.Buffer if f.NetworkMode != "" { b.WriteString(fmt.Sprintf("Network: %v\n", f.NetworkMode)) } if f.UseBufConn { b.WriteString(fmt.Sprintf("UseBufConn: %v\n", f.UseBufConn)) } if f.EnableKeepalive { b.WriteString(fmt.Sprintf("EnableKeepalive: %v\n", f.EnableKeepalive)) } b.WriteString(fmt.Sprintf("BenchTime: %v\n", f.BenchTime)) f.partialString(&b, wantFeatures, ": ", "\n") return b.String() } // PrintableName returns a one line name which includes the features specified // by 'wantFeatures' which is a bitmask of wanted features, indexed by // FeaturesIndex. func (f Features) PrintableName(wantFeatures []bool) string { var b bytes.Buffer f.partialString(&b, wantFeatures, "_", "-") return b.String() } // partialString writes features specified by 'wantFeatures' to the provided // bytes.Buffer. 
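// For example (with illustrative values), PrintableName's sep="_" and
// delim="-" render an enabled latency feature as "Latency_2ms-", while
// SharedFeatures' sep=": " and delim="\n" render it as "Latency: 2ms" on
// its own line.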
func (f Features) partialString(b *bytes.Buffer, wantFeatures []bool, sep, delim string) { for i, sf := range wantFeatures { if sf { switch FeatureIndex(i) { case EnableTraceIndex: b.WriteString(fmt.Sprintf("Trace%v%v%v", sep, f.EnableTrace, delim)) case ReadLatenciesIndex: b.WriteString(fmt.Sprintf("Latency%v%v%v", sep, f.Latency, delim)) case ReadKbpsIndex: b.WriteString(fmt.Sprintf("Kbps%v%v%v", sep, f.Kbps, delim)) case ReadMTUIndex: b.WriteString(fmt.Sprintf("MTU%v%v%v", sep, f.MTU, delim)) case MaxConcurrentCallsIndex: b.WriteString(fmt.Sprintf("Callers%v%v%v", sep, f.MaxConcurrentCalls, delim)) case ReqSizeBytesIndex: b.WriteString(fmt.Sprintf("ReqSize%v%vB%v", sep, f.ReqSizeBytes, delim)) case RespSizeBytesIndex: b.WriteString(fmt.Sprintf("RespSize%v%vB%v", sep, f.RespSizeBytes, delim)) case CompModesIndex: b.WriteString(fmt.Sprintf("Compressor%v%v%v", sep, f.ModeCompressor, delim)) case EnableChannelzIndex: b.WriteString(fmt.Sprintf("Channelz%v%v%v", sep, f.EnableChannelz, delim)) case EnablePreloaderIndex: b.WriteString(fmt.Sprintf("Preloader%v%v%v", sep, f.EnablePreloader, delim)) default: log.Fatalf("Unknown feature index %v. maxFeatureIndex is %v", i, MaxFeatureIndex) } } } } // BenchResults records features and results of a benchmark run. A collection // of these structs is usually serialized and written to a file after a // benchmark execution, and could later be read for pretty-printing or // comparison with other benchmark results. type BenchResults struct { // RunMode is the workload mode for this benchmark run. This could be unary, // stream or unconstrained. RunMode string // Features represents the configured feature options for this run. Features Features // SharedFeatures represents the features which were shared across all // benchmark runs during one execution. It is a slice indexed by // 'FeaturesIndex' and a value of true indicates that the associated // feature is shared across all runs. SharedFeatures []bool // Data contains the statistical data of interest from the benchmark run. Data RunData } // RunData contains statistical data of interest from a benchmark run. type RunData struct { // TotalOps is the number of operations executed during this benchmark run. // Only makes sense for unary and streaming workloads. TotalOps uint64 // SendOps is the number of send operations executed during this benchmark // run. Only makes sense for unconstrained workloads. SendOps uint64 // RecvOps is the number of receive operations executed during this benchmark // run. Only makes sense for unconstrained workloads. RecvOps uint64 // AllocedBytes is the average memory allocation in bytes per operation. AllocedBytes uint64 // Allocs is the average number of memory allocations per operation. Allocs uint64 // ReqT is the average request throughput associated with this run. ReqT float64 // RespT is the average response throughput associated with this run. RespT float64 // We store different latencies associated with each run. These latencies are // only computed for unary and stream workloads as they are not very useful // for unconstrained workloads. // Fiftieth is the 50th percentile latency. Fiftieth time.Duration // Ninetieth is the 90th percentile latency. Ninetieth time.Duration // Ninetyninth is the 99th percentile latency. NinetyNinth time.Duration // Average is the average latency. 
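	// It is derived from the histogram's Sum and Count rather than recomputed
	// from the raw duration samples.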
Average time.Duration } type durationSlice []time.Duration func (a durationSlice) Len() int { return len(a) } func (a durationSlice) Swap(i, j int) { a[i], a[j] = a[j], a[i] } func (a durationSlice) Less(i, j int) bool { return a[i] < a[j] } // Stats is a helper for gathering statistics about individual benchmark runs. type Stats struct { mu sync.Mutex numBuckets int hw *histWrapper results []BenchResults startMS runtime.MemStats stopMS runtime.MemStats } type histWrapper struct { unit time.Duration histogram *Histogram durations durationSlice } // NewStats creates a new Stats instance. If numBuckets is not positive, the // default value (16) will be used. func NewStats(numBuckets int) *Stats { if numBuckets <= 0 { numBuckets = 16 } // Use one more bucket for the last unbounded bucket. s := &Stats{numBuckets: numBuckets + 1} s.hw = &histWrapper{} return s } // StartRun is to be invoked to indicate the start of a new benchmark run. func (s *Stats) StartRun(mode string, f Features, sf []bool) { s.mu.Lock() defer s.mu.Unlock() runtime.ReadMemStats(&s.startMS) s.results = append(s.results, BenchResults{RunMode: mode, Features: f, SharedFeatures: sf}) } // EndRun is to be invoked to indicate the end of the ongoing benchmark run. It // computes a bunch of stats and dumps them to stdout. func (s *Stats) EndRun(count uint64) { s.mu.Lock() defer s.mu.Unlock() runtime.ReadMemStats(&s.stopMS) r := &s.results[len(s.results)-1] r.Data = RunData{ TotalOps: count, AllocedBytes: s.stopMS.TotalAlloc - s.startMS.TotalAlloc, Allocs: s.stopMS.Mallocs - s.startMS.Mallocs, ReqT: float64(count) * float64(r.Features.ReqSizeBytes) * 8 / r.Features.BenchTime.Seconds(), RespT: float64(count) * float64(r.Features.RespSizeBytes) * 8 / r.Features.BenchTime.Seconds(), } s.computeLatencies(r) s.dump(r) s.hw = &histWrapper{} } // EndUnconstrainedRun is similar to EndRun, but is to be used for // unconstrained workloads. func (s *Stats) EndUnconstrainedRun(req uint64, resp uint64) { s.mu.Lock() defer s.mu.Unlock() runtime.ReadMemStats(&s.stopMS) r := &s.results[len(s.results)-1] r.Data = RunData{ SendOps: req, RecvOps: resp, AllocedBytes: (s.stopMS.TotalAlloc - s.startMS.TotalAlloc) / ((req + resp) / 2), Allocs: (s.stopMS.Mallocs - s.startMS.Mallocs) / ((req + resp) / 2), ReqT: float64(req) * float64(r.Features.ReqSizeBytes) * 8 / r.Features.BenchTime.Seconds(), RespT: float64(resp) * float64(r.Features.RespSizeBytes) * 8 / r.Features.BenchTime.Seconds(), } s.computeLatencies(r) s.dump(r) s.hw = &histWrapper{} } // AddDuration adds an elapsed duration per operation to the stats. This is // used by unary and stream modes where request and response stats are equal. func (s *Stats) AddDuration(d time.Duration) { s.mu.Lock() defer s.mu.Unlock() s.hw.durations = append(s.hw.durations, d) } // GetResults returns the results from all benchmark runs. func (s *Stats) GetResults() []BenchResults { s.mu.Lock() defer s.mu.Unlock() return s.results } // computeLatencies computes percentile latencies based on durations stored in // the stats object and updates the corresponding fields in the result object. func (s *Stats) computeLatencies(result *BenchResults) { if len(s.hw.durations) == 0 { return } sort.Sort(s.hw.durations) minDuration := int64(s.hw.durations[0]) maxDuration := int64(s.hw.durations[len(s.hw.durations)-1]) // Use the largest unit that can represent the minimum time duration. 
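// For example, a minimum latency of 2.3ms selects time.Millisecond, while a
// minimum below 1µs leaves the unit at time.Nanosecond. The chosen unit only
// affects how dump prints latencies; the percentile lookups below index the
// raw sorted durations directly.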
s.hw.unit = time.Nanosecond for _, u := range []time.Duration{time.Microsecond, time.Millisecond, time.Second} { if minDuration <= int64(u) { break } s.hw.unit = u } numBuckets := s.numBuckets if n := int(maxDuration - minDuration + 1); n < numBuckets { numBuckets = n } s.hw.histogram = NewHistogram(HistogramOptions{ NumBuckets: numBuckets, // max-min(lower bound of last bucket) = (1 + growthFactor)^(numBuckets-2) * baseBucketSize. GrowthFactor: math.Pow(float64(maxDuration-minDuration), 1/float64(numBuckets-2)) - 1, BaseBucketSize: 1.0, MinValue: minDuration, }) for _, d := range s.hw.durations { s.hw.histogram.Add(int64(d)) } result.Data.Fiftieth = s.hw.durations[max(s.hw.histogram.Count*int64(50)/100-1, 0)] result.Data.Ninetieth = s.hw.durations[max(s.hw.histogram.Count*int64(90)/100-1, 0)] result.Data.NinetyNinth = s.hw.durations[max(s.hw.histogram.Count*int64(99)/100-1, 0)] result.Data.Average = time.Duration(float64(s.hw.histogram.Sum) / float64(s.hw.histogram.Count)) } // dump returns a printable version. func (s *Stats) dump(result *BenchResults) { var b bytes.Buffer // This prints the run mode and all features of the bench on a line. b.WriteString(fmt.Sprintf("%s-%s:\n", result.RunMode, result.Features.String())) unit := s.hw.unit tUnit := fmt.Sprintf("%v", unit)[1:] // stores one of s, ms, μs, ns if l := result.Data.Fiftieth; l != 0 { b.WriteString(fmt.Sprintf("50_Latency: %s%s\t", strconv.FormatFloat(float64(l)/float64(unit), 'f', 4, 64), tUnit)) } if l := result.Data.Ninetieth; l != 0 { b.WriteString(fmt.Sprintf("90_Latency: %s%s\t", strconv.FormatFloat(float64(l)/float64(unit), 'f', 4, 64), tUnit)) } if l := result.Data.NinetyNinth; l != 0 { b.WriteString(fmt.Sprintf("99_Latency: %s%s\t", strconv.FormatFloat(float64(l)/float64(unit), 'f', 4, 64), tUnit)) } if l := result.Data.Average; l != 0 { b.WriteString(fmt.Sprintf("Avg_Latency: %s%s\t", strconv.FormatFloat(float64(l)/float64(unit), 'f', 4, 64), tUnit)) } b.WriteString(fmt.Sprintf("Bytes/op: %v\t", result.Data.AllocedBytes)) b.WriteString(fmt.Sprintf("Allocs/op: %v\t\n", result.Data.Allocs)) // This prints the histogram stats for the latency. if s.hw.histogram == nil { b.WriteString("Histogram (empty)\n") } else { b.WriteString(fmt.Sprintf("Histogram (unit: %s)\n", tUnit)) s.hw.histogram.PrintWithUnit(&b, float64(unit)) } // Print throughput data. req := result.Data.SendOps if req == 0 { req = result.Data.TotalOps } resp := result.Data.RecvOps if resp == 0 { resp = result.Data.TotalOps } b.WriteString(fmt.Sprintf("Number of requests: %v\tRequest throughput: %v bit/s\n", req, result.Data.ReqT)) b.WriteString(fmt.Sprintf("Number of responses: %v\tResponse throughput: %v bit/s\n", resp, result.Data.RespT)) fmt.Println(b.String()) } func max(a, b int64) int64 { if a > b { return a } return b } grpc-go-1.22.1/benchmark/worker/000077500000000000000000000000001351635773100164115ustar00rootroot00000000000000grpc-go-1.22.1/benchmark/worker/benchmark_client.go000066400000000000000000000300531351635773100222310ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
* See the License for the specific language governing permissions and * limitations under the License. * */ package main import ( "context" "flag" "math" "runtime" "sync" "time" "google.golang.org/grpc" "google.golang.org/grpc/benchmark" testpb "google.golang.org/grpc/benchmark/grpc_testing" "google.golang.org/grpc/benchmark/stats" "google.golang.org/grpc/codes" "google.golang.org/grpc/credentials" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/internal/syscall" "google.golang.org/grpc/status" "google.golang.org/grpc/testdata" ) var caFile = flag.String("ca_file", "", "The file containing the CA root cert file") type lockingHistogram struct { mu sync.Mutex histogram *stats.Histogram } func (h *lockingHistogram) add(value int64) { h.mu.Lock() defer h.mu.Unlock() h.histogram.Add(value) } // swap sets h.histogram to o and returns its old value. func (h *lockingHistogram) swap(o *stats.Histogram) *stats.Histogram { h.mu.Lock() defer h.mu.Unlock() old := h.histogram h.histogram = o return old } func (h *lockingHistogram) mergeInto(merged *stats.Histogram) { h.mu.Lock() defer h.mu.Unlock() merged.Merge(h.histogram) } type benchmarkClient struct { closeConns func() stop chan bool lastResetTime time.Time histogramOptions stats.HistogramOptions lockingHistograms []lockingHistogram rusageLastReset *syscall.Rusage } func printClientConfig(config *testpb.ClientConfig) { // Some config options are ignored: // - client type: // will always create sync client // - async client threads. // - core list grpclog.Infof(" * client type: %v (ignored, always creates sync client)", config.ClientType) grpclog.Infof(" * async client threads: %v (ignored)", config.AsyncClientThreads) // TODO: use cores specified by CoreList when setting list of cores is supported in go. grpclog.Infof(" * core list: %v (ignored)", config.CoreList) grpclog.Infof(" - security params: %v", config.SecurityParams) grpclog.Infof(" - core limit: %v", config.CoreLimit) grpclog.Infof(" - payload config: %v", config.PayloadConfig) grpclog.Infof(" - rpcs per chann: %v", config.OutstandingRpcsPerChannel) grpclog.Infof(" - channel number: %v", config.ClientChannels) grpclog.Infof(" - load params: %v", config.LoadParams) grpclog.Infof(" - rpc type: %v", config.RpcType) grpclog.Infof(" - histogram params: %v", config.HistogramParams) grpclog.Infof(" - server targets: %v", config.ServerTargets) } func setupClientEnv(config *testpb.ClientConfig) { // Use all cpu cores available on machine by default. // TODO: Revisit this for the optimal default setup. if config.CoreLimit > 0 { runtime.GOMAXPROCS(int(config.CoreLimit)) } else { runtime.GOMAXPROCS(runtime.NumCPU()) } } // createConns creates connections according to given config. // It returns the connections and corresponding function to close them. // It returns non-nil error if there is anything wrong. func createConns(config *testpb.ClientConfig) ([]*grpc.ClientConn, func(), error) { var opts []grpc.DialOption // Sanity check for client type. switch config.ClientType { case testpb.ClientType_SYNC_CLIENT: case testpb.ClientType_ASYNC_CLIENT: default: return nil, nil, status.Errorf(codes.InvalidArgument, "unknown client type: %v", config.ClientType) } // Check and set security options. 
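// If the driver requested TLS but no -ca_file flag was given, the ca.pem
// bundled under testdata/ is used, and SecurityParams.ServerHostOverride is
// taken as the expected server name during certificate verification. Without
// SecurityParams the connection falls back to grpc.WithInsecure().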
if config.SecurityParams != nil { if *caFile == "" { *caFile = testdata.Path("ca.pem") } creds, err := credentials.NewClientTLSFromFile(*caFile, config.SecurityParams.ServerHostOverride) if err != nil { return nil, nil, status.Errorf(codes.InvalidArgument, "failed to create TLS credentials %v", err) } opts = append(opts, grpc.WithTransportCredentials(creds)) } else { opts = append(opts, grpc.WithInsecure()) } // Use byteBufCodec if it is required. if config.PayloadConfig != nil { switch config.PayloadConfig.Payload.(type) { case *testpb.PayloadConfig_BytebufParams: opts = append(opts, grpc.WithDefaultCallOptions(grpc.CallCustomCodec(byteBufCodec{}))) case *testpb.PayloadConfig_SimpleParams: default: return nil, nil, status.Errorf(codes.InvalidArgument, "unknown payload config: %v", config.PayloadConfig) } } // Create connections. connCount := int(config.ClientChannels) conns := make([]*grpc.ClientConn, connCount) for connIndex := 0; connIndex < connCount; connIndex++ { conns[connIndex] = benchmark.NewClientConn(config.ServerTargets[connIndex%len(config.ServerTargets)], opts...) } return conns, func() { for _, conn := range conns { conn.Close() } }, nil } func performRPCs(config *testpb.ClientConfig, conns []*grpc.ClientConn, bc *benchmarkClient) error { // Read payload size and type from config. var ( payloadReqSize, payloadRespSize int payloadType string ) if config.PayloadConfig != nil { switch c := config.PayloadConfig.Payload.(type) { case *testpb.PayloadConfig_BytebufParams: payloadReqSize = int(c.BytebufParams.ReqSize) payloadRespSize = int(c.BytebufParams.RespSize) payloadType = "bytebuf" case *testpb.PayloadConfig_SimpleParams: payloadReqSize = int(c.SimpleParams.ReqSize) payloadRespSize = int(c.SimpleParams.RespSize) payloadType = "protobuf" default: return status.Errorf(codes.InvalidArgument, "unknown payload config: %v", config.PayloadConfig) } } // TODO add open loop distribution. switch config.LoadParams.Load.(type) { case *testpb.LoadParams_ClosedLoop: case *testpb.LoadParams_Poisson: return status.Errorf(codes.Unimplemented, "unsupported load params: %v", config.LoadParams) default: return status.Errorf(codes.InvalidArgument, "unknown load params: %v", config.LoadParams) } rpcCountPerConn := int(config.OutstandingRpcsPerChannel) switch config.RpcType { case testpb.RpcType_UNARY: bc.doCloseLoopUnary(conns, rpcCountPerConn, payloadReqSize, payloadRespSize) // TODO open loop. case testpb.RpcType_STREAMING: bc.doCloseLoopStreaming(conns, rpcCountPerConn, payloadReqSize, payloadRespSize, payloadType) // TODO open loop. default: return status.Errorf(codes.InvalidArgument, "unknown rpc type: %v", config.RpcType) } return nil } func startBenchmarkClient(config *testpb.ClientConfig) (*benchmarkClient, error) { printClientConfig(config) // Set running environment like how many cores to use. 
setupClientEnv(config) conns, closeConns, err := createConns(config) if err != nil { return nil, err } rpcCountPerConn := int(config.OutstandingRpcsPerChannel) bc := &benchmarkClient{ histogramOptions: stats.HistogramOptions{ NumBuckets: int(math.Log(config.HistogramParams.MaxPossible)/math.Log(1+config.HistogramParams.Resolution)) + 1, GrowthFactor: config.HistogramParams.Resolution, BaseBucketSize: (1 + config.HistogramParams.Resolution), MinValue: 0, }, lockingHistograms: make([]lockingHistogram, rpcCountPerConn*len(conns)), stop: make(chan bool), lastResetTime: time.Now(), closeConns: closeConns, rusageLastReset: syscall.GetRusage(), } if err = performRPCs(config, conns, bc); err != nil { // Close all connections if performRPCs failed. closeConns() return nil, err } return bc, nil } func (bc *benchmarkClient) doCloseLoopUnary(conns []*grpc.ClientConn, rpcCountPerConn int, reqSize int, respSize int) { for ic, conn := range conns { client := testpb.NewBenchmarkServiceClient(conn) // For each connection, create rpcCountPerConn goroutines to do rpc. for j := 0; j < rpcCountPerConn; j++ { // Create histogram for each goroutine. idx := ic*rpcCountPerConn + j bc.lockingHistograms[idx].histogram = stats.NewHistogram(bc.histogramOptions) // Start goroutine on the created mutex and histogram. go func(idx int) { // TODO: do warm up if necessary. // Now relying on worker client to reserve time to do warm up. // The worker client needs to wait for some time after client is created, // before starting benchmark. done := make(chan bool) for { go func() { start := time.Now() if err := benchmark.DoUnaryCall(client, reqSize, respSize); err != nil { select { case <-bc.stop: case done <- false: } return } elapse := time.Since(start) bc.lockingHistograms[idx].add(int64(elapse)) select { case <-bc.stop: case done <- true: } }() select { case <-bc.stop: return case <-done: } } }(idx) } } } func (bc *benchmarkClient) doCloseLoopStreaming(conns []*grpc.ClientConn, rpcCountPerConn int, reqSize int, respSize int, payloadType string) { var doRPC func(testpb.BenchmarkService_StreamingCallClient, int, int) error if payloadType == "bytebuf" { doRPC = benchmark.DoByteBufStreamingRoundTrip } else { doRPC = benchmark.DoStreamingRoundTrip } for ic, conn := range conns { // For each connection, create rpcCountPerConn goroutines to do rpc. for j := 0; j < rpcCountPerConn; j++ { c := testpb.NewBenchmarkServiceClient(conn) stream, err := c.StreamingCall(context.Background()) if err != nil { grpclog.Fatalf("%v.StreamingCall(_) = _, %v", c, err) } // Create histogram for each goroutine. idx := ic*rpcCountPerConn + j bc.lockingHistograms[idx].histogram = stats.NewHistogram(bc.histogramOptions) // Start goroutine on the created mutex and histogram. go func(idx int) { // TODO: do warm up if necessary. // Now relying on worker client to reserve time to do warm up. // The worker client needs to wait for some time after client is created, // before starting benchmark. for { start := time.Now() if err := doRPC(stream, reqSize, respSize); err != nil { return } elapse := time.Since(start) bc.lockingHistograms[idx].add(int64(elapse)) select { case <-bc.stop: return default: } } }(idx) } } } // getStats returns the stats for benchmark client. // It resets lastResetTime and all histograms if argument reset is true. 
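// When resetting, each per-goroutine histogram is swapped for a fresh one
// under its own lock and the old histograms are merged afterwards, so RPC
// goroutines are only blocked for the duration of the swap. Without reset,
// the live histograms are merged in place and left untouched.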
func (bc *benchmarkClient) getStats(reset bool) *testpb.ClientStats { var wallTimeElapsed, uTimeElapsed, sTimeElapsed float64 mergedHistogram := stats.NewHistogram(bc.histogramOptions) if reset { // Merging histogram may take some time. // Put all histograms aside and merge later. toMerge := make([]*stats.Histogram, len(bc.lockingHistograms)) for i := range bc.lockingHistograms { toMerge[i] = bc.lockingHistograms[i].swap(stats.NewHistogram(bc.histogramOptions)) } for i := 0; i < len(toMerge); i++ { mergedHistogram.Merge(toMerge[i]) } wallTimeElapsed = time.Since(bc.lastResetTime).Seconds() latestRusage := syscall.GetRusage() uTimeElapsed, sTimeElapsed = syscall.CPUTimeDiff(bc.rusageLastReset, latestRusage) bc.rusageLastReset = latestRusage bc.lastResetTime = time.Now() } else { // Merge only, not reset. for i := range bc.lockingHistograms { bc.lockingHistograms[i].mergeInto(mergedHistogram) } wallTimeElapsed = time.Since(bc.lastResetTime).Seconds() uTimeElapsed, sTimeElapsed = syscall.CPUTimeDiff(bc.rusageLastReset, syscall.GetRusage()) } b := make([]uint32, len(mergedHistogram.Buckets)) for i, v := range mergedHistogram.Buckets { b[i] = uint32(v.Count) } return &testpb.ClientStats{ Latencies: &testpb.HistogramData{ Bucket: b, MinSeen: float64(mergedHistogram.Min), MaxSeen: float64(mergedHistogram.Max), Sum: float64(mergedHistogram.Sum), SumOfSquares: float64(mergedHistogram.SumOfSquares), Count: float64(mergedHistogram.Count), }, TimeElapsed: wallTimeElapsed, TimeUser: uTimeElapsed, TimeSystem: sTimeElapsed, } } func (bc *benchmarkClient) shutdown() { close(bc.stop) bc.closeConns() } grpc-go-1.22.1/benchmark/worker/benchmark_server.go000066400000000000000000000127021351635773100222620ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package main import ( "flag" "fmt" "net" "runtime" "strconv" "strings" "sync" "time" "google.golang.org/grpc" "google.golang.org/grpc/benchmark" testpb "google.golang.org/grpc/benchmark/grpc_testing" "google.golang.org/grpc/codes" "google.golang.org/grpc/credentials" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/internal/syscall" "google.golang.org/grpc/status" "google.golang.org/grpc/testdata" ) var ( certFile = flag.String("tls_cert_file", "", "The TLS cert file") keyFile = flag.String("tls_key_file", "", "The TLS key file") ) type benchmarkServer struct { port int cores int closeFunc func() mu sync.RWMutex lastResetTime time.Time rusageLastReset *syscall.Rusage } func printServerConfig(config *testpb.ServerConfig) { // Some config options are ignored: // - server type: // will always start sync server // - async server threads // - core list grpclog.Infof(" * server type: %v (ignored, always starts sync server)", config.ServerType) grpclog.Infof(" * async server threads: %v (ignored)", config.AsyncServerThreads) // TODO: use cores specified by CoreList when setting list of cores is supported in go. 
grpclog.Infof(" * core list: %v (ignored)", config.CoreList) grpclog.Infof(" - security params: %v", config.SecurityParams) grpclog.Infof(" - core limit: %v", config.CoreLimit) grpclog.Infof(" - port: %v", config.Port) grpclog.Infof(" - payload config: %v", config.PayloadConfig) } func startBenchmarkServer(config *testpb.ServerConfig, serverPort int) (*benchmarkServer, error) { printServerConfig(config) // Use all cpu cores available on machine by default. // TODO: Revisit this for the optimal default setup. numOfCores := runtime.NumCPU() if config.CoreLimit > 0 { numOfCores = int(config.CoreLimit) } runtime.GOMAXPROCS(numOfCores) var opts []grpc.ServerOption // Sanity check for server type. switch config.ServerType { case testpb.ServerType_SYNC_SERVER: case testpb.ServerType_ASYNC_SERVER: case testpb.ServerType_ASYNC_GENERIC_SERVER: default: return nil, status.Errorf(codes.InvalidArgument, "unknown server type: %v", config.ServerType) } // Set security options. if config.SecurityParams != nil { if *certFile == "" { *certFile = testdata.Path("server1.pem") } if *keyFile == "" { *keyFile = testdata.Path("server1.key") } creds, err := credentials.NewServerTLSFromFile(*certFile, *keyFile) if err != nil { grpclog.Fatalf("failed to generate credentials %v", err) } opts = append(opts, grpc.Creds(creds)) } // Priority: config.Port > serverPort > default (0). port := int(config.Port) if port == 0 { port = serverPort } lis, err := net.Listen("tcp", fmt.Sprintf(":%d", port)) if err != nil { grpclog.Fatalf("Failed to listen: %v", err) } addr := lis.Addr().String() // Create different benchmark server according to config. var closeFunc func() if config.PayloadConfig != nil { switch payload := config.PayloadConfig.Payload.(type) { case *testpb.PayloadConfig_BytebufParams: opts = append(opts, grpc.CustomCodec(byteBufCodec{})) closeFunc = benchmark.StartServer(benchmark.ServerInfo{ Type: "bytebuf", Metadata: payload.BytebufParams.RespSize, Listener: lis, }, opts...) case *testpb.PayloadConfig_SimpleParams: closeFunc = benchmark.StartServer(benchmark.ServerInfo{ Type: "protobuf", Listener: lis, }, opts...) case *testpb.PayloadConfig_ComplexParams: return nil, status.Errorf(codes.Unimplemented, "unsupported payload config: %v", config.PayloadConfig) default: return nil, status.Errorf(codes.InvalidArgument, "unknown payload config: %v", config.PayloadConfig) } } else { // Start protobuf server if payload config is nil. closeFunc = benchmark.StartServer(benchmark.ServerInfo{ Type: "protobuf", Listener: lis, }, opts...) } grpclog.Infof("benchmark server listening at %v", addr) addrSplitted := strings.Split(addr, ":") p, err := strconv.Atoi(addrSplitted[len(addrSplitted)-1]) if err != nil { grpclog.Fatalf("failed to get port number from server address: %v", err) } return &benchmarkServer{ port: p, cores: numOfCores, closeFunc: closeFunc, lastResetTime: time.Now(), rusageLastReset: syscall.GetRusage(), }, nil } // getStats returns the stats for benchmark server. // It resets lastResetTime if argument reset is true. 
func (bs *benchmarkServer) getStats(reset bool) *testpb.ServerStats { bs.mu.RLock() defer bs.mu.RUnlock() wallTimeElapsed := time.Since(bs.lastResetTime).Seconds() rusageLatest := syscall.GetRusage() uTimeElapsed, sTimeElapsed := syscall.CPUTimeDiff(bs.rusageLastReset, rusageLatest) if reset { bs.lastResetTime = time.Now() bs.rusageLastReset = rusageLatest } return &testpb.ServerStats{ TimeElapsed: wallTimeElapsed, TimeUser: uTimeElapsed, TimeSystem: sTimeElapsed, } } grpc-go-1.22.1/benchmark/worker/main.go000066400000000000000000000133361351635773100176720ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package main import ( "context" "flag" "fmt" "io" "net" "net/http" _ "net/http/pprof" "runtime" "strconv" "time" "google.golang.org/grpc" testpb "google.golang.org/grpc/benchmark/grpc_testing" "google.golang.org/grpc/codes" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/status" ) var ( driverPort = flag.Int("driver_port", 10000, "port for communication with driver") serverPort = flag.Int("server_port", 0, "port for benchmark server if not specified by server config message") pprofPort = flag.Int("pprof_port", -1, "Port for pprof debug server to listen on. Pprof server doesn't start if unset") blockProfRate = flag.Int("block_prof_rate", 0, "fraction of goroutine blocking events to report in blocking profile") ) type byteBufCodec struct { } func (byteBufCodec) Marshal(v interface{}) ([]byte, error) { b, ok := v.(*[]byte) if !ok { return nil, fmt.Errorf("failed to marshal: %v is not type of *[]byte", v) } return *b, nil } func (byteBufCodec) Unmarshal(data []byte, v interface{}) error { b, ok := v.(*[]byte) if !ok { return fmt.Errorf("failed to marshal: %v is not type of *[]byte", v) } *b = data return nil } func (byteBufCodec) String() string { return "bytebuffer" } // workerServer implements WorkerService rpc handlers. // It can create benchmarkServer or benchmarkClient on demand. type workerServer struct { stop chan<- bool serverPort int } func (s *workerServer) RunServer(stream testpb.WorkerService_RunServerServer) error { var bs *benchmarkServer defer func() { // Close benchmark server when stream ends. 
grpclog.Infof("closing benchmark server") if bs != nil { bs.closeFunc() } }() for { in, err := stream.Recv() if err == io.EOF { return nil } if err != nil { return err } var out *testpb.ServerStatus switch argtype := in.Argtype.(type) { case *testpb.ServerArgs_Setup: grpclog.Infof("server setup received:") if bs != nil { grpclog.Infof("server setup received when server already exists, closing the existing server") bs.closeFunc() } bs, err = startBenchmarkServer(argtype.Setup, s.serverPort) if err != nil { return err } out = &testpb.ServerStatus{ Stats: bs.getStats(false), Port: int32(bs.port), Cores: int32(bs.cores), } case *testpb.ServerArgs_Mark: grpclog.Infof("server mark received:") grpclog.Infof(" - %v", argtype) if bs == nil { return status.Error(codes.InvalidArgument, "server does not exist when mark received") } out = &testpb.ServerStatus{ Stats: bs.getStats(argtype.Mark.Reset_), Port: int32(bs.port), Cores: int32(bs.cores), } } if err := stream.Send(out); err != nil { return err } } } func (s *workerServer) RunClient(stream testpb.WorkerService_RunClientServer) error { var bc *benchmarkClient defer func() { // Shut down benchmark client when stream ends. grpclog.Infof("shuting down benchmark client") if bc != nil { bc.shutdown() } }() for { in, err := stream.Recv() if err == io.EOF { return nil } if err != nil { return err } var out *testpb.ClientStatus switch t := in.Argtype.(type) { case *testpb.ClientArgs_Setup: grpclog.Infof("client setup received:") if bc != nil { grpclog.Infof("client setup received when client already exists, shuting down the existing client") bc.shutdown() } bc, err = startBenchmarkClient(t.Setup) if err != nil { return err } out = &testpb.ClientStatus{ Stats: bc.getStats(false), } case *testpb.ClientArgs_Mark: grpclog.Infof("client mark received:") grpclog.Infof(" - %v", t) if bc == nil { return status.Error(codes.InvalidArgument, "client does not exist when mark received") } out = &testpb.ClientStatus{ Stats: bc.getStats(t.Mark.Reset_), } } if err := stream.Send(out); err != nil { return err } } } func (s *workerServer) CoreCount(ctx context.Context, in *testpb.CoreRequest) (*testpb.CoreResponse, error) { grpclog.Infof("core count: %v", runtime.NumCPU()) return &testpb.CoreResponse{Cores: int32(runtime.NumCPU())}, nil } func (s *workerServer) QuitWorker(ctx context.Context, in *testpb.Void) (*testpb.Void, error) { grpclog.Infof("quitting worker") s.stop <- true return &testpb.Void{}, nil } func main() { grpc.EnableTracing = false flag.Parse() lis, err := net.Listen("tcp", ":"+strconv.Itoa(*driverPort)) if err != nil { grpclog.Fatalf("failed to listen: %v", err) } grpclog.Infof("worker listening at port %v", *driverPort) s := grpc.NewServer() stop := make(chan bool) testpb.RegisterWorkerServiceServer(s, &workerServer{ stop: stop, serverPort: *serverPort, }) go func() { <-stop // Wait for 1 second before stopping the server to make sure the return value of QuitWorker is sent to client. // TODO revise this once server graceful stop is supported in gRPC. 
time.Sleep(time.Second) s.Stop() }() runtime.SetBlockProfileRate(*blockProfRate) if *pprofPort >= 0 { go func() { grpclog.Infoln("Starting pprof server on port " + strconv.Itoa(*pprofPort)) grpclog.Infoln(http.ListenAndServe("localhost:"+strconv.Itoa(*pprofPort), nil)) }() } s.Serve(lis) } grpc-go-1.22.1/binarylog/000077500000000000000000000000001351635773100151345ustar00rootroot00000000000000grpc-go-1.22.1/binarylog/grpc_binarylog_v1/000077500000000000000000000000001351635773100205435ustar00rootroot00000000000000grpc-go-1.22.1/binarylog/grpc_binarylog_v1/binarylog.pb.go000066400000000000000000001010111351635773100234520ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: grpc/binarylog/grpc_binarylog_v1/binarylog.proto package grpc_binarylog_v1 // import "google.golang.org/grpc/binarylog/grpc_binarylog_v1" import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import duration "github.com/golang/protobuf/ptypes/duration" import timestamp "github.com/golang/protobuf/ptypes/timestamp" // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package // Enumerates the type of event // Note the terminology is different from the RPC semantics // definition, but the same meaning is expressed here. type GrpcLogEntry_EventType int32 const ( GrpcLogEntry_EVENT_TYPE_UNKNOWN GrpcLogEntry_EventType = 0 // Header sent from client to server GrpcLogEntry_EVENT_TYPE_CLIENT_HEADER GrpcLogEntry_EventType = 1 // Header sent from server to client GrpcLogEntry_EVENT_TYPE_SERVER_HEADER GrpcLogEntry_EventType = 2 // Message sent from client to server GrpcLogEntry_EVENT_TYPE_CLIENT_MESSAGE GrpcLogEntry_EventType = 3 // Message sent from server to client GrpcLogEntry_EVENT_TYPE_SERVER_MESSAGE GrpcLogEntry_EventType = 4 // A signal that client is done sending GrpcLogEntry_EVENT_TYPE_CLIENT_HALF_CLOSE GrpcLogEntry_EventType = 5 // Trailer indicates the end of the RPC. // On client side, this event means a trailer was either received // from the network or the gRPC library locally generated a status // to inform the application about a failure. // On server side, this event means the server application requested // to send a trailer. Note: EVENT_TYPE_CANCEL may still arrive after // this due to races on server side. GrpcLogEntry_EVENT_TYPE_SERVER_TRAILER GrpcLogEntry_EventType = 6 // A signal that the RPC is cancelled. On client side, this // indicates the client application requests a cancellation. // On server side, this indicates that cancellation was detected. // Note: This marks the end of the RPC. Events may arrive after // this due to races. For example, on client side a trailer // may arrive even though the application requested to cancel the RPC. 
GrpcLogEntry_EVENT_TYPE_CANCEL GrpcLogEntry_EventType = 7 ) var GrpcLogEntry_EventType_name = map[int32]string{ 0: "EVENT_TYPE_UNKNOWN", 1: "EVENT_TYPE_CLIENT_HEADER", 2: "EVENT_TYPE_SERVER_HEADER", 3: "EVENT_TYPE_CLIENT_MESSAGE", 4: "EVENT_TYPE_SERVER_MESSAGE", 5: "EVENT_TYPE_CLIENT_HALF_CLOSE", 6: "EVENT_TYPE_SERVER_TRAILER", 7: "EVENT_TYPE_CANCEL", } var GrpcLogEntry_EventType_value = map[string]int32{ "EVENT_TYPE_UNKNOWN": 0, "EVENT_TYPE_CLIENT_HEADER": 1, "EVENT_TYPE_SERVER_HEADER": 2, "EVENT_TYPE_CLIENT_MESSAGE": 3, "EVENT_TYPE_SERVER_MESSAGE": 4, "EVENT_TYPE_CLIENT_HALF_CLOSE": 5, "EVENT_TYPE_SERVER_TRAILER": 6, "EVENT_TYPE_CANCEL": 7, } func (x GrpcLogEntry_EventType) String() string { return proto.EnumName(GrpcLogEntry_EventType_name, int32(x)) } func (GrpcLogEntry_EventType) EnumDescriptor() ([]byte, []int) { return fileDescriptor_binarylog_264c8c9c551ce911, []int{0, 0} } // Enumerates the entity that generates the log entry type GrpcLogEntry_Logger int32 const ( GrpcLogEntry_LOGGER_UNKNOWN GrpcLogEntry_Logger = 0 GrpcLogEntry_LOGGER_CLIENT GrpcLogEntry_Logger = 1 GrpcLogEntry_LOGGER_SERVER GrpcLogEntry_Logger = 2 ) var GrpcLogEntry_Logger_name = map[int32]string{ 0: "LOGGER_UNKNOWN", 1: "LOGGER_CLIENT", 2: "LOGGER_SERVER", } var GrpcLogEntry_Logger_value = map[string]int32{ "LOGGER_UNKNOWN": 0, "LOGGER_CLIENT": 1, "LOGGER_SERVER": 2, } func (x GrpcLogEntry_Logger) String() string { return proto.EnumName(GrpcLogEntry_Logger_name, int32(x)) } func (GrpcLogEntry_Logger) EnumDescriptor() ([]byte, []int) { return fileDescriptor_binarylog_264c8c9c551ce911, []int{0, 1} } type Address_Type int32 const ( Address_TYPE_UNKNOWN Address_Type = 0 // address is in 1.2.3.4 form Address_TYPE_IPV4 Address_Type = 1 // address is in IPv6 canonical form (RFC5952 section 4) // The scope is NOT included in the address string. Address_TYPE_IPV6 Address_Type = 2 // address is UDS string Address_TYPE_UNIX Address_Type = 3 ) var Address_Type_name = map[int32]string{ 0: "TYPE_UNKNOWN", 1: "TYPE_IPV4", 2: "TYPE_IPV6", 3: "TYPE_UNIX", } var Address_Type_value = map[string]int32{ "TYPE_UNKNOWN": 0, "TYPE_IPV4": 1, "TYPE_IPV6": 2, "TYPE_UNIX": 3, } func (x Address_Type) String() string { return proto.EnumName(Address_Type_name, int32(x)) } func (Address_Type) EnumDescriptor() ([]byte, []int) { return fileDescriptor_binarylog_264c8c9c551ce911, []int{7, 0} } // Log entry we store in binary logs type GrpcLogEntry struct { // The timestamp of the binary log message Timestamp *timestamp.Timestamp `protobuf:"bytes,1,opt,name=timestamp,proto3" json:"timestamp,omitempty"` // Uniquely identifies a call. The value must not be 0 in order to disambiguate // from an unset value. // Each call may have several log entries, they will all have the same call_id. // Nothing is guaranteed about their value other than they are unique across // different RPCs in the same gRPC process. CallId uint64 `protobuf:"varint,2,opt,name=call_id,json=callId,proto3" json:"call_id,omitempty"` // The entry sequence id for this call. The first GrpcLogEntry has a // value of 1, to disambiguate from an unset value. The purpose of // this field is to detect missing entries in environments where // durability or ordering is not guaranteed. 
SequenceIdWithinCall uint64 `protobuf:"varint,3,opt,name=sequence_id_within_call,json=sequenceIdWithinCall,proto3" json:"sequence_id_within_call,omitempty"` Type GrpcLogEntry_EventType `protobuf:"varint,4,opt,name=type,proto3,enum=grpc.binarylog.v1.GrpcLogEntry_EventType" json:"type,omitempty"` Logger GrpcLogEntry_Logger `protobuf:"varint,5,opt,name=logger,proto3,enum=grpc.binarylog.v1.GrpcLogEntry_Logger" json:"logger,omitempty"` // The logger uses one of the following fields to record the payload, // according to the type of the log entry. // // Types that are valid to be assigned to Payload: // *GrpcLogEntry_ClientHeader // *GrpcLogEntry_ServerHeader // *GrpcLogEntry_Message // *GrpcLogEntry_Trailer Payload isGrpcLogEntry_Payload `protobuf_oneof:"payload"` // true if payload does not represent the full message or metadata. PayloadTruncated bool `protobuf:"varint,10,opt,name=payload_truncated,json=payloadTruncated,proto3" json:"payload_truncated,omitempty"` // Peer address information, will only be recorded on the first // incoming event. On client side, peer is logged on // EVENT_TYPE_SERVER_HEADER normally or EVENT_TYPE_SERVER_TRAILER in // the case of trailers-only. On server side, peer is always // logged on EVENT_TYPE_CLIENT_HEADER. Peer *Address `protobuf:"bytes,11,opt,name=peer,proto3" json:"peer,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *GrpcLogEntry) Reset() { *m = GrpcLogEntry{} } func (m *GrpcLogEntry) String() string { return proto.CompactTextString(m) } func (*GrpcLogEntry) ProtoMessage() {} func (*GrpcLogEntry) Descriptor() ([]byte, []int) { return fileDescriptor_binarylog_264c8c9c551ce911, []int{0} } func (m *GrpcLogEntry) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_GrpcLogEntry.Unmarshal(m, b) } func (m *GrpcLogEntry) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_GrpcLogEntry.Marshal(b, m, deterministic) } func (dst *GrpcLogEntry) XXX_Merge(src proto.Message) { xxx_messageInfo_GrpcLogEntry.Merge(dst, src) } func (m *GrpcLogEntry) XXX_Size() int { return xxx_messageInfo_GrpcLogEntry.Size(m) } func (m *GrpcLogEntry) XXX_DiscardUnknown() { xxx_messageInfo_GrpcLogEntry.DiscardUnknown(m) } var xxx_messageInfo_GrpcLogEntry proto.InternalMessageInfo func (m *GrpcLogEntry) GetTimestamp() *timestamp.Timestamp { if m != nil { return m.Timestamp } return nil } func (m *GrpcLogEntry) GetCallId() uint64 { if m != nil { return m.CallId } return 0 } func (m *GrpcLogEntry) GetSequenceIdWithinCall() uint64 { if m != nil { return m.SequenceIdWithinCall } return 0 } func (m *GrpcLogEntry) GetType() GrpcLogEntry_EventType { if m != nil { return m.Type } return GrpcLogEntry_EVENT_TYPE_UNKNOWN } func (m *GrpcLogEntry) GetLogger() GrpcLogEntry_Logger { if m != nil { return m.Logger } return GrpcLogEntry_LOGGER_UNKNOWN } type isGrpcLogEntry_Payload interface { isGrpcLogEntry_Payload() } type GrpcLogEntry_ClientHeader struct { ClientHeader *ClientHeader `protobuf:"bytes,6,opt,name=client_header,json=clientHeader,proto3,oneof"` } type GrpcLogEntry_ServerHeader struct { ServerHeader *ServerHeader `protobuf:"bytes,7,opt,name=server_header,json=serverHeader,proto3,oneof"` } type GrpcLogEntry_Message struct { Message *Message `protobuf:"bytes,8,opt,name=message,proto3,oneof"` } type GrpcLogEntry_Trailer struct { Trailer *Trailer `protobuf:"bytes,9,opt,name=trailer,proto3,oneof"` } func (*GrpcLogEntry_ClientHeader) isGrpcLogEntry_Payload() {} func 
(*GrpcLogEntry_ServerHeader) isGrpcLogEntry_Payload() {} func (*GrpcLogEntry_Message) isGrpcLogEntry_Payload() {} func (*GrpcLogEntry_Trailer) isGrpcLogEntry_Payload() {} func (m *GrpcLogEntry) GetPayload() isGrpcLogEntry_Payload { if m != nil { return m.Payload } return nil } func (m *GrpcLogEntry) GetClientHeader() *ClientHeader { if x, ok := m.GetPayload().(*GrpcLogEntry_ClientHeader); ok { return x.ClientHeader } return nil } func (m *GrpcLogEntry) GetServerHeader() *ServerHeader { if x, ok := m.GetPayload().(*GrpcLogEntry_ServerHeader); ok { return x.ServerHeader } return nil } func (m *GrpcLogEntry) GetMessage() *Message { if x, ok := m.GetPayload().(*GrpcLogEntry_Message); ok { return x.Message } return nil } func (m *GrpcLogEntry) GetTrailer() *Trailer { if x, ok := m.GetPayload().(*GrpcLogEntry_Trailer); ok { return x.Trailer } return nil } func (m *GrpcLogEntry) GetPayloadTruncated() bool { if m != nil { return m.PayloadTruncated } return false } func (m *GrpcLogEntry) GetPeer() *Address { if m != nil { return m.Peer } return nil } // XXX_OneofFuncs is for the internal use of the proto package. func (*GrpcLogEntry) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _GrpcLogEntry_OneofMarshaler, _GrpcLogEntry_OneofUnmarshaler, _GrpcLogEntry_OneofSizer, []interface{}{ (*GrpcLogEntry_ClientHeader)(nil), (*GrpcLogEntry_ServerHeader)(nil), (*GrpcLogEntry_Message)(nil), (*GrpcLogEntry_Trailer)(nil), } } func _GrpcLogEntry_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*GrpcLogEntry) // payload switch x := m.Payload.(type) { case *GrpcLogEntry_ClientHeader: b.EncodeVarint(6<<3 | proto.WireBytes) if err := b.EncodeMessage(x.ClientHeader); err != nil { return err } case *GrpcLogEntry_ServerHeader: b.EncodeVarint(7<<3 | proto.WireBytes) if err := b.EncodeMessage(x.ServerHeader); err != nil { return err } case *GrpcLogEntry_Message: b.EncodeVarint(8<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Message); err != nil { return err } case *GrpcLogEntry_Trailer: b.EncodeVarint(9<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Trailer); err != nil { return err } case nil: default: return fmt.Errorf("GrpcLogEntry.Payload has unexpected type %T", x) } return nil } func _GrpcLogEntry_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*GrpcLogEntry) switch tag { case 6: // payload.client_header if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(ClientHeader) err := b.DecodeMessage(msg) m.Payload = &GrpcLogEntry_ClientHeader{msg} return true, err case 7: // payload.server_header if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(ServerHeader) err := b.DecodeMessage(msg) m.Payload = &GrpcLogEntry_ServerHeader{msg} return true, err case 8: // payload.message if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(Message) err := b.DecodeMessage(msg) m.Payload = &GrpcLogEntry_Message{msg} return true, err case 9: // payload.trailer if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(Trailer) err := b.DecodeMessage(msg) m.Payload = &GrpcLogEntry_Trailer{msg} return true, err default: return false, nil } } func _GrpcLogEntry_OneofSizer(msg proto.Message) (n int) { m := msg.(*GrpcLogEntry) // payload switch x := m.Payload.(type) { case *GrpcLogEntry_ClientHeader: s := 
proto.Size(x.ClientHeader) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *GrpcLogEntry_ServerHeader: s := proto.Size(x.ServerHeader) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *GrpcLogEntry_Message: s := proto.Size(x.Message) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *GrpcLogEntry_Trailer: s := proto.Size(x.Trailer) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } type ClientHeader struct { // This contains only the metadata from the application. Metadata *Metadata `protobuf:"bytes,1,opt,name=metadata,proto3" json:"metadata,omitempty"` // The name of the RPC method, which looks something like: // /<service>/<method> // Note the leading "/" character. MethodName string `protobuf:"bytes,2,opt,name=method_name,json=methodName,proto3" json:"method_name,omitempty"` // A single process may be used to run multiple virtual // servers with different identities. // The authority is the name of such a server identity. // It is typically a portion of the URI in the form of // <host> or <host>:<port>. Authority string `protobuf:"bytes,3,opt,name=authority,proto3" json:"authority,omitempty"` // the RPC timeout Timeout *duration.Duration `protobuf:"bytes,4,opt,name=timeout,proto3" json:"timeout,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ClientHeader) Reset() { *m = ClientHeader{} } func (m *ClientHeader) String() string { return proto.CompactTextString(m) } func (*ClientHeader) ProtoMessage() {} func (*ClientHeader) Descriptor() ([]byte, []int) { return fileDescriptor_binarylog_264c8c9c551ce911, []int{1} } func (m *ClientHeader) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ClientHeader.Unmarshal(m, b) } func (m *ClientHeader) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ClientHeader.Marshal(b, m, deterministic) } func (dst *ClientHeader) XXX_Merge(src proto.Message) { xxx_messageInfo_ClientHeader.Merge(dst, src) } func (m *ClientHeader) XXX_Size() int { return xxx_messageInfo_ClientHeader.Size(m) } func (m *ClientHeader) XXX_DiscardUnknown() { xxx_messageInfo_ClientHeader.DiscardUnknown(m) } var xxx_messageInfo_ClientHeader proto.InternalMessageInfo func (m *ClientHeader) GetMetadata() *Metadata { if m != nil { return m.Metadata } return nil } func (m *ClientHeader) GetMethodName() string { if m != nil { return m.MethodName } return "" } func (m *ClientHeader) GetAuthority() string { if m != nil { return m.Authority } return "" } func (m *ClientHeader) GetTimeout() *duration.Duration { if m != nil { return m.Timeout } return nil } type ServerHeader struct { // This contains only the metadata from the application. 
Metadata *Metadata `protobuf:"bytes,1,opt,name=metadata,proto3" json:"metadata,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ServerHeader) Reset() { *m = ServerHeader{} } func (m *ServerHeader) String() string { return proto.CompactTextString(m) } func (*ServerHeader) ProtoMessage() {} func (*ServerHeader) Descriptor() ([]byte, []int) { return fileDescriptor_binarylog_264c8c9c551ce911, []int{2} } func (m *ServerHeader) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ServerHeader.Unmarshal(m, b) } func (m *ServerHeader) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ServerHeader.Marshal(b, m, deterministic) } func (dst *ServerHeader) XXX_Merge(src proto.Message) { xxx_messageInfo_ServerHeader.Merge(dst, src) } func (m *ServerHeader) XXX_Size() int { return xxx_messageInfo_ServerHeader.Size(m) } func (m *ServerHeader) XXX_DiscardUnknown() { xxx_messageInfo_ServerHeader.DiscardUnknown(m) } var xxx_messageInfo_ServerHeader proto.InternalMessageInfo func (m *ServerHeader) GetMetadata() *Metadata { if m != nil { return m.Metadata } return nil } type Trailer struct { // This contains only the metadata from the application. Metadata *Metadata `protobuf:"bytes,1,opt,name=metadata,proto3" json:"metadata,omitempty"` // The gRPC status code. StatusCode uint32 `protobuf:"varint,2,opt,name=status_code,json=statusCode,proto3" json:"status_code,omitempty"` // An original status message before any transport specific // encoding. StatusMessage string `protobuf:"bytes,3,opt,name=status_message,json=statusMessage,proto3" json:"status_message,omitempty"` // The value of the 'grpc-status-details-bin' metadata key. If // present, this is always an encoded 'google.rpc.Status' message. StatusDetails []byte `protobuf:"bytes,4,opt,name=status_details,json=statusDetails,proto3" json:"status_details,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Trailer) Reset() { *m = Trailer{} } func (m *Trailer) String() string { return proto.CompactTextString(m) } func (*Trailer) ProtoMessage() {} func (*Trailer) Descriptor() ([]byte, []int) { return fileDescriptor_binarylog_264c8c9c551ce911, []int{3} } func (m *Trailer) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Trailer.Unmarshal(m, b) } func (m *Trailer) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Trailer.Marshal(b, m, deterministic) } func (dst *Trailer) XXX_Merge(src proto.Message) { xxx_messageInfo_Trailer.Merge(dst, src) } func (m *Trailer) XXX_Size() int { return xxx_messageInfo_Trailer.Size(m) } func (m *Trailer) XXX_DiscardUnknown() { xxx_messageInfo_Trailer.DiscardUnknown(m) } var xxx_messageInfo_Trailer proto.InternalMessageInfo func (m *Trailer) GetMetadata() *Metadata { if m != nil { return m.Metadata } return nil } func (m *Trailer) GetStatusCode() uint32 { if m != nil { return m.StatusCode } return 0 } func (m *Trailer) GetStatusMessage() string { if m != nil { return m.StatusMessage } return "" } func (m *Trailer) GetStatusDetails() []byte { if m != nil { return m.StatusDetails } return nil } // Message payload, used by CLIENT_MESSAGE and SERVER_MESSAGE type Message struct { // Length of the message. It may not be the same as the length of the // data field, as the logging payload can be truncated or omitted. 
Length uint32 `protobuf:"varint,1,opt,name=length,proto3" json:"length,omitempty"` // May be truncated or omitted. Data []byte `protobuf:"bytes,2,opt,name=data,proto3" json:"data,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Message) Reset() { *m = Message{} } func (m *Message) String() string { return proto.CompactTextString(m) } func (*Message) ProtoMessage() {} func (*Message) Descriptor() ([]byte, []int) { return fileDescriptor_binarylog_264c8c9c551ce911, []int{4} } func (m *Message) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Message.Unmarshal(m, b) } func (m *Message) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Message.Marshal(b, m, deterministic) } func (dst *Message) XXX_Merge(src proto.Message) { xxx_messageInfo_Message.Merge(dst, src) } func (m *Message) XXX_Size() int { return xxx_messageInfo_Message.Size(m) } func (m *Message) XXX_DiscardUnknown() { xxx_messageInfo_Message.DiscardUnknown(m) } var xxx_messageInfo_Message proto.InternalMessageInfo func (m *Message) GetLength() uint32 { if m != nil { return m.Length } return 0 } func (m *Message) GetData() []byte { if m != nil { return m.Data } return nil } // A list of metadata pairs, used in the payload of client header, // server header, and server trailer. // Implementations may omit some entries to honor the header limits // of GRPC_BINARY_LOG_CONFIG. // // Header keys added by gRPC are omitted. To be more specific, // implementations will not log the following entries, and this is // not to be treated as a truncation: // - entries handled by grpc that are not user visible, such as those // that begin with 'grpc-' (with exception of grpc-trace-bin) // or keys like 'lb-token' // - transport specific entries, including but not limited to: // ':path', ':authority', 'content-encoding', 'user-agent', 'te', etc // - entries added for call credentials // // Implementations must always log grpc-trace-bin if it is present. // Practically speaking it will only be visible on server side because // grpc-trace-bin is managed by low level client side mechanisms // inaccessible from the application level. On server side, the // header is just a normal metadata key. // The pair will not count towards the size limit. 
type Metadata struct { Entry []*MetadataEntry `protobuf:"bytes,1,rep,name=entry,proto3" json:"entry,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Metadata) Reset() { *m = Metadata{} } func (m *Metadata) String() string { return proto.CompactTextString(m) } func (*Metadata) ProtoMessage() {} func (*Metadata) Descriptor() ([]byte, []int) { return fileDescriptor_binarylog_264c8c9c551ce911, []int{5} } func (m *Metadata) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Metadata.Unmarshal(m, b) } func (m *Metadata) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Metadata.Marshal(b, m, deterministic) } func (dst *Metadata) XXX_Merge(src proto.Message) { xxx_messageInfo_Metadata.Merge(dst, src) } func (m *Metadata) XXX_Size() int { return xxx_messageInfo_Metadata.Size(m) } func (m *Metadata) XXX_DiscardUnknown() { xxx_messageInfo_Metadata.DiscardUnknown(m) } var xxx_messageInfo_Metadata proto.InternalMessageInfo func (m *Metadata) GetEntry() []*MetadataEntry { if m != nil { return m.Entry } return nil } // A metadata key value pair type MetadataEntry struct { Key string `protobuf:"bytes,1,opt,name=key,proto3" json:"key,omitempty"` Value []byte `protobuf:"bytes,2,opt,name=value,proto3" json:"value,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *MetadataEntry) Reset() { *m = MetadataEntry{} } func (m *MetadataEntry) String() string { return proto.CompactTextString(m) } func (*MetadataEntry) ProtoMessage() {} func (*MetadataEntry) Descriptor() ([]byte, []int) { return fileDescriptor_binarylog_264c8c9c551ce911, []int{6} } func (m *MetadataEntry) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_MetadataEntry.Unmarshal(m, b) } func (m *MetadataEntry) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_MetadataEntry.Marshal(b, m, deterministic) } func (dst *MetadataEntry) XXX_Merge(src proto.Message) { xxx_messageInfo_MetadataEntry.Merge(dst, src) } func (m *MetadataEntry) XXX_Size() int { return xxx_messageInfo_MetadataEntry.Size(m) } func (m *MetadataEntry) XXX_DiscardUnknown() { xxx_messageInfo_MetadataEntry.DiscardUnknown(m) } var xxx_messageInfo_MetadataEntry proto.InternalMessageInfo func (m *MetadataEntry) GetKey() string { if m != nil { return m.Key } return "" } func (m *MetadataEntry) GetValue() []byte { if m != nil { return m.Value } return nil } // Address information type Address struct { Type Address_Type `protobuf:"varint,1,opt,name=type,proto3,enum=grpc.binarylog.v1.Address_Type" json:"type,omitempty"` Address string `protobuf:"bytes,2,opt,name=address,proto3" json:"address,omitempty"` // only for TYPE_IPV4 and TYPE_IPV6 IpPort uint32 `protobuf:"varint,3,opt,name=ip_port,json=ipPort,proto3" json:"ip_port,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Address) Reset() { *m = Address{} } func (m *Address) String() string { return proto.CompactTextString(m) } func (*Address) ProtoMessage() {} func (*Address) Descriptor() ([]byte, []int) { return fileDescriptor_binarylog_264c8c9c551ce911, []int{7} } func (m *Address) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Address.Unmarshal(m, b) } func (m *Address) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Address.Marshal(b, m, deterministic) } func (dst *Address) XXX_Merge(src 
proto.Message) { xxx_messageInfo_Address.Merge(dst, src) } func (m *Address) XXX_Size() int { return xxx_messageInfo_Address.Size(m) } func (m *Address) XXX_DiscardUnknown() { xxx_messageInfo_Address.DiscardUnknown(m) } var xxx_messageInfo_Address proto.InternalMessageInfo func (m *Address) GetType() Address_Type { if m != nil { return m.Type } return Address_TYPE_UNKNOWN } func (m *Address) GetAddress() string { if m != nil { return m.Address } return "" } func (m *Address) GetIpPort() uint32 { if m != nil { return m.IpPort } return 0 } func init() { proto.RegisterType((*GrpcLogEntry)(nil), "grpc.binarylog.v1.GrpcLogEntry") proto.RegisterType((*ClientHeader)(nil), "grpc.binarylog.v1.ClientHeader") proto.RegisterType((*ServerHeader)(nil), "grpc.binarylog.v1.ServerHeader") proto.RegisterType((*Trailer)(nil), "grpc.binarylog.v1.Trailer") proto.RegisterType((*Message)(nil), "grpc.binarylog.v1.Message") proto.RegisterType((*Metadata)(nil), "grpc.binarylog.v1.Metadata") proto.RegisterType((*MetadataEntry)(nil), "grpc.binarylog.v1.MetadataEntry") proto.RegisterType((*Address)(nil), "grpc.binarylog.v1.Address") proto.RegisterEnum("grpc.binarylog.v1.GrpcLogEntry_EventType", GrpcLogEntry_EventType_name, GrpcLogEntry_EventType_value) proto.RegisterEnum("grpc.binarylog.v1.GrpcLogEntry_Logger", GrpcLogEntry_Logger_name, GrpcLogEntry_Logger_value) proto.RegisterEnum("grpc.binarylog.v1.Address_Type", Address_Type_name, Address_Type_value) } func init() { proto.RegisterFile("grpc/binarylog/grpc_binarylog_v1/binarylog.proto", fileDescriptor_binarylog_264c8c9c551ce911) } var fileDescriptor_binarylog_264c8c9c551ce911 = []byte{ // 900 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xa4, 0x55, 0x51, 0x6f, 0xe3, 0x44, 0x10, 0x3e, 0x37, 0x69, 0xdc, 0x4c, 0x92, 0xca, 0x5d, 0x95, 0x3b, 0x5f, 0x29, 0x34, 0xb2, 0x04, 0x0a, 0x42, 0x72, 0xb9, 0x94, 0xeb, 0xf1, 0x02, 0x52, 0x92, 0xfa, 0xd2, 0x88, 0x5c, 0x1a, 0x6d, 0x72, 0x3d, 0x40, 0x48, 0xd6, 0x36, 0x5e, 0x1c, 0x0b, 0xc7, 0x6b, 0xd6, 0x9b, 0xa0, 0xfc, 0x2c, 0xde, 0x90, 0xee, 0x77, 0xf1, 0x8e, 0xbc, 0x6b, 0x27, 0xa6, 0x69, 0x0f, 0x09, 0xde, 0x3c, 0xdf, 0x7c, 0xf3, 0xcd, 0xee, 0x78, 0x66, 0x16, 0xbe, 0xf2, 0x79, 0x3c, 0x3b, 0xbf, 0x0b, 0x22, 0xc2, 0xd7, 0x21, 0xf3, 0xcf, 0x53, 0xd3, 0xdd, 0x98, 0xee, 0xea, 0xc5, 0xd6, 0x67, 0xc7, 0x9c, 0x09, 0x86, 0x8e, 0x52, 0x8a, 0xbd, 0x45, 0x57, 0x2f, 0x4e, 0x3e, 0xf5, 0x19, 0xf3, 0x43, 0x7a, 0x2e, 0x09, 0x77, 0xcb, 0x5f, 0xce, 0xbd, 0x25, 0x27, 0x22, 0x60, 0x91, 0x0a, 0x39, 0x39, 0xbb, 0xef, 0x17, 0xc1, 0x82, 0x26, 0x82, 0x2c, 0x62, 0x45, 0xb0, 0xde, 0xeb, 0x50, 0xef, 0xf3, 0x78, 0x36, 0x64, 0xbe, 0x13, 0x09, 0xbe, 0x46, 0xdf, 0x40, 0x75, 0xc3, 0x31, 0xb5, 0xa6, 0xd6, 0xaa, 0xb5, 0x4f, 0x6c, 0xa5, 0x62, 0xe7, 0x2a, 0xf6, 0x34, 0x67, 0xe0, 0x2d, 0x19, 0x3d, 0x03, 0x7d, 0x46, 0xc2, 0xd0, 0x0d, 0x3c, 0x73, 0xaf, 0xa9, 0xb5, 0xca, 0xb8, 0x92, 0x9a, 0x03, 0x0f, 0xbd, 0x84, 0x67, 0x09, 0xfd, 0x6d, 0x49, 0xa3, 0x19, 0x75, 0x03, 0xcf, 0xfd, 0x3d, 0x10, 0xf3, 0x20, 0x72, 0x53, 0xa7, 0x59, 0x92, 0xc4, 0xe3, 0xdc, 0x3d, 0xf0, 0xde, 0x49, 0x67, 0x8f, 0x84, 0x21, 0xfa, 0x16, 0xca, 0x62, 0x1d, 0x53, 0xb3, 0xdc, 0xd4, 0x5a, 0x87, 0xed, 0x2f, 0xec, 0x9d, 0xdb, 0xdb, 0xc5, 0x83, 0xdb, 0xce, 0x8a, 0x46, 0x62, 0xba, 0x8e, 0x29, 0x96, 0x61, 0xe8, 0x3b, 0xa8, 0x84, 0xcc, 0xf7, 0x29, 0x37, 0xf7, 0xa5, 0xc0, 0xe7, 0xff, 0x26, 0x30, 0x94, 0x6c, 0x9c, 0x45, 0xa1, 0xd7, 0xd0, 0x98, 0x85, 0x01, 0x8d, 0x84, 0x3b, 0xa7, 0xc4, 0xa3, 0xdc, 0xac, 0xc8, 0x62, 0x9c, 0x3d, 0x20, 0xd3, 0x93, 0xbc, 0x6b, 0x49, 0xbb, 
0x7e, 0x82, 0xeb, 0xb3, 0x82, 0x9d, 0xea, 0x24, 0x94, 0xaf, 0x28, 0xcf, 0x75, 0xf4, 0x47, 0x75, 0x26, 0x92, 0xb7, 0xd5, 0x49, 0x0a, 0x36, 0xba, 0x04, 0x7d, 0x41, 0x93, 0x84, 0xf8, 0xd4, 0x3c, 0xc8, 0x7f, 0xcb, 0x8e, 0xc2, 0x1b, 0xc5, 0xb8, 0x7e, 0x82, 0x73, 0x72, 0x1a, 0x27, 0x38, 0x09, 0x42, 0xca, 0xcd, 0xea, 0xa3, 0x71, 0x53, 0xc5, 0x48, 0xe3, 0x32, 0x32, 0xfa, 0x12, 0x8e, 0x62, 0xb2, 0x0e, 0x19, 0xf1, 0x5c, 0xc1, 0x97, 0xd1, 0x8c, 0x08, 0xea, 0x99, 0xd0, 0xd4, 0x5a, 0x07, 0xd8, 0xc8, 0x1c, 0xd3, 0x1c, 0x47, 0x36, 0x94, 0x63, 0x4a, 0xb9, 0x59, 0x7b, 0x34, 0x43, 0xc7, 0xf3, 0x38, 0x4d, 0x12, 0x2c, 0x79, 0xd6, 0x5f, 0x1a, 0x54, 0x37, 0x3f, 0x0c, 0x3d, 0x05, 0xe4, 0xdc, 0x3a, 0xa3, 0xa9, 0x3b, 0xfd, 0x71, 0xec, 0xb8, 0x6f, 0x47, 0xdf, 0x8f, 0x6e, 0xde, 0x8d, 0x8c, 0x27, 0xe8, 0x14, 0xcc, 0x02, 0xde, 0x1b, 0x0e, 0xd2, 0xef, 0x6b, 0xa7, 0x73, 0xe5, 0x60, 0x43, 0xbb, 0xe7, 0x9d, 0x38, 0xf8, 0xd6, 0xc1, 0xb9, 0x77, 0x0f, 0x7d, 0x02, 0xcf, 0x77, 0x63, 0xdf, 0x38, 0x93, 0x49, 0xa7, 0xef, 0x18, 0xa5, 0x7b, 0xee, 0x2c, 0x38, 0x77, 0x97, 0x51, 0x13, 0x4e, 0x1f, 0xc8, 0xdc, 0x19, 0xbe, 0x76, 0x7b, 0xc3, 0x9b, 0x89, 0x63, 0xec, 0x3f, 0x2c, 0x30, 0xc5, 0x9d, 0xc1, 0xd0, 0xc1, 0x46, 0x05, 0x7d, 0x04, 0x47, 0x45, 0x81, 0xce, 0xa8, 0xe7, 0x0c, 0x0d, 0xdd, 0xea, 0x42, 0x45, 0xb5, 0x19, 0x42, 0x70, 0x38, 0xbc, 0xe9, 0xf7, 0x1d, 0x5c, 0xb8, 0xef, 0x11, 0x34, 0x32, 0x4c, 0x65, 0x34, 0xb4, 0x02, 0xa4, 0x52, 0x18, 0x7b, 0xdd, 0x2a, 0xe8, 0x59, 0xfd, 0xad, 0xf7, 0x1a, 0xd4, 0x8b, 0xcd, 0x87, 0x5e, 0xc1, 0xc1, 0x82, 0x0a, 0xe2, 0x11, 0x41, 0xb2, 0xe1, 0xfd, 0xf8, 0xc1, 0x2e, 0x51, 0x14, 0xbc, 0x21, 0xa3, 0x33, 0xa8, 0x2d, 0xa8, 0x98, 0x33, 0xcf, 0x8d, 0xc8, 0x82, 0xca, 0x01, 0xae, 0x62, 0x50, 0xd0, 0x88, 0x2c, 0x28, 0x3a, 0x85, 0x2a, 0x59, 0x8a, 0x39, 0xe3, 0x81, 0x58, 0xcb, 0xb1, 0xad, 0xe2, 0x2d, 0x80, 0x2e, 0x40, 0x4f, 0x17, 0x01, 0x5b, 0x0a, 0x39, 0xae, 0xb5, 0xf6, 0xf3, 0x9d, 0x9d, 0x71, 0x95, 0x6d, 0x26, 0x9c, 0x33, 0xad, 0x3e, 0xd4, 0x8b, 0x1d, 0xff, 0x9f, 0x0f, 0x6f, 0xfd, 0xa1, 0x81, 0x9e, 0x75, 0xf0, 0xff, 0xaa, 0x40, 0x22, 0x88, 0x58, 0x26, 0xee, 0x8c, 0x79, 0xaa, 0x02, 0x0d, 0x0c, 0x0a, 0xea, 0x31, 0x8f, 0xa2, 0xcf, 0xe0, 0x30, 0x23, 0xe4, 0x73, 0xa8, 0xca, 0xd0, 0x50, 0x68, 0x36, 0x7a, 0x05, 0x9a, 0x47, 0x05, 0x09, 0xc2, 0x44, 0x56, 0xa4, 0x9e, 0xd3, 0xae, 0x14, 0x68, 0xbd, 0x04, 0x3d, 0x8f, 0x78, 0x0a, 0x95, 0x90, 0x46, 0xbe, 0x98, 0xcb, 0x03, 0x37, 0x70, 0x66, 0x21, 0x04, 0x65, 0x79, 0x8d, 0x3d, 0x19, 0x2f, 0xbf, 0xad, 0x2e, 0x1c, 0xe4, 0x67, 0x47, 0x97, 0xb0, 0x4f, 0xd3, 0xcd, 0x65, 0x6a, 0xcd, 0x52, 0xab, 0xd6, 0x6e, 0x7e, 0xe0, 0x9e, 0x72, 0xc3, 0x61, 0x45, 0xb7, 0x5e, 0x41, 0xe3, 0x1f, 0x38, 0x32, 0xa0, 0xf4, 0x2b, 0x5d, 0xcb, 0xec, 0x55, 0x9c, 0x7e, 0xa2, 0x63, 0xd8, 0x5f, 0x91, 0x70, 0x49, 0xb3, 0xdc, 0xca, 0xb0, 0xfe, 0xd4, 0x40, 0xcf, 0xe6, 0x18, 0x5d, 0x64, 0xdb, 0x59, 0x93, 0xcb, 0xf5, 0xec, 0xf1, 0x89, 0xb7, 0x0b, 0x3b, 0xd9, 0x04, 0x9d, 0x28, 0x34, 0xeb, 0xb0, 0xdc, 0x4c, 0x1f, 0x8f, 0x20, 0x76, 0x63, 0xc6, 0x85, 0xac, 0x6a, 0x03, 0x57, 0x82, 0x78, 0xcc, 0xb8, 0xb0, 0x1c, 0x28, 0xcb, 0x1d, 0x61, 0x40, 0xfd, 0xde, 0x76, 0x68, 0x40, 0x55, 0x22, 0x83, 0xf1, 0xed, 0xd7, 0x86, 0x56, 0x34, 0x2f, 0x8d, 0xbd, 0x8d, 0xf9, 0x76, 0x34, 0xf8, 0xc1, 0x28, 0x75, 0x7f, 0x86, 0xe3, 0x80, 0xed, 0x1e, 0xb2, 0x7b, 0xd8, 0x95, 0xd6, 0x90, 0xf9, 0xe3, 0xb4, 0x51, 0xc7, 0xda, 0x4f, 0xed, 0xac, 0x71, 0x7d, 0x16, 0x92, 0xc8, 0xb7, 0x19, 0x57, 0x4f, 0xf3, 0x87, 0x5e, 0xea, 0xbb, 0x8a, 0xec, 0xf2, 0x8b, 0xbf, 0x03, 0x00, 0x00, 0xff, 0xff, 0xe7, 0xf6, 0x4b, 0x50, 0xd4, 0x07, 0x00, 0x00, } 
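As an aside, the generated getters shown above (GetType, GetAddress, GetIpPort) are written to tolerate a nil receiver, which is why callers can read fields without guarding every access. The following is a minimal sketch, not part of this repository; the import path is taken from the generated file's package comment, and the main program is purely illustrative.

package main

import (
	"fmt"

	pb "google.golang.org/grpc/binarylog/grpc_binarylog_v1"
)

// describe formats an Address; a may be nil, the generated getters
// simply return zero values in that case instead of panicking.
func describe(a *pb.Address) string {
	return fmt.Sprintf("type=%v addr=%q port=%d", a.GetType(), a.GetAddress(), a.GetIpPort())
}

func main() {
	var missing *pb.Address // nil message is safe to read through getters
	fmt.Println(describe(missing))
	fmt.Println(describe(&pb.Address{Address: "127.0.0.1", IpPort: 443}))
}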
grpc-go-1.22.1/call.go000066400000000000000000000046671351635773100144250ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "context" ) // Invoke sends the RPC request on the wire and returns after response is // received. This is typically called by generated code. // // All errors returned by Invoke are compatible with the status package. func (cc *ClientConn) Invoke(ctx context.Context, method string, args, reply interface{}, opts ...CallOption) error { // allow interceptor to see all applicable call options, which means those // configured as defaults from dial option as well as per-call options opts = combine(cc.dopts.callOptions, opts) if cc.dopts.unaryInt != nil { return cc.dopts.unaryInt(ctx, method, args, reply, cc, invoke, opts...) } return invoke(ctx, method, args, reply, cc, opts...) } func combine(o1 []CallOption, o2 []CallOption) []CallOption { // we don't use append because o1 could have extra capacity whose // elements would be overwritten, which could cause inadvertent // sharing (and race conditions) between concurrent calls if len(o1) == 0 { return o2 } else if len(o2) == 0 { return o1 } ret := make([]CallOption, len(o1)+len(o2)) copy(ret, o1) copy(ret[len(o1):], o2) return ret } // Invoke sends the RPC request on the wire and returns after response is // received. This is typically called by generated code. // // DEPRECATED: Use ClientConn.Invoke instead. func Invoke(ctx context.Context, method string, args, reply interface{}, cc *ClientConn, opts ...CallOption) error { return cc.Invoke(ctx, method, args, reply, opts...) } var unaryStreamDesc = &StreamDesc{ServerStreams: false, ClientStreams: false} func invoke(ctx context.Context, method string, req, reply interface{}, cc *ClientConn, opts ...CallOption) error { cs, err := newClientStream(ctx, unaryStreamDesc, cc, method, opts...) if err != nil { return err } if err := cs.SendMsg(req); err != nil { return err } return cs.RecvMsg(reply) } grpc-go-1.22.1/call_test.go000066400000000000000000000345271351635773100154620ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
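For orientation, here is a minimal sketch of how client code drives the Invoke path defined in call.go above; this is not part of the repository, and the echo service, its generated package pb, the method path, and the target address are all assumptions made for illustration. Per-call options such as WaitForReady are combined with any defaults configured on Dial before interceptors see them, exactly as combine does above.

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"

	pb "example.com/echo/proto" // hypothetical generated package
)

func main() {
	// Dial establishes the ClientConn that Invoke operates on.
	conn, err := grpc.Dial("localhost:50051", grpc.WithInsecure())
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	req := &pb.EchoRequest{Message: "ping"} // hypothetical message type
	resp := new(pb.EchoResponse)            // hypothetical message type
	// Generated stubs make exactly this kind of call on behalf of the user.
	if err := conn.Invoke(context.Background(), "/echo.Echo/UnaryEcho", req, resp, grpc.WaitForReady(true)); err != nil {
		log.Fatalf("Invoke: %v", err)
	}
	log.Printf("reply: %s", resp.GetMessage())
}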
* */ package grpc import ( "context" "fmt" "io" "math" "net" "strconv" "strings" "sync" "testing" "time" "google.golang.org/grpc/codes" "google.golang.org/grpc/internal/transport" "google.golang.org/grpc/status" ) var ( expectedRequest = "ping" expectedResponse = "pong" weirdError = "format verbs: %v%s" sizeLargeErr = 1024 * 1024 canceled = 0 ) type testCodec struct { } func (testCodec) Marshal(v interface{}) ([]byte, error) { return []byte(*(v.(*string))), nil } func (testCodec) Unmarshal(data []byte, v interface{}) error { *(v.(*string)) = string(data) return nil } func (testCodec) String() string { return "test" } type testStreamHandler struct { port string t transport.ServerTransport } func (h *testStreamHandler) handleStream(t *testing.T, s *transport.Stream) { p := &parser{r: s} for { pf, req, err := p.recvMsg(math.MaxInt32) if err == io.EOF { break } if err != nil { return } if pf != compressionNone { t.Errorf("Received the mistaken message format %d, want %d", pf, compressionNone) return } var v string codec := testCodec{} if err := codec.Unmarshal(req, &v); err != nil { t.Errorf("Failed to unmarshal the received message: %v", err) return } if v == "weird error" { h.t.WriteStatus(s, status.New(codes.Internal, weirdError)) return } if v == "canceled" { canceled++ h.t.WriteStatus(s, status.New(codes.Internal, "")) return } if v == "port" { h.t.WriteStatus(s, status.New(codes.Internal, h.port)) return } if v != expectedRequest { h.t.WriteStatus(s, status.New(codes.Internal, strings.Repeat("A", sizeLargeErr))) return } } // send a response back to end the stream. data, err := encode(testCodec{}, &expectedResponse) if err != nil { t.Errorf("Failed to encode the response: %v", err) return } hdr, payload := msgHeader(data, nil) h.t.Write(s, hdr, payload, &transport.Options{}) h.t.WriteStatus(s, status.New(codes.OK, "")) } type server struct { lis net.Listener port string addr string startedErr chan error // sent nil or an error after server starts mu sync.Mutex conns map[transport.ServerTransport]bool } type ctxKey string func newTestServer() *server { return &server{startedErr: make(chan error, 1)} } // start starts server. Other goroutines should block on s.startedErr for further operations. 
func (s *server) start(t *testing.T, port int, maxStreams uint32) { var err error if port == 0 { s.lis, err = net.Listen("tcp", "localhost:0") } else { s.lis, err = net.Listen("tcp", "localhost:"+strconv.Itoa(port)) } if err != nil { s.startedErr <- fmt.Errorf("failed to listen: %v", err) return } s.addr = s.lis.Addr().String() _, p, err := net.SplitHostPort(s.addr) if err != nil { s.startedErr <- fmt.Errorf("failed to parse listener address: %v", err) return } s.port = p s.conns = make(map[transport.ServerTransport]bool) s.startedErr <- nil for { conn, err := s.lis.Accept() if err != nil { return } config := &transport.ServerConfig{ MaxStreams: maxStreams, } st, err := transport.NewServerTransport("http2", conn, config) if err != nil { continue } s.mu.Lock() if s.conns == nil { s.mu.Unlock() st.Close() return } s.conns[st] = true s.mu.Unlock() h := &testStreamHandler{ port: s.port, t: st, } go st.HandleStreams(func(s *transport.Stream) { go h.handleStream(t, s) }, func(ctx context.Context, method string) context.Context { return ctx }) } } func (s *server) wait(t *testing.T, timeout time.Duration) { select { case err := <-s.startedErr: if err != nil { t.Fatal(err) } case <-time.After(timeout): t.Fatalf("Timed out after %v waiting for server to be ready", timeout) } } func (s *server) stop() { s.lis.Close() s.mu.Lock() for c := range s.conns { c.Close() } s.conns = nil s.mu.Unlock() } func setUp(t *testing.T, port int, maxStreams uint32) (*server, *ClientConn) { return setUpWithOptions(t, port, maxStreams) } func setUpWithOptions(t *testing.T, port int, maxStreams uint32, dopts ...DialOption) (*server, *ClientConn) { server := newTestServer() go server.start(t, port, maxStreams) server.wait(t, 2*time.Second) addr := "localhost:" + server.port dopts = append(dopts, WithBlock(), WithInsecure(), WithCodec(testCodec{})) cc, err := Dial(addr, dopts...) if err != nil { t.Fatalf("Failed to create ClientConn: %v", err) } return server, cc } func (s) TestUnaryClientInterceptor(t *testing.T) { parentKey := ctxKey("parentKey") interceptor := func(ctx context.Context, method string, req, reply interface{}, cc *ClientConn, invoker UnaryInvoker, opts ...CallOption) error { if ctx.Value(parentKey) == nil { t.Fatalf("interceptor should have %v in context", parentKey) } return invoker(ctx, method, req, reply, cc, opts...) } server, cc := setUpWithOptions(t, 0, math.MaxUint32, WithUnaryInterceptor(interceptor)) defer func() { cc.Close() server.stop() }() var reply string ctx := context.Background() parentCtx := context.WithValue(ctx, ctxKey("parentKey"), 0) if err := cc.Invoke(parentCtx, "/foo/bar", &expectedRequest, &reply); err != nil || reply != expectedResponse { t.Fatalf("grpc.Invoke(_, _, _, _, _) = %v, want ", err) } } func (s) TestChainUnaryClientInterceptor(t *testing.T) { var ( parentKey = ctxKey("parentKey") firstIntKey = ctxKey("firstIntKey") secondIntKey = ctxKey("secondIntKey") ) firstInt := func(ctx context.Context, method string, req, reply interface{}, cc *ClientConn, invoker UnaryInvoker, opts ...CallOption) error { if ctx.Value(parentKey) == nil { t.Fatalf("first interceptor should have %v in context", parentKey) } if ctx.Value(firstIntKey) != nil { t.Fatalf("first interceptor should not have %v in context", firstIntKey) } if ctx.Value(secondIntKey) != nil { t.Fatalf("first interceptor should not have %v in context", secondIntKey) } firstCtx := context.WithValue(ctx, firstIntKey, 1) err := invoker(firstCtx, method, req, reply, cc, opts...) 
*(reply.(*string)) += "1" return err } secondInt := func(ctx context.Context, method string, req, reply interface{}, cc *ClientConn, invoker UnaryInvoker, opts ...CallOption) error { if ctx.Value(parentKey) == nil { t.Fatalf("second interceptor should have %v in context", parentKey) } if ctx.Value(firstIntKey) == nil { t.Fatalf("second interceptor should have %v in context", firstIntKey) } if ctx.Value(secondIntKey) != nil { t.Fatalf("second interceptor should not have %v in context", secondIntKey) } secondCtx := context.WithValue(ctx, secondIntKey, 2) err := invoker(secondCtx, method, req, reply, cc, opts...) *(reply.(*string)) += "2" return err } lastInt := func(ctx context.Context, method string, req, reply interface{}, cc *ClientConn, invoker UnaryInvoker, opts ...CallOption) error { if ctx.Value(parentKey) == nil { t.Fatalf("last interceptor should have %v in context", parentKey) } if ctx.Value(firstIntKey) == nil { t.Fatalf("last interceptor should have %v in context", firstIntKey) } if ctx.Value(secondIntKey) == nil { t.Fatalf("last interceptor should have %v in context", secondIntKey) } err := invoker(ctx, method, req, reply, cc, opts...) *(reply.(*string)) += "3" return err } server, cc := setUpWithOptions(t, 0, math.MaxUint32, WithChainUnaryInterceptor(firstInt, secondInt, lastInt)) defer func() { cc.Close() server.stop() }() var reply string ctx := context.Background() parentCtx := context.WithValue(ctx, ctxKey("parentKey"), 0) if err := cc.Invoke(parentCtx, "/foo/bar", &expectedRequest, &reply); err != nil || reply != expectedResponse+"321" { t.Fatalf("grpc.Invoke(_, _, _, _, _) = %v, want ", err) } } func (s) TestChainOnBaseUnaryClientInterceptor(t *testing.T) { var ( parentKey = ctxKey("parentKey") baseIntKey = ctxKey("baseIntKey") ) baseInt := func(ctx context.Context, method string, req, reply interface{}, cc *ClientConn, invoker UnaryInvoker, opts ...CallOption) error { if ctx.Value(parentKey) == nil { t.Fatalf("base interceptor should have %v in context", parentKey) } if ctx.Value(baseIntKey) != nil { t.Fatalf("base interceptor should not have %v in context", baseIntKey) } baseCtx := context.WithValue(ctx, baseIntKey, 1) return invoker(baseCtx, method, req, reply, cc, opts...) } chainInt := func(ctx context.Context, method string, req, reply interface{}, cc *ClientConn, invoker UnaryInvoker, opts ...CallOption) error { if ctx.Value(parentKey) == nil { t.Fatalf("chain interceptor should have %v in context", parentKey) } if ctx.Value(baseIntKey) == nil { t.Fatalf("chain interceptor should have %v in context", baseIntKey) } return invoker(ctx, method, req, reply, cc, opts...) 
} server, cc := setUpWithOptions(t, 0, math.MaxUint32, WithUnaryInterceptor(baseInt), WithChainUnaryInterceptor(chainInt)) defer func() { cc.Close() server.stop() }() var reply string ctx := context.Background() parentCtx := context.WithValue(ctx, ctxKey("parentKey"), 0) if err := cc.Invoke(parentCtx, "/foo/bar", &expectedRequest, &reply); err != nil || reply != expectedResponse { t.Fatalf("grpc.Invoke(_, _, _, _, _) = %v, want ", err) } } func (s) TestChainStreamClientInterceptor(t *testing.T) { var ( parentKey = ctxKey("parentKey") firstIntKey = ctxKey("firstIntKey") secondIntKey = ctxKey("secondIntKey") ) firstInt := func(ctx context.Context, desc *StreamDesc, cc *ClientConn, method string, streamer Streamer, opts ...CallOption) (ClientStream, error) { if ctx.Value(parentKey) == nil { t.Fatalf("first interceptor should have %v in context", parentKey) } if ctx.Value(firstIntKey) != nil { t.Fatalf("first interceptor should not have %v in context", firstIntKey) } if ctx.Value(secondIntKey) != nil { t.Fatalf("first interceptor should not have %v in context", secondIntKey) } firstCtx := context.WithValue(ctx, firstIntKey, 1) return streamer(firstCtx, desc, cc, method, opts...) } secondInt := func(ctx context.Context, desc *StreamDesc, cc *ClientConn, method string, streamer Streamer, opts ...CallOption) (ClientStream, error) { if ctx.Value(parentKey) == nil { t.Fatalf("second interceptor should have %v in context", parentKey) } if ctx.Value(firstIntKey) == nil { t.Fatalf("second interceptor should have %v in context", firstIntKey) } if ctx.Value(secondIntKey) != nil { t.Fatalf("second interceptor should not have %v in context", secondIntKey) } secondCtx := context.WithValue(ctx, secondIntKey, 2) return streamer(secondCtx, desc, cc, method, opts...) } lastInt := func(ctx context.Context, desc *StreamDesc, cc *ClientConn, method string, streamer Streamer, opts ...CallOption) (ClientStream, error) { if ctx.Value(parentKey) == nil { t.Fatalf("last interceptor should have %v in context", parentKey) } if ctx.Value(firstIntKey) == nil { t.Fatalf("last interceptor should have %v in context", firstIntKey) } if ctx.Value(secondIntKey) == nil { t.Fatalf("last interceptor should have %v in context", secondIntKey) } return streamer(ctx, desc, cc, method, opts...) 
} server, cc := setUpWithOptions(t, 0, math.MaxUint32, WithChainStreamInterceptor(firstInt, secondInt, lastInt)) defer func() { cc.Close() server.stop() }() ctx := context.Background() parentCtx := context.WithValue(ctx, ctxKey("parentKey"), 0) _, err := cc.NewStream(parentCtx, &StreamDesc{}, "/foo/bar") if err != nil { t.Fatalf("grpc.NewStream(_, _, _) = %v, want ", err) } } func (s) TestInvoke(t *testing.T) { server, cc := setUp(t, 0, math.MaxUint32) var reply string if err := cc.Invoke(context.Background(), "/foo/bar", &expectedRequest, &reply); err != nil || reply != expectedResponse { t.Fatalf("grpc.Invoke(_, _, _, _, _) = %v, want ", err) } cc.Close() server.stop() } func (s) TestInvokeLargeErr(t *testing.T) { server, cc := setUp(t, 0, math.MaxUint32) var reply string req := "hello" err := cc.Invoke(context.Background(), "/foo/bar", &req, &reply) if _, ok := status.FromError(err); !ok { t.Fatalf("grpc.Invoke(_, _, _, _, _) receives non rpc error.") } if status.Code(err) != codes.Internal || len(errorDesc(err)) != sizeLargeErr { t.Fatalf("grpc.Invoke(_, _, _, _, _) = %v, want an error of code %d and desc size %d", err, codes.Internal, sizeLargeErr) } cc.Close() server.stop() } // TestInvokeErrorSpecialChars checks that error messages don't get mangled. func (s) TestInvokeErrorSpecialChars(t *testing.T) { server, cc := setUp(t, 0, math.MaxUint32) var reply string req := "weird error" err := cc.Invoke(context.Background(), "/foo/bar", &req, &reply) if _, ok := status.FromError(err); !ok { t.Fatalf("grpc.Invoke(_, _, _, _, _) receives non rpc error.") } if got, want := errorDesc(err), weirdError; got != want { t.Fatalf("grpc.Invoke(_, _, _, _, _) error = %q, want %q", got, want) } cc.Close() server.stop() } // TestInvokeCancel checks that an Invoke with a canceled context is not sent. func (s) TestInvokeCancel(t *testing.T) { server, cc := setUp(t, 0, math.MaxUint32) var reply string req := "canceled" for i := 0; i < 100; i++ { ctx, cancel := context.WithCancel(context.Background()) cancel() cc.Invoke(ctx, "/foo/bar", &req, &reply) } if canceled != 0 { t.Fatalf("received %d of 100 canceled requests", canceled) } cc.Close() server.stop() } // TestInvokeCancelClosedNonFail checks that a canceled non-failfast RPC // on a closed client will terminate. func (s) TestInvokeCancelClosedNonFailFast(t *testing.T) { server, cc := setUp(t, 0, math.MaxUint32) var reply string cc.Close() req := "hello" ctx, cancel := context.WithCancel(context.Background()) cancel() if err := cc.Invoke(ctx, "/foo/bar", &req, &reply, WaitForReady(true)); err == nil { t.Fatalf("canceled invoke on closed connection should fail") } server.stop() } grpc-go-1.22.1/channelz/000077500000000000000000000000001351635773100147505ustar00rootroot00000000000000grpc-go-1.22.1/channelz/grpc_channelz_v1/000077500000000000000000000000001351635773100201735ustar00rootroot00000000000000grpc-go-1.22.1/channelz/grpc_channelz_v1/channelz.pb.go000066400000000000000000003763171351635773100227450ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. 
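The ordering asserted by the chaining tests above (first interceptor outermost, last interceptor closest to the real invoker) can be summarised with a short sketch; it is not part of the repository, and the target address is a placeholder.

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
)

// logging returns a unary client interceptor that logs around each call.
func logging(tag string) grpc.UnaryClientInterceptor {
	return func(ctx context.Context, method string, req, reply interface{},
		cc *grpc.ClientConn, invoker grpc.UnaryInvoker, opts ...grpc.CallOption) error {
		log.Printf("%s: -> %s", tag, method)
		err := invoker(ctx, method, req, reply, cc, opts...)
		log.Printf("%s: <- %s err=%v", tag, method, err)
		return err
	}
}

func main() {
	// "first" wraps "second", which wraps the real invoker, matching the
	// order checked by TestChainUnaryClientInterceptor above.
	conn, err := grpc.Dial("localhost:50051", // placeholder target
		grpc.WithInsecure(),
		grpc.WithChainUnaryInterceptor(logging("first"), logging("second")),
	)
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()
	// ... issue RPCs on conn as usual; each unary call passes through both interceptors.
}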
// source: grpc/channelz/v1/channelz.proto package grpc_channelz_v1 // import "google.golang.org/grpc/channelz/grpc_channelz_v1" import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import any "github.com/golang/protobuf/ptypes/any" import duration "github.com/golang/protobuf/ptypes/duration" import timestamp "github.com/golang/protobuf/ptypes/timestamp" import wrappers "github.com/golang/protobuf/ptypes/wrappers" import ( context "golang.org/x/net/context" grpc "google.golang.org/grpc" ) // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type ChannelConnectivityState_State int32 const ( ChannelConnectivityState_UNKNOWN ChannelConnectivityState_State = 0 ChannelConnectivityState_IDLE ChannelConnectivityState_State = 1 ChannelConnectivityState_CONNECTING ChannelConnectivityState_State = 2 ChannelConnectivityState_READY ChannelConnectivityState_State = 3 ChannelConnectivityState_TRANSIENT_FAILURE ChannelConnectivityState_State = 4 ChannelConnectivityState_SHUTDOWN ChannelConnectivityState_State = 5 ) var ChannelConnectivityState_State_name = map[int32]string{ 0: "UNKNOWN", 1: "IDLE", 2: "CONNECTING", 3: "READY", 4: "TRANSIENT_FAILURE", 5: "SHUTDOWN", } var ChannelConnectivityState_State_value = map[string]int32{ "UNKNOWN": 0, "IDLE": 1, "CONNECTING": 2, "READY": 3, "TRANSIENT_FAILURE": 4, "SHUTDOWN": 5, } func (x ChannelConnectivityState_State) String() string { return proto.EnumName(ChannelConnectivityState_State_name, int32(x)) } func (ChannelConnectivityState_State) EnumDescriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{2, 0} } // The supported severity levels of trace events. type ChannelTraceEvent_Severity int32 const ( ChannelTraceEvent_CT_UNKNOWN ChannelTraceEvent_Severity = 0 ChannelTraceEvent_CT_INFO ChannelTraceEvent_Severity = 1 ChannelTraceEvent_CT_WARNING ChannelTraceEvent_Severity = 2 ChannelTraceEvent_CT_ERROR ChannelTraceEvent_Severity = 3 ) var ChannelTraceEvent_Severity_name = map[int32]string{ 0: "CT_UNKNOWN", 1: "CT_INFO", 2: "CT_WARNING", 3: "CT_ERROR", } var ChannelTraceEvent_Severity_value = map[string]int32{ "CT_UNKNOWN": 0, "CT_INFO": 1, "CT_WARNING": 2, "CT_ERROR": 3, } func (x ChannelTraceEvent_Severity) String() string { return proto.EnumName(ChannelTraceEvent_Severity_name, int32(x)) } func (ChannelTraceEvent_Severity) EnumDescriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{4, 0} } // Channel is a logical grouping of channels, subchannels, and sockets. type Channel struct { // The identifier for this channel. This should bet set. Ref *ChannelRef `protobuf:"bytes,1,opt,name=ref,proto3" json:"ref,omitempty"` // Data specific to this channel. Data *ChannelData `protobuf:"bytes,2,opt,name=data,proto3" json:"data,omitempty"` // There are no ordering guarantees on the order of channel refs. // There may not be cycles in the ref graph. // A channel ref may be present in more than one channel or subchannel. 
ChannelRef []*ChannelRef `protobuf:"bytes,3,rep,name=channel_ref,json=channelRef,proto3" json:"channel_ref,omitempty"` // At most one of 'channel_ref+subchannel_ref' and 'socket' is set. // There are no ordering guarantees on the order of subchannel refs. // There may not be cycles in the ref graph. // A sub channel ref may be present in more than one channel or subchannel. SubchannelRef []*SubchannelRef `protobuf:"bytes,4,rep,name=subchannel_ref,json=subchannelRef,proto3" json:"subchannel_ref,omitempty"` // There are no ordering guarantees on the order of sockets. SocketRef []*SocketRef `protobuf:"bytes,5,rep,name=socket_ref,json=socketRef,proto3" json:"socket_ref,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Channel) Reset() { *m = Channel{} } func (m *Channel) String() string { return proto.CompactTextString(m) } func (*Channel) ProtoMessage() {} func (*Channel) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{0} } func (m *Channel) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Channel.Unmarshal(m, b) } func (m *Channel) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Channel.Marshal(b, m, deterministic) } func (dst *Channel) XXX_Merge(src proto.Message) { xxx_messageInfo_Channel.Merge(dst, src) } func (m *Channel) XXX_Size() int { return xxx_messageInfo_Channel.Size(m) } func (m *Channel) XXX_DiscardUnknown() { xxx_messageInfo_Channel.DiscardUnknown(m) } var xxx_messageInfo_Channel proto.InternalMessageInfo func (m *Channel) GetRef() *ChannelRef { if m != nil { return m.Ref } return nil } func (m *Channel) GetData() *ChannelData { if m != nil { return m.Data } return nil } func (m *Channel) GetChannelRef() []*ChannelRef { if m != nil { return m.ChannelRef } return nil } func (m *Channel) GetSubchannelRef() []*SubchannelRef { if m != nil { return m.SubchannelRef } return nil } func (m *Channel) GetSocketRef() []*SocketRef { if m != nil { return m.SocketRef } return nil } // Subchannel is a logical grouping of channels, subchannels, and sockets. // A subchannel is load balanced over by it's ancestor type Subchannel struct { // The identifier for this channel. Ref *SubchannelRef `protobuf:"bytes,1,opt,name=ref,proto3" json:"ref,omitempty"` // Data specific to this channel. Data *ChannelData `protobuf:"bytes,2,opt,name=data,proto3" json:"data,omitempty"` // There are no ordering guarantees on the order of channel refs. // There may not be cycles in the ref graph. // A channel ref may be present in more than one channel or subchannel. ChannelRef []*ChannelRef `protobuf:"bytes,3,rep,name=channel_ref,json=channelRef,proto3" json:"channel_ref,omitempty"` // At most one of 'channel_ref+subchannel_ref' and 'socket' is set. // There are no ordering guarantees on the order of subchannel refs. // There may not be cycles in the ref graph. // A sub channel ref may be present in more than one channel or subchannel. SubchannelRef []*SubchannelRef `protobuf:"bytes,4,rep,name=subchannel_ref,json=subchannelRef,proto3" json:"subchannel_ref,omitempty"` // There are no ordering guarantees on the order of sockets. 
SocketRef []*SocketRef `protobuf:"bytes,5,rep,name=socket_ref,json=socketRef,proto3" json:"socket_ref,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Subchannel) Reset() { *m = Subchannel{} } func (m *Subchannel) String() string { return proto.CompactTextString(m) } func (*Subchannel) ProtoMessage() {} func (*Subchannel) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{1} } func (m *Subchannel) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Subchannel.Unmarshal(m, b) } func (m *Subchannel) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Subchannel.Marshal(b, m, deterministic) } func (dst *Subchannel) XXX_Merge(src proto.Message) { xxx_messageInfo_Subchannel.Merge(dst, src) } func (m *Subchannel) XXX_Size() int { return xxx_messageInfo_Subchannel.Size(m) } func (m *Subchannel) XXX_DiscardUnknown() { xxx_messageInfo_Subchannel.DiscardUnknown(m) } var xxx_messageInfo_Subchannel proto.InternalMessageInfo func (m *Subchannel) GetRef() *SubchannelRef { if m != nil { return m.Ref } return nil } func (m *Subchannel) GetData() *ChannelData { if m != nil { return m.Data } return nil } func (m *Subchannel) GetChannelRef() []*ChannelRef { if m != nil { return m.ChannelRef } return nil } func (m *Subchannel) GetSubchannelRef() []*SubchannelRef { if m != nil { return m.SubchannelRef } return nil } func (m *Subchannel) GetSocketRef() []*SocketRef { if m != nil { return m.SocketRef } return nil } // These come from the specified states in this document: // https://github.com/grpc/grpc/blob/master/doc/connectivity-semantics-and-api.md type ChannelConnectivityState struct { State ChannelConnectivityState_State `protobuf:"varint,1,opt,name=state,proto3,enum=grpc.channelz.v1.ChannelConnectivityState_State" json:"state,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ChannelConnectivityState) Reset() { *m = ChannelConnectivityState{} } func (m *ChannelConnectivityState) String() string { return proto.CompactTextString(m) } func (*ChannelConnectivityState) ProtoMessage() {} func (*ChannelConnectivityState) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{2} } func (m *ChannelConnectivityState) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ChannelConnectivityState.Unmarshal(m, b) } func (m *ChannelConnectivityState) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ChannelConnectivityState.Marshal(b, m, deterministic) } func (dst *ChannelConnectivityState) XXX_Merge(src proto.Message) { xxx_messageInfo_ChannelConnectivityState.Merge(dst, src) } func (m *ChannelConnectivityState) XXX_Size() int { return xxx_messageInfo_ChannelConnectivityState.Size(m) } func (m *ChannelConnectivityState) XXX_DiscardUnknown() { xxx_messageInfo_ChannelConnectivityState.DiscardUnknown(m) } var xxx_messageInfo_ChannelConnectivityState proto.InternalMessageInfo func (m *ChannelConnectivityState) GetState() ChannelConnectivityState_State { if m != nil { return m.State } return ChannelConnectivityState_UNKNOWN } // Channel data is data related to a specific Channel or Subchannel. type ChannelData struct { // The connectivity state of the channel or subchannel. Implementations // should always set this. 
State *ChannelConnectivityState `protobuf:"bytes,1,opt,name=state,proto3" json:"state,omitempty"` // The target this channel originally tried to connect to. May be absent Target string `protobuf:"bytes,2,opt,name=target,proto3" json:"target,omitempty"` // A trace of recent events on the channel. May be absent. Trace *ChannelTrace `protobuf:"bytes,3,opt,name=trace,proto3" json:"trace,omitempty"` // The number of calls started on the channel CallsStarted int64 `protobuf:"varint,4,opt,name=calls_started,json=callsStarted,proto3" json:"calls_started,omitempty"` // The number of calls that have completed with an OK status CallsSucceeded int64 `protobuf:"varint,5,opt,name=calls_succeeded,json=callsSucceeded,proto3" json:"calls_succeeded,omitempty"` // The number of calls that have completed with a non-OK status CallsFailed int64 `protobuf:"varint,6,opt,name=calls_failed,json=callsFailed,proto3" json:"calls_failed,omitempty"` // The last time a call was started on the channel. LastCallStartedTimestamp *timestamp.Timestamp `protobuf:"bytes,7,opt,name=last_call_started_timestamp,json=lastCallStartedTimestamp,proto3" json:"last_call_started_timestamp,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ChannelData) Reset() { *m = ChannelData{} } func (m *ChannelData) String() string { return proto.CompactTextString(m) } func (*ChannelData) ProtoMessage() {} func (*ChannelData) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{3} } func (m *ChannelData) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ChannelData.Unmarshal(m, b) } func (m *ChannelData) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ChannelData.Marshal(b, m, deterministic) } func (dst *ChannelData) XXX_Merge(src proto.Message) { xxx_messageInfo_ChannelData.Merge(dst, src) } func (m *ChannelData) XXX_Size() int { return xxx_messageInfo_ChannelData.Size(m) } func (m *ChannelData) XXX_DiscardUnknown() { xxx_messageInfo_ChannelData.DiscardUnknown(m) } var xxx_messageInfo_ChannelData proto.InternalMessageInfo func (m *ChannelData) GetState() *ChannelConnectivityState { if m != nil { return m.State } return nil } func (m *ChannelData) GetTarget() string { if m != nil { return m.Target } return "" } func (m *ChannelData) GetTrace() *ChannelTrace { if m != nil { return m.Trace } return nil } func (m *ChannelData) GetCallsStarted() int64 { if m != nil { return m.CallsStarted } return 0 } func (m *ChannelData) GetCallsSucceeded() int64 { if m != nil { return m.CallsSucceeded } return 0 } func (m *ChannelData) GetCallsFailed() int64 { if m != nil { return m.CallsFailed } return 0 } func (m *ChannelData) GetLastCallStartedTimestamp() *timestamp.Timestamp { if m != nil { return m.LastCallStartedTimestamp } return nil } // A trace event is an interesting thing that happened to a channel or // subchannel, such as creation, address resolution, subchannel creation, etc. type ChannelTraceEvent struct { // High level description of the event. Description string `protobuf:"bytes,1,opt,name=description,proto3" json:"description,omitempty"` // the severity of the trace event Severity ChannelTraceEvent_Severity `protobuf:"varint,2,opt,name=severity,proto3,enum=grpc.channelz.v1.ChannelTraceEvent_Severity" json:"severity,omitempty"` // When this event occurred. Timestamp *timestamp.Timestamp `protobuf:"bytes,3,opt,name=timestamp,proto3" json:"timestamp,omitempty"` // ref of referenced channel or subchannel. 
// Optional, only present if this event refers to a child object. For example, // this field would be filled if this trace event was for a subchannel being // created. // // Types that are valid to be assigned to ChildRef: // *ChannelTraceEvent_ChannelRef // *ChannelTraceEvent_SubchannelRef ChildRef isChannelTraceEvent_ChildRef `protobuf_oneof:"child_ref"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ChannelTraceEvent) Reset() { *m = ChannelTraceEvent{} } func (m *ChannelTraceEvent) String() string { return proto.CompactTextString(m) } func (*ChannelTraceEvent) ProtoMessage() {} func (*ChannelTraceEvent) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{4} } func (m *ChannelTraceEvent) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ChannelTraceEvent.Unmarshal(m, b) } func (m *ChannelTraceEvent) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ChannelTraceEvent.Marshal(b, m, deterministic) } func (dst *ChannelTraceEvent) XXX_Merge(src proto.Message) { xxx_messageInfo_ChannelTraceEvent.Merge(dst, src) } func (m *ChannelTraceEvent) XXX_Size() int { return xxx_messageInfo_ChannelTraceEvent.Size(m) } func (m *ChannelTraceEvent) XXX_DiscardUnknown() { xxx_messageInfo_ChannelTraceEvent.DiscardUnknown(m) } var xxx_messageInfo_ChannelTraceEvent proto.InternalMessageInfo func (m *ChannelTraceEvent) GetDescription() string { if m != nil { return m.Description } return "" } func (m *ChannelTraceEvent) GetSeverity() ChannelTraceEvent_Severity { if m != nil { return m.Severity } return ChannelTraceEvent_CT_UNKNOWN } func (m *ChannelTraceEvent) GetTimestamp() *timestamp.Timestamp { if m != nil { return m.Timestamp } return nil } type isChannelTraceEvent_ChildRef interface { isChannelTraceEvent_ChildRef() } type ChannelTraceEvent_ChannelRef struct { ChannelRef *ChannelRef `protobuf:"bytes,4,opt,name=channel_ref,json=channelRef,proto3,oneof"` } type ChannelTraceEvent_SubchannelRef struct { SubchannelRef *SubchannelRef `protobuf:"bytes,5,opt,name=subchannel_ref,json=subchannelRef,proto3,oneof"` } func (*ChannelTraceEvent_ChannelRef) isChannelTraceEvent_ChildRef() {} func (*ChannelTraceEvent_SubchannelRef) isChannelTraceEvent_ChildRef() {} func (m *ChannelTraceEvent) GetChildRef() isChannelTraceEvent_ChildRef { if m != nil { return m.ChildRef } return nil } func (m *ChannelTraceEvent) GetChannelRef() *ChannelRef { if x, ok := m.GetChildRef().(*ChannelTraceEvent_ChannelRef); ok { return x.ChannelRef } return nil } func (m *ChannelTraceEvent) GetSubchannelRef() *SubchannelRef { if x, ok := m.GetChildRef().(*ChannelTraceEvent_SubchannelRef); ok { return x.SubchannelRef } return nil } // XXX_OneofFuncs is for the internal use of the proto package. 
func (*ChannelTraceEvent) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _ChannelTraceEvent_OneofMarshaler, _ChannelTraceEvent_OneofUnmarshaler, _ChannelTraceEvent_OneofSizer, []interface{}{ (*ChannelTraceEvent_ChannelRef)(nil), (*ChannelTraceEvent_SubchannelRef)(nil), } } func _ChannelTraceEvent_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*ChannelTraceEvent) // child_ref switch x := m.ChildRef.(type) { case *ChannelTraceEvent_ChannelRef: b.EncodeVarint(4<<3 | proto.WireBytes) if err := b.EncodeMessage(x.ChannelRef); err != nil { return err } case *ChannelTraceEvent_SubchannelRef: b.EncodeVarint(5<<3 | proto.WireBytes) if err := b.EncodeMessage(x.SubchannelRef); err != nil { return err } case nil: default: return fmt.Errorf("ChannelTraceEvent.ChildRef has unexpected type %T", x) } return nil } func _ChannelTraceEvent_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*ChannelTraceEvent) switch tag { case 4: // child_ref.channel_ref if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(ChannelRef) err := b.DecodeMessage(msg) m.ChildRef = &ChannelTraceEvent_ChannelRef{msg} return true, err case 5: // child_ref.subchannel_ref if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(SubchannelRef) err := b.DecodeMessage(msg) m.ChildRef = &ChannelTraceEvent_SubchannelRef{msg} return true, err default: return false, nil } } func _ChannelTraceEvent_OneofSizer(msg proto.Message) (n int) { m := msg.(*ChannelTraceEvent) // child_ref switch x := m.ChildRef.(type) { case *ChannelTraceEvent_ChannelRef: s := proto.Size(x.ChannelRef) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *ChannelTraceEvent_SubchannelRef: s := proto.Size(x.SubchannelRef) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } // ChannelTrace represents the recent events that have occurred on the channel. type ChannelTrace struct { // Number of events ever logged in this tracing object. This can differ from // events.size() because events can be overwritten or garbage collected by // implementations. NumEventsLogged int64 `protobuf:"varint,1,opt,name=num_events_logged,json=numEventsLogged,proto3" json:"num_events_logged,omitempty"` // Time that this channel was created. CreationTimestamp *timestamp.Timestamp `protobuf:"bytes,2,opt,name=creation_timestamp,json=creationTimestamp,proto3" json:"creation_timestamp,omitempty"` // List of events that have occurred on this channel. 
Events []*ChannelTraceEvent `protobuf:"bytes,3,rep,name=events,proto3" json:"events,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ChannelTrace) Reset() { *m = ChannelTrace{} } func (m *ChannelTrace) String() string { return proto.CompactTextString(m) } func (*ChannelTrace) ProtoMessage() {} func (*ChannelTrace) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{5} } func (m *ChannelTrace) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ChannelTrace.Unmarshal(m, b) } func (m *ChannelTrace) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ChannelTrace.Marshal(b, m, deterministic) } func (dst *ChannelTrace) XXX_Merge(src proto.Message) { xxx_messageInfo_ChannelTrace.Merge(dst, src) } func (m *ChannelTrace) XXX_Size() int { return xxx_messageInfo_ChannelTrace.Size(m) } func (m *ChannelTrace) XXX_DiscardUnknown() { xxx_messageInfo_ChannelTrace.DiscardUnknown(m) } var xxx_messageInfo_ChannelTrace proto.InternalMessageInfo func (m *ChannelTrace) GetNumEventsLogged() int64 { if m != nil { return m.NumEventsLogged } return 0 } func (m *ChannelTrace) GetCreationTimestamp() *timestamp.Timestamp { if m != nil { return m.CreationTimestamp } return nil } func (m *ChannelTrace) GetEvents() []*ChannelTraceEvent { if m != nil { return m.Events } return nil } // ChannelRef is a reference to a Channel. type ChannelRef struct { // The globally unique id for this channel. Must be a positive number. ChannelId int64 `protobuf:"varint,1,opt,name=channel_id,json=channelId,proto3" json:"channel_id,omitempty"` // An optional name associated with the channel. Name string `protobuf:"bytes,2,opt,name=name,proto3" json:"name,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ChannelRef) Reset() { *m = ChannelRef{} } func (m *ChannelRef) String() string { return proto.CompactTextString(m) } func (*ChannelRef) ProtoMessage() {} func (*ChannelRef) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{6} } func (m *ChannelRef) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ChannelRef.Unmarshal(m, b) } func (m *ChannelRef) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ChannelRef.Marshal(b, m, deterministic) } func (dst *ChannelRef) XXX_Merge(src proto.Message) { xxx_messageInfo_ChannelRef.Merge(dst, src) } func (m *ChannelRef) XXX_Size() int { return xxx_messageInfo_ChannelRef.Size(m) } func (m *ChannelRef) XXX_DiscardUnknown() { xxx_messageInfo_ChannelRef.DiscardUnknown(m) } var xxx_messageInfo_ChannelRef proto.InternalMessageInfo func (m *ChannelRef) GetChannelId() int64 { if m != nil { return m.ChannelId } return 0 } func (m *ChannelRef) GetName() string { if m != nil { return m.Name } return "" } // SubchannelRef is a reference to a Subchannel. type SubchannelRef struct { // The globally unique id for this subchannel. Must be a positive number. SubchannelId int64 `protobuf:"varint,7,opt,name=subchannel_id,json=subchannelId,proto3" json:"subchannel_id,omitempty"` // An optional name associated with the subchannel. 
Name string `protobuf:"bytes,8,opt,name=name,proto3" json:"name,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *SubchannelRef) Reset() { *m = SubchannelRef{} } func (m *SubchannelRef) String() string { return proto.CompactTextString(m) } func (*SubchannelRef) ProtoMessage() {} func (*SubchannelRef) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{7} } func (m *SubchannelRef) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_SubchannelRef.Unmarshal(m, b) } func (m *SubchannelRef) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_SubchannelRef.Marshal(b, m, deterministic) } func (dst *SubchannelRef) XXX_Merge(src proto.Message) { xxx_messageInfo_SubchannelRef.Merge(dst, src) } func (m *SubchannelRef) XXX_Size() int { return xxx_messageInfo_SubchannelRef.Size(m) } func (m *SubchannelRef) XXX_DiscardUnknown() { xxx_messageInfo_SubchannelRef.DiscardUnknown(m) } var xxx_messageInfo_SubchannelRef proto.InternalMessageInfo func (m *SubchannelRef) GetSubchannelId() int64 { if m != nil { return m.SubchannelId } return 0 } func (m *SubchannelRef) GetName() string { if m != nil { return m.Name } return "" } // SocketRef is a reference to a Socket. type SocketRef struct { // The globally unique id for this socket. Must be a positive number. SocketId int64 `protobuf:"varint,3,opt,name=socket_id,json=socketId,proto3" json:"socket_id,omitempty"` // An optional name associated with the socket. Name string `protobuf:"bytes,4,opt,name=name,proto3" json:"name,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *SocketRef) Reset() { *m = SocketRef{} } func (m *SocketRef) String() string { return proto.CompactTextString(m) } func (*SocketRef) ProtoMessage() {} func (*SocketRef) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{8} } func (m *SocketRef) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_SocketRef.Unmarshal(m, b) } func (m *SocketRef) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_SocketRef.Marshal(b, m, deterministic) } func (dst *SocketRef) XXX_Merge(src proto.Message) { xxx_messageInfo_SocketRef.Merge(dst, src) } func (m *SocketRef) XXX_Size() int { return xxx_messageInfo_SocketRef.Size(m) } func (m *SocketRef) XXX_DiscardUnknown() { xxx_messageInfo_SocketRef.DiscardUnknown(m) } var xxx_messageInfo_SocketRef proto.InternalMessageInfo func (m *SocketRef) GetSocketId() int64 { if m != nil { return m.SocketId } return 0 } func (m *SocketRef) GetName() string { if m != nil { return m.Name } return "" } // ServerRef is a reference to a Server. type ServerRef struct { // A globally unique identifier for this server. Must be a positive number. ServerId int64 `protobuf:"varint,5,opt,name=server_id,json=serverId,proto3" json:"server_id,omitempty"` // An optional name associated with the server. 
Name string `protobuf:"bytes,6,opt,name=name,proto3" json:"name,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ServerRef) Reset() { *m = ServerRef{} } func (m *ServerRef) String() string { return proto.CompactTextString(m) } func (*ServerRef) ProtoMessage() {} func (*ServerRef) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{9} } func (m *ServerRef) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ServerRef.Unmarshal(m, b) } func (m *ServerRef) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ServerRef.Marshal(b, m, deterministic) } func (dst *ServerRef) XXX_Merge(src proto.Message) { xxx_messageInfo_ServerRef.Merge(dst, src) } func (m *ServerRef) XXX_Size() int { return xxx_messageInfo_ServerRef.Size(m) } func (m *ServerRef) XXX_DiscardUnknown() { xxx_messageInfo_ServerRef.DiscardUnknown(m) } var xxx_messageInfo_ServerRef proto.InternalMessageInfo func (m *ServerRef) GetServerId() int64 { if m != nil { return m.ServerId } return 0 } func (m *ServerRef) GetName() string { if m != nil { return m.Name } return "" } // Server represents a single server. There may be multiple servers in a single // program. type Server struct { // The identifier for a Server. This should be set. Ref *ServerRef `protobuf:"bytes,1,opt,name=ref,proto3" json:"ref,omitempty"` // The associated data of the Server. Data *ServerData `protobuf:"bytes,2,opt,name=data,proto3" json:"data,omitempty"` // The sockets that the server is listening on. There are no ordering // guarantees. This may be absent. ListenSocket []*SocketRef `protobuf:"bytes,3,rep,name=listen_socket,json=listenSocket,proto3" json:"listen_socket,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Server) Reset() { *m = Server{} } func (m *Server) String() string { return proto.CompactTextString(m) } func (*Server) ProtoMessage() {} func (*Server) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{10} } func (m *Server) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Server.Unmarshal(m, b) } func (m *Server) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Server.Marshal(b, m, deterministic) } func (dst *Server) XXX_Merge(src proto.Message) { xxx_messageInfo_Server.Merge(dst, src) } func (m *Server) XXX_Size() int { return xxx_messageInfo_Server.Size(m) } func (m *Server) XXX_DiscardUnknown() { xxx_messageInfo_Server.DiscardUnknown(m) } var xxx_messageInfo_Server proto.InternalMessageInfo func (m *Server) GetRef() *ServerRef { if m != nil { return m.Ref } return nil } func (m *Server) GetData() *ServerData { if m != nil { return m.Data } return nil } func (m *Server) GetListenSocket() []*SocketRef { if m != nil { return m.ListenSocket } return nil } // ServerData is data for a specific Server. type ServerData struct { // A trace of recent events on the server. May be absent. 
Trace *ChannelTrace `protobuf:"bytes,1,opt,name=trace,proto3" json:"trace,omitempty"` // The number of incoming calls started on the server CallsStarted int64 `protobuf:"varint,2,opt,name=calls_started,json=callsStarted,proto3" json:"calls_started,omitempty"` // The number of incoming calls that have completed with an OK status CallsSucceeded int64 `protobuf:"varint,3,opt,name=calls_succeeded,json=callsSucceeded,proto3" json:"calls_succeeded,omitempty"` // The number of incoming calls that have a completed with a non-OK status CallsFailed int64 `protobuf:"varint,4,opt,name=calls_failed,json=callsFailed,proto3" json:"calls_failed,omitempty"` // The last time a call was started on the server. LastCallStartedTimestamp *timestamp.Timestamp `protobuf:"bytes,5,opt,name=last_call_started_timestamp,json=lastCallStartedTimestamp,proto3" json:"last_call_started_timestamp,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ServerData) Reset() { *m = ServerData{} } func (m *ServerData) String() string { return proto.CompactTextString(m) } func (*ServerData) ProtoMessage() {} func (*ServerData) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{11} } func (m *ServerData) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ServerData.Unmarshal(m, b) } func (m *ServerData) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ServerData.Marshal(b, m, deterministic) } func (dst *ServerData) XXX_Merge(src proto.Message) { xxx_messageInfo_ServerData.Merge(dst, src) } func (m *ServerData) XXX_Size() int { return xxx_messageInfo_ServerData.Size(m) } func (m *ServerData) XXX_DiscardUnknown() { xxx_messageInfo_ServerData.DiscardUnknown(m) } var xxx_messageInfo_ServerData proto.InternalMessageInfo func (m *ServerData) GetTrace() *ChannelTrace { if m != nil { return m.Trace } return nil } func (m *ServerData) GetCallsStarted() int64 { if m != nil { return m.CallsStarted } return 0 } func (m *ServerData) GetCallsSucceeded() int64 { if m != nil { return m.CallsSucceeded } return 0 } func (m *ServerData) GetCallsFailed() int64 { if m != nil { return m.CallsFailed } return 0 } func (m *ServerData) GetLastCallStartedTimestamp() *timestamp.Timestamp { if m != nil { return m.LastCallStartedTimestamp } return nil } // Information about an actual connection. Pronounced "sock-ay". type Socket struct { // The identifier for the Socket. Ref *SocketRef `protobuf:"bytes,1,opt,name=ref,proto3" json:"ref,omitempty"` // Data specific to this Socket. Data *SocketData `protobuf:"bytes,2,opt,name=data,proto3" json:"data,omitempty"` // The locally bound address. Local *Address `protobuf:"bytes,3,opt,name=local,proto3" json:"local,omitempty"` // The remote bound address. May be absent. Remote *Address `protobuf:"bytes,4,opt,name=remote,proto3" json:"remote,omitempty"` // Security details for this socket. May be absent if not available, or // there is no security on the socket. Security *Security `protobuf:"bytes,5,opt,name=security,proto3" json:"security,omitempty"` // Optional, represents the name of the remote endpoint, if different than // the original target name. 
RemoteName string `protobuf:"bytes,6,opt,name=remote_name,json=remoteName,proto3" json:"remote_name,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Socket) Reset() { *m = Socket{} } func (m *Socket) String() string { return proto.CompactTextString(m) } func (*Socket) ProtoMessage() {} func (*Socket) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{12} } func (m *Socket) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Socket.Unmarshal(m, b) } func (m *Socket) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Socket.Marshal(b, m, deterministic) } func (dst *Socket) XXX_Merge(src proto.Message) { xxx_messageInfo_Socket.Merge(dst, src) } func (m *Socket) XXX_Size() int { return xxx_messageInfo_Socket.Size(m) } func (m *Socket) XXX_DiscardUnknown() { xxx_messageInfo_Socket.DiscardUnknown(m) } var xxx_messageInfo_Socket proto.InternalMessageInfo func (m *Socket) GetRef() *SocketRef { if m != nil { return m.Ref } return nil } func (m *Socket) GetData() *SocketData { if m != nil { return m.Data } return nil } func (m *Socket) GetLocal() *Address { if m != nil { return m.Local } return nil } func (m *Socket) GetRemote() *Address { if m != nil { return m.Remote } return nil } func (m *Socket) GetSecurity() *Security { if m != nil { return m.Security } return nil } func (m *Socket) GetRemoteName() string { if m != nil { return m.RemoteName } return "" } // SocketData is data associated for a specific Socket. The fields present // are specific to the implementation, so there may be minor differences in // the semantics. (e.g. flow control windows) type SocketData struct { // The number of streams that have been started. StreamsStarted int64 `protobuf:"varint,1,opt,name=streams_started,json=streamsStarted,proto3" json:"streams_started,omitempty"` // The number of streams that have ended successfully: // On client side, received frame with eos bit set; // On server side, sent frame with eos bit set. StreamsSucceeded int64 `protobuf:"varint,2,opt,name=streams_succeeded,json=streamsSucceeded,proto3" json:"streams_succeeded,omitempty"` // The number of streams that have ended unsuccessfully: // On client side, ended without receiving frame with eos bit set; // On server side, ended without sending frame with eos bit set. StreamsFailed int64 `protobuf:"varint,3,opt,name=streams_failed,json=streamsFailed,proto3" json:"streams_failed,omitempty"` // The number of grpc messages successfully sent on this socket. MessagesSent int64 `protobuf:"varint,4,opt,name=messages_sent,json=messagesSent,proto3" json:"messages_sent,omitempty"` // The number of grpc messages received on this socket. MessagesReceived int64 `protobuf:"varint,5,opt,name=messages_received,json=messagesReceived,proto3" json:"messages_received,omitempty"` // The number of keep alives sent. This is typically implemented with HTTP/2 // ping messages. KeepAlivesSent int64 `protobuf:"varint,6,opt,name=keep_alives_sent,json=keepAlivesSent,proto3" json:"keep_alives_sent,omitempty"` // The last time a stream was created by this endpoint. Usually unset for // servers. LastLocalStreamCreatedTimestamp *timestamp.Timestamp `protobuf:"bytes,7,opt,name=last_local_stream_created_timestamp,json=lastLocalStreamCreatedTimestamp,proto3" json:"last_local_stream_created_timestamp,omitempty"` // The last time a stream was created by the remote endpoint. Usually unset // for clients. 
LastRemoteStreamCreatedTimestamp *timestamp.Timestamp `protobuf:"bytes,8,opt,name=last_remote_stream_created_timestamp,json=lastRemoteStreamCreatedTimestamp,proto3" json:"last_remote_stream_created_timestamp,omitempty"` // The last time a message was sent by this endpoint. LastMessageSentTimestamp *timestamp.Timestamp `protobuf:"bytes,9,opt,name=last_message_sent_timestamp,json=lastMessageSentTimestamp,proto3" json:"last_message_sent_timestamp,omitempty"` // The last time a message was received by this endpoint. LastMessageReceivedTimestamp *timestamp.Timestamp `protobuf:"bytes,10,opt,name=last_message_received_timestamp,json=lastMessageReceivedTimestamp,proto3" json:"last_message_received_timestamp,omitempty"` // The amount of window, granted to the local endpoint by the remote endpoint. // This may be slightly out of date due to network latency. This does NOT // include stream level or TCP level flow control info. LocalFlowControlWindow *wrappers.Int64Value `protobuf:"bytes,11,opt,name=local_flow_control_window,json=localFlowControlWindow,proto3" json:"local_flow_control_window,omitempty"` // The amount of window, granted to the remote endpoint by the local endpoint. // This may be slightly out of date due to network latency. This does NOT // include stream level or TCP level flow control info. RemoteFlowControlWindow *wrappers.Int64Value `protobuf:"bytes,12,opt,name=remote_flow_control_window,json=remoteFlowControlWindow,proto3" json:"remote_flow_control_window,omitempty"` // Socket options set on this socket. May be absent if 'summary' is set // on GetSocketRequest. Option []*SocketOption `protobuf:"bytes,13,rep,name=option,proto3" json:"option,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *SocketData) Reset() { *m = SocketData{} } func (m *SocketData) String() string { return proto.CompactTextString(m) } func (*SocketData) ProtoMessage() {} func (*SocketData) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{13} } func (m *SocketData) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_SocketData.Unmarshal(m, b) } func (m *SocketData) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_SocketData.Marshal(b, m, deterministic) } func (dst *SocketData) XXX_Merge(src proto.Message) { xxx_messageInfo_SocketData.Merge(dst, src) } func (m *SocketData) XXX_Size() int { return xxx_messageInfo_SocketData.Size(m) } func (m *SocketData) XXX_DiscardUnknown() { xxx_messageInfo_SocketData.DiscardUnknown(m) } var xxx_messageInfo_SocketData proto.InternalMessageInfo func (m *SocketData) GetStreamsStarted() int64 { if m != nil { return m.StreamsStarted } return 0 } func (m *SocketData) GetStreamsSucceeded() int64 { if m != nil { return m.StreamsSucceeded } return 0 } func (m *SocketData) GetStreamsFailed() int64 { if m != nil { return m.StreamsFailed } return 0 } func (m *SocketData) GetMessagesSent() int64 { if m != nil { return m.MessagesSent } return 0 } func (m *SocketData) GetMessagesReceived() int64 { if m != nil { return m.MessagesReceived } return 0 } func (m *SocketData) GetKeepAlivesSent() int64 { if m != nil { return m.KeepAlivesSent } return 0 } func (m *SocketData) GetLastLocalStreamCreatedTimestamp() *timestamp.Timestamp { if m != nil { return m.LastLocalStreamCreatedTimestamp } return nil } func (m *SocketData) GetLastRemoteStreamCreatedTimestamp() *timestamp.Timestamp { if m != nil { return m.LastRemoteStreamCreatedTimestamp } return 
nil } func (m *SocketData) GetLastMessageSentTimestamp() *timestamp.Timestamp { if m != nil { return m.LastMessageSentTimestamp } return nil } func (m *SocketData) GetLastMessageReceivedTimestamp() *timestamp.Timestamp { if m != nil { return m.LastMessageReceivedTimestamp } return nil } func (m *SocketData) GetLocalFlowControlWindow() *wrappers.Int64Value { if m != nil { return m.LocalFlowControlWindow } return nil } func (m *SocketData) GetRemoteFlowControlWindow() *wrappers.Int64Value { if m != nil { return m.RemoteFlowControlWindow } return nil } func (m *SocketData) GetOption() []*SocketOption { if m != nil { return m.Option } return nil } // Address represents the address used to create the socket. type Address struct { // Types that are valid to be assigned to Address: // *Address_TcpipAddress // *Address_UdsAddress_ // *Address_OtherAddress_ Address isAddress_Address `protobuf_oneof:"address"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Address) Reset() { *m = Address{} } func (m *Address) String() string { return proto.CompactTextString(m) } func (*Address) ProtoMessage() {} func (*Address) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{14} } func (m *Address) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Address.Unmarshal(m, b) } func (m *Address) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Address.Marshal(b, m, deterministic) } func (dst *Address) XXX_Merge(src proto.Message) { xxx_messageInfo_Address.Merge(dst, src) } func (m *Address) XXX_Size() int { return xxx_messageInfo_Address.Size(m) } func (m *Address) XXX_DiscardUnknown() { xxx_messageInfo_Address.DiscardUnknown(m) } var xxx_messageInfo_Address proto.InternalMessageInfo type isAddress_Address interface { isAddress_Address() } type Address_TcpipAddress struct { TcpipAddress *Address_TcpIpAddress `protobuf:"bytes,1,opt,name=tcpip_address,json=tcpipAddress,proto3,oneof"` } type Address_UdsAddress_ struct { UdsAddress *Address_UdsAddress `protobuf:"bytes,2,opt,name=uds_address,json=udsAddress,proto3,oneof"` } type Address_OtherAddress_ struct { OtherAddress *Address_OtherAddress `protobuf:"bytes,3,opt,name=other_address,json=otherAddress,proto3,oneof"` } func (*Address_TcpipAddress) isAddress_Address() {} func (*Address_UdsAddress_) isAddress_Address() {} func (*Address_OtherAddress_) isAddress_Address() {} func (m *Address) GetAddress() isAddress_Address { if m != nil { return m.Address } return nil } func (m *Address) GetTcpipAddress() *Address_TcpIpAddress { if x, ok := m.GetAddress().(*Address_TcpipAddress); ok { return x.TcpipAddress } return nil } func (m *Address) GetUdsAddress() *Address_UdsAddress { if x, ok := m.GetAddress().(*Address_UdsAddress_); ok { return x.UdsAddress } return nil } func (m *Address) GetOtherAddress() *Address_OtherAddress { if x, ok := m.GetAddress().(*Address_OtherAddress_); ok { return x.OtherAddress } return nil } // XXX_OneofFuncs is for the internal use of the proto package. 
func (*Address) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _Address_OneofMarshaler, _Address_OneofUnmarshaler, _Address_OneofSizer, []interface{}{ (*Address_TcpipAddress)(nil), (*Address_UdsAddress_)(nil), (*Address_OtherAddress_)(nil), } } func _Address_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*Address) // address switch x := m.Address.(type) { case *Address_TcpipAddress: b.EncodeVarint(1<<3 | proto.WireBytes) if err := b.EncodeMessage(x.TcpipAddress); err != nil { return err } case *Address_UdsAddress_: b.EncodeVarint(2<<3 | proto.WireBytes) if err := b.EncodeMessage(x.UdsAddress); err != nil { return err } case *Address_OtherAddress_: b.EncodeVarint(3<<3 | proto.WireBytes) if err := b.EncodeMessage(x.OtherAddress); err != nil { return err } case nil: default: return fmt.Errorf("Address.Address has unexpected type %T", x) } return nil } func _Address_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*Address) switch tag { case 1: // address.tcpip_address if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(Address_TcpIpAddress) err := b.DecodeMessage(msg) m.Address = &Address_TcpipAddress{msg} return true, err case 2: // address.uds_address if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(Address_UdsAddress) err := b.DecodeMessage(msg) m.Address = &Address_UdsAddress_{msg} return true, err case 3: // address.other_address if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(Address_OtherAddress) err := b.DecodeMessage(msg) m.Address = &Address_OtherAddress_{msg} return true, err default: return false, nil } } func _Address_OneofSizer(msg proto.Message) (n int) { m := msg.(*Address) // address switch x := m.Address.(type) { case *Address_TcpipAddress: s := proto.Size(x.TcpipAddress) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *Address_UdsAddress_: s := proto.Size(x.UdsAddress) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *Address_OtherAddress_: s := proto.Size(x.OtherAddress) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } type Address_TcpIpAddress struct { // Either the IPv4 or IPv6 address in bytes. Will be either 4 bytes or 16 // bytes in length. IpAddress []byte `protobuf:"bytes,1,opt,name=ip_address,json=ipAddress,proto3" json:"ip_address,omitempty"` // 0-64k, or -1 if not appropriate. 
Port int32 `protobuf:"varint,2,opt,name=port,proto3" json:"port,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Address_TcpIpAddress) Reset() { *m = Address_TcpIpAddress{} } func (m *Address_TcpIpAddress) String() string { return proto.CompactTextString(m) } func (*Address_TcpIpAddress) ProtoMessage() {} func (*Address_TcpIpAddress) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{14, 0} } func (m *Address_TcpIpAddress) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Address_TcpIpAddress.Unmarshal(m, b) } func (m *Address_TcpIpAddress) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Address_TcpIpAddress.Marshal(b, m, deterministic) } func (dst *Address_TcpIpAddress) XXX_Merge(src proto.Message) { xxx_messageInfo_Address_TcpIpAddress.Merge(dst, src) } func (m *Address_TcpIpAddress) XXX_Size() int { return xxx_messageInfo_Address_TcpIpAddress.Size(m) } func (m *Address_TcpIpAddress) XXX_DiscardUnknown() { xxx_messageInfo_Address_TcpIpAddress.DiscardUnknown(m) } var xxx_messageInfo_Address_TcpIpAddress proto.InternalMessageInfo func (m *Address_TcpIpAddress) GetIpAddress() []byte { if m != nil { return m.IpAddress } return nil } func (m *Address_TcpIpAddress) GetPort() int32 { if m != nil { return m.Port } return 0 } // A Unix Domain Socket address. type Address_UdsAddress struct { Filename string `protobuf:"bytes,1,opt,name=filename,proto3" json:"filename,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Address_UdsAddress) Reset() { *m = Address_UdsAddress{} } func (m *Address_UdsAddress) String() string { return proto.CompactTextString(m) } func (*Address_UdsAddress) ProtoMessage() {} func (*Address_UdsAddress) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{14, 1} } func (m *Address_UdsAddress) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Address_UdsAddress.Unmarshal(m, b) } func (m *Address_UdsAddress) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Address_UdsAddress.Marshal(b, m, deterministic) } func (dst *Address_UdsAddress) XXX_Merge(src proto.Message) { xxx_messageInfo_Address_UdsAddress.Merge(dst, src) } func (m *Address_UdsAddress) XXX_Size() int { return xxx_messageInfo_Address_UdsAddress.Size(m) } func (m *Address_UdsAddress) XXX_DiscardUnknown() { xxx_messageInfo_Address_UdsAddress.DiscardUnknown(m) } var xxx_messageInfo_Address_UdsAddress proto.InternalMessageInfo func (m *Address_UdsAddress) GetFilename() string { if m != nil { return m.Filename } return "" } // An address type not included above. type Address_OtherAddress struct { // The human readable version of the value. This value should be set. Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` // The actual address message. 
Value *any.Any `protobuf:"bytes,2,opt,name=value,proto3" json:"value,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Address_OtherAddress) Reset() { *m = Address_OtherAddress{} } func (m *Address_OtherAddress) String() string { return proto.CompactTextString(m) } func (*Address_OtherAddress) ProtoMessage() {} func (*Address_OtherAddress) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{14, 2} } func (m *Address_OtherAddress) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Address_OtherAddress.Unmarshal(m, b) } func (m *Address_OtherAddress) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Address_OtherAddress.Marshal(b, m, deterministic) } func (dst *Address_OtherAddress) XXX_Merge(src proto.Message) { xxx_messageInfo_Address_OtherAddress.Merge(dst, src) } func (m *Address_OtherAddress) XXX_Size() int { return xxx_messageInfo_Address_OtherAddress.Size(m) } func (m *Address_OtherAddress) XXX_DiscardUnknown() { xxx_messageInfo_Address_OtherAddress.DiscardUnknown(m) } var xxx_messageInfo_Address_OtherAddress proto.InternalMessageInfo func (m *Address_OtherAddress) GetName() string { if m != nil { return m.Name } return "" } func (m *Address_OtherAddress) GetValue() *any.Any { if m != nil { return m.Value } return nil } // Security represents details about how secure the socket is. type Security struct { // Types that are valid to be assigned to Model: // *Security_Tls_ // *Security_Other Model isSecurity_Model `protobuf_oneof:"model"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Security) Reset() { *m = Security{} } func (m *Security) String() string { return proto.CompactTextString(m) } func (*Security) ProtoMessage() {} func (*Security) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{15} } func (m *Security) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Security.Unmarshal(m, b) } func (m *Security) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Security.Marshal(b, m, deterministic) } func (dst *Security) XXX_Merge(src proto.Message) { xxx_messageInfo_Security.Merge(dst, src) } func (m *Security) XXX_Size() int { return xxx_messageInfo_Security.Size(m) } func (m *Security) XXX_DiscardUnknown() { xxx_messageInfo_Security.DiscardUnknown(m) } var xxx_messageInfo_Security proto.InternalMessageInfo type isSecurity_Model interface { isSecurity_Model() } type Security_Tls_ struct { Tls *Security_Tls `protobuf:"bytes,1,opt,name=tls,proto3,oneof"` } type Security_Other struct { Other *Security_OtherSecurity `protobuf:"bytes,2,opt,name=other,proto3,oneof"` } func (*Security_Tls_) isSecurity_Model() {} func (*Security_Other) isSecurity_Model() {} func (m *Security) GetModel() isSecurity_Model { if m != nil { return m.Model } return nil } func (m *Security) GetTls() *Security_Tls { if x, ok := m.GetModel().(*Security_Tls_); ok { return x.Tls } return nil } func (m *Security) GetOther() *Security_OtherSecurity { if x, ok := m.GetModel().(*Security_Other); ok { return x.Other } return nil } // XXX_OneofFuncs is for the internal use of the proto package. 
func (*Security) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _Security_OneofMarshaler, _Security_OneofUnmarshaler, _Security_OneofSizer, []interface{}{ (*Security_Tls_)(nil), (*Security_Other)(nil), } } func _Security_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*Security) // model switch x := m.Model.(type) { case *Security_Tls_: b.EncodeVarint(1<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Tls); err != nil { return err } case *Security_Other: b.EncodeVarint(2<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Other); err != nil { return err } case nil: default: return fmt.Errorf("Security.Model has unexpected type %T", x) } return nil } func _Security_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*Security) switch tag { case 1: // model.tls if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(Security_Tls) err := b.DecodeMessage(msg) m.Model = &Security_Tls_{msg} return true, err case 2: // model.other if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(Security_OtherSecurity) err := b.DecodeMessage(msg) m.Model = &Security_Other{msg} return true, err default: return false, nil } } func _Security_OneofSizer(msg proto.Message) (n int) { m := msg.(*Security) // model switch x := m.Model.(type) { case *Security_Tls_: s := proto.Size(x.Tls) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *Security_Other: s := proto.Size(x.Other) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } type Security_Tls struct { // Types that are valid to be assigned to CipherSuite: // *Security_Tls_StandardName // *Security_Tls_OtherName CipherSuite isSecurity_Tls_CipherSuite `protobuf_oneof:"cipher_suite"` // the certificate used by this endpoint. LocalCertificate []byte `protobuf:"bytes,3,opt,name=local_certificate,json=localCertificate,proto3" json:"local_certificate,omitempty"` // the certificate used by the remote endpoint. 
RemoteCertificate []byte `protobuf:"bytes,4,opt,name=remote_certificate,json=remoteCertificate,proto3" json:"remote_certificate,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Security_Tls) Reset() { *m = Security_Tls{} } func (m *Security_Tls) String() string { return proto.CompactTextString(m) } func (*Security_Tls) ProtoMessage() {} func (*Security_Tls) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{15, 0} } func (m *Security_Tls) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Security_Tls.Unmarshal(m, b) } func (m *Security_Tls) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Security_Tls.Marshal(b, m, deterministic) } func (dst *Security_Tls) XXX_Merge(src proto.Message) { xxx_messageInfo_Security_Tls.Merge(dst, src) } func (m *Security_Tls) XXX_Size() int { return xxx_messageInfo_Security_Tls.Size(m) } func (m *Security_Tls) XXX_DiscardUnknown() { xxx_messageInfo_Security_Tls.DiscardUnknown(m) } var xxx_messageInfo_Security_Tls proto.InternalMessageInfo type isSecurity_Tls_CipherSuite interface { isSecurity_Tls_CipherSuite() } type Security_Tls_StandardName struct { StandardName string `protobuf:"bytes,1,opt,name=standard_name,json=standardName,proto3,oneof"` } type Security_Tls_OtherName struct { OtherName string `protobuf:"bytes,2,opt,name=other_name,json=otherName,proto3,oneof"` } func (*Security_Tls_StandardName) isSecurity_Tls_CipherSuite() {} func (*Security_Tls_OtherName) isSecurity_Tls_CipherSuite() {} func (m *Security_Tls) GetCipherSuite() isSecurity_Tls_CipherSuite { if m != nil { return m.CipherSuite } return nil } func (m *Security_Tls) GetStandardName() string { if x, ok := m.GetCipherSuite().(*Security_Tls_StandardName); ok { return x.StandardName } return "" } func (m *Security_Tls) GetOtherName() string { if x, ok := m.GetCipherSuite().(*Security_Tls_OtherName); ok { return x.OtherName } return "" } func (m *Security_Tls) GetLocalCertificate() []byte { if m != nil { return m.LocalCertificate } return nil } func (m *Security_Tls) GetRemoteCertificate() []byte { if m != nil { return m.RemoteCertificate } return nil } // XXX_OneofFuncs is for the internal use of the proto package. 
func (*Security_Tls) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _Security_Tls_OneofMarshaler, _Security_Tls_OneofUnmarshaler, _Security_Tls_OneofSizer, []interface{}{ (*Security_Tls_StandardName)(nil), (*Security_Tls_OtherName)(nil), } } func _Security_Tls_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*Security_Tls) // cipher_suite switch x := m.CipherSuite.(type) { case *Security_Tls_StandardName: b.EncodeVarint(1<<3 | proto.WireBytes) b.EncodeStringBytes(x.StandardName) case *Security_Tls_OtherName: b.EncodeVarint(2<<3 | proto.WireBytes) b.EncodeStringBytes(x.OtherName) case nil: default: return fmt.Errorf("Security_Tls.CipherSuite has unexpected type %T", x) } return nil } func _Security_Tls_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*Security_Tls) switch tag { case 1: // cipher_suite.standard_name if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } x, err := b.DecodeStringBytes() m.CipherSuite = &Security_Tls_StandardName{x} return true, err case 2: // cipher_suite.other_name if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } x, err := b.DecodeStringBytes() m.CipherSuite = &Security_Tls_OtherName{x} return true, err default: return false, nil } } func _Security_Tls_OneofSizer(msg proto.Message) (n int) { m := msg.(*Security_Tls) // cipher_suite switch x := m.CipherSuite.(type) { case *Security_Tls_StandardName: n += 1 // tag and wire n += proto.SizeVarint(uint64(len(x.StandardName))) n += len(x.StandardName) case *Security_Tls_OtherName: n += 1 // tag and wire n += proto.SizeVarint(uint64(len(x.OtherName))) n += len(x.OtherName) case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } type Security_OtherSecurity struct { // The human readable version of the value. Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` // The actual security details message. 
Value *any.Any `protobuf:"bytes,2,opt,name=value,proto3" json:"value,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Security_OtherSecurity) Reset() { *m = Security_OtherSecurity{} } func (m *Security_OtherSecurity) String() string { return proto.CompactTextString(m) } func (*Security_OtherSecurity) ProtoMessage() {} func (*Security_OtherSecurity) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{15, 1} } func (m *Security_OtherSecurity) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Security_OtherSecurity.Unmarshal(m, b) } func (m *Security_OtherSecurity) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Security_OtherSecurity.Marshal(b, m, deterministic) } func (dst *Security_OtherSecurity) XXX_Merge(src proto.Message) { xxx_messageInfo_Security_OtherSecurity.Merge(dst, src) } func (m *Security_OtherSecurity) XXX_Size() int { return xxx_messageInfo_Security_OtherSecurity.Size(m) } func (m *Security_OtherSecurity) XXX_DiscardUnknown() { xxx_messageInfo_Security_OtherSecurity.DiscardUnknown(m) } var xxx_messageInfo_Security_OtherSecurity proto.InternalMessageInfo func (m *Security_OtherSecurity) GetName() string { if m != nil { return m.Name } return "" } func (m *Security_OtherSecurity) GetValue() *any.Any { if m != nil { return m.Value } return nil } // SocketOption represents socket options for a socket. Specifically, these // are the options returned by getsockopt(). type SocketOption struct { // The full name of the socket option. Typically this will be the upper case // name, such as "SO_REUSEPORT". Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` // The human readable value of this socket option. At least one of value or // additional will be set. Value string `protobuf:"bytes,2,opt,name=value,proto3" json:"value,omitempty"` // Additional data associated with the socket option. At least one of value // or additional will be set. Additional *any.Any `protobuf:"bytes,3,opt,name=additional,proto3" json:"additional,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *SocketOption) Reset() { *m = SocketOption{} } func (m *SocketOption) String() string { return proto.CompactTextString(m) } func (*SocketOption) ProtoMessage() {} func (*SocketOption) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{16} } func (m *SocketOption) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_SocketOption.Unmarshal(m, b) } func (m *SocketOption) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_SocketOption.Marshal(b, m, deterministic) } func (dst *SocketOption) XXX_Merge(src proto.Message) { xxx_messageInfo_SocketOption.Merge(dst, src) } func (m *SocketOption) XXX_Size() int { return xxx_messageInfo_SocketOption.Size(m) } func (m *SocketOption) XXX_DiscardUnknown() { xxx_messageInfo_SocketOption.DiscardUnknown(m) } var xxx_messageInfo_SocketOption proto.InternalMessageInfo func (m *SocketOption) GetName() string { if m != nil { return m.Name } return "" } func (m *SocketOption) GetValue() string { if m != nil { return m.Value } return "" } func (m *SocketOption) GetAdditional() *any.Any { if m != nil { return m.Additional } return nil } // For use with SocketOption's additional field. 
This is primarily used for // SO_RCVTIMEO and SO_SNDTIMEO type SocketOptionTimeout struct { Duration *duration.Duration `protobuf:"bytes,1,opt,name=duration,proto3" json:"duration,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *SocketOptionTimeout) Reset() { *m = SocketOptionTimeout{} } func (m *SocketOptionTimeout) String() string { return proto.CompactTextString(m) } func (*SocketOptionTimeout) ProtoMessage() {} func (*SocketOptionTimeout) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{17} } func (m *SocketOptionTimeout) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_SocketOptionTimeout.Unmarshal(m, b) } func (m *SocketOptionTimeout) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_SocketOptionTimeout.Marshal(b, m, deterministic) } func (dst *SocketOptionTimeout) XXX_Merge(src proto.Message) { xxx_messageInfo_SocketOptionTimeout.Merge(dst, src) } func (m *SocketOptionTimeout) XXX_Size() int { return xxx_messageInfo_SocketOptionTimeout.Size(m) } func (m *SocketOptionTimeout) XXX_DiscardUnknown() { xxx_messageInfo_SocketOptionTimeout.DiscardUnknown(m) } var xxx_messageInfo_SocketOptionTimeout proto.InternalMessageInfo func (m *SocketOptionTimeout) GetDuration() *duration.Duration { if m != nil { return m.Duration } return nil } // For use with SocketOption's additional field. This is primarily used for // SO_LINGER. type SocketOptionLinger struct { // active maps to `struct linger.l_onoff` Active bool `protobuf:"varint,1,opt,name=active,proto3" json:"active,omitempty"` // duration maps to `struct linger.l_linger` Duration *duration.Duration `protobuf:"bytes,2,opt,name=duration,proto3" json:"duration,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *SocketOptionLinger) Reset() { *m = SocketOptionLinger{} } func (m *SocketOptionLinger) String() string { return proto.CompactTextString(m) } func (*SocketOptionLinger) ProtoMessage() {} func (*SocketOptionLinger) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{18} } func (m *SocketOptionLinger) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_SocketOptionLinger.Unmarshal(m, b) } func (m *SocketOptionLinger) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_SocketOptionLinger.Marshal(b, m, deterministic) } func (dst *SocketOptionLinger) XXX_Merge(src proto.Message) { xxx_messageInfo_SocketOptionLinger.Merge(dst, src) } func (m *SocketOptionLinger) XXX_Size() int { return xxx_messageInfo_SocketOptionLinger.Size(m) } func (m *SocketOptionLinger) XXX_DiscardUnknown() { xxx_messageInfo_SocketOptionLinger.DiscardUnknown(m) } var xxx_messageInfo_SocketOptionLinger proto.InternalMessageInfo func (m *SocketOptionLinger) GetActive() bool { if m != nil { return m.Active } return false } func (m *SocketOptionLinger) GetDuration() *duration.Duration { if m != nil { return m.Duration } return nil } // For use with SocketOption's additional field. Tcp info for // SOL_TCP and TCP_INFO. 
type SocketOptionTcpInfo struct { TcpiState uint32 `protobuf:"varint,1,opt,name=tcpi_state,json=tcpiState,proto3" json:"tcpi_state,omitempty"` TcpiCaState uint32 `protobuf:"varint,2,opt,name=tcpi_ca_state,json=tcpiCaState,proto3" json:"tcpi_ca_state,omitempty"` TcpiRetransmits uint32 `protobuf:"varint,3,opt,name=tcpi_retransmits,json=tcpiRetransmits,proto3" json:"tcpi_retransmits,omitempty"` TcpiProbes uint32 `protobuf:"varint,4,opt,name=tcpi_probes,json=tcpiProbes,proto3" json:"tcpi_probes,omitempty"` TcpiBackoff uint32 `protobuf:"varint,5,opt,name=tcpi_backoff,json=tcpiBackoff,proto3" json:"tcpi_backoff,omitempty"` TcpiOptions uint32 `protobuf:"varint,6,opt,name=tcpi_options,json=tcpiOptions,proto3" json:"tcpi_options,omitempty"` TcpiSndWscale uint32 `protobuf:"varint,7,opt,name=tcpi_snd_wscale,json=tcpiSndWscale,proto3" json:"tcpi_snd_wscale,omitempty"` TcpiRcvWscale uint32 `protobuf:"varint,8,opt,name=tcpi_rcv_wscale,json=tcpiRcvWscale,proto3" json:"tcpi_rcv_wscale,omitempty"` TcpiRto uint32 `protobuf:"varint,9,opt,name=tcpi_rto,json=tcpiRto,proto3" json:"tcpi_rto,omitempty"` TcpiAto uint32 `protobuf:"varint,10,opt,name=tcpi_ato,json=tcpiAto,proto3" json:"tcpi_ato,omitempty"` TcpiSndMss uint32 `protobuf:"varint,11,opt,name=tcpi_snd_mss,json=tcpiSndMss,proto3" json:"tcpi_snd_mss,omitempty"` TcpiRcvMss uint32 `protobuf:"varint,12,opt,name=tcpi_rcv_mss,json=tcpiRcvMss,proto3" json:"tcpi_rcv_mss,omitempty"` TcpiUnacked uint32 `protobuf:"varint,13,opt,name=tcpi_unacked,json=tcpiUnacked,proto3" json:"tcpi_unacked,omitempty"` TcpiSacked uint32 `protobuf:"varint,14,opt,name=tcpi_sacked,json=tcpiSacked,proto3" json:"tcpi_sacked,omitempty"` TcpiLost uint32 `protobuf:"varint,15,opt,name=tcpi_lost,json=tcpiLost,proto3" json:"tcpi_lost,omitempty"` TcpiRetrans uint32 `protobuf:"varint,16,opt,name=tcpi_retrans,json=tcpiRetrans,proto3" json:"tcpi_retrans,omitempty"` TcpiFackets uint32 `protobuf:"varint,17,opt,name=tcpi_fackets,json=tcpiFackets,proto3" json:"tcpi_fackets,omitempty"` TcpiLastDataSent uint32 `protobuf:"varint,18,opt,name=tcpi_last_data_sent,json=tcpiLastDataSent,proto3" json:"tcpi_last_data_sent,omitempty"` TcpiLastAckSent uint32 `protobuf:"varint,19,opt,name=tcpi_last_ack_sent,json=tcpiLastAckSent,proto3" json:"tcpi_last_ack_sent,omitempty"` TcpiLastDataRecv uint32 `protobuf:"varint,20,opt,name=tcpi_last_data_recv,json=tcpiLastDataRecv,proto3" json:"tcpi_last_data_recv,omitempty"` TcpiLastAckRecv uint32 `protobuf:"varint,21,opt,name=tcpi_last_ack_recv,json=tcpiLastAckRecv,proto3" json:"tcpi_last_ack_recv,omitempty"` TcpiPmtu uint32 `protobuf:"varint,22,opt,name=tcpi_pmtu,json=tcpiPmtu,proto3" json:"tcpi_pmtu,omitempty"` TcpiRcvSsthresh uint32 `protobuf:"varint,23,opt,name=tcpi_rcv_ssthresh,json=tcpiRcvSsthresh,proto3" json:"tcpi_rcv_ssthresh,omitempty"` TcpiRtt uint32 `protobuf:"varint,24,opt,name=tcpi_rtt,json=tcpiRtt,proto3" json:"tcpi_rtt,omitempty"` TcpiRttvar uint32 `protobuf:"varint,25,opt,name=tcpi_rttvar,json=tcpiRttvar,proto3" json:"tcpi_rttvar,omitempty"` TcpiSndSsthresh uint32 `protobuf:"varint,26,opt,name=tcpi_snd_ssthresh,json=tcpiSndSsthresh,proto3" json:"tcpi_snd_ssthresh,omitempty"` TcpiSndCwnd uint32 `protobuf:"varint,27,opt,name=tcpi_snd_cwnd,json=tcpiSndCwnd,proto3" json:"tcpi_snd_cwnd,omitempty"` TcpiAdvmss uint32 `protobuf:"varint,28,opt,name=tcpi_advmss,json=tcpiAdvmss,proto3" json:"tcpi_advmss,omitempty"` TcpiReordering uint32 `protobuf:"varint,29,opt,name=tcpi_reordering,json=tcpiReordering,proto3" json:"tcpi_reordering,omitempty"` XXX_NoUnkeyedLiteral struct{} 
`json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *SocketOptionTcpInfo) Reset() { *m = SocketOptionTcpInfo{} } func (m *SocketOptionTcpInfo) String() string { return proto.CompactTextString(m) } func (*SocketOptionTcpInfo) ProtoMessage() {} func (*SocketOptionTcpInfo) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{19} } func (m *SocketOptionTcpInfo) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_SocketOptionTcpInfo.Unmarshal(m, b) } func (m *SocketOptionTcpInfo) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_SocketOptionTcpInfo.Marshal(b, m, deterministic) } func (dst *SocketOptionTcpInfo) XXX_Merge(src proto.Message) { xxx_messageInfo_SocketOptionTcpInfo.Merge(dst, src) } func (m *SocketOptionTcpInfo) XXX_Size() int { return xxx_messageInfo_SocketOptionTcpInfo.Size(m) } func (m *SocketOptionTcpInfo) XXX_DiscardUnknown() { xxx_messageInfo_SocketOptionTcpInfo.DiscardUnknown(m) } var xxx_messageInfo_SocketOptionTcpInfo proto.InternalMessageInfo func (m *SocketOptionTcpInfo) GetTcpiState() uint32 { if m != nil { return m.TcpiState } return 0 } func (m *SocketOptionTcpInfo) GetTcpiCaState() uint32 { if m != nil { return m.TcpiCaState } return 0 } func (m *SocketOptionTcpInfo) GetTcpiRetransmits() uint32 { if m != nil { return m.TcpiRetransmits } return 0 } func (m *SocketOptionTcpInfo) GetTcpiProbes() uint32 { if m != nil { return m.TcpiProbes } return 0 } func (m *SocketOptionTcpInfo) GetTcpiBackoff() uint32 { if m != nil { return m.TcpiBackoff } return 0 } func (m *SocketOptionTcpInfo) GetTcpiOptions() uint32 { if m != nil { return m.TcpiOptions } return 0 } func (m *SocketOptionTcpInfo) GetTcpiSndWscale() uint32 { if m != nil { return m.TcpiSndWscale } return 0 } func (m *SocketOptionTcpInfo) GetTcpiRcvWscale() uint32 { if m != nil { return m.TcpiRcvWscale } return 0 } func (m *SocketOptionTcpInfo) GetTcpiRto() uint32 { if m != nil { return m.TcpiRto } return 0 } func (m *SocketOptionTcpInfo) GetTcpiAto() uint32 { if m != nil { return m.TcpiAto } return 0 } func (m *SocketOptionTcpInfo) GetTcpiSndMss() uint32 { if m != nil { return m.TcpiSndMss } return 0 } func (m *SocketOptionTcpInfo) GetTcpiRcvMss() uint32 { if m != nil { return m.TcpiRcvMss } return 0 } func (m *SocketOptionTcpInfo) GetTcpiUnacked() uint32 { if m != nil { return m.TcpiUnacked } return 0 } func (m *SocketOptionTcpInfo) GetTcpiSacked() uint32 { if m != nil { return m.TcpiSacked } return 0 } func (m *SocketOptionTcpInfo) GetTcpiLost() uint32 { if m != nil { return m.TcpiLost } return 0 } func (m *SocketOptionTcpInfo) GetTcpiRetrans() uint32 { if m != nil { return m.TcpiRetrans } return 0 } func (m *SocketOptionTcpInfo) GetTcpiFackets() uint32 { if m != nil { return m.TcpiFackets } return 0 } func (m *SocketOptionTcpInfo) GetTcpiLastDataSent() uint32 { if m != nil { return m.TcpiLastDataSent } return 0 } func (m *SocketOptionTcpInfo) GetTcpiLastAckSent() uint32 { if m != nil { return m.TcpiLastAckSent } return 0 } func (m *SocketOptionTcpInfo) GetTcpiLastDataRecv() uint32 { if m != nil { return m.TcpiLastDataRecv } return 0 } func (m *SocketOptionTcpInfo) GetTcpiLastAckRecv() uint32 { if m != nil { return m.TcpiLastAckRecv } return 0 } func (m *SocketOptionTcpInfo) GetTcpiPmtu() uint32 { if m != nil { return m.TcpiPmtu } return 0 } func (m *SocketOptionTcpInfo) GetTcpiRcvSsthresh() uint32 { if m != nil { return m.TcpiRcvSsthresh } return 0 } func (m *SocketOptionTcpInfo) GetTcpiRtt() uint32 { if m != nil { 
return m.TcpiRtt } return 0 } func (m *SocketOptionTcpInfo) GetTcpiRttvar() uint32 { if m != nil { return m.TcpiRttvar } return 0 } func (m *SocketOptionTcpInfo) GetTcpiSndSsthresh() uint32 { if m != nil { return m.TcpiSndSsthresh } return 0 } func (m *SocketOptionTcpInfo) GetTcpiSndCwnd() uint32 { if m != nil { return m.TcpiSndCwnd } return 0 } func (m *SocketOptionTcpInfo) GetTcpiAdvmss() uint32 { if m != nil { return m.TcpiAdvmss } return 0 } func (m *SocketOptionTcpInfo) GetTcpiReordering() uint32 { if m != nil { return m.TcpiReordering } return 0 } type GetTopChannelsRequest struct { // start_channel_id indicates that only channels at or above this id should be // included in the results. // To request the first page, this should be set to 0. To request // subsequent pages, the client generates this value by adding 1 to // the highest seen result ID. StartChannelId int64 `protobuf:"varint,1,opt,name=start_channel_id,json=startChannelId,proto3" json:"start_channel_id,omitempty"` // If non-zero, the server will return a page of results containing // at most this many items. If zero, the server will choose a // reasonable page size. Must never be negative. MaxResults int64 `protobuf:"varint,2,opt,name=max_results,json=maxResults,proto3" json:"max_results,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *GetTopChannelsRequest) Reset() { *m = GetTopChannelsRequest{} } func (m *GetTopChannelsRequest) String() string { return proto.CompactTextString(m) } func (*GetTopChannelsRequest) ProtoMessage() {} func (*GetTopChannelsRequest) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{20} } func (m *GetTopChannelsRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_GetTopChannelsRequest.Unmarshal(m, b) } func (m *GetTopChannelsRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_GetTopChannelsRequest.Marshal(b, m, deterministic) } func (dst *GetTopChannelsRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_GetTopChannelsRequest.Merge(dst, src) } func (m *GetTopChannelsRequest) XXX_Size() int { return xxx_messageInfo_GetTopChannelsRequest.Size(m) } func (m *GetTopChannelsRequest) XXX_DiscardUnknown() { xxx_messageInfo_GetTopChannelsRequest.DiscardUnknown(m) } var xxx_messageInfo_GetTopChannelsRequest proto.InternalMessageInfo func (m *GetTopChannelsRequest) GetStartChannelId() int64 { if m != nil { return m.StartChannelId } return 0 } func (m *GetTopChannelsRequest) GetMaxResults() int64 { if m != nil { return m.MaxResults } return 0 } type GetTopChannelsResponse struct { // list of channels that the connection detail service knows about. Sorted in // ascending channel_id order. // Must contain at least 1 result, otherwise 'end' must be true. Channel []*Channel `protobuf:"bytes,1,rep,name=channel,proto3" json:"channel,omitempty"` // If set, indicates that the list of channels is the final list. Requesting // more channels can only return more if they are created after this RPC // completes. 
End bool `protobuf:"varint,2,opt,name=end,proto3" json:"end,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *GetTopChannelsResponse) Reset() { *m = GetTopChannelsResponse{} } func (m *GetTopChannelsResponse) String() string { return proto.CompactTextString(m) } func (*GetTopChannelsResponse) ProtoMessage() {} func (*GetTopChannelsResponse) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{21} } func (m *GetTopChannelsResponse) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_GetTopChannelsResponse.Unmarshal(m, b) } func (m *GetTopChannelsResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_GetTopChannelsResponse.Marshal(b, m, deterministic) } func (dst *GetTopChannelsResponse) XXX_Merge(src proto.Message) { xxx_messageInfo_GetTopChannelsResponse.Merge(dst, src) } func (m *GetTopChannelsResponse) XXX_Size() int { return xxx_messageInfo_GetTopChannelsResponse.Size(m) } func (m *GetTopChannelsResponse) XXX_DiscardUnknown() { xxx_messageInfo_GetTopChannelsResponse.DiscardUnknown(m) } var xxx_messageInfo_GetTopChannelsResponse proto.InternalMessageInfo func (m *GetTopChannelsResponse) GetChannel() []*Channel { if m != nil { return m.Channel } return nil } func (m *GetTopChannelsResponse) GetEnd() bool { if m != nil { return m.End } return false } type GetServersRequest struct { // start_server_id indicates that only servers at or above this id should be // included in the results. // To request the first page, this must be set to 0. To request // subsequent pages, the client generates this value by adding 1 to // the highest seen result ID. StartServerId int64 `protobuf:"varint,1,opt,name=start_server_id,json=startServerId,proto3" json:"start_server_id,omitempty"` // If non-zero, the server will return a page of results containing // at most this many items. If zero, the server will choose a // reasonable page size. Must never be negative. MaxResults int64 `protobuf:"varint,2,opt,name=max_results,json=maxResults,proto3" json:"max_results,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *GetServersRequest) Reset() { *m = GetServersRequest{} } func (m *GetServersRequest) String() string { return proto.CompactTextString(m) } func (*GetServersRequest) ProtoMessage() {} func (*GetServersRequest) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{22} } func (m *GetServersRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_GetServersRequest.Unmarshal(m, b) } func (m *GetServersRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_GetServersRequest.Marshal(b, m, deterministic) } func (dst *GetServersRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_GetServersRequest.Merge(dst, src) } func (m *GetServersRequest) XXX_Size() int { return xxx_messageInfo_GetServersRequest.Size(m) } func (m *GetServersRequest) XXX_DiscardUnknown() { xxx_messageInfo_GetServersRequest.DiscardUnknown(m) } var xxx_messageInfo_GetServersRequest proto.InternalMessageInfo func (m *GetServersRequest) GetStartServerId() int64 { if m != nil { return m.StartServerId } return 0 } func (m *GetServersRequest) GetMaxResults() int64 { if m != nil { return m.MaxResults } return 0 } type GetServersResponse struct { // list of servers that the connection detail service knows about. 
Sorted in // ascending server_id order. // Must contain at least 1 result, otherwise 'end' must be true. Server []*Server `protobuf:"bytes,1,rep,name=server,proto3" json:"server,omitempty"` // If set, indicates that the list of servers is the final list. Requesting // more servers will only return more if they are created after this RPC // completes. End bool `protobuf:"varint,2,opt,name=end,proto3" json:"end,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *GetServersResponse) Reset() { *m = GetServersResponse{} } func (m *GetServersResponse) String() string { return proto.CompactTextString(m) } func (*GetServersResponse) ProtoMessage() {} func (*GetServersResponse) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{23} } func (m *GetServersResponse) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_GetServersResponse.Unmarshal(m, b) } func (m *GetServersResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_GetServersResponse.Marshal(b, m, deterministic) } func (dst *GetServersResponse) XXX_Merge(src proto.Message) { xxx_messageInfo_GetServersResponse.Merge(dst, src) } func (m *GetServersResponse) XXX_Size() int { return xxx_messageInfo_GetServersResponse.Size(m) } func (m *GetServersResponse) XXX_DiscardUnknown() { xxx_messageInfo_GetServersResponse.DiscardUnknown(m) } var xxx_messageInfo_GetServersResponse proto.InternalMessageInfo func (m *GetServersResponse) GetServer() []*Server { if m != nil { return m.Server } return nil } func (m *GetServersResponse) GetEnd() bool { if m != nil { return m.End } return false } type GetServerRequest struct { // server_id is the identifier of the specific server to get. ServerId int64 `protobuf:"varint,1,opt,name=server_id,json=serverId,proto3" json:"server_id,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *GetServerRequest) Reset() { *m = GetServerRequest{} } func (m *GetServerRequest) String() string { return proto.CompactTextString(m) } func (*GetServerRequest) ProtoMessage() {} func (*GetServerRequest) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{24} } func (m *GetServerRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_GetServerRequest.Unmarshal(m, b) } func (m *GetServerRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_GetServerRequest.Marshal(b, m, deterministic) } func (dst *GetServerRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_GetServerRequest.Merge(dst, src) } func (m *GetServerRequest) XXX_Size() int { return xxx_messageInfo_GetServerRequest.Size(m) } func (m *GetServerRequest) XXX_DiscardUnknown() { xxx_messageInfo_GetServerRequest.DiscardUnknown(m) } var xxx_messageInfo_GetServerRequest proto.InternalMessageInfo func (m *GetServerRequest) GetServerId() int64 { if m != nil { return m.ServerId } return 0 } type GetServerResponse struct { // The Server that corresponds to the requested server_id. This field // should be set. 
Server *Server `protobuf:"bytes,1,opt,name=server,proto3" json:"server,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *GetServerResponse) Reset() { *m = GetServerResponse{} } func (m *GetServerResponse) String() string { return proto.CompactTextString(m) } func (*GetServerResponse) ProtoMessage() {} func (*GetServerResponse) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{25} } func (m *GetServerResponse) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_GetServerResponse.Unmarshal(m, b) } func (m *GetServerResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_GetServerResponse.Marshal(b, m, deterministic) } func (dst *GetServerResponse) XXX_Merge(src proto.Message) { xxx_messageInfo_GetServerResponse.Merge(dst, src) } func (m *GetServerResponse) XXX_Size() int { return xxx_messageInfo_GetServerResponse.Size(m) } func (m *GetServerResponse) XXX_DiscardUnknown() { xxx_messageInfo_GetServerResponse.DiscardUnknown(m) } var xxx_messageInfo_GetServerResponse proto.InternalMessageInfo func (m *GetServerResponse) GetServer() *Server { if m != nil { return m.Server } return nil } type GetServerSocketsRequest struct { ServerId int64 `protobuf:"varint,1,opt,name=server_id,json=serverId,proto3" json:"server_id,omitempty"` // start_socket_id indicates that only sockets at or above this id should be // included in the results. // To request the first page, this must be set to 0. To request // subsequent pages, the client generates this value by adding 1 to // the highest seen result ID. StartSocketId int64 `protobuf:"varint,2,opt,name=start_socket_id,json=startSocketId,proto3" json:"start_socket_id,omitempty"` // If non-zero, the server will return a page of results containing // at most this many items. If zero, the server will choose a // reasonable page size. Must never be negative. 
MaxResults int64 `protobuf:"varint,3,opt,name=max_results,json=maxResults,proto3" json:"max_results,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *GetServerSocketsRequest) Reset() { *m = GetServerSocketsRequest{} } func (m *GetServerSocketsRequest) String() string { return proto.CompactTextString(m) } func (*GetServerSocketsRequest) ProtoMessage() {} func (*GetServerSocketsRequest) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{26} } func (m *GetServerSocketsRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_GetServerSocketsRequest.Unmarshal(m, b) } func (m *GetServerSocketsRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_GetServerSocketsRequest.Marshal(b, m, deterministic) } func (dst *GetServerSocketsRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_GetServerSocketsRequest.Merge(dst, src) } func (m *GetServerSocketsRequest) XXX_Size() int { return xxx_messageInfo_GetServerSocketsRequest.Size(m) } func (m *GetServerSocketsRequest) XXX_DiscardUnknown() { xxx_messageInfo_GetServerSocketsRequest.DiscardUnknown(m) } var xxx_messageInfo_GetServerSocketsRequest proto.InternalMessageInfo func (m *GetServerSocketsRequest) GetServerId() int64 { if m != nil { return m.ServerId } return 0 } func (m *GetServerSocketsRequest) GetStartSocketId() int64 { if m != nil { return m.StartSocketId } return 0 } func (m *GetServerSocketsRequest) GetMaxResults() int64 { if m != nil { return m.MaxResults } return 0 } type GetServerSocketsResponse struct { // list of socket refs that the connection detail service knows about. Sorted in // ascending socket_id order. // Must contain at least 1 result, otherwise 'end' must be true. SocketRef []*SocketRef `protobuf:"bytes,1,rep,name=socket_ref,json=socketRef,proto3" json:"socket_ref,omitempty"` // If set, indicates that the list of sockets is the final list. Requesting // more sockets will only return more if they are created after this RPC // completes. 
End bool `protobuf:"varint,2,opt,name=end,proto3" json:"end,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *GetServerSocketsResponse) Reset() { *m = GetServerSocketsResponse{} } func (m *GetServerSocketsResponse) String() string { return proto.CompactTextString(m) } func (*GetServerSocketsResponse) ProtoMessage() {} func (*GetServerSocketsResponse) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{27} } func (m *GetServerSocketsResponse) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_GetServerSocketsResponse.Unmarshal(m, b) } func (m *GetServerSocketsResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_GetServerSocketsResponse.Marshal(b, m, deterministic) } func (dst *GetServerSocketsResponse) XXX_Merge(src proto.Message) { xxx_messageInfo_GetServerSocketsResponse.Merge(dst, src) } func (m *GetServerSocketsResponse) XXX_Size() int { return xxx_messageInfo_GetServerSocketsResponse.Size(m) } func (m *GetServerSocketsResponse) XXX_DiscardUnknown() { xxx_messageInfo_GetServerSocketsResponse.DiscardUnknown(m) } var xxx_messageInfo_GetServerSocketsResponse proto.InternalMessageInfo func (m *GetServerSocketsResponse) GetSocketRef() []*SocketRef { if m != nil { return m.SocketRef } return nil } func (m *GetServerSocketsResponse) GetEnd() bool { if m != nil { return m.End } return false } type GetChannelRequest struct { // channel_id is the identifier of the specific channel to get. ChannelId int64 `protobuf:"varint,1,opt,name=channel_id,json=channelId,proto3" json:"channel_id,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *GetChannelRequest) Reset() { *m = GetChannelRequest{} } func (m *GetChannelRequest) String() string { return proto.CompactTextString(m) } func (*GetChannelRequest) ProtoMessage() {} func (*GetChannelRequest) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{28} } func (m *GetChannelRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_GetChannelRequest.Unmarshal(m, b) } func (m *GetChannelRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_GetChannelRequest.Marshal(b, m, deterministic) } func (dst *GetChannelRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_GetChannelRequest.Merge(dst, src) } func (m *GetChannelRequest) XXX_Size() int { return xxx_messageInfo_GetChannelRequest.Size(m) } func (m *GetChannelRequest) XXX_DiscardUnknown() { xxx_messageInfo_GetChannelRequest.DiscardUnknown(m) } var xxx_messageInfo_GetChannelRequest proto.InternalMessageInfo func (m *GetChannelRequest) GetChannelId() int64 { if m != nil { return m.ChannelId } return 0 } type GetChannelResponse struct { // The Channel that corresponds to the requested channel_id. This field // should be set. 
Channel *Channel `protobuf:"bytes,1,opt,name=channel,proto3" json:"channel,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *GetChannelResponse) Reset() { *m = GetChannelResponse{} } func (m *GetChannelResponse) String() string { return proto.CompactTextString(m) } func (*GetChannelResponse) ProtoMessage() {} func (*GetChannelResponse) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{29} } func (m *GetChannelResponse) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_GetChannelResponse.Unmarshal(m, b) } func (m *GetChannelResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_GetChannelResponse.Marshal(b, m, deterministic) } func (dst *GetChannelResponse) XXX_Merge(src proto.Message) { xxx_messageInfo_GetChannelResponse.Merge(dst, src) } func (m *GetChannelResponse) XXX_Size() int { return xxx_messageInfo_GetChannelResponse.Size(m) } func (m *GetChannelResponse) XXX_DiscardUnknown() { xxx_messageInfo_GetChannelResponse.DiscardUnknown(m) } var xxx_messageInfo_GetChannelResponse proto.InternalMessageInfo func (m *GetChannelResponse) GetChannel() *Channel { if m != nil { return m.Channel } return nil } type GetSubchannelRequest struct { // subchannel_id is the identifier of the specific subchannel to get. SubchannelId int64 `protobuf:"varint,1,opt,name=subchannel_id,json=subchannelId,proto3" json:"subchannel_id,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *GetSubchannelRequest) Reset() { *m = GetSubchannelRequest{} } func (m *GetSubchannelRequest) String() string { return proto.CompactTextString(m) } func (*GetSubchannelRequest) ProtoMessage() {} func (*GetSubchannelRequest) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{30} } func (m *GetSubchannelRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_GetSubchannelRequest.Unmarshal(m, b) } func (m *GetSubchannelRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_GetSubchannelRequest.Marshal(b, m, deterministic) } func (dst *GetSubchannelRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_GetSubchannelRequest.Merge(dst, src) } func (m *GetSubchannelRequest) XXX_Size() int { return xxx_messageInfo_GetSubchannelRequest.Size(m) } func (m *GetSubchannelRequest) XXX_DiscardUnknown() { xxx_messageInfo_GetSubchannelRequest.DiscardUnknown(m) } var xxx_messageInfo_GetSubchannelRequest proto.InternalMessageInfo func (m *GetSubchannelRequest) GetSubchannelId() int64 { if m != nil { return m.SubchannelId } return 0 } type GetSubchannelResponse struct { // The Subchannel that corresponds to the requested subchannel_id. This // field should be set. 
Subchannel *Subchannel `protobuf:"bytes,1,opt,name=subchannel,proto3" json:"subchannel,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *GetSubchannelResponse) Reset() { *m = GetSubchannelResponse{} } func (m *GetSubchannelResponse) String() string { return proto.CompactTextString(m) } func (*GetSubchannelResponse) ProtoMessage() {} func (*GetSubchannelResponse) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{31} } func (m *GetSubchannelResponse) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_GetSubchannelResponse.Unmarshal(m, b) } func (m *GetSubchannelResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_GetSubchannelResponse.Marshal(b, m, deterministic) } func (dst *GetSubchannelResponse) XXX_Merge(src proto.Message) { xxx_messageInfo_GetSubchannelResponse.Merge(dst, src) } func (m *GetSubchannelResponse) XXX_Size() int { return xxx_messageInfo_GetSubchannelResponse.Size(m) } func (m *GetSubchannelResponse) XXX_DiscardUnknown() { xxx_messageInfo_GetSubchannelResponse.DiscardUnknown(m) } var xxx_messageInfo_GetSubchannelResponse proto.InternalMessageInfo func (m *GetSubchannelResponse) GetSubchannel() *Subchannel { if m != nil { return m.Subchannel } return nil } type GetSocketRequest struct { // socket_id is the identifier of the specific socket to get. SocketId int64 `protobuf:"varint,1,opt,name=socket_id,json=socketId,proto3" json:"socket_id,omitempty"` // If true, the response will contain only high level information // that is inexpensive to obtain. Fields thay may be omitted are // documented. Summary bool `protobuf:"varint,2,opt,name=summary,proto3" json:"summary,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *GetSocketRequest) Reset() { *m = GetSocketRequest{} } func (m *GetSocketRequest) String() string { return proto.CompactTextString(m) } func (*GetSocketRequest) ProtoMessage() {} func (*GetSocketRequest) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{32} } func (m *GetSocketRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_GetSocketRequest.Unmarshal(m, b) } func (m *GetSocketRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_GetSocketRequest.Marshal(b, m, deterministic) } func (dst *GetSocketRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_GetSocketRequest.Merge(dst, src) } func (m *GetSocketRequest) XXX_Size() int { return xxx_messageInfo_GetSocketRequest.Size(m) } func (m *GetSocketRequest) XXX_DiscardUnknown() { xxx_messageInfo_GetSocketRequest.DiscardUnknown(m) } var xxx_messageInfo_GetSocketRequest proto.InternalMessageInfo func (m *GetSocketRequest) GetSocketId() int64 { if m != nil { return m.SocketId } return 0 } func (m *GetSocketRequest) GetSummary() bool { if m != nil { return m.Summary } return false } type GetSocketResponse struct { // The Socket that corresponds to the requested socket_id. This field // should be set. 
Socket *Socket `protobuf:"bytes,1,opt,name=socket,proto3" json:"socket,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *GetSocketResponse) Reset() { *m = GetSocketResponse{} } func (m *GetSocketResponse) String() string { return proto.CompactTextString(m) } func (*GetSocketResponse) ProtoMessage() {} func (*GetSocketResponse) Descriptor() ([]byte, []int) { return fileDescriptor_channelz_eaeecd17d5e19ad2, []int{33} } func (m *GetSocketResponse) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_GetSocketResponse.Unmarshal(m, b) } func (m *GetSocketResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_GetSocketResponse.Marshal(b, m, deterministic) } func (dst *GetSocketResponse) XXX_Merge(src proto.Message) { xxx_messageInfo_GetSocketResponse.Merge(dst, src) } func (m *GetSocketResponse) XXX_Size() int { return xxx_messageInfo_GetSocketResponse.Size(m) } func (m *GetSocketResponse) XXX_DiscardUnknown() { xxx_messageInfo_GetSocketResponse.DiscardUnknown(m) } var xxx_messageInfo_GetSocketResponse proto.InternalMessageInfo func (m *GetSocketResponse) GetSocket() *Socket { if m != nil { return m.Socket } return nil } func init() { proto.RegisterType((*Channel)(nil), "grpc.channelz.v1.Channel") proto.RegisterType((*Subchannel)(nil), "grpc.channelz.v1.Subchannel") proto.RegisterType((*ChannelConnectivityState)(nil), "grpc.channelz.v1.ChannelConnectivityState") proto.RegisterType((*ChannelData)(nil), "grpc.channelz.v1.ChannelData") proto.RegisterType((*ChannelTraceEvent)(nil), "grpc.channelz.v1.ChannelTraceEvent") proto.RegisterType((*ChannelTrace)(nil), "grpc.channelz.v1.ChannelTrace") proto.RegisterType((*ChannelRef)(nil), "grpc.channelz.v1.ChannelRef") proto.RegisterType((*SubchannelRef)(nil), "grpc.channelz.v1.SubchannelRef") proto.RegisterType((*SocketRef)(nil), "grpc.channelz.v1.SocketRef") proto.RegisterType((*ServerRef)(nil), "grpc.channelz.v1.ServerRef") proto.RegisterType((*Server)(nil), "grpc.channelz.v1.Server") proto.RegisterType((*ServerData)(nil), "grpc.channelz.v1.ServerData") proto.RegisterType((*Socket)(nil), "grpc.channelz.v1.Socket") proto.RegisterType((*SocketData)(nil), "grpc.channelz.v1.SocketData") proto.RegisterType((*Address)(nil), "grpc.channelz.v1.Address") proto.RegisterType((*Address_TcpIpAddress)(nil), "grpc.channelz.v1.Address.TcpIpAddress") proto.RegisterType((*Address_UdsAddress)(nil), "grpc.channelz.v1.Address.UdsAddress") proto.RegisterType((*Address_OtherAddress)(nil), "grpc.channelz.v1.Address.OtherAddress") proto.RegisterType((*Security)(nil), "grpc.channelz.v1.Security") proto.RegisterType((*Security_Tls)(nil), "grpc.channelz.v1.Security.Tls") proto.RegisterType((*Security_OtherSecurity)(nil), "grpc.channelz.v1.Security.OtherSecurity") proto.RegisterType((*SocketOption)(nil), "grpc.channelz.v1.SocketOption") proto.RegisterType((*SocketOptionTimeout)(nil), "grpc.channelz.v1.SocketOptionTimeout") proto.RegisterType((*SocketOptionLinger)(nil), "grpc.channelz.v1.SocketOptionLinger") proto.RegisterType((*SocketOptionTcpInfo)(nil), "grpc.channelz.v1.SocketOptionTcpInfo") proto.RegisterType((*GetTopChannelsRequest)(nil), "grpc.channelz.v1.GetTopChannelsRequest") proto.RegisterType((*GetTopChannelsResponse)(nil), "grpc.channelz.v1.GetTopChannelsResponse") proto.RegisterType((*GetServersRequest)(nil), "grpc.channelz.v1.GetServersRequest") proto.RegisterType((*GetServersResponse)(nil), "grpc.channelz.v1.GetServersResponse") 
proto.RegisterType((*GetServerRequest)(nil), "grpc.channelz.v1.GetServerRequest") proto.RegisterType((*GetServerResponse)(nil), "grpc.channelz.v1.GetServerResponse") proto.RegisterType((*GetServerSocketsRequest)(nil), "grpc.channelz.v1.GetServerSocketsRequest") proto.RegisterType((*GetServerSocketsResponse)(nil), "grpc.channelz.v1.GetServerSocketsResponse") proto.RegisterType((*GetChannelRequest)(nil), "grpc.channelz.v1.GetChannelRequest") proto.RegisterType((*GetChannelResponse)(nil), "grpc.channelz.v1.GetChannelResponse") proto.RegisterType((*GetSubchannelRequest)(nil), "grpc.channelz.v1.GetSubchannelRequest") proto.RegisterType((*GetSubchannelResponse)(nil), "grpc.channelz.v1.GetSubchannelResponse") proto.RegisterType((*GetSocketRequest)(nil), "grpc.channelz.v1.GetSocketRequest") proto.RegisterType((*GetSocketResponse)(nil), "grpc.channelz.v1.GetSocketResponse") proto.RegisterEnum("grpc.channelz.v1.ChannelConnectivityState_State", ChannelConnectivityState_State_name, ChannelConnectivityState_State_value) proto.RegisterEnum("grpc.channelz.v1.ChannelTraceEvent_Severity", ChannelTraceEvent_Severity_name, ChannelTraceEvent_Severity_value) } // Reference imports to suppress errors if they are not otherwise used. var _ context.Context var _ grpc.ClientConn // This is a compile-time assertion to ensure that this generated file // is compatible with the grpc package it is being compiled against. const _ = grpc.SupportPackageIsVersion4 // ChannelzClient is the client API for Channelz service. // // For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream. type ChannelzClient interface { // Gets all root channels (i.e. channels the application has directly // created). This does not include subchannels nor non-top level channels. GetTopChannels(ctx context.Context, in *GetTopChannelsRequest, opts ...grpc.CallOption) (*GetTopChannelsResponse, error) // Gets all servers that exist in the process. GetServers(ctx context.Context, in *GetServersRequest, opts ...grpc.CallOption) (*GetServersResponse, error) // Returns a single Server, or else a NOT_FOUND code. GetServer(ctx context.Context, in *GetServerRequest, opts ...grpc.CallOption) (*GetServerResponse, error) // Gets all server sockets that exist in the process. GetServerSockets(ctx context.Context, in *GetServerSocketsRequest, opts ...grpc.CallOption) (*GetServerSocketsResponse, error) // Returns a single Channel, or else a NOT_FOUND code. GetChannel(ctx context.Context, in *GetChannelRequest, opts ...grpc.CallOption) (*GetChannelResponse, error) // Returns a single Subchannel, or else a NOT_FOUND code. GetSubchannel(ctx context.Context, in *GetSubchannelRequest, opts ...grpc.CallOption) (*GetSubchannelResponse, error) // Returns a single Socket or else a NOT_FOUND code. GetSocket(ctx context.Context, in *GetSocketRequest, opts ...grpc.CallOption) (*GetSocketResponse, error) } type channelzClient struct { cc *grpc.ClientConn } func NewChannelzClient(cc *grpc.ClientConn) ChannelzClient { return &channelzClient{cc} } func (c *channelzClient) GetTopChannels(ctx context.Context, in *GetTopChannelsRequest, opts ...grpc.CallOption) (*GetTopChannelsResponse, error) { out := new(GetTopChannelsResponse) err := c.cc.Invoke(ctx, "/grpc.channelz.v1.Channelz/GetTopChannels", in, out, opts...) 
if err != nil { return nil, err } return out, nil } func (c *channelzClient) GetServers(ctx context.Context, in *GetServersRequest, opts ...grpc.CallOption) (*GetServersResponse, error) { out := new(GetServersResponse) err := c.cc.Invoke(ctx, "/grpc.channelz.v1.Channelz/GetServers", in, out, opts...) if err != nil { return nil, err } return out, nil } func (c *channelzClient) GetServer(ctx context.Context, in *GetServerRequest, opts ...grpc.CallOption) (*GetServerResponse, error) { out := new(GetServerResponse) err := c.cc.Invoke(ctx, "/grpc.channelz.v1.Channelz/GetServer", in, out, opts...) if err != nil { return nil, err } return out, nil } func (c *channelzClient) GetServerSockets(ctx context.Context, in *GetServerSocketsRequest, opts ...grpc.CallOption) (*GetServerSocketsResponse, error) { out := new(GetServerSocketsResponse) err := c.cc.Invoke(ctx, "/grpc.channelz.v1.Channelz/GetServerSockets", in, out, opts...) if err != nil { return nil, err } return out, nil } func (c *channelzClient) GetChannel(ctx context.Context, in *GetChannelRequest, opts ...grpc.CallOption) (*GetChannelResponse, error) { out := new(GetChannelResponse) err := c.cc.Invoke(ctx, "/grpc.channelz.v1.Channelz/GetChannel", in, out, opts...) if err != nil { return nil, err } return out, nil } func (c *channelzClient) GetSubchannel(ctx context.Context, in *GetSubchannelRequest, opts ...grpc.CallOption) (*GetSubchannelResponse, error) { out := new(GetSubchannelResponse) err := c.cc.Invoke(ctx, "/grpc.channelz.v1.Channelz/GetSubchannel", in, out, opts...) if err != nil { return nil, err } return out, nil } func (c *channelzClient) GetSocket(ctx context.Context, in *GetSocketRequest, opts ...grpc.CallOption) (*GetSocketResponse, error) { out := new(GetSocketResponse) err := c.cc.Invoke(ctx, "/grpc.channelz.v1.Channelz/GetSocket", in, out, opts...) if err != nil { return nil, err } return out, nil } // ChannelzServer is the server API for Channelz service. type ChannelzServer interface { // Gets all root channels (i.e. channels the application has directly // created). This does not include subchannels nor non-top level channels. GetTopChannels(context.Context, *GetTopChannelsRequest) (*GetTopChannelsResponse, error) // Gets all servers that exist in the process. GetServers(context.Context, *GetServersRequest) (*GetServersResponse, error) // Returns a single Server, or else a NOT_FOUND code. GetServer(context.Context, *GetServerRequest) (*GetServerResponse, error) // Gets all server sockets that exist in the process. GetServerSockets(context.Context, *GetServerSocketsRequest) (*GetServerSocketsResponse, error) // Returns a single Channel, or else a NOT_FOUND code. GetChannel(context.Context, *GetChannelRequest) (*GetChannelResponse, error) // Returns a single Subchannel, or else a NOT_FOUND code. GetSubchannel(context.Context, *GetSubchannelRequest) (*GetSubchannelResponse, error) // Returns a single Socket or else a NOT_FOUND code. 
GetSocket(context.Context, *GetSocketRequest) (*GetSocketResponse, error) } func RegisterChannelzServer(s *grpc.Server, srv ChannelzServer) { s.RegisterService(&_Channelz_serviceDesc, srv) } func _Channelz_GetTopChannels_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(GetTopChannelsRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(ChannelzServer).GetTopChannels(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/grpc.channelz.v1.Channelz/GetTopChannels", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(ChannelzServer).GetTopChannels(ctx, req.(*GetTopChannelsRequest)) } return interceptor(ctx, in, info, handler) } func _Channelz_GetServers_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(GetServersRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(ChannelzServer).GetServers(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/grpc.channelz.v1.Channelz/GetServers", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(ChannelzServer).GetServers(ctx, req.(*GetServersRequest)) } return interceptor(ctx, in, info, handler) } func _Channelz_GetServer_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(GetServerRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(ChannelzServer).GetServer(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/grpc.channelz.v1.Channelz/GetServer", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(ChannelzServer).GetServer(ctx, req.(*GetServerRequest)) } return interceptor(ctx, in, info, handler) } func _Channelz_GetServerSockets_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(GetServerSocketsRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(ChannelzServer).GetServerSockets(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/grpc.channelz.v1.Channelz/GetServerSockets", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(ChannelzServer).GetServerSockets(ctx, req.(*GetServerSocketsRequest)) } return interceptor(ctx, in, info, handler) } func _Channelz_GetChannel_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(GetChannelRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(ChannelzServer).GetChannel(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/grpc.channelz.v1.Channelz/GetChannel", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(ChannelzServer).GetChannel(ctx, req.(*GetChannelRequest)) } return interceptor(ctx, in, info, handler) } func _Channelz_GetSubchannel_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(GetSubchannelRequest) if err := dec(in); err != nil { return nil, err } if 
interceptor == nil { return srv.(ChannelzServer).GetSubchannel(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/grpc.channelz.v1.Channelz/GetSubchannel", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(ChannelzServer).GetSubchannel(ctx, req.(*GetSubchannelRequest)) } return interceptor(ctx, in, info, handler) } func _Channelz_GetSocket_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(GetSocketRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(ChannelzServer).GetSocket(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/grpc.channelz.v1.Channelz/GetSocket", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(ChannelzServer).GetSocket(ctx, req.(*GetSocketRequest)) } return interceptor(ctx, in, info, handler) } var _Channelz_serviceDesc = grpc.ServiceDesc{ ServiceName: "grpc.channelz.v1.Channelz", HandlerType: (*ChannelzServer)(nil), Methods: []grpc.MethodDesc{ { MethodName: "GetTopChannels", Handler: _Channelz_GetTopChannels_Handler, }, { MethodName: "GetServers", Handler: _Channelz_GetServers_Handler, }, { MethodName: "GetServer", Handler: _Channelz_GetServer_Handler, }, { MethodName: "GetServerSockets", Handler: _Channelz_GetServerSockets_Handler, }, { MethodName: "GetChannel", Handler: _Channelz_GetChannel_Handler, }, { MethodName: "GetSubchannel", Handler: _Channelz_GetSubchannel_Handler, }, { MethodName: "GetSocket", Handler: _Channelz_GetSocket_Handler, }, }, Streams: []grpc.StreamDesc{}, Metadata: "grpc/channelz/v1/channelz.proto", } func init() { proto.RegisterFile("grpc/channelz/v1/channelz.proto", fileDescriptor_channelz_eaeecd17d5e19ad2) } var fileDescriptor_channelz_eaeecd17d5e19ad2 = []byte{ // 2584 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe4, 0x59, 0x4b, 0x6f, 0xdb, 0xd8, 0xf5, 0xb7, 0xde, 0xd4, 0xd1, 0x23, 0xf2, 0x4d, 0x26, 0x43, 0x2b, 0x99, 0xb1, 0xff, 0xf4, 0x4c, 0xc6, 0x93, 0xfc, 0x23, 0xc7, 0x9e, 0x34, 0x28, 0x3a, 0x2d, 0x3a, 0xb6, 0x62, 0xc7, 0x72, 0x1d, 0x39, 0xa0, 0xe4, 0x49, 0xa6, 0x28, 0xca, 0xa1, 0xc9, 0x6b, 0x99, 0x35, 0x45, 0xaa, 0xbc, 0x57, 0xf2, 0x24, 0x9b, 0x2e, 0xba, 0xef, 0xb2, 0x28, 0xfa, 0x01, 0xba, 0xe9, 0xa2, 0x40, 0x81, 0x02, 0xed, 0xb6, 0xdf, 0xa6, 0xdf, 0xa2, 0xb8, 0x0f, 0x3e, 0xf4, 0xb2, 0x14, 0x64, 0xd9, 0x8d, 0x21, 0x1e, 0xfe, 0xce, 0xef, 0x9c, 0x7b, 0x5e, 0xf7, 0xf2, 0x1a, 0xd6, 0x7b, 0xc1, 0xc0, 0xda, 0xb6, 0x2e, 0x4d, 0xcf, 0xc3, 0xee, 0xbb, 0xed, 0xd1, 0x4e, 0xf4, 0xbb, 0x31, 0x08, 0x7c, 0xea, 0xa3, 0x1a, 0x03, 0x34, 0x22, 0xe1, 0x68, 0xa7, 0xbe, 0xd6, 0xf3, 0xfd, 0x9e, 0x8b, 0xb7, 0xf9, 0xfb, 0xf3, 0xe1, 0xc5, 0xb6, 0xe9, 0xbd, 0x15, 0xe0, 0xfa, 0xa7, 0x93, 0xaf, 0xec, 0x61, 0x60, 0x52, 0xc7, 0xf7, 0xe4, 0xfb, 0xf5, 0xc9, 0xf7, 0xd4, 0xe9, 0x63, 0x42, 0xcd, 0xfe, 0x60, 0x1e, 0xc1, 0x75, 0x60, 0x0e, 0x06, 0x38, 0x20, 0xe2, 0xbd, 0xf6, 0xb7, 0x34, 0x14, 0x9a, 0xc2, 0x17, 0xd4, 0x80, 0x4c, 0x80, 0x2f, 0xd4, 0xd4, 0x46, 0x6a, 0xab, 0xb4, 0x7b, 0xbf, 0x31, 0xe9, 0x67, 0x43, 0xe2, 0x74, 0x7c, 0xa1, 0x33, 0x20, 0xda, 0x81, 0xac, 0x6d, 0x52, 0x53, 0x4d, 0x73, 0x85, 0x4f, 0xe6, 0x2a, 0x3c, 0x37, 0xa9, 0xa9, 0x73, 0x28, 0xfa, 0x19, 0x94, 0x24, 0xc0, 0x60, 0xa6, 0x32, 0x1b, 0x99, 0x85, 0xa6, 0xc0, 0x8a, 0x7e, 0xa3, 0x43, 0xa8, 0x92, 0xe1, 0x79, 0x92, 0x21, 0xcb, 0x19, 0xd6, 0xa7, 0x19, 0x3a, 0x11, 0x8e, 0x91, 0x54, 0x48, 0xf2, 0x11, 0xfd, 
0x04, 0x80, 0xf8, 0xd6, 0x15, 0xa6, 0x9c, 0x23, 0xc7, 0x39, 0xee, 0xcd, 0xe0, 0xe0, 0x18, 0xa6, 0x5f, 0x24, 0xe1, 0x4f, 0xed, 0x1f, 0x69, 0x80, 0x98, 0x1c, 0xed, 0x24, 0x83, 0xb6, 0xd0, 0x8f, 0xff, 0xe1, 0xb8, 0xfd, 0x3b, 0x05, 0xaa, 0x74, 0xaf, 0xe9, 0x7b, 0x1e, 0xb6, 0xa8, 0x33, 0x72, 0xe8, 0xdb, 0x0e, 0x35, 0x29, 0x46, 0x87, 0x90, 0x23, 0xec, 0x07, 0x8f, 0x63, 0x75, 0xf7, 0xc9, 0xdc, 0x95, 0x4d, 0xa9, 0x36, 0xf8, 0x5f, 0x5d, 0xa8, 0x6b, 0xbf, 0x86, 0x9c, 0x20, 0x2c, 0x41, 0xe1, 0xac, 0xfd, 0x8b, 0xf6, 0xe9, 0xeb, 0x76, 0x6d, 0x05, 0x29, 0x90, 0x6d, 0x3d, 0x3f, 0x39, 0xa8, 0xa5, 0x50, 0x15, 0xa0, 0x79, 0xda, 0x6e, 0x1f, 0x34, 0xbb, 0xad, 0xf6, 0x8b, 0x5a, 0x1a, 0x15, 0x21, 0xa7, 0x1f, 0xec, 0x3d, 0xff, 0xae, 0x96, 0x41, 0x1f, 0xc1, 0x6a, 0x57, 0xdf, 0x6b, 0x77, 0x5a, 0x07, 0xed, 0xae, 0x71, 0xb8, 0xd7, 0x3a, 0x39, 0xd3, 0x0f, 0x6a, 0x59, 0x54, 0x06, 0xa5, 0x73, 0x74, 0xd6, 0x7d, 0xce, 0x98, 0x72, 0xda, 0x7f, 0xd2, 0x50, 0x4a, 0x64, 0x07, 0x7d, 0x93, 0xf4, 0xbb, 0xb4, 0xfb, 0x70, 0x79, 0xbf, 0xa5, 0xc7, 0xe8, 0x2e, 0xe4, 0xa9, 0x19, 0xf4, 0x30, 0xe5, 0xe5, 0x50, 0xd4, 0xe5, 0x13, 0x7a, 0x0a, 0x39, 0x1a, 0x98, 0x16, 0x56, 0x33, 0x9c, 0xf9, 0xd3, 0xb9, 0xcc, 0x5d, 0x86, 0xd2, 0x05, 0x18, 0x6d, 0x42, 0xc5, 0x32, 0x5d, 0x97, 0x18, 0x84, 0x9a, 0x01, 0xc5, 0xb6, 0x9a, 0xdd, 0x48, 0x6d, 0x65, 0xf4, 0x32, 0x17, 0x76, 0x84, 0x0c, 0x7d, 0x01, 0xb7, 0x24, 0x68, 0x68, 0x59, 0x18, 0xdb, 0xd8, 0x56, 0x73, 0x1c, 0x56, 0x15, 0xb0, 0x50, 0x8a, 0xfe, 0x0f, 0x84, 0xa2, 0x71, 0x61, 0x3a, 0x2e, 0xb6, 0xd5, 0x3c, 0x47, 0x95, 0xb8, 0xec, 0x90, 0x8b, 0xd0, 0x77, 0x70, 0xcf, 0x35, 0x09, 0x35, 0x98, 0x2c, 0x34, 0x6a, 0x44, 0x43, 0x48, 0x2d, 0x70, 0xe7, 0xeb, 0x0d, 0x31, 0x85, 0x1a, 0xe1, 0x14, 0x6a, 0x74, 0x43, 0x84, 0xae, 0x32, 0xf5, 0xa6, 0xe9, 0xba, 0xd2, 0xbb, 0xe8, 0x8d, 0xf6, 0xa7, 0x0c, 0xac, 0x26, 0xd7, 0x78, 0x30, 0xc2, 0x1e, 0x45, 0x1b, 0x50, 0xb2, 0x31, 0xb1, 0x02, 0x67, 0xc0, 0xc6, 0x20, 0x8f, 0x7b, 0x51, 0x4f, 0x8a, 0xd0, 0x11, 0x28, 0x04, 0x8f, 0x70, 0xe0, 0xd0, 0xb7, 0x3c, 0xa6, 0xd5, 0xdd, 0xff, 0xbf, 0x39, 0x78, 0x9c, 0xb8, 0xd1, 0x91, 0x3a, 0x7a, 0xa4, 0x8d, 0x7e, 0x0c, 0xc5, 0x78, 0x29, 0x99, 0x85, 0x4b, 0x89, 0xc1, 0xe8, 0xe7, 0xe3, 0xfd, 0x9a, 0x5d, 0x3c, 0x52, 0x8f, 0x56, 0xc6, 0x3a, 0xf6, 0x68, 0xaa, 0x63, 0x73, 0x4b, 0x4d, 0x98, 0xa3, 0x95, 0x89, 0x9e, 0xd5, 0x0e, 0x40, 0x09, 0x97, 0xc6, 0xcb, 0xbf, 0x6b, 0xc4, 0x8d, 0x51, 0x82, 0x42, 0xb3, 0x6b, 0xb4, 0xda, 0x87, 0xa7, 0xb2, 0x37, 0xba, 0xc6, 0xeb, 0x3d, 0xbd, 0x2d, 0x7a, 0xa3, 0x0c, 0x4a, 0xb3, 0x6b, 0x1c, 0xe8, 0xfa, 0xa9, 0x5e, 0xcb, 0xec, 0x97, 0xa0, 0x68, 0x5d, 0x3a, 0xae, 0xcd, 0x7c, 0x61, 0xbd, 0x5c, 0x4e, 0x46, 0x10, 0x3d, 0x84, 0x55, 0x6f, 0xd8, 0x37, 0x30, 0x8b, 0x24, 0x31, 0x5c, 0xbf, 0xd7, 0xc3, 0x36, 0xcf, 0x4d, 0x46, 0xbf, 0xe5, 0x0d, 0xfb, 0x3c, 0xc2, 0xe4, 0x84, 0x8b, 0x51, 0x0b, 0x90, 0x15, 0x60, 0xbe, 0x8b, 0x25, 0x2a, 0x25, 0xbd, 0x30, 0xbc, 0xab, 0xa1, 0x56, 0x24, 0x42, 0x5f, 0x43, 0x5e, 0x98, 0x94, 0x13, 0x71, 0x73, 0x89, 0x44, 0xeb, 0x52, 0x45, 0xb3, 0x00, 0xe2, 0xf0, 0xa3, 0x4f, 0x20, 0x0c, 0xbf, 0xe1, 0x84, 0xae, 0x17, 0xa5, 0xa4, 0x65, 0x23, 0x04, 0x59, 0xcf, 0xec, 0x63, 0xd9, 0xa4, 0xfc, 0xf7, 0x71, 0x56, 0xc9, 0xd4, 0xb2, 0xc7, 0x59, 0x25, 0x5b, 0xcb, 0x1d, 0x67, 0x95, 0x5c, 0x2d, 0x7f, 0x9c, 0x55, 0xf2, 0xb5, 0xc2, 0x71, 0x56, 0x29, 0xd4, 0x94, 0xe3, 0xac, 0xa2, 0xd4, 0x8a, 0x9a, 0x0b, 0x95, 0xb1, 0xfc, 0xb0, 0x0e, 0x4d, 0x24, 0xd6, 0xb1, 0x79, 0x8b, 0x64, 0xf4, 0x72, 0x2c, 0x4c, 0x58, 0x53, 0xc6, 0xac, 0xa5, 0x6a, 0xe9, 0xe3, 0xac, 0x92, 0xae, 0x65, 0xe6, 0x59, 0xd6, 0xbe, 0x87, 0x62, 0x34, 0x7b, 0xd1, 0x3d, 
0x90, 0xd3, 0x97, 0x59, 0xc9, 0x70, 0x2b, 0x8a, 0x10, 0x24, 0x2c, 0x64, 0xe7, 0x5a, 0x98, 0xbd, 0x1e, 0x66, 0x01, 0x07, 0x23, 0x1c, 0x84, 0x16, 0xf8, 0x03, 0xb3, 0x90, 0x93, 0x16, 0xb8, 0x20, 0x61, 0x21, 0xbf, 0xd4, 0x1a, 0x62, 0x0b, 0x7f, 0x4d, 0x41, 0x5e, 0x98, 0x40, 0x8f, 0x93, 0x7b, 0xeb, 0xac, 0x7d, 0x26, 0xf4, 0x44, 0xec, 0xab, 0x4f, 0xc6, 0xf6, 0xd5, 0xfb, 0xf3, 0xf0, 0x89, 0x6d, 0xf5, 0x1b, 0xa8, 0xb8, 0x0e, 0xa1, 0xd8, 0x33, 0x44, 0x60, 0x64, 0x19, 0xdd, 0xb8, 0xa5, 0x95, 0x85, 0x86, 0x10, 0x68, 0x7f, 0x60, 0xa7, 0x81, 0x88, 0x36, 0x9e, 0xda, 0xa9, 0x0f, 0x9a, 0xda, 0xe9, 0xe5, 0xa6, 0x76, 0x66, 0xa9, 0xa9, 0x9d, 0x7d, 0xef, 0xa9, 0x9d, 0xfb, 0x80, 0xa9, 0xfd, 0x97, 0x34, 0xe4, 0x45, 0x6c, 0x16, 0xa7, 0x2f, 0x8a, 0xe9, 0x92, 0xe9, 0xe3, 0xf8, 0x44, 0xfa, 0xb6, 0x21, 0xe7, 0xfa, 0x96, 0xe9, 0xca, 0xd9, 0xbc, 0x36, 0xad, 0xb2, 0x67, 0xdb, 0x01, 0x26, 0x44, 0x17, 0x38, 0xb4, 0x03, 0xf9, 0x00, 0xf7, 0x7d, 0x8a, 0xe5, 0x44, 0xbe, 0x41, 0x43, 0x02, 0xd1, 0x33, 0xb6, 0x9b, 0x58, 0x43, 0xbe, 0x9b, 0x44, 0x71, 0x99, 0x2e, 0x2c, 0x81, 0xd0, 0x23, 0x2c, 0x5a, 0x87, 0x92, 0x60, 0x30, 0x12, 0x5d, 0x00, 0x42, 0xd4, 0x36, 0xfb, 0x58, 0xfb, 0x7d, 0x01, 0x20, 0x5e, 0x11, 0x4b, 0x2f, 0xa1, 0x01, 0x36, 0xfb, 0x71, 0x15, 0x88, 0x21, 0x54, 0x95, 0xe2, 0xb0, 0x0e, 0x1e, 0xc1, 0x6a, 0x04, 0x8c, 0x2a, 0x41, 0x14, 0x4c, 0x2d, 0x84, 0x46, 0xb5, 0xf0, 0x39, 0x84, 0xea, 0x61, 0x35, 0x88, 0x9a, 0xa9, 0x48, 0xa9, 0xac, 0x87, 0x4d, 0xa8, 0xf4, 0x31, 0x21, 0x66, 0x0f, 0x13, 0x83, 0x60, 0x8f, 0x86, 0xc7, 0x86, 0x50, 0xd8, 0x61, 0x3b, 0xef, 0x23, 0x58, 0x8d, 0x40, 0x01, 0xb6, 0xb0, 0x33, 0x8a, 0x0e, 0x0e, 0xb5, 0xf0, 0x85, 0x2e, 0xe5, 0x68, 0x0b, 0x6a, 0x57, 0x18, 0x0f, 0x0c, 0xd3, 0x75, 0x46, 0x21, 0xa9, 0x38, 0x3e, 0x54, 0x99, 0x7c, 0x8f, 0x8b, 0x39, 0xed, 0x25, 0x6c, 0xf2, 0x5a, 0xe4, 0x19, 0x32, 0x84, 0x5f, 0x06, 0x1f, 0xf5, 0xef, 0x79, 0x92, 0x58, 0x67, 0x34, 0x27, 0x8c, 0xa5, 0xc3, 0x49, 0x9a, 0x82, 0x23, 0xde, 0x2d, 0x7e, 0x03, 0x9f, 0x71, 0x4b, 0x32, 0x2f, 0x73, 0x4d, 0x29, 0x0b, 0x4d, 0x6d, 0x30, 0x1e, 0x9d, 0xd3, 0xcc, 0xb1, 0x15, 0x76, 0x98, 0x0c, 0x0c, 0x0f, 0x40, 0xc2, 0x44, 0x71, 0xb9, 0x0e, 0x7b, 0x29, 0xb4, 0x59, 0x9c, 0x62, 0x6a, 0x13, 0xd6, 0xc7, 0xa8, 0xc3, 0x5c, 0x24, 0xe8, 0x61, 0x21, 0xfd, 0xfd, 0x04, 0x7d, 0x98, 0xb4, 0xd8, 0xc4, 0xb7, 0xb0, 0x26, 0xd2, 0x71, 0xe1, 0xfa, 0xd7, 0x86, 0xe5, 0x7b, 0x34, 0xf0, 0x5d, 0xe3, 0xda, 0xf1, 0x6c, 0xff, 0x5a, 0x2d, 0x85, 0xfd, 0x3c, 0x41, 0xde, 0xf2, 0xe8, 0xb3, 0xa7, 0xdf, 0x9a, 0xee, 0x10, 0xeb, 0x77, 0xb9, 0xf6, 0xa1, 0xeb, 0x5f, 0x37, 0x85, 0xee, 0x6b, 0xae, 0x8a, 0xde, 0x40, 0x5d, 0x06, 0x7f, 0x16, 0x71, 0x79, 0x31, 0xf1, 0xc7, 0x42, 0x7d, 0x9a, 0xf9, 0x19, 0xe4, 0x7d, 0x71, 0x22, 0xac, 0xf0, 0x11, 0xfe, 0xe9, 0xbc, 0xf1, 0x71, 0xca, 0x51, 0xba, 0x44, 0x6b, 0xff, 0xcc, 0x40, 0x41, 0xb6, 0x3c, 0x7a, 0x09, 0x15, 0x6a, 0x0d, 0x9c, 0x81, 0x61, 0x0a, 0x81, 0x9c, 0x5c, 0x0f, 0xe6, 0x0e, 0x89, 0x46, 0xd7, 0x1a, 0xb4, 0x06, 0xf2, 0xe1, 0x68, 0x45, 0x2f, 0x73, 0xf5, 0x90, 0xee, 0x05, 0x94, 0x86, 0x36, 0x89, 0xc8, 0xc4, 0x58, 0xfb, 0x6c, 0x3e, 0xd9, 0x99, 0x4d, 0x62, 0x2a, 0x18, 0x46, 0x4f, 0xcc, 0x2f, 0x9f, 0x5e, 0xe2, 0x20, 0xa2, 0xca, 0x2c, 0xf2, 0xeb, 0x94, 0xc1, 0x13, 0x7e, 0xf9, 0x89, 0xe7, 0xfa, 0x1e, 0x94, 0x93, 0x7e, 0xb3, 0x93, 0xcf, 0xc4, 0x9a, 0xcb, 0x7a, 0x31, 0x5e, 0x06, 0x82, 0xec, 0xc0, 0x0f, 0xc4, 0xe7, 0x49, 0x4e, 0xe7, 0xbf, 0xeb, 0x5b, 0x00, 0xb1, 0xb7, 0xa8, 0x0e, 0xca, 0x85, 0xe3, 0x62, 0x3e, 0xe7, 0xc4, 0x79, 0x3c, 0x7a, 0xae, 0xb7, 0xa1, 0x9c, 0x74, 0x26, 0x3a, 0x15, 0xa4, 0xe2, 0x53, 0x01, 0x7a, 0x08, 0xb9, 0x11, 0xcb, 0xae, 0x0c, 
0xd1, 0x9d, 0xa9, 0x02, 0xd8, 0xf3, 0xde, 0xea, 0x02, 0xb2, 0x5f, 0x84, 0x82, 0xf4, 0x54, 0xfb, 0x63, 0x86, 0x9d, 0x6c, 0xe5, 0xb8, 0xdd, 0x85, 0x0c, 0x75, 0xc9, 0xfc, 0x6d, 0x37, 0x04, 0x36, 0xba, 0x2e, 0x8b, 0x08, 0x03, 0xb3, 0x8f, 0x37, 0x1e, 0x18, 0x69, 0x77, 0xeb, 0x06, 0x2d, 0xbe, 0x86, 0xf0, 0xe9, 0x68, 0x45, 0x17, 0x8a, 0xf5, 0x7f, 0xa5, 0x20, 0xd3, 0x75, 0x09, 0xfa, 0x1c, 0x2a, 0x84, 0x9a, 0x9e, 0x6d, 0x06, 0xb6, 0x11, 0x2f, 0x8f, 0x45, 0x3e, 0x14, 0xb3, 0x91, 0x8f, 0xd6, 0x01, 0x44, 0x22, 0xe3, 0xa3, 0xe4, 0xd1, 0x8a, 0x5e, 0xe4, 0x32, 0x0e, 0x78, 0x04, 0xab, 0xa2, 0xef, 0x2c, 0x1c, 0x50, 0xe7, 0xc2, 0xb1, 0xd8, 0xa7, 0x65, 0x86, 0x67, 0xa4, 0xc6, 0x5f, 0x34, 0x63, 0x39, 0x7a, 0x0c, 0x48, 0x36, 0x53, 0x12, 0x9d, 0xe5, 0xe8, 0x55, 0xf1, 0x26, 0x01, 0xdf, 0xaf, 0x42, 0xd9, 0x72, 0x06, 0xcc, 0x3a, 0x19, 0x3a, 0x14, 0xd7, 0x4f, 0xa1, 0x32, 0xb6, 0xaa, 0x0f, 0x4e, 0x4d, 0x01, 0x72, 0x7d, 0xdf, 0xc6, 0xae, 0xe6, 0x41, 0x39, 0xd9, 0x6b, 0x33, 0x89, 0xef, 0x24, 0x89, 0x8b, 0x92, 0x02, 0x3d, 0x05, 0x30, 0x6d, 0xdb, 0x61, 0x5a, 0xd1, 0xae, 0x3e, 0xdb, 0x66, 0x02, 0xa7, 0x9d, 0xc0, 0xed, 0xa4, 0x3d, 0x36, 0xc6, 0xfc, 0x21, 0x45, 0x3f, 0x02, 0x25, 0xbc, 0x2d, 0x93, 0x75, 0xb1, 0x36, 0x45, 0xf5, 0x5c, 0x02, 0xf4, 0x08, 0xaa, 0x59, 0x80, 0x92, 0x6c, 0x27, 0x8e, 0xd7, 0xc3, 0x01, 0xfb, 0x4c, 0x37, 0xd9, 0xe7, 0xbb, 0x58, 0x85, 0xa2, 0xcb, 0xa7, 0x31, 0x23, 0xe9, 0xe5, 0x8d, 0xfc, 0x5d, 0x99, 0xf0, 0xd9, 0x1a, 0xb4, 0xbc, 0x0b, 0x9f, 0xf5, 0x22, 0x9b, 0x21, 0x46, 0x7c, 0xa9, 0x50, 0xd1, 0x8b, 0x4c, 0x22, 0x6e, 0x35, 0x34, 0x31, 0xa1, 0x0c, 0xcb, 0x94, 0x88, 0x34, 0x47, 0x94, 0x98, 0xb0, 0x69, 0x0a, 0xcc, 0x97, 0x50, 0xe3, 0x98, 0x00, 0xd3, 0xc0, 0xf4, 0x48, 0xdf, 0xa1, 0x62, 0x60, 0x54, 0xf4, 0x5b, 0x4c, 0xae, 0xc7, 0x62, 0x76, 0x46, 0xe1, 0xd0, 0x41, 0xe0, 0x9f, 0x63, 0xc2, 0x4b, 0xa7, 0xa2, 0x73, 0x07, 0x5e, 0x71, 0x09, 0x3b, 0x4a, 0x72, 0xc0, 0xb9, 0x69, 0x5d, 0xf9, 0x17, 0xe2, 0x1b, 0x54, 0x9a, 0xdb, 0x17, 0xa2, 0x08, 0x22, 0xe6, 0x29, 0xe1, 0x9b, 0xbc, 0x84, 0x88, 0xa5, 0x11, 0xf4, 0x00, 0x6e, 0x89, 0x45, 0x79, 0xb6, 0x71, 0x4d, 0x2c, 0xd3, 0xc5, 0x7c, 0x37, 0xaf, 0xe8, 0x7c, 0x31, 0x1d, 0xcf, 0x7e, 0xcd, 0x85, 0x11, 0x2e, 0xb0, 0x46, 0x21, 0x4e, 0x89, 0x71, 0xba, 0x35, 0x92, 0xb8, 0x35, 0x50, 0x04, 0x8e, 0xfa, 0x7c, 0x23, 0xad, 0xe8, 0x05, 0x0e, 0xa0, 0x7e, 0xf4, 0xca, 0xa4, 0x3e, 0xdf, 0x04, 0xe5, 0xab, 0x3d, 0xea, 0xa3, 0x0d, 0xe9, 0x28, 0xf3, 0xa2, 0x4f, 0x08, 0xdf, 0xc6, 0xe4, 0x6a, 0x3b, 0x9e, 0xfd, 0x92, 0x90, 0x08, 0xc1, 0xec, 0x33, 0x44, 0x39, 0x46, 0xe8, 0xd6, 0x88, 0x21, 0xc2, 0xc5, 0x0e, 0x3d, 0xd3, 0xba, 0xc2, 0xb6, 0x5a, 0x89, 0x17, 0x7b, 0x26, 0x44, 0x51, 0x4c, 0x89, 0x40, 0x54, 0x13, 0x56, 0x04, 0xe0, 0x1e, 0xf0, 0x84, 0x1a, 0xae, 0x4f, 0xa8, 0x7a, 0x8b, 0xbf, 0xe6, 0x3e, 0x9f, 0xf8, 0x84, 0x46, 0x06, 0x64, 0xf2, 0xd4, 0x5a, 0x6c, 0x40, 0x26, 0x2e, 0x82, 0x5c, 0x30, 0x3a, 0x4a, 0xd4, 0xd5, 0x18, 0x72, 0x28, 0x44, 0xe8, 0x31, 0xdc, 0x16, 0x26, 0xd8, 0x31, 0x81, 0x9d, 0x94, 0xc5, 0xf9, 0x0b, 0x71, 0x24, 0xaf, 0x8e, 0x13, 0x93, 0xf0, 0x63, 0xa7, 0x3c, 0xd8, 0xa1, 0x18, 0x6e, 0x5a, 0x57, 0x02, 0x7d, 0x3b, 0xae, 0x19, 0x86, 0xde, 0xb3, 0xae, 0x38, 0x78, 0x9a, 0x3b, 0xc0, 0xd6, 0x48, 0xbd, 0x33, 0xcd, 0xad, 0x63, 0x6b, 0x34, 0xcd, 0xcd, 0xd1, 0x1f, 0x4d, 0x71, 0x73, 0x70, 0x18, 0x9a, 0x41, 0x9f, 0x0e, 0xd5, 0xbb, 0x71, 0x68, 0x5e, 0xf5, 0xe9, 0x10, 0x3d, 0x84, 0xd5, 0x28, 0x3b, 0x84, 0xd0, 0xcb, 0x00, 0x93, 0x4b, 0xf5, 0xe3, 0x44, 0x61, 0x5b, 0xa3, 0x8e, 0x14, 0x27, 0x2a, 0x84, 0xaa, 0x6a, 0xb2, 0x42, 0x68, 0x94, 0x9f, 0x80, 0xd2, 0x91, 0x19, 0xa8, 0x6b, 0x89, 0x1c, 0x73, 0x49, 
0x64, 0x87, 0xd5, 0x49, 0x64, 0xa7, 0x1e, 0xdb, 0xe9, 0x78, 0x76, 0x64, 0x27, 0xec, 0x47, 0x86, 0xb5, 0xae, 0x3d, 0x5b, 0xbd, 0x17, 0x27, 0xa3, 0xe3, 0xd9, 0xcd, 0x6b, 0x2f, 0x2e, 0x08, 0xd3, 0x1e, 0xb1, 0xa2, 0xba, 0x1f, 0x1b, 0xdc, 0xe3, 0x12, 0x76, 0xf2, 0x97, 0x39, 0xf7, 0x03, 0x1b, 0x07, 0x8e, 0xd7, 0x53, 0x3f, 0xe1, 0xa0, 0xaa, 0x48, 0x7b, 0x28, 0xd5, 0xce, 0xe1, 0xa3, 0x17, 0x98, 0x76, 0xfd, 0x81, 0xfc, 0x86, 0x24, 0x3a, 0xfe, 0xed, 0x10, 0x13, 0xca, 0x0e, 0xdb, 0xfc, 0x9b, 0xc1, 0x98, 0xba, 0xc1, 0xa8, 0x72, 0x79, 0x33, 0xba, 0x58, 0x58, 0x87, 0x52, 0xdf, 0xfc, 0xc1, 0x08, 0x30, 0x19, 0xba, 0x94, 0xc8, 0xcf, 0x06, 0xe8, 0x9b, 0x3f, 0xe8, 0x42, 0xa2, 0x19, 0x70, 0x77, 0xd2, 0x06, 0x19, 0xf8, 0x1e, 0xc1, 0xe8, 0x2b, 0x28, 0x48, 0x7a, 0x35, 0xc5, 0x8f, 0x58, 0x6b, 0xf3, 0xaf, 0xb3, 0x42, 0x24, 0xaa, 0x41, 0x06, 0x7b, 0xe2, 0xf3, 0x44, 0xd1, 0xd9, 0x4f, 0xed, 0x57, 0xb0, 0xfa, 0x02, 0x53, 0xf1, 0xc9, 0x1c, 0x2d, 0xe0, 0x01, 0xfb, 0xf8, 0x61, 0x0b, 0x88, 0xaf, 0x13, 0x52, 0xe1, 0x77, 0x8a, 0x19, 0x48, 0xf4, 0x32, 0xee, 0xbf, 0x01, 0x94, 0x64, 0x97, 0xae, 0x3f, 0x81, 0xbc, 0x20, 0x96, 0x9e, 0xab, 0x73, 0xaf, 0x12, 0x24, 0x6e, 0x86, 0xdf, 0xdb, 0x50, 0x8b, 0x98, 0x43, 0xb7, 0xc7, 0xee, 0x3f, 0x52, 0xe3, 0xf7, 0x1f, 0xda, 0x41, 0x62, 0xa1, 0x33, 0x3d, 0x49, 0x2d, 0xe3, 0x89, 0xf6, 0x3b, 0xf8, 0x38, 0xa2, 0x11, 0x3b, 0x06, 0x59, 0xc6, 0x7c, 0x22, 0xa4, 0xd1, 0x1d, 0x50, 0x3a, 0x19, 0xd2, 0xf0, 0x22, 0x68, 0x22, 0xa4, 0x99, 0xa9, 0x90, 0x5e, 0x82, 0x3a, 0xed, 0x80, 0x5c, 0xce, 0xf8, 0xff, 0x03, 0x52, 0xef, 0xf3, 0xff, 0x80, 0x19, 0x21, 0xde, 0xe5, 0x11, 0x8b, 0xee, 0xe4, 0xc4, 0x22, 0x6f, 0xbe, 0x97, 0xd3, 0x5a, 0x3c, 0xe1, 0x91, 0xce, 0xac, 0x5a, 0x4d, 0x2d, 0x57, 0xab, 0xda, 0xd7, 0x70, 0x87, 0x2d, 0x34, 0x71, 0x5b, 0x27, 0x3c, 0x98, 0xba, 0xb1, 0x4b, 0x4d, 0xdf, 0xd8, 0x69, 0x67, 0xbc, 0x37, 0x93, 0xca, 0xd2, 0x95, 0x9f, 0x02, 0xc4, 0xc0, 0xf9, 0xff, 0x5b, 0x4b, 0x68, 0x26, 0xf0, 0x5a, 0x4b, 0x54, 0x9d, 0x0c, 0x5a, 0x9c, 0xf6, 0x28, 0xa7, 0xa9, 0x89, 0x7b, 0x3d, 0x15, 0x0a, 0x64, 0xd8, 0xef, 0x9b, 0xc1, 0x5b, 0x19, 0xd9, 0xf0, 0x31, 0xac, 0x47, 0x49, 0x95, 0xa8, 0x47, 0x71, 0xf3, 0x35, 0xbf, 0x1e, 0x85, 0x86, 0xc4, 0xed, 0xfe, 0x39, 0x07, 0x8a, 0x0c, 0xdd, 0x3b, 0x64, 0x41, 0x75, 0x7c, 0x5a, 0xa0, 0x2f, 0xa6, 0x09, 0x66, 0xce, 0xac, 0xfa, 0xd6, 0x62, 0xa0, 0xf4, 0xf1, 0x35, 0x40, 0xdc, 0xd3, 0x68, 0x73, 0xa6, 0xde, 0xf8, 0x3c, 0xa9, 0x7f, 0x76, 0x33, 0x48, 0x12, 0x77, 0xa1, 0x18, 0x49, 0x91, 0x76, 0x83, 0x4a, 0x48, 0xbb, 0x79, 0x23, 0x46, 0xb2, 0x3a, 0x89, 0x41, 0x21, 0xfb, 0x05, 0x7d, 0x79, 0x83, 0xe2, 0x78, 0x53, 0xd7, 0x1f, 0x2e, 0x03, 0x1d, 0x8b, 0x4c, 0xf8, 0xef, 0xdb, 0xd9, 0xde, 0x8d, 0xb7, 0xd3, 0x9c, 0xc8, 0x4c, 0xf6, 0xcf, 0xf7, 0x50, 0x19, 0xab, 0x66, 0xf4, 0x60, 0xb6, 0x57, 0x93, 0xbd, 0x52, 0xff, 0x62, 0x21, 0x6e, 0x3c, 0xf6, 0xe2, 0xa2, 0x70, 0x4e, 0xec, 0x93, 0x55, 0x3f, 0x2f, 0xf6, 0x63, 0xe5, 0xbc, 0xff, 0x06, 0x6e, 0x3b, 0xfe, 0x14, 0x70, 0xbf, 0x12, 0x16, 0xec, 0x2b, 0x76, 0x24, 0x7f, 0x95, 0xfa, 0xe5, 0x13, 0x79, 0x44, 0xef, 0xf9, 0xae, 0xe9, 0xf5, 0x1a, 0x7e, 0xd0, 0xdb, 0x1e, 0xff, 0xb7, 0x3d, 0x7b, 0x0a, 0x77, 0xd3, 0x77, 0xc6, 0x68, 0xe7, 0x3c, 0xcf, 0x4f, 0xf3, 0x5f, 0xfd, 0x37, 0x00, 0x00, 0xff, 0xff, 0x54, 0xae, 0x0b, 0x93, 0xdf, 0x1f, 0x00, 0x00, } grpc-go-1.22.1/channelz/service/000077500000000000000000000000001351635773100164105ustar00rootroot00000000000000grpc-go-1.22.1/channelz/service/func_linux.go000066400000000000000000000070551351635773100211200ustar00rootroot00000000000000// +build !appengine /* * * Copyright 2018 gRPC authors. 
* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package service import ( "github.com/golang/protobuf/ptypes" channelzpb "google.golang.org/grpc/channelz/grpc_channelz_v1" "google.golang.org/grpc/internal/channelz" ) func sockoptToProto(skopts *channelz.SocketOptionData) []*channelzpb.SocketOption { var opts []*channelzpb.SocketOption if skopts.Linger != nil { additional, err := ptypes.MarshalAny(&channelzpb.SocketOptionLinger{ Active: skopts.Linger.Onoff != 0, Duration: convertToPtypesDuration(int64(skopts.Linger.Linger), 0), }) if err == nil { opts = append(opts, &channelzpb.SocketOption{ Name: "SO_LINGER", Additional: additional, }) } } if skopts.RecvTimeout != nil { additional, err := ptypes.MarshalAny(&channelzpb.SocketOptionTimeout{ Duration: convertToPtypesDuration(int64(skopts.RecvTimeout.Sec), int64(skopts.RecvTimeout.Usec)), }) if err == nil { opts = append(opts, &channelzpb.SocketOption{ Name: "SO_RCVTIMEO", Additional: additional, }) } } if skopts.SendTimeout != nil { additional, err := ptypes.MarshalAny(&channelzpb.SocketOptionTimeout{ Duration: convertToPtypesDuration(int64(skopts.SendTimeout.Sec), int64(skopts.SendTimeout.Usec)), }) if err == nil { opts = append(opts, &channelzpb.SocketOption{ Name: "SO_SNDTIMEO", Additional: additional, }) } } if skopts.TCPInfo != nil { additional, err := ptypes.MarshalAny(&channelzpb.SocketOptionTcpInfo{ TcpiState: uint32(skopts.TCPInfo.State), TcpiCaState: uint32(skopts.TCPInfo.Ca_state), TcpiRetransmits: uint32(skopts.TCPInfo.Retransmits), TcpiProbes: uint32(skopts.TCPInfo.Probes), TcpiBackoff: uint32(skopts.TCPInfo.Backoff), TcpiOptions: uint32(skopts.TCPInfo.Options), // https://golang.org/pkg/syscall/#TCPInfo // TCPInfo struct does not contain info about TcpiSndWscale and TcpiRcvWscale. TcpiRto: skopts.TCPInfo.Rto, TcpiAto: skopts.TCPInfo.Ato, TcpiSndMss: skopts.TCPInfo.Snd_mss, TcpiRcvMss: skopts.TCPInfo.Rcv_mss, TcpiUnacked: skopts.TCPInfo.Unacked, TcpiSacked: skopts.TCPInfo.Sacked, TcpiLost: skopts.TCPInfo.Lost, TcpiRetrans: skopts.TCPInfo.Retrans, TcpiFackets: skopts.TCPInfo.Fackets, TcpiLastDataSent: skopts.TCPInfo.Last_data_sent, TcpiLastAckSent: skopts.TCPInfo.Last_ack_sent, TcpiLastDataRecv: skopts.TCPInfo.Last_data_recv, TcpiLastAckRecv: skopts.TCPInfo.Last_ack_recv, TcpiPmtu: skopts.TCPInfo.Pmtu, TcpiRcvSsthresh: skopts.TCPInfo.Rcv_ssthresh, TcpiRtt: skopts.TCPInfo.Rtt, TcpiRttvar: skopts.TCPInfo.Rttvar, TcpiSndSsthresh: skopts.TCPInfo.Snd_ssthresh, TcpiSndCwnd: skopts.TCPInfo.Snd_cwnd, TcpiAdvmss: skopts.TCPInfo.Advmss, TcpiReordering: skopts.TCPInfo.Reordering, }) if err == nil { opts = append(opts, &channelzpb.SocketOption{ Name: "TCP_INFO", Additional: additional, }) } } return opts } grpc-go-1.22.1/channelz/service/func_nonlinux.go000066400000000000000000000015421351635773100216260ustar00rootroot00000000000000// +build !linux appengine /* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package service import ( channelzpb "google.golang.org/grpc/channelz/grpc_channelz_v1" "google.golang.org/grpc/internal/channelz" ) func sockoptToProto(skopts *channelz.SocketOptionData) []*channelzpb.SocketOption { return nil } grpc-go-1.22.1/channelz/service/regenerate.sh000077500000000000000000000020071351635773100210670ustar00rootroot00000000000000#!/bin/bash # Copyright 2018 gRPC authors. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. set -eux -o pipefail TMP=$(mktemp -d) function finish { rm -rf "$TMP" } trap finish EXIT pushd "$TMP" mkdir -p grpc/channelz/v1 curl https://raw.githubusercontent.com/grpc/grpc-proto/master/grpc/channelz/v1/channelz.proto > grpc/channelz/v1/channelz.proto protoc --go_out=plugins=grpc,paths=source_relative:. -I. grpc/channelz/v1/*.proto popd rm -f ../grpc_channelz_v1/*.pb.go cp "$TMP"/grpc/channelz/v1/*.pb.go ../grpc_channelz_v1/ grpc-go-1.22.1/channelz/service/service.go000066400000000000000000000322431351635773100204030ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ //go:generate ./regenerate.sh // Package service provides an implementation for channelz service server. package service import ( "context" "net" "time" "github.com/golang/protobuf/ptypes" durpb "github.com/golang/protobuf/ptypes/duration" wrpb "github.com/golang/protobuf/ptypes/wrappers" "google.golang.org/grpc" channelzgrpc "google.golang.org/grpc/channelz/grpc_channelz_v1" channelzpb "google.golang.org/grpc/channelz/grpc_channelz_v1" "google.golang.org/grpc/codes" "google.golang.org/grpc/connectivity" "google.golang.org/grpc/credentials" "google.golang.org/grpc/internal/channelz" "google.golang.org/grpc/status" ) func init() { channelz.TurnOn() } func convertToPtypesDuration(sec int64, usec int64) *durpb.Duration { return ptypes.DurationProto(time.Duration(sec*1e9 + usec*1e3)) } // RegisterChannelzServiceToServer registers the channelz service to the given server. 
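//
// A minimal usage sketch (illustrative only; the listen address and the error
// handling are assumptions, not taken from this file):
//
//	s := grpc.NewServer()
//	service.RegisterChannelzServiceToServer(s)
//	lis, err := net.Listen("tcp", ":50051")
//	if err != nil {
//		log.Fatalf("failed to listen: %v", err)
//	}
//	s.Serve(lis)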
func RegisterChannelzServiceToServer(s *grpc.Server) { channelzgrpc.RegisterChannelzServer(s, newCZServer()) } func newCZServer() channelzgrpc.ChannelzServer { return &serverImpl{} } type serverImpl struct{} func connectivityStateToProto(s connectivity.State) *channelzpb.ChannelConnectivityState { switch s { case connectivity.Idle: return &channelzpb.ChannelConnectivityState{State: channelzpb.ChannelConnectivityState_IDLE} case connectivity.Connecting: return &channelzpb.ChannelConnectivityState{State: channelzpb.ChannelConnectivityState_CONNECTING} case connectivity.Ready: return &channelzpb.ChannelConnectivityState{State: channelzpb.ChannelConnectivityState_READY} case connectivity.TransientFailure: return &channelzpb.ChannelConnectivityState{State: channelzpb.ChannelConnectivityState_TRANSIENT_FAILURE} case connectivity.Shutdown: return &channelzpb.ChannelConnectivityState{State: channelzpb.ChannelConnectivityState_SHUTDOWN} default: return &channelzpb.ChannelConnectivityState{State: channelzpb.ChannelConnectivityState_UNKNOWN} } } func channelTraceToProto(ct *channelz.ChannelTrace) *channelzpb.ChannelTrace { pbt := &channelzpb.ChannelTrace{} pbt.NumEventsLogged = ct.EventNum if ts, err := ptypes.TimestampProto(ct.CreationTime); err == nil { pbt.CreationTimestamp = ts } var events []*channelzpb.ChannelTraceEvent for _, e := range ct.Events { cte := &channelzpb.ChannelTraceEvent{ Description: e.Desc, Severity: channelzpb.ChannelTraceEvent_Severity(e.Severity), } if ts, err := ptypes.TimestampProto(e.Timestamp); err == nil { cte.Timestamp = ts } if e.RefID != 0 { switch e.RefType { case channelz.RefChannel: cte.ChildRef = &channelzpb.ChannelTraceEvent_ChannelRef{ChannelRef: &channelzpb.ChannelRef{ChannelId: e.RefID, Name: e.RefName}} case channelz.RefSubChannel: cte.ChildRef = &channelzpb.ChannelTraceEvent_SubchannelRef{SubchannelRef: &channelzpb.SubchannelRef{SubchannelId: e.RefID, Name: e.RefName}} } } events = append(events, cte) } pbt.Events = events return pbt } func channelMetricToProto(cm *channelz.ChannelMetric) *channelzpb.Channel { c := &channelzpb.Channel{} c.Ref = &channelzpb.ChannelRef{ChannelId: cm.ID, Name: cm.RefName} c.Data = &channelzpb.ChannelData{ State: connectivityStateToProto(cm.ChannelData.State), Target: cm.ChannelData.Target, CallsStarted: cm.ChannelData.CallsStarted, CallsSucceeded: cm.ChannelData.CallsSucceeded, CallsFailed: cm.ChannelData.CallsFailed, } if ts, err := ptypes.TimestampProto(cm.ChannelData.LastCallStartedTimestamp); err == nil { c.Data.LastCallStartedTimestamp = ts } nestedChans := make([]*channelzpb.ChannelRef, 0, len(cm.NestedChans)) for id, ref := range cm.NestedChans { nestedChans = append(nestedChans, &channelzpb.ChannelRef{ChannelId: id, Name: ref}) } c.ChannelRef = nestedChans subChans := make([]*channelzpb.SubchannelRef, 0, len(cm.SubChans)) for id, ref := range cm.SubChans { subChans = append(subChans, &channelzpb.SubchannelRef{SubchannelId: id, Name: ref}) } c.SubchannelRef = subChans sockets := make([]*channelzpb.SocketRef, 0, len(cm.Sockets)) for id, ref := range cm.Sockets { sockets = append(sockets, &channelzpb.SocketRef{SocketId: id, Name: ref}) } c.SocketRef = sockets c.Data.Trace = channelTraceToProto(cm.Trace) return c } func subChannelMetricToProto(cm *channelz.SubChannelMetric) *channelzpb.Subchannel { sc := &channelzpb.Subchannel{} sc.Ref = &channelzpb.SubchannelRef{SubchannelId: cm.ID, Name: cm.RefName} sc.Data = &channelzpb.ChannelData{ State: connectivityStateToProto(cm.ChannelData.State), Target: cm.ChannelData.Target, 
CallsStarted: cm.ChannelData.CallsStarted, CallsSucceeded: cm.ChannelData.CallsSucceeded, CallsFailed: cm.ChannelData.CallsFailed, } if ts, err := ptypes.TimestampProto(cm.ChannelData.LastCallStartedTimestamp); err == nil { sc.Data.LastCallStartedTimestamp = ts } nestedChans := make([]*channelzpb.ChannelRef, 0, len(cm.NestedChans)) for id, ref := range cm.NestedChans { nestedChans = append(nestedChans, &channelzpb.ChannelRef{ChannelId: id, Name: ref}) } sc.ChannelRef = nestedChans subChans := make([]*channelzpb.SubchannelRef, 0, len(cm.SubChans)) for id, ref := range cm.SubChans { subChans = append(subChans, &channelzpb.SubchannelRef{SubchannelId: id, Name: ref}) } sc.SubchannelRef = subChans sockets := make([]*channelzpb.SocketRef, 0, len(cm.Sockets)) for id, ref := range cm.Sockets { sockets = append(sockets, &channelzpb.SocketRef{SocketId: id, Name: ref}) } sc.SocketRef = sockets sc.Data.Trace = channelTraceToProto(cm.Trace) return sc } func securityToProto(se credentials.ChannelzSecurityValue) *channelzpb.Security { switch v := se.(type) { case *credentials.TLSChannelzSecurityValue: return &channelzpb.Security{Model: &channelzpb.Security_Tls_{Tls: &channelzpb.Security_Tls{ CipherSuite: &channelzpb.Security_Tls_StandardName{StandardName: v.StandardName}, LocalCertificate: v.LocalCertificate, RemoteCertificate: v.RemoteCertificate, }}} case *credentials.OtherChannelzSecurityValue: otherSecurity := &channelzpb.Security_OtherSecurity{ Name: v.Name, } if anyval, err := ptypes.MarshalAny(v.Value); err == nil { otherSecurity.Value = anyval } return &channelzpb.Security{Model: &channelzpb.Security_Other{Other: otherSecurity}} } return nil } func addrToProto(a net.Addr) *channelzpb.Address { switch a.Network() { case "udp": // TODO: Address_OtherAddress{}. Need proto def for Value. case "ip": // Note zone info is discarded through the conversion. return &channelzpb.Address{Address: &channelzpb.Address_TcpipAddress{TcpipAddress: &channelzpb.Address_TcpIpAddress{IpAddress: a.(*net.IPAddr).IP}}} case "ip+net": // Note mask info is discarded through the conversion. return &channelzpb.Address{Address: &channelzpb.Address_TcpipAddress{TcpipAddress: &channelzpb.Address_TcpIpAddress{IpAddress: a.(*net.IPNet).IP}}} case "tcp": // Note zone info is discarded through the conversion. 
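// Illustrative example (values assumed for illustration, not from this file):
// a *net.TCPAddr for "192.0.2.1:50051" becomes an Address_TcpIpAddress whose
// IpAddress field carries the raw net.IP bytes and whose Port field is 50051;
// any zone information is dropped, as noted above.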
return &channelzpb.Address{Address: &channelzpb.Address_TcpipAddress{TcpipAddress: &channelzpb.Address_TcpIpAddress{IpAddress: a.(*net.TCPAddr).IP, Port: int32(a.(*net.TCPAddr).Port)}}} case "unix", "unixgram", "unixpacket": return &channelzpb.Address{Address: &channelzpb.Address_UdsAddress_{UdsAddress: &channelzpb.Address_UdsAddress{Filename: a.String()}}} default: } return &channelzpb.Address{} } func socketMetricToProto(sm *channelz.SocketMetric) *channelzpb.Socket { s := &channelzpb.Socket{} s.Ref = &channelzpb.SocketRef{SocketId: sm.ID, Name: sm.RefName} s.Data = &channelzpb.SocketData{ StreamsStarted: sm.SocketData.StreamsStarted, StreamsSucceeded: sm.SocketData.StreamsSucceeded, StreamsFailed: sm.SocketData.StreamsFailed, MessagesSent: sm.SocketData.MessagesSent, MessagesReceived: sm.SocketData.MessagesReceived, KeepAlivesSent: sm.SocketData.KeepAlivesSent, } if ts, err := ptypes.TimestampProto(sm.SocketData.LastLocalStreamCreatedTimestamp); err == nil { s.Data.LastLocalStreamCreatedTimestamp = ts } if ts, err := ptypes.TimestampProto(sm.SocketData.LastRemoteStreamCreatedTimestamp); err == nil { s.Data.LastRemoteStreamCreatedTimestamp = ts } if ts, err := ptypes.TimestampProto(sm.SocketData.LastMessageSentTimestamp); err == nil { s.Data.LastMessageSentTimestamp = ts } if ts, err := ptypes.TimestampProto(sm.SocketData.LastMessageReceivedTimestamp); err == nil { s.Data.LastMessageReceivedTimestamp = ts } s.Data.LocalFlowControlWindow = &wrpb.Int64Value{Value: sm.SocketData.LocalFlowControlWindow} s.Data.RemoteFlowControlWindow = &wrpb.Int64Value{Value: sm.SocketData.RemoteFlowControlWindow} if sm.SocketData.SocketOptions != nil { s.Data.Option = sockoptToProto(sm.SocketData.SocketOptions) } if sm.SocketData.Security != nil { s.Security = securityToProto(sm.SocketData.Security) } if sm.SocketData.LocalAddr != nil { s.Local = addrToProto(sm.SocketData.LocalAddr) } if sm.SocketData.RemoteAddr != nil { s.Remote = addrToProto(sm.SocketData.RemoteAddr) } s.RemoteName = sm.SocketData.RemoteName return s } func (s *serverImpl) GetTopChannels(ctx context.Context, req *channelzpb.GetTopChannelsRequest) (*channelzpb.GetTopChannelsResponse, error) { metrics, end := channelz.GetTopChannels(req.GetStartChannelId(), req.GetMaxResults()) resp := &channelzpb.GetTopChannelsResponse{} for _, m := range metrics { resp.Channel = append(resp.Channel, channelMetricToProto(m)) } resp.End = end return resp, nil } func serverMetricToProto(sm *channelz.ServerMetric) *channelzpb.Server { s := &channelzpb.Server{} s.Ref = &channelzpb.ServerRef{ServerId: sm.ID, Name: sm.RefName} s.Data = &channelzpb.ServerData{ CallsStarted: sm.ServerData.CallsStarted, CallsSucceeded: sm.ServerData.CallsSucceeded, CallsFailed: sm.ServerData.CallsFailed, } if ts, err := ptypes.TimestampProto(sm.ServerData.LastCallStartedTimestamp); err == nil { s.Data.LastCallStartedTimestamp = ts } sockets := make([]*channelzpb.SocketRef, 0, len(sm.ListenSockets)) for id, ref := range sm.ListenSockets { sockets = append(sockets, &channelzpb.SocketRef{SocketId: id, Name: ref}) } s.ListenSocket = sockets return s } func (s *serverImpl) GetServers(ctx context.Context, req *channelzpb.GetServersRequest) (*channelzpb.GetServersResponse, error) { metrics, end := channelz.GetServers(req.GetStartServerId(), req.GetMaxResults()) resp := &channelzpb.GetServersResponse{} for _, m := range metrics { resp.Server = append(resp.Server, serverMetricToProto(m)) } resp.End = end return resp, nil } func (s *serverImpl) GetServerSockets(ctx context.Context, req 
*channelzpb.GetServerSocketsRequest) (*channelzpb.GetServerSocketsResponse, error) { metrics, end := channelz.GetServerSockets(req.GetServerId(), req.GetStartSocketId(), req.GetMaxResults()) resp := &channelzpb.GetServerSocketsResponse{} for _, m := range metrics { resp.SocketRef = append(resp.SocketRef, &channelzpb.SocketRef{SocketId: m.ID, Name: m.RefName}) } resp.End = end return resp, nil } func (s *serverImpl) GetChannel(ctx context.Context, req *channelzpb.GetChannelRequest) (*channelzpb.GetChannelResponse, error) { var metric *channelz.ChannelMetric if metric = channelz.GetChannel(req.GetChannelId()); metric == nil { return nil, status.Errorf(codes.NotFound, "requested channel %d not found", req.GetChannelId()) } resp := &channelzpb.GetChannelResponse{Channel: channelMetricToProto(metric)} return resp, nil } func (s *serverImpl) GetSubchannel(ctx context.Context, req *channelzpb.GetSubchannelRequest) (*channelzpb.GetSubchannelResponse, error) { var metric *channelz.SubChannelMetric if metric = channelz.GetSubChannel(req.GetSubchannelId()); metric == nil { return nil, status.Errorf(codes.NotFound, "requested sub channel %d not found", req.GetSubchannelId()) } resp := &channelzpb.GetSubchannelResponse{Subchannel: subChannelMetricToProto(metric)} return resp, nil } func (s *serverImpl) GetSocket(ctx context.Context, req *channelzpb.GetSocketRequest) (*channelzpb.GetSocketResponse, error) { var metric *channelz.SocketMetric if metric = channelz.GetSocket(req.GetSocketId()); metric == nil { return nil, status.Errorf(codes.NotFound, "requested socket %d not found", req.GetSocketId()) } resp := &channelzpb.GetSocketResponse{Socket: socketMetricToProto(metric)} return resp, nil } func (s *serverImpl) GetServer(ctx context.Context, req *channelzpb.GetServerRequest) (*channelzpb.GetServerResponse, error) { var metric *channelz.ServerMetric if metric = channelz.GetServer(req.GetServerId()); metric == nil { return nil, status.Errorf(codes.NotFound, "requested server %d not found", req.GetServerId()) } resp := &channelzpb.GetServerResponse{Server: serverMetricToProto(metric)} return resp, nil } grpc-go-1.22.1/channelz/service/service_sktopt_test.go000066400000000000000000000120701351635773100230420ustar00rootroot00000000000000// +build linux,!appengine // +build 386 amd64 /* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // SocketOptions is only supported on linux system. The functions defined in // this file are to parse the socket option field and the test is specifically // to verify the behavior of socket option parsing. package service import ( "context" "reflect" "strconv" "testing" "github.com/golang/protobuf/ptypes" durpb "github.com/golang/protobuf/ptypes/duration" "golang.org/x/sys/unix" channelzpb "google.golang.org/grpc/channelz/grpc_channelz_v1" "google.golang.org/grpc/internal/channelz" ) func init() { // Assign protoToSocketOption to protoToSocketOpt in order to enable socket option // data conversion from proto message to channelz defined struct. 
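// Descriptive note: the assignment goes through a function variable because
// protoToSocketOption depends on golang.org/x/sys/unix and is compiled only
// under the build tags at the top of this file (linux, !appengine, 386/amd64).
// On other platforms protoToSocketOpt stays nil and socketProtoToStruct simply
// skips socket option conversion.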
protoToSocketOpt = protoToSocketOption } func convertToDuration(d *durpb.Duration) (sec int64, usec int64) { if d != nil { if dur, err := ptypes.Duration(d); err == nil { sec = int64(int64(dur) / 1e9) usec = (int64(dur) - sec*1e9) / 1e3 } } return } func protoToLinger(protoLinger *channelzpb.SocketOptionLinger) *unix.Linger { linger := &unix.Linger{} if protoLinger.GetActive() { linger.Onoff = 1 } lv, _ := convertToDuration(protoLinger.GetDuration()) linger.Linger = int32(lv) return linger } func protoToSocketOption(skopts []*channelzpb.SocketOption) *channelz.SocketOptionData { skdata := &channelz.SocketOptionData{} for _, opt := range skopts { switch opt.GetName() { case "SO_LINGER": protoLinger := &channelzpb.SocketOptionLinger{} err := ptypes.UnmarshalAny(opt.GetAdditional(), protoLinger) if err == nil { skdata.Linger = protoToLinger(protoLinger) } case "SO_RCVTIMEO": protoTimeout := &channelzpb.SocketOptionTimeout{} err := ptypes.UnmarshalAny(opt.GetAdditional(), protoTimeout) if err == nil { skdata.RecvTimeout = protoToTime(protoTimeout) } case "SO_SNDTIMEO": protoTimeout := &channelzpb.SocketOptionTimeout{} err := ptypes.UnmarshalAny(opt.GetAdditional(), protoTimeout) if err == nil { skdata.SendTimeout = protoToTime(protoTimeout) } case "TCP_INFO": tcpi := &channelzpb.SocketOptionTcpInfo{} err := ptypes.UnmarshalAny(opt.GetAdditional(), tcpi) if err == nil { skdata.TCPInfo = &unix.TCPInfo{ State: uint8(tcpi.TcpiState), Ca_state: uint8(tcpi.TcpiCaState), Retransmits: uint8(tcpi.TcpiRetransmits), Probes: uint8(tcpi.TcpiProbes), Backoff: uint8(tcpi.TcpiBackoff), Options: uint8(tcpi.TcpiOptions), Rto: tcpi.TcpiRto, Ato: tcpi.TcpiAto, Snd_mss: tcpi.TcpiSndMss, Rcv_mss: tcpi.TcpiRcvMss, Unacked: tcpi.TcpiUnacked, Sacked: tcpi.TcpiSacked, Lost: tcpi.TcpiLost, Retrans: tcpi.TcpiRetrans, Fackets: tcpi.TcpiFackets, Last_data_sent: tcpi.TcpiLastDataSent, Last_ack_sent: tcpi.TcpiLastAckSent, Last_data_recv: tcpi.TcpiLastDataRecv, Last_ack_recv: tcpi.TcpiLastAckRecv, Pmtu: tcpi.TcpiPmtu, Rcv_ssthresh: tcpi.TcpiRcvSsthresh, Rtt: tcpi.TcpiRtt, Rttvar: tcpi.TcpiRttvar, Snd_ssthresh: tcpi.TcpiSndSsthresh, Snd_cwnd: tcpi.TcpiSndCwnd, Advmss: tcpi.TcpiAdvmss, Reordering: tcpi.TcpiReordering} } } } return skdata } func TestGetSocketOptions(t *testing.T) { czCleanup := channelz.NewChannelzStorage() defer cleanupWrapper(czCleanup, t) ss := []*dummySocket{ { socketOptions: &channelz.SocketOptionData{ Linger: &unix.Linger{Onoff: 1, Linger: 2}, RecvTimeout: &unix.Timeval{Sec: 10, Usec: 1}, SendTimeout: &unix.Timeval{}, TCPInfo: &unix.TCPInfo{State: 1}, }, }, } svr := newCZServer() ids := make([]int64, len(ss)) svrID := channelz.RegisterServer(&dummyServer{}, "") defer channelz.RemoveEntry(svrID) for i, s := range ss { ids[i] = channelz.RegisterNormalSocket(s, svrID, strconv.Itoa(i)) defer channelz.RemoveEntry(ids[i]) } for i, s := range ss { resp, _ := svr.GetSocket(context.Background(), &channelzpb.GetSocketRequest{SocketId: ids[i]}) metrics := resp.GetSocket() if !reflect.DeepEqual(metrics.GetRef(), &channelzpb.SocketRef{SocketId: ids[i], Name: strconv.Itoa(i)}) || !reflect.DeepEqual(socketProtoToStruct(metrics), s) { t.Fatalf("resp.GetSocket() want: metrics.GetRef() = %#v and %#v, got: metrics.GetRef() = %#v and %#v", &channelzpb.SocketRef{SocketId: ids[i], Name: strconv.Itoa(i)}, s, metrics.GetRef(), socketProtoToStruct(metrics)) } } } grpc-go-1.22.1/channelz/service/service_test.go000066400000000000000000000654121351635773100214460ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. 
* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package service import ( "context" "fmt" "net" "reflect" "strconv" "testing" "time" "github.com/golang/protobuf/proto" "github.com/golang/protobuf/ptypes" channelzpb "google.golang.org/grpc/channelz/grpc_channelz_v1" "google.golang.org/grpc/connectivity" "google.golang.org/grpc/credentials" "google.golang.org/grpc/internal/channelz" ) func init() { channelz.TurnOn() } func cleanupWrapper(cleanup func() error, t *testing.T) { if err := cleanup(); err != nil { t.Error(err) } } type protoToSocketOptFunc func([]*channelzpb.SocketOption) *channelz.SocketOptionData // protoToSocketOpt is used in function socketProtoToStruct to extract socket option // data from unmarshaled proto message. // It is only defined under linux, non-appengine environment on x86 architecture. var protoToSocketOpt protoToSocketOptFunc // emptyTime is used for detecting unset value of time.Time type. // For go1.7 and earlier, ptypes.Timestamp will fill in the loc field of time.Time // with &utcLoc. However zero value of a time.Time type value loc field is nil. // This behavior will make reflect.DeepEqual fail upon unset time.Time field, // and cause false positive fatal error. // TODO: Go1.7 is no longer supported - does this need a change? var emptyTime time.Time type dummyChannel struct { state connectivity.State target string callsStarted int64 callsSucceeded int64 callsFailed int64 lastCallStartedTimestamp time.Time } func (d *dummyChannel) ChannelzMetric() *channelz.ChannelInternalMetric { return &channelz.ChannelInternalMetric{ State: d.state, Target: d.target, CallsStarted: d.callsStarted, CallsSucceeded: d.callsSucceeded, CallsFailed: d.callsFailed, LastCallStartedTimestamp: d.lastCallStartedTimestamp, } } type dummyServer struct { callsStarted int64 callsSucceeded int64 callsFailed int64 lastCallStartedTimestamp time.Time } func (d *dummyServer) ChannelzMetric() *channelz.ServerInternalMetric { return &channelz.ServerInternalMetric{ CallsStarted: d.callsStarted, CallsSucceeded: d.callsSucceeded, CallsFailed: d.callsFailed, LastCallStartedTimestamp: d.lastCallStartedTimestamp, } } type dummySocket struct { streamsStarted int64 streamsSucceeded int64 streamsFailed int64 messagesSent int64 messagesReceived int64 keepAlivesSent int64 lastLocalStreamCreatedTimestamp time.Time lastRemoteStreamCreatedTimestamp time.Time lastMessageSentTimestamp time.Time lastMessageReceivedTimestamp time.Time localFlowControlWindow int64 remoteFlowControlWindow int64 socketOptions *channelz.SocketOptionData localAddr net.Addr remoteAddr net.Addr security credentials.ChannelzSecurityValue remoteName string } func (d *dummySocket) ChannelzMetric() *channelz.SocketInternalMetric { return &channelz.SocketInternalMetric{ StreamsStarted: d.streamsStarted, StreamsSucceeded: d.streamsSucceeded, StreamsFailed: d.streamsFailed, MessagesSent: d.messagesSent, MessagesReceived: d.messagesReceived, KeepAlivesSent: d.keepAlivesSent, LastLocalStreamCreatedTimestamp: d.lastLocalStreamCreatedTimestamp, 
LastRemoteStreamCreatedTimestamp: d.lastRemoteStreamCreatedTimestamp, LastMessageSentTimestamp: d.lastMessageSentTimestamp, LastMessageReceivedTimestamp: d.lastMessageReceivedTimestamp, LocalFlowControlWindow: d.localFlowControlWindow, RemoteFlowControlWindow: d.remoteFlowControlWindow, SocketOptions: d.socketOptions, LocalAddr: d.localAddr, RemoteAddr: d.remoteAddr, Security: d.security, RemoteName: d.remoteName, } } func channelProtoToStruct(c *channelzpb.Channel) *dummyChannel { dc := &dummyChannel{} pdata := c.GetData() switch pdata.GetState().GetState() { case channelzpb.ChannelConnectivityState_UNKNOWN: // TODO: what should we set here? case channelzpb.ChannelConnectivityState_IDLE: dc.state = connectivity.Idle case channelzpb.ChannelConnectivityState_CONNECTING: dc.state = connectivity.Connecting case channelzpb.ChannelConnectivityState_READY: dc.state = connectivity.Ready case channelzpb.ChannelConnectivityState_TRANSIENT_FAILURE: dc.state = connectivity.TransientFailure case channelzpb.ChannelConnectivityState_SHUTDOWN: dc.state = connectivity.Shutdown } dc.target = pdata.GetTarget() dc.callsStarted = pdata.CallsStarted dc.callsSucceeded = pdata.CallsSucceeded dc.callsFailed = pdata.CallsFailed if t, err := ptypes.Timestamp(pdata.GetLastCallStartedTimestamp()); err == nil { if !t.Equal(emptyTime) { dc.lastCallStartedTimestamp = t } } return dc } func serverProtoToStruct(s *channelzpb.Server) *dummyServer { ds := &dummyServer{} pdata := s.GetData() ds.callsStarted = pdata.CallsStarted ds.callsSucceeded = pdata.CallsSucceeded ds.callsFailed = pdata.CallsFailed if t, err := ptypes.Timestamp(pdata.GetLastCallStartedTimestamp()); err == nil { if !t.Equal(emptyTime) { ds.lastCallStartedTimestamp = t } } return ds } func socketProtoToStruct(s *channelzpb.Socket) *dummySocket { ds := &dummySocket{} pdata := s.GetData() ds.streamsStarted = pdata.GetStreamsStarted() ds.streamsSucceeded = pdata.GetStreamsSucceeded() ds.streamsFailed = pdata.GetStreamsFailed() ds.messagesSent = pdata.GetMessagesSent() ds.messagesReceived = pdata.GetMessagesReceived() ds.keepAlivesSent = pdata.GetKeepAlivesSent() if t, err := ptypes.Timestamp(pdata.GetLastLocalStreamCreatedTimestamp()); err == nil { if !t.Equal(emptyTime) { ds.lastLocalStreamCreatedTimestamp = t } } if t, err := ptypes.Timestamp(pdata.GetLastRemoteStreamCreatedTimestamp()); err == nil { if !t.Equal(emptyTime) { ds.lastRemoteStreamCreatedTimestamp = t } } if t, err := ptypes.Timestamp(pdata.GetLastMessageSentTimestamp()); err == nil { if !t.Equal(emptyTime) { ds.lastMessageSentTimestamp = t } } if t, err := ptypes.Timestamp(pdata.GetLastMessageReceivedTimestamp()); err == nil { if !t.Equal(emptyTime) { ds.lastMessageReceivedTimestamp = t } } if v := pdata.GetLocalFlowControlWindow(); v != nil { ds.localFlowControlWindow = v.Value } if v := pdata.GetRemoteFlowControlWindow(); v != nil { ds.remoteFlowControlWindow = v.Value } if v := pdata.GetOption(); v != nil && protoToSocketOpt != nil { ds.socketOptions = protoToSocketOpt(v) } if v := s.GetSecurity(); v != nil { ds.security = protoToSecurity(v) } if local := s.GetLocal(); local != nil { ds.localAddr = protoToAddr(local) } if remote := s.GetRemote(); remote != nil { ds.remoteAddr = protoToAddr(remote) } ds.remoteName = s.GetRemoteName() return ds } func protoToSecurity(protoSecurity *channelzpb.Security) credentials.ChannelzSecurityValue { switch v := protoSecurity.Model.(type) { case *channelzpb.Security_Tls_: return &credentials.TLSChannelzSecurityValue{StandardName: v.Tls.GetStandardName(), 
LocalCertificate: v.Tls.GetLocalCertificate(), RemoteCertificate: v.Tls.GetRemoteCertificate()} case *channelzpb.Security_Other: sv := &credentials.OtherChannelzSecurityValue{Name: v.Other.GetName()} var x ptypes.DynamicAny if err := ptypes.UnmarshalAny(v.Other.GetValue(), &x); err == nil { sv.Value = x.Message } return sv } return nil } func protoToAddr(a *channelzpb.Address) net.Addr { switch v := a.Address.(type) { case *channelzpb.Address_TcpipAddress: if port := v.TcpipAddress.GetPort(); port != 0 { return &net.TCPAddr{IP: v.TcpipAddress.GetIpAddress(), Port: int(port)} } return &net.IPAddr{IP: v.TcpipAddress.GetIpAddress()} case *channelzpb.Address_UdsAddress_: return &net.UnixAddr{Name: v.UdsAddress.GetFilename(), Net: "unix"} case *channelzpb.Address_OtherAddress_: // TODO: } return nil } func convertSocketRefSliceToMap(sktRefs []*channelzpb.SocketRef) map[int64]string { m := make(map[int64]string) for _, sr := range sktRefs { m[sr.SocketId] = sr.Name } return m } type OtherSecurityValue struct { LocalCertificate []byte `protobuf:"bytes,1,opt,name=local_certificate,json=localCertificate,proto3" json:"local_certificate,omitempty"` RemoteCertificate []byte `protobuf:"bytes,2,opt,name=remote_certificate,json=remoteCertificate,proto3" json:"remote_certificate,omitempty"` } func (m *OtherSecurityValue) Reset() { *m = OtherSecurityValue{} } func (m *OtherSecurityValue) String() string { return proto.CompactTextString(m) } func (*OtherSecurityValue) ProtoMessage() {} func init() { // Ad-hoc registering the proto type here to facilitate UnmarshalAny of OtherSecurityValue. proto.RegisterType((*OtherSecurityValue)(nil), "grpc.credentials.OtherChannelzSecurityValue") } func TestGetTopChannels(t *testing.T) { tcs := []*dummyChannel{ { state: connectivity.Connecting, target: "test.channelz:1234", callsStarted: 6, callsSucceeded: 2, callsFailed: 3, lastCallStartedTimestamp: time.Now().UTC(), }, { state: connectivity.Connecting, target: "test.channelz:1234", callsStarted: 1, callsSucceeded: 2, callsFailed: 3, lastCallStartedTimestamp: time.Now().UTC(), }, { state: connectivity.Shutdown, target: "test.channelz:8888", callsStarted: 0, callsSucceeded: 0, callsFailed: 0, }, {}, } czCleanup := channelz.NewChannelzStorage() defer cleanupWrapper(czCleanup, t) for _, c := range tcs { id := channelz.RegisterChannel(c, 0, "") defer channelz.RemoveEntry(id) } s := newCZServer() resp, _ := s.GetTopChannels(context.Background(), &channelzpb.GetTopChannelsRequest{StartChannelId: 0}) if !resp.GetEnd() { t.Fatalf("resp.GetEnd() want true, got %v", resp.GetEnd()) } for i, c := range resp.GetChannel() { if !reflect.DeepEqual(channelProtoToStruct(c), tcs[i]) { t.Fatalf("dummyChannel: %d, want: %#v, got: %#v", i, tcs[i], channelProtoToStruct(c)) } } for i := 0; i < 50; i++ { id := channelz.RegisterChannel(tcs[0], 0, "") defer channelz.RemoveEntry(id) } resp, _ = s.GetTopChannels(context.Background(), &channelzpb.GetTopChannelsRequest{StartChannelId: 0}) if resp.GetEnd() { t.Fatalf("resp.GetEnd() want false, got %v", resp.GetEnd()) } } func TestGetServers(t *testing.T) { ss := []*dummyServer{ { callsStarted: 6, callsSucceeded: 2, callsFailed: 3, lastCallStartedTimestamp: time.Now().UTC(), }, { callsStarted: 1, callsSucceeded: 2, callsFailed: 3, lastCallStartedTimestamp: time.Now().UTC(), }, { callsStarted: 1, callsSucceeded: 0, callsFailed: 0, lastCallStartedTimestamp: time.Now().UTC(), }, } czCleanup := channelz.NewChannelzStorage() defer cleanupWrapper(czCleanup, t) for _, s := range ss { id := 
channelz.RegisterServer(s, "") defer channelz.RemoveEntry(id) } svr := newCZServer() resp, _ := svr.GetServers(context.Background(), &channelzpb.GetServersRequest{StartServerId: 0}) if !resp.GetEnd() { t.Fatalf("resp.GetEnd() want true, got %v", resp.GetEnd()) } for i, s := range resp.GetServer() { if !reflect.DeepEqual(serverProtoToStruct(s), ss[i]) { t.Fatalf("dummyServer: %d, want: %#v, got: %#v", i, ss[i], serverProtoToStruct(s)) } } for i := 0; i < 50; i++ { id := channelz.RegisterServer(ss[0], "") defer channelz.RemoveEntry(id) } resp, _ = svr.GetServers(context.Background(), &channelzpb.GetServersRequest{StartServerId: 0}) if resp.GetEnd() { t.Fatalf("resp.GetEnd() want false, got %v", resp.GetEnd()) } } func TestGetServerSockets(t *testing.T) { czCleanup := channelz.NewChannelzStorage() defer cleanupWrapper(czCleanup, t) svrID := channelz.RegisterServer(&dummyServer{}, "") defer channelz.RemoveEntry(svrID) refNames := []string{"listen socket 1", "normal socket 1", "normal socket 2"} ids := make([]int64, 3) ids[0] = channelz.RegisterListenSocket(&dummySocket{}, svrID, refNames[0]) ids[1] = channelz.RegisterNormalSocket(&dummySocket{}, svrID, refNames[1]) ids[2] = channelz.RegisterNormalSocket(&dummySocket{}, svrID, refNames[2]) for _, id := range ids { defer channelz.RemoveEntry(id) } svr := newCZServer() resp, _ := svr.GetServerSockets(context.Background(), &channelzpb.GetServerSocketsRequest{ServerId: svrID, StartSocketId: 0}) if !resp.GetEnd() { t.Fatalf("resp.GetEnd() want: true, got: %v", resp.GetEnd()) } // GetServerSockets only return normal sockets. want := map[int64]string{ ids[1]: refNames[1], ids[2]: refNames[2], } if !reflect.DeepEqual(convertSocketRefSliceToMap(resp.GetSocketRef()), want) { t.Fatalf("GetServerSockets want: %#v, got: %#v", want, resp.GetSocketRef()) } for i := 0; i < 50; i++ { id := channelz.RegisterNormalSocket(&dummySocket{}, svrID, "") defer channelz.RemoveEntry(id) } resp, _ = svr.GetServerSockets(context.Background(), &channelzpb.GetServerSocketsRequest{ServerId: svrID, StartSocketId: 0}) if resp.GetEnd() { t.Fatalf("resp.GetEnd() want false, got %v", resp.GetEnd()) } } // This test makes a GetServerSockets with a non-zero start ID, and expect only // sockets with ID >= the given start ID. func TestGetServerSocketsNonZeroStartID(t *testing.T) { czCleanup := channelz.NewChannelzStorage() defer cleanupWrapper(czCleanup, t) svrID := channelz.RegisterServer(&dummyServer{}, "") defer channelz.RemoveEntry(svrID) refNames := []string{"listen socket 1", "normal socket 1", "normal socket 2"} ids := make([]int64, 3) ids[0] = channelz.RegisterListenSocket(&dummySocket{}, svrID, refNames[0]) ids[1] = channelz.RegisterNormalSocket(&dummySocket{}, svrID, refNames[1]) ids[2] = channelz.RegisterNormalSocket(&dummySocket{}, svrID, refNames[2]) for _, id := range ids { defer channelz.RemoveEntry(id) } svr := newCZServer() // Make GetServerSockets with startID = ids[1]+1, so socket-1 won't be // included in the response. resp, _ := svr.GetServerSockets(context.Background(), &channelzpb.GetServerSocketsRequest{ServerId: svrID, StartSocketId: ids[1] + 1}) if !resp.GetEnd() { t.Fatalf("resp.GetEnd() want: true, got: %v", resp.GetEnd()) } // GetServerSockets only return normal socket-2, socket-1 should be // filtered by start ID. 
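// Note: channelz pagination here is keyed by socket ID: the response contains
// normal sockets whose ID is >= StartSocketId, and GetEnd() reports whether the
// server has no further sockets to return. Starting at ids[1]+1 therefore
// leaves only socket-2 in the expected map below.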
want := map[int64]string{ ids[2]: refNames[2], } if !reflect.DeepEqual(convertSocketRefSliceToMap(resp.GetSocketRef()), want) { t.Fatalf("GetServerSockets want: %#v, got: %#v", want, resp.GetSocketRef()) } } func TestGetChannel(t *testing.T) { czCleanup := channelz.NewChannelzStorage() defer cleanupWrapper(czCleanup, t) refNames := []string{"top channel 1", "nested channel 1", "sub channel 2", "nested channel 3"} ids := make([]int64, 4) ids[0] = channelz.RegisterChannel(&dummyChannel{}, 0, refNames[0]) channelz.AddTraceEvent(ids[0], &channelz.TraceEventDesc{ Desc: "Channel Created", Severity: channelz.CtINFO, }) ids[1] = channelz.RegisterChannel(&dummyChannel{}, ids[0], refNames[1]) channelz.AddTraceEvent(ids[1], &channelz.TraceEventDesc{ Desc: "Channel Created", Severity: channelz.CtINFO, Parent: &channelz.TraceEventDesc{ Desc: fmt.Sprintf("Nested Channel(id:%d) created", ids[1]), Severity: channelz.CtINFO, }, }) ids[2] = channelz.RegisterSubChannel(&dummyChannel{}, ids[0], refNames[2]) channelz.AddTraceEvent(ids[2], &channelz.TraceEventDesc{ Desc: "SubChannel Created", Severity: channelz.CtINFO, Parent: &channelz.TraceEventDesc{ Desc: fmt.Sprintf("SubChannel(id:%d) created", ids[2]), Severity: channelz.CtINFO, }, }) ids[3] = channelz.RegisterChannel(&dummyChannel{}, ids[1], refNames[3]) channelz.AddTraceEvent(ids[3], &channelz.TraceEventDesc{ Desc: "Channel Created", Severity: channelz.CtINFO, Parent: &channelz.TraceEventDesc{ Desc: fmt.Sprintf("Nested Channel(id:%d) created", ids[3]), Severity: channelz.CtINFO, }, }) channelz.AddTraceEvent(ids[0], &channelz.TraceEventDesc{ Desc: fmt.Sprintf("Channel Connectivity change to %v", connectivity.Ready), Severity: channelz.CtINFO, }) channelz.AddTraceEvent(ids[0], &channelz.TraceEventDesc{ Desc: "Resolver returns an empty address list", Severity: channelz.CtWarning, }) for _, id := range ids { defer channelz.RemoveEntry(id) } svr := newCZServer() resp, _ := svr.GetChannel(context.Background(), &channelzpb.GetChannelRequest{ChannelId: ids[0]}) metrics := resp.GetChannel() subChans := metrics.GetSubchannelRef() if len(subChans) != 1 || subChans[0].GetName() != refNames[2] || subChans[0].GetSubchannelId() != ids[2] { t.Fatalf("metrics.GetSubChannelRef() want %#v, got %#v", []*channelzpb.SubchannelRef{{SubchannelId: ids[2], Name: refNames[2]}}, subChans) } nestedChans := metrics.GetChannelRef() if len(nestedChans) != 1 || nestedChans[0].GetName() != refNames[1] || nestedChans[0].GetChannelId() != ids[1] { t.Fatalf("metrics.GetChannelRef() want %#v, got %#v", []*channelzpb.ChannelRef{{ChannelId: ids[1], Name: refNames[1]}}, nestedChans) } trace := metrics.GetData().GetTrace() want := []struct { desc string severity channelzpb.ChannelTraceEvent_Severity childID int64 childRef string }{ {desc: "Channel Created", severity: channelzpb.ChannelTraceEvent_CT_INFO}, {desc: fmt.Sprintf("Nested Channel(id:%d) created", ids[1]), severity: channelzpb.ChannelTraceEvent_CT_INFO, childID: ids[1], childRef: refNames[1]}, {desc: fmt.Sprintf("SubChannel(id:%d) created", ids[2]), severity: channelzpb.ChannelTraceEvent_CT_INFO, childID: ids[2], childRef: refNames[2]}, {desc: fmt.Sprintf("Channel Connectivity change to %v", connectivity.Ready), severity: channelzpb.ChannelTraceEvent_CT_INFO}, {desc: "Resolver returns an empty address list", severity: channelzpb.ChannelTraceEvent_CT_WARNING}, } for i, e := range trace.Events { if e.GetDescription() != want[i].desc { t.Fatalf("trace: GetDescription want %#v, got %#v", want[i].desc, e.GetDescription()) } if 
e.GetSeverity() != want[i].severity { t.Fatalf("trace: GetSeverity want %#v, got %#v", want[i].severity, e.GetSeverity()) } if want[i].childID == 0 && (e.GetChannelRef() != nil || e.GetSubchannelRef() != nil) { t.Fatalf("trace: GetChannelRef() should return nil, as there is no reference") } if e.GetChannelRef().GetChannelId() != want[i].childID || e.GetChannelRef().GetName() != want[i].childRef { if e.GetSubchannelRef().GetSubchannelId() != want[i].childID || e.GetSubchannelRef().GetName() != want[i].childRef { t.Fatalf("trace: GetChannelRef/GetSubchannelRef want (child ID: %d, child name: %q), got %#v and %#v", want[i].childID, want[i].childRef, e.GetChannelRef(), e.GetSubchannelRef()) } } } resp, _ = svr.GetChannel(context.Background(), &channelzpb.GetChannelRequest{ChannelId: ids[1]}) metrics = resp.GetChannel() nestedChans = metrics.GetChannelRef() if len(nestedChans) != 1 || nestedChans[0].GetName() != refNames[3] || nestedChans[0].GetChannelId() != ids[3] { t.Fatalf("metrics.GetChannelRef() want %#v, got %#v", []*channelzpb.ChannelRef{{ChannelId: ids[3], Name: refNames[3]}}, nestedChans) } } func TestGetSubChannel(t *testing.T) { var ( subchanCreated = "SubChannel Created" subchanConnectivityChange = fmt.Sprintf("Subchannel Connectivity change to %v", connectivity.Ready) subChanPickNewAddress = fmt.Sprintf("Subchannel picks a new address %q to connect", "0.0.0.0") ) czCleanup := channelz.NewChannelzStorage() defer cleanupWrapper(czCleanup, t) refNames := []string{"top channel 1", "sub channel 1", "socket 1", "socket 2"} ids := make([]int64, 4) ids[0] = channelz.RegisterChannel(&dummyChannel{}, 0, refNames[0]) channelz.AddTraceEvent(ids[0], &channelz.TraceEventDesc{ Desc: "Channel Created", Severity: channelz.CtINFO, }) ids[1] = channelz.RegisterSubChannel(&dummyChannel{}, ids[0], refNames[1]) channelz.AddTraceEvent(ids[1], &channelz.TraceEventDesc{ Desc: subchanCreated, Severity: channelz.CtINFO, Parent: &channelz.TraceEventDesc{ Desc: fmt.Sprintf("Nested Channel(id:%d) created", ids[0]), Severity: channelz.CtINFO, }, }) ids[2] = channelz.RegisterNormalSocket(&dummySocket{}, ids[1], refNames[2]) ids[3] = channelz.RegisterNormalSocket(&dummySocket{}, ids[1], refNames[3]) channelz.AddTraceEvent(ids[1], &channelz.TraceEventDesc{ Desc: subchanConnectivityChange, Severity: channelz.CtINFO, }) channelz.AddTraceEvent(ids[1], &channelz.TraceEventDesc{ Desc: subChanPickNewAddress, Severity: channelz.CtINFO, }) for _, id := range ids { defer channelz.RemoveEntry(id) } svr := newCZServer() resp, _ := svr.GetSubchannel(context.Background(), &channelzpb.GetSubchannelRequest{SubchannelId: ids[1]}) metrics := resp.GetSubchannel() want := map[int64]string{ ids[2]: refNames[2], ids[3]: refNames[3], } if !reflect.DeepEqual(convertSocketRefSliceToMap(metrics.GetSocketRef()), want) { t.Fatalf("metrics.GetSocketRef() want %#v: got: %#v", want, metrics.GetSocketRef()) } trace := metrics.GetData().GetTrace() wantTrace := []struct { desc string severity channelzpb.ChannelTraceEvent_Severity childID int64 childRef string }{ {desc: subchanCreated, severity: channelzpb.ChannelTraceEvent_CT_INFO}, {desc: subchanConnectivityChange, severity: channelzpb.ChannelTraceEvent_CT_INFO}, {desc: subChanPickNewAddress, severity: channelzpb.ChannelTraceEvent_CT_INFO}, } for i, e := range trace.Events { if e.GetDescription() != wantTrace[i].desc { t.Fatalf("trace: GetDescription want %#v, got %#v", wantTrace[i].desc, e.GetDescription()) } if e.GetSeverity() != wantTrace[i].severity { t.Fatalf("trace: GetSeverity want %#v, 
got %#v", wantTrace[i].severity, e.GetSeverity()) } if wantTrace[i].childID == 0 && (e.GetChannelRef() != nil || e.GetSubchannelRef() != nil) { t.Fatalf("trace: GetChannelRef() should return nil, as there is no reference") } if e.GetChannelRef().GetChannelId() != wantTrace[i].childID || e.GetChannelRef().GetName() != wantTrace[i].childRef { if e.GetSubchannelRef().GetSubchannelId() != wantTrace[i].childID || e.GetSubchannelRef().GetName() != wantTrace[i].childRef { t.Fatalf("trace: GetChannelRef/GetSubchannelRef want (child ID: %d, child name: %q), got %#v and %#v", wantTrace[i].childID, wantTrace[i].childRef, e.GetChannelRef(), e.GetSubchannelRef()) } } } } func TestGetSocket(t *testing.T) { czCleanup := channelz.NewChannelzStorage() defer cleanupWrapper(czCleanup, t) ss := []*dummySocket{ { streamsStarted: 10, streamsSucceeded: 2, streamsFailed: 3, messagesSent: 20, messagesReceived: 10, keepAlivesSent: 2, lastLocalStreamCreatedTimestamp: time.Now().UTC(), lastRemoteStreamCreatedTimestamp: time.Now().UTC(), lastMessageSentTimestamp: time.Now().UTC(), lastMessageReceivedTimestamp: time.Now().UTC(), localFlowControlWindow: 65536, remoteFlowControlWindow: 1024, localAddr: &net.TCPAddr{IP: net.ParseIP("1.0.0.1"), Port: 10001}, remoteAddr: &net.TCPAddr{IP: net.ParseIP("12.0.0.1"), Port: 10002}, remoteName: "remote.remote", }, { streamsStarted: 10, streamsSucceeded: 2, streamsFailed: 3, messagesSent: 20, messagesReceived: 10, keepAlivesSent: 2, lastRemoteStreamCreatedTimestamp: time.Now().UTC(), lastMessageSentTimestamp: time.Now().UTC(), lastMessageReceivedTimestamp: time.Now().UTC(), localFlowControlWindow: 65536, remoteFlowControlWindow: 1024, localAddr: &net.UnixAddr{Name: "file.path", Net: "unix"}, remoteAddr: &net.UnixAddr{Name: "another.path", Net: "unix"}, remoteName: "remote.remote", }, { streamsStarted: 5, streamsSucceeded: 2, streamsFailed: 3, messagesSent: 20, messagesReceived: 10, keepAlivesSent: 2, lastLocalStreamCreatedTimestamp: time.Now().UTC(), lastMessageSentTimestamp: time.Now().UTC(), lastMessageReceivedTimestamp: time.Now().UTC(), localFlowControlWindow: 65536, remoteFlowControlWindow: 10240, localAddr: &net.IPAddr{IP: net.ParseIP("1.0.0.1")}, remoteAddr: &net.IPAddr{IP: net.ParseIP("9.0.0.1")}, remoteName: "", }, { localAddr: &net.TCPAddr{IP: net.ParseIP("127.0.0.1"), Port: 10001}, }, { security: &credentials.TLSChannelzSecurityValue{ StandardName: "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", RemoteCertificate: []byte{48, 130, 2, 156, 48, 130, 2, 5, 160}, }, }, { security: &credentials.OtherChannelzSecurityValue{ Name: "XXXX", }, }, { security: &credentials.OtherChannelzSecurityValue{ Name: "YYYY", Value: &OtherSecurityValue{LocalCertificate: []byte{1, 2, 3}, RemoteCertificate: []byte{4, 5, 6}}, }, }, } svr := newCZServer() ids := make([]int64, len(ss)) svrID := channelz.RegisterServer(&dummyServer{}, "") defer channelz.RemoveEntry(svrID) for i, s := range ss { ids[i] = channelz.RegisterNormalSocket(s, svrID, strconv.Itoa(i)) defer channelz.RemoveEntry(ids[i]) } for i, s := range ss { resp, _ := svr.GetSocket(context.Background(), &channelzpb.GetSocketRequest{SocketId: ids[i]}) metrics := resp.GetSocket() if !reflect.DeepEqual(metrics.GetRef(), &channelzpb.SocketRef{SocketId: ids[i], Name: strconv.Itoa(i)}) || !reflect.DeepEqual(socketProtoToStruct(metrics), s) { t.Fatalf("resp.GetSocket() want: metrics.GetRef() = %#v and %#v, got: metrics.GetRef() = %#v and %#v", &channelzpb.SocketRef{SocketId: ids[i], Name: strconv.Itoa(i)}, s, metrics.GetRef(), 
socketProtoToStruct(metrics)) } } } grpc-go-1.22.1/channelz/service/util_sktopt_386_test.go000066400000000000000000000017311351635773100227610ustar00rootroot00000000000000// +build 386,linux,!appengine /* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package service import ( "golang.org/x/sys/unix" channelzpb "google.golang.org/grpc/channelz/grpc_channelz_v1" ) func protoToTime(protoTime *channelzpb.SocketOptionTimeout) *unix.Timeval { timeout := &unix.Timeval{} sec, usec := convertToDuration(protoTime.GetDuration()) timeout.Sec, timeout.Usec = int32(sec), int32(usec) return timeout } grpc-go-1.22.1/channelz/service/util_sktopt_amd64_test.go000066400000000000000000000016651351635773100233620ustar00rootroot00000000000000// +build amd64,linux,!appengine /* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package service import ( "golang.org/x/sys/unix" channelzpb "google.golang.org/grpc/channelz/grpc_channelz_v1" ) func protoToTime(protoTime *channelzpb.SocketOptionTimeout) *unix.Timeval { timeout := &unix.Timeval{} timeout.Sec, timeout.Usec = convertToDuration(protoTime.GetDuration()) return timeout } grpc-go-1.22.1/clientconn.go000066400000000000000000001270031351635773100156340ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "context" "errors" "fmt" "math" "net" "reflect" "strings" "sync" "sync/atomic" "time" "google.golang.org/grpc/balancer" _ "google.golang.org/grpc/balancer/roundrobin" // To register roundrobin. 
"google.golang.org/grpc/codes" "google.golang.org/grpc/connectivity" "google.golang.org/grpc/credentials" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/internal/backoff" "google.golang.org/grpc/internal/channelz" "google.golang.org/grpc/internal/envconfig" "google.golang.org/grpc/internal/grpcsync" "google.golang.org/grpc/internal/transport" "google.golang.org/grpc/keepalive" "google.golang.org/grpc/resolver" _ "google.golang.org/grpc/resolver/dns" // To register dns resolver. _ "google.golang.org/grpc/resolver/passthrough" // To register passthrough resolver. "google.golang.org/grpc/serviceconfig" "google.golang.org/grpc/status" ) const ( // minimum time to give a connection to complete minConnectTimeout = 20 * time.Second // must match grpclbName in grpclb/grpclb.go grpclbName = "grpclb" ) var ( // ErrClientConnClosing indicates that the operation is illegal because // the ClientConn is closing. // // Deprecated: this error should not be relied upon by users; use the status // code of Canceled instead. ErrClientConnClosing = status.Error(codes.Canceled, "grpc: the client connection is closing") // errConnDrain indicates that the connection starts to be drained and does not accept any new RPCs. errConnDrain = errors.New("grpc: the connection is drained") // errConnClosing indicates that the connection is closing. errConnClosing = errors.New("grpc: the connection is closing") // errBalancerClosed indicates that the balancer is closed. errBalancerClosed = errors.New("grpc: balancer is closed") // invalidDefaultServiceConfigErrPrefix is used to prefix the json parsing error for the default // service config. invalidDefaultServiceConfigErrPrefix = "grpc: the provided default service config is invalid" ) // The following errors are returned from Dial and DialContext var ( // errNoTransportSecurity indicates that there is no transport security // being set for ClientConn. Users should either set one or explicitly // call WithInsecure DialOption to disable security. errNoTransportSecurity = errors.New("grpc: no transport security set (use grpc.WithInsecure() explicitly or set credentials)") // errTransportCredsAndBundle indicates that creds bundle is used together // with other individual Transport Credentials. errTransportCredsAndBundle = errors.New("grpc: credentials.Bundle may not be used with individual TransportCredentials") // errTransportCredentialsMissing indicates that users want to transmit security // information (e.g., OAuth2 token) which requires secure connection on an insecure // connection. errTransportCredentialsMissing = errors.New("grpc: the credentials require transport level security (use grpc.WithTransportCredentials() to set)") // errCredentialsConflict indicates that grpc.WithTransportCredentials() // and grpc.WithInsecure() are both called for a connection. errCredentialsConflict = errors.New("grpc: transport credentials are set for an insecure connection (grpc.WithTransportCredentials() and grpc.WithInsecure() are both called)") ) const ( defaultClientMaxReceiveMessageSize = 1024 * 1024 * 4 defaultClientMaxSendMessageSize = math.MaxInt32 // http2IOBufSize specifies the buffer size for sending frames. defaultWriteBufSize = 32 * 1024 defaultReadBufSize = 32 * 1024 ) // Dial creates a client connection to the given target. func Dial(target string, opts ...DialOption) (*ClientConn, error) { return DialContext(context.Background(), target, opts...) } // DialContext creates a client connection to the given target. 
By default, it's // a non-blocking dial (the function won't wait for connections to be // established, and connecting happens in the background). To make it a blocking // dial, use WithBlock() dial option. // // In the non-blocking case, the ctx does not act against the connection. It // only controls the setup steps. // // In the blocking case, ctx can be used to cancel or expire the pending // connection. Once this function returns, the cancellation and expiration of // ctx will be noop. Users should call ClientConn.Close to terminate all the // pending operations after this function returns. // // The target name syntax is defined in // https://github.com/grpc/grpc/blob/master/doc/naming.md. // e.g. to use dns resolver, a "dns:///" prefix should be applied to the target. func DialContext(ctx context.Context, target string, opts ...DialOption) (conn *ClientConn, err error) { cc := &ClientConn{ target: target, csMgr: &connectivityStateManager{}, conns: make(map[*addrConn]struct{}), dopts: defaultDialOptions(), blockingpicker: newPickerWrapper(), czData: new(channelzData), firstResolveEvent: grpcsync.NewEvent(), } cc.retryThrottler.Store((*retryThrottler)(nil)) cc.ctx, cc.cancel = context.WithCancel(context.Background()) for _, opt := range opts { opt.apply(&cc.dopts) } chainUnaryClientInterceptors(cc) chainStreamClientInterceptors(cc) defer func() { if err != nil { cc.Close() } }() if channelz.IsOn() { if cc.dopts.channelzParentID != 0 { cc.channelzID = channelz.RegisterChannel(&channelzChannel{cc}, cc.dopts.channelzParentID, target) channelz.AddTraceEvent(cc.channelzID, &channelz.TraceEventDesc{ Desc: "Channel Created", Severity: channelz.CtINFO, Parent: &channelz.TraceEventDesc{ Desc: fmt.Sprintf("Nested Channel(id:%d) created", cc.channelzID), Severity: channelz.CtINFO, }, }) } else { cc.channelzID = channelz.RegisterChannel(&channelzChannel{cc}, 0, target) channelz.AddTraceEvent(cc.channelzID, &channelz.TraceEventDesc{ Desc: "Channel Created", Severity: channelz.CtINFO, }) } cc.csMgr.channelzID = cc.channelzID } if !cc.dopts.insecure { if cc.dopts.copts.TransportCredentials == nil && cc.dopts.copts.CredsBundle == nil { return nil, errNoTransportSecurity } if cc.dopts.copts.TransportCredentials != nil && cc.dopts.copts.CredsBundle != nil { return nil, errTransportCredsAndBundle } } else { if cc.dopts.copts.TransportCredentials != nil || cc.dopts.copts.CredsBundle != nil { return nil, errCredentialsConflict } for _, cd := range cc.dopts.copts.PerRPCCredentials { if cd.RequireTransportSecurity() { return nil, errTransportCredentialsMissing } } } if cc.dopts.defaultServiceConfigRawJSON != nil { sc, err := parseServiceConfig(*cc.dopts.defaultServiceConfigRawJSON) if err != nil { return nil, fmt.Errorf("%s: %v", invalidDefaultServiceConfigErrPrefix, err) } cc.dopts.defaultServiceConfig = sc } cc.mkp = cc.dopts.copts.KeepaliveParams if cc.dopts.copts.Dialer == nil { cc.dopts.copts.Dialer = newProxyDialer( func(ctx context.Context, addr string) (net.Conn, error) { network, addr := parseDialTarget(addr) return (&net.Dialer{}).DialContext(ctx, network, addr) }, ) } if cc.dopts.copts.UserAgent != "" { cc.dopts.copts.UserAgent += " " + grpcUA } else { cc.dopts.copts.UserAgent = grpcUA } if cc.dopts.timeout > 0 { var cancel context.CancelFunc ctx, cancel = context.WithTimeout(ctx, cc.dopts.timeout) defer cancel() } defer func() { select { case <-ctx.Done(): conn, err = nil, ctx.Err() default: } }() scSet := false if cc.dopts.scChan != nil { // Try to get an initial service config. 
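// The select below is a non-blocking receive: if a service config has already
// been pushed on scChan it is applied now and scSet is recorded; otherwise
// DialContext continues and performs a blocking wait for the initial config
// later, once the target has been parsed.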
select { case sc, ok := <-cc.dopts.scChan: if ok { cc.sc = &sc scSet = true } default: } } if cc.dopts.bs == nil { cc.dopts.bs = backoff.Exponential{ MaxDelay: DefaultBackoffConfig.MaxDelay, } } if cc.dopts.resolverBuilder == nil { // Only try to parse target when resolver builder is not already set. cc.parsedTarget = parseTarget(cc.target) grpclog.Infof("parsed scheme: %q", cc.parsedTarget.Scheme) cc.dopts.resolverBuilder = resolver.Get(cc.parsedTarget.Scheme) if cc.dopts.resolverBuilder == nil { // If resolver builder is still nil, the parsed target's scheme is // not registered. Fallback to default resolver and set Endpoint to // the original target. grpclog.Infof("scheme %q not registered, fallback to default scheme", cc.parsedTarget.Scheme) cc.parsedTarget = resolver.Target{ Scheme: resolver.GetDefaultScheme(), Endpoint: target, } cc.dopts.resolverBuilder = resolver.Get(cc.parsedTarget.Scheme) } } else { cc.parsedTarget = resolver.Target{Endpoint: target} } creds := cc.dopts.copts.TransportCredentials if creds != nil && creds.Info().ServerName != "" { cc.authority = creds.Info().ServerName } else if cc.dopts.insecure && cc.dopts.authority != "" { cc.authority = cc.dopts.authority } else { // Use endpoint from "scheme://authority/endpoint" as the default // authority for ClientConn. cc.authority = cc.parsedTarget.Endpoint } if cc.dopts.scChan != nil && !scSet { // Blocking wait for the initial service config. select { case sc, ok := <-cc.dopts.scChan: if ok { cc.sc = &sc } case <-ctx.Done(): return nil, ctx.Err() } } if cc.dopts.scChan != nil { go cc.scWatcher() } var credsClone credentials.TransportCredentials if creds := cc.dopts.copts.TransportCredentials; creds != nil { credsClone = creds.Clone() } cc.balancerBuildOpts = balancer.BuildOptions{ DialCreds: credsClone, CredsBundle: cc.dopts.copts.CredsBundle, Dialer: cc.dopts.copts.Dialer, ChannelzParentID: cc.channelzID, Target: cc.parsedTarget, } // Build the resolver. rWrapper, err := newCCResolverWrapper(cc) if err != nil { return nil, fmt.Errorf("failed to build resolver: %v", err) } cc.mu.Lock() cc.resolverWrapper = rWrapper cc.mu.Unlock() // A blocking dial blocks until the clientConn is ready. if cc.dopts.block { for { s := cc.GetState() if s == connectivity.Ready { break } else if cc.dopts.copts.FailOnNonTempDialError && s == connectivity.TransientFailure { if err = cc.blockingpicker.connectionError(); err != nil { terr, ok := err.(interface { Temporary() bool }) if ok && !terr.Temporary() { return nil, err } } } if !cc.WaitForStateChange(ctx, s) { // ctx got timeout or canceled. return nil, ctx.Err() } } } return cc, nil } // chainUnaryClientInterceptors chains all unary client interceptors into one. func chainUnaryClientInterceptors(cc *ClientConn) { interceptors := cc.dopts.chainUnaryInts // Prepend dopts.unaryInt to the chaining interceptors if it exists, since unaryInt will // be executed before any other chained interceptors. if cc.dopts.unaryInt != nil { interceptors = append([]UnaryClientInterceptor{cc.dopts.unaryInt}, interceptors...) } var chainedInt UnaryClientInterceptor if len(interceptors) == 0 { chainedInt = nil } else if len(interceptors) == 1 { chainedInt = interceptors[0] } else { chainedInt = func(ctx context.Context, method string, req, reply interface{}, cc *ClientConn, invoker UnaryInvoker, opts ...CallOption) error { return interceptors[0](ctx, method, req, reply, cc, getChainUnaryInvoker(interceptors, 0, invoker), opts...) 
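// Here interceptors[0] runs first; the invoker handed to it recursively
// dispatches through interceptors[1..n-1] before reaching the real
// UnaryInvoker, so chained interceptors execute in the order they were
// supplied.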
} } cc.dopts.unaryInt = chainedInt } // getChainUnaryInvoker recursively generate the chained unary invoker. func getChainUnaryInvoker(interceptors []UnaryClientInterceptor, curr int, finalInvoker UnaryInvoker) UnaryInvoker { if curr == len(interceptors)-1 { return finalInvoker } return func(ctx context.Context, method string, req, reply interface{}, cc *ClientConn, opts ...CallOption) error { return interceptors[curr+1](ctx, method, req, reply, cc, getChainUnaryInvoker(interceptors, curr+1, finalInvoker), opts...) } } // chainStreamClientInterceptors chains all stream client interceptors into one. func chainStreamClientInterceptors(cc *ClientConn) { interceptors := cc.dopts.chainStreamInts // Prepend dopts.streamInt to the chaining interceptors if it exists, since streamInt will // be executed before any other chained interceptors. if cc.dopts.streamInt != nil { interceptors = append([]StreamClientInterceptor{cc.dopts.streamInt}, interceptors...) } var chainedInt StreamClientInterceptor if len(interceptors) == 0 { chainedInt = nil } else if len(interceptors) == 1 { chainedInt = interceptors[0] } else { chainedInt = func(ctx context.Context, desc *StreamDesc, cc *ClientConn, method string, streamer Streamer, opts ...CallOption) (ClientStream, error) { return interceptors[0](ctx, desc, cc, method, getChainStreamer(interceptors, 0, streamer), opts...) } } cc.dopts.streamInt = chainedInt } // getChainStreamer recursively generate the chained client stream constructor. func getChainStreamer(interceptors []StreamClientInterceptor, curr int, finalStreamer Streamer) Streamer { if curr == len(interceptors)-1 { return finalStreamer } return func(ctx context.Context, desc *StreamDesc, cc *ClientConn, method string, opts ...CallOption) (ClientStream, error) { return interceptors[curr+1](ctx, desc, cc, method, getChainStreamer(interceptors, curr+1, finalStreamer), opts...) } } // connectivityStateManager keeps the connectivity.State of ClientConn. // This struct will eventually be exported so the balancers can access it. type connectivityStateManager struct { mu sync.Mutex state connectivity.State notifyChan chan struct{} channelzID int64 } // updateState updates the connectivity.State of ClientConn. // If there's a change it notifies goroutines waiting on state change to // happen. func (csm *connectivityStateManager) updateState(state connectivity.State) { csm.mu.Lock() defer csm.mu.Unlock() if csm.state == connectivity.Shutdown { return } if csm.state == state { return } csm.state = state if channelz.IsOn() { channelz.AddTraceEvent(csm.channelzID, &channelz.TraceEventDesc{ Desc: fmt.Sprintf("Channel Connectivity change to %v", state), Severity: channelz.CtINFO, }) } if csm.notifyChan != nil { // There are other goroutines waiting on this channel. close(csm.notifyChan) csm.notifyChan = nil } } func (csm *connectivityStateManager) getState() connectivity.State { csm.mu.Lock() defer csm.mu.Unlock() return csm.state } func (csm *connectivityStateManager) getNotifyChan() <-chan struct{} { csm.mu.Lock() defer csm.mu.Unlock() if csm.notifyChan == nil { csm.notifyChan = make(chan struct{}) } return csm.notifyChan } // ClientConn represents a client connection to an RPC server. 
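//
// A ClientConn is safe for concurrent use by multiple goroutines and is
// typically created once per target and shared across RPCs. A minimal,
// illustrative usage sketch (the address and the insecure option are
// placeholders, not taken from this file):
//
//	conn, err := grpc.Dial("localhost:50051", grpc.WithInsecure())
//	if err != nil {
//		// handle the dial error
//	}
//	defer conn.Close()
//	// Pass conn to a generated client constructor to issue RPCs.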
type ClientConn struct { ctx context.Context cancel context.CancelFunc target string parsedTarget resolver.Target authority string dopts dialOptions csMgr *connectivityStateManager balancerBuildOpts balancer.BuildOptions blockingpicker *pickerWrapper mu sync.RWMutex resolverWrapper *ccResolverWrapper sc *ServiceConfig conns map[*addrConn]struct{} // Keepalive parameter can be updated if a GoAway is received. mkp keepalive.ClientParameters curBalancerName string balancerWrapper *ccBalancerWrapper retryThrottler atomic.Value firstResolveEvent *grpcsync.Event channelzID int64 // channelz unique identification number czData *channelzData } // WaitForStateChange waits until the connectivity.State of ClientConn changes from sourceState or // ctx expires. A true value is returned in former case and false in latter. // This is an EXPERIMENTAL API. func (cc *ClientConn) WaitForStateChange(ctx context.Context, sourceState connectivity.State) bool { ch := cc.csMgr.getNotifyChan() if cc.csMgr.getState() != sourceState { return true } select { case <-ctx.Done(): return false case <-ch: return true } } // GetState returns the connectivity.State of ClientConn. // This is an EXPERIMENTAL API. func (cc *ClientConn) GetState() connectivity.State { return cc.csMgr.getState() } func (cc *ClientConn) scWatcher() { for { select { case sc, ok := <-cc.dopts.scChan: if !ok { return } cc.mu.Lock() // TODO: load balance policy runtime change is ignored. // We may revisit this decision in the future. cc.sc = &sc cc.mu.Unlock() case <-cc.ctx.Done(): return } } } // waitForResolvedAddrs blocks until the resolver has provided addresses or the // context expires. Returns nil unless the context expires first; otherwise // returns a status error based on the context. func (cc *ClientConn) waitForResolvedAddrs(ctx context.Context) error { // This is on the RPC path, so we use a fast path to avoid the // more-expensive "select" below after the resolver has returned once. if cc.firstResolveEvent.HasFired() { return nil } select { case <-cc.firstResolveEvent.Done(): return nil case <-ctx.Done(): return status.FromContextError(ctx.Err()).Err() case <-cc.ctx.Done(): return ErrClientConnClosing } } func (cc *ClientConn) updateResolverState(s resolver.State) error { cc.mu.Lock() defer cc.mu.Unlock() // Check if the ClientConn is already closed. Some fields (e.g. // balancerWrapper) are set to nil when closing the ClientConn, and could // cause nil pointer panic if we don't have this check. if cc.conns == nil { return nil } if cc.dopts.disableServiceConfig || s.ServiceConfig == nil { if cc.dopts.defaultServiceConfig != nil && cc.sc == nil { cc.applyServiceConfig(cc.dopts.defaultServiceConfig) } } else if sc, ok := s.ServiceConfig.(*ServiceConfig); ok { cc.applyServiceConfig(sc) } var balCfg serviceconfig.LoadBalancingConfig if cc.dopts.balancerBuilder == nil { // Only look at balancer types and switch balancer if balancer dial // option is not set. var newBalancerName string if cc.sc != nil && cc.sc.lbConfig != nil { newBalancerName = cc.sc.lbConfig.name balCfg = cc.sc.lbConfig.cfg } else { var isGRPCLB bool for _, a := range s.Addresses { if a.Type == resolver.GRPCLB { isGRPCLB = true break } } if isGRPCLB { newBalancerName = grpclbName } else if cc.sc != nil && cc.sc.LB != nil { newBalancerName = *cc.sc.LB } else { newBalancerName = PickFirstBalancerName } } cc.switchBalancer(newBalancerName) } else if cc.balancerWrapper == nil { // Balancer dial option was set, and this is the first time handling // resolved addresses. 
Build a balancer with dopts.balancerBuilder. cc.curBalancerName = cc.dopts.balancerBuilder.Name() cc.balancerWrapper = newCCBalancerWrapper(cc, cc.dopts.balancerBuilder, cc.balancerBuildOpts) } cc.balancerWrapper.updateClientConnState(&balancer.ClientConnState{ResolverState: s, BalancerConfig: balCfg}) return nil } // switchBalancer starts the switching from current balancer to the balancer // with the given name. // // It will NOT send the current address list to the new balancer. If needed, // caller of this function should send address list to the new balancer after // this function returns. // // Caller must hold cc.mu. func (cc *ClientConn) switchBalancer(name string) { if strings.EqualFold(cc.curBalancerName, name) { return } grpclog.Infof("ClientConn switching balancer to %q", name) if cc.dopts.balancerBuilder != nil { grpclog.Infoln("ignoring balancer switching: Balancer DialOption used instead") return } if cc.balancerWrapper != nil { cc.balancerWrapper.close() } builder := balancer.Get(name) if channelz.IsOn() { if builder == nil { channelz.AddTraceEvent(cc.channelzID, &channelz.TraceEventDesc{ Desc: fmt.Sprintf("Channel switches to new LB policy %q due to fallback from invalid balancer name", PickFirstBalancerName), Severity: channelz.CtWarning, }) } else { channelz.AddTraceEvent(cc.channelzID, &channelz.TraceEventDesc{ Desc: fmt.Sprintf("Channel switches to new LB policy %q", name), Severity: channelz.CtINFO, }) } } if builder == nil { grpclog.Infof("failed to get balancer builder for: %v, using pick_first instead", name) builder = newPickfirstBuilder() } cc.curBalancerName = builder.Name() cc.balancerWrapper = newCCBalancerWrapper(cc, builder, cc.balancerBuildOpts) } func (cc *ClientConn) handleSubConnStateChange(sc balancer.SubConn, s connectivity.State) { cc.mu.Lock() if cc.conns == nil { cc.mu.Unlock() return } // TODO(bar switching) send updates to all balancer wrappers when balancer // gracefully switching is supported. cc.balancerWrapper.handleSubConnStateChange(sc, s) cc.mu.Unlock() } // newAddrConn creates an addrConn for addrs and adds it to cc.conns. // // Caller needs to make sure len(addrs) > 0. func (cc *ClientConn) newAddrConn(addrs []resolver.Address, opts balancer.NewSubConnOptions) (*addrConn, error) { ac := &addrConn{ cc: cc, addrs: addrs, scopts: opts, dopts: cc.dopts, czData: new(channelzData), resetBackoff: make(chan struct{}), } ac.ctx, ac.cancel = context.WithCancel(cc.ctx) // Track ac in cc. This needs to be done before any getTransport(...) is called. cc.mu.Lock() if cc.conns == nil { cc.mu.Unlock() return nil, ErrClientConnClosing } if channelz.IsOn() { ac.channelzID = channelz.RegisterSubChannel(ac, cc.channelzID, "") channelz.AddTraceEvent(ac.channelzID, &channelz.TraceEventDesc{ Desc: "Subchannel Created", Severity: channelz.CtINFO, Parent: &channelz.TraceEventDesc{ Desc: fmt.Sprintf("Subchannel(id:%d) created", ac.channelzID), Severity: channelz.CtINFO, }, }) } cc.conns[ac] = struct{}{} cc.mu.Unlock() return ac, nil } // removeAddrConn removes the addrConn in the subConn from clientConn. // It also tears down the ac with the given error. 
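// The caller must not hold cc.mu: this function takes cc.mu itself to delete
// the entry, and the subsequent tearDown acquires ac.mu.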
func (cc *ClientConn) removeAddrConn(ac *addrConn, err error) { cc.mu.Lock() if cc.conns == nil { cc.mu.Unlock() return } delete(cc.conns, ac) cc.mu.Unlock() ac.tearDown(err) } func (cc *ClientConn) channelzMetric() *channelz.ChannelInternalMetric { return &channelz.ChannelInternalMetric{ State: cc.GetState(), Target: cc.target, CallsStarted: atomic.LoadInt64(&cc.czData.callsStarted), CallsSucceeded: atomic.LoadInt64(&cc.czData.callsSucceeded), CallsFailed: atomic.LoadInt64(&cc.czData.callsFailed), LastCallStartedTimestamp: time.Unix(0, atomic.LoadInt64(&cc.czData.lastCallStartedTime)), } } // Target returns the target string of the ClientConn. // This is an EXPERIMENTAL API. func (cc *ClientConn) Target() string { return cc.target } func (cc *ClientConn) incrCallsStarted() { atomic.AddInt64(&cc.czData.callsStarted, 1) atomic.StoreInt64(&cc.czData.lastCallStartedTime, time.Now().UnixNano()) } func (cc *ClientConn) incrCallsSucceeded() { atomic.AddInt64(&cc.czData.callsSucceeded, 1) } func (cc *ClientConn) incrCallsFailed() { atomic.AddInt64(&cc.czData.callsFailed, 1) } // connect starts creating a transport. // It does nothing if the ac is not IDLE. // TODO(bar) Move this to the addrConn section. func (ac *addrConn) connect() error { ac.mu.Lock() if ac.state == connectivity.Shutdown { ac.mu.Unlock() return errConnClosing } if ac.state != connectivity.Idle { ac.mu.Unlock() return nil } // Update connectivity state within the lock to prevent subsequent or // concurrent calls from resetting the transport more than once. ac.updateConnectivityState(connectivity.Connecting) ac.mu.Unlock() // Start a goroutine connecting to the server asynchronously. go ac.resetTransport() return nil } // tryUpdateAddrs tries to update ac.addrs with the new addresses list. // // If ac is Connecting, it returns false. The caller should tear down the ac and // create a new one. Note that the backoff will be reset when this happens. // // If ac is TransientFailure, it updates ac.addrs and returns true. The updated // addresses will be picked up by retry in the next iteration after backoff. // // If ac is Shutdown or Idle, it updates ac.addrs and returns true. // // If ac is Ready, it checks whether current connected address of ac is in the // new addrs list. // - If true, it updates ac.addrs and returns true. The ac will keep using // the existing connection. // - If false, it does nothing and returns false. func (ac *addrConn) tryUpdateAddrs(addrs []resolver.Address) bool { ac.mu.Lock() defer ac.mu.Unlock() grpclog.Infof("addrConn: tryUpdateAddrs curAddr: %v, addrs: %v", ac.curAddr, addrs) if ac.state == connectivity.Shutdown || ac.state == connectivity.TransientFailure || ac.state == connectivity.Idle { ac.addrs = addrs return true } if ac.state == connectivity.Connecting { return false } // ac.state is Ready, try to find the connected address. var curAddrFound bool for _, a := range addrs { if reflect.DeepEqual(ac.curAddr, a) { curAddrFound = true break } } grpclog.Infof("addrConn: tryUpdateAddrs curAddrFound: %v", curAddrFound) if curAddrFound { ac.addrs = addrs } return curAddrFound } // GetMethodConfig gets the method config of the input method. // If there's an exact match for input method (i.e. /service/method), we return // the corresponding MethodConfig. // If there isn't an exact match for the input method, we look for the default config // under the service (i.e /service/). If there is a default MethodConfig for // the service, we return it. // Otherwise, we return an empty MethodConfig. 
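//
// For example (illustrative names only), a lookup for
// "/grpc.testing.TestService/UnaryCall" first tries that exact key and, if it
// is absent, falls back to the service-wide default stored under
// "/grpc.testing.TestService/".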
func (cc *ClientConn) GetMethodConfig(method string) MethodConfig { // TODO: Avoid the locking here. cc.mu.RLock() defer cc.mu.RUnlock() if cc.sc == nil { return MethodConfig{} } m, ok := cc.sc.Methods[method] if !ok { i := strings.LastIndex(method, "/") m = cc.sc.Methods[method[:i+1]] } return m } func (cc *ClientConn) healthCheckConfig() *healthCheckConfig { cc.mu.RLock() defer cc.mu.RUnlock() if cc.sc == nil { return nil } return cc.sc.healthCheckConfig } func (cc *ClientConn) getTransport(ctx context.Context, failfast bool, method string) (transport.ClientTransport, func(balancer.DoneInfo), error) { t, done, err := cc.blockingpicker.pick(ctx, failfast, balancer.PickOptions{ FullMethodName: method, }) if err != nil { return nil, nil, toRPCErr(err) } return t, done, nil } func (cc *ClientConn) applyServiceConfig(sc *ServiceConfig) error { if sc == nil { // should never reach here. return fmt.Errorf("got nil pointer for service config") } cc.sc = sc if cc.sc.retryThrottling != nil { newThrottler := &retryThrottler{ tokens: cc.sc.retryThrottling.MaxTokens, max: cc.sc.retryThrottling.MaxTokens, thresh: cc.sc.retryThrottling.MaxTokens / 2, ratio: cc.sc.retryThrottling.TokenRatio, } cc.retryThrottler.Store(newThrottler) } else { cc.retryThrottler.Store((*retryThrottler)(nil)) } return nil } func (cc *ClientConn) resolveNow(o resolver.ResolveNowOption) { cc.mu.RLock() r := cc.resolverWrapper cc.mu.RUnlock() if r == nil { return } go r.resolveNow(o) } // ResetConnectBackoff wakes up all subchannels in transient failure and causes // them to attempt another connection immediately. It also resets the backoff // times used for subsequent attempts regardless of the current state. // // In general, this function should not be used. Typical service or network // outages result in a reasonable client reconnection strategy by default. // However, if a previously unavailable network becomes available, this may be // used to trigger an immediate reconnect. // // This API is EXPERIMENTAL. func (cc *ClientConn) ResetConnectBackoff() { cc.mu.Lock() defer cc.mu.Unlock() for ac := range cc.conns { ac.resetConnectBackoff() } } // Close tears down the ClientConn and all underlying connections. func (cc *ClientConn) Close() error { defer cc.cancel() cc.mu.Lock() if cc.conns == nil { cc.mu.Unlock() return ErrClientConnClosing } conns := cc.conns cc.conns = nil cc.csMgr.updateState(connectivity.Shutdown) rWrapper := cc.resolverWrapper cc.resolverWrapper = nil bWrapper := cc.balancerWrapper cc.balancerWrapper = nil cc.mu.Unlock() cc.blockingpicker.close() if rWrapper != nil { rWrapper.close() } if bWrapper != nil { bWrapper.close() } for ac := range conns { ac.tearDown(ErrClientConnClosing) } if channelz.IsOn() { ted := &channelz.TraceEventDesc{ Desc: "Channel Deleted", Severity: channelz.CtINFO, } if cc.dopts.channelzParentID != 0 { ted.Parent = &channelz.TraceEventDesc{ Desc: fmt.Sprintf("Nested channel(id:%d) deleted", cc.channelzID), Severity: channelz.CtINFO, } } channelz.AddTraceEvent(cc.channelzID, ted) // TraceEvent needs to be called before RemoveEntry, as TraceEvent may add trace reference to // the entity being deleted, and thus prevent it from being deleted right away. channelz.RemoveEntry(cc.channelzID) } return nil } // addrConn is a network connection to a given address. 
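// An addrConn owns at most one transport at a time: resetTransport walks the
// resolver-provided address list, and a successful connection is used until
// the transport closes or receives a GoAway, after which the cycle restarts
// with backoff.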
type addrConn struct { ctx context.Context cancel context.CancelFunc cc *ClientConn dopts dialOptions acbw balancer.SubConn scopts balancer.NewSubConnOptions // transport is set when there's a viable transport (note: ac state may not be READY as LB channel // health checking may require server to report healthy to set ac to READY), and is reset // to nil when the current transport should no longer be used to create a stream (e.g. after GoAway // is received, transport is closed, ac has been torn down). transport transport.ClientTransport // The current transport. mu sync.Mutex curAddr resolver.Address // The current address. addrs []resolver.Address // All addresses that the resolver resolved to. // Use updateConnectivityState for updating addrConn's connectivity state. state connectivity.State backoffIdx int // Needs to be stateful for resetConnectBackoff. resetBackoff chan struct{} channelzID int64 // channelz unique identification number. czData *channelzData } // Note: this requires a lock on ac.mu. func (ac *addrConn) updateConnectivityState(s connectivity.State) { if ac.state == s { return } updateMsg := fmt.Sprintf("Subchannel Connectivity change to %v", s) ac.state = s if channelz.IsOn() { channelz.AddTraceEvent(ac.channelzID, &channelz.TraceEventDesc{ Desc: updateMsg, Severity: channelz.CtINFO, }) } ac.cc.handleSubConnStateChange(ac.acbw, s) } // adjustParams updates parameters used to create transports upon // receiving a GoAway. func (ac *addrConn) adjustParams(r transport.GoAwayReason) { switch r { case transport.GoAwayTooManyPings: v := 2 * ac.dopts.copts.KeepaliveParams.Time ac.cc.mu.Lock() if v > ac.cc.mkp.Time { ac.cc.mkp.Time = v } ac.cc.mu.Unlock() } } func (ac *addrConn) resetTransport() { for i := 0; ; i++ { if i > 0 { ac.cc.resolveNow(resolver.ResolveNowOption{}) } ac.mu.Lock() if ac.state == connectivity.Shutdown { ac.mu.Unlock() return } addrs := ac.addrs backoffFor := ac.dopts.bs.Backoff(ac.backoffIdx) // This will be the duration that dial gets to finish. dialDuration := minConnectTimeout if ac.dopts.minConnectTimeout != nil { dialDuration = ac.dopts.minConnectTimeout() } if dialDuration < backoffFor { // Give dial more time as we keep failing to connect. dialDuration = backoffFor } // We can potentially spend all the time trying the first address, and // if the server accepts the connection and then hangs, the following // addresses will never be tried. // // The spec doesn't mention what should be done for multiple addresses. // https://github.com/grpc/grpc/blob/master/doc/connection-backoff.md#proposed-backoff-algorithm connectDeadline := time.Now().Add(dialDuration) ac.updateConnectivityState(connectivity.Connecting) ac.transport = nil ac.mu.Unlock() newTr, addr, reconnect, err := ac.tryAllAddrs(addrs, connectDeadline) if err != nil { // After exhausting all addresses, the addrConn enters // TRANSIENT_FAILURE. ac.mu.Lock() if ac.state == connectivity.Shutdown { ac.mu.Unlock() return } ac.updateConnectivityState(connectivity.TransientFailure) // Backoff. b := ac.resetBackoff ac.mu.Unlock() timer := time.NewTimer(backoffFor) select { case <-timer.C: ac.mu.Lock() ac.backoffIdx++ ac.mu.Unlock() case <-b: timer.Stop() case <-ac.ctx.Done(): timer.Stop() return } continue } ac.mu.Lock() if ac.state == connectivity.Shutdown { newTr.Close() ac.mu.Unlock() return } ac.curAddr = addr ac.transport = newTr ac.backoffIdx = 0 hctx, hcancel := context.WithCancel(ac.ctx) ac.startHealthCheck(hctx) ac.mu.Unlock() // Block until the created transport is down. 
And when this happens, // we restart from the top of the addr list. <-reconnect.Done() hcancel() // Need to reconnect after a READY, the addrConn enters // TRANSIENT_FAILURE. // // This will set addrConn to TRANSIENT_FAILURE for a very short period // of time, and turns CONNECTING. It seems reasonable to skip this, but // READY-CONNECTING is not a valid transition. ac.mu.Lock() if ac.state == connectivity.Shutdown { ac.mu.Unlock() return } ac.updateConnectivityState(connectivity.TransientFailure) ac.mu.Unlock() } } // tryAllAddrs tries to creates a connection to the addresses, and stop when at the // first successful one. It returns the transport, the address and a Event in // the successful case. The Event fires when the returned transport disconnects. func (ac *addrConn) tryAllAddrs(addrs []resolver.Address, connectDeadline time.Time) (transport.ClientTransport, resolver.Address, *grpcsync.Event, error) { for _, addr := range addrs { ac.mu.Lock() if ac.state == connectivity.Shutdown { ac.mu.Unlock() return nil, resolver.Address{}, nil, errConnClosing } ac.cc.mu.RLock() ac.dopts.copts.KeepaliveParams = ac.cc.mkp ac.cc.mu.RUnlock() copts := ac.dopts.copts if ac.scopts.CredsBundle != nil { copts.CredsBundle = ac.scopts.CredsBundle } ac.mu.Unlock() if channelz.IsOn() { channelz.AddTraceEvent(ac.channelzID, &channelz.TraceEventDesc{ Desc: fmt.Sprintf("Subchannel picks a new address %q to connect", addr.Addr), Severity: channelz.CtINFO, }) } newTr, reconnect, err := ac.createTransport(addr, copts, connectDeadline) if err == nil { return newTr, addr, reconnect, nil } ac.cc.blockingpicker.updateConnectionError(err) } // Couldn't connect to any address. return nil, resolver.Address{}, nil, fmt.Errorf("couldn't connect to any address") } // createTransport creates a connection to addr. It returns the transport and a // Event in the successful case. The Event fires when the returned transport // disconnects. func (ac *addrConn) createTransport(addr resolver.Address, copts transport.ConnectOptions, connectDeadline time.Time) (transport.ClientTransport, *grpcsync.Event, error) { prefaceReceived := make(chan struct{}) onCloseCalled := make(chan struct{}) reconnect := grpcsync.NewEvent() target := transport.TargetInfo{ Addr: addr.Addr, Metadata: addr.Metadata, Authority: ac.cc.authority, } onGoAway := func(r transport.GoAwayReason) { ac.mu.Lock() ac.adjustParams(r) ac.mu.Unlock() reconnect.Fire() } onClose := func() { close(onCloseCalled) reconnect.Fire() } onPrefaceReceipt := func() { close(prefaceReceived) } connectCtx, cancel := context.WithDeadline(ac.ctx, connectDeadline) defer cancel() if channelz.IsOn() { copts.ChannelzParentID = ac.channelzID } newTr, err := transport.NewClientTransport(connectCtx, ac.cc.ctx, target, copts, onPrefaceReceipt, onGoAway, onClose) if err != nil { // newTr is either nil, or closed. grpclog.Warningf("grpc: addrConn.createTransport failed to connect to %v. Err :%v. Reconnecting...", addr, err) return nil, nil, err } if ac.dopts.reqHandshake == envconfig.RequireHandshakeOn { select { case <-time.After(connectDeadline.Sub(time.Now())): // We didn't get the preface in time. newTr.Close() grpclog.Warningf("grpc: addrConn.createTransport failed to connect to %v: didn't receive server preface in time. Reconnecting...", addr) return nil, nil, errors.New("timed out waiting for server handshake") case <-prefaceReceived: // We got the preface - huzzah! things are good. case <-onCloseCalled: // The transport has already closed - noop. 
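// tryAllAddrs records the error returned here and moves on to the next
// resolved address, so nothing further needs to happen in this case.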
return nil, nil, errors.New("connection closed") // TODO(deklerk) this should bail on ac.ctx.Done(). Add a test and fix. } } return newTr, reconnect, nil } // startHealthCheck starts the health checking stream (RPC) to watch the health // stats of this connection if health checking is requested and configured. // // LB channel health checking is enabled when all requirements below are met: // 1. it is not disabled by the user with the WithDisableHealthCheck DialOption // 2. internal.HealthCheckFunc is set by importing the grpc/healthcheck package // 3. a service config with non-empty healthCheckConfig field is provided // 4. the load balancer requests it // // It sets addrConn to READY if the health checking stream is not started. // // Caller must hold ac.mu. func (ac *addrConn) startHealthCheck(ctx context.Context) { var healthcheckManagingState bool defer func() { if !healthcheckManagingState { ac.updateConnectivityState(connectivity.Ready) } }() if ac.cc.dopts.disableHealthCheck { return } healthCheckConfig := ac.cc.healthCheckConfig() if healthCheckConfig == nil { return } if !ac.scopts.HealthCheckEnabled { return } healthCheckFunc := ac.cc.dopts.healthCheckFunc if healthCheckFunc == nil { // The health package is not imported to set health check function. // // TODO: add a link to the health check doc in the error message. grpclog.Error("Health check is requested but health check function is not set.") return } healthcheckManagingState = true // Set up the health check helper functions. currentTr := ac.transport newStream := func(method string) (interface{}, error) { ac.mu.Lock() if ac.transport != currentTr { ac.mu.Unlock() return nil, status.Error(codes.Canceled, "the provided transport is no longer valid to use") } ac.mu.Unlock() return newNonRetryClientStream(ctx, &StreamDesc{ServerStreams: true}, method, currentTr, ac) } setConnectivityState := func(s connectivity.State) { ac.mu.Lock() defer ac.mu.Unlock() if ac.transport != currentTr { return } ac.updateConnectivityState(s) } // Start the health checking stream. go func() { err := ac.cc.dopts.healthCheckFunc(ctx, newStream, setConnectivityState, healthCheckConfig.ServiceName) if err != nil { if status.Code(err) == codes.Unimplemented { if channelz.IsOn() { channelz.AddTraceEvent(ac.channelzID, &channelz.TraceEventDesc{ Desc: "Subchannel health check is unimplemented at server side, thus health check is disabled", Severity: channelz.CtError, }) } grpclog.Error("Subchannel health check is unimplemented at server side, thus health check is disabled") } else { grpclog.Errorf("HealthCheckFunc exits with unexpected error %v", err) } } }() } func (ac *addrConn) resetConnectBackoff() { ac.mu.Lock() close(ac.resetBackoff) ac.backoffIdx = 0 ac.resetBackoff = make(chan struct{}) ac.mu.Unlock() } // getReadyTransport returns the transport if ac's state is READY. // Otherwise it returns nil, false. // If ac's state is IDLE, it will trigger ac to connect. func (ac *addrConn) getReadyTransport() (transport.ClientTransport, bool) { ac.mu.Lock() if ac.state == connectivity.Ready && ac.transport != nil { t := ac.transport ac.mu.Unlock() return t, true } var idle bool if ac.state == connectivity.Idle { idle = true } ac.mu.Unlock() // Trigger idle ac to connect. if idle { ac.connect() } return nil, false } // tearDown starts to tear down the addrConn. // TODO(zhaoq): Make this synchronous to avoid unbounded memory consumption in // some edge cases (e.g., the caller opens and closes many addrConn's in a // tight loop. 
// tearDown doesn't remove ac from ac.cc.conns. func (ac *addrConn) tearDown(err error) { ac.mu.Lock() if ac.state == connectivity.Shutdown { ac.mu.Unlock() return } curTr := ac.transport ac.transport = nil // We have to set the state to Shutdown before anything else to prevent races // between setting the state and logic that waits on context cancelation / etc. ac.updateConnectivityState(connectivity.Shutdown) ac.cancel() ac.curAddr = resolver.Address{} if err == errConnDrain && curTr != nil { // GracefulClose(...) may be executed multiple times when // i) receiving multiple GoAway frames from the server; or // ii) there are concurrent name resolver/Balancer triggered // address removal and GoAway. // We have to unlock and re-lock here because GracefulClose => Close => onClose, which requires locking ac.mu. ac.mu.Unlock() curTr.GracefulClose() ac.mu.Lock() } if channelz.IsOn() { channelz.AddTraceEvent(ac.channelzID, &channelz.TraceEventDesc{ Desc: "Subchannel Deleted", Severity: channelz.CtINFO, Parent: &channelz.TraceEventDesc{ Desc: fmt.Sprintf("Subchanel(id:%d) deleted", ac.channelzID), Severity: channelz.CtINFO, }, }) // TraceEvent needs to be called before RemoveEntry, as TraceEvent may add trace reference to // the entity beng deleted, and thus prevent it from being deleted right away. channelz.RemoveEntry(ac.channelzID) } ac.mu.Unlock() } func (ac *addrConn) getState() connectivity.State { ac.mu.Lock() defer ac.mu.Unlock() return ac.state } func (ac *addrConn) ChannelzMetric() *channelz.ChannelInternalMetric { ac.mu.Lock() addr := ac.curAddr.Addr ac.mu.Unlock() return &channelz.ChannelInternalMetric{ State: ac.getState(), Target: addr, CallsStarted: atomic.LoadInt64(&ac.czData.callsStarted), CallsSucceeded: atomic.LoadInt64(&ac.czData.callsSucceeded), CallsFailed: atomic.LoadInt64(&ac.czData.callsFailed), LastCallStartedTimestamp: time.Unix(0, atomic.LoadInt64(&ac.czData.lastCallStartedTime)), } } func (ac *addrConn) incrCallsStarted() { atomic.AddInt64(&ac.czData.callsStarted, 1) atomic.StoreInt64(&ac.czData.lastCallStartedTime, time.Now().UnixNano()) } func (ac *addrConn) incrCallsSucceeded() { atomic.AddInt64(&ac.czData.callsSucceeded, 1) } func (ac *addrConn) incrCallsFailed() { atomic.AddInt64(&ac.czData.callsFailed, 1) } type retryThrottler struct { max float64 thresh float64 ratio float64 mu sync.Mutex tokens float64 // TODO(dfawley): replace with atomic and remove lock. } // throttle subtracts a retry token from the pool and returns whether a retry // should be throttled (disallowed) based upon the retry throttling policy in // the service config. func (rt *retryThrottler) throttle() bool { if rt == nil { return false } rt.mu.Lock() defer rt.mu.Unlock() rt.tokens-- if rt.tokens < 0 { rt.tokens = 0 } return rt.tokens <= rt.thresh } func (rt *retryThrottler) successfulRPC() { if rt == nil { return } rt.mu.Lock() defer rt.mu.Unlock() rt.tokens += rt.ratio if rt.tokens > rt.max { rt.tokens = rt.max } } type channelzChannel struct { cc *ClientConn } func (c *channelzChannel) ChannelzMetric() *channelz.ChannelInternalMetric { return c.cc.channelzMetric() } // ErrClientConnTimeout indicates that the ClientConn cannot establish the // underlying connections within the specified timeout. // // Deprecated: This error is never returned by grpc and should not be // referenced by users. 
var ErrClientConnTimeout = errors.New("grpc: timed out when dialing") grpc-go-1.22.1/clientconn_state_transition_test.go000066400000000000000000000315111351635773100223430ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "context" "net" "sync" "testing" "time" "golang.org/x/net/http2" "google.golang.org/grpc/balancer" "google.golang.org/grpc/connectivity" "google.golang.org/grpc/internal/testutils" "google.golang.org/grpc/resolver" "google.golang.org/grpc/resolver/manual" ) const stateRecordingBalancerName = "state_recoding_balancer" var testBalancerBuilder = newStateRecordingBalancerBuilder() func init() { balancer.Register(testBalancerBuilder) } // These tests use a pipeListener. This listener is similar to net.Listener // except that it is unbuffered, so each read and write will wait for the other // side's corresponding write or read. func (s) TestStateTransitions_SingleAddress(t *testing.T) { for _, test := range []struct { desc string want []connectivity.State server func(net.Listener) net.Conn }{ { desc: "When the server returns server preface, the client enters READY.", want: []connectivity.State{ connectivity.Connecting, connectivity.Ready, }, server: func(lis net.Listener) net.Conn { conn, err := lis.Accept() if err != nil { t.Error(err) return nil } go keepReading(conn) framer := http2.NewFramer(conn, conn) if err := framer.WriteSettings(http2.Setting{}); err != nil { t.Errorf("Error while writing settings frame. %v", err) return nil } return conn }, }, { desc: "When the connection is closed, the client enters TRANSIENT FAILURE.", want: []connectivity.State{ connectivity.Connecting, connectivity.TransientFailure, }, server: func(lis net.Listener) net.Conn { conn, err := lis.Accept() if err != nil { t.Error(err) return nil } conn.Close() return nil }, }, { desc: `When the server sends its connection preface, but the connection dies before the client can write its connection preface, the client enters TRANSIENT FAILURE.`, want: []connectivity.State{ connectivity.Connecting, connectivity.TransientFailure, }, server: func(lis net.Listener) net.Conn { conn, err := lis.Accept() if err != nil { t.Error(err) return nil } framer := http2.NewFramer(conn, conn) if err := framer.WriteSettings(http2.Setting{}); err != nil { t.Errorf("Error while writing settings frame. 
%v", err) return nil } conn.Close() return nil }, }, { desc: `When the server reads the client connection preface but does not send its connection preface, the client enters TRANSIENT FAILURE.`, want: []connectivity.State{ connectivity.Connecting, connectivity.TransientFailure, }, server: func(lis net.Listener) net.Conn { conn, err := lis.Accept() if err != nil { t.Error(err) return nil } go keepReading(conn) return conn }, }, } { t.Log(test.desc) testStateTransitionSingleAddress(t, test.want, test.server) } } func testStateTransitionSingleAddress(t *testing.T, want []connectivity.State, server func(net.Listener) net.Conn) { ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second) defer cancel() pl := testutils.NewPipeListener() defer pl.Close() // Launch the server. var conn net.Conn var connMu sync.Mutex go func() { connMu.Lock() conn = server(pl) connMu.Unlock() }() client, err := DialContext(ctx, "", WithWaitForHandshake(), WithInsecure(), WithBalancerName(stateRecordingBalancerName), WithDialer(pl.Dialer()), withBackoff(noBackoff{}), withMinConnectDeadline(func() time.Duration { return time.Millisecond * 100 })) if err != nil { t.Fatal(err) } defer client.Close() stateNotifications := testBalancerBuilder.nextStateNotifier() timeout := time.After(5 * time.Second) for i := 0; i < len(want); i++ { select { case <-timeout: t.Fatalf("timed out waiting for state %d (%v) in flow %v", i, want[i], want) case seen := <-stateNotifications: if seen != want[i] { t.Fatalf("expected to see %v at position %d in flow %v, got %v", want[i], i, want, seen) } } } connMu.Lock() defer connMu.Unlock() if conn != nil { err = conn.Close() if err != nil { t.Fatal(err) } } } // When a READY connection is closed, the client enters TRANSIENT FAILURE before CONNECTING. func (s) TestStateTransitions_ReadyToTransientFailure(t *testing.T) { want := []connectivity.State{ connectivity.Connecting, connectivity.Ready, connectivity.TransientFailure, connectivity.Connecting, } ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second) defer cancel() lis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Error while listening. Err: %v", err) } defer lis.Close() sawReady := make(chan struct{}) // Launch the server. go func() { conn, err := lis.Accept() if err != nil { t.Error(err) return } go keepReading(conn) framer := http2.NewFramer(conn, conn) if err := framer.WriteSettings(http2.Setting{}); err != nil { t.Errorf("Error while writing settings frame. %v", err) return } // Prevents race between onPrefaceReceipt and onClose. <-sawReady conn.Close() }() client, err := DialContext(ctx, lis.Addr().String(), WithWaitForHandshake(), WithInsecure(), WithBalancerName(stateRecordingBalancerName)) if err != nil { t.Fatal(err) } defer client.Close() stateNotifications := testBalancerBuilder.nextStateNotifier() timeout := time.After(5 * time.Second) for i := 0; i < len(want); i++ { select { case <-timeout: t.Fatalf("timed out waiting for state %d (%v) in flow %v", i, want[i], want) case seen := <-stateNotifications: if seen == connectivity.Ready { close(sawReady) } if seen != want[i] { t.Fatalf("expected to see %v at position %d in flow %v, got %v", want[i], i, want, seen) } } } } // When the first connection is closed, the client enters stays in CONNECTING // until it tries the second address (which succeeds, and then it enters READY). 
func (s) TestStateTransitions_TriesAllAddrsBeforeTransientFailure(t *testing.T) { want := []connectivity.State{ connectivity.Connecting, connectivity.Ready, } ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second) defer cancel() lis1, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Error while listening. Err: %v", err) } defer lis1.Close() lis2, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Error while listening. Err: %v", err) } defer lis2.Close() server1Done := make(chan struct{}) server2Done := make(chan struct{}) // Launch server 1. go func() { conn, err := lis1.Accept() if err != nil { t.Error(err) return } conn.Close() close(server1Done) }() // Launch server 2. go func() { conn, err := lis2.Accept() if err != nil { t.Error(err) return } go keepReading(conn) framer := http2.NewFramer(conn, conn) if err := framer.WriteSettings(http2.Setting{}); err != nil { t.Errorf("Error while writing settings frame. %v", err) return } close(server2Done) }() rb := manual.NewBuilderWithScheme("whatever") rb.InitialState(resolver.State{Addresses: []resolver.Address{ {Addr: lis1.Addr().String()}, {Addr: lis2.Addr().String()}, }}) client, err := DialContext(ctx, "this-gets-overwritten", WithInsecure(), WithWaitForHandshake(), WithBalancerName(stateRecordingBalancerName), withResolverBuilder(rb)) if err != nil { t.Fatal(err) } defer client.Close() stateNotifications := testBalancerBuilder.nextStateNotifier() timeout := time.After(5 * time.Second) for i := 0; i < len(want); i++ { select { case <-timeout: t.Fatalf("timed out waiting for state %d (%v) in flow %v", i, want[i], want) case seen := <-stateNotifications: if seen != want[i] { t.Fatalf("expected to see %v at position %d in flow %v, got %v", want[i], i, want, seen) } } } select { case <-timeout: t.Fatal("saw the correct state transitions, but timed out waiting for client to finish interactions with server 1") case <-server1Done: } select { case <-timeout: t.Fatal("saw the correct state transitions, but timed out waiting for client to finish interactions with server 2") case <-server2Done: } } // When there are multiple addresses, and we enter READY on one of them, a // later closure should cause the client to enter TRANSIENT FAILURE before it // re-enters CONNECTING. func (s) TestStateTransitions_MultipleAddrsEntersReady(t *testing.T) { want := []connectivity.State{ connectivity.Connecting, connectivity.Ready, connectivity.TransientFailure, connectivity.Connecting, } ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second) defer cancel() lis1, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Error while listening. Err: %v", err) } defer lis1.Close() // Never actually gets used; we just want it to be alive so that the resolver has two addresses to target. lis2, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Error while listening. Err: %v", err) } defer lis2.Close() server1Done := make(chan struct{}) sawReady := make(chan struct{}) // Launch server 1. go func() { conn, err := lis1.Accept() if err != nil { t.Error(err) return } go keepReading(conn) framer := http2.NewFramer(conn, conn) if err := framer.WriteSettings(http2.Setting{}); err != nil { t.Errorf("Error while writing settings frame. 
%v", err) return } <-sawReady conn.Close() _, err = lis1.Accept() if err != nil { t.Error(err) return } close(server1Done) }() rb := manual.NewBuilderWithScheme("whatever") rb.InitialState(resolver.State{Addresses: []resolver.Address{ {Addr: lis1.Addr().String()}, {Addr: lis2.Addr().String()}, }}) client, err := DialContext(ctx, "this-gets-overwritten", WithInsecure(), WithWaitForHandshake(), WithBalancerName(stateRecordingBalancerName), withResolverBuilder(rb)) if err != nil { t.Fatal(err) } defer client.Close() stateNotifications := testBalancerBuilder.nextStateNotifier() timeout := time.After(2 * time.Second) for i := 0; i < len(want); i++ { select { case <-timeout: t.Fatalf("timed out waiting for state %d (%v) in flow %v", i, want[i], want) case seen := <-stateNotifications: if seen == connectivity.Ready { close(sawReady) } if seen != want[i] { t.Fatalf("expected to see %v at position %d in flow %v, got %v", want[i], i, want, seen) } } } select { case <-timeout: t.Fatal("saw the correct state transitions, but timed out waiting for client to finish interactions with server 1") case <-server1Done: } } type stateRecordingBalancer struct { notifier chan<- connectivity.State balancer.Balancer } func (b *stateRecordingBalancer) HandleSubConnStateChange(sc balancer.SubConn, s connectivity.State) { b.notifier <- s b.Balancer.HandleSubConnStateChange(sc, s) } func (b *stateRecordingBalancer) ResetNotifier(r chan<- connectivity.State) { b.notifier = r } func (b *stateRecordingBalancer) Close() { b.Balancer.Close() } type stateRecordingBalancerBuilder struct { mu sync.Mutex notifier chan connectivity.State // The notifier used in the last Balancer. } func newStateRecordingBalancerBuilder() *stateRecordingBalancerBuilder { return &stateRecordingBalancerBuilder{} } func (b *stateRecordingBalancerBuilder) Name() string { return stateRecordingBalancerName } func (b *stateRecordingBalancerBuilder) Build(cc balancer.ClientConn, opts balancer.BuildOptions) balancer.Balancer { stateNotifications := make(chan connectivity.State, 10) b.mu.Lock() b.notifier = stateNotifications b.mu.Unlock() return &stateRecordingBalancer{ notifier: stateNotifications, Balancer: balancer.Get(PickFirstBalancerName).Build(cc, opts), } } func (b *stateRecordingBalancerBuilder) nextStateNotifier() <-chan connectivity.State { b.mu.Lock() defer b.mu.Unlock() ret := b.notifier b.notifier = nil return ret } type noBackoff struct{} func (b noBackoff) Backoff(int) time.Duration { return time.Duration(0) } // Keep reading until something causes the connection to die (EOF, server // closed, etc). Useful as a tool for mindlessly keeping the connection // healthy, since the client will error if things like client prefaces are not // accepted in a timely fashion. func keepReading(conn net.Conn) { buf := make([]byte, 1024) for _, err := conn.Read(buf); err == nil; _, err = conn.Read(buf) { } } grpc-go-1.22.1/clientconn_test.go000066400000000000000000001162511351635773100166760ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
* See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "context" "errors" "fmt" "math" "net" "strings" "sync/atomic" "testing" "time" "golang.org/x/net/http2" "google.golang.org/grpc/connectivity" "google.golang.org/grpc/credentials" "google.golang.org/grpc/internal/backoff" "google.golang.org/grpc/internal/envconfig" "google.golang.org/grpc/internal/transport" "google.golang.org/grpc/keepalive" "google.golang.org/grpc/naming" "google.golang.org/grpc/resolver" "google.golang.org/grpc/resolver/manual" _ "google.golang.org/grpc/resolver/passthrough" "google.golang.org/grpc/testdata" ) func assertState(wantState connectivity.State, cc *ClientConn) (connectivity.State, bool) { ctx, cancel := context.WithTimeout(context.Background(), time.Second) defer cancel() var state connectivity.State for state = cc.GetState(); state != wantState && cc.WaitForStateChange(ctx, state); state = cc.GetState() { } return state, state == wantState } func (s) TestDialWithTimeout(t *testing.T) { lis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Error while listening. Err: %v", err) } defer lis.Close() lisAddr := resolver.Address{Addr: lis.Addr().String()} lisDone := make(chan struct{}) dialDone := make(chan struct{}) // 1st listener accepts the connection and then does nothing go func() { defer close(lisDone) conn, err := lis.Accept() if err != nil { t.Errorf("Error while accepting. Err: %v", err) return } framer := http2.NewFramer(conn, conn) if err := framer.WriteSettings(http2.Setting{}); err != nil { t.Errorf("Error while writing settings. Err: %v", err) return } <-dialDone // Close conn only after dial returns. }() r, cleanup := manual.GenerateAndRegisterManualResolver() defer cleanup() r.InitialState(resolver.State{Addresses: []resolver.Address{lisAddr}}) client, err := Dial(r.Scheme()+":///test.server", WithInsecure(), WithTimeout(5*time.Second)) close(dialDone) if err != nil { t.Fatalf("Dial failed. Err: %v", err) } defer client.Close() timeout := time.After(1 * time.Second) select { case <-timeout: t.Fatal("timed out waiting for server to finish") case <-lisDone: } } func (s) TestDialWithMultipleBackendsNotSendingServerPreface(t *testing.T) { lis1, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Error while listening. Err: %v", err) } defer lis1.Close() lis1Addr := resolver.Address{Addr: lis1.Addr().String()} lis1Done := make(chan struct{}) // 1st listener accepts the connection and immediately closes it. go func() { defer close(lis1Done) conn, err := lis1.Accept() if err != nil { t.Errorf("Error while accepting. Err: %v", err) return } conn.Close() }() lis2, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Error while listening. Err: %v", err) } defer lis2.Close() lis2Done := make(chan struct{}) lis2Addr := resolver.Address{Addr: lis2.Addr().String()} // 2nd listener should get a connection attempt since the first one failed. go func() { defer close(lis2Done) _, err := lis2.Accept() // Closing the client will clean up this conn. if err != nil { t.Errorf("Error while accepting. Err: %v", err) return } }() r, cleanup := manual.GenerateAndRegisterManualResolver() defer cleanup() r.InitialState(resolver.State{Addresses: []resolver.Address{lis1Addr, lis2Addr}}) client, err := Dial(r.Scheme()+":///test.server", WithInsecure()) if err != nil { t.Fatalf("Dial failed. 
Err: %v", err) } defer client.Close() timeout := time.After(5 * time.Second) select { case <-timeout: t.Fatal("timed out waiting for server 1 to finish") case <-lis1Done: } select { case <-timeout: t.Fatal("timed out waiting for server 2 to finish") case <-lis2Done: } } var allReqHSSettings = []envconfig.RequireHandshakeSetting{ envconfig.RequireHandshakeOff, envconfig.RequireHandshakeOn, } func (s) TestDialWaitsForServerSettings(t *testing.T) { // Restore current setting after test. old := envconfig.RequireHandshake defer func() { envconfig.RequireHandshake = old }() // Test with all environment variable settings, which should not impact the // test case since WithWaitForHandshake has higher priority. for _, setting := range allReqHSSettings { envconfig.RequireHandshake = setting lis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Error while listening. Err: %v", err) } defer lis.Close() done := make(chan struct{}) sent := make(chan struct{}) dialDone := make(chan struct{}) go func() { // Launch the server. defer func() { close(done) }() conn, err := lis.Accept() if err != nil { t.Errorf("Error while accepting. Err: %v", err) return } defer conn.Close() // Sleep for a little bit to make sure that Dial on client // side blocks until settings are received. time.Sleep(100 * time.Millisecond) framer := http2.NewFramer(conn, conn) close(sent) if err := framer.WriteSettings(http2.Setting{}); err != nil { t.Errorf("Error while writing settings. Err: %v", err) return } <-dialDone // Close conn only after dial returns. }() ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() client, err := DialContext(ctx, lis.Addr().String(), WithInsecure(), WithWaitForHandshake(), WithBlock()) close(dialDone) if err != nil { t.Fatalf("Error while dialing. Err: %v", err) } defer client.Close() select { case <-sent: default: t.Fatalf("Dial returned before server settings were sent") } <-done } } func (s) TestDialWaitsForServerSettingsViaEnv(t *testing.T) { // Set default behavior and restore current setting after test. old := envconfig.RequireHandshake envconfig.RequireHandshake = envconfig.RequireHandshakeOn defer func() { envconfig.RequireHandshake = old }() lis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Error while listening. Err: %v", err) } defer lis.Close() done := make(chan struct{}) sent := make(chan struct{}) dialDone := make(chan struct{}) go func() { // Launch the server. defer func() { close(done) }() conn, err := lis.Accept() if err != nil { t.Errorf("Error while accepting. Err: %v", err) return } defer conn.Close() // Sleep for a little bit to make sure that Dial on client // side blocks until settings are received. time.Sleep(100 * time.Millisecond) framer := http2.NewFramer(conn, conn) close(sent) if err := framer.WriteSettings(http2.Setting{}); err != nil { t.Errorf("Error while writing settings. Err: %v", err) return } <-dialDone // Close conn only after dial returns. }() ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() client, err := DialContext(ctx, lis.Addr().String(), WithInsecure(), WithBlock()) close(dialDone) if err != nil { t.Fatalf("Error while dialing. Err: %v", err) } defer client.Close() select { case <-sent: default: t.Fatalf("Dial returned before server settings were sent") } <-done } func (s) TestDialWaitsForServerSettingsAndFails(t *testing.T) { // Restore current setting after test. 
old := envconfig.RequireHandshake defer func() { envconfig.RequireHandshake = old }() for _, setting := range allReqHSSettings { envconfig.RequireHandshake = setting lis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Error while listening. Err: %v", err) } done := make(chan struct{}) numConns := 0 go func() { // Launch the server. defer func() { close(done) }() for { conn, err := lis.Accept() if err != nil { break } numConns++ defer conn.Close() } }() ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second) defer cancel() client, err := DialContext(ctx, lis.Addr().String(), WithInsecure(), WithWaitForHandshake(), WithBlock(), withBackoff(noBackoff{}), withMinConnectDeadline(func() time.Duration { return time.Second / 4 })) lis.Close() if err == nil { client.Close() t.Fatalf("Unexpected success (err=nil) while dialing") } if err != context.DeadlineExceeded { t.Fatalf("DialContext(_) = %v; want context.DeadlineExceeded", err) } if numConns < 2 { t.Fatalf("dial attempts: %v; want > 1", numConns) } <-done } } func (s) TestDialWaitsForServerSettingsViaEnvAndFails(t *testing.T) { // Set default behavior and restore current setting after test. old := envconfig.RequireHandshake envconfig.RequireHandshake = envconfig.RequireHandshakeOn defer func() { envconfig.RequireHandshake = old }() lis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Error while listening. Err: %v", err) } done := make(chan struct{}) numConns := 0 go func() { // Launch the server. defer func() { close(done) }() for { conn, err := lis.Accept() if err != nil { break } numConns++ defer conn.Close() } }() ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second) defer cancel() client, err := DialContext(ctx, lis.Addr().String(), WithInsecure(), WithBlock(), withBackoff(noBackoff{}), withMinConnectDeadline(func() time.Duration { return time.Second / 4 })) lis.Close() if err == nil { client.Close() t.Fatalf("Unexpected success (err=nil) while dialing") } if err != context.DeadlineExceeded { t.Fatalf("DialContext(_) = %v; want context.DeadlineExceeded", err) } if numConns < 2 { t.Fatalf("dial attempts: %v; want > 1", numConns) } <-done } func (s) TestDialDoesNotWaitForServerSettings(t *testing.T) { // Restore current setting after test. old := envconfig.RequireHandshake defer func() { envconfig.RequireHandshake = old }() envconfig.RequireHandshake = envconfig.RequireHandshakeOff lis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Error while listening. Err: %v", err) } defer lis.Close() done := make(chan struct{}) dialDone := make(chan struct{}) go func() { // Launch the server. defer func() { close(done) }() conn, err := lis.Accept() if err != nil { t.Errorf("Error while accepting. Err: %v", err) return } defer conn.Close() <-dialDone // Close conn only after dial returns. }() ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second) defer cancel() client, err := DialContext(ctx, lis.Addr().String(), WithInsecure(), WithBlock()) if err != nil { t.Fatalf("DialContext returned err =%v; want nil", err) } defer client.Close() if state := client.GetState(); state != connectivity.Ready { t.Fatalf("client.GetState() = %v; want connectivity.Ready", state) } close(dialDone) } // 1. Client connects to a server that doesn't send preface. // 2. After minConnectTimeout(500 ms here), client disconnects and retries. // 3. The new server sends its preface. // 4. Client doesn't kill the connection this time. 
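// The 500 ms in step 2 comes from the withMinConnectDeadline option passed to // Dial below; step 4 is verified by keeping the second connection open and // reading from it until the test flips the "over" flag.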
func (s) TestCloseConnectionWhenServerPrefaceNotReceived(t *testing.T) { // Restore current setting after test. old := envconfig.RequireHandshake defer func() { envconfig.RequireHandshake = old }() envconfig.RequireHandshake = envconfig.RequireHandshakeOn lis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Error while listening. Err: %v", err) } var ( conn2 net.Conn over uint32 ) defer func() { lis.Close() // conn2 shouldn't be closed until the client has // observed a successful test. if conn2 != nil { conn2.Close() } }() done := make(chan struct{}) accepted := make(chan struct{}) go func() { // Launch the server. defer close(done) conn1, err := lis.Accept() if err != nil { t.Errorf("Error while accepting. Err: %v", err) return } defer conn1.Close() // Don't send server settings and the client should close the connection and try again. conn2, err = lis.Accept() // Accept a reconnection request from client. if err != nil { t.Errorf("Error while accepting. Err: %v", err) return } close(accepted) framer := http2.NewFramer(conn2, conn2) if err = framer.WriteSettings(http2.Setting{}); err != nil { t.Errorf("Error while writing settings. Err: %v", err) return } b := make([]byte, 8) for { _, err = conn2.Read(b) if err == nil { continue } if atomic.LoadUint32(&over) == 1 { // The connection stayed alive for the timer. // Success. return } t.Errorf("Unexpected error while reading. Err: %v, want timeout error", err) break } }() client, err := Dial(lis.Addr().String(), WithInsecure(), withMinConnectDeadline(func() time.Duration { return time.Millisecond * 500 })) if err != nil { t.Fatalf("Error while dialing. Err: %v", err) } // wait for connection to be accepted on the server. timer := time.NewTimer(time.Second * 10) select { case <-accepted: case <-timer.C: t.Fatalf("Client didn't make another connection request in time.") } // Make sure the connection stays alive for sometime. time.Sleep(time.Second) atomic.StoreUint32(&over, 1) client.Close() <-done } func (s) TestBackoffWhenNoServerPrefaceReceived(t *testing.T) { lis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Error while listening. Err: %v", err) } defer lis.Close() done := make(chan struct{}) go func() { // Launch the server. defer func() { close(done) }() conn, err := lis.Accept() // Accept the connection only to close it immediately. if err != nil { t.Errorf("Error while accepting. Err: %v", err) return } prevAt := time.Now() conn.Close() var prevDuration time.Duration // Make sure the retry attempts are backed off properly. for i := 0; i < 3; i++ { conn, err := lis.Accept() if err != nil { t.Errorf("Error while accepting. Err: %v", err) return } meow := time.Now() conn.Close() dr := meow.Sub(prevAt) if dr <= prevDuration { t.Errorf("Client backoff did not increase with retries. Previous duration: %v, current duration: %v", prevDuration, dr) return } prevDuration = dr prevAt = meow } }() client, err := Dial(lis.Addr().String(), WithInsecure()) if err != nil { t.Fatalf("Error while dialing. 
Err: %v", err) } defer client.Close() <-done } func (s) TestConnectivityStates(t *testing.T) { servers, resolver, cleanup := startServers(t, 2, math.MaxUint32) defer cleanup() cc, err := Dial("passthrough:///foo.bar.com", WithBalancer(RoundRobin(resolver)), WithInsecure()) if err != nil { t.Fatalf("Dial(\"foo.bar.com\", WithBalancer(_)) = _, %v, want _ ", err) } defer cc.Close() wantState := connectivity.Ready if state, ok := assertState(wantState, cc); !ok { t.Fatalf("asserState(%s) = %s, false, want %s, true", wantState, state, wantState) } // Send an update to delete the server connection (tearDown addrConn). update := []*naming.Update{ { Op: naming.Delete, Addr: "localhost:" + servers[0].port, }, } resolver.w.inject(update) wantState = connectivity.TransientFailure if state, ok := assertState(wantState, cc); !ok { t.Fatalf("asserState(%s) = %s, false, want %s, true", wantState, state, wantState) } update[0] = &naming.Update{ Op: naming.Add, Addr: "localhost:" + servers[1].port, } resolver.w.inject(update) wantState = connectivity.Ready if state, ok := assertState(wantState, cc); !ok { t.Fatalf("asserState(%s) = %s, false, want %s, true", wantState, state, wantState) } } func (s) TestWithTimeout(t *testing.T) { conn, err := Dial("passthrough:///Non-Existent.Server:80", WithTimeout(time.Millisecond), WithBlock(), WithInsecure()) if err == nil { conn.Close() } if err != context.DeadlineExceeded { t.Fatalf("Dial(_, _) = %v, %v, want %v", conn, err, context.DeadlineExceeded) } } func (s) TestWithTransportCredentialsTLS(t *testing.T) { ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond) defer cancel() creds, err := credentials.NewClientTLSFromFile(testdata.Path("ca.pem"), "x.test.youtube.com") if err != nil { t.Fatalf("Failed to create credentials %v", err) } conn, err := DialContext(ctx, "passthrough:///Non-Existent.Server:80", WithTransportCredentials(creds), WithBlock()) if err == nil { conn.Close() } if err != context.DeadlineExceeded { t.Fatalf("Dial(_, _) = %v, %v, want %v", conn, err, context.DeadlineExceeded) } } func (s) TestDefaultAuthority(t *testing.T) { target := "Non-Existent.Server:8080" conn, err := Dial(target, WithInsecure()) if err != nil { t.Fatalf("Dial(_, _) = _, %v, want _, ", err) } defer conn.Close() if conn.authority != target { t.Fatalf("%v.authority = %v, want %v", conn, conn.authority, target) } } func (s) TestTLSServerNameOverwrite(t *testing.T) { overwriteServerName := "over.write.server.name" creds, err := credentials.NewClientTLSFromFile(testdata.Path("ca.pem"), overwriteServerName) if err != nil { t.Fatalf("Failed to create credentials %v", err) } conn, err := Dial("passthrough:///Non-Existent.Server:80", WithTransportCredentials(creds)) if err != nil { t.Fatalf("Dial(_, _) = _, %v, want _, ", err) } defer conn.Close() if conn.authority != overwriteServerName { t.Fatalf("%v.authority = %v, want %v", conn, conn.authority, overwriteServerName) } } func (s) TestWithAuthority(t *testing.T) { overwriteServerName := "over.write.server.name" conn, err := Dial("passthrough:///Non-Existent.Server:80", WithInsecure(), WithAuthority(overwriteServerName)) if err != nil { t.Fatalf("Dial(_, _) = _, %v, want _, ", err) } defer conn.Close() if conn.authority != overwriteServerName { t.Fatalf("%v.authority = %v, want %v", conn, conn.authority, overwriteServerName) } } func (s) TestWithAuthorityAndTLS(t *testing.T) { overwriteServerName := "over.write.server.name" creds, err := credentials.NewClientTLSFromFile(testdata.Path("ca.pem"), 
overwriteServerName) if err != nil { t.Fatalf("Failed to create credentials %v", err) } conn, err := Dial("passthrough:///Non-Existent.Server:80", WithTransportCredentials(creds), WithAuthority("no.effect.authority")) if err != nil { t.Fatalf("Dial(_, _) = _, %v, want _, ", err) } defer conn.Close() if conn.authority != overwriteServerName { t.Fatalf("%v.authority = %v, want %v", conn, conn.authority, overwriteServerName) } } // When creating a transport configured with n addresses, only calculate the // backoff once per "round" of attempts instead of once per address (n times // per "round" of attempts). func (s) TestDial_OneBackoffPerRetryGroup(t *testing.T) { var attempts uint32 getMinConnectTimeout := func() time.Duration { if atomic.AddUint32(&attempts, 1) == 1 { // Once all addresses are exhausted, hang around and wait for the // client.Close to happen rather than re-starting a new round of // attempts. return time.Hour } t.Error("only one attempt backoff calculation, but got more") return 0 } ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second) defer cancel() lis1, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Error while listening. Err: %v", err) } defer lis1.Close() lis2, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Error while listening. Err: %v", err) } defer lis2.Close() server1Done := make(chan struct{}) server2Done := make(chan struct{}) // Launch server 1. go func() { conn, err := lis1.Accept() if err != nil { t.Error(err) return } conn.Close() close(server1Done) }() // Launch server 2. go func() { conn, err := lis2.Accept() if err != nil { t.Error(err) return } conn.Close() close(server2Done) }() rb := manual.NewBuilderWithScheme("whatever") rb.InitialState(resolver.State{Addresses: []resolver.Address{ {Addr: lis1.Addr().String()}, {Addr: lis2.Addr().String()}, }}) client, err := DialContext(ctx, "this-gets-overwritten", WithInsecure(), WithBalancerName(stateRecordingBalancerName), withResolverBuilder(rb), withMinConnectDeadline(getMinConnectTimeout)) if err != nil { t.Fatal(err) } defer client.Close() timeout := time.After(15 * time.Second) select { case <-timeout: t.Fatal("timed out waiting for test to finish") case <-server1Done: } select { case <-timeout: t.Fatal("timed out waiting for test to finish") case <-server2Done: } } func (s) TestDialContextCancel(t *testing.T) { ctx, cancel := context.WithCancel(context.Background()) cancel() if _, err := DialContext(ctx, "Non-Existent.Server:80", WithBlock(), WithInsecure()); err != context.Canceled { t.Fatalf("DialContext(%v, _) = _, %v, want _, %v", ctx, err, context.Canceled) } } type failFastError struct{} func (failFastError) Error() string { return "failfast" } func (failFastError) Temporary() bool { return false } func (s) TestDialContextFailFast(t *testing.T) { ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() failErr := failFastError{} dialer := func(string, time.Duration) (net.Conn, error) { return nil, failErr } _, err := DialContext(ctx, "Non-Existent.Server:80", WithBlock(), WithInsecure(), WithDialer(dialer), FailOnNonTempDialError(true)) if terr, ok := err.(transport.ConnectionError); !ok || terr.Origin() != failErr { t.Fatalf("DialContext() = _, %v, want _, %v", err, failErr) } } // blockingBalancer mimics the behavior of balancers whose initialization takes a long time. // In this test, reading from blockingBalancer.Notify() blocks forever. 
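// It implements the deprecated Balancer interface; TestDialWithBlockingBalancer // cancels the dial context to confirm that DialContext still returns even // though Notify never delivers any addresses.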
type blockingBalancer struct { ch chan []Address } func newBlockingBalancer() Balancer { return &blockingBalancer{ch: make(chan []Address)} } func (b *blockingBalancer) Start(target string, config BalancerConfig) error { return nil } func (b *blockingBalancer) Up(addr Address) func(error) { return nil } func (b *blockingBalancer) Get(ctx context.Context, opts BalancerGetOptions) (addr Address, put func(), err error) { return Address{}, nil, nil } func (b *blockingBalancer) Notify() <-chan []Address { return b.ch } func (b *blockingBalancer) Close() error { close(b.ch) return nil } func (s) TestDialWithBlockingBalancer(t *testing.T) { ctx, cancel := context.WithCancel(context.Background()) dialDone := make(chan struct{}) go func() { DialContext(ctx, "Non-Existent.Server:80", WithBlock(), WithInsecure(), WithBalancer(newBlockingBalancer())) close(dialDone) }() cancel() <-dialDone } // securePerRPCCredentials always requires transport security. type securePerRPCCredentials struct{} func (c securePerRPCCredentials) GetRequestMetadata(ctx context.Context, uri ...string) (map[string]string, error) { return nil, nil } func (c securePerRPCCredentials) RequireTransportSecurity() bool { return true } func (s) TestCredentialsMisuse(t *testing.T) { tlsCreds, err := credentials.NewClientTLSFromFile(testdata.Path("ca.pem"), "x.test.youtube.com") if err != nil { t.Fatalf("Failed to create authenticator %v", err) } // Two conflicting credential configurations if _, err := Dial("passthrough:///Non-Existent.Server:80", WithTransportCredentials(tlsCreds), WithBlock(), WithInsecure()); err != errCredentialsConflict { t.Fatalf("Dial(_, _) = _, %v, want _, %v", err, errCredentialsConflict) } // security info on insecure connection if _, err := Dial("passthrough:///Non-Existent.Server:80", WithPerRPCCredentials(securePerRPCCredentials{}), WithBlock(), WithInsecure()); err != errTransportCredentialsMissing { t.Fatalf("Dial(_, _) = _, %v, want _, %v", err, errTransportCredentialsMissing) } } func (s) TestWithBackoffConfigDefault(t *testing.T) { testBackoffConfigSet(t, &DefaultBackoffConfig) } func (s) TestWithBackoffConfig(t *testing.T) { b := BackoffConfig{MaxDelay: DefaultBackoffConfig.MaxDelay / 2} expected := b testBackoffConfigSet(t, &expected, WithBackoffConfig(b)) } func (s) TestWithBackoffMaxDelay(t *testing.T) { md := DefaultBackoffConfig.MaxDelay / 2 expected := BackoffConfig{MaxDelay: md} testBackoffConfigSet(t, &expected, WithBackoffMaxDelay(md)) } func testBackoffConfigSet(t *testing.T, expected *BackoffConfig, opts ...DialOption) { opts = append(opts, WithInsecure()) conn, err := Dial("passthrough:///foo:80", opts...) if err != nil { t.Fatalf("unexpected error dialing connection: %v", err) } defer conn.Close() if conn.dopts.bs == nil { t.Fatalf("backoff config not set") } actual, ok := conn.dopts.bs.(backoff.Exponential) if !ok { t.Fatalf("unexpected type of backoff config: %#v", conn.dopts.bs) } expectedValue := backoff.Exponential{ MaxDelay: expected.MaxDelay, } if actual != expectedValue { t.Fatalf("unexpected backoff config on connection: %v, want %v", actual, expected) } } // emptyBalancer returns an empty set of servers. 
type emptyBalancer struct { ch chan []Address } func newEmptyBalancer() Balancer { return &emptyBalancer{ch: make(chan []Address, 1)} } func (b *emptyBalancer) Start(_ string, _ BalancerConfig) error { b.ch <- nil return nil } func (b *emptyBalancer) Up(_ Address) func(error) { return nil } func (b *emptyBalancer) Get(_ context.Context, _ BalancerGetOptions) (Address, func(), error) { return Address{}, nil, nil } func (b *emptyBalancer) Notify() <-chan []Address { return b.ch } func (b *emptyBalancer) Close() error { close(b.ch) return nil } func (s) TestNonblockingDialWithEmptyBalancer(t *testing.T) { ctx, cancel := context.WithCancel(context.Background()) defer cancel() dialDone := make(chan error) go func() { dialDone <- func() error { conn, err := DialContext(ctx, "Non-Existent.Server:80", WithInsecure(), WithBalancer(newEmptyBalancer())) if err != nil { return err } return conn.Close() }() }() if err := <-dialDone; err != nil { t.Fatalf("unexpected error dialing connection: %s", err) } } func (s) TestResolverServiceConfigBeforeAddressNotPanic(t *testing.T) { r, rcleanup := manual.GenerateAndRegisterManualResolver() defer rcleanup() cc, err := Dial(r.Scheme()+":///test.server", WithInsecure()) if err != nil { t.Fatalf("failed to dial: %v", err) } defer cc.Close() // SwitchBalancer before NewAddress. There was no balancer created, this // makes sure we don't call close on nil balancerWrapper. r.UpdateState(resolver.State{ServiceConfig: parseCfg(`{"loadBalancingPolicy": "round_robin"}`)}) // This should not panic. time.Sleep(time.Second) // Sleep to make sure the service config is handled by ClientConn. } func (s) TestResolverServiceConfigWhileClosingNotPanic(t *testing.T) { for i := 0; i < 10; i++ { // Run this multiple times to make sure it doesn't panic. r, rcleanup := manual.GenerateAndRegisterManualResolver() defer rcleanup() cc, err := Dial(r.Scheme()+":///test.server", WithInsecure()) if err != nil { t.Fatalf("failed to dial: %v", err) } // Send a new service config while closing the ClientConn. go cc.Close() go r.UpdateState(resolver.State{ServiceConfig: parseCfg(`{"loadBalancingPolicy": "round_robin"}`)}) // This should not panic. } } func (s) TestResolverEmptyUpdateNotPanic(t *testing.T) { r, rcleanup := manual.GenerateAndRegisterManualResolver() defer rcleanup() cc, err := Dial(r.Scheme()+":///test.server", WithInsecure()) if err != nil { t.Fatalf("failed to dial: %v", err) } defer cc.Close() // This make sure we don't create addrConn with empty address list. r.UpdateState(resolver.State{}) // This should not panic. time.Sleep(time.Second) // Sleep to make sure the service config is handled by ClientConn. } func (s) TestClientUpdatesParamsAfterGoAway(t *testing.T) { lis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Failed to listen. Err: %v", err) } defer lis.Close() connected := make(chan struct{}) go func() { conn, err := lis.Accept() if err != nil { t.Errorf("error accepting connection: %v", err) return } defer conn.Close() f := http2.NewFramer(conn, conn) // Start a goroutine to read from the conn to prevent the client from // blocking after it writes its preface. 
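// Once the client reports it is connected, the server sends a GOAWAY carrying // "too_many_pings"; the polling loop at the bottom of the test then waits for // the client to raise its keepalive Time (cc.mkp.Time) to 20s in response.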
go func() { for { if _, err := f.ReadFrame(); err != nil { return } } }() if err := f.WriteSettings(http2.Setting{}); err != nil { t.Errorf("error writing settings: %v", err) return } <-connected if err := f.WriteGoAway(0, http2.ErrCodeEnhanceYourCalm, []byte("too_many_pings")); err != nil { t.Errorf("error writing GOAWAY: %v", err) return } }() addr := lis.Addr().String() ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second) defer cancel() cc, err := DialContext(ctx, addr, WithBlock(), WithInsecure(), WithKeepaliveParams(keepalive.ClientParameters{ Time: 10 * time.Second, Timeout: 100 * time.Millisecond, PermitWithoutStream: true, })) if err != nil { t.Fatalf("Dial(%s, _) = _, %v, want _, ", addr, err) } defer cc.Close() close(connected) for { time.Sleep(10 * time.Millisecond) cc.mu.RLock() v := cc.mkp.Time if v == 20*time.Second { // Success cc.mu.RUnlock() return } if ctx.Err() != nil { // Timeout t.Fatalf("cc.dopts.copts.Keepalive.Time = %v , want 20s", v) } cc.mu.RUnlock() } } func (s) TestDisableServiceConfigOption(t *testing.T) { r, cleanup := manual.GenerateAndRegisterManualResolver() defer cleanup() addr := r.Scheme() + ":///non.existent" cc, err := Dial(addr, WithInsecure(), WithDisableServiceConfig()) if err != nil { t.Fatalf("Dial(%s, _) = _, %v, want _, ", addr, err) } defer cc.Close() r.UpdateState(resolver.State{ServiceConfig: parseCfg(`{ "methodConfig": [ { "name": [ { "service": "foo", "method": "Bar" } ], "waitForReady": true } ] }`)}) time.Sleep(1 * time.Second) m := cc.GetMethodConfig("/foo/Bar") if m.WaitForReady != nil { t.Fatalf("want: method (\"/foo/bar/\") config to be empty, got: %+v", m) } } func (s) TestGetClientConnTarget(t *testing.T) { addr := "nonexist:///non.existent" cc, err := Dial(addr, WithInsecure()) if err != nil { t.Fatalf("Dial(%s, _) = _, %v, want _, ", addr, err) } defer cc.Close() if cc.Target() != addr { t.Fatalf("Target() = %s, want %s", cc.Target(), addr) } } type backoffForever struct{} func (b backoffForever) Backoff(int) time.Duration { return time.Duration(math.MaxInt64) } func (s) TestResetConnectBackoff(t *testing.T) { dials := make(chan struct{}) defer func() { // If we fail, let the http2client break out of dialing. select { case <-dials: default: } }() dialer := func(string, time.Duration) (net.Conn, error) { dials <- struct{}{} return nil, errors.New("failed to fake dial") } cc, err := Dial("any", WithInsecure(), WithDialer(dialer), withBackoff(backoffForever{})) if err != nil { t.Fatalf("Dial() = _, %v; want _, nil", err) } defer cc.Close() select { case <-dials: case <-time.NewTimer(10 * time.Second).C: t.Fatal("Failed to call dial within 10s") } select { case <-dials: t.Fatal("Dial called unexpectedly before resetting backoff") case <-time.NewTimer(100 * time.Millisecond).C: } cc.ResetConnectBackoff() select { case <-dials: case <-time.NewTimer(10 * time.Second).C: t.Fatal("Failed to call dial within 10s after resetting backoff") } } func (s) TestBackoffCancel(t *testing.T) { dialStrCh := make(chan string) cc, err := Dial("any", WithInsecure(), WithDialer(func(t string, _ time.Duration) (net.Conn, error) { dialStrCh <- t return nil, fmt.Errorf("test dialer, always error") })) if err != nil { t.Fatalf("Failed to create ClientConn: %v", err) } <-dialStrCh cc.Close() // Should not leak. May need -count 5000 to exercise. } // UpdateAddresses should cause the next reconnect to begin from the top of the // list if the connection is not READY. 
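// The test below wires up three listeners: server 1 is dialed first and goes // READY before dropping the connection, server 2 holds the next attempt open // while the test calls UpdateAddresses, and after server 2 closes the client // is expected to go back to server 1 rather than moving on to server 3.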
func (s) TestUpdateAddresses_RetryFromFirstAddr(t *testing.T) { lis1, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Error while listening. Err: %v", err) } defer lis1.Close() lis2, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Error while listening. Err: %v", err) } defer lis2.Close() lis3, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Error while listening. Err: %v", err) } defer lis3.Close() closeServer2 := make(chan struct{}) server1ContactedFirstTime := make(chan struct{}) server1ContactedSecondTime := make(chan struct{}) server2ContactedFirstTime := make(chan struct{}) server2ContactedSecondTime := make(chan struct{}) server3Contacted := make(chan struct{}) // Launch server 1. go func() { // First, let's allow the initial connection to go READY. We need to do // this because tryUpdateAddrs only works after there's some non-nil // address on the ac, and curAddress is only set after READY. conn1, err := lis1.Accept() if err != nil { t.Error(err) return } go keepReading(conn1) framer := http2.NewFramer(conn1, conn1) if err := framer.WriteSettings(http2.Setting{}); err != nil { t.Errorf("Error while writing settings frame. %v", err) return } // nextStateNotifier() is updated after balancerBuilder.Build(), which is // called by grpc.Dial. It's safe to do it here because lis1.Accept blocks // until balancer is built to process the addresses. stateNotifications := testBalancerBuilder.nextStateNotifier() // Wait for the transport to become ready. for s := range stateNotifications { if s == connectivity.Ready { break } } // Once it's ready, curAddress has been set. So let's close this // connection prompting the first reconnect cycle. conn1.Close() // Accept and immediately close, causing it to go to server2. conn2, err := lis1.Accept() if err != nil { t.Error(err) return } close(server1ContactedFirstTime) conn2.Close() // Hopefully it picks this server after tryUpdateAddrs. lis1.Accept() close(server1ContactedSecondTime) }() // Launch server 2. go func() { // Accept and then hang waiting for the test call tryUpdateAddrs and // then signal to this server to close. After this server closes, it // should start from the top instead of trying server2 or continuing // to server3. conn, err := lis2.Accept() if err != nil { t.Error(err) return } close(server2ContactedFirstTime) <-closeServer2 conn.Close() // After tryUpdateAddrs, it should NOT try server2. lis2.Accept() close(server2ContactedSecondTime) }() // Launch server 3. go func() { // After tryUpdateAddrs, it should NOT try server3. (or any other time) lis3.Accept() close(server3Contacted) }() addrsList := []resolver.Address{ {Addr: lis1.Addr().String()}, {Addr: lis2.Addr().String()}, {Addr: lis3.Addr().String()}, } rb := manual.NewBuilderWithScheme("whatever") rb.InitialState(resolver.State{Addresses: addrsList}) client, err := Dial("this-gets-overwritten", WithInsecure(), WithWaitForHandshake(), withResolverBuilder(rb), withBackoff(noBackoff{}), WithBalancerName(stateRecordingBalancerName), withMinConnectDeadline(func() time.Duration { return time.Hour })) if err != nil { t.Fatal(err) } defer client.Close() timeout := time.After(5 * time.Second) // Wait for server1 to be contacted (which will immediately fail), then // server2 (which will hang waiting for our signal). 
select { case <-server1ContactedFirstTime: case <-timeout: t.Fatal("timed out waiting for server1 to be contacted") } select { case <-server2ContactedFirstTime: case <-timeout: t.Fatal("timed out waiting for server2 to be contacted") } // Grab the addrConn and call tryUpdateAddrs. var ac *addrConn client.mu.Lock() for clientAC := range client.conns { ac = clientAC break } client.mu.Unlock() ac.acbw.UpdateAddresses(addrsList) // We've called tryUpdateAddrs - now let's make server2 close the // connection and check that it goes back to server1 instead of continuing // to server3 or trying server2 again. close(closeServer2) select { case <-server1ContactedSecondTime: case <-server2ContactedSecondTime: t.Fatal("server2 was contacted a second time, but it after tryUpdateAddrs it should have re-started the list and tried server1") case <-server3Contacted: t.Fatal("server3 was contacted, but after tryUpdateAddrs it should have re-started the list and tried server1") case <-timeout: t.Fatal("timed out waiting for any server to be contacted after tryUpdateAddrs") } } func (s) TestDefaultServiceConfig(t *testing.T) { r, cleanup := manual.GenerateAndRegisterManualResolver() defer cleanup() addr := r.Scheme() + ":///non.existent" js := `{ "methodConfig": [ { "name": [ { "service": "foo", "method": "bar" } ], "waitForReady": true } ] }` testInvalidDefaultServiceConfig(t) testDefaultServiceConfigWhenResolverServiceConfigDisabled(t, r, addr, js) testDefaultServiceConfigWhenResolverDoesNotReturnServiceConfig(t, r, addr, js) testDefaultServiceConfigWhenResolverReturnInvalidServiceConfig(t, r, addr, js) } func verifyWaitForReadyEqualsTrue(cc *ClientConn) bool { var i int for i = 0; i < 10; i++ { mc := cc.GetMethodConfig("/foo/bar") if mc.WaitForReady != nil && *mc.WaitForReady == true { break } time.Sleep(100 * time.Millisecond) } return i != 10 } func testInvalidDefaultServiceConfig(t *testing.T) { _, err := Dial("fake.com", WithInsecure(), WithDefaultServiceConfig("")) if !strings.Contains(err.Error(), invalidDefaultServiceConfigErrPrefix) { t.Fatalf("Dial got err: %v, want err contains: %v", err, invalidDefaultServiceConfigErrPrefix) } } func testDefaultServiceConfigWhenResolverServiceConfigDisabled(t *testing.T, r resolver.Resolver, addr string, js string) { cc, err := Dial(addr, WithInsecure(), WithDisableServiceConfig(), WithDefaultServiceConfig(js)) if err != nil { t.Fatalf("Dial(%s, _) = _, %v, want _, ", addr, err) } defer cc.Close() // Resolver service config gets ignored since resolver service config is disabled. 
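// The default service config supplied via WithDefaultServiceConfig should win // instead, which verifyWaitForReadyEqualsTrue checks by polling // GetMethodConfig("/foo/bar") for waitForReady == true.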
r.(*manual.Resolver).UpdateState(resolver.State{ Addresses: []resolver.Address{{Addr: addr}}, ServiceConfig: parseCfg("{}"), }) if !verifyWaitForReadyEqualsTrue(cc) { t.Fatal("default service config failed to be applied after 1s") } } func testDefaultServiceConfigWhenResolverDoesNotReturnServiceConfig(t *testing.T, r resolver.Resolver, addr string, js string) { cc, err := Dial(addr, WithInsecure(), WithDefaultServiceConfig(js)) if err != nil { t.Fatalf("Dial(%s, _) = _, %v, want _, ", addr, err) } defer cc.Close() r.(*manual.Resolver).UpdateState(resolver.State{ Addresses: []resolver.Address{{Addr: addr}}, }) if !verifyWaitForReadyEqualsTrue(cc) { t.Fatal("default service config failed to be applied after 1s") } } func testDefaultServiceConfigWhenResolverReturnInvalidServiceConfig(t *testing.T, r resolver.Resolver, addr string, js string) { cc, err := Dial(addr, WithInsecure(), WithDefaultServiceConfig(js)) if err != nil { t.Fatalf("Dial(%s, _) = _, %v, want _, ", addr, err) } defer cc.Close() r.(*manual.Resolver).UpdateState(resolver.State{ Addresses: []resolver.Address{{Addr: addr}}, ServiceConfig: nil, }) if !verifyWaitForReadyEqualsTrue(cc) { t.Fatal("default service config failed to be applied after 1s") } } grpc-go-1.22.1/codec.go000066400000000000000000000032311351635773100145510ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "google.golang.org/grpc/encoding" _ "google.golang.org/grpc/encoding/proto" // to register the Codec for "proto" ) // baseCodec contains the functionality of both Codec and encoding.Codec, but // omits the name/string, which vary between the two and are not needed for // anything besides the registry in the encoding package. type baseCodec interface { Marshal(v interface{}) ([]byte, error) Unmarshal(data []byte, v interface{}) error } var _ baseCodec = Codec(nil) var _ baseCodec = encoding.Codec(nil) // Codec defines the interface gRPC uses to encode and decode messages. // Note that implementations of this interface must be thread safe; // a Codec's methods can be called from concurrent goroutines. // // Deprecated: use encoding.Codec instead. type Codec interface { // Marshal returns the wire format of v. Marshal(v interface{}) ([]byte, error) // Unmarshal parses the wire format into v. Unmarshal(data []byte, v interface{}) error // String returns the name of the Codec implementation. This is unused by // gRPC. String() string } grpc-go-1.22.1/codec_test.go000066400000000000000000000015771351635773100156230ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "testing" "google.golang.org/grpc/encoding" "google.golang.org/grpc/encoding/proto" ) func (s) TestGetCodecForProtoIsNotNil(t *testing.T) { if encoding.GetCodec(proto.Name) == nil { t.Fatalf("encoding.GetCodec(%q) must not be nil by default", proto.Name) } } grpc-go-1.22.1/codegen.sh000077500000000000000000000011731351635773100151130ustar00rootroot00000000000000#!/usr/bin/env bash # This script serves as an example to demonstrate how to generate the gRPC-Go # interface and the related messages from .proto file. # # It assumes the installation of i) Google proto buffer compiler at # https://github.com/google/protobuf (after v2.6.1) and ii) the Go codegen # plugin at https://github.com/golang/protobuf (after 2015-02-20). If you have # not, please install them first. # # We recommend running this script at $GOPATH/src. # # If this is not what you need, feel free to make your own scripts. Again, this # script is for demonstration purpose. # proto=$1 protoc --go_out=plugins=grpc:. $proto grpc-go-1.22.1/codes/000077500000000000000000000000001351635773100142435ustar00rootroot00000000000000grpc-go-1.22.1/codes/code_string.go000066400000000000000000000027051351635773100170760ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package codes import "strconv" func (c Code) String() string { switch c { case OK: return "OK" case Canceled: return "Canceled" case Unknown: return "Unknown" case InvalidArgument: return "InvalidArgument" case DeadlineExceeded: return "DeadlineExceeded" case NotFound: return "NotFound" case AlreadyExists: return "AlreadyExists" case PermissionDenied: return "PermissionDenied" case ResourceExhausted: return "ResourceExhausted" case FailedPrecondition: return "FailedPrecondition" case Aborted: return "Aborted" case OutOfRange: return "OutOfRange" case Unimplemented: return "Unimplemented" case Internal: return "Internal" case Unavailable: return "Unavailable" case DataLoss: return "DataLoss" case Unauthenticated: return "Unauthenticated" default: return "Code(" + strconv.FormatInt(int64(c), 10) + ")" } } grpc-go-1.22.1/codes/codes.go000066400000000000000000000161741351635773100157000ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package codes defines the canonical error codes used by gRPC. It is // consistent across various languages. package codes // import "google.golang.org/grpc/codes" import ( "fmt" "strconv" ) // A Code is an unsigned 32-bit error code as defined in the gRPC spec. type Code uint32 const ( // OK is returned on success. OK Code = 0 // Canceled indicates the operation was canceled (typically by the caller). Canceled Code = 1 // Unknown error. An example of where this error may be returned is // if a Status value received from another address space belongs to // an error-space that is not known in this address space. Also // errors raised by APIs that do not return enough error information // may be converted to this error. Unknown Code = 2 // InvalidArgument indicates client specified an invalid argument. // Note that this differs from FailedPrecondition. It indicates arguments // that are problematic regardless of the state of the system // (e.g., a malformed file name). InvalidArgument Code = 3 // DeadlineExceeded means operation expired before completion. // For operations that change the state of the system, this error may be // returned even if the operation has completed successfully. For // example, a successful response from a server could have been delayed // long enough for the deadline to expire. DeadlineExceeded Code = 4 // NotFound means some requested entity (e.g., file or directory) was // not found. NotFound Code = 5 // AlreadyExists means an attempt to create an entity failed because one // already exists. AlreadyExists Code = 6 // PermissionDenied indicates the caller does not have permission to // execute the specified operation. It must not be used for rejections // caused by exhausting some resource (use ResourceExhausted // instead for those errors). It must not be // used if the caller cannot be identified (use Unauthenticated // instead for those errors). PermissionDenied Code = 7 // ResourceExhausted indicates some resource has been exhausted, perhaps // a per-user quota, or perhaps the entire file system is out of space. ResourceExhausted Code = 8 // FailedPrecondition indicates operation was rejected because the // system is not in a state required for the operation's execution. // For example, directory to be deleted may be non-empty, an rmdir // operation is applied to a non-directory, etc. // // A litmus test that may help a service implementor in deciding // between FailedPrecondition, Aborted, and Unavailable: // (a) Use Unavailable if the client can retry just the failing call. // (b) Use Aborted if the client should retry at a higher-level // (e.g., restarting a read-modify-write sequence). // (c) Use FailedPrecondition if the client should not retry until // the system state has been explicitly fixed. E.g., if an "rmdir" // fails because the directory is non-empty, FailedPrecondition // should be returned since the client should not retry unless // they have first fixed up the directory by deleting files from it. 
// (d) Use FailedPrecondition if the client performs conditional // REST Get/Update/Delete on a resource and the resource on the // server does not match the condition. E.g., conflicting // read-modify-write on the same resource. FailedPrecondition Code = 9 // Aborted indicates the operation was aborted, typically due to a // concurrency issue like sequencer check failures, transaction aborts, // etc. // // See litmus test above for deciding between FailedPrecondition, // Aborted, and Unavailable. Aborted Code = 10 // OutOfRange means operation was attempted past the valid range. // E.g., seeking or reading past end of file. // // Unlike InvalidArgument, this error indicates a problem that may // be fixed if the system state changes. For example, a 32-bit file // system will generate InvalidArgument if asked to read at an // offset that is not in the range [0,2^32-1], but it will generate // OutOfRange if asked to read from an offset past the current // file size. // // There is a fair bit of overlap between FailedPrecondition and // OutOfRange. We recommend using OutOfRange (the more specific // error) when it applies so that callers who are iterating through // a space can easily look for an OutOfRange error to detect when // they are done. OutOfRange Code = 11 // Unimplemented indicates operation is not implemented or not // supported/enabled in this service. Unimplemented Code = 12 // Internal errors. Means some invariants expected by underlying // system has been broken. If you see one of these errors, // something is very broken. Internal Code = 13 // Unavailable indicates the service is currently unavailable. // This is a most likely a transient condition and may be corrected // by retrying with a backoff. Note that it is not always safe to retry // non-idempotent operations. // // See litmus test above for deciding between FailedPrecondition, // Aborted, and Unavailable. Unavailable Code = 14 // DataLoss indicates unrecoverable data loss or corruption. DataLoss Code = 15 // Unauthenticated indicates the request does not have valid // authentication credentials for the operation. Unauthenticated Code = 16 _maxCode = 17 ) var strToCode = map[string]Code{ `"OK"`: OK, `"CANCELLED"`:/* [sic] */ Canceled, `"UNKNOWN"`: Unknown, `"INVALID_ARGUMENT"`: InvalidArgument, `"DEADLINE_EXCEEDED"`: DeadlineExceeded, `"NOT_FOUND"`: NotFound, `"ALREADY_EXISTS"`: AlreadyExists, `"PERMISSION_DENIED"`: PermissionDenied, `"RESOURCE_EXHAUSTED"`: ResourceExhausted, `"FAILED_PRECONDITION"`: FailedPrecondition, `"ABORTED"`: Aborted, `"OUT_OF_RANGE"`: OutOfRange, `"UNIMPLEMENTED"`: Unimplemented, `"INTERNAL"`: Internal, `"UNAVAILABLE"`: Unavailable, `"DATA_LOSS"`: DataLoss, `"UNAUTHENTICATED"`: Unauthenticated, } // UnmarshalJSON unmarshals b into the Code. func (c *Code) UnmarshalJSON(b []byte) error { // From json.Unmarshaler: By convention, to approximate the behavior of // Unmarshal itself, Unmarshalers implement UnmarshalJSON([]byte("null")) as // a no-op. if string(b) == "null" { return nil } if c == nil { return fmt.Errorf("nil receiver passed to UnmarshalJSON") } if ci, err := strconv.ParseUint(string(b), 10, 32); err == nil { if ci >= _maxCode { return fmt.Errorf("invalid code: %q", ci) } *c = Code(ci) return nil } if jc, ok := strToCode[string(b)]; ok { *c = jc return nil } return fmt.Errorf("invalid code: %q", string(b)) } grpc-go-1.22.1/codes/codes_test.go000066400000000000000000000044611351635773100167330ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. 
* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package codes import ( "encoding/json" "reflect" "testing" cpb "google.golang.org/genproto/googleapis/rpc/code" ) func TestUnmarshalJSON(t *testing.T) { for s, v := range cpb.Code_value { want := Code(v) var got Code if err := got.UnmarshalJSON([]byte(`"` + s + `"`)); err != nil || got != want { t.Errorf("got.UnmarshalJSON(%q) = %v; want . got=%v; want %v", s, err, got, want) } } } func TestJSONUnmarshal(t *testing.T) { var got []Code want := []Code{OK, NotFound, Internal, Canceled} in := `["OK", "NOT_FOUND", "INTERNAL", "CANCELLED"]` err := json.Unmarshal([]byte(in), &got) if err != nil || !reflect.DeepEqual(got, want) { t.Fatalf("json.Unmarshal(%q, &got) = %v; want . got=%v; want %v", in, err, got, want) } } func TestUnmarshalJSON_NilReceiver(t *testing.T) { var got *Code in := OK.String() if err := got.UnmarshalJSON([]byte(in)); err == nil { t.Errorf("got.UnmarshalJSON(%q) = nil; want . got=%v", in, got) } } func TestUnmarshalJSON_UnknownInput(t *testing.T) { var got Code for _, in := range [][]byte{[]byte(""), []byte("xxx"), []byte("Code(17)"), nil} { if err := got.UnmarshalJSON([]byte(in)); err == nil { t.Errorf("got.UnmarshalJSON(%q) = nil; want . got=%v", in, got) } } } func TestUnmarshalJSON_MarshalUnmarshal(t *testing.T) { for i := 0; i < _maxCode; i++ { var cUnMarshaled Code c := Code(i) cJSON, err := json.Marshal(c) if err != nil { t.Errorf("marshalling %q failed: %v", c, err) } if err := json.Unmarshal(cJSON, &cUnMarshaled); err != nil { t.Errorf("unmarshalling code failed: %s", err) } if c != cUnMarshaled { t.Errorf("code is %q after marshalling/unmarshalling, expected %q", cUnMarshaled, c) } } } grpc-go-1.22.1/connectivity/000077500000000000000000000000001351635773100156645ustar00rootroot00000000000000grpc-go-1.22.1/connectivity/connectivity.go000066400000000000000000000040771351635773100207410ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package connectivity defines connectivity semantics. // For details, see https://github.com/grpc/grpc/blob/master/doc/connectivity-semantics-and-api.md. // All APIs in this package are experimental. package connectivity import ( "context" "google.golang.org/grpc/grpclog" ) // State indicates the state of connectivity. // It can be the state of a ClientConn or SubConn. 
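//
// A hypothetical polling sketch (added for illustration; cc is assumed to be a
// *grpc.ClientConn, whose GetState and WaitForStateChange methods report these
// states):
//
//	for state := cc.GetState(); state != connectivity.Ready; state = cc.GetState() {
//		if !cc.WaitForStateChange(ctx, state) {
//			break // ctx expired or was canceled before the state changed.
//		}
//	}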
type State int func (s State) String() string { switch s { case Idle: return "IDLE" case Connecting: return "CONNECTING" case Ready: return "READY" case TransientFailure: return "TRANSIENT_FAILURE" case Shutdown: return "SHUTDOWN" default: grpclog.Errorf("unknown connectivity state: %d", s) return "Invalid-State" } } const ( // Idle indicates the ClientConn is idle. Idle State = iota // Connecting indicates the ClientConn is connecting. Connecting // Ready indicates the ClientConn is ready for work. Ready // TransientFailure indicates the ClientConn has seen a failure but expects to recover. TransientFailure // Shutdown indicates the ClientConn has started shutting down. Shutdown ) // Reporter reports the connectivity states. type Reporter interface { // CurrentState returns the current state of the reporter. CurrentState() State // WaitForStateChange blocks until the reporter's state is different from the given state, // and returns true. // It returns false if <-ctx.Done() can proceed (ctx got timeout or got canceled). WaitForStateChange(context.Context, State) bool } grpc-go-1.22.1/credentials/000077500000000000000000000000001351635773100154435ustar00rootroot00000000000000grpc-go-1.22.1/credentials/alts/000077500000000000000000000000001351635773100164065ustar00rootroot00000000000000grpc-go-1.22.1/credentials/alts/alts.go000066400000000000000000000252071351635773100177060ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package alts implements the ALTS credential support by gRPC library, which // encapsulates all the state needed by a client to authenticate with a server // using ALTS and make various assertions, e.g., about the client's identity, // role, or whether it is authorized to make a particular call. // This package is experimental. package alts import ( "context" "errors" "fmt" "net" "sync" "time" "google.golang.org/grpc/credentials" core "google.golang.org/grpc/credentials/alts/internal" "google.golang.org/grpc/credentials/alts/internal/handshaker" "google.golang.org/grpc/credentials/alts/internal/handshaker/service" altspb "google.golang.org/grpc/credentials/alts/internal/proto/grpc_gcp" "google.golang.org/grpc/grpclog" ) const ( // hypervisorHandshakerServiceAddress represents the default ALTS gRPC // handshaker service address in the hypervisor. hypervisorHandshakerServiceAddress = "metadata.google.internal:8080" // defaultTimeout specifies the server handshake timeout. defaultTimeout = 30.0 * time.Second // The following constants specify the minimum and maximum acceptable // protocol versions. 
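	// With both the minimum and maximum bounds set to 2.1 below, this release
	// negotiates exactly RPC protocol version 2.1; see checkRPCVersions for how
	// the local and peer version ranges are intersected.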
protocolVersionMaxMajor = 2 protocolVersionMaxMinor = 1 protocolVersionMinMajor = 2 protocolVersionMinMinor = 1 ) var ( once sync.Once maxRPCVersion = &altspb.RpcProtocolVersions_Version{ Major: protocolVersionMaxMajor, Minor: protocolVersionMaxMinor, } minRPCVersion = &altspb.RpcProtocolVersions_Version{ Major: protocolVersionMinMajor, Minor: protocolVersionMinMinor, } // ErrUntrustedPlatform is returned from ClientHandshake and // ServerHandshake is running on a platform where the trustworthiness of // the handshaker service is not guaranteed. ErrUntrustedPlatform = errors.New("ALTS: untrusted platform. ALTS is only supported on GCP") ) // AuthInfo exposes security information from the ALTS handshake to the // application. This interface is to be implemented by ALTS. Users should not // need a brand new implementation of this interface. For situations like // testing, any new implementation should embed this interface. This allows // ALTS to add new methods to this interface. type AuthInfo interface { // ApplicationProtocol returns application protocol negotiated for the // ALTS connection. ApplicationProtocol() string // RecordProtocol returns the record protocol negotiated for the ALTS // connection. RecordProtocol() string // SecurityLevel returns the security level of the created ALTS secure // channel. SecurityLevel() altspb.SecurityLevel // PeerServiceAccount returns the peer service account. PeerServiceAccount() string // LocalServiceAccount returns the local service account. LocalServiceAccount() string // PeerRPCVersions returns the RPC version supported by the peer. PeerRPCVersions() *altspb.RpcProtocolVersions } // ClientOptions contains the client-side options of an ALTS channel. These // options will be passed to the underlying ALTS handshaker. type ClientOptions struct { // TargetServiceAccounts contains a list of expected target service // accounts. TargetServiceAccounts []string // HandshakerServiceAddress represents the ALTS handshaker gRPC service // address to connect to. HandshakerServiceAddress string } // DefaultClientOptions creates a new ClientOptions object with the default // values. func DefaultClientOptions() *ClientOptions { return &ClientOptions{ HandshakerServiceAddress: hypervisorHandshakerServiceAddress, } } // ServerOptions contains the server-side options of an ALTS channel. These // options will be passed to the underlying ALTS handshaker. type ServerOptions struct { // HandshakerServiceAddress represents the ALTS handshaker gRPC service // address to connect to. HandshakerServiceAddress string } // DefaultServerOptions creates a new ServerOptions object with the default // values. func DefaultServerOptions() *ServerOptions { return &ServerOptions{ HandshakerServiceAddress: hypervisorHandshakerServiceAddress, } } // altsTC is the credentials required for authenticating a connection using ALTS. // It implements credentials.TransportCredentials interface. type altsTC struct { info *credentials.ProtocolInfo side core.Side accounts []string hsAddress string } // NewClientCreds constructs a client-side ALTS TransportCredentials object. func NewClientCreds(opts *ClientOptions) credentials.TransportCredentials { return newALTS(core.ClientSide, opts.TargetServiceAccounts, opts.HandshakerServiceAddress) } // NewServerCreds constructs a server-side ALTS TransportCredentials object. 
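//
// A minimal, hypothetical usage sketch (assumes google.golang.org/grpc is
// imported as grpc and lis is an existing net.Listener):
//
//	creds := alts.NewServerCreds(alts.DefaultServerOptions())
//	s := grpc.NewServer(grpc.Creds(creds))
//	// Register services on s, then serve: s.Serve(lis)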
func NewServerCreds(opts *ServerOptions) credentials.TransportCredentials { return newALTS(core.ServerSide, nil, opts.HandshakerServiceAddress) } func newALTS(side core.Side, accounts []string, hsAddress string) credentials.TransportCredentials { once.Do(func() { vmOnGCP = isRunningOnGCP() }) if hsAddress == "" { hsAddress = hypervisorHandshakerServiceAddress } return &altsTC{ info: &credentials.ProtocolInfo{ SecurityProtocol: "alts", SecurityVersion: "1.0", }, side: side, accounts: accounts, hsAddress: hsAddress, } } // ClientHandshake implements the client side handshake protocol. func (g *altsTC) ClientHandshake(ctx context.Context, addr string, rawConn net.Conn) (_ net.Conn, _ credentials.AuthInfo, err error) { if !vmOnGCP { return nil, nil, ErrUntrustedPlatform } // Connecting to ALTS handshaker service. hsConn, err := service.Dial(g.hsAddress) if err != nil { return nil, nil, err } // Do not close hsConn since it is shared with other handshakes. // Possible context leak: // The cancel function for the child context we create will only be // called a non-nil error is returned. var cancel context.CancelFunc ctx, cancel = context.WithCancel(ctx) defer func() { if err != nil { cancel() } }() opts := handshaker.DefaultClientHandshakerOptions() opts.TargetName = addr opts.TargetServiceAccounts = g.accounts opts.RPCVersions = &altspb.RpcProtocolVersions{ MaxRpcVersion: maxRPCVersion, MinRpcVersion: minRPCVersion, } chs, err := handshaker.NewClientHandshaker(ctx, hsConn, rawConn, opts) if err != nil { return nil, nil, err } defer func() { if err != nil { chs.Close() } }() secConn, authInfo, err := chs.ClientHandshake(ctx) if err != nil { return nil, nil, err } altsAuthInfo, ok := authInfo.(AuthInfo) if !ok { return nil, nil, errors.New("client-side auth info is not of type alts.AuthInfo") } match, _ := checkRPCVersions(opts.RPCVersions, altsAuthInfo.PeerRPCVersions()) if !match { return nil, nil, fmt.Errorf("server-side RPC versions are not compatible with this client, local versions: %v, peer versions: %v", opts.RPCVersions, altsAuthInfo.PeerRPCVersions()) } return secConn, authInfo, nil } // ServerHandshake implements the server side ALTS handshaker. func (g *altsTC) ServerHandshake(rawConn net.Conn) (_ net.Conn, _ credentials.AuthInfo, err error) { if !vmOnGCP { return nil, nil, ErrUntrustedPlatform } // Connecting to ALTS handshaker service. hsConn, err := service.Dial(g.hsAddress) if err != nil { return nil, nil, err } // Do not close hsConn since it's shared with other handshakes. 
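	// ServerHandshake receives no caller-supplied context, so derive one from
	// context.Background with defaultTimeout to keep a stalled handshake from
	// blocking indefinitely.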
ctx, cancel := context.WithTimeout(context.Background(), defaultTimeout) defer cancel() opts := handshaker.DefaultServerHandshakerOptions() opts.RPCVersions = &altspb.RpcProtocolVersions{ MaxRpcVersion: maxRPCVersion, MinRpcVersion: minRPCVersion, } shs, err := handshaker.NewServerHandshaker(ctx, hsConn, rawConn, opts) if err != nil { return nil, nil, err } defer func() { if err != nil { shs.Close() } }() secConn, authInfo, err := shs.ServerHandshake(ctx) if err != nil { return nil, nil, err } altsAuthInfo, ok := authInfo.(AuthInfo) if !ok { return nil, nil, errors.New("server-side auth info is not of type alts.AuthInfo") } match, _ := checkRPCVersions(opts.RPCVersions, altsAuthInfo.PeerRPCVersions()) if !match { return nil, nil, fmt.Errorf("client-side RPC versions is not compatible with this server, local versions: %v, peer versions: %v", opts.RPCVersions, altsAuthInfo.PeerRPCVersions()) } return secConn, authInfo, nil } func (g *altsTC) Info() credentials.ProtocolInfo { return *g.info } func (g *altsTC) Clone() credentials.TransportCredentials { info := *g.info var accounts []string if g.accounts != nil { accounts = make([]string, len(g.accounts)) copy(accounts, g.accounts) } return &altsTC{ info: &info, side: g.side, hsAddress: g.hsAddress, accounts: accounts, } } func (g *altsTC) OverrideServerName(serverNameOverride string) error { g.info.ServerName = serverNameOverride return nil } // compareRPCVersion returns 0 if v1 == v2, 1 if v1 > v2 and -1 if v1 < v2. func compareRPCVersions(v1, v2 *altspb.RpcProtocolVersions_Version) int { switch { case v1.GetMajor() > v2.GetMajor(), v1.GetMajor() == v2.GetMajor() && v1.GetMinor() > v2.GetMinor(): return 1 case v1.GetMajor() < v2.GetMajor(), v1.GetMajor() == v2.GetMajor() && v1.GetMinor() < v2.GetMinor(): return -1 } return 0 } // checkRPCVersions performs a version check between local and peer rpc protocol // versions. This function returns true if the check passes which means both // parties agreed on a common rpc protocol to use, and false otherwise. The // function also returns the highest common RPC protocol version both parties // agreed on. func checkRPCVersions(local, peer *altspb.RpcProtocolVersions) (bool, *altspb.RpcProtocolVersions_Version) { if local == nil || peer == nil { grpclog.Error("invalid checkRPCVersions argument, either local or peer is nil.") return false, nil } // maxCommonVersion is MIN(local.max, peer.max). maxCommonVersion := local.GetMaxRpcVersion() if compareRPCVersions(local.GetMaxRpcVersion(), peer.GetMaxRpcVersion()) > 0 { maxCommonVersion = peer.GetMaxRpcVersion() } // minCommonVersion is MAX(local.min, peer.min). minCommonVersion := peer.GetMinRpcVersion() if compareRPCVersions(local.GetMinRpcVersion(), peer.GetMinRpcVersion()) > 0 { minCommonVersion = local.GetMinRpcVersion() } if compareRPCVersions(maxCommonVersion, minCommonVersion) < 0 { return false, nil } return true, maxCommonVersion } grpc-go-1.22.1/credentials/alts/alts_test.go000066400000000000000000000166311351635773100207460ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
* See the License for the specific language governing permissions and * limitations under the License. * */ package alts import ( "reflect" "testing" "github.com/golang/protobuf/proto" altspb "google.golang.org/grpc/credentials/alts/internal/proto/grpc_gcp" ) func TestInfoServerName(t *testing.T) { // This is not testing any handshaker functionality, so it's fine to only // use NewServerCreds and not NewClientCreds. alts := NewServerCreds(DefaultServerOptions()) if got, want := alts.Info().ServerName, ""; got != want { t.Fatalf("%v.Info().ServerName = %v, want %v", alts, got, want) } } func TestOverrideServerName(t *testing.T) { wantServerName := "server.name" // This is not testing any handshaker functionality, so it's fine to only // use NewServerCreds and not NewClientCreds. c := NewServerCreds(DefaultServerOptions()) c.OverrideServerName(wantServerName) if got, want := c.Info().ServerName, wantServerName; got != want { t.Fatalf("c.Info().ServerName = %v, want %v", got, want) } } func TestCloneClient(t *testing.T) { wantServerName := "server.name" opt := DefaultClientOptions() opt.TargetServiceAccounts = []string{"not", "empty"} c := NewClientCreds(opt) c.OverrideServerName(wantServerName) cc := c.Clone() if got, want := cc.Info().ServerName, wantServerName; got != want { t.Fatalf("cc.Info().ServerName = %v, want %v", got, want) } cc.OverrideServerName("") if got, want := c.Info().ServerName, wantServerName; got != want { t.Fatalf("Change in clone should not affect the original, c.Info().ServerName = %v, want %v", got, want) } if got, want := cc.Info().ServerName, ""; got != want { t.Fatalf("cc.Info().ServerName = %v, want %v", got, want) } ct := c.(*altsTC) cct := cc.(*altsTC) if ct.side != cct.side { t.Errorf("cc.side = %q, want %q", cct.side, ct.side) } if ct.hsAddress != cct.hsAddress { t.Errorf("cc.hsAddress = %q, want %q", cct.hsAddress, ct.hsAddress) } if !reflect.DeepEqual(ct.accounts, cct.accounts) { t.Errorf("cc.accounts = %q, want %q", cct.accounts, ct.accounts) } } func TestCloneServer(t *testing.T) { wantServerName := "server.name" c := NewServerCreds(DefaultServerOptions()) c.OverrideServerName(wantServerName) cc := c.Clone() if got, want := cc.Info().ServerName, wantServerName; got != want { t.Fatalf("cc.Info().ServerName = %v, want %v", got, want) } cc.OverrideServerName("") if got, want := c.Info().ServerName, wantServerName; got != want { t.Fatalf("Change in clone should not affect the original, c.Info().ServerName = %v, want %v", got, want) } if got, want := cc.Info().ServerName, ""; got != want { t.Fatalf("cc.Info().ServerName = %v, want %v", got, want) } ct := c.(*altsTC) cct := cc.(*altsTC) if ct.side != cct.side { t.Errorf("cc.side = %q, want %q", cct.side, ct.side) } if ct.hsAddress != cct.hsAddress { t.Errorf("cc.hsAddress = %q, want %q", cct.hsAddress, ct.hsAddress) } if !reflect.DeepEqual(ct.accounts, cct.accounts) { t.Errorf("cc.accounts = %q, want %q", cct.accounts, ct.accounts) } } func TestInfo(t *testing.T) { // This is not testing any handshaker functionality, so it's fine to only // use NewServerCreds and not NewClientCreds. 
c := NewServerCreds(DefaultServerOptions()) info := c.Info() if got, want := info.ProtocolVersion, ""; got != want { t.Errorf("info.ProtocolVersion=%v, want %v", got, want) } if got, want := info.SecurityProtocol, "alts"; got != want { t.Errorf("info.SecurityProtocol=%v, want %v", got, want) } if got, want := info.SecurityVersion, "1.0"; got != want { t.Errorf("info.SecurityVersion=%v, want %v", got, want) } if got, want := info.ServerName, ""; got != want { t.Errorf("info.ServerName=%v, want %v", got, want) } } func TestCompareRPCVersions(t *testing.T) { for _, tc := range []struct { v1 *altspb.RpcProtocolVersions_Version v2 *altspb.RpcProtocolVersions_Version output int }{ { version(3, 2), version(2, 1), 1, }, { version(3, 2), version(3, 1), 1, }, { version(2, 1), version(3, 2), -1, }, { version(3, 1), version(3, 2), -1, }, { version(3, 2), version(3, 2), 0, }, } { if got, want := compareRPCVersions(tc.v1, tc.v2), tc.output; got != want { t.Errorf("compareRPCVersions(%v, %v)=%v, want %v", tc.v1, tc.v2, got, want) } } } func TestCheckRPCVersions(t *testing.T) { for _, tc := range []struct { desc string local *altspb.RpcProtocolVersions peer *altspb.RpcProtocolVersions output bool maxCommonVersion *altspb.RpcProtocolVersions_Version }{ { "local.max > peer.max and local.min > peer.min", versions(2, 1, 3, 2), versions(1, 2, 2, 1), true, version(2, 1), }, { "local.max > peer.max and local.min < peer.min", versions(1, 2, 3, 2), versions(2, 1, 2, 1), true, version(2, 1), }, { "local.max > peer.max and local.min = peer.min", versions(2, 1, 3, 2), versions(2, 1, 2, 1), true, version(2, 1), }, { "local.max < peer.max and local.min > peer.min", versions(2, 1, 2, 1), versions(1, 2, 3, 2), true, version(2, 1), }, { "local.max = peer.max and local.min > peer.min", versions(2, 1, 2, 1), versions(1, 2, 2, 1), true, version(2, 1), }, { "local.max < peer.max and local.min < peer.min", versions(1, 2, 2, 1), versions(2, 1, 3, 2), true, version(2, 1), }, { "local.max < peer.max and local.min = peer.min", versions(1, 2, 2, 1), versions(1, 2, 3, 2), true, version(2, 1), }, { "local.max = peer.max and local.min < peer.min", versions(1, 2, 2, 1), versions(2, 1, 2, 1), true, version(2, 1), }, { "all equal", versions(2, 1, 2, 1), versions(2, 1, 2, 1), true, version(2, 1), }, { "max is smaller than min", versions(2, 1, 1, 2), versions(2, 1, 1, 2), false, nil, }, { "no overlap, local > peer", versions(4, 3, 6, 5), versions(1, 0, 2, 1), false, nil, }, { "no overlap, local < peer", versions(1, 0, 2, 1), versions(4, 3, 6, 5), false, nil, }, { "no overlap, max < min", versions(6, 5, 4, 3), versions(2, 1, 1, 0), false, nil, }, } { output, maxCommonVersion := checkRPCVersions(tc.local, tc.peer) if got, want := output, tc.output; got != want { t.Errorf("%v: checkRPCVersions(%v, %v)=(%v, _), want (%v, _)", tc.desc, tc.local, tc.peer, got, want) } if got, want := maxCommonVersion, tc.maxCommonVersion; !proto.Equal(got, want) { t.Errorf("%v: checkRPCVersions(%v, %v)=(_, %v), want (_, %v)", tc.desc, tc.local, tc.peer, got, want) } } } func version(major, minor uint32) *altspb.RpcProtocolVersions_Version { return &altspb.RpcProtocolVersions_Version{ Major: major, Minor: minor, } } func versions(minMajor, minMinor, maxMajor, maxMinor uint32) *altspb.RpcProtocolVersions { return &altspb.RpcProtocolVersions{ MinRpcVersion: version(minMajor, minMinor), MaxRpcVersion: version(maxMajor, maxMinor), } } 
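// The following client-side sketch is illustrative only and is not part of the
// gRPC-Go source tree: it shows how ClientOptions and NewClientCreds from
// alts.go above are typically wired into a dial. The package name
// "altsexample", the dialWithALTS helper, and its parameters are assumptions
// made up for the example.
package altsexample

import (
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/alts"
)

// dialWithALTS dials addr over an ALTS-secured connection, restricting the
// acceptable server identities to the given service accounts (see
// ClientOptions.TargetServiceAccounts above).
func dialWithALTS(addr string, accounts []string) (*grpc.ClientConn, error) {
	opts := alts.DefaultClientOptions()
	opts.TargetServiceAccounts = accounts
	creds := alts.NewClientCreds(opts)
	// WithTransportCredentials installs the ALTS handshaker for every
	// connection made through the returned ClientConn.
	return grpc.Dial(addr, grpc.WithTransportCredentials(creds))
}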
grpc-go-1.22.1/credentials/alts/internal/000077500000000000000000000000001351635773100202225ustar00rootroot00000000000000grpc-go-1.22.1/credentials/alts/internal/authinfo/000077500000000000000000000000001351635773100220375ustar00rootroot00000000000000grpc-go-1.22.1/credentials/alts/internal/authinfo/authinfo.go000066400000000000000000000054711351635773100242120ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package authinfo provide authentication information returned by handshakers. package authinfo import ( "google.golang.org/grpc/credentials" altspb "google.golang.org/grpc/credentials/alts/internal/proto/grpc_gcp" ) var _ credentials.AuthInfo = (*altsAuthInfo)(nil) // altsAuthInfo exposes security information from the ALTS handshake to the // application. altsAuthInfo is immutable and implements credentials.AuthInfo. type altsAuthInfo struct { p *altspb.AltsContext } // New returns a new altsAuthInfo object given handshaker results. func New(result *altspb.HandshakerResult) credentials.AuthInfo { return newAuthInfo(result) } func newAuthInfo(result *altspb.HandshakerResult) *altsAuthInfo { return &altsAuthInfo{ p: &altspb.AltsContext{ ApplicationProtocol: result.GetApplicationProtocol(), RecordProtocol: result.GetRecordProtocol(), // TODO: assign security level from result. SecurityLevel: altspb.SecurityLevel_INTEGRITY_AND_PRIVACY, PeerServiceAccount: result.GetPeerIdentity().GetServiceAccount(), LocalServiceAccount: result.GetLocalIdentity().GetServiceAccount(), PeerRpcVersions: result.GetPeerRpcVersions(), }, } } // AuthType identifies the context as providing ALTS authentication information. func (s *altsAuthInfo) AuthType() string { return "alts" } // ApplicationProtocol returns the context's application protocol. func (s *altsAuthInfo) ApplicationProtocol() string { return s.p.GetApplicationProtocol() } // RecordProtocol returns the context's record protocol. func (s *altsAuthInfo) RecordProtocol() string { return s.p.GetRecordProtocol() } // SecurityLevel returns the context's security level. func (s *altsAuthInfo) SecurityLevel() altspb.SecurityLevel { return s.p.GetSecurityLevel() } // PeerServiceAccount returns the context's peer service account. func (s *altsAuthInfo) PeerServiceAccount() string { return s.p.GetPeerServiceAccount() } // LocalServiceAccount returns the context's local service account. func (s *altsAuthInfo) LocalServiceAccount() string { return s.p.GetLocalServiceAccount() } // PeerRPCVersions returns the context's peer RPC versions. func (s *altsAuthInfo) PeerRPCVersions() *altspb.RpcProtocolVersions { return s.p.GetPeerRpcVersions() } grpc-go-1.22.1/credentials/alts/internal/authinfo/authinfo_test.go000066400000000000000000000075601351635773100252520ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package authinfo import ( "reflect" "testing" altspb "google.golang.org/grpc/credentials/alts/internal/proto/grpc_gcp" ) const ( testAppProtocol = "my_app" testRecordProtocol = "very_secure_protocol" testPeerAccount = "peer_service_account" testLocalAccount = "local_service_account" testPeerHostname = "peer_hostname" testLocalHostname = "local_hostname" ) func TestALTSAuthInfo(t *testing.T) { for _, tc := range []struct { result *altspb.HandshakerResult outAppProtocol string outRecordProtocol string outSecurityLevel altspb.SecurityLevel outPeerAccount string outLocalAccount string outPeerRPCVersions *altspb.RpcProtocolVersions }{ { &altspb.HandshakerResult{ ApplicationProtocol: testAppProtocol, RecordProtocol: testRecordProtocol, PeerIdentity: &altspb.Identity{ IdentityOneof: &altspb.Identity_ServiceAccount{ ServiceAccount: testPeerAccount, }, }, LocalIdentity: &altspb.Identity{ IdentityOneof: &altspb.Identity_ServiceAccount{ ServiceAccount: testLocalAccount, }, }, }, testAppProtocol, testRecordProtocol, altspb.SecurityLevel_INTEGRITY_AND_PRIVACY, testPeerAccount, testLocalAccount, nil, }, { &altspb.HandshakerResult{ ApplicationProtocol: testAppProtocol, RecordProtocol: testRecordProtocol, PeerIdentity: &altspb.Identity{ IdentityOneof: &altspb.Identity_Hostname{ Hostname: testPeerHostname, }, }, LocalIdentity: &altspb.Identity{ IdentityOneof: &altspb.Identity_Hostname{ Hostname: testLocalHostname, }, }, PeerRpcVersions: &altspb.RpcProtocolVersions{ MaxRpcVersion: &altspb.RpcProtocolVersions_Version{ Major: 20, Minor: 21, }, MinRpcVersion: &altspb.RpcProtocolVersions_Version{ Major: 10, Minor: 11, }, }, }, testAppProtocol, testRecordProtocol, altspb.SecurityLevel_INTEGRITY_AND_PRIVACY, "", "", &altspb.RpcProtocolVersions{ MaxRpcVersion: &altspb.RpcProtocolVersions_Version{ Major: 20, Minor: 21, }, MinRpcVersion: &altspb.RpcProtocolVersions_Version{ Major: 10, Minor: 11, }, }, }, } { authInfo := newAuthInfo(tc.result) if got, want := authInfo.AuthType(), "alts"; got != want { t.Errorf("authInfo.AuthType()=%v, want %v", got, want) } if got, want := authInfo.ApplicationProtocol(), tc.outAppProtocol; got != want { t.Errorf("authInfo.ApplicationProtocol()=%v, want %v", got, want) } if got, want := authInfo.RecordProtocol(), tc.outRecordProtocol; got != want { t.Errorf("authInfo.RecordProtocol()=%v, want %v", got, want) } if got, want := authInfo.SecurityLevel(), tc.outSecurityLevel; got != want { t.Errorf("authInfo.SecurityLevel()=%v, want %v", got, want) } if got, want := authInfo.PeerServiceAccount(), tc.outPeerAccount; got != want { t.Errorf("authInfo.PeerServiceAccount()=%v, want %v", got, want) } if got, want := authInfo.LocalServiceAccount(), tc.outLocalAccount; got != want { t.Errorf("authInfo.LocalServiceAccount()=%v, want %v", got, want) } if got, want := authInfo.PeerRPCVersions(), tc.outPeerRPCVersions; !reflect.DeepEqual(got, want) { t.Errorf("authinfo.PeerRpcVersions()=%v, want %v", got, want) } } } grpc-go-1.22.1/credentials/alts/internal/common.go000066400000000000000000000043661351635773100220520ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. 
* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ //go:generate ./regenerate.sh // Package internal contains common core functionality for ALTS. package internal import ( "context" "net" "google.golang.org/grpc/credentials" ) const ( // ClientSide identifies the client in this communication. ClientSide Side = iota // ServerSide identifies the server in this communication. ServerSide ) // PeerNotRespondingError is returned when a peer server is not responding // after a channel has been established. It is treated as a temporary connection // error and re-connection to the server should be attempted. var PeerNotRespondingError = &peerNotRespondingError{} // Side identifies the party's role: client or server. type Side int type peerNotRespondingError struct{} // Return an error message for the purpose of logging. func (e *peerNotRespondingError) Error() string { return "peer server is not responding and re-connection should be attempted." } // Temporary indicates if this connection error is temporary or fatal. func (e *peerNotRespondingError) Temporary() bool { return true } // Handshaker defines a ALTS handshaker interface. type Handshaker interface { // ClientHandshake starts and completes a client-side handshaking and // returns a secure connection and corresponding auth information. ClientHandshake(ctx context.Context) (net.Conn, credentials.AuthInfo, error) // ServerHandshake starts and completes a server-side handshaking and // returns a secure connection and corresponding auth information. ServerHandshake(ctx context.Context) (net.Conn, credentials.AuthInfo, error) // Close terminates the Handshaker. It should be called when the caller // obtains the secure connection. Close() } grpc-go-1.22.1/credentials/alts/internal/conn/000077500000000000000000000000001351635773100211575ustar00rootroot00000000000000grpc-go-1.22.1/credentials/alts/internal/conn/aeadrekey.go000066400000000000000000000077471351635773100234570ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package conn import ( "bytes" "crypto/aes" "crypto/cipher" "crypto/hmac" "crypto/sha256" "encoding/binary" "fmt" "strconv" ) // rekeyAEAD holds the necessary information for an AEAD based on // AES-GCM that performs nonce-based key derivation and XORs the // nonce with a random mask. type rekeyAEAD struct { kdfKey []byte kdfCounter []byte nonceMask []byte nonceBuf []byte gcmAEAD cipher.AEAD } // KeySizeError signals that the given key does not have the correct size. 
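// For the rekeying AEAD constructed by newRekeyAEAD below, the expected key
// length is kdfKeyLen+nonceLen (a 32-byte HKDF-expand key followed by a
// 12-byte nonce mask, 44 bytes in total); any other length yields a
// KeySizeError.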
type KeySizeError int func (k KeySizeError) Error() string { return "alts/conn: invalid key size " + strconv.Itoa(int(k)) } // newRekeyAEAD creates a new instance of aes128gcm with rekeying. // The key argument should be 44 bytes, the first 32 bytes are used as a key // for HKDF-expand and the remainining 12 bytes are used as a random mask for // the counter. func newRekeyAEAD(key []byte) (*rekeyAEAD, error) { k := len(key) if k != kdfKeyLen+nonceLen { return nil, KeySizeError(k) } return &rekeyAEAD{ kdfKey: key[:kdfKeyLen], kdfCounter: make([]byte, kdfCounterLen), nonceMask: key[kdfKeyLen:], nonceBuf: make([]byte, nonceLen), gcmAEAD: nil, }, nil } // Seal rekeys if nonce[2:8] is different than in the last call, masks the nonce, // and calls Seal for aes128gcm. func (s *rekeyAEAD) Seal(dst, nonce, plaintext, additionalData []byte) []byte { if err := s.rekeyIfRequired(nonce); err != nil { panic(fmt.Sprintf("Rekeying failed with: %s", err.Error())) } maskNonce(s.nonceBuf, nonce, s.nonceMask) return s.gcmAEAD.Seal(dst, s.nonceBuf, plaintext, additionalData) } // Open rekeys if nonce[2:8] is different than in the last call, masks the nonce, // and calls Open for aes128gcm. func (s *rekeyAEAD) Open(dst, nonce, ciphertext, additionalData []byte) ([]byte, error) { if err := s.rekeyIfRequired(nonce); err != nil { return nil, err } maskNonce(s.nonceBuf, nonce, s.nonceMask) return s.gcmAEAD.Open(dst, s.nonceBuf, ciphertext, additionalData) } // rekeyIfRequired creates a new aes128gcm AEAD if the existing AEAD is nil // or cannot be used with given nonce. func (s *rekeyAEAD) rekeyIfRequired(nonce []byte) error { newKdfCounter := nonce[kdfCounterOffset : kdfCounterOffset+kdfCounterLen] if s.gcmAEAD != nil && bytes.Equal(newKdfCounter, s.kdfCounter) { return nil } copy(s.kdfCounter, newKdfCounter) a, err := aes.NewCipher(hkdfExpand(s.kdfKey, s.kdfCounter)) if err != nil { return err } s.gcmAEAD, err = cipher.NewGCM(a) return err } // maskNonce XORs the given nonce with the mask and stores the result in dst. func maskNonce(dst, nonce, mask []byte) { nonce1 := binary.LittleEndian.Uint64(nonce[:sizeUint64]) nonce2 := binary.LittleEndian.Uint32(nonce[sizeUint64:]) mask1 := binary.LittleEndian.Uint64(mask[:sizeUint64]) mask2 := binary.LittleEndian.Uint32(mask[sizeUint64:]) binary.LittleEndian.PutUint64(dst[:sizeUint64], nonce1^mask1) binary.LittleEndian.PutUint32(dst[sizeUint64:], nonce2^mask2) } // NonceSize returns the required nonce size. func (s *rekeyAEAD) NonceSize() int { return s.gcmAEAD.NonceSize() } // Overhead returns the ciphertext overhead. func (s *rekeyAEAD) Overhead() int { return s.gcmAEAD.Overhead() } // hkdfExpand computes the first 16 bytes of the HKDF-expand function // defined in RFC5869. func hkdfExpand(key, info []byte) []byte { mac := hmac.New(sha256.New, key) mac.Write(info) mac.Write([]byte{0x01}[:]) return mac.Sum(nil)[:aeadKeyLen] } grpc-go-1.22.1/credentials/alts/internal/conn/aeadrekey_test.go000066400000000000000000000362201351635773100245020ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
* See the License for the specific language governing permissions and * limitations under the License. * */ package conn import ( "bytes" "encoding/hex" "testing" ) // cryptoTestVector is struct for a rekey test vector type rekeyAEADTestVector struct { desc string key, nonce, plaintext, aad, ciphertext []byte } // Test encrypt and decrypt using (adapted) test vectors for AES-GCM. func TestAES128GCMRekeyEncrypt(t *testing.T) { for _, test := range []rekeyAEADTestVector{ // NIST vectors from: // http://csrc.nist.gov/groups/ST/toolkit/BCM/documents/proposedmodes/gcm/gcm-revised-spec.pdf // // IEEE vectors from: // http://www.ieee802.org/1/files/public/docs2011/bn-randall-test-vectors-0511-v1.pdf // // Key expanded by setting // expandedKey = (key || // key ^ {0x01,..,0x01} || // key ^ {0x02,..,0x02})[0:44]. { desc: "Derived from NIST test vector 1", key: dehex("0000000000000000000000000000000001010101010101010101010101010101020202020202020202020202"), nonce: dehex("000000000000000000000000"), aad: dehex(""), plaintext: dehex(""), ciphertext: dehex("85e873e002f6ebdc4060954eb8675508"), }, { desc: "Derived from NIST test vector 2", key: dehex("0000000000000000000000000000000001010101010101010101010101010101020202020202020202020202"), nonce: dehex("000000000000000000000000"), aad: dehex(""), plaintext: dehex("00000000000000000000000000000000"), ciphertext: dehex("51e9a8cb23ca2512c8256afff8e72d681aca19a1148ac115e83df4888cc00d11"), }, { desc: "Derived from NIST test vector 3", key: dehex("feffe9928665731c6d6a8f9467308308fffee8938764721d6c6b8e9566318209fcfdeb908467711e6f688d96"), nonce: dehex("cafebabefacedbaddecaf888"), aad: dehex(""), plaintext: dehex("d9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a721c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b391aafd255"), ciphertext: dehex("1018ed5a1402a86516d6576d70b2ffccca261b94df88b58f53b64dfba435d18b2f6e3b7869f9353d4ac8cf09afb1663daa7b4017e6fc2c177c0c087c0df1162129952213cee1bc6e9c8495dd705e1f3d"), }, { desc: "Derived from NIST test vector 4", key: dehex("feffe9928665731c6d6a8f9467308308fffee8938764721d6c6b8e9566318209fcfdeb908467711e6f688d96"), nonce: dehex("cafebabefacedbaddecaf888"), aad: dehex("feedfacedeadbeeffeedfacedeadbeefabaddad2"), plaintext: dehex("d9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a721c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b39"), ciphertext: dehex("1018ed5a1402a86516d6576d70b2ffccca261b94df88b58f53b64dfba435d18b2f6e3b7869f9353d4ac8cf09afb1663daa7b4017e6fc2c177c0c087c4764565d077e9124001ddb27fc0848c5"), }, { desc: "Derived from adapted NIST test vector 4 for KDF counter boundary (flip nonce bit 15)", key: dehex("feffe9928665731c6d6a8f9467308308fffee8938764721d6c6b8e9566318209fcfdeb908467711e6f688d96"), nonce: dehex("ca7ebabefacedbaddecaf888"), aad: dehex("feedfacedeadbeeffeedfacedeadbeefabaddad2"), plaintext: dehex("d9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a721c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b39"), ciphertext: dehex("e650d3c0fb879327f2d03287fa93cd07342b136215adbca00c3bd5099ec41832b1d18e0423ed26bb12c6cd09debb29230a94c0cee15903656f85edb6fc509b1b28216382172ecbcc31e1e9b1"), }, { desc: "Derived from adapted NIST test vector 4 for KDF counter boundary (flip nonce bit 16)", key: dehex("feffe9928665731c6d6a8f9467308308fffee8938764721d6c6b8e9566318209fcfdeb908467711e6f688d96"), nonce: dehex("cafebbbefacedbaddecaf888"), aad: dehex("feedfacedeadbeeffeedfacedeadbeefabaddad2"), plaintext: 
dehex("d9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a721c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b39"), ciphertext: dehex("c0121e6c954d0767f96630c33450999791b2da2ad05c4190169ccad9ac86ff1c721e3d82f2ad22ab463bab4a0754b7dd68ca4de7ea2531b625eda01f89312b2ab957d5c7f8568dd95fcdcd1f"), }, { desc: "Derived from adapted NIST test vector 4 for KDF counter boundary (flip nonce bit 63)", key: dehex("feffe9928665731c6d6a8f9467308308fffee8938764721d6c6b8e9566318209fcfdeb908467711e6f688d96"), nonce: dehex("cafebabefacedb2ddecaf888"), aad: dehex("feedfacedeadbeeffeedfacedeadbeefabaddad2"), plaintext: dehex("d9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a721c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b39"), ciphertext: dehex("8af37ea5684a4d81d4fd817261fd9743099e7e6a025eaacf8e54b124fb5743149e05cb89f4a49467fe2e5e5965f29a19f99416b0016b54585d12553783ba59e9f782e82e097c336bf7989f08"), }, { desc: "Derived from adapted NIST test vector 4 for KDF counter boundary (flip nonce bit 64)", key: dehex("feffe9928665731c6d6a8f9467308308fffee8938764721d6c6b8e9566318209fcfdeb908467711e6f688d96"), nonce: dehex("cafebabefacedbaddfcaf888"), aad: dehex("feedfacedeadbeeffeedfacedeadbeefabaddad2"), plaintext: dehex("d9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a721c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b39"), ciphertext: dehex("fbd528448d0346bfa878634864d407a35a039de9db2f1feb8e965b3ae9356ce6289441d77f8f0df294891f37ea438b223e3bf2bdc53d4c5a74fb680bb312a8dec6f7252cbcd7f5799750ad78"), }, { desc: "Derived from IEEE 2.1.1 54-byte auth", key: dehex("ad7a2bd03eac835a6f620fdcb506b345ac7b2ad13fad825b6e630eddb407b244af7829d23cae81586d600dde"), nonce: dehex("12153524c0895e81b2c28465"), aad: dehex("d609b1f056637a0d46df998d88e5222ab2c2846512153524c0895e8108000f101112131415161718191a1b1c1d1e1f202122232425262728292a2b2c2d2e2f30313233340001"), plaintext: dehex(""), ciphertext: dehex("3ea0b584f3c85e93f9320ea591699efb"), }, { desc: "Derived from IEEE 2.1.2 54-byte auth", key: dehex("e3c08a8f06c6e3ad95a70557b23f75483ce33021a9c72b7025666204c69c0b72e1c2888d04c4e1af97a50755"), nonce: dehex("12153524c0895e81b2c28465"), aad: dehex("d609b1f056637a0d46df998d88e5222ab2c2846512153524c0895e8108000f101112131415161718191a1b1c1d1e1f202122232425262728292a2b2c2d2e2f30313233340001"), plaintext: dehex(""), ciphertext: dehex("294e028bf1fe6f14c4e8f7305c933eb5"), }, { desc: "Derived from IEEE 2.2.1 60-byte crypt", key: dehex("ad7a2bd03eac835a6f620fdcb506b345ac7b2ad13fad825b6e630eddb407b244af7829d23cae81586d600dde"), nonce: dehex("12153524c0895e81b2c28465"), aad: dehex("d609b1f056637a0d46df998d88e52e00b2c2846512153524c0895e81"), plaintext: dehex("08000f101112131415161718191a1b1c1d1e1f202122232425262728292a2b2c2d2e2f303132333435363738393a0002"), ciphertext: dehex("db3d25719c6b0a3ca6145c159d5c6ed9aff9c6e0b79f17019ea923b8665ddf52137ad611f0d1bf417a7ca85e45afe106ff9c7569d335d086ae6c03f00987ccd6"), }, { desc: "Derived from IEEE 2.2.2 60-byte crypt", key: dehex("e3c08a8f06c6e3ad95a70557b23f75483ce33021a9c72b7025666204c69c0b72e1c2888d04c4e1af97a50755"), nonce: dehex("12153524c0895e81b2c28465"), aad: dehex("d609b1f056637a0d46df998d88e52e00b2c2846512153524c0895e81"), plaintext: dehex("08000f101112131415161718191a1b1c1d1e1f202122232425262728292a2b2c2d2e2f303132333435363738393a0002"), ciphertext: dehex("1641f28ec13afcc8f7903389787201051644914933e9202bb9d06aa020c2a67ef51dfe7bc00a856c55b8f8133e77f659132502bad63f5713d57d0c11e0f871ed"), }, { desc: "Derived from IEEE 2.3.1 60-byte auth", key: 
dehex("071b113b0ca743fecccf3d051f737382061a103a0da642ffcdce3c041e727283051913390ea541fccecd3f07"), nonce: dehex("f0761e8dcd3d000176d457ed"), aad: dehex("e20106d7cd0df0761e8dcd3d88e5400076d457ed08000f101112131415161718191a1b1c1d1e1f202122232425262728292a2b2c2d2e2f303132333435363738393a0003"), plaintext: dehex(""), ciphertext: dehex("58837a10562b0f1f8edbe58ca55811d3"), }, { desc: "Derived from IEEE 2.3.2 60-byte auth", key: dehex("691d3ee909d7f54167fd1ca0b5d769081f2bde1aee655fdbab80bd5295ae6be76b1f3ceb0bd5f74365ff1ea2"), nonce: dehex("f0761e8dcd3d000176d457ed"), aad: dehex("e20106d7cd0df0761e8dcd3d88e5400076d457ed08000f101112131415161718191a1b1c1d1e1f202122232425262728292a2b2c2d2e2f303132333435363738393a0003"), plaintext: dehex(""), ciphertext: dehex("c2722ff6ca29a257718a529d1f0c6a3b"), }, { desc: "Derived from IEEE 2.4.1 54-byte crypt", key: dehex("071b113b0ca743fecccf3d051f737382061a103a0da642ffcdce3c041e727283051913390ea541fccecd3f07"), nonce: dehex("f0761e8dcd3d000176d457ed"), aad: dehex("e20106d7cd0df0761e8dcd3d88e54c2a76d457ed"), plaintext: dehex("08000f101112131415161718191a1b1c1d1e1f202122232425262728292a2b2c2d2e2f30313233340004"), ciphertext: dehex("fd96b715b93a13346af51e8acdf792cdc7b2686f8574c70e6b0cbf16291ded427ad73fec48cd298e0528a1f4c644a949fc31dc9279706ddba33f"), }, { desc: "Derived from IEEE 2.4.2 54-byte crypt", key: dehex("691d3ee909d7f54167fd1ca0b5d769081f2bde1aee655fdbab80bd5295ae6be76b1f3ceb0bd5f74365ff1ea2"), nonce: dehex("f0761e8dcd3d000176d457ed"), aad: dehex("e20106d7cd0df0761e8dcd3d88e54c2a76d457ed"), plaintext: dehex("08000f101112131415161718191a1b1c1d1e1f202122232425262728292a2b2c2d2e2f30313233340004"), ciphertext: dehex("b68f6300c2e9ae833bdc070e24021a3477118e78ccf84e11a485d861476c300f175353d5cdf92008a4f878e6cc3577768085c50a0e98fda6cbb8"), }, { desc: "Derived from IEEE 2.5.1 65-byte auth", key: dehex("013fe00b5f11be7f866d0cbbc55a7a90003ee10a5e10bf7e876c0dbac45b7b91033de2095d13bc7d846f0eb9"), nonce: dehex("7cfde9f9e33724c68932d612"), aad: dehex("84c5d513d2aaf6e5bbd2727788e523008932d6127cfde9f9e33724c608000f101112131415161718191a1b1c1d1e1f202122232425262728292a2b2c2d2e2f303132333435363738393a3b3c3d3e3f0005"), plaintext: dehex(""), ciphertext: dehex("cca20eecda6283f09bb3543dd99edb9b"), }, { desc: "Derived from IEEE 2.5.2 65-byte auth", key: dehex("83c093b58de7ffe1c0da926ac43fb3609ac1c80fee1b624497ef942e2f79a82381c291b78fe5fde3c2d89068"), nonce: dehex("7cfde9f9e33724c68932d612"), aad: dehex("84c5d513d2aaf6e5bbd2727788e523008932d6127cfde9f9e33724c608000f101112131415161718191a1b1c1d1e1f202122232425262728292a2b2c2d2e2f303132333435363738393a3b3c3d3e3f0005"), plaintext: dehex(""), ciphertext: dehex("b232cc1da5117bf15003734fa599d271"), }, { desc: "Derived from IEEE 2.6.1 61-byte crypt", key: dehex("013fe00b5f11be7f866d0cbbc55a7a90003ee10a5e10bf7e876c0dbac45b7b91033de2095d13bc7d846f0eb9"), nonce: dehex("7cfde9f9e33724c68932d612"), aad: dehex("84c5d513d2aaf6e5bbd2727788e52f008932d6127cfde9f9e33724c6"), plaintext: dehex("08000f101112131415161718191a1b1c1d1e1f202122232425262728292a2b2c2d2e2f303132333435363738393a3b0006"), ciphertext: dehex("ff1910d35ad7e5657890c7c560146fd038707f204b66edbc3d161f8ace244b985921023c436e3a1c3532ecd5d09a056d70be583f0d10829d9387d07d33d872e490"), }, { desc: "Derived from IEEE 2.6.2 61-byte crypt", key: dehex("83c093b58de7ffe1c0da926ac43fb3609ac1c80fee1b624497ef942e2f79a82381c291b78fe5fde3c2d89068"), nonce: dehex("7cfde9f9e33724c68932d612"), aad: dehex("84c5d513d2aaf6e5bbd2727788e52f008932d6127cfde9f9e33724c6"), plaintext: 
dehex("08000f101112131415161718191a1b1c1d1e1f202122232425262728292a2b2c2d2e2f303132333435363738393a3b0006"), ciphertext: dehex("0db4cf956b5f97eca4eab82a6955307f9ae02a32dd7d93f83d66ad04e1cfdc5182ad12abdea5bbb619a1bd5fb9a573590fba908e9c7a46c1f7ba0905d1b55ffda4"), }, { desc: "Derived from IEEE 2.7.1 79-byte crypt", key: dehex("88ee087fd95da9fbf6725aa9d757b0cd89ef097ed85ca8faf7735ba8d656b1cc8aec0a7ddb5fabf9f47058ab"), nonce: dehex("7ae8e2ca4ec500012e58495c"), aad: dehex("68f2e77696ce7ae8e2ca4ec588e541002e58495c08000f101112131415161718191a1b1c1d1e1f202122232425262728292a2b2c2d2e2f303132333435363738393a3b3c3d3e3f404142434445464748494a4b4c4d0007"), plaintext: dehex(""), ciphertext: dehex("813f0e630f96fb2d030f58d83f5cdfd0"), }, { desc: "Derived from IEEE 2.7.2 79-byte crypt", key: dehex("4c973dbc7364621674f8b5b89e5c15511fced9216490fb1c1a2caa0ffe0407e54e953fbe7166601476fab7ba"), nonce: dehex("7ae8e2ca4ec500012e58495c"), aad: dehex("68f2e77696ce7ae8e2ca4ec588e541002e58495c08000f101112131415161718191a1b1c1d1e1f202122232425262728292a2b2c2d2e2f303132333435363738393a3b3c3d3e3f404142434445464748494a4b4c4d0007"), plaintext: dehex(""), ciphertext: dehex("77e5a44c21eb07188aacbd74d1980e97"), }, { desc: "Derived from IEEE 2.8.1 61-byte crypt", key: dehex("88ee087fd95da9fbf6725aa9d757b0cd89ef097ed85ca8faf7735ba8d656b1cc8aec0a7ddb5fabf9f47058ab"), nonce: dehex("7ae8e2ca4ec500012e58495c"), aad: dehex("68f2e77696ce7ae8e2ca4ec588e54d002e58495c"), plaintext: dehex("08000f101112131415161718191a1b1c1d1e1f202122232425262728292a2b2c2d2e2f303132333435363738393a3b3c3d3e3f404142434445464748490008"), ciphertext: dehex("958ec3f6d60afeda99efd888f175e5fcd4c87b9bcc5c2f5426253a8b506296c8c43309ab2adb5939462541d95e80811e04e706b1498f2c407c7fb234f8cc01a647550ee6b557b35a7e3945381821f4"), }, { desc: "Derived from IEEE 2.8.2 61-byte crypt", key: dehex("4c973dbc7364621674f8b5b89e5c15511fced9216490fb1c1a2caa0ffe0407e54e953fbe7166601476fab7ba"), nonce: dehex("7ae8e2ca4ec500012e58495c"), aad: dehex("68f2e77696ce7ae8e2ca4ec588e54d002e58495c"), plaintext: dehex("08000f101112131415161718191a1b1c1d1e1f202122232425262728292a2b2c2d2e2f303132333435363738393a3b3c3d3e3f404142434445464748490008"), ciphertext: dehex("b44d072011cd36d272a9b7a98db9aa90cbc5c67b93ddce67c854503214e2e896ec7e9db649ed4bcf6f850aac0223d0cf92c83db80795c3a17ecc1248bb00591712b1ae71e268164196252162810b00"), }} { aead, err := newRekeyAEAD(test.key) if err != nil { t.Fatal("unexpected failure in newRekeyAEAD: ", err.Error()) } if got := aead.Seal(nil, test.nonce, test.plaintext, test.aad); !bytes.Equal(got, test.ciphertext) { t.Errorf("Unexpected ciphertext for test vector '%s':\nciphertext=%s\nwant= %s", test.desc, hex.EncodeToString(got), hex.EncodeToString(test.ciphertext)) } if got, err := aead.Open(nil, test.nonce, test.ciphertext, test.aad); err != nil || !bytes.Equal(got, test.plaintext) { t.Errorf("Unexpected plaintext for test vector '%s':\nplaintext=%s (err=%v)\nwant= %s", test.desc, hex.EncodeToString(got), err, hex.EncodeToString(test.plaintext)) } } } func dehex(s string) []byte { if len(s) == 0 { return make([]byte, 0) } b, err := hex.DecodeString(s) if err != nil { panic(err) } return b } grpc-go-1.22.1/credentials/alts/internal/conn/aes128gcm.go000066400000000000000000000063011351635773100232000ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package conn import ( "crypto/aes" "crypto/cipher" core "google.golang.org/grpc/credentials/alts/internal" ) const ( // Overflow length n in bytes, never encrypt more than 2^(n*8) frames (in // each direction). overflowLenAES128GCM = 5 ) // aes128gcm is the struct that holds necessary information for ALTS record. // The counter value is NOT included in the payload during the encryption and // decryption operations. type aes128gcm struct { // inCounter is used in ALTS record to check that incoming counters are // as expected, since ALTS record guarantees that messages are unwrapped // in the same order that the peer wrapped them. inCounter Counter outCounter Counter aead cipher.AEAD } // NewAES128GCM creates an instance that uses aes128gcm for ALTS record. func NewAES128GCM(side core.Side, key []byte) (ALTSRecordCrypto, error) { c, err := aes.NewCipher(key) if err != nil { return nil, err } a, err := cipher.NewGCM(c) if err != nil { return nil, err } return &aes128gcm{ inCounter: NewInCounter(side, overflowLenAES128GCM), outCounter: NewOutCounter(side, overflowLenAES128GCM), aead: a, }, nil } // Encrypt is the encryption function. dst can contain bytes at the beginning of // the ciphertext that will not be encrypted but will be authenticated. If dst // has enough capacity to hold these bytes, the ciphertext and the tag, no // allocation and copy operations will be performed. dst and plaintext do not // overlap. func (s *aes128gcm) Encrypt(dst, plaintext []byte) ([]byte, error) { // If we need to allocate an output buffer, we want to include space for // GCM tag to avoid forcing ALTS record to reallocate as well. dlen := len(dst) dst, out := SliceForAppend(dst, len(plaintext)+GcmTagSize) seq, err := s.outCounter.Value() if err != nil { return nil, err } data := out[:len(plaintext)] copy(data, plaintext) // data may alias plaintext // Seal appends the ciphertext and the tag to its first argument and // returns the updated slice. However, SliceForAppend above ensures that // dst has enough capacity to avoid a reallocation and copy due to the // append. dst = s.aead.Seal(dst[:dlen], seq, data, nil) s.outCounter.Inc() return dst, nil } func (s *aes128gcm) EncryptionOverhead() int { return GcmTagSize } func (s *aes128gcm) Decrypt(dst, ciphertext []byte) ([]byte, error) { seq, err := s.inCounter.Value() if err != nil { return nil, err } // If dst is equal to ciphertext[:0], ciphertext storage is reused. plaintext, err := s.aead.Open(dst, seq, ciphertext, nil) if err != nil { return nil, ErrAuth } s.inCounter.Inc() return plaintext, nil } grpc-go-1.22.1/credentials/alts/internal/conn/aes128gcm_test.go000066400000000000000000000174241351635773100242470ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package conn import ( "bytes" "testing" core "google.golang.org/grpc/credentials/alts/internal" ) // cryptoTestVector is struct for a GCM test vector type cryptoTestVector struct { key, counter, plaintext, ciphertext, tag []byte allocateDst bool } // getGCMCryptoPair outputs a client/server pair on aes128gcm. func getGCMCryptoPair(key []byte, counter []byte, t *testing.T) (ALTSRecordCrypto, ALTSRecordCrypto) { client, err := NewAES128GCM(core.ClientSide, key) if err != nil { t.Fatalf("NewAES128GCM(ClientSide, key) = %v", err) } server, err := NewAES128GCM(core.ServerSide, key) if err != nil { t.Fatalf("NewAES128GCM(ServerSide, key) = %v", err) } // set counter if provided. if counter != nil { if CounterSide(counter) == core.ClientSide { client.(*aes128gcm).outCounter = CounterFromValue(counter, overflowLenAES128GCM) server.(*aes128gcm).inCounter = CounterFromValue(counter, overflowLenAES128GCM) } else { server.(*aes128gcm).outCounter = CounterFromValue(counter, overflowLenAES128GCM) client.(*aes128gcm).inCounter = CounterFromValue(counter, overflowLenAES128GCM) } } return client, server } func testGCMEncryptionDecryption(sender ALTSRecordCrypto, receiver ALTSRecordCrypto, test *cryptoTestVector, withCounter bool, t *testing.T) { // Ciphertext is: counter + encrypted text + tag. ciphertext := []byte(nil) if withCounter { ciphertext = append(ciphertext, test.counter...) } ciphertext = append(ciphertext, test.ciphertext...) ciphertext = append(ciphertext, test.tag...) // Decrypt. if got, err := receiver.Decrypt(nil, ciphertext); err != nil || !bytes.Equal(got, test.plaintext) { t.Errorf("key=%v\ncounter=%v\ntag=%v\nciphertext=%v\nDecrypt = %v, %v\nwant: %v", test.key, test.counter, test.tag, test.ciphertext, got, err, test.plaintext) } // Encrypt. var dst []byte if test.allocateDst { dst = make([]byte, len(test.plaintext)+sender.EncryptionOverhead()) } if got, err := sender.Encrypt(dst[:0], test.plaintext); err != nil || !bytes.Equal(got, ciphertext) { t.Errorf("key=%v\ncounter=%v\nplaintext=%v\nEncrypt = %v, %v\nwant: %v", test.key, test.counter, test.plaintext, got, err, ciphertext) } } // Test encrypt and decrypt using test vectors for aes128gcm. 
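// The vectors below cover both the tag-only case (nil plaintext and
// ciphertext, so only the GCM tag is produced) and real encryption, and each
// case is run with and without a preallocated destination buffer
// (allocateDst), which exercises the no-allocation path of Encrypt.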
func TestAES128GCMEncrypt(t *testing.T) { for _, test := range []cryptoTestVector{ { key: dehex("11754cd72aec309bf52f7687212e8957"), counter: dehex("3c819d9a9bed087615030b65"), plaintext: nil, ciphertext: nil, tag: dehex("250327c674aaf477aef2675748cf6971"), allocateDst: false, }, { key: dehex("ca47248ac0b6f8372a97ac43508308ed"), counter: dehex("ffd2b598feabc9019262d2be"), plaintext: nil, ciphertext: nil, tag: dehex("60d20404af527d248d893ae495707d1a"), allocateDst: false, }, { key: dehex("7fddb57453c241d03efbed3ac44e371c"), counter: dehex("ee283a3fc75575e33efd4887"), plaintext: dehex("d5de42b461646c255c87bd2962d3b9a2"), ciphertext: dehex("2ccda4a5415cb91e135c2a0f78c9b2fd"), tag: dehex("b36d1df9b9d5e596f83e8b7f52971cb3"), allocateDst: false, }, { key: dehex("ab72c77b97cb5fe9a382d9fe81ffdbed"), counter: dehex("54cc7dc2c37ec006bcc6d1da"), plaintext: dehex("007c5e5b3e59df24a7c355584fc1518d"), ciphertext: dehex("0e1bde206a07a9c2c1b65300f8c64997"), tag: dehex("2b4401346697138c7a4891ee59867d0c"), allocateDst: false, }, { key: dehex("11754cd72aec309bf52f7687212e8957"), counter: dehex("3c819d9a9bed087615030b65"), plaintext: nil, ciphertext: nil, tag: dehex("250327c674aaf477aef2675748cf6971"), allocateDst: true, }, { key: dehex("ca47248ac0b6f8372a97ac43508308ed"), counter: dehex("ffd2b598feabc9019262d2be"), plaintext: nil, ciphertext: nil, tag: dehex("60d20404af527d248d893ae495707d1a"), allocateDst: true, }, { key: dehex("7fddb57453c241d03efbed3ac44e371c"), counter: dehex("ee283a3fc75575e33efd4887"), plaintext: dehex("d5de42b461646c255c87bd2962d3b9a2"), ciphertext: dehex("2ccda4a5415cb91e135c2a0f78c9b2fd"), tag: dehex("b36d1df9b9d5e596f83e8b7f52971cb3"), allocateDst: true, }, { key: dehex("ab72c77b97cb5fe9a382d9fe81ffdbed"), counter: dehex("54cc7dc2c37ec006bcc6d1da"), plaintext: dehex("007c5e5b3e59df24a7c355584fc1518d"), ciphertext: dehex("0e1bde206a07a9c2c1b65300f8c64997"), tag: dehex("2b4401346697138c7a4891ee59867d0c"), allocateDst: true, }, } { // Test encryption and decryption for aes128gcm. client, server := getGCMCryptoPair(test.key, test.counter, t) if CounterSide(test.counter) == core.ClientSide { testGCMEncryptionDecryption(client, server, &test, false, t) } else { testGCMEncryptionDecryption(server, client, &test, false, t) } } } func testGCMEncryptRoundtrip(client ALTSRecordCrypto, server ALTSRecordCrypto, t *testing.T) { // Encrypt. const plaintext = "This is plaintext." var err error buf := []byte(plaintext) buf, err = client.Encrypt(buf[:0], buf) if err != nil { t.Fatal("Encrypting with client-side context: unexpected error", err, "\n", "Plaintext:", []byte(plaintext)) } // Encrypt a second message. const plaintext2 = "This is a second plaintext." buf2 := []byte(plaintext2) buf2, err = client.Encrypt(buf2[:0], buf2) if err != nil { t.Fatal("Encrypting with client-side context: unexpected error", err, "\n", "Plaintext:", []byte(plaintext2)) } // Decryption fails: cannot decrypt second message before first. if got, err := server.Decrypt(nil, buf2); err == nil { t.Error("Decrypting client-side ciphertext with a client-side context unexpectedly succeeded; want unexpected counter error:\n", " Original plaintext:", []byte(plaintext2), "\n", " Ciphertext:", buf2, "\n", " Decrypted plaintext:", got) } // Decryption fails: wrong counter space. 
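	// buf was sealed with the client's outgoing counter; the client's incoming
	// counter lives in the server counter space (CounterSide keys off the
	// little-endian high bit set by NewInCounter/NewOutCounter), so the client
	// cannot decrypt its own ciphertext.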
if got, err := client.Decrypt(nil, buf); err == nil { t.Error("Decrypting client-side ciphertext with a client-side context unexpectedly succeeded; want counter space error:\n", " Original plaintext:", []byte(plaintext), "\n", " Ciphertext:", buf, "\n", " Decrypted plaintext:", got) } // Decrypt first message. ciphertext := append([]byte(nil), buf...) buf, err = server.Decrypt(buf[:0], buf) if err != nil || string(buf) != plaintext { t.Fatal("Decrypting client-side ciphertext with a server-side context did not produce original content:\n", " Original plaintext:", []byte(plaintext), "\n", " Ciphertext:", ciphertext, "\n", " Decryption error:", err, "\n", " Decrypted plaintext:", buf) } // Decryption fails: replay attack. if got, err := server.Decrypt(nil, buf); err == nil { t.Error("Decrypting client-side ciphertext with a client-side context unexpectedly succeeded; want unexpected counter error:\n", " Original plaintext:", []byte(plaintext), "\n", " Ciphertext:", buf, "\n", " Decrypted plaintext:", got) } } // Test encrypt and decrypt on roundtrip messages for aes128gcm. func TestAES128GCMEncryptRoundtrip(t *testing.T) { // Test for aes128gcm. key := make([]byte, 16) client, server := getGCMCryptoPair(key, nil, t) testGCMEncryptRoundtrip(client, server, t) } grpc-go-1.22.1/credentials/alts/internal/conn/aes128gcmrekey.go000066400000000000000000000071411351635773100242430ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package conn import ( "crypto/cipher" core "google.golang.org/grpc/credentials/alts/internal" ) const ( // Overflow length n in bytes, never encrypt more than 2^(n*8) frames (in // each direction). overflowLenAES128GCMRekey = 8 nonceLen = 12 aeadKeyLen = 16 kdfKeyLen = 32 kdfCounterOffset = 2 kdfCounterLen = 6 sizeUint64 = 8 ) // aes128gcmRekey is the struct that holds necessary information for ALTS record. // The counter value is NOT included in the payload during the encryption and // decryption operations. type aes128gcmRekey struct { // inCounter is used in ALTS record to check that incoming counters are // as expected, since ALTS record guarantees that messages are unwrapped // in the same order that the peer wrapped them. inCounter Counter outCounter Counter inAEAD cipher.AEAD outAEAD cipher.AEAD } // NewAES128GCMRekey creates an instance that uses aes128gcm with rekeying // for ALTS record. The key argument should be 44 bytes, the first 32 bytes // are used as a key for HKDF-expand and the remainining 12 bytes are used // as a random mask for the counter. func NewAES128GCMRekey(side core.Side, key []byte) (ALTSRecordCrypto, error) { inCounter := NewInCounter(side, overflowLenAES128GCMRekey) outCounter := NewOutCounter(side, overflowLenAES128GCMRekey) inAEAD, err := newRekeyAEAD(key) if err != nil { return nil, err } outAEAD, err := newRekeyAEAD(key) if err != nil { return nil, err } return &aes128gcmRekey{ inCounter, outCounter, inAEAD, outAEAD, }, nil } // Encrypt is the encryption function. 
dst can contain bytes at the beginning of // the ciphertext that will not be encrypted but will be authenticated. If dst // has enough capacity to hold these bytes, the ciphertext and the tag, no // allocation and copy operations will be performed. dst and plaintext do not // overlap. func (s *aes128gcmRekey) Encrypt(dst, plaintext []byte) ([]byte, error) { // If we need to allocate an output buffer, we want to include space for // GCM tag to avoid forcing ALTS record to reallocate as well. dlen := len(dst) dst, out := SliceForAppend(dst, len(plaintext)+GcmTagSize) seq, err := s.outCounter.Value() if err != nil { return nil, err } data := out[:len(plaintext)] copy(data, plaintext) // data may alias plaintext // Seal appends the ciphertext and the tag to its first argument and // returns the updated slice. However, SliceForAppend above ensures that // dst has enough capacity to avoid a reallocation and copy due to the // append. dst = s.outAEAD.Seal(dst[:dlen], seq, data, nil) s.outCounter.Inc() return dst, nil } func (s *aes128gcmRekey) EncryptionOverhead() int { return GcmTagSize } func (s *aes128gcmRekey) Decrypt(dst, ciphertext []byte) ([]byte, error) { seq, err := s.inCounter.Value() if err != nil { return nil, err } plaintext, err := s.inAEAD.Open(dst, seq, ciphertext, nil) if err != nil { return nil, ErrAuth } s.inCounter.Inc() return plaintext, nil } grpc-go-1.22.1/credentials/alts/internal/conn/aes128gcmrekey_test.go000066400000000000000000000077751351635773100253170ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package conn import ( "testing" core "google.golang.org/grpc/credentials/alts/internal" ) // getGCMCryptoPair outputs a client/server pair on aes128gcmRekey. func getRekeyCryptoPair(key []byte, counter []byte, t *testing.T) (ALTSRecordCrypto, ALTSRecordCrypto) { client, err := NewAES128GCMRekey(core.ClientSide, key) if err != nil { t.Fatalf("NewAES128GCMRekey(ClientSide, key) = %v", err) } server, err := NewAES128GCMRekey(core.ServerSide, key) if err != nil { t.Fatalf("NewAES128GCMRekey(ServerSide, key) = %v", err) } // set counter if provided. if counter != nil { if CounterSide(counter) == core.ClientSide { client.(*aes128gcmRekey).outCounter = CounterFromValue(counter, overflowLenAES128GCMRekey) server.(*aes128gcmRekey).inCounter = CounterFromValue(counter, overflowLenAES128GCMRekey) } else { server.(*aes128gcmRekey).outCounter = CounterFromValue(counter, overflowLenAES128GCMRekey) client.(*aes128gcmRekey).inCounter = CounterFromValue(counter, overflowLenAES128GCMRekey) } } return client, server } func testRekeyEncryptRoundtrip(client ALTSRecordCrypto, server ALTSRecordCrypto, t *testing.T) { // Encrypt. const plaintext = "This is plaintext." var err error buf := []byte(plaintext) buf, err = client.Encrypt(buf[:0], buf) if err != nil { t.Fatal("Encrypting with client-side context: unexpected error", err, "\n", "Plaintext:", []byte(plaintext)) } // Encrypt a second message. 
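	// A second frame advances the client's outgoing counter, so the ordering
	// checks below can verify that frames must be unwrapped in the order they
	// were wrapped.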
const plaintext2 = "This is a second plaintext." buf2 := []byte(plaintext2) buf2, err = client.Encrypt(buf2[:0], buf2) if err != nil { t.Fatal("Encrypting with client-side context: unexpected error", err, "\n", "Plaintext:", []byte(plaintext2)) } // Decryption fails: cannot decrypt second message before first. if got, err := server.Decrypt(nil, buf2); err == nil { t.Error("Decrypting client-side ciphertext with a client-side context unexpectedly succeeded; want unexpected counter error:\n", " Original plaintext:", []byte(plaintext2), "\n", " Ciphertext:", buf2, "\n", " Decrypted plaintext:", got) } // Decryption fails: wrong counter space. if got, err := client.Decrypt(nil, buf); err == nil { t.Error("Decrypting client-side ciphertext with a client-side context unexpectedly succeeded; want counter space error:\n", " Original plaintext:", []byte(plaintext), "\n", " Ciphertext:", buf, "\n", " Decrypted plaintext:", got) } // Decrypt first message. ciphertext := append([]byte(nil), buf...) buf, err = server.Decrypt(buf[:0], buf) if err != nil || string(buf) != plaintext { t.Fatal("Decrypting client-side ciphertext with a server-side context did not produce original content:\n", " Original plaintext:", []byte(plaintext), "\n", " Ciphertext:", ciphertext, "\n", " Decryption error:", err, "\n", " Decrypted plaintext:", buf) } // Decryption fails: replay attack. if got, err := server.Decrypt(nil, buf); err == nil { t.Error("Decrypting client-side ciphertext with a client-side context unexpectedly succeeded; want unexpected counter error:\n", " Original plaintext:", []byte(plaintext), "\n", " Ciphertext:", buf, "\n", " Decrypted plaintext:", got) } } // Test encrypt and decrypt on roundtrip messages for aes128gcmRekey. func TestAES128GCMRekeyEncryptRoundtrip(t *testing.T) { // Test for aes128gcmRekey. key := make([]byte, 44) client, server := getRekeyCryptoPair(key, nil, t) testRekeyEncryptRoundtrip(client, server, t) } grpc-go-1.22.1/credentials/alts/internal/conn/common.go000066400000000000000000000043231351635773100230000ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package conn import ( "encoding/binary" "errors" "fmt" ) const ( // GcmTagSize is the GCM tag size is the difference in length between // plaintext and ciphertext. From crypto/cipher/gcm.go in Go crypto // library. GcmTagSize = 16 ) // ErrAuth occurs on authentication failure. var ErrAuth = errors.New("message authentication failed") // SliceForAppend takes a slice and a requested number of bytes. It returns a // slice with the contents of the given slice followed by that many bytes and a // second slice that aliases into it and contains only the extra bytes. If the // original slice has sufficient capacity then no allocation is performed. 
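// For example (illustrative values): with len(in) == 2 and cap(in) >= 5,
// SliceForAppend(in, 3) returns head = in[:5] and tail = head[2:5], so the
// caller can write the 3 extra bytes into tail without allocating; if the
// capacity were smaller, a new 5-byte slice would be allocated and the
// contents of in copied into it first.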
func SliceForAppend(in []byte, n int) (head, tail []byte) { if total := len(in) + n; cap(in) >= total { head = in[:total] } else { head = make([]byte, total) copy(head, in) } tail = head[len(in):] return head, tail } // ParseFramedMsg parse the provided buffer and returns a frame of the format // msgLength+msg and any remaining bytes in that buffer. func ParseFramedMsg(b []byte, maxLen uint32) ([]byte, []byte, error) { // If the size field is not complete, return the provided buffer as // remaining buffer. if len(b) < MsgLenFieldSize { return nil, b, nil } msgLenField := b[:MsgLenFieldSize] length := binary.LittleEndian.Uint32(msgLenField) if length > maxLen { return nil, nil, fmt.Errorf("received the frame length %d larger than the limit %d", length, maxLen) } if len(b) < int(length)+4 { // account for the first 4 msg length bytes. // Frame is not complete yet. return nil, b, nil } return b[:MsgLenFieldSize+length], b[MsgLenFieldSize+length:], nil } grpc-go-1.22.1/credentials/alts/internal/conn/counter.go000066400000000000000000000025351351635773100231720ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package conn import ( "errors" ) const counterLen = 12 var ( errInvalidCounter = errors.New("invalid counter") ) // Counter is a 96-bit, little-endian counter. type Counter struct { value [counterLen]byte invalid bool overflowLen int } // Value returns the current value of the counter as a byte slice. func (c *Counter) Value() ([]byte, error) { if c.invalid { return nil, errInvalidCounter } return c.value[:], nil } // Inc increments the counter and checks for overflow. func (c *Counter) Inc() { // If the counter is already invalid, there is no need to increase it. if c.invalid { return } i := 0 for ; i < c.overflowLen; i++ { c.value[i]++ if c.value[i] != 0 { break } } if i == c.overflowLen { c.invalid = true } } grpc-go-1.22.1/credentials/alts/internal/conn/counter_test.go000066400000000000000000000102571351635773100242310ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package conn import ( "bytes" "testing" core "google.golang.org/grpc/credentials/alts/internal" ) const ( testOverflowLen = 5 ) func TestCounterSides(t *testing.T) { for _, side := range []core.Side{core.ClientSide, core.ServerSide} { outCounter := NewOutCounter(side, testOverflowLen) inCounter := NewInCounter(side, testOverflowLen) for i := 0; i < 1024; i++ { value, _ := outCounter.Value() if g, w := CounterSide(value), side; g != w { t.Errorf("after %d iterations, CounterSide(outCounter.Value()) = %v, want %v", i, g, w) break } value, _ = inCounter.Value() if g, w := CounterSide(value), side; g == w { t.Errorf("after %d iterations, CounterSide(inCounter.Value()) = %v, want %v", i, g, w) break } outCounter.Inc() inCounter.Inc() } } } func TestCounterInc(t *testing.T) { for _, test := range []struct { counter []byte want []byte }{ { counter: []byte{0x00, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, want: []byte{0x01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, }, { counter: []byte{0x00, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0x80}, want: []byte{0x01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0x80}, }, { counter: []byte{0xff, 0x00, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, want: []byte{0x00, 0x01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, }, { counter: []byte{0x42, 0xff, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, want: []byte{0x43, 0xff, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, }, { counter: []byte{0xff, 0xff, 0xff, 0xff, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}, want: []byte{0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}, }, { counter: []byte{0xff, 0xff, 0xff, 0xff, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x80}, want: []byte{0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x80}, }, } { c := CounterFromValue(test.counter, overflowLenAES128GCM) c.Inc() value, _ := c.Value() if g, w := value, test.want; !bytes.Equal(g, w) || c.invalid { t.Errorf("counter(%v).Inc() =\n%v, want\n%v", test.counter, g, w) } } } func TestRolloverCounter(t *testing.T) { for _, test := range []struct { desc string value []byte overflowLen int }{ { desc: "testing overflow without rekeying 1", value: []byte{0xFE, 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x80}, overflowLen: 5, }, { desc: "testing overflow without rekeying 2", value: []byte{0xFE, 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}, overflowLen: 5, }, { desc: "testing overflow for rekeying mode 1", value: []byte{0xFE, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x00, 0x80}, overflowLen: 8, }, { desc: "testing overflow for rekeying mode 2", value: []byte{0xFE, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x00, 0x00}, overflowLen: 8, }, } { c := CounterFromValue(test.value, overflowLenAES128GCM) // First Inc() + Value() should work. c.Inc() _, err := c.Value() if err != nil { t.Errorf("%v: first Inc() + Value() unexpectedly failed: %v, want error", test.desc, err) } // Second Inc() + Value() should fail. c.Inc() _, err = c.Value() if err != errInvalidCounter { t.Errorf("%v: second Inc() + Value() unexpectedly succeeded: want %v", test.desc, errInvalidCounter) } // Third Inc() + Value() should also fail because the counter is // already in an invalid state. c.Inc() _, err = c.Value() if err != errInvalidCounter { t.Errorf("%v: Third Inc() + Value() unexpectedly succeeded: want %v", test.desc, errInvalidCounter) } } } grpc-go-1.22.1/credentials/alts/internal/conn/record.go000066400000000000000000000225201351635773100227650ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. 
* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package conn contains an implementation of a secure channel created by gRPC // handshakers. package conn import ( "encoding/binary" "fmt" "math" "net" core "google.golang.org/grpc/credentials/alts/internal" ) // ALTSRecordCrypto is the interface for gRPC ALTS record protocol. type ALTSRecordCrypto interface { // Encrypt encrypts the plaintext and computes the tag (if any) of dst // and plaintext, dst and plaintext do not overlap. Encrypt(dst, plaintext []byte) ([]byte, error) // EncryptionOverhead returns the tag size (if any) in bytes. EncryptionOverhead() int // Decrypt decrypts ciphertext and verify the tag (if any). dst and // ciphertext may alias exactly or not at all. To reuse ciphertext's // storage for the decrypted output, use ciphertext[:0] as dst. Decrypt(dst, ciphertext []byte) ([]byte, error) } // ALTSRecordFunc is a function type for factory functions that create // ALTSRecordCrypto instances. type ALTSRecordFunc func(s core.Side, keyData []byte) (ALTSRecordCrypto, error) const ( // MsgLenFieldSize is the byte size of the frame length field of a // framed message. MsgLenFieldSize = 4 // The byte size of the message type field of a framed message. msgTypeFieldSize = 4 // The bytes size limit for a ALTS record message. altsRecordLengthLimit = 1024 * 1024 // 1 MiB // The default bytes size of a ALTS record message. altsRecordDefaultLength = 4 * 1024 // 4KiB // Message type value included in ALTS record framing. altsRecordMsgType = uint32(0x06) // The initial write buffer size. altsWriteBufferInitialSize = 32 * 1024 // 32KiB // The maximum write buffer size. This *must* be multiple of // altsRecordDefaultLength. altsWriteBufferMaxSize = 512 * 1024 // 512KiB ) var ( protocols = make(map[string]ALTSRecordFunc) ) // RegisterProtocol register a ALTS record encryption protocol. func RegisterProtocol(protocol string, f ALTSRecordFunc) error { if _, ok := protocols[protocol]; ok { return fmt.Errorf("protocol %v is already registered", protocol) } protocols[protocol] = f return nil } // conn represents a secured connection. It implements the net.Conn interface. type conn struct { net.Conn crypto ALTSRecordCrypto // buf holds data that has been read from the connection and decrypted, // but has not yet been returned by Read. buf []byte payloadLengthLimit int // protected holds data read from the network but have not yet been // decrypted. This data might not compose a complete frame. protected []byte // writeBuf is a buffer used to contain encrypted frames before being // written to the network. writeBuf []byte // nextFrame stores the next frame (in protected buffer) info. nextFrame []byte // overhead is the calculated overhead of each frame. overhead int } // NewConn creates a new secure channel instance given the other party role and // handshaking result. 
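// A minimal usage sketch (illustrative only; rawConn, key and the protocol
// name are assumptions of this example, not values produced here):
//
//	// The key length must match what the registered record protocol expects,
//	// e.g. 16 bytes for an AES-128-GCM protocol registered via RegisterProtocol.
//	sc, err := NewConn(rawConn, core.ClientSide, "ALTSRP_GCM_AES128", key, nil)
//	if err != nil {
//		// unknown record protocol or bad key size
//	}
//	// sc implements net.Conn; Read and Write transparently decrypt and
//	// encrypt ALTS record frames.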
func NewConn(c net.Conn, side core.Side, recordProtocol string, key []byte, protected []byte) (net.Conn, error) { newCrypto := protocols[recordProtocol] if newCrypto == nil { return nil, fmt.Errorf("negotiated unknown next_protocol %q", recordProtocol) } crypto, err := newCrypto(side, key) if err != nil { return nil, fmt.Errorf("protocol %q: %v", recordProtocol, err) } overhead := MsgLenFieldSize + msgTypeFieldSize + crypto.EncryptionOverhead() payloadLengthLimit := altsRecordDefaultLength - overhead if protected == nil { // We pre-allocate protected to be of size // 2*altsRecordDefaultLength-1 during initialization. We only // read from the network into protected when protected does not // contain a complete frame, which is at most // altsRecordDefaultLength-1 (bytes). And we read at most // altsRecordDefaultLength (bytes) data into protected at one // time. Therefore, 2*altsRecordDefaultLength-1 is large enough // to buffer data read from the network. protected = make([]byte, 0, 2*altsRecordDefaultLength-1) } altsConn := &conn{ Conn: c, crypto: crypto, payloadLengthLimit: payloadLengthLimit, protected: protected, writeBuf: make([]byte, altsWriteBufferInitialSize), nextFrame: protected, overhead: overhead, } return altsConn, nil } // Read reads and decrypts a frame from the underlying connection, and copies the // decrypted payload into b. If the size of the payload is greater than len(b), // Read retains the remaining bytes in an internal buffer, and subsequent calls // to Read will read from this buffer until it is exhausted. func (p *conn) Read(b []byte) (n int, err error) { if len(p.buf) == 0 { var framedMsg []byte framedMsg, p.nextFrame, err = ParseFramedMsg(p.nextFrame, altsRecordLengthLimit) if err != nil { return n, err } // Check whether the next frame to be decrypted has been // completely received yet. if len(framedMsg) == 0 { copy(p.protected, p.nextFrame) p.protected = p.protected[:len(p.nextFrame)] // Always copy next incomplete frame to the beginning of // the protected buffer and reset nextFrame to it. p.nextFrame = p.protected } // Check whether a complete frame has been received yet. for len(framedMsg) == 0 { if len(p.protected) == cap(p.protected) { tmp := make([]byte, len(p.protected), cap(p.protected)+altsRecordDefaultLength) copy(tmp, p.protected) p.protected = tmp } n, err = p.Conn.Read(p.protected[len(p.protected):min(cap(p.protected), len(p.protected)+altsRecordDefaultLength)]) if err != nil { return 0, err } p.protected = p.protected[:len(p.protected)+n] framedMsg, p.nextFrame, err = ParseFramedMsg(p.protected, altsRecordLengthLimit) if err != nil { return 0, err } } // Now we have a complete frame, decrypted it. msg := framedMsg[MsgLenFieldSize:] msgType := binary.LittleEndian.Uint32(msg[:msgTypeFieldSize]) if msgType&0xff != altsRecordMsgType { return 0, fmt.Errorf("received frame with incorrect message type %v, expected lower byte %v", msgType, altsRecordMsgType) } ciphertext := msg[msgTypeFieldSize:] // Decrypt requires that if the dst and ciphertext alias, they // must alias exactly. Code here used to use msg[:0], but msg // starts MsgLenFieldSize+msgTypeFieldSize bytes earlier than // ciphertext, so they alias inexactly. Using ciphertext[:0] // arranges the appropriate aliasing without needing to copy // ciphertext or use a separate destination buffer. For more info // check: https://golang.org/pkg/crypto/cipher/#AEAD. 
p.buf, err = p.crypto.Decrypt(ciphertext[:0], ciphertext) if err != nil { return 0, err } } n = copy(b, p.buf) p.buf = p.buf[n:] return n, nil } // Write encrypts, frames, and writes bytes from b to the underlying connection. func (p *conn) Write(b []byte) (n int, err error) { n = len(b) // Calculate the output buffer size with framing and encryption overhead. numOfFrames := int(math.Ceil(float64(len(b)) / float64(p.payloadLengthLimit))) size := len(b) + numOfFrames*p.overhead // If writeBuf is too small, increase its size up to the maximum size. partialBSize := len(b) if size > altsWriteBufferMaxSize { size = altsWriteBufferMaxSize const numOfFramesInMaxWriteBuf = altsWriteBufferMaxSize / altsRecordDefaultLength partialBSize = numOfFramesInMaxWriteBuf * p.payloadLengthLimit } if len(p.writeBuf) < size { p.writeBuf = make([]byte, size) } for partialBStart := 0; partialBStart < len(b); partialBStart += partialBSize { partialBEnd := partialBStart + partialBSize if partialBEnd > len(b) { partialBEnd = len(b) } partialB := b[partialBStart:partialBEnd] writeBufIndex := 0 for len(partialB) > 0 { payloadLen := len(partialB) if payloadLen > p.payloadLengthLimit { payloadLen = p.payloadLengthLimit } buf := partialB[:payloadLen] partialB = partialB[payloadLen:] // Write buffer contains: length, type, payload, and tag // if any. // 1. Fill in type field. msg := p.writeBuf[writeBufIndex+MsgLenFieldSize:] binary.LittleEndian.PutUint32(msg, altsRecordMsgType) // 2. Encrypt the payload and create a tag if any. msg, err = p.crypto.Encrypt(msg[:msgTypeFieldSize], buf) if err != nil { return n, err } // 3. Fill in the size field. binary.LittleEndian.PutUint32(p.writeBuf[writeBufIndex:], uint32(len(msg))) // 4. Increase writeBufIndex. writeBufIndex += len(buf) + p.overhead } nn, err := p.Conn.Write(p.writeBuf[:writeBufIndex]) if err != nil { // We need to calculate the actual data size that was // written. This means we need to remove header, // encryption overheads, and any partially-written // frame data. numOfWrittenFrames := int(math.Floor(float64(nn) / float64(altsRecordDefaultLength))) return partialBStart + numOfWrittenFrames*p.payloadLengthLimit, err } } return n, nil } func min(a, b int) int { if a < b { return a } return b } grpc-go-1.22.1/credentials/alts/internal/conn/record_test.go000066400000000000000000000215771351635773100240370ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package conn import ( "bytes" "encoding/binary" "fmt" "io" "math" "net" "reflect" "testing" core "google.golang.org/grpc/credentials/alts/internal" ) var ( nextProtocols = []string{"ALTSRP_GCM_AES128"} altsRecordFuncs = map[string]ALTSRecordFunc{ // ALTS handshaker protocols. "ALTSRP_GCM_AES128": func(s core.Side, keyData []byte) (ALTSRecordCrypto, error) { return NewAES128GCM(s, keyData) }, } ) func init() { for protocol, f := range altsRecordFuncs { if err := RegisterProtocol(protocol, f); err != nil { panic(err) } } } // testConn mimics a net.Conn to the peer. 
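// Reads drain the in buffer and writes append to the out buffer, so two
// instances can be cross-wired (one's out buffer serving as the other's in
// buffer) to emulate a full-duplex connection without real sockets, as
// newConnPair does below.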
type testConn struct { net.Conn in *bytes.Buffer out *bytes.Buffer } func (c *testConn) Read(b []byte) (n int, err error) { return c.in.Read(b) } func (c *testConn) Write(b []byte) (n int, err error) { return c.out.Write(b) } func (c *testConn) Close() error { return nil } func newTestALTSRecordConn(in, out *bytes.Buffer, side core.Side, np string) *conn { key := []byte{ // 16 arbitrary bytes. 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xe2, 0xd2, 0x4c, 0xce, 0x4f, 0x49} tc := testConn{ in: in, out: out, } c, err := NewConn(&tc, side, np, key, nil) if err != nil { panic(fmt.Sprintf("Unexpected error creating test ALTS record connection: %v", err)) } return c.(*conn) } func newConnPair(np string) (client, server *conn) { clientBuf := new(bytes.Buffer) serverBuf := new(bytes.Buffer) clientConn := newTestALTSRecordConn(clientBuf, serverBuf, core.ClientSide, np) serverConn := newTestALTSRecordConn(serverBuf, clientBuf, core.ServerSide, np) return clientConn, serverConn } func testPingPong(t *testing.T, np string) { clientConn, serverConn := newConnPair(np) clientMsg := []byte("Client Message") if n, err := clientConn.Write(clientMsg); n != len(clientMsg) || err != nil { t.Fatalf("Client Write() = %v, %v; want %v, ", n, err, len(clientMsg)) } rcvClientMsg := make([]byte, len(clientMsg)) if n, err := serverConn.Read(rcvClientMsg); n != len(rcvClientMsg) || err != nil { t.Fatalf("Server Read() = %v, %v; want %v, ", n, err, len(rcvClientMsg)) } if !reflect.DeepEqual(clientMsg, rcvClientMsg) { t.Fatalf("Client Write()/Server Read() = %v, want %v", rcvClientMsg, clientMsg) } serverMsg := []byte("Server Message") if n, err := serverConn.Write(serverMsg); n != len(serverMsg) || err != nil { t.Fatalf("Server Write() = %v, %v; want %v, ", n, err, len(serverMsg)) } rcvServerMsg := make([]byte, len(serverMsg)) if n, err := clientConn.Read(rcvServerMsg); n != len(rcvServerMsg) || err != nil { t.Fatalf("Client Read() = %v, %v; want %v, ", n, err, len(rcvServerMsg)) } if !reflect.DeepEqual(serverMsg, rcvServerMsg) { t.Fatalf("Server Write()/Client Read() = %v, want %v", rcvServerMsg, serverMsg) } } func TestPingPong(t *testing.T) { for _, np := range nextProtocols { testPingPong(t, np) } } func testSmallReadBuffer(t *testing.T, np string) { clientConn, serverConn := newConnPair(np) msg := []byte("Very Important Message") if n, err := clientConn.Write(msg); err != nil { t.Fatalf("Write() = %v, %v; want %v, ", n, err, len(msg)) } rcvMsg := make([]byte, len(msg)) n := 2 // Arbitrary index to break rcvMsg in two. rcvMsg1 := rcvMsg[:n] rcvMsg2 := rcvMsg[n:] if n, err := serverConn.Read(rcvMsg1); n != len(rcvMsg1) || err != nil { t.Fatalf("Read() = %v, %v; want %v, ", n, err, len(rcvMsg1)) } if n, err := serverConn.Read(rcvMsg2); n != len(rcvMsg2) || err != nil { t.Fatalf("Read() = %v, %v; want %v, ", n, err, len(rcvMsg2)) } if !reflect.DeepEqual(msg, rcvMsg) { t.Fatalf("Write()/Read() = %v, want %v", rcvMsg, msg) } } func TestSmallReadBuffer(t *testing.T) { for _, np := range nextProtocols { testSmallReadBuffer(t, np) } } func testLargeMsg(t *testing.T, np string) { clientConn, serverConn := newConnPair(np) // msgLen is such that the length in the framing is larger than the // default size of one frame. 
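	// Concretely, payloadLengthLimit is altsRecordDefaultLength minus the
	// 4-byte length field, the 4-byte type field and the GCM tag; msgLen below
	// adds the length field plus one byte back, so the message cannot fit in a
	// single frame.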
msgLen := altsRecordDefaultLength - msgTypeFieldSize - clientConn.crypto.EncryptionOverhead() + 1 msg := make([]byte, msgLen) if n, err := clientConn.Write(msg); n != len(msg) || err != nil { t.Fatalf("Write() = %v, %v; want %v, ", n, err, len(msg)) } rcvMsg := make([]byte, len(msg)) if n, err := io.ReadFull(serverConn, rcvMsg); n != len(rcvMsg) || err != nil { t.Fatalf("Read() = %v, %v; want %v, ", n, err, len(rcvMsg)) } if !reflect.DeepEqual(msg, rcvMsg) { t.Fatalf("Write()/Server Read() = %v, want %v", rcvMsg, msg) } } func TestLargeMsg(t *testing.T) { for _, np := range nextProtocols { testLargeMsg(t, np) } } func testIncorrectMsgType(t *testing.T, np string) { // framedMsg is an empty ciphertext with correct framing but wrong // message type. framedMsg := make([]byte, MsgLenFieldSize+msgTypeFieldSize) binary.LittleEndian.PutUint32(framedMsg[:MsgLenFieldSize], msgTypeFieldSize) wrongMsgType := uint32(0x22) binary.LittleEndian.PutUint32(framedMsg[MsgLenFieldSize:], wrongMsgType) in := bytes.NewBuffer(framedMsg) c := newTestALTSRecordConn(in, nil, core.ClientSide, np) b := make([]byte, 1) if n, err := c.Read(b); n != 0 || err == nil { t.Fatalf("Read() = , want %v", fmt.Errorf("received frame with incorrect message type %v", wrongMsgType)) } } func TestIncorrectMsgType(t *testing.T) { for _, np := range nextProtocols { testIncorrectMsgType(t, np) } } func testFrameTooLarge(t *testing.T, np string) { buf := new(bytes.Buffer) clientConn := newTestALTSRecordConn(nil, buf, core.ClientSide, np) serverConn := newTestALTSRecordConn(buf, nil, core.ServerSide, np) // payloadLen is such that the length in the framing is larger than // allowed in one frame. payloadLen := altsRecordLengthLimit - msgTypeFieldSize - clientConn.crypto.EncryptionOverhead() + 1 payload := make([]byte, payloadLen) c, err := clientConn.crypto.Encrypt(nil, payload) if err != nil { t.Fatalf(fmt.Sprintf("Error encrypting message: %v", err)) } msgLen := msgTypeFieldSize + len(c) framedMsg := make([]byte, MsgLenFieldSize+msgLen) binary.LittleEndian.PutUint32(framedMsg[:MsgLenFieldSize], uint32(msgTypeFieldSize+len(c))) msg := framedMsg[MsgLenFieldSize:] binary.LittleEndian.PutUint32(msg[:msgTypeFieldSize], altsRecordMsgType) copy(msg[msgTypeFieldSize:], c) if _, err = buf.Write(framedMsg); err != nil { t.Fatal(fmt.Sprintf("Unexpected error writing to buffer: %v", err)) } b := make([]byte, 1) if n, err := serverConn.Read(b); n != 0 || err == nil { t.Fatalf("Read() = , want %v", fmt.Errorf("received the frame length %d larger than the limit %d", altsRecordLengthLimit+1, altsRecordLengthLimit)) } } func TestFrameTooLarge(t *testing.T) { for _, np := range nextProtocols { testFrameTooLarge(t, np) } } func testWriteLargeData(t *testing.T, np string) { // Test sending and receiving messages larger than the maximum write // buffer size. clientConn, serverConn := newConnPair(np) // Message size is intentionally chosen to not be multiple of // payloadLengthLimtit. msgSize := altsWriteBufferMaxSize + (100 * 1024) clientMsg := make([]byte, msgSize) for i := 0; i < msgSize; i++ { clientMsg[i] = 0xAA } if n, err := clientConn.Write(clientMsg); n != len(clientMsg) || err != nil { t.Fatalf("Client Write() = %v, %v; want %v, ", n, err, len(clientMsg)) } // We need to keep reading until the entire message is received. The // reason we set all bytes of the message to a value other than zero is // to avoid ambiguous zero-init value of rcvClientMsg buffer and the // actual received data. 
rcvClientMsg := make([]byte, 0, msgSize) numberOfExpectedFrames := int(math.Ceil(float64(msgSize) / float64(serverConn.payloadLengthLimit))) for i := 0; i < numberOfExpectedFrames; i++ { expectedRcvSize := serverConn.payloadLengthLimit if i == numberOfExpectedFrames-1 { // Last frame might be smaller. expectedRcvSize = msgSize % serverConn.payloadLengthLimit } tmpBuf := make([]byte, expectedRcvSize) if n, err := serverConn.Read(tmpBuf); n != len(tmpBuf) || err != nil { t.Fatalf("Server Read() = %v, %v; want %v, ", n, err, len(tmpBuf)) } rcvClientMsg = append(rcvClientMsg, tmpBuf...) } if !reflect.DeepEqual(clientMsg, rcvClientMsg) { t.Fatalf("Client Write()/Server Read() = %v, want %v", rcvClientMsg, clientMsg) } } func TestWriteLargeData(t *testing.T) { for _, np := range nextProtocols { testWriteLargeData(t, np) } } grpc-go-1.22.1/credentials/alts/internal/conn/utils.go000066400000000000000000000040011351635773100226410ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package conn import core "google.golang.org/grpc/credentials/alts/internal" // NewOutCounter returns an outgoing counter initialized to the starting sequence // number for the client/server side of a connection. func NewOutCounter(s core.Side, overflowLen int) (c Counter) { c.overflowLen = overflowLen if s == core.ServerSide { // Server counters in ALTS record have the little-endian high bit // set. c.value[counterLen-1] = 0x80 } return } // NewInCounter returns an incoming counter initialized to the starting sequence // number for the client/server side of a connection. This is used in ALTS record // to check that incoming counters are as expected, since ALTS record guarantees // that messages are unwrapped in the same order that the peer wrapped them. func NewInCounter(s core.Side, overflowLen int) (c Counter) { c.overflowLen = overflowLen if s == core.ClientSide { // Server counters in ALTS record have the little-endian high bit // set. c.value[counterLen-1] = 0x80 } return } // CounterFromValue creates a new counter given an initial value. func CounterFromValue(value []byte, overflowLen int) (c Counter) { c.overflowLen = overflowLen copy(c.value[:], value) return } // CounterSide returns the connection side (client/server) a sequence counter is // associated with. func CounterSide(c []byte) core.Side { if c[counterLen-1]&0x80 == 0x80 { return core.ServerSide } return core.ClientSide } grpc-go-1.22.1/credentials/alts/internal/handshaker/000077500000000000000000000000001351635773100223325ustar00rootroot00000000000000grpc-go-1.22.1/credentials/alts/internal/handshaker/handshaker.go000066400000000000000000000271071351635773100250000ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package handshaker provides ALTS handshaking functionality for GCP. package handshaker import ( "context" "errors" "fmt" "io" "net" "sync" grpc "google.golang.org/grpc" "google.golang.org/grpc/codes" "google.golang.org/grpc/credentials" core "google.golang.org/grpc/credentials/alts/internal" "google.golang.org/grpc/credentials/alts/internal/authinfo" "google.golang.org/grpc/credentials/alts/internal/conn" altsgrpc "google.golang.org/grpc/credentials/alts/internal/proto/grpc_gcp" altspb "google.golang.org/grpc/credentials/alts/internal/proto/grpc_gcp" ) const ( // The maximum byte size of receive frames. frameLimit = 64 * 1024 // 64 KB rekeyRecordProtocolName = "ALTSRP_GCM_AES128_REKEY" // maxPendingHandshakes represents the maximum number of concurrent // handshakes. maxPendingHandshakes = 100 ) var ( hsProtocol = altspb.HandshakeProtocol_ALTS appProtocols = []string{"grpc"} recordProtocols = []string{rekeyRecordProtocolName} keyLength = map[string]int{ rekeyRecordProtocolName: 44, } altsRecordFuncs = map[string]conn.ALTSRecordFunc{ // ALTS handshaker protocols. rekeyRecordProtocolName: func(s core.Side, keyData []byte) (conn.ALTSRecordCrypto, error) { return conn.NewAES128GCMRekey(s, keyData) }, } // control number of concurrent created (but not closed) handshakers. mu sync.Mutex concurrentHandshakes = int64(0) // errDropped occurs when maxPendingHandshakes is reached. errDropped = errors.New("maximum number of concurrent ALTS handshakes is reached") ) func init() { for protocol, f := range altsRecordFuncs { if err := conn.RegisterProtocol(protocol, f); err != nil { panic(err) } } } func acquire(n int64) bool { mu.Lock() success := maxPendingHandshakes-concurrentHandshakes >= n if success { concurrentHandshakes += n } mu.Unlock() return success } func release(n int64) { mu.Lock() concurrentHandshakes -= n if concurrentHandshakes < 0 { mu.Unlock() panic("bad release") } mu.Unlock() } // ClientHandshakerOptions contains the client handshaker options that can // provided by the caller. type ClientHandshakerOptions struct { // ClientIdentity is the handshaker client local identity. ClientIdentity *altspb.Identity // TargetName is the server service account name for secure name // checking. TargetName string // TargetServiceAccounts contains a list of expected target service // accounts. One of these accounts should match one of the accounts in // the handshaker results. Otherwise, the handshake fails. TargetServiceAccounts []string // RPCVersions specifies the gRPC versions accepted by the client. RPCVersions *altspb.RpcProtocolVersions } // ServerHandshakerOptions contains the server handshaker options that can // provided by the caller. type ServerHandshakerOptions struct { // RPCVersions specifies the gRPC versions accepted by the server. RPCVersions *altspb.RpcProtocolVersions } // DefaultClientHandshakerOptions returns the default client handshaker options. func DefaultClientHandshakerOptions() *ClientHandshakerOptions { return &ClientHandshakerOptions{} } // DefaultServerHandshakerOptions returns the default client handshaker options. 
func DefaultServerHandshakerOptions() *ServerHandshakerOptions { return &ServerHandshakerOptions{} } // TODO: add support for future local and remote endpoint in both client options // and server options (server options struct does not exist now. When // caller can provide endpoints, it should be created. // altsHandshaker is used to complete a ALTS handshaking between client and // server. This handshaker talks to the ALTS handshaker service in the metadata // server. type altsHandshaker struct { // RPC stream used to access the ALTS Handshaker service. stream altsgrpc.HandshakerService_DoHandshakeClient // the connection to the peer. conn net.Conn // client handshake options. clientOpts *ClientHandshakerOptions // server handshake options. serverOpts *ServerHandshakerOptions // defines the side doing the handshake, client or server. side core.Side } // NewClientHandshaker creates a ALTS handshaker for GCP which contains an RPC // stub created using the passed conn and used to talk to the ALTS Handshaker // service in the metadata server. func NewClientHandshaker(ctx context.Context, conn *grpc.ClientConn, c net.Conn, opts *ClientHandshakerOptions) (core.Handshaker, error) { stream, err := altsgrpc.NewHandshakerServiceClient(conn).DoHandshake(ctx, grpc.WaitForReady(true)) if err != nil { return nil, err } return &altsHandshaker{ stream: stream, conn: c, clientOpts: opts, side: core.ClientSide, }, nil } // NewServerHandshaker creates a ALTS handshaker for GCP which contains an RPC // stub created using the passed conn and used to talk to the ALTS Handshaker // service in the metadata server. func NewServerHandshaker(ctx context.Context, conn *grpc.ClientConn, c net.Conn, opts *ServerHandshakerOptions) (core.Handshaker, error) { stream, err := altsgrpc.NewHandshakerServiceClient(conn).DoHandshake(ctx, grpc.WaitForReady(true)) if err != nil { return nil, err } return &altsHandshaker{ stream: stream, conn: c, serverOpts: opts, side: core.ServerSide, }, nil } // ClientHandshake starts and completes a client ALTS handshaking for GCP. Once // done, ClientHandshake returns a secure connection. func (h *altsHandshaker) ClientHandshake(ctx context.Context) (net.Conn, credentials.AuthInfo, error) { if !acquire(1) { return nil, nil, errDropped } defer release(1) if h.side != core.ClientSide { return nil, nil, errors.New("only handshakers created using NewClientHandshaker can perform a client handshaker") } // Create target identities from service account list. targetIdentities := make([]*altspb.Identity, 0, len(h.clientOpts.TargetServiceAccounts)) for _, account := range h.clientOpts.TargetServiceAccounts { targetIdentities = append(targetIdentities, &altspb.Identity{ IdentityOneof: &altspb.Identity_ServiceAccount{ ServiceAccount: account, }, }) } req := &altspb.HandshakerReq{ ReqOneof: &altspb.HandshakerReq_ClientStart{ ClientStart: &altspb.StartClientHandshakeReq{ HandshakeSecurityProtocol: hsProtocol, ApplicationProtocols: appProtocols, RecordProtocols: recordProtocols, TargetIdentities: targetIdentities, LocalIdentity: h.clientOpts.ClientIdentity, TargetName: h.clientOpts.TargetName, RpcVersions: h.clientOpts.RPCVersions, }, }, } conn, result, err := h.doHandshake(req) if err != nil { return nil, nil, err } authInfo := authinfo.New(result) return conn, authInfo, nil } // ServerHandshake starts and completes a server ALTS handshaking for GCP. Once // done, ServerHandshake returns a secure connection. 
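// Unlike ClientHandshake, the server first reads the client's initial bytes
// from the raw connection and forwards them to the handshaker service in the
// ServerStart request; any bytes the service does not consume are carried
// forward as extra handshake data by doHandshake.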
func (h *altsHandshaker) ServerHandshake(ctx context.Context) (net.Conn, credentials.AuthInfo, error) { if !acquire(1) { return nil, nil, errDropped } defer release(1) if h.side != core.ServerSide { return nil, nil, errors.New("only handshakers created using NewServerHandshaker can perform a server handshaker") } p := make([]byte, frameLimit) n, err := h.conn.Read(p) if err != nil { return nil, nil, err } // Prepare server parameters. // TODO: currently only ALTS parameters are provided. Might need to use // more options in the future. params := make(map[int32]*altspb.ServerHandshakeParameters) params[int32(altspb.HandshakeProtocol_ALTS)] = &altspb.ServerHandshakeParameters{ RecordProtocols: recordProtocols, } req := &altspb.HandshakerReq{ ReqOneof: &altspb.HandshakerReq_ServerStart{ ServerStart: &altspb.StartServerHandshakeReq{ ApplicationProtocols: appProtocols, HandshakeParameters: params, InBytes: p[:n], RpcVersions: h.serverOpts.RPCVersions, }, }, } conn, result, err := h.doHandshake(req) if err != nil { return nil, nil, err } authInfo := authinfo.New(result) return conn, authInfo, nil } func (h *altsHandshaker) doHandshake(req *altspb.HandshakerReq) (net.Conn, *altspb.HandshakerResult, error) { resp, err := h.accessHandshakerService(req) if err != nil { return nil, nil, err } // Check of the returned status is an error. if resp.GetStatus() != nil { if got, want := resp.GetStatus().Code, uint32(codes.OK); got != want { return nil, nil, fmt.Errorf("%v", resp.GetStatus().Details) } } var extra []byte if req.GetServerStart() != nil { extra = req.GetServerStart().GetInBytes()[resp.GetBytesConsumed():] } result, extra, err := h.processUntilDone(resp, extra) if err != nil { return nil, nil, err } // The handshaker returns a 128 bytes key. It should be truncated based // on the returned record protocol. keyLen, ok := keyLength[result.RecordProtocol] if !ok { return nil, nil, fmt.Errorf("unknown resulted record protocol %v", result.RecordProtocol) } sc, err := conn.NewConn(h.conn, h.side, result.GetRecordProtocol(), result.KeyData[:keyLen], extra) if err != nil { return nil, nil, err } return sc, result, nil } func (h *altsHandshaker) accessHandshakerService(req *altspb.HandshakerReq) (*altspb.HandshakerResp, error) { if err := h.stream.Send(req); err != nil { return nil, err } resp, err := h.stream.Recv() if err != nil { return nil, err } return resp, nil } // processUntilDone processes the handshake until the handshaker service returns // the results. Handshaker service takes care of frame parsing, so we read // whatever received from the network and send it to the handshaker service. func (h *altsHandshaker) processUntilDone(resp *altspb.HandshakerResp, extra []byte) (*altspb.HandshakerResult, []byte, error) { for { if len(resp.OutFrames) > 0 { if _, err := h.conn.Write(resp.OutFrames); err != nil { return nil, nil, err } } if resp.Result != nil { return resp.Result, extra, nil } buf := make([]byte, frameLimit) n, err := h.conn.Read(buf) if err != nil && err != io.EOF { return nil, nil, err } // If there is nothing to send to the handshaker service, and // nothing is received from the peer, then we are stuck. // This covers the case when the peer is not responding. Note // that handshaker service connection issues are caught in // accessHandshakerService before we even get here. if len(resp.OutFrames) == 0 && n == 0 { return nil, nil, core.PeerNotRespondingError } // Append extra bytes from the previous interaction with the // handshaker service with the current buffer read from conn. 
p := append(extra, buf[:n]...) resp, err = h.accessHandshakerService(&altspb.HandshakerReq{ ReqOneof: &altspb.HandshakerReq_Next{ Next: &altspb.NextHandshakeMessageReq{ InBytes: p, }, }, }) if err != nil { return nil, nil, err } // Set extra based on handshaker service response. if n == 0 { extra = nil } else { extra = buf[resp.GetBytesConsumed():n] } } } // Close terminates the Handshaker. It should be called when the caller obtains // the secure connection. func (h *altsHandshaker) Close() { h.stream.CloseSend() } grpc-go-1.22.1/credentials/alts/internal/handshaker/handshaker_test.go000066400000000000000000000162131351635773100260330ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package handshaker import ( "bytes" "context" "testing" "time" grpc "google.golang.org/grpc" core "google.golang.org/grpc/credentials/alts/internal" altspb "google.golang.org/grpc/credentials/alts/internal/proto/grpc_gcp" "google.golang.org/grpc/credentials/alts/internal/testutil" ) var ( testRecordProtocol = rekeyRecordProtocolName testKey = []byte{ // 44 arbitrary bytes. 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xe2, 0xd2, 0x4c, 0xce, 0x4f, 0x49, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xe2, 0xd2, 0x4c, 0xce, 0x4f, 0x49, 0x1f, 0x8b, 0xd2, 0x4c, 0xce, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xe2, } testServiceAccount = "test_service_account" testTargetServiceAccounts = []string{testServiceAccount} testClientIdentity = &altspb.Identity{ IdentityOneof: &altspb.Identity_Hostname{ Hostname: "i_am_a_client", }, } ) // testRPCStream mimics a altspb.HandshakerService_DoHandshakeClient object. type testRPCStream struct { grpc.ClientStream t *testing.T isClient bool // The resp expected to be returned by Recv(). Make sure this is set to // the content the test requires before Recv() is invoked. recvBuf *altspb.HandshakerResp // false if it is the first access to Handshaker service on Envelope. first bool // useful for testing concurrent calls. delay time.Duration } func (t *testRPCStream) Recv() (*altspb.HandshakerResp, error) { resp := t.recvBuf t.recvBuf = nil return resp, nil } func (t *testRPCStream) Send(req *altspb.HandshakerReq) error { var resp *altspb.HandshakerResp if !t.first { // Generate the bytes to be returned by Recv() for the initial // handshaking. t.first = true if t.isClient { resp = &altspb.HandshakerResp{ OutFrames: testutil.MakeFrame("ClientInit"), // Simulate consuming ServerInit. BytesConsumed: 14, } } else { resp = &altspb.HandshakerResp{ OutFrames: testutil.MakeFrame("ServerInit"), // Simulate consuming ClientInit. BytesConsumed: 14, } } } else { // Add delay to test concurrent calls. cleanup := stat.Update() defer cleanup() time.Sleep(t.delay) // Generate the response to be returned by Recv() for the // follow-up handshaking. 
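// (The fake result advertises testRecordProtocol together with testKey;
// doHandshake truncates KeyData to the key length registered for that
// record protocol, so testKey must be at least that long.)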
result := &altspb.HandshakerResult{ RecordProtocol: testRecordProtocol, KeyData: testKey, } resp = &altspb.HandshakerResp{ Result: result, // Simulate consuming ClientFinished or ServerFinished. BytesConsumed: 18, } } t.recvBuf = resp return nil } func (t *testRPCStream) CloseSend() error { return nil } var stat testutil.Stats func TestClientHandshake(t *testing.T) { for _, testCase := range []struct { delay time.Duration numberOfHandshakes int }{ {0 * time.Millisecond, 1}, {100 * time.Millisecond, 10 * maxPendingHandshakes}, } { errc := make(chan error) stat.Reset() for i := 0; i < testCase.numberOfHandshakes; i++ { stream := &testRPCStream{ t: t, isClient: true, } // Preload the inbound frames. f1 := testutil.MakeFrame("ServerInit") f2 := testutil.MakeFrame("ServerFinished") in := bytes.NewBuffer(f1) in.Write(f2) out := new(bytes.Buffer) tc := testutil.NewTestConn(in, out) chs := &altsHandshaker{ stream: stream, conn: tc, clientOpts: &ClientHandshakerOptions{ TargetServiceAccounts: testTargetServiceAccounts, ClientIdentity: testClientIdentity, }, side: core.ClientSide, } go func() { _, context, err := chs.ClientHandshake(context.Background()) if err == nil && context == nil { panic("expected non-nil ALTS context") } errc <- err chs.Close() }() } // Ensure all errors are expected. for i := 0; i < testCase.numberOfHandshakes; i++ { if err := <-errc; err != nil && err != errDropped { t.Errorf("ClientHandshake() = _, %v, want _, or %v", err, errDropped) } } // Ensure that there are no concurrent calls more than the limit. if stat.MaxConcurrentCalls > maxPendingHandshakes { t.Errorf("Observed %d concurrent handshakes; want <= %d", stat.MaxConcurrentCalls, maxPendingHandshakes) } } } func TestServerHandshake(t *testing.T) { for _, testCase := range []struct { delay time.Duration numberOfHandshakes int }{ {0 * time.Millisecond, 1}, {100 * time.Millisecond, 10 * maxPendingHandshakes}, } { errc := make(chan error) stat.Reset() for i := 0; i < testCase.numberOfHandshakes; i++ { stream := &testRPCStream{ t: t, isClient: false, } // Preload the inbound frames. f1 := testutil.MakeFrame("ClientInit") f2 := testutil.MakeFrame("ClientFinished") in := bytes.NewBuffer(f1) in.Write(f2) out := new(bytes.Buffer) tc := testutil.NewTestConn(in, out) shs := &altsHandshaker{ stream: stream, conn: tc, serverOpts: DefaultServerHandshakerOptions(), side: core.ServerSide, } go func() { _, context, err := shs.ServerHandshake(context.Background()) if err == nil && context == nil { panic("expected non-nil ALTS context") } errc <- err shs.Close() }() } // Ensure all errors are expected. for i := 0; i < testCase.numberOfHandshakes; i++ { if err := <-errc; err != nil && err != errDropped { t.Errorf("ServerHandshake() = _, %v, want _, or %v", err, errDropped) } } // Ensure that there are no concurrent calls more than the limit. if stat.MaxConcurrentCalls > maxPendingHandshakes { t.Errorf("Observed %d concurrent handshakes; want <= %d", stat.MaxConcurrentCalls, maxPendingHandshakes) } } } // testUnresponsiveRPCStream is used for testing the PeerNotResponding case. 
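// testutil.MakeFrame, used by the tests above, produces an ALTS-style frame:
// the payload prefixed with its length. A minimal sketch of such a helper,
// assuming a 4-byte little-endian length prefix (the details live in the
// testutil package), could look like:
//
//	func MakeFrame(pl string) []byte {
//		f := make([]byte, len(pl)+4)
//		binary.LittleEndian.PutUint32(f, uint32(len(pl)))
//		copy(f[4:], pl)
//		return f
//	}
//
// which is consistent with the tests treating a consumed "ClientInit" or
// "ServerInit" frame as 14 bytes (10 payload bytes plus the length prefix).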
type testUnresponsiveRPCStream struct { grpc.ClientStream } func (t *testUnresponsiveRPCStream) Recv() (*altspb.HandshakerResp, error) { return &altspb.HandshakerResp{}, nil } func (t *testUnresponsiveRPCStream) Send(req *altspb.HandshakerReq) error { return nil } func (t *testUnresponsiveRPCStream) CloseSend() error { return nil } func TestPeerNotResponding(t *testing.T) { stream := &testUnresponsiveRPCStream{} chs := &altsHandshaker{ stream: stream, conn: testutil.NewUnresponsiveTestConn(), clientOpts: &ClientHandshakerOptions{ TargetServiceAccounts: testTargetServiceAccounts, ClientIdentity: testClientIdentity, }, side: core.ClientSide, } _, context, err := chs.ClientHandshake(context.Background()) chs.Close() if context != nil { t.Error("expected non-nil ALTS context") } if got, want := err, core.PeerNotRespondingError; got != want { t.Errorf("ClientHandshake() = %v, want %v", got, want) } } grpc-go-1.22.1/credentials/alts/internal/handshaker/service/000077500000000000000000000000001351635773100237725ustar00rootroot00000000000000grpc-go-1.22.1/credentials/alts/internal/handshaker/service/service.go000066400000000000000000000027471351635773100257730ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package service manages connections between the VM application and the ALTS // handshaker service. package service import ( "sync" grpc "google.golang.org/grpc" ) var ( // hsConn represents a connection to hypervisor handshaker service. hsConn *grpc.ClientConn mu sync.Mutex // hsDialer will be reassigned in tests. hsDialer = grpc.Dial ) // Dial dials the handshake service in the hypervisor. If a connection has // already been established, this function returns it. Otherwise, a new // connection is created. func Dial(hsAddress string) (*grpc.ClientConn, error) { mu.Lock() defer mu.Unlock() if hsConn == nil { // Create a new connection to the handshaker service. Note that // this connection stays open until the application is closed. var err error hsConn, err = hsDialer(hsAddress, grpc.WithInsecure()) if err != nil { return nil, err } } return hsConn, nil } grpc-go-1.22.1/credentials/alts/internal/handshaker/service/service_test.go000066400000000000000000000033371351635773100270260ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package service import ( "testing" grpc "google.golang.org/grpc" ) const ( // The address is irrelevant in this test. 
testAddress = "some_address" ) func TestDial(t *testing.T) { defer func() func() { temp := hsDialer hsDialer = func(target string, opts ...grpc.DialOption) (*grpc.ClientConn, error) { return &grpc.ClientConn{}, nil } return func() { hsDialer = temp } }() // Ensure that hsConn is nil at first. hsConn = nil // First call to Dial, it should create set hsConn. conn1, err := Dial(testAddress) if err != nil { t.Fatalf("first call to Dial failed: %v", err) } if conn1 == nil { t.Fatal("first call to Dial(_)=(nil, _), want not nil") } if got, want := hsConn, conn1; got != want { t.Fatalf("hsConn=%v, want %v", got, want) } // Second call to Dial should return conn1 above. conn2, err := Dial(testAddress) if err != nil { t.Fatalf("second call to Dial(_) failed: %v", err) } if got, want := conn2, conn1; got != want { t.Fatalf("second call to Dial(_)=(%v, _), want (%v,. _)", got, want) } if got, want := hsConn, conn1; got != want { t.Fatalf("hsConn=%v, want %v", got, want) } } grpc-go-1.22.1/credentials/alts/internal/proto/000077500000000000000000000000001351635773100213655ustar00rootroot00000000000000grpc-go-1.22.1/credentials/alts/internal/proto/grpc_gcp/000077500000000000000000000000001351635773100231515ustar00rootroot00000000000000grpc-go-1.22.1/credentials/alts/internal/proto/grpc_gcp/altscontext.pb.go000066400000000000000000000154741351635773100264630ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: grpc/gcp/altscontext.proto package grpc_gcp // import "google.golang.org/grpc/credentials/alts/internal/proto/grpc_gcp" import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type AltsContext struct { // The application protocol negotiated for this connection. ApplicationProtocol string `protobuf:"bytes,1,opt,name=application_protocol,json=applicationProtocol,proto3" json:"application_protocol,omitempty"` // The record protocol negotiated for this connection. RecordProtocol string `protobuf:"bytes,2,opt,name=record_protocol,json=recordProtocol,proto3" json:"record_protocol,omitempty"` // The security level of the created secure channel. SecurityLevel SecurityLevel `protobuf:"varint,3,opt,name=security_level,json=securityLevel,proto3,enum=grpc.gcp.SecurityLevel" json:"security_level,omitempty"` // The peer service account. PeerServiceAccount string `protobuf:"bytes,4,opt,name=peer_service_account,json=peerServiceAccount,proto3" json:"peer_service_account,omitempty"` // The local service account. LocalServiceAccount string `protobuf:"bytes,5,opt,name=local_service_account,json=localServiceAccount,proto3" json:"local_service_account,omitempty"` // The RPC protocol versions supported by the peer. PeerRpcVersions *RpcProtocolVersions `protobuf:"bytes,6,opt,name=peer_rpc_versions,json=peerRpcVersions,proto3" json:"peer_rpc_versions,omitempty"` // Additional attributes of the peer. 
PeerAttributes map[string]string `protobuf:"bytes,7,rep,name=peer_attributes,json=peerAttributes,proto3" json:"peer_attributes,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *AltsContext) Reset() { *m = AltsContext{} } func (m *AltsContext) String() string { return proto.CompactTextString(m) } func (*AltsContext) ProtoMessage() {} func (*AltsContext) Descriptor() ([]byte, []int) { return fileDescriptor_altscontext_f6b7868f9a30497f, []int{0} } func (m *AltsContext) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_AltsContext.Unmarshal(m, b) } func (m *AltsContext) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_AltsContext.Marshal(b, m, deterministic) } func (dst *AltsContext) XXX_Merge(src proto.Message) { xxx_messageInfo_AltsContext.Merge(dst, src) } func (m *AltsContext) XXX_Size() int { return xxx_messageInfo_AltsContext.Size(m) } func (m *AltsContext) XXX_DiscardUnknown() { xxx_messageInfo_AltsContext.DiscardUnknown(m) } var xxx_messageInfo_AltsContext proto.InternalMessageInfo func (m *AltsContext) GetApplicationProtocol() string { if m != nil { return m.ApplicationProtocol } return "" } func (m *AltsContext) GetRecordProtocol() string { if m != nil { return m.RecordProtocol } return "" } func (m *AltsContext) GetSecurityLevel() SecurityLevel { if m != nil { return m.SecurityLevel } return SecurityLevel_SECURITY_NONE } func (m *AltsContext) GetPeerServiceAccount() string { if m != nil { return m.PeerServiceAccount } return "" } func (m *AltsContext) GetLocalServiceAccount() string { if m != nil { return m.LocalServiceAccount } return "" } func (m *AltsContext) GetPeerRpcVersions() *RpcProtocolVersions { if m != nil { return m.PeerRpcVersions } return nil } func (m *AltsContext) GetPeerAttributes() map[string]string { if m != nil { return m.PeerAttributes } return nil } func init() { proto.RegisterType((*AltsContext)(nil), "grpc.gcp.AltsContext") proto.RegisterMapType((map[string]string)(nil), "grpc.gcp.AltsContext.PeerAttributesEntry") } func init() { proto.RegisterFile("grpc/gcp/altscontext.proto", fileDescriptor_altscontext_f6b7868f9a30497f) } var fileDescriptor_altscontext_f6b7868f9a30497f = []byte{ // 411 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x6c, 0x92, 0x4d, 0x6f, 0x13, 0x31, 0x10, 0x86, 0xb5, 0x0d, 0x2d, 0xe0, 0x88, 0xb4, 0xb8, 0xa9, 0x58, 0x45, 0x42, 0x8a, 0xb8, 0xb0, 0x5c, 0x76, 0x21, 0x5c, 0x10, 0x07, 0x50, 0x8a, 0x38, 0x20, 0x71, 0x88, 0xb6, 0x12, 0x07, 0x2e, 0x2b, 0x77, 0x3a, 0xb2, 0x2c, 0x5c, 0x8f, 0x35, 0x76, 0x22, 0xf2, 0xb3, 0xf9, 0x07, 0x68, 0xed, 0xcd, 0x07, 0x1f, 0xb7, 0x9d, 0x79, 0x9f, 0x19, 0xbf, 0xb3, 0x33, 0x62, 0xa6, 0xd9, 0x43, 0xa3, 0xc1, 0x37, 0xca, 0xc6, 0x00, 0xe4, 0x22, 0xfe, 0x8c, 0xb5, 0x67, 0x8a, 0x24, 0x1f, 0xf5, 0x5a, 0xad, 0xc1, 0xcf, 0xaa, 0x3d, 0x15, 0x59, 0xb9, 0xe0, 0x89, 0x63, 0x17, 0x10, 0xd6, 0x6c, 0xe2, 0xb6, 0x03, 0xba, 0xbf, 0x27, 0x97, 0x6b, 0x5e, 0xfc, 0x1a, 0x89, 0xf1, 0xd2, 0xc6, 0xf0, 0x29, 0x77, 0x92, 0x6f, 0xc4, 0x54, 0x79, 0x6f, 0x0d, 0xa8, 0x68, 0xc8, 0x75, 0x09, 0x02, 0xb2, 0x65, 0x31, 0x2f, 0xaa, 0xc7, 0xed, 0xe5, 0x91, 0xb6, 0x1a, 0x24, 0xf9, 0x52, 0x9c, 0x33, 0x02, 0xf1, 0xdd, 0x81, 0x3e, 0x49, 0xf4, 0x24, 0xa7, 0xf7, 0xe0, 0x07, 0x31, 0xd9, 0x9b, 0xb0, 0xb8, 0x41, 0x5b, 0x8e, 0xe6, 0x45, 0x35, 0x59, 0x3c, 0xab, 0x77, 0xc6, 0xeb, 0x9b, 0x41, 0xff, 0xda, 0xcb, 
0xed, 0x93, 0x70, 0x1c, 0xca, 0xd7, 0x62, 0xea, 0x11, 0xb9, 0x0b, 0xc8, 0x1b, 0x03, 0xd8, 0x29, 0x00, 0x5a, 0xbb, 0x58, 0x3e, 0x48, 0xaf, 0xc9, 0x5e, 0xbb, 0xc9, 0xd2, 0x32, 0x2b, 0x72, 0x21, 0xae, 0x2c, 0x81, 0xb2, 0xff, 0x94, 0x9c, 0xe6, 0x71, 0x92, 0xf8, 0x57, 0xcd, 0x17, 0xf1, 0x34, 0xbd, 0xc2, 0x1e, 0xba, 0x0d, 0x72, 0x30, 0xe4, 0x42, 0x79, 0x36, 0x2f, 0xaa, 0xf1, 0xe2, 0xf9, 0xc1, 0x68, 0xeb, 0x61, 0x37, 0xd7, 0xb7, 0x01, 0x6a, 0xcf, 0xfb, 0xba, 0xd6, 0xc3, 0x2e, 0x21, 0x5b, 0x91, 0x52, 0x9d, 0x8a, 0x91, 0xcd, 0xed, 0x3a, 0x62, 0x28, 0x1f, 0xce, 0x47, 0xd5, 0x78, 0xf1, 0xea, 0xd0, 0xe8, 0xe8, 0xe7, 0xd7, 0x2b, 0x44, 0x5e, 0xee, 0xd9, 0xcf, 0x2e, 0xf2, 0xb6, 0x9d, 0xf8, 0x3f, 0x92, 0xb3, 0xa5, 0xb8, 0xfc, 0x0f, 0x26, 0x2f, 0xc4, 0xe8, 0x07, 0x6e, 0x87, 0x35, 0xf5, 0x9f, 0x72, 0x2a, 0x4e, 0x37, 0xca, 0xae, 0x71, 0x58, 0x46, 0x0e, 0xde, 0x9f, 0xbc, 0x2b, 0xae, 0xad, 0xb8, 0x32, 0x94, 0x1d, 0xf4, 0x47, 0x54, 0x1b, 0x17, 0x91, 0x9d, 0xb2, 0xd7, 0x17, 0x47, 0x66, 0xd2, 0x74, 0xab, 0xe2, 0xfb, 0x47, 0x4d, 0xa4, 0x2d, 0xd6, 0x9a, 0xac, 0x72, 0xba, 0x26, 0xd6, 0x4d, 0x3a, 0x2e, 0x60, 0xbc, 0x43, 0x17, 0x8d, 0xb2, 0x21, 0x9d, 0x62, 0xb3, 0xeb, 0xd2, 0xa4, 0x2b, 0x48, 0x50, 0xa7, 0xc1, 0xdf, 0x9e, 0xa5, 0xf8, 0xed, 0xef, 0x00, 0x00, 0x00, 0xff, 0xff, 0x9b, 0x8c, 0xe4, 0x6a, 0xba, 0x02, 0x00, 0x00, } grpc-go-1.22.1/credentials/alts/internal/proto/grpc_gcp/handshaker.pb.go000066400000000000000000001333001351635773100262100ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: grpc/gcp/handshaker.proto package grpc_gcp // import "google.golang.org/grpc/credentials/alts/internal/proto/grpc_gcp" import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import ( context "golang.org/x/net/context" grpc "google.golang.org/grpc" ) // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type HandshakeProtocol int32 const ( // Default value. HandshakeProtocol_HANDSHAKE_PROTOCOL_UNSPECIFIED HandshakeProtocol = 0 // TLS handshake protocol. HandshakeProtocol_TLS HandshakeProtocol = 1 // Application Layer Transport Security handshake protocol. 
HandshakeProtocol_ALTS HandshakeProtocol = 2 ) var HandshakeProtocol_name = map[int32]string{ 0: "HANDSHAKE_PROTOCOL_UNSPECIFIED", 1: "TLS", 2: "ALTS", } var HandshakeProtocol_value = map[string]int32{ "HANDSHAKE_PROTOCOL_UNSPECIFIED": 0, "TLS": 1, "ALTS": 2, } func (x HandshakeProtocol) String() string { return proto.EnumName(HandshakeProtocol_name, int32(x)) } func (HandshakeProtocol) EnumDescriptor() ([]byte, []int) { return fileDescriptor_handshaker_1dfe659b12ea825e, []int{0} } type NetworkProtocol int32 const ( NetworkProtocol_NETWORK_PROTOCOL_UNSPECIFIED NetworkProtocol = 0 NetworkProtocol_TCP NetworkProtocol = 1 NetworkProtocol_UDP NetworkProtocol = 2 ) var NetworkProtocol_name = map[int32]string{ 0: "NETWORK_PROTOCOL_UNSPECIFIED", 1: "TCP", 2: "UDP", } var NetworkProtocol_value = map[string]int32{ "NETWORK_PROTOCOL_UNSPECIFIED": 0, "TCP": 1, "UDP": 2, } func (x NetworkProtocol) String() string { return proto.EnumName(NetworkProtocol_name, int32(x)) } func (NetworkProtocol) EnumDescriptor() ([]byte, []int) { return fileDescriptor_handshaker_1dfe659b12ea825e, []int{1} } type Endpoint struct { // IP address. It should contain an IPv4 or IPv6 string literal, e.g. // "192.168.0.1" or "2001:db8::1". IpAddress string `protobuf:"bytes,1,opt,name=ip_address,json=ipAddress,proto3" json:"ip_address,omitempty"` // Port number. Port int32 `protobuf:"varint,2,opt,name=port,proto3" json:"port,omitempty"` // Network protocol (e.g., TCP, UDP) associated with this endpoint. Protocol NetworkProtocol `protobuf:"varint,3,opt,name=protocol,proto3,enum=grpc.gcp.NetworkProtocol" json:"protocol,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Endpoint) Reset() { *m = Endpoint{} } func (m *Endpoint) String() string { return proto.CompactTextString(m) } func (*Endpoint) ProtoMessage() {} func (*Endpoint) Descriptor() ([]byte, []int) { return fileDescriptor_handshaker_1dfe659b12ea825e, []int{0} } func (m *Endpoint) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Endpoint.Unmarshal(m, b) } func (m *Endpoint) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Endpoint.Marshal(b, m, deterministic) } func (dst *Endpoint) XXX_Merge(src proto.Message) { xxx_messageInfo_Endpoint.Merge(dst, src) } func (m *Endpoint) XXX_Size() int { return xxx_messageInfo_Endpoint.Size(m) } func (m *Endpoint) XXX_DiscardUnknown() { xxx_messageInfo_Endpoint.DiscardUnknown(m) } var xxx_messageInfo_Endpoint proto.InternalMessageInfo func (m *Endpoint) GetIpAddress() string { if m != nil { return m.IpAddress } return "" } func (m *Endpoint) GetPort() int32 { if m != nil { return m.Port } return 0 } func (m *Endpoint) GetProtocol() NetworkProtocol { if m != nil { return m.Protocol } return NetworkProtocol_NETWORK_PROTOCOL_UNSPECIFIED } type Identity struct { // Types that are valid to be assigned to IdentityOneof: // *Identity_ServiceAccount // *Identity_Hostname IdentityOneof isIdentity_IdentityOneof `protobuf_oneof:"identity_oneof"` // Additional attributes of the identity. 
Attributes map[string]string `protobuf:"bytes,3,rep,name=attributes,proto3" json:"attributes,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Identity) Reset() { *m = Identity{} } func (m *Identity) String() string { return proto.CompactTextString(m) } func (*Identity) ProtoMessage() {} func (*Identity) Descriptor() ([]byte, []int) { return fileDescriptor_handshaker_1dfe659b12ea825e, []int{1} } func (m *Identity) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Identity.Unmarshal(m, b) } func (m *Identity) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Identity.Marshal(b, m, deterministic) } func (dst *Identity) XXX_Merge(src proto.Message) { xxx_messageInfo_Identity.Merge(dst, src) } func (m *Identity) XXX_Size() int { return xxx_messageInfo_Identity.Size(m) } func (m *Identity) XXX_DiscardUnknown() { xxx_messageInfo_Identity.DiscardUnknown(m) } var xxx_messageInfo_Identity proto.InternalMessageInfo type isIdentity_IdentityOneof interface { isIdentity_IdentityOneof() } type Identity_ServiceAccount struct { ServiceAccount string `protobuf:"bytes,1,opt,name=service_account,json=serviceAccount,proto3,oneof"` } type Identity_Hostname struct { Hostname string `protobuf:"bytes,2,opt,name=hostname,proto3,oneof"` } func (*Identity_ServiceAccount) isIdentity_IdentityOneof() {} func (*Identity_Hostname) isIdentity_IdentityOneof() {} func (m *Identity) GetIdentityOneof() isIdentity_IdentityOneof { if m != nil { return m.IdentityOneof } return nil } func (m *Identity) GetServiceAccount() string { if x, ok := m.GetIdentityOneof().(*Identity_ServiceAccount); ok { return x.ServiceAccount } return "" } func (m *Identity) GetHostname() string { if x, ok := m.GetIdentityOneof().(*Identity_Hostname); ok { return x.Hostname } return "" } func (m *Identity) GetAttributes() map[string]string { if m != nil { return m.Attributes } return nil } // XXX_OneofFuncs is for the internal use of the proto package. 
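// Example (illustrative only): constructing an Identity with the
// service_account arm of the oneof and reading it back through the getters
// above. The account string is made up for the example.
//
//	id := &Identity{
//		IdentityOneof: &Identity_ServiceAccount{
//			ServiceAccount: "client@example.iam.gserviceaccount.com",
//		},
//	}
//	sa := id.GetServiceAccount() // "client@example.iam.gserviceaccount.com"
//	hn := id.GetHostname()       // "" because the hostname arm is not set
//	_, _ = sa, hn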
func (*Identity) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _Identity_OneofMarshaler, _Identity_OneofUnmarshaler, _Identity_OneofSizer, []interface{}{ (*Identity_ServiceAccount)(nil), (*Identity_Hostname)(nil), } } func _Identity_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*Identity) // identity_oneof switch x := m.IdentityOneof.(type) { case *Identity_ServiceAccount: b.EncodeVarint(1<<3 | proto.WireBytes) b.EncodeStringBytes(x.ServiceAccount) case *Identity_Hostname: b.EncodeVarint(2<<3 | proto.WireBytes) b.EncodeStringBytes(x.Hostname) case nil: default: return fmt.Errorf("Identity.IdentityOneof has unexpected type %T", x) } return nil } func _Identity_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*Identity) switch tag { case 1: // identity_oneof.service_account if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } x, err := b.DecodeStringBytes() m.IdentityOneof = &Identity_ServiceAccount{x} return true, err case 2: // identity_oneof.hostname if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } x, err := b.DecodeStringBytes() m.IdentityOneof = &Identity_Hostname{x} return true, err default: return false, nil } } func _Identity_OneofSizer(msg proto.Message) (n int) { m := msg.(*Identity) // identity_oneof switch x := m.IdentityOneof.(type) { case *Identity_ServiceAccount: n += 1 // tag and wire n += proto.SizeVarint(uint64(len(x.ServiceAccount))) n += len(x.ServiceAccount) case *Identity_Hostname: n += 1 // tag and wire n += proto.SizeVarint(uint64(len(x.Hostname))) n += len(x.Hostname) case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } type StartClientHandshakeReq struct { // Handshake security protocol requested by the client. HandshakeSecurityProtocol HandshakeProtocol `protobuf:"varint,1,opt,name=handshake_security_protocol,json=handshakeSecurityProtocol,proto3,enum=grpc.gcp.HandshakeProtocol" json:"handshake_security_protocol,omitempty"` // The application protocols supported by the client, e.g., "h2" (for http2), // "grpc". ApplicationProtocols []string `protobuf:"bytes,2,rep,name=application_protocols,json=applicationProtocols,proto3" json:"application_protocols,omitempty"` // The record protocols supported by the client, e.g., // "ALTSRP_GCM_AES128". RecordProtocols []string `protobuf:"bytes,3,rep,name=record_protocols,json=recordProtocols,proto3" json:"record_protocols,omitempty"` // (Optional) Describes which server identities are acceptable by the client. // If target identities are provided and none of them matches the peer // identity of the server, handshake will fail. TargetIdentities []*Identity `protobuf:"bytes,4,rep,name=target_identities,json=targetIdentities,proto3" json:"target_identities,omitempty"` // (Optional) Application may specify a local identity. Otherwise, the // handshaker chooses a default local identity. LocalIdentity *Identity `protobuf:"bytes,5,opt,name=local_identity,json=localIdentity,proto3" json:"local_identity,omitempty"` // (Optional) Local endpoint information of the connection to the server, // such as local IP address, port number, and network protocol. 
LocalEndpoint *Endpoint `protobuf:"bytes,6,opt,name=local_endpoint,json=localEndpoint,proto3" json:"local_endpoint,omitempty"` // (Optional) Endpoint information of the remote server, such as IP address, // port number, and network protocol. RemoteEndpoint *Endpoint `protobuf:"bytes,7,opt,name=remote_endpoint,json=remoteEndpoint,proto3" json:"remote_endpoint,omitempty"` // (Optional) If target name is provided, a secure naming check is performed // to verify that the peer authenticated identity is indeed authorized to run // the target name. TargetName string `protobuf:"bytes,8,opt,name=target_name,json=targetName,proto3" json:"target_name,omitempty"` // (Optional) RPC protocol versions supported by the client. RpcVersions *RpcProtocolVersions `protobuf:"bytes,9,opt,name=rpc_versions,json=rpcVersions,proto3" json:"rpc_versions,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *StartClientHandshakeReq) Reset() { *m = StartClientHandshakeReq{} } func (m *StartClientHandshakeReq) String() string { return proto.CompactTextString(m) } func (*StartClientHandshakeReq) ProtoMessage() {} func (*StartClientHandshakeReq) Descriptor() ([]byte, []int) { return fileDescriptor_handshaker_1dfe659b12ea825e, []int{2} } func (m *StartClientHandshakeReq) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_StartClientHandshakeReq.Unmarshal(m, b) } func (m *StartClientHandshakeReq) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_StartClientHandshakeReq.Marshal(b, m, deterministic) } func (dst *StartClientHandshakeReq) XXX_Merge(src proto.Message) { xxx_messageInfo_StartClientHandshakeReq.Merge(dst, src) } func (m *StartClientHandshakeReq) XXX_Size() int { return xxx_messageInfo_StartClientHandshakeReq.Size(m) } func (m *StartClientHandshakeReq) XXX_DiscardUnknown() { xxx_messageInfo_StartClientHandshakeReq.DiscardUnknown(m) } var xxx_messageInfo_StartClientHandshakeReq proto.InternalMessageInfo func (m *StartClientHandshakeReq) GetHandshakeSecurityProtocol() HandshakeProtocol { if m != nil { return m.HandshakeSecurityProtocol } return HandshakeProtocol_HANDSHAKE_PROTOCOL_UNSPECIFIED } func (m *StartClientHandshakeReq) GetApplicationProtocols() []string { if m != nil { return m.ApplicationProtocols } return nil } func (m *StartClientHandshakeReq) GetRecordProtocols() []string { if m != nil { return m.RecordProtocols } return nil } func (m *StartClientHandshakeReq) GetTargetIdentities() []*Identity { if m != nil { return m.TargetIdentities } return nil } func (m *StartClientHandshakeReq) GetLocalIdentity() *Identity { if m != nil { return m.LocalIdentity } return nil } func (m *StartClientHandshakeReq) GetLocalEndpoint() *Endpoint { if m != nil { return m.LocalEndpoint } return nil } func (m *StartClientHandshakeReq) GetRemoteEndpoint() *Endpoint { if m != nil { return m.RemoteEndpoint } return nil } func (m *StartClientHandshakeReq) GetTargetName() string { if m != nil { return m.TargetName } return "" } func (m *StartClientHandshakeReq) GetRpcVersions() *RpcProtocolVersions { if m != nil { return m.RpcVersions } return nil } type ServerHandshakeParameters struct { // The record protocols supported by the server, e.g., // "ALTSRP_GCM_AES128". RecordProtocols []string `protobuf:"bytes,1,rep,name=record_protocols,json=recordProtocols,proto3" json:"record_protocols,omitempty"` // (Optional) A list of local identities supported by the server, if // specified. 
Otherwise, the handshaker chooses a default local identity. LocalIdentities []*Identity `protobuf:"bytes,2,rep,name=local_identities,json=localIdentities,proto3" json:"local_identities,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ServerHandshakeParameters) Reset() { *m = ServerHandshakeParameters{} } func (m *ServerHandshakeParameters) String() string { return proto.CompactTextString(m) } func (*ServerHandshakeParameters) ProtoMessage() {} func (*ServerHandshakeParameters) Descriptor() ([]byte, []int) { return fileDescriptor_handshaker_1dfe659b12ea825e, []int{3} } func (m *ServerHandshakeParameters) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ServerHandshakeParameters.Unmarshal(m, b) } func (m *ServerHandshakeParameters) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ServerHandshakeParameters.Marshal(b, m, deterministic) } func (dst *ServerHandshakeParameters) XXX_Merge(src proto.Message) { xxx_messageInfo_ServerHandshakeParameters.Merge(dst, src) } func (m *ServerHandshakeParameters) XXX_Size() int { return xxx_messageInfo_ServerHandshakeParameters.Size(m) } func (m *ServerHandshakeParameters) XXX_DiscardUnknown() { xxx_messageInfo_ServerHandshakeParameters.DiscardUnknown(m) } var xxx_messageInfo_ServerHandshakeParameters proto.InternalMessageInfo func (m *ServerHandshakeParameters) GetRecordProtocols() []string { if m != nil { return m.RecordProtocols } return nil } func (m *ServerHandshakeParameters) GetLocalIdentities() []*Identity { if m != nil { return m.LocalIdentities } return nil } type StartServerHandshakeReq struct { // The application protocols supported by the server, e.g., "h2" (for http2), // "grpc". ApplicationProtocols []string `protobuf:"bytes,1,rep,name=application_protocols,json=applicationProtocols,proto3" json:"application_protocols,omitempty"` // Handshake parameters (record protocols and local identities supported by // the server) mapped by the handshake protocol. Each handshake security // protocol (e.g., TLS or ALTS) has its own set of record protocols and local // identities. Since protobuf does not support enum as key to the map, the key // to handshake_parameters is the integer value of HandshakeProtocol enum. HandshakeParameters map[int32]*ServerHandshakeParameters `protobuf:"bytes,2,rep,name=handshake_parameters,json=handshakeParameters,proto3" json:"handshake_parameters,omitempty" protobuf_key:"varint,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` // Bytes in out_frames returned from the peer's HandshakerResp. It is possible // that the peer's out_frames are split into multiple HandshakReq messages. InBytes []byte `protobuf:"bytes,3,opt,name=in_bytes,json=inBytes,proto3" json:"in_bytes,omitempty"` // (Optional) Local endpoint information of the connection to the client, // such as local IP address, port number, and network protocol. LocalEndpoint *Endpoint `protobuf:"bytes,4,opt,name=local_endpoint,json=localEndpoint,proto3" json:"local_endpoint,omitempty"` // (Optional) Endpoint information of the remote client, such as IP address, // port number, and network protocol. RemoteEndpoint *Endpoint `protobuf:"bytes,5,opt,name=remote_endpoint,json=remoteEndpoint,proto3" json:"remote_endpoint,omitempty"` // (Optional) RPC protocol versions supported by the server. 
RpcVersions *RpcProtocolVersions `protobuf:"bytes,6,opt,name=rpc_versions,json=rpcVersions,proto3" json:"rpc_versions,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *StartServerHandshakeReq) Reset() { *m = StartServerHandshakeReq{} } func (m *StartServerHandshakeReq) String() string { return proto.CompactTextString(m) } func (*StartServerHandshakeReq) ProtoMessage() {} func (*StartServerHandshakeReq) Descriptor() ([]byte, []int) { return fileDescriptor_handshaker_1dfe659b12ea825e, []int{4} } func (m *StartServerHandshakeReq) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_StartServerHandshakeReq.Unmarshal(m, b) } func (m *StartServerHandshakeReq) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_StartServerHandshakeReq.Marshal(b, m, deterministic) } func (dst *StartServerHandshakeReq) XXX_Merge(src proto.Message) { xxx_messageInfo_StartServerHandshakeReq.Merge(dst, src) } func (m *StartServerHandshakeReq) XXX_Size() int { return xxx_messageInfo_StartServerHandshakeReq.Size(m) } func (m *StartServerHandshakeReq) XXX_DiscardUnknown() { xxx_messageInfo_StartServerHandshakeReq.DiscardUnknown(m) } var xxx_messageInfo_StartServerHandshakeReq proto.InternalMessageInfo func (m *StartServerHandshakeReq) GetApplicationProtocols() []string { if m != nil { return m.ApplicationProtocols } return nil } func (m *StartServerHandshakeReq) GetHandshakeParameters() map[int32]*ServerHandshakeParameters { if m != nil { return m.HandshakeParameters } return nil } func (m *StartServerHandshakeReq) GetInBytes() []byte { if m != nil { return m.InBytes } return nil } func (m *StartServerHandshakeReq) GetLocalEndpoint() *Endpoint { if m != nil { return m.LocalEndpoint } return nil } func (m *StartServerHandshakeReq) GetRemoteEndpoint() *Endpoint { if m != nil { return m.RemoteEndpoint } return nil } func (m *StartServerHandshakeReq) GetRpcVersions() *RpcProtocolVersions { if m != nil { return m.RpcVersions } return nil } type NextHandshakeMessageReq struct { // Bytes in out_frames returned from the peer's HandshakerResp. It is possible // that the peer's out_frames are split into multiple NextHandshakerMessageReq // messages. 
InBytes []byte `protobuf:"bytes,1,opt,name=in_bytes,json=inBytes,proto3" json:"in_bytes,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *NextHandshakeMessageReq) Reset() { *m = NextHandshakeMessageReq{} } func (m *NextHandshakeMessageReq) String() string { return proto.CompactTextString(m) } func (*NextHandshakeMessageReq) ProtoMessage() {} func (*NextHandshakeMessageReq) Descriptor() ([]byte, []int) { return fileDescriptor_handshaker_1dfe659b12ea825e, []int{5} } func (m *NextHandshakeMessageReq) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_NextHandshakeMessageReq.Unmarshal(m, b) } func (m *NextHandshakeMessageReq) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_NextHandshakeMessageReq.Marshal(b, m, deterministic) } func (dst *NextHandshakeMessageReq) XXX_Merge(src proto.Message) { xxx_messageInfo_NextHandshakeMessageReq.Merge(dst, src) } func (m *NextHandshakeMessageReq) XXX_Size() int { return xxx_messageInfo_NextHandshakeMessageReq.Size(m) } func (m *NextHandshakeMessageReq) XXX_DiscardUnknown() { xxx_messageInfo_NextHandshakeMessageReq.DiscardUnknown(m) } var xxx_messageInfo_NextHandshakeMessageReq proto.InternalMessageInfo func (m *NextHandshakeMessageReq) GetInBytes() []byte { if m != nil { return m.InBytes } return nil } type HandshakerReq struct { // Types that are valid to be assigned to ReqOneof: // *HandshakerReq_ClientStart // *HandshakerReq_ServerStart // *HandshakerReq_Next ReqOneof isHandshakerReq_ReqOneof `protobuf_oneof:"req_oneof"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *HandshakerReq) Reset() { *m = HandshakerReq{} } func (m *HandshakerReq) String() string { return proto.CompactTextString(m) } func (*HandshakerReq) ProtoMessage() {} func (*HandshakerReq) Descriptor() ([]byte, []int) { return fileDescriptor_handshaker_1dfe659b12ea825e, []int{6} } func (m *HandshakerReq) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_HandshakerReq.Unmarshal(m, b) } func (m *HandshakerReq) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_HandshakerReq.Marshal(b, m, deterministic) } func (dst *HandshakerReq) XXX_Merge(src proto.Message) { xxx_messageInfo_HandshakerReq.Merge(dst, src) } func (m *HandshakerReq) XXX_Size() int { return xxx_messageInfo_HandshakerReq.Size(m) } func (m *HandshakerReq) XXX_DiscardUnknown() { xxx_messageInfo_HandshakerReq.DiscardUnknown(m) } var xxx_messageInfo_HandshakerReq proto.InternalMessageInfo type isHandshakerReq_ReqOneof interface { isHandshakerReq_ReqOneof() } type HandshakerReq_ClientStart struct { ClientStart *StartClientHandshakeReq `protobuf:"bytes,1,opt,name=client_start,json=clientStart,proto3,oneof"` } type HandshakerReq_ServerStart struct { ServerStart *StartServerHandshakeReq `protobuf:"bytes,2,opt,name=server_start,json=serverStart,proto3,oneof"` } type HandshakerReq_Next struct { Next *NextHandshakeMessageReq `protobuf:"bytes,3,opt,name=next,proto3,oneof"` } func (*HandshakerReq_ClientStart) isHandshakerReq_ReqOneof() {} func (*HandshakerReq_ServerStart) isHandshakerReq_ReqOneof() {} func (*HandshakerReq_Next) isHandshakerReq_ReqOneof() {} func (m *HandshakerReq) GetReqOneof() isHandshakerReq_ReqOneof { if m != nil { return m.ReqOneof } return nil } func (m *HandshakerReq) GetClientStart() *StartClientHandshakeReq { if x, ok := m.GetReqOneof().(*HandshakerReq_ClientStart); ok { return x.ClientStart } 
return nil } func (m *HandshakerReq) GetServerStart() *StartServerHandshakeReq { if x, ok := m.GetReqOneof().(*HandshakerReq_ServerStart); ok { return x.ServerStart } return nil } func (m *HandshakerReq) GetNext() *NextHandshakeMessageReq { if x, ok := m.GetReqOneof().(*HandshakerReq_Next); ok { return x.Next } return nil } // XXX_OneofFuncs is for the internal use of the proto package. func (*HandshakerReq) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _HandshakerReq_OneofMarshaler, _HandshakerReq_OneofUnmarshaler, _HandshakerReq_OneofSizer, []interface{}{ (*HandshakerReq_ClientStart)(nil), (*HandshakerReq_ServerStart)(nil), (*HandshakerReq_Next)(nil), } } func _HandshakerReq_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*HandshakerReq) // req_oneof switch x := m.ReqOneof.(type) { case *HandshakerReq_ClientStart: b.EncodeVarint(1<<3 | proto.WireBytes) if err := b.EncodeMessage(x.ClientStart); err != nil { return err } case *HandshakerReq_ServerStart: b.EncodeVarint(2<<3 | proto.WireBytes) if err := b.EncodeMessage(x.ServerStart); err != nil { return err } case *HandshakerReq_Next: b.EncodeVarint(3<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Next); err != nil { return err } case nil: default: return fmt.Errorf("HandshakerReq.ReqOneof has unexpected type %T", x) } return nil } func _HandshakerReq_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*HandshakerReq) switch tag { case 1: // req_oneof.client_start if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(StartClientHandshakeReq) err := b.DecodeMessage(msg) m.ReqOneof = &HandshakerReq_ClientStart{msg} return true, err case 2: // req_oneof.server_start if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(StartServerHandshakeReq) err := b.DecodeMessage(msg) m.ReqOneof = &HandshakerReq_ServerStart{msg} return true, err case 3: // req_oneof.next if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(NextHandshakeMessageReq) err := b.DecodeMessage(msg) m.ReqOneof = &HandshakerReq_Next{msg} return true, err default: return false, nil } } func _HandshakerReq_OneofSizer(msg proto.Message) (n int) { m := msg.(*HandshakerReq) // req_oneof switch x := m.ReqOneof.(type) { case *HandshakerReq_ClientStart: s := proto.Size(x.ClientStart) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *HandshakerReq_ServerStart: s := proto.Size(x.ServerStart) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *HandshakerReq_Next: s := proto.Size(x.Next) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } type HandshakerResult struct { // The application protocol negotiated for this connection. ApplicationProtocol string `protobuf:"bytes,1,opt,name=application_protocol,json=applicationProtocol,proto3" json:"application_protocol,omitempty"` // The record protocol negotiated for this connection. RecordProtocol string `protobuf:"bytes,2,opt,name=record_protocol,json=recordProtocol,proto3" json:"record_protocol,omitempty"` // Cryptographic key data. 
The key data may be more than the key length // required for the record protocol, thus the client of the handshaker // service needs to truncate the key data into the right key length. KeyData []byte `protobuf:"bytes,3,opt,name=key_data,json=keyData,proto3" json:"key_data,omitempty"` // The authenticated identity of the peer. PeerIdentity *Identity `protobuf:"bytes,4,opt,name=peer_identity,json=peerIdentity,proto3" json:"peer_identity,omitempty"` // The local identity used in the handshake. LocalIdentity *Identity `protobuf:"bytes,5,opt,name=local_identity,json=localIdentity,proto3" json:"local_identity,omitempty"` // Indicate whether the handshaker service client should keep the channel // between the handshaker service open, e.g., in order to handle // post-handshake messages in the future. KeepChannelOpen bool `protobuf:"varint,6,opt,name=keep_channel_open,json=keepChannelOpen,proto3" json:"keep_channel_open,omitempty"` // The RPC protocol versions supported by the peer. PeerRpcVersions *RpcProtocolVersions `protobuf:"bytes,7,opt,name=peer_rpc_versions,json=peerRpcVersions,proto3" json:"peer_rpc_versions,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *HandshakerResult) Reset() { *m = HandshakerResult{} } func (m *HandshakerResult) String() string { return proto.CompactTextString(m) } func (*HandshakerResult) ProtoMessage() {} func (*HandshakerResult) Descriptor() ([]byte, []int) { return fileDescriptor_handshaker_1dfe659b12ea825e, []int{7} } func (m *HandshakerResult) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_HandshakerResult.Unmarshal(m, b) } func (m *HandshakerResult) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_HandshakerResult.Marshal(b, m, deterministic) } func (dst *HandshakerResult) XXX_Merge(src proto.Message) { xxx_messageInfo_HandshakerResult.Merge(dst, src) } func (m *HandshakerResult) XXX_Size() int { return xxx_messageInfo_HandshakerResult.Size(m) } func (m *HandshakerResult) XXX_DiscardUnknown() { xxx_messageInfo_HandshakerResult.DiscardUnknown(m) } var xxx_messageInfo_HandshakerResult proto.InternalMessageInfo func (m *HandshakerResult) GetApplicationProtocol() string { if m != nil { return m.ApplicationProtocol } return "" } func (m *HandshakerResult) GetRecordProtocol() string { if m != nil { return m.RecordProtocol } return "" } func (m *HandshakerResult) GetKeyData() []byte { if m != nil { return m.KeyData } return nil } func (m *HandshakerResult) GetPeerIdentity() *Identity { if m != nil { return m.PeerIdentity } return nil } func (m *HandshakerResult) GetLocalIdentity() *Identity { if m != nil { return m.LocalIdentity } return nil } func (m *HandshakerResult) GetKeepChannelOpen() bool { if m != nil { return m.KeepChannelOpen } return false } func (m *HandshakerResult) GetPeerRpcVersions() *RpcProtocolVersions { if m != nil { return m.PeerRpcVersions } return nil } type HandshakerStatus struct { // The status code. This could be the gRPC status code. Code uint32 `protobuf:"varint,1,opt,name=code,proto3" json:"code,omitempty"` // The status details. 
Details string `protobuf:"bytes,2,opt,name=details,proto3" json:"details,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *HandshakerStatus) Reset() { *m = HandshakerStatus{} } func (m *HandshakerStatus) String() string { return proto.CompactTextString(m) } func (*HandshakerStatus) ProtoMessage() {} func (*HandshakerStatus) Descriptor() ([]byte, []int) { return fileDescriptor_handshaker_1dfe659b12ea825e, []int{8} } func (m *HandshakerStatus) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_HandshakerStatus.Unmarshal(m, b) } func (m *HandshakerStatus) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_HandshakerStatus.Marshal(b, m, deterministic) } func (dst *HandshakerStatus) XXX_Merge(src proto.Message) { xxx_messageInfo_HandshakerStatus.Merge(dst, src) } func (m *HandshakerStatus) XXX_Size() int { return xxx_messageInfo_HandshakerStatus.Size(m) } func (m *HandshakerStatus) XXX_DiscardUnknown() { xxx_messageInfo_HandshakerStatus.DiscardUnknown(m) } var xxx_messageInfo_HandshakerStatus proto.InternalMessageInfo func (m *HandshakerStatus) GetCode() uint32 { if m != nil { return m.Code } return 0 } func (m *HandshakerStatus) GetDetails() string { if m != nil { return m.Details } return "" } type HandshakerResp struct { // Frames to be given to the peer for the NextHandshakeMessageReq. May be // empty if no out_frames have to be sent to the peer or if in_bytes in the // HandshakerReq are incomplete. All the non-empty out frames must be sent to // the peer even if the handshaker status is not OK as these frames may // contain the alert frames. OutFrames []byte `protobuf:"bytes,1,opt,name=out_frames,json=outFrames,proto3" json:"out_frames,omitempty"` // Number of bytes in the in_bytes consumed by the handshaker. It is possible // that part of in_bytes in HandshakerReq was unrelated to the handshake // process. BytesConsumed uint32 `protobuf:"varint,2,opt,name=bytes_consumed,json=bytesConsumed,proto3" json:"bytes_consumed,omitempty"` // This is set iff the handshake was successful. out_frames may still be set // to frames that needs to be forwarded to the peer. Result *HandshakerResult `protobuf:"bytes,3,opt,name=result,proto3" json:"result,omitempty"` // Status of the handshaker. 
Status *HandshakerStatus `protobuf:"bytes,4,opt,name=status,proto3" json:"status,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *HandshakerResp) Reset() { *m = HandshakerResp{} } func (m *HandshakerResp) String() string { return proto.CompactTextString(m) } func (*HandshakerResp) ProtoMessage() {} func (*HandshakerResp) Descriptor() ([]byte, []int) { return fileDescriptor_handshaker_1dfe659b12ea825e, []int{9} } func (m *HandshakerResp) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_HandshakerResp.Unmarshal(m, b) } func (m *HandshakerResp) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_HandshakerResp.Marshal(b, m, deterministic) } func (dst *HandshakerResp) XXX_Merge(src proto.Message) { xxx_messageInfo_HandshakerResp.Merge(dst, src) } func (m *HandshakerResp) XXX_Size() int { return xxx_messageInfo_HandshakerResp.Size(m) } func (m *HandshakerResp) XXX_DiscardUnknown() { xxx_messageInfo_HandshakerResp.DiscardUnknown(m) } var xxx_messageInfo_HandshakerResp proto.InternalMessageInfo func (m *HandshakerResp) GetOutFrames() []byte { if m != nil { return m.OutFrames } return nil } func (m *HandshakerResp) GetBytesConsumed() uint32 { if m != nil { return m.BytesConsumed } return 0 } func (m *HandshakerResp) GetResult() *HandshakerResult { if m != nil { return m.Result } return nil } func (m *HandshakerResp) GetStatus() *HandshakerStatus { if m != nil { return m.Status } return nil } func init() { proto.RegisterType((*Endpoint)(nil), "grpc.gcp.Endpoint") proto.RegisterType((*Identity)(nil), "grpc.gcp.Identity") proto.RegisterMapType((map[string]string)(nil), "grpc.gcp.Identity.AttributesEntry") proto.RegisterType((*StartClientHandshakeReq)(nil), "grpc.gcp.StartClientHandshakeReq") proto.RegisterType((*ServerHandshakeParameters)(nil), "grpc.gcp.ServerHandshakeParameters") proto.RegisterType((*StartServerHandshakeReq)(nil), "grpc.gcp.StartServerHandshakeReq") proto.RegisterMapType((map[int32]*ServerHandshakeParameters)(nil), "grpc.gcp.StartServerHandshakeReq.HandshakeParametersEntry") proto.RegisterType((*NextHandshakeMessageReq)(nil), "grpc.gcp.NextHandshakeMessageReq") proto.RegisterType((*HandshakerReq)(nil), "grpc.gcp.HandshakerReq") proto.RegisterType((*HandshakerResult)(nil), "grpc.gcp.HandshakerResult") proto.RegisterType((*HandshakerStatus)(nil), "grpc.gcp.HandshakerStatus") proto.RegisterType((*HandshakerResp)(nil), "grpc.gcp.HandshakerResp") proto.RegisterEnum("grpc.gcp.HandshakeProtocol", HandshakeProtocol_name, HandshakeProtocol_value) proto.RegisterEnum("grpc.gcp.NetworkProtocol", NetworkProtocol_name, NetworkProtocol_value) } // Reference imports to suppress errors if they are not otherwise used. var _ context.Context var _ grpc.ClientConn // This is a compile-time assertion to ensure that this generated file // is compatible with the grpc package it is being compiled against. const _ = grpc.SupportPackageIsVersion4 // HandshakerServiceClient is the client API for HandshakerService service. // // For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream. type HandshakerServiceClient interface { // Handshaker service accepts a stream of handshaker request, returning a // stream of handshaker response. Client is expected to send exactly one // message with either client_start or server_start followed by one or more // messages with next. 
Each time client sends a request, the handshaker // service expects to respond. Client does not have to wait for service's // response before sending next request. DoHandshake(ctx context.Context, opts ...grpc.CallOption) (HandshakerService_DoHandshakeClient, error) } type handshakerServiceClient struct { cc *grpc.ClientConn } func NewHandshakerServiceClient(cc *grpc.ClientConn) HandshakerServiceClient { return &handshakerServiceClient{cc} } func (c *handshakerServiceClient) DoHandshake(ctx context.Context, opts ...grpc.CallOption) (HandshakerService_DoHandshakeClient, error) { stream, err := c.cc.NewStream(ctx, &_HandshakerService_serviceDesc.Streams[0], "/grpc.gcp.HandshakerService/DoHandshake", opts...) if err != nil { return nil, err } x := &handshakerServiceDoHandshakeClient{stream} return x, nil } type HandshakerService_DoHandshakeClient interface { Send(*HandshakerReq) error Recv() (*HandshakerResp, error) grpc.ClientStream } type handshakerServiceDoHandshakeClient struct { grpc.ClientStream } func (x *handshakerServiceDoHandshakeClient) Send(m *HandshakerReq) error { return x.ClientStream.SendMsg(m) } func (x *handshakerServiceDoHandshakeClient) Recv() (*HandshakerResp, error) { m := new(HandshakerResp) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } // HandshakerServiceServer is the server API for HandshakerService service. type HandshakerServiceServer interface { // Handshaker service accepts a stream of handshaker request, returning a // stream of handshaker response. Client is expected to send exactly one // message with either client_start or server_start followed by one or more // messages with next. Each time client sends a request, the handshaker // service expects to respond. Client does not have to wait for service's // response before sending next request. 
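// A minimal illustrative implementation (not part of the generated code)
// could, for example, echo an empty HandshakerResp for every request it
// receives on the stream:
//
//	type fakeHandshakerServer struct{}
//
//	func (s *fakeHandshakerServer) DoHandshake(stream HandshakerService_DoHandshakeServer) error {
//		for {
//			if _, err := stream.Recv(); err != nil {
//				return err
//			}
//			if err := stream.Send(&HandshakerResp{}); err != nil {
//				return err
//			}
//		}
//	}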
DoHandshake(HandshakerService_DoHandshakeServer) error } func RegisterHandshakerServiceServer(s *grpc.Server, srv HandshakerServiceServer) { s.RegisterService(&_HandshakerService_serviceDesc, srv) } func _HandshakerService_DoHandshake_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(HandshakerServiceServer).DoHandshake(&handshakerServiceDoHandshakeServer{stream}) } type HandshakerService_DoHandshakeServer interface { Send(*HandshakerResp) error Recv() (*HandshakerReq, error) grpc.ServerStream } type handshakerServiceDoHandshakeServer struct { grpc.ServerStream } func (x *handshakerServiceDoHandshakeServer) Send(m *HandshakerResp) error { return x.ServerStream.SendMsg(m) } func (x *handshakerServiceDoHandshakeServer) Recv() (*HandshakerReq, error) { m := new(HandshakerReq) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } var _HandshakerService_serviceDesc = grpc.ServiceDesc{ ServiceName: "grpc.gcp.HandshakerService", HandlerType: (*HandshakerServiceServer)(nil), Methods: []grpc.MethodDesc{}, Streams: []grpc.StreamDesc{ { StreamName: "DoHandshake", Handler: _HandshakerService_DoHandshake_Handler, ServerStreams: true, ClientStreams: true, }, }, Metadata: "grpc/gcp/handshaker.proto", } func init() { proto.RegisterFile("grpc/gcp/handshaker.proto", fileDescriptor_handshaker_1dfe659b12ea825e) } var fileDescriptor_handshaker_1dfe659b12ea825e = []byte{ // 1168 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xa4, 0x56, 0xdf, 0x6e, 0x1a, 0xc7, 0x17, 0xf6, 0x02, 0xb6, 0xf1, 0xc1, 0xfc, 0xf1, 0xc4, 0x51, 0xd6, 0x4e, 0xf2, 0xfb, 0x51, 0xaa, 0xaa, 0x24, 0x17, 0xd0, 0x92, 0x56, 0x69, 0x52, 0x45, 0x09, 0x60, 0x2c, 0xdc, 0xa4, 0x18, 0x2d, 0x4e, 0x2b, 0x35, 0x17, 0xab, 0xc9, 0x32, 0xc1, 0x2b, 0x96, 0x99, 0xf5, 0xcc, 0xe0, 0x86, 0x07, 0xe8, 0xe3, 0xf4, 0x15, 0xfa, 0x36, 0x95, 0xfa, 0x00, 0xbd, 0x6f, 0xb5, 0xb3, 0xb3, 0x7f, 0xc0, 0x10, 0x25, 0xea, 0xdd, 0xee, 0x99, 0xef, 0x3b, 0x7b, 0xe6, 0x3b, 0xdf, 0x9c, 0x1d, 0x38, 0x9a, 0x70, 0xdf, 0x69, 0x4e, 0x1c, 0xbf, 0x79, 0x89, 0xe9, 0x58, 0x5c, 0xe2, 0x29, 0xe1, 0x0d, 0x9f, 0x33, 0xc9, 0x50, 0x3e, 0x58, 0x6a, 0x4c, 0x1c, 0xff, 0xb8, 0x1e, 0x83, 0x24, 0xc7, 0x54, 0xf8, 0x8c, 0x4b, 0x5b, 0x10, 0x67, 0xce, 0x5d, 0xb9, 0xb0, 0x1d, 0x36, 0x9b, 0x31, 0x1a, 0x72, 0x6a, 0x12, 0xf2, 0x3d, 0x3a, 0xf6, 0x99, 0x4b, 0x25, 0xba, 0x0f, 0xe0, 0xfa, 0x36, 0x1e, 0x8f, 0x39, 0x11, 0xc2, 0x34, 0xaa, 0x46, 0x7d, 0xcf, 0xda, 0x73, 0xfd, 0x76, 0x18, 0x40, 0x08, 0x72, 0x41, 0x22, 0x33, 0x53, 0x35, 0xea, 0xdb, 0x96, 0x7a, 0x46, 0xdf, 0x42, 0x5e, 0xe5, 0x71, 0x98, 0x67, 0x66, 0xab, 0x46, 0xbd, 0xd4, 0x3a, 0x6a, 0x44, 0x55, 0x34, 0x06, 0x44, 0xfe, 0xca, 0xf8, 0x74, 0xa8, 0x01, 0x56, 0x0c, 0xad, 0xfd, 0x65, 0x40, 0xfe, 0x6c, 0x4c, 0xa8, 0x74, 0xe5, 0x02, 0x3d, 0x80, 0xb2, 0x20, 0xfc, 0xda, 0x75, 0x88, 0x8d, 0x1d, 0x87, 0xcd, 0xa9, 0x0c, 0xbf, 0xdd, 0xdf, 0xb2, 0x4a, 0x7a, 0xa1, 0x1d, 0xc6, 0xd1, 0x3d, 0xc8, 0x5f, 0x32, 0x21, 0x29, 0x9e, 0x11, 0x55, 0x46, 0x80, 0x89, 0x23, 0xa8, 0x03, 0x80, 0xa5, 0xe4, 0xee, 0xdb, 0xb9, 0x24, 0xc2, 0xcc, 0x56, 0xb3, 0xf5, 0x42, 0xab, 0x96, 0x94, 0x13, 0x7d, 0xb0, 0xd1, 0x8e, 0x41, 0x3d, 0x2a, 0xf9, 0xc2, 0x4a, 0xb1, 0x8e, 0x9f, 0x41, 0x79, 0x65, 0x19, 0x55, 0x20, 0x3b, 0x25, 0x0b, 0xad, 0x47, 0xf0, 0x88, 0x0e, 0x61, 0xfb, 0x1a, 0x7b, 0x73, 0x5d, 0x83, 0x15, 0xbe, 0x3c, 0xcd, 0x7c, 0x67, 0x74, 0x2a, 0x50, 0x72, 0xf5, 0x67, 0x6c, 0x46, 0x09, 0x7b, 0x57, 0xfb, 0x3d, 0x07, 0x77, 0x46, 0x12, 0x73, 0xd9, 0xf5, 0x5c, 0x42, 0x65, 0x3f, 0x6a, 0x9a, 0x45, 0xae, 
0xd0, 0x1b, 0xb8, 0x1b, 0x37, 0x31, 0xe9, 0x4f, 0x2c, 0xa8, 0xa1, 0x04, 0xbd, 0x9b, 0xec, 0x20, 0x26, 0xc7, 0x92, 0x1e, 0xc5, 0xfc, 0x91, 0xa6, 0x47, 0x4b, 0xe8, 0x11, 0xdc, 0xc6, 0xbe, 0xef, 0xb9, 0x0e, 0x96, 0x2e, 0xa3, 0x71, 0x56, 0x61, 0x66, 0xaa, 0xd9, 0xfa, 0x9e, 0x75, 0x98, 0x5a, 0x8c, 0x38, 0x02, 0x3d, 0x80, 0x0a, 0x27, 0x0e, 0xe3, 0xe3, 0x14, 0x3e, 0xab, 0xf0, 0xe5, 0x30, 0x9e, 0x40, 0x9f, 0xc3, 0x81, 0xc4, 0x7c, 0x42, 0xa4, 0xad, 0x77, 0xec, 0x12, 0x61, 0xe6, 0x94, 0xe8, 0xe8, 0xa6, 0xe8, 0x56, 0x25, 0x04, 0x9f, 0xc5, 0x58, 0xf4, 0x04, 0x4a, 0x1e, 0x73, 0xb0, 0x17, 0xf1, 0x17, 0xe6, 0x76, 0xd5, 0xd8, 0xc0, 0x2e, 0x2a, 0x64, 0x6c, 0x99, 0x98, 0x4a, 0xb4, 0x77, 0xcd, 0x9d, 0x55, 0x6a, 0xe4, 0x6a, 0x4d, 0x8d, 0x4d, 0xfe, 0x3d, 0x94, 0x39, 0x99, 0x31, 0x49, 0x12, 0xee, 0xee, 0x46, 0x6e, 0x29, 0x84, 0xc6, 0xe4, 0xff, 0x43, 0x41, 0xef, 0x59, 0x59, 0x30, 0xaf, 0xda, 0x0f, 0x61, 0x68, 0x10, 0x58, 0xf0, 0x05, 0xec, 0x73, 0xdf, 0xb1, 0xaf, 0x09, 0x17, 0x2e, 0xa3, 0xc2, 0xdc, 0x53, 0xa9, 0xef, 0x27, 0xa9, 0x2d, 0xdf, 0x89, 0x24, 0xfc, 0x49, 0x83, 0xac, 0x02, 0xf7, 0x9d, 0xe8, 0xa5, 0xf6, 0x9b, 0x01, 0x47, 0x23, 0xc2, 0xaf, 0x09, 0x4f, 0xba, 0x8d, 0x39, 0x9e, 0x11, 0x49, 0xf8, 0xfa, 0xfe, 0x18, 0xeb, 0xfb, 0xf3, 0x0c, 0x2a, 0x4b, 0xf2, 0x06, 0xed, 0xc9, 0x6c, 0x6c, 0x4f, 0x39, 0x2d, 0xb0, 0x4b, 0x44, 0xed, 0x9f, 0xac, 0xf6, 0xed, 0x4a, 0x31, 0x81, 0x6f, 0x37, 0x5a, 0xcb, 0xf8, 0x80, 0xb5, 0x66, 0x70, 0x98, 0x98, 0xdd, 0x8f, 0xb7, 0xa4, 0x6b, 0x7a, 0x9a, 0xd4, 0xb4, 0xe1, 0xab, 0x8d, 0x35, 0x7a, 0x84, 0xe7, 0xf7, 0xd6, 0xe5, 0x1a, 0xa5, 0x8e, 0x20, 0xef, 0x52, 0xfb, 0xed, 0x22, 0x1c, 0x05, 0x46, 0x7d, 0xdf, 0xda, 0x75, 0x69, 0x27, 0x78, 0x5d, 0xe3, 0x9e, 0xdc, 0x7f, 0x70, 0xcf, 0xf6, 0x47, 0xbb, 0x67, 0xd5, 0x1c, 0x3b, 0x9f, 0x6a, 0x8e, 0xe3, 0x29, 0x98, 0x9b, 0x54, 0x48, 0x8f, 0xa9, 0xed, 0x70, 0x4c, 0x3d, 0x49, 0x8f, 0xa9, 0x42, 0xeb, 0xf3, 0x94, 0xc4, 0x9b, 0x0c, 0x96, 0x9a, 0x65, 0xb5, 0x6f, 0xe0, 0xce, 0x80, 0xbc, 0x4f, 0x26, 0xd6, 0x8f, 0x44, 0x08, 0x3c, 0x51, 0x06, 0x48, 0x8b, 0x6b, 0x2c, 0x89, 0x5b, 0xfb, 0xd3, 0x80, 0x62, 0x4c, 0xe1, 0x01, 0xf8, 0x14, 0xf6, 0x1d, 0x35, 0xfb, 0x6c, 0x11, 0x74, 0x56, 0x11, 0x0a, 0xad, 0xcf, 0x56, 0x1a, 0x7e, 0x73, 0x3c, 0xf6, 0xb7, 0xac, 0x42, 0x48, 0x54, 0x80, 0x20, 0x8f, 0x50, 0x75, 0xeb, 0x3c, 0x99, 0xb5, 0x79, 0x6e, 0x1a, 0x27, 0xc8, 0x13, 0x12, 0xc3, 0x3c, 0x8f, 0x21, 0x47, 0xc9, 0x7b, 0xa9, 0x5c, 0xb1, 0xc4, 0xdf, 0xb0, 0xdb, 0xfe, 0x96, 0xa5, 0x08, 0x9d, 0x02, 0xec, 0x71, 0x72, 0xa5, 0xe7, 0xfa, 0xdf, 0x19, 0xa8, 0xa4, 0xf7, 0x29, 0xe6, 0x9e, 0x44, 0x5f, 0xc3, 0xe1, 0xba, 0x83, 0xa1, 0xff, 0x1d, 0xb7, 0xd6, 0x9c, 0x0b, 0xf4, 0x25, 0x94, 0x57, 0x4e, 0xb4, 0xfe, 0xab, 0x94, 0x96, 0x0f, 0x74, 0xa0, 0xf9, 0x94, 0x2c, 0xec, 0x31, 0x96, 0x38, 0x32, 0xf4, 0x94, 0x2c, 0x4e, 0xb0, 0xc4, 0xe8, 0x31, 0x14, 0x7d, 0x42, 0x78, 0x32, 0x48, 0x73, 0x1b, 0x07, 0xe9, 0x7e, 0x00, 0xbc, 0x39, 0x47, 0x3f, 0x7d, 0x04, 0x3f, 0x84, 0x83, 0x29, 0x21, 0xbe, 0xed, 0x5c, 0x62, 0x4a, 0x89, 0x67, 0x33, 0x9f, 0x50, 0xe5, 0xe8, 0xbc, 0x55, 0x0e, 0x16, 0xba, 0x61, 0xfc, 0xdc, 0x27, 0x14, 0x9d, 0xc1, 0x81, 0xaa, 0x6f, 0xc9, 0xfd, 0xbb, 0x1f, 0xe3, 0xfe, 0x72, 0xc0, 0xb3, 0x52, 0xe3, 0xf1, 0x45, 0x5a, 0xf5, 0x91, 0xc4, 0x72, 0xae, 0x2e, 0x26, 0x0e, 0x1b, 0x13, 0xa5, 0x72, 0xd1, 0x52, 0xcf, 0xc8, 0x84, 0xdd, 0x31, 0x91, 0xd8, 0x55, 0xff, 0xbb, 0x40, 0xce, 0xe8, 0xb5, 0xf6, 0x87, 0x01, 0xa5, 0xa5, 0xc6, 0xf9, 0xc1, 0xc5, 0x87, 0xcd, 0xa5, 0xfd, 0x2e, 0x38, 0x05, 0x91, 0xa1, 0xf7, 0xd8, 0x5c, 0x9e, 0xaa, 0x00, 0xfa, 0x02, 0x4a, 0xca, 0xea, 0xb6, 0xc3, 0xa8, 0x98, 0xcf, 0xc8, 0x58, 
0xa5, 0x2c, 0x5a, 0x45, 0x15, 0xed, 0xea, 0x20, 0x6a, 0xc1, 0x0e, 0x57, 0x36, 0xd0, 0xce, 0x3a, 0x5e, 0xf3, 0xe3, 0xd6, 0x46, 0xb1, 0x34, 0x32, 0xe0, 0x08, 0xb5, 0x09, 0xdd, 0xb2, 0xb5, 0x9c, 0x70, 0x9b, 0x96, 0x46, 0x3e, 0xfc, 0x01, 0x0e, 0x6e, 0x5c, 0x04, 0x50, 0x0d, 0xfe, 0xd7, 0x6f, 0x0f, 0x4e, 0x46, 0xfd, 0xf6, 0xcb, 0x9e, 0x3d, 0xb4, 0xce, 0x2f, 0xce, 0xbb, 0xe7, 0xaf, 0xec, 0xd7, 0x83, 0xd1, 0xb0, 0xd7, 0x3d, 0x3b, 0x3d, 0xeb, 0x9d, 0x54, 0xb6, 0xd0, 0x2e, 0x64, 0x2f, 0x5e, 0x8d, 0x2a, 0x06, 0xca, 0x43, 0xae, 0xfd, 0xea, 0x62, 0x54, 0xc9, 0x3c, 0xec, 0x41, 0x79, 0xe5, 0x96, 0x86, 0xaa, 0x70, 0x6f, 0xd0, 0xbb, 0xf8, 0xf9, 0xdc, 0x7a, 0xf9, 0xa1, 0x3c, 0xdd, 0x61, 0xc5, 0x08, 0x1e, 0x5e, 0x9f, 0x0c, 0x2b, 0x99, 0xd6, 0x9b, 0x54, 0x49, 0x7c, 0x14, 0xde, 0xd9, 0xd0, 0x29, 0x14, 0x4e, 0x58, 0x1c, 0x46, 0x77, 0xd6, 0xcb, 0x71, 0x75, 0x6c, 0x6e, 0xd0, 0xc9, 0xaf, 0x6d, 0xd5, 0x8d, 0xaf, 0x8c, 0xce, 0x14, 0x6e, 0xbb, 0x2c, 0xc4, 0x60, 0x4f, 0x8a, 0x86, 0x4b, 0x25, 0xe1, 0x14, 0x7b, 0x9d, 0x72, 0x02, 0x57, 0xd5, 0x0f, 0x8d, 0x5f, 0x9e, 0x4f, 0x18, 0x9b, 0x78, 0xa4, 0x31, 0x61, 0x1e, 0xa6, 0x93, 0x06, 0xe3, 0x93, 0xa6, 0xba, 0x0a, 0x3b, 0x9c, 0x28, 0xe3, 0x62, 0x4f, 0x34, 0x83, 0x24, 0xcd, 0x28, 0x49, 0x53, 0x9d, 0x3a, 0x05, 0xb2, 0x27, 0x8e, 0xff, 0x76, 0x47, 0xbd, 0x3f, 0xfa, 0x37, 0x00, 0x00, 0xff, 0xff, 0x6e, 0x37, 0x34, 0x9b, 0x67, 0x0b, 0x00, 0x00, } grpc-go-1.22.1/credentials/alts/internal/proto/grpc_gcp/transport_security_common.pb.go000066400000000000000000000172111351635773100314350ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: grpc/gcp/transport_security_common.proto package grpc_gcp // import "google.golang.org/grpc/credentials/alts/internal/proto/grpc_gcp" import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package // The security level of the created channel. The list is sorted in increasing // level of security. This order must always be maintained. type SecurityLevel int32 const ( SecurityLevel_SECURITY_NONE SecurityLevel = 0 SecurityLevel_INTEGRITY_ONLY SecurityLevel = 1 SecurityLevel_INTEGRITY_AND_PRIVACY SecurityLevel = 2 ) var SecurityLevel_name = map[int32]string{ 0: "SECURITY_NONE", 1: "INTEGRITY_ONLY", 2: "INTEGRITY_AND_PRIVACY", } var SecurityLevel_value = map[string]int32{ "SECURITY_NONE": 0, "INTEGRITY_ONLY": 1, "INTEGRITY_AND_PRIVACY": 2, } func (x SecurityLevel) String() string { return proto.EnumName(SecurityLevel_name, int32(x)) } func (SecurityLevel) EnumDescriptor() ([]byte, []int) { return fileDescriptor_transport_security_common_71945991f2c3b4a6, []int{0} } // Max and min supported RPC protocol versions. type RpcProtocolVersions struct { // Maximum supported RPC version. MaxRpcVersion *RpcProtocolVersions_Version `protobuf:"bytes,1,opt,name=max_rpc_version,json=maxRpcVersion,proto3" json:"max_rpc_version,omitempty"` // Minimum supported RPC version. 
MinRpcVersion *RpcProtocolVersions_Version `protobuf:"bytes,2,opt,name=min_rpc_version,json=minRpcVersion,proto3" json:"min_rpc_version,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *RpcProtocolVersions) Reset() { *m = RpcProtocolVersions{} } func (m *RpcProtocolVersions) String() string { return proto.CompactTextString(m) } func (*RpcProtocolVersions) ProtoMessage() {} func (*RpcProtocolVersions) Descriptor() ([]byte, []int) { return fileDescriptor_transport_security_common_71945991f2c3b4a6, []int{0} } func (m *RpcProtocolVersions) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_RpcProtocolVersions.Unmarshal(m, b) } func (m *RpcProtocolVersions) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_RpcProtocolVersions.Marshal(b, m, deterministic) } func (dst *RpcProtocolVersions) XXX_Merge(src proto.Message) { xxx_messageInfo_RpcProtocolVersions.Merge(dst, src) } func (m *RpcProtocolVersions) XXX_Size() int { return xxx_messageInfo_RpcProtocolVersions.Size(m) } func (m *RpcProtocolVersions) XXX_DiscardUnknown() { xxx_messageInfo_RpcProtocolVersions.DiscardUnknown(m) } var xxx_messageInfo_RpcProtocolVersions proto.InternalMessageInfo func (m *RpcProtocolVersions) GetMaxRpcVersion() *RpcProtocolVersions_Version { if m != nil { return m.MaxRpcVersion } return nil } func (m *RpcProtocolVersions) GetMinRpcVersion() *RpcProtocolVersions_Version { if m != nil { return m.MinRpcVersion } return nil } // RPC version contains a major version and a minor version. type RpcProtocolVersions_Version struct { Major uint32 `protobuf:"varint,1,opt,name=major,proto3" json:"major,omitempty"` Minor uint32 `protobuf:"varint,2,opt,name=minor,proto3" json:"minor,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *RpcProtocolVersions_Version) Reset() { *m = RpcProtocolVersions_Version{} } func (m *RpcProtocolVersions_Version) String() string { return proto.CompactTextString(m) } func (*RpcProtocolVersions_Version) ProtoMessage() {} func (*RpcProtocolVersions_Version) Descriptor() ([]byte, []int) { return fileDescriptor_transport_security_common_71945991f2c3b4a6, []int{0, 0} } func (m *RpcProtocolVersions_Version) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_RpcProtocolVersions_Version.Unmarshal(m, b) } func (m *RpcProtocolVersions_Version) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_RpcProtocolVersions_Version.Marshal(b, m, deterministic) } func (dst *RpcProtocolVersions_Version) XXX_Merge(src proto.Message) { xxx_messageInfo_RpcProtocolVersions_Version.Merge(dst, src) } func (m *RpcProtocolVersions_Version) XXX_Size() int { return xxx_messageInfo_RpcProtocolVersions_Version.Size(m) } func (m *RpcProtocolVersions_Version) XXX_DiscardUnknown() { xxx_messageInfo_RpcProtocolVersions_Version.DiscardUnknown(m) } var xxx_messageInfo_RpcProtocolVersions_Version proto.InternalMessageInfo func (m *RpcProtocolVersions_Version) GetMajor() uint32 { if m != nil { return m.Major } return 0 } func (m *RpcProtocolVersions_Version) GetMinor() uint32 { if m != nil { return m.Minor } return 0 } func init() { proto.RegisterType((*RpcProtocolVersions)(nil), "grpc.gcp.RpcProtocolVersions") proto.RegisterType((*RpcProtocolVersions_Version)(nil), "grpc.gcp.RpcProtocolVersions.Version") proto.RegisterEnum("grpc.gcp.SecurityLevel", SecurityLevel_name, SecurityLevel_value) } func init() { 
proto.RegisterFile("grpc/gcp/transport_security_common.proto", fileDescriptor_transport_security_common_71945991f2c3b4a6) } var fileDescriptor_transport_security_common_71945991f2c3b4a6 = []byte{ // 323 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x94, 0x91, 0x41, 0x4b, 0x3b, 0x31, 0x10, 0xc5, 0xff, 0x5b, 0xf8, 0xab, 0x44, 0x56, 0xeb, 0x6a, 0x41, 0xc5, 0x83, 0x08, 0x42, 0xf1, 0x90, 0x05, 0xc5, 0xb3, 0xb4, 0xb5, 0x48, 0xa1, 0x6e, 0xeb, 0xb6, 0x16, 0xea, 0x25, 0xc4, 0x18, 0x42, 0x24, 0x9b, 0x09, 0xb3, 0xb1, 0xd4, 0xaf, 0xec, 0xa7, 0x90, 0x4d, 0xbb, 0x14, 0xc1, 0x8b, 0xb7, 0xbc, 0xc7, 0xcc, 0x6f, 0x32, 0xf3, 0x48, 0x5b, 0xa1, 0x13, 0xa9, 0x12, 0x2e, 0xf5, 0xc8, 0x6d, 0xe9, 0x00, 0x3d, 0x2b, 0xa5, 0xf8, 0x40, 0xed, 0x3f, 0x99, 0x80, 0xa2, 0x00, 0x4b, 0x1d, 0x82, 0x87, 0x64, 0xa7, 0xaa, 0xa4, 0x4a, 0xb8, 0x8b, 0xaf, 0x88, 0x1c, 0xe6, 0x4e, 0x8c, 0x2b, 0x5b, 0x80, 0x99, 0x49, 0x2c, 0x35, 0xd8, 0x32, 0x79, 0x24, 0xfb, 0x05, 0x5f, 0x32, 0x74, 0x82, 0x2d, 0x56, 0xde, 0x71, 0x74, 0x1e, 0xb5, 0x77, 0xaf, 0x2f, 0x69, 0xdd, 0x4b, 0x7f, 0xe9, 0xa3, 0xeb, 0x47, 0x1e, 0x17, 0x7c, 0x99, 0x3b, 0xb1, 0x96, 0x01, 0xa7, 0xed, 0x0f, 0x5c, 0xe3, 0x6f, 0x38, 0x6d, 0x37, 0xb8, 0xd3, 0x5b, 0xb2, 0x5d, 0x93, 0x8f, 0xc8, 0xff, 0x82, 0xbf, 0x03, 0x86, 0xef, 0xc5, 0xf9, 0x4a, 0x04, 0x57, 0x5b, 0xc0, 0x30, 0xa5, 0x72, 0x2b, 0x71, 0xf5, 0x44, 0xe2, 0xc9, 0xfa, 0x1e, 0x43, 0xb9, 0x90, 0x26, 0x39, 0x20, 0xf1, 0xa4, 0xdf, 0x7b, 0xce, 0x07, 0xd3, 0x39, 0xcb, 0x46, 0x59, 0xbf, 0xf9, 0x2f, 0x49, 0xc8, 0xde, 0x20, 0x9b, 0xf6, 0x1f, 0x82, 0x37, 0xca, 0x86, 0xf3, 0x66, 0x94, 0x9c, 0x90, 0xd6, 0xc6, 0xeb, 0x64, 0xf7, 0x6c, 0x9c, 0x0f, 0x66, 0x9d, 0xde, 0xbc, 0xd9, 0xe8, 0x2e, 0x49, 0x4b, 0xc3, 0x6a, 0x07, 0x6e, 0x7c, 0x49, 0xb5, 0xf5, 0x12, 0x2d, 0x37, 0xdd, 0xb3, 0x69, 0x9d, 0x41, 0x3d, 0xb2, 0x17, 0x12, 0x08, 0x2b, 0x8e, 0xa3, 0x97, 0x3b, 0x05, 0xa0, 0x8c, 0xa4, 0x0a, 0x0c, 0xb7, 0x8a, 0x02, 0xaa, 0x34, 0xc4, 0x27, 0x50, 0xbe, 0x49, 0xeb, 0x35, 0x37, 0x65, 0x5a, 0x11, 0xd3, 0x9a, 0x98, 0x86, 0xe8, 0x42, 0x11, 0x53, 0xc2, 0xbd, 0x6e, 0x05, 0x7d, 0xf3, 0x1d, 0x00, 0x00, 0xff, 0xff, 0x31, 0x14, 0xb4, 0x11, 0xf6, 0x01, 0x00, 0x00, } grpc-go-1.22.1/credentials/alts/internal/regenerate.sh000077500000000000000000000023411351635773100227020ustar00rootroot00000000000000#!/bin/bash # Copyright 2018 gRPC authors. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. set -eux -o pipefail TMP=$(mktemp -d) function finish { rm -rf "$TMP" } trap finish EXIT pushd "$TMP" mkdir -p grpc/gcp curl https://raw.githubusercontent.com/grpc/grpc-proto/master/grpc/gcp/altscontext.proto > grpc/gcp/altscontext.proto curl https://raw.githubusercontent.com/grpc/grpc-proto/master/grpc/gcp/handshaker.proto > grpc/gcp/handshaker.proto curl https://raw.githubusercontent.com/grpc/grpc-proto/master/grpc/gcp/transport_security_common.proto > grpc/gcp/transport_security_common.proto protoc --go_out=plugins=grpc,paths=source_relative:. -I. 
grpc/gcp/*.proto popd rm -f proto/grpc_gcp/*.pb.go cp "$TMP"/grpc/gcp/*.pb.go proto/grpc_gcp/ grpc-go-1.22.1/credentials/alts/internal/testutil/000077500000000000000000000000001351635773100220775ustar00rootroot00000000000000grpc-go-1.22.1/credentials/alts/internal/testutil/testutil.go000066400000000000000000000055141351635773100243100ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package testutil include useful test utilities for the handshaker. package testutil import ( "bytes" "encoding/binary" "io" "net" "sync" "google.golang.org/grpc/credentials/alts/internal/conn" ) // Stats is used to collect statistics about concurrent handshake calls. type Stats struct { mu sync.Mutex calls int MaxConcurrentCalls int } // Update updates the statistics by adding one call. func (s *Stats) Update() func() { s.mu.Lock() s.calls++ if s.calls > s.MaxConcurrentCalls { s.MaxConcurrentCalls = s.calls } s.mu.Unlock() return func() { s.mu.Lock() s.calls-- s.mu.Unlock() } } // Reset resets the statistics. func (s *Stats) Reset() { s.mu.Lock() defer s.mu.Unlock() s.calls = 0 s.MaxConcurrentCalls = 0 } // testConn mimics a net.Conn to the peer. type testConn struct { net.Conn in *bytes.Buffer out *bytes.Buffer } // NewTestConn creates a new instance of testConn object. func NewTestConn(in *bytes.Buffer, out *bytes.Buffer) net.Conn { return &testConn{ in: in, out: out, } } // Read reads from the in buffer. func (c *testConn) Read(b []byte) (n int, err error) { return c.in.Read(b) } // Write writes to the out buffer. func (c *testConn) Write(b []byte) (n int, err error) { return c.out.Write(b) } // Close closes the testConn object. func (c *testConn) Close() error { return nil } // unresponsiveTestConn mimics a net.Conn for an unresponsive peer. It is used // for testing the PeerNotResponding case. type unresponsiveTestConn struct { net.Conn } // NewUnresponsiveTestConn creates a new instance of unresponsiveTestConn object. func NewUnresponsiveTestConn() net.Conn { return &unresponsiveTestConn{} } // Read reads from the in buffer. func (c *unresponsiveTestConn) Read(b []byte) (n int, err error) { return 0, io.EOF } // Write writes to the out buffer. func (c *unresponsiveTestConn) Write(b []byte) (n int, err error) { return 0, nil } // Close closes the TestConn object. func (c *unresponsiveTestConn) Close() error { return nil } // MakeFrame creates a handshake frame. func MakeFrame(pl string) []byte { f := make([]byte, len(pl)+conn.MsgLenFieldSize) binary.LittleEndian.PutUint32(f, uint32(len(pl))) copy(f[conn.MsgLenFieldSize:], []byte(pl)) return f } grpc-go-1.22.1/credentials/alts/utils.go000066400000000000000000000074361351635773100201070ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package alts import ( "context" "errors" "fmt" "io" "io/ioutil" "log" "os" "os/exec" "regexp" "runtime" "strings" "google.golang.org/grpc/peer" ) const ( linuxProductNameFile = "/sys/class/dmi/id/product_name" windowsCheckCommand = "powershell.exe" windowsCheckCommandArgs = "Get-WmiObject -Class Win32_BIOS" powershellOutputFilter = "Manufacturer" windowsManufacturerRegex = ":(.*)" ) type platformError string func (k platformError) Error() string { return fmt.Sprintf("%s is not supported", string(k)) } var ( // The following two variables will be reassigned in tests. runningOS = runtime.GOOS manufacturerReader = func() (io.Reader, error) { switch runningOS { case "linux": return os.Open(linuxProductNameFile) case "windows": cmd := exec.Command(windowsCheckCommand, windowsCheckCommandArgs) out, err := cmd.Output() if err != nil { return nil, err } for _, line := range strings.Split(strings.TrimSuffix(string(out), "\n"), "\n") { if strings.HasPrefix(line, powershellOutputFilter) { re := regexp.MustCompile(windowsManufacturerRegex) name := re.FindString(line) name = strings.TrimLeft(name, ":") return strings.NewReader(name), nil } } return nil, errors.New("cannot determine the machine's manufacturer") default: return nil, platformError(runningOS) } } vmOnGCP bool ) // isRunningOnGCP checks whether the local system, without doing a network request is // running on GCP. func isRunningOnGCP() bool { manufacturer, err := readManufacturer() if err != nil { log.Fatalf("failure to read manufacturer information: %v", err) } name := string(manufacturer) switch runningOS { case "linux": name = strings.TrimSpace(name) return name == "Google" || name == "Google Compute Engine" case "windows": name = strings.Replace(name, " ", "", -1) name = strings.Replace(name, "\n", "", -1) name = strings.Replace(name, "\r", "", -1) return name == "Google" default: log.Fatal(platformError(runningOS)) } return false } func readManufacturer() ([]byte, error) { reader, err := manufacturerReader() if err != nil { return nil, err } if reader == nil { return nil, errors.New("got nil reader") } manufacturer, err := ioutil.ReadAll(reader) if err != nil { return nil, fmt.Errorf("failed reading %v: %v", linuxProductNameFile, err) } return manufacturer, nil } // AuthInfoFromContext extracts the alts.AuthInfo object from the given context, // if it exists. This API should be used by gRPC server RPC handlers to get // information about the communicating peer. For client-side, use grpc.Peer() // CallOption. func AuthInfoFromContext(ctx context.Context) (AuthInfo, error) { p, ok := peer.FromContext(ctx) if !ok { return nil, errors.New("no Peer found in Context") } return AuthInfoFromPeer(p) } // AuthInfoFromPeer extracts the alts.AuthInfo object from the given peer, if it // exists. This API should be used by gRPC clients after obtaining a peer object // using the grpc.Peer() CallOption. 
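//
// A hedged usage sketch on the client side (client, ctx and req are
// placeholders assumed to exist in the caller's code):
//
//     var p peer.Peer
//     // grpc.Peer populates p with peer information for this call.
//     if _, err := client.SomeRPC(ctx, req, grpc.Peer(&p)); err != nil {
//         return err
//     }
//     authInfo, err := alts.AuthInfoFromPeer(&p)
//     if err != nil {
//         return err
//     }
//     log.Printf("peer service account: %v", authInfo.PeerServiceAccount())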
func AuthInfoFromPeer(p *peer.Peer) (AuthInfo, error) { altsAuthInfo, ok := p.AuthInfo.(AuthInfo) if !ok { return nil, errors.New("no alts.AuthInfo found in Peer") } return altsAuthInfo, nil } grpc-go-1.22.1/credentials/alts/utils_test.go000066400000000000000000000100211351635773100211260ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package alts import ( "context" "io" "strings" "testing" altspb "google.golang.org/grpc/credentials/alts/internal/proto/grpc_gcp" "google.golang.org/grpc/peer" ) func TestIsRunningOnGCP(t *testing.T) { for _, tc := range []struct { description string testOS string testReader io.Reader out bool }{ // Linux tests. {"linux: not a GCP platform", "linux", strings.NewReader("not GCP"), false}, {"Linux: GCP platform (Google)", "linux", strings.NewReader("Google"), true}, {"Linux: GCP platform (Google Compute Engine)", "linux", strings.NewReader("Google Compute Engine"), true}, {"Linux: GCP platform (Google Compute Engine) with extra spaces", "linux", strings.NewReader(" Google Compute Engine "), true}, // Windows tests. {"windows: not a GCP platform", "windows", strings.NewReader("not GCP"), false}, {"windows: GCP platform (Google)", "windows", strings.NewReader("Google"), true}, {"windows: GCP platform (Google) with extra spaces", "windows", strings.NewReader(" Google "), true}, } { reverseFunc := setup(tc.testOS, tc.testReader) if got, want := isRunningOnGCP(), tc.out; got != want { t.Errorf("%v: isRunningOnGCP()=%v, want %v", tc.description, got, want) } reverseFunc() } } func setup(testOS string, testReader io.Reader) func() { tmpOS := runningOS tmpReader := manufacturerReader // Set test OS and reader function. 
runningOS = testOS manufacturerReader = func() (io.Reader, error) { return testReader, nil } return func() { runningOS = tmpOS manufacturerReader = tmpReader } } func TestAuthInfoFromContext(t *testing.T) { ctx := context.Background() altsAuthInfo := &fakeALTSAuthInfo{} p := &peer.Peer{ AuthInfo: altsAuthInfo, } for _, tc := range []struct { desc string ctx context.Context success bool out AuthInfo }{ { "working case", peer.NewContext(ctx, p), true, altsAuthInfo, }, } { authInfo, err := AuthInfoFromContext(tc.ctx) if got, want := (err == nil), tc.success; got != want { t.Errorf("%v: AuthInfoFromContext(_)=(err=nil)=%v, want %v", tc.desc, got, want) } if got, want := authInfo, tc.out; got != want { t.Errorf("%v:, AuthInfoFromContext(_)=(%v, _), want (%v, _)", tc.desc, got, want) } } } func TestAuthInfoFromPeer(t *testing.T) { altsAuthInfo := &fakeALTSAuthInfo{} p := &peer.Peer{ AuthInfo: altsAuthInfo, } for _, tc := range []struct { desc string p *peer.Peer success bool out AuthInfo }{ { "working case", p, true, altsAuthInfo, }, } { authInfo, err := AuthInfoFromPeer(tc.p) if got, want := (err == nil), tc.success; got != want { t.Errorf("%v: AuthInfoFromPeer(_)=(err=nil)=%v, want %v", tc.desc, got, want) } if got, want := authInfo, tc.out; got != want { t.Errorf("%v:, AuthInfoFromPeer(_)=(%v, _), want (%v, _)", tc.desc, got, want) } } } type fakeALTSAuthInfo struct{} func (*fakeALTSAuthInfo) AuthType() string { return "" } func (*fakeALTSAuthInfo) ApplicationProtocol() string { return "" } func (*fakeALTSAuthInfo) RecordProtocol() string { return "" } func (*fakeALTSAuthInfo) SecurityLevel() altspb.SecurityLevel { return altspb.SecurityLevel_SECURITY_NONE } func (*fakeALTSAuthInfo) PeerServiceAccount() string { return "" } func (*fakeALTSAuthInfo) LocalServiceAccount() string { return "" } func (*fakeALTSAuthInfo) PeerRPCVersions() *altspb.RpcProtocolVersions { return nil } grpc-go-1.22.1/credentials/credentials.go000066400000000000000000000321071351635773100202720ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package credentials implements various credentials supported by gRPC library, // which encapsulate all the state needed by a client to authenticate with a // server and make various assertions, e.g., about the client's identity, role, // or whether it is authorized to make a particular call. package credentials // import "google.golang.org/grpc/credentials" import ( "context" "crypto/tls" "crypto/x509" "errors" "fmt" "io/ioutil" "net" "strings" "github.com/golang/protobuf/proto" "google.golang.org/grpc/credentials/internal" ) // PerRPCCredentials defines the common interface for the credentials which need to // attach security information to every RPC (e.g., oauth2). type PerRPCCredentials interface { // GetRequestMetadata gets the current request metadata, refreshing // tokens if required. This should be called by the transport layer on // each request, and the data should be populated in headers or other // context. 
If a status code is returned, it will be used as the status // for the RPC. uri is the URI of the entry point for the request. // When supported by the underlying implementation, ctx can be used for // timeout and cancellation. // TODO(zhaoq): Define the set of the qualified keys instead of leaving // it as an arbitrary string. GetRequestMetadata(ctx context.Context, uri ...string) (map[string]string, error) // RequireTransportSecurity indicates whether the credentials requires // transport security. RequireTransportSecurity() bool } // ProtocolInfo provides information regarding the gRPC wire protocol version, // security protocol, security protocol version in use, server name, etc. type ProtocolInfo struct { // ProtocolVersion is the gRPC wire protocol version. ProtocolVersion string // SecurityProtocol is the security protocol in use. SecurityProtocol string // SecurityVersion is the security protocol version. SecurityVersion string // ServerName is the user-configured server name. ServerName string } // AuthInfo defines the common interface for the auth information the users are interested in. type AuthInfo interface { AuthType() string } // ErrConnDispatched indicates that rawConn has been dispatched out of gRPC // and the caller should not close rawConn. var ErrConnDispatched = errors.New("credentials: rawConn is dispatched out of gRPC") // TransportCredentials defines the common interface for all the live gRPC wire // protocols and supported transport security protocols (e.g., TLS, SSL). type TransportCredentials interface { // ClientHandshake does the authentication handshake specified by the corresponding // authentication protocol on rawConn for clients. It returns the authenticated // connection and the corresponding auth information about the connection. // Implementations must use the provided context to implement timely cancellation. // gRPC will try to reconnect if the error returned is a temporary error // (io.EOF, context.DeadlineExceeded or err.Temporary() == true). // If the returned error is a wrapper error, implementations should make sure that // the error implements Temporary() to have the correct retry behaviors. // // If the returned net.Conn is closed, it MUST close the net.Conn provided. ClientHandshake(context.Context, string, net.Conn) (net.Conn, AuthInfo, error) // ServerHandshake does the authentication handshake for servers. It returns // the authenticated connection and the corresponding auth information about // the connection. // // If the returned net.Conn is closed, it MUST close the net.Conn provided. ServerHandshake(net.Conn) (net.Conn, AuthInfo, error) // Info provides the ProtocolInfo of this TransportCredentials. Info() ProtocolInfo // Clone makes a copy of this TransportCredentials. Clone() TransportCredentials // OverrideServerName overrides the server name used to verify the hostname on the returned certificates from the server. // gRPC internals also use it to override the virtual hosting name if it is set. // It must be called before dialing. Currently, this is only used by grpclb. OverrideServerName(string) error } // Bundle is a combination of TransportCredentials and PerRPCCredentials. // // It also contains a mode switching method, so it can be used as a combination // of different credential policies. // // Bundle cannot be used together with individual TransportCredentials. // PerRPCCredentials from Bundle will be appended to other PerRPCCredentials. // // This API is experimental. 
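//
// A hedged consumption sketch (b is any Bundle implementation and target is a
// placeholder address); the bundle supplies both transport and per-RPC
// credentials for the connection:
//
//     conn, err := grpc.Dial(target, grpc.WithCredentialsBundle(b))
//     if err != nil {
//         return err
//     }
//     defer conn.Close()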
type Bundle interface { TransportCredentials() TransportCredentials PerRPCCredentials() PerRPCCredentials // NewWithMode should make a copy of Bundle, and switch mode. Modifying the // existing Bundle may cause races. // // NewWithMode returns nil if the requested mode is not supported. NewWithMode(mode string) (Bundle, error) } // TLSInfo contains the auth information for a TLS authenticated connection. // It implements the AuthInfo interface. type TLSInfo struct { State tls.ConnectionState } // AuthType returns the type of TLSInfo as a string. func (t TLSInfo) AuthType() string { return "tls" } // GetSecurityValue returns security info requested by channelz. func (t TLSInfo) GetSecurityValue() ChannelzSecurityValue { v := &TLSChannelzSecurityValue{ StandardName: cipherSuiteLookup[t.State.CipherSuite], } // Currently there's no way to get LocalCertificate info from tls package. if len(t.State.PeerCertificates) > 0 { v.RemoteCertificate = t.State.PeerCertificates[0].Raw } return v } // tlsCreds is the credentials required for authenticating a connection using TLS. type tlsCreds struct { // TLS configuration config *tls.Config } func (c tlsCreds) Info() ProtocolInfo { return ProtocolInfo{ SecurityProtocol: "tls", SecurityVersion: "1.2", ServerName: c.config.ServerName, } } func (c *tlsCreds) ClientHandshake(ctx context.Context, authority string, rawConn net.Conn) (_ net.Conn, _ AuthInfo, err error) { // use local cfg to avoid clobbering ServerName if using multiple endpoints cfg := cloneTLSConfig(c.config) if cfg.ServerName == "" { colonPos := strings.LastIndex(authority, ":") if colonPos == -1 { colonPos = len(authority) } cfg.ServerName = authority[:colonPos] } conn := tls.Client(rawConn, cfg) errChannel := make(chan error, 1) go func() { errChannel <- conn.Handshake() }() select { case err := <-errChannel: if err != nil { return nil, nil, err } case <-ctx.Done(): return nil, nil, ctx.Err() } return internal.WrapSyscallConn(rawConn, conn), TLSInfo{conn.ConnectionState()}, nil } func (c *tlsCreds) ServerHandshake(rawConn net.Conn) (net.Conn, AuthInfo, error) { conn := tls.Server(rawConn, c.config) if err := conn.Handshake(); err != nil { return nil, nil, err } return internal.WrapSyscallConn(rawConn, conn), TLSInfo{conn.ConnectionState()}, nil } func (c *tlsCreds) Clone() TransportCredentials { return NewTLS(c.config) } func (c *tlsCreds) OverrideServerName(serverNameOverride string) error { c.config.ServerName = serverNameOverride return nil } const alpnProtoStrH2 = "h2" func appendH2ToNextProtos(ps []string) []string { for _, p := range ps { if p == alpnProtoStrH2 { return ps } } ret := make([]string, 0, len(ps)+1) ret = append(ret, ps...) return append(ret, alpnProtoStrH2) } // NewTLS uses c to construct a TransportCredentials based on TLS. func NewTLS(c *tls.Config) TransportCredentials { tc := &tlsCreds{cloneTLSConfig(c)} tc.config.NextProtos = appendH2ToNextProtos(tc.config.NextProtos) return tc } // NewClientTLSFromCert constructs TLS credentials from the input certificate for client. // serverNameOverride is for testing only. If set to a non empty string, // it will override the virtual host name of authority (e.g. :authority header field) in requests. func NewClientTLSFromCert(cp *x509.CertPool, serverNameOverride string) TransportCredentials { return NewTLS(&tls.Config{ServerName: serverNameOverride, RootCAs: cp}) } // NewClientTLSFromFile constructs TLS credentials from the input certificate file for client. // serverNameOverride is for testing only. 
If set to a non empty string, // it will override the virtual host name of authority (e.g. :authority header field) in requests. func NewClientTLSFromFile(certFile, serverNameOverride string) (TransportCredentials, error) { b, err := ioutil.ReadFile(certFile) if err != nil { return nil, err } cp := x509.NewCertPool() if !cp.AppendCertsFromPEM(b) { return nil, fmt.Errorf("credentials: failed to append certificates") } return NewTLS(&tls.Config{ServerName: serverNameOverride, RootCAs: cp}), nil } // NewServerTLSFromCert constructs TLS credentials from the input certificate for server. func NewServerTLSFromCert(cert *tls.Certificate) TransportCredentials { return NewTLS(&tls.Config{Certificates: []tls.Certificate{*cert}}) } // NewServerTLSFromFile constructs TLS credentials from the input certificate file and key // file for server. func NewServerTLSFromFile(certFile, keyFile string) (TransportCredentials, error) { cert, err := tls.LoadX509KeyPair(certFile, keyFile) if err != nil { return nil, err } return NewTLS(&tls.Config{Certificates: []tls.Certificate{cert}}), nil } // ChannelzSecurityInfo defines the interface that security protocols should implement // in order to provide security info to channelz. type ChannelzSecurityInfo interface { GetSecurityValue() ChannelzSecurityValue } // ChannelzSecurityValue defines the interface that GetSecurityValue() return value // should satisfy. This interface should only be satisfied by *TLSChannelzSecurityValue // and *OtherChannelzSecurityValue. type ChannelzSecurityValue interface { isChannelzSecurityValue() } // TLSChannelzSecurityValue defines the struct that TLS protocol should return // from GetSecurityValue(), containing security info like cipher and certificate used. type TLSChannelzSecurityValue struct { ChannelzSecurityValue StandardName string LocalCertificate []byte RemoteCertificate []byte } // OtherChannelzSecurityValue defines the struct that non-TLS protocol should return // from GetSecurityValue(), which contains protocol specific security info. Note // the Value field will be sent to users of channelz requesting channel info, and // thus sensitive info should better be avoided. 
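//
// Illustrative sketch (fooAuthInfo is a hypothetical AuthInfo type for a
// non-TLS protocol) of exposing only non-sensitive data to channelz:
//
//     func (f fooAuthInfo) GetSecurityValue() ChannelzSecurityValue {
//         // Report the protocol name only; omit anything sensitive.
//         return &OtherChannelzSecurityValue{Name: "foo-protocol"}
//     }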
type OtherChannelzSecurityValue struct { ChannelzSecurityValue Name string Value proto.Message } var cipherSuiteLookup = map[uint16]string{ tls.TLS_RSA_WITH_RC4_128_SHA: "TLS_RSA_WITH_RC4_128_SHA", tls.TLS_RSA_WITH_3DES_EDE_CBC_SHA: "TLS_RSA_WITH_3DES_EDE_CBC_SHA", tls.TLS_RSA_WITH_AES_128_CBC_SHA: "TLS_RSA_WITH_AES_128_CBC_SHA", tls.TLS_RSA_WITH_AES_256_CBC_SHA: "TLS_RSA_WITH_AES_256_CBC_SHA", tls.TLS_RSA_WITH_AES_128_GCM_SHA256: "TLS_RSA_WITH_AES_128_GCM_SHA256", tls.TLS_RSA_WITH_AES_256_GCM_SHA384: "TLS_RSA_WITH_AES_256_GCM_SHA384", tls.TLS_ECDHE_ECDSA_WITH_RC4_128_SHA: "TLS_ECDHE_ECDSA_WITH_RC4_128_SHA", tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA: "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA", tls.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA: "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA", tls.TLS_ECDHE_RSA_WITH_RC4_128_SHA: "TLS_ECDHE_RSA_WITH_RC4_128_SHA", tls.TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA: "TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA", tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA: "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA", tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA: "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA", tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256: "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256: "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384: "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", tls.TLS_FALLBACK_SCSV: "TLS_FALLBACK_SCSV", tls.TLS_RSA_WITH_AES_128_CBC_SHA256: "TLS_RSA_WITH_AES_128_CBC_SHA256", tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256: "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256", tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256: "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305: "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305", tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305: "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305", } // cloneTLSConfig returns a shallow clone of the exported // fields of cfg, ignoring the unexported sync.Once, which // contains a mutex and must not be copied. // // If cfg is nil, a new zero tls.Config is returned. // // TODO: inline this function if possible. func cloneTLSConfig(cfg *tls.Config) *tls.Config { if cfg == nil { return &tls.Config{} } return cfg.Clone() } grpc-go-1.22.1/credentials/credentials_test.go000066400000000000000000000152371351635773100213360ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package credentials import ( "context" "crypto/tls" "net" "reflect" "testing" "google.golang.org/grpc/testdata" ) func TestTLSOverrideServerName(t *testing.T) { expectedServerName := "server.name" c := NewTLS(nil) c.OverrideServerName(expectedServerName) if c.Info().ServerName != expectedServerName { t.Fatalf("c.Info().ServerName = %v, want %v", c.Info().ServerName, expectedServerName) } } func TestTLSClone(t *testing.T) { expectedServerName := "server.name" c := NewTLS(nil) c.OverrideServerName(expectedServerName) cc := c.Clone() if cc.Info().ServerName != expectedServerName { t.Fatalf("cc.Info().ServerName = %v, want %v", cc.Info().ServerName, expectedServerName) } cc.OverrideServerName("") if c.Info().ServerName != expectedServerName { t.Fatalf("Change in clone should not affect the original, c.Info().ServerName = %v, want %v", c.Info().ServerName, expectedServerName) } } type serverHandshake func(net.Conn) (AuthInfo, error) func TestClientHandshakeReturnsAuthInfo(t *testing.T) { done := make(chan AuthInfo, 1) lis := launchServer(t, tlsServerHandshake, done) defer lis.Close() lisAddr := lis.Addr().String() clientAuthInfo := clientHandle(t, gRPCClientHandshake, lisAddr) // wait until server sends serverAuthInfo or fails. serverAuthInfo, ok := <-done if !ok { t.Fatalf("Error at server-side") } if !compare(clientAuthInfo, serverAuthInfo) { t.Fatalf("c.ClientHandshake(_, %v, _) = %v, want %v.", lisAddr, clientAuthInfo, serverAuthInfo) } } func TestServerHandshakeReturnsAuthInfo(t *testing.T) { done := make(chan AuthInfo, 1) lis := launchServer(t, gRPCServerHandshake, done) defer lis.Close() clientAuthInfo := clientHandle(t, tlsClientHandshake, lis.Addr().String()) // wait until server sends serverAuthInfo or fails. serverAuthInfo, ok := <-done if !ok { t.Fatalf("Error at server-side") } if !compare(clientAuthInfo, serverAuthInfo) { t.Fatalf("ServerHandshake(_) = %v, want %v.", serverAuthInfo, clientAuthInfo) } } func TestServerAndClientHandshake(t *testing.T) { done := make(chan AuthInfo, 1) lis := launchServer(t, gRPCServerHandshake, done) defer lis.Close() clientAuthInfo := clientHandle(t, gRPCClientHandshake, lis.Addr().String()) // wait until server sends serverAuthInfo or fails. serverAuthInfo, ok := <-done if !ok { t.Fatalf("Error at server-side") } if !compare(clientAuthInfo, serverAuthInfo) { t.Fatalf("AuthInfo returned by server: %v and client: %v aren't same", serverAuthInfo, clientAuthInfo) } } func compare(a1, a2 AuthInfo) bool { if a1.AuthType() != a2.AuthType() { return false } switch a1.AuthType() { case "tls": state1 := a1.(TLSInfo).State state2 := a2.(TLSInfo).State if state1.Version == state2.Version && state1.HandshakeComplete == state2.HandshakeComplete && state1.CipherSuite == state2.CipherSuite && state1.NegotiatedProtocol == state2.NegotiatedProtocol { return true } return false default: return false } } func launchServer(t *testing.T, hs serverHandshake, done chan AuthInfo) net.Listener { lis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Failed to listen: %v", err) } go serverHandle(t, hs, done, lis) return lis } // Is run in a separate goroutine. func serverHandle(t *testing.T, hs serverHandshake, done chan AuthInfo, lis net.Listener) { serverRawConn, err := lis.Accept() if err != nil { t.Errorf("Server failed to accept connection: %v", err) close(done) return } serverAuthInfo, err := hs(serverRawConn) if err != nil { t.Errorf("Server failed while handshake. 
Error: %v", err) serverRawConn.Close() close(done) return } done <- serverAuthInfo } func clientHandle(t *testing.T, hs func(net.Conn, string) (AuthInfo, error), lisAddr string) AuthInfo { conn, err := net.Dial("tcp", lisAddr) if err != nil { t.Fatalf("Client failed to connect to %s. Error: %v", lisAddr, err) } defer conn.Close() clientAuthInfo, err := hs(conn, lisAddr) if err != nil { t.Fatalf("Error on client while handshake. Error: %v", err) } return clientAuthInfo } // Server handshake implementation in gRPC. func gRPCServerHandshake(conn net.Conn) (AuthInfo, error) { serverTLS, err := NewServerTLSFromFile(testdata.Path("server1.pem"), testdata.Path("server1.key")) if err != nil { return nil, err } _, serverAuthInfo, err := serverTLS.ServerHandshake(conn) if err != nil { return nil, err } return serverAuthInfo, nil } // Client handshake implementation in gRPC. func gRPCClientHandshake(conn net.Conn, lisAddr string) (AuthInfo, error) { clientTLS := NewTLS(&tls.Config{InsecureSkipVerify: true}) _, authInfo, err := clientTLS.ClientHandshake(context.Background(), lisAddr, conn) if err != nil { return nil, err } return authInfo, nil } func tlsServerHandshake(conn net.Conn) (AuthInfo, error) { cert, err := tls.LoadX509KeyPair(testdata.Path("server1.pem"), testdata.Path("server1.key")) if err != nil { return nil, err } serverTLSConfig := &tls.Config{Certificates: []tls.Certificate{cert}} serverConn := tls.Server(conn, serverTLSConfig) err = serverConn.Handshake() if err != nil { return nil, err } return TLSInfo{State: serverConn.ConnectionState()}, nil } func tlsClientHandshake(conn net.Conn, _ string) (AuthInfo, error) { clientTLSConfig := &tls.Config{InsecureSkipVerify: true} clientConn := tls.Client(conn, clientTLSConfig) if err := clientConn.Handshake(); err != nil { return nil, err } return TLSInfo{State: clientConn.ConnectionState()}, nil } func TestAppendH2ToNextProtos(t *testing.T) { tests := []struct { name string ps []string want []string }{ { name: "empty", ps: nil, want: []string{"h2"}, }, { name: "only h2", ps: []string{"h2"}, want: []string{"h2"}, }, { name: "with h2", ps: []string{"alpn", "h2"}, want: []string{"alpn", "h2"}, }, { name: "no h2", ps: []string{"alpn"}, want: []string{"alpn", "h2"}, }, } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { if got := appendH2ToNextProtos(tt.ps); !reflect.DeepEqual(got, tt.want) { t.Errorf("appendH2ToNextProtos() = %v, want %v", got, tt.want) } }) } } grpc-go-1.22.1/credentials/google/000077500000000000000000000000001351635773100167175ustar00rootroot00000000000000grpc-go-1.22.1/credentials/google/google.go000066400000000000000000000075341351635773100205330ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package google defines credentials for google cloud services. 
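//
// A hedged usage sketch (the endpoint is a placeholder for a Google API
// backend that accepts these credentials):
//
//     conn, err := grpc.Dial("pubsub.googleapis.com:443",
//         grpc.WithCredentialsBundle(google.NewDefaultCredentials()))
//     if err != nil {
//         log.Fatalf("grpc.Dial failed: %v", err)
//     }
//     defer conn.Close()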
package google import ( "context" "fmt" "time" "google.golang.org/grpc/credentials" "google.golang.org/grpc/credentials/alts" "google.golang.org/grpc/credentials/oauth" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/internal" ) const tokenRequestTimeout = 30 * time.Second // NewDefaultCredentials returns a credentials bundle that is configured to work // with google services. // // This API is experimental. func NewDefaultCredentials() credentials.Bundle { c := &creds{ newPerRPCCreds: func() credentials.PerRPCCredentials { ctx, cancel := context.WithTimeout(context.Background(), tokenRequestTimeout) defer cancel() perRPCCreds, err := oauth.NewApplicationDefault(ctx) if err != nil { grpclog.Warningf("google default creds: failed to create application oauth: %v", err) } return perRPCCreds }, } bundle, err := c.NewWithMode(internal.CredsBundleModeFallback) if err != nil { grpclog.Warningf("google default creds: failed to create new creds: %v", err) } return bundle } // NewComputeEngineCredentials returns a credentials bundle that is configured to work // with google services. This API must only be used when running on GCE. Authentication configured // by this API represents the GCE VM's default service account. // // This API is experimental. func NewComputeEngineCredentials() credentials.Bundle { c := &creds{ newPerRPCCreds: func() credentials.PerRPCCredentials { return oauth.NewComputeEngine() }, } bundle, err := c.NewWithMode(internal.CredsBundleModeFallback) if err != nil { grpclog.Warningf("compute engine creds: failed to create new creds: %v", err) } return bundle } // creds implements credentials.Bundle. type creds struct { // Supported modes are defined in internal/internal.go. mode string // The transport credentials associated with this bundle. transportCreds credentials.TransportCredentials // The per RPC credentials associated with this bundle. perRPCCreds credentials.PerRPCCredentials // Creates new per RPC credentials newPerRPCCreds func() credentials.PerRPCCredentials } func (c *creds) TransportCredentials() credentials.TransportCredentials { return c.transportCreds } func (c *creds) PerRPCCredentials() credentials.PerRPCCredentials { if c == nil { return nil } return c.perRPCCreds } // NewWithMode should make a copy of Bundle, and switch mode. Modifying the // existing Bundle may cause races. func (c *creds) NewWithMode(mode string) (credentials.Bundle, error) { newCreds := &creds{ mode: mode, newPerRPCCreds: c.newPerRPCCreds, } // Create transport credentials. switch mode { case internal.CredsBundleModeFallback: newCreds.transportCreds = credentials.NewTLS(nil) case internal.CredsBundleModeBackendFromBalancer, internal.CredsBundleModeBalancer: // Only the clients can use google default credentials, so we only need // to create new ALTS client creds here. newCreds.transportCreds = alts.NewClientCreds(alts.DefaultClientOptions()) default: return nil, fmt.Errorf("unsupported mode: %v", mode) } if mode == internal.CredsBundleModeFallback || mode == internal.CredsBundleModeBackendFromBalancer { newCreds.perRPCCreds = newCreds.newPerRPCCreds() } return newCreds, nil } grpc-go-1.22.1/credentials/internal/000077500000000000000000000000001351635773100172575ustar00rootroot00000000000000grpc-go-1.22.1/credentials/internal/syscallconn.go000066400000000000000000000035451351635773100221450ustar00rootroot00000000000000// +build !appengine /* * * Copyright 2018 gRPC authors. 
* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package internal contains credentials-internal code. package internal import ( "net" "syscall" ) type sysConn = syscall.Conn // syscallConn keeps reference of rawConn to support syscall.Conn for channelz. // SyscallConn() (the method in interface syscall.Conn) is explicitly // implemented on this type, // // Interface syscall.Conn is implemented by most net.Conn implementations (e.g. // TCPConn, UnixConn), but is not part of net.Conn interface. So wrapper conns // that embed net.Conn don't implement syscall.Conn. (Side note: tls.Conn // doesn't embed net.Conn, so even if syscall.Conn is part of net.Conn, it won't // help here). type syscallConn struct { net.Conn // sysConn is a type alias of syscall.Conn. It's necessary because the name // `Conn` collides with `net.Conn`. sysConn } // WrapSyscallConn tries to wrap rawConn and newConn into a net.Conn that // implements syscall.Conn. rawConn will be used to support syscall, and newConn // will be used for read/write. // // This function returns newConn if rawConn doesn't implement syscall.Conn. func WrapSyscallConn(rawConn, newConn net.Conn) net.Conn { sysConn, ok := rawConn.(syscall.Conn) if !ok { return newConn } return &syscallConn{ Conn: newConn, sysConn: sysConn, } } grpc-go-1.22.1/credentials/internal/syscallconn_appengine.go000066400000000000000000000014241351635773100241650ustar00rootroot00000000000000// +build appengine /* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package internal import ( "net" ) // WrapSyscallConn returns newConn on appengine. func WrapSyscallConn(rawConn, newConn net.Conn) net.Conn { return newConn } grpc-go-1.22.1/credentials/internal/syscallconn_test.go000066400000000000000000000030571351635773100232020ustar00rootroot00000000000000// +build !appengine /* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package internal_test import ( "net" "syscall" "testing" "google.golang.org/grpc/credentials/internal" ) type syscallConn struct { net.Conn } func (*syscallConn) SyscallConn() (syscall.RawConn, error) { return nil, nil } type nonSyscallConn struct { net.Conn } func TestWrapSyscallConn(t *testing.T) { sc := &syscallConn{} nsc := &nonSyscallConn{} wrapConn := internal.WrapSyscallConn(sc, nsc) if _, ok := wrapConn.(syscall.Conn); !ok { t.Errorf("returned conn (type %T) doesn't implement syscall.Conn, want implement", wrapConn) } } func TestWrapSyscallConnNoWrap(t *testing.T) { nscRaw := &nonSyscallConn{} nsc := &nonSyscallConn{} wrapConn := internal.WrapSyscallConn(nscRaw, nsc) if _, ok := wrapConn.(syscall.Conn); ok { t.Errorf("returned conn (type %T) implements syscall.Conn, want not implement", wrapConn) } if wrapConn != nsc { t.Errorf("returned conn is %p, want %p (the passed-in newConn)", wrapConn, nsc) } } grpc-go-1.22.1/credentials/oauth/000077500000000000000000000000001351635773100165635ustar00rootroot00000000000000grpc-go-1.22.1/credentials/oauth/oauth.go000066400000000000000000000122141351635773100202320ustar00rootroot00000000000000/* * * Copyright 2015 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package oauth implements gRPC credentials using OAuth. package oauth import ( "context" "fmt" "io/ioutil" "sync" "golang.org/x/oauth2" "golang.org/x/oauth2/google" "golang.org/x/oauth2/jwt" "google.golang.org/grpc/credentials" ) // TokenSource supplies PerRPCCredentials from an oauth2.TokenSource. type TokenSource struct { oauth2.TokenSource } // GetRequestMetadata gets the request metadata as a map from a TokenSource. func (ts TokenSource) GetRequestMetadata(ctx context.Context, uri ...string) (map[string]string, error) { token, err := ts.Token() if err != nil { return nil, err } return map[string]string{ "authorization": token.Type() + " " + token.AccessToken, }, nil } // RequireTransportSecurity indicates whether the credentials requires transport security. func (ts TokenSource) RequireTransportSecurity() bool { return true } type jwtAccess struct { jsonKey []byte } // NewJWTAccessFromFile creates PerRPCCredentials from the given keyFile. func NewJWTAccessFromFile(keyFile string) (credentials.PerRPCCredentials, error) { jsonKey, err := ioutil.ReadFile(keyFile) if err != nil { return nil, fmt.Errorf("credentials: failed to read the service account key file: %v", err) } return NewJWTAccessFromKey(jsonKey) } // NewJWTAccessFromKey creates PerRPCCredentials from the given jsonKey. 
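//
// A hedged usage sketch (jsonKey, target and transportCreds are placeholders
// supplied by the caller; JWT access credentials require transport security):
//
//     perRPC, err := oauth.NewJWTAccessFromKey(jsonKey)
//     if err != nil {
//         log.Fatal(err)
//     }
//     conn, err := grpc.Dial(target,
//         grpc.WithTransportCredentials(transportCreds),
//         grpc.WithPerRPCCredentials(perRPC))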
func NewJWTAccessFromKey(jsonKey []byte) (credentials.PerRPCCredentials, error) { return jwtAccess{jsonKey}, nil } func (j jwtAccess) GetRequestMetadata(ctx context.Context, uri ...string) (map[string]string, error) { ts, err := google.JWTAccessTokenSourceFromJSON(j.jsonKey, uri[0]) if err != nil { return nil, err } token, err := ts.Token() if err != nil { return nil, err } return map[string]string{ "authorization": token.Type() + " " + token.AccessToken, }, nil } func (j jwtAccess) RequireTransportSecurity() bool { return true } // oauthAccess supplies PerRPCCredentials from a given token. type oauthAccess struct { token oauth2.Token } // NewOauthAccess constructs the PerRPCCredentials using a given token. func NewOauthAccess(token *oauth2.Token) credentials.PerRPCCredentials { return oauthAccess{token: *token} } func (oa oauthAccess) GetRequestMetadata(ctx context.Context, uri ...string) (map[string]string, error) { return map[string]string{ "authorization": oa.token.Type() + " " + oa.token.AccessToken, }, nil } func (oa oauthAccess) RequireTransportSecurity() bool { return true } // NewComputeEngine constructs the PerRPCCredentials that fetches access tokens from // Google Compute Engine (GCE)'s metadata server. It is only valid to use this // if your program is running on a GCE instance. // TODO(dsymonds): Deprecate and remove this. func NewComputeEngine() credentials.PerRPCCredentials { return TokenSource{google.ComputeTokenSource("")} } // serviceAccount represents PerRPCCredentials via JWT signing key. type serviceAccount struct { mu sync.Mutex config *jwt.Config t *oauth2.Token } func (s *serviceAccount) GetRequestMetadata(ctx context.Context, uri ...string) (map[string]string, error) { s.mu.Lock() defer s.mu.Unlock() if !s.t.Valid() { var err error s.t, err = s.config.TokenSource(ctx).Token() if err != nil { return nil, err } } return map[string]string{ "authorization": s.t.Type() + " " + s.t.AccessToken, }, nil } func (s *serviceAccount) RequireTransportSecurity() bool { return true } // NewServiceAccountFromKey constructs the PerRPCCredentials using the JSON key slice // from a Google Developers service account. func NewServiceAccountFromKey(jsonKey []byte, scope ...string) (credentials.PerRPCCredentials, error) { config, err := google.JWTConfigFromJSON(jsonKey, scope...) if err != nil { return nil, err } return &serviceAccount{config: config}, nil } // NewServiceAccountFromFile constructs the PerRPCCredentials using the JSON key file // of a Google Developers service account. func NewServiceAccountFromFile(keyFile string, scope ...string) (credentials.PerRPCCredentials, error) { jsonKey, err := ioutil.ReadFile(keyFile) if err != nil { return nil, fmt.Errorf("credentials: failed to read the service account key file: %v", err) } return NewServiceAccountFromKey(jsonKey, scope...) } // NewApplicationDefault returns "Application Default Credentials". For more // detail, see https://developers.google.com/accounts/docs/application-default-credentials. func NewApplicationDefault(ctx context.Context, scope ...string) (credentials.PerRPCCredentials, error) { t, err := google.DefaultTokenSource(ctx, scope...) if err != nil { return nil, err } return TokenSource{t}, nil } grpc-go-1.22.1/credentials/tls13.go000066400000000000000000000017351351635773100167460ustar00rootroot00000000000000// +build go1.12 /* * * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package credentials import "crypto/tls" // This init function adds cipher suite constants only defined in Go 1.12. func init() { cipherSuiteLookup[tls.TLS_AES_128_GCM_SHA256] = "TLS_AES_128_GCM_SHA256" cipherSuiteLookup[tls.TLS_AES_256_GCM_SHA384] = "TLS_AES_256_GCM_SHA384" cipherSuiteLookup[tls.TLS_CHACHA20_POLY1305_SHA256] = "TLS_CHACHA20_POLY1305_SHA256" } grpc-go-1.22.1/dialoptions.go000066400000000000000000000457441351635773100160400ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "context" "fmt" "net" "time" "google.golang.org/grpc/balancer" "google.golang.org/grpc/credentials" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/internal" "google.golang.org/grpc/internal/backoff" "google.golang.org/grpc/internal/envconfig" "google.golang.org/grpc/internal/transport" "google.golang.org/grpc/keepalive" "google.golang.org/grpc/resolver" "google.golang.org/grpc/stats" ) // dialOptions configure a Dial call. dialOptions are set by the DialOption // values passed to Dial. type dialOptions struct { unaryInt UnaryClientInterceptor streamInt StreamClientInterceptor chainUnaryInts []UnaryClientInterceptor chainStreamInts []StreamClientInterceptor cp Compressor dc Decompressor bs backoff.Strategy block bool insecure bool timeout time.Duration scChan <-chan ServiceConfig authority string copts transport.ConnectOptions callOptions []CallOption // This is used by v1 balancer dial option WithBalancer to support v1 // balancer, and also by WithBalancerName dial option. balancerBuilder balancer.Builder // This is to support grpclb. resolverBuilder resolver.Builder reqHandshake envconfig.RequireHandshakeSetting channelzParentID int64 disableServiceConfig bool disableRetry bool disableHealthCheck bool healthCheckFunc internal.HealthChecker minConnectTimeout func() time.Duration defaultServiceConfig *ServiceConfig // defaultServiceConfig is parsed from defaultServiceConfigRawJSON. defaultServiceConfigRawJSON *string } // DialOption configures how we set up the connection. type DialOption interface { apply(*dialOptions) } // EmptyDialOption does not alter the dial configuration. It can be embedded in // another structure to build custom dial options. // // This API is EXPERIMENTAL. type EmptyDialOption struct{} func (EmptyDialOption) apply(*dialOptions) {} // funcDialOption wraps a function that modifies dialOptions into an // implementation of the DialOption interface. 
type funcDialOption struct { f func(*dialOptions) } func (fdo *funcDialOption) apply(do *dialOptions) { fdo.f(do) } func newFuncDialOption(f func(*dialOptions)) *funcDialOption { return &funcDialOption{ f: f, } } // WithWaitForHandshake blocks until the initial settings frame is received from // the server before assigning RPCs to the connection. // // Deprecated: this is the default behavior, and this option will be removed // after the 1.18 release. func WithWaitForHandshake() DialOption { return newFuncDialOption(func(o *dialOptions) { o.reqHandshake = envconfig.RequireHandshakeOn }) } // WithWriteBufferSize determines how much data can be batched before doing a // write on the wire. The corresponding memory allocation for this buffer will // be twice the size to keep syscalls low. The default value for this buffer is // 32KB. // // Zero will disable the write buffer such that each write will be on underlying // connection. Note: A Send call may not directly translate to a write. func WithWriteBufferSize(s int) DialOption { return newFuncDialOption(func(o *dialOptions) { o.copts.WriteBufferSize = s }) } // WithReadBufferSize lets you set the size of read buffer, this determines how // much data can be read at most for each read syscall. // // The default value for this buffer is 32KB. Zero will disable read buffer for // a connection so data framer can access the underlying conn directly. func WithReadBufferSize(s int) DialOption { return newFuncDialOption(func(o *dialOptions) { o.copts.ReadBufferSize = s }) } // WithInitialWindowSize returns a DialOption which sets the value for initial // window size on a stream. The lower bound for window size is 64K and any value // smaller than that will be ignored. func WithInitialWindowSize(s int32) DialOption { return newFuncDialOption(func(o *dialOptions) { o.copts.InitialWindowSize = s }) } // WithInitialConnWindowSize returns a DialOption which sets the value for // initial window size on a connection. The lower bound for window size is 64K // and any value smaller than that will be ignored. func WithInitialConnWindowSize(s int32) DialOption { return newFuncDialOption(func(o *dialOptions) { o.copts.InitialConnWindowSize = s }) } // WithMaxMsgSize returns a DialOption which sets the maximum message size the // client can receive. // // Deprecated: use WithDefaultCallOptions(MaxCallRecvMsgSize(s)) instead. func WithMaxMsgSize(s int) DialOption { return WithDefaultCallOptions(MaxCallRecvMsgSize(s)) } // WithDefaultCallOptions returns a DialOption which sets the default // CallOptions for calls over the connection. func WithDefaultCallOptions(cos ...CallOption) DialOption { return newFuncDialOption(func(o *dialOptions) { o.callOptions = append(o.callOptions, cos...) }) } // WithCodec returns a DialOption which sets a codec for message marshaling and // unmarshaling. // // Deprecated: use WithDefaultCallOptions(ForceCodec(_)) instead. func WithCodec(c Codec) DialOption { return WithDefaultCallOptions(CallCustomCodec(c)) } // WithCompressor returns a DialOption which sets a Compressor to use for // message compression. It has lower priority than the compressor set by the // UseCompressor CallOption. // // Deprecated: use UseCompressor instead. func WithCompressor(cp Compressor) DialOption { return newFuncDialOption(func(o *dialOptions) { o.cp = cp }) } // WithDecompressor returns a DialOption which sets a Decompressor to use for // incoming message decompression. 
If incoming response messages are encoded // using the decompressor's Type(), it will be used. Otherwise, the message // encoding will be used to look up the compressor registered via // encoding.RegisterCompressor, which will then be used to decompress the // message. If no compressor is registered for the encoding, an Unimplemented // status error will be returned. // // Deprecated: use encoding.RegisterCompressor instead. func WithDecompressor(dc Decompressor) DialOption { return newFuncDialOption(func(o *dialOptions) { o.dc = dc }) } // WithBalancer returns a DialOption which sets a load balancer with the v1 API. // Name resolver will be ignored if this DialOption is specified. // // Deprecated: use the new balancer APIs in balancer package and // WithBalancerName. func WithBalancer(b Balancer) DialOption { return newFuncDialOption(func(o *dialOptions) { o.balancerBuilder = &balancerWrapperBuilder{ b: b, } }) } // WithBalancerName sets the balancer that the ClientConn will be initialized // with. Balancer registered with balancerName will be used. This function // panics if no balancer was registered by balancerName. // // The balancer cannot be overridden by balancer option specified by service // config. // // This is an EXPERIMENTAL API. func WithBalancerName(balancerName string) DialOption { builder := balancer.Get(balancerName) if builder == nil { panic(fmt.Sprintf("grpc.WithBalancerName: no balancer is registered for name %v", balancerName)) } return newFuncDialOption(func(o *dialOptions) { o.balancerBuilder = builder }) } // withResolverBuilder is only for grpclb. func withResolverBuilder(b resolver.Builder) DialOption { return newFuncDialOption(func(o *dialOptions) { o.resolverBuilder = b }) } // WithServiceConfig returns a DialOption which has a channel to read the // service configuration. // // Deprecated: service config should be received through name resolver, as // specified here. // https://github.com/grpc/grpc/blob/master/doc/service_config.md func WithServiceConfig(c <-chan ServiceConfig) DialOption { return newFuncDialOption(func(o *dialOptions) { o.scChan = c }) } // WithBackoffMaxDelay configures the dialer to use the provided maximum delay // when backing off after failed connection attempts. func WithBackoffMaxDelay(md time.Duration) DialOption { return WithBackoffConfig(BackoffConfig{MaxDelay: md}) } // WithBackoffConfig configures the dialer to use the provided backoff // parameters after connection failures. // // Use WithBackoffMaxDelay until more parameters on BackoffConfig are opened up // for use. func WithBackoffConfig(b BackoffConfig) DialOption { return withBackoff(backoff.Exponential{ MaxDelay: b.MaxDelay, }) } // withBackoff sets the backoff strategy used for connectRetryNum after a failed // connection attempt. // // This can be exported if arbitrary backoff strategies are allowed by gRPC. func withBackoff(bs backoff.Strategy) DialOption { return newFuncDialOption(func(o *dialOptions) { o.bs = bs }) } // WithBlock returns a DialOption which makes caller of Dial blocks until the // underlying connection is up. Without this, Dial returns immediately and // connecting the server happens in background. func WithBlock() DialOption { return newFuncDialOption(func(o *dialOptions) { o.block = true }) } // WithInsecure returns a DialOption which disables transport security for this // ClientConn. Note that transport security is required unless WithInsecure is // set. 
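//
// A minimal sketch of a plaintext dial for local development (the address is
// illustrative):
//
//	conn, err := grpc.Dial("localhost:50051", grpc.WithInsecure(), grpc.WithBlock())
//	if err != nil {
//		log.Fatalf("did not connect: %v", err)
//	}
//	defer conn.Close()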
func WithInsecure() DialOption { return newFuncDialOption(func(o *dialOptions) { o.insecure = true }) } // WithTransportCredentials returns a DialOption which configures a connection // level security credentials (e.g., TLS/SSL). This should not be used together // with WithCredentialsBundle. func WithTransportCredentials(creds credentials.TransportCredentials) DialOption { return newFuncDialOption(func(o *dialOptions) { o.copts.TransportCredentials = creds }) } // WithPerRPCCredentials returns a DialOption which sets credentials and places // auth state on each outbound RPC. func WithPerRPCCredentials(creds credentials.PerRPCCredentials) DialOption { return newFuncDialOption(func(o *dialOptions) { o.copts.PerRPCCredentials = append(o.copts.PerRPCCredentials, creds) }) } // WithCredentialsBundle returns a DialOption to set a credentials bundle for // the ClientConn.WithCreds. This should not be used together with // WithTransportCredentials. // // This API is experimental. func WithCredentialsBundle(b credentials.Bundle) DialOption { return newFuncDialOption(func(o *dialOptions) { o.copts.CredsBundle = b }) } // WithTimeout returns a DialOption that configures a timeout for dialing a // ClientConn initially. This is valid if and only if WithBlock() is present. // // Deprecated: use DialContext and context.WithTimeout instead. func WithTimeout(d time.Duration) DialOption { return newFuncDialOption(func(o *dialOptions) { o.timeout = d }) } // WithContextDialer returns a DialOption that sets a dialer to create // connections. If FailOnNonTempDialError() is set to true, and an error is // returned by f, gRPC checks the error's Temporary() method to decide if it // should try to reconnect to the network address. func WithContextDialer(f func(context.Context, string) (net.Conn, error)) DialOption { return newFuncDialOption(func(o *dialOptions) { o.copts.Dialer = f }) } func init() { internal.WithResolverBuilder = withResolverBuilder internal.WithHealthCheckFunc = withHealthCheckFunc } // WithDialer returns a DialOption that specifies a function to use for dialing // network addresses. If FailOnNonTempDialError() is set to true, and an error // is returned by f, gRPC checks the error's Temporary() method to decide if it // should try to reconnect to the network address. // // Deprecated: use WithContextDialer instead func WithDialer(f func(string, time.Duration) (net.Conn, error)) DialOption { return WithContextDialer( func(ctx context.Context, addr string) (net.Conn, error) { if deadline, ok := ctx.Deadline(); ok { return f(addr, time.Until(deadline)) } return f(addr, 0) }) } // WithStatsHandler returns a DialOption that specifies the stats handler for // all the RPCs and underlying network connections in this ClientConn. func WithStatsHandler(h stats.Handler) DialOption { return newFuncDialOption(func(o *dialOptions) { o.copts.StatsHandler = h }) } // FailOnNonTempDialError returns a DialOption that specifies if gRPC fails on // non-temporary dial errors. If f is true, and dialer returns a non-temporary // error, gRPC will fail the connection to the network address and won't try to // reconnect. The default value of FailOnNonTempDialError is false. // // FailOnNonTempDialError only affects the initial dial, and does not do // anything useful unless you are also using WithBlock(). // // This is an EXPERIMENTAL API. 
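//
// A sketch of combining this option with WithBlock so that a permanent dial
// error is reported immediately instead of being retried (the address is
// illustrative):
//
//	conn, err := grpc.Dial(addr,
//		grpc.WithInsecure(),
//		grpc.WithBlock(),
//		grpc.FailOnNonTempDialError(true),
//	)
//	if err != nil {
//		log.Fatalf("dial failed: %v", err)
//	}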
func FailOnNonTempDialError(f bool) DialOption { return newFuncDialOption(func(o *dialOptions) { o.copts.FailOnNonTempDialError = f }) } // WithUserAgent returns a DialOption that specifies a user agent string for all // the RPCs. func WithUserAgent(s string) DialOption { return newFuncDialOption(func(o *dialOptions) { o.copts.UserAgent = s }) } // WithKeepaliveParams returns a DialOption that specifies keepalive parameters // for the client transport. func WithKeepaliveParams(kp keepalive.ClientParameters) DialOption { if kp.Time < internal.KeepaliveMinPingTime { grpclog.Warningf("Adjusting keepalive ping interval to minimum period of %v", internal.KeepaliveMinPingTime) kp.Time = internal.KeepaliveMinPingTime } return newFuncDialOption(func(o *dialOptions) { o.copts.KeepaliveParams = kp }) } // WithUnaryInterceptor returns a DialOption that specifies the interceptor for // unary RPCs. func WithUnaryInterceptor(f UnaryClientInterceptor) DialOption { return newFuncDialOption(func(o *dialOptions) { o.unaryInt = f }) } // WithChainUnaryInterceptor returns a DialOption that specifies the chained // interceptor for unary RPCs. The first interceptor will be the outer most, // while the last interceptor will be the inner most wrapper around the real call. // All interceptors added by this method will be chained, and the interceptor // defined by WithUnaryInterceptor will always be prepended to the chain. func WithChainUnaryInterceptor(interceptors ...UnaryClientInterceptor) DialOption { return newFuncDialOption(func(o *dialOptions) { o.chainUnaryInts = append(o.chainUnaryInts, interceptors...) }) } // WithStreamInterceptor returns a DialOption that specifies the interceptor for // streaming RPCs. func WithStreamInterceptor(f StreamClientInterceptor) DialOption { return newFuncDialOption(func(o *dialOptions) { o.streamInt = f }) } // WithChainStreamInterceptor returns a DialOption that specifies the chained // interceptor for unary RPCs. The first interceptor will be the outer most, // while the last interceptor will be the inner most wrapper around the real call. // All interceptors added by this method will be chained, and the interceptor // defined by WithStreamInterceptor will always be prepended to the chain. func WithChainStreamInterceptor(interceptors ...StreamClientInterceptor) DialOption { return newFuncDialOption(func(o *dialOptions) { o.chainStreamInts = append(o.chainStreamInts, interceptors...) }) } // WithAuthority returns a DialOption that specifies the value to be used as the // :authority pseudo-header. This value only works with WithInsecure and has no // effect if TransportCredentials are present. func WithAuthority(a string) DialOption { return newFuncDialOption(func(o *dialOptions) { o.authority = a }) } // WithChannelzParentID returns a DialOption that specifies the channelz ID of // current ClientConn's parent. This function is used in nested channel creation // (e.g. grpclb dial). func WithChannelzParentID(id int64) DialOption { return newFuncDialOption(func(o *dialOptions) { o.channelzParentID = id }) } // WithDisableServiceConfig returns a DialOption that causes gRPC to ignore any // service config provided by the resolver and provides a hint to the resolver // to not fetch service configs. // // Note that this dial option only disables service config from resolver. If // default service config is provided, gRPC will use the default service config. 
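//
// A sketch of ignoring resolver-provided service config while still supplying
// a default one (the JSON shown is illustrative):
//
//	conn, err := grpc.Dial(target,
//		grpc.WithInsecure(),
//		grpc.WithDisableServiceConfig(),
//		grpc.WithDefaultServiceConfig(`{"loadBalancingPolicy":"round_robin"}`),
//	)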
func WithDisableServiceConfig() DialOption { return newFuncDialOption(func(o *dialOptions) { o.disableServiceConfig = true }) } // WithDefaultServiceConfig returns a DialOption that configures the default // service config, which will be used in cases where: // 1. WithDisableServiceConfig is called. // 2. Resolver does not return service config or if the resolver gets and invalid config. // // This API is EXPERIMENTAL. func WithDefaultServiceConfig(s string) DialOption { return newFuncDialOption(func(o *dialOptions) { o.defaultServiceConfigRawJSON = &s }) } // WithDisableRetry returns a DialOption that disables retries, even if the // service config enables them. This does not impact transparent retries, which // will happen automatically if no data is written to the wire or if the RPC is // unprocessed by the remote server. // // Retry support is currently disabled by default, but will be enabled by // default in the future. Until then, it may be enabled by setting the // environment variable "GRPC_GO_RETRY" to "on". // // This API is EXPERIMENTAL. func WithDisableRetry() DialOption { return newFuncDialOption(func(o *dialOptions) { o.disableRetry = true }) } // WithMaxHeaderListSize returns a DialOption that specifies the maximum // (uncompressed) size of header list that the client is prepared to accept. func WithMaxHeaderListSize(s uint32) DialOption { return newFuncDialOption(func(o *dialOptions) { o.copts.MaxHeaderListSize = &s }) } // WithDisableHealthCheck disables the LB channel health checking for all // SubConns of this ClientConn. // // This API is EXPERIMENTAL. func WithDisableHealthCheck() DialOption { return newFuncDialOption(func(o *dialOptions) { o.disableHealthCheck = true }) } // withHealthCheckFunc replaces the default health check function with the // provided one. It makes tests easier to change the health check function. // // For testing purpose only. func withHealthCheckFunc(f internal.HealthChecker) DialOption { return newFuncDialOption(func(o *dialOptions) { o.healthCheckFunc = f }) } func defaultDialOptions() dialOptions { return dialOptions{ disableRetry: !envconfig.Retry, reqHandshake: envconfig.RequireHandshake, healthCheckFunc: internal.HealthCheckFunc, copts: transport.ConnectOptions{ WriteBufferSize: defaultWriteBufSize, ReadBufferSize: defaultReadBufSize, }, } } // withGetMinConnectDeadline specifies the function that clientconn uses to // get minConnectDeadline. This can be used to make connection attempts happen // faster/slower. // // For testing purpose only. func withMinConnectDeadline(f func() time.Duration) DialOption { return newFuncDialOption(func(o *dialOptions) { o.minConnectTimeout = f }) } grpc-go-1.22.1/doc.go000066400000000000000000000013631351635773100142450ustar00rootroot00000000000000/* * * Copyright 2015 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ /* Package grpc implements an RPC system called gRPC. See grpc.io for more information about gRPC. 
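
A minimal client sketch (the address, the generated pb stubs and the request
type are illustrative, not part of this package):

	conn, err := grpc.Dial("localhost:50051", grpc.WithInsecure())
	if err != nil {
		log.Fatalf("did not connect: %v", err)
	}
	defer conn.Close()
	client := pb.NewGreeterClient(conn)
	reply, err := client.SayHello(context.Background(), &pb.HelloRequest{Name: "world"})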
*/ package grpc // import "google.golang.org/grpc" grpc-go-1.22.1/encoding/000077500000000000000000000000001351635773100147345ustar00rootroot00000000000000grpc-go-1.22.1/encoding/encoding.go000066400000000000000000000107221351635773100170530ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package encoding defines the interface for the compressor and codec, and // functions to register and retrieve compressors and codecs. // // This package is EXPERIMENTAL. package encoding import ( "io" "strings" ) // Identity specifies the optional encoding for uncompressed streams. // It is intended for grpc internal use only. const Identity = "identity" // Compressor is used for compressing and decompressing when sending or // receiving messages. type Compressor interface { // Compress writes the data written to wc to w after compressing it. If an // error occurs while initializing the compressor, that error is returned // instead. Compress(w io.Writer) (io.WriteCloser, error) // Decompress reads data from r, decompresses it, and provides the // uncompressed data via the returned io.Reader. If an error occurs while // initializing the decompressor, that error is returned instead. Decompress(r io.Reader) (io.Reader, error) // Name is the name of the compression codec and is used to set the content // coding header. The result must be static; the result cannot change // between calls. Name() string } var registeredCompressor = make(map[string]Compressor) // RegisterCompressor registers the compressor with gRPC by its name. It can // be activated when sending an RPC via grpc.UseCompressor(). It will be // automatically accessed when receiving a message based on the content coding // header. Servers also use it to send a response with the same encoding as // the request. // // NOTE: this function must only be called during initialization time (i.e. in // an init() function), and is not thread-safe. If multiple Compressors are // registered with the same name, the one registered last will take effect. func RegisterCompressor(c Compressor) { registeredCompressor[c.Name()] = c } // GetCompressor returns Compressor for the given compressor name. func GetCompressor(name string) Compressor { return registeredCompressor[name] } // Codec defines the interface gRPC uses to encode and decode messages. Note // that implementations of this interface must be thread safe; a Codec's // methods can be called from concurrent goroutines. type Codec interface { // Marshal returns the wire format of v. Marshal(v interface{}) ([]byte, error) // Unmarshal parses the wire format into v. Unmarshal(data []byte, v interface{}) error // Name returns the name of the Codec implementation. The returned string // will be used as part of content type in transmission. The result must be // static; the result cannot change between calls. 
Name() string } var registeredCodecs = make(map[string]Codec) // RegisterCodec registers the provided Codec for use with all gRPC clients and // servers. // // The Codec will be stored and looked up by result of its Name() method, which // should match the content-subtype of the encoding handled by the Codec. This // is case-insensitive, and is stored and looked up as lowercase. If the // result of calling Name() is an empty string, RegisterCodec will panic. See // Content-Type on // https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md#requests for // more details. // // NOTE: this function must only be called during initialization time (i.e. in // an init() function), and is not thread-safe. If multiple Compressors are // registered with the same name, the one registered last will take effect. func RegisterCodec(codec Codec) { if codec == nil { panic("cannot register a nil Codec") } if codec.Name() == "" { panic("cannot register Codec with empty string result for Name()") } contentSubtype := strings.ToLower(codec.Name()) registeredCodecs[contentSubtype] = codec } // GetCodec gets a registered Codec by content-subtype, or nil if no Codec is // registered for the content-subtype. // // The content-subtype is expected to be lowercase. func GetCodec(contentSubtype string) Codec { return registeredCodecs[contentSubtype] } grpc-go-1.22.1/encoding/gzip/000077500000000000000000000000001351635773100157055ustar00rootroot00000000000000grpc-go-1.22.1/encoding/gzip/gzip.go000066400000000000000000000054501351635773100172110ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package gzip implements and registers the gzip compressor // during the initialization. // This package is EXPERIMENTAL. package gzip import ( "compress/gzip" "fmt" "io" "io/ioutil" "sync" "google.golang.org/grpc/encoding" ) // Name is the name registered for the gzip compressor. const Name = "gzip" func init() { c := &compressor{} c.poolCompressor.New = func() interface{} { return &writer{Writer: gzip.NewWriter(ioutil.Discard), pool: &c.poolCompressor} } encoding.RegisterCompressor(c) } type writer struct { *gzip.Writer pool *sync.Pool } // SetLevel updates the registered gzip compressor to use the compression level specified (gzip.HuffmanOnly is not supported). // NOTE: this function must only be called during initialization time (i.e. in an init() function), // and is not thread-safe. // // The error returned will be nil if the specified level is valid. 
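//
// A sketch of raising the compression level during program initialization
// (the import aliases are illustrative; compress/gzip provides the level
// constants):
//
//	import (
//		stdgzip "compress/gzip"
//
//		grpcgzip "google.golang.org/grpc/encoding/gzip"
//	)
//
//	func init() {
//		if err := grpcgzip.SetLevel(stdgzip.BestCompression); err != nil {
//			panic(err)
//		}
//	}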
func SetLevel(level int) error { if level < gzip.DefaultCompression || level > gzip.BestCompression { return fmt.Errorf("grpc: invalid gzip compression level: %d", level) } c := encoding.GetCompressor(Name).(*compressor) c.poolCompressor.New = func() interface{} { w, err := gzip.NewWriterLevel(ioutil.Discard, level) if err != nil { panic(err) } return &writer{Writer: w, pool: &c.poolCompressor} } return nil } func (c *compressor) Compress(w io.Writer) (io.WriteCloser, error) { z := c.poolCompressor.Get().(*writer) z.Writer.Reset(w) return z, nil } func (z *writer) Close() error { defer z.pool.Put(z) return z.Writer.Close() } type reader struct { *gzip.Reader pool *sync.Pool } func (c *compressor) Decompress(r io.Reader) (io.Reader, error) { z, inPool := c.poolDecompressor.Get().(*reader) if !inPool { newZ, err := gzip.NewReader(r) if err != nil { return nil, err } return &reader{Reader: newZ, pool: &c.poolDecompressor}, nil } if err := z.Reset(r); err != nil { c.poolDecompressor.Put(z) return nil, err } return z, nil } func (z *reader) Read(p []byte) (n int, err error) { n, err = z.Reader.Read(p) if err == io.EOF { z.pool.Put(z) } return n, err } func (c *compressor) Name() string { return Name } type compressor struct { poolCompressor sync.Pool poolDecompressor sync.Pool } grpc-go-1.22.1/encoding/proto/000077500000000000000000000000001351635773100160775ustar00rootroot00000000000000grpc-go-1.22.1/encoding/proto/proto.go000066400000000000000000000047661351635773100176060ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package proto defines the protobuf codec. Importing this package will // register the codec. package proto import ( "math" "sync" "github.com/golang/protobuf/proto" "google.golang.org/grpc/encoding" ) // Name is the name registered for the proto compressor. const Name = "proto" func init() { encoding.RegisterCodec(codec{}) } // codec is a Codec implementation with protobuf. It is the default codec for gRPC. 
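//
// Because this codec registers itself under the name "proto", callers can
// retrieve it through the encoding registry (a sketch; msg is any
// proto.Message value):
//
//	enc := encoding.GetCodec("proto")
//	data, err := enc.Marshal(msg)
//	if err != nil {
//		log.Fatalf("marshal failed: %v", err)
//	}
//	if err := enc.Unmarshal(data, msg); err != nil {
//		log.Fatalf("unmarshal failed: %v", err)
//	}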
type codec struct{} type cachedProtoBuffer struct { lastMarshaledSize uint32 proto.Buffer } func capToMaxInt32(val int) uint32 { if val > math.MaxInt32 { return uint32(math.MaxInt32) } return uint32(val) } func marshal(v interface{}, cb *cachedProtoBuffer) ([]byte, error) { protoMsg := v.(proto.Message) newSlice := make([]byte, 0, cb.lastMarshaledSize) cb.SetBuf(newSlice) cb.Reset() if err := cb.Marshal(protoMsg); err != nil { return nil, err } out := cb.Bytes() cb.lastMarshaledSize = capToMaxInt32(len(out)) return out, nil } func (codec) Marshal(v interface{}) ([]byte, error) { if pm, ok := v.(proto.Marshaler); ok { // object can marshal itself, no need for buffer return pm.Marshal() } cb := protoBufferPool.Get().(*cachedProtoBuffer) out, err := marshal(v, cb) // put back buffer and lose the ref to the slice cb.SetBuf(nil) protoBufferPool.Put(cb) return out, err } func (codec) Unmarshal(data []byte, v interface{}) error { protoMsg := v.(proto.Message) protoMsg.Reset() if pu, ok := protoMsg.(proto.Unmarshaler); ok { // object can unmarshal itself, no need for buffer return pu.Unmarshal(data) } cb := protoBufferPool.Get().(*cachedProtoBuffer) cb.SetBuf(data) err := cb.Unmarshal(protoMsg) cb.SetBuf(nil) protoBufferPool.Put(cb) return err } func (codec) Name() string { return Name } var protoBufferPool = &sync.Pool{ New: func() interface{} { return &cachedProtoBuffer{ Buffer: proto.Buffer{}, lastMarshaledSize: 16, } }, } grpc-go-1.22.1/encoding/proto/proto_benchmark_test.go000066400000000000000000000054501351635773100226460ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package proto import ( "fmt" "testing" "github.com/golang/protobuf/proto" "google.golang.org/grpc/encoding" "google.golang.org/grpc/test/codec_perf" ) func setupBenchmarkProtoCodecInputs(payloadBaseSize uint32) []proto.Message { payloadBase := make([]byte, payloadBaseSize) // arbitrary byte slices payloadSuffixes := [][]byte{ []byte("one"), []byte("two"), []byte("three"), []byte("four"), []byte("five"), } protoStructs := make([]proto.Message, 0) for _, p := range payloadSuffixes { ps := &codec_perf.Buffer{} ps.Body = append(payloadBase, p...) protoStructs = append(protoStructs, ps) } return protoStructs } // The possible use of certain protobuf APIs like the proto.Buffer API potentially involves caching // on our side. This can add checks around memory allocations and possible contention. // Example run: go test -v -run=^$ -bench=BenchmarkProtoCodec -benchmem func BenchmarkProtoCodec(b *testing.B) { // range of message sizes payloadBaseSizes := make([]uint32, 0) for i := uint32(0); i <= 12; i += 4 { payloadBaseSizes = append(payloadBaseSizes, 1< + " " + . Users can easily get the token by parsing the string, and then verify the validity of it. If the token is not valid, returns an error with error code `codes.Unauthenticated`. If the token is valid, then invoke the method handler to start processing the RPC. 
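A sketch of the validation step described above (the expected token value is illustrative; the complete code lives in `server/main.go`):

```
func valid(authorization []string) bool {
	if len(authorization) < 1 {
		return false
	}
	token := strings.TrimPrefix(authorization[0], "Bearer ")
	// A real service would verify the token's signature and claims here.
	return token == "some-secret-token"
}
```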
grpc-go-1.22.1/examples/features/authentication/client/000077500000000000000000000000001351635773100230775ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/authentication/client/main.go000066400000000000000000000050211351635773100243500ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // The client demonstrates how to supply an OAuth2 token for every RPC. package main import ( "context" "flag" "fmt" "log" "time" "golang.org/x/oauth2" "google.golang.org/grpc" "google.golang.org/grpc/credentials" "google.golang.org/grpc/credentials/oauth" ecpb "google.golang.org/grpc/examples/features/proto/echo" "google.golang.org/grpc/testdata" ) var addr = flag.String("addr", "localhost:50051", "the address to connect to") func callUnaryEcho(client ecpb.EchoClient, message string) { ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() resp, err := client.UnaryEcho(ctx, &ecpb.EchoRequest{Message: message}) if err != nil { log.Fatalf("client.UnaryEcho(_) = _, %v: ", err) } fmt.Println("UnaryEcho: ", resp.Message) } func main() { flag.Parse() // Set up the credentials for the connection. perRPC := oauth.NewOauthAccess(fetchToken()) creds, err := credentials.NewClientTLSFromFile(testdata.Path("ca.pem"), "x.test.youtube.com") if err != nil { log.Fatalf("failed to load credentials: %v", err) } opts := []grpc.DialOption{ // In addition to the following grpc.DialOption, callers may also use // the grpc.CallOption grpc.PerRPCCredentials with the RPC invocation // itself. // See: https://godoc.org/google.golang.org/grpc#PerRPCCredentials grpc.WithPerRPCCredentials(perRPC), // oauth.NewOauthAccess requires the configuration of transport // credentials. grpc.WithTransportCredentials(creds), } conn, err := grpc.Dial(*addr, opts...) if err != nil { log.Fatalf("did not connect: %v", err) } defer conn.Close() rgc := ecpb.NewEchoClient(conn) callUnaryEcho(rgc, "hello world") } // fetchToken simulates a token lookup and omits the details of proper token // acquisition. For examples of how to acquire an OAuth2 token, see: // https://godoc.org/golang.org/x/oauth2 func fetchToken() *oauth2.Token { return &oauth2.Token{ AccessToken: "some-secret-token", } } grpc-go-1.22.1/examples/features/authentication/server/000077500000000000000000000000001351635773100231275ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/authentication/server/main.go000066400000000000000000000076751351635773100244210ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
* See the License for the specific language governing permissions and * limitations under the License. * */ // The server demonstrates how to consume and validate OAuth2 tokens provided by // clients for each RPC. package main import ( "context" "crypto/tls" "flag" "fmt" "log" "net" "strings" "google.golang.org/grpc" "google.golang.org/grpc/codes" "google.golang.org/grpc/credentials" ecpb "google.golang.org/grpc/examples/features/proto/echo" "google.golang.org/grpc/metadata" "google.golang.org/grpc/status" "google.golang.org/grpc/testdata" ) var ( errMissingMetadata = status.Errorf(codes.InvalidArgument, "missing metadata") errInvalidToken = status.Errorf(codes.Unauthenticated, "invalid token") ) var port = flag.Int("port", 50051, "the port to serve on") func main() { flag.Parse() fmt.Printf("server starting on port %d...\n", *port) cert, err := tls.LoadX509KeyPair(testdata.Path("server1.pem"), testdata.Path("server1.key")) if err != nil { log.Fatalf("failed to load key pair: %s", err) } opts := []grpc.ServerOption{ // The following grpc.ServerOption adds an interceptor for all unary // RPCs. To configure an interceptor for streaming RPCs, see: // https://godoc.org/google.golang.org/grpc#StreamInterceptor grpc.UnaryInterceptor(ensureValidToken), // Enable TLS for all incoming connections. grpc.Creds(credentials.NewServerTLSFromCert(&cert)), } s := grpc.NewServer(opts...) ecpb.RegisterEchoServer(s, &ecServer{}) lis, err := net.Listen("tcp", fmt.Sprintf(":%d", *port)) if err != nil { log.Fatalf("failed to listen: %v", err) } if err := s.Serve(lis); err != nil { log.Fatalf("failed to serve: %v", err) } } type ecServer struct{} func (s *ecServer) UnaryEcho(ctx context.Context, req *ecpb.EchoRequest) (*ecpb.EchoResponse, error) { return &ecpb.EchoResponse{Message: req.Message}, nil } func (s *ecServer) ServerStreamingEcho(*ecpb.EchoRequest, ecpb.Echo_ServerStreamingEchoServer) error { return status.Errorf(codes.Unimplemented, "not implemented") } func (s *ecServer) ClientStreamingEcho(ecpb.Echo_ClientStreamingEchoServer) error { return status.Errorf(codes.Unimplemented, "not implemented") } func (s *ecServer) BidirectionalStreamingEcho(ecpb.Echo_BidirectionalStreamingEchoServer) error { return status.Errorf(codes.Unimplemented, "not implemented") } // valid validates the authorization. func valid(authorization []string) bool { if len(authorization) < 1 { return false } token := strings.TrimPrefix(authorization[0], "Bearer ") // Perform the token validation here. For the sake of this example, the code // here forgoes any of the usual OAuth2 token validation and instead checks // for a token matching an arbitrary string. return token == "some-secret-token" } // ensureValidToken ensures a valid token exists within a request's metadata. If // the token is missing or invalid, the interceptor blocks execution of the // handler and returns an error. Otherwise, the interceptor invokes the unary // handler. func ensureValidToken(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) { md, ok := metadata.FromIncomingContext(ctx) if !ok { return nil, errMissingMetadata } // The keys within metadata.MD are normalized to lowercase. // See: https://godoc.org/google.golang.org/grpc/metadata#New if !valid(md["authorization"]) { return nil, errInvalidToken } // Continue execution of handler after ensuring a valid token. 
return handler(ctx, req) } grpc-go-1.22.1/examples/features/cancellation/000077500000000000000000000000001351635773100212365ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/cancellation/README.md000066400000000000000000000004431351635773100225160ustar00rootroot00000000000000# Cancellation This example shows how clients can cancel in-flight RPCs by canceling the context passed to the RPC call. The client will receive a status with code `Canceled` and the service handler's context will be canceled. ``` go run server/main.go ``` ``` go run client/main.go ``` grpc-go-1.22.1/examples/features/cancellation/client/000077500000000000000000000000001351635773100225145ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/cancellation/client/main.go000066400000000000000000000051221351635773100237670ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Binary client is an example client. package main import ( "context" "flag" "fmt" "log" "time" "google.golang.org/grpc" "google.golang.org/grpc/codes" pb "google.golang.org/grpc/examples/features/proto/echo" "google.golang.org/grpc/status" ) var addr = flag.String("addr", "localhost:50051", "the address to connect to") func sendMessage(stream pb.Echo_BidirectionalStreamingEchoClient, msg string) error { fmt.Printf("sending message %q\n", msg) return stream.Send(&pb.EchoRequest{Message: msg}) } func recvMessage(stream pb.Echo_BidirectionalStreamingEchoClient, wantErrCode codes.Code) { res, err := stream.Recv() if status.Code(err) != wantErrCode { log.Fatalf("stream.Recv() = %v, %v; want _, status.Code(err)=%v", res, err, wantErrCode) } if err != nil { fmt.Printf("stream.Recv() returned expected error %v\n", err) return } fmt.Printf("received message %q\n", res.GetMessage()) } func main() { flag.Parse() // Set up a connection to the server. conn, err := grpc.Dial(*addr, grpc.WithInsecure()) if err != nil { log.Fatalf("did not connect: %v", err) } defer conn.Close() c := pb.NewEchoClient(conn) // Initiate the stream with a context that supports cancellation. ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) stream, err := c.BidirectionalStreamingEcho(ctx) if err != nil { log.Fatalf("error creating stream: %v", err) } // Send some test messages. if err := sendMessage(stream, "hello"); err != nil { log.Fatalf("error sending on stream: %v", err) } if err := sendMessage(stream, "world"); err != nil { log.Fatalf("error sending on stream: %v", err) } // Ensure the RPC is working. recvMessage(stream, codes.OK) recvMessage(stream, codes.OK) fmt.Println("cancelling context") cancel() // This Send may or may not return an error, depending on whether the // monitored context detects cancellation before the call is made. sendMessage(stream, "closed") // This Recv should never succeed. 
recvMessage(stream, codes.Canceled) } grpc-go-1.22.1/examples/features/cancellation/server/000077500000000000000000000000001351635773100225445ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/cancellation/server/main.go000066400000000000000000000040651351635773100240240ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Binary server is an example server. package main import ( "context" "flag" "fmt" "io" "log" "net" "google.golang.org/grpc" "google.golang.org/grpc/codes" pb "google.golang.org/grpc/examples/features/proto/echo" "google.golang.org/grpc/status" ) var port = flag.Int("port", 50051, "the port to serve on") type server struct{} func (s *server) UnaryEcho(ctx context.Context, in *pb.EchoRequest) (*pb.EchoResponse, error) { return nil, status.Error(codes.Unimplemented, "not implemented") } func (s *server) ServerStreamingEcho(in *pb.EchoRequest, stream pb.Echo_ServerStreamingEchoServer) error { return status.Error(codes.Unimplemented, "not implemented") } func (s *server) ClientStreamingEcho(stream pb.Echo_ClientStreamingEchoServer) error { return status.Error(codes.Unimplemented, "not implemented") } func (s *server) BidirectionalStreamingEcho(stream pb.Echo_BidirectionalStreamingEchoServer) error { for { in, err := stream.Recv() if err != nil { fmt.Printf("server: error receiving from stream: %v\n", err) if err == io.EOF { return nil } return err } fmt.Printf("echoing message %q\n", in.Message) stream.Send(&pb.EchoResponse{Message: in.Message}) } } func main() { flag.Parse() lis, err := net.Listen("tcp", fmt.Sprintf(":%d", *port)) if err != nil { log.Fatalf("failed to listen: %v", err) } fmt.Printf("server listening at port %v\n", lis.Addr()) s := grpc.NewServer() pb.RegisterEchoServer(s, &server{}) s.Serve(lis) } grpc-go-1.22.1/examples/features/compression/000077500000000000000000000000001351635773100211435ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/compression/README.md000066400000000000000000000005031351635773100224200ustar00rootroot00000000000000# Compression This example shows how clients can specify compression options when performing RPCs, and how to install support for compressors on the server. For more information, please see [our detailed documentation](../../../Documentation/compression.md). ``` go run server/main.go ``` ``` go run client/main.go ``` grpc-go-1.22.1/examples/features/compression/client/000077500000000000000000000000001351635773100224215ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/compression/client/main.go000066400000000000000000000033421351635773100236760ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Binary client is an example client. package main import ( "context" "flag" "fmt" "log" "time" "google.golang.org/grpc" "google.golang.org/grpc/encoding/gzip" // Install the gzip compressor pb "google.golang.org/grpc/examples/features/proto/echo" ) var addr = flag.String("addr", "localhost:50051", "the address to connect to") func main() { flag.Parse() // Set up a connection to the server. conn, err := grpc.Dial(*addr, grpc.WithInsecure()) if err != nil { log.Fatalf("did not connect: %v", err) } defer conn.Close() c := pb.NewEchoClient(conn) // Send the RPC compressed. If all RPCs on a client should be sent this // way, use the DialOption: // grpc.WithDefaultCallOptions(grpc.UseCompressor(gzip.Name)) const msg = "compress" ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() res, err := c.UnaryEcho(ctx, &pb.EchoRequest{Message: msg}, grpc.UseCompressor(gzip.Name)) fmt.Printf("UnaryEcho call returned %q, %v\n", res.GetMessage(), err) if err != nil || res.GetMessage() != msg { log.Fatalf("Message=%q, err=%v; want Message=%q, err=", res.GetMessage(), err, msg) } } grpc-go-1.22.1/examples/features/compression/server/000077500000000000000000000000001351635773100224515ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/compression/server/main.go000066400000000000000000000037151351635773100237320ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Binary server is an example server. 
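//
// No per-RPC option is needed on the server side: the blank import of the
// gzip package below registers the compressor, and the server automatically
// decompresses requests and replies using the same encoding the client used.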
package main import ( "context" "flag" "fmt" "log" "net" "google.golang.org/grpc" "google.golang.org/grpc/codes" pb "google.golang.org/grpc/examples/features/proto/echo" "google.golang.org/grpc/status" _ "google.golang.org/grpc/encoding/gzip" // Install the gzip compressor ) var port = flag.Int("port", 50051, "the port to serve on") type server struct{} func (s *server) UnaryEcho(ctx context.Context, in *pb.EchoRequest) (*pb.EchoResponse, error) { fmt.Printf("UnaryEcho called with message %q\n", in.GetMessage()) return &pb.EchoResponse{Message: in.Message}, nil } func (s *server) ServerStreamingEcho(in *pb.EchoRequest, stream pb.Echo_ServerStreamingEchoServer) error { return status.Error(codes.Unimplemented, "not implemented") } func (s *server) ClientStreamingEcho(stream pb.Echo_ClientStreamingEchoServer) error { return status.Error(codes.Unimplemented, "not implemented") } func (s *server) BidirectionalStreamingEcho(stream pb.Echo_BidirectionalStreamingEchoServer) error { return status.Error(codes.Unimplemented, "not implemented") } func main() { flag.Parse() lis, err := net.Listen("tcp", fmt.Sprintf(":%d", *port)) if err != nil { log.Fatalf("failed to listen: %v", err) } fmt.Printf("server listening at %v\n", lis.Addr()) s := grpc.NewServer() pb.RegisterEchoServer(s, &server{}) s.Serve(lis) } grpc-go-1.22.1/examples/features/deadline/000077500000000000000000000000001351635773100203475ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/deadline/client/000077500000000000000000000000001351635773100216255ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/deadline/client/main.go000066400000000000000000000052401351635773100231010ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Binary client is an example client. package main import ( "context" "flag" "fmt" "log" "time" "google.golang.org/grpc" "google.golang.org/grpc/codes" pb "google.golang.org/grpc/examples/features/proto/echo" "google.golang.org/grpc/status" ) var addr = flag.String("addr", "localhost:50052", "the address to connect to") func unaryCall(c pb.EchoClient, requestID int, message string, want codes.Code) { // Creates a context with a one second deadline for the RPC. ctx, cancel := context.WithTimeout(context.Background(), time.Second) defer cancel() req := &pb.EchoRequest{Message: message} _, err := c.UnaryEcho(ctx, req) got := status.Code(err) fmt.Printf("[%v] wanted = %v, got = %v\n", requestID, want, got) } func streamingCall(c pb.EchoClient, requestID int, message string, want codes.Code) { // Creates a context with a one second deadline for the RPC. 
ctx, cancel := context.WithTimeout(context.Background(), time.Second) defer cancel() stream, err := c.BidirectionalStreamingEcho(ctx) if err != nil { log.Printf("Stream err: %v", err) return } err = stream.Send(&pb.EchoRequest{Message: message}) if err != nil { log.Printf("Send error: %v", err) return } _, err = stream.Recv() got := status.Code(err) fmt.Printf("[%v] wanted = %v, got = %v\n", requestID, want, got) } func main() { flag.Parse() conn, err := grpc.Dial(*addr, grpc.WithInsecure()) if err != nil { log.Fatalf("did not connect: %v", err) } defer conn.Close() c := pb.NewEchoClient(conn) // A successful request unaryCall(c, 1, "world", codes.OK) // Exceeds deadline unaryCall(c, 2, "delay", codes.DeadlineExceeded) // A successful request with propagated deadline unaryCall(c, 3, "[propagate me]world", codes.OK) // Exceeds propagated deadline unaryCall(c, 4, "[propagate me][propagate me]world", codes.DeadlineExceeded) // Receives a response from the stream successfully. streamingCall(c, 5, "[propagate me]world", codes.OK) // Exceeds propagated deadline before receiving a response streamingCall(c, 6, "[propagate me][propagate me]world", codes.DeadlineExceeded) } grpc-go-1.22.1/examples/features/deadline/server/000077500000000000000000000000001351635773100216555ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/deadline/server/main.go000066400000000000000000000062661351635773100231420ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Binary server is an example server. package main import ( "context" "flag" "fmt" "io" "log" "net" "strings" "time" "google.golang.org/grpc" "google.golang.org/grpc/codes" pb "google.golang.org/grpc/examples/features/proto/echo" "google.golang.org/grpc/status" ) var port = flag.Int("port", 50052, "port number") // server is used to implement EchoServer. 
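//
// The handlers below propagate deadlines by reusing the inbound request
// context (ctx or stream.Context()) for the outbound client call, so the
// remaining time budget automatically flows to the next hop.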
type server struct { client pb.EchoClient cc *grpc.ClientConn } func (s *server) UnaryEcho(ctx context.Context, req *pb.EchoRequest) (*pb.EchoResponse, error) { message := req.Message if strings.HasPrefix(message, "[propagate me]") { time.Sleep(800 * time.Millisecond) message = strings.TrimPrefix(message, "[propagate me]") return s.client.UnaryEcho(ctx, &pb.EchoRequest{Message: message}) } if message == "delay" { time.Sleep(1500 * time.Millisecond) } return &pb.EchoResponse{Message: req.Message}, nil } func (s *server) ServerStreamingEcho(req *pb.EchoRequest, stream pb.Echo_ServerStreamingEchoServer) error { return status.Error(codes.Unimplemented, "RPC unimplemented") } func (s *server) ClientStreamingEcho(stream pb.Echo_ClientStreamingEchoServer) error { return status.Error(codes.Unimplemented, "RPC unimplemented") } func (s *server) BidirectionalStreamingEcho(stream pb.Echo_BidirectionalStreamingEchoServer) error { for { req, err := stream.Recv() if err == io.EOF { return status.Error(codes.InvalidArgument, "request message not received") } if err != nil { return err } message := req.Message if strings.HasPrefix(message, "[propagate me]") { time.Sleep(800 * time.Millisecond) message = strings.TrimPrefix(message, "[propagate me]") res, err := s.client.UnaryEcho(stream.Context(), &pb.EchoRequest{Message: message}) if err != nil { return err } stream.Send(res) } if message == "delay" { time.Sleep(1500 * time.Millisecond) } stream.Send(&pb.EchoResponse{Message: message}) } } func (s *server) Close() { s.cc.Close() } func newEchoServer() *server { target := fmt.Sprintf("localhost:%v", *port) cc, err := grpc.Dial(target, grpc.WithInsecure()) if err != nil { log.Fatalf("did not connect: %v", err) } return &server{client: pb.NewEchoClient(cc), cc: cc} } func main() { flag.Parse() address := fmt.Sprintf(":%v", *port) lis, err := net.Listen("tcp", address) if err != nil { log.Fatalf("failed to listen: %v", err) } echoServer := newEchoServer() defer echoServer.Close() grpcServer := grpc.NewServer() pb.RegisterEchoServer(grpcServer, echoServer) if err := grpcServer.Serve(lis); err != nil { log.Fatalf("failed to serve: %v", err) } } grpc-go-1.22.1/examples/features/debugging/000077500000000000000000000000001351635773100205355ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/debugging/README.md000066400000000000000000000017671351635773100220270ustar00rootroot00000000000000# Debugging Currently, grpc provides two major tools to help user debug issues, which are logging and channelz. ## Logs gRPC has put substantial logging instruments on critical paths of gRPC to help users debug issues. The [Log Levels](https://github.com/grpc/grpc-go/blob/master/Documentation/log_levels.md) doc describes what each log level means in the gRPC context. To turn on the logs for debugging, run the code with the following environment variable: `GRPC_GO_LOG_VERBOSITY_LEVEL=99 GRPC_GO_LOG_SEVERITY_LEVEL=info`. ## Channelz We also provides a runtime debugging tool, Channelz, to help users with live debugging. See the channelz blog post here ([link](https://grpc.io/blog/a_short_introduction_to_channelz)) for details about how to use channelz service to debug live program. ## Try it The example is able to showcase how logging and channelz can help with debugging. See the channelz blog post linked above for full explanation. 
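For example, the gRPC-internal logs can be captured while the example runs by starting the server with the logging environment variables mentioned above:

```
GRPC_GO_LOG_VERBOSITY_LEVEL=99 GRPC_GO_LOG_SEVERITY_LEVEL=info go run server/main.go
```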
``` go run server/main.go ``` ``` go run client/main.go ```grpc-go-1.22.1/examples/features/debugging/client/000077500000000000000000000000001351635773100220135ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/debugging/client/main.go000066400000000000000000000050101351635773100232620ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Binary client is an example client. package main import ( "log" "net" "os" "time" "golang.org/x/net/context" "google.golang.org/grpc" "google.golang.org/grpc/channelz/service" pb "google.golang.org/grpc/examples/helloworld/helloworld" "google.golang.org/grpc/resolver" "google.golang.org/grpc/resolver/manual" ) const ( defaultName = "world" ) func main() { /***** Set up the server serving channelz service. *****/ lis, err := net.Listen("tcp", ":50052") if err != nil { log.Fatalf("failed to listen: %v", err) } defer lis.Close() s := grpc.NewServer() service.RegisterChannelzServiceToServer(s) go s.Serve(lis) defer s.Stop() /***** Initialize manual resolver and Dial *****/ r, rcleanup := manual.GenerateAndRegisterManualResolver() defer rcleanup() // Set up a connection to the server. conn, err := grpc.Dial(r.Scheme()+":///test.server", grpc.WithInsecure(), grpc.WithBalancerName("round_robin")) if err != nil { log.Fatalf("did not connect: %v", err) } defer conn.Close() // Manually provide resolved addresses for the target. r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: ":10001"}, {Addr: ":10002"}, {Addr: ":10003"}}}) c := pb.NewGreeterClient(conn) // Contact the server and print out its response. name := defaultName if len(os.Args) > 1 { name = os.Args[1] } /***** Make 100 SayHello RPCs *****/ for i := 0; i < 100; i++ { // Setting a 150ms timeout on the RPC. ctx, cancel := context.WithTimeout(context.Background(), 150*time.Millisecond) defer cancel() r, err := c.SayHello(ctx, &pb.HelloRequest{Name: name}) if err != nil { log.Printf("could not greet: %v", err) } else { log.Printf("Greeting: %s", r.Message) } } /***** Wait for user exiting the program *****/ // Unless you exit the program (e.g. CTRL+C), channelz data will be available for querying. // Users can take time to examine and learn about the info provided by channelz. select {} } grpc-go-1.22.1/examples/features/debugging/server/000077500000000000000000000000001351635773100220435ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/debugging/server/main.go000066400000000000000000000045601351635773100233230ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
* See the License for the specific language governing permissions and * limitations under the License. * */ // Binary server is an example server. package main import ( "log" "net" "time" "golang.org/x/net/context" "google.golang.org/grpc" "google.golang.org/grpc/channelz/service" pb "google.golang.org/grpc/examples/helloworld/helloworld" "google.golang.org/grpc/internal/grpcrand" ) var ( ports = []string{":10001", ":10002", ":10003"} ) // server is used to implement helloworld.GreeterServer. type server struct{} // SayHello implements helloworld.GreeterServer func (s *server) SayHello(ctx context.Context, in *pb.HelloRequest) (*pb.HelloReply, error) { return &pb.HelloReply{Message: "Hello " + in.Name}, nil } // slow server is used to simulate a server that has a variable delay in its response. type slowServer struct{} // SayHello implements helloworld.GreeterServer func (s *slowServer) SayHello(ctx context.Context, in *pb.HelloRequest) (*pb.HelloReply, error) { // Delay 100ms ~ 200ms before replying time.Sleep(time.Duration(100+grpcrand.Intn(100)) * time.Millisecond) return &pb.HelloReply{Message: "Hello " + in.Name}, nil } func main() { /***** Set up the server serving channelz service. *****/ lis, err := net.Listen("tcp", ":50051") if err != nil { log.Fatalf("failed to listen: %v", err) } defer lis.Close() s := grpc.NewServer() service.RegisterChannelzServiceToServer(s) go s.Serve(lis) defer s.Stop() /***** Start three GreeterServers(with one of them to be the slowServer). *****/ for i := 0; i < 3; i++ { lis, err := net.Listen("tcp", ports[i]) if err != nil { log.Fatalf("failed to listen: %v", err) } defer lis.Close() s := grpc.NewServer() if i == 2 { pb.RegisterGreeterServer(s, &slowServer{}) } else { pb.RegisterGreeterServer(s, &server{}) } go s.Serve(lis) } /***** Wait for user exiting the program *****/ select {} } grpc-go-1.22.1/examples/features/encryption/000077500000000000000000000000001351635773100207745ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/encryption/ALTS/000077500000000000000000000000001351635773100215375ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/encryption/ALTS/client/000077500000000000000000000000001351635773100230155ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/encryption/ALTS/client/main.go000066400000000000000000000032251351635773100242720ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Binary client is an example client. 
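// This client dials the echo server with ALTS transport credentials and issues a single UnaryEcho RPC; as noted in the encryption README, ALTS is only supported when running on Google Cloud Platform.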
package main import ( "context" "flag" "fmt" "log" "time" "google.golang.org/grpc" "google.golang.org/grpc/credentials/alts" ecpb "google.golang.org/grpc/examples/features/proto/echo" ) var addr = flag.String("addr", "localhost:50051", "the address to connect to") func callUnaryEcho(client ecpb.EchoClient, message string) { ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() resp, err := client.UnaryEcho(ctx, &ecpb.EchoRequest{Message: message}) if err != nil { log.Fatalf("client.UnaryEcho(_) = _, %v: ", err) } fmt.Println("UnaryEcho: ", resp.Message) } func main() { flag.Parse() // Create alts based credential. altsTC := alts.NewClientCreds(alts.DefaultClientOptions()) // Set up a connection to the server. conn, err := grpc.Dial(*addr, grpc.WithTransportCredentials(altsTC)) if err != nil { log.Fatalf("did not connect: %v", err) } defer conn.Close() // Make a echo client and send an RPC. rgc := ecpb.NewEchoClient(conn) callUnaryEcho(rgc, "hello world") } grpc-go-1.22.1/examples/features/encryption/ALTS/server/000077500000000000000000000000001351635773100230455ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/encryption/ALTS/server/main.go000066400000000000000000000040371351635773100243240ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Binary server is an example server. package main import ( "context" "flag" "fmt" "log" "net" "google.golang.org/grpc" "google.golang.org/grpc/codes" "google.golang.org/grpc/credentials/alts" ecpb "google.golang.org/grpc/examples/features/proto/echo" "google.golang.org/grpc/status" ) var port = flag.Int("port", 50051, "the port to serve on") type ecServer struct{} func (s *ecServer) UnaryEcho(ctx context.Context, req *ecpb.EchoRequest) (*ecpb.EchoResponse, error) { return &ecpb.EchoResponse{Message: req.Message}, nil } func (s *ecServer) ServerStreamingEcho(*ecpb.EchoRequest, ecpb.Echo_ServerStreamingEchoServer) error { return status.Errorf(codes.Unimplemented, "not implemented") } func (s *ecServer) ClientStreamingEcho(ecpb.Echo_ClientStreamingEchoServer) error { return status.Errorf(codes.Unimplemented, "not implemented") } func (s *ecServer) BidirectionalStreamingEcho(ecpb.Echo_BidirectionalStreamingEchoServer) error { return status.Errorf(codes.Unimplemented, "not implemented") } func main() { flag.Parse() lis, err := net.Listen("tcp", fmt.Sprintf(":%d", *port)) if err != nil { log.Fatalf("failed to listen: %v", err) } // Create alts based credential. altsTC := alts.NewServerCreds(alts.DefaultServerOptions()) s := grpc.NewServer(grpc.Creds(altsTC)) // Register EchoServer on the server. 
ecpb.RegisterEchoServer(s, &ecServer{}) if err := s.Serve(lis); err != nil { log.Fatalf("failed to serve: %v", err) } } grpc-go-1.22.1/examples/features/encryption/README.md000066400000000000000000000072351351635773100222620ustar00rootroot00000000000000# Encryption The example for encryption includes two individual examples for TLS and ALTS encryption mechanism respectively. ## Try it In each example's subdirectory: ``` go run server/main.go ``` ``` go run client/main.go ``` ## Explanation ### TLS TLS is a commonly used cryptographic protocol to provide end-to-end communication security. In the example, we show how to set up a server authenticated TLS connection to transmit RPC. In our `grpc/credentials` package, we provide several convenience methods to create grpc [`credentials.TransportCredentials`](https://godoc.org/google.golang.org/grpc/credentials#TransportCredentials) base on TLS. Refer to the [godoc](https://godoc.org/google.golang.org/grpc/credentials) for details. In our example, we use the public/private keys created ahead: * "server1.pem" contains the server certificate (public key). * "server1.key" contains the server private key. * "ca.pem" contains the certificate (certificate authority) that can verify the server's certificate. On server side, we provide the paths to "server1.pem" and "server1.key" to configure TLS and create the server credential using [`credentials.NewServerTLSFromFile`](https://godoc.org/google.golang.org/grpc/credentials#NewServerTLSFromFile). On client side, we provide the path to the "ca.pem" to configure TLS and create the client credential using [`credentials.NewClientTLSFromFile`](https://godoc.org/google.golang.org/grpc/credentials#NewClientTLSFromFile). Note that we override the server name with "x.test.youtube.com", as the server certificate is valid for *.test.youtube.com but not localhost. It is solely for the convenience of making an example. Once the credentials have been created at both sides, we can start the server with the just created server credential (by calling [`grpc.Creds`](https://godoc.org/google.golang.org/grpc#Creds)) and let client dial to the server with the created client credential (by calling [`grpc.WithTransportCredentials`](https://godoc.org/google.golang.org/grpc#WithTransportCredentials)) And finally we make an RPC call over the created `grpc.ClientConn` to test the secure connection based upon TLS is successfully up. ### ALTS NOTE: ALTS currently needs special early access permission on GCP. You can ask about the detailed process in https://groups.google.com/forum/#!forum/grpc-io. ALTS is the Google's Application Layer Transport Security, which supports mutual authentication and transport encryption. Note that ALTS is currently only supported on Google Cloud Platform, and therefore you can only run the example successfully in a GCP environment. In our example, we show how to initiate a secure connection that is based on ALTS. Unlike TLS, ALTS makes certificate/key management transparent to user. So it is easier to set up. On server side, first call [`alts.DefaultServerOptions`](https://godoc.org/google.golang.org/grpc/credentials/alts#DefaultServerOptions) to get the configuration for alts and then provide the configuration to [`alts.NewServerCreds`](https://godoc.org/google.golang.org/grpc/credentials/alts#NewServerCreds) to create the server credential based upon alts. 
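In code, the server-side ALTS setup amounts to just a couple of lines (a minimal sketch mirroring the example's server code above, assuming the usual `alts` and `grpc` imports; listener setup and service registration are omitted):

```go
// Create ALTS-based server credentials and attach them to the gRPC server.
altsTC := alts.NewServerCreds(alts.DefaultServerOptions())
s := grpc.NewServer(grpc.Creds(altsTC))
```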
On client side, first call [`alts.DefaultClientOptions`](https://godoc.org/google.golang.org/grpc/credentials/alts#DefaultClientOptions) to get the configuration for alts and then provide the configuration to [`alts.NewClientCreds`](https://godoc.org/google.golang.org/grpc/credentials/alts#NewClientCreds) to create the client credential based upon alts. Next, same as TLS, start the server with the server credential and let client dial to server with the client credential. Finally, make an RPC to test the secure connection based upon ALTS is successfully up.grpc-go-1.22.1/examples/features/encryption/TLS/000077500000000000000000000000001351635773100214365ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/encryption/TLS/client/000077500000000000000000000000001351635773100227145ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/encryption/TLS/client/main.go000066400000000000000000000034341351635773100241730ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Binary client is an example client. package main import ( "context" "flag" "fmt" "log" "time" "google.golang.org/grpc" "google.golang.org/grpc/credentials" ecpb "google.golang.org/grpc/examples/features/proto/echo" "google.golang.org/grpc/testdata" ) var addr = flag.String("addr", "localhost:50051", "the address to connect to") func callUnaryEcho(client ecpb.EchoClient, message string) { ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() resp, err := client.UnaryEcho(ctx, &ecpb.EchoRequest{Message: message}) if err != nil { log.Fatalf("client.UnaryEcho(_) = _, %v: ", err) } fmt.Println("UnaryEcho: ", resp.Message) } func main() { flag.Parse() // Create tls based credential. creds, err := credentials.NewClientTLSFromFile(testdata.Path("ca.pem"), "x.test.youtube.com") if err != nil { log.Fatalf("failed to load credentials: %v", err) } // Set up a connection to the server. conn, err := grpc.Dial(*addr, grpc.WithTransportCredentials(creds)) if err != nil { log.Fatalf("did not connect: %v", err) } defer conn.Close() // Make a echo client and send an RPC. rgc := ecpb.NewEchoClient(conn) callUnaryEcho(rgc, "hello world") } grpc-go-1.22.1/examples/features/encryption/TLS/server/000077500000000000000000000000001351635773100227445ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/encryption/TLS/server/main.go000066400000000000000000000042661351635773100242270ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
* See the License for the specific language governing permissions and * limitations under the License. * */ // Binary server is an example server. package main import ( "context" "flag" "fmt" "log" "net" "google.golang.org/grpc" "google.golang.org/grpc/codes" "google.golang.org/grpc/credentials" ecpb "google.golang.org/grpc/examples/features/proto/echo" "google.golang.org/grpc/status" "google.golang.org/grpc/testdata" ) var port = flag.Int("port", 50051, "the port to serve on") type ecServer struct{} func (s *ecServer) UnaryEcho(ctx context.Context, req *ecpb.EchoRequest) (*ecpb.EchoResponse, error) { return &ecpb.EchoResponse{Message: req.Message}, nil } func (s *ecServer) ServerStreamingEcho(*ecpb.EchoRequest, ecpb.Echo_ServerStreamingEchoServer) error { return status.Errorf(codes.Unimplemented, "not implemented") } func (s *ecServer) ClientStreamingEcho(ecpb.Echo_ClientStreamingEchoServer) error { return status.Errorf(codes.Unimplemented, "not implemented") } func (s *ecServer) BidirectionalStreamingEcho(ecpb.Echo_BidirectionalStreamingEchoServer) error { return status.Errorf(codes.Unimplemented, "not implemented") } func main() { flag.Parse() lis, err := net.Listen("tcp", fmt.Sprintf(":%d", *port)) if err != nil { log.Fatalf("failed to listen: %v", err) } // Create tls based credential. creds, err := credentials.NewServerTLSFromFile(testdata.Path("server1.pem"), testdata.Path("server1.key")) if err != nil { log.Fatalf("failed to create credentials: %v", err) } s := grpc.NewServer(grpc.Creds(creds)) // Register EchoServer on the server. ecpb.RegisterEchoServer(s, &ecServer{}) if err := s.Serve(lis); err != nil { log.Fatalf("failed to serve: %v", err) } } grpc-go-1.22.1/examples/features/errors/000077500000000000000000000000001351635773100201165ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/errors/README.md000066400000000000000000000007231351635773100213770ustar00rootroot00000000000000# Description This example demonstrates the use of status details in grpc errors. # Run the sample code Run the server: ```sh $ go run ./server/main.go ``` Then run the client in another terminal: ```sh $ go run ./client/main.go ``` It should succeed and print the greeting it received from the server. Then run the client again: ```sh $ go run ./client/main.go ``` This time, it should fail by printing error status details that it received from the server. grpc-go-1.22.1/examples/features/errors/client/000077500000000000000000000000001351635773100213745ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/errors/client/main.go000066400000000000000000000033131351635773100226470ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Binary client is an example client. 
package main import ( "context" "flag" "log" "os" "time" epb "google.golang.org/genproto/googleapis/rpc/errdetails" "google.golang.org/grpc" pb "google.golang.org/grpc/examples/helloworld/helloworld" "google.golang.org/grpc/status" ) var addr = flag.String("addr", "localhost:50052", "the address to connect to") func main() { flag.Parse() // Set up a connection to the server. conn, err := grpc.Dial(*addr, grpc.WithInsecure()) if err != nil { log.Fatalf("did not connect: %v", err) } defer func() { if e := conn.Close(); e != nil { log.Printf("failed to close connection: %s", e) } }() c := pb.NewGreeterClient(conn) ctx, cancel := context.WithTimeout(context.Background(), time.Second) defer cancel() r, err := c.SayHello(ctx, &pb.HelloRequest{Name: "world"}) if err != nil { s := status.Convert(err) for _, d := range s.Details() { switch info := d.(type) { case *epb.QuotaFailure: log.Printf("Quota failure: %s", info) default: log.Printf("Unexpected type: %s", info) } } os.Exit(1) } log.Printf("Greeting: %s", r.Message) } grpc-go-1.22.1/examples/features/errors/server/000077500000000000000000000000001351635773100214245ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/errors/server/main.go000066400000000000000000000041431351635773100227010ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Binary server is an example server. package main import ( "context" "flag" "fmt" "log" "net" "sync" epb "google.golang.org/genproto/googleapis/rpc/errdetails" "google.golang.org/grpc" "google.golang.org/grpc/codes" pb "google.golang.org/grpc/examples/helloworld/helloworld" "google.golang.org/grpc/status" ) var port = flag.Int("port", 50052, "port number") // server is used to implement helloworld.GreeterServer. type server struct { mu sync.Mutex count map[string]int } // SayHello implements helloworld.GreeterServer func (s *server) SayHello(ctx context.Context, in *pb.HelloRequest) (*pb.HelloReply, error) { s.mu.Lock() defer s.mu.Unlock() // Track the number of times the user has been greeted. 
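// The second and any further requests for the same name exceed the quota, so the reply is a ResourceExhausted status carrying machine-readable QuotaFailure details.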
s.count[in.Name]++ if s.count[in.Name] > 1 { st := status.New(codes.ResourceExhausted, "Request limit exceeded.") ds, err := st.WithDetails( &epb.QuotaFailure{ Violations: []*epb.QuotaFailure_Violation{{ Subject: fmt.Sprintf("name:%s", in.Name), Description: "Limit one greeting per person", }}, }, ) if err != nil { return nil, st.Err() } return nil, ds.Err() } return &pb.HelloReply{Message: "Hello " + in.Name}, nil } func main() { flag.Parse() address := fmt.Sprintf(":%v", *port) lis, err := net.Listen("tcp", address) if err != nil { log.Fatalf("failed to listen: %v", err) } s := grpc.NewServer() pb.RegisterGreeterServer(s, &server{count: make(map[string]int)}) if err := s.Serve(lis); err != nil { log.Fatalf("failed to serve: %v", err) } } grpc-go-1.22.1/examples/features/interceptor/000077500000000000000000000000001351635773100211405ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/interceptor/README.md000066400000000000000000000120171351635773100224200ustar00rootroot00000000000000# Interceptor gRPC provides simple APIs to implement and install interceptors on a per ClientConn/Server basis. Interceptors intercept the execution of each RPC call. Users can use interceptors to do logging, authentication/authorization, metrics collection, and other functionality that can be shared across RPCs. ## Try it ``` go run server/main.go ``` ``` go run client/main.go ``` ## Explanation In gRPC, interceptors can be categorized into two kinds in terms of the type of RPC calls they intercept. The first one is the **unary interceptor**, which intercepts unary RPC calls. And the other is the **stream interceptor**, which deals with streaming RPC calls. See [here](https://grpc.io/docs/guides/concepts.html#rpc-life-cycle) for an explanation of unary RPCs and streaming RPCs. The client and the server each have their own types of unary and stream interceptors. Thus, there are four different types of interceptors in gRPC in total. ### Client-side #### Unary Interceptor [`UnaryClientInterceptor`](https://godoc.org/google.golang.org/grpc#UnaryClientInterceptor) is the type for a client-side unary interceptor. It is essentially a function type with signature: `func(ctx context.Context, method string, req, reply interface{}, cc *ClientConn, invoker UnaryInvoker, opts ...CallOption) error`. An implementation of a unary interceptor can usually be divided into three parts: pre-processing, invoking the RPC method, and post-processing. For pre-processing, users can get info about the current RPC call by examining the args passed in, such as the RPC context, method string, request to be sent, and CallOptions configured. With that info, users can even modify the RPC call. For instance, in the example, we examine the list of CallOptions to see whether call credentials have been configured. If not, the interceptor configures the RPC to use oauth2 with the token "some-secret-token" as a fallback. In our example, we intentionally omit configuring the per-RPC credentials so that the fallback is used. After pre-processing is done, the user can invoke the RPC call by calling the `invoker`. Once the invoker returns the reply and error, the user can do post-processing of the RPC call. Usually, it's about dealing with the returned reply and error. In the example, we log the RPC timing and error info. To install a unary interceptor on a ClientConn, configure `Dial` with the `DialOption` [`WithUnaryInterceptor`](https://godoc.org/google.golang.org/grpc#WithUnaryInterceptor).
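As a minimal sketch (covering only the timing/logging post-processing; the full example also adds the credential fallback described above, and the usual `grpc`, `log`, and `time` imports are assumed), a client-side unary interceptor could look like this:

```go
// timingInterceptor times each unary RPC and logs the method name and error.
func timingInterceptor(ctx context.Context, method string, req, reply interface{}, cc *grpc.ClientConn, invoker grpc.UnaryInvoker, opts ...grpc.CallOption) error {
	// Pre-processing: inspect or modify ctx, req, and opts here if needed.
	start := time.Now()
	// Invoke the actual RPC.
	err := invoker(ctx, method, req, reply, cc, opts...)
	// Post-processing: report how long the call took and whether it failed.
	log.Printf("RPC: %s, duration: %s, err: %v", method, time.Since(start), err)
	return err
}
```

It would then be installed with `grpc.Dial(addr, grpc.WithUnaryInterceptor(timingInterceptor), ...)`, which is what the full client example does as well.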
#### Stream Interceptor [`StreamClientInterceptor`](https://godoc.org/google.golang.org/grpc#StreamClientInterceptor) is the type for a client-side stream interceptor. It is a function type with signature: `func(ctx context.Context, desc *StreamDesc, cc *ClientConn, method string, streamer Streamer, opts ...CallOption) (ClientStream, error)`. An implementation of a stream interceptor usually includes pre-processing and stream operation interception. Pre-processing is similar to the unary interceptor. However, rather than doing the RPC method invocation and post-processing afterwards, the stream interceptor intercepts the user's operations on the stream. First, the interceptor calls the passed-in `streamer` to get a `ClientStream`, and then wraps the `ClientStream`, overloading its methods with intercepting logic. Finally, the interceptor returns the wrapped `ClientStream` to the user to operate on. In the example, we define a new struct `wrappedStream`, which embeds a `ClientStream`. Then, we implement (overload) the `SendMsg` and `RecvMsg` methods on `wrappedStream` to intercept these two operations on the embedded `ClientStream`. In the example, we log the message type info and time info for interception purposes. To install the stream interceptor for a ClientConn, configure `Dial` with the `DialOption` [`WithStreamInterceptor`](https://godoc.org/google.golang.org/grpc#WithStreamInterceptor). ### Server-side Server-side interceptors are similar to client-side ones, though with slightly different info provided. #### Unary Interceptor [`UnaryServerInterceptor`](https://godoc.org/google.golang.org/grpc#UnaryServerInterceptor) is the type for a server-side unary interceptor. It is a function type with signature: `func(ctx context.Context, req interface{}, info *UnaryServerInfo, handler UnaryHandler) (resp interface{}, err error)`. Refer to the client-side unary interceptor section for a detailed implementation explanation. To install the unary interceptor for a Server, configure `NewServer` with the `ServerOption` [`UnaryInterceptor`](https://godoc.org/google.golang.org/grpc#UnaryInterceptor). #### Stream Interceptor [`StreamServerInterceptor`](https://godoc.org/google.golang.org/grpc#StreamServerInterceptor) is the type for a server-side stream interceptor. It is a function type with signature: `func(srv interface{}, ss ServerStream, info *StreamServerInfo, handler StreamHandler) error`. Refer to the client-side stream interceptor section for a detailed implementation explanation. To install the stream interceptor for a Server, configure `NewServer` with the `ServerOption` [`StreamInterceptor`](https://godoc.org/google.golang.org/grpc#StreamInterceptor). grpc-go-1.22.1/examples/features/interceptor/client/000077500000000000000000000000001351635773100224165ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/interceptor/client/main.go000066400000000000000000000114131351635773100236710ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
* */ // Binary client is an example client. package main import ( "context" "flag" "fmt" "io" "log" "time" "golang.org/x/oauth2" "google.golang.org/grpc" "google.golang.org/grpc/credentials" "google.golang.org/grpc/credentials/oauth" ecpb "google.golang.org/grpc/examples/features/proto/echo" "google.golang.org/grpc/testdata" ) var addr = flag.String("addr", "localhost:50051", "the address to connect to") const fallbackToken = "some-secret-token" // logger is to mock a sophisticated logging system. To simplify the example, we just print out the content. func logger(format string, a ...interface{}) { fmt.Printf("LOG:\t"+format+"\n", a...) } // unaryInterceptor is an example unary interceptor. func unaryInterceptor(ctx context.Context, method string, req, reply interface{}, cc *grpc.ClientConn, invoker grpc.UnaryInvoker, opts ...grpc.CallOption) error { var credsConfigured bool for _, o := range opts { _, ok := o.(grpc.PerRPCCredsCallOption) if ok { credsConfigured = true break } } if !credsConfigured { opts = append(opts, grpc.PerRPCCredentials(oauth.NewOauthAccess(&oauth2.Token{ AccessToken: fallbackToken, }))) } start := time.Now() err := invoker(ctx, method, req, reply, cc, opts...) end := time.Now() logger("RPC: %s, start time: %s, end time: %s, err: %v", method, start.Format("Basic"), end.Format(time.RFC3339), err) return err } // wrappedStream wraps around the embedded grpc.ClientStream, and intercepts the RecvMsg and // SendMsg method call. type wrappedStream struct { grpc.ClientStream } func (w *wrappedStream) RecvMsg(m interface{}) error { logger("Receive a message (Type: %T) at %v", m, time.Now().Format(time.RFC3339)) return w.ClientStream.RecvMsg(m) } func (w *wrappedStream) SendMsg(m interface{}) error { logger("Send a message (Type: %T) at %v", m, time.Now().Format(time.RFC3339)) return w.ClientStream.SendMsg(m) } func newWrappedStream(s grpc.ClientStream) grpc.ClientStream { return &wrappedStream{s} } // streamInterceptor is an example stream interceptor. func streamInterceptor(ctx context.Context, desc *grpc.StreamDesc, cc *grpc.ClientConn, method string, streamer grpc.Streamer, opts ...grpc.CallOption) (grpc.ClientStream, error) { var credsConfigured bool for _, o := range opts { _, ok := o.(*grpc.PerRPCCredsCallOption) if ok { credsConfigured = true } } if !credsConfigured { opts = append(opts, grpc.PerRPCCredentials(oauth.NewOauthAccess(&oauth2.Token{ AccessToken: fallbackToken, }))) } s, err := streamer(ctx, desc, cc, method, opts...) if err != nil { return nil, err } return newWrappedStream(s), nil } func callUnaryEcho(client ecpb.EchoClient, message string) { ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() resp, err := client.UnaryEcho(ctx, &ecpb.EchoRequest{Message: message}) if err != nil { log.Fatalf("client.UnaryEcho(_) = _, %v: ", err) } fmt.Println("UnaryEcho: ", resp.Message) } func callBidiStreamingEcho(client ecpb.EchoClient) { ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() c, err := client.BidirectionalStreamingEcho(ctx) if err != nil { return } for i := 0; i < 5; i++ { if err := c.Send(&ecpb.EchoRequest{Message: fmt.Sprintf("Request %d", i+1)}); err != nil { log.Fatalf("failed to send request due to error: %v", err) } } c.CloseSend() for { resp, err := c.Recv() if err == io.EOF { break } if err != nil { log.Fatalf("failed to receive response due to error: %v", err) } fmt.Println("BidiStreaming Echo: ", resp.Message) } } func main() { flag.Parse() // Create tls based credential. 
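// ca.pem can verify the test server's certificate, which is issued for *.test.youtube.com, hence the server-name override passed below.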
creds, err := credentials.NewClientTLSFromFile(testdata.Path("ca.pem"), "x.test.youtube.com") if err != nil { log.Fatalf("failed to load credentials: %v", err) } // Set up a connection to the server. conn, err := grpc.Dial(*addr, grpc.WithTransportCredentials(creds), grpc.WithUnaryInterceptor(unaryInterceptor), grpc.WithStreamInterceptor(streamInterceptor)) if err != nil { log.Fatalf("did not connect: %v", err) } defer conn.Close() // Make a echo client and send RPCs. rgc := ecpb.NewEchoClient(conn) callUnaryEcho(rgc, "hello world") callBidiStreamingEcho(rgc) } grpc-go-1.22.1/examples/features/interceptor/server/000077500000000000000000000000001351635773100224465ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/interceptor/server/main.go000066400000000000000000000115161351635773100237250ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Binary server is an example server. package main import ( "context" "flag" "fmt" "io" "log" "net" "strings" "time" "google.golang.org/grpc" "google.golang.org/grpc/codes" "google.golang.org/grpc/credentials" ecpb "google.golang.org/grpc/examples/features/proto/echo" "google.golang.org/grpc/metadata" "google.golang.org/grpc/status" "google.golang.org/grpc/testdata" ) var ( port = flag.Int("port", 50051, "the port to serve on") errMissingMetadata = status.Errorf(codes.InvalidArgument, "missing metadata") errInvalidToken = status.Errorf(codes.Unauthenticated, "invalid token") ) // logger is to mock a sophisticated logging system. To simplify the example, we just print out the content. func logger(format string, a ...interface{}) { fmt.Printf("LOG:\t"+format+"\n", a...) } type server struct{} func (s *server) UnaryEcho(ctx context.Context, in *ecpb.EchoRequest) (*ecpb.EchoResponse, error) { fmt.Printf("unary echoing message %q\n", in.Message) return &ecpb.EchoResponse{Message: in.Message}, nil } func (s *server) ServerStreamingEcho(in *ecpb.EchoRequest, stream ecpb.Echo_ServerStreamingEchoServer) error { return status.Error(codes.Unimplemented, "not implemented") } func (s *server) ClientStreamingEcho(stream ecpb.Echo_ClientStreamingEchoServer) error { return status.Error(codes.Unimplemented, "not implemented") } func (s *server) BidirectionalStreamingEcho(stream ecpb.Echo_BidirectionalStreamingEchoServer) error { for { in, err := stream.Recv() if err != nil { if err == io.EOF { return nil } fmt.Printf("server: error receiving from stream: %v\n", err) return err } fmt.Printf("bidi echoing message %q\n", in.Message) stream.Send(&ecpb.EchoResponse{Message: in.Message}) } } // valid validates the authorization. func valid(authorization []string) bool { if len(authorization) < 1 { return false } token := strings.TrimPrefix(authorization[0], "Bearer ") // Perform the token validation here. For the sake of this example, the code // here forgoes any of the usual OAuth2 token validation and instead checks // for a token matching an arbitrary string. 
return token == "some-secret-token" } func unaryInterceptor(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) { // authentication (token verification) md, ok := metadata.FromIncomingContext(ctx) if !ok { return nil, errMissingMetadata } if !valid(md["authorization"]) { return nil, errInvalidToken } m, err := handler(ctx, req) if err != nil { logger("RPC failed with error %v", err) } return m, err } // wrappedStream wraps around the embedded grpc.ServerStream, and intercepts the RecvMsg and // SendMsg method call. type wrappedStream struct { grpc.ServerStream } func (w *wrappedStream) RecvMsg(m interface{}) error { logger("Receive a message (Type: %T) at %s", m, time.Now().Format(time.RFC3339)) return w.ServerStream.RecvMsg(m) } func (w *wrappedStream) SendMsg(m interface{}) error { logger("Send a message (Type: %T) at %v", m, time.Now().Format(time.RFC3339)) return w.ServerStream.SendMsg(m) } func newWrappedStream(s grpc.ServerStream) grpc.ServerStream { return &wrappedStream{s} } func streamInterceptor(srv interface{}, ss grpc.ServerStream, info *grpc.StreamServerInfo, handler grpc.StreamHandler) error { // authentication (token verification) md, ok := metadata.FromIncomingContext(ss.Context()) if !ok { return errMissingMetadata } if !valid(md["authorization"]) { return errInvalidToken } err := handler(srv, newWrappedStream(ss)) if err != nil { logger("RPC failed with error %v", err) } return err } func main() { flag.Parse() lis, err := net.Listen("tcp", fmt.Sprintf(":%d", *port)) if err != nil { log.Fatalf("failed to listen: %v", err) } // Create tls based credential. creds, err := credentials.NewServerTLSFromFile(testdata.Path("server1.pem"), testdata.Path("server1.key")) if err != nil { log.Fatalf("failed to create credentials: %v", err) } s := grpc.NewServer(grpc.Creds(creds), grpc.UnaryInterceptor(unaryInterceptor), grpc.StreamInterceptor(streamInterceptor)) // Register EchoServer on the server. ecpb.RegisterEchoServer(s, &server{}) if err := s.Serve(lis); err != nil { log.Fatalf("failed to serve: %v", err) } } grpc-go-1.22.1/examples/features/keepalive/000077500000000000000000000000001351635773100205475ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/keepalive/README.md000066400000000000000000000005741351635773100220340ustar00rootroot00000000000000# Keepalive This example illustrates how to set up client-side keepalive pings and server-side keepalive ping enforcement and connection idleness settings. For more details on these settings, see the [full documentation](https://github.com/grpc/grpc-go/tree/master/Documentation/keepalive.md). ``` go run server/main.go ``` ``` GODEBUG=http2debug=2 go run client/main.go ``` grpc-go-1.22.1/examples/features/keepalive/client/000077500000000000000000000000001351635773100220255ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/keepalive/client/main.go000066400000000000000000000035451351635773100233070ustar00rootroot00000000000000/* * * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
* See the License for the specific language governing permissions and * limitations under the License. * */ // Binary client is an example client. package main import ( "context" "flag" "fmt" "log" "time" "google.golang.org/grpc" pb "google.golang.org/grpc/examples/features/proto/echo" "google.golang.org/grpc/keepalive" ) var addr = flag.String("addr", "localhost:50052", "the address to connect to") var kacp = keepalive.ClientParameters{ Time: 10 * time.Second, // send pings every 10 seconds if there is no activity Timeout: time.Second, // wait 1 second for ping ack before considering the connection dead PermitWithoutStream: true, // send pings even without active streams } func main() { flag.Parse() conn, err := grpc.Dial(*addr, grpc.WithInsecure(), grpc.WithKeepaliveParams(kacp)) if err != nil { log.Fatalf("did not connect: %v", err) } defer conn.Close() c := pb.NewEchoClient(conn) ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute) defer cancel() fmt.Println("Performing unary request") res, err := c.UnaryEcho(ctx, &pb.EchoRequest{Message: "keepalive demo"}) if err != nil { log.Fatalf("unexpected error from UnaryEcho: %v", err) } fmt.Println("RPC response:", res) select {} // Block forever; run with GODEBUG=http2debug=2 to observe ping frames and GOAWAYs due to idleness. } grpc-go-1.22.1/examples/features/keepalive/server/000077500000000000000000000000001351635773100220555ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/keepalive/server/main.go000066400000000000000000000055331351635773100233360ustar00rootroot00000000000000/* * * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Binary server is an example server. package main import ( "context" "flag" "fmt" "log" "net" "time" "google.golang.org/grpc" "google.golang.org/grpc/codes" pb "google.golang.org/grpc/examples/features/proto/echo" "google.golang.org/grpc/keepalive" "google.golang.org/grpc/status" ) var port = flag.Int("port", 50052, "port number") var kaep = keepalive.EnforcementPolicy{ MinTime: 5 * time.Second, // If a client pings more than once every 5 seconds, terminate the connection PermitWithoutStream: true, // Allow pings even when there are no active streams } var kasp = keepalive.ServerParameters{ MaxConnectionIdle: 15 * time.Second, // If a client is idle for 15 seconds, send a GOAWAY MaxConnectionAge: 30 * time.Second, // If any connection is alive for more than 30 seconds, send a GOAWAY MaxConnectionAgeGrace: 5 * time.Second, // Allow 5 seconds for pending RPCs to complete before forcibly closing connections Time: 5 * time.Second, // Ping the client if it is idle for 5 seconds to ensure the connection is still active Timeout: 1 * time.Second, // Wait 1 second for the ping ack before assuming the connection is dead } // server implements EchoServer. 
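// Only UnaryEcho is implemented; the streaming methods below simply return codes.Unimplemented.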
type server struct{} func (s *server) UnaryEcho(ctx context.Context, req *pb.EchoRequest) (*pb.EchoResponse, error) { return &pb.EchoResponse{Message: req.Message}, nil } func (s *server) ServerStreamingEcho(req *pb.EchoRequest, stream pb.Echo_ServerStreamingEchoServer) error { return status.Error(codes.Unimplemented, "RPC unimplemented") } func (s *server) ClientStreamingEcho(stream pb.Echo_ClientStreamingEchoServer) error { return status.Error(codes.Unimplemented, "RPC unimplemented") } func (s *server) BidirectionalStreamingEcho(stream pb.Echo_BidirectionalStreamingEchoServer) error { return status.Error(codes.Unimplemented, "RPC unimplemented") } func main() { flag.Parse() address := fmt.Sprintf(":%v", *port) lis, err := net.Listen("tcp", address) if err != nil { log.Fatalf("failed to listen: %v", err) } s := grpc.NewServer(grpc.KeepaliveEnforcementPolicy(kaep), grpc.KeepaliveParams(kasp)) pb.RegisterEchoServer(s, &server{}) if err := s.Serve(lis); err != nil { log.Fatalf("failed to serve: %v", err) } } grpc-go-1.22.1/examples/features/load_balancing/000077500000000000000000000000001351635773100215175ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/load_balancing/README.md000066400000000000000000000061111351635773100227750ustar00rootroot00000000000000# Load balancing This example shows how a `ClientConn` can pick different load balancing policies. Note: to show the effect of load balancers, an example resolver is installed in this example to get the backend addresses. It's suggested to read the name resolver example before this example. ## Try it ``` go run server/main.go ``` ``` go run client/main.go ``` ## Explanation Two echo servers are serving on ":50051" and ":50052". They will include their serving address in the response. So the server on ":50051" will reply to the RPC with `this is examples/load_balancing (from :50051)`. Two clients are created to connect to both of these servers (they get both server addresses from the name resolver). Each client picks a different load balancer (using `grpc.WithBalancerName`): `pick_first` or `round_robin`. (These two policies are supported in gRPC by default. To add a custom balancing policy, implement the interfaces defined in https://godoc.org/google.golang.org/grpc/balancer). Note that balancers can also be switched using service config, which allows service owners (instead of client owners) to pick the balancer to use. The service config doc is available at https://github.com/grpc/grpc/blob/master/doc/service_config.md. ### pick_first The first client is configured to use `pick_first`. `pick_first` tries to connect to the first address, uses it for all RPCs if it connects, or tries the next address if it fails (and keeps doing that until one connection is successful). Because of this, all the RPCs will be sent to the same backend. The responses received all show the same backend address. ``` this is examples/load_balancing (from :50051) this is examples/load_balancing (from :50051) this is examples/load_balancing (from :50051) this is examples/load_balancing (from :50051) this is examples/load_balancing (from :50051) this is examples/load_balancing (from :50051) this is examples/load_balancing (from :50051) this is examples/load_balancing (from :50051) this is examples/load_balancing (from :50051) this is examples/load_balancing (from :50051) ``` ### round_robin The second client is configured to use `round_robin`. `round_robin` connects to all the addresses it sees, and sends an RPC to each backend one at a time in order. E.g.
the first RPC will be sent to backend-1, the second RPC will be sent to backend-2, and the third RPC will be sent to backend-1 again. ``` this is examples/load_balancing (from :50051) this is examples/load_balancing (from :50051) this is examples/load_balancing (from :50052) this is examples/load_balancing (from :50051) this is examples/load_balancing (from :50052) this is examples/load_balancing (from :50051) this is examples/load_balancing (from :50052) this is examples/load_balancing (from :50051) this is examples/load_balancing (from :50052) this is examples/load_balancing (from :50051) ``` Note that it's possible to see two consecutive RPCs sent to the same backend. That's because `round_robin` only picks the connections ready for RPCs. So if one of the two connections is not ready for some reason, all RPCs will be sent to the ready connection. grpc-go-1.22.1/examples/features/load_balancing/client/000077500000000000000000000000001351635773100227755ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/load_balancing/client/main.go000066400000000000000000000066001351635773100242520ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Binary client is an example client. package main import ( "context" "fmt" "log" "time" "google.golang.org/grpc" ecpb "google.golang.org/grpc/examples/features/proto/echo" "google.golang.org/grpc/resolver" ) const ( exampleScheme = "example" exampleServiceName = "lb.example.grpc.io" ) var addrs = []string{"localhost:50051", "localhost:50052"} func callUnaryEcho(c ecpb.EchoClient, message string) { ctx, cancel := context.WithTimeout(context.Background(), time.Second) defer cancel() r, err := c.UnaryEcho(ctx, &ecpb.EchoRequest{Message: message}) if err != nil { log.Fatalf("could not greet: %v", err) } fmt.Println(r.Message) } func makeRPCs(cc *grpc.ClientConn, n int) { hwc := ecpb.NewEchoClient(cc) for i := 0; i < n; i++ { callUnaryEcho(hwc, "this is examples/load_balancing") } } func main() { pickfirstConn, err := grpc.Dial( fmt.Sprintf("%s:///%s", exampleScheme, exampleServiceName), // grpc.WithBalancerName("pick_first"), // "pick_first" is the default, so this DialOption is not necessary. grpc.WithInsecure(), ) if err != nil { log.Fatalf("did not connect: %v", err) } defer pickfirstConn.Close() fmt.Println("--- calling helloworld.Greeter/SayHello with pick_first ---") makeRPCs(pickfirstConn, 10) fmt.Println() // Make another ClientConn with round_robin policy. roundrobinConn, err := grpc.Dial( fmt.Sprintf("%s:///%s", exampleScheme, exampleServiceName), grpc.WithBalancerName("round_robin"), // This sets the initial balancing policy. grpc.WithInsecure(), ) if err != nil { log.Fatalf("did not connect: %v", err) } defer roundrobinConn.Close() fmt.Println("--- calling helloworld.Greeter/SayHello with round_robin ---") makeRPCs(roundrobinConn, 10) } // Following is an example name resolver implementation. Read the name // resolution example to learn more about it.
type exampleResolverBuilder struct{} func (*exampleResolverBuilder) Build(target resolver.Target, cc resolver.ClientConn, opts resolver.BuildOption) (resolver.Resolver, error) { r := &exampleResolver{ target: target, cc: cc, addrsStore: map[string][]string{ exampleServiceName: addrs, }, } r.start() return r, nil } func (*exampleResolverBuilder) Scheme() string { return exampleScheme } type exampleResolver struct { target resolver.Target cc resolver.ClientConn addrsStore map[string][]string } func (r *exampleResolver) start() { addrStrs := r.addrsStore[r.target.Endpoint] addrs := make([]resolver.Address, len(addrStrs)) for i, s := range addrStrs { addrs[i] = resolver.Address{Addr: s} } r.cc.UpdateState(resolver.State{Addresses: addrs}) } func (*exampleResolver) ResolveNow(o resolver.ResolveNowOption) {} func (*exampleResolver) Close() {} func init() { resolver.Register(&exampleResolverBuilder{}) } grpc-go-1.22.1/examples/features/load_balancing/server/000077500000000000000000000000001351635773100230255ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/load_balancing/server/main.go000066400000000000000000000041061351635773100243010ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Binary server is an example server. 
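// This server starts one Echo server on each address in addrs and embeds the serving address in every reply, so the client can see which backend handled each RPC.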
package main import ( "context" "fmt" "log" "net" "sync" "google.golang.org/grpc" "google.golang.org/grpc/codes" ecpb "google.golang.org/grpc/examples/features/proto/echo" "google.golang.org/grpc/status" ) var ( addrs = []string{":50051", ":50052"} ) type ecServer struct { addr string } func (s *ecServer) UnaryEcho(ctx context.Context, req *ecpb.EchoRequest) (*ecpb.EchoResponse, error) { return &ecpb.EchoResponse{Message: fmt.Sprintf("%s (from %s)", req.Message, s.addr)}, nil } func (s *ecServer) ServerStreamingEcho(*ecpb.EchoRequest, ecpb.Echo_ServerStreamingEchoServer) error { return status.Errorf(codes.Unimplemented, "not implemented") } func (s *ecServer) ClientStreamingEcho(ecpb.Echo_ClientStreamingEchoServer) error { return status.Errorf(codes.Unimplemented, "not implemented") } func (s *ecServer) BidirectionalStreamingEcho(ecpb.Echo_BidirectionalStreamingEchoServer) error { return status.Errorf(codes.Unimplemented, "not implemented") } func startServer(addr string) { lis, err := net.Listen("tcp", addr) if err != nil { log.Fatalf("failed to listen: %v", err) } s := grpc.NewServer() ecpb.RegisterEchoServer(s, &ecServer{addr: addr}) log.Printf("serving on %s\n", addr) if err := s.Serve(lis); err != nil { log.Fatalf("failed to serve: %v", err) } } func main() { var wg sync.WaitGroup for _, addr := range addrs { wg.Add(1) go func(addr string) { defer wg.Done() startServer(addr) }(addr) } wg.Wait() } grpc-go-1.22.1/examples/features/metadata/000077500000000000000000000000001351635773100203625ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/metadata/README.md000066400000000000000000000005071351635773100216430ustar00rootroot00000000000000# Metadata example This example shows how to set and read metadata in RPC headers and trailers. Please see [grpc-metadata.md](https://github.com/grpc/grpc-go/blob/master/Documentation/grpc-metadata.md) for more information. ## Start the server ``` go run server/main.go ``` ## Run the client ``` go run client/main.go ``` grpc-go-1.22.1/examples/features/metadata/client/000077500000000000000000000000001351635773100216405ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/metadata/client/main.go000066400000000000000000000202001351635773100231050ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Binary client is an example client. package main import ( "context" "flag" "fmt" "io" "log" "time" "google.golang.org/grpc" pb "google.golang.org/grpc/examples/features/proto/echo" "google.golang.org/grpc/metadata" ) var addr = flag.String("addr", "localhost:50051", "the address to connect to") const ( timestampFormat = time.StampNano // "Jan _2 15:04:05.000" streamingCount = 10 ) func unaryCallWithMetadata(c pb.EchoClient, message string) { fmt.Printf("--- unary ---\n") // Create metadata and context. md := metadata.Pairs("timestamp", time.Now().Format(timestampFormat)) ctx := metadata.NewOutgoingContext(context.Background(), md) // Make RPC using the context with the metadata. 
var header, trailer metadata.MD r, err := c.UnaryEcho(ctx, &pb.EchoRequest{Message: message}, grpc.Header(&header), grpc.Trailer(&trailer)) if err != nil { log.Fatalf("failed to call UnaryEcho: %v", err) } if t, ok := header["timestamp"]; ok { fmt.Printf("timestamp from header:\n") for i, e := range t { fmt.Printf(" %d. %s\n", i, e) } } else { log.Fatal("timestamp expected but doesn't exist in header") } if l, ok := header["location"]; ok { fmt.Printf("location from header:\n") for i, e := range l { fmt.Printf(" %d. %s\n", i, e) } } else { log.Fatal("location expected but doesn't exist in header") } fmt.Printf("response:\n") fmt.Printf(" - %s\n", r.Message) if t, ok := trailer["timestamp"]; ok { fmt.Printf("timestamp from trailer:\n") for i, e := range t { fmt.Printf(" %d. %s\n", i, e) } } else { log.Fatal("timestamp expected but doesn't exist in trailer") } } func serverStreamingWithMetadata(c pb.EchoClient, message string) { fmt.Printf("--- server streaming ---\n") // Create metadata and context. md := metadata.Pairs("timestamp", time.Now().Format(timestampFormat)) ctx := metadata.NewOutgoingContext(context.Background(), md) // Make RPC using the context with the metadata. stream, err := c.ServerStreamingEcho(ctx, &pb.EchoRequest{Message: message}) if err != nil { log.Fatalf("failed to call ServerStreamingEcho: %v", err) } // Read the header when the header arrives. header, err := stream.Header() if err != nil { log.Fatalf("failed to get header from stream: %v", err) } // Read metadata from server's header. if t, ok := header["timestamp"]; ok { fmt.Printf("timestamp from header:\n") for i, e := range t { fmt.Printf(" %d. %s\n", i, e) } } else { log.Fatal("timestamp expected but doesn't exist in header") } if l, ok := header["location"]; ok { fmt.Printf("location from header:\n") for i, e := range l { fmt.Printf(" %d. %s\n", i, e) } } else { log.Fatal("location expected but doesn't exist in header") } // Read all the responses. var rpcStatus error fmt.Printf("response:\n") for { r, err := stream.Recv() if err != nil { rpcStatus = err break } fmt.Printf(" - %s\n", r.Message) } if rpcStatus != io.EOF { log.Fatalf("failed to finish server streaming: %v", rpcStatus) } // Read the trailer after the RPC is finished. trailer := stream.Trailer() // Read metadata from server's trailer. if t, ok := trailer["timestamp"]; ok { fmt.Printf("timestamp from trailer:\n") for i, e := range t { fmt.Printf(" %d. %s\n", i, e) } } else { log.Fatal("timestamp expected but doesn't exist in trailer") } } func clientStreamWithMetadata(c pb.EchoClient, message string) { fmt.Printf("--- client streaming ---\n") // Create metadata and context. md := metadata.Pairs("timestamp", time.Now().Format(timestampFormat)) ctx := metadata.NewOutgoingContext(context.Background(), md) // Make RPC using the context with the metadata. stream, err := c.ClientStreamingEcho(ctx) if err != nil { log.Fatalf("failed to call ClientStreamingEcho: %v\n", err) } // Read the header when the header arrives. header, err := stream.Header() if err != nil { log.Fatalf("failed to get header from stream: %v", err) } // Read metadata from server's header. if t, ok := header["timestamp"]; ok { fmt.Printf("timestamp from header:\n") for i, e := range t { fmt.Printf(" %d. %s\n", i, e) } } else { log.Fatal("timestamp expected but doesn't exist in header") } if l, ok := header["location"]; ok { fmt.Printf("location from header:\n") for i, e := range l { fmt.Printf(" %d. 
%s\n", i, e) } } else { log.Fatal("location expected but doesn't exist in header") } // Send all requests to the server. for i := 0; i < streamingCount; i++ { if err := stream.Send(&pb.EchoRequest{Message: message}); err != nil { log.Fatalf("failed to send streaming: %v\n", err) } } // Read the response. r, err := stream.CloseAndRecv() if err != nil { log.Fatalf("failed to CloseAndRecv: %v\n", err) } fmt.Printf("response:\n") fmt.Printf(" - %s\n\n", r.Message) // Read the trailer after the RPC is finished. trailer := stream.Trailer() // Read metadata from server's trailer. if t, ok := trailer["timestamp"]; ok { fmt.Printf("timestamp from trailer:\n") for i, e := range t { fmt.Printf(" %d. %s\n", i, e) } } else { log.Fatal("timestamp expected but doesn't exist in trailer") } } func bidirectionalWithMetadata(c pb.EchoClient, message string) { fmt.Printf("--- bidirectional ---\n") // Create metadata and context. md := metadata.Pairs("timestamp", time.Now().Format(timestampFormat)) ctx := metadata.NewOutgoingContext(context.Background(), md) // Make RPC using the context with the metadata. stream, err := c.BidirectionalStreamingEcho(ctx) if err != nil { log.Fatalf("failed to call BidirectionalStreamingEcho: %v\n", err) } go func() { // Read the header when the header arrives. header, err := stream.Header() if err != nil { log.Fatalf("failed to get header from stream: %v", err) } // Read metadata from server's header. if t, ok := header["timestamp"]; ok { fmt.Printf("timestamp from header:\n") for i, e := range t { fmt.Printf(" %d. %s\n", i, e) } } else { log.Fatal("timestamp expected but doesn't exist in header") } if l, ok := header["location"]; ok { fmt.Printf("location from header:\n") for i, e := range l { fmt.Printf(" %d. %s\n", i, e) } } else { log.Fatal("location expected but doesn't exist in header") } // Send all requests to the server. for i := 0; i < streamingCount; i++ { if err := stream.Send(&pb.EchoRequest{Message: message}); err != nil { log.Fatalf("failed to send streaming: %v\n", err) } } stream.CloseSend() }() // Read all the responses. var rpcStatus error fmt.Printf("response:\n") for { r, err := stream.Recv() if err != nil { rpcStatus = err break } fmt.Printf(" - %s\n", r.Message) } if rpcStatus != io.EOF { log.Fatalf("failed to finish server streaming: %v", rpcStatus) } // Read the trailer after the RPC is finished. trailer := stream.Trailer() // Read metadata from server's trailer. if t, ok := trailer["timestamp"]; ok { fmt.Printf("timestamp from trailer:\n") for i, e := range t { fmt.Printf(" %d. %s\n", i, e) } } else { log.Fatal("timestamp expected but doesn't exist in trailer") } } const message = "this is examples/metadata" func main() { flag.Parse() // Set up a connection to the server. conn, err := grpc.Dial(*addr, grpc.WithInsecure()) if err != nil { log.Fatalf("did not connect: %v", err) } defer conn.Close() c := pb.NewEchoClient(conn) unaryCallWithMetadata(c, message) time.Sleep(1 * time.Second) serverStreamingWithMetadata(c, message) time.Sleep(1 * time.Second) clientStreamWithMetadata(c, message) time.Sleep(1 * time.Second) bidirectionalWithMetadata(c, message) } grpc-go-1.22.1/examples/features/metadata/server/000077500000000000000000000000001351635773100216705ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/metadata/server/main.go000066400000000000000000000131501351635773100231430ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. 
* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Binary server is an example server. package main import ( "context" "flag" "fmt" "io" "log" "math/rand" "net" "time" "google.golang.org/grpc" "google.golang.org/grpc/codes" pb "google.golang.org/grpc/examples/features/proto/echo" "google.golang.org/grpc/metadata" "google.golang.org/grpc/status" ) var port = flag.Int("port", 50051, "the port to serve on") const ( timestampFormat = time.StampNano streamingCount = 10 ) type server struct{} func (s *server) UnaryEcho(ctx context.Context, in *pb.EchoRequest) (*pb.EchoResponse, error) { fmt.Printf("--- UnaryEcho ---\n") // Create trailer in defer to record function return time. defer func() { trailer := metadata.Pairs("timestamp", time.Now().Format(timestampFormat)) grpc.SetTrailer(ctx, trailer) }() // Read metadata from client. md, ok := metadata.FromIncomingContext(ctx) if !ok { return nil, status.Errorf(codes.DataLoss, "UnaryEcho: failed to get metadata") } if t, ok := md["timestamp"]; ok { fmt.Printf("timestamp from metadata:\n") for i, e := range t { fmt.Printf(" %d. %s\n", i, e) } } // Create and send header. header := metadata.New(map[string]string{"location": "MTV", "timestamp": time.Now().Format(timestampFormat)}) grpc.SendHeader(ctx, header) fmt.Printf("request received: %v, sending echo\n", in) return &pb.EchoResponse{Message: in.Message}, nil } func (s *server) ServerStreamingEcho(in *pb.EchoRequest, stream pb.Echo_ServerStreamingEchoServer) error { fmt.Printf("--- ServerStreamingEcho ---\n") // Create trailer in defer to record function return time. defer func() { trailer := metadata.Pairs("timestamp", time.Now().Format(timestampFormat)) stream.SetTrailer(trailer) }() // Read metadata from client. md, ok := metadata.FromIncomingContext(stream.Context()) if !ok { return status.Errorf(codes.DataLoss, "ServerStreamingEcho: failed to get metadata") } if t, ok := md["timestamp"]; ok { fmt.Printf("timestamp from metadata:\n") for i, e := range t { fmt.Printf(" %d. %s\n", i, e) } } // Create and send header. header := metadata.New(map[string]string{"location": "MTV", "timestamp": time.Now().Format(timestampFormat)}) stream.SendHeader(header) fmt.Printf("request received: %v\n", in) // Read requests and send responses. for i := 0; i < streamingCount; i++ { fmt.Printf("echo message %v\n", in.Message) err := stream.Send(&pb.EchoResponse{Message: in.Message}) if err != nil { return err } } return nil } func (s *server) ClientStreamingEcho(stream pb.Echo_ClientStreamingEchoServer) error { fmt.Printf("--- ClientStreamingEcho ---\n") // Create trailer in defer to record function return time. defer func() { trailer := metadata.Pairs("timestamp", time.Now().Format(timestampFormat)) stream.SetTrailer(trailer) }() // Read metadata from client. md, ok := metadata.FromIncomingContext(stream.Context()) if !ok { return status.Errorf(codes.DataLoss, "ClientStreamingEcho: failed to get metadata") } if t, ok := md["timestamp"]; ok { fmt.Printf("timestamp from metadata:\n") for i, e := range t { fmt.Printf(" %d. 
%s\n", i, e) } } // Create and send header. header := metadata.New(map[string]string{"location": "MTV", "timestamp": time.Now().Format(timestampFormat)}) stream.SendHeader(header) // Read requests and send responses. var message string for { in, err := stream.Recv() if err == io.EOF { fmt.Printf("echo last received message\n") return stream.SendAndClose(&pb.EchoResponse{Message: message}) } message = in.Message fmt.Printf("request received: %v, building echo\n", in) if err != nil { return err } } } func (s *server) BidirectionalStreamingEcho(stream pb.Echo_BidirectionalStreamingEchoServer) error { fmt.Printf("--- BidirectionalStreamingEcho ---\n") // Create trailer in defer to record function return time. defer func() { trailer := metadata.Pairs("timestamp", time.Now().Format(timestampFormat)) stream.SetTrailer(trailer) }() // Read metadata from client. md, ok := metadata.FromIncomingContext(stream.Context()) if !ok { return status.Errorf(codes.DataLoss, "BidirectionalStreamingEcho: failed to get metadata") } if t, ok := md["timestamp"]; ok { fmt.Printf("timestamp from metadata:\n") for i, e := range t { fmt.Printf(" %d. %s\n", i, e) } } // Create and send header. header := metadata.New(map[string]string{"location": "MTV", "timestamp": time.Now().Format(timestampFormat)}) stream.SendHeader(header) // Read requests and send responses. for { in, err := stream.Recv() if err == io.EOF { return nil } if err != nil { return err } fmt.Printf("request received %v, sending echo\n", in) if err := stream.Send(&pb.EchoResponse{Message: in.Message}); err != nil { return err } } } func main() { flag.Parse() rand.Seed(time.Now().UnixNano()) lis, err := net.Listen("tcp", fmt.Sprintf(":%d", *port)) if err != nil { log.Fatalf("failed to listen: %v", err) } fmt.Printf("server listening at %v\n", lis.Addr()) s := grpc.NewServer() pb.RegisterEchoServer(s, &server{}) s.Serve(lis) } grpc-go-1.22.1/examples/features/multiplex/000077500000000000000000000000001351635773100206255ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/multiplex/README.md000066400000000000000000000003451351635773100221060ustar00rootroot00000000000000# Multiplex A `grpc.ClientConn` can be shared by two stubs and two services can share a `grpc.Server`. This example illustrates how to perform both types of sharing. ``` go run server/main.go ``` ``` go run client/main.go ``` grpc-go-1.22.1/examples/features/multiplex/client/000077500000000000000000000000001351635773100221035ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/multiplex/client/main.go000066400000000000000000000043331351635773100233610ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Binary client is an example client. 
package main import ( "context" "flag" "fmt" "log" "time" "google.golang.org/grpc" ecpb "google.golang.org/grpc/examples/features/proto/echo" hwpb "google.golang.org/grpc/examples/helloworld/helloworld" ) var addr = flag.String("addr", "localhost:50051", "the address to connect to") // callSayHello calls SayHello on c with the given name, and prints the // response. func callSayHello(c hwpb.GreeterClient, name string) { ctx, cancel := context.WithTimeout(context.Background(), time.Second) defer cancel() r, err := c.SayHello(ctx, &hwpb.HelloRequest{Name: name}) if err != nil { log.Fatalf("client.SayHello(_) = _, %v", err) } fmt.Println("Greeting: ", r.Message) } func callUnaryEcho(client ecpb.EchoClient, message string) { ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() resp, err := client.UnaryEcho(ctx, &ecpb.EchoRequest{Message: message}) if err != nil { log.Fatalf("client.UnaryEcho(_) = _, %v: ", err) } fmt.Println("UnaryEcho: ", resp.Message) } func main() { flag.Parse() // Set up a connection to the server. conn, err := grpc.Dial(*addr, grpc.WithInsecure()) if err != nil { log.Fatalf("did not connect: %v", err) } defer conn.Close() fmt.Println("--- calling helloworld.Greeter/SayHello ---") // Make a greeter client and send an RPC. hwc := hwpb.NewGreeterClient(conn) callSayHello(hwc, "multiplex") fmt.Println() fmt.Println("--- calling grpc.examples.echo.Echo/UnaryEcho ---") // Make an echo client on the same ClientConn and send an RPC. ec := ecpb.NewEchoClient(conn) callUnaryEcho(ec, "this is examples/multiplex") } grpc-go-1.22.1/examples/features/multiplex/server/000077500000000000000000000000001351635773100221335ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/multiplex/server/main.go000066400000000000000000000045601351635773100234130ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Binary server is an example server. package main import ( "context" "flag" "fmt" "log" "net" "google.golang.org/grpc" "google.golang.org/grpc/codes" ecpb "google.golang.org/grpc/examples/features/proto/echo" hwpb "google.golang.org/grpc/examples/helloworld/helloworld" "google.golang.org/grpc/status" ) var port = flag.Int("port", 50051, "the port to serve on") // hwServer is used to implement helloworld.GreeterServer. 
type hwServer struct{} // SayHello implements helloworld.GreeterServer func (s *hwServer) SayHello(ctx context.Context, in *hwpb.HelloRequest) (*hwpb.HelloReply, error) { return &hwpb.HelloReply{Message: "Hello " + in.Name}, nil } type ecServer struct{} func (s *ecServer) UnaryEcho(ctx context.Context, req *ecpb.EchoRequest) (*ecpb.EchoResponse, error) { return &ecpb.EchoResponse{Message: req.Message}, nil } func (s *ecServer) ServerStreamingEcho(*ecpb.EchoRequest, ecpb.Echo_ServerStreamingEchoServer) error { return status.Errorf(codes.Unimplemented, "not implemented") } func (s *ecServer) ClientStreamingEcho(ecpb.Echo_ClientStreamingEchoServer) error { return status.Errorf(codes.Unimplemented, "not implemented") } func (s *ecServer) BidirectionalStreamingEcho(ecpb.Echo_BidirectionalStreamingEchoServer) error { return status.Errorf(codes.Unimplemented, "not implemented") } func main() { flag.Parse() lis, err := net.Listen("tcp", fmt.Sprintf(":%d", *port)) if err != nil { log.Fatalf("failed to listen: %v", err) } fmt.Printf("server listening at %v\n", lis.Addr()) s := grpc.NewServer() // Register Greeter on the server. hwpb.RegisterGreeterServer(s, &hwServer{}) // Register Echo on the same server. ecpb.RegisterEchoServer(s, &ecServer{}) if err := s.Serve(lis); err != nil { log.Fatalf("failed to serve: %v", err) } } grpc-go-1.22.1/examples/features/name_resolving/000077500000000000000000000000001351635773100216125ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/name_resolving/README.md000066400000000000000000000025421351635773100230740ustar00rootroot00000000000000# Name resolving This example shows how a `ClientConn` can pick different name resolvers. ## What is a name resolver A name resolver can be seen as a `map[service-name][]backend-ip`. It takes a service name and returns a list of backend IPs. A commonly used name resolver is DNS. In this example, a resolver is created to resolve `resolver.example.grpc.io` to `localhost:50051`. ## Try it ``` go run server/main.go ``` ``` go run client/main.go ``` ## Explanation The echo server is serving on ":50051". Two clients are created: one dials `passthrough:///localhost:50051`, while the other dials `example:///resolver.example.grpc.io`. Both of them can connect to the server. The name resolver is picked based on the `scheme` in the target string. See https://github.com/grpc/grpc/blob/master/doc/naming.md for the target syntax. The first client picks the `passthrough` resolver, which takes the input and uses it as the backend address. The second connects to the service name `resolver.example.grpc.io`. Without a proper name resolver, this would fail. In this example it picks the `example` resolver that we registered. The `example` resolver can handle `resolver.example.grpc.io` correctly by returning the backend address. So even though the backend IP is not set when the ClientConn is created, the connection will still be created to the correct backend.grpc-go-1.22.1/examples/features/name_resolving/client/000077500000000000000000000000001351635773100230705ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/name_resolving/client/main.go000066400000000000000000000077561351635773100243570ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Binary client is an example client. package main import ( "context" "fmt" "log" "time" "google.golang.org/grpc" ecpb "google.golang.org/grpc/examples/features/proto/echo" "google.golang.org/grpc/resolver" ) const ( exampleScheme = "example" exampleServiceName = "resolver.example.grpc.io" backendAddr = "localhost:50051" ) func callUnaryEcho(c ecpb.EchoClient, message string) { ctx, cancel := context.WithTimeout(context.Background(), time.Second) defer cancel() r, err := c.UnaryEcho(ctx, &ecpb.EchoRequest{Message: message}) if err != nil { log.Fatalf("could not greet: %v", err) } fmt.Println(r.Message) } func makeRPCs(cc *grpc.ClientConn, n int) { hwc := ecpb.NewEchoClient(cc) for i := 0; i < n; i++ { callUnaryEcho(hwc, "this is examples/name_resolving") } } func main() { passthroughConn, err := grpc.Dial( fmt.Sprintf("passthrough:///%s", backendAddr), // Dial to "passthrough:///localhost:50051" grpc.WithInsecure(), ) if err != nil { log.Fatalf("did not connect: %v", err) } defer passthroughConn.Close() fmt.Printf("--- calling helloworld.Greeter/SayHello to \"passthrough:///%s\"\n", backendAddr) makeRPCs(passthroughConn, 10) fmt.Println() exampleConn, err := grpc.Dial( fmt.Sprintf("%s:///%s", exampleScheme, exampleServiceName), // Dial to "example:///resolver.example.grpc.io" grpc.WithInsecure(), ) if err != nil { log.Fatalf("did not connect: %v", err) } defer exampleConn.Close() fmt.Printf("--- calling helloworld.Greeter/SayHello to \"%s:///%s\"\n", exampleScheme, exampleServiceName) makeRPCs(exampleConn, 10) } // Following is an example name resolver. It includes a // ResolverBuilder(https://godoc.org/google.golang.org/grpc/resolver#Builder) // and a Resolver(https://godoc.org/google.golang.org/grpc/resolver#Resolver). // // A ResolverBuilder is registered for a scheme (in this example, "example" is // the scheme). When a ClientConn is created for this scheme, the // ResolverBuilder will be picked to build a Resolver. Note that a new Resolver // is built for each ClientConn. The Resolver will watch the updates for the // target, and send updates to the ClientConn. // exampleResolverBuilder is a // ResolverBuilder(https://godoc.org/google.golang.org/grpc/resolver#Builder). type exampleResolverBuilder struct{} func (*exampleResolverBuilder) Build(target resolver.Target, cc resolver.ClientConn, opts resolver.BuildOption) (resolver.Resolver, error) { r := &exampleResolver{ target: target, cc: cc, addrsStore: map[string][]string{ exampleServiceName: {backendAddr}, }, } r.start() return r, nil } func (*exampleResolverBuilder) Scheme() string { return exampleScheme } // exampleResolver is a // Resolver(https://godoc.org/google.golang.org/grpc/resolver#Resolver). 
type exampleResolver struct { target resolver.Target cc resolver.ClientConn addrsStore map[string][]string } func (r *exampleResolver) start() { addrStrs := r.addrsStore[r.target.Endpoint] addrs := make([]resolver.Address, len(addrStrs)) for i, s := range addrStrs { addrs[i] = resolver.Address{Addr: s} } r.cc.UpdateState(resolver.State{Addresses: addrs}) } func (*exampleResolver) ResolveNow(o resolver.ResolveNowOption) {} func (*exampleResolver) Close() {} func init() { // Register the example ResolverBuilder. This is usually done in a package's // init() function. resolver.Register(&exampleResolverBuilder{}) } grpc-go-1.22.1/examples/features/name_resolving/server/000077500000000000000000000000001351635773100231205ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/name_resolving/server/main.go000066400000000000000000000035621351635773100244010ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Binary server is an example server. package main import ( "context" "fmt" "log" "net" "google.golang.org/grpc" "google.golang.org/grpc/codes" ecpb "google.golang.org/grpc/examples/features/proto/echo" "google.golang.org/grpc/status" ) const addr = "localhost:50051" type ecServer struct { addr string } func (s *ecServer) UnaryEcho(ctx context.Context, req *ecpb.EchoRequest) (*ecpb.EchoResponse, error) { return &ecpb.EchoResponse{Message: fmt.Sprintf("%s (from %s)", req.Message, s.addr)}, nil } func (s *ecServer) ServerStreamingEcho(*ecpb.EchoRequest, ecpb.Echo_ServerStreamingEchoServer) error { return status.Errorf(codes.Unimplemented, "not implemented") } func (s *ecServer) ClientStreamingEcho(ecpb.Echo_ClientStreamingEchoServer) error { return status.Errorf(codes.Unimplemented, "not implemented") } func (s *ecServer) BidirectionalStreamingEcho(ecpb.Echo_BidirectionalStreamingEchoServer) error { return status.Errorf(codes.Unimplemented, "not implemented") } func main() { lis, err := net.Listen("tcp", addr) if err != nil { log.Fatalf("failed to listen: %v", err) } s := grpc.NewServer() ecpb.RegisterEchoServer(s, &ecServer{addr: addr}) log.Printf("serving on %s\n", addr) if err := s.Serve(lis); err != nil { log.Fatalf("failed to serve: %v", err) } } grpc-go-1.22.1/examples/features/proto/000077500000000000000000000000001351635773100177455ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/proto/doc.go000066400000000000000000000013641351635773100210450ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
* See the License for the specific language governing permissions and * limitations under the License. * */ //go:generate protoc -I ./echo --go_out=plugins=grpc,paths=source_relative:./echo ./echo/echo.proto // Package proto is for go generate. package proto grpc-go-1.22.1/examples/features/proto/echo/000077500000000000000000000000001351635773100206635ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/proto/echo/echo.pb.go000066400000000000000000000316741351635773100225430ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: echo.proto package echo // import "google.golang.org/grpc/examples/features/proto/echo" import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import ( context "golang.org/x/net/context" grpc "google.golang.org/grpc" ) // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package // EchoRequest is the request for echo. type EchoRequest struct { Message string `protobuf:"bytes,1,opt,name=message,proto3" json:"message,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *EchoRequest) Reset() { *m = EchoRequest{} } func (m *EchoRequest) String() string { return proto.CompactTextString(m) } func (*EchoRequest) ProtoMessage() {} func (*EchoRequest) Descriptor() ([]byte, []int) { return fileDescriptor_echo_9d6886b3223721ca, []int{0} } func (m *EchoRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_EchoRequest.Unmarshal(m, b) } func (m *EchoRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_EchoRequest.Marshal(b, m, deterministic) } func (dst *EchoRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_EchoRequest.Merge(dst, src) } func (m *EchoRequest) XXX_Size() int { return xxx_messageInfo_EchoRequest.Size(m) } func (m *EchoRequest) XXX_DiscardUnknown() { xxx_messageInfo_EchoRequest.DiscardUnknown(m) } var xxx_messageInfo_EchoRequest proto.InternalMessageInfo func (m *EchoRequest) GetMessage() string { if m != nil { return m.Message } return "" } // EchoResponse is the response for echo. 
type EchoResponse struct { Message string `protobuf:"bytes,1,opt,name=message,proto3" json:"message,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *EchoResponse) Reset() { *m = EchoResponse{} } func (m *EchoResponse) String() string { return proto.CompactTextString(m) } func (*EchoResponse) ProtoMessage() {} func (*EchoResponse) Descriptor() ([]byte, []int) { return fileDescriptor_echo_9d6886b3223721ca, []int{1} } func (m *EchoResponse) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_EchoResponse.Unmarshal(m, b) } func (m *EchoResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_EchoResponse.Marshal(b, m, deterministic) } func (dst *EchoResponse) XXX_Merge(src proto.Message) { xxx_messageInfo_EchoResponse.Merge(dst, src) } func (m *EchoResponse) XXX_Size() int { return xxx_messageInfo_EchoResponse.Size(m) } func (m *EchoResponse) XXX_DiscardUnknown() { xxx_messageInfo_EchoResponse.DiscardUnknown(m) } var xxx_messageInfo_EchoResponse proto.InternalMessageInfo func (m *EchoResponse) GetMessage() string { if m != nil { return m.Message } return "" } func init() { proto.RegisterType((*EchoRequest)(nil), "grpc.examples.echo.EchoRequest") proto.RegisterType((*EchoResponse)(nil), "grpc.examples.echo.EchoResponse") } // Reference imports to suppress errors if they are not otherwise used. var _ context.Context var _ grpc.ClientConn // This is a compile-time assertion to ensure that this generated file // is compatible with the grpc package it is being compiled against. const _ = grpc.SupportPackageIsVersion4 // EchoClient is the client API for Echo service. // // For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream. type EchoClient interface { // UnaryEcho is unary echo. UnaryEcho(ctx context.Context, in *EchoRequest, opts ...grpc.CallOption) (*EchoResponse, error) // ServerStreamingEcho is server side streaming. ServerStreamingEcho(ctx context.Context, in *EchoRequest, opts ...grpc.CallOption) (Echo_ServerStreamingEchoClient, error) // ClientStreamingEcho is client side streaming. ClientStreamingEcho(ctx context.Context, opts ...grpc.CallOption) (Echo_ClientStreamingEchoClient, error) // BidirectionalStreamingEcho is bidi streaming. BidirectionalStreamingEcho(ctx context.Context, opts ...grpc.CallOption) (Echo_BidirectionalStreamingEchoClient, error) } type echoClient struct { cc *grpc.ClientConn } func NewEchoClient(cc *grpc.ClientConn) EchoClient { return &echoClient{cc} } func (c *echoClient) UnaryEcho(ctx context.Context, in *EchoRequest, opts ...grpc.CallOption) (*EchoResponse, error) { out := new(EchoResponse) err := c.cc.Invoke(ctx, "/grpc.examples.echo.Echo/UnaryEcho", in, out, opts...) if err != nil { return nil, err } return out, nil } func (c *echoClient) ServerStreamingEcho(ctx context.Context, in *EchoRequest, opts ...grpc.CallOption) (Echo_ServerStreamingEchoClient, error) { stream, err := c.cc.NewStream(ctx, &_Echo_serviceDesc.Streams[0], "/grpc.examples.echo.Echo/ServerStreamingEcho", opts...) 
if err != nil { return nil, err } x := &echoServerStreamingEchoClient{stream} if err := x.ClientStream.SendMsg(in); err != nil { return nil, err } if err := x.ClientStream.CloseSend(); err != nil { return nil, err } return x, nil } type Echo_ServerStreamingEchoClient interface { Recv() (*EchoResponse, error) grpc.ClientStream } type echoServerStreamingEchoClient struct { grpc.ClientStream } func (x *echoServerStreamingEchoClient) Recv() (*EchoResponse, error) { m := new(EchoResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func (c *echoClient) ClientStreamingEcho(ctx context.Context, opts ...grpc.CallOption) (Echo_ClientStreamingEchoClient, error) { stream, err := c.cc.NewStream(ctx, &_Echo_serviceDesc.Streams[1], "/grpc.examples.echo.Echo/ClientStreamingEcho", opts...) if err != nil { return nil, err } x := &echoClientStreamingEchoClient{stream} return x, nil } type Echo_ClientStreamingEchoClient interface { Send(*EchoRequest) error CloseAndRecv() (*EchoResponse, error) grpc.ClientStream } type echoClientStreamingEchoClient struct { grpc.ClientStream } func (x *echoClientStreamingEchoClient) Send(m *EchoRequest) error { return x.ClientStream.SendMsg(m) } func (x *echoClientStreamingEchoClient) CloseAndRecv() (*EchoResponse, error) { if err := x.ClientStream.CloseSend(); err != nil { return nil, err } m := new(EchoResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func (c *echoClient) BidirectionalStreamingEcho(ctx context.Context, opts ...grpc.CallOption) (Echo_BidirectionalStreamingEchoClient, error) { stream, err := c.cc.NewStream(ctx, &_Echo_serviceDesc.Streams[2], "/grpc.examples.echo.Echo/BidirectionalStreamingEcho", opts...) if err != nil { return nil, err } x := &echoBidirectionalStreamingEchoClient{stream} return x, nil } type Echo_BidirectionalStreamingEchoClient interface { Send(*EchoRequest) error Recv() (*EchoResponse, error) grpc.ClientStream } type echoBidirectionalStreamingEchoClient struct { grpc.ClientStream } func (x *echoBidirectionalStreamingEchoClient) Send(m *EchoRequest) error { return x.ClientStream.SendMsg(m) } func (x *echoBidirectionalStreamingEchoClient) Recv() (*EchoResponse, error) { m := new(EchoResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } // EchoServer is the server API for Echo service. type EchoServer interface { // UnaryEcho is unary echo. UnaryEcho(context.Context, *EchoRequest) (*EchoResponse, error) // ServerStreamingEcho is server side streaming. ServerStreamingEcho(*EchoRequest, Echo_ServerStreamingEchoServer) error // ClientStreamingEcho is client side streaming. ClientStreamingEcho(Echo_ClientStreamingEchoServer) error // BidirectionalStreamingEcho is bidi streaming. 
BidirectionalStreamingEcho(Echo_BidirectionalStreamingEchoServer) error } func RegisterEchoServer(s *grpc.Server, srv EchoServer) { s.RegisterService(&_Echo_serviceDesc, srv) } func _Echo_UnaryEcho_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(EchoRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(EchoServer).UnaryEcho(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/grpc.examples.echo.Echo/UnaryEcho", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(EchoServer).UnaryEcho(ctx, req.(*EchoRequest)) } return interceptor(ctx, in, info, handler) } func _Echo_ServerStreamingEcho_Handler(srv interface{}, stream grpc.ServerStream) error { m := new(EchoRequest) if err := stream.RecvMsg(m); err != nil { return err } return srv.(EchoServer).ServerStreamingEcho(m, &echoServerStreamingEchoServer{stream}) } type Echo_ServerStreamingEchoServer interface { Send(*EchoResponse) error grpc.ServerStream } type echoServerStreamingEchoServer struct { grpc.ServerStream } func (x *echoServerStreamingEchoServer) Send(m *EchoResponse) error { return x.ServerStream.SendMsg(m) } func _Echo_ClientStreamingEcho_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(EchoServer).ClientStreamingEcho(&echoClientStreamingEchoServer{stream}) } type Echo_ClientStreamingEchoServer interface { SendAndClose(*EchoResponse) error Recv() (*EchoRequest, error) grpc.ServerStream } type echoClientStreamingEchoServer struct { grpc.ServerStream } func (x *echoClientStreamingEchoServer) SendAndClose(m *EchoResponse) error { return x.ServerStream.SendMsg(m) } func (x *echoClientStreamingEchoServer) Recv() (*EchoRequest, error) { m := new(EchoRequest) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func _Echo_BidirectionalStreamingEcho_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(EchoServer).BidirectionalStreamingEcho(&echoBidirectionalStreamingEchoServer{stream}) } type Echo_BidirectionalStreamingEchoServer interface { Send(*EchoResponse) error Recv() (*EchoRequest, error) grpc.ServerStream } type echoBidirectionalStreamingEchoServer struct { grpc.ServerStream } func (x *echoBidirectionalStreamingEchoServer) Send(m *EchoResponse) error { return x.ServerStream.SendMsg(m) } func (x *echoBidirectionalStreamingEchoServer) Recv() (*EchoRequest, error) { m := new(EchoRequest) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } var _Echo_serviceDesc = grpc.ServiceDesc{ ServiceName: "grpc.examples.echo.Echo", HandlerType: (*EchoServer)(nil), Methods: []grpc.MethodDesc{ { MethodName: "UnaryEcho", Handler: _Echo_UnaryEcho_Handler, }, }, Streams: []grpc.StreamDesc{ { StreamName: "ServerStreamingEcho", Handler: _Echo_ServerStreamingEcho_Handler, ServerStreams: true, }, { StreamName: "ClientStreamingEcho", Handler: _Echo_ClientStreamingEcho_Handler, ClientStreams: true, }, { StreamName: "BidirectionalStreamingEcho", Handler: _Echo_BidirectionalStreamingEcho_Handler, ServerStreams: true, ClientStreams: true, }, }, Metadata: "echo.proto", } func init() { proto.RegisterFile("echo.proto", fileDescriptor_echo_9d6886b3223721ca) } var fileDescriptor_echo_9d6886b3223721ca = []byte{ // 234 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xb4, 0x92, 0xb1, 0x4b, 0x03, 0x31, 0x14, 0x87, 0x3d, 0x11, 
0xa5, 0x4f, 0xa7, 0xb8, 0x94, 0x2e, 0x96, 0x5b, 0xbc, 0x29, 0x29, 0x16, 0xff, 0x81, 0x8a, 0xbb, 0xb4, 0xb8, 0x88, 0x4b, 0x3c, 0x7f, 0xa6, 0x81, 0x5c, 0xde, 0xf9, 0x92, 0x8a, 0xfe, 0xed, 0x2e, 0x92, 0x2b, 0x05, 0x41, 0xba, 0xd5, 0x2d, 0x8f, 0x7c, 0xef, 0xfb, 0x96, 0x47, 0x84, 0x76, 0xcd, 0xba, 0x17, 0xce, 0xac, 0x94, 0x93, 0xbe, 0xd5, 0xf8, 0xb4, 0x5d, 0x1f, 0x90, 0x74, 0xf9, 0xa9, 0xaf, 0xe9, 0xfc, 0xbe, 0x5d, 0xf3, 0x12, 0xef, 0x1b, 0xa4, 0xac, 0xc6, 0x74, 0xd6, 0x21, 0x25, 0xeb, 0x30, 0xae, 0xa6, 0x55, 0x33, 0x5a, 0xee, 0xc6, 0xba, 0xa1, 0x8b, 0x2d, 0x98, 0x7a, 0x8e, 0x09, 0xfb, 0xc9, 0x9b, 0xef, 0x63, 0x3a, 0x29, 0xa8, 0x7a, 0xa0, 0xd1, 0x63, 0xb4, 0xf2, 0x35, 0x0c, 0x57, 0xfa, 0x6f, 0x5d, 0xff, 0x4a, 0x4f, 0xa6, 0xfb, 0x81, 0x6d, 0xb2, 0x3e, 0x52, 0xcf, 0x74, 0xb9, 0x82, 0x7c, 0x40, 0x56, 0x59, 0x60, 0x3b, 0x1f, 0xdd, 0xc1, 0xdc, 0xb3, 0xaa, 0xd8, 0xef, 0x82, 0x47, 0xcc, 0x87, 0xb7, 0x37, 0x95, 0x02, 0x4d, 0x16, 0xfe, 0xd5, 0x0b, 0xda, 0xec, 0x39, 0xda, 0xf0, 0x1f, 0x91, 0x59, 0xb5, 0xb8, 0x7d, 0x9a, 0x3b, 0x66, 0x17, 0xa0, 0x1d, 0x07, 0x1b, 0x9d, 0x66, 0x71, 0xa6, 0xac, 0x9a, 0xdd, 0xaa, 0x79, 0x83, 0xcd, 0x1b, 0x41, 0x32, 0xc3, 0x59, 0x98, 0x62, 0x7a, 0x39, 0x1d, 0xde, 0xf3, 0x9f, 0x00, 0x00, 0x00, 0xff, 0xff, 0x23, 0x14, 0x26, 0x96, 0x30, 0x02, 0x00, 0x00, } grpc-go-1.22.1/examples/features/proto/echo/echo.proto000066400000000000000000000026141351635773100226710ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ syntax = "proto3"; package grpc.examples.echo; option go_package = "google.golang.org/grpc/examples/features/proto/echo"; // EchoRequest is the request for echo. message EchoRequest { string message = 1; } // EchoResponse is the response for echo. message EchoResponse { string message = 1; } // Echo is the echo service. service Echo { // UnaryEcho is unary echo. rpc UnaryEcho(EchoRequest) returns (EchoResponse) {} // ServerStreamingEcho is server side streaming. rpc ServerStreamingEcho(EchoRequest) returns (stream EchoResponse) {} // ClientStreamingEcho is client side streaming. rpc ClientStreamingEcho(stream EchoRequest) returns (EchoResponse) {} // BidirectionalStreamingEcho is bidi streaming. rpc BidirectionalStreamingEcho(stream EchoRequest) returns (stream EchoResponse) {} } grpc-go-1.22.1/examples/features/reflection/000077500000000000000000000000001351635773100207345ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/reflection/README.md000066400000000000000000000007331351635773100222160ustar00rootroot00000000000000# Reflection This example shows how reflection can be registered on a gRPC server. See https://github.com/grpc/grpc-go/blob/master/Documentation/server-reflection-tutorial.md for a tutorial. # Try it ```go go run server/main.go ``` There are multiple existing reflection clients. To use `gRPC CLI`, follow https://github.com/grpc/grpc-go/blob/master/Documentation/server-reflection-tutorial.md#grpc-cli. To use `grpcurl`, see https://github.com/fullstorydev/grpcurl. 
grpc-go-1.22.1/examples/features/reflection/server/000077500000000000000000000000001351635773100222425ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/reflection/server/main.go000066400000000000000000000047361351635773100235270ustar00rootroot00000000000000/* * * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Binary server is an example server. package main import ( "context" "flag" "fmt" "log" "net" "google.golang.org/grpc" "google.golang.org/grpc/codes" ecpb "google.golang.org/grpc/examples/features/proto/echo" hwpb "google.golang.org/grpc/examples/helloworld/helloworld" "google.golang.org/grpc/reflection" "google.golang.org/grpc/status" ) var port = flag.Int("port", 50051, "the port to serve on") // hwServer is used to implement helloworld.GreeterServer. type hwServer struct{} // SayHello implements helloworld.GreeterServer func (s *hwServer) SayHello(ctx context.Context, in *hwpb.HelloRequest) (*hwpb.HelloReply, error) { return &hwpb.HelloReply{Message: "Hello " + in.Name}, nil } type ecServer struct{} func (s *ecServer) UnaryEcho(ctx context.Context, req *ecpb.EchoRequest) (*ecpb.EchoResponse, error) { return &ecpb.EchoResponse{Message: req.Message}, nil } func (s *ecServer) ServerStreamingEcho(*ecpb.EchoRequest, ecpb.Echo_ServerStreamingEchoServer) error { return status.Errorf(codes.Unimplemented, "not implemented") } func (s *ecServer) ClientStreamingEcho(ecpb.Echo_ClientStreamingEchoServer) error { return status.Errorf(codes.Unimplemented, "not implemented") } func (s *ecServer) BidirectionalStreamingEcho(ecpb.Echo_BidirectionalStreamingEchoServer) error { return status.Errorf(codes.Unimplemented, "not implemented") } func main() { flag.Parse() lis, err := net.Listen("tcp", fmt.Sprintf(":%d", *port)) if err != nil { log.Fatalf("failed to listen: %v", err) } fmt.Printf("server listening at %v\n", lis.Addr()) s := grpc.NewServer() // Register Greeter on the server. hwpb.RegisterGreeterServer(s, &hwServer{}) // Register RouteGuide on the same server. ecpb.RegisterEchoServer(s, &ecServer{}) // Register reflection service on gRPC server. reflection.Register(s) if err := s.Serve(lis); err != nil { log.Fatalf("failed to serve: %v", err) } } grpc-go-1.22.1/examples/features/wait_for_ready/000077500000000000000000000000001351635773100216005ustar00rootroot00000000000000grpc-go-1.22.1/examples/features/wait_for_ready/README.md000066400000000000000000000007401351635773100230600ustar00rootroot00000000000000# Wait for ready example This example shows how to enable "wait for ready" in RPC calls. This code starts a server with a 2 seconds delay. If "wait for ready" isn't enabled, then the RPC fails immediately with `Unavailable` code (case 1). If "wait for ready" is enabled, then the RPC waits for the server. If context dies before the server is available, then it fails with `DeadlineExceeded` (case 3). Otherwise it succeeds (case 2). 
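On the client side, the behavior is controlled per call with the `grpc.WaitForReady` call option. A minimal sketch of case 2, assuming `c` is a `pb.EchoClient` built from an already-dialed `*grpc.ClientConn` (imports of `context`, `time`, `google.golang.org/grpc`, and the echo proto package as `pb` are assumed):

```go
// Hedged sketch, not the full main.go below: one unary RPC with
// "wait for ready" enabled. Instead of failing fast with codes.Unavailable
// while the connection is not READY, the RPC blocks until the connection
// becomes READY or the context deadline expires.
func callWithWaitForReady(c pb.EchoClient) error {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	_, err := c.UnaryEcho(ctx, &pb.EchoRequest{Message: "Hi!"}, grpc.WaitForReady(true))
	return err
}
```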
## Run the example ``` go run main.go ``` grpc-go-1.22.1/examples/features/wait_for_ready/main.go000066400000000000000000000064501351635773100230600ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Binary wait_for_ready is an example for "wait for ready". package main import ( "context" "fmt" "log" "net" "sync" "time" "google.golang.org/grpc" "google.golang.org/grpc/codes" pb "google.golang.org/grpc/examples/features/proto/echo" "google.golang.org/grpc/status" ) // server is used to implement EchoServer. type server struct{} func (s *server) UnaryEcho(ctx context.Context, req *pb.EchoRequest) (*pb.EchoResponse, error) { return &pb.EchoResponse{Message: req.Message}, nil } func (s *server) ServerStreamingEcho(req *pb.EchoRequest, stream pb.Echo_ServerStreamingEchoServer) error { return status.Error(codes.Unimplemented, "RPC unimplemented") } func (s *server) ClientStreamingEcho(stream pb.Echo_ClientStreamingEchoServer) error { return status.Error(codes.Unimplemented, "RPC unimplemented") } func (s *server) BidirectionalStreamingEcho(stream pb.Echo_BidirectionalStreamingEchoServer) error { return status.Error(codes.Unimplemented, "RPC unimplemented") } // serve starts listening with a 2 seconds delay. func serve() { lis, err := net.Listen("tcp", ":50053") if err != nil { log.Fatalf("failed to listen: %v", err) } s := grpc.NewServer() pb.RegisterEchoServer(s, &server{}) if err := s.Serve(lis); err != nil { log.Fatalf("failed to serve: %v", err) } } func main() { conn, err := grpc.Dial("localhost:50053", grpc.WithInsecure()) if err != nil { log.Fatalf("did not connect: %v", err) } defer conn.Close() c := pb.NewEchoClient(conn) var wg sync.WaitGroup wg.Add(3) // "Wait for ready" is not enabled, returns error with code "Unavailable". go func() { defer wg.Done() ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() _, err := c.UnaryEcho(ctx, &pb.EchoRequest{Message: "Hi!"}) got := status.Code(err) fmt.Printf("[1] wanted = %v, got = %v\n", codes.Unavailable, got) }() // "Wait for ready" is enabled, returns nil error. go func() { defer wg.Done() ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() _, err := c.UnaryEcho(ctx, &pb.EchoRequest{Message: "Hi!"}, grpc.WaitForReady(true)) got := status.Code(err) fmt.Printf("[2] wanted = %v, got = %v\n", codes.OK, got) }() // "Wait for ready" is enabled but exceeds the deadline before server starts listening, // returns error with code "DeadlineExceeded". 
go func() { defer wg.Done() ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second) defer cancel() _, err := c.UnaryEcho(ctx, &pb.EchoRequest{Message: "Hi!"}, grpc.WaitForReady(true)) got := status.Code(err) fmt.Printf("[3] wanted = %v, got = %v\n", codes.DeadlineExceeded, got) }() time.Sleep(2 * time.Second) go serve() wg.Wait() } grpc-go-1.22.1/examples/gotutorial.md000066400000000000000000000515101351635773100175010ustar00rootroot00000000000000# gRPC Basics: Go This tutorial provides a basic Go programmer's introduction to working with gRPC. By walking through this example you'll learn how to: - Define a service in a `.proto` file. - Generate server and client code using the protocol buffer compiler. - Use the Go gRPC API to write a simple client and server for your service. It assumes that you have read the [Getting started](https://github.com/grpc/grpc/tree/master/examples) guide and are familiar with [protocol buffers](https://developers.google.com/protocol-buffers/docs/overview). Note that the example in this tutorial uses the proto3 version of the protocol buffers language, you can find out more in the [proto3 language guide](https://developers.google.com/protocol-buffers/docs/proto3) and see the [release notes](https://github.com/google/protobuf/releases) for the new version in the protocol buffers Github repository. This isn't a comprehensive guide to using gRPC in Go: more reference documentation is coming soon. ## Why use gRPC? Our example is a simple route mapping application that lets clients get information about features on their route, create a summary of their route, and exchange route information such as traffic updates with the server and other clients. With gRPC we can define our service once in a `.proto` file and implement clients and servers in any of gRPC's supported languages, which in turn can be run in environments ranging from servers inside Google to your own tablet - all the complexity of communication between different languages and environments is handled for you by gRPC. We also get all the advantages of working with protocol buffers, including efficient serialization, a simple IDL, and easy interface updating. ## Example code and setup The example code for our tutorial is in [grpc/grpc-go/examples/route_guide](https://github.com/grpc/grpc-go/tree/master/examples/route_guide). To download the example, clone the `grpc-go` repository by running the following command: ```shell $ go get google.golang.org/grpc ``` Then change your current directory to `grpc-go/examples/route_guide`: ```shell $ cd $GOPATH/src/google.golang.org/grpc/examples/route_guide ``` You also should have the relevant tools installed to generate the server and client interface code - if you don't already, follow the setup instructions in [the Go quick start guide](https://github.com/grpc/grpc-go/tree/master/examples/). ## Defining the service Our first step (as you'll know from the [quick start](https://grpc.io/docs/#quick-start)) is to define the gRPC *service* and the method *request* and *response* types using [protocol buffers](https://developers.google.com/protocol-buffers/docs/overview). You can see the complete `.proto` file in [examples/route_guide/routeguide/route_guide.proto](https://github.com/grpc/grpc-go/tree/master/examples/route_guide/routeguide/route_guide.proto). To define a service, you specify a named `service` in your `.proto` file: ```proto service RouteGuide { ... 
} ``` Then you define `rpc` methods inside your service definition, specifying their request and response types. gRPC lets you define four kinds of service method, all of which are used in the `RouteGuide` service: - A *simple RPC* where the client sends a request to the server using the stub and waits for a response to come back, just like a normal function call. ```proto // Obtains the feature at a given position. rpc GetFeature(Point) returns (Feature) {} ``` - A *server-side streaming RPC* where the client sends a request to the server and gets a stream to read a sequence of messages back. The client reads from the returned stream until there are no more messages. As you can see in our example, you specify a server-side streaming method by placing the `stream` keyword before the *response* type. ```proto // Obtains the Features available within the given Rectangle. Results are // streamed rather than returned at once (e.g. in a response message with a // repeated field), as the rectangle may cover a large area and contain a // huge number of features. rpc ListFeatures(Rectangle) returns (stream Feature) {} ``` - A *client-side streaming RPC* where the client writes a sequence of messages and sends them to the server, again using a provided stream. Once the client has finished writing the messages, it waits for the server to read them all and return its response. You specify a client-side streaming method by placing the `stream` keyword before the *request* type. ```proto // Accepts a stream of Points on a route being traversed, returning a // RouteSummary when traversal is completed. rpc RecordRoute(stream Point) returns (RouteSummary) {} ``` - A *bidirectional streaming RPC* where both sides send a sequence of messages using a read-write stream. The two streams operate independently, so clients and servers can read and write in whatever order they like: for example, the server could wait to receive all the client messages before writing its responses, or it could alternately read a message then write a message, or some other combination of reads and writes. The order of messages in each stream is preserved. You specify this type of method by placing the `stream` keyword before both the request and the response. ```proto // Accepts a stream of RouteNotes sent while a route is being traversed, // while receiving other RouteNotes (e.g. from other users). rpc RouteChat(stream RouteNote) returns (stream RouteNote) {} ``` Our `.proto` file also contains protocol buffer message type definitions for all the request and response types used in our service methods - for example, here's the `Point` message type: ```proto // Points are represented as latitude-longitude pairs in the E7 representation // (degrees multiplied by 10**7 and rounded to the nearest integer). // Latitudes should be in the range +/- 90 degrees and longitude should be in // the range +/- 180 degrees (inclusive). message Point { int32 latitude = 1; int32 longitude = 2; } ``` ## Generating client and server code Next we need to generate the gRPC client and server interfaces from our `.proto` service definition. We do this using the protocol buffer compiler `protoc` with a special gRPC Go plugin. 
For simplicity, we've provided a [bash script](https://github.com/grpc/grpc-go/blob/master/codegen.sh) that runs `protoc` for you with the appropriate plugin, input, and output (if you want to run this by yourself, make sure you've installed protoc and followed the gRPC-Go [installation instructions](https://github.com/grpc/grpc-go/blob/master/README.md) first): ```shell $ codegen.sh route_guide.proto ``` which actually runs: ```shell $ protoc --go_out=plugins=grpc:. route_guide.proto ``` Running this command generates the following file in your current directory: - `route_guide.pb.go` This contains: - All the protocol buffer code to populate, serialize, and retrieve our request and response message types - An interface type (or *stub*) for clients to call with the methods defined in the `RouteGuide` service. - An interface type for servers to implement, also with the methods defined in the `RouteGuide` service. ## Creating the server First let's look at how we create a `RouteGuide` server. If you're only interested in creating gRPC clients, you can skip this section and go straight to [Creating the client](#client) (though you might find it interesting anyway!). There are two parts to making our `RouteGuide` service do its job: - Implementing the service interface generated from our service definition: doing the actual "work" of our service. - Running a gRPC server to listen for requests from clients and dispatch them to the right service implementation. You can find our example `RouteGuide` server in [grpc-go/examples/route_guide/server/server.go](https://github.com/grpc/grpc-go/tree/master/examples/route_guide/server/server.go). Let's take a closer look at how it works. ### Implementing RouteGuide As you can see, our server has a `routeGuideServer` struct type that implements the generated `RouteGuideServer` interface: ```go type routeGuideServer struct { ... } ... func (s *routeGuideServer) GetFeature(ctx context.Context, point *pb.Point) (*pb.Feature, error) { ... } ... func (s *routeGuideServer) ListFeatures(rect *pb.Rectangle, stream pb.RouteGuide_ListFeaturesServer) error { ... } ... func (s *routeGuideServer) RecordRoute(stream pb.RouteGuide_RecordRouteServer) error { ... } ... func (s *routeGuideServer) RouteChat(stream pb.RouteGuide_RouteChatServer) error { ... } ... ``` #### Simple RPC `routeGuideServer` implements all our service methods. Let's look at the simplest type first, `GetFeature`, which just gets a `Point` from the client and returns the corresponding feature information from its database in a `Feature`. ```go func (s *routeGuideServer) GetFeature(ctx context.Context, point *pb.Point) (*pb.Feature, error) { for _, feature := range s.savedFeatures { if proto.Equal(feature.Location, point) { return feature, nil } } // No feature was found, return an unnamed feature return &pb.Feature{"", point}, nil } ``` The method is passed a context object for the RPC and the client's `Point` protocol buffer request. It returns a `Feature` protocol buffer object with the response information and an `error`. In the method we populate the `Feature` with the appropriate information, and then `return` it along with an `nil` error to tell gRPC that we've finished dealing with the RPC and that the `Feature` can be returned to the client. #### Server-side streaming RPC Now let's look at one of our streaming RPCs. `ListFeatures` is a server-side streaming RPC, so we need to send back multiple `Feature`s to our client. 
```go func (s *routeGuideServer) ListFeatures(rect *pb.Rectangle, stream pb.RouteGuide_ListFeaturesServer) error { for _, feature := range s.savedFeatures { if inRange(feature.Location, rect) { if err := stream.Send(feature); err != nil { return err } } } return nil } ``` As you can see, instead of getting simple request and response objects in our method parameters, this time we get a request object (the `Rectangle` in which our client wants to find `Feature`s) and a special `RouteGuide_ListFeaturesServer` object to write our responses. In the method, we populate as many `Feature` objects as we need to return, writing them to the `RouteGuide_ListFeaturesServer` using its `Send()` method. Finally, as in our simple RPC, we return a `nil` error to tell gRPC that we've finished writing responses. Should any error happen in this call, we return a non-`nil` error; the gRPC layer will translate it into an appropriate RPC status to be sent on the wire. #### Client-side streaming RPC Now let's look at something a little more complicated: the client-side streaming method `RecordRoute`, where we get a stream of `Point`s from the client and return a single `RouteSummary` with information about their trip. As you can see, this time the method doesn't have a request parameter at all. Instead, it gets a `RouteGuide_RecordRouteServer` stream, which the server can use to both read *and* write messages - it can receive client messages using its `Recv()` method and return its single response using its `SendAndClose()` method. ```go func (s *routeGuideServer) RecordRoute(stream pb.RouteGuide_RecordRouteServer) error { var pointCount, featureCount, distance int32 var lastPoint *pb.Point startTime := time.Now() for { point, err := stream.Recv() if err == io.EOF { endTime := time.Now() return stream.SendAndClose(&pb.RouteSummary{ PointCount: pointCount, FeatureCount: featureCount, Distance: distance, ElapsedTime: int32(endTime.Sub(startTime).Seconds()), }) } if err != nil { return err } pointCount++ for _, feature := range s.savedFeatures { if proto.Equal(feature.Location, point) { featureCount++ } } if lastPoint != nil { distance += calcDistance(lastPoint, point) } lastPoint = point } } ``` In the method body we use the `RouteGuide_RecordRouteServer`s `Recv()` method to repeatedly read in our client's requests to a request object (in this case a `Point`) until there are no more messages: the server needs to check the error returned from `Recv()` after each call. If this is `nil`, the stream is still good and it can continue reading; if it's `io.EOF` the message stream has ended and the server can return its `RouteSummary`. If it has any other value, we return the error "as is" so that it'll be translated to an RPC status by the gRPC layer. #### Bidirectional streaming RPC Finally, let's look at our bidirectional streaming RPC `RouteChat()`. ```go func (s *routeGuideServer) RouteChat(stream pb.RouteGuide_RouteChatServer) error { for { in, err := stream.Recv() if err == io.EOF { return nil } if err != nil { return err } key := serialize(in.Location) ... // look for notes to be sent to client for _, note := range s.routeNotes[key] { if err := stream.Send(note); err != nil { return err } } } } ``` This time we get a `RouteGuide_RouteChatServer` stream that, as in our client-side streaming example, can be used to read and write messages. However, this time we return values via our method's stream while the client is still writing messages to *their* message stream. 
The syntax for reading and writing here is very similar to our client-streaming method, except the server uses the stream's `Send()` method rather than `SendAndClose()` because it's writing multiple responses. Although each side will always get the other's messages in the order they were written, both the client and server can read and write in any order — the streams operate completely independently. ### Starting the server Once we've implemented all our methods, we also need to start up a gRPC server so that clients can actually use our service. The following snippet shows how we do this for our `RouteGuide` service: ```go flag.Parse() lis, err := net.Listen("tcp", fmt.Sprintf("localhost:%d", *port)) if err != nil { log.Fatalf("failed to listen: %v", err) } grpcServer := grpc.NewServer() pb.RegisterRouteGuideServer(grpcServer, &routeGuideServer{}) ... // determine whether to use TLS grpcServer.Serve(lis) ``` To build and start a server, we: 1. Specify the port we want to use to listen for client requests using `lis, err := net.Listen("tcp", fmt.Sprintf("localhost:%d", *port))`. 2. Create an instance of the gRPC server using `grpc.NewServer()`. 3. Register our service implementation with the gRPC server. 4. Call `Serve()` on the server with our port details to do a blocking wait until the process is killed or `Stop()` is called. ## Creating the client In this section, we'll look at creating a Go client for our `RouteGuide` service. You can see our complete example client code in [grpc-go/examples/route_guide/client/client.go](https://github.com/grpc/grpc-go/tree/master/examples/route_guide/client/client.go). ### Creating a stub To call service methods, we first need to create a gRPC *channel* to communicate with the server. We create this by passing the server address and port number to `grpc.Dial()` as follows: ```go conn, err := grpc.Dial(*serverAddr, grpc.WithInsecure()) if err != nil { ... } defer conn.Close() ``` You can use `DialOptions` to set the auth credentials (e.g., TLS, GCE credentials, JWT credentials) in `grpc.Dial` if the service you request requires them - our `RouteGuide` server doesn't use TLS here, so we pass `grpc.WithInsecure()` to disable transport security instead. Once the gRPC *channel* is set up, we need a client *stub* to perform RPCs. We get this using the `NewRouteGuideClient` method provided in the `pb` package we generated from our `.proto` file. ```go client := pb.NewRouteGuideClient(conn) ``` ### Calling service methods Now let's look at how we call our service methods. Note that in gRPC-Go, RPCs operate in a blocking/synchronous mode, which means that the RPC call waits for the server to respond, and will either return a response or an error. #### Simple RPC Calling the simple RPC `GetFeature` is nearly as straightforward as calling a local method. ```go feature, err := client.GetFeature(ctx, &pb.Point{Latitude: 409146138, Longitude: -746188906}) if err != nil { ... } ``` As you can see, we call the method on the stub we got earlier. In our method parameters we create and populate a request protocol buffer object (in our case `Point`). We also pass a `context.Context` object which lets us change our RPC's behaviour if necessary, such as timing out or cancelling an RPC in flight. If the call doesn't return an error, then we can read the response information from the server from the first return value. ```go log.Println(feature) ``` #### Server-side streaming RPC Here's where we call the server-side streaming method `ListFeatures`, which returns a stream of geographical `Feature`s.
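(One small gap in the snippets above: they use a `ctx` variable without showing where it comes from. A minimal sketch of how the example client in this repository creates one per call, assuming the standard `context` and `time` packages are imported:)

```go
// Bound each RPC with a deadline so a hung server can't block us forever.
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()

feature, err := client.GetFeature(ctx, &pb.Point{Latitude: 409146138, Longitude: -746188906})
```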
If you've already read [Creating the server](#server) some of this may look very familiar - streaming RPCs are implemented in a similar way on both sides. ```go rect := &pb.Rectangle{ ... } // initialize a pb.Rectangle stream, err := client.ListFeatures(ctx, rect) if err != nil { ... } for { feature, err := stream.Recv() if err == io.EOF { break } if err != nil { log.Fatalf("%v.ListFeatures(_) = _, %v", client, err) } log.Println(feature) } ``` As in the simple RPC, we pass the method a context and a request. However, instead of getting a response object back, we get back an instance of `RouteGuide_ListFeaturesClient`. The client can use the `RouteGuide_ListFeaturesClient` stream to read the server's responses. We use the `RouteGuide_ListFeaturesClient`'s `Recv()` method to repeatedly read in the server's responses to a response protocol buffer object (in this case a `Feature`) until there are no more messages: the client needs to check the error `err` returned from `Recv()` after each call. If `nil`, the stream is still good and it can continue reading; if it's `io.EOF` then the message stream has ended; otherwise there must be an RPC error, which is passed over through `err`. #### Client-side streaming RPC The client-side streaming method `RecordRoute` is similar to the server-side method, except that we only pass the method a context and get a `RouteGuide_RecordRouteClient` stream back, which we can use to both write *and* read messages. ```go // Create a random number of random points r := rand.New(rand.NewSource(time.Now().UnixNano())) pointCount := int(r.Int31n(100)) + 2 // Traverse at least two points var points []*pb.Point for i := 0; i < pointCount; i++ { points = append(points, randomPoint(r)) } log.Printf("Traversing %d points.", len(points)) stream, err := client.RecordRoute(ctx) if err != nil { log.Fatalf("%v.RecordRoute(_) = _, %v", client, err) } for _, point := range points { if err := stream.Send(point); err != nil { log.Fatalf("%v.Send(%v) = %v", stream, point, err) } } reply, err := stream.CloseAndRecv() if err != nil { log.Fatalf("%v.CloseAndRecv() got error %v, want %v", stream, err, nil) } log.Printf("Route summary: %v", reply) ``` The `RouteGuide_RecordRouteClient` has a `Send()` method that we can use to send requests to the server. Once we've finished writing our client's requests to the stream using `Send()`, we need to call `CloseAndRecv()` on the stream to let gRPC know that we've finished writing and are expecting to receive a response. We get our RPC status from the `err` returned from `CloseAndRecv()`. If the status is `nil`, then the first return value from `CloseAndRecv()` will be a valid server response. #### Bidirectional streaming RPC Finally, let's look at our bidirectional streaming RPC `RouteChat()`. As in the case of `RecordRoute`, we only pass the method a context object and get back a stream that we can use to both write and read messages. However, this time we return values via our method's stream while the server is still writing messages to *their* message stream. ```go stream, err := client.RouteChat(ctx) waitc := make(chan struct{}) go func() { for { in, err := stream.Recv() if err == io.EOF { // read done. 
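// io.EOF from Recv() means the server has closed its side of the stream,
// so every response has been received. Closing waitc below unblocks the
// main goroutine, which waits on <-waitc once it has finished sending.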
close(waitc) return } if err != nil { log.Fatalf("Failed to receive a note : %v", err) } log.Printf("Got message %s at point(%d, %d)", in.Message, in.Location.Latitude, in.Location.Longitude) } }() for _, note := range notes { if err := stream.Send(note); err != nil { log.Fatalf("Failed to send a note: %v", err) } } stream.CloseSend() <-waitc ``` The syntax for reading and writing here is very similar to our client-side streaming method, except we use the stream's `CloseSend()` method once we've finished our call. Although each side will always get the other's messages in the order they were written, both the client and server can read and write in any order — the streams operate completely independently. ## Try it out! To compile and run the server, assuming you are in the folder `$GOPATH/src/google.golang.org/grpc/examples/route_guide`, simply: ```sh $ go run server/server.go ``` Likewise, to run the client: ```sh $ go run client/client.go ``` grpc-go-1.22.1/examples/helloworld/000077500000000000000000000000001351635773100171375ustar00rootroot00000000000000grpc-go-1.22.1/examples/helloworld/greeter_client/000077500000000000000000000000001351635773100221325ustar00rootroot00000000000000grpc-go-1.22.1/examples/helloworld/greeter_client/main.go000066400000000000000000000026621351635773100234130ustar00rootroot00000000000000/* * * Copyright 2015 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package main implements a client for Greeter service. package main import ( "context" "log" "os" "time" "google.golang.org/grpc" pb "google.golang.org/grpc/examples/helloworld/helloworld" ) const ( address = "localhost:50051" defaultName = "world" ) func main() { // Set up a connection to the server. conn, err := grpc.Dial(address, grpc.WithInsecure()) if err != nil { log.Fatalf("did not connect: %v", err) } defer conn.Close() c := pb.NewGreeterClient(conn) // Contact the server and print out its response. name := defaultName if len(os.Args) > 1 { name = os.Args[1] } ctx, cancel := context.WithTimeout(context.Background(), time.Second) defer cancel() r, err := c.SayHello(ctx, &pb.HelloRequest{Name: name}) if err != nil { log.Fatalf("could not greet: %v", err) } log.Printf("Greeting: %s", r.Message) } grpc-go-1.22.1/examples/helloworld/greeter_server/000077500000000000000000000000001351635773100221625ustar00rootroot00000000000000grpc-go-1.22.1/examples/helloworld/greeter_server/main.go000066400000000000000000000027471351635773100234470ustar00rootroot00000000000000/* * * Copyright 2015 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
* See the License for the specific language governing permissions and * limitations under the License. * */ //go:generate protoc -I ../helloworld --go_out=plugins=grpc:../helloworld ../helloworld/helloworld.proto // Package main implements a server for Greeter service. package main import ( "context" "log" "net" "google.golang.org/grpc" pb "google.golang.org/grpc/examples/helloworld/helloworld" ) const ( port = ":50051" ) // server is used to implement helloworld.GreeterServer. type server struct{} // SayHello implements helloworld.GreeterServer func (s *server) SayHello(ctx context.Context, in *pb.HelloRequest) (*pb.HelloReply, error) { log.Printf("Received: %v", in.Name) return &pb.HelloReply{Message: "Hello " + in.Name}, nil } func main() { lis, err := net.Listen("tcp", port) if err != nil { log.Fatalf("failed to listen: %v", err) } s := grpc.NewServer() pb.RegisterGreeterServer(s, &server{}) if err := s.Serve(lis); err != nil { log.Fatalf("failed to serve: %v", err) } } grpc-go-1.22.1/examples/helloworld/helloworld/000077500000000000000000000000001351635773100213125ustar00rootroot00000000000000grpc-go-1.22.1/examples/helloworld/helloworld/helloworld.pb.go000066400000000000000000000154641351635773100244260ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: helloworld.proto package helloworld import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import ( context "golang.org/x/net/context" grpc "google.golang.org/grpc" ) // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package // The request message containing the user's name. 
type HelloRequest struct { Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *HelloRequest) Reset() { *m = HelloRequest{} } func (m *HelloRequest) String() string { return proto.CompactTextString(m) } func (*HelloRequest) ProtoMessage() {} func (*HelloRequest) Descriptor() ([]byte, []int) { return fileDescriptor_helloworld_71e208cbdc16936b, []int{0} } func (m *HelloRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_HelloRequest.Unmarshal(m, b) } func (m *HelloRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_HelloRequest.Marshal(b, m, deterministic) } func (dst *HelloRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_HelloRequest.Merge(dst, src) } func (m *HelloRequest) XXX_Size() int { return xxx_messageInfo_HelloRequest.Size(m) } func (m *HelloRequest) XXX_DiscardUnknown() { xxx_messageInfo_HelloRequest.DiscardUnknown(m) } var xxx_messageInfo_HelloRequest proto.InternalMessageInfo func (m *HelloRequest) GetName() string { if m != nil { return m.Name } return "" } // The response message containing the greetings type HelloReply struct { Message string `protobuf:"bytes,1,opt,name=message,proto3" json:"message,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *HelloReply) Reset() { *m = HelloReply{} } func (m *HelloReply) String() string { return proto.CompactTextString(m) } func (*HelloReply) ProtoMessage() {} func (*HelloReply) Descriptor() ([]byte, []int) { return fileDescriptor_helloworld_71e208cbdc16936b, []int{1} } func (m *HelloReply) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_HelloReply.Unmarshal(m, b) } func (m *HelloReply) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_HelloReply.Marshal(b, m, deterministic) } func (dst *HelloReply) XXX_Merge(src proto.Message) { xxx_messageInfo_HelloReply.Merge(dst, src) } func (m *HelloReply) XXX_Size() int { return xxx_messageInfo_HelloReply.Size(m) } func (m *HelloReply) XXX_DiscardUnknown() { xxx_messageInfo_HelloReply.DiscardUnknown(m) } var xxx_messageInfo_HelloReply proto.InternalMessageInfo func (m *HelloReply) GetMessage() string { if m != nil { return m.Message } return "" } func init() { proto.RegisterType((*HelloRequest)(nil), "helloworld.HelloRequest") proto.RegisterType((*HelloReply)(nil), "helloworld.HelloReply") } // Reference imports to suppress errors if they are not otherwise used. var _ context.Context var _ grpc.ClientConn // This is a compile-time assertion to ensure that this generated file // is compatible with the grpc package it is being compiled against. const _ = grpc.SupportPackageIsVersion4 // GreeterClient is the client API for Greeter service. // // For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream. type GreeterClient interface { // Sends a greeting SayHello(ctx context.Context, in *HelloRequest, opts ...grpc.CallOption) (*HelloReply, error) } type greeterClient struct { cc *grpc.ClientConn } func NewGreeterClient(cc *grpc.ClientConn) GreeterClient { return &greeterClient{cc} } func (c *greeterClient) SayHello(ctx context.Context, in *HelloRequest, opts ...grpc.CallOption) (*HelloReply, error) { out := new(HelloReply) err := c.cc.Invoke(ctx, "/helloworld.Greeter/SayHello", in, out, opts...) 
if err != nil { return nil, err } return out, nil } // GreeterServer is the server API for Greeter service. type GreeterServer interface { // Sends a greeting SayHello(context.Context, *HelloRequest) (*HelloReply, error) } func RegisterGreeterServer(s *grpc.Server, srv GreeterServer) { s.RegisterService(&_Greeter_serviceDesc, srv) } func _Greeter_SayHello_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(HelloRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(GreeterServer).SayHello(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/helloworld.Greeter/SayHello", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(GreeterServer).SayHello(ctx, req.(*HelloRequest)) } return interceptor(ctx, in, info, handler) } var _Greeter_serviceDesc = grpc.ServiceDesc{ ServiceName: "helloworld.Greeter", HandlerType: (*GreeterServer)(nil), Methods: []grpc.MethodDesc{ { MethodName: "SayHello", Handler: _Greeter_SayHello_Handler, }, }, Streams: []grpc.StreamDesc{}, Metadata: "helloworld.proto", } func init() { proto.RegisterFile("helloworld.proto", fileDescriptor_helloworld_71e208cbdc16936b) } var fileDescriptor_helloworld_71e208cbdc16936b = []byte{ // 175 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x12, 0xc8, 0x48, 0xcd, 0xc9, 0xc9, 0x2f, 0xcf, 0x2f, 0xca, 0x49, 0xd1, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0xe2, 0x42, 0x88, 0x28, 0x29, 0x71, 0xf1, 0x78, 0x80, 0x78, 0x41, 0xa9, 0x85, 0xa5, 0xa9, 0xc5, 0x25, 0x42, 0x42, 0x5c, 0x2c, 0x79, 0x89, 0xb9, 0xa9, 0x12, 0x8c, 0x0a, 0x8c, 0x1a, 0x9c, 0x41, 0x60, 0xb6, 0x92, 0x1a, 0x17, 0x17, 0x54, 0x4d, 0x41, 0x4e, 0xa5, 0x90, 0x04, 0x17, 0x7b, 0x6e, 0x6a, 0x71, 0x71, 0x62, 0x3a, 0x4c, 0x11, 0x8c, 0x6b, 0xe4, 0xc9, 0xc5, 0xee, 0x5e, 0x94, 0x9a, 0x5a, 0x92, 0x5a, 0x24, 0x64, 0xc7, 0xc5, 0x11, 0x9c, 0x58, 0x09, 0xd6, 0x25, 0x24, 0xa1, 0x87, 0xe4, 0x02, 0x64, 0xcb, 0xa4, 0xc4, 0xb0, 0xc8, 0x14, 0xe4, 0x54, 0x2a, 0x31, 0x38, 0x19, 0x70, 0x49, 0x67, 0xe6, 0xeb, 0xa5, 0x17, 0x15, 0x24, 0xeb, 0xa5, 0x56, 0x24, 0xe6, 0x16, 0xe4, 0xa4, 0x16, 0x23, 0xa9, 0x75, 0xe2, 0x07, 0x2b, 0x0e, 0x07, 0xb1, 0x03, 0x40, 0x5e, 0x0a, 0x60, 0x4c, 0x62, 0x03, 0xfb, 0xcd, 0x18, 0x10, 0x00, 0x00, 0xff, 0xff, 0x0f, 0xb7, 0xcd, 0xf2, 0xef, 0x00, 0x00, 0x00, } grpc-go-1.22.1/examples/helloworld/helloworld/helloworld.proto000066400000000000000000000021051351635773100245500ustar00rootroot00000000000000// Copyright 2015 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. syntax = "proto3"; option java_multiple_files = true; option java_package = "io.grpc.examples.helloworld"; option java_outer_classname = "HelloWorldProto"; package helloworld; // The greeting service definition. service Greeter { // Sends a greeting rpc SayHello (HelloRequest) returns (HelloReply) {} } // The request message containing the user's name. 
message HelloRequest { string name = 1; } // The response message containing the greetings message HelloReply { string message = 1; } grpc-go-1.22.1/examples/helloworld/mock_helloworld/000077500000000000000000000000001351635773100223235ustar00rootroot00000000000000grpc-go-1.22.1/examples/helloworld/mock_helloworld/hw_mock.go000066400000000000000000000027271351635773100243110ustar00rootroot00000000000000// Automatically generated by MockGen. DO NOT EDIT! // Source: google.golang.org/grpc/examples/helloworld/helloworld (interfaces: GreeterClient) package mock_helloworld import ( gomock "github.com/golang/mock/gomock" context "golang.org/x/net/context" grpc "google.golang.org/grpc" helloworld "google.golang.org/grpc/examples/helloworld/helloworld" ) // Mock of GreeterClient interface type MockGreeterClient struct { ctrl *gomock.Controller recorder *_MockGreeterClientRecorder } // Recorder for MockGreeterClient (not exported) type _MockGreeterClientRecorder struct { mock *MockGreeterClient } func NewMockGreeterClient(ctrl *gomock.Controller) *MockGreeterClient { mock := &MockGreeterClient{ctrl: ctrl} mock.recorder = &_MockGreeterClientRecorder{mock} return mock } func (_m *MockGreeterClient) EXPECT() *_MockGreeterClientRecorder { return _m.recorder } func (_m *MockGreeterClient) SayHello(_param0 context.Context, _param1 *helloworld.HelloRequest, _param2 ...grpc.CallOption) (*helloworld.HelloReply, error) { _s := []interface{}{_param0, _param1} for _, _x := range _param2 { _s = append(_s, _x) } ret := _m.ctrl.Call(_m, "SayHello", _s...) ret0, _ := ret[0].(*helloworld.HelloReply) ret1, _ := ret[1].(error) return ret0, ret1 } func (_mr *_MockGreeterClientRecorder) SayHello(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call { _s := append([]interface{}{arg0, arg1}, arg2...) return _mr.mock.ctrl.RecordCall(_mr.mock, "SayHello", _s...) } grpc-go-1.22.1/examples/helloworld/mock_helloworld/hw_mock_test.go000066400000000000000000000036021351635773100253410ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package mock_helloworld_test import ( "context" "fmt" "testing" "time" "github.com/golang/mock/gomock" "github.com/golang/protobuf/proto" helloworld "google.golang.org/grpc/examples/helloworld/helloworld" hwmock "google.golang.org/grpc/examples/helloworld/mock_helloworld" ) // rpcMsg implements the gomock.Matcher interface type rpcMsg struct { msg proto.Message } func (r *rpcMsg) Matches(msg interface{}) bool { m, ok := msg.(proto.Message) if !ok { return false } return proto.Equal(m, r.msg) } func (r *rpcMsg) String() string { return fmt.Sprintf("is %s", r.msg) } func TestSayHello(t *testing.T) { ctrl := gomock.NewController(t) defer ctrl.Finish() mockGreeterClient := hwmock.NewMockGreeterClient(ctrl) req := &helloworld.HelloRequest{Name: "unit_test"} mockGreeterClient.EXPECT().SayHello( gomock.Any(), &rpcMsg{msg: req}, ).Return(&helloworld.HelloReply{Message: "Mocked Interface"}, nil) testSayHello(t, mockGreeterClient) } func testSayHello(t *testing.T, client helloworld.GreeterClient) { ctx, cancel := context.WithTimeout(context.Background(), time.Second) defer cancel() r, err := client.SayHello(ctx, &helloworld.HelloRequest{Name: "unit_test"}) if err != nil || r.Message != "Mocked Interface" { t.Errorf("mocking failed") } t.Log("Reply : ", r.Message) } grpc-go-1.22.1/examples/route_guide/000077500000000000000000000000001351635773100172775ustar00rootroot00000000000000grpc-go-1.22.1/examples/route_guide/README.md000066400000000000000000000015471351635773100205650ustar00rootroot00000000000000# Description The route guide server and client demonstrate how to use grpc go libraries to perform unary, client streaming, server streaming and full duplex RPCs. Please refer to [gRPC Basics: Go](https://grpc.io/docs/tutorials/basic/go.html) for more information. See the definition of the route guide service in routeguide/route_guide.proto. # Run the sample code To compile and run the server, assuming you are in the root of the route_guide folder, i.e., .../examples/route_guide/, simply: ```sh $ go run server/server.go ``` Likewise, to run the client: ```sh $ go run client/client.go ``` # Optional command line flags The server and client both take optional command line flags. For example, the client and server run without TLS by default. To enable TLS: ```sh $ go run server/server.go -tls=true ``` and ```sh $ go run client/client.go -tls=true ``` grpc-go-1.22.1/examples/route_guide/client/000077500000000000000000000000001351635773100205555ustar00rootroot00000000000000grpc-go-1.22.1/examples/route_guide/client/client.go000066400000000000000000000140611351635773100223640ustar00rootroot00000000000000/* * * Copyright 2015 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package main implements a simple gRPC client that demonstrates how to use gRPC-Go libraries // to perform unary, client streaming, server streaming and full duplex RPCs. // // It interacts with the route guide service whose definition can be found in routeguide/route_guide.proto. 
package main import ( "context" "flag" "io" "log" "math/rand" "time" "google.golang.org/grpc" "google.golang.org/grpc/credentials" pb "google.golang.org/grpc/examples/route_guide/routeguide" "google.golang.org/grpc/testdata" ) var ( tls = flag.Bool("tls", false, "Connection uses TLS if true, else plain TCP") caFile = flag.String("ca_file", "", "The file containing the CA root cert file") serverAddr = flag.String("server_addr", "127.0.0.1:10000", "The server address in the format of host:port") serverHostOverride = flag.String("server_host_override", "x.test.youtube.com", "The server name use to verify the hostname returned by TLS handshake") ) // printFeature gets the feature for the given point. func printFeature(client pb.RouteGuideClient, point *pb.Point) { log.Printf("Getting feature for point (%d, %d)", point.Latitude, point.Longitude) ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() feature, err := client.GetFeature(ctx, point) if err != nil { log.Fatalf("%v.GetFeatures(_) = _, %v: ", client, err) } log.Println(feature) } // printFeatures lists all the features within the given bounding Rectangle. func printFeatures(client pb.RouteGuideClient, rect *pb.Rectangle) { log.Printf("Looking for features within %v", rect) ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() stream, err := client.ListFeatures(ctx, rect) if err != nil { log.Fatalf("%v.ListFeatures(_) = _, %v", client, err) } for { feature, err := stream.Recv() if err == io.EOF { break } if err != nil { log.Fatalf("%v.ListFeatures(_) = _, %v", client, err) } log.Println(feature) } } // runRecordRoute sends a sequence of points to server and expects to get a RouteSummary from server. func runRecordRoute(client pb.RouteGuideClient) { // Create a random number of random points r := rand.New(rand.NewSource(time.Now().UnixNano())) pointCount := int(r.Int31n(100)) + 2 // Traverse at least two points var points []*pb.Point for i := 0; i < pointCount; i++ { points = append(points, randomPoint(r)) } log.Printf("Traversing %d points.", len(points)) ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() stream, err := client.RecordRoute(ctx) if err != nil { log.Fatalf("%v.RecordRoute(_) = _, %v", client, err) } for _, point := range points { if err := stream.Send(point); err != nil { log.Fatalf("%v.Send(%v) = %v", stream, point, err) } } reply, err := stream.CloseAndRecv() if err != nil { log.Fatalf("%v.CloseAndRecv() got error %v, want %v", stream, err, nil) } log.Printf("Route summary: %v", reply) } // runRouteChat receives a sequence of route notes, while sending notes for various locations. func runRouteChat(client pb.RouteGuideClient) { notes := []*pb.RouteNote{ {Location: &pb.Point{Latitude: 0, Longitude: 1}, Message: "First message"}, {Location: &pb.Point{Latitude: 0, Longitude: 2}, Message: "Second message"}, {Location: &pb.Point{Latitude: 0, Longitude: 3}, Message: "Third message"}, {Location: &pb.Point{Latitude: 0, Longitude: 1}, Message: "Fourth message"}, {Location: &pb.Point{Latitude: 0, Longitude: 2}, Message: "Fifth message"}, {Location: &pb.Point{Latitude: 0, Longitude: 3}, Message: "Sixth message"}, } ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() stream, err := client.RouteChat(ctx) if err != nil { log.Fatalf("%v.RouteChat(_) = _, %v", client, err) } waitc := make(chan struct{}) go func() { for { in, err := stream.Recv() if err == io.EOF { // read done. 
close(waitc) return } if err != nil { log.Fatalf("Failed to receive a note : %v", err) } log.Printf("Got message %s at point(%d, %d)", in.Message, in.Location.Latitude, in.Location.Longitude) } }() for _, note := range notes { if err := stream.Send(note); err != nil { log.Fatalf("Failed to send a note: %v", err) } } stream.CloseSend() <-waitc } func randomPoint(r *rand.Rand) *pb.Point { lat := (r.Int31n(180) - 90) * 1e7 long := (r.Int31n(360) - 180) * 1e7 return &pb.Point{Latitude: lat, Longitude: long} } func main() { flag.Parse() var opts []grpc.DialOption if *tls { if *caFile == "" { *caFile = testdata.Path("ca.pem") } creds, err := credentials.NewClientTLSFromFile(*caFile, *serverHostOverride) if err != nil { log.Fatalf("Failed to create TLS credentials %v", err) } opts = append(opts, grpc.WithTransportCredentials(creds)) } else { opts = append(opts, grpc.WithInsecure()) } conn, err := grpc.Dial(*serverAddr, opts...) if err != nil { log.Fatalf("fail to dial: %v", err) } defer conn.Close() client := pb.NewRouteGuideClient(conn) // Looking for a valid feature printFeature(client, &pb.Point{Latitude: 409146138, Longitude: -746188906}) // Feature missing. printFeature(client, &pb.Point{Latitude: 0, Longitude: 0}) // Looking for features between 40, -75 and 42, -73. printFeatures(client, &pb.Rectangle{ Lo: &pb.Point{Latitude: 400000000, Longitude: -750000000}, Hi: &pb.Point{Latitude: 420000000, Longitude: -730000000}, }) // RecordRoute runRecordRoute(client) // RouteChat runRouteChat(client) } grpc-go-1.22.1/examples/route_guide/mock_routeguide/000077500000000000000000000000001351635773100224645ustar00rootroot00000000000000grpc-go-1.22.1/examples/route_guide/mock_routeguide/rg_mock.go000066400000000000000000000147711351635773100244460ustar00rootroot00000000000000// Automatically generated by MockGen. DO NOT EDIT! // Source: google.golang.org/grpc/examples/route_guide/routeguide (interfaces: RouteGuideClient,RouteGuide_RouteChatClient) package mock_routeguide import ( gomock "github.com/golang/mock/gomock" context "golang.org/x/net/context" grpc "google.golang.org/grpc" routeguide "google.golang.org/grpc/examples/route_guide/routeguide" metadata "google.golang.org/grpc/metadata" ) // Mock of RouteGuideClient interface type MockRouteGuideClient struct { ctrl *gomock.Controller recorder *_MockRouteGuideClientRecorder } // Recorder for MockRouteGuideClient (not exported) type _MockRouteGuideClientRecorder struct { mock *MockRouteGuideClient } func NewMockRouteGuideClient(ctrl *gomock.Controller) *MockRouteGuideClient { mock := &MockRouteGuideClient{ctrl: ctrl} mock.recorder = &_MockRouteGuideClientRecorder{mock} return mock } func (_m *MockRouteGuideClient) EXPECT() *_MockRouteGuideClientRecorder { return _m.recorder } func (_m *MockRouteGuideClient) GetFeature(_param0 context.Context, _param1 *routeguide.Point, _param2 ...grpc.CallOption) (*routeguide.Feature, error) { _s := []interface{}{_param0, _param1} for _, _x := range _param2 { _s = append(_s, _x) } ret := _m.ctrl.Call(_m, "GetFeature", _s...) ret0, _ := ret[0].(*routeguide.Feature) ret1, _ := ret[1].(error) return ret0, ret1 } func (_mr *_MockRouteGuideClientRecorder) GetFeature(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call { _s := append([]interface{}{arg0, arg1}, arg2...) return _mr.mock.ctrl.RecordCall(_mr.mock, "GetFeature", _s...) 
} func (_m *MockRouteGuideClient) ListFeatures(_param0 context.Context, _param1 *routeguide.Rectangle, _param2 ...grpc.CallOption) (routeguide.RouteGuide_ListFeaturesClient, error) { _s := []interface{}{_param0, _param1} for _, _x := range _param2 { _s = append(_s, _x) } ret := _m.ctrl.Call(_m, "ListFeatures", _s...) ret0, _ := ret[0].(routeguide.RouteGuide_ListFeaturesClient) ret1, _ := ret[1].(error) return ret0, ret1 } func (_mr *_MockRouteGuideClientRecorder) ListFeatures(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call { _s := append([]interface{}{arg0, arg1}, arg2...) return _mr.mock.ctrl.RecordCall(_mr.mock, "ListFeatures", _s...) } func (_m *MockRouteGuideClient) RecordRoute(_param0 context.Context, _param1 ...grpc.CallOption) (routeguide.RouteGuide_RecordRouteClient, error) { _s := []interface{}{_param0} for _, _x := range _param1 { _s = append(_s, _x) } ret := _m.ctrl.Call(_m, "RecordRoute", _s...) ret0, _ := ret[0].(routeguide.RouteGuide_RecordRouteClient) ret1, _ := ret[1].(error) return ret0, ret1 } func (_mr *_MockRouteGuideClientRecorder) RecordRoute(arg0 interface{}, arg1 ...interface{}) *gomock.Call { _s := append([]interface{}{arg0}, arg1...) return _mr.mock.ctrl.RecordCall(_mr.mock, "RecordRoute", _s...) } func (_m *MockRouteGuideClient) RouteChat(_param0 context.Context, _param1 ...grpc.CallOption) (routeguide.RouteGuide_RouteChatClient, error) { _s := []interface{}{_param0} for _, _x := range _param1 { _s = append(_s, _x) } ret := _m.ctrl.Call(_m, "RouteChat", _s...) ret0, _ := ret[0].(routeguide.RouteGuide_RouteChatClient) ret1, _ := ret[1].(error) return ret0, ret1 } func (_mr *_MockRouteGuideClientRecorder) RouteChat(arg0 interface{}, arg1 ...interface{}) *gomock.Call { _s := append([]interface{}{arg0}, arg1...) return _mr.mock.ctrl.RecordCall(_mr.mock, "RouteChat", _s...) 
} // Mock of RouteGuide_RouteChatClient interface type MockRouteGuide_RouteChatClient struct { ctrl *gomock.Controller recorder *_MockRouteGuide_RouteChatClientRecorder } // Recorder for MockRouteGuide_RouteChatClient (not exported) type _MockRouteGuide_RouteChatClientRecorder struct { mock *MockRouteGuide_RouteChatClient } func NewMockRouteGuide_RouteChatClient(ctrl *gomock.Controller) *MockRouteGuide_RouteChatClient { mock := &MockRouteGuide_RouteChatClient{ctrl: ctrl} mock.recorder = &_MockRouteGuide_RouteChatClientRecorder{mock} return mock } func (_m *MockRouteGuide_RouteChatClient) EXPECT() *_MockRouteGuide_RouteChatClientRecorder { return _m.recorder } func (_m *MockRouteGuide_RouteChatClient) CloseSend() error { ret := _m.ctrl.Call(_m, "CloseSend") ret0, _ := ret[0].(error) return ret0 } func (_mr *_MockRouteGuide_RouteChatClientRecorder) CloseSend() *gomock.Call { return _mr.mock.ctrl.RecordCall(_mr.mock, "CloseSend") } func (_m *MockRouteGuide_RouteChatClient) Context() context.Context { ret := _m.ctrl.Call(_m, "Context") ret0, _ := ret[0].(context.Context) return ret0 } func (_mr *_MockRouteGuide_RouteChatClientRecorder) Context() *gomock.Call { return _mr.mock.ctrl.RecordCall(_mr.mock, "Context") } func (_m *MockRouteGuide_RouteChatClient) Header() (metadata.MD, error) { ret := _m.ctrl.Call(_m, "Header") ret0, _ := ret[0].(metadata.MD) ret1, _ := ret[1].(error) return ret0, ret1 } func (_mr *_MockRouteGuide_RouteChatClientRecorder) Header() *gomock.Call { return _mr.mock.ctrl.RecordCall(_mr.mock, "Header") } func (_m *MockRouteGuide_RouteChatClient) Recv() (*routeguide.RouteNote, error) { ret := _m.ctrl.Call(_m, "Recv") ret0, _ := ret[0].(*routeguide.RouteNote) ret1, _ := ret[1].(error) return ret0, ret1 } func (_mr *_MockRouteGuide_RouteChatClientRecorder) Recv() *gomock.Call { return _mr.mock.ctrl.RecordCall(_mr.mock, "Recv") } func (_m *MockRouteGuide_RouteChatClient) RecvMsg(_param0 interface{}) error { ret := _m.ctrl.Call(_m, "RecvMsg", _param0) ret0, _ := ret[0].(error) return ret0 } func (_mr *_MockRouteGuide_RouteChatClientRecorder) RecvMsg(arg0 interface{}) *gomock.Call { return _mr.mock.ctrl.RecordCall(_mr.mock, "RecvMsg", arg0) } func (_m *MockRouteGuide_RouteChatClient) Send(_param0 *routeguide.RouteNote) error { ret := _m.ctrl.Call(_m, "Send", _param0) ret0, _ := ret[0].(error) return ret0 } func (_mr *_MockRouteGuide_RouteChatClientRecorder) Send(arg0 interface{}) *gomock.Call { return _mr.mock.ctrl.RecordCall(_mr.mock, "Send", arg0) } func (_m *MockRouteGuide_RouteChatClient) SendMsg(_param0 interface{}) error { ret := _m.ctrl.Call(_m, "SendMsg", _param0) ret0, _ := ret[0].(error) return ret0 } func (_mr *_MockRouteGuide_RouteChatClientRecorder) SendMsg(arg0 interface{}) *gomock.Call { return _mr.mock.ctrl.RecordCall(_mr.mock, "SendMsg", arg0) } func (_m *MockRouteGuide_RouteChatClient) Trailer() metadata.MD { ret := _m.ctrl.Call(_m, "Trailer") ret0, _ := ret[0].(metadata.MD) return ret0 } func (_mr *_MockRouteGuide_RouteChatClientRecorder) Trailer() *gomock.Call { return _mr.mock.ctrl.RecordCall(_mr.mock, "Trailer") } grpc-go-1.22.1/examples/route_guide/mock_routeguide/rg_mock_test.go000066400000000000000000000042061351635773100254750ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package mock_routeguide_test import ( "context" "fmt" "testing" "time" "github.com/golang/mock/gomock" "github.com/golang/protobuf/proto" rgmock "google.golang.org/grpc/examples/route_guide/mock_routeguide" rgpb "google.golang.org/grpc/examples/route_guide/routeguide" ) var msg = &rgpb.RouteNote{ Location: &rgpb.Point{Latitude: 17, Longitude: 29}, Message: "Taxi-cab", } func TestRouteChat(t *testing.T) { ctrl := gomock.NewController(t) defer ctrl.Finish() // Create mock for the stream returned by RouteChat stream := rgmock.NewMockRouteGuide_RouteChatClient(ctrl) // set expectation on sending. stream.EXPECT().Send( gomock.Any(), ).Return(nil) // Set expectation on receiving. stream.EXPECT().Recv().Return(msg, nil) stream.EXPECT().CloseSend().Return(nil) // Create mock for the client interface. rgclient := rgmock.NewMockRouteGuideClient(ctrl) // Set expectation on RouteChat rgclient.EXPECT().RouteChat( gomock.Any(), ).Return(stream, nil) if err := testRouteChat(rgclient); err != nil { t.Fatalf("Test failed: %v", err) } } func testRouteChat(client rgpb.RouteGuideClient) error { ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() stream, err := client.RouteChat(ctx) if err != nil { return err } if err := stream.Send(msg); err != nil { return err } if err := stream.CloseSend(); err != nil { return err } got, err := stream.Recv() if err != nil { return err } if !proto.Equal(got, msg) { return fmt.Errorf("stream.Recv() = %v, want %v", got, msg) } return nil } grpc-go-1.22.1/examples/route_guide/routeguide/000077500000000000000000000000001351635773100214535ustar00rootroot00000000000000grpc-go-1.22.1/examples/route_guide/routeguide/route_guide.pb.go000066400000000000000000000513451351635773100247250ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: route_guide.proto package routeguide import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import ( context "golang.org/x/net/context" grpc "google.golang.org/grpc" ) // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package // Points are represented as latitude-longitude pairs in the E7 representation // (degrees multiplied by 10**7 and rounded to the nearest integer). // Latitudes should be in the range +/- 90 degrees and longitude should be in // the range +/- 180 degrees (inclusive). 
type Point struct { Latitude int32 `protobuf:"varint,1,opt,name=latitude,proto3" json:"latitude,omitempty"` Longitude int32 `protobuf:"varint,2,opt,name=longitude,proto3" json:"longitude,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Point) Reset() { *m = Point{} } func (m *Point) String() string { return proto.CompactTextString(m) } func (*Point) ProtoMessage() {} func (*Point) Descriptor() ([]byte, []int) { return fileDescriptor_route_guide_dc79de2de4c66c19, []int{0} } func (m *Point) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Point.Unmarshal(m, b) } func (m *Point) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Point.Marshal(b, m, deterministic) } func (dst *Point) XXX_Merge(src proto.Message) { xxx_messageInfo_Point.Merge(dst, src) } func (m *Point) XXX_Size() int { return xxx_messageInfo_Point.Size(m) } func (m *Point) XXX_DiscardUnknown() { xxx_messageInfo_Point.DiscardUnknown(m) } var xxx_messageInfo_Point proto.InternalMessageInfo func (m *Point) GetLatitude() int32 { if m != nil { return m.Latitude } return 0 } func (m *Point) GetLongitude() int32 { if m != nil { return m.Longitude } return 0 } // A latitude-longitude rectangle, represented as two diagonally opposite // points "lo" and "hi". type Rectangle struct { // One corner of the rectangle. Lo *Point `protobuf:"bytes,1,opt,name=lo,proto3" json:"lo,omitempty"` // The other corner of the rectangle. Hi *Point `protobuf:"bytes,2,opt,name=hi,proto3" json:"hi,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Rectangle) Reset() { *m = Rectangle{} } func (m *Rectangle) String() string { return proto.CompactTextString(m) } func (*Rectangle) ProtoMessage() {} func (*Rectangle) Descriptor() ([]byte, []int) { return fileDescriptor_route_guide_dc79de2de4c66c19, []int{1} } func (m *Rectangle) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Rectangle.Unmarshal(m, b) } func (m *Rectangle) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Rectangle.Marshal(b, m, deterministic) } func (dst *Rectangle) XXX_Merge(src proto.Message) { xxx_messageInfo_Rectangle.Merge(dst, src) } func (m *Rectangle) XXX_Size() int { return xxx_messageInfo_Rectangle.Size(m) } func (m *Rectangle) XXX_DiscardUnknown() { xxx_messageInfo_Rectangle.DiscardUnknown(m) } var xxx_messageInfo_Rectangle proto.InternalMessageInfo func (m *Rectangle) GetLo() *Point { if m != nil { return m.Lo } return nil } func (m *Rectangle) GetHi() *Point { if m != nil { return m.Hi } return nil } // A feature names something at a given point. // // If a feature could not be named, the name is empty. type Feature struct { // The name of the feature. Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` // The point where the feature is detected. 
Location *Point `protobuf:"bytes,2,opt,name=location,proto3" json:"location,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Feature) Reset() { *m = Feature{} } func (m *Feature) String() string { return proto.CompactTextString(m) } func (*Feature) ProtoMessage() {} func (*Feature) Descriptor() ([]byte, []int) { return fileDescriptor_route_guide_dc79de2de4c66c19, []int{2} } func (m *Feature) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Feature.Unmarshal(m, b) } func (m *Feature) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Feature.Marshal(b, m, deterministic) } func (dst *Feature) XXX_Merge(src proto.Message) { xxx_messageInfo_Feature.Merge(dst, src) } func (m *Feature) XXX_Size() int { return xxx_messageInfo_Feature.Size(m) } func (m *Feature) XXX_DiscardUnknown() { xxx_messageInfo_Feature.DiscardUnknown(m) } var xxx_messageInfo_Feature proto.InternalMessageInfo func (m *Feature) GetName() string { if m != nil { return m.Name } return "" } func (m *Feature) GetLocation() *Point { if m != nil { return m.Location } return nil } // A RouteNote is a message sent while at a given point. type RouteNote struct { // The location from which the message is sent. Location *Point `protobuf:"bytes,1,opt,name=location,proto3" json:"location,omitempty"` // The message to be sent. Message string `protobuf:"bytes,2,opt,name=message,proto3" json:"message,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *RouteNote) Reset() { *m = RouteNote{} } func (m *RouteNote) String() string { return proto.CompactTextString(m) } func (*RouteNote) ProtoMessage() {} func (*RouteNote) Descriptor() ([]byte, []int) { return fileDescriptor_route_guide_dc79de2de4c66c19, []int{3} } func (m *RouteNote) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_RouteNote.Unmarshal(m, b) } func (m *RouteNote) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_RouteNote.Marshal(b, m, deterministic) } func (dst *RouteNote) XXX_Merge(src proto.Message) { xxx_messageInfo_RouteNote.Merge(dst, src) } func (m *RouteNote) XXX_Size() int { return xxx_messageInfo_RouteNote.Size(m) } func (m *RouteNote) XXX_DiscardUnknown() { xxx_messageInfo_RouteNote.DiscardUnknown(m) } var xxx_messageInfo_RouteNote proto.InternalMessageInfo func (m *RouteNote) GetLocation() *Point { if m != nil { return m.Location } return nil } func (m *RouteNote) GetMessage() string { if m != nil { return m.Message } return "" } // A RouteSummary is received in response to a RecordRoute rpc. // // It contains the number of individual points received, the number of // detected features, and the total distance covered as the cumulative sum of // the distance between each point. type RouteSummary struct { // The number of points received. PointCount int32 `protobuf:"varint,1,opt,name=point_count,json=pointCount,proto3" json:"point_count,omitempty"` // The number of known features passed while traversing the route. FeatureCount int32 `protobuf:"varint,2,opt,name=feature_count,json=featureCount,proto3" json:"feature_count,omitempty"` // The distance covered in metres. Distance int32 `protobuf:"varint,3,opt,name=distance,proto3" json:"distance,omitempty"` // The duration of the traversal in seconds. 
ElapsedTime int32 `protobuf:"varint,4,opt,name=elapsed_time,json=elapsedTime,proto3" json:"elapsed_time,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *RouteSummary) Reset() { *m = RouteSummary{} } func (m *RouteSummary) String() string { return proto.CompactTextString(m) } func (*RouteSummary) ProtoMessage() {} func (*RouteSummary) Descriptor() ([]byte, []int) { return fileDescriptor_route_guide_dc79de2de4c66c19, []int{4} } func (m *RouteSummary) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_RouteSummary.Unmarshal(m, b) } func (m *RouteSummary) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_RouteSummary.Marshal(b, m, deterministic) } func (dst *RouteSummary) XXX_Merge(src proto.Message) { xxx_messageInfo_RouteSummary.Merge(dst, src) } func (m *RouteSummary) XXX_Size() int { return xxx_messageInfo_RouteSummary.Size(m) } func (m *RouteSummary) XXX_DiscardUnknown() { xxx_messageInfo_RouteSummary.DiscardUnknown(m) } var xxx_messageInfo_RouteSummary proto.InternalMessageInfo func (m *RouteSummary) GetPointCount() int32 { if m != nil { return m.PointCount } return 0 } func (m *RouteSummary) GetFeatureCount() int32 { if m != nil { return m.FeatureCount } return 0 } func (m *RouteSummary) GetDistance() int32 { if m != nil { return m.Distance } return 0 } func (m *RouteSummary) GetElapsedTime() int32 { if m != nil { return m.ElapsedTime } return 0 } func init() { proto.RegisterType((*Point)(nil), "routeguide.Point") proto.RegisterType((*Rectangle)(nil), "routeguide.Rectangle") proto.RegisterType((*Feature)(nil), "routeguide.Feature") proto.RegisterType((*RouteNote)(nil), "routeguide.RouteNote") proto.RegisterType((*RouteSummary)(nil), "routeguide.RouteSummary") } // Reference imports to suppress errors if they are not otherwise used. var _ context.Context var _ grpc.ClientConn // This is a compile-time assertion to ensure that this generated file // is compatible with the grpc package it is being compiled against. const _ = grpc.SupportPackageIsVersion4 // RouteGuideClient is the client API for RouteGuide service. // // For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream. type RouteGuideClient interface { // A simple RPC. // // Obtains the feature at a given position. // // A feature with an empty name is returned if there's no feature at the given // position. GetFeature(ctx context.Context, in *Point, opts ...grpc.CallOption) (*Feature, error) // A server-to-client streaming RPC. // // Obtains the Features available within the given Rectangle. Results are // streamed rather than returned at once (e.g. in a response message with a // repeated field), as the rectangle may cover a large area and contain a // huge number of features. ListFeatures(ctx context.Context, in *Rectangle, opts ...grpc.CallOption) (RouteGuide_ListFeaturesClient, error) // A client-to-server streaming RPC. // // Accepts a stream of Points on a route being traversed, returning a // RouteSummary when traversal is completed. RecordRoute(ctx context.Context, opts ...grpc.CallOption) (RouteGuide_RecordRouteClient, error) // A Bidirectional streaming RPC. // // Accepts a stream of RouteNotes sent while a route is being traversed, // while receiving other RouteNotes (e.g. from other users). 
RouteChat(ctx context.Context, opts ...grpc.CallOption) (RouteGuide_RouteChatClient, error) } type routeGuideClient struct { cc *grpc.ClientConn } func NewRouteGuideClient(cc *grpc.ClientConn) RouteGuideClient { return &routeGuideClient{cc} } func (c *routeGuideClient) GetFeature(ctx context.Context, in *Point, opts ...grpc.CallOption) (*Feature, error) { out := new(Feature) err := c.cc.Invoke(ctx, "/routeguide.RouteGuide/GetFeature", in, out, opts...) if err != nil { return nil, err } return out, nil } func (c *routeGuideClient) ListFeatures(ctx context.Context, in *Rectangle, opts ...grpc.CallOption) (RouteGuide_ListFeaturesClient, error) { stream, err := c.cc.NewStream(ctx, &_RouteGuide_serviceDesc.Streams[0], "/routeguide.RouteGuide/ListFeatures", opts...) if err != nil { return nil, err } x := &routeGuideListFeaturesClient{stream} if err := x.ClientStream.SendMsg(in); err != nil { return nil, err } if err := x.ClientStream.CloseSend(); err != nil { return nil, err } return x, nil } type RouteGuide_ListFeaturesClient interface { Recv() (*Feature, error) grpc.ClientStream } type routeGuideListFeaturesClient struct { grpc.ClientStream } func (x *routeGuideListFeaturesClient) Recv() (*Feature, error) { m := new(Feature) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func (c *routeGuideClient) RecordRoute(ctx context.Context, opts ...grpc.CallOption) (RouteGuide_RecordRouteClient, error) { stream, err := c.cc.NewStream(ctx, &_RouteGuide_serviceDesc.Streams[1], "/routeguide.RouteGuide/RecordRoute", opts...) if err != nil { return nil, err } x := &routeGuideRecordRouteClient{stream} return x, nil } type RouteGuide_RecordRouteClient interface { Send(*Point) error CloseAndRecv() (*RouteSummary, error) grpc.ClientStream } type routeGuideRecordRouteClient struct { grpc.ClientStream } func (x *routeGuideRecordRouteClient) Send(m *Point) error { return x.ClientStream.SendMsg(m) } func (x *routeGuideRecordRouteClient) CloseAndRecv() (*RouteSummary, error) { if err := x.ClientStream.CloseSend(); err != nil { return nil, err } m := new(RouteSummary) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func (c *routeGuideClient) RouteChat(ctx context.Context, opts ...grpc.CallOption) (RouteGuide_RouteChatClient, error) { stream, err := c.cc.NewStream(ctx, &_RouteGuide_serviceDesc.Streams[2], "/routeguide.RouteGuide/RouteChat", opts...) if err != nil { return nil, err } x := &routeGuideRouteChatClient{stream} return x, nil } type RouteGuide_RouteChatClient interface { Send(*RouteNote) error Recv() (*RouteNote, error) grpc.ClientStream } type routeGuideRouteChatClient struct { grpc.ClientStream } func (x *routeGuideRouteChatClient) Send(m *RouteNote) error { return x.ClientStream.SendMsg(m) } func (x *routeGuideRouteChatClient) Recv() (*RouteNote, error) { m := new(RouteNote) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } // RouteGuideServer is the server API for RouteGuide service. type RouteGuideServer interface { // A simple RPC. // // Obtains the feature at a given position. // // A feature with an empty name is returned if there's no feature at the given // position. GetFeature(context.Context, *Point) (*Feature, error) // A server-to-client streaming RPC. // // Obtains the Features available within the given Rectangle. Results are // streamed rather than returned at once (e.g. 
in a response message with a // repeated field), as the rectangle may cover a large area and contain a // huge number of features. ListFeatures(*Rectangle, RouteGuide_ListFeaturesServer) error // A client-to-server streaming RPC. // // Accepts a stream of Points on a route being traversed, returning a // RouteSummary when traversal is completed. RecordRoute(RouteGuide_RecordRouteServer) error // A Bidirectional streaming RPC. // // Accepts a stream of RouteNotes sent while a route is being traversed, // while receiving other RouteNotes (e.g. from other users). RouteChat(RouteGuide_RouteChatServer) error } func RegisterRouteGuideServer(s *grpc.Server, srv RouteGuideServer) { s.RegisterService(&_RouteGuide_serviceDesc, srv) } func _RouteGuide_GetFeature_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(Point) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(RouteGuideServer).GetFeature(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/routeguide.RouteGuide/GetFeature", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(RouteGuideServer).GetFeature(ctx, req.(*Point)) } return interceptor(ctx, in, info, handler) } func _RouteGuide_ListFeatures_Handler(srv interface{}, stream grpc.ServerStream) error { m := new(Rectangle) if err := stream.RecvMsg(m); err != nil { return err } return srv.(RouteGuideServer).ListFeatures(m, &routeGuideListFeaturesServer{stream}) } type RouteGuide_ListFeaturesServer interface { Send(*Feature) error grpc.ServerStream } type routeGuideListFeaturesServer struct { grpc.ServerStream } func (x *routeGuideListFeaturesServer) Send(m *Feature) error { return x.ServerStream.SendMsg(m) } func _RouteGuide_RecordRoute_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(RouteGuideServer).RecordRoute(&routeGuideRecordRouteServer{stream}) } type RouteGuide_RecordRouteServer interface { SendAndClose(*RouteSummary) error Recv() (*Point, error) grpc.ServerStream } type routeGuideRecordRouteServer struct { grpc.ServerStream } func (x *routeGuideRecordRouteServer) SendAndClose(m *RouteSummary) error { return x.ServerStream.SendMsg(m) } func (x *routeGuideRecordRouteServer) Recv() (*Point, error) { m := new(Point) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func _RouteGuide_RouteChat_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(RouteGuideServer).RouteChat(&routeGuideRouteChatServer{stream}) } type RouteGuide_RouteChatServer interface { Send(*RouteNote) error Recv() (*RouteNote, error) grpc.ServerStream } type routeGuideRouteChatServer struct { grpc.ServerStream } func (x *routeGuideRouteChatServer) Send(m *RouteNote) error { return x.ServerStream.SendMsg(m) } func (x *routeGuideRouteChatServer) Recv() (*RouteNote, error) { m := new(RouteNote) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } var _RouteGuide_serviceDesc = grpc.ServiceDesc{ ServiceName: "routeguide.RouteGuide", HandlerType: (*RouteGuideServer)(nil), Methods: []grpc.MethodDesc{ { MethodName: "GetFeature", Handler: _RouteGuide_GetFeature_Handler, }, }, Streams: []grpc.StreamDesc{ { StreamName: "ListFeatures", Handler: _RouteGuide_ListFeatures_Handler, ServerStreams: true, }, { StreamName: "RecordRoute", Handler: _RouteGuide_RecordRoute_Handler, ClientStreams: true, }, { StreamName: "RouteChat", Handler: 
_RouteGuide_RouteChat_Handler, ServerStreams: true, ClientStreams: true, }, }, Metadata: "route_guide.proto", } func init() { proto.RegisterFile("route_guide.proto", fileDescriptor_route_guide_dc79de2de4c66c19) } var fileDescriptor_route_guide_dc79de2de4c66c19 = []byte{ // 404 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x84, 0x53, 0xdd, 0xca, 0xd3, 0x40, 0x10, 0xfd, 0x36, 0x7e, 0x9f, 0x6d, 0x26, 0x11, 0xe9, 0x88, 0x10, 0xa2, 0xa0, 0x8d, 0x37, 0xbd, 0x31, 0x94, 0x0a, 0x5e, 0x56, 0x6c, 0xc1, 0xde, 0x14, 0xa9, 0xb1, 0xf7, 0x65, 0x4d, 0xc6, 0x74, 0x61, 0x93, 0x0d, 0xc9, 0x06, 0xf4, 0x01, 0x7c, 0x02, 0x5f, 0x58, 0xb2, 0x49, 0xda, 0x54, 0x5b, 0xbc, 0xdb, 0x39, 0x73, 0xce, 0xfc, 0x9c, 0x61, 0x61, 0x52, 0xaa, 0x5a, 0xd3, 0x21, 0xad, 0x45, 0x42, 0x61, 0x51, 0x2a, 0xad, 0x10, 0x0c, 0x64, 0x90, 0xe0, 0x23, 0x3c, 0xec, 0x94, 0xc8, 0x35, 0xfa, 0x30, 0x96, 0x5c, 0x0b, 0x5d, 0x27, 0xe4, 0xb1, 0xd7, 0x6c, 0xf6, 0x10, 0x9d, 0x62, 0x7c, 0x09, 0xb6, 0x54, 0x79, 0xda, 0x26, 0x2d, 0x93, 0x3c, 0x03, 0xc1, 0x17, 0xb0, 0x23, 0x8a, 0x35, 0xcf, 0x53, 0x49, 0x38, 0x05, 0x4b, 0x2a, 0x53, 0xc0, 0x59, 0x4c, 0xc2, 0x73, 0xa3, 0xd0, 0x74, 0x89, 0x2c, 0xa9, 0x1a, 0xca, 0x51, 0x98, 0x32, 0xd7, 0x29, 0x47, 0x11, 0x6c, 0x61, 0xf4, 0x89, 0xb8, 0xae, 0x4b, 0x42, 0x84, 0xfb, 0x9c, 0x67, 0xed, 0x4c, 0x76, 0x64, 0xde, 0xf8, 0x16, 0xc6, 0x52, 0xc5, 0x5c, 0x0b, 0x95, 0xdf, 0xae, 0x73, 0xa2, 0x04, 0x7b, 0xb0, 0xa3, 0x26, 0xfb, 0x59, 0xe9, 0x4b, 0x2d, 0xfb, 0xaf, 0x16, 0x3d, 0x18, 0x65, 0x54, 0x55, 0x3c, 0x6d, 0x17, 0xb7, 0xa3, 0x3e, 0x0c, 0x7e, 0x33, 0x70, 0x4d, 0xd9, 0xaf, 0x75, 0x96, 0xf1, 0xf2, 0x27, 0xbe, 0x02, 0xa7, 0x68, 0xd4, 0x87, 0x58, 0xd5, 0xb9, 0xee, 0x4c, 0x04, 0x03, 0xad, 0x1b, 0x04, 0xdf, 0xc0, 0x93, 0xef, 0xed, 0x56, 0x1d, 0xa5, 0xb5, 0xd2, 0xed, 0xc0, 0x96, 0xe4, 0xc3, 0x38, 0x11, 0x95, 0xe6, 0x79, 0x4c, 0xde, 0xa3, 0xf6, 0x0e, 0x7d, 0x8c, 0x53, 0x70, 0x49, 0xf2, 0xa2, 0xa2, 0xe4, 0xa0, 0x45, 0x46, 0xde, 0xbd, 0xc9, 0x3b, 0x1d, 0xb6, 0x17, 0x19, 0x2d, 0x7e, 0x59, 0x00, 0x66, 0xaa, 0x4d, 0xb3, 0x0e, 0xbe, 0x07, 0xd8, 0x90, 0xee, 0xbd, 0xfc, 0x77, 0x53, 0xff, 0xd9, 0x10, 0xea, 0x78, 0xc1, 0x1d, 0x2e, 0xc1, 0xdd, 0x8a, 0xaa, 0x17, 0x56, 0xf8, 0x7c, 0x48, 0x3b, 0x5d, 0xfb, 0x86, 0x7a, 0xce, 0x70, 0x09, 0x4e, 0x44, 0xb1, 0x2a, 0x13, 0x33, 0xcb, 0xb5, 0xc6, 0xde, 0x45, 0xc5, 0x81, 0x8f, 0xc1, 0xdd, 0x8c, 0xe1, 0x87, 0xee, 0x64, 0xeb, 0x23, 0xd7, 0x7f, 0x35, 0xef, 0x2f, 0xe9, 0x5f, 0x87, 0x1b, 0xf9, 0x9c, 0xad, 0xe6, 0xf0, 0x42, 0xa8, 0x30, 0x2d, 0x8b, 0x38, 0xa4, 0x1f, 0x3c, 0x2b, 0x24, 0x55, 0x03, 0xfa, 0xea, 0xe9, 0xd9, 0xa3, 0x5d, 0xf3, 0x27, 0x76, 0xec, 0xdb, 0x63, 0xf3, 0x39, 0xde, 0xfd, 0x09, 0x00, 0x00, 0xff, 0xff, 0xc8, 0xe4, 0xef, 0xe6, 0x31, 0x03, 0x00, 0x00, } grpc-go-1.22.1/examples/route_guide/routeguide/route_guide.proto000066400000000000000000000065771351635773100250720ustar00rootroot00000000000000// Copyright 2015 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. 
syntax = "proto3"; option java_multiple_files = true; option java_package = "io.grpc.examples.routeguide"; option java_outer_classname = "RouteGuideProto"; package routeguide; // Interface exported by the server. service RouteGuide { // A simple RPC. // // Obtains the feature at a given position. // // A feature with an empty name is returned if there's no feature at the given // position. rpc GetFeature(Point) returns (Feature) {} // A server-to-client streaming RPC. // // Obtains the Features available within the given Rectangle. Results are // streamed rather than returned at once (e.g. in a response message with a // repeated field), as the rectangle may cover a large area and contain a // huge number of features. rpc ListFeatures(Rectangle) returns (stream Feature) {} // A client-to-server streaming RPC. // // Accepts a stream of Points on a route being traversed, returning a // RouteSummary when traversal is completed. rpc RecordRoute(stream Point) returns (RouteSummary) {} // A Bidirectional streaming RPC. // // Accepts a stream of RouteNotes sent while a route is being traversed, // while receiving other RouteNotes (e.g. from other users). rpc RouteChat(stream RouteNote) returns (stream RouteNote) {} } // Points are represented as latitude-longitude pairs in the E7 representation // (degrees multiplied by 10**7 and rounded to the nearest integer). // Latitudes should be in the range +/- 90 degrees and longitude should be in // the range +/- 180 degrees (inclusive). message Point { int32 latitude = 1; int32 longitude = 2; } // A latitude-longitude rectangle, represented as two diagonally opposite // points "lo" and "hi". message Rectangle { // One corner of the rectangle. Point lo = 1; // The other corner of the rectangle. Point hi = 2; } // A feature names something at a given point. // // If a feature could not be named, the name is empty. message Feature { // The name of the feature. string name = 1; // The point where the feature is detected. Point location = 2; } // A RouteNote is a message sent while at a given point. message RouteNote { // The location from which the message is sent. Point location = 1; // The message to be sent. string message = 2; } // A RouteSummary is received in response to a RecordRoute rpc. // // It contains the number of individual points received, the number of // detected features, and the total distance covered as the cumulative sum of // the distance between each point. message RouteSummary { // The number of points received. int32 point_count = 1; // The number of known features passed while traversing the route. int32 feature_count = 2; // The distance covered in metres. int32 distance = 3; // The duration of the traversal in seconds. int32 elapsed_time = 4; } grpc-go-1.22.1/examples/route_guide/server/000077500000000000000000000000001351635773100206055ustar00rootroot00000000000000grpc-go-1.22.1/examples/route_guide/server/server.go000066400000000000000000000511171351635773100224470ustar00rootroot00000000000000/* * * Copyright 2015 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
* See the License for the specific language governing permissions and * limitations under the License. * */ //go:generate protoc -I ../routeguide --go_out=plugins=grpc:../routeguide ../routeguide/route_guide.proto // Package main implements a simple gRPC server that demonstrates how to use gRPC-Go libraries // to perform unary, client streaming, server streaming and full duplex RPCs. // // It implements the route guide service whose definition can be found in routeguide/route_guide.proto. package main import ( "context" "encoding/json" "flag" "fmt" "io" "io/ioutil" "log" "math" "net" "sync" "time" "google.golang.org/grpc" "google.golang.org/grpc/credentials" "google.golang.org/grpc/testdata" "github.com/golang/protobuf/proto" pb "google.golang.org/grpc/examples/route_guide/routeguide" ) var ( tls = flag.Bool("tls", false, "Connection uses TLS if true, else plain TCP") certFile = flag.String("cert_file", "", "The TLS cert file") keyFile = flag.String("key_file", "", "The TLS key file") jsonDBFile = flag.String("json_db_file", "", "A json file containing a list of features") port = flag.Int("port", 10000, "The server port") ) type routeGuideServer struct { savedFeatures []*pb.Feature // read-only after initialized mu sync.Mutex // protects routeNotes routeNotes map[string][]*pb.RouteNote } // GetFeature returns the feature at the given point. func (s *routeGuideServer) GetFeature(ctx context.Context, point *pb.Point) (*pb.Feature, error) { for _, feature := range s.savedFeatures { if proto.Equal(feature.Location, point) { return feature, nil } } // No feature was found, return an unnamed feature return &pb.Feature{Location: point}, nil } // ListFeatures lists all features contained within the given bounding Rectangle. func (s *routeGuideServer) ListFeatures(rect *pb.Rectangle, stream pb.RouteGuide_ListFeaturesServer) error { for _, feature := range s.savedFeatures { if inRange(feature.Location, rect) { if err := stream.Send(feature); err != nil { return err } } } return nil } // RecordRoute records a route composited of a sequence of points. // // It gets a stream of points, and responds with statistics about the "trip": // number of points, number of known features visited, total distance traveled, and // total time spent. func (s *routeGuideServer) RecordRoute(stream pb.RouteGuide_RecordRouteServer) error { var pointCount, featureCount, distance int32 var lastPoint *pb.Point startTime := time.Now() for { point, err := stream.Recv() if err == io.EOF { endTime := time.Now() return stream.SendAndClose(&pb.RouteSummary{ PointCount: pointCount, FeatureCount: featureCount, Distance: distance, ElapsedTime: int32(endTime.Sub(startTime).Seconds()), }) } if err != nil { return err } pointCount++ for _, feature := range s.savedFeatures { if proto.Equal(feature.Location, point) { featureCount++ } } if lastPoint != nil { distance += calcDistance(lastPoint, point) } lastPoint = point } } // RouteChat receives a stream of message/location pairs, and responds with a stream of all // previous messages at each of those locations. func (s *routeGuideServer) RouteChat(stream pb.RouteGuide_RouteChatServer) error { for { in, err := stream.Recv() if err == io.EOF { return nil } if err != nil { return err } key := serialize(in.Location) s.mu.Lock() s.routeNotes[key] = append(s.routeNotes[key], in) // Note: this copy prevents blocking other clients while serving this one. // We don't need to do a deep copy, because elements in the slice are // insert-only and never modified. 
rn := make([]*pb.RouteNote, len(s.routeNotes[key])) copy(rn, s.routeNotes[key]) s.mu.Unlock() for _, note := range rn { if err := stream.Send(note); err != nil { return err } } } } // loadFeatures loads features from a JSON file. func (s *routeGuideServer) loadFeatures(filePath string) { var data []byte if filePath != "" { var err error data, err = ioutil.ReadFile(filePath) if err != nil { log.Fatalf("Failed to load default features: %v", err) } } else { data = exampleData } if err := json.Unmarshal(data, &s.savedFeatures); err != nil { log.Fatalf("Failed to load default features: %v", err) } } func toRadians(num float64) float64 { return num * math.Pi / float64(180) } // calcDistance calculates the distance between two points using the "haversine" formula. // The formula is based on http://mathforum.org/library/drmath/view/51879.html. func calcDistance(p1 *pb.Point, p2 *pb.Point) int32 { const CordFactor float64 = 1e7 const R = float64(6371000) // earth radius in metres lat1 := toRadians(float64(p1.Latitude) / CordFactor) lat2 := toRadians(float64(p2.Latitude) / CordFactor) lng1 := toRadians(float64(p1.Longitude) / CordFactor) lng2 := toRadians(float64(p2.Longitude) / CordFactor) dlat := lat2 - lat1 dlng := lng2 - lng1 a := math.Sin(dlat/2)*math.Sin(dlat/2) + math.Cos(lat1)*math.Cos(lat2)* math.Sin(dlng/2)*math.Sin(dlng/2) c := 2 * math.Atan2(math.Sqrt(a), math.Sqrt(1-a)) distance := R * c return int32(distance) } func inRange(point *pb.Point, rect *pb.Rectangle) bool { left := math.Min(float64(rect.Lo.Longitude), float64(rect.Hi.Longitude)) right := math.Max(float64(rect.Lo.Longitude), float64(rect.Hi.Longitude)) top := math.Max(float64(rect.Lo.Latitude), float64(rect.Hi.Latitude)) bottom := math.Min(float64(rect.Lo.Latitude), float64(rect.Hi.Latitude)) if float64(point.Longitude) >= left && float64(point.Longitude) <= right && float64(point.Latitude) >= bottom && float64(point.Latitude) <= top { return true } return false } func serialize(point *pb.Point) string { return fmt.Sprintf("%d %d", point.Latitude, point.Longitude) } func newServer() *routeGuideServer { s := &routeGuideServer{routeNotes: make(map[string][]*pb.RouteNote)} s.loadFeatures(*jsonDBFile) return s } func main() { flag.Parse() lis, err := net.Listen("tcp", fmt.Sprintf("localhost:%d", *port)) if err != nil { log.Fatalf("failed to listen: %v", err) } var opts []grpc.ServerOption if *tls { if *certFile == "" { *certFile = testdata.Path("server1.pem") } if *keyFile == "" { *keyFile = testdata.Path("server1.key") } creds, err := credentials.NewServerTLSFromFile(*certFile, *keyFile) if err != nil { log.Fatalf("Failed to generate credentials %v", err) } opts = []grpc.ServerOption{grpc.Creds(creds)} } grpcServer := grpc.NewServer(opts...) pb.RegisterRouteGuideServer(grpcServer, newServer()) grpcServer.Serve(lis) } // exampleData is a copy of testdata/route_guide_db.json. It's to avoid // specifying file path with `go run`. var exampleData = []byte(`[{ "location": { "latitude": 407838351, "longitude": -746143763 }, "name": "Patriots Path, Mendham, NJ 07945, USA" }, { "location": { "latitude": 408122808, "longitude": -743999179 }, "name": "101 New Jersey 10, Whippany, NJ 07981, USA" }, { "location": { "latitude": 413628156, "longitude": -749015468 }, "name": "U.S. 
6, Shohola, PA 18458, USA" }, { "location": { "latitude": 419999544, "longitude": -740371136 }, "name": "5 Conners Road, Kingston, NY 12401, USA" }, { "location": { "latitude": 414008389, "longitude": -743951297 }, "name": "Mid Hudson Psychiatric Center, New Hampton, NY 10958, USA" }, { "location": { "latitude": 419611318, "longitude": -746524769 }, "name": "287 Flugertown Road, Livingston Manor, NY 12758, USA" }, { "location": { "latitude": 406109563, "longitude": -742186778 }, "name": "4001 Tremley Point Road, Linden, NJ 07036, USA" }, { "location": { "latitude": 416802456, "longitude": -742370183 }, "name": "352 South Mountain Road, Wallkill, NY 12589, USA" }, { "location": { "latitude": 412950425, "longitude": -741077389 }, "name": "Bailey Turn Road, Harriman, NY 10926, USA" }, { "location": { "latitude": 412144655, "longitude": -743949739 }, "name": "193-199 Wawayanda Road, Hewitt, NJ 07421, USA" }, { "location": { "latitude": 415736605, "longitude": -742847522 }, "name": "406-496 Ward Avenue, Pine Bush, NY 12566, USA" }, { "location": { "latitude": 413843930, "longitude": -740501726 }, "name": "162 Merrill Road, Highland Mills, NY 10930, USA" }, { "location": { "latitude": 410873075, "longitude": -744459023 }, "name": "Clinton Road, West Milford, NJ 07480, USA" }, { "location": { "latitude": 412346009, "longitude": -744026814 }, "name": "16 Old Brook Lane, Warwick, NY 10990, USA" }, { "location": { "latitude": 402948455, "longitude": -747903913 }, "name": "3 Drake Lane, Pennington, NJ 08534, USA" }, { "location": { "latitude": 406337092, "longitude": -740122226 }, "name": "6324 8th Avenue, Brooklyn, NY 11220, USA" }, { "location": { "latitude": 406421967, "longitude": -747727624 }, "name": "1 Merck Access Road, Whitehouse Station, NJ 08889, USA" }, { "location": { "latitude": 416318082, "longitude": -749677716 }, "name": "78-98 Schalck Road, Narrowsburg, NY 12764, USA" }, { "location": { "latitude": 415301720, "longitude": -748416257 }, "name": "282 Lakeview Drive Road, Highland Lake, NY 12743, USA" }, { "location": { "latitude": 402647019, "longitude": -747071791 }, "name": "330 Evelyn Avenue, Hamilton Township, NJ 08619, USA" }, { "location": { "latitude": 412567807, "longitude": -741058078 }, "name": "New York State Reference Route 987E, Southfields, NY 10975, USA" }, { "location": { "latitude": 416855156, "longitude": -744420597 }, "name": "103-271 Tempaloni Road, Ellenville, NY 12428, USA" }, { "location": { "latitude": 404663628, "longitude": -744820157 }, "name": "1300 Airport Road, North Brunswick Township, NJ 08902, USA" }, { "location": { "latitude": 407113723, "longitude": -749746483 }, "name": "" }, { "location": { "latitude": 402133926, "longitude": -743613249 }, "name": "" }, { "location": { "latitude": 400273442, "longitude": -741220915 }, "name": "" }, { "location": { "latitude": 411236786, "longitude": -744070769 }, "name": "" }, { "location": { "latitude": 411633782, "longitude": -746784970 }, "name": "211-225 Plains Road, Augusta, NJ 07822, USA" }, { "location": { "latitude": 415830701, "longitude": -742952812 }, "name": "" }, { "location": { "latitude": 413447164, "longitude": -748712898 }, "name": "165 Pedersen Ridge Road, Milford, PA 18337, USA" }, { "location": { "latitude": 405047245, "longitude": -749800722 }, "name": "100-122 Locktown Road, Frenchtown, NJ 08825, USA" }, { "location": { "latitude": 418858923, "longitude": -746156790 }, "name": "" }, { "location": { "latitude": 417951888, "longitude": -748484944 }, "name": "650-652 Willi Hill Road, Swan Lake, 
NY 12783, USA" }, { "location": { "latitude": 407033786, "longitude": -743977337 }, "name": "26 East 3rd Street, New Providence, NJ 07974, USA" }, { "location": { "latitude": 417548014, "longitude": -740075041 }, "name": "" }, { "location": { "latitude": 410395868, "longitude": -744972325 }, "name": "" }, { "location": { "latitude": 404615353, "longitude": -745129803 }, "name": "" }, { "location": { "latitude": 406589790, "longitude": -743560121 }, "name": "611 Lawrence Avenue, Westfield, NJ 07090, USA" }, { "location": { "latitude": 414653148, "longitude": -740477477 }, "name": "18 Lannis Avenue, New Windsor, NY 12553, USA" }, { "location": { "latitude": 405957808, "longitude": -743255336 }, "name": "82-104 Amherst Avenue, Colonia, NJ 07067, USA" }, { "location": { "latitude": 411733589, "longitude": -741648093 }, "name": "170 Seven Lakes Drive, Sloatsburg, NY 10974, USA" }, { "location": { "latitude": 412676291, "longitude": -742606606 }, "name": "1270 Lakes Road, Monroe, NY 10950, USA" }, { "location": { "latitude": 409224445, "longitude": -748286738 }, "name": "509-535 Alphano Road, Great Meadows, NJ 07838, USA" }, { "location": { "latitude": 406523420, "longitude": -742135517 }, "name": "652 Garden Street, Elizabeth, NJ 07202, USA" }, { "location": { "latitude": 401827388, "longitude": -740294537 }, "name": "349 Sea Spray Court, Neptune City, NJ 07753, USA" }, { "location": { "latitude": 410564152, "longitude": -743685054 }, "name": "13-17 Stanley Street, West Milford, NJ 07480, USA" }, { "location": { "latitude": 408472324, "longitude": -740726046 }, "name": "47 Industrial Avenue, Teterboro, NJ 07608, USA" }, { "location": { "latitude": 412452168, "longitude": -740214052 }, "name": "5 White Oak Lane, Stony Point, NY 10980, USA" }, { "location": { "latitude": 409146138, "longitude": -746188906 }, "name": "Berkshire Valley Management Area Trail, Jefferson, NJ, USA" }, { "location": { "latitude": 404701380, "longitude": -744781745 }, "name": "1007 Jersey Avenue, New Brunswick, NJ 08901, USA" }, { "location": { "latitude": 409642566, "longitude": -746017679 }, "name": "6 East Emerald Isle Drive, Lake Hopatcong, NJ 07849, USA" }, { "location": { "latitude": 408031728, "longitude": -748645385 }, "name": "1358-1474 New Jersey 57, Port Murray, NJ 07865, USA" }, { "location": { "latitude": 413700272, "longitude": -742135189 }, "name": "367 Prospect Road, Chester, NY 10918, USA" }, { "location": { "latitude": 404310607, "longitude": -740282632 }, "name": "10 Simon Lake Drive, Atlantic Highlands, NJ 07716, USA" }, { "location": { "latitude": 409319800, "longitude": -746201391 }, "name": "11 Ward Street, Mount Arlington, NJ 07856, USA" }, { "location": { "latitude": 406685311, "longitude": -742108603 }, "name": "300-398 Jefferson Avenue, Elizabeth, NJ 07201, USA" }, { "location": { "latitude": 419018117, "longitude": -749142781 }, "name": "43 Dreher Road, Roscoe, NY 12776, USA" }, { "location": { "latitude": 412856162, "longitude": -745148837 }, "name": "Swan Street, Pine Island, NY 10969, USA" }, { "location": { "latitude": 416560744, "longitude": -746721964 }, "name": "66 Pleasantview Avenue, Monticello, NY 12701, USA" }, { "location": { "latitude": 405314270, "longitude": -749836354 }, "name": "" }, { "location": { "latitude": 414219548, "longitude": -743327440 }, "name": "" }, { "location": { "latitude": 415534177, "longitude": -742900616 }, "name": "565 Winding Hills Road, Montgomery, NY 12549, USA" }, { "location": { "latitude": 406898530, "longitude": -749127080 }, "name": "231 Rocky Run 
Road, Glen Gardner, NJ 08826, USA" }, { "location": { "latitude": 407586880, "longitude": -741670168 }, "name": "100 Mount Pleasant Avenue, Newark, NJ 07104, USA" }, { "location": { "latitude": 400106455, "longitude": -742870190 }, "name": "517-521 Huntington Drive, Manchester Township, NJ 08759, USA" }, { "location": { "latitude": 400066188, "longitude": -746793294 }, "name": "" }, { "location": { "latitude": 418803880, "longitude": -744102673 }, "name": "40 Mountain Road, Napanoch, NY 12458, USA" }, { "location": { "latitude": 414204288, "longitude": -747895140 }, "name": "" }, { "location": { "latitude": 414777405, "longitude": -740615601 }, "name": "" }, { "location": { "latitude": 415464475, "longitude": -747175374 }, "name": "48 North Road, Forestburgh, NY 12777, USA" }, { "location": { "latitude": 404062378, "longitude": -746376177 }, "name": "" }, { "location": { "latitude": 405688272, "longitude": -749285130 }, "name": "" }, { "location": { "latitude": 400342070, "longitude": -748788996 }, "name": "" }, { "location": { "latitude": 401809022, "longitude": -744157964 }, "name": "" }, { "location": { "latitude": 404226644, "longitude": -740517141 }, "name": "9 Thompson Avenue, Leonardo, NJ 07737, USA" }, { "location": { "latitude": 410322033, "longitude": -747871659 }, "name": "" }, { "location": { "latitude": 407100674, "longitude": -747742727 }, "name": "" }, { "location": { "latitude": 418811433, "longitude": -741718005 }, "name": "213 Bush Road, Stone Ridge, NY 12484, USA" }, { "location": { "latitude": 415034302, "longitude": -743850945 }, "name": "" }, { "location": { "latitude": 411349992, "longitude": -743694161 }, "name": "" }, { "location": { "latitude": 404839914, "longitude": -744759616 }, "name": "1-17 Bergen Court, New Brunswick, NJ 08901, USA" }, { "location": { "latitude": 414638017, "longitude": -745957854 }, "name": "35 Oakland Valley Road, Cuddebackville, NY 12729, USA" }, { "location": { "latitude": 412127800, "longitude": -740173578 }, "name": "" }, { "location": { "latitude": 401263460, "longitude": -747964303 }, "name": "" }, { "location": { "latitude": 412843391, "longitude": -749086026 }, "name": "" }, { "location": { "latitude": 418512773, "longitude": -743067823 }, "name": "" }, { "location": { "latitude": 404318328, "longitude": -740835638 }, "name": "42-102 Main Street, Belford, NJ 07718, USA" }, { "location": { "latitude": 419020746, "longitude": -741172328 }, "name": "" }, { "location": { "latitude": 404080723, "longitude": -746119569 }, "name": "" }, { "location": { "latitude": 401012643, "longitude": -744035134 }, "name": "" }, { "location": { "latitude": 404306372, "longitude": -741079661 }, "name": "" }, { "location": { "latitude": 403966326, "longitude": -748519297 }, "name": "" }, { "location": { "latitude": 405002031, "longitude": -748407866 }, "name": "" }, { "location": { "latitude": 409532885, "longitude": -742200683 }, "name": "" }, { "location": { "latitude": 416851321, "longitude": -742674555 }, "name": "" }, { "location": { "latitude": 406411633, "longitude": -741722051 }, "name": "3387 Richmond Terrace, Staten Island, NY 10303, USA" }, { "location": { "latitude": 413069058, "longitude": -744597778 }, "name": "261 Van Sickle Road, Goshen, NY 10924, USA" }, { "location": { "latitude": 418465462, "longitude": -746859398 }, "name": "" }, { "location": { "latitude": 411733222, "longitude": -744228360 }, "name": "" }, { "location": { "latitude": 410248224, "longitude": -747127767 }, "name": "3 Hasta Way, Newton, NJ 07860, USA" }]`) 
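The server above listens on localhost:10000 by default (plain TCP unless -tls is given) and serves the RouteGuide service whose generated client stubs appear earlier in this archive. As a minimal, hypothetical sketch of how those stubs are typically driven — this is an illustration, not a file from the archive, and the repository ships a fuller client alongside this server that also exercises the streaming RPCs — a unary GetFeature call might look like:

```go
// Hypothetical minimal caller for the RouteGuide service defined above.
// Assumes the server in this archive is running on localhost:10000 (its
// default port) without TLS; error handling is deliberately terse.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	pb "google.golang.org/grpc/examples/route_guide/routeguide"
)

func main() {
	conn, err := grpc.Dial("localhost:10000", grpc.WithInsecure())
	if err != nil {
		log.Fatalf("did not connect: %v", err)
	}
	defer conn.Close()
	client := pb.NewRouteGuideClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// One of the points present in the example data served by default.
	feature, err := client.GetFeature(ctx, &pb.Point{Latitude: 409146138, Longitude: -746188906})
	if err != nil {
		log.Fatalf("GetFeature failed: %v", err)
	}
	log.Printf("feature: %s", feature.GetName())
}
```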
grpc-go-1.22.1/examples/route_guide/testdata/000077500000000000000000000000001351635773100211105ustar00rootroot00000000000000grpc-go-1.22.1/examples/route_guide/testdata/route_guide_db.json000066400000000000000000000327101351635773100247660ustar00rootroot00000000000000[{ "location": { "latitude": 407838351, "longitude": -746143763 }, "name": "Patriots Path, Mendham, NJ 07945, USA" }, { "location": { "latitude": 408122808, "longitude": -743999179 }, "name": "101 New Jersey 10, Whippany, NJ 07981, USA" }, { "location": { "latitude": 413628156, "longitude": -749015468 }, "name": "U.S. 6, Shohola, PA 18458, USA" }, { "location": { "latitude": 419999544, "longitude": -740371136 }, "name": "5 Conners Road, Kingston, NY 12401, USA" }, { "location": { "latitude": 414008389, "longitude": -743951297 }, "name": "Mid Hudson Psychiatric Center, New Hampton, NY 10958, USA" }, { "location": { "latitude": 419611318, "longitude": -746524769 }, "name": "287 Flugertown Road, Livingston Manor, NY 12758, USA" }, { "location": { "latitude": 406109563, "longitude": -742186778 }, "name": "4001 Tremley Point Road, Linden, NJ 07036, USA" }, { "location": { "latitude": 416802456, "longitude": -742370183 }, "name": "352 South Mountain Road, Wallkill, NY 12589, USA" }, { "location": { "latitude": 412950425, "longitude": -741077389 }, "name": "Bailey Turn Road, Harriman, NY 10926, USA" }, { "location": { "latitude": 412144655, "longitude": -743949739 }, "name": "193-199 Wawayanda Road, Hewitt, NJ 07421, USA" }, { "location": { "latitude": 415736605, "longitude": -742847522 }, "name": "406-496 Ward Avenue, Pine Bush, NY 12566, USA" }, { "location": { "latitude": 413843930, "longitude": -740501726 }, "name": "162 Merrill Road, Highland Mills, NY 10930, USA" }, { "location": { "latitude": 410873075, "longitude": -744459023 }, "name": "Clinton Road, West Milford, NJ 07480, USA" }, { "location": { "latitude": 412346009, "longitude": -744026814 }, "name": "16 Old Brook Lane, Warwick, NY 10990, USA" }, { "location": { "latitude": 402948455, "longitude": -747903913 }, "name": "3 Drake Lane, Pennington, NJ 08534, USA" }, { "location": { "latitude": 406337092, "longitude": -740122226 }, "name": "6324 8th Avenue, Brooklyn, NY 11220, USA" }, { "location": { "latitude": 406421967, "longitude": -747727624 }, "name": "1 Merck Access Road, Whitehouse Station, NJ 08889, USA" }, { "location": { "latitude": 416318082, "longitude": -749677716 }, "name": "78-98 Schalck Road, Narrowsburg, NY 12764, USA" }, { "location": { "latitude": 415301720, "longitude": -748416257 }, "name": "282 Lakeview Drive Road, Highland Lake, NY 12743, USA" }, { "location": { "latitude": 402647019, "longitude": -747071791 }, "name": "330 Evelyn Avenue, Hamilton Township, NJ 08619, USA" }, { "location": { "latitude": 412567807, "longitude": -741058078 }, "name": "New York State Reference Route 987E, Southfields, NY 10975, USA" }, { "location": { "latitude": 416855156, "longitude": -744420597 }, "name": "103-271 Tempaloni Road, Ellenville, NY 12428, USA" }, { "location": { "latitude": 404663628, "longitude": -744820157 }, "name": "1300 Airport Road, North Brunswick Township, NJ 08902, USA" }, { "location": { "latitude": 407113723, "longitude": -749746483 }, "name": "" }, { "location": { "latitude": 402133926, "longitude": -743613249 }, "name": "" }, { "location": { "latitude": 400273442, "longitude": -741220915 }, "name": "" }, { "location": { "latitude": 411236786, "longitude": -744070769 }, "name": "" }, { "location": { "latitude": 411633782, "longitude": 
-746784970 }, "name": "211-225 Plains Road, Augusta, NJ 07822, USA" }, { "location": { "latitude": 415830701, "longitude": -742952812 }, "name": "" }, { "location": { "latitude": 413447164, "longitude": -748712898 }, "name": "165 Pedersen Ridge Road, Milford, PA 18337, USA" }, { "location": { "latitude": 405047245, "longitude": -749800722 }, "name": "100-122 Locktown Road, Frenchtown, NJ 08825, USA" }, { "location": { "latitude": 418858923, "longitude": -746156790 }, "name": "" }, { "location": { "latitude": 417951888, "longitude": -748484944 }, "name": "650-652 Willi Hill Road, Swan Lake, NY 12783, USA" }, { "location": { "latitude": 407033786, "longitude": -743977337 }, "name": "26 East 3rd Street, New Providence, NJ 07974, USA" }, { "location": { "latitude": 417548014, "longitude": -740075041 }, "name": "" }, { "location": { "latitude": 410395868, "longitude": -744972325 }, "name": "" }, { "location": { "latitude": 404615353, "longitude": -745129803 }, "name": "" }, { "location": { "latitude": 406589790, "longitude": -743560121 }, "name": "611 Lawrence Avenue, Westfield, NJ 07090, USA" }, { "location": { "latitude": 414653148, "longitude": -740477477 }, "name": "18 Lannis Avenue, New Windsor, NY 12553, USA" }, { "location": { "latitude": 405957808, "longitude": -743255336 }, "name": "82-104 Amherst Avenue, Colonia, NJ 07067, USA" }, { "location": { "latitude": 411733589, "longitude": -741648093 }, "name": "170 Seven Lakes Drive, Sloatsburg, NY 10974, USA" }, { "location": { "latitude": 412676291, "longitude": -742606606 }, "name": "1270 Lakes Road, Monroe, NY 10950, USA" }, { "location": { "latitude": 409224445, "longitude": -748286738 }, "name": "509-535 Alphano Road, Great Meadows, NJ 07838, USA" }, { "location": { "latitude": 406523420, "longitude": -742135517 }, "name": "652 Garden Street, Elizabeth, NJ 07202, USA" }, { "location": { "latitude": 401827388, "longitude": -740294537 }, "name": "349 Sea Spray Court, Neptune City, NJ 07753, USA" }, { "location": { "latitude": 410564152, "longitude": -743685054 }, "name": "13-17 Stanley Street, West Milford, NJ 07480, USA" }, { "location": { "latitude": 408472324, "longitude": -740726046 }, "name": "47 Industrial Avenue, Teterboro, NJ 07608, USA" }, { "location": { "latitude": 412452168, "longitude": -740214052 }, "name": "5 White Oak Lane, Stony Point, NY 10980, USA" }, { "location": { "latitude": 409146138, "longitude": -746188906 }, "name": "Berkshire Valley Management Area Trail, Jefferson, NJ, USA" }, { "location": { "latitude": 404701380, "longitude": -744781745 }, "name": "1007 Jersey Avenue, New Brunswick, NJ 08901, USA" }, { "location": { "latitude": 409642566, "longitude": -746017679 }, "name": "6 East Emerald Isle Drive, Lake Hopatcong, NJ 07849, USA" }, { "location": { "latitude": 408031728, "longitude": -748645385 }, "name": "1358-1474 New Jersey 57, Port Murray, NJ 07865, USA" }, { "location": { "latitude": 413700272, "longitude": -742135189 }, "name": "367 Prospect Road, Chester, NY 10918, USA" }, { "location": { "latitude": 404310607, "longitude": -740282632 }, "name": "10 Simon Lake Drive, Atlantic Highlands, NJ 07716, USA" }, { "location": { "latitude": 409319800, "longitude": -746201391 }, "name": "11 Ward Street, Mount Arlington, NJ 07856, USA" }, { "location": { "latitude": 406685311, "longitude": -742108603 }, "name": "300-398 Jefferson Avenue, Elizabeth, NJ 07201, USA" }, { "location": { "latitude": 419018117, "longitude": -749142781 }, "name": "43 Dreher Road, Roscoe, NY 12776, USA" }, { "location": { "latitude": 
412856162, "longitude": -745148837 }, "name": "Swan Street, Pine Island, NY 10969, USA" }, { "location": { "latitude": 416560744, "longitude": -746721964 }, "name": "66 Pleasantview Avenue, Monticello, NY 12701, USA" }, { "location": { "latitude": 405314270, "longitude": -749836354 }, "name": "" }, { "location": { "latitude": 414219548, "longitude": -743327440 }, "name": "" }, { "location": { "latitude": 415534177, "longitude": -742900616 }, "name": "565 Winding Hills Road, Montgomery, NY 12549, USA" }, { "location": { "latitude": 406898530, "longitude": -749127080 }, "name": "231 Rocky Run Road, Glen Gardner, NJ 08826, USA" }, { "location": { "latitude": 407586880, "longitude": -741670168 }, "name": "100 Mount Pleasant Avenue, Newark, NJ 07104, USA" }, { "location": { "latitude": 400106455, "longitude": -742870190 }, "name": "517-521 Huntington Drive, Manchester Township, NJ 08759, USA" }, { "location": { "latitude": 400066188, "longitude": -746793294 }, "name": "" }, { "location": { "latitude": 418803880, "longitude": -744102673 }, "name": "40 Mountain Road, Napanoch, NY 12458, USA" }, { "location": { "latitude": 414204288, "longitude": -747895140 }, "name": "" }, { "location": { "latitude": 414777405, "longitude": -740615601 }, "name": "" }, { "location": { "latitude": 415464475, "longitude": -747175374 }, "name": "48 North Road, Forestburgh, NY 12777, USA" }, { "location": { "latitude": 404062378, "longitude": -746376177 }, "name": "" }, { "location": { "latitude": 405688272, "longitude": -749285130 }, "name": "" }, { "location": { "latitude": 400342070, "longitude": -748788996 }, "name": "" }, { "location": { "latitude": 401809022, "longitude": -744157964 }, "name": "" }, { "location": { "latitude": 404226644, "longitude": -740517141 }, "name": "9 Thompson Avenue, Leonardo, NJ 07737, USA" }, { "location": { "latitude": 410322033, "longitude": -747871659 }, "name": "" }, { "location": { "latitude": 407100674, "longitude": -747742727 }, "name": "" }, { "location": { "latitude": 418811433, "longitude": -741718005 }, "name": "213 Bush Road, Stone Ridge, NY 12484, USA" }, { "location": { "latitude": 415034302, "longitude": -743850945 }, "name": "" }, { "location": { "latitude": 411349992, "longitude": -743694161 }, "name": "" }, { "location": { "latitude": 404839914, "longitude": -744759616 }, "name": "1-17 Bergen Court, New Brunswick, NJ 08901, USA" }, { "location": { "latitude": 414638017, "longitude": -745957854 }, "name": "35 Oakland Valley Road, Cuddebackville, NY 12729, USA" }, { "location": { "latitude": 412127800, "longitude": -740173578 }, "name": "" }, { "location": { "latitude": 401263460, "longitude": -747964303 }, "name": "" }, { "location": { "latitude": 412843391, "longitude": -749086026 }, "name": "" }, { "location": { "latitude": 418512773, "longitude": -743067823 }, "name": "" }, { "location": { "latitude": 404318328, "longitude": -740835638 }, "name": "42-102 Main Street, Belford, NJ 07718, USA" }, { "location": { "latitude": 419020746, "longitude": -741172328 }, "name": "" }, { "location": { "latitude": 404080723, "longitude": -746119569 }, "name": "" }, { "location": { "latitude": 401012643, "longitude": -744035134 }, "name": "" }, { "location": { "latitude": 404306372, "longitude": -741079661 }, "name": "" }, { "location": { "latitude": 403966326, "longitude": -748519297 }, "name": "" }, { "location": { "latitude": 405002031, "longitude": -748407866 }, "name": "" }, { "location": { "latitude": 409532885, "longitude": -742200683 }, "name": "" }, { "location": { 
"latitude": 416851321, "longitude": -742674555 }, "name": "" }, { "location": { "latitude": 406411633, "longitude": -741722051 }, "name": "3387 Richmond Terrace, Staten Island, NY 10303, USA" }, { "location": { "latitude": 413069058, "longitude": -744597778 }, "name": "261 Van Sickle Road, Goshen, NY 10924, USA" }, { "location": { "latitude": 418465462, "longitude": -746859398 }, "name": "" }, { "location": { "latitude": 411733222, "longitude": -744228360 }, "name": "" }, { "location": { "latitude": 410248224, "longitude": -747127767 }, "name": "3 Hasta Way, Newton, NJ 07860, USA" }] grpc-go-1.22.1/go.mod000066400000000000000000000013721351635773100142570ustar00rootroot00000000000000module google.golang.org/grpc require ( cloud.google.com/go v0.26.0 // indirect github.com/BurntSushi/toml v0.3.1 // indirect github.com/client9/misspell v0.3.4 github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b github.com/golang/mock v1.1.1 github.com/golang/protobuf v1.2.0 github.com/google/go-cmp v0.2.0 golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3 golang.org/x/net v0.0.0-20190311183353-d8887717615a golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135 google.golang.org/appengine v1.1.0 // indirect google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8 honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc ) grpc-go-1.22.1/go.sum000066400000000000000000000067541351635773100143150ustar00rootroot00000000000000cloud.google.com/go v0.26.0 h1:e0WKqKTd5BnrG8aKH3J3h+QvEIQtSUcf2n5UZ5ZgLtQ= cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ= github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= github.com/client9/misspell v0.3.4 h1:ta993UF76GwbvJcIo3Y68y/M3WxlpEHPWIGDkJYwzJI= github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58= github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= github.com/golang/mock v1.1.1 h1:G5FRp8JnTd7RQH5kemVNlMeyXQAztQ3mOWV95KxsXH8= github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A= github.com/golang/protobuf v1.2.0 h1:P3YflyNX/ehuJFLhxviNdFxQPkGK5cDcApsge1SqnvM= github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/google/go-cmp v0.2.0 h1:+dTQ8DZQJz0Mb/HjFlkptS1FeQ4cWSnN941F8aEG4SQ= github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3 h1:XQyxROzUlZH+WIQwySDgnISgOivlhjIEwaQaJEJrrN0= golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= golang.org/x/net v0.0.0-20190311183353-d8887717615a h1:oWX7TPOiFAMXLq8o0ikBYfCJVlRHBcsciT5bXOrH628= golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be h1:vEDujvNQGv4jgYKudGeI/+DAX4Jffq6hpD55MmoEvKs= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= golang.org/x/sync v0.0.0-20190423024810-112230192c58 
h1:8gQV6CLnAEikrhgkHFbMAEhagSSnXWGV915qUMm9mrU= golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a h1:1BGLXjeY4akVXGgbC9HugT3Jv3hCI0z56oJR5vAMgBU= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/tools v0.0.0-20190311212946-11955173bddd h1:/e+gpKk9r3dJobndpTytxS2gOy6m5uvpg+ISQoEcusQ= golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135 h1:5Beo0mZN8dRzgrMMkDp0jc8YXQKx9DiJ2k1dkvGsn5A= golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= google.golang.org/appengine v1.1.0 h1:igQkv0AAhEIvTEpD5LIpAfav2eeVO9HBTjvKHVJPRSs= google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8 h1:Nw54tB0rB7hY/N0NQvRW8DG4Yk3Q6T9cu9RcFQDu1tc= google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc h1:/hemPrYIhOhy8zYrNj+069zDB68us2sMGsfkFJO0iZs= honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= grpc-go-1.22.1/grpc_test.go000066400000000000000000000023051351635773100154670ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "sync/atomic" "testing" "google.golang.org/grpc/internal/grpctest" "google.golang.org/grpc/internal/leakcheck" ) type s struct{} var lcFailed uint32 type errorer struct { t *testing.T } func (e errorer) Errorf(format string, args ...interface{}) { atomic.StoreUint32(&lcFailed, 1) e.t.Errorf(format, args...) } func (s) Teardown(t *testing.T) { if atomic.LoadUint32(&lcFailed) == 1 { return } leakcheck.Check(errorer{t: t}) if atomic.LoadUint32(&lcFailed) == 1 { t.Log("Leak check disabled for future tests") } } func Test(t *testing.T) { grpctest.RunSubTests(t, s{}) } grpc-go-1.22.1/grpclog/000077500000000000000000000000001351635773100146035ustar00rootroot00000000000000grpc-go-1.22.1/grpclog/glogger/000077500000000000000000000000001351635773100162315ustar00rootroot00000000000000grpc-go-1.22.1/grpclog/glogger/glogger.go000066400000000000000000000041371351635773100202130ustar00rootroot00000000000000/* * * Copyright 2015 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package glogger defines glog-based logging for grpc. // Importing this package will install glog as the logger used by grpclog. package glogger import ( "fmt" "github.com/golang/glog" "google.golang.org/grpc/grpclog" ) func init() { grpclog.SetLoggerV2(&glogger{}) } type glogger struct{} func (g *glogger) Info(args ...interface{}) { glog.InfoDepth(2, args...) } func (g *glogger) Infoln(args ...interface{}) { glog.InfoDepth(2, fmt.Sprintln(args...)) } func (g *glogger) Infof(format string, args ...interface{}) { glog.InfoDepth(2, fmt.Sprintf(format, args...)) } func (g *glogger) Warning(args ...interface{}) { glog.WarningDepth(2, args...) } func (g *glogger) Warningln(args ...interface{}) { glog.WarningDepth(2, fmt.Sprintln(args...)) } func (g *glogger) Warningf(format string, args ...interface{}) { glog.WarningDepth(2, fmt.Sprintf(format, args...)) } func (g *glogger) Error(args ...interface{}) { glog.ErrorDepth(2, args...) } func (g *glogger) Errorln(args ...interface{}) { glog.ErrorDepth(2, fmt.Sprintln(args...)) } func (g *glogger) Errorf(format string, args ...interface{}) { glog.ErrorDepth(2, fmt.Sprintf(format, args...)) } func (g *glogger) Fatal(args ...interface{}) { glog.FatalDepth(2, args...) } func (g *glogger) Fatalln(args ...interface{}) { glog.FatalDepth(2, fmt.Sprintln(args...)) } func (g *glogger) Fatalf(format string, args ...interface{}) { glog.FatalDepth(2, fmt.Sprintf(format, args...)) } func (g *glogger) V(l int) bool { return bool(glog.V(glog.Level(l))) } grpc-go-1.22.1/grpclog/grpclog.go000066400000000000000000000072051351635773100165730ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package grpclog defines logging for grpc. // // All logs in transport and grpclb packages only go to verbose level 2. // All logs in other packages in grpc are logged in spite of the verbosity level. // // In the default logger, // severity level can be set by environment variable GRPC_GO_LOG_SEVERITY_LEVEL, // verbosity level can be set by GRPC_GO_LOG_VERBOSITY_LEVEL. package grpclog // import "google.golang.org/grpc/grpclog" import "os" var logger = newLoggerV2() // V reports whether verbosity level l is at least the requested verbose level. func V(l int) bool { return logger.V(l) } // Info logs to the INFO log. func Info(args ...interface{}) { logger.Info(args...) } // Infof logs to the INFO log. Arguments are handled in the manner of fmt.Printf. func Infof(format string, args ...interface{}) { logger.Infof(format, args...) } // Infoln logs to the INFO log. Arguments are handled in the manner of fmt.Println. 
func Infoln(args ...interface{}) { logger.Infoln(args...) } // Warning logs to the WARNING log. func Warning(args ...interface{}) { logger.Warning(args...) } // Warningf logs to the WARNING log. Arguments are handled in the manner of fmt.Printf. func Warningf(format string, args ...interface{}) { logger.Warningf(format, args...) } // Warningln logs to the WARNING log. Arguments are handled in the manner of fmt.Println. func Warningln(args ...interface{}) { logger.Warningln(args...) } // Error logs to the ERROR log. func Error(args ...interface{}) { logger.Error(args...) } // Errorf logs to the ERROR log. Arguments are handled in the manner of fmt.Printf. func Errorf(format string, args ...interface{}) { logger.Errorf(format, args...) } // Errorln logs to the ERROR log. Arguments are handled in the manner of fmt.Println. func Errorln(args ...interface{}) { logger.Errorln(args...) } // Fatal logs to the FATAL log. Arguments are handled in the manner of fmt.Print. // It calls os.Exit() with exit code 1. func Fatal(args ...interface{}) { logger.Fatal(args...) // Make sure fatal logs will exit. os.Exit(1) } // Fatalf logs to the FATAL log. Arguments are handled in the manner of fmt.Printf. // It calls os.Exit() with exit code 1. func Fatalf(format string, args ...interface{}) { logger.Fatalf(format, args...) // Make sure fatal logs will exit. os.Exit(1) } // Fatalln logs to the FATAL log. Arguments are handled in the manner of fmt.Println. // It calls os.Exit() with exit code 1. func Fatalln(args ...interface{}) { logger.Fatalln(args...) // Make sure fatal logs will exit. os.Exit(1) } // Print prints to the logger. Arguments are handled in the manner of fmt.Print. // // Deprecated: use Info. func Print(args ...interface{}) { logger.Info(args...) } // Printf prints to the logger. Arguments are handled in the manner of fmt.Printf. // // Deprecated: use Infof. func Printf(format string, args ...interface{}) { logger.Infof(format, args...) } // Println prints to the logger. Arguments are handled in the manner of fmt.Println. // // Deprecated: use Infoln. func Println(args ...interface{}) { logger.Infoln(args...) } grpc-go-1.22.1/grpclog/logger.go000066400000000000000000000041231351635773100164110ustar00rootroot00000000000000/* * * Copyright 2015 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpclog // Logger mimics golang's standard Logger as an interface. // // Deprecated: use LoggerV2. type Logger interface { Fatal(args ...interface{}) Fatalf(format string, args ...interface{}) Fatalln(args ...interface{}) Print(args ...interface{}) Printf(format string, args ...interface{}) Println(args ...interface{}) } // SetLogger sets the logger that is used in grpc. Call only from // init() functions. // // Deprecated: use SetLoggerV2. func SetLogger(l Logger) { logger = &loggerWrapper{Logger: l} } // loggerWrapper wraps Logger into a LoggerV2. type loggerWrapper struct { Logger } func (g *loggerWrapper) Info(args ...interface{}) { g.Logger.Print(args...)
} func (g *loggerWrapper) Infoln(args ...interface{}) { g.Logger.Println(args...) } func (g *loggerWrapper) Infof(format string, args ...interface{}) { g.Logger.Printf(format, args...) } func (g *loggerWrapper) Warning(args ...interface{}) { g.Logger.Print(args...) } func (g *loggerWrapper) Warningln(args ...interface{}) { g.Logger.Println(args...) } func (g *loggerWrapper) Warningf(format string, args ...interface{}) { g.Logger.Printf(format, args...) } func (g *loggerWrapper) Error(args ...interface{}) { g.Logger.Print(args...) } func (g *loggerWrapper) Errorln(args ...interface{}) { g.Logger.Println(args...) } func (g *loggerWrapper) Errorf(format string, args ...interface{}) { g.Logger.Printf(format, args...) } func (g *loggerWrapper) V(l int) bool { // Returns true for all verbose level. return true } grpc-go-1.22.1/grpclog/loggerv2.go000066400000000000000000000144371351635773100166720ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpclog import ( "io" "io/ioutil" "log" "os" "strconv" ) // LoggerV2 does underlying logging work for grpclog. type LoggerV2 interface { // Info logs to INFO log. Arguments are handled in the manner of fmt.Print. Info(args ...interface{}) // Infoln logs to INFO log. Arguments are handled in the manner of fmt.Println. Infoln(args ...interface{}) // Infof logs to INFO log. Arguments are handled in the manner of fmt.Printf. Infof(format string, args ...interface{}) // Warning logs to WARNING log. Arguments are handled in the manner of fmt.Print. Warning(args ...interface{}) // Warningln logs to WARNING log. Arguments are handled in the manner of fmt.Println. Warningln(args ...interface{}) // Warningf logs to WARNING log. Arguments are handled in the manner of fmt.Printf. Warningf(format string, args ...interface{}) // Error logs to ERROR log. Arguments are handled in the manner of fmt.Print. Error(args ...interface{}) // Errorln logs to ERROR log. Arguments are handled in the manner of fmt.Println. Errorln(args ...interface{}) // Errorf logs to ERROR log. Arguments are handled in the manner of fmt.Printf. Errorf(format string, args ...interface{}) // Fatal logs to ERROR log. Arguments are handled in the manner of fmt.Print. // gRPC ensures that all Fatal logs will exit with os.Exit(1). // Implementations may also call os.Exit() with a non-zero exit code. Fatal(args ...interface{}) // Fatalln logs to ERROR log. Arguments are handled in the manner of fmt.Println. // gRPC ensures that all Fatal logs will exit with os.Exit(1). // Implementations may also call os.Exit() with a non-zero exit code. Fatalln(args ...interface{}) // Fatalf logs to ERROR log. Arguments are handled in the manner of fmt.Printf. // gRPC ensures that all Fatal logs will exit with os.Exit(1). // Implementations may also call os.Exit() with a non-zero exit code. Fatalf(format string, args ...interface{}) // V reports whether verbosity level l is at least the requested verbose level. 
V(l int) bool } // SetLoggerV2 sets logger that is used in grpc to a V2 logger. // Not mutex-protected, should be called before any gRPC functions. func SetLoggerV2(l LoggerV2) { logger = l } const ( // infoLog indicates Info severity. infoLog int = iota // warningLog indicates Warning severity. warningLog // errorLog indicates Error severity. errorLog // fatalLog indicates Fatal severity. fatalLog ) // severityName contains the string representation of each severity. var severityName = []string{ infoLog: "INFO", warningLog: "WARNING", errorLog: "ERROR", fatalLog: "FATAL", } // loggerT is the default logger used by grpclog. type loggerT struct { m []*log.Logger v int } // NewLoggerV2 creates a loggerV2 with the provided writers. // Fatal logs will be written to errorW, warningW, infoW, followed by exit(1). // Error logs will be written to errorW, warningW and infoW. // Warning logs will be written to warningW and infoW. // Info logs will be written to infoW. func NewLoggerV2(infoW, warningW, errorW io.Writer) LoggerV2 { return NewLoggerV2WithVerbosity(infoW, warningW, errorW, 0) } // NewLoggerV2WithVerbosity creates a loggerV2 with the provided writers and // verbosity level. func NewLoggerV2WithVerbosity(infoW, warningW, errorW io.Writer, v int) LoggerV2 { var m []*log.Logger m = append(m, log.New(infoW, severityName[infoLog]+": ", log.LstdFlags)) m = append(m, log.New(io.MultiWriter(infoW, warningW), severityName[warningLog]+": ", log.LstdFlags)) ew := io.MultiWriter(infoW, warningW, errorW) // ew will be used for error and fatal. m = append(m, log.New(ew, severityName[errorLog]+": ", log.LstdFlags)) m = append(m, log.New(ew, severityName[fatalLog]+": ", log.LstdFlags)) return &loggerT{m: m, v: v} } // newLoggerV2 creates a loggerV2 to be used as default logger. // All logs are written to stderr. func newLoggerV2() LoggerV2 { errorW := ioutil.Discard warningW := ioutil.Discard infoW := ioutil.Discard logLevel := os.Getenv("GRPC_GO_LOG_SEVERITY_LEVEL") switch logLevel { case "", "ERROR", "error": // If env is unset, set level to ERROR. errorW = os.Stderr case "WARNING", "warning": warningW = os.Stderr case "INFO", "info": infoW = os.Stderr } var v int vLevel := os.Getenv("GRPC_GO_LOG_VERBOSITY_LEVEL") if vl, err := strconv.Atoi(vLevel); err == nil { v = vl } return NewLoggerV2WithVerbosity(infoW, warningW, errorW, v) } func (g *loggerT) Info(args ...interface{}) { g.m[infoLog].Print(args...) } func (g *loggerT) Infoln(args ...interface{}) { g.m[infoLog].Println(args...) } func (g *loggerT) Infof(format string, args ...interface{}) { g.m[infoLog].Printf(format, args...) } func (g *loggerT) Warning(args ...interface{}) { g.m[warningLog].Print(args...) } func (g *loggerT) Warningln(args ...interface{}) { g.m[warningLog].Println(args...) } func (g *loggerT) Warningf(format string, args ...interface{}) { g.m[warningLog].Printf(format, args...) } func (g *loggerT) Error(args ...interface{}) { g.m[errorLog].Print(args...) } func (g *loggerT) Errorln(args ...interface{}) { g.m[errorLog].Println(args...) } func (g *loggerT) Errorf(format string, args ...interface{}) { g.m[errorLog].Printf(format, args...) } func (g *loggerT) Fatal(args ...interface{}) { g.m[fatalLog].Fatal(args...) // No need to call os.Exit() again because log.Logger.Fatal() calls os.Exit(). } func (g *loggerT) Fatalln(args ...interface{}) { g.m[fatalLog].Fatalln(args...) // No need to call os.Exit() again because log.Logger.Fatal() calls os.Exit(). 
} func (g *loggerT) Fatalf(format string, args ...interface{}) { g.m[fatalLog].Fatalf(format, args...) // No need to call os.Exit() again because log.Logger.Fatal() calls os.Exit(). } func (g *loggerT) V(l int) bool { return l <= g.v } grpc-go-1.22.1/grpclog/loggerv2_test.go000066400000000000000000000034761351635773100177320ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpclog import ( "bytes" "fmt" "regexp" "testing" ) func TestLoggerV2Severity(t *testing.T) { buffers := []*bytes.Buffer{new(bytes.Buffer), new(bytes.Buffer), new(bytes.Buffer)} SetLoggerV2(NewLoggerV2(buffers[infoLog], buffers[warningLog], buffers[errorLog])) Info(severityName[infoLog]) Warning(severityName[warningLog]) Error(severityName[errorLog]) for i := 0; i < fatalLog; i++ { buf := buffers[i] // The content of info buffer should be something like: // INFO: 2017/04/07 14:55:42 INFO // WARNING: 2017/04/07 14:55:42 WARNING // ERROR: 2017/04/07 14:55:42 ERROR for j := i; j < fatalLog; j++ { b, err := buf.ReadBytes('\n') if err != nil { t.Fatal(err) } if err := checkLogForSeverity(j, b); err != nil { t.Fatal(err) } } } } // check if b is in the format of: // WARNING: 2017/04/07 14:55:42 WARNING func checkLogForSeverity(s int, b []byte) error { expected := regexp.MustCompile(fmt.Sprintf(`^%s: [0-9]{4}/[0-9]{2}/[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2} %s\n$`, severityName[s], severityName[s])) if m := expected.Match(b); !m { return fmt.Errorf("got: %v, want string in format of: %v", string(b), severityName[s]+": 2016/10/05 17:09:26 "+severityName[s]) } return nil } grpc-go-1.22.1/health/000077500000000000000000000000001351635773100144135ustar00rootroot00000000000000grpc-go-1.22.1/health/client.go000066400000000000000000000065121351635773100162240ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package health import ( "context" "fmt" "io" "time" "google.golang.org/grpc" "google.golang.org/grpc/codes" "google.golang.org/grpc/connectivity" healthpb "google.golang.org/grpc/health/grpc_health_v1" "google.golang.org/grpc/internal" "google.golang.org/grpc/internal/backoff" "google.golang.org/grpc/status" ) const maxDelay = 120 * time.Second var backoffStrategy = backoff.Exponential{MaxDelay: maxDelay} var backoffFunc = func(ctx context.Context, retries int) bool { d := backoffStrategy.Backoff(retries) timer := time.NewTimer(d) select { case <-timer.C: return true case <-ctx.Done(): timer.Stop() return false } } func init() { internal.HealthCheckFunc = clientHealthCheck } const healthCheckMethod = "/grpc.health.v1.Health/Watch" // This function implements the protocol defined at: // https://github.com/grpc/grpc/blob/master/doc/health-checking.md func clientHealthCheck(ctx context.Context, newStream func(string) (interface{}, error), setConnectivityState func(connectivity.State), service string) error { tryCnt := 0 retryConnection: for { // Backs off if the connection has failed in some way without receiving a message in the previous retry. if tryCnt > 0 && !backoffFunc(ctx, tryCnt-1) { return nil } tryCnt++ if ctx.Err() != nil { return nil } setConnectivityState(connectivity.Connecting) rawS, err := newStream(healthCheckMethod) if err != nil { continue retryConnection } s, ok := rawS.(grpc.ClientStream) // Ideally, this should never happen. But if it happens, the server is marked as healthy for LBing purposes. if !ok { setConnectivityState(connectivity.Ready) return fmt.Errorf("newStream returned %v (type %T); want grpc.ClientStream", rawS, rawS) } if err = s.SendMsg(&healthpb.HealthCheckRequest{Service: service}); err != nil && err != io.EOF { // Stream should have been closed, so we can safely continue to create a new stream. continue retryConnection } s.CloseSend() resp := new(healthpb.HealthCheckResponse) for { err = s.RecvMsg(resp) // Reports healthy for the LBing purposes if health check is not implemented in the server. if status.Code(err) == codes.Unimplemented { setConnectivityState(connectivity.Ready) return err } // Reports unhealthy if server's Watch method gives an error other than UNIMPLEMENTED. if err != nil { setConnectivityState(connectivity.TransientFailure) continue retryConnection } // As a message has been received, removes the need for backoff for the next retry by reseting the try count. tryCnt = 0 if resp.Status == healthpb.HealthCheckResponse_SERVING { setConnectivityState(connectivity.Ready) } else { setConnectivityState(connectivity.TransientFailure) } } } } grpc-go-1.22.1/health/client_test.go000066400000000000000000000027631351635773100172670ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package health import ( "context" "errors" "reflect" "testing" "time" "google.golang.org/grpc/connectivity" ) func TestClientHealthCheckBackoff(t *testing.T) { const maxRetries = 5 var want []time.Duration for i := 0; i < maxRetries; i++ { want = append(want, time.Duration(i+1)*time.Second) } var got []time.Duration newStream := func(string) (interface{}, error) { if len(got) < maxRetries { return nil, errors.New("backoff") } return nil, nil } oldBackoffFunc := backoffFunc backoffFunc = func(ctx context.Context, retries int) bool { got = append(got, time.Duration(retries+1)*time.Second) return true } defer func() { backoffFunc = oldBackoffFunc }() clientHealthCheck(context.Background(), newStream, func(connectivity.State) {}, "test") if !reflect.DeepEqual(got, want) { t.Fatalf("Backoff durations for %v retries are %v. (expected: %v)", maxRetries, got, want) } } grpc-go-1.22.1/health/grpc_health_v1/000077500000000000000000000000001351635773100173015ustar00rootroot00000000000000grpc-go-1.22.1/health/grpc_health_v1/health.pb.go000066400000000000000000000307601351635773100215030ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: grpc/health/v1/health.proto package grpc_health_v1 // import "google.golang.org/grpc/health/grpc_health_v1" import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import ( context "golang.org/x/net/context" grpc "google.golang.org/grpc" ) // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. 
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type HealthCheckResponse_ServingStatus int32 const ( HealthCheckResponse_UNKNOWN HealthCheckResponse_ServingStatus = 0 HealthCheckResponse_SERVING HealthCheckResponse_ServingStatus = 1 HealthCheckResponse_NOT_SERVING HealthCheckResponse_ServingStatus = 2 HealthCheckResponse_SERVICE_UNKNOWN HealthCheckResponse_ServingStatus = 3 ) var HealthCheckResponse_ServingStatus_name = map[int32]string{ 0: "UNKNOWN", 1: "SERVING", 2: "NOT_SERVING", 3: "SERVICE_UNKNOWN", } var HealthCheckResponse_ServingStatus_value = map[string]int32{ "UNKNOWN": 0, "SERVING": 1, "NOT_SERVING": 2, "SERVICE_UNKNOWN": 3, } func (x HealthCheckResponse_ServingStatus) String() string { return proto.EnumName(HealthCheckResponse_ServingStatus_name, int32(x)) } func (HealthCheckResponse_ServingStatus) EnumDescriptor() ([]byte, []int) { return fileDescriptor_health_6b1a06aa67f91efd, []int{1, 0} } type HealthCheckRequest struct { Service string `protobuf:"bytes,1,opt,name=service,proto3" json:"service,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *HealthCheckRequest) Reset() { *m = HealthCheckRequest{} } func (m *HealthCheckRequest) String() string { return proto.CompactTextString(m) } func (*HealthCheckRequest) ProtoMessage() {} func (*HealthCheckRequest) Descriptor() ([]byte, []int) { return fileDescriptor_health_6b1a06aa67f91efd, []int{0} } func (m *HealthCheckRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_HealthCheckRequest.Unmarshal(m, b) } func (m *HealthCheckRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_HealthCheckRequest.Marshal(b, m, deterministic) } func (dst *HealthCheckRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_HealthCheckRequest.Merge(dst, src) } func (m *HealthCheckRequest) XXX_Size() int { return xxx_messageInfo_HealthCheckRequest.Size(m) } func (m *HealthCheckRequest) XXX_DiscardUnknown() { xxx_messageInfo_HealthCheckRequest.DiscardUnknown(m) } var xxx_messageInfo_HealthCheckRequest proto.InternalMessageInfo func (m *HealthCheckRequest) GetService() string { if m != nil { return m.Service } return "" } type HealthCheckResponse struct { Status HealthCheckResponse_ServingStatus `protobuf:"varint,1,opt,name=status,proto3,enum=grpc.health.v1.HealthCheckResponse_ServingStatus" json:"status,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *HealthCheckResponse) Reset() { *m = HealthCheckResponse{} } func (m *HealthCheckResponse) String() string { return proto.CompactTextString(m) } func (*HealthCheckResponse) ProtoMessage() {} func (*HealthCheckResponse) Descriptor() ([]byte, []int) { return fileDescriptor_health_6b1a06aa67f91efd, []int{1} } func (m *HealthCheckResponse) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_HealthCheckResponse.Unmarshal(m, b) } func (m *HealthCheckResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_HealthCheckResponse.Marshal(b, m, deterministic) } func (dst *HealthCheckResponse) XXX_Merge(src proto.Message) { xxx_messageInfo_HealthCheckResponse.Merge(dst, src) } func (m *HealthCheckResponse) XXX_Size() int { return xxx_messageInfo_HealthCheckResponse.Size(m) } func (m *HealthCheckResponse) XXX_DiscardUnknown() { xxx_messageInfo_HealthCheckResponse.DiscardUnknown(m) } var xxx_messageInfo_HealthCheckResponse proto.InternalMessageInfo func 
(m *HealthCheckResponse) GetStatus() HealthCheckResponse_ServingStatus { if m != nil { return m.Status } return HealthCheckResponse_UNKNOWN } func init() { proto.RegisterType((*HealthCheckRequest)(nil), "grpc.health.v1.HealthCheckRequest") proto.RegisterType((*HealthCheckResponse)(nil), "grpc.health.v1.HealthCheckResponse") proto.RegisterEnum("grpc.health.v1.HealthCheckResponse_ServingStatus", HealthCheckResponse_ServingStatus_name, HealthCheckResponse_ServingStatus_value) } // Reference imports to suppress errors if they are not otherwise used. var _ context.Context var _ grpc.ClientConn // This is a compile-time assertion to ensure that this generated file // is compatible with the grpc package it is being compiled against. const _ = grpc.SupportPackageIsVersion4 // HealthClient is the client API for Health service. // // For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream. type HealthClient interface { // If the requested service is unknown, the call will fail with status // NOT_FOUND. Check(ctx context.Context, in *HealthCheckRequest, opts ...grpc.CallOption) (*HealthCheckResponse, error) // Performs a watch for the serving status of the requested service. // The server will immediately send back a message indicating the current // serving status. It will then subsequently send a new message whenever // the service's serving status changes. // // If the requested service is unknown when the call is received, the // server will send a message setting the serving status to // SERVICE_UNKNOWN but will *not* terminate the call. If at some // future point, the serving status of the service becomes known, the // server will send a new message with the service's serving status. // // If the call terminates with status UNIMPLEMENTED, then clients // should assume this method is not supported and should not retry the // call. If the call terminates with any other status (including OK), // clients should retry the call with appropriate exponential backoff. Watch(ctx context.Context, in *HealthCheckRequest, opts ...grpc.CallOption) (Health_WatchClient, error) } type healthClient struct { cc *grpc.ClientConn } func NewHealthClient(cc *grpc.ClientConn) HealthClient { return &healthClient{cc} } func (c *healthClient) Check(ctx context.Context, in *HealthCheckRequest, opts ...grpc.CallOption) (*HealthCheckResponse, error) { out := new(HealthCheckResponse) err := c.cc.Invoke(ctx, "/grpc.health.v1.Health/Check", in, out, opts...) if err != nil { return nil, err } return out, nil } func (c *healthClient) Watch(ctx context.Context, in *HealthCheckRequest, opts ...grpc.CallOption) (Health_WatchClient, error) { stream, err := c.cc.NewStream(ctx, &_Health_serviceDesc.Streams[0], "/grpc.health.v1.Health/Watch", opts...) if err != nil { return nil, err } x := &healthWatchClient{stream} if err := x.ClientStream.SendMsg(in); err != nil { return nil, err } if err := x.ClientStream.CloseSend(); err != nil { return nil, err } return x, nil } type Health_WatchClient interface { Recv() (*HealthCheckResponse, error) grpc.ClientStream } type healthWatchClient struct { grpc.ClientStream } func (x *healthWatchClient) Recv() (*HealthCheckResponse, error) { m := new(HealthCheckResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } // HealthServer is the server API for Health service. 
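// A minimal registration sketch (illustrative only; the same wiring appears in
// health/server_test.go further below):
//
//	s := grpc.NewServer()
//	healthgrpc.RegisterHealthServer(s, health.NewServer())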
type HealthServer interface { // If the requested service is unknown, the call will fail with status // NOT_FOUND. Check(context.Context, *HealthCheckRequest) (*HealthCheckResponse, error) // Performs a watch for the serving status of the requested service. // The server will immediately send back a message indicating the current // serving status. It will then subsequently send a new message whenever // the service's serving status changes. // // If the requested service is unknown when the call is received, the // server will send a message setting the serving status to // SERVICE_UNKNOWN but will *not* terminate the call. If at some // future point, the serving status of the service becomes known, the // server will send a new message with the service's serving status. // // If the call terminates with status UNIMPLEMENTED, then clients // should assume this method is not supported and should not retry the // call. If the call terminates with any other status (including OK), // clients should retry the call with appropriate exponential backoff. Watch(*HealthCheckRequest, Health_WatchServer) error } func RegisterHealthServer(s *grpc.Server, srv HealthServer) { s.RegisterService(&_Health_serviceDesc, srv) } func _Health_Check_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(HealthCheckRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(HealthServer).Check(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/grpc.health.v1.Health/Check", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(HealthServer).Check(ctx, req.(*HealthCheckRequest)) } return interceptor(ctx, in, info, handler) } func _Health_Watch_Handler(srv interface{}, stream grpc.ServerStream) error { m := new(HealthCheckRequest) if err := stream.RecvMsg(m); err != nil { return err } return srv.(HealthServer).Watch(m, &healthWatchServer{stream}) } type Health_WatchServer interface { Send(*HealthCheckResponse) error grpc.ServerStream } type healthWatchServer struct { grpc.ServerStream } func (x *healthWatchServer) Send(m *HealthCheckResponse) error { return x.ServerStream.SendMsg(m) } var _Health_serviceDesc = grpc.ServiceDesc{ ServiceName: "grpc.health.v1.Health", HandlerType: (*HealthServer)(nil), Methods: []grpc.MethodDesc{ { MethodName: "Check", Handler: _Health_Check_Handler, }, }, Streams: []grpc.StreamDesc{ { StreamName: "Watch", Handler: _Health_Watch_Handler, ServerStreams: true, }, }, Metadata: "grpc/health/v1/health.proto", } func init() { proto.RegisterFile("grpc/health/v1/health.proto", fileDescriptor_health_6b1a06aa67f91efd) } var fileDescriptor_health_6b1a06aa67f91efd = []byte{ // 297 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x92, 0x4e, 0x2f, 0x2a, 0x48, 0xd6, 0xcf, 0x48, 0x4d, 0xcc, 0x29, 0xc9, 0xd0, 0x2f, 0x33, 0x84, 0xb2, 0xf4, 0x0a, 0x8a, 0xf2, 0x4b, 0xf2, 0x85, 0xf8, 0x40, 0x92, 0x7a, 0x50, 0xa1, 0x32, 0x43, 0x25, 0x3d, 0x2e, 0x21, 0x0f, 0x30, 0xc7, 0x39, 0x23, 0x35, 0x39, 0x3b, 0x28, 0xb5, 0xb0, 0x34, 0xb5, 0xb8, 0x44, 0x48, 0x82, 0x8b, 0xbd, 0x38, 0xb5, 0xa8, 0x2c, 0x33, 0x39, 0x55, 0x82, 0x51, 0x81, 0x51, 0x83, 0x33, 0x08, 0xc6, 0x55, 0xda, 0xc8, 0xc8, 0x25, 0x8c, 0xa2, 0xa1, 0xb8, 0x20, 0x3f, 0xaf, 0x38, 0x55, 0xc8, 0x93, 0x8b, 0xad, 0xb8, 0x24, 0xb1, 0xa4, 0xb4, 0x18, 0xac, 0x81, 0xcf, 0xc8, 0x50, 0x0f, 0xd5, 0x22, 0x3d, 0x2c, 0x9a, 0xf4, 0x82, 
0x41, 0x86, 0xe6, 0xa5, 0x07, 0x83, 0x35, 0x06, 0x41, 0x0d, 0x50, 0xf2, 0xe7, 0xe2, 0x45, 0x91, 0x10, 0xe2, 0xe6, 0x62, 0x0f, 0xf5, 0xf3, 0xf6, 0xf3, 0x0f, 0xf7, 0x13, 0x60, 0x00, 0x71, 0x82, 0x5d, 0x83, 0xc2, 0x3c, 0xfd, 0xdc, 0x05, 0x18, 0x85, 0xf8, 0xb9, 0xb8, 0xfd, 0xfc, 0x43, 0xe2, 0x61, 0x02, 0x4c, 0x42, 0xc2, 0x5c, 0xfc, 0x60, 0x8e, 0xb3, 0x6b, 0x3c, 0x4c, 0x0b, 0xb3, 0xd1, 0x3a, 0x46, 0x2e, 0x36, 0x88, 0xf5, 0x42, 0x01, 0x5c, 0xac, 0x60, 0x27, 0x08, 0x29, 0xe1, 0x75, 0x1f, 0x38, 0x14, 0xa4, 0x94, 0x89, 0xf0, 0x83, 0x50, 0x10, 0x17, 0x6b, 0x78, 0x62, 0x49, 0x72, 0x06, 0xd5, 0x4c, 0x34, 0x60, 0x74, 0x4a, 0xe4, 0x12, 0xcc, 0xcc, 0x47, 0x53, 0xea, 0xc4, 0x0d, 0x51, 0x1b, 0x00, 0x8a, 0xc6, 0x00, 0xc6, 0x28, 0x9d, 0xf4, 0xfc, 0xfc, 0xf4, 0x9c, 0x54, 0xbd, 0xf4, 0xfc, 0x9c, 0xc4, 0xbc, 0x74, 0xbd, 0xfc, 0xa2, 0x74, 0x7d, 0xe4, 0x78, 0x07, 0xb1, 0xe3, 0x21, 0xec, 0xf8, 0x32, 0xc3, 0x55, 0x4c, 0x7c, 0xee, 0x20, 0xd3, 0x20, 0x46, 0xe8, 0x85, 0x19, 0x26, 0xb1, 0x81, 0x93, 0x83, 0x31, 0x20, 0x00, 0x00, 0xff, 0xff, 0x12, 0x7d, 0x96, 0xcb, 0x2d, 0x02, 0x00, 0x00, } grpc-go-1.22.1/health/regenerate.sh000077500000000000000000000017571351635773100171050ustar00rootroot00000000000000#!/bin/bash # Copyright 2018 gRPC authors. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. set -eux -o pipefail TMP=$(mktemp -d) function finish { rm -rf "$TMP" } trap finish EXIT pushd "$TMP" mkdir -p grpc/health/v1 curl https://raw.githubusercontent.com/grpc/grpc-proto/master/grpc/health/v1/health.proto > grpc/health/v1/health.proto protoc --go_out=plugins=grpc,paths=source_relative:. -I. grpc/health/v1/*.proto popd rm -f grpc_health_v1/*.pb.go cp "$TMP"/grpc/health/v1/*.pb.go grpc_health_v1/ grpc-go-1.22.1/health/server.go000066400000000000000000000126131351635773100162530ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ //go:generate ./regenerate.sh // Package health provides a service that exposes server's health and it must be // imported to enable support for client-side health checks. package health import ( "context" "sync" "google.golang.org/grpc/codes" "google.golang.org/grpc/grpclog" healthgrpc "google.golang.org/grpc/health/grpc_health_v1" healthpb "google.golang.org/grpc/health/grpc_health_v1" "google.golang.org/grpc/status" ) // Server implements `service Health`. type Server struct { mu sync.Mutex // If shutdown is true, it's expected all serving status is NOT_SERVING, and // will stay in NOT_SERVING. 
shutdown bool // statusMap stores the serving status of the services this Server monitors. statusMap map[string]healthpb.HealthCheckResponse_ServingStatus updates map[string]map[healthgrpc.Health_WatchServer]chan healthpb.HealthCheckResponse_ServingStatus } // NewServer returns a new Server. func NewServer() *Server { return &Server{ statusMap: map[string]healthpb.HealthCheckResponse_ServingStatus{"": healthpb.HealthCheckResponse_SERVING}, updates: make(map[string]map[healthgrpc.Health_WatchServer]chan healthpb.HealthCheckResponse_ServingStatus), } } // Check implements `service Health`. func (s *Server) Check(ctx context.Context, in *healthpb.HealthCheckRequest) (*healthpb.HealthCheckResponse, error) { s.mu.Lock() defer s.mu.Unlock() if servingStatus, ok := s.statusMap[in.Service]; ok { return &healthpb.HealthCheckResponse{ Status: servingStatus, }, nil } return nil, status.Error(codes.NotFound, "unknown service") } // Watch implements `service Health`. func (s *Server) Watch(in *healthpb.HealthCheckRequest, stream healthgrpc.Health_WatchServer) error { service := in.Service // update channel is used for getting service status updates. update := make(chan healthpb.HealthCheckResponse_ServingStatus, 1) s.mu.Lock() // Puts the initial status to the channel. if servingStatus, ok := s.statusMap[service]; ok { update <- servingStatus } else { update <- healthpb.HealthCheckResponse_SERVICE_UNKNOWN } // Registers the update channel to the correct place in the updates map. if _, ok := s.updates[service]; !ok { s.updates[service] = make(map[healthgrpc.Health_WatchServer]chan healthpb.HealthCheckResponse_ServingStatus) } s.updates[service][stream] = update defer func() { s.mu.Lock() delete(s.updates[service], stream) s.mu.Unlock() }() s.mu.Unlock() var lastSentStatus healthpb.HealthCheckResponse_ServingStatus = -1 for { select { // Status updated. Sends the up-to-date status to the client. case servingStatus := <-update: if lastSentStatus == servingStatus { continue } lastSentStatus = servingStatus err := stream.Send(&healthpb.HealthCheckResponse{Status: servingStatus}) if err != nil { return status.Error(codes.Canceled, "Stream has ended.") } // Context done. Removes the update channel from the updates map. case <-stream.Context().Done(): return status.Error(codes.Canceled, "Stream has ended.") } } } // SetServingStatus is called when need to reset the serving status of a service // or insert a new service entry into the statusMap. func (s *Server) SetServingStatus(service string, servingStatus healthpb.HealthCheckResponse_ServingStatus) { s.mu.Lock() defer s.mu.Unlock() if s.shutdown { grpclog.Infof("health: status changing for %s to %v is ignored because health service is shutdown", service, servingStatus) return } s.setServingStatusLocked(service, servingStatus) } func (s *Server) setServingStatusLocked(service string, servingStatus healthpb.HealthCheckResponse_ServingStatus) { s.statusMap[service] = servingStatus for _, update := range s.updates[service] { // Clears previous updates, that are not sent to the client, from the channel. // This can happen if the client is not reading and the server gets flow control limited. select { case <-update: default: } // Puts the most recent update to the channel. update <- servingStatus } } // Shutdown sets all serving status to NOT_SERVING, and configures the server to // ignore all future status changes. // // This changes serving status for all services. To set status for a perticular // services, call SetServingStatus(). 
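// A hedged illustration of the intended flow, based on the implementation
// below (the service name is hypothetical):
//
//	s.Shutdown()                                                          // every known service -> NOT_SERVING
//	s.SetServingStatus("acme.Foo", healthpb.HealthCheckResponse_SERVING)  // ignored while shut down
//	s.Resume()                                                            // every known service -> SERVING again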
func (s *Server) Shutdown() { s.mu.Lock() defer s.mu.Unlock() s.shutdown = true for service := range s.statusMap { s.setServingStatusLocked(service, healthpb.HealthCheckResponse_NOT_SERVING) } } // Resume sets all serving status to SERVING, and configures the server to // accept all future status changes. // // This changes serving status for all services. To set status for a perticular // services, call SetServingStatus(). func (s *Server) Resume() { s.mu.Lock() defer s.mu.Unlock() s.shutdown = false for service := range s.statusMap { s.setServingStatusLocked(service, healthpb.HealthCheckResponse_SERVING) } } grpc-go-1.22.1/health/server_internal_test.go000066400000000000000000000041221351635773100212020ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package health import ( "sync" "testing" "time" healthpb "google.golang.org/grpc/health/grpc_health_v1" ) func TestShutdown(t *testing.T) { const testService = "tteesstt" s := NewServer() s.SetServingStatus(testService, healthpb.HealthCheckResponse_SERVING) status := s.statusMap[testService] if status != healthpb.HealthCheckResponse_SERVING { t.Fatalf("status for %s is %v, want %v", testService, status, healthpb.HealthCheckResponse_SERVING) } var wg sync.WaitGroup wg.Add(2) // Run SetServingStatus and Shutdown in parallel. go func() { for i := 0; i < 1000; i++ { s.SetServingStatus(testService, healthpb.HealthCheckResponse_SERVING) time.Sleep(time.Microsecond) } wg.Done() }() go func() { time.Sleep(300 * time.Microsecond) s.Shutdown() wg.Done() }() wg.Wait() s.mu.Lock() status = s.statusMap[testService] s.mu.Unlock() if status != healthpb.HealthCheckResponse_NOT_SERVING { t.Fatalf("status for %s is %v, want %v", testService, status, healthpb.HealthCheckResponse_NOT_SERVING) } s.Resume() status = s.statusMap[testService] if status != healthpb.HealthCheckResponse_SERVING { t.Fatalf("status for %s is %v, want %v", testService, status, healthpb.HealthCheckResponse_SERVING) } s.SetServingStatus(testService, healthpb.HealthCheckResponse_NOT_SERVING) status = s.statusMap[testService] if status != healthpb.HealthCheckResponse_NOT_SERVING { t.Fatalf("status for %s is %v, want %v", testService, status, healthpb.HealthCheckResponse_NOT_SERVING) } } grpc-go-1.22.1/health/server_test.go000066400000000000000000000017101351635773100173060ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package health_test import ( "testing" "google.golang.org/grpc" "google.golang.org/grpc/health" healthgrpc "google.golang.org/grpc/health/grpc_health_v1" ) // Make sure the service implementation complies with the proto definition. func TestRegister(t *testing.T) { s := grpc.NewServer() healthgrpc.RegisterHealthServer(s, health.NewServer()) s.Stop() } grpc-go-1.22.1/install_gae.sh000077500000000000000000000003541351635773100157710ustar00rootroot00000000000000#!/bin/bash TMP=$(mktemp -d /tmp/sdk.XXX) \ && curl -o $TMP.zip "https://storage.googleapis.com/appengine-sdks/featured/go_appengine_sdk_linux_amd64-1.9.68.zip" \ && unzip -q $TMP.zip -d $TMP \ && export PATH="$PATH:$TMP/go_appengine" grpc-go-1.22.1/interceptor.go000066400000000000000000000077021351635773100160410ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "context" ) // UnaryInvoker is called by UnaryClientInterceptor to complete RPCs. type UnaryInvoker func(ctx context.Context, method string, req, reply interface{}, cc *ClientConn, opts ...CallOption) error // UnaryClientInterceptor intercepts the execution of a unary RPC on the client. invoker is the handler to complete the RPC // and it is the responsibility of the interceptor to call it. // This is an EXPERIMENTAL API. type UnaryClientInterceptor func(ctx context.Context, method string, req, reply interface{}, cc *ClientConn, invoker UnaryInvoker, opts ...CallOption) error // Streamer is called by StreamClientInterceptor to create a ClientStream. type Streamer func(ctx context.Context, desc *StreamDesc, cc *ClientConn, method string, opts ...CallOption) (ClientStream, error) // StreamClientInterceptor intercepts the creation of ClientStream. It may return a custom ClientStream to intercept all I/O // operations. streamer is the handler to create a ClientStream and it is the responsibility of the interceptor to call it. // This is an EXPERIMENTAL API. type StreamClientInterceptor func(ctx context.Context, desc *StreamDesc, cc *ClientConn, method string, streamer Streamer, opts ...CallOption) (ClientStream, error) // UnaryServerInfo consists of various information about a unary RPC on // server side. All per-rpc information may be mutated by the interceptor. type UnaryServerInfo struct { // Server is the service implementation the user provides. This is read-only. Server interface{} // FullMethod is the full RPC method string, i.e., /package.service/method. FullMethod string } // UnaryHandler defines the handler invoked by UnaryServerInterceptor to complete the normal // execution of a unary RPC. If a UnaryHandler returns an error, it should be produced by the // status package, or else gRPC will use codes.Unknown as the status code and err.Error() as // the status message of the RPC. type UnaryHandler func(ctx context.Context, req interface{}) (interface{}, error) // UnaryServerInterceptor provides a hook to intercept the execution of a unary RPC on the server. 
info // contains all the information of this RPC the interceptor can operate on. And handler is the wrapper // of the service method implementation. It is the responsibility of the interceptor to invoke handler // to complete the RPC. type UnaryServerInterceptor func(ctx context.Context, req interface{}, info *UnaryServerInfo, handler UnaryHandler) (resp interface{}, err error) // StreamServerInfo consists of various information about a streaming RPC on // server side. All per-rpc information may be mutated by the interceptor. type StreamServerInfo struct { // FullMethod is the full RPC method string, i.e., /package.service/method. FullMethod string // IsClientStream indicates whether the RPC is a client streaming RPC. IsClientStream bool // IsServerStream indicates whether the RPC is a server streaming RPC. IsServerStream bool } // StreamServerInterceptor provides a hook to intercept the execution of a streaming RPC on the server. // info contains all the information of this RPC the interceptor can operate on. And handler is the // service method implementation. It is the responsibility of the interceptor to invoke handler to // complete the RPC. type StreamServerInterceptor func(srv interface{}, ss ServerStream, info *StreamServerInfo, handler StreamHandler) error grpc-go-1.22.1/internal/000077500000000000000000000000001351635773100147625ustar00rootroot00000000000000grpc-go-1.22.1/internal/backoff/000077500000000000000000000000001351635773100163555ustar00rootroot00000000000000grpc-go-1.22.1/internal/backoff/backoff.go000066400000000000000000000043201351635773100202760ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package backoff implement the backoff strategy for gRPC. // // This is kept in internal until the gRPC project decides whether or not to // allow alternative backoff strategies. package backoff import ( "time" "google.golang.org/grpc/internal/grpcrand" ) // Strategy defines the methodology for backing off after a grpc connection // failure. // type Strategy interface { // Backoff returns the amount of time to wait before the next retry given // the number of consecutive failures. Backoff(retries int) time.Duration } const ( // baseDelay is the amount of time to wait before retrying after the first // failure. baseDelay = 1.0 * time.Second // factor is applied to the backoff after each retry. factor = 1.6 // jitter provides a range to randomize backoff delays. jitter = 0.2 ) // Exponential implements exponential backoff algorithm as defined in // https://github.com/grpc/grpc/blob/master/doc/connection-backoff.md. type Exponential struct { // MaxDelay is the upper bound of backoff delay. MaxDelay time.Duration } // Backoff returns the amount of time to wait before the next retry given the // number of retries. 
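// For illustration only: with baseDelay = 1s, factor = 1.6 and MaxDelay = 120s,
// retries = 0 returns exactly 1s, while retries = 3 yields roughly
// 1s * 1.6^3 ≈ 4.1s before the ±20% jitter is applied; the result is always
// capped at MaxDelay.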
func (bc Exponential) Backoff(retries int) time.Duration { if retries == 0 { return baseDelay } backoff, max := float64(baseDelay), float64(bc.MaxDelay) for backoff < max && retries > 0 { backoff *= factor retries-- } if backoff > max { backoff = max } // Randomize backoff delays so that if a cluster of requests start at // the same time, they won't operate in lockstep. backoff *= 1 + jitter*(grpcrand.Float64()*2-1) if backoff < 0 { return 0 } return time.Duration(backoff) } grpc-go-1.22.1/internal/balancerload/000077500000000000000000000000001351635773100173715ustar00rootroot00000000000000grpc-go-1.22.1/internal/balancerload/load.go000066400000000000000000000023521351635773100206410ustar00rootroot00000000000000/* * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ // Package balancerload defines APIs to parse server loads in trailers. The // parsed loads are sent to balancers in DoneInfo. package balancerload import ( "google.golang.org/grpc/metadata" ) // Parser converts loads from metadata into a concrete type. type Parser interface { // Parse parses loads from metadata. Parse(md metadata.MD) interface{} } var parser Parser // SetParser sets the load parser. // // Not mutex-protected, should be called before any gRPC functions. func SetParser(lr Parser) { parser = lr } // Parse calls parser.Read(). func Parse(md metadata.MD) interface{} { if parser == nil { return nil } return parser.Parse(md) } grpc-go-1.22.1/internal/binarylog/000077500000000000000000000000001351635773100167505ustar00rootroot00000000000000grpc-go-1.22.1/internal/binarylog/binarylog.go000066400000000000000000000111021351635773100212600ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package binarylog implementation binary logging as defined in // https://github.com/grpc/proposal/blob/master/A16-binary-logging.md. package binarylog import ( "fmt" "os" "google.golang.org/grpc/grpclog" ) // Logger is the global binary logger. It can be used to get binary logger for // each method. type Logger interface { getMethodLogger(methodName string) *MethodLogger } // binLogger is the global binary logger for the binary. One of this should be // built at init time from the configuration (environment varialbe or flags). // // It is used to get a methodLogger for each individual method. var binLogger Logger // SetLogger sets the binarg logger. // // Only call this at init time. 
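// A usage sketch from a caller's point of view (illustrative; it mirrors the
// end-to-end test later in this package, which installs the catch-all logger):
//
//	func init() {
//		binarylog.SetLogger(binarylog.AllLogger)
//	}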
func SetLogger(l Logger) { binLogger = l } // GetMethodLogger returns the methodLogger for the given methodName. // // methodName should be in the format of "/service/method". // // Each methodLogger returned by this method is a new instance. This is to // generate sequence id within the call. func GetMethodLogger(methodName string) *MethodLogger { if binLogger == nil { return nil } return binLogger.getMethodLogger(methodName) } func init() { const envStr = "GRPC_BINARY_LOG_FILTER" configStr := os.Getenv(envStr) binLogger = NewLoggerFromConfigString(configStr) } type methodLoggerConfig struct { // Max length of header and message. hdr, msg uint64 } type logger struct { all *methodLoggerConfig services map[string]*methodLoggerConfig methods map[string]*methodLoggerConfig blacklist map[string]struct{} } // newEmptyLogger creates an empty logger. The map fields need to be filled in // using the set* functions. func newEmptyLogger() *logger { return &logger{} } // Set method logger for "*". func (l *logger) setDefaultMethodLogger(ml *methodLoggerConfig) error { if l.all != nil { return fmt.Errorf("conflicting global rules found") } l.all = ml return nil } // Set method logger for "service/*". // // New methodLogger with same service overrides the old one. func (l *logger) setServiceMethodLogger(service string, ml *methodLoggerConfig) error { if _, ok := l.services[service]; ok { return fmt.Errorf("conflicting rules for service %v found", service) } if l.services == nil { l.services = make(map[string]*methodLoggerConfig) } l.services[service] = ml return nil } // Set method logger for "service/method". // // New methodLogger with same method overrides the old one. func (l *logger) setMethodMethodLogger(method string, ml *methodLoggerConfig) error { if _, ok := l.blacklist[method]; ok { return fmt.Errorf("conflicting rules for method %v found", method) } if _, ok := l.methods[method]; ok { return fmt.Errorf("conflicting rules for method %v found", method) } if l.methods == nil { l.methods = make(map[string]*methodLoggerConfig) } l.methods[method] = ml return nil } // Set blacklist method for "-service/method". func (l *logger) setBlacklist(method string) error { if _, ok := l.blacklist[method]; ok { return fmt.Errorf("conflicting rules for method %v found", method) } if _, ok := l.methods[method]; ok { return fmt.Errorf("conflicting rules for method %v found", method) } if l.blacklist == nil { l.blacklist = make(map[string]struct{}) } l.blacklist[method] = struct{}{} return nil } // getMethodLogger returns the methodLogger for the given methodName. // // methodName should be in the format of "/service/method". // // Each methodLogger returned by this method is a new instance. This is to // generate sequence id within the call. func (l *logger) getMethodLogger(methodName string) *MethodLogger { s, m, err := parseMethodName(methodName) if err != nil { grpclog.Infof("binarylogging: failed to parse %q: %v", methodName, err) return nil } if ml, ok := l.methods[s+"/"+m]; ok { return newMethodLogger(ml.hdr, ml.msg) } if _, ok := l.blacklist[s+"/"+m]; ok { return nil } if ml, ok := l.services[s]; ok { return newMethodLogger(ml.hdr, ml.msg) } if l.all == nil { return nil } return newMethodLogger(l.all.hdr, l.all.msg) } grpc-go-1.22.1/internal/binarylog/binarylog_end2end_test.go000066400000000000000000000676001351635773100237340ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. 
* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package binarylog_test import ( "context" "fmt" "io" "net" "sort" "sync" "testing" "time" "github.com/golang/protobuf/proto" "google.golang.org/grpc" pb "google.golang.org/grpc/binarylog/grpc_binarylog_v1" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/internal/binarylog" "google.golang.org/grpc/metadata" testpb "google.golang.org/grpc/stats/grpc_testing" "google.golang.org/grpc/status" ) func init() { // Setting environment variable in tests doesn't work because of the init // orders. Set the loggers directly here. binarylog.SetLogger(binarylog.AllLogger) binarylog.SetDefaultSink(testSink) } var testSink = &testBinLogSink{} type testBinLogSink struct { mu sync.Mutex buf []*pb.GrpcLogEntry } func (s *testBinLogSink) Write(e *pb.GrpcLogEntry) error { s.mu.Lock() s.buf = append(s.buf, e) s.mu.Unlock() return nil } func (s *testBinLogSink) Close() error { return nil } // Returns all client entris if client is true, otherwise return all server // entries. func (s *testBinLogSink) logEntries(client bool) []*pb.GrpcLogEntry { logger := pb.GrpcLogEntry_LOGGER_SERVER if client { logger = pb.GrpcLogEntry_LOGGER_CLIENT } var ret []*pb.GrpcLogEntry s.mu.Lock() for _, e := range s.buf { if e.Logger == logger { ret = append(ret, e) } } s.mu.Unlock() return ret } func (s *testBinLogSink) clear() { s.mu.Lock() s.buf = nil s.mu.Unlock() } var ( // For headers: testMetadata = metadata.MD{ "key1": []string{"value1"}, "key2": []string{"value2"}, } // For trailers: testTrailerMetadata = metadata.MD{ "tkey1": []string{"trailerValue1"}, "tkey2": []string{"trailerValue2"}, } // The id for which the service handler should return error. errorID int32 = 32202 globalRPCID uint64 // RPC id starts with 1, but we do ++ at the beginning of each test. ) type testServer struct { te *test } func (s *testServer) UnaryCall(ctx context.Context, in *testpb.SimpleRequest) (*testpb.SimpleResponse, error) { md, ok := metadata.FromIncomingContext(ctx) if ok { if err := grpc.SendHeader(ctx, md); err != nil { return nil, status.Errorf(status.Code(err), "grpc.SendHeader(_, %v) = %v, want ", md, err) } if err := grpc.SetTrailer(ctx, testTrailerMetadata); err != nil { return nil, status.Errorf(status.Code(err), "grpc.SetTrailer(_, %v) = %v, want ", testTrailerMetadata, err) } } if in.Id == errorID { return nil, fmt.Errorf("got error id: %v", in.Id) } return &testpb.SimpleResponse{Id: in.Id}, nil } func (s *testServer) FullDuplexCall(stream testpb.TestService_FullDuplexCallServer) error { md, ok := metadata.FromIncomingContext(stream.Context()) if ok { if err := stream.SendHeader(md); err != nil { return status.Errorf(status.Code(err), "stream.SendHeader(%v) = %v, want %v", md, err, nil) } stream.SetTrailer(testTrailerMetadata) } for { in, err := stream.Recv() if err == io.EOF { // read done. 
return nil } if err != nil { return err } if in.Id == errorID { return fmt.Errorf("got error id: %v", in.Id) } if err := stream.Send(&testpb.SimpleResponse{Id: in.Id}); err != nil { return err } } } func (s *testServer) ClientStreamCall(stream testpb.TestService_ClientStreamCallServer) error { md, ok := metadata.FromIncomingContext(stream.Context()) if ok { if err := stream.SendHeader(md); err != nil { return status.Errorf(status.Code(err), "stream.SendHeader(%v) = %v, want %v", md, err, nil) } stream.SetTrailer(testTrailerMetadata) } for { in, err := stream.Recv() if err == io.EOF { // read done. return stream.SendAndClose(&testpb.SimpleResponse{Id: int32(0)}) } if err != nil { return err } if in.Id == errorID { return fmt.Errorf("got error id: %v", in.Id) } } } func (s *testServer) ServerStreamCall(in *testpb.SimpleRequest, stream testpb.TestService_ServerStreamCallServer) error { md, ok := metadata.FromIncomingContext(stream.Context()) if ok { if err := stream.SendHeader(md); err != nil { return status.Errorf(status.Code(err), "stream.SendHeader(%v) = %v, want %v", md, err, nil) } stream.SetTrailer(testTrailerMetadata) } if in.Id == errorID { return fmt.Errorf("got error id: %v", in.Id) } for i := 0; i < 5; i++ { if err := stream.Send(&testpb.SimpleResponse{Id: in.Id}); err != nil { return err } } return nil } // test is an end-to-end test. It should be created with the newTest // func, modified as needed, and then started with its startServer method. // It should be cleaned up with the tearDown method. type test struct { t *testing.T testServer testpb.TestServiceServer // nil means none // srv and srvAddr are set once startServer is called. srv *grpc.Server srvAddr string // Server IP without port. srvIP net.IP srvPort int cc *grpc.ClientConn // nil until requested via clientConn // Fields for client address. Set by the service handler. clientAddrMu sync.Mutex clientIP net.IP clientPort int } func (te *test) tearDown() { if te.cc != nil { te.cc.Close() te.cc = nil } te.srv.Stop() } type testConfig struct { } // newTest returns a new test using the provided testing.T and // environment. It is returned with default values. Tests should // modify it before calling its startServer and clientConn methods. func newTest(t *testing.T, tc *testConfig) *test { te := &test{ t: t, } return te } type listenerWrapper struct { net.Listener te *test } func (lw *listenerWrapper) Accept() (net.Conn, error) { conn, err := lw.Listener.Accept() if err != nil { return nil, err } lw.te.clientAddrMu.Lock() lw.te.clientIP = conn.RemoteAddr().(*net.TCPAddr).IP lw.te.clientPort = conn.RemoteAddr().(*net.TCPAddr).Port lw.te.clientAddrMu.Unlock() return conn, nil } // startServer starts a gRPC server listening. Callers should defer a // call to te.tearDown to clean up. func (te *test) startServer(ts testpb.TestServiceServer) { te.testServer = ts lis, err := net.Listen("tcp", "localhost:0") lis = &listenerWrapper{ Listener: lis, te: te, } if err != nil { te.t.Fatalf("Failed to listen: %v", err) } var opts []grpc.ServerOption s := grpc.NewServer(opts...) te.srv = s if te.testServer != nil { testpb.RegisterTestServiceServer(s, te.testServer) } go s.Serve(lis) te.srvAddr = lis.Addr().String() te.srvIP = lis.Addr().(*net.TCPAddr).IP te.srvPort = lis.Addr().(*net.TCPAddr).Port } func (te *test) clientConn() *grpc.ClientConn { if te.cc != nil { return te.cc } opts := []grpc.DialOption{grpc.WithInsecure(), grpc.WithBlock()} var err error te.cc, err = grpc.Dial(te.srvAddr, opts...) 
if err != nil { te.t.Fatalf("Dial(%q) = %v", te.srvAddr, err) } return te.cc } type rpcType int const ( unaryRPC rpcType = iota clientStreamRPC serverStreamRPC fullDuplexStreamRPC cancelRPC ) type rpcConfig struct { count int // Number of requests and responses for streaming RPCs. success bool // Whether the RPC should succeed or return error. callType rpcType // Type of RPC. } func (te *test) doUnaryCall(c *rpcConfig) (*testpb.SimpleRequest, *testpb.SimpleResponse, error) { var ( resp *testpb.SimpleResponse req *testpb.SimpleRequest err error ) tc := testpb.NewTestServiceClient(te.clientConn()) if c.success { req = &testpb.SimpleRequest{Id: errorID + 1} } else { req = &testpb.SimpleRequest{Id: errorID} } ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() ctx = metadata.NewOutgoingContext(ctx, testMetadata) resp, err = tc.UnaryCall(ctx, req) return req, resp, err } func (te *test) doFullDuplexCallRoundtrip(c *rpcConfig) ([]*testpb.SimpleRequest, []*testpb.SimpleResponse, error) { var ( reqs []*testpb.SimpleRequest resps []*testpb.SimpleResponse err error ) tc := testpb.NewTestServiceClient(te.clientConn()) ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() ctx = metadata.NewOutgoingContext(ctx, testMetadata) stream, err := tc.FullDuplexCall(ctx) if err != nil { return reqs, resps, err } if c.callType == cancelRPC { cancel() return reqs, resps, context.Canceled } var startID int32 if !c.success { startID = errorID } for i := 0; i < c.count; i++ { req := &testpb.SimpleRequest{ Id: int32(i) + startID, } reqs = append(reqs, req) if err = stream.Send(req); err != nil { return reqs, resps, err } var resp *testpb.SimpleResponse if resp, err = stream.Recv(); err != nil { return reqs, resps, err } resps = append(resps, resp) } if err = stream.CloseSend(); err != nil && err != io.EOF { return reqs, resps, err } if _, err = stream.Recv(); err != io.EOF { return reqs, resps, err } return reqs, resps, nil } func (te *test) doClientStreamCall(c *rpcConfig) ([]*testpb.SimpleRequest, *testpb.SimpleResponse, error) { var ( reqs []*testpb.SimpleRequest resp *testpb.SimpleResponse err error ) tc := testpb.NewTestServiceClient(te.clientConn()) ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() ctx = metadata.NewOutgoingContext(ctx, testMetadata) stream, err := tc.ClientStreamCall(ctx) if err != nil { return reqs, resp, err } var startID int32 if !c.success { startID = errorID } for i := 0; i < c.count; i++ { req := &testpb.SimpleRequest{ Id: int32(i) + startID, } reqs = append(reqs, req) if err = stream.Send(req); err != nil { return reqs, resp, err } } resp, err = stream.CloseAndRecv() return reqs, resp, err } func (te *test) doServerStreamCall(c *rpcConfig) (*testpb.SimpleRequest, []*testpb.SimpleResponse, error) { var ( req *testpb.SimpleRequest resps []*testpb.SimpleResponse err error ) tc := testpb.NewTestServiceClient(te.clientConn()) ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() ctx = metadata.NewOutgoingContext(ctx, testMetadata) var startID int32 if !c.success { startID = errorID } req = &testpb.SimpleRequest{Id: startID} stream, err := tc.ServerStreamCall(ctx, req) if err != nil { return req, resps, err } for { var resp *testpb.SimpleResponse resp, err := stream.Recv() if err == io.EOF { return req, resps, nil } else if err != nil { return req, resps, err } resps = append(resps, resp) } } type expectedData struct { te *test cc *rpcConfig method 
string requests []*testpb.SimpleRequest responses []*testpb.SimpleResponse err error } func (ed *expectedData) newClientHeaderEntry(client bool, rpcID, inRPCID uint64) *pb.GrpcLogEntry { logger := pb.GrpcLogEntry_LOGGER_CLIENT var peer *pb.Address if !client { logger = pb.GrpcLogEntry_LOGGER_SERVER ed.te.clientAddrMu.Lock() peer = &pb.Address{ Address: ed.te.clientIP.String(), IpPort: uint32(ed.te.clientPort), } if ed.te.clientIP.To4() != nil { peer.Type = pb.Address_TYPE_IPV4 } else { peer.Type = pb.Address_TYPE_IPV6 } ed.te.clientAddrMu.Unlock() } return &pb.GrpcLogEntry{ Timestamp: nil, CallId: rpcID, SequenceIdWithinCall: inRPCID, Type: pb.GrpcLogEntry_EVENT_TYPE_CLIENT_HEADER, Logger: logger, Payload: &pb.GrpcLogEntry_ClientHeader{ ClientHeader: &pb.ClientHeader{ Metadata: binarylog.MdToMetadataProto(testMetadata), MethodName: ed.method, Authority: ed.te.srvAddr, }, }, Peer: peer, } } func (ed *expectedData) newServerHeaderEntry(client bool, rpcID, inRPCID uint64) *pb.GrpcLogEntry { logger := pb.GrpcLogEntry_LOGGER_SERVER var peer *pb.Address if client { logger = pb.GrpcLogEntry_LOGGER_CLIENT peer = &pb.Address{ Address: ed.te.srvIP.String(), IpPort: uint32(ed.te.srvPort), } if ed.te.srvIP.To4() != nil { peer.Type = pb.Address_TYPE_IPV4 } else { peer.Type = pb.Address_TYPE_IPV6 } } return &pb.GrpcLogEntry{ Timestamp: nil, CallId: rpcID, SequenceIdWithinCall: inRPCID, Type: pb.GrpcLogEntry_EVENT_TYPE_SERVER_HEADER, Logger: logger, Payload: &pb.GrpcLogEntry_ServerHeader{ ServerHeader: &pb.ServerHeader{ Metadata: binarylog.MdToMetadataProto(testMetadata), }, }, Peer: peer, } } func (ed *expectedData) newClientMessageEntry(client bool, rpcID, inRPCID uint64, msg *testpb.SimpleRequest) *pb.GrpcLogEntry { logger := pb.GrpcLogEntry_LOGGER_CLIENT if !client { logger = pb.GrpcLogEntry_LOGGER_SERVER } data, err := proto.Marshal(msg) if err != nil { grpclog.Infof("binarylogging_testing: failed to marshal proto message: %v", err) } return &pb.GrpcLogEntry{ Timestamp: nil, CallId: rpcID, SequenceIdWithinCall: inRPCID, Type: pb.GrpcLogEntry_EVENT_TYPE_CLIENT_MESSAGE, Logger: logger, Payload: &pb.GrpcLogEntry_Message{ Message: &pb.Message{ Length: uint32(len(data)), Data: data, }, }, } } func (ed *expectedData) newServerMessageEntry(client bool, rpcID, inRPCID uint64, msg *testpb.SimpleResponse) *pb.GrpcLogEntry { logger := pb.GrpcLogEntry_LOGGER_CLIENT if !client { logger = pb.GrpcLogEntry_LOGGER_SERVER } data, err := proto.Marshal(msg) if err != nil { grpclog.Infof("binarylogging_testing: failed to marshal proto message: %v", err) } return &pb.GrpcLogEntry{ Timestamp: nil, CallId: rpcID, SequenceIdWithinCall: inRPCID, Type: pb.GrpcLogEntry_EVENT_TYPE_SERVER_MESSAGE, Logger: logger, Payload: &pb.GrpcLogEntry_Message{ Message: &pb.Message{ Length: uint32(len(data)), Data: data, }, }, } } func (ed *expectedData) newHalfCloseEntry(client bool, rpcID, inRPCID uint64) *pb.GrpcLogEntry { logger := pb.GrpcLogEntry_LOGGER_CLIENT if !client { logger = pb.GrpcLogEntry_LOGGER_SERVER } return &pb.GrpcLogEntry{ Timestamp: nil, CallId: rpcID, SequenceIdWithinCall: inRPCID, Type: pb.GrpcLogEntry_EVENT_TYPE_CLIENT_HALF_CLOSE, Payload: nil, // No payload here. 
Logger: logger, } } func (ed *expectedData) newServerTrailerEntry(client bool, rpcID, inRPCID uint64, stErr error) *pb.GrpcLogEntry { logger := pb.GrpcLogEntry_LOGGER_SERVER var peer *pb.Address if client { logger = pb.GrpcLogEntry_LOGGER_CLIENT peer = &pb.Address{ Address: ed.te.srvIP.String(), IpPort: uint32(ed.te.srvPort), } if ed.te.srvIP.To4() != nil { peer.Type = pb.Address_TYPE_IPV4 } else { peer.Type = pb.Address_TYPE_IPV6 } } st, ok := status.FromError(stErr) if !ok { grpclog.Info("binarylogging: error in trailer is not a status error") } stProto := st.Proto() var ( detailsBytes []byte err error ) if stProto != nil && len(stProto.Details) != 0 { detailsBytes, err = proto.Marshal(stProto) if err != nil { grpclog.Infof("binarylogging: failed to marshal status proto: %v", err) } } return &pb.GrpcLogEntry{ Timestamp: nil, CallId: rpcID, SequenceIdWithinCall: inRPCID, Type: pb.GrpcLogEntry_EVENT_TYPE_SERVER_TRAILER, Logger: logger, Payload: &pb.GrpcLogEntry_Trailer{ Trailer: &pb.Trailer{ Metadata: binarylog.MdToMetadataProto(testTrailerMetadata), // st will be nil if err was not a status error, but nil is ok. StatusCode: uint32(st.Code()), StatusMessage: st.Message(), StatusDetails: detailsBytes, }, }, Peer: peer, } } func (ed *expectedData) newCancelEntry(rpcID, inRPCID uint64) *pb.GrpcLogEntry { return &pb.GrpcLogEntry{ Timestamp: nil, CallId: rpcID, SequenceIdWithinCall: inRPCID, Type: pb.GrpcLogEntry_EVENT_TYPE_CANCEL, Logger: pb.GrpcLogEntry_LOGGER_CLIENT, Payload: nil, } } func (ed *expectedData) toClientLogEntries() []*pb.GrpcLogEntry { var ( ret []*pb.GrpcLogEntry idInRPC uint64 = 1 ) ret = append(ret, ed.newClientHeaderEntry(true, globalRPCID, idInRPC)) idInRPC++ switch ed.cc.callType { case unaryRPC, fullDuplexStreamRPC: for i := 0; i < len(ed.requests); i++ { ret = append(ret, ed.newClientMessageEntry(true, globalRPCID, idInRPC, ed.requests[i])) idInRPC++ if i == 0 { // First message, append ServerHeader. ret = append(ret, ed.newServerHeaderEntry(true, globalRPCID, idInRPC)) idInRPC++ } if !ed.cc.success { // There is no response in the RPC error case. 
continue } ret = append(ret, ed.newServerMessageEntry(true, globalRPCID, idInRPC, ed.responses[i])) idInRPC++ } if ed.cc.success && ed.cc.callType == fullDuplexStreamRPC { ret = append(ret, ed.newHalfCloseEntry(true, globalRPCID, idInRPC)) idInRPC++ } case clientStreamRPC, serverStreamRPC: for i := 0; i < len(ed.requests); i++ { ret = append(ret, ed.newClientMessageEntry(true, globalRPCID, idInRPC, ed.requests[i])) idInRPC++ } if ed.cc.callType == clientStreamRPC { ret = append(ret, ed.newHalfCloseEntry(true, globalRPCID, idInRPC)) idInRPC++ } ret = append(ret, ed.newServerHeaderEntry(true, globalRPCID, idInRPC)) idInRPC++ if ed.cc.success { for i := 0; i < len(ed.responses); i++ { ret = append(ret, ed.newServerMessageEntry(true, globalRPCID, idInRPC, ed.responses[0])) idInRPC++ } } } if ed.cc.callType == cancelRPC { ret = append(ret, ed.newCancelEntry(globalRPCID, idInRPC)) idInRPC++ } else { ret = append(ret, ed.newServerTrailerEntry(true, globalRPCID, idInRPC, ed.err)) idInRPC++ } return ret } func (ed *expectedData) toServerLogEntries() []*pb.GrpcLogEntry { var ( ret []*pb.GrpcLogEntry idInRPC uint64 = 1 ) ret = append(ret, ed.newClientHeaderEntry(false, globalRPCID, idInRPC)) idInRPC++ switch ed.cc.callType { case unaryRPC: ret = append(ret, ed.newClientMessageEntry(false, globalRPCID, idInRPC, ed.requests[0])) idInRPC++ ret = append(ret, ed.newServerHeaderEntry(false, globalRPCID, idInRPC)) idInRPC++ if ed.cc.success { ret = append(ret, ed.newServerMessageEntry(false, globalRPCID, idInRPC, ed.responses[0])) idInRPC++ } case fullDuplexStreamRPC: ret = append(ret, ed.newServerHeaderEntry(false, globalRPCID, idInRPC)) idInRPC++ for i := 0; i < len(ed.requests); i++ { ret = append(ret, ed.newClientMessageEntry(false, globalRPCID, idInRPC, ed.requests[i])) idInRPC++ if !ed.cc.success { // There is no response in the RPC error case. 
continue } ret = append(ret, ed.newServerMessageEntry(false, globalRPCID, idInRPC, ed.responses[i])) idInRPC++ } if ed.cc.success && ed.cc.callType == fullDuplexStreamRPC { ret = append(ret, ed.newHalfCloseEntry(false, globalRPCID, idInRPC)) idInRPC++ } case clientStreamRPC: ret = append(ret, ed.newServerHeaderEntry(false, globalRPCID, idInRPC)) idInRPC++ for i := 0; i < len(ed.requests); i++ { ret = append(ret, ed.newClientMessageEntry(false, globalRPCID, idInRPC, ed.requests[i])) idInRPC++ } if ed.cc.success { ret = append(ret, ed.newHalfCloseEntry(false, globalRPCID, idInRPC)) idInRPC++ ret = append(ret, ed.newServerMessageEntry(false, globalRPCID, idInRPC, ed.responses[0])) idInRPC++ } case serverStreamRPC: ret = append(ret, ed.newClientMessageEntry(false, globalRPCID, idInRPC, ed.requests[0])) idInRPC++ ret = append(ret, ed.newServerHeaderEntry(false, globalRPCID, idInRPC)) idInRPC++ for i := 0; i < len(ed.responses); i++ { ret = append(ret, ed.newServerMessageEntry(false, globalRPCID, idInRPC, ed.responses[0])) idInRPC++ } } ret = append(ret, ed.newServerTrailerEntry(false, globalRPCID, idInRPC, ed.err)) idInRPC++ return ret } func runRPCs(t *testing.T, tc *testConfig, cc *rpcConfig) *expectedData { te := newTest(t, tc) te.startServer(&testServer{te: te}) defer te.tearDown() expect := &expectedData{ te: te, cc: cc, } switch cc.callType { case unaryRPC: expect.method = "/grpc.testing.TestService/UnaryCall" req, resp, err := te.doUnaryCall(cc) expect.requests = []*testpb.SimpleRequest{req} expect.responses = []*testpb.SimpleResponse{resp} expect.err = err case clientStreamRPC: expect.method = "/grpc.testing.TestService/ClientStreamCall" reqs, resp, err := te.doClientStreamCall(cc) expect.requests = reqs expect.responses = []*testpb.SimpleResponse{resp} expect.err = err case serverStreamRPC: expect.method = "/grpc.testing.TestService/ServerStreamCall" req, resps, err := te.doServerStreamCall(cc) expect.responses = resps expect.requests = []*testpb.SimpleRequest{req} expect.err = err case fullDuplexStreamRPC, cancelRPC: expect.method = "/grpc.testing.TestService/FullDuplexCall" expect.requests, expect.responses, expect.err = te.doFullDuplexCallRoundtrip(cc) } if cc.success != (expect.err == nil) { t.Fatalf("cc.success: %v, got error: %v", cc.success, expect.err) } te.cc.Close() te.srv.GracefulStop() // Wait for the server to stop. return expect } // equalLogEntry sorts the metadata entries by key (to compare metadata). // // This function is typically called with only two entries. It's written in this // way so the code can be put in a for loop instead of copied twice. func equalLogEntry(entries ...*pb.GrpcLogEntry) (equal bool) { for i, e := range entries { // Clear out some fields we don't compare. e.Timestamp = nil e.CallId = 0 // CallID is global to the binary, hard to compare. if h := e.GetClientHeader(); h != nil { h.Timeout = nil tmp := append(h.Metadata.Entry[:0], h.Metadata.Entry...) h.Metadata.Entry = tmp sort.Slice(h.Metadata.Entry, func(i, j int) bool { return h.Metadata.Entry[i].Key < h.Metadata.Entry[j].Key }) } if h := e.GetServerHeader(); h != nil { tmp := append(h.Metadata.Entry[:0], h.Metadata.Entry...) 
h.Metadata.Entry = tmp sort.Slice(h.Metadata.Entry, func(i, j int) bool { return h.Metadata.Entry[i].Key < h.Metadata.Entry[j].Key }) } if h := e.GetTrailer(); h != nil { sort.Slice(h.Metadata.Entry, func(i, j int) bool { return h.Metadata.Entry[i].Key < h.Metadata.Entry[j].Key }) } if i > 0 && !proto.Equal(e, entries[i-1]) { return false } } return true } func testClientBinaryLog(t *testing.T, c *rpcConfig) error { defer testSink.clear() expect := runRPCs(t, &testConfig{}, c) want := expect.toClientLogEntries() var got []*pb.GrpcLogEntry // In racy cases, some entries are not logged when the RPC is finished (e.g. // context.Cancel). // // Check 10 times, with a sleep of 1/100 seconds between each check. Makes // it an 1-second wait in total. for i := 0; i < 10; i++ { got = testSink.logEntries(true) // all client entries. if len(want) == len(got) { break } time.Sleep(100 * time.Millisecond) } if len(want) != len(got) { for i, e := range want { t.Errorf("in want: %d, %s", i, e.GetType()) } for i, e := range got { t.Errorf("in got: %d, %s", i, e.GetType()) } return fmt.Errorf("didn't get same amount of log entries, want: %d, got: %d", len(want), len(got)) } var errored bool for i := 0; i < len(got); i++ { if !equalLogEntry(want[i], got[i]) { t.Errorf("entry: %d, want %+v, got %+v", i, want[i], got[i]) errored = true } } if errored { return fmt.Errorf("test failed") } return nil } func TestClientBinaryLogUnaryRPC(t *testing.T) { if err := testClientBinaryLog(t, &rpcConfig{success: true, callType: unaryRPC}); err != nil { t.Fatal(err) } } func TestClientBinaryLogUnaryRPCError(t *testing.T) { if err := testClientBinaryLog(t, &rpcConfig{success: false, callType: unaryRPC}); err != nil { t.Fatal(err) } } func TestClientBinaryLogClientStreamRPC(t *testing.T) { count := 5 if err := testClientBinaryLog(t, &rpcConfig{count: count, success: true, callType: clientStreamRPC}); err != nil { t.Fatal(err) } } func TestClientBinaryLogClientStreamRPCError(t *testing.T) { count := 1 if err := testClientBinaryLog(t, &rpcConfig{count: count, success: false, callType: clientStreamRPC}); err != nil { t.Fatal(err) } } func TestClientBinaryLogServerStreamRPC(t *testing.T) { count := 5 if err := testClientBinaryLog(t, &rpcConfig{count: count, success: true, callType: serverStreamRPC}); err != nil { t.Fatal(err) } } func TestClientBinaryLogServerStreamRPCError(t *testing.T) { count := 5 if err := testClientBinaryLog(t, &rpcConfig{count: count, success: false, callType: serverStreamRPC}); err != nil { t.Fatal(err) } } func TestClientBinaryLogFullDuplexRPC(t *testing.T) { count := 5 if err := testClientBinaryLog(t, &rpcConfig{count: count, success: true, callType: fullDuplexStreamRPC}); err != nil { t.Fatal(err) } } func TestClientBinaryLogFullDuplexRPCError(t *testing.T) { count := 5 if err := testClientBinaryLog(t, &rpcConfig{count: count, success: false, callType: fullDuplexStreamRPC}); err != nil { t.Fatal(err) } } func TestClientBinaryLogCancel(t *testing.T) { count := 5 if err := testClientBinaryLog(t, &rpcConfig{count: count, success: false, callType: cancelRPC}); err != nil { t.Fatal(err) } } func testServerBinaryLog(t *testing.T, c *rpcConfig) error { defer testSink.clear() expect := runRPCs(t, &testConfig{}, c) want := expect.toServerLogEntries() var got []*pb.GrpcLogEntry // In racy cases, some entries are not logged when the RPC is finished (e.g. // context.Cancel). This is unlikely to happen on server side, but it does // no harm to retry. 
// // Check 10 times, with a sleep of 1/100 seconds between each check. Makes // it an 1-second wait in total. for i := 0; i < 10; i++ { got = testSink.logEntries(false) // all server entries. if len(want) == len(got) { break } time.Sleep(100 * time.Millisecond) } if len(want) != len(got) { for i, e := range want { t.Errorf("in want: %d, %s", i, e.GetType()) } for i, e := range got { t.Errorf("in got: %d, %s", i, e.GetType()) } return fmt.Errorf("didn't get same amount of log entries, want: %d, got: %d", len(want), len(got)) } var errored bool for i := 0; i < len(got); i++ { if !equalLogEntry(want[i], got[i]) { t.Errorf("entry: %d, want %+v, got %+v", i, want[i], got[i]) errored = true } } if errored { return fmt.Errorf("test failed") } return nil } func TestServerBinaryLogUnaryRPC(t *testing.T) { if err := testServerBinaryLog(t, &rpcConfig{success: true, callType: unaryRPC}); err != nil { t.Fatal(err) } } func TestServerBinaryLogUnaryRPCError(t *testing.T) { if err := testServerBinaryLog(t, &rpcConfig{success: false, callType: unaryRPC}); err != nil { t.Fatal(err) } } func TestServerBinaryLogClientStreamRPC(t *testing.T) { count := 5 if err := testServerBinaryLog(t, &rpcConfig{count: count, success: true, callType: clientStreamRPC}); err != nil { t.Fatal(err) } } func TestServerBinaryLogClientStreamRPCError(t *testing.T) { count := 1 if err := testServerBinaryLog(t, &rpcConfig{count: count, success: false, callType: clientStreamRPC}); err != nil { t.Fatal(err) } } func TestServerBinaryLogServerStreamRPC(t *testing.T) { count := 5 if err := testServerBinaryLog(t, &rpcConfig{count: count, success: true, callType: serverStreamRPC}); err != nil { t.Fatal(err) } } func TestServerBinaryLogServerStreamRPCError(t *testing.T) { count := 5 if err := testServerBinaryLog(t, &rpcConfig{count: count, success: false, callType: serverStreamRPC}); err != nil { t.Fatal(err) } } func TestServerBinaryLogFullDuplex(t *testing.T) { count := 5 if err := testServerBinaryLog(t, &rpcConfig{count: count, success: true, callType: fullDuplexStreamRPC}); err != nil { t.Fatal(err) } } func TestServerBinaryLogFullDuplexError(t *testing.T) { count := 5 if err := testServerBinaryLog(t, &rpcConfig{count: count, success: false, callType: fullDuplexStreamRPC}); err != nil { t.Fatal(err) } } grpc-go-1.22.1/internal/binarylog/binarylog_test.go000066400000000000000000000057171351635773100223360ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package binarylog import ( "testing" ) // Test that get method logger returns the one with the most exact match. func TestGetMethodLogger(t *testing.T) { testCases := []struct { in string method string hdr, msg uint64 }{ // Global. { in: "*{h:12;m:23}", method: "/s/m", hdr: 12, msg: 23, }, // service/*. { in: "*,s/*{h:12;m:23}", method: "/s/m", hdr: 12, msg: 23, }, // Service/method. 
{ in: "*{h;m},s/m{h:12;m:23}", method: "/s/m", hdr: 12, msg: 23, }, { in: "*{h;m},s/*{h:314;m},s/m{h:12;m:23}", method: "/s/m", hdr: 12, msg: 23, }, { in: "*{h;m},s/*{h:12;m:23},s/m", method: "/s/m", hdr: maxUInt, msg: maxUInt, }, // service/*. { in: "*{h;m},s/*{h:12;m:23},s/m1", method: "/s/m", hdr: 12, msg: 23, }, { in: "*{h;m},s1/*,s/m{h:12;m:23}", method: "/s/m", hdr: 12, msg: 23, }, // With black list. { in: "*{h:12;m:23},-s/m1", method: "/s/m", hdr: 12, msg: 23, }, } for _, tc := range testCases { l := NewLoggerFromConfigString(tc.in) if l == nil { t.Errorf("in: %q, failed to create logger from config string", tc.in) continue } ml := l.getMethodLogger(tc.method) if ml == nil { t.Errorf("in: %q, method logger is nil, want non-nil", tc.in) continue } if ml.headerMaxLen != tc.hdr || ml.messageMaxLen != tc.msg { t.Errorf("in: %q, want header: %v, message: %v, got header: %v, message: %v", tc.in, tc.hdr, tc.msg, ml.headerMaxLen, ml.messageMaxLen) } } } // expect method logger to be nil func TestGetMethodLoggerOff(t *testing.T) { testCases := []struct { in string method string }{ // method not specified. { in: "s1/m", method: "/s/m", }, { in: "s/m1", method: "/s/m", }, { in: "s1/*", method: "/s/m", }, { in: "s1/*,s/m1", method: "/s/m", }, // blacklisted. { in: "*,-s/m", method: "/s/m", }, { in: "s/*,-s/m", method: "/s/m", }, { in: "-s/m,s/*", method: "/s/m", }, } for _, tc := range testCases { l := NewLoggerFromConfigString(tc.in) if l == nil { t.Errorf("in: %q, failed to create logger from config string", tc.in) continue } ml := l.getMethodLogger(tc.method) if ml != nil { t.Errorf("in: %q, method logger is non-nil, want nil", tc.in) } } } grpc-go-1.22.1/internal/binarylog/binarylog_testutil.go000066400000000000000000000030021351635773100232150ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // This file contains exported variables/functions that are exported for testing // only. // // An ideal way for this would be to put those in a *_test.go but in binarylog // package. But this doesn't work with staticcheck with go module. Error was: // "MdToMetadataProto not declared by package binarylog". This could be caused // by the way staticcheck looks for files for a certain package, which doesn't // support *_test.go files. // // Move those to binary_test.go when staticcheck is fixed. package binarylog var ( // AllLogger is a logger that logs all headers/messages for all RPCs. It's // for testing only. AllLogger = NewLoggerFromConfigString("*") // MdToMetadataProto converts metadata to a binary logging proto message. // It's for testing only. MdToMetadataProto = mdToMetadataProto // AddrToProto converts an address to a binary logging proto message. It's // for testing only. AddrToProto = addrToProto ) grpc-go-1.22.1/internal/binarylog/env_config.go000066400000000000000000000152061351635773100214200ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. 
* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package binarylog import ( "errors" "fmt" "regexp" "strconv" "strings" "google.golang.org/grpc/grpclog" ) // NewLoggerFromConfigString reads the string and build a logger. It can be used // to build a new logger and assign it to binarylog.Logger. // // Example filter config strings: // - "" Nothing will be logged // - "*" All headers and messages will be fully logged. // - "*{h}" Only headers will be logged. // - "*{m:256}" Only the first 256 bytes of each message will be logged. // - "Foo/*" Logs every method in service Foo // - "Foo/*,-Foo/Bar" Logs every method in service Foo except method /Foo/Bar // - "Foo/*,Foo/Bar{m:256}" Logs the first 256 bytes of each message in method // /Foo/Bar, logs all headers and messages in every other method in service // Foo. // // If two configs exist for one certain method or service, the one specified // later overrides the privous config. func NewLoggerFromConfigString(s string) Logger { if s == "" { return nil } l := newEmptyLogger() methods := strings.Split(s, ",") for _, method := range methods { if err := l.fillMethodLoggerWithConfigString(method); err != nil { grpclog.Warningf("failed to parse binary log config: %v", err) return nil } } return l } // fillMethodLoggerWithConfigString parses config, creates methodLogger and adds // it to the right map in the logger. func (l *logger) fillMethodLoggerWithConfigString(config string) error { // "" is invalid. if config == "" { return errors.New("empty string is not a valid method binary logging config") } // "-service/method", blacklist, no * or {} allowed. 
if config[0] == '-' { s, m, suffix, err := parseMethodConfigAndSuffix(config[1:]) if err != nil { return fmt.Errorf("invalid config: %q, %v", config, err) } if m == "*" { return fmt.Errorf("invalid config: %q, %v", config, "* not allowd in blacklist config") } if suffix != "" { return fmt.Errorf("invalid config: %q, %v", config, "header/message limit not allowed in blacklist config") } if err := l.setBlacklist(s + "/" + m); err != nil { return fmt.Errorf("invalid config: %v", err) } return nil } // "*{h:256;m:256}" if config[0] == '*' { hdr, msg, err := parseHeaderMessageLengthConfig(config[1:]) if err != nil { return fmt.Errorf("invalid config: %q, %v", config, err) } if err := l.setDefaultMethodLogger(&methodLoggerConfig{hdr: hdr, msg: msg}); err != nil { return fmt.Errorf("invalid config: %v", err) } return nil } s, m, suffix, err := parseMethodConfigAndSuffix(config) if err != nil { return fmt.Errorf("invalid config: %q, %v", config, err) } hdr, msg, err := parseHeaderMessageLengthConfig(suffix) if err != nil { return fmt.Errorf("invalid header/message length config: %q, %v", suffix, err) } if m == "*" { if err := l.setServiceMethodLogger(s, &methodLoggerConfig{hdr: hdr, msg: msg}); err != nil { return fmt.Errorf("invalid config: %v", err) } } else { if err := l.setMethodMethodLogger(s+"/"+m, &methodLoggerConfig{hdr: hdr, msg: msg}); err != nil { return fmt.Errorf("invalid config: %v", err) } } return nil } const ( // TODO: this const is only used by env_config now. But could be useful for // other config. Move to binarylog.go if necessary. maxUInt = ^uint64(0) // For "p.s/m" plus any suffix. Suffix will be parsed again. See test for // expected output. longMethodConfigRegexpStr = `^([\w./]+)/((?:\w+)|[*])(.+)?$` // For suffix from above, "{h:123,m:123}". See test for expected output. optionalLengthRegexpStr = `(?::(\d+))?` // Optional ":123". headerConfigRegexpStr = `^{h` + optionalLengthRegexpStr + `}$` messageConfigRegexpStr = `^{m` + optionalLengthRegexpStr + `}$` headerMessageConfigRegexpStr = `^{h` + optionalLengthRegexpStr + `;m` + optionalLengthRegexpStr + `}$` ) var ( longMethodConfigRegexp = regexp.MustCompile(longMethodConfigRegexpStr) headerConfigRegexp = regexp.MustCompile(headerConfigRegexpStr) messageConfigRegexp = regexp.MustCompile(messageConfigRegexpStr) headerMessageConfigRegexp = regexp.MustCompile(headerMessageConfigRegexpStr) ) // Turn "service/method{h;m}" into "service", "method", "{h;m}". func parseMethodConfigAndSuffix(c string) (service, method, suffix string, _ error) { // Regexp result: // // in: "p.s/m{h:123,m:123}", // out: []string{"p.s/m{h:123,m:123}", "p.s", "m", "{h:123,m:123}"}, match := longMethodConfigRegexp.FindStringSubmatch(c) if match == nil { return "", "", "", fmt.Errorf("%q contains invalid substring", c) } service = match[1] method = match[2] suffix = match[3] return } // Turn "{h:123;m:345}" into 123, 345. // // Return maxUInt if length is unspecified. func parseHeaderMessageLengthConfig(c string) (hdrLenStr, msgLenStr uint64, err error) { if c == "" { return maxUInt, maxUInt, nil } // Header config only. if match := headerConfigRegexp.FindStringSubmatch(c); match != nil { if s := match[1]; s != "" { hdrLenStr, err = strconv.ParseUint(s, 10, 64) if err != nil { return 0, 0, fmt.Errorf("failed to convert %q to uint", s) } return hdrLenStr, 0, nil } return maxUInt, 0, nil } // Message config only. 
if match := messageConfigRegexp.FindStringSubmatch(c); match != nil { if s := match[1]; s != "" { msgLenStr, err = strconv.ParseUint(s, 10, 64) if err != nil { return 0, 0, fmt.Errorf("failed to convert %q to uint", s) } return 0, msgLenStr, nil } return 0, maxUInt, nil } // Header and message config both. if match := headerMessageConfigRegexp.FindStringSubmatch(c); match != nil { // Both hdr and msg are specified, but one or two of them might be empty. hdrLenStr = maxUInt msgLenStr = maxUInt if s := match[1]; s != "" { hdrLenStr, err = strconv.ParseUint(s, 10, 64) if err != nil { return 0, 0, fmt.Errorf("failed to convert %q to uint", s) } } if s := match[2]; s != "" { msgLenStr, err = strconv.ParseUint(s, 10, 64) if err != nil { return 0, 0, fmt.Errorf("failed to convert %q to uint", s) } } return hdrLenStr, msgLenStr, nil } return 0, 0, fmt.Errorf("%q contains invalid substring", c) } grpc-go-1.22.1/internal/binarylog/env_config_test.go000066400000000000000000000233301351635773100224540ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package binarylog import ( "fmt" "testing" ) // This tests that when multiple configs are specified, all methods loggers will // be set correctly. Correctness of each logger is covered by other unit tests. func TestNewLoggerFromConfigString(t *testing.T) { const ( s1 = "s1" m1 = "m1" m2 = "m2" fullM1 = s1 + "/" + m1 fullM2 = s1 + "/" + m2 ) c := fmt.Sprintf("*{h:1;m:2},%s{h},%s{m},%s{h;m}", s1+"/*", fullM1, fullM2) l := NewLoggerFromConfigString(c).(*logger) if l.all.hdr != 1 || l.all.msg != 2 { t.Errorf("l.all = %#v, want headerLen: 1, messageLen: 2", l.all) } if ml, ok := l.services[s1]; ok { if ml.hdr != maxUInt || ml.msg != 0 { t.Errorf("want maxUInt header, 0 message, got header: %v, message: %v", ml.hdr, ml.msg) } } else { t.Errorf("service/* is not set") } if ml, ok := l.methods[fullM1]; ok { if ml.hdr != 0 || ml.msg != maxUInt { t.Errorf("want 0 header, maxUInt message, got header: %v, message: %v", ml.hdr, ml.msg) } } else { t.Errorf("service/method{h} is not set") } if ml, ok := l.methods[fullM2]; ok { if ml.hdr != maxUInt || ml.msg != maxUInt { t.Errorf("want maxUInt header, maxUInt message, got header: %v, message: %v", ml.hdr, ml.msg) } } else { t.Errorf("service/method{h;m} is not set") } } func TestNewLoggerFromConfigStringInvalid(t *testing.T) { testCases := []string{ "", "*{}", "s/m,*{}", "s/m,s/m{a}", // Duplciate rules. 
"s/m,-s/m", "-s/m,s/m", "s/m,s/m", "s/m,s/m{h:1;m:1}", "s/m{h:1;m:1},s/m", "-s/m,-s/m", "s/*,s/*{h:1;m:1}", "*,*{h:1;m:1}", } for _, tc := range testCases { l := NewLoggerFromConfigString(tc) if l != nil { t.Errorf("With config %q, want logger %v, got %v", tc, nil, l) } } } func TestParseMethodConfigAndSuffix(t *testing.T) { testCases := []struct { in, service, method, suffix string }{ { in: "p.s/m", service: "p.s", method: "m", suffix: "", }, { in: "p.s/m{h,m}", service: "p.s", method: "m", suffix: "{h,m}", }, { in: "p.s/*", service: "p.s", method: "*", suffix: "", }, { in: "p.s/*{h,m}", service: "p.s", method: "*", suffix: "{h,m}", }, // invalid suffix will be detected by another function. { in: "p.s/m{invalidsuffix}", service: "p.s", method: "m", suffix: "{invalidsuffix}", }, { in: "p.s/*{invalidsuffix}", service: "p.s", method: "*", suffix: "{invalidsuffix}", }, { in: "s/m*", service: "s", method: "m", suffix: "*", }, { in: "s/*m", service: "s", method: "*", suffix: "m", }, { in: "s/**", service: "s", method: "*", suffix: "*", }, } for _, tc := range testCases { t.Logf("testing parseMethodConfigAndSuffix(%q)", tc.in) s, m, suffix, err := parseMethodConfigAndSuffix(tc.in) if err != nil { t.Errorf("returned error %v, want nil", err) continue } if s != tc.service { t.Errorf("service = %q, want %q", s, tc.service) } if m != tc.method { t.Errorf("method = %q, want %q", m, tc.method) } if suffix != tc.suffix { t.Errorf("suffix = %q, want %q", suffix, tc.suffix) } } } func TestParseMethodConfigAndSuffixInvalid(t *testing.T) { testCases := []string{ "*/m", "*/m{}", } for _, tc := range testCases { s, m, suffix, err := parseMethodConfigAndSuffix(tc) if err == nil { t.Errorf("Parsing %q got nil error with %q, %q, %q, want non-nil error", tc, s, m, suffix) } } } func TestParseHeaderMessageLengthConfig(t *testing.T) { testCases := []struct { in string hdr, msg uint64 }{ { in: "", hdr: maxUInt, msg: maxUInt, }, { in: "{h}", hdr: maxUInt, msg: 0, }, { in: "{h:314}", hdr: 314, msg: 0, }, { in: "{m}", hdr: 0, msg: maxUInt, }, { in: "{m:213}", hdr: 0, msg: 213, }, { in: "{h;m}", hdr: maxUInt, msg: maxUInt, }, { in: "{h:314;m}", hdr: 314, msg: maxUInt, }, { in: "{h;m:213}", hdr: maxUInt, msg: 213, }, { in: "{h:314;m:213}", hdr: 314, msg: 213, }, } for _, tc := range testCases { t.Logf("testing parseHeaderMessageLengthConfig(%q)", tc.in) hdr, msg, err := parseHeaderMessageLengthConfig(tc.in) if err != nil { t.Errorf("returned error %v, want nil", err) continue } if hdr != tc.hdr { t.Errorf("header length = %v, want %v", hdr, tc.hdr) } if msg != tc.msg { t.Errorf("message length = %v, want %v", msg, tc.msg) } } } func TestParseHeaderMessageLengthConfigInvalid(t *testing.T) { testCases := []string{ "{}", "{h;a}", "{h;m;b}", } for _, tc := range testCases { _, _, err := parseHeaderMessageLengthConfig(tc) if err == nil { t.Errorf("Parsing %q got nil error, want non-nil error", tc) } } } func TestFillMethodLoggerWithConfigStringBlacklist(t *testing.T) { testCases := []string{ "p.s/m", "service/method", } for _, tc := range testCases { c := "-" + tc t.Logf("testing fillMethodLoggerWithConfigString(%q)", c) l := newEmptyLogger() if err := l.fillMethodLoggerWithConfigString(c); err != nil { t.Errorf("returned err %v, want nil", err) continue } _, ok := l.blacklist[tc] if !ok { t.Errorf("blacklist[%q] is not set", tc) } } } func TestFillMethodLoggerWithConfigStringGlobal(t *testing.T) { testCases := []struct { in string hdr, msg uint64 }{ { in: "", hdr: maxUInt, msg: maxUInt, }, { in: "{h}", hdr: maxUInt, msg: 0, 
}, { in: "{h:314}", hdr: 314, msg: 0, }, { in: "{m}", hdr: 0, msg: maxUInt, }, { in: "{m:213}", hdr: 0, msg: 213, }, { in: "{h;m}", hdr: maxUInt, msg: maxUInt, }, { in: "{h:314;m}", hdr: 314, msg: maxUInt, }, { in: "{h;m:213}", hdr: maxUInt, msg: 213, }, { in: "{h:314;m:213}", hdr: 314, msg: 213, }, } for _, tc := range testCases { c := "*" + tc.in t.Logf("testing fillMethodLoggerWithConfigString(%q)", c) l := newEmptyLogger() if err := l.fillMethodLoggerWithConfigString(c); err != nil { t.Errorf("returned err %v, want nil", err) continue } if l.all == nil { t.Errorf("l.all is not set") continue } if hdr := l.all.hdr; hdr != tc.hdr { t.Errorf("header length = %v, want %v", hdr, tc.hdr) } if msg := l.all.msg; msg != tc.msg { t.Errorf("message length = %v, want %v", msg, tc.msg) } } } func TestFillMethodLoggerWithConfigStringPerService(t *testing.T) { testCases := []struct { in string hdr, msg uint64 }{ { in: "", hdr: maxUInt, msg: maxUInt, }, { in: "{h}", hdr: maxUInt, msg: 0, }, { in: "{h:314}", hdr: 314, msg: 0, }, { in: "{m}", hdr: 0, msg: maxUInt, }, { in: "{m:213}", hdr: 0, msg: 213, }, { in: "{h;m}", hdr: maxUInt, msg: maxUInt, }, { in: "{h:314;m}", hdr: 314, msg: maxUInt, }, { in: "{h;m:213}", hdr: maxUInt, msg: 213, }, { in: "{h:314;m:213}", hdr: 314, msg: 213, }, } const serviceName = "service" for _, tc := range testCases { c := serviceName + "/*" + tc.in t.Logf("testing fillMethodLoggerWithConfigString(%q)", c) l := newEmptyLogger() if err := l.fillMethodLoggerWithConfigString(c); err != nil { t.Errorf("returned err %v, want nil", err) continue } ml, ok := l.services[serviceName] if !ok { t.Errorf("l.service[%q] is not set", serviceName) continue } if hdr := ml.hdr; hdr != tc.hdr { t.Errorf("header length = %v, want %v", hdr, tc.hdr) } if msg := ml.msg; msg != tc.msg { t.Errorf("message length = %v, want %v", msg, tc.msg) } } } func TestFillMethodLoggerWithConfigStringPerMethod(t *testing.T) { testCases := []struct { in string hdr, msg uint64 }{ { in: "", hdr: maxUInt, msg: maxUInt, }, { in: "{h}", hdr: maxUInt, msg: 0, }, { in: "{h:314}", hdr: 314, msg: 0, }, { in: "{m}", hdr: 0, msg: maxUInt, }, { in: "{m:213}", hdr: 0, msg: 213, }, { in: "{h;m}", hdr: maxUInt, msg: maxUInt, }, { in: "{h:314;m}", hdr: 314, msg: maxUInt, }, { in: "{h;m:213}", hdr: maxUInt, msg: 213, }, { in: "{h:314;m:213}", hdr: 314, msg: 213, }, } const ( serviceName = "service" methodName = "method" fullMethodName = serviceName + "/" + methodName ) for _, tc := range testCases { c := fullMethodName + tc.in t.Logf("testing fillMethodLoggerWithConfigString(%q)", c) l := newEmptyLogger() if err := l.fillMethodLoggerWithConfigString(c); err != nil { t.Errorf("returned err %v, want nil", err) continue } ml, ok := l.methods[fullMethodName] if !ok { t.Errorf("l.methods[%q] is not set", fullMethodName) continue } if hdr := ml.hdr; hdr != tc.hdr { t.Errorf("header length = %v, want %v", hdr, tc.hdr) } if msg := ml.msg; msg != tc.msg { t.Errorf("message length = %v, want %v", msg, tc.msg) } } } func TestFillMethodLoggerWithConfigStringInvalid(t *testing.T) { testCases := []string{ "", "{}", "p.s/m{}", "p.s/m{a}", "p.s/m*", "p.s/**", "*/m", "-p.s/*", "-p.s/m{h}", } l := &logger{} for _, tc := range testCases { if err := l.fillMethodLoggerWithConfigString(tc); err == nil { t.Errorf("fillMethodLoggerWithConfigString(%q) returned nil error, want non-nil", tc) } } } grpc-go-1.22.1/internal/binarylog/method_logger.go000066400000000000000000000245301351635773100221220ustar00rootroot00000000000000/* * * Copyright 2018 gRPC 
authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package binarylog import ( "net" "strings" "sync/atomic" "time" "github.com/golang/protobuf/proto" "github.com/golang/protobuf/ptypes" pb "google.golang.org/grpc/binarylog/grpc_binarylog_v1" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/metadata" "google.golang.org/grpc/status" ) type callIDGenerator struct { id uint64 } func (g *callIDGenerator) next() uint64 { id := atomic.AddUint64(&g.id, 1) return id } // reset is for testing only, and doesn't need to be thread safe. func (g *callIDGenerator) reset() { g.id = 0 } var idGen callIDGenerator // MethodLogger is the sub-logger for each method. type MethodLogger struct { headerMaxLen, messageMaxLen uint64 callID uint64 idWithinCallGen *callIDGenerator sink Sink // TODO(blog): make this plugable. } func newMethodLogger(h, m uint64) *MethodLogger { return &MethodLogger{ headerMaxLen: h, messageMaxLen: m, callID: idGen.next(), idWithinCallGen: &callIDGenerator{}, sink: defaultSink, // TODO(blog): make it plugable. } } // Log creates a proto binary log entry, and logs it to the sink. func (ml *MethodLogger) Log(c LogEntryConfig) { m := c.toProto() timestamp, _ := ptypes.TimestampProto(time.Now()) m.Timestamp = timestamp m.CallId = ml.callID m.SequenceIdWithinCall = ml.idWithinCallGen.next() switch pay := m.Payload.(type) { case *pb.GrpcLogEntry_ClientHeader: m.PayloadTruncated = ml.truncateMetadata(pay.ClientHeader.GetMetadata()) case *pb.GrpcLogEntry_ServerHeader: m.PayloadTruncated = ml.truncateMetadata(pay.ServerHeader.GetMetadata()) case *pb.GrpcLogEntry_Message: m.PayloadTruncated = ml.truncateMessage(pay.Message) } ml.sink.Write(m) } func (ml *MethodLogger) truncateMetadata(mdPb *pb.Metadata) (truncated bool) { if ml.headerMaxLen == maxUInt { return false } var ( bytesLimit = ml.headerMaxLen index int ) // At the end of the loop, index will be the first entry where the total // size is greater than the limit: // // len(entry[:index]) <= ml.hdr && len(entry[:index+1]) > ml.hdr. for ; index < len(mdPb.Entry); index++ { entry := mdPb.Entry[index] if entry.Key == "grpc-trace-bin" { // "grpc-trace-bin" is a special key. It's kept in the log entry, // but not counted towards the size limit. continue } currentEntryLen := uint64(len(entry.Value)) if currentEntryLen > bytesLimit { break } bytesLimit -= currentEntryLen } truncated = index < len(mdPb.Entry) mdPb.Entry = mdPb.Entry[:index] return truncated } func (ml *MethodLogger) truncateMessage(msgPb *pb.Message) (truncated bool) { if ml.messageMaxLen == maxUInt { return false } if ml.messageMaxLen >= uint64(len(msgPb.Data)) { return false } msgPb.Data = msgPb.Data[:ml.messageMaxLen] return true } // LogEntryConfig represents the configuration for binary log entry. type LogEntryConfig interface { toProto() *pb.GrpcLogEntry } // ClientHeader configs the binary log entry to be a ClientHeader entry. 
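// A minimal usage sketch (illustrative only; the method name, authority, and
// metadata values below are assumptions, not defined by this package). Given a
// *MethodLogger ml, a client header entry could be logged as:
//
//	ml.Log(&ClientHeader{
//		OnClientSide: true,
//		MethodName:   "/service/method",
//		Authority:    "example.com",
//		Timeout:      time.Second,
//		Header:       metadata.Pairs("key", "value"),
//	})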
type ClientHeader struct { OnClientSide bool Header metadata.MD MethodName string Authority string Timeout time.Duration // PeerAddr is required only when it's on server side. PeerAddr net.Addr } func (c *ClientHeader) toProto() *pb.GrpcLogEntry { // This function doesn't need to set all the fields (e.g. seq ID). The Log // function will set the fields when necessary. clientHeader := &pb.ClientHeader{ Metadata: mdToMetadataProto(c.Header), MethodName: c.MethodName, Authority: c.Authority, } if c.Timeout > 0 { clientHeader.Timeout = ptypes.DurationProto(c.Timeout) } ret := &pb.GrpcLogEntry{ Type: pb.GrpcLogEntry_EVENT_TYPE_CLIENT_HEADER, Payload: &pb.GrpcLogEntry_ClientHeader{ ClientHeader: clientHeader, }, } if c.OnClientSide { ret.Logger = pb.GrpcLogEntry_LOGGER_CLIENT } else { ret.Logger = pb.GrpcLogEntry_LOGGER_SERVER } if c.PeerAddr != nil { ret.Peer = addrToProto(c.PeerAddr) } return ret } // ServerHeader configs the binary log entry to be a ServerHeader entry. type ServerHeader struct { OnClientSide bool Header metadata.MD // PeerAddr is required only when it's on client side. PeerAddr net.Addr } func (c *ServerHeader) toProto() *pb.GrpcLogEntry { ret := &pb.GrpcLogEntry{ Type: pb.GrpcLogEntry_EVENT_TYPE_SERVER_HEADER, Payload: &pb.GrpcLogEntry_ServerHeader{ ServerHeader: &pb.ServerHeader{ Metadata: mdToMetadataProto(c.Header), }, }, } if c.OnClientSide { ret.Logger = pb.GrpcLogEntry_LOGGER_CLIENT } else { ret.Logger = pb.GrpcLogEntry_LOGGER_SERVER } if c.PeerAddr != nil { ret.Peer = addrToProto(c.PeerAddr) } return ret } // ClientMessage configs the binary log entry to be a ClientMessage entry. type ClientMessage struct { OnClientSide bool // Message can be a proto.Message or []byte. Other messages formats are not // supported. Message interface{} } func (c *ClientMessage) toProto() *pb.GrpcLogEntry { var ( data []byte err error ) if m, ok := c.Message.(proto.Message); ok { data, err = proto.Marshal(m) if err != nil { grpclog.Infof("binarylogging: failed to marshal proto message: %v", err) } } else if b, ok := c.Message.([]byte); ok { data = b } else { grpclog.Infof("binarylogging: message to log is neither proto.message nor []byte") } ret := &pb.GrpcLogEntry{ Type: pb.GrpcLogEntry_EVENT_TYPE_CLIENT_MESSAGE, Payload: &pb.GrpcLogEntry_Message{ Message: &pb.Message{ Length: uint32(len(data)), Data: data, }, }, } if c.OnClientSide { ret.Logger = pb.GrpcLogEntry_LOGGER_CLIENT } else { ret.Logger = pb.GrpcLogEntry_LOGGER_SERVER } return ret } // ServerMessage configs the binary log entry to be a ServerMessage entry. type ServerMessage struct { OnClientSide bool // Message can be a proto.Message or []byte. Other messages formats are not // supported. Message interface{} } func (c *ServerMessage) toProto() *pb.GrpcLogEntry { var ( data []byte err error ) if m, ok := c.Message.(proto.Message); ok { data, err = proto.Marshal(m) if err != nil { grpclog.Infof("binarylogging: failed to marshal proto message: %v", err) } } else if b, ok := c.Message.([]byte); ok { data = b } else { grpclog.Infof("binarylogging: message to log is neither proto.message nor []byte") } ret := &pb.GrpcLogEntry{ Type: pb.GrpcLogEntry_EVENT_TYPE_SERVER_MESSAGE, Payload: &pb.GrpcLogEntry_Message{ Message: &pb.Message{ Length: uint32(len(data)), Data: data, }, }, } if c.OnClientSide { ret.Logger = pb.GrpcLogEntry_LOGGER_CLIENT } else { ret.Logger = pb.GrpcLogEntry_LOGGER_SERVER } return ret } // ClientHalfClose configs the binary log entry to be a ClientHalfClose entry. 
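// A minimal usage sketch (assumed context: the client has finished sending on
// a stream, for example after CloseSend):
//
//	ml.Log(&ClientHalfClose{OnClientSide: true})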
type ClientHalfClose struct { OnClientSide bool } func (c *ClientHalfClose) toProto() *pb.GrpcLogEntry { ret := &pb.GrpcLogEntry{ Type: pb.GrpcLogEntry_EVENT_TYPE_CLIENT_HALF_CLOSE, Payload: nil, // No payload here. } if c.OnClientSide { ret.Logger = pb.GrpcLogEntry_LOGGER_CLIENT } else { ret.Logger = pb.GrpcLogEntry_LOGGER_SERVER } return ret } // ServerTrailer configs the binary log entry to be a ServerTrailer entry. type ServerTrailer struct { OnClientSide bool Trailer metadata.MD // Err is the status error. Err error // PeerAddr is required only when it's on client side and the RPC is trailer // only. PeerAddr net.Addr } func (c *ServerTrailer) toProto() *pb.GrpcLogEntry { st, ok := status.FromError(c.Err) if !ok { grpclog.Info("binarylogging: error in trailer is not a status error") } var ( detailsBytes []byte err error ) stProto := st.Proto() if stProto != nil && len(stProto.Details) != 0 { detailsBytes, err = proto.Marshal(stProto) if err != nil { grpclog.Infof("binarylogging: failed to marshal status proto: %v", err) } } ret := &pb.GrpcLogEntry{ Type: pb.GrpcLogEntry_EVENT_TYPE_SERVER_TRAILER, Payload: &pb.GrpcLogEntry_Trailer{ Trailer: &pb.Trailer{ Metadata: mdToMetadataProto(c.Trailer), StatusCode: uint32(st.Code()), StatusMessage: st.Message(), StatusDetails: detailsBytes, }, }, } if c.OnClientSide { ret.Logger = pb.GrpcLogEntry_LOGGER_CLIENT } else { ret.Logger = pb.GrpcLogEntry_LOGGER_SERVER } if c.PeerAddr != nil { ret.Peer = addrToProto(c.PeerAddr) } return ret } // Cancel configs the binary log entry to be a Cancel entry. type Cancel struct { OnClientSide bool } func (c *Cancel) toProto() *pb.GrpcLogEntry { ret := &pb.GrpcLogEntry{ Type: pb.GrpcLogEntry_EVENT_TYPE_CANCEL, Payload: nil, } if c.OnClientSide { ret.Logger = pb.GrpcLogEntry_LOGGER_CLIENT } else { ret.Logger = pb.GrpcLogEntry_LOGGER_SERVER } return ret } // metadataKeyOmit returns whether the metadata entry with this key should be // omitted. func metadataKeyOmit(key string) bool { switch key { case "lb-token", ":path", ":authority", "content-encoding", "content-type", "user-agent", "te": return true case "grpc-trace-bin": // grpc-trace-bin is special because it's visiable to users. return false } return strings.HasPrefix(key, "grpc-") } func mdToMetadataProto(md metadata.MD) *pb.Metadata { ret := &pb.Metadata{} for k, vv := range md { if metadataKeyOmit(k) { continue } for _, v := range vv { ret.Entry = append(ret.Entry, &pb.MetadataEntry{ Key: k, Value: []byte(v), }, ) } } return ret } func addrToProto(addr net.Addr) *pb.Address { ret := &pb.Address{} switch a := addr.(type) { case *net.TCPAddr: if a.IP.To4() != nil { ret.Type = pb.Address_TYPE_IPV4 } else if a.IP.To16() != nil { ret.Type = pb.Address_TYPE_IPV6 } else { ret.Type = pb.Address_TYPE_UNKNOWN // Do not set address and port fields. break } ret.Address = a.IP.String() ret.IpPort = uint32(a.Port) case *net.UnixAddr: ret.Type = pb.Address_TYPE_UNIX ret.Address = a.String() default: ret.Type = pb.Address_TYPE_UNKNOWN } return ret } grpc-go-1.22.1/internal/binarylog/method_logger_test.go000066400000000000000000000321141351635773100231560ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package binarylog import ( "bytes" "fmt" "net" "testing" "time" "github.com/golang/protobuf/proto" dpb "github.com/golang/protobuf/ptypes/duration" pb "google.golang.org/grpc/binarylog/grpc_binarylog_v1" "google.golang.org/grpc/codes" "google.golang.org/grpc/status" ) func TestLog(t *testing.T) { idGen.reset() ml := newMethodLogger(10, 10) // Set sink to testing buffer. buf := bytes.NewBuffer(nil) ml.sink = newWriterSink(buf) addr := "1.2.3.4" port := 790 tcpAddr, _ := net.ResolveTCPAddr("tcp", fmt.Sprintf("%v:%d", addr, port)) addr6 := "2001:1db8:85a3::8a2e:1370:7334" port6 := 796 tcpAddr6, _ := net.ResolveTCPAddr("tcp", fmt.Sprintf("[%v]:%d", addr6, port6)) testProtoMsg := &pb.Message{ Length: 1, Data: []byte{'a'}, } testProtoBytes, _ := proto.Marshal(testProtoMsg) testCases := []struct { config LogEntryConfig want *pb.GrpcLogEntry }{ { config: &ClientHeader{ OnClientSide: false, Header: map[string][]string{ "a": {"b", "bb"}, }, MethodName: "testservice/testmethod", Authority: "test.service.io", Timeout: 2*time.Second + 3*time.Nanosecond, PeerAddr: tcpAddr, }, want: &pb.GrpcLogEntry{ Timestamp: nil, CallId: 1, SequenceIdWithinCall: 0, Type: pb.GrpcLogEntry_EVENT_TYPE_CLIENT_HEADER, Logger: pb.GrpcLogEntry_LOGGER_SERVER, Payload: &pb.GrpcLogEntry_ClientHeader{ ClientHeader: &pb.ClientHeader{ Metadata: &pb.Metadata{ Entry: []*pb.MetadataEntry{ {Key: "a", Value: []byte{'b'}}, {Key: "a", Value: []byte{'b', 'b'}}, }, }, MethodName: "testservice/testmethod", Authority: "test.service.io", Timeout: &dpb.Duration{ Seconds: 2, Nanos: 3, }, }, }, PayloadTruncated: false, Peer: &pb.Address{ Type: pb.Address_TYPE_IPV4, Address: addr, IpPort: uint32(port), }, }, }, { config: &ClientHeader{ OnClientSide: false, MethodName: "testservice/testmethod", Authority: "test.service.io", }, want: &pb.GrpcLogEntry{ Timestamp: nil, CallId: 1, SequenceIdWithinCall: 0, Type: pb.GrpcLogEntry_EVENT_TYPE_CLIENT_HEADER, Logger: pb.GrpcLogEntry_LOGGER_SERVER, Payload: &pb.GrpcLogEntry_ClientHeader{ ClientHeader: &pb.ClientHeader{ Metadata: &pb.Metadata{}, MethodName: "testservice/testmethod", Authority: "test.service.io", }, }, PayloadTruncated: false, }, }, { config: &ServerHeader{ OnClientSide: true, Header: map[string][]string{ "a": {"b", "bb"}, }, PeerAddr: tcpAddr6, }, want: &pb.GrpcLogEntry{ Timestamp: nil, CallId: 1, SequenceIdWithinCall: 0, Type: pb.GrpcLogEntry_EVENT_TYPE_SERVER_HEADER, Logger: pb.GrpcLogEntry_LOGGER_CLIENT, Payload: &pb.GrpcLogEntry_ServerHeader{ ServerHeader: &pb.ServerHeader{ Metadata: &pb.Metadata{ Entry: []*pb.MetadataEntry{ {Key: "a", Value: []byte{'b'}}, {Key: "a", Value: []byte{'b', 'b'}}, }, }, }, }, PayloadTruncated: false, Peer: &pb.Address{ Type: pb.Address_TYPE_IPV6, Address: addr6, IpPort: uint32(port6), }, }, }, { config: &ClientMessage{ OnClientSide: true, Message: testProtoMsg, }, want: &pb.GrpcLogEntry{ Timestamp: nil, CallId: 1, SequenceIdWithinCall: 0, Type: pb.GrpcLogEntry_EVENT_TYPE_CLIENT_MESSAGE, Logger: pb.GrpcLogEntry_LOGGER_CLIENT, Payload: &pb.GrpcLogEntry_Message{ Message: &pb.Message{ Length: uint32(len(testProtoBytes)), Data: testProtoBytes, }, }, 
PayloadTruncated: false, Peer: nil, }, }, { config: &ServerMessage{ OnClientSide: false, Message: testProtoMsg, }, want: &pb.GrpcLogEntry{ Timestamp: nil, CallId: 1, SequenceIdWithinCall: 0, Type: pb.GrpcLogEntry_EVENT_TYPE_SERVER_MESSAGE, Logger: pb.GrpcLogEntry_LOGGER_SERVER, Payload: &pb.GrpcLogEntry_Message{ Message: &pb.Message{ Length: uint32(len(testProtoBytes)), Data: testProtoBytes, }, }, PayloadTruncated: false, Peer: nil, }, }, { config: &ClientHalfClose{ OnClientSide: false, }, want: &pb.GrpcLogEntry{ Timestamp: nil, CallId: 1, SequenceIdWithinCall: 0, Type: pb.GrpcLogEntry_EVENT_TYPE_CLIENT_HALF_CLOSE, Logger: pb.GrpcLogEntry_LOGGER_SERVER, Payload: nil, PayloadTruncated: false, Peer: nil, }, }, { config: &ServerTrailer{ OnClientSide: true, Err: status.Errorf(codes.Unavailable, "test"), PeerAddr: tcpAddr, }, want: &pb.GrpcLogEntry{ Timestamp: nil, CallId: 1, SequenceIdWithinCall: 0, Type: pb.GrpcLogEntry_EVENT_TYPE_SERVER_TRAILER, Logger: pb.GrpcLogEntry_LOGGER_CLIENT, Payload: &pb.GrpcLogEntry_Trailer{ Trailer: &pb.Trailer{ Metadata: &pb.Metadata{}, StatusCode: uint32(codes.Unavailable), StatusMessage: "test", StatusDetails: nil, }, }, PayloadTruncated: false, Peer: &pb.Address{ Type: pb.Address_TYPE_IPV4, Address: addr, IpPort: uint32(port), }, }, }, { // Err is nil, Log OK status. config: &ServerTrailer{ OnClientSide: true, }, want: &pb.GrpcLogEntry{ Timestamp: nil, CallId: 1, SequenceIdWithinCall: 0, Type: pb.GrpcLogEntry_EVENT_TYPE_SERVER_TRAILER, Logger: pb.GrpcLogEntry_LOGGER_CLIENT, Payload: &pb.GrpcLogEntry_Trailer{ Trailer: &pb.Trailer{ Metadata: &pb.Metadata{}, StatusCode: uint32(codes.OK), StatusMessage: "", StatusDetails: nil, }, }, PayloadTruncated: false, Peer: nil, }, }, { config: &Cancel{ OnClientSide: true, }, want: &pb.GrpcLogEntry{ Timestamp: nil, CallId: 1, SequenceIdWithinCall: 0, Type: pb.GrpcLogEntry_EVENT_TYPE_CANCEL, Logger: pb.GrpcLogEntry_LOGGER_CLIENT, Payload: nil, PayloadTruncated: false, Peer: nil, }, }, // gRPC headers should be omitted. { config: &ClientHeader{ OnClientSide: false, Header: map[string][]string{ "grpc-reserved": {"to be omitted"}, ":authority": {"to be omitted"}, "a": {"b", "bb"}, }, }, want: &pb.GrpcLogEntry{ Timestamp: nil, CallId: 1, SequenceIdWithinCall: 0, Type: pb.GrpcLogEntry_EVENT_TYPE_CLIENT_HEADER, Logger: pb.GrpcLogEntry_LOGGER_SERVER, Payload: &pb.GrpcLogEntry_ClientHeader{ ClientHeader: &pb.ClientHeader{ Metadata: &pb.Metadata{ Entry: []*pb.MetadataEntry{ {Key: "a", Value: []byte{'b'}}, {Key: "a", Value: []byte{'b', 'b'}}, }, }, }, }, PayloadTruncated: false, }, }, { config: &ServerHeader{ OnClientSide: true, Header: map[string][]string{ "grpc-reserved": {"to be omitted"}, ":authority": {"to be omitted"}, "a": {"b", "bb"}, }, }, want: &pb.GrpcLogEntry{ Timestamp: nil, CallId: 1, SequenceIdWithinCall: 0, Type: pb.GrpcLogEntry_EVENT_TYPE_SERVER_HEADER, Logger: pb.GrpcLogEntry_LOGGER_CLIENT, Payload: &pb.GrpcLogEntry_ServerHeader{ ServerHeader: &pb.ServerHeader{ Metadata: &pb.Metadata{ Entry: []*pb.MetadataEntry{ {Key: "a", Value: []byte{'b'}}, {Key: "a", Value: []byte{'b', 'b'}}, }, }, }, }, PayloadTruncated: false, }, }, } for i, tc := range testCases { buf.Reset() tc.want.SequenceIdWithinCall = uint64(i + 1) ml.Log(tc.config) inSink := new(pb.GrpcLogEntry) if err := proto.Unmarshal(buf.Bytes()[4:], inSink); err != nil { t.Errorf("failed to unmarshal bytes in sink to proto: %v", err) continue } inSink.Timestamp = nil // Strip timestamp before comparing. 
if !proto.Equal(inSink, tc.want) { t.Errorf("Log(%+v), in sink: %+v, want %+v", tc.config, inSink, tc.want) } } } func TestTruncateMetadataNotTruncated(t *testing.T) { testCases := []struct { ml *MethodLogger mpPb *pb.Metadata }{ { ml: newMethodLogger(maxUInt, maxUInt), mpPb: &pb.Metadata{ Entry: []*pb.MetadataEntry{ {Key: "", Value: []byte{1}}, }, }, }, { ml: newMethodLogger(2, maxUInt), mpPb: &pb.Metadata{ Entry: []*pb.MetadataEntry{ {Key: "", Value: []byte{1}}, }, }, }, { ml: newMethodLogger(1, maxUInt), mpPb: &pb.Metadata{ Entry: []*pb.MetadataEntry{ {Key: "", Value: nil}, }, }, }, { ml: newMethodLogger(2, maxUInt), mpPb: &pb.Metadata{ Entry: []*pb.MetadataEntry{ {Key: "", Value: []byte{1, 1}}, }, }, }, { ml: newMethodLogger(2, maxUInt), mpPb: &pb.Metadata{ Entry: []*pb.MetadataEntry{ {Key: "", Value: []byte{1}}, {Key: "", Value: []byte{1}}, }, }, }, // "grpc-trace-bin" is kept in log but not counted towards the size // limit. { ml: newMethodLogger(1, maxUInt), mpPb: &pb.Metadata{ Entry: []*pb.MetadataEntry{ {Key: "", Value: []byte{1}}, {Key: "grpc-trace-bin", Value: []byte("some.trace.key")}, }, }, }, } for i, tc := range testCases { truncated := tc.ml.truncateMetadata(tc.mpPb) if truncated { t.Errorf("test case %v, returned truncated, want not truncated", i) } } } func TestTruncateMetadataTruncated(t *testing.T) { testCases := []struct { ml *MethodLogger mpPb *pb.Metadata entryLen int }{ { ml: newMethodLogger(2, maxUInt), mpPb: &pb.Metadata{ Entry: []*pb.MetadataEntry{ {Key: "", Value: []byte{1, 1, 1}}, }, }, entryLen: 0, }, { ml: newMethodLogger(2, maxUInt), mpPb: &pb.Metadata{ Entry: []*pb.MetadataEntry{ {Key: "", Value: []byte{1}}, {Key: "", Value: []byte{1}}, {Key: "", Value: []byte{1}}, }, }, entryLen: 2, }, { ml: newMethodLogger(2, maxUInt), mpPb: &pb.Metadata{ Entry: []*pb.MetadataEntry{ {Key: "", Value: []byte{1, 1}}, {Key: "", Value: []byte{1}}, }, }, entryLen: 1, }, { ml: newMethodLogger(2, maxUInt), mpPb: &pb.Metadata{ Entry: []*pb.MetadataEntry{ {Key: "", Value: []byte{1}}, {Key: "", Value: []byte{1, 1}}, }, }, entryLen: 1, }, } for i, tc := range testCases { truncated := tc.ml.truncateMetadata(tc.mpPb) if !truncated { t.Errorf("test case %v, returned not truncated, want truncated", i) continue } if len(tc.mpPb.Entry) != tc.entryLen { t.Errorf("test case %v, entry length: %v, want: %v", i, len(tc.mpPb.Entry), tc.entryLen) } } } func TestTruncateMessageNotTruncated(t *testing.T) { testCases := []struct { ml *MethodLogger msgPb *pb.Message }{ { ml: newMethodLogger(maxUInt, maxUInt), msgPb: &pb.Message{ Data: []byte{1}, }, }, { ml: newMethodLogger(maxUInt, 3), msgPb: &pb.Message{ Data: []byte{1, 1}, }, }, { ml: newMethodLogger(maxUInt, 2), msgPb: &pb.Message{ Data: []byte{1, 1}, }, }, } for i, tc := range testCases { truncated := tc.ml.truncateMessage(tc.msgPb) if truncated { t.Errorf("test case %v, returned truncated, want not truncated", i) } } } func TestTruncateMessageTruncated(t *testing.T) { testCases := []struct { ml *MethodLogger msgPb *pb.Message oldLength uint32 }{ { ml: newMethodLogger(maxUInt, 2), msgPb: &pb.Message{ Length: 3, Data: []byte{1, 1, 1}, }, oldLength: 3, }, } for i, tc := range testCases { truncated := tc.ml.truncateMessage(tc.msgPb) if !truncated { t.Errorf("test case %v, returned not truncated, want truncated", i) continue } if len(tc.msgPb.Data) != int(tc.ml.messageMaxLen) { t.Errorf("test case %v, message length: %v, want: %v", i, len(tc.msgPb.Data), tc.ml.messageMaxLen) } if tc.msgPb.Length != tc.oldLength { t.Errorf("test case %v, 
message.Length field: %v, want: %v", i, tc.msgPb.Length, tc.oldLength) } } } grpc-go-1.22.1/internal/binarylog/regenerate.sh000077500000000000000000000021251351635773100214300ustar00rootroot00000000000000#!/bin/bash # Copyright 2018 gRPC authors. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. set -eux -o pipefail TMP=$(mktemp -d) function finish { rm -rf "$TMP" } trap finish EXIT pushd "$TMP" mkdir -p grpc/binarylog/grpc_binarylog_v1 curl https://raw.githubusercontent.com/grpc/grpc-proto/master/grpc/binlog/v1/binarylog.proto > grpc/binarylog/grpc_binarylog_v1/binarylog.proto protoc --go_out=plugins=grpc,paths=source_relative:. -I. grpc/binarylog/grpc_binarylog_v1/*.proto popd rm -f ./grpc_binarylog_v1/*.pb.go cp "$TMP"/grpc/binarylog/grpc_binarylog_v1/*.pb.go ../../binarylog/grpc_binarylog_v1/ grpc-go-1.22.1/internal/binarylog/regexp_test.go000066400000000000000000000072301351635773100216320ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package binarylog import ( "reflect" "testing" ) func TestLongMethodConfigRegexp(t *testing.T) { testCases := []struct { in string out []string }{ {in: "", out: nil}, {in: "*/m", out: nil}, { in: "p.s/m{}", out: []string{"p.s/m{}", "p.s", "m", "{}"}, }, { in: "p.s/m", out: []string{"p.s/m", "p.s", "m", ""}, }, { in: "p.s/m{h}", out: []string{"p.s/m{h}", "p.s", "m", "{h}"}, }, { in: "p.s/m{m}", out: []string{"p.s/m{m}", "p.s", "m", "{m}"}, }, { in: "p.s/m{h:123}", out: []string{"p.s/m{h:123}", "p.s", "m", "{h:123}"}, }, { in: "p.s/m{m:123}", out: []string{"p.s/m{m:123}", "p.s", "m", "{m:123}"}, }, { in: "p.s/m{h:123,m:123}", out: []string{"p.s/m{h:123,m:123}", "p.s", "m", "{h:123,m:123}"}, }, { in: "p.s/*", out: []string{"p.s/*", "p.s", "*", ""}, }, { in: "p.s/*{h}", out: []string{"p.s/*{h}", "p.s", "*", "{h}"}, }, { in: "s/m*", out: []string{"s/m*", "s", "m", "*"}, }, { in: "s/**", out: []string{"s/**", "s", "*", "*"}, }, } for _, tc := range testCases { match := longMethodConfigRegexp.FindStringSubmatch(tc.in) if !reflect.DeepEqual(match, tc.out) { t.Errorf("in: %q, out: %q, want: %q", tc.in, match, tc.out) } } } func TestHeaderConfigRegexp(t *testing.T) { testCases := []struct { in string out []string }{ {in: "{}", out: nil}, {in: "{a:b}", out: nil}, {in: "{m:123}", out: nil}, {in: "{h:123;m:123}", out: nil}, { in: "{h}", out: []string{"{h}", ""}, }, { in: "{h:123}", out: []string{"{h:123}", "123"}, }, } for _, tc := range testCases { match := headerConfigRegexp.FindStringSubmatch(tc.in) if !reflect.DeepEqual(match, tc.out) { t.Errorf("in: %q, out: %q, want: %q", tc.in, match, tc.out) } } } func TestMessageConfigRegexp(t *testing.T) { testCases := []struct { in string out []string }{ {in: "{}", out: nil}, {in: "{a:b}", out: nil}, {in: "{h:123}", out: nil}, {in: "{h:123;m:123}", out: nil}, { in: "{m}", out: []string{"{m}", ""}, }, { in: "{m:123}", out: []string{"{m:123}", "123"}, }, } for _, tc := range testCases { match := messageConfigRegexp.FindStringSubmatch(tc.in) if !reflect.DeepEqual(match, tc.out) { t.Errorf("in: %q, out: %q, want: %q", tc.in, match, tc.out) } } } func TestHeaderMessageConfigRegexp(t *testing.T) { testCases := []struct { in string out []string }{ {in: "{}", out: nil}, {in: "{a:b}", out: nil}, {in: "{h}", out: nil}, {in: "{h:123}", out: nil}, {in: "{m}", out: nil}, {in: "{m:123}", out: nil}, { in: "{h;m}", out: []string{"{h;m}", "", ""}, }, { in: "{h:123;m}", out: []string{"{h:123;m}", "123", ""}, }, { in: "{h;m:123}", out: []string{"{h;m:123}", "", "123"}, }, { in: "{h:123;m:123}", out: []string{"{h:123;m:123}", "123", "123"}, }, } for _, tc := range testCases { match := headerMessageConfigRegexp.FindStringSubmatch(tc.in) if !reflect.DeepEqual(match, tc.out) { t.Errorf("in: %q, out: %q, want: %q", tc.in, match, tc.out) } } } grpc-go-1.22.1/internal/binarylog/sink.go000066400000000000000000000075371351635773100202570ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package binarylog import ( "bufio" "encoding/binary" "fmt" "io" "io/ioutil" "sync" "time" "github.com/golang/protobuf/proto" pb "google.golang.org/grpc/binarylog/grpc_binarylog_v1" "google.golang.org/grpc/grpclog" ) var ( defaultSink Sink = &noopSink{} // TODO(blog): change this default (file in /tmp). ) // SetDefaultSink sets the sink where binary logs will be written to. // // Not thread safe. Only set during initialization. func SetDefaultSink(s Sink) { if defaultSink != nil { defaultSink.Close() } defaultSink = s } // Sink writes log entry into the binary log sink. type Sink interface { // Write will be called to write the log entry into the sink. // // It should be thread-safe so it can be called in parallel. Write(*pb.GrpcLogEntry) error // Close will be called when the Sink is replaced by a new Sink. Close() error } type noopSink struct{} func (ns *noopSink) Write(*pb.GrpcLogEntry) error { return nil } func (ns *noopSink) Close() error { return nil } // newWriterSink creates a binary log sink with the given writer. // // Write() marshalls the proto message and writes it to the given writer. Each // message is prefixed with a 4 byte big endian unsigned integer as the length. // // No buffer is done, Close() doesn't try to close the writer. func newWriterSink(w io.Writer) *writerSink { return &writerSink{out: w} } type writerSink struct { out io.Writer } func (ws *writerSink) Write(e *pb.GrpcLogEntry) error { b, err := proto.Marshal(e) if err != nil { grpclog.Infof("binary logging: failed to marshal proto message: %v", err) } hdr := make([]byte, 4) binary.BigEndian.PutUint32(hdr, uint32(len(b))) if _, err := ws.out.Write(hdr); err != nil { return err } if _, err := ws.out.Write(b); err != nil { return err } return nil } func (ws *writerSink) Close() error { return nil } type bufWriteCloserSink struct { mu sync.Mutex closer io.Closer out *writerSink // out is built on buf. buf *bufio.Writer // buf is kept for flush. writeStartOnce sync.Once writeTicker *time.Ticker } func (fs *bufWriteCloserSink) Write(e *pb.GrpcLogEntry) error { // Start the write loop when Write is called. fs.writeStartOnce.Do(fs.startFlushGoroutine) fs.mu.Lock() if err := fs.out.Write(e); err != nil { fs.mu.Unlock() return err } fs.mu.Unlock() return nil } const ( bufFlushDuration = 60 * time.Second ) func (fs *bufWriteCloserSink) startFlushGoroutine() { fs.writeTicker = time.NewTicker(bufFlushDuration) go func() { for range fs.writeTicker.C { fs.mu.Lock() fs.buf.Flush() fs.mu.Unlock() } }() } func (fs *bufWriteCloserSink) Close() error { if fs.writeTicker != nil { fs.writeTicker.Stop() } fs.mu.Lock() fs.buf.Flush() fs.closer.Close() fs.out.Close() fs.mu.Unlock() return nil } func newBufWriteCloserSink(o io.WriteCloser) Sink { bufW := bufio.NewWriter(o) return &bufWriteCloserSink{ closer: o, out: newWriterSink(bufW), buf: bufW, } } // NewTempFileSink creates a temp file and returns a Sink that writes to this // file. func NewTempFileSink() (Sink, error) { tempFile, err := ioutil.TempFile("/tmp", "grpcgo_binarylog_*.txt") if err != nil { return nil, fmt.Errorf("failed to create temp file: %v", err) } return newBufWriteCloserSink(tempFile), nil } grpc-go-1.22.1/internal/binarylog/util.go000066400000000000000000000022711351635773100202560ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
 * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package binarylog import ( "errors" "strings" ) // parseMethodName splits service and method from the input. It expects format // "/service/method". // // TODO: move to internal/grpcutil. func parseMethodName(methodName string) (service, method string, _ error) { if !strings.HasPrefix(methodName, "/") { return "", "", errors.New("invalid method name: should start with /") } methodName = methodName[1:] pos := strings.LastIndex(methodName, "/") if pos < 0 { return "", "", errors.New("invalid method name: suffix /method is missing") } return methodName[:pos], methodName[pos+1:], nil } grpc-go-1.22.1/internal/binarylog/util_test.go000066400000000000000000000030221351635773100213100ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package binarylog import "testing" func TestParseMethodName(t *testing.T) { testCases := []struct { methodName string service, method string }{ {methodName: "/s/m", service: "s", method: "m"}, {methodName: "/p.s/m", service: "p.s", method: "m"}, {methodName: "/p/s/m", service: "p/s", method: "m"}, } for _, tc := range testCases { s, m, err := parseMethodName(tc.methodName) if err != nil { t.Errorf("Parsing %q got error %v, want nil", tc.methodName, err) continue } if s != tc.service || m != tc.method { t.Errorf("Parsing %q got service %q, method %q, want service %q, method %q", tc.methodName, s, m, tc.service, tc.method, ) } } } func TestParseMethodNameInvalid(t *testing.T) { testCases := []string{ "/", "/sm", "", "sm", } for _, tc := range testCases { _, _, err := parseMethodName(tc) if err == nil { t.Errorf("Parsing %q got nil error, want non-nil error", tc) } } } grpc-go-1.22.1/internal/channelz/000077500000000000000000000000001351635773100165645ustar00rootroot00000000000000grpc-go-1.22.1/internal/channelz/funcs.go000066400000000000000000000472201351635773100202360ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package channelz defines APIs for enabling channelz service, entry // registration/deletion, and accessing channelz data.
 It also defines channelz // metric struct formats. // // All APIs in this package are experimental. package channelz import ( "fmt" "sort" "sync" "sync/atomic" "time" "google.golang.org/grpc/grpclog" ) const ( defaultMaxTraceEntry int32 = 30 ) var ( db dbWrapper idGen idGenerator // EntryPerPage defines the number of channelz entries to be shown on a web page. EntryPerPage = int64(50) curState int32 maxTraceEntry = defaultMaxTraceEntry ) // TurnOn turns on channelz data collection. func TurnOn() { if !IsOn() { NewChannelzStorage() atomic.StoreInt32(&curState, 1) } } // IsOn returns whether channelz data collection is on. func IsOn() bool { return atomic.CompareAndSwapInt32(&curState, 1, 1) } // SetMaxTraceEntry sets the maximum number of trace entries per entity (i.e. channel/subchannel). // Setting it to 0 will disable channel tracing. func SetMaxTraceEntry(i int32) { atomic.StoreInt32(&maxTraceEntry, i) } // ResetMaxTraceEntryToDefault resets the maximum number of trace entries per entity to the default. func ResetMaxTraceEntryToDefault() { atomic.StoreInt32(&maxTraceEntry, defaultMaxTraceEntry) } func getMaxTraceEntry() int { i := atomic.LoadInt32(&maxTraceEntry) return int(i) } // dbWrapper wraps around a reference to internal channelz data storage, and // provides synchronized functionality to set and get the reference. type dbWrapper struct { mu sync.RWMutex DB *channelMap } func (d *dbWrapper) set(db *channelMap) { d.mu.Lock() d.DB = db d.mu.Unlock() } func (d *dbWrapper) get() *channelMap { d.mu.RLock() defer d.mu.RUnlock() return d.DB } // NewChannelzStorage initializes channelz data storage and id generator. // // This function returns a cleanup function to wait for all channelz state to be reset by the // grpc goroutines when those entities get closed. By using this cleanup function, we make sure tests // don't mess up each other, i.e. a lingering goroutine from a previous test doing entity removal happens // to remove an entity just registered by the new test, since the id space is the same. // // Note: This function is exported for testing purposes only. Users should not call // it in most cases. func NewChannelzStorage() (cleanup func() error) { db.set(&channelMap{ topLevelChannels: make(map[int64]struct{}), channels: make(map[int64]*channel), listenSockets: make(map[int64]*listenSocket), normalSockets: make(map[int64]*normalSocket), servers: make(map[int64]*server), subChannels: make(map[int64]*subChannel), }) idGen.reset() return func() error { var err error cm := db.get() if cm == nil { return nil } for i := 0; i < 1000; i++ { cm.mu.Lock() if len(cm.topLevelChannels) == 0 && len(cm.servers) == 0 && len(cm.channels) == 0 && len(cm.subChannels) == 0 && len(cm.listenSockets) == 0 && len(cm.normalSockets) == 0 { cm.mu.Unlock() // all things stored in the channelz map have been cleared. return nil } cm.mu.Unlock() time.Sleep(10 * time.Millisecond) } cm.mu.Lock() err = fmt.Errorf("after 10s the channelz map has not been cleaned up yet, topchannels: %d, servers: %d, channels: %d, subchannels: %d, listen sockets: %d, normal sockets: %d", len(cm.topLevelChannels), len(cm.servers), len(cm.channels), len(cm.subChannels), len(cm.listenSockets), len(cm.normalSockets)) cm.mu.Unlock() return err } } // GetTopChannels returns a slice of top channel's ChannelMetric, along with a // boolean indicating whether there are more top channels to be queried for. // // The arg id specifies that only top channels with id at or above it will be included // in the result.
 The returned slice is up to a length of the arg maxResults or // EntryPerPage if maxResults is zero, and is sorted in ascending id order. func GetTopChannels(id int64, maxResults int64) ([]*ChannelMetric, bool) { return db.get().GetTopChannels(id, maxResults) } // GetServers returns a slice of server's ServerMetric, along with a // boolean indicating whether there are more servers to be queried for. // // The arg id specifies that only servers with id at or above it will be included // in the result. The returned slice is up to a length of the arg maxResults or // EntryPerPage if maxResults is zero, and is sorted in ascending id order. func GetServers(id int64, maxResults int64) ([]*ServerMetric, bool) { return db.get().GetServers(id, maxResults) } // GetServerSockets returns a slice of server's (identified by id) normal socket's // SocketMetric, along with a boolean indicating whether there are more sockets to // be queried for. // // The arg startID specifies that only sockets with id at or above it will be // included in the result. The returned slice is up to a length of the arg maxResults // or EntryPerPage if maxResults is zero, and is sorted in ascending id order. func GetServerSockets(id int64, startID int64, maxResults int64) ([]*SocketMetric, bool) { return db.get().GetServerSockets(id, startID, maxResults) } // GetChannel returns the ChannelMetric for the channel (identified by id). func GetChannel(id int64) *ChannelMetric { return db.get().GetChannel(id) } // GetSubChannel returns the SubChannelMetric for the subchannel (identified by id). func GetSubChannel(id int64) *SubChannelMetric { return db.get().GetSubChannel(id) } // GetSocket returns the SocketMetric for the socket (identified by id). func GetSocket(id int64) *SocketMetric { return db.get().GetSocket(id) } // GetServer returns the ServerMetric for the server (identified by id). func GetServer(id int64) *ServerMetric { return db.get().GetServer(id) } // RegisterChannel registers the given channel c in channelz database with ref // as its reference name, and adds it to the child list of its parent (identified // by pid). pid = 0 means no parent. It returns the unique channelz tracking id // assigned to this channel. func RegisterChannel(c Channel, pid int64, ref string) int64 { id := idGen.genID() cn := &channel{ refName: ref, c: c, subChans: make(map[int64]string), nestedChans: make(map[int64]string), id: id, pid: pid, trace: &channelTrace{createdTime: time.Now(), events: make([]*TraceEvent, 0, getMaxTraceEntry())}, } if pid == 0 { db.get().addChannel(id, cn, true, pid, ref) } else { db.get().addChannel(id, cn, false, pid, ref) } return id } // RegisterSubChannel registers the given channel c in channelz database with ref // as its reference name, and adds it to the child list of its parent (identified // by pid). It returns the unique channelz tracking id assigned to this subchannel. func RegisterSubChannel(c Channel, pid int64, ref string) int64 { if pid == 0 { grpclog.Error("a SubChannel's parent id cannot be 0") return 0 } id := idGen.genID() sc := &subChannel{ refName: ref, c: c, sockets: make(map[int64]string), id: id, pid: pid, trace: &channelTrace{createdTime: time.Now(), events: make([]*TraceEvent, 0, getMaxTraceEntry())}, } db.get().addSubChannel(id, sc, pid, ref) return id } // RegisterServer registers the given server s in channelz database. It returns // the unique channelz tracking id assigned to this server.
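//
// Illustrative usage (editorial sketch; srv is assumed to be a value that
// implements the channelz Server interface):
//
//	id := channelz.RegisterServer(srv, "myServer")
//	// ... serve ...
//	defer channelz.RemoveEntry(id)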
 func RegisterServer(s Server, ref string) int64 { id := idGen.genID() svr := &server{ refName: ref, s: s, sockets: make(map[int64]string), listenSockets: make(map[int64]string), id: id, } db.get().addServer(id, svr) return id } // RegisterListenSocket registers the given listen socket s in channelz database // with ref as its reference name, and adds it to the child list of its parent // (identified by pid). It returns the unique channelz tracking id assigned to // this listen socket. func RegisterListenSocket(s Socket, pid int64, ref string) int64 { if pid == 0 { grpclog.Error("a ListenSocket's parent id cannot be 0") return 0 } id := idGen.genID() ls := &listenSocket{refName: ref, s: s, id: id, pid: pid} db.get().addListenSocket(id, ls, pid, ref) return id } // RegisterNormalSocket registers the given normal socket s in channelz database // with ref as its reference name, and adds it to the child list of its parent // (identified by pid). It returns the unique channelz tracking id assigned to // this normal socket. func RegisterNormalSocket(s Socket, pid int64, ref string) int64 { if pid == 0 { grpclog.Error("a NormalSocket's parent id cannot be 0") return 0 } id := idGen.genID() ns := &normalSocket{refName: ref, s: s, id: id, pid: pid} db.get().addNormalSocket(id, ns, pid, ref) return id } // RemoveEntry removes the entry with the given unique channelz tracking id from the // channelz database. func RemoveEntry(id int64) { db.get().removeEntry(id) } // TraceEventDesc is what the caller of AddTraceEvent should provide to describe the event to be added // to the channel trace. // The Parent field is optional. It is used for an event that will also be recorded in the entity's parent // trace. type TraceEventDesc struct { Desc string Severity Severity Parent *TraceEventDesc } // AddTraceEvent adds a trace event related to the entity with the specified id, using the provided TraceEventDesc. func AddTraceEvent(id int64, desc *TraceEventDesc) { if getMaxTraceEntry() == 0 { return } db.get().traceEvent(id, desc) } // channelMap is the storage data structure for channelz. // Methods of channelMap can be divided into two categories with respect to locking. // 1. Methods that acquire the global lock. // 2. Methods that can only be called when the global lock is held. // A method of the second type must always be called from within a method of the first type.
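// For example, traceEvent acquires the global lock and then calls findEntry,
// which must only be called while that lock is held.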
type channelMap struct { mu sync.RWMutex topLevelChannels map[int64]struct{} servers map[int64]*server channels map[int64]*channel subChannels map[int64]*subChannel listenSockets map[int64]*listenSocket normalSockets map[int64]*normalSocket } func (c *channelMap) addServer(id int64, s *server) { c.mu.Lock() s.cm = c c.servers[id] = s c.mu.Unlock() } func (c *channelMap) addChannel(id int64, cn *channel, isTopChannel bool, pid int64, ref string) { c.mu.Lock() cn.cm = c cn.trace.cm = c c.channels[id] = cn if isTopChannel { c.topLevelChannels[id] = struct{}{} } else { c.findEntry(pid).addChild(id, cn) } c.mu.Unlock() } func (c *channelMap) addSubChannel(id int64, sc *subChannel, pid int64, ref string) { c.mu.Lock() sc.cm = c sc.trace.cm = c c.subChannels[id] = sc c.findEntry(pid).addChild(id, sc) c.mu.Unlock() } func (c *channelMap) addListenSocket(id int64, ls *listenSocket, pid int64, ref string) { c.mu.Lock() ls.cm = c c.listenSockets[id] = ls c.findEntry(pid).addChild(id, ls) c.mu.Unlock() } func (c *channelMap) addNormalSocket(id int64, ns *normalSocket, pid int64, ref string) { c.mu.Lock() ns.cm = c c.normalSockets[id] = ns c.findEntry(pid).addChild(id, ns) c.mu.Unlock() } // removeEntry triggers the removal of an entry, which may not indeed delete the entry, if it has to // wait on the deletion of its children and until no other entity's channel trace references it. // It may lead to a chain of entry deletion. For example, deleting the last socket of a gracefully // shutting down server will lead to the server being also deleted. func (c *channelMap) removeEntry(id int64) { c.mu.Lock() c.findEntry(id).triggerDelete() c.mu.Unlock() } // c.mu must be held by the caller func (c *channelMap) decrTraceRefCount(id int64) { e := c.findEntry(id) if v, ok := e.(tracedChannel); ok { v.decrTraceRefCount() e.deleteSelfIfReady() } } // c.mu must be held by the caller. func (c *channelMap) findEntry(id int64) entry { var v entry var ok bool if v, ok = c.channels[id]; ok { return v } if v, ok = c.subChannels[id]; ok { return v } if v, ok = c.servers[id]; ok { return v } if v, ok = c.listenSockets[id]; ok { return v } if v, ok = c.normalSockets[id]; ok { return v } return &dummyEntry{idNotFound: id} } // c.mu must be held by the caller // deleteEntry simply deletes an entry from the channelMap. Before calling this // method, caller must check this entry is ready to be deleted, i.e removeEntry() // has been called on it, and no children still exist. // Conditionals are ordered by the expected frequency of deletion of each entity // type, in order to optimize performance. 
func (c *channelMap) deleteEntry(id int64) { var ok bool if _, ok = c.normalSockets[id]; ok { delete(c.normalSockets, id) return } if _, ok = c.subChannels[id]; ok { delete(c.subChannels, id) return } if _, ok = c.channels[id]; ok { delete(c.channels, id) delete(c.topLevelChannels, id) return } if _, ok = c.listenSockets[id]; ok { delete(c.listenSockets, id) return } if _, ok = c.servers[id]; ok { delete(c.servers, id) return } } func (c *channelMap) traceEvent(id int64, desc *TraceEventDesc) { c.mu.Lock() child := c.findEntry(id) childTC, ok := child.(tracedChannel) if !ok { c.mu.Unlock() return } childTC.getChannelTrace().append(&TraceEvent{Desc: desc.Desc, Severity: desc.Severity, Timestamp: time.Now()}) if desc.Parent != nil { parent := c.findEntry(child.getParentID()) var chanType RefChannelType switch child.(type) { case *channel: chanType = RefChannel case *subChannel: chanType = RefSubChannel } if parentTC, ok := parent.(tracedChannel); ok { parentTC.getChannelTrace().append(&TraceEvent{ Desc: desc.Parent.Desc, Severity: desc.Parent.Severity, Timestamp: time.Now(), RefID: id, RefName: childTC.getRefName(), RefType: chanType, }) childTC.incrTraceRefCount() } } c.mu.Unlock() } type int64Slice []int64 func (s int64Slice) Len() int { return len(s) } func (s int64Slice) Swap(i, j int) { s[i], s[j] = s[j], s[i] } func (s int64Slice) Less(i, j int) bool { return s[i] < s[j] } func copyMap(m map[int64]string) map[int64]string { n := make(map[int64]string) for k, v := range m { n[k] = v } return n } func min(a, b int64) int64 { if a < b { return a } return b } func (c *channelMap) GetTopChannels(id int64, maxResults int64) ([]*ChannelMetric, bool) { if maxResults <= 0 { maxResults = EntryPerPage } c.mu.RLock() l := int64(len(c.topLevelChannels)) ids := make([]int64, 0, l) cns := make([]*channel, 0, min(l, maxResults)) for k := range c.topLevelChannels { ids = append(ids, k) } sort.Sort(int64Slice(ids)) idx := sort.Search(len(ids), func(i int) bool { return ids[i] >= id }) count := int64(0) var end bool var t []*ChannelMetric for i, v := range ids[idx:] { if count == maxResults { break } if cn, ok := c.channels[v]; ok { cns = append(cns, cn) t = append(t, &ChannelMetric{ NestedChans: copyMap(cn.nestedChans), SubChans: copyMap(cn.subChans), }) count++ } if i == len(ids[idx:])-1 { end = true break } } c.mu.RUnlock() if count == 0 { end = true } for i, cn := range cns { t[i].ChannelData = cn.c.ChannelzMetric() t[i].ID = cn.id t[i].RefName = cn.refName t[i].Trace = cn.trace.dumpData() } return t, end } func (c *channelMap) GetServers(id, maxResults int64) ([]*ServerMetric, bool) { if maxResults <= 0 { maxResults = EntryPerPage } c.mu.RLock() l := int64(len(c.servers)) ids := make([]int64, 0, l) ss := make([]*server, 0, min(l, maxResults)) for k := range c.servers { ids = append(ids, k) } sort.Sort(int64Slice(ids)) idx := sort.Search(len(ids), func(i int) bool { return ids[i] >= id }) count := int64(0) var end bool var s []*ServerMetric for i, v := range ids[idx:] { if count == maxResults { break } if svr, ok := c.servers[v]; ok { ss = append(ss, svr) s = append(s, &ServerMetric{ ListenSockets: copyMap(svr.listenSockets), }) count++ } if i == len(ids[idx:])-1 { end = true break } } c.mu.RUnlock() if count == 0 { end = true } for i, svr := range ss { s[i].ServerData = svr.s.ChannelzMetric() s[i].ID = svr.id s[i].RefName = svr.refName } return s, end } func (c *channelMap) GetServerSockets(id int64, startID int64, maxResults int64) ([]*SocketMetric, bool) { if maxResults <= 0 { maxResults = 
EntryPerPage } var svr *server var ok bool c.mu.RLock() if svr, ok = c.servers[id]; !ok { // server with id doesn't exist. c.mu.RUnlock() return nil, true } svrskts := svr.sockets l := int64(len(svrskts)) ids := make([]int64, 0, l) sks := make([]*normalSocket, 0, min(l, maxResults)) for k := range svrskts { ids = append(ids, k) } sort.Sort(int64Slice(ids)) idx := sort.Search(len(ids), func(i int) bool { return ids[i] >= startID }) count := int64(0) var end bool for i, v := range ids[idx:] { if count == maxResults { break } if ns, ok := c.normalSockets[v]; ok { sks = append(sks, ns) count++ } if i == len(ids[idx:])-1 { end = true break } } c.mu.RUnlock() if count == 0 { end = true } var s []*SocketMetric for _, ns := range sks { sm := &SocketMetric{} sm.SocketData = ns.s.ChannelzMetric() sm.ID = ns.id sm.RefName = ns.refName s = append(s, sm) } return s, end } func (c *channelMap) GetChannel(id int64) *ChannelMetric { cm := &ChannelMetric{} var cn *channel var ok bool c.mu.RLock() if cn, ok = c.channels[id]; !ok { // channel with id doesn't exist. c.mu.RUnlock() return nil } cm.NestedChans = copyMap(cn.nestedChans) cm.SubChans = copyMap(cn.subChans) // cn.c can be set to &dummyChannel{} when deleteSelfFromMap is called. Save a copy of cn.c when // holding the lock to prevent potential data race. chanCopy := cn.c c.mu.RUnlock() cm.ChannelData = chanCopy.ChannelzMetric() cm.ID = cn.id cm.RefName = cn.refName cm.Trace = cn.trace.dumpData() return cm } func (c *channelMap) GetSubChannel(id int64) *SubChannelMetric { cm := &SubChannelMetric{} var sc *subChannel var ok bool c.mu.RLock() if sc, ok = c.subChannels[id]; !ok { // subchannel with id doesn't exist. c.mu.RUnlock() return nil } cm.Sockets = copyMap(sc.sockets) // sc.c can be set to &dummyChannel{} when deleteSelfFromMap is called. Save a copy of sc.c when // holding the lock to prevent potential data race. chanCopy := sc.c c.mu.RUnlock() cm.ChannelData = chanCopy.ChannelzMetric() cm.ID = sc.id cm.RefName = sc.refName cm.Trace = sc.trace.dumpData() return cm } func (c *channelMap) GetSocket(id int64) *SocketMetric { sm := &SocketMetric{} c.mu.RLock() if ls, ok := c.listenSockets[id]; ok { c.mu.RUnlock() sm.SocketData = ls.s.ChannelzMetric() sm.ID = ls.id sm.RefName = ls.refName return sm } if ns, ok := c.normalSockets[id]; ok { c.mu.RUnlock() sm.SocketData = ns.s.ChannelzMetric() sm.ID = ns.id sm.RefName = ns.refName return sm } c.mu.RUnlock() return nil } func (c *channelMap) GetServer(id int64) *ServerMetric { sm := &ServerMetric{} var svr *server var ok bool c.mu.RLock() if svr, ok = c.servers[id]; !ok { c.mu.RUnlock() return nil } sm.ListenSockets = copyMap(svr.listenSockets) c.mu.RUnlock() sm.ID = svr.id sm.RefName = svr.refName sm.ServerData = svr.s.ChannelzMetric() return sm } type idGenerator struct { id int64 } func (i *idGenerator) reset() { atomic.StoreInt64(&i.id, 0) } func (i *idGenerator) genID() int64 { return atomic.AddInt64(&i.id, 1) } grpc-go-1.22.1/internal/channelz/types.go000066400000000000000000000553041351635773100202660ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
 * See the License for the specific language governing permissions and * limitations under the License. * */ package channelz import ( "net" "sync" "sync/atomic" "time" "google.golang.org/grpc/connectivity" "google.golang.org/grpc/credentials" "google.golang.org/grpc/grpclog" ) // entry represents a node in the channelz database. type entry interface { // addChild adds a child e, whose channelz id is id, to the child list addChild(id int64, e entry) // deleteChild deletes the child with channelz id id from the child list deleteChild(id int64) // triggerDelete tries to delete self from channelz database. However, if child // list is not empty, then deletion from the database is on hold until the last // child is deleted from the database. triggerDelete() // deleteSelfIfReady checks whether triggerDelete() has been called before, and whether the child // list is now empty. If both conditions are met, it deletes self from the database. deleteSelfIfReady() // getParentID returns the parent ID of the entry. A parent ID of 0 means no parent. getParentID() int64 } // dummyEntry is a fake entry to handle the entry-not-found case. type dummyEntry struct { idNotFound int64 } func (d *dummyEntry) addChild(id int64, e entry) { // Note: It is possible for a normal program to reach here under a race condition. // For example, there could be a race between ClientConn.Close() info being propagated // to addrConn and http2Client. ClientConn.Close() cancels the context and causes // http2Client to error. The error info is then caught by the transport monitor // before addrConn.tearDown() is called inside ClientConn.Close(). Therefore, // the addrConn will create a new transport. And when registering the new transport in // channelz, its parent addrConn could have already been torn down and deleted // from channelz tracking, and thus execution reaches the code here. grpclog.Infof("attempt to add child of type %T with id %d to a parent (id=%d) that doesn't currently exist", e, id, d.idNotFound) } func (d *dummyEntry) deleteChild(id int64) { // It is possible for a normal program to reach here under a race condition. // Refer to the example described in addChild(). grpclog.Infof("attempt to delete child with id %d from a parent (id=%d) that doesn't currently exist", id, d.idNotFound) } func (d *dummyEntry) triggerDelete() { grpclog.Warningf("attempt to delete an entry (id=%d) that doesn't currently exist", d.idNotFound) } func (*dummyEntry) deleteSelfIfReady() { // code should not reach here. deleteSelfIfReady is always called on an existing entry. } func (*dummyEntry) getParentID() int64 { return 0 } // ChannelMetric defines the info channelz provides for a specific Channel, which // includes ChannelInternalMetric and channelz-specific data, such as channelz id, // child list, etc. type ChannelMetric struct { // ID is the channelz id of this channel. ID int64 // RefName is the human readable reference string of this channel. RefName string // ChannelData contains channel internal metric reported by the channel through // ChannelzMetric(). ChannelData *ChannelInternalMetric // NestedChans tracks the nested channel type children of this channel in the format of // a map from nested channel channelz id to corresponding reference string. NestedChans map[int64]string // SubChans tracks the subchannel type children of this channel in the format of a // map from subchannel channelz id to corresponding reference string.
 SubChans map[int64]string // Sockets tracks the socket type children of this channel in the format of a map // from socket channelz id to corresponding reference string. // Note: the current grpc implementation doesn't allow a channel to have sockets directly; // therefore, this field is unused. Sockets map[int64]string // Trace contains the most recent traced events. Trace *ChannelTrace } // SubChannelMetric defines the info channelz provides for a specific SubChannel, // which includes ChannelInternalMetric and channelz-specific data, such as // channelz id, child list, etc. type SubChannelMetric struct { // ID is the channelz id of this subchannel. ID int64 // RefName is the human readable reference string of this subchannel. RefName string // ChannelData contains subchannel internal metric reported by the subchannel // through ChannelzMetric(). ChannelData *ChannelInternalMetric // NestedChans tracks the nested channel type children of this subchannel in the format of // a map from nested channel channelz id to corresponding reference string. // Note: the current grpc implementation doesn't allow a subchannel to have nested channels // as children; therefore, this field is unused. NestedChans map[int64]string // SubChans tracks the subchannel type children of this subchannel in the format of a // map from subchannel channelz id to corresponding reference string. // Note: the current grpc implementation doesn't allow a subchannel to have subchannels // as children; therefore, this field is unused. SubChans map[int64]string // Sockets tracks the socket type children of this subchannel in the format of a map // from socket channelz id to corresponding reference string. Sockets map[int64]string // Trace contains the most recent traced events. Trace *ChannelTrace } // ChannelInternalMetric defines the struct that the implementor of Channel interface // should return from ChannelzMetric(). type ChannelInternalMetric struct { // The current connectivity state of the channel. State connectivity.State // The target this channel originally tried to connect to. May be absent. Target string // The number of calls started on the channel. CallsStarted int64 // The number of calls that have completed with an OK status. CallsSucceeded int64 // The number of calls that have completed with a non-OK status. CallsFailed int64 // The last time a call was started on the channel. LastCallStartedTimestamp time.Time } // ChannelTrace stores traced events on a channel/subchannel and related info. type ChannelTrace struct { // EventNum is the number of events that ever got traced (i.e. including those that have been deleted) EventNum int64 // CreationTime is the creation time of the trace. CreationTime time.Time // Events stores the most recent trace events (up to $maxTraceEntry, newer event will overwrite the // oldest one) Events []*TraceEvent } // TraceEvent represents a single trace event type TraceEvent struct { // Desc is a simple description of the trace event. Desc string // Severity states the severity of this trace event. Severity Severity // Timestamp is the event time. Timestamp time.Time // RefID is the id of the entity that gets referenced in the event. RefID is 0 if no other entity is // involved in this event. // e.g. SubChannel (id: 4[]) Created. --> RefID = 4, RefName = "" (inside []) RefID int64 // RefName is the reference name for the entity that gets referenced in the event. RefName string // RefType indicates the referenced entity type, i.e. Channel or SubChannel.
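// For example, the trace event recorded on a parent channel when one of its
// subchannels is created has RefType RefSubChannel.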
RefType RefChannelType } // Channel is the interface that should be satisfied in order to be tracked by // channelz as Channel or SubChannel. type Channel interface { ChannelzMetric() *ChannelInternalMetric } type dummyChannel struct{} func (d *dummyChannel) ChannelzMetric() *ChannelInternalMetric { return &ChannelInternalMetric{} } type channel struct { refName string c Channel closeCalled bool nestedChans map[int64]string subChans map[int64]string id int64 pid int64 cm *channelMap trace *channelTrace // traceRefCount is the number of trace events that reference this channel. // Non-zero traceRefCount means the trace of this channel cannot be deleted. traceRefCount int32 } func (c *channel) addChild(id int64, e entry) { switch v := e.(type) { case *subChannel: c.subChans[id] = v.refName case *channel: c.nestedChans[id] = v.refName default: grpclog.Errorf("cannot add a child (id = %d) of type %T to a channel", id, e) } } func (c *channel) deleteChild(id int64) { delete(c.subChans, id) delete(c.nestedChans, id) c.deleteSelfIfReady() } func (c *channel) triggerDelete() { c.closeCalled = true c.deleteSelfIfReady() } func (c *channel) getParentID() int64 { return c.pid } // deleteSelfFromTree tries to delete the channel from the channelz entry relation tree, which means // deleting the channel reference from its parent's child list. // // In order for a channel to be deleted from the tree, it must meet the criteria that, removal of the // corresponding grpc object has been invoked, and the channel does not have any children left. // // The returned boolean value indicates whether the channel has been successfully deleted from tree. func (c *channel) deleteSelfFromTree() (deleted bool) { if !c.closeCalled || len(c.subChans)+len(c.nestedChans) != 0 { return false } // not top channel if c.pid != 0 { c.cm.findEntry(c.pid).deleteChild(c.id) } return true } // deleteSelfFromMap checks whether it is valid to delete the channel from the map, which means // deleting the channel from channelz's tracking entirely. Users can no longer use id to query the // channel, and its memory will be garbage collected. // // The trace reference count of the channel must be 0 in order to be deleted from the map. This is // specified in the channel tracing gRFC that as long as some other trace has reference to an entity, // the trace of the referenced entity must not be deleted. In order to release the resource allocated // by grpc, the reference to the grpc object is reset to a dummy object. // // deleteSelfFromMap must be called after deleteSelfFromTree returns true. // // It returns a bool to indicate whether the channel can be safely deleted from map. func (c *channel) deleteSelfFromMap() (delete bool) { if c.getTraceRefCount() != 0 { c.c = &dummyChannel{} return false } return true } // deleteSelfIfReady tries to delete the channel itself from the channelz database. // The delete process includes two steps: // 1. delete the channel from the entry relation tree, i.e. delete the channel reference from its // parent's child list. // 2. delete the channel from the map, i.e. delete the channel entirely from channelz. Lookup by id // will return entry not found error. 
func (c *channel) deleteSelfIfReady() { if !c.deleteSelfFromTree() { return } if !c.deleteSelfFromMap() { return } c.cm.deleteEntry(c.id) c.trace.clear() } func (c *channel) getChannelTrace() *channelTrace { return c.trace } func (c *channel) incrTraceRefCount() { atomic.AddInt32(&c.traceRefCount, 1) } func (c *channel) decrTraceRefCount() { atomic.AddInt32(&c.traceRefCount, -1) } func (c *channel) getTraceRefCount() int { i := atomic.LoadInt32(&c.traceRefCount) return int(i) } func (c *channel) getRefName() string { return c.refName } type subChannel struct { refName string c Channel closeCalled bool sockets map[int64]string id int64 pid int64 cm *channelMap trace *channelTrace traceRefCount int32 } func (sc *subChannel) addChild(id int64, e entry) { if v, ok := e.(*normalSocket); ok { sc.sockets[id] = v.refName } else { grpclog.Errorf("cannot add a child (id = %d) of type %T to a subChannel", id, e) } } func (sc *subChannel) deleteChild(id int64) { delete(sc.sockets, id) sc.deleteSelfIfReady() } func (sc *subChannel) triggerDelete() { sc.closeCalled = true sc.deleteSelfIfReady() } func (sc *subChannel) getParentID() int64 { return sc.pid } // deleteSelfFromTree tries to delete the subchannel from the channelz entry relation tree, which // means deleting the subchannel reference from its parent's child list. // // In order for a subchannel to be deleted from the tree, it must meet the criteria that, removal of // the corresponding grpc object has been invoked, and the subchannel does not have any children left. // // The returned boolean value indicates whether the channel has been successfully deleted from tree. func (sc *subChannel) deleteSelfFromTree() (deleted bool) { if !sc.closeCalled || len(sc.sockets) != 0 { return false } sc.cm.findEntry(sc.pid).deleteChild(sc.id) return true } // deleteSelfFromMap checks whether it is valid to delete the subchannel from the map, which means // deleting the subchannel from channelz's tracking entirely. Users can no longer use id to query // the subchannel, and its memory will be garbage collected. // // The trace reference count of the subchannel must be 0 in order to be deleted from the map. This is // specified in the channel tracing gRFC that as long as some other trace has reference to an entity, // the trace of the referenced entity must not be deleted. In order to release the resource allocated // by grpc, the reference to the grpc object is reset to a dummy object. // // deleteSelfFromMap must be called after deleteSelfFromTree returns true. // // It returns a bool to indicate whether the channel can be safely deleted from map. func (sc *subChannel) deleteSelfFromMap() (delete bool) { if sc.getTraceRefCount() != 0 { // free the grpc struct (i.e. addrConn) sc.c = &dummyChannel{} return false } return true } // deleteSelfIfReady tries to delete the subchannel itself from the channelz database. // The delete process includes two steps: // 1. delete the subchannel from the entry relation tree, i.e. delete the subchannel reference from // its parent's child list. // 2. delete the subchannel from the map, i.e. delete the subchannel entirely from channelz. Lookup // by id will return entry not found error. 
func (sc *subChannel) deleteSelfIfReady() { if !sc.deleteSelfFromTree() { return } if !sc.deleteSelfFromMap() { return } sc.cm.deleteEntry(sc.id) sc.trace.clear() } func (sc *subChannel) getChannelTrace() *channelTrace { return sc.trace } func (sc *subChannel) incrTraceRefCount() { atomic.AddInt32(&sc.traceRefCount, 1) } func (sc *subChannel) decrTraceRefCount() { atomic.AddInt32(&sc.traceRefCount, -1) } func (sc *subChannel) getTraceRefCount() int { i := atomic.LoadInt32(&sc.traceRefCount) return int(i) } func (sc *subChannel) getRefName() string { return sc.refName } // SocketMetric defines the info channelz provides for a specific Socket, which // includes SocketInternalMetric and channelz-specific data, such as channelz id, etc. type SocketMetric struct { // ID is the channelz id of this socket. ID int64 // RefName is the human readable reference string of this socket. RefName string // SocketData contains socket internal metric reported by the socket through // ChannelzMetric(). SocketData *SocketInternalMetric } // SocketInternalMetric defines the struct that the implementor of Socket interface // should return from ChannelzMetric(). type SocketInternalMetric struct { // The number of streams that have been started. StreamsStarted int64 // The number of streams that have ended successfully: // On client side, receiving frame with eos bit set. // On server side, sending frame with eos bit set. StreamsSucceeded int64 // The number of streams that have ended unsuccessfully: // On client side, termination without receiving frame with eos bit set. // On server side, termination without sending frame with eos bit set. StreamsFailed int64 // The number of messages successfully sent on this socket. MessagesSent int64 MessagesReceived int64 // The number of keep alives sent. This is typically implemented with HTTP/2 // ping messages. KeepAlivesSent int64 // The last time a stream was created by this endpoint. Usually unset for // servers. LastLocalStreamCreatedTimestamp time.Time // The last time a stream was created by the remote endpoint. Usually unset // for clients. LastRemoteStreamCreatedTimestamp time.Time // The last time a message was sent by this endpoint. LastMessageSentTimestamp time.Time // The last time a message was received by this endpoint. LastMessageReceivedTimestamp time.Time // The amount of window, granted to the local endpoint by the remote endpoint. // This may be slightly out of date due to network latency. This does NOT // include stream level or TCP level flow control info. LocalFlowControlWindow int64 // The amount of window, granted to the remote endpoint by the local endpoint. // This may be slightly out of date due to network latency. This does NOT // include stream level or TCP level flow control info. RemoteFlowControlWindow int64 // The locally bound address. LocalAddr net.Addr // The remote bound address. May be absent. RemoteAddr net.Addr // Optional, represents the name of the remote endpoint, if different than // the original target name. RemoteName string SocketOptions *SocketOptionData Security credentials.ChannelzSecurityValue } // Socket is the interface that should be satisfied in order to be tracked by // channelz as Socket. 
 type Socket interface { ChannelzMetric() *SocketInternalMetric } type listenSocket struct { refName string s Socket id int64 pid int64 cm *channelMap } func (ls *listenSocket) addChild(id int64, e entry) { grpclog.Errorf("cannot add a child (id = %d) of type %T to a listen socket", id, e) } func (ls *listenSocket) deleteChild(id int64) { grpclog.Errorf("cannot delete a child (id = %d) from a listen socket", id) } func (ls *listenSocket) triggerDelete() { ls.cm.deleteEntry(ls.id) ls.cm.findEntry(ls.pid).deleteChild(ls.id) } func (ls *listenSocket) deleteSelfIfReady() { grpclog.Errorf("cannot call deleteSelfIfReady on a listen socket") } func (ls *listenSocket) getParentID() int64 { return ls.pid } type normalSocket struct { refName string s Socket id int64 pid int64 cm *channelMap } func (ns *normalSocket) addChild(id int64, e entry) { grpclog.Errorf("cannot add a child (id = %d) of type %T to a normal socket", id, e) } func (ns *normalSocket) deleteChild(id int64) { grpclog.Errorf("cannot delete a child (id = %d) from a normal socket", id) } func (ns *normalSocket) triggerDelete() { ns.cm.deleteEntry(ns.id) ns.cm.findEntry(ns.pid).deleteChild(ns.id) } func (ns *normalSocket) deleteSelfIfReady() { grpclog.Errorf("cannot call deleteSelfIfReady on a normal socket") } func (ns *normalSocket) getParentID() int64 { return ns.pid } // ServerMetric defines the info channelz provides for a specific Server, which // includes ServerInternalMetric and channelz-specific data, such as channelz id, // child list, etc. type ServerMetric struct { // ID is the channelz id of this server. ID int64 // RefName is the human readable reference string of this server. RefName string // ServerData contains server internal metric reported by the server through // ChannelzMetric(). ServerData *ServerInternalMetric // ListenSockets tracks the listener socket type children of this server in the // format of a map from socket channelz id to corresponding reference string. ListenSockets map[int64]string } // ServerInternalMetric defines the struct that the implementor of Server interface // should return from ChannelzMetric(). type ServerInternalMetric struct { // The number of incoming calls started on the server. CallsStarted int64 // The number of incoming calls that have completed with an OK status. CallsSucceeded int64 // The number of incoming calls that have completed with a non-OK status. CallsFailed int64 // The last time a call was started on the server. LastCallStartedTimestamp time.Time } // Server is the interface to be satisfied in order to be tracked by channelz as // Server.
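//
// A minimal illustrative implementation (editorial sketch; testServer is a
// hypothetical name, not part of this package):
//
//	type testServer struct{}
//
//	func (testServer) ChannelzMetric() *ServerInternalMetric {
//		return &ServerInternalMetric{CallsStarted: 1, LastCallStartedTimestamp: time.Now()}
//	}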
type Server interface { ChannelzMetric() *ServerInternalMetric } type server struct { refName string s Server closeCalled bool sockets map[int64]string listenSockets map[int64]string id int64 cm *channelMap } func (s *server) addChild(id int64, e entry) { switch v := e.(type) { case *normalSocket: s.sockets[id] = v.refName case *listenSocket: s.listenSockets[id] = v.refName default: grpclog.Errorf("cannot add a child (id = %d) of type %T to a server", id, e) } } func (s *server) deleteChild(id int64) { delete(s.sockets, id) delete(s.listenSockets, id) s.deleteSelfIfReady() } func (s *server) triggerDelete() { s.closeCalled = true s.deleteSelfIfReady() } func (s *server) deleteSelfIfReady() { if !s.closeCalled || len(s.sockets)+len(s.listenSockets) != 0 { return } s.cm.deleteEntry(s.id) } func (s *server) getParentID() int64 { return 0 } type tracedChannel interface { getChannelTrace() *channelTrace incrTraceRefCount() decrTraceRefCount() getRefName() string } type channelTrace struct { cm *channelMap createdTime time.Time eventCount int64 mu sync.Mutex events []*TraceEvent } func (c *channelTrace) append(e *TraceEvent) { c.mu.Lock() if len(c.events) == getMaxTraceEntry() { del := c.events[0] c.events = c.events[1:] if del.RefID != 0 { // start recursive cleanup in a goroutine to not block the call originated from grpc. go func() { // need to acquire c.cm.mu lock to call the unlocked attemptCleanup func. c.cm.mu.Lock() c.cm.decrTraceRefCount(del.RefID) c.cm.mu.Unlock() }() } } e.Timestamp = time.Now() c.events = append(c.events, e) c.eventCount++ c.mu.Unlock() } func (c *channelTrace) clear() { c.mu.Lock() for _, e := range c.events { if e.RefID != 0 { // caller should have already held the c.cm.mu lock. c.cm.decrTraceRefCount(e.RefID) } } c.mu.Unlock() } // Severity is the severity level of a trace event. // The canonical enumeration of all valid values is here: // https://github.com/grpc/grpc-proto/blob/9b13d199cc0d4703c7ea26c9c330ba695866eb23/grpc/channelz/v1/channelz.proto#L126. type Severity int const ( // CtUNKNOWN indicates unknown severity of a trace event. CtUNKNOWN Severity = iota // CtINFO indicates info level severity of a trace event. CtINFO // CtWarning indicates warning level severity of a trace event. CtWarning // CtError indicates error level severity of a trace event. CtError ) // RefChannelType is the type of the entity being referenced in a trace event. type RefChannelType int const ( // RefChannel indicates the referenced entity is a Channel. RefChannel RefChannelType = iota // RefSubChannel indicates the referenced entity is a SubChannel. RefSubChannel ) func (c *channelTrace) dumpData() *ChannelTrace { c.mu.Lock() ct := &ChannelTrace{EventNum: c.eventCount, CreationTime: c.createdTime} ct.Events = c.events[:len(c.events)] c.mu.Unlock() return ct } grpc-go-1.22.1/internal/channelz/types_linux.go000066400000000000000000000031241351635773100214760ustar00rootroot00000000000000// +build !appengine /* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package channelz import ( "syscall" "golang.org/x/sys/unix" ) // SocketOptionData defines the struct to hold socket option data, and related // getter function to obtain info from fd. type SocketOptionData struct { Linger *unix.Linger RecvTimeout *unix.Timeval SendTimeout *unix.Timeval TCPInfo *unix.TCPInfo } // Getsockopt defines the function to get socket options requested by channelz. // It is to be passed to syscall.RawConn.Control(). func (s *SocketOptionData) Getsockopt(fd uintptr) { if v, err := unix.GetsockoptLinger(int(fd), syscall.SOL_SOCKET, syscall.SO_LINGER); err == nil { s.Linger = v } if v, err := unix.GetsockoptTimeval(int(fd), syscall.SOL_SOCKET, syscall.SO_RCVTIMEO); err == nil { s.RecvTimeout = v } if v, err := unix.GetsockoptTimeval(int(fd), syscall.SOL_SOCKET, syscall.SO_SNDTIMEO); err == nil { s.SendTimeout = v } if v, err := unix.GetsockoptTCPInfo(int(fd), syscall.SOL_TCP, syscall.TCP_INFO); err == nil { s.TCPInfo = v } } grpc-go-1.22.1/internal/channelz/types_nonlinux.go000066400000000000000000000023721351635773100222150ustar00rootroot00000000000000// +build !linux appengine /* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package channelz import ( "sync" "google.golang.org/grpc/grpclog" ) var once sync.Once // SocketOptionData defines the struct to hold socket option data, and related // getter function to obtain info from fd. // Windows OS doesn't support Socket Option type SocketOptionData struct { } // Getsockopt defines the function to get socket options requested by channelz. // It is to be passed to syscall.RawConn.Control(). // Windows OS doesn't support Socket Option func (s *SocketOptionData) Getsockopt(fd uintptr) { once.Do(func() { grpclog.Warningln("Channelz: socket options are not supported on non-linux os and appengine.") }) } grpc-go-1.22.1/internal/channelz/util_linux.go000066400000000000000000000017451351635773100213160ustar00rootroot00000000000000// +build linux,!appengine /* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package channelz import ( "syscall" ) // GetSocketOption gets the socket option info of the conn. 
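//
// Illustrative usage (editorial sketch; conn is assumed to be a connection type
// that implements syscall.Conn, e.g. *net.TCPConn):
//
//	if data := GetSocketOption(conn); data != nil && data.TCPInfo != nil {
//		// inspect data.TCPInfo.State, data.Linger, data.RecvTimeout, ...
//	}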
func GetSocketOption(socket interface{}) *SocketOptionData { c, ok := socket.(syscall.Conn) if !ok { return nil } data := &SocketOptionData{} if rawConn, err := c.SyscallConn(); err == nil { rawConn.Control(data.Getsockopt) return data } return nil } grpc-go-1.22.1/internal/channelz/util_nonlinux.go000066400000000000000000000014141351635773100220220ustar00rootroot00000000000000// +build !linux appengine /* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package channelz // GetSocketOption gets the socket option info of the conn. func GetSocketOption(c interface{}) *SocketOptionData { return nil } grpc-go-1.22.1/internal/channelz/util_test.go000066400000000000000000000062661351635773100211410ustar00rootroot00000000000000// +build linux,go1.10,!appengine /* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // The test in this file should be run in an environment that has go1.10 or later, // as the function SyscallConn() (required to get socket option) was introduced // to net.TCPListener in go1.10. 
package channelz_test import ( "net" "reflect" "syscall" "testing" "golang.org/x/sys/unix" "google.golang.org/grpc/internal/channelz" ) func TestGetSocketOpt(t *testing.T) { network, addr := "tcp", ":0" ln, err := net.Listen(network, addr) if err != nil { t.Fatalf("net.Listen(%s,%s) failed with err: %v", network, addr, err) } defer ln.Close() go func() { ln.Accept() }() conn, _ := net.Dial(network, ln.Addr().String()) defer conn.Close() tcpc := conn.(*net.TCPConn) raw, err := tcpc.SyscallConn() if err != nil { t.Fatalf("SyscallConn() failed due to %v", err) } l := &unix.Linger{Onoff: 1, Linger: 5} recvTimout := &unix.Timeval{Sec: 100} sendTimeout := &unix.Timeval{Sec: 8888} raw.Control(func(fd uintptr) { err := unix.SetsockoptLinger(int(fd), syscall.SOL_SOCKET, syscall.SO_LINGER, l) if err != nil { t.Fatalf("failed to SetsockoptLinger(%v,%v,%v,%v) due to %v", int(fd), syscall.SOL_SOCKET, syscall.SO_LINGER, l, err) } err = unix.SetsockoptTimeval(int(fd), syscall.SOL_SOCKET, syscall.SO_RCVTIMEO, recvTimout) if err != nil { t.Fatalf("failed to SetsockoptTimeval(%v,%v,%v,%v) due to %v", int(fd), syscall.SOL_SOCKET, syscall.SO_RCVTIMEO, recvTimout, err) } err = unix.SetsockoptTimeval(int(fd), syscall.SOL_SOCKET, syscall.SO_SNDTIMEO, sendTimeout) if err != nil { t.Fatalf("failed to SetsockoptTimeval(%v,%v,%v,%v) due to %v", int(fd), syscall.SOL_SOCKET, syscall.SO_SNDTIMEO, sendTimeout, err) } }) sktopt := channelz.GetSocketOption(conn) if !reflect.DeepEqual(sktopt.Linger, l) { t.Fatalf("get socket option linger, want: %v, got %v", l, sktopt.Linger) } if !reflect.DeepEqual(sktopt.RecvTimeout, recvTimout) { t.Logf("get socket option recv timeout, want: %v, got %v, may be caused by system allowing non or partial setting of this value", recvTimout, sktopt.RecvTimeout) } if !reflect.DeepEqual(sktopt.SendTimeout, sendTimeout) { t.Logf("get socket option send timeout, want: %v, got %v, may be caused by system allowing non or partial setting of this value", sendTimeout, sktopt.SendTimeout) } if sktopt == nil || sktopt.TCPInfo != nil && sktopt.TCPInfo.State != 1 { t.Fatalf("TCPInfo.State want 1 (TCP_ESTABLISHED), got %v", sktopt) } sktopt = channelz.GetSocketOption(ln) if sktopt == nil || sktopt.TCPInfo == nil || sktopt.TCPInfo.State != 10 { t.Fatalf("TCPInfo.State want 10 (TCP_LISTEN), got %v", sktopt) } } grpc-go-1.22.1/internal/envconfig/000077500000000000000000000000001351635773100167405ustar00rootroot00000000000000grpc-go-1.22.1/internal/envconfig/envconfig.go000066400000000000000000000034111351635773100212440ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package envconfig contains grpc settings configured by environment variables. package envconfig import ( "os" "strings" ) const ( prefix = "GRPC_GO_" retryStr = prefix + "RETRY" requireHandshakeStr = prefix + "REQUIRE_HANDSHAKE" ) // RequireHandshakeSetting describes the settings for handshaking. 
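//
// The value is chosen at package initialization from the GRPC_GO_REQUIRE_HANDSHAKE
// environment variable: "off" selects RequireHandshakeOff, while any other value
// (including unset) selects RequireHandshakeOn. Illustrative check (editorial sketch):
//
//	if envconfig.RequireHandshake == envconfig.RequireHandshakeOn {
//		// wait for the HTTP/2 handshake before considering the connection ready
//	}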
type RequireHandshakeSetting int const ( // RequireHandshakeOn indicates to wait for handshake before considering a // connection ready/successful. RequireHandshakeOn RequireHandshakeSetting = iota // RequireHandshakeOff indicates to not wait for handshake before // considering a connection ready/successful. RequireHandshakeOff ) var ( // Retry is set if retry is explicitly enabled via "GRPC_GO_RETRY=on". Retry = strings.EqualFold(os.Getenv(retryStr), "on") // RequireHandshake is set based upon the GRPC_GO_REQUIRE_HANDSHAKE // environment variable. // // Will be removed after the 1.18 release. RequireHandshake = RequireHandshakeOn ) func init() { switch strings.ToLower(os.Getenv(requireHandshakeStr)) { case "on": fallthrough default: RequireHandshake = RequireHandshakeOn case "off": RequireHandshake = RequireHandshakeOff } } grpc-go-1.22.1/internal/grpcrand/000077500000000000000000000000001351635773100165625ustar00rootroot00000000000000grpc-go-1.22.1/internal/grpcrand/grpcrand.go000066400000000000000000000024641351635773100207170ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package grpcrand implements math/rand functions in a concurrent-safe way // with a global random source, independent of math/rand's global source. package grpcrand import ( "math/rand" "sync" "time" ) var ( r = rand.New(rand.NewSource(time.Now().UnixNano())) mu sync.Mutex ) // Int63n implements rand.Int63n on the grpcrand global source. func Int63n(n int64) int64 { mu.Lock() res := r.Int63n(n) mu.Unlock() return res } // Intn implements rand.Intn on the grpcrand global source. func Intn(n int) int { mu.Lock() res := r.Intn(n) mu.Unlock() return res } // Float64 implements rand.Float64 on the grpcrand global source. func Float64() float64 { mu.Lock() res := r.Float64() mu.Unlock() return res } grpc-go-1.22.1/internal/grpcsync/000077500000000000000000000000001351635773100166125ustar00rootroot00000000000000grpc-go-1.22.1/internal/grpcsync/event.go000066400000000000000000000030621351635773100202630ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package grpcsync implements additional synchronization primitives built upon // the sync package. package grpcsync import ( "sync" "sync/atomic" ) // Event represents a one-time event that may occur in the future. type Event struct { fired int32 c chan struct{} o sync.Once } // Fire causes e to complete. It is safe to call multiple times, and // concurrently. 
It returns true iff this call to Fire caused the signaling // channel returned by Done to close. func (e *Event) Fire() bool { ret := false e.o.Do(func() { atomic.StoreInt32(&e.fired, 1) close(e.c) ret = true }) return ret } // Done returns a channel that will be closed when Fire is called. func (e *Event) Done() <-chan struct{} { return e.c } // HasFired returns true if Fire has been called. func (e *Event) HasFired() bool { return atomic.LoadInt32(&e.fired) == 1 } // NewEvent returns a new, ready-to-use Event. func NewEvent() *Event { return &Event{c: make(chan struct{})} } grpc-go-1.22.1/internal/grpcsync/event_test.go000066400000000000000000000030041351635773100213160ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpcsync import "testing" func TestEventHasFired(t *testing.T) { e := NewEvent() if e.HasFired() { t.Fatal("e.HasFired() = true; want false") } if !e.Fire() { t.Fatal("e.Fire() = false; want true") } if !e.HasFired() { t.Fatal("e.HasFired() = false; want true") } } func TestEventDoneChannel(t *testing.T) { e := NewEvent() select { case <-e.Done(): t.Fatal("e.HasFired() = true; want false") default: } if !e.Fire() { t.Fatal("e.Fire() = false; want true") } select { case <-e.Done(): default: t.Fatal("e.HasFired() = false; want true") } } func TestEventMultipleFires(t *testing.T) { e := NewEvent() if e.HasFired() { t.Fatal("e.HasFired() = true; want false") } if !e.Fire() { t.Fatal("e.Fire() = false; want true") } for i := 0; i < 3; i++ { if !e.HasFired() { t.Fatal("e.HasFired() = false; want true") } if e.Fire() { t.Fatal("e.Fire() = true; want false") } } } grpc-go-1.22.1/internal/grpctest/000077500000000000000000000000001351635773100166155ustar00rootroot00000000000000grpc-go-1.22.1/internal/grpctest/example_test.go000066400000000000000000000024241351635773100216400ustar00rootroot00000000000000/* * * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package grpctest_test import ( "testing" "google.golang.org/grpc/internal/grpctest" ) type s struct { i int } func (s *s) Setup(t *testing.T) { t.Log("Per-test setup code") s.i = 5 } func (s *s) TestSomething(t *testing.T) { t.Log("TestSomething") if s.i != 5 { t.Errorf("s.i = %v; want 5", s.i) } s.i = 3 } func (s *s) TestSomethingElse(t *testing.T) { t.Log("TestSomethingElse") if got, want := s.i%4, 1; got != want { t.Errorf("s.i %% 4 = %v; want %v", got, want) } s.i = 3 } func (s *s) Teardown(t *testing.T) { t.Log("Per-test teardown code") if s.i != 3 { t.Fatalf("s.i = %v; want 3", s.i) } } func TestExample(t *testing.T) { grpctest.RunSubTests(t, &s{}) } grpc-go-1.22.1/internal/grpctest/grpctest.go000066400000000000000000000040431351635773100210000ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package grpctest implements testing helpers. package grpctest import ( "reflect" "strings" "testing" ) func getTestFunc(t *testing.T, xv reflect.Value, name string) func(*testing.T) { if m := xv.MethodByName(name); m.IsValid() { if f, ok := m.Interface().(func(*testing.T)); ok { return f } // Method exists but has the wrong type signature. t.Fatalf("grpctest: function %v has unexpected signature (%T)", name, m.Interface()) } return func(*testing.T) {} } // RunSubTests runs all "Test___" functions that are methods of x as subtests // of the current test. If x contains methods "Setup(*testing.T)" or // "Teardown(*testing.T)", those are run before or after each of the test // functions, respectively. // // For example usage, see example_test.go. Run it using: // $ go test -v -run TestExample . // // To run a specific test/subtest: // $ go test -v -run 'TestExample/^Something$' . func RunSubTests(t *testing.T, x interface{}) { xt := reflect.TypeOf(x) xv := reflect.ValueOf(x) setup := getTestFunc(t, xv, "Setup") teardown := getTestFunc(t, xv, "Teardown") for i := 0; i < xt.NumMethod(); i++ { methodName := xt.Method(i).Name if !strings.HasPrefix(methodName, "Test") { continue } tfunc := getTestFunc(t, xv, methodName) t.Run(strings.TrimPrefix(methodName, "Test"), func(t *testing.T) { setup(t) // defer teardown to guarantee it is run even if tfunc uses t.Fatal() defer teardown(t) tfunc(t) }) } } grpc-go-1.22.1/internal/grpctest/grpctest_test.go000066400000000000000000000026511351635773100220420ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package grpctest import ( "reflect" "testing" ) type tRunST struct { setup, test, teardown bool } func (t *tRunST) Setup(*testing.T) { t.setup = true } func (t *tRunST) TestSubTest(*testing.T) { t.test = true } func (t *tRunST) Teardown(*testing.T) { t.teardown = true } func TestRunSubTests(t *testing.T) { x := &tRunST{} RunSubTests(t, x) if want := (&tRunST{setup: true, test: true, teardown: true}); !reflect.DeepEqual(x, want) { t.Fatalf("x = %v; want all fields true", x) } } type tNoST struct { test bool } func (t *tNoST) TestSubTest(*testing.T) { t.test = true } func TestNoSetupOrTeardown(t *testing.T) { // Ensures nothing panics or fails if Setup/Teardown are omitted. x := &tNoST{} RunSubTests(t, x) if want := (&tNoST{test: true}); !reflect.DeepEqual(x, want) { t.Fatalf("x = %v; want %v", x, want) } } grpc-go-1.22.1/internal/internal.go000066400000000000000000000057371351635773100171410ustar00rootroot00000000000000/* * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package internal contains gRPC-internal code, to avoid polluting // the godoc of the top-level grpc package. It must not import any grpc // symbols to avoid circular dependencies. package internal import ( "context" "time" "google.golang.org/grpc/connectivity" ) var ( // WithResolverBuilder is exported by dialoptions.go WithResolverBuilder interface{} // func (resolver.Builder) grpc.DialOption // WithHealthCheckFunc is not exported by dialoptions.go WithHealthCheckFunc interface{} // func (HealthChecker) DialOption // HealthCheckFunc is used to provide client-side LB channel health checking HealthCheckFunc HealthChecker // BalancerUnregister is exported by package balancer to unregister a balancer. BalancerUnregister func(name string) // KeepaliveMinPingTime is the minimum ping interval. This must be 10s by // default, but tests may wish to set it lower for convenience. KeepaliveMinPingTime = 10 * time.Second // ParseServiceConfig is a function to parse JSON service configs into // opaque data structures. ParseServiceConfig func(sc string) (interface{}, error) // StatusRawProto is exported by status/status.go. This func returns a // pointer to the wrapped Status proto for a given status.Status without a // call to proto.Clone(). The returned Status proto should not be mutated by // the caller. StatusRawProto interface{} // func (*status.Status) *spb.Status ) // HealthChecker defines the signature of the client-side LB channel health checking function. // // The implementation is expected to create a health checking RPC stream by // calling newStream(), watch for the health status of serviceName, and report // it's health back by calling setConnectivityState(). 
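//
// In outline, the implementation registered as HealthCheckFunc streams the
// health/v1 Watch RPC for serviceName over a stream obtained from newStream
// and maps each response onto a setConnectivityState call (READY while the
// service reports SERVING, TRANSIENT_FAILURE otherwise). That summary is only
// a sketch; the protocol linked below is the authoritative contract.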
// // The health checking protocol is defined at: // https://github.com/grpc/grpc/blob/master/doc/health-checking.md type HealthChecker func(ctx context.Context, newStream func(string) (interface{}, error), setConnectivityState func(connectivity.State), serviceName string) error const ( // CredsBundleModeFallback switches GoogleDefaultCreds to fallback mode. CredsBundleModeFallback = "fallback" // CredsBundleModeBalancer switches GoogleDefaultCreds to grpclb balancer // mode. CredsBundleModeBalancer = "balancer" // CredsBundleModeBackendFromBalancer switches GoogleDefaultCreds to mode // that supports backend returned by grpclb balancer. CredsBundleModeBackendFromBalancer = "backend-from-balancer" ) grpc-go-1.22.1/internal/leakcheck/000077500000000000000000000000001351635773100166745ustar00rootroot00000000000000grpc-go-1.22.1/internal/leakcheck/leakcheck.go000066400000000000000000000057651351635773100211520ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package leakcheck contains functions to check leaked goroutines. // // Call "defer leakcheck.Check(t)" at the beginning of tests. package leakcheck import ( "runtime" "sort" "strings" "time" ) var goroutinesToIgnore = []string{ "testing.Main(", "testing.tRunner(", "testing.(*M).", "runtime.goexit", "created by runtime.gc", "created by runtime/trace.Start", "interestingGoroutines", "runtime.MHeap_Scavenger", "signal.signal_recv", "sigterm.handler", "runtime_mcall", "(*loggingT).flushDaemon", "goroutine in C code", } // RegisterIgnoreGoroutine appends s into the ignore goroutine list. The // goroutines whose stack trace contains s will not be identified as leaked // goroutines. Not thread-safe, only call this function in init(). func RegisterIgnoreGoroutine(s string) { goroutinesToIgnore = append(goroutinesToIgnore, s) } func ignore(g string) bool { sl := strings.SplitN(g, "\n", 2) if len(sl) != 2 { return true } stack := strings.TrimSpace(sl[1]) if strings.HasPrefix(stack, "testing.RunTests") { return true } if stack == "" { return true } for _, s := range goroutinesToIgnore { if strings.Contains(stack, s) { return true } } return false } // interestingGoroutines returns all goroutines we care about for the purpose of // leak checking. It excludes testing or runtime ones. func interestingGoroutines() (gs []string) { buf := make([]byte, 2<<20) buf = buf[:runtime.Stack(buf, true)] for _, g := range strings.Split(string(buf), "\n\n") { if !ignore(g) { gs = append(gs, g) } } sort.Strings(gs) return } // Errorfer is the interface that wraps the Errorf method. It's a subset of // testing.TB to make it easy to use Check. type Errorfer interface { Errorf(format string, args ...interface{}) } func check(efer Errorfer, timeout time.Duration) { // Loop, waiting for goroutines to shut down. // Wait up to timeout, but finish as quickly as possible. 
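	// Poll interestingGoroutines() until it comes back empty or the deadline
	// passes; the 50ms sleep keeps polling cheap while still returning soon
	// after the stray goroutines exit. Anything still running at the deadline
	// is reported through efer.Errorf below.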
deadline := time.Now().Add(timeout) var leaked []string for time.Now().Before(deadline) { if leaked = interestingGoroutines(); len(leaked) == 0 { return } time.Sleep(50 * time.Millisecond) } for _, g := range leaked { efer.Errorf("Leaked goroutine: %v", g) } } // Check looks at the currently-running goroutines and checks if there are any // interestring (created by gRPC) goroutines leaked. It waits up to 10 seconds // in the error cases. func Check(efer Errorfer) { check(efer, 10*time.Second) } grpc-go-1.22.1/internal/leakcheck/leakcheck_test.go000066400000000000000000000036571351635773100222070ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package leakcheck import ( "fmt" "strings" "testing" "time" ) type testErrorfer struct { errorCount int errors []string } func (e *testErrorfer) Errorf(format string, args ...interface{}) { e.errors = append(e.errors, fmt.Sprintf(format, args...)) e.errorCount++ } func TestCheck(t *testing.T) { const leakCount = 3 for i := 0; i < leakCount; i++ { go func() { time.Sleep(2 * time.Second) }() } if ig := interestingGoroutines(); len(ig) == 0 { t.Error("blah") } e := &testErrorfer{} check(e, time.Second) if e.errorCount != leakCount { t.Errorf("check found %v leaks, want %v leaks", e.errorCount, leakCount) t.Logf("leaked goroutines:\n%v", strings.Join(e.errors, "\n")) } check(t, 3*time.Second) } func ignoredTestingLeak(d time.Duration) { time.Sleep(d) } func TestCheckRegisterIgnore(t *testing.T) { RegisterIgnoreGoroutine("ignoredTestingLeak") const leakCount = 3 for i := 0; i < leakCount; i++ { go func() { time.Sleep(2 * time.Second) }() } go func() { ignoredTestingLeak(3 * time.Second) }() if ig := interestingGoroutines(); len(ig) == 0 { t.Error("blah") } e := &testErrorfer{} check(e, time.Second) if e.errorCount != leakCount { t.Errorf("check found %v leaks, want %v leaks", e.errorCount, leakCount) t.Logf("leaked goroutines:\n%v", strings.Join(e.errors, "\n")) } check(t, 3*time.Second) } grpc-go-1.22.1/internal/syscall/000077500000000000000000000000001351635773100164345ustar00rootroot00000000000000grpc-go-1.22.1/internal/syscall/syscall_linux.go000066400000000000000000000062701351635773100216610ustar00rootroot00000000000000// +build !appengine /* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package syscall provides functionalities that grpc uses to get low-level operating system // stats/info. 
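//
// A small, purely illustrative sketch of how the CPU accounting helpers below
// fit together (the surrounding benchmark-style usage is an assumption, not a
// caller that exists in this package):
//
//	before := syscall.GetRusage()
//	startNanos := syscall.GetCPUTime()
//	// ... run the workload being measured ...
//	utime, stime := syscall.CPUTimeDiff(before, syscall.GetRusage()) // seconds of user/system CPU
//	cpuNanos := syscall.GetCPUTime() - startNanos                    // process CPU time, in nanoseconds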
package syscall import ( "fmt" "net" "syscall" "time" "golang.org/x/sys/unix" "google.golang.org/grpc/grpclog" ) // GetCPUTime returns the how much CPU time has passed since the start of this process. func GetCPUTime() int64 { var ts unix.Timespec if err := unix.ClockGettime(unix.CLOCK_PROCESS_CPUTIME_ID, &ts); err != nil { grpclog.Fatal(err) } return ts.Nano() } // Rusage is an alias for syscall.Rusage under linux non-appengine environment. type Rusage syscall.Rusage // GetRusage returns the resource usage of current process. func GetRusage() (rusage *Rusage) { rusage = new(Rusage) syscall.Getrusage(syscall.RUSAGE_SELF, (*syscall.Rusage)(rusage)) return } // CPUTimeDiff returns the differences of user CPU time and system CPU time used // between two Rusage structs. func CPUTimeDiff(first *Rusage, latest *Rusage) (float64, float64) { f := (*syscall.Rusage)(first) l := (*syscall.Rusage)(latest) var ( utimeDiffs = l.Utime.Sec - f.Utime.Sec utimeDiffus = l.Utime.Usec - f.Utime.Usec stimeDiffs = l.Stime.Sec - f.Stime.Sec stimeDiffus = l.Stime.Usec - f.Stime.Usec ) uTimeElapsed := float64(utimeDiffs) + float64(utimeDiffus)*1.0e-6 sTimeElapsed := float64(stimeDiffs) + float64(stimeDiffus)*1.0e-6 return uTimeElapsed, sTimeElapsed } // SetTCPUserTimeout sets the TCP user timeout on a connection's socket func SetTCPUserTimeout(conn net.Conn, timeout time.Duration) error { tcpconn, ok := conn.(*net.TCPConn) if !ok { // not a TCP connection. exit early return nil } rawConn, err := tcpconn.SyscallConn() if err != nil { return fmt.Errorf("error getting raw connection: %v", err) } err = rawConn.Control(func(fd uintptr) { err = syscall.SetsockoptInt(int(fd), syscall.IPPROTO_TCP, unix.TCP_USER_TIMEOUT, int(timeout/time.Millisecond)) }) if err != nil { return fmt.Errorf("error setting option on socket: %v", err) } return nil } // GetTCPUserTimeout gets the TCP user timeout on a connection's socket func GetTCPUserTimeout(conn net.Conn) (opt int, err error) { tcpconn, ok := conn.(*net.TCPConn) if !ok { err = fmt.Errorf("conn is not *net.TCPConn. got %T", conn) return } rawConn, err := tcpconn.SyscallConn() if err != nil { err = fmt.Errorf("error getting raw connection: %v", err) return } err = rawConn.Control(func(fd uintptr) { opt, err = syscall.GetsockoptInt(int(fd), syscall.IPPROTO_TCP, unix.TCP_USER_TIMEOUT) }) if err != nil { err = fmt.Errorf("error getting option on socket: %v", err) return } return } grpc-go-1.22.1/internal/syscall/syscall_nonlinux.go000066400000000000000000000036431351635773100223750ustar00rootroot00000000000000// +build !linux appengine /* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package syscall import ( "net" "sync" "time" "google.golang.org/grpc/grpclog" ) var once sync.Once func log() { once.Do(func() { grpclog.Info("CPU time info is unavailable on non-linux or appengine environment.") }) } // GetCPUTime returns the how much CPU time has passed since the start of this process. // It always returns 0 under non-linux or appengine environment. 
func GetCPUTime() int64 { log() return 0 } // Rusage is an empty struct under non-linux or appengine environment. type Rusage struct{} // GetRusage is a no-op function under non-linux or appengine environment. func GetRusage() (rusage *Rusage) { log() return nil } // CPUTimeDiff returns the differences of user CPU time and system CPU time used // between two Rusage structs. It a no-op function for non-linux or appengine environment. func CPUTimeDiff(first *Rusage, latest *Rusage) (float64, float64) { log() return 0, 0 } // SetTCPUserTimeout is a no-op function under non-linux or appengine environments func SetTCPUserTimeout(conn net.Conn, timeout time.Duration) error { log() return nil } // GetTCPUserTimeout is a no-op function under non-linux or appengine environments // a negative return value indicates the operation is not supported func GetTCPUserTimeout(conn net.Conn) (int, error) { log() return -1, nil } grpc-go-1.22.1/internal/testutils/000077500000000000000000000000001351635773100170225ustar00rootroot00000000000000grpc-go-1.22.1/internal/testutils/pipe_listener.go000066400000000000000000000042431351635773100222160ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package testutils contains testing helpers. package testutils import ( "errors" "net" "time" ) var errClosed = errors.New("closed") type pipeAddr struct{} func (p pipeAddr) Network() string { return "pipe" } func (p pipeAddr) String() string { return "pipe" } // PipeListener is a listener with an unbuffered pipe. Each write will complete only once the other side reads. It // should only be created using NewPipeListener. type PipeListener struct { c chan chan<- net.Conn done chan struct{} } // NewPipeListener creates a new pipe listener. func NewPipeListener() *PipeListener { return &PipeListener{ c: make(chan chan<- net.Conn), done: make(chan struct{}), } } // Accept accepts a connection. func (p *PipeListener) Accept() (net.Conn, error) { var connChan chan<- net.Conn select { case <-p.done: return nil, errClosed case connChan = <-p.c: select { case <-p.done: close(connChan) return nil, errClosed default: } } c1, c2 := net.Pipe() connChan <- c1 close(connChan) return c2, nil } // Close closes the listener. func (p *PipeListener) Close() error { close(p.done) return nil } // Addr returns a pipe addr. func (p *PipeListener) Addr() net.Addr { return pipeAddr{} } // Dialer dials a connection. func (p *PipeListener) Dialer() func(string, time.Duration) (net.Conn, error) { return func(string, time.Duration) (net.Conn, error) { connChan := make(chan net.Conn) select { case p.c <- connChan: case <-p.done: return nil, errClosed } conn, ok := <-connChan if !ok { return nil, errClosed } return conn, nil } } grpc-go-1.22.1/internal/testutils/pipe_listener_test.go000066400000000000000000000076701351635773100232640ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. 
* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package testutils_test import ( "testing" "time" "google.golang.org/grpc/internal/testutils" ) func TestPipeListener(t *testing.T) { pl := testutils.NewPipeListener() recvdBytes := make(chan []byte) const want = "hello world" go func() { c, err := pl.Accept() if err != nil { t.Error(err) } read := make([]byte, len(want)) _, err = c.Read(read) if err != nil { t.Error(err) } recvdBytes <- read }() dl := pl.Dialer() conn, err := dl("", time.Duration(0)) if err != nil { t.Fatal(err) } _, err = conn.Write([]byte(want)) if err != nil { t.Fatal(err) } select { case gotBytes := <-recvdBytes: got := string(gotBytes) if got != want { t.Fatalf("expected to get %s, got %s", got, want) } case <-time.After(100 * time.Millisecond): t.Fatal("timed out waiting for server to receive bytes") } } func TestUnblocking(t *testing.T) { for _, test := range []struct { desc string blockFuncShouldError bool blockFunc func(*testutils.PipeListener, chan struct{}) error unblockFunc func(*testutils.PipeListener) error }{ { desc: "Accept unblocks Dial", blockFunc: func(pl *testutils.PipeListener, done chan struct{}) error { dl := pl.Dialer() _, err := dl("", time.Duration(0)) close(done) return err }, unblockFunc: func(pl *testutils.PipeListener) error { _, err := pl.Accept() return err }, }, { desc: "Close unblocks Dial", blockFuncShouldError: true, // because pl.Close will be called blockFunc: func(pl *testutils.PipeListener, done chan struct{}) error { dl := pl.Dialer() _, err := dl("", time.Duration(0)) close(done) return err }, unblockFunc: func(pl *testutils.PipeListener) error { return pl.Close() }, }, { desc: "Dial unblocks Accept", blockFunc: func(pl *testutils.PipeListener, done chan struct{}) error { _, err := pl.Accept() close(done) return err }, unblockFunc: func(pl *testutils.PipeListener) error { dl := pl.Dialer() _, err := dl("", time.Duration(0)) return err }, }, { desc: "Close unblocks Accept", blockFuncShouldError: true, // because pl.Close will be called blockFunc: func(pl *testutils.PipeListener, done chan struct{}) error { _, err := pl.Accept() close(done) return err }, unblockFunc: func(pl *testutils.PipeListener) error { return pl.Close() }, }, } { t.Log(test.desc) testUnblocking(t, test.blockFunc, test.unblockFunc, test.blockFuncShouldError) } } func testUnblocking(t *testing.T, blockFunc func(*testutils.PipeListener, chan struct{}) error, unblockFunc func(*testutils.PipeListener) error, blockFuncShouldError bool) { pl := testutils.NewPipeListener() dialFinished := make(chan struct{}) go func() { err := blockFunc(pl, dialFinished) if blockFuncShouldError && err == nil { t.Error("expected blocking func to return error because pl.Close was called, but got nil") } if !blockFuncShouldError && err != nil { t.Error(err) } }() select { case <-dialFinished: t.Fatal("expected Dial to block until pl.Close or pl.Accept") default: } if err := unblockFunc(pl); err != nil { t.Fatal(err) } select { case <-dialFinished: case <-time.After(100 * time.Millisecond): 
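		// unblockFunc above should release the blocked Accept/Dial almost
		// immediately; 100ms is only a generous upper bound before failing.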
t.Fatal("expected Accept to unblock after pl.Accept was called") } } grpc-go-1.22.1/internal/testutils/status_equal.go000066400000000000000000000020541351635773100220640ustar00rootroot00000000000000/* * * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package testutils import ( "github.com/golang/protobuf/proto" "google.golang.org/grpc/status" ) // StatusErrEqual returns true iff both err1 and err2 wrap status.Status errors // and their underlying status protos are equal. func StatusErrEqual(err1, err2 error) bool { status1, ok := status.FromError(err1) if !ok { return false } status2, ok := status.FromError(err2) if !ok { return false } return proto.Equal(status1.Proto(), status2.Proto()) } grpc-go-1.22.1/internal/testutils/status_equal_test.go000066400000000000000000000031711351635773100231240ustar00rootroot00000000000000/* * * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package testutils import ( "testing" anypb "github.com/golang/protobuf/ptypes/any" spb "google.golang.org/genproto/googleapis/rpc/status" "google.golang.org/grpc/codes" "google.golang.org/grpc/status" ) var statusErr = status.ErrorProto(&spb.Status{ Code: int32(codes.DataLoss), Message: "error for testing", Details: []*anypb.Any{{ TypeUrl: "url", Value: []byte{6, 0, 0, 6, 1, 3}, }}, }) func TestStatusErrEqual(t *testing.T) { tests := []struct { name string err1 error err2 error wantEqual bool }{ {"nil errors", nil, nil, true}, {"equal OK status", status.New(codes.OK, "").Err(), status.New(codes.OK, "").Err(), true}, {"equal status errors", statusErr, statusErr, true}, {"different status errors", statusErr, status.New(codes.OK, "").Err(), false}, } for _, test := range tests { if gotEqual := StatusErrEqual(test.err1, test.err2); gotEqual != test.wantEqual { t.Errorf("%v: StatusErrEqual(%v, %v) = %v, want %v", test.name, test.err1, test.err2, gotEqual, test.wantEqual) } } } grpc-go-1.22.1/internal/transport/000077500000000000000000000000001351635773100170165ustar00rootroot00000000000000grpc-go-1.22.1/internal/transport/bdp_estimator.go000066400000000000000000000104731351635773100222060ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package transport import ( "sync" "time" ) const ( // bdpLimit is the maximum value the flow control windows will be increased // to. TCP typically limits this to 4MB, but some systems go up to 16MB. // Since this is only a limit, it is safe to make it optimistic. bdpLimit = (1 << 20) * 16 // alpha is a constant factor used to keep a moving average // of RTTs. alpha = 0.9 // If the current bdp sample is greater than or equal to // our beta * our estimated bdp and the current bandwidth // sample is the maximum bandwidth observed so far, we // increase our bbp estimate by a factor of gamma. beta = 0.66 // To put our bdp to be smaller than or equal to twice the real BDP, // we should multiply our current sample with 4/3, however to round things out // we use 2 as the multiplication factor. gamma = 2 ) // Adding arbitrary data to ping so that its ack can be identified. // Easter-egg: what does the ping message say? var bdpPing = &ping{data: [8]byte{2, 4, 16, 16, 9, 14, 7, 7}} type bdpEstimator struct { // sentAt is the time when the ping was sent. sentAt time.Time mu sync.Mutex // bdp is the current bdp estimate. bdp uint32 // sample is the number of bytes received in one measurement cycle. sample uint32 // bwMax is the maximum bandwidth noted so far (bytes/sec). bwMax float64 // bool to keep track of the beginning of a new measurement cycle. isSent bool // Callback to update the window sizes. updateFlowControl func(n uint32) // sampleCount is the number of samples taken so far. sampleCount uint64 // round trip time (seconds) rtt float64 } // timesnap registers the time bdp ping was sent out so that // network rtt can be calculated when its ack is received. // It is called (by controller) when the bdpPing is // being written on the wire. func (b *bdpEstimator) timesnap(d [8]byte) { if bdpPing.data != d { return } b.sentAt = time.Now() } // add adds bytes to the current sample for calculating bdp. // It returns true only if a ping must be sent. This can be used // by the caller (handleData) to make decision about batching // a window update with it. func (b *bdpEstimator) add(n uint32) bool { b.mu.Lock() defer b.mu.Unlock() if b.bdp == bdpLimit { return false } if !b.isSent { b.isSent = true b.sample = n b.sentAt = time.Time{} b.sampleCount++ return true } b.sample += n return false } // calculate is called when an ack for a bdp ping is received. // Here we calculate the current bdp and bandwidth sample and // decide if the flow control windows should go up. func (b *bdpEstimator) calculate(d [8]byte) { // Check if the ping acked for was the bdp ping. if bdpPing.data != d { return } b.mu.Lock() rttSample := time.Since(b.sentAt).Seconds() if b.sampleCount < 10 { // Bootstrap rtt with an average of first 10 rtt samples. b.rtt += (rttSample - b.rtt) / float64(b.sampleCount) } else { // Heed to the recent past more. b.rtt += (rttSample - b.rtt) * float64(alpha) } b.isSent = false // The number of bytes accumulated so far in the sample is smaller // than or equal to 1.5 times the real BDP on a saturated connection. 
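	// Worked example with illustrative numbers (not measured values): with
	// b.sample = 1,500,000 bytes and b.rtt = 0.1s, bwCurrent works out to
	// 1,500,000 / (0.1 * 1.5) = 10,000,000 bytes/sec. The windows below only
	// grow when this is the highest bandwidth seen so far and the sample is at
	// least beta (0.66) of the current bdp estimate.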
bwCurrent := float64(b.sample) / (b.rtt * float64(1.5)) if bwCurrent > b.bwMax { b.bwMax = bwCurrent } // If the current sample (which is smaller than or equal to the 1.5 times the real BDP) is // greater than or equal to 2/3rd our perceived bdp AND this is the maximum bandwidth seen so far, we // should update our perception of the network BDP. if float64(b.sample) >= beta*float64(b.bdp) && bwCurrent == b.bwMax && b.bdp != bdpLimit { sampleFloat := float64(b.sample) b.bdp = uint32(gamma * sampleFloat) if b.bdp > bdpLimit { b.bdp = bdpLimit } bdp := b.bdp b.mu.Unlock() b.updateFlowControl(bdp) return } b.mu.Unlock() } grpc-go-1.22.1/internal/transport/controlbuf.go000066400000000000000000000521601351635773100215260ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package transport import ( "bytes" "fmt" "runtime" "sync" "golang.org/x/net/http2" "golang.org/x/net/http2/hpack" ) var updateHeaderTblSize = func(e *hpack.Encoder, v uint32) { e.SetMaxDynamicTableSizeLimit(v) } type itemNode struct { it interface{} next *itemNode } type itemList struct { head *itemNode tail *itemNode } func (il *itemList) enqueue(i interface{}) { n := &itemNode{it: i} if il.tail == nil { il.head, il.tail = n, n return } il.tail.next = n il.tail = n } // peek returns the first item in the list without removing it from the // list. func (il *itemList) peek() interface{} { return il.head.it } func (il *itemList) dequeue() interface{} { if il.head == nil { return nil } i := il.head.it il.head = il.head.next if il.head == nil { il.tail = nil } return i } func (il *itemList) dequeueAll() *itemNode { h := il.head il.head, il.tail = nil, nil return h } func (il *itemList) isEmpty() bool { return il.head == nil } // The following defines various control items which could flow through // the control buffer of transport. They represent different aspects of // control tasks, e.g., flow control, settings, streaming resetting, etc. // registerStream is used to register an incoming stream with loopy writer. type registerStream struct { streamID uint32 wq *writeQuota } // headerFrame is also used to register stream on the client-side. type headerFrame struct { streamID uint32 hf []hpack.HeaderField endStream bool // Valid on server side. initStream func(uint32) (bool, error) // Used only on the client side. onWrite func() wq *writeQuota // write quota for the stream created. cleanup *cleanupStream // Valid on the server side. onOrphaned func(error) // Valid on client-side } type cleanupStream struct { streamID uint32 rst bool rstCode http2.ErrCode onWrite func() } type dataFrame struct { streamID uint32 endStream bool h []byte d []byte // onEachWrite is called every time // a part of d is written out. 
onEachWrite func() } type incomingWindowUpdate struct { streamID uint32 increment uint32 } type outgoingWindowUpdate struct { streamID uint32 increment uint32 } type incomingSettings struct { ss []http2.Setting } type outgoingSettings struct { ss []http2.Setting } type incomingGoAway struct { } type goAway struct { code http2.ErrCode debugData []byte headsUp bool closeConn bool } type ping struct { ack bool data [8]byte } type outFlowControlSizeRequest struct { resp chan uint32 } type outStreamState int const ( active outStreamState = iota empty waitingOnStreamQuota ) type outStream struct { id uint32 state outStreamState itl *itemList bytesOutStanding int wq *writeQuota next *outStream prev *outStream } func (s *outStream) deleteSelf() { if s.prev != nil { s.prev.next = s.next } if s.next != nil { s.next.prev = s.prev } s.next, s.prev = nil, nil } type outStreamList struct { // Following are sentinel objects that mark the // beginning and end of the list. They do not // contain any item lists. All valid objects are // inserted in between them. // This is needed so that an outStream object can // deleteSelf() in O(1) time without knowing which // list it belongs to. head *outStream tail *outStream } func newOutStreamList() *outStreamList { head, tail := new(outStream), new(outStream) head.next = tail tail.prev = head return &outStreamList{ head: head, tail: tail, } } func (l *outStreamList) enqueue(s *outStream) { e := l.tail.prev e.next = s s.prev = e s.next = l.tail l.tail.prev = s } // remove from the beginning of the list. func (l *outStreamList) dequeue() *outStream { b := l.head.next if b == l.tail { return nil } b.deleteSelf() return b } // controlBuffer is a way to pass information to loopy. // Information is passed as specific struct types called control frames. // A control frame not only represents data, messages or headers to be sent out // but can also be used to instruct loopy to update its internal state. // It shouldn't be confused with an HTTP2 frame, although some of the control frames // like dataFrame and headerFrame do go out on wire as HTTP2 frames. type controlBuffer struct { ch chan struct{} done <-chan struct{} mu sync.Mutex consumerWaiting bool list *itemList err error } func newControlBuffer(done <-chan struct{}) *controlBuffer { return &controlBuffer{ ch: make(chan struct{}, 1), list: &itemList{}, done: done, } } func (c *controlBuffer) put(it interface{}) error { _, err := c.executeAndPut(nil, it) return err } func (c *controlBuffer) executeAndPut(f func(it interface{}) bool, it interface{}) (bool, error) { var wakeUp bool c.mu.Lock() if c.err != nil { c.mu.Unlock() return false, c.err } if f != nil { if !f(it) { // f wasn't successful c.mu.Unlock() return false, nil } } if c.consumerWaiting { wakeUp = true c.consumerWaiting = false } c.list.enqueue(it) c.mu.Unlock() if wakeUp { select { case c.ch <- struct{}{}: default: } } return true, nil } // Note argument f should never be nil. 
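//
// An illustrative call pattern (the closure below is hypothetical; real
// callers pass transport-specific checks such as "is this stream still
// active"):
//
//	performed, err := c.execute(func(it interface{}) bool {
//		return conditionStillHolds // returning false reports that the action was skipped
//	}, item)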
func (c *controlBuffer) execute(f func(it interface{}) bool, it interface{}) (bool, error) { c.mu.Lock() if c.err != nil { c.mu.Unlock() return false, c.err } if !f(it) { // f wasn't successful c.mu.Unlock() return false, nil } c.mu.Unlock() return true, nil } func (c *controlBuffer) get(block bool) (interface{}, error) { for { c.mu.Lock() if c.err != nil { c.mu.Unlock() return nil, c.err } if !c.list.isEmpty() { h := c.list.dequeue() c.mu.Unlock() return h, nil } if !block { c.mu.Unlock() return nil, nil } c.consumerWaiting = true c.mu.Unlock() select { case <-c.ch: case <-c.done: c.finish() return nil, ErrConnClosing } } } func (c *controlBuffer) finish() { c.mu.Lock() if c.err != nil { c.mu.Unlock() return } c.err = ErrConnClosing // There may be headers for streams in the control buffer. // These streams need to be cleaned out since the transport // is still not aware of these yet. for head := c.list.dequeueAll(); head != nil; head = head.next { hdr, ok := head.it.(*headerFrame) if !ok { continue } if hdr.onOrphaned != nil { // It will be nil on the server-side. hdr.onOrphaned(ErrConnClosing) } } c.mu.Unlock() } type side int const ( clientSide side = iota serverSide ) // Loopy receives frames from the control buffer. // Each frame is handled individually; most of the work done by loopy goes // into handling data frames. Loopy maintains a queue of active streams, and each // stream maintains a queue of data frames; as loopy receives data frames // it gets added to the queue of the relevant stream. // Loopy goes over this list of active streams by processing one node every iteration, // thereby closely resemebling to a round-robin scheduling over all streams. While // processing a stream, loopy writes out data bytes from this stream capped by the min // of http2MaxFrameLen, connection-level flow control and stream-level flow control. type loopyWriter struct { side side cbuf *controlBuffer sendQuota uint32 oiws uint32 // outbound initial window size. // estdStreams is map of all established streams that are not cleaned-up yet. // On client-side, this is all streams whose headers were sent out. // On server-side, this is all streams whose headers were received. estdStreams map[uint32]*outStream // Established streams. // activeStreams is a linked-list of all streams that have data to send and some // stream-level flow control quota. // Each of these streams internally have a list of data items(and perhaps trailers // on the server-side) to be sent out. activeStreams *outStreamList framer *framer hBuf *bytes.Buffer // The buffer for HPACK encoding. hEnc *hpack.Encoder // HPACK encoder. bdpEst *bdpEstimator draining bool // Side-specific handlers ssGoAwayHandler func(*goAway) (bool, error) } func newLoopyWriter(s side, fr *framer, cbuf *controlBuffer, bdpEst *bdpEstimator) *loopyWriter { var buf bytes.Buffer l := &loopyWriter{ side: s, cbuf: cbuf, sendQuota: defaultWindowSize, oiws: defaultWindowSize, estdStreams: make(map[uint32]*outStream), activeStreams: newOutStreamList(), framer: fr, hBuf: &buf, hEnc: hpack.NewEncoder(&buf), bdpEst: bdpEst, } return l } const minBatchSize = 1000 // run should be run in a separate goroutine. // It reads control frames from controlBuf and processes them by: // 1. Updating loopy's internal state, or/and // 2. Writing out HTTP2 frames on the wire. // // Loopy keeps all active streams with data to send in a linked-list. // All streams in the activeStreams linked-list must have both: // 1. Data to send, and // 2. Stream level flow control quota available. 
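//
// Condensed outline of the loop body below (a restatement of the code, not
// additional behavior):
//
//	it := l.cbuf.get(true)         // block until a control frame arrives
//	l.handle(it); l.processData()  // update state and/or write frames
//	for {                          // then drain the buffer without blocking,
//		it := l.cbuf.get(false)    // flushing the framer once it runs dry
//		...
//	}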
// // In each iteration of run loop, other than processing the incoming control // frame, loopy calls processData, which processes one node from the activeStreams linked-list. // This results in writing of HTTP2 frames into an underlying write buffer. // When there's no more control frames to read from controlBuf, loopy flushes the write buffer. // As an optimization, to increase the batch size for each flush, loopy yields the processor, once // if the batch size is too low to give stream goroutines a chance to fill it up. func (l *loopyWriter) run() (err error) { defer func() { if err == ErrConnClosing { // Don't log ErrConnClosing as error since it happens // 1. When the connection is closed by some other known issue. // 2. User closed the connection. // 3. A graceful close of connection. infof("transport: loopyWriter.run returning. %v", err) err = nil } }() for { it, err := l.cbuf.get(true) if err != nil { return err } if err = l.handle(it); err != nil { return err } if _, err = l.processData(); err != nil { return err } gosched := true hasdata: for { it, err := l.cbuf.get(false) if err != nil { return err } if it != nil { if err = l.handle(it); err != nil { return err } if _, err = l.processData(); err != nil { return err } continue hasdata } isEmpty, err := l.processData() if err != nil { return err } if !isEmpty { continue hasdata } if gosched { gosched = false if l.framer.writer.offset < minBatchSize { runtime.Gosched() continue hasdata } } l.framer.writer.Flush() break hasdata } } } func (l *loopyWriter) outgoingWindowUpdateHandler(w *outgoingWindowUpdate) error { return l.framer.fr.WriteWindowUpdate(w.streamID, w.increment) } func (l *loopyWriter) incomingWindowUpdateHandler(w *incomingWindowUpdate) error { // Otherwise update the quota. if w.streamID == 0 { l.sendQuota += w.increment return nil } // Find the stream and update it. if str, ok := l.estdStreams[w.streamID]; ok { str.bytesOutStanding -= int(w.increment) if strQuota := int(l.oiws) - str.bytesOutStanding; strQuota > 0 && str.state == waitingOnStreamQuota { str.state = active l.activeStreams.enqueue(str) return nil } } return nil } func (l *loopyWriter) outgoingSettingsHandler(s *outgoingSettings) error { return l.framer.fr.WriteSettings(s.ss...) } func (l *loopyWriter) incomingSettingsHandler(s *incomingSettings) error { if err := l.applySettings(s.ss); err != nil { return err } return l.framer.fr.WriteSettingsAck() } func (l *loopyWriter) registerStreamHandler(h *registerStream) error { str := &outStream{ id: h.streamID, state: empty, itl: &itemList{}, wq: h.wq, } l.estdStreams[h.streamID] = str return nil } func (l *loopyWriter) headerHandler(h *headerFrame) error { if l.side == serverSide { str, ok := l.estdStreams[h.streamID] if !ok { warningf("transport: loopy doesn't recognize the stream: %d", h.streamID) return nil } // Case 1.A: Server is responding back with headers. if !h.endStream { return l.writeHeader(h.streamID, h.endStream, h.hf, h.onWrite) } // else: Case 1.B: Server wants to close stream. if str.state != empty { // either active or waiting on stream quota. // add it str's list of items. str.itl.enqueue(h) return nil } if err := l.writeHeader(h.streamID, h.endStream, h.hf, h.onWrite); err != nil { return err } return l.cleanupStreamHandler(h.cleanup) } // Case 2: Client wants to originate stream. 
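	// A fresh outStream is created in the empty state with the header frame
	// queued on it; originateStream (below) then runs the frame's initStream
	// callback, writes the HEADERS frame, and records the stream in
	// estdStreams.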
str := &outStream{ id: h.streamID, state: empty, itl: &itemList{}, wq: h.wq, } str.itl.enqueue(h) return l.originateStream(str) } func (l *loopyWriter) originateStream(str *outStream) error { hdr := str.itl.dequeue().(*headerFrame) sendPing, err := hdr.initStream(str.id) if err != nil { if err == ErrConnClosing { return err } // Other errors(errStreamDrain) need not close transport. return nil } if err = l.writeHeader(str.id, hdr.endStream, hdr.hf, hdr.onWrite); err != nil { return err } l.estdStreams[str.id] = str if sendPing { return l.pingHandler(&ping{data: [8]byte{}}) } return nil } func (l *loopyWriter) writeHeader(streamID uint32, endStream bool, hf []hpack.HeaderField, onWrite func()) error { if onWrite != nil { onWrite() } l.hBuf.Reset() for _, f := range hf { if err := l.hEnc.WriteField(f); err != nil { warningf("transport: loopyWriter.writeHeader encountered error while encoding headers:", err) } } var ( err error endHeaders, first bool ) first = true for !endHeaders { size := l.hBuf.Len() if size > http2MaxFrameLen { size = http2MaxFrameLen } else { endHeaders = true } if first { first = false err = l.framer.fr.WriteHeaders(http2.HeadersFrameParam{ StreamID: streamID, BlockFragment: l.hBuf.Next(size), EndStream: endStream, EndHeaders: endHeaders, }) } else { err = l.framer.fr.WriteContinuation( streamID, endHeaders, l.hBuf.Next(size), ) } if err != nil { return err } } return nil } func (l *loopyWriter) preprocessData(df *dataFrame) error { str, ok := l.estdStreams[df.streamID] if !ok { return nil } // If we got data for a stream it means that // stream was originated and the headers were sent out. str.itl.enqueue(df) if str.state == empty { str.state = active l.activeStreams.enqueue(str) } return nil } func (l *loopyWriter) pingHandler(p *ping) error { if !p.ack { l.bdpEst.timesnap(p.data) } return l.framer.fr.WritePing(p.ack, p.data) } func (l *loopyWriter) outFlowControlSizeRequestHandler(o *outFlowControlSizeRequest) error { o.resp <- l.sendQuota return nil } func (l *loopyWriter) cleanupStreamHandler(c *cleanupStream) error { c.onWrite() if str, ok := l.estdStreams[c.streamID]; ok { // On the server side it could be a trailers-only response or // a RST_STREAM before stream initialization thus the stream might // not be established yet. delete(l.estdStreams, c.streamID) str.deleteSelf() } if c.rst { // If RST_STREAM needs to be sent. if err := l.framer.fr.WriteRSTStream(c.streamID, c.rstCode); err != nil { return err } } if l.side == clientSide && l.draining && len(l.estdStreams) == 0 { return ErrConnClosing } return nil } func (l *loopyWriter) incomingGoAwayHandler(*incomingGoAway) error { if l.side == clientSide { l.draining = true if len(l.estdStreams) == 0 { return ErrConnClosing } } return nil } func (l *loopyWriter) goAwayHandler(g *goAway) error { // Handling of outgoing GoAway is very specific to side. 
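	// ssGoAwayHandler is installed by the server-side transport (hence the
	// name); when it is nil, as on the client side, an outgoing goAway is a
	// no-op here.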
if l.ssGoAwayHandler != nil { draining, err := l.ssGoAwayHandler(g) if err != nil { return err } l.draining = draining } return nil } func (l *loopyWriter) handle(i interface{}) error { switch i := i.(type) { case *incomingWindowUpdate: return l.incomingWindowUpdateHandler(i) case *outgoingWindowUpdate: return l.outgoingWindowUpdateHandler(i) case *incomingSettings: return l.incomingSettingsHandler(i) case *outgoingSettings: return l.outgoingSettingsHandler(i) case *headerFrame: return l.headerHandler(i) case *registerStream: return l.registerStreamHandler(i) case *cleanupStream: return l.cleanupStreamHandler(i) case *incomingGoAway: return l.incomingGoAwayHandler(i) case *dataFrame: return l.preprocessData(i) case *ping: return l.pingHandler(i) case *goAway: return l.goAwayHandler(i) case *outFlowControlSizeRequest: return l.outFlowControlSizeRequestHandler(i) default: return fmt.Errorf("transport: unknown control message type %T", i) } } func (l *loopyWriter) applySettings(ss []http2.Setting) error { for _, s := range ss { switch s.ID { case http2.SettingInitialWindowSize: o := l.oiws l.oiws = s.Val if o < l.oiws { // If the new limit is greater make all depleted streams active. for _, stream := range l.estdStreams { if stream.state == waitingOnStreamQuota { stream.state = active l.activeStreams.enqueue(stream) } } } case http2.SettingHeaderTableSize: updateHeaderTblSize(l.hEnc, s.Val) } } return nil } // processData removes the first stream from active streams, writes out at most 16KB // of its data and then puts it at the end of activeStreams if there's still more data // to be sent and stream has some stream-level flow control. func (l *loopyWriter) processData() (bool, error) { if l.sendQuota == 0 { return true, nil } str := l.activeStreams.dequeue() // Remove the first stream. if str == nil { return true, nil } dataItem := str.itl.peek().(*dataFrame) // Peek at the first data item this stream. // A data item is represented by a dataFrame, since it later translates into // multiple HTTP2 data frames. // Every dataFrame has two buffers; h that keeps grpc-message header and d that is acutal data. // As an optimization to keep wire traffic low, data from d is copied to h to make as big as the // maximum possilbe HTTP2 frame size. if len(dataItem.h) == 0 && len(dataItem.d) == 0 { // Empty data frame // Client sends out empty data frame with endStream = true if err := l.framer.fr.WriteData(dataItem.streamID, dataItem.endStream, nil); err != nil { return false, err } str.itl.dequeue() // remove the empty data item from stream if str.itl.isEmpty() { str.state = empty } else if trailer, ok := str.itl.peek().(*headerFrame); ok { // the next item is trailers. if err := l.writeHeader(trailer.streamID, trailer.endStream, trailer.hf, trailer.onWrite); err != nil { return false, err } if err := l.cleanupStreamHandler(trailer.cleanup); err != nil { return false, nil } } else { l.activeStreams.enqueue(str) } return false, nil } var ( idx int buf []byte ) if len(dataItem.h) != 0 { // data header has not been written out yet. buf = dataItem.h } else { idx = 1 buf = dataItem.d } size := http2MaxFrameLen if len(buf) < size { size = len(buf) } if strQuota := int(l.oiws) - str.bytesOutStanding; strQuota <= 0 { // stream-level flow control. str.state = waitingOnStreamQuota return false, nil } else if strQuota < size { size = strQuota } if l.sendQuota < uint32(size) { // connection-level flow control. 
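		// Clamp the frame to the remaining connection-level quota; the quota
		// is replenished when incomingWindowUpdateHandler sees a WINDOW_UPDATE
		// for stream 0, so the rest of the message goes out on a later pass.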
size = int(l.sendQuota) } // Now that outgoing flow controls are checked we can replenish str's write quota str.wq.replenish(size) var endStream bool // If this is the last data message on this stream and all of it can be written in this iteration. if dataItem.endStream && size == len(buf) { // buf contains either data or it contains header but data is empty. if idx == 1 || len(dataItem.d) == 0 { endStream = true } } if dataItem.onEachWrite != nil { dataItem.onEachWrite() } if err := l.framer.fr.WriteData(dataItem.streamID, endStream, buf[:size]); err != nil { return false, err } buf = buf[size:] str.bytesOutStanding += size l.sendQuota -= uint32(size) if idx == 0 { dataItem.h = buf } else { dataItem.d = buf } if len(dataItem.h) == 0 && len(dataItem.d) == 0 { // All the data from that message was written out. str.itl.dequeue() } if str.itl.isEmpty() { str.state = empty } else if trailer, ok := str.itl.peek().(*headerFrame); ok { // The next item is trailers. if err := l.writeHeader(trailer.streamID, trailer.endStream, trailer.hf, trailer.onWrite); err != nil { return false, err } if err := l.cleanupStreamHandler(trailer.cleanup); err != nil { return false, err } } else if int(l.oiws)-str.bytesOutStanding <= 0 { // Ran out of stream quota. str.state = waitingOnStreamQuota } else { // Otherwise add it back to the list of active streams. l.activeStreams.enqueue(str) } return false, nil } grpc-go-1.22.1/internal/transport/defaults.go000066400000000000000000000032301351635773100211520ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package transport import ( "math" "time" ) const ( // The default value of flow control window size in HTTP2 spec. defaultWindowSize = 65535 // The initial window size for flow control. initialWindowSize = defaultWindowSize // for an RPC infinity = time.Duration(math.MaxInt64) defaultClientKeepaliveTime = infinity defaultClientKeepaliveTimeout = 20 * time.Second defaultMaxStreamsClient = 100 defaultMaxConnectionIdle = infinity defaultMaxConnectionAge = infinity defaultMaxConnectionAgeGrace = infinity defaultServerKeepaliveTime = 2 * time.Hour defaultServerKeepaliveTimeout = 20 * time.Second defaultKeepalivePolicyMinTime = 5 * time.Minute // max window limit set by HTTP2 Specs. maxWindowSize = math.MaxInt32 // defaultWriteQuota is the default value for number of data // bytes that each stream can schedule before some of it being // flushed out. defaultWriteQuota = 64 * 1024 defaultClientMaxHeaderListSize = uint32(16 << 20) defaultServerMaxHeaderListSize = uint32(16 << 20) ) grpc-go-1.22.1/internal/transport/flowcontrol.go000066400000000000000000000131661351635773100217240ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package transport import ( "fmt" "math" "sync" "sync/atomic" ) // writeQuota is a soft limit on the amount of data a stream can // schedule before some of it is written out. type writeQuota struct { quota int32 // get waits on read from when quota goes less than or equal to zero. // replenish writes on it when quota goes positive again. ch chan struct{} // done is triggered in error case. done <-chan struct{} // replenish is called by loopyWriter to give quota back to. // It is implemented as a field so that it can be updated // by tests. replenish func(n int) } func newWriteQuota(sz int32, done <-chan struct{}) *writeQuota { w := &writeQuota{ quota: sz, ch: make(chan struct{}, 1), done: done, } w.replenish = w.realReplenish return w } func (w *writeQuota) get(sz int32) error { for { if atomic.LoadInt32(&w.quota) > 0 { atomic.AddInt32(&w.quota, -sz) return nil } select { case <-w.ch: continue case <-w.done: return errStreamDone } } } func (w *writeQuota) realReplenish(n int) { sz := int32(n) a := atomic.AddInt32(&w.quota, sz) b := a - sz if b <= 0 && a > 0 { select { case w.ch <- struct{}{}: default: } } } type trInFlow struct { limit uint32 unacked uint32 effectiveWindowSize uint32 } func (f *trInFlow) newLimit(n uint32) uint32 { d := n - f.limit f.limit = n f.updateEffectiveWindowSize() return d } func (f *trInFlow) onData(n uint32) uint32 { f.unacked += n if f.unacked >= f.limit/4 { w := f.unacked f.unacked = 0 f.updateEffectiveWindowSize() return w } f.updateEffectiveWindowSize() return 0 } func (f *trInFlow) reset() uint32 { w := f.unacked f.unacked = 0 f.updateEffectiveWindowSize() return w } func (f *trInFlow) updateEffectiveWindowSize() { atomic.StoreUint32(&f.effectiveWindowSize, f.limit-f.unacked) } func (f *trInFlow) getSize() uint32 { return atomic.LoadUint32(&f.effectiveWindowSize) } // TODO(mmukhi): Simplify this code. // inFlow deals with inbound flow control type inFlow struct { mu sync.Mutex // The inbound flow control limit for pending data. limit uint32 // pendingData is the overall data which have been received but not been // consumed by applications. pendingData uint32 // The amount of data the application has consumed but grpc has not sent // window update for them. Used to reduce window update frequency. pendingUpdate uint32 // delta is the extra window update given by receiver when an application // is reading data bigger in size than the inFlow limit. delta uint32 } // newLimit updates the inflow window to a new value n. // It assumes that n is always greater than the old limit. func (f *inFlow) newLimit(n uint32) uint32 { f.mu.Lock() d := n - f.limit f.limit = n f.mu.Unlock() return d } func (f *inFlow) maybeAdjust(n uint32) uint32 { if n > uint32(math.MaxInt32) { n = uint32(math.MaxInt32) } f.mu.Lock() // estSenderQuota is the receiver's view of the maximum number of bytes the sender // can send without a window update. estSenderQuota := int32(f.limit - (f.pendingData + f.pendingUpdate)) // estUntransmittedData is the maximum number of bytes the sends might not have put // on the wire yet. 
A value of 0 or less means that we have already received all or // more bytes than the application is requesting to read. estUntransmittedData := int32(n - f.pendingData) // Casting into int32 since it could be negative. // This implies that unless we send a window update, the sender won't be able to send all the bytes // for this message. Therefore we must send an update over the limit since there's an active read // request from the application. if estUntransmittedData > estSenderQuota { // Sender's window shouldn't go more than 2^31 - 1 as specified in the HTTP spec. if f.limit+n > maxWindowSize { f.delta = maxWindowSize - f.limit } else { // Send a window update for the whole message and not just the difference between // estUntransmittedData and estSenderQuota. This will be helpful in case the message // is padded; We will fallback on the current available window(at least a 1/4th of the limit). f.delta = n } f.mu.Unlock() return f.delta } f.mu.Unlock() return 0 } // onData is invoked when some data frame is received. It updates pendingData. func (f *inFlow) onData(n uint32) error { f.mu.Lock() f.pendingData += n if f.pendingData+f.pendingUpdate > f.limit+f.delta { limit := f.limit rcvd := f.pendingData + f.pendingUpdate f.mu.Unlock() return fmt.Errorf("received %d-bytes data exceeding the limit %d bytes", rcvd, limit) } f.mu.Unlock() return nil } // onRead is invoked when the application reads the data. It returns the window size // to be sent to the peer. func (f *inFlow) onRead(n uint32) uint32 { f.mu.Lock() if f.pendingData == 0 { f.mu.Unlock() return 0 } f.pendingData -= n if n > f.delta { n -= f.delta f.delta = 0 } else { f.delta -= n n = 0 } f.pendingUpdate += n if f.pendingUpdate >= f.limit/4 { wu := f.pendingUpdate f.pendingUpdate = 0 f.mu.Unlock() return wu } f.mu.Unlock() return 0 } grpc-go-1.22.1/internal/transport/handler_server.go000066400000000000000000000275111351635773100223560ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // This file is the implementation of a gRPC server using HTTP/2 which // uses the standard Go http2 Server implementation (via the // http.Handler interface), rather than speaking low-level HTTP/2 // frames itself. It is the implementation of *grpc.Server.ServeHTTP. package transport import ( "bytes" "context" "errors" "fmt" "io" "net" "net/http" "strings" "sync" "time" "github.com/golang/protobuf/proto" "golang.org/x/net/http2" "google.golang.org/grpc/codes" "google.golang.org/grpc/credentials" "google.golang.org/grpc/metadata" "google.golang.org/grpc/peer" "google.golang.org/grpc/stats" "google.golang.org/grpc/status" ) // NewServerHandlerTransport returns a ServerTransport handling gRPC // from inside an http.Handler. It requires that the http Server // supports HTTP/2. 
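//
// Illustrative call sequence (a sketch only; the real wiring lives in
// *grpc.Server.ServeHTTP, and the handler shown here is hypothetical):
//
//	func (s *myServer) ServeHTTP(w http.ResponseWriter, r *http.Request) {
//		st, err := NewServerHandlerTransport(w, r, nil)
//		if err != nil {
//			http.Error(w, err.Error(), http.StatusInternalServerError)
//			return
//		}
//		st.HandleStreams(
//			func(stream *Stream) { /* dispatch the RPC for stream */ },
//			func(ctx context.Context, method string) context.Context { return ctx },
//		)
//	}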
func NewServerHandlerTransport(w http.ResponseWriter, r *http.Request, stats stats.Handler) (ServerTransport, error) { if r.ProtoMajor != 2 { return nil, errors.New("gRPC requires HTTP/2") } if r.Method != "POST" { return nil, errors.New("invalid gRPC request method") } contentType := r.Header.Get("Content-Type") // TODO: do we assume contentType is lowercase? we did before contentSubtype, validContentType := contentSubtype(contentType) if !validContentType { return nil, errors.New("invalid gRPC request content-type") } if _, ok := w.(http.Flusher); !ok { return nil, errors.New("gRPC requires a ResponseWriter supporting http.Flusher") } st := &serverHandlerTransport{ rw: w, req: r, closedCh: make(chan struct{}), writes: make(chan func()), contentType: contentType, contentSubtype: contentSubtype, stats: stats, } if v := r.Header.Get("grpc-timeout"); v != "" { to, err := decodeTimeout(v) if err != nil { return nil, status.Errorf(codes.Internal, "malformed time-out: %v", err) } st.timeoutSet = true st.timeout = to } metakv := []string{"content-type", contentType} if r.Host != "" { metakv = append(metakv, ":authority", r.Host) } for k, vv := range r.Header { k = strings.ToLower(k) if isReservedHeader(k) && !isWhitelistedHeader(k) { continue } for _, v := range vv { v, err := decodeMetadataHeader(k, v) if err != nil { return nil, status.Errorf(codes.Internal, "malformed binary metadata: %v", err) } metakv = append(metakv, k, v) } } st.headerMD = metadata.Pairs(metakv...) return st, nil } // serverHandlerTransport is an implementation of ServerTransport // which replies to exactly one gRPC request (exactly one HTTP request), // using the net/http.Handler interface. This http.Handler is guaranteed // at this point to be speaking over HTTP/2, so it's able to speak valid // gRPC. type serverHandlerTransport struct { rw http.ResponseWriter req *http.Request timeoutSet bool timeout time.Duration didCommonHeaders bool headerMD metadata.MD closeOnce sync.Once closedCh chan struct{} // closed on Close // writes is a channel of code to run serialized in the // ServeHTTP (HandleStreams) goroutine. The channel is closed // when WriteStatus is called. writes chan func() // block concurrent WriteStatus calls // e.g. grpc/(*serverStream).SendMsg/RecvMsg writeStatusMu sync.Mutex // we just mirror the request content-type contentType string // we store both contentType and contentSubtype so we don't keep recreating them // TODO make sure this is consistent across handler_server and http2_server contentSubtype string stats stats.Handler } func (ht *serverHandlerTransport) Close() error { ht.closeOnce.Do(ht.closeCloseChanOnce) return nil } func (ht *serverHandlerTransport) closeCloseChanOnce() { close(ht.closedCh) } func (ht *serverHandlerTransport) RemoteAddr() net.Addr { return strAddr(ht.req.RemoteAddr) } // strAddr is a net.Addr backed by either a TCP "ip:port" string, or // the empty string if unknown. type strAddr string func (a strAddr) Network() string { if a != "" { // Per the documentation on net/http.Request.RemoteAddr, if this is // set, it's set to the IP:port of the peer (hence, TCP): // https://golang.org/pkg/net/http/#Request // // If we want to support Unix sockets later, we can // add our own grpc-specific convention within the // grpc codebase to set RemoteAddr to a different // format, or probably better: we can attach it to the // context and use that from serverHandlerTransport.RemoteAddr. 
return "tcp" } return "" } func (a strAddr) String() string { return string(a) } // do runs fn in the ServeHTTP goroutine. func (ht *serverHandlerTransport) do(fn func()) error { select { case <-ht.closedCh: return ErrConnClosing case ht.writes <- fn: return nil } } func (ht *serverHandlerTransport) WriteStatus(s *Stream, st *status.Status) error { ht.writeStatusMu.Lock() defer ht.writeStatusMu.Unlock() err := ht.do(func() { ht.writeCommonHeaders(s) // And flush, in case no header or body has been sent yet. // This forces a separation of headers and trailers if this is the // first call (for example, in end2end tests's TestNoService). ht.rw.(http.Flusher).Flush() h := ht.rw.Header() h.Set("Grpc-Status", fmt.Sprintf("%d", st.Code())) if m := st.Message(); m != "" { h.Set("Grpc-Message", encodeGrpcMessage(m)) } if p := st.Proto(); p != nil && len(p.Details) > 0 { stBytes, err := proto.Marshal(p) if err != nil { // TODO: return error instead, when callers are able to handle it. panic(err) } h.Set("Grpc-Status-Details-Bin", encodeBinHeader(stBytes)) } if md := s.Trailer(); len(md) > 0 { for k, vv := range md { // Clients don't tolerate reading restricted headers after some non restricted ones were sent. if isReservedHeader(k) { continue } for _, v := range vv { // http2 ResponseWriter mechanism to send undeclared Trailers after // the headers have possibly been written. h.Add(http2.TrailerPrefix+k, encodeMetadataHeader(k, v)) } } } }) if err == nil { // transport has not been closed if ht.stats != nil { ht.stats.HandleRPC(s.Context(), &stats.OutTrailer{}) } } ht.Close() return err } // writeCommonHeaders sets common headers on the first write // call (Write, WriteHeader, or WriteStatus). func (ht *serverHandlerTransport) writeCommonHeaders(s *Stream) { if ht.didCommonHeaders { return } ht.didCommonHeaders = true h := ht.rw.Header() h["Date"] = nil // suppress Date to make tests happy; TODO: restore h.Set("Content-Type", ht.contentType) // Predeclare trailers we'll set later in WriteStatus (after the body). // This is a SHOULD in the HTTP RFC, and the way you add (known) // Trailers per the net/http.ResponseWriter contract. // See https://golang.org/pkg/net/http/#ResponseWriter // and https://golang.org/pkg/net/http/#example_ResponseWriter_trailers h.Add("Trailer", "Grpc-Status") h.Add("Trailer", "Grpc-Message") h.Add("Trailer", "Grpc-Status-Details-Bin") if s.sendCompress != "" { h.Set("Grpc-Encoding", s.sendCompress) } } func (ht *serverHandlerTransport) Write(s *Stream, hdr []byte, data []byte, opts *Options) error { return ht.do(func() { ht.writeCommonHeaders(s) ht.rw.Write(hdr) ht.rw.Write(data) ht.rw.(http.Flusher).Flush() }) } func (ht *serverHandlerTransport) WriteHeader(s *Stream, md metadata.MD) error { err := ht.do(func() { ht.writeCommonHeaders(s) h := ht.rw.Header() for k, vv := range md { // Clients don't tolerate reading restricted headers after some non restricted ones were sent. if isReservedHeader(k) { continue } for _, v := range vv { v = encodeMetadataHeader(k, v) h.Add(k, v) } } ht.rw.WriteHeader(200) ht.rw.(http.Flusher).Flush() }) if err == nil { if ht.stats != nil { ht.stats.HandleRPC(s.Context(), &stats.OutHeader{}) } } return err } func (ht *serverHandlerTransport) HandleStreams(startStream func(*Stream), traceCtx func(context.Context, string) context.Context) { // With this transport type there will be exactly 1 stream: this HTTP request. 
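	// Note (added for clarity): from here on, every write back to the client is
	// funneled through ht.writes and executed by runStream on this ServeHTTP
	// goroutine; RPC handler goroutines only enqueue closures via ht.do and
	// never touch the http.ResponseWriter directly.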
ctx := ht.req.Context() var cancel context.CancelFunc if ht.timeoutSet { ctx, cancel = context.WithTimeout(ctx, ht.timeout) } else { ctx, cancel = context.WithCancel(ctx) } // requestOver is closed when the status has been written via WriteStatus. requestOver := make(chan struct{}) go func() { select { case <-requestOver: case <-ht.closedCh: case <-ht.req.Context().Done(): } cancel() ht.Close() }() req := ht.req s := &Stream{ id: 0, // irrelevant requestRead: func(int) {}, cancel: cancel, buf: newRecvBuffer(), st: ht, method: req.URL.Path, recvCompress: req.Header.Get("grpc-encoding"), contentSubtype: ht.contentSubtype, } pr := &peer.Peer{ Addr: ht.RemoteAddr(), } if req.TLS != nil { pr.AuthInfo = credentials.TLSInfo{State: *req.TLS} } ctx = metadata.NewIncomingContext(ctx, ht.headerMD) s.ctx = peer.NewContext(ctx, pr) if ht.stats != nil { s.ctx = ht.stats.TagRPC(s.ctx, &stats.RPCTagInfo{FullMethodName: s.method}) inHeader := &stats.InHeader{ FullMethod: s.method, RemoteAddr: ht.RemoteAddr(), Compression: s.recvCompress, } ht.stats.HandleRPC(s.ctx, inHeader) } s.trReader = &transportReader{ reader: &recvBufferReader{ctx: s.ctx, ctxDone: s.ctx.Done(), recv: s.buf, freeBuffer: func(*bytes.Buffer) {}}, windowHandler: func(int) {}, } // readerDone is closed when the Body.Read-ing goroutine exits. readerDone := make(chan struct{}) go func() { defer close(readerDone) // TODO: minimize garbage, optimize recvBuffer code/ownership const readSize = 8196 for buf := make([]byte, readSize); ; { n, err := req.Body.Read(buf) if n > 0 { s.buf.put(recvMsg{buffer: bytes.NewBuffer(buf[:n:n])}) buf = buf[n:] } if err != nil { s.buf.put(recvMsg{err: mapRecvMsgError(err)}) return } if len(buf) == 0 { buf = make([]byte, readSize) } } }() // startStream is provided by the *grpc.Server's serveStreams. // It starts a goroutine serving s and exits immediately. // The goroutine that is started is the one that then calls // into ht, calling WriteHeader, Write, WriteStatus, Close, etc. startStream(s) ht.runStream() close(requestOver) // Wait for reading goroutine to finish. req.Body.Close() <-readerDone } func (ht *serverHandlerTransport) runStream() { for { select { case fn := <-ht.writes: fn() case <-ht.closedCh: return } } } func (ht *serverHandlerTransport) IncrMsgSent() {} func (ht *serverHandlerTransport) IncrMsgRecv() {} func (ht *serverHandlerTransport) Drain() { panic("Drain() is not implemented") } // mapRecvMsgError returns the non-nil err into the appropriate // error value as expected by callers of *grpc.parser.recvMsg. // In particular, in can only be: // * io.EOF // * io.ErrUnexpectedEOF // * of type transport.ConnectionError // * an error from the status package func mapRecvMsgError(err error) error { if err == io.EOF || err == io.ErrUnexpectedEOF { return err } if se, ok := err.(http2.StreamError); ok { if code, ok := http2ErrConvTab[se.Code]; ok { return status.Error(code, se.Error()) } } if strings.Contains(err.Error(), "body closed by handler") { return status.Error(codes.Canceled, err.Error()) } return connectionErrorf(true, err, err.Error()) } grpc-go-1.22.1/internal/transport/handler_server_test.go000066400000000000000000000307311351635773100234130ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package transport import ( "context" "errors" "fmt" "io" "net/http" "net/http/httptest" "net/url" "reflect" "sync" "testing" "time" "github.com/golang/protobuf/proto" dpb "github.com/golang/protobuf/ptypes/duration" epb "google.golang.org/genproto/googleapis/rpc/errdetails" "google.golang.org/grpc/codes" "google.golang.org/grpc/metadata" "google.golang.org/grpc/status" ) func TestHandlerTransport_NewServerHandlerTransport(t *testing.T) { type testCase struct { name string req *http.Request wantErr string modrw func(http.ResponseWriter) http.ResponseWriter check func(*serverHandlerTransport, *testCase) error } tests := []testCase{ { name: "http/1.1", req: &http.Request{ ProtoMajor: 1, ProtoMinor: 1, }, wantErr: "gRPC requires HTTP/2", }, { name: "bad method", req: &http.Request{ ProtoMajor: 2, Method: "GET", Header: http.Header{}, RequestURI: "/", }, wantErr: "invalid gRPC request method", }, { name: "bad content type", req: &http.Request{ ProtoMajor: 2, Method: "POST", Header: http.Header{ "Content-Type": {"application/foo"}, }, RequestURI: "/service/foo.bar", }, wantErr: "invalid gRPC request content-type", }, { name: "not flusher", req: &http.Request{ ProtoMajor: 2, Method: "POST", Header: http.Header{ "Content-Type": {"application/grpc"}, }, RequestURI: "/service/foo.bar", }, modrw: func(w http.ResponseWriter) http.ResponseWriter { // Return w without its Flush method type onlyCloseNotifier interface { http.ResponseWriter http.CloseNotifier } return struct{ onlyCloseNotifier }{w.(onlyCloseNotifier)} }, wantErr: "gRPC requires a ResponseWriter supporting http.Flusher", }, { name: "valid", req: &http.Request{ ProtoMajor: 2, Method: "POST", Header: http.Header{ "Content-Type": {"application/grpc"}, }, URL: &url.URL{ Path: "/service/foo.bar", }, RequestURI: "/service/foo.bar", }, check: func(t *serverHandlerTransport, tt *testCase) error { if t.req != tt.req { return fmt.Errorf("t.req = %p; want %p", t.req, tt.req) } if t.rw == nil { return errors.New("t.rw = nil; want non-nil") } return nil }, }, { name: "with timeout", req: &http.Request{ ProtoMajor: 2, Method: "POST", Header: http.Header{ "Content-Type": []string{"application/grpc"}, "Grpc-Timeout": {"200m"}, }, URL: &url.URL{ Path: "/service/foo.bar", }, RequestURI: "/service/foo.bar", }, check: func(t *serverHandlerTransport, tt *testCase) error { if !t.timeoutSet { return errors.New("timeout not set") } if want := 200 * time.Millisecond; t.timeout != want { return fmt.Errorf("timeout = %v; want %v", t.timeout, want) } return nil }, }, { name: "with bad timeout", req: &http.Request{ ProtoMajor: 2, Method: "POST", Header: http.Header{ "Content-Type": []string{"application/grpc"}, "Grpc-Timeout": {"tomorrow"}, }, URL: &url.URL{ Path: "/service/foo.bar", }, RequestURI: "/service/foo.bar", }, wantErr: `rpc error: code = Internal desc = malformed time-out: transport: timeout unit is not recognized: "tomorrow"`, }, { name: "with metadata", req: &http.Request{ ProtoMajor: 2, Method: "POST", Header: http.Header{ "Content-Type": []string{"application/grpc"}, "meta-foo": {"foo-val"}, "meta-bar": {"bar-val1", "bar-val2"}, "user-agent": {"x/y 
a/b"}, }, URL: &url.URL{ Path: "/service/foo.bar", }, RequestURI: "/service/foo.bar", }, check: func(ht *serverHandlerTransport, tt *testCase) error { want := metadata.MD{ "meta-bar": {"bar-val1", "bar-val2"}, "user-agent": {"x/y a/b"}, "meta-foo": {"foo-val"}, "content-type": {"application/grpc"}, } if !reflect.DeepEqual(ht.headerMD, want) { return fmt.Errorf("metdata = %#v; want %#v", ht.headerMD, want) } return nil }, }, } for _, tt := range tests { rw := newTestHandlerResponseWriter() if tt.modrw != nil { rw = tt.modrw(rw) } got, gotErr := NewServerHandlerTransport(rw, tt.req, nil) if (gotErr != nil) != (tt.wantErr != "") || (gotErr != nil && gotErr.Error() != tt.wantErr) { t.Errorf("%s: error = %q; want %q", tt.name, gotErr.Error(), tt.wantErr) continue } if gotErr != nil { continue } if tt.check != nil { if err := tt.check(got.(*serverHandlerTransport), &tt); err != nil { t.Errorf("%s: %v", tt.name, err) } } } } type testHandlerResponseWriter struct { *httptest.ResponseRecorder closeNotify chan bool } func (w testHandlerResponseWriter) CloseNotify() <-chan bool { return w.closeNotify } func (w testHandlerResponseWriter) Flush() {} func newTestHandlerResponseWriter() http.ResponseWriter { return testHandlerResponseWriter{ ResponseRecorder: httptest.NewRecorder(), closeNotify: make(chan bool, 1), } } type handleStreamTest struct { t *testing.T bodyw *io.PipeWriter rw testHandlerResponseWriter ht *serverHandlerTransport } func newHandleStreamTest(t *testing.T) *handleStreamTest { bodyr, bodyw := io.Pipe() req := &http.Request{ ProtoMajor: 2, Method: "POST", Header: http.Header{ "Content-Type": {"application/grpc"}, }, URL: &url.URL{ Path: "/service/foo.bar", }, RequestURI: "/service/foo.bar", Body: bodyr, } rw := newTestHandlerResponseWriter().(testHandlerResponseWriter) ht, err := NewServerHandlerTransport(rw, req, nil) if err != nil { t.Fatal(err) } return &handleStreamTest{ t: t, bodyw: bodyw, ht: ht.(*serverHandlerTransport), rw: rw, } } func TestHandlerTransport_HandleStreams(t *testing.T) { st := newHandleStreamTest(t) handleStream := func(s *Stream) { if want := "/service/foo.bar"; s.method != want { t.Errorf("stream method = %q; want %q", s.method, want) } st.bodyw.Close() // no body st.ht.WriteStatus(s, status.New(codes.OK, "")) } st.ht.HandleStreams( func(s *Stream) { go handleStream(s) }, func(ctx context.Context, method string) context.Context { return ctx }, ) wantHeader := http.Header{ "Date": nil, "Content-Type": {"application/grpc"}, "Trailer": {"Grpc-Status", "Grpc-Message", "Grpc-Status-Details-Bin"}, "Grpc-Status": {"0"}, } if !reflect.DeepEqual(st.rw.HeaderMap, wantHeader) { t.Errorf("Header+Trailer Map: %#v; want %#v", st.rw.HeaderMap, wantHeader) } } // Tests that codes.Unimplemented will close the body, per comment in handler_server.go. func TestHandlerTransport_HandleStreams_Unimplemented(t *testing.T) { handleStreamCloseBodyTest(t, codes.Unimplemented, "thingy is unimplemented") } // Tests that codes.InvalidArgument will close the body, per comment in handler_server.go. 
func TestHandlerTransport_HandleStreams_InvalidArgument(t *testing.T) { handleStreamCloseBodyTest(t, codes.InvalidArgument, "bad arg") } func handleStreamCloseBodyTest(t *testing.T, statusCode codes.Code, msg string) { st := newHandleStreamTest(t) handleStream := func(s *Stream) { st.ht.WriteStatus(s, status.New(statusCode, msg)) } st.ht.HandleStreams( func(s *Stream) { go handleStream(s) }, func(ctx context.Context, method string) context.Context { return ctx }, ) wantHeader := http.Header{ "Date": nil, "Content-Type": {"application/grpc"}, "Trailer": {"Grpc-Status", "Grpc-Message", "Grpc-Status-Details-Bin"}, "Grpc-Status": {fmt.Sprint(uint32(statusCode))}, "Grpc-Message": {encodeGrpcMessage(msg)}, } if !reflect.DeepEqual(st.rw.HeaderMap, wantHeader) { t.Errorf("Header+Trailer mismatch.\n got: %#v\nwant: %#v", st.rw.HeaderMap, wantHeader) } } func TestHandlerTransport_HandleStreams_Timeout(t *testing.T) { bodyr, bodyw := io.Pipe() req := &http.Request{ ProtoMajor: 2, Method: "POST", Header: http.Header{ "Content-Type": {"application/grpc"}, "Grpc-Timeout": {"200m"}, }, URL: &url.URL{ Path: "/service/foo.bar", }, RequestURI: "/service/foo.bar", Body: bodyr, } rw := newTestHandlerResponseWriter().(testHandlerResponseWriter) ht, err := NewServerHandlerTransport(rw, req, nil) if err != nil { t.Fatal(err) } runStream := func(s *Stream) { defer bodyw.Close() select { case <-s.ctx.Done(): case <-time.After(5 * time.Second): t.Errorf("timeout waiting for ctx.Done") return } err := s.ctx.Err() if err != context.DeadlineExceeded { t.Errorf("ctx.Err = %v; want %v", err, context.DeadlineExceeded) return } ht.WriteStatus(s, status.New(codes.DeadlineExceeded, "too slow")) } ht.HandleStreams( func(s *Stream) { go runStream(s) }, func(ctx context.Context, method string) context.Context { return ctx }, ) wantHeader := http.Header{ "Date": nil, "Content-Type": {"application/grpc"}, "Trailer": {"Grpc-Status", "Grpc-Message", "Grpc-Status-Details-Bin"}, "Grpc-Status": {"4"}, "Grpc-Message": {encodeGrpcMessage("too slow")}, } if !reflect.DeepEqual(rw.HeaderMap, wantHeader) { t.Errorf("Header+Trailer Map mismatch.\n got: %#v\nwant: %#v", rw.HeaderMap, wantHeader) } } // TestHandlerTransport_HandleStreams_MultiWriteStatus ensures that // concurrent "WriteStatus"s do not panic writing to closed "writes" channel. func TestHandlerTransport_HandleStreams_MultiWriteStatus(t *testing.T) { testHandlerTransportHandleStreams(t, func(st *handleStreamTest, s *Stream) { if want := "/service/foo.bar"; s.method != want { t.Errorf("stream method = %q; want %q", s.method, want) } st.bodyw.Close() // no body var wg sync.WaitGroup wg.Add(5) for i := 0; i < 5; i++ { go func() { defer wg.Done() st.ht.WriteStatus(s, status.New(codes.OK, "")) }() } wg.Wait() }) } // TestHandlerTransport_HandleStreams_WriteStatusWrite ensures that "Write" // following "WriteStatus" does not panic writing to closed "writes" channel. 
func TestHandlerTransport_HandleStreams_WriteStatusWrite(t *testing.T) { testHandlerTransportHandleStreams(t, func(st *handleStreamTest, s *Stream) { if want := "/service/foo.bar"; s.method != want { t.Errorf("stream method = %q; want %q", s.method, want) } st.bodyw.Close() // no body st.ht.WriteStatus(s, status.New(codes.OK, "")) st.ht.Write(s, []byte("hdr"), []byte("data"), &Options{}) }) } func testHandlerTransportHandleStreams(t *testing.T, handleStream func(st *handleStreamTest, s *Stream)) { st := newHandleStreamTest(t) st.ht.HandleStreams( func(s *Stream) { go handleStream(st, s) }, func(ctx context.Context, method string) context.Context { return ctx }, ) } func TestHandlerTransport_HandleStreams_ErrDetails(t *testing.T) { errDetails := []proto.Message{ &epb.RetryInfo{ RetryDelay: &dpb.Duration{Seconds: 60}, }, &epb.ResourceInfo{ ResourceType: "foo bar", ResourceName: "service.foo.bar", Owner: "User", }, } statusCode := codes.ResourceExhausted msg := "you are being throttled" st, err := status.New(statusCode, msg).WithDetails(errDetails...) if err != nil { t.Fatal(err) } stBytes, err := proto.Marshal(st.Proto()) if err != nil { t.Fatal(err) } hst := newHandleStreamTest(t) handleStream := func(s *Stream) { hst.ht.WriteStatus(s, st) } hst.ht.HandleStreams( func(s *Stream) { go handleStream(s) }, func(ctx context.Context, method string) context.Context { return ctx }, ) wantHeader := http.Header{ "Date": nil, "Content-Type": {"application/grpc"}, "Trailer": {"Grpc-Status", "Grpc-Message", "Grpc-Status-Details-Bin"}, "Grpc-Status": {fmt.Sprint(uint32(statusCode))}, "Grpc-Message": {encodeGrpcMessage(msg)}, "Grpc-Status-Details-Bin": {encodeBinHeader(stBytes)}, } if !reflect.DeepEqual(hst.rw.HeaderMap, wantHeader) { t.Errorf("Header+Trailer mismatch.\n got: %#v\nwant: %#v", hst.rw.HeaderMap, wantHeader) } } grpc-go-1.22.1/internal/transport/http2_client.go000066400000000000000000001237471351635773100217620ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package transport import ( "context" "fmt" "io" "math" "net" "strconv" "strings" "sync" "sync/atomic" "time" "golang.org/x/net/http2" "golang.org/x/net/http2/hpack" "google.golang.org/grpc/codes" "google.golang.org/grpc/credentials" "google.golang.org/grpc/internal/channelz" "google.golang.org/grpc/internal/syscall" "google.golang.org/grpc/keepalive" "google.golang.org/grpc/metadata" "google.golang.org/grpc/peer" "google.golang.org/grpc/stats" "google.golang.org/grpc/status" ) // http2Client implements the ClientTransport interface with HTTP2. type http2Client struct { ctx context.Context cancel context.CancelFunc ctxDone <-chan struct{} // Cache the ctx.Done() chan. userAgent string md interface{} conn net.Conn // underlying communication channel loopy *loopyWriter remoteAddr net.Addr localAddr net.Addr authInfo credentials.AuthInfo // auth info about the connection readerDone chan struct{} // sync point to enable testing. writerDone chan struct{} // sync point to enable testing. 
// goAway is closed to notify the upper layer (i.e., addrConn.transportMonitor) // that the server sent GoAway on this transport. goAway chan struct{} // awakenKeepalive is used to wake up keepalive when after it has gone dormant. awakenKeepalive chan struct{} framer *framer // controlBuf delivers all the control related tasks (e.g., window // updates, reset streams, and various settings) to the controller. controlBuf *controlBuffer fc *trInFlow // The scheme used: https if TLS is on, http otherwise. scheme string isSecure bool perRPCCreds []credentials.PerRPCCredentials // Boolean to keep track of reading activity on transport. // 1 is true and 0 is false. activity uint32 // Accessed atomically. kp keepalive.ClientParameters keepaliveEnabled bool statsHandler stats.Handler initialWindowSize int32 // configured by peer through SETTINGS_MAX_HEADER_LIST_SIZE maxSendHeaderListSize *uint32 bdpEst *bdpEstimator // onPrefaceReceipt is a callback that client transport calls upon // receiving server preface to signal that a succefull HTTP2 // connection was established. onPrefaceReceipt func() maxConcurrentStreams uint32 streamQuota int64 streamsQuotaAvailable chan struct{} waitingStreams uint32 nextID uint32 mu sync.Mutex // guard the following variables state transportState activeStreams map[uint32]*Stream // prevGoAway ID records the Last-Stream-ID in the previous GOAway frame. prevGoAwayID uint32 // goAwayReason records the http2.ErrCode and debug data received with the // GoAway frame. goAwayReason GoAwayReason // Fields below are for channelz metric collection. channelzID int64 // channelz unique identification number czData *channelzData onGoAway func(GoAwayReason) onClose func() bufferPool *bufferPool } func dial(ctx context.Context, fn func(context.Context, string) (net.Conn, error), addr string) (net.Conn, error) { if fn != nil { return fn(ctx, addr) } return (&net.Dialer{}).DialContext(ctx, "tcp", addr) } func isTemporary(err error) bool { switch err := err.(type) { case interface { Temporary() bool }: return err.Temporary() case interface { Timeout() bool }: // Timeouts may be resolved upon retry, and are thus treated as // temporary. return err.Timeout() } return true } // newHTTP2Client constructs a connected ClientTransport to addr based on HTTP2 // and starts to receive messages on it. Non-nil error returns if construction // fails. func newHTTP2Client(connectCtx, ctx context.Context, addr TargetInfo, opts ConnectOptions, onPrefaceReceipt func(), onGoAway func(GoAwayReason), onClose func()) (_ *http2Client, err error) { scheme := "http" ctx, cancel := context.WithCancel(ctx) defer func() { if err != nil { cancel() } }() conn, err := dial(connectCtx, opts.Dialer, addr.Addr) if err != nil { if opts.FailOnNonTempDialError { return nil, connectionErrorf(isTemporary(err), err, "transport: error while dialing: %v", err) } return nil, connectionErrorf(true, err, "transport: Error while dialing %v", err) } // Any further errors will close the underlying connection defer func(conn net.Conn) { if err != nil { conn.Close() } }(conn) kp := opts.KeepaliveParams // Validate keepalive parameters. 
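	// Note (added for clarity): a zero value means the caller did not configure
	// the parameter; fall back to the package defaults in defaults.go (keepalive
	// time defaults to infinity, i.e. keepalive effectively disabled, and the
	// timeout defaults to 20 seconds).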
if kp.Time == 0 { kp.Time = defaultClientKeepaliveTime } if kp.Timeout == 0 { kp.Timeout = defaultClientKeepaliveTimeout } keepaliveEnabled := false if kp.Time != infinity { if err = syscall.SetTCPUserTimeout(conn, kp.Timeout); err != nil { return nil, connectionErrorf(false, err, "transport: failed to set TCP_USER_TIMEOUT: %v", err) } keepaliveEnabled = true } var ( isSecure bool authInfo credentials.AuthInfo ) transportCreds := opts.TransportCredentials perRPCCreds := opts.PerRPCCredentials if b := opts.CredsBundle; b != nil { if t := b.TransportCredentials(); t != nil { transportCreds = t } if t := b.PerRPCCredentials(); t != nil { perRPCCreds = append(perRPCCreds, t) } } if transportCreds != nil { scheme = "https" conn, authInfo, err = transportCreds.ClientHandshake(connectCtx, addr.Authority, conn) if err != nil { return nil, connectionErrorf(isTemporary(err), err, "transport: authentication handshake failed: %v", err) } isSecure = true } dynamicWindow := true icwz := int32(initialWindowSize) if opts.InitialConnWindowSize >= defaultWindowSize { icwz = opts.InitialConnWindowSize dynamicWindow = false } writeBufSize := opts.WriteBufferSize readBufSize := opts.ReadBufferSize maxHeaderListSize := defaultClientMaxHeaderListSize if opts.MaxHeaderListSize != nil { maxHeaderListSize = *opts.MaxHeaderListSize } t := &http2Client{ ctx: ctx, ctxDone: ctx.Done(), // Cache Done chan. cancel: cancel, userAgent: opts.UserAgent, md: addr.Metadata, conn: conn, remoteAddr: conn.RemoteAddr(), localAddr: conn.LocalAddr(), authInfo: authInfo, readerDone: make(chan struct{}), writerDone: make(chan struct{}), goAway: make(chan struct{}), awakenKeepalive: make(chan struct{}, 1), framer: newFramer(conn, writeBufSize, readBufSize, maxHeaderListSize), fc: &trInFlow{limit: uint32(icwz)}, scheme: scheme, activeStreams: make(map[uint32]*Stream), isSecure: isSecure, perRPCCreds: perRPCCreds, kp: kp, statsHandler: opts.StatsHandler, initialWindowSize: initialWindowSize, onPrefaceReceipt: onPrefaceReceipt, nextID: 1, maxConcurrentStreams: defaultMaxStreamsClient, streamQuota: defaultMaxStreamsClient, streamsQuotaAvailable: make(chan struct{}, 1), czData: new(channelzData), onGoAway: onGoAway, onClose: onClose, keepaliveEnabled: keepaliveEnabled, bufferPool: newBufferPool(), } t.controlBuf = newControlBuffer(t.ctxDone) if opts.InitialWindowSize >= defaultWindowSize { t.initialWindowSize = opts.InitialWindowSize dynamicWindow = false } if dynamicWindow { t.bdpEst = &bdpEstimator{ bdp: initialWindowSize, updateFlowControl: t.updateFlowControl, } } // Make sure awakenKeepalive can't be written upon. // keepalive routine will make it writable, if need be. t.awakenKeepalive <- struct{}{} if t.statsHandler != nil { t.ctx = t.statsHandler.TagConn(t.ctx, &stats.ConnTagInfo{ RemoteAddr: t.remoteAddr, LocalAddr: t.localAddr, }) connBegin := &stats.ConnBegin{ Client: true, } t.statsHandler.HandleConn(t.ctx, connBegin) } if channelz.IsOn() { t.channelzID = channelz.RegisterNormalSocket(t, opts.ChannelzParentID, fmt.Sprintf("%s -> %s", t.localAddr, t.remoteAddr)) } if t.keepaliveEnabled { go t.keepalive() } // Start the reader goroutine for incoming message. Each transport has // a dedicated goroutine which reads HTTP2 frame from network. Then it // dispatches the frame to the corresponding stream entity. go t.reader() // Send connection preface to server. 
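	// Note (added for clarity): clientPreface is the fixed 24-byte HTTP/2 client
	// connection preface ("PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n", RFC 7540 §3.5). It
	// must be written before any frames, and the server is expected to answer
	// with a SETTINGS frame, which reader() verifies.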
n, err := t.conn.Write(clientPreface) if err != nil { t.Close() return nil, connectionErrorf(true, err, "transport: failed to write client preface: %v", err) } if n != len(clientPreface) { t.Close() return nil, connectionErrorf(true, err, "transport: preface mismatch, wrote %d bytes; want %d", n, len(clientPreface)) } var ss []http2.Setting if t.initialWindowSize != defaultWindowSize { ss = append(ss, http2.Setting{ ID: http2.SettingInitialWindowSize, Val: uint32(t.initialWindowSize), }) } if opts.MaxHeaderListSize != nil { ss = append(ss, http2.Setting{ ID: http2.SettingMaxHeaderListSize, Val: *opts.MaxHeaderListSize, }) } err = t.framer.fr.WriteSettings(ss...) if err != nil { t.Close() return nil, connectionErrorf(true, err, "transport: failed to write initial settings frame: %v", err) } // Adjust the connection flow control window if needed. if delta := uint32(icwz - defaultWindowSize); delta > 0 { if err := t.framer.fr.WriteWindowUpdate(0, delta); err != nil { t.Close() return nil, connectionErrorf(true, err, "transport: failed to write window update: %v", err) } } if err := t.framer.writer.Flush(); err != nil { return nil, err } go func() { t.loopy = newLoopyWriter(clientSide, t.framer, t.controlBuf, t.bdpEst) err := t.loopy.run() if err != nil { errorf("transport: loopyWriter.run returning. Err: %v", err) } // If it's a connection error, let reader goroutine handle it // since there might be data in the buffers. if _, ok := err.(net.Error); !ok { t.conn.Close() } close(t.writerDone) }() return t, nil } func (t *http2Client) newStream(ctx context.Context, callHdr *CallHdr) *Stream { // TODO(zhaoq): Handle uint32 overflow of Stream.id. s := &Stream{ done: make(chan struct{}), method: callHdr.Method, sendCompress: callHdr.SendCompress, buf: newRecvBuffer(), headerChan: make(chan struct{}), contentSubtype: callHdr.ContentSubtype, } s.wq = newWriteQuota(defaultWriteQuota, s.done) s.requestRead = func(n int) { t.adjustWindow(s, uint32(n)) } // The client side stream context should have exactly the same life cycle with the user provided context. // That means, s.ctx should be read-only. And s.ctx is done iff ctx is done. // So we use the original context here instead of creating a copy. s.ctx = ctx s.trReader = &transportReader{ reader: &recvBufferReader{ ctx: s.ctx, ctxDone: s.ctx.Done(), recv: s.buf, closeStream: func(err error) { t.CloseStream(s, err) }, freeBuffer: t.bufferPool.put, }, windowHandler: func(n int) { t.updateWindow(s, uint32(n)) }, } return s } func (t *http2Client) getPeer() *peer.Peer { pr := &peer.Peer{ Addr: t.remoteAddr, } // Attach Auth info if there is any. if t.authInfo != nil { pr.AuthInfo = t.authInfo } return pr } func (t *http2Client) createHeaderFields(ctx context.Context, callHdr *CallHdr) ([]hpack.HeaderField, error) { aud := t.createAudience(callHdr) authData, err := t.getTrAuthData(ctx, aud) if err != nil { return nil, err } callAuthData, err := t.getCallAuthData(ctx, aud, callHdr) if err != nil { return nil, err } // TODO(mmukhi): Benchmark if the performance gets better if count the metadata and other header fields // first and create a slice of that exact size. // Make the slice of certain predictable size to reduce allocations made by append. 
hfLen := 7 // :method, :scheme, :path, :authority, content-type, user-agent, te hfLen += len(authData) + len(callAuthData) headerFields := make([]hpack.HeaderField, 0, hfLen) headerFields = append(headerFields, hpack.HeaderField{Name: ":method", Value: "POST"}) headerFields = append(headerFields, hpack.HeaderField{Name: ":scheme", Value: t.scheme}) headerFields = append(headerFields, hpack.HeaderField{Name: ":path", Value: callHdr.Method}) headerFields = append(headerFields, hpack.HeaderField{Name: ":authority", Value: callHdr.Host}) headerFields = append(headerFields, hpack.HeaderField{Name: "content-type", Value: contentType(callHdr.ContentSubtype)}) headerFields = append(headerFields, hpack.HeaderField{Name: "user-agent", Value: t.userAgent}) headerFields = append(headerFields, hpack.HeaderField{Name: "te", Value: "trailers"}) if callHdr.PreviousAttempts > 0 { headerFields = append(headerFields, hpack.HeaderField{Name: "grpc-previous-rpc-attempts", Value: strconv.Itoa(callHdr.PreviousAttempts)}) } if callHdr.SendCompress != "" { headerFields = append(headerFields, hpack.HeaderField{Name: "grpc-encoding", Value: callHdr.SendCompress}) } if dl, ok := ctx.Deadline(); ok { // Send out timeout regardless its value. The server can detect timeout context by itself. // TODO(mmukhi): Perhaps this field should be updated when actually writing out to the wire. timeout := time.Until(dl) headerFields = append(headerFields, hpack.HeaderField{Name: "grpc-timeout", Value: encodeTimeout(timeout)}) } for k, v := range authData { headerFields = append(headerFields, hpack.HeaderField{Name: k, Value: encodeMetadataHeader(k, v)}) } for k, v := range callAuthData { headerFields = append(headerFields, hpack.HeaderField{Name: k, Value: encodeMetadataHeader(k, v)}) } if b := stats.OutgoingTags(ctx); b != nil { headerFields = append(headerFields, hpack.HeaderField{Name: "grpc-tags-bin", Value: encodeBinHeader(b)}) } if b := stats.OutgoingTrace(ctx); b != nil { headerFields = append(headerFields, hpack.HeaderField{Name: "grpc-trace-bin", Value: encodeBinHeader(b)}) } if md, added, ok := metadata.FromOutgoingContextRaw(ctx); ok { var k string for k, vv := range md { // HTTP doesn't allow you to set pseudoheaders after non pseudoheaders were set. if isReservedHeader(k) { continue } for _, v := range vv { headerFields = append(headerFields, hpack.HeaderField{Name: k, Value: encodeMetadataHeader(k, v)}) } } for _, vv := range added { for i, v := range vv { if i%2 == 0 { k = v continue } // HTTP doesn't allow you to set pseudoheaders after non pseudoheaders were set. if isReservedHeader(k) { continue } headerFields = append(headerFields, hpack.HeaderField{Name: strings.ToLower(k), Value: encodeMetadataHeader(k, v)}) } } } if md, ok := t.md.(*metadata.MD); ok { for k, vv := range *md { if isReservedHeader(k) { continue } for _, v := range vv { headerFields = append(headerFields, hpack.HeaderField{Name: k, Value: encodeMetadataHeader(k, v)}) } } } return headerFields, nil } func (t *http2Client) createAudience(callHdr *CallHdr) string { // Create an audience string only if needed. if len(t.perRPCCreds) == 0 && callHdr.Creds == nil { return "" } // Construct URI required to get auth request metadata. // Omit port if it is the default one. 
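	// For example (illustrative values only): Host "example.com:443" with
	// Method "/pkg.Service/Method" produces the audience
	// "https://example.com/pkg.Service".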
host := strings.TrimSuffix(callHdr.Host, ":443") pos := strings.LastIndex(callHdr.Method, "/") if pos == -1 { pos = len(callHdr.Method) } return "https://" + host + callHdr.Method[:pos] } func (t *http2Client) getTrAuthData(ctx context.Context, audience string) (map[string]string, error) { authData := map[string]string{} for _, c := range t.perRPCCreds { data, err := c.GetRequestMetadata(ctx, audience) if err != nil { if _, ok := status.FromError(err); ok { return nil, err } return nil, status.Errorf(codes.Unauthenticated, "transport: %v", err) } for k, v := range data { // Capital header names are illegal in HTTP/2. k = strings.ToLower(k) authData[k] = v } } return authData, nil } func (t *http2Client) getCallAuthData(ctx context.Context, audience string, callHdr *CallHdr) (map[string]string, error) { callAuthData := map[string]string{} // Check if credentials.PerRPCCredentials were provided via call options. // Note: if these credentials are provided both via dial options and call // options, then both sets of credentials will be applied. if callCreds := callHdr.Creds; callCreds != nil { if !t.isSecure && callCreds.RequireTransportSecurity() { return nil, status.Error(codes.Unauthenticated, "transport: cannot send secure credentials on an insecure connection") } data, err := callCreds.GetRequestMetadata(ctx, audience) if err != nil { return nil, status.Errorf(codes.Internal, "transport: %v", err) } for k, v := range data { // Capital header names are illegal in HTTP/2 k = strings.ToLower(k) callAuthData[k] = v } } return callAuthData, nil } // NewStream creates a stream and registers it into the transport as "active" // streams. func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (_ *Stream, err error) { ctx = peer.NewContext(ctx, t.getPeer()) headerFields, err := t.createHeaderFields(ctx, callHdr) if err != nil { return nil, err } s := t.newStream(ctx, callHdr) cleanup := func(err error) { if s.swapState(streamDone) == streamDone { // If it was already done, return. return } // The stream was unprocessed by the server. atomic.StoreUint32(&s.unprocessed, 1) s.write(recvMsg{err: err}) close(s.done) // If headerChan isn't closed, then close it. if atomic.CompareAndSwapUint32(&s.headerChanClosed, 0, 1) { close(s.headerChan) } } hdr := &headerFrame{ hf: headerFields, endStream: false, initStream: func(id uint32) (bool, error) { t.mu.Lock() if state := t.state; state != reachable { t.mu.Unlock() // Do a quick cleanup. err := error(errStreamDrain) if state == closing { err = ErrConnClosing } cleanup(err) return false, err } t.activeStreams[id] = s if channelz.IsOn() { atomic.AddInt64(&t.czData.streamsStarted, 1) atomic.StoreInt64(&t.czData.lastStreamCreatedTime, time.Now().UnixNano()) } var sendPing bool // If the number of active streams change from 0 to 1, then check if keepalive // has gone dormant. If so, wake it up. if len(t.activeStreams) == 1 && t.keepaliveEnabled { select { case t.awakenKeepalive <- struct{}{}: sendPing = true // Fill the awakenKeepalive channel again as this channel must be // kept non-writable except at the point that the keepalive() // goroutine is waiting either to be awaken or shutdown. t.awakenKeepalive <- struct{}{} default: } } t.mu.Unlock() return sendPing, nil }, onOrphaned: cleanup, wq: s.wq, } firstTry := true var ch chan struct{} checkForStreamQuota := func(it interface{}) bool { if t.streamQuota <= 0 { // Can go negative if server decreases it. 
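		// Note (added for clarity): this closure runs inside
		// controlBuf.executeAndPut; returning false makes NewStream below block on
		// t.streamsQuotaAvailable and retry once quota is returned by closeStream
		// or raised by a SETTINGS update from the server.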
if firstTry { t.waitingStreams++ } ch = t.streamsQuotaAvailable return false } if !firstTry { t.waitingStreams-- } t.streamQuota-- h := it.(*headerFrame) h.streamID = t.nextID t.nextID += 2 s.id = h.streamID s.fc = &inFlow{limit: uint32(t.initialWindowSize)} if t.streamQuota > 0 && t.waitingStreams > 0 { select { case t.streamsQuotaAvailable <- struct{}{}: default: } } return true } var hdrListSizeErr error checkForHeaderListSize := func(it interface{}) bool { if t.maxSendHeaderListSize == nil { return true } hdrFrame := it.(*headerFrame) var sz int64 for _, f := range hdrFrame.hf { if sz += int64(f.Size()); sz > int64(*t.maxSendHeaderListSize) { hdrListSizeErr = status.Errorf(codes.Internal, "header list size to send violates the maximum size (%d bytes) set by server", *t.maxSendHeaderListSize) return false } } return true } for { success, err := t.controlBuf.executeAndPut(func(it interface{}) bool { if !checkForStreamQuota(it) { return false } if !checkForHeaderListSize(it) { return false } return true }, hdr) if err != nil { return nil, err } if success { break } if hdrListSizeErr != nil { return nil, hdrListSizeErr } firstTry = false select { case <-ch: case <-s.ctx.Done(): return nil, ContextErr(s.ctx.Err()) case <-t.goAway: return nil, errStreamDrain case <-t.ctx.Done(): return nil, ErrConnClosing } } if t.statsHandler != nil { outHeader := &stats.OutHeader{ Client: true, FullMethod: callHdr.Method, RemoteAddr: t.remoteAddr, LocalAddr: t.localAddr, Compression: callHdr.SendCompress, } t.statsHandler.HandleRPC(s.ctx, outHeader) } return s, nil } // CloseStream clears the footprint of a stream when the stream is not needed any more. // This must not be executed in reader's goroutine. func (t *http2Client) CloseStream(s *Stream, err error) { var ( rst bool rstCode http2.ErrCode ) if err != nil { rst = true rstCode = http2.ErrCodeCancel } t.closeStream(s, err, rst, rstCode, status.Convert(err), nil, false) } func (t *http2Client) closeStream(s *Stream, err error, rst bool, rstCode http2.ErrCode, st *status.Status, mdata map[string][]string, eosReceived bool) { // Set stream status to done. if s.swapState(streamDone) == streamDone { // If it was already done, return. If multiple closeStream calls // happen simultaneously, wait for the first to finish. <-s.done return } // status and trailers can be updated here without any synchronization because the stream goroutine will // only read it after it sees an io.EOF error from read or write and we'll write those errors // only after updating this. s.status = st if len(mdata) > 0 { s.trailer = mdata } if err != nil { // This will unblock reads eventually. s.write(recvMsg{err: err}) } // If headerChan isn't closed, then close it. if atomic.CompareAndSwapUint32(&s.headerChanClosed, 0, 1) { s.noHeaders = true close(s.headerChan) } cleanup := &cleanupStream{ streamID: s.id, onWrite: func() { t.mu.Lock() if t.activeStreams != nil { delete(t.activeStreams, s.id) } t.mu.Unlock() if channelz.IsOn() { if eosReceived { atomic.AddInt64(&t.czData.streamsSucceeded, 1) } else { atomic.AddInt64(&t.czData.streamsFailed, 1) } } }, rst: rst, rstCode: rstCode, } addBackStreamQuota := func(interface{}) bool { t.streamQuota++ if t.streamQuota > 0 && t.waitingStreams > 0 { select { case t.streamsQuotaAvailable <- struct{}{}: default: } } return true } t.controlBuf.executeAndPut(addBackStreamQuota, cleanup) // This will unblock write. close(s.done) } // Close kicks off the shutdown process of the transport. This should be called // only once on a transport. 
Once it is called, the transport should not be // accessed any more. // // This method blocks until the addrConn that initiated this transport is // re-connected. This happens because t.onClose() begins reconnect logic at the // addrConn level and blocks until the addrConn is successfully connected. func (t *http2Client) Close() error { t.mu.Lock() // Make sure we only Close once. if t.state == closing { t.mu.Unlock() return nil } t.state = closing streams := t.activeStreams t.activeStreams = nil t.mu.Unlock() t.controlBuf.finish() t.cancel() err := t.conn.Close() if channelz.IsOn() { channelz.RemoveEntry(t.channelzID) } // Notify all active streams. for _, s := range streams { t.closeStream(s, ErrConnClosing, false, http2.ErrCodeNo, status.New(codes.Unavailable, ErrConnClosing.Desc), nil, false) } if t.statsHandler != nil { connEnd := &stats.ConnEnd{ Client: true, } t.statsHandler.HandleConn(t.ctx, connEnd) } t.onClose() return err } // GracefulClose sets the state to draining, which prevents new streams from // being created and causes the transport to be closed when the last active // stream is closed. If there are no active streams, the transport is closed // immediately. This does nothing if the transport is already draining or // closing. func (t *http2Client) GracefulClose() { t.mu.Lock() // Make sure we move to draining only from active. if t.state == draining || t.state == closing { t.mu.Unlock() return } t.state = draining active := len(t.activeStreams) t.mu.Unlock() if active == 0 { t.Close() return } t.controlBuf.put(&incomingGoAway{}) } // Write formats the data into HTTP2 data frame(s) and sends it out. The caller // should proceed only if Write returns nil. func (t *http2Client) Write(s *Stream, hdr []byte, data []byte, opts *Options) error { if opts.Last { // If it's the last message, update stream state. if !s.compareAndSwapState(streamActive, streamWriteDone) { return errStreamDone } } else if s.getState() != streamActive { return errStreamDone } df := &dataFrame{ streamID: s.id, endStream: opts.Last, } if hdr != nil || data != nil { // If it's not an empty data frame. // Add some data to grpc message header so that we can equally // distribute bytes across frames. emptyLen := http2MaxFrameLen - len(hdr) if emptyLen > len(data) { emptyLen = len(data) } hdr = append(hdr, data[:emptyLen]...) data = data[emptyLen:] df.h, df.d = hdr, data // TODO(mmukhi): The above logic in this if can be moved to loopyWriter's data handler. if err := s.wq.get(int32(len(hdr) + len(data))); err != nil { return err } } return t.controlBuf.put(df) } func (t *http2Client) getStream(f http2.Frame) (*Stream, bool) { t.mu.Lock() defer t.mu.Unlock() s, ok := t.activeStreams[f.Header().StreamID] return s, ok } // adjustWindow sends out extra window update over the initial window size // of stream if the application is requesting data larger in size than // the window. func (t *http2Client) adjustWindow(s *Stream, n uint32) { if w := s.fc.maybeAdjust(n); w > 0 { t.controlBuf.put(&outgoingWindowUpdate{streamID: s.id, increment: w}) } } // updateWindow adjusts the inbound quota for the stream. // Window updates will be sent out when the cumulative quota // exceeds the corresponding threshold. func (t *http2Client) updateWindow(s *Stream, n uint32) { if w := s.fc.onRead(n); w > 0 { t.controlBuf.put(&outgoingWindowUpdate{streamID: s.id, increment: w}) } } // updateFlowControl updates the incoming flow control windows // for the transport and the stream based on the current bdp // estimation. 
func (t *http2Client) updateFlowControl(n uint32) { t.mu.Lock() for _, s := range t.activeStreams { s.fc.newLimit(n) } t.mu.Unlock() updateIWS := func(interface{}) bool { t.initialWindowSize = int32(n) return true } t.controlBuf.executeAndPut(updateIWS, &outgoingWindowUpdate{streamID: 0, increment: t.fc.newLimit(n)}) t.controlBuf.put(&outgoingSettings{ ss: []http2.Setting{ { ID: http2.SettingInitialWindowSize, Val: n, }, }, }) } func (t *http2Client) handleData(f *http2.DataFrame) { size := f.Header().Length var sendBDPPing bool if t.bdpEst != nil { sendBDPPing = t.bdpEst.add(size) } // Decouple connection's flow control from application's read. // An update on connection's flow control should not depend on // whether user application has read the data or not. Such a // restriction is already imposed on the stream's flow control, // and therefore the sender will be blocked anyways. // Decoupling the connection flow control will prevent other // active(fast) streams from starving in presence of slow or // inactive streams. // if w := t.fc.onData(size); w > 0 { t.controlBuf.put(&outgoingWindowUpdate{ streamID: 0, increment: w, }) } if sendBDPPing { // Avoid excessive ping detection (e.g. in an L7 proxy) // by sending a window update prior to the BDP ping. if w := t.fc.reset(); w > 0 { t.controlBuf.put(&outgoingWindowUpdate{ streamID: 0, increment: w, }) } t.controlBuf.put(bdpPing) } // Select the right stream to dispatch. s, ok := t.getStream(f) if !ok { return } if size > 0 { if err := s.fc.onData(size); err != nil { t.closeStream(s, io.EOF, true, http2.ErrCodeFlowControl, status.New(codes.Internal, err.Error()), nil, false) return } if f.Header().Flags.Has(http2.FlagDataPadded) { if w := s.fc.onRead(size - uint32(len(f.Data()))); w > 0 { t.controlBuf.put(&outgoingWindowUpdate{s.id, w}) } } // TODO(bradfitz, zhaoq): A copy is required here because there is no // guarantee f.Data() is consumed before the arrival of next frame. // Can this copy be eliminated? if len(f.Data()) > 0 { buffer := t.bufferPool.get() buffer.Reset() buffer.Write(f.Data()) s.write(recvMsg{buffer: buffer}) } } // The server has closed the stream without sending trailers. Record that // the read direction is closed, and set the status appropriately. if f.FrameHeader.Flags.Has(http2.FlagDataEndStream) { t.closeStream(s, io.EOF, false, http2.ErrCodeNo, status.New(codes.Internal, "server closed the stream without sending trailers"), nil, true) } } func (t *http2Client) handleRSTStream(f *http2.RSTStreamFrame) { s, ok := t.getStream(f) if !ok { return } if f.ErrCode == http2.ErrCodeRefusedStream { // The stream was unprocessed by the server. atomic.StoreUint32(&s.unprocessed, 1) } statusCode, ok := http2ErrConvTab[f.ErrCode] if !ok { warningf("transport: http2Client.handleRSTStream found no mapped gRPC status for the received http2 error %v", f.ErrCode) statusCode = codes.Unknown } if statusCode == codes.Canceled { // Our deadline was already exceeded, and that was likely the cause of // this cancelation. Alter the status code accordingly. 
if d, ok := s.ctx.Deadline(); ok && d.After(time.Now()) { statusCode = codes.DeadlineExceeded } } t.closeStream(s, io.EOF, false, http2.ErrCodeNo, status.Newf(statusCode, "stream terminated by RST_STREAM with error code: %v", f.ErrCode), nil, false) } func (t *http2Client) handleSettings(f *http2.SettingsFrame, isFirst bool) { if f.IsAck() { return } var maxStreams *uint32 var ss []http2.Setting var updateFuncs []func() f.ForeachSetting(func(s http2.Setting) error { switch s.ID { case http2.SettingMaxConcurrentStreams: maxStreams = new(uint32) *maxStreams = s.Val case http2.SettingMaxHeaderListSize: updateFuncs = append(updateFuncs, func() { t.maxSendHeaderListSize = new(uint32) *t.maxSendHeaderListSize = s.Val }) default: ss = append(ss, s) } return nil }) if isFirst && maxStreams == nil { maxStreams = new(uint32) *maxStreams = math.MaxUint32 } sf := &incomingSettings{ ss: ss, } if maxStreams != nil { updateStreamQuota := func() { delta := int64(*maxStreams) - int64(t.maxConcurrentStreams) t.maxConcurrentStreams = *maxStreams t.streamQuota += delta if delta > 0 && t.waitingStreams > 0 { close(t.streamsQuotaAvailable) // wake all of them up. t.streamsQuotaAvailable = make(chan struct{}, 1) } } updateFuncs = append(updateFuncs, updateStreamQuota) } t.controlBuf.executeAndPut(func(interface{}) bool { for _, f := range updateFuncs { f() } return true }, sf) } func (t *http2Client) handlePing(f *http2.PingFrame) { if f.IsAck() { // Maybe it's a BDP ping. if t.bdpEst != nil { t.bdpEst.calculate(f.Data) } return } pingAck := &ping{ack: true} copy(pingAck.data[:], f.Data[:]) t.controlBuf.put(pingAck) } func (t *http2Client) handleGoAway(f *http2.GoAwayFrame) { t.mu.Lock() if t.state == closing { t.mu.Unlock() return } if f.ErrCode == http2.ErrCodeEnhanceYourCalm { infof("Client received GoAway with http2.ErrCodeEnhanceYourCalm.") } id := f.LastStreamID if id > 0 && id%2 != 1 { t.mu.Unlock() t.Close() return } // A client can receive multiple GoAways from the server (see // https://github.com/grpc/grpc-go/issues/1387). The idea is that the first // GoAway will be sent with an ID of MaxInt32 and the second GoAway will be // sent after an RTT delay with the ID of the last stream the server will // process. // // Therefore, when we get the first GoAway we don't necessarily close any // streams. While in case of second GoAway we close all streams created after // the GoAwayId. This way streams that were in-flight while the GoAway from // server was being sent don't get killed. select { case <-t.goAway: // t.goAway has been closed (i.e.,multiple GoAways). // If there are multiple GoAways the first one should always have an ID greater than the following ones. if id > t.prevGoAwayID { t.mu.Unlock() t.Close() return } default: t.setGoAwayReason(f) close(t.goAway) t.state = draining t.controlBuf.put(&incomingGoAway{}) // This has to be a new goroutine because we're still using the current goroutine to read in the transport. t.onGoAway(t.goAwayReason) } // All streams with IDs greater than the GoAwayId // and smaller than the previous GoAway ID should be killed. upperLimit := t.prevGoAwayID if upperLimit == 0 { // This is the first GoAway Frame. upperLimit = math.MaxUint32 // Kill all streams after the GoAway ID. } for streamID, stream := range t.activeStreams { if streamID > id && streamID <= upperLimit { // The stream was unprocessed by the server. 
atomic.StoreUint32(&stream.unprocessed, 1) t.closeStream(stream, errStreamDrain, false, http2.ErrCodeNo, statusGoAway, nil, false) } } t.prevGoAwayID = id active := len(t.activeStreams) t.mu.Unlock() if active == 0 { t.Close() } } // setGoAwayReason sets the value of t.goAwayReason based // on the GoAway frame received. // It expects a lock on transport's mutext to be held by // the caller. func (t *http2Client) setGoAwayReason(f *http2.GoAwayFrame) { t.goAwayReason = GoAwayNoReason switch f.ErrCode { case http2.ErrCodeEnhanceYourCalm: if string(f.DebugData()) == "too_many_pings" { t.goAwayReason = GoAwayTooManyPings } } } func (t *http2Client) GetGoAwayReason() GoAwayReason { t.mu.Lock() defer t.mu.Unlock() return t.goAwayReason } func (t *http2Client) handleWindowUpdate(f *http2.WindowUpdateFrame) { t.controlBuf.put(&incomingWindowUpdate{ streamID: f.Header().StreamID, increment: f.Increment, }) } // operateHeaders takes action on the decoded headers. func (t *http2Client) operateHeaders(frame *http2.MetaHeadersFrame) { s, ok := t.getStream(frame) if !ok { return } endStream := frame.StreamEnded() atomic.StoreUint32(&s.bytesReceived, 1) initialHeader := atomic.LoadUint32(&s.headerChanClosed) == 0 if !initialHeader && !endStream { // As specified by gRPC over HTTP2, a HEADERS frame (and associated CONTINUATION frames) can only appear at the start or end of a stream. Therefore, second HEADERS frame must have EOS bit set. st := status.New(codes.Internal, "a HEADERS frame cannot appear in the middle of a stream") t.closeStream(s, st.Err(), true, http2.ErrCodeProtocol, st, nil, false) return } state := &decodeState{} // Initialize isGRPC value to be !initialHeader, since if a gRPC Response-Headers has already been received, then it means that the peer is speaking gRPC and we are in gRPC mode. state.data.isGRPC = !initialHeader if err := state.decodeHeader(frame); err != nil { t.closeStream(s, err, true, http2.ErrCodeProtocol, status.Convert(err), nil, endStream) return } isHeader := false defer func() { if t.statsHandler != nil { if isHeader { inHeader := &stats.InHeader{ Client: true, WireLength: int(frame.Header().Length), } t.statsHandler.HandleRPC(s.ctx, inHeader) } else { inTrailer := &stats.InTrailer{ Client: true, WireLength: int(frame.Header().Length), } t.statsHandler.HandleRPC(s.ctx, inTrailer) } } }() // If headerChan hasn't been closed yet if atomic.CompareAndSwapUint32(&s.headerChanClosed, 0, 1) { if !endStream { // HEADERS frame block carries a Response-Headers. isHeader = true // These values can be set without any synchronization because // stream goroutine will read it only after seeing a closed // headerChan which we'll close after setting this. s.recvCompress = state.data.encoding if len(state.data.mdata) > 0 { s.header = state.data.mdata } } else { // HEADERS frame block carries a Trailers-Only. s.noHeaders = true } close(s.headerChan) } if !endStream { return } // if client received END_STREAM from server while stream was still active, send RST_STREAM rst := s.getState() == streamActive t.closeStream(s, io.EOF, rst, http2.ErrCodeNo, state.status(), state.data.mdata, true) } // reader runs as a separate goroutine in charge of reading data from network // connection. // // TODO(zhaoq): currently one reader per transport. Investigate whether this is // optimal. // TODO(zhaoq): Check the validity of the incoming frame sequence. func (t *http2Client) reader() { defer close(t.readerDone) // Check the validity of server preface. 
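	// Note (added for clarity): per the HTTP/2 spec the server preface must
	// begin with a SETTINGS frame; a read error or any other frame type below is
	// treated as a broken connection and the transport is closed.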
frame, err := t.framer.fr.ReadFrame() if err != nil { t.Close() // this kicks off resetTransport, so must be last before return return } t.conn.SetReadDeadline(time.Time{}) // reset deadline once we get the settings frame (we didn't time out, yay!) if t.keepaliveEnabled { atomic.CompareAndSwapUint32(&t.activity, 0, 1) } sf, ok := frame.(*http2.SettingsFrame) if !ok { t.Close() // this kicks off resetTransport, so must be last before return return } t.onPrefaceReceipt() t.handleSettings(sf, true) // loop to keep reading incoming messages on this transport. for { frame, err := t.framer.fr.ReadFrame() if t.keepaliveEnabled { atomic.CompareAndSwapUint32(&t.activity, 0, 1) } if err != nil { // Abort an active stream if the http2.Framer returns a // http2.StreamError. This can happen only if the server's response // is malformed http2. if se, ok := err.(http2.StreamError); ok { t.mu.Lock() s := t.activeStreams[se.StreamID] t.mu.Unlock() if s != nil { // use error detail to provide better err message code := http2ErrConvTab[se.Code] msg := t.framer.fr.ErrorDetail().Error() t.closeStream(s, status.Error(code, msg), true, http2.ErrCodeProtocol, status.New(code, msg), nil, false) } continue } else { // Transport error. t.Close() return } } switch frame := frame.(type) { case *http2.MetaHeadersFrame: t.operateHeaders(frame) case *http2.DataFrame: t.handleData(frame) case *http2.RSTStreamFrame: t.handleRSTStream(frame) case *http2.SettingsFrame: t.handleSettings(frame, false) case *http2.PingFrame: t.handlePing(frame) case *http2.GoAwayFrame: t.handleGoAway(frame) case *http2.WindowUpdateFrame: t.handleWindowUpdate(frame) default: errorf("transport: http2Client.reader got unhandled frame type %v.", frame) } } } // keepalive running in a separate goroutune makes sure the connection is alive by sending pings. func (t *http2Client) keepalive() { p := &ping{data: [8]byte{}} timer := time.NewTimer(t.kp.Time) for { select { case <-timer.C: if atomic.CompareAndSwapUint32(&t.activity, 1, 0) { timer.Reset(t.kp.Time) continue } // Check if keepalive should go dormant. t.mu.Lock() if len(t.activeStreams) < 1 && !t.kp.PermitWithoutStream { // Make awakenKeepalive writable. <-t.awakenKeepalive t.mu.Unlock() select { case <-t.awakenKeepalive: // If the control gets here a ping has been sent // need to reset the timer with keepalive.Timeout. case <-t.ctx.Done(): return } } else { t.mu.Unlock() if channelz.IsOn() { atomic.AddInt64(&t.czData.kpCount, 1) } // Send ping. t.controlBuf.put(p) } // By the time control gets here a ping has been sent one way or the other. 
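// Rearm the timer with keepalive.Timeout: if no read activity is observed
// before it fires, the connection is considered dead and closed below.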
timer.Reset(t.kp.Timeout) select { case <-timer.C: if atomic.CompareAndSwapUint32(&t.activity, 1, 0) { timer.Reset(t.kp.Time) continue } t.Close() return case <-t.ctx.Done(): if !timer.Stop() { <-timer.C } return } case <-t.ctx.Done(): if !timer.Stop() { <-timer.C } return } } } func (t *http2Client) Error() <-chan struct{} { return t.ctx.Done() } func (t *http2Client) GoAway() <-chan struct{} { return t.goAway } func (t *http2Client) ChannelzMetric() *channelz.SocketInternalMetric { s := channelz.SocketInternalMetric{ StreamsStarted: atomic.LoadInt64(&t.czData.streamsStarted), StreamsSucceeded: atomic.LoadInt64(&t.czData.streamsSucceeded), StreamsFailed: atomic.LoadInt64(&t.czData.streamsFailed), MessagesSent: atomic.LoadInt64(&t.czData.msgSent), MessagesReceived: atomic.LoadInt64(&t.czData.msgRecv), KeepAlivesSent: atomic.LoadInt64(&t.czData.kpCount), LastLocalStreamCreatedTimestamp: time.Unix(0, atomic.LoadInt64(&t.czData.lastStreamCreatedTime)), LastMessageSentTimestamp: time.Unix(0, atomic.LoadInt64(&t.czData.lastMsgSentTime)), LastMessageReceivedTimestamp: time.Unix(0, atomic.LoadInt64(&t.czData.lastMsgRecvTime)), LocalFlowControlWindow: int64(t.fc.getSize()), SocketOptions: channelz.GetSocketOption(t.conn), LocalAddr: t.localAddr, RemoteAddr: t.remoteAddr, // RemoteName : } if au, ok := t.authInfo.(credentials.ChannelzSecurityInfo); ok { s.Security = au.GetSecurityValue() } s.RemoteFlowControlWindow = t.getOutFlowWindow() return &s } func (t *http2Client) RemoteAddr() net.Addr { return t.remoteAddr } func (t *http2Client) IncrMsgSent() { atomic.AddInt64(&t.czData.msgSent, 1) atomic.StoreInt64(&t.czData.lastMsgSentTime, time.Now().UnixNano()) } func (t *http2Client) IncrMsgRecv() { atomic.AddInt64(&t.czData.msgRecv, 1) atomic.StoreInt64(&t.czData.lastMsgRecvTime, time.Now().UnixNano()) } func (t *http2Client) getOutFlowWindow() int64 { resp := make(chan uint32, 1) timer := time.NewTimer(time.Second) defer timer.Stop() t.controlBuf.put(&outFlowControlSizeRequest{resp}) select { case sz := <-resp: return int64(sz) case <-t.ctxDone: return -1 case <-timer.C: return -2 } } grpc-go-1.22.1/internal/transport/http2_server.go000066400000000000000000001106501351635773100217770ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package transport import ( "bytes" "context" "errors" "fmt" "io" "math" "net" "strconv" "sync" "sync/atomic" "time" "github.com/golang/protobuf/proto" "golang.org/x/net/http2" "golang.org/x/net/http2/hpack" spb "google.golang.org/genproto/googleapis/rpc/status" "google.golang.org/grpc/codes" "google.golang.org/grpc/credentials" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/internal" "google.golang.org/grpc/internal/channelz" "google.golang.org/grpc/internal/grpcrand" "google.golang.org/grpc/keepalive" "google.golang.org/grpc/metadata" "google.golang.org/grpc/peer" "google.golang.org/grpc/stats" "google.golang.org/grpc/status" "google.golang.org/grpc/tap" ) var ( // ErrIllegalHeaderWrite indicates that setting header is illegal because of // the stream's state. ErrIllegalHeaderWrite = errors.New("transport: the stream is done or WriteHeader was already called") // ErrHeaderListSizeLimitViolation indicates that the header list size is larger // than the limit set by peer. ErrHeaderListSizeLimitViolation = errors.New("transport: trying to send header list size larger than the limit set by peer") // statusRawProto is a function to get to the raw status proto wrapped in a // status.Status without a proto.Clone(). statusRawProto = internal.StatusRawProto.(func(*status.Status) *spb.Status) ) // http2Server implements the ServerTransport interface with HTTP2. type http2Server struct { ctx context.Context ctxDone <-chan struct{} // Cache the context.Done() chan cancel context.CancelFunc conn net.Conn loopy *loopyWriter readerDone chan struct{} // sync point to enable testing. writerDone chan struct{} // sync point to enable testing. remoteAddr net.Addr localAddr net.Addr maxStreamID uint32 // max stream ID ever seen authInfo credentials.AuthInfo // auth info about the connection inTapHandle tap.ServerInHandle framer *framer // The max number of concurrent streams. maxStreams uint32 // controlBuf delivers all the control related tasks (e.g., window // updates, reset streams, and various settings) to the controller. controlBuf *controlBuffer fc *trInFlow stats stats.Handler // Flag to keep track of reading activity on transport. // 1 is true and 0 is false. activity uint32 // Accessed atomically. // Keepalive and max-age parameters for the server. kp keepalive.ServerParameters // Keepalive enforcement policy. kep keepalive.EnforcementPolicy // The time instance last ping was received. lastPingAt time.Time // Number of times the client has violated keepalive ping policy so far. pingStrikes uint8 // Flag to signify that number of ping strikes should be reset to 0. // This is set whenever data or header frames are sent. // 1 means yes. resetPingStrikes uint32 // Accessed atomically. initialWindowSize int32 bdpEst *bdpEstimator maxSendHeaderListSize *uint32 mu sync.Mutex // guard the following // drainChan is initialized when drain(...) is called the first time. // After which the server writes out the first GoAway(with ID 2^31-1) frame. // Then an independent goroutine will be launched to later send the second GoAway. // During this time we don't want to write another first GoAway(with ID 2^31 -1) frame. // Thus call to drain(...) will be a no-op if drainChan is already initialized since draining is // already underway. drainChan chan struct{} state transportState activeStreams map[uint32]*Stream // idle is the time instant when the connection went idle. // This is either the beginning of the connection or when the number of // RPCs go down to 0. 
// When the connection is busy, this value is set to 0. idle time.Time // Fields below are for channelz metric collection. channelzID int64 // channelz unique identification number czData *channelzData bufferPool *bufferPool } // newHTTP2Server constructs a ServerTransport based on HTTP2. ConnectionError is // returned if something goes wrong. func newHTTP2Server(conn net.Conn, config *ServerConfig) (_ ServerTransport, err error) { writeBufSize := config.WriteBufferSize readBufSize := config.ReadBufferSize maxHeaderListSize := defaultServerMaxHeaderListSize if config.MaxHeaderListSize != nil { maxHeaderListSize = *config.MaxHeaderListSize } framer := newFramer(conn, writeBufSize, readBufSize, maxHeaderListSize) // Send initial settings as connection preface to client. var isettings []http2.Setting // TODO(zhaoq): Have a better way to signal "no limit" because 0 is // permitted in the HTTP2 spec. maxStreams := config.MaxStreams if maxStreams == 0 { maxStreams = math.MaxUint32 } else { isettings = append(isettings, http2.Setting{ ID: http2.SettingMaxConcurrentStreams, Val: maxStreams, }) } dynamicWindow := true iwz := int32(initialWindowSize) if config.InitialWindowSize >= defaultWindowSize { iwz = config.InitialWindowSize dynamicWindow = false } icwz := int32(initialWindowSize) if config.InitialConnWindowSize >= defaultWindowSize { icwz = config.InitialConnWindowSize dynamicWindow = false } if iwz != defaultWindowSize { isettings = append(isettings, http2.Setting{ ID: http2.SettingInitialWindowSize, Val: uint32(iwz)}) } if config.MaxHeaderListSize != nil { isettings = append(isettings, http2.Setting{ ID: http2.SettingMaxHeaderListSize, Val: *config.MaxHeaderListSize, }) } if err := framer.fr.WriteSettings(isettings...); err != nil { return nil, connectionErrorf(false, err, "transport: %v", err) } // Adjust the connection flow control window if needed. if delta := uint32(icwz - defaultWindowSize); delta > 0 { if err := framer.fr.WriteWindowUpdate(0, delta); err != nil { return nil, connectionErrorf(false, err, "transport: %v", err) } } kp := config.KeepaliveParams if kp.MaxConnectionIdle == 0 { kp.MaxConnectionIdle = defaultMaxConnectionIdle } if kp.MaxConnectionAge == 0 { kp.MaxConnectionAge = defaultMaxConnectionAge } // Add a jitter to MaxConnectionAge. 
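// getJitter returns a value in roughly [-10%, +10%] of MaxConnectionAge so
// that connections accepted at the same time do not all reach max age (and
// get drained) at the same instant.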
kp.MaxConnectionAge += getJitter(kp.MaxConnectionAge) if kp.MaxConnectionAgeGrace == 0 { kp.MaxConnectionAgeGrace = defaultMaxConnectionAgeGrace } if kp.Time == 0 { kp.Time = defaultServerKeepaliveTime } if kp.Timeout == 0 { kp.Timeout = defaultServerKeepaliveTimeout } kep := config.KeepalivePolicy if kep.MinTime == 0 { kep.MinTime = defaultKeepalivePolicyMinTime } ctx, cancel := context.WithCancel(context.Background()) t := &http2Server{ ctx: ctx, cancel: cancel, ctxDone: ctx.Done(), conn: conn, remoteAddr: conn.RemoteAddr(), localAddr: conn.LocalAddr(), authInfo: config.AuthInfo, framer: framer, readerDone: make(chan struct{}), writerDone: make(chan struct{}), maxStreams: maxStreams, inTapHandle: config.InTapHandle, fc: &trInFlow{limit: uint32(icwz)}, state: reachable, activeStreams: make(map[uint32]*Stream), stats: config.StatsHandler, kp: kp, idle: time.Now(), kep: kep, initialWindowSize: iwz, czData: new(channelzData), bufferPool: newBufferPool(), } t.controlBuf = newControlBuffer(t.ctxDone) if dynamicWindow { t.bdpEst = &bdpEstimator{ bdp: initialWindowSize, updateFlowControl: t.updateFlowControl, } } if t.stats != nil { t.ctx = t.stats.TagConn(t.ctx, &stats.ConnTagInfo{ RemoteAddr: t.remoteAddr, LocalAddr: t.localAddr, }) connBegin := &stats.ConnBegin{} t.stats.HandleConn(t.ctx, connBegin) } if channelz.IsOn() { t.channelzID = channelz.RegisterNormalSocket(t, config.ChannelzParentID, fmt.Sprintf("%s -> %s", t.remoteAddr, t.localAddr)) } t.framer.writer.Flush() defer func() { if err != nil { t.Close() } }() // Check the validity of client preface. preface := make([]byte, len(clientPreface)) if _, err := io.ReadFull(t.conn, preface); err != nil { return nil, connectionErrorf(false, err, "transport: http2Server.HandleStreams failed to receive the preface from client: %v", err) } if !bytes.Equal(preface, clientPreface) { return nil, connectionErrorf(false, nil, "transport: http2Server.HandleStreams received bogus greeting from client: %q", preface) } frame, err := t.framer.fr.ReadFrame() if err == io.EOF || err == io.ErrUnexpectedEOF { return nil, err } if err != nil { return nil, connectionErrorf(false, err, "transport: http2Server.HandleStreams failed to read initial settings frame: %v", err) } atomic.StoreUint32(&t.activity, 1) sf, ok := frame.(*http2.SettingsFrame) if !ok { return nil, connectionErrorf(false, nil, "transport: http2Server.HandleStreams saw invalid preface type %T from client", frame) } t.handleSettings(sf) go func() { t.loopy = newLoopyWriter(serverSide, t.framer, t.controlBuf, t.bdpEst) t.loopy.ssGoAwayHandler = t.outgoingGoAwayHandler if err := t.loopy.run(); err != nil { errorf("transport: loopyWriter.run returning. Err: %v", err) } t.conn.Close() close(t.writerDone) }() go t.keepalive() return t, nil } // operateHeader takes action on the decoded headers. 
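// It returns true only for connection-fatal errors (an illegal stream ID), in
// which case HandleStreams closes the whole transport; per-stream problems are
// handled by resetting just the offending stream.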
func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func(*Stream), traceCtx func(context.Context, string) context.Context) (fatal bool) { streamID := frame.Header().StreamID state := &decodeState{ serverSide: true, } if err := state.decodeHeader(frame); err != nil { if se, ok := status.FromError(err); ok { t.controlBuf.put(&cleanupStream{ streamID: streamID, rst: true, rstCode: statusCodeConvTab[se.Code()], onWrite: func() {}, }) } return false } buf := newRecvBuffer() s := &Stream{ id: streamID, st: t, buf: buf, fc: &inFlow{limit: uint32(t.initialWindowSize)}, recvCompress: state.data.encoding, method: state.data.method, contentSubtype: state.data.contentSubtype, } if frame.StreamEnded() { // s is just created by the caller. No lock needed. s.state = streamReadDone } if state.data.timeoutSet { s.ctx, s.cancel = context.WithTimeout(t.ctx, state.data.timeout) } else { s.ctx, s.cancel = context.WithCancel(t.ctx) } pr := &peer.Peer{ Addr: t.remoteAddr, } // Attach Auth info if there is any. if t.authInfo != nil { pr.AuthInfo = t.authInfo } s.ctx = peer.NewContext(s.ctx, pr) // Attach the received metadata to the context. if len(state.data.mdata) > 0 { s.ctx = metadata.NewIncomingContext(s.ctx, state.data.mdata) } if state.data.statsTags != nil { s.ctx = stats.SetIncomingTags(s.ctx, state.data.statsTags) } if state.data.statsTrace != nil { s.ctx = stats.SetIncomingTrace(s.ctx, state.data.statsTrace) } if t.inTapHandle != nil { var err error info := &tap.Info{ FullMethodName: state.data.method, } s.ctx, err = t.inTapHandle(s.ctx, info) if err != nil { warningf("transport: http2Server.operateHeaders got an error from InTapHandle: %v", err) t.controlBuf.put(&cleanupStream{ streamID: s.id, rst: true, rstCode: http2.ErrCodeRefusedStream, onWrite: func() {}, }) return false } } t.mu.Lock() if t.state != reachable { t.mu.Unlock() return false } if uint32(len(t.activeStreams)) >= t.maxStreams { t.mu.Unlock() t.controlBuf.put(&cleanupStream{ streamID: streamID, rst: true, rstCode: http2.ErrCodeRefusedStream, onWrite: func() {}, }) return false } if streamID%2 != 1 || streamID <= t.maxStreamID { t.mu.Unlock() // illegal gRPC stream id. errorf("transport: http2Server.HandleStreams received an illegal stream id: %v", streamID) return true } t.maxStreamID = streamID t.activeStreams[streamID] = s if len(t.activeStreams) == 1 { t.idle = time.Time{} } t.mu.Unlock() if channelz.IsOn() { atomic.AddInt64(&t.czData.streamsStarted, 1) atomic.StoreInt64(&t.czData.lastStreamCreatedTime, time.Now().UnixNano()) } s.requestRead = func(n int) { t.adjustWindow(s, uint32(n)) } s.ctx = traceCtx(s.ctx, s.method) if t.stats != nil { s.ctx = t.stats.TagRPC(s.ctx, &stats.RPCTagInfo{FullMethodName: s.method}) inHeader := &stats.InHeader{ FullMethod: s.method, RemoteAddr: t.remoteAddr, LocalAddr: t.localAddr, Compression: s.recvCompress, WireLength: int(frame.Header().Length), } t.stats.HandleRPC(s.ctx, inHeader) } s.ctxDone = s.ctx.Done() s.wq = newWriteQuota(defaultWriteQuota, s.ctxDone) s.trReader = &transportReader{ reader: &recvBufferReader{ ctx: s.ctx, ctxDone: s.ctxDone, recv: s.buf, freeBuffer: t.bufferPool.put, }, windowHandler: func(n int) { t.updateWindow(s, uint32(n)) }, } // Register the stream with loopy. t.controlBuf.put(®isterStream{ streamID: s.id, wq: s.wq, }) handle(s) return false } // HandleStreams receives incoming streams using the given handler. This is // typically run in a separate goroutine. // traceCtx attaches trace to ctx and returns the new context. 
func (t *http2Server) HandleStreams(handle func(*Stream), traceCtx func(context.Context, string) context.Context) { defer close(t.readerDone) for { frame, err := t.framer.fr.ReadFrame() atomic.StoreUint32(&t.activity, 1) if err != nil { if se, ok := err.(http2.StreamError); ok { warningf("transport: http2Server.HandleStreams encountered http2.StreamError: %v", se) t.mu.Lock() s := t.activeStreams[se.StreamID] t.mu.Unlock() if s != nil { t.closeStream(s, true, se.Code, false) } else { t.controlBuf.put(&cleanupStream{ streamID: se.StreamID, rst: true, rstCode: se.Code, onWrite: func() {}, }) } continue } if err == io.EOF || err == io.ErrUnexpectedEOF { t.Close() return } warningf("transport: http2Server.HandleStreams failed to read frame: %v", err) t.Close() return } switch frame := frame.(type) { case *http2.MetaHeadersFrame: if t.operateHeaders(frame, handle, traceCtx) { t.Close() break } case *http2.DataFrame: t.handleData(frame) case *http2.RSTStreamFrame: t.handleRSTStream(frame) case *http2.SettingsFrame: t.handleSettings(frame) case *http2.PingFrame: t.handlePing(frame) case *http2.WindowUpdateFrame: t.handleWindowUpdate(frame) case *http2.GoAwayFrame: // TODO: Handle GoAway from the client appropriately. default: errorf("transport: http2Server.HandleStreams found unhandled frame type %v.", frame) } } } func (t *http2Server) getStream(f http2.Frame) (*Stream, bool) { t.mu.Lock() defer t.mu.Unlock() if t.activeStreams == nil { // The transport is closing. return nil, false } s, ok := t.activeStreams[f.Header().StreamID] if !ok { // The stream is already done. return nil, false } return s, true } // adjustWindow sends out extra window update over the initial window size // of stream if the application is requesting data larger in size than // the window. func (t *http2Server) adjustWindow(s *Stream, n uint32) { if w := s.fc.maybeAdjust(n); w > 0 { t.controlBuf.put(&outgoingWindowUpdate{streamID: s.id, increment: w}) } } // updateWindow adjusts the inbound quota for the stream and the transport. // Window updates will deliver to the controller for sending when // the cumulative quota exceeds the corresponding threshold. func (t *http2Server) updateWindow(s *Stream, n uint32) { if w := s.fc.onRead(n); w > 0 { t.controlBuf.put(&outgoingWindowUpdate{streamID: s.id, increment: w, }) } } // updateFlowControl updates the incoming flow control windows // for the transport and the stream based on the current bdp // estimation. func (t *http2Server) updateFlowControl(n uint32) { t.mu.Lock() for _, s := range t.activeStreams { s.fc.newLimit(n) } t.initialWindowSize = int32(n) t.mu.Unlock() t.controlBuf.put(&outgoingWindowUpdate{ streamID: 0, increment: t.fc.newLimit(n), }) t.controlBuf.put(&outgoingSettings{ ss: []http2.Setting{ { ID: http2.SettingInitialWindowSize, Val: n, }, }, }) } func (t *http2Server) handleData(f *http2.DataFrame) { size := f.Header().Length var sendBDPPing bool if t.bdpEst != nil { sendBDPPing = t.bdpEst.add(size) } // Decouple connection's flow control from application's read. // An update on connection's flow control should not depend on // whether user application has read the data or not. Such a // restriction is already imposed on the stream's flow control, // and therefore the sender will be blocked anyways. // Decoupling the connection flow control will prevent other // active(fast) streams from starving in presence of slow or // inactive streams. 
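// The connection-level window is therefore replenished here based purely on
// bytes received, even if the application never reads the stream's data.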
if w := t.fc.onData(size); w > 0 { t.controlBuf.put(&outgoingWindowUpdate{ streamID: 0, increment: w, }) } if sendBDPPing { // Avoid excessive ping detection (e.g. in an L7 proxy) // by sending a window update prior to the BDP ping. if w := t.fc.reset(); w > 0 { t.controlBuf.put(&outgoingWindowUpdate{ streamID: 0, increment: w, }) } t.controlBuf.put(bdpPing) } // Select the right stream to dispatch. s, ok := t.getStream(f) if !ok { return } if size > 0 { if err := s.fc.onData(size); err != nil { t.closeStream(s, true, http2.ErrCodeFlowControl, false) return } if f.Header().Flags.Has(http2.FlagDataPadded) { if w := s.fc.onRead(size - uint32(len(f.Data()))); w > 0 { t.controlBuf.put(&outgoingWindowUpdate{s.id, w}) } } // TODO(bradfitz, zhaoq): A copy is required here because there is no // guarantee f.Data() is consumed before the arrival of next frame. // Can this copy be eliminated? if len(f.Data()) > 0 { buffer := t.bufferPool.get() buffer.Reset() buffer.Write(f.Data()) s.write(recvMsg{buffer: buffer}) } } if f.Header().Flags.Has(http2.FlagDataEndStream) { // Received the end of stream from the client. s.compareAndSwapState(streamActive, streamReadDone) s.write(recvMsg{err: io.EOF}) } } func (t *http2Server) handleRSTStream(f *http2.RSTStreamFrame) { // If the stream is not deleted from the transport's active streams map, then do a regular close stream. if s, ok := t.getStream(f); ok { t.closeStream(s, false, 0, false) return } // If the stream is already deleted from the active streams map, then put a cleanupStream item into controlbuf to delete the stream from loopy writer's established streams map. t.controlBuf.put(&cleanupStream{ streamID: f.Header().StreamID, rst: false, rstCode: 0, onWrite: func() {}, }) } func (t *http2Server) handleSettings(f *http2.SettingsFrame) { if f.IsAck() { return } var ss []http2.Setting var updateFuncs []func() f.ForeachSetting(func(s http2.Setting) error { switch s.ID { case http2.SettingMaxHeaderListSize: updateFuncs = append(updateFuncs, func() { t.maxSendHeaderListSize = new(uint32) *t.maxSendHeaderListSize = s.Val }) default: ss = append(ss, s) } return nil }) t.controlBuf.executeAndPut(func(interface{}) bool { for _, f := range updateFuncs { f() } return true }, &incomingSettings{ ss: ss, }) } const ( maxPingStrikes = 2 defaultPingTimeout = 2 * time.Hour ) func (t *http2Server) handlePing(f *http2.PingFrame) { if f.IsAck() { if f.Data == goAwayPing.data && t.drainChan != nil { close(t.drainChan) return } // Maybe it's a BDP ping. if t.bdpEst != nil { t.bdpEst.calculate(f.Data) } return } pingAck := &ping{ack: true} copy(pingAck.data[:], f.Data[:]) t.controlBuf.put(pingAck) now := time.Now() defer func() { t.lastPingAt = now }() // A reset ping strikes means that we don't need to check for policy // violation for this ping and the pingStrikes counter should be set // to 0. if atomic.CompareAndSwapUint32(&t.resetPingStrikes, 1, 0) { t.pingStrikes = 0 return } t.mu.Lock() ns := len(t.activeStreams) t.mu.Unlock() if ns < 1 && !t.kep.PermitWithoutStream { // Keepalive shouldn't be active thus, this new ping should // have come after at least defaultPingTimeout. if t.lastPingAt.Add(defaultPingTimeout).After(now) { t.pingStrikes++ } } else { // Check if keepalive policy is respected. if t.lastPingAt.Add(t.kep.MinTime).After(now) { t.pingStrikes++ } } if t.pingStrikes > maxPingStrikes { // Send goaway and close the connection. 
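// The "too_many_pings" debug data is significant: the client maps an
// ENHANCE_YOUR_CALM GoAway carrying it to GoAwayTooManyPings (see
// setGoAwayReason in http2_client.go) and is expected to back off its
// keepalive pings.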
errorf("transport: Got too many pings from the client, closing the connection.") t.controlBuf.put(&goAway{code: http2.ErrCodeEnhanceYourCalm, debugData: []byte("too_many_pings"), closeConn: true}) } } func (t *http2Server) handleWindowUpdate(f *http2.WindowUpdateFrame) { t.controlBuf.put(&incomingWindowUpdate{ streamID: f.Header().StreamID, increment: f.Increment, }) } func appendHeaderFieldsFromMD(headerFields []hpack.HeaderField, md metadata.MD) []hpack.HeaderField { for k, vv := range md { if isReservedHeader(k) { // Clients don't tolerate reading restricted headers after some non restricted ones were sent. continue } for _, v := range vv { headerFields = append(headerFields, hpack.HeaderField{Name: k, Value: encodeMetadataHeader(k, v)}) } } return headerFields } func (t *http2Server) checkForHeaderListSize(it interface{}) bool { if t.maxSendHeaderListSize == nil { return true } hdrFrame := it.(*headerFrame) var sz int64 for _, f := range hdrFrame.hf { if sz += int64(f.Size()); sz > int64(*t.maxSendHeaderListSize) { errorf("header list size to send violates the maximum size (%d bytes) set by client", *t.maxSendHeaderListSize) return false } } return true } // WriteHeader sends the header metedata md back to the client. func (t *http2Server) WriteHeader(s *Stream, md metadata.MD) error { if s.updateHeaderSent() || s.getState() == streamDone { return ErrIllegalHeaderWrite } s.hdrMu.Lock() if md.Len() > 0 { if s.header.Len() > 0 { s.header = metadata.Join(s.header, md) } else { s.header = md } } if err := t.writeHeaderLocked(s); err != nil { s.hdrMu.Unlock() return err } s.hdrMu.Unlock() return nil } func (t *http2Server) writeHeaderLocked(s *Stream) error { // TODO(mmukhi): Benchmark if the performance gets better if count the metadata and other header fields // first and create a slice of that exact size. headerFields := make([]hpack.HeaderField, 0, 2) // at least :status, content-type will be there if none else. headerFields = append(headerFields, hpack.HeaderField{Name: ":status", Value: "200"}) headerFields = append(headerFields, hpack.HeaderField{Name: "content-type", Value: contentType(s.contentSubtype)}) if s.sendCompress != "" { headerFields = append(headerFields, hpack.HeaderField{Name: "grpc-encoding", Value: s.sendCompress}) } headerFields = appendHeaderFieldsFromMD(headerFields, s.header) success, err := t.controlBuf.executeAndPut(t.checkForHeaderListSize, &headerFrame{ streamID: s.id, hf: headerFields, endStream: false, onWrite: func() { atomic.StoreUint32(&t.resetPingStrikes, 1) }, }) if !success { if err != nil { return err } t.closeStream(s, true, http2.ErrCodeInternal, false) return ErrHeaderListSizeLimitViolation } if t.stats != nil { // Note: WireLength is not set in outHeader. // TODO(mmukhi): Revisit this later, if needed. outHeader := &stats.OutHeader{} t.stats.HandleRPC(s.Context(), outHeader) } return nil } // WriteStatus sends stream status to the client and terminates the stream. // There is no further I/O operations being able to perform on this stream. // TODO(zhaoq): Now it indicates the end of entire stream. Revisit if early // OK is adopted. func (t *http2Server) WriteStatus(s *Stream, st *status.Status) error { if s.getState() == streamDone { return nil } s.hdrMu.Lock() // TODO(mmukhi): Benchmark if the performance gets better if count the metadata and other header fields // first and create a slice of that exact size. headerFields := make([]hpack.HeaderField, 0, 2) // grpc-status and grpc-message will be there if none else. 
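// If no headers have been sent and there is no header metadata pending, the
// RPC finishes as a Trailers-Only response: a single HEADERS frame with
// END_STREAM carrying :status, content-type and the trailers built below.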
if !s.updateHeaderSent() { // No headers have been sent. if len(s.header) > 0 { // Send a separate header frame. if err := t.writeHeaderLocked(s); err != nil { s.hdrMu.Unlock() return err } } else { // Send a trailer only response. headerFields = append(headerFields, hpack.HeaderField{Name: ":status", Value: "200"}) headerFields = append(headerFields, hpack.HeaderField{Name: "content-type", Value: contentType(s.contentSubtype)}) } } headerFields = append(headerFields, hpack.HeaderField{Name: "grpc-status", Value: strconv.Itoa(int(st.Code()))}) headerFields = append(headerFields, hpack.HeaderField{Name: "grpc-message", Value: encodeGrpcMessage(st.Message())}) if p := statusRawProto(st); p != nil && len(p.Details) > 0 { stBytes, err := proto.Marshal(p) if err != nil { // TODO: return error instead, when callers are able to handle it. grpclog.Errorf("transport: failed to marshal rpc status: %v, error: %v", p, err) } else { headerFields = append(headerFields, hpack.HeaderField{Name: "grpc-status-details-bin", Value: encodeBinHeader(stBytes)}) } } // Attach the trailer metadata. headerFields = appendHeaderFieldsFromMD(headerFields, s.trailer) trailingHeader := &headerFrame{ streamID: s.id, hf: headerFields, endStream: true, onWrite: func() { atomic.StoreUint32(&t.resetPingStrikes, 1) }, } s.hdrMu.Unlock() success, err := t.controlBuf.execute(t.checkForHeaderListSize, trailingHeader) if !success { if err != nil { return err } t.closeStream(s, true, http2.ErrCodeInternal, false) return ErrHeaderListSizeLimitViolation } // Send a RST_STREAM after the trailers if the client has not already half-closed. rst := s.getState() == streamActive t.finishStream(s, rst, http2.ErrCodeNo, trailingHeader, true) if t.stats != nil { t.stats.HandleRPC(s.Context(), &stats.OutTrailer{}) } return nil } // Write converts the data into HTTP2 data frame and sends it out. Non-nil error // is returns if it fails (e.g., framing error, transport error). func (t *http2Server) Write(s *Stream, hdr []byte, data []byte, opts *Options) error { if !s.isHeaderSent() { // Headers haven't been written yet. if err := t.WriteHeader(s, nil); err != nil { if _, ok := err.(ConnectionError); ok { return err } // TODO(mmukhi, dfawley): Make sure this is the right code to return. return status.Errorf(codes.Internal, "transport: %v", err) } } else { // Writing headers checks for this condition. if s.getState() == streamDone { // TODO(mmukhi, dfawley): Should the server write also return io.EOF? s.cancel() select { case <-t.ctx.Done(): return ErrConnClosing default: } return ContextErr(s.ctx.Err()) } } // Add some data to header frame so that we can equally distribute bytes across frames. emptyLen := http2MaxFrameLen - len(hdr) if emptyLen > len(data) { emptyLen = len(data) } hdr = append(hdr, data[:emptyLen]...) data = data[emptyLen:] df := &dataFrame{ streamID: s.id, h: hdr, d: data, onEachWrite: func() { atomic.StoreUint32(&t.resetPingStrikes, 1) }, } if err := s.wq.get(int32(len(hdr) + len(data))); err != nil { select { case <-t.ctx.Done(): return ErrConnClosing default: } return ContextErr(s.ctx.Err()) } return t.controlBuf.put(df) } // keepalive running in a separate goroutine does the following: // 1. Gracefully closes an idle connection after a duration of keepalive.MaxConnectionIdle. // 2. Gracefully closes any connection after a duration of keepalive.MaxConnectionAge. // 3. Forcibly closes a connection after an additive period of keepalive.MaxConnectionAgeGrace over keepalive.MaxConnectionAge. // 4. 
Makes sure a connection is alive by sending pings with a frequency of keepalive.Time and closes a non-responsive connection // after an additional duration of keepalive.Timeout. func (t *http2Server) keepalive() { p := &ping{} var pingSent bool maxIdle := time.NewTimer(t.kp.MaxConnectionIdle) maxAge := time.NewTimer(t.kp.MaxConnectionAge) keepalive := time.NewTimer(t.kp.Time) // NOTE: All exit paths of this function should reset their // respective timers. A failure to do so will cause the // following clean-up to deadlock and eventually leak. defer func() { if !maxIdle.Stop() { <-maxIdle.C } if !maxAge.Stop() { <-maxAge.C } if !keepalive.Stop() { <-keepalive.C } }() for { select { case <-maxIdle.C: t.mu.Lock() idle := t.idle if idle.IsZero() { // The connection is non-idle. t.mu.Unlock() maxIdle.Reset(t.kp.MaxConnectionIdle) continue } val := t.kp.MaxConnectionIdle - time.Since(idle) t.mu.Unlock() if val <= 0 { // The connection has been idle for a duration of keepalive.MaxConnectionIdle or more. // Gracefully close the connection. t.drain(http2.ErrCodeNo, []byte{}) // Resetting the timer so that the clean-up doesn't deadlock. maxIdle.Reset(infinity) return } maxIdle.Reset(val) case <-maxAge.C: t.drain(http2.ErrCodeNo, []byte{}) maxAge.Reset(t.kp.MaxConnectionAgeGrace) select { case <-maxAge.C: // Close the connection after grace period. t.Close() // Resetting the timer so that the clean-up doesn't deadlock. maxAge.Reset(infinity) case <-t.ctx.Done(): } return case <-keepalive.C: if atomic.CompareAndSwapUint32(&t.activity, 1, 0) { pingSent = false keepalive.Reset(t.kp.Time) continue } if pingSent { t.Close() // Resetting the timer so that the clean-up doesn't deadlock. keepalive.Reset(infinity) return } pingSent = true if channelz.IsOn() { atomic.AddInt64(&t.czData.kpCount, 1) } t.controlBuf.put(p) keepalive.Reset(t.kp.Timeout) case <-t.ctx.Done(): return } } } // Close starts shutting down the http2Server transport. // TODO(zhaoq): Now the destruction is not blocked on any pending streams. This // could cause some resource issue. Revisit this later. func (t *http2Server) Close() error { t.mu.Lock() if t.state == closing { t.mu.Unlock() return errors.New("transport: Close() was already called") } t.state = closing streams := t.activeStreams t.activeStreams = nil t.mu.Unlock() t.controlBuf.finish() t.cancel() err := t.conn.Close() if channelz.IsOn() { channelz.RemoveEntry(t.channelzID) } // Cancel all active streams. for _, s := range streams { s.cancel() } if t.stats != nil { connEnd := &stats.ConnEnd{} t.stats.HandleConn(t.ctx, connEnd) } return err } // deleteStream deletes the stream s from transport's active streams. func (t *http2Server) deleteStream(s *Stream, eosReceived bool) { // In case stream sending and receiving are invoked in separate // goroutines (e.g., bi-directional streaming), cancel needs to be // called to interrupt the potential blocking on other goroutines. s.cancel() t.mu.Lock() if _, ok := t.activeStreams[s.id]; ok { delete(t.activeStreams, s.id) if len(t.activeStreams) == 0 { t.idle = time.Now() } } t.mu.Unlock() if channelz.IsOn() { if eosReceived { atomic.AddInt64(&t.czData.streamsSucceeded, 1) } else { atomic.AddInt64(&t.czData.streamsFailed, 1) } } } // finishStream closes the stream and puts the trailing headerFrame into controlbuf. func (t *http2Server) finishStream(s *Stream, rst bool, rstCode http2.ErrCode, hdr *headerFrame, eosReceived bool) { oldState := s.swapState(streamDone) if oldState == streamDone { // If the stream was already done, return. 
return } hdr.cleanup = &cleanupStream{ streamID: s.id, rst: rst, rstCode: rstCode, onWrite: func() { t.deleteStream(s, eosReceived) }, } t.controlBuf.put(hdr) } // closeStream clears the footprint of a stream when the stream is not needed any more. func (t *http2Server) closeStream(s *Stream, rst bool, rstCode http2.ErrCode, eosReceived bool) { s.swapState(streamDone) t.deleteStream(s, eosReceived) t.controlBuf.put(&cleanupStream{ streamID: s.id, rst: rst, rstCode: rstCode, onWrite: func() {}, }) } func (t *http2Server) RemoteAddr() net.Addr { return t.remoteAddr } func (t *http2Server) Drain() { t.drain(http2.ErrCodeNo, []byte{}) } func (t *http2Server) drain(code http2.ErrCode, debugData []byte) { t.mu.Lock() defer t.mu.Unlock() if t.drainChan != nil { return } t.drainChan = make(chan struct{}) t.controlBuf.put(&goAway{code: code, debugData: debugData, headsUp: true}) } var goAwayPing = &ping{data: [8]byte{1, 6, 1, 8, 0, 3, 3, 9}} // Handles outgoing GoAway and returns true if loopy needs to put itself // in draining mode. func (t *http2Server) outgoingGoAwayHandler(g *goAway) (bool, error) { t.mu.Lock() if t.state == closing { // TODO(mmukhi): This seems unnecessary. t.mu.Unlock() // The transport is closing. return false, ErrConnClosing } sid := t.maxStreamID if !g.headsUp { // Stop accepting more streams now. t.state = draining if len(t.activeStreams) == 0 { g.closeConn = true } t.mu.Unlock() if err := t.framer.fr.WriteGoAway(sid, g.code, g.debugData); err != nil { return false, err } if g.closeConn { // Abruptly close the connection following the GoAway (via // loopywriter). But flush out what's inside the buffer first. t.framer.writer.Flush() return false, fmt.Errorf("transport: Connection closing") } return true, nil } t.mu.Unlock() // For a graceful close, send out a GoAway with stream ID of MaxUInt32, // Follow that with a ping and wait for the ack to come back or a timer // to expire. During this time accept new streams since they might have // originated before the GoAway reaches the client. // After getting the ack or timer expiration send out another GoAway this // time with an ID of the max stream server intends to process. 
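// Step one of the graceful close: advertise GoAway(MaxUint32) plus a ping; the
// ping ack (or the one-minute timer below) triggers the second, final GoAway
// via drainChan.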
if err := t.framer.fr.WriteGoAway(math.MaxUint32, http2.ErrCodeNo, []byte{}); err != nil { return false, err } if err := t.framer.fr.WritePing(false, goAwayPing.data); err != nil { return false, err } go func() { timer := time.NewTimer(time.Minute) defer timer.Stop() select { case <-t.drainChan: case <-timer.C: case <-t.ctx.Done(): return } t.controlBuf.put(&goAway{code: g.code, debugData: g.debugData}) }() return false, nil } func (t *http2Server) ChannelzMetric() *channelz.SocketInternalMetric { s := channelz.SocketInternalMetric{ StreamsStarted: atomic.LoadInt64(&t.czData.streamsStarted), StreamsSucceeded: atomic.LoadInt64(&t.czData.streamsSucceeded), StreamsFailed: atomic.LoadInt64(&t.czData.streamsFailed), MessagesSent: atomic.LoadInt64(&t.czData.msgSent), MessagesReceived: atomic.LoadInt64(&t.czData.msgRecv), KeepAlivesSent: atomic.LoadInt64(&t.czData.kpCount), LastRemoteStreamCreatedTimestamp: time.Unix(0, atomic.LoadInt64(&t.czData.lastStreamCreatedTime)), LastMessageSentTimestamp: time.Unix(0, atomic.LoadInt64(&t.czData.lastMsgSentTime)), LastMessageReceivedTimestamp: time.Unix(0, atomic.LoadInt64(&t.czData.lastMsgRecvTime)), LocalFlowControlWindow: int64(t.fc.getSize()), SocketOptions: channelz.GetSocketOption(t.conn), LocalAddr: t.localAddr, RemoteAddr: t.remoteAddr, // RemoteName : } if au, ok := t.authInfo.(credentials.ChannelzSecurityInfo); ok { s.Security = au.GetSecurityValue() } s.RemoteFlowControlWindow = t.getOutFlowWindow() return &s } func (t *http2Server) IncrMsgSent() { atomic.AddInt64(&t.czData.msgSent, 1) atomic.StoreInt64(&t.czData.lastMsgSentTime, time.Now().UnixNano()) } func (t *http2Server) IncrMsgRecv() { atomic.AddInt64(&t.czData.msgRecv, 1) atomic.StoreInt64(&t.czData.lastMsgRecvTime, time.Now().UnixNano()) } func (t *http2Server) getOutFlowWindow() int64 { resp := make(chan uint32, 1) timer := time.NewTimer(time.Second) defer timer.Stop() t.controlBuf.put(&outFlowControlSizeRequest{resp}) select { case sz := <-resp: return int64(sz) case <-t.ctxDone: return -1 case <-timer.C: return -2 } } func getJitter(v time.Duration) time.Duration { if v == infinity { return 0 } // Generate a jitter between +/- 10% of the value. r := int64(v / 10) j := grpcrand.Int63n(2*r) - r return time.Duration(j) } grpc-go-1.22.1/internal/transport/http_util.go000066400000000000000000000461301351635773100213650ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package transport import ( "bufio" "bytes" "encoding/base64" "fmt" "io" "math" "net" "net/http" "strconv" "strings" "time" "unicode/utf8" "github.com/golang/protobuf/proto" "golang.org/x/net/http2" "golang.org/x/net/http2/hpack" spb "google.golang.org/genproto/googleapis/rpc/status" "google.golang.org/grpc/codes" "google.golang.org/grpc/status" ) const ( // http2MaxFrameLen specifies the max length of a HTTP2 frame. 
http2MaxFrameLen = 16384 // 16KB frame // http://http2.github.io/http2-spec/#SettingValues http2InitHeaderTableSize = 4096 // baseContentType is the base content-type for gRPC. This is a valid // content-type on it's own, but can also include a content-subtype such as // "proto" as a suffix after "+" or ";". See // https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md#requests // for more details. baseContentType = "application/grpc" ) var ( clientPreface = []byte(http2.ClientPreface) http2ErrConvTab = map[http2.ErrCode]codes.Code{ http2.ErrCodeNo: codes.Internal, http2.ErrCodeProtocol: codes.Internal, http2.ErrCodeInternal: codes.Internal, http2.ErrCodeFlowControl: codes.ResourceExhausted, http2.ErrCodeSettingsTimeout: codes.Internal, http2.ErrCodeStreamClosed: codes.Internal, http2.ErrCodeFrameSize: codes.Internal, http2.ErrCodeRefusedStream: codes.Unavailable, http2.ErrCodeCancel: codes.Canceled, http2.ErrCodeCompression: codes.Internal, http2.ErrCodeConnect: codes.Internal, http2.ErrCodeEnhanceYourCalm: codes.ResourceExhausted, http2.ErrCodeInadequateSecurity: codes.PermissionDenied, http2.ErrCodeHTTP11Required: codes.Internal, } statusCodeConvTab = map[codes.Code]http2.ErrCode{ codes.Internal: http2.ErrCodeInternal, codes.Canceled: http2.ErrCodeCancel, codes.Unavailable: http2.ErrCodeRefusedStream, codes.ResourceExhausted: http2.ErrCodeEnhanceYourCalm, codes.PermissionDenied: http2.ErrCodeInadequateSecurity, } // HTTPStatusConvTab is the HTTP status code to gRPC error code conversion table. HTTPStatusConvTab = map[int]codes.Code{ // 400 Bad Request - INTERNAL. http.StatusBadRequest: codes.Internal, // 401 Unauthorized - UNAUTHENTICATED. http.StatusUnauthorized: codes.Unauthenticated, // 403 Forbidden - PERMISSION_DENIED. http.StatusForbidden: codes.PermissionDenied, // 404 Not Found - UNIMPLEMENTED. http.StatusNotFound: codes.Unimplemented, // 429 Too Many Requests - UNAVAILABLE. http.StatusTooManyRequests: codes.Unavailable, // 502 Bad Gateway - UNAVAILABLE. http.StatusBadGateway: codes.Unavailable, // 503 Service Unavailable - UNAVAILABLE. http.StatusServiceUnavailable: codes.Unavailable, // 504 Gateway timeout - UNAVAILABLE. http.StatusGatewayTimeout: codes.Unavailable, } ) type parsedHeaderData struct { encoding string // statusGen caches the stream status received from the trailer the server // sent. Client side only. Do not access directly. After all trailers are // parsed, use the status method to retrieve the status. statusGen *status.Status // rawStatusCode and rawStatusMsg are set from the raw trailer fields and are not // intended for direct access outside of parsing. rawStatusCode *int rawStatusMsg string httpStatus *int // Server side only fields. timeoutSet bool timeout time.Duration method string // key-value metadata map from the peer. mdata map[string][]string statsTags []byte statsTrace []byte contentSubtype string // isGRPC field indicates whether the peer is speaking gRPC (otherwise HTTP). // // We are in gRPC mode (peer speaking gRPC) if: // * We are client side and have already received a HEADER frame that indicates gRPC peer. // * The header contains valid a content-type, i.e. a string starts with "application/grpc" // And we should handle error specific to gRPC. // // Otherwise (i.e. a content-type string starts without "application/grpc", or does not exist), we // are in HTTP fallback mode, and should handle error specific to HTTP. 
isGRPC bool grpcErr error httpErr error contentTypeErr string } // decodeState configures decoding criteria and records the decoded data. type decodeState struct { // whether decoding on server side or not serverSide bool // Records the states during HPACK decoding. It will be filled with info parsed from HTTP HEADERS // frame once decodeHeader function has been invoked and returned. data parsedHeaderData } // isReservedHeader checks whether hdr belongs to HTTP2 headers // reserved by gRPC protocol. Any other headers are classified as the // user-specified metadata. func isReservedHeader(hdr string) bool { if hdr != "" && hdr[0] == ':' { return true } switch hdr { case "content-type", "user-agent", "grpc-message-type", "grpc-encoding", "grpc-message", "grpc-status", "grpc-timeout", "grpc-status-details-bin", // Intentionally exclude grpc-previous-rpc-attempts and // grpc-retry-pushback-ms, which are "reserved", but their API // intentionally works via metadata. "te": return true default: return false } } // isWhitelistedHeader checks whether hdr should be propagated into metadata // visible to users, even though it is classified as "reserved", above. func isWhitelistedHeader(hdr string) bool { switch hdr { case ":authority", "user-agent": return true default: return false } } // contentSubtype returns the content-subtype for the given content-type. The // given content-type must be a valid content-type that starts with // "application/grpc". A content-subtype will follow "application/grpc" after a // "+" or ";". See // https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md#requests for // more details. // // If contentType is not a valid content-type for gRPC, the boolean // will be false, otherwise true. If content-type == "application/grpc", // "application/grpc+", or "application/grpc;", the boolean will be true, // but no content-subtype will be returned. // // contentType is assumed to be lowercase already. func contentSubtype(contentType string) (string, bool) { if contentType == baseContentType { return "", true } if !strings.HasPrefix(contentType, baseContentType) { return "", false } // guaranteed since != baseContentType and has baseContentType prefix switch contentType[len(baseContentType)] { case '+', ';': // this will return true for "application/grpc+" or "application/grpc;" // which the previous validContentType function tested to be valid, so we // just say that no content-subtype is specified in this case return contentType[len(baseContentType)+1:], true default: return "", false } } // contentSubtype is assumed to be lowercase func contentType(contentSubtype string) string { if contentSubtype == "" { return baseContentType } return baseContentType + "+" + contentSubtype } func (d *decodeState) status() *status.Status { if d.data.statusGen == nil { // No status-details were provided; generate status using code/msg. d.data.statusGen = status.New(codes.Code(int32(*(d.data.rawStatusCode))), d.data.rawStatusMsg) } return d.data.statusGen } const binHdrSuffix = "-bin" func encodeBinHeader(v []byte) string { return base64.RawStdEncoding.EncodeToString(v) } func decodeBinHeader(v string) ([]byte, error) { if len(v)%4 == 0 { // Input was padded, or padding was not necessary. 
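// A length that is a multiple of 4 is valid standard (padded) base64; anything
// else must have been encoded without padding and is handled by RawStdEncoding
// below.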
return base64.StdEncoding.DecodeString(v) } return base64.RawStdEncoding.DecodeString(v) } func encodeMetadataHeader(k, v string) string { if strings.HasSuffix(k, binHdrSuffix) { return encodeBinHeader(([]byte)(v)) } return v } func decodeMetadataHeader(k, v string) (string, error) { if strings.HasSuffix(k, binHdrSuffix) { b, err := decodeBinHeader(v) return string(b), err } return v, nil } func (d *decodeState) decodeHeader(frame *http2.MetaHeadersFrame) error { // frame.Truncated is set to true when framer detects that the current header // list size hits MaxHeaderListSize limit. if frame.Truncated { return status.Error(codes.Internal, "peer header list size exceeded limit") } for _, hf := range frame.Fields { d.processHeaderField(hf) } if d.data.isGRPC { if d.data.grpcErr != nil { return d.data.grpcErr } if d.serverSide { return nil } if d.data.rawStatusCode == nil && d.data.statusGen == nil { // gRPC status doesn't exist. // Set rawStatusCode to be unknown and return nil error. // So that, if the stream has ended this Unknown status // will be propagated to the user. // Otherwise, it will be ignored. In which case, status from // a later trailer, that has StreamEnded flag set, is propagated. code := int(codes.Unknown) d.data.rawStatusCode = &code } return nil } // HTTP fallback mode if d.data.httpErr != nil { return d.data.httpErr } var ( code = codes.Internal // when header does not include HTTP status, return INTERNAL ok bool ) if d.data.httpStatus != nil { code, ok = HTTPStatusConvTab[*(d.data.httpStatus)] if !ok { code = codes.Unknown } } return status.Error(code, d.constructHTTPErrMsg()) } // constructErrMsg constructs error message to be returned in HTTP fallback mode. // Format: HTTP status code and its corresponding message + content-type error message. func (d *decodeState) constructHTTPErrMsg() string { var errMsgs []string if d.data.httpStatus == nil { errMsgs = append(errMsgs, "malformed header: missing HTTP status") } else { errMsgs = append(errMsgs, fmt.Sprintf("%s: HTTP status code %d", http.StatusText(*(d.data.httpStatus)), *d.data.httpStatus)) } if d.data.contentTypeErr == "" { errMsgs = append(errMsgs, "transport: missing content-type field") } else { errMsgs = append(errMsgs, d.data.contentTypeErr) } return strings.Join(errMsgs, "; ") } func (d *decodeState) addMetadata(k, v string) { if d.data.mdata == nil { d.data.mdata = make(map[string][]string) } d.data.mdata[k] = append(d.data.mdata[k], v) } func (d *decodeState) processHeaderField(f hpack.HeaderField) { switch f.Name { case "content-type": contentSubtype, validContentType := contentSubtype(f.Value) if !validContentType { d.data.contentTypeErr = fmt.Sprintf("transport: received the unexpected content-type %q", f.Value) return } d.data.contentSubtype = contentSubtype // TODO: do we want to propagate the whole content-type in the metadata, // or come up with a way to just propagate the content-subtype if it was set? // ie {"content-type": "application/grpc+proto"} or {"content-subtype": "proto"} // in the metadata? 
d.addMetadata(f.Name, f.Value) d.data.isGRPC = true case "grpc-encoding": d.data.encoding = f.Value case "grpc-status": code, err := strconv.Atoi(f.Value) if err != nil { d.data.grpcErr = status.Errorf(codes.Internal, "transport: malformed grpc-status: %v", err) return } d.data.rawStatusCode = &code case "grpc-message": d.data.rawStatusMsg = decodeGrpcMessage(f.Value) case "grpc-status-details-bin": v, err := decodeBinHeader(f.Value) if err != nil { d.data.grpcErr = status.Errorf(codes.Internal, "transport: malformed grpc-status-details-bin: %v", err) return } s := &spb.Status{} if err := proto.Unmarshal(v, s); err != nil { d.data.grpcErr = status.Errorf(codes.Internal, "transport: malformed grpc-status-details-bin: %v", err) return } d.data.statusGen = status.FromProto(s) case "grpc-timeout": d.data.timeoutSet = true var err error if d.data.timeout, err = decodeTimeout(f.Value); err != nil { d.data.grpcErr = status.Errorf(codes.Internal, "transport: malformed time-out: %v", err) } case ":path": d.data.method = f.Value case ":status": code, err := strconv.Atoi(f.Value) if err != nil { d.data.httpErr = status.Errorf(codes.Internal, "transport: malformed http-status: %v", err) return } d.data.httpStatus = &code case "grpc-tags-bin": v, err := decodeBinHeader(f.Value) if err != nil { d.data.grpcErr = status.Errorf(codes.Internal, "transport: malformed grpc-tags-bin: %v", err) return } d.data.statsTags = v d.addMetadata(f.Name, string(v)) case "grpc-trace-bin": v, err := decodeBinHeader(f.Value) if err != nil { d.data.grpcErr = status.Errorf(codes.Internal, "transport: malformed grpc-trace-bin: %v", err) return } d.data.statsTrace = v d.addMetadata(f.Name, string(v)) default: if isReservedHeader(f.Name) && !isWhitelistedHeader(f.Name) { break } v, err := decodeMetadataHeader(f.Name, f.Value) if err != nil { errorf("Failed to decode metadata header (%q, %q): %v", f.Name, f.Value, err) return } d.addMetadata(f.Name, v) } } type timeoutUnit uint8 const ( hour timeoutUnit = 'H' minute timeoutUnit = 'M' second timeoutUnit = 'S' millisecond timeoutUnit = 'm' microsecond timeoutUnit = 'u' nanosecond timeoutUnit = 'n' ) func timeoutUnitToDuration(u timeoutUnit) (d time.Duration, ok bool) { switch u { case hour: return time.Hour, true case minute: return time.Minute, true case second: return time.Second, true case millisecond: return time.Millisecond, true case microsecond: return time.Microsecond, true case nanosecond: return time.Nanosecond, true default: } return } const maxTimeoutValue int64 = 100000000 - 1 // div does integer division and round-up the result. Note that this is // equivalent to (d+r-1)/r but has less chance to overflow. func div(d, r time.Duration) int64 { if m := d % r; m > 0 { return int64(d/r + 1) } return int64(d / r) } // TODO(zhaoq): It is the simplistic and not bandwidth efficient. Improve it. func encodeTimeout(t time.Duration) string { if t <= 0 { return "0n" } if d := div(t, time.Nanosecond); d <= maxTimeoutValue { return strconv.FormatInt(d, 10) + "n" } if d := div(t, time.Microsecond); d <= maxTimeoutValue { return strconv.FormatInt(d, 10) + "u" } if d := div(t, time.Millisecond); d <= maxTimeoutValue { return strconv.FormatInt(d, 10) + "m" } if d := div(t, time.Second); d <= maxTimeoutValue { return strconv.FormatInt(d, 10) + "S" } if d := div(t, time.Minute); d <= maxTimeoutValue { return strconv.FormatInt(d, 10) + "M" } // Note that maxTimeoutValue * time.Hour > MaxInt64. 
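// Values are rounded up to the coarsest unit that still fits in 8 digits;
// e.g. 123456789ns encodes as "123457u" and 123456789m as "2057614H" (see the
// table in TestTimeoutEncode).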
return strconv.FormatInt(div(t, time.Hour), 10) + "H" } func decodeTimeout(s string) (time.Duration, error) { size := len(s) if size < 2 { return 0, fmt.Errorf("transport: timeout string is too short: %q", s) } if size > 9 { // Spec allows for 8 digits plus the unit. return 0, fmt.Errorf("transport: timeout string is too long: %q", s) } unit := timeoutUnit(s[size-1]) d, ok := timeoutUnitToDuration(unit) if !ok { return 0, fmt.Errorf("transport: timeout unit is not recognized: %q", s) } t, err := strconv.ParseInt(s[:size-1], 10, 64) if err != nil { return 0, err } const maxHours = math.MaxInt64 / int64(time.Hour) if d == time.Hour && t > maxHours { // This timeout would overflow math.MaxInt64; clamp it. return time.Duration(math.MaxInt64), nil } return d * time.Duration(t), nil } const ( spaceByte = ' ' tildeByte = '~' percentByte = '%' ) // encodeGrpcMessage is used to encode status code in header field // "grpc-message". It does percent encoding and also replaces invalid utf-8 // characters with Unicode replacement character. // // It checks to see if each individual byte in msg is an allowable byte, and // then either percent encoding or passing it through. When percent encoding, // the byte is converted into hexadecimal notation with a '%' prepended. func encodeGrpcMessage(msg string) string { if msg == "" { return "" } lenMsg := len(msg) for i := 0; i < lenMsg; i++ { c := msg[i] if !(c >= spaceByte && c <= tildeByte && c != percentByte) { return encodeGrpcMessageUnchecked(msg) } } return msg } func encodeGrpcMessageUnchecked(msg string) string { var buf bytes.Buffer for len(msg) > 0 { r, size := utf8.DecodeRuneInString(msg) for _, b := range []byte(string(r)) { if size > 1 { // If size > 1, r is not ascii. Always do percent encoding. buf.WriteString(fmt.Sprintf("%%%02X", b)) continue } // The for loop is necessary even if size == 1. r could be // utf8.RuneError. // // fmt.Sprintf("%%%02X", utf8.RuneError) gives "%FFFD". if b >= spaceByte && b <= tildeByte && b != percentByte { buf.WriteByte(b) } else { buf.WriteString(fmt.Sprintf("%%%02X", b)) } } msg = msg[size:] } return buf.String() } // decodeGrpcMessage decodes the msg encoded by encodeGrpcMessage. func decodeGrpcMessage(msg string) string { if msg == "" { return "" } lenMsg := len(msg) for i := 0; i < lenMsg; i++ { if msg[i] == percentByte && i+2 < lenMsg { return decodeGrpcMessageUnchecked(msg) } } return msg } func decodeGrpcMessageUnchecked(msg string) string { var buf bytes.Buffer lenMsg := len(msg) for i := 0; i < lenMsg; i++ { c := msg[i] if c == percentByte && i+2 < lenMsg { parsed, err := strconv.ParseUint(msg[i+1:i+3], 16, 8) if err != nil { buf.WriteByte(c) } else { buf.WriteByte(byte(parsed)) i += 2 } } else { buf.WriteByte(c) } } return buf.String() } type bufWriter struct { buf []byte offset int batchSize int conn net.Conn err error onFlush func() } func newBufWriter(conn net.Conn, batchSize int) *bufWriter { return &bufWriter{ buf: make([]byte, batchSize*2), batchSize: batchSize, conn: conn, } } func (w *bufWriter) Write(b []byte) (n int, err error) { if w.err != nil { return 0, w.err } if w.batchSize == 0 { // Buffer has been disabled. 
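// Batching is off (WriteBufferSize <= 0), so hand the bytes straight to the
// underlying connection.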
return w.conn.Write(b) } for len(b) > 0 { nn := copy(w.buf[w.offset:], b) b = b[nn:] w.offset += nn n += nn if w.offset >= w.batchSize { err = w.Flush() } } return n, err } func (w *bufWriter) Flush() error { if w.err != nil { return w.err } if w.offset == 0 { return nil } if w.onFlush != nil { w.onFlush() } _, w.err = w.conn.Write(w.buf[:w.offset]) w.offset = 0 return w.err } type framer struct { writer *bufWriter fr *http2.Framer } func newFramer(conn net.Conn, writeBufferSize, readBufferSize int, maxHeaderListSize uint32) *framer { if writeBufferSize < 0 { writeBufferSize = 0 } var r io.Reader = conn if readBufferSize > 0 { r = bufio.NewReaderSize(r, readBufferSize) } w := newBufWriter(conn, writeBufferSize) f := &framer{ writer: w, fr: http2.NewFramer(w, r), } // Opt-in to Frame reuse API on framer to reduce garbage. // Frames aren't safe to read from after a subsequent call to ReadFrame. f.fr.SetReuseFrames() f.fr.MaxHeaderListSize = maxHeaderListSize f.fr.ReadMetaHeaders = hpack.NewDecoder(http2InitHeaderTableSize, nil) return f } grpc-go-1.22.1/internal/transport/http_util_test.go000066400000000000000000000147731351635773100224340ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package transport import ( "fmt" "reflect" "testing" "time" ) func TestTimeoutEncode(t *testing.T) { for _, test := range []struct { in string out string }{ {"12345678ns", "12345678n"}, {"123456789ns", "123457u"}, {"12345678us", "12345678u"}, {"123456789us", "123457m"}, {"12345678ms", "12345678m"}, {"123456789ms", "123457S"}, {"12345678s", "12345678S"}, {"123456789s", "2057614M"}, {"12345678m", "12345678M"}, {"123456789m", "2057614H"}, } { d, err := time.ParseDuration(test.in) if err != nil { t.Fatalf("failed to parse duration string %s: %v", test.in, err) } out := encodeTimeout(d) if out != test.out { t.Fatalf("timeoutEncode(%s) = %s, want %s", test.in, out, test.out) } } } func TestTimeoutDecode(t *testing.T) { for _, test := range []struct { // input s string // output d time.Duration err error }{ {"1234S", time.Second * 1234, nil}, {"1234x", 0, fmt.Errorf("transport: timeout unit is not recognized: %q", "1234x")}, {"1", 0, fmt.Errorf("transport: timeout string is too short: %q", "1")}, {"", 0, fmt.Errorf("transport: timeout string is too short: %q", "")}, } { d, err := decodeTimeout(test.s) if d != test.d || fmt.Sprint(err) != fmt.Sprint(test.err) { t.Fatalf("timeoutDecode(%q) = %d, %v, want %d, %v", test.s, int64(d), err, int64(test.d), test.err) } } } func TestContentSubtype(t *testing.T) { tests := []struct { contentType string want string wantValid bool }{ {"application/grpc", "", true}, {"application/grpc+", "", true}, {"application/grpc+blah", "blah", true}, {"application/grpc;", "", true}, {"application/grpc;blah", "blah", true}, {"application/grpcd", "", false}, {"application/grpd", "", false}, {"application/grp", "", false}, } for _, tt := range tests { got, gotValid := contentSubtype(tt.contentType) if got != tt.want || gotValid != 
tt.wantValid { t.Errorf("contentSubtype(%q) = (%v, %v); want (%v, %v)", tt.contentType, got, gotValid, tt.want, tt.wantValid) } } } func TestEncodeGrpcMessage(t *testing.T) { for _, tt := range []struct { input string expected string }{ {"", ""}, {"Hello", "Hello"}, {"\u0000", "%00"}, {"%", "%25"}, {"系统", "%E7%B3%BB%E7%BB%9F"}, {string([]byte{0xff, 0xfe, 0xfd}), "%EF%BF%BD%EF%BF%BD%EF%BF%BD"}, } { actual := encodeGrpcMessage(tt.input) if tt.expected != actual { t.Errorf("encodeGrpcMessage(%q) = %q, want %q", tt.input, actual, tt.expected) } } // make sure that all the visible ASCII chars except '%' are not percent encoded. for i := ' '; i <= '~' && i != '%'; i++ { output := encodeGrpcMessage(string(i)) if output != string(i) { t.Errorf("encodeGrpcMessage(%v) = %v, want %v", string(i), output, string(i)) } } // make sure that all the invisible ASCII chars and '%' are percent encoded. for i := rune(0); i == '%' || (i >= rune(0) && i < ' ') || (i > '~' && i <= rune(127)); i++ { output := encodeGrpcMessage(string(i)) expected := fmt.Sprintf("%%%02X", i) if output != expected { t.Errorf("encodeGrpcMessage(%v) = %v, want %v", string(i), output, expected) } } } func TestDecodeGrpcMessage(t *testing.T) { for _, tt := range []struct { input string expected string }{ {"", ""}, {"Hello", "Hello"}, {"H%61o", "Hao"}, {"H%6", "H%6"}, {"%G0", "%G0"}, {"%E7%B3%BB%E7%BB%9F", "系统"}, {"%EF%BF%BD", "�"}, } { actual := decodeGrpcMessage(tt.input) if tt.expected != actual { t.Errorf("decodeGrpcMessage(%q) = %q, want %q", tt.input, actual, tt.expected) } } // make sure that all the visible ASCII chars except '%' are not percent decoded. for i := ' '; i <= '~' && i != '%'; i++ { output := decodeGrpcMessage(string(i)) if output != string(i) { t.Errorf("decodeGrpcMessage(%v) = %v, want %v", string(i), output, string(i)) } } // make sure that all the invisible ASCII chars and '%' are percent decoded. for i := rune(0); i == '%' || (i >= rune(0) && i < ' ') || (i > '~' && i <= rune(127)); i++ { output := decodeGrpcMessage(fmt.Sprintf("%%%02X", i)) if output != string(i) { t.Errorf("decodeGrpcMessage(%v) = %v, want %v", fmt.Sprintf("%%%02X", i), output, string(i)) } } } // Decode an encoded string should get the same thing back, except for invalid // utf8 chars. 
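// For instance (cases mirrored from the table below):
//
//	decodeGrpcMessage(encodeGrpcMessage("Hello, 世界")) == "Hello, 世界"
//	decodeGrpcMessage(encodeGrpcMessage("\xff\xfe\xfd")) == "���"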
func TestDecodeEncodeGrpcMessage(t *testing.T) { testCases := []struct { orig string want string }{ {"", ""}, {"hello", "hello"}, {"h%6", "h%6"}, {"%G0", "%G0"}, {"系统", "系统"}, {"Hello, 世界", "Hello, 世界"}, {string([]byte{0xff, 0xfe, 0xfd}), "���"}, {string([]byte{0xff}) + "Hello" + string([]byte{0xfe}) + "世界" + string([]byte{0xfd}), "�Hello�世界�"}, } for _, tC := range testCases { got := decodeGrpcMessage(encodeGrpcMessage(tC.orig)) if got != tC.want { t.Errorf("decodeGrpcMessage(encodeGrpcMessage(%q)) = %q, want %q", tC.orig, got, tC.want) } } } const binaryValue = string(128) func TestEncodeMetadataHeader(t *testing.T) { for _, test := range []struct { // input kin string vin string // output vout string }{ {"key", "abc", "abc"}, {"KEY", "abc", "abc"}, {"key-bin", "abc", "YWJj"}, {"key-bin", binaryValue, "woA"}, } { v := encodeMetadataHeader(test.kin, test.vin) if !reflect.DeepEqual(v, test.vout) { t.Fatalf("encodeMetadataHeader(%q, %q) = %q, want %q", test.kin, test.vin, v, test.vout) } } } func TestDecodeMetadataHeader(t *testing.T) { for _, test := range []struct { // input kin string vin string // output vout string err error }{ {"a", "abc", "abc", nil}, {"key-bin", "Zm9vAGJhcg==", "foo\x00bar", nil}, {"key-bin", "Zm9vAGJhcg", "foo\x00bar", nil}, {"key-bin", "woA=", binaryValue, nil}, {"a", "abc,efg", "abc,efg", nil}, } { v, err := decodeMetadataHeader(test.kin, test.vin) if !reflect.DeepEqual(v, test.vout) || !reflect.DeepEqual(err, test.err) { t.Fatalf("decodeMetadataHeader(%q, %q) = %q, %v, want %q, %v", test.kin, test.vin, v, err, test.vout, test.err) } } } grpc-go-1.22.1/internal/transport/log.go000066400000000000000000000022021351635773100201220ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // This file contains wrappers for grpclog functions. // The transport package only logs to verbose level 2 by default. package transport import "google.golang.org/grpc/grpclog" const logLevel = 2 func infof(format string, args ...interface{}) { if grpclog.V(logLevel) { grpclog.Infof(format, args...) } } func warningf(format string, args ...interface{}) { if grpclog.V(logLevel) { grpclog.Warningf(format, args...) } } func errorf(format string, args ...interface{}) { if grpclog.V(logLevel) { grpclog.Errorf(format, args...) } } grpc-go-1.22.1/internal/transport/transport.go000066400000000000000000000625611351635773100214130ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ // Package transport defines and implements message oriented communication // channel to complete various transactions (e.g., an RPC). It is meant for // grpc-internal usage and is not intended to be imported directly by users. package transport import ( "bytes" "context" "errors" "fmt" "io" "net" "sync" "sync/atomic" "google.golang.org/grpc/codes" "google.golang.org/grpc/credentials" "google.golang.org/grpc/keepalive" "google.golang.org/grpc/metadata" "google.golang.org/grpc/stats" "google.golang.org/grpc/status" "google.golang.org/grpc/tap" ) type bufferPool struct { pool sync.Pool } func newBufferPool() *bufferPool { return &bufferPool{ pool: sync.Pool{ New: func() interface{} { return new(bytes.Buffer) }, }, } } func (p *bufferPool) get() *bytes.Buffer { return p.pool.Get().(*bytes.Buffer) } func (p *bufferPool) put(b *bytes.Buffer) { p.pool.Put(b) } // recvMsg represents the received msg from the transport. All transport // protocol specific info has been removed. type recvMsg struct { buffer *bytes.Buffer // nil: received some data // io.EOF: stream is completed. data is nil. // other non-nil error: transport failure. data is nil. err error } // recvBuffer is an unbounded channel of recvMsg structs. // Note recvBuffer differs from controlBuffer only in that recvBuffer // holds a channel of only recvMsg structs instead of objects implementing "item" interface. // recvBuffer is written to much more often than // controlBuffer and using strict recvMsg structs helps avoid allocation in "recvBuffer.put" type recvBuffer struct { c chan recvMsg mu sync.Mutex backlog []recvMsg err error } func newRecvBuffer() *recvBuffer { b := &recvBuffer{ c: make(chan recvMsg, 1), } return b } func (b *recvBuffer) put(r recvMsg) { b.mu.Lock() if b.err != nil { b.mu.Unlock() // An error had occurred earlier, don't accept more // data or errors. return } b.err = r.err if len(b.backlog) == 0 { select { case b.c <- r: b.mu.Unlock() return default: } } b.backlog = append(b.backlog, r) b.mu.Unlock() } func (b *recvBuffer) load() { b.mu.Lock() if len(b.backlog) > 0 { select { case b.c <- b.backlog[0]: b.backlog[0] = recvMsg{} b.backlog = b.backlog[1:] default: } } b.mu.Unlock() } // get returns the channel that receives a recvMsg in the buffer. // // Upon receipt of a recvMsg, the caller should call load to send another // recvMsg onto the channel if there is any. func (b *recvBuffer) get() <-chan recvMsg { return b.c } // recvBufferReader implements io.Reader interface to read the data from // recvBuffer. type recvBufferReader struct { closeStream func(error) // Closes the client transport stream with the given error and nil trailer metadata. ctx context.Context ctxDone <-chan struct{} // cache of ctx.Done() (for performance). recv *recvBuffer last *bytes.Buffer // Stores the remaining data in the previous calls. err error freeBuffer func(*bytes.Buffer) } // Read reads the next len(p) bytes from last. If last is drained, it tries to // read additional data from recv. It blocks if there no additional data available // in recv. If Read returns any non-nil error, it will continue to return that error. func (r *recvBufferReader) Read(p []byte) (n int, err error) { if r.err != nil { return 0, r.err } if r.last != nil { // Read remaining data left in last call. 
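// Serve the bytes left over from the previous recvMsg before pulling a new
// one off the recv buffer; the buffer is recycled once it is drained.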
copied, _ := r.last.Read(p) if r.last.Len() == 0 { r.freeBuffer(r.last) r.last = nil } return copied, nil } if r.closeStream != nil { n, r.err = r.readClient(p) } else { n, r.err = r.read(p) } return n, r.err } func (r *recvBufferReader) read(p []byte) (n int, err error) { select { case <-r.ctxDone: return 0, ContextErr(r.ctx.Err()) case m := <-r.recv.get(): return r.readAdditional(m, p) } } func (r *recvBufferReader) readClient(p []byte) (n int, err error) { // If the context is canceled, then closes the stream with nil metadata. // closeStream writes its error parameter to r.recv as a recvMsg. // r.readAdditional acts on that message and returns the necessary error. select { case <-r.ctxDone: r.closeStream(ContextErr(r.ctx.Err())) m := <-r.recv.get() return r.readAdditional(m, p) case m := <-r.recv.get(): return r.readAdditional(m, p) } } func (r *recvBufferReader) readAdditional(m recvMsg, p []byte) (n int, err error) { r.recv.load() if m.err != nil { return 0, m.err } copied, _ := m.buffer.Read(p) if m.buffer.Len() == 0 { r.freeBuffer(m.buffer) r.last = nil } else { r.last = m.buffer } return copied, nil } type streamState uint32 const ( streamActive streamState = iota streamWriteDone // EndStream sent streamReadDone // EndStream received streamDone // the entire stream is finished. ) // Stream represents an RPC in the transport layer. type Stream struct { id uint32 st ServerTransport // nil for client side Stream ctx context.Context // the associated context of the stream cancel context.CancelFunc // always nil for client side Stream done chan struct{} // closed at the end of stream to unblock writers. On the client side. ctxDone <-chan struct{} // same as done chan but for server side. Cache of ctx.Done() (for performance) method string // the associated RPC method of the stream recvCompress string sendCompress string buf *recvBuffer trReader io.Reader fc *inFlow wq *writeQuota // Callback to state application's intentions to read data. This // is used to adjust flow control, if needed. requestRead func(int) headerChan chan struct{} // closed to indicate the end of header metadata. headerChanClosed uint32 // set when headerChan is closed. Used to avoid closing headerChan multiple times. // hdrMu protects header and trailer metadata on the server-side. hdrMu sync.Mutex // On client side, header keeps the received header metadata. // // On server side, header keeps the header set by SetHeader(). The complete // header will merged into this after t.WriteHeader() is called. header metadata.MD trailer metadata.MD // the key-value map of trailer metadata. noHeaders bool // set if the client never received headers (set only after the stream is done). // On the server-side, headerSent is atomically set to 1 when the headers are sent out. headerSent uint32 state streamState // On client-side it is the status error received from the server. // On server-side it is unused. status *status.Status bytesReceived uint32 // indicates whether any bytes have been received on this stream unprocessed uint32 // set if the server sends a refused stream or GOAWAY including this stream // contentSubtype is the content-subtype for requests. // this must be lowercase or the behavior is undefined. contentSubtype string } // isHeaderSent is only valid on the server-side. func (s *Stream) isHeaderSent() bool { return atomic.LoadUint32(&s.headerSent) == 1 } // updateHeaderSent updates headerSent and returns true // if it was alreay set. It is valid only on server-side. 
func (s *Stream) updateHeaderSent() bool { return atomic.SwapUint32(&s.headerSent, 1) == 1 } func (s *Stream) swapState(st streamState) streamState { return streamState(atomic.SwapUint32((*uint32)(&s.state), uint32(st))) } func (s *Stream) compareAndSwapState(oldState, newState streamState) bool { return atomic.CompareAndSwapUint32((*uint32)(&s.state), uint32(oldState), uint32(newState)) } func (s *Stream) getState() streamState { return streamState(atomic.LoadUint32((*uint32)(&s.state))) } func (s *Stream) waitOnHeader() error { if s.headerChan == nil { // On the server headerChan is always nil since a stream originates // only after having received headers. return nil } select { case <-s.ctx.Done(): return ContextErr(s.ctx.Err()) case <-s.headerChan: return nil } } // RecvCompress returns the compression algorithm applied to the inbound // message. It is empty string if there is no compression applied. func (s *Stream) RecvCompress() string { if err := s.waitOnHeader(); err != nil { return "" } return s.recvCompress } // SetSendCompress sets the compression algorithm to the stream. func (s *Stream) SetSendCompress(str string) { s.sendCompress = str } // Done returns a channel which is closed when it receives the final status // from the server. func (s *Stream) Done() <-chan struct{} { return s.done } // Header returns the header metadata of the stream. // // On client side, it acquires the key-value pairs of header metadata once it is // available. It blocks until i) the metadata is ready or ii) there is no header // metadata or iii) the stream is canceled/expired. // // On server side, it returns the out header after t.WriteHeader is called. func (s *Stream) Header() (metadata.MD, error) { if s.headerChan == nil && s.header != nil { // On server side, return the header in stream. It will be the out // header after t.WriteHeader is called. return s.header.Copy(), nil } err := s.waitOnHeader() // Even if the stream is closed, header is returned if available. select { case <-s.headerChan: if s.header == nil { return nil, nil } return s.header.Copy(), nil default: } return nil, err } // TrailersOnly blocks until a header or trailers-only frame is received and // then returns true if the stream was trailers-only. If the stream ends // before headers are received, returns true, nil. If a context error happens // first, returns it as a status error. Client-side only. func (s *Stream) TrailersOnly() (bool, error) { err := s.waitOnHeader() if err != nil { return false, err } return s.noHeaders, nil } // Trailer returns the cached trailer metedata. Note that if it is not called // after the entire stream is done, it could return an empty MD. Client // side only. // It can be safely read only after stream has ended that is either read // or write have returned io.EOF. func (s *Stream) Trailer() metadata.MD { c := s.trailer.Copy() return c } // ContentSubtype returns the content-subtype for a request. For example, a // content-subtype of "proto" will result in a content-type of // "application/grpc+proto". This will always be lowercase. See // https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md#requests for // more details. func (s *Stream) ContentSubtype() string { return s.contentSubtype } // Context returns the context of the stream. func (s *Stream) Context() context.Context { return s.ctx } // Method returns the method for the stream. func (s *Stream) Method() string { return s.method } // Status returns the status received from the server. 
// Status can be read safely only after the stream has ended, // that is, after Done() is closed. func (s *Stream) Status() *status.Status { return s.status } // SetHeader sets the header metadata. This can be called multiple times. // Server side only. // This should not be called in parallel to other data writes. func (s *Stream) SetHeader(md metadata.MD) error { if md.Len() == 0 { return nil } if s.isHeaderSent() || s.getState() == streamDone { return ErrIllegalHeaderWrite } s.hdrMu.Lock() s.header = metadata.Join(s.header, md) s.hdrMu.Unlock() return nil } // SendHeader sends the given header metadata. The given metadata is // combined with any metadata set by previous calls to SetHeader and // then written to the transport stream. func (s *Stream) SendHeader(md metadata.MD) error { return s.st.WriteHeader(s, md) } // SetTrailer sets the trailer metadata which will be sent with the RPC status // by the server. This can be called multiple times. Server side only. // This should not be called parallel to other data writes. func (s *Stream) SetTrailer(md metadata.MD) error { if md.Len() == 0 { return nil } if s.getState() == streamDone { return ErrIllegalHeaderWrite } s.hdrMu.Lock() s.trailer = metadata.Join(s.trailer, md) s.hdrMu.Unlock() return nil } func (s *Stream) write(m recvMsg) { s.buf.put(m) } // Read reads all p bytes from the wire for this stream. func (s *Stream) Read(p []byte) (n int, err error) { // Don't request a read if there was an error earlier if er := s.trReader.(*transportReader).er; er != nil { return 0, er } s.requestRead(len(p)) return io.ReadFull(s.trReader, p) } // tranportReader reads all the data available for this Stream from the transport and // passes them into the decoder, which converts them into a gRPC message stream. // The error is io.EOF when the stream is done or another non-nil error if // the stream broke. type transportReader struct { reader io.Reader // The handler to control the window update procedure for both this // particular stream and the associated transport. windowHandler func(int) er error } func (t *transportReader) Read(p []byte) (n int, err error) { n, err = t.reader.Read(p) if err != nil { t.er = err return } t.windowHandler(n) return } // BytesReceived indicates whether any bytes have been received on this stream. func (s *Stream) BytesReceived() bool { return atomic.LoadUint32(&s.bytesReceived) == 1 } // Unprocessed indicates whether the server did not process this stream -- // i.e. it sent a refused stream or GOAWAY including this stream ID. func (s *Stream) Unprocessed() bool { return atomic.LoadUint32(&s.unprocessed) == 1 } // GoString is implemented by Stream so context.String() won't // race when printing %#v. func (s *Stream) GoString() string { return fmt.Sprintf("", s, s.method) } // state of transport type transportState int const ( reachable transportState = iota closing draining ) // ServerConfig consists of all the configurations to establish a server transport. type ServerConfig struct { MaxStreams uint32 AuthInfo credentials.AuthInfo InTapHandle tap.ServerInHandle StatsHandler stats.Handler KeepaliveParams keepalive.ServerParameters KeepalivePolicy keepalive.EnforcementPolicy InitialWindowSize int32 InitialConnWindowSize int32 WriteBufferSize int ReadBufferSize int ChannelzParentID int64 MaxHeaderListSize *uint32 } // NewServerTransport creates a ServerTransport with conn or non-nil error // if it fails. 
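//
// A minimal sketch of server-side usage (handler names are assumed, not part
// of this package):
//
//	st, err := NewServerTransport("http2", conn, &ServerConfig{MaxStreams: 100})
//	if err != nil {
//		// handle the handshake error
//	}
//	go st.HandleStreams(handleStream, func(ctx context.Context, method string) context.Context { return ctx })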
func NewServerTransport(protocol string, conn net.Conn, config *ServerConfig) (ServerTransport, error) { return newHTTP2Server(conn, config) } // ConnectOptions covers all relevant options for communicating with the server. type ConnectOptions struct { // UserAgent is the application user agent. UserAgent string // Dialer specifies how to dial a network address. Dialer func(context.Context, string) (net.Conn, error) // FailOnNonTempDialError specifies if gRPC fails on non-temporary dial errors. FailOnNonTempDialError bool // PerRPCCredentials stores the PerRPCCredentials required to issue RPCs. PerRPCCredentials []credentials.PerRPCCredentials // TransportCredentials stores the Authenticator required to setup a client // connection. Only one of TransportCredentials and CredsBundle is non-nil. TransportCredentials credentials.TransportCredentials // CredsBundle is the credentials bundle to be used. Only one of // TransportCredentials and CredsBundle is non-nil. CredsBundle credentials.Bundle // KeepaliveParams stores the keepalive parameters. KeepaliveParams keepalive.ClientParameters // StatsHandler stores the handler for stats. StatsHandler stats.Handler // InitialWindowSize sets the initial window size for a stream. InitialWindowSize int32 // InitialConnWindowSize sets the initial window size for a connection. InitialConnWindowSize int32 // WriteBufferSize sets the size of write buffer which in turn determines how much data can be batched before it's written on the wire. WriteBufferSize int // ReadBufferSize sets the size of read buffer, which in turn determines how much data can be read at most for one read syscall. ReadBufferSize int // ChannelzParentID sets the addrConn id which initiate the creation of this client transport. ChannelzParentID int64 // MaxHeaderListSize sets the max (uncompressed) size of header list that is prepared to be received. MaxHeaderListSize *uint32 } // TargetInfo contains the information of the target such as network address and metadata. type TargetInfo struct { Addr string Metadata interface{} Authority string } // NewClientTransport establishes the transport with the required ConnectOptions // and returns it to the caller. func NewClientTransport(connectCtx, ctx context.Context, target TargetInfo, opts ConnectOptions, onPrefaceReceipt func(), onGoAway func(GoAwayReason), onClose func()) (ClientTransport, error) { return newHTTP2Client(connectCtx, ctx, target, opts, onPrefaceReceipt, onGoAway, onClose) } // Options provides additional hints and information for message // transmission. type Options struct { // Last indicates whether this write is the last piece for // this stream. Last bool } // CallHdr carries the information of a particular RPC. type CallHdr struct { // Host specifies the peer's host. Host string // Method specifies the operation to perform. Method string // SendCompress specifies the compression algorithm applied on // outbound message. SendCompress string // Creds specifies credentials.PerRPCCredentials for a call. Creds credentials.PerRPCCredentials // ContentSubtype specifies the content-subtype for a request. For example, a // content-subtype of "proto" will result in a content-type of // "application/grpc+proto". The value of ContentSubtype must be all // lowercase, otherwise the behavior is undefined. See // https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md#requests // for more details. 
ContentSubtype string PreviousAttempts int // value of grpc-previous-rpc-attempts header to set } // ClientTransport is the common interface for all gRPC client-side transport // implementations. type ClientTransport interface { // Close tears down this transport. Once it returns, the transport // should not be accessed any more. The caller must make sure this // is called only once. Close() error // GracefulClose starts to tear down the transport: the transport will stop // accepting new RPCs and NewStream will return error. Once all streams are // finished, the transport will close. // // It does not block. GracefulClose() // Write sends the data for the given stream. A nil stream indicates // the write is to be performed on the transport as a whole. Write(s *Stream, hdr []byte, data []byte, opts *Options) error // NewStream creates a Stream for an RPC. NewStream(ctx context.Context, callHdr *CallHdr) (*Stream, error) // CloseStream clears the footprint of a stream when the stream is // not needed any more. The err indicates the error incurred when // CloseStream is called. Must be called when a stream is finished // unless the associated transport is closing. CloseStream(stream *Stream, err error) // Error returns a channel that is closed when some I/O error // happens. Typically the caller should have a goroutine to monitor // this in order to take action (e.g., close the current transport // and create a new one) in error case. It should not return nil // once the transport is initiated. Error() <-chan struct{} // GoAway returns a channel that is closed when ClientTransport // receives the draining signal from the server (e.g., GOAWAY frame in // HTTP/2). GoAway() <-chan struct{} // GetGoAwayReason returns the reason why GoAway frame was received. GetGoAwayReason() GoAwayReason // RemoteAddr returns the remote network address. RemoteAddr() net.Addr // IncrMsgSent increments the number of message sent through this transport. IncrMsgSent() // IncrMsgRecv increments the number of message received through this transport. IncrMsgRecv() } // ServerTransport is the common interface for all gRPC server-side transport // implementations. // // Methods may be called concurrently from multiple goroutines, but // Write methods for a given Stream will be called serially. type ServerTransport interface { // HandleStreams receives incoming streams using the given handler. HandleStreams(func(*Stream), func(context.Context, string) context.Context) // WriteHeader sends the header metadata for the given stream. // WriteHeader may not be called on all streams. WriteHeader(s *Stream, md metadata.MD) error // Write sends the data for the given stream. // Write may not be called on all streams. Write(s *Stream, hdr []byte, data []byte, opts *Options) error // WriteStatus sends the status of a stream to the client. WriteStatus is // the final call made on a stream and always occurs. WriteStatus(s *Stream, st *status.Status) error // Close tears down the transport. Once it is called, the transport // should not be accessed any more. All the pending streams and their // handlers will be terminated asynchronously. Close() error // RemoteAddr returns the remote network address. RemoteAddr() net.Addr // Drain notifies the client this ServerTransport stops accepting new RPCs. Drain() // IncrMsgSent increments the number of message sent through this transport. IncrMsgSent() // IncrMsgRecv increments the number of message received through this transport. 
IncrMsgRecv() } // connectionErrorf creates an ConnectionError with the specified error description. func connectionErrorf(temp bool, e error, format string, a ...interface{}) ConnectionError { return ConnectionError{ Desc: fmt.Sprintf(format, a...), temp: temp, err: e, } } // ConnectionError is an error that results in the termination of the // entire connection and the retry of all the active streams. type ConnectionError struct { Desc string temp bool err error } func (e ConnectionError) Error() string { return fmt.Sprintf("connection error: desc = %q", e.Desc) } // Temporary indicates if this connection error is temporary or fatal. func (e ConnectionError) Temporary() bool { return e.temp } // Origin returns the original error of this connection error. func (e ConnectionError) Origin() error { // Never return nil error here. // If the original error is nil, return itself. if e.err == nil { return e } return e.err } var ( // ErrConnClosing indicates that the transport is closing. ErrConnClosing = connectionErrorf(true, nil, "transport is closing") // errStreamDrain indicates that the stream is rejected because the // connection is draining. This could be caused by goaway or balancer // removing the address. errStreamDrain = status.Error(codes.Unavailable, "the connection is draining") // errStreamDone is returned from write at the client side to indiacte application // layer of an error. errStreamDone = errors.New("the stream is done") // StatusGoAway indicates that the server sent a GOAWAY that included this // stream's ID in unprocessed RPCs. statusGoAway = status.New(codes.Unavailable, "the stream is rejected because server is draining the connection") ) // GoAwayReason contains the reason for the GoAway frame received. type GoAwayReason uint8 const ( // GoAwayInvalid indicates that no GoAway frame is received. GoAwayInvalid GoAwayReason = 0 // GoAwayNoReason is the default value when GoAway frame is received. GoAwayNoReason GoAwayReason = 1 // GoAwayTooManyPings indicates that a GoAway frame with // ErrCodeEnhanceYourCalm was received and that the debug data said // "too_many_pings". GoAwayTooManyPings GoAwayReason = 2 ) // channelzData is used to store channelz related data for http2Client and http2Server. // These fields cannot be embedded in the original structs (e.g. http2Client), since to do atomic // operation on int64 variable on 32-bit machine, user is responsible to enforce memory alignment. // Here, by grouping those int64 fields inside a struct, we are enforcing the alignment. type channelzData struct { kpCount int64 // The number of streams that have started, including already finished ones. streamsStarted int64 // Client side: The number of streams that have ended successfully by receiving // EoS bit set frame from server. // Server side: The number of streams that have ended successfully by sending // frame with EoS bit set. streamsSucceeded int64 streamsFailed int64 // lastStreamCreatedTime stores the timestamp that the last stream gets created. It is of int64 type // instead of time.Time since it's more costly to atomically update time.Time variable than int64 // variable. The same goes for lastMsgSentTime and lastMsgRecvTime. lastStreamCreatedTime int64 msgSent int64 msgRecv int64 lastMsgSentTime int64 lastMsgRecvTime int64 } // ContextErr converts the error from context package into a status error. 
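//
// For example, per the mapping below:
//
//	ContextErr(context.DeadlineExceeded) // status error with codes.DeadlineExceeded
//	ContextErr(context.Canceled)         // status error with codes.Canceled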
func ContextErr(err error) error { switch err { case context.DeadlineExceeded: return status.Error(codes.DeadlineExceeded, err.Error()) case context.Canceled: return status.Error(codes.Canceled, err.Error()) } return status.Errorf(codes.Internal, "Unexpected error from context packet: %v", err) } grpc-go-1.22.1/internal/transport/transport_test.go000066400000000000000000001766701351635773100224610ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package transport import ( "bytes" "context" "encoding/binary" "errors" "fmt" "io" "math" "net" "runtime" "strconv" "strings" "sync" "testing" "time" "golang.org/x/net/http2" "golang.org/x/net/http2/hpack" "google.golang.org/grpc/codes" "google.golang.org/grpc/internal/leakcheck" "google.golang.org/grpc/internal/syscall" "google.golang.org/grpc/internal/testutils" "google.golang.org/grpc/keepalive" "google.golang.org/grpc/status" ) type server struct { lis net.Listener port string startedErr chan error // error (or nil) with server start value mu sync.Mutex conns map[ServerTransport]bool h *testStreamHandler ready chan struct{} } var ( expectedRequest = []byte("ping") expectedResponse = []byte("pong") expectedRequestLarge = make([]byte, initialWindowSize*2) expectedResponseLarge = make([]byte, initialWindowSize*2) expectedInvalidHeaderField = "invalid/content-type" ) func init() { expectedRequestLarge[0] = 'g' expectedRequestLarge[len(expectedRequestLarge)-1] = 'r' expectedResponseLarge[0] = 'p' expectedResponseLarge[len(expectedResponseLarge)-1] = 'c' } type testStreamHandler struct { t *http2Server notify chan struct{} getNotified chan struct{} } type hType int const ( normal hType = iota suspended notifyCall misbehaved encodingRequiredStatus invalidHeaderField delayRead pingpong ) func (h *testStreamHandler) handleStreamAndNotify(s *Stream) { if h.notify == nil { return } go func() { select { case <-h.notify: default: close(h.notify) } }() } func (h *testStreamHandler) handleStream(t *testing.T, s *Stream) { req := expectedRequest resp := expectedResponse if s.Method() == "foo.Large" { req = expectedRequestLarge resp = expectedResponseLarge } p := make([]byte, len(req)) _, err := s.Read(p) if err != nil { return } if !bytes.Equal(p, req) { t.Errorf("handleStream got %v, want %v", p, req) h.t.WriteStatus(s, status.New(codes.Internal, "panic")) return } // send a response back to the client. h.t.Write(s, nil, resp, &Options{}) // send the trailer to end the stream. 
h.t.WriteStatus(s, status.New(codes.OK, "")) } func (h *testStreamHandler) handleStreamPingPong(t *testing.T, s *Stream) { header := make([]byte, 5) for { if _, err := s.Read(header); err != nil { if err == io.EOF { h.t.WriteStatus(s, status.New(codes.OK, "")) return } t.Errorf("Error on server while reading data header: %v", err) h.t.WriteStatus(s, status.New(codes.Internal, "panic")) return } sz := binary.BigEndian.Uint32(header[1:]) msg := make([]byte, int(sz)) if _, err := s.Read(msg); err != nil { t.Errorf("Error on server while reading message: %v", err) h.t.WriteStatus(s, status.New(codes.Internal, "panic")) return } buf := make([]byte, sz+5) buf[0] = byte(0) binary.BigEndian.PutUint32(buf[1:], uint32(sz)) copy(buf[5:], msg) h.t.Write(s, nil, buf, &Options{}) } } func (h *testStreamHandler) handleStreamMisbehave(t *testing.T, s *Stream) { conn, ok := s.st.(*http2Server) if !ok { t.Errorf("Failed to convert %v to *http2Server", s.st) h.t.WriteStatus(s, status.New(codes.Internal, "")) return } var sent int p := make([]byte, http2MaxFrameLen) for sent < initialWindowSize { n := initialWindowSize - sent // The last message may be smaller than http2MaxFrameLen if n <= http2MaxFrameLen { if s.Method() == "foo.Connection" { // Violate connection level flow control window of client but do not // violate any stream level windows. p = make([]byte, n) } else { // Violate stream level flow control window of client. p = make([]byte, n+1) } } conn.controlBuf.put(&dataFrame{ streamID: s.id, h: nil, d: p, onEachWrite: func() {}, }) sent += len(p) } } func (h *testStreamHandler) handleStreamEncodingRequiredStatus(t *testing.T, s *Stream) { // raw newline is not accepted by http2 framer so it must be encoded. h.t.WriteStatus(s, encodingTestStatus) } func (h *testStreamHandler) handleStreamInvalidHeaderField(t *testing.T, s *Stream) { headerFields := []hpack.HeaderField{} headerFields = append(headerFields, hpack.HeaderField{Name: "content-type", Value: expectedInvalidHeaderField}) h.t.controlBuf.put(&headerFrame{ streamID: s.id, hf: headerFields, endStream: false, }) } // handleStreamDelayRead delays reads so that the other side has to halt on // stream-level flow control. // This handler assumes dynamic flow control is turned off and assumes window // sizes to be set to defaultWindowSize. func (h *testStreamHandler) handleStreamDelayRead(t *testing.T, s *Stream) { req := expectedRequest resp := expectedResponse if s.Method() == "foo.Large" { req = expectedRequestLarge resp = expectedResponseLarge } var ( mu sync.Mutex total int ) s.wq.replenish = func(n int) { mu.Lock() total += n mu.Unlock() s.wq.realReplenish(n) } getTotal := func() int { mu.Lock() defer mu.Unlock() return total } done := make(chan struct{}) defer close(done) go func() { for { select { // Prevent goroutine from leaking. case <-done: return default: } if getTotal() == defaultWindowSize { // Signal the client to start reading and // thereby send window update. close(h.notify) return } runtime.Gosched() } }() p := make([]byte, len(req)) // Let the other side run out of stream-level window before // starting to read and thereby sending a window update. 
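// The select below parks this handler until the test closes getNotified (or
// the generous guard timer fires), signalling that it is time to read.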
timer := time.NewTimer(time.Second * 10) select { case <-h.getNotified: timer.Stop() case <-timer.C: t.Errorf("Server timed-out.") return } _, err := s.Read(p) if err != nil { t.Errorf("s.Read(_) = _, %v, want _, ", err) return } if !bytes.Equal(p, req) { t.Errorf("handleStream got %v, want %v", p, req) return } // This write will cause server to run out of stream level, // flow control and the other side won't send a window update // until that happens. if err := h.t.Write(s, nil, resp, &Options{}); err != nil { t.Errorf("server Write got %v, want ", err) return } // Read one more time to ensure that everything remains fine and // that the goroutine, that we launched earlier to signal client // to read, gets enough time to process. _, err = s.Read(p) if err != nil { t.Errorf("s.Read(_) = _, %v, want _, nil", err) return } // send the trailer to end the stream. if err := h.t.WriteStatus(s, status.New(codes.OK, "")); err != nil { t.Errorf("server WriteStatus got %v, want ", err) return } } // start starts server. Other goroutines should block on s.readyChan for further operations. func (s *server) start(t *testing.T, port int, serverConfig *ServerConfig, ht hType) { var err error if port == 0 { s.lis, err = net.Listen("tcp", "localhost:0") } else { s.lis, err = net.Listen("tcp", "localhost:"+strconv.Itoa(port)) } if err != nil { s.startedErr <- fmt.Errorf("failed to listen: %v", err) return } _, p, err := net.SplitHostPort(s.lis.Addr().String()) if err != nil { s.startedErr <- fmt.Errorf("failed to parse listener address: %v", err) return } s.port = p s.conns = make(map[ServerTransport]bool) s.startedErr <- nil for { conn, err := s.lis.Accept() if err != nil { return } transport, err := NewServerTransport("http2", conn, serverConfig) if err != nil { return } s.mu.Lock() if s.conns == nil { s.mu.Unlock() transport.Close() return } s.conns[transport] = true h := &testStreamHandler{t: transport.(*http2Server)} s.h = h s.mu.Unlock() switch ht { case notifyCall: go transport.HandleStreams(h.handleStreamAndNotify, func(ctx context.Context, _ string) context.Context { return ctx }) case suspended: go transport.HandleStreams(func(*Stream) {}, // Do nothing to handle the stream. 
func(ctx context.Context, method string) context.Context { return ctx }) case misbehaved: go transport.HandleStreams(func(s *Stream) { go h.handleStreamMisbehave(t, s) }, func(ctx context.Context, method string) context.Context { return ctx }) case encodingRequiredStatus: go transport.HandleStreams(func(s *Stream) { go h.handleStreamEncodingRequiredStatus(t, s) }, func(ctx context.Context, method string) context.Context { return ctx }) case invalidHeaderField: go transport.HandleStreams(func(s *Stream) { go h.handleStreamInvalidHeaderField(t, s) }, func(ctx context.Context, method string) context.Context { return ctx }) case delayRead: h.notify = make(chan struct{}) h.getNotified = make(chan struct{}) s.mu.Lock() close(s.ready) s.mu.Unlock() go transport.HandleStreams(func(s *Stream) { go h.handleStreamDelayRead(t, s) }, func(ctx context.Context, method string) context.Context { return ctx }) case pingpong: go transport.HandleStreams(func(s *Stream) { go h.handleStreamPingPong(t, s) }, func(ctx context.Context, method string) context.Context { return ctx }) default: go transport.HandleStreams(func(s *Stream) { go h.handleStream(t, s) }, func(ctx context.Context, method string) context.Context { return ctx }) } } } func (s *server) wait(t *testing.T, timeout time.Duration) { select { case err := <-s.startedErr: if err != nil { t.Fatal(err) } case <-time.After(timeout): t.Fatalf("Timed out after %v waiting for server to be ready", timeout) } } func (s *server) stop() { s.lis.Close() s.mu.Lock() for c := range s.conns { c.Close() } s.conns = nil s.mu.Unlock() } func setUpServerOnly(t *testing.T, port int, serverConfig *ServerConfig, ht hType) *server { server := &server{startedErr: make(chan error, 1), ready: make(chan struct{})} go server.start(t, port, serverConfig, ht) server.wait(t, 2*time.Second) return server } func setUp(t *testing.T, port int, maxStreams uint32, ht hType) (*server, *http2Client, func()) { return setUpWithOptions(t, port, &ServerConfig{MaxStreams: maxStreams}, ht, ConnectOptions{}) } func setUpWithOptions(t *testing.T, port int, serverConfig *ServerConfig, ht hType, copts ConnectOptions) (*server, *http2Client, func()) { server := setUpServerOnly(t, port, serverConfig, ht) addr := "localhost:" + server.port target := TargetInfo{ Addr: addr, } connectCtx, cancel := context.WithDeadline(context.Background(), time.Now().Add(2*time.Second)) ct, connErr := NewClientTransport(connectCtx, context.Background(), target, copts, func() {}, func(GoAwayReason) {}, func() {}) if connErr != nil { cancel() // Do not cancel in success path. t.Fatalf("failed to create transport: %v", connErr) } return server, ct.(*http2Client), cancel } func setUpWithNoPingServer(t *testing.T, copts ConnectOptions, done chan net.Conn) (*http2Client, func()) { lis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Failed to listen: %v", err) } // Launch a non responsive server. go func() { defer lis.Close() conn, err := lis.Accept() if err != nil { t.Errorf("Error at server-side while accepting: %v", err) close(done) return } done <- conn }() connectCtx, cancel := context.WithDeadline(context.Background(), time.Now().Add(2*time.Second)) tr, err := NewClientTransport(connectCtx, context.Background(), TargetInfo{Addr: lis.Addr().String()}, copts, func() {}, func(GoAwayReason) {}, func() {}) if err != nil { cancel() // Do not cancel in success path. // Server clean-up. 
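// The dial failed: close the throw-away listener and any connection the
// accept goroutine may already have handed back on done.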
lis.Close() if conn, ok := <-done; ok { conn.Close() } t.Fatalf("Failed to dial: %v", err) } return tr.(*http2Client), cancel } // TestInflightStreamClosing ensures that closing in-flight stream // sends status error to concurrent stream reader. func TestInflightStreamClosing(t *testing.T) { serverConfig := &ServerConfig{} server, client, cancel := setUpWithOptions(t, 0, serverConfig, suspended, ConnectOptions{}) defer cancel() defer server.stop() defer client.Close() stream, err := client.NewStream(context.Background(), &CallHdr{}) if err != nil { t.Fatalf("Client failed to create RPC request: %v", err) } donec := make(chan struct{}) serr := status.Error(codes.Internal, "client connection is closing") go func() { defer close(donec) if _, err := stream.Read(make([]byte, defaultWindowSize)); err != serr { t.Errorf("unexpected Stream error %v, expected %v", err, serr) } }() // should unblock concurrent stream.Read client.CloseStream(stream, serr) // wait for stream.Read error timeout := time.NewTimer(5 * time.Second) select { case <-donec: if !timeout.Stop() { <-timeout.C } case <-timeout.C: t.Fatalf("Test timed out, expected a status error.") } } // TestMaxConnectionIdle tests that a server will send GoAway to a idle client. // An idle client is one who doesn't make any RPC calls for a duration of // MaxConnectionIdle time. func TestMaxConnectionIdle(t *testing.T) { serverConfig := &ServerConfig{ KeepaliveParams: keepalive.ServerParameters{ MaxConnectionIdle: 2 * time.Second, }, } server, client, cancel := setUpWithOptions(t, 0, serverConfig, suspended, ConnectOptions{}) defer cancel() defer server.stop() defer client.Close() stream, err := client.NewStream(context.Background(), &CallHdr{}) if err != nil { t.Fatalf("Client failed to create RPC request: %v", err) } client.closeStream(stream, io.EOF, true, http2.ErrCodeCancel, nil, nil, false) // wait for server to see that closed stream and max-age logic to send goaway after no new RPCs are mode timeout := time.NewTimer(time.Second * 4) select { case <-client.GoAway(): if !timeout.Stop() { <-timeout.C } case <-timeout.C: t.Fatalf("Test timed out, expected a GoAway from the server.") } } // TestMaxConenctionIdleNegative tests that a server will not send GoAway to a non-idle(busy) client. func TestMaxConnectionIdleNegative(t *testing.T) { serverConfig := &ServerConfig{ KeepaliveParams: keepalive.ServerParameters{ MaxConnectionIdle: 2 * time.Second, }, } server, client, cancel := setUpWithOptions(t, 0, serverConfig, suspended, ConnectOptions{}) defer cancel() defer server.stop() defer client.Close() _, err := client.NewStream(context.Background(), &CallHdr{}) if err != nil { t.Fatalf("Client failed to create RPC request: %v", err) } timeout := time.NewTimer(time.Second * 4) select { case <-client.GoAway(): if !timeout.Stop() { <-timeout.C } t.Fatalf("A non-idle client received a GoAway.") case <-timeout.C: } } // TestMaxConnectionAge tests that a server will send GoAway after a duration of MaxConnectionAge. func TestMaxConnectionAge(t *testing.T) { serverConfig := &ServerConfig{ KeepaliveParams: keepalive.ServerParameters{ MaxConnectionAge: 2 * time.Second, }, } server, client, cancel := setUpWithOptions(t, 0, serverConfig, suspended, ConnectOptions{}) defer cancel() defer server.stop() defer client.Close() _, err := client.NewStream(context.Background(), &CallHdr{}) if err != nil { t.Fatalf("Client failed to create stream: %v", err) } // Wait for max-age logic to send GoAway. 
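// The 4 second timer leaves headroom over the 2 second MaxConnectionAge
// configured above.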
timeout := time.NewTimer(4 * time.Second) select { case <-client.GoAway(): if !timeout.Stop() { <-timeout.C } case <-timeout.C: t.Fatalf("Test timer out, expected a GoAway from the server.") } } const ( defaultWriteBufSize = 32 * 1024 defaultReadBufSize = 32 * 1024 ) // TestKeepaliveServer tests that a server closes connection with a client that doesn't respond to keepalive pings. func TestKeepaliveServer(t *testing.T) { serverConfig := &ServerConfig{ KeepaliveParams: keepalive.ServerParameters{ Time: 2 * time.Second, Timeout: 1 * time.Second, }, } server, c, cancel := setUpWithOptions(t, 0, serverConfig, suspended, ConnectOptions{}) defer cancel() defer server.stop() defer c.Close() client, err := net.Dial("tcp", server.lis.Addr().String()) if err != nil { t.Fatalf("Failed to dial: %v", err) } defer client.Close() // Set read deadline on client conn so that it doesn't block forever in errorsome cases. client.SetDeadline(time.Now().Add(10 * time.Second)) if n, err := client.Write(clientPreface); err != nil || n != len(clientPreface) { t.Fatalf("Error writing client preface; n=%v, err=%v", n, err) } framer := newFramer(client, defaultWriteBufSize, defaultReadBufSize, 0) if err := framer.fr.WriteSettings(http2.Setting{}); err != nil { t.Fatal("Error writing settings frame:", err) } framer.writer.Flush() // Wait for keepalive logic to close the connection. time.Sleep(4 * time.Second) b := make([]byte, 24) for { _, err = client.Read(b) if err == nil { continue } if err != io.EOF { t.Fatalf("client.Read(_) = _,%v, want io.EOF", err) } break } } // TestKeepaliveServerNegative tests that a server doesn't close connection with a client that responds to keepalive pings. func TestKeepaliveServerNegative(t *testing.T) { serverConfig := &ServerConfig{ KeepaliveParams: keepalive.ServerParameters{ Time: 2 * time.Second, Timeout: 1 * time.Second, }, } server, client, cancel := setUpWithOptions(t, 0, serverConfig, suspended, ConnectOptions{}) defer cancel() defer server.stop() defer client.Close() // Give keepalive logic some time by sleeping. time.Sleep(4 * time.Second) // Assert that client is still active. client.mu.Lock() defer client.mu.Unlock() if client.state != reachable { t.Fatalf("Test failed: Expected server-client connection to be healthy.") } } func TestKeepaliveClientClosesIdleTransport(t *testing.T) { done := make(chan net.Conn, 1) tr, cancel := setUpWithNoPingServer(t, ConnectOptions{KeepaliveParams: keepalive.ClientParameters{ Time: 2 * time.Second, // Keepalive time = 2 sec. Timeout: 1 * time.Second, // Keepalive timeout = 1 sec. PermitWithoutStream: true, // Run keepalive even with no RPCs. }}, done) defer cancel() defer tr.Close() conn, ok := <-done if !ok { t.Fatalf("Server didn't return connection object") } defer conn.Close() // Sleep for keepalive to close the connection. time.Sleep(4 * time.Second) // Assert that the connection was closed. tr.mu.Lock() defer tr.mu.Unlock() if tr.state == reachable { t.Fatalf("Test Failed: Expected client transport to have closed.") } } func TestKeepaliveClientStaysHealthyOnIdleTransport(t *testing.T) { done := make(chan net.Conn, 1) tr, cancel := setUpWithNoPingServer(t, ConnectOptions{KeepaliveParams: keepalive.ClientParameters{ Time: 2 * time.Second, // Keepalive time = 2 sec. Timeout: 1 * time.Second, // Keepalive timeout = 1 sec. }}, done) defer cancel() defer tr.Close() conn, ok := <-done if !ok { t.Fatalf("server didn't reutrn connection object") } defer conn.Close() // Give keepalive some time. 
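// 4 seconds comfortably covers the 2 second keepalive interval plus the
// 1 second timeout configured above.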
time.Sleep(4 * time.Second) // Assert that connections is still healthy. tr.mu.Lock() defer tr.mu.Unlock() if tr.state != reachable { t.Fatalf("Test failed: Expected client transport to be healthy.") } } func TestKeepaliveClientClosesWithActiveStreams(t *testing.T) { done := make(chan net.Conn, 1) tr, cancel := setUpWithNoPingServer(t, ConnectOptions{KeepaliveParams: keepalive.ClientParameters{ Time: 2 * time.Second, // Keepalive time = 2 sec. Timeout: 1 * time.Second, // Keepalive timeout = 1 sec. }}, done) defer cancel() defer tr.Close() conn, ok := <-done if !ok { t.Fatalf("Server didn't return connection object") } defer conn.Close() // Create a stream. _, err := tr.NewStream(context.Background(), &CallHdr{}) if err != nil { t.Fatalf("Failed to create a new stream: %v", err) } // Give keepalive some time. time.Sleep(4 * time.Second) // Assert that transport was closed. tr.mu.Lock() defer tr.mu.Unlock() if tr.state == reachable { t.Fatalf("Test failed: Expected client transport to have closed.") } } func TestKeepaliveClientStaysHealthyWithResponsiveServer(t *testing.T) { s, tr, cancel := setUpWithOptions(t, 0, &ServerConfig{MaxStreams: math.MaxUint32}, normal, ConnectOptions{KeepaliveParams: keepalive.ClientParameters{ Time: 2 * time.Second, // Keepalive time = 2 sec. Timeout: 1 * time.Second, // Keepalive timeout = 1 sec. PermitWithoutStream: true, // Run keepalive even with no RPCs. }}) defer cancel() defer s.stop() defer tr.Close() // Give keep alive some time. time.Sleep(4 * time.Second) // Assert that transport is healthy. tr.mu.Lock() defer tr.mu.Unlock() if tr.state != reachable { t.Fatalf("Test failed: Expected client transport to be healthy.") } } func TestKeepaliveServerEnforcementWithAbusiveClientNoRPC(t *testing.T) { serverConfig := &ServerConfig{ KeepalivePolicy: keepalive.EnforcementPolicy{ MinTime: 2 * time.Second, }, } clientOptions := ConnectOptions{ KeepaliveParams: keepalive.ClientParameters{ Time: 50 * time.Millisecond, Timeout: 1 * time.Second, PermitWithoutStream: true, }, } server, client, cancel := setUpWithOptions(t, 0, serverConfig, normal, clientOptions) defer cancel() defer server.stop() defer client.Close() timeout := time.NewTimer(10 * time.Second) select { case <-client.GoAway(): if !timeout.Stop() { <-timeout.C } case <-timeout.C: t.Fatalf("Test failed: Expected a GoAway from server.") } time.Sleep(500 * time.Millisecond) client.mu.Lock() defer client.mu.Unlock() if client.state == reachable { t.Fatalf("Test failed: Expected the connection to be closed.") } } func TestKeepaliveServerEnforcementWithAbusiveClientWithRPC(t *testing.T) { serverConfig := &ServerConfig{ KeepalivePolicy: keepalive.EnforcementPolicy{ MinTime: 2 * time.Second, }, } clientOptions := ConnectOptions{ KeepaliveParams: keepalive.ClientParameters{ Time: 50 * time.Millisecond, Timeout: 1 * time.Second, }, } server, client, cancel := setUpWithOptions(t, 0, serverConfig, suspended, clientOptions) defer cancel() defer server.stop() defer client.Close() if _, err := client.NewStream(context.Background(), &CallHdr{}); err != nil { t.Fatalf("Client failed to create stream.") } timeout := time.NewTimer(10 * time.Second) select { case <-client.GoAway(): if !timeout.Stop() { <-timeout.C } case <-timeout.C: t.Fatalf("Test failed: Expected a GoAway from server.") } time.Sleep(500 * time.Millisecond) client.mu.Lock() defer client.mu.Unlock() if client.state == reachable { t.Fatalf("Test failed: Expected the connection to be closed.") } } func TestKeepaliveServerEnforcementWithObeyingClientNoRPC(t 
*testing.T) { serverConfig := &ServerConfig{ KeepalivePolicy: keepalive.EnforcementPolicy{ MinTime: 100 * time.Millisecond, PermitWithoutStream: true, }, } clientOptions := ConnectOptions{ KeepaliveParams: keepalive.ClientParameters{ Time: 101 * time.Millisecond, Timeout: 1 * time.Second, PermitWithoutStream: true, }, } server, client, cancel := setUpWithOptions(t, 0, serverConfig, normal, clientOptions) defer cancel() defer server.stop() defer client.Close() // Give keepalive enough time. time.Sleep(3 * time.Second) // Assert that connection is healthy. client.mu.Lock() defer client.mu.Unlock() if client.state != reachable { t.Fatalf("Test failed: Expected connection to be healthy.") } } func TestKeepaliveServerEnforcementWithObeyingClientWithRPC(t *testing.T) { serverConfig := &ServerConfig{ KeepalivePolicy: keepalive.EnforcementPolicy{ MinTime: 100 * time.Millisecond, }, } clientOptions := ConnectOptions{ KeepaliveParams: keepalive.ClientParameters{ Time: 101 * time.Millisecond, Timeout: 1 * time.Second, }, } server, client, cancel := setUpWithOptions(t, 0, serverConfig, suspended, clientOptions) defer cancel() defer server.stop() defer client.Close() if _, err := client.NewStream(context.Background(), &CallHdr{}); err != nil { t.Fatalf("Client failed to create stream.") } // Give keepalive enough time. time.Sleep(3 * time.Second) // Assert that connection is healthy. client.mu.Lock() defer client.mu.Unlock() if client.state != reachable { t.Fatalf("Test failed: Expected connection to be healthy.") } } func TestClientSendAndReceive(t *testing.T) { server, ct, cancel := setUp(t, 0, math.MaxUint32, normal) defer cancel() callHdr := &CallHdr{ Host: "localhost", Method: "foo.Small", } s1, err1 := ct.NewStream(context.Background(), callHdr) if err1 != nil { t.Fatalf("failed to open stream: %v", err1) } if s1.id != 1 { t.Fatalf("wrong stream id: %d", s1.id) } s2, err2 := ct.NewStream(context.Background(), callHdr) if err2 != nil { t.Fatalf("failed to open stream: %v", err2) } if s2.id != 3 { t.Fatalf("wrong stream id: %d", s2.id) } opts := Options{Last: true} if err := ct.Write(s1, nil, expectedRequest, &opts); err != nil && err != io.EOF { t.Fatalf("failed to send data: %v", err) } p := make([]byte, len(expectedResponse)) _, recvErr := s1.Read(p) if recvErr != nil || !bytes.Equal(p, expectedResponse) { t.Fatalf("Error: %v, want ; Result: %v, want %v", recvErr, p, expectedResponse) } _, recvErr = s1.Read(p) if recvErr != io.EOF { t.Fatalf("Error: %v; want ", recvErr) } ct.Close() server.stop() } func TestClientErrorNotify(t *testing.T) { server, ct, cancel := setUp(t, 0, math.MaxUint32, normal) defer cancel() go server.stop() // ct.reader should detect the error and activate ct.Error(). <-ct.Error() ct.Close() } func performOneRPC(ct ClientTransport) { callHdr := &CallHdr{ Host: "localhost", Method: "foo.Small", } s, err := ct.NewStream(context.Background(), callHdr) if err != nil { return } opts := Options{Last: true} if err := ct.Write(s, []byte{}, expectedRequest, &opts); err == nil || err == io.EOF { time.Sleep(5 * time.Millisecond) // The following s.Recv()'s could error out because the // underlying transport is gone. 
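// Both reads below are best-effort; their return values are deliberately
// discarded.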
// // Read response p := make([]byte, len(expectedResponse)) s.Read(p) // Read io.EOF s.Read(p) } } func TestClientMix(t *testing.T) { s, ct, cancel := setUp(t, 0, math.MaxUint32, normal) defer cancel() go func(s *server) { time.Sleep(5 * time.Second) s.stop() }(s) go func(ct ClientTransport) { <-ct.Error() ct.Close() }(ct) for i := 0; i < 1000; i++ { time.Sleep(10 * time.Millisecond) go performOneRPC(ct) } } func TestLargeMessage(t *testing.T) { server, ct, cancel := setUp(t, 0, math.MaxUint32, normal) defer cancel() callHdr := &CallHdr{ Host: "localhost", Method: "foo.Large", } var wg sync.WaitGroup for i := 0; i < 2; i++ { wg.Add(1) go func() { defer wg.Done() s, err := ct.NewStream(context.Background(), callHdr) if err != nil { t.Errorf("%v.NewStream(_, _) = _, %v, want _, ", ct, err) } if err := ct.Write(s, []byte{}, expectedRequestLarge, &Options{Last: true}); err != nil && err != io.EOF { t.Errorf("%v.Write(_, _, _) = %v, want ", ct, err) } p := make([]byte, len(expectedResponseLarge)) if _, err := s.Read(p); err != nil || !bytes.Equal(p, expectedResponseLarge) { t.Errorf("s.Read(%v) = _, %v, want %v, ", err, p, expectedResponse) } if _, err = s.Read(p); err != io.EOF { t.Errorf("Failed to complete the stream %v; want ", err) } }() } wg.Wait() ct.Close() server.stop() } func TestLargeMessageWithDelayRead(t *testing.T) { // Disable dynamic flow control. sc := &ServerConfig{ InitialWindowSize: defaultWindowSize, InitialConnWindowSize: defaultWindowSize, } co := ConnectOptions{ InitialWindowSize: defaultWindowSize, InitialConnWindowSize: defaultWindowSize, } server, ct, cancel := setUpWithOptions(t, 0, sc, delayRead, co) defer cancel() defer server.stop() defer ct.Close() server.mu.Lock() ready := server.ready server.mu.Unlock() callHdr := &CallHdr{ Host: "localhost", Method: "foo.Large", } ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(time.Second*10)) defer cancel() s, err := ct.NewStream(ctx, callHdr) if err != nil { t.Fatalf("%v.NewStream(_, _) = _, %v, want _, ", ct, err) return } // Wait for server's handerler to be initialized select { case <-ready: case <-ctx.Done(): t.Fatalf("Client timed out waiting for server handler to be initialized.") } server.mu.Lock() serviceHandler := server.h server.mu.Unlock() var ( mu sync.Mutex total int ) s.wq.replenish = func(n int) { mu.Lock() total += n mu.Unlock() s.wq.realReplenish(n) } getTotal := func() int { mu.Lock() defer mu.Unlock() return total } done := make(chan struct{}) defer close(done) go func() { for { select { // Prevent goroutine from leaking in case of error. case <-done: return default: } if getTotal() == defaultWindowSize { // unblock server to be able to read and // thereby send stream level window update. close(serviceHandler.getNotified) return } runtime.Gosched() } }() // This write will cause client to run out of stream level, // flow control and the other side won't send a window update // until that happens. if err := ct.Write(s, []byte{}, expectedRequestLarge, &Options{}); err != nil { t.Fatalf("write(_, _, _) = %v, want ", err) } p := make([]byte, len(expectedResponseLarge)) // Wait for the other side to run out of stream level flow control before // reading and thereby sending a window update. 
select { case <-serviceHandler.notify: case <-ctx.Done(): t.Fatalf("Client timed out") } if _, err := s.Read(p); err != nil || !bytes.Equal(p, expectedResponseLarge) { t.Fatalf("s.Read(_) = _, %v, want _, ", err) } if err := ct.Write(s, []byte{}, expectedRequestLarge, &Options{Last: true}); err != nil { t.Fatalf("Write(_, _, _) = %v, want ", err) } if _, err = s.Read(p); err != io.EOF { t.Fatalf("Failed to complete the stream %v; want ", err) } } func TestGracefulClose(t *testing.T) { server, ct, cancel := setUp(t, 0, math.MaxUint32, pingpong) defer cancel() defer func() { // Stop the server's listener to make the server's goroutines terminate // (after the last active stream is done). server.lis.Close() // Check for goroutine leaks (i.e. GracefulClose with an active stream // doesn't eventually close the connection when that stream completes). leakcheck.Check(t) // Correctly clean up the server server.stop() }() ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(time.Second*10)) defer cancel() s, err := ct.NewStream(ctx, &CallHdr{}) if err != nil { t.Fatalf("NewStream(_, _) = _, %v, want _, ", err) } msg := make([]byte, 1024) outgoingHeader := make([]byte, 5) outgoingHeader[0] = byte(0) binary.BigEndian.PutUint32(outgoingHeader[1:], uint32(len(msg))) incomingHeader := make([]byte, 5) if err := ct.Write(s, outgoingHeader, msg, &Options{}); err != nil { t.Fatalf("Error while writing: %v", err) } if _, err := s.Read(incomingHeader); err != nil { t.Fatalf("Error while reading: %v", err) } sz := binary.BigEndian.Uint32(incomingHeader[1:]) recvMsg := make([]byte, int(sz)) if _, err := s.Read(recvMsg); err != nil { t.Fatalf("Error while reading: %v", err) } ct.GracefulClose() var wg sync.WaitGroup // Expect the failure for all the follow-up streams because ct has been closed gracefully. for i := 0; i < 200; i++ { wg.Add(1) go func() { defer wg.Done() str, err := ct.NewStream(context.Background(), &CallHdr{}) if err == ErrConnClosing { return } else if err != nil { t.Errorf("_.NewStream(_, _) = _, %v, want _, %v", err, ErrConnClosing) return } ct.Write(str, nil, nil, &Options{Last: true}) if _, err := str.Read(make([]byte, 8)); err != errStreamDrain && err != ErrConnClosing { t.Errorf("_.Read(_) = _, %v, want _, %v or %v", err, errStreamDrain, ErrConnClosing) } }() } ct.Write(s, nil, nil, &Options{Last: true}) if _, err := s.Read(incomingHeader); err != io.EOF { t.Fatalf("Client expected EOF from the server. Got: %v", err) } // The stream which was created before graceful close can still proceed. wg.Wait() } func TestLargeMessageSuspension(t *testing.T) { server, ct, cancel := setUp(t, 0, math.MaxUint32, suspended) defer cancel() callHdr := &CallHdr{ Host: "localhost", Method: "foo.Large", } // Set a long enough timeout for writing a large message out. ctx, cancel := context.WithTimeout(context.Background(), time.Second) defer cancel() s, err := ct.NewStream(ctx, callHdr) if err != nil { t.Fatalf("failed to open stream: %v", err) } // Launch a goroutine simillar to the stream monitoring goroutine in // stream.go to keep track of context timeout and call CloseStream. go func() { <-ctx.Done() ct.CloseStream(s, ContextErr(ctx.Err())) }() // Write should not be done successfully due to flow control. 
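// The suspended server handler never reads, and the message below is eight times the initial window, so this write stalls on flow control until the one-second deadline fires and the goroutine above closes the stream.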
msg := make([]byte, initialWindowSize*8) ct.Write(s, nil, msg, &Options{}) err = ct.Write(s, nil, msg, &Options{Last: true}) if err != errStreamDone { t.Fatalf("Write got %v, want io.EOF", err) } expectedErr := status.Error(codes.DeadlineExceeded, context.DeadlineExceeded.Error()) if _, err := s.Read(make([]byte, 8)); err.Error() != expectedErr.Error() { t.Fatalf("Read got %v of type %T, want %v", err, err, expectedErr) } ct.Close() server.stop() } func TestMaxStreams(t *testing.T) { serverConfig := &ServerConfig{ MaxStreams: 1, } server, ct, cancel := setUpWithOptions(t, 0, serverConfig, suspended, ConnectOptions{}) defer cancel() defer ct.Close() defer server.stop() callHdr := &CallHdr{ Host: "localhost", Method: "foo.Large", } s, err := ct.NewStream(context.Background(), callHdr) if err != nil { t.Fatalf("Failed to open stream: %v", err) } // Keep creating streams until one fails with deadline exceeded, marking the application // of server settings on client. slist := []*Stream{} pctx, cancel := context.WithCancel(context.Background()) defer cancel() timer := time.NewTimer(time.Second * 10) expectedErr := status.Error(codes.DeadlineExceeded, context.DeadlineExceeded.Error()) for { select { case <-timer.C: t.Fatalf("Test timeout: client didn't receive server settings.") default: } ctx, cancel := context.WithDeadline(pctx, time.Now().Add(time.Second)) // This is only to get rid of govet. All these context are based on a base // context which is canceled at the end of the test. defer cancel() if str, err := ct.NewStream(ctx, callHdr); err == nil { slist = append(slist, str) continue } else if err.Error() != expectedErr.Error() { t.Fatalf("ct.NewStream(_,_) = _, %v, want _, %v", err, expectedErr) } timer.Stop() break } done := make(chan struct{}) // Try and create a new stream. go func() { defer close(done) ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(time.Second*10)) defer cancel() if _, err := ct.NewStream(ctx, callHdr); err != nil { t.Errorf("Failed to open stream: %v", err) } }() // Close all the extra streams created and make sure the new stream is not created. for _, str := range slist { ct.CloseStream(str, nil) } select { case <-done: t.Fatalf("Test failed: didn't expect new stream to be created just yet.") default: } // Close the first stream created so that the new stream can finally be created. ct.CloseStream(s, nil) <-done ct.Close() <-ct.writerDone if ct.maxConcurrentStreams != 1 { t.Fatalf("ct.maxConcurrentStreams: %d, want 1", ct.maxConcurrentStreams) } } func TestServerContextCanceledOnClosedConnection(t *testing.T) { server, ct, cancel := setUp(t, 0, math.MaxUint32, suspended) defer cancel() callHdr := &CallHdr{ Host: "localhost", Method: "foo", } var sc *http2Server // Wait until the server transport is setup. for { server.mu.Lock() if len(server.conns) == 0 { server.mu.Unlock() time.Sleep(time.Millisecond) continue } for k := range server.conns { var ok bool sc, ok = k.(*http2Server) if !ok { t.Fatalf("Failed to convert %v to *http2Server", k) } } server.mu.Unlock() break } s, err := ct.NewStream(context.Background(), callHdr) if err != nil { t.Fatalf("Failed to open stream: %v", err) } ct.controlBuf.put(&dataFrame{ streamID: s.id, endStream: false, h: nil, d: make([]byte, http2MaxFrameLen), onEachWrite: func() {}, }) // Loop until the server side stream is created. 
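// There is no signal for when the server materializes its side of the stream, so poll activeStreams once per second until it shows up.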
var ss *Stream for { time.Sleep(time.Second) sc.mu.Lock() if len(sc.activeStreams) == 0 { sc.mu.Unlock() continue } ss = sc.activeStreams[s.id] sc.mu.Unlock() break } ct.Close() select { case <-ss.Context().Done(): if ss.Context().Err() != context.Canceled { t.Fatalf("ss.Context().Err() got %v, want %v", ss.Context().Err(), context.Canceled) } case <-time.After(5 * time.Second): t.Fatalf("Failed to cancel the context of the sever side stream.") } server.stop() } func TestClientConnDecoupledFromApplicationRead(t *testing.T) { connectOptions := ConnectOptions{ InitialWindowSize: defaultWindowSize, InitialConnWindowSize: defaultWindowSize, } server, client, cancel := setUpWithOptions(t, 0, &ServerConfig{}, notifyCall, connectOptions) defer cancel() defer server.stop() defer client.Close() waitWhileTrue(t, func() (bool, error) { server.mu.Lock() defer server.mu.Unlock() if len(server.conns) == 0 { return true, fmt.Errorf("timed-out while waiting for connection to be created on the server") } return false, nil }) var st *http2Server server.mu.Lock() for k := range server.conns { st = k.(*http2Server) } notifyChan := make(chan struct{}) server.h.notify = notifyChan server.mu.Unlock() cstream1, err := client.NewStream(context.Background(), &CallHdr{}) if err != nil { t.Fatalf("Client failed to create first stream. Err: %v", err) } <-notifyChan var sstream1 *Stream // Access stream on the server. st.mu.Lock() for _, v := range st.activeStreams { if v.id == cstream1.id { sstream1 = v } } st.mu.Unlock() if sstream1 == nil { t.Fatalf("Didn't find stream corresponding to client cstream.id: %v on the server", cstream1.id) } // Exhaust client's connection window. if err := st.Write(sstream1, []byte{}, make([]byte, defaultWindowSize), &Options{}); err != nil { t.Fatalf("Server failed to write data. Err: %v", err) } notifyChan = make(chan struct{}) server.mu.Lock() server.h.notify = notifyChan server.mu.Unlock() // Create another stream on client. cstream2, err := client.NewStream(context.Background(), &CallHdr{}) if err != nil { t.Fatalf("Client failed to create second stream. Err: %v", err) } <-notifyChan var sstream2 *Stream st.mu.Lock() for _, v := range st.activeStreams { if v.id == cstream2.id { sstream2 = v } } st.mu.Unlock() if sstream2 == nil { t.Fatalf("Didn't find stream corresponding to client cstream.id: %v on the server", cstream2.id) } // Server should be able to send data on the new stream, even though the client hasn't read anything on the first stream. if err := st.Write(sstream2, []byte{}, make([]byte, defaultWindowSize), &Options{}); err != nil { t.Fatalf("Server failed to write data. Err: %v", err) } // Client should be able to read data on second stream. if _, err := cstream2.Read(make([]byte, defaultWindowSize)); err != nil { t.Fatalf("_.Read(_) = _, %v, want _, ", err) } // Client should be able to read data on first stream. 
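// By this point the server has pushed two full windows of data without the client application reading any of it, which shows that transport-level window updates are not gated on application reads.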
if _, err := cstream1.Read(make([]byte, defaultWindowSize)); err != nil { t.Fatalf("_.Read(_) = _, %v, want _, ", err) } } func TestServerConnDecoupledFromApplicationRead(t *testing.T) { serverConfig := &ServerConfig{ InitialWindowSize: defaultWindowSize, InitialConnWindowSize: defaultWindowSize, } server, client, cancel := setUpWithOptions(t, 0, serverConfig, suspended, ConnectOptions{}) defer cancel() defer server.stop() defer client.Close() waitWhileTrue(t, func() (bool, error) { server.mu.Lock() defer server.mu.Unlock() if len(server.conns) == 0 { return true, fmt.Errorf("timed-out while waiting for connection to be created on the server") } return false, nil }) var st *http2Server server.mu.Lock() for k := range server.conns { st = k.(*http2Server) } server.mu.Unlock() cstream1, err := client.NewStream(context.Background(), &CallHdr{}) if err != nil { t.Fatalf("Failed to create 1st stream. Err: %v", err) } // Exhaust server's connection window. if err := client.Write(cstream1, nil, make([]byte, defaultWindowSize), &Options{Last: true}); err != nil { t.Fatalf("Client failed to write data. Err: %v", err) } //Client should be able to create another stream and send data on it. cstream2, err := client.NewStream(context.Background(), &CallHdr{}) if err != nil { t.Fatalf("Failed to create 2nd stream. Err: %v", err) } if err := client.Write(cstream2, nil, make([]byte, defaultWindowSize), &Options{}); err != nil { t.Fatalf("Client failed to write data. Err: %v", err) } // Get the streams on server. waitWhileTrue(t, func() (bool, error) { st.mu.Lock() defer st.mu.Unlock() if len(st.activeStreams) != 2 { return true, fmt.Errorf("timed-out while waiting for server to have created the streams") } return false, nil }) var sstream1 *Stream st.mu.Lock() for _, v := range st.activeStreams { if v.id == 1 { sstream1 = v } } st.mu.Unlock() // Reading from the stream on server should succeed. if _, err := sstream1.Read(make([]byte, defaultWindowSize)); err != nil { t.Fatalf("_.Read(_) = %v, want ", err) } if _, err := sstream1.Read(make([]byte, 1)); err != io.EOF { t.Fatalf("_.Read(_) = %v, want io.EOF", err) } } func TestServerWithMisbehavedClient(t *testing.T) { server := setUpServerOnly(t, 0, &ServerConfig{}, suspended) defer server.stop() // Create a client that can override server stream quota. mconn, err := net.Dial("tcp", server.lis.Addr().String()) if err != nil { t.Fatalf("Clent failed to dial:%v", err) } defer mconn.Close() if err := mconn.SetWriteDeadline(time.Now().Add(time.Second * 10)); err != nil { t.Fatalf("Failed to set write deadline: %v", err) } if n, err := mconn.Write(clientPreface); err != nil || n != len(clientPreface) { t.Fatalf("mconn.Write(clientPreface) = %d, %v, want %d, ", n, err, len(clientPreface)) } // success chan indicates that reader received a RSTStream from server. success := make(chan struct{}) var mu sync.Mutex framer := http2.NewFramer(mconn, mconn) if err := framer.WriteSettings(); err != nil { t.Fatalf("Error while writing settings: %v", err) } go func() { // Launch a reader for this misbehaving client. for { frame, err := framer.ReadFrame() if err != nil { return } switch frame := frame.(type) { case *http2.PingFrame: // Write ping ack back so that server's BDP estimation works right. 
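// Writes on the shared framer are guarded by mu because the test's main goroutine is concurrently writing HEADERS and DATA frames on the same connection.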
mu.Lock() framer.WritePing(true, frame.Data) mu.Unlock() case *http2.RSTStreamFrame: if frame.Header().StreamID != 1 || http2.ErrCode(frame.ErrCode) != http2.ErrCodeFlowControl { t.Errorf("RST stream received with streamID: %d and code: %v, want streamID: 1 and code: http2.ErrCodeFlowControl", frame.Header().StreamID, http2.ErrCode(frame.ErrCode)) } close(success) return default: // Do nothing. } } }() // Create a stream. var buf bytes.Buffer henc := hpack.NewEncoder(&buf) // TODO(mmukhi): Remove unnecessary fields. if err := henc.WriteField(hpack.HeaderField{Name: ":method", Value: "POST"}); err != nil { t.Fatalf("Error while encoding header: %v", err) } if err := henc.WriteField(hpack.HeaderField{Name: ":path", Value: "foo"}); err != nil { t.Fatalf("Error while encoding header: %v", err) } if err := henc.WriteField(hpack.HeaderField{Name: ":authority", Value: "localhost"}); err != nil { t.Fatalf("Error while encoding header: %v", err) } if err := henc.WriteField(hpack.HeaderField{Name: "content-type", Value: "application/grpc"}); err != nil { t.Fatalf("Error while encoding header: %v", err) } mu.Lock() if err := framer.WriteHeaders(http2.HeadersFrameParam{StreamID: 1, BlockFragment: buf.Bytes(), EndHeaders: true}); err != nil { mu.Unlock() t.Fatalf("Error while writing headers: %v", err) } mu.Unlock() // Test server behavior for violation of stream flow control window size restriction. timer := time.NewTimer(time.Second * 5) dbuf := make([]byte, http2MaxFrameLen) for { select { case <-timer.C: t.Fatalf("Test timed out.") case <-success: return default: } mu.Lock() if err := framer.WriteData(1, false, dbuf); err != nil { mu.Unlock() // Error here means the server could have closed the connection due to flow control // violation. Make sure that is the case by waiting for success chan to be closed. select { case <-timer.C: t.Fatalf("Error while writing data: %v", err) case <-success: return } } mu.Unlock() // This for loop is capable of hogging the CPU and cause starvation // in Go versions prior to 1.9, // in single CPU environment. Explicitly relinquish processor. runtime.Gosched() } } func TestClientWithMisbehavedServer(t *testing.T) { // Create a misbehaving server. lis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Error while listening: %v", err) } defer lis.Close() // success chan indicates that the server received // RSTStream from the client. success := make(chan struct{}) go func() { // Launch the misbehaving server. sconn, err := lis.Accept() if err != nil { t.Errorf("Error while accepting: %v", err) return } defer sconn.Close() if _, err := io.ReadFull(sconn, make([]byte, len(clientPreface))); err != nil { t.Errorf("Error while reading clieng preface: %v", err) return } sfr := http2.NewFramer(sconn, sconn) if err := sfr.WriteSettingsAck(); err != nil { t.Errorf("Error while writing settings: %v", err) return } var mu sync.Mutex for { frame, err := sfr.ReadFrame() if err != nil { return } switch frame := frame.(type) { case *http2.HeadersFrame: // When the client creates a stream, violate the stream flow control. go func() { buf := make([]byte, http2MaxFrameLen) for { mu.Lock() if err := sfr.WriteData(1, false, buf); err != nil { mu.Unlock() return } mu.Unlock() // This for loop is capable of hogging the CPU and cause starvation // in Go versions prior to 1.9, // in single CPU environment. Explicitly relinquish processor. 
runtime.Gosched() } }() case *http2.RSTStreamFrame: if frame.Header().StreamID != 1 || http2.ErrCode(frame.ErrCode) != http2.ErrCodeFlowControl { t.Errorf("RST stream received with streamID: %d and code: %v, want streamID: 1 and code: http2.ErrCodeFlowControl", frame.Header().StreamID, http2.ErrCode(frame.ErrCode)) } close(success) return case *http2.PingFrame: mu.Lock() sfr.WritePing(true, frame.Data) mu.Unlock() default: } } }() connectCtx, cancel := context.WithDeadline(context.Background(), time.Now().Add(2*time.Second)) defer cancel() ct, err := NewClientTransport(connectCtx, context.Background(), TargetInfo{Addr: lis.Addr().String()}, ConnectOptions{}, func() {}, func(GoAwayReason) {}, func() {}) if err != nil { t.Fatalf("Error while creating client transport: %v", err) } defer ct.Close() str, err := ct.NewStream(context.Background(), &CallHdr{}) if err != nil { t.Fatalf("Error while creating stream: %v", err) } timer := time.NewTimer(time.Second * 5) go func() { // This go routine mimics the one in stream.go to call CloseStream. <-str.Done() ct.CloseStream(str, nil) }() select { case <-timer.C: t.Fatalf("Test timed-out.") case <-success: } } var encodingTestStatus = status.New(codes.Internal, "\n") func TestEncodingRequiredStatus(t *testing.T) { server, ct, cancel := setUp(t, 0, math.MaxUint32, encodingRequiredStatus) defer cancel() callHdr := &CallHdr{ Host: "localhost", Method: "foo", } s, err := ct.NewStream(context.Background(), callHdr) if err != nil { return } opts := Options{Last: true} if err := ct.Write(s, nil, expectedRequest, &opts); err != nil && err != errStreamDone { t.Fatalf("Failed to write the request: %v", err) } p := make([]byte, http2MaxFrameLen) if _, err := s.trReader.(*transportReader).Read(p); err != io.EOF { t.Fatalf("Read got error %v, want %v", err, io.EOF) } if !testutils.StatusErrEqual(s.Status().Err(), encodingTestStatus.Err()) { t.Fatalf("stream with status %v, want %v", s.Status(), encodingTestStatus) } ct.Close() server.stop() } func TestInvalidHeaderField(t *testing.T) { server, ct, cancel := setUp(t, 0, math.MaxUint32, invalidHeaderField) defer cancel() callHdr := &CallHdr{ Host: "localhost", Method: "foo", } s, err := ct.NewStream(context.Background(), callHdr) if err != nil { return } p := make([]byte, http2MaxFrameLen) _, err = s.trReader.(*transportReader).Read(p) if se, ok := status.FromError(err); !ok || se.Code() != codes.Internal || !strings.Contains(err.Error(), expectedInvalidHeaderField) { t.Fatalf("Read got error %v, want error with code %s and contains %q", err, codes.Internal, expectedInvalidHeaderField) } ct.Close() server.stop() } func TestHeaderChanClosedAfterReceivingAnInvalidHeader(t *testing.T) { server, ct, cancel := setUp(t, 0, math.MaxUint32, invalidHeaderField) defer cancel() defer server.stop() defer ct.Close() s, err := ct.NewStream(context.Background(), &CallHdr{Host: "localhost", Method: "foo"}) if err != nil { t.Fatalf("failed to create the stream") } timer := time.NewTimer(time.Second) defer timer.Stop() select { case <-s.headerChan: case <-timer.C: t.Errorf("s.headerChan: got open, want closed") } } func TestIsReservedHeader(t *testing.T) { tests := []struct { h string want bool }{ {"", false}, // but should be rejected earlier {"foo", false}, {"content-type", true}, {"user-agent", true}, {":anything", true}, {"grpc-message-type", true}, {"grpc-encoding", true}, {"grpc-message", true}, {"grpc-status", true}, {"grpc-timeout", true}, {"te", true}, } for _, tt := range tests { got := isReservedHeader(tt.h) if got != 
tt.want { t.Errorf("isReservedHeader(%q) = %v; want %v", tt.h, got, tt.want) } } } func TestContextErr(t *testing.T) { for _, test := range []struct { // input errIn error // outputs errOut error }{ {context.DeadlineExceeded, status.Error(codes.DeadlineExceeded, context.DeadlineExceeded.Error())}, {context.Canceled, status.Error(codes.Canceled, context.Canceled.Error())}, } { err := ContextErr(test.errIn) if err.Error() != test.errOut.Error() { t.Fatalf("ContextErr{%v} = %v \nwant %v", test.errIn, err, test.errOut) } } } type windowSizeConfig struct { serverStream int32 serverConn int32 clientStream int32 clientConn int32 } func TestAccountCheckWindowSizeWithLargeWindow(t *testing.T) { wc := windowSizeConfig{ serverStream: 10 * 1024 * 1024, serverConn: 12 * 1024 * 1024, clientStream: 6 * 1024 * 1024, clientConn: 8 * 1024 * 1024, } testFlowControlAccountCheck(t, 1024*1024, wc) } func TestAccountCheckWindowSizeWithSmallWindow(t *testing.T) { wc := windowSizeConfig{ serverStream: defaultWindowSize, // Note this is smaller than initialConnWindowSize which is the current default. serverConn: defaultWindowSize, clientStream: defaultWindowSize, clientConn: defaultWindowSize, } testFlowControlAccountCheck(t, 1024*1024, wc) } func TestAccountCheckDynamicWindowSmallMessage(t *testing.T) { testFlowControlAccountCheck(t, 1024, windowSizeConfig{}) } func TestAccountCheckDynamicWindowLargeMessage(t *testing.T) { testFlowControlAccountCheck(t, 1024*1024, windowSizeConfig{}) } func testFlowControlAccountCheck(t *testing.T, msgSize int, wc windowSizeConfig) { sc := &ServerConfig{ InitialWindowSize: wc.serverStream, InitialConnWindowSize: wc.serverConn, } co := ConnectOptions{ InitialWindowSize: wc.clientStream, InitialConnWindowSize: wc.clientConn, } server, client, cancel := setUpWithOptions(t, 0, sc, pingpong, co) defer cancel() defer server.stop() defer client.Close() waitWhileTrue(t, func() (bool, error) { server.mu.Lock() defer server.mu.Unlock() if len(server.conns) == 0 { return true, fmt.Errorf("timed out while waiting for server transport to be created") } return false, nil }) var st *http2Server server.mu.Lock() for k := range server.conns { st = k.(*http2Server) } server.mu.Unlock() const numStreams = 10 clientStreams := make([]*Stream, numStreams) for i := 0; i < numStreams; i++ { var err error clientStreams[i], err = client.NewStream(context.Background(), &CallHdr{}) if err != nil { t.Fatalf("Failed to create stream. Err: %v", err) } } var wg sync.WaitGroup // For each stream send pingpong messages to the server. 
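// Messages travel in gRPC wire format: a 5-byte prefix (1-byte compression flag, zero here, followed by the big-endian payload length) and then msgSize bytes of payload; responses are parsed back the same way.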
for _, stream := range clientStreams { wg.Add(1) go func(stream *Stream) { defer wg.Done() buf := make([]byte, msgSize+5) buf[0] = byte(0) binary.BigEndian.PutUint32(buf[1:], uint32(msgSize)) opts := Options{} header := make([]byte, 5) for i := 1; i <= 10; i++ { if err := client.Write(stream, nil, buf, &opts); err != nil { t.Errorf("Error on client while writing message: %v", err) return } if _, err := stream.Read(header); err != nil { t.Errorf("Error on client while reading data frame header: %v", err) return } sz := binary.BigEndian.Uint32(header[1:]) recvMsg := make([]byte, int(sz)) if _, err := stream.Read(recvMsg); err != nil { t.Errorf("Error on client while reading data: %v", err) return } if len(recvMsg) != msgSize { t.Errorf("Length of message received by client: %v, want: %v", len(recvMsg), msgSize) return } } }(stream) } wg.Wait() serverStreams := map[uint32]*Stream{} loopyClientStreams := map[uint32]*outStream{} loopyServerStreams := map[uint32]*outStream{} // Get all the streams from server reader and writer and client writer. st.mu.Lock() for _, stream := range clientStreams { id := stream.id serverStreams[id] = st.activeStreams[id] loopyServerStreams[id] = st.loopy.estdStreams[id] loopyClientStreams[id] = client.loopy.estdStreams[id] } st.mu.Unlock() // Close all streams for _, stream := range clientStreams { client.Write(stream, nil, nil, &Options{Last: true}) if _, err := stream.Read(make([]byte, 5)); err != io.EOF { t.Fatalf("Client expected an EOF from the server. Got: %v", err) } } // Close down both server and client so that their internals can be read without data // races. client.Close() st.Close() <-st.readerDone <-st.writerDone <-client.readerDone <-client.writerDone for _, cstream := range clientStreams { id := cstream.id sstream := serverStreams[id] loopyServerStream := loopyServerStreams[id] loopyClientStream := loopyClientStreams[id] // Check stream flow control. if int(cstream.fc.limit+cstream.fc.delta-cstream.fc.pendingData-cstream.fc.pendingUpdate) != int(st.loopy.oiws)-loopyServerStream.bytesOutStanding { t.Fatalf("Account mismatch: client stream inflow limit(%d) + delta(%d) - pendingData(%d) - pendingUpdate(%d) != server outgoing InitialWindowSize(%d) - outgoingStream.bytesOutStanding(%d)", cstream.fc.limit, cstream.fc.delta, cstream.fc.pendingData, cstream.fc.pendingUpdate, st.loopy.oiws, loopyServerStream.bytesOutStanding) } if int(sstream.fc.limit+sstream.fc.delta-sstream.fc.pendingData-sstream.fc.pendingUpdate) != int(client.loopy.oiws)-loopyClientStream.bytesOutStanding { t.Fatalf("Account mismatch: server stream inflow limit(%d) + delta(%d) - pendingData(%d) - pendingUpdate(%d) != client outgoing InitialWindowSize(%d) - outgoingStream.bytesOutStanding(%d)", sstream.fc.limit, sstream.fc.delta, sstream.fc.pendingData, sstream.fc.pendingUpdate, client.loopy.oiws, loopyClientStream.bytesOutStanding) } } // Check transport flow control. 
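// Once both loopy writers have drained, each receive window (fc.limit) must equal the bytes it has not yet acked plus the send quota still available to its peer; a mismatch means window updates were lost or double-counted.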
if client.fc.limit != client.fc.unacked+st.loopy.sendQuota { t.Fatalf("Account mismatch: client transport inflow(%d) != client unacked(%d) + server sendQuota(%d)", client.fc.limit, client.fc.unacked, st.loopy.sendQuota) } if st.fc.limit != st.fc.unacked+client.loopy.sendQuota { t.Fatalf("Account mismatch: server transport inflow(%d) != server unacked(%d) + client sendQuota(%d)", st.fc.limit, st.fc.unacked, client.loopy.sendQuota) } } func waitWhileTrue(t *testing.T, condition func() (bool, error)) { var ( wait bool err error ) timer := time.NewTimer(time.Second * 5) for { wait, err = condition() if wait { select { case <-timer.C: t.Fatalf(err.Error()) default: time.Sleep(50 * time.Millisecond) continue } } if !timer.Stop() { <-timer.C } break } } // If any error occurs on a call to Stream.Read, future calls // should continue to return that same error. func TestReadGivesSameErrorAfterAnyErrorOccurs(t *testing.T) { testRecvBuffer := newRecvBuffer() s := &Stream{ ctx: context.Background(), buf: testRecvBuffer, requestRead: func(int) {}, } s.trReader = &transportReader{ reader: &recvBufferReader{ ctx: s.ctx, ctxDone: s.ctx.Done(), recv: s.buf, freeBuffer: func(*bytes.Buffer) {}, }, windowHandler: func(int) {}, } testData := make([]byte, 1) testData[0] = 5 testBuffer := bytes.NewBuffer(testData) testErr := errors.New("test error") s.write(recvMsg{buffer: testBuffer, err: testErr}) inBuf := make([]byte, 1) actualCount, actualErr := s.Read(inBuf) if actualCount != 0 { t.Errorf("actualCount, _ := s.Read(_) differs; want 0; got %v", actualCount) } if actualErr.Error() != testErr.Error() { t.Errorf("_ , actualErr := s.Read(_) differs; want actualErr.Error() to be %v; got %v", testErr.Error(), actualErr.Error()) } s.write(recvMsg{buffer: testBuffer, err: nil}) s.write(recvMsg{buffer: testBuffer, err: errors.New("different error from first")}) for i := 0; i < 2; i++ { inBuf := make([]byte, 1) actualCount, actualErr := s.Read(inBuf) if actualCount != 0 { t.Errorf("actualCount, _ := s.Read(_) differs; want %v; got %v", 0, actualCount) } if actualErr.Error() != testErr.Error() { t.Errorf("_ , actualErr := s.Read(_) differs; want actualErr.Error() to be %v; got %v", testErr.Error(), actualErr.Error()) } } } func TestPingPong1B(t *testing.T) { runPingPongTest(t, 1) } func TestPingPong1KB(t *testing.T) { runPingPongTest(t, 1024) } func TestPingPong64KB(t *testing.T) { runPingPongTest(t, 65536) } func TestPingPong1MB(t *testing.T) { runPingPongTest(t, 1048576) } //This is a stress-test of flow control logic. func runPingPongTest(t *testing.T, msgSize int) { server, client, cancel := setUp(t, 0, 0, pingpong) defer cancel() defer server.stop() defer client.Close() waitWhileTrue(t, func() (bool, error) { server.mu.Lock() defer server.mu.Unlock() if len(server.conns) == 0 { return true, fmt.Errorf("timed out while waiting for server transport to be created") } return false, nil }) stream, err := client.NewStream(context.Background(), &CallHdr{}) if err != nil { t.Fatalf("Failed to create stream. Err: %v", err) } msg := make([]byte, msgSize) outgoingHeader := make([]byte, 5) outgoingHeader[0] = byte(0) binary.BigEndian.PutUint32(outgoingHeader[1:], uint32(msgSize)) opts := &Options{} incomingHeader := make([]byte, 5) done := make(chan struct{}) go func() { timer := time.NewTimer(time.Second * 5) <-timer.C close(done) }() for { select { case <-done: client.Write(stream, nil, nil, &Options{Last: true}) if _, err := stream.Read(incomingHeader); err != io.EOF { t.Fatalf("Client expected EOF from the server. 
Got: %v", err) } return default: if err := client.Write(stream, outgoingHeader, msg, opts); err != nil { t.Fatalf("Error on client while writing message. Err: %v", err) } if _, err := stream.Read(incomingHeader); err != nil { t.Fatalf("Error on client while reading data header. Err: %v", err) } sz := binary.BigEndian.Uint32(incomingHeader[1:]) recvMsg := make([]byte, int(sz)) if _, err := stream.Read(recvMsg); err != nil { t.Fatalf("Error on client while reading data. Err: %v", err) } } } } type tableSizeLimit struct { mu sync.Mutex limits []uint32 } func (t *tableSizeLimit) add(limit uint32) { t.mu.Lock() t.limits = append(t.limits, limit) t.mu.Unlock() } func (t *tableSizeLimit) getLen() int { t.mu.Lock() defer t.mu.Unlock() return len(t.limits) } func (t *tableSizeLimit) getIndex(i int) uint32 { t.mu.Lock() defer t.mu.Unlock() return t.limits[i] } func TestHeaderTblSize(t *testing.T) { limits := &tableSizeLimit{} updateHeaderTblSize = func(e *hpack.Encoder, v uint32) { e.SetMaxDynamicTableSizeLimit(v) limits.add(v) } defer func() { updateHeaderTblSize = func(e *hpack.Encoder, v uint32) { e.SetMaxDynamicTableSizeLimit(v) } }() server, ct, cancel := setUp(t, 0, math.MaxUint32, normal) defer cancel() defer ct.Close() defer server.stop() _, err := ct.NewStream(context.Background(), &CallHdr{}) if err != nil { t.Fatalf("failed to open stream: %v", err) } var svrTransport ServerTransport var i int for i = 0; i < 1000; i++ { server.mu.Lock() if len(server.conns) != 0 { server.mu.Unlock() break } server.mu.Unlock() time.Sleep(10 * time.Millisecond) continue } if i == 1000 { t.Fatalf("unable to create any server transport after 10s") } for st := range server.conns { svrTransport = st break } svrTransport.(*http2Server).controlBuf.put(&outgoingSettings{ ss: []http2.Setting{ { ID: http2.SettingHeaderTableSize, Val: uint32(100), }, }, }) for i = 0; i < 1000; i++ { if limits.getLen() != 1 { time.Sleep(10 * time.Millisecond) continue } if val := limits.getIndex(0); val != uint32(100) { t.Fatalf("expected limits[0] = 100, got %d", val) } break } if i == 1000 { t.Fatalf("expected len(limits) = 1 within 10s, got != 1") } ct.controlBuf.put(&outgoingSettings{ ss: []http2.Setting{ { ID: http2.SettingHeaderTableSize, Val: uint32(200), }, }, }) for i := 0; i < 1000; i++ { if limits.getLen() != 2 { time.Sleep(10 * time.Millisecond) continue } if val := limits.getIndex(1); val != uint32(200) { t.Fatalf("expected limits[1] = 200, got %d", val) } break } if i == 1000 { t.Fatalf("expected len(limits) = 2 within 10s, got != 2") } } // TestTCPUserTimeout tests that the TCP_USER_TIMEOUT socket option is set to the // keepalive timeout, as detailed in proposal A18 func TestTCPUserTimeout(t *testing.T) { tests := []struct { time time.Duration timeout time.Duration }{ { 10 * time.Second, 10 * time.Second, }, { 0, 0, }, } for _, tt := range tests { server, client, cancel := setUpWithOptions( t, 0, &ServerConfig{ KeepaliveParams: keepalive.ServerParameters{ Time: tt.timeout, Timeout: tt.timeout, }, }, normal, ConnectOptions{ KeepaliveParams: keepalive.ClientParameters{ Time: tt.time, Timeout: tt.timeout, }, }, ) defer cancel() defer server.stop() defer client.Close() stream, err := client.NewStream(context.Background(), &CallHdr{}) if err != nil { t.Fatalf("Client failed to create RPC request: %v", err) } client.closeStream(stream, io.EOF, true, http2.ErrCodeCancel, nil, nil, false) opt, err := syscall.GetTCPUserTimeout(client.conn) if err != nil { t.Fatalf("GetTCPUserTimeout error: %v", err) } if opt < 0 { 
t.Skipf("skipping test on unsupported environment") } if timeoutMS := int(tt.timeout / time.Millisecond); timeoutMS != opt { t.Fatalf("wrong TCP_USER_TIMEOUT set on conn. expected %d. got %d", timeoutMS, opt) } } } grpc-go-1.22.1/interop/000077500000000000000000000000001351635773100146265ustar00rootroot00000000000000grpc-go-1.22.1/interop/alts/000077500000000000000000000000001351635773100155715ustar00rootroot00000000000000grpc-go-1.22.1/interop/alts/client/000077500000000000000000000000001351635773100170475ustar00rootroot00000000000000grpc-go-1.22.1/interop/alts/client/client.go000066400000000000000000000037471351635773100206670ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // This binary can only run on Google Cloud Platform (GCP). package main import ( "context" "flag" "time" grpc "google.golang.org/grpc" "google.golang.org/grpc/credentials/alts" "google.golang.org/grpc/grpclog" testpb "google.golang.org/grpc/interop/grpc_testing" ) var ( hsAddr = flag.String("alts_handshaker_service_address", "", "ALTS handshaker gRPC service address") serverAddr = flag.String("server_address", ":8080", "The port on which the server is listening") ) func main() { flag.Parse() opts := alts.DefaultClientOptions() if *hsAddr != "" { opts.HandshakerServiceAddress = *hsAddr } altsTC := alts.NewClientCreds(opts) // Block until the server is ready. conn, err := grpc.Dial(*serverAddr, grpc.WithTransportCredentials(altsTC), grpc.WithBlock()) if err != nil { grpclog.Fatalf("gRPC Client: failed to dial the server at %v: %v", *serverAddr, err) } defer conn.Close() grpcClient := testpb.NewTestServiceClient(conn) // Call the EmptyCall API. ctx := context.Background() request := &testpb.Empty{} if _, err := grpcClient.EmptyCall(ctx, request); err != nil { grpclog.Fatalf("grpc Client: EmptyCall(_, %v) failed: %v", request, err) } grpclog.Info("grpc Client: empty call succeeded") // This sleep prevents the connection from being abruptly disconnected // when running this binary (along with grpc_server) on GCP dev cluster. time.Sleep(1 * time.Second) } grpc-go-1.22.1/interop/alts/server/000077500000000000000000000000001351635773100170775ustar00rootroot00000000000000grpc-go-1.22.1/interop/alts/server/server.go000066400000000000000000000055131351635773100207400ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // This binary can only run on Google Cloud Platform (GCP). 
package main import ( "context" "flag" "net" "strings" grpc "google.golang.org/grpc" "google.golang.org/grpc/credentials/alts" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/interop" testpb "google.golang.org/grpc/interop/grpc_testing" "google.golang.org/grpc/tap" ) const ( udsAddrPrefix = "unix:" ) var ( hsAddr = flag.String("alts_handshaker_service_address", "", "ALTS handshaker gRPC service address") serverAddr = flag.String("server_address", ":8080", "The address on which the server is listening. Only two types of addresses are supported, 'host:port' and 'unix:/path'.") ) func main() { flag.Parse() // If the server address starts with `unix:`, then we have a UDS address. network := "tcp" address := *serverAddr if strings.HasPrefix(address, udsAddrPrefix) { network = "unix" address = strings.TrimPrefix(address, udsAddrPrefix) } lis, err := net.Listen(network, address) if err != nil { grpclog.Fatalf("gRPC Server: failed to start the server at %v: %v", address, err) } opts := alts.DefaultServerOptions() if *hsAddr != "" { opts.HandshakerServiceAddress = *hsAddr } altsTC := alts.NewServerCreds(opts) grpcServer := grpc.NewServer(grpc.Creds(altsTC), grpc.InTapHandle(authz)) testpb.RegisterTestServiceServer(grpcServer, interop.NewTestServer()) grpcServer.Serve(lis) } // authz shows how to access client information at the server side to perform // application-layer authorization checks. func authz(ctx context.Context, info *tap.Info) (context.Context, error) { authInfo, err := alts.AuthInfoFromContext(ctx) if err != nil { return nil, err } // Access all alts.AuthInfo data: grpclog.Infof("authInfo.ApplicationProtocol() = %v", authInfo.ApplicationProtocol()) grpclog.Infof("authInfo.RecordProtocol() = %v", authInfo.RecordProtocol()) grpclog.Infof("authInfo.SecurityLevel() = %v", authInfo.SecurityLevel()) grpclog.Infof("authInfo.PeerServiceAccount() = %v", authInfo.PeerServiceAccount()) grpclog.Infof("authInfo.LocalServiceAccount() = %v", authInfo.LocalServiceAccount()) grpclog.Infof("authInfo.PeerRPCVersions() = %v", authInfo.PeerRPCVersions()) grpclog.Infof("info.FullMethodName = %v", info.FullMethodName) return ctx, nil } grpc-go-1.22.1/interop/client/000077500000000000000000000000001351635773100161045ustar00rootroot00000000000000grpc-go-1.22.1/interop/client/client.go000066400000000000000000000257311351635773100177210ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package main import ( "flag" "net" "strconv" "google.golang.org/grpc" _ "google.golang.org/grpc/balancer/grpclb" "google.golang.org/grpc/credentials" "google.golang.org/grpc/credentials/alts" "google.golang.org/grpc/credentials/google" "google.golang.org/grpc/credentials/oauth" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/interop" testpb "google.golang.org/grpc/interop/grpc_testing" "google.golang.org/grpc/resolver" "google.golang.org/grpc/testdata" ) const ( googleDefaultCredsName = "google_default_credentials" computeEngineCredsName = "compute_engine_channel_creds" ) var ( caFile = flag.String("ca_file", "", "The file containning the CA root cert file") useTLS = flag.Bool("use_tls", false, "Connection uses TLS if true") useALTS = flag.Bool("use_alts", false, "Connection uses ALTS if true (this option can only be used on GCP)") customCredentialsType = flag.String("custom_credentials_type", "", "Custom creds to use, excluding TLS or ALTS") altsHSAddr = flag.String("alts_handshaker_service_address", "", "ALTS handshaker gRPC service address") testCA = flag.Bool("use_test_ca", false, "Whether to replace platform root CAs with test CA as the CA root") serviceAccountKeyFile = flag.String("service_account_key_file", "", "Path to service account json key file") oauthScope = flag.String("oauth_scope", "", "The scope for OAuth2 tokens") defaultServiceAccount = flag.String("default_service_account", "", "Email of GCE default service account") serverHost = flag.String("server_host", "localhost", "The server host name") serverPort = flag.Int("server_port", 10000, "The server port number") tlsServerName = flag.String("server_host_override", "", "The server name use to verify the hostname returned by TLS handshake if it is not empty. Otherwise, --server_host is used.") testCase = flag.String("test_case", "large_unary", `Configure different test cases. 
Valid options are: empty_unary : empty (zero bytes) request and response; large_unary : single request and (large) response; client_streaming : request streaming with single response; server_streaming : single request with response streaming; ping_pong : full-duplex streaming; empty_stream : full-duplex streaming with zero message; timeout_on_sleeping_server: fullduplex streaming on a sleeping server; compute_engine_creds: large_unary with compute engine auth; service_account_creds: large_unary with service account auth; jwt_token_creds: large_unary with jwt token auth; per_rpc_creds: large_unary with per rpc token; oauth2_auth_token: large_unary with oauth2 token auth; google_default_credentials: large_unary with google default credentials compute_engine_channel_credentials: large_unary with compute engine creds cancel_after_begin: cancellation after metadata has been sent but before payloads are sent; cancel_after_first_response: cancellation after receiving 1st message from the server; status_code_and_message: status code propagated back to client; special_status_message: Unicode and whitespace is correctly processed in status message; custom_metadata: server will echo custom metadata; unimplemented_method: client attempts to call unimplemented method; unimplemented_service: client attempts to call unimplemented service; pick_first_unary: all requests are sent to one server despite multiple servers are resolved.`) ) type credsMode uint8 const ( credsNone credsMode = iota credsTLS credsALTS credsGoogleDefaultCreds credsComputeEngineCreds ) func main() { flag.Parse() var useGDC bool // use google default creds var useCEC bool // use compute engine creds if *customCredentialsType != "" { switch *customCredentialsType { case googleDefaultCredsName: useGDC = true case computeEngineCredsName: useCEC = true default: grpclog.Fatalf("If set, custom_credentials_type can only be set to one of %v or %v", googleDefaultCredsName, computeEngineCredsName) } } if (*useTLS && *useALTS) || (*useTLS && useGDC) || (*useALTS && useGDC) || (*useTLS && useCEC) || (*useALTS && useCEC) { grpclog.Fatalf("only one of TLS, ALTS, google default creds, or compute engine creds can be used") } var credsChosen credsMode switch { case *useTLS: credsChosen = credsTLS case *useALTS: credsChosen = credsALTS case useGDC: credsChosen = credsGoogleDefaultCreds case useCEC: credsChosen = credsComputeEngineCreds } resolver.SetDefaultScheme("dns") serverAddr := net.JoinHostPort(*serverHost, strconv.Itoa(*serverPort)) var opts []grpc.DialOption switch credsChosen { case credsTLS: var sn string if *tlsServerName != "" { sn = *tlsServerName } var creds credentials.TransportCredentials if *testCA { var err error if *caFile == "" { *caFile = testdata.Path("ca.pem") } creds, err = credentials.NewClientTLSFromFile(*caFile, sn) if err != nil { grpclog.Fatalf("Failed to create TLS credentials %v", err) } } else { creds = credentials.NewClientTLSFromCert(nil, sn) } opts = append(opts, grpc.WithTransportCredentials(creds)) case credsALTS: altsOpts := alts.DefaultClientOptions() if *altsHSAddr != "" { altsOpts.HandshakerServiceAddress = *altsHSAddr } altsTC := alts.NewClientCreds(altsOpts) opts = append(opts, grpc.WithTransportCredentials(altsTC)) case credsGoogleDefaultCreds: opts = append(opts, grpc.WithCredentialsBundle(google.NewDefaultCredentials())) case credsComputeEngineCreds: opts = append(opts, grpc.WithCredentialsBundle(google.NewComputeEngineCredentials())) case credsNone: opts = append(opts, grpc.WithInsecure()) default: 
grpclog.Fatal("Invalid creds") } if credsChosen == credsTLS { if *testCase == "compute_engine_creds" { opts = append(opts, grpc.WithPerRPCCredentials(oauth.NewComputeEngine())) } else if *testCase == "service_account_creds" { jwtCreds, err := oauth.NewServiceAccountFromFile(*serviceAccountKeyFile, *oauthScope) if err != nil { grpclog.Fatalf("Failed to create JWT credentials: %v", err) } opts = append(opts, grpc.WithPerRPCCredentials(jwtCreds)) } else if *testCase == "jwt_token_creds" { jwtCreds, err := oauth.NewJWTAccessFromFile(*serviceAccountKeyFile) if err != nil { grpclog.Fatalf("Failed to create JWT credentials: %v", err) } opts = append(opts, grpc.WithPerRPCCredentials(jwtCreds)) } else if *testCase == "oauth2_auth_token" { opts = append(opts, grpc.WithPerRPCCredentials(oauth.NewOauthAccess(interop.GetToken(*serviceAccountKeyFile, *oauthScope)))) } } opts = append(opts, grpc.WithBlock()) conn, err := grpc.Dial(serverAddr, opts...) if err != nil { grpclog.Fatalf("Fail to dial: %v", err) } defer conn.Close() tc := testpb.NewTestServiceClient(conn) switch *testCase { case "empty_unary": interop.DoEmptyUnaryCall(tc) grpclog.Infoln("EmptyUnaryCall done") case "large_unary": interop.DoLargeUnaryCall(tc) grpclog.Infoln("LargeUnaryCall done") case "client_streaming": interop.DoClientStreaming(tc) grpclog.Infoln("ClientStreaming done") case "server_streaming": interop.DoServerStreaming(tc) grpclog.Infoln("ServerStreaming done") case "ping_pong": interop.DoPingPong(tc) grpclog.Infoln("Pingpong done") case "empty_stream": interop.DoEmptyStream(tc) grpclog.Infoln("Emptystream done") case "timeout_on_sleeping_server": interop.DoTimeoutOnSleepingServer(tc) grpclog.Infoln("TimeoutOnSleepingServer done") case "compute_engine_creds": if credsChosen != credsTLS { grpclog.Fatalf("TLS credentials need to be set for compute_engine_creds test case.") } interop.DoComputeEngineCreds(tc, *defaultServiceAccount, *oauthScope) grpclog.Infoln("ComputeEngineCreds done") case "service_account_creds": if credsChosen != credsTLS { grpclog.Fatalf("TLS credentials need to be set for service_account_creds test case.") } interop.DoServiceAccountCreds(tc, *serviceAccountKeyFile, *oauthScope) grpclog.Infoln("ServiceAccountCreds done") case "jwt_token_creds": if credsChosen != credsTLS { grpclog.Fatalf("TLS credentials need to be set for jwt_token_creds test case.") } interop.DoJWTTokenCreds(tc, *serviceAccountKeyFile) grpclog.Infoln("JWTtokenCreds done") case "per_rpc_creds": if credsChosen != credsTLS { grpclog.Fatalf("TLS credentials need to be set for per_rpc_creds test case.") } interop.DoPerRPCCreds(tc, *serviceAccountKeyFile, *oauthScope) grpclog.Infoln("PerRPCCreds done") case "oauth2_auth_token": if credsChosen != credsTLS { grpclog.Fatalf("TLS credentials need to be set for oauth2_auth_token test case.") } interop.DoOauth2TokenCreds(tc, *serviceAccountKeyFile, *oauthScope) grpclog.Infoln("Oauth2TokenCreds done") case "google_default_credentials": if credsChosen != credsGoogleDefaultCreds { grpclog.Fatalf("GoogleDefaultCredentials need to be set for google_default_credentials test case.") } interop.DoGoogleDefaultCredentials(tc, *defaultServiceAccount) grpclog.Infoln("GoogleDefaultCredentials done") case "compute_engine_channel_credentials": if credsChosen != credsComputeEngineCreds { grpclog.Fatalf("ComputeEngineCreds need to be set for compute_engine_channel_credentials test case.") } interop.DoComputeEngineChannelCredentials(tc, *defaultServiceAccount) grpclog.Infoln("ComputeEngineChannelCredentials done") case 
"cancel_after_begin": interop.DoCancelAfterBegin(tc) grpclog.Infoln("CancelAfterBegin done") case "cancel_after_first_response": interop.DoCancelAfterFirstResponse(tc) grpclog.Infoln("CancelAfterFirstResponse done") case "status_code_and_message": interop.DoStatusCodeAndMessage(tc) grpclog.Infoln("StatusCodeAndMessage done") case "special_status_message": interop.DoSpecialStatusMessage(tc) grpclog.Infoln("SpecialStatusMessage done") case "custom_metadata": interop.DoCustomMetadata(tc) grpclog.Infoln("CustomMetadata done") case "unimplemented_method": interop.DoUnimplementedMethod(conn) grpclog.Infoln("UnimplementedMethod done") case "unimplemented_service": interop.DoUnimplementedService(testpb.NewUnimplementedServiceClient(conn)) grpclog.Infoln("UnimplementedService done") case "pick_first_unary": interop.DoPickFirstUnary(tc) grpclog.Infoln("PickFirstUnary done") default: grpclog.Fatal("Unsupported test case: ", *testCase) } } grpc-go-1.22.1/interop/fake_grpclb/000077500000000000000000000000001351635773100170655ustar00rootroot00000000000000grpc-go-1.22.1/interop/fake_grpclb/fake_grpclb.go000066400000000000000000000135701351635773100216610ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // This file is for testing only. Runs a fake grpclb balancer server. // The name of the service to load balance for and the addresses // of that service are provided by command line flags. package main import ( "flag" "net" "strconv" "strings" "time" "google.golang.org/grpc" lbpb "google.golang.org/grpc/balancer/grpclb/grpc_lb_v1" "google.golang.org/grpc/codes" "google.golang.org/grpc/credentials" "google.golang.org/grpc/credentials/alts" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/status" "google.golang.org/grpc/testdata" ) var ( port = flag.Int("port", 10000, "Port to listen on.") backendAddrs = flag.String("backend_addrs", "", "Comma separated list of backend IP/port addresses.") useALTS = flag.Bool("use_alts", false, "Listen on ALTS credentials.") useTLS = flag.Bool("use_tls", false, "Listen on TLS credentials, using a test certificate.") shortStream = flag.Bool("short_stream", false, "End the balancer stream immediately after sending the first server list.") serviceName = flag.String("service_name", "UNSET", "Name of the service being load balanced for.") ) type loadBalancerServer struct { serverListResponse *lbpb.LoadBalanceResponse } func (l *loadBalancerServer) BalanceLoad(stream lbpb.LoadBalancer_BalanceLoadServer) error { grpclog.Info("Begin handling new BalancerLoad request.") var lbReq *lbpb.LoadBalanceRequest var err error if lbReq, err = stream.Recv(); err != nil { grpclog.Errorf("Error receiving LoadBalanceRequest: %v", err) return err } grpclog.Info("LoadBalancerRequest received.") initialReq := lbReq.GetInitialRequest() if initialReq == nil { grpclog.Info("Expected first request to be an InitialRequest. 
Got: %v", lbReq) return status.Error(codes.Unknown, "First request not an InitialRequest") } // gRPC clients targeting foo.bar.com:443 can sometimes include the ":443" suffix in // their requested names; handle this case. TODO: make 443 configurable? var cleanedName string var requestedNamePortNumber string if cleanedName, requestedNamePortNumber, err = net.SplitHostPort(initialReq.Name); err != nil { cleanedName = initialReq.Name } else { if requestedNamePortNumber != "443" { grpclog.Info("Bad requested service name port number: %v.", requestedNamePortNumber) return status.Error(codes.Unknown, "Bad requested service name port number") } } if cleanedName != *serviceName { grpclog.Info("Expected requested service name: %v. Got: %v", *serviceName, initialReq.Name) return status.Error(codes.NotFound, "Bad requested service name") } if err := stream.Send(&lbpb.LoadBalanceResponse{ LoadBalanceResponseType: &lbpb.LoadBalanceResponse_InitialResponse{ InitialResponse: &lbpb.InitialLoadBalanceResponse{}, }, }); err != nil { grpclog.Errorf("Error sending initial LB response: %v", err) return status.Error(codes.Unknown, "Error sending initial response") } grpclog.Info("Send LoadBalanceResponse: %v", l.serverListResponse) if err := stream.Send(l.serverListResponse); err != nil { grpclog.Errorf("Error sending LB response: %v", err) return status.Error(codes.Unknown, "Error sending response") } if *shortStream { return nil } for { grpclog.Info("Send LoadBalanceResponse: %v", l.serverListResponse) if err := stream.Send(l.serverListResponse); err != nil { grpclog.Errorf("Error sending LB response: %v", err) return status.Error(codes.Unknown, "Error sending response") } time.Sleep(10 * time.Second) } } func main() { flag.Parse() var opts []grpc.ServerOption if *useTLS { certFile := testdata.Path("server1.pem") keyFile := testdata.Path("server1.key") creds, err := credentials.NewServerTLSFromFile(certFile, keyFile) if err != nil { grpclog.Fatalf("Failed to generate credentials %v", err) } opts = append(opts, grpc.Creds(creds)) } else if *useALTS { altsOpts := alts.DefaultServerOptions() altsTC := alts.NewServerCreds(altsOpts) opts = append(opts, grpc.Creds(altsTC)) } var serverList []*lbpb.Server if len(*backendAddrs) == 0 { serverList = make([]*lbpb.Server, 0) } else { rawBackendAddrs := strings.Split(*backendAddrs, ",") serverList = make([]*lbpb.Server, len(rawBackendAddrs)) for i := range rawBackendAddrs { rawIP, rawPort, err := net.SplitHostPort(rawBackendAddrs[i]) if err != nil { grpclog.Fatalf("Failed to parse --backend_addrs[%d]=%v, error: %v", i, rawBackendAddrs[i], err) } ip := net.ParseIP(rawIP) if ip == nil { grpclog.Fatalf("Failed to parse ip: %v", rawIP) } numericPort, err := strconv.Atoi(rawPort) if err != nil { grpclog.Fatalf("Failed to convert port %v to int", rawPort) } grpclog.Infof("Adding backend ip: %v, port: %d", ip.String(), numericPort) serverList[i] = &lbpb.Server{ IpAddress: ip, Port: int32(numericPort), } } } serverListResponse := &lbpb.LoadBalanceResponse{ LoadBalanceResponseType: &lbpb.LoadBalanceResponse_ServerList{ ServerList: &lbpb.ServerList{ Servers: serverList, }, }, } server := grpc.NewServer(opts...) 
grpclog.Infof("Begin listening on %d.", *port) lis, err := net.Listen("tcp", ":"+strconv.Itoa(*port)) if err != nil { grpclog.Fatalf("Failed to listen on port %v: %v", *port, err) } lbpb.RegisterLoadBalancerServer(server, &loadBalancerServer{ serverListResponse: serverListResponse, }) server.Serve(lis) } grpc-go-1.22.1/interop/grpc_testing/000077500000000000000000000000001351635773100173165ustar00rootroot00000000000000grpc-go-1.22.1/interop/grpc_testing/test.pb.go000066400000000000000000001210661351635773100212320ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: grpc_testing/test.proto package grpc_testing import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import ( context "golang.org/x/net/context" grpc "google.golang.org/grpc" ) // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package // The type of payload that should be returned. type PayloadType int32 const ( // Compressable text format. PayloadType_COMPRESSABLE PayloadType = 0 // Uncompressable binary format. PayloadType_UNCOMPRESSABLE PayloadType = 1 // Randomly chosen from all other formats defined in this enum. PayloadType_RANDOM PayloadType = 2 ) var PayloadType_name = map[int32]string{ 0: "COMPRESSABLE", 1: "UNCOMPRESSABLE", 2: "RANDOM", } var PayloadType_value = map[string]int32{ "COMPRESSABLE": 0, "UNCOMPRESSABLE": 1, "RANDOM": 2, } func (x PayloadType) String() string { return proto.EnumName(PayloadType_name, int32(x)) } func (PayloadType) EnumDescriptor() ([]byte, []int) { return fileDescriptor_test_56dd6f68792c8a57, []int{0} } type Empty struct { XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Empty) Reset() { *m = Empty{} } func (m *Empty) String() string { return proto.CompactTextString(m) } func (*Empty) ProtoMessage() {} func (*Empty) Descriptor() ([]byte, []int) { return fileDescriptor_test_56dd6f68792c8a57, []int{0} } func (m *Empty) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Empty.Unmarshal(m, b) } func (m *Empty) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Empty.Marshal(b, m, deterministic) } func (dst *Empty) XXX_Merge(src proto.Message) { xxx_messageInfo_Empty.Merge(dst, src) } func (m *Empty) XXX_Size() int { return xxx_messageInfo_Empty.Size(m) } func (m *Empty) XXX_DiscardUnknown() { xxx_messageInfo_Empty.DiscardUnknown(m) } var xxx_messageInfo_Empty proto.InternalMessageInfo // A block of data, to simply increase gRPC message size. type Payload struct { // The type of data in body. Type PayloadType `protobuf:"varint,1,opt,name=type,proto3,enum=grpc.testing.PayloadType" json:"type,omitempty"` // Primary contents of payload. 
Body []byte `protobuf:"bytes,2,opt,name=body,proto3" json:"body,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Payload) Reset() { *m = Payload{} } func (m *Payload) String() string { return proto.CompactTextString(m) } func (*Payload) ProtoMessage() {} func (*Payload) Descriptor() ([]byte, []int) { return fileDescriptor_test_56dd6f68792c8a57, []int{1} } func (m *Payload) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Payload.Unmarshal(m, b) } func (m *Payload) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Payload.Marshal(b, m, deterministic) } func (dst *Payload) XXX_Merge(src proto.Message) { xxx_messageInfo_Payload.Merge(dst, src) } func (m *Payload) XXX_Size() int { return xxx_messageInfo_Payload.Size(m) } func (m *Payload) XXX_DiscardUnknown() { xxx_messageInfo_Payload.DiscardUnknown(m) } var xxx_messageInfo_Payload proto.InternalMessageInfo func (m *Payload) GetType() PayloadType { if m != nil { return m.Type } return PayloadType_COMPRESSABLE } func (m *Payload) GetBody() []byte { if m != nil { return m.Body } return nil } // A protobuf representation for grpc status. This is used by test // clients to specify a status that the server should attempt to return. type EchoStatus struct { Code int32 `protobuf:"varint,1,opt,name=code,proto3" json:"code,omitempty"` Message string `protobuf:"bytes,2,opt,name=message,proto3" json:"message,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *EchoStatus) Reset() { *m = EchoStatus{} } func (m *EchoStatus) String() string { return proto.CompactTextString(m) } func (*EchoStatus) ProtoMessage() {} func (*EchoStatus) Descriptor() ([]byte, []int) { return fileDescriptor_test_56dd6f68792c8a57, []int{2} } func (m *EchoStatus) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_EchoStatus.Unmarshal(m, b) } func (m *EchoStatus) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_EchoStatus.Marshal(b, m, deterministic) } func (dst *EchoStatus) XXX_Merge(src proto.Message) { xxx_messageInfo_EchoStatus.Merge(dst, src) } func (m *EchoStatus) XXX_Size() int { return xxx_messageInfo_EchoStatus.Size(m) } func (m *EchoStatus) XXX_DiscardUnknown() { xxx_messageInfo_EchoStatus.DiscardUnknown(m) } var xxx_messageInfo_EchoStatus proto.InternalMessageInfo func (m *EchoStatus) GetCode() int32 { if m != nil { return m.Code } return 0 } func (m *EchoStatus) GetMessage() string { if m != nil { return m.Message } return "" } // Unary request. type SimpleRequest struct { // Desired payload type in the response from the server. // If response_type is RANDOM, server randomly chooses one from other formats. ResponseType PayloadType `protobuf:"varint,1,opt,name=response_type,json=responseType,proto3,enum=grpc.testing.PayloadType" json:"response_type,omitempty"` // Desired payload size in the response from the server. // If response_type is COMPRESSABLE, this denotes the size before compression. ResponseSize int32 `protobuf:"varint,2,opt,name=response_size,json=responseSize,proto3" json:"response_size,omitempty"` // Optional input payload sent along with the request. Payload *Payload `protobuf:"bytes,3,opt,name=payload,proto3" json:"payload,omitempty"` // Whether SimpleResponse should include username. 
FillUsername bool `protobuf:"varint,4,opt,name=fill_username,json=fillUsername,proto3" json:"fill_username,omitempty"` // Whether SimpleResponse should include OAuth scope. FillOauthScope bool `protobuf:"varint,5,opt,name=fill_oauth_scope,json=fillOauthScope,proto3" json:"fill_oauth_scope,omitempty"` // Whether server should return a given status ResponseStatus *EchoStatus `protobuf:"bytes,7,opt,name=response_status,json=responseStatus,proto3" json:"response_status,omitempty"` // Whether SimpleResponse should include server_id. FillServerId bool `protobuf:"varint,9,opt,name=fill_server_id,json=fillServerId,proto3" json:"fill_server_id,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *SimpleRequest) Reset() { *m = SimpleRequest{} } func (m *SimpleRequest) String() string { return proto.CompactTextString(m) } func (*SimpleRequest) ProtoMessage() {} func (*SimpleRequest) Descriptor() ([]byte, []int) { return fileDescriptor_test_56dd6f68792c8a57, []int{3} } func (m *SimpleRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_SimpleRequest.Unmarshal(m, b) } func (m *SimpleRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_SimpleRequest.Marshal(b, m, deterministic) } func (dst *SimpleRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_SimpleRequest.Merge(dst, src) } func (m *SimpleRequest) XXX_Size() int { return xxx_messageInfo_SimpleRequest.Size(m) } func (m *SimpleRequest) XXX_DiscardUnknown() { xxx_messageInfo_SimpleRequest.DiscardUnknown(m) } var xxx_messageInfo_SimpleRequest proto.InternalMessageInfo func (m *SimpleRequest) GetResponseType() PayloadType { if m != nil { return m.ResponseType } return PayloadType_COMPRESSABLE } func (m *SimpleRequest) GetResponseSize() int32 { if m != nil { return m.ResponseSize } return 0 } func (m *SimpleRequest) GetPayload() *Payload { if m != nil { return m.Payload } return nil } func (m *SimpleRequest) GetFillUsername() bool { if m != nil { return m.FillUsername } return false } func (m *SimpleRequest) GetFillOauthScope() bool { if m != nil { return m.FillOauthScope } return false } func (m *SimpleRequest) GetResponseStatus() *EchoStatus { if m != nil { return m.ResponseStatus } return nil } func (m *SimpleRequest) GetFillServerId() bool { if m != nil { return m.FillServerId } return false } // Unary response, as configured by the request. type SimpleResponse struct { // Payload to increase message size. Payload *Payload `protobuf:"bytes,1,opt,name=payload,proto3" json:"payload,omitempty"` // The user the request came from, for verifying authentication was // successful when the client expected it. Username string `protobuf:"bytes,2,opt,name=username,proto3" json:"username,omitempty"` // OAuth scope. OauthScope string `protobuf:"bytes,3,opt,name=oauth_scope,json=oauthScope,proto3" json:"oauth_scope,omitempty"` // Server ID. This must be unique among different server instances, // but the same across all RPC's made to a particular server instance. 
ServerId string `protobuf:"bytes,4,opt,name=server_id,json=serverId,proto3" json:"server_id,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *SimpleResponse) Reset() { *m = SimpleResponse{} } func (m *SimpleResponse) String() string { return proto.CompactTextString(m) } func (*SimpleResponse) ProtoMessage() {} func (*SimpleResponse) Descriptor() ([]byte, []int) { return fileDescriptor_test_56dd6f68792c8a57, []int{4} } func (m *SimpleResponse) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_SimpleResponse.Unmarshal(m, b) } func (m *SimpleResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_SimpleResponse.Marshal(b, m, deterministic) } func (dst *SimpleResponse) XXX_Merge(src proto.Message) { xxx_messageInfo_SimpleResponse.Merge(dst, src) } func (m *SimpleResponse) XXX_Size() int { return xxx_messageInfo_SimpleResponse.Size(m) } func (m *SimpleResponse) XXX_DiscardUnknown() { xxx_messageInfo_SimpleResponse.DiscardUnknown(m) } var xxx_messageInfo_SimpleResponse proto.InternalMessageInfo func (m *SimpleResponse) GetPayload() *Payload { if m != nil { return m.Payload } return nil } func (m *SimpleResponse) GetUsername() string { if m != nil { return m.Username } return "" } func (m *SimpleResponse) GetOauthScope() string { if m != nil { return m.OauthScope } return "" } func (m *SimpleResponse) GetServerId() string { if m != nil { return m.ServerId } return "" } // Client-streaming request. type StreamingInputCallRequest struct { // Optional input payload sent along with the request. Payload *Payload `protobuf:"bytes,1,opt,name=payload,proto3" json:"payload,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *StreamingInputCallRequest) Reset() { *m = StreamingInputCallRequest{} } func (m *StreamingInputCallRequest) String() string { return proto.CompactTextString(m) } func (*StreamingInputCallRequest) ProtoMessage() {} func (*StreamingInputCallRequest) Descriptor() ([]byte, []int) { return fileDescriptor_test_56dd6f68792c8a57, []int{5} } func (m *StreamingInputCallRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_StreamingInputCallRequest.Unmarshal(m, b) } func (m *StreamingInputCallRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_StreamingInputCallRequest.Marshal(b, m, deterministic) } func (dst *StreamingInputCallRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_StreamingInputCallRequest.Merge(dst, src) } func (m *StreamingInputCallRequest) XXX_Size() int { return xxx_messageInfo_StreamingInputCallRequest.Size(m) } func (m *StreamingInputCallRequest) XXX_DiscardUnknown() { xxx_messageInfo_StreamingInputCallRequest.DiscardUnknown(m) } var xxx_messageInfo_StreamingInputCallRequest proto.InternalMessageInfo func (m *StreamingInputCallRequest) GetPayload() *Payload { if m != nil { return m.Payload } return nil } // Client-streaming response. type StreamingInputCallResponse struct { // Aggregated size of payloads received from the client. 
AggregatedPayloadSize int32 `protobuf:"varint,1,opt,name=aggregated_payload_size,json=aggregatedPayloadSize,proto3" json:"aggregated_payload_size,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *StreamingInputCallResponse) Reset() { *m = StreamingInputCallResponse{} } func (m *StreamingInputCallResponse) String() string { return proto.CompactTextString(m) } func (*StreamingInputCallResponse) ProtoMessage() {} func (*StreamingInputCallResponse) Descriptor() ([]byte, []int) { return fileDescriptor_test_56dd6f68792c8a57, []int{6} } func (m *StreamingInputCallResponse) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_StreamingInputCallResponse.Unmarshal(m, b) } func (m *StreamingInputCallResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_StreamingInputCallResponse.Marshal(b, m, deterministic) } func (dst *StreamingInputCallResponse) XXX_Merge(src proto.Message) { xxx_messageInfo_StreamingInputCallResponse.Merge(dst, src) } func (m *StreamingInputCallResponse) XXX_Size() int { return xxx_messageInfo_StreamingInputCallResponse.Size(m) } func (m *StreamingInputCallResponse) XXX_DiscardUnknown() { xxx_messageInfo_StreamingInputCallResponse.DiscardUnknown(m) } var xxx_messageInfo_StreamingInputCallResponse proto.InternalMessageInfo func (m *StreamingInputCallResponse) GetAggregatedPayloadSize() int32 { if m != nil { return m.AggregatedPayloadSize } return 0 } // Configuration for a particular response. type ResponseParameters struct { // Desired payload sizes in responses from the server. // If response_type is COMPRESSABLE, this denotes the size before compression. Size int32 `protobuf:"varint,1,opt,name=size,proto3" json:"size,omitempty"` // Desired interval between consecutive responses in the response stream in // microseconds. IntervalUs int32 `protobuf:"varint,2,opt,name=interval_us,json=intervalUs,proto3" json:"interval_us,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ResponseParameters) Reset() { *m = ResponseParameters{} } func (m *ResponseParameters) String() string { return proto.CompactTextString(m) } func (*ResponseParameters) ProtoMessage() {} func (*ResponseParameters) Descriptor() ([]byte, []int) { return fileDescriptor_test_56dd6f68792c8a57, []int{7} } func (m *ResponseParameters) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ResponseParameters.Unmarshal(m, b) } func (m *ResponseParameters) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ResponseParameters.Marshal(b, m, deterministic) } func (dst *ResponseParameters) XXX_Merge(src proto.Message) { xxx_messageInfo_ResponseParameters.Merge(dst, src) } func (m *ResponseParameters) XXX_Size() int { return xxx_messageInfo_ResponseParameters.Size(m) } func (m *ResponseParameters) XXX_DiscardUnknown() { xxx_messageInfo_ResponseParameters.DiscardUnknown(m) } var xxx_messageInfo_ResponseParameters proto.InternalMessageInfo func (m *ResponseParameters) GetSize() int32 { if m != nil { return m.Size } return 0 } func (m *ResponseParameters) GetIntervalUs() int32 { if m != nil { return m.IntervalUs } return 0 } // Server-streaming request. type StreamingOutputCallRequest struct { // Desired payload type in the response from the server. // If response_type is RANDOM, the payload from each response in the stream // might be of different types. 
This is to simulate a mixed type of payload // stream. ResponseType PayloadType `protobuf:"varint,1,opt,name=response_type,json=responseType,proto3,enum=grpc.testing.PayloadType" json:"response_type,omitempty"` // Configuration for each expected response message. ResponseParameters []*ResponseParameters `protobuf:"bytes,2,rep,name=response_parameters,json=responseParameters,proto3" json:"response_parameters,omitempty"` // Optional input payload sent along with the request. Payload *Payload `protobuf:"bytes,3,opt,name=payload,proto3" json:"payload,omitempty"` // Whether server should return a given status ResponseStatus *EchoStatus `protobuf:"bytes,7,opt,name=response_status,json=responseStatus,proto3" json:"response_status,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *StreamingOutputCallRequest) Reset() { *m = StreamingOutputCallRequest{} } func (m *StreamingOutputCallRequest) String() string { return proto.CompactTextString(m) } func (*StreamingOutputCallRequest) ProtoMessage() {} func (*StreamingOutputCallRequest) Descriptor() ([]byte, []int) { return fileDescriptor_test_56dd6f68792c8a57, []int{8} } func (m *StreamingOutputCallRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_StreamingOutputCallRequest.Unmarshal(m, b) } func (m *StreamingOutputCallRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_StreamingOutputCallRequest.Marshal(b, m, deterministic) } func (dst *StreamingOutputCallRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_StreamingOutputCallRequest.Merge(dst, src) } func (m *StreamingOutputCallRequest) XXX_Size() int { return xxx_messageInfo_StreamingOutputCallRequest.Size(m) } func (m *StreamingOutputCallRequest) XXX_DiscardUnknown() { xxx_messageInfo_StreamingOutputCallRequest.DiscardUnknown(m) } var xxx_messageInfo_StreamingOutputCallRequest proto.InternalMessageInfo func (m *StreamingOutputCallRequest) GetResponseType() PayloadType { if m != nil { return m.ResponseType } return PayloadType_COMPRESSABLE } func (m *StreamingOutputCallRequest) GetResponseParameters() []*ResponseParameters { if m != nil { return m.ResponseParameters } return nil } func (m *StreamingOutputCallRequest) GetPayload() *Payload { if m != nil { return m.Payload } return nil } func (m *StreamingOutputCallRequest) GetResponseStatus() *EchoStatus { if m != nil { return m.ResponseStatus } return nil } // Server-streaming response, as configured by the request and parameters. type StreamingOutputCallResponse struct { // Payload to increase response size. 
Payload *Payload `protobuf:"bytes,1,opt,name=payload,proto3" json:"payload,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *StreamingOutputCallResponse) Reset() { *m = StreamingOutputCallResponse{} } func (m *StreamingOutputCallResponse) String() string { return proto.CompactTextString(m) } func (*StreamingOutputCallResponse) ProtoMessage() {} func (*StreamingOutputCallResponse) Descriptor() ([]byte, []int) { return fileDescriptor_test_56dd6f68792c8a57, []int{9} } func (m *StreamingOutputCallResponse) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_StreamingOutputCallResponse.Unmarshal(m, b) } func (m *StreamingOutputCallResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_StreamingOutputCallResponse.Marshal(b, m, deterministic) } func (dst *StreamingOutputCallResponse) XXX_Merge(src proto.Message) { xxx_messageInfo_StreamingOutputCallResponse.Merge(dst, src) } func (m *StreamingOutputCallResponse) XXX_Size() int { return xxx_messageInfo_StreamingOutputCallResponse.Size(m) } func (m *StreamingOutputCallResponse) XXX_DiscardUnknown() { xxx_messageInfo_StreamingOutputCallResponse.DiscardUnknown(m) } var xxx_messageInfo_StreamingOutputCallResponse proto.InternalMessageInfo func (m *StreamingOutputCallResponse) GetPayload() *Payload { if m != nil { return m.Payload } return nil } func init() { proto.RegisterType((*Empty)(nil), "grpc.testing.Empty") proto.RegisterType((*Payload)(nil), "grpc.testing.Payload") proto.RegisterType((*EchoStatus)(nil), "grpc.testing.EchoStatus") proto.RegisterType((*SimpleRequest)(nil), "grpc.testing.SimpleRequest") proto.RegisterType((*SimpleResponse)(nil), "grpc.testing.SimpleResponse") proto.RegisterType((*StreamingInputCallRequest)(nil), "grpc.testing.StreamingInputCallRequest") proto.RegisterType((*StreamingInputCallResponse)(nil), "grpc.testing.StreamingInputCallResponse") proto.RegisterType((*ResponseParameters)(nil), "grpc.testing.ResponseParameters") proto.RegisterType((*StreamingOutputCallRequest)(nil), "grpc.testing.StreamingOutputCallRequest") proto.RegisterType((*StreamingOutputCallResponse)(nil), "grpc.testing.StreamingOutputCallResponse") proto.RegisterEnum("grpc.testing.PayloadType", PayloadType_name, PayloadType_value) } // Reference imports to suppress errors if they are not otherwise used. var _ context.Context var _ grpc.ClientConn // This is a compile-time assertion to ensure that this generated file // is compatible with the grpc package it is being compiled against. const _ = grpc.SupportPackageIsVersion4 // TestServiceClient is the client API for TestService service. // // For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream. type TestServiceClient interface { // One empty request followed by one empty response. EmptyCall(ctx context.Context, in *Empty, opts ...grpc.CallOption) (*Empty, error) // One request followed by one response. // The server returns the client payload as-is. UnaryCall(ctx context.Context, in *SimpleRequest, opts ...grpc.CallOption) (*SimpleResponse, error) // One request followed by a sequence of responses (streamed download). // The server returns the payload with client desired type and sizes. StreamingOutputCall(ctx context.Context, in *StreamingOutputCallRequest, opts ...grpc.CallOption) (TestService_StreamingOutputCallClient, error) // A sequence of requests followed by one response (streamed upload). 
// The server returns the aggregated size of client payload as the result. StreamingInputCall(ctx context.Context, opts ...grpc.CallOption) (TestService_StreamingInputCallClient, error) // A sequence of requests with each request served by the server immediately. // As one request could lead to multiple responses, this interface // demonstrates the idea of full duplexing. FullDuplexCall(ctx context.Context, opts ...grpc.CallOption) (TestService_FullDuplexCallClient, error) // A sequence of requests followed by a sequence of responses. // The server buffers all the client requests and then serves them in order. A // stream of responses are returned to the client when the server starts with // first request. HalfDuplexCall(ctx context.Context, opts ...grpc.CallOption) (TestService_HalfDuplexCallClient, error) } type testServiceClient struct { cc *grpc.ClientConn } func NewTestServiceClient(cc *grpc.ClientConn) TestServiceClient { return &testServiceClient{cc} } func (c *testServiceClient) EmptyCall(ctx context.Context, in *Empty, opts ...grpc.CallOption) (*Empty, error) { out := new(Empty) err := c.cc.Invoke(ctx, "/grpc.testing.TestService/EmptyCall", in, out, opts...) if err != nil { return nil, err } return out, nil } func (c *testServiceClient) UnaryCall(ctx context.Context, in *SimpleRequest, opts ...grpc.CallOption) (*SimpleResponse, error) { out := new(SimpleResponse) err := c.cc.Invoke(ctx, "/grpc.testing.TestService/UnaryCall", in, out, opts...) if err != nil { return nil, err } return out, nil } func (c *testServiceClient) StreamingOutputCall(ctx context.Context, in *StreamingOutputCallRequest, opts ...grpc.CallOption) (TestService_StreamingOutputCallClient, error) { stream, err := c.cc.NewStream(ctx, &_TestService_serviceDesc.Streams[0], "/grpc.testing.TestService/StreamingOutputCall", opts...) if err != nil { return nil, err } x := &testServiceStreamingOutputCallClient{stream} if err := x.ClientStream.SendMsg(in); err != nil { return nil, err } if err := x.ClientStream.CloseSend(); err != nil { return nil, err } return x, nil } type TestService_StreamingOutputCallClient interface { Recv() (*StreamingOutputCallResponse, error) grpc.ClientStream } type testServiceStreamingOutputCallClient struct { grpc.ClientStream } func (x *testServiceStreamingOutputCallClient) Recv() (*StreamingOutputCallResponse, error) { m := new(StreamingOutputCallResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func (c *testServiceClient) StreamingInputCall(ctx context.Context, opts ...grpc.CallOption) (TestService_StreamingInputCallClient, error) { stream, err := c.cc.NewStream(ctx, &_TestService_serviceDesc.Streams[1], "/grpc.testing.TestService/StreamingInputCall", opts...) 
if err != nil { return nil, err } x := &testServiceStreamingInputCallClient{stream} return x, nil } type TestService_StreamingInputCallClient interface { Send(*StreamingInputCallRequest) error CloseAndRecv() (*StreamingInputCallResponse, error) grpc.ClientStream } type testServiceStreamingInputCallClient struct { grpc.ClientStream } func (x *testServiceStreamingInputCallClient) Send(m *StreamingInputCallRequest) error { return x.ClientStream.SendMsg(m) } func (x *testServiceStreamingInputCallClient) CloseAndRecv() (*StreamingInputCallResponse, error) { if err := x.ClientStream.CloseSend(); err != nil { return nil, err } m := new(StreamingInputCallResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func (c *testServiceClient) FullDuplexCall(ctx context.Context, opts ...grpc.CallOption) (TestService_FullDuplexCallClient, error) { stream, err := c.cc.NewStream(ctx, &_TestService_serviceDesc.Streams[2], "/grpc.testing.TestService/FullDuplexCall", opts...) if err != nil { return nil, err } x := &testServiceFullDuplexCallClient{stream} return x, nil } type TestService_FullDuplexCallClient interface { Send(*StreamingOutputCallRequest) error Recv() (*StreamingOutputCallResponse, error) grpc.ClientStream } type testServiceFullDuplexCallClient struct { grpc.ClientStream } func (x *testServiceFullDuplexCallClient) Send(m *StreamingOutputCallRequest) error { return x.ClientStream.SendMsg(m) } func (x *testServiceFullDuplexCallClient) Recv() (*StreamingOutputCallResponse, error) { m := new(StreamingOutputCallResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func (c *testServiceClient) HalfDuplexCall(ctx context.Context, opts ...grpc.CallOption) (TestService_HalfDuplexCallClient, error) { stream, err := c.cc.NewStream(ctx, &_TestService_serviceDesc.Streams[3], "/grpc.testing.TestService/HalfDuplexCall", opts...) if err != nil { return nil, err } x := &testServiceHalfDuplexCallClient{stream} return x, nil } type TestService_HalfDuplexCallClient interface { Send(*StreamingOutputCallRequest) error Recv() (*StreamingOutputCallResponse, error) grpc.ClientStream } type testServiceHalfDuplexCallClient struct { grpc.ClientStream } func (x *testServiceHalfDuplexCallClient) Send(m *StreamingOutputCallRequest) error { return x.ClientStream.SendMsg(m) } func (x *testServiceHalfDuplexCallClient) Recv() (*StreamingOutputCallResponse, error) { m := new(StreamingOutputCallResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } // TestServiceServer is the server API for TestService service. type TestServiceServer interface { // One empty request followed by one empty response. EmptyCall(context.Context, *Empty) (*Empty, error) // One request followed by one response. // The server returns the client payload as-is. UnaryCall(context.Context, *SimpleRequest) (*SimpleResponse, error) // One request followed by a sequence of responses (streamed download). // The server returns the payload with client desired type and sizes. StreamingOutputCall(*StreamingOutputCallRequest, TestService_StreamingOutputCallServer) error // A sequence of requests followed by one response (streamed upload). // The server returns the aggregated size of client payload as the result. StreamingInputCall(TestService_StreamingInputCallServer) error // A sequence of requests with each request served by the server immediately. 
// As one request could lead to multiple responses, this interface // demonstrates the idea of full duplexing. FullDuplexCall(TestService_FullDuplexCallServer) error // A sequence of requests followed by a sequence of responses. // The server buffers all the client requests and then serves them in order. A // stream of responses are returned to the client when the server starts with // first request. HalfDuplexCall(TestService_HalfDuplexCallServer) error } func RegisterTestServiceServer(s *grpc.Server, srv TestServiceServer) { s.RegisterService(&_TestService_serviceDesc, srv) } func _TestService_EmptyCall_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(Empty) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(TestServiceServer).EmptyCall(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/grpc.testing.TestService/EmptyCall", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(TestServiceServer).EmptyCall(ctx, req.(*Empty)) } return interceptor(ctx, in, info, handler) } func _TestService_UnaryCall_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(SimpleRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(TestServiceServer).UnaryCall(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/grpc.testing.TestService/UnaryCall", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(TestServiceServer).UnaryCall(ctx, req.(*SimpleRequest)) } return interceptor(ctx, in, info, handler) } func _TestService_StreamingOutputCall_Handler(srv interface{}, stream grpc.ServerStream) error { m := new(StreamingOutputCallRequest) if err := stream.RecvMsg(m); err != nil { return err } return srv.(TestServiceServer).StreamingOutputCall(m, &testServiceStreamingOutputCallServer{stream}) } type TestService_StreamingOutputCallServer interface { Send(*StreamingOutputCallResponse) error grpc.ServerStream } type testServiceStreamingOutputCallServer struct { grpc.ServerStream } func (x *testServiceStreamingOutputCallServer) Send(m *StreamingOutputCallResponse) error { return x.ServerStream.SendMsg(m) } func _TestService_StreamingInputCall_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(TestServiceServer).StreamingInputCall(&testServiceStreamingInputCallServer{stream}) } type TestService_StreamingInputCallServer interface { SendAndClose(*StreamingInputCallResponse) error Recv() (*StreamingInputCallRequest, error) grpc.ServerStream } type testServiceStreamingInputCallServer struct { grpc.ServerStream } func (x *testServiceStreamingInputCallServer) SendAndClose(m *StreamingInputCallResponse) error { return x.ServerStream.SendMsg(m) } func (x *testServiceStreamingInputCallServer) Recv() (*StreamingInputCallRequest, error) { m := new(StreamingInputCallRequest) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func _TestService_FullDuplexCall_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(TestServiceServer).FullDuplexCall(&testServiceFullDuplexCallServer{stream}) } type TestService_FullDuplexCallServer interface { Send(*StreamingOutputCallResponse) error Recv() (*StreamingOutputCallRequest, error) grpc.ServerStream } type testServiceFullDuplexCallServer struct { 
grpc.ServerStream } func (x *testServiceFullDuplexCallServer) Send(m *StreamingOutputCallResponse) error { return x.ServerStream.SendMsg(m) } func (x *testServiceFullDuplexCallServer) Recv() (*StreamingOutputCallRequest, error) { m := new(StreamingOutputCallRequest) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func _TestService_HalfDuplexCall_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(TestServiceServer).HalfDuplexCall(&testServiceHalfDuplexCallServer{stream}) } type TestService_HalfDuplexCallServer interface { Send(*StreamingOutputCallResponse) error Recv() (*StreamingOutputCallRequest, error) grpc.ServerStream } type testServiceHalfDuplexCallServer struct { grpc.ServerStream } func (x *testServiceHalfDuplexCallServer) Send(m *StreamingOutputCallResponse) error { return x.ServerStream.SendMsg(m) } func (x *testServiceHalfDuplexCallServer) Recv() (*StreamingOutputCallRequest, error) { m := new(StreamingOutputCallRequest) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } var _TestService_serviceDesc = grpc.ServiceDesc{ ServiceName: "grpc.testing.TestService", HandlerType: (*TestServiceServer)(nil), Methods: []grpc.MethodDesc{ { MethodName: "EmptyCall", Handler: _TestService_EmptyCall_Handler, }, { MethodName: "UnaryCall", Handler: _TestService_UnaryCall_Handler, }, }, Streams: []grpc.StreamDesc{ { StreamName: "StreamingOutputCall", Handler: _TestService_StreamingOutputCall_Handler, ServerStreams: true, }, { StreamName: "StreamingInputCall", Handler: _TestService_StreamingInputCall_Handler, ClientStreams: true, }, { StreamName: "FullDuplexCall", Handler: _TestService_FullDuplexCall_Handler, ServerStreams: true, ClientStreams: true, }, { StreamName: "HalfDuplexCall", Handler: _TestService_HalfDuplexCall_Handler, ServerStreams: true, ClientStreams: true, }, }, Metadata: "grpc_testing/test.proto", } // UnimplementedServiceClient is the client API for UnimplementedService service. // // For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream. type UnimplementedServiceClient interface { // A call that no server should implement UnimplementedCall(ctx context.Context, in *Empty, opts ...grpc.CallOption) (*Empty, error) } type unimplementedServiceClient struct { cc *grpc.ClientConn } func NewUnimplementedServiceClient(cc *grpc.ClientConn) UnimplementedServiceClient { return &unimplementedServiceClient{cc} } func (c *unimplementedServiceClient) UnimplementedCall(ctx context.Context, in *Empty, opts ...grpc.CallOption) (*Empty, error) { out := new(Empty) err := c.cc.Invoke(ctx, "/grpc.testing.UnimplementedService/UnimplementedCall", in, out, opts...) if err != nil { return nil, err } return out, nil } // UnimplementedServiceServer is the server API for UnimplementedService service. 
type UnimplementedServiceServer interface { // A call that no server should implement UnimplementedCall(context.Context, *Empty) (*Empty, error) } func RegisterUnimplementedServiceServer(s *grpc.Server, srv UnimplementedServiceServer) { s.RegisterService(&_UnimplementedService_serviceDesc, srv) } func _UnimplementedService_UnimplementedCall_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(Empty) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(UnimplementedServiceServer).UnimplementedCall(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/grpc.testing.UnimplementedService/UnimplementedCall", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(UnimplementedServiceServer).UnimplementedCall(ctx, req.(*Empty)) } return interceptor(ctx, in, info, handler) } var _UnimplementedService_serviceDesc = grpc.ServiceDesc{ ServiceName: "grpc.testing.UnimplementedService", HandlerType: (*UnimplementedServiceServer)(nil), Methods: []grpc.MethodDesc{ { MethodName: "UnimplementedCall", Handler: _UnimplementedService_UnimplementedCall_Handler, }, }, Streams: []grpc.StreamDesc{}, Metadata: "grpc_testing/test.proto", } func init() { proto.RegisterFile("grpc_testing/test.proto", fileDescriptor_test_56dd6f68792c8a57) } var fileDescriptor_test_56dd6f68792c8a57 = []byte{ // 695 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xbc, 0x55, 0xdd, 0x6e, 0xd3, 0x4a, 0x10, 0x3e, 0x4e, 0x93, 0xa6, 0x99, 0xa4, 0x39, 0x39, 0xdb, 0x53, 0xd5, 0x4d, 0x91, 0x88, 0x0c, 0x12, 0x06, 0x89, 0x14, 0x05, 0xc1, 0x05, 0x12, 0xa0, 0xd2, 0xa6, 0xa2, 0x52, 0xdb, 0x14, 0xbb, 0xb9, 0x8e, 0xb6, 0xf1, 0xd4, 0xb5, 0xe4, 0x3f, 0xbc, 0xeb, 0x8a, 0xf4, 0x55, 0xb8, 0xe4, 0x31, 0x78, 0x16, 0xde, 0x05, 0xed, 0xda, 0x8e, 0x9d, 0x34, 0x15, 0x2d, 0x15, 0x5c, 0xc5, 0x3b, 0xf3, 0xcd, 0xcc, 0xf7, 0x8d, 0xbf, 0x75, 0x60, 0xc3, 0x8e, 0xc2, 0xf1, 0x88, 0x23, 0xe3, 0x8e, 0x6f, 0x6f, 0x8b, 0xdf, 0x6e, 0x18, 0x05, 0x3c, 0x20, 0x0d, 0x91, 0xe8, 0xa6, 0x09, 0xad, 0x0a, 0x95, 0xbe, 0x17, 0xf2, 0x89, 0x76, 0x08, 0xd5, 0x13, 0x3a, 0x71, 0x03, 0x6a, 0x91, 0xe7, 0x50, 0xe6, 0x93, 0x10, 0x55, 0xa5, 0xa3, 0xe8, 0xcd, 0xde, 0x66, 0xb7, 0x58, 0xd0, 0x4d, 0x41, 0xa7, 0x93, 0x10, 0x0d, 0x09, 0x23, 0x04, 0xca, 0x67, 0x81, 0x35, 0x51, 0x4b, 0x1d, 0x45, 0x6f, 0x18, 0xf2, 0x59, 0x7b, 0x03, 0xd0, 0x1f, 0x5f, 0x04, 0x26, 0xa7, 0x3c, 0x66, 0x02, 0x31, 0x0e, 0xac, 0xa4, 0x61, 0xc5, 0x90, 0xcf, 0x44, 0x85, 0xaa, 0x87, 0x8c, 0x51, 0x1b, 0x65, 0x61, 0xcd, 0xc8, 0x8e, 0xda, 0x8f, 0x12, 0xac, 0x9a, 0x8e, 0x17, 0xba, 0x68, 0xe0, 0xe7, 0x18, 0x19, 0x27, 0xef, 0x60, 0x35, 0x42, 0x16, 0x06, 0x3e, 0xc3, 0xd1, 0xed, 0x98, 0x35, 0x32, 0xbc, 0x38, 0x91, 0x47, 0x85, 0x7a, 0xe6, 0x5c, 0x25, 0x13, 0x2b, 0x39, 0xc8, 0x74, 0xae, 0x90, 0x6c, 0x43, 0x35, 0x4c, 0x3a, 0xa8, 0x4b, 0x1d, 0x45, 0xaf, 0xf7, 0xd6, 0x17, 0xb6, 0x37, 0x32, 0x94, 0xe8, 0x7a, 0xee, 0xb8, 0xee, 0x28, 0x66, 0x18, 0xf9, 0xd4, 0x43, 0xb5, 0xdc, 0x51, 0xf4, 0x15, 0xa3, 0x21, 0x82, 0xc3, 0x34, 0x46, 0x74, 0x68, 0x49, 0x50, 0x40, 0x63, 0x7e, 0x31, 0x62, 0xe3, 0x20, 0x44, 0xb5, 0x22, 0x71, 0x4d, 0x11, 0x1f, 0x88, 0xb0, 0x29, 0xa2, 0x64, 0x07, 0xfe, 0xcd, 0x49, 0xca, 0xbd, 0xa9, 0x55, 0xc9, 0x43, 0x9d, 0xe5, 0x91, 0xef, 0xd5, 0x68, 0x4e, 0x05, 0x24, 0x7b, 0x7e, 0x0c, 0xb2, 0xe9, 0x88, 0x61, 0x74, 0x89, 0xd1, 0xc8, 0xb1, 0xd4, 0x5a, 0x4e, 0xc9, 0x94, 0xc1, 0x03, 0x4b, 0xfb, 0xaa, 0x40, 0x33, 0xdb, 
0x6f, 0x52, 0x5e, 0xd4, 0xae, 0xdc, 0x4a, 0x7b, 0x1b, 0x56, 0xa6, 0xb2, 0x93, 0xd7, 0x37, 0x3d, 0x93, 0x87, 0x50, 0x2f, 0xaa, 0x5d, 0x92, 0x69, 0x08, 0x72, 0xa5, 0x5b, 0x50, 0xcb, 0x19, 0x96, 0x93, 0x6a, 0x96, 0xb1, 0x3b, 0x84, 0x4d, 0x93, 0x47, 0x48, 0x3d, 0xc7, 0xb7, 0x0f, 0xfc, 0x30, 0xe6, 0xbb, 0xd4, 0x75, 0x33, 0x23, 0xdc, 0x95, 0xa7, 0x76, 0x0a, 0xed, 0x45, 0xdd, 0x52, 0xd9, 0xaf, 0x61, 0x83, 0xda, 0x76, 0x84, 0x36, 0xe5, 0x68, 0x8d, 0xd2, 0x9a, 0xc4, 0x21, 0x89, 0x55, 0xd7, 0xf3, 0x74, 0xda, 0x5a, 0x58, 0x45, 0x3b, 0x00, 0x92, 0xf5, 0x38, 0xa1, 0x11, 0xf5, 0x90, 0x63, 0x24, 0x5d, 0x5e, 0x28, 0x95, 0xcf, 0x62, 0x17, 0x8e, 0xcf, 0x31, 0xba, 0xa4, 0xc2, 0x27, 0xa9, 0xef, 0x20, 0x0b, 0x0d, 0x99, 0xf6, 0xad, 0x54, 0x60, 0x38, 0x88, 0xf9, 0x9c, 0xe0, 0xfb, 0x3a, 0xff, 0x13, 0xac, 0x4d, 0xeb, 0xc3, 0x29, 0x55, 0xb5, 0xd4, 0x59, 0xd2, 0xeb, 0xbd, 0xce, 0x6c, 0x97, 0xeb, 0x92, 0x0c, 0x12, 0x5d, 0x97, 0x79, 0xe7, 0x7b, 0x72, 0x7f, 0x63, 0x6b, 0xc7, 0xb0, 0xb5, 0x70, 0x49, 0xbf, 0x69, 0xdf, 0x67, 0xef, 0xa1, 0x5e, 0xd8, 0x19, 0x69, 0x41, 0x63, 0x77, 0x70, 0x74, 0x62, 0xf4, 0x4d, 0x73, 0xe7, 0xc3, 0x61, 0xbf, 0xf5, 0x0f, 0x21, 0xd0, 0x1c, 0x1e, 0xcf, 0xc4, 0x14, 0x02, 0xb0, 0x6c, 0xec, 0x1c, 0xef, 0x0d, 0x8e, 0x5a, 0xa5, 0xde, 0xf7, 0x32, 0xd4, 0x4f, 0x91, 0x71, 0x71, 0xa9, 0x9c, 0x31, 0x92, 0x57, 0x50, 0x93, 0x9f, 0x51, 0x41, 0x8b, 0xac, 0xcd, 0xe9, 0x12, 0x89, 0xf6, 0xa2, 0x20, 0xd9, 0x87, 0xda, 0xd0, 0xa7, 0x51, 0x52, 0xb6, 0x35, 0x8b, 0x98, 0xf9, 0x04, 0xb6, 0x1f, 0x2c, 0x4e, 0xa6, 0x0b, 0x70, 0x61, 0x6d, 0xc1, 0x7e, 0x88, 0x3e, 0x57, 0x74, 0xa3, 0xcf, 0xda, 0x4f, 0x6f, 0x81, 0x4c, 0x66, 0xbd, 0x50, 0x88, 0x03, 0xe4, 0xfa, 0xa5, 0x22, 0x4f, 0x6e, 0x68, 0x31, 0x7f, 0x89, 0xdb, 0xfa, 0xaf, 0x81, 0xc9, 0x28, 0x5d, 0x8c, 0x6a, 0xee, 0xc7, 0xae, 0xbb, 0x17, 0x87, 0x2e, 0x7e, 0xf9, 0x63, 0x9a, 0x74, 0x45, 0xaa, 0x6a, 0x7e, 0xa4, 0xee, 0xf9, 0x5f, 0x18, 0xd5, 0x1b, 0xc2, 0xff, 0x43, 0x5f, 0xbe, 0x41, 0x0f, 0x7d, 0x8e, 0x56, 0xe6, 0xa2, 0xb7, 0xf0, 0xdf, 0x4c, 0xfc, 0x6e, 0x6e, 0x3a, 0x5b, 0x96, 0x7f, 0xf0, 0x2f, 0x7f, 0x06, 0x00, 0x00, 0xff, 0xff, 0xf1, 0xe0, 0xc2, 0x5f, 0xfb, 0x07, 0x00, 0x00, } grpc-go-1.22.1/interop/grpc_testing/test.proto000066400000000000000000000135741351635773100213740ustar00rootroot00000000000000// Copyright 2017 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. // An integration test service that covers all the method signature permutations // of unary/streaming requests/responses. syntax = "proto3"; package grpc.testing; message Empty {} // The type of payload that should be returned. enum PayloadType { // Compressable text format. COMPRESSABLE = 0; // Uncompressable binary format. UNCOMPRESSABLE = 1; // Randomly chosen from all other formats defined in this enum. RANDOM = 2; } // A block of data, to simply increase gRPC message size. message Payload { // The type of data in body. PayloadType type = 1; // Primary contents of payload. bytes body = 2; } // A protobuf representation for grpc status. 
This is used by test // clients to specify a status that the server should attempt to return. message EchoStatus { int32 code = 1; string message = 2; } // Unary request. message SimpleRequest { // Desired payload type in the response from the server. // If response_type is RANDOM, server randomly chooses one from other formats. PayloadType response_type = 1; // Desired payload size in the response from the server. // If response_type is COMPRESSABLE, this denotes the size before compression. int32 response_size = 2; // Optional input payload sent along with the request. Payload payload = 3; // Whether SimpleResponse should include username. bool fill_username = 4; // Whether SimpleResponse should include OAuth scope. bool fill_oauth_scope = 5; // Whether server should return a given status EchoStatus response_status = 7; // Whether SimpleResponse should include server_id. bool fill_server_id = 9; } // Unary response, as configured by the request. message SimpleResponse { // Payload to increase message size. Payload payload = 1; // The user the request came from, for verifying authentication was // successful when the client expected it. string username = 2; // OAuth scope. string oauth_scope = 3; // Server ID. This must be unique among different server instances, // but the same across all RPC's made to a particular server instance. string server_id = 4; } // Client-streaming request. message StreamingInputCallRequest { // Optional input payload sent along with the request. Payload payload = 1; // Not expecting any payload from the response. } // Client-streaming response. message StreamingInputCallResponse { // Aggregated size of payloads received from the client. int32 aggregated_payload_size = 1; } // Configuration for a particular response. message ResponseParameters { // Desired payload sizes in responses from the server. // If response_type is COMPRESSABLE, this denotes the size before compression. int32 size = 1; // Desired interval between consecutive responses in the response stream in // microseconds. int32 interval_us = 2; } // Server-streaming request. message StreamingOutputCallRequest { // Desired payload type in the response from the server. // If response_type is RANDOM, the payload from each response in the stream // might be of different types. This is to simulate a mixed type of payload // stream. PayloadType response_type = 1; // Configuration for each expected response message. repeated ResponseParameters response_parameters = 2; // Optional input payload sent along with the request. Payload payload = 3; // Whether server should return a given status EchoStatus response_status = 7; } // Server-streaming response, as configured by the request and parameters. message StreamingOutputCallResponse { // Payload to increase response size. Payload payload = 1; } // A simple service to test the various types of RPCs and experiment with // performance with various types of payload. service TestService { // One empty request followed by one empty response. rpc EmptyCall(Empty) returns (Empty); // One request followed by one response. // The server returns the client payload as-is. rpc UnaryCall(SimpleRequest) returns (SimpleResponse); // One request followed by a sequence of responses (streamed download). // The server returns the payload with client desired type and sizes. rpc StreamingOutputCall(StreamingOutputCallRequest) returns (stream StreamingOutputCallResponse); // A sequence of requests followed by one response (streamed upload). 
// The server returns the aggregated size of client payload as the result. rpc StreamingInputCall(stream StreamingInputCallRequest) returns (StreamingInputCallResponse); // A sequence of requests with each request served by the server immediately. // As one request could lead to multiple responses, this interface // demonstrates the idea of full duplexing. rpc FullDuplexCall(stream StreamingOutputCallRequest) returns (stream StreamingOutputCallResponse); // A sequence of requests followed by a sequence of responses. // The server buffers all the client requests and then serves them in order. A // stream of responses are returned to the client when the server starts with // first request. rpc HalfDuplexCall(stream StreamingOutputCallRequest) returns (stream StreamingOutputCallResponse); } // A simple service NOT implemented at servers so clients can test for // that case. service UnimplementedService { // A call that no server should implement rpc UnimplementedCall(grpc.testing.Empty) returns (grpc.testing.Empty); } grpc-go-1.22.1/interop/http2/000077500000000000000000000000001351635773100156675ustar00rootroot00000000000000grpc-go-1.22.1/interop/http2/negative_http2_client.go000066400000000000000000000114101351635773100224740ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * * * Client used to test http2 error edge cases like GOAWAYs and RST_STREAMs * * Documentation: * https://github.com/grpc/grpc/blob/master/doc/negative-http2-interop-test-descriptions.md */ package main import ( "context" "flag" "net" "strconv" "sync" "time" "google.golang.org/grpc" "google.golang.org/grpc/codes" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/interop" testpb "google.golang.org/grpc/interop/grpc_testing" "google.golang.org/grpc/status" ) var ( serverHost = flag.String("server_host", "127.0.0.1", "The server host name") serverPort = flag.Int("server_port", 8080, "The server port number") testCase = flag.String("test_case", "goaway", `Configure different test cases. Valid options are: goaway : client sends two requests, the server will send a goaway in between; rst_after_header : server will send rst_stream after it sends headers; rst_during_data : server will send rst_stream while sending data; rst_after_data : server will send rst_stream after sending data; ping : server will send pings between each http2 frame; max_streams : server will ensure that the max_concurrent_streams limit is upheld;`) largeReqSize = 271828 largeRespSize = 314159 ) func largeSimpleRequest() *testpb.SimpleRequest { pl := interop.ClientNewPayload(testpb.PayloadType_COMPRESSABLE, largeReqSize) return &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseSize: int32(largeRespSize), Payload: pl, } } // sends two unary calls. The server asserts that the calls use different connections. func goaway(tc testpb.TestServiceClient) { interop.DoLargeUnaryCall(tc) // sleep to ensure that the client has time to recv the GOAWAY. 
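// Once the GOAWAY has been processed, the gRPC client transparently establishes a fresh HTTP/2
// connection, so the second DoLargeUnaryCall below should travel over a different connection, which is
// what the negative-http2 test server asserts for the "goaway" case.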
// TODO(ncteisen): make this less hacky. time.Sleep(1 * time.Second) interop.DoLargeUnaryCall(tc) } func rstAfterHeader(tc testpb.TestServiceClient) { req := largeSimpleRequest() reply, err := tc.UnaryCall(context.Background(), req) if reply != nil { grpclog.Fatalf("Client received reply despite server sending rst stream after header") } if status.Code(err) != codes.Internal { grpclog.Fatalf("%v.UnaryCall() = _, %v, want _, %v", tc, status.Code(err), codes.Internal) } } func rstDuringData(tc testpb.TestServiceClient) { req := largeSimpleRequest() reply, err := tc.UnaryCall(context.Background(), req) if reply != nil { grpclog.Fatalf("Client received reply despite server sending rst stream during data") } if status.Code(err) != codes.Unknown { grpclog.Fatalf("%v.UnaryCall() = _, %v, want _, %v", tc, status.Code(err), codes.Unknown) } } func rstAfterData(tc testpb.TestServiceClient) { req := largeSimpleRequest() reply, err := tc.UnaryCall(context.Background(), req) if reply != nil { grpclog.Fatalf("Client received reply despite server sending rst stream after data") } if status.Code(err) != codes.Internal { grpclog.Fatalf("%v.UnaryCall() = _, %v, want _, %v", tc, status.Code(err), codes.Internal) } } func ping(tc testpb.TestServiceClient) { // The server will assert that every ping it sends was ACK-ed by the client. interop.DoLargeUnaryCall(tc) } func maxStreams(tc testpb.TestServiceClient) { interop.DoLargeUnaryCall(tc) var wg sync.WaitGroup for i := 0; i < 15; i++ { wg.Add(1) go func() { defer wg.Done() interop.DoLargeUnaryCall(tc) }() } wg.Wait() } func main() { flag.Parse() serverAddr := net.JoinHostPort(*serverHost, strconv.Itoa(*serverPort)) var opts []grpc.DialOption opts = append(opts, grpc.WithInsecure()) conn, err := grpc.Dial(serverAddr, opts...) if err != nil { grpclog.Fatalf("Fail to dial: %v", err) } defer conn.Close() tc := testpb.NewTestServiceClient(conn) switch *testCase { case "goaway": goaway(tc) grpclog.Infoln("goaway done") case "rst_after_header": rstAfterHeader(tc) grpclog.Infoln("rst_after_header done") case "rst_during_data": rstDuringData(tc) grpclog.Infoln("rst_during_data done") case "rst_after_data": rstAfterData(tc) grpclog.Infoln("rst_after_data done") case "ping": ping(tc) grpclog.Infoln("ping done") case "max_streams": maxStreams(tc) grpclog.Infoln("max_streams done") default: grpclog.Fatal("Unsupported test case: ", *testCase) } } grpc-go-1.22.1/interop/server/000077500000000000000000000000001351635773100161345ustar00rootroot00000000000000grpc-go-1.22.1/interop/server/server.go000066400000000000000000000045561351635773100200030ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package main import ( "flag" "net" "strconv" "google.golang.org/grpc" "google.golang.org/grpc/credentials" "google.golang.org/grpc/credentials/alts" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/interop" testpb "google.golang.org/grpc/interop/grpc_testing" "google.golang.org/grpc/testdata" ) var ( useTLS = flag.Bool("use_tls", false, "Connection uses TLS if true, else plain TCP") useALTS = flag.Bool("use_alts", false, "Connection uses ALTS if true (this option can only be used on GCP)") altsHSAddr = flag.String("alts_handshaker_service_address", "", "ALTS handshaker gRPC service address") certFile = flag.String("tls_cert_file", "", "The TLS cert file") keyFile = flag.String("tls_key_file", "", "The TLS key file") port = flag.Int("port", 10000, "The server port") ) func main() { flag.Parse() if *useTLS && *useALTS { grpclog.Fatalf("use_tls and use_alts cannot be both set to true") } p := strconv.Itoa(*port) lis, err := net.Listen("tcp", ":"+p) if err != nil { grpclog.Fatalf("failed to listen: %v", err) } var opts []grpc.ServerOption if *useTLS { if *certFile == "" { *certFile = testdata.Path("server1.pem") } if *keyFile == "" { *keyFile = testdata.Path("server1.key") } creds, err := credentials.NewServerTLSFromFile(*certFile, *keyFile) if err != nil { grpclog.Fatalf("Failed to generate credentials %v", err) } opts = append(opts, grpc.Creds(creds)) } else if *useALTS { altsOpts := alts.DefaultServerOptions() if *altsHSAddr != "" { altsOpts.HandshakerServiceAddress = *altsHSAddr } altsTC := alts.NewServerCreds(altsOpts) opts = append(opts, grpc.Creds(altsTC)) } server := grpc.NewServer(opts...) testpb.RegisterTestServiceServer(server, interop.NewTestServer()) server.Serve(lis) } grpc-go-1.22.1/interop/test_utils.go000066400000000000000000000665331351635773100173710ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ //go:generate protoc --go_out=plugins=grpc:. grpc_testing/test.proto package interop import ( "context" "fmt" "io" "io/ioutil" "strings" "time" "github.com/golang/protobuf/proto" "golang.org/x/oauth2" "golang.org/x/oauth2/google" "google.golang.org/grpc" "google.golang.org/grpc/codes" "google.golang.org/grpc/grpclog" testpb "google.golang.org/grpc/interop/grpc_testing" "google.golang.org/grpc/metadata" "google.golang.org/grpc/status" ) var ( reqSizes = []int{27182, 8, 1828, 45904} respSizes = []int{31415, 9, 2653, 58979} largeReqSize = 271828 largeRespSize = 314159 initialMetadataKey = "x-grpc-test-echo-initial" trailingMetadataKey = "x-grpc-test-echo-trailing-bin" ) // ClientNewPayload returns a payload of the given type and size. 
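// For example, ClientNewPayload(testpb.PayloadType_COMPRESSABLE, 1024) returns a Payload whose Body is
// 1024 zero bytes; only COMPRESSABLE is supported here, and any other payload type causes a fatal log.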
func ClientNewPayload(t testpb.PayloadType, size int) *testpb.Payload { if size < 0 { grpclog.Fatalf("Requested a response with invalid length %d", size) } body := make([]byte, size) switch t { case testpb.PayloadType_COMPRESSABLE: case testpb.PayloadType_UNCOMPRESSABLE: grpclog.Fatalf("PayloadType UNCOMPRESSABLE is not supported") default: grpclog.Fatalf("Unsupported payload type: %d", t) } return &testpb.Payload{ Type: t, Body: body, } } // DoEmptyUnaryCall performs a unary RPC with empty request and response messages. func DoEmptyUnaryCall(tc testpb.TestServiceClient, args ...grpc.CallOption) { reply, err := tc.EmptyCall(context.Background(), &testpb.Empty{}, args...) if err != nil { grpclog.Fatal("/TestService/EmptyCall RPC failed: ", err) } if !proto.Equal(&testpb.Empty{}, reply) { grpclog.Fatalf("/TestService/EmptyCall receives %v, want %v", reply, testpb.Empty{}) } } // DoLargeUnaryCall performs a unary RPC with large payload in the request and response. func DoLargeUnaryCall(tc testpb.TestServiceClient, args ...grpc.CallOption) { pl := ClientNewPayload(testpb.PayloadType_COMPRESSABLE, largeReqSize) req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseSize: int32(largeRespSize), Payload: pl, } reply, err := tc.UnaryCall(context.Background(), req, args...) if err != nil { grpclog.Fatal("/TestService/UnaryCall RPC failed: ", err) } t := reply.GetPayload().GetType() s := len(reply.GetPayload().GetBody()) if t != testpb.PayloadType_COMPRESSABLE || s != largeRespSize { grpclog.Fatalf("Got the reply with type %d len %d; want %d, %d", t, s, testpb.PayloadType_COMPRESSABLE, largeRespSize) } } // DoClientStreaming performs a client streaming RPC. func DoClientStreaming(tc testpb.TestServiceClient, args ...grpc.CallOption) { stream, err := tc.StreamingInputCall(context.Background(), args...) if err != nil { grpclog.Fatalf("%v.StreamingInputCall(_) = _, %v", tc, err) } var sum int for _, s := range reqSizes { pl := ClientNewPayload(testpb.PayloadType_COMPRESSABLE, s) req := &testpb.StreamingInputCallRequest{ Payload: pl, } if err := stream.Send(req); err != nil { grpclog.Fatalf("%v has error %v while sending %v", stream, err, req) } sum += s } reply, err := stream.CloseAndRecv() if err != nil { grpclog.Fatalf("%v.CloseAndRecv() got error %v, want %v", stream, err, nil) } if reply.GetAggregatedPayloadSize() != int32(sum) { grpclog.Fatalf("%v.CloseAndRecv().GetAggregatePayloadSize() = %v; want %v", stream, reply.GetAggregatedPayloadSize(), sum) } } // DoServerStreaming performs a server streaming RPC. func DoServerStreaming(tc testpb.TestServiceClient, args ...grpc.CallOption) { respParam := make([]*testpb.ResponseParameters, len(respSizes)) for i, s := range respSizes { respParam[i] = &testpb.ResponseParameters{ Size: int32(s), } } req := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseParameters: respParam, } stream, err := tc.StreamingOutputCall(context.Background(), req, args...) 
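// The server is expected to stream back exactly one response per entry in respParam; the receive loop
// below counts the replies and treats any terminal error other than io.EOF as a test failure.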
if err != nil { grpclog.Fatalf("%v.StreamingOutputCall(_) = _, %v", tc, err) } var rpcStatus error var respCnt int var index int for { reply, err := stream.Recv() if err != nil { rpcStatus = err break } t := reply.GetPayload().GetType() if t != testpb.PayloadType_COMPRESSABLE { grpclog.Fatalf("Got the reply of type %d, want %d", t, testpb.PayloadType_COMPRESSABLE) } size := len(reply.GetPayload().GetBody()) if size != respSizes[index] { grpclog.Fatalf("Got reply body of length %d, want %d", size, respSizes[index]) } index++ respCnt++ } if rpcStatus != io.EOF { grpclog.Fatalf("Failed to finish the server streaming rpc: %v", rpcStatus) } if respCnt != len(respSizes) { grpclog.Fatalf("Got %d reply, want %d", len(respSizes), respCnt) } } // DoPingPong performs ping-pong style bi-directional streaming RPC. func DoPingPong(tc testpb.TestServiceClient, args ...grpc.CallOption) { stream, err := tc.FullDuplexCall(context.Background(), args...) if err != nil { grpclog.Fatalf("%v.FullDuplexCall(_) = _, %v", tc, err) } var index int for index < len(reqSizes) { respParam := []*testpb.ResponseParameters{ { Size: int32(respSizes[index]), }, } pl := ClientNewPayload(testpb.PayloadType_COMPRESSABLE, reqSizes[index]) req := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseParameters: respParam, Payload: pl, } if err := stream.Send(req); err != nil { grpclog.Fatalf("%v has error %v while sending %v", stream, err, req) } reply, err := stream.Recv() if err != nil { grpclog.Fatalf("%v.Recv() = %v", stream, err) } t := reply.GetPayload().GetType() if t != testpb.PayloadType_COMPRESSABLE { grpclog.Fatalf("Got the reply of type %d, want %d", t, testpb.PayloadType_COMPRESSABLE) } size := len(reply.GetPayload().GetBody()) if size != respSizes[index] { grpclog.Fatalf("Got reply body of length %d, want %d", size, respSizes[index]) } index++ } if err := stream.CloseSend(); err != nil { grpclog.Fatalf("%v.CloseSend() got %v, want %v", stream, err, nil) } if _, err := stream.Recv(); err != io.EOF { grpclog.Fatalf("%v failed to complele the ping pong test: %v", stream, err) } } // DoEmptyStream sets up a bi-directional streaming with zero message. func DoEmptyStream(tc testpb.TestServiceClient, args ...grpc.CallOption) { stream, err := tc.FullDuplexCall(context.Background(), args...) if err != nil { grpclog.Fatalf("%v.FullDuplexCall(_) = _, %v", tc, err) } if err := stream.CloseSend(); err != nil { grpclog.Fatalf("%v.CloseSend() got %v, want %v", stream, err, nil) } if _, err := stream.Recv(); err != io.EOF { grpclog.Fatalf("%v failed to complete the empty stream test: %v", stream, err) } } // DoTimeoutOnSleepingServer performs an RPC on a sleep server which causes RPC timeout. func DoTimeoutOnSleepingServer(tc testpb.TestServiceClient, args ...grpc.CallOption) { ctx, cancel := context.WithTimeout(context.Background(), 1*time.Millisecond) defer cancel() stream, err := tc.FullDuplexCall(ctx, args...) 
if err != nil { if status.Code(err) == codes.DeadlineExceeded { return } grpclog.Fatalf("%v.FullDuplexCall(_) = _, %v", tc, err) } pl := ClientNewPayload(testpb.PayloadType_COMPRESSABLE, 27182) req := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, Payload: pl, } if err := stream.Send(req); err != nil && err != io.EOF { grpclog.Fatalf("%v.Send(_) = %v", stream, err) } if _, err := stream.Recv(); status.Code(err) != codes.DeadlineExceeded { grpclog.Fatalf("%v.Recv() = _, %v, want error code %d", stream, err, codes.DeadlineExceeded) } } // DoComputeEngineCreds performs a unary RPC with compute engine auth. func DoComputeEngineCreds(tc testpb.TestServiceClient, serviceAccount, oauthScope string) { pl := ClientNewPayload(testpb.PayloadType_COMPRESSABLE, largeReqSize) req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseSize: int32(largeRespSize), Payload: pl, FillUsername: true, FillOauthScope: true, } reply, err := tc.UnaryCall(context.Background(), req) if err != nil { grpclog.Fatal("/TestService/UnaryCall RPC failed: ", err) } user := reply.GetUsername() scope := reply.GetOauthScope() if user != serviceAccount { grpclog.Fatalf("Got user name %q, want %q.", user, serviceAccount) } if !strings.Contains(oauthScope, scope) { grpclog.Fatalf("Got OAuth scope %q which is NOT a substring of %q.", scope, oauthScope) } } func getServiceAccountJSONKey(keyFile string) []byte { jsonKey, err := ioutil.ReadFile(keyFile) if err != nil { grpclog.Fatalf("Failed to read the service account key file: %v", err) } return jsonKey } // DoServiceAccountCreds performs a unary RPC with service account auth. func DoServiceAccountCreds(tc testpb.TestServiceClient, serviceAccountKeyFile, oauthScope string) { pl := ClientNewPayload(testpb.PayloadType_COMPRESSABLE, largeReqSize) req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseSize: int32(largeRespSize), Payload: pl, FillUsername: true, FillOauthScope: true, } reply, err := tc.UnaryCall(context.Background(), req) if err != nil { grpclog.Fatal("/TestService/UnaryCall RPC failed: ", err) } jsonKey := getServiceAccountJSONKey(serviceAccountKeyFile) user := reply.GetUsername() scope := reply.GetOauthScope() if !strings.Contains(string(jsonKey), user) { grpclog.Fatalf("Got user name %q which is NOT a substring of %q.", user, jsonKey) } if !strings.Contains(oauthScope, scope) { grpclog.Fatalf("Got OAuth scope %q which is NOT a substring of %q.", scope, oauthScope) } } // DoJWTTokenCreds performs a unary RPC with JWT token auth. func DoJWTTokenCreds(tc testpb.TestServiceClient, serviceAccountKeyFile string) { pl := ClientNewPayload(testpb.PayloadType_COMPRESSABLE, largeReqSize) req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseSize: int32(largeRespSize), Payload: pl, FillUsername: true, } reply, err := tc.UnaryCall(context.Background(), req) if err != nil { grpclog.Fatal("/TestService/UnaryCall RPC failed: ", err) } jsonKey := getServiceAccountJSONKey(serviceAccountKeyFile) user := reply.GetUsername() if !strings.Contains(string(jsonKey), user) { grpclog.Fatalf("Got user name %q which is NOT a substring of %q.", user, jsonKey) } } // GetToken obtains an OAUTH token from the input. 
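//
// An illustrative sketch of attaching the returned token to outgoing metadata,
// mirroring DoPerRPCCreds below (names here are only examples):
//
//	token := GetToken(serviceAccountKeyFile, oauthScope)
//	md := metadata.Pairs("authorization", token.Type()+" "+token.AccessToken)
//	ctx := metadata.NewOutgoingContext(context.Background(), md)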
func GetToken(serviceAccountKeyFile string, oauthScope string) *oauth2.Token { jsonKey := getServiceAccountJSONKey(serviceAccountKeyFile) config, err := google.JWTConfigFromJSON(jsonKey, oauthScope) if err != nil { grpclog.Fatalf("Failed to get the config: %v", err) } token, err := config.TokenSource(context.Background()).Token() if err != nil { grpclog.Fatalf("Failed to get the token: %v", err) } return token } // DoOauth2TokenCreds performs a unary RPC with OAUTH2 token auth. func DoOauth2TokenCreds(tc testpb.TestServiceClient, serviceAccountKeyFile, oauthScope string) { pl := ClientNewPayload(testpb.PayloadType_COMPRESSABLE, largeReqSize) req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseSize: int32(largeRespSize), Payload: pl, FillUsername: true, FillOauthScope: true, } reply, err := tc.UnaryCall(context.Background(), req) if err != nil { grpclog.Fatal("/TestService/UnaryCall RPC failed: ", err) } jsonKey := getServiceAccountJSONKey(serviceAccountKeyFile) user := reply.GetUsername() scope := reply.GetOauthScope() if !strings.Contains(string(jsonKey), user) { grpclog.Fatalf("Got user name %q which is NOT a substring of %q.", user, jsonKey) } if !strings.Contains(oauthScope, scope) { grpclog.Fatalf("Got OAuth scope %q which is NOT a substring of %q.", scope, oauthScope) } } // DoPerRPCCreds performs a unary RPC with per RPC OAUTH2 token. func DoPerRPCCreds(tc testpb.TestServiceClient, serviceAccountKeyFile, oauthScope string) { jsonKey := getServiceAccountJSONKey(serviceAccountKeyFile) pl := ClientNewPayload(testpb.PayloadType_COMPRESSABLE, largeReqSize) req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseSize: int32(largeRespSize), Payload: pl, FillUsername: true, FillOauthScope: true, } token := GetToken(serviceAccountKeyFile, oauthScope) kv := map[string]string{"authorization": token.Type() + " " + token.AccessToken} ctx := metadata.NewOutgoingContext(context.Background(), metadata.MD{"authorization": []string{kv["authorization"]}}) reply, err := tc.UnaryCall(ctx, req) if err != nil { grpclog.Fatal("/TestService/UnaryCall RPC failed: ", err) } user := reply.GetUsername() scope := reply.GetOauthScope() if !strings.Contains(string(jsonKey), user) { grpclog.Fatalf("Got user name %q which is NOT a substring of %q.", user, jsonKey) } if !strings.Contains(oauthScope, scope) { grpclog.Fatalf("Got OAuth scope %q which is NOT a substring of %q.", scope, oauthScope) } } // DoGoogleDefaultCredentials performs an unary RPC with google default credentials func DoGoogleDefaultCredentials(tc testpb.TestServiceClient, defaultServiceAccount string) { pl := ClientNewPayload(testpb.PayloadType_COMPRESSABLE, largeReqSize) req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseSize: int32(largeRespSize), Payload: pl, FillUsername: true, FillOauthScope: true, } reply, err := tc.UnaryCall(context.Background(), req) if err != nil { grpclog.Fatal("/TestService/UnaryCall RPC failed: ", err) } if reply.GetUsername() != defaultServiceAccount { grpclog.Fatalf("Got user name %q; wanted %q. 
", reply.GetUsername(), defaultServiceAccount) } } // DoComputeEngineChannelCredentials performs an unary RPC with compute engine channel credentials func DoComputeEngineChannelCredentials(tc testpb.TestServiceClient, defaultServiceAccount string) { pl := ClientNewPayload(testpb.PayloadType_COMPRESSABLE, largeReqSize) req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseSize: int32(largeRespSize), Payload: pl, FillUsername: true, FillOauthScope: true, } reply, err := tc.UnaryCall(context.Background(), req) if err != nil { grpclog.Fatal("/TestService/UnaryCall RPC failed: ", err) } if reply.GetUsername() != defaultServiceAccount { grpclog.Fatalf("Got user name %q; wanted %q. ", reply.GetUsername(), defaultServiceAccount) } } var testMetadata = metadata.MD{ "key1": []string{"value1"}, "key2": []string{"value2"}, } // DoCancelAfterBegin cancels the RPC after metadata has been sent but before payloads are sent. func DoCancelAfterBegin(tc testpb.TestServiceClient, args ...grpc.CallOption) { ctx, cancel := context.WithCancel(metadata.NewOutgoingContext(context.Background(), testMetadata)) stream, err := tc.StreamingInputCall(ctx, args...) if err != nil { grpclog.Fatalf("%v.StreamingInputCall(_) = _, %v", tc, err) } cancel() _, err = stream.CloseAndRecv() if status.Code(err) != codes.Canceled { grpclog.Fatalf("%v.CloseAndRecv() got error code %d, want %d", stream, status.Code(err), codes.Canceled) } } // DoCancelAfterFirstResponse cancels the RPC after receiving the first message from the server. func DoCancelAfterFirstResponse(tc testpb.TestServiceClient, args ...grpc.CallOption) { ctx, cancel := context.WithCancel(context.Background()) stream, err := tc.FullDuplexCall(ctx, args...) if err != nil { grpclog.Fatalf("%v.FullDuplexCall(_) = _, %v", tc, err) } respParam := []*testpb.ResponseParameters{ { Size: 31415, }, } pl := ClientNewPayload(testpb.PayloadType_COMPRESSABLE, 27182) req := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseParameters: respParam, Payload: pl, } if err := stream.Send(req); err != nil { grpclog.Fatalf("%v has error %v while sending %v", stream, err, req) } if _, err := stream.Recv(); err != nil { grpclog.Fatalf("%v.Recv() = %v", stream, err) } cancel() if _, err := stream.Recv(); status.Code(err) != codes.Canceled { grpclog.Fatalf("%v compleled with error code %d, want %d", stream, status.Code(err), codes.Canceled) } } var ( initialMetadataValue = "test_initial_metadata_value" trailingMetadataValue = "\x0a\x0b\x0a\x0b\x0a\x0b" customMetadata = metadata.Pairs( initialMetadataKey, initialMetadataValue, trailingMetadataKey, trailingMetadataValue, ) ) func validateMetadata(header, trailer metadata.MD) { if len(header[initialMetadataKey]) != 1 { grpclog.Fatalf("Expected exactly one header from server. Received %d", len(header[initialMetadataKey])) } if header[initialMetadataKey][0] != initialMetadataValue { grpclog.Fatalf("Got header %s; want %s", header[initialMetadataKey][0], initialMetadataValue) } if len(trailer[trailingMetadataKey]) != 1 { grpclog.Fatalf("Expected exactly one trailer from server. Received %d", len(trailer[trailingMetadataKey])) } if trailer[trailingMetadataKey][0] != trailingMetadataValue { grpclog.Fatalf("Got trailer %s; want %s", trailer[trailingMetadataKey][0], trailingMetadataValue) } } // DoCustomMetadata checks that metadata is echoed back to the client. func DoCustomMetadata(tc testpb.TestServiceClient, args ...grpc.CallOption) { // Testing with UnaryCall. 
pl := ClientNewPayload(testpb.PayloadType_COMPRESSABLE, 1) req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseSize: int32(1), Payload: pl, } ctx := metadata.NewOutgoingContext(context.Background(), customMetadata) var header, trailer metadata.MD args = append(args, grpc.Header(&header), grpc.Trailer(&trailer)) reply, err := tc.UnaryCall( ctx, req, args..., ) if err != nil { grpclog.Fatal("/TestService/UnaryCall RPC failed: ", err) } t := reply.GetPayload().GetType() s := len(reply.GetPayload().GetBody()) if t != testpb.PayloadType_COMPRESSABLE || s != 1 { grpclog.Fatalf("Got the reply with type %d len %d; want %d, %d", t, s, testpb.PayloadType_COMPRESSABLE, 1) } validateMetadata(header, trailer) // Testing with FullDuplex. stream, err := tc.FullDuplexCall(ctx, args...) if err != nil { grpclog.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } respParam := []*testpb.ResponseParameters{ { Size: 1, }, } streamReq := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseParameters: respParam, Payload: pl, } if err := stream.Send(streamReq); err != nil { grpclog.Fatalf("%v has error %v while sending %v", stream, err, streamReq) } streamHeader, err := stream.Header() if err != nil { grpclog.Fatalf("%v.Header() = %v", stream, err) } if _, err := stream.Recv(); err != nil { grpclog.Fatalf("%v.Recv() = %v", stream, err) } if err := stream.CloseSend(); err != nil { grpclog.Fatalf("%v.CloseSend() = %v, want ", stream, err) } if _, err := stream.Recv(); err != io.EOF { grpclog.Fatalf("%v failed to complete the custom metadata test: %v", stream, err) } streamTrailer := stream.Trailer() validateMetadata(streamHeader, streamTrailer) } // DoStatusCodeAndMessage checks that the status code is propagated back to the client. func DoStatusCodeAndMessage(tc testpb.TestServiceClient, args ...grpc.CallOption) { var code int32 = 2 msg := "test status message" expectedErr := status.Error(codes.Code(code), msg) respStatus := &testpb.EchoStatus{ Code: code, Message: msg, } // Test UnaryCall. req := &testpb.SimpleRequest{ ResponseStatus: respStatus, } if _, err := tc.UnaryCall(context.Background(), req, args...); err == nil || err.Error() != expectedErr.Error() { grpclog.Fatalf("%v.UnaryCall(_, %v) = _, %v, want _, %v", tc, req, err, expectedErr) } // Test FullDuplexCall. stream, err := tc.FullDuplexCall(context.Background(), args...) if err != nil { grpclog.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } streamReq := &testpb.StreamingOutputCallRequest{ ResponseStatus: respStatus, } if err := stream.Send(streamReq); err != nil { grpclog.Fatalf("%v has error %v while sending %v, want ", stream, err, streamReq) } if err := stream.CloseSend(); err != nil { grpclog.Fatalf("%v.CloseSend() = %v, want ", stream, err) } if _, err = stream.Recv(); err.Error() != expectedErr.Error() { grpclog.Fatalf("%v.Recv() returned error %v, want %v", stream, err, expectedErr) } } // DoSpecialStatusMessage verifies Unicode and whitespace is correctly processed // in status message. 
func DoSpecialStatusMessage(tc testpb.TestServiceClient, args ...grpc.CallOption) { const ( code int32 = 2 msg string = "\t\ntest with whitespace\r\nand Unicode BMP ☺ and non-BMP 😈\t\n" ) expectedErr := status.Error(codes.Code(code), msg) req := &testpb.SimpleRequest{ ResponseStatus: &testpb.EchoStatus{ Code: code, Message: msg, }, } ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() if _, err := tc.UnaryCall(ctx, req, args...); err == nil || err.Error() != expectedErr.Error() { grpclog.Fatalf("%v.UnaryCall(_, %v) = _, %v, want _, %v", tc, req, err, expectedErr) } } // DoUnimplementedService attempts to call a method from an unimplemented service. func DoUnimplementedService(tc testpb.UnimplementedServiceClient) { _, err := tc.UnimplementedCall(context.Background(), &testpb.Empty{}) if status.Code(err) != codes.Unimplemented { grpclog.Fatalf("%v.UnimplementedCall() = _, %v, want _, %v", tc, status.Code(err), codes.Unimplemented) } } // DoUnimplementedMethod attempts to call an unimplemented method. func DoUnimplementedMethod(cc *grpc.ClientConn) { var req, reply proto.Message if err := cc.Invoke(context.Background(), "/grpc.testing.TestService/UnimplementedCall", req, reply); err == nil || status.Code(err) != codes.Unimplemented { grpclog.Fatalf("ClientConn.Invoke(_, _, _, _, _) = %v, want error code %s", err, codes.Unimplemented) } } // DoPickFirstUnary runs multiple RPCs (rpcCount) and checks that all requests // are sent to the same backend. func DoPickFirstUnary(tc testpb.TestServiceClient) { const rpcCount = 100 pl := ClientNewPayload(testpb.PayloadType_COMPRESSABLE, 1) req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseSize: int32(1), Payload: pl, FillServerId: true, } ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() var serverID string for i := 0; i < rpcCount; i++ { resp, err := tc.UnaryCall(ctx, req) if err != nil { grpclog.Fatalf("iteration %d, failed to do UnaryCall: %v", i, err) } id := resp.ServerId if id == "" { grpclog.Fatalf("iteration %d, got empty server ID", i) } if i == 0 { serverID = id continue } if serverID != id { grpclog.Fatalf("iteration %d, got different server ids: %q vs %q", i, serverID, id) } } } type testServer struct { } // NewTestServer creates a test server for test service. 
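//
// An illustrative sketch of registering it, mirroring the interop server binary
// (lis is assumed to be a net.Listener):
//
//	s := grpc.NewServer()
//	testpb.RegisterTestServiceServer(s, NewTestServer())
//	s.Serve(lis)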
func NewTestServer() testpb.TestServiceServer { return &testServer{} } func (s *testServer) EmptyCall(ctx context.Context, in *testpb.Empty) (*testpb.Empty, error) { return new(testpb.Empty), nil } func serverNewPayload(t testpb.PayloadType, size int32) (*testpb.Payload, error) { if size < 0 { return nil, fmt.Errorf("requested a response with invalid length %d", size) } body := make([]byte, size) switch t { case testpb.PayloadType_COMPRESSABLE: case testpb.PayloadType_UNCOMPRESSABLE: return nil, fmt.Errorf("payloadType UNCOMPRESSABLE is not supported") default: return nil, fmt.Errorf("unsupported payload type: %d", t) } return &testpb.Payload{ Type: t, Body: body, }, nil } func (s *testServer) UnaryCall(ctx context.Context, in *testpb.SimpleRequest) (*testpb.SimpleResponse, error) { st := in.GetResponseStatus() if md, ok := metadata.FromIncomingContext(ctx); ok { if initialMetadata, ok := md[initialMetadataKey]; ok { header := metadata.Pairs(initialMetadataKey, initialMetadata[0]) grpc.SendHeader(ctx, header) } if trailingMetadata, ok := md[trailingMetadataKey]; ok { trailer := metadata.Pairs(trailingMetadataKey, trailingMetadata[0]) grpc.SetTrailer(ctx, trailer) } } if st != nil && st.Code != 0 { return nil, status.Error(codes.Code(st.Code), st.Message) } pl, err := serverNewPayload(in.GetResponseType(), in.GetResponseSize()) if err != nil { return nil, err } return &testpb.SimpleResponse{ Payload: pl, }, nil } func (s *testServer) StreamingOutputCall(args *testpb.StreamingOutputCallRequest, stream testpb.TestService_StreamingOutputCallServer) error { cs := args.GetResponseParameters() for _, c := range cs { if us := c.GetIntervalUs(); us > 0 { time.Sleep(time.Duration(us) * time.Microsecond) } pl, err := serverNewPayload(args.GetResponseType(), c.GetSize()) if err != nil { return err } if err := stream.Send(&testpb.StreamingOutputCallResponse{ Payload: pl, }); err != nil { return err } } return nil } func (s *testServer) StreamingInputCall(stream testpb.TestService_StreamingInputCallServer) error { var sum int for { in, err := stream.Recv() if err == io.EOF { return stream.SendAndClose(&testpb.StreamingInputCallResponse{ AggregatedPayloadSize: int32(sum), }) } if err != nil { return err } p := in.GetPayload().GetBody() sum += len(p) } } func (s *testServer) FullDuplexCall(stream testpb.TestService_FullDuplexCallServer) error { if md, ok := metadata.FromIncomingContext(stream.Context()); ok { if initialMetadata, ok := md[initialMetadataKey]; ok { header := metadata.Pairs(initialMetadataKey, initialMetadata[0]) stream.SendHeader(header) } if trailingMetadata, ok := md[trailingMetadataKey]; ok { trailer := metadata.Pairs(trailingMetadataKey, trailingMetadata[0]) stream.SetTrailer(trailer) } } for { in, err := stream.Recv() if err == io.EOF { // read done. return nil } if err != nil { return err } st := in.GetResponseStatus() if st != nil && st.Code != 0 { return status.Error(codes.Code(st.Code), st.Message) } cs := in.GetResponseParameters() for _, c := range cs { if us := c.GetIntervalUs(); us > 0 { time.Sleep(time.Duration(us) * time.Microsecond) } pl, err := serverNewPayload(in.GetResponseType(), c.GetSize()) if err != nil { return err } if err := stream.Send(&testpb.StreamingOutputCallResponse{ Payload: pl, }); err != nil { return err } } } } func (s *testServer) HalfDuplexCall(stream testpb.TestService_HalfDuplexCallServer) error { var msgBuf []*testpb.StreamingOutputCallRequest for { in, err := stream.Recv() if err == io.EOF { // read done. 
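// All client requests have been received and buffered; stop reading and
// send the buffered responses below.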
break } if err != nil { return err } msgBuf = append(msgBuf, in) } for _, m := range msgBuf { cs := m.GetResponseParameters() for _, c := range cs { if us := c.GetIntervalUs(); us > 0 { time.Sleep(time.Duration(us) * time.Microsecond) } pl, err := serverNewPayload(m.GetResponseType(), c.GetSize()) if err != nil { return err } if err := stream.Send(&testpb.StreamingOutputCallResponse{ Payload: pl, }); err != nil { return err } } } return nil } grpc-go-1.22.1/keepalive/000077500000000000000000000000001351635773100151135ustar00rootroot00000000000000grpc-go-1.22.1/keepalive/keepalive.go000066400000000000000000000076631351635773100174230ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package keepalive defines configurable parameters for point-to-point // healthcheck. package keepalive import ( "time" ) // ClientParameters is used to set keepalive parameters on the client-side. // These configure how the client will actively probe to notice when a // connection is broken and send pings so intermediaries will be aware of the // liveness of the connection. Make sure these parameters are set in // coordination with the keepalive policy on the server, as incompatible // settings can result in closing of connection. type ClientParameters struct { // After a duration of this time if the client doesn't see any activity it // pings the server to see if the transport is still alive. // If set below 10s, a minimum value of 10s will be used instead. Time time.Duration // The current default value is infinity. // After having pinged for keepalive check, the client waits for a duration // of Timeout and if no activity is seen even after that the connection is // closed. Timeout time.Duration // The current default value is 20 seconds. // If true, client sends keepalive pings even with no active RPCs. If false, // when there are no active RPCs, Time and Timeout will be ignored and no // keepalive pings will be sent. PermitWithoutStream bool // false by default. } // ServerParameters is used to set keepalive and max-age parameters on the // server-side. type ServerParameters struct { // MaxConnectionIdle is a duration for the amount of time after which an // idle connection would be closed by sending a GoAway. Idleness duration is // defined since the most recent time the number of outstanding RPCs became // zero or the connection establishment. MaxConnectionIdle time.Duration // The current default value is infinity. // MaxConnectionAge is a duration for the maximum amount of time a // connection may exist before it will be closed by sending a GoAway. A // random jitter of +/-10% will be added to MaxConnectionAge to spread out // connection storms. MaxConnectionAge time.Duration // The current default value is infinity. // MaxConnectionAgeGrace is an additive period after MaxConnectionAge after // which the connection will be forcibly closed. MaxConnectionAgeGrace time.Duration // The current default value is infinity. 
// After a duration of this time if the server doesn't see any activity it // pings the client to see if the transport is still alive. // If set below 1s, a minimum value of 1s will be used instead. Time time.Duration // The current default value is 2 hours. // After having pinged for keepalive check, the server waits for a duration // of Timeout and if no activity is seen even after that the connection is // closed. Timeout time.Duration // The current default value is 20 seconds. } // EnforcementPolicy is used to set keepalive enforcement policy on the // server-side. Server will close connection with a client that violates this // policy. type EnforcementPolicy struct { // MinTime is the minimum amount of time a client should wait before sending // a keepalive ping. MinTime time.Duration // The current default value is 5 minutes. // If true, server allows keepalive pings even when there are no active // streams(RPCs). If false, and client sends ping when there are no active // streams, server will send GOAWAY and close the connection. PermitWithoutStream bool // false by default. } grpc-go-1.22.1/metadata/000077500000000000000000000000001351635773100147265ustar00rootroot00000000000000grpc-go-1.22.1/metadata/metadata.go000066400000000000000000000142541351635773100170430ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package metadata define the structure of the metadata supported by gRPC library. // Please refer to https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md // for more information about custom-metadata. package metadata // import "google.golang.org/grpc/metadata" import ( "context" "fmt" "strings" ) // DecodeKeyValue returns k, v, nil. // // Deprecated: use k and v directly instead. func DecodeKeyValue(k, v string) (string, string, error) { return k, v, nil } // MD is a mapping from metadata keys to values. Users should use the following // two convenience functions New and Pairs to generate MD. type MD map[string][]string // New creates an MD from a given key-value map. // // Only the following ASCII characters are allowed in keys: // - digits: 0-9 // - uppercase letters: A-Z (normalized to lower) // - lowercase letters: a-z // - special characters: -_. // Uppercase letters are automatically converted to lowercase. // // Keys beginning with "grpc-" are reserved for grpc-internal use only and may // result in errors if set in metadata. func New(m map[string]string) MD { md := MD{} for k, val := range m { key := strings.ToLower(k) md[key] = append(md[key], val) } return md } // Pairs returns an MD formed by the mapping of key, value ... // Pairs panics if len(kv) is odd. // // Only the following ASCII characters are allowed in keys: // - digits: 0-9 // - uppercase letters: A-Z (normalized to lower) // - lowercase letters: a-z // - special characters: -_. // Uppercase letters are automatically converted to lowercase. 
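// For example (illustrative):
//
//	md := Pairs("Key1", "v1", "key1", "v2")
//	// md is MD{"key1": []string{"v1", "v2"}}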
// // Keys beginning with "grpc-" are reserved for grpc-internal use only and may // result in errors if set in metadata. func Pairs(kv ...string) MD { if len(kv)%2 == 1 { panic(fmt.Sprintf("metadata: Pairs got the odd number of input pairs for metadata: %d", len(kv))) } md := MD{} var key string for i, s := range kv { if i%2 == 0 { key = strings.ToLower(s) continue } md[key] = append(md[key], s) } return md } // Len returns the number of items in md. func (md MD) Len() int { return len(md) } // Copy returns a copy of md. func (md MD) Copy() MD { return Join(md) } // Get obtains the values for a given key. func (md MD) Get(k string) []string { k = strings.ToLower(k) return md[k] } // Set sets the value of a given key with a slice of values. func (md MD) Set(k string, vals ...string) { if len(vals) == 0 { return } k = strings.ToLower(k) md[k] = vals } // Append adds the values to key k, not overwriting what was already stored at that key. func (md MD) Append(k string, vals ...string) { if len(vals) == 0 { return } k = strings.ToLower(k) md[k] = append(md[k], vals...) } // Join joins any number of mds into a single MD. // The order of values for each key is determined by the order in which // the mds containing those values are presented to Join. func Join(mds ...MD) MD { out := MD{} for _, md := range mds { for k, v := range md { out[k] = append(out[k], v...) } } return out } type mdIncomingKey struct{} type mdOutgoingKey struct{} // NewIncomingContext creates a new context with incoming md attached. func NewIncomingContext(ctx context.Context, md MD) context.Context { return context.WithValue(ctx, mdIncomingKey{}, md) } // NewOutgoingContext creates a new context with outgoing md attached. If used // in conjunction with AppendToOutgoingContext, NewOutgoingContext will // overwrite any previously-appended metadata. func NewOutgoingContext(ctx context.Context, md MD) context.Context { return context.WithValue(ctx, mdOutgoingKey{}, rawMD{md: md}) } // AppendToOutgoingContext returns a new context with the provided kv merged // with any existing metadata in the context. Please refer to the // documentation of Pairs for a description of kv. func AppendToOutgoingContext(ctx context.Context, kv ...string) context.Context { if len(kv)%2 == 1 { panic(fmt.Sprintf("metadata: AppendToOutgoingContext got an odd number of input pairs for metadata: %d", len(kv))) } md, _ := ctx.Value(mdOutgoingKey{}).(rawMD) added := make([][]string, len(md.added)+1) copy(added, md.added) added[len(added)-1] = make([]string, len(kv)) copy(added[len(added)-1], kv) return context.WithValue(ctx, mdOutgoingKey{}, rawMD{md: md.md, added: added}) } // FromIncomingContext returns the incoming metadata in ctx if it exists. The // returned MD should not be modified. Writing to it may cause races. // Modification should be made to copies of the returned MD. func FromIncomingContext(ctx context.Context) (md MD, ok bool) { md, ok = ctx.Value(mdIncomingKey{}).(MD) return } // FromOutgoingContextRaw returns the un-merged, intermediary contents // of rawMD. Remember to perform strings.ToLower on the keys. The returned // MD should not be modified. Writing to it may cause races. Modification // should be made to copies of the returned MD. // // This is intended for gRPC-internal use ONLY. 
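// Callers outside of gRPC should use FromOutgoingContext below, which merges
// the appended key/value pairs into a single MD.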
func FromOutgoingContextRaw(ctx context.Context) (MD, [][]string, bool) { raw, ok := ctx.Value(mdOutgoingKey{}).(rawMD) if !ok { return nil, nil, false } return raw.md, raw.added, true } // FromOutgoingContext returns the outgoing metadata in ctx if it exists. The // returned MD should not be modified. Writing to it may cause races. // Modification should be made to copies of the returned MD. func FromOutgoingContext(ctx context.Context) (MD, bool) { raw, ok := ctx.Value(mdOutgoingKey{}).(rawMD) if !ok { return nil, false } mds := make([]MD, 0, len(raw.added)+1) mds = append(mds, raw.md) for _, vv := range raw.added { mds = append(mds, Pairs(vv...)) } return Join(mds...), ok } type rawMD struct { md MD added [][]string } grpc-go-1.22.1/metadata/metadata_test.go000066400000000000000000000154271351635773100201050ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package metadata import ( "context" "reflect" "strconv" "testing" ) func TestPairsMD(t *testing.T) { for _, test := range []struct { // input kv []string // output md MD }{ {[]string{}, MD{}}, {[]string{"k1", "v1", "k1", "v2"}, MD{"k1": []string{"v1", "v2"}}}, } { md := Pairs(test.kv...) if !reflect.DeepEqual(md, test.md) { t.Fatalf("Pairs(%v) = %v, want %v", test.kv, md, test.md) } } } func TestCopy(t *testing.T) { const key, val = "key", "val" orig := Pairs(key, val) cpy := orig.Copy() if !reflect.DeepEqual(orig, cpy) { t.Errorf("copied value not equal to the original, got %v, want %v", cpy, orig) } orig[key][0] = "foo" if v := cpy[key][0]; v != val { t.Errorf("change in original should not affect copy, got %q, want %q", v, val) } } func TestJoin(t *testing.T) { for _, test := range []struct { mds []MD want MD }{ {[]MD{}, MD{}}, {[]MD{Pairs("foo", "bar")}, Pairs("foo", "bar")}, {[]MD{Pairs("foo", "bar"), Pairs("foo", "baz")}, Pairs("foo", "bar", "foo", "baz")}, {[]MD{Pairs("foo", "bar"), Pairs("foo", "baz"), Pairs("zip", "zap")}, Pairs("foo", "bar", "foo", "baz", "zip", "zap")}, } { md := Join(test.mds...) 
if !reflect.DeepEqual(md, test.want) { t.Errorf("context's metadata is %v, want %v", md, test.want) } } } func TestGet(t *testing.T) { for _, test := range []struct { md MD key string wantVals []string }{ {md: Pairs("My-Optional-Header", "42"), key: "My-Optional-Header", wantVals: []string{"42"}}, {md: Pairs("Header", "42", "Header", "43", "Header", "44", "other", "1"), key: "HEADER", wantVals: []string{"42", "43", "44"}}, {md: Pairs("HEADER", "10"), key: "HEADER", wantVals: []string{"10"}}, } { vals := test.md.Get(test.key) if !reflect.DeepEqual(vals, test.wantVals) { t.Errorf("value of metadata %v is %v, want %v", test.key, vals, test.wantVals) } } } func TestSet(t *testing.T) { for _, test := range []struct { md MD setKey string setVals []string want MD }{ { md: Pairs("My-Optional-Header", "42", "other-key", "999"), setKey: "Other-Key", setVals: []string{"1"}, want: Pairs("my-optional-header", "42", "other-key", "1"), }, { md: Pairs("My-Optional-Header", "42"), setKey: "Other-Key", setVals: []string{"1", "2", "3"}, want: Pairs("my-optional-header", "42", "other-key", "1", "other-key", "2", "other-key", "3"), }, { md: Pairs("My-Optional-Header", "42"), setKey: "Other-Key", setVals: []string{}, want: Pairs("my-optional-header", "42"), }, } { test.md.Set(test.setKey, test.setVals...) if !reflect.DeepEqual(test.md, test.want) { t.Errorf("value of metadata is %v, want %v", test.md, test.want) } } } func TestAppend(t *testing.T) { for _, test := range []struct { md MD appendKey string appendVals []string want MD }{ { md: Pairs("My-Optional-Header", "42"), appendKey: "Other-Key", appendVals: []string{"1"}, want: Pairs("my-optional-header", "42", "other-key", "1"), }, { md: Pairs("My-Optional-Header", "42"), appendKey: "my-OptIoNal-HeAder", appendVals: []string{"1", "2", "3"}, want: Pairs("my-optional-header", "42", "my-optional-header", "1", "my-optional-header", "2", "my-optional-header", "3"), }, { md: Pairs("My-Optional-Header", "42"), appendKey: "my-OptIoNal-HeAder", appendVals: []string{}, want: Pairs("my-optional-header", "42"), }, } { test.md.Append(test.appendKey, test.appendVals...) 
if !reflect.DeepEqual(test.md, test.want) { t.Errorf("value of metadata is %v, want %v", test.md, test.want) } } } func TestAppendToOutgoingContext(t *testing.T) { // Pre-existing metadata ctx := NewOutgoingContext(context.Background(), Pairs("k1", "v1", "k2", "v2")) ctx = AppendToOutgoingContext(ctx, "k1", "v3") ctx = AppendToOutgoingContext(ctx, "k1", "v4") md, ok := FromOutgoingContext(ctx) if !ok { t.Errorf("Expected MD to exist in ctx, but got none") } want := Pairs("k1", "v1", "k1", "v3", "k1", "v4", "k2", "v2") if !reflect.DeepEqual(md, want) { t.Errorf("context's metadata is %v, want %v", md, want) } // No existing metadata ctx = AppendToOutgoingContext(context.Background(), "k1", "v1") md, ok = FromOutgoingContext(ctx) if !ok { t.Errorf("Expected MD to exist in ctx, but got none") } want = Pairs("k1", "v1") if !reflect.DeepEqual(md, want) { t.Errorf("context's metadata is %v, want %v", md, want) } } func TestAppendToOutgoingContext_Repeated(t *testing.T) { ctx := context.Background() for i := 0; i < 100; i = i + 2 { ctx1 := AppendToOutgoingContext(ctx, "k", strconv.Itoa(i)) ctx2 := AppendToOutgoingContext(ctx, "k", strconv.Itoa(i+1)) md1, _ := FromOutgoingContext(ctx1) md2, _ := FromOutgoingContext(ctx2) if reflect.DeepEqual(md1, md2) { t.Fatalf("md1, md2 = %v, %v; should not be equal", md1, md2) } ctx = ctx1 } } func TestAppendToOutgoingContext_FromKVSlice(t *testing.T) { const k, v = "a", "b" kv := []string{k, v} ctx := AppendToOutgoingContext(context.Background(), kv...) md, _ := FromOutgoingContext(ctx) if md[k][0] != v { t.Fatalf("md[%q] = %q; want %q", k, md[k], v) } kv[1] = "xxx" md, _ = FromOutgoingContext(ctx) if md[k][0] != v { t.Fatalf("md[%q] = %q; want %q", k, md[k], v) } } // Old/slow approach to adding metadata to context func Benchmark_AddingMetadata_ContextManipulationApproach(b *testing.B) { // TODO: Add in N=1-100 tests once Go1.6 support is removed. const num = 10 for n := 0; n < b.N; n++ { ctx := context.Background() for i := 0; i < num; i++ { md, _ := FromOutgoingContext(ctx) NewOutgoingContext(ctx, Join(Pairs("k1", "v1", "k2", "v2"), md)) } } } // Newer/faster approach to adding metadata to context func BenchmarkAppendToOutgoingContext(b *testing.B) { const num = 10 for n := 0; n < b.N; n++ { ctx := context.Background() for i := 0; i < num; i++ { ctx = AppendToOutgoingContext(ctx, "k1", "v1", "k2", "v2") } } } func BenchmarkFromOutgoingContext(b *testing.B) { ctx := context.Background() ctx = NewOutgoingContext(ctx, MD{"k3": {"v3", "v4"}}) ctx = AppendToOutgoingContext(ctx, "k1", "v1", "k2", "v2") for n := 0; n < b.N; n++ { FromOutgoingContext(ctx) } } grpc-go-1.22.1/naming/000077500000000000000000000000001351635773100144175ustar00rootroot00000000000000grpc-go-1.22.1/naming/dns_resolver.go000066400000000000000000000206151351635773100174570ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package naming import ( "context" "errors" "fmt" "net" "strconv" "time" "google.golang.org/grpc/grpclog" ) const ( defaultPort = "443" defaultFreq = time.Minute * 30 ) var ( errMissingAddr = errors.New("missing address") errWatcherClose = errors.New("watcher has been closed") lookupHost = net.DefaultResolver.LookupHost lookupSRV = net.DefaultResolver.LookupSRV ) // NewDNSResolverWithFreq creates a DNS Resolver that can resolve DNS names, and // create watchers that poll the DNS server using the frequency set by freq. func NewDNSResolverWithFreq(freq time.Duration) (Resolver, error) { return &dnsResolver{freq: freq}, nil } // NewDNSResolver creates a DNS Resolver that can resolve DNS names, and create // watchers that poll the DNS server using the default frequency defined by defaultFreq. func NewDNSResolver() (Resolver, error) { return NewDNSResolverWithFreq(defaultFreq) } // dnsResolver handles name resolution for names following the DNS scheme type dnsResolver struct { // frequency of polling the DNS server that the watchers created by this resolver will use. freq time.Duration } // formatIP returns ok = false if addr is not a valid textual representation of an IP address. // If addr is an IPv4 address, return the addr and ok = true. // If addr is an IPv6 address, return the addr enclosed in square brackets and ok = true. func formatIP(addr string) (addrIP string, ok bool) { ip := net.ParseIP(addr) if ip == nil { return "", false } if ip.To4() != nil { return addr, true } return "[" + addr + "]", true } // parseTarget takes the user input target string, returns formatted host and port info. // If target doesn't specify a port, set the port to be the defaultPort. // If target is in IPv6 format and host-name is enclosed in square brackets, brackets // are stripped when setting the host. // examples: // target: "www.google.com" returns host: "www.google.com", port: "443" // target: "ipv4-host:80" returns host: "ipv4-host", port: "80" // target: "[ipv6-host]" returns host: "ipv6-host", port: "443" // target: ":80" returns host: "localhost", port: "80" // target: ":" returns host: "localhost", port: "443" func parseTarget(target string) (host, port string, err error) { if target == "" { return "", "", errMissingAddr } if ip := net.ParseIP(target); ip != nil { // target is an IPv4 or IPv6(without brackets) address return target, defaultPort, nil } if host, port, err := net.SplitHostPort(target); err == nil { // target has port, i.e ipv4-host:port, [ipv6-host]:port, host-name:port if host == "" { // Keep consistent with net.Dial(): If the host is empty, as in ":80", the local system is assumed. host = "localhost" } if port == "" { // If the port field is empty(target ends with colon), e.g. "[::1]:", defaultPort is used. port = defaultPort } return host, port, nil } if host, port, err := net.SplitHostPort(target + ":" + defaultPort); err == nil { // target doesn't have port return host, port, nil } return "", "", fmt.Errorf("invalid target address %v", target) } // Resolve creates a watcher that watches the name resolution of the target. 
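//
// An illustrative sketch (the target below is only an example):
//
//	r, _ := NewDNSResolver()
//	w, _ := r.Resolve("foo.bar.com:443")
//	defer w.Close()
//	updates, _ := w.Next() // blocks until the first resolution result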
func (r *dnsResolver) Resolve(target string) (Watcher, error) { host, port, err := parseTarget(target) if err != nil { return nil, err } if net.ParseIP(host) != nil { ipWatcher := &ipWatcher{ updateChan: make(chan *Update, 1), } host, _ = formatIP(host) ipWatcher.updateChan <- &Update{Op: Add, Addr: host + ":" + port} return ipWatcher, nil } ctx, cancel := context.WithCancel(context.Background()) return &dnsWatcher{ r: r, host: host, port: port, ctx: ctx, cancel: cancel, t: time.NewTimer(0), }, nil } // dnsWatcher watches for the name resolution update for a specific target type dnsWatcher struct { r *dnsResolver host string port string // The latest resolved address set curAddrs map[string]*Update ctx context.Context cancel context.CancelFunc t *time.Timer } // ipWatcher watches for the name resolution update for an IP address. type ipWatcher struct { updateChan chan *Update } // Next returns the address resolution Update for the target. For IP address, // the resolution is itself, thus polling name server is unnecessary. Therefore, // Next() will return an Update the first time it is called, and will be blocked // for all following calls as no Update exists until watcher is closed. func (i *ipWatcher) Next() ([]*Update, error) { u, ok := <-i.updateChan if !ok { return nil, errWatcherClose } return []*Update{u}, nil } // Close closes the ipWatcher. func (i *ipWatcher) Close() { close(i.updateChan) } // AddressType indicates the address type returned by name resolution. type AddressType uint8 const ( // Backend indicates the server is a backend server. Backend AddressType = iota // GRPCLB indicates the server is a grpclb load balancer. GRPCLB ) // AddrMetadataGRPCLB contains the information the name resolver for grpclb should provide. The // name resolver used by the grpclb balancer is required to provide this type of metadata in // its address updates. type AddrMetadataGRPCLB struct { // AddrType is the type of server (grpc load balancer or backend). AddrType AddressType // ServerName is the name of the grpc load balancer. Used for authentication. 
ServerName string } // compileUpdate compares the old resolved addresses and newly resolved addresses, // and generates an update list func (w *dnsWatcher) compileUpdate(newAddrs map[string]*Update) []*Update { var res []*Update for a, u := range w.curAddrs { if _, ok := newAddrs[a]; !ok { u.Op = Delete res = append(res, u) } } for a, u := range newAddrs { if _, ok := w.curAddrs[a]; !ok { res = append(res, u) } } return res } func (w *dnsWatcher) lookupSRV() map[string]*Update { newAddrs := make(map[string]*Update) _, srvs, err := lookupSRV(w.ctx, "grpclb", "tcp", w.host) if err != nil { grpclog.Infof("grpc: failed dns SRV record lookup due to %v.\n", err) return nil } for _, s := range srvs { lbAddrs, err := lookupHost(w.ctx, s.Target) if err != nil { grpclog.Warningf("grpc: failed load balancer address dns lookup due to %v.\n", err) continue } for _, a := range lbAddrs { a, ok := formatIP(a) if !ok { grpclog.Errorf("grpc: failed IP parsing due to %v.\n", err) continue } addr := a + ":" + strconv.Itoa(int(s.Port)) newAddrs[addr] = &Update{Addr: addr, Metadata: AddrMetadataGRPCLB{AddrType: GRPCLB, ServerName: s.Target}} } } return newAddrs } func (w *dnsWatcher) lookupHost() map[string]*Update { newAddrs := make(map[string]*Update) addrs, err := lookupHost(w.ctx, w.host) if err != nil { grpclog.Warningf("grpc: failed dns A record lookup due to %v.\n", err) return nil } for _, a := range addrs { a, ok := formatIP(a) if !ok { grpclog.Errorf("grpc: failed IP parsing due to %v.\n", err) continue } addr := a + ":" + w.port newAddrs[addr] = &Update{Addr: addr} } return newAddrs } func (w *dnsWatcher) lookup() []*Update { newAddrs := w.lookupSRV() if newAddrs == nil { // If failed to get any balancer address (either no corresponding SRV for the // target, or caused by failure during resolution/parsing of the balancer target), // return any A record info available. newAddrs = w.lookupHost() } result := w.compileUpdate(newAddrs) w.curAddrs = newAddrs return result } // Next returns the resolved address update(delta) for the target. If there's no // change, it will sleep for 30 mins and try to resolve again after that. func (w *dnsWatcher) Next() ([]*Update, error) { for { select { case <-w.ctx.Done(): return nil, errWatcherClose case <-w.t.C: } result := w.lookup() // Next lookup should happen after an interval defined by w.r.freq. w.t.Reset(w.r.freq) if len(result) > 0 { return result, nil } } } func (w *dnsWatcher) Close() { w.cancel() } grpc-go-1.22.1/naming/dns_resolver_test.go000066400000000000000000000217211351635773100205150ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package naming import ( "context" "fmt" "net" "reflect" "sync" "testing" "time" ) func newUpdateWithMD(op Operation, addr, lb string) *Update { return &Update{ Op: op, Addr: addr, Metadata: AddrMetadataGRPCLB{AddrType: GRPCLB, ServerName: lb}, } } func toMap(u []*Update) map[string]*Update { m := make(map[string]*Update) for _, v := range u { m[v.Addr] = v } return m } func TestCompileUpdate(t *testing.T) { tests := []struct { oldAddrs []string newAddrs []string want []*Update }{ { []string{}, []string{"1.0.0.1"}, []*Update{{Op: Add, Addr: "1.0.0.1"}}, }, { []string{"1.0.0.1"}, []string{"1.0.0.1"}, []*Update{}, }, { []string{"1.0.0.0"}, []string{"1.0.0.1"}, []*Update{{Op: Delete, Addr: "1.0.0.0"}, {Op: Add, Addr: "1.0.0.1"}}, }, { []string{"1.0.0.1"}, []string{"1.0.0.0"}, []*Update{{Op: Add, Addr: "1.0.0.0"}, {Op: Delete, Addr: "1.0.0.1"}}, }, { []string{"1.0.0.1"}, []string{"1.0.0.1", "1.0.0.2", "1.0.0.3"}, []*Update{{Op: Add, Addr: "1.0.0.2"}, {Op: Add, Addr: "1.0.0.3"}}, }, { []string{"1.0.0.1", "1.0.0.2", "1.0.0.3"}, []string{"1.0.0.0"}, []*Update{{Op: Add, Addr: "1.0.0.0"}, {Op: Delete, Addr: "1.0.0.1"}, {Op: Delete, Addr: "1.0.0.2"}, {Op: Delete, Addr: "1.0.0.3"}}, }, { []string{"1.0.0.1", "1.0.0.3", "1.0.0.5"}, []string{"1.0.0.2", "1.0.0.3", "1.0.0.6"}, []*Update{{Op: Delete, Addr: "1.0.0.1"}, {Op: Add, Addr: "1.0.0.2"}, {Op: Delete, Addr: "1.0.0.5"}, {Op: Add, Addr: "1.0.0.6"}}, }, { []string{"1.0.0.1", "1.0.0.1", "1.0.0.2"}, []string{"1.0.0.1"}, []*Update{{Op: Delete, Addr: "1.0.0.2"}}, }, } var w dnsWatcher for _, c := range tests { w.curAddrs = make(map[string]*Update) newUpdates := make(map[string]*Update) for _, a := range c.oldAddrs { w.curAddrs[a] = &Update{Addr: a} } for _, a := range c.newAddrs { newUpdates[a] = &Update{Addr: a} } r := w.compileUpdate(newUpdates) if !reflect.DeepEqual(toMap(c.want), toMap(r)) { t.Errorf("w(%+v).compileUpdate(%+v) = %+v, want %+v", c.oldAddrs, c.newAddrs, updatesToSlice(r), updatesToSlice(c.want)) } } } func TestResolveFunc(t *testing.T) { tests := []struct { addr string want error }{ // TODO(yuxuanli): More false cases? 
{"www.google.com", nil}, {"foo.bar:12345", nil}, {"127.0.0.1", nil}, {"127.0.0.1:12345", nil}, {"[::1]:80", nil}, {"[2001:db8:a0b:12f0::1]:21", nil}, {":80", nil}, {"127.0.0...1:12345", nil}, {"[fe80::1%lo0]:80", nil}, {"golang.org:http", nil}, {"[2001:db8::1]:http", nil}, {":", nil}, {"", errMissingAddr}, {"[2001:db8:a0b:12f0::1", fmt.Errorf("invalid target address %v", "[2001:db8:a0b:12f0::1")}, } r, err := NewDNSResolver() if err != nil { t.Errorf("%v", err) } for _, v := range tests { _, err := r.Resolve(v.addr) if !reflect.DeepEqual(err, v.want) { t.Errorf("Resolve(%q) = %v, want %v", v.addr, err, v.want) } } } var hostLookupTbl = map[string][]string{ "foo.bar.com": {"1.2.3.4", "5.6.7.8"}, "ipv4.single.fake": {"1.2.3.4"}, "ipv4.multi.fake": {"1.2.3.4", "5.6.7.8", "9.10.11.12"}, "ipv6.single.fake": {"2607:f8b0:400a:801::1001"}, "ipv6.multi.fake": {"2607:f8b0:400a:801::1001", "2607:f8b0:400a:801::1002", "2607:f8b0:400a:801::1003"}, } func hostLookup(host string) ([]string, error) { if addrs, ok := hostLookupTbl[host]; ok { return addrs, nil } return nil, fmt.Errorf("failed to lookup host:%s resolution in hostLookupTbl", host) } var srvLookupTbl = map[string][]*net.SRV{ "_grpclb._tcp.srv.ipv4.single.fake": {&net.SRV{Target: "ipv4.single.fake", Port: 1234}}, "_grpclb._tcp.srv.ipv4.multi.fake": {&net.SRV{Target: "ipv4.multi.fake", Port: 1234}}, "_grpclb._tcp.srv.ipv6.single.fake": {&net.SRV{Target: "ipv6.single.fake", Port: 1234}}, "_grpclb._tcp.srv.ipv6.multi.fake": {&net.SRV{Target: "ipv6.multi.fake", Port: 1234}}, } func srvLookup(service, proto, name string) (string, []*net.SRV, error) { cname := "_" + service + "._" + proto + "." + name if srvs, ok := srvLookupTbl[cname]; ok { return cname, srvs, nil } return "", nil, fmt.Errorf("failed to lookup srv record for %s in srvLookupTbl", cname) } func updatesToSlice(updates []*Update) []Update { res := make([]Update, len(updates)) for i, u := range updates { res[i] = *u } return res } func testResolver(t *testing.T, freq time.Duration, slp time.Duration) { tests := []struct { target string want []*Update }{ { "foo.bar.com", []*Update{{Op: Add, Addr: "1.2.3.4" + colonDefaultPort}, {Op: Add, Addr: "5.6.7.8" + colonDefaultPort}}, }, { "foo.bar.com:1234", []*Update{{Op: Add, Addr: "1.2.3.4:1234"}, {Op: Add, Addr: "5.6.7.8:1234"}}, }, { "srv.ipv4.single.fake", []*Update{newUpdateWithMD(Add, "1.2.3.4:1234", "ipv4.single.fake")}, }, { "srv.ipv4.multi.fake", []*Update{ newUpdateWithMD(Add, "1.2.3.4:1234", "ipv4.multi.fake"), newUpdateWithMD(Add, "5.6.7.8:1234", "ipv4.multi.fake"), newUpdateWithMD(Add, "9.10.11.12:1234", "ipv4.multi.fake")}, }, { "srv.ipv6.single.fake", []*Update{newUpdateWithMD(Add, "[2607:f8b0:400a:801::1001]:1234", "ipv6.single.fake")}, }, { "srv.ipv6.multi.fake", []*Update{ newUpdateWithMD(Add, "[2607:f8b0:400a:801::1001]:1234", "ipv6.multi.fake"), newUpdateWithMD(Add, "[2607:f8b0:400a:801::1002]:1234", "ipv6.multi.fake"), newUpdateWithMD(Add, "[2607:f8b0:400a:801::1003]:1234", "ipv6.multi.fake"), }, }, } for _, a := range tests { r, err := NewDNSResolverWithFreq(freq) if err != nil { t.Fatalf("%v\n", err) } w, err := r.Resolve(a.target) if err != nil { t.Fatalf("%v\n", err) } updates, err := w.Next() if err != nil { t.Fatalf("%v\n", err) } if !reflect.DeepEqual(toMap(a.want), toMap(updates)) { t.Errorf("Resolve(%q) = %+v, want %+v\n", a.target, updatesToSlice(updates), updatesToSlice(a.want)) } var wg sync.WaitGroup wg.Add(1) go func() { defer wg.Done() for { _, err := w.Next() if err != nil { return } t.Error("Execution 
shouldn't reach here, since w.Next() should be blocked until close happen.") } }() // Sleep for sometime to let watcher do more than one lookup time.Sleep(slp) w.Close() wg.Wait() } } func replaceNetFunc() func() { oldLookupHost := lookupHost oldLookupSRV := lookupSRV lookupHost = func(ctx context.Context, host string) ([]string, error) { return hostLookup(host) } lookupSRV = func(ctx context.Context, service, proto, name string) (string, []*net.SRV, error) { return srvLookup(service, proto, name) } return func() { lookupHost = oldLookupHost lookupSRV = oldLookupSRV } } func TestResolve(t *testing.T) { defer replaceNetFunc()() testResolver(t, time.Millisecond*5, time.Millisecond*10) } const colonDefaultPort = ":" + defaultPort func TestIPWatcher(t *testing.T) { tests := []struct { target string want []*Update }{ {"127.0.0.1", []*Update{{Op: Add, Addr: "127.0.0.1" + colonDefaultPort}}}, {"127.0.0.1:12345", []*Update{{Op: Add, Addr: "127.0.0.1:12345"}}}, {"::1", []*Update{{Op: Add, Addr: "[::1]" + colonDefaultPort}}}, {"[::1]:12345", []*Update{{Op: Add, Addr: "[::1]:12345"}}}, {"[::1]:", []*Update{{Op: Add, Addr: "[::1]:443"}}}, {"2001:db8:85a3::8a2e:370:7334", []*Update{{Op: Add, Addr: "[2001:db8:85a3::8a2e:370:7334]" + colonDefaultPort}}}, {"[2001:db8:85a3::8a2e:370:7334]", []*Update{{Op: Add, Addr: "[2001:db8:85a3::8a2e:370:7334]" + colonDefaultPort}}}, {"[2001:db8:85a3::8a2e:370:7334]:12345", []*Update{{Op: Add, Addr: "[2001:db8:85a3::8a2e:370:7334]:12345"}}}, {"[2001:db8::1]:http", []*Update{{Op: Add, Addr: "[2001:db8::1]:http"}}}, // TODO(yuxuanli): zone support? } for _, v := range tests { r, err := NewDNSResolverWithFreq(time.Millisecond * 5) if err != nil { t.Fatalf("%v\n", err) } w, err := r.Resolve(v.target) if err != nil { t.Fatalf("%v\n", err) } var updates []*Update var wg sync.WaitGroup wg.Add(1) count := 0 go func() { defer wg.Done() for { u, err := w.Next() if err != nil { return } updates = u count++ } }() // Sleep for sometime to let watcher do more than one lookup time.Sleep(time.Millisecond * 10) w.Close() wg.Wait() if !reflect.DeepEqual(v.want, updates) { t.Errorf("Resolve(%q) = %v, want %+v\n", v.target, updatesToSlice(updates), updatesToSlice(v.want)) } if count != 1 { t.Errorf("IPWatcher Next() should return only once, not %d times\n", count) } } } grpc-go-1.22.1/naming/naming.go000066400000000000000000000042711351635773100162230ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package naming defines the naming API and related data structures for gRPC. // // This package is deprecated: please use package resolver instead. package naming // Operation defines the corresponding operations for a name resolution change. // // Deprecated: please use package resolver. type Operation uint8 const ( // Add indicates a new address is added. Add Operation = iota // Delete indicates an existing address is deleted. Delete ) // Update defines a name resolution update. 
Notice that it is not valid having both // empty string Addr and nil Metadata in an Update. // // Deprecated: please use package resolver. type Update struct { // Op indicates the operation of the update. Op Operation // Addr is the updated address. It is empty string if there is no address update. Addr string // Metadata is the updated metadata. It is nil if there is no metadata update. // Metadata is not required for a custom naming implementation. Metadata interface{} } // Resolver creates a Watcher for a target to track its resolution changes. // // Deprecated: please use package resolver. type Resolver interface { // Resolve creates a Watcher for target. Resolve(target string) (Watcher, error) } // Watcher watches for the updates on the specified target. // // Deprecated: please use package resolver. type Watcher interface { // Next blocks until an update or error happens. It may return one or more // updates. The first call should get the full set of the results. It should // return an error if and only if Watcher cannot recover. Next() ([]*Update, error) // Close closes the Watcher. Close() } grpc-go-1.22.1/peer/000077500000000000000000000000001351635773100141015ustar00rootroot00000000000000grpc-go-1.22.1/peer/peer.go000066400000000000000000000027271351635773100153730ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package peer defines various peer information associated with RPCs and // corresponding utils. package peer import ( "context" "net" "google.golang.org/grpc/credentials" ) // Peer contains the information of the peer for an RPC, such as the address // and authentication information. type Peer struct { // Addr is the peer address. Addr net.Addr // AuthInfo is the authentication information of the transport. // It is nil if there is no transport security being used. AuthInfo credentials.AuthInfo } type peerKey struct{} // NewContext creates a new context with peer information attached. func NewContext(ctx context.Context, p *Peer) context.Context { return context.WithValue(ctx, peerKey{}, p) } // FromContext returns the peer information in ctx if it exists. func FromContext(ctx context.Context) (p *Peer, ok bool) { p, ok = ctx.Value(peerKey{}).(*Peer) return } grpc-go-1.22.1/picker_wrapper.go000066400000000000000000000123431351635773100165150ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package grpc import ( "context" "io" "sync" "google.golang.org/grpc/balancer" "google.golang.org/grpc/codes" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/internal/channelz" "google.golang.org/grpc/internal/transport" "google.golang.org/grpc/status" ) // pickerWrapper is a wrapper of balancer.Picker. It blocks on certain pick // actions and unblock when there's a picker update. type pickerWrapper struct { mu sync.Mutex done bool blockingCh chan struct{} picker balancer.Picker // The latest connection happened. connErrMu sync.Mutex connErr error } func newPickerWrapper() *pickerWrapper { bp := &pickerWrapper{blockingCh: make(chan struct{})} return bp } func (bp *pickerWrapper) updateConnectionError(err error) { bp.connErrMu.Lock() bp.connErr = err bp.connErrMu.Unlock() } func (bp *pickerWrapper) connectionError() error { bp.connErrMu.Lock() err := bp.connErr bp.connErrMu.Unlock() return err } // updatePicker is called by UpdateBalancerState. It unblocks all blocked pick. func (bp *pickerWrapper) updatePicker(p balancer.Picker) { bp.mu.Lock() if bp.done { bp.mu.Unlock() return } bp.picker = p // bp.blockingCh should never be nil. close(bp.blockingCh) bp.blockingCh = make(chan struct{}) bp.mu.Unlock() } func doneChannelzWrapper(acw *acBalancerWrapper, done func(balancer.DoneInfo)) func(balancer.DoneInfo) { acw.mu.Lock() ac := acw.ac acw.mu.Unlock() ac.incrCallsStarted() return func(b balancer.DoneInfo) { if b.Err != nil && b.Err != io.EOF { ac.incrCallsFailed() } else { ac.incrCallsSucceeded() } if done != nil { done(b) } } } // pick returns the transport that will be used for the RPC. // It may block in the following cases: // - there's no picker // - the current picker returns ErrNoSubConnAvailable // - the current picker returns other errors and failfast is false. // - the subConn returned by the current picker is not READY // When one of these situations happens, pick blocks until the picker gets updated. func (bp *pickerWrapper) pick(ctx context.Context, failfast bool, opts balancer.PickOptions) (transport.ClientTransport, func(balancer.DoneInfo), error) { var ch chan struct{} for { bp.mu.Lock() if bp.done { bp.mu.Unlock() return nil, nil, ErrClientConnClosing } if bp.picker == nil { ch = bp.blockingCh } if ch == bp.blockingCh { // This could happen when either: // - bp.picker is nil (the previous if condition), or // - has called pick on the current picker. bp.mu.Unlock() select { case <-ctx.Done(): if connectionErr := bp.connectionError(); connectionErr != nil { switch ctx.Err() { case context.DeadlineExceeded: return nil, nil, status.Errorf(codes.DeadlineExceeded, "latest connection error: %v", connectionErr) case context.Canceled: return nil, nil, status.Errorf(codes.Canceled, "latest connection error: %v", connectionErr) } } return nil, nil, ctx.Err() case <-ch: } continue } ch = bp.blockingCh p := bp.picker bp.mu.Unlock() subConn, done, err := p.Pick(ctx, opts) if err != nil { switch err { case balancer.ErrNoSubConnAvailable: continue case balancer.ErrTransientFailure: if !failfast { continue } return nil, nil, status.Errorf(codes.Unavailable, "%v, latest connection error: %v", err, bp.connectionError()) case context.DeadlineExceeded: return nil, nil, status.Error(codes.DeadlineExceeded, err.Error()) case context.Canceled: return nil, nil, status.Error(codes.Canceled, err.Error()) default: if _, ok := status.FromError(err); ok { return nil, nil, err } // err is some other error. 
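// Wrap it in an Unknown status so the caller always receives a gRPC status error rather than a raw picker error.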
return nil, nil, status.Error(codes.Unknown, err.Error()) } } acw, ok := subConn.(*acBalancerWrapper) if !ok { grpclog.Error("subconn returned from pick is not *acBalancerWrapper") continue } if t, ok := acw.getAddrConn().getReadyTransport(); ok { if channelz.IsOn() { return t, doneChannelzWrapper(acw, done), nil } return t, done, nil } if done != nil { // Calling done with nil error, no bytes sent and no bytes received. // DoneInfo with default value works. done(balancer.DoneInfo{}) } grpclog.Infof("blockingPicker: the picked transport is not ready, loop back to repick") // If ok == false, ac.state is not READY. // A valid picker always returns READY subConn. This means the state of ac // just changed, and picker will be updated shortly. // continue back to the beginning of the for loop to repick. } } func (bp *pickerWrapper) close() { bp.mu.Lock() defer bp.mu.Unlock() if bp.done { return } bp.done = true close(bp.blockingCh) } grpc-go-1.22.1/picker_wrapper_test.go000066400000000000000000000114011351635773100175460ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "context" "fmt" "sync/atomic" "testing" "time" "google.golang.org/grpc/balancer" "google.golang.org/grpc/connectivity" _ "google.golang.org/grpc/grpclog/glogger" "google.golang.org/grpc/internal/transport" ) const goroutineCount = 5 var ( testT = &testTransport{} testSC = &acBalancerWrapper{ac: &addrConn{ state: connectivity.Ready, transport: testT, }} testSCNotReady = &acBalancerWrapper{ac: &addrConn{ state: connectivity.TransientFailure, }} ) type testTransport struct { transport.ClientTransport } type testingPicker struct { err error sc balancer.SubConn maxCalled int64 } func (p *testingPicker) Pick(ctx context.Context, opts balancer.PickOptions) (balancer.SubConn, func(balancer.DoneInfo), error) { if atomic.AddInt64(&p.maxCalled, -1) < 0 { return nil, nil, fmt.Errorf("pick called to many times (> goroutineCount)") } if p.err != nil { return nil, nil, p.err } return p.sc, nil, nil } func (s) TestBlockingPickTimeout(t *testing.T) { bp := newPickerWrapper() ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond) defer cancel() if _, _, err := bp.pick(ctx, true, balancer.PickOptions{}); err != context.DeadlineExceeded { t.Errorf("bp.pick returned error %v, want DeadlineExceeded", err) } } func (s) TestBlockingPick(t *testing.T) { bp := newPickerWrapper() // All goroutines should block because picker is nil in bp. 
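// Each goroutine below must stay blocked inside pick; finishedCount only advances after updatePicker installs a usable picker at the end of the test.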
var finishedCount uint64 for i := goroutineCount; i > 0; i-- { go func() { if tr, _, err := bp.pick(context.Background(), true, balancer.PickOptions{}); err != nil || tr != testT { t.Errorf("bp.pick returned non-nil error: %v", err) } atomic.AddUint64(&finishedCount, 1) }() } time.Sleep(50 * time.Millisecond) if c := atomic.LoadUint64(&finishedCount); c != 0 { t.Errorf("finished goroutines count: %v, want 0", c) } bp.updatePicker(&testingPicker{sc: testSC, maxCalled: goroutineCount}) } func (s) TestBlockingPickNoSubAvailable(t *testing.T) { bp := newPickerWrapper() var finishedCount uint64 bp.updatePicker(&testingPicker{err: balancer.ErrNoSubConnAvailable, maxCalled: goroutineCount}) // All goroutines should block because picker returns no sc available. for i := goroutineCount; i > 0; i-- { go func() { if tr, _, err := bp.pick(context.Background(), true, balancer.PickOptions{}); err != nil || tr != testT { t.Errorf("bp.pick returned non-nil error: %v", err) } atomic.AddUint64(&finishedCount, 1) }() } time.Sleep(50 * time.Millisecond) if c := atomic.LoadUint64(&finishedCount); c != 0 { t.Errorf("finished goroutines count: %v, want 0", c) } bp.updatePicker(&testingPicker{sc: testSC, maxCalled: goroutineCount}) } func (s) TestBlockingPickTransientWaitforready(t *testing.T) { bp := newPickerWrapper() bp.updatePicker(&testingPicker{err: balancer.ErrTransientFailure, maxCalled: goroutineCount}) var finishedCount uint64 // All goroutines should block because picker returns transientFailure and // picks are not failfast. for i := goroutineCount; i > 0; i-- { go func() { if tr, _, err := bp.pick(context.Background(), false, balancer.PickOptions{}); err != nil || tr != testT { t.Errorf("bp.pick returned non-nil error: %v", err) } atomic.AddUint64(&finishedCount, 1) }() } time.Sleep(time.Millisecond) if c := atomic.LoadUint64(&finishedCount); c != 0 { t.Errorf("finished goroutines count: %v, want 0", c) } bp.updatePicker(&testingPicker{sc: testSC, maxCalled: goroutineCount}) } func (s) TestBlockingPickSCNotReady(t *testing.T) { bp := newPickerWrapper() bp.updatePicker(&testingPicker{sc: testSCNotReady, maxCalled: goroutineCount}) var finishedCount uint64 // All goroutines should block because sc is not ready. for i := goroutineCount; i > 0; i-- { go func() { if tr, _, err := bp.pick(context.Background(), true, balancer.PickOptions{}); err != nil || tr != testT { t.Errorf("bp.pick returned non-nil error: %v", err) } atomic.AddUint64(&finishedCount, 1) }() } time.Sleep(time.Millisecond) if c := atomic.LoadUint64(&finishedCount); c != 0 { t.Errorf("finished goroutines count: %v, want 0", c) } bp.updatePicker(&testingPicker{sc: testSC, maxCalled: goroutineCount}) } grpc-go-1.22.1/pickfirst.go000066400000000000000000000055461351635773100155050ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package grpc import ( "context" "google.golang.org/grpc/balancer" "google.golang.org/grpc/connectivity" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/resolver" ) // PickFirstBalancerName is the name of the pick_first balancer. const PickFirstBalancerName = "pick_first" func newPickfirstBuilder() balancer.Builder { return &pickfirstBuilder{} } type pickfirstBuilder struct{} func (*pickfirstBuilder) Build(cc balancer.ClientConn, opt balancer.BuildOptions) balancer.Balancer { return &pickfirstBalancer{cc: cc} } func (*pickfirstBuilder) Name() string { return PickFirstBalancerName } type pickfirstBalancer struct { cc balancer.ClientConn sc balancer.SubConn } func (b *pickfirstBalancer) HandleResolvedAddrs(addrs []resolver.Address, err error) { if err != nil { grpclog.Infof("pickfirstBalancer: HandleResolvedAddrs called with error %v", err) return } if b.sc == nil { b.sc, err = b.cc.NewSubConn(addrs, balancer.NewSubConnOptions{}) if err != nil { //TODO(yuxuanli): why not change the cc state to Idle? grpclog.Errorf("pickfirstBalancer: failed to NewSubConn: %v", err) return } b.cc.UpdateBalancerState(connectivity.Idle, &picker{sc: b.sc}) b.sc.Connect() } else { b.sc.UpdateAddresses(addrs) b.sc.Connect() } } func (b *pickfirstBalancer) HandleSubConnStateChange(sc balancer.SubConn, s connectivity.State) { grpclog.Infof("pickfirstBalancer: HandleSubConnStateChange: %p, %v", sc, s) if b.sc != sc { grpclog.Infof("pickfirstBalancer: ignored state change because sc is not recognized") return } if s == connectivity.Shutdown { b.sc = nil return } switch s { case connectivity.Ready, connectivity.Idle: b.cc.UpdateBalancerState(s, &picker{sc: sc}) case connectivity.Connecting: b.cc.UpdateBalancerState(s, &picker{err: balancer.ErrNoSubConnAvailable}) case connectivity.TransientFailure: b.cc.UpdateBalancerState(s, &picker{err: balancer.ErrTransientFailure}) } } func (b *pickfirstBalancer) Close() { } type picker struct { err error sc balancer.SubConn } func (p *picker) Pick(ctx context.Context, opts balancer.PickOptions) (balancer.SubConn, func(balancer.DoneInfo), error) { if p.err != nil { return nil, nil, p.err } return p.sc, nil, nil } func init() { balancer.Register(newPickfirstBuilder()) } grpc-go-1.22.1/pickfirst_test.go000066400000000000000000000273211351635773100165370ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package grpc import ( "context" "math" "sync" "testing" "time" "google.golang.org/grpc/codes" "google.golang.org/grpc/resolver" "google.golang.org/grpc/resolver/manual" "google.golang.org/grpc/status" ) func errorDesc(err error) string { if s, ok := status.FromError(err); ok { return s.Message() } return err.Error() } func (s) TestOneBackendPickfirst(t *testing.T) { r, rcleanup := manual.GenerateAndRegisterManualResolver() defer rcleanup() numServers := 1 servers, _, scleanup := startServers(t, numServers, math.MaxInt32) defer scleanup() cc, err := Dial(r.Scheme()+":///test.server", WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("failed to dial: %v", err) } defer cc.Close() // The first RPC should fail because there's no address. ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond) defer cancel() req := "port" var reply string if err := cc.Invoke(ctx, "/foo/bar", &req, &reply); err == nil || status.Code(err) != codes.DeadlineExceeded { t.Fatalf("EmptyCall() = _, %v, want _, DeadlineExceeded", err) } r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: servers[0].addr}}}) // The second RPC should succeed. for i := 0; i < 1000; i++ { if err = cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err != nil && errorDesc(err) == servers[0].port { return } time.Sleep(time.Millisecond) } t.Fatalf("EmptyCall() = _, %v, want _, %v", err, servers[0].port) } func (s) TestBackendsPickfirst(t *testing.T) { r, rcleanup := manual.GenerateAndRegisterManualResolver() defer rcleanup() numServers := 2 servers, _, scleanup := startServers(t, numServers, math.MaxInt32) defer scleanup() cc, err := Dial(r.Scheme()+":///test.server", WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("failed to dial: %v", err) } defer cc.Close() // The first RPC should fail because there's no address. ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond) defer cancel() req := "port" var reply string if err := cc.Invoke(ctx, "/foo/bar", &req, &reply); err == nil || status.Code(err) != codes.DeadlineExceeded { t.Fatalf("EmptyCall() = _, %v, want _, DeadlineExceeded", err) } r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: servers[0].addr}, {Addr: servers[1].addr}}}) // The second RPC should succeed with the first server. for i := 0; i < 1000; i++ { if err = cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err != nil && errorDesc(err) == servers[0].port { return } time.Sleep(time.Millisecond) } t.Fatalf("EmptyCall() = _, %v, want _, %v", err, servers[0].port) } func (s) TestNewAddressWhileBlockingPickfirst(t *testing.T) { r, rcleanup := manual.GenerateAndRegisterManualResolver() defer rcleanup() numServers := 1 servers, _, scleanup := startServers(t, numServers, math.MaxInt32) defer scleanup() cc, err := Dial(r.Scheme()+":///test.server", WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("failed to dial: %v", err) } defer cc.Close() // The first RPC should fail because there's no address. ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond) defer cancel() req := "port" var reply string if err := cc.Invoke(ctx, "/foo/bar", &req, &reply); err == nil || status.Code(err) != codes.DeadlineExceeded { t.Fatalf("EmptyCall() = _, %v, want _, DeadlineExceeded", err) } var wg sync.WaitGroup for i := 0; i < 3; i++ { wg.Add(1) go func() { defer wg.Done() // This RPC blocks until NewAddress is called. 
cc.Invoke(context.Background(), "/foo/bar", &req, &reply) }() } time.Sleep(50 * time.Millisecond) r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: servers[0].addr}}}) wg.Wait() } func (s) TestCloseWithPendingRPCPickfirst(t *testing.T) { r, rcleanup := manual.GenerateAndRegisterManualResolver() defer rcleanup() numServers := 1 _, _, scleanup := startServers(t, numServers, math.MaxInt32) defer scleanup() cc, err := Dial(r.Scheme()+":///test.server", WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("failed to dial: %v", err) } defer cc.Close() // The first RPC should fail because there's no address. ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond) defer cancel() req := "port" var reply string if err := cc.Invoke(ctx, "/foo/bar", &req, &reply); err == nil || status.Code(err) != codes.DeadlineExceeded { t.Fatalf("EmptyCall() = _, %v, want _, DeadlineExceeded", err) } var wg sync.WaitGroup for i := 0; i < 3; i++ { wg.Add(1) go func() { defer wg.Done() // This RPC blocks until NewAddress is called. cc.Invoke(context.Background(), "/foo/bar", &req, &reply) }() } time.Sleep(50 * time.Millisecond) cc.Close() wg.Wait() } func (s) TestOneServerDownPickfirst(t *testing.T) { r, rcleanup := manual.GenerateAndRegisterManualResolver() defer rcleanup() numServers := 2 servers, _, scleanup := startServers(t, numServers, math.MaxInt32) defer scleanup() cc, err := Dial(r.Scheme()+":///test.server", WithInsecure(), WithCodec(testCodec{}), WithWaitForHandshake()) if err != nil { t.Fatalf("failed to dial: %v", err) } defer cc.Close() // The first RPC should fail because there's no address. ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond) defer cancel() req := "port" var reply string if err := cc.Invoke(ctx, "/foo/bar", &req, &reply); err == nil || status.Code(err) != codes.DeadlineExceeded { t.Fatalf("EmptyCall() = _, %v, want _, DeadlineExceeded", err) } r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: servers[0].addr}, {Addr: servers[1].addr}}}) // The second RPC should succeed with the first server. for i := 0; i < 1000; i++ { if err = cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err != nil && errorDesc(err) == servers[0].port { break } time.Sleep(time.Millisecond) } servers[0].stop() for i := 0; i < 1000; i++ { if err = cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err != nil && errorDesc(err) == servers[1].port { return } time.Sleep(time.Millisecond) } t.Fatalf("EmptyCall() = _, %v, want _, %v", err, servers[0].port) } func (s) TestAllServersDownPickfirst(t *testing.T) { r, rcleanup := manual.GenerateAndRegisterManualResolver() defer rcleanup() numServers := 2 servers, _, scleanup := startServers(t, numServers, math.MaxInt32) defer scleanup() cc, err := Dial(r.Scheme()+":///test.server", WithInsecure(), WithCodec(testCodec{}), WithWaitForHandshake()) if err != nil { t.Fatalf("failed to dial: %v", err) } defer cc.Close() // The first RPC should fail because there's no address. ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond) defer cancel() req := "port" var reply string if err := cc.Invoke(ctx, "/foo/bar", &req, &reply); err == nil || status.Code(err) != codes.DeadlineExceeded { t.Fatalf("EmptyCall() = _, %v, want _, DeadlineExceeded", err) } r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: servers[0].addr}, {Addr: servers[1].addr}}}) // The second RPC should succeed with the first server. 
for i := 0; i < 1000; i++ { if err = cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err != nil && errorDesc(err) == servers[0].port { break } time.Sleep(time.Millisecond) } for i := 0; i < numServers; i++ { servers[i].stop() } for i := 0; i < 1000; i++ { if err = cc.Invoke(context.Background(), "/foo/bar", &req, &reply); status.Code(err) == codes.Unavailable { return } time.Sleep(time.Millisecond) } t.Fatalf("EmptyCall() = _, %v, want _, error with code unavailable", err) } func (s) TestAddressesRemovedPickfirst(t *testing.T) { r, rcleanup := manual.GenerateAndRegisterManualResolver() defer rcleanup() numServers := 3 servers, _, scleanup := startServers(t, numServers, math.MaxInt32) defer scleanup() cc, err := Dial(r.Scheme()+":///test.server", WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("failed to dial: %v", err) } defer cc.Close() // The first RPC should fail because there's no address. ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond) defer cancel() req := "port" var reply string if err := cc.Invoke(ctx, "/foo/bar", &req, &reply); err == nil || status.Code(err) != codes.DeadlineExceeded { t.Fatalf("EmptyCall() = _, %v, want _, DeadlineExceeded", err) } r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: servers[0].addr}, {Addr: servers[1].addr}, {Addr: servers[2].addr}}}) for i := 0; i < 1000; i++ { if err = cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err != nil && errorDesc(err) == servers[0].port { break } time.Sleep(time.Millisecond) } for i := 0; i < 20; i++ { if err := cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err == nil || errorDesc(err) != servers[0].port { t.Fatalf("Index %d: Invoke(_, _, _, _, _) = %v, want %s", 0, err, servers[0].port) } time.Sleep(10 * time.Millisecond) } // Remove server[0]. r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: servers[1].addr}, {Addr: servers[2].addr}}}) for i := 0; i < 1000; i++ { if err = cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err != nil && errorDesc(err) == servers[1].port { break } time.Sleep(time.Millisecond) } for i := 0; i < 20; i++ { if err := cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err == nil || errorDesc(err) != servers[1].port { t.Fatalf("Index %d: Invoke(_, _, _, _, _) = %v, want %s", 1, err, servers[1].port) } time.Sleep(10 * time.Millisecond) } // Append server[0], nothing should change. r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: servers[1].addr}, {Addr: servers[2].addr}, {Addr: servers[0].addr}}}) for i := 0; i < 20; i++ { if err := cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err == nil || errorDesc(err) != servers[1].port { t.Fatalf("Index %d: Invoke(_, _, _, _, _) = %v, want %s", 1, err, servers[1].port) } time.Sleep(10 * time.Millisecond) } // Remove server[1]. r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: servers[2].addr}, {Addr: servers[0].addr}}}) for i := 0; i < 1000; i++ { if err = cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err != nil && errorDesc(err) == servers[2].port { break } time.Sleep(time.Millisecond) } for i := 0; i < 20; i++ { if err := cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err == nil || errorDesc(err) != servers[2].port { t.Fatalf("Index %d: Invoke(_, _, _, _, _) = %v, want %s", 2, err, servers[2].port) } time.Sleep(10 * time.Millisecond) } // Remove server[2]. 
r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: servers[0].addr}}}) for i := 0; i < 1000; i++ { if err = cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err != nil && errorDesc(err) == servers[0].port { break } time.Sleep(time.Millisecond) } for i := 0; i < 20; i++ { if err := cc.Invoke(context.Background(), "/foo/bar", &req, &reply); err == nil || errorDesc(err) != servers[0].port { t.Fatalf("Index %d: Invoke(_, _, _, _, _) = %v, want %s", 0, err, servers[0].port) } time.Sleep(10 * time.Millisecond) } } grpc-go-1.22.1/preloader.go000066400000000000000000000035131351635773100154540ustar00rootroot00000000000000/* * * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "google.golang.org/grpc/codes" "google.golang.org/grpc/status" ) // PreparedMsg is responsible for creating a Marshalled and Compressed object. // // This API is EXPERIMENTAL. type PreparedMsg struct { // Struct for preparing msg before sending them encodedData []byte hdr []byte payload []byte } // Encode marshalls and compresses the message using the codec and compressor for the stream. func (p *PreparedMsg) Encode(s Stream, msg interface{}) error { ctx := s.Context() rpcInfo, ok := rpcInfoFromContext(ctx) if !ok { return status.Errorf(codes.Internal, "grpc: unable to get rpcInfo") } // check if the context has the relevant information to prepareMsg if rpcInfo.preloaderInfo == nil { return status.Errorf(codes.Internal, "grpc: rpcInfo.preloaderInfo is nil") } if rpcInfo.preloaderInfo.codec == nil { return status.Errorf(codes.Internal, "grpc: rpcInfo.preloaderInfo.codec is nil") } // prepare the msg data, err := encode(rpcInfo.preloaderInfo.codec, msg) if err != nil { return err } p.encodedData = data compData, err := compress(data, rpcInfo.preloaderInfo.cp, rpcInfo.preloaderInfo.comp) if err != nil { return err } p.hdr, p.payload = msgHeader(data, compData) return nil } grpc-go-1.22.1/proxy.go000066400000000000000000000100521351635773100146540ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "bufio" "context" "encoding/base64" "errors" "fmt" "io" "net" "net/http" "net/http/httputil" "net/url" ) const proxyAuthHeaderKey = "Proxy-Authorization" var ( // errDisabled indicates that proxy is disabled for the address. errDisabled = errors.New("proxy is disabled for the address") // The following variable will be overwritten in the tests. 
httpProxyFromEnvironment = http.ProxyFromEnvironment ) func mapAddress(ctx context.Context, address string) (*url.URL, error) { req := &http.Request{ URL: &url.URL{ Scheme: "https", Host: address, }, } url, err := httpProxyFromEnvironment(req) if err != nil { return nil, err } if url == nil { return nil, errDisabled } return url, nil } // To read a response from a net.Conn, http.ReadResponse() takes a bufio.Reader. // It's possible that this reader reads more than what's need for the response and stores // those bytes in the buffer. // bufConn wraps the original net.Conn and the bufio.Reader to make sure we don't lose the // bytes in the buffer. type bufConn struct { net.Conn r io.Reader } func (c *bufConn) Read(b []byte) (int, error) { return c.r.Read(b) } func basicAuth(username, password string) string { auth := username + ":" + password return base64.StdEncoding.EncodeToString([]byte(auth)) } func doHTTPConnectHandshake(ctx context.Context, conn net.Conn, backendAddr string, proxyURL *url.URL) (_ net.Conn, err error) { defer func() { if err != nil { conn.Close() } }() req := &http.Request{ Method: http.MethodConnect, URL: &url.URL{Host: backendAddr}, Header: map[string][]string{"User-Agent": {grpcUA}}, } if t := proxyURL.User; t != nil { u := t.Username() p, _ := t.Password() req.Header.Add(proxyAuthHeaderKey, "Basic "+basicAuth(u, p)) } if err := sendHTTPRequest(ctx, req, conn); err != nil { return nil, fmt.Errorf("failed to write the HTTP request: %v", err) } r := bufio.NewReader(conn) resp, err := http.ReadResponse(r, req) if err != nil { return nil, fmt.Errorf("reading server HTTP response: %v", err) } defer resp.Body.Close() if resp.StatusCode != http.StatusOK { dump, err := httputil.DumpResponse(resp, true) if err != nil { return nil, fmt.Errorf("failed to do connect handshake, status code: %s", resp.Status) } return nil, fmt.Errorf("failed to do connect handshake, response: %q", dump) } return &bufConn{Conn: conn, r: r}, nil } // newProxyDialer returns a dialer that connects to proxy first if necessary. // The returned dialer checks if a proxy is necessary, dial to the proxy with the // provided dialer, does HTTP CONNECT handshake and returns the connection. func newProxyDialer(dialer func(context.Context, string) (net.Conn, error)) func(context.Context, string) (net.Conn, error) { return func(ctx context.Context, addr string) (conn net.Conn, err error) { var newAddr string proxyURL, err := mapAddress(ctx, addr) if err != nil { if err != errDisabled { return nil, err } newAddr = addr } else { newAddr = proxyURL.Host } conn, err = dialer(ctx, newAddr) if err != nil { return } if proxyURL != nil { // proxy is disabled if proxyURL is nil. conn, err = doHTTPConnectHandshake(ctx, conn, addr, proxyURL) } return } } func sendHTTPRequest(ctx context.Context, req *http.Request, conn net.Conn) error { req = req.WithContext(ctx) if err := req.Write(conn); err != nil { return fmt.Errorf("failed to write the HTTP request: %v", err) } return nil } grpc-go-1.22.1/proxy_test.go000066400000000000000000000130271351635773100157200ustar00rootroot00000000000000// +build !race /* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "bufio" "context" "encoding/base64" "fmt" "io" "net" "net/http" "net/url" "testing" "time" ) const ( envTestAddr = "1.2.3.4:8080" envProxyAddr = "2.3.4.5:7687" ) // overwriteAndRestore overwrite function httpProxyFromEnvironment and // returns a function to restore the default values. func overwrite(hpfe func(req *http.Request) (*url.URL, error)) func() { backHPFE := httpProxyFromEnvironment httpProxyFromEnvironment = hpfe return func() { httpProxyFromEnvironment = backHPFE } } type proxyServer struct { t *testing.T lis net.Listener in net.Conn out net.Conn requestCheck func(*http.Request) error } func (p *proxyServer) run() { in, err := p.lis.Accept() if err != nil { return } p.in = in req, err := http.ReadRequest(bufio.NewReader(in)) if err != nil { p.t.Errorf("failed to read CONNECT req: %v", err) return } if err := p.requestCheck(req); err != nil { resp := http.Response{StatusCode: http.StatusMethodNotAllowed} resp.Write(p.in) p.in.Close() p.t.Errorf("get wrong CONNECT req: %+v, error: %v", req, err) return } out, err := net.Dial("tcp", req.URL.Host) if err != nil { p.t.Errorf("failed to dial to server: %v", err) return } resp := http.Response{StatusCode: http.StatusOK, Proto: "HTTP/1.0"} resp.Write(p.in) p.out = out go io.Copy(p.in, p.out) go io.Copy(p.out, p.in) } func (p *proxyServer) stop() { p.lis.Close() if p.in != nil { p.in.Close() } if p.out != nil { p.out.Close() } } func testHTTPConnect(t *testing.T, proxyURLModify func(*url.URL) *url.URL, proxyReqCheck func(*http.Request) error) { plis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("failed to listen: %v", err) } p := &proxyServer{ t: t, lis: plis, requestCheck: proxyReqCheck, } go p.run() defer p.stop() blis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("failed to listen: %v", err) } msg := []byte{4, 3, 5, 2} recvBuf := make([]byte, len(msg)) done := make(chan struct{}) go func() { in, err := blis.Accept() if err != nil { t.Errorf("failed to accept: %v", err) return } defer in.Close() in.Read(recvBuf) close(done) }() // Overwrite the function in the test and restore them in defer. hpfe := func(req *http.Request) (*url.URL, error) { return proxyURLModify(&url.URL{Host: plis.Addr().String()}), nil } defer overwrite(hpfe)() // Dial to proxy server. dialer := newProxyDialer(func(ctx context.Context, addr string) (net.Conn, error) { if deadline, ok := ctx.Deadline(); ok { return net.DialTimeout("tcp", addr, time.Until(deadline)) } return net.Dial("tcp", addr) }) ctx, cancel := context.WithTimeout(context.Background(), time.Second) defer cancel() c, err := dialer(ctx, blis.Addr().String()) if err != nil { t.Fatalf("http connect Dial failed: %v", err) } defer c.Close() // Send msg on the connection. c.Write(msg) <-done // Check received msg. 
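// The bytes written through the proxied connection must arrive unchanged at the backend listener, showing the CONNECT tunnel is transparent.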
if string(recvBuf) != string(msg) { t.Fatalf("received msg: %v, want %v", recvBuf, msg) } } func (s) TestHTTPConnect(t *testing.T) { testHTTPConnect(t, func(in *url.URL) *url.URL { return in }, func(req *http.Request) error { if req.Method != http.MethodConnect { return fmt.Errorf("unexpected Method %q, want %q", req.Method, http.MethodConnect) } if req.UserAgent() != grpcUA { return fmt.Errorf("unexpect user agent %q, want %q", req.UserAgent(), grpcUA) } return nil }, ) } func (s) TestHTTPConnectBasicAuth(t *testing.T) { const ( user = "notAUser" password = "notAPassword" ) testHTTPConnect(t, func(in *url.URL) *url.URL { in.User = url.UserPassword(user, password) return in }, func(req *http.Request) error { if req.Method != http.MethodConnect { return fmt.Errorf("unexpected Method %q, want %q", req.Method, http.MethodConnect) } if req.UserAgent() != grpcUA { return fmt.Errorf("unexpect user agent %q, want %q", req.UserAgent(), grpcUA) } wantProxyAuthStr := "Basic " + base64.StdEncoding.EncodeToString([]byte(user+":"+password)) if got := req.Header.Get(proxyAuthHeaderKey); got != wantProxyAuthStr { gotDecoded, _ := base64.StdEncoding.DecodeString(got) wantDecoded, _ := base64.StdEncoding.DecodeString(wantProxyAuthStr) return fmt.Errorf("unexpected auth %q (%q), want %q (%q)", got, gotDecoded, wantProxyAuthStr, wantDecoded) } return nil }, ) } func (s) TestMapAddressEnv(t *testing.T) { // Overwrite the function in the test and restore them in defer. hpfe := func(req *http.Request) (*url.URL, error) { if req.URL.Host == envTestAddr { return &url.URL{ Scheme: "https", Host: envProxyAddr, }, nil } return nil, nil } defer overwrite(hpfe)() // envTestAddr should be handled by ProxyFromEnvironment. got, err := mapAddress(context.Background(), envTestAddr) if err != nil { t.Error(err) } if got.Host != envProxyAddr { t.Errorf("want %v, got %v", envProxyAddr, got) } } grpc-go-1.22.1/reflection/000077500000000000000000000000001351635773100153005ustar00rootroot00000000000000grpc-go-1.22.1/reflection/README.md000066400000000000000000000007051351635773100165610ustar00rootroot00000000000000# Reflection Package reflection implements server reflection service. The service implemented is defined in: https://github.com/grpc/grpc/blob/master/src/proto/grpc/reflection/v1alpha/reflection.proto. To register server reflection on a gRPC server: ```go import "google.golang.org/grpc/reflection" s := grpc.NewServer() pb.RegisterYourOwnServer(s, &server{}) // Register reflection service on gRPC server. reflection.Register(s) s.Serve(lis) ``` grpc-go-1.22.1/reflection/grpc_reflection_v1alpha/000077500000000000000000000000001351635773100220615ustar00rootroot00000000000000grpc-go-1.22.1/reflection/grpc_reflection_v1alpha/reflection.pb.go000066400000000000000000001122161351635773100251450ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: grpc_reflection_v1alpha/reflection.proto package grpc_reflection_v1alpha import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import ( context "golang.org/x/net/context" grpc "google.golang.org/grpc" ) // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. 
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package // The message sent by the client when calling ServerReflectionInfo method. type ServerReflectionRequest struct { Host string `protobuf:"bytes,1,opt,name=host,proto3" json:"host,omitempty"` // To use reflection service, the client should set one of the following // fields in message_request. The server distinguishes requests by their // defined field and then handles them using corresponding methods. // // Types that are valid to be assigned to MessageRequest: // *ServerReflectionRequest_FileByFilename // *ServerReflectionRequest_FileContainingSymbol // *ServerReflectionRequest_FileContainingExtension // *ServerReflectionRequest_AllExtensionNumbersOfType // *ServerReflectionRequest_ListServices MessageRequest isServerReflectionRequest_MessageRequest `protobuf_oneof:"message_request"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ServerReflectionRequest) Reset() { *m = ServerReflectionRequest{} } func (m *ServerReflectionRequest) String() string { return proto.CompactTextString(m) } func (*ServerReflectionRequest) ProtoMessage() {} func (*ServerReflectionRequest) Descriptor() ([]byte, []int) { return fileDescriptor_reflection_178bd1e101bf8b63, []int{0} } func (m *ServerReflectionRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ServerReflectionRequest.Unmarshal(m, b) } func (m *ServerReflectionRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ServerReflectionRequest.Marshal(b, m, deterministic) } func (dst *ServerReflectionRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_ServerReflectionRequest.Merge(dst, src) } func (m *ServerReflectionRequest) XXX_Size() int { return xxx_messageInfo_ServerReflectionRequest.Size(m) } func (m *ServerReflectionRequest) XXX_DiscardUnknown() { xxx_messageInfo_ServerReflectionRequest.DiscardUnknown(m) } var xxx_messageInfo_ServerReflectionRequest proto.InternalMessageInfo func (m *ServerReflectionRequest) GetHost() string { if m != nil { return m.Host } return "" } type isServerReflectionRequest_MessageRequest interface { isServerReflectionRequest_MessageRequest() } type ServerReflectionRequest_FileByFilename struct { FileByFilename string `protobuf:"bytes,3,opt,name=file_by_filename,json=fileByFilename,proto3,oneof"` } type ServerReflectionRequest_FileContainingSymbol struct { FileContainingSymbol string `protobuf:"bytes,4,opt,name=file_containing_symbol,json=fileContainingSymbol,proto3,oneof"` } type ServerReflectionRequest_FileContainingExtension struct { FileContainingExtension *ExtensionRequest `protobuf:"bytes,5,opt,name=file_containing_extension,json=fileContainingExtension,proto3,oneof"` } type ServerReflectionRequest_AllExtensionNumbersOfType struct { AllExtensionNumbersOfType string `protobuf:"bytes,6,opt,name=all_extension_numbers_of_type,json=allExtensionNumbersOfType,proto3,oneof"` } type ServerReflectionRequest_ListServices struct { ListServices string `protobuf:"bytes,7,opt,name=list_services,json=listServices,proto3,oneof"` } func (*ServerReflectionRequest_FileByFilename) isServerReflectionRequest_MessageRequest() {} func (*ServerReflectionRequest_FileContainingSymbol) isServerReflectionRequest_MessageRequest() {} func (*ServerReflectionRequest_FileContainingExtension) isServerReflectionRequest_MessageRequest() {} func (*ServerReflectionRequest_AllExtensionNumbersOfType) isServerReflectionRequest_MessageRequest() {} func 
(*ServerReflectionRequest_ListServices) isServerReflectionRequest_MessageRequest() {} func (m *ServerReflectionRequest) GetMessageRequest() isServerReflectionRequest_MessageRequest { if m != nil { return m.MessageRequest } return nil } func (m *ServerReflectionRequest) GetFileByFilename() string { if x, ok := m.GetMessageRequest().(*ServerReflectionRequest_FileByFilename); ok { return x.FileByFilename } return "" } func (m *ServerReflectionRequest) GetFileContainingSymbol() string { if x, ok := m.GetMessageRequest().(*ServerReflectionRequest_FileContainingSymbol); ok { return x.FileContainingSymbol } return "" } func (m *ServerReflectionRequest) GetFileContainingExtension() *ExtensionRequest { if x, ok := m.GetMessageRequest().(*ServerReflectionRequest_FileContainingExtension); ok { return x.FileContainingExtension } return nil } func (m *ServerReflectionRequest) GetAllExtensionNumbersOfType() string { if x, ok := m.GetMessageRequest().(*ServerReflectionRequest_AllExtensionNumbersOfType); ok { return x.AllExtensionNumbersOfType } return "" } func (m *ServerReflectionRequest) GetListServices() string { if x, ok := m.GetMessageRequest().(*ServerReflectionRequest_ListServices); ok { return x.ListServices } return "" } // XXX_OneofFuncs is for the internal use of the proto package. func (*ServerReflectionRequest) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _ServerReflectionRequest_OneofMarshaler, _ServerReflectionRequest_OneofUnmarshaler, _ServerReflectionRequest_OneofSizer, []interface{}{ (*ServerReflectionRequest_FileByFilename)(nil), (*ServerReflectionRequest_FileContainingSymbol)(nil), (*ServerReflectionRequest_FileContainingExtension)(nil), (*ServerReflectionRequest_AllExtensionNumbersOfType)(nil), (*ServerReflectionRequest_ListServices)(nil), } } func _ServerReflectionRequest_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*ServerReflectionRequest) // message_request switch x := m.MessageRequest.(type) { case *ServerReflectionRequest_FileByFilename: b.EncodeVarint(3<<3 | proto.WireBytes) b.EncodeStringBytes(x.FileByFilename) case *ServerReflectionRequest_FileContainingSymbol: b.EncodeVarint(4<<3 | proto.WireBytes) b.EncodeStringBytes(x.FileContainingSymbol) case *ServerReflectionRequest_FileContainingExtension: b.EncodeVarint(5<<3 | proto.WireBytes) if err := b.EncodeMessage(x.FileContainingExtension); err != nil { return err } case *ServerReflectionRequest_AllExtensionNumbersOfType: b.EncodeVarint(6<<3 | proto.WireBytes) b.EncodeStringBytes(x.AllExtensionNumbersOfType) case *ServerReflectionRequest_ListServices: b.EncodeVarint(7<<3 | proto.WireBytes) b.EncodeStringBytes(x.ListServices) case nil: default: return fmt.Errorf("ServerReflectionRequest.MessageRequest has unexpected type %T", x) } return nil } func _ServerReflectionRequest_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*ServerReflectionRequest) switch tag { case 3: // message_request.file_by_filename if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } x, err := b.DecodeStringBytes() m.MessageRequest = &ServerReflectionRequest_FileByFilename{x} return true, err case 4: // message_request.file_containing_symbol if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } x, err := b.DecodeStringBytes() m.MessageRequest = &ServerReflectionRequest_FileContainingSymbol{x} return true, 
err case 5: // message_request.file_containing_extension if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(ExtensionRequest) err := b.DecodeMessage(msg) m.MessageRequest = &ServerReflectionRequest_FileContainingExtension{msg} return true, err case 6: // message_request.all_extension_numbers_of_type if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } x, err := b.DecodeStringBytes() m.MessageRequest = &ServerReflectionRequest_AllExtensionNumbersOfType{x} return true, err case 7: // message_request.list_services if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } x, err := b.DecodeStringBytes() m.MessageRequest = &ServerReflectionRequest_ListServices{x} return true, err default: return false, nil } } func _ServerReflectionRequest_OneofSizer(msg proto.Message) (n int) { m := msg.(*ServerReflectionRequest) // message_request switch x := m.MessageRequest.(type) { case *ServerReflectionRequest_FileByFilename: n += 1 // tag and wire n += proto.SizeVarint(uint64(len(x.FileByFilename))) n += len(x.FileByFilename) case *ServerReflectionRequest_FileContainingSymbol: n += 1 // tag and wire n += proto.SizeVarint(uint64(len(x.FileContainingSymbol))) n += len(x.FileContainingSymbol) case *ServerReflectionRequest_FileContainingExtension: s := proto.Size(x.FileContainingExtension) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *ServerReflectionRequest_AllExtensionNumbersOfType: n += 1 // tag and wire n += proto.SizeVarint(uint64(len(x.AllExtensionNumbersOfType))) n += len(x.AllExtensionNumbersOfType) case *ServerReflectionRequest_ListServices: n += 1 // tag and wire n += proto.SizeVarint(uint64(len(x.ListServices))) n += len(x.ListServices) case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } // The type name and extension number sent by the client when requesting // file_containing_extension. type ExtensionRequest struct { // Fully-qualified type name. The format should be . 
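// For example: "foo.bar.Message" (a placeholder for any fully-qualified message name that declares extensions).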
ContainingType string `protobuf:"bytes,1,opt,name=containing_type,json=containingType,proto3" json:"containing_type,omitempty"` ExtensionNumber int32 `protobuf:"varint,2,opt,name=extension_number,json=extensionNumber,proto3" json:"extension_number,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ExtensionRequest) Reset() { *m = ExtensionRequest{} } func (m *ExtensionRequest) String() string { return proto.CompactTextString(m) } func (*ExtensionRequest) ProtoMessage() {} func (*ExtensionRequest) Descriptor() ([]byte, []int) { return fileDescriptor_reflection_178bd1e101bf8b63, []int{1} } func (m *ExtensionRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ExtensionRequest.Unmarshal(m, b) } func (m *ExtensionRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ExtensionRequest.Marshal(b, m, deterministic) } func (dst *ExtensionRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_ExtensionRequest.Merge(dst, src) } func (m *ExtensionRequest) XXX_Size() int { return xxx_messageInfo_ExtensionRequest.Size(m) } func (m *ExtensionRequest) XXX_DiscardUnknown() { xxx_messageInfo_ExtensionRequest.DiscardUnknown(m) } var xxx_messageInfo_ExtensionRequest proto.InternalMessageInfo func (m *ExtensionRequest) GetContainingType() string { if m != nil { return m.ContainingType } return "" } func (m *ExtensionRequest) GetExtensionNumber() int32 { if m != nil { return m.ExtensionNumber } return 0 } // The message sent by the server to answer ServerReflectionInfo method. type ServerReflectionResponse struct { ValidHost string `protobuf:"bytes,1,opt,name=valid_host,json=validHost,proto3" json:"valid_host,omitempty"` OriginalRequest *ServerReflectionRequest `protobuf:"bytes,2,opt,name=original_request,json=originalRequest,proto3" json:"original_request,omitempty"` // The server sets one of the following fields according to the // message_request in the request. 
// // Types that are valid to be assigned to MessageResponse: // *ServerReflectionResponse_FileDescriptorResponse // *ServerReflectionResponse_AllExtensionNumbersResponse // *ServerReflectionResponse_ListServicesResponse // *ServerReflectionResponse_ErrorResponse MessageResponse isServerReflectionResponse_MessageResponse `protobuf_oneof:"message_response"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ServerReflectionResponse) Reset() { *m = ServerReflectionResponse{} } func (m *ServerReflectionResponse) String() string { return proto.CompactTextString(m) } func (*ServerReflectionResponse) ProtoMessage() {} func (*ServerReflectionResponse) Descriptor() ([]byte, []int) { return fileDescriptor_reflection_178bd1e101bf8b63, []int{2} } func (m *ServerReflectionResponse) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ServerReflectionResponse.Unmarshal(m, b) } func (m *ServerReflectionResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ServerReflectionResponse.Marshal(b, m, deterministic) } func (dst *ServerReflectionResponse) XXX_Merge(src proto.Message) { xxx_messageInfo_ServerReflectionResponse.Merge(dst, src) } func (m *ServerReflectionResponse) XXX_Size() int { return xxx_messageInfo_ServerReflectionResponse.Size(m) } func (m *ServerReflectionResponse) XXX_DiscardUnknown() { xxx_messageInfo_ServerReflectionResponse.DiscardUnknown(m) } var xxx_messageInfo_ServerReflectionResponse proto.InternalMessageInfo func (m *ServerReflectionResponse) GetValidHost() string { if m != nil { return m.ValidHost } return "" } func (m *ServerReflectionResponse) GetOriginalRequest() *ServerReflectionRequest { if m != nil { return m.OriginalRequest } return nil } type isServerReflectionResponse_MessageResponse interface { isServerReflectionResponse_MessageResponse() } type ServerReflectionResponse_FileDescriptorResponse struct { FileDescriptorResponse *FileDescriptorResponse `protobuf:"bytes,4,opt,name=file_descriptor_response,json=fileDescriptorResponse,proto3,oneof"` } type ServerReflectionResponse_AllExtensionNumbersResponse struct { AllExtensionNumbersResponse *ExtensionNumberResponse `protobuf:"bytes,5,opt,name=all_extension_numbers_response,json=allExtensionNumbersResponse,proto3,oneof"` } type ServerReflectionResponse_ListServicesResponse struct { ListServicesResponse *ListServiceResponse `protobuf:"bytes,6,opt,name=list_services_response,json=listServicesResponse,proto3,oneof"` } type ServerReflectionResponse_ErrorResponse struct { ErrorResponse *ErrorResponse `protobuf:"bytes,7,opt,name=error_response,json=errorResponse,proto3,oneof"` } func (*ServerReflectionResponse_FileDescriptorResponse) isServerReflectionResponse_MessageResponse() {} func (*ServerReflectionResponse_AllExtensionNumbersResponse) isServerReflectionResponse_MessageResponse() { } func (*ServerReflectionResponse_ListServicesResponse) isServerReflectionResponse_MessageResponse() {} func (*ServerReflectionResponse_ErrorResponse) isServerReflectionResponse_MessageResponse() {} func (m *ServerReflectionResponse) GetMessageResponse() isServerReflectionResponse_MessageResponse { if m != nil { return m.MessageResponse } return nil } func (m *ServerReflectionResponse) GetFileDescriptorResponse() *FileDescriptorResponse { if x, ok := m.GetMessageResponse().(*ServerReflectionResponse_FileDescriptorResponse); ok { return x.FileDescriptorResponse } return nil } func (m *ServerReflectionResponse) GetAllExtensionNumbersResponse() 
*ExtensionNumberResponse { if x, ok := m.GetMessageResponse().(*ServerReflectionResponse_AllExtensionNumbersResponse); ok { return x.AllExtensionNumbersResponse } return nil } func (m *ServerReflectionResponse) GetListServicesResponse() *ListServiceResponse { if x, ok := m.GetMessageResponse().(*ServerReflectionResponse_ListServicesResponse); ok { return x.ListServicesResponse } return nil } func (m *ServerReflectionResponse) GetErrorResponse() *ErrorResponse { if x, ok := m.GetMessageResponse().(*ServerReflectionResponse_ErrorResponse); ok { return x.ErrorResponse } return nil } // XXX_OneofFuncs is for the internal use of the proto package. func (*ServerReflectionResponse) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _ServerReflectionResponse_OneofMarshaler, _ServerReflectionResponse_OneofUnmarshaler, _ServerReflectionResponse_OneofSizer, []interface{}{ (*ServerReflectionResponse_FileDescriptorResponse)(nil), (*ServerReflectionResponse_AllExtensionNumbersResponse)(nil), (*ServerReflectionResponse_ListServicesResponse)(nil), (*ServerReflectionResponse_ErrorResponse)(nil), } } func _ServerReflectionResponse_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*ServerReflectionResponse) // message_response switch x := m.MessageResponse.(type) { case *ServerReflectionResponse_FileDescriptorResponse: b.EncodeVarint(4<<3 | proto.WireBytes) if err := b.EncodeMessage(x.FileDescriptorResponse); err != nil { return err } case *ServerReflectionResponse_AllExtensionNumbersResponse: b.EncodeVarint(5<<3 | proto.WireBytes) if err := b.EncodeMessage(x.AllExtensionNumbersResponse); err != nil { return err } case *ServerReflectionResponse_ListServicesResponse: b.EncodeVarint(6<<3 | proto.WireBytes) if err := b.EncodeMessage(x.ListServicesResponse); err != nil { return err } case *ServerReflectionResponse_ErrorResponse: b.EncodeVarint(7<<3 | proto.WireBytes) if err := b.EncodeMessage(x.ErrorResponse); err != nil { return err } case nil: default: return fmt.Errorf("ServerReflectionResponse.MessageResponse has unexpected type %T", x) } return nil } func _ServerReflectionResponse_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*ServerReflectionResponse) switch tag { case 4: // message_response.file_descriptor_response if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(FileDescriptorResponse) err := b.DecodeMessage(msg) m.MessageResponse = &ServerReflectionResponse_FileDescriptorResponse{msg} return true, err case 5: // message_response.all_extension_numbers_response if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(ExtensionNumberResponse) err := b.DecodeMessage(msg) m.MessageResponse = &ServerReflectionResponse_AllExtensionNumbersResponse{msg} return true, err case 6: // message_response.list_services_response if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(ListServiceResponse) err := b.DecodeMessage(msg) m.MessageResponse = &ServerReflectionResponse_ListServicesResponse{msg} return true, err case 7: // message_response.error_response if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(ErrorResponse) err := b.DecodeMessage(msg) m.MessageResponse = &ServerReflectionResponse_ErrorResponse{msg} return true, err default: return false, nil } } func 
_ServerReflectionResponse_OneofSizer(msg proto.Message) (n int) { m := msg.(*ServerReflectionResponse) // message_response switch x := m.MessageResponse.(type) { case *ServerReflectionResponse_FileDescriptorResponse: s := proto.Size(x.FileDescriptorResponse) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *ServerReflectionResponse_AllExtensionNumbersResponse: s := proto.Size(x.AllExtensionNumbersResponse) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *ServerReflectionResponse_ListServicesResponse: s := proto.Size(x.ListServicesResponse) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case *ServerReflectionResponse_ErrorResponse: s := proto.Size(x.ErrorResponse) n += 1 // tag and wire n += proto.SizeVarint(uint64(s)) n += s case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } // Serialized FileDescriptorProto messages sent by the server answering // a file_by_filename, file_containing_symbol, or file_containing_extension // request. type FileDescriptorResponse struct { // Serialized FileDescriptorProto messages. We avoid taking a dependency on // descriptor.proto, which uses proto2 only features, by making them opaque // bytes instead. FileDescriptorProto [][]byte `protobuf:"bytes,1,rep,name=file_descriptor_proto,json=fileDescriptorProto,proto3" json:"file_descriptor_proto,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *FileDescriptorResponse) Reset() { *m = FileDescriptorResponse{} } func (m *FileDescriptorResponse) String() string { return proto.CompactTextString(m) } func (*FileDescriptorResponse) ProtoMessage() {} func (*FileDescriptorResponse) Descriptor() ([]byte, []int) { return fileDescriptor_reflection_178bd1e101bf8b63, []int{3} } func (m *FileDescriptorResponse) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_FileDescriptorResponse.Unmarshal(m, b) } func (m *FileDescriptorResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_FileDescriptorResponse.Marshal(b, m, deterministic) } func (dst *FileDescriptorResponse) XXX_Merge(src proto.Message) { xxx_messageInfo_FileDescriptorResponse.Merge(dst, src) } func (m *FileDescriptorResponse) XXX_Size() int { return xxx_messageInfo_FileDescriptorResponse.Size(m) } func (m *FileDescriptorResponse) XXX_DiscardUnknown() { xxx_messageInfo_FileDescriptorResponse.DiscardUnknown(m) } var xxx_messageInfo_FileDescriptorResponse proto.InternalMessageInfo func (m *FileDescriptorResponse) GetFileDescriptorProto() [][]byte { if m != nil { return m.FileDescriptorProto } return nil } // A list of extension numbers sent by the server answering // all_extension_numbers_of_type request. type ExtensionNumberResponse struct { // Full name of the base type, including the package name. The format // is . 
BaseTypeName string `protobuf:"bytes,1,opt,name=base_type_name,json=baseTypeName,proto3" json:"base_type_name,omitempty"` ExtensionNumber []int32 `protobuf:"varint,2,rep,packed,name=extension_number,json=extensionNumber,proto3" json:"extension_number,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ExtensionNumberResponse) Reset() { *m = ExtensionNumberResponse{} } func (m *ExtensionNumberResponse) String() string { return proto.CompactTextString(m) } func (*ExtensionNumberResponse) ProtoMessage() {} func (*ExtensionNumberResponse) Descriptor() ([]byte, []int) { return fileDescriptor_reflection_178bd1e101bf8b63, []int{4} } func (m *ExtensionNumberResponse) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ExtensionNumberResponse.Unmarshal(m, b) } func (m *ExtensionNumberResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ExtensionNumberResponse.Marshal(b, m, deterministic) } func (dst *ExtensionNumberResponse) XXX_Merge(src proto.Message) { xxx_messageInfo_ExtensionNumberResponse.Merge(dst, src) } func (m *ExtensionNumberResponse) XXX_Size() int { return xxx_messageInfo_ExtensionNumberResponse.Size(m) } func (m *ExtensionNumberResponse) XXX_DiscardUnknown() { xxx_messageInfo_ExtensionNumberResponse.DiscardUnknown(m) } var xxx_messageInfo_ExtensionNumberResponse proto.InternalMessageInfo func (m *ExtensionNumberResponse) GetBaseTypeName() string { if m != nil { return m.BaseTypeName } return "" } func (m *ExtensionNumberResponse) GetExtensionNumber() []int32 { if m != nil { return m.ExtensionNumber } return nil } // A list of ServiceResponse sent by the server answering list_services request. type ListServiceResponse struct { // The information of each service may be expanded in the future, so we use // ServiceResponse message to encapsulate it. Service []*ServiceResponse `protobuf:"bytes,1,rep,name=service,proto3" json:"service,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ListServiceResponse) Reset() { *m = ListServiceResponse{} } func (m *ListServiceResponse) String() string { return proto.CompactTextString(m) } func (*ListServiceResponse) ProtoMessage() {} func (*ListServiceResponse) Descriptor() ([]byte, []int) { return fileDescriptor_reflection_178bd1e101bf8b63, []int{5} } func (m *ListServiceResponse) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ListServiceResponse.Unmarshal(m, b) } func (m *ListServiceResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ListServiceResponse.Marshal(b, m, deterministic) } func (dst *ListServiceResponse) XXX_Merge(src proto.Message) { xxx_messageInfo_ListServiceResponse.Merge(dst, src) } func (m *ListServiceResponse) XXX_Size() int { return xxx_messageInfo_ListServiceResponse.Size(m) } func (m *ListServiceResponse) XXX_DiscardUnknown() { xxx_messageInfo_ListServiceResponse.DiscardUnknown(m) } var xxx_messageInfo_ListServiceResponse proto.InternalMessageInfo func (m *ListServiceResponse) GetService() []*ServiceResponse { if m != nil { return m.Service } return nil } // The information of a single service used by ListServiceResponse to answer // list_services request. type ServiceResponse struct { // Full name of a registered service, including its package name. The format // is . 
Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ServiceResponse) Reset() { *m = ServiceResponse{} } func (m *ServiceResponse) String() string { return proto.CompactTextString(m) } func (*ServiceResponse) ProtoMessage() {} func (*ServiceResponse) Descriptor() ([]byte, []int) { return fileDescriptor_reflection_178bd1e101bf8b63, []int{6} } func (m *ServiceResponse) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ServiceResponse.Unmarshal(m, b) } func (m *ServiceResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ServiceResponse.Marshal(b, m, deterministic) } func (dst *ServiceResponse) XXX_Merge(src proto.Message) { xxx_messageInfo_ServiceResponse.Merge(dst, src) } func (m *ServiceResponse) XXX_Size() int { return xxx_messageInfo_ServiceResponse.Size(m) } func (m *ServiceResponse) XXX_DiscardUnknown() { xxx_messageInfo_ServiceResponse.DiscardUnknown(m) } var xxx_messageInfo_ServiceResponse proto.InternalMessageInfo func (m *ServiceResponse) GetName() string { if m != nil { return m.Name } return "" } // The error code and error message sent by the server when an error occurs. type ErrorResponse struct { // This field uses the error codes defined in grpc::StatusCode. ErrorCode int32 `protobuf:"varint,1,opt,name=error_code,json=errorCode,proto3" json:"error_code,omitempty"` ErrorMessage string `protobuf:"bytes,2,opt,name=error_message,json=errorMessage,proto3" json:"error_message,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ErrorResponse) Reset() { *m = ErrorResponse{} } func (m *ErrorResponse) String() string { return proto.CompactTextString(m) } func (*ErrorResponse) ProtoMessage() {} func (*ErrorResponse) Descriptor() ([]byte, []int) { return fileDescriptor_reflection_178bd1e101bf8b63, []int{7} } func (m *ErrorResponse) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ErrorResponse.Unmarshal(m, b) } func (m *ErrorResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ErrorResponse.Marshal(b, m, deterministic) } func (dst *ErrorResponse) XXX_Merge(src proto.Message) { xxx_messageInfo_ErrorResponse.Merge(dst, src) } func (m *ErrorResponse) XXX_Size() int { return xxx_messageInfo_ErrorResponse.Size(m) } func (m *ErrorResponse) XXX_DiscardUnknown() { xxx_messageInfo_ErrorResponse.DiscardUnknown(m) } var xxx_messageInfo_ErrorResponse proto.InternalMessageInfo func (m *ErrorResponse) GetErrorCode() int32 { if m != nil { return m.ErrorCode } return 0 } func (m *ErrorResponse) GetErrorMessage() string { if m != nil { return m.ErrorMessage } return "" } func init() { proto.RegisterType((*ServerReflectionRequest)(nil), "grpc.reflection.v1alpha.ServerReflectionRequest") proto.RegisterType((*ExtensionRequest)(nil), "grpc.reflection.v1alpha.ExtensionRequest") proto.RegisterType((*ServerReflectionResponse)(nil), "grpc.reflection.v1alpha.ServerReflectionResponse") proto.RegisterType((*FileDescriptorResponse)(nil), "grpc.reflection.v1alpha.FileDescriptorResponse") proto.RegisterType((*ExtensionNumberResponse)(nil), "grpc.reflection.v1alpha.ExtensionNumberResponse") proto.RegisterType((*ListServiceResponse)(nil), "grpc.reflection.v1alpha.ListServiceResponse") proto.RegisterType((*ServiceResponse)(nil), "grpc.reflection.v1alpha.ServiceResponse") proto.RegisterType((*ErrorResponse)(nil), 
"grpc.reflection.v1alpha.ErrorResponse") } // Reference imports to suppress errors if they are not otherwise used. var _ context.Context var _ grpc.ClientConn // This is a compile-time assertion to ensure that this generated file // is compatible with the grpc package it is being compiled against. const _ = grpc.SupportPackageIsVersion4 // ServerReflectionClient is the client API for ServerReflection service. // // For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream. type ServerReflectionClient interface { // The reflection service is structured as a bidirectional stream, ensuring // all related requests go to a single server. ServerReflectionInfo(ctx context.Context, opts ...grpc.CallOption) (ServerReflection_ServerReflectionInfoClient, error) } type serverReflectionClient struct { cc *grpc.ClientConn } func NewServerReflectionClient(cc *grpc.ClientConn) ServerReflectionClient { return &serverReflectionClient{cc} } func (c *serverReflectionClient) ServerReflectionInfo(ctx context.Context, opts ...grpc.CallOption) (ServerReflection_ServerReflectionInfoClient, error) { stream, err := c.cc.NewStream(ctx, &_ServerReflection_serviceDesc.Streams[0], "/grpc.reflection.v1alpha.ServerReflection/ServerReflectionInfo", opts...) if err != nil { return nil, err } x := &serverReflectionServerReflectionInfoClient{stream} return x, nil } type ServerReflection_ServerReflectionInfoClient interface { Send(*ServerReflectionRequest) error Recv() (*ServerReflectionResponse, error) grpc.ClientStream } type serverReflectionServerReflectionInfoClient struct { grpc.ClientStream } func (x *serverReflectionServerReflectionInfoClient) Send(m *ServerReflectionRequest) error { return x.ClientStream.SendMsg(m) } func (x *serverReflectionServerReflectionInfoClient) Recv() (*ServerReflectionResponse, error) { m := new(ServerReflectionResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } // ServerReflectionServer is the server API for ServerReflection service. type ServerReflectionServer interface { // The reflection service is structured as a bidirectional stream, ensuring // all related requests go to a single server. 
ServerReflectionInfo(ServerReflection_ServerReflectionInfoServer) error } func RegisterServerReflectionServer(s *grpc.Server, srv ServerReflectionServer) { s.RegisterService(&_ServerReflection_serviceDesc, srv) } func _ServerReflection_ServerReflectionInfo_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(ServerReflectionServer).ServerReflectionInfo(&serverReflectionServerReflectionInfoServer{stream}) } type ServerReflection_ServerReflectionInfoServer interface { Send(*ServerReflectionResponse) error Recv() (*ServerReflectionRequest, error) grpc.ServerStream } type serverReflectionServerReflectionInfoServer struct { grpc.ServerStream } func (x *serverReflectionServerReflectionInfoServer) Send(m *ServerReflectionResponse) error { return x.ServerStream.SendMsg(m) } func (x *serverReflectionServerReflectionInfoServer) Recv() (*ServerReflectionRequest, error) { m := new(ServerReflectionRequest) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } var _ServerReflection_serviceDesc = grpc.ServiceDesc{ ServiceName: "grpc.reflection.v1alpha.ServerReflection", HandlerType: (*ServerReflectionServer)(nil), Methods: []grpc.MethodDesc{}, Streams: []grpc.StreamDesc{ { StreamName: "ServerReflectionInfo", Handler: _ServerReflection_ServerReflectionInfo_Handler, ServerStreams: true, ClientStreams: true, }, }, Metadata: "grpc_reflection_v1alpha/reflection.proto", } func init() { proto.RegisterFile("grpc_reflection_v1alpha/reflection.proto", fileDescriptor_reflection_178bd1e101bf8b63) } var fileDescriptor_reflection_178bd1e101bf8b63 = []byte{ // 656 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x94, 0x54, 0x51, 0x73, 0xd2, 0x40, 0x10, 0x6e, 0x5a, 0x68, 0x87, 0x85, 0x02, 0x5e, 0x2b, 0xa4, 0x3a, 0x75, 0x98, 0x68, 0x35, 0x75, 0x1c, 0xda, 0xe2, 0x8c, 0x3f, 0x80, 0xaa, 0x83, 0x33, 0xb5, 0x75, 0x0e, 0x5f, 0x1c, 0x1f, 0x6e, 0x02, 0x2c, 0x34, 0x1a, 0x72, 0xf1, 0x2e, 0x45, 0x79, 0xf2, 0x47, 0xf8, 0xa3, 0xfc, 0x4b, 0x3e, 0x3a, 0x77, 0x09, 0x21, 0xa4, 0x44, 0xa7, 0x4f, 0x30, 0xdf, 0xee, 0xde, 0xb7, 0xbb, 0xdf, 0xb7, 0x01, 0x7b, 0x22, 0x82, 0x21, 0x13, 0x38, 0xf6, 0x70, 0x18, 0xba, 0xdc, 0x67, 0xb3, 0x33, 0xc7, 0x0b, 0xae, 0x9d, 0x93, 0x25, 0xd4, 0x0e, 0x04, 0x0f, 0x39, 0x69, 0xaa, 0xcc, 0x76, 0x0a, 0x8e, 0x33, 0xad, 0x3f, 0x9b, 0xd0, 0xec, 0xa3, 0x98, 0xa1, 0xa0, 0x49, 0x90, 0xe2, 0xb7, 0x1b, 0x94, 0x21, 0x21, 0x50, 0xb8, 0xe6, 0x32, 0x34, 0x8d, 0x96, 0x61, 0x97, 0xa8, 0xfe, 0x4f, 0x9e, 0x43, 0x7d, 0xec, 0x7a, 0xc8, 0x06, 0x73, 0xa6, 0x7e, 0x7d, 0x67, 0x8a, 0xe6, 0x96, 0x8a, 0xf7, 0x36, 0x68, 0x55, 0x21, 0xdd, 0xf9, 0xdb, 0x18, 0x27, 0xaf, 0xa0, 0xa1, 0x73, 0x87, 0xdc, 0x0f, 0x1d, 0xd7, 0x77, 0xfd, 0x09, 0x93, 0xf3, 0xe9, 0x80, 0x7b, 0x66, 0x21, 0xae, 0xd8, 0x57, 0xf1, 0xf3, 0x24, 0xdc, 0xd7, 0x51, 0x32, 0x81, 0x83, 0x6c, 0x1d, 0xfe, 0x08, 0xd1, 0x97, 0x2e, 0xf7, 0xcd, 0x62, 0xcb, 0xb0, 0xcb, 0x9d, 0xe3, 0x76, 0xce, 0x40, 0xed, 0x37, 0x8b, 0xcc, 0x78, 0x8a, 0xde, 0x06, 0x6d, 0xae, 0xb2, 0x24, 0x19, 0xa4, 0x0b, 0x87, 0x8e, 0xe7, 0x2d, 0x1f, 0x67, 0xfe, 0xcd, 0x74, 0x80, 0x42, 0x32, 0x3e, 0x66, 0xe1, 0x3c, 0x40, 0x73, 0x3b, 0xee, 0xf3, 0xc0, 0xf1, 0xbc, 0xa4, 0xec, 0x32, 0x4a, 0xba, 0x1a, 0x7f, 0x9c, 0x07, 0x48, 0x8e, 0x60, 0xd7, 0x73, 0x65, 0xc8, 0x24, 0x8a, 0x99, 0x3b, 0x44, 0x69, 0xee, 0xc4, 0x35, 0x15, 0x05, 0xf7, 0x63, 0xb4, 0x7b, 0x0f, 0x6a, 0x53, 0x94, 0xd2, 0x99, 0x20, 0x13, 0x51, 0x63, 0xd6, 0x18, 0xea, 0xd9, 0x66, 0xc9, 0x33, 0xa8, 0xa5, 0xa6, 0xd6, 0x3d, 0x44, 0xdb, 0xaf, 0x2e, 0x61, 0x4d, 0x7b, 0x0c, 
0xf5, 0x6c, 0xdb, 0xe6, 0x66, 0xcb, 0xb0, 0x8b, 0xb4, 0x86, 0xab, 0x8d, 0x5a, 0xbf, 0x0b, 0x60, 0xde, 0x96, 0x58, 0x06, 0xdc, 0x97, 0x48, 0x0e, 0x01, 0x66, 0x8e, 0xe7, 0x8e, 0x58, 0x4a, 0xe9, 0x92, 0x46, 0x7a, 0x4a, 0xee, 0xcf, 0x50, 0xe7, 0xc2, 0x9d, 0xb8, 0xbe, 0xe3, 0x2d, 0xfa, 0xd6, 0x34, 0xe5, 0xce, 0x69, 0xae, 0x02, 0x39, 0x76, 0xa2, 0xb5, 0xc5, 0x4b, 0x8b, 0x61, 0xbf, 0x82, 0xa9, 0x75, 0x1e, 0xa1, 0x1c, 0x0a, 0x37, 0x08, 0xb9, 0x60, 0x22, 0xee, 0x4b, 0x3b, 0xa4, 0xdc, 0x39, 0xc9, 0x25, 0x51, 0x26, 0x7b, 0x9d, 0xd4, 0x2d, 0xc6, 0xe9, 0x6d, 0x50, 0x6d, 0xb9, 0xdb, 0x11, 0xf2, 0x1d, 0x1e, 0xad, 0xd7, 0x3a, 0xa1, 0x2c, 0xfe, 0x67, 0xae, 0x8c, 0x01, 0x52, 0x9c, 0x0f, 0xd7, 0xd8, 0x23, 0x21, 0x1e, 0x41, 0x63, 0xc5, 0x20, 0x4b, 0xc2, 0x6d, 0x4d, 0xf8, 0x22, 0x97, 0xf0, 0x62, 0x69, 0xa0, 0x14, 0xd9, 0x7e, 0xda, 0x57, 0x09, 0xcb, 0x15, 0x54, 0x51, 0x88, 0xf4, 0x06, 0x77, 0xf4, 0xeb, 0x4f, 0xf3, 0xc7, 0x51, 0xe9, 0xa9, 0x77, 0x77, 0x31, 0x0d, 0x74, 0x09, 0xd4, 0x97, 0x86, 0x8d, 0x30, 0xeb, 0x02, 0x1a, 0xeb, 0xf7, 0x4e, 0x3a, 0x70, 0x3f, 0x2b, 0xa5, 0xfe, 0xf0, 0x98, 0x46, 0x6b, 0xcb, 0xae, 0xd0, 0xbd, 0x55, 0x51, 0x3e, 0xa8, 0x90, 0xf5, 0x05, 0x9a, 0x39, 0x2b, 0x25, 0x4f, 0xa0, 0x3a, 0x70, 0x24, 0xea, 0x03, 0x60, 0xfa, 0x1b, 0x13, 0x39, 0xb3, 0xa2, 0x50, 0xe5, 0xff, 0x4b, 0xf5, 0x7d, 0x59, 0x7f, 0x03, 0x5b, 0xeb, 0x6e, 0xe0, 0x13, 0xec, 0xad, 0xd9, 0x26, 0xe9, 0xc2, 0x4e, 0x2c, 0x8b, 0x6e, 0xb4, 0xdc, 0xb1, 0xff, 0xe9, 0xea, 0x54, 0x29, 0x5d, 0x14, 0x5a, 0x47, 0x50, 0xcb, 0x3e, 0x4b, 0xa0, 0x90, 0x6a, 0x5a, 0xff, 0xb7, 0xfa, 0xb0, 0xbb, 0xb2, 0x71, 0x75, 0x79, 0x91, 0x62, 0x43, 0x3e, 0x8a, 0x52, 0x8b, 0xb4, 0xa4, 0x91, 0x73, 0x3e, 0x42, 0xf2, 0x18, 0x22, 0x41, 0x58, 0xac, 0x82, 0x3e, 0xbb, 0x12, 0xad, 0x68, 0xf0, 0x7d, 0x84, 0x75, 0x7e, 0x19, 0x50, 0xcf, 0x9e, 0x1b, 0xf9, 0x09, 0xfb, 0x59, 0xec, 0x9d, 0x3f, 0xe6, 0xe4, 0xce, 0x17, 0xfb, 0xe0, 0xec, 0x0e, 0x15, 0xd1, 0x54, 0xb6, 0x71, 0x6a, 0x0c, 0xb6, 0xb5, 0xf4, 0x2f, 0xff, 0x06, 0x00, 0x00, 0xff, 0xff, 0x85, 0x02, 0x09, 0x9d, 0x9f, 0x06, 0x00, 0x00, } grpc-go-1.22.1/reflection/grpc_reflection_v1alpha/reflection.proto000066400000000000000000000124751351635773100253110ustar00rootroot00000000000000// Copyright 2016 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. // Service exported by server reflection syntax = "proto3"; package grpc.reflection.v1alpha; service ServerReflection { // The reflection service is structured as a bidirectional stream, ensuring // all related requests go to a single server. rpc ServerReflectionInfo(stream ServerReflectionRequest) returns (stream ServerReflectionResponse); } // The message sent by the client when calling ServerReflectionInfo method. message ServerReflectionRequest { string host = 1; // To use reflection service, the client should set one of the following // fields in message_request. The server distinguishes requests by their // defined field and then handles them using corresponding methods. oneof message_request { // Find a proto file by the file name. 
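    // (Editor's aside — illustrative sketch only, not part of this file's
    // documentation.) A client drives this oneof over the ServerReflectionInfo
    // stream. Using the generated Go code in reflection.pb.go above, and
    // assuming a connected *grpc.ClientConn named conn, a list_services round
    // trip looks roughly like (errors ignored for brevity):
    //
    //   stream, _ := NewServerReflectionClient(conn).ServerReflectionInfo(ctx)
    //   stream.Send(&ServerReflectionRequest{
    //       MessageRequest: &ServerReflectionRequest_ListServices{ListServices: ""},
    //   })
    //   resp, _ := stream.Recv() // resp.GetListServicesResponse() names the registered services
    //
    // The ServerReflectionRequest_ListServices wrapper name follows the usual
    // protoc-gen-go oneof convention; it is assumed here rather than shown in
    // this excerpt.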
    string file_by_filename = 3;

    // Find the proto file that declares the given fully-qualified symbol name.
    // This field should be a fully-qualified symbol name
    // (e.g. <package>.<service>[.<method>] or <package>.<type>).
    string file_containing_symbol = 4;

    // Find the proto file which defines an extension extending the given
    // message type with the given field number.
    ExtensionRequest file_containing_extension = 5;

    // Finds the tag numbers used by all known extensions of extendee_type, and
    // appends them to ExtensionNumberResponse in an undefined order.
    // Its corresponding method is best-effort: it's not guaranteed that the
    // reflection service will implement this method, and it's not guaranteed
    // that this method will provide all extensions. Returns
    // StatusCode::UNIMPLEMENTED if it's not implemented.
    // This field should be a fully-qualified type name. The format is
    // <package>.<type>
    string all_extension_numbers_of_type = 6;

    // List the full names of registered services. The content will not be
    // checked.
    string list_services = 7;
  }
}

// The type name and extension number sent by the client when requesting
// file_containing_extension.
message ExtensionRequest {
  // Fully-qualified type name. The format should be <package>.<message_name>
  string containing_type = 1;
  int32 extension_number = 2;
}

// The message sent by the server to answer ServerReflectionInfo method.
message ServerReflectionResponse {
  string valid_host = 1;
  ServerReflectionRequest original_request = 2;
  // The server sets one of the following fields according to the
  // message_request in the request.
  oneof message_response {
    // This message is used to answer file_by_filename, file_containing_symbol,
    // file_containing_extension requests with transitive dependencies.
    // As the repeated label is not allowed in oneof fields, we use a
    // FileDescriptorResponse message to encapsulate the repeated fields.
    // The reflection service is allowed to avoid sending FileDescriptorProtos
    // that were previously sent in response to earlier requests in the stream.
    FileDescriptorResponse file_descriptor_response = 4;

    // This message is used to answer all_extension_numbers_of_type requests.
    ExtensionNumberResponse all_extension_numbers_response = 5;

    // This message is used to answer list_services requests.
    ListServiceResponse list_services_response = 6;

    // This message is used when an error occurs.
    ErrorResponse error_response = 7;
  }
}

// Serialized FileDescriptorProto messages sent by the server answering
// a file_by_filename, file_containing_symbol, or file_containing_extension
// request.
message FileDescriptorResponse {
  // Serialized FileDescriptorProto messages. We avoid taking a dependency on
  // descriptor.proto, which uses proto2 only features, by making them opaque
  // bytes instead.
  repeated bytes file_descriptor_proto = 1;
}

// A list of extension numbers sent by the server answering
// all_extension_numbers_of_type request.
message ExtensionNumberResponse {
  // Full name of the base type, including the package name. The format
  // is <package>.<type>
  string base_type_name = 1;
  repeated int32 extension_number = 2;
}

// A list of ServiceResponse sent by the server answering list_services request.
message ListServiceResponse {
  // The information of each service may be expanded in the future, so we use
  // ServiceResponse message to encapsulate it.
  repeated ServiceResponse service = 1;
}

// The information of a single service used by ListServiceResponse to answer
// list_services request.
message ServiceResponse {
  // Full name of a registered service, including its package name. The format
  // is <package>.<service>
string name = 1; } // The error code and error message sent by the server when an error occurs. message ErrorResponse { // This field uses the error codes defined in grpc::StatusCode. int32 error_code = 1; string error_message = 2; } grpc-go-1.22.1/reflection/grpc_testing/000077500000000000000000000000001351635773100177705ustar00rootroot00000000000000grpc-go-1.22.1/reflection/grpc_testing/proto2.pb.go000066400000000000000000000055641351635773100221560ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: proto2.proto package grpc_testing import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type ToBeExtended struct { Foo *int32 `protobuf:"varint,1,req,name=foo" json:"foo,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` proto.XXX_InternalExtensions `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ToBeExtended) Reset() { *m = ToBeExtended{} } func (m *ToBeExtended) String() string { return proto.CompactTextString(m) } func (*ToBeExtended) ProtoMessage() {} func (*ToBeExtended) Descriptor() ([]byte, []int) { return fileDescriptor_proto2_b16f7a513d0acdc0, []int{0} } var extRange_ToBeExtended = []proto.ExtensionRange{ {Start: 10, End: 30}, } func (*ToBeExtended) ExtensionRangeArray() []proto.ExtensionRange { return extRange_ToBeExtended } func (m *ToBeExtended) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ToBeExtended.Unmarshal(m, b) } func (m *ToBeExtended) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ToBeExtended.Marshal(b, m, deterministic) } func (dst *ToBeExtended) XXX_Merge(src proto.Message) { xxx_messageInfo_ToBeExtended.Merge(dst, src) } func (m *ToBeExtended) XXX_Size() int { return xxx_messageInfo_ToBeExtended.Size(m) } func (m *ToBeExtended) XXX_DiscardUnknown() { xxx_messageInfo_ToBeExtended.DiscardUnknown(m) } var xxx_messageInfo_ToBeExtended proto.InternalMessageInfo func (m *ToBeExtended) GetFoo() int32 { if m != nil && m.Foo != nil { return *m.Foo } return 0 } func init() { proto.RegisterType((*ToBeExtended)(nil), "grpc.testing.ToBeExtended") } func init() { proto.RegisterFile("proto2.proto", fileDescriptor_proto2_b16f7a513d0acdc0) } var fileDescriptor_proto2_b16f7a513d0acdc0 = []byte{ // 86 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0xe2, 0x29, 0x28, 0xca, 0x2f, 0xc9, 0x37, 0xd2, 0x03, 0x53, 0x42, 0x3c, 0xe9, 0x45, 0x05, 0xc9, 0x7a, 0x25, 0xa9, 0xc5, 0x25, 0x99, 0x79, 0xe9, 0x4a, 0x6a, 0x5c, 0x3c, 0x21, 0xf9, 0x4e, 0xa9, 0xae, 0x15, 0x25, 0xa9, 0x79, 0x29, 0xa9, 0x29, 0x42, 0x02, 0x5c, 0xcc, 0x69, 0xf9, 0xf9, 0x12, 0x8c, 0x0a, 0x4c, 0x1a, 0xac, 0x41, 0x20, 0xa6, 0x16, 0x0b, 0x07, 0x97, 0x80, 0x3c, 0x20, 0x00, 0x00, 0xff, 0xff, 0x74, 0x86, 0x9c, 0x08, 0x44, 0x00, 0x00, 0x00, } grpc-go-1.22.1/reflection/grpc_testing/proto2.proto000066400000000000000000000013041351635773100223000ustar00rootroot00000000000000// Copyright 2017 gRPC authors. 
// // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. syntax = "proto2"; package grpc.testing; message ToBeExtended { required int32 foo = 1; extensions 10 to 30; } grpc-go-1.22.1/reflection/grpc_testing/proto2_ext.pb.go000066400000000000000000000077271351635773100230410ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: proto2_ext.proto package grpc_testing import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type Extension struct { Whatzit *int32 `protobuf:"varint,1,opt,name=whatzit" json:"whatzit,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Extension) Reset() { *m = Extension{} } func (m *Extension) String() string { return proto.CompactTextString(m) } func (*Extension) ProtoMessage() {} func (*Extension) Descriptor() ([]byte, []int) { return fileDescriptor_proto2_ext_4437118420d604f2, []int{0} } func (m *Extension) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Extension.Unmarshal(m, b) } func (m *Extension) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Extension.Marshal(b, m, deterministic) } func (dst *Extension) XXX_Merge(src proto.Message) { xxx_messageInfo_Extension.Merge(dst, src) } func (m *Extension) XXX_Size() int { return xxx_messageInfo_Extension.Size(m) } func (m *Extension) XXX_DiscardUnknown() { xxx_messageInfo_Extension.DiscardUnknown(m) } var xxx_messageInfo_Extension proto.InternalMessageInfo func (m *Extension) GetWhatzit() int32 { if m != nil && m.Whatzit != nil { return *m.Whatzit } return 0 } var E_Foo = &proto.ExtensionDesc{ ExtendedType: (*ToBeExtended)(nil), ExtensionType: (*int32)(nil), Field: 13, Name: "grpc.testing.foo", Tag: "varint,13,opt,name=foo", Filename: "proto2_ext.proto", } var E_Bar = &proto.ExtensionDesc{ ExtendedType: (*ToBeExtended)(nil), ExtensionType: (*Extension)(nil), Field: 17, Name: "grpc.testing.bar", Tag: "bytes,17,opt,name=bar", Filename: "proto2_ext.proto", } var E_Baz = &proto.ExtensionDesc{ ExtendedType: (*ToBeExtended)(nil), ExtensionType: (*SearchRequest)(nil), Field: 19, Name: "grpc.testing.baz", Tag: "bytes,19,opt,name=baz", Filename: "proto2_ext.proto", } func init() { proto.RegisterType((*Extension)(nil), "grpc.testing.Extension") proto.RegisterExtension(E_Foo) proto.RegisterExtension(E_Bar) proto.RegisterExtension(E_Baz) } func init() { proto.RegisterFile("proto2_ext.proto", fileDescriptor_proto2_ext_4437118420d604f2) } var fileDescriptor_proto2_ext_4437118420d604f2 = []byte{ // 179 bytes of a gzipped 
FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x12, 0x28, 0x28, 0xca, 0x2f, 0xc9, 0x37, 0x8a, 0x4f, 0xad, 0x28, 0xd1, 0x03, 0x33, 0x85, 0x78, 0xd2, 0x8b, 0x0a, 0x92, 0xf5, 0x4a, 0x52, 0x8b, 0x4b, 0x32, 0xf3, 0xd2, 0xa5, 0x78, 0x20, 0xf2, 0x10, 0x39, 0x29, 0x2e, 0x90, 0x30, 0x84, 0xad, 0xa4, 0xca, 0xc5, 0xe9, 0x5a, 0x51, 0x92, 0x9a, 0x57, 0x9c, 0x99, 0x9f, 0x27, 0x24, 0xc1, 0xc5, 0x5e, 0x9e, 0x91, 0x58, 0x52, 0x95, 0x59, 0x22, 0xc1, 0xa8, 0xc0, 0xa8, 0xc1, 0x1a, 0x04, 0xe3, 0x5a, 0xe9, 0x70, 0x31, 0xa7, 0xe5, 0xe7, 0x0b, 0x49, 0xe9, 0x21, 0x1b, 0xab, 0x17, 0x92, 0xef, 0x94, 0x0a, 0xd6, 0x9d, 0x92, 0x9a, 0x22, 0xc1, 0x0b, 0xd6, 0x01, 0x52, 0x66, 0xe5, 0xca, 0xc5, 0x9c, 0x94, 0x58, 0x84, 0x57, 0xb5, 0xa0, 0x02, 0xa3, 0x06, 0xb7, 0x91, 0x38, 0xaa, 0x0a, 0xb8, 0x4b, 0x82, 0x40, 0xfa, 0xad, 0x3c, 0x41, 0xc6, 0x54, 0xe1, 0x35, 0x46, 0x18, 0x6c, 0x8c, 0x34, 0xaa, 0x8a, 0xe0, 0xd4, 0xc4, 0xa2, 0xe4, 0x8c, 0xa0, 0xd4, 0xc2, 0xd2, 0xd4, 0xe2, 0x12, 0x90, 0x51, 0x55, 0x80, 0x00, 0x00, 0x00, 0xff, 0xff, 0x71, 0x6b, 0x94, 0x9f, 0x21, 0x01, 0x00, 0x00, } grpc-go-1.22.1/reflection/grpc_testing/proto2_ext.proto000066400000000000000000000015211351635773100231610ustar00rootroot00000000000000// Copyright 2017 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. syntax = "proto2"; package grpc.testing; import "proto2.proto"; import "test.proto"; extend ToBeExtended { optional int32 foo = 13; optional Extension bar = 17; optional SearchRequest baz = 19; } message Extension { optional int32 whatzit = 1; } grpc-go-1.22.1/reflection/grpc_testing/proto2_ext2.pb.go000066400000000000000000000074651351635773100231220ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: proto2_ext2.proto package grpc_testing import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. 
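// Editor's note — illustrative only, not generated code. The proto2 fixtures in
// this directory (ToBeExtended with extensions foo=13, bar=17, baz=19 from
// proto2_ext.proto, plus frob=23 and nitz=29 from proto2_ext2.proto) exist so
// the reflection tests can exercise extension lookups. A hedged sketch of such
// a lookup over the reflection stream, assuming the generated reflection types
// are imported as rpb and the oneof wrapper follows the usual protoc-gen-go
// naming:
//
//	req := &rpb.ServerReflectionRequest{
//		MessageRequest: &rpb.ServerReflectionRequest_AllExtensionNumbersOfType{
//			AllExtensionNumbersOfType: "grpc.testing.ToBeExtended",
//		},
//	}
//	// stream.Send(req) followed by stream.Recv() is expected to yield an
//	// ExtensionNumberResponse listing the extension numbers above.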
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type AnotherExtension struct { Whatchamacallit *int32 `protobuf:"varint,1,opt,name=whatchamacallit" json:"whatchamacallit,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *AnotherExtension) Reset() { *m = AnotherExtension{} } func (m *AnotherExtension) String() string { return proto.CompactTextString(m) } func (*AnotherExtension) ProtoMessage() {} func (*AnotherExtension) Descriptor() ([]byte, []int) { return fileDescriptor_proto2_ext2_039d342873655470, []int{0} } func (m *AnotherExtension) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_AnotherExtension.Unmarshal(m, b) } func (m *AnotherExtension) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_AnotherExtension.Marshal(b, m, deterministic) } func (dst *AnotherExtension) XXX_Merge(src proto.Message) { xxx_messageInfo_AnotherExtension.Merge(dst, src) } func (m *AnotherExtension) XXX_Size() int { return xxx_messageInfo_AnotherExtension.Size(m) } func (m *AnotherExtension) XXX_DiscardUnknown() { xxx_messageInfo_AnotherExtension.DiscardUnknown(m) } var xxx_messageInfo_AnotherExtension proto.InternalMessageInfo func (m *AnotherExtension) GetWhatchamacallit() int32 { if m != nil && m.Whatchamacallit != nil { return *m.Whatchamacallit } return 0 } var E_Frob = &proto.ExtensionDesc{ ExtendedType: (*ToBeExtended)(nil), ExtensionType: (*string)(nil), Field: 23, Name: "grpc.testing.frob", Tag: "bytes,23,opt,name=frob", Filename: "proto2_ext2.proto", } var E_Nitz = &proto.ExtensionDesc{ ExtendedType: (*ToBeExtended)(nil), ExtensionType: (*AnotherExtension)(nil), Field: 29, Name: "grpc.testing.nitz", Tag: "bytes,29,opt,name=nitz", Filename: "proto2_ext2.proto", } func init() { proto.RegisterType((*AnotherExtension)(nil), "grpc.testing.AnotherExtension") proto.RegisterExtension(E_Frob) proto.RegisterExtension(E_Nitz) } func init() { proto.RegisterFile("proto2_ext2.proto", fileDescriptor_proto2_ext2_039d342873655470) } var fileDescriptor_proto2_ext2_039d342873655470 = []byte{ // 165 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x12, 0x2c, 0x28, 0xca, 0x2f, 0xc9, 0x37, 0x8a, 0x4f, 0xad, 0x28, 0x31, 0xd2, 0x03, 0xb3, 0x85, 0x78, 0xd2, 0x8b, 0x0a, 0x92, 0xf5, 0x4a, 0x52, 0x8b, 0x4b, 0x32, 0xf3, 0xd2, 0xa5, 0x78, 0x20, 0x0a, 0x20, 0x72, 0x4a, 0x36, 0x5c, 0x02, 0x8e, 0x79, 0xf9, 0x25, 0x19, 0xa9, 0x45, 0xae, 0x15, 0x25, 0xa9, 0x79, 0xc5, 0x99, 0xf9, 0x79, 0x42, 0x1a, 0x5c, 0xfc, 0xe5, 0x19, 0x89, 0x25, 0xc9, 0x19, 0x89, 0xb9, 0x89, 0xc9, 0x89, 0x39, 0x39, 0x99, 0x25, 0x12, 0x8c, 0x0a, 0x8c, 0x1a, 0xac, 0x41, 0xe8, 0xc2, 0x56, 0x7a, 0x5c, 0x2c, 0x69, 0x45, 0xf9, 0x49, 0x42, 0x52, 0x7a, 0xc8, 0x56, 0xe8, 0x85, 0xe4, 0x3b, 0xa5, 0x82, 0x8d, 0x4b, 0x49, 0x4d, 0x91, 0x10, 0x57, 0x60, 0xd4, 0xe0, 0x0c, 0x02, 0xab, 0xb3, 0xf2, 0xe3, 0x62, 0xc9, 0xcb, 0x2c, 0xa9, 0xc2, 0xab, 0x5e, 0x56, 0x81, 0x51, 0x83, 0xdb, 0x48, 0x0e, 0x55, 0x05, 0xba, 0x1b, 0x83, 0xc0, 0xe6, 0x00, 0x02, 0x00, 0x00, 0xff, 0xff, 0xf0, 0x7e, 0x0d, 0x26, 0xed, 0x00, 0x00, 0x00, } grpc-go-1.22.1/reflection/grpc_testing/proto2_ext2.proto000066400000000000000000000014621351635773100232470ustar00rootroot00000000000000// Copyright 2017 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. 
// You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. syntax = "proto2"; package grpc.testing; import "proto2.proto"; extend ToBeExtended { optional string frob = 23; optional AnotherExtension nitz = 29; } message AnotherExtension { optional int32 whatchamacallit = 1; } grpc-go-1.22.1/reflection/grpc_testing/test.pb.go000066400000000000000000000260201351635773100216760ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: test.proto package grpc_testing import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import ( context "golang.org/x/net/context" grpc "google.golang.org/grpc" ) // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type SearchResponse struct { Results []*SearchResponse_Result `protobuf:"bytes,1,rep,name=results,proto3" json:"results,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *SearchResponse) Reset() { *m = SearchResponse{} } func (m *SearchResponse) String() string { return proto.CompactTextString(m) } func (*SearchResponse) ProtoMessage() {} func (*SearchResponse) Descriptor() ([]byte, []int) { return fileDescriptor_test_a0c753075da50dd4, []int{0} } func (m *SearchResponse) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_SearchResponse.Unmarshal(m, b) } func (m *SearchResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_SearchResponse.Marshal(b, m, deterministic) } func (dst *SearchResponse) XXX_Merge(src proto.Message) { xxx_messageInfo_SearchResponse.Merge(dst, src) } func (m *SearchResponse) XXX_Size() int { return xxx_messageInfo_SearchResponse.Size(m) } func (m *SearchResponse) XXX_DiscardUnknown() { xxx_messageInfo_SearchResponse.DiscardUnknown(m) } var xxx_messageInfo_SearchResponse proto.InternalMessageInfo func (m *SearchResponse) GetResults() []*SearchResponse_Result { if m != nil { return m.Results } return nil } type SearchResponse_Result struct { Url string `protobuf:"bytes,1,opt,name=url,proto3" json:"url,omitempty"` Title string `protobuf:"bytes,2,opt,name=title,proto3" json:"title,omitempty"` Snippets []string `protobuf:"bytes,3,rep,name=snippets,proto3" json:"snippets,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *SearchResponse_Result) Reset() { *m = SearchResponse_Result{} } func (m *SearchResponse_Result) String() string { return proto.CompactTextString(m) } func (*SearchResponse_Result) ProtoMessage() {} func (*SearchResponse_Result) Descriptor() ([]byte, []int) { return fileDescriptor_test_a0c753075da50dd4, []int{0, 0} } func (m *SearchResponse_Result) XXX_Unmarshal(b []byte) error { return 
xxx_messageInfo_SearchResponse_Result.Unmarshal(m, b) } func (m *SearchResponse_Result) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_SearchResponse_Result.Marshal(b, m, deterministic) } func (dst *SearchResponse_Result) XXX_Merge(src proto.Message) { xxx_messageInfo_SearchResponse_Result.Merge(dst, src) } func (m *SearchResponse_Result) XXX_Size() int { return xxx_messageInfo_SearchResponse_Result.Size(m) } func (m *SearchResponse_Result) XXX_DiscardUnknown() { xxx_messageInfo_SearchResponse_Result.DiscardUnknown(m) } var xxx_messageInfo_SearchResponse_Result proto.InternalMessageInfo func (m *SearchResponse_Result) GetUrl() string { if m != nil { return m.Url } return "" } func (m *SearchResponse_Result) GetTitle() string { if m != nil { return m.Title } return "" } func (m *SearchResponse_Result) GetSnippets() []string { if m != nil { return m.Snippets } return nil } type SearchRequest struct { Query string `protobuf:"bytes,1,opt,name=query,proto3" json:"query,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *SearchRequest) Reset() { *m = SearchRequest{} } func (m *SearchRequest) String() string { return proto.CompactTextString(m) } func (*SearchRequest) ProtoMessage() {} func (*SearchRequest) Descriptor() ([]byte, []int) { return fileDescriptor_test_a0c753075da50dd4, []int{1} } func (m *SearchRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_SearchRequest.Unmarshal(m, b) } func (m *SearchRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_SearchRequest.Marshal(b, m, deterministic) } func (dst *SearchRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_SearchRequest.Merge(dst, src) } func (m *SearchRequest) XXX_Size() int { return xxx_messageInfo_SearchRequest.Size(m) } func (m *SearchRequest) XXX_DiscardUnknown() { xxx_messageInfo_SearchRequest.DiscardUnknown(m) } var xxx_messageInfo_SearchRequest proto.InternalMessageInfo func (m *SearchRequest) GetQuery() string { if m != nil { return m.Query } return "" } func init() { proto.RegisterType((*SearchResponse)(nil), "grpc.testing.SearchResponse") proto.RegisterType((*SearchResponse_Result)(nil), "grpc.testing.SearchResponse.Result") proto.RegisterType((*SearchRequest)(nil), "grpc.testing.SearchRequest") } // Reference imports to suppress errors if they are not otherwise used. var _ context.Context var _ grpc.ClientConn // This is a compile-time assertion to ensure that this generated file // is compatible with the grpc package it is being compiled against. const _ = grpc.SupportPackageIsVersion4 // SearchServiceClient is the client API for SearchService service. // // For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream. type SearchServiceClient interface { Search(ctx context.Context, in *SearchRequest, opts ...grpc.CallOption) (*SearchResponse, error) StreamingSearch(ctx context.Context, opts ...grpc.CallOption) (SearchService_StreamingSearchClient, error) } type searchServiceClient struct { cc *grpc.ClientConn } func NewSearchServiceClient(cc *grpc.ClientConn) SearchServiceClient { return &searchServiceClient{cc} } func (c *searchServiceClient) Search(ctx context.Context, in *SearchRequest, opts ...grpc.CallOption) (*SearchResponse, error) { out := new(SearchResponse) err := c.cc.Invoke(ctx, "/grpc.testing.SearchService/Search", in, out, opts...) 
if err != nil { return nil, err } return out, nil } func (c *searchServiceClient) StreamingSearch(ctx context.Context, opts ...grpc.CallOption) (SearchService_StreamingSearchClient, error) { stream, err := c.cc.NewStream(ctx, &_SearchService_serviceDesc.Streams[0], "/grpc.testing.SearchService/StreamingSearch", opts...) if err != nil { return nil, err } x := &searchServiceStreamingSearchClient{stream} return x, nil } type SearchService_StreamingSearchClient interface { Send(*SearchRequest) error Recv() (*SearchResponse, error) grpc.ClientStream } type searchServiceStreamingSearchClient struct { grpc.ClientStream } func (x *searchServiceStreamingSearchClient) Send(m *SearchRequest) error { return x.ClientStream.SendMsg(m) } func (x *searchServiceStreamingSearchClient) Recv() (*SearchResponse, error) { m := new(SearchResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } // SearchServiceServer is the server API for SearchService service. type SearchServiceServer interface { Search(context.Context, *SearchRequest) (*SearchResponse, error) StreamingSearch(SearchService_StreamingSearchServer) error } func RegisterSearchServiceServer(s *grpc.Server, srv SearchServiceServer) { s.RegisterService(&_SearchService_serviceDesc, srv) } func _SearchService_Search_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(SearchRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(SearchServiceServer).Search(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/grpc.testing.SearchService/Search", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(SearchServiceServer).Search(ctx, req.(*SearchRequest)) } return interceptor(ctx, in, info, handler) } func _SearchService_StreamingSearch_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(SearchServiceServer).StreamingSearch(&searchServiceStreamingSearchServer{stream}) } type SearchService_StreamingSearchServer interface { Send(*SearchResponse) error Recv() (*SearchRequest, error) grpc.ServerStream } type searchServiceStreamingSearchServer struct { grpc.ServerStream } func (x *searchServiceStreamingSearchServer) Send(m *SearchResponse) error { return x.ServerStream.SendMsg(m) } func (x *searchServiceStreamingSearchServer) Recv() (*SearchRequest, error) { m := new(SearchRequest) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } var _SearchService_serviceDesc = grpc.ServiceDesc{ ServiceName: "grpc.testing.SearchService", HandlerType: (*SearchServiceServer)(nil), Methods: []grpc.MethodDesc{ { MethodName: "Search", Handler: _SearchService_Search_Handler, }, }, Streams: []grpc.StreamDesc{ { StreamName: "StreamingSearch", Handler: _SearchService_StreamingSearch_Handler, ServerStreams: true, ClientStreams: true, }, }, Metadata: "test.proto", } func init() { proto.RegisterFile("test.proto", fileDescriptor_test_a0c753075da50dd4) } var fileDescriptor_test_a0c753075da50dd4 = []byte{ // 231 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xa4, 0x91, 0xbd, 0x4a, 0xc5, 0x40, 0x10, 0x85, 0x59, 0x83, 0xd1, 0x3b, 0xfe, 0x32, 0x58, 0x84, 0x68, 0x11, 0xae, 0x08, 0xa9, 0x16, 0xb9, 0xd6, 0x56, 0xb6, 0x16, 0xb2, 0x79, 0x82, 0x6b, 0x18, 0xe2, 0x42, 0x4c, 0x36, 0x33, 0x13, 0xc1, 0x87, 0xb1, 0xf5, 0x39, 0x25, 0x59, 0x23, 0x0a, 0x62, 0x63, 0xb7, 0xe7, 0xe3, 0xcc, 
0xb7, 0xbb, 0x0c, 0x80, 0x92, 0xa8, 0x0d, 0xdc, 0x6b, 0x8f, 0x87, 0x0d, 0x87, 0xda, 0x4e, 0xc0, 0x77, 0xcd, 0xfa, 0xcd, 0xc0, 0x71, 0x45, 0x5b, 0xae, 0x9f, 0x1c, 0x49, 0xe8, 0x3b, 0x21, 0xbc, 0x85, 0x3d, 0x26, 0x19, 0x5b, 0x95, 0xcc, 0x14, 0x49, 0x79, 0xb0, 0xb9, 0xb4, 0xdf, 0x47, 0xec, 0xcf, 0xba, 0x75, 0x73, 0xd7, 0x2d, 0x33, 0xf9, 0x3d, 0xa4, 0x11, 0xe1, 0x29, 0x24, 0x23, 0xb7, 0x99, 0x29, 0x4c, 0xb9, 0x72, 0xd3, 0x11, 0xcf, 0x60, 0x57, 0xbd, 0xb6, 0x94, 0xed, 0xcc, 0x2c, 0x06, 0xcc, 0x61, 0x5f, 0x3a, 0x1f, 0x02, 0xa9, 0x64, 0x49, 0x91, 0x94, 0x2b, 0xf7, 0x95, 0xd7, 0x57, 0x70, 0xb4, 0xdc, 0x37, 0x8c, 0x24, 0x3a, 0x29, 0x86, 0x91, 0xf8, 0xf5, 0x53, 0x1b, 0xc3, 0xe6, 0xdd, 0x2c, 0xbd, 0x8a, 0xf8, 0xc5, 0xd7, 0x84, 0x77, 0x90, 0x46, 0x80, 0xe7, 0xbf, 0x3f, 0x7f, 0xd6, 0xe5, 0x17, 0x7f, 0xfd, 0x0d, 0x1f, 0xe0, 0xa4, 0x52, 0xa6, 0xed, 0xb3, 0xef, 0x9a, 0x7f, 0xdb, 0x4a, 0x73, 0x6d, 0x1e, 0xd3, 0x79, 0x09, 0x37, 0x1f, 0x01, 0x00, 0x00, 0xff, 0xff, 0x20, 0xd6, 0x09, 0xb8, 0x92, 0x01, 0x00, 0x00, } grpc-go-1.22.1/reflection/grpc_testing/test.proto000066400000000000000000000017441351635773100220420ustar00rootroot00000000000000// Copyright 2017 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. syntax = "proto3"; package grpc.testing; message SearchResponse { message Result { string url = 1; string title = 2; repeated string snippets = 3; } repeated Result results = 1; } message SearchRequest { string query = 1; } service SearchService { rpc Search(SearchRequest) returns (SearchResponse); rpc StreamingSearch(stream SearchRequest) returns (stream SearchResponse); } grpc-go-1.22.1/reflection/grpc_testingv3/000077500000000000000000000000001351635773100202415ustar00rootroot00000000000000grpc-go-1.22.1/reflection/grpc_testingv3/testv3.pb.go000066400000000000000000000376601351635773100224340ustar00rootroot00000000000000// Code generated by protoc-gen-go. // source: testv3.proto // DO NOT EDIT! /* Package grpc_testingv3 is a generated protocol buffer package. It is generated from these files: testv3.proto It has these top-level messages: SearchResponseV3 SearchRequestV3 */ package grpc_testingv3 import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import ( context "golang.org/x/net/context" grpc "google.golang.org/grpc" ) // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. 
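// Editor's note — a hedged sketch, not generated code. In the reflection
// end-to-end tests, these testing services are typically registered on the
// same grpc.Server as the reflection service, so that ServerReflectionInfo has
// real services, messages and extensions to describe. Roughly, with
// hypothetical server implementations named searchServer and searchServerV3:
//
//	s := grpc.NewServer()
//	grpc_testing.RegisterSearchServiceServer(s, &searchServer{})       // from test.pb.go above
//	grpc_testingv3.RegisterSearchServiceV3Server(s, &searchServerV3{}) // from this file
//	reflection.Register(s) // google.golang.org/grpc/reflection
//	go s.Serve(lis)
//
// searchServer and searchServerV3 stand in for whatever the test harness
// provides; they are not defined in the generated packages.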
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type SearchResponseV3_State int32 const ( SearchResponseV3_UNKNOWN SearchResponseV3_State = 0 SearchResponseV3_FRESH SearchResponseV3_State = 1 SearchResponseV3_STALE SearchResponseV3_State = 2 ) var SearchResponseV3_State_name = map[int32]string{ 0: "UNKNOWN", 1: "FRESH", 2: "STALE", } var SearchResponseV3_State_value = map[string]int32{ "UNKNOWN": 0, "FRESH": 1, "STALE": 2, } func (x SearchResponseV3_State) String() string { return proto.EnumName(SearchResponseV3_State_name, int32(x)) } func (SearchResponseV3_State) EnumDescriptor() ([]byte, []int) { return fileDescriptor0, []int{0, 0} } type SearchResponseV3 struct { Results []*SearchResponseV3_Result `protobuf:"bytes,1,rep,name=results" json:"results,omitempty"` State SearchResponseV3_State `protobuf:"varint,2,opt,name=state,enum=grpc.testingv3.SearchResponseV3_State" json:"state,omitempty"` } func (m *SearchResponseV3) Reset() { *m = SearchResponseV3{} } func (m *SearchResponseV3) String() string { return proto.CompactTextString(m) } func (*SearchResponseV3) ProtoMessage() {} func (*SearchResponseV3) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} } func (m *SearchResponseV3) GetResults() []*SearchResponseV3_Result { if m != nil { return m.Results } return nil } func (m *SearchResponseV3) GetState() SearchResponseV3_State { if m != nil { return m.State } return SearchResponseV3_UNKNOWN } type SearchResponseV3_Result struct { Url string `protobuf:"bytes,1,opt,name=url" json:"url,omitempty"` Title string `protobuf:"bytes,2,opt,name=title" json:"title,omitempty"` Snippets []string `protobuf:"bytes,3,rep,name=snippets" json:"snippets,omitempty"` Metadata map[string]*SearchResponseV3_Result_Value `protobuf:"bytes,4,rep,name=metadata" json:"metadata,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"` } func (m *SearchResponseV3_Result) Reset() { *m = SearchResponseV3_Result{} } func (m *SearchResponseV3_Result) String() string { return proto.CompactTextString(m) } func (*SearchResponseV3_Result) ProtoMessage() {} func (*SearchResponseV3_Result) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0, 0} } func (m *SearchResponseV3_Result) GetUrl() string { if m != nil { return m.Url } return "" } func (m *SearchResponseV3_Result) GetTitle() string { if m != nil { return m.Title } return "" } func (m *SearchResponseV3_Result) GetSnippets() []string { if m != nil { return m.Snippets } return nil } func (m *SearchResponseV3_Result) GetMetadata() map[string]*SearchResponseV3_Result_Value { if m != nil { return m.Metadata } return nil } type SearchResponseV3_Result_Value struct { // Types that are valid to be assigned to Val: // *SearchResponseV3_Result_Value_Str // *SearchResponseV3_Result_Value_Int // *SearchResponseV3_Result_Value_Real Val isSearchResponseV3_Result_Value_Val `protobuf_oneof:"val"` } func (m *SearchResponseV3_Result_Value) Reset() { *m = SearchResponseV3_Result_Value{} } func (m *SearchResponseV3_Result_Value) String() string { return proto.CompactTextString(m) } func (*SearchResponseV3_Result_Value) ProtoMessage() {} func (*SearchResponseV3_Result_Value) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0, 0, 0} } type isSearchResponseV3_Result_Value_Val interface { isSearchResponseV3_Result_Value_Val() } type SearchResponseV3_Result_Value_Str struct { Str string `protobuf:"bytes,1,opt,name=str,oneof"` } type SearchResponseV3_Result_Value_Int struct { Int int64 
`protobuf:"varint,2,opt,name=int,oneof"` } type SearchResponseV3_Result_Value_Real struct { Real float64 `protobuf:"fixed64,3,opt,name=real,oneof"` } func (*SearchResponseV3_Result_Value_Str) isSearchResponseV3_Result_Value_Val() {} func (*SearchResponseV3_Result_Value_Int) isSearchResponseV3_Result_Value_Val() {} func (*SearchResponseV3_Result_Value_Real) isSearchResponseV3_Result_Value_Val() {} func (m *SearchResponseV3_Result_Value) GetVal() isSearchResponseV3_Result_Value_Val { if m != nil { return m.Val } return nil } func (m *SearchResponseV3_Result_Value) GetStr() string { if x, ok := m.GetVal().(*SearchResponseV3_Result_Value_Str); ok { return x.Str } return "" } func (m *SearchResponseV3_Result_Value) GetInt() int64 { if x, ok := m.GetVal().(*SearchResponseV3_Result_Value_Int); ok { return x.Int } return 0 } func (m *SearchResponseV3_Result_Value) GetReal() float64 { if x, ok := m.GetVal().(*SearchResponseV3_Result_Value_Real); ok { return x.Real } return 0 } // XXX_OneofFuncs is for the internal use of the proto package. func (*SearchResponseV3_Result_Value) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _SearchResponseV3_Result_Value_OneofMarshaler, _SearchResponseV3_Result_Value_OneofUnmarshaler, _SearchResponseV3_Result_Value_OneofSizer, []interface{}{ (*SearchResponseV3_Result_Value_Str)(nil), (*SearchResponseV3_Result_Value_Int)(nil), (*SearchResponseV3_Result_Value_Real)(nil), } } func _SearchResponseV3_Result_Value_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*SearchResponseV3_Result_Value) // val switch x := m.Val.(type) { case *SearchResponseV3_Result_Value_Str: b.EncodeVarint(1<<3 | proto.WireBytes) b.EncodeStringBytes(x.Str) case *SearchResponseV3_Result_Value_Int: b.EncodeVarint(2<<3 | proto.WireVarint) b.EncodeVarint(uint64(x.Int)) case *SearchResponseV3_Result_Value_Real: b.EncodeVarint(3<<3 | proto.WireFixed64) b.EncodeFixed64(math.Float64bits(x.Real)) case nil: default: return fmt.Errorf("SearchResponseV3_Result_Value.Val has unexpected type %T", x) } return nil } func _SearchResponseV3_Result_Value_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*SearchResponseV3_Result_Value) switch tag { case 1: // val.str if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } x, err := b.DecodeStringBytes() m.Val = &SearchResponseV3_Result_Value_Str{x} return true, err case 2: // val.int if wire != proto.WireVarint { return true, proto.ErrInternalBadWireType } x, err := b.DecodeVarint() m.Val = &SearchResponseV3_Result_Value_Int{int64(x)} return true, err case 3: // val.real if wire != proto.WireFixed64 { return true, proto.ErrInternalBadWireType } x, err := b.DecodeFixed64() m.Val = &SearchResponseV3_Result_Value_Real{math.Float64frombits(x)} return true, err default: return false, nil } } func _SearchResponseV3_Result_Value_OneofSizer(msg proto.Message) (n int) { m := msg.(*SearchResponseV3_Result_Value) // val switch x := m.Val.(type) { case *SearchResponseV3_Result_Value_Str: n += proto.SizeVarint(1<<3 | proto.WireBytes) n += proto.SizeVarint(uint64(len(x.Str))) n += len(x.Str) case *SearchResponseV3_Result_Value_Int: n += proto.SizeVarint(2<<3 | proto.WireVarint) n += proto.SizeVarint(uint64(x.Int)) case *SearchResponseV3_Result_Value_Real: n += proto.SizeVarint(3<<3 | proto.WireFixed64) n += 8 case nil: default: panic(fmt.Sprintf("proto: 
unexpected type %T in oneof", x)) } return n } type SearchRequestV3 struct { Query string `protobuf:"bytes,1,opt,name=query" json:"query,omitempty"` } func (m *SearchRequestV3) Reset() { *m = SearchRequestV3{} } func (m *SearchRequestV3) String() string { return proto.CompactTextString(m) } func (*SearchRequestV3) ProtoMessage() {} func (*SearchRequestV3) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} } func (m *SearchRequestV3) GetQuery() string { if m != nil { return m.Query } return "" } func init() { proto.RegisterType((*SearchResponseV3)(nil), "grpc.testingv3.SearchResponseV3") proto.RegisterType((*SearchResponseV3_Result)(nil), "grpc.testingv3.SearchResponseV3.Result") proto.RegisterType((*SearchResponseV3_Result_Value)(nil), "grpc.testingv3.SearchResponseV3.Result.Value") proto.RegisterType((*SearchRequestV3)(nil), "grpc.testingv3.SearchRequestV3") proto.RegisterEnum("grpc.testingv3.SearchResponseV3_State", SearchResponseV3_State_name, SearchResponseV3_State_value) } // Reference imports to suppress errors if they are not otherwise used. var _ context.Context var _ grpc.ClientConn // This is a compile-time assertion to ensure that this generated file // is compatible with the grpc package it is being compiled against. const _ = grpc.SupportPackageIsVersion3 // Client API for SearchServiceV3 service type SearchServiceV3Client interface { Search(ctx context.Context, in *SearchRequestV3, opts ...grpc.CallOption) (*SearchResponseV3, error) StreamingSearch(ctx context.Context, opts ...grpc.CallOption) (SearchServiceV3_StreamingSearchClient, error) } type searchServiceV3Client struct { cc *grpc.ClientConn } func NewSearchServiceV3Client(cc *grpc.ClientConn) SearchServiceV3Client { return &searchServiceV3Client{cc} } func (c *searchServiceV3Client) Search(ctx context.Context, in *SearchRequestV3, opts ...grpc.CallOption) (*SearchResponseV3, error) { out := new(SearchResponseV3) err := grpc.Invoke(ctx, "/grpc.testingv3.SearchServiceV3/Search", in, out, c.cc, opts...) if err != nil { return nil, err } return out, nil } func (c *searchServiceV3Client) StreamingSearch(ctx context.Context, opts ...grpc.CallOption) (SearchServiceV3_StreamingSearchClient, error) { stream, err := grpc.NewClientStream(ctx, &_SearchServiceV3_serviceDesc.Streams[0], c.cc, "/grpc.testingv3.SearchServiceV3/StreamingSearch", opts...) 
if err != nil { return nil, err } x := &searchServiceV3StreamingSearchClient{stream} return x, nil } type SearchServiceV3_StreamingSearchClient interface { Send(*SearchRequestV3) error Recv() (*SearchResponseV3, error) grpc.ClientStream } type searchServiceV3StreamingSearchClient struct { grpc.ClientStream } func (x *searchServiceV3StreamingSearchClient) Send(m *SearchRequestV3) error { return x.ClientStream.SendMsg(m) } func (x *searchServiceV3StreamingSearchClient) Recv() (*SearchResponseV3, error) { m := new(SearchResponseV3) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } // Server API for SearchServiceV3 service type SearchServiceV3Server interface { Search(context.Context, *SearchRequestV3) (*SearchResponseV3, error) StreamingSearch(SearchServiceV3_StreamingSearchServer) error } func RegisterSearchServiceV3Server(s *grpc.Server, srv SearchServiceV3Server) { s.RegisterService(&_SearchServiceV3_serviceDesc, srv) } func _SearchServiceV3_Search_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(SearchRequestV3) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(SearchServiceV3Server).Search(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/grpc.testingv3.SearchServiceV3/Search", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(SearchServiceV3Server).Search(ctx, req.(*SearchRequestV3)) } return interceptor(ctx, in, info, handler) } func _SearchServiceV3_StreamingSearch_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(SearchServiceV3Server).StreamingSearch(&searchServiceV3StreamingSearchServer{stream}) } type SearchServiceV3_StreamingSearchServer interface { Send(*SearchResponseV3) error Recv() (*SearchRequestV3, error) grpc.ServerStream } type searchServiceV3StreamingSearchServer struct { grpc.ServerStream } func (x *searchServiceV3StreamingSearchServer) Send(m *SearchResponseV3) error { return x.ServerStream.SendMsg(m) } func (x *searchServiceV3StreamingSearchServer) Recv() (*SearchRequestV3, error) { m := new(SearchRequestV3) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } var _SearchServiceV3_serviceDesc = grpc.ServiceDesc{ ServiceName: "grpc.testingv3.SearchServiceV3", HandlerType: (*SearchServiceV3Server)(nil), Methods: []grpc.MethodDesc{ { MethodName: "Search", Handler: _SearchServiceV3_Search_Handler, }, }, Streams: []grpc.StreamDesc{ { StreamName: "StreamingSearch", Handler: _SearchServiceV3_StreamingSearch_Handler, ServerStreams: true, ClientStreams: true, }, }, Metadata: fileDescriptor0, } func init() { proto.RegisterFile("testv3.proto", fileDescriptor0) } var fileDescriptor0 = []byte{ // 416 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xac, 0x93, 0xd1, 0x6a, 0xd4, 0x40, 0x14, 0x86, 0x77, 0x36, 0x9b, 0x6d, 0xf7, 0xac, 0xb6, 0x61, 0xe8, 0x45, 0xc8, 0x8d, 0x61, 0x2f, 0x6c, 0x10, 0x0c, 0x92, 0x20, 0x88, 0x78, 0x53, 0x65, 0x65, 0xa1, 0x75, 0xc5, 0x89, 0xae, 0xde, 0x8e, 0xeb, 0x61, 0x8d, 0x4d, 0xb3, 0xe9, 0xcc, 0x49, 0x60, 0x9f, 0xc5, 0x17, 0xf1, 0x55, 0x7c, 0x1b, 0x99, 0x99, 0xa6, 0x50, 0x41, 0xba, 0x17, 0xde, 0xcd, 0x7f, 0x38, 0xff, 0x37, 0xff, 0x3f, 0x24, 0xf0, 0x80, 0x50, 0x53, 0x97, 0xa7, 0x8d, 0xda, 0xd2, 0x96, 0x1f, 0x6d, 0x54, 0xb3, 0x4e, 0xcd, 0xa8, 0xac, 0x37, 0x5d, 0x3e, 0xfb, 0x39, 0x82, 0xa0, 0x40, 0xa9, 0xd6, 0xdf, 0x05, 0xea, 0x66, 
0x5b, 0x6b, 0x5c, 0xe5, 0xfc, 0x0c, 0x0e, 0x14, 0xea, 0xb6, 0x22, 0x1d, 0xb2, 0xd8, 0x4b, 0xa6, 0xd9, 0x69, 0x7a, 0xd7, 0x96, 0xfe, 0x6d, 0x49, 0x85, 0xdd, 0x17, 0xbd, 0x8f, 0xbf, 0x02, 0x5f, 0x93, 0x24, 0x0c, 0x87, 0x31, 0x4b, 0x8e, 0xb2, 0xc7, 0xf7, 0x02, 0x0a, 0xb3, 0x2d, 0x9c, 0x29, 0xfa, 0x3d, 0x84, 0xb1, 0x23, 0xf2, 0x00, 0xbc, 0x56, 0x55, 0x21, 0x8b, 0x59, 0x32, 0x11, 0xe6, 0xc8, 0x4f, 0xc0, 0xa7, 0x92, 0x2a, 0x87, 0x9e, 0x08, 0x27, 0x78, 0x04, 0x87, 0xba, 0x2e, 0x9b, 0x06, 0x49, 0x87, 0x5e, 0xec, 0x25, 0x13, 0x71, 0xab, 0xf9, 0x07, 0x38, 0xbc, 0x42, 0x92, 0xdf, 0x24, 0xc9, 0x70, 0x64, 0x0b, 0x3d, 0xdf, 0xb3, 0x50, 0xfa, 0xee, 0xc6, 0x37, 0xaf, 0x49, 0xed, 0xc4, 0x2d, 0x26, 0xba, 0x00, 0x7f, 0x25, 0xab, 0x16, 0x39, 0x07, 0x4f, 0x93, 0x72, 0xf9, 0x16, 0x03, 0x61, 0x84, 0x99, 0x95, 0x35, 0xd9, 0x7c, 0x9e, 0x99, 0x95, 0x35, 0xf1, 0x13, 0x18, 0x29, 0x94, 0x55, 0xe8, 0xc5, 0x2c, 0x61, 0x8b, 0x81, 0xb0, 0xea, 0xb5, 0x0f, 0x5e, 0x27, 0xab, 0xe8, 0x07, 0x3c, 0xbc, 0x73, 0x91, 0x69, 0x7d, 0x89, 0xbb, 0xbe, 0xf5, 0x25, 0xee, 0xf8, 0x1b, 0xf0, 0x3b, 0x73, 0xa1, 0xa5, 0x4e, 0xb3, 0xa7, 0xfb, 0x16, 0xb0, 0x29, 0x85, 0xf3, 0xbe, 0x1c, 0xbe, 0x60, 0xb3, 0x27, 0xe0, 0xdb, 0xb7, 0xe6, 0x53, 0x38, 0xf8, 0xb4, 0x3c, 0x5f, 0xbe, 0xff, 0xbc, 0x0c, 0x06, 0x7c, 0x02, 0xfe, 0x5b, 0x31, 0x2f, 0x16, 0x01, 0x33, 0xc7, 0xe2, 0xe3, 0xd9, 0xc5, 0x3c, 0x18, 0xce, 0x4e, 0xe1, 0xb8, 0xe7, 0x5e, 0xb7, 0xa8, 0x69, 0x95, 0x9b, 0xd7, 0xbf, 0x6e, 0x51, 0xf5, 0xd9, 0x9c, 0xc8, 0x7e, 0xb1, 0x7e, 0xb3, 0x40, 0xd5, 0x95, 0x6b, 0xf3, 0x15, 0x9d, 0xc3, 0xd8, 0x8d, 0xf8, 0xa3, 0x7f, 0x85, 0xbd, 0x81, 0x46, 0xf1, 0x7d, 0x6d, 0xf8, 0x17, 0x38, 0x2e, 0x48, 0xa1, 0xbc, 0x2a, 0xeb, 0xcd, 0x7f, 0xa3, 0x26, 0xec, 0x19, 0xfb, 0x3a, 0xb6, 0x3f, 0x46, 0xfe, 0x27, 0x00, 0x00, 0xff, 0xff, 0xed, 0xa2, 0x8d, 0x75, 0x28, 0x03, 0x00, 0x00, } grpc-go-1.22.1/reflection/grpc_testingv3/testv3.proto000066400000000000000000000012331351635773100225550ustar00rootroot00000000000000syntax = "proto3"; package grpc.testingv3; message SearchResponseV3 { message Result { string url = 1; string title = 2; repeated string snippets = 3; message Value { oneof val { string str = 1; int64 int = 2; double real = 3; } } map metadata = 4; } enum State { UNKNOWN = 0; FRESH = 1; STALE = 2; } repeated Result results = 1; State state = 2; } message SearchRequestV3 { string query = 1; } service SearchServiceV3 { rpc Search(SearchRequestV3) returns (SearchResponseV3); rpc StreamingSearch(stream SearchRequestV3) returns (stream SearchResponseV3); } grpc-go-1.22.1/reflection/serverreflection.go000066400000000000000000000316761351635773100212250ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ //go:generate protoc --go_out=plugins=grpc:. grpc_reflection_v1alpha/reflection.proto /* Package reflection implements server reflection service. The service implemented is defined in: https://github.com/grpc/grpc/blob/master/src/proto/grpc/reflection/v1alpha/reflection.proto. 
To register server reflection on a gRPC server: import "google.golang.org/grpc/reflection" s := grpc.NewServer() pb.RegisterYourOwnServer(s, &server{}) // Register reflection service on gRPC server. reflection.Register(s) s.Serve(lis) */ package reflection // import "google.golang.org/grpc/reflection" import ( "bytes" "compress/gzip" "fmt" "io" "io/ioutil" "reflect" "sort" "sync" "github.com/golang/protobuf/proto" dpb "github.com/golang/protobuf/protoc-gen-go/descriptor" "google.golang.org/grpc" "google.golang.org/grpc/codes" rpb "google.golang.org/grpc/reflection/grpc_reflection_v1alpha" "google.golang.org/grpc/status" ) type serverReflectionServer struct { s *grpc.Server initSymbols sync.Once serviceNames []string symbols map[string]*dpb.FileDescriptorProto // map of fully-qualified names to files } // Register registers the server reflection service on the given gRPC server. func Register(s *grpc.Server) { rpb.RegisterServerReflectionServer(s, &serverReflectionServer{ s: s, }) } // protoMessage is used for type assertion on proto messages. // Generated proto message implements function Descriptor(), but Descriptor() // is not part of interface proto.Message. This interface is needed to // call Descriptor(). type protoMessage interface { Descriptor() ([]byte, []int) } func (s *serverReflectionServer) getSymbols() (svcNames []string, symbolIndex map[string]*dpb.FileDescriptorProto) { s.initSymbols.Do(func() { serviceInfo := s.s.GetServiceInfo() s.symbols = map[string]*dpb.FileDescriptorProto{} s.serviceNames = make([]string, 0, len(serviceInfo)) processed := map[string]struct{}{} for svc, info := range serviceInfo { s.serviceNames = append(s.serviceNames, svc) fdenc, ok := parseMetadata(info.Metadata) if !ok { continue } fd, err := decodeFileDesc(fdenc) if err != nil { continue } s.processFile(fd, processed) } sort.Strings(s.serviceNames) }) return s.serviceNames, s.symbols } func (s *serverReflectionServer) processFile(fd *dpb.FileDescriptorProto, processed map[string]struct{}) { filename := fd.GetName() if _, ok := processed[filename]; ok { return } processed[filename] = struct{}{} prefix := fd.GetPackage() for _, msg := range fd.MessageType { s.processMessage(fd, prefix, msg) } for _, en := range fd.EnumType { s.processEnum(fd, prefix, en) } for _, ext := range fd.Extension { s.processField(fd, prefix, ext) } for _, svc := range fd.Service { svcName := fqn(prefix, svc.GetName()) s.symbols[svcName] = fd for _, meth := range svc.Method { name := fqn(svcName, meth.GetName()) s.symbols[name] = fd } } for _, dep := range fd.Dependency { fdenc := proto.FileDescriptor(dep) fdDep, err := decodeFileDesc(fdenc) if err != nil { continue } s.processFile(fdDep, processed) } } func (s *serverReflectionServer) processMessage(fd *dpb.FileDescriptorProto, prefix string, msg *dpb.DescriptorProto) { msgName := fqn(prefix, msg.GetName()) s.symbols[msgName] = fd for _, nested := range msg.NestedType { s.processMessage(fd, msgName, nested) } for _, en := range msg.EnumType { s.processEnum(fd, msgName, en) } for _, ext := range msg.Extension { s.processField(fd, msgName, ext) } for _, fld := range msg.Field { s.processField(fd, msgName, fld) } for _, oneof := range msg.OneofDecl { oneofName := fqn(msgName, oneof.GetName()) s.symbols[oneofName] = fd } } func (s *serverReflectionServer) processEnum(fd *dpb.FileDescriptorProto, prefix string, en *dpb.EnumDescriptorProto) { enName := fqn(prefix, en.GetName()) s.symbols[enName] = fd for _, val := range en.Value { valName := fqn(enName, val.GetName()) 
s.symbols[valName] = fd } } func (s *serverReflectionServer) processField(fd *dpb.FileDescriptorProto, prefix string, fld *dpb.FieldDescriptorProto) { fldName := fqn(prefix, fld.GetName()) s.symbols[fldName] = fd } func fqn(prefix, name string) string { if prefix == "" { return name } return prefix + "." + name } // fileDescForType gets the file descriptor for the given type. // The given type should be a proto message. func (s *serverReflectionServer) fileDescForType(st reflect.Type) (*dpb.FileDescriptorProto, error) { m, ok := reflect.Zero(reflect.PtrTo(st)).Interface().(protoMessage) if !ok { return nil, fmt.Errorf("failed to create message from type: %v", st) } enc, _ := m.Descriptor() return decodeFileDesc(enc) } // decodeFileDesc does decompression and unmarshalling on the given // file descriptor byte slice. func decodeFileDesc(enc []byte) (*dpb.FileDescriptorProto, error) { raw, err := decompress(enc) if err != nil { return nil, fmt.Errorf("failed to decompress enc: %v", err) } fd := new(dpb.FileDescriptorProto) if err := proto.Unmarshal(raw, fd); err != nil { return nil, fmt.Errorf("bad descriptor: %v", err) } return fd, nil } // decompress does gzip decompression. func decompress(b []byte) ([]byte, error) { r, err := gzip.NewReader(bytes.NewReader(b)) if err != nil { return nil, fmt.Errorf("bad gzipped descriptor: %v", err) } out, err := ioutil.ReadAll(r) if err != nil { return nil, fmt.Errorf("bad gzipped descriptor: %v", err) } return out, nil } func typeForName(name string) (reflect.Type, error) { pt := proto.MessageType(name) if pt == nil { return nil, fmt.Errorf("unknown type: %q", name) } st := pt.Elem() return st, nil } func fileDescContainingExtension(st reflect.Type, ext int32) (*dpb.FileDescriptorProto, error) { m, ok := reflect.Zero(reflect.PtrTo(st)).Interface().(proto.Message) if !ok { return nil, fmt.Errorf("failed to create message from type: %v", st) } var extDesc *proto.ExtensionDesc for id, desc := range proto.RegisteredExtensions(m) { if id == ext { extDesc = desc break } } if extDesc == nil { return nil, fmt.Errorf("failed to find registered extension for extension number %v", ext) } return decodeFileDesc(proto.FileDescriptor(extDesc.Filename)) } func (s *serverReflectionServer) allExtensionNumbersForType(st reflect.Type) ([]int32, error) { m, ok := reflect.Zero(reflect.PtrTo(st)).Interface().(proto.Message) if !ok { return nil, fmt.Errorf("failed to create message from type: %v", st) } exts := proto.RegisteredExtensions(m) out := make([]int32, 0, len(exts)) for id := range exts { out = append(out, id) } return out, nil } // fileDescEncodingByFilename finds the file descriptor for given filename, // does marshalling on it and returns the marshalled result. func (s *serverReflectionServer) fileDescEncodingByFilename(name string) ([]byte, error) { enc := proto.FileDescriptor(name) if enc == nil { return nil, fmt.Errorf("unknown file: %v", name) } fd, err := decodeFileDesc(enc) if err != nil { return nil, err } return proto.Marshal(fd) } // parseMetadata finds the file descriptor bytes specified meta. // For SupportPackageIsVersion4, m is the name of the proto file, we // call proto.FileDescriptor to get the byte slice. // For SupportPackageIsVersion3, m is a byte slice itself. func parseMetadata(meta interface{}) ([]byte, bool) { // Check if meta is the file name. if fileNameForMeta, ok := meta.(string); ok { return proto.FileDescriptor(fileNameForMeta), true } // Check if meta is the byte slice. 
if enc, ok := meta.([]byte); ok { return enc, true } return nil, false } // fileDescEncodingContainingSymbol finds the file descriptor containing the given symbol, // does marshalling on it and returns the marshalled result. // The given symbol can be a type, a service or a method. func (s *serverReflectionServer) fileDescEncodingContainingSymbol(name string) ([]byte, error) { _, symbols := s.getSymbols() fd := symbols[name] if fd == nil { // Check if it's a type name that was not present in the // transitive dependencies of the registered services. if st, err := typeForName(name); err == nil { fd, err = s.fileDescForType(st) if err != nil { return nil, err } } } if fd == nil { return nil, fmt.Errorf("unknown symbol: %v", name) } return proto.Marshal(fd) } // fileDescEncodingContainingExtension finds the file descriptor containing given extension, // does marshalling on it and returns the marshalled result. func (s *serverReflectionServer) fileDescEncodingContainingExtension(typeName string, extNum int32) ([]byte, error) { st, err := typeForName(typeName) if err != nil { return nil, err } fd, err := fileDescContainingExtension(st, extNum) if err != nil { return nil, err } return proto.Marshal(fd) } // allExtensionNumbersForTypeName returns all extension numbers for the given type. func (s *serverReflectionServer) allExtensionNumbersForTypeName(name string) ([]int32, error) { st, err := typeForName(name) if err != nil { return nil, err } extNums, err := s.allExtensionNumbersForType(st) if err != nil { return nil, err } return extNums, nil } // ServerReflectionInfo is the reflection service handler. func (s *serverReflectionServer) ServerReflectionInfo(stream rpb.ServerReflection_ServerReflectionInfoServer) error { for { in, err := stream.Recv() if err == io.EOF { return nil } if err != nil { return err } out := &rpb.ServerReflectionResponse{ ValidHost: in.Host, OriginalRequest: in, } switch req := in.MessageRequest.(type) { case *rpb.ServerReflectionRequest_FileByFilename: b, err := s.fileDescEncodingByFilename(req.FileByFilename) if err != nil { out.MessageResponse = &rpb.ServerReflectionResponse_ErrorResponse{ ErrorResponse: &rpb.ErrorResponse{ ErrorCode: int32(codes.NotFound), ErrorMessage: err.Error(), }, } } else { out.MessageResponse = &rpb.ServerReflectionResponse_FileDescriptorResponse{ FileDescriptorResponse: &rpb.FileDescriptorResponse{FileDescriptorProto: [][]byte{b}}, } } case *rpb.ServerReflectionRequest_FileContainingSymbol: b, err := s.fileDescEncodingContainingSymbol(req.FileContainingSymbol) if err != nil { out.MessageResponse = &rpb.ServerReflectionResponse_ErrorResponse{ ErrorResponse: &rpb.ErrorResponse{ ErrorCode: int32(codes.NotFound), ErrorMessage: err.Error(), }, } } else { out.MessageResponse = &rpb.ServerReflectionResponse_FileDescriptorResponse{ FileDescriptorResponse: &rpb.FileDescriptorResponse{FileDescriptorProto: [][]byte{b}}, } } case *rpb.ServerReflectionRequest_FileContainingExtension: typeName := req.FileContainingExtension.ContainingType extNum := req.FileContainingExtension.ExtensionNumber b, err := s.fileDescEncodingContainingExtension(typeName, extNum) if err != nil { out.MessageResponse = &rpb.ServerReflectionResponse_ErrorResponse{ ErrorResponse: &rpb.ErrorResponse{ ErrorCode: int32(codes.NotFound), ErrorMessage: err.Error(), }, } } else { out.MessageResponse = &rpb.ServerReflectionResponse_FileDescriptorResponse{ FileDescriptorResponse: &rpb.FileDescriptorResponse{FileDescriptorProto: [][]byte{b}}, } } case 
*rpb.ServerReflectionRequest_AllExtensionNumbersOfType: extNums, err := s.allExtensionNumbersForTypeName(req.AllExtensionNumbersOfType) if err != nil { out.MessageResponse = &rpb.ServerReflectionResponse_ErrorResponse{ ErrorResponse: &rpb.ErrorResponse{ ErrorCode: int32(codes.NotFound), ErrorMessage: err.Error(), }, } } else { out.MessageResponse = &rpb.ServerReflectionResponse_AllExtensionNumbersResponse{ AllExtensionNumbersResponse: &rpb.ExtensionNumberResponse{ BaseTypeName: req.AllExtensionNumbersOfType, ExtensionNumber: extNums, }, } } case *rpb.ServerReflectionRequest_ListServices: svcNames, _ := s.getSymbols() serviceResponses := make([]*rpb.ServiceResponse, len(svcNames)) for i, n := range svcNames { serviceResponses[i] = &rpb.ServiceResponse{ Name: n, } } out.MessageResponse = &rpb.ServerReflectionResponse_ListServicesResponse{ ListServicesResponse: &rpb.ListServiceResponse{ Service: serviceResponses, }, } default: return status.Errorf(codes.InvalidArgument, "invalid MessageRequest: %v", in.MessageRequest) } if err := stream.Send(out); err != nil { return err } } } grpc-go-1.22.1/reflection/serverreflection_test.go000066400000000000000000000407411351635773100222550ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ //go:generate protoc -I grpc_testing --go_out=plugins=grpc:grpc_testing/ grpc_testing/proto2.proto grpc_testing/proto2_ext.proto grpc_testing/proto2_ext2.proto grpc_testing/test.proto // Note: grpc_testingv3/testv3.pb.go is not re-generated because it was // intentionally generated by an older version of protoc-gen-go. package reflection import ( "context" "fmt" "net" "reflect" "sort" "testing" "github.com/golang/protobuf/proto" dpb "github.com/golang/protobuf/protoc-gen-go/descriptor" "google.golang.org/grpc" rpb "google.golang.org/grpc/reflection/grpc_reflection_v1alpha" pb "google.golang.org/grpc/reflection/grpc_testing" pbv3 "google.golang.org/grpc/reflection/grpc_testingv3" ) var ( s = &serverReflectionServer{} // fileDescriptor of each test proto file. fdTest *dpb.FileDescriptorProto fdTestv3 *dpb.FileDescriptorProto fdProto2 *dpb.FileDescriptorProto fdProto2Ext *dpb.FileDescriptorProto fdProto2Ext2 *dpb.FileDescriptorProto // fileDescriptor marshalled. 
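// Each of these byte slices holds the re-marshalled (uncompressed)
// FileDescriptorProto produced by loadFileDesc below; the end-to-end
// tests use them as the expected payloads returned over the reflection
// stream.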
fdTestByte []byte fdTestv3Byte []byte fdProto2Byte []byte fdProto2ExtByte []byte fdProto2Ext2Byte []byte ) func loadFileDesc(filename string) (*dpb.FileDescriptorProto, []byte) { enc := proto.FileDescriptor(filename) if enc == nil { panic(fmt.Sprintf("failed to find fd for file: %v", filename)) } fd, err := decodeFileDesc(enc) if err != nil { panic(fmt.Sprintf("failed to decode enc: %v", err)) } b, err := proto.Marshal(fd) if err != nil { panic(fmt.Sprintf("failed to marshal fd: %v", err)) } return fd, b } func init() { fdTest, fdTestByte = loadFileDesc("test.proto") fdTestv3, fdTestv3Byte = loadFileDesc("testv3.proto") fdProto2, fdProto2Byte = loadFileDesc("proto2.proto") fdProto2Ext, fdProto2ExtByte = loadFileDesc("proto2_ext.proto") fdProto2Ext2, fdProto2Ext2Byte = loadFileDesc("proto2_ext2.proto") } func TestFileDescForType(t *testing.T) { for _, test := range []struct { st reflect.Type wantFd *dpb.FileDescriptorProto }{ {reflect.TypeOf(pb.SearchResponse_Result{}), fdTest}, {reflect.TypeOf(pb.ToBeExtended{}), fdProto2}, } { fd, err := s.fileDescForType(test.st) if err != nil || !proto.Equal(fd, test.wantFd) { t.Errorf("fileDescForType(%q) = %q, %v, want %q, ", test.st, fd, err, test.wantFd) } } } func TestTypeForName(t *testing.T) { for _, test := range []struct { name string want reflect.Type }{ {"grpc.testing.SearchResponse", reflect.TypeOf(pb.SearchResponse{})}, } { r, err := typeForName(test.name) if err != nil || r != test.want { t.Errorf("typeForName(%q) = %q, %v, want %q, ", test.name, r, err, test.want) } } } func TestTypeForNameNotFound(t *testing.T) { for _, test := range []string{ "grpc.testing.not_exiting", } { _, err := typeForName(test) if err == nil { t.Errorf("typeForName(%q) = _, %v, want _, ", test, err) } } } func TestFileDescContainingExtension(t *testing.T) { for _, test := range []struct { st reflect.Type extNum int32 want *dpb.FileDescriptorProto }{ {reflect.TypeOf(pb.ToBeExtended{}), 13, fdProto2Ext}, {reflect.TypeOf(pb.ToBeExtended{}), 17, fdProto2Ext}, {reflect.TypeOf(pb.ToBeExtended{}), 19, fdProto2Ext}, {reflect.TypeOf(pb.ToBeExtended{}), 23, fdProto2Ext2}, {reflect.TypeOf(pb.ToBeExtended{}), 29, fdProto2Ext2}, } { fd, err := fileDescContainingExtension(test.st, test.extNum) if err != nil || !proto.Equal(fd, test.want) { t.Errorf("fileDescContainingExtension(%q) = %q, %v, want %q, ", test.st, fd, err, test.want) } } } // intArray is used to sort []int32 type intArray []int32 func (s intArray) Len() int { return len(s) } func (s intArray) Swap(i, j int) { s[i], s[j] = s[j], s[i] } func (s intArray) Less(i, j int) bool { return s[i] < s[j] } func TestAllExtensionNumbersForType(t *testing.T) { for _, test := range []struct { st reflect.Type want []int32 }{ {reflect.TypeOf(pb.ToBeExtended{}), []int32{13, 17, 19, 23, 29}}, } { r, err := s.allExtensionNumbersForType(test.st) sort.Sort(intArray(r)) if err != nil || !reflect.DeepEqual(r, test.want) { t.Errorf("allExtensionNumbersForType(%q) = %v, %v, want %v, ", test.st, r, err, test.want) } } } // Do end2end tests. 
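// The end-to-end tests below follow the same flow an external reflection
// client (grpcurl or grpc_cli, for instance) would use: register the
// reflection service on a server, open a ServerReflectionInfo stream, send
// one ServerReflectionRequest per query, and switch on the typed
// MessageResponse. A minimal sketch of that interaction, assuming conn is
// an already-dialed *grpc.ClientConn (the variable names are illustrative
// only):
//
//	stream, err := rpb.NewServerReflectionClient(conn).ServerReflectionInfo(context.Background())
//	if err != nil {
//		// handle error
//	}
//	_ = stream.Send(&rpb.ServerReflectionRequest{
//		MessageRequest: &rpb.ServerReflectionRequest_ListServices{},
//	})
//	resp, _ := stream.Recv()
//	services := resp.GetListServicesResponse().GetService() // []*rpb.ServiceResponse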
type server struct{} func (s *server) Search(ctx context.Context, in *pb.SearchRequest) (*pb.SearchResponse, error) { return &pb.SearchResponse{}, nil } func (s *server) StreamingSearch(stream pb.SearchService_StreamingSearchServer) error { return nil } type serverV3 struct{} func (s *serverV3) Search(ctx context.Context, in *pbv3.SearchRequestV3) (*pbv3.SearchResponseV3, error) { return &pbv3.SearchResponseV3{}, nil } func (s *serverV3) StreamingSearch(stream pbv3.SearchServiceV3_StreamingSearchServer) error { return nil } func TestReflectionEnd2end(t *testing.T) { // Start server. lis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("failed to listen: %v", err) } s := grpc.NewServer() pb.RegisterSearchServiceServer(s, &server{}) pbv3.RegisterSearchServiceV3Server(s, &serverV3{}) // Register reflection service on s. Register(s) go s.Serve(lis) // Create client. conn, err := grpc.Dial(lis.Addr().String(), grpc.WithInsecure()) if err != nil { t.Fatalf("cannot connect to server: %v", err) } defer conn.Close() c := rpb.NewServerReflectionClient(conn) stream, err := c.ServerReflectionInfo(context.Background(), grpc.WaitForReady(true)) if err != nil { t.Fatalf("cannot get ServerReflectionInfo: %v", err) } testFileByFilename(t, stream) testFileByFilenameError(t, stream) testFileContainingSymbol(t, stream) testFileContainingSymbolError(t, stream) testFileContainingExtension(t, stream) testFileContainingExtensionError(t, stream) testAllExtensionNumbersOfType(t, stream) testAllExtensionNumbersOfTypeError(t, stream) testListServices(t, stream) s.Stop() } func testFileByFilename(t *testing.T, stream rpb.ServerReflection_ServerReflectionInfoClient) { for _, test := range []struct { filename string want []byte }{ {"test.proto", fdTestByte}, {"proto2.proto", fdProto2Byte}, {"proto2_ext.proto", fdProto2ExtByte}, } { if err := stream.Send(&rpb.ServerReflectionRequest{ MessageRequest: &rpb.ServerReflectionRequest_FileByFilename{ FileByFilename: test.filename, }, }); err != nil { t.Fatalf("failed to send request: %v", err) } r, err := stream.Recv() if err != nil { // io.EOF is not ok. t.Fatalf("failed to recv response: %v", err) } switch r.MessageResponse.(type) { case *rpb.ServerReflectionResponse_FileDescriptorResponse: if !reflect.DeepEqual(r.GetFileDescriptorResponse().FileDescriptorProto[0], test.want) { t.Errorf("FileByFilename(%v)\nreceived: %q,\nwant: %q", test.filename, r.GetFileDescriptorResponse().FileDescriptorProto[0], test.want) } default: t.Errorf("FileByFilename(%v) = %v, want type ", test.filename, r.MessageResponse) } } } func testFileByFilenameError(t *testing.T, stream rpb.ServerReflection_ServerReflectionInfoClient) { for _, test := range []string{ "test.poto", "proo2.proto", "proto2_et.proto", } { if err := stream.Send(&rpb.ServerReflectionRequest{ MessageRequest: &rpb.ServerReflectionRequest_FileByFilename{ FileByFilename: test, }, }); err != nil { t.Fatalf("failed to send request: %v", err) } r, err := stream.Recv() if err != nil { // io.EOF is not ok. 
t.Fatalf("failed to recv response: %v", err) } switch r.MessageResponse.(type) { case *rpb.ServerReflectionResponse_ErrorResponse: default: t.Errorf("FileByFilename(%v) = %v, want type ", test, r.MessageResponse) } } } func testFileContainingSymbol(t *testing.T, stream rpb.ServerReflection_ServerReflectionInfoClient) { for _, test := range []struct { symbol string want []byte }{ {"grpc.testing.SearchService", fdTestByte}, {"grpc.testing.SearchService.Search", fdTestByte}, {"grpc.testing.SearchService.StreamingSearch", fdTestByte}, {"grpc.testing.SearchResponse", fdTestByte}, {"grpc.testing.ToBeExtended", fdProto2Byte}, // Test support package v3. {"grpc.testingv3.SearchServiceV3", fdTestv3Byte}, {"grpc.testingv3.SearchServiceV3.Search", fdTestv3Byte}, {"grpc.testingv3.SearchServiceV3.StreamingSearch", fdTestv3Byte}, {"grpc.testingv3.SearchResponseV3", fdTestv3Byte}, // search for field, oneof, enum, and enum value symbols, too {"grpc.testingv3.SearchResponseV3.Result.snippets", fdTestv3Byte}, {"grpc.testingv3.SearchResponseV3.Result.Value.val", fdTestv3Byte}, {"grpc.testingv3.SearchResponseV3.Result.Value.str", fdTestv3Byte}, {"grpc.testingv3.SearchResponseV3.State", fdTestv3Byte}, {"grpc.testingv3.SearchResponseV3.State.FRESH", fdTestv3Byte}, } { if err := stream.Send(&rpb.ServerReflectionRequest{ MessageRequest: &rpb.ServerReflectionRequest_FileContainingSymbol{ FileContainingSymbol: test.symbol, }, }); err != nil { t.Fatalf("failed to send request: %v", err) } r, err := stream.Recv() if err != nil { // io.EOF is not ok. t.Fatalf("failed to recv response: %v", err) } switch r.MessageResponse.(type) { case *rpb.ServerReflectionResponse_FileDescriptorResponse: if !reflect.DeepEqual(r.GetFileDescriptorResponse().FileDescriptorProto[0], test.want) { t.Errorf("FileContainingSymbol(%v)\nreceived: %q,\nwant: %q", test.symbol, r.GetFileDescriptorResponse().FileDescriptorProto[0], test.want) } default: t.Errorf("FileContainingSymbol(%v) = %v, want type ", test.symbol, r.MessageResponse) } } } func testFileContainingSymbolError(t *testing.T, stream rpb.ServerReflection_ServerReflectionInfoClient) { for _, test := range []string{ "grpc.testing.SerchService", "grpc.testing.SearchService.SearchE", "grpc.tesing.SearchResponse", "gpc.testing.ToBeExtended", } { if err := stream.Send(&rpb.ServerReflectionRequest{ MessageRequest: &rpb.ServerReflectionRequest_FileContainingSymbol{ FileContainingSymbol: test, }, }); err != nil { t.Fatalf("failed to send request: %v", err) } r, err := stream.Recv() if err != nil { // io.EOF is not ok. 
t.Fatalf("failed to recv response: %v", err) } switch r.MessageResponse.(type) { case *rpb.ServerReflectionResponse_ErrorResponse: default: t.Errorf("FileContainingSymbol(%v) = %v, want type ", test, r.MessageResponse) } } } func testFileContainingExtension(t *testing.T, stream rpb.ServerReflection_ServerReflectionInfoClient) { for _, test := range []struct { typeName string extNum int32 want []byte }{ {"grpc.testing.ToBeExtended", 13, fdProto2ExtByte}, {"grpc.testing.ToBeExtended", 17, fdProto2ExtByte}, {"grpc.testing.ToBeExtended", 19, fdProto2ExtByte}, {"grpc.testing.ToBeExtended", 23, fdProto2Ext2Byte}, {"grpc.testing.ToBeExtended", 29, fdProto2Ext2Byte}, } { if err := stream.Send(&rpb.ServerReflectionRequest{ MessageRequest: &rpb.ServerReflectionRequest_FileContainingExtension{ FileContainingExtension: &rpb.ExtensionRequest{ ContainingType: test.typeName, ExtensionNumber: test.extNum, }, }, }); err != nil { t.Fatalf("failed to send request: %v", err) } r, err := stream.Recv() if err != nil { // io.EOF is not ok. t.Fatalf("failed to recv response: %v", err) } switch r.MessageResponse.(type) { case *rpb.ServerReflectionResponse_FileDescriptorResponse: if !reflect.DeepEqual(r.GetFileDescriptorResponse().FileDescriptorProto[0], test.want) { t.Errorf("FileContainingExtension(%v, %v)\nreceived: %q,\nwant: %q", test.typeName, test.extNum, r.GetFileDescriptorResponse().FileDescriptorProto[0], test.want) } default: t.Errorf("FileContainingExtension(%v, %v) = %v, want type ", test.typeName, test.extNum, r.MessageResponse) } } } func testFileContainingExtensionError(t *testing.T, stream rpb.ServerReflection_ServerReflectionInfoClient) { for _, test := range []struct { typeName string extNum int32 }{ {"grpc.testing.ToBExtended", 17}, {"grpc.testing.ToBeExtended", 15}, } { if err := stream.Send(&rpb.ServerReflectionRequest{ MessageRequest: &rpb.ServerReflectionRequest_FileContainingExtension{ FileContainingExtension: &rpb.ExtensionRequest{ ContainingType: test.typeName, ExtensionNumber: test.extNum, }, }, }); err != nil { t.Fatalf("failed to send request: %v", err) } r, err := stream.Recv() if err != nil { // io.EOF is not ok. t.Fatalf("failed to recv response: %v", err) } switch r.MessageResponse.(type) { case *rpb.ServerReflectionResponse_ErrorResponse: default: t.Errorf("FileContainingExtension(%v, %v) = %v, want type ", test.typeName, test.extNum, r.MessageResponse) } } } func testAllExtensionNumbersOfType(t *testing.T, stream rpb.ServerReflection_ServerReflectionInfoClient) { for _, test := range []struct { typeName string want []int32 }{ {"grpc.testing.ToBeExtended", []int32{13, 17, 19, 23, 29}}, } { if err := stream.Send(&rpb.ServerReflectionRequest{ MessageRequest: &rpb.ServerReflectionRequest_AllExtensionNumbersOfType{ AllExtensionNumbersOfType: test.typeName, }, }); err != nil { t.Fatalf("failed to send request: %v", err) } r, err := stream.Recv() if err != nil { // io.EOF is not ok. 
t.Fatalf("failed to recv response: %v", err) } switch r.MessageResponse.(type) { case *rpb.ServerReflectionResponse_AllExtensionNumbersResponse: extNum := r.GetAllExtensionNumbersResponse().ExtensionNumber sort.Sort(intArray(extNum)) if r.GetAllExtensionNumbersResponse().BaseTypeName != test.typeName || !reflect.DeepEqual(extNum, test.want) { t.Errorf("AllExtensionNumbersOfType(%v)\nreceived: %v,\nwant: {%q %v}", r.GetAllExtensionNumbersResponse(), test.typeName, test.typeName, test.want) } default: t.Errorf("AllExtensionNumbersOfType(%v) = %v, want type ", test.typeName, r.MessageResponse) } } } func testAllExtensionNumbersOfTypeError(t *testing.T, stream rpb.ServerReflection_ServerReflectionInfoClient) { for _, test := range []string{ "grpc.testing.ToBeExtendedE", } { if err := stream.Send(&rpb.ServerReflectionRequest{ MessageRequest: &rpb.ServerReflectionRequest_AllExtensionNumbersOfType{ AllExtensionNumbersOfType: test, }, }); err != nil { t.Fatalf("failed to send request: %v", err) } r, err := stream.Recv() if err != nil { // io.EOF is not ok. t.Fatalf("failed to recv response: %v", err) } switch r.MessageResponse.(type) { case *rpb.ServerReflectionResponse_ErrorResponse: default: t.Errorf("AllExtensionNumbersOfType(%v) = %v, want type ", test, r.MessageResponse) } } } func testListServices(t *testing.T, stream rpb.ServerReflection_ServerReflectionInfoClient) { if err := stream.Send(&rpb.ServerReflectionRequest{ MessageRequest: &rpb.ServerReflectionRequest_ListServices{}, }); err != nil { t.Fatalf("failed to send request: %v", err) } r, err := stream.Recv() if err != nil { // io.EOF is not ok. t.Fatalf("failed to recv response: %v", err) } switch r.MessageResponse.(type) { case *rpb.ServerReflectionResponse_ListServicesResponse: services := r.GetListServicesResponse().Service want := []string{ "grpc.testingv3.SearchServiceV3", "grpc.testing.SearchService", "grpc.reflection.v1alpha.ServerReflection", } // Compare service names in response with want. if len(services) != len(want) { t.Errorf("= %v, want service names: %v", services, want) } m := make(map[string]int) for _, e := range services { m[e.Name]++ } for _, e := range want { if m[e] > 0 { m[e]-- continue } t.Errorf("ListService\nreceived: %v,\nwant: %q", services, want) } default: t.Errorf("ListServices = %v, want type ", r.MessageResponse) } } grpc-go-1.22.1/resolver/000077500000000000000000000000001351635773100150075ustar00rootroot00000000000000grpc-go-1.22.1/resolver/dns/000077500000000000000000000000001351635773100155735ustar00rootroot00000000000000grpc-go-1.22.1/resolver/dns/dns_resolver.go000066400000000000000000000312341351635773100206320ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package dns implements a dns resolver to be installed as the default resolver // in grpc. 
package dns import ( "context" "encoding/json" "errors" "fmt" "net" "os" "strconv" "strings" "sync" "time" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/internal/backoff" "google.golang.org/grpc/internal/grpcrand" "google.golang.org/grpc/resolver" ) func init() { resolver.Register(NewBuilder()) } const ( defaultPort = "443" defaultFreq = time.Minute * 30 defaultDNSSvrPort = "53" golang = "GO" // txtPrefix is the prefix string to be prepended to the host name for txt record lookup. txtPrefix = "_grpc_config." // In DNS, service config is encoded in a TXT record via the mechanism // described in RFC-1464 using the attribute name grpc_config. txtAttribute = "grpc_config=" ) var ( errMissingAddr = errors.New("dns resolver: missing address") // Addresses ending with a colon that is supposed to be the separator // between host and port is not allowed. E.g. "::" is a valid address as // it is an IPv6 address (host only) and "[::]:" is invalid as it ends with // a colon as the host and port separator errEndsWithColon = errors.New("dns resolver: missing port after port-separator colon") ) var ( defaultResolver netResolver = net.DefaultResolver // To prevent excessive re-resolution, we enforce a rate limit on DNS // resolution requests. minDNSResRate = 30 * time.Second ) var customAuthorityDialler = func(authority string) func(ctx context.Context, network, address string) (net.Conn, error) { return func(ctx context.Context, network, address string) (net.Conn, error) { var dialer net.Dialer return dialer.DialContext(ctx, network, authority) } } var customAuthorityResolver = func(authority string) (netResolver, error) { host, port, err := parseTarget(authority, defaultDNSSvrPort) if err != nil { return nil, err } authorityWithPort := net.JoinHostPort(host, port) return &net.Resolver{ PreferGo: true, Dial: customAuthorityDialler(authorityWithPort), }, nil } // NewBuilder creates a dnsBuilder which is used to factory DNS resolvers. func NewBuilder() resolver.Builder { return &dnsBuilder{minFreq: defaultFreq} } type dnsBuilder struct { // minimum frequency of polling the DNS server. minFreq time.Duration } // Build creates and starts a DNS resolver that watches the name resolution of the target. func (b *dnsBuilder) Build(target resolver.Target, cc resolver.ClientConn, opts resolver.BuildOption) (resolver.Resolver, error) { host, port, err := parseTarget(target.Endpoint, defaultPort) if err != nil { return nil, err } // IP address. if net.ParseIP(host) != nil { host, _ = formatIP(host) addr := []resolver.Address{{Addr: host + ":" + port}} i := &ipResolver{ cc: cc, ip: addr, rn: make(chan struct{}, 1), q: make(chan struct{}), } cc.NewAddress(addr) go i.watcher() return i, nil } // DNS address (non-IP). ctx, cancel := context.WithCancel(context.Background()) d := &dnsResolver{ freq: b.minFreq, backoff: backoff.Exponential{MaxDelay: b.minFreq}, host: host, port: port, ctx: ctx, cancel: cancel, cc: cc, t: time.NewTimer(0), rn: make(chan struct{}, 1), disableServiceConfig: opts.DisableServiceConfig, } if target.Authority == "" { d.resolver = defaultResolver } else { d.resolver, err = customAuthorityResolver(target.Authority) if err != nil { return nil, err } } d.wg.Add(1) go d.watcher() return d, nil } // Scheme returns the naming scheme of this resolver builder, which is "dns". 
func (b *dnsBuilder) Scheme() string { return "dns" } type netResolver interface { LookupHost(ctx context.Context, host string) (addrs []string, err error) LookupSRV(ctx context.Context, service, proto, name string) (cname string, addrs []*net.SRV, err error) LookupTXT(ctx context.Context, name string) (txts []string, err error) } // ipResolver watches for the name resolution update for an IP address. type ipResolver struct { cc resolver.ClientConn ip []resolver.Address // rn channel is used by ResolveNow() to force an immediate resolution of the target. rn chan struct{} q chan struct{} } // ResolveNow resend the address it stores, no resolution is needed. func (i *ipResolver) ResolveNow(opt resolver.ResolveNowOption) { select { case i.rn <- struct{}{}: default: } } // Close closes the ipResolver. func (i *ipResolver) Close() { close(i.q) } func (i *ipResolver) watcher() { for { select { case <-i.rn: i.cc.NewAddress(i.ip) case <-i.q: return } } } // dnsResolver watches for the name resolution update for a non-IP target. type dnsResolver struct { freq time.Duration backoff backoff.Exponential retryCount int host string port string resolver netResolver ctx context.Context cancel context.CancelFunc cc resolver.ClientConn // rn channel is used by ResolveNow() to force an immediate resolution of the target. rn chan struct{} t *time.Timer // wg is used to enforce Close() to return after the watcher() goroutine has finished. // Otherwise, data race will be possible. [Race Example] in dns_resolver_test we // replace the real lookup functions with mocked ones to facilitate testing. // If Close() doesn't wait for watcher() goroutine finishes, race detector sometimes // will warns lookup (READ the lookup function pointers) inside watcher() goroutine // has data race with replaceNetFunc (WRITE the lookup function pointers). wg sync.WaitGroup disableServiceConfig bool } // ResolveNow invoke an immediate resolution of the target that this dnsResolver watches. func (d *dnsResolver) ResolveNow(opt resolver.ResolveNowOption) { select { case d.rn <- struct{}{}: default: } } // Close closes the dnsResolver. func (d *dnsResolver) Close() { d.cancel() d.wg.Wait() d.t.Stop() } func (d *dnsResolver) watcher() { defer d.wg.Done() for { select { case <-d.ctx.Done(): return case <-d.t.C: case <-d.rn: if !d.t.Stop() { // Before resetting a timer, it should be stopped to prevent racing with // reads on it's channel. <-d.t.C } } result, sc := d.lookup() // Next lookup should happen within an interval defined by d.freq. It may be // more often due to exponential retry on empty address list. if len(result) == 0 { d.retryCount++ d.t.Reset(d.backoff.Backoff(d.retryCount)) } else { d.retryCount = 0 d.t.Reset(d.freq) } d.cc.NewServiceConfig(sc) d.cc.NewAddress(result) // Sleep to prevent excessive re-resolutions. Incoming resolution requests // will be queued in d.rn. 
t := time.NewTimer(minDNSResRate) select { case <-t.C: case <-d.ctx.Done(): t.Stop() return } } } func (d *dnsResolver) lookupSRV() []resolver.Address { var newAddrs []resolver.Address _, srvs, err := d.resolver.LookupSRV(d.ctx, "grpclb", "tcp", d.host) if err != nil { grpclog.Infof("grpc: failed dns SRV record lookup due to %v.\n", err) return nil } for _, s := range srvs { lbAddrs, err := d.resolver.LookupHost(d.ctx, s.Target) if err != nil { grpclog.Infof("grpc: failed load balancer address dns lookup due to %v.\n", err) continue } for _, a := range lbAddrs { a, ok := formatIP(a) if !ok { grpclog.Errorf("grpc: failed IP parsing due to %v.\n", err) continue } addr := a + ":" + strconv.Itoa(int(s.Port)) newAddrs = append(newAddrs, resolver.Address{Addr: addr, Type: resolver.GRPCLB, ServerName: s.Target}) } } return newAddrs } func (d *dnsResolver) lookupTXT() string { ss, err := d.resolver.LookupTXT(d.ctx, txtPrefix+d.host) if err != nil { grpclog.Infof("grpc: failed dns TXT record lookup due to %v.\n", err) return "" } var res string for _, s := range ss { res += s } // TXT record must have "grpc_config=" attribute in order to be used as service config. if !strings.HasPrefix(res, txtAttribute) { grpclog.Warningf("grpc: TXT record %v missing %v attribute", res, txtAttribute) return "" } return strings.TrimPrefix(res, txtAttribute) } func (d *dnsResolver) lookupHost() []resolver.Address { var newAddrs []resolver.Address addrs, err := d.resolver.LookupHost(d.ctx, d.host) if err != nil { grpclog.Warningf("grpc: failed dns A record lookup due to %v.\n", err) return nil } for _, a := range addrs { a, ok := formatIP(a) if !ok { grpclog.Errorf("grpc: failed IP parsing due to %v.\n", err) continue } addr := a + ":" + d.port newAddrs = append(newAddrs, resolver.Address{Addr: addr}) } return newAddrs } func (d *dnsResolver) lookup() ([]resolver.Address, string) { newAddrs := d.lookupSRV() // Support fallback to non-balancer address. newAddrs = append(newAddrs, d.lookupHost()...) if d.disableServiceConfig { return newAddrs, "" } sc := d.lookupTXT() return newAddrs, canaryingSC(sc) } // formatIP returns ok = false if addr is not a valid textual representation of an IP address. // If addr is an IPv4 address, return the addr and ok = true. // If addr is an IPv6 address, return the addr enclosed in square brackets and ok = true. func formatIP(addr string) (addrIP string, ok bool) { ip := net.ParseIP(addr) if ip == nil { return "", false } if ip.To4() != nil { return addr, true } return "[" + addr + "]", true } // parseTarget takes the user input target string and default port, returns formatted host and port info. // If target doesn't specify a port, set the port to be the defaultPort. // If target is in IPv6 format and host-name is enclosed in square brackets, brackets // are stripped when setting the host. 
// examples: // target: "www.google.com" defaultPort: "443" returns host: "www.google.com", port: "443" // target: "ipv4-host:80" defaultPort: "443" returns host: "ipv4-host", port: "80" // target: "[ipv6-host]" defaultPort: "443" returns host: "ipv6-host", port: "443" // target: ":80" defaultPort: "443" returns host: "localhost", port: "80" func parseTarget(target, defaultPort string) (host, port string, err error) { if target == "" { return "", "", errMissingAddr } if ip := net.ParseIP(target); ip != nil { // target is an IPv4 or IPv6(without brackets) address return target, defaultPort, nil } if host, port, err = net.SplitHostPort(target); err == nil { if port == "" { // If the port field is empty (target ends with colon), e.g. "[::1]:", this is an error. return "", "", errEndsWithColon } // target has port, i.e ipv4-host:port, [ipv6-host]:port, host-name:port if host == "" { // Keep consistent with net.Dial(): If the host is empty, as in ":80", the local system is assumed. host = "localhost" } return host, port, nil } if host, port, err = net.SplitHostPort(target + ":" + defaultPort); err == nil { // target doesn't have port return host, port, nil } return "", "", fmt.Errorf("invalid target address %v, error info: %v", target, err) } type rawChoice struct { ClientLanguage *[]string `json:"clientLanguage,omitempty"` Percentage *int `json:"percentage,omitempty"` ClientHostName *[]string `json:"clientHostName,omitempty"` ServiceConfig *json.RawMessage `json:"serviceConfig,omitempty"` } func containsString(a *[]string, b string) bool { if a == nil { return true } for _, c := range *a { if c == b { return true } } return false } func chosenByPercentage(a *int) bool { if a == nil { return true } return grpcrand.Intn(100)+1 <= *a } func canaryingSC(js string) string { if js == "" { return "" } var rcs []rawChoice err := json.Unmarshal([]byte(js), &rcs) if err != nil { grpclog.Warningf("grpc: failed to parse service config json string due to %v.\n", err) return "" } cliHostname, err := os.Hostname() if err != nil { grpclog.Warningf("grpc: failed to get client hostname due to %v.\n", err) return "" } var sc string for _, c := range rcs { if !containsString(c.ClientLanguage, golang) || !chosenByPercentage(c.Percentage) || !containsString(c.ClientHostName, cliHostname) || c.ServiceConfig == nil { continue } sc = string(*c.ServiceConfig) break } return sc } grpc-go-1.22.1/resolver/dns/dns_resolver_test.go000066400000000000000000000661331351635773100216770ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package dns import ( "context" "errors" "fmt" "net" "os" "reflect" "sync" "testing" "time" "google.golang.org/grpc/internal/leakcheck" "google.golang.org/grpc/resolver" ) func TestMain(m *testing.M) { // Set a valid duration for the re-resolution rate only for tests which are // actually testing that feature. 
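// A zero duration turns the rate limit off entirely, so the remaining
// tests never block on the 30-second production default.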
dc := replaceDNSResRate(time.Duration(0)) defer dc() cleanup := replaceNetFunc(nil) code := m.Run() cleanup() os.Exit(code) } const ( txtBytesLimit = 255 ) type testClientConn struct { target string m1 sync.Mutex addrs []resolver.Address a int // how many times NewAddress() has been called m2 sync.Mutex sc string s int } func (t *testClientConn) UpdateState(s resolver.State) { panic("unused") } func (t *testClientConn) NewAddress(addresses []resolver.Address) { t.m1.Lock() defer t.m1.Unlock() t.addrs = addresses t.a++ } func (t *testClientConn) getAddress() ([]resolver.Address, int) { t.m1.Lock() defer t.m1.Unlock() return t.addrs, t.a } func (t *testClientConn) NewServiceConfig(serviceConfig string) { t.m2.Lock() defer t.m2.Unlock() t.sc = serviceConfig t.s++ } func (t *testClientConn) getSc() (string, int) { t.m2.Lock() defer t.m2.Unlock() return t.sc, t.s } type testResolver struct { // A write to this channel is made when this resolver receives a resolution // request. Tests can rely on reading from this channel to be notified about // resolution requests instead of sleeping for a predefined period of time. ch chan struct{} } func (tr *testResolver) LookupHost(ctx context.Context, host string) ([]string, error) { if tr.ch != nil { tr.ch <- struct{}{} } return hostLookup(host) } func (*testResolver) LookupSRV(ctx context.Context, service, proto, name string) (string, []*net.SRV, error) { return srvLookup(service, proto, name) } func (*testResolver) LookupTXT(ctx context.Context, host string) ([]string, error) { return txtLookup(host) } func replaceNetFunc(ch chan struct{}) func() { oldResolver := defaultResolver defaultResolver = &testResolver{ch: ch} return func() { defaultResolver = oldResolver } } func replaceDNSResRate(d time.Duration) func() { oldMinDNSResRate := minDNSResRate minDNSResRate = d return func() { minDNSResRate = oldMinDNSResRate } } var hostLookupTbl = struct { sync.Mutex tbl map[string][]string }{ tbl: map[string][]string{ "foo.bar.com": {"1.2.3.4", "5.6.7.8"}, "ipv4.single.fake": {"1.2.3.4"}, "srv.ipv4.single.fake": {"2.4.6.8"}, "ipv4.multi.fake": {"1.2.3.4", "5.6.7.8", "9.10.11.12"}, "ipv6.single.fake": {"2607:f8b0:400a:801::1001"}, "ipv6.multi.fake": {"2607:f8b0:400a:801::1001", "2607:f8b0:400a:801::1002", "2607:f8b0:400a:801::1003"}, }, } func hostLookup(host string) ([]string, error) { hostLookupTbl.Lock() defer hostLookupTbl.Unlock() if addrs, cnt := hostLookupTbl.tbl[host]; cnt { return addrs, nil } return nil, fmt.Errorf("failed to lookup host:%s resolution in hostLookupTbl", host) } var srvLookupTbl = struct { sync.Mutex tbl map[string][]*net.SRV }{ tbl: map[string][]*net.SRV{ "_grpclb._tcp.srv.ipv4.single.fake": {&net.SRV{Target: "ipv4.single.fake", Port: 1234}}, "_grpclb._tcp.srv.ipv4.multi.fake": {&net.SRV{Target: "ipv4.multi.fake", Port: 1234}}, "_grpclb._tcp.srv.ipv6.single.fake": {&net.SRV{Target: "ipv6.single.fake", Port: 1234}}, "_grpclb._tcp.srv.ipv6.multi.fake": {&net.SRV{Target: "ipv6.multi.fake", Port: 1234}}, }, } func srvLookup(service, proto, name string) (string, []*net.SRV, error) { cname := "_" + service + "._" + proto + "." + name srvLookupTbl.Lock() defer srvLookupTbl.Unlock() if srvs, cnt := srvLookupTbl.tbl[cname]; cnt { return cname, srvs, nil } return "", nil, fmt.Errorf("failed to lookup srv record for %s in srvLookupTbl", cname) } // div divides a byte slice into a slice of strings, each of which is of maximum // 255 bytes length, which is the length limit per TXT record in DNS. 
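// For example, a 600-byte service config file is served as chunks of 255,
// 255 and 90 bytes; txtLookup below returns those chunks and the
// resolver's lookupTXT concatenates them before stripping the
// "grpc_config=" prefix.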
func div(b []byte) []string { var r []string for i := 0; i < len(b); i += txtBytesLimit { if i+txtBytesLimit > len(b) { r = append(r, string(b[i:])) } else { r = append(r, string(b[i:i+txtBytesLimit])) } } return r } // scfs contains an array of service config file string in JSON format. // Notes about the scfs contents and usage: // scfs contains 4 service config file JSON strings for testing. Inside each // service config file, there are multiple choices. scfs[0:3] each contains 5 // choices, and first 3 choices are nonmatching choices based on canarying rule, // while the last two are matched choices. scfs[3] only contains 3 choices, and // all of them are nonmatching based on canarying rule. For each of scfs[0:3], // the eventually returned service config, which is from the first of the two // matched choices, is stored in the corresponding scs element (e.g. // scfs[0]->scs[0]). scfs and scs elements are used in pair to test the dns // resolver functionality, with scfs as the input and scs used for validation of // the output. For scfs[3], it corresponds to empty service config, since there // isn't a matched choice. var scfs = []string{ `[ { "clientLanguage": [ "CPP", "JAVA" ], "serviceConfig": { "loadBalancingPolicy": "grpclb", "methodConfig": [ { "name": [ { "service": "all" } ], "timeout": "1s" } ] } }, { "percentage": 0, "serviceConfig": { "loadBalancingPolicy": "grpclb", "methodConfig": [ { "name": [ { "service": "all" } ], "timeout": "1s" } ] } }, { "clientHostName": [ "localhost" ], "serviceConfig": { "loadBalancingPolicy": "grpclb", "methodConfig": [ { "name": [ { "service": "all" } ], "timeout": "1s" } ] } }, { "clientLanguage": [ "GO" ], "percentage": 100, "serviceConfig": { "methodConfig": [ { "name": [ { "method": "bar" } ], "maxRequestMessageBytes": 1024, "maxResponseMessageBytes": 1024 } ] } }, { "serviceConfig": { "loadBalancingPolicy": "round_robin", "methodConfig": [ { "name": [ { "service": "foo", "method": "bar" } ], "waitForReady": true } ] } } ]`, `[ { "clientLanguage": [ "CPP", "JAVA" ], "serviceConfig": { "loadBalancingPolicy": "grpclb", "methodConfig": [ { "name": [ { "service": "all" } ], "timeout": "1s" } ] } }, { "percentage": 0, "serviceConfig": { "loadBalancingPolicy": "grpclb", "methodConfig": [ { "name": [ { "service": "all" } ], "timeout": "1s" } ] } }, { "clientHostName": [ "localhost" ], "serviceConfig": { "loadBalancingPolicy": "grpclb", "methodConfig": [ { "name": [ { "service": "all" } ], "timeout": "1s" } ] } }, { "clientLanguage": [ "GO" ], "percentage": 100, "serviceConfig": { "methodConfig": [ { "name": [ { "service": "foo", "method": "bar" } ], "waitForReady": true, "timeout": "1s", "maxRequestMessageBytes": 1024, "maxResponseMessageBytes": 1024 } ] } }, { "serviceConfig": { "loadBalancingPolicy": "round_robin", "methodConfig": [ { "name": [ { "service": "foo", "method": "bar" } ], "waitForReady": true } ] } } ]`, `[ { "clientLanguage": [ "CPP", "JAVA" ], "serviceConfig": { "loadBalancingPolicy": "grpclb", "methodConfig": [ { "name": [ { "service": "all" } ], "timeout": "1s" } ] } }, { "percentage": 0, "serviceConfig": { "loadBalancingPolicy": "grpclb", "methodConfig": [ { "name": [ { "service": "all" } ], "timeout": "1s" } ] } }, { "clientHostName": [ "localhost" ], "serviceConfig": { "loadBalancingPolicy": "grpclb", "methodConfig": [ { "name": [ { "service": "all" } ], "timeout": "1s" } ] } }, { "clientLanguage": [ "GO" ], "percentage": 100, "serviceConfig": { "loadBalancingPolicy": "round_robin", "methodConfig": [ { "name": [ { "service": 
"foo" } ], "waitForReady": true, "timeout": "1s" }, { "name": [ { "service": "bar" } ], "waitForReady": false } ] } }, { "serviceConfig": { "loadBalancingPolicy": "round_robin", "methodConfig": [ { "name": [ { "service": "foo", "method": "bar" } ], "waitForReady": true } ] } } ]`, `[ { "clientLanguage": [ "CPP", "JAVA" ], "serviceConfig": { "loadBalancingPolicy": "grpclb", "methodConfig": [ { "name": [ { "service": "all" } ], "timeout": "1s" } ] } }, { "percentage": 0, "serviceConfig": { "loadBalancingPolicy": "grpclb", "methodConfig": [ { "name": [ { "service": "all" } ], "timeout": "1s" } ] } }, { "clientHostName": [ "localhost" ], "serviceConfig": { "loadBalancingPolicy": "grpclb", "methodConfig": [ { "name": [ { "service": "all" } ], "timeout": "1s" } ] } } ]`, } // scs contains an array of service config string in JSON format. var scs = []string{ `{ "methodConfig": [ { "name": [ { "method": "bar" } ], "maxRequestMessageBytes": 1024, "maxResponseMessageBytes": 1024 } ] }`, `{ "methodConfig": [ { "name": [ { "service": "foo", "method": "bar" } ], "waitForReady": true, "timeout": "1s", "maxRequestMessageBytes": 1024, "maxResponseMessageBytes": 1024 } ] }`, `{ "loadBalancingPolicy": "round_robin", "methodConfig": [ { "name": [ { "service": "foo" } ], "waitForReady": true, "timeout": "1s" }, { "name": [ { "service": "bar" } ], "waitForReady": false } ] }`, } // scLookupTbl is a set, which contains targets that have service config. Target // not in this set should not have service config. var scLookupTbl = map[string]bool{ txtPrefix + "foo.bar.com": true, txtPrefix + "srv.ipv4.single.fake": true, txtPrefix + "srv.ipv4.multi.fake": true, txtPrefix + "no.attribute": true, } // generateSCF generates a slice of strings (aggregately representing a single // service config file) for the input name, which mocks the result from a real // DNS TXT record lookup. func generateSCF(name string) []string { var b []byte switch name { case "foo.bar.com": b = []byte(scfs[0]) case "srv.ipv4.single.fake": b = []byte(scfs[1]) case "srv.ipv4.multi.fake": b = []byte(scfs[2]) default: b = []byte(scfs[3]) } if name == "no.attribute" { return div(b) } return div(append([]byte(txtAttribute), b...)) } // generateSC returns a service config string in JSON format for the input name. 
func generateSC(name string) string { _, cnt := scLookupTbl[name] if !cnt || name == "no.attribute" { return "" } switch name { case "foo.bar.com": return scs[0] case "srv.ipv4.single.fake": return scs[1] case "srv.ipv4.multi.fake": return scs[2] default: return "" } } var txtLookupTbl = struct { sync.Mutex tbl map[string][]string }{ tbl: map[string][]string{ "foo.bar.com": generateSCF("foo.bar.com"), "srv.ipv4.single.fake": generateSCF("srv.ipv4.single.fake"), "srv.ipv4.multi.fake": generateSCF("srv.ipv4.multi.fake"), "srv.ipv6.single.fake": generateSCF("srv.ipv6.single.fake"), "srv.ipv6.multi.fake": generateSCF("srv.ipv6.multi.fake"), "no.attribute": generateSCF("no.attribute"), }, } func txtLookup(host string) ([]string, error) { txtLookupTbl.Lock() defer txtLookupTbl.Unlock() if scs, cnt := txtLookupTbl.tbl[host]; cnt { return scs, nil } return nil, fmt.Errorf("failed to lookup TXT:%s resolution in txtLookupTbl", host) } func TestResolve(t *testing.T) { testDNSResolver(t) testDNSResolveNow(t) testIPResolver(t) } func testDNSResolver(t *testing.T) { defer leakcheck.Check(t) tests := []struct { target string addrWant []resolver.Address scWant string }{ { "foo.bar.com", []resolver.Address{{Addr: "1.2.3.4" + colonDefaultPort}, {Addr: "5.6.7.8" + colonDefaultPort}}, generateSC("foo.bar.com"), }, { "foo.bar.com:1234", []resolver.Address{{Addr: "1.2.3.4:1234"}, {Addr: "5.6.7.8:1234"}}, generateSC("foo.bar.com"), }, { "srv.ipv4.single.fake", []resolver.Address{{Addr: "1.2.3.4:1234", Type: resolver.GRPCLB, ServerName: "ipv4.single.fake"}, {Addr: "2.4.6.8" + colonDefaultPort}}, generateSC("srv.ipv4.single.fake"), }, { "srv.ipv4.multi.fake", []resolver.Address{ {Addr: "1.2.3.4:1234", Type: resolver.GRPCLB, ServerName: "ipv4.multi.fake"}, {Addr: "5.6.7.8:1234", Type: resolver.GRPCLB, ServerName: "ipv4.multi.fake"}, {Addr: "9.10.11.12:1234", Type: resolver.GRPCLB, ServerName: "ipv4.multi.fake"}, }, generateSC("srv.ipv4.multi.fake"), }, { "srv.ipv6.single.fake", []resolver.Address{{Addr: "[2607:f8b0:400a:801::1001]:1234", Type: resolver.GRPCLB, ServerName: "ipv6.single.fake"}}, generateSC("srv.ipv6.single.fake"), }, { "srv.ipv6.multi.fake", []resolver.Address{ {Addr: "[2607:f8b0:400a:801::1001]:1234", Type: resolver.GRPCLB, ServerName: "ipv6.multi.fake"}, {Addr: "[2607:f8b0:400a:801::1002]:1234", Type: resolver.GRPCLB, ServerName: "ipv6.multi.fake"}, {Addr: "[2607:f8b0:400a:801::1003]:1234", Type: resolver.GRPCLB, ServerName: "ipv6.multi.fake"}, }, generateSC("srv.ipv6.multi.fake"), }, { "no.attribute", nil, generateSC("no.attribute"), }, } for _, a := range tests { b := NewBuilder() cc := &testClientConn{target: a.target} r, err := b.Build(resolver.Target{Endpoint: a.target}, cc, resolver.BuildOption{}) if err != nil { t.Fatalf("%v\n", err) } var addrs []resolver.Address var cnt int for { addrs, cnt = cc.getAddress() if cnt > 0 { break } time.Sleep(time.Millisecond) } var sc string for { sc, cnt = cc.getSc() if cnt > 0 { break } time.Sleep(time.Millisecond) } if !reflect.DeepEqual(a.addrWant, addrs) { t.Errorf("Resolved addresses of target: %q = %+v, want %+v\n", a.target, addrs, a.addrWant) } if !reflect.DeepEqual(a.scWant, sc) { t.Errorf("Resolved service config of target: %q = %+v, want %+v\n", a.target, sc, a.scWant) } r.Close() } } func mutateTbl(target string) func() { hostLookupTbl.Lock() oldHostTblEntry := hostLookupTbl.tbl[target] hostLookupTbl.tbl[target] = hostLookupTbl.tbl[target][:len(oldHostTblEntry)-1] hostLookupTbl.Unlock() txtLookupTbl.Lock() oldTxtTblEntry := 
txtLookupTbl.tbl[target] txtLookupTbl.tbl[target] = []string{""} txtLookupTbl.Unlock() return func() { hostLookupTbl.Lock() hostLookupTbl.tbl[target] = oldHostTblEntry hostLookupTbl.Unlock() txtLookupTbl.Lock() txtLookupTbl.tbl[target] = oldTxtTblEntry txtLookupTbl.Unlock() } } func testDNSResolveNow(t *testing.T) { defer leakcheck.Check(t) tests := []struct { target string addrWant []resolver.Address addrNext []resolver.Address scWant string scNext string }{ { "foo.bar.com", []resolver.Address{{Addr: "1.2.3.4" + colonDefaultPort}, {Addr: "5.6.7.8" + colonDefaultPort}}, []resolver.Address{{Addr: "1.2.3.4" + colonDefaultPort}}, generateSC("foo.bar.com"), "", }, } for _, a := range tests { b := NewBuilder() cc := &testClientConn{target: a.target} r, err := b.Build(resolver.Target{Endpoint: a.target}, cc, resolver.BuildOption{}) if err != nil { t.Fatalf("%v\n", err) } var addrs []resolver.Address var cnt int for { addrs, cnt = cc.getAddress() if cnt > 0 { break } time.Sleep(time.Millisecond) } var sc string for { sc, cnt = cc.getSc() if cnt > 0 { break } time.Sleep(time.Millisecond) } if !reflect.DeepEqual(a.addrWant, addrs) { t.Errorf("Resolved addresses of target: %q = %+v, want %+v\n", a.target, addrs, a.addrWant) } if !reflect.DeepEqual(a.scWant, sc) { t.Errorf("Resolved service config of target: %q = %+v, want %+v\n", a.target, sc, a.scWant) } revertTbl := mutateTbl(a.target) r.ResolveNow(resolver.ResolveNowOption{}) for { addrs, cnt = cc.getAddress() if cnt == 2 { break } time.Sleep(time.Millisecond) } for { sc, cnt = cc.getSc() if cnt == 2 { break } time.Sleep(time.Millisecond) } if !reflect.DeepEqual(a.addrNext, addrs) { t.Errorf("Resolved addresses of target: %q = %+v, want %+v\n", a.target, addrs, a.addrNext) } if !reflect.DeepEqual(a.scNext, sc) { t.Errorf("Resolved service config of target: %q = %+v, want %+v\n", a.target, sc, a.scNext) } revertTbl() r.Close() } } const colonDefaultPort = ":" + defaultPort func testIPResolver(t *testing.T) { defer leakcheck.Check(t) tests := []struct { target string want []resolver.Address }{ {"127.0.0.1", []resolver.Address{{Addr: "127.0.0.1" + colonDefaultPort}}}, {"127.0.0.1:12345", []resolver.Address{{Addr: "127.0.0.1:12345"}}}, {"::1", []resolver.Address{{Addr: "[::1]" + colonDefaultPort}}}, {"[::1]:12345", []resolver.Address{{Addr: "[::1]:12345"}}}, {"[::1]", []resolver.Address{{Addr: "[::1]:443"}}}, {"2001:db8:85a3::8a2e:370:7334", []resolver.Address{{Addr: "[2001:db8:85a3::8a2e:370:7334]" + colonDefaultPort}}}, {"[2001:db8:85a3::8a2e:370:7334]", []resolver.Address{{Addr: "[2001:db8:85a3::8a2e:370:7334]" + colonDefaultPort}}}, {"[2001:db8:85a3::8a2e:370:7334]:12345", []resolver.Address{{Addr: "[2001:db8:85a3::8a2e:370:7334]:12345"}}}, {"[2001:db8::1]:http", []resolver.Address{{Addr: "[2001:db8::1]:http"}}}, // TODO(yuxuanli): zone support? 
} for _, v := range tests { b := NewBuilder() cc := &testClientConn{target: v.target} r, err := b.Build(resolver.Target{Endpoint: v.target}, cc, resolver.BuildOption{}) if err != nil { t.Fatalf("%v\n", err) } var addrs []resolver.Address var cnt int for { addrs, cnt = cc.getAddress() if cnt > 0 { break } time.Sleep(time.Millisecond) } if !reflect.DeepEqual(v.want, addrs) { t.Errorf("Resolved addresses of target: %q = %+v, want %+v\n", v.target, addrs, v.want) } r.ResolveNow(resolver.ResolveNowOption{}) for { addrs, cnt = cc.getAddress() if cnt == 2 { break } time.Sleep(time.Millisecond) } if !reflect.DeepEqual(v.want, addrs) { t.Errorf("Resolved addresses of target: %q = %+v, want %+v\n", v.target, addrs, v.want) } r.Close() } } func TestResolveFunc(t *testing.T) { defer leakcheck.Check(t) tests := []struct { addr string want error }{ // TODO(yuxuanli): More false cases? {"www.google.com", nil}, {"foo.bar:12345", nil}, {"127.0.0.1", nil}, {"::", nil}, {"127.0.0.1:12345", nil}, {"[::1]:80", nil}, {"[2001:db8:a0b:12f0::1]:21", nil}, {":80", nil}, {"127.0.0...1:12345", nil}, {"[fe80::1%lo0]:80", nil}, {"golang.org:http", nil}, {"[2001:db8::1]:http", nil}, {"[2001:db8::1]:", errEndsWithColon}, {":", errEndsWithColon}, {"", errMissingAddr}, {"[2001:db8:a0b:12f0::1", fmt.Errorf("invalid target address [2001:db8:a0b:12f0::1, error info: address [2001:db8:a0b:12f0::1:443: missing ']' in address")}, } b := NewBuilder() for _, v := range tests { cc := &testClientConn{target: v.addr} r, err := b.Build(resolver.Target{Endpoint: v.addr}, cc, resolver.BuildOption{}) if err == nil { r.Close() } if !reflect.DeepEqual(err, v.want) { t.Errorf("Build(%q, cc, resolver.BuildOption{}) = %v, want %v", v.addr, err, v.want) } } } func TestDisableServiceConfig(t *testing.T) { defer leakcheck.Check(t) tests := []struct { target string scWant string disableServiceConfig bool }{ { "foo.bar.com", generateSC("foo.bar.com"), false, }, { "foo.bar.com", "", true, }, } for _, a := range tests { b := NewBuilder() cc := &testClientConn{target: a.target} r, err := b.Build(resolver.Target{Endpoint: a.target}, cc, resolver.BuildOption{DisableServiceConfig: a.disableServiceConfig}) if err != nil { t.Fatalf("%v\n", err) } var cnt int var sc string for { sc, cnt = cc.getSc() if cnt > 0 { break } time.Sleep(time.Millisecond) } if !reflect.DeepEqual(a.scWant, sc) { t.Errorf("Resolved service config of target: %q = %+v, want %+v\n", a.target, sc, a.scWant) } r.Close() } } func TestDNSResolverRetry(t *testing.T) { b := NewBuilder() target := "ipv4.single.fake" cc := &testClientConn{target: target} r, err := b.Build(resolver.Target{Endpoint: target}, cc, resolver.BuildOption{}) if err != nil { t.Fatalf("%v\n", err) } var addrs []resolver.Address for { addrs, _ = cc.getAddress() if len(addrs) == 1 { break } time.Sleep(time.Millisecond) } want := []resolver.Address{{Addr: "1.2.3.4" + colonDefaultPort}} if !reflect.DeepEqual(want, addrs) { t.Errorf("Resolved addresses of target: %q = %+v, want %+v\n", target, addrs, want) } // mutate the host lookup table so the target has 0 address returned. revertTbl := mutateTbl(target) // trigger a resolve that will get empty address list r.ResolveNow(resolver.ResolveNowOption{}) for { addrs, _ = cc.getAddress() if len(addrs) == 0 { break } time.Sleep(time.Millisecond) } revertTbl() // wait for the retry to happen in two seconds. 
timer := time.NewTimer(2 * time.Second) for { b := false select { case <-timer.C: b = true default: addrs, _ = cc.getAddress() if len(addrs) == 1 { b = true break } time.Sleep(time.Millisecond) } if b { break } } if !reflect.DeepEqual(want, addrs) { t.Errorf("Resolved addresses of target: %q = %+v, want %+v\n", target, addrs, want) } r.Close() } func TestCustomAuthority(t *testing.T) { defer leakcheck.Check(t) tests := []struct { authority string authorityWant string expectError bool }{ { "4.3.2.1:" + defaultDNSSvrPort, "4.3.2.1:" + defaultDNSSvrPort, false, }, { "4.3.2.1:123", "4.3.2.1:123", false, }, { "4.3.2.1", "4.3.2.1:" + defaultDNSSvrPort, false, }, { "::1", "[::1]:" + defaultDNSSvrPort, false, }, { "[::1]", "[::1]:" + defaultDNSSvrPort, false, }, { "[::1]:123", "[::1]:123", false, }, { "dnsserver.com", "dnsserver.com:" + defaultDNSSvrPort, false, }, { ":123", "localhost:123", false, }, { ":", "", true, }, { "[::1]:", "", true, }, { "dnsserver.com:", "", true, }, } oldCustomAuthorityDialler := customAuthorityDialler defer func() { customAuthorityDialler = oldCustomAuthorityDialler }() for _, a := range tests { errChan := make(chan error, 1) customAuthorityDialler = func(authority string) func(ctx context.Context, network, address string) (net.Conn, error) { if authority != a.authorityWant { errChan <- fmt.Errorf("wrong custom authority passed to resolver. input: %s expected: %s actual: %s", a.authority, a.authorityWant, authority) } else { errChan <- nil } return func(ctx context.Context, network, address string) (net.Conn, error) { return nil, errors.New("no need to dial") } } b := NewBuilder() cc := &testClientConn{target: "foo.bar.com"} r, err := b.Build(resolver.Target{Endpoint: "foo.bar.com", Authority: a.authority}, cc, resolver.BuildOption{}) if err == nil { r.Close() err = <-errChan if err != nil { t.Errorf(err.Error()) } if a.expectError { t.Errorf("custom authority should have caused an error: %s", a.authority) } } else if !a.expectError { t.Errorf("unexpected error using custom authority %s: %s", a.authority, err) } } } // TestRateLimitedResolve exercises the rate limit enforced on re-resolution // requests. It sets the re-resolution rate to a small value and repeatedly // calls ResolveNow() and ensures only the expected number of resolution // requests are made. func TestRateLimitedResolve(t *testing.T) { defer leakcheck.Check(t) const dnsResRate = 100 * time.Millisecond dc := replaceDNSResRate(dnsResRate) defer dc() // Create a new testResolver{} for this test because we want the exact count // of the number of times the resolver was invoked. nc := replaceNetFunc(make(chan struct{}, 1)) defer nc() target := "foo.bar.com" b := NewBuilder() cc := &testClientConn{target: target} r, err := b.Build(resolver.Target{Endpoint: target}, cc, resolver.BuildOption{}) if err != nil { t.Fatalf("resolver.Build() returned error: %v\n", err) } defer r.Close() dnsR, ok := r.(*dnsResolver) if !ok { t.Fatalf("resolver.Build() returned unexpected type: %T\n", dnsR) } tr, ok := dnsR.resolver.(*testResolver) if !ok { t.Fatalf("delegate resolver returned unexpected type: %T\n", tr) } // Wait for the first resolution request to be done. This happens as part of // the first iteration of the for loop in watcher() because we start with a // timer of zero duration. <-tr.ch // Here we start a couple of goroutines. One repeatedly calls ResolveNow() // until asked to stop, and the other waits for two resolution requests to be // made to our testResolver and stops the former. 
We measure the start and // end times, and expect the duration elapsed to be in the interval // {2*dnsResRate, 3*dnsResRate} start := time.Now() done := make(chan struct{}) go func() { for { select { case <-done: return default: r.ResolveNow(resolver.ResolveNowOption{}) time.Sleep(1 * time.Millisecond) } } }() gotCalls := 0 const wantCalls = 2 min, max := wantCalls*dnsResRate, (wantCalls+1)*dnsResRate tMax := time.NewTimer(max) for gotCalls != wantCalls { select { case <-tr.ch: gotCalls++ case <-tMax.C: t.Fatalf("Timed out waiting for %v calls after %v; got %v", wantCalls, max, gotCalls) } } close(done) elapsed := time.Since(start) if gotCalls != wantCalls { t.Fatalf("resolve count mismatch for target: %q = %+v, want %+v\n", target, gotCalls, wantCalls) } if elapsed < min { t.Fatalf("elapsed time: %v, wanted it to be between {%v and %v}", elapsed, min, max) } wantAddrs := []resolver.Address{{Addr: "1.2.3.4" + colonDefaultPort}, {Addr: "5.6.7.8" + colonDefaultPort}} var gotAddrs []resolver.Address for { var cnt int gotAddrs, cnt = cc.getAddress() if cnt > 0 { break } time.Sleep(time.Millisecond) } if !reflect.DeepEqual(gotAddrs, wantAddrs) { t.Errorf("Resolved addresses of target: %q = %+v, want %+v\n", target, gotAddrs, wantAddrs) } } grpc-go-1.22.1/resolver/manual/000077500000000000000000000000001351635773100162645ustar00rootroot00000000000000grpc-go-1.22.1/resolver/manual/manual.go000066400000000000000000000047271351635773100201020ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package manual defines a resolver that can be used to manually send resolved // addresses to ClientConn. package manual import ( "strconv" "time" "google.golang.org/grpc/resolver" ) // NewBuilderWithScheme creates a new test resolver builder with the given scheme. func NewBuilderWithScheme(scheme string) *Resolver { return &Resolver{ scheme: scheme, } } // Resolver is also a resolver builder. // It's build() function always returns itself. type Resolver struct { scheme string // Fields actually belong to the resolver. cc resolver.ClientConn bootstrapState *resolver.State } // InitialState adds initial state to the resolver so that UpdateState doesn't // need to be explicitly called after Dial. func (r *Resolver) InitialState(s resolver.State) { r.bootstrapState = &s } // Build returns itself for Resolver, because it's both a builder and a resolver. func (r *Resolver) Build(target resolver.Target, cc resolver.ClientConn, opts resolver.BuildOption) (resolver.Resolver, error) { r.cc = cc if r.bootstrapState != nil { r.UpdateState(*r.bootstrapState) } return r, nil } // Scheme returns the test scheme. func (r *Resolver) Scheme() string { return r.scheme } // ResolveNow is a noop for Resolver. func (*Resolver) ResolveNow(o resolver.ResolveNowOption) {} // Close is a noop for Resolver. func (*Resolver) Close() {} // UpdateState calls cc.UpdateState. 
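//
// A hypothetical sketch of how a test might drive this resolver (illustrative
// only; the scheme-based target and the address are assumptions):
//
//	r, cleanup := GenerateAndRegisterManualResolver()
//	defer cleanup()
//	// ... dial r.Scheme() + ":///test.server" ...
//	r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: "localhost:50051"}}})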
func (r *Resolver) UpdateState(s resolver.State) { r.cc.UpdateState(s) } // GenerateAndRegisterManualResolver generates a random scheme and a Resolver // with it. It also registers this Resolver. // It returns the Resolver and a cleanup function to unregister it. func GenerateAndRegisterManualResolver() (*Resolver, func()) { scheme := strconv.FormatInt(time.Now().UnixNano(), 36) r := NewBuilderWithScheme(scheme) resolver.Register(r) return r, func() { resolver.UnregisterForTesting(scheme) } } grpc-go-1.22.1/resolver/passthrough/000077500000000000000000000000001351635773100173565ustar00rootroot00000000000000grpc-go-1.22.1/resolver/passthrough/passthrough.go000066400000000000000000000030211351635773100222500ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package passthrough implements a pass-through resolver. It sends the target // name without scheme back to gRPC as resolved address. package passthrough import "google.golang.org/grpc/resolver" const scheme = "passthrough" type passthroughBuilder struct{} func (*passthroughBuilder) Build(target resolver.Target, cc resolver.ClientConn, opts resolver.BuildOption) (resolver.Resolver, error) { r := &passthroughResolver{ target: target, cc: cc, } r.start() return r, nil } func (*passthroughBuilder) Scheme() string { return scheme } type passthroughResolver struct { target resolver.Target cc resolver.ClientConn } func (r *passthroughResolver) start() { r.cc.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: r.target.Endpoint}}}) } func (*passthroughResolver) ResolveNow(o resolver.ResolveNowOption) {} func (*passthroughResolver) Close() {} func init() { resolver.Register(&passthroughBuilder{}) } grpc-go-1.22.1/resolver/resolver.go000066400000000000000000000155021351635773100172020ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package resolver defines APIs for name resolution in gRPC. // All APIs in this package are experimental. package resolver import ( "google.golang.org/grpc/serviceconfig" ) var ( // m is a map from scheme to resolver builder. m = make(map[string]Builder) // defaultScheme is the default scheme to use. defaultScheme = "passthrough" ) // TODO(bar) install dns resolver in init(){}. // Register registers the resolver builder to the resolver map. b.Scheme will be // used as the scheme registered with this builder. // // NOTE: this function must only be called during initialization time (i.e. 
in // an init() function), and is not thread-safe. If multiple Resolvers are // registered with the same name, the one registered last will take effect. func Register(b Builder) { m[b.Scheme()] = b } // Get returns the resolver builder registered with the given scheme. // // If no builder is register with the scheme, nil will be returned. func Get(scheme string) Builder { if b, ok := m[scheme]; ok { return b } return nil } // SetDefaultScheme sets the default scheme that will be used. The default // default scheme is "passthrough". // // NOTE: this function must only be called during initialization time (i.e. in // an init() function), and is not thread-safe. The scheme set last overrides // previously set values. func SetDefaultScheme(scheme string) { defaultScheme = scheme } // GetDefaultScheme gets the default scheme that will be used. func GetDefaultScheme() string { return defaultScheme } // AddressType indicates the address type returned by name resolution. type AddressType uint8 const ( // Backend indicates the address is for a backend server. Backend AddressType = iota // GRPCLB indicates the address is for a grpclb load balancer. GRPCLB ) // Address represents a server the client connects to. // This is the EXPERIMENTAL API and may be changed or extended in the future. type Address struct { // Addr is the server address on which a connection will be established. Addr string // Type is the type of this address. Type AddressType // ServerName is the name of this address. // // e.g. if Type is GRPCLB, ServerName should be the name of the remote load // balancer, not the name of the backend. ServerName string // Metadata is the information associated with Addr, which may be used // to make load balancing decision. Metadata interface{} } // BuildOption includes additional information for the builder to create // the resolver. type BuildOption struct { // DisableServiceConfig indicates whether resolver should fetch service config data. DisableServiceConfig bool } // State contains the current Resolver state relevant to the ClientConn. type State struct { Addresses []Address // Resolved addresses for the target // ServiceConfig is the parsed service config; obtained from // serviceconfig.Parse. ServiceConfig serviceconfig.Config // TODO: add Err error } // ClientConn contains the callbacks for resolver to notify any updates // to the gRPC ClientConn. // // This interface is to be implemented by gRPC. Users should not need a // brand new implementation of this interface. For the situations like // testing, the new implementation should embed this interface. This allows // gRPC to add new methods to this interface. type ClientConn interface { // UpdateState updates the state of the ClientConn appropriately. UpdateState(State) // NewAddress is called by resolver to notify ClientConn a new list // of resolved addresses. // The address list should be the complete list of resolved addresses. // // Deprecated: Use UpdateState instead. NewAddress(addresses []Address) // NewServiceConfig is called by resolver to notify ClientConn a new // service config. The service config should be provided as a json string. // // Deprecated: Use UpdateState instead. NewServiceConfig(serviceConfig string) } // Target represents a target for gRPC, as specified in: // https://github.com/grpc/grpc/blob/master/doc/naming.md. // It is parsed from the target string that gets passed into Dial or DialContext by the user. And // grpc passes it to the resolver and the balancer. 
// // If the target follows the naming spec, and the parsed scheme is registered with grpc, we will // parse the target string according to the spec. e.g. "dns://some_authority/foo.bar" will be parsed // into &Target{Scheme: "dns", Authority: "some_authority", Endpoint: "foo.bar"} // // If the target does not contain a scheme, we will apply the default scheme, and set the Target to // be the full target string. e.g. "foo.bar" will be parsed into // &Target{Scheme: resolver.GetDefaultScheme(), Endpoint: "foo.bar"}. // // If the parsed scheme is not registered (i.e. no corresponding resolver available to resolve the // endpoint), we set the Scheme to be the default scheme, and set the Endpoint to be the full target // string. e.g. target string "unknown_scheme://authority/endpoint" will be parsed into // &Target{Scheme: resolver.GetDefaultScheme(), Endpoint: "unknown_scheme://authority/endpoint"}. type Target struct { Scheme string Authority string Endpoint string } // Builder creates a resolver that will be used to watch name resolution updates. type Builder interface { // Build creates a new resolver for the given target. // // gRPC dial calls Build synchronously, and fails if the returned error is // not nil. Build(target Target, cc ClientConn, opts BuildOption) (Resolver, error) // Scheme returns the scheme supported by this resolver. // Scheme is defined at https://github.com/grpc/grpc/blob/master/doc/naming.md. Scheme() string } // ResolveNowOption includes additional information for ResolveNow. type ResolveNowOption struct{} // Resolver watches for the updates on the specified target. // Updates include address updates and service config updates. type Resolver interface { // ResolveNow will be called by gRPC to try to resolve the target name // again. It's just a hint, resolver can ignore this if it's not necessary. // // It could be called multiple times concurrently. ResolveNow(ResolveNowOption) // Close closes the resolver. Close() } // UnregisterForTesting removes the resolver builder with the given scheme from the // resolver map. // This function is for testing only. func UnregisterForTesting(scheme string) { delete(m, scheme) } grpc-go-1.22.1/resolver_conn_wrapper.go000066400000000000000000000117661351635773100201260ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "fmt" "strings" "sync/atomic" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/internal/channelz" "google.golang.org/grpc/resolver" ) // ccResolverWrapper is a wrapper on top of cc for resolvers. // It implements resolver.ClientConnection interface. type ccResolverWrapper struct { cc *ClientConn resolver resolver.Resolver addrCh chan []resolver.Address scCh chan string done uint32 // accessed atomically; set to 1 when closed. curState resolver.State } // split2 returns the values from strings.SplitN(s, sep, 2). // If sep is not found, it returns ("", "", false) instead. 
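//
// For example (illustrative values, not from the original source):
//
//	split2("dns://a.server.com/google.com", "://")  // "dns", "a.server.com/google.com", true
//	split2("google.com", "://")                     // "", "", false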
func split2(s, sep string) (string, string, bool) { spl := strings.SplitN(s, sep, 2) if len(spl) < 2 { return "", "", false } return spl[0], spl[1], true } // parseTarget splits target into a struct containing scheme, authority and // endpoint. // // If target is not a valid scheme://authority/endpoint, it returns {Endpoint: // target}. func parseTarget(target string) (ret resolver.Target) { var ok bool ret.Scheme, ret.Endpoint, ok = split2(target, "://") if !ok { return resolver.Target{Endpoint: target} } ret.Authority, ret.Endpoint, ok = split2(ret.Endpoint, "/") if !ok { return resolver.Target{Endpoint: target} } return ret } // newCCResolverWrapper parses cc.target for scheme and gets the resolver // builder for this scheme and builds the resolver. The monitoring goroutine // for it is not started yet and can be created by calling start(). // // If withResolverBuilder dial option is set, the specified resolver will be // used instead. func newCCResolverWrapper(cc *ClientConn) (*ccResolverWrapper, error) { rb := cc.dopts.resolverBuilder if rb == nil { return nil, fmt.Errorf("could not get resolver for scheme: %q", cc.parsedTarget.Scheme) } ccr := &ccResolverWrapper{ cc: cc, addrCh: make(chan []resolver.Address, 1), scCh: make(chan string, 1), } var err error ccr.resolver, err = rb.Build(cc.parsedTarget, ccr, resolver.BuildOption{DisableServiceConfig: cc.dopts.disableServiceConfig}) if err != nil { return nil, err } return ccr, nil } func (ccr *ccResolverWrapper) resolveNow(o resolver.ResolveNowOption) { ccr.resolver.ResolveNow(o) } func (ccr *ccResolverWrapper) close() { ccr.resolver.Close() atomic.StoreUint32(&ccr.done, 1) } func (ccr *ccResolverWrapper) isDone() bool { return atomic.LoadUint32(&ccr.done) == 1 } func (ccr *ccResolverWrapper) UpdateState(s resolver.State) { if ccr.isDone() { return } grpclog.Infof("ccResolverWrapper: sending update to cc: %v", s) if channelz.IsOn() { ccr.addChannelzTraceEvent(s) } ccr.cc.updateResolverState(s) ccr.curState = s } // NewAddress is called by the resolver implementation to send addresses to gRPC. func (ccr *ccResolverWrapper) NewAddress(addrs []resolver.Address) { if ccr.isDone() { return } grpclog.Infof("ccResolverWrapper: sending new addresses to cc: %v", addrs) if channelz.IsOn() { ccr.addChannelzTraceEvent(resolver.State{Addresses: addrs, ServiceConfig: ccr.curState.ServiceConfig}) } ccr.curState.Addresses = addrs ccr.cc.updateResolverState(ccr.curState) } // NewServiceConfig is called by the resolver implementation to send service // configs to gRPC. 
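//
// A hypothetical sketch of the kind of JSON string a resolver might pass here
// (the policy value is an assumption for illustration):
//
//	ccr.NewServiceConfig(`{"loadBalancingPolicy":"round_robin"}`)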
func (ccr *ccResolverWrapper) NewServiceConfig(sc string) { if ccr.isDone() { return } grpclog.Infof("ccResolverWrapper: got new service config: %v", sc) c, err := parseServiceConfig(sc) if err != nil { return } if channelz.IsOn() { ccr.addChannelzTraceEvent(resolver.State{Addresses: ccr.curState.Addresses, ServiceConfig: c}) } ccr.curState.ServiceConfig = c ccr.cc.updateResolverState(ccr.curState) } func (ccr *ccResolverWrapper) addChannelzTraceEvent(s resolver.State) { var updates []string oldSC, oldOK := ccr.curState.ServiceConfig.(*ServiceConfig) newSC, newOK := s.ServiceConfig.(*ServiceConfig) if oldOK != newOK || (oldOK && newOK && oldSC.rawJSONString != newSC.rawJSONString) { updates = append(updates, "service config updated") } if len(ccr.curState.Addresses) > 0 && len(s.Addresses) == 0 { updates = append(updates, "resolver returned an empty address list") } else if len(ccr.curState.Addresses) == 0 && len(s.Addresses) > 0 { updates = append(updates, "resolver returned new addresses") } channelz.AddTraceEvent(ccr.cc.channelzID, &channelz.TraceEventDesc{ Desc: fmt.Sprintf("Resolver state updated: %+v (%v)", s, strings.Join(updates, "; ")), Severity: channelz.CtINFO, }) } grpc-go-1.22.1/resolver_conn_wrapper_test.go000066400000000000000000000113231351635773100211520ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package grpc import ( "fmt" "net" "testing" "time" "google.golang.org/grpc/resolver" ) func (s) TestParseTarget(t *testing.T) { for _, test := range []resolver.Target{ {Scheme: "dns", Authority: "", Endpoint: "google.com"}, {Scheme: "dns", Authority: "a.server.com", Endpoint: "google.com"}, {Scheme: "dns", Authority: "a.server.com", Endpoint: "google.com/?a=b"}, {Scheme: "passthrough", Authority: "", Endpoint: "/unix/socket/address"}, } { str := test.Scheme + "://" + test.Authority + "/" + test.Endpoint got := parseTarget(str) if got != test { t.Errorf("parseTarget(%q) = %+v, want %+v", str, got, test) } } } func (s) TestParseTargetString(t *testing.T) { for _, test := range []struct { targetStr string want resolver.Target }{ {targetStr: "", want: resolver.Target{Scheme: "", Authority: "", Endpoint: ""}}, {targetStr: ":///", want: resolver.Target{Scheme: "", Authority: "", Endpoint: ""}}, {targetStr: "a:///", want: resolver.Target{Scheme: "a", Authority: "", Endpoint: ""}}, {targetStr: "://a/", want: resolver.Target{Scheme: "", Authority: "a", Endpoint: ""}}, {targetStr: ":///a", want: resolver.Target{Scheme: "", Authority: "", Endpoint: "a"}}, {targetStr: "a://b/", want: resolver.Target{Scheme: "a", Authority: "b", Endpoint: ""}}, {targetStr: "a:///b", want: resolver.Target{Scheme: "a", Authority: "", Endpoint: "b"}}, {targetStr: "://a/b", want: resolver.Target{Scheme: "", Authority: "a", Endpoint: "b"}}, {targetStr: "a://b/c", want: resolver.Target{Scheme: "a", Authority: "b", Endpoint: "c"}}, {targetStr: "dns:///google.com", want: resolver.Target{Scheme: "dns", Authority: "", Endpoint: "google.com"}}, {targetStr: "dns://a.server.com/google.com", want: resolver.Target{Scheme: "dns", Authority: "a.server.com", Endpoint: "google.com"}}, {targetStr: "dns://a.server.com/google.com/?a=b", want: resolver.Target{Scheme: "dns", Authority: "a.server.com", Endpoint: "google.com/?a=b"}}, {targetStr: "/", want: resolver.Target{Scheme: "", Authority: "", Endpoint: "/"}}, {targetStr: "google.com", want: resolver.Target{Scheme: "", Authority: "", Endpoint: "google.com"}}, {targetStr: "google.com/?a=b", want: resolver.Target{Scheme: "", Authority: "", Endpoint: "google.com/?a=b"}}, {targetStr: "/unix/socket/address", want: resolver.Target{Scheme: "", Authority: "", Endpoint: "/unix/socket/address"}}, // If we can only parse part of the target. {targetStr: "://", want: resolver.Target{Scheme: "", Authority: "", Endpoint: "://"}}, {targetStr: "unix://domain", want: resolver.Target{Scheme: "", Authority: "", Endpoint: "unix://domain"}}, {targetStr: "a:b", want: resolver.Target{Scheme: "", Authority: "", Endpoint: "a:b"}}, {targetStr: "a/b", want: resolver.Target{Scheme: "", Authority: "", Endpoint: "a/b"}}, {targetStr: "a:/b", want: resolver.Target{Scheme: "", Authority: "", Endpoint: "a:/b"}}, {targetStr: "a//b", want: resolver.Target{Scheme: "", Authority: "", Endpoint: "a//b"}}, {targetStr: "a://b", want: resolver.Target{Scheme: "", Authority: "", Endpoint: "a://b"}}, } { got := parseTarget(test.targetStr) if got != test.want { t.Errorf("parseTarget(%q) = %+v, want %+v", test.targetStr, got, test.want) } } } // The target string with unknown scheme should be kept unchanged and passed to // the dialer. func (s) TestDialParseTargetUnknownScheme(t *testing.T) { for _, test := range []struct { targetStr string want string }{ {"/unix/socket/address", "/unix/socket/address"}, // Special test for "unix:///". {"unix:///unix/socket/address", "unix:///unix/socket/address"}, // For known scheme. 
{"passthrough://a.server.com/google.com", "google.com"}, } { dialStrCh := make(chan string, 1) cc, err := Dial(test.targetStr, WithInsecure(), WithDialer(func(addr string, _ time.Duration) (net.Conn, error) { select { case dialStrCh <- addr: default: } return nil, fmt.Errorf("test dialer, always error") })) if err != nil { t.Fatalf("Failed to create ClientConn: %v", err) } got := <-dialStrCh cc.Close() if got != test.want { t.Errorf("Dial(%q), dialer got %q, want %q", test.targetStr, got, test.want) } } } grpc-go-1.22.1/rpc_util.go000066400000000000000000000634351351635773100153310ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "bytes" "compress/gzip" "context" "encoding/binary" "fmt" "io" "io/ioutil" "math" "net/url" "strings" "sync" "time" "google.golang.org/grpc/codes" "google.golang.org/grpc/credentials" "google.golang.org/grpc/encoding" "google.golang.org/grpc/encoding/proto" "google.golang.org/grpc/internal/transport" "google.golang.org/grpc/metadata" "google.golang.org/grpc/peer" "google.golang.org/grpc/stats" "google.golang.org/grpc/status" ) // Compressor defines the interface gRPC uses to compress a message. // // Deprecated: use package encoding. type Compressor interface { // Do compresses p into w. Do(w io.Writer, p []byte) error // Type returns the compression algorithm the Compressor uses. Type() string } type gzipCompressor struct { pool sync.Pool } // NewGZIPCompressor creates a Compressor based on GZIP. // // Deprecated: use package encoding/gzip. func NewGZIPCompressor() Compressor { c, _ := NewGZIPCompressorWithLevel(gzip.DefaultCompression) return c } // NewGZIPCompressorWithLevel is like NewGZIPCompressor but specifies the gzip compression level instead // of assuming DefaultCompression. // // The error returned will be nil if the level is valid. // // Deprecated: use package encoding/gzip. func NewGZIPCompressorWithLevel(level int) (Compressor, error) { if level < gzip.DefaultCompression || level > gzip.BestCompression { return nil, fmt.Errorf("grpc: invalid compression level: %d", level) } return &gzipCompressor{ pool: sync.Pool{ New: func() interface{} { w, err := gzip.NewWriterLevel(ioutil.Discard, level) if err != nil { panic(err) } return w }, }, }, nil } func (c *gzipCompressor) Do(w io.Writer, p []byte) error { z := c.pool.Get().(*gzip.Writer) defer c.pool.Put(z) z.Reset(w) if _, err := z.Write(p); err != nil { return err } return z.Close() } func (c *gzipCompressor) Type() string { return "gzip" } // Decompressor defines the interface gRPC uses to decompress a message. // // Deprecated: use package encoding. type Decompressor interface { // Do reads the data from r and uncompress them. Do(r io.Reader) ([]byte, error) // Type returns the compression algorithm the Decompressor uses. Type() string } type gzipDecompressor struct { pool sync.Pool } // NewGZIPDecompressor creates a Decompressor based on GZIP. // // Deprecated: use package encoding/gzip. 
func NewGZIPDecompressor() Decompressor { return &gzipDecompressor{} } func (d *gzipDecompressor) Do(r io.Reader) ([]byte, error) { var z *gzip.Reader switch maybeZ := d.pool.Get().(type) { case nil: newZ, err := gzip.NewReader(r) if err != nil { return nil, err } z = newZ case *gzip.Reader: z = maybeZ if err := z.Reset(r); err != nil { d.pool.Put(z) return nil, err } } defer func() { z.Close() d.pool.Put(z) }() return ioutil.ReadAll(z) } func (d *gzipDecompressor) Type() string { return "gzip" } // callInfo contains all related configuration and information about an RPC. type callInfo struct { compressorType string failFast bool stream ClientStream maxReceiveMessageSize *int maxSendMessageSize *int creds credentials.PerRPCCredentials contentSubtype string codec baseCodec maxRetryRPCBufferSize int } func defaultCallInfo() *callInfo { return &callInfo{ failFast: true, maxRetryRPCBufferSize: 256 * 1024, // 256KB } } // CallOption configures a Call before it starts or extracts information from // a Call after it completes. type CallOption interface { // before is called before the call is sent to any server. If before // returns a non-nil error, the RPC fails with that error. before(*callInfo) error // after is called after the call has completed. after cannot return an // error, so any failures should be reported via output parameters. after(*callInfo) } // EmptyCallOption does not alter the Call configuration. // It can be embedded in another structure to carry satellite data for use // by interceptors. type EmptyCallOption struct{} func (EmptyCallOption) before(*callInfo) error { return nil } func (EmptyCallOption) after(*callInfo) {} // Header returns a CallOptions that retrieves the header metadata // for a unary RPC. func Header(md *metadata.MD) CallOption { return HeaderCallOption{HeaderAddr: md} } // HeaderCallOption is a CallOption for collecting response header metadata. // The metadata field will be populated *after* the RPC completes. // This is an EXPERIMENTAL API. type HeaderCallOption struct { HeaderAddr *metadata.MD } func (o HeaderCallOption) before(c *callInfo) error { return nil } func (o HeaderCallOption) after(c *callInfo) { if c.stream != nil { *o.HeaderAddr, _ = c.stream.Header() } } // Trailer returns a CallOptions that retrieves the trailer metadata // for a unary RPC. func Trailer(md *metadata.MD) CallOption { return TrailerCallOption{TrailerAddr: md} } // TrailerCallOption is a CallOption for collecting response trailer metadata. // The metadata field will be populated *after* the RPC completes. // This is an EXPERIMENTAL API. type TrailerCallOption struct { TrailerAddr *metadata.MD } func (o TrailerCallOption) before(c *callInfo) error { return nil } func (o TrailerCallOption) after(c *callInfo) { if c.stream != nil { *o.TrailerAddr = c.stream.Trailer() } } // Peer returns a CallOption that retrieves peer information for a unary RPC. // The peer field will be populated *after* the RPC completes. func Peer(p *peer.Peer) CallOption { return PeerCallOption{PeerAddr: p} } // PeerCallOption is a CallOption for collecting the identity of the remote // peer. The peer field will be populated *after* the RPC completes. // This is an EXPERIMENTAL API. 
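//
// A hypothetical usage sketch via the Peer helper above (the method path and
// message values are assumptions):
//
//	var p peer.Peer
//	err := cc.Invoke(ctx, "/service/Method", req, reply, Peer(&p))
//	// On success, p describes the remote peer the RPC was sent to.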
type PeerCallOption struct { PeerAddr *peer.Peer } func (o PeerCallOption) before(c *callInfo) error { return nil } func (o PeerCallOption) after(c *callInfo) { if c.stream != nil { if x, ok := peer.FromContext(c.stream.Context()); ok { *o.PeerAddr = *x } } } // WaitForReady configures the action to take when an RPC is attempted on broken // connections or unreachable servers. If waitForReady is false, the RPC will fail // immediately. Otherwise, the RPC client will block the call until a // connection is available (or the call is canceled or times out) and will // retry the call if it fails due to a transient error. gRPC will not retry if // data was written to the wire unless the server indicates it did not process // the data. Please refer to // https://github.com/grpc/grpc/blob/master/doc/wait-for-ready.md. // // By default, RPCs don't "wait for ready". func WaitForReady(waitForReady bool) CallOption { return FailFastCallOption{FailFast: !waitForReady} } // FailFast is the opposite of WaitForReady. // // Deprecated: use WaitForReady. func FailFast(failFast bool) CallOption { return FailFastCallOption{FailFast: failFast} } // FailFastCallOption is a CallOption for indicating whether an RPC should fail // fast or not. // This is an EXPERIMENTAL API. type FailFastCallOption struct { FailFast bool } func (o FailFastCallOption) before(c *callInfo) error { c.failFast = o.FailFast return nil } func (o FailFastCallOption) after(c *callInfo) {} // MaxCallRecvMsgSize returns a CallOption which sets the maximum message size the client can receive. func MaxCallRecvMsgSize(s int) CallOption { return MaxRecvMsgSizeCallOption{MaxRecvMsgSize: s} } // MaxRecvMsgSizeCallOption is a CallOption that indicates the maximum message // size the client can receive. // This is an EXPERIMENTAL API. type MaxRecvMsgSizeCallOption struct { MaxRecvMsgSize int } func (o MaxRecvMsgSizeCallOption) before(c *callInfo) error { c.maxReceiveMessageSize = &o.MaxRecvMsgSize return nil } func (o MaxRecvMsgSizeCallOption) after(c *callInfo) {} // MaxCallSendMsgSize returns a CallOption which sets the maximum message size the client can send. func MaxCallSendMsgSize(s int) CallOption { return MaxSendMsgSizeCallOption{MaxSendMsgSize: s} } // MaxSendMsgSizeCallOption is a CallOption that indicates the maximum message // size the client can send. // This is an EXPERIMENTAL API. type MaxSendMsgSizeCallOption struct { MaxSendMsgSize int } func (o MaxSendMsgSizeCallOption) before(c *callInfo) error { c.maxSendMessageSize = &o.MaxSendMsgSize return nil } func (o MaxSendMsgSizeCallOption) after(c *callInfo) {} // PerRPCCredentials returns a CallOption that sets credentials.PerRPCCredentials // for a call. func PerRPCCredentials(creds credentials.PerRPCCredentials) CallOption { return PerRPCCredsCallOption{Creds: creds} } // PerRPCCredsCallOption is a CallOption that indicates the per-RPC // credentials to use for the call. // This is an EXPERIMENTAL API. type PerRPCCredsCallOption struct { Creds credentials.PerRPCCredentials } func (o PerRPCCredsCallOption) before(c *callInfo) error { c.creds = o.Creds return nil } func (o PerRPCCredsCallOption) after(c *callInfo) {} // UseCompressor returns a CallOption which sets the compressor used when // sending the request. If WithCompressor is also set, UseCompressor has // higher priority. // // This API is EXPERIMENTAL. func UseCompressor(name string) CallOption { return CompressorCallOption{CompressorType: name} } // CompressorCallOption is a CallOption that indicates the compressor to use. 
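// A hypothetical usage sketch via the UseCompressor helper above (the method
// path and the registered "gzip" compressor are assumptions):
//
//	err := cc.Invoke(ctx, "/service/Method", req, reply, UseCompressor("gzip"))
//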
// This is an EXPERIMENTAL API. type CompressorCallOption struct { CompressorType string } func (o CompressorCallOption) before(c *callInfo) error { c.compressorType = o.CompressorType return nil } func (o CompressorCallOption) after(c *callInfo) {} // CallContentSubtype returns a CallOption that will set the content-subtype // for a call. For example, if content-subtype is "json", the Content-Type over // the wire will be "application/grpc+json". The content-subtype is converted // to lowercase before being included in Content-Type. See Content-Type on // https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md#requests for // more details. // // If ForceCodec is not also used, the content-subtype will be used to look up // the Codec to use in the registry controlled by RegisterCodec. See the // documentation on RegisterCodec for details on registration. The lookup of // content-subtype is case-insensitive. If no such Codec is found, the call // will result in an error with code codes.Internal. // // If ForceCodec is also used, that Codec will be used for all request and // response messages, with the content-subtype set to the given contentSubtype // here for requests. func CallContentSubtype(contentSubtype string) CallOption { return ContentSubtypeCallOption{ContentSubtype: strings.ToLower(contentSubtype)} } // ContentSubtypeCallOption is a CallOption that indicates the content-subtype // used for marshaling messages. // This is an EXPERIMENTAL API. type ContentSubtypeCallOption struct { ContentSubtype string } func (o ContentSubtypeCallOption) before(c *callInfo) error { c.contentSubtype = o.ContentSubtype return nil } func (o ContentSubtypeCallOption) after(c *callInfo) {} // ForceCodec returns a CallOption that will set the given Codec to be // used for all request and response messages for a call. The result of calling // String() will be used as the content-subtype in a case-insensitive manner. // // See Content-Type on // https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md#requests for // more details. Also see the documentation on RegisterCodec and // CallContentSubtype for more details on the interaction between Codec and // content-subtype. // // This function is provided for advanced users; prefer to use only // CallContentSubtype to select a registered codec instead. // // This is an EXPERIMENTAL API. func ForceCodec(codec encoding.Codec) CallOption { return ForceCodecCallOption{Codec: codec} } // ForceCodecCallOption is a CallOption that indicates the codec used for // marshaling messages. // // This is an EXPERIMENTAL API. type ForceCodecCallOption struct { Codec encoding.Codec } func (o ForceCodecCallOption) before(c *callInfo) error { c.codec = o.Codec return nil } func (o ForceCodecCallOption) after(c *callInfo) {} // CallCustomCodec behaves like ForceCodec, but accepts a grpc.Codec instead of // an encoding.Codec. // // Deprecated: use ForceCodec instead. func CallCustomCodec(codec Codec) CallOption { return CustomCodecCallOption{Codec: codec} } // CustomCodecCallOption is a CallOption that indicates the codec used for // marshaling messages. // // This is an EXPERIMENTAL API. type CustomCodecCallOption struct { Codec Codec } func (o CustomCodecCallOption) before(c *callInfo) error { c.codec = o.Codec return nil } func (o CustomCodecCallOption) after(c *callInfo) {} // MaxRetryRPCBufferSize returns a CallOption that limits the amount of memory // used for buffering this RPC's requests for retry purposes. // // This API is EXPERIMENTAL. 
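//
// A hypothetical usage sketch (the 1 MB limit and method path are assumptions):
//
//	// Buffer at most 1 MB of this RPC's request data for transparent retries.
//	err := cc.Invoke(ctx, "/service/Method", req, reply, MaxRetryRPCBufferSize(1<<20))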
func MaxRetryRPCBufferSize(bytes int) CallOption { return MaxRetryRPCBufferSizeCallOption{bytes} } // MaxRetryRPCBufferSizeCallOption is a CallOption indicating the amount of // memory to be used for caching this RPC for retry purposes. // This is an EXPERIMENTAL API. type MaxRetryRPCBufferSizeCallOption struct { MaxRetryRPCBufferSize int } func (o MaxRetryRPCBufferSizeCallOption) before(c *callInfo) error { c.maxRetryRPCBufferSize = o.MaxRetryRPCBufferSize return nil } func (o MaxRetryRPCBufferSizeCallOption) after(c *callInfo) {} // The format of the payload: compressed or not? type payloadFormat uint8 const ( compressionNone payloadFormat = 0 // no compression compressionMade payloadFormat = 1 // compressed ) // parser reads complete gRPC messages from the underlying reader. type parser struct { // r is the underlying reader. // See the comment on recvMsg for the permissible // error types. r io.Reader // The header of a gRPC message. Find more detail at // https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md header [5]byte } // recvMsg reads a complete gRPC message from the stream. // // It returns the message and its payload (compression/encoding) // format. The caller owns the returned msg memory. // // If there is an error, possible values are: // * io.EOF, when no messages remain // * io.ErrUnexpectedEOF // * of type transport.ConnectionError // * an error from the status package // No other error values or types must be returned, which also means // that the underlying io.Reader must not return an incompatible // error. func (p *parser) recvMsg(maxReceiveMessageSize int) (pf payloadFormat, msg []byte, err error) { if _, err := p.r.Read(p.header[:]); err != nil { return 0, nil, err } pf = payloadFormat(p.header[0]) length := binary.BigEndian.Uint32(p.header[1:]) if length == 0 { return pf, nil, nil } if int64(length) > int64(maxInt) { return 0, nil, status.Errorf(codes.ResourceExhausted, "grpc: received message larger than max length allowed on current machine (%d vs. %d)", length, maxInt) } if int(length) > maxReceiveMessageSize { return 0, nil, status.Errorf(codes.ResourceExhausted, "grpc: received message larger than max (%d vs. %d)", length, maxReceiveMessageSize) } // TODO(bradfitz,zhaoq): garbage. reuse buffer after proto decoding instead // of making it for each message: msg = make([]byte, int(length)) if _, err := p.r.Read(msg); err != nil { if err == io.EOF { err = io.ErrUnexpectedEOF } return 0, nil, err } return pf, msg, nil } // encode serializes msg and returns a buffer containing the message, or an // error if it is too large to be transmitted by grpc. If msg is nil, it // generates an empty message. func encode(c baseCodec, msg interface{}) ([]byte, error) { if msg == nil { // NOTE: typed nils will not be caught by this check return nil, nil } b, err := c.Marshal(msg) if err != nil { return nil, status.Errorf(codes.Internal, "grpc: error while marshaling: %v", err.Error()) } if uint(len(b)) > math.MaxUint32 { return nil, status.Errorf(codes.ResourceExhausted, "grpc: message too large (%d bytes)", len(b)) } return b, nil } // compress returns the input bytes compressed by compressor or cp. If both // compressors are nil, returns nil. // // TODO(dfawley): eliminate cp parameter by wrapping Compressor in an encoding.Compressor. 
func compress(in []byte, cp Compressor, compressor encoding.Compressor) ([]byte, error) { if compressor == nil && cp == nil { return nil, nil } wrapErr := func(err error) error { return status.Errorf(codes.Internal, "grpc: error while compressing: %v", err.Error()) } cbuf := &bytes.Buffer{} if compressor != nil { z, err := compressor.Compress(cbuf) if err != nil { return nil, wrapErr(err) } if _, err := z.Write(in); err != nil { return nil, wrapErr(err) } if err := z.Close(); err != nil { return nil, wrapErr(err) } } else { if err := cp.Do(cbuf, in); err != nil { return nil, wrapErr(err) } } return cbuf.Bytes(), nil } const ( payloadLen = 1 sizeLen = 4 headerLen = payloadLen + sizeLen ) // msgHeader returns a 5-byte header for the message being transmitted and the // payload, which is compData if non-nil or data otherwise. func msgHeader(data, compData []byte) (hdr []byte, payload []byte) { hdr = make([]byte, headerLen) if compData != nil { hdr[0] = byte(compressionMade) data = compData } else { hdr[0] = byte(compressionNone) } // Write length of payload into buf binary.BigEndian.PutUint32(hdr[payloadLen:], uint32(len(data))) return hdr, data } func outPayload(client bool, msg interface{}, data, payload []byte, t time.Time) *stats.OutPayload { return &stats.OutPayload{ Client: client, Payload: msg, Data: data, Length: len(data), WireLength: len(payload) + headerLen, SentTime: t, } } func checkRecvPayload(pf payloadFormat, recvCompress string, haveCompressor bool) *status.Status { switch pf { case compressionNone: case compressionMade: if recvCompress == "" || recvCompress == encoding.Identity { return status.New(codes.Internal, "grpc: compressed flag set with identity or empty encoding") } if !haveCompressor { return status.Newf(codes.Unimplemented, "grpc: Decompressor is not installed for grpc-encoding %q", recvCompress) } default: return status.Newf(codes.Internal, "grpc: received unexpected payload format %d", pf) } return nil } type payloadInfo struct { wireLength int // The compressed length got from wire. uncompressedBytes []byte } func recvAndDecompress(p *parser, s *transport.Stream, dc Decompressor, maxReceiveMessageSize int, payInfo *payloadInfo, compressor encoding.Compressor) ([]byte, error) { pf, d, err := p.recvMsg(maxReceiveMessageSize) if err != nil { return nil, err } if payInfo != nil { payInfo.wireLength = len(d) } if st := checkRecvPayload(pf, s.RecvCompress(), compressor != nil || dc != nil); st != nil { return nil, st.Err() } if pf == compressionMade { // To match legacy behavior, if the decompressor is set by WithDecompressor or RPCDecompressor, // use this decompressor as the default. if dc != nil { d, err = dc.Do(bytes.NewReader(d)) if err != nil { return nil, status.Errorf(codes.Internal, "grpc: failed to decompress the received message %v", err) } } else { dcReader, err := compressor.Decompress(bytes.NewReader(d)) if err != nil { return nil, status.Errorf(codes.Internal, "grpc: failed to decompress the received message %v", err) } // Read from LimitReader with limit max+1. So if the underlying // reader is over limit, the result will be bigger than max. d, err = ioutil.ReadAll(io.LimitReader(dcReader, int64(maxReceiveMessageSize)+1)) if err != nil { return nil, status.Errorf(codes.Internal, "grpc: failed to decompress the received message %v", err) } } } if len(d) > maxReceiveMessageSize { // TODO: Revisit the error code. Currently keep it consistent with java // implementation. 
return nil, status.Errorf(codes.ResourceExhausted, "grpc: received message larger than max (%d vs. %d)", len(d), maxReceiveMessageSize) } return d, nil } // For the two compressor parameters, both should not be set, but if they are, // dc takes precedence over compressor. // TODO(dfawley): wrap the old compressor/decompressor using the new API? func recv(p *parser, c baseCodec, s *transport.Stream, dc Decompressor, m interface{}, maxReceiveMessageSize int, payInfo *payloadInfo, compressor encoding.Compressor) error { d, err := recvAndDecompress(p, s, dc, maxReceiveMessageSize, payInfo, compressor) if err != nil { return err } if err := c.Unmarshal(d, m); err != nil { return status.Errorf(codes.Internal, "grpc: failed to unmarshal the received message %v", err) } if payInfo != nil { payInfo.uncompressedBytes = d } return nil } // Information about RPC type rpcInfo struct { failfast bool preloaderInfo *compressorInfo } // Information about Preloader // Responsible for storing codec, and compressors // If stream (s) has context s.Context which stores rpcInfo that has non nil // pointers to codec, and compressors, then we can use preparedMsg for Async message prep // and reuse marshalled bytes type compressorInfo struct { codec baseCodec cp Compressor comp encoding.Compressor } type rpcInfoContextKey struct{} func newContextWithRPCInfo(ctx context.Context, failfast bool, codec baseCodec, cp Compressor, comp encoding.Compressor) context.Context { return context.WithValue(ctx, rpcInfoContextKey{}, &rpcInfo{ failfast: failfast, preloaderInfo: &compressorInfo{ codec: codec, cp: cp, comp: comp, }, }) } func rpcInfoFromContext(ctx context.Context) (s *rpcInfo, ok bool) { s, ok = ctx.Value(rpcInfoContextKey{}).(*rpcInfo) return } // Code returns the error code for err if it was produced by the rpc system. // Otherwise, it returns codes.Unknown. // // Deprecated: use status.Code instead. func Code(err error) codes.Code { return status.Code(err) } // ErrorDesc returns the error description of err if it was produced by the rpc system. // Otherwise, it returns err.Error() or empty string when err is nil. // // Deprecated: use status.Convert and Message method instead. func ErrorDesc(err error) string { return status.Convert(err).Message() } // Errorf returns an error containing an error code and a description; // Errorf returns nil if c is OK. // // Deprecated: use status.Errorf instead. func Errorf(c codes.Code, format string, a ...interface{}) error { return status.Errorf(c, format, a...) } // toRPCErr converts an error into an error from the status package. func toRPCErr(err error) error { if err == nil || err == io.EOF { return err } if err == io.ErrUnexpectedEOF { return status.Error(codes.Internal, err.Error()) } if _, ok := status.FromError(err); ok { return err } switch e := err.(type) { case transport.ConnectionError: return status.Error(codes.Unavailable, e.Desc) default: switch err { case context.DeadlineExceeded: return status.Error(codes.DeadlineExceeded, err.Error()) case context.Canceled: return status.Error(codes.Canceled, err.Error()) } } return status.Error(codes.Unknown, err.Error()) } // setCallInfoCodec should only be called after CallOptions have been applied. func setCallInfoCodec(c *callInfo) error { if c.codec != nil { // codec was already set by a CallOption; use it. return nil } if c.contentSubtype == "" { // No codec specified in CallOptions; use proto by default. 
c.codec = encoding.GetCodec(proto.Name) return nil } // c.contentSubtype is already lowercased in CallContentSubtype c.codec = encoding.GetCodec(c.contentSubtype) if c.codec == nil { return status.Errorf(codes.Internal, "no codec registered for content-subtype %s", c.contentSubtype) } return nil } // parseDialTarget returns the network and address to pass to dialer func parseDialTarget(target string) (net string, addr string) { net = "tcp" m1 := strings.Index(target, ":") m2 := strings.Index(target, ":/") // handle unix:addr which will fail with url.Parse if m1 >= 0 && m2 < 0 { if n := target[0:m1]; n == "unix" { net = n addr = target[m1+1:] return net, addr } } if m2 >= 0 { t, err := url.Parse(target) if err != nil { return net, target } scheme := t.Scheme addr = t.Path if scheme == "unix" { net = scheme if addr == "" { addr = t.Host } return net, addr } } return net, target } // channelzData is used to store channelz related data for ClientConn, addrConn and Server. // These fields cannot be embedded in the original structs (e.g. ClientConn), since to do atomic // operation on int64 variable on 32-bit machine, user is responsible to enforce memory alignment. // Here, by grouping those int64 fields inside a struct, we are enforcing the alignment. type channelzData struct { callsStarted int64 callsFailed int64 callsSucceeded int64 // lastCallStartedTime stores the timestamp that last call starts. It is of int64 type instead of // time.Time since it's more costly to atomically update time.Time variable than int64 variable. lastCallStartedTime int64 } // The SupportPackageIsVersion variables are referenced from generated protocol // buffer files to ensure compatibility with the gRPC version used. The latest // support package version is 5. // // Older versions are kept for compatibility. They may be removed if // compatibility cannot be maintained. // // These constants should not be referenced from any other code. const ( SupportPackageIsVersion3 = true SupportPackageIsVersion4 = true SupportPackageIsVersion5 = true ) const grpcUA = "grpc-go/" + Version grpc-go-1.22.1/rpc_util_test.go000066400000000000000000000203371351635773100163620ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package grpc import ( "bytes" "compress/gzip" "io" "math" "reflect" "testing" "github.com/golang/protobuf/proto" "google.golang.org/grpc/codes" "google.golang.org/grpc/encoding" protoenc "google.golang.org/grpc/encoding/proto" "google.golang.org/grpc/internal/testutils" "google.golang.org/grpc/internal/transport" "google.golang.org/grpc/status" perfpb "google.golang.org/grpc/test/codec_perf" ) type fullReader struct { reader io.Reader } func (f fullReader) Read(p []byte) (int, error) { return io.ReadFull(f.reader, p) } var _ CallOption = EmptyCallOption{} // ensure EmptyCallOption implements the interface func (s) TestSimpleParsing(t *testing.T) { bigMsg := bytes.Repeat([]byte{'x'}, 1<<24) for _, test := range []struct { // input p []byte // outputs err error b []byte pt payloadFormat }{ {nil, io.EOF, nil, compressionNone}, {[]byte{0, 0, 0, 0, 0}, nil, nil, compressionNone}, {[]byte{0, 0, 0, 0, 1, 'a'}, nil, []byte{'a'}, compressionNone}, {[]byte{1, 0}, io.ErrUnexpectedEOF, nil, compressionNone}, {[]byte{0, 0, 0, 0, 10, 'a'}, io.ErrUnexpectedEOF, nil, compressionNone}, // Check that messages with length >= 2^24 are parsed. {append([]byte{0, 1, 0, 0, 0}, bigMsg...), nil, bigMsg, compressionNone}, } { buf := fullReader{bytes.NewReader(test.p)} parser := &parser{r: buf} pt, b, err := parser.recvMsg(math.MaxInt32) if err != test.err || !bytes.Equal(b, test.b) || pt != test.pt { t.Fatalf("parser{%v}.recvMsg(_) = %v, %v, %v\nwant %v, %v, %v", test.p, pt, b, err, test.pt, test.b, test.err) } } } func (s) TestMultipleParsing(t *testing.T) { // Set a byte stream consists of 3 messages with their headers. p := []byte{0, 0, 0, 0, 1, 'a', 0, 0, 0, 0, 2, 'b', 'c', 0, 0, 0, 0, 1, 'd'} b := fullReader{bytes.NewReader(p)} parser := &parser{r: b} wantRecvs := []struct { pt payloadFormat data []byte }{ {compressionNone, []byte("a")}, {compressionNone, []byte("bc")}, {compressionNone, []byte("d")}, } for i, want := range wantRecvs { pt, data, err := parser.recvMsg(math.MaxInt32) if err != nil || pt != want.pt || !reflect.DeepEqual(data, want.data) { t.Fatalf("after %d calls, parser{%v}.recvMsg(_) = %v, %v, %v\nwant %v, %v, ", i, p, pt, data, err, want.pt, want.data) } } pt, data, err := parser.recvMsg(math.MaxInt32) if err != io.EOF { t.Fatalf("after %d recvMsgs calls, parser{%v}.recvMsg(_) = %v, %v, %v\nwant _, _, %v", len(wantRecvs), p, pt, data, err, io.EOF) } } func (s) TestEncode(t *testing.T) { for _, test := range []struct { // input msg proto.Message // outputs hdr []byte data []byte err error }{ {nil, []byte{0, 0, 0, 0, 0}, []byte{}, nil}, } { data, err := encode(encoding.GetCodec(protoenc.Name), test.msg) if err != test.err || !bytes.Equal(data, test.data) { t.Errorf("encode(_, %v) = %v, %v; want %v, %v", test.msg, data, err, test.data, test.err) continue } if hdr, _ := msgHeader(data, nil); !bytes.Equal(hdr, test.hdr) { t.Errorf("msgHeader(%v, false) = %v; want %v", data, hdr, test.hdr) } } } func (s) TestCompress(t *testing.T) { bestCompressor, err := NewGZIPCompressorWithLevel(gzip.BestCompression) if err != nil { t.Fatalf("Could not initialize gzip compressor with best compression.") } bestSpeedCompressor, err := NewGZIPCompressorWithLevel(gzip.BestSpeed) if err != nil { t.Fatalf("Could not initialize gzip compressor with best speed compression.") } defaultCompressor, err := NewGZIPCompressorWithLevel(gzip.BestSpeed) if err != nil { t.Fatalf("Could not initialize gzip compressor with default compression.") } level5, err := NewGZIPCompressorWithLevel(5) if err != nil { t.Fatalf("Could not 
initialize gzip compressor with level 5 compression.") } for _, test := range []struct { // input data []byte cp Compressor dc Decompressor // outputs err error }{ {make([]byte, 1024), NewGZIPCompressor(), NewGZIPDecompressor(), nil}, {make([]byte, 1024), bestCompressor, NewGZIPDecompressor(), nil}, {make([]byte, 1024), bestSpeedCompressor, NewGZIPDecompressor(), nil}, {make([]byte, 1024), defaultCompressor, NewGZIPDecompressor(), nil}, {make([]byte, 1024), level5, NewGZIPDecompressor(), nil}, } { b := new(bytes.Buffer) if err := test.cp.Do(b, test.data); err != test.err { t.Fatalf("Compressor.Do(_, %v) = %v, want %v", test.data, err, test.err) } if b.Len() >= len(test.data) { t.Fatalf("The compressor fails to compress data.") } if p, err := test.dc.Do(b); err != nil || !bytes.Equal(test.data, p) { t.Fatalf("Decompressor.Do(%v) = %v, %v, want %v, ", b, p, err, test.data) } } } func (s) TestToRPCErr(t *testing.T) { for _, test := range []struct { // input errIn error // outputs errOut error }{ {transport.ErrConnClosing, status.Error(codes.Unavailable, transport.ErrConnClosing.Desc)}, {io.ErrUnexpectedEOF, status.Error(codes.Internal, io.ErrUnexpectedEOF.Error())}, } { err := toRPCErr(test.errIn) if _, ok := status.FromError(err); !ok { t.Errorf("toRPCErr{%v} returned type %T, want %T", test.errIn, err, status.Error) } if !testutils.StatusErrEqual(err, test.errOut) { t.Errorf("toRPCErr{%v} = %v \nwant %v", test.errIn, err, test.errOut) } } } func (s) TestParseDialTarget(t *testing.T) { for _, test := range []struct { target, wantNet, wantAddr string }{ {"unix:etcd:0", "unix", "etcd:0"}, {"unix:///tmp/unix-3", "unix", "/tmp/unix-3"}, {"unix://domain", "unix", "domain"}, {"unix://etcd:0", "unix", "etcd:0"}, {"unix:///etcd:0", "unix", "/etcd:0"}, {"passthrough://unix://domain", "tcp", "passthrough://unix://domain"}, {"https://google.com:443", "tcp", "https://google.com:443"}, {"dns:///google.com", "tcp", "dns:///google.com"}, {"/unix/socket/address", "tcp", "/unix/socket/address"}, } { gotNet, gotAddr := parseDialTarget(test.target) if gotNet != test.wantNet || gotAddr != test.wantAddr { t.Errorf("parseDialTarget(%q) = %s, %s want %s, %s", test.target, gotNet, gotAddr, test.wantNet, test.wantAddr) } } } // bmEncode benchmarks encoding a Protocol Buffer message containing mSize // bytes. func bmEncode(b *testing.B, mSize int) { cdc := encoding.GetCodec(protoenc.Name) msg := &perfpb.Buffer{Body: make([]byte, mSize)} encodeData, _ := encode(cdc, msg) encodedSz := int64(len(encodeData)) b.ReportAllocs() b.ResetTimer() for i := 0; i < b.N; i++ { encode(cdc, msg) } b.SetBytes(encodedSz) } func BenchmarkEncode1B(b *testing.B) { bmEncode(b, 1) } func BenchmarkEncode1KiB(b *testing.B) { bmEncode(b, 1024) } func BenchmarkEncode8KiB(b *testing.B) { bmEncode(b, 8*1024) } func BenchmarkEncode64KiB(b *testing.B) { bmEncode(b, 64*1024) } func BenchmarkEncode512KiB(b *testing.B) { bmEncode(b, 512*1024) } func BenchmarkEncode1MiB(b *testing.B) { bmEncode(b, 1024*1024) } // bmCompressor benchmarks a compressor of a Protocol Buffer message containing // mSize bytes. 
func bmCompressor(b *testing.B, mSize int, cp Compressor) { payload := make([]byte, mSize) cBuf := bytes.NewBuffer(make([]byte, mSize)) b.ReportAllocs() b.ResetTimer() for i := 0; i < b.N; i++ { cp.Do(cBuf, payload) cBuf.Reset() } } func BenchmarkGZIPCompressor1B(b *testing.B) { bmCompressor(b, 1, NewGZIPCompressor()) } func BenchmarkGZIPCompressor1KiB(b *testing.B) { bmCompressor(b, 1024, NewGZIPCompressor()) } func BenchmarkGZIPCompressor8KiB(b *testing.B) { bmCompressor(b, 8*1024, NewGZIPCompressor()) } func BenchmarkGZIPCompressor64KiB(b *testing.B) { bmCompressor(b, 64*1024, NewGZIPCompressor()) } func BenchmarkGZIPCompressor512KiB(b *testing.B) { bmCompressor(b, 512*1024, NewGZIPCompressor()) } func BenchmarkGZIPCompressor1MiB(b *testing.B) { bmCompressor(b, 1024*1024, NewGZIPCompressor()) } grpc-go-1.22.1/server.go000066400000000000000000001266511351635773100150160ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "context" "errors" "fmt" "io" "math" "net" "net/http" "reflect" "runtime" "strings" "sync" "sync/atomic" "time" "golang.org/x/net/trace" "google.golang.org/grpc/codes" "google.golang.org/grpc/credentials" "google.golang.org/grpc/encoding" "google.golang.org/grpc/encoding/proto" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/internal/binarylog" "google.golang.org/grpc/internal/channelz" "google.golang.org/grpc/internal/transport" "google.golang.org/grpc/keepalive" "google.golang.org/grpc/metadata" "google.golang.org/grpc/peer" "google.golang.org/grpc/stats" "google.golang.org/grpc/status" "google.golang.org/grpc/tap" ) const ( defaultServerMaxReceiveMessageSize = 1024 * 1024 * 4 defaultServerMaxSendMessageSize = math.MaxInt32 ) type methodHandler func(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor UnaryServerInterceptor) (interface{}, error) // MethodDesc represents an RPC service's method specification. type MethodDesc struct { MethodName string Handler methodHandler } // ServiceDesc represents an RPC service's specification. type ServiceDesc struct { ServiceName string // The pointer to the service interface. Used to check whether the user // provided implementation satisfies the interface requirements. HandlerType interface{} Methods []MethodDesc Streams []StreamDesc Metadata interface{} } // service consists of the information of the server serving this service and // the methods in this service. type service struct { server interface{} // the server for service methods md map[string]*MethodDesc sd map[string]*StreamDesc mdata interface{} } // Server is a gRPC server to serve RPC requests. 
type Server struct { opts serverOptions mu sync.Mutex // guards following lis map[net.Listener]bool conns map[transport.ServerTransport]bool serve bool drain bool cv *sync.Cond // signaled when connections close for GracefulStop m map[string]*service // service name -> service info events trace.EventLog quit chan struct{} done chan struct{} quitOnce sync.Once doneOnce sync.Once channelzRemoveOnce sync.Once serveWG sync.WaitGroup // counts active Serve goroutines for GracefulStop channelzID int64 // channelz unique identification number czData *channelzData } type serverOptions struct { creds credentials.TransportCredentials codec baseCodec cp Compressor dc Decompressor unaryInt UnaryServerInterceptor streamInt StreamServerInterceptor inTapHandle tap.ServerInHandle statsHandler stats.Handler maxConcurrentStreams uint32 maxReceiveMessageSize int maxSendMessageSize int unknownStreamDesc *StreamDesc keepaliveParams keepalive.ServerParameters keepalivePolicy keepalive.EnforcementPolicy initialWindowSize int32 initialConnWindowSize int32 writeBufferSize int readBufferSize int connectionTimeout time.Duration maxHeaderListSize *uint32 } var defaultServerOptions = serverOptions{ maxReceiveMessageSize: defaultServerMaxReceiveMessageSize, maxSendMessageSize: defaultServerMaxSendMessageSize, connectionTimeout: 120 * time.Second, writeBufferSize: defaultWriteBufSize, readBufferSize: defaultReadBufSize, } // A ServerOption sets options such as credentials, codec and keepalive parameters, etc. type ServerOption interface { apply(*serverOptions) } // EmptyServerOption does not alter the server configuration. It can be embedded // in another structure to build custom server options. // // This API is EXPERIMENTAL. type EmptyServerOption struct{} func (EmptyServerOption) apply(*serverOptions) {} // funcServerOption wraps a function that modifies serverOptions into an // implementation of the ServerOption interface. type funcServerOption struct { f func(*serverOptions) } func (fdo *funcServerOption) apply(do *serverOptions) { fdo.f(do) } func newFuncServerOption(f func(*serverOptions)) *funcServerOption { return &funcServerOption{ f: f, } } // WriteBufferSize determines how much data can be batched before doing a write on the wire. // The corresponding memory allocation for this buffer will be twice the size to keep syscalls low. // The default value for this buffer is 32KB. // Zero will disable the write buffer such that each write will be on underlying connection. // Note: A Send call may not directly translate to a write. func WriteBufferSize(s int) ServerOption { return newFuncServerOption(func(o *serverOptions) { o.writeBufferSize = s }) } // ReadBufferSize lets you set the size of read buffer, this determines how much data can be read at most // for one read syscall. // The default value for this buffer is 32KB. // Zero will disable read buffer for a connection so data framer can access the underlying // conn directly. func ReadBufferSize(s int) ServerOption { return newFuncServerOption(func(o *serverOptions) { o.readBufferSize = s }) } // InitialWindowSize returns a ServerOption that sets window size for stream. // The lower bound for window size is 64K and any value smaller than that will be ignored. func InitialWindowSize(s int32) ServerOption { return newFuncServerOption(func(o *serverOptions) { o.initialWindowSize = s }) } // InitialConnWindowSize returns a ServerOption that sets window size for a connection. 
// The lower bound for window size is 64K and any value smaller than that will be ignored. func InitialConnWindowSize(s int32) ServerOption { return newFuncServerOption(func(o *serverOptions) { o.initialConnWindowSize = s }) } // KeepaliveParams returns a ServerOption that sets keepalive and max-age parameters for the server. func KeepaliveParams(kp keepalive.ServerParameters) ServerOption { if kp.Time > 0 && kp.Time < time.Second { grpclog.Warning("Adjusting keepalive ping interval to minimum period of 1s") kp.Time = time.Second } return newFuncServerOption(func(o *serverOptions) { o.keepaliveParams = kp }) } // KeepaliveEnforcementPolicy returns a ServerOption that sets keepalive enforcement policy for the server. func KeepaliveEnforcementPolicy(kep keepalive.EnforcementPolicy) ServerOption { return newFuncServerOption(func(o *serverOptions) { o.keepalivePolicy = kep }) } // CustomCodec returns a ServerOption that sets a codec for message marshaling and unmarshaling. // // This will override any lookups by content-subtype for Codecs registered with RegisterCodec. func CustomCodec(codec Codec) ServerOption { return newFuncServerOption(func(o *serverOptions) { o.codec = codec }) } // RPCCompressor returns a ServerOption that sets a compressor for outbound // messages. For backward compatibility, all outbound messages will be sent // using this compressor, regardless of incoming message compression. By // default, server messages will be sent using the same compressor with which // request messages were sent. // // Deprecated: use encoding.RegisterCompressor instead. func RPCCompressor(cp Compressor) ServerOption { return newFuncServerOption(func(o *serverOptions) { o.cp = cp }) } // RPCDecompressor returns a ServerOption that sets a decompressor for inbound // messages. It has higher priority than decompressors registered via // encoding.RegisterCompressor. // // Deprecated: use encoding.RegisterCompressor instead. func RPCDecompressor(dc Decompressor) ServerOption { return newFuncServerOption(func(o *serverOptions) { o.dc = dc }) } // MaxMsgSize returns a ServerOption to set the max message size in bytes the server can receive. // If this is not set, gRPC uses the default limit. // // Deprecated: use MaxRecvMsgSize instead. func MaxMsgSize(m int) ServerOption { return MaxRecvMsgSize(m) } // MaxRecvMsgSize returns a ServerOption to set the max message size in bytes the server can receive. // If this is not set, gRPC uses the default 4MB. func MaxRecvMsgSize(m int) ServerOption { return newFuncServerOption(func(o *serverOptions) { o.maxReceiveMessageSize = m }) } // MaxSendMsgSize returns a ServerOption to set the max message size in bytes the server can send. // If this is not set, gRPC uses the default `math.MaxInt32`. func MaxSendMsgSize(m int) ServerOption { return newFuncServerOption(func(o *serverOptions) { o.maxSendMessageSize = m }) } // MaxConcurrentStreams returns a ServerOption that will apply a limit on the number // of concurrent streams to each ServerTransport. func MaxConcurrentStreams(n uint32) ServerOption { return newFuncServerOption(func(o *serverOptions) { o.maxConcurrentStreams = n }) } // Creds returns a ServerOption that sets credentials for server connections. func Creds(c credentials.TransportCredentials) ServerOption { return newFuncServerOption(func(o *serverOptions) { o.creds = c }) } // UnaryInterceptor returns a ServerOption that sets the UnaryServerInterceptor for the // server. Only one unary interceptor can be installed. 
The construction of multiple // interceptors (e.g., chaining) can be implemented at the caller. func UnaryInterceptor(i UnaryServerInterceptor) ServerOption { return newFuncServerOption(func(o *serverOptions) { if o.unaryInt != nil { panic("The unary server interceptor was already set and may not be reset.") } o.unaryInt = i }) } // StreamInterceptor returns a ServerOption that sets the StreamServerInterceptor for the // server. Only one stream interceptor can be installed. func StreamInterceptor(i StreamServerInterceptor) ServerOption { return newFuncServerOption(func(o *serverOptions) { if o.streamInt != nil { panic("The stream server interceptor was already set and may not be reset.") } o.streamInt = i }) } // InTapHandle returns a ServerOption that sets the tap handle for all the server // transport to be created. Only one can be installed. func InTapHandle(h tap.ServerInHandle) ServerOption { return newFuncServerOption(func(o *serverOptions) { if o.inTapHandle != nil { panic("The tap handle was already set and may not be reset.") } o.inTapHandle = h }) } // StatsHandler returns a ServerOption that sets the stats handler for the server. func StatsHandler(h stats.Handler) ServerOption { return newFuncServerOption(func(o *serverOptions) { o.statsHandler = h }) } // UnknownServiceHandler returns a ServerOption that allows for adding a custom // unknown service handler. The provided method is a bidi-streaming RPC service // handler that will be invoked instead of returning the "unimplemented" gRPC // error whenever a request is received for an unregistered service or method. // The handling function has full access to the Context of the request and the // stream, and the invocation bypasses interceptors. func UnknownServiceHandler(streamHandler StreamHandler) ServerOption { return newFuncServerOption(func(o *serverOptions) { o.unknownStreamDesc = &StreamDesc{ StreamName: "unknown_service_handler", Handler: streamHandler, // We need to assume that the users of the streamHandler will want to use both. ClientStreams: true, ServerStreams: true, } }) } // ConnectionTimeout returns a ServerOption that sets the timeout for // connection establishment (up to and including HTTP/2 handshaking) for all // new connections. If this is not set, the default is 120 seconds. A zero or // negative value will result in an immediate timeout. // // This API is EXPERIMENTAL. func ConnectionTimeout(d time.Duration) ServerOption { return newFuncServerOption(func(o *serverOptions) { o.connectionTimeout = d }) } // MaxHeaderListSize returns a ServerOption that sets the max (uncompressed) size // of header list that the server is prepared to accept. func MaxHeaderListSize(s uint32) ServerOption { return newFuncServerOption(func(o *serverOptions) { o.maxHeaderListSize = &s }) } // NewServer creates a gRPC server which has no service registered and has not // started to accept requests yet. 
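// A minimal usage sketch for constructing and starting a server; the
// certificate paths, the generated pb.RegisterGreeterServer call, and the
// greeterServer type are illustrative placeholders, not part of this package:
//
//  creds, err := credentials.NewServerTLSFromFile("server.crt", "server.key")
//  if err != nil {
//      log.Fatal(err)
//  }
//  s := grpc.NewServer(
//      grpc.Creds(creds),
//      grpc.MaxRecvMsgSize(8*1024*1024),
//  )
//  pb.RegisterGreeterServer(s, &greeterServer{})
//  lis, err := net.Listen("tcp", ":50051")
//  if err != nil {
//      log.Fatal(err)
//  }
//  if err := s.Serve(lis); err != nil {
//      log.Fatal(err)
//  }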
func NewServer(opt ...ServerOption) *Server { opts := defaultServerOptions for _, o := range opt { o.apply(&opts) } s := &Server{ lis: make(map[net.Listener]bool), opts: opts, conns: make(map[transport.ServerTransport]bool), m: make(map[string]*service), quit: make(chan struct{}), done: make(chan struct{}), czData: new(channelzData), } s.cv = sync.NewCond(&s.mu) if EnableTracing { _, file, line, _ := runtime.Caller(1) s.events = trace.NewEventLog("grpc.Server", fmt.Sprintf("%s:%d", file, line)) } if channelz.IsOn() { s.channelzID = channelz.RegisterServer(&channelzServer{s}, "") } return s } // printf records an event in s's event log, unless s has been stopped. // REQUIRES s.mu is held. func (s *Server) printf(format string, a ...interface{}) { if s.events != nil { s.events.Printf(format, a...) } } // errorf records an error in s's event log, unless s has been stopped. // REQUIRES s.mu is held. func (s *Server) errorf(format string, a ...interface{}) { if s.events != nil { s.events.Errorf(format, a...) } } // RegisterService registers a service and its implementation to the gRPC // server. It is called from the IDL generated code. This must be called before // invoking Serve. func (s *Server) RegisterService(sd *ServiceDesc, ss interface{}) { ht := reflect.TypeOf(sd.HandlerType).Elem() st := reflect.TypeOf(ss) if !st.Implements(ht) { grpclog.Fatalf("grpc: Server.RegisterService found the handler of type %v that does not satisfy %v", st, ht) } s.register(sd, ss) } func (s *Server) register(sd *ServiceDesc, ss interface{}) { s.mu.Lock() defer s.mu.Unlock() s.printf("RegisterService(%q)", sd.ServiceName) if s.serve { grpclog.Fatalf("grpc: Server.RegisterService after Server.Serve for %q", sd.ServiceName) } if _, ok := s.m[sd.ServiceName]; ok { grpclog.Fatalf("grpc: Server.RegisterService found duplicate service registration for %q", sd.ServiceName) } srv := &service{ server: ss, md: make(map[string]*MethodDesc), sd: make(map[string]*StreamDesc), mdata: sd.Metadata, } for i := range sd.Methods { d := &sd.Methods[i] srv.md[d.MethodName] = d } for i := range sd.Streams { d := &sd.Streams[i] srv.sd[d.StreamName] = d } s.m[sd.ServiceName] = srv } // MethodInfo contains the information of an RPC including its method name and type. type MethodInfo struct { // Name is the method name only, without the service name or package name. Name string // IsClientStream indicates whether the RPC is a client streaming RPC. IsClientStream bool // IsServerStream indicates whether the RPC is a server streaming RPC. IsServerStream bool } // ServiceInfo contains unary RPC method info, streaming RPC method info and metadata for a service. type ServiceInfo struct { Methods []MethodInfo // Metadata is the metadata specified in ServiceDesc when registering service. Metadata interface{} } // GetServiceInfo returns a map from service names to ServiceInfo. // Service names include the package names, in the form of .. 
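// A sketch of inspecting the registered methods after registration; s is a
// *grpc.Server and the log formatting is illustrative:
//
//  for svc, info := range s.GetServiceInfo() {
//      for _, m := range info.Methods {
//          log.Printf("%s/%s (client stream: %v, server stream: %v)",
//              svc, m.Name, m.IsClientStream, m.IsServerStream)
//      }
//  }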
func (s *Server) GetServiceInfo() map[string]ServiceInfo { ret := make(map[string]ServiceInfo) for n, srv := range s.m { methods := make([]MethodInfo, 0, len(srv.md)+len(srv.sd)) for m := range srv.md { methods = append(methods, MethodInfo{ Name: m, IsClientStream: false, IsServerStream: false, }) } for m, d := range srv.sd { methods = append(methods, MethodInfo{ Name: m, IsClientStream: d.ClientStreams, IsServerStream: d.ServerStreams, }) } ret[n] = ServiceInfo{ Methods: methods, Metadata: srv.mdata, } } return ret } // ErrServerStopped indicates that the operation is now illegal because of // the server being stopped. var ErrServerStopped = errors.New("grpc: the server has been stopped") func (s *Server) useTransportAuthenticator(rawConn net.Conn) (net.Conn, credentials.AuthInfo, error) { if s.opts.creds == nil { return rawConn, nil, nil } return s.opts.creds.ServerHandshake(rawConn) } type listenSocket struct { net.Listener channelzID int64 } func (l *listenSocket) ChannelzMetric() *channelz.SocketInternalMetric { return &channelz.SocketInternalMetric{ SocketOptions: channelz.GetSocketOption(l.Listener), LocalAddr: l.Listener.Addr(), } } func (l *listenSocket) Close() error { err := l.Listener.Close() if channelz.IsOn() { channelz.RemoveEntry(l.channelzID) } return err } // Serve accepts incoming connections on the listener lis, creating a new // ServerTransport and service goroutine for each. The service goroutines // read gRPC requests and then call the registered handlers to reply to them. // Serve returns when lis.Accept fails with fatal errors. lis will be closed when // this method returns. // Serve will return a non-nil error unless Stop or GracefulStop is called. func (s *Server) Serve(lis net.Listener) error { s.mu.Lock() s.printf("serving") s.serve = true if s.lis == nil { // Serve called after Stop or GracefulStop. s.mu.Unlock() lis.Close() return ErrServerStopped } s.serveWG.Add(1) defer func() { s.serveWG.Done() select { // Stop or GracefulStop called; block until done and return nil. case <-s.quit: <-s.done default: } }() ls := &listenSocket{Listener: lis} s.lis[ls] = true if channelz.IsOn() { ls.channelzID = channelz.RegisterListenSocket(ls, s.channelzID, lis.Addr().String()) } s.mu.Unlock() defer func() { s.mu.Lock() if s.lis != nil && s.lis[ls] { ls.Close() delete(s.lis, ls) } s.mu.Unlock() }() var tempDelay time.Duration // how long to sleep on accept failure for { rawConn, err := lis.Accept() if err != nil { if ne, ok := err.(interface { Temporary() bool }); ok && ne.Temporary() { if tempDelay == 0 { tempDelay = 5 * time.Millisecond } else { tempDelay *= 2 } if max := 1 * time.Second; tempDelay > max { tempDelay = max } s.mu.Lock() s.printf("Accept error: %v; retrying in %v", err, tempDelay) s.mu.Unlock() timer := time.NewTimer(tempDelay) select { case <-timer.C: case <-s.quit: timer.Stop() return nil } continue } s.mu.Lock() s.printf("done serving; Accept = %v", err) s.mu.Unlock() select { case <-s.quit: return nil default: } return err } tempDelay = 0 // Start a new goroutine to deal with rawConn so we don't stall this Accept // loop goroutine. // // Make sure we account for the goroutine so GracefulStop doesn't nil out // s.conns before this conn can be added. s.serveWG.Add(1) go func() { s.handleRawConn(rawConn) s.serveWG.Done() }() } } // handleRawConn forks a goroutine to handle a just-accepted connection that // has not had any I/O performed on it yet. 
func (s *Server) handleRawConn(rawConn net.Conn) { rawConn.SetDeadline(time.Now().Add(s.opts.connectionTimeout)) conn, authInfo, err := s.useTransportAuthenticator(rawConn) if err != nil { // ErrConnDispatched means that the connection was dispatched away from // gRPC; those connections should be left open. if err != credentials.ErrConnDispatched { s.mu.Lock() s.errorf("ServerHandshake(%q) failed: %v", rawConn.RemoteAddr(), err) s.mu.Unlock() grpclog.Warningf("grpc: Server.Serve failed to complete security handshake from %q: %v", rawConn.RemoteAddr(), err) rawConn.Close() } rawConn.SetDeadline(time.Time{}) return } s.mu.Lock() if s.conns == nil { s.mu.Unlock() conn.Close() return } s.mu.Unlock() // Finish handshaking (HTTP2) st := s.newHTTP2Transport(conn, authInfo) if st == nil { return } rawConn.SetDeadline(time.Time{}) if !s.addConn(st) { return } go func() { s.serveStreams(st) s.removeConn(st) }() } // newHTTP2Transport sets up a http/2 transport (using the // gRPC http2 server transport in transport/http2_server.go). func (s *Server) newHTTP2Transport(c net.Conn, authInfo credentials.AuthInfo) transport.ServerTransport { config := &transport.ServerConfig{ MaxStreams: s.opts.maxConcurrentStreams, AuthInfo: authInfo, InTapHandle: s.opts.inTapHandle, StatsHandler: s.opts.statsHandler, KeepaliveParams: s.opts.keepaliveParams, KeepalivePolicy: s.opts.keepalivePolicy, InitialWindowSize: s.opts.initialWindowSize, InitialConnWindowSize: s.opts.initialConnWindowSize, WriteBufferSize: s.opts.writeBufferSize, ReadBufferSize: s.opts.readBufferSize, ChannelzParentID: s.channelzID, MaxHeaderListSize: s.opts.maxHeaderListSize, } st, err := transport.NewServerTransport("http2", c, config) if err != nil { s.mu.Lock() s.errorf("NewServerTransport(%q) failed: %v", c.RemoteAddr(), err) s.mu.Unlock() c.Close() grpclog.Warningln("grpc: Server.Serve failed to create ServerTransport: ", err) return nil } return st } func (s *Server) serveStreams(st transport.ServerTransport) { defer st.Close() var wg sync.WaitGroup st.HandleStreams(func(stream *transport.Stream) { wg.Add(1) go func() { defer wg.Done() s.handleStream(st, stream, s.traceInfo(st, stream)) }() }, func(ctx context.Context, method string) context.Context { if !EnableTracing { return ctx } tr := trace.New("grpc.Recv."+methodFamily(method), method) return trace.NewContext(ctx, tr) }) wg.Wait() } var _ http.Handler = (*Server)(nil) // ServeHTTP implements the Go standard library's http.Handler // interface by responding to the gRPC request r, by looking up // the requested gRPC method in the gRPC server s. // // The provided HTTP request must have arrived on an HTTP/2 // connection. When using the Go standard library's server, // practically this means that the Request must also have arrived // over TLS. // // To share one port (such as 443 for https) between gRPC and an // existing http.Handler, use a root http.Handler such as: // // if r.ProtoMajor == 2 && strings.HasPrefix( // r.Header.Get("Content-Type"), "application/grpc") { // grpcServer.ServeHTTP(w, r) // } else { // yourMux.ServeHTTP(w, r) // } // // Note that ServeHTTP uses Go's HTTP/2 server implementation which is totally // separate from grpc-go's HTTP/2 server. Performance and features may vary // between the two paths. ServeHTTP does not support some gRPC features // available through grpc-go's HTTP/2 server, and it is currently EXPERIMENTAL // and subject to change. 
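// A sketch of serving that root handler over TLS with the standard library
// server, which negotiates HTTP/2 automatically; rootHandler and the
// certificate paths are illustrative:
//
//  srv := &http.Server{
//      Addr:    ":8443",
//      Handler: rootHandler, // the dispatching handler shown above
//  }
//  if err := srv.ListenAndServeTLS("server.crt", "server.key"); err != nil {
//      log.Fatal(err)
//  }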
func (s *Server) ServeHTTP(w http.ResponseWriter, r *http.Request) { st, err := transport.NewServerHandlerTransport(w, r, s.opts.statsHandler) if err != nil { http.Error(w, err.Error(), http.StatusInternalServerError) return } if !s.addConn(st) { return } defer s.removeConn(st) s.serveStreams(st) } // traceInfo returns a traceInfo and associates it with stream, if tracing is enabled. // If tracing is not enabled, it returns nil. func (s *Server) traceInfo(st transport.ServerTransport, stream *transport.Stream) (trInfo *traceInfo) { tr, ok := trace.FromContext(stream.Context()) if !ok { return nil } trInfo = &traceInfo{ tr: tr, firstLine: firstLine{ client: false, remoteAddr: st.RemoteAddr(), }, } if dl, ok := stream.Context().Deadline(); ok { trInfo.firstLine.deadline = time.Until(dl) } return trInfo } func (s *Server) addConn(st transport.ServerTransport) bool { s.mu.Lock() defer s.mu.Unlock() if s.conns == nil { st.Close() return false } if s.drain { // Transport added after we drained our existing conns: drain it // immediately. st.Drain() } s.conns[st] = true return true } func (s *Server) removeConn(st transport.ServerTransport) { s.mu.Lock() defer s.mu.Unlock() if s.conns != nil { delete(s.conns, st) s.cv.Broadcast() } } func (s *Server) channelzMetric() *channelz.ServerInternalMetric { return &channelz.ServerInternalMetric{ CallsStarted: atomic.LoadInt64(&s.czData.callsStarted), CallsSucceeded: atomic.LoadInt64(&s.czData.callsSucceeded), CallsFailed: atomic.LoadInt64(&s.czData.callsFailed), LastCallStartedTimestamp: time.Unix(0, atomic.LoadInt64(&s.czData.lastCallStartedTime)), } } func (s *Server) incrCallsStarted() { atomic.AddInt64(&s.czData.callsStarted, 1) atomic.StoreInt64(&s.czData.lastCallStartedTime, time.Now().UnixNano()) } func (s *Server) incrCallsSucceeded() { atomic.AddInt64(&s.czData.callsSucceeded, 1) } func (s *Server) incrCallsFailed() { atomic.AddInt64(&s.czData.callsFailed, 1) } func (s *Server) sendResponse(t transport.ServerTransport, stream *transport.Stream, msg interface{}, cp Compressor, opts *transport.Options, comp encoding.Compressor) error { data, err := encode(s.getCodec(stream.ContentSubtype()), msg) if err != nil { grpclog.Errorln("grpc: server failed to encode response: ", err) return err } compData, err := compress(data, cp, comp) if err != nil { grpclog.Errorln("grpc: server failed to compress response: ", err) return err } hdr, payload := msgHeader(data, compData) // TODO(dfawley): should we be checking len(data) instead? if len(payload) > s.opts.maxSendMessageSize { return status.Errorf(codes.ResourceExhausted, "grpc: trying to send message larger than max (%d vs. 
%d)", len(payload), s.opts.maxSendMessageSize) } err = t.Write(stream, hdr, payload, opts) if err == nil && s.opts.statsHandler != nil { s.opts.statsHandler.HandleRPC(stream.Context(), outPayload(false, msg, data, payload, time.Now())) } return err } func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport.Stream, srv *service, md *MethodDesc, trInfo *traceInfo) (err error) { if channelz.IsOn() { s.incrCallsStarted() defer func() { if err != nil && err != io.EOF { s.incrCallsFailed() } else { s.incrCallsSucceeded() } }() } sh := s.opts.statsHandler if sh != nil { beginTime := time.Now() begin := &stats.Begin{ BeginTime: beginTime, } sh.HandleRPC(stream.Context(), begin) defer func() { end := &stats.End{ BeginTime: beginTime, EndTime: time.Now(), } if err != nil && err != io.EOF { end.Error = toRPCErr(err) } sh.HandleRPC(stream.Context(), end) }() } if trInfo != nil { defer trInfo.tr.Finish() trInfo.tr.LazyLog(&trInfo.firstLine, false) defer func() { if err != nil && err != io.EOF { trInfo.tr.LazyLog(&fmtStringer{"%v", []interface{}{err}}, true) trInfo.tr.SetError() } }() } binlog := binarylog.GetMethodLogger(stream.Method()) if binlog != nil { ctx := stream.Context() md, _ := metadata.FromIncomingContext(ctx) logEntry := &binarylog.ClientHeader{ Header: md, MethodName: stream.Method(), PeerAddr: nil, } if deadline, ok := ctx.Deadline(); ok { logEntry.Timeout = time.Until(deadline) if logEntry.Timeout < 0 { logEntry.Timeout = 0 } } if a := md[":authority"]; len(a) > 0 { logEntry.Authority = a[0] } if peer, ok := peer.FromContext(ctx); ok { logEntry.PeerAddr = peer.Addr } binlog.Log(logEntry) } // comp and cp are used for compression. decomp and dc are used for // decompression. If comp and decomp are both set, they are the same; // however they are kept separate to ensure that at most one of the // compressor/decompressor variable pairs are set for use later. var comp, decomp encoding.Compressor var cp Compressor var dc Decompressor // If dc is set and matches the stream's compression, use it. Otherwise, try // to find a matching registered compressor for decomp. if rc := stream.RecvCompress(); s.opts.dc != nil && s.opts.dc.Type() == rc { dc = s.opts.dc } else if rc != "" && rc != encoding.Identity { decomp = encoding.GetCompressor(rc) if decomp == nil { st := status.Newf(codes.Unimplemented, "grpc: Decompressor is not installed for grpc-encoding %q", rc) t.WriteStatus(stream, st) return st.Err() } } // If cp is set, use it. Otherwise, attempt to compress the response using // the incoming message compression method. // // NOTE: this needs to be ahead of all handling, https://github.com/grpc/grpc-go/issues/686. if s.opts.cp != nil { cp = s.opts.cp stream.SetSendCompress(cp.Type()) } else if rc := stream.RecvCompress(); rc != "" && rc != encoding.Identity { // Legacy compressor not specified; attempt to respond with same encoding. 
comp = encoding.GetCompressor(rc) if comp != nil { stream.SetSendCompress(rc) } } var payInfo *payloadInfo if sh != nil || binlog != nil { payInfo = &payloadInfo{} } d, err := recvAndDecompress(&parser{r: stream}, stream, dc, s.opts.maxReceiveMessageSize, payInfo, decomp) if err != nil { if st, ok := status.FromError(err); ok { if e := t.WriteStatus(stream, st); e != nil { grpclog.Warningf("grpc: Server.processUnaryRPC failed to write status %v", e) } } return err } if channelz.IsOn() { t.IncrMsgRecv() } df := func(v interface{}) error { if err := s.getCodec(stream.ContentSubtype()).Unmarshal(d, v); err != nil { return status.Errorf(codes.Internal, "grpc: error unmarshalling request: %v", err) } if sh != nil { sh.HandleRPC(stream.Context(), &stats.InPayload{ RecvTime: time.Now(), Payload: v, WireLength: payInfo.wireLength, Data: d, Length: len(d), }) } if binlog != nil { binlog.Log(&binarylog.ClientMessage{ Message: d, }) } if trInfo != nil { trInfo.tr.LazyLog(&payload{sent: false, msg: v}, true) } return nil } ctx := NewContextWithServerTransportStream(stream.Context(), stream) reply, appErr := md.Handler(srv.server, ctx, df, s.opts.unaryInt) if appErr != nil { appStatus, ok := status.FromError(appErr) if !ok { // Convert appErr if it is not a grpc status error. appErr = status.Error(codes.Unknown, appErr.Error()) appStatus, _ = status.FromError(appErr) } if trInfo != nil { trInfo.tr.LazyLog(stringer(appStatus.Message()), true) trInfo.tr.SetError() } if e := t.WriteStatus(stream, appStatus); e != nil { grpclog.Warningf("grpc: Server.processUnaryRPC failed to write status: %v", e) } if binlog != nil { if h, _ := stream.Header(); h.Len() > 0 { // Only log serverHeader if there was header. Otherwise it can // be trailer only. binlog.Log(&binarylog.ServerHeader{ Header: h, }) } binlog.Log(&binarylog.ServerTrailer{ Trailer: stream.Trailer(), Err: appErr, }) } return appErr } if trInfo != nil { trInfo.tr.LazyLog(stringer("OK"), false) } opts := &transport.Options{Last: true} if err := s.sendResponse(t, stream, reply, cp, opts, comp); err != nil { if err == io.EOF { // The entire stream is done (for unary RPC only). return err } if s, ok := status.FromError(err); ok { if e := t.WriteStatus(stream, s); e != nil { grpclog.Warningf("grpc: Server.processUnaryRPC failed to write status: %v", e) } } else { switch st := err.(type) { case transport.ConnectionError: // Nothing to do here. default: panic(fmt.Sprintf("grpc: Unexpected error (%T) from sendResponse: %v", st, st)) } } if binlog != nil { h, _ := stream.Header() binlog.Log(&binarylog.ServerHeader{ Header: h, }) binlog.Log(&binarylog.ServerTrailer{ Trailer: stream.Trailer(), Err: appErr, }) } return err } if binlog != nil { h, _ := stream.Header() binlog.Log(&binarylog.ServerHeader{ Header: h, }) binlog.Log(&binarylog.ServerMessage{ Message: reply, }) } if channelz.IsOn() { t.IncrMsgSent() } if trInfo != nil { trInfo.tr.LazyLog(&payload{sent: true, msg: reply}, true) } // TODO: Should we be logging if writing status failed here, like above? // Should the logging be in WriteStatus? Should we ignore the WriteStatus // error or allow the stats handler to see it? 
err = t.WriteStatus(stream, status.New(codes.OK, "")) if binlog != nil { binlog.Log(&binarylog.ServerTrailer{ Trailer: stream.Trailer(), Err: appErr, }) } return err } func (s *Server) processStreamingRPC(t transport.ServerTransport, stream *transport.Stream, srv *service, sd *StreamDesc, trInfo *traceInfo) (err error) { if channelz.IsOn() { s.incrCallsStarted() defer func() { if err != nil && err != io.EOF { s.incrCallsFailed() } else { s.incrCallsSucceeded() } }() } sh := s.opts.statsHandler if sh != nil { beginTime := time.Now() begin := &stats.Begin{ BeginTime: beginTime, } sh.HandleRPC(stream.Context(), begin) defer func() { end := &stats.End{ BeginTime: beginTime, EndTime: time.Now(), } if err != nil && err != io.EOF { end.Error = toRPCErr(err) } sh.HandleRPC(stream.Context(), end) }() } ctx := NewContextWithServerTransportStream(stream.Context(), stream) ss := &serverStream{ ctx: ctx, t: t, s: stream, p: &parser{r: stream}, codec: s.getCodec(stream.ContentSubtype()), maxReceiveMessageSize: s.opts.maxReceiveMessageSize, maxSendMessageSize: s.opts.maxSendMessageSize, trInfo: trInfo, statsHandler: sh, } ss.binlog = binarylog.GetMethodLogger(stream.Method()) if ss.binlog != nil { md, _ := metadata.FromIncomingContext(ctx) logEntry := &binarylog.ClientHeader{ Header: md, MethodName: stream.Method(), PeerAddr: nil, } if deadline, ok := ctx.Deadline(); ok { logEntry.Timeout = time.Until(deadline) if logEntry.Timeout < 0 { logEntry.Timeout = 0 } } if a := md[":authority"]; len(a) > 0 { logEntry.Authority = a[0] } if peer, ok := peer.FromContext(ss.Context()); ok { logEntry.PeerAddr = peer.Addr } ss.binlog.Log(logEntry) } // If dc is set and matches the stream's compression, use it. Otherwise, try // to find a matching registered compressor for decomp. if rc := stream.RecvCompress(); s.opts.dc != nil && s.opts.dc.Type() == rc { ss.dc = s.opts.dc } else if rc != "" && rc != encoding.Identity { ss.decomp = encoding.GetCompressor(rc) if ss.decomp == nil { st := status.Newf(codes.Unimplemented, "grpc: Decompressor is not installed for grpc-encoding %q", rc) t.WriteStatus(ss.s, st) return st.Err() } } // If cp is set, use it. Otherwise, attempt to compress the response using // the incoming message compression method. // // NOTE: this needs to be ahead of all handling, https://github.com/grpc/grpc-go/issues/686. if s.opts.cp != nil { ss.cp = s.opts.cp stream.SetSendCompress(s.opts.cp.Type()) } else if rc := stream.RecvCompress(); rc != "" && rc != encoding.Identity { // Legacy compressor not specified; attempt to respond with same encoding. 
ss.comp = encoding.GetCompressor(rc) if ss.comp != nil { stream.SetSendCompress(rc) } } if trInfo != nil { trInfo.tr.LazyLog(&trInfo.firstLine, false) defer func() { ss.mu.Lock() if err != nil && err != io.EOF { ss.trInfo.tr.LazyLog(&fmtStringer{"%v", []interface{}{err}}, true) ss.trInfo.tr.SetError() } ss.trInfo.tr.Finish() ss.trInfo.tr = nil ss.mu.Unlock() }() } var appErr error var server interface{} if srv != nil { server = srv.server } if s.opts.streamInt == nil { appErr = sd.Handler(server, ss) } else { info := &StreamServerInfo{ FullMethod: stream.Method(), IsClientStream: sd.ClientStreams, IsServerStream: sd.ServerStreams, } appErr = s.opts.streamInt(server, ss, info, sd.Handler) } if appErr != nil { appStatus, ok := status.FromError(appErr) if !ok { appStatus = status.New(codes.Unknown, appErr.Error()) appErr = appStatus.Err() } if trInfo != nil { ss.mu.Lock() ss.trInfo.tr.LazyLog(stringer(appStatus.Message()), true) ss.trInfo.tr.SetError() ss.mu.Unlock() } t.WriteStatus(ss.s, appStatus) if ss.binlog != nil { ss.binlog.Log(&binarylog.ServerTrailer{ Trailer: ss.s.Trailer(), Err: appErr, }) } // TODO: Should we log an error from WriteStatus here and below? return appErr } if trInfo != nil { ss.mu.Lock() ss.trInfo.tr.LazyLog(stringer("OK"), false) ss.mu.Unlock() } err = t.WriteStatus(ss.s, status.New(codes.OK, "")) if ss.binlog != nil { ss.binlog.Log(&binarylog.ServerTrailer{ Trailer: ss.s.Trailer(), Err: appErr, }) } return err } func (s *Server) handleStream(t transport.ServerTransport, stream *transport.Stream, trInfo *traceInfo) { sm := stream.Method() if sm != "" && sm[0] == '/' { sm = sm[1:] } pos := strings.LastIndex(sm, "/") if pos == -1 { if trInfo != nil { trInfo.tr.LazyLog(&fmtStringer{"Malformed method name %q", []interface{}{sm}}, true) trInfo.tr.SetError() } errDesc := fmt.Sprintf("malformed method name: %q", stream.Method()) if err := t.WriteStatus(stream, status.New(codes.ResourceExhausted, errDesc)); err != nil { if trInfo != nil { trInfo.tr.LazyLog(&fmtStringer{"%v", []interface{}{err}}, true) trInfo.tr.SetError() } grpclog.Warningf("grpc: Server.handleStream failed to write status: %v", err) } if trInfo != nil { trInfo.tr.Finish() } return } service := sm[:pos] method := sm[pos+1:] srv, knownService := s.m[service] if knownService { if md, ok := srv.md[method]; ok { s.processUnaryRPC(t, stream, srv, md, trInfo) return } if sd, ok := srv.sd[method]; ok { s.processStreamingRPC(t, stream, srv, sd, trInfo) return } } // Unknown service, or known server unknown method. if unknownDesc := s.opts.unknownStreamDesc; unknownDesc != nil { s.processStreamingRPC(t, stream, nil, unknownDesc, trInfo) return } var errDesc string if !knownService { errDesc = fmt.Sprintf("unknown service %v", service) } else { errDesc = fmt.Sprintf("unknown method %v for service %v", method, service) } if trInfo != nil { trInfo.tr.LazyPrintf("%s", errDesc) trInfo.tr.SetError() } if err := t.WriteStatus(stream, status.New(codes.Unimplemented, errDesc)); err != nil { if trInfo != nil { trInfo.tr.LazyLog(&fmtStringer{"%v", []interface{}{err}}, true) trInfo.tr.SetError() } grpclog.Warningf("grpc: Server.handleStream failed to write status: %v", err) } if trInfo != nil { trInfo.tr.Finish() } } // The key to save ServerTransportStream in the context. type streamKey struct{} // NewContextWithServerTransportStream creates a new context from ctx and // attaches stream to it. // // This API is EXPERIMENTAL. 
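// A sketch of using this in a handler test; fakeStream is an illustrative
// type that implements ServerTransportStream, e.g. by recording the metadata
// passed to SetHeader:
//
//  ctx := grpc.NewContextWithServerTransportStream(context.Background(), &fakeStream{})
//  if err := grpc.SetHeader(ctx, metadata.Pairs("k", "v")); err != nil {
//      t.Fatal(err)
//  }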
func NewContextWithServerTransportStream(ctx context.Context, stream ServerTransportStream) context.Context { return context.WithValue(ctx, streamKey{}, stream) } // ServerTransportStream is a minimal interface that a transport stream must // implement. This can be used to mock an actual transport stream for tests of // handler code that use, for example, grpc.SetHeader (which requires some // stream to be in context). // // See also NewContextWithServerTransportStream. // // This API is EXPERIMENTAL. type ServerTransportStream interface { Method() string SetHeader(md metadata.MD) error SendHeader(md metadata.MD) error SetTrailer(md metadata.MD) error } // ServerTransportStreamFromContext returns the ServerTransportStream saved in // ctx. Returns nil if the given context has no stream associated with it // (which implies it is not an RPC invocation context). // // This API is EXPERIMENTAL. func ServerTransportStreamFromContext(ctx context.Context) ServerTransportStream { s, _ := ctx.Value(streamKey{}).(ServerTransportStream) return s } // Stop stops the gRPC server. It immediately closes all open // connections and listeners. // It cancels all active RPCs on the server side and the corresponding // pending RPCs on the client side will get notified by connection // errors. func (s *Server) Stop() { s.quitOnce.Do(func() { close(s.quit) }) defer func() { s.serveWG.Wait() s.doneOnce.Do(func() { close(s.done) }) }() s.channelzRemoveOnce.Do(func() { if channelz.IsOn() { channelz.RemoveEntry(s.channelzID) } }) s.mu.Lock() listeners := s.lis s.lis = nil st := s.conns s.conns = nil // interrupt GracefulStop if Stop and GracefulStop are called concurrently. s.cv.Broadcast() s.mu.Unlock() for lis := range listeners { lis.Close() } for c := range st { c.Close() } s.mu.Lock() if s.events != nil { s.events.Finish() s.events = nil } s.mu.Unlock() } // GracefulStop stops the gRPC server gracefully. It stops the server from // accepting new connections and RPCs and blocks until all the pending RPCs are // finished. func (s *Server) GracefulStop() { s.quitOnce.Do(func() { close(s.quit) }) defer func() { s.doneOnce.Do(func() { close(s.done) }) }() s.channelzRemoveOnce.Do(func() { if channelz.IsOn() { channelz.RemoveEntry(s.channelzID) } }) s.mu.Lock() if s.conns == nil { s.mu.Unlock() return } for lis := range s.lis { lis.Close() } s.lis = nil if !s.drain { for st := range s.conns { st.Drain() } s.drain = true } // Wait for serving threads to be ready to exit. Only then can we be sure no // new conns will be created. s.mu.Unlock() s.serveWG.Wait() s.mu.Lock() for len(s.conns) != 0 { s.cv.Wait() } s.conns = nil if s.events != nil { s.events.Finish() s.events = nil } s.mu.Unlock() } // contentSubtype must be lowercase // cannot return nil func (s *Server) getCodec(contentSubtype string) baseCodec { if s.opts.codec != nil { return s.opts.codec } if contentSubtype == "" { return encoding.GetCodec(proto.Name) } codec := encoding.GetCodec(contentSubtype) if codec == nil { return encoding.GetCodec(proto.Name) } return codec } // SetHeader sets the header metadata. // When called multiple times, all the provided metadata will be merged. // All the metadata will be sent out when one of the following happens: // - grpc.SendHeader() is called; // - The first response is sent out; // - An RPC status is sent out (error or success). 
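// A sketch of a unary handler setting header and trailer metadata; the
// pb.HelloRequest and pb.HelloReply types are illustrative:
//
//  func (s *server) SayHello(ctx context.Context, in *pb.HelloRequest) (*pb.HelloReply, error) {
//      if err := grpc.SetHeader(ctx, metadata.Pairs("request-id", "42")); err != nil {
//          return nil, err
//      }
//      if err := grpc.SetTrailer(ctx, metadata.Pairs("handled-by", "example")); err != nil {
//          return nil, err
//      }
//      return &pb.HelloReply{}, nil
//  }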
func SetHeader(ctx context.Context, md metadata.MD) error { if md.Len() == 0 { return nil } stream := ServerTransportStreamFromContext(ctx) if stream == nil { return status.Errorf(codes.Internal, "grpc: failed to fetch the stream from the context %v", ctx) } return stream.SetHeader(md) } // SendHeader sends header metadata. It may be called at most once. // The provided md and headers set by SetHeader() will be sent. func SendHeader(ctx context.Context, md metadata.MD) error { stream := ServerTransportStreamFromContext(ctx) if stream == nil { return status.Errorf(codes.Internal, "grpc: failed to fetch the stream from the context %v", ctx) } if err := stream.SendHeader(md); err != nil { return toRPCErr(err) } return nil } // SetTrailer sets the trailer metadata that will be sent when an RPC returns. // When called more than once, all the provided metadata will be merged. func SetTrailer(ctx context.Context, md metadata.MD) error { if md.Len() == 0 { return nil } stream := ServerTransportStreamFromContext(ctx) if stream == nil { return status.Errorf(codes.Internal, "grpc: failed to fetch the stream from the context %v", ctx) } return stream.SetTrailer(md) } // Method returns the method string for the server context. The returned // string is in the format of "/service/method". func Method(ctx context.Context) (string, bool) { s := ServerTransportStreamFromContext(ctx) if s == nil { return "", false } return s.Method(), true } type channelzServer struct { s *Server } func (c *channelzServer) ChannelzMetric() *channelz.ServerInternalMetric { return c.s.channelzMetric() } grpc-go-1.22.1/server_test.go000066400000000000000000000061361351635773100160500ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "context" "net" "reflect" "strings" "testing" "time" "google.golang.org/grpc/internal/transport" ) type emptyServiceServer interface{} type testServer struct{} func (s) TestStopBeforeServe(t *testing.T) { lis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("failed to create listener: %v", err) } server := NewServer() server.Stop() err = server.Serve(lis) if err != ErrServerStopped { t.Fatalf("server.Serve() error = %v, want %v", err, ErrServerStopped) } // server.Serve is responsible for closing the listener, even if the // server was already stopped. 
err = lis.Close() if got, want := errorDesc(err), "use of closed"; !strings.Contains(got, want) { t.Errorf("Close() error = %q, want %q", got, want) } } func (s) TestGracefulStop(t *testing.T) { lis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("failed to create listener: %v", err) } server := NewServer() go func() { // make sure Serve() is called time.Sleep(time.Millisecond * 500) server.GracefulStop() }() err = server.Serve(lis) if err != nil { t.Fatalf("Serve() returned non-nil error on GracefulStop: %v", err) } } func (s) TestGetServiceInfo(t *testing.T) { testSd := ServiceDesc{ ServiceName: "grpc.testing.EmptyService", HandlerType: (*emptyServiceServer)(nil), Methods: []MethodDesc{ { MethodName: "EmptyCall", Handler: nil, }, }, Streams: []StreamDesc{ { StreamName: "EmptyStream", Handler: nil, ServerStreams: false, ClientStreams: true, }, }, Metadata: []int{0, 2, 1, 3}, } server := NewServer() server.RegisterService(&testSd, &testServer{}) info := server.GetServiceInfo() want := map[string]ServiceInfo{ "grpc.testing.EmptyService": { Methods: []MethodInfo{ { Name: "EmptyCall", IsClientStream: false, IsServerStream: false, }, { Name: "EmptyStream", IsClientStream: true, IsServerStream: false, }}, Metadata: []int{0, 2, 1, 3}, }, } if !reflect.DeepEqual(info, want) { t.Errorf("GetServiceInfo() = %+v, want %+v", info, want) } } func (s) TestStreamContext(t *testing.T) { expectedStream := &transport.Stream{} ctx := NewContextWithServerTransportStream(context.Background(), expectedStream) s := ServerTransportStreamFromContext(ctx) stream, ok := s.(*transport.Stream) if !ok || expectedStream != stream { t.Fatalf("GetStreamFromContext(%v) = %v, %t, want: %v, true", ctx, stream, ok, expectedStream) } } grpc-go-1.22.1/service_config.go000066400000000000000000000322431351635773100164660ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "encoding/json" "fmt" "strconv" "strings" "time" "google.golang.org/grpc/balancer" "google.golang.org/grpc/codes" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/internal" "google.golang.org/grpc/serviceconfig" ) const maxInt = int(^uint(0) >> 1) // MethodConfig defines the configuration recommended by the service providers for a // particular method. // // Deprecated: Users should not use this struct. Service config should be received // through name resolver, as specified here // https://github.com/grpc/grpc/blob/master/doc/service_config.md type MethodConfig struct { // WaitForReady indicates whether RPCs sent to this method should wait until // the connection is ready by default (!failfast). The value specified via the // gRPC client API will override the value set here. WaitForReady *bool // Timeout is the default timeout for RPCs sent to this method. The actual // deadline used will be the minimum of the value specified here and the value // set by the application via the gRPC client API. If either one is not set, // then the other will be used. 
If neither is set, then the RPC has no deadline. Timeout *time.Duration // MaxReqSize is the maximum allowed payload size for an individual request in a // stream (client->server) in bytes. The size which is measured is the serialized // payload after per-message compression (but before stream compression) in bytes. // The actual value used is the minimum of the value specified here and the value set // by the application via the gRPC client API. If either one is not set, then the other // will be used. If neither is set, then the built-in default is used. MaxReqSize *int // MaxRespSize is the maximum allowed payload size for an individual response in a // stream (server->client) in bytes. MaxRespSize *int // RetryPolicy configures retry options for the method. retryPolicy *retryPolicy } type lbConfig struct { name string cfg serviceconfig.LoadBalancingConfig } // ServiceConfig is provided by the service provider and contains parameters for how // clients that connect to the service should behave. // // Deprecated: Users should not use this struct. Service config should be received // through name resolver, as specified here // https://github.com/grpc/grpc/blob/master/doc/service_config.md type ServiceConfig struct { serviceconfig.Config // LB is the load balancer the service providers recommends. The balancer // specified via grpc.WithBalancer will override this. This is deprecated; // lbConfigs is preferred. If lbConfig and LB are both present, lbConfig // will be used. LB *string // lbConfig is the service config's load balancing configuration. If // lbConfig and LB are both present, lbConfig will be used. lbConfig *lbConfig // Methods contains a map for the methods in this service. If there is an // exact match for a method (i.e. /service/method) in the map, use the // corresponding MethodConfig. If there's no exact match, look for the // default config for the service (/service/) and use the corresponding // MethodConfig if it exists. Otherwise, the method has no MethodConfig to // use. Methods map[string]MethodConfig // If a retryThrottlingPolicy is provided, gRPC will automatically throttle // retry attempts and hedged RPCs when the client’s ratio of failures to // successes exceeds a threshold. // // For each server name, the gRPC client will maintain a token_count which is // initially set to maxTokens, and can take values between 0 and maxTokens. // // Every outgoing RPC (regardless of service or method invoked) will change // token_count as follows: // // - Every failed RPC will decrement the token_count by 1. // - Every successful RPC will increment the token_count by tokenRatio. // // If token_count is less than or equal to maxTokens / 2, then RPCs will not // be retried and hedged RPCs will not be sent. retryThrottling *retryThrottlingPolicy // healthCheckConfig must be set as one of the requirement to enable LB channel // health check. healthCheckConfig *healthCheckConfig // rawJSONString stores service config json string that get parsed into // this service config struct. rawJSONString string } // healthCheckConfig defines the go-native version of the LB channel health check config. type healthCheckConfig struct { // serviceName is the service name to use in the health-checking request. 
ServiceName string } // retryPolicy defines the go-native version of the retry policy defined by the // service config here: // https://github.com/grpc/proposal/blob/master/A6-client-retries.md#integration-with-service-config type retryPolicy struct { // MaxAttempts is the maximum number of attempts, including the original RPC. // // This field is required and must be two or greater. maxAttempts int // Exponential backoff parameters. The initial retry attempt will occur at // random(0, initialBackoffMS). In general, the nth attempt will occur at // random(0, // min(initialBackoffMS*backoffMultiplier**(n-1), maxBackoffMS)). // // These fields are required and must be greater than zero. initialBackoff time.Duration maxBackoff time.Duration backoffMultiplier float64 // The set of status codes which may be retried. // // Status codes are specified as strings, e.g., "UNAVAILABLE". // // This field is required and must be non-empty. // Note: a set is used to store this for easy lookup. retryableStatusCodes map[codes.Code]bool } type jsonRetryPolicy struct { MaxAttempts int InitialBackoff string MaxBackoff string BackoffMultiplier float64 RetryableStatusCodes []codes.Code } // retryThrottlingPolicy defines the go-native version of the retry throttling // policy defined by the service config here: // https://github.com/grpc/proposal/blob/master/A6-client-retries.md#integration-with-service-config type retryThrottlingPolicy struct { // The number of tokens starts at maxTokens. The token_count will always be // between 0 and maxTokens. // // This field is required and must be greater than zero. MaxTokens float64 // The amount of tokens to add on each successful RPC. Typically this will // be some number between 0 and 1, e.g., 0.1. // // This field is required and must be greater than zero. Up to 3 decimal // places are supported. TokenRatio float64 } func parseDuration(s *string) (*time.Duration, error) { if s == nil { return nil, nil } if !strings.HasSuffix(*s, "s") { return nil, fmt.Errorf("malformed duration %q", *s) } ss := strings.SplitN((*s)[:len(*s)-1], ".", 3) if len(ss) > 2 { return nil, fmt.Errorf("malformed duration %q", *s) } // hasDigits is set if either the whole or fractional part of the number is // present, since both are optional but one is required. hasDigits := false var d time.Duration if len(ss[0]) > 0 { i, err := strconv.ParseInt(ss[0], 10, 32) if err != nil { return nil, fmt.Errorf("malformed duration %q: %v", *s, err) } d = time.Duration(i) * time.Second hasDigits = true } if len(ss) == 2 && len(ss[1]) > 0 { if len(ss[1]) > 9 { return nil, fmt.Errorf("malformed duration %q", *s) } f, err := strconv.ParseInt(ss[1], 10, 64) if err != nil { return nil, fmt.Errorf("malformed duration %q: %v", *s, err) } for i := 9; i > len(ss[1]); i-- { f *= 10 } d += time.Duration(f) hasDigits = true } if !hasDigits { return nil, fmt.Errorf("malformed duration %q", *s) } return &d, nil } type jsonName struct { Service *string Method *string } func (j jsonName) generatePath() (string, bool) { if j.Service == nil { return "", false } res := "/" + *j.Service + "/" if j.Method != nil { res += *j.Method } return res, true } // TODO(lyuxuan): delete this struct after cleaning up old service config implementation. 
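// Illustrative sketch only, not part of the original file: a service config
// JSON string of the shape parseServiceConfig (below) accepts, including a
// retryPolicy entry. The field names mirror the jsonSC, jsonMC and
// jsonRetryPolicy structs in this file; the concrete values are invented for
// the example.
var _ = `{
  "methodConfig": [{
    "name": [{"service": "foo", "method": "Bar"}],
    "waitForReady": true,
    "timeout": "1s",
    "retryPolicy": {
      "maxAttempts": 3,
      "initialBackoff": ".01s",
      "maxBackoff": ".1s",
      "backoffMultiplier": 2.0,
      "retryableStatusCodes": ["UNAVAILABLE"]
    }
  }]
}`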
type jsonMC struct { Name *[]jsonName WaitForReady *bool Timeout *string MaxRequestMessageBytes *int64 MaxResponseMessageBytes *int64 RetryPolicy *jsonRetryPolicy } type loadBalancingConfig map[string]json.RawMessage // TODO(lyuxuan): delete this struct after cleaning up old service config implementation. type jsonSC struct { LoadBalancingPolicy *string LoadBalancingConfig *[]loadBalancingConfig MethodConfig *[]jsonMC RetryThrottling *retryThrottlingPolicy HealthCheckConfig *healthCheckConfig } func init() { internal.ParseServiceConfig = func(sc string) (interface{}, error) { return parseServiceConfig(sc) } } func parseServiceConfig(js string) (*ServiceConfig, error) { if len(js) == 0 { return nil, fmt.Errorf("no JSON service config provided") } var rsc jsonSC err := json.Unmarshal([]byte(js), &rsc) if err != nil { grpclog.Warningf("grpc: parseServiceConfig error unmarshaling %s due to %v", js, err) return nil, err } sc := ServiceConfig{ LB: rsc.LoadBalancingPolicy, Methods: make(map[string]MethodConfig), retryThrottling: rsc.RetryThrottling, healthCheckConfig: rsc.HealthCheckConfig, rawJSONString: js, } if rsc.LoadBalancingConfig != nil { for i, lbcfg := range *rsc.LoadBalancingConfig { if len(lbcfg) != 1 { err := fmt.Errorf("invalid loadBalancingConfig: entry %v does not contain exactly 1 policy/config pair: %q", i, lbcfg) grpclog.Warningf(err.Error()) return nil, err } var name string var jsonCfg json.RawMessage for name, jsonCfg = range lbcfg { } builder := balancer.Get(name) if builder == nil { continue } sc.lbConfig = &lbConfig{name: name} if parser, ok := builder.(balancer.ConfigParser); ok { var err error sc.lbConfig.cfg, err = parser.ParseConfig(jsonCfg) if err != nil { return nil, fmt.Errorf("error parsing loadBalancingConfig for policy %q: %v", name, err) } } else if string(jsonCfg) != "{}" { grpclog.Warningf("non-empty balancer configuration %q, but balancer does not implement ParseConfig", string(jsonCfg)) } break } } if rsc.MethodConfig == nil { return &sc, nil } for _, m := range *rsc.MethodConfig { if m.Name == nil { continue } d, err := parseDuration(m.Timeout) if err != nil { grpclog.Warningf("grpc: parseServiceConfig error unmarshaling %s due to %v", js, err) return nil, err } mc := MethodConfig{ WaitForReady: m.WaitForReady, Timeout: d, } if mc.retryPolicy, err = convertRetryPolicy(m.RetryPolicy); err != nil { grpclog.Warningf("grpc: parseServiceConfig error unmarshaling %s due to %v", js, err) return nil, err } if m.MaxRequestMessageBytes != nil { if *m.MaxRequestMessageBytes > int64(maxInt) { mc.MaxReqSize = newInt(maxInt) } else { mc.MaxReqSize = newInt(int(*m.MaxRequestMessageBytes)) } } if m.MaxResponseMessageBytes != nil { if *m.MaxResponseMessageBytes > int64(maxInt) { mc.MaxRespSize = newInt(maxInt) } else { mc.MaxRespSize = newInt(int(*m.MaxResponseMessageBytes)) } } for _, n := range *m.Name { if path, valid := n.generatePath(); valid { sc.Methods[path] = mc } } } if sc.retryThrottling != nil { if mt := sc.retryThrottling.MaxTokens; mt <= 0 || mt > 1000 { return nil, fmt.Errorf("invalid retry throttling config: maxTokens (%v) out of range (0, 1000]", mt) } if tr := sc.retryThrottling.TokenRatio; tr <= 0 { return nil, fmt.Errorf("invalid retry throttling config: tokenRatio (%v) may not be negative", tr) } } return &sc, nil } func convertRetryPolicy(jrp *jsonRetryPolicy) (p *retryPolicy, err error) { if jrp == nil { return nil, nil } ib, err := parseDuration(&jrp.InitialBackoff) if err != nil { return nil, err } mb, err := parseDuration(&jrp.MaxBackoff) if err 
!= nil { return nil, err } if jrp.MaxAttempts <= 1 || *ib <= 0 || *mb <= 0 || jrp.BackoffMultiplier <= 0 || len(jrp.RetryableStatusCodes) == 0 { grpclog.Warningf("grpc: ignoring retry policy %v due to illegal configuration", jrp) return nil, nil } rp := &retryPolicy{ maxAttempts: jrp.MaxAttempts, initialBackoff: *ib, maxBackoff: *mb, backoffMultiplier: jrp.BackoffMultiplier, retryableStatusCodes: make(map[codes.Code]bool), } if rp.maxAttempts > 5 { // TODO(retry): Make the max maxAttempts configurable. rp.maxAttempts = 5 } for _, code := range jrp.RetryableStatusCodes { rp.retryableStatusCodes[code] = true } return rp, nil } func min(a, b *int) *int { if *a < *b { return a } return b } func getMaxSize(mcMax, doptMax *int, defaultVal int) *int { if mcMax == nil && doptMax == nil { return &defaultVal } if mcMax != nil && doptMax != nil { return min(mcMax, doptMax) } if mcMax != nil { return mcMax } return doptMax } func newInt(b int) *int { return &b } grpc-go-1.22.1/service_config_test.go000066400000000000000000000210541351635773100175230ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "encoding/json" "fmt" "math" "reflect" "testing" "time" "google.golang.org/grpc/balancer" "google.golang.org/grpc/serviceconfig" ) type parseTestCase struct { scjs string wantSC *ServiceConfig wantErr bool } func runParseTests(t *testing.T, testCases []parseTestCase) { for _, c := range testCases { sc, err := parseServiceConfig(c.scjs) if !c.wantErr { c.wantSC.rawJSONString = c.scjs } if c.wantErr != (err != nil) || !reflect.DeepEqual(sc, c.wantSC) { t.Fatalf("parseServiceConfig(%s) = %+v, %v, want %+v, %v", c.scjs, sc, err, c.wantSC, c.wantErr) } } } type pbbData struct { serviceconfig.LoadBalancingConfig Foo string Bar int } type parseBalancerBuilder struct{} func (parseBalancerBuilder) Name() string { return "pbb" } func (parseBalancerBuilder) ParseConfig(c json.RawMessage) (serviceconfig.LoadBalancingConfig, error) { d := pbbData{} if err := json.Unmarshal(c, &d); err != nil { return nil, err } return d, nil } func (parseBalancerBuilder) Build(cc balancer.ClientConn, opts balancer.BuildOptions) balancer.Balancer { panic("unimplemented") } func init() { balancer.Register(parseBalancerBuilder{}) } func (s) TestParseLBConfig(t *testing.T) { testcases := []parseTestCase{ { `{ "loadBalancingConfig": [{"pbb": { "foo": "hi" } }] }`, &ServiceConfig{ Methods: make(map[string]MethodConfig), lbConfig: &lbConfig{name: "pbb", cfg: pbbData{Foo: "hi"}}, }, false, }, } runParseTests(t, testcases) } func (s) TestParseLoadBalancer(t *testing.T) { testcases := []parseTestCase{ { `{ "loadBalancingPolicy": "round_robin", "methodConfig": [ { "name": [ { "service": "foo", "method": "Bar" } ], "waitForReady": true } ] }`, &ServiceConfig{ LB: newString("round_robin"), Methods: map[string]MethodConfig{ "/foo/Bar": { WaitForReady: newBool(true), }, }, }, false, }, { `{ "loadBalancingPolicy": 1, "methodConfig": [ { "name": [ { "service": "foo", "method": 
"Bar" } ], "waitForReady": false } ] }`, nil, true, }, } runParseTests(t, testcases) } func (s) TestParseWaitForReady(t *testing.T) { testcases := []parseTestCase{ { `{ "methodConfig": [ { "name": [ { "service": "foo", "method": "Bar" } ], "waitForReady": true } ] }`, &ServiceConfig{ Methods: map[string]MethodConfig{ "/foo/Bar": { WaitForReady: newBool(true), }, }, }, false, }, { `{ "methodConfig": [ { "name": [ { "service": "foo", "method": "Bar" } ], "waitForReady": false } ] }`, &ServiceConfig{ Methods: map[string]MethodConfig{ "/foo/Bar": { WaitForReady: newBool(false), }, }, }, false, }, { `{ "methodConfig": [ { "name": [ { "service": "foo", "method": "Bar" } ], "waitForReady": fall }, { "name": [ { "service": "foo", "method": "Bar" } ], "waitForReady": true } ] }`, nil, true, }, } runParseTests(t, testcases) } func (s) TestParseTimeOut(t *testing.T) { testcases := []parseTestCase{ { `{ "methodConfig": [ { "name": [ { "service": "foo", "method": "Bar" } ], "timeout": "1s" } ] }`, &ServiceConfig{ Methods: map[string]MethodConfig{ "/foo/Bar": { Timeout: newDuration(time.Second), }, }, }, false, }, { `{ "methodConfig": [ { "name": [ { "service": "foo", "method": "Bar" } ], "timeout": "3c" } ] }`, nil, true, }, { `{ "methodConfig": [ { "name": [ { "service": "foo", "method": "Bar" } ], "timeout": "3c" }, { "name": [ { "service": "foo", "method": "Bar" } ], "timeout": "1s" } ] }`, nil, true, }, } runParseTests(t, testcases) } func (s) TestParseMsgSize(t *testing.T) { testcases := []parseTestCase{ { `{ "methodConfig": [ { "name": [ { "service": "foo", "method": "Bar" } ], "maxRequestMessageBytes": 1024, "maxResponseMessageBytes": 2048 } ] }`, &ServiceConfig{ Methods: map[string]MethodConfig{ "/foo/Bar": { MaxReqSize: newInt(1024), MaxRespSize: newInt(2048), }, }, }, false, }, { `{ "methodConfig": [ { "name": [ { "service": "foo", "method": "Bar" } ], "maxRequestMessageBytes": "1024", "maxResponseMessageBytes": "2048" }, { "name": [ { "service": "foo", "method": "Bar" } ], "maxRequestMessageBytes": 1024, "maxResponseMessageBytes": 2048 } ] }`, nil, true, }, } runParseTests(t, testcases) } func (s) TestParseDuration(t *testing.T) { testCases := []struct { s *string want *time.Duration err bool }{ {s: nil, want: nil}, {s: newString("1s"), want: newDuration(time.Second)}, {s: newString("-1s"), want: newDuration(-time.Second)}, {s: newString("1.1s"), want: newDuration(1100 * time.Millisecond)}, {s: newString("1.s"), want: newDuration(time.Second)}, {s: newString("1.0s"), want: newDuration(time.Second)}, {s: newString(".002s"), want: newDuration(2 * time.Millisecond)}, {s: newString(".002000s"), want: newDuration(2 * time.Millisecond)}, {s: newString("0.003s"), want: newDuration(3 * time.Millisecond)}, {s: newString("0.000004s"), want: newDuration(4 * time.Microsecond)}, {s: newString("5000.000000009s"), want: newDuration(5000*time.Second + 9*time.Nanosecond)}, {s: newString("4999.999999999s"), want: newDuration(5000*time.Second - time.Nanosecond)}, {s: newString("1"), err: true}, {s: newString("s"), err: true}, {s: newString(".s"), err: true}, {s: newString("1 s"), err: true}, {s: newString(" 1s"), err: true}, {s: newString("1ms"), err: true}, {s: newString("1.1.1s"), err: true}, {s: newString("Xs"), err: true}, {s: newString("as"), err: true}, {s: newString(".0000000001s"), err: true}, {s: newString(fmt.Sprint(math.MaxInt32) + "s"), want: newDuration(math.MaxInt32 * time.Second)}, {s: newString(fmt.Sprint(int64(math.MaxInt32)+1) + "s"), err: true}, } for _, tc := range testCases { got, err := 
parseDuration(tc.s) if tc.err != (err != nil) || (got == nil) != (tc.want == nil) || (got != nil && *got != *tc.want) { wantErr := "" if tc.err { wantErr = "" } s := "" if tc.s != nil { s = `&"` + *tc.s + `"` } t.Errorf("parseDuration(%v) = %v, %v; want %v, %v", s, got, err, tc.want, wantErr) } } } func newBool(b bool) *bool { return &b } func newDuration(b time.Duration) *time.Duration { return &b } func newString(b string) *string { return &b } grpc-go-1.22.1/serviceconfig/000077500000000000000000000000001351635773100157745ustar00rootroot00000000000000grpc-go-1.22.1/serviceconfig/serviceconfig.go000066400000000000000000000025321351635773100211530ustar00rootroot00000000000000/* * * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package serviceconfig defines types and methods for operating on gRPC // service configs. // // This package is EXPERIMENTAL. package serviceconfig import ( "google.golang.org/grpc/internal" ) // Config represents an opaque data structure holding a service config. type Config interface { isConfig() } // LoadBalancingConfig represents an opaque data structure holding a load // balancer config. type LoadBalancingConfig interface { isLoadBalancingConfig() } // Parse parses the JSON service config provided into an internal form or // returns an error if the config is invalid. func Parse(ServiceConfigJSON string) (Config, error) { c, err := internal.ParseServiceConfig(ServiceConfigJSON) if err != nil { return nil, err } return c.(Config), err } grpc-go-1.22.1/stats/000077500000000000000000000000001351635773100143045ustar00rootroot00000000000000grpc-go-1.22.1/stats/grpc_testing/000077500000000000000000000000001351635773100167745ustar00rootroot00000000000000grpc-go-1.22.1/stats/grpc_testing/test.pb.go000066400000000000000000000322251351635773100207060ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: grpc_testing/test.proto package grpc_testing import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import ( context "golang.org/x/net/context" grpc "google.golang.org/grpc" ) // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. 
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type SimpleRequest struct { Id int32 `protobuf:"varint,2,opt,name=id,proto3" json:"id,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *SimpleRequest) Reset() { *m = SimpleRequest{} } func (m *SimpleRequest) String() string { return proto.CompactTextString(m) } func (*SimpleRequest) ProtoMessage() {} func (*SimpleRequest) Descriptor() ([]byte, []int) { return fileDescriptor_test_dd7ffeaa75513a0a, []int{0} } func (m *SimpleRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_SimpleRequest.Unmarshal(m, b) } func (m *SimpleRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_SimpleRequest.Marshal(b, m, deterministic) } func (dst *SimpleRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_SimpleRequest.Merge(dst, src) } func (m *SimpleRequest) XXX_Size() int { return xxx_messageInfo_SimpleRequest.Size(m) } func (m *SimpleRequest) XXX_DiscardUnknown() { xxx_messageInfo_SimpleRequest.DiscardUnknown(m) } var xxx_messageInfo_SimpleRequest proto.InternalMessageInfo func (m *SimpleRequest) GetId() int32 { if m != nil { return m.Id } return 0 } type SimpleResponse struct { Id int32 `protobuf:"varint,3,opt,name=id,proto3" json:"id,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *SimpleResponse) Reset() { *m = SimpleResponse{} } func (m *SimpleResponse) String() string { return proto.CompactTextString(m) } func (*SimpleResponse) ProtoMessage() {} func (*SimpleResponse) Descriptor() ([]byte, []int) { return fileDescriptor_test_dd7ffeaa75513a0a, []int{1} } func (m *SimpleResponse) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_SimpleResponse.Unmarshal(m, b) } func (m *SimpleResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_SimpleResponse.Marshal(b, m, deterministic) } func (dst *SimpleResponse) XXX_Merge(src proto.Message) { xxx_messageInfo_SimpleResponse.Merge(dst, src) } func (m *SimpleResponse) XXX_Size() int { return xxx_messageInfo_SimpleResponse.Size(m) } func (m *SimpleResponse) XXX_DiscardUnknown() { xxx_messageInfo_SimpleResponse.DiscardUnknown(m) } var xxx_messageInfo_SimpleResponse proto.InternalMessageInfo func (m *SimpleResponse) GetId() int32 { if m != nil { return m.Id } return 0 } func init() { proto.RegisterType((*SimpleRequest)(nil), "grpc.testing.SimpleRequest") proto.RegisterType((*SimpleResponse)(nil), "grpc.testing.SimpleResponse") } // Reference imports to suppress errors if they are not otherwise used. var _ context.Context var _ grpc.ClientConn // This is a compile-time assertion to ensure that this generated file // is compatible with the grpc package it is being compiled against. const _ = grpc.SupportPackageIsVersion4 // TestServiceClient is the client API for TestService service. // // For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream. type TestServiceClient interface { // One request followed by one response. // The server returns the client id as-is. UnaryCall(ctx context.Context, in *SimpleRequest, opts ...grpc.CallOption) (*SimpleResponse, error) // A sequence of requests with each request served by the server immediately. // As one request could lead to multiple responses, this interface // demonstrates the idea of full duplexing. 
FullDuplexCall(ctx context.Context, opts ...grpc.CallOption) (TestService_FullDuplexCallClient, error) // Client stream ClientStreamCall(ctx context.Context, opts ...grpc.CallOption) (TestService_ClientStreamCallClient, error) // Server stream ServerStreamCall(ctx context.Context, in *SimpleRequest, opts ...grpc.CallOption) (TestService_ServerStreamCallClient, error) } type testServiceClient struct { cc *grpc.ClientConn } func NewTestServiceClient(cc *grpc.ClientConn) TestServiceClient { return &testServiceClient{cc} } func (c *testServiceClient) UnaryCall(ctx context.Context, in *SimpleRequest, opts ...grpc.CallOption) (*SimpleResponse, error) { out := new(SimpleResponse) err := c.cc.Invoke(ctx, "/grpc.testing.TestService/UnaryCall", in, out, opts...) if err != nil { return nil, err } return out, nil } func (c *testServiceClient) FullDuplexCall(ctx context.Context, opts ...grpc.CallOption) (TestService_FullDuplexCallClient, error) { stream, err := c.cc.NewStream(ctx, &_TestService_serviceDesc.Streams[0], "/grpc.testing.TestService/FullDuplexCall", opts...) if err != nil { return nil, err } x := &testServiceFullDuplexCallClient{stream} return x, nil } type TestService_FullDuplexCallClient interface { Send(*SimpleRequest) error Recv() (*SimpleResponse, error) grpc.ClientStream } type testServiceFullDuplexCallClient struct { grpc.ClientStream } func (x *testServiceFullDuplexCallClient) Send(m *SimpleRequest) error { return x.ClientStream.SendMsg(m) } func (x *testServiceFullDuplexCallClient) Recv() (*SimpleResponse, error) { m := new(SimpleResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func (c *testServiceClient) ClientStreamCall(ctx context.Context, opts ...grpc.CallOption) (TestService_ClientStreamCallClient, error) { stream, err := c.cc.NewStream(ctx, &_TestService_serviceDesc.Streams[1], "/grpc.testing.TestService/ClientStreamCall", opts...) if err != nil { return nil, err } x := &testServiceClientStreamCallClient{stream} return x, nil } type TestService_ClientStreamCallClient interface { Send(*SimpleRequest) error CloseAndRecv() (*SimpleResponse, error) grpc.ClientStream } type testServiceClientStreamCallClient struct { grpc.ClientStream } func (x *testServiceClientStreamCallClient) Send(m *SimpleRequest) error { return x.ClientStream.SendMsg(m) } func (x *testServiceClientStreamCallClient) CloseAndRecv() (*SimpleResponse, error) { if err := x.ClientStream.CloseSend(); err != nil { return nil, err } m := new(SimpleResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func (c *testServiceClient) ServerStreamCall(ctx context.Context, in *SimpleRequest, opts ...grpc.CallOption) (TestService_ServerStreamCallClient, error) { stream, err := c.cc.NewStream(ctx, &_TestService_serviceDesc.Streams[2], "/grpc.testing.TestService/ServerStreamCall", opts...) if err != nil { return nil, err } x := &testServiceServerStreamCallClient{stream} if err := x.ClientStream.SendMsg(in); err != nil { return nil, err } if err := x.ClientStream.CloseSend(); err != nil { return nil, err } return x, nil } type TestService_ServerStreamCallClient interface { Recv() (*SimpleResponse, error) grpc.ClientStream } type testServiceServerStreamCallClient struct { grpc.ClientStream } func (x *testServiceServerStreamCallClient) Recv() (*SimpleResponse, error) { m := new(SimpleResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } // TestServiceServer is the server API for TestService service. 
type TestServiceServer interface { // One request followed by one response. // The server returns the client id as-is. UnaryCall(context.Context, *SimpleRequest) (*SimpleResponse, error) // A sequence of requests with each request served by the server immediately. // As one request could lead to multiple responses, this interface // demonstrates the idea of full duplexing. FullDuplexCall(TestService_FullDuplexCallServer) error // Client stream ClientStreamCall(TestService_ClientStreamCallServer) error // Server stream ServerStreamCall(*SimpleRequest, TestService_ServerStreamCallServer) error } func RegisterTestServiceServer(s *grpc.Server, srv TestServiceServer) { s.RegisterService(&_TestService_serviceDesc, srv) } func _TestService_UnaryCall_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(SimpleRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(TestServiceServer).UnaryCall(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/grpc.testing.TestService/UnaryCall", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(TestServiceServer).UnaryCall(ctx, req.(*SimpleRequest)) } return interceptor(ctx, in, info, handler) } func _TestService_FullDuplexCall_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(TestServiceServer).FullDuplexCall(&testServiceFullDuplexCallServer{stream}) } type TestService_FullDuplexCallServer interface { Send(*SimpleResponse) error Recv() (*SimpleRequest, error) grpc.ServerStream } type testServiceFullDuplexCallServer struct { grpc.ServerStream } func (x *testServiceFullDuplexCallServer) Send(m *SimpleResponse) error { return x.ServerStream.SendMsg(m) } func (x *testServiceFullDuplexCallServer) Recv() (*SimpleRequest, error) { m := new(SimpleRequest) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func _TestService_ClientStreamCall_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(TestServiceServer).ClientStreamCall(&testServiceClientStreamCallServer{stream}) } type TestService_ClientStreamCallServer interface { SendAndClose(*SimpleResponse) error Recv() (*SimpleRequest, error) grpc.ServerStream } type testServiceClientStreamCallServer struct { grpc.ServerStream } func (x *testServiceClientStreamCallServer) SendAndClose(m *SimpleResponse) error { return x.ServerStream.SendMsg(m) } func (x *testServiceClientStreamCallServer) Recv() (*SimpleRequest, error) { m := new(SimpleRequest) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func _TestService_ServerStreamCall_Handler(srv interface{}, stream grpc.ServerStream) error { m := new(SimpleRequest) if err := stream.RecvMsg(m); err != nil { return err } return srv.(TestServiceServer).ServerStreamCall(m, &testServiceServerStreamCallServer{stream}) } type TestService_ServerStreamCallServer interface { Send(*SimpleResponse) error grpc.ServerStream } type testServiceServerStreamCallServer struct { grpc.ServerStream } func (x *testServiceServerStreamCallServer) Send(m *SimpleResponse) error { return x.ServerStream.SendMsg(m) } var _TestService_serviceDesc = grpc.ServiceDesc{ ServiceName: "grpc.testing.TestService", HandlerType: (*TestServiceServer)(nil), Methods: []grpc.MethodDesc{ { MethodName: "UnaryCall", Handler: _TestService_UnaryCall_Handler, }, }, Streams: []grpc.StreamDesc{ { StreamName: "FullDuplexCall", Handler: 
_TestService_FullDuplexCall_Handler, ServerStreams: true, ClientStreams: true, }, { StreamName: "ClientStreamCall", Handler: _TestService_ClientStreamCall_Handler, ClientStreams: true, }, { StreamName: "ServerStreamCall", Handler: _TestService_ServerStreamCall_Handler, ServerStreams: true, }, }, Metadata: "grpc_testing/test.proto", } func init() { proto.RegisterFile("grpc_testing/test.proto", fileDescriptor_test_dd7ffeaa75513a0a) } var fileDescriptor_test_dd7ffeaa75513a0a = []byte{ // 202 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x12, 0x4f, 0x2f, 0x2a, 0x48, 0x8e, 0x2f, 0x49, 0x2d, 0x2e, 0xc9, 0xcc, 0x4b, 0xd7, 0x07, 0xd1, 0x7a, 0x05, 0x45, 0xf9, 0x25, 0xf9, 0x42, 0x3c, 0x20, 0x09, 0x3d, 0xa8, 0x84, 0x92, 0x3c, 0x17, 0x6f, 0x70, 0x66, 0x6e, 0x41, 0x4e, 0x6a, 0x50, 0x6a, 0x61, 0x69, 0x6a, 0x71, 0x89, 0x10, 0x1f, 0x17, 0x53, 0x66, 0x8a, 0x04, 0x93, 0x02, 0xa3, 0x06, 0x6b, 0x10, 0x53, 0x66, 0x8a, 0x92, 0x02, 0x17, 0x1f, 0x4c, 0x41, 0x71, 0x41, 0x7e, 0x5e, 0x71, 0x2a, 0x54, 0x05, 0x33, 0x4c, 0x85, 0xd1, 0x09, 0x26, 0x2e, 0xee, 0x90, 0xd4, 0xe2, 0x92, 0xe0, 0xd4, 0xa2, 0xb2, 0xcc, 0xe4, 0x54, 0x21, 0x37, 0x2e, 0xce, 0xd0, 0xbc, 0xc4, 0xa2, 0x4a, 0xe7, 0xc4, 0x9c, 0x1c, 0x21, 0x69, 0x3d, 0x64, 0xeb, 0xf4, 0x50, 0xec, 0x92, 0x92, 0xc1, 0x2e, 0x09, 0xb5, 0xc7, 0x9f, 0x8b, 0xcf, 0xad, 0x34, 0x27, 0xc7, 0xa5, 0xb4, 0x20, 0x27, 0xb5, 0x82, 0x42, 0xc3, 0x34, 0x18, 0x0d, 0x18, 0x85, 0xfc, 0xb9, 0x04, 0x9c, 0x73, 0x32, 0x53, 0xf3, 0x4a, 0x82, 0x4b, 0x8a, 0x52, 0x13, 0x73, 0x29, 0x36, 0x12, 0x64, 0x20, 0xc8, 0xd3, 0xa9, 0x45, 0x54, 0x31, 0xd0, 0x80, 0x31, 0x89, 0x0d, 0x1c, 0x45, 0xc6, 0x80, 0x00, 0x00, 0x00, 0xff, 0xff, 0x4c, 0x43, 0x27, 0x67, 0xbd, 0x01, 0x00, 0x00, } grpc-go-1.22.1/stats/grpc_testing/test.proto000066400000000000000000000025331351635773100210430ustar00rootroot00000000000000// Copyright 2017 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. syntax = "proto3"; package grpc.testing; message SimpleRequest { int32 id = 2; } message SimpleResponse { int32 id = 3; } // A simple test service. service TestService { // One request followed by one response. // The server returns the client id as-is. rpc UnaryCall(SimpleRequest) returns (SimpleResponse); // A sequence of requests with each request served by the server immediately. // As one request could lead to multiple responses, this interface // demonstrates the idea of full duplexing. rpc FullDuplexCall(stream SimpleRequest) returns (stream SimpleResponse); // Client stream rpc ClientStreamCall(stream SimpleRequest) returns (SimpleResponse); // Server stream rpc ServerStreamCall(SimpleRequest) returns (stream SimpleResponse); } grpc-go-1.22.1/stats/handlers.go000066400000000000000000000044241351635773100164370ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package stats import ( "context" "net" ) // ConnTagInfo defines the relevant information needed by connection context tagger. type ConnTagInfo struct { // RemoteAddr is the remote address of the corresponding connection. RemoteAddr net.Addr // LocalAddr is the local address of the corresponding connection. LocalAddr net.Addr } // RPCTagInfo defines the relevant information needed by RPC context tagger. type RPCTagInfo struct { // FullMethodName is the RPC method in the format of /package.service/method. FullMethodName string // FailFast indicates if this RPC is failfast. // This field is only valid on client side, it's always false on server side. FailFast bool } // Handler defines the interface for the related stats handling (e.g., RPCs, connections). type Handler interface { // TagRPC can attach some information to the given context. // The context used for the rest lifetime of the RPC will be derived from // the returned context. TagRPC(context.Context, *RPCTagInfo) context.Context // HandleRPC processes the RPC stats. HandleRPC(context.Context, RPCStats) // TagConn can attach some information to the given context. // The returned context will be used for stats handling. // For conn stats handling, the context used in HandleConn for this // connection will be derived from the context returned. // For RPC stats handling, // - On server side, the context used in HandleRPC for all RPCs on this // connection will be derived from the context returned. // - On client side, the context is not derived from the context returned. TagConn(context.Context, *ConnTagInfo) context.Context // HandleConn processes the Conn stats. HandleConn(context.Context, ConnStats) } grpc-go-1.22.1/stats/stats.go000066400000000000000000000235471351635773100160040ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ //go:generate protoc --go_out=plugins=grpc:. grpc_testing/test.proto // Package stats is for collecting and reporting various network and RPC stats. // This package is for monitoring purpose only. All fields are read-only. // All APIs are experimental. package stats // import "google.golang.org/grpc/stats" import ( "context" "net" "time" "google.golang.org/grpc/metadata" ) // RPCStats contains stats information about RPCs. type RPCStats interface { isRPCStats() // IsClient returns true if this RPCStats is from client side. IsClient() bool } // Begin contains stats when an RPC begins. // FailFast is only valid if this Begin is from client side. type Begin struct { // Client is true if this Begin is from client side. 
Client bool // BeginTime is the time when the RPC begins. BeginTime time.Time // FailFast indicates if this RPC is failfast. FailFast bool } // IsClient indicates if the stats information is from client side. func (s *Begin) IsClient() bool { return s.Client } func (s *Begin) isRPCStats() {} // InPayload contains the information for an incoming payload. type InPayload struct { // Client is true if this InPayload is from client side. Client bool // Payload is the payload with original type. Payload interface{} // Data is the serialized message payload. Data []byte // Length is the length of uncompressed data. Length int // WireLength is the length of data on wire (compressed, signed, encrypted). WireLength int // RecvTime is the time when the payload is received. RecvTime time.Time } // IsClient indicates if the stats information is from client side. func (s *InPayload) IsClient() bool { return s.Client } func (s *InPayload) isRPCStats() {} // InHeader contains stats when a header is received. type InHeader struct { // Client is true if this InHeader is from client side. Client bool // WireLength is the wire length of header. WireLength int // The following fields are valid only if Client is false. // FullMethod is the full RPC method string, i.e., /package.service/method. FullMethod string // RemoteAddr is the remote address of the corresponding connection. RemoteAddr net.Addr // LocalAddr is the local address of the corresponding connection. LocalAddr net.Addr // Compression is the compression algorithm used for the RPC. Compression string } // IsClient indicates if the stats information is from client side. func (s *InHeader) IsClient() bool { return s.Client } func (s *InHeader) isRPCStats() {} // InTrailer contains stats when a trailer is received. type InTrailer struct { // Client is true if this InTrailer is from client side. Client bool // WireLength is the wire length of trailer. WireLength int } // IsClient indicates if the stats information is from client side. func (s *InTrailer) IsClient() bool { return s.Client } func (s *InTrailer) isRPCStats() {} // OutPayload contains the information for an outgoing payload. type OutPayload struct { // Client is true if this OutPayload is from client side. Client bool // Payload is the payload with original type. Payload interface{} // Data is the serialized message payload. Data []byte // Length is the length of uncompressed data. Length int // WireLength is the length of data on wire (compressed, signed, encrypted). WireLength int // SentTime is the time when the payload is sent. SentTime time.Time } // IsClient indicates if this stats information is from client side. func (s *OutPayload) IsClient() bool { return s.Client } func (s *OutPayload) isRPCStats() {} // OutHeader contains stats when a header is sent. type OutHeader struct { // Client is true if this OutHeader is from client side. Client bool // The following fields are valid only if Client is true. // FullMethod is the full RPC method string, i.e., /package.service/method. FullMethod string // RemoteAddr is the remote address of the corresponding connection. RemoteAddr net.Addr // LocalAddr is the local address of the corresponding connection. LocalAddr net.Addr // Compression is the compression algorithm used for the RPC. Compression string } // IsClient indicates if this stats information is from client side. func (s *OutHeader) IsClient() bool { return s.Client } func (s *OutHeader) isRPCStats() {} // OutTrailer contains stats when a trailer is sent. 
type OutTrailer struct { // Client is true if this OutTrailer is from client side. Client bool // WireLength is the wire length of trailer. WireLength int } // IsClient indicates if this stats information is from client side. func (s *OutTrailer) IsClient() bool { return s.Client } func (s *OutTrailer) isRPCStats() {} // End contains stats when an RPC ends. type End struct { // Client is true if this End is from client side. Client bool // BeginTime is the time when the RPC began. BeginTime time.Time // EndTime is the time when the RPC ends. EndTime time.Time // Trailer contains the trailer metadata received from the server. This // field is only valid if this End is from the client side. Trailer metadata.MD // Error is the error the RPC ended with. It is an error generated from // status.Status and can be converted back to status.Status using // status.FromError if non-nil. Error error } // IsClient indicates if this is from client side. func (s *End) IsClient() bool { return s.Client } func (s *End) isRPCStats() {} // ConnStats contains stats information about connections. type ConnStats interface { isConnStats() // IsClient returns true if this ConnStats is from client side. IsClient() bool } // ConnBegin contains the stats of a connection when it is established. type ConnBegin struct { // Client is true if this ConnBegin is from client side. Client bool } // IsClient indicates if this is from client side. func (s *ConnBegin) IsClient() bool { return s.Client } func (s *ConnBegin) isConnStats() {} // ConnEnd contains the stats of a connection when it ends. type ConnEnd struct { // Client is true if this ConnEnd is from client side. Client bool } // IsClient indicates if this is from client side. func (s *ConnEnd) IsClient() bool { return s.Client } func (s *ConnEnd) isConnStats() {} type incomingTagsKey struct{} type outgoingTagsKey struct{} // SetTags attaches stats tagging data to the context, which will be sent in // the outgoing RPC with the header grpc-tags-bin. Subsequent calls to // SetTags will overwrite the values from earlier calls. // // NOTE: this is provided only for backward compatibility with existing clients // and will likely be removed in an upcoming release. New uses should transmit // this type of data using metadata with a different, non-reserved (i.e. does // not begin with "grpc-") header name. func SetTags(ctx context.Context, b []byte) context.Context { return context.WithValue(ctx, outgoingTagsKey{}, b) } // Tags returns the tags from the context for the inbound RPC. // // NOTE: this is provided only for backward compatibility with existing clients // and will likely be removed in an upcoming release. New uses should transmit // this type of data using metadata with a different, non-reserved (i.e. does // not begin with "grpc-") header name. func Tags(ctx context.Context) []byte { b, _ := ctx.Value(incomingTagsKey{}).([]byte) return b } // SetIncomingTags attaches stats tagging data to the context, to be read by // the application (not sent in outgoing RPCs). // // This is intended for gRPC-internal use ONLY. func SetIncomingTags(ctx context.Context, b []byte) context.Context { return context.WithValue(ctx, incomingTagsKey{}, b) } // OutgoingTags returns the tags from the context for the outbound RPC. // // This is intended for gRPC-internal use ONLY. 
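// Illustrative usage sketch only, not part of the original file: a client
// could attach tag and trace bytes with the SetTags and SetTrace helpers in
// this package so that they are carried as the grpc-tags-bin and
// grpc-trace-bin headers. The client and request values below are
// hypothetical.
//
//	ctx := stats.SetTags(context.Background(), []byte{0x01})
//	ctx = stats.SetTrace(ctx, []byte{0x02})
//	resp, err := client.UnaryCall(ctx, req)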
func OutgoingTags(ctx context.Context) []byte { b, _ := ctx.Value(outgoingTagsKey{}).([]byte) return b } type incomingTraceKey struct{} type outgoingTraceKey struct{} // SetTrace attaches stats tagging data to the context, which will be sent in // the outgoing RPC with the header grpc-trace-bin. Subsequent calls to // SetTrace will overwrite the values from earlier calls. // // NOTE: this is provided only for backward compatibility with existing clients // and will likely be removed in an upcoming release. New uses should transmit // this type of data using metadata with a different, non-reserved (i.e. does // not begin with "grpc-") header name. func SetTrace(ctx context.Context, b []byte) context.Context { return context.WithValue(ctx, outgoingTraceKey{}, b) } // Trace returns the trace from the context for the inbound RPC. // // NOTE: this is provided only for backward compatibility with existing clients // and will likely be removed in an upcoming release. New uses should transmit // this type of data using metadata with a different, non-reserved (i.e. does // not begin with "grpc-") header name. func Trace(ctx context.Context) []byte { b, _ := ctx.Value(incomingTraceKey{}).([]byte) return b } // SetIncomingTrace attaches stats tagging data to the context, to be read by // the application (not sent in outgoing RPCs). It is intended for // gRPC-internal use. func SetIncomingTrace(ctx context.Context, b []byte) context.Context { return context.WithValue(ctx, incomingTraceKey{}, b) } // OutgoingTrace returns the trace from the context for the outbound RPC. It is // intended for gRPC-internal use. func OutgoingTrace(ctx context.Context) []byte { b, _ := ctx.Value(outgoingTraceKey{}).([]byte) return b } grpc-go-1.22.1/stats/stats_test.go000066400000000000000000001032401351635773100170300ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package stats_test import ( "context" "fmt" "io" "net" "reflect" "sync" "testing" "time" "github.com/golang/protobuf/proto" "google.golang.org/grpc" "google.golang.org/grpc/metadata" "google.golang.org/grpc/stats" testpb "google.golang.org/grpc/stats/grpc_testing" "google.golang.org/grpc/status" ) func init() { grpc.EnableTracing = false } type connCtxKey struct{} type rpcCtxKey struct{} var ( // For headers: testMetadata = metadata.MD{ "key1": []string{"value1"}, "key2": []string{"value2"}, } // For trailers: testTrailerMetadata = metadata.MD{ "tkey1": []string{"trailerValue1"}, "tkey2": []string{"trailerValue2"}, } // The id for which the service handler should return error. 
errorID int32 = 32202 ) type testServer struct{} func (s *testServer) UnaryCall(ctx context.Context, in *testpb.SimpleRequest) (*testpb.SimpleResponse, error) { md, ok := metadata.FromIncomingContext(ctx) if ok { if err := grpc.SendHeader(ctx, md); err != nil { return nil, status.Errorf(status.Code(err), "grpc.SendHeader(_, %v) = %v, want ", md, err) } if err := grpc.SetTrailer(ctx, testTrailerMetadata); err != nil { return nil, status.Errorf(status.Code(err), "grpc.SetTrailer(_, %v) = %v, want ", testTrailerMetadata, err) } } if in.Id == errorID { return nil, fmt.Errorf("got error id: %v", in.Id) } return &testpb.SimpleResponse{Id: in.Id}, nil } func (s *testServer) FullDuplexCall(stream testpb.TestService_FullDuplexCallServer) error { md, ok := metadata.FromIncomingContext(stream.Context()) if ok { if err := stream.SendHeader(md); err != nil { return status.Errorf(status.Code(err), "%v.SendHeader(%v) = %v, want %v", stream, md, err, nil) } stream.SetTrailer(testTrailerMetadata) } for { in, err := stream.Recv() if err == io.EOF { // read done. return nil } if err != nil { return err } if in.Id == errorID { return fmt.Errorf("got error id: %v", in.Id) } if err := stream.Send(&testpb.SimpleResponse{Id: in.Id}); err != nil { return err } } } func (s *testServer) ClientStreamCall(stream testpb.TestService_ClientStreamCallServer) error { md, ok := metadata.FromIncomingContext(stream.Context()) if ok { if err := stream.SendHeader(md); err != nil { return status.Errorf(status.Code(err), "%v.SendHeader(%v) = %v, want %v", stream, md, err, nil) } stream.SetTrailer(testTrailerMetadata) } for { in, err := stream.Recv() if err == io.EOF { // read done. return stream.SendAndClose(&testpb.SimpleResponse{Id: int32(0)}) } if err != nil { return err } if in.Id == errorID { return fmt.Errorf("got error id: %v", in.Id) } } } func (s *testServer) ServerStreamCall(in *testpb.SimpleRequest, stream testpb.TestService_ServerStreamCallServer) error { md, ok := metadata.FromIncomingContext(stream.Context()) if ok { if err := stream.SendHeader(md); err != nil { return status.Errorf(status.Code(err), "%v.SendHeader(%v) = %v, want %v", stream, md, err, nil) } stream.SetTrailer(testTrailerMetadata) } if in.Id == errorID { return fmt.Errorf("got error id: %v", in.Id) } for i := 0; i < 5; i++ { if err := stream.Send(&testpb.SimpleResponse{Id: in.Id}); err != nil { return err } } return nil } // test is an end-to-end test. It should be created with the newTest // func, modified as needed, and then started with its startServer method. // It should be cleaned up with the tearDown method. type test struct { t *testing.T compress string clientStatsHandler stats.Handler serverStatsHandler stats.Handler testServer testpb.TestServiceServer // nil means none // srv and srvAddr are set once startServer is called. srv *grpc.Server srvAddr string cc *grpc.ClientConn // nil until requested via clientConn } func (te *test) tearDown() { if te.cc != nil { te.cc.Close() te.cc = nil } te.srv.Stop() } type testConfig struct { compress string } // newTest returns a new test using the provided testing.T and // environment. It is returned with default values. Tests should // modify it before calling its startServer and clientConn methods. func newTest(t *testing.T, tc *testConfig, ch stats.Handler, sh stats.Handler) *test { te := &test{ t: t, compress: tc.compress, clientStatsHandler: ch, serverStatsHandler: sh, } return te } // startServer starts a gRPC server listening. Callers should defer a // call to te.tearDown to clean up. 
func (te *test) startServer(ts testpb.TestServiceServer) { te.testServer = ts lis, err := net.Listen("tcp", "localhost:0") if err != nil { te.t.Fatalf("Failed to listen: %v", err) } var opts []grpc.ServerOption if te.compress == "gzip" { opts = append(opts, grpc.RPCCompressor(grpc.NewGZIPCompressor()), grpc.RPCDecompressor(grpc.NewGZIPDecompressor()), ) } if te.serverStatsHandler != nil { opts = append(opts, grpc.StatsHandler(te.serverStatsHandler)) } s := grpc.NewServer(opts...) te.srv = s if te.testServer != nil { testpb.RegisterTestServiceServer(s, te.testServer) } go s.Serve(lis) te.srvAddr = lis.Addr().String() } func (te *test) clientConn() *grpc.ClientConn { if te.cc != nil { return te.cc } opts := []grpc.DialOption{grpc.WithInsecure(), grpc.WithBlock()} if te.compress == "gzip" { opts = append(opts, grpc.WithCompressor(grpc.NewGZIPCompressor()), grpc.WithDecompressor(grpc.NewGZIPDecompressor()), ) } if te.clientStatsHandler != nil { opts = append(opts, grpc.WithStatsHandler(te.clientStatsHandler)) } var err error te.cc, err = grpc.Dial(te.srvAddr, opts...) if err != nil { te.t.Fatalf("Dial(%q) = %v", te.srvAddr, err) } return te.cc } type rpcType int const ( unaryRPC rpcType = iota clientStreamRPC serverStreamRPC fullDuplexStreamRPC ) type rpcConfig struct { count int // Number of requests and responses for streaming RPCs. success bool // Whether the RPC should succeed or return error. failfast bool callType rpcType // Type of RPC. } func (te *test) doUnaryCall(c *rpcConfig) (*testpb.SimpleRequest, *testpb.SimpleResponse, error) { var ( resp *testpb.SimpleResponse req *testpb.SimpleRequest err error ) tc := testpb.NewTestServiceClient(te.clientConn()) if c.success { req = &testpb.SimpleRequest{Id: errorID + 1} } else { req = &testpb.SimpleRequest{Id: errorID} } ctx := metadata.NewOutgoingContext(context.Background(), testMetadata) resp, err = tc.UnaryCall(ctx, req, grpc.WaitForReady(!c.failfast)) return req, resp, err } func (te *test) doFullDuplexCallRoundtrip(c *rpcConfig) ([]*testpb.SimpleRequest, []*testpb.SimpleResponse, error) { var ( reqs []*testpb.SimpleRequest resps []*testpb.SimpleResponse err error ) tc := testpb.NewTestServiceClient(te.clientConn()) stream, err := tc.FullDuplexCall(metadata.NewOutgoingContext(context.Background(), testMetadata), grpc.WaitForReady(!c.failfast)) if err != nil { return reqs, resps, err } var startID int32 if !c.success { startID = errorID } for i := 0; i < c.count; i++ { req := &testpb.SimpleRequest{ Id: int32(i) + startID, } reqs = append(reqs, req) if err = stream.Send(req); err != nil { return reqs, resps, err } var resp *testpb.SimpleResponse if resp, err = stream.Recv(); err != nil { return reqs, resps, err } resps = append(resps, resp) } if err = stream.CloseSend(); err != nil && err != io.EOF { return reqs, resps, err } if _, err = stream.Recv(); err != io.EOF { return reqs, resps, err } return reqs, resps, nil } func (te *test) doClientStreamCall(c *rpcConfig) ([]*testpb.SimpleRequest, *testpb.SimpleResponse, error) { var ( reqs []*testpb.SimpleRequest resp *testpb.SimpleResponse err error ) tc := testpb.NewTestServiceClient(te.clientConn()) stream, err := tc.ClientStreamCall(metadata.NewOutgoingContext(context.Background(), testMetadata), grpc.WaitForReady(!c.failfast)) if err != nil { return reqs, resp, err } var startID int32 if !c.success { startID = errorID } for i := 0; i < c.count; i++ { req := &testpb.SimpleRequest{ Id: int32(i) + startID, } reqs = append(reqs, req) if err = stream.Send(req); err != nil { return reqs, resp, 
err } } resp, err = stream.CloseAndRecv() return reqs, resp, err } func (te *test) doServerStreamCall(c *rpcConfig) (*testpb.SimpleRequest, []*testpb.SimpleResponse, error) { var ( req *testpb.SimpleRequest resps []*testpb.SimpleResponse err error ) tc := testpb.NewTestServiceClient(te.clientConn()) var startID int32 if !c.success { startID = errorID } req = &testpb.SimpleRequest{Id: startID} stream, err := tc.ServerStreamCall(metadata.NewOutgoingContext(context.Background(), testMetadata), req, grpc.WaitForReady(!c.failfast)) if err != nil { return req, resps, err } for { var resp *testpb.SimpleResponse resp, err := stream.Recv() if err == io.EOF { return req, resps, nil } else if err != nil { return req, resps, err } resps = append(resps, resp) } } type expectedData struct { method string serverAddr string compression string reqIdx int requests []*testpb.SimpleRequest respIdx int responses []*testpb.SimpleResponse err error failfast bool } type gotData struct { ctx context.Context client bool s interface{} // This could be RPCStats or ConnStats. } const ( begin int = iota end inPayload inHeader inTrailer outPayload outHeader // TODO: test outTrailer ? connbegin connend ) func checkBegin(t *testing.T, d *gotData, e *expectedData) { var ( ok bool st *stats.Begin ) if st, ok = d.s.(*stats.Begin); !ok { t.Fatalf("got %T, want Begin", d.s) } if d.ctx == nil { t.Fatalf("d.ctx = nil, want ") } if st.BeginTime.IsZero() { t.Fatalf("st.BeginTime = %v, want ", st.BeginTime) } if d.client { if st.FailFast != e.failfast { t.Fatalf("st.FailFast = %v, want %v", st.FailFast, e.failfast) } } } func checkInHeader(t *testing.T, d *gotData, e *expectedData) { var ( ok bool st *stats.InHeader ) if st, ok = d.s.(*stats.InHeader); !ok { t.Fatalf("got %T, want InHeader", d.s) } if d.ctx == nil { t.Fatalf("d.ctx = nil, want ") } if !d.client { if st.FullMethod != e.method { t.Fatalf("st.FullMethod = %s, want %v", st.FullMethod, e.method) } if st.LocalAddr.String() != e.serverAddr { t.Fatalf("st.LocalAddr = %v, want %v", st.LocalAddr, e.serverAddr) } if st.Compression != e.compression { t.Fatalf("st.Compression = %v, want %v", st.Compression, e.compression) } if connInfo, ok := d.ctx.Value(connCtxKey{}).(*stats.ConnTagInfo); ok { if connInfo.RemoteAddr != st.RemoteAddr { t.Fatalf("connInfo.RemoteAddr = %v, want %v", connInfo.RemoteAddr, st.RemoteAddr) } if connInfo.LocalAddr != st.LocalAddr { t.Fatalf("connInfo.LocalAddr = %v, want %v", connInfo.LocalAddr, st.LocalAddr) } } else { t.Fatalf("got context %v, want one with connCtxKey", d.ctx) } if rpcInfo, ok := d.ctx.Value(rpcCtxKey{}).(*stats.RPCTagInfo); ok { if rpcInfo.FullMethodName != st.FullMethod { t.Fatalf("rpcInfo.FullMethod = %s, want %v", rpcInfo.FullMethodName, st.FullMethod) } } else { t.Fatalf("got context %v, want one with rpcCtxKey", d.ctx) } } } func checkInPayload(t *testing.T, d *gotData, e *expectedData) { var ( ok bool st *stats.InPayload ) if st, ok = d.s.(*stats.InPayload); !ok { t.Fatalf("got %T, want InPayload", d.s) } if d.ctx == nil { t.Fatalf("d.ctx = nil, want ") } if d.client { b, err := proto.Marshal(e.responses[e.respIdx]) if err != nil { t.Fatalf("failed to marshal message: %v", err) } if reflect.TypeOf(st.Payload) != reflect.TypeOf(e.responses[e.respIdx]) { t.Fatalf("st.Payload = %T, want %T", st.Payload, e.responses[e.respIdx]) } e.respIdx++ if string(st.Data) != string(b) { t.Fatalf("st.Data = %v, want %v", st.Data, b) } if st.Length != len(b) { t.Fatalf("st.Lenght = %v, want %v", st.Length, len(b)) } } else { b, err := 
proto.Marshal(e.requests[e.reqIdx]) if err != nil { t.Fatalf("failed to marshal message: %v", err) } if reflect.TypeOf(st.Payload) != reflect.TypeOf(e.requests[e.reqIdx]) { t.Fatalf("st.Payload = %T, want %T", st.Payload, e.requests[e.reqIdx]) } e.reqIdx++ if string(st.Data) != string(b) { t.Fatalf("st.Data = %v, want %v", st.Data, b) } if st.Length != len(b) { t.Fatalf("st.Lenght = %v, want %v", st.Length, len(b)) } } // Below are sanity checks that WireLength and RecvTime are populated. // TODO: check values of WireLength and RecvTime. if len(st.Data) > 0 && st.WireLength == 0 { t.Fatalf("st.WireLength = %v with non-empty data, want ", st.WireLength) } if st.RecvTime.IsZero() { t.Fatalf("st.ReceivedTime = %v, want ", st.RecvTime) } } func checkInTrailer(t *testing.T, d *gotData, e *expectedData) { var ( ok bool ) if _, ok = d.s.(*stats.InTrailer); !ok { t.Fatalf("got %T, want InTrailer", d.s) } if d.ctx == nil { t.Fatalf("d.ctx = nil, want ") } } func checkOutHeader(t *testing.T, d *gotData, e *expectedData) { var ( ok bool st *stats.OutHeader ) if st, ok = d.s.(*stats.OutHeader); !ok { t.Fatalf("got %T, want OutHeader", d.s) } if d.ctx == nil { t.Fatalf("d.ctx = nil, want ") } if d.client { if st.FullMethod != e.method { t.Fatalf("st.FullMethod = %s, want %v", st.FullMethod, e.method) } if st.RemoteAddr.String() != e.serverAddr { t.Fatalf("st.RemoteAddr = %v, want %v", st.RemoteAddr, e.serverAddr) } if st.Compression != e.compression { t.Fatalf("st.Compression = %v, want %v", st.Compression, e.compression) } if rpcInfo, ok := d.ctx.Value(rpcCtxKey{}).(*stats.RPCTagInfo); ok { if rpcInfo.FullMethodName != st.FullMethod { t.Fatalf("rpcInfo.FullMethod = %s, want %v", rpcInfo.FullMethodName, st.FullMethod) } } else { t.Fatalf("got context %v, want one with rpcCtxKey", d.ctx) } } } func checkOutPayload(t *testing.T, d *gotData, e *expectedData) { var ( ok bool st *stats.OutPayload ) if st, ok = d.s.(*stats.OutPayload); !ok { t.Fatalf("got %T, want OutPayload", d.s) } if d.ctx == nil { t.Fatalf("d.ctx = nil, want ") } if d.client { b, err := proto.Marshal(e.requests[e.reqIdx]) if err != nil { t.Fatalf("failed to marshal message: %v", err) } if reflect.TypeOf(st.Payload) != reflect.TypeOf(e.requests[e.reqIdx]) { t.Fatalf("st.Payload = %T, want %T", st.Payload, e.requests[e.reqIdx]) } e.reqIdx++ if string(st.Data) != string(b) { t.Fatalf("st.Data = %v, want %v", st.Data, b) } if st.Length != len(b) { t.Fatalf("st.Lenght = %v, want %v", st.Length, len(b)) } } else { b, err := proto.Marshal(e.responses[e.respIdx]) if err != nil { t.Fatalf("failed to marshal message: %v", err) } if reflect.TypeOf(st.Payload) != reflect.TypeOf(e.responses[e.respIdx]) { t.Fatalf("st.Payload = %T, want %T", st.Payload, e.responses[e.respIdx]) } e.respIdx++ if string(st.Data) != string(b) { t.Fatalf("st.Data = %v, want %v", st.Data, b) } if st.Length != len(b) { t.Fatalf("st.Lenght = %v, want %v", st.Length, len(b)) } } // Below are sanity checks that WireLength and SentTime are populated. // TODO: check values of WireLength and SentTime. 
if len(st.Data) > 0 && st.WireLength == 0 { t.Fatalf("st.WireLength = %v with non-empty data, want ", st.WireLength) } if st.SentTime.IsZero() { t.Fatalf("st.SentTime = %v, want ", st.SentTime) } } func checkOutTrailer(t *testing.T, d *gotData, e *expectedData) { var ( ok bool st *stats.OutTrailer ) if st, ok = d.s.(*stats.OutTrailer); !ok { t.Fatalf("got %T, want OutTrailer", d.s) } if d.ctx == nil { t.Fatalf("d.ctx = nil, want ") } if st.Client { t.Fatalf("st IsClient = true, want false") } } func checkEnd(t *testing.T, d *gotData, e *expectedData) { var ( ok bool st *stats.End ) if st, ok = d.s.(*stats.End); !ok { t.Fatalf("got %T, want End", d.s) } if d.ctx == nil { t.Fatalf("d.ctx = nil, want ") } if st.BeginTime.IsZero() { t.Fatalf("st.BeginTime = %v, want ", st.BeginTime) } if st.EndTime.IsZero() { t.Fatalf("st.EndTime = %v, want ", st.EndTime) } actual, ok := status.FromError(st.Error) if !ok { t.Fatalf("expected st.Error to be a statusError, got %v (type %T)", st.Error, st.Error) } expectedStatus, _ := status.FromError(e.err) if actual.Code() != expectedStatus.Code() || actual.Message() != expectedStatus.Message() { t.Fatalf("st.Error = %v, want %v", st.Error, e.err) } if st.Client { if !reflect.DeepEqual(st.Trailer, testTrailerMetadata) { t.Fatalf("st.Trailer = %v, want %v", st.Trailer, testTrailerMetadata) } } else { if st.Trailer != nil { t.Fatalf("st.Trailer = %v, want nil", st.Trailer) } } } func checkConnBegin(t *testing.T, d *gotData, e *expectedData) { var ( ok bool st *stats.ConnBegin ) if st, ok = d.s.(*stats.ConnBegin); !ok { t.Fatalf("got %T, want ConnBegin", d.s) } if d.ctx == nil { t.Fatalf("d.ctx = nil, want ") } st.IsClient() // TODO remove this. } func checkConnEnd(t *testing.T, d *gotData, e *expectedData) { var ( ok bool st *stats.ConnEnd ) if st, ok = d.s.(*stats.ConnEnd); !ok { t.Fatalf("got %T, want ConnEnd", d.s) } if d.ctx == nil { t.Fatalf("d.ctx = nil, want ") } st.IsClient() // TODO remove this. } type statshandler struct { mu sync.Mutex gotRPC []*gotData gotConn []*gotData } func (h *statshandler) TagConn(ctx context.Context, info *stats.ConnTagInfo) context.Context { return context.WithValue(ctx, connCtxKey{}, info) } func (h *statshandler) TagRPC(ctx context.Context, info *stats.RPCTagInfo) context.Context { return context.WithValue(ctx, rpcCtxKey{}, info) } func (h *statshandler) HandleConn(ctx context.Context, s stats.ConnStats) { h.mu.Lock() defer h.mu.Unlock() h.gotConn = append(h.gotConn, &gotData{ctx, s.IsClient(), s}) } func (h *statshandler) HandleRPC(ctx context.Context, s stats.RPCStats) { h.mu.Lock() defer h.mu.Unlock() h.gotRPC = append(h.gotRPC, &gotData{ctx, s.IsClient(), s}) } func checkConnStats(t *testing.T, got []*gotData) { if len(got) <= 0 || len(got)%2 != 0 { for i, g := range got { t.Errorf(" - %v, %T = %+v, ctx: %v", i, g.s, g.s, g.ctx) } t.Fatalf("got %v stats, want even positive number", len(got)) } // The first conn stats must be a ConnBegin. checkConnBegin(t, got[0], nil) // The last conn stats must be a ConnEnd. 
checkConnEnd(t, got[len(got)-1], nil) } func checkServerStats(t *testing.T, got []*gotData, expect *expectedData, checkFuncs []func(t *testing.T, d *gotData, e *expectedData)) { if len(got) != len(checkFuncs) { for i, g := range got { t.Errorf(" - %v, %T", i, g.s) } t.Fatalf("got %v stats, want %v stats", len(got), len(checkFuncs)) } var rpcctx context.Context for i := 0; i < len(got); i++ { if _, ok := got[i].s.(stats.RPCStats); ok { if rpcctx != nil && got[i].ctx != rpcctx { t.Fatalf("got different contexts with stats %T", got[i].s) } rpcctx = got[i].ctx } } for i, f := range checkFuncs { f(t, got[i], expect) } } func testServerStats(t *testing.T, tc *testConfig, cc *rpcConfig, checkFuncs []func(t *testing.T, d *gotData, e *expectedData)) { h := &statshandler{} te := newTest(t, tc, nil, h) te.startServer(&testServer{}) defer te.tearDown() var ( reqs []*testpb.SimpleRequest resps []*testpb.SimpleResponse err error method string req *testpb.SimpleRequest resp *testpb.SimpleResponse e error ) switch cc.callType { case unaryRPC: method = "/grpc.testing.TestService/UnaryCall" req, resp, e = te.doUnaryCall(cc) reqs = []*testpb.SimpleRequest{req} resps = []*testpb.SimpleResponse{resp} err = e case clientStreamRPC: method = "/grpc.testing.TestService/ClientStreamCall" reqs, resp, e = te.doClientStreamCall(cc) resps = []*testpb.SimpleResponse{resp} err = e case serverStreamRPC: method = "/grpc.testing.TestService/ServerStreamCall" req, resps, e = te.doServerStreamCall(cc) reqs = []*testpb.SimpleRequest{req} err = e case fullDuplexStreamRPC: method = "/grpc.testing.TestService/FullDuplexCall" reqs, resps, err = te.doFullDuplexCallRoundtrip(cc) } if cc.success != (err == nil) { t.Fatalf("cc.success: %v, got error: %v", cc.success, err) } te.cc.Close() te.srv.GracefulStop() // Wait for the server to stop. for { h.mu.Lock() if len(h.gotRPC) >= len(checkFuncs) { h.mu.Unlock() break } h.mu.Unlock() time.Sleep(10 * time.Millisecond) } for { h.mu.Lock() if _, ok := h.gotConn[len(h.gotConn)-1].s.(*stats.ConnEnd); ok { h.mu.Unlock() break } h.mu.Unlock() time.Sleep(10 * time.Millisecond) } expect := &expectedData{ serverAddr: te.srvAddr, compression: tc.compress, method: method, requests: reqs, responses: resps, err: err, } h.mu.Lock() checkConnStats(t, h.gotConn) h.mu.Unlock() checkServerStats(t, h.gotRPC, expect, checkFuncs) } func TestServerStatsUnaryRPC(t *testing.T) { testServerStats(t, &testConfig{compress: ""}, &rpcConfig{success: true, callType: unaryRPC}, []func(t *testing.T, d *gotData, e *expectedData){ checkInHeader, checkBegin, checkInPayload, checkOutHeader, checkOutPayload, checkOutTrailer, checkEnd, }) } func TestServerStatsUnaryRPCError(t *testing.T) { testServerStats(t, &testConfig{compress: ""}, &rpcConfig{success: false, callType: unaryRPC}, []func(t *testing.T, d *gotData, e *expectedData){ checkInHeader, checkBegin, checkInPayload, checkOutHeader, checkOutTrailer, checkEnd, }) } func TestServerStatsClientStreamRPC(t *testing.T) { count := 5 checkFuncs := []func(t *testing.T, d *gotData, e *expectedData){ checkInHeader, checkBegin, checkOutHeader, } ioPayFuncs := []func(t *testing.T, d *gotData, e *expectedData){ checkInPayload, } for i := 0; i < count; i++ { checkFuncs = append(checkFuncs, ioPayFuncs...) 
} checkFuncs = append(checkFuncs, checkOutPayload, checkOutTrailer, checkEnd, ) testServerStats(t, &testConfig{compress: "gzip"}, &rpcConfig{count: count, success: true, callType: clientStreamRPC}, checkFuncs) } func TestServerStatsClientStreamRPCError(t *testing.T) { count := 1 testServerStats(t, &testConfig{compress: "gzip"}, &rpcConfig{count: count, success: false, callType: clientStreamRPC}, []func(t *testing.T, d *gotData, e *expectedData){ checkInHeader, checkBegin, checkOutHeader, checkInPayload, checkOutTrailer, checkEnd, }) } func TestServerStatsServerStreamRPC(t *testing.T) { count := 5 checkFuncs := []func(t *testing.T, d *gotData, e *expectedData){ checkInHeader, checkBegin, checkInPayload, checkOutHeader, } ioPayFuncs := []func(t *testing.T, d *gotData, e *expectedData){ checkOutPayload, } for i := 0; i < count; i++ { checkFuncs = append(checkFuncs, ioPayFuncs...) } checkFuncs = append(checkFuncs, checkOutTrailer, checkEnd, ) testServerStats(t, &testConfig{compress: "gzip"}, &rpcConfig{count: count, success: true, callType: serverStreamRPC}, checkFuncs) } func TestServerStatsServerStreamRPCError(t *testing.T) { count := 5 testServerStats(t, &testConfig{compress: "gzip"}, &rpcConfig{count: count, success: false, callType: serverStreamRPC}, []func(t *testing.T, d *gotData, e *expectedData){ checkInHeader, checkBegin, checkInPayload, checkOutHeader, checkOutTrailer, checkEnd, }) } func TestServerStatsFullDuplexRPC(t *testing.T) { count := 5 checkFuncs := []func(t *testing.T, d *gotData, e *expectedData){ checkInHeader, checkBegin, checkOutHeader, } ioPayFuncs := []func(t *testing.T, d *gotData, e *expectedData){ checkInPayload, checkOutPayload, } for i := 0; i < count; i++ { checkFuncs = append(checkFuncs, ioPayFuncs...) } checkFuncs = append(checkFuncs, checkOutTrailer, checkEnd, ) testServerStats(t, &testConfig{compress: "gzip"}, &rpcConfig{count: count, success: true, callType: fullDuplexStreamRPC}, checkFuncs) } func TestServerStatsFullDuplexRPCError(t *testing.T) { count := 5 testServerStats(t, &testConfig{compress: "gzip"}, &rpcConfig{count: count, success: false, callType: fullDuplexStreamRPC}, []func(t *testing.T, d *gotData, e *expectedData){ checkInHeader, checkBegin, checkOutHeader, checkInPayload, checkOutTrailer, checkEnd, }) } type checkFuncWithCount struct { f func(t *testing.T, d *gotData, e *expectedData) c int // expected count } func checkClientStats(t *testing.T, got []*gotData, expect *expectedData, checkFuncs map[int]*checkFuncWithCount) { var expectLen int for _, v := range checkFuncs { expectLen += v.c } if len(got) != expectLen { for i, g := range got { t.Errorf(" - %v, %T", i, g.s) } t.Fatalf("got %v stats, want %v stats", len(got), expectLen) } var tagInfoInCtx *stats.RPCTagInfo for i := 0; i < len(got); i++ { if _, ok := got[i].s.(stats.RPCStats); ok { tagInfoInCtxNew, _ := got[i].ctx.Value(rpcCtxKey{}).(*stats.RPCTagInfo) if tagInfoInCtx != nil && tagInfoInCtx != tagInfoInCtxNew { t.Fatalf("got context containing different tagInfo with stats %T", got[i].s) } tagInfoInCtx = tagInfoInCtxNew } } for _, s := range got { switch s.s.(type) { case *stats.Begin: if checkFuncs[begin].c <= 0 { t.Fatalf("unexpected stats: %T", s.s) } checkFuncs[begin].f(t, s, expect) checkFuncs[begin].c-- case *stats.OutHeader: if checkFuncs[outHeader].c <= 0 { t.Fatalf("unexpected stats: %T", s.s) } checkFuncs[outHeader].f(t, s, expect) checkFuncs[outHeader].c-- case *stats.OutPayload: if checkFuncs[outPayload].c <= 0 { t.Fatalf("unexpected stats: %T", s.s) } 
checkFuncs[outPayload].f(t, s, expect) checkFuncs[outPayload].c-- case *stats.InHeader: if checkFuncs[inHeader].c <= 0 { t.Fatalf("unexpected stats: %T", s.s) } checkFuncs[inHeader].f(t, s, expect) checkFuncs[inHeader].c-- case *stats.InPayload: if checkFuncs[inPayload].c <= 0 { t.Fatalf("unexpected stats: %T", s.s) } checkFuncs[inPayload].f(t, s, expect) checkFuncs[inPayload].c-- case *stats.InTrailer: if checkFuncs[inTrailer].c <= 0 { t.Fatalf("unexpected stats: %T", s.s) } checkFuncs[inTrailer].f(t, s, expect) checkFuncs[inTrailer].c-- case *stats.End: if checkFuncs[end].c <= 0 { t.Fatalf("unexpected stats: %T", s.s) } checkFuncs[end].f(t, s, expect) checkFuncs[end].c-- case *stats.ConnBegin: if checkFuncs[connbegin].c <= 0 { t.Fatalf("unexpected stats: %T", s.s) } checkFuncs[connbegin].f(t, s, expect) checkFuncs[connbegin].c-- case *stats.ConnEnd: if checkFuncs[connend].c <= 0 { t.Fatalf("unexpected stats: %T", s.s) } checkFuncs[connend].f(t, s, expect) checkFuncs[connend].c-- default: t.Fatalf("unexpected stats: %T", s.s) } } } func testClientStats(t *testing.T, tc *testConfig, cc *rpcConfig, checkFuncs map[int]*checkFuncWithCount) { h := &statshandler{} te := newTest(t, tc, h, nil) te.startServer(&testServer{}) defer te.tearDown() var ( reqs []*testpb.SimpleRequest resps []*testpb.SimpleResponse method string err error req *testpb.SimpleRequest resp *testpb.SimpleResponse e error ) switch cc.callType { case unaryRPC: method = "/grpc.testing.TestService/UnaryCall" req, resp, e = te.doUnaryCall(cc) reqs = []*testpb.SimpleRequest{req} resps = []*testpb.SimpleResponse{resp} err = e case clientStreamRPC: method = "/grpc.testing.TestService/ClientStreamCall" reqs, resp, e = te.doClientStreamCall(cc) resps = []*testpb.SimpleResponse{resp} err = e case serverStreamRPC: method = "/grpc.testing.TestService/ServerStreamCall" req, resps, e = te.doServerStreamCall(cc) reqs = []*testpb.SimpleRequest{req} err = e case fullDuplexStreamRPC: method = "/grpc.testing.TestService/FullDuplexCall" reqs, resps, err = te.doFullDuplexCallRoundtrip(cc) } if cc.success != (err == nil) { t.Fatalf("cc.success: %v, got error: %v", cc.success, err) } te.cc.Close() te.srv.GracefulStop() // Wait for the server to stop. 
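// Illustrative sketch (not part of the original test): outside this harness, the same
// statshandler is attached with a dial option on the client and a server option on
// the server; addr and lis below are placeholder names.
//
//	h := &statshandler{}
//	conn, err := grpc.Dial(addr, grpc.WithInsecure(), grpc.WithStatsHandler(h))
//	if err != nil {
//		// handle dial error
//	}
//	defer conn.Close()
//
//	srv := grpc.NewServer(grpc.StatsHandler(h))
//	go srv.Serve(lis)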
lenRPCStats := 0 for _, v := range checkFuncs { lenRPCStats += v.c } for { h.mu.Lock() if len(h.gotRPC) >= lenRPCStats { h.mu.Unlock() break } h.mu.Unlock() time.Sleep(10 * time.Millisecond) } for { h.mu.Lock() if _, ok := h.gotConn[len(h.gotConn)-1].s.(*stats.ConnEnd); ok { h.mu.Unlock() break } h.mu.Unlock() time.Sleep(10 * time.Millisecond) } expect := &expectedData{ serverAddr: te.srvAddr, compression: tc.compress, method: method, requests: reqs, responses: resps, failfast: cc.failfast, err: err, } h.mu.Lock() checkConnStats(t, h.gotConn) h.mu.Unlock() checkClientStats(t, h.gotRPC, expect, checkFuncs) } func TestClientStatsUnaryRPC(t *testing.T) { testClientStats(t, &testConfig{compress: ""}, &rpcConfig{success: true, failfast: false, callType: unaryRPC}, map[int]*checkFuncWithCount{ begin: {checkBegin, 1}, outHeader: {checkOutHeader, 1}, outPayload: {checkOutPayload, 1}, inHeader: {checkInHeader, 1}, inPayload: {checkInPayload, 1}, inTrailer: {checkInTrailer, 1}, end: {checkEnd, 1}, }) } func TestClientStatsUnaryRPCError(t *testing.T) { testClientStats(t, &testConfig{compress: ""}, &rpcConfig{success: false, failfast: false, callType: unaryRPC}, map[int]*checkFuncWithCount{ begin: {checkBegin, 1}, outHeader: {checkOutHeader, 1}, outPayload: {checkOutPayload, 1}, inHeader: {checkInHeader, 1}, inTrailer: {checkInTrailer, 1}, end: {checkEnd, 1}, }) } func TestClientStatsClientStreamRPC(t *testing.T) { count := 5 testClientStats(t, &testConfig{compress: "gzip"}, &rpcConfig{count: count, success: true, failfast: false, callType: clientStreamRPC}, map[int]*checkFuncWithCount{ begin: {checkBegin, 1}, outHeader: {checkOutHeader, 1}, inHeader: {checkInHeader, 1}, outPayload: {checkOutPayload, count}, inTrailer: {checkInTrailer, 1}, inPayload: {checkInPayload, 1}, end: {checkEnd, 1}, }) } func TestClientStatsClientStreamRPCError(t *testing.T) { count := 1 testClientStats(t, &testConfig{compress: "gzip"}, &rpcConfig{count: count, success: false, failfast: false, callType: clientStreamRPC}, map[int]*checkFuncWithCount{ begin: {checkBegin, 1}, outHeader: {checkOutHeader, 1}, inHeader: {checkInHeader, 1}, outPayload: {checkOutPayload, 1}, inTrailer: {checkInTrailer, 1}, end: {checkEnd, 1}, }) } func TestClientStatsServerStreamRPC(t *testing.T) { count := 5 testClientStats(t, &testConfig{compress: "gzip"}, &rpcConfig{count: count, success: true, failfast: false, callType: serverStreamRPC}, map[int]*checkFuncWithCount{ begin: {checkBegin, 1}, outHeader: {checkOutHeader, 1}, outPayload: {checkOutPayload, 1}, inHeader: {checkInHeader, 1}, inPayload: {checkInPayload, count}, inTrailer: {checkInTrailer, 1}, end: {checkEnd, 1}, }) } func TestClientStatsServerStreamRPCError(t *testing.T) { count := 5 testClientStats(t, &testConfig{compress: "gzip"}, &rpcConfig{count: count, success: false, failfast: false, callType: serverStreamRPC}, map[int]*checkFuncWithCount{ begin: {checkBegin, 1}, outHeader: {checkOutHeader, 1}, outPayload: {checkOutPayload, 1}, inHeader: {checkInHeader, 1}, inTrailer: {checkInTrailer, 1}, end: {checkEnd, 1}, }) } func TestClientStatsFullDuplexRPC(t *testing.T) { count := 5 testClientStats(t, &testConfig{compress: "gzip"}, &rpcConfig{count: count, success: true, failfast: false, callType: fullDuplexStreamRPC}, map[int]*checkFuncWithCount{ begin: {checkBegin, 1}, outHeader: {checkOutHeader, 1}, outPayload: {checkOutPayload, count}, inHeader: {checkInHeader, 1}, inPayload: {checkInPayload, count}, inTrailer: {checkInTrailer, 1}, end: {checkEnd, 1}, }) } func 
TestClientStatsFullDuplexRPCError(t *testing.T) { count := 5 testClientStats(t, &testConfig{compress: "gzip"}, &rpcConfig{count: count, success: false, failfast: false, callType: fullDuplexStreamRPC}, map[int]*checkFuncWithCount{ begin: {checkBegin, 1}, outHeader: {checkOutHeader, 1}, outPayload: {checkOutPayload, 1}, inHeader: {checkInHeader, 1}, inTrailer: {checkInTrailer, 1}, end: {checkEnd, 1}, }) } func TestTags(t *testing.T) { b := []byte{5, 2, 4, 3, 1} ctx := stats.SetTags(context.Background(), b) if tg := stats.OutgoingTags(ctx); !reflect.DeepEqual(tg, b) { t.Errorf("OutgoingTags(%v) = %v; want %v", ctx, tg, b) } if tg := stats.Tags(ctx); tg != nil { t.Errorf("Tags(%v) = %v; want nil", ctx, tg) } ctx = stats.SetIncomingTags(context.Background(), b) if tg := stats.Tags(ctx); !reflect.DeepEqual(tg, b) { t.Errorf("Tags(%v) = %v; want %v", ctx, tg, b) } if tg := stats.OutgoingTags(ctx); tg != nil { t.Errorf("OutgoingTags(%v) = %v; want nil", ctx, tg) } } func TestTrace(t *testing.T) { b := []byte{5, 2, 4, 3, 1} ctx := stats.SetTrace(context.Background(), b) if tr := stats.OutgoingTrace(ctx); !reflect.DeepEqual(tr, b) { t.Errorf("OutgoingTrace(%v) = %v; want %v", ctx, tr, b) } if tr := stats.Trace(ctx); tr != nil { t.Errorf("Trace(%v) = %v; want nil", ctx, tr) } ctx = stats.SetIncomingTrace(context.Background(), b) if tr := stats.Trace(ctx); !reflect.DeepEqual(tr, b) { t.Errorf("Trace(%v) = %v; want %v", ctx, tr, b) } if tr := stats.OutgoingTrace(ctx); tr != nil { t.Errorf("OutgoingTrace(%v) = %v; want nil", ctx, tr) } } grpc-go-1.22.1/status/000077500000000000000000000000001351635773100144715ustar00rootroot00000000000000grpc-go-1.22.1/status/status.go000066400000000000000000000144051351635773100163470ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package status implements errors returned by gRPC. These errors are // serialized and transmitted on the wire between server and client, and allow // for additional data to be transmitted via the Details field in the status // proto. gRPC service handlers should return an error created by this // package, and gRPC clients should expect a corresponding error to be // returned from the RPC call. // // This package upholds the invariants that a non-nil error may not // contain an OK code, and an OK code must result in a nil error. package status import ( "context" "errors" "fmt" "github.com/golang/protobuf/proto" "github.com/golang/protobuf/ptypes" spb "google.golang.org/genproto/googleapis/rpc/status" "google.golang.org/grpc/codes" "google.golang.org/grpc/internal" ) func init() { internal.StatusRawProto = statusRawProto } func statusRawProto(s *Status) *spb.Status { return s.s } // statusError is an alias of a status proto. It implements error and Status, // and a nil statusError should never be returned by this package. 
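// Illustrative usage sketch (not part of this file): the round trip the package
// comment above describes, seen from a caller's point of view. GetBook, client, ctx,
// req and id are hypothetical names.
//
//	// Server handler: return a status error.
//	return nil, status.Errorf(codes.NotFound, "no book with id %q", id)
//
//	// Client: recover the code and message from the returned error.
//	if _, err := client.GetBook(ctx, req); err != nil {
//		if st, ok := status.FromError(err); ok {
//			log.Printf("GetBook failed: code=%v msg=%q", st.Code(), st.Message())
//		}
//	}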
type statusError spb.Status func (se *statusError) Error() string { p := (*spb.Status)(se) return fmt.Sprintf("rpc error: code = %s desc = %s", codes.Code(p.GetCode()), p.GetMessage()) } func (se *statusError) GRPCStatus() *Status { return &Status{s: (*spb.Status)(se)} } // Status represents an RPC status code, message, and details. It is immutable // and should be created with New, Newf, or FromProto. type Status struct { s *spb.Status } // Code returns the status code contained in s. func (s *Status) Code() codes.Code { if s == nil || s.s == nil { return codes.OK } return codes.Code(s.s.Code) } // Message returns the message contained in s. func (s *Status) Message() string { if s == nil || s.s == nil { return "" } return s.s.Message } // Proto returns s's status as an spb.Status proto message. func (s *Status) Proto() *spb.Status { if s == nil { return nil } return proto.Clone(s.s).(*spb.Status) } // Err returns an immutable error representing s; returns nil if s.Code() is // OK. func (s *Status) Err() error { if s.Code() == codes.OK { return nil } return (*statusError)(s.s) } // New returns a Status representing c and msg. func New(c codes.Code, msg string) *Status { return &Status{s: &spb.Status{Code: int32(c), Message: msg}} } // Newf returns New(c, fmt.Sprintf(format, a...)). func Newf(c codes.Code, format string, a ...interface{}) *Status { return New(c, fmt.Sprintf(format, a...)) } // Error returns an error representing c and msg. If c is OK, returns nil. func Error(c codes.Code, msg string) error { return New(c, msg).Err() } // Errorf returns Error(c, fmt.Sprintf(format, a...)). func Errorf(c codes.Code, format string, a ...interface{}) error { return Error(c, fmt.Sprintf(format, a...)) } // ErrorProto returns an error representing s. If s.Code is OK, returns nil. func ErrorProto(s *spb.Status) error { return FromProto(s).Err() } // FromProto returns a Status representing s. func FromProto(s *spb.Status) *Status { return &Status{s: proto.Clone(s).(*spb.Status)} } // FromError returns a Status representing err if it was produced from this // package or has a method `GRPCStatus() *Status`. Otherwise, ok is false and a // Status is returned with codes.Unknown and the original error message. func FromError(err error) (s *Status, ok bool) { if err == nil { return &Status{s: &spb.Status{Code: int32(codes.OK)}}, true } if se, ok := err.(interface { GRPCStatus() *Status }); ok { return se.GRPCStatus(), true } return New(codes.Unknown, err.Error()), false } // Convert is a convenience function which removes the need to handle the // boolean return value from FromError. func Convert(err error) *Status { s, _ := FromError(err) return s } // WithDetails returns a new status with the provided details messages appended to the status. // If any errors are encountered, it returns nil and the first error encountered. func (s *Status) WithDetails(details ...proto.Message) (*Status, error) { if s.Code() == codes.OK { return nil, errors.New("no error details for status with code OK") } // s.Code() != OK implies that s.Proto() != nil. p := s.Proto() for _, detail := range details { any, err := ptypes.MarshalAny(detail) if err != nil { return nil, err } p.Details = append(p.Details, any) } return &Status{s: p}, nil } // Details returns a slice of details messages attached to the status. // If a detail cannot be decoded, the error is returned in place of the detail. 
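// Illustrative sketch (not part of this file, caller's point of view): WithDetails
// and Details are meant to round-trip rich error payloads such as the errdetails
// protos exercised in status_test.go below; epb and dpb follow the import aliases
// used there.
//
//	st := status.New(codes.Unavailable, "try again later")
//	st, err := st.WithDetails(&epb.RetryInfo{RetryDelay: &dpb.Duration{Seconds: 30}})
//	if err != nil {
//		// a detail failed to marshal
//	}
//	for _, d := range st.Details() {
//		if ri, ok := d.(*epb.RetryInfo); ok {
//			_ = ri.RetryDelay // honor the server-suggested delay
//		}
//	}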
func (s *Status) Details() []interface{} { if s == nil || s.s == nil { return nil } details := make([]interface{}, 0, len(s.s.Details)) for _, any := range s.s.Details { detail := &ptypes.DynamicAny{} if err := ptypes.UnmarshalAny(any, detail); err != nil { details = append(details, err) continue } details = append(details, detail.Message) } return details } // Code returns the Code of the error if it is a Status error, codes.OK if err // is nil, or codes.Unknown otherwise. func Code(err error) codes.Code { // Don't use FromError to avoid allocation of OK status. if err == nil { return codes.OK } if se, ok := err.(interface { GRPCStatus() *Status }); ok { return se.GRPCStatus().Code() } return codes.Unknown } // FromContextError converts a context error into a Status. It returns a // Status with codes.OK if err is nil, or a Status with codes.Unknown if err is // non-nil and not a context error. func FromContextError(err error) *Status { switch err { case nil: return New(codes.OK, "") case context.DeadlineExceeded: return New(codes.DeadlineExceeded, err.Error()) case context.Canceled: return New(codes.Canceled, err.Error()) default: return New(codes.Unknown, err.Error()) } } grpc-go-1.22.1/status/status_test.go000066400000000000000000000223321351635773100174040ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package status import ( "context" "errors" "fmt" "reflect" "testing" "github.com/golang/protobuf/proto" "github.com/golang/protobuf/ptypes" apb "github.com/golang/protobuf/ptypes/any" dpb "github.com/golang/protobuf/ptypes/duration" cpb "google.golang.org/genproto/googleapis/rpc/code" epb "google.golang.org/genproto/googleapis/rpc/errdetails" spb "google.golang.org/genproto/googleapis/rpc/status" "google.golang.org/grpc/codes" ) // errEqual is essentially a copy of testutils.StatusErrEqual(), to avoid a // cyclic dependency. 
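// Illustrative aside (not part of this test file): FromContextError, defined in
// status.go above, is the usual way to surface a locally observed context error as an
// RPC status:
//
//	if err := ctx.Err(); err != nil {
//		// Yields DeadlineExceeded, Canceled, or Unknown depending on err.
//		return nil, status.FromContextError(err).Err()
//	}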
func errEqual(err1, err2 error) bool { status1, ok := FromError(err1) if !ok { return false } status2, ok := FromError(err2) if !ok { return false } return proto.Equal(status1.Proto(), status2.Proto()) } func TestErrorsWithSameParameters(t *testing.T) { const description = "some description" e1 := Errorf(codes.AlreadyExists, description) e2 := Errorf(codes.AlreadyExists, description) if e1 == e2 || !errEqual(e1, e2) { t.Fatalf("Errors should be equivalent but unique - e1: %v, %v e2: %p, %v", e1.(*statusError), e1, e2.(*statusError), e2) } } func TestFromToProto(t *testing.T) { s := &spb.Status{ Code: int32(codes.Internal), Message: "test test test", Details: []*apb.Any{{TypeUrl: "foo", Value: []byte{3, 2, 1}}}, } err := FromProto(s) if got := err.Proto(); !proto.Equal(s, got) { t.Fatalf("Expected errors to be identical - s: %v got: %v", s, got) } } func TestFromNilProto(t *testing.T) { tests := []*Status{nil, FromProto(nil)} for _, s := range tests { if c := s.Code(); c != codes.OK { t.Errorf("s: %v - Expected s.Code() = OK; got %v", s, c) } if m := s.Message(); m != "" { t.Errorf("s: %v - Expected s.Message() = \"\"; got %q", s, m) } if p := s.Proto(); p != nil { t.Errorf("s: %v - Expected s.Proto() = nil; got %q", s, p) } if e := s.Err(); e != nil { t.Errorf("s: %v - Expected s.Err() = nil; got %v", s, e) } } } func TestError(t *testing.T) { err := Error(codes.Internal, "test description") if got, want := err.Error(), "rpc error: code = Internal desc = test description"; got != want { t.Fatalf("err.Error() = %q; want %q", got, want) } s, _ := FromError(err) if got, want := s.Code(), codes.Internal; got != want { t.Fatalf("err.Code() = %s; want %s", got, want) } if got, want := s.Message(), "test description"; got != want { t.Fatalf("err.Message() = %s; want %s", got, want) } } func TestErrorOK(t *testing.T) { err := Error(codes.OK, "foo") if err != nil { t.Fatalf("Error(codes.OK, _) = %p; want nil", err.(*statusError)) } } func TestErrorProtoOK(t *testing.T) { s := &spb.Status{Code: int32(codes.OK)} if got := ErrorProto(s); got != nil { t.Fatalf("ErrorProto(%v) = %v; want nil", s, got) } } func TestFromError(t *testing.T) { code, message := codes.Internal, "test description" err := Error(code, message) s, ok := FromError(err) if !ok || s.Code() != code || s.Message() != message || s.Err() == nil { t.Fatalf("FromError(%v) = %v, %v; want , true", err, s, ok, code, message) } } func TestFromErrorOK(t *testing.T) { code, message := codes.OK, "" s, ok := FromError(nil) if !ok || s.Code() != code || s.Message() != message || s.Err() != nil { t.Fatalf("FromError(nil) = %v, %v; want , true", s, ok, code, message) } } type customError struct { Code codes.Code Message string Details []*apb.Any } func (c customError) Error() string { return fmt.Sprintf("rpc error: code = %s desc = %s", c.Code, c.Message) } func (c customError) GRPCStatus() *Status { return &Status{ s: &spb.Status{ Code: int32(c.Code), Message: c.Message, Details: c.Details, }, } } func TestFromErrorImplementsInterface(t *testing.T) { code, message := codes.Internal, "test description" details := []*apb.Any{{ TypeUrl: "testUrl", Value: []byte("testValue"), }} err := customError{ Code: code, Message: message, Details: details, } s, ok := FromError(err) if !ok || s.Code() != code || s.Message() != message || s.Err() == nil { t.Fatalf("FromError(%v) = %v, %v; want , true", err, s, ok, code, message) } pd := s.Proto().GetDetails() if len(pd) != 1 || !proto.Equal(pd[0], details[0]) { t.Fatalf("s.Proto.GetDetails() = %v; want ", pd, 
details) } } func TestFromErrorUnknownError(t *testing.T) { code, message := codes.Unknown, "unknown error" err := errors.New("unknown error") s, ok := FromError(err) if ok || s.Code() != code || s.Message() != message { t.Fatalf("FromError(%v) = %v, %v; want , false", err, s, ok, code, message) } } func TestConvertKnownError(t *testing.T) { code, message := codes.Internal, "test description" err := Error(code, message) s := Convert(err) if s.Code() != code || s.Message() != message { t.Fatalf("Convert(%v) = %v; want ", err, s, code, message) } } func TestConvertUnknownError(t *testing.T) { code, message := codes.Unknown, "unknown error" err := errors.New("unknown error") s := Convert(err) if s.Code() != code || s.Message() != message { t.Fatalf("Convert(%v) = %v; want ", err, s, code, message) } } func TestStatus_ErrorDetails(t *testing.T) { tests := []struct { code codes.Code details []proto.Message }{ { code: codes.NotFound, details: nil, }, { code: codes.NotFound, details: []proto.Message{ &epb.ResourceInfo{ ResourceType: "book", ResourceName: "projects/1234/books/5678", Owner: "User", }, }, }, { code: codes.Internal, details: []proto.Message{ &epb.DebugInfo{ StackEntries: []string{ "first stack", "second stack", }, }, }, }, { code: codes.Unavailable, details: []proto.Message{ &epb.RetryInfo{ RetryDelay: &dpb.Duration{Seconds: 60}, }, &epb.ResourceInfo{ ResourceType: "book", ResourceName: "projects/1234/books/5678", Owner: "User", }, }, }, } for _, tc := range tests { s, err := New(tc.code, "").WithDetails(tc.details...) if err != nil { t.Fatalf("(%v).WithDetails(%+v) failed: %v", str(s), tc.details, err) } details := s.Details() for i := range details { if !proto.Equal(details[i].(proto.Message), tc.details[i]) { t.Fatalf("(%v).Details()[%d] = %+v, want %+v", str(s), i, details[i], tc.details[i]) } } } } func TestStatus_WithDetails_Fail(t *testing.T) { tests := []*Status{ nil, FromProto(nil), New(codes.OK, ""), } for _, s := range tests { if s, err := s.WithDetails(); err == nil || s != nil { t.Fatalf("(%v).WithDetails(%+v) = %v, %v; want nil, non-nil", str(s), []proto.Message{}, s, err) } } } func TestStatus_ErrorDetails_Fail(t *testing.T) { tests := []struct { s *Status i []interface{} }{ { nil, nil, }, { FromProto(nil), nil, }, { New(codes.OK, ""), []interface{}{}, }, { FromProto(&spb.Status{ Code: int32(cpb.Code_CANCELLED), Details: []*apb.Any{ { TypeUrl: "", Value: []byte{}, }, mustMarshalAny(&epb.ResourceInfo{ ResourceType: "book", ResourceName: "projects/1234/books/5678", Owner: "User", }), }, }), []interface{}{ errors.New(`message type url "" is invalid`), &epb.ResourceInfo{ ResourceType: "book", ResourceName: "projects/1234/books/5678", Owner: "User", }, }, }, } for _, tc := range tests { got := tc.s.Details() if !reflect.DeepEqual(got, tc.i) { t.Errorf("(%v).Details() = %+v, want %+v", str(tc.s), got, tc.i) } } } func str(s *Status) string { if s == nil { return "nil" } if s.s == nil { return "" } return fmt.Sprintf("", codes.Code(s.s.GetCode()), s.s.GetMessage(), s.s.GetDetails()) } // mustMarshalAny converts a protobuf message to an any. 
func mustMarshalAny(msg proto.Message) *apb.Any { any, err := ptypes.MarshalAny(msg) if err != nil { panic(fmt.Sprintf("ptypes.MarshalAny(%+v) failed: %v", msg, err)) } return any } func TestFromContextError(t *testing.T) { testCases := []struct { in error want *Status }{ {in: nil, want: New(codes.OK, "")}, {in: context.DeadlineExceeded, want: New(codes.DeadlineExceeded, context.DeadlineExceeded.Error())}, {in: context.Canceled, want: New(codes.Canceled, context.Canceled.Error())}, {in: errors.New("other"), want: New(codes.Unknown, "other")}, } for _, tc := range testCases { got := FromContextError(tc.in) if got.Code() != tc.want.Code() || got.Message() != tc.want.Message() { t.Errorf("FromContextError(%v) = %v; want %v", tc.in, got, tc.want) } } } grpc-go-1.22.1/stream.go000066400000000000000000001276601351635773100150040ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "context" "errors" "io" "math" "strconv" "sync" "time" "golang.org/x/net/trace" "google.golang.org/grpc/balancer" "google.golang.org/grpc/codes" "google.golang.org/grpc/encoding" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/internal/balancerload" "google.golang.org/grpc/internal/binarylog" "google.golang.org/grpc/internal/channelz" "google.golang.org/grpc/internal/grpcrand" "google.golang.org/grpc/internal/transport" "google.golang.org/grpc/metadata" "google.golang.org/grpc/peer" "google.golang.org/grpc/stats" "google.golang.org/grpc/status" ) // StreamHandler defines the handler called by gRPC server to complete the // execution of a streaming RPC. If a StreamHandler returns an error, it // should be produced by the status package, or else gRPC will use // codes.Unknown as the status code and err.Error() as the status message // of the RPC. type StreamHandler func(srv interface{}, stream ServerStream) error // StreamDesc represents a streaming RPC service's method specification. type StreamDesc struct { StreamName string Handler StreamHandler // At least one of these is true. ServerStreams bool ClientStreams bool } // Stream defines the common interface a client or server stream has to satisfy. // // Deprecated: See ClientStream and ServerStream documentation instead. type Stream interface { // Deprecated: See ClientStream and ServerStream documentation instead. Context() context.Context // Deprecated: See ClientStream and ServerStream documentation instead. SendMsg(m interface{}) error // Deprecated: See ClientStream and ServerStream documentation instead. RecvMsg(m interface{}) error } // ClientStream defines the client-side behavior of a streaming RPC. // // All errors returned from ClientStream methods are compatible with the // status package. type ClientStream interface { // Header returns the header metadata received from the server if there // is any. It blocks if the metadata is not ready to read. Header() (metadata.MD, error) // Trailer returns the trailer metadata from the server, if there is any. 
// It must only be called after stream.CloseAndRecv has returned, or // stream.Recv has returned a non-nil error (including io.EOF). Trailer() metadata.MD // CloseSend closes the send direction of the stream. It closes the stream // when non-nil error is met. It is also not safe to call CloseSend // concurrently with SendMsg. CloseSend() error // Context returns the context for this stream. // // It should not be called until after Header or RecvMsg has returned. Once // called, subsequent client-side retries are disabled. Context() context.Context // SendMsg is generally called by generated code. On error, SendMsg aborts // the stream. If the error was generated by the client, the status is // returned directly; otherwise, io.EOF is returned and the status of // the stream may be discovered using RecvMsg. // // SendMsg blocks until: // - There is sufficient flow control to schedule m with the transport, or // - The stream is done, or // - The stream breaks. // // SendMsg does not wait until the message is received by the server. An // untimely stream closure may result in lost messages. To ensure delivery, // users should ensure the RPC completed successfully using RecvMsg. // // It is safe to have a goroutine calling SendMsg and another goroutine // calling RecvMsg on the same stream at the same time, but it is not safe // to call SendMsg on the same stream in different goroutines. It is also // not safe to call CloseSend concurrently with SendMsg. SendMsg(m interface{}) error // RecvMsg blocks until it receives a message into m or the stream is // done. It returns io.EOF when the stream completes successfully. On // any other error, the stream is aborted and the error contains the RPC // status. // // It is safe to have a goroutine calling SendMsg and another goroutine // calling RecvMsg on the same stream at the same time, but it is not // safe to call RecvMsg on the same stream in different goroutines. RecvMsg(m interface{}) error } // NewStream creates a new Stream for the client side. This is typically // called by generated code. ctx is used for the lifetime of the stream. // // To ensure resources are not leaked due to the stream returned, one of the following // actions must be performed: // // 1. Call Close on the ClientConn. // 2. Cancel the context provided. // 3. Call RecvMsg until a non-nil error is returned. A protobuf-generated // client-streaming RPC, for instance, might use the helper function // CloseAndRecv (note that CloseSend does not Recv, therefore is not // guaranteed to release all resources). // 4. Receive a non-nil, non-io.EOF error from Header or SendMsg. // // If none of the above happen, a goroutine and a context will be leaked, and grpc // will not call the optionally-configured stats handler with a stats.End message. func (cc *ClientConn) NewStream(ctx context.Context, desc *StreamDesc, method string, opts ...CallOption) (ClientStream, error) { // allow interceptor to see all applicable call options, which means those // configured as defaults from dial option as well as per-call options opts = combine(cc.dopts.callOptions, opts) if cc.dopts.streamInt != nil { return cc.dopts.streamInt(ctx, desc, cc, method, newClientStream, opts...) } return newClientStream(ctx, desc, cc, method, opts...) } // NewClientStream is a wrapper for ClientConn.NewStream. func NewClientStream(ctx context.Context, desc *StreamDesc, cc *ClientConn, method string, opts ...CallOption) (ClientStream, error) { return cc.NewStream(ctx, desc, method, opts...) 
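// Illustrative sketch (not part of this file): a server-streaming call that honors
// the resource rules documented for NewStream above, written against the generated
// TestService client used by the stats tests earlier in this archive; tc and req are
// assumed to be set up as in those tests.
//
//	ctx, cancel := context.WithCancel(context.Background())
//	defer cancel() // rule 2: cancelling the context always releases the stream
//	stream, err := tc.ServerStreamCall(ctx, req)
//	if err != nil {
//		return err
//	}
//	for {
//		resp, err := stream.Recv()
//		if err == io.EOF {
//			break // rule 3: RecvMsg was called until a non-nil error
//		}
//		if err != nil {
//			return err
//		}
//		_ = resp
//	}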
} func newClientStream(ctx context.Context, desc *StreamDesc, cc *ClientConn, method string, opts ...CallOption) (_ ClientStream, err error) { if channelz.IsOn() { cc.incrCallsStarted() defer func() { if err != nil { cc.incrCallsFailed() } }() } c := defaultCallInfo() // Provide an opportunity for the first RPC to see the first service config // provided by the resolver. if err := cc.waitForResolvedAddrs(ctx); err != nil { return nil, err } mc := cc.GetMethodConfig(method) if mc.WaitForReady != nil { c.failFast = !*mc.WaitForReady } // Possible context leak: // The cancel function for the child context we create will only be called // when RecvMsg returns a non-nil error, if the ClientConn is closed, or if // an error is generated by SendMsg. // https://github.com/grpc/grpc-go/issues/1818. var cancel context.CancelFunc if mc.Timeout != nil && *mc.Timeout >= 0 { ctx, cancel = context.WithTimeout(ctx, *mc.Timeout) } else { ctx, cancel = context.WithCancel(ctx) } defer func() { if err != nil { cancel() } }() for _, o := range opts { if err := o.before(c); err != nil { return nil, toRPCErr(err) } } c.maxSendMessageSize = getMaxSize(mc.MaxReqSize, c.maxSendMessageSize, defaultClientMaxSendMessageSize) c.maxReceiveMessageSize = getMaxSize(mc.MaxRespSize, c.maxReceiveMessageSize, defaultClientMaxReceiveMessageSize) if err := setCallInfoCodec(c); err != nil { return nil, err } callHdr := &transport.CallHdr{ Host: cc.authority, Method: method, ContentSubtype: c.contentSubtype, } // Set our outgoing compression according to the UseCompressor CallOption, if // set. In that case, also find the compressor from the encoding package. // Otherwise, use the compressor configured by the WithCompressor DialOption, // if set. var cp Compressor var comp encoding.Compressor if ct := c.compressorType; ct != "" { callHdr.SendCompress = ct if ct != encoding.Identity { comp = encoding.GetCompressor(ct) if comp == nil { return nil, status.Errorf(codes.Internal, "grpc: Compressor is not installed for requested grpc-encoding %q", ct) } } } else if cc.dopts.cp != nil { callHdr.SendCompress = cc.dopts.cp.Type() cp = cc.dopts.cp } if c.creds != nil { callHdr.Creds = c.creds } var trInfo *traceInfo if EnableTracing { trInfo = &traceInfo{ tr: trace.New("grpc.Sent."+methodFamily(method), method), firstLine: firstLine{ client: true, }, } if deadline, ok := ctx.Deadline(); ok { trInfo.firstLine.deadline = time.Until(deadline) } trInfo.tr.LazyLog(&trInfo.firstLine, false) ctx = trace.NewContext(ctx, trInfo.tr) } ctx = newContextWithRPCInfo(ctx, c.failFast, c.codec, cp, comp) sh := cc.dopts.copts.StatsHandler var beginTime time.Time if sh != nil { ctx = sh.TagRPC(ctx, &stats.RPCTagInfo{FullMethodName: method, FailFast: c.failFast}) beginTime = time.Now() begin := &stats.Begin{ Client: true, BeginTime: beginTime, FailFast: c.failFast, } sh.HandleRPC(ctx, begin) } cs := &clientStream{ callHdr: callHdr, ctx: ctx, methodConfig: &mc, opts: opts, callInfo: c, cc: cc, desc: desc, codec: c.codec, cp: cp, comp: comp, cancel: cancel, beginTime: beginTime, firstAttempt: true, } if !cc.dopts.disableRetry { cs.retryThrottler = cc.retryThrottler.Load().(*retryThrottler) } cs.binlog = binarylog.GetMethodLogger(method) cs.callInfo.stream = cs // Only this initial attempt has stats/tracing. // TODO(dfawley): move to newAttempt when per-attempt stats are implemented. 
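// Illustrative sketch (not part of this file): per the compressor selection above, a
// grpc.UseCompressor call option overrides the connection-level WithCompressor dial
// option for a single RPC. Assuming a compressor has been registered under the name
// "gzip", a caller might write the following; tc, ctx and req are assumed.
//
//	resp, err := tc.UnaryCall(ctx, req, grpc.UseCompressor("gzip"))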
if err := cs.newAttemptLocked(sh, trInfo); err != nil { cs.finish(err) return nil, err } op := func(a *csAttempt) error { return a.newStream() } if err := cs.withRetry(op, func() { cs.bufferForRetryLocked(0, op) }); err != nil { cs.finish(err) return nil, err } if cs.binlog != nil { md, _ := metadata.FromOutgoingContext(ctx) logEntry := &binarylog.ClientHeader{ OnClientSide: true, Header: md, MethodName: method, Authority: cs.cc.authority, } if deadline, ok := ctx.Deadline(); ok { logEntry.Timeout = time.Until(deadline) if logEntry.Timeout < 0 { logEntry.Timeout = 0 } } cs.binlog.Log(logEntry) } if desc != unaryStreamDesc { // Listen on cc and stream contexts to cleanup when the user closes the // ClientConn or cancels the stream context. In all other cases, an error // should already be injected into the recv buffer by the transport, which // the client will eventually receive, and then we will cancel the stream's // context in clientStream.finish. go func() { select { case <-cc.ctx.Done(): cs.finish(ErrClientConnClosing) case <-ctx.Done(): cs.finish(toRPCErr(ctx.Err())) } }() } return cs, nil } func (cs *clientStream) newAttemptLocked(sh stats.Handler, trInfo *traceInfo) error { cs.attempt = &csAttempt{ cs: cs, dc: cs.cc.dopts.dc, statsHandler: sh, trInfo: trInfo, } if err := cs.ctx.Err(); err != nil { return toRPCErr(err) } t, done, err := cs.cc.getTransport(cs.ctx, cs.callInfo.failFast, cs.callHdr.Method) if err != nil { return err } if trInfo != nil { trInfo.firstLine.SetRemoteAddr(t.RemoteAddr()) } cs.attempt.t = t cs.attempt.done = done return nil } func (a *csAttempt) newStream() error { cs := a.cs cs.callHdr.PreviousAttempts = cs.numRetries s, err := a.t.NewStream(cs.ctx, cs.callHdr) if err != nil { return toRPCErr(err) } cs.attempt.s = s cs.attempt.p = &parser{r: s} return nil } // clientStream implements a client side Stream. type clientStream struct { callHdr *transport.CallHdr opts []CallOption callInfo *callInfo cc *ClientConn desc *StreamDesc codec baseCodec cp Compressor comp encoding.Compressor cancel context.CancelFunc // cancels all attempts sentLast bool // sent an end stream beginTime time.Time methodConfig *MethodConfig ctx context.Context // the application's context, wrapped by stats/tracing retryThrottler *retryThrottler // The throttler active when the RPC began. binlog *binarylog.MethodLogger // Binary logger, can be nil. // serverHeaderBinlogged is a boolean for whether server header has been // logged. Server header will be logged when the first time one of those // happens: stream.Header(), stream.Recv(). // // It's only read and used by Recv() and Header(), so it doesn't need to be // synchronized. serverHeaderBinlogged bool mu sync.Mutex firstAttempt bool // if true, transparent retry is valid numRetries int // exclusive of transparent retry attempt(s) numRetriesSincePushback int // retries since pushback; to reset backoff finished bool // TODO: replace with atomic cmpxchg or sync.Once? attempt *csAttempt // the active client stream attempt // TODO(hedging): hedging will have multiple attempts simultaneously. committed bool // active attempt committed for retry? buffer []func(a *csAttempt) error // operations to replay on retry bufferSize int // current size of buffer } // csAttempt implements a single transport stream attempt within a // clientStream. 
type csAttempt struct { cs *clientStream t transport.ClientTransport s *transport.Stream p *parser done func(balancer.DoneInfo) finished bool dc Decompressor decomp encoding.Compressor decompSet bool mu sync.Mutex // guards trInfo.tr // trInfo may be nil (if EnableTracing is false). // trInfo.tr is set when created (if EnableTracing is true), // and cleared when the finish method is called. trInfo *traceInfo statsHandler stats.Handler } func (cs *clientStream) commitAttemptLocked() { cs.committed = true cs.buffer = nil } func (cs *clientStream) commitAttempt() { cs.mu.Lock() cs.commitAttemptLocked() cs.mu.Unlock() } // shouldRetry returns nil if the RPC should be retried; otherwise it returns // the error that should be returned by the operation. func (cs *clientStream) shouldRetry(err error) error { if cs.attempt.s == nil && !cs.callInfo.failFast { // In the event of any error from NewStream (attempt.s == nil), we // never attempted to write anything to the wire, so we can retry // indefinitely for non-fail-fast RPCs. return nil } if cs.finished || cs.committed { // RPC is finished or committed; cannot retry. return err } // Wait for the trailers. if cs.attempt.s != nil { <-cs.attempt.s.Done() } if cs.firstAttempt && !cs.callInfo.failFast && (cs.attempt.s == nil || cs.attempt.s.Unprocessed()) { // First attempt, wait-for-ready, stream unprocessed: transparently retry. cs.firstAttempt = false return nil } cs.firstAttempt = false if cs.cc.dopts.disableRetry { return err } pushback := 0 hasPushback := false if cs.attempt.s != nil { if to, toErr := cs.attempt.s.TrailersOnly(); toErr != nil || !to { return err } // TODO(retry): Move down if the spec changes to not check server pushback // before considering this a failure for throttling. sps := cs.attempt.s.Trailer()["grpc-retry-pushback-ms"] if len(sps) == 1 { var e error if pushback, e = strconv.Atoi(sps[0]); e != nil || pushback < 0 { grpclog.Infof("Server retry pushback specified to abort (%q).", sps[0]) cs.retryThrottler.throttle() // This counts as a failure for throttling. return err } hasPushback = true } else if len(sps) > 1 { grpclog.Warningf("Server retry pushback specified multiple values (%q); not retrying.", sps) cs.retryThrottler.throttle() // This counts as a failure for throttling. return err } } var code codes.Code if cs.attempt.s != nil { code = cs.attempt.s.Status().Code() } else { code = status.Convert(err).Code() } rp := cs.methodConfig.retryPolicy if rp == nil || !rp.retryableStatusCodes[code] { return err } // Note: the ordering here is important; we count this as a failure // only if the code matched a retryable code. if cs.retryThrottler.throttle() { return err } if cs.numRetries+1 >= rp.maxAttempts { return err } var dur time.Duration if hasPushback { dur = time.Millisecond * time.Duration(pushback) cs.numRetriesSincePushback = 0 } else { fact := math.Pow(rp.backoffMultiplier, float64(cs.numRetriesSincePushback)) cur := float64(rp.initialBackoff) * fact if max := float64(rp.maxBackoff); cur > max { cur = max } dur = time.Duration(grpcrand.Int63n(int64(cur))) cs.numRetriesSincePushback++ } // TODO(dfawley): we could eagerly fail here if dur puts us past the // deadline, but unsure if it is worth doing. t := time.NewTimer(dur) select { case <-t.C: cs.numRetries++ return nil case <-cs.ctx.Done(): t.Stop() return status.FromContextError(cs.ctx.Err()).Err() } } // Returns nil if a retry was performed and succeeded; error otherwise. 
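// Illustrative sketch (not part of this file): the exponential backoff computed in
// shouldRetry above, pulled out as a standalone helper for clarity. The parameter
// names mirror the retryPolicy fields used there.
//
//	func retryBackoff(retries int, initialBackoff, maxBackoff time.Duration, multiplier float64) time.Duration {
//		cur := float64(initialBackoff) * math.Pow(multiplier, float64(retries))
//		if max := float64(maxBackoff); cur > max {
//			cur = max
//		}
//		// The actual delay is a random duration in [0, cur), as in the code above.
//		return time.Duration(grpcrand.Int63n(int64(cur)))
//	}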
func (cs *clientStream) retryLocked(lastErr error) error { for { cs.attempt.finish(lastErr) if err := cs.shouldRetry(lastErr); err != nil { cs.commitAttemptLocked() return err } if err := cs.newAttemptLocked(nil, nil); err != nil { return err } if lastErr = cs.replayBufferLocked(); lastErr == nil { return nil } } } func (cs *clientStream) Context() context.Context { cs.commitAttempt() // No need to lock before using attempt, since we know it is committed and // cannot change. return cs.attempt.s.Context() } func (cs *clientStream) withRetry(op func(a *csAttempt) error, onSuccess func()) error { cs.mu.Lock() for { if cs.committed { cs.mu.Unlock() return op(cs.attempt) } a := cs.attempt cs.mu.Unlock() err := op(a) cs.mu.Lock() if a != cs.attempt { // We started another attempt already. continue } if err == io.EOF { <-a.s.Done() } if err == nil || (err == io.EOF && a.s.Status().Code() == codes.OK) { onSuccess() cs.mu.Unlock() return err } if err := cs.retryLocked(err); err != nil { cs.mu.Unlock() return err } } } func (cs *clientStream) Header() (metadata.MD, error) { var m metadata.MD err := cs.withRetry(func(a *csAttempt) error { var err error m, err = a.s.Header() return toRPCErr(err) }, cs.commitAttemptLocked) if err != nil { cs.finish(err) return nil, err } if cs.binlog != nil && !cs.serverHeaderBinlogged { // Only log if binary log is on and header has not been logged. logEntry := &binarylog.ServerHeader{ OnClientSide: true, Header: m, PeerAddr: nil, } if peer, ok := peer.FromContext(cs.Context()); ok { logEntry.PeerAddr = peer.Addr } cs.binlog.Log(logEntry) cs.serverHeaderBinlogged = true } return m, err } func (cs *clientStream) Trailer() metadata.MD { // On RPC failure, we never need to retry, because usage requires that // RecvMsg() returned a non-nil error before calling this function is valid. // We would have retried earlier if necessary. // // Commit the attempt anyway, just in case users are not following those // directions -- it will prevent races and should not meaningfully impact // performance. cs.commitAttempt() if cs.attempt.s == nil { return nil } return cs.attempt.s.Trailer() } func (cs *clientStream) replayBufferLocked() error { a := cs.attempt for _, f := range cs.buffer { if err := f(a); err != nil { return err } } return nil } func (cs *clientStream) bufferForRetryLocked(sz int, op func(a *csAttempt) error) { // Note: we still will buffer if retry is disabled (for transparent retries). if cs.committed { return } cs.bufferSize += sz if cs.bufferSize > cs.callInfo.maxRetryRPCBufferSize { cs.commitAttemptLocked() return } cs.buffer = append(cs.buffer, op) } func (cs *clientStream) SendMsg(m interface{}) (err error) { defer func() { if err != nil && err != io.EOF { // Call finish on the client stream for errors generated by this SendMsg // call, as these indicate problems created by this client. (Transport // errors are converted to an io.EOF error in csAttempt.sendMsg; the real // error will be returned from RecvMsg eventually in that case, or be // retried.) cs.finish(err) } }() if cs.sentLast { return status.Errorf(codes.Internal, "SendMsg called after CloseSend") } if !cs.desc.ClientStreams { cs.sentLast = true } // load hdr, payload, data hdr, payload, data, err := prepareMsg(m, cs.codec, cs.cp, cs.comp) if err != nil { return err } // TODO(dfawley): should we be checking len(data) instead? if len(payload) > *cs.callInfo.maxSendMessageSize { return status.Errorf(codes.ResourceExhausted, "trying to send message larger than max (%d vs. 
%d)", len(payload), *cs.callInfo.maxSendMessageSize) } msgBytes := data // Store the pointer before setting to nil. For binary logging. op := func(a *csAttempt) error { err := a.sendMsg(m, hdr, payload, data) // nil out the message and uncomp when replaying; they are only needed for // stats which is disabled for subsequent attempts. m, data = nil, nil return err } err = cs.withRetry(op, func() { cs.bufferForRetryLocked(len(hdr)+len(payload), op) }) if cs.binlog != nil && err == nil { cs.binlog.Log(&binarylog.ClientMessage{ OnClientSide: true, Message: msgBytes, }) } return } func (cs *clientStream) RecvMsg(m interface{}) error { if cs.binlog != nil && !cs.serverHeaderBinlogged { // Call Header() to binary log header if it's not already logged. cs.Header() } var recvInfo *payloadInfo if cs.binlog != nil { recvInfo = &payloadInfo{} } err := cs.withRetry(func(a *csAttempt) error { return a.recvMsg(m, recvInfo) }, cs.commitAttemptLocked) if cs.binlog != nil && err == nil { cs.binlog.Log(&binarylog.ServerMessage{ OnClientSide: true, Message: recvInfo.uncompressedBytes, }) } if err != nil || !cs.desc.ServerStreams { // err != nil or non-server-streaming indicates end of stream. cs.finish(err) if cs.binlog != nil { // finish will not log Trailer. Log Trailer here. logEntry := &binarylog.ServerTrailer{ OnClientSide: true, Trailer: cs.Trailer(), Err: err, } if logEntry.Err == io.EOF { logEntry.Err = nil } if peer, ok := peer.FromContext(cs.Context()); ok { logEntry.PeerAddr = peer.Addr } cs.binlog.Log(logEntry) } } return err } func (cs *clientStream) CloseSend() error { if cs.sentLast { // TODO: return an error and finish the stream instead, due to API misuse? return nil } cs.sentLast = true op := func(a *csAttempt) error { a.t.Write(a.s, nil, nil, &transport.Options{Last: true}) // Always return nil; io.EOF is the only error that might make sense // instead, but there is no need to signal the client to call RecvMsg // as the only use left for the stream after CloseSend is to call // RecvMsg. This also matches historical behavior. return nil } cs.withRetry(op, func() { cs.bufferForRetryLocked(0, op) }) if cs.binlog != nil { cs.binlog.Log(&binarylog.ClientHalfClose{ OnClientSide: true, }) } // We never returned an error here for reasons. return nil } func (cs *clientStream) finish(err error) { if err == io.EOF { // Ending a stream with EOF indicates a success. err = nil } cs.mu.Lock() if cs.finished { cs.mu.Unlock() return } cs.finished = true cs.commitAttemptLocked() cs.mu.Unlock() // For binary logging. only log cancel in finish (could be caused by RPC ctx // canceled or ClientConn closed). Trailer will be logged in RecvMsg. // // Only one of cancel or trailer needs to be logged. In the cases where // users don't call RecvMsg, users must have already canceled the RPC. if cs.binlog != nil && status.Code(err) == codes.Canceled { cs.binlog.Log(&binarylog.Cancel{ OnClientSide: true, }) } if err == nil { cs.retryThrottler.successfulRPC() } if channelz.IsOn() { if err != nil { cs.cc.incrCallsFailed() } else { cs.cc.incrCallsSucceeded() } } if cs.attempt != nil { cs.attempt.finish(err) } // after functions all rely upon having a stream. 
if cs.attempt.s != nil { for _, o := range cs.opts { o.after(cs.callInfo) } } cs.cancel() } func (a *csAttempt) sendMsg(m interface{}, hdr, payld, data []byte) error { cs := a.cs if a.trInfo != nil { a.mu.Lock() if a.trInfo.tr != nil { a.trInfo.tr.LazyLog(&payload{sent: true, msg: m}, true) } a.mu.Unlock() } if err := a.t.Write(a.s, hdr, payld, &transport.Options{Last: !cs.desc.ClientStreams}); err != nil { if !cs.desc.ClientStreams { // For non-client-streaming RPCs, we return nil instead of EOF on error // because the generated code requires it. finish is not called; RecvMsg() // will call it with the stream's status independently. return nil } return io.EOF } if a.statsHandler != nil { a.statsHandler.HandleRPC(cs.ctx, outPayload(true, m, data, payld, time.Now())) } if channelz.IsOn() { a.t.IncrMsgSent() } return nil } func (a *csAttempt) recvMsg(m interface{}, payInfo *payloadInfo) (err error) { cs := a.cs if a.statsHandler != nil && payInfo == nil { payInfo = &payloadInfo{} } if !a.decompSet { // Block until we receive headers containing received message encoding. if ct := a.s.RecvCompress(); ct != "" && ct != encoding.Identity { if a.dc == nil || a.dc.Type() != ct { // No configured decompressor, or it does not match the incoming // message encoding; attempt to find a registered compressor that does. a.dc = nil a.decomp = encoding.GetCompressor(ct) } } else { // No compression is used; disable our decompressor. a.dc = nil } // Only initialize this state once per stream. a.decompSet = true } err = recv(a.p, cs.codec, a.s, a.dc, m, *cs.callInfo.maxReceiveMessageSize, payInfo, a.decomp) if err != nil { if err == io.EOF { if statusErr := a.s.Status().Err(); statusErr != nil { return statusErr } return io.EOF // indicates successful end of stream. } return toRPCErr(err) } if a.trInfo != nil { a.mu.Lock() if a.trInfo.tr != nil { a.trInfo.tr.LazyLog(&payload{sent: false, msg: m}, true) } a.mu.Unlock() } if a.statsHandler != nil { a.statsHandler.HandleRPC(cs.ctx, &stats.InPayload{ Client: true, RecvTime: time.Now(), Payload: m, // TODO truncate large payload. Data: payInfo.uncompressedBytes, WireLength: payInfo.wireLength, Length: len(payInfo.uncompressedBytes), }) } if channelz.IsOn() { a.t.IncrMsgRecv() } if cs.desc.ServerStreams { // Subsequent messages should be received by subsequent RecvMsg calls. return nil } // Special handling for non-server-stream rpcs. // This recv expects EOF or errors, so we don't collect inPayload. err = recv(a.p, cs.codec, a.s, a.dc, m, *cs.callInfo.maxReceiveMessageSize, nil, a.decomp) if err == nil { return toRPCErr(errors.New("grpc: client streaming protocol violation: get , want ")) } if err == io.EOF { return a.s.Status().Err() // non-server streaming Recv returns nil on success } return toRPCErr(err) } func (a *csAttempt) finish(err error) { a.mu.Lock() if a.finished { a.mu.Unlock() return } a.finished = true if err == io.EOF { // Ending a stream with EOF indicates a success. 
err = nil } var tr metadata.MD if a.s != nil { a.t.CloseStream(a.s, err) tr = a.s.Trailer() } if a.done != nil { br := false if a.s != nil { br = a.s.BytesReceived() } a.done(balancer.DoneInfo{ Err: err, Trailer: tr, BytesSent: a.s != nil, BytesReceived: br, ServerLoad: balancerload.Parse(tr), }) } if a.statsHandler != nil { end := &stats.End{ Client: true, BeginTime: a.cs.beginTime, EndTime: time.Now(), Trailer: tr, Error: err, } a.statsHandler.HandleRPC(a.cs.ctx, end) } if a.trInfo != nil && a.trInfo.tr != nil { if err == nil { a.trInfo.tr.LazyPrintf("RPC: [OK]") } else { a.trInfo.tr.LazyPrintf("RPC: [%v]", err) a.trInfo.tr.SetError() } a.trInfo.tr.Finish() a.trInfo.tr = nil } a.mu.Unlock() } // newClientStream creates a ClientStream with the specified transport, on the // given addrConn. // // It's expected that the given transport is either the same one in addrConn, or // is already closed. To avoid race, transport is specified separately, instead // of using ac.transpot. // // Main difference between this and ClientConn.NewStream: // - no retry // - no service config (or wait for service config) // - no tracing or stats func newNonRetryClientStream(ctx context.Context, desc *StreamDesc, method string, t transport.ClientTransport, ac *addrConn, opts ...CallOption) (_ ClientStream, err error) { if t == nil { // TODO: return RPC error here? return nil, errors.New("transport provided is nil") } // defaultCallInfo contains unnecessary info(i.e. failfast, maxRetryRPCBufferSize), so we just initialize an empty struct. c := &callInfo{} // Possible context leak: // The cancel function for the child context we create will only be called // when RecvMsg returns a non-nil error, if the ClientConn is closed, or if // an error is generated by SendMsg. // https://github.com/grpc/grpc-go/issues/1818. ctx, cancel := context.WithCancel(ctx) defer func() { if err != nil { cancel() } }() for _, o := range opts { if err := o.before(c); err != nil { return nil, toRPCErr(err) } } c.maxReceiveMessageSize = getMaxSize(nil, c.maxReceiveMessageSize, defaultClientMaxReceiveMessageSize) c.maxSendMessageSize = getMaxSize(nil, c.maxSendMessageSize, defaultServerMaxSendMessageSize) if err := setCallInfoCodec(c); err != nil { return nil, err } callHdr := &transport.CallHdr{ Host: ac.cc.authority, Method: method, ContentSubtype: c.contentSubtype, } // Set our outgoing compression according to the UseCompressor CallOption, if // set. In that case, also find the compressor from the encoding package. // Otherwise, use the compressor configured by the WithCompressor DialOption, // if set. var cp Compressor var comp encoding.Compressor if ct := c.compressorType; ct != "" { callHdr.SendCompress = ct if ct != encoding.Identity { comp = encoding.GetCompressor(ct) if comp == nil { return nil, status.Errorf(codes.Internal, "grpc: Compressor is not installed for requested grpc-encoding %q", ct) } } } else if ac.cc.dopts.cp != nil { callHdr.SendCompress = ac.cc.dopts.cp.Type() cp = ac.cc.dopts.cp } if c.creds != nil { callHdr.Creds = c.creds } // Use a special addrConnStream to avoid retry. 
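// (Illustrative sketch of the two compression knobs resolved above; the gzip
// registration and the SomeRPC call are assumptions for the example, not part
// of this function:
//
//	import _ "google.golang.org/grpc/encoding/gzip" // registers the "gzip" compressor
//
//	// Per-call: takes precedence over any DialOption compressor for this RPC.
//	err := client.SomeRPC(ctx, req, grpc.UseCompressor("gzip"))
//
//	// Per-connection: used only when no UseCompressor CallOption is given.
//	cc, err := grpc.Dial(target, grpc.WithCompressor(grpc.NewGZIPCompressor()))
//
// Whichever applies ends up in callHdr.SendCompress above.)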
as := &addrConnStream{ callHdr: callHdr, ac: ac, ctx: ctx, cancel: cancel, opts: opts, callInfo: c, desc: desc, codec: c.codec, cp: cp, comp: comp, t: t, } as.callInfo.stream = as s, err := as.t.NewStream(as.ctx, as.callHdr) if err != nil { err = toRPCErr(err) return nil, err } as.s = s as.p = &parser{r: s} ac.incrCallsStarted() if desc != unaryStreamDesc { // Listen on cc and stream contexts to cleanup when the user closes the // ClientConn or cancels the stream context. In all other cases, an error // should already be injected into the recv buffer by the transport, which // the client will eventually receive, and then we will cancel the stream's // context in clientStream.finish. go func() { select { case <-ac.ctx.Done(): as.finish(status.Error(codes.Canceled, "grpc: the SubConn is closing")) case <-ctx.Done(): as.finish(toRPCErr(ctx.Err())) } }() } return as, nil } type addrConnStream struct { s *transport.Stream ac *addrConn callHdr *transport.CallHdr cancel context.CancelFunc opts []CallOption callInfo *callInfo t transport.ClientTransport ctx context.Context sentLast bool desc *StreamDesc codec baseCodec cp Compressor comp encoding.Compressor decompSet bool dc Decompressor decomp encoding.Compressor p *parser mu sync.Mutex finished bool } func (as *addrConnStream) Header() (metadata.MD, error) { m, err := as.s.Header() if err != nil { as.finish(toRPCErr(err)) } return m, err } func (as *addrConnStream) Trailer() metadata.MD { return as.s.Trailer() } func (as *addrConnStream) CloseSend() error { if as.sentLast { // TODO: return an error and finish the stream instead, due to API misuse? return nil } as.sentLast = true as.t.Write(as.s, nil, nil, &transport.Options{Last: true}) // Always return nil; io.EOF is the only error that might make sense // instead, but there is no need to signal the client to call RecvMsg // as the only use left for the stream after CloseSend is to call // RecvMsg. This also matches historical behavior. return nil } func (as *addrConnStream) Context() context.Context { return as.s.Context() } func (as *addrConnStream) SendMsg(m interface{}) (err error) { defer func() { if err != nil && err != io.EOF { // Call finish on the client stream for errors generated by this SendMsg // call, as these indicate problems created by this client. (Transport // errors are converted to an io.EOF error in csAttempt.sendMsg; the real // error will be returned from RecvMsg eventually in that case, or be // retried.) as.finish(err) } }() if as.sentLast { return status.Errorf(codes.Internal, "SendMsg called after CloseSend") } if !as.desc.ClientStreams { as.sentLast = true } // load hdr, payload, data hdr, payld, _, err := prepareMsg(m, as.codec, as.cp, as.comp) if err != nil { return err } // TODO(dfawley): should we be checking len(data) instead? if len(payld) > *as.callInfo.maxSendMessageSize { return status.Errorf(codes.ResourceExhausted, "trying to send message larger than max (%d vs. %d)", len(payld), *as.callInfo.maxSendMessageSize) } if err := as.t.Write(as.s, hdr, payld, &transport.Options{Last: !as.desc.ClientStreams}); err != nil { if !as.desc.ClientStreams { // For non-client-streaming RPCs, we return nil instead of EOF on error // because the generated code requires it. finish is not called; RecvMsg() // will call it with the stream's status independently. 
return nil } return io.EOF } if channelz.IsOn() { as.t.IncrMsgSent() } return nil } func (as *addrConnStream) RecvMsg(m interface{}) (err error) { defer func() { if err != nil || !as.desc.ServerStreams { // err != nil or non-server-streaming indicates end of stream. as.finish(err) } }() if !as.decompSet { // Block until we receive headers containing received message encoding. if ct := as.s.RecvCompress(); ct != "" && ct != encoding.Identity { if as.dc == nil || as.dc.Type() != ct { // No configured decompressor, or it does not match the incoming // message encoding; attempt to find a registered compressor that does. as.dc = nil as.decomp = encoding.GetCompressor(ct) } } else { // No compression is used; disable our decompressor. as.dc = nil } // Only initialize this state once per stream. as.decompSet = true } err = recv(as.p, as.codec, as.s, as.dc, m, *as.callInfo.maxReceiveMessageSize, nil, as.decomp) if err != nil { if err == io.EOF { if statusErr := as.s.Status().Err(); statusErr != nil { return statusErr } return io.EOF // indicates successful end of stream. } return toRPCErr(err) } if channelz.IsOn() { as.t.IncrMsgRecv() } if as.desc.ServerStreams { // Subsequent messages should be received by subsequent RecvMsg calls. return nil } // Special handling for non-server-stream rpcs. // This recv expects EOF or errors, so we don't collect inPayload. err = recv(as.p, as.codec, as.s, as.dc, m, *as.callInfo.maxReceiveMessageSize, nil, as.decomp) if err == nil { return toRPCErr(errors.New("grpc: client streaming protocol violation: get , want ")) } if err == io.EOF { return as.s.Status().Err() // non-server streaming Recv returns nil on success } return toRPCErr(err) } func (as *addrConnStream) finish(err error) { as.mu.Lock() if as.finished { as.mu.Unlock() return } as.finished = true if err == io.EOF { // Ending a stream with EOF indicates a success. err = nil } if as.s != nil { as.t.CloseStream(as.s, err) } if err != nil { as.ac.incrCallsFailed() } else { as.ac.incrCallsSucceeded() } as.cancel() as.mu.Unlock() } // ServerStream defines the server-side behavior of a streaming RPC. // // All errors returned from ServerStream methods are compatible with the // status package. type ServerStream interface { // SetHeader sets the header metadata. It may be called multiple times. // When call multiple times, all the provided metadata will be merged. // All the metadata will be sent out when one of the following happens: // - ServerStream.SendHeader() is called; // - The first response is sent out; // - An RPC status is sent out (error or success). SetHeader(metadata.MD) error // SendHeader sends the header metadata. // The provided md and headers set by SetHeader() will be sent. // It fails if called multiple times. SendHeader(metadata.MD) error // SetTrailer sets the trailer metadata which will be sent with the RPC status. // When called more than once, all the provided metadata will be merged. SetTrailer(metadata.MD) // Context returns the context for this stream. Context() context.Context // SendMsg sends a message. On error, SendMsg aborts the stream and the // error is returned directly. // // SendMsg blocks until: // - There is sufficient flow control to schedule m with the transport, or // - The stream is done, or // - The stream breaks. // // SendMsg does not wait until the message is received by the client. An // untimely stream closure may result in lost messages. 
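	//
	// A minimal server-streaming handler sketch (pb.Request, pb.Item, the
	// generated Send wrapper and s.items are illustrative, not part of this
	// package):
	//
	//	func (s *server) List(req *pb.Request, stream pb.Service_ListServer) error {
	//		for _, it := range s.items {
	//			if err := stream.Send(&pb.Item{Name: it}); err != nil {
	//				return err // stream already aborted; just propagate.
	//			}
	//		}
	//		return nil // returning ends the RPC with an OK status.
	//	}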
// // It is safe to have a goroutine calling SendMsg and another goroutine // calling RecvMsg on the same stream at the same time, but it is not safe // to call SendMsg on the same stream in different goroutines. SendMsg(m interface{}) error // RecvMsg blocks until it receives a message into m or the stream is // done. It returns io.EOF when the client has performed a CloseSend. On // any non-EOF error, the stream is aborted and the error contains the // RPC status. // // It is safe to have a goroutine calling SendMsg and another goroutine // calling RecvMsg on the same stream at the same time, but it is not // safe to call RecvMsg on the same stream in different goroutines. RecvMsg(m interface{}) error } // serverStream implements a server side Stream. type serverStream struct { ctx context.Context t transport.ServerTransport s *transport.Stream p *parser codec baseCodec cp Compressor dc Decompressor comp encoding.Compressor decomp encoding.Compressor maxReceiveMessageSize int maxSendMessageSize int trInfo *traceInfo statsHandler stats.Handler binlog *binarylog.MethodLogger // serverHeaderBinlogged indicates whether server header has been logged. It // will happen when one of the following two happens: stream.SendHeader(), // stream.Send(). // // It's only checked in send and sendHeader, doesn't need to be // synchronized. serverHeaderBinlogged bool mu sync.Mutex // protects trInfo.tr after the service handler runs. } func (ss *serverStream) Context() context.Context { return ss.ctx } func (ss *serverStream) SetHeader(md metadata.MD) error { if md.Len() == 0 { return nil } return ss.s.SetHeader(md) } func (ss *serverStream) SendHeader(md metadata.MD) error { err := ss.t.WriteHeader(ss.s, md) if ss.binlog != nil && !ss.serverHeaderBinlogged { h, _ := ss.s.Header() ss.binlog.Log(&binarylog.ServerHeader{ Header: h, }) ss.serverHeaderBinlogged = true } return err } func (ss *serverStream) SetTrailer(md metadata.MD) { if md.Len() == 0 { return } ss.s.SetTrailer(md) } func (ss *serverStream) SendMsg(m interface{}) (err error) { defer func() { if ss.trInfo != nil { ss.mu.Lock() if ss.trInfo.tr != nil { if err == nil { ss.trInfo.tr.LazyLog(&payload{sent: true, msg: m}, true) } else { ss.trInfo.tr.LazyLog(&fmtStringer{"%v", []interface{}{err}}, true) ss.trInfo.tr.SetError() } } ss.mu.Unlock() } if err != nil && err != io.EOF { st, _ := status.FromError(toRPCErr(err)) ss.t.WriteStatus(ss.s, st) // Non-user specified status was sent out. This should be an error // case (as a server side Cancel maybe). // // This is not handled specifically now. User will return a final // status from the service handler, we will log that error instead. // This behavior is similar to an interceptor. } if channelz.IsOn() && err == nil { ss.t.IncrMsgSent() } }() // load hdr, payload, data hdr, payload, data, err := prepareMsg(m, ss.codec, ss.cp, ss.comp) if err != nil { return err } // TODO(dfawley): should we be checking len(data) instead? if len(payload) > ss.maxSendMessageSize { return status.Errorf(codes.ResourceExhausted, "trying to send message larger than max (%d vs. 
%d)", len(payload), ss.maxSendMessageSize) } if err := ss.t.Write(ss.s, hdr, payload, &transport.Options{Last: false}); err != nil { return toRPCErr(err) } if ss.binlog != nil { if !ss.serverHeaderBinlogged { h, _ := ss.s.Header() ss.binlog.Log(&binarylog.ServerHeader{ Header: h, }) ss.serverHeaderBinlogged = true } ss.binlog.Log(&binarylog.ServerMessage{ Message: data, }) } if ss.statsHandler != nil { ss.statsHandler.HandleRPC(ss.s.Context(), outPayload(false, m, data, payload, time.Now())) } return nil } func (ss *serverStream) RecvMsg(m interface{}) (err error) { defer func() { if ss.trInfo != nil { ss.mu.Lock() if ss.trInfo.tr != nil { if err == nil { ss.trInfo.tr.LazyLog(&payload{sent: false, msg: m}, true) } else if err != io.EOF { ss.trInfo.tr.LazyLog(&fmtStringer{"%v", []interface{}{err}}, true) ss.trInfo.tr.SetError() } } ss.mu.Unlock() } if err != nil && err != io.EOF { st, _ := status.FromError(toRPCErr(err)) ss.t.WriteStatus(ss.s, st) // Non-user specified status was sent out. This should be an error // case (as a server side Cancel maybe). // // This is not handled specifically now. User will return a final // status from the service handler, we will log that error instead. // This behavior is similar to an interceptor. } if channelz.IsOn() && err == nil { ss.t.IncrMsgRecv() } }() var payInfo *payloadInfo if ss.statsHandler != nil || ss.binlog != nil { payInfo = &payloadInfo{} } if err := recv(ss.p, ss.codec, ss.s, ss.dc, m, ss.maxReceiveMessageSize, payInfo, ss.decomp); err != nil { if err == io.EOF { if ss.binlog != nil { ss.binlog.Log(&binarylog.ClientHalfClose{}) } return err } if err == io.ErrUnexpectedEOF { err = status.Errorf(codes.Internal, io.ErrUnexpectedEOF.Error()) } return toRPCErr(err) } if ss.statsHandler != nil { ss.statsHandler.HandleRPC(ss.s.Context(), &stats.InPayload{ RecvTime: time.Now(), Payload: m, // TODO truncate large payload. Data: payInfo.uncompressedBytes, WireLength: payInfo.wireLength, Length: len(payInfo.uncompressedBytes), }) } if ss.binlog != nil { ss.binlog.Log(&binarylog.ClientMessage{ Message: payInfo.uncompressedBytes, }) } return nil } // MethodFromServerStream returns the method string for the input stream. // The returned string is in the format of "/service/method". func MethodFromServerStream(stream ServerStream) (string, bool) { return Method(stream.Context()) } // prepareMsg returns the hdr, payload and data // using the compressors passed or using the // passed preparedmsg func prepareMsg(m interface{}, codec baseCodec, cp Compressor, comp encoding.Compressor) (hdr, payload, data []byte, err error) { if preparedMsg, ok := m.(*PreparedMsg); ok { return preparedMsg.hdr, preparedMsg.payload, preparedMsg.encodedData, nil } // The input interface is not a prepared msg. // Marshal and Compress the data at this point data, err = encode(codec, m) if err != nil { return nil, nil, nil, err } compData, err := compress(data, cp, comp) if err != nil { return nil, nil, nil, err } hdr, payload = msgHeader(data, compData) return hdr, payload, data, nil } grpc-go-1.22.1/stress/000077500000000000000000000000001351635773100144715ustar00rootroot00000000000000grpc-go-1.22.1/stress/client/000077500000000000000000000000001351635773100157475ustar00rootroot00000000000000grpc-go-1.22.1/stress/client/main.go000066400000000000000000000247351351635773100172350ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ //go:generate protoc -I ../grpc_testing --go_out=plugins=grpc:../grpc_testing ../grpc_testing/metrics.proto // client starts an interop client to do stress test and a metrics server to report qps. package main import ( "context" "flag" "fmt" "math/rand" "net" "strconv" "strings" "sync" "time" "google.golang.org/grpc" "google.golang.org/grpc/codes" "google.golang.org/grpc/credentials" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/interop" testpb "google.golang.org/grpc/interop/grpc_testing" "google.golang.org/grpc/status" metricspb "google.golang.org/grpc/stress/grpc_testing" "google.golang.org/grpc/testdata" ) var ( serverAddresses = flag.String("server_addresses", "localhost:8080", "a list of server addresses") testCases = flag.String("test_cases", "", "a list of test cases along with the relative weights") testDurationSecs = flag.Int("test_duration_secs", -1, "test duration in seconds") numChannelsPerServer = flag.Int("num_channels_per_server", 1, "Number of channels (i.e connections) to each server") numStubsPerChannel = flag.Int("num_stubs_per_channel", 1, "Number of client stubs per each connection to server") metricsPort = flag.Int("metrics_port", 8081, "The port at which the stress client exposes QPS metrics") useTLS = flag.Bool("use_tls", false, "Connection uses TLS if true, else plain TCP") testCA = flag.Bool("use_test_ca", false, "Whether to replace platform root CAs with test CA as the CA root") tlsServerName = flag.String("server_host_override", "foo.test.google.fr", "The server name use to verify the hostname returned by TLS handshake if it is not empty. Otherwise, --server_host is used.") caFile = flag.String("ca_file", "", "The file containning the CA root cert file") ) // testCaseWithWeight contains the test case type and its weight. type testCaseWithWeight struct { name string weight int } // parseTestCases converts test case string to a list of struct testCaseWithWeight. func parseTestCases(testCaseString string) []testCaseWithWeight { testCaseStrings := strings.Split(testCaseString, ",") testCases := make([]testCaseWithWeight, len(testCaseStrings)) for i, str := range testCaseStrings { testCase := strings.Split(str, ":") if len(testCase) != 2 { panic(fmt.Sprintf("invalid test case with weight: %s", str)) } // Check if test case is supported. switch testCase[0] { case "empty_unary", "large_unary", "client_streaming", "server_streaming", "ping_pong", "empty_stream", "timeout_on_sleeping_server", "cancel_after_begin", "cancel_after_first_response", "status_code_and_message", "custom_metadata": default: panic(fmt.Sprintf("unknown test type: %s", testCase[0])) } testCases[i].name = testCase[0] w, err := strconv.Atoi(testCase[1]) if err != nil { panic(fmt.Sprintf("%v", err)) } testCases[i].weight = w } return testCases } // weightedRandomTestSelector defines a weighted random selector for test case types. type weightedRandomTestSelector struct { tests []testCaseWithWeight totalWeight int } // newWeightedRandomTestSelector constructs a weightedRandomTestSelector with the given list of testCaseWithWeight. 
func newWeightedRandomTestSelector(tests []testCaseWithWeight) *weightedRandomTestSelector { var totalWeight int for _, t := range tests { totalWeight += t.weight } rand.Seed(time.Now().UnixNano()) return &weightedRandomTestSelector{tests, totalWeight} } func (selector weightedRandomTestSelector) getNextTest() string { random := rand.Intn(selector.totalWeight) var weightSofar int for _, test := range selector.tests { weightSofar += test.weight if random < weightSofar { return test.name } } panic("no test case selected by weightedRandomTestSelector") } // gauge stores the qps of one interop client (one stub). type gauge struct { mutex sync.RWMutex val int64 } func (g *gauge) set(v int64) { g.mutex.Lock() defer g.mutex.Unlock() g.val = v } func (g *gauge) get() int64 { g.mutex.RLock() defer g.mutex.RUnlock() return g.val } // server implements metrics server functions. type server struct { mutex sync.RWMutex // gauges is a map from /stress_test/server_/channel_/stub_/qps to its qps gauge. gauges map[string]*gauge } // newMetricsServer returns a new metrics server. func newMetricsServer() *server { return &server{gauges: make(map[string]*gauge)} } // GetAllGauges returns all gauges. func (s *server) GetAllGauges(in *metricspb.EmptyMessage, stream metricspb.MetricsService_GetAllGaugesServer) error { s.mutex.RLock() defer s.mutex.RUnlock() for name, gauge := range s.gauges { if err := stream.Send(&metricspb.GaugeResponse{Name: name, Value: &metricspb.GaugeResponse_LongValue{LongValue: gauge.get()}}); err != nil { return err } } return nil } // GetGauge returns the gauge for the given name. func (s *server) GetGauge(ctx context.Context, in *metricspb.GaugeRequest) (*metricspb.GaugeResponse, error) { s.mutex.RLock() defer s.mutex.RUnlock() if g, ok := s.gauges[in.Name]; ok { return &metricspb.GaugeResponse{Name: in.Name, Value: &metricspb.GaugeResponse_LongValue{LongValue: g.get()}}, nil } return nil, status.Errorf(codes.InvalidArgument, "gauge with name %s not found", in.Name) } // createGauge creates a gauge using the given name in metrics server. func (s *server) createGauge(name string) *gauge { s.mutex.Lock() defer s.mutex.Unlock() if _, ok := s.gauges[name]; ok { // gauge already exists. panic(fmt.Sprintf("gauge %s already exists", name)) } var g gauge s.gauges[name] = &g return &g } func startServer(server *server, port int) { lis, err := net.Listen("tcp", ":"+strconv.Itoa(port)) if err != nil { grpclog.Fatalf("failed to listen: %v", err) } s := grpc.NewServer() metricspb.RegisterMetricsServiceServer(s, server) s.Serve(lis) } // performRPCs uses weightedRandomTestSelector to select test case and runs the tests. 
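//
// For example, with -test_cases="large_unary:3,ping_pong:1" the selector's
// totalWeight is 4, so rand.Intn(4) values 0, 1 and 2 pick large_unary and 3
// picks ping_pong, i.e. roughly a 3:1 call ratio over time.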
func performRPCs(gauge *gauge, conn *grpc.ClientConn, selector *weightedRandomTestSelector, stop <-chan bool) { client := testpb.NewTestServiceClient(conn) var numCalls int64 startTime := time.Now() for { test := selector.getNextTest() switch test { case "empty_unary": interop.DoEmptyUnaryCall(client, grpc.WaitForReady(true)) case "large_unary": interop.DoLargeUnaryCall(client, grpc.WaitForReady(true)) case "client_streaming": interop.DoClientStreaming(client, grpc.WaitForReady(true)) case "server_streaming": interop.DoServerStreaming(client, grpc.WaitForReady(true)) case "ping_pong": interop.DoPingPong(client, grpc.WaitForReady(true)) case "empty_stream": interop.DoEmptyStream(client, grpc.WaitForReady(true)) case "timeout_on_sleeping_server": interop.DoTimeoutOnSleepingServer(client, grpc.WaitForReady(true)) case "cancel_after_begin": interop.DoCancelAfterBegin(client, grpc.WaitForReady(true)) case "cancel_after_first_response": interop.DoCancelAfterFirstResponse(client, grpc.WaitForReady(true)) case "status_code_and_message": interop.DoStatusCodeAndMessage(client, grpc.WaitForReady(true)) case "custom_metadata": interop.DoCustomMetadata(client, grpc.WaitForReady(true)) } numCalls++ gauge.set(int64(float64(numCalls) / time.Since(startTime).Seconds())) select { case <-stop: return default: } } } func logParameterInfo(addresses []string, tests []testCaseWithWeight) { grpclog.Infof("server_addresses: %s", *serverAddresses) grpclog.Infof("test_cases: %s", *testCases) grpclog.Infof("test_duration_secs: %d", *testDurationSecs) grpclog.Infof("num_channels_per_server: %d", *numChannelsPerServer) grpclog.Infof("num_stubs_per_channel: %d", *numStubsPerChannel) grpclog.Infof("metrics_port: %d", *metricsPort) grpclog.Infof("use_tls: %t", *useTLS) grpclog.Infof("use_test_ca: %t", *testCA) grpclog.Infof("server_host_override: %s", *tlsServerName) grpclog.Infoln("addresses:") for i, addr := range addresses { grpclog.Infof("%d. %s\n", i+1, addr) } grpclog.Infoln("tests:") for i, test := range tests { grpclog.Infof("%d. %v\n", i+1, test) } } func newConn(address string, useTLS, testCA bool, tlsServerName string) (*grpc.ClientConn, error) { var opts []grpc.DialOption if useTLS { var sn string if tlsServerName != "" { sn = tlsServerName } var creds credentials.TransportCredentials if testCA { var err error if *caFile == "" { *caFile = testdata.Path("ca.pem") } creds, err = credentials.NewClientTLSFromFile(*caFile, sn) if err != nil { grpclog.Fatalf("Failed to create TLS credentials %v", err) } } else { creds = credentials.NewClientTLSFromCert(nil, sn) } opts = append(opts, grpc.WithTransportCredentials(creds)) } else { opts = append(opts, grpc.WithInsecure()) } return grpc.Dial(address, opts...) 
} func main() { flag.Parse() addresses := strings.Split(*serverAddresses, ",") tests := parseTestCases(*testCases) logParameterInfo(addresses, tests) testSelector := newWeightedRandomTestSelector(tests) metricsServer := newMetricsServer() var wg sync.WaitGroup wg.Add(len(addresses) * *numChannelsPerServer * *numStubsPerChannel) stop := make(chan bool) for serverIndex, address := range addresses { for connIndex := 0; connIndex < *numChannelsPerServer; connIndex++ { conn, err := newConn(address, *useTLS, *testCA, *tlsServerName) if err != nil { grpclog.Fatalf("Fail to dial: %v", err) } defer conn.Close() for clientIndex := 0; clientIndex < *numStubsPerChannel; clientIndex++ { name := fmt.Sprintf("/stress_test/server_%d/channel_%d/stub_%d/qps", serverIndex+1, connIndex+1, clientIndex+1) go func() { defer wg.Done() g := metricsServer.createGauge(name) performRPCs(g, conn, testSelector, stop) }() } } } go startServer(metricsServer, *metricsPort) if *testDurationSecs > 0 { time.Sleep(time.Duration(*testDurationSecs) * time.Second) close(stop) } wg.Wait() grpclog.Infof(" ===== ALL DONE ===== ") } grpc-go-1.22.1/stress/grpc_testing/000077500000000000000000000000001351635773100171615ustar00rootroot00000000000000grpc-go-1.22.1/stress/grpc_testing/metrics.pb.go000066400000000000000000000345341351635773100215670ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: metrics.proto package grpc_testing import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import ( context "golang.org/x/net/context" grpc "google.golang.org/grpc" ) // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. 
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package // Response message containing the gauge name and value type GaugeResponse struct { Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` // Types that are valid to be assigned to Value: // *GaugeResponse_LongValue // *GaugeResponse_DoubleValue // *GaugeResponse_StringValue Value isGaugeResponse_Value `protobuf_oneof:"value"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *GaugeResponse) Reset() { *m = GaugeResponse{} } func (m *GaugeResponse) String() string { return proto.CompactTextString(m) } func (*GaugeResponse) ProtoMessage() {} func (*GaugeResponse) Descriptor() ([]byte, []int) { return fileDescriptor_metrics_c9a45afc44ac5637, []int{0} } func (m *GaugeResponse) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_GaugeResponse.Unmarshal(m, b) } func (m *GaugeResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_GaugeResponse.Marshal(b, m, deterministic) } func (dst *GaugeResponse) XXX_Merge(src proto.Message) { xxx_messageInfo_GaugeResponse.Merge(dst, src) } func (m *GaugeResponse) XXX_Size() int { return xxx_messageInfo_GaugeResponse.Size(m) } func (m *GaugeResponse) XXX_DiscardUnknown() { xxx_messageInfo_GaugeResponse.DiscardUnknown(m) } var xxx_messageInfo_GaugeResponse proto.InternalMessageInfo func (m *GaugeResponse) GetName() string { if m != nil { return m.Name } return "" } type isGaugeResponse_Value interface { isGaugeResponse_Value() } type GaugeResponse_LongValue struct { LongValue int64 `protobuf:"varint,2,opt,name=long_value,json=longValue,proto3,oneof"` } type GaugeResponse_DoubleValue struct { DoubleValue float64 `protobuf:"fixed64,3,opt,name=double_value,json=doubleValue,proto3,oneof"` } type GaugeResponse_StringValue struct { StringValue string `protobuf:"bytes,4,opt,name=string_value,json=stringValue,proto3,oneof"` } func (*GaugeResponse_LongValue) isGaugeResponse_Value() {} func (*GaugeResponse_DoubleValue) isGaugeResponse_Value() {} func (*GaugeResponse_StringValue) isGaugeResponse_Value() {} func (m *GaugeResponse) GetValue() isGaugeResponse_Value { if m != nil { return m.Value } return nil } func (m *GaugeResponse) GetLongValue() int64 { if x, ok := m.GetValue().(*GaugeResponse_LongValue); ok { return x.LongValue } return 0 } func (m *GaugeResponse) GetDoubleValue() float64 { if x, ok := m.GetValue().(*GaugeResponse_DoubleValue); ok { return x.DoubleValue } return 0 } func (m *GaugeResponse) GetStringValue() string { if x, ok := m.GetValue().(*GaugeResponse_StringValue); ok { return x.StringValue } return "" } // XXX_OneofFuncs is for the internal use of the proto package. 
func (*GaugeResponse) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _GaugeResponse_OneofMarshaler, _GaugeResponse_OneofUnmarshaler, _GaugeResponse_OneofSizer, []interface{}{ (*GaugeResponse_LongValue)(nil), (*GaugeResponse_DoubleValue)(nil), (*GaugeResponse_StringValue)(nil), } } func _GaugeResponse_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*GaugeResponse) // value switch x := m.Value.(type) { case *GaugeResponse_LongValue: b.EncodeVarint(2<<3 | proto.WireVarint) b.EncodeVarint(uint64(x.LongValue)) case *GaugeResponse_DoubleValue: b.EncodeVarint(3<<3 | proto.WireFixed64) b.EncodeFixed64(math.Float64bits(x.DoubleValue)) case *GaugeResponse_StringValue: b.EncodeVarint(4<<3 | proto.WireBytes) b.EncodeStringBytes(x.StringValue) case nil: default: return fmt.Errorf("GaugeResponse.Value has unexpected type %T", x) } return nil } func _GaugeResponse_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*GaugeResponse) switch tag { case 2: // value.long_value if wire != proto.WireVarint { return true, proto.ErrInternalBadWireType } x, err := b.DecodeVarint() m.Value = &GaugeResponse_LongValue{int64(x)} return true, err case 3: // value.double_value if wire != proto.WireFixed64 { return true, proto.ErrInternalBadWireType } x, err := b.DecodeFixed64() m.Value = &GaugeResponse_DoubleValue{math.Float64frombits(x)} return true, err case 4: // value.string_value if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } x, err := b.DecodeStringBytes() m.Value = &GaugeResponse_StringValue{x} return true, err default: return false, nil } } func _GaugeResponse_OneofSizer(msg proto.Message) (n int) { m := msg.(*GaugeResponse) // value switch x := m.Value.(type) { case *GaugeResponse_LongValue: n += 1 // tag and wire n += proto.SizeVarint(uint64(x.LongValue)) case *GaugeResponse_DoubleValue: n += 1 // tag and wire n += 8 case *GaugeResponse_StringValue: n += 1 // tag and wire n += proto.SizeVarint(uint64(len(x.StringValue))) n += len(x.StringValue) case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } // Request message containing the gauge name type GaugeRequest struct { Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *GaugeRequest) Reset() { *m = GaugeRequest{} } func (m *GaugeRequest) String() string { return proto.CompactTextString(m) } func (*GaugeRequest) ProtoMessage() {} func (*GaugeRequest) Descriptor() ([]byte, []int) { return fileDescriptor_metrics_c9a45afc44ac5637, []int{1} } func (m *GaugeRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_GaugeRequest.Unmarshal(m, b) } func (m *GaugeRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_GaugeRequest.Marshal(b, m, deterministic) } func (dst *GaugeRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_GaugeRequest.Merge(dst, src) } func (m *GaugeRequest) XXX_Size() int { return xxx_messageInfo_GaugeRequest.Size(m) } func (m *GaugeRequest) XXX_DiscardUnknown() { xxx_messageInfo_GaugeRequest.DiscardUnknown(m) } var xxx_messageInfo_GaugeRequest proto.InternalMessageInfo func (m *GaugeRequest) GetName() string { if m != nil { return m.Name } return "" } type EmptyMessage struct { 
XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *EmptyMessage) Reset() { *m = EmptyMessage{} } func (m *EmptyMessage) String() string { return proto.CompactTextString(m) } func (*EmptyMessage) ProtoMessage() {} func (*EmptyMessage) Descriptor() ([]byte, []int) { return fileDescriptor_metrics_c9a45afc44ac5637, []int{2} } func (m *EmptyMessage) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_EmptyMessage.Unmarshal(m, b) } func (m *EmptyMessage) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_EmptyMessage.Marshal(b, m, deterministic) } func (dst *EmptyMessage) XXX_Merge(src proto.Message) { xxx_messageInfo_EmptyMessage.Merge(dst, src) } func (m *EmptyMessage) XXX_Size() int { return xxx_messageInfo_EmptyMessage.Size(m) } func (m *EmptyMessage) XXX_DiscardUnknown() { xxx_messageInfo_EmptyMessage.DiscardUnknown(m) } var xxx_messageInfo_EmptyMessage proto.InternalMessageInfo func init() { proto.RegisterType((*GaugeResponse)(nil), "grpc.testing.GaugeResponse") proto.RegisterType((*GaugeRequest)(nil), "grpc.testing.GaugeRequest") proto.RegisterType((*EmptyMessage)(nil), "grpc.testing.EmptyMessage") } // Reference imports to suppress errors if they are not otherwise used. var _ context.Context var _ grpc.ClientConn // This is a compile-time assertion to ensure that this generated file // is compatible with the grpc package it is being compiled against. const _ = grpc.SupportPackageIsVersion4 // MetricsServiceClient is the client API for MetricsService service. // // For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream. type MetricsServiceClient interface { // Returns the values of all the gauges that are currently being maintained by // the service GetAllGauges(ctx context.Context, in *EmptyMessage, opts ...grpc.CallOption) (MetricsService_GetAllGaugesClient, error) // Returns the value of one gauge GetGauge(ctx context.Context, in *GaugeRequest, opts ...grpc.CallOption) (*GaugeResponse, error) } type metricsServiceClient struct { cc *grpc.ClientConn } func NewMetricsServiceClient(cc *grpc.ClientConn) MetricsServiceClient { return &metricsServiceClient{cc} } func (c *metricsServiceClient) GetAllGauges(ctx context.Context, in *EmptyMessage, opts ...grpc.CallOption) (MetricsService_GetAllGaugesClient, error) { stream, err := c.cc.NewStream(ctx, &_MetricsService_serviceDesc.Streams[0], "/grpc.testing.MetricsService/GetAllGauges", opts...) if err != nil { return nil, err } x := &metricsServiceGetAllGaugesClient{stream} if err := x.ClientStream.SendMsg(in); err != nil { return nil, err } if err := x.ClientStream.CloseSend(); err != nil { return nil, err } return x, nil } type MetricsService_GetAllGaugesClient interface { Recv() (*GaugeResponse, error) grpc.ClientStream } type metricsServiceGetAllGaugesClient struct { grpc.ClientStream } func (x *metricsServiceGetAllGaugesClient) Recv() (*GaugeResponse, error) { m := new(GaugeResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func (c *metricsServiceClient) GetGauge(ctx context.Context, in *GaugeRequest, opts ...grpc.CallOption) (*GaugeResponse, error) { out := new(GaugeResponse) err := c.cc.Invoke(ctx, "/grpc.testing.MetricsService/GetGauge", in, out, opts...) if err != nil { return nil, err } return out, nil } // MetricsServiceServer is the server API for MetricsService service. 
type MetricsServiceServer interface { // Returns the values of all the gauges that are currently being maintained by // the service GetAllGauges(*EmptyMessage, MetricsService_GetAllGaugesServer) error // Returns the value of one gauge GetGauge(context.Context, *GaugeRequest) (*GaugeResponse, error) } func RegisterMetricsServiceServer(s *grpc.Server, srv MetricsServiceServer) { s.RegisterService(&_MetricsService_serviceDesc, srv) } func _MetricsService_GetAllGauges_Handler(srv interface{}, stream grpc.ServerStream) error { m := new(EmptyMessage) if err := stream.RecvMsg(m); err != nil { return err } return srv.(MetricsServiceServer).GetAllGauges(m, &metricsServiceGetAllGaugesServer{stream}) } type MetricsService_GetAllGaugesServer interface { Send(*GaugeResponse) error grpc.ServerStream } type metricsServiceGetAllGaugesServer struct { grpc.ServerStream } func (x *metricsServiceGetAllGaugesServer) Send(m *GaugeResponse) error { return x.ServerStream.SendMsg(m) } func _MetricsService_GetGauge_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(GaugeRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(MetricsServiceServer).GetGauge(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/grpc.testing.MetricsService/GetGauge", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(MetricsServiceServer).GetGauge(ctx, req.(*GaugeRequest)) } return interceptor(ctx, in, info, handler) } var _MetricsService_serviceDesc = grpc.ServiceDesc{ ServiceName: "grpc.testing.MetricsService", HandlerType: (*MetricsServiceServer)(nil), Methods: []grpc.MethodDesc{ { MethodName: "GetGauge", Handler: _MetricsService_GetGauge_Handler, }, }, Streams: []grpc.StreamDesc{ { StreamName: "GetAllGauges", Handler: _MetricsService_GetAllGauges_Handler, ServerStreams: true, }, }, Metadata: "metrics.proto", } func init() { proto.RegisterFile("metrics.proto", fileDescriptor_metrics_c9a45afc44ac5637) } var fileDescriptor_metrics_c9a45afc44ac5637 = []byte{ // 256 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x7c, 0x91, 0x3f, 0x4f, 0xc3, 0x30, 0x10, 0xc5, 0x6b, 0x5a, 0xfe, 0xf4, 0x70, 0x3b, 0x78, 0xaa, 0xca, 0x40, 0x14, 0x96, 0x4c, 0x11, 0x82, 0x4f, 0x00, 0x08, 0xa5, 0x0c, 0x5d, 0x82, 0xc4, 0x8a, 0xd2, 0x70, 0xb2, 0x22, 0x39, 0x71, 0xf0, 0x5d, 0x2a, 0xf1, 0x49, 0x58, 0xf9, 0xa8, 0xc8, 0x4e, 0x55, 0xa5, 0x08, 0x75, 0xb3, 0x7e, 0xf7, 0xfc, 0xfc, 0x9e, 0x0f, 0x66, 0x35, 0xb2, 0xab, 0x4a, 0x4a, 0x5b, 0x67, 0xd9, 0x2a, 0xa9, 0x5d, 0x5b, 0xa6, 0x8c, 0xc4, 0x55, 0xa3, 0xe3, 0x6f, 0x01, 0xb3, 0xac, 0xe8, 0x34, 0xe6, 0x48, 0xad, 0x6d, 0x08, 0x95, 0x82, 0x49, 0x53, 0xd4, 0xb8, 0x10, 0x91, 0x48, 0xa6, 0x79, 0x38, 0xab, 0x6b, 0x00, 0x63, 0x1b, 0xfd, 0xbe, 0x2d, 0x4c, 0x87, 0x8b, 0x93, 0x48, 0x24, 0xe3, 0xd5, 0x28, 0x9f, 0x7a, 0xf6, 0xe6, 0x91, 0xba, 0x01, 0xf9, 0x61, 0xbb, 0x8d, 0xc1, 0x9d, 0x64, 0x1c, 0x89, 0x44, 0xac, 0x46, 0xf9, 0x65, 0x4f, 0xf7, 0x22, 0x62, 0x57, 0xed, 0x7d, 0x26, 0xfe, 0x05, 0x2f, 0xea, 0x69, 0x10, 0x3d, 0x9e, 0xc3, 0x69, 0x98, 0xc6, 0x31, 0xc8, 0x5d, 0xb0, 0xcf, 0x0e, 0x89, 0xff, 0xcb, 0x15, 0xcf, 0x41, 0x3e, 0xd7, 0x2d, 0x7f, 0xad, 0x91, 0xa8, 0xd0, 0x78, 0xf7, 0x23, 0x60, 0xbe, 0xee, 0xdb, 0xbe, 0xa2, 0xdb, 0x56, 0x25, 0xaa, 0x17, 0x90, 0x19, 0xf2, 0x83, 0x31, 0xc1, 0x8c, 0xd4, 0x32, 0x1d, 0xf6, 0x4f, 0x87, 0xd7, 0x97, 0x57, 0x87, 0xb3, 0x83, 0x7f, 0xb9, 0x15, 0xea, 0x09, 0x2e, 
0x32, 0xe4, 0x40, 0xff, 0xda, 0x0c, 0x93, 0x1e, 0xb5, 0xd9, 0x9c, 0x85, 0x2d, 0xdc, 0xff, 0x06, 0x00, 0x00, 0xff, 0xff, 0x5e, 0x7d, 0xb2, 0xc9, 0x96, 0x01, 0x00, 0x00, } grpc-go-1.22.1/stress/grpc_testing/metrics.proto000066400000000000000000000027441351635773100217230ustar00rootroot00000000000000// Copyright 2015-2016 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. // Contains the definitions for a metrics service and the type of metrics // exposed by the service. // // Currently, 'Gauge' (i.e a metric that represents the measured value of // something at an instant of time) is the only metric type supported by the // service. syntax = "proto3"; package grpc.testing; // Response message containing the gauge name and value message GaugeResponse { string name = 1; oneof value { int64 long_value = 2; double double_value = 3; string string_value = 4; } } // Request message containing the gauge name message GaugeRequest { string name = 1; } message EmptyMessage {} service MetricsService { // Returns the values of all the gauges that are currently being maintained by // the service rpc GetAllGauges(EmptyMessage) returns (stream GaugeResponse); // Returns the value of one gauge rpc GetGauge(GaugeRequest) returns (GaugeResponse); } grpc-go-1.22.1/stress/metrics_client/000077500000000000000000000000001351635773100174755ustar00rootroot00000000000000grpc-go-1.22.1/stress/metrics_client/main.go000066400000000000000000000042511351635773100207520ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package main import ( "context" "flag" "fmt" "io" "google.golang.org/grpc" "google.golang.org/grpc/grpclog" metricspb "google.golang.org/grpc/stress/grpc_testing" ) var ( metricsServerAddress = flag.String("metrics_server_address", "", "The metrics server addresses in the fomrat :") totalOnly = flag.Bool("total_only", false, "If true, this prints only the total value of all gauges") ) func printMetrics(client metricspb.MetricsServiceClient, totalOnly bool) { stream, err := client.GetAllGauges(context.Background(), &metricspb.EmptyMessage{}) if err != nil { grpclog.Fatalf("failed to call GetAllGuages: %v", err) } var ( overallQPS int64 rpcStatus error ) for { gaugeResponse, err := stream.Recv() if err != nil { rpcStatus = err break } if _, ok := gaugeResponse.GetValue().(*metricspb.GaugeResponse_LongValue); !ok { panic(fmt.Sprintf("gauge %s is not a long value", gaugeResponse.Name)) } v := gaugeResponse.GetLongValue() if !totalOnly { grpclog.Infof("%s: %d", gaugeResponse.Name, v) } overallQPS += v } if rpcStatus != io.EOF { grpclog.Fatalf("failed to finish server streaming: %v", rpcStatus) } grpclog.Infof("overall qps: %d", overallQPS) } func main() { flag.Parse() if *metricsServerAddress == "" { grpclog.Fatalf("Metrics server address is empty.") } conn, err := grpc.Dial(*metricsServerAddress, grpc.WithInsecure()) if err != nil { grpclog.Fatalf("cannot connect to metrics server: %v", err) } defer conn.Close() c := metricspb.NewMetricsServiceClient(conn) printMetrics(c, *totalOnly) } grpc-go-1.22.1/tap/000077500000000000000000000000001351635773100137325ustar00rootroot00000000000000grpc-go-1.22.1/tap/tap.go000066400000000000000000000040401351635773100150430ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package tap defines the function handles which are executed on the transport // layer of gRPC-Go and related information. Everything here is EXPERIMENTAL. package tap import ( "context" ) // Info defines the relevant information needed by the handles. type Info struct { // FullMethodName is the string of grpc method (in the format of // /package.service/method). FullMethodName string // TODO: More to be added. } // ServerInHandle defines the function which runs before a new stream is created // on the server side. If it returns a non-nil error, the stream will not be // created and a RST_STREAM will be sent back to the client with REFUSED_STREAM. // The client will receive an RPC error "code = Unavailable, desc = stream // terminated by RST_STREAM with error code: REFUSED_STREAM". // // It's intended to be used in situations where you don't want to waste the // resources to accept the new stream (e.g. rate-limiting). And the content of // the error will be ignored and won't be sent back to the client. For other // general usages, please use interceptors. // // Note that it is executed in the per-connection I/O goroutine(s) instead of // per-RPC goroutine. 
Therefore, users should NOT have any // blocking/time-consuming work in this handle. Otherwise all the RPCs would // slow down. Also, for the same reason, this handle won't be called // concurrently by gRPC. type ServerInHandle func(ctx context.Context, info *Info) (context.Context, error) grpc-go-1.22.1/test/000077500000000000000000000000001351635773100141255ustar00rootroot00000000000000grpc-go-1.22.1/test/balancer_test.go000066400000000000000000000167561351635773100173010ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package test import ( "context" "reflect" "testing" "time" "google.golang.org/grpc" "google.golang.org/grpc/balancer" "google.golang.org/grpc/connectivity" "google.golang.org/grpc/credentials" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/internal/balancerload" "google.golang.org/grpc/internal/testutils" "google.golang.org/grpc/metadata" "google.golang.org/grpc/resolver" testpb "google.golang.org/grpc/test/grpc_testing" "google.golang.org/grpc/testdata" ) const testBalancerName = "testbalancer" // testBalancer creates one subconn with the first address from resolved // addresses. // // It's used to test options for NewSubConn are applies correctly. type testBalancer struct { cc balancer.ClientConn sc balancer.SubConn newSubConnOptions balancer.NewSubConnOptions pickOptions []balancer.PickOptions doneInfo []balancer.DoneInfo } func (b *testBalancer) Build(cc balancer.ClientConn, opt balancer.BuildOptions) balancer.Balancer { b.cc = cc return b } func (*testBalancer) Name() string { return testBalancerName } func (b *testBalancer) HandleResolvedAddrs(addrs []resolver.Address, err error) { // Only create a subconn at the first time. 
if err == nil && b.sc == nil { b.sc, err = b.cc.NewSubConn(addrs, b.newSubConnOptions) if err != nil { grpclog.Errorf("testBalancer: failed to NewSubConn: %v", err) return } b.cc.UpdateBalancerState(connectivity.Connecting, &picker{sc: b.sc, bal: b}) b.sc.Connect() } } func (b *testBalancer) HandleSubConnStateChange(sc balancer.SubConn, s connectivity.State) { grpclog.Infof("testBalancer: HandleSubConnStateChange: %p, %v", sc, s) if b.sc != sc { grpclog.Infof("testBalancer: ignored state change because sc is not recognized") return } if s == connectivity.Shutdown { b.sc = nil return } switch s { case connectivity.Ready, connectivity.Idle: b.cc.UpdateBalancerState(s, &picker{sc: sc, bal: b}) case connectivity.Connecting: b.cc.UpdateBalancerState(s, &picker{err: balancer.ErrNoSubConnAvailable, bal: b}) case connectivity.TransientFailure: b.cc.UpdateBalancerState(s, &picker{err: balancer.ErrTransientFailure, bal: b}) } } func (b *testBalancer) Close() { } type picker struct { err error sc balancer.SubConn bal *testBalancer } func (p *picker) Pick(ctx context.Context, opts balancer.PickOptions) (balancer.SubConn, func(balancer.DoneInfo), error) { if p.err != nil { return nil, nil, p.err } p.bal.pickOptions = append(p.bal.pickOptions, opts) return p.sc, func(d balancer.DoneInfo) { p.bal.doneInfo = append(p.bal.doneInfo, d) }, nil } func (s) TestCredsBundleFromBalancer(t *testing.T) { balancer.Register(&testBalancer{ newSubConnOptions: balancer.NewSubConnOptions{ CredsBundle: &testCredsBundle{}, }, }) te := newTest(t, env{name: "creds-bundle", network: "tcp", balancer: ""}) te.tapHandle = authHandle te.customDialOptions = []grpc.DialOption{ grpc.WithBalancerName(testBalancerName), } creds, err := credentials.NewServerTLSFromFile(testdata.Path("server1.pem"), testdata.Path("server1.key")) if err != nil { t.Fatalf("Failed to generate credentials %v", err) } te.customServerOptions = []grpc.ServerOption{ grpc.Creds(creds), } te.startServer(&testServer{}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); err != nil { t.Fatalf("Test failed. 
Reason: %v", err) } } func (s) TestDoneInfo(t *testing.T) { for _, e := range listTestEnv() { testDoneInfo(t, e) } } func testDoneInfo(t *testing.T, e env) { te := newTest(t, e) b := &testBalancer{} balancer.Register(b) te.customDialOptions = []grpc.DialOption{ grpc.WithBalancerName(testBalancerName), } te.userAgent = failAppUA te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second) defer cancel() wantErr := detailedError if _, err := tc.EmptyCall(ctx, &testpb.Empty{}); !testutils.StatusErrEqual(err, wantErr) { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, %v", err, wantErr) } if _, err := tc.UnaryCall(ctx, &testpb.SimpleRequest{}); err != nil { t.Fatalf("TestService.UnaryCall(%v, _, _, _) = _, %v; want _, ", ctx, err) } if len(b.doneInfo) < 1 || !testutils.StatusErrEqual(b.doneInfo[0].Err, wantErr) { t.Fatalf("b.doneInfo = %v; want b.doneInfo[0].Err = %v", b.doneInfo, wantErr) } if len(b.doneInfo) < 2 || !reflect.DeepEqual(b.doneInfo[1].Trailer, testTrailerMetadata) { t.Fatalf("b.doneInfo = %v; want b.doneInfo[1].Trailer = %v", b.doneInfo, testTrailerMetadata) } if len(b.pickOptions) != len(b.doneInfo) { t.Fatalf("Got %d picks, but %d doneInfo, want equal amount", len(b.pickOptions), len(b.doneInfo)) } // To test done() is always called, even if it's returned with a non-Ready // SubConn. // // Stop server and at the same time send RPCs. There are chances that picker // is not updated in time, causing a non-Ready SubConn to be returned. finished := make(chan struct{}) go func() { for i := 0; i < 20; i++ { tc.UnaryCall(ctx, &testpb.SimpleRequest{}) } close(finished) }() te.srv.Stop() <-finished if len(b.pickOptions) != len(b.doneInfo) { t.Fatalf("Got %d picks, %d doneInfo, want equal amount", len(b.pickOptions), len(b.doneInfo)) } } const loadMDKey = "X-Endpoint-Load-Metrics-Bin" type testLoadParser struct{} func (*testLoadParser) Parse(md metadata.MD) interface{} { vs := md.Get(loadMDKey) if len(vs) == 0 { return nil } return vs[0] } func init() { balancerload.SetParser(&testLoadParser{}) } func (s) TestDoneLoads(t *testing.T) { for _, e := range listTestEnv() { testDoneLoads(t, e) } } func testDoneLoads(t *testing.T, e env) { b := &testBalancer{} balancer.Register(b) const testLoad = "test-load-,-should-be-orca" ss := &stubServer{ emptyCall: func(ctx context.Context, in *testpb.Empty) (*testpb.Empty, error) { grpc.SetTrailer(ctx, metadata.Pairs(loadMDKey, testLoad)) return &testpb.Empty{}, nil }, } if err := ss.Start(nil, grpc.WithBalancerName(testBalancerName)); err != nil { t.Fatalf("error starting testing server: %v", err) } defer ss.Stop() tc := testpb.NewTestServiceClient(ss.cc) ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second) defer cancel() if _, err := tc.EmptyCall(ctx, &testpb.Empty{}); err != nil { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, %v", err, nil) } poWant := []balancer.PickOptions{ {FullMethodName: "/grpc.testing.TestService/EmptyCall"}, } if !reflect.DeepEqual(b.pickOptions, poWant) { t.Fatalf("b.pickOptions = %v; want %v", b.pickOptions, poWant) } if len(b.doneInfo) < 1 { t.Fatalf("b.doneInfo = %v, want length 1", b.doneInfo) } gotLoad, _ := b.doneInfo[0].ServerLoad.(string) if gotLoad != testLoad { t.Fatalf("b.doneInfo[0].ServerLoad = %v; want = %v", b.doneInfo[0].ServerLoad, testLoad) } } 
grpc-go-1.22.1/test/bufconn/000077500000000000000000000000001351635773100155575ustar00rootroot00000000000000grpc-go-1.22.1/test/bufconn/bufconn.go000066400000000000000000000122121351635773100175360ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package bufconn provides a net.Conn implemented by a buffer and related // dialing and listening functionality. package bufconn import ( "fmt" "io" "net" "sync" "time" ) // Listener implements a net.Listener that creates local, buffered net.Conns // via its Accept and Dial method. type Listener struct { mu sync.Mutex sz int ch chan net.Conn done chan struct{} } var errClosed = fmt.Errorf("closed") // Listen returns a Listener that can only be contacted by its own Dialers and // creates buffered connections between the two. func Listen(sz int) *Listener { return &Listener{sz: sz, ch: make(chan net.Conn), done: make(chan struct{})} } // Accept blocks until Dial is called, then returns a net.Conn for the server // half of the connection. func (l *Listener) Accept() (net.Conn, error) { select { case <-l.done: return nil, errClosed case c := <-l.ch: return c, nil } } // Close stops the listener. func (l *Listener) Close() error { l.mu.Lock() defer l.mu.Unlock() select { case <-l.done: // Already closed. break default: close(l.done) } return nil } // Addr reports the address of the listener. func (l *Listener) Addr() net.Addr { return addr{} } // Dial creates an in-memory full-duplex network connection, unblocks Accept by // providing it the server half of the connection, and returns the client half // of the connection. func (l *Listener) Dial() (net.Conn, error) { p1, p2 := newPipe(l.sz), newPipe(l.sz) select { case <-l.done: return nil, errClosed case l.ch <- &conn{p1, p2}: return &conn{p2, p1}, nil } } type pipe struct { mu sync.Mutex // buf contains the data in the pipe. It is a ring buffer of fixed capacity, // with r and w pointing to the offset to read and write, respsectively. // // Data is read between [r, w) and written to [w, r), wrapping around the end // of the slice if necessary. // // The buffer is empty if r == len(buf), otherwise if r == w, it is full. // // w and r are always in the range [0, cap(buf)) and [0, len(buf)]. buf []byte w, r int wwait sync.Cond rwait sync.Cond closed bool writeClosed bool } func newPipe(sz int) *pipe { p := &pipe{buf: make([]byte, 0, sz)} p.wwait.L = &p.mu p.rwait.L = &p.mu return p } func (p *pipe) empty() bool { return p.r == len(p.buf) } func (p *pipe) full() bool { return p.r < len(p.buf) && p.r == p.w } func (p *pipe) Read(b []byte) (n int, err error) { p.mu.Lock() defer p.mu.Unlock() // Block until p has data. 
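	// (Worked example of the invariants above, assuming cap(buf) == 4: after a
	// Write of 3 bytes, len(buf) == 3, w == 3 and r == 0; a Read of those 3
	// bytes advances r to 3 == len(buf), so the pipe is empty again; a further
	// 1-byte Write grows buf to its full length and wraps w back to 0 once it
	// reaches cap(buf).)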
for { if p.closed { return 0, io.ErrClosedPipe } if !p.empty() { break } if p.writeClosed { return 0, io.EOF } p.rwait.Wait() } wasFull := p.full() n = copy(b, p.buf[p.r:len(p.buf)]) p.r += n if p.r == cap(p.buf) { p.r = 0 p.buf = p.buf[:p.w] } // Signal a blocked writer, if any if wasFull { p.wwait.Signal() } return n, nil } func (p *pipe) Write(b []byte) (n int, err error) { p.mu.Lock() defer p.mu.Unlock() if p.closed { return 0, io.ErrClosedPipe } for len(b) > 0 { // Block until p is not full. for { if p.closed || p.writeClosed { return 0, io.ErrClosedPipe } if !p.full() { break } p.wwait.Wait() } wasEmpty := p.empty() end := cap(p.buf) if p.w < p.r { end = p.r } x := copy(p.buf[p.w:end], b) b = b[x:] n += x p.w += x if p.w > len(p.buf) { p.buf = p.buf[:p.w] } if p.w == cap(p.buf) { p.w = 0 } // Signal a blocked reader, if any. if wasEmpty { p.rwait.Signal() } } return n, nil } func (p *pipe) Close() error { p.mu.Lock() defer p.mu.Unlock() p.closed = true // Signal all blocked readers and writers to return an error. p.rwait.Broadcast() p.wwait.Broadcast() return nil } func (p *pipe) closeWrite() error { p.mu.Lock() defer p.mu.Unlock() p.writeClosed = true // Signal all blocked readers and writers to return an error. p.rwait.Broadcast() p.wwait.Broadcast() return nil } type conn struct { io.Reader io.Writer } func (c *conn) Close() error { err1 := c.Reader.(*pipe).Close() err2 := c.Writer.(*pipe).closeWrite() if err1 != nil { return err1 } return err2 } func (*conn) LocalAddr() net.Addr { return addr{} } func (*conn) RemoteAddr() net.Addr { return addr{} } func (c *conn) SetDeadline(t time.Time) error { return fmt.Errorf("unsupported") } func (c *conn) SetReadDeadline(t time.Time) error { return fmt.Errorf("unsupported") } func (c *conn) SetWriteDeadline(t time.Time) error { return fmt.Errorf("unsupported") } type addr struct{} func (addr) Network() string { return "bufconn" } func (addr) String() string { return "bufconn" } grpc-go-1.22.1/test/bufconn/bufconn_test.go000066400000000000000000000111261351635773100206000ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package bufconn import ( "fmt" "io" "net" "reflect" "testing" "time" ) func testRW(r io.Reader, w io.Writer) error { for i := 0; i < 20; i++ { d := make([]byte, i) for j := 0; j < i; j++ { d[j] = byte(i - j) } var rn int var rerr error b := make([]byte, i) done := make(chan struct{}) go func() { for rn < len(b) && rerr == nil { var x int x, rerr = r.Read(b[rn:]) rn += x } close(done) }() wn, werr := w.Write(d) if wn != i || werr != nil { return fmt.Errorf("%v: w.Write(%v) = %v, %v; want %v, nil", i, d, wn, werr, i) } select { case <-done: case <-time.After(500 * time.Millisecond): return fmt.Errorf("%v: r.Read never returned", i) } if rn != i || rerr != nil { return fmt.Errorf("%v: r.Read = %v, %v; want %v, nil", i, rn, rerr, i) } if !reflect.DeepEqual(b, d) { return fmt.Errorf("%v: r.Read read %v; want %v", i, b, d) } } return nil } func TestPipe(t *testing.T) { p := newPipe(10) if err := testRW(p, p); err != nil { t.Fatalf(err.Error()) } } func TestPipeClose(t *testing.T) { p := newPipe(10) p.Close() if _, err := p.Write(nil); err != io.ErrClosedPipe { t.Fatalf("p.Write = _, %v; want _, %v", err, io.ErrClosedPipe) } if _, err := p.Read(nil); err != io.ErrClosedPipe { t.Fatalf("p.Read = _, %v; want _, %v", err, io.ErrClosedPipe) } } func TestConn(t *testing.T) { p1, p2 := newPipe(10), newPipe(10) c1, c2 := &conn{p1, p2}, &conn{p2, p1} if err := testRW(c1, c2); err != nil { t.Fatalf(err.Error()) } if err := testRW(c2, c1); err != nil { t.Fatalf(err.Error()) } } func TestConnCloseWithData(t *testing.T) { lis := Listen(7) errChan := make(chan error) var lisConn net.Conn go func() { var err error if lisConn, err = lis.Accept(); err != nil { errChan <- err } close(errChan) }() dialConn, err := lis.Dial() if err != nil { t.Fatalf("Dial error: %v", err) } if err := <-errChan; err != nil { t.Fatalf("Listen error: %v", err) } // Write some data on both sides of the connection. n, err := dialConn.Write([]byte("hello")) if n != 5 || err != nil { t.Fatalf("dialConn.Write([]byte{\"hello\"}) = %v, %v; want 5, ", n, err) } n, err = lisConn.Write([]byte("hello")) if n != 5 || err != nil { t.Fatalf("lisConn.Write([]byte{\"hello\"}) = %v, %v; want 5, ", n, err) } // Close dial-side; writes from either side should fail. dialConn.Close() if _, err := lisConn.Write([]byte("hello")); err != io.ErrClosedPipe { t.Fatalf("lisConn.Write() = _, ; want _, ") } if _, err := dialConn.Write([]byte("hello")); err != io.ErrClosedPipe { t.Fatalf("dialConn.Write() = _, ; want _, ") } // Read from both sides; reads on lisConn should work, but dialConn should // fail. 
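	// The asymmetry comes from conn.Close: it fully closes the connection's
	// read pipe, so dialConn.Read fails with io.ErrClosedPipe, but it only
	// write-closes the write pipe, so the "hello" already buffered toward
	// lisConn stays readable.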
buf := make([]byte, 6) if _, err := dialConn.Read(buf); err != io.ErrClosedPipe { t.Fatalf("dialConn.Read(buf) = %v, %v; want _, io.ErrClosedPipe", n, err) } n, err = lisConn.Read(buf) if n != 5 || err != nil { t.Fatalf("lisConn.Read(buf) = %v, %v; want 5, ", n, err) } } func TestListener(t *testing.T) { l := Listen(7) var s net.Conn var serr error done := make(chan struct{}) go func() { s, serr = l.Accept() close(done) }() c, cerr := l.Dial() <-done if cerr != nil || serr != nil { t.Fatalf("cerr = %v, serr = %v; want nil, nil", cerr, serr) } if err := testRW(c, s); err != nil { t.Fatalf(err.Error()) } if err := testRW(s, c); err != nil { t.Fatalf(err.Error()) } } func TestCloseWhileDialing(t *testing.T) { l := Listen(7) var c net.Conn var err error done := make(chan struct{}) go func() { c, err = l.Dial() close(done) }() l.Close() <-done if c != nil || err != errClosed { t.Fatalf("c, err = %v, %v; want nil, %v", c, err, errClosed) } } func TestCloseWhileAccepting(t *testing.T) { l := Listen(7) var c net.Conn var err error done := make(chan struct{}) go func() { c, err = l.Accept() close(done) }() l.Close() <-done if c != nil || err != errClosed { t.Fatalf("c, err = %v, %v; want nil, %v", c, err, errClosed) } } grpc-go-1.22.1/test/channelz_linux_go110_test.go000066400000000000000000000060121351635773100214420ustar00rootroot00000000000000// +build go1.10,linux,!appengine /* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // The test in this file should be run in an environment that has go1.10 or later, // as the function SyscallConn() (required to get socket option) was // introduced to net.TCPListener in go1.10. 
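//
// For a concrete sense of what "get socket option" involves here, a hedged
// sketch (tcpLis and the golang.org/x/sys/unix package are assumptions for
// the illustration, not code from this file): SyscallConn exposes the raw fd,
// and an option such as SO_RCVBUF can then be read with
//
//	raw, _ := tcpLis.SyscallConn()
//	raw.Control(func(fd uintptr) {
//		rcvBuf, _ := unix.GetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_RCVBUF)
//		_ = rcvBuf
//	})
//
// which is the kind of data that ends up in the SocketData.SocketOptions
// fields checked by the test below.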
package test import ( "testing" "time" "google.golang.org/grpc/internal/channelz" testpb "google.golang.org/grpc/test/grpc_testing" ) func (s) TestCZSocketMetricsSocketOption(t *testing.T) { envs := []env{tcpClearRREnv, tcpTLSRREnv} for _, e := range envs { testCZSocketMetricsSocketOption(t, e) } } func testCZSocketMetricsSocketOption(t *testing.T, e env) { czCleanup := channelz.NewChannelzStorage() defer czCleanupWrapper(czCleanup, t) te := newTest(t, e) te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) doSuccessfulUnaryCall(tc, t) time.Sleep(10 * time.Millisecond) ss, _ := channelz.GetServers(0, 0) if len(ss) != 1 { t.Fatalf("There should be one server, not %d", len(ss)) } if len(ss[0].ListenSockets) != 1 { t.Fatalf("There should be one listen socket, not %d", len(ss[0].ListenSockets)) } for id := range ss[0].ListenSockets { sm := channelz.GetSocket(id) if sm == nil || sm.SocketData == nil || sm.SocketData.SocketOptions == nil { t.Fatalf("Unable to get server listen socket options") } } ns, _ := channelz.GetServerSockets(ss[0].ID, 0, 0) if len(ns) != 1 { t.Fatalf("There should be one server normal socket, not %d", len(ns)) } if ns[0] == nil || ns[0].SocketData == nil || ns[0].SocketData.SocketOptions == nil { t.Fatalf("Unable to get server normal socket options") } tchan, _ := channelz.GetTopChannels(0, 0) if len(tchan) != 1 { t.Fatalf("There should only be one top channel, not %d", len(tchan)) } if len(tchan[0].SubChans) != 1 { t.Fatalf("There should only be one subchannel under top channel %d, not %d", tchan[0].ID, len(tchan[0].SubChans)) } var id int64 for id = range tchan[0].SubChans { break } sc := channelz.GetSubChannel(id) if sc == nil { t.Fatalf("There should only be one socket under subchannel %d, not 0", id) } if len(sc.Sockets) != 1 { t.Fatalf("There should only be one socket under subchannel %d, not %d", sc.ID, len(sc.Sockets)) } for id = range sc.Sockets { break } skt := channelz.GetSocket(id) if skt == nil || skt.SocketData == nil || skt.SocketData.SocketOptions == nil { t.Fatalf("Unable to get client normal socket options") } } grpc-go-1.22.1/test/channelz_test.go000066400000000000000000002160041351635773100173200ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package test import ( "context" "crypto/tls" "fmt" "net" "reflect" "strings" "sync" "testing" "time" "golang.org/x/net/http2" "google.golang.org/grpc" _ "google.golang.org/grpc/balancer/grpclb" "google.golang.org/grpc/balancer/roundrobin" "google.golang.org/grpc/codes" "google.golang.org/grpc/connectivity" "google.golang.org/grpc/credentials" "google.golang.org/grpc/internal" "google.golang.org/grpc/internal/channelz" "google.golang.org/grpc/keepalive" "google.golang.org/grpc/resolver" "google.golang.org/grpc/resolver/manual" "google.golang.org/grpc/status" testpb "google.golang.org/grpc/test/grpc_testing" "google.golang.org/grpc/testdata" ) func czCleanupWrapper(cleanup func() error, t *testing.T) { if err := cleanup(); err != nil { t.Error(err) } } func verifyResultWithDelay(f func() (bool, error)) error { var ok bool var err error for i := 0; i < 1000; i++ { if ok, err = f(); ok { return nil } time.Sleep(10 * time.Millisecond) } return err } func (s) TestCZServerRegistrationAndDeletion(t *testing.T) { testcases := []struct { total int start int64 max int64 length int64 end bool }{ {total: int(channelz.EntryPerPage), start: 0, max: 0, length: channelz.EntryPerPage, end: true}, {total: int(channelz.EntryPerPage) - 1, start: 0, max: 0, length: channelz.EntryPerPage - 1, end: true}, {total: int(channelz.EntryPerPage) + 1, start: 0, max: 0, length: channelz.EntryPerPage, end: false}, {total: int(channelz.EntryPerPage) + 1, start: int64(2*(channelz.EntryPerPage+1) + 1), max: 0, length: 0, end: true}, {total: int(channelz.EntryPerPage), start: 0, max: 1, length: 1, end: false}, {total: int(channelz.EntryPerPage), start: 0, max: channelz.EntryPerPage - 1, length: channelz.EntryPerPage - 1, end: false}, } for _, c := range testcases { czCleanup := channelz.NewChannelzStorage() defer czCleanupWrapper(czCleanup, t) e := tcpClearRREnv te := newTest(t, e) te.startServers(&testServer{security: e.security}, c.total) ss, end := channelz.GetServers(c.start, c.max) if int64(len(ss)) != c.length || end != c.end { t.Fatalf("GetServers(%d) = %+v (len of which: %d), end: %+v, want len(GetServers(%d)) = %d, end: %+v", c.start, ss, len(ss), end, c.start, c.length, c.end) } te.tearDown() ss, end = channelz.GetServers(c.start, c.max) if len(ss) != 0 || !end { t.Fatalf("GetServers(0) = %+v (len of which: %d), end: %+v, want len(GetServers(0)) = 0, end: true", ss, len(ss), end) } } } func (s) TestCZGetServer(t *testing.T) { czCleanup := channelz.NewChannelzStorage() defer czCleanupWrapper(czCleanup, t) e := tcpClearRREnv te := newTest(t, e) te.startServer(&testServer{security: e.security}) defer te.tearDown() ss, _ := channelz.GetServers(0, 0) if len(ss) != 1 { t.Fatalf("there should only be one server, not %d", len(ss)) } serverID := ss[0].ID srv := channelz.GetServer(serverID) if srv == nil { t.Fatalf("server %d does not exist", serverID) } if srv.ID != serverID { t.Fatalf("server want id %d, but got %d", serverID, srv.ID) } te.tearDown() if err := verifyResultWithDelay(func() (bool, error) { srv := channelz.GetServer(serverID) if srv != nil { return false, fmt.Errorf("server %d should not exist", serverID) } return true, nil }); err != nil { t.Fatal(err) } } func (s) TestCZTopChannelRegistrationAndDeletion(t *testing.T) { testcases := []struct { total int start int64 max int64 length int64 end bool }{ {total: int(channelz.EntryPerPage), start: 0, max: 0, length: channelz.EntryPerPage, end: true}, {total: int(channelz.EntryPerPage) - 1, start: 0, max: 0, length: channelz.EntryPerPage - 1, end: true}, 
{total: int(channelz.EntryPerPage) + 1, start: 0, max: 0, length: channelz.EntryPerPage, end: false}, {total: int(channelz.EntryPerPage) + 1, start: int64(2*(channelz.EntryPerPage+1) + 1), max: 0, length: 0, end: true}, {total: int(channelz.EntryPerPage), start: 0, max: 1, length: 1, end: false}, {total: int(channelz.EntryPerPage), start: 0, max: channelz.EntryPerPage - 1, length: channelz.EntryPerPage - 1, end: false}, } for _, c := range testcases { czCleanup := channelz.NewChannelzStorage() defer czCleanupWrapper(czCleanup, t) e := tcpClearRREnv te := newTest(t, e) var ccs []*grpc.ClientConn for i := 0; i < c.total; i++ { cc := te.clientConn() te.cc = nil // avoid making next dial blocking te.srvAddr = "" ccs = append(ccs, cc) } if err := verifyResultWithDelay(func() (bool, error) { if tcs, end := channelz.GetTopChannels(c.start, c.max); int64(len(tcs)) != c.length || end != c.end { return false, fmt.Errorf("getTopChannels(%d) = %+v (len of which: %d), end: %+v, want len(GetTopChannels(%d)) = %d, end: %+v", c.start, tcs, len(tcs), end, c.start, c.length, c.end) } return true, nil }); err != nil { t.Fatal(err) } for _, cc := range ccs { cc.Close() } if err := verifyResultWithDelay(func() (bool, error) { if tcs, end := channelz.GetTopChannels(c.start, c.max); len(tcs) != 0 || !end { return false, fmt.Errorf("getTopChannels(0) = %+v (len of which: %d), end: %+v, want len(GetTopChannels(0)) = 0, end: true", tcs, len(tcs), end) } return true, nil }); err != nil { t.Fatal(err) } te.tearDown() } } func (s) TestCZTopChannelRegistrationAndDeletionWhenDialFail(t *testing.T) { czCleanup := channelz.NewChannelzStorage() defer czCleanupWrapper(czCleanup, t) // Make dial fails (due to no transport security specified) _, err := grpc.Dial("fake.addr") if err == nil { t.Fatal("expecting dial to fail") } if tcs, end := channelz.GetTopChannels(0, 0); tcs != nil || !end { t.Fatalf("GetTopChannels(0, 0) = %v, %v, want , true", tcs, end) } } func (s) TestCZNestedChannelRegistrationAndDeletion(t *testing.T) { czCleanup := channelz.NewChannelzStorage() defer czCleanupWrapper(czCleanup, t) e := tcpClearRREnv // avoid calling API to set balancer type, which will void service config's change of balancer. 
e.balancer = "" te := newTest(t, e) r, cleanup := manual.GenerateAndRegisterManualResolver() defer cleanup() resolvedAddrs := []resolver.Address{{Addr: "127.0.0.1:0", Type: resolver.GRPCLB, ServerName: "grpclb.server"}} r.InitialState(resolver.State{Addresses: resolvedAddrs}) te.resolverScheme = r.Scheme() te.clientConn() defer te.tearDown() if err := verifyResultWithDelay(func() (bool, error) { tcs, _ := channelz.GetTopChannels(0, 0) if len(tcs) != 1 { return false, fmt.Errorf("there should only be one top channel, not %d", len(tcs)) } if len(tcs[0].NestedChans) != 1 { return false, fmt.Errorf("there should be one nested channel from grpclb, not %d", len(tcs[0].NestedChans)) } return true, nil }); err != nil { t.Fatal(err) } r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: "127.0.0.1:0"}}, ServiceConfig: parseCfg(`{"loadBalancingPolicy": "round_robin"}`)}) // wait for the shutdown of grpclb balancer if err := verifyResultWithDelay(func() (bool, error) { tcs, _ := channelz.GetTopChannels(0, 0) if len(tcs) != 1 { return false, fmt.Errorf("there should only be one top channel, not %d", len(tcs)) } if len(tcs[0].NestedChans) != 0 { return false, fmt.Errorf("there should be 0 nested channel from grpclb, not %d", len(tcs[0].NestedChans)) } return true, nil }); err != nil { t.Fatal(err) } } func (s) TestCZClientSubChannelSocketRegistrationAndDeletion(t *testing.T) { czCleanup := channelz.NewChannelzStorage() defer czCleanupWrapper(czCleanup, t) e := tcpClearRREnv num := 3 // number of backends te := newTest(t, e) var svrAddrs []resolver.Address te.startServers(&testServer{security: e.security}, num) r, cleanup := manual.GenerateAndRegisterManualResolver() defer cleanup() for _, a := range te.srvAddrs { svrAddrs = append(svrAddrs, resolver.Address{Addr: a}) } r.InitialState(resolver.State{Addresses: svrAddrs}) te.resolverScheme = r.Scheme() te.clientConn() defer te.tearDown() // Here, we just wait for all sockets to be up. In the future, if we implement // IDLE, we may need to make several rpc calls to create the sockets. 
if err := verifyResultWithDelay(func() (bool, error) { tcs, _ := channelz.GetTopChannels(0, 0) if len(tcs) != 1 { return false, fmt.Errorf("there should only be one top channel, not %d", len(tcs)) } if len(tcs[0].SubChans) != num { return false, fmt.Errorf("there should be %d subchannel not %d", num, len(tcs[0].SubChans)) } count := 0 for k := range tcs[0].SubChans { sc := channelz.GetSubChannel(k) if sc == nil { return false, fmt.Errorf("got subchannel") } count += len(sc.Sockets) } if count != num { return false, fmt.Errorf("there should be %d sockets not %d", num, count) } return true, nil }); err != nil { t.Fatal(err) } r.UpdateState(resolver.State{Addresses: svrAddrs[:len(svrAddrs)-1]}) if err := verifyResultWithDelay(func() (bool, error) { tcs, _ := channelz.GetTopChannels(0, 0) if len(tcs) != 1 { return false, fmt.Errorf("there should only be one top channel, not %d", len(tcs)) } if len(tcs[0].SubChans) != num-1 { return false, fmt.Errorf("there should be %d subchannel not %d", num-1, len(tcs[0].SubChans)) } count := 0 for k := range tcs[0].SubChans { sc := channelz.GetSubChannel(k) if sc == nil { return false, fmt.Errorf("got subchannel") } count += len(sc.Sockets) } if count != num-1 { return false, fmt.Errorf("there should be %d sockets not %d", num-1, count) } return true, nil }); err != nil { t.Fatal(err) } } func (s) TestCZServerSocketRegistrationAndDeletion(t *testing.T) { testcases := []struct { total int start int64 max int64 length int64 end bool }{ {total: int(channelz.EntryPerPage), start: 0, max: 0, length: channelz.EntryPerPage, end: true}, {total: int(channelz.EntryPerPage) - 1, start: 0, max: 0, length: channelz.EntryPerPage - 1, end: true}, {total: int(channelz.EntryPerPage) + 1, start: 0, max: 0, length: channelz.EntryPerPage, end: false}, {total: int(channelz.EntryPerPage), start: 1, max: 0, length: channelz.EntryPerPage - 1, end: true}, {total: int(channelz.EntryPerPage) + 1, start: channelz.EntryPerPage + 1, max: 0, length: 0, end: true}, {total: int(channelz.EntryPerPage), start: 0, max: 1, length: 1, end: false}, {total: int(channelz.EntryPerPage), start: 0, max: channelz.EntryPerPage - 1, length: channelz.EntryPerPage - 1, end: false}, } for _, c := range testcases { czCleanup := channelz.NewChannelzStorage() defer czCleanupWrapper(czCleanup, t) e := tcpClearRREnv te := newTest(t, e) te.startServer(&testServer{security: e.security}) var ccs []*grpc.ClientConn for i := 0; i < c.total; i++ { cc := te.clientConn() te.cc = nil ccs = append(ccs, cc) } var svrID int64 if err := verifyResultWithDelay(func() (bool, error) { ss, _ := channelz.GetServers(0, 0) if len(ss) != 1 { return false, fmt.Errorf("there should only be one server, not %d", len(ss)) } if len(ss[0].ListenSockets) != 1 { return false, fmt.Errorf("there should only be one server listen socket, not %d", len(ss[0].ListenSockets)) } startID := c.start if startID != 0 { ns, _ := channelz.GetServerSockets(ss[0].ID, 0, int64(c.total)) if int64(len(ns)) < c.start { return false, fmt.Errorf("there should more than %d sockets, not %d", len(ns), c.start) } startID = ns[c.start-1].ID + 1 } ns, end := channelz.GetServerSockets(ss[0].ID, startID, c.max) if int64(len(ns)) != c.length || end != c.end { return false, fmt.Errorf("GetServerSockets(%d) = %+v (len of which: %d), end: %+v, want len(GetServerSockets(%d)) = %d, end: %+v", c.start, ns, len(ns), end, c.start, c.length, c.end) } svrID = ss[0].ID return true, nil }); err != nil { t.Fatal(err) } for _, cc := range ccs { cc.Close() } if err := 
verifyResultWithDelay(func() (bool, error) { ns, _ := channelz.GetServerSockets(svrID, c.start, c.max) if len(ns) != 0 { return false, fmt.Errorf("there should be %d normal sockets not %d", 0, len(ns)) } return true, nil }); err != nil { t.Fatal(err) } te.tearDown() } } func (s) TestCZServerListenSocketDeletion(t *testing.T) { czCleanup := channelz.NewChannelzStorage() defer czCleanupWrapper(czCleanup, t) s := grpc.NewServer() lis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("failed to listen: %v", err) } go s.Serve(lis) if err := verifyResultWithDelay(func() (bool, error) { ss, _ := channelz.GetServers(0, 0) if len(ss) != 1 { return false, fmt.Errorf("there should only be one server, not %d", len(ss)) } if len(ss[0].ListenSockets) != 1 { return false, fmt.Errorf("there should only be one server listen socket, not %d", len(ss[0].ListenSockets)) } return true, nil }); err != nil { t.Fatal(err) } lis.Close() if err := verifyResultWithDelay(func() (bool, error) { ss, _ := channelz.GetServers(0, 0) if len(ss) != 1 { return false, fmt.Errorf("there should be 1 server, not %d", len(ss)) } if len(ss[0].ListenSockets) != 0 { return false, fmt.Errorf("there should only be %d server listen socket, not %d", 0, len(ss[0].ListenSockets)) } return true, nil }); err != nil { t.Fatal(err) } s.Stop() } type dummyChannel struct{} func (d *dummyChannel) ChannelzMetric() *channelz.ChannelInternalMetric { return &channelz.ChannelInternalMetric{} } type dummySocket struct{} func (d *dummySocket) ChannelzMetric() *channelz.SocketInternalMetric { return &channelz.SocketInternalMetric{} } func (s) TestCZRecusivelyDeletionOfEntry(t *testing.T) { // +--+TopChan+---+ // | | // v v // +-+SubChan1+--+ SubChan2 // | | // v v // Socket1 Socket2 czCleanup := channelz.NewChannelzStorage() defer czCleanupWrapper(czCleanup, t) topChanID := channelz.RegisterChannel(&dummyChannel{}, 0, "") subChanID1 := channelz.RegisterSubChannel(&dummyChannel{}, topChanID, "") subChanID2 := channelz.RegisterSubChannel(&dummyChannel{}, topChanID, "") sktID1 := channelz.RegisterNormalSocket(&dummySocket{}, subChanID1, "") sktID2 := channelz.RegisterNormalSocket(&dummySocket{}, subChanID1, "") tcs, _ := channelz.GetTopChannels(0, 0) if tcs == nil || len(tcs) != 1 { t.Fatalf("There should be one TopChannel entry") } if len(tcs[0].SubChans) != 2 { t.Fatalf("There should be two SubChannel entries") } sc := channelz.GetSubChannel(subChanID1) if sc == nil || len(sc.Sockets) != 2 { t.Fatalf("There should be two Socket entries") } channelz.RemoveEntry(topChanID) tcs, _ = channelz.GetTopChannels(0, 0) if tcs == nil || len(tcs) != 1 { t.Fatalf("There should be one TopChannel entry") } channelz.RemoveEntry(subChanID1) channelz.RemoveEntry(subChanID2) tcs, _ = channelz.GetTopChannels(0, 0) if tcs == nil || len(tcs) != 1 { t.Fatalf("There should be one TopChannel entry") } if len(tcs[0].SubChans) != 1 { t.Fatalf("There should be one SubChannel entry") } channelz.RemoveEntry(sktID1) channelz.RemoveEntry(sktID2) tcs, _ = channelz.GetTopChannels(0, 0) if tcs != nil { t.Fatalf("There should be no TopChannel entry") } } func (s) TestCZChannelMetrics(t *testing.T) { czCleanup := channelz.NewChannelzStorage() defer czCleanupWrapper(czCleanup, t) e := tcpClearRREnv num := 3 // number of backends te := newTest(t, e) te.maxClientSendMsgSize = newInt(8) var svrAddrs []resolver.Address te.startServers(&testServer{security: e.security}, num) r, cleanup := manual.GenerateAndRegisterManualResolver() defer cleanup() for _, a := range te.srvAddrs { 
svrAddrs = append(svrAddrs, resolver.Address{Addr: a}) } r.InitialState(resolver.State{Addresses: svrAddrs}) te.resolverScheme = r.Scheme() cc := te.clientConn() defer te.tearDown() tc := testpb.NewTestServiceClient(cc) if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); err != nil { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, ", err) } const smallSize = 1 const largeSize = 8 largePayload, err := newPayload(testpb.PayloadType_COMPRESSABLE, largeSize) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseSize: int32(smallSize), Payload: largePayload, } if _, err := tc.UnaryCall(context.Background(), req); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } stream, err := tc.FullDuplexCall(context.Background()) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } defer stream.CloseSend() // Here, we just wait for all sockets to be up. In the future, if we implement // IDLE, we may need to make several rpc calls to create the sockets. if err := verifyResultWithDelay(func() (bool, error) { tcs, _ := channelz.GetTopChannels(0, 0) if len(tcs) != 1 { return false, fmt.Errorf("there should only be one top channel, not %d", len(tcs)) } if len(tcs[0].SubChans) != num { return false, fmt.Errorf("there should be %d subchannel not %d", num, len(tcs[0].SubChans)) } var cst, csu, cf int64 for k := range tcs[0].SubChans { sc := channelz.GetSubChannel(k) if sc == nil { return false, fmt.Errorf("got subchannel") } cst += sc.ChannelData.CallsStarted csu += sc.ChannelData.CallsSucceeded cf += sc.ChannelData.CallsFailed } if cst != 3 { return false, fmt.Errorf("there should be 3 CallsStarted not %d", cst) } if csu != 1 { return false, fmt.Errorf("there should be 1 CallsSucceeded not %d", csu) } if cf != 1 { return false, fmt.Errorf("there should be 1 CallsFailed not %d", cf) } if tcs[0].ChannelData.CallsStarted != 3 { return false, fmt.Errorf("there should be 3 CallsStarted not %d", tcs[0].ChannelData.CallsStarted) } if tcs[0].ChannelData.CallsSucceeded != 1 { return false, fmt.Errorf("there should be 1 CallsSucceeded not %d", tcs[0].ChannelData.CallsSucceeded) } if tcs[0].ChannelData.CallsFailed != 1 { return false, fmt.Errorf("there should be 1 CallsFailed not %d", tcs[0].ChannelData.CallsFailed) } return true, nil }); err != nil { t.Fatal(err) } } func (s) TestCZServerMetrics(t *testing.T) { czCleanup := channelz.NewChannelzStorage() defer czCleanupWrapper(czCleanup, t) e := tcpClearRREnv te := newTest(t, e) te.maxServerReceiveMsgSize = newInt(8) te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); err != nil { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, ", err) } const smallSize = 1 const largeSize = 8 largePayload, err := newPayload(testpb.PayloadType_COMPRESSABLE, largeSize) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseSize: int32(smallSize), Payload: largePayload, } if _, err := tc.UnaryCall(context.Background(), req); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } stream, err := tc.FullDuplexCall(context.Background()) if err != nil { 
t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } defer stream.CloseSend() if err := verifyResultWithDelay(func() (bool, error) { ss, _ := channelz.GetServers(0, 0) if len(ss) != 1 { return false, fmt.Errorf("there should only be one server, not %d", len(ss)) } if ss[0].ServerData.CallsStarted != 3 { return false, fmt.Errorf("there should be 3 CallsStarted not %d", ss[0].ServerData.CallsStarted) } if ss[0].ServerData.CallsSucceeded != 1 { return false, fmt.Errorf("there should be 1 CallsSucceeded not %d", ss[0].ServerData.CallsSucceeded) } if ss[0].ServerData.CallsFailed != 1 { return false, fmt.Errorf("there should be 1 CallsFailed not %d", ss[0].ServerData.CallsFailed) } return true, nil }); err != nil { t.Fatal(err) } } type testServiceClientWrapper struct { testpb.TestServiceClient mu sync.RWMutex streamsCreated int } func (t *testServiceClientWrapper) getCurrentStreamID() uint32 { t.mu.RLock() defer t.mu.RUnlock() return uint32(2*t.streamsCreated - 1) } func (t *testServiceClientWrapper) EmptyCall(ctx context.Context, in *testpb.Empty, opts ...grpc.CallOption) (*testpb.Empty, error) { t.mu.Lock() defer t.mu.Unlock() t.streamsCreated++ return t.TestServiceClient.EmptyCall(ctx, in, opts...) } func (t *testServiceClientWrapper) UnaryCall(ctx context.Context, in *testpb.SimpleRequest, opts ...grpc.CallOption) (*testpb.SimpleResponse, error) { t.mu.Lock() defer t.mu.Unlock() t.streamsCreated++ return t.TestServiceClient.UnaryCall(ctx, in, opts...) } func (t *testServiceClientWrapper) StreamingOutputCall(ctx context.Context, in *testpb.StreamingOutputCallRequest, opts ...grpc.CallOption) (testpb.TestService_StreamingOutputCallClient, error) { t.mu.Lock() defer t.mu.Unlock() t.streamsCreated++ return t.TestServiceClient.StreamingOutputCall(ctx, in, opts...) } func (t *testServiceClientWrapper) StreamingInputCall(ctx context.Context, opts ...grpc.CallOption) (testpb.TestService_StreamingInputCallClient, error) { t.mu.Lock() defer t.mu.Unlock() t.streamsCreated++ return t.TestServiceClient.StreamingInputCall(ctx, opts...) } func (t *testServiceClientWrapper) FullDuplexCall(ctx context.Context, opts ...grpc.CallOption) (testpb.TestService_FullDuplexCallClient, error) { t.mu.Lock() defer t.mu.Unlock() t.streamsCreated++ return t.TestServiceClient.FullDuplexCall(ctx, opts...) } func (t *testServiceClientWrapper) HalfDuplexCall(ctx context.Context, opts ...grpc.CallOption) (testpb.TestService_HalfDuplexCallClient, error) { t.mu.Lock() defer t.mu.Unlock() t.streamsCreated++ return t.TestServiceClient.HalfDuplexCall(ctx, opts...) 
} func doSuccessfulUnaryCall(tc testpb.TestServiceClient, t *testing.T) { if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); err != nil { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, ", err) } } func doStreamingInputCallWithLargePayload(tc testpb.TestServiceClient, t *testing.T) { s, err := tc.StreamingInputCall(context.Background()) if err != nil { t.Fatalf("TestService/StreamingInputCall(_) = _, %v, want ", err) } payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, 10000) if err != nil { t.Fatal(err) } s.Send(&testpb.StreamingInputCallRequest{Payload: payload}) } func doServerSideFailedUnaryCall(tc testpb.TestServiceClient, t *testing.T) { const smallSize = 1 const largeSize = 2000 largePayload, err := newPayload(testpb.PayloadType_COMPRESSABLE, largeSize) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseSize: int32(smallSize), Payload: largePayload, } if _, err := tc.UnaryCall(context.Background(), req); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } } func doClientSideInitiatedFailedStream(tc testpb.TestServiceClient, t *testing.T) { ctx, cancel := context.WithCancel(context.Background()) stream, err := tc.FullDuplexCall(ctx) if err != nil { t.Fatalf("TestService/FullDuplexCall(_) = _, %v, want ", err) } const smallSize = 1 smallPayload, err := newPayload(testpb.PayloadType_COMPRESSABLE, smallSize) if err != nil { t.Fatal(err) } sreq := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseParameters: []*testpb.ResponseParameters{ {Size: smallSize}, }, Payload: smallPayload, } if err := stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } if _, err := stream.Recv(); err != nil { t.Fatalf("%v.Recv() = %v, want ", stream, err) } // By canceling the call, the client will send rst_stream to end the call, and // the stream will failed as a result. cancel() } // This func is to be used to test client side counting of failed streams. func doServerSideInitiatedFailedStreamWithRSTStream(tc testpb.TestServiceClient, t *testing.T, l *listenerWrapper) { stream, err := tc.FullDuplexCall(context.Background()) if err != nil { t.Fatalf("TestService/FullDuplexCall(_) = _, %v, want ", err) } const smallSize = 1 smallPayload, err := newPayload(testpb.PayloadType_COMPRESSABLE, smallSize) if err != nil { t.Fatal(err) } sreq := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseParameters: []*testpb.ResponseParameters{ {Size: smallSize}, }, Payload: smallPayload, } if err := stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } if _, err := stream.Recv(); err != nil { t.Fatalf("%v.Recv() = %v, want ", stream, err) } rcw := l.getLastConn() if rcw != nil { rcw.writeRSTStream(tc.(*testServiceClientWrapper).getCurrentStreamID(), http2.ErrCodeCancel) } if _, err := stream.Recv(); err == nil { t.Fatalf("%v.Recv() = %v, want ", stream, err) } } // this func is to be used to test client side counting of failed streams. func doServerSideInitiatedFailedStreamWithGoAway(tc testpb.TestServiceClient, t *testing.T, l *listenerWrapper) { // This call is just to keep the transport from shutting down (socket will be deleted // in this case, and we will not be able to get metrics). 
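	// Below, two streams each complete one round trip; a GOAWAY whose
	// last-stream ID points at the first stream is then written from the
	// server side, so the still-open second stream is counted as failed on
	// the client.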
s, err := tc.FullDuplexCall(context.Background()) if err != nil { t.Fatalf("TestService/FullDuplexCall(_) = _, %v, want ", err) } if err := s.Send(&testpb.StreamingOutputCallRequest{ResponseParameters: []*testpb.ResponseParameters{ { Size: 1, }, }}); err != nil { t.Fatalf("s.Send() failed with error: %v", err) } if _, err := s.Recv(); err != nil { t.Fatalf("s.Recv() failed with error: %v", err) } s, err = tc.FullDuplexCall(context.Background()) if err != nil { t.Fatalf("TestService/FullDuplexCall(_) = _, %v, want ", err) } if err := s.Send(&testpb.StreamingOutputCallRequest{ResponseParameters: []*testpb.ResponseParameters{ { Size: 1, }, }}); err != nil { t.Fatalf("s.Send() failed with error: %v", err) } if _, err := s.Recv(); err != nil { t.Fatalf("s.Recv() failed with error: %v", err) } rcw := l.getLastConn() if rcw != nil { rcw.writeGoAway(tc.(*testServiceClientWrapper).getCurrentStreamID()-2, http2.ErrCodeCancel, []byte{}) } if _, err := s.Recv(); err == nil { t.Fatalf("%v.Recv() = %v, want ", s, err) } } // this func is to be used to test client side counting of failed streams. func doServerSideInitiatedFailedStreamWithClientBreakFlowControl(tc testpb.TestServiceClient, t *testing.T, dw *dialerWrapper) { stream, err := tc.FullDuplexCall(context.Background()) if err != nil { t.Fatalf("TestService/FullDuplexCall(_) = _, %v, want ", err) } // sleep here to make sure header frame being sent before the data frame we write directly below. time.Sleep(10 * time.Millisecond) payload := make([]byte, 65537) dw.getRawConnWrapper().writeRawFrame(http2.FrameData, 0, tc.(*testServiceClientWrapper).getCurrentStreamID(), payload) if _, err := stream.Recv(); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Recv() = %v, want error code: %v", stream, err, codes.ResourceExhausted) } } func doIdleCallToInvokeKeepAlive(tc testpb.TestServiceClient, t *testing.T) { ctx, cancel := context.WithCancel(context.Background()) _, err := tc.FullDuplexCall(ctx) if err != nil { t.Fatalf("TestService/FullDuplexCall(_) = _, %v, want ", err) } // Allow for at least 2 keepalives (1s per ping interval) time.Sleep(4 * time.Second) cancel() } func (s) TestCZClientSocketMetricsStreamsAndMessagesCount(t *testing.T) { czCleanup := channelz.NewChannelzStorage() defer czCleanupWrapper(czCleanup, t) e := tcpClearRREnv te := newTest(t, e) te.maxServerReceiveMsgSize = newInt(20) te.maxClientReceiveMsgSize = newInt(20) rcw := te.startServerWithConnControl(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := &testServiceClientWrapper{TestServiceClient: testpb.NewTestServiceClient(cc)} doSuccessfulUnaryCall(tc, t) var scID, skID int64 if err := verifyResultWithDelay(func() (bool, error) { tchan, _ := channelz.GetTopChannels(0, 0) if len(tchan) != 1 { return false, fmt.Errorf("there should only be one top channel, not %d", len(tchan)) } if len(tchan[0].SubChans) != 1 { return false, fmt.Errorf("there should only be one subchannel under top channel %d, not %d", tchan[0].ID, len(tchan[0].SubChans)) } for scID = range tchan[0].SubChans { break } sc := channelz.GetSubChannel(scID) if sc == nil { return false, fmt.Errorf("there should only be one socket under subchannel %d, not 0", scID) } if len(sc.Sockets) != 1 { return false, fmt.Errorf("there should only be one socket under subchannel %d, not %d", sc.ID, len(sc.Sockets)) } for skID = range sc.Sockets { break } skt := channelz.GetSocket(skID) sktData := skt.SocketData if sktData.StreamsStarted != 1 || sktData.StreamsSucceeded != 1 || 
sktData.MessagesSent != 1 || sktData.MessagesReceived != 1 { return false, fmt.Errorf("channelz.GetSocket(%d), want (StreamsStarted, StreamsSucceeded, MessagesSent, MessagesReceived) = (1, 1, 1, 1), got (%d, %d, %d, %d)", skt.ID, sktData.StreamsStarted, sktData.StreamsSucceeded, sktData.MessagesSent, sktData.MessagesReceived) } return true, nil }); err != nil { t.Fatal(err) } doServerSideFailedUnaryCall(tc, t) if err := verifyResultWithDelay(func() (bool, error) { skt := channelz.GetSocket(skID) sktData := skt.SocketData if sktData.StreamsStarted != 2 || sktData.StreamsSucceeded != 2 || sktData.MessagesSent != 2 || sktData.MessagesReceived != 1 { return false, fmt.Errorf("channelz.GetSocket(%d), want (StreamsStarted, StreamsSucceeded, MessagesSent, MessagesReceived) = (2, 2, 2, 1), got (%d, %d, %d, %d)", skt.ID, sktData.StreamsStarted, sktData.StreamsSucceeded, sktData.MessagesSent, sktData.MessagesReceived) } return true, nil }); err != nil { t.Fatal(err) } doClientSideInitiatedFailedStream(tc, t) if err := verifyResultWithDelay(func() (bool, error) { skt := channelz.GetSocket(skID) sktData := skt.SocketData if sktData.StreamsStarted != 3 || sktData.StreamsSucceeded != 2 || sktData.StreamsFailed != 1 || sktData.MessagesSent != 3 || sktData.MessagesReceived != 2 { return false, fmt.Errorf("channelz.GetSocket(%d), want (StreamsStarted, StreamsSucceeded, StreamsFailed, MessagesSent, MessagesReceived) = (3, 2, 1, 3, 2), got (%d, %d, %d, %d, %d)", skt.ID, sktData.StreamsStarted, sktData.StreamsSucceeded, sktData.StreamsFailed, sktData.MessagesSent, sktData.MessagesReceived) } return true, nil }); err != nil { t.Fatal(err) } doServerSideInitiatedFailedStreamWithRSTStream(tc, t, rcw) if err := verifyResultWithDelay(func() (bool, error) { skt := channelz.GetSocket(skID) sktData := skt.SocketData if sktData.StreamsStarted != 4 || sktData.StreamsSucceeded != 2 || sktData.StreamsFailed != 2 || sktData.MessagesSent != 4 || sktData.MessagesReceived != 3 { return false, fmt.Errorf("channelz.GetSocket(%d), want (StreamsStarted, StreamsSucceeded, StreamsFailed, MessagesSent, MessagesReceived) = (4, 2, 2, 4, 3), got (%d, %d, %d, %d, %d)", skt.ID, sktData.StreamsStarted, sktData.StreamsSucceeded, sktData.StreamsFailed, sktData.MessagesSent, sktData.MessagesReceived) } return true, nil }); err != nil { t.Fatal(err) } doServerSideInitiatedFailedStreamWithGoAway(tc, t, rcw) if err := verifyResultWithDelay(func() (bool, error) { skt := channelz.GetSocket(skID) sktData := skt.SocketData if sktData.StreamsStarted != 6 || sktData.StreamsSucceeded != 2 || sktData.StreamsFailed != 3 || sktData.MessagesSent != 6 || sktData.MessagesReceived != 5 { return false, fmt.Errorf("channelz.GetSocket(%d), want (StreamsStarted, StreamsSucceeded, StreamsFailed, MessagesSent, MessagesReceived) = (6, 2, 3, 6, 5), got (%d, %d, %d, %d, %d)", skt.ID, sktData.StreamsStarted, sktData.StreamsSucceeded, sktData.StreamsFailed, sktData.MessagesSent, sktData.MessagesReceived) } return true, nil }); err != nil { t.Fatal(err) } } // This test is to complete TestCZClientSocketMetricsStreamsAndMessagesCount and // TestCZServerSocketMetricsStreamsAndMessagesCount by adding the test case of // server sending RST_STREAM to client due to client side flow control violation. // It is separated from other cases due to setup incompatibly, i.e. max receive // size violation will mask flow control violation. 
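// The violation is produced by writing a raw 65537-byte DATA frame from the
// client (doServerSideInitiatedFailedStreamWithClientBreakFlowControl above),
// one byte more than the 65536-byte initial window configured below, which
// makes the server reset the stream.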
func (s) TestCZClientAndServerSocketMetricsStreamsCountFlowControlRSTStream(t *testing.T) { czCleanup := channelz.NewChannelzStorage() defer czCleanupWrapper(czCleanup, t) e := tcpClearRREnv te := newTest(t, e) te.serverInitialWindowSize = 65536 // Avoid overflowing connection level flow control window, which will lead to // transport being closed. te.serverInitialConnWindowSize = 65536 * 2 te.startServer(&testServer{security: e.security}) defer te.tearDown() cc, dw := te.clientConnWithConnControl() tc := &testServiceClientWrapper{TestServiceClient: testpb.NewTestServiceClient(cc)} doServerSideInitiatedFailedStreamWithClientBreakFlowControl(tc, t, dw) if err := verifyResultWithDelay(func() (bool, error) { tchan, _ := channelz.GetTopChannels(0, 0) if len(tchan) != 1 { return false, fmt.Errorf("there should only be one top channel, not %d", len(tchan)) } if len(tchan[0].SubChans) != 1 { return false, fmt.Errorf("there should only be one subchannel under top channel %d, not %d", tchan[0].ID, len(tchan[0].SubChans)) } var id int64 for id = range tchan[0].SubChans { break } sc := channelz.GetSubChannel(id) if sc == nil { return false, fmt.Errorf("there should only be one socket under subchannel %d, not 0", id) } if len(sc.Sockets) != 1 { return false, fmt.Errorf("there should only be one socket under subchannel %d, not %d", sc.ID, len(sc.Sockets)) } for id = range sc.Sockets { break } skt := channelz.GetSocket(id) sktData := skt.SocketData if sktData.StreamsStarted != 1 || sktData.StreamsSucceeded != 0 || sktData.StreamsFailed != 1 { return false, fmt.Errorf("channelz.GetSocket(%d), want (StreamsStarted, StreamsSucceeded, StreamsFailed) = (1, 0, 1), got (%d, %d, %d)", skt.ID, sktData.StreamsStarted, sktData.StreamsSucceeded, sktData.StreamsFailed) } ss, _ := channelz.GetServers(0, 0) if len(ss) != 1 { return false, fmt.Errorf("there should only be one server, not %d", len(ss)) } ns, _ := channelz.GetServerSockets(ss[0].ID, 0, 0) if len(ns) != 1 { return false, fmt.Errorf("there should be one server normal socket, not %d", len(ns)) } sktData = ns[0].SocketData if sktData.StreamsStarted != 1 || sktData.StreamsSucceeded != 0 || sktData.StreamsFailed != 1 { return false, fmt.Errorf("server socket metric with ID %d, want (StreamsStarted, StreamsSucceeded, StreamsFailed) = (1, 0, 1), got (%d, %d, %d)", ns[0].ID, sktData.StreamsStarted, sktData.StreamsSucceeded, sktData.StreamsFailed) } return true, nil }); err != nil { t.Fatal(err) } } func (s) TestCZClientAndServerSocketMetricsFlowControl(t *testing.T) { czCleanup := channelz.NewChannelzStorage() defer czCleanupWrapper(czCleanup, t) e := tcpClearRREnv te := newTest(t, e) // disable BDP te.serverInitialWindowSize = 65536 te.serverInitialConnWindowSize = 65536 te.clientInitialWindowSize = 65536 te.clientInitialConnWindowSize = 65536 te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) for i := 0; i < 10; i++ { doSuccessfulUnaryCall(tc, t) } var cliSktID, svrSktID int64 if err := verifyResultWithDelay(func() (bool, error) { tchan, _ := channelz.GetTopChannels(0, 0) if len(tchan) != 1 { return false, fmt.Errorf("there should only be one top channel, not %d", len(tchan)) } if len(tchan[0].SubChans) != 1 { return false, fmt.Errorf("there should only be one subchannel under top channel %d, not %d", tchan[0].ID, len(tchan[0].SubChans)) } var id int64 for id = range tchan[0].SubChans { break } sc := channelz.GetSubChannel(id) if sc == nil { return false, fmt.Errorf("there should 
only be one socket under subchannel %d, not 0", id) } if len(sc.Sockets) != 1 { return false, fmt.Errorf("there should only be one socket under subchannel %d, not %d", sc.ID, len(sc.Sockets)) } for id = range sc.Sockets { break } skt := channelz.GetSocket(id) sktData := skt.SocketData // 65536 - 5 (Length-Prefixed-Message size) * 10 = 65486 if sktData.LocalFlowControlWindow != 65486 || sktData.RemoteFlowControlWindow != 65486 { return false, fmt.Errorf("client: (LocalFlowControlWindow, RemoteFlowControlWindow) size should be (65536, 65486), not (%d, %d)", sktData.LocalFlowControlWindow, sktData.RemoteFlowControlWindow) } ss, _ := channelz.GetServers(0, 0) if len(ss) != 1 { return false, fmt.Errorf("there should only be one server, not %d", len(ss)) } ns, _ := channelz.GetServerSockets(ss[0].ID, 0, 0) sktData = ns[0].SocketData if sktData.LocalFlowControlWindow != 65486 || sktData.RemoteFlowControlWindow != 65486 { return false, fmt.Errorf("server: (LocalFlowControlWindow, RemoteFlowControlWindow) size should be (65536, 65486), not (%d, %d)", sktData.LocalFlowControlWindow, sktData.RemoteFlowControlWindow) } cliSktID, svrSktID = id, ss[0].ID return true, nil }); err != nil { t.Fatal(err) } doStreamingInputCallWithLargePayload(tc, t) if err := verifyResultWithDelay(func() (bool, error) { skt := channelz.GetSocket(cliSktID) sktData := skt.SocketData // Local: 65536 - 5 (Length-Prefixed-Message size) * 10 = 65486 // Remote: 65536 - 5 (Length-Prefixed-Message size) * 10 - 10011 = 55475 if sktData.LocalFlowControlWindow != 65486 || sktData.RemoteFlowControlWindow != 55475 { return false, fmt.Errorf("client: (LocalFlowControlWindow, RemoteFlowControlWindow) size should be (65486, 55475), not (%d, %d)", sktData.LocalFlowControlWindow, sktData.RemoteFlowControlWindow) } ss, _ := channelz.GetServers(0, 0) if len(ss) != 1 { return false, fmt.Errorf("there should only be one server, not %d", len(ss)) } ns, _ := channelz.GetServerSockets(svrSktID, 0, 0) sktData = ns[0].SocketData if sktData.LocalFlowControlWindow != 55475 || sktData.RemoteFlowControlWindow != 65486 { return false, fmt.Errorf("server: (LocalFlowControlWindow, RemoteFlowControlWindow) size should be (55475, 65486), not (%d, %d)", sktData.LocalFlowControlWindow, sktData.RemoteFlowControlWindow) } return true, nil }); err != nil { t.Fatal(err) } // triggers transport flow control window update on server side, since unacked // bytes should be larger than limit now. i.e. 50 + 20022 > 65536/4. 
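	// In that comment, 50 is the ten 5-byte unary requests received earlier
	// and 20022 is two 10011-byte streaming payloads (the one sent below plus
	// the one above); the update threshold is a quarter of the 65536-byte
	// window, i.e. 16384.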
doStreamingInputCallWithLargePayload(tc, t) if err := verifyResultWithDelay(func() (bool, error) { skt := channelz.GetSocket(cliSktID) sktData := skt.SocketData // Local: 65536 - 5 (Length-Prefixed-Message size) * 10 = 65486 // Remote: 65536 if sktData.LocalFlowControlWindow != 65486 || sktData.RemoteFlowControlWindow != 65536 { return false, fmt.Errorf("client: (LocalFlowControlWindow, RemoteFlowControlWindow) size should be (65486, 65536), not (%d, %d)", sktData.LocalFlowControlWindow, sktData.RemoteFlowControlWindow) } ss, _ := channelz.GetServers(0, 0) if len(ss) != 1 { return false, fmt.Errorf("there should only be one server, not %d", len(ss)) } ns, _ := channelz.GetServerSockets(svrSktID, 0, 0) sktData = ns[0].SocketData if sktData.LocalFlowControlWindow != 65536 || sktData.RemoteFlowControlWindow != 65486 { return false, fmt.Errorf("server: (LocalFlowControlWindow, RemoteFlowControlWindow) size should be (65536, 65486), not (%d, %d)", sktData.LocalFlowControlWindow, sktData.RemoteFlowControlWindow) } return true, nil }); err != nil { t.Fatal(err) } } func (s) TestCZClientSocketMetricsKeepAlive(t *testing.T) { czCleanup := channelz.NewChannelzStorage() defer czCleanupWrapper(czCleanup, t) defer func(t time.Duration) { internal.KeepaliveMinPingTime = t }(internal.KeepaliveMinPingTime) internal.KeepaliveMinPingTime = time.Second e := tcpClearRREnv te := newTest(t, e) te.customDialOptions = append(te.customDialOptions, grpc.WithKeepaliveParams( keepalive.ClientParameters{ Time: time.Second, Timeout: 500 * time.Millisecond, PermitWithoutStream: true, })) te.customServerOptions = append(te.customServerOptions, grpc.KeepaliveEnforcementPolicy( keepalive.EnforcementPolicy{ MinTime: 500 * time.Millisecond, PermitWithoutStream: true, })) te.startServer(&testServer{security: e.security}) te.clientConn() // Dial the server defer te.tearDown() if err := verifyResultWithDelay(func() (bool, error) { tchan, _ := channelz.GetTopChannels(0, 0) if len(tchan) != 1 { return false, fmt.Errorf("there should only be one top channel, not %d", len(tchan)) } if len(tchan[0].SubChans) != 1 { return false, fmt.Errorf("there should only be one subchannel under top channel %d, not %d", tchan[0].ID, len(tchan[0].SubChans)) } var id int64 for id = range tchan[0].SubChans { break } sc := channelz.GetSubChannel(id) if sc == nil { return false, fmt.Errorf("there should only be one socket under subchannel %d, not 0", id) } if len(sc.Sockets) != 1 { return false, fmt.Errorf("there should only be one socket under subchannel %d, not %d", sc.ID, len(sc.Sockets)) } for id = range sc.Sockets { break } skt := channelz.GetSocket(id) if skt.SocketData.KeepAlivesSent != 2 { return false, fmt.Errorf("there should be 2 KeepAlives sent, not %d", skt.SocketData.KeepAlivesSent) } return true, nil }); err != nil { t.Fatal(err) } } func (s) TestCZServerSocketMetricsStreamsAndMessagesCount(t *testing.T) { czCleanup := channelz.NewChannelzStorage() defer czCleanupWrapper(czCleanup, t) e := tcpClearRREnv te := newTest(t, e) te.maxServerReceiveMsgSize = newInt(20) te.maxClientReceiveMsgSize = newInt(20) te.startServer(&testServer{security: e.security}) defer te.tearDown() cc, _ := te.clientConnWithConnControl() tc := &testServiceClientWrapper{TestServiceClient: testpb.NewTestServiceClient(cc)} var svrID int64 if err := verifyResultWithDelay(func() (bool, error) { ss, _ := channelz.GetServers(0, 0) if len(ss) != 1 { return false, fmt.Errorf("there should only be one server, not %d", len(ss)) } svrID = ss[0].ID return true, nil }); err != 
nil { t.Fatal(err) } doSuccessfulUnaryCall(tc, t) if err := verifyResultWithDelay(func() (bool, error) { ns, _ := channelz.GetServerSockets(svrID, 0, 0) sktData := ns[0].SocketData if sktData.StreamsStarted != 1 || sktData.StreamsSucceeded != 1 || sktData.StreamsFailed != 0 || sktData.MessagesSent != 1 || sktData.MessagesReceived != 1 { return false, fmt.Errorf("server socket metric with ID %d, want (StreamsStarted, StreamsSucceeded, MessagesSent, MessagesReceived) = (1, 1, 1, 1), got (%d, %d, %d, %d, %d)", ns[0].ID, sktData.StreamsStarted, sktData.StreamsSucceeded, sktData.StreamsFailed, sktData.MessagesSent, sktData.MessagesReceived) } return true, nil }); err != nil { t.Fatal(err) } doServerSideFailedUnaryCall(tc, t) if err := verifyResultWithDelay(func() (bool, error) { ns, _ := channelz.GetServerSockets(svrID, 0, 0) sktData := ns[0].SocketData if sktData.StreamsStarted != 2 || sktData.StreamsSucceeded != 2 || sktData.StreamsFailed != 0 || sktData.MessagesSent != 1 || sktData.MessagesReceived != 1 { return false, fmt.Errorf("server socket metric with ID %d, want (StreamsStarted, StreamsSucceeded, StreamsFailed, MessagesSent, MessagesReceived) = (2, 2, 0, 1, 1), got (%d, %d, %d, %d, %d)", ns[0].ID, sktData.StreamsStarted, sktData.StreamsSucceeded, sktData.StreamsFailed, sktData.MessagesSent, sktData.MessagesReceived) } return true, nil }); err != nil { t.Fatal(err) } doClientSideInitiatedFailedStream(tc, t) if err := verifyResultWithDelay(func() (bool, error) { ns, _ := channelz.GetServerSockets(svrID, 0, 0) sktData := ns[0].SocketData if sktData.StreamsStarted != 3 || sktData.StreamsSucceeded != 2 || sktData.StreamsFailed != 1 || sktData.MessagesSent != 2 || sktData.MessagesReceived != 2 { return false, fmt.Errorf("server socket metric with ID %d, want (StreamsStarted, StreamsSucceeded, StreamsFailed, MessagesSent, MessagesReceived) = (3, 2, 1, 2, 2), got (%d, %d, %d, %d, %d)", ns[0].ID, sktData.StreamsStarted, sktData.StreamsSucceeded, sktData.StreamsFailed, sktData.MessagesSent, sktData.MessagesReceived) } return true, nil }); err != nil { t.Fatal(err) } } func (s) TestCZServerSocketMetricsKeepAlive(t *testing.T) { czCleanup := channelz.NewChannelzStorage() defer czCleanupWrapper(czCleanup, t) e := tcpClearRREnv te := newTest(t, e) te.customServerOptions = append(te.customServerOptions, grpc.KeepaliveParams(keepalive.ServerParameters{Time: time.Second, Timeout: 500 * time.Millisecond})) te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) doIdleCallToInvokeKeepAlive(tc, t) if err := verifyResultWithDelay(func() (bool, error) { ss, _ := channelz.GetServers(0, 0) if len(ss) != 1 { return false, fmt.Errorf("there should be one server, not %d", len(ss)) } ns, _ := channelz.GetServerSockets(ss[0].ID, 0, 0) if len(ns) != 1 { return false, fmt.Errorf("there should be one server normal socket, not %d", len(ns)) } if ns[0].SocketData.KeepAlivesSent != 2 { // doIdleCallToInvokeKeepAlive func is set up to send 2 KeepAlives. 
return false, fmt.Errorf("there should be 2 KeepAlives sent, not %d", ns[0].SocketData.KeepAlivesSent) } return true, nil }); err != nil { t.Fatal(err) } } var cipherSuites = []string{ "TLS_RSA_WITH_RC4_128_SHA", "TLS_RSA_WITH_3DES_EDE_CBC_SHA", "TLS_RSA_WITH_AES_128_CBC_SHA", "TLS_RSA_WITH_AES_256_CBC_SHA", "TLS_RSA_WITH_AES_128_GCM_SHA256", "TLS_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_RC4_128_SHA", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA", "TLS_ECDHE_RSA_WITH_RC4_128_SHA", "TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_FALLBACK_SCSV", "TLS_RSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305", "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305", "TLS_AES_128_GCM_SHA256", "TLS_AES_256_GCM_SHA384", "TLS_CHACHA20_POLY1305_SHA256", } func (s) TestCZSocketGetSecurityValueTLS(t *testing.T) { czCleanup := channelz.NewChannelzStorage() defer czCleanupWrapper(czCleanup, t) e := tcpTLSRREnv te := newTest(t, e) te.startServer(&testServer{security: e.security}) defer te.tearDown() te.clientConn() if err := verifyResultWithDelay(func() (bool, error) { tchan, _ := channelz.GetTopChannels(0, 0) if len(tchan) != 1 { return false, fmt.Errorf("there should only be one top channel, not %d", len(tchan)) } if len(tchan[0].SubChans) != 1 { return false, fmt.Errorf("there should only be one subchannel under top channel %d, not %d", tchan[0].ID, len(tchan[0].SubChans)) } var id int64 for id = range tchan[0].SubChans { break } sc := channelz.GetSubChannel(id) if sc == nil { return false, fmt.Errorf("there should only be one socket under subchannel %d, not 0", id) } if len(sc.Sockets) != 1 { return false, fmt.Errorf("there should only be one socket under subchannel %d, not %d", sc.ID, len(sc.Sockets)) } for id = range sc.Sockets { break } skt := channelz.GetSocket(id) cert, _ := tls.LoadX509KeyPair(testdata.Path("server1.pem"), testdata.Path("server1.key")) securityVal, ok := skt.SocketData.Security.(*credentials.TLSChannelzSecurityValue) if !ok { return false, fmt.Errorf("the SocketData.Security is of type: %T, want: *credentials.TLSChannelzSecurityValue", skt.SocketData.Security) } if !reflect.DeepEqual(securityVal.RemoteCertificate, cert.Certificate[0]) { return false, fmt.Errorf("SocketData.Security.RemoteCertificate got: %v, want: %v", securityVal.RemoteCertificate, cert.Certificate[0]) } for _, v := range cipherSuites { if v == securityVal.StandardName { return true, nil } } return false, fmt.Errorf("SocketData.Security.StandardName got: %v, want it to be one of %v", securityVal.StandardName, cipherSuites) }); err != nil { t.Fatal(err) } } func (s) TestCZChannelTraceCreationDeletion(t *testing.T) { czCleanup := channelz.NewChannelzStorage() defer czCleanupWrapper(czCleanup, t) e := tcpClearRREnv // avoid calling API to set balancer type, which will void service config's change of balancer. 
e.balancer = "" te := newTest(t, e) r, cleanup := manual.GenerateAndRegisterManualResolver() defer cleanup() resolvedAddrs := []resolver.Address{{Addr: "127.0.0.1:0", Type: resolver.GRPCLB, ServerName: "grpclb.server"}} r.InitialState(resolver.State{Addresses: resolvedAddrs}) te.resolverScheme = r.Scheme() te.clientConn() defer te.tearDown() var nestedConn int64 if err := verifyResultWithDelay(func() (bool, error) { tcs, _ := channelz.GetTopChannels(0, 0) if len(tcs) != 1 { return false, fmt.Errorf("there should only be one top channel, not %d", len(tcs)) } if len(tcs[0].NestedChans) != 1 { return false, fmt.Errorf("there should be one nested channel from grpclb, not %d", len(tcs[0].NestedChans)) } for k := range tcs[0].NestedChans { nestedConn = k } for _, e := range tcs[0].Trace.Events { if e.RefID == nestedConn && e.RefType != channelz.RefChannel { return false, fmt.Errorf("nested channel trace event shoud have RefChannel as RefType") } } ncm := channelz.GetChannel(nestedConn) if ncm.Trace == nil { return false, fmt.Errorf("trace for nested channel should not be empty") } if len(ncm.Trace.Events) == 0 { return false, fmt.Errorf("there should be at least one trace event for nested channel not 0") } if ncm.Trace.Events[0].Desc != "Channel Created" { return false, fmt.Errorf("the first trace event should be \"Channel Created\", not %q", ncm.Trace.Events[0].Desc) } return true, nil }); err != nil { t.Fatal(err) } r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: "127.0.0.1:0"}}, ServiceConfig: parseCfg(`{"loadBalancingPolicy": "round_robin"}`)}) // wait for the shutdown of grpclb balancer if err := verifyResultWithDelay(func() (bool, error) { tcs, _ := channelz.GetTopChannels(0, 0) if len(tcs) != 1 { return false, fmt.Errorf("there should only be one top channel, not %d", len(tcs)) } if len(tcs[0].NestedChans) != 0 { return false, fmt.Errorf("there should be 0 nested channel from grpclb, not %d", len(tcs[0].NestedChans)) } ncm := channelz.GetChannel(nestedConn) if ncm == nil { return false, fmt.Errorf("nested channel should still exist due to parent's trace reference") } if ncm.Trace == nil { return false, fmt.Errorf("trace for nested channel should not be empty") } if len(ncm.Trace.Events) == 0 { return false, fmt.Errorf("there should be at least one trace event for nested channel not 0") } if ncm.Trace.Events[len(ncm.Trace.Events)-1].Desc != "Channel Deleted" { return false, fmt.Errorf("the first trace event should be \"Channel Deleted\", not %q", ncm.Trace.Events[0].Desc) } return true, nil }); err != nil { t.Fatal(err) } } func (s) TestCZSubChannelTraceCreationDeletion(t *testing.T) { czCleanup := channelz.NewChannelzStorage() defer czCleanupWrapper(czCleanup, t) e := tcpClearRREnv te := newTest(t, e) te.startServer(&testServer{security: e.security}) r, cleanup := manual.GenerateAndRegisterManualResolver() defer cleanup() r.InitialState(resolver.State{Addresses: []resolver.Address{{Addr: te.srvAddr}}}) te.resolverScheme = r.Scheme() te.clientConn() defer te.tearDown() var subConn int64 // Here, we just wait for all sockets to be up. In the future, if we implement // IDLE, we may need to make several rpc calls to create the sockets. 
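// verifyResultWithDelay (defined elsewhere in this package) retries the check with short sleeps until it passes or times out, absorbing the asynchronous connection setup.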
if err := verifyResultWithDelay(func() (bool, error) { tcs, _ := channelz.GetTopChannels(0, 0) if len(tcs) != 1 { return false, fmt.Errorf("there should only be one top channel, not %d", len(tcs)) } if len(tcs[0].SubChans) != 1 { return false, fmt.Errorf("there should be 1 subchannel not %d", len(tcs[0].SubChans)) } for k := range tcs[0].SubChans { subConn = k } for _, e := range tcs[0].Trace.Events { if e.RefID == subConn && e.RefType != channelz.RefSubChannel { return false, fmt.Errorf("subchannel trace event shoud have RefType to be RefSubChannel") } } scm := channelz.GetSubChannel(subConn) if scm == nil { return false, fmt.Errorf("subChannel does not exist") } if scm.Trace == nil { return false, fmt.Errorf("trace for subChannel should not be empty") } if len(scm.Trace.Events) == 0 { return false, fmt.Errorf("there should be at least one trace event for subChannel not 0") } if scm.Trace.Events[0].Desc != "Subchannel Created" { return false, fmt.Errorf("the first trace event should be \"Subchannel Created\", not %q", scm.Trace.Events[0].Desc) } return true, nil }); err != nil { t.Fatal(err) } r.UpdateState(resolver.State{Addresses: []resolver.Address{}}) if err := verifyResultWithDelay(func() (bool, error) { tcs, _ := channelz.GetTopChannels(0, 0) if len(tcs) != 1 { return false, fmt.Errorf("there should only be one top channel, not %d", len(tcs)) } if len(tcs[0].SubChans) != 0 { return false, fmt.Errorf("there should be 0 subchannel not %d", len(tcs[0].SubChans)) } scm := channelz.GetSubChannel(subConn) if scm == nil { return false, fmt.Errorf("subChannel should still exist due to parent's trace reference") } if scm.Trace == nil { return false, fmt.Errorf("trace for SubChannel should not be empty") } if len(scm.Trace.Events) == 0 { return false, fmt.Errorf("there should be at least one trace event for subChannel not 0") } if got, want := scm.Trace.Events[len(scm.Trace.Events)-1].Desc, "Subchannel Deleted"; got != want { return false, fmt.Errorf("the last trace event should be %q, not %q", want, got) } return true, nil }); err != nil { t.Fatal(err) } } func (s) TestCZChannelAddressResolutionChange(t *testing.T) { czCleanup := channelz.NewChannelzStorage() defer czCleanupWrapper(czCleanup, t) e := tcpClearRREnv e.balancer = "" te := newTest(t, e) te.startServer(&testServer{security: e.security}) r, cleanup := manual.GenerateAndRegisterManualResolver() defer cleanup() addrs := []resolver.Address{{Addr: te.srvAddr}} r.InitialState(resolver.State{Addresses: addrs}) te.resolverScheme = r.Scheme() te.clientConn() defer te.tearDown() var cid int64 // Here, we just wait for all sockets to be up. In the future, if we implement // IDLE, we may need to make several rpc calls to create the sockets. if err := verifyResultWithDelay(func() (bool, error) { tcs, _ := channelz.GetTopChannels(0, 0) if len(tcs) != 1 { return false, fmt.Errorf("there should only be one top channel, not %d", len(tcs)) } cid = tcs[0].ID for i := len(tcs[0].Trace.Events) - 1; i >= 0; i-- { if strings.Contains(tcs[0].Trace.Events[i].Desc, "resolver returned new addresses") { break } if i == 0 { return false, fmt.Errorf("events do not contain expected address resolution from empty address state. 
Got: %+v", tcs[0].Trace.Events) } } return true, nil }); err != nil { t.Fatal(err) } r.UpdateState(resolver.State{Addresses: addrs, ServiceConfig: parseCfg(`{"loadBalancingPolicy": "round_robin"}`)}) if err := verifyResultWithDelay(func() (bool, error) { cm := channelz.GetChannel(cid) for i := len(cm.Trace.Events) - 1; i >= 0; i-- { if cm.Trace.Events[i].Desc == fmt.Sprintf("Channel switches to new LB policy %q", roundrobin.Name) { break } if i == 0 { return false, fmt.Errorf("events do not contain expected address resolution change of LB policy") } } return true, nil }); err != nil { t.Fatal(err) } newSC := parseCfg(`{ "methodConfig": [ { "name": [ { "service": "grpc.testing.TestService", "method": "EmptyCall" } ], "waitForReady": false, "timeout": ".001s" } ] }`) r.UpdateState(resolver.State{Addresses: addrs, ServiceConfig: newSC}) if err := verifyResultWithDelay(func() (bool, error) { cm := channelz.GetChannel(cid) var es []string for i := len(cm.Trace.Events) - 1; i >= 0; i-- { if strings.Contains(cm.Trace.Events[i].Desc, "service config updated") { break } es = append(es, cm.Trace.Events[i].Desc) if i == 0 { return false, fmt.Errorf("events do not contain expected address resolution of new service config\n Events:\n%v", strings.Join(es, "\n")) } } return true, nil }); err != nil { t.Fatal(err) } r.UpdateState(resolver.State{Addresses: []resolver.Address{}, ServiceConfig: newSC}) if err := verifyResultWithDelay(func() (bool, error) { cm := channelz.GetChannel(cid) for i := len(cm.Trace.Events) - 1; i >= 0; i-- { if strings.Contains(cm.Trace.Events[i].Desc, "resolver returned an empty address list") { break } if i == 0 { return false, fmt.Errorf("events do not contain expected address resolution of empty address") } } return true, nil }); err != nil { t.Fatal(err) } } func (s) TestCZSubChannelPickedNewAddress(t *testing.T) { czCleanup := channelz.NewChannelzStorage() defer czCleanupWrapper(czCleanup, t) e := tcpClearRREnv e.balancer = "" te := newTest(t, e) te.startServers(&testServer{security: e.security}, 3) r, cleanup := manual.GenerateAndRegisterManualResolver() defer cleanup() var svrAddrs []resolver.Address for _, a := range te.srvAddrs { svrAddrs = append(svrAddrs, resolver.Address{Addr: a}) } r.InitialState(resolver.State{Addresses: svrAddrs}) te.resolverScheme = r.Scheme() cc := te.clientConn() defer te.tearDown() tc := testpb.NewTestServiceClient(cc) // make sure the connection is up ctx, cancel := context.WithTimeout(context.Background(), time.Second) defer cancel() if _, err := tc.EmptyCall(ctx, &testpb.Empty{}); err != nil { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, ", err) } te.srvs[0].Stop() te.srvs[1].Stop() // Here, we just wait for all sockets to be up. In the future, if we implement // IDLE, we may need to make several rpc calls to create the sockets. 
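// With no balancer option set the default pick_first is used, so a single subchannel holds all three addresses; stopping the first two servers forces it onto the third, which is the trace event checked below.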
if err := verifyResultWithDelay(func() (bool, error) { tcs, _ := channelz.GetTopChannels(0, 0) if len(tcs) != 1 { return false, fmt.Errorf("there should only be one top channel, not %d", len(tcs)) } if len(tcs[0].SubChans) != 1 { return false, fmt.Errorf("there should be 1 subchannel not %d", len(tcs[0].SubChans)) } var subConn int64 for k := range tcs[0].SubChans { subConn = k } scm := channelz.GetSubChannel(subConn) if scm.Trace == nil { return false, fmt.Errorf("trace for SubChannel should not be empty") } if len(scm.Trace.Events) == 0 { return false, fmt.Errorf("there should be at least one trace event for subChannel not 0") } for i := len(scm.Trace.Events) - 1; i >= 0; i-- { if scm.Trace.Events[i].Desc == fmt.Sprintf("Subchannel picks a new address %q to connect", te.srvAddrs[2]) { break } if i == 0 { return false, fmt.Errorf("events do not contain expected address resolution of subchannel picked new address") } } return true, nil }); err != nil { t.Fatal(err) } } func (s) TestCZSubChannelConnectivityState(t *testing.T) { czCleanup := channelz.NewChannelzStorage() defer czCleanupWrapper(czCleanup, t) e := tcpClearRREnv te := newTest(t, e) te.startServer(&testServer{security: e.security}) r, cleanup := manual.GenerateAndRegisterManualResolver() defer cleanup() r.InitialState(resolver.State{Addresses: []resolver.Address{{Addr: te.srvAddr}}}) te.resolverScheme = r.Scheme() cc := te.clientConn() defer te.tearDown() tc := testpb.NewTestServiceClient(cc) // make sure the connection is up ctx, cancel := context.WithTimeout(context.Background(), time.Second) defer cancel() if _, err := tc.EmptyCall(ctx, &testpb.Empty{}); err != nil { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, ", err) } var subConn int64 te.srv.Stop() if err := verifyResultWithDelay(func() (bool, error) { // we need to obtain the SubChannel id before it gets deleted from Channel's children list (due // to effect of r.UpdateState(resolver.State{Addresses:[]resolver.Address{}})) if subConn == 0 { tcs, _ := channelz.GetTopChannels(0, 0) if len(tcs) != 1 { return false, fmt.Errorf("there should only be one top channel, not %d", len(tcs)) } if len(tcs[0].SubChans) != 1 { return false, fmt.Errorf("there should be 1 subchannel not %d", len(tcs[0].SubChans)) } for k := range tcs[0].SubChans { // get the SubChannel id for further trace inquiry. subConn = k } } scm := channelz.GetSubChannel(subConn) if scm == nil { return false, fmt.Errorf("subChannel should still exist due to parent's trace reference") } if scm.Trace == nil { return false, fmt.Errorf("trace for SubChannel should not be empty") } if len(scm.Trace.Events) == 0 { return false, fmt.Errorf("there should be at least one trace event for subChannel not 0") } var ready, connecting, transient, shutdown int for _, e := range scm.Trace.Events { if e.Desc == fmt.Sprintf("Subchannel Connectivity change to %v", connectivity.TransientFailure) { transient++ } } // Make sure the SubChannel has already seen transient failure before shutting it down through // r.UpdateState(resolver.State{Addresses:[]resolver.Address{}}). 
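// If the empty-address update raced ahead of the failure caused by stopping the server, the transient count asserted below could legitimately be zero, so the test gates on it here.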
if transient == 0 { return false, fmt.Errorf("transient failure has not happened on SubChannel yet") } transient = 0 r.UpdateState(resolver.State{Addresses: []resolver.Address{}}) for _, e := range scm.Trace.Events { if e.Desc == fmt.Sprintf("Subchannel Connectivity change to %v", connectivity.Ready) { ready++ } if e.Desc == fmt.Sprintf("Subchannel Connectivity change to %v", connectivity.Connecting) { connecting++ } if e.Desc == fmt.Sprintf("Subchannel Connectivity change to %v", connectivity.TransientFailure) { transient++ } if e.Desc == fmt.Sprintf("Subchannel Connectivity change to %v", connectivity.Shutdown) { shutdown++ } } // example: // Subchannel Created // Subchannel's connectivity state changed to CONNECTING // Subchannel picked a new address: "localhost:36011" // Subchannel's connectivity state changed to READY // Subchannel's connectivity state changed to TRANSIENT_FAILURE // Subchannel's connectivity state changed to CONNECTING // Subchannel picked a new address: "localhost:36011" // Subchannel's connectivity state changed to SHUTDOWN // Subchannel Deleted if ready != 1 || connecting < 1 || transient < 1 || shutdown != 1 { return false, fmt.Errorf("got: ready = %d, connecting = %d, transient = %d, shutdown = %d, want: 1, >=1, >=1, 1", ready, connecting, transient, shutdown) } return true, nil }); err != nil { t.Fatal(err) } } func (s) TestCZChannelConnectivityState(t *testing.T) { czCleanup := channelz.NewChannelzStorage() defer czCleanupWrapper(czCleanup, t) e := tcpClearRREnv te := newTest(t, e) te.startServer(&testServer{security: e.security}) r, cleanup := manual.GenerateAndRegisterManualResolver() defer cleanup() r.InitialState(resolver.State{Addresses: []resolver.Address{{Addr: te.srvAddr}}}) te.resolverScheme = r.Scheme() cc := te.clientConn() defer te.tearDown() tc := testpb.NewTestServiceClient(cc) // make sure the connection is up ctx, cancel := context.WithTimeout(context.Background(), time.Second) defer cancel() if _, err := tc.EmptyCall(ctx, &testpb.Empty{}); err != nil { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, ", err) } te.srv.Stop() if err := verifyResultWithDelay(func() (bool, error) { tcs, _ := channelz.GetTopChannels(0, 0) if len(tcs) != 1 { return false, fmt.Errorf("there should only be one top channel, not %d", len(tcs)) } var ready, connecting, transient int for _, e := range tcs[0].Trace.Events { if e.Desc == fmt.Sprintf("Channel Connectivity change to %v", connectivity.Ready) { ready++ } if e.Desc == fmt.Sprintf("Channel Connectivity change to %v", connectivity.Connecting) { connecting++ } if e.Desc == fmt.Sprintf("Channel Connectivity change to %v", connectivity.TransientFailure) { transient++ } } // example: // Channel Created // Adressses resolved (from empty address state): "localhost:40467" // SubChannel (id: 4[]) Created // Channel's connectivity state changed to CONNECTING // Channel's connectivity state changed to READY // Channel's connectivity state changed to TRANSIENT_FAILURE // Channel's connectivity state changed to CONNECTING // Channel's connectivity state changed to TRANSIENT_FAILURE if ready != 1 || connecting < 1 || transient < 1 { return false, fmt.Errorf("got: ready = %d, connecting = %d, transient = %d, want: 1, >=1, >=1", ready, connecting, transient) } return true, nil }); err != nil { t.Fatal(err) } } func (s) TestCZTraceOverwriteChannelDeletion(t *testing.T) { czCleanup := channelz.NewChannelzStorage() defer czCleanupWrapper(czCleanup, t) e := tcpClearRREnv // avoid calling API to set balancer type, which will 
void service config's change of balancer. e.balancer = "" te := newTest(t, e) channelz.SetMaxTraceEntry(1) defer channelz.ResetMaxTraceEntryToDefault() r, cleanup := manual.GenerateAndRegisterManualResolver() defer cleanup() resolvedAddrs := []resolver.Address{{Addr: "127.0.0.1:0", Type: resolver.GRPCLB, ServerName: "grpclb.server"}} r.InitialState(resolver.State{Addresses: resolvedAddrs}) te.resolverScheme = r.Scheme() te.clientConn() defer te.tearDown() var nestedConn int64 if err := verifyResultWithDelay(func() (bool, error) { tcs, _ := channelz.GetTopChannels(0, 0) if len(tcs) != 1 { return false, fmt.Errorf("there should only be one top channel, not %d", len(tcs)) } if len(tcs[0].NestedChans) != 1 { return false, fmt.Errorf("there should be one nested channel from grpclb, not %d", len(tcs[0].NestedChans)) } for k := range tcs[0].NestedChans { nestedConn = k } return true, nil }); err != nil { t.Fatal(err) } r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: "127.0.0.1:0"}}, ServiceConfig: parseCfg(`{"loadBalancingPolicy": "round_robin"}`)}) // wait for the shutdown of grpclb balancer if err := verifyResultWithDelay(func() (bool, error) { tcs, _ := channelz.GetTopChannels(0, 0) if len(tcs) != 1 { return false, fmt.Errorf("there should only be one top channel, not %d", len(tcs)) } if len(tcs[0].NestedChans) != 0 { return false, fmt.Errorf("there should be 0 nested channel from grpclb, not %d", len(tcs[0].NestedChans)) } return true, nil }); err != nil { t.Fatal(err) } // verify that the nested channel no longer exist due to trace referencing it got overwritten. if err := verifyResultWithDelay(func() (bool, error) { cm := channelz.GetChannel(nestedConn) if cm != nil { return false, fmt.Errorf("nested channel should have been deleted since its parent's trace should not contain any reference to it anymore") } return true, nil }); err != nil { t.Fatal(err) } } func (s) TestCZTraceOverwriteSubChannelDeletion(t *testing.T) { czCleanup := channelz.NewChannelzStorage() defer czCleanupWrapper(czCleanup, t) e := tcpClearRREnv te := newTest(t, e) channelz.SetMaxTraceEntry(1) defer channelz.ResetMaxTraceEntryToDefault() te.startServer(&testServer{security: e.security}) r, cleanup := manual.GenerateAndRegisterManualResolver() defer cleanup() r.InitialState(resolver.State{Addresses: []resolver.Address{{Addr: te.srvAddr}}}) te.resolverScheme = r.Scheme() te.clientConn() defer te.tearDown() var subConn int64 // Here, we just wait for all sockets to be up. In the future, if we implement // IDLE, we may need to make several rpc calls to create the sockets. if err := verifyResultWithDelay(func() (bool, error) { tcs, _ := channelz.GetTopChannels(0, 0) if len(tcs) != 1 { return false, fmt.Errorf("there should only be one top channel, not %d", len(tcs)) } if len(tcs[0].SubChans) != 1 { return false, fmt.Errorf("there should be 1 subchannel not %d", len(tcs[0].SubChans)) } for k := range tcs[0].SubChans { subConn = k } return true, nil }); err != nil { t.Fatal(err) } r.UpdateState(resolver.State{Addresses: []resolver.Address{}}) if err := verifyResultWithDelay(func() (bool, error) { tcs, _ := channelz.GetTopChannels(0, 0) if len(tcs) != 1 { return false, fmt.Errorf("there should only be one top channel, not %d", len(tcs)) } if len(tcs[0].SubChans) != 0 { return false, fmt.Errorf("there should be 0 subchannel not %d", len(tcs[0].SubChans)) } return true, nil }); err != nil { t.Fatal(err) } // verify that the subchannel no longer exist due to trace referencing it got overwritten. 
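// With SetMaxTraceEntry(1), any later trace event evicts the creation event that referenced the subchannel, so the parent no longer pins it and channelz can drop the already-deleted subchannel.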
if err := verifyResultWithDelay(func() (bool, error) { cm := channelz.GetChannel(subConn) if cm != nil { return false, fmt.Errorf("subchannel should have been deleted since its parent's trace should not contain any reference to it anymore") } return true, nil }); err != nil { t.Fatal(err) } } func (s) TestCZTraceTopChannelDeletionTraceClear(t *testing.T) { czCleanup := channelz.NewChannelzStorage() defer czCleanupWrapper(czCleanup, t) e := tcpClearRREnv te := newTest(t, e) te.startServer(&testServer{security: e.security}) r, cleanup := manual.GenerateAndRegisterManualResolver() defer cleanup() r.InitialState(resolver.State{Addresses: []resolver.Address{{Addr: te.srvAddr}}}) te.resolverScheme = r.Scheme() te.clientConn() var subConn int64 // Here, we just wait for all sockets to be up. In the future, if we implement // IDLE, we may need to make several rpc calls to create the sockets. if err := verifyResultWithDelay(func() (bool, error) { tcs, _ := channelz.GetTopChannels(0, 0) if len(tcs) != 1 { return false, fmt.Errorf("there should only be one top channel, not %d", len(tcs)) } if len(tcs[0].SubChans) != 1 { return false, fmt.Errorf("there should be 1 subchannel not %d", len(tcs[0].SubChans)) } for k := range tcs[0].SubChans { subConn = k } return true, nil }); err != nil { t.Fatal(err) } te.tearDown() // verify that the subchannel no longer exist due to parent channel got deleted and its trace cleared. if err := verifyResultWithDelay(func() (bool, error) { cm := channelz.GetChannel(subConn) if cm != nil { return false, fmt.Errorf("subchannel should have been deleted since its parent's trace should not contain any reference to it anymore") } return true, nil }); err != nil { t.Fatal(err) } } grpc-go-1.22.1/test/codec_perf/000077500000000000000000000000001351635773100162165ustar00rootroot00000000000000grpc-go-1.22.1/test/codec_perf/perf.pb.go000066400000000000000000000051501351635773100201020ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: codec_perf/perf.proto package codec_perf import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package // Buffer is a message that contains a body of bytes that is used to exercise // encoding and decoding overheads. 
type Buffer struct { Body []byte `protobuf:"bytes,1,opt,name=body,proto3" json:"body,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Buffer) Reset() { *m = Buffer{} } func (m *Buffer) String() string { return proto.CompactTextString(m) } func (*Buffer) ProtoMessage() {} func (*Buffer) Descriptor() ([]byte, []int) { return fileDescriptor_perf_6cc81a33b24d08e7, []int{0} } func (m *Buffer) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Buffer.Unmarshal(m, b) } func (m *Buffer) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Buffer.Marshal(b, m, deterministic) } func (dst *Buffer) XXX_Merge(src proto.Message) { xxx_messageInfo_Buffer.Merge(dst, src) } func (m *Buffer) XXX_Size() int { return xxx_messageInfo_Buffer.Size(m) } func (m *Buffer) XXX_DiscardUnknown() { xxx_messageInfo_Buffer.DiscardUnknown(m) } var xxx_messageInfo_Buffer proto.InternalMessageInfo func (m *Buffer) GetBody() []byte { if m != nil { return m.Body } return nil } func init() { proto.RegisterType((*Buffer)(nil), "codec.perf.Buffer") } func init() { proto.RegisterFile("codec_perf/perf.proto", fileDescriptor_perf_6cc81a33b24d08e7) } var fileDescriptor_perf_6cc81a33b24d08e7 = []byte{ // 83 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x12, 0x4d, 0xce, 0x4f, 0x49, 0x4d, 0x8e, 0x2f, 0x48, 0x2d, 0x4a, 0xd3, 0x07, 0x11, 0x7a, 0x05, 0x45, 0xf9, 0x25, 0xf9, 0x42, 0x5c, 0x60, 0x61, 0x3d, 0x90, 0x88, 0x92, 0x0c, 0x17, 0x9b, 0x53, 0x69, 0x5a, 0x5a, 0x6a, 0x91, 0x90, 0x10, 0x17, 0x4b, 0x52, 0x7e, 0x4a, 0xa5, 0x04, 0xa3, 0x02, 0xa3, 0x06, 0x4f, 0x10, 0x98, 0x9d, 0xc4, 0x06, 0xd6, 0x60, 0x0c, 0x08, 0x00, 0x00, 0xff, 0xff, 0xa3, 0x5f, 0x4f, 0x3c, 0x49, 0x00, 0x00, 0x00, } grpc-go-1.22.1/test/codec_perf/perf.proto000066400000000000000000000015741351635773100202460ustar00rootroot00000000000000// Copyright 2017 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. // Messages used for performance tests that may not reference grpc directly for // reasons of import cycles. syntax = "proto3"; package codec.perf; // Buffer is a message that contains a body of bytes that is used to exercise // encoding and decoding overheads. message Buffer { bytes body = 1; } grpc-go-1.22.1/test/context_canceled_test.go000066400000000000000000000057321351635773100210240ustar00rootroot00000000000000/* * * Copyright 2019 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package test import ( "context" "testing" "time" "google.golang.org/grpc/codes" "google.golang.org/grpc/metadata" "google.golang.org/grpc/status" testpb "google.golang.org/grpc/test/grpc_testing" ) func (s) TestContextCanceled(t *testing.T) { ss := &stubServer{ fullDuplexCall: func(stream testpb.TestService_FullDuplexCallServer) error { stream.SetTrailer(metadata.New(map[string]string{"a": "b"})) return status.Error(codes.PermissionDenied, "perm denied") }, } if err := ss.Start(nil); err != nil { t.Fatalf("Error starting endpoint server: %v", err) } defer ss.Stop() // Runs 10 rounds of tests with the given delay and returns counts of status codes. // Fails in case of trailer/status code inconsistency. const cntRetry uint = 10 runTest := func(delay time.Duration) (cntCanceled, cntPermDenied uint) { for i := uint(0); i < cntRetry; i++ { ctx, cancel := context.WithTimeout(context.Background(), delay) defer cancel() str, err := ss.client.FullDuplexCall(ctx) if err != nil { continue } _, err = str.Recv() if err == nil { t.Fatalf("non-nil error expected from Recv()") } _, trlOk := str.Trailer()["a"] switch status.Code(err) { case codes.PermissionDenied: if !trlOk { t.Fatalf(`status err: %v; wanted key "a" in trailer but didn't get it`, err) } cntPermDenied++ case codes.DeadlineExceeded: if trlOk { t.Fatalf(`status err: %v; didn't want key "a" in trailer but got it`, err) } cntCanceled++ default: t.Fatalf(`unexpected status err: %v`, err) } } return cntCanceled, cntPermDenied } // Tries to find the delay that causes canceled/perm denied race. canceledOk, permDeniedOk := false, false for lower, upper := time.Duration(0), 2*time.Millisecond; lower <= upper; { delay := lower + (upper-lower)/2 cntCanceled, cntPermDenied := runTest(delay) if cntPermDenied > 0 && cntCanceled > 0 { // Delay that causes the race is found. return } // Set OK flags. if cntCanceled > 0 { canceledOk = true } if cntPermDenied > 0 { permDeniedOk = true } if cntPermDenied == 0 { // No perm denied, increase the delay. lower += (upper-lower)/10 + 1 } else { // All perm denied, decrease the delay. upper -= (upper-lower)/10 + 1 } } if !canceledOk || !permDeniedOk { t.Fatalf(`couldn't find the delay that causes canceled/perm denied race.`) } } grpc-go-1.22.1/test/creds_test.go000066400000000000000000000072351351635773100166220ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package test // TODO(https://github.com/grpc/grpc-go/issues/2330): move all creds related // tests to this file. 
import ( "context" "testing" "google.golang.org/grpc" "google.golang.org/grpc/credentials" testpb "google.golang.org/grpc/test/grpc_testing" "google.golang.org/grpc/testdata" ) const ( bundlePerRPCOnly = "perRPCOnly" bundleTLSOnly = "tlsOnly" ) type testCredsBundle struct { t *testing.T mode string } func (c *testCredsBundle) TransportCredentials() credentials.TransportCredentials { if c.mode == bundlePerRPCOnly { return nil } creds, err := credentials.NewClientTLSFromFile(testdata.Path("ca.pem"), "x.test.youtube.com") if err != nil { c.t.Logf("Failed to load credentials: %v", err) return nil } return creds } func (c *testCredsBundle) PerRPCCredentials() credentials.PerRPCCredentials { if c.mode == bundleTLSOnly { return nil } return testPerRPCCredentials{} } func (c *testCredsBundle) NewWithMode(mode string) (credentials.Bundle, error) { return &testCredsBundle{mode: mode}, nil } func (s) TestCredsBundleBoth(t *testing.T) { te := newTest(t, env{name: "creds-bundle", network: "tcp", balancer: "v1", security: "empty"}) te.tapHandle = authHandle te.customDialOptions = []grpc.DialOption{ grpc.WithCredentialsBundle(&testCredsBundle{t: t}), } creds, err := credentials.NewServerTLSFromFile(testdata.Path("server1.pem"), testdata.Path("server1.key")) if err != nil { t.Fatalf("Failed to generate credentials %v", err) } te.customServerOptions = []grpc.ServerOption{ grpc.Creds(creds), } te.startServer(&testServer{}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); err != nil { t.Fatalf("Test failed. Reason: %v", err) } } func (s) TestCredsBundleTransportCredentials(t *testing.T) { te := newTest(t, env{name: "creds-bundle", network: "tcp", balancer: "v1", security: "empty"}) te.customDialOptions = []grpc.DialOption{ grpc.WithCredentialsBundle(&testCredsBundle{t: t, mode: bundleTLSOnly}), } creds, err := credentials.NewServerTLSFromFile(testdata.Path("server1.pem"), testdata.Path("server1.key")) if err != nil { t.Fatalf("Failed to generate credentials %v", err) } te.customServerOptions = []grpc.ServerOption{ grpc.Creds(creds), } te.startServer(&testServer{}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); err != nil { t.Fatalf("Test failed. Reason: %v", err) } } func (s) TestCredsBundlePerRPCCredentials(t *testing.T) { te := newTest(t, env{name: "creds-bundle", network: "tcp", balancer: "v1", security: "empty"}) te.tapHandle = authHandle te.customDialOptions = []grpc.DialOption{ grpc.WithCredentialsBundle(&testCredsBundle{t: t, mode: bundlePerRPCOnly}), } te.startServer(&testServer{}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); err != nil { t.Fatalf("Test failed. Reason: %v", err) } } grpc-go-1.22.1/test/end2end_test.go000066400000000000000000007035151351635773100170450ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
* See the License for the specific language governing permissions and * limitations under the License. * */ //go:generate protoc --go_out=plugins=grpc:. codec_perf/perf.proto //go:generate protoc --go_out=plugins=grpc:. grpc_testing/test.proto package test import ( "bufio" "bytes" "context" "crypto/tls" "errors" "flag" "fmt" "io" "math" "net" "net/http" "os" "reflect" "runtime" "strings" "sync" "sync/atomic" "syscall" "testing" "time" "github.com/golang/protobuf/proto" anypb "github.com/golang/protobuf/ptypes/any" "golang.org/x/net/http2" "golang.org/x/net/http2/hpack" spb "google.golang.org/genproto/googleapis/rpc/status" "google.golang.org/grpc" "google.golang.org/grpc/balancer/roundrobin" "google.golang.org/grpc/codes" "google.golang.org/grpc/connectivity" "google.golang.org/grpc/credentials" "google.golang.org/grpc/encoding" _ "google.golang.org/grpc/encoding/gzip" _ "google.golang.org/grpc/grpclog/glogger" "google.golang.org/grpc/health" healthgrpc "google.golang.org/grpc/health/grpc_health_v1" healthpb "google.golang.org/grpc/health/grpc_health_v1" "google.golang.org/grpc/internal/channelz" "google.golang.org/grpc/internal/grpcsync" "google.golang.org/grpc/internal/grpctest" "google.golang.org/grpc/internal/leakcheck" "google.golang.org/grpc/internal/testutils" "google.golang.org/grpc/internal/transport" "google.golang.org/grpc/metadata" "google.golang.org/grpc/peer" "google.golang.org/grpc/resolver" "google.golang.org/grpc/resolver/manual" _ "google.golang.org/grpc/resolver/passthrough" "google.golang.org/grpc/serviceconfig" "google.golang.org/grpc/stats" "google.golang.org/grpc/status" "google.golang.org/grpc/tap" testpb "google.golang.org/grpc/test/grpc_testing" "google.golang.org/grpc/testdata" ) func init() { channelz.TurnOn() } type s struct{} var lcFailed uint32 type errorer struct { t *testing.T } func (e errorer) Errorf(format string, args ...interface{}) { atomic.StoreUint32(&lcFailed, 1) e.t.Errorf(format, args...) } func (s) Teardown(t *testing.T) { if atomic.LoadUint32(&lcFailed) == 1 { return } leakcheck.Check(errorer{t: t}) if atomic.LoadUint32(&lcFailed) == 1 { t.Log("Leak check disabled for future tests") } } func Test(t *testing.T) { grpctest.RunSubTests(t, s{}) } var ( // For headers: testMetadata = metadata.MD{ "key1": []string{"value1"}, "key2": []string{"value2"}, "key3-bin": []string{"binvalue1", string([]byte{1, 2, 3})}, } testMetadata2 = metadata.MD{ "key1": []string{"value12"}, "key2": []string{"value22"}, } // For trailers: testTrailerMetadata = metadata.MD{ "tkey1": []string{"trailerValue1"}, "tkey2": []string{"trailerValue2"}, "tkey3-bin": []string{"trailerbinvalue1", string([]byte{3, 2, 1})}, } testTrailerMetadata2 = metadata.MD{ "tkey1": []string{"trailerValue12"}, "tkey2": []string{"trailerValue22"}, } // capital "Key" is illegal in HTTP/2. malformedHTTP2Metadata = metadata.MD{ "Key": []string{"foo"}, } testAppUA = "myApp1/1.0 myApp2/0.9" failAppUA = "fail-this-RPC" detailedError = status.ErrorProto(&spb.Status{ Code: int32(codes.DataLoss), Message: "error for testing: " + failAppUA, Details: []*anypb.Any{{ TypeUrl: "url", Value: []byte{6, 0, 0, 6, 1, 3}, }}, }) ) var raceMode bool // set by race.go in race mode type testServer struct { security string // indicate the authentication protocol used by this server. earlyFail bool // whether to error out the execution of a service handler prematurely. setAndSendHeader bool // whether to call setHeader and sendHeader. setHeaderOnly bool // whether to only call setHeader, not sendHeader. 
multipleSetTrailer bool // whether to call setTrailer multiple times. unaryCallSleepTime time.Duration } func (s *testServer) EmptyCall(ctx context.Context, in *testpb.Empty) (*testpb.Empty, error) { if md, ok := metadata.FromIncomingContext(ctx); ok { // For testing purpose, returns an error if user-agent is failAppUA. // To test that client gets the correct error. if ua, ok := md["user-agent"]; !ok || strings.HasPrefix(ua[0], failAppUA) { return nil, detailedError } var str []string for _, entry := range md["user-agent"] { str = append(str, "ua", entry) } grpc.SendHeader(ctx, metadata.Pairs(str...)) } return new(testpb.Empty), nil } func newPayload(t testpb.PayloadType, size int32) (*testpb.Payload, error) { if size < 0 { return nil, fmt.Errorf("requested a response with invalid length %d", size) } body := make([]byte, size) switch t { case testpb.PayloadType_COMPRESSABLE: case testpb.PayloadType_UNCOMPRESSABLE: return nil, fmt.Errorf("PayloadType UNCOMPRESSABLE is not supported") default: return nil, fmt.Errorf("unsupported payload type: %d", t) } return &testpb.Payload{ Type: t, Body: body, }, nil } func (s *testServer) UnaryCall(ctx context.Context, in *testpb.SimpleRequest) (*testpb.SimpleResponse, error) { md, ok := metadata.FromIncomingContext(ctx) if ok { if _, exists := md[":authority"]; !exists { return nil, status.Errorf(codes.DataLoss, "expected an :authority metadata: %v", md) } if s.setAndSendHeader { if err := grpc.SetHeader(ctx, md); err != nil { return nil, status.Errorf(status.Code(err), "grpc.SetHeader(_, %v) = %v, want ", md, err) } if err := grpc.SendHeader(ctx, testMetadata2); err != nil { return nil, status.Errorf(status.Code(err), "grpc.SendHeader(_, %v) = %v, want ", testMetadata2, err) } } else if s.setHeaderOnly { if err := grpc.SetHeader(ctx, md); err != nil { return nil, status.Errorf(status.Code(err), "grpc.SetHeader(_, %v) = %v, want ", md, err) } if err := grpc.SetHeader(ctx, testMetadata2); err != nil { return nil, status.Errorf(status.Code(err), "grpc.SetHeader(_, %v) = %v, want ", testMetadata2, err) } } else { if err := grpc.SendHeader(ctx, md); err != nil { return nil, status.Errorf(status.Code(err), "grpc.SendHeader(_, %v) = %v, want ", md, err) } } if err := grpc.SetTrailer(ctx, testTrailerMetadata); err != nil { return nil, status.Errorf(status.Code(err), "grpc.SetTrailer(_, %v) = %v, want ", testTrailerMetadata, err) } if s.multipleSetTrailer { if err := grpc.SetTrailer(ctx, testTrailerMetadata2); err != nil { return nil, status.Errorf(status.Code(err), "grpc.SetTrailer(_, %v) = %v, want ", testTrailerMetadata2, err) } } } pr, ok := peer.FromContext(ctx) if !ok { return nil, status.Error(codes.DataLoss, "failed to get peer from ctx") } if pr.Addr == net.Addr(nil) { return nil, status.Error(codes.DataLoss, "failed to get peer address") } if s.security != "" { // Check Auth info var authType, serverName string switch info := pr.AuthInfo.(type) { case credentials.TLSInfo: authType = info.AuthType() serverName = info.State.ServerName default: return nil, status.Error(codes.Unauthenticated, "Unknown AuthInfo type") } if authType != s.security { return nil, status.Errorf(codes.Unauthenticated, "Wrong auth type: got %q, want %q", authType, s.security) } if serverName != "x.test.youtube.com" { return nil, status.Errorf(codes.Unauthenticated, "Unknown server name %q", serverName) } } // Simulate some service delay. 
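// unaryCallSleepTime is zero unless a test sets it explicitly, so most tests incur no delay here.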
time.Sleep(s.unaryCallSleepTime) payload, err := newPayload(in.GetResponseType(), in.GetResponseSize()) if err != nil { return nil, err } return &testpb.SimpleResponse{ Payload: payload, }, nil } func (s *testServer) StreamingOutputCall(args *testpb.StreamingOutputCallRequest, stream testpb.TestService_StreamingOutputCallServer) error { if md, ok := metadata.FromIncomingContext(stream.Context()); ok { if _, exists := md[":authority"]; !exists { return status.Errorf(codes.DataLoss, "expected an :authority metadata: %v", md) } // For testing purpose, returns an error if user-agent is failAppUA. // To test that client gets the correct error. if ua, ok := md["user-agent"]; !ok || strings.HasPrefix(ua[0], failAppUA) { return status.Error(codes.DataLoss, "error for testing: "+failAppUA) } } cs := args.GetResponseParameters() for _, c := range cs { if us := c.GetIntervalUs(); us > 0 { time.Sleep(time.Duration(us) * time.Microsecond) } payload, err := newPayload(args.GetResponseType(), c.GetSize()) if err != nil { return err } if err := stream.Send(&testpb.StreamingOutputCallResponse{ Payload: payload, }); err != nil { return err } } return nil } func (s *testServer) StreamingInputCall(stream testpb.TestService_StreamingInputCallServer) error { var sum int for { in, err := stream.Recv() if err == io.EOF { return stream.SendAndClose(&testpb.StreamingInputCallResponse{ AggregatedPayloadSize: int32(sum), }) } if err != nil { return err } p := in.GetPayload().GetBody() sum += len(p) if s.earlyFail { return status.Error(codes.NotFound, "not found") } } } func (s *testServer) FullDuplexCall(stream testpb.TestService_FullDuplexCallServer) error { md, ok := metadata.FromIncomingContext(stream.Context()) if ok { if s.setAndSendHeader { if err := stream.SetHeader(md); err != nil { return status.Errorf(status.Code(err), "%v.SetHeader(_, %v) = %v, want ", stream, md, err) } if err := stream.SendHeader(testMetadata2); err != nil { return status.Errorf(status.Code(err), "%v.SendHeader(_, %v) = %v, want ", stream, testMetadata2, err) } } else if s.setHeaderOnly { if err := stream.SetHeader(md); err != nil { return status.Errorf(status.Code(err), "%v.SetHeader(_, %v) = %v, want ", stream, md, err) } if err := stream.SetHeader(testMetadata2); err != nil { return status.Errorf(status.Code(err), "%v.SetHeader(_, %v) = %v, want ", stream, testMetadata2, err) } } else { if err := stream.SendHeader(md); err != nil { return status.Errorf(status.Code(err), "%v.SendHeader(%v) = %v, want %v", stream, md, err, nil) } } stream.SetTrailer(testTrailerMetadata) if s.multipleSetTrailer { stream.SetTrailer(testTrailerMetadata2) } } for { in, err := stream.Recv() if err == io.EOF { // read done. return nil } if err != nil { // to facilitate testSvrWriteStatusEarlyWrite if status.Code(err) == codes.ResourceExhausted { return status.Errorf(codes.Internal, "fake error for test testSvrWriteStatusEarlyWrite. true error: %s", err.Error()) } return err } cs := in.GetResponseParameters() for _, c := range cs { if us := c.GetIntervalUs(); us > 0 { time.Sleep(time.Duration(us) * time.Microsecond) } payload, err := newPayload(in.GetResponseType(), c.GetSize()) if err != nil { return err } if err := stream.Send(&testpb.StreamingOutputCallResponse{ Payload: payload, }); err != nil { // to facilitate testSvrWriteStatusEarlyWrite if status.Code(err) == codes.ResourceExhausted { return status.Errorf(codes.Internal, "fake error for test testSvrWriteStatusEarlyWrite. 
true error: %s", err.Error()) } return err } } } } func (s *testServer) HalfDuplexCall(stream testpb.TestService_HalfDuplexCallServer) error { var msgBuf []*testpb.StreamingOutputCallRequest for { in, err := stream.Recv() if err == io.EOF { // read done. break } if err != nil { return err } msgBuf = append(msgBuf, in) } for _, m := range msgBuf { cs := m.GetResponseParameters() for _, c := range cs { if us := c.GetIntervalUs(); us > 0 { time.Sleep(time.Duration(us) * time.Microsecond) } payload, err := newPayload(m.GetResponseType(), c.GetSize()) if err != nil { return err } if err := stream.Send(&testpb.StreamingOutputCallResponse{ Payload: payload, }); err != nil { return err } } } return nil } type env struct { name string network string // The type of network such as tcp, unix, etc. security string // The security protocol such as TLS, SSH, etc. httpHandler bool // whether to use the http.Handler ServerTransport; requires TLS balancer string // One of "round_robin", "pick_first", "v1", or "". customDialer func(string, string, time.Duration) (net.Conn, error) } func (e env) runnable() bool { if runtime.GOOS == "windows" && e.network == "unix" { return false } return true } func (e env) dialer(addr string, timeout time.Duration) (net.Conn, error) { if e.customDialer != nil { return e.customDialer(e.network, addr, timeout) } return net.DialTimeout(e.network, addr, timeout) } var ( tcpClearEnv = env{name: "tcp-clear-v1-balancer", network: "tcp", balancer: "v1"} tcpTLSEnv = env{name: "tcp-tls-v1-balancer", network: "tcp", security: "tls", balancer: "v1"} tcpClearRREnv = env{name: "tcp-clear", network: "tcp", balancer: "round_robin"} tcpTLSRREnv = env{name: "tcp-tls", network: "tcp", security: "tls", balancer: "round_robin"} handlerEnv = env{name: "handler-tls", network: "tcp", security: "tls", httpHandler: true, balancer: "round_robin"} noBalancerEnv = env{name: "no-balancer", network: "tcp", security: "tls"} allEnv = []env{tcpClearEnv, tcpTLSEnv, tcpClearRREnv, tcpTLSRREnv, handlerEnv, noBalancerEnv} ) var onlyEnv = flag.String("only_env", "", "If non-empty, one of 'tcp-clear', 'tcp-tls', 'unix-clear', 'unix-tls', or 'handler-tls' to only run the tests for that environment. Empty means all.") func listTestEnv() (envs []env) { if *onlyEnv != "" { for _, e := range allEnv { if e.name == *onlyEnv { if !e.runnable() { panic(fmt.Sprintf("--only_env environment %q does not run on %s", *onlyEnv, runtime.GOOS)) } return []env{e} } } panic(fmt.Sprintf("invalid --only_env value %q", *onlyEnv)) } for _, e := range allEnv { if e.runnable() { envs = append(envs, e) } } return envs } // test is an end-to-end test. It should be created with the newTest // func, modified as needed, and then started with its startServer method. // It should be cleaned up with the tearDown method. type test struct { // The following are setup in newTest(). t *testing.T e env ctx context.Context // valid for life of test, before tearDown cancel context.CancelFunc // The following knobs are for the server-side, and should be set after // calling newTest() and before calling startServer(). // whether or not to expose the server's health via the default health // service implementation. enableHealthServer bool // In almost all cases, one should set the 'enableHealthServer' flag above to // expose the server's health using the default health service // implementation. This should only be used when a non-default health service // implementation is required. 
healthServer healthpb.HealthServer maxStream uint32 tapHandle tap.ServerInHandle maxServerMsgSize *int maxServerReceiveMsgSize *int maxServerSendMsgSize *int maxServerHeaderListSize *uint32 // Used to test the deprecated API WithCompressor and WithDecompressor. serverCompression bool unknownHandler grpc.StreamHandler unaryServerInt grpc.UnaryServerInterceptor streamServerInt grpc.StreamServerInterceptor serverInitialWindowSize int32 serverInitialConnWindowSize int32 customServerOptions []grpc.ServerOption // The following knobs are for the client-side, and should be set after // calling newTest() and before calling clientConn(). maxClientMsgSize *int maxClientReceiveMsgSize *int maxClientSendMsgSize *int maxClientHeaderListSize *uint32 userAgent string // Used to test the deprecated API WithCompressor and WithDecompressor. clientCompression bool // Used to test the new compressor registration API UseCompressor. clientUseCompression bool // clientNopCompression is set to create a compressor whose type is not supported. clientNopCompression bool unaryClientInt grpc.UnaryClientInterceptor streamClientInt grpc.StreamClientInterceptor sc <-chan grpc.ServiceConfig customCodec encoding.Codec clientInitialWindowSize int32 clientInitialConnWindowSize int32 perRPCCreds credentials.PerRPCCredentials customDialOptions []grpc.DialOption resolverScheme string // All test dialing is blocking by default. Set this to true if dial // should be non-blocking. nonBlockingDial bool // These are are set once startServer is called. The common case is to have // only one testServer. srv stopper hSrv healthpb.HealthServer srvAddr string // These are are set once startServers is called. srvs []stopper hSrvs []healthpb.HealthServer srvAddrs []string cc *grpc.ClientConn // nil until requested via clientConn restoreLogs func() // nil unless declareLogNoise is used } type stopper interface { Stop() GracefulStop() } func (te *test) tearDown() { if te.cancel != nil { te.cancel() te.cancel = nil } if te.cc != nil { te.cc.Close() te.cc = nil } if te.restoreLogs != nil { te.restoreLogs() te.restoreLogs = nil } if te.srv != nil { te.srv.Stop() } for _, s := range te.srvs { s.Stop() } } // newTest returns a new test using the provided testing.T and // environment. It is returned with default values. Tests should // modify it before calling its startServer and clientConn methods. 
func newTest(t *testing.T, e env) *test { te := &test{ t: t, e: e, maxStream: math.MaxUint32, } te.ctx, te.cancel = context.WithCancel(context.Background()) return te } func (te *test) listenAndServe(ts testpb.TestServiceServer, listen func(network, address string) (net.Listener, error)) net.Listener { te.t.Logf("Running test in %s environment...", te.e.name) sopts := []grpc.ServerOption{grpc.MaxConcurrentStreams(te.maxStream)} if te.maxServerMsgSize != nil { sopts = append(sopts, grpc.MaxMsgSize(*te.maxServerMsgSize)) } if te.maxServerReceiveMsgSize != nil { sopts = append(sopts, grpc.MaxRecvMsgSize(*te.maxServerReceiveMsgSize)) } if te.maxServerSendMsgSize != nil { sopts = append(sopts, grpc.MaxSendMsgSize(*te.maxServerSendMsgSize)) } if te.maxServerHeaderListSize != nil { sopts = append(sopts, grpc.MaxHeaderListSize(*te.maxServerHeaderListSize)) } if te.tapHandle != nil { sopts = append(sopts, grpc.InTapHandle(te.tapHandle)) } if te.serverCompression { sopts = append(sopts, grpc.RPCCompressor(grpc.NewGZIPCompressor()), grpc.RPCDecompressor(grpc.NewGZIPDecompressor()), ) } if te.unaryServerInt != nil { sopts = append(sopts, grpc.UnaryInterceptor(te.unaryServerInt)) } if te.streamServerInt != nil { sopts = append(sopts, grpc.StreamInterceptor(te.streamServerInt)) } if te.unknownHandler != nil { sopts = append(sopts, grpc.UnknownServiceHandler(te.unknownHandler)) } if te.serverInitialWindowSize > 0 { sopts = append(sopts, grpc.InitialWindowSize(te.serverInitialWindowSize)) } if te.serverInitialConnWindowSize > 0 { sopts = append(sopts, grpc.InitialConnWindowSize(te.serverInitialConnWindowSize)) } la := "localhost:0" switch te.e.network { case "unix": la = "/tmp/testsock" + fmt.Sprintf("%d", time.Now().UnixNano()) syscall.Unlink(la) } lis, err := listen(te.e.network, la) if err != nil { te.t.Fatalf("Failed to listen: %v", err) } switch te.e.security { case "tls": creds, err := credentials.NewServerTLSFromFile(testdata.Path("server1.pem"), testdata.Path("server1.key")) if err != nil { te.t.Fatalf("Failed to generate credentials %v", err) } sopts = append(sopts, grpc.Creds(creds)) case "clientTimeoutCreds": sopts = append(sopts, grpc.Creds(&clientTimeoutCreds{})) } sopts = append(sopts, te.customServerOptions...) s := grpc.NewServer(sopts...) if ts != nil { testpb.RegisterTestServiceServer(s, ts) } // Create a new default health server if enableHealthServer is set, or use // the provided one. 
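// enableHealthServer takes precedence: when set, any healthServer supplied on the test is replaced by the default health.NewServer() implementation.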
hs := te.healthServer if te.enableHealthServer { hs = health.NewServer() } if hs != nil { healthgrpc.RegisterHealthServer(s, hs) } addr := la switch te.e.network { case "unix": default: _, port, err := net.SplitHostPort(lis.Addr().String()) if err != nil { te.t.Fatalf("Failed to parse listener address: %v", err) } addr = "localhost:" + port } te.srv = s te.hSrv = hs te.srvAddr = addr if te.e.httpHandler { if te.e.security != "tls" { te.t.Fatalf("unsupported environment settings") } cert, err := tls.LoadX509KeyPair(testdata.Path("server1.pem"), testdata.Path("server1.key")) if err != nil { te.t.Fatal("tls.LoadX509KeyPair(server1.pem, server1.key) failed: ", err) } hs := &http.Server{ Handler: s, TLSConfig: &tls.Config{Certificates: []tls.Certificate{cert}}, } if err := http2.ConfigureServer(hs, &http2.Server{MaxConcurrentStreams: te.maxStream}); err != nil { te.t.Fatal("http2.ConfigureServer(_, _) failed: ", err) } te.srv = wrapHS{hs} tlsListener := tls.NewListener(lis, hs.TLSConfig) go hs.Serve(tlsListener) return lis } go s.Serve(lis) return lis } type wrapHS struct { s *http.Server } func (w wrapHS) GracefulStop() { w.s.Shutdown(context.Background()) } func (w wrapHS) Stop() { w.s.Close() } func (te *test) startServerWithConnControl(ts testpb.TestServiceServer) *listenerWrapper { l := te.listenAndServe(ts, listenWithConnControl) return l.(*listenerWrapper) } // startServer starts a gRPC server exposing the provided TestService // implementation. Callers should defer a call to te.tearDown to clean up func (te *test) startServer(ts testpb.TestServiceServer) { te.listenAndServe(ts, net.Listen) } // startServers starts 'num' gRPC servers exposing the provided TestService. func (te *test) startServers(ts testpb.TestServiceServer, num int) { for i := 0; i < num; i++ { te.startServer(ts) te.srvs = append(te.srvs, te.srv.(*grpc.Server)) te.hSrvs = append(te.hSrvs, te.hSrv) te.srvAddrs = append(te.srvAddrs, te.srvAddr) te.srv = nil te.hSrv = nil te.srvAddr = "" } } type nopCompressor struct { grpc.Compressor } // NewNopCompressor creates a compressor to test the case that type is not supported. func NewNopCompressor() grpc.Compressor { return &nopCompressor{grpc.NewGZIPCompressor()} } func (c *nopCompressor) Type() string { return "nop" } type nopDecompressor struct { grpc.Decompressor } // NewNopDecompressor creates a decompressor to test the case that type is not supported. 
func NewNopDecompressor() grpc.Decompressor { return &nopDecompressor{grpc.NewGZIPDecompressor()} } func (d *nopDecompressor) Type() string { return "nop" } func (te *test) configDial(opts ...grpc.DialOption) ([]grpc.DialOption, string) { opts = append(opts, grpc.WithDialer(te.e.dialer), grpc.WithUserAgent(te.userAgent)) if te.sc != nil { opts = append(opts, grpc.WithServiceConfig(te.sc)) } if te.clientCompression { opts = append(opts, grpc.WithCompressor(grpc.NewGZIPCompressor()), grpc.WithDecompressor(grpc.NewGZIPDecompressor()), ) } if te.clientUseCompression { opts = append(opts, grpc.WithDefaultCallOptions(grpc.UseCompressor("gzip"))) } if te.clientNopCompression { opts = append(opts, grpc.WithCompressor(NewNopCompressor()), grpc.WithDecompressor(NewNopDecompressor()), ) } if te.unaryClientInt != nil { opts = append(opts, grpc.WithUnaryInterceptor(te.unaryClientInt)) } if te.streamClientInt != nil { opts = append(opts, grpc.WithStreamInterceptor(te.streamClientInt)) } if te.maxClientMsgSize != nil { opts = append(opts, grpc.WithMaxMsgSize(*te.maxClientMsgSize)) } if te.maxClientReceiveMsgSize != nil { opts = append(opts, grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(*te.maxClientReceiveMsgSize))) } if te.maxClientSendMsgSize != nil { opts = append(opts, grpc.WithDefaultCallOptions(grpc.MaxCallSendMsgSize(*te.maxClientSendMsgSize))) } if te.maxClientHeaderListSize != nil { opts = append(opts, grpc.WithMaxHeaderListSize(*te.maxClientHeaderListSize)) } switch te.e.security { case "tls": creds, err := credentials.NewClientTLSFromFile(testdata.Path("ca.pem"), "x.test.youtube.com") if err != nil { te.t.Fatalf("Failed to load credentials: %v", err) } opts = append(opts, grpc.WithTransportCredentials(creds)) case "clientTimeoutCreds": opts = append(opts, grpc.WithTransportCredentials(&clientTimeoutCreds{})) case "empty": // Don't add any transport creds option. default: opts = append(opts, grpc.WithInsecure()) } // TODO(bar) switch balancer case "pick_first". var scheme string if te.resolverScheme == "" { scheme = "passthrough:///" } else { scheme = te.resolverScheme + ":///" } switch te.e.balancer { case "v1": opts = append(opts, grpc.WithBalancer(grpc.RoundRobin(nil))) case "round_robin": opts = append(opts, grpc.WithBalancerName(roundrobin.Name)) } if te.clientInitialWindowSize > 0 { opts = append(opts, grpc.WithInitialWindowSize(te.clientInitialWindowSize)) } if te.clientInitialConnWindowSize > 0 { opts = append(opts, grpc.WithInitialConnWindowSize(te.clientInitialConnWindowSize)) } if te.perRPCCreds != nil { opts = append(opts, grpc.WithPerRPCCredentials(te.perRPCCreds)) } if te.customCodec != nil { opts = append(opts, grpc.WithDefaultCallOptions(grpc.ForceCodec(te.customCodec))) } if !te.nonBlockingDial && te.srvAddr != "" { // Only do a blocking dial if server is up. opts = append(opts, grpc.WithBlock()) } if te.srvAddr == "" { te.srvAddr = "client.side.only.test" } opts = append(opts, te.customDialOptions...) return opts, scheme } func (te *test) clientConnWithConnControl() (*grpc.ClientConn, *dialerWrapper) { if te.cc != nil { return te.cc, nil } opts, scheme := te.configDial() dw := &dialerWrapper{} // overwrite the dialer before opts = append(opts, grpc.WithDialer(dw.dialer)) var err error te.cc, err = grpc.Dial(scheme+te.srvAddr, opts...) 
if err != nil { te.t.Fatalf("Dial(%q) = %v", scheme+te.srvAddr, err) } return te.cc, dw } func (te *test) clientConn(opts ...grpc.DialOption) *grpc.ClientConn { if te.cc != nil { return te.cc } var scheme string opts, scheme = te.configDial(opts...) var err error te.cc, err = grpc.Dial(scheme+te.srvAddr, opts...) if err != nil { te.t.Fatalf("Dial(%q) = %v", scheme+te.srvAddr, err) } return te.cc } func (te *test) declareLogNoise(phrases ...string) { te.restoreLogs = declareLogNoise(te.t, phrases...) } func (te *test) withServerTester(fn func(st *serverTester)) { c, err := te.e.dialer(te.srvAddr, 10*time.Second) if err != nil { te.t.Fatal(err) } defer c.Close() if te.e.security == "tls" { c = tls.Client(c, &tls.Config{ InsecureSkipVerify: true, NextProtos: []string{http2.NextProtoTLS}, }) } st := newServerTesterFromConn(te.t, c) st.greet() fn(st) } type lazyConn struct { net.Conn beLazy int32 } func (l *lazyConn) Write(b []byte) (int, error) { if atomic.LoadInt32(&(l.beLazy)) == 1 { time.Sleep(time.Second) } return l.Conn.Write(b) } func (s) TestContextDeadlineNotIgnored(t *testing.T) { e := noBalancerEnv var lc *lazyConn e.customDialer = func(network, addr string, timeout time.Duration) (net.Conn, error) { conn, err := net.DialTimeout(network, addr, timeout) if err != nil { return nil, err } lc = &lazyConn{Conn: conn} return lc, nil } te := newTest(t, e) te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); err != nil { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, ", err) } atomic.StoreInt32(&(lc.beLazy), 1) ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond) defer cancel() t1 := time.Now() if _, err := tc.EmptyCall(ctx, &testpb.Empty{}); status.Code(err) != codes.DeadlineExceeded { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, context.DeadlineExceeded", err) } if time.Since(t1) > 2*time.Second { t.Fatalf("TestService/EmptyCall(_, _) ran over the deadline") } } func (s) TestTimeoutOnDeadServer(t *testing.T) { for _, e := range listTestEnv() { testTimeoutOnDeadServer(t, e) } } func testTimeoutOnDeadServer(t *testing.T, e env) { te := newTest(t, e) te.userAgent = testAppUA te.declareLogNoise( "transport: http2Client.notifyError got notified that the client transport was broken EOF", "grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing", "grpc: addrConn.resetTransport failed to create client transport: connection error", ) te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}, grpc.WaitForReady(true)); err != nil { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, ", err) } te.srv.Stop() // Wait for the client to notice the connection is gone. 
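// WaitForStateChange blocks until the state moves away from the given one or the 500ms context expires, so this loop exits as soon as the channel leaves READY.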
ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond) state := cc.GetState() for ; state == connectivity.Ready && cc.WaitForStateChange(ctx, state); state = cc.GetState() { } cancel() if state == connectivity.Ready { t.Fatalf("Timed out waiting for non-ready state") } ctx, cancel = context.WithTimeout(context.Background(), time.Millisecond) _, err := tc.EmptyCall(ctx, &testpb.Empty{}, grpc.WaitForReady(true)) cancel() if e.balancer != "" && status.Code(err) != codes.DeadlineExceeded { // If e.balancer == nil, the ac will stop reconnecting because the dialer returns non-temp error, // the error will be an internal error. t.Fatalf("TestService/EmptyCall(%v, _) = _, %v, want _, error code: %s", ctx, err, codes.DeadlineExceeded) } awaitNewConnLogOutput() } func (s) TestServerGracefulStopIdempotent(t *testing.T) { for _, e := range listTestEnv() { if e.name == "handler-tls" { continue } testServerGracefulStopIdempotent(t, e) } } func testServerGracefulStopIdempotent(t *testing.T, e env) { te := newTest(t, e) te.userAgent = testAppUA te.startServer(&testServer{security: e.security}) defer te.tearDown() for i := 0; i < 3; i++ { te.srv.GracefulStop() } } func (s) TestServerGoAway(t *testing.T) { for _, e := range listTestEnv() { if e.name == "handler-tls" { continue } testServerGoAway(t, e) } } func testServerGoAway(t *testing.T, e env) { te := newTest(t, e) te.userAgent = testAppUA te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) // Finish an RPC to make sure the connection is good. ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second) defer cancel() if _, err := tc.EmptyCall(ctx, &testpb.Empty{}, grpc.WaitForReady(true)); err != nil { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, ", err) } ch := make(chan struct{}) go func() { te.srv.GracefulStop() close(ch) }() // Loop until the server side GoAway signal is propagated to the client. for { ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond) if _, err := tc.EmptyCall(ctx, &testpb.Empty{}); err != nil && status.Code(err) != codes.DeadlineExceeded { cancel() break } cancel() } // A new RPC should fail. ctx, cancel = context.WithTimeout(context.Background(), 5*time.Second) defer cancel() if _, err := tc.EmptyCall(ctx, &testpb.Empty{}); status.Code(err) != codes.Unavailable && status.Code(err) != codes.Internal { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, %s or %s", err, codes.Unavailable, codes.Internal) } <-ch awaitNewConnLogOutput() } func (s) TestServerGoAwayPendingRPC(t *testing.T) { for _, e := range listTestEnv() { if e.name == "handler-tls" { continue } testServerGoAwayPendingRPC(t, e) } } func testServerGoAwayPendingRPC(t *testing.T, e env) { te := newTest(t, e) te.userAgent = testAppUA te.declareLogNoise( "transport: http2Client.notifyError got notified that the client transport was broken EOF", "grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing", "grpc: addrConn.resetTransport failed to create client transport: connection error", ) te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second) stream, err := tc.FullDuplexCall(ctx, grpc.WaitForReady(true)) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } // Finish an RPC to make sure the connection is good. 
if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}, grpc.WaitForReady(true)); err != nil { t.Fatalf("%v.EmptyCall(_, _, _) = _, %v, want _, ", tc, err) } ch := make(chan struct{}) go func() { te.srv.GracefulStop() close(ch) }() // Loop until the server side GoAway signal is propagated to the client. start := time.Now() errored := false for time.Since(start) < time.Second { ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond) _, err := tc.EmptyCall(ctx, &testpb.Empty{}, grpc.WaitForReady(true)) cancel() if err != nil { errored = true break } } if !errored { t.Fatalf("GoAway never received by client") } respParam := []*testpb.ResponseParameters{{Size: 1}} payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, int32(100)) if err != nil { t.Fatal(err) } req := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseParameters: respParam, Payload: payload, } // The existing RPC should be still good to proceed. if err := stream.Send(req); err != nil { t.Fatalf("%v.Send(_) = %v, want ", stream, err) } if _, err := stream.Recv(); err != nil { t.Fatalf("%v.Recv() = _, %v, want _, ", stream, err) } // The RPC will run until canceled. cancel() <-ch awaitNewConnLogOutput() } func (s) TestServerMultipleGoAwayPendingRPC(t *testing.T) { for _, e := range listTestEnv() { if e.name == "handler-tls" { continue } testServerMultipleGoAwayPendingRPC(t, e) } } func testServerMultipleGoAwayPendingRPC(t *testing.T, e env) { te := newTest(t, e) te.userAgent = testAppUA te.declareLogNoise( "transport: http2Client.notifyError got notified that the client transport was broken EOF", "grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing", "grpc: addrConn.resetTransport failed to create client transport: connection error", ) te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) ctx, cancel := context.WithCancel(context.Background()) stream, err := tc.FullDuplexCall(ctx, grpc.WaitForReady(true)) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } // Finish an RPC to make sure the connection is good. if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}, grpc.WaitForReady(true)); err != nil { t.Fatalf("%v.EmptyCall(_, _, _) = _, %v, want _, ", tc, err) } ch1 := make(chan struct{}) go func() { te.srv.GracefulStop() close(ch1) }() ch2 := make(chan struct{}) go func() { te.srv.GracefulStop() close(ch2) }() // Loop until the server side GoAway signal is propagated to the client. for { ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond) if _, err := tc.EmptyCall(ctx, &testpb.Empty{}, grpc.WaitForReady(true)); err != nil { cancel() break } cancel() } select { case <-ch1: t.Fatal("GracefulStop() terminated early") case <-ch2: t.Fatal("GracefulStop() terminated early") default: } respParam := []*testpb.ResponseParameters{ { Size: 1, }, } payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, int32(100)) if err != nil { t.Fatal(err) } req := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseParameters: respParam, Payload: payload, } // The existing RPC should be still good to proceed. 
if err := stream.Send(req); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, req, err) } if _, err := stream.Recv(); err != nil { t.Fatalf("%v.Recv() = _, %v, want _, ", stream, err) } if err := stream.CloseSend(); err != nil { t.Fatalf("%v.CloseSend() = %v, want ", stream, err) } <-ch1 <-ch2 cancel() awaitNewConnLogOutput() } func (s) TestConcurrentClientConnCloseAndServerGoAway(t *testing.T) { for _, e := range listTestEnv() { if e.name == "handler-tls" { continue } testConcurrentClientConnCloseAndServerGoAway(t, e) } } func testConcurrentClientConnCloseAndServerGoAway(t *testing.T, e env) { te := newTest(t, e) te.userAgent = testAppUA te.declareLogNoise( "transport: http2Client.notifyError got notified that the client transport was broken EOF", "grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing", "grpc: addrConn.resetTransport failed to create client transport: connection error", ) te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}, grpc.WaitForReady(true)); err != nil { t.Fatalf("%v.EmptyCall(_, _, _) = _, %v, want _, ", tc, err) } ch := make(chan struct{}) // Close ClientConn and Server concurrently. go func() { te.srv.GracefulStop() close(ch) }() go func() { cc.Close() }() <-ch } func (s) TestConcurrentServerStopAndGoAway(t *testing.T) { for _, e := range listTestEnv() { if e.name == "handler-tls" { continue } testConcurrentServerStopAndGoAway(t, e) } } func testConcurrentServerStopAndGoAway(t *testing.T, e env) { te := newTest(t, e) te.userAgent = testAppUA te.declareLogNoise( "transport: http2Client.notifyError got notified that the client transport was broken EOF", "grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing", "grpc: addrConn.resetTransport failed to create client transport: connection error", ) te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) stream, err := tc.FullDuplexCall(context.Background(), grpc.WaitForReady(true)) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } // Finish an RPC to make sure the connection is good. if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}, grpc.WaitForReady(true)); err != nil { t.Fatalf("%v.EmptyCall(_, _, _) = _, %v, want _, ", tc, err) } ch := make(chan struct{}) go func() { te.srv.GracefulStop() close(ch) }() // Loop until the server side GoAway signal is propagated to the client. for { ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond) if _, err := tc.EmptyCall(ctx, &testpb.Empty{}, grpc.WaitForReady(true)); err != nil { cancel() break } cancel() } // Stop the server and close all the connections. te.srv.Stop() respParam := []*testpb.ResponseParameters{ { Size: 1, }, } payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, int32(100)) if err != nil { t.Fatal(err) } req := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseParameters: respParam, Payload: payload, } sendStart := time.Now() for { if err := stream.Send(req); err == io.EOF { // stream.Send should eventually send io.EOF break } else if err != nil { // Send should never return a transport-level error. 
t.Fatalf("stream.Send(%v) = %v; want ", req, err) } if time.Since(sendStart) > 2*time.Second { t.Fatalf("stream.Send(_) did not return io.EOF after 2s") } time.Sleep(time.Millisecond) } if _, err := stream.Recv(); err == nil || err == io.EOF { t.Fatalf("%v.Recv() = _, %v, want _, ", stream, err) } <-ch awaitNewConnLogOutput() } func (s) TestClientConnCloseAfterGoAwayWithActiveStream(t *testing.T) { for _, e := range listTestEnv() { if e.name == "handler-tls" { continue } testClientConnCloseAfterGoAwayWithActiveStream(t, e) } } func testClientConnCloseAfterGoAwayWithActiveStream(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) ctx, cancel := context.WithCancel(context.Background()) defer cancel() if _, err := tc.FullDuplexCall(ctx); err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want _, ", tc, err) } done := make(chan struct{}) go func() { te.srv.GracefulStop() close(done) }() time.Sleep(50 * time.Millisecond) cc.Close() timeout := time.NewTimer(time.Second) select { case <-done: case <-timeout.C: t.Fatalf("Test timed-out.") } } func (s) TestFailFast(t *testing.T) { for _, e := range listTestEnv() { testFailFast(t, e) } } func testFailFast(t *testing.T, e env) { te := newTest(t, e) te.userAgent = testAppUA te.declareLogNoise( "transport: http2Client.notifyError got notified that the client transport was broken EOF", "grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing", "grpc: addrConn.resetTransport failed to create client transport: connection error", ) te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() if _, err := tc.EmptyCall(ctx, &testpb.Empty{}); err != nil { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, ", err) } // Stop the server and tear down all the existing connections. te.srv.Stop() // Loop until the server teardown is propagated to the client. for { ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) _, err := tc.EmptyCall(ctx, &testpb.Empty{}) cancel() if status.Code(err) == codes.Unavailable { break } t.Logf("%v.EmptyCall(_, _) = _, %v", tc, err) time.Sleep(10 * time.Millisecond) } // The client keeps reconnecting and ongoing fail-fast RPCs should fail with code.Unavailable. 
if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); status.Code(err) != codes.Unavailable { t.Fatalf("TestService/EmptyCall(_, _, _) = _, %v, want _, error code: %s", err, codes.Unavailable) } if _, err := tc.StreamingInputCall(context.Background()); status.Code(err) != codes.Unavailable { t.Fatalf("TestService/StreamingInputCall(_) = _, %v, want _, error code: %s", err, codes.Unavailable) } awaitNewConnLogOutput() } func testServiceConfigSetup(t *testing.T, e env) *test { te := newTest(t, e) te.userAgent = testAppUA te.declareLogNoise( "transport: http2Client.notifyError got notified that the client transport was broken EOF", "grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing", "grpc: addrConn.resetTransport failed to create client transport: connection error", "Failed to dial : context canceled; please retry.", ) return te } func newBool(b bool) (a *bool) { return &b } func newInt(b int) (a *int) { return &b } func newDuration(b time.Duration) (a *time.Duration) { a = new(time.Duration) *a = b return } func (s) TestGetMethodConfig(t *testing.T) { te := testServiceConfigSetup(t, tcpClearRREnv) defer te.tearDown() r, rcleanup := manual.GenerateAndRegisterManualResolver() defer rcleanup() te.resolverScheme = r.Scheme() cc := te.clientConn() addrs := []resolver.Address{{Addr: te.srvAddr}} r.UpdateState(resolver.State{ Addresses: addrs, ServiceConfig: parseCfg(`{ "methodConfig": [ { "name": [ { "service": "grpc.testing.TestService", "method": "EmptyCall" } ], "waitForReady": true, "timeout": ".001s" }, { "name": [ { "service": "grpc.testing.TestService" } ], "waitForReady": false } ] }`)}) tc := testpb.NewTestServiceClient(cc) // Make sure service config has been processed by grpc. for { if cc.GetMethodConfig("/grpc.testing.TestService/EmptyCall").WaitForReady != nil { break } time.Sleep(time.Millisecond) } // The following RPCs are expected to become non-fail-fast ones with 1ms deadline. var err error if _, err = tc.EmptyCall(context.Background(), &testpb.Empty{}); status.Code(err) != codes.DeadlineExceeded { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, %s", err, codes.DeadlineExceeded) } r.UpdateState(resolver.State{Addresses: addrs, ServiceConfig: parseCfg(`{ "methodConfig": [ { "name": [ { "service": "grpc.testing.TestService", "method": "UnaryCall" } ], "waitForReady": true, "timeout": ".001s" }, { "name": [ { "service": "grpc.testing.TestService" } ], "waitForReady": false } ] }`)}) // Make sure service config has been processed by grpc. for { if mc := cc.GetMethodConfig("/grpc.testing.TestService/EmptyCall"); mc.WaitForReady != nil && !*mc.WaitForReady { break } time.Sleep(time.Millisecond) } // The following RPCs are expected to become fail-fast. if _, err = tc.EmptyCall(context.Background(), &testpb.Empty{}); status.Code(err) != codes.Unavailable { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, %s", err, codes.Unavailable) } } func (s) TestServiceConfigWaitForReady(t *testing.T) { te := testServiceConfigSetup(t, tcpClearRREnv) defer te.tearDown() r, rcleanup := manual.GenerateAndRegisterManualResolver() defer rcleanup() // Case1: Client API set failfast to be false, and service config set wait_for_ready to be false, Client API should win, and the rpc will wait until deadline exceeds. 
te.resolverScheme = r.Scheme() cc := te.clientConn() addrs := []resolver.Address{{Addr: te.srvAddr}} r.UpdateState(resolver.State{ Addresses: addrs, ServiceConfig: parseCfg(`{ "methodConfig": [ { "name": [ { "service": "grpc.testing.TestService", "method": "EmptyCall" }, { "service": "grpc.testing.TestService", "method": "FullDuplexCall" } ], "waitForReady": false, "timeout": ".001s" } ] }`)}) tc := testpb.NewTestServiceClient(cc) // Make sure service config has been processed by grpc. for { if cc.GetMethodConfig("/grpc.testing.TestService/FullDuplexCall").WaitForReady != nil { break } time.Sleep(time.Millisecond) } // The following RPCs are expected to become non-fail-fast ones with 1ms deadline. var err error if _, err = tc.EmptyCall(context.Background(), &testpb.Empty{}, grpc.WaitForReady(true)); status.Code(err) != codes.DeadlineExceeded { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, %s", err, codes.DeadlineExceeded) } if _, err := tc.FullDuplexCall(context.Background(), grpc.WaitForReady(true)); status.Code(err) != codes.DeadlineExceeded { t.Fatalf("TestService/FullDuplexCall(_) = _, %v, want %s", err, codes.DeadlineExceeded) } // Generate a service config update. // Case2:Client API set failfast to be false, and service config set wait_for_ready to be true, and the rpc will wait until deadline exceeds. r.UpdateState(resolver.State{ Addresses: addrs, ServiceConfig: parseCfg(`{ "methodConfig": [ { "name": [ { "service": "grpc.testing.TestService", "method": "EmptyCall" }, { "service": "grpc.testing.TestService", "method": "FullDuplexCall" } ], "waitForReady": true, "timeout": ".001s" } ] }`)}) // Wait for the new service config to take effect. for { if mc := cc.GetMethodConfig("/grpc.testing.TestService/EmptyCall"); mc.WaitForReady != nil && *mc.WaitForReady { break } time.Sleep(time.Millisecond) } // The following RPCs are expected to become non-fail-fast ones with 1ms deadline. if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); status.Code(err) != codes.DeadlineExceeded { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, %s", err, codes.DeadlineExceeded) } if _, err := tc.FullDuplexCall(context.Background()); status.Code(err) != codes.DeadlineExceeded { t.Fatalf("TestService/FullDuplexCall(_) = _, %v, want %s", err, codes.DeadlineExceeded) } } func (s) TestServiceConfigTimeout(t *testing.T) { te := testServiceConfigSetup(t, tcpClearRREnv) defer te.tearDown() r, rcleanup := manual.GenerateAndRegisterManualResolver() defer rcleanup() // Case1: Client API sets timeout to be 1ns and ServiceConfig sets timeout to be 1hr. Timeout should be 1ns (min of 1ns and 1hr) and the rpc will wait until deadline exceeds. te.resolverScheme = r.Scheme() cc := te.clientConn() addrs := []resolver.Address{{Addr: te.srvAddr}} r.UpdateState(resolver.State{ Addresses: addrs, ServiceConfig: parseCfg(`{ "methodConfig": [ { "name": [ { "service": "grpc.testing.TestService", "method": "EmptyCall" }, { "service": "grpc.testing.TestService", "method": "FullDuplexCall" } ], "waitForReady": true, "timeout": "3600s" } ] }`)}) tc := testpb.NewTestServiceClient(cc) // Make sure service config has been processed by grpc. for { if cc.GetMethodConfig("/grpc.testing.TestService/FullDuplexCall").Timeout != nil { break } time.Sleep(time.Millisecond) } // The following RPCs are expected to become non-fail-fast ones with 1ns deadline. 
var err error ctx, cancel := context.WithTimeout(context.Background(), time.Nanosecond) if _, err = tc.EmptyCall(ctx, &testpb.Empty{}, grpc.WaitForReady(true)); status.Code(err) != codes.DeadlineExceeded { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, %s", err, codes.DeadlineExceeded) } cancel() ctx, cancel = context.WithTimeout(context.Background(), time.Nanosecond) if _, err = tc.FullDuplexCall(ctx, grpc.WaitForReady(true)); status.Code(err) != codes.DeadlineExceeded { t.Fatalf("TestService/FullDuplexCall(_) = _, %v, want %s", err, codes.DeadlineExceeded) } cancel() // Generate a service config update. // Case2: Client API sets timeout to be 1hr and ServiceConfig sets timeout to be 1ns. Timeout should be 1ns (min of 1ns and 1hr) and the rpc will wait until deadline exceeds. r.UpdateState(resolver.State{ Addresses: addrs, ServiceConfig: parseCfg(`{ "methodConfig": [ { "name": [ { "service": "grpc.testing.TestService", "method": "EmptyCall" }, { "service": "grpc.testing.TestService", "method": "FullDuplexCall" } ], "waitForReady": true, "timeout": ".000000001s" } ] }`)}) // Wait for the new service config to take effect. for { if mc := cc.GetMethodConfig("/grpc.testing.TestService/FullDuplexCall"); mc.Timeout != nil && *mc.Timeout == time.Nanosecond { break } time.Sleep(time.Millisecond) } ctx, cancel = context.WithTimeout(context.Background(), time.Hour) if _, err = tc.EmptyCall(ctx, &testpb.Empty{}, grpc.WaitForReady(true)); status.Code(err) != codes.DeadlineExceeded { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, %s", err, codes.DeadlineExceeded) } cancel() ctx, cancel = context.WithTimeout(context.Background(), time.Hour) if _, err = tc.FullDuplexCall(ctx, grpc.WaitForReady(true)); status.Code(err) != codes.DeadlineExceeded { t.Fatalf("TestService/FullDuplexCall(_) = _, %v, want %s", err, codes.DeadlineExceeded) } cancel() } func (s) TestServiceConfigMaxMsgSize(t *testing.T) { e := tcpClearRREnv r, rcleanup := manual.GenerateAndRegisterManualResolver() defer rcleanup() // Setting up values and objects shared across all test cases. const smallSize = 1 const largeSize = 1024 const extraLargeSize = 2048 smallPayload, err := newPayload(testpb.PayloadType_COMPRESSABLE, smallSize) if err != nil { t.Fatal(err) } largePayload, err := newPayload(testpb.PayloadType_COMPRESSABLE, largeSize) if err != nil { t.Fatal(err) } extraLargePayload, err := newPayload(testpb.PayloadType_COMPRESSABLE, extraLargeSize) if err != nil { t.Fatal(err) } sc := parseCfg(`{ "methodConfig": [ { "name": [ { "service": "grpc.testing.TestService", "method": "UnaryCall" }, { "service": "grpc.testing.TestService", "method": "FullDuplexCall" } ], "maxRequestMessageBytes": 2048, "maxResponseMessageBytes": 2048 } ] }`) // Case1: sc set maxReqSize to 2048 (send), maxRespSize to 2048 (recv). te1 := testServiceConfigSetup(t, e) defer te1.tearDown() te1.resolverScheme = r.Scheme() te1.nonBlockingDial = true te1.startServer(&testServer{security: e.security}) cc1 := te1.clientConn() addrs := []resolver.Address{{Addr: te1.srvAddr}} r.UpdateState(resolver.State{Addresses: addrs, ServiceConfig: sc}) tc := testpb.NewTestServiceClient(cc1) req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseSize: int32(extraLargeSize), Payload: smallPayload, } for { if cc1.GetMethodConfig("/grpc.testing.TestService/FullDuplexCall").MaxReqSize != nil { break } time.Sleep(time.Millisecond) } // Test for unary RPC recv. 
if _, err = tc.UnaryCall(context.Background(), req, grpc.WaitForReady(true)); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } // Test for unary RPC send. req.Payload = extraLargePayload req.ResponseSize = int32(smallSize) if _, err := tc.UnaryCall(context.Background(), req); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } // Test for streaming RPC recv. respParam := []*testpb.ResponseParameters{ { Size: int32(extraLargeSize), }, } sreq := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseParameters: respParam, Payload: smallPayload, } stream, err := tc.FullDuplexCall(te1.ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if err = stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } if _, err = stream.Recv(); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Recv() = _, %v, want _, error code: %s", stream, err, codes.ResourceExhausted) } // Test for streaming RPC send. respParam[0].Size = int32(smallSize) sreq.Payload = extraLargePayload stream, err = tc.FullDuplexCall(te1.ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if err = stream.Send(sreq); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Send(%v) = %v, want _, error code: %s", stream, sreq, err, codes.ResourceExhausted) } // Case2: Client API set maxReqSize to 1024 (send), maxRespSize to 1024 (recv). Sc sets maxReqSize to 2048 (send), maxRespSize to 2048 (recv). te2 := testServiceConfigSetup(t, e) te2.resolverScheme = r.Scheme() te2.nonBlockingDial = true te2.maxClientReceiveMsgSize = newInt(1024) te2.maxClientSendMsgSize = newInt(1024) te2.startServer(&testServer{security: e.security}) defer te2.tearDown() cc2 := te2.clientConn() r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: te2.srvAddr}}, ServiceConfig: sc}) tc = testpb.NewTestServiceClient(cc2) for { if cc2.GetMethodConfig("/grpc.testing.TestService/FullDuplexCall").MaxReqSize != nil { break } time.Sleep(time.Millisecond) } // Test for unary RPC recv. req.Payload = smallPayload req.ResponseSize = int32(largeSize) if _, err = tc.UnaryCall(context.Background(), req, grpc.WaitForReady(true)); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } // Test for unary RPC send. req.Payload = largePayload req.ResponseSize = int32(smallSize) if _, err := tc.UnaryCall(context.Background(), req); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } // Test for streaming RPC recv. stream, err = tc.FullDuplexCall(te2.ctx) respParam[0].Size = int32(largeSize) sreq.Payload = smallPayload if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if err = stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } if _, err = stream.Recv(); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Recv() = _, %v, want _, error code: %s", stream, err, codes.ResourceExhausted) } // Test for streaming RPC send. 
respParam[0].Size = int32(smallSize) sreq.Payload = largePayload stream, err = tc.FullDuplexCall(te2.ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if err = stream.Send(sreq); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Send(%v) = %v, want _, error code: %s", stream, sreq, err, codes.ResourceExhausted) } // Case3: Client API set maxReqSize to 4096 (send), maxRespSize to 4096 (recv). Sc sets maxReqSize to 2048 (send), maxRespSize to 2048 (recv). te3 := testServiceConfigSetup(t, e) te3.resolverScheme = r.Scheme() te3.nonBlockingDial = true te3.maxClientReceiveMsgSize = newInt(4096) te3.maxClientSendMsgSize = newInt(4096) te3.startServer(&testServer{security: e.security}) defer te3.tearDown() cc3 := te3.clientConn() r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: te3.srvAddr}}, ServiceConfig: sc}) tc = testpb.NewTestServiceClient(cc3) for { if cc3.GetMethodConfig("/grpc.testing.TestService/FullDuplexCall").MaxReqSize != nil { break } time.Sleep(time.Millisecond) } // Test for unary RPC recv. req.Payload = smallPayload req.ResponseSize = int32(largeSize) if _, err = tc.UnaryCall(context.Background(), req, grpc.WaitForReady(true)); err != nil { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want ", err) } req.ResponseSize = int32(extraLargeSize) if _, err := tc.UnaryCall(context.Background(), req); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } // Test for unary RPC send. req.Payload = largePayload req.ResponseSize = int32(smallSize) if _, err := tc.UnaryCall(context.Background(), req); err != nil { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want ", err) } req.Payload = extraLargePayload if _, err = tc.UnaryCall(context.Background(), req); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } // Test for streaming RPC recv. stream, err = tc.FullDuplexCall(te3.ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } respParam[0].Size = int32(largeSize) sreq.Payload = smallPayload if err = stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } if _, err = stream.Recv(); err != nil { t.Fatalf("%v.Recv() = _, %v, want ", stream, err) } respParam[0].Size = int32(extraLargeSize) if err = stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } if _, err = stream.Recv(); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Recv() = _, %v, want _, error code: %s", stream, err, codes.ResourceExhausted) } // Test for streaming RPC send. respParam[0].Size = int32(smallSize) sreq.Payload = largePayload stream, err = tc.FullDuplexCall(te3.ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if err := stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } sreq.Payload = extraLargePayload if err := stream.Send(sreq); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Send(%v) = %v, want _, error code: %s", stream, sreq, err, codes.ResourceExhausted) } } // Reading from a streaming RPC may fail with context canceled if timeout was // set by service config (https://github.com/grpc/grpc-go/issues/1818). This // test makes sure read from streaming RPC doesn't fail in this case. 
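// The service config below sets a 10s timeout on FullDuplexCall; the client sends one request, closes the send side, and sleeps so the final status arrives before Recv, which must still return the buffered response rather than a context error.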
func (s) TestStreamingRPCWithTimeoutInServiceConfigRecv(t *testing.T) { te := testServiceConfigSetup(t, tcpClearRREnv) te.startServer(&testServer{security: tcpClearRREnv.security}) defer te.tearDown() r, rcleanup := manual.GenerateAndRegisterManualResolver() defer rcleanup() te.resolverScheme = r.Scheme() te.nonBlockingDial = true cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) r.UpdateState(resolver.State{ Addresses: []resolver.Address{{Addr: te.srvAddr}}, ServiceConfig: parseCfg(`{ "methodConfig": [ { "name": [ { "service": "grpc.testing.TestService", "method": "FullDuplexCall" } ], "waitForReady": true, "timeout": "10s" } ] }`)}) // Make sure service config has been processed by grpc. for { if cc.GetMethodConfig("/grpc.testing.TestService/FullDuplexCall").Timeout != nil { break } time.Sleep(time.Millisecond) } ctx, cancel := context.WithCancel(context.Background()) defer cancel() stream, err := tc.FullDuplexCall(ctx, grpc.WaitForReady(true)) if err != nil { t.Fatalf("TestService/FullDuplexCall(_) = _, %v, want ", err) } payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, 0) if err != nil { t.Fatalf("failed to newPayload: %v", err) } req := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseParameters: []*testpb.ResponseParameters{{Size: 0}}, Payload: payload, } if err := stream.Send(req); err != nil { t.Fatalf("stream.Send(%v) = %v, want ", req, err) } stream.CloseSend() time.Sleep(time.Second) // Sleep 1 second before recv to make sure the final status is received // before the recv. if _, err := stream.Recv(); err != nil { t.Fatalf("stream.Recv = _, %v, want _, ", err) } // Keep reading to drain the stream. for { if _, err := stream.Recv(); err != nil { break } } } func (s) TestPreloaderClientSend(t *testing.T) { for _, e := range listTestEnv() { testPreloaderClientSend(t, e) } } func testPreloaderClientSend(t *testing.T, e env) { te := newTest(t, e) te.userAgent = testAppUA te.declareLogNoise( "transport: http2Client.notifyError got notified that the client transport was broken EOF", "grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing", "grpc: addrConn.resetTransport failed to create client transport: connection error", "Failed to dial : context canceled; please retry.", ) te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) // Test for streaming RPC recv. 
// Set context for send with proper RPC Information. stream, err := tc.FullDuplexCall(te.ctx, grpc.UseCompressor("gzip")) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want <nil>", tc, err) } var index int for index < len(reqSizes) { respParam := []*testpb.ResponseParameters{ { Size: int32(respSizes[index]), }, } payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, int32(reqSizes[index])) if err != nil { t.Fatal(err) } req := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseParameters: respParam, Payload: payload, } preparedMsg := &grpc.PreparedMsg{} err = preparedMsg.Encode(stream, req) if err != nil { t.Fatalf("PrepareMsg failed for size %d: %v", reqSizes[index], err) } if err := stream.SendMsg(preparedMsg); err != nil { t.Fatalf("%v.Send(%v) = %v, want <nil>", stream, req, err) } reply, err := stream.Recv() if err != nil { t.Fatalf("%v.Recv() = %v, want <nil>", stream, err) } pt := reply.GetPayload().GetType() if pt != testpb.PayloadType_COMPRESSABLE { t.Fatalf("Got the reply of type %d, want %d", pt, testpb.PayloadType_COMPRESSABLE) } size := len(reply.GetPayload().GetBody()) if size != int(respSizes[index]) { t.Fatalf("Got reply body of length %d, want %d", size, respSizes[index]) } index++ } if err := stream.CloseSend(); err != nil { t.Fatalf("%v.CloseSend() got %v, want %v", stream, err, nil) } if _, err := stream.Recv(); err != io.EOF { t.Fatalf("%v failed to complete the ping pong test: %v", stream, err) } } func (s) TestMaxMsgSizeClientDefault(t *testing.T) { for _, e := range listTestEnv() { testMaxMsgSizeClientDefault(t, e) } } func testMaxMsgSizeClientDefault(t *testing.T, e env) { te := newTest(t, e) te.userAgent = testAppUA te.declareLogNoise( "transport: http2Client.notifyError got notified that the client transport was broken EOF", "grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing", "grpc: addrConn.resetTransport failed to create client transport: connection error", "Failed to dial : context canceled; please retry.", ) te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) const smallSize = 1 const largeSize = 4 * 1024 * 1024 smallPayload, err := newPayload(testpb.PayloadType_COMPRESSABLE, smallSize) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseSize: int32(largeSize), Payload: smallPayload, } // Test for unary RPC recv. if _, err := tc.UnaryCall(context.Background(), req); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } respParam := []*testpb.ResponseParameters{ { Size: int32(largeSize), }, } sreq := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseParameters: respParam, Payload: smallPayload, } // Test for streaming RPC recv. 
stream, err := tc.FullDuplexCall(te.ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if err := stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } if _, err := stream.Recv(); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Recv() = _, %v, want _, error code: %s", stream, err, codes.ResourceExhausted) } } func (s) TestMaxMsgSizeClientAPI(t *testing.T) { for _, e := range listTestEnv() { testMaxMsgSizeClientAPI(t, e) } } func testMaxMsgSizeClientAPI(t *testing.T, e env) { te := newTest(t, e) te.userAgent = testAppUA // To avoid error on server side. te.maxServerSendMsgSize = newInt(5 * 1024 * 1024) te.maxClientReceiveMsgSize = newInt(1024) te.maxClientSendMsgSize = newInt(1024) te.declareLogNoise( "transport: http2Client.notifyError got notified that the client transport was broken EOF", "grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing", "grpc: addrConn.resetTransport failed to create client transport: connection error", "Failed to dial : context canceled; please retry.", ) te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) const smallSize = 1 const largeSize = 1024 smallPayload, err := newPayload(testpb.PayloadType_COMPRESSABLE, smallSize) if err != nil { t.Fatal(err) } largePayload, err := newPayload(testpb.PayloadType_COMPRESSABLE, largeSize) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseSize: int32(largeSize), Payload: smallPayload, } // Test for unary RPC recv. if _, err := tc.UnaryCall(context.Background(), req); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } // Test for unary RPC send. req.Payload = largePayload req.ResponseSize = int32(smallSize) if _, err := tc.UnaryCall(context.Background(), req); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } respParam := []*testpb.ResponseParameters{ { Size: int32(largeSize), }, } sreq := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseParameters: respParam, Payload: smallPayload, } // Test for streaming RPC recv. stream, err := tc.FullDuplexCall(te.ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if err := stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } if _, err := stream.Recv(); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Recv() = _, %v, want _, error code: %s", stream, err, codes.ResourceExhausted) } // Test for streaming RPC send. 
respParam[0].Size = int32(smallSize) sreq.Payload = largePayload stream, err = tc.FullDuplexCall(te.ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if err := stream.Send(sreq); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Send(%v) = %v, want _, error code: %s", stream, sreq, err, codes.ResourceExhausted) } } func (s) TestMaxMsgSizeServerAPI(t *testing.T) { for _, e := range listTestEnv() { testMaxMsgSizeServerAPI(t, e) } } func testMaxMsgSizeServerAPI(t *testing.T, e env) { te := newTest(t, e) te.userAgent = testAppUA te.maxServerReceiveMsgSize = newInt(1024) te.maxServerSendMsgSize = newInt(1024) te.declareLogNoise( "transport: http2Client.notifyError got notified that the client transport was broken EOF", "grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing", "grpc: addrConn.resetTransport failed to create client transport: connection error", "Failed to dial : context canceled; please retry.", ) te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) const smallSize = 1 const largeSize = 1024 smallPayload, err := newPayload(testpb.PayloadType_COMPRESSABLE, smallSize) if err != nil { t.Fatal(err) } largePayload, err := newPayload(testpb.PayloadType_COMPRESSABLE, largeSize) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseSize: int32(largeSize), Payload: smallPayload, } // Test for unary RPC send. if _, err := tc.UnaryCall(context.Background(), req); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } // Test for unary RPC recv. req.Payload = largePayload req.ResponseSize = int32(smallSize) if _, err := tc.UnaryCall(context.Background(), req); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } respParam := []*testpb.ResponseParameters{ { Size: int32(largeSize), }, } sreq := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseParameters: respParam, Payload: smallPayload, } // Test for streaming RPC send. stream, err := tc.FullDuplexCall(te.ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if err := stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } if _, err := stream.Recv(); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Recv() = _, %v, want _, error code: %s", stream, err, codes.ResourceExhausted) } // Test for streaming RPC recv. 
respParam[0].Size = int32(smallSize) sreq.Payload = largePayload stream, err = tc.FullDuplexCall(te.ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if err := stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } if _, err := stream.Recv(); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Recv() = _, %v, want _, error code: %s", stream, err, codes.ResourceExhausted) } } func (s) TestTap(t *testing.T) { for _, e := range listTestEnv() { if e.name == "handler-tls" { continue } testTap(t, e) } } type myTap struct { cnt int } func (t *myTap) handle(ctx context.Context, info *tap.Info) (context.Context, error) { if info != nil { if info.FullMethodName == "/grpc.testing.TestService/EmptyCall" { t.cnt++ } else if info.FullMethodName == "/grpc.testing.TestService/UnaryCall" { return nil, fmt.Errorf("tap error") } } return ctx, nil } func testTap(t *testing.T, e env) { te := newTest(t, e) te.userAgent = testAppUA ttap := &myTap{} te.tapHandle = ttap.handle te.declareLogNoise( "transport: http2Client.notifyError got notified that the client transport was broken EOF", "grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing", "grpc: addrConn.resetTransport failed to create client transport: connection error", ) te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); err != nil { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, ", err) } if ttap.cnt != 1 { t.Fatalf("Get the count in ttap %d, want 1", ttap.cnt) } payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, 31) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseSize: 45, Payload: payload, } if _, err := tc.UnaryCall(context.Background(), req); status.Code(err) != codes.Unavailable { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, %s", err, codes.Unavailable) } } func healthCheck(d time.Duration, cc *grpc.ClientConn, serviceName string) (*healthpb.HealthCheckResponse, error) { ctx, cancel := context.WithTimeout(context.Background(), d) defer cancel() hc := healthgrpc.NewHealthClient(cc) req := &healthpb.HealthCheckRequest{ Service: serviceName, } return hc.Check(ctx, req) } func (s) TestHealthCheckOnSuccess(t *testing.T) { for _, e := range listTestEnv() { testHealthCheckOnSuccess(t, e) } } func testHealthCheckOnSuccess(t *testing.T, e env) { te := newTest(t, e) hs := health.NewServer() hs.SetServingStatus("grpc.health.v1.Health", 1) te.healthServer = hs te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() if _, err := healthCheck(1*time.Second, cc, "grpc.health.v1.Health"); err != nil { t.Fatalf("Health/Check(_, _) = _, %v, want _, ", err) } } func (s) TestHealthCheckOnFailure(t *testing.T) { for _, e := range listTestEnv() { testHealthCheckOnFailure(t, e) } } func testHealthCheckOnFailure(t *testing.T, e env) { te := newTest(t, e) te.declareLogNoise( "Failed to dial ", "grpc: the client connection is closing; please retry", ) hs := health.NewServer() hs.SetServingStatus("grpc.health.v1.HealthCheck", 1) te.healthServer = hs te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() wantErr := status.Error(codes.DeadlineExceeded, "context deadline exceeded") if _, err := healthCheck(0*time.Second, cc, "grpc.health.v1.Health"); 
!testutils.StatusErrEqual(err, wantErr) { t.Fatalf("Health/Check(_, _) = _, %v, want _, error code %s", err, codes.DeadlineExceeded) } awaitNewConnLogOutput() } func (s) TestHealthCheckOff(t *testing.T) { for _, e := range listTestEnv() { // TODO(bradfitz): Temporarily skip this env due to #619. if e.name == "handler-tls" { continue } testHealthCheckOff(t, e) } } func testHealthCheckOff(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security}) defer te.tearDown() want := status.Error(codes.Unimplemented, "unknown service grpc.health.v1.Health") if _, err := healthCheck(1*time.Second, te.clientConn(), ""); !testutils.StatusErrEqual(err, want) { t.Fatalf("Health/Check(_, _) = _, %v, want _, %v", err, want) } } func (s) TestHealthWatchMultipleClients(t *testing.T) { for _, e := range listTestEnv() { testHealthWatchMultipleClients(t, e) } } func testHealthWatchMultipleClients(t *testing.T, e env) { const service = "grpc.health.v1.Health1" hs := health.NewServer() te := newTest(t, e) te.healthServer = hs te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() hc := healthgrpc.NewHealthClient(cc) ctx, cancel := context.WithCancel(context.Background()) defer cancel() req := &healthpb.HealthCheckRequest{ Service: service, } stream1, err := hc.Watch(ctx, req) if err != nil { t.Fatalf("error: %v", err) } healthWatchChecker(t, stream1, healthpb.HealthCheckResponse_SERVICE_UNKNOWN) stream2, err := hc.Watch(ctx, req) if err != nil { t.Fatalf("error: %v", err) } healthWatchChecker(t, stream2, healthpb.HealthCheckResponse_SERVICE_UNKNOWN) hs.SetServingStatus(service, healthpb.HealthCheckResponse_NOT_SERVING) healthWatchChecker(t, stream1, healthpb.HealthCheckResponse_NOT_SERVING) healthWatchChecker(t, stream2, healthpb.HealthCheckResponse_NOT_SERVING) } func (s) TestHealthWatchSameStatus(t *testing.T) { for _, e := range listTestEnv() { testHealthWatchSameStatus(t, e) } } func testHealthWatchSameStatus(t *testing.T, e env) { const service = "grpc.health.v1.Health1" hs := health.NewServer() te := newTest(t, e) te.healthServer = hs te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() hc := healthgrpc.NewHealthClient(cc) ctx, cancel := context.WithCancel(context.Background()) defer cancel() req := &healthpb.HealthCheckRequest{ Service: service, } stream1, err := hc.Watch(ctx, req) if err != nil { t.Fatalf("error: %v", err) } healthWatchChecker(t, stream1, healthpb.HealthCheckResponse_SERVICE_UNKNOWN) hs.SetServingStatus(service, healthpb.HealthCheckResponse_SERVING) healthWatchChecker(t, stream1, healthpb.HealthCheckResponse_SERVING) hs.SetServingStatus(service, healthpb.HealthCheckResponse_SERVING) hs.SetServingStatus(service, healthpb.HealthCheckResponse_NOT_SERVING) healthWatchChecker(t, stream1, healthpb.HealthCheckResponse_NOT_SERVING) } func (s) TestHealthWatchServiceStatusSetBeforeStartingServer(t *testing.T) { for _, e := range listTestEnv() { testHealthWatchSetServiceStatusBeforeStartingServer(t, e) } } func testHealthWatchSetServiceStatusBeforeStartingServer(t *testing.T, e env) { const service = "grpc.health.v1.Health1" hs := health.NewServer() te := newTest(t, e) te.healthServer = hs hs.SetServingStatus(service, healthpb.HealthCheckResponse_SERVING) te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() hc := healthgrpc.NewHealthClient(cc) ctx, cancel := context.WithCancel(context.Background()) defer cancel() req := &healthpb.HealthCheckRequest{ 
Service: service, } stream, err := hc.Watch(ctx, req) if err != nil { t.Fatalf("error: %v", err) } healthWatchChecker(t, stream, healthpb.HealthCheckResponse_SERVING) } func (s) TestHealthWatchDefaultStatusChange(t *testing.T) { for _, e := range listTestEnv() { testHealthWatchDefaultStatusChange(t, e) } } func testHealthWatchDefaultStatusChange(t *testing.T, e env) { const service = "grpc.health.v1.Health1" hs := health.NewServer() te := newTest(t, e) te.healthServer = hs te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() hc := healthgrpc.NewHealthClient(cc) ctx, cancel := context.WithCancel(context.Background()) defer cancel() req := &healthpb.HealthCheckRequest{ Service: service, } stream, err := hc.Watch(ctx, req) if err != nil { t.Fatalf("error: %v", err) } healthWatchChecker(t, stream, healthpb.HealthCheckResponse_SERVICE_UNKNOWN) hs.SetServingStatus(service, healthpb.HealthCheckResponse_SERVING) healthWatchChecker(t, stream, healthpb.HealthCheckResponse_SERVING) } func (s) TestHealthWatchSetServiceStatusBeforeClientCallsWatch(t *testing.T) { for _, e := range listTestEnv() { testHealthWatchSetServiceStatusBeforeClientCallsWatch(t, e) } } func testHealthWatchSetServiceStatusBeforeClientCallsWatch(t *testing.T, e env) { const service = "grpc.health.v1.Health1" hs := health.NewServer() te := newTest(t, e) te.healthServer = hs te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() hc := healthgrpc.NewHealthClient(cc) ctx, cancel := context.WithCancel(context.Background()) defer cancel() req := &healthpb.HealthCheckRequest{ Service: service, } hs.SetServingStatus(service, healthpb.HealthCheckResponse_SERVING) stream, err := hc.Watch(ctx, req) if err != nil { t.Fatalf("error: %v", err) } healthWatchChecker(t, stream, healthpb.HealthCheckResponse_SERVING) } func (s) TestHealthWatchOverallServerHealthChange(t *testing.T) { for _, e := range listTestEnv() { testHealthWatchOverallServerHealthChange(t, e) } } func testHealthWatchOverallServerHealthChange(t *testing.T, e env) { const service = "" hs := health.NewServer() te := newTest(t, e) te.healthServer = hs te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() hc := healthgrpc.NewHealthClient(cc) ctx, cancel := context.WithCancel(context.Background()) defer cancel() req := &healthpb.HealthCheckRequest{ Service: service, } stream, err := hc.Watch(ctx, req) if err != nil { t.Fatalf("error: %v", err) } healthWatchChecker(t, stream, healthpb.HealthCheckResponse_SERVING) hs.SetServingStatus(service, healthpb.HealthCheckResponse_NOT_SERVING) healthWatchChecker(t, stream, healthpb.HealthCheckResponse_NOT_SERVING) } func healthWatchChecker(t *testing.T, stream healthgrpc.Health_WatchClient, expectedServingStatus healthpb.HealthCheckResponse_ServingStatus) { response, err := stream.Recv() if err != nil { t.Fatalf("error on %v.Recv(): %v", stream, err) } if response.Status != expectedServingStatus { t.Fatalf("response.Status is %v (%v expected)", response.Status, expectedServingStatus) } } func (s) TestUnknownHandler(t *testing.T) { // An example unknownHandler that returns a different code and a different method, making sure that we do not // expose what methods are implemented to a client that is not authenticated. 
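// The handler below rejects every unregistered method with codes.Unauthenticated rather than the default codes.Unimplemented, so probing cannot reveal which methods exist.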
unknownHandler := func(srv interface{}, stream grpc.ServerStream) error { return status.Error(codes.Unauthenticated, "user unauthenticated") } for _, e := range listTestEnv() { // TODO(bradfitz): Temporarily skip this env due to #619. if e.name == "handler-tls" { continue } testUnknownHandler(t, e, unknownHandler) } } func testUnknownHandler(t *testing.T, e env, unknownHandler grpc.StreamHandler) { te := newTest(t, e) te.unknownHandler = unknownHandler te.startServer(&testServer{security: e.security}) defer te.tearDown() want := status.Error(codes.Unauthenticated, "user unauthenticated") if _, err := healthCheck(1*time.Second, te.clientConn(), ""); !testutils.StatusErrEqual(err, want) { t.Fatalf("Health/Check(_, _) = _, %v, want _, %v", err, want) } } func (s) TestHealthCheckServingStatus(t *testing.T) { for _, e := range listTestEnv() { testHealthCheckServingStatus(t, e) } } func testHealthCheckServingStatus(t *testing.T, e env) { te := newTest(t, e) hs := health.NewServer() te.healthServer = hs te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() out, err := healthCheck(1*time.Second, cc, "") if err != nil { t.Fatalf("Health/Check(_, _) = _, %v, want _, <nil>", err) } if out.Status != healthpb.HealthCheckResponse_SERVING { t.Fatalf("Got the serving status %v, want SERVING", out.Status) } wantErr := status.Error(codes.NotFound, "unknown service") if _, err := healthCheck(1*time.Second, cc, "grpc.health.v1.Health"); !testutils.StatusErrEqual(err, wantErr) { t.Fatalf("Health/Check(_, _) = _, %v, want _, error code %s", err, codes.NotFound) } hs.SetServingStatus("grpc.health.v1.Health", healthpb.HealthCheckResponse_SERVING) out, err = healthCheck(1*time.Second, cc, "grpc.health.v1.Health") if err != nil { t.Fatalf("Health/Check(_, _) = _, %v, want _, <nil>", err) } if out.Status != healthpb.HealthCheckResponse_SERVING { t.Fatalf("Got the serving status %v, want SERVING", out.Status) } hs.SetServingStatus("grpc.health.v1.Health", healthpb.HealthCheckResponse_NOT_SERVING) out, err = healthCheck(1*time.Second, cc, "grpc.health.v1.Health") if err != nil { t.Fatalf("Health/Check(_, _) = _, %v, want _, <nil>", err) } if out.Status != healthpb.HealthCheckResponse_NOT_SERVING { t.Fatalf("Got the serving status %v, want NOT_SERVING", out.Status) } } func (s) TestEmptyUnaryWithUserAgent(t *testing.T) { for _, e := range listTestEnv() { testEmptyUnaryWithUserAgent(t, e) } } func testEmptyUnaryWithUserAgent(t *testing.T, e env) { te := newTest(t, e) te.userAgent = testAppUA te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) var header metadata.MD reply, err := tc.EmptyCall(context.Background(), &testpb.Empty{}, grpc.Header(&header)) if err != nil || !proto.Equal(&testpb.Empty{}, reply) { t.Fatalf("TestService/EmptyCall(_, _) = %v, %v, want %v, <nil>", reply, err, &testpb.Empty{}) } if v, ok := header["ua"]; !ok || !strings.HasPrefix(v[0], testAppUA) { t.Fatalf("header[\"ua\"] = %q, %t, want string with prefix %q, true", v, ok, testAppUA) } te.srv.Stop() } func (s) TestFailedEmptyUnary(t *testing.T) { for _, e := range listTestEnv() { if e.name == "handler-tls" { // This test covers status details, but // Grpc-Status-Details-Bin is not supported in handler_server. 
continue } testFailedEmptyUnary(t, e) } } func testFailedEmptyUnary(t *testing.T, e env) { te := newTest(t, e) te.userAgent = failAppUA te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) ctx := metadata.NewOutgoingContext(context.Background(), testMetadata) wantErr := detailedError if _, err := tc.EmptyCall(ctx, &testpb.Empty{}); !testutils.StatusErrEqual(err, wantErr) { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, %v", err, wantErr) } } func (s) TestLargeUnary(t *testing.T) { for _, e := range listTestEnv() { testLargeUnary(t, e) } } func testLargeUnary(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) const argSize = 271828 const respSize = 314159 payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, argSize) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseSize: respSize, Payload: payload, } reply, err := tc.UnaryCall(context.Background(), req) if err != nil { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, ", err) } pt := reply.GetPayload().GetType() ps := len(reply.GetPayload().GetBody()) if pt != testpb.PayloadType_COMPRESSABLE || ps != respSize { t.Fatalf("Got the reply with type %d len %d; want %d, %d", pt, ps, testpb.PayloadType_COMPRESSABLE, respSize) } } // Test backward-compatibility API for setting msg size limit. func (s) TestExceedMsgLimit(t *testing.T) { for _, e := range listTestEnv() { testExceedMsgLimit(t, e) } } func testExceedMsgLimit(t *testing.T, e env) { te := newTest(t, e) maxMsgSize := 1024 te.maxServerMsgSize, te.maxClientMsgSize = newInt(maxMsgSize), newInt(maxMsgSize) te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) largeSize := int32(maxMsgSize + 1) const smallSize = 1 largePayload, err := newPayload(testpb.PayloadType_COMPRESSABLE, largeSize) if err != nil { t.Fatal(err) } smallPayload, err := newPayload(testpb.PayloadType_COMPRESSABLE, smallSize) if err != nil { t.Fatal(err) } // Make sure the server cannot receive a unary RPC of largeSize. req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseSize: smallSize, Payload: largePayload, } if _, err := tc.UnaryCall(context.Background(), req); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } // Make sure the client cannot receive a unary RPC of largeSize. req.ResponseSize = largeSize req.Payload = smallPayload if _, err := tc.UnaryCall(context.Background(), req); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } // Make sure the server cannot receive a streaming RPC of largeSize. 
stream, err := tc.FullDuplexCall(te.ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } respParam := []*testpb.ResponseParameters{ { Size: 1, }, } sreq := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseParameters: respParam, Payload: largePayload, } if err := stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } if _, err := stream.Recv(); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Recv() = _, %v, want _, error code: %s", stream, err, codes.ResourceExhausted) } // Test on client side for streaming RPC. stream, err = tc.FullDuplexCall(te.ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } respParam[0].Size = largeSize sreq.Payload = smallPayload if err := stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } if _, err := stream.Recv(); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Recv() = _, %v, want _, error code: %s", stream, err, codes.ResourceExhausted) } } func (s) TestPeerClientSide(t *testing.T) { for _, e := range listTestEnv() { testPeerClientSide(t, e) } } func testPeerClientSide(t *testing.T, e env) { te := newTest(t, e) te.userAgent = testAppUA te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) peer := new(peer.Peer) if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}, grpc.Peer(peer), grpc.WaitForReady(true)); err != nil { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, ", err) } pa := peer.Addr.String() if e.network == "unix" { if pa != te.srvAddr { t.Fatalf("peer.Addr = %v, want %v", pa, te.srvAddr) } return } _, pp, err := net.SplitHostPort(pa) if err != nil { t.Fatalf("Failed to parse address from peer.") } _, sp, err := net.SplitHostPort(te.srvAddr) if err != nil { t.Fatalf("Failed to parse address of test server.") } if pp != sp { t.Fatalf("peer.Addr = localhost:%v, want localhost:%v", pp, sp) } } // TestPeerNegative tests that if call fails setting peer // doesn't cause a segmentation fault. 
// issue#1141 https://github.com/grpc/grpc-go/issues/1141 func (s) TestPeerNegative(t *testing.T) { for _, e := range listTestEnv() { testPeerNegative(t, e) } } func testPeerNegative(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) peer := new(peer.Peer) ctx, cancel := context.WithCancel(context.Background()) cancel() tc.EmptyCall(ctx, &testpb.Empty{}, grpc.Peer(peer)) } func (s) TestPeerFailedRPC(t *testing.T) { for _, e := range listTestEnv() { testPeerFailedRPC(t, e) } } func testPeerFailedRPC(t *testing.T, e env) { te := newTest(t, e) te.maxServerReceiveMsgSize = newInt(1 * 1024) te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) // first make a successful request to the server if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); err != nil { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, ", err) } // make a second request that will be rejected by the server const largeSize = 5 * 1024 largePayload, err := newPayload(testpb.PayloadType_COMPRESSABLE, largeSize) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, Payload: largePayload, } peer := new(peer.Peer) if _, err := tc.UnaryCall(context.Background(), req, grpc.Peer(peer)); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } else { pa := peer.Addr.String() if e.network == "unix" { if pa != te.srvAddr { t.Fatalf("peer.Addr = %v, want %v", pa, te.srvAddr) } return } _, pp, err := net.SplitHostPort(pa) if err != nil { t.Fatalf("Failed to parse address from peer.") } _, sp, err := net.SplitHostPort(te.srvAddr) if err != nil { t.Fatalf("Failed to parse address of test server.") } if pp != sp { t.Fatalf("peer.Addr = localhost:%v, want localhost:%v", pp, sp) } } } func (s) TestMetadataUnaryRPC(t *testing.T) { for _, e := range listTestEnv() { testMetadataUnaryRPC(t, e) } } func testMetadataUnaryRPC(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) const argSize = 2718 const respSize = 314 payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, argSize) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseSize: respSize, Payload: payload, } var header, trailer metadata.MD ctx := metadata.NewOutgoingContext(context.Background(), testMetadata) if _, err := tc.UnaryCall(ctx, req, grpc.Header(&header), grpc.Trailer(&trailer)); err != nil { t.Fatalf("TestService.UnaryCall(%v, _, _, _) = _, %v; want _, ", ctx, err) } // Ignore optional response headers that Servers may set: if header != nil { delete(header, "trailer") // RFC 2616 says server SHOULD (but optional) declare trailers delete(header, "date") // the Date header is also optional delete(header, "user-agent") delete(header, "content-type") } if !reflect.DeepEqual(header, testMetadata) { t.Fatalf("Received header metadata %v, want %v", header, testMetadata) } if !reflect.DeepEqual(trailer, testTrailerMetadata) { t.Fatalf("Received trailer metadata %v, want %v", trailer, testTrailerMetadata) } } func (s) TestMetadataOrderUnaryRPC(t *testing.T) { for _, e := range listTestEnv() { testMetadataOrderUnaryRPC(t, e) } } func 
testMetadataOrderUnaryRPC(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) ctx := metadata.NewOutgoingContext(context.Background(), testMetadata) ctx = metadata.AppendToOutgoingContext(ctx, "key1", "value2") ctx = metadata.AppendToOutgoingContext(ctx, "key1", "value3") // using Join to build expected metadata instead of FromOutgoingContext newMetadata := metadata.Join(testMetadata, metadata.Pairs("key1", "value2", "key1", "value3")) var header metadata.MD if _, err := tc.UnaryCall(ctx, &testpb.SimpleRequest{}, grpc.Header(&header)); err != nil { t.Fatal(err) } // Ignore optional response headers that Servers may set: if header != nil { delete(header, "trailer") // RFC 2616 says server SHOULD (but optional) declare trailers delete(header, "date") // the Date header is also optional delete(header, "user-agent") delete(header, "content-type") } if !reflect.DeepEqual(header, newMetadata) { t.Fatalf("Received header metadata %v, want %v", header, newMetadata) } } func (s) TestMultipleSetTrailerUnaryRPC(t *testing.T) { for _, e := range listTestEnv() { testMultipleSetTrailerUnaryRPC(t, e) } } func testMultipleSetTrailerUnaryRPC(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security, multipleSetTrailer: true}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) const ( argSize = 1 respSize = 1 ) payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, argSize) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseSize: respSize, Payload: payload, } var trailer metadata.MD ctx := metadata.NewOutgoingContext(context.Background(), testMetadata) if _, err := tc.UnaryCall(ctx, req, grpc.Trailer(&trailer), grpc.WaitForReady(true)); err != nil { t.Fatalf("TestService.UnaryCall(%v, _, _, _) = _, %v; want _, ", ctx, err) } expectedTrailer := metadata.Join(testTrailerMetadata, testTrailerMetadata2) if !reflect.DeepEqual(trailer, expectedTrailer) { t.Fatalf("Received trailer metadata %v, want %v", trailer, expectedTrailer) } } func (s) TestMultipleSetTrailerStreamingRPC(t *testing.T) { for _, e := range listTestEnv() { testMultipleSetTrailerStreamingRPC(t, e) } } func testMultipleSetTrailerStreamingRPC(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security, multipleSetTrailer: true}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) ctx := metadata.NewOutgoingContext(context.Background(), testMetadata) stream, err := tc.FullDuplexCall(ctx, grpc.WaitForReady(true)) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if err := stream.CloseSend(); err != nil { t.Fatalf("%v.CloseSend() got %v, want %v", stream, err, nil) } if _, err := stream.Recv(); err != io.EOF { t.Fatalf("%v failed to complete the FullDuplexCall: %v", stream, err) } trailer := stream.Trailer() expectedTrailer := metadata.Join(testTrailerMetadata, testTrailerMetadata2) if !reflect.DeepEqual(trailer, expectedTrailer) { t.Fatalf("Received trailer metadata %v, want %v", trailer, expectedTrailer) } } func (s) TestSetAndSendHeaderUnaryRPC(t *testing.T) { for _, e := range listTestEnv() { if e.name == "handler-tls" { continue } testSetAndSendHeaderUnaryRPC(t, e) } } // To test header metadata is sent on SendHeader().
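// The test server is started with setAndSendHeader, so it calls SetHeader
// followed by SendHeader; the client should therefore observe the join of
// testMetadata and testMetadata2 in the response header.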
func testSetAndSendHeaderUnaryRPC(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security, setAndSendHeader: true}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) const ( argSize = 1 respSize = 1 ) payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, argSize) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseSize: respSize, Payload: payload, } var header metadata.MD ctx := metadata.NewOutgoingContext(context.Background(), testMetadata) if _, err := tc.UnaryCall(ctx, req, grpc.Header(&header), grpc.WaitForReady(true)); err != nil { t.Fatalf("TestService.UnaryCall(%v, _, _, _) = _, %v; want _, ", ctx, err) } delete(header, "user-agent") delete(header, "content-type") expectedHeader := metadata.Join(testMetadata, testMetadata2) if !reflect.DeepEqual(header, expectedHeader) { t.Fatalf("Received header metadata %v, want %v", header, expectedHeader) } } func (s) TestMultipleSetHeaderUnaryRPC(t *testing.T) { for _, e := range listTestEnv() { if e.name == "handler-tls" { continue } testMultipleSetHeaderUnaryRPC(t, e) } } // To test header metadata is sent when sending response. func testMultipleSetHeaderUnaryRPC(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security, setHeaderOnly: true}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) const ( argSize = 1 respSize = 1 ) payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, argSize) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseSize: respSize, Payload: payload, } var header metadata.MD ctx := metadata.NewOutgoingContext(context.Background(), testMetadata) if _, err := tc.UnaryCall(ctx, req, grpc.Header(&header), grpc.WaitForReady(true)); err != nil { t.Fatalf("TestService.UnaryCall(%v, _, _, _) = _, %v; want _, ", ctx, err) } delete(header, "user-agent") delete(header, "content-type") expectedHeader := metadata.Join(testMetadata, testMetadata2) if !reflect.DeepEqual(header, expectedHeader) { t.Fatalf("Received header metadata %v, want %v", header, expectedHeader) } } func (s) TestMultipleSetHeaderUnaryRPCError(t *testing.T) { for _, e := range listTestEnv() { if e.name == "handler-tls" { continue } testMultipleSetHeaderUnaryRPCError(t, e) } } // To test header metadata is sent when sending status. func testMultipleSetHeaderUnaryRPCError(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security, setHeaderOnly: true}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) const ( argSize = 1 respSize = -1 // Invalid respSize to make RPC fail. 
) payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, argSize) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseSize: respSize, Payload: payload, } var header metadata.MD ctx := metadata.NewOutgoingContext(context.Background(), testMetadata) if _, err := tc.UnaryCall(ctx, req, grpc.Header(&header), grpc.WaitForReady(true)); err == nil { t.Fatalf("TestService.UnaryCall(%v, _, _, _) = _, %v; want _, ", ctx, err) } delete(header, "user-agent") delete(header, "content-type") expectedHeader := metadata.Join(testMetadata, testMetadata2) if !reflect.DeepEqual(header, expectedHeader) { t.Fatalf("Received header metadata %v, want %v", header, expectedHeader) } } func (s) TestSetAndSendHeaderStreamingRPC(t *testing.T) { for _, e := range listTestEnv() { if e.name == "handler-tls" { continue } testSetAndSendHeaderStreamingRPC(t, e) } } // To test header metadata is sent on SendHeader(). func testSetAndSendHeaderStreamingRPC(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security, setAndSendHeader: true}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) ctx := metadata.NewOutgoingContext(context.Background(), testMetadata) stream, err := tc.FullDuplexCall(ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if err := stream.CloseSend(); err != nil { t.Fatalf("%v.CloseSend() got %v, want %v", stream, err, nil) } if _, err := stream.Recv(); err != io.EOF { t.Fatalf("%v failed to complele the FullDuplexCall: %v", stream, err) } header, err := stream.Header() if err != nil { t.Fatalf("%v.Header() = _, %v, want _, ", stream, err) } delete(header, "user-agent") delete(header, "content-type") expectedHeader := metadata.Join(testMetadata, testMetadata2) if !reflect.DeepEqual(header, expectedHeader) { t.Fatalf("Received header metadata %v, want %v", header, expectedHeader) } } func (s) TestMultipleSetHeaderStreamingRPC(t *testing.T) { for _, e := range listTestEnv() { if e.name == "handler-tls" { continue } testMultipleSetHeaderStreamingRPC(t, e) } } // To test header metadata is sent when sending response. 
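// Here the server only calls SetHeader (setHeaderOnly), so the header is
// expected to be flushed implicitly together with the first response message.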
func testMultipleSetHeaderStreamingRPC(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security, setHeaderOnly: true}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) const ( argSize = 1 respSize = 1 ) ctx := metadata.NewOutgoingContext(context.Background(), testMetadata) stream, err := tc.FullDuplexCall(ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, argSize) if err != nil { t.Fatal(err) } req := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseParameters: []*testpb.ResponseParameters{ {Size: respSize}, }, Payload: payload, } if err := stream.Send(req); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, req, err) } if _, err := stream.Recv(); err != nil { t.Fatalf("%v.Recv() = %v, want ", stream, err) } if err := stream.CloseSend(); err != nil { t.Fatalf("%v.CloseSend() got %v, want %v", stream, err, nil) } if _, err := stream.Recv(); err != io.EOF { t.Fatalf("%v failed to complete the FullDuplexCall: %v", stream, err) } header, err := stream.Header() if err != nil { t.Fatalf("%v.Header() = _, %v, want _, ", stream, err) } delete(header, "user-agent") delete(header, "content-type") expectedHeader := metadata.Join(testMetadata, testMetadata2) if !reflect.DeepEqual(header, expectedHeader) { t.Fatalf("Received header metadata %v, want %v", header, expectedHeader) } } func (s) TestMultipleSetHeaderStreamingRPCError(t *testing.T) { for _, e := range listTestEnv() { if e.name == "handler-tls" { continue } testMultipleSetHeaderStreamingRPCError(t, e) } } // To test header metadata is sent when sending status. func testMultipleSetHeaderStreamingRPCError(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security, setHeaderOnly: true}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) const ( argSize = 1 respSize = -1 ) ctx, cancel := context.WithCancel(context.Background()) defer cancel() ctx = metadata.NewOutgoingContext(ctx, testMetadata) stream, err := tc.FullDuplexCall(ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, argSize) if err != nil { t.Fatal(err) } req := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseParameters: []*testpb.ResponseParameters{ {Size: respSize}, }, Payload: payload, } if err := stream.Send(req); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, req, err) } if _, err := stream.Recv(); err == nil { t.Fatalf("%v.Recv() = %v, want ", stream, err) } header, err := stream.Header() if err != nil { t.Fatalf("%v.Header() = _, %v, want _, ", stream, err) } delete(header, "user-agent") delete(header, "content-type") expectedHeader := metadata.Join(testMetadata, testMetadata2) if !reflect.DeepEqual(header, expectedHeader) { t.Fatalf("Received header metadata %v, want %v", header, expectedHeader) } if err := stream.CloseSend(); err != nil { t.Fatalf("%v.CloseSend() got %v, want %v", stream, err, nil) } } // TestMalformedHTTP2Metadata verifies the returned error when the client // sends illegal metadata. func (s) TestMalformedHTTP2Metadata(t *testing.T) { for _, e := range listTestEnv() { if e.name == "handler-tls" { // Failed with "server stops accepting new RPCs". // Server stops accepting new RPCs when the client sends an illegal http2 header.
continue } testMalformedHTTP2Metadata(t, e) } } func testMalformedHTTP2Metadata(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, 2718) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseSize: 314, Payload: payload, } ctx := metadata.NewOutgoingContext(context.Background(), malformedHTTP2Metadata) if _, err := tc.UnaryCall(ctx, req); status.Code(err) != codes.Internal { t.Fatalf("TestService.UnaryCall(%v, _) = _, %v; want _, %s", ctx, err, codes.Internal) } } func (s) TestTransparentRetry(t *testing.T) { for _, e := range listTestEnv() { if e.name == "handler-tls" { // Fails with RST_STREAM / FLOW_CONTROL_ERROR continue } testTransparentRetry(t, e) } } // This test makes sure RPCs are retried times when they receive a RST_STREAM // with the REFUSED_STREAM error code, which the InTapHandle provokes. func testTransparentRetry(t *testing.T, e env) { te := newTest(t, e) attempts := 0 successAttempt := 2 te.tapHandle = func(ctx context.Context, _ *tap.Info) (context.Context, error) { attempts++ if attempts < successAttempt { return nil, errors.New("not now") } return ctx, nil } te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tsc := testpb.NewTestServiceClient(cc) testCases := []struct { successAttempt int failFast bool errCode codes.Code }{{ successAttempt: 1, }, { successAttempt: 2, }, { successAttempt: 3, errCode: codes.Unavailable, }, { successAttempt: 1, failFast: true, }, { successAttempt: 2, failFast: true, errCode: codes.Unavailable, // We won't retry on fail fast. }} for _, tc := range testCases { attempts = 0 successAttempt = tc.successAttempt ctx, cancel := context.WithTimeout(context.Background(), time.Second) _, err := tsc.EmptyCall(ctx, &testpb.Empty{}, grpc.WaitForReady(!tc.failFast)) cancel() if status.Code(err) != tc.errCode { t.Errorf("%+v: tsc.EmptyCall(_, _) = _, %v, want _, Code=%v", tc, err, tc.errCode) } } } func (s) TestCancel(t *testing.T) { for _, e := range listTestEnv() { testCancel(t, e) } } func testCancel(t *testing.T, e env) { te := newTest(t, e) te.declareLogNoise("grpc: the client connection is closing; please retry") te.startServer(&testServer{security: e.security, unaryCallSleepTime: time.Second}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) const argSize = 2718 const respSize = 314 payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, argSize) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseSize: respSize, Payload: payload, } ctx, cancel := context.WithCancel(context.Background()) time.AfterFunc(1*time.Millisecond, cancel) if r, err := tc.UnaryCall(ctx, req); status.Code(err) != codes.Canceled { t.Fatalf("TestService/UnaryCall(_, _) = %v, %v; want _, error code: %s", r, err, codes.Canceled) } awaitNewConnLogOutput() } func (s) TestCancelNoIO(t *testing.T) { for _, e := range listTestEnv() { testCancelNoIO(t, e) } } func testCancelNoIO(t *testing.T, e env) { te := newTest(t, e) te.declareLogNoise("http2Client.notifyError got notified that the client transport was broken") te.maxStream = 1 // Only allows 1 live stream per server transport. 
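// With the stream limit at 1, the RPCs issued below can only be admitted
// once the first, never-completing stream has been canceled.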
te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) // Start one blocked RPC for which we'll never send streaming // input. This will consume the 1 maximum concurrent streams, // causing future RPCs to hang. ctx, cancelFirst := context.WithCancel(context.Background()) _, err := tc.StreamingInputCall(ctx) if err != nil { t.Fatalf("%v.StreamingInputCall(_) = _, %v, want _, ", tc, err) } // Loop until the ClientConn receives the initial settings // frame from the server, notifying it about the maximum // concurrent streams. We know when it's received it because // an RPC will fail with codes.DeadlineExceeded instead of // succeeding. // TODO(bradfitz): add internal test hook for this (Issue 534) for { ctx, cancelSecond := context.WithTimeout(context.Background(), 50*time.Millisecond) _, err := tc.StreamingInputCall(ctx) cancelSecond() if err == nil { continue } if status.Code(err) == codes.DeadlineExceeded { break } t.Fatalf("%v.StreamingInputCall(_) = _, %v, want _, %s", tc, err, codes.DeadlineExceeded) } // If there are any RPCs in flight before the client receives // the max streams setting, let them be expired. // TODO(bradfitz): add internal test hook for this (Issue 534) time.Sleep(50 * time.Millisecond) go func() { time.Sleep(50 * time.Millisecond) cancelFirst() }() // This should be blocked until the 1st is canceled, then succeed. ctx, cancelThird := context.WithTimeout(context.Background(), 500*time.Millisecond) if _, err := tc.StreamingInputCall(ctx); err != nil { t.Errorf("%v.StreamingInputCall(_) = _, %v, want _, ", tc, err) } cancelThird() } // The following tests the gRPC streaming RPC implementations. // TODO(zhaoq): Have better coverage on error cases. var ( reqSizes = []int{27182, 8, 1828, 45904} respSizes = []int{31415, 9, 2653, 58979} ) func (s) TestNoService(t *testing.T) { for _, e := range listTestEnv() { testNoService(t, e) } } func testNoService(t *testing.T, e env) { te := newTest(t, e) te.startServer(nil) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) stream, err := tc.FullDuplexCall(te.ctx, grpc.WaitForReady(true)) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if _, err := stream.Recv(); status.Code(err) != codes.Unimplemented { t.Fatalf("stream.Recv() = _, %v, want _, error code %s", err, codes.Unimplemented) } } func (s) TestPingPong(t *testing.T) { for _, e := range listTestEnv() { testPingPong(t, e) } } func testPingPong(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) stream, err := tc.FullDuplexCall(te.ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } var index int for index < len(reqSizes) { respParam := []*testpb.ResponseParameters{ { Size: int32(respSizes[index]), }, } payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, int32(reqSizes[index])) if err != nil { t.Fatal(err) } req := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseParameters: respParam, Payload: payload, } if err := stream.Send(req); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, req, err) } reply, err := stream.Recv() if err != nil { t.Fatalf("%v.Recv() = %v, want ", stream, err) } pt := reply.GetPayload().GetType() if pt != testpb.PayloadType_COMPRESSABLE { t.Fatalf("Got the reply of type %d, want %d", pt, testpb.PayloadType_COMPRESSABLE) } size := 
len(reply.GetPayload().GetBody()) if size != int(respSizes[index]) { t.Fatalf("Got reply body of length %d, want %d", size, respSizes[index]) } index++ } if err := stream.CloseSend(); err != nil { t.Fatalf("%v.CloseSend() got %v, want %v", stream, err, nil) } if _, err := stream.Recv(); err != io.EOF { t.Fatalf("%v failed to complele the ping pong test: %v", stream, err) } } func (s) TestMetadataStreamingRPC(t *testing.T) { for _, e := range listTestEnv() { testMetadataStreamingRPC(t, e) } } func testMetadataStreamingRPC(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) ctx := metadata.NewOutgoingContext(te.ctx, testMetadata) stream, err := tc.FullDuplexCall(ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } go func() { headerMD, err := stream.Header() if e.security == "tls" { delete(headerMD, "transport_security_type") } delete(headerMD, "trailer") // ignore if present delete(headerMD, "user-agent") delete(headerMD, "content-type") if err != nil || !reflect.DeepEqual(testMetadata, headerMD) { t.Errorf("#1 %v.Header() = %v, %v, want %v, ", stream, headerMD, err, testMetadata) } // test the cached value. headerMD, err = stream.Header() delete(headerMD, "trailer") // ignore if present delete(headerMD, "user-agent") delete(headerMD, "content-type") if err != nil || !reflect.DeepEqual(testMetadata, headerMD) { t.Errorf("#2 %v.Header() = %v, %v, want %v, ", stream, headerMD, err, testMetadata) } err = func() error { for index := 0; index < len(reqSizes); index++ { respParam := []*testpb.ResponseParameters{ { Size: int32(respSizes[index]), }, } payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, int32(reqSizes[index])) if err != nil { return err } req := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseParameters: respParam, Payload: payload, } if err := stream.Send(req); err != nil { return fmt.Errorf("%v.Send(%v) = %v, want ", stream, req, err) } } return nil }() // Tell the server we're done sending args. 
stream.CloseSend() if err != nil { t.Error(err) } }() for { if _, err := stream.Recv(); err != nil { break } } trailerMD := stream.Trailer() if !reflect.DeepEqual(testTrailerMetadata, trailerMD) { t.Fatalf("%v.Trailer() = %v, want %v", stream, trailerMD, testTrailerMetadata) } } func (s) TestServerStreaming(t *testing.T) { for _, e := range listTestEnv() { testServerStreaming(t, e) } } func testServerStreaming(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) respParam := make([]*testpb.ResponseParameters, len(respSizes)) for i, s := range respSizes { respParam[i] = &testpb.ResponseParameters{ Size: int32(s), } } req := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseParameters: respParam, } stream, err := tc.StreamingOutputCall(context.Background(), req) if err != nil { t.Fatalf("%v.StreamingOutputCall(_) = _, %v, want ", tc, err) } var rpcStatus error var respCnt int var index int for { reply, err := stream.Recv() if err != nil { rpcStatus = err break } pt := reply.GetPayload().GetType() if pt != testpb.PayloadType_COMPRESSABLE { t.Fatalf("Got the reply of type %d, want %d", pt, testpb.PayloadType_COMPRESSABLE) } size := len(reply.GetPayload().GetBody()) if size != int(respSizes[index]) { t.Fatalf("Got reply body of length %d, want %d", size, respSizes[index]) } index++ respCnt++ } if rpcStatus != io.EOF { t.Fatalf("Failed to finish the server streaming rpc: %v, want ", rpcStatus) } if respCnt != len(respSizes) { t.Fatalf("Got %d reply, want %d", len(respSizes), respCnt) } } func (s) TestFailedServerStreaming(t *testing.T) { for _, e := range listTestEnv() { testFailedServerStreaming(t, e) } } func testFailedServerStreaming(t *testing.T, e env) { te := newTest(t, e) te.userAgent = failAppUA te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) respParam := make([]*testpb.ResponseParameters, len(respSizes)) for i, s := range respSizes { respParam[i] = &testpb.ResponseParameters{ Size: int32(s), } } req := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseParameters: respParam, } ctx := metadata.NewOutgoingContext(te.ctx, testMetadata) stream, err := tc.StreamingOutputCall(ctx, req) if err != nil { t.Fatalf("%v.StreamingOutputCall(_) = _, %v, want ", tc, err) } wantErr := status.Error(codes.DataLoss, "error for testing: "+failAppUA) if _, err := stream.Recv(); !reflect.DeepEqual(err, wantErr) { t.Fatalf("%v.Recv() = _, %v, want _, %v", stream, err, wantErr) } } // concurrentSendServer is a TestServiceServer whose // StreamingOutputCall makes ten serial Send calls, sending payloads // "0".."9", inclusive. TestServerStreamingConcurrent verifies they // were received in the correct order, and that there were no races. // // All other TestServiceServer methods crash if called. type concurrentSendServer struct { testpb.TestServiceServer } func (s concurrentSendServer) StreamingOutputCall(args *testpb.StreamingOutputCallRequest, stream testpb.TestService_StreamingOutputCallServer) error { for i := 0; i < 10; i++ { stream.Send(&testpb.StreamingOutputCallResponse{ Payload: &testpb.Payload{ Body: []byte{'0' + uint8(i)}, }, }) } return nil } // Tests doing a bunch of concurrent streaming output calls. 
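// Twenty concurrent clients each expect the ten single-byte payloads
// "0".."9" back, in order, from concurrentSendServer.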
func (s) TestServerStreamingConcurrent(t *testing.T) { for _, e := range listTestEnv() { testServerStreamingConcurrent(t, e) } } func testServerStreamingConcurrent(t *testing.T, e env) { te := newTest(t, e) te.startServer(concurrentSendServer{}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) doStreamingCall := func() { req := &testpb.StreamingOutputCallRequest{} stream, err := tc.StreamingOutputCall(context.Background(), req) if err != nil { t.Errorf("%v.StreamingOutputCall(_) = _, %v, want ", tc, err) return } var ngot int var buf bytes.Buffer for { reply, err := stream.Recv() if err == io.EOF { break } if err != nil { t.Fatal(err) } ngot++ if buf.Len() > 0 { buf.WriteByte(',') } buf.Write(reply.GetPayload().GetBody()) } if want := 10; ngot != want { t.Errorf("Got %d replies, want %d", ngot, want) } if got, want := buf.String(), "0,1,2,3,4,5,6,7,8,9"; got != want { t.Errorf("Got replies %q; want %q", got, want) } } var wg sync.WaitGroup for i := 0; i < 20; i++ { wg.Add(1) go func() { defer wg.Done() doStreamingCall() }() } wg.Wait() } func generatePayloadSizes() [][]int { reqSizes := [][]int{ {27182, 8, 1828, 45904}, } num8KPayloads := 1024 eightKPayloads := []int{} for i := 0; i < num8KPayloads; i++ { eightKPayloads = append(eightKPayloads, (1 << 13)) } reqSizes = append(reqSizes, eightKPayloads) num2MPayloads := 8 twoMPayloads := []int{} for i := 0; i < num2MPayloads; i++ { twoMPayloads = append(twoMPayloads, (1 << 21)) } reqSizes = append(reqSizes, twoMPayloads) return reqSizes } func (s) TestClientStreaming(t *testing.T) { for _, s := range generatePayloadSizes() { for _, e := range listTestEnv() { testClientStreaming(t, e, s) } } } func testClientStreaming(t *testing.T, e env, sizes []int) { te := newTest(t, e) te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) ctx, cancel := context.WithTimeout(te.ctx, time.Second*30) defer cancel() stream, err := tc.StreamingInputCall(ctx) if err != nil { t.Fatalf("%v.StreamingInputCall(_) = _, %v, want ", tc, err) } var sum int for _, s := range sizes { payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, int32(s)) if err != nil { t.Fatal(err) } req := &testpb.StreamingInputCallRequest{ Payload: payload, } if err := stream.Send(req); err != nil { t.Fatalf("%v.Send(_) = %v, want ", stream, err) } sum += s } reply, err := stream.CloseAndRecv() if err != nil { t.Fatalf("%v.CloseAndRecv() got error %v, want %v", stream, err, nil) } if reply.GetAggregatedPayloadSize() != int32(sum) { t.Fatalf("%v.CloseAndRecv().GetAggregatePayloadSize() = %v; want %v", stream, reply.GetAggregatedPayloadSize(), sum) } } func (s) TestClientStreamingError(t *testing.T) { for _, e := range listTestEnv() { if e.name == "handler-tls" { continue } testClientStreamingError(t, e) } } func testClientStreamingError(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security, earlyFail: true}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) stream, err := tc.StreamingInputCall(te.ctx) if err != nil { t.Fatalf("%v.StreamingInputCall(_) = _, %v, want ", tc, err) } payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, 1) if err != nil { t.Fatal(err) } req := &testpb.StreamingInputCallRequest{ Payload: payload, } // The 1st request should go through. 
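// The loop below keeps sending until Send reports io.EOF (the stream was
// terminated by the server's early failure); the actual status, NotFound,
// is then obtained from CloseAndRecv.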
if err := stream.Send(req); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, req, err) } for { if err := stream.Send(req); err != io.EOF { continue } if _, err := stream.CloseAndRecv(); status.Code(err) != codes.NotFound { t.Fatalf("%v.CloseAndRecv() = %v, want error %s", stream, err, codes.NotFound) } break } } func (s) TestExceedMaxStreamsLimit(t *testing.T) { for _, e := range listTestEnv() { testExceedMaxStreamsLimit(t, e) } } func testExceedMaxStreamsLimit(t *testing.T, e env) { te := newTest(t, e) te.declareLogNoise( "http2Client.notifyError got notified that the client transport was broken", "Conn.resetTransport failed to create client transport", "grpc: the connection is closing", ) te.maxStream = 1 // Only allows 1 live stream per server transport. te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) _, err := tc.StreamingInputCall(te.ctx) if err != nil { t.Fatalf("%v.StreamingInputCall(_) = _, %v, want _, ", tc, err) } // Loop until receiving the new max stream setting from the server. for { ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond) defer cancel() _, err := tc.StreamingInputCall(ctx) if err == nil { time.Sleep(50 * time.Millisecond) continue } if status.Code(err) == codes.DeadlineExceeded { break } t.Fatalf("%v.StreamingInputCall(_) = _, %v, want _, %s", tc, err, codes.DeadlineExceeded) } } func (s) TestStreamsQuotaRecovery(t *testing.T) { for _, e := range listTestEnv() { testStreamsQuotaRecovery(t, e) } } func testStreamsQuotaRecovery(t *testing.T, e env) { te := newTest(t, e) te.declareLogNoise( "http2Client.notifyError got notified that the client transport was broken", "Conn.resetTransport failed to create client transport", "grpc: the connection is closing", ) te.maxStream = 1 // Allows 1 live stream. te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) ctx, cancel := context.WithCancel(context.Background()) defer cancel() if _, err := tc.StreamingInputCall(ctx); err != nil { t.Fatalf("tc.StreamingInputCall(_) = _, %v, want _, ", err) } // Loop until the new max stream setting is effective. for { ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond) _, err := tc.StreamingInputCall(ctx) cancel() if err == nil { time.Sleep(5 * time.Millisecond) continue } if status.Code(err) == codes.DeadlineExceeded { break } t.Fatalf("tc.StreamingInputCall(_) = _, %v, want _, %s", err, codes.DeadlineExceeded) } var wg sync.WaitGroup for i := 0; i < 10; i++ { wg.Add(1) go func() { defer wg.Done() payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, 314) if err != nil { t.Error(err) return } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseSize: 1592, Payload: payload, } // No rpc should go through due to the max streams limit. ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond) defer cancel() if _, err := tc.UnaryCall(ctx, req, grpc.WaitForReady(true)); status.Code(err) != codes.DeadlineExceeded { t.Errorf("tc.UnaryCall(_, _) = _, %v, want _, %s", err, codes.DeadlineExceeded) } }() } wg.Wait() cancel() // A new stream should be allowed after canceling the first one. 
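// Canceling the first call releases the single stream slot, so the final
// StreamingInputCall below should be admitted within its 5 second deadline.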
ctx, cancel = context.WithTimeout(context.Background(), 5*time.Second) defer cancel() if _, err := tc.StreamingInputCall(ctx); err != nil { t.Fatalf("tc.StreamingInputCall(_) = _, %v, want _, %v", err, nil) } } func (s) TestCompressServerHasNoSupport(t *testing.T) { for _, e := range listTestEnv() { testCompressServerHasNoSupport(t, e) } } func testCompressServerHasNoSupport(t *testing.T, e env) { te := newTest(t, e) te.serverCompression = false te.clientCompression = false te.clientNopCompression = true te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) const argSize = 271828 const respSize = 314159 payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, argSize) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseSize: respSize, Payload: payload, } if _, err := tc.UnaryCall(context.Background(), req); err == nil || status.Code(err) != codes.Unimplemented { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code %s", err, codes.Unimplemented) } // Streaming RPC stream, err := tc.FullDuplexCall(context.Background()) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if _, err := stream.Recv(); err == nil || status.Code(err) != codes.Unimplemented { t.Fatalf("%v.Recv() = %v, want error code %s", stream, err, codes.Unimplemented) } } func (s) TestCompressOK(t *testing.T) { for _, e := range listTestEnv() { testCompressOK(t, e) } } func testCompressOK(t *testing.T, e env) { te := newTest(t, e) te.serverCompression = true te.clientCompression = true te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) // Unary call const argSize = 271828 const respSize = 314159 payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, argSize) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseSize: respSize, Payload: payload, } ctx := metadata.NewOutgoingContext(context.Background(), metadata.Pairs("something", "something")) if _, err := tc.UnaryCall(ctx, req); err != nil { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, ", err) } // Streaming RPC ctx, cancel := context.WithCancel(context.Background()) defer cancel() stream, err := tc.FullDuplexCall(ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } respParam := []*testpb.ResponseParameters{ { Size: 31415, }, } payload, err = newPayload(testpb.PayloadType_COMPRESSABLE, int32(31415)) if err != nil { t.Fatal(err) } sreq := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseParameters: respParam, Payload: payload, } if err := stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } stream.CloseSend() if _, err := stream.Recv(); err != nil { t.Fatalf("%v.Recv() = %v, want ", stream, err) } if _, err := stream.Recv(); err != io.EOF { t.Fatalf("%v.Recv() = %v, want io.EOF", stream, err) } } func (s) TestIdentityEncoding(t *testing.T) { for _, e := range listTestEnv() { testIdentityEncoding(t, e) } } func testIdentityEncoding(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) // Unary call payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, 5) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: 
testpb.PayloadType_COMPRESSABLE, ResponseSize: 10, Payload: payload, } ctx := metadata.NewOutgoingContext(context.Background(), metadata.Pairs("something", "something")) if _, err := tc.UnaryCall(ctx, req); err != nil { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, ", err) } // Streaming RPC ctx, cancel := context.WithCancel(context.Background()) defer cancel() stream, err := tc.FullDuplexCall(ctx, grpc.UseCompressor("identity")) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } payload, err = newPayload(testpb.PayloadType_COMPRESSABLE, int32(31415)) if err != nil { t.Fatal(err) } sreq := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseParameters: []*testpb.ResponseParameters{{Size: 10}}, Payload: payload, } if err := stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } stream.CloseSend() if _, err := stream.Recv(); err != nil { t.Fatalf("%v.Recv() = %v, want ", stream, err) } if _, err := stream.Recv(); err != io.EOF { t.Fatalf("%v.Recv() = %v, want io.EOF", stream, err) } } func (s) TestUnaryClientInterceptor(t *testing.T) { for _, e := range listTestEnv() { testUnaryClientInterceptor(t, e) } } func failOkayRPC(ctx context.Context, method string, req, reply interface{}, cc *grpc.ClientConn, invoker grpc.UnaryInvoker, opts ...grpc.CallOption) error { err := invoker(ctx, method, req, reply, cc, opts...) if err == nil { return status.Error(codes.NotFound, "") } return err } func testUnaryClientInterceptor(t *testing.T, e env) { te := newTest(t, e) te.userAgent = testAppUA te.unaryClientInt = failOkayRPC te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); status.Code(err) != codes.NotFound { t.Fatalf("%v.EmptyCall(_, _) = _, %v, want _, error code %s", tc, err, codes.NotFound) } } func (s) TestStreamClientInterceptor(t *testing.T) { for _, e := range listTestEnv() { testStreamClientInterceptor(t, e) } } func failOkayStream(ctx context.Context, desc *grpc.StreamDesc, cc *grpc.ClientConn, method string, streamer grpc.Streamer, opts ...grpc.CallOption) (grpc.ClientStream, error) { s, err := streamer(ctx, desc, cc, method, opts...) 
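// Invert the outcome: a successfully created stream is reported as NotFound,
// which lets the test confirm the interceptor is actually in the call path.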
if err == nil { return nil, status.Error(codes.NotFound, "") } return s, nil } func testStreamClientInterceptor(t *testing.T, e env) { te := newTest(t, e) te.streamClientInt = failOkayStream te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) respParam := []*testpb.ResponseParameters{ { Size: int32(1), }, } payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, int32(1)) if err != nil { t.Fatal(err) } req := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseParameters: respParam, Payload: payload, } if _, err := tc.StreamingOutputCall(context.Background(), req); status.Code(err) != codes.NotFound { t.Fatalf("%v.StreamingOutputCall(_) = _, %v, want _, error code %s", tc, err, codes.NotFound) } } func (s) TestUnaryServerInterceptor(t *testing.T) { for _, e := range listTestEnv() { testUnaryServerInterceptor(t, e) } } func errInjector(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) { return nil, status.Error(codes.PermissionDenied, "") } func testUnaryServerInterceptor(t *testing.T, e env) { te := newTest(t, e) te.unaryServerInt = errInjector te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); status.Code(err) != codes.PermissionDenied { t.Fatalf("%v.EmptyCall(_, _) = _, %v, want _, error code %s", tc, err, codes.PermissionDenied) } } func (s) TestStreamServerInterceptor(t *testing.T) { for _, e := range listTestEnv() { // TODO(bradfitz): Temporarily skip this env due to #619. if e.name == "handler-tls" { continue } testStreamServerInterceptor(t, e) } } func fullDuplexOnly(srv interface{}, ss grpc.ServerStream, info *grpc.StreamServerInfo, handler grpc.StreamHandler) error { if info.FullMethod == "/grpc.testing.TestService/FullDuplexCall" { return handler(srv, ss) } // Reject the other methods. return status.Error(codes.PermissionDenied, "") } func testStreamServerInterceptor(t *testing.T, e env) { te := newTest(t, e) te.streamServerInt = fullDuplexOnly te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) respParam := []*testpb.ResponseParameters{ { Size: int32(1), }, } payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, int32(1)) if err != nil { t.Fatal(err) } req := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseParameters: respParam, Payload: payload, } s1, err := tc.StreamingOutputCall(context.Background(), req) if err != nil { t.Fatalf("%v.StreamingOutputCall(_) = _, %v, want _, ", tc, err) } if _, err := s1.Recv(); status.Code(err) != codes.PermissionDenied { t.Fatalf("%v.StreamingInputCall(_) = _, %v, want _, error code %s", tc, err, codes.PermissionDenied) } s2, err := tc.FullDuplexCall(context.Background()) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if err := s2.Send(req); err != nil { t.Fatalf("%v.Send(_) = %v, want ", s2, err) } if _, err := s2.Recv(); err != nil { t.Fatalf("%v.Recv() = _, %v, want _, ", s2, err) } } // funcServer implements methods of TestServiceServer using funcs, // similar to an http.HandlerFunc. // Any unimplemented method will crash. Tests implement the method(s) // they need. 
type funcServer struct { testpb.TestServiceServer unaryCall func(ctx context.Context, in *testpb.SimpleRequest) (*testpb.SimpleResponse, error) streamingInputCall func(stream testpb.TestService_StreamingInputCallServer) error fullDuplexCall func(stream testpb.TestService_FullDuplexCallServer) error } func (s *funcServer) UnaryCall(ctx context.Context, in *testpb.SimpleRequest) (*testpb.SimpleResponse, error) { return s.unaryCall(ctx, in) } func (s *funcServer) StreamingInputCall(stream testpb.TestService_StreamingInputCallServer) error { return s.streamingInputCall(stream) } func (s *funcServer) FullDuplexCall(stream testpb.TestService_FullDuplexCallServer) error { return s.fullDuplexCall(stream) } func (s) TestClientRequestBodyErrorUnexpectedEOF(t *testing.T) { for _, e := range listTestEnv() { testClientRequestBodyErrorUnexpectedEOF(t, e) } } func testClientRequestBodyErrorUnexpectedEOF(t *testing.T, e env) { te := newTest(t, e) ts := &funcServer{unaryCall: func(ctx context.Context, in *testpb.SimpleRequest) (*testpb.SimpleResponse, error) { errUnexpectedCall := errors.New("unexpected call func server method") t.Error(errUnexpectedCall) return nil, errUnexpectedCall }} te.startServer(ts) defer te.tearDown() te.withServerTester(func(st *serverTester) { st.writeHeadersGRPC(1, "/grpc.testing.TestService/UnaryCall") // Say we have 5 bytes coming, but set END_STREAM flag: st.writeData(1, true, []byte{0, 0, 0, 0, 5}) st.wantAnyFrame() // wait for server to crash (it used to crash) }) } func (s) TestClientRequestBodyErrorCloseAfterLength(t *testing.T) { for _, e := range listTestEnv() { testClientRequestBodyErrorCloseAfterLength(t, e) } } func testClientRequestBodyErrorCloseAfterLength(t *testing.T, e env) { te := newTest(t, e) te.declareLogNoise("Server.processUnaryRPC failed to write status") ts := &funcServer{unaryCall: func(ctx context.Context, in *testpb.SimpleRequest) (*testpb.SimpleResponse, error) { errUnexpectedCall := errors.New("unexpected call func server method") t.Error(errUnexpectedCall) return nil, errUnexpectedCall }} te.startServer(ts) defer te.tearDown() te.withServerTester(func(st *serverTester) { st.writeHeadersGRPC(1, "/grpc.testing.TestService/UnaryCall") // say we're sending 5 bytes, but then close the connection instead. st.writeData(1, false, []byte{0, 0, 0, 0, 5}) st.cc.Close() }) } func (s) TestClientRequestBodyErrorCancel(t *testing.T) { for _, e := range listTestEnv() { testClientRequestBodyErrorCancel(t, e) } } func testClientRequestBodyErrorCancel(t *testing.T, e env) { te := newTest(t, e) gotCall := make(chan bool, 1) ts := &funcServer{unaryCall: func(ctx context.Context, in *testpb.SimpleRequest) (*testpb.SimpleResponse, error) { gotCall <- true return new(testpb.SimpleResponse), nil }} te.startServer(ts) defer te.tearDown() te.withServerTester(func(st *serverTester) { st.writeHeadersGRPC(1, "/grpc.testing.TestService/UnaryCall") // Say we have 5 bytes coming, but cancel it instead. st.writeRSTStream(1, http2.ErrCodeCancel) st.writeData(1, false, []byte{0, 0, 0, 0, 5}) // Verify we didn't a call yet. select { case <-gotCall: t.Fatal("unexpected call") default: } // And now send an uncanceled (but still invalid), just to get a response. 
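// Stream 3 carries a zero-length message with END_STREAM set, which is
// enough for UnaryCall to run and signal gotCall, confirming that the
// earlier RST_STREAM really prevented the first call.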
st.writeHeadersGRPC(3, "/grpc.testing.TestService/UnaryCall") st.writeData(3, true, []byte{0, 0, 0, 0, 0}) <-gotCall st.wantAnyFrame() }) } func (s) TestClientRequestBodyErrorCancelStreamingInput(t *testing.T) { for _, e := range listTestEnv() { testClientRequestBodyErrorCancelStreamingInput(t, e) } } func testClientRequestBodyErrorCancelStreamingInput(t *testing.T, e env) { te := newTest(t, e) recvErr := make(chan error, 1) ts := &funcServer{streamingInputCall: func(stream testpb.TestService_StreamingInputCallServer) error { _, err := stream.Recv() recvErr <- err return nil }} te.startServer(ts) defer te.tearDown() te.withServerTester(func(st *serverTester) { st.writeHeadersGRPC(1, "/grpc.testing.TestService/StreamingInputCall") // Say we have 5 bytes coming, but cancel it instead. st.writeData(1, false, []byte{0, 0, 0, 0, 5}) st.writeRSTStream(1, http2.ErrCodeCancel) var got error select { case got = <-recvErr: case <-time.After(3 * time.Second): t.Fatal("timeout waiting for error") } if grpc.Code(got) != codes.Canceled { t.Errorf("error = %#v; want error code %s", got, codes.Canceled) } }) } func (s) TestClientResourceExhaustedCancelFullDuplex(t *testing.T) { for _, e := range listTestEnv() { if e.httpHandler { // httpHandler write won't be blocked on flow control window. continue } testClientResourceExhaustedCancelFullDuplex(t, e) } } func testClientResourceExhaustedCancelFullDuplex(t *testing.T, e env) { te := newTest(t, e) recvErr := make(chan error, 1) ts := &funcServer{fullDuplexCall: func(stream testpb.TestService_FullDuplexCallServer) error { defer close(recvErr) _, err := stream.Recv() if err != nil { return status.Errorf(codes.Internal, "stream.Recv() got error: %v, want ", err) } // create a payload that's larger than the default flow control window. payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, 10) if err != nil { return err } resp := &testpb.StreamingOutputCallResponse{ Payload: payload, } ce := make(chan error) go func() { var err error for { if err = stream.Send(resp); err != nil { break } } ce <- err }() select { case err = <-ce: case <-time.After(10 * time.Second): err = errors.New("10s timeout reached") } recvErr <- err return err }} te.startServer(ts) defer te.tearDown() // set a low limit on receive message size to error with Resource Exhausted on // client side when server send a large message. 
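// Expected sequence: the client's Recv fails with ResourceExhausted, the RPC
// is torn down, and the server's blocked Send eventually observes Canceled.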
te.maxClientReceiveMsgSize = newInt(10) cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) stream, err := tc.FullDuplexCall(context.Background()) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } req := &testpb.StreamingOutputCallRequest{} if err := stream.Send(req); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, req, err) } if _, err := stream.Recv(); status.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Recv() = _, %v, want _, error code: %s", stream, err, codes.ResourceExhausted) } err = <-recvErr if status.Code(err) != codes.Canceled { t.Fatalf("server got error %v, want error code: %s", err, codes.Canceled) } } type clientTimeoutCreds struct { timeoutReturned bool } func (c *clientTimeoutCreds) ClientHandshake(ctx context.Context, addr string, rawConn net.Conn) (net.Conn, credentials.AuthInfo, error) { if !c.timeoutReturned { c.timeoutReturned = true return nil, nil, context.DeadlineExceeded } return rawConn, nil, nil } func (c *clientTimeoutCreds) ServerHandshake(rawConn net.Conn) (net.Conn, credentials.AuthInfo, error) { return rawConn, nil, nil } func (c *clientTimeoutCreds) Info() credentials.ProtocolInfo { return credentials.ProtocolInfo{} } func (c *clientTimeoutCreds) Clone() credentials.TransportCredentials { return nil } func (c *clientTimeoutCreds) OverrideServerName(s string) error { return nil } func (s) TestNonFailFastRPCSucceedOnTimeoutCreds(t *testing.T) { te := newTest(t, env{name: "timeout-cred", network: "tcp", security: "clientTimeoutCreds", balancer: "v1"}) te.userAgent = testAppUA te.startServer(&testServer{security: te.e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) // This unary call should succeed, because ClientHandshake will succeed for the second time. if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}, grpc.WaitForReady(true)); err != nil { te.t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want ", err) } } type serverDispatchCred struct { rawConnCh chan net.Conn } func newServerDispatchCred() *serverDispatchCred { return &serverDispatchCred{ rawConnCh: make(chan net.Conn, 1), } } func (c *serverDispatchCred) ClientHandshake(ctx context.Context, addr string, rawConn net.Conn) (net.Conn, credentials.AuthInfo, error) { return rawConn, nil, nil } func (c *serverDispatchCred) ServerHandshake(rawConn net.Conn) (net.Conn, credentials.AuthInfo, error) { select { case c.rawConnCh <- rawConn: default: } return nil, nil, credentials.ErrConnDispatched } func (c *serverDispatchCred) Info() credentials.ProtocolInfo { return credentials.ProtocolInfo{} } func (c *serverDispatchCred) Clone() credentials.TransportCredentials { return nil } func (c *serverDispatchCred) OverrideServerName(s string) error { return nil } func (c *serverDispatchCred) getRawConn() net.Conn { return <-c.rawConnCh } func (s) TestServerCredsDispatch(t *testing.T) { lis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Failed to listen: %v", err) } cred := newServerDispatchCred() s := grpc.NewServer(grpc.Creds(cred)) go s.Serve(lis) defer s.Stop() cc, err := grpc.Dial(lis.Addr().String(), grpc.WithTransportCredentials(cred)) if err != nil { t.Fatalf("grpc.Dial(%q) = %v", lis.Addr().String(), err) } defer cc.Close() rawConn := cred.getRawConn() // Give grpc a chance to see the error and potentially close the connection. // And check that connection is not closed after that. time.Sleep(100 * time.Millisecond) // Check rawConn is not closed. 
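// credentials.ErrConnDispatched tells the gRPC server the connection has been
// taken over by the credentials, so the server should not close it; a
// successful Write here is the observable proof.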
if n, err := rawConn.Write([]byte{0}); n <= 0 || err != nil { t.Errorf("Write() = %v, %v; want n>0, ", n, err) } } type authorityCheckCreds struct { got string } func (c *authorityCheckCreds) ServerHandshake(rawConn net.Conn) (net.Conn, credentials.AuthInfo, error) { return rawConn, nil, nil } func (c *authorityCheckCreds) ClientHandshake(ctx context.Context, authority string, rawConn net.Conn) (net.Conn, credentials.AuthInfo, error) { c.got = authority return rawConn, nil, nil } func (c *authorityCheckCreds) Info() credentials.ProtocolInfo { return credentials.ProtocolInfo{} } func (c *authorityCheckCreds) Clone() credentials.TransportCredentials { return c } func (c *authorityCheckCreds) OverrideServerName(s string) error { return nil } // This test makes sure that the authority the client handshake gets is the endpoint // in the dial target, not the resolved IP address. func (s) TestCredsHandshakeAuthority(t *testing.T) { const testAuthority = "test.auth.ori.ty" lis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Failed to listen: %v", err) } cred := &authorityCheckCreds{} s := grpc.NewServer() go s.Serve(lis) defer s.Stop() r, rcleanup := manual.GenerateAndRegisterManualResolver() defer rcleanup() cc, err := grpc.Dial(r.Scheme()+":///"+testAuthority, grpc.WithTransportCredentials(cred)) if err != nil { t.Fatalf("grpc.Dial(%q) = %v", lis.Addr().String(), err) } defer cc.Close() r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: lis.Addr().String()}}}) ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond) defer cancel() for { s := cc.GetState() if s == connectivity.Ready { break } if !cc.WaitForStateChange(ctx, s) { // ctx got timeout or canceled. t.Fatalf("ClientConn is not ready after 100 ms") } } if cred.got != testAuthority { t.Fatalf("client creds got authority: %q, want: %q", cred.got, testAuthority) } } type clientFailCreds struct{} func (c *clientFailCreds) ServerHandshake(rawConn net.Conn) (net.Conn, credentials.AuthInfo, error) { return rawConn, nil, nil } func (c *clientFailCreds) ClientHandshake(ctx context.Context, authority string, rawConn net.Conn) (net.Conn, credentials.AuthInfo, error) { return nil, nil, fmt.Errorf("client handshake fails with fatal error") } func (c *clientFailCreds) Info() credentials.ProtocolInfo { return credentials.ProtocolInfo{} } func (c *clientFailCreds) Clone() credentials.TransportCredentials { return c } func (c *clientFailCreds) OverrideServerName(s string) error { return nil } // This test makes sure that failfast RPCs fail if client handshake fails with // fatal errors. func (s) TestFailfastRPCFailOnFatalHandshakeError(t *testing.T) { lis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Failed to listen: %v", err) } defer lis.Close() cc, err := grpc.Dial("passthrough:///"+lis.Addr().String(), grpc.WithTransportCredentials(&clientFailCreds{})) if err != nil { t.Fatalf("grpc.Dial(_) = %v", err) } defer cc.Close() tc := testpb.NewTestServiceClient(cc) // This unary call should fail, but not timeout. ctx, cancel := context.WithTimeout(context.Background(), time.Second) defer cancel() if _, err := tc.EmptyCall(ctx, &testpb.Empty{}, grpc.WaitForReady(false)); status.Code(err) != codes.Unavailable { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want ", err) } } func (s) TestFlowControlLogicalRace(t *testing.T) { // Test for a regression of https://github.com/grpc/grpc-go/issues/632, // and other flow control bugs.
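// Each request reads only two of the server's hundred items and then cancels,
// so streams are constantly torn down while the server is still writing;
// broken flow control accounting shows up as requests that receive fewer than
// two responses before the deadline.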
const ( itemCount = 100 itemSize = 1 << 10 recvCount = 2 maxFailures = 3 requestTimeout = time.Second * 5 ) requestCount := 10000 if raceMode { requestCount = 1000 } lis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Failed to listen: %v", err) } defer lis.Close() s := grpc.NewServer() testpb.RegisterTestServiceServer(s, &flowControlLogicalRaceServer{ itemCount: itemCount, itemSize: itemSize, }) defer s.Stop() go s.Serve(lis) ctx := context.Background() cc, err := grpc.Dial(lis.Addr().String(), grpc.WithInsecure(), grpc.WithBlock()) if err != nil { t.Fatalf("grpc.Dial(%q) = %v", lis.Addr().String(), err) } defer cc.Close() cl := testpb.NewTestServiceClient(cc) failures := 0 for i := 0; i < requestCount; i++ { ctx, cancel := context.WithTimeout(ctx, requestTimeout) output, err := cl.StreamingOutputCall(ctx, &testpb.StreamingOutputCallRequest{}) if err != nil { t.Fatalf("StreamingOutputCall; err = %q", err) } j := 0 loop: for ; j < recvCount; j++ { _, err := output.Recv() if err != nil { if err == io.EOF { break loop } switch status.Code(err) { case codes.DeadlineExceeded: break loop default: t.Fatalf("Recv; err = %q", err) } } } cancel() <-ctx.Done() if j < recvCount { t.Errorf("got %d responses to request %d", j, i) failures++ if failures >= maxFailures { // Continue past the first failure to see if the connection is // entirely broken, or if only a single RPC was affected break } } } } type flowControlLogicalRaceServer struct { testpb.TestServiceServer itemSize int itemCount int } func (s *flowControlLogicalRaceServer) StreamingOutputCall(req *testpb.StreamingOutputCallRequest, srv testpb.TestService_StreamingOutputCallServer) error { for i := 0; i < s.itemCount; i++ { err := srv.Send(&testpb.StreamingOutputCallResponse{ Payload: &testpb.Payload{ // Sending a large stream of data which the client reject // helps to trigger some types of flow control bugs. // // Reallocating memory here is inefficient, but the stress it // puts on the GC leads to more frequent flow control // failures. The GC likely causes more variety in the // goroutine scheduling orders. Body: bytes.Repeat([]byte("a"), s.itemSize), }, }) if err != nil { return err } } return nil } type lockingWriter struct { mu sync.Mutex w io.Writer } func (lw *lockingWriter) Write(p []byte) (n int, err error) { lw.mu.Lock() defer lw.mu.Unlock() return lw.w.Write(p) } func (lw *lockingWriter) setWriter(w io.Writer) { lw.mu.Lock() defer lw.mu.Unlock() lw.w = w } var testLogOutput = &lockingWriter{w: os.Stderr} // awaitNewConnLogOutput waits for any of grpc.NewConn's goroutines to // terminate, if they're still running. It spams logs with this // message. We wait for it so our log filter is still // active. Otherwise the "defer restore()" at the top of various test // functions restores our log filter and then the goroutine spams. func awaitNewConnLogOutput() { awaitLogOutput(50*time.Millisecond, "grpc: the client connection is closing; please retry") } func awaitLogOutput(maxWait time.Duration, phrase string) { pb := []byte(phrase) timer := time.NewTimer(maxWait) defer timer.Stop() wakeup := make(chan bool, 1) for { if logOutputHasContents(pb, wakeup) { return } select { case <-timer.C: // Too slow. Oh well. 
return case <-wakeup: } } } func logOutputHasContents(v []byte, wakeup chan<- bool) bool { testLogOutput.mu.Lock() defer testLogOutput.mu.Unlock() fw, ok := testLogOutput.w.(*filterWriter) if !ok { return false } fw.mu.Lock() defer fw.mu.Unlock() if bytes.Contains(fw.buf.Bytes(), v) { return true } fw.wakeup = wakeup return false } var verboseLogs = flag.Bool("verbose_logs", false, "show all grpclog output, without filtering") func noop() {} // declareLogNoise declares that t is expected to emit the following noisy phrases, // even on success. Those phrases will be filtered from grpclog output // and only be shown if *verbose_logs or t ends up failing. // The returned restore function should be called with defer to be run // before the test ends. func declareLogNoise(t *testing.T, phrases ...string) (restore func()) { if *verboseLogs { return noop } fw := &filterWriter{dst: os.Stderr, filter: phrases} testLogOutput.setWriter(fw) return func() { if t.Failed() { fw.mu.Lock() defer fw.mu.Unlock() if fw.buf.Len() > 0 { t.Logf("Complete log output:\n%s", fw.buf.Bytes()) } } testLogOutput.setWriter(os.Stderr) } } type filterWriter struct { dst io.Writer filter []string mu sync.Mutex buf bytes.Buffer wakeup chan<- bool // if non-nil, gets true on write } func (fw *filterWriter) Write(p []byte) (n int, err error) { fw.mu.Lock() fw.buf.Write(p) if fw.wakeup != nil { select { case fw.wakeup <- true: default: } } fw.mu.Unlock() ps := string(p) for _, f := range fw.filter { if strings.Contains(ps, f) { return len(p), nil } } return fw.dst.Write(p) } // stubServer is a server that is easy to customize within individual test // cases. type stubServer struct { // Guarantees we satisfy this interface; panics if unimplemented methods are called. testpb.TestServiceServer // Customizable implementations of server handlers. emptyCall func(ctx context.Context, in *testpb.Empty) (*testpb.Empty, error) unaryCall func(ctx context.Context, in *testpb.SimpleRequest) (*testpb.SimpleResponse, error) fullDuplexCall func(stream testpb.TestService_FullDuplexCallServer) error // A client connected to this service the test may use. Created in Start(). client testpb.TestServiceClient cc *grpc.ClientConn s *grpc.Server addr string // address of listener cleanups []func() // Lambdas executed in Stop(); populated by Start(). r *manual.Resolver } func (ss *stubServer) EmptyCall(ctx context.Context, in *testpb.Empty) (*testpb.Empty, error) { return ss.emptyCall(ctx, in) } func (ss *stubServer) UnaryCall(ctx context.Context, in *testpb.SimpleRequest) (*testpb.SimpleResponse, error) { return ss.unaryCall(ctx, in) } func (ss *stubServer) FullDuplexCall(stream testpb.TestService_FullDuplexCallServer) error { return ss.fullDuplexCall(stream) } // Start starts the server and creates a client connected to it. func (ss *stubServer) Start(sopts []grpc.ServerOption, dopts ...grpc.DialOption) error { r, cleanup := manual.GenerateAndRegisterManualResolver() ss.r = r ss.cleanups = append(ss.cleanups, cleanup) lis, err := net.Listen("tcp", "localhost:0") if err != nil { return fmt.Errorf(`net.Listen("tcp", "localhost:0") = %v`, err) } ss.addr = lis.Addr().String() ss.cleanups = append(ss.cleanups, func() { lis.Close() }) s := grpc.NewServer(sopts...) testpb.RegisterTestServiceServer(s, ss) go s.Serve(lis) ss.cleanups = append(ss.cleanups, s.Stop) ss.s = s target := ss.r.Scheme() + ":///" + ss.addr opts := append([]grpc.DialOption{grpc.WithInsecure()}, dopts...) cc, err := grpc.Dial(target, opts...) 
if err != nil { return fmt.Errorf("grpc.Dial(%q) = %v", target, err) } ss.cc = cc ss.r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: ss.addr}}}) if err := ss.waitForReady(cc); err != nil { return err } ss.cleanups = append(ss.cleanups, func() { cc.Close() }) ss.client = testpb.NewTestServiceClient(cc) return nil } func (ss *stubServer) newServiceConfig(sc string) { ss.r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: ss.addr}}, ServiceConfig: parseCfg(sc)}) } func (ss *stubServer) waitForReady(cc *grpc.ClientConn) error { ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() for { s := cc.GetState() if s == connectivity.Ready { return nil } if !cc.WaitForStateChange(ctx, s) { // ctx got timeout or canceled. return ctx.Err() } } } func (ss *stubServer) Stop() { for i := len(ss.cleanups) - 1; i >= 0; i-- { ss.cleanups[i]() } } func (s) TestGRPCMethod(t *testing.T) { var method string var ok bool ss := &stubServer{ emptyCall: func(ctx context.Context, in *testpb.Empty) (*testpb.Empty, error) { method, ok = grpc.Method(ctx) return &testpb.Empty{}, nil }, } if err := ss.Start(nil); err != nil { t.Fatalf("Error starting endpoint server: %v", err) } defer ss.Stop() ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second) defer cancel() if _, err := ss.client.EmptyCall(ctx, &testpb.Empty{}); err != nil { t.Fatalf("ss.client.EmptyCall(_, _) = _, %v; want _, nil", err) } if want := "/grpc.testing.TestService/EmptyCall"; !ok || method != want { t.Fatalf("grpc.Method(_) = %q, %v; want %q, true", method, ok, want) } } func (s) TestUnaryProxyDoesNotForwardMetadata(t *testing.T) { const mdkey = "somedata" // endpoint ensures mdkey is NOT in metadata and returns an error if it is. endpoint := &stubServer{ emptyCall: func(ctx context.Context, in *testpb.Empty) (*testpb.Empty, error) { if md, ok := metadata.FromIncomingContext(ctx); !ok || md[mdkey] != nil { return nil, status.Errorf(codes.Internal, "endpoint: md=%v; want !contains(%q)", md, mdkey) } return &testpb.Empty{}, nil }, } if err := endpoint.Start(nil); err != nil { t.Fatalf("Error starting endpoint server: %v", err) } defer endpoint.Stop() // proxy ensures mdkey IS in metadata, then forwards the RPC to endpoint // without explicitly copying the metadata. proxy := &stubServer{ emptyCall: func(ctx context.Context, in *testpb.Empty) (*testpb.Empty, error) { if md, ok := metadata.FromIncomingContext(ctx); !ok || md[mdkey] == nil { return nil, status.Errorf(codes.Internal, "proxy: md=%v; want contains(%q)", md, mdkey) } return endpoint.client.EmptyCall(ctx, in) }, } if err := proxy.Start(nil); err != nil { t.Fatalf("Error starting proxy server: %v", err) } defer proxy.Stop() ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second) defer cancel() md := metadata.Pairs(mdkey, "val") ctx = metadata.NewOutgoingContext(ctx, md) // Sanity check that endpoint properly errors when it sees mdkey. _, err := endpoint.client.EmptyCall(ctx, &testpb.Empty{}) if s, ok := status.FromError(err); !ok || s.Code() != codes.Internal { t.Fatalf("endpoint.client.EmptyCall(_, _) = _, %v; want _, ", err) } if _, err := proxy.client.EmptyCall(ctx, &testpb.Empty{}); err != nil { t.Fatal(err.Error()) } } func (s) TestStreamingProxyDoesNotForwardMetadata(t *testing.T) { const mdkey = "somedata" // doFDC performs a FullDuplexCall with client and returns the error from the // first stream.Recv call, or nil if that error is io.EOF. Calls t.Fatal if // the stream cannot be established. 
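// Note: gRPC keeps incoming and outgoing metadata separate; passing the handler's context straight to the nested call does not copy incoming metadata to the outgoing side, which is what this test (like the unary variant above) relies on.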
doFDC := func(ctx context.Context, client testpb.TestServiceClient) error { stream, err := client.FullDuplexCall(ctx) if err != nil { t.Fatalf("Unwanted error: %v", err) } if _, err := stream.Recv(); err != io.EOF { return err } return nil } // endpoint ensures mdkey is NOT in metadata and returns an error if it is. endpoint := &stubServer{ fullDuplexCall: func(stream testpb.TestService_FullDuplexCallServer) error { ctx := stream.Context() if md, ok := metadata.FromIncomingContext(ctx); !ok || md[mdkey] != nil { return status.Errorf(codes.Internal, "endpoint: md=%v; want !contains(%q)", md, mdkey) } return nil }, } if err := endpoint.Start(nil); err != nil { t.Fatalf("Error starting endpoint server: %v", err) } defer endpoint.Stop() // proxy ensures mdkey IS in metadata, then forwards the RPC to endpoint // without explicitly copying the metadata. proxy := &stubServer{ fullDuplexCall: func(stream testpb.TestService_FullDuplexCallServer) error { ctx := stream.Context() if md, ok := metadata.FromIncomingContext(ctx); !ok || md[mdkey] == nil { return status.Errorf(codes.Internal, "proxy: md=%v; want contains(%q)", md, mdkey) } return doFDC(ctx, endpoint.client) }, } if err := proxy.Start(nil); err != nil { t.Fatalf("Error starting proxy server: %v", err) } defer proxy.Stop() ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second) defer cancel() md := metadata.Pairs(mdkey, "val") ctx = metadata.NewOutgoingContext(ctx, md) // Sanity check that endpoint properly errors when it sees mdkey in ctx. err := doFDC(ctx, endpoint.client) if s, ok := status.FromError(err); !ok || s.Code() != codes.Internal { t.Fatalf("stream.Recv() = _, %v; want _, <status code Internal>", err) } if err := doFDC(ctx, proxy.client); err != nil { t.Fatalf("doFDC(_, proxy.client) = %v; want nil", err) } } func (s) TestStatsTagsAndTrace(t *testing.T) { // Data added to context by client (typically in a stats handler). tags := []byte{1, 5, 2, 4, 3} trace := []byte{5, 2, 1, 3, 4} // endpoint ensures Tags() and Trace() in context match those that were added // by the client and returns an error if not.
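// stats.SetTags and stats.SetTrace attach raw bytes to the outgoing context, and the client transport carries them as the grpc-tags-bin and grpc-trace-bin metadata; that is why the handler below checks both the stats accessors and the raw metadata. The client side sets them as, for example:
//
//	ctx = stats.SetTags(ctx, tags)
//	ctx = stats.SetTrace(ctx, trace)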
endpoint := &stubServer{ emptyCall: func(ctx context.Context, in *testpb.Empty) (*testpb.Empty, error) { md, _ := metadata.FromIncomingContext(ctx) if tg := stats.Tags(ctx); !reflect.DeepEqual(tg, tags) { return nil, status.Errorf(codes.Internal, "stats.Tags(%v)=%v; want %v", ctx, tg, tags) } if !reflect.DeepEqual(md["grpc-tags-bin"], []string{string(tags)}) { return nil, status.Errorf(codes.Internal, "md['grpc-tags-bin']=%v; want %v", md["grpc-tags-bin"], tags) } if tr := stats.Trace(ctx); !reflect.DeepEqual(tr, trace) { return nil, status.Errorf(codes.Internal, "stats.Trace(%v)=%v; want %v", ctx, tr, trace) } if !reflect.DeepEqual(md["grpc-trace-bin"], []string{string(trace)}) { return nil, status.Errorf(codes.Internal, "md['grpc-trace-bin']=%v; want %v", md["grpc-trace-bin"], trace) } return &testpb.Empty{}, nil }, } if err := endpoint.Start(nil); err != nil { t.Fatalf("Error starting endpoint server: %v", err) } defer endpoint.Stop() ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second) defer cancel() testCases := []struct { ctx context.Context want codes.Code }{ {ctx: ctx, want: codes.Internal}, {ctx: stats.SetTags(ctx, tags), want: codes.Internal}, {ctx: stats.SetTrace(ctx, trace), want: codes.Internal}, {ctx: stats.SetTags(stats.SetTrace(ctx, tags), tags), want: codes.Internal}, {ctx: stats.SetTags(stats.SetTrace(ctx, trace), tags), want: codes.OK}, } for _, tc := range testCases { _, err := endpoint.client.EmptyCall(tc.ctx, &testpb.Empty{}) if tc.want == codes.OK && err != nil { t.Fatalf("endpoint.client.EmptyCall(%v, _) = _, %v; want _, nil", tc.ctx, err) } if s, ok := status.FromError(err); !ok || s.Code() != tc.want { t.Fatalf("endpoint.client.EmptyCall(%v, _) = _, %v; want _, ", tc.ctx, err, tc.want) } } } func (s) TestTapTimeout(t *testing.T) { sopts := []grpc.ServerOption{ grpc.InTapHandle(func(ctx context.Context, _ *tap.Info) (context.Context, error) { c, cancel := context.WithCancel(ctx) // Call cancel instead of setting a deadline so we can detect which error // occurred -- this cancellation (desired) or the client's deadline // expired (indicating this cancellation did not affect the RPC). time.AfterFunc(10*time.Millisecond, cancel) return c, nil }), } ss := &stubServer{ emptyCall: func(ctx context.Context, in *testpb.Empty) (*testpb.Empty, error) { <-ctx.Done() return nil, status.Errorf(codes.Canceled, ctx.Err().Error()) }, } if err := ss.Start(sopts); err != nil { t.Fatalf("Error starting endpoint server: %v", err) } defer ss.Stop() // This was known to be flaky; test several times. for i := 0; i < 10; i++ { // Set our own deadline in case the server hangs. 
ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second) res, err := ss.client.EmptyCall(ctx, &testpb.Empty{}) cancel() if s, ok := status.FromError(err); !ok || s.Code() != codes.Canceled { t.Fatalf("ss.client.EmptyCall(context.Background(), _) = %v, %v; want nil, ", res, err) } } } func (s) TestClientWriteFailsAfterServerClosesStream(t *testing.T) { ss := &stubServer{ fullDuplexCall: func(stream testpb.TestService_FullDuplexCallServer) error { return status.Errorf(codes.Internal, "") }, } sopts := []grpc.ServerOption{} if err := ss.Start(sopts); err != nil { t.Fatalf("Error starting endpoint server: %v", err) } defer ss.Stop() ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() stream, err := ss.client.FullDuplexCall(ctx) if err != nil { t.Fatalf("Error while creating stream: %v", err) } for { if err := stream.Send(&testpb.StreamingOutputCallRequest{}); err == nil { time.Sleep(5 * time.Millisecond) } else if err == io.EOF { break // Success. } else { t.Fatalf("stream.Send(_) = %v, want io.EOF", err) } } } type windowSizeConfig struct { serverStream int32 serverConn int32 clientStream int32 clientConn int32 } func max(a, b int32) int32 { if a > b { return a } return b } func (s) TestConfigurableWindowSizeWithLargeWindow(t *testing.T) { wc := windowSizeConfig{ serverStream: 8 * 1024 * 1024, serverConn: 12 * 1024 * 1024, clientStream: 6 * 1024 * 1024, clientConn: 8 * 1024 * 1024, } for _, e := range listTestEnv() { testConfigurableWindowSize(t, e, wc) } } func (s) TestConfigurableWindowSizeWithSmallWindow(t *testing.T) { wc := windowSizeConfig{ serverStream: 1, serverConn: 1, clientStream: 1, clientConn: 1, } for _, e := range listTestEnv() { testConfigurableWindowSize(t, e, wc) } } func testConfigurableWindowSize(t *testing.T, e env, wc windowSizeConfig) { te := newTest(t, e) te.serverInitialWindowSize = wc.serverStream te.serverInitialConnWindowSize = wc.serverConn te.clientInitialWindowSize = wc.clientStream te.clientInitialConnWindowSize = wc.clientConn te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) stream, err := tc.FullDuplexCall(context.Background()) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } numOfIter := 11 // Set message size to exhaust largest of window sizes. 
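// Each message is roughly largestWindow/(numOfIter-1) bytes (at least 64KB), so streaming numOfIter of them overruns whichever window is largest and forces window updates in both directions.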
messageSize := max(max(wc.serverStream, wc.serverConn), max(wc.clientStream, wc.clientConn)) / int32(numOfIter-1) messageSize = max(messageSize, 64*1024) payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, messageSize) if err != nil { t.Fatal(err) } respParams := []*testpb.ResponseParameters{ { Size: messageSize, }, } req := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseParameters: respParams, Payload: payload, } for i := 0; i < numOfIter; i++ { if err := stream.Send(req); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, req, err) } if _, err := stream.Recv(); err != nil { t.Fatalf("%v.Recv() = _, %v, want _, ", stream, err) } } if err := stream.CloseSend(); err != nil { t.Fatalf("%v.CloseSend() = %v, want ", stream, err) } } var ( // test authdata authdata = map[string]string{ "test-key": "test-value", "test-key2-bin": string([]byte{1, 2, 3}), } ) type testPerRPCCredentials struct{} func (cr testPerRPCCredentials) GetRequestMetadata(ctx context.Context, uri ...string) (map[string]string, error) { return authdata, nil } func (cr testPerRPCCredentials) RequireTransportSecurity() bool { return false } func authHandle(ctx context.Context, info *tap.Info) (context.Context, error) { md, ok := metadata.FromIncomingContext(ctx) if !ok { return ctx, fmt.Errorf("didn't find metadata in context") } for k, vwant := range authdata { vgot, ok := md[k] if !ok { return ctx, fmt.Errorf("didn't find authdata key %v in context", k) } if vgot[0] != vwant { return ctx, fmt.Errorf("for key %v, got value %v, want %v", k, vgot, vwant) } } return ctx, nil } func (s) TestPerRPCCredentialsViaDialOptions(t *testing.T) { for _, e := range listTestEnv() { testPerRPCCredentialsViaDialOptions(t, e) } } func testPerRPCCredentialsViaDialOptions(t *testing.T, e env) { te := newTest(t, e) te.tapHandle = authHandle te.perRPCCreds = testPerRPCCredentials{} te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); err != nil { t.Fatalf("Test failed. Reason: %v", err) } } func (s) TestPerRPCCredentialsViaCallOptions(t *testing.T) { for _, e := range listTestEnv() { testPerRPCCredentialsViaCallOptions(t, e) } } func testPerRPCCredentialsViaCallOptions(t *testing.T, e env) { te := newTest(t, e) te.tapHandle = authHandle te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}, grpc.PerRPCCredentials(testPerRPCCredentials{})); err != nil { t.Fatalf("Test failed. Reason: %v", err) } } func (s) TestPerRPCCredentialsViaDialOptionsAndCallOptions(t *testing.T) { for _, e := range listTestEnv() { testPerRPCCredentialsViaDialOptionsAndCallOptions(t, e) } } func testPerRPCCredentialsViaDialOptionsAndCallOptions(t *testing.T, e env) { te := newTest(t, e) te.perRPCCreds = testPerRPCCredentials{} // When credentials are provided via both dial options and call options, // we apply both sets. 
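// Concretely, the tap handle below expects every authdata key to arrive with two identical values: one contributed by the DialOption credentials and one by the CallOption credentials.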
te.tapHandle = func(ctx context.Context, _ *tap.Info) (context.Context, error) { md, ok := metadata.FromIncomingContext(ctx) if !ok { return ctx, fmt.Errorf("couldn't find metadata in context") } for k, vwant := range authdata { vgot, ok := md[k] if !ok { return ctx, fmt.Errorf("couldn't find metadata for key %v", k) } if len(vgot) != 2 { return ctx, fmt.Errorf("len of value for key %v was %v, want 2", k, len(vgot)) } if vgot[0] != vwant || vgot[1] != vwant { return ctx, fmt.Errorf("value for %v was %v, want [%v, %v]", k, vgot, vwant, vwant) } } return ctx, nil } te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}, grpc.PerRPCCredentials(testPerRPCCredentials{})); err != nil { t.Fatalf("Test failed. Reason: %v", err) } } func (s) TestWaitForReadyConnection(t *testing.T) { for _, e := range listTestEnv() { testWaitForReadyConnection(t, e) } } func testWaitForReadyConnection(t *testing.T, e env) { te := newTest(t, e) te.userAgent = testAppUA te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() // Non-blocking dial. tc := testpb.NewTestServiceClient(cc) ctx, cancel := context.WithTimeout(context.Background(), time.Second) defer cancel() state := cc.GetState() // Wait for connection to be Ready. for ; state != connectivity.Ready && cc.WaitForStateChange(ctx, state); state = cc.GetState() { } if state != connectivity.Ready { t.Fatalf("Want connection state to be Ready, got %v", state) } ctx, cancel = context.WithTimeout(context.Background(), time.Second) defer cancel() // Make a fail-fast RPC. if _, err := tc.EmptyCall(ctx, &testpb.Empty{}); err != nil { t.Fatalf("TestService/EmptyCall(_,_) = _, %v, want _, nil", err) } } type errCodec struct { noError bool } func (c *errCodec) Marshal(v interface{}) ([]byte, error) { if c.noError { return []byte{}, nil } return nil, fmt.Errorf("3987^12 + 4365^12 = 4472^12") } func (c *errCodec) Unmarshal(data []byte, v interface{}) error { return nil } func (c *errCodec) Name() string { return "Fermat's near-miss." } func (s) TestEncodeDoesntPanic(t *testing.T) { for _, e := range listTestEnv() { testEncodeDoesntPanic(t, e) } } func testEncodeDoesntPanic(t *testing.T, e env) { te := newTest(t, e) erc := &errCodec{} te.customCodec = erc te.startServer(&testServer{security: e.security}) defer te.tearDown() te.customCodec = nil tc := testpb.NewTestServiceClient(te.clientConn()) // Failure case, should not panic. tc.EmptyCall(context.Background(), &testpb.Empty{}) erc.noError = true // Passing case. 
if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); err != nil { t.Fatalf("EmptyCall(_, _) = _, %v, want _, ", err) } } func (s) TestSvrWriteStatusEarlyWrite(t *testing.T) { for _, e := range listTestEnv() { testSvrWriteStatusEarlyWrite(t, e) } } func testSvrWriteStatusEarlyWrite(t *testing.T, e env) { te := newTest(t, e) const smallSize = 1024 const largeSize = 2048 const extraLargeSize = 4096 te.maxServerReceiveMsgSize = newInt(largeSize) te.maxServerSendMsgSize = newInt(largeSize) smallPayload, err := newPayload(testpb.PayloadType_COMPRESSABLE, smallSize) if err != nil { t.Fatal(err) } extraLargePayload, err := newPayload(testpb.PayloadType_COMPRESSABLE, extraLargeSize) if err != nil { t.Fatal(err) } te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) respParam := []*testpb.ResponseParameters{ { Size: int32(smallSize), }, } sreq := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseParameters: respParam, Payload: extraLargePayload, } // Test recv case: server receives a message larger than maxServerReceiveMsgSize. stream, err := tc.FullDuplexCall(te.ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if err = stream.Send(sreq); err != nil { t.Fatalf("%v.Send() = _, %v, want ", stream, err) } if _, err = stream.Recv(); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Recv() = _, %v, want _, error code: %s", stream, err, codes.ResourceExhausted) } // Test send case: server sends a message larger than maxServerSendMsgSize. sreq.Payload = smallPayload respParam[0].Size = int32(extraLargeSize) stream, err = tc.FullDuplexCall(te.ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if err = stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } if _, err = stream.Recv(); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Recv() = _, %v, want _, error code: %s", stream, err, codes.ResourceExhausted) } } // The following functions with function name ending with TD indicates that they // should be deleted after old service config API is deprecated and deleted. func testServiceConfigSetupTD(t *testing.T, e env) (*test, chan grpc.ServiceConfig) { te := newTest(t, e) // We write before read. ch := make(chan grpc.ServiceConfig, 1) te.sc = ch te.userAgent = testAppUA te.declareLogNoise( "transport: http2Client.notifyError got notified that the client transport was broken EOF", "grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing", "grpc: addrConn.resetTransport failed to create client transport: connection error", "Failed to dial : context canceled; please retry.", ) return te, ch } func (s) TestServiceConfigGetMethodConfigTD(t *testing.T) { for _, e := range listTestEnv() { testGetMethodConfigTD(t, e) } } func testGetMethodConfigTD(t *testing.T, e env) { te, ch := testServiceConfigSetupTD(t, e) defer te.tearDown() mc1 := grpc.MethodConfig{ WaitForReady: newBool(true), Timeout: newDuration(time.Millisecond), } mc2 := grpc.MethodConfig{WaitForReady: newBool(false)} m := make(map[string]grpc.MethodConfig) m["/grpc.testing.TestService/EmptyCall"] = mc1 m["/grpc.testing.TestService/"] = mc2 sc := grpc.ServiceConfig{ Methods: m, } ch <- sc cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) // The following RPCs are expected to become non-fail-fast ones with 1ms deadline. 
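// (mc1 sets WaitForReady and a 1ms Timeout, and no server is started here, so the RPC waits for a ready connection and then hits DeadlineExceeded instead of failing fast with Unavailable.)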
if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); status.Code(err) != codes.DeadlineExceeded { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, %s", err, codes.DeadlineExceeded) } m = make(map[string]grpc.MethodConfig) m["/grpc.testing.TestService/UnaryCall"] = mc1 m["/grpc.testing.TestService/"] = mc2 sc = grpc.ServiceConfig{ Methods: m, } ch <- sc // Wait for the new service config to propagate. for { if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); status.Code(err) == codes.DeadlineExceeded { continue } break } // The following RPCs are expected to become fail-fast. if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); status.Code(err) != codes.Unavailable { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, %s", err, codes.Unavailable) } } func (s) TestServiceConfigWaitForReadyTD(t *testing.T) { for _, e := range listTestEnv() { testServiceConfigWaitForReadyTD(t, e) } } func testServiceConfigWaitForReadyTD(t *testing.T, e env) { te, ch := testServiceConfigSetupTD(t, e) defer te.tearDown() // Case1: Client API set failfast to be false, and service config set wait_for_ready to be false, Client API should win, and the rpc will wait until deadline exceeds. mc := grpc.MethodConfig{ WaitForReady: newBool(false), Timeout: newDuration(time.Millisecond), } m := make(map[string]grpc.MethodConfig) m["/grpc.testing.TestService/EmptyCall"] = mc m["/grpc.testing.TestService/FullDuplexCall"] = mc sc := grpc.ServiceConfig{ Methods: m, } ch <- sc cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) // The following RPCs are expected to become non-fail-fast ones with 1ms deadline. if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}, grpc.WaitForReady(true)); status.Code(err) != codes.DeadlineExceeded { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, %s", err, codes.DeadlineExceeded) } if _, err := tc.FullDuplexCall(context.Background(), grpc.WaitForReady(true)); status.Code(err) != codes.DeadlineExceeded { t.Fatalf("TestService/FullDuplexCall(_) = _, %v, want %s", err, codes.DeadlineExceeded) } // Generate a service config update. // Case2: Client API does not set failfast, and service config set wait_for_ready to be true, and the rpc will wait until deadline exceeds. mc.WaitForReady = newBool(true) m = make(map[string]grpc.MethodConfig) m["/grpc.testing.TestService/EmptyCall"] = mc m["/grpc.testing.TestService/FullDuplexCall"] = mc sc = grpc.ServiceConfig{ Methods: m, } ch <- sc // Wait for the new service config to take effect. mc = cc.GetMethodConfig("/grpc.testing.TestService/EmptyCall") for { if !*mc.WaitForReady { time.Sleep(100 * time.Millisecond) mc = cc.GetMethodConfig("/grpc.testing.TestService/EmptyCall") continue } break } // The following RPCs are expected to become non-fail-fast ones with 1ms deadline. if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); status.Code(err) != codes.DeadlineExceeded { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, %s", err, codes.DeadlineExceeded) } if _, err := tc.FullDuplexCall(context.Background()); status.Code(err) != codes.DeadlineExceeded { t.Fatalf("TestService/FullDuplexCall(_) = _, %v, want %s", err, codes.DeadlineExceeded) } } func (s) TestServiceConfigTimeoutTD(t *testing.T) { for _, e := range listTestEnv() { testServiceConfigTimeoutTD(t, e) } } func testServiceConfigTimeoutTD(t *testing.T, e env) { te, ch := testServiceConfigSetupTD(t, e) defer te.tearDown() // Case1: Client API sets timeout to be 1ns and ServiceConfig sets timeout to be 1hr. 
Timeout should be 1ns (min of 1ns and 1hr) and the rpc will wait until deadline exceeds. mc := grpc.MethodConfig{ Timeout: newDuration(time.Hour), } m := make(map[string]grpc.MethodConfig) m["/grpc.testing.TestService/EmptyCall"] = mc m["/grpc.testing.TestService/FullDuplexCall"] = mc sc := grpc.ServiceConfig{ Methods: m, } ch <- sc cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) // The following RPCs are expected to become non-fail-fast ones with 1ns deadline. ctx, cancel := context.WithTimeout(context.Background(), time.Nanosecond) if _, err := tc.EmptyCall(ctx, &testpb.Empty{}, grpc.WaitForReady(true)); status.Code(err) != codes.DeadlineExceeded { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, %s", err, codes.DeadlineExceeded) } cancel() ctx, cancel = context.WithTimeout(context.Background(), time.Nanosecond) if _, err := tc.FullDuplexCall(ctx, grpc.WaitForReady(true)); status.Code(err) != codes.DeadlineExceeded { t.Fatalf("TestService/FullDuplexCall(_) = _, %v, want %s", err, codes.DeadlineExceeded) } cancel() // Generate a service config update. // Case2: Client API sets timeout to be 1hr and ServiceConfig sets timeout to be 1ns. Timeout should be 1ns (min of 1ns and 1hr) and the rpc will wait until deadline exceeds. mc.Timeout = newDuration(time.Nanosecond) m = make(map[string]grpc.MethodConfig) m["/grpc.testing.TestService/EmptyCall"] = mc m["/grpc.testing.TestService/FullDuplexCall"] = mc sc = grpc.ServiceConfig{ Methods: m, } ch <- sc // Wait for the new service config to take effect. mc = cc.GetMethodConfig("/grpc.testing.TestService/FullDuplexCall") for { if *mc.Timeout != time.Nanosecond { time.Sleep(100 * time.Millisecond) mc = cc.GetMethodConfig("/grpc.testing.TestService/FullDuplexCall") continue } break } ctx, cancel = context.WithTimeout(context.Background(), time.Hour) if _, err := tc.EmptyCall(ctx, &testpb.Empty{}, grpc.WaitForReady(true)); status.Code(err) != codes.DeadlineExceeded { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, %s", err, codes.DeadlineExceeded) } cancel() ctx, cancel = context.WithTimeout(context.Background(), time.Hour) if _, err := tc.FullDuplexCall(ctx, grpc.WaitForReady(true)); status.Code(err) != codes.DeadlineExceeded { t.Fatalf("TestService/FullDuplexCall(_) = _, %v, want %s", err, codes.DeadlineExceeded) } cancel() } func (s) TestServiceConfigMaxMsgSizeTD(t *testing.T) { for _, e := range listTestEnv() { testServiceConfigMaxMsgSizeTD(t, e) } } func testServiceConfigMaxMsgSizeTD(t *testing.T, e env) { // Setting up values and objects shared across all test cases. const smallSize = 1 const largeSize = 1024 const extraLargeSize = 2048 smallPayload, err := newPayload(testpb.PayloadType_COMPRESSABLE, smallSize) if err != nil { t.Fatal(err) } largePayload, err := newPayload(testpb.PayloadType_COMPRESSABLE, largeSize) if err != nil { t.Fatal(err) } extraLargePayload, err := newPayload(testpb.PayloadType_COMPRESSABLE, extraLargeSize) if err != nil { t.Fatal(err) } mc := grpc.MethodConfig{ MaxReqSize: newInt(extraLargeSize), MaxRespSize: newInt(extraLargeSize), } m := make(map[string]grpc.MethodConfig) m["/grpc.testing.TestService/UnaryCall"] = mc m["/grpc.testing.TestService/FullDuplexCall"] = mc sc := grpc.ServiceConfig{ Methods: m, } // Case1: sc set maxReqSize to 2048 (send), maxRespSize to 2048 (recv). 
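// (In each case the effective limit ends up being the smaller of the client-API option and the service-config value; Case2 below tightens it with 1024 client options, and Case3 relaxes the client options to 4096 so the 2048 service-config limits still govern.)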
te1, ch1 := testServiceConfigSetupTD(t, e) te1.startServer(&testServer{security: e.security}) defer te1.tearDown() ch1 <- sc tc := testpb.NewTestServiceClient(te1.clientConn()) req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseSize: int32(extraLargeSize), Payload: smallPayload, } // Test for unary RPC recv. if _, err := tc.UnaryCall(context.Background(), req); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } // Test for unary RPC send. req.Payload = extraLargePayload req.ResponseSize = int32(smallSize) if _, err := tc.UnaryCall(context.Background(), req); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } // Test for streaming RPC recv. respParam := []*testpb.ResponseParameters{ { Size: int32(extraLargeSize), }, } sreq := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseParameters: respParam, Payload: smallPayload, } stream, err := tc.FullDuplexCall(te1.ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if err := stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } if _, err := stream.Recv(); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Recv() = _, %v, want _, error code: %s", stream, err, codes.ResourceExhausted) } // Test for streaming RPC send. respParam[0].Size = int32(smallSize) sreq.Payload = extraLargePayload stream, err = tc.FullDuplexCall(te1.ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if err := stream.Send(sreq); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Send(%v) = %v, want _, error code: %s", stream, sreq, err, codes.ResourceExhausted) } // Case2: Client API set maxReqSize to 1024 (send), maxRespSize to 1024 (recv). Sc sets maxReqSize to 2048 (send), maxRespSize to 2048 (recv). te2, ch2 := testServiceConfigSetupTD(t, e) te2.maxClientReceiveMsgSize = newInt(1024) te2.maxClientSendMsgSize = newInt(1024) te2.startServer(&testServer{security: e.security}) defer te2.tearDown() ch2 <- sc tc = testpb.NewTestServiceClient(te2.clientConn()) // Test for unary RPC recv. req.Payload = smallPayload req.ResponseSize = int32(largeSize) if _, err := tc.UnaryCall(context.Background(), req); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } // Test for unary RPC send. req.Payload = largePayload req.ResponseSize = int32(smallSize) if _, err := tc.UnaryCall(context.Background(), req); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } // Test for streaming RPC recv. stream, err = tc.FullDuplexCall(te2.ctx) respParam[0].Size = int32(largeSize) sreq.Payload = smallPayload if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if err := stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } if _, err := stream.Recv(); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Recv() = _, %v, want _, error code: %s", stream, err, codes.ResourceExhausted) } // Test for streaming RPC send. 
respParam[0].Size = int32(smallSize) sreq.Payload = largePayload stream, err = tc.FullDuplexCall(te2.ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if err := stream.Send(sreq); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Send(%v) = %v, want _, error code: %s", stream, sreq, err, codes.ResourceExhausted) } // Case3: Client API set maxReqSize to 4096 (send), maxRespSize to 4096 (recv). Sc sets maxReqSize to 2048 (send), maxRespSize to 2048 (recv). te3, ch3 := testServiceConfigSetupTD(t, e) te3.maxClientReceiveMsgSize = newInt(4096) te3.maxClientSendMsgSize = newInt(4096) te3.startServer(&testServer{security: e.security}) defer te3.tearDown() ch3 <- sc tc = testpb.NewTestServiceClient(te3.clientConn()) // Test for unary RPC recv. req.Payload = smallPayload req.ResponseSize = int32(largeSize) if _, err := tc.UnaryCall(context.Background(), req); err != nil { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want ", err) } req.ResponseSize = int32(extraLargeSize) if _, err := tc.UnaryCall(context.Background(), req); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } // Test for unary RPC send. req.Payload = largePayload req.ResponseSize = int32(smallSize) if _, err := tc.UnaryCall(context.Background(), req); err != nil { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want ", err) } req.Payload = extraLargePayload if _, err := tc.UnaryCall(context.Background(), req); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } // Test for streaming RPC recv. stream, err = tc.FullDuplexCall(te3.ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } respParam[0].Size = int32(largeSize) sreq.Payload = smallPayload if err := stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } if _, err := stream.Recv(); err != nil { t.Fatalf("%v.Recv() = _, %v, want ", stream, err) } respParam[0].Size = int32(extraLargeSize) if err := stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } if _, err := stream.Recv(); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Recv() = _, %v, want _, error code: %s", stream, err, codes.ResourceExhausted) } // Test for streaming RPC send. 
respParam[0].Size = int32(smallSize) sreq.Payload = largePayload stream, err = tc.FullDuplexCall(te3.ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if err := stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } sreq.Payload = extraLargePayload if err := stream.Send(sreq); err == nil || status.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Send(%v) = %v, want _, error code: %s", stream, sreq, err, codes.ResourceExhausted) } } func (s) TestMethodFromServerStream(t *testing.T) { const testMethod = "/package.service/method" e := tcpClearRREnv te := newTest(t, e) var method string var ok bool te.unknownHandler = func(srv interface{}, stream grpc.ServerStream) error { method, ok = grpc.MethodFromServerStream(stream) return nil } te.startServer(nil) defer te.tearDown() _ = te.clientConn().Invoke(context.Background(), testMethod, nil, nil) if !ok || method != testMethod { t.Fatalf("Invoke with method %q, got %q, %v, want %q, true", testMethod, method, ok, testMethod) } } func (s) TestInterceptorCanAccessCallOptions(t *testing.T) { e := tcpClearRREnv te := newTest(t, e) te.startServer(&testServer{security: e.security}) defer te.tearDown() type observedOptions struct { headers []*metadata.MD trailers []*metadata.MD peer []*peer.Peer creds []credentials.PerRPCCredentials failFast []bool maxRecvSize []int maxSendSize []int compressor []string subtype []string } var observedOpts observedOptions populateOpts := func(opts []grpc.CallOption) { for _, o := range opts { switch o := o.(type) { case grpc.HeaderCallOption: observedOpts.headers = append(observedOpts.headers, o.HeaderAddr) case grpc.TrailerCallOption: observedOpts.trailers = append(observedOpts.trailers, o.TrailerAddr) case grpc.PeerCallOption: observedOpts.peer = append(observedOpts.peer, o.PeerAddr) case grpc.PerRPCCredsCallOption: observedOpts.creds = append(observedOpts.creds, o.Creds) case grpc.FailFastCallOption: observedOpts.failFast = append(observedOpts.failFast, o.FailFast) case grpc.MaxRecvMsgSizeCallOption: observedOpts.maxRecvSize = append(observedOpts.maxRecvSize, o.MaxRecvMsgSize) case grpc.MaxSendMsgSizeCallOption: observedOpts.maxSendSize = append(observedOpts.maxSendSize, o.MaxSendMsgSize) case grpc.CompressorCallOption: observedOpts.compressor = append(observedOpts.compressor, o.CompressorType) case grpc.ContentSubtypeCallOption: observedOpts.subtype = append(observedOpts.subtype, o.ContentSubtype) } } } te.unaryClientInt = func(ctx context.Context, method string, req, reply interface{}, cc *grpc.ClientConn, invoker grpc.UnaryInvoker, opts ...grpc.CallOption) error { populateOpts(opts) return nil } te.streamClientInt = func(ctx context.Context, desc *grpc.StreamDesc, cc *grpc.ClientConn, method string, streamer grpc.Streamer, opts ...grpc.CallOption) (grpc.ClientStream, error) { populateOpts(opts) return nil, nil } defaults := []grpc.CallOption{ grpc.WaitForReady(true), grpc.MaxCallRecvMsgSize(1010), } tc := testpb.NewTestServiceClient(te.clientConn(grpc.WithDefaultCallOptions(defaults...))) var headers metadata.MD var trailers metadata.MD var pr peer.Peer tc.UnaryCall(context.Background(), &testpb.SimpleRequest{}, grpc.MaxCallRecvMsgSize(100), grpc.MaxCallSendMsgSize(200), grpc.PerRPCCredentials(testPerRPCCredentials{}), grpc.Header(&headers), grpc.Trailer(&trailers), grpc.Peer(&pr)) expected := observedOptions{ failFast: []bool{false}, maxRecvSize: []int{1010, 100}, maxSendSize: []int{200}, creds: 
[]credentials.PerRPCCredentials{testPerRPCCredentials{}}, headers: []*metadata.MD{&headers}, trailers: []*metadata.MD{&trailers}, peer: []*peer.Peer{&pr}, } if !reflect.DeepEqual(expected, observedOpts) { t.Errorf("unary call did not observe expected options: expected %#v, got %#v", expected, observedOpts) } observedOpts = observedOptions{} // reset tc.StreamingInputCall(context.Background(), grpc.WaitForReady(false), grpc.MaxCallSendMsgSize(2020), grpc.UseCompressor("comp-type"), grpc.CallContentSubtype("json")) expected = observedOptions{ failFast: []bool{false, true}, maxRecvSize: []int{1010}, maxSendSize: []int{2020}, compressor: []string{"comp-type"}, subtype: []string{"json"}, } if !reflect.DeepEqual(expected, observedOpts) { t.Errorf("streaming call did not observe expected options: expected %#v, got %#v", expected, observedOpts) } } func (s) TestCompressorRegister(t *testing.T) { for _, e := range listTestEnv() { testCompressorRegister(t, e) } } func testCompressorRegister(t *testing.T, e env) { te := newTest(t, e) te.clientCompression = false te.serverCompression = false te.clientUseCompression = true te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) // Unary call const argSize = 271828 const respSize = 314159 payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, argSize) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseSize: respSize, Payload: payload, } ctx := metadata.NewOutgoingContext(context.Background(), metadata.Pairs("something", "something")) if _, err := tc.UnaryCall(ctx, req); err != nil { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, ", err) } // Streaming RPC ctx, cancel := context.WithCancel(context.Background()) defer cancel() stream, err := tc.FullDuplexCall(ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } respParam := []*testpb.ResponseParameters{ { Size: 31415, }, } payload, err = newPayload(testpb.PayloadType_COMPRESSABLE, int32(31415)) if err != nil { t.Fatal(err) } sreq := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseParameters: respParam, Payload: payload, } if err := stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } if _, err := stream.Recv(); err != nil { t.Fatalf("%v.Recv() = %v, want ", stream, err) } } func (s) TestServeExitsWhenListenerClosed(t *testing.T) { ss := &stubServer{ emptyCall: func(context.Context, *testpb.Empty) (*testpb.Empty, error) { return &testpb.Empty{}, nil }, } s := grpc.NewServer() defer s.Stop() testpb.RegisterTestServiceServer(s, ss) lis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Failed to create listener: %v", err) } done := make(chan struct{}) go func() { s.Serve(lis) close(done) }() cc, err := grpc.Dial(lis.Addr().String(), grpc.WithInsecure(), grpc.WithBlock()) if err != nil { t.Fatalf("Failed to dial server: %v", err) } defer cc.Close() c := testpb.NewTestServiceClient(cc) ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second) defer cancel() if _, err := c.EmptyCall(ctx, &testpb.Empty{}); err != nil { t.Fatalf("Failed to send test RPC to server: %v", err) } if err := lis.Close(); err != nil { t.Fatalf("Failed to close listener: %v", err) } const timeout = 5 * time.Second timer := time.NewTimer(timeout) select { case <-done: return case <-timer.C: t.Fatalf("Serve did not return after %v", timeout) } } // Service handler 
returns status with invalid utf8 message. func (s) TestStatusInvalidUTF8Message(t *testing.T) { var ( origMsg = string([]byte{0xff, 0xfe, 0xfd}) wantMsg = "���" ) ss := &stubServer{ emptyCall: func(ctx context.Context, in *testpb.Empty) (*testpb.Empty, error) { return nil, status.Errorf(codes.Internal, origMsg) }, } if err := ss.Start(nil); err != nil { t.Fatalf("Error starting endpoint server: %v", err) } defer ss.Stop() ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second) defer cancel() if _, err := ss.client.EmptyCall(ctx, &testpb.Empty{}); status.Convert(err).Message() != wantMsg { t.Fatalf("ss.client.EmptyCall(_, _) = _, %v (msg %q); want _, err with msg %q", err, status.Convert(err).Message(), wantMsg) } } // Service handler returns status with details and invalid utf8 message. Proto // will fail to marshal the status because of the invalid utf8 message. Details // will be dropped when sending. func (s) TestStatusInvalidUTF8Details(t *testing.T) { var ( origMsg = string([]byte{0xff, 0xfe, 0xfd}) wantMsg = "���" ) ss := &stubServer{ emptyCall: func(ctx context.Context, in *testpb.Empty) (*testpb.Empty, error) { st := status.New(codes.Internal, origMsg) st, err := st.WithDetails(&testpb.Empty{}) if err != nil { return nil, err } return nil, st.Err() }, } if err := ss.Start(nil); err != nil { t.Fatalf("Error starting endpoint server: %v", err) } defer ss.Stop() ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second) defer cancel() _, err := ss.client.EmptyCall(ctx, &testpb.Empty{}) st := status.Convert(err) if st.Message() != wantMsg { t.Fatalf("ss.client.EmptyCall(_, _) = _, %v (msg %q); want _, err with msg %q", err, st.Message(), wantMsg) } if len(st.Details()) != 0 { // Details should be dropped on the server side. t.Fatalf("RPC status contain details: %v, want no details", st.Details()) } } func (s) TestClientDoesntDeadlockWhileWritingErrornousLargeMessages(t *testing.T) { for _, e := range listTestEnv() { if e.httpHandler { continue } testClientDoesntDeadlockWhileWritingErrornousLargeMessages(t, e) } } func testClientDoesntDeadlockWhileWritingErrornousLargeMessages(t *testing.T, e env) { te := newTest(t, e) te.userAgent = testAppUA smallSize := 1024 te.maxServerReceiveMsgSize = &smallSize te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, 1048576) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, Payload: payload, } var wg sync.WaitGroup for i := 0; i < 10; i++ { wg.Add(1) go func() { defer wg.Done() for j := 0; j < 100; j++ { ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(time.Second*10)) defer cancel() if _, err := tc.UnaryCall(ctx, req); status.Code(err) != codes.ResourceExhausted { t.Errorf("TestService/UnaryCall(_,_) = _. 
%v, want code: %s", err, codes.ResourceExhausted) return } } }() } wg.Wait() } const clientAlwaysFailCredErrorMsg = "clientAlwaysFailCred always fails" var errClientAlwaysFailCred = errors.New(clientAlwaysFailCredErrorMsg) type clientAlwaysFailCred struct{} func (c clientAlwaysFailCred) ClientHandshake(ctx context.Context, addr string, rawConn net.Conn) (net.Conn, credentials.AuthInfo, error) { return nil, nil, errClientAlwaysFailCred } func (c clientAlwaysFailCred) ServerHandshake(rawConn net.Conn) (net.Conn, credentials.AuthInfo, error) { return rawConn, nil, nil } func (c clientAlwaysFailCred) Info() credentials.ProtocolInfo { return credentials.ProtocolInfo{} } func (c clientAlwaysFailCred) Clone() credentials.TransportCredentials { return nil } func (c clientAlwaysFailCred) OverrideServerName(s string) error { return nil } func (s) TestFailFastRPCErrorOnBadCertificates(t *testing.T) { te := newTest(t, env{name: "bad-cred", network: "tcp", security: "clientAlwaysFailCred", balancer: "round_robin"}) te.startServer(&testServer{security: te.e.security}) defer te.tearDown() opts := []grpc.DialOption{grpc.WithTransportCredentials(clientAlwaysFailCred{})} ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() cc, err := grpc.DialContext(ctx, te.srvAddr, opts...) if err != nil { t.Fatalf("Dial(_) = %v, want %v", err, nil) } defer cc.Close() tc := testpb.NewTestServiceClient(cc) for i := 0; i < 1000; i++ { // This loop runs for at most 1 second. The first several RPCs will fail // with Unavailable because the connection hasn't started. When the // first connection failed with creds error, the next RPC should also // fail with the expected error. if _, err = tc.EmptyCall(context.Background(), &testpb.Empty{}); strings.Contains(err.Error(), clientAlwaysFailCredErrorMsg) { return } time.Sleep(time.Millisecond) } te.t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want err.Error() contains %q", err, clientAlwaysFailCredErrorMsg) } func (s) TestWaitForReadyRPCErrorOnBadCertificates(t *testing.T) { te := newTest(t, env{name: "bad-cred", network: "tcp", security: "clientAlwaysFailCred", balancer: "round_robin"}) te.startServer(&testServer{security: te.e.security}) defer te.tearDown() opts := []grpc.DialOption{grpc.WithTransportCredentials(clientAlwaysFailCred{})} dctx, dcancel := context.WithTimeout(context.Background(), 10*time.Second) defer dcancel() cc, err := grpc.DialContext(dctx, te.srvAddr, opts...) 
if err != nil { t.Fatalf("Dial(_) = %v, want %v", err, nil) } defer cc.Close() tc := testpb.NewTestServiceClient(cc) ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second) defer cancel() if _, err = tc.EmptyCall(ctx, &testpb.Empty{}, grpc.WaitForReady(true)); strings.Contains(err.Error(), clientAlwaysFailCredErrorMsg) { return } te.t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want err.Error() contains %q", err, clientAlwaysFailCredErrorMsg) } func (s) TestRPCTimeout(t *testing.T) { for _, e := range listTestEnv() { testRPCTimeout(t, e) } } func testRPCTimeout(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security, unaryCallSleepTime: 500 * time.Millisecond}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) const argSize = 2718 const respSize = 314 payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, argSize) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE, ResponseSize: respSize, Payload: payload, } for i := -1; i <= 10; i++ { ctx, cancel := context.WithTimeout(context.Background(), time.Duration(i)*time.Millisecond) if _, err := tc.UnaryCall(ctx, req); status.Code(err) != codes.DeadlineExceeded { t.Fatalf("TestService/UnaryCallv(_, _) = _, %v; want , error code: %s", err, codes.DeadlineExceeded) } cancel() } } func (s) TestDisabledIOBuffers(t *testing.T) { payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, int32(60000)) if err != nil { t.Fatalf("Failed to create payload: %v", err) } req := &testpb.StreamingOutputCallRequest{ Payload: payload, } resp := &testpb.StreamingOutputCallResponse{ Payload: payload, } ss := &stubServer{ fullDuplexCall: func(stream testpb.TestService_FullDuplexCallServer) error { for { in, err := stream.Recv() if err == io.EOF { return nil } if err != nil { t.Errorf("stream.Recv() = _, %v, want _, ", err) return err } if !reflect.DeepEqual(in.Payload.Body, payload.Body) { t.Errorf("Received message(len: %v) on server not what was expected(len: %v).", len(in.Payload.Body), len(payload.Body)) return err } if err := stream.Send(resp); err != nil { t.Errorf("stream.Send(_)= %v, want ", err) return err } } }, } s := grpc.NewServer(grpc.WriteBufferSize(0), grpc.ReadBufferSize(0)) testpb.RegisterTestServiceServer(s, ss) lis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Failed to create listener: %v", err) } done := make(chan struct{}) go func() { s.Serve(lis) close(done) }() defer s.Stop() dctx, dcancel := context.WithTimeout(context.Background(), 5*time.Second) defer dcancel() cc, err := grpc.DialContext(dctx, lis.Addr().String(), grpc.WithInsecure(), grpc.WithBlock(), grpc.WithWriteBufferSize(0), grpc.WithReadBufferSize(0)) if err != nil { t.Fatalf("Failed to dial server") } defer cc.Close() c := testpb.NewTestServiceClient(cc) ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second) defer cancel() stream, err := c.FullDuplexCall(ctx, grpc.WaitForReady(true)) if err != nil { t.Fatalf("Failed to send test RPC to server") } for i := 0; i < 10; i++ { if err := stream.Send(req); err != nil { t.Fatalf("stream.Send(_) = %v, want ", err) } in, err := stream.Recv() if err != nil { t.Fatalf("stream.Recv() = _, %v, want _, ", err) } if !reflect.DeepEqual(in.Payload.Body, payload.Body) { t.Fatalf("Received message(len: %v) on client not what was expected(len: %v).", len(in.Payload.Body), len(payload.Body)) } } stream.CloseSend() if _, err := stream.Recv(); err != io.EOF { 
t.Fatalf("stream.Recv() = _, %v, want _, io.EOF", err) } } func (s) TestServerMaxHeaderListSizeClientUserViolation(t *testing.T) { for _, e := range listTestEnv() { if e.httpHandler { continue } testServerMaxHeaderListSizeClientUserViolation(t, e) } } func testServerMaxHeaderListSizeClientUserViolation(t *testing.T, e env) { te := newTest(t, e) te.maxServerHeaderListSize = new(uint32) *te.maxServerHeaderListSize = 216 te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second) defer cancel() ctx = metadata.AppendToOutgoingContext(ctx, "oversize", string(make([]byte, 216))) var err error if err = verifyResultWithDelay(func() (bool, error) { if _, err = tc.EmptyCall(ctx, &testpb.Empty{}); err != nil && status.Code(err) == codes.Internal { return true, nil } return false, fmt.Errorf("tc.EmptyCall() = _, err: %v, want _, error code: %v", err, codes.Internal) }); err != nil { t.Fatal(err) } } func (s) TestClientMaxHeaderListSizeServerUserViolation(t *testing.T) { for _, e := range listTestEnv() { if e.httpHandler { continue } testClientMaxHeaderListSizeServerUserViolation(t, e) } } func testClientMaxHeaderListSizeServerUserViolation(t *testing.T, e env) { te := newTest(t, e) te.maxClientHeaderListSize = new(uint32) *te.maxClientHeaderListSize = 1 // any header server sends will violate te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second) defer cancel() var err error if err = verifyResultWithDelay(func() (bool, error) { if _, err = tc.EmptyCall(ctx, &testpb.Empty{}); err != nil && status.Code(err) == codes.Internal { return true, nil } return false, fmt.Errorf("tc.EmptyCall() = _, err: %v, want _, error code: %v", err, codes.Internal) }); err != nil { t.Fatal(err) } } func (s) TestServerMaxHeaderListSizeClientIntentionalViolation(t *testing.T) { for _, e := range listTestEnv() { if e.httpHandler || e.security == "tls" { continue } testServerMaxHeaderListSizeClientIntentionalViolation(t, e) } } func testServerMaxHeaderListSizeClientIntentionalViolation(t *testing.T, e env) { te := newTest(t, e) te.maxServerHeaderListSize = new(uint32) *te.maxServerHeaderListSize = 512 te.startServer(&testServer{security: e.security}) defer te.tearDown() cc, dw := te.clientConnWithConnControl() tc := &testServiceClientWrapper{TestServiceClient: testpb.NewTestServiceClient(cc)} ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second) defer cancel() stream, err := tc.FullDuplexCall(ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want _, <nil>", tc, err) } rcw := dw.getRawConnWrapper() val := make([]string, 512) for i := range val { val[i] = "a" } // allow for client to send the initial header time.Sleep(100 * time.Millisecond) rcw.writeHeaders(http2.HeadersFrameParam{ StreamID: tc.getCurrentStreamID(), BlockFragment: rcw.encodeHeader("oversize", strings.Join(val, "")), EndStream: false, EndHeaders: true, }) if _, err := stream.Recv(); err == nil || status.Code(err) != codes.Internal { t.Fatalf("stream.Recv() = _, %v, want _, error code: %v", err, codes.Internal) } } func (s) TestClientMaxHeaderListSizeServerIntentionalViolation(t *testing.T) { for _, e := range listTestEnv() { if e.httpHandler || e.security == "tls" { continue } testClientMaxHeaderListSizeServerIntentionalViolation(t, e) } } func
testClientMaxHeaderListSizeServerIntentionalViolation(t *testing.T, e env) { te := newTest(t, e) te.maxClientHeaderListSize = new(uint32) *te.maxClientHeaderListSize = 200 lw := te.startServerWithConnControl(&testServer{security: e.security, setHeaderOnly: true}) defer te.tearDown() cc, _ := te.clientConnWithConnControl() tc := &testServiceClientWrapper{TestServiceClient: testpb.NewTestServiceClient(cc)} ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second) defer cancel() stream, err := tc.FullDuplexCall(ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want _, ", tc, err) } var i int var rcw *rawConnWrapper for i = 0; i < 100; i++ { rcw = lw.getLastConn() if rcw != nil { break } time.Sleep(10 * time.Millisecond) continue } if i == 100 { t.Fatalf("failed to create server transport after 1s") } val := make([]string, 200) for i := range val { val[i] = "a" } // allow for client to send the initial header. time.Sleep(100 * time.Millisecond) rcw.writeHeaders(http2.HeadersFrameParam{ StreamID: tc.getCurrentStreamID(), BlockFragment: rcw.encodeRawHeader("oversize", strings.Join(val, "")), EndStream: false, EndHeaders: true, }) if _, err := stream.Recv(); err == nil || status.Code(err) != codes.Internal { t.Fatalf("stream.Recv() = _, %v, want _, error code: %v", err, codes.Internal) } } func (s) TestNetPipeConn(t *testing.T) { // This test will block indefinitely if grpc writes both client and server // prefaces without either reading from the Conn. pl := testutils.NewPipeListener() s := grpc.NewServer() defer s.Stop() ts := &funcServer{unaryCall: func(ctx context.Context, in *testpb.SimpleRequest) (*testpb.SimpleResponse, error) { return &testpb.SimpleResponse{}, nil }} testpb.RegisterTestServiceServer(s, ts) go s.Serve(pl) ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second) defer cancel() cc, err := grpc.DialContext(ctx, "", grpc.WithInsecure(), grpc.WithDialer(pl.Dialer())) if err != nil { t.Fatalf("Error creating client: %v", err) } defer cc.Close() client := testpb.NewTestServiceClient(cc) if _, err := client.UnaryCall(ctx, &testpb.SimpleRequest{}); err != nil { t.Fatalf("UnaryCall(_) = _, %v; want _, nil", err) } } func (s) TestLargeTimeout(t *testing.T) { for _, e := range listTestEnv() { testLargeTimeout(t, e) } } func testLargeTimeout(t *testing.T, e env) { te := newTest(t, e) te.declareLogNoise("Server.processUnaryRPC failed to write status") ts := &funcServer{} te.startServer(ts) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) timeouts := []time.Duration{ time.Duration(math.MaxInt64), // will be (correctly) converted to // 2562048 hours, which overflows upon converting back to an int64 2562047 * time.Hour, // the largest timeout that does not overflow } for i, maxTimeout := range timeouts { ts.unaryCall = func(ctx context.Context, in *testpb.SimpleRequest) (*testpb.SimpleResponse, error) { deadline, ok := ctx.Deadline() timeout := time.Until(deadline) minTimeout := maxTimeout - 5*time.Second if !ok || timeout < minTimeout || timeout > maxTimeout { t.Errorf("ctx.Deadline() = (now+%v), %v; want [%v, %v], true", timeout, ok, minTimeout, maxTimeout) return nil, status.Error(codes.OutOfRange, "deadline error") } return &testpb.SimpleResponse{}, nil } ctx, cancel := context.WithTimeout(context.Background(), maxTimeout) defer cancel() if _, err := tc.UnaryCall(ctx, &testpb.SimpleRequest{}); err != nil { t.Errorf("case %v: UnaryCall(_) = _, %v; want _, nil", i, err) } } } // Proxies typically send GO_AWAY followed by 
connection closure a minute or so later. This // test ensures that the connection is re-created after GO_AWAY and not affected by the // subsequent (old) connection closure. func (s) TestGoAwayThenClose(t *testing.T) { ctx, cancel := context.WithTimeout(context.Background(), 20*time.Second) defer cancel() lis1, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Error while listening. Err: %v", err) } s1 := grpc.NewServer() defer s1.Stop() ts1 := &funcServer{ unaryCall: func(ctx context.Context, in *testpb.SimpleRequest) (*testpb.SimpleResponse, error) { return &testpb.SimpleResponse{}, nil }, fullDuplexCall: func(stream testpb.TestService_FullDuplexCallServer) error { // Wait forever. _, err := stream.Recv() if err == nil { t.Error("expected to never receive any message") } return err }, } testpb.RegisterTestServiceServer(s1, ts1) go s1.Serve(lis1) conn2Established := grpcsync.NewEvent() lis2, err := listenWithNotifyingListener("tcp", "localhost:0", conn2Established) if err != nil { t.Fatalf("Error while listening. Err: %v", err) } s2 := grpc.NewServer() defer s2.Stop() conn2Ready := grpcsync.NewEvent() ts2 := &funcServer{unaryCall: func(ctx context.Context, in *testpb.SimpleRequest) (*testpb.SimpleResponse, error) { conn2Ready.Fire() return &testpb.SimpleResponse{}, nil }} testpb.RegisterTestServiceServer(s2, ts2) go s2.Serve(lis2) r, rcleanup := manual.GenerateAndRegisterManualResolver() defer rcleanup() r.InitialState(resolver.State{Addresses: []resolver.Address{ {Addr: lis1.Addr().String()}, {Addr: lis2.Addr().String()}, }}) cc, err := grpc.DialContext(ctx, r.Scheme()+":///", grpc.WithInsecure()) if err != nil { t.Fatalf("Error creating client: %v", err) } defer cc.Close() client := testpb.NewTestServiceClient(cc) // Should go on connection 1. We use a long-lived RPC because it will cause GracefulStop to send GO_AWAY, but the // connection doesn't get closed until the server stops and the client receives. stream, err := client.FullDuplexCall(ctx) if err != nil { t.Fatalf("FullDuplexCall(_) = _, %v; want _, nil", err) } // Send GO_AWAY to connection 1. go s1.GracefulStop() // Wait for connection 2 to be established. <-conn2Established.Done() // Close connection 1. s1.Stop() // Wait for client to close. _, err = stream.Recv() if err == nil { t.Fatal("expected the stream to die, but got a successful Recv") } // Connection was dialed, so ac is either in connecting or ready. Because there's a race // between ac state change and balancer state change, so cc could still be transient // failure. This wait make sure cc is at least in connecting, and RPCs won't fail after // this. cc.WaitForStateChange(ctx, connectivity.TransientFailure) // Do a bunch of RPCs, make sure it stays stable. These should go to connection 2. 
for i := 0; i < 10; i++ { if _, err := client.UnaryCall(ctx, &testpb.SimpleRequest{}); err != nil { t.Fatalf("UnaryCall(_) = _, %v; want _, nil", err) } } } func listenWithNotifyingListener(network, address string, event *grpcsync.Event) (net.Listener, error) { lis, err := net.Listen(network, address) if err != nil { return nil, err } return notifyingListener{connEstablished: event, Listener: lis}, nil } type notifyingListener struct { connEstablished *grpcsync.Event net.Listener } func (lis notifyingListener) Accept() (net.Conn, error) { defer lis.connEstablished.Fire() return lis.Listener.Accept() } func (s) TestRPCWaitsForResolver(t *testing.T) { te := testServiceConfigSetup(t, tcpClearRREnv) te.startServer(&testServer{security: tcpClearRREnv.security}) defer te.tearDown() r, rcleanup := manual.GenerateAndRegisterManualResolver() defer rcleanup() te.resolverScheme = r.Scheme() te.nonBlockingDial = true cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond) defer cancel() // With no resolved addresses yet, this will timeout. if _, err := tc.EmptyCall(ctx, &testpb.Empty{}); status.Code(err) != codes.DeadlineExceeded { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, %s", err, codes.DeadlineExceeded) } ctx, cancel = context.WithTimeout(context.Background(), 10*time.Second) defer cancel() go func() { time.Sleep(time.Second) r.UpdateState(resolver.State{ Addresses: []resolver.Address{{Addr: te.srvAddr}}, ServiceConfig: parseCfg(`{ "methodConfig": [ { "name": [ { "service": "grpc.testing.TestService", "method": "UnaryCall" } ], "maxRequestMessageBytes": 0 } ] }`)}) }() // We wait a second before providing a service config and resolving // addresses. So this will wait for that and then honor the // maxRequestMessageBytes it contains. if _, err := tc.UnaryCall(ctx, &testpb.SimpleRequest{ResponseType: testpb.PayloadType_UNCOMPRESSABLE}); status.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, nil", err) } if got := ctx.Err(); got != nil { t.Fatalf("ctx.Err() = %v; want nil (deadline should be set short by service config)", got) } if _, err := tc.UnaryCall(ctx, &testpb.SimpleRequest{}); err != nil { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, nil", err) } } func (s) TestHTTPHeaderFrameErrorHandlingHTTPMode(t *testing.T) { // Non-gRPC content-type fallback path. for httpCode := range transport.HTTPStatusConvTab { doHTTPHeaderTest(t, transport.HTTPStatusConvTab[int(httpCode)], []string{ ":status", fmt.Sprintf("%d", httpCode), "content-type", "text/html", // non-gRPC content type to switch to HTTP mode. "grpc-status", "1", // Make up a gRPC status error "grpc-status-details-bin", "???", // Make up a gRPC field parsing error }) } // Missing content-type fallback path. for httpCode := range transport.HTTPStatusConvTab { doHTTPHeaderTest(t, transport.HTTPStatusConvTab[int(httpCode)], []string{ ":status", fmt.Sprintf("%d", httpCode), // Omitting content type to switch to HTTP mode. "grpc-status", "1", // Make up a gRPC status error "grpc-status-details-bin", "???", // Make up a gRPC field parsing error }) } // Malformed HTTP status when fallback. doHTTPHeaderTest(t, codes.Internal, []string{ ":status", "abc", // Omitting content type to switch to HTTP mode. 
"grpc-status", "1", // Make up a gRPC status error "grpc-status-details-bin", "???", // Make up a gRPC field parsing error }) } // Testing erroneous ResponseHeader or Trailers-only (delivered in the first HEADERS frame). func (s) TestHTTPHeaderFrameErrorHandlingInitialHeader(t *testing.T) { for _, test := range []struct { header []string errCode codes.Code }{ { // missing gRPC status. header: []string{ ":status", "403", "content-type", "application/grpc", }, errCode: codes.Unknown, }, { // malformed grpc-status. header: []string{ ":status", "502", "content-type", "application/grpc", "grpc-status", "abc", }, errCode: codes.Internal, }, { // Malformed grpc-tags-bin field. header: []string{ ":status", "502", "content-type", "application/grpc", "grpc-status", "0", "grpc-tags-bin", "???", }, errCode: codes.Internal, }, { // gRPC status error. header: []string{ ":status", "502", "content-type", "application/grpc", "grpc-status", "3", }, errCode: codes.InvalidArgument, }, } { doHTTPHeaderTest(t, test.errCode, test.header) } } // Testing non-Trailers-only Trailers (delievered in second HEADERS frame) func (s) TestHTTPHeaderFrameErrorHandlingNormalTrailer(t *testing.T) { for _, test := range []struct { responseHeader []string trailer []string errCode codes.Code }{ { responseHeader: []string{ ":status", "200", "content-type", "application/grpc", }, trailer: []string{ // trailer missing grpc-status ":status", "502", }, errCode: codes.Unknown, }, { responseHeader: []string{ ":status", "404", "content-type", "application/grpc", }, trailer: []string{ // malformed grpc-status-details-bin field "grpc-status", "0", "grpc-status-details-bin", "????", }, errCode: codes.Internal, }, } { doHTTPHeaderTest(t, test.errCode, test.responseHeader, test.trailer) } } func (s) TestHTTPHeaderFrameErrorHandlingMoreThanTwoHeaders(t *testing.T) { header := []string{ ":status", "200", "content-type", "application/grpc", } doHTTPHeaderTest(t, codes.Internal, header, header, header) } type httpServer struct { headerFields [][]string } func (s *httpServer) writeHeader(framer *http2.Framer, sid uint32, headerFields []string, endStream bool) error { if len(headerFields)%2 == 1 { panic("odd number of kv args") } var buf bytes.Buffer henc := hpack.NewEncoder(&buf) for len(headerFields) > 0 { k, v := headerFields[0], headerFields[1] headerFields = headerFields[2:] henc.WriteField(hpack.HeaderField{Name: k, Value: v}) } return framer.WriteHeaders(http2.HeadersFrameParam{ StreamID: sid, BlockFragment: buf.Bytes(), EndStream: endStream, EndHeaders: true, }) } func (s *httpServer) start(t *testing.T, lis net.Listener) { // Launch an HTTP server to send back header. go func() { conn, err := lis.Accept() if err != nil { t.Errorf("Error accepting connection: %v", err) return } defer conn.Close() // Read preface sent by client. if _, err = io.ReadFull(conn, make([]byte, len(http2.ClientPreface))); err != nil { t.Errorf("Error at server-side while reading preface from client. Err: %v", err) return } reader := bufio.NewReader(conn) writer := bufio.NewWriter(conn) framer := http2.NewFramer(writer, reader) if err = framer.WriteSettingsAck(); err != nil { t.Errorf("Error at server-side while sending Settings ack. Err: %v", err) return } writer.Flush() // necessary since client is expecting preface before declaring connection fully setup. var sid uint32 // Read frames until a header is received. for { frame, err := framer.ReadFrame() if err != nil { t.Errorf("Error at server-side while reading frame. 
Err: %v", err) return } if hframe, ok := frame.(*http2.HeadersFrame); ok { sid = hframe.Header().StreamID break } } for i, headers := range s.headerFields { if err = s.writeHeader(framer, sid, headers, i == len(s.headerFields)-1); err != nil { t.Errorf("Error at server-side while writing headers. Err: %v", err) return } writer.Flush() } }() } func doHTTPHeaderTest(t *testing.T, errCode codes.Code, headerFields ...[]string) { t.Helper() lis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Failed to listen. Err: %v", err) } defer lis.Close() server := &httpServer{ headerFields: headerFields, } server.start(t, lis) cc, err := grpc.Dial(lis.Addr().String(), grpc.WithInsecure()) if err != nil { t.Fatalf("failed to dial due to err: %v", err) } defer cc.Close() ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() client := testpb.NewTestServiceClient(cc) stream, err := client.FullDuplexCall(ctx) if err != nil { t.Fatalf("error creating stream due to err: %v", err) } if _, err := stream.Recv(); err == nil || status.Code(err) != errCode { t.Fatalf("stream.Recv() = _, %v, want error code: %v", err, errCode) } } func parseCfg(s string) serviceconfig.Config { c, err := serviceconfig.Parse(s) if err != nil { panic(fmt.Sprintf("Error parsing config %q: %v", s, err)) } return c } grpc-go-1.22.1/test/go_vet/000077500000000000000000000000001351635773100154105ustar00rootroot00000000000000grpc-go-1.22.1/test/go_vet/vet.go000066400000000000000000000026371351635773100165450ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // vet checks whether files that are supposed to be built on appengine running // Go 1.10 or earlier import an unsupported package (e.g. "unsafe", "syscall"). package main import ( "fmt" "go/build" "os" ) func main() { fail := false b := build.Default b.BuildTags = []string{"appengine", "appenginevm"} argsWithoutProg := os.Args[1:] for _, dir := range argsWithoutProg { p, err := b.Import(".", dir, 0) if _, ok := err.(*build.NoGoError); ok { continue } else if err != nil { fmt.Printf("build.Import failed due to %v\n", err) fail = true continue } for _, pkg := range p.Imports { if pkg == "syscall" || pkg == "unsafe" { fmt.Printf("Package %s/%s importing %s package without appengine build tag is NOT ALLOWED!\n", p.Dir, p.Name, pkg) fail = true } } } if fail { os.Exit(1) } } grpc-go-1.22.1/test/gracefulstop_test.go000066400000000000000000000145601351635773100202170ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
* See the License for the specific language governing permissions and * limitations under the License. * */ package test import ( "context" "fmt" "io" "net" "sync" "testing" "time" "golang.org/x/net/http2" "google.golang.org/grpc" "google.golang.org/grpc/internal/envconfig" testpb "google.golang.org/grpc/test/grpc_testing" ) type delayListener struct { net.Listener closeCalled chan struct{} acceptCalled chan struct{} allowCloseCh chan struct{} cc *delayConn dialed bool } func (d *delayListener) Accept() (net.Conn, error) { select { case <-d.acceptCalled: // On the second call, block until closed, then return an error. <-d.closeCalled <-d.allowCloseCh return nil, fmt.Errorf("listener is closed") default: close(d.acceptCalled) conn, err := d.Listener.Accept() if err != nil { return nil, err } framer := http2.NewFramer(conn, conn) if err = framer.WriteSettings(http2.Setting{}); err != nil { return nil, err } // Allow closing of listener only after accept. // Note: Dial can return successfully, yet Accept // might now have finished. d.allowClose() return conn, err } } func (d *delayListener) allowClose() { close(d.allowCloseCh) } func (d *delayListener) Close() error { close(d.closeCalled) go func() { <-d.allowCloseCh d.Listener.Close() }() return nil } func (d *delayListener) allowClientRead() { d.cc.allowRead() } func (d *delayListener) Dial(ctx context.Context) (net.Conn, error) { if d.dialed { // Only hand out one connection (net.Dial can return more even after the // listener is closed). This is not thread-safe, but Dial should never be // called concurrently in this environment. return nil, fmt.Errorf("no more conns") } d.dialed = true c, err := (&net.Dialer{}).DialContext(ctx, "tcp", d.Listener.Addr().String()) if err != nil { return nil, err } d.cc = &delayConn{Conn: c, blockRead: make(chan struct{})} return d.cc, nil } type delayConn struct { net.Conn blockRead chan struct{} } func (d *delayConn) allowRead() { close(d.blockRead) } func (d *delayConn) Read(b []byte) (n int, err error) { <-d.blockRead return d.Conn.Read(b) } func (s) TestGracefulStop(t *testing.T) { // We need to turn off RequireHandshake because if it were on, it would // block forever waiting to read the handshake, and the delayConn would // never let it (the delay is intended to block until later in the test). // // Restore current setting after test. old := envconfig.RequireHandshake defer func() { envconfig.RequireHandshake = old }() envconfig.RequireHandshake = envconfig.RequireHandshakeOff // This test ensures GracefulStop cannot race and break RPCs on new // connections created after GracefulStop was called but before // listener.Accept() returns a "closing" error. // // Steps of this test: // 1. Start Server // 2. GracefulStop() Server after listener's Accept is called, but don't // allow Accept() to exit when Close() is called on it. // 3. Create a new connection to the server after listener.Close() is called. // Server will want to send a GoAway on the new conn, but we delay client // reads until 5. // 4. Send an RPC on the new connection. // 5. Allow the client to read the GoAway. The RPC should complete // successfully. 
lis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Error listenening: %v", err) } dlis := &delayListener{ Listener: lis, acceptCalled: make(chan struct{}), closeCalled: make(chan struct{}), allowCloseCh: make(chan struct{}), } d := func(ctx context.Context, _ string) (net.Conn, error) { return dlis.Dial(ctx) } serverGotReq := make(chan struct{}) ss := &stubServer{ fullDuplexCall: func(stream testpb.TestService_FullDuplexCallServer) error { close(serverGotReq) _, err := stream.Recv() if err != nil { return err } return stream.Send(&testpb.StreamingOutputCallResponse{}) }, } s := grpc.NewServer() testpb.RegisterTestServiceServer(s, ss) // 1. Start Server wg := sync.WaitGroup{} wg.Add(1) go func() { s.Serve(dlis) wg.Done() }() // 2. GracefulStop() Server after listener's Accept is called, but don't // allow Accept() to exit when Close() is called on it. <-dlis.acceptCalled wg.Add(1) go func() { s.GracefulStop() wg.Done() }() // 3. Create a new connection to the server after listener.Close() is called. // Server will want to send a GoAway on the new conn, but we delay it // until 5. <-dlis.closeCalled // Block until GracefulStop calls dlis.Close() // Now dial. The listener's Accept method will return a valid connection, // even though GracefulStop has closed the listener. ctx, dialCancel := context.WithTimeout(context.Background(), 5*time.Second) defer dialCancel() cc, err := grpc.DialContext(ctx, "", grpc.WithInsecure(), grpc.WithBlock(), grpc.WithContextDialer(d)) if err != nil { dlis.allowClientRead() t.Fatalf("grpc.Dial(%q) = %v", lis.Addr().String(), err) } client := testpb.NewTestServiceClient(cc) defer cc.Close() // 4. Send an RPC on the new connection. // The server would send a GOAWAY first, but we are delaying the server's // writes for now until the client writes more than the preface. ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second) stream, err := client.FullDuplexCall(ctx) if err != nil { t.Fatalf("FullDuplexCall= _, %v; want _, ", err) } go func() { // 5. Allow the client to read the GoAway. The RPC should complete // successfully. <-serverGotReq dlis.allowClientRead() }() if err := stream.Send(&testpb.StreamingOutputCallRequest{}); err != nil { t.Fatalf("stream.Send(_) = %v, want ", err) } if _, err := stream.Recv(); err != nil { t.Fatalf("stream.Recv() = _, %v, want _, ", err) } if _, err := stream.Recv(); err != io.EOF { t.Fatalf("stream.Recv() = _, %v, want _, io.EOF", err) } // 5. happens above, then we finish the call. cancel() wg.Wait() } grpc-go-1.22.1/test/grpc_testing/000077500000000000000000000000001351635773100166155ustar00rootroot00000000000000grpc-go-1.22.1/test/grpc_testing/test.pb.go000066400000000000000000001055201351635773100205260ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: grpc_testing/test.proto package grpc_testing import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import ( context "golang.org/x/net/context" grpc "google.golang.org/grpc" ) // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package // The type of payload that should be returned. 
type PayloadType int32 const ( // Compressable text format. PayloadType_COMPRESSABLE PayloadType = 0 // Uncompressable binary format. PayloadType_UNCOMPRESSABLE PayloadType = 1 // Randomly chosen from all other formats defined in this enum. PayloadType_RANDOM PayloadType = 2 ) var PayloadType_name = map[int32]string{ 0: "COMPRESSABLE", 1: "UNCOMPRESSABLE", 2: "RANDOM", } var PayloadType_value = map[string]int32{ "COMPRESSABLE": 0, "UNCOMPRESSABLE": 1, "RANDOM": 2, } func (x PayloadType) String() string { return proto.EnumName(PayloadType_name, int32(x)) } func (PayloadType) EnumDescriptor() ([]byte, []int) { return fileDescriptor_test_c9f6c5af4267cb88, []int{0} } type Empty struct { XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Empty) Reset() { *m = Empty{} } func (m *Empty) String() string { return proto.CompactTextString(m) } func (*Empty) ProtoMessage() {} func (*Empty) Descriptor() ([]byte, []int) { return fileDescriptor_test_c9f6c5af4267cb88, []int{0} } func (m *Empty) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Empty.Unmarshal(m, b) } func (m *Empty) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Empty.Marshal(b, m, deterministic) } func (dst *Empty) XXX_Merge(src proto.Message) { xxx_messageInfo_Empty.Merge(dst, src) } func (m *Empty) XXX_Size() int { return xxx_messageInfo_Empty.Size(m) } func (m *Empty) XXX_DiscardUnknown() { xxx_messageInfo_Empty.DiscardUnknown(m) } var xxx_messageInfo_Empty proto.InternalMessageInfo // A block of data, to simply increase gRPC message size. type Payload struct { // The type of data in body. Type PayloadType `protobuf:"varint,1,opt,name=type,proto3,enum=grpc.testing.PayloadType" json:"type,omitempty"` // Primary contents of payload. Body []byte `protobuf:"bytes,2,opt,name=body,proto3" json:"body,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Payload) Reset() { *m = Payload{} } func (m *Payload) String() string { return proto.CompactTextString(m) } func (*Payload) ProtoMessage() {} func (*Payload) Descriptor() ([]byte, []int) { return fileDescriptor_test_c9f6c5af4267cb88, []int{1} } func (m *Payload) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Payload.Unmarshal(m, b) } func (m *Payload) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Payload.Marshal(b, m, deterministic) } func (dst *Payload) XXX_Merge(src proto.Message) { xxx_messageInfo_Payload.Merge(dst, src) } func (m *Payload) XXX_Size() int { return xxx_messageInfo_Payload.Size(m) } func (m *Payload) XXX_DiscardUnknown() { xxx_messageInfo_Payload.DiscardUnknown(m) } var xxx_messageInfo_Payload proto.InternalMessageInfo func (m *Payload) GetType() PayloadType { if m != nil { return m.Type } return PayloadType_COMPRESSABLE } func (m *Payload) GetBody() []byte { if m != nil { return m.Body } return nil } // Unary request. type SimpleRequest struct { // Desired payload type in the response from the server. // If response_type is RANDOM, server randomly chooses one from other formats. ResponseType PayloadType `protobuf:"varint,1,opt,name=response_type,json=responseType,proto3,enum=grpc.testing.PayloadType" json:"response_type,omitempty"` // Desired payload size in the response from the server. // If response_type is COMPRESSABLE, this denotes the size before compression. 
ResponseSize int32 `protobuf:"varint,2,opt,name=response_size,json=responseSize,proto3" json:"response_size,omitempty"` // Optional input payload sent along with the request. Payload *Payload `protobuf:"bytes,3,opt,name=payload,proto3" json:"payload,omitempty"` // Whether SimpleResponse should include username. FillUsername bool `protobuf:"varint,4,opt,name=fill_username,json=fillUsername,proto3" json:"fill_username,omitempty"` // Whether SimpleResponse should include OAuth scope. FillOauthScope bool `protobuf:"varint,5,opt,name=fill_oauth_scope,json=fillOauthScope,proto3" json:"fill_oauth_scope,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *SimpleRequest) Reset() { *m = SimpleRequest{} } func (m *SimpleRequest) String() string { return proto.CompactTextString(m) } func (*SimpleRequest) ProtoMessage() {} func (*SimpleRequest) Descriptor() ([]byte, []int) { return fileDescriptor_test_c9f6c5af4267cb88, []int{2} } func (m *SimpleRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_SimpleRequest.Unmarshal(m, b) } func (m *SimpleRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_SimpleRequest.Marshal(b, m, deterministic) } func (dst *SimpleRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_SimpleRequest.Merge(dst, src) } func (m *SimpleRequest) XXX_Size() int { return xxx_messageInfo_SimpleRequest.Size(m) } func (m *SimpleRequest) XXX_DiscardUnknown() { xxx_messageInfo_SimpleRequest.DiscardUnknown(m) } var xxx_messageInfo_SimpleRequest proto.InternalMessageInfo func (m *SimpleRequest) GetResponseType() PayloadType { if m != nil { return m.ResponseType } return PayloadType_COMPRESSABLE } func (m *SimpleRequest) GetResponseSize() int32 { if m != nil { return m.ResponseSize } return 0 } func (m *SimpleRequest) GetPayload() *Payload { if m != nil { return m.Payload } return nil } func (m *SimpleRequest) GetFillUsername() bool { if m != nil { return m.FillUsername } return false } func (m *SimpleRequest) GetFillOauthScope() bool { if m != nil { return m.FillOauthScope } return false } // Unary response, as configured by the request. type SimpleResponse struct { // Payload to increase message size. Payload *Payload `protobuf:"bytes,1,opt,name=payload,proto3" json:"payload,omitempty"` // The user the request came from, for verifying authentication was // successful when the client expected it. Username string `protobuf:"bytes,2,opt,name=username,proto3" json:"username,omitempty"` // OAuth scope. 
OauthScope string `protobuf:"bytes,3,opt,name=oauth_scope,json=oauthScope,proto3" json:"oauth_scope,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *SimpleResponse) Reset() { *m = SimpleResponse{} } func (m *SimpleResponse) String() string { return proto.CompactTextString(m) } func (*SimpleResponse) ProtoMessage() {} func (*SimpleResponse) Descriptor() ([]byte, []int) { return fileDescriptor_test_c9f6c5af4267cb88, []int{3} } func (m *SimpleResponse) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_SimpleResponse.Unmarshal(m, b) } func (m *SimpleResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_SimpleResponse.Marshal(b, m, deterministic) } func (dst *SimpleResponse) XXX_Merge(src proto.Message) { xxx_messageInfo_SimpleResponse.Merge(dst, src) } func (m *SimpleResponse) XXX_Size() int { return xxx_messageInfo_SimpleResponse.Size(m) } func (m *SimpleResponse) XXX_DiscardUnknown() { xxx_messageInfo_SimpleResponse.DiscardUnknown(m) } var xxx_messageInfo_SimpleResponse proto.InternalMessageInfo func (m *SimpleResponse) GetPayload() *Payload { if m != nil { return m.Payload } return nil } func (m *SimpleResponse) GetUsername() string { if m != nil { return m.Username } return "" } func (m *SimpleResponse) GetOauthScope() string { if m != nil { return m.OauthScope } return "" } // Client-streaming request. type StreamingInputCallRequest struct { // Optional input payload sent along with the request. Payload *Payload `protobuf:"bytes,1,opt,name=payload,proto3" json:"payload,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *StreamingInputCallRequest) Reset() { *m = StreamingInputCallRequest{} } func (m *StreamingInputCallRequest) String() string { return proto.CompactTextString(m) } func (*StreamingInputCallRequest) ProtoMessage() {} func (*StreamingInputCallRequest) Descriptor() ([]byte, []int) { return fileDescriptor_test_c9f6c5af4267cb88, []int{4} } func (m *StreamingInputCallRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_StreamingInputCallRequest.Unmarshal(m, b) } func (m *StreamingInputCallRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_StreamingInputCallRequest.Marshal(b, m, deterministic) } func (dst *StreamingInputCallRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_StreamingInputCallRequest.Merge(dst, src) } func (m *StreamingInputCallRequest) XXX_Size() int { return xxx_messageInfo_StreamingInputCallRequest.Size(m) } func (m *StreamingInputCallRequest) XXX_DiscardUnknown() { xxx_messageInfo_StreamingInputCallRequest.DiscardUnknown(m) } var xxx_messageInfo_StreamingInputCallRequest proto.InternalMessageInfo func (m *StreamingInputCallRequest) GetPayload() *Payload { if m != nil { return m.Payload } return nil } // Client-streaming response. type StreamingInputCallResponse struct { // Aggregated size of payloads received from the client. 
AggregatedPayloadSize int32 `protobuf:"varint,1,opt,name=aggregated_payload_size,json=aggregatedPayloadSize,proto3" json:"aggregated_payload_size,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *StreamingInputCallResponse) Reset() { *m = StreamingInputCallResponse{} } func (m *StreamingInputCallResponse) String() string { return proto.CompactTextString(m) } func (*StreamingInputCallResponse) ProtoMessage() {} func (*StreamingInputCallResponse) Descriptor() ([]byte, []int) { return fileDescriptor_test_c9f6c5af4267cb88, []int{5} } func (m *StreamingInputCallResponse) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_StreamingInputCallResponse.Unmarshal(m, b) } func (m *StreamingInputCallResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_StreamingInputCallResponse.Marshal(b, m, deterministic) } func (dst *StreamingInputCallResponse) XXX_Merge(src proto.Message) { xxx_messageInfo_StreamingInputCallResponse.Merge(dst, src) } func (m *StreamingInputCallResponse) XXX_Size() int { return xxx_messageInfo_StreamingInputCallResponse.Size(m) } func (m *StreamingInputCallResponse) XXX_DiscardUnknown() { xxx_messageInfo_StreamingInputCallResponse.DiscardUnknown(m) } var xxx_messageInfo_StreamingInputCallResponse proto.InternalMessageInfo func (m *StreamingInputCallResponse) GetAggregatedPayloadSize() int32 { if m != nil { return m.AggregatedPayloadSize } return 0 } // Configuration for a particular response. type ResponseParameters struct { // Desired payload sizes in responses from the server. // If response_type is COMPRESSABLE, this denotes the size before compression. Size int32 `protobuf:"varint,1,opt,name=size,proto3" json:"size,omitempty"` // Desired interval between consecutive responses in the response stream in // microseconds. IntervalUs int32 `protobuf:"varint,2,opt,name=interval_us,json=intervalUs,proto3" json:"interval_us,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ResponseParameters) Reset() { *m = ResponseParameters{} } func (m *ResponseParameters) String() string { return proto.CompactTextString(m) } func (*ResponseParameters) ProtoMessage() {} func (*ResponseParameters) Descriptor() ([]byte, []int) { return fileDescriptor_test_c9f6c5af4267cb88, []int{6} } func (m *ResponseParameters) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ResponseParameters.Unmarshal(m, b) } func (m *ResponseParameters) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ResponseParameters.Marshal(b, m, deterministic) } func (dst *ResponseParameters) XXX_Merge(src proto.Message) { xxx_messageInfo_ResponseParameters.Merge(dst, src) } func (m *ResponseParameters) XXX_Size() int { return xxx_messageInfo_ResponseParameters.Size(m) } func (m *ResponseParameters) XXX_DiscardUnknown() { xxx_messageInfo_ResponseParameters.DiscardUnknown(m) } var xxx_messageInfo_ResponseParameters proto.InternalMessageInfo func (m *ResponseParameters) GetSize() int32 { if m != nil { return m.Size } return 0 } func (m *ResponseParameters) GetIntervalUs() int32 { if m != nil { return m.IntervalUs } return 0 } // Server-streaming request. type StreamingOutputCallRequest struct { // Desired payload type in the response from the server. // If response_type is RANDOM, the payload from each response in the stream // might be of different types. 
This is to simulate a mixed type of payload // stream. ResponseType PayloadType `protobuf:"varint,1,opt,name=response_type,json=responseType,proto3,enum=grpc.testing.PayloadType" json:"response_type,omitempty"` // Configuration for each expected response message. ResponseParameters []*ResponseParameters `protobuf:"bytes,2,rep,name=response_parameters,json=responseParameters,proto3" json:"response_parameters,omitempty"` // Optional input payload sent along with the request. Payload *Payload `protobuf:"bytes,3,opt,name=payload,proto3" json:"payload,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *StreamingOutputCallRequest) Reset() { *m = StreamingOutputCallRequest{} } func (m *StreamingOutputCallRequest) String() string { return proto.CompactTextString(m) } func (*StreamingOutputCallRequest) ProtoMessage() {} func (*StreamingOutputCallRequest) Descriptor() ([]byte, []int) { return fileDescriptor_test_c9f6c5af4267cb88, []int{7} } func (m *StreamingOutputCallRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_StreamingOutputCallRequest.Unmarshal(m, b) } func (m *StreamingOutputCallRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_StreamingOutputCallRequest.Marshal(b, m, deterministic) } func (dst *StreamingOutputCallRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_StreamingOutputCallRequest.Merge(dst, src) } func (m *StreamingOutputCallRequest) XXX_Size() int { return xxx_messageInfo_StreamingOutputCallRequest.Size(m) } func (m *StreamingOutputCallRequest) XXX_DiscardUnknown() { xxx_messageInfo_StreamingOutputCallRequest.DiscardUnknown(m) } var xxx_messageInfo_StreamingOutputCallRequest proto.InternalMessageInfo func (m *StreamingOutputCallRequest) GetResponseType() PayloadType { if m != nil { return m.ResponseType } return PayloadType_COMPRESSABLE } func (m *StreamingOutputCallRequest) GetResponseParameters() []*ResponseParameters { if m != nil { return m.ResponseParameters } return nil } func (m *StreamingOutputCallRequest) GetPayload() *Payload { if m != nil { return m.Payload } return nil } // Server-streaming response, as configured by the request and parameters. type StreamingOutputCallResponse struct { // Payload to increase response size. 
Payload *Payload `protobuf:"bytes,1,opt,name=payload,proto3" json:"payload,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *StreamingOutputCallResponse) Reset() { *m = StreamingOutputCallResponse{} } func (m *StreamingOutputCallResponse) String() string { return proto.CompactTextString(m) } func (*StreamingOutputCallResponse) ProtoMessage() {} func (*StreamingOutputCallResponse) Descriptor() ([]byte, []int) { return fileDescriptor_test_c9f6c5af4267cb88, []int{8} } func (m *StreamingOutputCallResponse) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_StreamingOutputCallResponse.Unmarshal(m, b) } func (m *StreamingOutputCallResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_StreamingOutputCallResponse.Marshal(b, m, deterministic) } func (dst *StreamingOutputCallResponse) XXX_Merge(src proto.Message) { xxx_messageInfo_StreamingOutputCallResponse.Merge(dst, src) } func (m *StreamingOutputCallResponse) XXX_Size() int { return xxx_messageInfo_StreamingOutputCallResponse.Size(m) } func (m *StreamingOutputCallResponse) XXX_DiscardUnknown() { xxx_messageInfo_StreamingOutputCallResponse.DiscardUnknown(m) } var xxx_messageInfo_StreamingOutputCallResponse proto.InternalMessageInfo func (m *StreamingOutputCallResponse) GetPayload() *Payload { if m != nil { return m.Payload } return nil } func init() { proto.RegisterType((*Empty)(nil), "grpc.testing.Empty") proto.RegisterType((*Payload)(nil), "grpc.testing.Payload") proto.RegisterType((*SimpleRequest)(nil), "grpc.testing.SimpleRequest") proto.RegisterType((*SimpleResponse)(nil), "grpc.testing.SimpleResponse") proto.RegisterType((*StreamingInputCallRequest)(nil), "grpc.testing.StreamingInputCallRequest") proto.RegisterType((*StreamingInputCallResponse)(nil), "grpc.testing.StreamingInputCallResponse") proto.RegisterType((*ResponseParameters)(nil), "grpc.testing.ResponseParameters") proto.RegisterType((*StreamingOutputCallRequest)(nil), "grpc.testing.StreamingOutputCallRequest") proto.RegisterType((*StreamingOutputCallResponse)(nil), "grpc.testing.StreamingOutputCallResponse") proto.RegisterEnum("grpc.testing.PayloadType", PayloadType_name, PayloadType_value) } // Reference imports to suppress errors if they are not otherwise used. var _ context.Context var _ grpc.ClientConn // This is a compile-time assertion to ensure that this generated file // is compatible with the grpc package it is being compiled against. const _ = grpc.SupportPackageIsVersion4 // TestServiceClient is the client API for TestService service. // // For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream. type TestServiceClient interface { // One empty request followed by one empty response. EmptyCall(ctx context.Context, in *Empty, opts ...grpc.CallOption) (*Empty, error) // One request followed by one response. // The server returns the client payload as-is. UnaryCall(ctx context.Context, in *SimpleRequest, opts ...grpc.CallOption) (*SimpleResponse, error) // One request followed by a sequence of responses (streamed download). // The server returns the payload with client desired type and sizes. StreamingOutputCall(ctx context.Context, in *StreamingOutputCallRequest, opts ...grpc.CallOption) (TestService_StreamingOutputCallClient, error) // A sequence of requests followed by one response (streamed upload). 
// The server returns the aggregated size of client payload as the result. StreamingInputCall(ctx context.Context, opts ...grpc.CallOption) (TestService_StreamingInputCallClient, error) // A sequence of requests with each request served by the server immediately. // As one request could lead to multiple responses, this interface // demonstrates the idea of full duplexing. FullDuplexCall(ctx context.Context, opts ...grpc.CallOption) (TestService_FullDuplexCallClient, error) // A sequence of requests followed by a sequence of responses. // The server buffers all the client requests and then serves them in order. A // stream of responses are returned to the client when the server starts with // first request. HalfDuplexCall(ctx context.Context, opts ...grpc.CallOption) (TestService_HalfDuplexCallClient, error) } type testServiceClient struct { cc *grpc.ClientConn } func NewTestServiceClient(cc *grpc.ClientConn) TestServiceClient { return &testServiceClient{cc} } func (c *testServiceClient) EmptyCall(ctx context.Context, in *Empty, opts ...grpc.CallOption) (*Empty, error) { out := new(Empty) err := c.cc.Invoke(ctx, "/grpc.testing.TestService/EmptyCall", in, out, opts...) if err != nil { return nil, err } return out, nil } func (c *testServiceClient) UnaryCall(ctx context.Context, in *SimpleRequest, opts ...grpc.CallOption) (*SimpleResponse, error) { out := new(SimpleResponse) err := c.cc.Invoke(ctx, "/grpc.testing.TestService/UnaryCall", in, out, opts...) if err != nil { return nil, err } return out, nil } func (c *testServiceClient) StreamingOutputCall(ctx context.Context, in *StreamingOutputCallRequest, opts ...grpc.CallOption) (TestService_StreamingOutputCallClient, error) { stream, err := c.cc.NewStream(ctx, &_TestService_serviceDesc.Streams[0], "/grpc.testing.TestService/StreamingOutputCall", opts...) if err != nil { return nil, err } x := &testServiceStreamingOutputCallClient{stream} if err := x.ClientStream.SendMsg(in); err != nil { return nil, err } if err := x.ClientStream.CloseSend(); err != nil { return nil, err } return x, nil } type TestService_StreamingOutputCallClient interface { Recv() (*StreamingOutputCallResponse, error) grpc.ClientStream } type testServiceStreamingOutputCallClient struct { grpc.ClientStream } func (x *testServiceStreamingOutputCallClient) Recv() (*StreamingOutputCallResponse, error) { m := new(StreamingOutputCallResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func (c *testServiceClient) StreamingInputCall(ctx context.Context, opts ...grpc.CallOption) (TestService_StreamingInputCallClient, error) { stream, err := c.cc.NewStream(ctx, &_TestService_serviceDesc.Streams[1], "/grpc.testing.TestService/StreamingInputCall", opts...) 
if err != nil { return nil, err } x := &testServiceStreamingInputCallClient{stream} return x, nil } type TestService_StreamingInputCallClient interface { Send(*StreamingInputCallRequest) error CloseAndRecv() (*StreamingInputCallResponse, error) grpc.ClientStream } type testServiceStreamingInputCallClient struct { grpc.ClientStream } func (x *testServiceStreamingInputCallClient) Send(m *StreamingInputCallRequest) error { return x.ClientStream.SendMsg(m) } func (x *testServiceStreamingInputCallClient) CloseAndRecv() (*StreamingInputCallResponse, error) { if err := x.ClientStream.CloseSend(); err != nil { return nil, err } m := new(StreamingInputCallResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func (c *testServiceClient) FullDuplexCall(ctx context.Context, opts ...grpc.CallOption) (TestService_FullDuplexCallClient, error) { stream, err := c.cc.NewStream(ctx, &_TestService_serviceDesc.Streams[2], "/grpc.testing.TestService/FullDuplexCall", opts...) if err != nil { return nil, err } x := &testServiceFullDuplexCallClient{stream} return x, nil } type TestService_FullDuplexCallClient interface { Send(*StreamingOutputCallRequest) error Recv() (*StreamingOutputCallResponse, error) grpc.ClientStream } type testServiceFullDuplexCallClient struct { grpc.ClientStream } func (x *testServiceFullDuplexCallClient) Send(m *StreamingOutputCallRequest) error { return x.ClientStream.SendMsg(m) } func (x *testServiceFullDuplexCallClient) Recv() (*StreamingOutputCallResponse, error) { m := new(StreamingOutputCallResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func (c *testServiceClient) HalfDuplexCall(ctx context.Context, opts ...grpc.CallOption) (TestService_HalfDuplexCallClient, error) { stream, err := c.cc.NewStream(ctx, &_TestService_serviceDesc.Streams[3], "/grpc.testing.TestService/HalfDuplexCall", opts...) if err != nil { return nil, err } x := &testServiceHalfDuplexCallClient{stream} return x, nil } type TestService_HalfDuplexCallClient interface { Send(*StreamingOutputCallRequest) error Recv() (*StreamingOutputCallResponse, error) grpc.ClientStream } type testServiceHalfDuplexCallClient struct { grpc.ClientStream } func (x *testServiceHalfDuplexCallClient) Send(m *StreamingOutputCallRequest) error { return x.ClientStream.SendMsg(m) } func (x *testServiceHalfDuplexCallClient) Recv() (*StreamingOutputCallResponse, error) { m := new(StreamingOutputCallResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } // TestServiceServer is the server API for TestService service. type TestServiceServer interface { // One empty request followed by one empty response. EmptyCall(context.Context, *Empty) (*Empty, error) // One request followed by one response. // The server returns the client payload as-is. UnaryCall(context.Context, *SimpleRequest) (*SimpleResponse, error) // One request followed by a sequence of responses (streamed download). // The server returns the payload with client desired type and sizes. StreamingOutputCall(*StreamingOutputCallRequest, TestService_StreamingOutputCallServer) error // A sequence of requests followed by one response (streamed upload). // The server returns the aggregated size of client payload as the result. StreamingInputCall(TestService_StreamingInputCallServer) error // A sequence of requests with each request served by the server immediately. 
// As one request could lead to multiple responses, this interface // demonstrates the idea of full duplexing. FullDuplexCall(TestService_FullDuplexCallServer) error // A sequence of requests followed by a sequence of responses. // The server buffers all the client requests and then serves them in order. A // stream of responses are returned to the client when the server starts with // first request. HalfDuplexCall(TestService_HalfDuplexCallServer) error } func RegisterTestServiceServer(s *grpc.Server, srv TestServiceServer) { s.RegisterService(&_TestService_serviceDesc, srv) } func _TestService_EmptyCall_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(Empty) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(TestServiceServer).EmptyCall(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/grpc.testing.TestService/EmptyCall", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(TestServiceServer).EmptyCall(ctx, req.(*Empty)) } return interceptor(ctx, in, info, handler) } func _TestService_UnaryCall_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(SimpleRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(TestServiceServer).UnaryCall(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/grpc.testing.TestService/UnaryCall", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(TestServiceServer).UnaryCall(ctx, req.(*SimpleRequest)) } return interceptor(ctx, in, info, handler) } func _TestService_StreamingOutputCall_Handler(srv interface{}, stream grpc.ServerStream) error { m := new(StreamingOutputCallRequest) if err := stream.RecvMsg(m); err != nil { return err } return srv.(TestServiceServer).StreamingOutputCall(m, &testServiceStreamingOutputCallServer{stream}) } type TestService_StreamingOutputCallServer interface { Send(*StreamingOutputCallResponse) error grpc.ServerStream } type testServiceStreamingOutputCallServer struct { grpc.ServerStream } func (x *testServiceStreamingOutputCallServer) Send(m *StreamingOutputCallResponse) error { return x.ServerStream.SendMsg(m) } func _TestService_StreamingInputCall_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(TestServiceServer).StreamingInputCall(&testServiceStreamingInputCallServer{stream}) } type TestService_StreamingInputCallServer interface { SendAndClose(*StreamingInputCallResponse) error Recv() (*StreamingInputCallRequest, error) grpc.ServerStream } type testServiceStreamingInputCallServer struct { grpc.ServerStream } func (x *testServiceStreamingInputCallServer) SendAndClose(m *StreamingInputCallResponse) error { return x.ServerStream.SendMsg(m) } func (x *testServiceStreamingInputCallServer) Recv() (*StreamingInputCallRequest, error) { m := new(StreamingInputCallRequest) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func _TestService_FullDuplexCall_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(TestServiceServer).FullDuplexCall(&testServiceFullDuplexCallServer{stream}) } type TestService_FullDuplexCallServer interface { Send(*StreamingOutputCallResponse) error Recv() (*StreamingOutputCallRequest, error) grpc.ServerStream } type testServiceFullDuplexCallServer struct { 
grpc.ServerStream } func (x *testServiceFullDuplexCallServer) Send(m *StreamingOutputCallResponse) error { return x.ServerStream.SendMsg(m) } func (x *testServiceFullDuplexCallServer) Recv() (*StreamingOutputCallRequest, error) { m := new(StreamingOutputCallRequest) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func _TestService_HalfDuplexCall_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(TestServiceServer).HalfDuplexCall(&testServiceHalfDuplexCallServer{stream}) } type TestService_HalfDuplexCallServer interface { Send(*StreamingOutputCallResponse) error Recv() (*StreamingOutputCallRequest, error) grpc.ServerStream } type testServiceHalfDuplexCallServer struct { grpc.ServerStream } func (x *testServiceHalfDuplexCallServer) Send(m *StreamingOutputCallResponse) error { return x.ServerStream.SendMsg(m) } func (x *testServiceHalfDuplexCallServer) Recv() (*StreamingOutputCallRequest, error) { m := new(StreamingOutputCallRequest) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } var _TestService_serviceDesc = grpc.ServiceDesc{ ServiceName: "grpc.testing.TestService", HandlerType: (*TestServiceServer)(nil), Methods: []grpc.MethodDesc{ { MethodName: "EmptyCall", Handler: _TestService_EmptyCall_Handler, }, { MethodName: "UnaryCall", Handler: _TestService_UnaryCall_Handler, }, }, Streams: []grpc.StreamDesc{ { StreamName: "StreamingOutputCall", Handler: _TestService_StreamingOutputCall_Handler, ServerStreams: true, }, { StreamName: "StreamingInputCall", Handler: _TestService_StreamingInputCall_Handler, ClientStreams: true, }, { StreamName: "FullDuplexCall", Handler: _TestService_FullDuplexCall_Handler, ServerStreams: true, ClientStreams: true, }, { StreamName: "HalfDuplexCall", Handler: _TestService_HalfDuplexCall_Handler, ServerStreams: true, ClientStreams: true, }, }, Metadata: "grpc_testing/test.proto", } func init() { proto.RegisterFile("grpc_testing/test.proto", fileDescriptor_test_c9f6c5af4267cb88) } var fileDescriptor_test_c9f6c5af4267cb88 = []byte{ // 587 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xbc, 0x55, 0xdb, 0x6e, 0xd3, 0x40, 0x10, 0x65, 0xdb, 0xf4, 0x36, 0x49, 0xad, 0x68, 0xab, 0xaa, 0xae, 0x8b, 0x84, 0x65, 0x1e, 0x30, 0x48, 0xa4, 0x28, 0x08, 0x1e, 0x41, 0xa5, 0x17, 0x51, 0x29, 0x4d, 0x82, 0x9d, 0x3c, 0x47, 0xdb, 0x64, 0x6b, 0x2c, 0x39, 0xf6, 0xb2, 0x5e, 0x57, 0xa4, 0x0f, 0xfc, 0x18, 0x3f, 0xc3, 0x47, 0xf0, 0x01, 0x68, 0xd7, 0x76, 0xe2, 0x24, 0xae, 0x48, 0x41, 0xf0, 0x14, 0x7b, 0xe6, 0xcc, 0x99, 0x73, 0x3c, 0xb3, 0x1b, 0x38, 0xf0, 0x38, 0x1b, 0x0e, 0x04, 0x8d, 0x85, 0x1f, 0x7a, 0xc7, 0xf2, 0xb7, 0xc1, 0x78, 0x24, 0x22, 0x5c, 0x93, 0x89, 0x46, 0x96, 0xb0, 0xb6, 0x60, 0xe3, 0x7c, 0xcc, 0xc4, 0xc4, 0x6a, 0xc1, 0x56, 0x97, 0x4c, 0x82, 0x88, 0x8c, 0xf0, 0x4b, 0xa8, 0x88, 0x09, 0xa3, 0x3a, 0x32, 0x91, 0xad, 0x35, 0x0f, 0x1b, 0xc5, 0x82, 0x46, 0x06, 0xea, 0x4d, 0x18, 0x75, 0x14, 0x0c, 0x63, 0xa8, 0x5c, 0x47, 0xa3, 0x89, 0xbe, 0x66, 0x22, 0xbb, 0xe6, 0xa8, 0x67, 0xeb, 0x27, 0x82, 0x5d, 0xd7, 0x1f, 0xb3, 0x80, 0x3a, 0xf4, 0x4b, 0x42, 0x63, 0x81, 0xdf, 0xc1, 0x2e, 0xa7, 0x31, 0x8b, 0xc2, 0x98, 0x0e, 0x56, 0x63, 0xaf, 0xe5, 0x78, 0xf9, 0x86, 0x9f, 0x16, 0xea, 0x63, 0xff, 0x8e, 0xaa, 0x76, 0x1b, 0x33, 0x90, 0xeb, 0xdf, 0x51, 0x7c, 0x0c, 0x5b, 0x2c, 0x65, 0xd0, 0xd7, 0x4d, 0x64, 0x57, 0x9b, 0xfb, 0xa5, 0xf4, 0x4e, 0x8e, 0x92, 0xac, 0x37, 0x7e, 0x10, 0x0c, 0x92, 0x98, 0xf2, 0x90, 0x8c, 0xa9, 0x5e, 0x31, 0x91, 0xbd, 0xed, 0xd4, 0x64, 0xb0, 0x9f, 
0xc5, 0xb0, 0x0d, 0x75, 0x05, 0x8a, 0x48, 0x22, 0x3e, 0x0f, 0xe2, 0x61, 0xc4, 0xa8, 0xbe, 0xa1, 0x70, 0x9a, 0x8c, 0x77, 0x64, 0xd8, 0x95, 0x51, 0xeb, 0x1b, 0x68, 0xb9, 0xeb, 0x54, 0x55, 0x51, 0x11, 0x5a, 0x49, 0x91, 0x01, 0xdb, 0x53, 0x31, 0xd2, 0xe2, 0x8e, 0x33, 0x7d, 0xc7, 0x4f, 0xa0, 0x5a, 0xd4, 0xb0, 0xae, 0xd2, 0x10, 0xcd, 0xfa, 0xb7, 0xe0, 0xd0, 0x15, 0x9c, 0x92, 0xb1, 0x1f, 0x7a, 0x97, 0x21, 0x4b, 0xc4, 0x29, 0x09, 0x82, 0x7c, 0x02, 0x0f, 0x95, 0x62, 0xf5, 0xc0, 0x28, 0x63, 0xcb, 0x9c, 0xbd, 0x85, 0x03, 0xe2, 0x79, 0x9c, 0x7a, 0x44, 0xd0, 0xd1, 0x20, 0xab, 0x49, 0x47, 0x83, 0xd4, 0x68, 0xf6, 0x67, 0xe9, 0x8c, 0x5a, 0xce, 0xc8, 0xba, 0x04, 0x9c, 0x73, 0x74, 0x09, 0x27, 0x63, 0x2a, 0x28, 0x8f, 0xe5, 0x12, 0x15, 0x4a, 0xd5, 0xb3, 0xb4, 0xeb, 0x87, 0x82, 0xf2, 0x5b, 0x22, 0x07, 0x94, 0x0d, 0x1c, 0xf2, 0x50, 0x3f, 0xb6, 0x7e, 0xa0, 0x82, 0xc2, 0x4e, 0x22, 0x16, 0x0c, 0xff, 0xed, 0xca, 0x7d, 0x82, 0xbd, 0x69, 0x3d, 0x9b, 0x4a, 0xd5, 0xd7, 0xcc, 0x75, 0xbb, 0xda, 0x34, 0xe7, 0x59, 0x96, 0x2d, 0x39, 0x98, 0x2f, 0xdb, 0x7c, 0xe8, 0x82, 0x5a, 0x6d, 0x38, 0x2a, 0x75, 0xf8, 0x87, 0xeb, 0xf5, 0xe2, 0x3d, 0x54, 0x0b, 0x86, 0x71, 0x1d, 0x6a, 0xa7, 0x9d, 0xab, 0xae, 0x73, 0xee, 0xba, 0x27, 0x1f, 0x5a, 0xe7, 0xf5, 0x47, 0x18, 0x83, 0xd6, 0x6f, 0xcf, 0xc5, 0x10, 0x06, 0xd8, 0x74, 0x4e, 0xda, 0x67, 0x9d, 0xab, 0xfa, 0x5a, 0xf3, 0x7b, 0x05, 0xaa, 0x3d, 0x1a, 0x0b, 0x97, 0xf2, 0x5b, 0x7f, 0x48, 0xf1, 0x1b, 0xd8, 0x51, 0x17, 0x88, 0x94, 0x85, 0xf7, 0xe6, 0xbb, 0xab, 0x84, 0x51, 0x16, 0xc4, 0x17, 0xb0, 0xd3, 0x0f, 0x09, 0x4f, 0xcb, 0x8e, 0xe6, 0x11, 0x73, 0x17, 0x87, 0xf1, 0xb8, 0x3c, 0x99, 0x7d, 0x80, 0x00, 0xf6, 0x4a, 0xbe, 0x0f, 0xb6, 0x17, 0x8a, 0xee, 0x5d, 0x12, 0xe3, 0xf9, 0x0a, 0xc8, 0xb4, 0xd7, 0x2b, 0x84, 0x7d, 0xc0, 0xcb, 0x27, 0x02, 0x3f, 0xbb, 0x87, 0x62, 0xf1, 0x04, 0x1a, 0xf6, 0xef, 0x81, 0x69, 0x2b, 0x5b, 0xb6, 0xd2, 0x2e, 0x92, 0x20, 0x38, 0x4b, 0x58, 0x40, 0xbf, 0xfe, 0x33, 0x4f, 0x36, 0x52, 0xae, 0xb4, 0x8f, 0x24, 0xb8, 0xf9, 0x0f, 0xad, 0xae, 0x37, 0xd5, 0x7f, 0xd0, 0xeb, 0x5f, 0x01, 0x00, 0x00, 0xff, 0xff, 0x07, 0xc7, 0x76, 0x69, 0x9e, 0x06, 0x00, 0x00, } grpc-go-1.22.1/test/grpc_testing/test.proto000066400000000000000000000120431351635773100206610ustar00rootroot00000000000000// Copyright 2017 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. // An integration test service that covers all the method signature permutations // of unary/streaming requests/responses. syntax = "proto3"; package grpc.testing; message Empty {} // The type of payload that should be returned. enum PayloadType { // Compressable text format. COMPRESSABLE = 0; // Uncompressable binary format. UNCOMPRESSABLE = 1; // Randomly chosen from all other formats defined in this enum. RANDOM = 2; } // A block of data, to simply increase gRPC message size. message Payload { // The type of data in body. PayloadType type = 1; // Primary contents of payload. bytes body = 2; } // Unary request. message SimpleRequest { // Desired payload type in the response from the server. 
// If response_type is RANDOM, server randomly chooses one from other formats. PayloadType response_type = 1; // Desired payload size in the response from the server. // If response_type is COMPRESSABLE, this denotes the size before compression. int32 response_size = 2; // Optional input payload sent along with the request. Payload payload = 3; // Whether SimpleResponse should include username. bool fill_username = 4; // Whether SimpleResponse should include OAuth scope. bool fill_oauth_scope = 5; } // Unary response, as configured by the request. message SimpleResponse { // Payload to increase message size. Payload payload = 1; // The user the request came from, for verifying authentication was // successful when the client expected it. string username = 2; // OAuth scope. string oauth_scope = 3; } // Client-streaming request. message StreamingInputCallRequest { // Optional input payload sent along with the request. Payload payload = 1; // Not expecting any payload from the response. } // Client-streaming response. message StreamingInputCallResponse { // Aggregated size of payloads received from the client. int32 aggregated_payload_size = 1; } // Configuration for a particular response. message ResponseParameters { // Desired payload sizes in responses from the server. // If response_type is COMPRESSABLE, this denotes the size before compression. int32 size = 1; // Desired interval between consecutive responses in the response stream in // microseconds. int32 interval_us = 2; } // Server-streaming request. message StreamingOutputCallRequest { // Desired payload type in the response from the server. // If response_type is RANDOM, the payload from each response in the stream // might be of different types. This is to simulate a mixed type of payload // stream. PayloadType response_type = 1; // Configuration for each expected response message. repeated ResponseParameters response_parameters = 2; // Optional input payload sent along with the request. Payload payload = 3; } // Server-streaming response, as configured by the request and parameters. message StreamingOutputCallResponse { // Payload to increase response size. Payload payload = 1; } // A simple service to test the various types of RPCs and experiment with // performance with various types of payload. service TestService { // One empty request followed by one empty response. rpc EmptyCall(Empty) returns (Empty); // One request followed by one response. // The server returns the client payload as-is. rpc UnaryCall(SimpleRequest) returns (SimpleResponse); // One request followed by a sequence of responses (streamed download). // The server returns the payload with client desired type and sizes. rpc StreamingOutputCall(StreamingOutputCallRequest) returns (stream StreamingOutputCallResponse); // A sequence of requests followed by one response (streamed upload). // The server returns the aggregated size of client payload as the result. rpc StreamingInputCall(stream StreamingInputCallRequest) returns (StreamingInputCallResponse); // A sequence of requests with each request served by the server immediately. // As one request could lead to multiple responses, this interface // demonstrates the idea of full duplexing. rpc FullDuplexCall(stream StreamingOutputCallRequest) returns (stream StreamingOutputCallResponse); // A sequence of requests followed by a sequence of responses. // The server buffers all the client requests and then serves them in order. 
A // stream of responses are returned to the client when the server starts with // first request. rpc HalfDuplexCall(stream StreamingOutputCallRequest) returns (stream StreamingOutputCallResponse); } grpc-go-1.22.1/test/healthcheck_test.go000066400000000000000000000742521351635773100177700ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package test import ( "context" "errors" "fmt" "net" "sync" "testing" "time" "google.golang.org/grpc" "google.golang.org/grpc/codes" "google.golang.org/grpc/connectivity" _ "google.golang.org/grpc/health" healthgrpc "google.golang.org/grpc/health/grpc_health_v1" healthpb "google.golang.org/grpc/health/grpc_health_v1" "google.golang.org/grpc/internal" "google.golang.org/grpc/internal/channelz" "google.golang.org/grpc/resolver" "google.golang.org/grpc/resolver/manual" "google.golang.org/grpc/status" testpb "google.golang.org/grpc/test/grpc_testing" ) var testHealthCheckFunc = internal.HealthCheckFunc func newTestHealthServer() *testHealthServer { return newTestHealthServerWithWatchFunc(defaultWatchFunc) } func newTestHealthServerWithWatchFunc(f func(s *testHealthServer, in *healthpb.HealthCheckRequest, stream healthgrpc.Health_WatchServer) error) *testHealthServer { return &testHealthServer{ watchFunc: f, update: make(chan struct{}, 1), status: make(map[string]healthpb.HealthCheckResponse_ServingStatus), } } // defaultWatchFunc will send a HealthCheckResponse to the client whenever SetServingStatus is called. func defaultWatchFunc(s *testHealthServer, in *healthpb.HealthCheckRequest, stream healthgrpc.Health_WatchServer) error { if in.Service != "foo" { return status.Error(codes.FailedPrecondition, "the defaultWatchFunc only handles request with service name to be \"foo\"") } var done bool for { select { case <-stream.Context().Done(): done = true case <-s.update: } if done { break } s.mu.Lock() resp := &healthpb.HealthCheckResponse{ Status: s.status[in.Service], } s.mu.Unlock() stream.SendMsg(resp) } return nil } type testHealthServer struct { watchFunc func(s *testHealthServer, in *healthpb.HealthCheckRequest, stream healthgrpc.Health_WatchServer) error mu sync.Mutex status map[string]healthpb.HealthCheckResponse_ServingStatus update chan struct{} } func (s *testHealthServer) Check(ctx context.Context, in *healthpb.HealthCheckRequest) (*healthpb.HealthCheckResponse, error) { return &healthpb.HealthCheckResponse{ Status: healthpb.HealthCheckResponse_SERVING, }, nil } func (s *testHealthServer) Watch(in *healthpb.HealthCheckRequest, stream healthgrpc.Health_WatchServer) error { return s.watchFunc(s, in, stream) } // SetServingStatus is called when need to reset the serving status of a service // or insert a new service entry into the statusMap. 
func (s *testHealthServer) SetServingStatus(service string, status healthpb.HealthCheckResponse_ServingStatus) { s.mu.Lock() s.status[service] = status select { case <-s.update: default: } s.update <- struct{}{} s.mu.Unlock() } func setupHealthCheckWrapper() (hcEnterChan chan struct{}, hcExitChan chan struct{}, wrapper internal.HealthChecker) { hcEnterChan = make(chan struct{}) hcExitChan = make(chan struct{}) wrapper = func(ctx context.Context, newStream func(string) (interface{}, error), update func(state connectivity.State), service string) error { close(hcEnterChan) defer close(hcExitChan) return testHealthCheckFunc(ctx, newStream, update, service) } return } type svrConfig struct { specialWatchFunc func(s *testHealthServer, in *healthpb.HealthCheckRequest, stream healthgrpc.Health_WatchServer) error } func setupServer(sc *svrConfig) (s *grpc.Server, lis net.Listener, ts *testHealthServer, deferFunc func(), err error) { s = grpc.NewServer() lis, err = net.Listen("tcp", "localhost:0") if err != nil { return nil, nil, nil, func() {}, fmt.Errorf("failed to listen due to err %v", err) } if sc.specialWatchFunc != nil { ts = newTestHealthServerWithWatchFunc(sc.specialWatchFunc) } else { ts = newTestHealthServer() } healthgrpc.RegisterHealthServer(s, ts) testpb.RegisterTestServiceServer(s, &testServer{}) go s.Serve(lis) return s, lis, ts, s.Stop, nil } type clientConfig struct { balancerName string testHealthCheckFuncWrapper internal.HealthChecker extraDialOption []grpc.DialOption } func setupClient(c *clientConfig) (cc *grpc.ClientConn, r *manual.Resolver, deferFunc func(), err error) { r, rcleanup := manual.GenerateAndRegisterManualResolver() var opts []grpc.DialOption opts = append(opts, grpc.WithInsecure(), grpc.WithBalancerName(c.balancerName)) if c.testHealthCheckFuncWrapper != nil { opts = append(opts, internal.WithHealthCheckFunc.(func(internal.HealthChecker) grpc.DialOption)(c.testHealthCheckFuncWrapper)) } opts = append(opts, c.extraDialOption...) cc, err = grpc.Dial(r.Scheme()+":///test.server", opts...) if err != nil { rcleanup() return nil, nil, nil, fmt.Errorf("dial failed due to err: %v", err) } return cc, r, func() { cc.Close(); rcleanup() }, nil } func (s) TestHealthCheckWatchStateChange(t *testing.T) { _, lis, ts, deferFunc, err := setupServer(&svrConfig{}) defer deferFunc() if err != nil { t.Fatal(err) } // The table below shows the expected series of addrConn connectivity transitions when server // updates its health status. As there's only one addrConn corresponds with the ClientConn in this // test, we use ClientConn's connectivity state as the addrConn connectivity state. 
//+------------------------------+-------------------------------------------+ //| Health Check Returned Status | Expected addrConn Connectivity Transition | //+------------------------------+-------------------------------------------+ //| NOT_SERVING | ->TRANSIENT FAILURE | //| SERVING | ->READY | //| SERVICE_UNKNOWN | ->TRANSIENT FAILURE | //| SERVING | ->READY | //| UNKNOWN | ->TRANSIENT FAILURE | //+------------------------------+-------------------------------------------+ ts.SetServingStatus("foo", healthpb.HealthCheckResponse_NOT_SERVING) cc, r, deferFunc, err := setupClient(&clientConfig{balancerName: "round_robin"}) if err != nil { t.Fatal(err) } defer deferFunc() r.UpdateState(resolver.State{ Addresses: []resolver.Address{{Addr: lis.Addr().String()}}, ServiceConfig: parseCfg(`{ "healthCheckConfig": { "serviceName": "foo" } }`)}) ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() if ok := cc.WaitForStateChange(ctx, connectivity.Idle); !ok { t.Fatal("ClientConn is still in IDLE state when the context times out.") } if ok := cc.WaitForStateChange(ctx, connectivity.Connecting); !ok { t.Fatal("ClientConn is still in CONNECTING state when the context times out.") } if s := cc.GetState(); s != connectivity.TransientFailure { t.Fatalf("ClientConn is in %v state, want TRANSIENT FAILURE", s) } ts.SetServingStatus("foo", healthpb.HealthCheckResponse_SERVING) if ok := cc.WaitForStateChange(ctx, connectivity.TransientFailure); !ok { t.Fatal("ClientConn is still in TRANSIENT FAILURE state when the context times out.") } if s := cc.GetState(); s != connectivity.Ready { t.Fatalf("ClientConn is in %v state, want READY", s) } ts.SetServingStatus("foo", healthpb.HealthCheckResponse_SERVICE_UNKNOWN) if ok := cc.WaitForStateChange(ctx, connectivity.Ready); !ok { t.Fatal("ClientConn is still in READY state when the context times out.") } if s := cc.GetState(); s != connectivity.TransientFailure { t.Fatalf("ClientConn is in %v state, want TRANSIENT FAILURE", s) } ts.SetServingStatus("foo", healthpb.HealthCheckResponse_SERVING) if ok := cc.WaitForStateChange(ctx, connectivity.TransientFailure); !ok { t.Fatal("ClientConn is still in TRANSIENT FAILURE state when the context times out.") } if s := cc.GetState(); s != connectivity.Ready { t.Fatalf("ClientConn is in %v state, want READY", s) } ts.SetServingStatus("foo", healthpb.HealthCheckResponse_UNKNOWN) if ok := cc.WaitForStateChange(ctx, connectivity.Ready); !ok { t.Fatal("ClientConn is still in READY state when the context times out.") } if s := cc.GetState(); s != connectivity.TransientFailure { t.Fatalf("ClientConn is in %v state, want TRANSIENT FAILURE", s) } } // If Watch returns Unimplemented, then the ClientConn should go into READY state. 
func (s) TestHealthCheckHealthServerNotRegistered(t *testing.T) { s := grpc.NewServer() lis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("failed to listen due to err: %v", err) } go s.Serve(lis) defer s.Stop() cc, r, deferFunc, err := setupClient(&clientConfig{balancerName: "round_robin"}) if err != nil { t.Fatal(err) } defer deferFunc() r.UpdateState(resolver.State{ Addresses: []resolver.Address{{Addr: lis.Addr().String()}}, ServiceConfig: parseCfg(`{ "healthCheckConfig": { "serviceName": "foo" } }`)}) ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() if ok := cc.WaitForStateChange(ctx, connectivity.Idle); !ok { t.Fatal("ClientConn is still in IDLE state when the context times out.") } if ok := cc.WaitForStateChange(ctx, connectivity.Connecting); !ok { t.Fatal("ClientConn is still in CONNECTING state when the context times out.") } if s := cc.GetState(); s != connectivity.Ready { t.Fatalf("ClientConn is in %v state, want READY", s) } } // In the case of a goaway received, the health check stream should be terminated and health check // function should exit. func (s) TestHealthCheckWithGoAway(t *testing.T) { hcEnterChan, hcExitChan, testHealthCheckFuncWrapper := setupHealthCheckWrapper() s, lis, ts, deferFunc, err := setupServer(&svrConfig{}) defer deferFunc() if err != nil { t.Fatal(err) } ts.SetServingStatus("foo", healthpb.HealthCheckResponse_SERVING) cc, r, deferFunc, err := setupClient(&clientConfig{ balancerName: "round_robin", testHealthCheckFuncWrapper: testHealthCheckFuncWrapper, }) if err != nil { t.Fatal(err) } defer deferFunc() tc := testpb.NewTestServiceClient(cc) r.UpdateState(resolver.State{ Addresses: []resolver.Address{{Addr: lis.Addr().String()}}, ServiceConfig: parseCfg(`{ "healthCheckConfig": { "serviceName": "foo" } }`)}) // make some rpcs to make sure connection is working. if err := verifyResultWithDelay(func() (bool, error) { if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); err != nil { return false, fmt.Errorf("TestService/EmptyCall(_, _) = _, %v, want _, ", err) } return true, nil }); err != nil { t.Fatal(err) } // the stream rpc will persist through goaway event. ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second) defer cancel() stream, err := tc.FullDuplexCall(ctx, grpc.WaitForReady(true)) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } respParam := []*testpb.ResponseParameters{{Size: 1}} payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, int32(1)) if err != nil { t.Fatal(err) } req := &testpb.StreamingOutputCallRequest{ ResponseParameters: respParam, Payload: payload, } if err := stream.Send(req); err != nil { t.Fatalf("%v.Send(_) = %v, want ", stream, err) } if _, err := stream.Recv(); err != nil { t.Fatalf("%v.Recv() = _, %v, want _, ", stream, err) } select { case <-hcExitChan: t.Fatal("Health check function has exited, which is not expected.") default: } // server sends GoAway go s.GracefulStop() select { case <-hcExitChan: case <-time.After(5 * time.Second): select { case <-hcEnterChan: default: t.Fatal("Health check function has not entered after 5s.") } t.Fatal("Health check function has not exited after 5s.") } // The existing RPC should be still good to proceed. 
if err := stream.Send(req); err != nil { t.Fatalf("%v.Send(_) = %v, want ", stream, err) } if _, err := stream.Recv(); err != nil { t.Fatalf("%v.Recv() = _, %v, want _, ", stream, err) } } func (s) TestHealthCheckWithConnClose(t *testing.T) { hcEnterChan, hcExitChan, testHealthCheckFuncWrapper := setupHealthCheckWrapper() s, lis, ts, deferFunc, err := setupServer(&svrConfig{}) defer deferFunc() if err != nil { t.Fatal(err) } ts.SetServingStatus("foo", healthpb.HealthCheckResponse_SERVING) cc, r, deferFunc, err := setupClient(&clientConfig{ balancerName: "round_robin", testHealthCheckFuncWrapper: testHealthCheckFuncWrapper, }) if err != nil { t.Fatal(err) } defer deferFunc() tc := testpb.NewTestServiceClient(cc) r.UpdateState(resolver.State{ Addresses: []resolver.Address{{Addr: lis.Addr().String()}}, ServiceConfig: parseCfg(`{ "healthCheckConfig": { "serviceName": "foo" } }`)}) // make some rpcs to make sure connection is working. if err := verifyResultWithDelay(func() (bool, error) { if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); err != nil { return false, fmt.Errorf("TestService/EmptyCall(_, _) = _, %v, want _, ", err) } return true, nil }); err != nil { t.Fatal(err) } select { case <-hcExitChan: t.Fatal("Health check function has exited, which is not expected.") default: } // server closes the connection s.Stop() select { case <-hcExitChan: case <-time.After(5 * time.Second): select { case <-hcEnterChan: default: t.Fatal("Health check function has not entered after 5s.") } t.Fatal("Health check function has not exited after 5s.") } } // addrConn drain happens when addrConn gets torn down due to its address being no longer in the // address list returned by the resolver. func (s) TestHealthCheckWithAddrConnDrain(t *testing.T) { hcEnterChan, hcExitChan, testHealthCheckFuncWrapper := setupHealthCheckWrapper() _, lis, ts, deferFunc, err := setupServer(&svrConfig{}) defer deferFunc() if err != nil { t.Fatal(err) } ts.SetServingStatus("foo", healthpb.HealthCheckResponse_SERVING) cc, r, deferFunc, err := setupClient(&clientConfig{ balancerName: "round_robin", testHealthCheckFuncWrapper: testHealthCheckFuncWrapper, }) if err != nil { t.Fatal(err) } defer deferFunc() tc := testpb.NewTestServiceClient(cc) sc := parseCfg(`{ "healthCheckConfig": { "serviceName": "foo" } }`) r.UpdateState(resolver.State{ Addresses: []resolver.Address{{Addr: lis.Addr().String()}}, ServiceConfig: sc, }) // make some rpcs to make sure connection is working. if err := verifyResultWithDelay(func() (bool, error) { if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); err != nil { return false, fmt.Errorf("TestService/EmptyCall(_, _) = _, %v, want _, ", err) } return true, nil }); err != nil { t.Fatal(err) } // the stream rpc will persist through goaway event. 
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second) defer cancel() stream, err := tc.FullDuplexCall(ctx, grpc.WaitForReady(true)) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } respParam := []*testpb.ResponseParameters{{Size: 1}} payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, int32(1)) if err != nil { t.Fatal(err) } req := &testpb.StreamingOutputCallRequest{ ResponseParameters: respParam, Payload: payload, } if err := stream.Send(req); err != nil { t.Fatalf("%v.Send(_) = %v, want ", stream, err) } if _, err := stream.Recv(); err != nil { t.Fatalf("%v.Recv() = _, %v, want _, ", stream, err) } select { case <-hcExitChan: t.Fatal("Health check function has exited, which is not expected.") default: } // trigger teardown of the ac r.UpdateState(resolver.State{Addresses: []resolver.Address{}, ServiceConfig: sc}) select { case <-hcExitChan: case <-time.After(5 * time.Second): select { case <-hcEnterChan: default: t.Fatal("Health check function has not entered after 5s.") } t.Fatal("Health check function has not exited after 5s.") } // The existing RPC should be still good to proceed. if err := stream.Send(req); err != nil { t.Fatalf("%v.Send(_) = %v, want ", stream, err) } if _, err := stream.Recv(); err != nil { t.Fatalf("%v.Recv() = _, %v, want _, ", stream, err) } } // ClientConn close will lead to its addrConns being torn down. func (s) TestHealthCheckWithClientConnClose(t *testing.T) { hcEnterChan, hcExitChan, testHealthCheckFuncWrapper := setupHealthCheckWrapper() _, lis, ts, deferFunc, err := setupServer(&svrConfig{}) defer deferFunc() if err != nil { t.Fatal(err) } ts.SetServingStatus("foo", healthpb.HealthCheckResponse_SERVING) cc, r, deferFunc, err := setupClient(&clientConfig{ balancerName: "round_robin", testHealthCheckFuncWrapper: testHealthCheckFuncWrapper, }) if err != nil { t.Fatal(err) } defer deferFunc() tc := testpb.NewTestServiceClient(cc) r.UpdateState(resolver.State{ Addresses: []resolver.Address{{Addr: lis.Addr().String()}}, ServiceConfig: parseCfg(`{ "healthCheckConfig": { "serviceName": "foo" } }`)}) // make some rpcs to make sure connection is working. if err := verifyResultWithDelay(func() (bool, error) { if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); err != nil { return false, fmt.Errorf("TestService/EmptyCall(_, _) = _, %v, want _, ", err) } return true, nil }); err != nil { t.Fatal(err) } select { case <-hcExitChan: t.Fatal("Health check function has exited, which is not expected.") default: } // trigger addrConn teardown cc.Close() select { case <-hcExitChan: case <-time.After(5 * time.Second): select { case <-hcEnterChan: default: t.Fatal("Health check function has not entered after 5s.") } t.Fatal("Health check function has not exited after 5s.") } } // This test is to test the logic in the createTransport after the health check function returns which // closes the skipReset channel(since it has not been closed inside health check func) to unblock // onGoAway/onClose goroutine. 
func (s) TestHealthCheckWithoutSetConnectivityStateCalledAddrConnShutDown(t *testing.T) { hcEnterChan, hcExitChan, testHealthCheckFuncWrapper := setupHealthCheckWrapper() _, lis, ts, deferFunc, err := setupServer(&svrConfig{ specialWatchFunc: func(s *testHealthServer, in *healthpb.HealthCheckRequest, stream healthgrpc.Health_WatchServer) error { if in.Service != "delay" { return status.Error(codes.FailedPrecondition, "this special Watch function only handles request with service name to be \"delay\"") } // Do nothing to mock a delay of health check response from server side. // This case is to help with the test that covers the condition that setConnectivityState is not // called inside HealthCheckFunc before the func returns. select { case <-stream.Context().Done(): case <-time.After(5 * time.Second): } return nil }, }) defer deferFunc() if err != nil { t.Fatal(err) } ts.SetServingStatus("delay", healthpb.HealthCheckResponse_SERVING) _, r, deferFunc, err := setupClient(&clientConfig{ balancerName: "round_robin", testHealthCheckFuncWrapper: testHealthCheckFuncWrapper, }) if err != nil { t.Fatal(err) } defer deferFunc() // The serviceName "delay" is specially handled at server side, where response will not be sent // back to client immediately upon receiving the request (client should receive no response until // test ends). sc := parseCfg(`{ "healthCheckConfig": { "serviceName": "delay" } }`) r.UpdateState(resolver.State{ Addresses: []resolver.Address{{Addr: lis.Addr().String()}}, ServiceConfig: sc, }) select { case <-hcExitChan: t.Fatal("Health check function has exited, which is not expected.") default: } select { case <-hcEnterChan: case <-time.After(5 * time.Second): t.Fatal("Health check function has not been invoked after 5s.") } // trigger teardown of the ac, ac in SHUTDOWN state r.UpdateState(resolver.State{Addresses: []resolver.Address{}, ServiceConfig: sc}) // The health check func should exit without calling the setConnectivityState func, as server hasn't sent // any response. select { case <-hcExitChan: case <-time.After(5 * time.Second): t.Fatal("Health check function has not exited after 5s.") } // The deferred leakcheck will check whether there's leaked goroutine, which is an indication // whether we closes the skipReset channel to unblock onGoAway/onClose goroutine. } // This test is to test the logic in the createTransport after the health check function returns which // closes the allowedToReset channel(since it has not been closed inside health check func) to unblock // onGoAway/onClose goroutine. func (s) TestHealthCheckWithoutSetConnectivityStateCalled(t *testing.T) { hcEnterChan, hcExitChan, testHealthCheckFuncWrapper := setupHealthCheckWrapper() s, lis, ts, deferFunc, err := setupServer(&svrConfig{ specialWatchFunc: func(s *testHealthServer, in *healthpb.HealthCheckRequest, stream healthgrpc.Health_WatchServer) error { if in.Service != "delay" { return status.Error(codes.FailedPrecondition, "this special Watch function only handles request with service name to be \"delay\"") } // Do nothing to mock a delay of health check response from server side. // This case is to help with the test that covers the condition that setConnectivityState is not // called inside HealthCheckFunc before the func returns. 
select { case <-stream.Context().Done(): case <-time.After(5 * time.Second): } return nil }, }) defer deferFunc() if err != nil { t.Fatal(err) } ts.SetServingStatus("delay", healthpb.HealthCheckResponse_SERVING) _, r, deferFunc, err := setupClient(&clientConfig{ balancerName: "round_robin", testHealthCheckFuncWrapper: testHealthCheckFuncWrapper, }) if err != nil { t.Fatal(err) } defer deferFunc() // The serviceName "delay" is specially handled at server side, where response will not be sent // back to client immediately upon receiving the request (client should receive no response until // test ends). r.UpdateState(resolver.State{ Addresses: []resolver.Address{{Addr: lis.Addr().String()}}, ServiceConfig: parseCfg(`{ "healthCheckConfig": { "serviceName": "delay" } }`)}) select { case <-hcExitChan: t.Fatal("Health check function has exited, which is not expected.") default: } select { case <-hcEnterChan: case <-time.After(5 * time.Second): t.Fatal("Health check function has not been invoked after 5s.") } // trigger transport being closed s.Stop() // The health check func should exit without calling the setConnectivityState func, as server hasn't sent // any response. select { case <-hcExitChan: case <-time.After(5 * time.Second): t.Fatal("Health check function has not exited after 5s.") } // The deferred leakcheck will check whether there's leaked goroutine, which is an indication // whether we closes the allowedToReset channel to unblock onGoAway/onClose goroutine. } func testHealthCheckDisableWithDialOption(t *testing.T, addr string) { hcEnterChan, _, testHealthCheckFuncWrapper := setupHealthCheckWrapper() cc, r, deferFunc, err := setupClient(&clientConfig{ balancerName: "round_robin", testHealthCheckFuncWrapper: testHealthCheckFuncWrapper, extraDialOption: []grpc.DialOption{grpc.WithDisableHealthCheck()}, }) if err != nil { t.Fatal(err) } defer deferFunc() tc := testpb.NewTestServiceClient(cc) r.UpdateState(resolver.State{ Addresses: []resolver.Address{{Addr: addr}}, ServiceConfig: parseCfg(`{ "healthCheckConfig": { "serviceName": "foo" } }`)}) // send some rpcs to make sure transport has been created and is ready for use. if err := verifyResultWithDelay(func() (bool, error) { if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); err != nil { return false, fmt.Errorf("TestService/EmptyCall(_, _) = _, %v, want _, ", err) } return true, nil }); err != nil { t.Fatal(err) } select { case <-hcEnterChan: t.Fatal("Health check function has exited, which is not expected.") default: } } func testHealthCheckDisableWithBalancer(t *testing.T, addr string) { hcEnterChan, _, testHealthCheckFuncWrapper := setupHealthCheckWrapper() cc, r, deferFunc, err := setupClient(&clientConfig{ balancerName: "pick_first", testHealthCheckFuncWrapper: testHealthCheckFuncWrapper, }) if err != nil { t.Fatal(err) } defer deferFunc() tc := testpb.NewTestServiceClient(cc) r.UpdateState(resolver.State{ Addresses: []resolver.Address{{Addr: addr}}, ServiceConfig: parseCfg(`{ "healthCheckConfig": { "serviceName": "foo" } }`)}) // send some rpcs to make sure transport has been created and is ready for use. 
if err := verifyResultWithDelay(func() (bool, error) { if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); err != nil { return false, fmt.Errorf("TestService/EmptyCall(_, _) = _, %v, want _, ", err) } return true, nil }); err != nil { t.Fatal(err) } select { case <-hcEnterChan: t.Fatal("Health check function has started, which is not expected.") default: } } func testHealthCheckDisableWithServiceConfig(t *testing.T, addr string) { hcEnterChan, _, testHealthCheckFuncWrapper := setupHealthCheckWrapper() cc, r, deferFunc, err := setupClient(&clientConfig{ balancerName: "round_robin", testHealthCheckFuncWrapper: testHealthCheckFuncWrapper, }) if err != nil { t.Fatal(err) } defer deferFunc() tc := testpb.NewTestServiceClient(cc) r.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: addr}}}) // send some rpcs to make sure transport has been created and is ready for use. if err := verifyResultWithDelay(func() (bool, error) { if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); err != nil { return false, fmt.Errorf("TestService/EmptyCall(_, _) = _, %v, want _, ", err) } return true, nil }); err != nil { t.Fatal(err) } select { case <-hcEnterChan: t.Fatal("Health check function has started, which is not expected.") default: } } func (s) TestHealthCheckDisable(t *testing.T) { _, lis, ts, deferFunc, err := setupServer(&svrConfig{}) defer deferFunc() if err != nil { t.Fatal(err) } ts.SetServingStatus("foo", healthpb.HealthCheckResponse_SERVING) // test client side disabling configuration. testHealthCheckDisableWithDialOption(t, lis.Addr().String()) testHealthCheckDisableWithBalancer(t, lis.Addr().String()) testHealthCheckDisableWithServiceConfig(t, lis.Addr().String()) } func (s) TestHealthCheckChannelzCountingCallSuccess(t *testing.T) { _, lis, _, deferFunc, err := setupServer(&svrConfig{ specialWatchFunc: func(s *testHealthServer, in *healthpb.HealthCheckRequest, stream healthgrpc.Health_WatchServer) error { if in.Service != "channelzSuccess" { return status.Error(codes.FailedPrecondition, "this special Watch function only handles request with service name to be \"channelzSuccess\"") } return status.Error(codes.OK, "fake success") }, }) defer deferFunc() if err != nil { t.Fatal(err) } _, r, deferFunc, err := setupClient(&clientConfig{balancerName: "round_robin"}) if err != nil { t.Fatal(err) } defer deferFunc() r.UpdateState(resolver.State{ Addresses: []resolver.Address{{Addr: lis.Addr().String()}}, ServiceConfig: parseCfg(`{ "healthCheckConfig": { "serviceName": "channelzSuccess" } }`)}) if err := verifyResultWithDelay(func() (bool, error) { cm, _ := channelz.GetTopChannels(0, 0) if len(cm) == 0 { return false, errors.New("channelz.GetTopChannels return 0 top channel") } if len(cm[0].SubChans) == 0 { return false, errors.New("there is 0 subchannel") } var id int64 for k := range cm[0].SubChans { id = k break } scm := channelz.GetSubChannel(id) if scm == nil || scm.ChannelData == nil { return false, errors.New("nil subchannel metric or nil subchannel metric ChannelData returned") } // exponential backoff retry may result in more than one health check call. 
if scm.ChannelData.CallsStarted > 0 && scm.ChannelData.CallsSucceeded > 0 && scm.ChannelData.CallsFailed == 0 { return true, nil } return false, fmt.Errorf("got %d CallsStarted, %d CallsSucceeded, want >0 >0", scm.ChannelData.CallsStarted, scm.ChannelData.CallsSucceeded) }); err != nil { t.Fatal(err) } } func (s) TestHealthCheckChannelzCountingCallFailure(t *testing.T) { _, lis, _, deferFunc, err := setupServer(&svrConfig{ specialWatchFunc: func(s *testHealthServer, in *healthpb.HealthCheckRequest, stream healthgrpc.Health_WatchServer) error { if in.Service != "channelzFailure" { return status.Error(codes.FailedPrecondition, "this special Watch function only handles request with service name to be \"channelzFailure\"") } return status.Error(codes.Internal, "fake failure") }, }) if err != nil { t.Fatal(err) } defer deferFunc() _, r, deferFunc, err := setupClient(&clientConfig{balancerName: "round_robin"}) if err != nil { t.Fatal(err) } defer deferFunc() r.UpdateState(resolver.State{ Addresses: []resolver.Address{{Addr: lis.Addr().String()}}, ServiceConfig: parseCfg(`{ "healthCheckConfig": { "serviceName": "channelzFailure" } }`)}) if err := verifyResultWithDelay(func() (bool, error) { cm, _ := channelz.GetTopChannels(0, 0) if len(cm) == 0 { return false, errors.New("channelz.GetTopChannels return 0 top channel") } if len(cm[0].SubChans) == 0 { return false, errors.New("there is 0 subchannel") } var id int64 for k := range cm[0].SubChans { id = k break } scm := channelz.GetSubChannel(id) if scm == nil || scm.ChannelData == nil { return false, errors.New("nil subchannel metric or nil subchannel metric ChannelData returned") } // exponential backoff retry may result in more than one health check call. if scm.ChannelData.CallsStarted > 0 && scm.ChannelData.CallsFailed > 0 && scm.ChannelData.CallsSucceeded == 0 { return true, nil } return false, fmt.Errorf("got %d CallsStarted, %d CallsFailed, want >0, >0", scm.ChannelData.CallsStarted, scm.ChannelData.CallsFailed) }); err != nil { t.Fatal(err) } } grpc-go-1.22.1/test/race.go000066400000000000000000000012301351635773100153620ustar00rootroot00000000000000// +build race /* * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package test func init() { raceMode = true } grpc-go-1.22.1/test/rawConnWrapper.go000066400000000000000000000141771351635773100174360ustar00rootroot00000000000000/* * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package test import ( "bytes" "fmt" "io" "net" "strings" "sync" "time" "golang.org/x/net/http2" "golang.org/x/net/http2/hpack" ) type listenerWrapper struct { net.Listener mu sync.Mutex rcw *rawConnWrapper } func listenWithConnControl(network, address string) (net.Listener, error) { l, err := net.Listen(network, address) if err != nil { return nil, err } return &listenerWrapper{Listener: l}, nil } // Accept blocks until Dial is called, then returns a net.Conn for the server // half of the connection. func (l *listenerWrapper) Accept() (net.Conn, error) { c, err := l.Listener.Accept() if err != nil { return nil, err } l.mu.Lock() l.rcw = newRawConnWrapperFromConn(c) l.mu.Unlock() return c, nil } func (l *listenerWrapper) getLastConn() *rawConnWrapper { l.mu.Lock() defer l.mu.Unlock() return l.rcw } type dialerWrapper struct { c net.Conn rcw *rawConnWrapper } func (d *dialerWrapper) dialer(target string, t time.Duration) (net.Conn, error) { c, err := net.DialTimeout("tcp", target, t) d.c = c d.rcw = newRawConnWrapperFromConn(c) return c, err } func (d *dialerWrapper) getRawConnWrapper() *rawConnWrapper { return d.rcw } type rawConnWrapper struct { cc io.ReadWriteCloser fr *http2.Framer // writing headers: headerBuf bytes.Buffer hpackEnc *hpack.Encoder // reading frames: frc chan http2.Frame frErrc chan error } func newRawConnWrapperFromConn(cc io.ReadWriteCloser) *rawConnWrapper { rcw := &rawConnWrapper{ cc: cc, frc: make(chan http2.Frame, 1), frErrc: make(chan error, 1), } rcw.hpackEnc = hpack.NewEncoder(&rcw.headerBuf) rcw.fr = http2.NewFramer(cc, cc) rcw.fr.ReadMetaHeaders = hpack.NewDecoder(4096 /*initialHeaderTableSize*/, nil) return rcw } func (rcw *rawConnWrapper) Close() error { return rcw.cc.Close() } func (rcw *rawConnWrapper) encodeHeaderField(k, v string) error { err := rcw.hpackEnc.WriteField(hpack.HeaderField{Name: k, Value: v}) if err != nil { return fmt.Errorf("HPACK encoding error for %q/%q: %v", k, v, err) } return nil } // encodeRawHeader is for usage on both client and server side to construct header based on the input // key, value pairs. func (rcw *rawConnWrapper) encodeRawHeader(headers ...string) []byte { if len(headers)%2 == 1 { panic("odd number of kv args") } rcw.headerBuf.Reset() pseudoCount := map[string]int{} var keys []string vals := map[string][]string{} for len(headers) > 0 { k, v := headers[0], headers[1] headers = headers[2:] if _, ok := vals[k]; !ok { keys = append(keys, k) } if strings.HasPrefix(k, ":") { pseudoCount[k]++ if pseudoCount[k] == 1 { vals[k] = []string{v} } else { // Allows testing of invalid headers w/ dup pseudo fields. vals[k] = append(vals[k], v) } } else { vals[k] = append(vals[k], v) } } for _, k := range keys { for _, v := range vals[k] { rcw.encodeHeaderField(k, v) } } return rcw.headerBuf.Bytes() } // encodeHeader is for usage on client side to write request header. // // encodeHeader encodes headers and returns their HPACK bytes. headers // must contain an even number of key/value pairs. There may be // multiple pairs for keys (e.g. "cookie"). The :method, :path, and // :scheme headers default to GET, / and https. func (rcw *rawConnWrapper) encodeHeader(headers ...string) []byte { if len(headers)%2 == 1 { panic("odd number of kv args") } rcw.headerBuf.Reset() if len(headers) == 0 { // Fast path, mostly for benchmarks, so test code doesn't pollute // profiles when we're looking to improve server allocations. 
rcw.encodeHeaderField(":method", "GET") rcw.encodeHeaderField(":path", "/") rcw.encodeHeaderField(":scheme", "https") return rcw.headerBuf.Bytes() } if len(headers) == 2 && headers[0] == ":method" { // Another fast path for benchmarks. rcw.encodeHeaderField(":method", headers[1]) rcw.encodeHeaderField(":path", "/") rcw.encodeHeaderField(":scheme", "https") return rcw.headerBuf.Bytes() } pseudoCount := map[string]int{} keys := []string{":method", ":path", ":scheme"} vals := map[string][]string{ ":method": {"GET"}, ":path": {"/"}, ":scheme": {"https"}, } for len(headers) > 0 { k, v := headers[0], headers[1] headers = headers[2:] if _, ok := vals[k]; !ok { keys = append(keys, k) } if strings.HasPrefix(k, ":") { pseudoCount[k]++ if pseudoCount[k] == 1 { vals[k] = []string{v} } else { // Allows testing of invalid headers w/ dup pseudo fields. vals[k] = append(vals[k], v) } } else { vals[k] = append(vals[k], v) } } for _, k := range keys { for _, v := range vals[k] { rcw.encodeHeaderField(k, v) } } return rcw.headerBuf.Bytes() } func (rcw *rawConnWrapper) writeHeaders(p http2.HeadersFrameParam) error { if err := rcw.fr.WriteHeaders(p); err != nil { return fmt.Errorf("error writing HEADERS: %v", err) } return nil } func (rcw *rawConnWrapper) writeRSTStream(streamID uint32, code http2.ErrCode) error { if err := rcw.fr.WriteRSTStream(streamID, code); err != nil { return fmt.Errorf("error writing RST_STREAM: %v", err) } return nil } func (rcw *rawConnWrapper) writeGoAway(maxStreamID uint32, code http2.ErrCode, debugData []byte) error { if err := rcw.fr.WriteGoAway(maxStreamID, code, debugData); err != nil { return fmt.Errorf("error writing GoAway: %v", err) } return nil } func (rcw *rawConnWrapper) writeRawFrame(t http2.FrameType, flags http2.Flags, streamID uint32, payload []byte) error { if err := rcw.fr.WriteRawFrame(t, flags, streamID, payload); err != nil { return fmt.Errorf("error writing Raw Frame: %v", err) } return nil } grpc-go-1.22.1/test/retry_test.go000066400000000000000000000405501351635773100166640ustar00rootroot00000000000000/* * * Copyright 2018 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package test import ( "context" "fmt" "io" "os" "reflect" "strconv" "strings" "testing" "time" "github.com/golang/protobuf/proto" "google.golang.org/grpc" "google.golang.org/grpc/codes" "google.golang.org/grpc/internal/envconfig" "google.golang.org/grpc/metadata" "google.golang.org/grpc/status" testpb "google.golang.org/grpc/test/grpc_testing" ) func enableRetry() func() { old := envconfig.Retry envconfig.Retry = true return func() { envconfig.Retry = old } } func (s) TestRetryUnary(t *testing.T) { defer enableRetry()() i := -1 ss := &stubServer{ emptyCall: func(context.Context, *testpb.Empty) (*testpb.Empty, error) { i++ switch i { case 0, 2, 5: return &testpb.Empty{}, nil case 6, 8, 11: return nil, status.New(codes.Internal, "non-retryable error").Err() } return nil, status.New(codes.AlreadyExists, "retryable error").Err() }, } if err := ss.Start([]grpc.ServerOption{}); err != nil { t.Fatalf("Error starting endpoint server: %v", err) } defer ss.Stop() ss.newServiceConfig(`{ "methodConfig": [{ "name": [{"service": "grpc.testing.TestService"}], "waitForReady": true, "retryPolicy": { "MaxAttempts": 4, "InitialBackoff": ".01s", "MaxBackoff": ".01s", "BackoffMultiplier": 1.0, "RetryableStatusCodes": [ "ALREADY_EXISTS" ] } }]}`) ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second) for { if ctx.Err() != nil { t.Fatalf("Timed out waiting for service config update") } if ss.cc.GetMethodConfig("/grpc.testing.TestService/EmptyCall").WaitForReady != nil { break } time.Sleep(time.Millisecond) } cancel() testCases := []struct { code codes.Code count int }{ {codes.OK, 0}, {codes.OK, 2}, {codes.OK, 5}, {codes.Internal, 6}, {codes.Internal, 8}, {codes.Internal, 11}, {codes.AlreadyExists, 15}, } for _, tc := range testCases { ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) _, err := ss.client.EmptyCall(ctx, &testpb.Empty{}) cancel() if status.Code(err) != tc.code { t.Fatalf("EmptyCall(_, _) = _, %v; want _, ", err, tc.code) } if i != tc.count { t.Fatalf("i = %v; want %v", i, tc.count) } } } func (s) TestRetryDisabledByDefault(t *testing.T) { if strings.EqualFold(os.Getenv("GRPC_GO_RETRY"), "on") { return } i := -1 ss := &stubServer{ emptyCall: func(context.Context, *testpb.Empty) (*testpb.Empty, error) { i++ switch i { case 0: return nil, status.New(codes.AlreadyExists, "retryable error").Err() } return &testpb.Empty{}, nil }, } if err := ss.Start([]grpc.ServerOption{}); err != nil { t.Fatalf("Error starting endpoint server: %v", err) } defer ss.Stop() ss.newServiceConfig(`{ "methodConfig": [{ "name": [{"service": "grpc.testing.TestService"}], "waitForReady": true, "retryPolicy": { "MaxAttempts": 4, "InitialBackoff": ".01s", "MaxBackoff": ".01s", "BackoffMultiplier": 1.0, "RetryableStatusCodes": [ "ALREADY_EXISTS" ] } }]}`) ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second) for { if ctx.Err() != nil { t.Fatalf("Timed out waiting for service config update") } if ss.cc.GetMethodConfig("/grpc.testing.TestService/EmptyCall").WaitForReady != nil { break } time.Sleep(time.Millisecond) } cancel() testCases := []struct { code codes.Code count int }{ {codes.AlreadyExists, 0}, } for _, tc := range testCases { ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) _, err := ss.client.EmptyCall(ctx, &testpb.Empty{}) cancel() if status.Code(err) != tc.code { t.Fatalf("EmptyCall(_, _) = _, %v; want _, ", err, tc.code) } if i != tc.count { t.Fatalf("i = %v; want %v", i, tc.count) } } } func (s) TestRetryThrottling(t *testing.T) { 
defer enableRetry()() i := -1 ss := &stubServer{ emptyCall: func(context.Context, *testpb.Empty) (*testpb.Empty, error) { i++ switch i { case 0, 3, 6, 10, 11, 12, 13, 14, 16, 18: return &testpb.Empty{}, nil } return nil, status.New(codes.Unavailable, "retryable error").Err() }, } if err := ss.Start([]grpc.ServerOption{}); err != nil { t.Fatalf("Error starting endpoint server: %v", err) } defer ss.Stop() ss.newServiceConfig(`{ "methodConfig": [{ "name": [{"service": "grpc.testing.TestService"}], "waitForReady": true, "retryPolicy": { "MaxAttempts": 4, "InitialBackoff": ".01s", "MaxBackoff": ".01s", "BackoffMultiplier": 1.0, "RetryableStatusCodes": [ "UNAVAILABLE" ] } }], "retryThrottling": { "maxTokens": 10, "tokenRatio": 0.5 } }`) ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second) for { if ctx.Err() != nil { t.Fatalf("Timed out waiting for service config update") } if ss.cc.GetMethodConfig("/grpc.testing.TestService/EmptyCall").WaitForReady != nil { break } time.Sleep(time.Millisecond) } cancel() testCases := []struct { code codes.Code count int }{ {codes.OK, 0}, // tokens = 10 {codes.OK, 3}, // tokens = 8.5 (10 - 2 failures + 0.5 success) {codes.OK, 6}, // tokens = 6 {codes.Unavailable, 8}, // tokens = 5 -- first attempt is retried; second aborted. {codes.Unavailable, 9}, // tokens = 4 {codes.OK, 10}, // tokens = 4.5 {codes.OK, 11}, // tokens = 5 {codes.OK, 12}, // tokens = 5.5 {codes.OK, 13}, // tokens = 6 {codes.OK, 14}, // tokens = 6.5 {codes.OK, 16}, // tokens = 5.5 {codes.Unavailable, 17}, // tokens = 4.5 } for _, tc := range testCases { ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) _, err := ss.client.EmptyCall(ctx, &testpb.Empty{}) cancel() if status.Code(err) != tc.code { t.Errorf("EmptyCall(_, _) = _, %v; want _, ", err, tc.code) } if i != tc.count { t.Errorf("i = %v; want %v", i, tc.count) } } } func (s) TestRetryStreaming(t *testing.T) { defer enableRetry()() req := func(b byte) *testpb.StreamingOutputCallRequest { return &testpb.StreamingOutputCallRequest{Payload: &testpb.Payload{Body: []byte{b}}} } res := func(b byte) *testpb.StreamingOutputCallResponse { return &testpb.StreamingOutputCallResponse{Payload: &testpb.Payload{Body: []byte{b}}} } largePayload, _ := newPayload(testpb.PayloadType_COMPRESSABLE, 500) type serverOp func(stream testpb.TestService_FullDuplexCallServer) error type clientOp func(stream testpb.TestService_FullDuplexCallClient) error // Server Operations sAttempts := func(n int) serverOp { return func(stream testpb.TestService_FullDuplexCallServer) error { const key = "grpc-previous-rpc-attempts" md, ok := metadata.FromIncomingContext(stream.Context()) if !ok { return status.Errorf(codes.Internal, "server: no header metadata received") } if got := md[key]; len(got) != 1 || got[0] != strconv.Itoa(n) { return status.Errorf(codes.Internal, "server: metadata = %v; want ", md, key, n) } return nil } } sReq := func(b byte) serverOp { return func(stream testpb.TestService_FullDuplexCallServer) error { want := req(b) if got, err := stream.Recv(); err != nil || !proto.Equal(got, want) { return status.Errorf(codes.Internal, "server: Recv() = %v, %v; want %v, ", got, err, want) } return nil } } sReqPayload := func(p *testpb.Payload) serverOp { return func(stream testpb.TestService_FullDuplexCallServer) error { want := &testpb.StreamingOutputCallRequest{Payload: p} if got, err := stream.Recv(); err != nil || !proto.Equal(got, want) { return status.Errorf(codes.Internal, "server: Recv() = %v, %v; want %v, ", got, err, want) 
} return nil } } sRes := func(b byte) serverOp { return func(stream testpb.TestService_FullDuplexCallServer) error { msg := res(b) if err := stream.Send(msg); err != nil { return status.Errorf(codes.Internal, "server: Send(%v) = %v; want ", msg, err) } return nil } } sErr := func(c codes.Code) serverOp { return func(stream testpb.TestService_FullDuplexCallServer) error { return status.New(c, "").Err() } } sCloseSend := func() serverOp { return func(stream testpb.TestService_FullDuplexCallServer) error { if msg, err := stream.Recv(); msg != nil || err != io.EOF { return status.Errorf(codes.Internal, "server: Recv() = %v, %v; want , io.EOF", msg, err) } return nil } } sPushback := func(s string) serverOp { return func(stream testpb.TestService_FullDuplexCallServer) error { stream.SetTrailer(metadata.MD{"grpc-retry-pushback-ms": []string{s}}) return nil } } // Client Operations cReq := func(b byte) clientOp { return func(stream testpb.TestService_FullDuplexCallClient) error { msg := req(b) if err := stream.Send(msg); err != nil { return fmt.Errorf("client: Send(%v) = %v; want ", msg, err) } return nil } } cReqPayload := func(p *testpb.Payload) clientOp { return func(stream testpb.TestService_FullDuplexCallClient) error { msg := &testpb.StreamingOutputCallRequest{Payload: p} if err := stream.Send(msg); err != nil { return fmt.Errorf("client: Send(%v) = %v; want ", msg, err) } return nil } } cRes := func(b byte) clientOp { return func(stream testpb.TestService_FullDuplexCallClient) error { want := res(b) if got, err := stream.Recv(); err != nil || !proto.Equal(got, want) { return fmt.Errorf("client: Recv() = %v, %v; want %v, ", got, err, want) } return nil } } cErr := func(c codes.Code) clientOp { return func(stream testpb.TestService_FullDuplexCallClient) error { want := status.New(c, "").Err() if c == codes.OK { want = io.EOF } res, err := stream.Recv() if res != nil || ((err == nil) != (want == nil)) || (want != nil && !reflect.DeepEqual(err, want)) { return fmt.Errorf("client: Recv() = %v, %v; want , %v", res, err, want) } return nil } } cCloseSend := func() clientOp { return func(stream testpb.TestService_FullDuplexCallClient) error { if err := stream.CloseSend(); err != nil { return fmt.Errorf("client: CloseSend() = %v; want ", err) } return nil } } var curTime time.Time cGetTime := func() clientOp { return func(_ testpb.TestService_FullDuplexCallClient) error { curTime = time.Now() return nil } } cCheckElapsed := func(d time.Duration) clientOp { return func(_ testpb.TestService_FullDuplexCallClient) error { if elapsed := time.Since(curTime); elapsed < d { return fmt.Errorf("elapsed time: %v; want >= %v", elapsed, d) } return nil } } cHdr := func() clientOp { return func(stream testpb.TestService_FullDuplexCallClient) error { _, err := stream.Header() return err } } cCtx := func() clientOp { return func(stream testpb.TestService_FullDuplexCallClient) error { stream.Context() return nil } } testCases := []struct { desc string serverOps []serverOp clientOps []clientOp }{{ desc: "Non-retryable error code", serverOps: []serverOp{sReq(1), sErr(codes.Internal)}, clientOps: []clientOp{cReq(1), cErr(codes.Internal)}, }, { desc: "One retry necessary", serverOps: []serverOp{sReq(1), sErr(codes.Unavailable), sReq(1), sAttempts(1), sRes(1)}, clientOps: []clientOp{cReq(1), cRes(1), cErr(codes.OK)}, }, { desc: "Exceed max attempts (4); check attempts header on server", serverOps: []serverOp{ sReq(1), sErr(codes.Unavailable), sReq(1), sAttempts(1), sErr(codes.Unavailable), sAttempts(2), sReq(1), 
sErr(codes.Unavailable), sAttempts(3), sReq(1), sErr(codes.Unavailable), }, clientOps: []clientOp{cReq(1), cErr(codes.Unavailable)}, }, { desc: "Multiple requests", serverOps: []serverOp{ sReq(1), sReq(2), sErr(codes.Unavailable), sReq(1), sReq(2), sRes(5), }, clientOps: []clientOp{cReq(1), cReq(2), cRes(5), cErr(codes.OK)}, }, { desc: "Multiple successive requests", serverOps: []serverOp{ sReq(1), sErr(codes.Unavailable), sReq(1), sReq(2), sErr(codes.Unavailable), sReq(1), sReq(2), sReq(3), sRes(5), }, clientOps: []clientOp{cReq(1), cReq(2), cReq(3), cRes(5), cErr(codes.OK)}, }, { desc: "No retry after receiving", serverOps: []serverOp{ sReq(1), sErr(codes.Unavailable), sReq(1), sRes(3), sErr(codes.Unavailable), }, clientOps: []clientOp{cReq(1), cRes(3), cErr(codes.Unavailable)}, }, { desc: "No retry after header", serverOps: []serverOp{sReq(1), sErr(codes.Unavailable)}, clientOps: []clientOp{cReq(1), cHdr(), cErr(codes.Unavailable)}, }, { desc: "No retry after context", serverOps: []serverOp{sReq(1), sErr(codes.Unavailable)}, clientOps: []clientOp{cReq(1), cCtx(), cErr(codes.Unavailable)}, }, { desc: "Replaying close send", serverOps: []serverOp{ sReq(1), sReq(2), sCloseSend(), sErr(codes.Unavailable), sReq(1), sReq(2), sCloseSend(), sRes(1), sRes(3), sRes(5), }, clientOps: []clientOp{cReq(1), cReq(2), cCloseSend(), cRes(1), cRes(3), cRes(5), cErr(codes.OK)}, }, { desc: "Negative server pushback - no retry", serverOps: []serverOp{sReq(1), sPushback("-1"), sErr(codes.Unavailable)}, clientOps: []clientOp{cReq(1), cErr(codes.Unavailable)}, }, { desc: "Non-numeric server pushback - no retry", serverOps: []serverOp{sReq(1), sPushback("xxx"), sErr(codes.Unavailable)}, clientOps: []clientOp{cReq(1), cErr(codes.Unavailable)}, }, { desc: "Multiple server pushback values - no retry", serverOps: []serverOp{sReq(1), sPushback("100"), sPushback("10"), sErr(codes.Unavailable)}, clientOps: []clientOp{cReq(1), cErr(codes.Unavailable)}, }, { desc: "1s server pushback - delayed retry", serverOps: []serverOp{sReq(1), sPushback("1000"), sErr(codes.Unavailable), sReq(1), sRes(2)}, clientOps: []clientOp{cGetTime(), cReq(1), cRes(2), cCheckElapsed(time.Second), cErr(codes.OK)}, }, { desc: "Overflowing buffer - no retry", serverOps: []serverOp{sReqPayload(largePayload), sErr(codes.Unavailable)}, clientOps: []clientOp{cReqPayload(largePayload), cErr(codes.Unavailable)}, }} var serverOpIter int var serverOps []serverOp ss := &stubServer{ fullDuplexCall: func(stream testpb.TestService_FullDuplexCallServer) error { for serverOpIter < len(serverOps) { op := serverOps[serverOpIter] serverOpIter++ if err := op(stream); err != nil { return err } } return nil }, } if err := ss.Start([]grpc.ServerOption{}, grpc.WithDefaultCallOptions(grpc.MaxRetryRPCBufferSize(200))); err != nil { t.Fatalf("Error starting endpoint server: %v", err) } defer ss.Stop() ss.newServiceConfig(`{ "methodConfig": [{ "name": [{"service": "grpc.testing.TestService"}], "waitForReady": true, "retryPolicy": { "MaxAttempts": 4, "InitialBackoff": ".01s", "MaxBackoff": ".01s", "BackoffMultiplier": 1.0, "RetryableStatusCodes": [ "UNAVAILABLE" ] } }]}`) ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second) for { if ctx.Err() != nil { t.Fatalf("Timed out waiting for service config update") } if ss.cc.GetMethodConfig("/grpc.testing.TestService/FullDuplexCall").WaitForReady != nil { break } time.Sleep(time.Millisecond) } cancel() for _, tc := range testCases { func() { serverOpIter = 0 serverOps = tc.serverOps ctx, cancel := 
context.WithTimeout(context.Background(), 10*time.Second) defer cancel() stream, err := ss.client.FullDuplexCall(ctx) if err != nil { t.Fatalf("%v: Error while creating stream: %v", tc.desc, err) } for _, op := range tc.clientOps { if err := op(stream); err != nil { t.Errorf("%v: %v", tc.desc, err) break } } if serverOpIter != len(serverOps) { t.Errorf("%v: serverOpIter = %v; want %v", tc.desc, serverOpIter, len(serverOps)) } }() } } grpc-go-1.22.1/test/servertester.go000066400000000000000000000151141351635773100172130ustar00rootroot00000000000000/* * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package test import ( "bytes" "errors" "io" "strings" "testing" "time" "golang.org/x/net/http2" "golang.org/x/net/http2/hpack" ) // This is a subset of http2's serverTester type. // // serverTester wraps a io.ReadWriter (acting like the underlying // network connection) and provides utility methods to read and write // http2 frames. // // NOTE(bradfitz): this could eventually be exported somewhere. Others // have asked for it too. For now I'm still experimenting with the // API and don't feel like maintaining a stable testing API. type serverTester struct { cc io.ReadWriteCloser // client conn t testing.TB fr *http2.Framer // writing headers: headerBuf bytes.Buffer hpackEnc *hpack.Encoder // reading frames: frc chan http2.Frame frErrc chan error } func newServerTesterFromConn(t testing.TB, cc io.ReadWriteCloser) *serverTester { st := &serverTester{ t: t, cc: cc, frc: make(chan http2.Frame, 1), frErrc: make(chan error, 1), } st.hpackEnc = hpack.NewEncoder(&st.headerBuf) st.fr = http2.NewFramer(cc, cc) st.fr.ReadMetaHeaders = hpack.NewDecoder(4096 /*initialHeaderTableSize*/, nil) return st } func (st *serverTester) readFrame() (http2.Frame, error) { go func() { fr, err := st.fr.ReadFrame() if err != nil { st.frErrc <- err } else { st.frc <- fr } }() t := time.NewTimer(2 * time.Second) defer t.Stop() select { case f := <-st.frc: return f, nil case err := <-st.frErrc: return nil, err case <-t.C: return nil, errors.New("timeout waiting for frame") } } // greet initiates the client's HTTP/2 connection into a state where // frames may be sent. func (st *serverTester) greet() { st.writePreface() st.writeInitialSettings() st.wantSettings() st.writeSettingsAck() for { f, err := st.readFrame() if err != nil { st.t.Fatal(err) } switch f := f.(type) { case *http2.WindowUpdateFrame: // grpc's transport/http2_server sends this // before the settings ack. The Go http2 // server uses a setting instead. 
		case *http2.SettingsFrame:
			if f.IsAck() {
				return
			}
			st.t.Fatalf("during greet, got non-ACK settings frame")
		default:
			st.t.Fatalf("during greet, unexpected frame type %T", f)
		}
	}
}

func (st *serverTester) writePreface() {
	n, err := st.cc.Write([]byte(http2.ClientPreface))
	if err != nil {
		st.t.Fatalf("Error writing client preface: %v", err)
	}
	if n != len(http2.ClientPreface) {
		st.t.Fatalf("Writing client preface, wrote %d bytes; want %d", n, len(http2.ClientPreface))
	}
}

func (st *serverTester) writeInitialSettings() {
	if err := st.fr.WriteSettings(); err != nil {
		st.t.Fatalf("Error writing initial SETTINGS frame from client to server: %v", err)
	}
}

func (st *serverTester) writeSettingsAck() {
	if err := st.fr.WriteSettingsAck(); err != nil {
		st.t.Fatalf("Error writing ACK of server's SETTINGS: %v", err)
	}
}

func (st *serverTester) wantSettings() *http2.SettingsFrame {
	f, err := st.readFrame()
	if err != nil {
		st.t.Fatalf("Error while expecting a SETTINGS frame: %v", err)
	}
	sf, ok := f.(*http2.SettingsFrame)
	if !ok {
		st.t.Fatalf("got a %T; want *SettingsFrame", f)
	}
	return sf
}

// wait for any activity from the server
func (st *serverTester) wantAnyFrame() http2.Frame {
	f, err := st.fr.ReadFrame()
	if err != nil {
		st.t.Fatal(err)
	}
	return f
}

func (st *serverTester) encodeHeaderField(k, v string) {
	err := st.hpackEnc.WriteField(hpack.HeaderField{Name: k, Value: v})
	if err != nil {
		st.t.Fatalf("HPACK encoding error for %q/%q: %v", k, v, err)
	}
}

// encodeHeader encodes headers and returns their HPACK bytes. headers
// must contain an even number of key/value pairs. There may be
// multiple pairs for keys (e.g. "cookie"). The :method, :path, and
// :scheme headers default to GET, / and https.
func (st *serverTester) encodeHeader(headers ...string) []byte {
	if len(headers)%2 == 1 {
		panic("odd number of kv args")
	}

	st.headerBuf.Reset()

	if len(headers) == 0 {
		// Fast path, mostly for benchmarks, so test code doesn't pollute
		// profiles when we're looking to improve server allocations.
		st.encodeHeaderField(":method", "GET")
		st.encodeHeaderField(":path", "/")
		st.encodeHeaderField(":scheme", "https")
		return st.headerBuf.Bytes()
	}

	if len(headers) == 2 && headers[0] == ":method" {
		// Another fast path for benchmarks.
		st.encodeHeaderField(":method", headers[1])
		st.encodeHeaderField(":path", "/")
		st.encodeHeaderField(":scheme", "https")
		return st.headerBuf.Bytes()
	}

	pseudoCount := map[string]int{}
	keys := []string{":method", ":path", ":scheme"}
	vals := map[string][]string{
		":method": {"GET"},
		":path":   {"/"},
		":scheme": {"https"},
	}
	for len(headers) > 0 {
		k, v := headers[0], headers[1]
		headers = headers[2:]
		if _, ok := vals[k]; !ok {
			keys = append(keys, k)
		}
		if strings.HasPrefix(k, ":") {
			pseudoCount[k]++
			if pseudoCount[k] == 1 {
				vals[k] = []string{v}
			} else {
				// Allows testing of invalid headers w/ dup pseudo fields.
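				// A repeated pseudo-header is kept as an additional value
				// for the same key (appended below) rather than replacing
				// the first occurrence.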
				vals[k] = append(vals[k], v)
			}
		} else {
			vals[k] = append(vals[k], v)
		}
	}
	for _, k := range keys {
		for _, v := range vals[k] {
			st.encodeHeaderField(k, v)
		}
	}
	return st.headerBuf.Bytes()
}

func (st *serverTester) writeHeadersGRPC(streamID uint32, path string) {
	st.writeHeaders(http2.HeadersFrameParam{
		StreamID: streamID,
		BlockFragment: st.encodeHeader(
			":method", "POST",
			":path", path,
			"content-type", "application/grpc",
			"te", "trailers",
		),
		EndStream:  false,
		EndHeaders: true,
	})
}

func (st *serverTester) writeHeaders(p http2.HeadersFrameParam) {
	if err := st.fr.WriteHeaders(p); err != nil {
		st.t.Fatalf("Error writing HEADERS: %v", err)
	}
}

func (st *serverTester) writeData(streamID uint32, endStream bool, data []byte) {
	if err := st.fr.WriteData(streamID, endStream, data); err != nil {
		st.t.Fatalf("Error writing DATA: %v", err)
	}
}

func (st *serverTester) writeRSTStream(streamID uint32, code http2.ErrCode) {
	if err := st.fr.WriteRSTStream(streamID, code); err != nil {
		st.t.Fatalf("Error writing RST_STREAM: %v", err)
	}
}

grpc-go-1.22.1/test/stream_cleanup_test.go

/*
 *
 * Copyright 2019 gRPC authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 *
 */

package test

import (
	"context"
	"io"
	"testing"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	testpb "google.golang.org/grpc/test/grpc_testing"
)

func (s) TestStreamCleanup(t *testing.T) {
	const initialWindowSize uint = 70 * 1024 // Must be higher than default 64K, ignored otherwise
	const bodySize = 2 * initialWindowSize   // Something that is not going to fit in a single window
	const callRecvMsgSize uint = 1           // The maximum message size the client can receive

	ss := &stubServer{
		unaryCall: func(ctx context.Context, in *testpb.SimpleRequest) (*testpb.SimpleResponse, error) {
			return &testpb.SimpleResponse{Payload: &testpb.Payload{
				Body: make([]byte, bodySize),
			}}, nil
		},
		emptyCall: func(context.Context, *testpb.Empty) (*testpb.Empty, error) {
			return &testpb.Empty{}, nil
		},
	}
	if err := ss.Start([]grpc.ServerOption{grpc.MaxConcurrentStreams(1)}, grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(int(callRecvMsgSize))), grpc.WithInitialWindowSize(int32(initialWindowSize))); err != nil {
		t.Fatalf("Error starting endpoint server: %v", err)
	}
	defer ss.Stop()

	if _, err := ss.client.UnaryCall(context.Background(), &testpb.SimpleRequest{}); status.Code(err) != codes.ResourceExhausted {
		t.Fatalf("should fail with ResourceExhausted, message's body size: %v, maximum message size the client can receive: %v", bodySize, callRecvMsgSize)
	}
	if _, err := ss.client.EmptyCall(context.Background(), &testpb.Empty{}); err != nil {
		t.Fatalf("should succeed, err: %v", err)
	}
}

func (s) TestStreamCleanupAfterSendStatus(t *testing.T) {
	const initialWindowSize uint = 70 * 1024 // Must be higher than default 64K, ignored otherwise
	const bodySize = 2 * initialWindowSize   // Something that is not going to fit in a single window

	serverReturnedStatus := make(chan struct{})
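	// serverReturnedStatus is closed as soon as the service handler below
	// returns; the server then tries to send the stream's final status.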
	ss := &stubServer{
		fullDuplexCall: func(stream testpb.TestService_FullDuplexCallServer) error {
			defer func() {
				close(serverReturnedStatus)
			}()
			return stream.Send(&testpb.StreamingOutputCallResponse{
				Payload: &testpb.Payload{
					Body: make([]byte, bodySize),
				},
			})
		},
	}
	if err := ss.Start([]grpc.ServerOption{grpc.MaxConcurrentStreams(1)}, grpc.WithInitialWindowSize(int32(initialWindowSize))); err != nil {
		t.Fatalf("Error starting endpoint server: %v", err)
	}
	defer ss.Stop()

	// This test makes sure we don't delete the stream from the server
	// transport's activeStreams list too aggressively.

	// 1. Make a long-lived stream RPC, so the server's activeStreams list is
	// not empty.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	stream, err := ss.client.FullDuplexCall(ctx)
	if err != nil {
		t.Fatalf("FullDuplexCall= _, %v; want _, <nil>", err)
	}

	// 2. Wait for the service handler to return its status.
	//
	// This triggers the stream cleanup code, which will eventually remove
	// this stream from activeStreams.
	//
	// But the stream removal won't happen yet, because it's supposed to be
	// done after the status is sent by loopyWriter, and the status send is
	// blocked by flow control.
	<-serverReturnedStatus

	// 3. GracefulStop (besides sending a goaway) checks the number of
	// active streams.
	//
	// It will close the connection if there are no active streams. That won't
	// happen here because of the pending stream. But if there's a bug in
	// stream cleanup that causes the stream to be removed too aggressively,
	// the connection will be closed and the stream will be broken.
	gracefulStopDone := make(chan struct{})
	go func() {
		defer close(gracefulStopDone)
		ss.s.GracefulStop()
	}()

	// 4. Make sure the stream is not broken.
	if _, err := stream.Recv(); err != nil {
		t.Fatalf("stream.Recv() = _, %v, want _, <nil>", err)
	}
	if _, err := stream.Recv(); err != io.EOF {
		t.Fatalf("stream.Recv() = _, %v, want _, io.EOF", err)
	}

	timer := time.NewTimer(time.Second)
	select {
	case <-gracefulStopDone:
		timer.Stop()
	case <-timer.C:
		t.Fatalf("s.GracefulStop() didn't finish within 1 second after the last RPC")
	}
}

grpc-go-1.22.1/test/tools/
grpc-go-1.22.1/test/tools/tools.go

// +build tools

/*
 *
 * Copyright 2018 gRPC authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 *
 */

// This package exists to cause `go mod` and `go get` to believe these tools
// are dependencies, even though they are not runtime dependencies of any grpc
// package. This means they will appear in our `go.mod` file, but will not be
// a part of the build.
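//
// As an illustration only (the canonical invocation lives in vet.sh), the
// pinned versions of these tools can be installed from within the module
// with plain `go install`, e.g.:
//
//	go install golang.org/x/lint/golint
//	go install honnef.co/go/tools/cmd/staticcheck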
package tools

import (
	_ "github.com/client9/misspell/cmd/misspell"
	_ "github.com/golang/protobuf/protoc-gen-go"
	_ "golang.org/x/lint/golint"
	_ "golang.org/x/tools/cmd/goimports"
	_ "honnef.co/go/tools/cmd/staticcheck"
)

grpc-go-1.22.1/testdata/
grpc-go-1.22.1/testdata/ca.pem

-----BEGIN CERTIFICATE-----
MIICSjCCAbOgAwIBAgIJAJHGGR4dGioHMA0GCSqGSIb3DQEBCwUAMFYxCzAJBgNV
BAYTAkFVMRMwEQYDVQQIEwpTb21lLVN0YXRlMSEwHwYDVQQKExhJbnRlcm5ldCBX
aWRnaXRzIFB0eSBMdGQxDzANBgNVBAMTBnRlc3RjYTAeFw0xNDExMTEyMjMxMjla
Fw0yNDExMDgyMjMxMjlaMFYxCzAJBgNVBAYTAkFVMRMwEQYDVQQIEwpTb21lLVN0
YXRlMSEwHwYDVQQKExhJbnRlcm5ldCBXaWRnaXRzIFB0eSBMdGQxDzANBgNVBAMT
BnRlc3RjYTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEAwEDfBV5MYdlHVHJ7
+L4nxrZy7mBfAVXpOc5vMYztssUI7mL2/iYujiIXM+weZYNTEpLdjyJdu7R5gGUu
g1jSVK/EPHfc74O7AyZU34PNIP4Sh33N+/A5YexrNgJlPY+E3GdVYi4ldWJjgkAd
Qah2PH5ACLrIIC6tRka9hcaBlIECAwEAAaMgMB4wDAYDVR0TBAUwAwEB/zAOBgNV
HQ8BAf8EBAMCAgQwDQYJKoZIhvcNAQELBQADgYEAHzC7jdYlzAVmddi/gdAeKPau
sPBG/C2HCWqHzpCUHcKuvMzDVkY/MP2o6JIW2DBbY64bO/FceExhjcykgaYtCH/m
oIU63+CFOTtR7otyQAWHqXa7q4SbCDlG7DyRFxqG0txPtGvy12lgldA2+RgcigQG
Dfcog5wrJytaQ6UA0wE=
-----END CERTIFICATE-----

grpc-go-1.22.1/testdata/server1.key

-----BEGIN PRIVATE KEY-----
MIICdQIBADANBgkqhkiG9w0BAQEFAASCAl8wggJbAgEAAoGBAOHDFScoLCVJpYDD
M4HYtIdV6Ake/sMNaaKdODjDMsux/4tDydlumN+fm+AjPEK5GHhGn1BgzkWF+slf
3BxhrA/8dNsnunstVA7ZBgA/5qQxMfGAq4wHNVX77fBZOgp9VlSMVfyd9N8YwbBY
AckOeUQadTi2X1S6OgJXgQ0m3MWhAgMBAAECgYAn7qGnM2vbjJNBm0VZCkOkTIWm
V10okw7EPJrdL2mkre9NasghNXbE1y5zDshx5Nt3KsazKOxTT8d0Jwh/3KbaN+YY
tTCbKGW0pXDRBhwUHRcuRzScjli8Rih5UOCiZkhefUTcRb6xIhZJuQy71tjaSy0p
dHZRmYyBYO2YEQ8xoQJBAPrJPhMBkzmEYFtyIEqAxQ/o/A6E+E4w8i+KM7nQCK7q
K4JXzyXVAjLfyBZWHGM2uro/fjqPggGD6QH1qXCkI4MCQQDmdKeb2TrKRh5BY1LR
81aJGKcJ2XbcDu6wMZK4oqWbTX2KiYn9GB0woM6nSr/Y6iy1u145YzYxEV/iMwff
DJULAkB8B2MnyzOg0pNFJqBJuH29bKCcHa8gHJzqXhNO5lAlEbMK95p/P2Wi+4Hd
aiEIAF1BF326QJcvYKmwSmrORp85AkAlSNxRJ50OWrfMZnBgzVjDx3xG6KsFQVk2
ol6VhqL6dFgKUORFUWBvnKSyhjJxurlPEahV6oo6+A+mPhFY8eUvAkAZQyTdupP3
XEFQKctGz+9+gKkemDp7LBBMEMBXrGTLPhpEfcjv/7KPdnFHYmhYeBTBnuVmTVWe
F98XJ7tIFfJq
-----END PRIVATE KEY-----

grpc-go-1.22.1/testdata/server1.pem

-----BEGIN CERTIFICATE-----
MIICnDCCAgWgAwIBAgIBBzANBgkqhkiG9w0BAQsFADBWMQswCQYDVQQGEwJBVTET
MBEGA1UECBMKU29tZS1TdGF0ZTEhMB8GA1UEChMYSW50ZXJuZXQgV2lkZ2l0cyBQ
dHkgTHRkMQ8wDQYDVQQDEwZ0ZXN0Y2EwHhcNMTUxMTA0MDIyMDI0WhcNMjUxMTAx
MDIyMDI0WjBlMQswCQYDVQQGEwJVUzERMA8GA1UECBMISWxsaW5vaXMxEDAOBgNV
BAcTB0NoaWNhZ28xFTATBgNVBAoTDEV4YW1wbGUsIENvLjEaMBgGA1UEAxQRKi50
ZXN0Lmdvb2dsZS5jb20wgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAOHDFSco
LCVJpYDDM4HYtIdV6Ake/sMNaaKdODjDMsux/4tDydlumN+fm+AjPEK5GHhGn1Bg
zkWF+slf3BxhrA/8dNsnunstVA7ZBgA/5qQxMfGAq4wHNVX77fBZOgp9VlSMVfyd
9N8YwbBYAckOeUQadTi2X1S6OgJXgQ0m3MWhAgMBAAGjazBpMAkGA1UdEwQCMAAw
CwYDVR0PBAQDAgXgME8GA1UdEQRIMEaCECoudGVzdC5nb29nbGUuZnKCGHdhdGVy
em9vaS50ZXN0Lmdvb2dsZS5iZYISKi50ZXN0LnlvdXR1YmUuY29thwTAqAEDMA0G
CSqGSIb3DQEBCwUAA4GBAJFXVifQNub1LUP4JlnX5lXNlo8FxZ2a12AFQs+bzoJ6
hM044EDjqyxUqSbVePK0ni3w1fHQB5rY9yYC5f8G7aqqTY1QOhoUk8ZTSTRpnkTh
y4jjdvTZeLDVBlueZUTDRmy2feY5aZIU18vFDK08dTG0A87pppuv1LNIR3loveU8
-----END CERTIFICATE-----

grpc-go-1.22.1/testdata/testdata.go

/*
 * Copyright 2017 gRPC authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 *
 */

package testdata

import (
	"path/filepath"
	"runtime"
)

// basepath is the root directory of this package.
var basepath string

func init() {
	_, currentFile, _, _ := runtime.Caller(0)
	basepath = filepath.Dir(currentFile)
}

// Path returns the absolute path of the given relative file or directory
// path, relative to the google.golang.org/grpc/testdata directory in the
// user's GOPATH. If rel is already absolute, it is returned unmodified.
func Path(rel string) string {
	if filepath.IsAbs(rel) {
		return rel
	}
	return filepath.Join(basepath, rel)
}

grpc-go-1.22.1/trace.go

/*
 *
 * Copyright 2015 gRPC authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 *
 */

package grpc

import (
	"bytes"
	"fmt"
	"io"
	"net"
	"strings"
	"sync"
	"time"

	"golang.org/x/net/trace"
)

// EnableTracing controls whether to trace RPCs using the golang.org/x/net/trace package.
// This should only be set before any RPCs are sent or received by this program.
var EnableTracing bool

// methodFamily returns the trace family for the given method.
// It turns "/pkg.Service/GetFoo" into "pkg.Service".
func methodFamily(m string) string {
	m = strings.TrimPrefix(m, "/") // remove leading slash
	if i := strings.Index(m, "/"); i >= 0 {
		m = m[:i] // remove everything from second slash
	}
	if i := strings.LastIndex(m, "."); i >= 0 {
		m = m[i+1:] // cut down to last dotted component
	}
	return m
}

// traceInfo contains tracing information for an RPC.
type traceInfo struct {
	tr        trace.Trace
	firstLine firstLine
}

// firstLine is the first line of an RPC trace.
// It may be mutated after construction; remoteAddr specifically may change
// during client-side use.
type firstLine struct {
	mu         sync.Mutex
	client     bool // whether this is a client (outgoing) RPC
	remoteAddr net.Addr
	deadline   time.Duration // may be zero
}

func (f *firstLine) SetRemoteAddr(addr net.Addr) {
	f.mu.Lock()
	f.remoteAddr = addr
	f.mu.Unlock()
}

func (f *firstLine) String() string {
	f.mu.Lock()
	defer f.mu.Unlock()

	var line bytes.Buffer
	io.WriteString(&line, "RPC: ")
	if f.client {
		io.WriteString(&line, "to")
	} else {
		io.WriteString(&line, "from")
	}
	fmt.Fprintf(&line, " %v deadline:", f.remoteAddr)
	if f.deadline != 0 {
		fmt.Fprint(&line, f.deadline)
	} else {
		io.WriteString(&line, "none")
	}
	return line.String()
}

const truncateSize = 100

func truncate(x string, l int) string {
	if l > len(x) {
		return x
	}
	return x[:l]
}

// payload represents an RPC request or response payload.
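// The String form below is truncated to at most truncateSize bytes so that
// very large messages do not flood the trace output.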
type payload struct {
	sent bool        // whether this is an outgoing payload
	msg  interface{} // e.g. a proto.Message
	// TODO(dsymonds): add stringifying info to codec, and limit how much we hold here?
}

func (p payload) String() string {
	if p.sent {
		return truncate(fmt.Sprintf("sent: %v", p.msg), truncateSize)
	}
	return truncate(fmt.Sprintf("recv: %v", p.msg), truncateSize)
}

type fmtStringer struct {
	format string
	a      []interface{}
}

func (f *fmtStringer) String() string {
	return fmt.Sprintf(f.format, f.a...)
}

type stringer string

func (s stringer) String() string { return string(s) }

grpc-go-1.22.1/version.go

/*
 *
 * Copyright 2018 gRPC authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 *
 */

package grpc

// Version is the current grpc version.
const Version = "1.22.1"

grpc-go-1.22.1/vet.sh

#!/bin/bash

if [[ `uname -a` = *"Darwin"* ]]; then
  echo "It seems you are running on Mac. This script does not work on Mac. See https://github.com/grpc/grpc-go/issues/2047"
  exit 1
fi

set -ex  # Exit on error; debugging enabled.
set -o pipefail  # Fail a pipe if any sub-command fails.

die() {
  echo "$@" >&2
  exit 1
}

fail_on_output() {
  tee /dev/stderr | (! read)
}

# Check to make sure it's safe to modify the user's git repo.
git status --porcelain | fail_on_output

# Undo any edits made by this script.
cleanup() {
  git reset --hard HEAD
}
trap cleanup EXIT

PATH="${GOPATH}/bin:${GOROOT}/bin:${PATH}"

if [[ "$1" = "-install" ]]; then
  # Check for module support
  if go help mod >& /dev/null; then
    go install \
      golang.org/x/lint/golint \
      golang.org/x/tools/cmd/goimports \
      honnef.co/go/tools/cmd/staticcheck \
      github.com/client9/misspell/cmd/misspell \
      github.com/golang/protobuf/protoc-gen-go
  else
    # Ye olde `go get` incantation.
    # Note: this gets the latest version of all tools (vs. the pinned versions
    # with Go modules).
    go get -u \
      golang.org/x/lint/golint \
      golang.org/x/tools/cmd/goimports \
      honnef.co/go/tools/cmd/staticcheck \
      github.com/client9/misspell/cmd/misspell \
      github.com/golang/protobuf/protoc-gen-go
  fi
  if [[ -z "${VET_SKIP_PROTO}" ]]; then
    if [[ "${TRAVIS}" = "true" ]]; then
      PROTOBUF_VERSION=3.3.0
      PROTOC_FILENAME=protoc-${PROTOBUF_VERSION}-linux-x86_64.zip
      pushd /home/travis
      wget https://github.com/google/protobuf/releases/download/v${PROTOBUF_VERSION}/${PROTOC_FILENAME}
      unzip ${PROTOC_FILENAME}
      bin/protoc --version
      popd
    elif ! which protoc > /dev/null; then
      die "Please install protoc into your path"
    fi
  fi
  exit 0
elif [[ "$#" -ne 0 ]]; then
  die "Unknown argument(s): $*"
fi

# - Ensure all source files contain a copyright message.
git ls-files "*.go" | xargs grep -L "\(Copyright [0-9]\{4,\} gRPC authors\)\|DO NOT EDIT" 2>&1 | fail_on_output

# - Make sure all tests in grpc and grpc/test use leakcheck via Teardown.
(! grep 'func Test[^(]' *_test.go)
(! grep 'func Test[^(]' test/*.go)

# - Do not import math/rand for real library code.
# Use internal/grpcrand for thread safety.
git ls-files "*.go" | xargs grep -l '"math/rand"' 2>&1 | (! grep -v '^examples\|^stress\|grpcrand\|wrr_test')

# - Ensure all ptypes proto packages are renamed when importing.
git ls-files "*.go" | (! xargs grep "\(import \|^\s*\)\"github.com/golang/protobuf/ptypes/")

# - Check imports that are illegal in appengine (until Go 1.11).
# TODO: Remove when we drop Go 1.10 support
go list -f {{.Dir}} ./... | xargs go run test/go_vet/vet.go

# - gofmt, goimports, golint (with exceptions for generated code), go vet.
gofmt -s -d -l . 2>&1 | fail_on_output
goimports -l . 2>&1 | (! grep -vE "(_mock|\.pb)\.go:") | fail_on_output
golint ./... 2>&1 | (! grep -vE "(_mock|\.pb)\.go:")
go vet -all .

# - Check that generated proto files are up to date.
if [[ -z "${VET_SKIP_PROTO}" ]]; then
  PATH="/home/travis/bin:${PATH}" make proto && \
    git status --porcelain 2>&1 | fail_on_output || \
    (git status; git --no-pager diff; exit 1)
fi

# - Check that our module is tidy.
if go help mod >& /dev/null; then
  go mod tidy && \
    git status --porcelain 2>&1 | fail_on_output || \
    (git status; git --no-pager diff; exit 1)
fi

# - Collection of static analysis checks
# TODO(menghanl): fix errors in transport_test.
staticcheck -go 1.9 -checks 'inherit,-ST1015' -ignore '
google.golang.org/grpc/balancer.go:SA1019
google.golang.org/grpc/balancer/roundrobin/roundrobin_test.go:SA1019
google.golang.org/grpc/balancer/xds/edsbalancer/balancergroup.go:SA1019
google.golang.org/grpc/balancer/xds/xds.go:SA1019
google.golang.org/grpc/balancer_conn_wrappers.go:SA1019
google.golang.org/grpc/balancer_test.go:SA1019
google.golang.org/grpc/benchmark/benchmain/main.go:SA1019
google.golang.org/grpc/benchmark/worker/benchmark_client.go:SA1019
google.golang.org/grpc/clientconn.go:S1024
google.golang.org/grpc/clientconn_state_transition_test.go:SA1019
google.golang.org/grpc/clientconn_test.go:SA1019
google.golang.org/grpc/internal/transport/handler_server.go:SA1019
google.golang.org/grpc/internal/transport/handler_server_test.go:SA1019
google.golang.org/grpc/resolver/dns/dns_resolver.go:SA1019
google.golang.org/grpc/stats/stats_test.go:SA1019
google.golang.org/grpc/test/channelz_test.go:SA1019
google.golang.org/grpc/test/end2end_test.go:SA1019
google.golang.org/grpc/test/healthcheck_test.go:SA1019
' ./...
misspell -error .
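# (Illustrative usage note, not part of the original script: running
# `./vet.sh -install` once and then `./vet.sh` performs all of the checks
# above; setting VET_SKIP_PROTO skips the generated-proto check when protoc
# is not installed.)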