pax_global_header00006660000000000000000000000064131541646130014516gustar00rootroot0000000000000052 comment=ad3683afc5db56e0f6fe8786535b9b5b3d8a3d22 golang-google-grpc-1.6.0/000077500000000000000000000000001315416461300151745ustar00rootroot00000000000000golang-google-grpc-1.6.0/.github/000077500000000000000000000000001315416461300165345ustar00rootroot00000000000000golang-google-grpc-1.6.0/.github/ISSUE_TEMPLATE000066400000000000000000000005501315416461300206420ustar00rootroot00000000000000Please answer these questions before submitting your issue. ### What version of gRPC are you using? ### What version of Go are you using (`go version`)? ### What operating system (Linux, Windows, …) and version? ### What did you do? If possible, provide a recipe for reproducing the error. ### What did you expect to see? ### What did you see instead? golang-google-grpc-1.6.0/.please-update000066400000000000000000000000001315416461300177140ustar00rootroot00000000000000golang-google-grpc-1.6.0/.travis.yml000066400000000000000000000006041315416461300173050ustar00rootroot00000000000000language: go go: - 1.6.x - 1.7.x - 1.8.x - 1.9.x matrix: include: - go: 1.9.x env: ARCH=386 go_import_path: google.golang.org/grpc before_install: - if [[ "$TRAVIS_GO_VERSION" = 1.9* && "$ARCH" != "386" ]]; then ./vet.sh -install || exit 1; fi script: - if [[ "$TRAVIS_GO_VERSION" = 1.9* && "$ARCH" != "386" ]]; then ./vet.sh || exit 1; fi - make test testrace golang-google-grpc-1.6.0/AUTHORS000066400000000000000000000000141315416461300162370ustar00rootroot00000000000000Google Inc. golang-google-grpc-1.6.0/CONTRIBUTING.md000066400000000000000000000044071315416461300174320ustar00rootroot00000000000000# How to contribute We definitely welcome your patches and contributions to gRPC! If you are new to github, please start by reading [Pull Request howto](https://help.github.com/articles/about-pull-requests/) ## Legal requirements In order to protect both you and ourselves, you will need to sign the [Contributor License Agreement](https://cla.developers.google.com/clas). ## Guidelines for Pull Requests How to get your contributions merged smoothly and quickly. - Create **small PRs** that are narrowly focused on **addressing a single concern**. We often times receive PRs that are trying to fix several things at a time, but only one fix is considered acceptable, nothing gets merged and both author's & review's time is wasted. Create more PRs to address different concerns and everyone will be happy. - For speculative changes, consider opening an issue and discussing it first. If you are suggesting a behavioral or API change, consider starting with a [gRFC proposal](https://github.com/grpc/proposal). - Provide a good **PR description** as a record of **what** change is being made and **why** it was made. Link to a github issue if it exists. - Don't fix code style and formatting unless you are already changing that line to address an issue. PRs with irrelevant changes won't be merged. If you do want to fix formatting or style, do that in a separate PR. - Unless your PR is trivial, you should expect there will be reviewer comments that you'll need to address before merging. We expect you to be reasonably responsive to those comments, otherwise the PR will be closed after 2-3 weeks of inactivity. - Maintain **clean commit history** and use **meaningful commit messages**. PRs with messy commit history are difficult to review and won't be merged. 
Use `rebase -i upstream/master` to curate your commit history and/or to bring in latest changes from master (but avoid rebasing in the middle of a code review). - Keep your PR up to date with upstream/master (if there are merge conflicts, we can't really merge your change). - **All tests need to be passing** before your change can be merged. We recommend you **run tests locally** before creating your PR to catch breakages early on. - Exceptions to the rules can be made if there's a compelling reason for doing so. golang-google-grpc-1.6.0/Documentation/000077500000000000000000000000001315416461300200055ustar00rootroot00000000000000golang-google-grpc-1.6.0/Documentation/gomock-example.md000066400000000000000000000151151315416461300232420ustar00rootroot00000000000000# Mocking Service for gRPC [Example code unary RPC](https://github.com/grpc/grpc-go/tree/master/examples/helloworld/mock_helloworld) [Example code streaming RPC](https://github.com/grpc/grpc-go/tree/master/examples/route_guide/mock_routeguide) ## Why? To test client-side logic without the overhead of connecting to a real server. Mocking enables users to write light-weight unit tests to check functionalities on client-side without invoking RPC calls to a server. ## Idea: Mock the client stub that connects to the server. We use Gomock to mock the client interface (in the generated code) and programmatically set its methods to expect and return pre-determined values. This enables users to write tests around the client logic and use this mocked stub while making RPC calls. ## How to use Gomock? Documentation on Gomock can be found [here](https://github.com/golang/mock). A quick reading of the documentation should enable users to follow the code below. Consider a gRPC service based on following proto file: ```proto //helloworld.proto package helloworld; message HelloRequest { string name = 1; } message HelloReply { string name = 1; } service Greeter { rpc SayHello (HelloRequest) returns (HelloReply) {} } ``` The generated file helloworld.pb.go will have a client interface for each service defined in the proto file. This interface will have methods corresponding to each rpc inside that service. ```Go type GreeterClient interface { SayHello(ctx context.Context, in *HelloRequest, opts ...grpc.CallOption) (*HelloReply, error) } ``` The generated code also contains a struct that implements this interface. ```Go type greeterClient struct { cc *grpc.ClientConn } func (c *greeterClient) SayHello(ctx context.Context, in *HelloRequest, opts ...grpc.CallOption) (*HelloReply, error){ // ... // gRPC specific code here // ... } ``` Along with this the generated code has a method to create an instance of this struct. ```Go func NewGreeterClient(cc *grpc.ClientConn) GreeterClient ``` The user code uses this function to create an instance of the struct greeterClient which then can be used to make rpc calls to the server. We will mock this interface GreeterClient and use an instance of that mock to make rpc calls. These calls instead of going to server will return pre-determined values. To create a mock we’ll use [mockgen](https://github.com/golang/mock#running-mockgen). From the directory ``` examples/helloworld/ ``` run ``` mockgen google.golang.org/grpc/examples/helloworld/helloworld GreeterClient > mock_helloworld/hw_mock.go ``` Notice that in the above command we specify GreeterClient as the interface to be mocked. 
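The examples that follow refer to a `ctrl` value. This is a `gomock.Controller` (created with `gomock.NewController` from the gomock package imported below, alongside the standard `testing` package); it records the expectations placed on the mocks and verifies them when the test finishes. A minimal, illustrative test skeleton:

```Go
func TestSayHello(t *testing.T) {
	ctrl := gomock.NewController(t)
	// Finish verifies that every expectation registered on mocks created from ctrl was met.
	defer ctrl.Finish()

	// Create mockGreeterClient and set expectations here, as shown below.
}
```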
The user test code can import the package generated by mockgen along with library package gomock to write unit tests around client-side logic. ```Go import "github.com/golang/mock/gomock" import hwmock "google.golang.org/grpc/examples/helloworld/mock_helloworld" ``` An instance of the mocked interface can be created as: ```Go mockGreeterClient := hwmock.NewMockGreeterClient(ctrl) ``` This mocked object can be programmed to expect calls to its methods and return pre-determined values. For instance, we can program mockGreeterClient to expect a call to its method SayHello and return a HelloReply with message “Mocked RPC”. ```Go mockGreeterClient.EXPECT().SayHello( gomock.Any(), // expect any value for first parameter gomock.Any(), // expect any value for second parameter ).Return(&helloworld.HelloReply{Message: “Mocked RPC”}, nil) ``` gomock.Any() indicates that the parameter can have any value or type. We can indicate specific values for built-in types with gomock.Eq(). However, if the test code needs to specify the parameter to have a proto message type, we can replace gomock.Any() with an instance of a struct that implements gomock.Matcher interface. ```Go type rpcMsg struct { msg proto.Message } func (r *rpcMsg) Matches(msg interface{}) bool { m, ok := msg.(proto.Message) if !ok { return false } return proto.Equal(m, r.msg) } func (r *rpcMsg) String() string { return fmt.Sprintf("is %s", r.msg) } ... req := &helloworld.HelloRequest{Name: "unit_test"} mockGreeterClient.EXPECT().SayHello( gomock.Any(), &rpcMsg{msg: req}, ).Return(&helloworld.HelloReply{Message: "Mocked Interface"}, nil) ``` ## Mock streaming RPCs: For our example we consider the case of bi-directional streaming RPCs. Concretely, we'll write a test for RouteChat function from the route guide example to demonstrate how to write mocks for streams. RouteChat is a bi-directional streaming RPC, which means calling RouteChat returns a stream that can __Send__ and __Recv__ messages to and from the server, respectively. We'll start by creating a mock of this stream interface returned by RouteChat and then we'll mock the client interface and set expectation on the method RouteChat to return our mocked stream. ### Generating mocking code: Like before we'll use [mockgen](https://github.com/golang/mock#running-mockgen). From the `examples/route_guide` directory run: `mockgen google.golang.org/grpc/examples/route_guide/routeguide RouteGuideClient,RouteGuide_RouteChatClient > mock_route_guide/rg_mock.go` Notice that we are mocking both client(`RouteGuideClient`) and stream(`RouteGuide_RouteChatClient`) interfaces here. This will create a file `rg_mock.go` under directory `mock_route_guide`. This file contins all the mocking code we need to write our test. In our test code, like before, we import the this mocking code along with the generated code ```go import ( rgmock "google.golang.org/grpc/examples/route_guide/mock_routeguide" rgpb "google.golang.org/grpc/examples/route_guide/routeguide" ) ``` Now conside a test that takes the RouteGuide client object as a parameter, makes a RouteChat rpc call and sends a message on the resulting stream. Furthermore, this test expects to see the same message to be received on the stream. ```go var msg = ... // Creates a RouteChat call and sends msg on it. // Checks if the received message was equal to msg. func testRouteChat(client rgb.RouteChatClient) error{ ... } ``` We can inject our mock in here by simply passing it as an argument to the method. 
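Once the stream and client mocks have been created and programmed as shown in the snippets below, the remaining test body is essentially a single call (a sketch, assuming `rgclient` is the mocked client with the expectations below already registered):

```go
if err := testRouteChat(rgclient); err != nil {
	t.Fatal(err)
}
```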
Creating mock for stream interface: ```go stream := rgmock.NewMockRouteGuide_RouteChatClient(ctrl) } ``` Setting Expectations: ```go stream.EXPECT().Send(gomock.Any()).Return(nil) stream.EXPECT().Recv().Return(msg, nil) ``` Creating mock for client interface: ```go rgclient := rgmock.NewMockRouteGuideClient(ctrl) ``` Setting Expectations: ```go rgclient.EXPECT().RouteChat(gomock.Any()).Return(stream, nil) ``` golang-google-grpc-1.6.0/Documentation/grpc-auth-support.md000066400000000000000000000025421315416461300237360ustar00rootroot00000000000000# Authentication As outlined in the [gRPC authentication guide](https://grpc.io/docs/guides/auth.html) there are a number of different mechanisms for asserting identity between an client and server. We'll present some code-samples here demonstrating how to provide TLS support encryption and identity assertions as well as passing OAuth2 tokens to services that support it. # Enabling TLS on a gRPC client ```Go conn, err := grpc.Dial(serverAddr, grpc.WithTransportCredentials(credentials.NewClientTLSFromCert(nil, ""))) ``` # Enabling TLS on a gRPC server ```Go creds, err := credentials.NewServerTLSFromFile(certFile, keyFile) if err != nil { log.Fatalf("Failed to generate credentials %v", err) } lis, err := net.Listen("tcp", ":0") server := grpc.NewServer(grpc.Creds(creds)) ... server.Serve(lis) ``` # Authenticating with Google ## Google Compute Engine (GCE) ```Go conn, err := grpc.Dial(serverAddr, grpc.WithTransportCredentials(credentials.NewClientTLSFromCert(nil, "")), grpc.WithPerRPCCredentials(oauth.NewComputeEngine())) ``` ## JWT ```Go jwtCreds, err := oauth.NewServiceAccountFromFile(*serviceAccountKeyFile, *oauthScope) if err != nil { log.Fatalf("Failed to create JWT credentials: %v", err) } conn, err := grpc.Dial(serverAddr, grpc.WithTransportCredentials(credentials.NewClientTLSFromCert(nil, "")), grpc.WithPerRPCCredentials(jwtCreds)) ``` golang-google-grpc-1.6.0/Documentation/grpc-metadata.md000066400000000000000000000146371315416461300230530ustar00rootroot00000000000000# Metadata gRPC supports sending metadata between client and server. This doc shows how to send and receive metadata in gRPC-go. ## Background Four kinds of service method: - [Unary RPC](https://grpc.io/docs/guides/concepts.html#unary-rpc) - [Server streaming RPC](https://grpc.io/docs/guides/concepts.html#server-streaming-rpc) - [Client streaming RPC](https://grpc.io/docs/guides/concepts.html#client-streaming-rpc) - [Bidirectional streaming RPC](https://grpc.io/docs/guides/concepts.html#bidirectional-streaming-rpc) And concept of [metadata](https://grpc.io/docs/guides/concepts.html#metadata). ## Constructing metadata A metadata can be created using package [metadata](https://godoc.org/google.golang.org/grpc/metadata). The type MD is actually a map from string to a list of strings: ```go type MD map[string][]string ``` Metadata can be read like a normal map. Note that the value type of this map is `[]string`, so that users can attach multiple values using a single key. ### Creating a new metadata A metadata can be created from a `map[string]string` using function `New`: ```go md := metadata.New(map[string]string{"key1": "val1", "key2": "val2"}) ``` Another way is to use `Pairs`. 
Values with the same key will be merged into a list: ```go md := metadata.Pairs( "key1", "val1", "key1", "val1-2", // "key1" will have map value []string{"val1", "val1-2"} "key2", "val2", ) ``` __Note:__ all the keys will be automatically converted to lowercase, so "key1" and "kEy1" will be the same key and their values will be merged into the same list. This happens for both `New` and `Pairs`. ### Storing binary data in metadata In metadata, keys are always strings. But values can be strings or binary data. To store binary data value in metadata, simply add "-bin" suffix to the key. The values with "-bin" suffixed keys will be encoded when creating the metadata: ```go md := metadata.Pairs( "key", "string value", "key-bin", string([]byte{96, 102}), // this binary data will be encoded (base64) before sending // and will be decoded after being transferred. ) ``` ## Retrieving metadata from context Metadata can be retrieved from context using `FromIncomingContext`: ```go func (s *server) SomeRPC(ctx context.Context, in *pb.SomeRequest) (*pb.SomeResponse, err) { md, ok := metadata.FromIncomingContext(ctx) // do something with metadata } ``` ## Sending and receiving metadata - client side [//]: # "TODO: uncomment next line after example source added" [//]: # "Real metadata sending and receiving examples are available [here](TODO:example_dir)." ### Sending metadata To send metadata to server, the client can wrap the metadata into a context using `NewOutgoingContext`, and make the RPC with this context: ```go md := metadata.Pairs("key", "val") // create a new context with this metadata ctx := metadata.NewOutgoingContext(context.Background(), md) // make unary RPC response, err := client.SomeRPC(ctx, someRequest) // or make streaming RPC stream, err := client.SomeStreamingRPC(ctx) ``` To read this back from the context on the client (e.g. in an interceptor) before the RPC is sent, use `FromOutgoingContext`. ### Receiving metadata Metadata that a client can receive includes header and trailer. #### Unary call Header and trailer sent along with a unary call can be retrieved using function [Header](https://godoc.org/google.golang.org/grpc#Header) and [Trailer](https://godoc.org/google.golang.org/grpc#Trailer) in [CallOption](https://godoc.org/google.golang.org/grpc#CallOption): ```go var header, trailer metadata.MD // variable to store header and trailer r, err := client.SomeRPC( ctx, someRequest, grpc.Header(&header), // will retrieve header grpc.Trailer(&trailer), // will retrieve trailer ) // do something with header and trailer ``` #### Streaming call For streaming calls including: - Server streaming RPC - Client streaming RPC - Bidirectional streaming RPC Header and trailer can be retrieved from the returned stream using function `Header` and `Trailer` in interface [ClientStream](https://godoc.org/google.golang.org/grpc#ClientStream): ```go stream, err := client.SomeStreamingRPC(ctx) // retrieve header header, err := stream.Header() // retrieve trailer trailer := stream.Trailer() ``` ## Sending and receiving metadata - server side [//]: # "TODO: uncomment next line after example source added" [//]: # "Real metadata sending and receiving examples are available [here](TODO:example_dir)." ### Receiving metadata To read metadata sent by the client, the server needs to retrieve it from RPC context. If it is a unary call, the RPC handler's context can be used. For streaming calls, the server needs to get context from the stream. 
#### Unary call ```go func (s *server) SomeRPC(ctx context.Context, in *pb.someRequest) (*pb.someResponse, error) { md, ok := metadata.FromIncomingContext(ctx) // do something with metadata } ``` #### Streaming call ```go func (s *server) SomeStreamingRPC(stream pb.Service_SomeStreamingRPCServer) error { md, ok := metadata.FromIncomingContext(stream.Context()) // get context from stream // do something with metadata } ``` ### Sending metadata #### Unary call To send header and trailer to client in unary call, the server can call [SendHeader](https://godoc.org/google.golang.org/grpc#SendHeader) and [SetTrailer](https://godoc.org/google.golang.org/grpc#SetTrailer) functions in module [grpc](https://godoc.org/google.golang.org/grpc). These two functions take a context as the first parameter. It should be the RPC handler's context or one derived from it: ```go func (s *server) SomeRPC(ctx context.Context, in *pb.someRequest) (*pb.someResponse, error) { // create and send header header := metadata.Pairs("header-key", "val") grpc.SendHeader(ctx, header) // create and set trailer trailer := metadata.Pairs("trailer-key", "val") grpc.SetTrailer(ctx, trailer) } ``` #### Streaming call For streaming calls, header and trailer can be sent using function `SendHeader` and `SetTrailer` in interface [ServerStream](https://godoc.org/google.golang.org/grpc#ServerStream): ```go func (s *server) SomeStreamingRPC(stream pb.Service_SomeStreamingRPCServer) error { // create and send header header := metadata.Pairs("header-key", "val") stream.SendHeader(header) // create and set trailer trailer := metadata.Pairs("trailer-key", "val") stream.SetTrailer(trailer) } ``` golang-google-grpc-1.6.0/Documentation/server-reflection-tutorial.md000066400000000000000000000077561315416461300256450ustar00rootroot00000000000000# gRPC Server Reflection Tutorial gRPC Server Reflection provides information about publicly-accessible gRPC services on a server, and assists clients at runtime to construct RPC requests and responses without precompiled service information. It is used by gRPC CLI, which can be used to introspect server protos and send/receive test RPCs. ## Enable Server Reflection gRPC-go Server Reflection is implemented in package [reflection](https://github.com/grpc/grpc-go/tree/master/reflection). To enable server reflection, you need to import this package and register reflection service on your gRPC server. For example, to enable server reflection in `example/helloworld`, we need to make the following changes: ```diff --- a/examples/helloworld/greeter_server/main.go +++ b/examples/helloworld/greeter_server/main.go @@ -40,6 +40,7 @@ import ( "golang.org/x/net/context" "google.golang.org/grpc" pb "google.golang.org/grpc/examples/helloworld/helloworld" + "google.golang.org/grpc/reflection" ) const ( @@ -61,6 +62,8 @@ func main() { } s := grpc.NewServer() pb.RegisterGreeterServer(s, &server{}) + // Register reflection service on gRPC server. + reflection.Register(s) if err := s.Serve(lis); err != nil { log.Fatalf("failed to serve: %v", err) } ``` We have made this change in `example/helloworld`, and we will use it as an example to show the use of gRPC server reflection and gRPC CLI in this tutorial. ## gRPC CLI After enabling Server Reflection in a server application, you can use gRPC CLI to check its services. gRPC CLI is only available in c++. Instructions on how to use gRPC CLI can be found at [command_line_tool.md](https://github.com/grpc/grpc/blob/master/doc/command_line_tool.md). 
To build gRPC CLI: ```sh git clone https://github.com/grpc/grpc cd grpc make grpc_cli cd bins/opt # grpc_cli is in directory bins/opt/ ``` ## Use gRPC CLI to check services First, start the helloworld server in grpc-go directory: ```sh $ cd $ go run examples/helloworld/greeter_server/main.go ``` Open a new terminal and make sure you are in the directory where grpc_cli lives: ```sh $ cd /bins/opt ``` ### List services `grpc_cli ls` command lists services and methods exposed at a given port: - List all the services exposed at a given port ```sh $ ./grpc_cli ls localhost:50051 ``` output: ```sh helloworld.Greeter grpc.reflection.v1alpha.ServerReflection ``` - List one service with details `grpc_cli ls` command inspects a service given its full name (in the format of \.\). It can print information with a long listing format when `-l` flag is set. This flag can be used to get more details about a service. ```sh $ ./grpc_cli ls localhost:50051 helloworld.Greeter -l ``` output: ```sh filename: helloworld.proto package: helloworld; service Greeter { rpc SayHello(helloworld.HelloRequest) returns (helloworld.HelloReply) {} } ``` ### List methods - List one method with details `grpc_cli ls` command also inspects a method given its full name (in the format of \.\.\). ```sh $ ./grpc_cli ls localhost:50051 helloworld.Greeter.SayHello -l ``` output: ```sh rpc SayHello(helloworld.HelloRequest) returns (helloworld.HelloReply) {} ``` ### Inspect message types We can use`grpc_cli type` command to inspect request/response types given the full name of the type (in the format of \.\). - Get information about the request type ```sh $ ./grpc_cli type localhost:50051 helloworld.HelloRequest ``` output: ```sh message HelloRequest { optional string name = 1[json_name = "name"]; } ``` ### Call a remote method We can send RPCs to a server and get responses using `grpc_cli call` command. - Call a unary method ```sh $ ./grpc_cli call localhost:50051 SayHello "name: 'gRPC CLI'" ``` output: ```sh message: "Hello gRPC CLI" ``` golang-google-grpc-1.6.0/LICENSE000066400000000000000000000261361315416461300162110ustar00rootroot00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. 
"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. golang-google-grpc-1.6.0/Makefile000066400000000000000000000013711315416461300166360ustar00rootroot00000000000000all: test testrace deps: go get -d -v google.golang.org/grpc/... updatedeps: go get -d -v -u -f google.golang.org/grpc/... testdeps: go get -d -v -t google.golang.org/grpc/... updatetestdeps: go get -d -v -t -u -f google.golang.org/grpc/... build: deps go build google.golang.org/grpc/... proto: @ if ! which protoc > /dev/null; then \ echo "error: protoc not installed" >&2; \ exit 1; \ fi go generate google.golang.org/grpc/... test: testdeps go test -v -cpu 1,4 google.golang.org/grpc/... testrace: testdeps go test -v -race -cpu 1,4 google.golang.org/grpc/... clean: go clean -i google.golang.org/grpc/... 
.PHONY: \ all \ deps \ updatedeps \ testdeps \ updatetestdeps \ build \ proto \ test \ testrace \ clean \ coverage golang-google-grpc-1.6.0/README.md000066400000000000000000000035121315416461300164540ustar00rootroot00000000000000# gRPC-Go [![Build Status](https://travis-ci.org/grpc/grpc-go.svg)](https://travis-ci.org/grpc/grpc-go) [![GoDoc](https://godoc.org/google.golang.org/grpc?status.svg)](https://godoc.org/google.golang.org/grpc) The Go implementation of [gRPC](https://grpc.io/): A high performance, open source, general RPC framework that puts mobile and HTTP/2 first. For more information see the [gRPC Quick Start: Go](https://grpc.io/docs/quickstart/go.html) guide. Installation ------------ To install this package, you need to install Go and setup your Go workspace on your computer. The simplest way to install the library is to run: ``` $ go get -u google.golang.org/grpc ``` Prerequisites ------------- This requires Go 1.6 or later. Constraints ----------- The grpc package should only depend on standard Go packages and a small number of exceptions. If your contribution introduces new dependencies which are NOT in the [list](http://godoc.org/google.golang.org/grpc?imports), you need a discussion with gRPC-Go authors and consultants. Documentation ------------- See [API documentation](https://godoc.org/google.golang.org/grpc) for package and API descriptions and find examples in the [examples directory](examples/). Performance ----------- See the current benchmarks for some of the languages supported in [this dashboard](https://performance-dot-grpc-testing.appspot.com/explore?dashboard=5652536396611584&widget=490377658&container=1286539696). Status ------ General Availability [Google Cloud Platform Launch Stages](https://cloud.google.com/terms/launch-stages). FAQ --- #### Compiling error, undefined: grpc.SupportPackageIsVersion Please update proto package, gRPC package and rebuild the proto files: - `go get -u github.com/golang/protobuf/{proto,protoc-gen-go}` - `go get -u google.golang.org/grpc` - `protoc --go_out=plugins=grpc:. *.proto` golang-google-grpc-1.6.0/backoff.go000066400000000000000000000052641315416461300171250ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "math/rand" "time" ) // DefaultBackoffConfig uses values specified for backoff in // https://github.com/grpc/grpc/blob/master/doc/connection-backoff.md. var ( DefaultBackoffConfig = BackoffConfig{ MaxDelay: 120 * time.Second, baseDelay: 1.0 * time.Second, factor: 1.6, jitter: 0.2, } ) // backoffStrategy defines the methodology for backing off after a grpc // connection failure. // // This is unexported until the gRPC project decides whether or not to allow // alternative backoff strategies. Once a decision is made, this type and its // method may be exported. type backoffStrategy interface { // backoff returns the amount of time to wait before the next retry given // the number of consecutive failures. 
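	// A retries value of 0 corresponds to the first retry; the default
	// implementation below returns its base delay in that case and grows the
	// delay exponentially (with jitter, capped at MaxDelay) for larger values.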
backoff(retries int) time.Duration } // BackoffConfig defines the parameters for the default gRPC backoff strategy. type BackoffConfig struct { // MaxDelay is the upper bound of backoff delay. MaxDelay time.Duration // TODO(stevvooe): The following fields are not exported, as allowing // changes would violate the current gRPC specification for backoff. If // gRPC decides to allow more interesting backoff strategies, these fields // may be opened up in the future. // baseDelay is the amount of time to wait before retrying after the first // failure. baseDelay time.Duration // factor is applied to the backoff after each retry. factor float64 // jitter provides a range to randomize backoff delays. jitter float64 } func setDefaults(bc *BackoffConfig) { md := bc.MaxDelay *bc = DefaultBackoffConfig if md > 0 { bc.MaxDelay = md } } func (bc BackoffConfig) backoff(retries int) time.Duration { if retries == 0 { return bc.baseDelay } backoff, max := float64(bc.baseDelay), float64(bc.MaxDelay) for backoff < max && retries > 0 { backoff *= bc.factor retries-- } if backoff > max { backoff = max } // Randomize backoff delays so that if a cluster of requests start at // the same time, they won't operate in lockstep. backoff *= 1 + bc.jitter*(rand.Float64()*2-1) if backoff < 0 { return 0 } return time.Duration(backoff) } golang-google-grpc-1.6.0/backoff_test.go000066400000000000000000000015341315416461300201600ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import "testing" func TestBackoffConfigDefaults(t *testing.T) { b := BackoffConfig{} setDefaults(&b) if b != DefaultBackoffConfig { t.Fatalf("expected BackoffConfig to pickup default parameters: %v != %v", b, DefaultBackoffConfig) } } golang-google-grpc-1.6.0/balancer.go000066400000000000000000000267201315416461300173010ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "fmt" "net" "sync" "golang.org/x/net/context" "google.golang.org/grpc/codes" "google.golang.org/grpc/credentials" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/naming" ) // Address represents a server the client connects to. // This is the EXPERIMENTAL API and may be changed or extended in the future. type Address struct { // Addr is the server address on which a connection will be established. Addr string // Metadata is the information associated with Addr, which may be used // to make load balancing decision. 
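	// For example, a name resolver or load balancer implementation can attach
	// per-address information such as a server's zone or weight here.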
Metadata interface{} } // BalancerConfig specifies the configurations for Balancer. type BalancerConfig struct { // DialCreds is the transport credential the Balancer implementation can // use to dial to a remote load balancer server. The Balancer implementations // can ignore this if it does not need to talk to another party securely. DialCreds credentials.TransportCredentials // Dialer is the custom dialer the Balancer implementation can use to dial // to a remote load balancer server. The Balancer implementations // can ignore this if it doesn't need to talk to remote balancer. Dialer func(context.Context, string) (net.Conn, error) } // BalancerGetOptions configures a Get call. // This is the EXPERIMENTAL API and may be changed or extended in the future. type BalancerGetOptions struct { // BlockingWait specifies whether Get should block when there is no // connected address. BlockingWait bool } // Balancer chooses network addresses for RPCs. // This is the EXPERIMENTAL API and may be changed or extended in the future. type Balancer interface { // Start does the initialization work to bootstrap a Balancer. For example, // this function may start the name resolution and watch the updates. It will // be called when dialing. Start(target string, config BalancerConfig) error // Up informs the Balancer that gRPC has a connection to the server at // addr. It returns down which is called once the connection to addr gets // lost or closed. // TODO: It is not clear how to construct and take advantage of the meaningful error // parameter for down. Need realistic demands to guide. Up(addr Address) (down func(error)) // Get gets the address of a server for the RPC corresponding to ctx. // i) If it returns a connected address, gRPC internals issues the RPC on the // connection to this address; // ii) If it returns an address on which the connection is under construction // (initiated by Notify(...)) but not connected, gRPC internals // * fails RPC if the RPC is fail-fast and connection is in the TransientFailure or // Shutdown state; // or // * issues RPC on the connection otherwise. // iii) If it returns an address on which the connection does not exist, gRPC // internals treats it as an error and will fail the corresponding RPC. // // Therefore, the following is the recommended rule when writing a custom Balancer. // If opts.BlockingWait is true, it should return a connected address or // block if there is no connected address. It should respect the timeout or // cancellation of ctx when blocking. If opts.BlockingWait is false (for fail-fast // RPCs), it should return an address it has notified via Notify(...) immediately // instead of blocking. // // The function returns put which is called once the rpc has completed or failed. // put can collect and report RPC stats to a remote load balancer. // // This function should only return the errors Balancer cannot recover by itself. // gRPC internals will fail the RPC if an error is returned. Get(ctx context.Context, opts BalancerGetOptions) (addr Address, put func(), err error) // Notify returns a channel that is used by gRPC internals to watch the addresses // gRPC needs to connect. The addresses might be from a name resolver or remote // load balancer. gRPC internals will compare it with the existing connected // addresses. If the address Balancer notified is not in the existing connected // addresses, gRPC starts to connect the address. 
If an address in the existing // connected addresses is not in the notification list, the corresponding connection // is shutdown gracefully. Otherwise, there are no operations to take. Note that // the Address slice must be the full list of the Addresses which should be connected. // It is NOT delta. Notify() <-chan []Address // Close shuts down the balancer. Close() error } // downErr implements net.Error. It is constructed by gRPC internals and passed to the down // call of Balancer. type downErr struct { timeout bool temporary bool desc string } func (e downErr) Error() string { return e.desc } func (e downErr) Timeout() bool { return e.timeout } func (e downErr) Temporary() bool { return e.temporary } func downErrorf(timeout, temporary bool, format string, a ...interface{}) downErr { return downErr{ timeout: timeout, temporary: temporary, desc: fmt.Sprintf(format, a...), } } // RoundRobin returns a Balancer that selects addresses round-robin. It uses r to watch // the name resolution updates and updates the addresses available correspondingly. func RoundRobin(r naming.Resolver) Balancer { return &roundRobin{r: r} } type addrInfo struct { addr Address connected bool } type roundRobin struct { r naming.Resolver w naming.Watcher addrs []*addrInfo // all the addresses the client should potentially connect mu sync.Mutex addrCh chan []Address // the channel to notify gRPC internals the list of addresses the client should connect to. next int // index of the next address to return for Get() waitCh chan struct{} // the channel to block when there is no connected address available done bool // The Balancer is closed. } func (rr *roundRobin) watchAddrUpdates() error { updates, err := rr.w.Next() if err != nil { grpclog.Warningf("grpc: the naming watcher stops working due to %v.", err) return err } rr.mu.Lock() defer rr.mu.Unlock() for _, update := range updates { addr := Address{ Addr: update.Addr, Metadata: update.Metadata, } switch update.Op { case naming.Add: var exist bool for _, v := range rr.addrs { if addr == v.addr { exist = true grpclog.Infoln("grpc: The name resolver wanted to add an existing address: ", addr) break } } if exist { continue } rr.addrs = append(rr.addrs, &addrInfo{addr: addr}) case naming.Delete: for i, v := range rr.addrs { if addr == v.addr { copy(rr.addrs[i:], rr.addrs[i+1:]) rr.addrs = rr.addrs[:len(rr.addrs)-1] break } } default: grpclog.Errorln("Unknown update.Op ", update.Op) } } // Make a copy of rr.addrs and write it onto rr.addrCh so that gRPC internals gets notified. open := make([]Address, len(rr.addrs)) for i, v := range rr.addrs { open[i] = v.addr } if rr.done { return ErrClientConnClosing } select { case <-rr.addrCh: default: } rr.addrCh <- open return nil } func (rr *roundRobin) Start(target string, config BalancerConfig) error { rr.mu.Lock() defer rr.mu.Unlock() if rr.done { return ErrClientConnClosing } if rr.r == nil { // If there is no name resolver installed, it is not needed to // do name resolution. In this case, target is added into rr.addrs // as the only address available and rr.addrCh stays nil. rr.addrs = append(rr.addrs, &addrInfo{addr: Address{Addr: target}}) return nil } w, err := rr.r.Resolve(target) if err != nil { return err } rr.w = w rr.addrCh = make(chan []Address, 1) go func() { for { if err := rr.watchAddrUpdates(); err != nil { return } } }() return nil } // Up sets the connected state of addr and sends notification if there are pending // Get() calls. 
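// The returned function is the down callback for addr: calling it marks the
// address as disconnected again. A nil function is returned if addr was
// already marked connected.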
func (rr *roundRobin) Up(addr Address) func(error) { rr.mu.Lock() defer rr.mu.Unlock() var cnt int for _, a := range rr.addrs { if a.addr == addr { if a.connected { return nil } a.connected = true } if a.connected { cnt++ } } // addr is only one which is connected. Notify the Get() callers who are blocking. if cnt == 1 && rr.waitCh != nil { close(rr.waitCh) rr.waitCh = nil } return func(err error) { rr.down(addr, err) } } // down unsets the connected state of addr. func (rr *roundRobin) down(addr Address, err error) { rr.mu.Lock() defer rr.mu.Unlock() for _, a := range rr.addrs { if addr == a.addr { a.connected = false break } } } // Get returns the next addr in the rotation. func (rr *roundRobin) Get(ctx context.Context, opts BalancerGetOptions) (addr Address, put func(), err error) { var ch chan struct{} rr.mu.Lock() if rr.done { rr.mu.Unlock() err = ErrClientConnClosing return } if len(rr.addrs) > 0 { if rr.next >= len(rr.addrs) { rr.next = 0 } next := rr.next for { a := rr.addrs[next] next = (next + 1) % len(rr.addrs) if a.connected { addr = a.addr rr.next = next rr.mu.Unlock() return } if next == rr.next { // Has iterated all the possible address but none is connected. break } } } if !opts.BlockingWait { if len(rr.addrs) == 0 { rr.mu.Unlock() err = Errorf(codes.Unavailable, "there is no address available") return } // Returns the next addr on rr.addrs for failfast RPCs. addr = rr.addrs[rr.next].addr rr.next++ rr.mu.Unlock() return } // Wait on rr.waitCh for non-failfast RPCs. if rr.waitCh == nil { ch = make(chan struct{}) rr.waitCh = ch } else { ch = rr.waitCh } rr.mu.Unlock() for { select { case <-ctx.Done(): err = ctx.Err() return case <-ch: rr.mu.Lock() if rr.done { rr.mu.Unlock() err = ErrClientConnClosing return } if len(rr.addrs) > 0 { if rr.next >= len(rr.addrs) { rr.next = 0 } next := rr.next for { a := rr.addrs[next] next = (next + 1) % len(rr.addrs) if a.connected { addr = a.addr rr.next = next rr.mu.Unlock() return } if next == rr.next { // Has iterated all the possible address but none is connected. break } } } // The newly added addr got removed by Down() again. if rr.waitCh == nil { ch = make(chan struct{}) rr.waitCh = ch } else { ch = rr.waitCh } rr.mu.Unlock() } } } func (rr *roundRobin) Notify() <-chan []Address { return rr.addrCh } func (rr *roundRobin) Close() error { rr.mu.Lock() defer rr.mu.Unlock() if rr.done { return errBalancerClosed } rr.done = true if rr.w != nil { rr.w.Close() } if rr.waitCh != nil { close(rr.waitCh) rr.waitCh = nil } if rr.addrCh != nil { close(rr.addrCh) } return nil } // pickFirst is used to test multi-addresses in one addrConn in which all addresses share the same addrConn. // It is a wrapper around roundRobin balancer. The logic of all methods works fine because balancer.Get() // returns the only address Up by resetTransport(). type pickFirst struct { *roundRobin } func pickFirstBalancer(r naming.Resolver) Balancer { return &pickFirst{&roundRobin{r: r}} } golang-google-grpc-1.6.0/balancer_test.go000066400000000000000000000571501315416461300203410ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "fmt" "math" "strconv" "sync" "testing" "time" "golang.org/x/net/context" "google.golang.org/grpc/codes" "google.golang.org/grpc/naming" ) type testWatcher struct { // the channel to receives name resolution updates update chan *naming.Update // the side channel to get to know how many updates in a batch side chan int // the channel to notifiy update injector that the update reading is done readDone chan int } func (w *testWatcher) Next() (updates []*naming.Update, err error) { n := <-w.side if n == 0 { return nil, fmt.Errorf("w.side is closed") } for i := 0; i < n; i++ { u := <-w.update if u != nil { updates = append(updates, u) } } w.readDone <- 0 return } func (w *testWatcher) Close() { } // Inject naming resolution updates to the testWatcher. func (w *testWatcher) inject(updates []*naming.Update) { w.side <- len(updates) for _, u := range updates { w.update <- u } <-w.readDone } type testNameResolver struct { w *testWatcher addr string } func (r *testNameResolver) Resolve(target string) (naming.Watcher, error) { r.w = &testWatcher{ update: make(chan *naming.Update, 1), side: make(chan int, 1), readDone: make(chan int), } r.w.side <- 1 r.w.update <- &naming.Update{ Op: naming.Add, Addr: r.addr, } go func() { <-r.w.readDone }() return r.w, nil } func startServers(t *testing.T, numServers int, maxStreams uint32) ([]*server, *testNameResolver) { var servers []*server for i := 0; i < numServers; i++ { s := newTestServer() servers = append(servers, s) go s.start(t, 0, maxStreams) s.wait(t, 2*time.Second) } // Point to server[0] addr := "localhost:" + servers[0].port return servers, &testNameResolver{ addr: addr, } } func TestNameDiscovery(t *testing.T) { // Start 2 servers on 2 ports. numServers := 2 servers, r := startServers(t, numServers, math.MaxUint32) cc, err := Dial("foo.bar.com", WithBalancer(RoundRobin(r)), WithBlock(), WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("Failed to create ClientConn: %v", err) } req := "port" var reply string if err := Invoke(context.Background(), "/foo/bar", &req, &reply, cc); err == nil || ErrorDesc(err) != servers[0].port { t.Fatalf("grpc.Invoke(_, _, _, _, _) = %v, want %s", err, servers[0].port) } // Inject the name resolution change to remove servers[0] and add servers[1]. var updates []*naming.Update updates = append(updates, &naming.Update{ Op: naming.Delete, Addr: "localhost:" + servers[0].port, }) updates = append(updates, &naming.Update{ Op: naming.Add, Addr: "localhost:" + servers[1].port, }) r.w.inject(updates) // Loop until the rpcs in flight talks to servers[1]. 
for { if err := Invoke(context.Background(), "/foo/bar", &req, &reply, cc); err != nil && ErrorDesc(err) == servers[1].port { break } time.Sleep(10 * time.Millisecond) } cc.Close() for i := 0; i < numServers; i++ { servers[i].stop() } } func TestEmptyAddrs(t *testing.T) { servers, r := startServers(t, 1, math.MaxUint32) cc, err := Dial("foo.bar.com", WithBalancer(RoundRobin(r)), WithBlock(), WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("Failed to create ClientConn: %v", err) } var reply string if err := Invoke(context.Background(), "/foo/bar", &expectedRequest, &reply, cc); err != nil || reply != expectedResponse { t.Fatalf("grpc.Invoke(_, _, _, _, _) = %v, reply = %q, want %q, ", err, reply, expectedResponse) } // Inject name resolution change to remove the server so that there is no address // available after that. u := &naming.Update{ Op: naming.Delete, Addr: "localhost:" + servers[0].port, } r.w.inject([]*naming.Update{u}) // Loop until the above updates apply. for { time.Sleep(10 * time.Millisecond) ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond) if err := Invoke(ctx, "/foo/bar", &expectedRequest, &reply, cc); err != nil { cancel() break } cancel() } cc.Close() servers[0].stop() } func TestRoundRobin(t *testing.T) { // Start 3 servers on 3 ports. numServers := 3 servers, r := startServers(t, numServers, math.MaxUint32) cc, err := Dial("foo.bar.com", WithBalancer(RoundRobin(r)), WithBlock(), WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("Failed to create ClientConn: %v", err) } // Add servers[1] to the service discovery. u := &naming.Update{ Op: naming.Add, Addr: "localhost:" + servers[1].port, } r.w.inject([]*naming.Update{u}) req := "port" var reply string // Loop until servers[1] is up for { if err := Invoke(context.Background(), "/foo/bar", &req, &reply, cc); err != nil && ErrorDesc(err) == servers[1].port { break } time.Sleep(10 * time.Millisecond) } // Add server2[2] to the service discovery. u = &naming.Update{ Op: naming.Add, Addr: "localhost:" + servers[2].port, } r.w.inject([]*naming.Update{u}) // Loop until both servers[2] are up. for { if err := Invoke(context.Background(), "/foo/bar", &req, &reply, cc); err != nil && ErrorDesc(err) == servers[2].port { break } time.Sleep(10 * time.Millisecond) } // Check the incoming RPCs served in a round-robin manner. for i := 0; i < 10; i++ { if err := Invoke(context.Background(), "/foo/bar", &req, &reply, cc); err == nil || ErrorDesc(err) != servers[i%numServers].port { t.Fatalf("Index %d: Invoke(_, _, _, _, _) = %v, want %s", i, err, servers[i%numServers].port) } } cc.Close() for i := 0; i < numServers; i++ { servers[i].stop() } } func TestCloseWithPendingRPC(t *testing.T) { servers, r := startServers(t, 1, math.MaxUint32) cc, err := Dial("foo.bar.com", WithBalancer(RoundRobin(r)), WithBlock(), WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("Failed to create ClientConn: %v", err) } var reply string if err := Invoke(context.Background(), "/foo/bar", &expectedRequest, &reply, cc, FailFast(false)); err != nil { t.Fatalf("grpc.Invoke(_, _, _, _, _) = %v, want %s", err, servers[0].port) } // Remove the server. updates := []*naming.Update{{ Op: naming.Delete, Addr: "localhost:" + servers[0].port, }} r.w.inject(updates) // Loop until the above update applies. 
for { ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond) if err := Invoke(ctx, "/foo/bar", &expectedRequest, &reply, cc, FailFast(false)); Code(err) == codes.DeadlineExceeded { cancel() break } time.Sleep(10 * time.Millisecond) cancel() } // Issue 2 RPCs which should be completed with error status once cc is closed. var wg sync.WaitGroup wg.Add(2) go func() { defer wg.Done() var reply string if err := Invoke(context.Background(), "/foo/bar", &expectedRequest, &reply, cc, FailFast(false)); err == nil { t.Errorf("grpc.Invoke(_, _, _, _, _) = %v, want not nil", err) } }() go func() { defer wg.Done() var reply string time.Sleep(5 * time.Millisecond) if err := Invoke(context.Background(), "/foo/bar", &expectedRequest, &reply, cc, FailFast(false)); err == nil { t.Errorf("grpc.Invoke(_, _, _, _, _) = %v, want not nil", err) } }() time.Sleep(5 * time.Millisecond) cc.Close() wg.Wait() servers[0].stop() } func TestGetOnWaitChannel(t *testing.T) { servers, r := startServers(t, 1, math.MaxUint32) cc, err := Dial("foo.bar.com", WithBalancer(RoundRobin(r)), WithBlock(), WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("Failed to create ClientConn: %v", err) } // Remove all servers so that all upcoming RPCs will block on waitCh. updates := []*naming.Update{{ Op: naming.Delete, Addr: "localhost:" + servers[0].port, }} r.w.inject(updates) for { var reply string ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond) if err := Invoke(ctx, "/foo/bar", &expectedRequest, &reply, cc, FailFast(false)); Code(err) == codes.DeadlineExceeded { cancel() break } cancel() time.Sleep(10 * time.Millisecond) } var wg sync.WaitGroup wg.Add(1) go func() { defer wg.Done() var reply string if err := Invoke(context.Background(), "/foo/bar", &expectedRequest, &reply, cc, FailFast(false)); err != nil { t.Errorf("grpc.Invoke(_, _, _, _, _) = %v, want ", err) } }() // Add a connected server to get the above RPC through. updates = []*naming.Update{{ Op: naming.Add, Addr: "localhost:" + servers[0].port, }} r.w.inject(updates) // Wait until the above RPC succeeds. wg.Wait() cc.Close() servers[0].stop() } func TestOneServerDown(t *testing.T) { // Start 2 servers. numServers := 2 servers, r := startServers(t, numServers, math.MaxUint32) cc, err := Dial("foo.bar.com", WithBalancer(RoundRobin(r)), WithBlock(), WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("Failed to create ClientConn: %v", err) } // Add servers[1] to the service discovery. var updates []*naming.Update updates = append(updates, &naming.Update{ Op: naming.Add, Addr: "localhost:" + servers[1].port, }) r.w.inject(updates) req := "port" var reply string // Loop until servers[1] is up for { if err := Invoke(context.Background(), "/foo/bar", &req, &reply, cc); err != nil && ErrorDesc(err) == servers[1].port { break } time.Sleep(10 * time.Millisecond) } var wg sync.WaitGroup numRPC := 100 sleepDuration := 10 * time.Millisecond wg.Add(1) go func() { time.Sleep(sleepDuration) // After sleepDuration, kill server[0]. servers[0].stop() wg.Done() }() // All non-failfast RPCs should not block because there's at least one connection available. for i := 0; i < numRPC; i++ { wg.Add(1) go func() { time.Sleep(sleepDuration) // After sleepDuration, invoke RPC. // server[0] is killed around the same time to make it racy between balancer and gRPC internals. 
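// The return value is deliberately ignored here: the point of this loop is only that
// non-failfast RPCs keep making progress while servers[0] is being shut down.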
Invoke(context.Background(), "/foo/bar", &req, &reply, cc, FailFast(false)) wg.Done() }() } wg.Wait() cc.Close() for i := 0; i < numServers; i++ { servers[i].stop() } } func TestOneAddressRemoval(t *testing.T) { // Start 2 servers. numServers := 2 servers, r := startServers(t, numServers, math.MaxUint32) cc, err := Dial("foo.bar.com", WithBalancer(RoundRobin(r)), WithBlock(), WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("Failed to create ClientConn: %v", err) } // Add servers[1] to the service discovery. var updates []*naming.Update updates = append(updates, &naming.Update{ Op: naming.Add, Addr: "localhost:" + servers[1].port, }) r.w.inject(updates) req := "port" var reply string // Loop until servers[1] is up for { if err := Invoke(context.Background(), "/foo/bar", &req, &reply, cc); err != nil && ErrorDesc(err) == servers[1].port { break } time.Sleep(10 * time.Millisecond) } var wg sync.WaitGroup numRPC := 100 sleepDuration := 10 * time.Millisecond wg.Add(1) go func() { time.Sleep(sleepDuration) // After sleepDuration, delete server[0]. var updates []*naming.Update updates = append(updates, &naming.Update{ Op: naming.Delete, Addr: "localhost:" + servers[0].port, }) r.w.inject(updates) wg.Done() }() // All non-failfast RPCs should not fail because there's at least one connection available. for i := 0; i < numRPC; i++ { wg.Add(1) go func() { var reply string time.Sleep(sleepDuration) // After sleepDuration, invoke RPC. // server[0] is removed around the same time to make it racy between balancer and gRPC internals. if err := Invoke(context.Background(), "/foo/bar", &expectedRequest, &reply, cc, FailFast(false)); err != nil { t.Errorf("grpc.Invoke(_, _, _, _, _) = %v, want not nil", err) } wg.Done() }() } wg.Wait() cc.Close() for i := 0; i < numServers; i++ { servers[i].stop() } } func checkServerUp(t *testing.T, currentServer *server) { req := "port" port := currentServer.port cc, err := Dial("localhost:"+port, WithBlock(), WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("Failed to create ClientConn: %v", err) } var reply string for { if err := Invoke(context.Background(), "/foo/bar", &req, &reply, cc); err != nil && ErrorDesc(err) == port { break } time.Sleep(10 * time.Millisecond) } cc.Close() } func TestPickFirstEmptyAddrs(t *testing.T) { servers, r := startServers(t, 1, math.MaxUint32) defer servers[0].stop() cc, err := Dial("foo.bar.com", WithBalancer(pickFirstBalancer(r)), WithBlock(), WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("Failed to create ClientConn: %v", err) } defer cc.Close() var reply string if err := Invoke(context.Background(), "/foo/bar", &expectedRequest, &reply, cc); err != nil || reply != expectedResponse { t.Fatalf("grpc.Invoke(_, _, _, _, _) = %v, reply = %q, want %q, ", err, reply, expectedResponse) } // Inject name resolution change to remove the server so that there is no address // available after that. u := &naming.Update{ Op: naming.Delete, Addr: "localhost:" + servers[0].port, } r.w.inject([]*naming.Update{u}) // Loop until the above updates apply. 
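// Keep issuing short-deadline RPCs; the first failure indicates that the balancer has
// observed the removal of the only server.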
for { time.Sleep(10 * time.Millisecond) ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond) if err := Invoke(ctx, "/foo/bar", &expectedRequest, &reply, cc); err != nil { cancel() break } cancel() } } func TestPickFirstCloseWithPendingRPC(t *testing.T) { servers, r := startServers(t, 1, math.MaxUint32) defer servers[0].stop() cc, err := Dial("foo.bar.com", WithBalancer(pickFirstBalancer(r)), WithBlock(), WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("Failed to create ClientConn: %v", err) } var reply string if err := Invoke(context.Background(), "/foo/bar", &expectedRequest, &reply, cc, FailFast(false)); err != nil { t.Fatalf("grpc.Invoke(_, _, _, _, _) = %v, want %s", err, servers[0].port) } // Remove the server. updates := []*naming.Update{{ Op: naming.Delete, Addr: "localhost:" + servers[0].port, }} r.w.inject(updates) // Loop until the above update applies. for { ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond) if err := Invoke(ctx, "/foo/bar", &expectedRequest, &reply, cc, FailFast(false)); Code(err) == codes.DeadlineExceeded { cancel() break } time.Sleep(10 * time.Millisecond) cancel() } // Issue 2 RPCs which should be completed with error status once cc is closed. var wg sync.WaitGroup wg.Add(2) go func() { defer wg.Done() var reply string if err := Invoke(context.Background(), "/foo/bar", &expectedRequest, &reply, cc, FailFast(false)); err == nil { t.Errorf("grpc.Invoke(_, _, _, _, _) = %v, want not nil", err) } }() go func() { defer wg.Done() var reply string time.Sleep(5 * time.Millisecond) if err := Invoke(context.Background(), "/foo/bar", &expectedRequest, &reply, cc, FailFast(false)); err == nil { t.Errorf("grpc.Invoke(_, _, _, _, _) = %v, want not nil", err) } }() time.Sleep(5 * time.Millisecond) cc.Close() wg.Wait() } func TestPickFirstOrderAllServerUp(t *testing.T) { // Start 3 servers on 3 ports. numServers := 3 servers, r := startServers(t, numServers, math.MaxUint32) for i := 0; i < numServers; i++ { defer servers[i].stop() } cc, err := Dial("foo.bar.com", WithBalancer(pickFirstBalancer(r)), WithBlock(), WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("Failed to create ClientConn: %v", err) } defer cc.Close() // Add servers[1] and [2] to the service discovery. 
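// The two addresses below are injected one at a time, each as its own watcher update.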
u := &naming.Update{ Op: naming.Add, Addr: "localhost:" + servers[1].port, } r.w.inject([]*naming.Update{u}) u = &naming.Update{ Op: naming.Add, Addr: "localhost:" + servers[2].port, } r.w.inject([]*naming.Update{u}) // Loop until all 3 servers are up checkServerUp(t, servers[0]) checkServerUp(t, servers[1]) checkServerUp(t, servers[2]) // Check the incoming RPCs served in server[0] req := "port" var reply string for i := 0; i < 20; i++ { if err := Invoke(context.Background(), "/foo/bar", &req, &reply, cc); err == nil || ErrorDesc(err) != servers[0].port { t.Fatalf("Index %d: Invoke(_, _, _, _, _) = %v, want %s", 0, err, servers[0].port) } time.Sleep(10 * time.Millisecond) } // Delete server[0] in the balancer, the incoming RPCs served in server[1] // For test addrconn, close server[0] instead u = &naming.Update{ Op: naming.Delete, Addr: "localhost:" + servers[0].port, } r.w.inject([]*naming.Update{u}) // Loop until it changes to server[1] for { if err := Invoke(context.Background(), "/foo/bar", &req, &reply, cc); err != nil && ErrorDesc(err) == servers[1].port { break } time.Sleep(10 * time.Millisecond) } for i := 0; i < 20; i++ { if err := Invoke(context.Background(), "/foo/bar", &req, &reply, cc); err == nil || ErrorDesc(err) != servers[1].port { t.Fatalf("Index %d: Invoke(_, _, _, _, _) = %v, want %s", 1, err, servers[1].port) } time.Sleep(10 * time.Millisecond) } // Add server[0] back to the balancer, the incoming RPCs served in server[1] // Add is append operation, the order of Notify now is {server[1].port server[2].port server[0].port} u = &naming.Update{ Op: naming.Add, Addr: "localhost:" + servers[0].port, } r.w.inject([]*naming.Update{u}) for i := 0; i < 20; i++ { if err := Invoke(context.Background(), "/foo/bar", &req, &reply, cc); err == nil || ErrorDesc(err) != servers[1].port { t.Fatalf("Index %d: Invoke(_, _, _, _, _) = %v, want %s", 1, err, servers[1].port) } time.Sleep(10 * time.Millisecond) } // Delete server[1] in the balancer, the incoming RPCs served in server[2] u = &naming.Update{ Op: naming.Delete, Addr: "localhost:" + servers[1].port, } r.w.inject([]*naming.Update{u}) for { if err := Invoke(context.Background(), "/foo/bar", &req, &reply, cc); err != nil && ErrorDesc(err) == servers[2].port { break } time.Sleep(1 * time.Second) } for i := 0; i < 20; i++ { if err := Invoke(context.Background(), "/foo/bar", &req, &reply, cc); err == nil || ErrorDesc(err) != servers[2].port { t.Fatalf("Index %d: Invoke(_, _, _, _, _) = %v, want %s", 2, err, servers[2].port) } time.Sleep(10 * time.Millisecond) } // Delete server[2] in the balancer, the incoming RPCs served in server[0] u = &naming.Update{ Op: naming.Delete, Addr: "localhost:" + servers[2].port, } r.w.inject([]*naming.Update{u}) for { if err := Invoke(context.Background(), "/foo/bar", &req, &reply, cc); err != nil && ErrorDesc(err) == servers[0].port { break } time.Sleep(1 * time.Second) } for i := 0; i < 20; i++ { if err := Invoke(context.Background(), "/foo/bar", &req, &reply, cc); err == nil || ErrorDesc(err) != servers[0].port { t.Fatalf("Index %d: Invoke(_, _, _, _, _) = %v, want %s", 2, err, servers[2].port) } time.Sleep(10 * time.Millisecond) } } func TestPickFirstOrderOneServerDown(t *testing.T) { // Start 3 servers on 3 ports. 
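// This test checks that pickfirst sticks with the address it is currently using: it only
// fails over when that server goes down or is removed, and it does not switch back when
// an earlier address becomes available again.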
numServers := 3 servers, r := startServers(t, numServers, math.MaxUint32) for i := 0; i < numServers; i++ { defer servers[i].stop() } cc, err := Dial("foo.bar.com", WithBalancer(pickFirstBalancer(r)), WithBlock(), WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("Failed to create ClientConn: %v", err) } defer cc.Close() // Add servers[1] and [2] to the service discovery. u := &naming.Update{ Op: naming.Add, Addr: "localhost:" + servers[1].port, } r.w.inject([]*naming.Update{u}) u = &naming.Update{ Op: naming.Add, Addr: "localhost:" + servers[2].port, } r.w.inject([]*naming.Update{u}) // Loop until all 3 servers are up checkServerUp(t, servers[0]) checkServerUp(t, servers[1]) checkServerUp(t, servers[2]) // Check the incoming RPCs served in server[0] req := "port" var reply string for i := 0; i < 20; i++ { if err := Invoke(context.Background(), "/foo/bar", &req, &reply, cc); err == nil || ErrorDesc(err) != servers[0].port { t.Fatalf("Index %d: Invoke(_, _, _, _, _) = %v, want %s", 0, err, servers[0].port) } time.Sleep(10 * time.Millisecond) } // server[0] down, incoming RPCs served in server[1], but the order of Notify still remains // {server[0] server[1] server[2]} servers[0].stop() // Loop until it changes to server[1] for { if err := Invoke(context.Background(), "/foo/bar", &req, &reply, cc); err != nil && ErrorDesc(err) == servers[1].port { break } time.Sleep(10 * time.Millisecond) } for i := 0; i < 20; i++ { if err := Invoke(context.Background(), "/foo/bar", &req, &reply, cc); err == nil || ErrorDesc(err) != servers[1].port { t.Fatalf("Index %d: Invoke(_, _, _, _, _) = %v, want %s", 1, err, servers[1].port) } time.Sleep(10 * time.Millisecond) } // up the server[0] back, the incoming RPCs served in server[1] p, _ := strconv.Atoi(servers[0].port) servers[0] = newTestServer() go servers[0].start(t, p, math.MaxUint32) servers[0].wait(t, 2*time.Second) checkServerUp(t, servers[0]) for i := 0; i < 20; i++ { if err := Invoke(context.Background(), "/foo/bar", &req, &reply, cc); err == nil || ErrorDesc(err) != servers[1].port { t.Fatalf("Index %d: Invoke(_, _, _, _, _) = %v, want %s", 1, err, servers[1].port) } time.Sleep(10 * time.Millisecond) } // Delete server[1] in the balancer, the incoming RPCs served in server[0] u = &naming.Update{ Op: naming.Delete, Addr: "localhost:" + servers[1].port, } r.w.inject([]*naming.Update{u}) for { if err := Invoke(context.Background(), "/foo/bar", &req, &reply, cc); err != nil && ErrorDesc(err) == servers[0].port { break } time.Sleep(1 * time.Second) } for i := 0; i < 20; i++ { if err := Invoke(context.Background(), "/foo/bar", &req, &reply, cc); err == nil || ErrorDesc(err) != servers[0].port { t.Fatalf("Index %d: Invoke(_, _, _, _, _) = %v, want %s", 0, err, servers[0].port) } time.Sleep(10 * time.Millisecond) } } func TestPickFirstOneAddressRemoval(t *testing.T) { // Start 2 servers. numServers := 2 servers, r := startServers(t, numServers, math.MaxUint32) for i := 0; i < numServers; i++ { defer servers[i].stop() } cc, err := Dial("localhost:"+servers[0].port, WithBalancer(pickFirstBalancer(r)), WithBlock(), WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("Failed to create ClientConn: %v", err) } defer cc.Close() // Add servers[1] to the service discovery. 
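// The resolver already starts out pointing at servers[0] (startServers seeds it with that
// address); the update below makes servers[1] known to the balancer as well before the
// removal is exercised.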
var updates []*naming.Update updates = append(updates, &naming.Update{ Op: naming.Add, Addr: "localhost:" + servers[1].port, }) r.w.inject(updates) // Create a new cc to Loop until servers[1] is up checkServerUp(t, servers[0]) checkServerUp(t, servers[1]) var wg sync.WaitGroup numRPC := 100 sleepDuration := 10 * time.Millisecond wg.Add(1) go func() { time.Sleep(sleepDuration) // After sleepDuration, delete server[0]. var updates []*naming.Update updates = append(updates, &naming.Update{ Op: naming.Delete, Addr: "localhost:" + servers[0].port, }) r.w.inject(updates) wg.Done() }() // All non-failfast RPCs should not fail because there's at least one connection available. for i := 0; i < numRPC; i++ { wg.Add(1) go func() { var reply string time.Sleep(sleepDuration) // After sleepDuration, invoke RPC. // server[0] is removed around the same time to make it racy between balancer and gRPC internals. if err := Invoke(context.Background(), "/foo/bar", &expectedRequest, &reply, cc, FailFast(false)); err != nil { t.Errorf("grpc.Invoke(_, _, _, _, _) = %v, want not nil", err) } wg.Done() }() } wg.Wait() } golang-google-grpc-1.6.0/benchmark/000077500000000000000000000000001315416461300171265ustar00rootroot00000000000000golang-google-grpc-1.6.0/benchmark/benchmain/000077500000000000000000000000001315416461300210525ustar00rootroot00000000000000golang-google-grpc-1.6.0/benchmark/benchmain/main.go000066400000000000000000000342021315416461300223260ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ /* Package main provides benchmark with setting flags. An example to run some benchmarks with profiling enabled: go run benchmark/benchmain/main.go -benchtime=10s -workloads=all \ -compression=on -maxConcurrentCalls=1 -traceMode=false \ -reqSizeBytes=1,1048576 -respSizeBytes=1,1048576 \ -latency=0s -kbps=0 -mtu=0 \ -cpuProfile=cpuProf -memProfile=memProf -memProfileRate=10000 */ package main import ( "errors" "flag" "fmt" "io" "io/ioutil" "log" "net" "os" "reflect" "runtime" "runtime/pprof" "strconv" "strings" "sync" "sync/atomic" "testing" "time" "golang.org/x/net/context" "google.golang.org/grpc" bm "google.golang.org/grpc/benchmark" testpb "google.golang.org/grpc/benchmark/grpc_testing" "google.golang.org/grpc/benchmark/latency" "google.golang.org/grpc/benchmark/stats" "google.golang.org/grpc/grpclog" ) const ( compressionOn = "on" compressionOff = "off" compressionBoth = "both" ) var allCompressionModes = []string{compressionOn, compressionOff, compressionBoth} const ( workloadsUnary = "unary" workloadsStreaming = "streaming" workloadsAll = "all" ) var allWorkloads = []string{workloadsUnary, workloadsStreaming, workloadsAll} var ( runMode = []bool{true, true} // {runUnary, runStream} // When set the latency to 0 (no delay), the result is slower than the real result with no delay // because latency simulation section has extra operations ltc = []time.Duration{0, 40 * time.Millisecond} // if non-positive, no delay. 
kbps = []int{0, 10240} // if non-positive, infinite mtu = []int{0} // if non-positive, infinite maxConcurrentCalls = []int{1, 8, 64, 512} reqSizeBytes = []int{1, 1024, 1024 * 1024} respSizeBytes = []int{1, 1024, 1024 * 1024} enableTrace = []bool{false} benchtime time.Duration memProfile, cpuProfile string memProfileRate int enableCompressor []bool ) func unaryBenchmark(startTimer func(), stopTimer func(int32), benchFeatures bm.Features, benchtime time.Duration, s *stats.Stats) { caller, close := makeFuncUnary(benchFeatures) defer close() runBenchmark(caller, startTimer, stopTimer, benchFeatures, benchtime, s) } func streamBenchmark(startTimer func(), stopTimer func(int32), benchFeatures bm.Features, benchtime time.Duration, s *stats.Stats) { caller, close := makeFuncStream(benchFeatures) defer close() runBenchmark(caller, startTimer, stopTimer, benchFeatures, benchtime, s) } func makeFuncUnary(benchFeatures bm.Features) (func(int), func()) { nw := &latency.Network{Kbps: benchFeatures.Kbps, Latency: benchFeatures.Latency, MTU: benchFeatures.Mtu} opts := []grpc.DialOption{} sopts := []grpc.ServerOption{} if benchFeatures.EnableCompressor { sopts = append(sopts, grpc.RPCCompressor(nopCompressor{}), grpc.RPCDecompressor(nopDecompressor{}), ) opts = append(opts, grpc.WithCompressor(nopCompressor{}), grpc.WithDecompressor(nopDecompressor{}), ) } sopts = append(sopts, grpc.MaxConcurrentStreams(uint32(benchFeatures.MaxConcurrentCalls+1))) opts = append(opts, grpc.WithDialer(func(address string, timeout time.Duration) (net.Conn, error) { return nw.TimeoutDialer(net.DialTimeout)("tcp", address, timeout) })) opts = append(opts, grpc.WithInsecure()) target, stopper := bm.StartServer(bm.ServerInfo{Addr: "localhost:0", Type: "protobuf", Network: nw}, sopts...) conn := bm.NewClientConn(target, opts...) tc := testpb.NewBenchmarkServiceClient(conn) return func(int) { unaryCaller(tc, benchFeatures.ReqSizeBytes, benchFeatures.RespSizeBytes) }, func() { conn.Close() stopper() } } func makeFuncStream(benchFeatures bm.Features) (func(int), func()) { fmt.Println(benchFeatures) nw := &latency.Network{Kbps: benchFeatures.Kbps, Latency: benchFeatures.Latency, MTU: benchFeatures.Mtu} opts := []grpc.DialOption{} sopts := []grpc.ServerOption{} if benchFeatures.EnableCompressor { sopts = append(sopts, grpc.RPCCompressor(grpc.NewGZIPCompressor()), grpc.RPCDecompressor(grpc.NewGZIPDecompressor()), ) opts = append(opts, grpc.WithCompressor(grpc.NewGZIPCompressor()), grpc.WithDecompressor(grpc.NewGZIPDecompressor()), ) } sopts = append(sopts, grpc.MaxConcurrentStreams(uint32(benchFeatures.MaxConcurrentCalls+1))) opts = append(opts, grpc.WithDialer(func(address string, timeout time.Duration) (net.Conn, error) { return nw.TimeoutDialer(net.DialTimeout)("tcp", address, timeout) })) opts = append(opts, grpc.WithInsecure()) target, stopper := bm.StartServer(bm.ServerInfo{Addr: "localhost:0", Type: "protobuf", Network: nw}, sopts...) conn := bm.NewClientConn(target, opts...) 
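// One stream is created up front per concurrent caller, so each worker index reuses its
// own long-lived stream for the whole benchmark run.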
tc := testpb.NewBenchmarkServiceClient(conn) streams := make([]testpb.BenchmarkService_StreamingCallClient, benchFeatures.MaxConcurrentCalls) for i := 0; i < benchFeatures.MaxConcurrentCalls; i++ { stream, err := tc.StreamingCall(context.Background()) if err != nil { grpclog.Fatalf("%v.StreamingCall(_) = _, %v", tc, err) } streams[i] = stream } return func(pos int) { streamCaller(streams[pos], benchFeatures.ReqSizeBytes, benchFeatures.RespSizeBytes) }, func() { conn.Close() stopper() } } func unaryCaller(client testpb.BenchmarkServiceClient, reqSize, respSize int) { if err := bm.DoUnaryCall(client, reqSize, respSize); err != nil { grpclog.Fatalf("DoUnaryCall failed: %v", err) } } func streamCaller(stream testpb.BenchmarkService_StreamingCallClient, reqSize, respSize int) { if err := bm.DoStreamingRoundTrip(stream, reqSize, respSize); err != nil { grpclog.Fatalf("DoStreamingRoundTrip failed: %v", err) } } func runBenchmark(caller func(int), startTimer func(), stopTimer func(int32), benchFeatures bm.Features, benchtime time.Duration, s *stats.Stats) { // Warm up connection. for i := 0; i < 10; i++ { caller(0) } // Run benchmark. startTimer() var ( mu sync.Mutex wg sync.WaitGroup ) wg.Add(benchFeatures.MaxConcurrentCalls) bmEnd := time.Now().Add(benchtime) var count int32 for i := 0; i < benchFeatures.MaxConcurrentCalls; i++ { go func(pos int) { for { t := time.Now() if t.After(bmEnd) { break } start := time.Now() caller(pos) elapse := time.Since(start) atomic.AddInt32(&count, 1) mu.Lock() s.Add(elapse) mu.Unlock() } wg.Done() }(i) } wg.Wait() stopTimer(count) } // Initiate main function to get settings of features. func init() { var ( workloads, compressorMode, readLatency string readKbps, readMtu, readMaxConcurrentCalls intSliceType readReqSizeBytes, readRespSizeBytes intSliceType traceMode bool ) flag.StringVar(&workloads, "workloads", workloadsAll, fmt.Sprintf("Workloads to execute - One of: %v", strings.Join(allWorkloads, ", "))) flag.BoolVar(&traceMode, "traceMode", false, "Enable gRPC tracing") flag.StringVar(&readLatency, "latency", "", "Simulated one-way network latency - may be a comma-separated list") flag.DurationVar(&benchtime, "benchtime", time.Second, "Configures the amount of time to run each benchmark") flag.Var(&readKbps, "kbps", "Simulated network throughput (in kbps) - may be a comma-separated list") flag.Var(&readMtu, "mtu", "Simulated network MTU (Maximum Transmission Unit) - may be a comma-separated list") flag.Var(&readMaxConcurrentCalls, "maxConcurrentCalls", "Number of concurrent RPCs during benchmarks") flag.Var(&readReqSizeBytes, "reqSizeBytes", "Request size in bytes - may be a comma-separated list") flag.Var(&readRespSizeBytes, "respSizeBytes", "Response size in bytes - may be a comma-separated list") flag.StringVar(&memProfile, "memProfile", "", "Enables memory profiling output to the filename provided") flag.IntVar(&memProfileRate, "memProfileRate", 0, "Configures the memory profiling rate") flag.StringVar(&cpuProfile, "cpuProfile", "", "Enables CPU profiling output to the filename provided") flag.StringVar(&compressorMode, "compression", compressionOff, fmt.Sprintf("Compression mode - One of: %v", strings.Join(allCompressionModes, ", "))) flag.Parse() if flag.NArg() != 0 { log.Fatal("Error: unparsed arguments: ", flag.Args()) } switch workloads { case workloadsUnary: runMode[0] = true runMode[1] = false case workloadsStreaming: runMode[0] = false runMode[1] = true case workloadsAll: runMode[0] = true runMode[1] = true default: log.Fatalf("Unknown workloads 
setting: %v (want one of: %v)", workloads, strings.Join(allWorkloads, ", ")) } switch compressorMode { case compressionOn: enableCompressor = []bool{true} case compressionOff: enableCompressor = []bool{false} case compressionBoth: enableCompressor = []bool{false, true} default: log.Fatalf("Unknown compression mode setting: %v (want one of: %v)", compressorMode, strings.Join(allCompressionModes, ", ")) } if traceMode { enableTrace = []bool{true} } // Time input formats as (time + unit). readTimeFromInput(<c, readLatency) readIntFromIntSlice(&kbps, readKbps) readIntFromIntSlice(&mtu, readMtu) readIntFromIntSlice(&maxConcurrentCalls, readMaxConcurrentCalls) readIntFromIntSlice(&reqSizeBytes, readReqSizeBytes) readIntFromIntSlice(&respSizeBytes, readRespSizeBytes) } type intSliceType []int func (intSlice *intSliceType) String() string { return fmt.Sprintf("%v", *intSlice) } func (intSlice *intSliceType) Set(value string) error { if len(*intSlice) > 0 { return errors.New("interval flag already set") } for _, num := range strings.Split(value, ",") { next, err := strconv.Atoi(num) if err != nil { return err } *intSlice = append(*intSlice, next) } return nil } func readIntFromIntSlice(values *[]int, replace intSliceType) { // If not set replace in the flag, just return to run the default settings. if len(replace) == 0 { return } *values = replace } func readTimeFromInput(values *[]time.Duration, replace string) { if strings.Compare(replace, "") != 0 { *values = []time.Duration{} for _, ltc := range strings.Split(replace, ",") { duration, err := time.ParseDuration(ltc) if err != nil { log.Fatal(err.Error()) } *values = append(*values, duration) } } } func main() { before() featuresPos := make([]int, 8) // 0:enableTracing 1:ltc 2:kbps 3:mtu 4:maxC 5:reqSize 6:respSize featuresNum := []int{len(enableTrace), len(ltc), len(kbps), len(mtu), len(maxConcurrentCalls), len(reqSizeBytes), len(respSizeBytes), len(enableCompressor)} initalPos := make([]int, len(featuresPos)) s := stats.NewStats(10) var memStats runtime.MemStats var results testing.BenchmarkResult var startAllocs, startBytes uint64 var startTime time.Time start := true var startTimer = func() { runtime.ReadMemStats(&memStats) startAllocs = memStats.Mallocs startBytes = memStats.TotalAlloc startTime = time.Now() } var stopTimer = func(count int32) { runtime.ReadMemStats(&memStats) results = testing.BenchmarkResult{N: int(count), T: time.Now().Sub(startTime), Bytes: 0, MemAllocs: memStats.Mallocs - startAllocs, MemBytes: memStats.TotalAlloc - startBytes} } // Run benchmarks for !reflect.DeepEqual(featuresPos, initalPos) || start { start = false tracing := "Trace" if !enableTrace[featuresPos[0]] { tracing = "noTrace" } benchFeature := bm.Features{ EnableTrace: enableTrace[featuresPos[0]], Latency: ltc[featuresPos[1]], Kbps: kbps[featuresPos[2]], Mtu: mtu[featuresPos[3]], MaxConcurrentCalls: maxConcurrentCalls[featuresPos[4]], ReqSizeBytes: reqSizeBytes[featuresPos[5]], RespSizeBytes: respSizeBytes[featuresPos[6]], EnableCompressor: enableCompressor[featuresPos[7]], } grpc.EnableTracing = enableTrace[featuresPos[0]] if runMode[0] { fmt.Printf("Unary-%s-%s:\n", tracing, benchFeature.String()) unaryBenchmark(startTimer, stopTimer, benchFeature, benchtime, s) fmt.Println(results.String(), results.MemString()) fmt.Println(s.String()) s.Clear() } if runMode[1] { fmt.Printf("Stream-%s-%s\n", tracing, benchFeature.String()) streamBenchmark(startTimer, stopTimer, benchFeature, benchtime, s) fmt.Println(results.String(), results.MemString()) 
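// Print the latency histogram collected for this run, then clear it before the next
// feature combination.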
fmt.Println(s.String()) s.Clear() } bm.AddOne(featuresPos, featuresNum) } after() } func before() { if memProfileRate > 0 { runtime.MemProfileRate = memProfileRate } if cpuProfile != "" { f, err := os.Create(cpuProfile) if err != nil { fmt.Fprintf(os.Stderr, "testing: %s\n", err) return } if err := pprof.StartCPUProfile(f); err != nil { fmt.Fprintf(os.Stderr, "testing: can't start cpu profile: %s\n", err) f.Close() return } } } func after() { if cpuProfile != "" { pprof.StopCPUProfile() // flushes profile to disk } if memProfile != "" { f, err := os.Create(memProfile) if err != nil { fmt.Fprintf(os.Stderr, "testing: %s\n", err) os.Exit(2) } runtime.GC() // materialize all statistics if err = pprof.WriteHeapProfile(f); err != nil { fmt.Fprintf(os.Stderr, "testing: can't write %s: %s\n", memProfile, err) os.Exit(2) } f.Close() } } // nopCompressor is a compressor that just copies data. type nopCompressor struct{} func (nopCompressor) Do(w io.Writer, p []byte) error { n, err := w.Write(p) if err != nil { return err } if n != len(p) { return fmt.Errorf("nopCompressor.Write: wrote %v bytes; want %v", n, len(p)) } return nil } func (nopCompressor) Type() string { return "nop" } // nopDecompressor is a decompressor that just copies data. type nopDecompressor struct{} func (nopDecompressor) Do(r io.Reader) ([]byte, error) { return ioutil.ReadAll(r) } func (nopDecompressor) Type() string { return "nop" } golang-google-grpc-1.6.0/benchmark/benchmark.go000066400000000000000000000263461315416461300214220ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ //go:generate protoc -I grpc_testing --go_out=plugins=grpc:grpc_testing grpc_testing/control.proto grpc_testing/messages.proto grpc_testing/payloads.proto grpc_testing/services.proto grpc_testing/stats.proto /* Package benchmark implements the building blocks to setup end-to-end gRPC benchmarks. */ package benchmark import ( "fmt" "io" "net" "sync" "testing" "time" "golang.org/x/net/context" "google.golang.org/grpc" testpb "google.golang.org/grpc/benchmark/grpc_testing" "google.golang.org/grpc/benchmark/latency" "google.golang.org/grpc/benchmark/stats" "google.golang.org/grpc/grpclog" ) // Features contains most fields for a benchmark type Features struct { EnableTrace bool Latency time.Duration Kbps int Mtu int MaxConcurrentCalls int ReqSizeBytes int RespSizeBytes int EnableCompressor bool } func (f Features) String() string { return fmt.Sprintf("latency_%s-kbps_%#v-MTU_%#v-maxConcurrentCalls_"+ "%#v-reqSize_%#vB-respSize_%#vB-Compressor_%t", f.Latency.String(), f.Kbps, f.Mtu, f.MaxConcurrentCalls, f.ReqSizeBytes, f.RespSizeBytes, f.EnableCompressor) } // AddOne add 1 to the features slice func AddOne(features []int, featuresMaxPosition []int) { for i := len(features) - 1; i >= 0; i-- { features[i] = (features[i] + 1) if features[i]/featuresMaxPosition[i] == 0 { break } features[i] = features[i] % featuresMaxPosition[i] } } // Allows reuse of the same testpb.Payload object. 
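// For example, the streaming handler below calls setPayload on the response it already
// holds for every message instead of allocating a new Payload each time.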
func setPayload(p *testpb.Payload, t testpb.PayloadType, size int) { if size < 0 { grpclog.Fatalf("Requested a response with invalid length %d", size) } body := make([]byte, size) switch t { case testpb.PayloadType_COMPRESSABLE: case testpb.PayloadType_UNCOMPRESSABLE: grpclog.Fatalf("PayloadType UNCOMPRESSABLE is not supported") default: grpclog.Fatalf("Unsupported payload type: %d", t) } p.Type = t p.Body = body return } func newPayload(t testpb.PayloadType, size int) *testpb.Payload { p := new(testpb.Payload) setPayload(p, t, size) return p } type testServer struct { } func (s *testServer) UnaryCall(ctx context.Context, in *testpb.SimpleRequest) (*testpb.SimpleResponse, error) { return &testpb.SimpleResponse{ Payload: newPayload(in.ResponseType, int(in.ResponseSize)), }, nil } func (s *testServer) StreamingCall(stream testpb.BenchmarkService_StreamingCallServer) error { response := &testpb.SimpleResponse{ Payload: new(testpb.Payload), } in := new(testpb.SimpleRequest) for { // use ServerStream directly to reuse the same testpb.SimpleRequest object err := stream.(grpc.ServerStream).RecvMsg(in) if err == io.EOF { // read done. return nil } if err != nil { return err } setPayload(response.Payload, in.ResponseType, int(in.ResponseSize)) if err := stream.Send(response); err != nil { return err } } } // byteBufServer is a gRPC server that sends and receives byte buffer. // The purpose is to benchmark the gRPC performance without protobuf serialization/deserialization overhead. type byteBufServer struct { respSize int32 } // UnaryCall is an empty function and is not used for benchmark. // If bytebuf UnaryCall benchmark is needed later, the function body needs to be updated. func (s *byteBufServer) UnaryCall(ctx context.Context, in *testpb.SimpleRequest) (*testpb.SimpleResponse, error) { return &testpb.SimpleResponse{}, nil } func (s *byteBufServer) StreamingCall(stream testpb.BenchmarkService_StreamingCallServer) error { for { var in []byte err := stream.(grpc.ServerStream).RecvMsg(&in) if err == io.EOF { return nil } if err != nil { return err } out := make([]byte, s.respSize) if err := stream.(grpc.ServerStream).SendMsg(&out); err != nil { return err } } } // ServerInfo contains the information to create a gRPC benchmark server. type ServerInfo struct { // Addr is the address of the server. Addr string // Type is the type of the server. // It should be "protobuf" or "bytebuf". Type string // Metadata is an optional configuration. // For "protobuf", it's ignored. // For "bytebuf", it should be an int representing response size. Metadata interface{} // Network can simulate latency Network *latency.Network } // StartServer starts a gRPC server serving a benchmark service according to info. // It returns its listen address and a function to stop the server. func StartServer(info ServerInfo, opts ...grpc.ServerOption) (string, func()) { lis, err := net.Listen("tcp", info.Addr) if err != nil { grpclog.Fatalf("Failed to listen: %v", err) } nw := info.Network if nw != nil { lis = nw.Listener(lis) } s := grpc.NewServer(opts...) 
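// Register the service implementation selected by info.Type; the "bytebuf" variant also
// needs the response size carried in info.Metadata.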
switch info.Type { case "protobuf": testpb.RegisterBenchmarkServiceServer(s, &testServer{}) case "bytebuf": respSize, ok := info.Metadata.(int32) if !ok { grpclog.Fatalf("failed to StartServer, invalid metadata: %v, for Type: %v", info.Metadata, info.Type) } testpb.RegisterBenchmarkServiceServer(s, &byteBufServer{respSize: respSize}) default: grpclog.Fatalf("failed to StartServer, unknown Type: %v", info.Type) } go s.Serve(lis) return lis.Addr().String(), func() { s.Stop() } } // DoUnaryCall performs an unary RPC with given stub and request and response sizes. func DoUnaryCall(tc testpb.BenchmarkServiceClient, reqSize, respSize int) error { pl := newPayload(testpb.PayloadType_COMPRESSABLE, reqSize) req := &testpb.SimpleRequest{ ResponseType: pl.Type, ResponseSize: int32(respSize), Payload: pl, } if _, err := tc.UnaryCall(context.Background(), req); err != nil { return fmt.Errorf("/BenchmarkService/UnaryCall(_, _) = _, %v, want _, ", err) } return nil } // DoStreamingRoundTrip performs a round trip for a single streaming rpc. func DoStreamingRoundTrip(stream testpb.BenchmarkService_StreamingCallClient, reqSize, respSize int) error { pl := newPayload(testpb.PayloadType_COMPRESSABLE, reqSize) req := &testpb.SimpleRequest{ ResponseType: pl.Type, ResponseSize: int32(respSize), Payload: pl, } if err := stream.Send(req); err != nil { return fmt.Errorf("/BenchmarkService/StreamingCall.Send(_) = %v, want ", err) } if _, err := stream.Recv(); err != nil { // EOF is a valid error here. if err == io.EOF { return nil } return fmt.Errorf("/BenchmarkService/StreamingCall.Recv(_) = %v, want ", err) } return nil } // DoByteBufStreamingRoundTrip performs a round trip for a single streaming rpc, using a custom codec for byte buffer. func DoByteBufStreamingRoundTrip(stream testpb.BenchmarkService_StreamingCallClient, reqSize, respSize int) error { out := make([]byte, reqSize) if err := stream.(grpc.ClientStream).SendMsg(&out); err != nil { return fmt.Errorf("/BenchmarkService/StreamingCall.(ClientStream).SendMsg(_) = %v, want ", err) } var in []byte if err := stream.(grpc.ClientStream).RecvMsg(&in); err != nil { // EOF is a valid error here. if err == io.EOF { return nil } return fmt.Errorf("/BenchmarkService/StreamingCall.(ClientStream).RecvMsg(_) = %v, want ", err) } return nil } // NewClientConn creates a gRPC client connection to addr. func NewClientConn(addr string, opts ...grpc.DialOption) *grpc.ClientConn { conn, err := grpc.Dial(addr, opts...) if err != nil { grpclog.Fatalf("NewClientConn(%q) failed to create a ClientConn %v", addr, err) } return conn } func runUnary(b *testing.B, benchFeatures Features) { s := stats.AddStats(b, 38) nw := &latency.Network{Kbps: benchFeatures.Kbps, Latency: benchFeatures.Latency, MTU: benchFeatures.Mtu} target, stopper := StartServer(ServerInfo{Addr: "localhost:0", Type: "protobuf", Network: nw}, grpc.MaxConcurrentStreams(uint32(benchFeatures.MaxConcurrentCalls+1))) defer stopper() conn := NewClientConn( target, grpc.WithInsecure(), grpc.WithDialer(func(address string, timeout time.Duration) (net.Conn, error) { return nw.TimeoutDialer(net.DialTimeout)("tcp", address, timeout) }), ) tc := testpb.NewBenchmarkServiceClient(conn) // Warm up connection. for i := 0; i < 10; i++ { unaryCaller(tc, benchFeatures.ReqSizeBytes, benchFeatures.RespSizeBytes) } ch := make(chan int, benchFeatures.MaxConcurrentCalls*4) var ( mu sync.Mutex wg sync.WaitGroup ) wg.Add(benchFeatures.MaxConcurrentCalls) // Distribute the b.N calls over maxConcurrentCalls workers. 
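// The main goroutine feeds exactly b.N tokens into ch between ResetTimer and StopTimer,
// so the measured time covers the b.N unary calls spread across the workers.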
for i := 0; i < benchFeatures.MaxConcurrentCalls; i++ { go func() { for range ch { start := time.Now() unaryCaller(tc, benchFeatures.ReqSizeBytes, benchFeatures.RespSizeBytes) elapse := time.Since(start) mu.Lock() s.Add(elapse) mu.Unlock() } wg.Done() }() } b.ResetTimer() for i := 0; i < b.N; i++ { ch <- i } close(ch) wg.Wait() b.StopTimer() conn.Close() } func runStream(b *testing.B, benchFeatures Features) { s := stats.AddStats(b, 38) nw := &latency.Network{Kbps: benchFeatures.Kbps, Latency: benchFeatures.Latency, MTU: benchFeatures.Mtu} target, stopper := StartServer(ServerInfo{Addr: "localhost:0", Type: "protobuf", Network: nw}, grpc.MaxConcurrentStreams(uint32(benchFeatures.MaxConcurrentCalls+1))) defer stopper() conn := NewClientConn( target, grpc.WithInsecure(), grpc.WithDialer(func(address string, timeout time.Duration) (net.Conn, error) { return nw.TimeoutDialer(net.DialTimeout)("tcp", address, timeout) }), ) tc := testpb.NewBenchmarkServiceClient(conn) // Warm up connection. stream, err := tc.StreamingCall(context.Background()) if err != nil { b.Fatalf("%v.StreamingCall(_) = _, %v", tc, err) } for i := 0; i < 10; i++ { streamCaller(stream, benchFeatures.ReqSizeBytes, benchFeatures.RespSizeBytes) } ch := make(chan struct{}, benchFeatures.MaxConcurrentCalls*4) var ( mu sync.Mutex wg sync.WaitGroup ) wg.Add(benchFeatures.MaxConcurrentCalls) // Distribute the b.N calls over maxConcurrentCalls workers. for i := 0; i < benchFeatures.MaxConcurrentCalls; i++ { stream, err := tc.StreamingCall(context.Background()) if err != nil { b.Fatalf("%v.StreamingCall(_) = _, %v", tc, err) } go func() { for range ch { start := time.Now() streamCaller(stream, benchFeatures.ReqSizeBytes, benchFeatures.RespSizeBytes) elapse := time.Since(start) mu.Lock() s.Add(elapse) mu.Unlock() } wg.Done() }() } b.ResetTimer() for i := 0; i < b.N; i++ { ch <- struct{}{} } close(ch) wg.Wait() b.StopTimer() conn.Close() } func unaryCaller(client testpb.BenchmarkServiceClient, reqSize, respSize int) { if err := DoUnaryCall(client, reqSize, respSize); err != nil { grpclog.Fatalf("DoUnaryCall failed: %v", err) } } func streamCaller(stream testpb.BenchmarkService_StreamingCallClient, reqSize, respSize int) { if err := DoStreamingRoundTrip(stream, reqSize, respSize); err != nil { grpclog.Fatalf("DoStreamingRoundTrip failed: %v", err) } } golang-google-grpc-1.6.0/benchmark/benchmark16_test.go000066400000000000000000000056361315416461300226270ustar00rootroot00000000000000// +build go1.6,!go1.7 /* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package benchmark import ( "os" "testing" "google.golang.org/grpc" "google.golang.org/grpc/benchmark/stats" ) func BenchmarkClientStreamc1(b *testing.B) { grpc.EnableTracing = true runStream(b, Features{true, 0, 0, 0, 1, 1, 1, false}) } func BenchmarkClientStreamc8(b *testing.B) { grpc.EnableTracing = true runStream(b, Features{true, 0, 0, 0, 8, 1, 1, false}) } func BenchmarkClientStreamc64(b *testing.B) { grpc.EnableTracing = true runStream(b, Features{true, 0, 0, 0, 64, 1, 1, false}) } func BenchmarkClientStreamc512(b *testing.B) { grpc.EnableTracing = true runStream(b, Features{true, 0, 0, 0, 512, 1, 1, false}) } func BenchmarkClientUnaryc1(b *testing.B) { grpc.EnableTracing = true runStream(b, Features{true, 0, 0, 0, 1, 1, 1, false}) } func BenchmarkClientUnaryc8(b *testing.B) { grpc.EnableTracing = true runStream(b, Features{true, 0, 0, 0, 8, 1, 1, false}) } func BenchmarkClientUnaryc64(b *testing.B) { grpc.EnableTracing = true runStream(b, Features{true, 0, 0, 0, 64, 1, 1, false}) } func BenchmarkClientUnaryc512(b *testing.B) { grpc.EnableTracing = true runStream(b, Features{true, 0, 0, 0, 512, 1, 1, false}) } func BenchmarkClientStreamNoTracec1(b *testing.B) { grpc.EnableTracing = false runStream(b, Features{false, 0, 0, 0, 1, 1, 1, false}) } func BenchmarkClientStreamNoTracec8(b *testing.B) { grpc.EnableTracing = false runStream(b, Features{false, 0, 0, 0, 8, 1, 1, false}) } func BenchmarkClientStreamNoTracec64(b *testing.B) { grpc.EnableTracing = false runStream(b, Features{false, 0, 0, 0, 64, 1, 1, false}) } func BenchmarkClientStreamNoTracec512(b *testing.B) { grpc.EnableTracing = false runStream(b, Features{false, 0, 0, 0, 512, 1, 1, false}) } func BenchmarkClientUnaryNoTracec1(b *testing.B) { grpc.EnableTracing = false runStream(b, Features{false, 0, 0, 0, 1, 1, 1, false}) } func BenchmarkClientUnaryNoTracec8(b *testing.B) { grpc.EnableTracing = false runStream(b, Features{false, 0, 0, 0, 8, 1, 1, false}) } func BenchmarkClientUnaryNoTracec64(b *testing.B) { grpc.EnableTracing = false runStream(b, Features{false, 0, 0, 0, 64, 1, 1, false}) } func BenchmarkClientUnaryNoTracec512(b *testing.B) { grpc.EnableTracing = false runStream(b, Features{false, 0, 0, 0, 512, 1, 1, false}) } func TestMain(m *testing.M) { os.Exit(stats.RunTestMain(m)) } golang-google-grpc-1.6.0/benchmark/benchmark17_test.go000066400000000000000000000053051315416461300226210ustar00rootroot00000000000000// +build go1.7 /* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package benchmark import ( "fmt" "os" "reflect" "testing" "time" "google.golang.org/grpc" "google.golang.org/grpc/benchmark/stats" ) func BenchmarkClient(b *testing.B) { enableTrace := []bool{true, false} // run both enable and disable by default // When set the latency to 0 (no delay), the result is slower than the real result with no delay // because latency simulation section has extra operations latency := []time.Duration{0, 40 * time.Millisecond} // if non-positive, no delay. 
kbps := []int{0, 10240} // if non-positive, infinite mtu := []int{0} // if non-positive, infinite maxConcurrentCalls := []int{1, 8, 64, 512} reqSizeBytes := []int{1, 1024 * 1024} respSizeBytes := []int{1, 1024 * 1024} featuresCurPos := make([]int, 7) // 0:enableTracing 1:md 2:ltc 3:kbps 4:mtu 5:maxC 6:connCount 7:reqSize 8:respSize featuresMaxPosition := []int{len(enableTrace), len(latency), len(kbps), len(mtu), len(maxConcurrentCalls), len(reqSizeBytes), len(respSizeBytes)} initalPos := make([]int, len(featuresCurPos)) // run benchmarks start := true for !reflect.DeepEqual(featuresCurPos, initalPos) || start { start = false tracing := "Trace" if !enableTrace[featuresCurPos[0]] { tracing = "noTrace" } benchFeature := Features{ EnableTrace: enableTrace[featuresCurPos[0]], Latency: latency[featuresCurPos[1]], Kbps: kbps[featuresCurPos[2]], Mtu: mtu[featuresCurPos[3]], MaxConcurrentCalls: maxConcurrentCalls[featuresCurPos[4]], ReqSizeBytes: reqSizeBytes[featuresCurPos[5]], RespSizeBytes: respSizeBytes[featuresCurPos[6]], } grpc.EnableTracing = enableTrace[featuresCurPos[0]] b.Run(fmt.Sprintf("Unary-%s-%s", tracing, benchFeature.String()), func(b *testing.B) { runUnary(b, benchFeature) }) b.Run(fmt.Sprintf("Stream-%s-%s", tracing, benchFeature.String()), func(b *testing.B) { runStream(b, benchFeature) }) AddOne(featuresCurPos, featuresMaxPosition) } } func TestMain(m *testing.M) { os.Exit(stats.RunTestMain(m)) } golang-google-grpc-1.6.0/benchmark/client/000077500000000000000000000000001315416461300204045ustar00rootroot00000000000000golang-google-grpc-1.6.0/benchmark/client/main.go000066400000000000000000000077341315416461300216720ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package main import ( "flag" "math" "net" "net/http" _ "net/http/pprof" "sync" "time" "golang.org/x/net/context" "google.golang.org/grpc" "google.golang.org/grpc/benchmark" testpb "google.golang.org/grpc/benchmark/grpc_testing" "google.golang.org/grpc/benchmark/stats" "google.golang.org/grpc/grpclog" ) var ( server = flag.String("server", "", "The server address") maxConcurrentRPCs = flag.Int("max_concurrent_rpcs", 1, "The max number of concurrent RPCs") duration = flag.Int("duration", math.MaxInt32, "The duration in seconds to run the benchmark client") trace = flag.Bool("trace", true, "Whether tracing is on") rpcType = flag.Int("rpc_type", 0, `Configure different client rpc type. 
Valid options are: 0 : unary call; 1 : streaming call.`) ) func unaryCaller(client testpb.BenchmarkServiceClient) { benchmark.DoUnaryCall(client, 1, 1) } func streamCaller(stream testpb.BenchmarkService_StreamingCallClient) { benchmark.DoStreamingRoundTrip(stream, 1, 1) } func buildConnection() (s *stats.Stats, conn *grpc.ClientConn, tc testpb.BenchmarkServiceClient) { s = stats.NewStats(256) conn = benchmark.NewClientConn(*server) tc = testpb.NewBenchmarkServiceClient(conn) return s, conn, tc } func closeLoopUnary() { s, conn, tc := buildConnection() for i := 0; i < 100; i++ { unaryCaller(tc) } ch := make(chan int, *maxConcurrentRPCs*4) var ( mu sync.Mutex wg sync.WaitGroup ) wg.Add(*maxConcurrentRPCs) for i := 0; i < *maxConcurrentRPCs; i++ { go func() { for range ch { start := time.Now() unaryCaller(tc) elapse := time.Since(start) mu.Lock() s.Add(elapse) mu.Unlock() } wg.Done() }() } // Stop the client when time is up. done := make(chan struct{}) go func() { <-time.After(time.Duration(*duration) * time.Second) close(done) }() ok := true for ok { select { case ch <- 0: case <-done: ok = false } } close(ch) wg.Wait() conn.Close() grpclog.Println(s.String()) } func closeLoopStream() { s, conn, tc := buildConnection() ch := make(chan int, *maxConcurrentRPCs*4) var ( mu sync.Mutex wg sync.WaitGroup ) wg.Add(*maxConcurrentRPCs) // Distribute RPCs over maxConcurrentCalls workers. for i := 0; i < *maxConcurrentRPCs; i++ { go func() { stream, err := tc.StreamingCall(context.Background()) if err != nil { grpclog.Fatalf("%v.StreamingCall(_) = _, %v", tc, err) } // Do some warm up. for i := 0; i < 100; i++ { streamCaller(stream) } for range ch { start := time.Now() streamCaller(stream) elapse := time.Since(start) mu.Lock() s.Add(elapse) mu.Unlock() } wg.Done() }() } // Stop the client when time is up. done := make(chan struct{}) go func() { <-time.After(time.Duration(*duration) * time.Second) close(done) }() ok := true for ok { select { case ch <- 0: case <-done: ok = false } } close(ch) wg.Wait() conn.Close() grpclog.Println(s.String()) } func main() { flag.Parse() grpc.EnableTracing = *trace go func() { lis, err := net.Listen("tcp", ":0") if err != nil { grpclog.Fatalf("Failed to listen: %v", err) } grpclog.Println("Client profiling address: ", lis.Addr().String()) if err := http.Serve(lis, nil); err != nil { grpclog.Fatalf("Failed to serve: %v", err) } }() switch *rpcType { case 0: closeLoopUnary() case 1: closeLoopStream() } } golang-google-grpc-1.6.0/benchmark/grpc_testing/000077500000000000000000000000001315416461300216165ustar00rootroot00000000000000golang-google-grpc-1.6.0/benchmark/grpc_testing/control.pb.go000066400000000000000000001173111315416461300242310ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: control.proto /* Package grpc_testing is a generated protocol buffer package. 
It is generated from these files: control.proto messages.proto payloads.proto services.proto stats.proto It has these top-level messages: PoissonParams UniformParams DeterministicParams ParetoParams ClosedLoopParams LoadParams SecurityParams ClientConfig ClientStatus Mark ClientArgs ServerConfig ServerArgs ServerStatus CoreRequest CoreResponse Void Scenario Scenarios Payload EchoStatus SimpleRequest SimpleResponse StreamingInputCallRequest StreamingInputCallResponse ResponseParameters StreamingOutputCallRequest StreamingOutputCallResponse ReconnectParams ReconnectInfo ByteBufferParams SimpleProtoParams ComplexProtoParams PayloadConfig ServerStats HistogramParams HistogramData ClientStats */ package grpc_testing import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type ClientType int32 const ( ClientType_SYNC_CLIENT ClientType = 0 ClientType_ASYNC_CLIENT ClientType = 1 ) var ClientType_name = map[int32]string{ 0: "SYNC_CLIENT", 1: "ASYNC_CLIENT", } var ClientType_value = map[string]int32{ "SYNC_CLIENT": 0, "ASYNC_CLIENT": 1, } func (x ClientType) String() string { return proto.EnumName(ClientType_name, int32(x)) } func (ClientType) EnumDescriptor() ([]byte, []int) { return fileDescriptor0, []int{0} } type ServerType int32 const ( ServerType_SYNC_SERVER ServerType = 0 ServerType_ASYNC_SERVER ServerType = 1 ServerType_ASYNC_GENERIC_SERVER ServerType = 2 ) var ServerType_name = map[int32]string{ 0: "SYNC_SERVER", 1: "ASYNC_SERVER", 2: "ASYNC_GENERIC_SERVER", } var ServerType_value = map[string]int32{ "SYNC_SERVER": 0, "ASYNC_SERVER": 1, "ASYNC_GENERIC_SERVER": 2, } func (x ServerType) String() string { return proto.EnumName(ServerType_name, int32(x)) } func (ServerType) EnumDescriptor() ([]byte, []int) { return fileDescriptor0, []int{1} } type RpcType int32 const ( RpcType_UNARY RpcType = 0 RpcType_STREAMING RpcType = 1 ) var RpcType_name = map[int32]string{ 0: "UNARY", 1: "STREAMING", } var RpcType_value = map[string]int32{ "UNARY": 0, "STREAMING": 1, } func (x RpcType) String() string { return proto.EnumName(RpcType_name, int32(x)) } func (RpcType) EnumDescriptor() ([]byte, []int) { return fileDescriptor0, []int{2} } // Parameters of poisson process distribution, which is a good representation // of activity coming in from independent identical stationary sources. type PoissonParams struct { // The rate of arrivals (a.k.a. lambda parameter of the exp distribution). 
OfferedLoad float64 `protobuf:"fixed64,1,opt,name=offered_load,json=offeredLoad" json:"offered_load,omitempty"` } func (m *PoissonParams) Reset() { *m = PoissonParams{} } func (m *PoissonParams) String() string { return proto.CompactTextString(m) } func (*PoissonParams) ProtoMessage() {} func (*PoissonParams) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} } func (m *PoissonParams) GetOfferedLoad() float64 { if m != nil { return m.OfferedLoad } return 0 } type UniformParams struct { InterarrivalLo float64 `protobuf:"fixed64,1,opt,name=interarrival_lo,json=interarrivalLo" json:"interarrival_lo,omitempty"` InterarrivalHi float64 `protobuf:"fixed64,2,opt,name=interarrival_hi,json=interarrivalHi" json:"interarrival_hi,omitempty"` } func (m *UniformParams) Reset() { *m = UniformParams{} } func (m *UniformParams) String() string { return proto.CompactTextString(m) } func (*UniformParams) ProtoMessage() {} func (*UniformParams) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} } func (m *UniformParams) GetInterarrivalLo() float64 { if m != nil { return m.InterarrivalLo } return 0 } func (m *UniformParams) GetInterarrivalHi() float64 { if m != nil { return m.InterarrivalHi } return 0 } type DeterministicParams struct { OfferedLoad float64 `protobuf:"fixed64,1,opt,name=offered_load,json=offeredLoad" json:"offered_load,omitempty"` } func (m *DeterministicParams) Reset() { *m = DeterministicParams{} } func (m *DeterministicParams) String() string { return proto.CompactTextString(m) } func (*DeterministicParams) ProtoMessage() {} func (*DeterministicParams) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} } func (m *DeterministicParams) GetOfferedLoad() float64 { if m != nil { return m.OfferedLoad } return 0 } type ParetoParams struct { InterarrivalBase float64 `protobuf:"fixed64,1,opt,name=interarrival_base,json=interarrivalBase" json:"interarrival_base,omitempty"` Alpha float64 `protobuf:"fixed64,2,opt,name=alpha" json:"alpha,omitempty"` } func (m *ParetoParams) Reset() { *m = ParetoParams{} } func (m *ParetoParams) String() string { return proto.CompactTextString(m) } func (*ParetoParams) ProtoMessage() {} func (*ParetoParams) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3} } func (m *ParetoParams) GetInterarrivalBase() float64 { if m != nil { return m.InterarrivalBase } return 0 } func (m *ParetoParams) GetAlpha() float64 { if m != nil { return m.Alpha } return 0 } // Once an RPC finishes, immediately start a new one. // No configuration parameters needed. 
type ClosedLoopParams struct { } func (m *ClosedLoopParams) Reset() { *m = ClosedLoopParams{} } func (m *ClosedLoopParams) String() string { return proto.CompactTextString(m) } func (*ClosedLoopParams) ProtoMessage() {} func (*ClosedLoopParams) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{4} } type LoadParams struct { // Types that are valid to be assigned to Load: // *LoadParams_ClosedLoop // *LoadParams_Poisson // *LoadParams_Uniform // *LoadParams_Determ // *LoadParams_Pareto Load isLoadParams_Load `protobuf_oneof:"load"` } func (m *LoadParams) Reset() { *m = LoadParams{} } func (m *LoadParams) String() string { return proto.CompactTextString(m) } func (*LoadParams) ProtoMessage() {} func (*LoadParams) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{5} } type isLoadParams_Load interface { isLoadParams_Load() } type LoadParams_ClosedLoop struct { ClosedLoop *ClosedLoopParams `protobuf:"bytes,1,opt,name=closed_loop,json=closedLoop,oneof"` } type LoadParams_Poisson struct { Poisson *PoissonParams `protobuf:"bytes,2,opt,name=poisson,oneof"` } type LoadParams_Uniform struct { Uniform *UniformParams `protobuf:"bytes,3,opt,name=uniform,oneof"` } type LoadParams_Determ struct { Determ *DeterministicParams `protobuf:"bytes,4,opt,name=determ,oneof"` } type LoadParams_Pareto struct { Pareto *ParetoParams `protobuf:"bytes,5,opt,name=pareto,oneof"` } func (*LoadParams_ClosedLoop) isLoadParams_Load() {} func (*LoadParams_Poisson) isLoadParams_Load() {} func (*LoadParams_Uniform) isLoadParams_Load() {} func (*LoadParams_Determ) isLoadParams_Load() {} func (*LoadParams_Pareto) isLoadParams_Load() {} func (m *LoadParams) GetLoad() isLoadParams_Load { if m != nil { return m.Load } return nil } func (m *LoadParams) GetClosedLoop() *ClosedLoopParams { if x, ok := m.GetLoad().(*LoadParams_ClosedLoop); ok { return x.ClosedLoop } return nil } func (m *LoadParams) GetPoisson() *PoissonParams { if x, ok := m.GetLoad().(*LoadParams_Poisson); ok { return x.Poisson } return nil } func (m *LoadParams) GetUniform() *UniformParams { if x, ok := m.GetLoad().(*LoadParams_Uniform); ok { return x.Uniform } return nil } func (m *LoadParams) GetDeterm() *DeterministicParams { if x, ok := m.GetLoad().(*LoadParams_Determ); ok { return x.Determ } return nil } func (m *LoadParams) GetPareto() *ParetoParams { if x, ok := m.GetLoad().(*LoadParams_Pareto); ok { return x.Pareto } return nil } // XXX_OneofFuncs is for the internal use of the proto package. 
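// Hand-written illustration (not part of the generated code): a minimal sketch of
// how the LoadParams oneof is typically populated and inspected. The function name
// and the numeric value are assumptions made purely for this example.
func exampleLoadParamsUsage() string {
	// Select an open-loop Poisson arrival process at 1000 RPCs per second.
	lp := &LoadParams{
		Load: &LoadParams_Poisson{Poisson: &PoissonParams{OfferedLoad: 1000}},
	}
	// Callers either use the typed getters (which return nil when another variant
	// is set) or switch on the wrapper type, as below.
	switch load := lp.GetLoad().(type) {
	case *LoadParams_Poisson:
		return fmt.Sprintf("poisson, offered_load=%v", load.Poisson.GetOfferedLoad())
	case *LoadParams_ClosedLoop:
		return "closed loop"
	default:
		return "no load set"
	}
}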
func (*LoadParams) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _LoadParams_OneofMarshaler, _LoadParams_OneofUnmarshaler, _LoadParams_OneofSizer, []interface{}{ (*LoadParams_ClosedLoop)(nil), (*LoadParams_Poisson)(nil), (*LoadParams_Uniform)(nil), (*LoadParams_Determ)(nil), (*LoadParams_Pareto)(nil), } } func _LoadParams_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*LoadParams) // load switch x := m.Load.(type) { case *LoadParams_ClosedLoop: b.EncodeVarint(1<<3 | proto.WireBytes) if err := b.EncodeMessage(x.ClosedLoop); err != nil { return err } case *LoadParams_Poisson: b.EncodeVarint(2<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Poisson); err != nil { return err } case *LoadParams_Uniform: b.EncodeVarint(3<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Uniform); err != nil { return err } case *LoadParams_Determ: b.EncodeVarint(4<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Determ); err != nil { return err } case *LoadParams_Pareto: b.EncodeVarint(5<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Pareto); err != nil { return err } case nil: default: return fmt.Errorf("LoadParams.Load has unexpected type %T", x) } return nil } func _LoadParams_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*LoadParams) switch tag { case 1: // load.closed_loop if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(ClosedLoopParams) err := b.DecodeMessage(msg) m.Load = &LoadParams_ClosedLoop{msg} return true, err case 2: // load.poisson if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(PoissonParams) err := b.DecodeMessage(msg) m.Load = &LoadParams_Poisson{msg} return true, err case 3: // load.uniform if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(UniformParams) err := b.DecodeMessage(msg) m.Load = &LoadParams_Uniform{msg} return true, err case 4: // load.determ if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(DeterministicParams) err := b.DecodeMessage(msg) m.Load = &LoadParams_Determ{msg} return true, err case 5: // load.pareto if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(ParetoParams) err := b.DecodeMessage(msg) m.Load = &LoadParams_Pareto{msg} return true, err default: return false, nil } } func _LoadParams_OneofSizer(msg proto.Message) (n int) { m := msg.(*LoadParams) // load switch x := m.Load.(type) { case *LoadParams_ClosedLoop: s := proto.Size(x.ClosedLoop) n += proto.SizeVarint(1<<3 | proto.WireBytes) n += proto.SizeVarint(uint64(s)) n += s case *LoadParams_Poisson: s := proto.Size(x.Poisson) n += proto.SizeVarint(2<<3 | proto.WireBytes) n += proto.SizeVarint(uint64(s)) n += s case *LoadParams_Uniform: s := proto.Size(x.Uniform) n += proto.SizeVarint(3<<3 | proto.WireBytes) n += proto.SizeVarint(uint64(s)) n += s case *LoadParams_Determ: s := proto.Size(x.Determ) n += proto.SizeVarint(4<<3 | proto.WireBytes) n += proto.SizeVarint(uint64(s)) n += s case *LoadParams_Pareto: s := proto.Size(x.Pareto) n += proto.SizeVarint(5<<3 | proto.WireBytes) n += proto.SizeVarint(uint64(s)) n += s case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } // presence of SecurityParams implies use of TLS type SecurityParams struct { UseTestCa bool 
`protobuf:"varint,1,opt,name=use_test_ca,json=useTestCa" json:"use_test_ca,omitempty"` ServerHostOverride string `protobuf:"bytes,2,opt,name=server_host_override,json=serverHostOverride" json:"server_host_override,omitempty"` } func (m *SecurityParams) Reset() { *m = SecurityParams{} } func (m *SecurityParams) String() string { return proto.CompactTextString(m) } func (*SecurityParams) ProtoMessage() {} func (*SecurityParams) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{6} } func (m *SecurityParams) GetUseTestCa() bool { if m != nil { return m.UseTestCa } return false } func (m *SecurityParams) GetServerHostOverride() string { if m != nil { return m.ServerHostOverride } return "" } type ClientConfig struct { // List of targets to connect to. At least one target needs to be specified. ServerTargets []string `protobuf:"bytes,1,rep,name=server_targets,json=serverTargets" json:"server_targets,omitempty"` ClientType ClientType `protobuf:"varint,2,opt,name=client_type,json=clientType,enum=grpc.testing.ClientType" json:"client_type,omitempty"` SecurityParams *SecurityParams `protobuf:"bytes,3,opt,name=security_params,json=securityParams" json:"security_params,omitempty"` // How many concurrent RPCs to start for each channel. // For synchronous client, use a separate thread for each outstanding RPC. OutstandingRpcsPerChannel int32 `protobuf:"varint,4,opt,name=outstanding_rpcs_per_channel,json=outstandingRpcsPerChannel" json:"outstanding_rpcs_per_channel,omitempty"` // Number of independent client channels to create. // i-th channel will connect to server_target[i % server_targets.size()] ClientChannels int32 `protobuf:"varint,5,opt,name=client_channels,json=clientChannels" json:"client_channels,omitempty"` // Only for async client. Number of threads to use to start/manage RPCs. AsyncClientThreads int32 `protobuf:"varint,7,opt,name=async_client_threads,json=asyncClientThreads" json:"async_client_threads,omitempty"` RpcType RpcType `protobuf:"varint,8,opt,name=rpc_type,json=rpcType,enum=grpc.testing.RpcType" json:"rpc_type,omitempty"` // The requested load for the entire client (aggregated over all the threads). 
LoadParams *LoadParams `protobuf:"bytes,10,opt,name=load_params,json=loadParams" json:"load_params,omitempty"` PayloadConfig *PayloadConfig `protobuf:"bytes,11,opt,name=payload_config,json=payloadConfig" json:"payload_config,omitempty"` HistogramParams *HistogramParams `protobuf:"bytes,12,opt,name=histogram_params,json=histogramParams" json:"histogram_params,omitempty"` // Specify the cores we should run the client on, if desired CoreList []int32 `protobuf:"varint,13,rep,packed,name=core_list,json=coreList" json:"core_list,omitempty"` CoreLimit int32 `protobuf:"varint,14,opt,name=core_limit,json=coreLimit" json:"core_limit,omitempty"` } func (m *ClientConfig) Reset() { *m = ClientConfig{} } func (m *ClientConfig) String() string { return proto.CompactTextString(m) } func (*ClientConfig) ProtoMessage() {} func (*ClientConfig) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{7} } func (m *ClientConfig) GetServerTargets() []string { if m != nil { return m.ServerTargets } return nil } func (m *ClientConfig) GetClientType() ClientType { if m != nil { return m.ClientType } return ClientType_SYNC_CLIENT } func (m *ClientConfig) GetSecurityParams() *SecurityParams { if m != nil { return m.SecurityParams } return nil } func (m *ClientConfig) GetOutstandingRpcsPerChannel() int32 { if m != nil { return m.OutstandingRpcsPerChannel } return 0 } func (m *ClientConfig) GetClientChannels() int32 { if m != nil { return m.ClientChannels } return 0 } func (m *ClientConfig) GetAsyncClientThreads() int32 { if m != nil { return m.AsyncClientThreads } return 0 } func (m *ClientConfig) GetRpcType() RpcType { if m != nil { return m.RpcType } return RpcType_UNARY } func (m *ClientConfig) GetLoadParams() *LoadParams { if m != nil { return m.LoadParams } return nil } func (m *ClientConfig) GetPayloadConfig() *PayloadConfig { if m != nil { return m.PayloadConfig } return nil } func (m *ClientConfig) GetHistogramParams() *HistogramParams { if m != nil { return m.HistogramParams } return nil } func (m *ClientConfig) GetCoreList() []int32 { if m != nil { return m.CoreList } return nil } func (m *ClientConfig) GetCoreLimit() int32 { if m != nil { return m.CoreLimit } return 0 } type ClientStatus struct { Stats *ClientStats `protobuf:"bytes,1,opt,name=stats" json:"stats,omitempty"` } func (m *ClientStatus) Reset() { *m = ClientStatus{} } func (m *ClientStatus) String() string { return proto.CompactTextString(m) } func (*ClientStatus) ProtoMessage() {} func (*ClientStatus) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{8} } func (m *ClientStatus) GetStats() *ClientStats { if m != nil { return m.Stats } return nil } // Request current stats type Mark struct { // if true, the stats will be reset after taking their snapshot. 
Reset_ bool `protobuf:"varint,1,opt,name=reset" json:"reset,omitempty"` } func (m *Mark) Reset() { *m = Mark{} } func (m *Mark) String() string { return proto.CompactTextString(m) } func (*Mark) ProtoMessage() {} func (*Mark) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{9} } func (m *Mark) GetReset_() bool { if m != nil { return m.Reset_ } return false } type ClientArgs struct { // Types that are valid to be assigned to Argtype: // *ClientArgs_Setup // *ClientArgs_Mark Argtype isClientArgs_Argtype `protobuf_oneof:"argtype"` } func (m *ClientArgs) Reset() { *m = ClientArgs{} } func (m *ClientArgs) String() string { return proto.CompactTextString(m) } func (*ClientArgs) ProtoMessage() {} func (*ClientArgs) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{10} } type isClientArgs_Argtype interface { isClientArgs_Argtype() } type ClientArgs_Setup struct { Setup *ClientConfig `protobuf:"bytes,1,opt,name=setup,oneof"` } type ClientArgs_Mark struct { Mark *Mark `protobuf:"bytes,2,opt,name=mark,oneof"` } func (*ClientArgs_Setup) isClientArgs_Argtype() {} func (*ClientArgs_Mark) isClientArgs_Argtype() {} func (m *ClientArgs) GetArgtype() isClientArgs_Argtype { if m != nil { return m.Argtype } return nil } func (m *ClientArgs) GetSetup() *ClientConfig { if x, ok := m.GetArgtype().(*ClientArgs_Setup); ok { return x.Setup } return nil } func (m *ClientArgs) GetMark() *Mark { if x, ok := m.GetArgtype().(*ClientArgs_Mark); ok { return x.Mark } return nil } // XXX_OneofFuncs is for the internal use of the proto package. func (*ClientArgs) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _ClientArgs_OneofMarshaler, _ClientArgs_OneofUnmarshaler, _ClientArgs_OneofSizer, []interface{}{ (*ClientArgs_Setup)(nil), (*ClientArgs_Mark)(nil), } } func _ClientArgs_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*ClientArgs) // argtype switch x := m.Argtype.(type) { case *ClientArgs_Setup: b.EncodeVarint(1<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Setup); err != nil { return err } case *ClientArgs_Mark: b.EncodeVarint(2<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Mark); err != nil { return err } case nil: default: return fmt.Errorf("ClientArgs.Argtype has unexpected type %T", x) } return nil } func _ClientArgs_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*ClientArgs) switch tag { case 1: // argtype.setup if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(ClientConfig) err := b.DecodeMessage(msg) m.Argtype = &ClientArgs_Setup{msg} return true, err case 2: // argtype.mark if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(Mark) err := b.DecodeMessage(msg) m.Argtype = &ClientArgs_Mark{msg} return true, err default: return false, nil } } func _ClientArgs_OneofSizer(msg proto.Message) (n int) { m := msg.(*ClientArgs) // argtype switch x := m.Argtype.(type) { case *ClientArgs_Setup: s := proto.Size(x.Setup) n += proto.SizeVarint(1<<3 | proto.WireBytes) n += proto.SizeVarint(uint64(s)) n += s case *ClientArgs_Mark: s := proto.Size(x.Mark) n += proto.SizeVarint(2<<3 | proto.WireBytes) n += proto.SizeVarint(uint64(s)) n += s case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } type ServerConfig struct { ServerType ServerType 
`protobuf:"varint,1,opt,name=server_type,json=serverType,enum=grpc.testing.ServerType" json:"server_type,omitempty"` SecurityParams *SecurityParams `protobuf:"bytes,2,opt,name=security_params,json=securityParams" json:"security_params,omitempty"` // Port on which to listen. Zero means pick unused port. Port int32 `protobuf:"varint,4,opt,name=port" json:"port,omitempty"` // Only for async server. Number of threads used to serve the requests. AsyncServerThreads int32 `protobuf:"varint,7,opt,name=async_server_threads,json=asyncServerThreads" json:"async_server_threads,omitempty"` // Specify the number of cores to limit server to, if desired CoreLimit int32 `protobuf:"varint,8,opt,name=core_limit,json=coreLimit" json:"core_limit,omitempty"` // payload config, used in generic server PayloadConfig *PayloadConfig `protobuf:"bytes,9,opt,name=payload_config,json=payloadConfig" json:"payload_config,omitempty"` // Specify the cores we should run the server on, if desired CoreList []int32 `protobuf:"varint,10,rep,packed,name=core_list,json=coreList" json:"core_list,omitempty"` } func (m *ServerConfig) Reset() { *m = ServerConfig{} } func (m *ServerConfig) String() string { return proto.CompactTextString(m) } func (*ServerConfig) ProtoMessage() {} func (*ServerConfig) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{11} } func (m *ServerConfig) GetServerType() ServerType { if m != nil { return m.ServerType } return ServerType_SYNC_SERVER } func (m *ServerConfig) GetSecurityParams() *SecurityParams { if m != nil { return m.SecurityParams } return nil } func (m *ServerConfig) GetPort() int32 { if m != nil { return m.Port } return 0 } func (m *ServerConfig) GetAsyncServerThreads() int32 { if m != nil { return m.AsyncServerThreads } return 0 } func (m *ServerConfig) GetCoreLimit() int32 { if m != nil { return m.CoreLimit } return 0 } func (m *ServerConfig) GetPayloadConfig() *PayloadConfig { if m != nil { return m.PayloadConfig } return nil } func (m *ServerConfig) GetCoreList() []int32 { if m != nil { return m.CoreList } return nil } type ServerArgs struct { // Types that are valid to be assigned to Argtype: // *ServerArgs_Setup // *ServerArgs_Mark Argtype isServerArgs_Argtype `protobuf_oneof:"argtype"` } func (m *ServerArgs) Reset() { *m = ServerArgs{} } func (m *ServerArgs) String() string { return proto.CompactTextString(m) } func (*ServerArgs) ProtoMessage() {} func (*ServerArgs) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{12} } type isServerArgs_Argtype interface { isServerArgs_Argtype() } type ServerArgs_Setup struct { Setup *ServerConfig `protobuf:"bytes,1,opt,name=setup,oneof"` } type ServerArgs_Mark struct { Mark *Mark `protobuf:"bytes,2,opt,name=mark,oneof"` } func (*ServerArgs_Setup) isServerArgs_Argtype() {} func (*ServerArgs_Mark) isServerArgs_Argtype() {} func (m *ServerArgs) GetArgtype() isServerArgs_Argtype { if m != nil { return m.Argtype } return nil } func (m *ServerArgs) GetSetup() *ServerConfig { if x, ok := m.GetArgtype().(*ServerArgs_Setup); ok { return x.Setup } return nil } func (m *ServerArgs) GetMark() *Mark { if x, ok := m.GetArgtype().(*ServerArgs_Mark); ok { return x.Mark } return nil } // XXX_OneofFuncs is for the internal use of the proto package. 
func (*ServerArgs) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _ServerArgs_OneofMarshaler, _ServerArgs_OneofUnmarshaler, _ServerArgs_OneofSizer, []interface{}{ (*ServerArgs_Setup)(nil), (*ServerArgs_Mark)(nil), } } func _ServerArgs_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*ServerArgs) // argtype switch x := m.Argtype.(type) { case *ServerArgs_Setup: b.EncodeVarint(1<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Setup); err != nil { return err } case *ServerArgs_Mark: b.EncodeVarint(2<<3 | proto.WireBytes) if err := b.EncodeMessage(x.Mark); err != nil { return err } case nil: default: return fmt.Errorf("ServerArgs.Argtype has unexpected type %T", x) } return nil } func _ServerArgs_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*ServerArgs) switch tag { case 1: // argtype.setup if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(ServerConfig) err := b.DecodeMessage(msg) m.Argtype = &ServerArgs_Setup{msg} return true, err case 2: // argtype.mark if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(Mark) err := b.DecodeMessage(msg) m.Argtype = &ServerArgs_Mark{msg} return true, err default: return false, nil } } func _ServerArgs_OneofSizer(msg proto.Message) (n int) { m := msg.(*ServerArgs) // argtype switch x := m.Argtype.(type) { case *ServerArgs_Setup: s := proto.Size(x.Setup) n += proto.SizeVarint(1<<3 | proto.WireBytes) n += proto.SizeVarint(uint64(s)) n += s case *ServerArgs_Mark: s := proto.Size(x.Mark) n += proto.SizeVarint(2<<3 | proto.WireBytes) n += proto.SizeVarint(uint64(s)) n += s case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } type ServerStatus struct { Stats *ServerStats `protobuf:"bytes,1,opt,name=stats" json:"stats,omitempty"` // the port bound by the server Port int32 `protobuf:"varint,2,opt,name=port" json:"port,omitempty"` // Number of cores available to the server Cores int32 `protobuf:"varint,3,opt,name=cores" json:"cores,omitempty"` } func (m *ServerStatus) Reset() { *m = ServerStatus{} } func (m *ServerStatus) String() string { return proto.CompactTextString(m) } func (*ServerStatus) ProtoMessage() {} func (*ServerStatus) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{13} } func (m *ServerStatus) GetStats() *ServerStats { if m != nil { return m.Stats } return nil } func (m *ServerStatus) GetPort() int32 { if m != nil { return m.Port } return 0 } func (m *ServerStatus) GetCores() int32 { if m != nil { return m.Cores } return 0 } type CoreRequest struct { } func (m *CoreRequest) Reset() { *m = CoreRequest{} } func (m *CoreRequest) String() string { return proto.CompactTextString(m) } func (*CoreRequest) ProtoMessage() {} func (*CoreRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{14} } type CoreResponse struct { // Number of cores available on the server Cores int32 `protobuf:"varint,1,opt,name=cores" json:"cores,omitempty"` } func (m *CoreResponse) Reset() { *m = CoreResponse{} } func (m *CoreResponse) String() string { return proto.CompactTextString(m) } func (*CoreResponse) ProtoMessage() {} func (*CoreResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{15} } func (m *CoreResponse) GetCores() int32 { if m != nil { return m.Cores } return 0 } type Void struct { } func (m *Void) Reset() 
{ *m = Void{} } func (m *Void) String() string { return proto.CompactTextString(m) } func (*Void) ProtoMessage() {} func (*Void) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{16} } // A single performance scenario: input to qps_json_driver type Scenario struct { // Human readable name for this scenario Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"` // Client configuration ClientConfig *ClientConfig `protobuf:"bytes,2,opt,name=client_config,json=clientConfig" json:"client_config,omitempty"` // Number of clients to start for the test NumClients int32 `protobuf:"varint,3,opt,name=num_clients,json=numClients" json:"num_clients,omitempty"` // Server configuration ServerConfig *ServerConfig `protobuf:"bytes,4,opt,name=server_config,json=serverConfig" json:"server_config,omitempty"` // Number of servers to start for the test NumServers int32 `protobuf:"varint,5,opt,name=num_servers,json=numServers" json:"num_servers,omitempty"` // Warmup period, in seconds WarmupSeconds int32 `protobuf:"varint,6,opt,name=warmup_seconds,json=warmupSeconds" json:"warmup_seconds,omitempty"` // Benchmark time, in seconds BenchmarkSeconds int32 `protobuf:"varint,7,opt,name=benchmark_seconds,json=benchmarkSeconds" json:"benchmark_seconds,omitempty"` // Number of workers to spawn locally (usually zero) SpawnLocalWorkerCount int32 `protobuf:"varint,8,opt,name=spawn_local_worker_count,json=spawnLocalWorkerCount" json:"spawn_local_worker_count,omitempty"` } func (m *Scenario) Reset() { *m = Scenario{} } func (m *Scenario) String() string { return proto.CompactTextString(m) } func (*Scenario) ProtoMessage() {} func (*Scenario) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{17} } func (m *Scenario) GetName() string { if m != nil { return m.Name } return "" } func (m *Scenario) GetClientConfig() *ClientConfig { if m != nil { return m.ClientConfig } return nil } func (m *Scenario) GetNumClients() int32 { if m != nil { return m.NumClients } return 0 } func (m *Scenario) GetServerConfig() *ServerConfig { if m != nil { return m.ServerConfig } return nil } func (m *Scenario) GetNumServers() int32 { if m != nil { return m.NumServers } return 0 } func (m *Scenario) GetWarmupSeconds() int32 { if m != nil { return m.WarmupSeconds } return 0 } func (m *Scenario) GetBenchmarkSeconds() int32 { if m != nil { return m.BenchmarkSeconds } return 0 } func (m *Scenario) GetSpawnLocalWorkerCount() int32 { if m != nil { return m.SpawnLocalWorkerCount } return 0 } // A set of scenarios to be run with qps_json_driver type Scenarios struct { Scenarios []*Scenario `protobuf:"bytes,1,rep,name=scenarios" json:"scenarios,omitempty"` } func (m *Scenarios) Reset() { *m = Scenarios{} } func (m *Scenarios) String() string { return proto.CompactTextString(m) } func (*Scenarios) ProtoMessage() {} func (*Scenarios) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{18} } func (m *Scenarios) GetScenarios() []*Scenario { if m != nil { return m.Scenarios } return nil } func init() { proto.RegisterType((*PoissonParams)(nil), "grpc.testing.PoissonParams") proto.RegisterType((*UniformParams)(nil), "grpc.testing.UniformParams") proto.RegisterType((*DeterministicParams)(nil), "grpc.testing.DeterministicParams") proto.RegisterType((*ParetoParams)(nil), "grpc.testing.ParetoParams") proto.RegisterType((*ClosedLoopParams)(nil), "grpc.testing.ClosedLoopParams") proto.RegisterType((*LoadParams)(nil), "grpc.testing.LoadParams") proto.RegisterType((*SecurityParams)(nil), "grpc.testing.SecurityParams") 
proto.RegisterType((*ClientConfig)(nil), "grpc.testing.ClientConfig") proto.RegisterType((*ClientStatus)(nil), "grpc.testing.ClientStatus") proto.RegisterType((*Mark)(nil), "grpc.testing.Mark") proto.RegisterType((*ClientArgs)(nil), "grpc.testing.ClientArgs") proto.RegisterType((*ServerConfig)(nil), "grpc.testing.ServerConfig") proto.RegisterType((*ServerArgs)(nil), "grpc.testing.ServerArgs") proto.RegisterType((*ServerStatus)(nil), "grpc.testing.ServerStatus") proto.RegisterType((*CoreRequest)(nil), "grpc.testing.CoreRequest") proto.RegisterType((*CoreResponse)(nil), "grpc.testing.CoreResponse") proto.RegisterType((*Void)(nil), "grpc.testing.Void") proto.RegisterType((*Scenario)(nil), "grpc.testing.Scenario") proto.RegisterType((*Scenarios)(nil), "grpc.testing.Scenarios") proto.RegisterEnum("grpc.testing.ClientType", ClientType_name, ClientType_value) proto.RegisterEnum("grpc.testing.ServerType", ServerType_name, ServerType_value) proto.RegisterEnum("grpc.testing.RpcType", RpcType_name, RpcType_value) } func init() { proto.RegisterFile("control.proto", fileDescriptor0) } var fileDescriptor0 = []byte{ // 1179 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xa4, 0x56, 0x6f, 0x6f, 0xdb, 0xb6, 0x13, 0xb6, 0x1d, 0xdb, 0xb1, 0x4e, 0xb6, 0xe3, 0x1f, 0x7f, 0xe9, 0xa0, 0xa6, 0x69, 0x97, 0x6a, 0x1b, 0x16, 0x64, 0x40, 0x5a, 0x78, 0x05, 0xba, 0x62, 0x2f, 0x02, 0xc7, 0x33, 0xea, 0x00, 0x69, 0x96, 0xd1, 0x69, 0x87, 0xbe, 0x12, 0x18, 0x99, 0xb1, 0x85, 0xc8, 0xa2, 0x46, 0x52, 0x09, 0xf2, 0x15, 0xf6, 0x99, 0xf6, 0x39, 0xf6, 0x35, 0xf6, 0x15, 0x06, 0xfe, 0x91, 0x23, 0xb9, 0x06, 0x9a, 0x6d, 0xef, 0xc4, 0xbb, 0xe7, 0xe1, 0x91, 0xf7, 0xdc, 0x1d, 0x05, 0x9d, 0x90, 0x25, 0x92, 0xb3, 0xf8, 0x30, 0xe5, 0x4c, 0x32, 0xd4, 0x9e, 0xf1, 0x34, 0x3c, 0x94, 0x54, 0xc8, 0x28, 0x99, 0xed, 0x74, 0x53, 0x72, 0x17, 0x33, 0x32, 0x15, 0xc6, 0xbb, 0xe3, 0x0a, 0x49, 0xa4, 0x5d, 0xf8, 0x7d, 0xe8, 0x9c, 0xb3, 0x48, 0x08, 0x96, 0x9c, 0x13, 0x4e, 0x16, 0x02, 0x3d, 0x87, 0x36, 0xbb, 0xba, 0xa2, 0x9c, 0x4e, 0x03, 0x45, 0xf2, 0xaa, 0x7b, 0xd5, 0xfd, 0x2a, 0x76, 0xad, 0xed, 0x94, 0x91, 0xa9, 0x4f, 0xa0, 0xf3, 0x3e, 0x89, 0xae, 0x18, 0x5f, 0x58, 0xce, 0xb7, 0xb0, 0x15, 0x25, 0x92, 0x72, 0xc2, 0x79, 0x74, 0x43, 0xe2, 0x20, 0x66, 0x96, 0xd6, 0x2d, 0x9a, 0x4f, 0xd9, 0x27, 0xc0, 0x79, 0xe4, 0xd5, 0x3e, 0x05, 0x8e, 0x23, 0xff, 0x07, 0xf8, 0xff, 0x4f, 0x54, 0x52, 0xbe, 0x88, 0x92, 0x48, 0xc8, 0x28, 0x7c, 0xf8, 0xe1, 0x7e, 0x81, 0xf6, 0x39, 0xe1, 0x54, 0x32, 0x4b, 0xf9, 0x0e, 0xfe, 0x57, 0x0a, 0x79, 0x49, 0x04, 0xb5, 0xbc, 0x5e, 0xd1, 0x71, 0x4c, 0x04, 0x45, 0xdb, 0xd0, 0x20, 0x71, 0x3a, 0x27, 0xf6, 0x54, 0x66, 0xe1, 0x23, 0xe8, 0x0d, 0x63, 0x26, 0x54, 0x00, 0x96, 0x9a, 0x6d, 0xfd, 0x3f, 0x6a, 0x00, 0x2a, 0x9e, 0x8d, 0x32, 0x00, 0x37, 0xd4, 0x90, 0x20, 0x66, 0x2c, 0xd5, 0xfb, 0xbb, 0xfd, 0x67, 0x87, 0x45, 0x1d, 0x0e, 0x57, 0xf7, 0x18, 0x57, 0x30, 0x84, 0x4b, 0x1b, 0x7a, 0x0d, 0x9b, 0xa9, 0x51, 0x42, 0x47, 0x77, 0xfb, 0x4f, 0xca, 0xf4, 0x92, 0x4c, 0xe3, 0x0a, 0xce, 0xd1, 0x8a, 0x98, 0x19, 0x39, 0xbc, 0x8d, 0x75, 0xc4, 0x92, 0x56, 0x8a, 0x68, 0xd1, 0xe8, 0x47, 0x68, 0x4e, 0x75, 0x92, 0xbd, 0xba, 0xe6, 0x3d, 0x2f, 0xf3, 0xd6, 0x08, 0x30, 0xae, 0x60, 0x4b, 0x41, 0xaf, 0xa0, 0x99, 0xea, 0x3c, 0x7b, 0x0d, 0x4d, 0xde, 0x59, 0x39, 0x6d, 0x41, 0x03, 0xc5, 0x32, 0xd8, 0xe3, 0x26, 0xd4, 0x95, 0x70, 0xfe, 0x25, 0x74, 0x27, 0x34, 0xcc, 0x78, 0x24, 0xef, 0x6c, 0x06, 0x9f, 0x81, 0x9b, 0x09, 0x1a, 0x28, 0x7e, 0x10, 0x12, 0x9d, 0xc1, 0x16, 0x76, 0x32, 0x41, 0x2f, 0xa8, 0x90, 0x43, 0x82, 0x5e, 0xc2, 0xb6, 0xa0, 
0xfc, 0x86, 0xf2, 0x60, 0xce, 0x84, 0x0c, 0xd8, 0x0d, 0xe5, 0x3c, 0x9a, 0x52, 0x9d, 0x2b, 0x07, 0x23, 0xe3, 0x1b, 0x33, 0x21, 0x7f, 0xb6, 0x1e, 0xff, 0xf7, 0x06, 0xb4, 0x87, 0x71, 0x44, 0x13, 0x39, 0x64, 0xc9, 0x55, 0x34, 0x43, 0xdf, 0x40, 0xd7, 0x6e, 0x21, 0x09, 0x9f, 0x51, 0x29, 0xbc, 0xea, 0xde, 0xc6, 0xbe, 0x83, 0x3b, 0xc6, 0x7a, 0x61, 0x8c, 0xe8, 0x8d, 0xd2, 0x52, 0xd1, 0x02, 0x79, 0x97, 0x9a, 0x00, 0xdd, 0xbe, 0xb7, 0xaa, 0xa5, 0x02, 0x5c, 0xdc, 0xa5, 0x54, 0x69, 0x98, 0x7f, 0xa3, 0x11, 0x6c, 0x09, 0x7b, 0xad, 0x20, 0xd5, 0xf7, 0xb2, 0x92, 0xec, 0x96, 0xe9, 0xe5, 0xbb, 0xe3, 0xae, 0x28, 0xe7, 0xe2, 0x08, 0x76, 0x59, 0x26, 0x85, 0x24, 0xc9, 0x34, 0x4a, 0x66, 0x01, 0x4f, 0x43, 0x11, 0xa4, 0x94, 0x07, 0xe1, 0x9c, 0x24, 0x09, 0x8d, 0xb5, 0x5c, 0x0d, 0xfc, 0xb8, 0x80, 0xc1, 0x69, 0x28, 0xce, 0x29, 0x1f, 0x1a, 0x80, 0xea, 0x33, 0x7b, 0x05, 0x4b, 0x11, 0x5a, 0xa5, 0x06, 0xee, 0x1a, 0xb3, 0xc5, 0x09, 0x95, 0x55, 0x22, 0xee, 0x92, 0x30, 0xc8, 0x6f, 0x3c, 0xe7, 0x94, 0x4c, 0x85, 0xb7, 0xa9, 0xd1, 0x48, 0xfb, 0xec, 0x5d, 0x8d, 0x07, 0xbd, 0x84, 0x16, 0x4f, 0x43, 0x93, 0x9a, 0x96, 0x4e, 0xcd, 0xa3, 0xf2, 0xdd, 0x70, 0x1a, 0xea, 0xbc, 0x6c, 0x72, 0xf3, 0xa1, 0xf2, 0xa9, 0x34, 0xcf, 0x13, 0x02, 0x3a, 0x21, 0x2b, 0xf9, 0xbc, 0x6f, 0x25, 0x0c, 0xf1, 0x7d, 0x5b, 0x1d, 0x43, 0x3e, 0xbc, 0x82, 0x50, 0x6b, 0xe8, 0xb9, 0x6b, 0x5b, 0xc3, 0x60, 0x8c, 0xcc, 0xb8, 0x93, 0x16, 0x97, 0x68, 0x0c, 0xbd, 0x79, 0x24, 0x24, 0x9b, 0x71, 0xb2, 0xc8, 0xcf, 0xd0, 0xd6, 0xbb, 0x3c, 0x2d, 0xef, 0x32, 0xce, 0x51, 0xf6, 0x20, 0x5b, 0xf3, 0xb2, 0x01, 0x3d, 0x01, 0x27, 0x64, 0x9c, 0x06, 0x71, 0x24, 0xa4, 0xd7, 0xd9, 0xdb, 0xd8, 0x6f, 0xe0, 0x96, 0x32, 0x9c, 0x46, 0x42, 0xa2, 0xa7, 0x00, 0xd6, 0xb9, 0x88, 0xa4, 0xd7, 0xd5, 0xf9, 0x73, 0x8c, 0x77, 0x11, 0x49, 0xff, 0x28, 0xaf, 0xc5, 0x89, 0x24, 0x32, 0x13, 0xe8, 0x05, 0x34, 0xf4, 0x18, 0xb6, 0xa3, 0xe2, 0xf1, 0xba, 0xf2, 0x52, 0x50, 0x81, 0x0d, 0xce, 0xdf, 0x85, 0xfa, 0x3b, 0xc2, 0xaf, 0xd5, 0x88, 0xe2, 0x54, 0x50, 0x69, 0x3b, 0xc4, 0x2c, 0xfc, 0x0c, 0xc0, 0x70, 0x06, 0x7c, 0x26, 0x50, 0x1f, 0x1a, 0x82, 0xca, 0x2c, 0x9f, 0x43, 0x3b, 0xeb, 0x36, 0x37, 0xd9, 0x19, 0x57, 0xb0, 0x81, 0xa2, 0x7d, 0xa8, 0x2f, 0x08, 0xbf, 0xb6, 0xb3, 0x07, 0x95, 0x29, 0x2a, 0xf2, 0xb8, 0x82, 0x35, 0xe2, 0xd8, 0x81, 0x4d, 0xc2, 0x67, 0xaa, 0x00, 0xfc, 0x3f, 0x6b, 0xd0, 0x9e, 0xe8, 0xe6, 0xb1, 0xc9, 0x7e, 0x03, 0x6e, 0xde, 0x62, 0xaa, 0x40, 0xaa, 0xeb, 0x7a, 0xc7, 0x10, 0x4c, 0xef, 0x88, 0xe5, 0xf7, 0xba, 0xde, 0xa9, 0xfd, 0x8b, 0xde, 0x41, 0x50, 0x4f, 0x19, 0x97, 0xb6, 0x47, 0xf4, 0xf7, 0x7d, 0x95, 0xe7, 0x67, 0x5b, 0x53, 0xe5, 0xf6, 0x54, 0xb6, 0xca, 0xcb, 0x6a, 0xb6, 0x56, 0xd4, 0x5c, 0x53, 0x97, 0xce, 0x3f, 0xae, 0xcb, 0x52, 0x35, 0x41, 0xb9, 0x9a, 0x94, 0x9e, 0xe6, 0x40, 0x0f, 0xd0, 0xb3, 0x28, 0xc0, 0x7f, 0xd4, 0x33, 0xca, 0xe5, 0x7c, 0x50, 0x95, 0xde, 0x43, 0xf3, 0x2a, 0x5d, 0x66, 0xbf, 0x56, 0xc8, 0xfe, 0x36, 0x34, 0xd4, 0xbd, 0xcc, 0x28, 0x6c, 0x60, 0xb3, 0xf0, 0x3b, 0xe0, 0x0e, 0x19, 0xa7, 0x98, 0xfe, 0x96, 0x51, 0x21, 0xfd, 0xaf, 0xa1, 0x6d, 0x96, 0x22, 0x65, 0x89, 0x79, 0x89, 0x0d, 0xa9, 0x5a, 0x24, 0x35, 0xa1, 0xfe, 0x81, 0x45, 0x53, 0xff, 0xaf, 0x1a, 0xb4, 0x26, 0x21, 0x4d, 0x08, 0x8f, 0x98, 0x8a, 0x99, 0x90, 0x85, 0x29, 0x36, 0x07, 0xeb, 0x6f, 0x74, 0x04, 0x9d, 0x7c, 0x00, 0x1a, 0x7d, 0x6a, 0x9f, 0xeb, 0x04, 0xdc, 0x0e, 0x8b, 0x6f, 0xc5, 0x97, 0xe0, 0x26, 0xd9, 0xc2, 0x8e, 0xc5, 0xfc, 0xe8, 0x90, 0x64, 0x0b, 0xc3, 0x51, 0x33, 0xda, 0x3e, 0x1b, 0x79, 0x84, 0xfa, 0xe7, 0xb4, 0xc1, 0x6d, 0x51, 0x6c, 0x15, 0x1b, 0xc1, 0xd8, 0xf2, 0xf9, 0xac, 0x22, 0x18, 0x8e, 0x50, 0xcf, 0xd5, 0x2d, 0xe1, 
0x8b, 0x2c, 0x0d, 0x04, 0x0d, 0x59, 0x32, 0x15, 0x5e, 0x53, 0x63, 0x3a, 0xc6, 0x3a, 0x31, 0x46, 0xf5, 0x83, 0x73, 0x49, 0x93, 0x70, 0xae, 0xb4, 0x5c, 0x22, 0x4d, 0x65, 0xf7, 0x96, 0x8e, 0x1c, 0xfc, 0x1a, 0x3c, 0x91, 0x92, 0xdb, 0x24, 0x88, 0x59, 0x48, 0xe2, 0xe0, 0x96, 0xf1, 0x6b, 0x7d, 0x83, 0x2c, 0xc9, 0xab, 0xfc, 0x91, 0xf6, 0x9f, 0x2a, 0xf7, 0xaf, 0xda, 0x3b, 0x54, 0x4e, 0x7f, 0x00, 0x4e, 0x9e, 0x70, 0x81, 0x5e, 0x81, 0x23, 0xf2, 0x85, 0x7e, 0x43, 0xdd, 0xfe, 0x17, 0x2b, 0xf7, 0xb6, 0x6e, 0x7c, 0x0f, 0x3c, 0x78, 0x91, 0xcf, 0x28, 0xdd, 0xee, 0x5b, 0xe0, 0x4e, 0x3e, 0x9e, 0x0d, 0x83, 0xe1, 0xe9, 0xc9, 0xe8, 0xec, 0xa2, 0x57, 0x41, 0x3d, 0x68, 0x0f, 0x8a, 0x96, 0xea, 0xc1, 0x49, 0xde, 0x04, 0x25, 0xc2, 0x64, 0x84, 0x3f, 0x8c, 0x70, 0x91, 0x60, 0x2d, 0x55, 0xe4, 0xc1, 0xb6, 0xb1, 0xbc, 0x1d, 0x9d, 0x8d, 0xf0, 0xc9, 0xd2, 0x53, 0x3b, 0xf8, 0x0a, 0x36, 0xed, 0xbb, 0x84, 0x1c, 0x68, 0xbc, 0x3f, 0x1b, 0xe0, 0x8f, 0xbd, 0x0a, 0xea, 0x80, 0x33, 0xb9, 0xc0, 0xa3, 0xc1, 0xbb, 0x93, 0xb3, 0xb7, 0xbd, 0xea, 0x65, 0x53, 0xff, 0x12, 0x7f, 0xff, 0x77, 0x00, 0x00, 0x00, 0xff, 0xff, 0x75, 0x59, 0xf4, 0x03, 0x4e, 0x0b, 0x00, 0x00, } golang-google-grpc-1.6.0/benchmark/grpc_testing/control.proto000066400000000000000000000112531315416461300243650ustar00rootroot00000000000000// Copyright 2016 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. syntax = "proto3"; import "payloads.proto"; import "stats.proto"; package grpc.testing; enum ClientType { SYNC_CLIENT = 0; ASYNC_CLIENT = 1; } enum ServerType { SYNC_SERVER = 0; ASYNC_SERVER = 1; ASYNC_GENERIC_SERVER = 2; } enum RpcType { UNARY = 0; STREAMING = 1; } // Parameters of poisson process distribution, which is a good representation // of activity coming in from independent identical stationary sources. message PoissonParams { // The rate of arrivals (a.k.a. lambda parameter of the exp distribution). double offered_load = 1; } message UniformParams { double interarrival_lo = 1; double interarrival_hi = 2; } message DeterministicParams { double offered_load = 1; } message ParetoParams { double interarrival_base = 1; double alpha = 2; } // Once an RPC finishes, immediately start a new one. // No configuration parameters needed. message ClosedLoopParams { } message LoadParams { oneof load { ClosedLoopParams closed_loop = 1; PoissonParams poisson = 2; UniformParams uniform = 3; DeterministicParams determ = 4; ParetoParams pareto = 5; }; } // presence of SecurityParams implies use of TLS message SecurityParams { bool use_test_ca = 1; string server_host_override = 2; } message ClientConfig { // List of targets to connect to. At least one target needs to be specified. repeated string server_targets = 1; ClientType client_type = 2; SecurityParams security_params = 3; // How many concurrent RPCs to start for each channel. // For synchronous client, use a separate thread for each outstanding RPC. int32 outstanding_rpcs_per_channel = 4; // Number of independent client channels to create. 
// i-th channel will connect to server_target[i % server_targets.size()] int32 client_channels = 5; // Only for async client. Number of threads to use to start/manage RPCs. int32 async_client_threads = 7; RpcType rpc_type = 8; // The requested load for the entire client (aggregated over all the threads). LoadParams load_params = 10; PayloadConfig payload_config = 11; HistogramParams histogram_params = 12; // Specify the cores we should run the client on, if desired repeated int32 core_list = 13; int32 core_limit = 14; } message ClientStatus { ClientStats stats = 1; } // Request current stats message Mark { // if true, the stats will be reset after taking their snapshot. bool reset = 1; } message ClientArgs { oneof argtype { ClientConfig setup = 1; Mark mark = 2; } } message ServerConfig { ServerType server_type = 1; SecurityParams security_params = 2; // Port on which to listen. Zero means pick unused port. int32 port = 4; // Only for async server. Number of threads used to serve the requests. int32 async_server_threads = 7; // Specify the number of cores to limit server to, if desired int32 core_limit = 8; // payload config, used in generic server PayloadConfig payload_config = 9; // Specify the cores we should run the server on, if desired repeated int32 core_list = 10; } message ServerArgs { oneof argtype { ServerConfig setup = 1; Mark mark = 2; } } message ServerStatus { ServerStats stats = 1; // the port bound by the server int32 port = 2; // Number of cores available to the server int32 cores = 3; } message CoreRequest { } message CoreResponse { // Number of cores available on the server int32 cores = 1; } message Void { } // A single performance scenario: input to qps_json_driver message Scenario { // Human readable name for this scenario string name = 1; // Client configuration ClientConfig client_config = 2; // Number of clients to start for the test int32 num_clients = 3; // Server configuration ServerConfig server_config = 4; // Number of servers to start for the test int32 num_servers = 5; // Warmup period, in seconds int32 warmup_seconds = 6; // Benchmark time, in seconds int32 benchmark_seconds = 7; // Number of workers to spawn locally (usually zero) int32 spawn_local_worker_count = 8; } // A set of scenarios to be run with qps_json_driver message Scenarios { repeated Scenario scenarios = 1; } golang-google-grpc-1.6.0/benchmark/grpc_testing/messages.pb.go000066400000000000000000000456261315416461300243710ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: messages.proto package grpc_testing import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // The type of payload that should be returned. type PayloadType int32 const ( // Compressable text format. PayloadType_COMPRESSABLE PayloadType = 0 // Uncompressable binary format. PayloadType_UNCOMPRESSABLE PayloadType = 1 // Randomly chosen from all other formats defined in this enum. 
PayloadType_RANDOM PayloadType = 2 ) var PayloadType_name = map[int32]string{ 0: "COMPRESSABLE", 1: "UNCOMPRESSABLE", 2: "RANDOM", } var PayloadType_value = map[string]int32{ "COMPRESSABLE": 0, "UNCOMPRESSABLE": 1, "RANDOM": 2, } func (x PayloadType) String() string { return proto.EnumName(PayloadType_name, int32(x)) } func (PayloadType) EnumDescriptor() ([]byte, []int) { return fileDescriptor1, []int{0} } // Compression algorithms type CompressionType int32 const ( // No compression CompressionType_NONE CompressionType = 0 CompressionType_GZIP CompressionType = 1 CompressionType_DEFLATE CompressionType = 2 ) var CompressionType_name = map[int32]string{ 0: "NONE", 1: "GZIP", 2: "DEFLATE", } var CompressionType_value = map[string]int32{ "NONE": 0, "GZIP": 1, "DEFLATE": 2, } func (x CompressionType) String() string { return proto.EnumName(CompressionType_name, int32(x)) } func (CompressionType) EnumDescriptor() ([]byte, []int) { return fileDescriptor1, []int{1} } // A block of data, to simply increase gRPC message size. type Payload struct { // The type of data in body. Type PayloadType `protobuf:"varint,1,opt,name=type,enum=grpc.testing.PayloadType" json:"type,omitempty"` // Primary contents of payload. Body []byte `protobuf:"bytes,2,opt,name=body,proto3" json:"body,omitempty"` } func (m *Payload) Reset() { *m = Payload{} } func (m *Payload) String() string { return proto.CompactTextString(m) } func (*Payload) ProtoMessage() {} func (*Payload) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{0} } func (m *Payload) GetType() PayloadType { if m != nil { return m.Type } return PayloadType_COMPRESSABLE } func (m *Payload) GetBody() []byte { if m != nil { return m.Body } return nil } // A protobuf representation for grpc status. This is used by test // clients to specify a status that the server should attempt to return. type EchoStatus struct { Code int32 `protobuf:"varint,1,opt,name=code" json:"code,omitempty"` Message string `protobuf:"bytes,2,opt,name=message" json:"message,omitempty"` } func (m *EchoStatus) Reset() { *m = EchoStatus{} } func (m *EchoStatus) String() string { return proto.CompactTextString(m) } func (*EchoStatus) ProtoMessage() {} func (*EchoStatus) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{1} } func (m *EchoStatus) GetCode() int32 { if m != nil { return m.Code } return 0 } func (m *EchoStatus) GetMessage() string { if m != nil { return m.Message } return "" } // Unary request. type SimpleRequest struct { // Desired payload type in the response from the server. // If response_type is RANDOM, server randomly chooses one from other formats. ResponseType PayloadType `protobuf:"varint,1,opt,name=response_type,json=responseType,enum=grpc.testing.PayloadType" json:"response_type,omitempty"` // Desired payload size in the response from the server. // If response_type is COMPRESSABLE, this denotes the size before compression. ResponseSize int32 `protobuf:"varint,2,opt,name=response_size,json=responseSize" json:"response_size,omitempty"` // Optional input payload sent along with the request. Payload *Payload `protobuf:"bytes,3,opt,name=payload" json:"payload,omitempty"` // Whether SimpleResponse should include username. FillUsername bool `protobuf:"varint,4,opt,name=fill_username,json=fillUsername" json:"fill_username,omitempty"` // Whether SimpleResponse should include OAuth scope. 
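// Hand-written illustration (not generated): a unary benchmark request asking for a
// 1 KiB compressable response while carrying a 1 KiB payload might be built as in
// this sketch (the sizes are assumptions for the example):
//
//	req := &SimpleRequest{
//		ResponseType: PayloadType_COMPRESSABLE,
//		ResponseSize: 1024,
//		Payload: &Payload{
//			Type: PayloadType_COMPRESSABLE,
//			Body: make([]byte, 1024),
//		},
//	}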
FillOauthScope bool `protobuf:"varint,5,opt,name=fill_oauth_scope,json=fillOauthScope" json:"fill_oauth_scope,omitempty"` // Compression algorithm to be used by the server for the response (stream) ResponseCompression CompressionType `protobuf:"varint,6,opt,name=response_compression,json=responseCompression,enum=grpc.testing.CompressionType" json:"response_compression,omitempty"` // Whether server should return a given status ResponseStatus *EchoStatus `protobuf:"bytes,7,opt,name=response_status,json=responseStatus" json:"response_status,omitempty"` } func (m *SimpleRequest) Reset() { *m = SimpleRequest{} } func (m *SimpleRequest) String() string { return proto.CompactTextString(m) } func (*SimpleRequest) ProtoMessage() {} func (*SimpleRequest) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{2} } func (m *SimpleRequest) GetResponseType() PayloadType { if m != nil { return m.ResponseType } return PayloadType_COMPRESSABLE } func (m *SimpleRequest) GetResponseSize() int32 { if m != nil { return m.ResponseSize } return 0 } func (m *SimpleRequest) GetPayload() *Payload { if m != nil { return m.Payload } return nil } func (m *SimpleRequest) GetFillUsername() bool { if m != nil { return m.FillUsername } return false } func (m *SimpleRequest) GetFillOauthScope() bool { if m != nil { return m.FillOauthScope } return false } func (m *SimpleRequest) GetResponseCompression() CompressionType { if m != nil { return m.ResponseCompression } return CompressionType_NONE } func (m *SimpleRequest) GetResponseStatus() *EchoStatus { if m != nil { return m.ResponseStatus } return nil } // Unary response, as configured by the request. type SimpleResponse struct { // Payload to increase message size. Payload *Payload `protobuf:"bytes,1,opt,name=payload" json:"payload,omitempty"` // The user the request came from, for verifying authentication was // successful when the client expected it. Username string `protobuf:"bytes,2,opt,name=username" json:"username,omitempty"` // OAuth scope. OauthScope string `protobuf:"bytes,3,opt,name=oauth_scope,json=oauthScope" json:"oauth_scope,omitempty"` } func (m *SimpleResponse) Reset() { *m = SimpleResponse{} } func (m *SimpleResponse) String() string { return proto.CompactTextString(m) } func (*SimpleResponse) ProtoMessage() {} func (*SimpleResponse) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{3} } func (m *SimpleResponse) GetPayload() *Payload { if m != nil { return m.Payload } return nil } func (m *SimpleResponse) GetUsername() string { if m != nil { return m.Username } return "" } func (m *SimpleResponse) GetOauthScope() string { if m != nil { return m.OauthScope } return "" } // Client-streaming request. type StreamingInputCallRequest struct { // Optional input payload sent along with the request. Payload *Payload `protobuf:"bytes,1,opt,name=payload" json:"payload,omitempty"` } func (m *StreamingInputCallRequest) Reset() { *m = StreamingInputCallRequest{} } func (m *StreamingInputCallRequest) String() string { return proto.CompactTextString(m) } func (*StreamingInputCallRequest) ProtoMessage() {} func (*StreamingInputCallRequest) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{4} } func (m *StreamingInputCallRequest) GetPayload() *Payload { if m != nil { return m.Payload } return nil } // Client-streaming response. type StreamingInputCallResponse struct { // Aggregated size of payloads received from the client. 
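// Hand-written note (not generated): servers typically derive this value by summing
// len(req.GetPayload().GetBody()) over every StreamingInputCallRequest received on
// the stream before sending the single response.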
AggregatedPayloadSize int32 `protobuf:"varint,1,opt,name=aggregated_payload_size,json=aggregatedPayloadSize" json:"aggregated_payload_size,omitempty"` } func (m *StreamingInputCallResponse) Reset() { *m = StreamingInputCallResponse{} } func (m *StreamingInputCallResponse) String() string { return proto.CompactTextString(m) } func (*StreamingInputCallResponse) ProtoMessage() {} func (*StreamingInputCallResponse) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{5} } func (m *StreamingInputCallResponse) GetAggregatedPayloadSize() int32 { if m != nil { return m.AggregatedPayloadSize } return 0 } // Configuration for a particular response. type ResponseParameters struct { // Desired payload sizes in responses from the server. // If response_type is COMPRESSABLE, this denotes the size before compression. Size int32 `protobuf:"varint,1,opt,name=size" json:"size,omitempty"` // Desired interval between consecutive responses in the response stream in // microseconds. IntervalUs int32 `protobuf:"varint,2,opt,name=interval_us,json=intervalUs" json:"interval_us,omitempty"` } func (m *ResponseParameters) Reset() { *m = ResponseParameters{} } func (m *ResponseParameters) String() string { return proto.CompactTextString(m) } func (*ResponseParameters) ProtoMessage() {} func (*ResponseParameters) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{6} } func (m *ResponseParameters) GetSize() int32 { if m != nil { return m.Size } return 0 } func (m *ResponseParameters) GetIntervalUs() int32 { if m != nil { return m.IntervalUs } return 0 } // Server-streaming request. type StreamingOutputCallRequest struct { // Desired payload type in the response from the server. // If response_type is RANDOM, the payload from each response in the stream // might be of different types. This is to simulate a mixed type of payload // stream. ResponseType PayloadType `protobuf:"varint,1,opt,name=response_type,json=responseType,enum=grpc.testing.PayloadType" json:"response_type,omitempty"` // Configuration for each expected response message. ResponseParameters []*ResponseParameters `protobuf:"bytes,2,rep,name=response_parameters,json=responseParameters" json:"response_parameters,omitempty"` // Optional input payload sent along with the request. 
Payload *Payload `protobuf:"bytes,3,opt,name=payload" json:"payload,omitempty"` // Compression algorithm to be used by the server for the response (stream) ResponseCompression CompressionType `protobuf:"varint,6,opt,name=response_compression,json=responseCompression,enum=grpc.testing.CompressionType" json:"response_compression,omitempty"` // Whether server should return a given status ResponseStatus *EchoStatus `protobuf:"bytes,7,opt,name=response_status,json=responseStatus" json:"response_status,omitempty"` } func (m *StreamingOutputCallRequest) Reset() { *m = StreamingOutputCallRequest{} } func (m *StreamingOutputCallRequest) String() string { return proto.CompactTextString(m) } func (*StreamingOutputCallRequest) ProtoMessage() {} func (*StreamingOutputCallRequest) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{7} } func (m *StreamingOutputCallRequest) GetResponseType() PayloadType { if m != nil { return m.ResponseType } return PayloadType_COMPRESSABLE } func (m *StreamingOutputCallRequest) GetResponseParameters() []*ResponseParameters { if m != nil { return m.ResponseParameters } return nil } func (m *StreamingOutputCallRequest) GetPayload() *Payload { if m != nil { return m.Payload } return nil } func (m *StreamingOutputCallRequest) GetResponseCompression() CompressionType { if m != nil { return m.ResponseCompression } return CompressionType_NONE } func (m *StreamingOutputCallRequest) GetResponseStatus() *EchoStatus { if m != nil { return m.ResponseStatus } return nil } // Server-streaming response, as configured by the request and parameters. type StreamingOutputCallResponse struct { // Payload to increase response size. Payload *Payload `protobuf:"bytes,1,opt,name=payload" json:"payload,omitempty"` } func (m *StreamingOutputCallResponse) Reset() { *m = StreamingOutputCallResponse{} } func (m *StreamingOutputCallResponse) String() string { return proto.CompactTextString(m) } func (*StreamingOutputCallResponse) ProtoMessage() {} func (*StreamingOutputCallResponse) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{8} } func (m *StreamingOutputCallResponse) GetPayload() *Payload { if m != nil { return m.Payload } return nil } // For reconnect interop test only. // Client tells server what reconnection parameters it used. type ReconnectParams struct { MaxReconnectBackoffMs int32 `protobuf:"varint,1,opt,name=max_reconnect_backoff_ms,json=maxReconnectBackoffMs" json:"max_reconnect_backoff_ms,omitempty"` } func (m *ReconnectParams) Reset() { *m = ReconnectParams{} } func (m *ReconnectParams) String() string { return proto.CompactTextString(m) } func (*ReconnectParams) ProtoMessage() {} func (*ReconnectParams) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{9} } func (m *ReconnectParams) GetMaxReconnectBackoffMs() int32 { if m != nil { return m.MaxReconnectBackoffMs } return 0 } // For reconnect interop test only. // Server tells client whether its reconnects are following the spec and the // reconnect backoffs it saw. 
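// Hand-written illustration (not part of the generated code): a sketch of how a
// server-streaming benchmark call is typically configured, with one
// ResponseParameters entry per expected response message. The function name and all
// sizes/intervals are assumptions made for this example only.
func exampleStreamingOutputRequest() *StreamingOutputCallRequest {
	return &StreamingOutputCallRequest{
		ResponseType: PayloadType_COMPRESSABLE,
		ResponseParameters: []*ResponseParameters{
			{Size: 1024},                  // first response: 1 KiB payload
			{Size: 2048, IntervalUs: 100}, // second response: 2 KiB, sent after a 100 µs pause
		},
		Payload: &Payload{Type: PayloadType_COMPRESSABLE, Body: make([]byte, 1024)},
	}
}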
type ReconnectInfo struct { Passed bool `protobuf:"varint,1,opt,name=passed" json:"passed,omitempty"` BackoffMs []int32 `protobuf:"varint,2,rep,packed,name=backoff_ms,json=backoffMs" json:"backoff_ms,omitempty"` } func (m *ReconnectInfo) Reset() { *m = ReconnectInfo{} } func (m *ReconnectInfo) String() string { return proto.CompactTextString(m) } func (*ReconnectInfo) ProtoMessage() {} func (*ReconnectInfo) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{10} } func (m *ReconnectInfo) GetPassed() bool { if m != nil { return m.Passed } return false } func (m *ReconnectInfo) GetBackoffMs() []int32 { if m != nil { return m.BackoffMs } return nil } func init() { proto.RegisterType((*Payload)(nil), "grpc.testing.Payload") proto.RegisterType((*EchoStatus)(nil), "grpc.testing.EchoStatus") proto.RegisterType((*SimpleRequest)(nil), "grpc.testing.SimpleRequest") proto.RegisterType((*SimpleResponse)(nil), "grpc.testing.SimpleResponse") proto.RegisterType((*StreamingInputCallRequest)(nil), "grpc.testing.StreamingInputCallRequest") proto.RegisterType((*StreamingInputCallResponse)(nil), "grpc.testing.StreamingInputCallResponse") proto.RegisterType((*ResponseParameters)(nil), "grpc.testing.ResponseParameters") proto.RegisterType((*StreamingOutputCallRequest)(nil), "grpc.testing.StreamingOutputCallRequest") proto.RegisterType((*StreamingOutputCallResponse)(nil), "grpc.testing.StreamingOutputCallResponse") proto.RegisterType((*ReconnectParams)(nil), "grpc.testing.ReconnectParams") proto.RegisterType((*ReconnectInfo)(nil), "grpc.testing.ReconnectInfo") proto.RegisterEnum("grpc.testing.PayloadType", PayloadType_name, PayloadType_value) proto.RegisterEnum("grpc.testing.CompressionType", CompressionType_name, CompressionType_value) } func init() { proto.RegisterFile("messages.proto", fileDescriptor1) } var fileDescriptor1 = []byte{ // 652 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xcc, 0x55, 0x4d, 0x6f, 0xd3, 0x40, 0x10, 0xc5, 0xf9, 0xee, 0x24, 0x4d, 0xa3, 0x85, 0x82, 0x5b, 0x54, 0x11, 0x99, 0x4b, 0x54, 0x89, 0x20, 0x05, 0x09, 0x24, 0x0e, 0xa0, 0xb4, 0x4d, 0x51, 0x50, 0x9a, 0x84, 0x75, 0x7b, 0xe1, 0x62, 0x6d, 0x9c, 0x8d, 0x6b, 0x11, 0x7b, 0x8d, 0x77, 0x8d, 0x9a, 0x1e, 0xb8, 0xf3, 0x83, 0xb9, 0xa3, 0x5d, 0x7f, 0xc4, 0x69, 0x7b, 0x68, 0xe1, 0xc2, 0x6d, 0xf7, 0xed, 0x9b, 0x97, 0x79, 0x33, 0xcf, 0x0a, 0x34, 0x3d, 0xca, 0x39, 0x71, 0x28, 0xef, 0x06, 0x21, 0x13, 0x0c, 0x35, 0x9c, 0x30, 0xb0, 0xbb, 0x82, 0x72, 0xe1, 0xfa, 0x8e, 0x31, 0x82, 0xea, 0x94, 0xac, 0x96, 0x8c, 0xcc, 0xd1, 0x2b, 0x28, 0x89, 0x55, 0x40, 0x75, 0xad, 0xad, 0x75, 0x9a, 0xbd, 0xbd, 0x6e, 0x9e, 0xd7, 0x4d, 0x48, 0xe7, 0xab, 0x80, 0x62, 0x45, 0x43, 0x08, 0x4a, 0x33, 0x36, 0x5f, 0xe9, 0x85, 0xb6, 0xd6, 0x69, 0x60, 0x75, 0x36, 0xde, 0x03, 0x0c, 0xec, 0x4b, 0x66, 0x0a, 0x22, 0x22, 0x2e, 0x19, 0x36, 0x9b, 0xc7, 0x82, 0x65, 0xac, 0xce, 0x48, 0x87, 0x6a, 0xd2, 0x8f, 0x2a, 0xdc, 0xc2, 0xe9, 0xd5, 0xf8, 0x55, 0x84, 0x6d, 0xd3, 0xf5, 0x82, 0x25, 0xc5, 0xf4, 0x7b, 0x44, 0xb9, 0x40, 0x1f, 0x60, 0x3b, 0xa4, 0x3c, 0x60, 0x3e, 0xa7, 0xd6, 0xfd, 0x3a, 0x6b, 0xa4, 0x7c, 0x79, 0x43, 0x2f, 0x73, 0xf5, 0xdc, 0xbd, 0x8e, 0x7f, 0xb1, 0xbc, 0x26, 0x99, 0xee, 0x35, 0x45, 0xaf, 0xa1, 0x1a, 0xc4, 0x0a, 0x7a, 0xb1, 0xad, 0x75, 0xea, 0xbd, 0xdd, 0x3b, 0xe5, 0x71, 0xca, 0x92, 0xaa, 0x0b, 0x77, 0xb9, 0xb4, 0x22, 0x4e, 0x43, 0x9f, 0x78, 0x54, 0x2f, 0xb5, 0xb5, 0x4e, 0x0d, 0x37, 0x24, 0x78, 0x91, 0x60, 0xa8, 0x03, 0x2d, 0x45, 0x62, 0x24, 0x12, 0x97, 0x16, 0xb7, 0x59, 0x40, 0xf5, 0xb2, 0xe2, 0x35, 0x25, 0x3e, 
0x91, 0xb0, 0x29, 0x51, 0x34, 0x85, 0x27, 0x59, 0x93, 0x36, 0xf3, 0x82, 0x90, 0x72, 0xee, 0x32, 0x5f, 0xaf, 0x28, 0xaf, 0x07, 0x9b, 0xcd, 0x1c, 0xaf, 0x09, 0xca, 0xef, 0xe3, 0xb4, 0x34, 0xf7, 0x80, 0xfa, 0xb0, 0xb3, 0xb6, 0xad, 0x36, 0xa1, 0x57, 0x95, 0x33, 0x7d, 0x53, 0x6c, 0xbd, 0x29, 0xdc, 0xcc, 0x46, 0xa2, 0xee, 0xc6, 0x4f, 0x68, 0xa6, 0xab, 0x88, 0xf1, 0xfc, 0x98, 0xb4, 0x7b, 0x8d, 0x69, 0x1f, 0x6a, 0xd9, 0x84, 0xe2, 0x4d, 0x67, 0x77, 0xf4, 0x02, 0xea, 0xf9, 0xc1, 0x14, 0xd5, 0x33, 0xb0, 0x6c, 0x28, 0xc6, 0x08, 0xf6, 0x4c, 0x11, 0x52, 0xe2, 0xb9, 0xbe, 0x33, 0xf4, 0x83, 0x48, 0x1c, 0x93, 0xe5, 0x32, 0x8d, 0xc5, 0x43, 0x5b, 0x31, 0xce, 0x61, 0xff, 0x2e, 0xb5, 0xc4, 0xd9, 0x5b, 0x78, 0x46, 0x1c, 0x27, 0xa4, 0x0e, 0x11, 0x74, 0x6e, 0x25, 0x35, 0x71, 0x5e, 0xe2, 0xe0, 0xee, 0xae, 0x9f, 0x13, 0x69, 0x19, 0x1c, 0x63, 0x08, 0x28, 0xd5, 0x98, 0x92, 0x90, 0x78, 0x54, 0xd0, 0x50, 0x65, 0x3e, 0x57, 0xaa, 0xce, 0xd2, 0xae, 0xeb, 0x0b, 0x1a, 0xfe, 0x20, 0x32, 0x35, 0x49, 0x0a, 0x21, 0x85, 0x2e, 0xb8, 0xf1, 0xbb, 0x90, 0xeb, 0x70, 0x12, 0x89, 0x1b, 0x86, 0xff, 0xf5, 0x3b, 0xf8, 0x02, 0x59, 0x4e, 0xac, 0x20, 0x6b, 0x55, 0x2f, 0xb4, 0x8b, 0x9d, 0x7a, 0xaf, 0xbd, 0xa9, 0x72, 0xdb, 0x12, 0x46, 0xe1, 0x6d, 0x9b, 0x0f, 0xfe, 0x6a, 0xfe, 0xcb, 0x98, 0x8f, 0xe1, 0xf9, 0x9d, 0x63, 0xff, 0xcb, 0xcc, 0x1b, 0x9f, 0x61, 0x07, 0x53, 0x9b, 0xf9, 0x3e, 0xb5, 0x85, 0x1a, 0x16, 0x47, 0xef, 0x40, 0xf7, 0xc8, 0x95, 0x15, 0xa6, 0xb0, 0x35, 0x23, 0xf6, 0x37, 0xb6, 0x58, 0x58, 0x1e, 0x4f, 0xe3, 0xe5, 0x91, 0xab, 0xac, 0xea, 0x28, 0x7e, 0x3d, 0xe3, 0xc6, 0x29, 0x6c, 0x67, 0xe8, 0xd0, 0x5f, 0x30, 0xf4, 0x14, 0x2a, 0x01, 0xe1, 0x9c, 0xc6, 0xcd, 0xd4, 0x70, 0x72, 0x43, 0x07, 0x00, 0x39, 0x4d, 0xb9, 0xd4, 0x32, 0xde, 0x9a, 0xa5, 0x3a, 0x87, 0x1f, 0xa1, 0x9e, 0x4b, 0x06, 0x6a, 0x41, 0xe3, 0x78, 0x72, 0x36, 0xc5, 0x03, 0xd3, 0xec, 0x1f, 0x8d, 0x06, 0xad, 0x47, 0x08, 0x41, 0xf3, 0x62, 0xbc, 0x81, 0x69, 0x08, 0xa0, 0x82, 0xfb, 0xe3, 0x93, 0xc9, 0x59, 0xab, 0x70, 0xd8, 0x83, 0x9d, 0x1b, 0xfb, 0x40, 0x35, 0x28, 0x8d, 0x27, 0x63, 0x59, 0x5c, 0x83, 0xd2, 0xa7, 0xaf, 0xc3, 0x69, 0x4b, 0x43, 0x75, 0xa8, 0x9e, 0x0c, 0x4e, 0x47, 0xfd, 0xf3, 0x41, 0xab, 0x30, 0xab, 0xa8, 0xbf, 0x9a, 0x37, 0x7f, 0x02, 0x00, 0x00, 0xff, 0xff, 0xc2, 0x6a, 0xce, 0x1e, 0x7c, 0x06, 0x00, 0x00, } golang-google-grpc-1.6.0/benchmark/grpc_testing/messages.proto000066400000000000000000000110451315416461300245130ustar00rootroot00000000000000// Copyright 2016 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. // Message definitions to be used by integration test service definitions. syntax = "proto3"; package grpc.testing; // The type of payload that should be returned. enum PayloadType { // Compressable text format. COMPRESSABLE = 0; // Uncompressable binary format. UNCOMPRESSABLE = 1; // Randomly chosen from all other formats defined in this enum. RANDOM = 2; } // Compression algorithms enum CompressionType { // No compression NONE = 0; GZIP = 1; DEFLATE = 2; } // A block of data, to simply increase gRPC message size. 
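// Hand-written note (not generated): the benchmark and interop clients typically
// fill `body` with zero bytes of the requested length, so in proto text format a
// 4-byte compressable payload reads roughly:
//
//   type: COMPRESSABLE
//   body: "\000\000\000\000"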
message Payload { // The type of data in body. PayloadType type = 1; // Primary contents of payload. bytes body = 2; } // A protobuf representation for grpc status. This is used by test // clients to specify a status that the server should attempt to return. message EchoStatus { int32 code = 1; string message = 2; } // Unary request. message SimpleRequest { // Desired payload type in the response from the server. // If response_type is RANDOM, server randomly chooses one from other formats. PayloadType response_type = 1; // Desired payload size in the response from the server. // If response_type is COMPRESSABLE, this denotes the size before compression. int32 response_size = 2; // Optional input payload sent along with the request. Payload payload = 3; // Whether SimpleResponse should include username. bool fill_username = 4; // Whether SimpleResponse should include OAuth scope. bool fill_oauth_scope = 5; // Compression algorithm to be used by the server for the response (stream) CompressionType response_compression = 6; // Whether server should return a given status EchoStatus response_status = 7; } // Unary response, as configured by the request. message SimpleResponse { // Payload to increase message size. Payload payload = 1; // The user the request came from, for verifying authentication was // successful when the client expected it. string username = 2; // OAuth scope. string oauth_scope = 3; } // Client-streaming request. message StreamingInputCallRequest { // Optional input payload sent along with the request. Payload payload = 1; // Not expecting any payload from the response. } // Client-streaming response. message StreamingInputCallResponse { // Aggregated size of payloads received from the client. int32 aggregated_payload_size = 1; } // Configuration for a particular response. message ResponseParameters { // Desired payload sizes in responses from the server. // If response_type is COMPRESSABLE, this denotes the size before compression. int32 size = 1; // Desired interval between consecutive responses in the response stream in // microseconds. int32 interval_us = 2; } // Server-streaming request. message StreamingOutputCallRequest { // Desired payload type in the response from the server. // If response_type is RANDOM, the payload from each response in the stream // might be of different types. This is to simulate a mixed type of payload // stream. PayloadType response_type = 1; // Configuration for each expected response message. repeated ResponseParameters response_parameters = 2; // Optional input payload sent along with the request. Payload payload = 3; // Compression algorithm to be used by the server for the response (stream) CompressionType response_compression = 6; // Whether server should return a given status EchoStatus response_status = 7; } // Server-streaming response, as configured by the request and parameters. message StreamingOutputCallResponse { // Payload to increase response size. Payload payload = 1; } // For reconnect interop test only. // Client tells server what reconnection parameters it used. message ReconnectParams { int32 max_reconnect_backoff_ms = 1; } // For reconnect interop test only. // Server tells client whether its reconnects are following the spec and the // reconnect backoffs it saw. message ReconnectInfo { bool passed = 1; repeated int32 backoff_ms = 2; } golang-google-grpc-1.6.0/benchmark/grpc_testing/payloads.pb.go000066400000000000000000000213701315416461300243640ustar00rootroot00000000000000// Code generated by protoc-gen-go. 
DO NOT EDIT. // source: payloads.proto package grpc_testing import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf type ByteBufferParams struct { ReqSize int32 `protobuf:"varint,1,opt,name=req_size,json=reqSize" json:"req_size,omitempty"` RespSize int32 `protobuf:"varint,2,opt,name=resp_size,json=respSize" json:"resp_size,omitempty"` } func (m *ByteBufferParams) Reset() { *m = ByteBufferParams{} } func (m *ByteBufferParams) String() string { return proto.CompactTextString(m) } func (*ByteBufferParams) ProtoMessage() {} func (*ByteBufferParams) Descriptor() ([]byte, []int) { return fileDescriptor2, []int{0} } func (m *ByteBufferParams) GetReqSize() int32 { if m != nil { return m.ReqSize } return 0 } func (m *ByteBufferParams) GetRespSize() int32 { if m != nil { return m.RespSize } return 0 } type SimpleProtoParams struct { ReqSize int32 `protobuf:"varint,1,opt,name=req_size,json=reqSize" json:"req_size,omitempty"` RespSize int32 `protobuf:"varint,2,opt,name=resp_size,json=respSize" json:"resp_size,omitempty"` } func (m *SimpleProtoParams) Reset() { *m = SimpleProtoParams{} } func (m *SimpleProtoParams) String() string { return proto.CompactTextString(m) } func (*SimpleProtoParams) ProtoMessage() {} func (*SimpleProtoParams) Descriptor() ([]byte, []int) { return fileDescriptor2, []int{1} } func (m *SimpleProtoParams) GetReqSize() int32 { if m != nil { return m.ReqSize } return 0 } func (m *SimpleProtoParams) GetRespSize() int32 { if m != nil { return m.RespSize } return 0 } type ComplexProtoParams struct { } func (m *ComplexProtoParams) Reset() { *m = ComplexProtoParams{} } func (m *ComplexProtoParams) String() string { return proto.CompactTextString(m) } func (*ComplexProtoParams) ProtoMessage() {} func (*ComplexProtoParams) Descriptor() ([]byte, []int) { return fileDescriptor2, []int{2} } type PayloadConfig struct { // Types that are valid to be assigned to Payload: // *PayloadConfig_BytebufParams // *PayloadConfig_SimpleParams // *PayloadConfig_ComplexParams Payload isPayloadConfig_Payload `protobuf_oneof:"payload"` } func (m *PayloadConfig) Reset() { *m = PayloadConfig{} } func (m *PayloadConfig) String() string { return proto.CompactTextString(m) } func (*PayloadConfig) ProtoMessage() {} func (*PayloadConfig) Descriptor() ([]byte, []int) { return fileDescriptor2, []int{3} } type isPayloadConfig_Payload interface { isPayloadConfig_Payload() } type PayloadConfig_BytebufParams struct { BytebufParams *ByteBufferParams `protobuf:"bytes,1,opt,name=bytebuf_params,json=bytebufParams,oneof"` } type PayloadConfig_SimpleParams struct { SimpleParams *SimpleProtoParams `protobuf:"bytes,2,opt,name=simple_params,json=simpleParams,oneof"` } type PayloadConfig_ComplexParams struct { ComplexParams *ComplexProtoParams `protobuf:"bytes,3,opt,name=complex_params,json=complexParams,oneof"` } func (*PayloadConfig_BytebufParams) isPayloadConfig_Payload() {} func (*PayloadConfig_SimpleParams) isPayloadConfig_Payload() {} func (*PayloadConfig_ComplexParams) isPayloadConfig_Payload() {} func (m *PayloadConfig) GetPayload() isPayloadConfig_Payload { if m != nil { return m.Payload } return nil } func (m *PayloadConfig) GetBytebufParams() *ByteBufferParams { if x, ok := m.GetPayload().(*PayloadConfig_BytebufParams); ok { return x.BytebufParams } return nil } func (m *PayloadConfig) GetSimpleParams() *SimpleProtoParams { if x, ok := 
m.GetPayload().(*PayloadConfig_SimpleParams); ok { return x.SimpleParams } return nil } func (m *PayloadConfig) GetComplexParams() *ComplexProtoParams { if x, ok := m.GetPayload().(*PayloadConfig_ComplexParams); ok { return x.ComplexParams } return nil } // XXX_OneofFuncs is for the internal use of the proto package. func (*PayloadConfig) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _PayloadConfig_OneofMarshaler, _PayloadConfig_OneofUnmarshaler, _PayloadConfig_OneofSizer, []interface{}{ (*PayloadConfig_BytebufParams)(nil), (*PayloadConfig_SimpleParams)(nil), (*PayloadConfig_ComplexParams)(nil), } } func _PayloadConfig_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*PayloadConfig) // payload switch x := m.Payload.(type) { case *PayloadConfig_BytebufParams: b.EncodeVarint(1<<3 | proto.WireBytes) if err := b.EncodeMessage(x.BytebufParams); err != nil { return err } case *PayloadConfig_SimpleParams: b.EncodeVarint(2<<3 | proto.WireBytes) if err := b.EncodeMessage(x.SimpleParams); err != nil { return err } case *PayloadConfig_ComplexParams: b.EncodeVarint(3<<3 | proto.WireBytes) if err := b.EncodeMessage(x.ComplexParams); err != nil { return err } case nil: default: return fmt.Errorf("PayloadConfig.Payload has unexpected type %T", x) } return nil } func _PayloadConfig_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*PayloadConfig) switch tag { case 1: // payload.bytebuf_params if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(ByteBufferParams) err := b.DecodeMessage(msg) m.Payload = &PayloadConfig_BytebufParams{msg} return true, err case 2: // payload.simple_params if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(SimpleProtoParams) err := b.DecodeMessage(msg) m.Payload = &PayloadConfig_SimpleParams{msg} return true, err case 3: // payload.complex_params if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(ComplexProtoParams) err := b.DecodeMessage(msg) m.Payload = &PayloadConfig_ComplexParams{msg} return true, err default: return false, nil } } func _PayloadConfig_OneofSizer(msg proto.Message) (n int) { m := msg.(*PayloadConfig) // payload switch x := m.Payload.(type) { case *PayloadConfig_BytebufParams: s := proto.Size(x.BytebufParams) n += proto.SizeVarint(1<<3 | proto.WireBytes) n += proto.SizeVarint(uint64(s)) n += s case *PayloadConfig_SimpleParams: s := proto.Size(x.SimpleParams) n += proto.SizeVarint(2<<3 | proto.WireBytes) n += proto.SizeVarint(uint64(s)) n += s case *PayloadConfig_ComplexParams: s := proto.Size(x.ComplexParams) n += proto.SizeVarint(3<<3 | proto.WireBytes) n += proto.SizeVarint(uint64(s)) n += s case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } func init() { proto.RegisterType((*ByteBufferParams)(nil), "grpc.testing.ByteBufferParams") proto.RegisterType((*SimpleProtoParams)(nil), "grpc.testing.SimpleProtoParams") proto.RegisterType((*ComplexProtoParams)(nil), "grpc.testing.ComplexProtoParams") proto.RegisterType((*PayloadConfig)(nil), "grpc.testing.PayloadConfig") } func init() { proto.RegisterFile("payloads.proto", fileDescriptor2) } var fileDescriptor2 = []byte{ // 254 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0xe2, 0x2b, 0x48, 0xac, 0xcc, 0xc9, 0x4f, 
0x4c, 0x29, 0xd6, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0xe2, 0x49, 0x2f, 0x2a, 0x48, 0xd6, 0x2b, 0x49, 0x2d, 0x2e, 0xc9, 0xcc, 0x4b, 0x57, 0xf2, 0xe2, 0x12, 0x70, 0xaa, 0x2c, 0x49, 0x75, 0x2a, 0x4d, 0x4b, 0x4b, 0x2d, 0x0a, 0x48, 0x2c, 0x4a, 0xcc, 0x2d, 0x16, 0x92, 0xe4, 0xe2, 0x28, 0x4a, 0x2d, 0x8c, 0x2f, 0xce, 0xac, 0x4a, 0x95, 0x60, 0x54, 0x60, 0xd4, 0x60, 0x0d, 0x62, 0x2f, 0x4a, 0x2d, 0x0c, 0xce, 0xac, 0x4a, 0x15, 0x92, 0xe6, 0xe2, 0x2c, 0x4a, 0x2d, 0x2e, 0x80, 0xc8, 0x31, 0x81, 0xe5, 0x38, 0x40, 0x02, 0x20, 0x49, 0x25, 0x6f, 0x2e, 0xc1, 0xe0, 0xcc, 0xdc, 0x82, 0x9c, 0xd4, 0x00, 0x90, 0x45, 0x14, 0x1a, 0x26, 0xc2, 0x25, 0xe4, 0x9c, 0x0f, 0x32, 0xac, 0x02, 0xc9, 0x34, 0xa5, 0x6f, 0x8c, 0x5c, 0xbc, 0x01, 0x10, 0xff, 0x38, 0xe7, 0xe7, 0xa5, 0x65, 0xa6, 0x0b, 0xb9, 0x73, 0xf1, 0x25, 0x55, 0x96, 0xa4, 0x26, 0x95, 0xa6, 0xc5, 0x17, 0x80, 0xd5, 0x80, 0x6d, 0xe1, 0x36, 0x92, 0xd3, 0x43, 0xf6, 0xa7, 0x1e, 0xba, 0x27, 0x3d, 0x18, 0x82, 0x78, 0xa1, 0xfa, 0xa0, 0x0e, 0x75, 0xe3, 0xe2, 0x2d, 0x06, 0xbb, 0x1e, 0x66, 0x0e, 0x13, 0xd8, 0x1c, 0x79, 0x54, 0x73, 0x30, 0x3c, 0xe8, 0xc1, 0x10, 0xc4, 0x03, 0xd1, 0x07, 0x35, 0xc7, 0x93, 0x8b, 0x2f, 0x19, 0xe2, 0x70, 0x98, 0x41, 0xcc, 0x60, 0x83, 0x14, 0x50, 0x0d, 0xc2, 0xf4, 0x1c, 0xc8, 0x49, 0x50, 0x9d, 0x10, 0x01, 0x27, 0x4e, 0x2e, 0x76, 0x68, 0xe4, 0x25, 0xb1, 0x81, 0x23, 0xcf, 0x18, 0x10, 0x00, 0x00, 0xff, 0xff, 0xb0, 0x8c, 0x18, 0x4e, 0xce, 0x01, 0x00, 0x00, } golang-google-grpc-1.6.0/benchmark/grpc_testing/payloads.proto000066400000000000000000000021161315416461300245170ustar00rootroot00000000000000// Copyright 2016 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. syntax = "proto3"; package grpc.testing; message ByteBufferParams { int32 req_size = 1; int32 resp_size = 2; } message SimpleProtoParams { int32 req_size = 1; int32 resp_size = 2; } message ComplexProtoParams { // TODO (vpai): Fill this in once the details of complex, representative // protos are decided } message PayloadConfig { oneof payload { ByteBufferParams bytebuf_params = 1; SimpleProtoParams simple_params = 2; ComplexProtoParams complex_params = 3; } } golang-google-grpc-1.6.0/benchmark/grpc_testing/services.pb.go000066400000000000000000000346731315416461300244050ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: services.proto package grpc_testing import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import ( context "golang.org/x/net/context" grpc "google.golang.org/grpc" ) // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // Reference imports to suppress errors if they are not otherwise used. var _ context.Context var _ grpc.ClientConn // This is a compile-time assertion to ensure that this generated file // is compatible with the grpc package it is being compiled against. 
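// Illustrative usage sketch (added for clarity; NOT part of the generated
// code). It assumes a benchmark server is already listening on the
// hypothetical address "localhost:50051" and that the caller imports "log";
// everything else uses identifiers defined in this package or in package grpc.
//
//	conn, err := grpc.Dial("localhost:50051", grpc.WithInsecure())
//	if err != nil {
//		log.Fatalf("grpc.Dial failed: %v", err)
//	}
//	defer conn.Close()
//
//	client := NewBenchmarkServiceClient(conn)
//	resp, err := client.UnaryCall(context.Background(), &SimpleRequest{
//		ResponseSize: 1024,                              // ask for a 1 KiB response payload
//		Payload:      &Payload{Body: make([]byte, 128)}, // request payload carried to the server
//	})
//	if err != nil {
//		log.Fatalf("UnaryCall failed: %v", err)
//	}
//	log.Printf("received %d response bytes", len(resp.GetPayload().GetBody()))
//
// StreamingCall follows the same pattern but returns a bidirectional stream
// whose Send and Recv methods carry the same SimpleRequest/SimpleResponse pair.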
const _ = grpc.SupportPackageIsVersion4 // Client API for BenchmarkService service type BenchmarkServiceClient interface { // One request followed by one response. // The server returns the client payload as-is. UnaryCall(ctx context.Context, in *SimpleRequest, opts ...grpc.CallOption) (*SimpleResponse, error) // One request followed by one response. // The server returns the client payload as-is. StreamingCall(ctx context.Context, opts ...grpc.CallOption) (BenchmarkService_StreamingCallClient, error) } type benchmarkServiceClient struct { cc *grpc.ClientConn } func NewBenchmarkServiceClient(cc *grpc.ClientConn) BenchmarkServiceClient { return &benchmarkServiceClient{cc} } func (c *benchmarkServiceClient) UnaryCall(ctx context.Context, in *SimpleRequest, opts ...grpc.CallOption) (*SimpleResponse, error) { out := new(SimpleResponse) err := grpc.Invoke(ctx, "/grpc.testing.BenchmarkService/UnaryCall", in, out, c.cc, opts...) if err != nil { return nil, err } return out, nil } func (c *benchmarkServiceClient) StreamingCall(ctx context.Context, opts ...grpc.CallOption) (BenchmarkService_StreamingCallClient, error) { stream, err := grpc.NewClientStream(ctx, &_BenchmarkService_serviceDesc.Streams[0], c.cc, "/grpc.testing.BenchmarkService/StreamingCall", opts...) if err != nil { return nil, err } x := &benchmarkServiceStreamingCallClient{stream} return x, nil } type BenchmarkService_StreamingCallClient interface { Send(*SimpleRequest) error Recv() (*SimpleResponse, error) grpc.ClientStream } type benchmarkServiceStreamingCallClient struct { grpc.ClientStream } func (x *benchmarkServiceStreamingCallClient) Send(m *SimpleRequest) error { return x.ClientStream.SendMsg(m) } func (x *benchmarkServiceStreamingCallClient) Recv() (*SimpleResponse, error) { m := new(SimpleResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } // Server API for BenchmarkService service type BenchmarkServiceServer interface { // One request followed by one response. // The server returns the client payload as-is. UnaryCall(context.Context, *SimpleRequest) (*SimpleResponse, error) // One request followed by one response. // The server returns the client payload as-is. 
StreamingCall(BenchmarkService_StreamingCallServer) error } func RegisterBenchmarkServiceServer(s *grpc.Server, srv BenchmarkServiceServer) { s.RegisterService(&_BenchmarkService_serviceDesc, srv) } func _BenchmarkService_UnaryCall_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(SimpleRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(BenchmarkServiceServer).UnaryCall(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/grpc.testing.BenchmarkService/UnaryCall", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(BenchmarkServiceServer).UnaryCall(ctx, req.(*SimpleRequest)) } return interceptor(ctx, in, info, handler) } func _BenchmarkService_StreamingCall_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(BenchmarkServiceServer).StreamingCall(&benchmarkServiceStreamingCallServer{stream}) } type BenchmarkService_StreamingCallServer interface { Send(*SimpleResponse) error Recv() (*SimpleRequest, error) grpc.ServerStream } type benchmarkServiceStreamingCallServer struct { grpc.ServerStream } func (x *benchmarkServiceStreamingCallServer) Send(m *SimpleResponse) error { return x.ServerStream.SendMsg(m) } func (x *benchmarkServiceStreamingCallServer) Recv() (*SimpleRequest, error) { m := new(SimpleRequest) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } var _BenchmarkService_serviceDesc = grpc.ServiceDesc{ ServiceName: "grpc.testing.BenchmarkService", HandlerType: (*BenchmarkServiceServer)(nil), Methods: []grpc.MethodDesc{ { MethodName: "UnaryCall", Handler: _BenchmarkService_UnaryCall_Handler, }, }, Streams: []grpc.StreamDesc{ { StreamName: "StreamingCall", Handler: _BenchmarkService_StreamingCall_Handler, ServerStreams: true, ClientStreams: true, }, }, Metadata: "services.proto", } // Client API for WorkerService service type WorkerServiceClient interface { // Start server with specified workload. // First request sent specifies the ServerConfig followed by ServerStatus // response. After that, a "Mark" can be sent anytime to request the latest // stats. Closing the stream will initiate shutdown of the test server // and once the shutdown has finished, the OK status is sent to terminate // this RPC. RunServer(ctx context.Context, opts ...grpc.CallOption) (WorkerService_RunServerClient, error) // Start client with specified workload. // First request sent specifies the ClientConfig followed by ClientStatus // response. After that, a "Mark" can be sent anytime to request the latest // stats. Closing the stream will initiate shutdown of the test client // and once the shutdown has finished, the OK status is sent to terminate // this RPC. 
RunClient(ctx context.Context, opts ...grpc.CallOption) (WorkerService_RunClientClient, error) // Just return the core count - unary call CoreCount(ctx context.Context, in *CoreRequest, opts ...grpc.CallOption) (*CoreResponse, error) // Quit this worker QuitWorker(ctx context.Context, in *Void, opts ...grpc.CallOption) (*Void, error) } type workerServiceClient struct { cc *grpc.ClientConn } func NewWorkerServiceClient(cc *grpc.ClientConn) WorkerServiceClient { return &workerServiceClient{cc} } func (c *workerServiceClient) RunServer(ctx context.Context, opts ...grpc.CallOption) (WorkerService_RunServerClient, error) { stream, err := grpc.NewClientStream(ctx, &_WorkerService_serviceDesc.Streams[0], c.cc, "/grpc.testing.WorkerService/RunServer", opts...) if err != nil { return nil, err } x := &workerServiceRunServerClient{stream} return x, nil } type WorkerService_RunServerClient interface { Send(*ServerArgs) error Recv() (*ServerStatus, error) grpc.ClientStream } type workerServiceRunServerClient struct { grpc.ClientStream } func (x *workerServiceRunServerClient) Send(m *ServerArgs) error { return x.ClientStream.SendMsg(m) } func (x *workerServiceRunServerClient) Recv() (*ServerStatus, error) { m := new(ServerStatus) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func (c *workerServiceClient) RunClient(ctx context.Context, opts ...grpc.CallOption) (WorkerService_RunClientClient, error) { stream, err := grpc.NewClientStream(ctx, &_WorkerService_serviceDesc.Streams[1], c.cc, "/grpc.testing.WorkerService/RunClient", opts...) if err != nil { return nil, err } x := &workerServiceRunClientClient{stream} return x, nil } type WorkerService_RunClientClient interface { Send(*ClientArgs) error Recv() (*ClientStatus, error) grpc.ClientStream } type workerServiceRunClientClient struct { grpc.ClientStream } func (x *workerServiceRunClientClient) Send(m *ClientArgs) error { return x.ClientStream.SendMsg(m) } func (x *workerServiceRunClientClient) Recv() (*ClientStatus, error) { m := new(ClientStatus) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func (c *workerServiceClient) CoreCount(ctx context.Context, in *CoreRequest, opts ...grpc.CallOption) (*CoreResponse, error) { out := new(CoreResponse) err := grpc.Invoke(ctx, "/grpc.testing.WorkerService/CoreCount", in, out, c.cc, opts...) if err != nil { return nil, err } return out, nil } func (c *workerServiceClient) QuitWorker(ctx context.Context, in *Void, opts ...grpc.CallOption) (*Void, error) { out := new(Void) err := grpc.Invoke(ctx, "/grpc.testing.WorkerService/QuitWorker", in, out, c.cc, opts...) if err != nil { return nil, err } return out, nil } // Server API for WorkerService service type WorkerServiceServer interface { // Start server with specified workload. // First request sent specifies the ServerConfig followed by ServerStatus // response. After that, a "Mark" can be sent anytime to request the latest // stats. Closing the stream will initiate shutdown of the test server // and once the shutdown has finished, the OK status is sent to terminate // this RPC. RunServer(WorkerService_RunServerServer) error // Start client with specified workload. // First request sent specifies the ClientConfig followed by ClientStatus // response. After that, a "Mark" can be sent anytime to request the latest // stats. Closing the stream will initiate shutdown of the test client // and once the shutdown has finished, the OK status is sent to terminate // this RPC. 
RunClient(WorkerService_RunClientServer) error // Just return the core count - unary call CoreCount(context.Context, *CoreRequest) (*CoreResponse, error) // Quit this worker QuitWorker(context.Context, *Void) (*Void, error) } func RegisterWorkerServiceServer(s *grpc.Server, srv WorkerServiceServer) { s.RegisterService(&_WorkerService_serviceDesc, srv) } func _WorkerService_RunServer_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(WorkerServiceServer).RunServer(&workerServiceRunServerServer{stream}) } type WorkerService_RunServerServer interface { Send(*ServerStatus) error Recv() (*ServerArgs, error) grpc.ServerStream } type workerServiceRunServerServer struct { grpc.ServerStream } func (x *workerServiceRunServerServer) Send(m *ServerStatus) error { return x.ServerStream.SendMsg(m) } func (x *workerServiceRunServerServer) Recv() (*ServerArgs, error) { m := new(ServerArgs) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func _WorkerService_RunClient_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(WorkerServiceServer).RunClient(&workerServiceRunClientServer{stream}) } type WorkerService_RunClientServer interface { Send(*ClientStatus) error Recv() (*ClientArgs, error) grpc.ServerStream } type workerServiceRunClientServer struct { grpc.ServerStream } func (x *workerServiceRunClientServer) Send(m *ClientStatus) error { return x.ServerStream.SendMsg(m) } func (x *workerServiceRunClientServer) Recv() (*ClientArgs, error) { m := new(ClientArgs) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func _WorkerService_CoreCount_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(CoreRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(WorkerServiceServer).CoreCount(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/grpc.testing.WorkerService/CoreCount", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(WorkerServiceServer).CoreCount(ctx, req.(*CoreRequest)) } return interceptor(ctx, in, info, handler) } func _WorkerService_QuitWorker_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(Void) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(WorkerServiceServer).QuitWorker(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/grpc.testing.WorkerService/QuitWorker", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(WorkerServiceServer).QuitWorker(ctx, req.(*Void)) } return interceptor(ctx, in, info, handler) } var _WorkerService_serviceDesc = grpc.ServiceDesc{ ServiceName: "grpc.testing.WorkerService", HandlerType: (*WorkerServiceServer)(nil), Methods: []grpc.MethodDesc{ { MethodName: "CoreCount", Handler: _WorkerService_CoreCount_Handler, }, { MethodName: "QuitWorker", Handler: _WorkerService_QuitWorker_Handler, }, }, Streams: []grpc.StreamDesc{ { StreamName: "RunServer", Handler: _WorkerService_RunServer_Handler, ServerStreams: true, ClientStreams: true, }, { StreamName: "RunClient", Handler: _WorkerService_RunClient_Handler, ServerStreams: true, ClientStreams: true, }, }, Metadata: "services.proto", } func init() { proto.RegisterFile("services.proto", fileDescriptor3) } var fileDescriptor3 = []byte{ // 255 
bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xa4, 0x91, 0xc1, 0x4a, 0xc4, 0x30, 0x10, 0x86, 0xa9, 0x07, 0xa1, 0xc1, 0x2e, 0x92, 0x93, 0x46, 0x1f, 0xc0, 0x53, 0x91, 0xd5, 0x17, 0x70, 0x8b, 0x1e, 0x05, 0xb7, 0xa8, 0xe7, 0x58, 0x87, 0x1a, 0x36, 0xcd, 0xd4, 0x99, 0x89, 0xe0, 0x93, 0xf8, 0x0e, 0x3e, 0xa5, 0xec, 0x66, 0x57, 0xd6, 0x92, 0x9b, 0xc7, 0xf9, 0xbf, 0xe1, 0x23, 0x7f, 0x46, 0xcd, 0x18, 0xe8, 0xc3, 0x75, 0xc0, 0xf5, 0x48, 0x28, 0xa8, 0x8f, 0x7a, 0x1a, 0xbb, 0x5a, 0x80, 0xc5, 0x85, 0xde, 0xcc, 0x06, 0x60, 0xb6, 0xfd, 0x8e, 0x9a, 0xaa, 0xc3, 0x20, 0x84, 0x3e, 0x8d, 0xf3, 0xef, 0x42, 0x1d, 0x2f, 0x20, 0x74, 0x6f, 0x83, 0xa5, 0x55, 0x9b, 0x44, 0xfa, 0x4e, 0x95, 0x8f, 0xc1, 0xd2, 0x67, 0x63, 0xbd, 0xd7, 0x67, 0xf5, 0xbe, 0xaf, 0x6e, 0xdd, 0x30, 0x7a, 0x58, 0xc2, 0x7b, 0x04, 0x16, 0x73, 0x9e, 0x87, 0x3c, 0x62, 0x60, 0xd0, 0xf7, 0xaa, 0x6a, 0x85, 0xc0, 0x0e, 0x2e, 0xf4, 0xff, 0x74, 0x5d, 0x14, 0x97, 0xc5, 0xfc, 0xeb, 0x40, 0x55, 0xcf, 0x48, 0x2b, 0xa0, 0xdd, 0x4b, 0x6f, 0x55, 0xb9, 0x8c, 0x61, 0x3d, 0x01, 0xe9, 0x93, 0x89, 0x60, 0x93, 0xde, 0x50, 0xcf, 0xc6, 0xe4, 0x48, 0x2b, 0x56, 0x22, 0xaf, 0xc5, 0x5b, 0x4d, 0xe3, 0x1d, 0x04, 0x99, 0x6a, 0x52, 0x9a, 0xd3, 0x24, 0xb2, 0xa7, 0x59, 0xa8, 0xb2, 0x41, 0x82, 0x06, 0x63, 0x10, 0x7d, 0x3a, 0x59, 0x46, 0xfa, 0x6d, 0x6a, 0x72, 0x68, 0xfb, 0x67, 0xd7, 0x4a, 0x3d, 0x44, 0x27, 0xa9, 0xa6, 0xd6, 0x7f, 0x37, 0x9f, 0xd0, 0xbd, 0x9a, 0x4c, 0xf6, 0x72, 0xb8, 0xb9, 0xe6, 0xd5, 0x4f, 0x00, 0x00, 0x00, 0xff, 0xff, 0x3b, 0x84, 0x02, 0xe3, 0x0c, 0x02, 0x00, 0x00, } golang-google-grpc-1.6.0/benchmark/grpc_testing/services.proto000066400000000000000000000042261315416461300245320ustar00rootroot00000000000000// Copyright 2016 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. // An integration test service that covers all the method signature permutations // of unary/streaming requests/responses. syntax = "proto3"; import "messages.proto"; import "control.proto"; package grpc.testing; service BenchmarkService { // One request followed by one response. // The server returns the client payload as-is. rpc UnaryCall(SimpleRequest) returns (SimpleResponse); // One request followed by one response. // The server returns the client payload as-is. rpc StreamingCall(stream SimpleRequest) returns (stream SimpleResponse); } service WorkerService { // Start server with specified workload. // First request sent specifies the ServerConfig followed by ServerStatus // response. After that, a "Mark" can be sent anytime to request the latest // stats. Closing the stream will initiate shutdown of the test server // and once the shutdown has finished, the OK status is sent to terminate // this RPC. rpc RunServer(stream ServerArgs) returns (stream ServerStatus); // Start client with specified workload. // First request sent specifies the ClientConfig followed by ClientStatus // response. After that, a "Mark" can be sent anytime to request the latest // stats. 
Closing the stream will initiate shutdown of the test client // and once the shutdown has finished, the OK status is sent to terminate // this RPC. rpc RunClient(stream ClientArgs) returns (stream ClientStatus); // Just return the core count - unary call rpc CoreCount(CoreRequest) returns (CoreResponse); // Quit this worker rpc QuitWorker(Void) returns (Void); } golang-google-grpc-1.6.0/benchmark/grpc_testing/stats.pb.go000066400000000000000000000167211315416461300237120ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: stats.proto package grpc_testing import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf type ServerStats struct { // wall clock time change in seconds since last reset TimeElapsed float64 `protobuf:"fixed64,1,opt,name=time_elapsed,json=timeElapsed" json:"time_elapsed,omitempty"` // change in user time (in seconds) used by the server since last reset TimeUser float64 `protobuf:"fixed64,2,opt,name=time_user,json=timeUser" json:"time_user,omitempty"` // change in server time (in seconds) used by the server process and all // threads since last reset TimeSystem float64 `protobuf:"fixed64,3,opt,name=time_system,json=timeSystem" json:"time_system,omitempty"` } func (m *ServerStats) Reset() { *m = ServerStats{} } func (m *ServerStats) String() string { return proto.CompactTextString(m) } func (*ServerStats) ProtoMessage() {} func (*ServerStats) Descriptor() ([]byte, []int) { return fileDescriptor4, []int{0} } func (m *ServerStats) GetTimeElapsed() float64 { if m != nil { return m.TimeElapsed } return 0 } func (m *ServerStats) GetTimeUser() float64 { if m != nil { return m.TimeUser } return 0 } func (m *ServerStats) GetTimeSystem() float64 { if m != nil { return m.TimeSystem } return 0 } // Histogram params based on grpc/support/histogram.c type HistogramParams struct { Resolution float64 `protobuf:"fixed64,1,opt,name=resolution" json:"resolution,omitempty"` MaxPossible float64 `protobuf:"fixed64,2,opt,name=max_possible,json=maxPossible" json:"max_possible,omitempty"` } func (m *HistogramParams) Reset() { *m = HistogramParams{} } func (m *HistogramParams) String() string { return proto.CompactTextString(m) } func (*HistogramParams) ProtoMessage() {} func (*HistogramParams) Descriptor() ([]byte, []int) { return fileDescriptor4, []int{1} } func (m *HistogramParams) GetResolution() float64 { if m != nil { return m.Resolution } return 0 } func (m *HistogramParams) GetMaxPossible() float64 { if m != nil { return m.MaxPossible } return 0 } // Histogram data based on grpc/support/histogram.c type HistogramData struct { Bucket []uint32 `protobuf:"varint,1,rep,packed,name=bucket" json:"bucket,omitempty"` MinSeen float64 `protobuf:"fixed64,2,opt,name=min_seen,json=minSeen" json:"min_seen,omitempty"` MaxSeen float64 `protobuf:"fixed64,3,opt,name=max_seen,json=maxSeen" json:"max_seen,omitempty"` Sum float64 `protobuf:"fixed64,4,opt,name=sum" json:"sum,omitempty"` SumOfSquares float64 `protobuf:"fixed64,5,opt,name=sum_of_squares,json=sumOfSquares" json:"sum_of_squares,omitempty"` Count float64 `protobuf:"fixed64,6,opt,name=count" json:"count,omitempty"` } func (m *HistogramData) Reset() { *m = HistogramData{} } func (m *HistogramData) String() string { return proto.CompactTextString(m) } func (*HistogramData) ProtoMessage() {} func (*HistogramData) Descriptor() ([]byte, []int) { return 
fileDescriptor4, []int{2} } func (m *HistogramData) GetBucket() []uint32 { if m != nil { return m.Bucket } return nil } func (m *HistogramData) GetMinSeen() float64 { if m != nil { return m.MinSeen } return 0 } func (m *HistogramData) GetMaxSeen() float64 { if m != nil { return m.MaxSeen } return 0 } func (m *HistogramData) GetSum() float64 { if m != nil { return m.Sum } return 0 } func (m *HistogramData) GetSumOfSquares() float64 { if m != nil { return m.SumOfSquares } return 0 } func (m *HistogramData) GetCount() float64 { if m != nil { return m.Count } return 0 } type ClientStats struct { // Latency histogram. Data points are in nanoseconds. Latencies *HistogramData `protobuf:"bytes,1,opt,name=latencies" json:"latencies,omitempty"` // See ServerStats for details. TimeElapsed float64 `protobuf:"fixed64,2,opt,name=time_elapsed,json=timeElapsed" json:"time_elapsed,omitempty"` TimeUser float64 `protobuf:"fixed64,3,opt,name=time_user,json=timeUser" json:"time_user,omitempty"` TimeSystem float64 `protobuf:"fixed64,4,opt,name=time_system,json=timeSystem" json:"time_system,omitempty"` } func (m *ClientStats) Reset() { *m = ClientStats{} } func (m *ClientStats) String() string { return proto.CompactTextString(m) } func (*ClientStats) ProtoMessage() {} func (*ClientStats) Descriptor() ([]byte, []int) { return fileDescriptor4, []int{3} } func (m *ClientStats) GetLatencies() *HistogramData { if m != nil { return m.Latencies } return nil } func (m *ClientStats) GetTimeElapsed() float64 { if m != nil { return m.TimeElapsed } return 0 } func (m *ClientStats) GetTimeUser() float64 { if m != nil { return m.TimeUser } return 0 } func (m *ClientStats) GetTimeSystem() float64 { if m != nil { return m.TimeSystem } return 0 } func init() { proto.RegisterType((*ServerStats)(nil), "grpc.testing.ServerStats") proto.RegisterType((*HistogramParams)(nil), "grpc.testing.HistogramParams") proto.RegisterType((*HistogramData)(nil), "grpc.testing.HistogramData") proto.RegisterType((*ClientStats)(nil), "grpc.testing.ClientStats") } func init() { proto.RegisterFile("stats.proto", fileDescriptor4) } var fileDescriptor4 = []byte{ // 341 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x84, 0x92, 0xc1, 0x4a, 0xeb, 0x40, 0x14, 0x86, 0x49, 0xd3, 0xf6, 0xb6, 0x27, 0xed, 0xbd, 0x97, 0x41, 0x24, 0x52, 0xd0, 0x1a, 0x5c, 0x74, 0x95, 0x85, 0xae, 0x5c, 0xab, 0xe0, 0xce, 0xd2, 0xe8, 0x3a, 0x4c, 0xe3, 0x69, 0x19, 0xcc, 0xcc, 0xc4, 0x39, 0x33, 0x12, 0x1f, 0x49, 0x7c, 0x49, 0xc9, 0x24, 0x68, 0x55, 0xd0, 0x5d, 0xe6, 0xfb, 0x7e, 0xe6, 0xe4, 0xe4, 0x0f, 0x44, 0x64, 0xb9, 0xa5, 0xb4, 0x32, 0xda, 0x6a, 0x36, 0xd9, 0x9a, 0xaa, 0x48, 0x2d, 0x92, 0x15, 0x6a, 0x9b, 0x28, 0x88, 0x32, 0x34, 0x4f, 0x68, 0xb2, 0x26, 0xc2, 0x8e, 0x61, 0x62, 0x85, 0xc4, 0x1c, 0x4b, 0x5e, 0x11, 0xde, 0xc7, 0xc1, 0x3c, 0x58, 0x04, 0xab, 0xa8, 0x61, 0x57, 0x2d, 0x62, 0x33, 0x18, 0xfb, 0x88, 0x23, 0x34, 0x71, 0xcf, 0xfb, 0x51, 0x03, 0xee, 0x08, 0x0d, 0x3b, 0x02, 0x9f, 0xcd, 0xe9, 0x99, 0x2c, 0xca, 0x38, 0xf4, 0x1a, 0x1a, 0x94, 0x79, 0x92, 0xdc, 0xc2, 0xbf, 0x6b, 0x41, 0x56, 0x6f, 0x0d, 0x97, 0x4b, 0x6e, 0xb8, 0x24, 0x76, 0x08, 0x60, 0x90, 0x74, 0xe9, 0xac, 0xd0, 0xaa, 0x9b, 0xb8, 0x43, 0x9a, 0x77, 0x92, 0xbc, 0xce, 0x2b, 0x4d, 0x24, 0xd6, 0x25, 0x76, 0x33, 0x23, 0xc9, 0xeb, 0x65, 0x87, 0x92, 0xd7, 0x00, 0xa6, 0xef, 0xd7, 0x5e, 0x72, 0xcb, 0xd9, 0x3e, 0x0c, 0xd7, 0xae, 0x78, 0x40, 0x1b, 0x07, 0xf3, 0x70, 0x31, 0x5d, 0x75, 0x27, 0x76, 0x00, 0x23, 0x29, 0x54, 0x4e, 0x88, 0xaa, 0xbb, 0xe8, 0x8f, 0x14, 0x2a, 0x43, 0x54, 0x5e, 
0xf1, 0xba, 0x55, 0x61, 0xa7, 0x78, 0xed, 0xd5, 0x7f, 0x08, 0xc9, 0xc9, 0xb8, 0xef, 0x69, 0xf3, 0xc8, 0x4e, 0xe0, 0x2f, 0x39, 0x99, 0xeb, 0x4d, 0x4e, 0x8f, 0x8e, 0x1b, 0xa4, 0x78, 0xe0, 0xe5, 0x84, 0x9c, 0xbc, 0xd9, 0x64, 0x2d, 0x63, 0x7b, 0x30, 0x28, 0xb4, 0x53, 0x36, 0x1e, 0x7a, 0xd9, 0x1e, 0x92, 0x97, 0x00, 0xa2, 0x8b, 0x52, 0xa0, 0xb2, 0xed, 0x47, 0x3f, 0x87, 0x71, 0xc9, 0x2d, 0xaa, 0x42, 0x20, 0xf9, 0xfd, 0xa3, 0xd3, 0x59, 0xba, 0xdb, 0x52, 0xfa, 0x69, 0xb7, 0xd5, 0x47, 0xfa, 0x5b, 0x5f, 0xbd, 0x5f, 0xfa, 0x0a, 0x7f, 0xee, 0xab, 0xff, 0xb5, 0xaf, 0xf5, 0xd0, 0xff, 0x34, 0x67, 0x6f, 0x01, 0x00, 0x00, 0xff, 0xff, 0xea, 0x75, 0x34, 0x90, 0x43, 0x02, 0x00, 0x00, } golang-google-grpc-1.6.0/benchmark/grpc_testing/stats.proto000066400000000000000000000031461315416461300240450ustar00rootroot00000000000000// Copyright 2016 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. syntax = "proto3"; package grpc.testing; message ServerStats { // wall clock time change in seconds since last reset double time_elapsed = 1; // change in user time (in seconds) used by the server since last reset double time_user = 2; // change in server time (in seconds) used by the server process and all // threads since last reset double time_system = 3; } // Histogram params based on grpc/support/histogram.c message HistogramParams { double resolution = 1; // first bucket is [0, 1 + resolution) double max_possible = 2; // use enough buckets to allow this value } // Histogram data based on grpc/support/histogram.c message HistogramData { repeated uint32 bucket = 1; double min_seen = 2; double max_seen = 3; double sum = 4; double sum_of_squares = 5; double count = 6; } message ClientStats { // Latency histogram. Data points are in nanoseconds. HistogramData latencies = 1; // See ServerStats for details. double time_elapsed = 2; double time_user = 3; double time_system = 4; } golang-google-grpc-1.6.0/benchmark/latency/000077500000000000000000000000001315416461300205655ustar00rootroot00000000000000golang-google-grpc-1.6.0/benchmark/latency/latency.go000066400000000000000000000217531315416461300225630ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package latency provides wrappers for net.Conn, net.Listener, and // net.Dialers, designed to interoperate to inject real-world latency into // network connections. 
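// An illustrative wiring sketch, added for clarity (not part of the original
// file). The rate, latency, and MTU values below are arbitrary, and error
// handling is elided:
//
//	// Simulate a 256 kbps link with 40ms one-way latency and a 1500-byte MTU.
//	n := &latency.Network{Kbps: 256, Latency: 40 * time.Millisecond, MTU: 1500}
//
//	// Server side: wrap a real listener so accepted connections are delayed.
//	lis, err := net.Listen("tcp", "localhost:0")
//	if err != nil { /* handle error */ }
//	lis = n.Listener(lis)
//
//	// Client side: wrap net.Dial so outgoing connections go through the same
//	// simulated network (both ends of the connection are wrapped).
//	dial := n.Dialer(net.Dial)
//	conn, err := dial("tcp", lis.Addr().String())
//	if err != nil { /* handle error */ }
//	defer conn.Close()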
package latency import ( "bytes" "encoding/binary" "fmt" "io" "net" "time" "golang.org/x/net/context" ) // Dialer is a function matching the signature of net.Dial. type Dialer func(network, address string) (net.Conn, error) // TimeoutDialer is a function matching the signature of net.DialTimeout. type TimeoutDialer func(network, address string, timeout time.Duration) (net.Conn, error) // ContextDialer is a function matching the signature of // net.Dialer.DialContext. type ContextDialer func(ctx context.Context, network, address string) (net.Conn, error) // Network represents a network with the given bandwidth, latency, and MTU // (Maximum Transmission Unit) configuration, and can produce wrappers of // net.Listeners, net.Conn, and various forms of dialing functions. The // Listeners and Dialers/Conns on both sides of connections must come from this // package, but need not be created from the same Network. Latency is computed // when sending (in Write), and is injected when receiving (in Read). This // allows senders' Write calls to be non-blocking, as in real-world // applications. // // Note: Latency is injected by the sender specifying the absolute time data // should be available, and the reader delaying until that time arrives to // provide the data. This package attempts to counter-act the effects of clock // drift and existing network latency by measuring the delay between the // sender's transmission time and the receiver's reception time during startup. // No attempt is made to measure the existing bandwidth of the connection. type Network struct { Kbps int // Kilobits per second; if non-positive, infinite Latency time.Duration // One-way latency (sending); if non-positive, no delay MTU int // Bytes per packet; if non-positive, infinite } // Conn returns a net.Conn that wraps c and injects n's latency into that // connection. This function also imposes latency for connection creation. // If n's Latency is lower than the measured latency in c, an error is // returned. func (n *Network) Conn(c net.Conn) (net.Conn, error) { start := now() nc := &conn{Conn: c, network: n, readBuf: new(bytes.Buffer)} if err := nc.sync(); err != nil { return nil, err } sleep(start.Add(nc.delay).Sub(now())) return nc, nil } type conn struct { net.Conn network *Network readBuf *bytes.Buffer // one packet worth of data received lastSendEnd time.Time // time the previous Write should be fully on the wire delay time.Duration // desired latency - measured latency } // header is sent before all data transmitted by the application. type header struct { ReadTime int64 // Time the reader is allowed to read this packet (UnixNano) Sz int32 // Size of the data in the packet } func (c *conn) Write(p []byte) (n int, err error) { tNow := now() if c.lastSendEnd.Before(tNow) { c.lastSendEnd = tNow } for len(p) > 0 { pkt := p if c.network.MTU > 0 && len(pkt) > c.network.MTU { pkt = pkt[:c.network.MTU] p = p[c.network.MTU:] } else { p = nil } if c.network.Kbps > 0 { if congestion := c.lastSendEnd.Sub(tNow) - c.delay; congestion > 0 { // The network is full; sleep until this packet can be sent. 
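// (Added explanatory note, not in the original source.) lastSendEnd tracks
// when all previously written bytes will have drained onto the simulated
// wire, and c.delay is the extra latency injected on top of the measured real
// latency. Once lastSendEnd runs more than c.delay ahead of "now", roughly a
// full bandwidth-delay product is already in flight; for example, with Kbps=8
// (1024 bytes per second) and Latency=1s that is about 1024 bytes, matching
// the bdpBytes figure used in TestBufferBloat. Blocking here keeps the sender
// from building an unbounded backlog.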
sleep(congestion) tNow = tNow.Add(congestion) } } c.lastSendEnd = c.lastSendEnd.Add(c.network.pktTime(len(pkt))) hdr := header{ReadTime: c.lastSendEnd.Add(c.delay).UnixNano(), Sz: int32(len(pkt))} if err := binary.Write(c.Conn, binary.BigEndian, hdr); err != nil { return n, err } x, err := c.Conn.Write(pkt) n += x if err != nil { return n, err } } return n, nil } func (c *conn) Read(p []byte) (n int, err error) { if c.readBuf.Len() == 0 { var hdr header if err := binary.Read(c.Conn, binary.BigEndian, &hdr); err != nil { return 0, err } defer func() { sleep(time.Unix(0, hdr.ReadTime).Sub(now())) }() if _, err := io.CopyN(c.readBuf, c.Conn, int64(hdr.Sz)); err != nil { return 0, err } } // Read from readBuf. return c.readBuf.Read(p) } // sync does a handshake and then measures the latency on the network in // coordination with the other side. func (c *conn) sync() error { const ( pingMsg = "syncPing" warmup = 10 // minimum number of iterations to measure latency giveUp = 50 // maximum number of iterations to measure latency accuracy = time.Millisecond // req'd accuracy to stop early goodRun = 3 // stop early if latency within accuracy this many times ) type syncMsg struct { SendT int64 // Time sent. If zero, stop. RecvT int64 // Time received. If zero, fill in and respond. } // A trivial handshake if err := binary.Write(c.Conn, binary.BigEndian, []byte(pingMsg)); err != nil { return err } var ping [8]byte if err := binary.Read(c.Conn, binary.BigEndian, &ping); err != nil { return err } else if string(ping[:]) != pingMsg { return fmt.Errorf("malformed handshake message: %v (want %q)", ping, pingMsg) } // Both sides are alive and syncing. Calculate network delay / clock skew. att := 0 good := 0 var latency time.Duration localDone, remoteDone := false, false send := true for !localDone || !remoteDone { if send { if err := binary.Write(c.Conn, binary.BigEndian, syncMsg{SendT: now().UnixNano()}); err != nil { return err } att++ send = false } // Block until we get a syncMsg m := syncMsg{} if err := binary.Read(c.Conn, binary.BigEndian, &m); err != nil { return err } if m.RecvT == 0 { // Message initiated from other side. if m.SendT == 0 { remoteDone = true continue } // Send response. m.RecvT = now().UnixNano() if err := binary.Write(c.Conn, binary.BigEndian, m); err != nil { return err } continue } lag := time.Duration(m.RecvT - m.SendT) latency += lag avgLatency := latency / time.Duration(att) if e := lag - avgLatency; e > -accuracy && e < accuracy { good++ } else { good = 0 } if att < giveUp && (att < warmup || good < goodRun) { send = true continue } localDone = true latency = avgLatency // Tell the other side we're done. if err := binary.Write(c.Conn, binary.BigEndian, syncMsg{}); err != nil { return err } } if c.network.Latency <= 0 { return nil } c.delay = c.network.Latency - latency if c.delay < 0 { return fmt.Errorf("measured network latency (%v) higher than desired latency (%v)", latency, c.network.Latency) } return nil } // Listener returns a net.Listener that wraps l and injects n's latency in its // connections. func (n *Network) Listener(l net.Listener) net.Listener { return &listener{Listener: l, network: n} } type listener struct { net.Listener network *Network } func (l *listener) Accept() (net.Conn, error) { c, err := l.Listener.Accept() if err != nil { return nil, err } return l.network.Conn(c) } // Dialer returns a Dialer that wraps d and injects n's latency in its // connections. n's Latency is also injected to the connection's creation. 
func (n *Network) Dialer(d Dialer) Dialer { return func(network, address string) (net.Conn, error) { conn, err := d(network, address) if err != nil { return nil, err } return n.Conn(conn) } } // TimeoutDialer returns a TimeoutDialer that wraps d and injects n's latency // in its connections. n's Latency is also injected to the connection's // creation. func (n *Network) TimeoutDialer(d TimeoutDialer) TimeoutDialer { return func(network, address string, timeout time.Duration) (net.Conn, error) { conn, err := d(network, address, timeout) if err != nil { return nil, err } return n.Conn(conn) } } // ContextDialer returns a ContextDialer that wraps d and injects n's latency // in its connections. n's Latency is also injected to the connection's // creation. func (n *Network) ContextDialer(d ContextDialer) ContextDialer { return func(ctx context.Context, network, address string) (net.Conn, error) { conn, err := d(ctx, network, address) if err != nil { return nil, err } return n.Conn(conn) } } // pktTime returns the time it takes to transmit one packet of data of size b // in bytes. func (n *Network) pktTime(b int) time.Duration { if n.Kbps <= 0 { return time.Duration(0) } return time.Duration(b) * time.Second / time.Duration(n.Kbps*(1024/8)) } // Wrappers for testing var now = time.Now var sleep = time.Sleep golang-google-grpc-1.6.0/benchmark/latency/latency_test.go000066400000000000000000000245251315416461300236220ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package latency import ( "bytes" "fmt" "net" "reflect" "sync" "testing" "time" ) // bufConn is a net.Conn implemented by a bytes.Buffer (which is a ReadWriter). type bufConn struct { *bytes.Buffer } func (bufConn) Close() error { panic("unimplemented") } func (bufConn) LocalAddr() net.Addr { panic("unimplemented") } func (bufConn) RemoteAddr() net.Addr { panic("unimplemented") } func (bufConn) SetDeadline(t time.Time) error { panic("unimplemneted") } func (bufConn) SetReadDeadline(t time.Time) error { panic("unimplemneted") } func (bufConn) SetWriteDeadline(t time.Time) error { panic("unimplemneted") } func restoreHooks() func() { s := sleep n := now return func() { sleep = s now = n } } func TestConn(t *testing.T) { defer restoreHooks()() // Constant time. now = func() time.Time { return time.Unix(123, 456) } // Capture sleep times for checking later. var sleepTimes []time.Duration sleep = func(t time.Duration) { sleepTimes = append(sleepTimes, t) } wantSleeps := func(want ...time.Duration) { if !reflect.DeepEqual(want, sleepTimes) { t.Fatalf("sleepTimes = %v; want %v", sleepTimes, want) } sleepTimes = nil } // Use a fairly high latency to cause a large BDP and avoid sleeps while // writing due to simulation of full buffers. latency := 1 * time.Second c, err := (&Network{Kbps: 1, Latency: latency, MTU: 5}).Conn(bufConn{&bytes.Buffer{}}) if err != nil { t.Fatalf("Unexpected error creating connection: %v", err) } wantSleeps(latency) // Connection creation delay. // 1 kbps = 128 Bps. 
Divides evenly by 1 second using nanos. byteLatency := time.Duration(time.Second / 128) write := func(b []byte) { n, err := c.Write(b) if n != len(b) || err != nil { t.Fatalf("c.Write(%v) = %v, %v; want %v, nil", b, n, err, len(b)) } } write([]byte{1, 2, 3, 4, 5}) // One full packet pkt1Time := latency + byteLatency*5 write([]byte{6}) // One partial packet pkt2Time := pkt1Time + byteLatency write([]byte{7, 8, 9, 10, 11, 12, 13}) // Two packets pkt3Time := pkt2Time + byteLatency*5 pkt4Time := pkt3Time + byteLatency*2 // No reads, so no sleeps yet. wantSleeps() read := func(n int, want []byte) { b := make([]byte, n) if rd, err := c.Read(b); err != nil || rd != len(want) { t.Fatalf("c.Read(<%v bytes>) = %v, %v; want %v, nil", n, rd, err, len(want)) } if !reflect.DeepEqual(b[:len(want)], want) { t.Fatalf("read %v; want %v", b, want) } } read(1, []byte{1}) wantSleeps(pkt1Time) read(1, []byte{2}) wantSleeps() read(3, []byte{3, 4, 5}) wantSleeps() read(2, []byte{6}) wantSleeps(pkt2Time) read(2, []byte{7, 8}) wantSleeps(pkt3Time) read(10, []byte{9, 10, 11}) wantSleeps() read(10, []byte{12, 13}) wantSleeps(pkt4Time) } func TestSync(t *testing.T) { defer restoreHooks()() // Infinitely fast CPU: time doesn't pass unless sleep is called. tn := time.Unix(123, 0) now = func() time.Time { return tn } sleep = func(d time.Duration) { tn = tn.Add(d) } // Simulate a 20ms latency network, then run sync across that and expect to // measure 20ms latency, or 10ms additional delay for a 30ms network. slowConn, err := (&Network{Kbps: 0, Latency: 20 * time.Millisecond, MTU: 5}).Conn(bufConn{&bytes.Buffer{}}) if err != nil { t.Fatalf("Unexpected error creating connection: %v", err) } c, err := (&Network{Latency: 30 * time.Millisecond}).Conn(slowConn) if err != nil { t.Fatalf("Unexpected error creating connection: %v", err) } if c.(*conn).delay != 10*time.Millisecond { t.Fatalf("c.delay = %v; want 10ms", c.(*conn).delay) } } func TestSyncTooSlow(t *testing.T) { defer restoreHooks()() // Infinitely fast CPU: time doesn't pass unless sleep is called. tn := time.Unix(123, 0) now = func() time.Time { return tn } sleep = func(d time.Duration) { tn = tn.Add(d) } // Simulate a 10ms latency network, then attempt to simulate a 5ms latency // network and expect an error. slowConn, err := (&Network{Kbps: 0, Latency: 10 * time.Millisecond, MTU: 5}).Conn(bufConn{&bytes.Buffer{}}) if err != nil { t.Fatalf("Unexpected error creating connection: %v", err) } errWant := "measured network latency (10ms) higher than desired latency (5ms)" if _, err := (&Network{Latency: 5 * time.Millisecond}).Conn(slowConn); err == nil || err.Error() != errWant { t.Fatalf("Conn() = _, %q; want _, %q", err, errWant) } } func TestListenerAndDialer(t *testing.T) { defer restoreHooks()() tn := time.Unix(123, 0) startTime := tn mu := &sync.Mutex{} now = func() time.Time { mu.Lock() defer mu.Unlock() return tn } // Use a fairly high latency to cause a large BDP and avoid sleeps while // writing due to simulation of full buffers. n := &Network{Kbps: 2, Latency: 1 * time.Second, MTU: 10} // 2 kbps = .25 kBps = 256 Bps byteLatency := func(n int) time.Duration { return time.Duration(n) * time.Second / 256 } // Create a real listener and wrap it. l, err := net.Listen("tcp", ":0") if err != nil { t.Fatalf("Unexpected error creating listener: %v", err) } defer l.Close() l = n.Listener(l) var serverConn net.Conn var scErr error scDone := make(chan struct{}) go func() { serverConn, scErr = l.Accept() close(scDone) }() // Create a dialer and use it. 
clientConn, err := n.TimeoutDialer(net.DialTimeout)("tcp", l.Addr().String(), 2*time.Second) if err != nil { t.Fatalf("Unexpected error dialing: %v", err) } defer clientConn.Close() // Block until server's Conn is available. <-scDone if scErr != nil { t.Fatalf("Unexpected error listening: %v", scErr) } defer serverConn.Close() // sleep (only) advances tn. Done after connections established so sync detects zero delay. sleep = func(d time.Duration) { mu.Lock() defer mu.Unlock() if d > 0 { tn = tn.Add(d) } } seq := func(a, b int) []byte { buf := make([]byte, b-a) for i := 0; i < b-a; i++ { buf[i] = byte(i + a) } return buf } pkt1 := seq(0, 10) pkt2 := seq(10, 30) pkt3 := seq(30, 35) write := func(c net.Conn, b []byte) { n, err := c.Write(b) if n != len(b) || err != nil { t.Fatalf("c.Write(%v) = %v, %v; want %v, nil", b, n, err, len(b)) } } write(serverConn, pkt1) write(serverConn, pkt2) write(serverConn, pkt3) write(clientConn, pkt3) write(clientConn, pkt1) write(clientConn, pkt2) if tn != startTime { t.Fatalf("unexpected sleep in write; tn = %v; want %v", tn, startTime) } read := func(c net.Conn, n int, want []byte, timeWant time.Time) { b := make([]byte, n) if rd, err := c.Read(b); err != nil || rd != len(want) { t.Fatalf("c.Read(<%v bytes>) = %v, %v; want %v, nil (read: %v)", n, rd, err, len(want), b[:rd]) } if !reflect.DeepEqual(b[:len(want)], want) { t.Fatalf("read %v; want %v", b, want) } if !tn.Equal(timeWant) { t.Errorf("tn after read(%v) = %v; want %v", want, tn, timeWant) } } read(clientConn, len(pkt1)+1, pkt1, startTime.Add(n.Latency+byteLatency(len(pkt1)))) read(serverConn, len(pkt3)+1, pkt3, tn) // tn was advanced by the above read; pkt3 is shorter than pkt1 read(clientConn, len(pkt2), pkt2[:10], startTime.Add(n.Latency+byteLatency(len(pkt1)+10))) read(clientConn, len(pkt2), pkt2[10:], startTime.Add(n.Latency+byteLatency(len(pkt1)+len(pkt2)))) read(clientConn, len(pkt3), pkt3, startTime.Add(n.Latency+byteLatency(len(pkt1)+len(pkt2)+len(pkt3)))) read(serverConn, len(pkt1), pkt1, tn) // tn already past the arrival time due to prior reads read(serverConn, len(pkt2), pkt2[:10], tn) read(serverConn, len(pkt2), pkt2[10:], tn) // Sleep awhile and make sure the read happens disregarding previous writes // (lastSendEnd handling). sleep(10 * time.Second) write(clientConn, pkt1) read(serverConn, len(pkt1), pkt1, tn.Add(n.Latency+byteLatency(len(pkt1)))) // Send, sleep longer than the network delay, then make sure the read happens // instantly. write(serverConn, pkt1) sleep(10 * time.Second) read(clientConn, len(pkt1), pkt1, tn) } func TestBufferBloat(t *testing.T) { defer restoreHooks()() // Infinitely fast CPU: time doesn't pass unless sleep is called. tn := time.Unix(123, 0) now = func() time.Time { return tn } // Capture sleep times for checking later. var sleepTimes []time.Duration sleep = func(d time.Duration) { sleepTimes = append(sleepTimes, d) tn = tn.Add(d) } wantSleeps := func(want ...time.Duration) error { if !reflect.DeepEqual(want, sleepTimes) { return fmt.Errorf("sleepTimes = %v; want %v", sleepTimes, want) } sleepTimes = nil return nil } n := &Network{Kbps: 8 /* 1KBps */, Latency: time.Second, MTU: 8} bdpBytes := (n.Kbps * 1024 / 8) * int(n.Latency/time.Second) // 1024 c, err := n.Conn(bufConn{&bytes.Buffer{}}) if err != nil { t.Fatalf("Unexpected error creating connection: %v", err) } wantSleeps(n.Latency) // Connection creation delay. 
write := func(n int, sleeps ...time.Duration) { if wt, err := c.Write(make([]byte, n)); err != nil || wt != n { t.Fatalf("c.Write(<%v bytes>) = %v, %v; want %v, nil", n, wt, err, n) } if err := wantSleeps(sleeps...); err != nil { t.Fatalf("After writing %v bytes: %v", n, err) } } read := func(n int, sleeps ...time.Duration) { if rd, err := c.Read(make([]byte, n)); err != nil || rd != n { t.Fatalf("c.Read(_) = %v, %v; want %v, nil", rd, err, n) } if err := wantSleeps(sleeps...); err != nil { t.Fatalf("After reading %v bytes: %v", n, err) } } write(8) // No reads and buffer not full, so no sleeps yet. read(8, time.Second+n.pktTime(8)) write(bdpBytes) // Fill the buffer. write(1) // We can send one extra packet even when the buffer is full. write(n.MTU, n.pktTime(1)) // Make sure we sleep to clear the previous write. write(1, n.pktTime(n.MTU)) write(n.MTU+1, n.pktTime(1), n.pktTime(n.MTU)) tn = tn.Add(10 * time.Second) // Wait long enough for the buffer to clear. write(bdpBytes) // No sleeps required. } golang-google-grpc-1.6.0/benchmark/server/000077500000000000000000000000001315416461300204345ustar00rootroot00000000000000golang-google-grpc-1.6.0/benchmark/server/main.go000066400000000000000000000026211315416461300217100ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package main import ( "flag" "math" "net" "net/http" _ "net/http/pprof" "time" "google.golang.org/grpc/benchmark" "google.golang.org/grpc/grpclog" ) var ( duration = flag.Int("duration", math.MaxInt32, "The duration in seconds to run the benchmark server") ) func main() { flag.Parse() go func() { lis, err := net.Listen("tcp", ":0") if err != nil { grpclog.Fatalf("Failed to listen: %v", err) } grpclog.Println("Server profiling address: ", lis.Addr().String()) if err := http.Serve(lis, nil); err != nil { grpclog.Fatalf("Failed to serve: %v", err) } }() addr, stopper := benchmark.StartServer(benchmark.ServerInfo{Addr: ":0", Type: "protobuf"}) // listen on all interfaces grpclog.Println("Server Address: ", addr) <-time.After(time.Duration(*duration) * time.Second) stopper() } golang-google-grpc-1.6.0/benchmark/stats/000077500000000000000000000000001315416461300202645ustar00rootroot00000000000000golang-google-grpc-1.6.0/benchmark/stats/histogram.go000066400000000000000000000145221315416461300226140ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package stats import ( "bytes" "fmt" "io" "log" "math" "strconv" "strings" ) // Histogram accumulates values in the form of a histogram with // exponentially increased bucket sizes. type Histogram struct { // Count is the total number of values added to the histogram. Count int64 // Sum is the sum of all the values added to the histogram. Sum int64 // SumOfSquares is the sum of squares of all values. SumOfSquares int64 // Min is the minimum of all the values added to the histogram. Min int64 // Max is the maximum of all the values added to the histogram. Max int64 // Buckets contains all the buckets of the histogram. Buckets []HistogramBucket opts HistogramOptions logBaseBucketSize float64 oneOverLogOnePlusGrowthFactor float64 } // HistogramOptions contains the parameters that define the histogram's buckets. // The first bucket of the created histogram (with index 0) contains [min, min+n) // where n = BaseBucketSize, min = MinValue. // Bucket i (i>=1) contains [min + n * m^(i-1), min + n * m^i), where m = 1+GrowthFactor. // The type of the values is int64. type HistogramOptions struct { // NumBuckets is the number of buckets. NumBuckets int // GrowthFactor is the growth factor of the buckets. A value of 0.1 // indicates that bucket N+1 will be 10% larger than bucket N. GrowthFactor float64 // BaseBucketSize is the size of the first bucket. BaseBucketSize float64 // MinValue is the lower bound of the first bucket. MinValue int64 } // HistogramBucket represents one histogram bucket. type HistogramBucket struct { // LowBound is the lower bound of the bucket. LowBound float64 // Count is the number of values in the bucket. Count int64 } // NewHistogram returns a pointer to a new Histogram object that was created // with the provided options. func NewHistogram(opts HistogramOptions) *Histogram { if opts.NumBuckets == 0 { opts.NumBuckets = 32 } if opts.BaseBucketSize == 0.0 { opts.BaseBucketSize = 1.0 } h := Histogram{ Buckets: make([]HistogramBucket, opts.NumBuckets), Min: math.MaxInt64, Max: math.MinInt64, opts: opts, logBaseBucketSize: math.Log(opts.BaseBucketSize), oneOverLogOnePlusGrowthFactor: 1 / math.Log(1+opts.GrowthFactor), } m := 1.0 + opts.GrowthFactor delta := opts.BaseBucketSize h.Buckets[0].LowBound = float64(opts.MinValue) for i := 1; i < opts.NumBuckets; i++ { h.Buckets[i].LowBound = float64(opts.MinValue) + delta delta = delta * m } return &h } // Print writes textual output of the histogram values. func (h *Histogram) Print(w io.Writer) { h.PrintWithUnit(w, 1) } // PrintWithUnit writes textual output of the histogram values . // Data in histogram is divided by a Unit before print. func (h *Histogram) PrintWithUnit(w io.Writer, unit float64) { avg := float64(h.Sum) / float64(h.Count) fmt.Fprintf(w, "Count: %d Min: %5.1f Max: %5.1f Avg: %.2f\n", h.Count, float64(h.Min)/unit, float64(h.Max)/unit, avg/unit) fmt.Fprintf(w, "%s\n", strings.Repeat("-", 60)) if h.Count <= 0 { return } maxBucketDigitLen := len(strconv.FormatFloat(h.Buckets[len(h.Buckets)-1].LowBound, 'f', 6, 64)) if maxBucketDigitLen < 3 { // For "inf". 
maxBucketDigitLen = 3 } maxCountDigitLen := len(strconv.FormatInt(h.Count, 10)) percentMulti := 100 / float64(h.Count) accCount := int64(0) for i, b := range h.Buckets { fmt.Fprintf(w, "[%*f, ", maxBucketDigitLen, b.LowBound/unit) if i+1 < len(h.Buckets) { fmt.Fprintf(w, "%*f)", maxBucketDigitLen, h.Buckets[i+1].LowBound/unit) } else { fmt.Fprintf(w, "%*s)", maxBucketDigitLen, "inf") } accCount += b.Count fmt.Fprintf(w, " %*d %5.1f%% %5.1f%%", maxCountDigitLen, b.Count, float64(b.Count)*percentMulti, float64(accCount)*percentMulti) const barScale = 0.1 barLength := int(float64(b.Count)*percentMulti*barScale + 0.5) fmt.Fprintf(w, " %s\n", strings.Repeat("#", barLength)) } } // String returns the textual output of the histogram values as string. func (h *Histogram) String() string { var b bytes.Buffer h.Print(&b) return b.String() } // Clear resets all the content of histogram. func (h *Histogram) Clear() { h.Count = 0 h.Sum = 0 h.SumOfSquares = 0 h.Min = math.MaxInt64 h.Max = math.MinInt64 for i := range h.Buckets { h.Buckets[i].Count = 0 } } // Opts returns a copy of the options used to create the Histogram. func (h *Histogram) Opts() HistogramOptions { return h.opts } // Add adds a value to the histogram. func (h *Histogram) Add(value int64) error { bucket, err := h.findBucket(value) if err != nil { return err } h.Buckets[bucket].Count++ h.Count++ h.Sum += value h.SumOfSquares += value * value if value < h.Min { h.Min = value } if value > h.Max { h.Max = value } return nil } func (h *Histogram) findBucket(value int64) (int, error) { delta := float64(value - h.opts.MinValue) var b int if delta >= h.opts.BaseBucketSize { // b = log_{1+growthFactor} (delta / baseBucketSize) + 1 // = log(delta / baseBucketSize) / log(1+growthFactor) + 1 // = (log(delta) - log(baseBucketSize)) * (1 / log(1+growthFactor)) + 1 b = int((math.Log(delta)-h.logBaseBucketSize)*h.oneOverLogOnePlusGrowthFactor + 1) } if b >= len(h.Buckets) { return 0, fmt.Errorf("no bucket for value: %d", value) } return b, nil } // Merge takes another histogram h2, and merges its content into h. // The two histograms must be created by equivalent HistogramOptions. func (h *Histogram) Merge(h2 *Histogram) { if h.opts != h2.opts { log.Fatalf("failed to merge histograms, created by inequivalent options") } h.Count += h2.Count h.Sum += h2.Sum h.SumOfSquares += h2.SumOfSquares if h2.Min < h.Min { h.Min = h2.Min } if h2.Max > h.Max { h.Max = h2.Max } for i, b := range h2.Buckets { h.Buckets[i].Count += b.Count } } golang-google-grpc-1.6.0/benchmark/stats/stats.go000066400000000000000000000063171315416461300217600ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package stats import ( "bytes" "fmt" "io" "math" "time" ) // Stats is a simple helper for gathering additional statistics like histogram // during benchmarks. This is not thread safe. 
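//
// A minimal usage sketch (the duration values below are illustrative, not
// part of this package):
//
//	s := NewStats(16)
//	for _, d := range []time.Duration{time.Millisecond, 2 * time.Millisecond} {
//		s.Add(d)
//	}
//	fmt.Print(s.String())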
type Stats struct { numBuckets int unit time.Duration min, max int64 histogram *Histogram durations durationSlice dirty bool } type durationSlice []time.Duration // NewStats creates a new Stats instance. If numBuckets is not positive, // the default value (16) will be used. func NewStats(numBuckets int) *Stats { if numBuckets <= 0 { numBuckets = 16 } return &Stats{ // Use one more bucket for the last unbounded bucket. numBuckets: numBuckets + 1, durations: make(durationSlice, 0, 100000), } } // Add adds an elapsed time per operation to the stats. func (stats *Stats) Add(d time.Duration) { stats.durations = append(stats.durations, d) stats.dirty = true } // Clear resets the stats, removing all values. func (stats *Stats) Clear() { stats.durations = stats.durations[:0] stats.histogram = nil stats.dirty = false } // maybeUpdate updates internal stat data if there was any newly added // stats since this was updated. func (stats *Stats) maybeUpdate() { if !stats.dirty { return } stats.min = math.MaxInt64 stats.max = 0 for _, d := range stats.durations { if stats.min > int64(d) { stats.min = int64(d) } if stats.max < int64(d) { stats.max = int64(d) } } // Use the largest unit that can represent the minimum time duration. stats.unit = time.Nanosecond for _, u := range []time.Duration{time.Microsecond, time.Millisecond, time.Second} { if stats.min <= int64(u) { break } stats.unit = u } numBuckets := stats.numBuckets if n := int(stats.max - stats.min + 1); n < numBuckets { numBuckets = n } stats.histogram = NewHistogram(HistogramOptions{ NumBuckets: numBuckets, // max-min(lower bound of last bucket) = (1 + growthFactor)^(numBuckets-2) * baseBucketSize. GrowthFactor: math.Pow(float64(stats.max-stats.min), 1/float64(numBuckets-2)) - 1, BaseBucketSize: 1.0, MinValue: stats.min}) for _, d := range stats.durations { stats.histogram.Add(int64(d)) } stats.dirty = false } // Print writes textual output of the Stats. func (stats *Stats) Print(w io.Writer) { stats.maybeUpdate() if stats.histogram == nil { fmt.Fprint(w, "Histogram (empty)\n") } else { fmt.Fprintf(w, "Histogram (unit: %s)\n", fmt.Sprintf("%v", stats.unit)[1:]) stats.histogram.PrintWithUnit(w, float64(stats.unit)) } } // String returns the textual output of the Stats as string. func (stats *Stats) String() string { var b bytes.Buffer stats.Print(&b) return b.String() } golang-google-grpc-1.6.0/benchmark/stats/util.go000066400000000000000000000127211315416461300215730ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package stats import ( "bufio" "bytes" "fmt" "os" "runtime" "sort" "strings" "sync" "testing" ) var ( curB *testing.B curBenchName string curStats map[string]*Stats orgStdout *os.File nextOutPos int injectCond *sync.Cond injectDone chan struct{} ) // AddStats adds a new unnamed Stats instance to the current benchmark. You need // to run benchmarks by calling RunTestMain() to inject the stats to the // benchmark results. 
If numBuckets is not positive, the default value (16) will // be used. Please note that this calls b.ResetTimer() since it may be blocked // until the previous benchmark stats is printed out. So AddStats() should // typically be called at the very beginning of each benchmark function. func AddStats(b *testing.B, numBuckets int) *Stats { return AddStatsWithName(b, "", numBuckets) } // AddStatsWithName adds a new named Stats instance to the current benchmark. // With this, you can add multiple stats in a single benchmark. You need // to run benchmarks by calling RunTestMain() to inject the stats to the // benchmark results. If numBuckets is not positive, the default value (16) will // be used. Please note that this calls b.ResetTimer() since it may be blocked // until the previous benchmark stats is printed out. So AddStatsWithName() // should typically be called at the very beginning of each benchmark function. func AddStatsWithName(b *testing.B, name string, numBuckets int) *Stats { var benchName string for i := 1; ; i++ { pc, _, _, ok := runtime.Caller(i) if !ok { panic("benchmark function not found") } p := strings.Split(runtime.FuncForPC(pc).Name(), ".") benchName = p[len(p)-1] if strings.HasPrefix(benchName, "run") { break } } procs := runtime.GOMAXPROCS(-1) if procs != 1 { benchName = fmt.Sprintf("%s-%d", benchName, procs) } stats := NewStats(numBuckets) if injectCond != nil { // We need to wait until the previous benchmark stats is printed out. injectCond.L.Lock() for curB != nil && curBenchName != benchName { injectCond.Wait() } curB = b curBenchName = benchName curStats[name] = stats injectCond.L.Unlock() } b.ResetTimer() return stats } // RunTestMain runs the tests with enabling injection of benchmark stats. It // returns an exit code to pass to os.Exit. func RunTestMain(m *testing.M) int { startStatsInjector() defer stopStatsInjector() return m.Run() } // startStatsInjector starts stats injection to benchmark results. func startStatsInjector() { orgStdout = os.Stdout r, w, _ := os.Pipe() os.Stdout = w nextOutPos = 0 resetCurBenchStats() injectCond = sync.NewCond(&sync.Mutex{}) injectDone = make(chan struct{}) go func() { defer close(injectDone) scanner := bufio.NewScanner(r) scanner.Split(splitLines) for scanner.Scan() { injectStatsIfFinished(scanner.Text()) } if err := scanner.Err(); err != nil { panic(err) } }() } // stopStatsInjector stops stats injection and restores os.Stdout. func stopStatsInjector() { os.Stdout.Close() <-injectDone injectCond = nil os.Stdout = orgStdout } // splitLines is a split function for a bufio.Scanner that returns each line // of text, teeing texts to the original stdout even before each line ends. func splitLines(data []byte, eof bool) (advance int, token []byte, err error) { if eof && len(data) == 0 { return 0, nil, nil } if i := bytes.IndexByte(data, '\n'); i >= 0 { orgStdout.Write(data[nextOutPos : i+1]) nextOutPos = 0 return i + 1, data[0:i], nil } orgStdout.Write(data[nextOutPos:]) nextOutPos = len(data) if eof { // This is a final, non-terminated line. Return it. return len(data), data, nil } return 0, nil, nil } // injectStatsIfFinished prints out the stats if the current benchmark finishes. func injectStatsIfFinished(line string) { injectCond.L.Lock() defer injectCond.L.Unlock() // We assume that the benchmark results start with "Benchmark". if curB == nil || !strings.HasPrefix(line, "Benchmark") { return } if !curB.Failed() { // Output all stats in alphabetical order. 
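// Sort the stats names before printing: map iteration order is random in Go,
// so sorting keeps the injected histogram output deterministic.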
names := make([]string, 0, len(curStats)) for name := range curStats { names = append(names, name) } sort.Strings(names) for _, name := range names { stats := curStats[name] // The output of stats starts with a header like "Histogram (unit: ms)" // followed by statistical properties and the buckets. Add the stats name // if it is a named stats and indent them as Go testing outputs. lines := strings.Split(stats.String(), "\n") if n := len(lines); n > 0 { if name != "" { name = ": " + name } fmt.Fprintf(orgStdout, "--- %s%s\n", lines[0], name) for _, line := range lines[1 : n-1] { fmt.Fprintf(orgStdout, "\t%s\n", line) } } } } resetCurBenchStats() injectCond.Signal() } // resetCurBenchStats resets the current benchmark stats. func resetCurBenchStats() { curB = nil curBenchName = "" curStats = make(map[string]*Stats) } golang-google-grpc-1.6.0/benchmark/worker/000077500000000000000000000000001315416461300204375ustar00rootroot00000000000000golang-google-grpc-1.6.0/benchmark/worker/benchmark_client.go000066400000000000000000000303301315416461300242550ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package main import ( "flag" "math" "runtime" "sync" "syscall" "time" "golang.org/x/net/context" "google.golang.org/grpc" "google.golang.org/grpc/benchmark" testpb "google.golang.org/grpc/benchmark/grpc_testing" "google.golang.org/grpc/benchmark/stats" "google.golang.org/grpc/codes" "google.golang.org/grpc/credentials" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/testdata" ) var ( caFile = flag.String("ca_file", "", "The file containing the CA root cert file") ) type lockingHistogram struct { mu sync.Mutex histogram *stats.Histogram } func (h *lockingHistogram) add(value int64) { h.mu.Lock() defer h.mu.Unlock() h.histogram.Add(value) } // swap sets h.histogram to new, and returns its old value. func (h *lockingHistogram) swap(new *stats.Histogram) *stats.Histogram { h.mu.Lock() defer h.mu.Unlock() old := h.histogram h.histogram = new return old } func (h *lockingHistogram) mergeInto(merged *stats.Histogram) { h.mu.Lock() defer h.mu.Unlock() merged.Merge(h.histogram) } type benchmarkClient struct { closeConns func() stop chan bool lastResetTime time.Time histogramOptions stats.HistogramOptions lockingHistograms []lockingHistogram rusageLastReset *syscall.Rusage } func printClientConfig(config *testpb.ClientConfig) { // Some config options are ignored: // - client type: // will always create sync client // - async client threads. // - core list grpclog.Printf(" * client type: %v (ignored, always creates sync client)", config.ClientType) grpclog.Printf(" * async client threads: %v (ignored)", config.AsyncClientThreads) // TODO: use cores specified by CoreList when setting list of cores is supported in go. 
grpclog.Printf(" * core list: %v (ignored)", config.CoreList) grpclog.Printf(" - security params: %v", config.SecurityParams) grpclog.Printf(" - core limit: %v", config.CoreLimit) grpclog.Printf(" - payload config: %v", config.PayloadConfig) grpclog.Printf(" - rpcs per chann: %v", config.OutstandingRpcsPerChannel) grpclog.Printf(" - channel number: %v", config.ClientChannels) grpclog.Printf(" - load params: %v", config.LoadParams) grpclog.Printf(" - rpc type: %v", config.RpcType) grpclog.Printf(" - histogram params: %v", config.HistogramParams) grpclog.Printf(" - server targets: %v", config.ServerTargets) } func setupClientEnv(config *testpb.ClientConfig) { // Use all cpu cores available on machine by default. // TODO: Revisit this for the optimal default setup. if config.CoreLimit > 0 { runtime.GOMAXPROCS(int(config.CoreLimit)) } else { runtime.GOMAXPROCS(runtime.NumCPU()) } } // createConns creates connections according to given config. // It returns the connections and corresponding function to close them. // It returns non-nil error if there is anything wrong. func createConns(config *testpb.ClientConfig) ([]*grpc.ClientConn, func(), error) { var opts []grpc.DialOption // Sanity check for client type. switch config.ClientType { case testpb.ClientType_SYNC_CLIENT: case testpb.ClientType_ASYNC_CLIENT: default: return nil, nil, grpc.Errorf(codes.InvalidArgument, "unknow client type: %v", config.ClientType) } // Check and set security options. if config.SecurityParams != nil { if *caFile == "" { *caFile = testdata.Path("ca.pem") } creds, err := credentials.NewClientTLSFromFile(*caFile, config.SecurityParams.ServerHostOverride) if err != nil { return nil, nil, grpc.Errorf(codes.InvalidArgument, "failed to create TLS credentials %v", err) } opts = append(opts, grpc.WithTransportCredentials(creds)) } else { opts = append(opts, grpc.WithInsecure()) } // Use byteBufCodec if it is required. if config.PayloadConfig != nil { switch config.PayloadConfig.Payload.(type) { case *testpb.PayloadConfig_BytebufParams: opts = append(opts, grpc.WithCodec(byteBufCodec{})) case *testpb.PayloadConfig_SimpleParams: default: return nil, nil, grpc.Errorf(codes.InvalidArgument, "unknow payload config: %v", config.PayloadConfig) } } // Create connections. connCount := int(config.ClientChannels) conns := make([]*grpc.ClientConn, connCount, connCount) for connIndex := 0; connIndex < connCount; connIndex++ { conns[connIndex] = benchmark.NewClientConn(config.ServerTargets[connIndex%len(config.ServerTargets)], opts...) } return conns, func() { for _, conn := range conns { conn.Close() } }, nil } func performRPCs(config *testpb.ClientConfig, conns []*grpc.ClientConn, bc *benchmarkClient) error { // Read payload size and type from config. var ( payloadReqSize, payloadRespSize int payloadType string ) if config.PayloadConfig != nil { switch c := config.PayloadConfig.Payload.(type) { case *testpb.PayloadConfig_BytebufParams: payloadReqSize = int(c.BytebufParams.ReqSize) payloadRespSize = int(c.BytebufParams.RespSize) payloadType = "bytebuf" case *testpb.PayloadConfig_SimpleParams: payloadReqSize = int(c.SimpleParams.ReqSize) payloadRespSize = int(c.SimpleParams.RespSize) payloadType = "protobuf" default: return grpc.Errorf(codes.InvalidArgument, "unknow payload config: %v", config.PayloadConfig) } } // TODO add open loop distribution. 
switch config.LoadParams.Load.(type) { case *testpb.LoadParams_ClosedLoop: case *testpb.LoadParams_Poisson: return grpc.Errorf(codes.Unimplemented, "unsupported load params: %v", config.LoadParams) default: return grpc.Errorf(codes.InvalidArgument, "unknown load params: %v", config.LoadParams) } rpcCountPerConn := int(config.OutstandingRpcsPerChannel) switch config.RpcType { case testpb.RpcType_UNARY: bc.doCloseLoopUnary(conns, rpcCountPerConn, payloadReqSize, payloadRespSize) // TODO open loop. case testpb.RpcType_STREAMING: bc.doCloseLoopStreaming(conns, rpcCountPerConn, payloadReqSize, payloadRespSize, payloadType) // TODO open loop. default: return grpc.Errorf(codes.InvalidArgument, "unknown rpc type: %v", config.RpcType) } return nil } func startBenchmarkClient(config *testpb.ClientConfig) (*benchmarkClient, error) { printClientConfig(config) // Set running environment like how many cores to use. setupClientEnv(config) conns, closeConns, err := createConns(config) if err != nil { return nil, err } rusage := new(syscall.Rusage) syscall.Getrusage(syscall.RUSAGE_SELF, rusage) rpcCountPerConn := int(config.OutstandingRpcsPerChannel) bc := &benchmarkClient{ histogramOptions: stats.HistogramOptions{ NumBuckets: int(math.Log(config.HistogramParams.MaxPossible)/math.Log(1+config.HistogramParams.Resolution)) + 1, GrowthFactor: config.HistogramParams.Resolution, BaseBucketSize: (1 + config.HistogramParams.Resolution), MinValue: 0, }, lockingHistograms: make([]lockingHistogram, rpcCountPerConn*len(conns), rpcCountPerConn*len(conns)), stop: make(chan bool), lastResetTime: time.Now(), closeConns: closeConns, rusageLastReset: rusage, } if err = performRPCs(config, conns, bc); err != nil { // Close all connections if performRPCs failed. closeConns() return nil, err } return bc, nil } func (bc *benchmarkClient) doCloseLoopUnary(conns []*grpc.ClientConn, rpcCountPerConn int, reqSize int, respSize int) { for ic, conn := range conns { client := testpb.NewBenchmarkServiceClient(conn) // For each connection, create rpcCountPerConn goroutines to do rpc. for j := 0; j < rpcCountPerConn; j++ { // Create histogram for each goroutine. idx := ic*rpcCountPerConn + j bc.lockingHistograms[idx].histogram = stats.NewHistogram(bc.histogramOptions) // Start goroutine on the created mutex and histogram. go func(idx int) { // TODO: do warm up if necessary. // Now relying on worker client to reserve time to do warm up. // The worker client needs to wait for some time after client is created, // before starting benchmark. done := make(chan bool) for { go func() { start := time.Now() if err := benchmark.DoUnaryCall(client, reqSize, respSize); err != nil { select { case <-bc.stop: case done <- false: } return } elapse := time.Since(start) bc.lockingHistograms[idx].add(int64(elapse)) select { case <-bc.stop: case done <- true: } }() select { case <-bc.stop: return case <-done: } } }(idx) } } } func (bc *benchmarkClient) doCloseLoopStreaming(conns []*grpc.ClientConn, rpcCountPerConn int, reqSize int, respSize int, payloadType string) { var doRPC func(testpb.BenchmarkService_StreamingCallClient, int, int) error if payloadType == "bytebuf" { doRPC = benchmark.DoByteBufStreamingRoundTrip } else { doRPC = benchmark.DoStreamingRoundTrip } for ic, conn := range conns { // For each connection, create rpcCountPerConn goroutines to do rpc. 
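// Each goroutine gets its own stream and its own lockingHistogram slot
// (index ic*rpcCountPerConn+j), so recording a latency sample never contends
// with the other RPC goroutines.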
for j := 0; j < rpcCountPerConn; j++ { c := testpb.NewBenchmarkServiceClient(conn) stream, err := c.StreamingCall(context.Background()) if err != nil { grpclog.Fatalf("%v.StreamingCall(_) = _, %v", c, err) } // Create histogram for each goroutine. idx := ic*rpcCountPerConn + j bc.lockingHistograms[idx].histogram = stats.NewHistogram(bc.histogramOptions) // Start goroutine on the created mutex and histogram. go func(idx int) { // TODO: do warm up if necessary. // Now relying on worker client to reserve time to do warm up. // The worker client needs to wait for some time after client is created, // before starting benchmark. for { start := time.Now() if err := doRPC(stream, reqSize, respSize); err != nil { return } elapse := time.Since(start) bc.lockingHistograms[idx].add(int64(elapse)) select { case <-bc.stop: return default: } } }(idx) } } } // getStats returns the stats for benchmark client. // It resets lastResetTime and all histograms if argument reset is true. func (bc *benchmarkClient) getStats(reset bool) *testpb.ClientStats { var wallTimeElapsed, uTimeElapsed, sTimeElapsed float64 mergedHistogram := stats.NewHistogram(bc.histogramOptions) latestRusage := new(syscall.Rusage) if reset { // Merging histogram may take some time. // Put all histograms aside and merge later. toMerge := make([]*stats.Histogram, len(bc.lockingHistograms), len(bc.lockingHistograms)) for i := range bc.lockingHistograms { toMerge[i] = bc.lockingHistograms[i].swap(stats.NewHistogram(bc.histogramOptions)) } for i := 0; i < len(toMerge); i++ { mergedHistogram.Merge(toMerge[i]) } wallTimeElapsed = time.Since(bc.lastResetTime).Seconds() syscall.Getrusage(syscall.RUSAGE_SELF, latestRusage) uTimeElapsed, sTimeElapsed = cpuTimeDiff(bc.rusageLastReset, latestRusage) bc.rusageLastReset = latestRusage bc.lastResetTime = time.Now() } else { // Merge only, not reset. for i := range bc.lockingHistograms { bc.lockingHistograms[i].mergeInto(mergedHistogram) } wallTimeElapsed = time.Since(bc.lastResetTime).Seconds() syscall.Getrusage(syscall.RUSAGE_SELF, latestRusage) uTimeElapsed, sTimeElapsed = cpuTimeDiff(bc.rusageLastReset, latestRusage) } b := make([]uint32, len(mergedHistogram.Buckets), len(mergedHistogram.Buckets)) for i, v := range mergedHistogram.Buckets { b[i] = uint32(v.Count) } return &testpb.ClientStats{ Latencies: &testpb.HistogramData{ Bucket: b, MinSeen: float64(mergedHistogram.Min), MaxSeen: float64(mergedHistogram.Max), Sum: float64(mergedHistogram.Sum), SumOfSquares: float64(mergedHistogram.SumOfSquares), Count: float64(mergedHistogram.Count), }, TimeElapsed: wallTimeElapsed, TimeUser: uTimeElapsed, TimeSystem: sTimeElapsed, } } func (bc *benchmarkClient) shutdown() { close(bc.stop) bc.closeConns() } golang-google-grpc-1.6.0/benchmark/worker/benchmark_server.go000066400000000000000000000126711315416461300243150ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package main import ( "flag" "runtime" "strconv" "strings" "sync" "syscall" "time" "google.golang.org/grpc" "google.golang.org/grpc/benchmark" testpb "google.golang.org/grpc/benchmark/grpc_testing" "google.golang.org/grpc/codes" "google.golang.org/grpc/credentials" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/testdata" ) var ( certFile = flag.String("tls_cert_file", "", "The TLS cert file") keyFile = flag.String("tls_key_file", "", "The TLS key file") ) type benchmarkServer struct { port int cores int closeFunc func() mu sync.RWMutex lastResetTime time.Time rusageLastReset *syscall.Rusage } func printServerConfig(config *testpb.ServerConfig) { // Some config options are ignored: // - server type: // will always start sync server // - async server threads // - core list grpclog.Printf(" * server type: %v (ignored, always starts sync server)", config.ServerType) grpclog.Printf(" * async server threads: %v (ignored)", config.AsyncServerThreads) // TODO: use cores specified by CoreList when setting list of cores is supported in go. grpclog.Printf(" * core list: %v (ignored)", config.CoreList) grpclog.Printf(" - security params: %v", config.SecurityParams) grpclog.Printf(" - core limit: %v", config.CoreLimit) grpclog.Printf(" - port: %v", config.Port) grpclog.Printf(" - payload config: %v", config.PayloadConfig) } func startBenchmarkServer(config *testpb.ServerConfig, serverPort int) (*benchmarkServer, error) { printServerConfig(config) // Use all cpu cores available on machine by default. // TODO: Revisit this for the optimal default setup. numOfCores := runtime.NumCPU() if config.CoreLimit > 0 { numOfCores = int(config.CoreLimit) } runtime.GOMAXPROCS(numOfCores) var opts []grpc.ServerOption // Sanity check for server type. switch config.ServerType { case testpb.ServerType_SYNC_SERVER: case testpb.ServerType_ASYNC_SERVER: case testpb.ServerType_ASYNC_GENERIC_SERVER: default: return nil, grpc.Errorf(codes.InvalidArgument, "unknow server type: %v", config.ServerType) } // Set security options. if config.SecurityParams != nil { if *certFile == "" { *certFile = testdata.Path("server1.pem") } if *keyFile == "" { *keyFile = testdata.Path("server1.key") } creds, err := credentials.NewServerTLSFromFile(*certFile, *keyFile) if err != nil { grpclog.Fatalf("failed to generate credentials %v", err) } opts = append(opts, grpc.Creds(creds)) } // Priority: config.Port > serverPort > default (0). port := int(config.Port) if port == 0 { port = serverPort } // Create different benchmark server according to config. var ( addr string closeFunc func() err error ) if config.PayloadConfig != nil { switch payload := config.PayloadConfig.Payload.(type) { case *testpb.PayloadConfig_BytebufParams: opts = append(opts, grpc.CustomCodec(byteBufCodec{})) addr, closeFunc = benchmark.StartServer(benchmark.ServerInfo{ Addr: ":" + strconv.Itoa(port), Type: "bytebuf", Metadata: payload.BytebufParams.RespSize, }, opts...) case *testpb.PayloadConfig_SimpleParams: addr, closeFunc = benchmark.StartServer(benchmark.ServerInfo{ Addr: ":" + strconv.Itoa(port), Type: "protobuf", }, opts...) case *testpb.PayloadConfig_ComplexParams: return nil, grpc.Errorf(codes.Unimplemented, "unsupported payload config: %v", config.PayloadConfig) default: return nil, grpc.Errorf(codes.InvalidArgument, "unknow payload config: %v", config.PayloadConfig) } } else { // Start protobuf server if payload config is nil. addr, closeFunc = benchmark.StartServer(benchmark.ServerInfo{ Addr: ":" + strconv.Itoa(port), Type: "protobuf", }, opts...) 
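// In either branch StartServer binds the listener itself; when port 0 was
// requested the kernel picks a free port, which is recovered below by
// splitting the returned address.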
} grpclog.Printf("benchmark server listening at %v", addr) addrSplitted := strings.Split(addr, ":") p, err := strconv.Atoi(addrSplitted[len(addrSplitted)-1]) if err != nil { grpclog.Fatalf("failed to get port number from server address: %v", err) } rusage := new(syscall.Rusage) syscall.Getrusage(syscall.RUSAGE_SELF, rusage) return &benchmarkServer{ port: p, cores: numOfCores, closeFunc: closeFunc, lastResetTime: time.Now(), rusageLastReset: rusage, }, nil } // getStats returns the stats for benchmark server. // It resets lastResetTime if argument reset is true. func (bs *benchmarkServer) getStats(reset bool) *testpb.ServerStats { bs.mu.RLock() defer bs.mu.RUnlock() wallTimeElapsed := time.Since(bs.lastResetTime).Seconds() rusageLatest := new(syscall.Rusage) syscall.Getrusage(syscall.RUSAGE_SELF, rusageLatest) uTimeElapsed, sTimeElapsed := cpuTimeDiff(bs.rusageLastReset, rusageLatest) if reset { bs.lastResetTime = time.Now() bs.rusageLastReset = rusageLatest } return &testpb.ServerStats{ TimeElapsed: wallTimeElapsed, TimeUser: uTimeElapsed, TimeSystem: sTimeElapsed, } } golang-google-grpc-1.6.0/benchmark/worker/main.go000066400000000000000000000133321315416461300217140ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package main import ( "flag" "fmt" "io" "net" "net/http" _ "net/http/pprof" "runtime" "strconv" "time" "golang.org/x/net/context" "google.golang.org/grpc" testpb "google.golang.org/grpc/benchmark/grpc_testing" "google.golang.org/grpc/codes" "google.golang.org/grpc/grpclog" ) var ( driverPort = flag.Int("driver_port", 10000, "port for communication with driver") serverPort = flag.Int("server_port", 0, "port for benchmark server if not specified by server config message") pprofPort = flag.Int("pprof_port", -1, "Port for pprof debug server to listen on. Pprof server doesn't start if unset") blockProfRate = flag.Int("block_prof_rate", 0, "fraction of goroutine blocking events to report in blocking profile") ) type byteBufCodec struct { } func (byteBufCodec) Marshal(v interface{}) ([]byte, error) { b, ok := v.(*[]byte) if !ok { return nil, fmt.Errorf("failed to marshal: %v is not type of *[]byte", v) } return *b, nil } func (byteBufCodec) Unmarshal(data []byte, v interface{}) error { b, ok := v.(*[]byte) if !ok { return fmt.Errorf("failed to marshal: %v is not type of *[]byte", v) } *b = data return nil } func (byteBufCodec) String() string { return "bytebuffer" } // workerServer implements WorkerService rpc handlers. // It can create benchmarkServer or benchmarkClient on demand. type workerServer struct { stop chan<- bool serverPort int } func (s *workerServer) RunServer(stream testpb.WorkerService_RunServerServer) error { var bs *benchmarkServer defer func() { // Close benchmark server when stream ends. 
grpclog.Printf("closing benchmark server") if bs != nil { bs.closeFunc() } }() for { in, err := stream.Recv() if err == io.EOF { return nil } if err != nil { return err } var out *testpb.ServerStatus switch argtype := in.Argtype.(type) { case *testpb.ServerArgs_Setup: grpclog.Printf("server setup received:") if bs != nil { grpclog.Printf("server setup received when server already exists, closing the existing server") bs.closeFunc() } bs, err = startBenchmarkServer(argtype.Setup, s.serverPort) if err != nil { return err } out = &testpb.ServerStatus{ Stats: bs.getStats(false), Port: int32(bs.port), Cores: int32(bs.cores), } case *testpb.ServerArgs_Mark: grpclog.Printf("server mark received:") grpclog.Printf(" - %v", argtype) if bs == nil { return grpc.Errorf(codes.InvalidArgument, "server does not exist when mark received") } out = &testpb.ServerStatus{ Stats: bs.getStats(argtype.Mark.Reset_), Port: int32(bs.port), Cores: int32(bs.cores), } } if err := stream.Send(out); err != nil { return err } } } func (s *workerServer) RunClient(stream testpb.WorkerService_RunClientServer) error { var bc *benchmarkClient defer func() { // Shut down benchmark client when stream ends. grpclog.Printf("shuting down benchmark client") if bc != nil { bc.shutdown() } }() for { in, err := stream.Recv() if err == io.EOF { return nil } if err != nil { return err } var out *testpb.ClientStatus switch t := in.Argtype.(type) { case *testpb.ClientArgs_Setup: grpclog.Printf("client setup received:") if bc != nil { grpclog.Printf("client setup received when client already exists, shuting down the existing client") bc.shutdown() } bc, err = startBenchmarkClient(t.Setup) if err != nil { return err } out = &testpb.ClientStatus{ Stats: bc.getStats(false), } case *testpb.ClientArgs_Mark: grpclog.Printf("client mark received:") grpclog.Printf(" - %v", t) if bc == nil { return grpc.Errorf(codes.InvalidArgument, "client does not exist when mark received") } out = &testpb.ClientStatus{ Stats: bc.getStats(t.Mark.Reset_), } } if err := stream.Send(out); err != nil { return err } } } func (s *workerServer) CoreCount(ctx context.Context, in *testpb.CoreRequest) (*testpb.CoreResponse, error) { grpclog.Printf("core count: %v", runtime.NumCPU()) return &testpb.CoreResponse{Cores: int32(runtime.NumCPU())}, nil } func (s *workerServer) QuitWorker(ctx context.Context, in *testpb.Void) (*testpb.Void, error) { grpclog.Printf("quiting worker") s.stop <- true return &testpb.Void{}, nil } func main() { grpc.EnableTracing = false flag.Parse() lis, err := net.Listen("tcp", ":"+strconv.Itoa(*driverPort)) if err != nil { grpclog.Fatalf("failed to listen: %v", err) } grpclog.Printf("worker listening at port %v", *driverPort) s := grpc.NewServer() stop := make(chan bool) testpb.RegisterWorkerServiceServer(s, &workerServer{ stop: stop, serverPort: *serverPort, }) go func() { <-stop // Wait for 1 second before stopping the server to make sure the return value of QuitWorker is sent to client. // TODO revise this once server graceful stop is supported in gRPC. time.Sleep(time.Second) s.Stop() }() runtime.SetBlockProfileRate(*blockProfRate) if *pprofPort >= 0 { go func() { grpclog.Println("Starting pprof server on port " + strconv.Itoa(*pprofPort)) grpclog.Println(http.ListenAndServe("localhost:"+strconv.Itoa(*pprofPort), nil)) }() } s.Serve(lis) } golang-google-grpc-1.6.0/benchmark/worker/util.go000066400000000000000000000021301315416461300217370ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. 
* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package main import "syscall" func cpuTimeDiff(first *syscall.Rusage, latest *syscall.Rusage) (float64, float64) { var ( utimeDiffs = latest.Utime.Sec - first.Utime.Sec utimeDiffus = latest.Utime.Usec - first.Utime.Usec stimeDiffs = latest.Stime.Sec - first.Stime.Sec stimeDiffus = latest.Stime.Usec - first.Stime.Usec ) uTimeElapsed := float64(utimeDiffs) + float64(utimeDiffus)*1.0e-6 sTimeElapsed := float64(stimeDiffs) + float64(stimeDiffus)*1.0e-6 return uTimeElapsed, sTimeElapsed } golang-google-grpc-1.6.0/call.go000066400000000000000000000223351315416461300164430ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "bytes" "io" "time" "golang.org/x/net/context" "golang.org/x/net/trace" "google.golang.org/grpc/codes" "google.golang.org/grpc/peer" "google.golang.org/grpc/stats" "google.golang.org/grpc/status" "google.golang.org/grpc/transport" ) // recvResponse receives and parses an RPC response. // On error, it returns the error and indicates whether the call should be retried. // // TODO(zhaoq): Check whether the received message sequence is valid. // TODO ctx is used for stats collection and processing. It is the context passed from the application. func recvResponse(ctx context.Context, dopts dialOptions, t transport.ClientTransport, c *callInfo, stream *transport.Stream, reply interface{}) (err error) { // Try to acquire header metadata from the server if there is any. defer func() { if err != nil { if _, ok := err.(transport.ConnectionError); !ok { t.CloseStream(stream, err) } } }() c.headerMD, err = stream.Header() if err != nil { return } p := &parser{r: stream} var inPayload *stats.InPayload if dopts.copts.StatsHandler != nil { inPayload = &stats.InPayload{ Client: true, } } for { if c.maxReceiveMessageSize == nil { return Errorf(codes.Internal, "callInfo maxReceiveMessageSize field uninitialized(nil)") } if err = recv(p, dopts.codec, stream, dopts.dc, reply, *c.maxReceiveMessageSize, inPayload); err != nil { if err == io.EOF { break } return } } if inPayload != nil && err == io.EOF && stream.Status().Code() == codes.OK { // TODO in the current implementation, inTrailer may be handled before inPayload in some cases. // Fix the order if necessary. dopts.copts.StatsHandler.HandleRPC(ctx, inPayload) } c.trailerMD = stream.Trailer() return nil } // sendRequest writes out various information of an RPC such as Context and Message. 
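// Like recvResponse above, it closes the stream on error unless the error is
// a transport.ConnectionError, in which case the whole transport is already
// being torn down and closing the stream again is unnecessary.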
func sendRequest(ctx context.Context, dopts dialOptions, compressor Compressor, c *callInfo, callHdr *transport.CallHdr, stream *transport.Stream, t transport.ClientTransport, args interface{}, opts *transport.Options) (err error) { defer func() { if err != nil { // If err is connection error, t will be closed, no need to close stream here. if _, ok := err.(transport.ConnectionError); !ok { t.CloseStream(stream, err) } } }() var ( cbuf *bytes.Buffer outPayload *stats.OutPayload ) if compressor != nil { cbuf = new(bytes.Buffer) } if dopts.copts.StatsHandler != nil { outPayload = &stats.OutPayload{ Client: true, } } hdr, data, err := encode(dopts.codec, args, compressor, cbuf, outPayload) if err != nil { return err } if c.maxSendMessageSize == nil { return Errorf(codes.Internal, "callInfo maxSendMessageSize field uninitialized(nil)") } if len(data) > *c.maxSendMessageSize { return Errorf(codes.ResourceExhausted, "grpc: trying to send message larger than max (%d vs. %d)", len(data), *c.maxSendMessageSize) } err = t.Write(stream, hdr, data, opts) if err == nil && outPayload != nil { outPayload.SentTime = time.Now() dopts.copts.StatsHandler.HandleRPC(ctx, outPayload) } // t.NewStream(...) could lead to an early rejection of the RPC (e.g., the service/method // does not exist.) so that t.Write could get io.EOF from wait(...). Leave the following // recvResponse to get the final status. if err != nil && err != io.EOF { return err } // Sent successfully. return nil } // Invoke sends the RPC request on the wire and returns after response is received. // Invoke is called by generated code. Also users can call Invoke directly when it // is really needed in their use cases. func Invoke(ctx context.Context, method string, args, reply interface{}, cc *ClientConn, opts ...CallOption) error { if cc.dopts.unaryInt != nil { return cc.dopts.unaryInt(ctx, method, args, reply, cc, invoke, opts...) } return invoke(ctx, method, args, reply, cc, opts...) } func invoke(ctx context.Context, method string, args, reply interface{}, cc *ClientConn, opts ...CallOption) (e error) { c := defaultCallInfo mc := cc.GetMethodConfig(method) if mc.WaitForReady != nil { c.failFast = !*mc.WaitForReady } if mc.Timeout != nil && *mc.Timeout >= 0 { var cancel context.CancelFunc ctx, cancel = context.WithTimeout(ctx, *mc.Timeout) defer cancel() } opts = append(cc.dopts.callOptions, opts...) for _, o := range opts { if err := o.before(&c); err != nil { return toRPCErr(err) } } defer func() { for _, o := range opts { o.after(&c) } }() c.maxSendMessageSize = getMaxSize(mc.MaxReqSize, c.maxSendMessageSize, defaultClientMaxSendMessageSize) c.maxReceiveMessageSize = getMaxSize(mc.MaxRespSize, c.maxReceiveMessageSize, defaultClientMaxReceiveMessageSize) if EnableTracing { c.traceInfo.tr = trace.New("grpc.Sent."+methodFamily(method), method) defer c.traceInfo.tr.Finish() c.traceInfo.firstLine.client = true if deadline, ok := ctx.Deadline(); ok { c.traceInfo.firstLine.deadline = deadline.Sub(time.Now()) } c.traceInfo.tr.LazyLog(&c.traceInfo.firstLine, false) // TODO(dsymonds): Arrange for c.traceInfo.firstLine.remoteAddr to be set. 
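// Record any eventual error on the trace so it shows up on the
// /debug/requests page provided by golang.org/x/net/trace.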
defer func() { if e != nil { c.traceInfo.tr.LazyLog(&fmtStringer{"%v", []interface{}{e}}, true) c.traceInfo.tr.SetError() } }() } ctx = newContextWithRPCInfo(ctx) sh := cc.dopts.copts.StatsHandler if sh != nil { ctx = sh.TagRPC(ctx, &stats.RPCTagInfo{FullMethodName: method, FailFast: c.failFast}) begin := &stats.Begin{ Client: true, BeginTime: time.Now(), FailFast: c.failFast, } sh.HandleRPC(ctx, begin) defer func() { end := &stats.End{ Client: true, EndTime: time.Now(), Error: e, } sh.HandleRPC(ctx, end) }() } topts := &transport.Options{ Last: true, Delay: false, } for { var ( err error t transport.ClientTransport stream *transport.Stream // Record the put handler from Balancer.Get(...). It is called once the // RPC has completed or failed. put func() ) // TODO(zhaoq): Need a formal spec of fail-fast. callHdr := &transport.CallHdr{ Host: cc.authority, Method: method, } if cc.dopts.cp != nil { callHdr.SendCompress = cc.dopts.cp.Type() } if c.creds != nil { callHdr.Creds = c.creds } gopts := BalancerGetOptions{ BlockingWait: !c.failFast, } t, put, err = cc.getTransport(ctx, gopts) if err != nil { // TODO(zhaoq): Probably revisit the error handling. if _, ok := status.FromError(err); ok { return err } if err == errConnClosing || err == errConnUnavailable { if c.failFast { return Errorf(codes.Unavailable, "%v", err) } continue } // All the other errors are treated as Internal errors. return Errorf(codes.Internal, "%v", err) } if c.traceInfo.tr != nil { c.traceInfo.tr.LazyLog(&payload{sent: true, msg: args}, true) } stream, err = t.NewStream(ctx, callHdr) if err != nil { if put != nil { if _, ok := err.(transport.ConnectionError); ok { // If error is connection error, transport was sending data on wire, // and we are not sure if anything has been sent on wire. // If error is not connection error, we are sure nothing has been sent. updateRPCInfoInContext(ctx, rpcInfo{bytesSent: true, bytesReceived: false}) } put() } if _, ok := err.(transport.ConnectionError); (ok || err == transport.ErrStreamDrain) && !c.failFast { continue } return toRPCErr(err) } if peer, ok := peer.FromContext(stream.Context()); ok { c.peer = peer } err = sendRequest(ctx, cc.dopts, cc.dopts.cp, &c, callHdr, stream, t, args, topts) if err != nil { if put != nil { updateRPCInfoInContext(ctx, rpcInfo{ bytesSent: stream.BytesSent(), bytesReceived: stream.BytesReceived(), }) put() } // Retry a non-failfast RPC when // i) there is a connection error; or // ii) the server started to drain before this RPC was initiated. if _, ok := err.(transport.ConnectionError); (ok || err == transport.ErrStreamDrain) && !c.failFast { continue } return toRPCErr(err) } err = recvResponse(ctx, cc.dopts, t, &c, stream, reply) if err != nil { if put != nil { updateRPCInfoInContext(ctx, rpcInfo{ bytesSent: stream.BytesSent(), bytesReceived: stream.BytesReceived(), }) put() } if _, ok := err.(transport.ConnectionError); (ok || err == transport.ErrStreamDrain) && !c.failFast { continue } return toRPCErr(err) } if c.traceInfo.tr != nil { c.traceInfo.tr.LazyLog(&payload{sent: false, msg: reply}, true) } t.CloseStream(stream, nil) if put != nil { updateRPCInfoInContext(ctx, rpcInfo{ bytesSent: stream.BytesSent(), bytesReceived: stream.BytesReceived(), }) put() } return stream.Status().Err() } } golang-google-grpc-1.6.0/call_test.go000066400000000000000000000160401315416461300174760ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. 
* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "fmt" "io" "math" "net" "strconv" "strings" "sync" "testing" "time" "golang.org/x/net/context" "google.golang.org/grpc/codes" "google.golang.org/grpc/status" "google.golang.org/grpc/transport" ) var ( expectedRequest = "ping" expectedResponse = "pong" weirdError = "format verbs: %v%s" sizeLargeErr = 1024 * 1024 canceled = 0 ) type testCodec struct { } func (testCodec) Marshal(v interface{}) ([]byte, error) { return []byte(*(v.(*string))), nil } func (testCodec) Unmarshal(data []byte, v interface{}) error { *(v.(*string)) = string(data) return nil } func (testCodec) String() string { return "test" } type testStreamHandler struct { port string t transport.ServerTransport } func (h *testStreamHandler) handleStream(t *testing.T, s *transport.Stream) { p := &parser{r: s} for { pf, req, err := p.recvMsg(math.MaxInt32) if err == io.EOF { break } if err != nil { return } if pf != compressionNone { t.Errorf("Received the mistaken message format %d, want %d", pf, compressionNone) return } var v string codec := testCodec{} if err := codec.Unmarshal(req, &v); err != nil { t.Errorf("Failed to unmarshal the received message: %v", err) return } if v == "weird error" { h.t.WriteStatus(s, status.New(codes.Internal, weirdError)) return } if v == "canceled" { canceled++ h.t.WriteStatus(s, status.New(codes.Internal, "")) return } if v == "port" { h.t.WriteStatus(s, status.New(codes.Internal, h.port)) return } if v != expectedRequest { h.t.WriteStatus(s, status.New(codes.Internal, strings.Repeat("A", sizeLargeErr))) return } } // send a response back to end the stream. hdr, data, err := encode(testCodec{}, &expectedResponse, nil, nil, nil) if err != nil { t.Errorf("Failed to encode the response: %v", err) return } h.t.Write(s, hdr, data, &transport.Options{}) h.t.WriteStatus(s, status.New(codes.OK, "")) } type server struct { lis net.Listener port string startedErr chan error // sent nil or an error after server starts mu sync.Mutex conns map[transport.ServerTransport]bool } func newTestServer() *server { return &server{startedErr: make(chan error, 1)} } // start starts server. Other goroutines should block on s.startedErr for further operations. 
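// Passing port 0 lets the kernel pick a free port; the port actually chosen
// is published in s.port before startedErr is signalled.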
func (s *server) start(t *testing.T, port int, maxStreams uint32) { var err error if port == 0 { s.lis, err = net.Listen("tcp", "localhost:0") } else { s.lis, err = net.Listen("tcp", "localhost:"+strconv.Itoa(port)) } if err != nil { s.startedErr <- fmt.Errorf("failed to listen: %v", err) return } _, p, err := net.SplitHostPort(s.lis.Addr().String()) if err != nil { s.startedErr <- fmt.Errorf("failed to parse listener address: %v", err) return } s.port = p s.conns = make(map[transport.ServerTransport]bool) s.startedErr <- nil for { conn, err := s.lis.Accept() if err != nil { return } config := &transport.ServerConfig{ MaxStreams: maxStreams, } st, err := transport.NewServerTransport("http2", conn, config) if err != nil { continue } s.mu.Lock() if s.conns == nil { s.mu.Unlock() st.Close() return } s.conns[st] = true s.mu.Unlock() h := &testStreamHandler{ port: s.port, t: st, } go st.HandleStreams(func(s *transport.Stream) { go h.handleStream(t, s) }, func(ctx context.Context, method string) context.Context { return ctx }) } } func (s *server) wait(t *testing.T, timeout time.Duration) { select { case err := <-s.startedErr: if err != nil { t.Fatal(err) } case <-time.After(timeout): t.Fatalf("Timed out after %v waiting for server to be ready", timeout) } } func (s *server) stop() { s.lis.Close() s.mu.Lock() for c := range s.conns { c.Close() } s.conns = nil s.mu.Unlock() } func setUp(t *testing.T, port int, maxStreams uint32) (*server, *ClientConn) { server := newTestServer() go server.start(t, port, maxStreams) server.wait(t, 2*time.Second) addr := "localhost:" + server.port cc, err := Dial(addr, WithBlock(), WithInsecure(), WithCodec(testCodec{})) if err != nil { t.Fatalf("Failed to create ClientConn: %v", err) } return server, cc } func TestInvoke(t *testing.T) { server, cc := setUp(t, 0, math.MaxUint32) var reply string if err := Invoke(context.Background(), "/foo/bar", &expectedRequest, &reply, cc); err != nil || reply != expectedResponse { t.Fatalf("grpc.Invoke(_, _, _, _, _) = %v, want ", err) } cc.Close() server.stop() } func TestInvokeLargeErr(t *testing.T) { server, cc := setUp(t, 0, math.MaxUint32) var reply string req := "hello" err := Invoke(context.Background(), "/foo/bar", &req, &reply, cc) if _, ok := status.FromError(err); !ok { t.Fatalf("grpc.Invoke(_, _, _, _, _) receives non rpc error.") } if Code(err) != codes.Internal || len(ErrorDesc(err)) != sizeLargeErr { t.Fatalf("grpc.Invoke(_, _, _, _, _) = %v, want an error of code %d and desc size %d", err, codes.Internal, sizeLargeErr) } cc.Close() server.stop() } // TestInvokeErrorSpecialChars checks that error messages don't get mangled. func TestInvokeErrorSpecialChars(t *testing.T) { server, cc := setUp(t, 0, math.MaxUint32) var reply string req := "weird error" err := Invoke(context.Background(), "/foo/bar", &req, &reply, cc) if _, ok := status.FromError(err); !ok { t.Fatalf("grpc.Invoke(_, _, _, _, _) receives non rpc error.") } if got, want := ErrorDesc(err), weirdError; got != want { t.Fatalf("grpc.Invoke(_, _, _, _, _) error = %q, want %q", got, want) } cc.Close() server.stop() } // TestInvokeCancel checks that an Invoke with a canceled context is not sent. 
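// The stream handler increments the package-level canceled counter for every
// "canceled" request that actually reaches the server, so the test only
// passes if none of the 100 canceled Invokes made it onto the wire.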
func TestInvokeCancel(t *testing.T) { server, cc := setUp(t, 0, math.MaxUint32) var reply string req := "canceled" for i := 0; i < 100; i++ { ctx, cancel := context.WithCancel(context.Background()) cancel() Invoke(ctx, "/foo/bar", &req, &reply, cc) } if canceled != 0 { t.Fatalf("received %d of 100 canceled requests", canceled) } cc.Close() server.stop() } // TestInvokeCancelClosedNonFail checks that a canceled non-failfast RPC // on a closed client will terminate. func TestInvokeCancelClosedNonFailFast(t *testing.T) { server, cc := setUp(t, 0, math.MaxUint32) var reply string cc.Close() req := "hello" ctx, cancel := context.WithCancel(context.Background()) cancel() if err := Invoke(ctx, "/foo/bar", &req, &reply, cc, FailFast(false)); err == nil { t.Fatalf("canceled invoke on closed connection should fail") } server.stop() } golang-google-grpc-1.6.0/clientconn.go000066400000000000000000001072011315416461300176600ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "errors" "math" "net" "strings" "sync" "time" "golang.org/x/net/context" "golang.org/x/net/trace" "google.golang.org/grpc/connectivity" "google.golang.org/grpc/credentials" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/keepalive" "google.golang.org/grpc/stats" "google.golang.org/grpc/transport" ) var ( // ErrClientConnClosing indicates that the operation is illegal because // the ClientConn is closing. ErrClientConnClosing = errors.New("grpc: the client connection is closing") // ErrClientConnTimeout indicates that the ClientConn cannot establish the // underlying connections within the specified timeout. // DEPRECATED: Please use context.DeadlineExceeded instead. ErrClientConnTimeout = errors.New("grpc: timed out when dialing") // errNoTransportSecurity indicates that there is no transport security // being set for ClientConn. Users should either set one or explicitly // call WithInsecure DialOption to disable security. errNoTransportSecurity = errors.New("grpc: no transport security set (use grpc.WithInsecure() explicitly or set credentials)") // errTransportCredentialsMissing indicates that users want to transmit security // information (e.g., oauth2 token) which requires secure connection on an insecure // connection. errTransportCredentialsMissing = errors.New("grpc: the credentials require transport level security (use grpc.WithTransportCredentials() to set)") // errCredentialsConflict indicates that grpc.WithTransportCredentials() // and grpc.WithInsecure() are both called for a connection. errCredentialsConflict = errors.New("grpc: transport credentials are set for an insecure connection (grpc.WithTransportCredentials() and grpc.WithInsecure() are both called)") // errNetworkIO indicates that the connection is down due to some network I/O error. errNetworkIO = errors.New("grpc: failed with network I/O error") // errConnDrain indicates that the connection starts to be drained and does not accept any new RPCs. 
errConnDrain = errors.New("grpc: the connection is drained") // errConnClosing indicates that the connection is closing. errConnClosing = errors.New("grpc: the connection is closing") // errConnUnavailable indicates that the connection is unavailable. errConnUnavailable = errors.New("grpc: the connection is unavailable") // errBalancerClosed indicates that the balancer is closed. errBalancerClosed = errors.New("grpc: balancer is closed") // minimum time to give a connection to complete minConnectTimeout = 20 * time.Second ) // dialOptions configure a Dial call. dialOptions are set by the DialOption // values passed to Dial. type dialOptions struct { unaryInt UnaryClientInterceptor streamInt StreamClientInterceptor codec Codec cp Compressor dc Decompressor bs backoffStrategy balancer Balancer block bool insecure bool timeout time.Duration scChan <-chan ServiceConfig copts transport.ConnectOptions callOptions []CallOption } const ( defaultClientMaxReceiveMessageSize = 1024 * 1024 * 4 defaultClientMaxSendMessageSize = math.MaxInt32 ) // DialOption configures how we set up the connection. type DialOption func(*dialOptions) // WithInitialWindowSize returns a DialOption which sets the value for initial window size on a stream. // The lower bound for window size is 64K and any value smaller than that will be ignored. func WithInitialWindowSize(s int32) DialOption { return func(o *dialOptions) { o.copts.InitialWindowSize = s } } // WithInitialConnWindowSize returns a DialOption which sets the value for initial window size on a connection. // The lower bound for window size is 64K and any value smaller than that will be ignored. func WithInitialConnWindowSize(s int32) DialOption { return func(o *dialOptions) { o.copts.InitialConnWindowSize = s } } // WithMaxMsgSize returns a DialOption which sets the maximum message size the client can receive. Deprecated: use WithDefaultCallOptions(MaxCallRecvMsgSize(s)) instead. func WithMaxMsgSize(s int) DialOption { return WithDefaultCallOptions(MaxCallRecvMsgSize(s)) } // WithDefaultCallOptions returns a DialOption which sets the default CallOptions for calls over the connection. func WithDefaultCallOptions(cos ...CallOption) DialOption { return func(o *dialOptions) { o.callOptions = append(o.callOptions, cos...) } } // WithCodec returns a DialOption which sets a codec for message marshaling and unmarshaling. func WithCodec(c Codec) DialOption { return func(o *dialOptions) { o.codec = c } } // WithCompressor returns a DialOption which sets a CompressorGenerator for generating message // compressor. func WithCompressor(cp Compressor) DialOption { return func(o *dialOptions) { o.cp = cp } } // WithDecompressor returns a DialOption which sets a DecompressorGenerator for generating // message decompressor. func WithDecompressor(dc Decompressor) DialOption { return func(o *dialOptions) { o.dc = dc } } // WithBalancer returns a DialOption which sets a load balancer. func WithBalancer(b Balancer) DialOption { return func(o *dialOptions) { o.balancer = b } } // WithServiceConfig returns a DialOption which has a channel to read the service configuration. func WithServiceConfig(c <-chan ServiceConfig) DialOption { return func(o *dialOptions) { o.scChan = c } } // WithBackoffMaxDelay configures the dialer to use the provided maximum delay // when backing off after failed connection attempts. 
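// Illustrative sketch (not part of the original source): each DialOption above
// is just a function that mutates dialOptions, so options compose by being
// applied in the order they are passed to Dial. A typical combination of the
// options defined in this file might look like the snippet below; the target
// address is a placeholder and the 16MB receive limit is an arbitrary example
// value, not a recommendation.
package example

import "google.golang.org/grpc"

func dialWithOptions() (*grpc.ClientConn, error) {
	return grpc.Dial(
		"example.com:50051", // placeholder target
		grpc.WithInsecure(), // no transport security (sketch only)
		grpc.WithBlock(),    // block until the connection is up
		// Raise the per-call receive limit for every RPC on this connection.
		grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(16*1024*1024)),
	)
}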
func WithBackoffMaxDelay(md time.Duration) DialOption { return WithBackoffConfig(BackoffConfig{MaxDelay: md}) } // WithBackoffConfig configures the dialer to use the provided backoff // parameters after connection failures. // // Use WithBackoffMaxDelay until more parameters on BackoffConfig are opened up // for use. func WithBackoffConfig(b BackoffConfig) DialOption { // Set defaults to ensure that provided BackoffConfig is valid and // unexported fields get default values. setDefaults(&b) return withBackoff(b) } // withBackoff sets the backoff strategy used for retries after a // failed connection attempt. // // This can be exported if arbitrary backoff strategies are allowed by gRPC. func withBackoff(bs backoffStrategy) DialOption { return func(o *dialOptions) { o.bs = bs } } // WithBlock returns a DialOption which makes caller of Dial blocks until the underlying // connection is up. Without this, Dial returns immediately and connecting the server // happens in background. func WithBlock() DialOption { return func(o *dialOptions) { o.block = true } } // WithInsecure returns a DialOption which disables transport security for this ClientConn. // Note that transport security is required unless WithInsecure is set. func WithInsecure() DialOption { return func(o *dialOptions) { o.insecure = true } } // WithTransportCredentials returns a DialOption which configures a // connection level security credentials (e.g., TLS/SSL). func WithTransportCredentials(creds credentials.TransportCredentials) DialOption { return func(o *dialOptions) { o.copts.TransportCredentials = creds } } // WithPerRPCCredentials returns a DialOption which sets // credentials and places auth state on each outbound RPC. func WithPerRPCCredentials(creds credentials.PerRPCCredentials) DialOption { return func(o *dialOptions) { o.copts.PerRPCCredentials = append(o.copts.PerRPCCredentials, creds) } } // WithTimeout returns a DialOption that configures a timeout for dialing a ClientConn // initially. This is valid if and only if WithBlock() is present. // Deprecated: use DialContext and context.WithTimeout instead. func WithTimeout(d time.Duration) DialOption { return func(o *dialOptions) { o.timeout = d } } // WithDialer returns a DialOption that specifies a function to use for dialing network addresses. // If FailOnNonTempDialError() is set to true, and an error is returned by f, gRPC checks the error's // Temporary() method to decide if it should try to reconnect to the network address. func WithDialer(f func(string, time.Duration) (net.Conn, error)) DialOption { return func(o *dialOptions) { o.copts.Dialer = func(ctx context.Context, addr string) (net.Conn, error) { if deadline, ok := ctx.Deadline(); ok { return f(addr, deadline.Sub(time.Now())) } return f(addr, 0) } } } // WithStatsHandler returns a DialOption that specifies the stats handler // for all the RPCs and underlying network connections in this ClientConn. func WithStatsHandler(h stats.Handler) DialOption { return func(o *dialOptions) { o.copts.StatsHandler = h } } // FailOnNonTempDialError returns a DialOption that specifies if gRPC fails on non-temporary dial errors. // If f is true, and dialer returns a non-temporary error, gRPC will fail the connection to the network // address and won't try to reconnect. // The default value of FailOnNonTempDialError is false. // This is an EXPERIMENTAL API. 
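// Illustrative sketch (not part of the original source): WithDialer accepts a
// plain (addr, timeout) function, which makes it easy to route connections
// through something other than the default TCP dial. Here the target is
// treated as a Unix domain socket path — an assumed deployment detail, not
// anything mandated by this file — and FailOnNonTempDialError plus
// WithBackoffMaxDelay show how the dial-time options above can be combined.
package example

import (
	"net"
	"time"

	"google.golang.org/grpc"
)

func dialUnixSocket(socketPath string) (*grpc.ClientConn, error) {
	return grpc.Dial(
		socketPath,
		grpc.WithInsecure(),
		grpc.WithBlock(),
		// Dial the Unix socket instead of interpreting the target as host:port.
		grpc.WithDialer(func(addr string, timeout time.Duration) (net.Conn, error) {
			return net.DialTimeout("unix", addr, timeout)
		}),
		// Give up immediately on non-temporary dial errors instead of retrying.
		grpc.FailOnNonTempDialError(true),
		// Cap the reconnect backoff at 5s rather than the default maximum.
		grpc.WithBackoffMaxDelay(5*time.Second),
	)
}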
func FailOnNonTempDialError(f bool) DialOption { return func(o *dialOptions) { o.copts.FailOnNonTempDialError = f } } // WithUserAgent returns a DialOption that specifies a user agent string for all the RPCs. func WithUserAgent(s string) DialOption { return func(o *dialOptions) { o.copts.UserAgent = s } } // WithKeepaliveParams returns a DialOption that specifies keepalive paramaters for the client transport. func WithKeepaliveParams(kp keepalive.ClientParameters) DialOption { return func(o *dialOptions) { o.copts.KeepaliveParams = kp } } // WithUnaryInterceptor returns a DialOption that specifies the interceptor for unary RPCs. func WithUnaryInterceptor(f UnaryClientInterceptor) DialOption { return func(o *dialOptions) { o.unaryInt = f } } // WithStreamInterceptor returns a DialOption that specifies the interceptor for streaming RPCs. func WithStreamInterceptor(f StreamClientInterceptor) DialOption { return func(o *dialOptions) { o.streamInt = f } } // WithAuthority returns a DialOption that specifies the value to be used as // the :authority pseudo-header. This value only works with WithInsecure and // has no effect if TransportCredentials are present. func WithAuthority(a string) DialOption { return func(o *dialOptions) { o.copts.Authority = a } } // Dial creates a client connection to the given target. func Dial(target string, opts ...DialOption) (*ClientConn, error) { return DialContext(context.Background(), target, opts...) } // DialContext creates a client connection to the given target. ctx can be used to // cancel or expire the pending connection. Once this function returns, the // cancellation and expiration of ctx will be noop. Users should call ClientConn.Close // to terminate all the pending operations after this function returns. func DialContext(ctx context.Context, target string, opts ...DialOption) (conn *ClientConn, err error) { cc := &ClientConn{ target: target, csMgr: &connectivityStateManager{}, conns: make(map[Address]*addrConn), } cc.csEvltr = &connectivityStateEvaluator{csMgr: cc.csMgr} cc.ctx, cc.cancel = context.WithCancel(context.Background()) for _, opt := range opts { opt(&cc.dopts) } cc.mkp = cc.dopts.copts.KeepaliveParams if cc.dopts.copts.Dialer == nil { cc.dopts.copts.Dialer = newProxyDialer( func(ctx context.Context, addr string) (net.Conn, error) { return dialContext(ctx, "tcp", addr) }, ) } if cc.dopts.copts.UserAgent != "" { cc.dopts.copts.UserAgent += " " + grpcUA } else { cc.dopts.copts.UserAgent = grpcUA } if cc.dopts.timeout > 0 { var cancel context.CancelFunc ctx, cancel = context.WithTimeout(ctx, cc.dopts.timeout) defer cancel() } defer func() { select { case <-ctx.Done(): conn, err = nil, ctx.Err() default: } if err != nil { cc.Close() } }() scSet := false if cc.dopts.scChan != nil { // Try to get an initial service config. select { case sc, ok := <-cc.dopts.scChan: if ok { cc.sc = sc scSet = true } default: } } // Set defaults. 
if cc.dopts.codec == nil { cc.dopts.codec = protoCodec{} } if cc.dopts.bs == nil { cc.dopts.bs = DefaultBackoffConfig } creds := cc.dopts.copts.TransportCredentials if creds != nil && creds.Info().ServerName != "" { cc.authority = creds.Info().ServerName } else if cc.dopts.insecure && cc.dopts.copts.Authority != "" { cc.authority = cc.dopts.copts.Authority } else { cc.authority = target } waitC := make(chan error, 1) go func() { defer close(waitC) if cc.dopts.balancer == nil && cc.sc.LB != nil { cc.dopts.balancer = cc.sc.LB } if cc.dopts.balancer != nil { var credsClone credentials.TransportCredentials if creds != nil { credsClone = creds.Clone() } config := BalancerConfig{ DialCreds: credsClone, Dialer: cc.dopts.copts.Dialer, } if err := cc.dopts.balancer.Start(target, config); err != nil { waitC <- err return } ch := cc.dopts.balancer.Notify() if ch != nil { if cc.dopts.block { doneChan := make(chan struct{}) go cc.lbWatcher(doneChan) <-doneChan } else { go cc.lbWatcher(nil) } return } } // No balancer, or no resolver within the balancer. Connect directly. if err := cc.resetAddrConn([]Address{{Addr: target}}, cc.dopts.block, nil); err != nil { waitC <- err return } }() select { case <-ctx.Done(): return nil, ctx.Err() case err := <-waitC: if err != nil { return nil, err } } if cc.dopts.scChan != nil && !scSet { // Blocking wait for the initial service config. select { case sc, ok := <-cc.dopts.scChan: if ok { cc.sc = sc } case <-ctx.Done(): return nil, ctx.Err() } } if cc.dopts.scChan != nil { go cc.scWatcher() } return cc, nil } // connectivityStateEvaluator gets updated by addrConns when their // states transition, based on which it evaluates the state of // ClientConn. // Note: This code will eventually sit in the balancer in the new design. type connectivityStateEvaluator struct { csMgr *connectivityStateManager mu sync.Mutex numReady uint64 // Number of addrConns in ready state. numConnecting uint64 // Number of addrConns in connecting state. numTransientFailure uint64 // Number of addrConns in transientFailure. } // recordTransition records state change happening in every addrConn and based on // that it evaluates what state the ClientConn is in. // It can only transition between connectivity.Ready, connectivity.Connecting and connectivity.TransientFailure. Other states, // Idle and connectivity.Shutdown are transitioned into by ClientConn; in the begining of the connection // before any addrConn is created ClientConn is in idle state. In the end when ClientConn // closes it is in connectivity.Shutdown state. // TODO Note that in later releases, a ClientConn with no activity will be put into an Idle state. func (cse *connectivityStateEvaluator) recordTransition(oldState, newState connectivity.State) { cse.mu.Lock() defer cse.mu.Unlock() // Update counters. for idx, state := range []connectivity.State{oldState, newState} { updateVal := 2*uint64(idx) - 1 // -1 for oldState and +1 for new. switch state { case connectivity.Ready: cse.numReady += updateVal case connectivity.Connecting: cse.numConnecting += updateVal case connectivity.TransientFailure: cse.numTransientFailure += updateVal } } // Evaluate. if cse.numReady > 0 { cse.csMgr.updateState(connectivity.Ready) return } if cse.numConnecting > 0 { cse.csMgr.updateState(connectivity.Connecting) return } cse.csMgr.updateState(connectivity.TransientFailure) } // connectivityStateManager keeps the connectivity.State of ClientConn. // This struct will eventually be exported so the balancers can access it. 
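// Illustrative sketch (not part of the original source): the evaluation step
// of recordTransition above reduces the per-addrConn counters to a single
// ClientConn state with a fixed priority — Ready wins over Connecting, which
// wins over TransientFailure. Restated as a standalone helper (the function
// name is hypothetical, used only for this sketch):
package example

import "google.golang.org/grpc/connectivity"

// aggregateState mirrors the priority used by connectivityStateEvaluator:
// any ready connection makes the channel Ready, otherwise any in-flight
// attempt makes it Connecting, otherwise it is in TransientFailure.
func aggregateState(numReady, numConnecting uint64) connectivity.State {
	if numReady > 0 {
		return connectivity.Ready
	}
	if numConnecting > 0 {
		return connectivity.Connecting
	}
	return connectivity.TransientFailure
}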
type connectivityStateManager struct { mu sync.Mutex state connectivity.State notifyChan chan struct{} } // updateState updates the connectivity.State of ClientConn. // If there's a change it notifies goroutines waiting on state change to // happen. func (csm *connectivityStateManager) updateState(state connectivity.State) { csm.mu.Lock() defer csm.mu.Unlock() if csm.state == connectivity.Shutdown { return } if csm.state == state { return } csm.state = state if csm.notifyChan != nil { // There are other goroutines waiting on this channel. close(csm.notifyChan) csm.notifyChan = nil } } func (csm *connectivityStateManager) getState() connectivity.State { csm.mu.Lock() defer csm.mu.Unlock() return csm.state } func (csm *connectivityStateManager) getNotifyChan() <-chan struct{} { csm.mu.Lock() defer csm.mu.Unlock() if csm.notifyChan == nil { csm.notifyChan = make(chan struct{}) } return csm.notifyChan } // ClientConn represents a client connection to an RPC server. type ClientConn struct { ctx context.Context cancel context.CancelFunc target string authority string dopts dialOptions csMgr *connectivityStateManager csEvltr *connectivityStateEvaluator // This will eventually be part of balancer. mu sync.RWMutex sc ServiceConfig conns map[Address]*addrConn // Keepalive parameter can be updated if a GoAway is received. mkp keepalive.ClientParameters } // WaitForStateChange waits until the connectivity.State of ClientConn changes from sourceState or // ctx expires. A true value is returned in former case and false in latter. // This is an EXPERIMENTAL API. func (cc *ClientConn) WaitForStateChange(ctx context.Context, sourceState connectivity.State) bool { ch := cc.csMgr.getNotifyChan() if cc.csMgr.getState() != sourceState { return true } select { case <-ctx.Done(): return false case <-ch: return true } } // GetState returns the connectivity.State of ClientConn. // This is an EXPERIMENTAL API. func (cc *ClientConn) GetState() connectivity.State { return cc.csMgr.getState() } // lbWatcher watches the Notify channel of the balancer in cc and manages // connections accordingly. If doneChan is not nil, it is closed after the // first successfull connection is made. func (cc *ClientConn) lbWatcher(doneChan chan struct{}) { defer func() { // In case channel from cc.dopts.balancer.Notify() gets closed before a // successful connection gets established, don't forget to notify the // caller. if doneChan != nil { close(doneChan) } }() _, isPickFirst := cc.dopts.balancer.(*pickFirst) for addrs := range cc.dopts.balancer.Notify() { if isPickFirst { if len(addrs) == 0 { // No address can be connected, should teardown current addrconn if exists cc.mu.Lock() if len(cc.conns) != 0 { cc.pickFirstAddrConnTearDown() } cc.mu.Unlock() } else { cc.resetAddrConn(addrs, true, nil) if doneChan != nil { close(doneChan) doneChan = nil } } } else { // Not pickFirst, create a new addrConn for each address. var ( add []Address // Addresses need to setup connections. del []*addrConn // Connections need to tear down. 
) cc.mu.Lock() for _, a := range addrs { if _, ok := cc.conns[a]; !ok { add = append(add, a) } } for k, c := range cc.conns { var keep bool for _, a := range addrs { if k == a { keep = true break } } if !keep { del = append(del, c) delete(cc.conns, k) } } cc.mu.Unlock() for _, a := range add { var err error if doneChan != nil { err = cc.resetAddrConn([]Address{a}, true, nil) if err == nil { close(doneChan) doneChan = nil } } else { err = cc.resetAddrConn([]Address{a}, false, nil) } if err != nil { grpclog.Warningf("Error creating connection to %v. Err: %v", a, err) } } for _, c := range del { c.tearDown(errConnDrain) } } } } func (cc *ClientConn) scWatcher() { for { select { case sc, ok := <-cc.dopts.scChan: if !ok { return } cc.mu.Lock() // TODO: load balance policy runtime change is ignored. // We may revist this decision in the future. cc.sc = sc cc.mu.Unlock() case <-cc.ctx.Done(): return } } } // pickFirstUpdateAddresses checks whether current address in the updating list, Update the list if true. // It is only used when the balancer is pick first. func (cc *ClientConn) pickFirstUpdateAddresses(addrs []Address) bool { if len(cc.conns) == 0 { // No addrconn. Should go resetting addrconn. return false } var currentAc *addrConn for _, currentAc = range cc.conns { break } var addrInNewSlice bool for _, addr := range addrs { if strings.Compare(addr.Addr, currentAc.curAddr.Addr) == 0 { addrInNewSlice = true break } } if addrInNewSlice { cc.conns = make(map[Address]*addrConn) for _, addr := range addrs { cc.conns[addr] = currentAc } currentAc.addrs = addrs return true } return false } // pickFirstAddrConnTearDown() should be called after lock. func (cc *ClientConn) pickFirstAddrConnTearDown() { if len(cc.conns) == 0 { return } var currentAc *addrConn for _, currentAc = range cc.conns { break } for k := range cc.conns { delete(cc.conns, k) } currentAc.tearDown(errConnDrain) } // resetAddrConn creates an addrConn for addr and adds it to cc.conns. // If there is an old addrConn for addr, it will be torn down, using tearDownErr as the reason. // If tearDownErr is nil, errConnDrain will be used instead. // // We should never need to replace an addrConn with a new one. This function is only used // as newAddrConn to create new addrConn. // TODO rename this function and clean up the code. func (cc *ClientConn) resetAddrConn(addrs []Address, block bool, tearDownErr error) error { // if current transport in addrs, just change lists to update order and new addresses // not work for roundrobin cc.mu.Lock() if _, isPickFirst := cc.dopts.balancer.(*pickFirst); isPickFirst { // If Current address in use in the updating list, just update the list. // Otherwise, teardown current addrconn and create a new one. if cc.pickFirstUpdateAddresses(addrs) { cc.mu.Unlock() return nil } cc.pickFirstAddrConnTearDown() } cc.mu.Unlock() ac := &addrConn{ cc: cc, addrs: addrs, dopts: cc.dopts, } ac.ctx, ac.cancel = context.WithCancel(cc.ctx) ac.csEvltr = cc.csEvltr if EnableTracing { ac.events = trace.NewEventLog("grpc.ClientConn", ac.addrs[0].Addr) } if !ac.dopts.insecure { if ac.dopts.copts.TransportCredentials == nil { return errNoTransportSecurity } } else { if ac.dopts.copts.TransportCredentials != nil { return errCredentialsConflict } for _, cd := range ac.dopts.copts.PerRPCCredentials { if cd.RequireTransportSecurity() { return errTransportCredentialsMissing } } } // Track ac in cc. This needs to be done before any getTransport(...) is called. 
cc.mu.Lock() if cc.conns == nil { cc.mu.Unlock() return ErrClientConnClosing } stale := cc.conns[ac.addrs[0]] for _, a := range ac.addrs { cc.conns[a] = ac } cc.mu.Unlock() if stale != nil { // There is an addrConn alive on ac.addr already. This could be due to // a buggy Balancer that reports duplicated Addresses. if tearDownErr == nil { // tearDownErr is nil if resetAddrConn is called by // 1) Dial // 2) lbWatcher // In both cases, the stale ac should drain, not close. stale.tearDown(errConnDrain) } else { stale.tearDown(tearDownErr) } } if block { if err := ac.resetTransport(false); err != nil { if err != errConnClosing { // Tear down ac and delete it from cc.conns. cc.mu.Lock() delete(cc.conns, ac.addrs[0]) cc.mu.Unlock() ac.tearDown(err) } if e, ok := err.(transport.ConnectionError); ok && !e.Temporary() { return e.Origin() } return err } // Start to monitor the error status of transport. go ac.transportMonitor() } else { // Start a goroutine connecting to the server asynchronously. go func() { if err := ac.resetTransport(false); err != nil { grpclog.Warningf("Failed to dial %s: %v; please retry.", ac.addrs[0].Addr, err) if err != errConnClosing { // Keep this ac in cc.conns, to get the reason it's torn down. ac.tearDown(err) } return } ac.transportMonitor() }() } return nil } // GetMethodConfig gets the method config of the input method. // If there's an exact match for input method (i.e. /service/method), we return // the corresponding MethodConfig. // If there isn't an exact match for the input method, we look for the default config // under the service (i.e /service/). If there is a default MethodConfig for // the serivce, we return it. // Otherwise, we return an empty MethodConfig. func (cc *ClientConn) GetMethodConfig(method string) MethodConfig { // TODO: Avoid the locking here. cc.mu.RLock() defer cc.mu.RUnlock() m, ok := cc.sc.Methods[method] if !ok { i := strings.LastIndex(method, "/") m, _ = cc.sc.Methods[method[:i+1]] } return m } func (cc *ClientConn) getTransport(ctx context.Context, opts BalancerGetOptions) (transport.ClientTransport, func(), error) { var ( ac *addrConn ok bool put func() ) if cc.dopts.balancer == nil { // If balancer is nil, there should be only one addrConn available. cc.mu.RLock() if cc.conns == nil { cc.mu.RUnlock() return nil, nil, toRPCErr(ErrClientConnClosing) } for _, ac = range cc.conns { // Break after the first iteration to get the first addrConn. ok = true break } cc.mu.RUnlock() } else { var ( addr Address err error ) addr, put, err = cc.dopts.balancer.Get(ctx, opts) if err != nil { return nil, nil, toRPCErr(err) } cc.mu.RLock() if cc.conns == nil { cc.mu.RUnlock() return nil, nil, toRPCErr(ErrClientConnClosing) } ac, ok = cc.conns[addr] cc.mu.RUnlock() } if !ok { if put != nil { updateRPCInfoInContext(ctx, rpcInfo{bytesSent: false, bytesReceived: false}) put() } return nil, nil, errConnClosing } t, err := ac.wait(ctx, cc.dopts.balancer != nil, !opts.BlockingWait) if err != nil { if put != nil { updateRPCInfoInContext(ctx, rpcInfo{bytesSent: false, bytesReceived: false}) put() } return nil, nil, err } return t, put, nil } // Close tears down the ClientConn and all underlying connections. 
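// Illustrative sketch (not part of the original source): GetState and
// WaitForStateChange (both experimental, defined earlier in this file) can be
// combined to block until a ClientConn becomes Ready or the caller's context
// expires — roughly what the assertState helper in the tests for this package
// does. The helper name below is hypothetical.
package example

import (
	"golang.org/x/net/context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/connectivity"
)

// waitForReady observes the connectivity state, parking on WaitForStateChange
// between observations, until the connection is Ready or ctx is done.
func waitForReady(ctx context.Context, cc *grpc.ClientConn) bool {
	for {
		s := cc.GetState()
		if s == connectivity.Ready {
			return true
		}
		if !cc.WaitForStateChange(ctx, s) {
			// ctx expired or was canceled before the state changed.
			return false
		}
	}
}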
func (cc *ClientConn) Close() error { cc.cancel() cc.mu.Lock() if cc.conns == nil { cc.mu.Unlock() return ErrClientConnClosing } conns := cc.conns cc.conns = nil cc.csMgr.updateState(connectivity.Shutdown) cc.mu.Unlock() if cc.dopts.balancer != nil { cc.dopts.balancer.Close() } for _, ac := range conns { ac.tearDown(ErrClientConnClosing) } return nil } // addrConn is a network connection to a given address. type addrConn struct { ctx context.Context cancel context.CancelFunc cc *ClientConn curAddr Address addrs []Address dopts dialOptions events trace.EventLog csEvltr *connectivityStateEvaluator mu sync.Mutex state connectivity.State down func(error) // the handler called when a connection is down. // ready is closed and becomes nil when a new transport is up or failed // due to timeout. ready chan struct{} transport transport.ClientTransport // The reason this addrConn is torn down. tearDownErr error } // adjustParams updates parameters used to create transports upon // receiving a GoAway. func (ac *addrConn) adjustParams(r transport.GoAwayReason) { switch r { case transport.TooManyPings: v := 2 * ac.dopts.copts.KeepaliveParams.Time ac.cc.mu.Lock() if v > ac.cc.mkp.Time { ac.cc.mkp.Time = v } ac.cc.mu.Unlock() } } // printf records an event in ac's event log, unless ac has been closed. // REQUIRES ac.mu is held. func (ac *addrConn) printf(format string, a ...interface{}) { if ac.events != nil { ac.events.Printf(format, a...) } } // errorf records an error in ac's event log, unless ac has been closed. // REQUIRES ac.mu is held. func (ac *addrConn) errorf(format string, a ...interface{}) { if ac.events != nil { ac.events.Errorf(format, a...) } } // resetTransport recreates a transport to the address for ac. // For the old transport: // - if drain is true, it will be gracefully closed. // - otherwise, it will be closed. func (ac *addrConn) resetTransport(drain bool) error { ac.mu.Lock() if ac.state == connectivity.Shutdown { ac.mu.Unlock() return errConnClosing } ac.printf("connecting") if ac.down != nil { ac.down(downErrorf(false, true, "%v", errNetworkIO)) ac.down = nil } oldState := ac.state ac.state = connectivity.Connecting ac.csEvltr.recordTransition(oldState, ac.state) t := ac.transport ac.transport = nil ac.mu.Unlock() if t != nil && !drain { t.Close() } ac.cc.mu.RLock() ac.dopts.copts.KeepaliveParams = ac.cc.mkp ac.cc.mu.RUnlock() for retries := 0; ; retries++ { ac.mu.Lock() sleepTime := ac.dopts.bs.backoff(retries) timeout := minConnectTimeout if timeout < time.Duration(int(sleepTime)/len(ac.addrs)) { timeout = time.Duration(int(sleepTime) / len(ac.addrs)) } connectTime := time.Now() // copy ac.addrs in case of race addrsIter := make([]Address, len(ac.addrs)) copy(addrsIter, ac.addrs) ac.mu.Unlock() for _, addr := range addrsIter { ac.mu.Lock() if ac.state == connectivity.Shutdown { // ac.tearDown(...) has been invoked. ac.mu.Unlock() return errConnClosing } ac.mu.Unlock() ctx, cancel := context.WithTimeout(ac.ctx, timeout) sinfo := transport.TargetInfo{ Addr: addr.Addr, Metadata: addr.Metadata, } newTransport, err := transport.NewClientTransport(ctx, sinfo, ac.dopts.copts) // Don't call cancel in success path due to a race in Go 1.6: // https://github.com/golang/go/issues/15078. if err != nil { cancel() if e, ok := err.(transport.ConnectionError); ok && !e.Temporary() { return err } grpclog.Warningf("grpc: addrConn.resetTransport failed to create client transport: %v; Reconnecting to %v", err, addr) ac.mu.Lock() if ac.state == connectivity.Shutdown { // ac.tearDown(...) 
has been invoked. ac.mu.Unlock() return errConnClosing } ac.errorf("transient failure: %v", err) oldState = ac.state ac.state = connectivity.TransientFailure ac.csEvltr.recordTransition(oldState, ac.state) if ac.ready != nil { close(ac.ready) ac.ready = nil } ac.mu.Unlock() continue } ac.mu.Lock() ac.printf("ready") if ac.state == connectivity.Shutdown { // ac.tearDown(...) has been invoked. ac.mu.Unlock() newTransport.Close() return errConnClosing } oldState = ac.state ac.state = connectivity.Ready ac.csEvltr.recordTransition(oldState, ac.state) ac.transport = newTransport if ac.ready != nil { close(ac.ready) ac.ready = nil } if ac.cc.dopts.balancer != nil { ac.down = ac.cc.dopts.balancer.Up(addr) } ac.curAddr = addr ac.mu.Unlock() return nil } timer := time.NewTimer(sleepTime - time.Since(connectTime)) select { case <-timer.C: case <-ac.ctx.Done(): timer.Stop() return ac.ctx.Err() } timer.Stop() } } // Run in a goroutine to track the error in transport and create the // new transport if an error happens. It returns when the channel is closing. func (ac *addrConn) transportMonitor() { for { ac.mu.Lock() t := ac.transport ac.mu.Unlock() select { // This is needed to detect the teardown when // the addrConn is idle (i.e., no RPC in flight). case <-ac.ctx.Done(): select { case <-t.Error(): t.Close() default: } return case <-t.GoAway(): ac.adjustParams(t.GetGoAwayReason()) // If GoAway happens without any network I/O error, the underlying transport // will be gracefully closed, and a new transport will be created. // (The transport will be closed when all the pending RPCs finished or failed.) // If GoAway and some network I/O error happen concurrently, the underlying transport // will be closed, and a new transport will be created. var drain bool select { case <-t.Error(): default: drain = true } if err := ac.resetTransport(drain); err != nil { grpclog.Infof("get error from resetTransport %v, transportMonitor returning", err) if err != errConnClosing { // Keep this ac in cc.conns, to get the reason it's torn down. ac.tearDown(err) } return } case <-t.Error(): select { case <-ac.ctx.Done(): t.Close() return case <-t.GoAway(): ac.adjustParams(t.GetGoAwayReason()) if err := ac.resetTransport(false); err != nil { grpclog.Infof("get error from resetTransport %v, transportMonitor returning", err) if err != errConnClosing { // Keep this ac in cc.conns, to get the reason it's torn down. ac.tearDown(err) } return } default: } ac.mu.Lock() if ac.state == connectivity.Shutdown { // ac has been shutdown. ac.mu.Unlock() return } oldState := ac.state ac.state = connectivity.TransientFailure ac.csEvltr.recordTransition(oldState, ac.state) ac.mu.Unlock() if err := ac.resetTransport(false); err != nil { grpclog.Infof("get error from resetTransport %v, transportMonitor returning", err) ac.mu.Lock() ac.printf("transport exiting: %v", err) ac.mu.Unlock() grpclog.Warningf("grpc: addrConn.transportMonitor exits due to: %v", err) if err != errConnClosing { // Keep this ac in cc.conns, to get the reason it's torn down. ac.tearDown(err) } return } } } } // wait blocks until i) the new transport is up or ii) ctx is done or iii) ac is closed or // iv) transport is in connectivity.TransientFailure and there is a balancer/failfast is true. func (ac *addrConn) wait(ctx context.Context, hasBalancer, failfast bool) (transport.ClientTransport, error) { for { ac.mu.Lock() switch { case ac.state == connectivity.Shutdown: if failfast || !hasBalancer { // RPC is failfast or balancer is nil. 
This RPC should fail with ac.tearDownErr. err := ac.tearDownErr ac.mu.Unlock() return nil, err } ac.mu.Unlock() return nil, errConnClosing case ac.state == connectivity.Ready: ct := ac.transport ac.mu.Unlock() return ct, nil case ac.state == connectivity.TransientFailure: if failfast || hasBalancer { ac.mu.Unlock() return nil, errConnUnavailable } } ready := ac.ready if ready == nil { ready = make(chan struct{}) ac.ready = ready } ac.mu.Unlock() select { case <-ctx.Done(): return nil, toRPCErr(ctx.Err()) // Wait until the new transport is ready or failed. case <-ready: } } } // tearDown starts to tear down the addrConn. // TODO(zhaoq): Make this synchronous to avoid unbounded memory consumption in // some edge cases (e.g., the caller opens and closes many addrConn's in a // tight loop. // tearDown doesn't remove ac from ac.cc.conns. func (ac *addrConn) tearDown(err error) { ac.cancel() ac.mu.Lock() ac.curAddr = Address{} defer ac.mu.Unlock() if ac.down != nil { ac.down(downErrorf(false, false, "%v", err)) ac.down = nil } if err == errConnDrain && ac.transport != nil { // GracefulClose(...) may be executed multiple times when // i) receiving multiple GoAway frames from the server; or // ii) there are concurrent name resolver/Balancer triggered // address removal and GoAway. ac.transport.GracefulClose() } if ac.state == connectivity.Shutdown { return } oldState := ac.state ac.state = connectivity.Shutdown ac.tearDownErr = err ac.csEvltr.recordTransition(oldState, ac.state) if ac.events != nil { ac.events.Finish() ac.events = nil } if ac.ready != nil { close(ac.ready) ac.ready = nil } if ac.transport != nil && err != errConnDrain { ac.transport.Close() } return } golang-google-grpc-1.6.0/clientconn_test.go000066400000000000000000000274171315416461300207310ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package grpc import ( "math" "net" "testing" "time" "golang.org/x/net/context" "google.golang.org/grpc/connectivity" "google.golang.org/grpc/credentials" "google.golang.org/grpc/keepalive" "google.golang.org/grpc/naming" "google.golang.org/grpc/testdata" ) func assertState(wantState connectivity.State, cc *ClientConn) (connectivity.State, bool) { ctx, cancel := context.WithTimeout(context.Background(), time.Second) defer cancel() var state connectivity.State for state = cc.GetState(); state != wantState && cc.WaitForStateChange(ctx, state); state = cc.GetState() { } return state, state == wantState } func TestConnectivityStates(t *testing.T) { servers, resolver := startServers(t, 2, math.MaxUint32) defer func() { for i := 0; i < 2; i++ { servers[i].stop() } }() cc, err := Dial("foo.bar.com", WithBalancer(RoundRobin(resolver)), WithInsecure()) if err != nil { t.Fatalf("Dial(\"foo.bar.com\", WithBalancer(_)) = _, %v, want _ ", err) } defer cc.Close() wantState := connectivity.Ready if state, ok := assertState(wantState, cc); !ok { t.Fatalf("asserState(%s) = %s, false, want %s, true", wantState, state, wantState) } // Send an update to delete the server connection (tearDown addrConn). update := []*naming.Update{ { Op: naming.Delete, Addr: "localhost:" + servers[0].port, }, } resolver.w.inject(update) wantState = connectivity.TransientFailure if state, ok := assertState(wantState, cc); !ok { t.Fatalf("asserState(%s) = %s, false, want %s, true", wantState, state, wantState) } update[0] = &naming.Update{ Op: naming.Add, Addr: "localhost:" + servers[1].port, } resolver.w.inject(update) wantState = connectivity.Ready if state, ok := assertState(wantState, cc); !ok { t.Fatalf("asserState(%s) = %s, false, want %s, true", wantState, state, wantState) } } func TestDialTimeout(t *testing.T) { conn, err := Dial("Non-Existent.Server:80", WithTimeout(time.Millisecond), WithBlock(), WithInsecure()) if err == nil { conn.Close() } if err != context.DeadlineExceeded { t.Fatalf("Dial(_, _) = %v, %v, want %v", conn, err, context.DeadlineExceeded) } } func TestTLSDialTimeout(t *testing.T) { creds, err := credentials.NewClientTLSFromFile(testdata.Path("ca.pem"), "x.test.youtube.com") if err != nil { t.Fatalf("Failed to create credentials %v", err) } conn, err := Dial("Non-Existent.Server:80", WithTransportCredentials(creds), WithTimeout(time.Millisecond), WithBlock()) if err == nil { conn.Close() } if err != context.DeadlineExceeded { t.Fatalf("Dial(_, _) = %v, %v, want %v", conn, err, context.DeadlineExceeded) } } func TestDefaultAuthority(t *testing.T) { target := "Non-Existent.Server:8080" conn, err := Dial(target, WithInsecure()) if err != nil { t.Fatalf("Dial(_, _) = _, %v, want _, ", err) } conn.Close() if conn.authority != target { t.Fatalf("%v.authority = %v, want %v", conn, conn.authority, target) } } func TestTLSServerNameOverwrite(t *testing.T) { overwriteServerName := "over.write.server.name" creds, err := credentials.NewClientTLSFromFile(testdata.Path("ca.pem"), overwriteServerName) if err != nil { t.Fatalf("Failed to create credentials %v", err) } conn, err := Dial("Non-Existent.Server:80", WithTransportCredentials(creds)) if err != nil { t.Fatalf("Dial(_, _) = _, %v, want _, ", err) } conn.Close() if conn.authority != overwriteServerName { t.Fatalf("%v.authority = %v, want %v", conn, conn.authority, overwriteServerName) } } func TestWithAuthority(t *testing.T) { overwriteServerName := "over.write.server.name" conn, err := Dial("Non-Existent.Server:80", WithInsecure(), 
WithAuthority(overwriteServerName)) if err != nil { t.Fatalf("Dial(_, _) = _, %v, want _, ", err) } conn.Close() if conn.authority != overwriteServerName { t.Fatalf("%v.authority = %v, want %v", conn, conn.authority, overwriteServerName) } } func TestWithAuthorityAndTLS(t *testing.T) { overwriteServerName := "over.write.server.name" creds, err := credentials.NewClientTLSFromFile(testdata.Path("ca.pem"), overwriteServerName) if err != nil { t.Fatalf("Failed to create credentials %v", err) } conn, err := Dial("Non-Existent.Server:80", WithTransportCredentials(creds), WithAuthority("no.effect.authority")) if err != nil { t.Fatalf("Dial(_, _) = _, %v, want _, ", err) } conn.Close() if conn.authority != overwriteServerName { t.Fatalf("%v.authority = %v, want %v", conn, conn.authority, overwriteServerName) } } func TestDialContextCancel(t *testing.T) { ctx, cancel := context.WithCancel(context.Background()) cancel() if _, err := DialContext(ctx, "Non-Existent.Server:80", WithBlock(), WithInsecure()); err != context.Canceled { t.Fatalf("DialContext(%v, _) = _, %v, want _, %v", ctx, err, context.Canceled) } } // blockingBalancer mimics the behavior of balancers whose initialization takes a long time. // In this test, reading from blockingBalancer.Notify() blocks forever. type blockingBalancer struct { ch chan []Address } func newBlockingBalancer() Balancer { return &blockingBalancer{ch: make(chan []Address)} } func (b *blockingBalancer) Start(target string, config BalancerConfig) error { return nil } func (b *blockingBalancer) Up(addr Address) func(error) { return nil } func (b *blockingBalancer) Get(ctx context.Context, opts BalancerGetOptions) (addr Address, put func(), err error) { return Address{}, nil, nil } func (b *blockingBalancer) Notify() <-chan []Address { return b.ch } func (b *blockingBalancer) Close() error { close(b.ch) return nil } func TestDialWithBlockingBalancer(t *testing.T) { ctx, cancel := context.WithCancel(context.Background()) dialDone := make(chan struct{}) go func() { DialContext(ctx, "Non-Existent.Server:80", WithBlock(), WithInsecure(), WithBalancer(newBlockingBalancer())) close(dialDone) }() cancel() <-dialDone } // securePerRPCCredentials always requires transport security. 
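// Illustrative sketch (not part of the original source): the two tests above
// rely on a blocking DialContext honoring cancellation of its context. From
// caller code, bounding a dial that may otherwise block indefinitely looks
// roughly like this (and is the documented replacement for the deprecated
// WithTimeout option); the 2s budget is a placeholder.
package example

import (
	"time"

	"golang.org/x/net/context"
	"google.golang.org/grpc"
)

func dialWithDeadline(target string) (*grpc.ClientConn, error) {
	// Bound the whole blocking dial; DialContext returns ctx.Err() if the
	// deadline passes or cancel fires before the connection is up. Canceling
	// after DialContext returns is documented to be a no-op.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	return grpc.DialContext(ctx, target, grpc.WithInsecure(), grpc.WithBlock())
}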
type securePerRPCCredentials struct{} func (c securePerRPCCredentials) GetRequestMetadata(ctx context.Context, uri ...string) (map[string]string, error) { return nil, nil } func (c securePerRPCCredentials) RequireTransportSecurity() bool { return true } func TestCredentialsMisuse(t *testing.T) { tlsCreds, err := credentials.NewClientTLSFromFile(testdata.Path("ca.pem"), "x.test.youtube.com") if err != nil { t.Fatalf("Failed to create authenticator %v", err) } // Two conflicting credential configurations if _, err := Dial("Non-Existent.Server:80", WithTransportCredentials(tlsCreds), WithBlock(), WithInsecure()); err != errCredentialsConflict { t.Fatalf("Dial(_, _) = _, %v, want _, %v", err, errCredentialsConflict) } // security info on insecure connection if _, err := Dial("Non-Existent.Server:80", WithPerRPCCredentials(securePerRPCCredentials{}), WithBlock(), WithInsecure()); err != errTransportCredentialsMissing { t.Fatalf("Dial(_, _) = _, %v, want _, %v", err, errTransportCredentialsMissing) } } func TestWithBackoffConfigDefault(t *testing.T) { testBackoffConfigSet(t, &DefaultBackoffConfig) } func TestWithBackoffConfig(t *testing.T) { b := BackoffConfig{MaxDelay: DefaultBackoffConfig.MaxDelay / 2} expected := b setDefaults(&expected) // defaults should be set testBackoffConfigSet(t, &expected, WithBackoffConfig(b)) } func TestWithBackoffMaxDelay(t *testing.T) { md := DefaultBackoffConfig.MaxDelay / 2 expected := BackoffConfig{MaxDelay: md} setDefaults(&expected) testBackoffConfigSet(t, &expected, WithBackoffMaxDelay(md)) } func testBackoffConfigSet(t *testing.T, expected *BackoffConfig, opts ...DialOption) { opts = append(opts, WithInsecure()) conn, err := Dial("foo:80", opts...) if err != nil { t.Fatalf("unexpected error dialing connection: %v", err) } if conn.dopts.bs == nil { t.Fatalf("backoff config not set") } actual, ok := conn.dopts.bs.(BackoffConfig) if !ok { t.Fatalf("unexpected type of backoff config: %#v", conn.dopts.bs) } if actual != *expected { t.Fatalf("unexpected backoff config on connection: %v, want %v", actual, expected) } conn.Close() } type testErr struct { temp bool } func (e *testErr) Error() string { return "test error" } func (e *testErr) Temporary() bool { return e.temp } var nonTemporaryError = &testErr{false} func nonTemporaryErrorDialer(addr string, timeout time.Duration) (net.Conn, error) { return nil, nonTemporaryError } func TestDialWithBlockErrorOnNonTemporaryErrorDialer(t *testing.T) { ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond) defer cancel() if _, err := DialContext(ctx, "", WithInsecure(), WithDialer(nonTemporaryErrorDialer), WithBlock(), FailOnNonTempDialError(true)); err != nonTemporaryError { t.Fatalf("Dial(%q) = %v, want %v", "", err, nonTemporaryError) } // Without FailOnNonTempDialError, gRPC will retry to connect, and dial should exit with time out error. if _, err := DialContext(ctx, "", WithInsecure(), WithDialer(nonTemporaryErrorDialer), WithBlock()); err != context.DeadlineExceeded { t.Fatalf("Dial(%q) = %v, want %v", "", err, context.DeadlineExceeded) } } // emptyBalancer returns an empty set of servers. 
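// Illustrative sketch (not part of the original source): securePerRPCCredentials
// above only stubs the interface for the misuse test. A more realistic — but
// still hypothetical — per-RPC credential attaches a static bearer token to
// every request and, because it requires transport security, must be paired
// with WithTransportCredentials rather than WithInsecure (exactly the
// combination TestCredentialsMisuse rejects).
package example

import (
	"golang.org/x/net/context"

	"google.golang.org/grpc/credentials"
)

// staticTokenCreds carries a fixed token on every RPC.
type staticTokenCreds struct {
	token string
}

var _ credentials.PerRPCCredentials = staticTokenCreds{}

func (c staticTokenCreds) GetRequestMetadata(ctx context.Context, uri ...string) (map[string]string, error) {
	// The returned metadata is merged into the outgoing request headers.
	return map[string]string{"authorization": "Bearer " + c.token}, nil
}

func (c staticTokenCreds) RequireTransportSecurity() bool {
	// Refuse to send the token over an insecure connection.
	return true
}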
type emptyBalancer struct { ch chan []Address } func newEmptyBalancer() Balancer { return &emptyBalancer{ch: make(chan []Address, 1)} } func (b *emptyBalancer) Start(_ string, _ BalancerConfig) error { b.ch <- nil return nil } func (b *emptyBalancer) Up(_ Address) func(error) { return nil } func (b *emptyBalancer) Get(_ context.Context, _ BalancerGetOptions) (Address, func(), error) { return Address{}, nil, nil } func (b *emptyBalancer) Notify() <-chan []Address { return b.ch } func (b *emptyBalancer) Close() error { close(b.ch) return nil } func TestNonblockingDialWithEmptyBalancer(t *testing.T) { ctx, cancel := context.WithCancel(context.Background()) defer cancel() dialDone := make(chan error) go func() { dialDone <- func() error { conn, err := DialContext(ctx, "Non-Existent.Server:80", WithInsecure(), WithBalancer(newEmptyBalancer())) if err != nil { return err } return conn.Close() }() }() if err := <-dialDone; err != nil { t.Fatalf("unexpected error dialing connection: %s", err) } } func TestClientUpdatesParamsAfterGoAway(t *testing.T) { lis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Failed to listen. Err: %v", err) } defer lis.Close() addr := lis.Addr().String() s := NewServer() go s.Serve(lis) defer s.Stop() cc, err := Dial(addr, WithBlock(), WithInsecure(), WithKeepaliveParams(keepalive.ClientParameters{ Time: 50 * time.Millisecond, Timeout: 1 * time.Millisecond, PermitWithoutStream: true, })) if err != nil { t.Fatalf("Dial(%s, _) = _, %v, want _, ", addr, err) } defer cc.Close() time.Sleep(1 * time.Second) cc.mu.RLock() defer cc.mu.RUnlock() v := cc.mkp.Time if v < 100*time.Millisecond { t.Fatalf("cc.dopts.copts.Keepalive.Time = %v , want 100ms", v) } } func TestClientLBWatcherWithClosedBalancer(t *testing.T) { b := newBlockingBalancer() cc := &ClientConn{dopts: dialOptions{balancer: b}} doneChan := make(chan struct{}) go cc.lbWatcher(doneChan) // Balancer closes before any successful connections. b.Close() select { case <-doneChan: case <-time.After(100 * time.Millisecond): t.Fatal("lbWatcher with closed balancer didn't close doneChan after 100ms") } } golang-google-grpc-1.6.0/codec.go000066400000000000000000000051221315416461300166000ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "math" "sync" "github.com/golang/protobuf/proto" ) // Codec defines the interface gRPC uses to encode and decode messages. // Note that implementations of this interface must be thread safe; // a Codec's methods can be called from concurrent goroutines. type Codec interface { // Marshal returns the wire format of v. Marshal(v interface{}) ([]byte, error) // Unmarshal parses the wire format into v. Unmarshal(data []byte, v interface{}) error // String returns the name of the Codec implementation. The returned // string will be used as part of content type in transmission. String() string } // protoCodec is a Codec implementation with protobuf. It is the default codec for gRPC. 
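// Illustrative sketch (not part of the original source): because Codec is a
// small three-method interface, callers can swap the wire format for a
// connection via WithCodec. A hypothetical JSON codec — only useful when both
// ends agree on it — could look like this, and would be installed with
// grpc.Dial(target, grpc.WithCodec(jsonCodec{}), ...).
package example

import "encoding/json"

// jsonCodec marshals messages as JSON instead of protobuf.
type jsonCodec struct{}

func (jsonCodec) Marshal(v interface{}) ([]byte, error) {
	return json.Marshal(v)
}

func (jsonCodec) Unmarshal(data []byte, v interface{}) error {
	return json.Unmarshal(data, v)
}

// String is used as part of the content type sent on the wire.
func (jsonCodec) String() string {
	return "json"
}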
type protoCodec struct { } type cachedProtoBuffer struct { lastMarshaledSize uint32 proto.Buffer } func capToMaxInt32(val int) uint32 { if val > math.MaxInt32 { return uint32(math.MaxInt32) } return uint32(val) } func (p protoCodec) marshal(v interface{}, cb *cachedProtoBuffer) ([]byte, error) { protoMsg := v.(proto.Message) newSlice := make([]byte, 0, cb.lastMarshaledSize) cb.SetBuf(newSlice) cb.Reset() if err := cb.Marshal(protoMsg); err != nil { return nil, err } out := cb.Bytes() cb.lastMarshaledSize = capToMaxInt32(len(out)) return out, nil } func (p protoCodec) Marshal(v interface{}) ([]byte, error) { cb := protoBufferPool.Get().(*cachedProtoBuffer) out, err := p.marshal(v, cb) // put back buffer and lose the ref to the slice cb.SetBuf(nil) protoBufferPool.Put(cb) return out, err } func (p protoCodec) Unmarshal(data []byte, v interface{}) error { cb := protoBufferPool.Get().(*cachedProtoBuffer) cb.SetBuf(data) v.(proto.Message).Reset() err := cb.Unmarshal(v.(proto.Message)) cb.SetBuf(nil) protoBufferPool.Put(cb) return err } func (protoCodec) String() string { return "proto" } var ( protoBufferPool = &sync.Pool{ New: func() interface{} { return &cachedProtoBuffer{ Buffer: proto.Buffer{}, lastMarshaledSize: 16, } }, } ) golang-google-grpc-1.6.0/codec_benchmark_test.go000066400000000000000000000056451315416461300216630ustar00rootroot00000000000000// +build go1.7 /* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "fmt" "testing" "github.com/golang/protobuf/proto" "google.golang.org/grpc/test/codec_perf" ) func setupBenchmarkProtoCodecInputs(b *testing.B, payloadBaseSize uint32) []proto.Message { payloadBase := make([]byte, payloadBaseSize) // arbitrary byte slices payloadSuffixes := [][]byte{ []byte("one"), []byte("two"), []byte("three"), []byte("four"), []byte("five"), } protoStructs := make([]proto.Message, 0) for _, p := range payloadSuffixes { ps := &codec_perf.Buffer{} ps.Body = append(payloadBase, p...) protoStructs = append(protoStructs, ps) } return protoStructs } // The possible use of certain protobuf APIs like the proto.Buffer API potentially involves caching // on our side. This can add checks around memory allocations and possible contention. // Example run: go test -v -run=^$ -bench=BenchmarkProtoCodec -benchmem func BenchmarkProtoCodec(b *testing.B) { // range of message sizes payloadBaseSizes := make([]uint32, 0) for i := uint32(0); i <= 12; i += 4 { payloadBaseSizes = append(payloadBaseSizes, 1<= Code(len(_Code_index)-1) { return fmt.Sprintf("Code(%d)", i) } return _Code_name[_Code_index[i]:_Code_index[i+1]] } golang-google-grpc-1.6.0/codes/codes.go000066400000000000000000000133441315416461300177220ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package codes defines the canonical error codes used by gRPC. It is // consistent across various languages. package codes // import "google.golang.org/grpc/codes" // A Code is an unsigned 32-bit error code as defined in the gRPC spec. type Code uint32 //go:generate stringer -type=Code const ( // OK is returned on success. OK Code = 0 // Canceled indicates the operation was canceled (typically by the caller). Canceled Code = 1 // Unknown error. An example of where this error may be returned is // if a Status value received from another address space belongs to // an error-space that is not known in this address space. Also // errors raised by APIs that do not return enough error information // may be converted to this error. Unknown Code = 2 // InvalidArgument indicates client specified an invalid argument. // Note that this differs from FailedPrecondition. It indicates arguments // that are problematic regardless of the state of the system // (e.g., a malformed file name). InvalidArgument Code = 3 // DeadlineExceeded means operation expired before completion. // For operations that change the state of the system, this error may be // returned even if the operation has completed successfully. For // example, a successful response from a server could have been delayed // long enough for the deadline to expire. DeadlineExceeded Code = 4 // NotFound means some requested entity (e.g., file or directory) was // not found. NotFound Code = 5 // AlreadyExists means an attempt to create an entity failed because one // already exists. AlreadyExists Code = 6 // PermissionDenied indicates the caller does not have permission to // execute the specified operation. It must not be used for rejections // caused by exhausting some resource (use ResourceExhausted // instead for those errors). It must not be // used if the caller cannot be identified (use Unauthenticated // instead for those errors). PermissionDenied Code = 7 // Unauthenticated indicates the request does not have valid // authentication credentials for the operation. Unauthenticated Code = 16 // ResourceExhausted indicates some resource has been exhausted, perhaps // a per-user quota, or perhaps the entire file system is out of space. ResourceExhausted Code = 8 // FailedPrecondition indicates operation was rejected because the // system is not in a state required for the operation's execution. // For example, directory to be deleted may be non-empty, an rmdir // operation is applied to a non-directory, etc. // // A litmus test that may help a service implementor in deciding // between FailedPrecondition, Aborted, and Unavailable: // (a) Use Unavailable if the client can retry just the failing call. // (b) Use Aborted if the client should retry at a higher-level // (e.g., restarting a read-modify-write sequence). // (c) Use FailedPrecondition if the client should not retry until // the system state has been explicitly fixed. E.g., if an "rmdir" // fails because the directory is non-empty, FailedPrecondition // should be returned since the client should not retry unless // they have first fixed up the directory by deleting files from it. 
// (d) Use FailedPrecondition if the client performs conditional // REST Get/Update/Delete on a resource and the resource on the // server does not match the condition. E.g., conflicting // read-modify-write on the same resource. FailedPrecondition Code = 9 // Aborted indicates the operation was aborted, typically due to a // concurrency issue like sequencer check failures, transaction aborts, // etc. // // See litmus test above for deciding between FailedPrecondition, // Aborted, and Unavailable. Aborted Code = 10 // OutOfRange means operation was attempted past the valid range. // E.g., seeking or reading past end of file. // // Unlike InvalidArgument, this error indicates a problem that may // be fixed if the system state changes. For example, a 32-bit file // system will generate InvalidArgument if asked to read at an // offset that is not in the range [0,2^32-1], but it will generate // OutOfRange if asked to read from an offset past the current // file size. // // There is a fair bit of overlap between FailedPrecondition and // OutOfRange. We recommend using OutOfRange (the more specific // error) when it applies so that callers who are iterating through // a space can easily look for an OutOfRange error to detect when // they are done. OutOfRange Code = 11 // Unimplemented indicates operation is not implemented or not // supported/enabled in this service. Unimplemented Code = 12 // Internal errors. Means some invariants expected by underlying // system has been broken. If you see one of these errors, // something is very broken. Internal Code = 13 // Unavailable indicates the service is currently unavailable. // This is a most likely a transient condition and may be corrected // by retrying with a backoff. // // See litmus test above for deciding between FailedPrecondition, // Aborted, and Unavailable. Unavailable Code = 14 // DataLoss indicates unrecoverable data loss or corruption. DataLoss Code = 15 ) golang-google-grpc-1.6.0/connectivity/000077500000000000000000000000001315416461300177125ustar00rootroot00000000000000golang-google-grpc-1.6.0/connectivity/connectivity.go000066400000000000000000000041161315416461300227610ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package connectivity defines connectivity semantics. // For details, see https://github.com/grpc/grpc/blob/master/doc/connectivity-semantics-and-api.md. // All APIs in this package are experimental. package connectivity import ( "golang.org/x/net/context" "google.golang.org/grpc/grpclog" ) // State indicates the state of connectivity. // It can be the state of a ClientConn or SubConn. type State int func (s State) String() string { switch s { case Idle: return "IDLE" case Connecting: return "CONNECTING" case Ready: return "READY" case TransientFailure: return "TRANSIENT_FAILURE" case Shutdown: return "SHUTDOWN" default: grpclog.Errorf("unknown connectivity state: %d", s) return "Invalid-State" } } const ( // Idle indicates the ClientConn is idle. 
Idle State = iota // Connecting indicates the ClienConn is connecting. Connecting // Ready indicates the ClientConn is ready for work. Ready // TransientFailure indicates the ClientConn has seen a failure but expects to recover. TransientFailure // Shutdown indicates the ClientConn has started shutting down. Shutdown ) // Reporter reports the connectivity states. type Reporter interface { // CurrentState returns the current state of the reporter. CurrentState() State // WaitForStateChange blocks until the reporter's state is different from the given state, // and returns true. // It returns false if <-ctx.Done() can proceed (ctx got timeout or got canceled). WaitForStateChange(context.Context, State) bool } golang-google-grpc-1.6.0/credentials/000077500000000000000000000000001315416461300174715ustar00rootroot00000000000000golang-google-grpc-1.6.0/credentials/credentials.go000066400000000000000000000177431315416461300223310ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package credentials implements various credentials supported by gRPC library, // which encapsulate all the state needed by a client to authenticate with a // server and make various assertions, e.g., about the client's identity, role, // or whether it is authorized to make a particular call. package credentials // import "google.golang.org/grpc/credentials" import ( "crypto/tls" "crypto/x509" "errors" "fmt" "io/ioutil" "net" "strings" "golang.org/x/net/context" ) var ( // alpnProtoStr are the specified application level protocols for gRPC. alpnProtoStr = []string{"h2"} ) // PerRPCCredentials defines the common interface for the credentials which need to // attach security information to every RPC (e.g., oauth2). type PerRPCCredentials interface { // GetRequestMetadata gets the current request metadata, refreshing // tokens if required. This should be called by the transport layer on // each request, and the data should be populated in headers or other // context. uri is the URI of the entry point for the request. When // supported by the underlying implementation, ctx can be used for // timeout and cancellation. // TODO(zhaoq): Define the set of the qualified keys instead of leaving // it as an arbitrary string. GetRequestMetadata(ctx context.Context, uri ...string) (map[string]string, error) // RequireTransportSecurity indicates whether the credentials requires // transport security. RequireTransportSecurity() bool } // ProtocolInfo provides information regarding the gRPC wire protocol version, // security protocol, security protocol version in use, server name, etc. type ProtocolInfo struct { // ProtocolVersion is the gRPC wire protocol version. ProtocolVersion string // SecurityProtocol is the security protocol in use. SecurityProtocol string // SecurityVersion is the security protocol version. SecurityVersion string // ServerName is the user-configured server name. 
ServerName string } // AuthInfo defines the common interface for the auth information the users are interested in. type AuthInfo interface { AuthType() string } var ( // ErrConnDispatched indicates that rawConn has been dispatched out of gRPC // and the caller should not close rawConn. ErrConnDispatched = errors.New("credentials: rawConn is dispatched out of gRPC") ) // TransportCredentials defines the common interface for all the live gRPC wire // protocols and supported transport security protocols (e.g., TLS, SSL). type TransportCredentials interface { // ClientHandshake does the authentication handshake specified by the corresponding // authentication protocol on rawConn for clients. It returns the authenticated // connection and the corresponding auth information about the connection. // Implementations must use the provided context to implement timely cancellation. // gRPC will try to reconnect if the error returned is a temporary error // (io.EOF, context.DeadlineExceeded or err.Temporary() == true). // If the returned error is a wrapper error, implementations should make sure that // the error implements Temporary() to have the correct retry behaviors. ClientHandshake(context.Context, string, net.Conn) (net.Conn, AuthInfo, error) // ServerHandshake does the authentication handshake for servers. It returns // the authenticated connection and the corresponding auth information about // the connection. ServerHandshake(net.Conn) (net.Conn, AuthInfo, error) // Info provides the ProtocolInfo of this TransportCredentials. Info() ProtocolInfo // Clone makes a copy of this TransportCredentials. Clone() TransportCredentials // OverrideServerName overrides the server name used to verify the hostname on the returned certificates from the server. // gRPC internals also use it to override the virtual hosting name if it is set. // It must be called before dialing. Currently, this is only used by grpclb. OverrideServerName(string) error } // TLSInfo contains the auth information for a TLS authenticated connection. // It implements the AuthInfo interface. type TLSInfo struct { State tls.ConnectionState } // AuthType returns the type of TLSInfo as a string. func (t TLSInfo) AuthType() string { return "tls" } // tlsCreds is the credentials required for authenticating a connection using TLS. 
type tlsCreds struct { // TLS configuration config *tls.Config } func (c tlsCreds) Info() ProtocolInfo { return ProtocolInfo{ SecurityProtocol: "tls", SecurityVersion: "1.2", ServerName: c.config.ServerName, } } func (c *tlsCreds) ClientHandshake(ctx context.Context, addr string, rawConn net.Conn) (_ net.Conn, _ AuthInfo, err error) { // use local cfg to avoid clobbering ServerName if using multiple endpoints cfg := cloneTLSConfig(c.config) if cfg.ServerName == "" { colonPos := strings.LastIndex(addr, ":") if colonPos == -1 { colonPos = len(addr) } cfg.ServerName = addr[:colonPos] } conn := tls.Client(rawConn, cfg) errChannel := make(chan error, 1) go func() { errChannel <- conn.Handshake() }() select { case err := <-errChannel: if err != nil { return nil, nil, err } case <-ctx.Done(): return nil, nil, ctx.Err() } return conn, TLSInfo{conn.ConnectionState()}, nil } func (c *tlsCreds) ServerHandshake(rawConn net.Conn) (net.Conn, AuthInfo, error) { conn := tls.Server(rawConn, c.config) if err := conn.Handshake(); err != nil { return nil, nil, err } return conn, TLSInfo{conn.ConnectionState()}, nil } func (c *tlsCreds) Clone() TransportCredentials { return NewTLS(c.config) } func (c *tlsCreds) OverrideServerName(serverNameOverride string) error { c.config.ServerName = serverNameOverride return nil } // NewTLS uses c to construct a TransportCredentials based on TLS. func NewTLS(c *tls.Config) TransportCredentials { tc := &tlsCreds{cloneTLSConfig(c)} tc.config.NextProtos = alpnProtoStr return tc } // NewClientTLSFromCert constructs TLS credentials from the input certificate for client. // serverNameOverride is for testing only. If set to a non empty string, // it will override the virtual host name of authority (e.g. :authority header field) in requests. func NewClientTLSFromCert(cp *x509.CertPool, serverNameOverride string) TransportCredentials { return NewTLS(&tls.Config{ServerName: serverNameOverride, RootCAs: cp}) } // NewClientTLSFromFile constructs TLS credentials from the input certificate file for client. // serverNameOverride is for testing only. If set to a non empty string, // it will override the virtual host name of authority (e.g. :authority header field) in requests. func NewClientTLSFromFile(certFile, serverNameOverride string) (TransportCredentials, error) { b, err := ioutil.ReadFile(certFile) if err != nil { return nil, err } cp := x509.NewCertPool() if !cp.AppendCertsFromPEM(b) { return nil, fmt.Errorf("credentials: failed to append certificates") } return NewTLS(&tls.Config{ServerName: serverNameOverride, RootCAs: cp}), nil } // NewServerTLSFromCert constructs TLS credentials from the input certificate for server. func NewServerTLSFromCert(cert *tls.Certificate) TransportCredentials { return NewTLS(&tls.Config{Certificates: []tls.Certificate{*cert}}) } // NewServerTLSFromFile constructs TLS credentials from the input certificate file and key // file for server. func NewServerTLSFromFile(certFile, keyFile string) (TransportCredentials, error) { cert, err := tls.LoadX509KeyPair(certFile, keyFile) if err != nil { return nil, err } return NewTLS(&tls.Config{Certificates: []tls.Certificate{cert}}), nil } golang-google-grpc-1.6.0/credentials/credentials_test.go000066400000000000000000000140041315416461300233530ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package credentials import ( "crypto/tls" "net" "testing" "golang.org/x/net/context" "google.golang.org/grpc/testdata" ) func TestTLSOverrideServerName(t *testing.T) { expectedServerName := "server.name" c := NewTLS(nil) c.OverrideServerName(expectedServerName) if c.Info().ServerName != expectedServerName { t.Fatalf("c.Info().ServerName = %v, want %v", c.Info().ServerName, expectedServerName) } } func TestTLSClone(t *testing.T) { expectedServerName := "server.name" c := NewTLS(nil) c.OverrideServerName(expectedServerName) cc := c.Clone() if cc.Info().ServerName != expectedServerName { t.Fatalf("cc.Info().ServerName = %v, want %v", cc.Info().ServerName, expectedServerName) } cc.OverrideServerName("") if c.Info().ServerName != expectedServerName { t.Fatalf("Change in clone should not affect the original, c.Info().ServerName = %v, want %v", c.Info().ServerName, expectedServerName) } } type serverHandshake func(net.Conn) (AuthInfo, error) func TestClientHandshakeReturnsAuthInfo(t *testing.T) { done := make(chan AuthInfo, 1) lis := launchServer(t, tlsServerHandshake, done) defer lis.Close() lisAddr := lis.Addr().String() clientAuthInfo := clientHandle(t, gRPCClientHandshake, lisAddr) // wait until server sends serverAuthInfo or fails. serverAuthInfo, ok := <-done if !ok { t.Fatalf("Error at server-side") } if !compare(clientAuthInfo, serverAuthInfo) { t.Fatalf("c.ClientHandshake(_, %v, _) = %v, want %v.", lisAddr, clientAuthInfo, serverAuthInfo) } } func TestServerHandshakeReturnsAuthInfo(t *testing.T) { done := make(chan AuthInfo, 1) lis := launchServer(t, gRPCServerHandshake, done) defer lis.Close() clientAuthInfo := clientHandle(t, tlsClientHandshake, lis.Addr().String()) // wait until server sends serverAuthInfo or fails. serverAuthInfo, ok := <-done if !ok { t.Fatalf("Error at server-side") } if !compare(clientAuthInfo, serverAuthInfo) { t.Fatalf("ServerHandshake(_) = %v, want %v.", serverAuthInfo, clientAuthInfo) } } func TestServerAndClientHandshake(t *testing.T) { done := make(chan AuthInfo, 1) lis := launchServer(t, gRPCServerHandshake, done) defer lis.Close() clientAuthInfo := clientHandle(t, gRPCClientHandshake, lis.Addr().String()) // wait until server sends serverAuthInfo or fails. 
serverAuthInfo, ok := <-done if !ok { t.Fatalf("Error at server-side") } if !compare(clientAuthInfo, serverAuthInfo) { t.Fatalf("AuthInfo returned by server: %v and client: %v aren't same", serverAuthInfo, clientAuthInfo) } } func compare(a1, a2 AuthInfo) bool { if a1.AuthType() != a2.AuthType() { return false } switch a1.AuthType() { case "tls": state1 := a1.(TLSInfo).State state2 := a2.(TLSInfo).State if state1.Version == state2.Version && state1.HandshakeComplete == state2.HandshakeComplete && state1.CipherSuite == state2.CipherSuite && state1.NegotiatedProtocol == state2.NegotiatedProtocol { return true } return false default: return false } } func launchServer(t *testing.T, hs serverHandshake, done chan AuthInfo) net.Listener { lis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Failed to listen: %v", err) } go serverHandle(t, hs, done, lis) return lis } // Is run in a seperate goroutine. func serverHandle(t *testing.T, hs serverHandshake, done chan AuthInfo, lis net.Listener) { serverRawConn, err := lis.Accept() if err != nil { t.Errorf("Server failed to accept connection: %v", err) close(done) return } serverAuthInfo, err := hs(serverRawConn) if err != nil { t.Errorf("Server failed while handshake. Error: %v", err) serverRawConn.Close() close(done) return } done <- serverAuthInfo } func clientHandle(t *testing.T, hs func(net.Conn, string) (AuthInfo, error), lisAddr string) AuthInfo { conn, err := net.Dial("tcp", lisAddr) if err != nil { t.Fatalf("Client failed to connect to %s. Error: %v", lisAddr, err) } defer conn.Close() clientAuthInfo, err := hs(conn, lisAddr) if err != nil { t.Fatalf("Error on client while handshake. Error: %v", err) } return clientAuthInfo } // Server handshake implementation in gRPC. func gRPCServerHandshake(conn net.Conn) (AuthInfo, error) { serverTLS, err := NewServerTLSFromFile(testdata.Path("server1.pem"), testdata.Path("server1.key")) if err != nil { return nil, err } _, serverAuthInfo, err := serverTLS.ServerHandshake(conn) if err != nil { return nil, err } return serverAuthInfo, nil } // Client handshake implementation in gRPC. func gRPCClientHandshake(conn net.Conn, lisAddr string) (AuthInfo, error) { clientTLS := NewTLS(&tls.Config{InsecureSkipVerify: true}) _, authInfo, err := clientTLS.ClientHandshake(context.Background(), lisAddr, conn) if err != nil { return nil, err } return authInfo, nil } func tlsServerHandshake(conn net.Conn) (AuthInfo, error) { cert, err := tls.LoadX509KeyPair(testdata.Path("server1.pem"), testdata.Path("server1.key")) if err != nil { return nil, err } serverTLSConfig := &tls.Config{Certificates: []tls.Certificate{cert}} serverConn := tls.Server(conn, serverTLSConfig) err = serverConn.Handshake() if err != nil { return nil, err } return TLSInfo{State: serverConn.ConnectionState()}, nil } func tlsClientHandshake(conn net.Conn, _ string) (AuthInfo, error) { clientTLSConfig := &tls.Config{InsecureSkipVerify: true} clientConn := tls.Client(conn, clientTLSConfig) if err := clientConn.Handshake(); err != nil { return nil, err } return TLSInfo{State: clientConn.ConnectionState()}, nil } golang-google-grpc-1.6.0/credentials/credentials_util_go17.go000066400000000000000000000040421315416461300242070ustar00rootroot00000000000000// +build go1.7 // +build !go1.8 /* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package credentials import ( "crypto/tls" ) // cloneTLSConfig returns a shallow clone of the exported // fields of cfg, ignoring the unexported sync.Once, which // contains a mutex and must not be copied. // // If cfg is nil, a new zero tls.Config is returned. func cloneTLSConfig(cfg *tls.Config) *tls.Config { if cfg == nil { return &tls.Config{} } return &tls.Config{ Rand: cfg.Rand, Time: cfg.Time, Certificates: cfg.Certificates, NameToCertificate: cfg.NameToCertificate, GetCertificate: cfg.GetCertificate, RootCAs: cfg.RootCAs, NextProtos: cfg.NextProtos, ServerName: cfg.ServerName, ClientAuth: cfg.ClientAuth, ClientCAs: cfg.ClientCAs, InsecureSkipVerify: cfg.InsecureSkipVerify, CipherSuites: cfg.CipherSuites, PreferServerCipherSuites: cfg.PreferServerCipherSuites, SessionTicketsDisabled: cfg.SessionTicketsDisabled, SessionTicketKey: cfg.SessionTicketKey, ClientSessionCache: cfg.ClientSessionCache, MinVersion: cfg.MinVersion, MaxVersion: cfg.MaxVersion, CurvePreferences: cfg.CurvePreferences, DynamicRecordSizingDisabled: cfg.DynamicRecordSizingDisabled, Renegotiation: cfg.Renegotiation, } } golang-google-grpc-1.6.0/credentials/credentials_util_go18.go000066400000000000000000000017521315416461300242150ustar00rootroot00000000000000// +build go1.8 /* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package credentials import ( "crypto/tls" ) // cloneTLSConfig returns a shallow clone of the exported // fields of cfg, ignoring the unexported sync.Once, which // contains a mutex and must not be copied. // // If cfg is nil, a new zero tls.Config is returned. func cloneTLSConfig(cfg *tls.Config) *tls.Config { if cfg == nil { return &tls.Config{} } return cfg.Clone() } golang-google-grpc-1.6.0/credentials/credentials_util_pre_go17.go000066400000000000000000000035471315416461300250660ustar00rootroot00000000000000// +build !go1.7 /* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package credentials import ( "crypto/tls" ) // cloneTLSConfig returns a shallow clone of the exported // fields of cfg, ignoring the unexported sync.Once, which // contains a mutex and must not be copied. // // If cfg is nil, a new zero tls.Config is returned. func cloneTLSConfig(cfg *tls.Config) *tls.Config { if cfg == nil { return &tls.Config{} } return &tls.Config{ Rand: cfg.Rand, Time: cfg.Time, Certificates: cfg.Certificates, NameToCertificate: cfg.NameToCertificate, GetCertificate: cfg.GetCertificate, RootCAs: cfg.RootCAs, NextProtos: cfg.NextProtos, ServerName: cfg.ServerName, ClientAuth: cfg.ClientAuth, ClientCAs: cfg.ClientCAs, InsecureSkipVerify: cfg.InsecureSkipVerify, CipherSuites: cfg.CipherSuites, PreferServerCipherSuites: cfg.PreferServerCipherSuites, SessionTicketsDisabled: cfg.SessionTicketsDisabled, SessionTicketKey: cfg.SessionTicketKey, ClientSessionCache: cfg.ClientSessionCache, MinVersion: cfg.MinVersion, MaxVersion: cfg.MaxVersion, CurvePreferences: cfg.CurvePreferences, } } golang-google-grpc-1.6.0/credentials/oauth/000077500000000000000000000000001315416461300206115ustar00rootroot00000000000000golang-google-grpc-1.6.0/credentials/oauth/oauth.go000066400000000000000000000122461315416461300222650ustar00rootroot00000000000000/* * * Copyright 2015 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package oauth implements gRPC credentials using OAuth. package oauth import ( "fmt" "io/ioutil" "sync" "golang.org/x/net/context" "golang.org/x/oauth2" "golang.org/x/oauth2/google" "golang.org/x/oauth2/jwt" "google.golang.org/grpc/credentials" ) // TokenSource supplies PerRPCCredentials from an oauth2.TokenSource. type TokenSource struct { oauth2.TokenSource } // GetRequestMetadata gets the request metadata as a map from a TokenSource. func (ts TokenSource) GetRequestMetadata(ctx context.Context, uri ...string) (map[string]string, error) { token, err := ts.Token() if err != nil { return nil, err } return map[string]string{ "authorization": token.Type() + " " + token.AccessToken, }, nil } // RequireTransportSecurity indicates whether the credentials requires transport security. func (ts TokenSource) RequireTransportSecurity() bool { return true } type jwtAccess struct { jsonKey []byte } // NewJWTAccessFromFile creates PerRPCCredentials from the given keyFile. func NewJWTAccessFromFile(keyFile string) (credentials.PerRPCCredentials, error) { jsonKey, err := ioutil.ReadFile(keyFile) if err != nil { return nil, fmt.Errorf("credentials: failed to read the service account key file: %v", err) } return NewJWTAccessFromKey(jsonKey) } // NewJWTAccessFromKey creates PerRPCCredentials from the given jsonKey. 
func NewJWTAccessFromKey(jsonKey []byte) (credentials.PerRPCCredentials, error) { return jwtAccess{jsonKey}, nil } func (j jwtAccess) GetRequestMetadata(ctx context.Context, uri ...string) (map[string]string, error) { ts, err := google.JWTAccessTokenSourceFromJSON(j.jsonKey, uri[0]) if err != nil { return nil, err } token, err := ts.Token() if err != nil { return nil, err } return map[string]string{ "authorization": token.TokenType + " " + token.AccessToken, }, nil } func (j jwtAccess) RequireTransportSecurity() bool { return true } // oauthAccess supplies PerRPCCredentials from a given token. type oauthAccess struct { token oauth2.Token } // NewOauthAccess constructs the PerRPCCredentials using a given token. func NewOauthAccess(token *oauth2.Token) credentials.PerRPCCredentials { return oauthAccess{token: *token} } func (oa oauthAccess) GetRequestMetadata(ctx context.Context, uri ...string) (map[string]string, error) { return map[string]string{ "authorization": oa.token.TokenType + " " + oa.token.AccessToken, }, nil } func (oa oauthAccess) RequireTransportSecurity() bool { return true } // NewComputeEngine constructs the PerRPCCredentials that fetches access tokens from // Google Compute Engine (GCE)'s metadata server. It is only valid to use this // if your program is running on a GCE instance. // TODO(dsymonds): Deprecate and remove this. func NewComputeEngine() credentials.PerRPCCredentials { return TokenSource{google.ComputeTokenSource("")} } // serviceAccount represents PerRPCCredentials via JWT signing key. type serviceAccount struct { mu sync.Mutex config *jwt.Config t *oauth2.Token } func (s *serviceAccount) GetRequestMetadata(ctx context.Context, uri ...string) (map[string]string, error) { s.mu.Lock() defer s.mu.Unlock() if !s.t.Valid() { var err error s.t, err = s.config.TokenSource(ctx).Token() if err != nil { return nil, err } } return map[string]string{ "authorization": s.t.TokenType + " " + s.t.AccessToken, }, nil } func (s *serviceAccount) RequireTransportSecurity() bool { return true } // NewServiceAccountFromKey constructs the PerRPCCredentials using the JSON key slice // from a Google Developers service account. func NewServiceAccountFromKey(jsonKey []byte, scope ...string) (credentials.PerRPCCredentials, error) { config, err := google.JWTConfigFromJSON(jsonKey, scope...) if err != nil { return nil, err } return &serviceAccount{config: config}, nil } // NewServiceAccountFromFile constructs the PerRPCCredentials using the JSON key file // of a Google Developers service account. func NewServiceAccountFromFile(keyFile string, scope ...string) (credentials.PerRPCCredentials, error) { jsonKey, err := ioutil.ReadFile(keyFile) if err != nil { return nil, fmt.Errorf("credentials: failed to read the service account key file: %v", err) } return NewServiceAccountFromKey(jsonKey, scope...) } // NewApplicationDefault returns "Application Default Credentials". For more // detail, see https://developers.google.com/accounts/docs/application-default-credentials. func NewApplicationDefault(ctx context.Context, scope ...string) (credentials.PerRPCCredentials, error) { t, err := google.DefaultTokenSource(ctx, scope...) if err != nil { return nil, err } return TokenSource{t}, nil } golang-google-grpc-1.6.0/doc.go000066400000000000000000000013631315416461300162730ustar00rootroot00000000000000/* * * Copyright 2015 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ /* Package grpc implements an RPC system called gRPC. See grpc.io for more information about gRPC. */ package grpc // import "google.golang.org/grpc" golang-google-grpc-1.6.0/examples/000077500000000000000000000000001315416461300170125ustar00rootroot00000000000000golang-google-grpc-1.6.0/examples/README.md000066400000000000000000000022461315416461300202750ustar00rootroot00000000000000gRPC in 3 minutes (Go) ====================== BACKGROUND ------------- For this sample, we've already generated the server and client stubs from [helloworld.proto](helloworld/helloworld/helloworld.proto). PREREQUISITES ------------- - This requires Go 1.5 or later - Requires that [GOPATH is set](https://golang.org/doc/code.html#GOPATH) ``` $ go help gopath $ # ensure the PATH contains $GOPATH/bin $ export PATH=$PATH:$GOPATH/bin ``` INSTALL ------- ``` $ go get -u google.golang.org/grpc/examples/helloworld/greeter_client $ go get -u google.golang.org/grpc/examples/helloworld/greeter_server ``` TRY IT! ------- - Run the server ``` $ greeter_server & ``` - Run the client ``` $ greeter_client ``` OPTIONAL - Rebuilding the generated code ---------------------------------------- 1 First [install protoc](https://github.com/google/protobuf/blob/master/README.md) - For now, this needs to be installed from source - This is will change once proto3 is officially released 2 Install the protoc Go plugin. ``` $ go get -a github.com/golang/protobuf/protoc-gen-go ``` 3 Rebuild the generated Go code. ``` $ go generate google.golang.org/grpc/examples/helloworld/... ``` golang-google-grpc-1.6.0/examples/gotutorial.md000066400000000000000000000516171315416461300215370ustar00rootroot00000000000000# gRPC Basics: Go This tutorial provides a basic Go programmer's introduction to working with gRPC. By walking through this example you'll learn how to: - Define a service in a .proto file. - Generate server and client code using the protocol buffer compiler. - Use the Go gRPC API to write a simple client and server for your service. It assumes that you have read the [Getting started](https://github.com/grpc/grpc/tree/master/examples) guide and are familiar with [protocol buffers](https://developers.google.com/protocol-buffers/docs/overview). Note that the example in this tutorial uses the proto3 version of the protocol buffers language, which is currently in alpha release:you can find out more in the [proto3 language guide](https://developers.google.com/protocol-buffers/docs/proto3) and see the [release notes](https://github.com/google/protobuf/releases) for the new version in the protocol buffers Github repository. This isn't a comprehensive guide to using gRPC in Go: more reference documentation is coming soon. ## Why use gRPC? Our example is a simple route mapping application that lets clients get information about features on their route, create a summary of their route, and exchange route information such as traffic updates with the server and other clients. 
With gRPC we can define our service once in a .proto file and implement clients and servers in any of gRPC's supported languages, which in turn can be run in environments ranging from servers inside Google to your own tablet - all the complexity of communication between different languages and environments is handled for you by gRPC. We also get all the advantages of working with protocol buffers, including efficient serialization, a simple IDL, and easy interface updating. ## Example code and setup The example code for our tutorial is in [grpc/grpc-go/examples/route_guide](https://github.com/grpc/grpc-go/tree/master/examples/route_guide). To download the example, clone the `grpc-go` repository by running the following command: ```shell $ go get google.golang.org/grpc ``` Then change your current directory to `grpc-go/examples/route_guide`: ```shell $ cd $GOPATH/src/google.golang.org/grpc/examples/route_guide ``` You also should have the relevant tools installed to generate the server and client interface code - if you don't already, follow the setup instructions in [the Go quick start guide](https://github.com/grpc/grpc-go/tree/master/examples/). ## Defining the service Our first step (as you'll know from the [quick start](https://grpc.io/docs/#quick-start)) is to define the gRPC *service* and the method *request* and *response* types using [protocol buffers](https://developers.google.com/protocol-buffers/docs/overview). You can see the complete .proto file in [examples/route_guide/routeguide/route_guide.proto](https://github.com/grpc/grpc-go/tree/master/examples/route_guide/routeguide/route_guide.proto). To define a service, you specify a named `service` in your .proto file: ```proto service RouteGuide { ... } ``` Then you define `rpc` methods inside your service definition, specifying their request and response types. gRPC lets you define four kinds of service method, all of which are used in the `RouteGuide` service: - A *simple RPC* where the client sends a request to the server using the stub and waits for a response to come back, just like a normal function call. ```proto // Obtains the feature at a given position. rpc GetFeature(Point) returns (Feature) {} ``` - A *server-side streaming RPC* where the client sends a request to the server and gets a stream to read a sequence of messages back. The client reads from the returned stream until there are no more messages. As you can see in our example, you specify a server-side streaming method by placing the `stream` keyword before the *response* type. ```proto // Obtains the Features available within the given Rectangle. Results are // streamed rather than returned at once (e.g. in a response message with a // repeated field), as the rectangle may cover a large area and contain a // huge number of features. rpc ListFeatures(Rectangle) returns (stream Feature) {} ``` - A *client-side streaming RPC* where the client writes a sequence of messages and sends them to the server, again using a provided stream. Once the client has finished writing the messages, it waits for the server to read them all and return its response. You specify a client-side streaming method by placing the `stream` keyword before the *request* type. ```proto // Accepts a stream of Points on a route being traversed, returning a // RouteSummary when traversal is completed. rpc RecordRoute(stream Point) returns (RouteSummary) {} ``` - A *bidirectional streaming RPC* where both sides send a sequence of messages using a read-write stream. 
The two streams operate independently, so clients and servers can read and write in whatever order they like: for example, the server could wait to receive all the client messages before writing its responses, or it could alternately read a message then write a message, or some other combination of reads and writes. The order of messages in each stream is preserved. You specify this type of method by placing the `stream` keyword before both the request and the response. ```proto // Accepts a stream of RouteNotes sent while a route is being traversed, // while receiving other RouteNotes (e.g. from other users). rpc RouteChat(stream RouteNote) returns (stream RouteNote) {} ``` Our .proto file also contains protocol buffer message type definitions for all the request and response types used in our service methods - for example, here's the `Point` message type: ```proto // Points are represented as latitude-longitude pairs in the E7 representation // (degrees multiplied by 10**7 and rounded to the nearest integer). // Latitudes should be in the range +/- 90 degrees and longitude should be in // the range +/- 180 degrees (inclusive). message Point { int32 latitude = 1; int32 longitude = 2; } ``` ## Generating client and server code Next we need to generate the gRPC client and server interfaces from our .proto service definition. We do this using the protocol buffer compiler `protoc` with a special gRPC Go plugin. For simplicity, we've provided a [bash script](https://github.com/grpc/grpc-go/blob/master/codegen.sh) that runs `protoc` for you with the appropriate plugin, input, and output (if you want to run this by yourself, make sure you've installed protoc and followed the gRPC-Go [installation instructions](https://github.com/grpc/grpc-go/blob/master/README.md) first): ```shell $ codegen.sh route_guide.proto ``` which actually runs: ```shell $ protoc --go_out=plugins=grpc:. route_guide.proto ``` Running this command generates the following file in your current directory: - `route_guide.pb.go` This contains: - All the protocol buffer code to populate, serialize, and retrieve our request and response message types - An interface type (or *stub*) for clients to call with the methods defined in the `RouteGuide` service. - An interface type for servers to implement, also with the methods defined in the `RouteGuide` service. ## Creating the server First let's look at how we create a `RouteGuide` server. If you're only interested in creating gRPC clients, you can skip this section and go straight to [Creating the client](#client) (though you might find it interesting anyway!). There are two parts to making our `RouteGuide` service do its job: - Implementing the service interface generated from our service definition: doing the actual "work" of our service. - Running a gRPC server to listen for requests from clients and dispatch them to the right service implementation. You can find our example `RouteGuide` server in [grpc-go/examples/route_guide/server/server.go](https://github.com/grpc/grpc-go/tree/master/examples/route_guide/server/server.go). Let's take a closer look at how it works. ### Implementing RouteGuide As you can see, our server has a `routeGuideServer` struct type that implements the generated `RouteGuideServer` interface: ```go type routeGuideServer struct { ... } ... func (s *routeGuideServer) GetFeature(ctx context.Context, point *pb.Point) (*pb.Feature, error) { ... } ... func (s *routeGuideServer) ListFeatures(rect *pb.Rectangle, stream pb.RouteGuide_ListFeaturesServer) error { ... 
} ... func (s *routeGuideServer) RecordRoute(stream pb.RouteGuide_RecordRouteServer) error { ... } ... func (s *routeGuideServer) RouteChat(stream pb.RouteGuide_RouteChatServer) error { ... } ... ``` #### Simple RPC `routeGuideServer` implements all our service methods. Let's look at the simplest type first, `GetFeature`, which just gets a `Point` from the client and returns the corresponding feature information from its database in a `Feature`. ```go func (s *routeGuideServer) GetFeature(ctx context.Context, point *pb.Point) (*pb.Feature, error) { for _, feature := range s.savedFeatures { if proto.Equal(feature.Location, point) { return feature, nil } } // No feature was found, return an unnamed feature return &pb.Feature{"", point}, nil } ``` The method is passed a context object for the RPC and the client's `Point` protocol buffer request. It returns a `Feature` protocol buffer object with the response information and an `error`. In the method we populate the `Feature` with the appropriate information, and then `return` it along with a `nil` error to tell gRPC that we've finished dealing with the RPC and that the `Feature` can be returned to the client. #### Server-side streaming RPC Now let's look at one of our streaming RPCs. `ListFeatures` is a server-side streaming RPC, so we need to send back multiple `Feature`s to our client. ```go func (s *routeGuideServer) ListFeatures(rect *pb.Rectangle, stream pb.RouteGuide_ListFeaturesServer) error { for _, feature := range s.savedFeatures { if inRange(feature.Location, rect) { if err := stream.Send(feature); err != nil { return err } } } return nil } ``` As you can see, instead of getting simple request and response objects in our method parameters, this time we get a request object (the `Rectangle` in which our client wants to find `Feature`s) and a special `RouteGuide_ListFeaturesServer` object to write our responses. In the method, we populate as many `Feature` objects as we need to return, writing them to the `RouteGuide_ListFeaturesServer` using its `Send()` method. Finally, as in our simple RPC, we return a `nil` error to tell gRPC that we've finished writing responses. Should any error happen in this call, we return a non-`nil` error; the gRPC layer will translate it into an appropriate RPC status to be sent on the wire. #### Client-side streaming RPC Now let's look at something a little more complicated: the client-side streaming method `RecordRoute`, where we get a stream of `Point`s from the client and return a single `RouteSummary` with information about their trip. As you can see, this time the method doesn't have a request parameter at all. Instead, it gets a `RouteGuide_RecordRouteServer` stream, which the server can use to both read *and* write messages - it can receive client messages using its `Recv()` method and return its single response using its `SendAndClose()` method.
```go func (s *routeGuideServer) RecordRoute(stream pb.RouteGuide_RecordRouteServer) error { var pointCount, featureCount, distance int32 var lastPoint *pb.Point startTime := time.Now() for { point, err := stream.Recv() if err == io.EOF { endTime := time.Now() return stream.SendAndClose(&pb.RouteSummary{ PointCount: pointCount, FeatureCount: featureCount, Distance: distance, ElapsedTime: int32(endTime.Sub(startTime).Seconds()), }) } if err != nil { return err } pointCount++ for _, feature := range s.savedFeatures { if proto.Equal(feature.Location, point) { featureCount++ } } if lastPoint != nil { distance += calcDistance(lastPoint, point) } lastPoint = point } } ``` In the method body we use the `RouteGuide_RecordRouteServer`'s `Recv()` method to repeatedly read in our client's requests to a request object (in this case a `Point`) until there are no more messages: the server needs to check the error returned from `Recv()` after each call. If this is `nil`, the stream is still good and it can continue reading; if it's `io.EOF` the message stream has ended and the server can return its `RouteSummary`. If it has any other value, we return the error "as is" so that it'll be translated to an RPC status by the gRPC layer. #### Bidirectional streaming RPC Finally, let's look at our bidirectional streaming RPC `RouteChat()`. ```go func (s *routeGuideServer) RouteChat(stream pb.RouteGuide_RouteChatServer) error { for { in, err := stream.Recv() if err == io.EOF { return nil } if err != nil { return err } key := serialize(in.Location) ... // look for notes to be sent to client for _, note := range s.routeNotes[key] { if err := stream.Send(note); err != nil { return err } } } } ``` This time we get a `RouteGuide_RouteChatServer` stream that, as in our client-side streaming example, can be used to read and write messages. However, this time we return values via our method's stream while the client is still writing messages to *their* message stream. The syntax for reading and writing here is very similar to our client-streaming method, except the server uses the stream's `Send()` method rather than `SendAndClose()` because it's writing multiple responses. Although each side will always get the other's messages in the order they were written, both the client and server can read and write in any order — the streams operate completely independently. ### Starting the server Once we've implemented all our methods, we also need to start up a gRPC server so that clients can actually use our service. The following snippet shows how we do this for our `RouteGuide` service: ```go flag.Parse() lis, err := net.Listen("tcp", fmt.Sprintf(":%d", *port)) if err != nil { log.Fatalf("failed to listen: %v", err) } grpcServer := grpc.NewServer() pb.RegisterRouteGuideServer(grpcServer, &routeGuideServer{}) ... // determine whether to use TLS grpcServer.Serve(lis) ``` To build and start a server, we: 1. Specify the port we want to use to listen for client requests using `lis, err := net.Listen("tcp", fmt.Sprintf(":%d", *port))`. 2. Create an instance of the gRPC server using `grpc.NewServer()`. 3. Register our service implementation with the gRPC server. 4. Call `Serve()` on the server with our port details to do a blocking wait until the process is killed or `Stop()` is called. ## Creating the client In this section, we'll look at creating a Go client for our `RouteGuide` service.
You can see our complete example client code in [grpc-go/examples/route_guide/client/client.go](https://github.com/grpc/grpc-go/tree/master/examples/route_guide/client/client.go). ### Creating a stub To call service methods, we first need to create a gRPC *channel* to communicate with the server. We create this by passing the server address and port number to `grpc.Dial()` as follows: ```go conn, err := grpc.Dial(*serverAddr) if err != nil { ... } defer conn.Close() ``` You can use `DialOptions` to set the auth credentials (e.g., TLS, GCE credentials, JWT credentials) in `grpc.Dial` if the service you request requires that - however, we don't need to do this for our `RouteGuide` service. Once the gRPC *channel* is setup, we need a client *stub* to perform RPCs. We get this using the `NewRouteGuideClient` method provided in the `pb` package we generated from our .proto. ```go client := pb.NewRouteGuideClient(conn) ``` ### Calling service methods Now let's look at how we call our service methods. Note that in gRPC-Go, RPCs operate in a blocking/synchronous mode, which means that the RPC call waits for the server to respond, and will either return a response or an error. #### Simple RPC Calling the simple RPC `GetFeature` is nearly as straightforward as calling a local method. ```go feature, err := client.GetFeature(context.Background(), &pb.Point{409146138, -746188906}) if err != nil { ... } ``` As you can see, we call the method on the stub we got earlier. In our method parameters we create and populate a request protocol buffer object (in our case `Point`). We also pass a `context.Context` object which lets us change our RPC's behaviour if necessary, such as time-out/cancel an RPC in flight. If the call doesn't return an error, then we can read the response information from the server from the first return value. ```go log.Println(feature) ``` #### Server-side streaming RPC Here's where we call the server-side streaming method `ListFeatures`, which returns a stream of geographical `Feature`s. If you've already read [Creating the server](#server) some of this may look very familiar - streaming RPCs are implemented in a similar way on both sides. ```go rect := &pb.Rectangle{ ... } // initialize a pb.Rectangle stream, err := client.ListFeatures(context.Background(), rect) if err != nil { ... } for { feature, err := stream.Recv() if err == io.EOF { break } if err != nil { log.Fatalf("%v.ListFeatures(_) = _, %v", client, err) } log.Println(feature) } ``` As in the simple RPC, we pass the method a context and a request. However, instead of getting a response object back, we get back an instance of `RouteGuide_ListFeaturesClient`. The client can use the `RouteGuide_ListFeaturesClient` stream to read the server's responses. We use the `RouteGuide_ListFeaturesClient`'s `Recv()` method to repeatedly read in the server's responses to a response protocol buffer object (in this case a `Feature`) until there are no more messages: the client needs to check the error `err` returned from `Recv()` after each call. If `nil`, the stream is still good and it can continue reading; if it's `io.EOF` then the message stream has ended; otherwise there must be an RPC error, which is passed over through `err`. #### Client-side streaming RPC The client-side streaming method `RecordRoute` is similar to the server-side method, except that we only pass the method a context and get a `RouteGuide_RecordRouteClient` stream back, which we can use to both write *and* read messages. 
```go // Create a random number of random points r := rand.New(rand.NewSource(time.Now().UnixNano())) pointCount := int(r.Int31n(100)) + 2 // Traverse at least two points var points []*pb.Point for i := 0; i < pointCount; i++ { points = append(points, randomPoint(r)) } log.Printf("Traversing %d points.", len(points)) stream, err := client.RecordRoute(context.Background()) if err != nil { log.Fatalf("%v.RecordRoute(_) = _, %v", client, err) } for _, point := range points { if err := stream.Send(point); err != nil { log.Fatalf("%v.Send(%v) = %v", stream, point, err) } } reply, err := stream.CloseAndRecv() if err != nil { log.Fatalf("%v.CloseAndRecv() got error %v, want %v", stream, err, nil) } log.Printf("Route summary: %v", reply) ``` The `RouteGuide_RecordRouteClient` has a `Send()` method that we can use to send requests to the server. Once we've finished writing our client's requests to the stream using `Send()`, we need to call `CloseAndRecv()` on the stream to let gRPC know that we've finished writing and are expecting to receive a response. We get our RPC status from the `err` returned from `CloseAndRecv()`. If the status is `nil`, then the first return value from `CloseAndRecv()` will be a valid server response. #### Bidirectional streaming RPC Finally, let's look at our bidirectional streaming RPC `RouteChat()`. As in the case of `RecordRoute`, we only pass the method a context object and get back a stream that we can use to both write and read messages. However, this time we return values via our method's stream while the server is still writing messages to *their* message stream. ```go stream, err := client.RouteChat(context.Background()) waitc := make(chan struct{}) go func() { for { in, err := stream.Recv() if err == io.EOF { // read done. close(waitc) return } if err != nil { log.Fatalf("Failed to receive a note : %v", err) } log.Printf("Got message %s at point(%d, %d)", in.Message, in.Location.Latitude, in.Location.Longitude) } }() for _, note := range notes { if err := stream.Send(note); err != nil { log.Fatalf("Failed to send a note: %v", err) } } stream.CloseSend() <-waitc ``` The syntax for reading and writing here is very similar to our client-side streaming method, except we use the stream's `CloseSend()` method once we've finished our call. Although each side will always get the other's messages in the order they were written, both the client and server can read and write in any order — the streams operate completely independently. ## Try it out! To compile and run the server, assuming you are in the folder `$GOPATH/src/google.golang.org/grpc/examples/route_guide`, simply: ```sh $ go run server/server.go ``` Likewise, to run the client: ```sh $ go run client/client.go ``` golang-google-grpc-1.6.0/examples/helloworld/000077500000000000000000000000001315416461300211655ustar00rootroot00000000000000golang-google-grpc-1.6.0/examples/helloworld/greeter_client/000077500000000000000000000000001315416461300241605ustar00rootroot00000000000000golang-google-grpc-1.6.0/examples/helloworld/greeter_client/main.go000066400000000000000000000024741315416461300254420ustar00rootroot00000000000000/* * * Copyright 2015 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package main import ( "log" "os" "golang.org/x/net/context" "google.golang.org/grpc" pb "google.golang.org/grpc/examples/helloworld/helloworld" ) const ( address = "localhost:50051" defaultName = "world" ) func main() { // Set up a connection to the server. conn, err := grpc.Dial(address, grpc.WithInsecure()) if err != nil { log.Fatalf("did not connect: %v", err) } defer conn.Close() c := pb.NewGreeterClient(conn) // Contact the server and print out its response. name := defaultName if len(os.Args) > 1 { name = os.Args[1] } r, err := c.SayHello(context.Background(), &pb.HelloRequest{Name: name}) if err != nil { log.Fatalf("could not greet: %v", err) } log.Printf("Greeting: %s", r.Message) } golang-google-grpc-1.6.0/examples/helloworld/greeter_server/000077500000000000000000000000001315416461300242105ustar00rootroot00000000000000golang-google-grpc-1.6.0/examples/helloworld/greeter_server/main.go000066400000000000000000000030071315416461300254630ustar00rootroot00000000000000/* * * Copyright 2015 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ //go:generate protoc -I ../helloworld --go_out=plugins=grpc:../helloworld ../helloworld/helloworld.proto package main import ( "log" "net" "golang.org/x/net/context" "google.golang.org/grpc" pb "google.golang.org/grpc/examples/helloworld/helloworld" "google.golang.org/grpc/reflection" ) const ( port = ":50051" ) // server is used to implement helloworld.GreeterServer. type server struct{} // SayHello implements helloworld.GreeterServer func (s *server) SayHello(ctx context.Context, in *pb.HelloRequest) (*pb.HelloReply, error) { return &pb.HelloReply{Message: "Hello " + in.Name}, nil } func main() { lis, err := net.Listen("tcp", port) if err != nil { log.Fatalf("failed to listen: %v", err) } s := grpc.NewServer() pb.RegisterGreeterServer(s, &server{}) // Register reflection service on gRPC server. reflection.Register(s) if err := s.Serve(lis); err != nil { log.Fatalf("failed to serve: %v", err) } } golang-google-grpc-1.6.0/examples/helloworld/helloworld/000077500000000000000000000000001315416461300233405ustar00rootroot00000000000000golang-google-grpc-1.6.0/examples/helloworld/helloworld/helloworld.pb.go000066400000000000000000000124541315416461300264500ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: helloworld.proto /* Package helloworld is a generated protocol buffer package. 
It is generated from these files: helloworld.proto It has these top-level messages: HelloRequest HelloReply */ package helloworld import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import ( context "golang.org/x/net/context" grpc "google.golang.org/grpc" ) // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package // The request message containing the user's name. type HelloRequest struct { Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"` } func (m *HelloRequest) Reset() { *m = HelloRequest{} } func (m *HelloRequest) String() string { return proto.CompactTextString(m) } func (*HelloRequest) ProtoMessage() {} func (*HelloRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} } func (m *HelloRequest) GetName() string { if m != nil { return m.Name } return "" } // The response message containing the greetings type HelloReply struct { Message string `protobuf:"bytes,1,opt,name=message" json:"message,omitempty"` } func (m *HelloReply) Reset() { *m = HelloReply{} } func (m *HelloReply) String() string { return proto.CompactTextString(m) } func (*HelloReply) ProtoMessage() {} func (*HelloReply) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} } func (m *HelloReply) GetMessage() string { if m != nil { return m.Message } return "" } func init() { proto.RegisterType((*HelloRequest)(nil), "helloworld.HelloRequest") proto.RegisterType((*HelloReply)(nil), "helloworld.HelloReply") } // Reference imports to suppress errors if they are not otherwise used. var _ context.Context var _ grpc.ClientConn // This is a compile-time assertion to ensure that this generated file // is compatible with the grpc package it is being compiled against. const _ = grpc.SupportPackageIsVersion4 // Client API for Greeter service type GreeterClient interface { // Sends a greeting SayHello(ctx context.Context, in *HelloRequest, opts ...grpc.CallOption) (*HelloReply, error) } type greeterClient struct { cc *grpc.ClientConn } func NewGreeterClient(cc *grpc.ClientConn) GreeterClient { return &greeterClient{cc} } func (c *greeterClient) SayHello(ctx context.Context, in *HelloRequest, opts ...grpc.CallOption) (*HelloReply, error) { out := new(HelloReply) err := grpc.Invoke(ctx, "/helloworld.Greeter/SayHello", in, out, c.cc, opts...) 
if err != nil { return nil, err } return out, nil } // Server API for Greeter service type GreeterServer interface { // Sends a greeting SayHello(context.Context, *HelloRequest) (*HelloReply, error) } func RegisterGreeterServer(s *grpc.Server, srv GreeterServer) { s.RegisterService(&_Greeter_serviceDesc, srv) } func _Greeter_SayHello_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(HelloRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(GreeterServer).SayHello(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/helloworld.Greeter/SayHello", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(GreeterServer).SayHello(ctx, req.(*HelloRequest)) } return interceptor(ctx, in, info, handler) } var _Greeter_serviceDesc = grpc.ServiceDesc{ ServiceName: "helloworld.Greeter", HandlerType: (*GreeterServer)(nil), Methods: []grpc.MethodDesc{ { MethodName: "SayHello", Handler: _Greeter_SayHello_Handler, }, }, Streams: []grpc.StreamDesc{}, Metadata: "helloworld.proto", } func init() { proto.RegisterFile("helloworld.proto", fileDescriptor0) } var fileDescriptor0 = []byte{ // 175 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x12, 0xc8, 0x48, 0xcd, 0xc9, 0xc9, 0x2f, 0xcf, 0x2f, 0xca, 0x49, 0xd1, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0xe2, 0x42, 0x88, 0x28, 0x29, 0x71, 0xf1, 0x78, 0x80, 0x78, 0x41, 0xa9, 0x85, 0xa5, 0xa9, 0xc5, 0x25, 0x42, 0x42, 0x5c, 0x2c, 0x79, 0x89, 0xb9, 0xa9, 0x12, 0x8c, 0x0a, 0x8c, 0x1a, 0x9c, 0x41, 0x60, 0xb6, 0x92, 0x1a, 0x17, 0x17, 0x54, 0x4d, 0x41, 0x4e, 0xa5, 0x90, 0x04, 0x17, 0x7b, 0x6e, 0x6a, 0x71, 0x71, 0x62, 0x3a, 0x4c, 0x11, 0x8c, 0x6b, 0xe4, 0xc9, 0xc5, 0xee, 0x5e, 0x94, 0x9a, 0x5a, 0x92, 0x5a, 0x24, 0x64, 0xc7, 0xc5, 0x11, 0x9c, 0x58, 0x09, 0xd6, 0x25, 0x24, 0xa1, 0x87, 0xe4, 0x02, 0x64, 0xcb, 0xa4, 0xc4, 0xb0, 0xc8, 0x14, 0xe4, 0x54, 0x2a, 0x31, 0x38, 0x19, 0x70, 0x49, 0x67, 0xe6, 0xeb, 0xa5, 0x17, 0x15, 0x24, 0xeb, 0xa5, 0x56, 0x24, 0xe6, 0x16, 0xe4, 0xa4, 0x16, 0x23, 0xa9, 0x75, 0xe2, 0x07, 0x2b, 0x0e, 0x07, 0xb1, 0x03, 0x40, 0x5e, 0x0a, 0x60, 0x4c, 0x62, 0x03, 0xfb, 0xcd, 0x18, 0x10, 0x00, 0x00, 0xff, 0xff, 0x0f, 0xb7, 0xcd, 0xf2, 0xef, 0x00, 0x00, 0x00, } golang-google-grpc-1.6.0/examples/helloworld/helloworld/helloworld.proto000066400000000000000000000021051315416461300265760ustar00rootroot00000000000000// Copyright 2015 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. syntax = "proto3"; option java_multiple_files = true; option java_package = "io.grpc.examples.helloworld"; option java_outer_classname = "HelloWorldProto"; package helloworld; // The greeting service definition. service Greeter { // Sends a greeting rpc SayHello (HelloRequest) returns (HelloReply) {} } // The request message containing the user's name. 
message HelloRequest { string name = 1; } // The response message containing the greetings message HelloReply { string message = 1; } golang-google-grpc-1.6.0/examples/helloworld/mock_helloworld/000077500000000000000000000000001315416461300243515ustar00rootroot00000000000000golang-google-grpc-1.6.0/examples/helloworld/mock_helloworld/hw_mock.go000066400000000000000000000027271315416461300263370ustar00rootroot00000000000000// Automatically generated by MockGen. DO NOT EDIT! // Source: google.golang.org/grpc/examples/helloworld/helloworld (interfaces: GreeterClient) package mock_helloworld import ( gomock "github.com/golang/mock/gomock" context "golang.org/x/net/context" grpc "google.golang.org/grpc" helloworld "google.golang.org/grpc/examples/helloworld/helloworld" ) // Mock of GreeterClient interface type MockGreeterClient struct { ctrl *gomock.Controller recorder *_MockGreeterClientRecorder } // Recorder for MockGreeterClient (not exported) type _MockGreeterClientRecorder struct { mock *MockGreeterClient } func NewMockGreeterClient(ctrl *gomock.Controller) *MockGreeterClient { mock := &MockGreeterClient{ctrl: ctrl} mock.recorder = &_MockGreeterClientRecorder{mock} return mock } func (_m *MockGreeterClient) EXPECT() *_MockGreeterClientRecorder { return _m.recorder } func (_m *MockGreeterClient) SayHello(_param0 context.Context, _param1 *helloworld.HelloRequest, _param2 ...grpc.CallOption) (*helloworld.HelloReply, error) { _s := []interface{}{_param0, _param1} for _, _x := range _param2 { _s = append(_s, _x) } ret := _m.ctrl.Call(_m, "SayHello", _s...) ret0, _ := ret[0].(*helloworld.HelloReply) ret1, _ := ret[1].(error) return ret0, ret1 } func (_mr *_MockGreeterClientRecorder) SayHello(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call { _s := append([]interface{}{arg0, arg1}, arg2...) return _mr.mock.ctrl.RecordCall(_mr.mock, "SayHello", _s...) } golang-google-grpc-1.6.0/examples/helloworld/mock_helloworld/hw_mock_test.go000066400000000000000000000035051315416461300273710ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package mock_helloworld_test import ( "fmt" "testing" "github.com/golang/mock/gomock" "github.com/golang/protobuf/proto" "golang.org/x/net/context" helloworld "google.golang.org/grpc/examples/helloworld/helloworld" hwmock "google.golang.org/grpc/examples/helloworld/mock_helloworld" ) // rpcMsg implements the gomock.Matcher interface type rpcMsg struct { msg proto.Message } func (r *rpcMsg) Matches(msg interface{}) bool { m, ok := msg.(proto.Message) if !ok { return false } return proto.Equal(m, r.msg) } func (r *rpcMsg) String() string { return fmt.Sprintf("is %s", r.msg) } func TestSayHello(t *testing.T) { ctrl := gomock.NewController(t) defer ctrl.Finish() mockGreeterClient := hwmock.NewMockGreeterClient(ctrl) req := &helloworld.HelloRequest{Name: "unit_test"} mockGreeterClient.EXPECT().SayHello( gomock.Any(), &rpcMsg{msg: req}, ).Return(&helloworld.HelloReply{Message: "Mocked Interface"}, nil) testSayHello(t, mockGreeterClient) } func testSayHello(t *testing.T, client helloworld.GreeterClient) { r, err := client.SayHello(context.Background(), &helloworld.HelloRequest{Name: "unit_test"}) if err != nil || r.Message != "Mocked Interface" { t.Errorf("mocking failed") } t.Log("Reply : ", r.Message) } golang-google-grpc-1.6.0/examples/route_guide/000077500000000000000000000000001315416461300213255ustar00rootroot00000000000000golang-google-grpc-1.6.0/examples/route_guide/README.md000066400000000000000000000015501315416461300226050ustar00rootroot00000000000000# Description The route guide server and client demonstrate how to use grpc go libraries to perform unary, client streaming, server streaming and full duplex RPCs. Please refer to [gRPC Basics: Go] (https://grpc.io/docs/tutorials/basic/go.html) for more information. See the definition of the route guide service in routeguide/route_guide.proto. # Run the sample code To compile and run the server, assuming you are in the root of the route_guide folder, i.e., .../examples/route_guide/, simply: ```sh $ go run server/server.go ``` Likewise, to run the client: ```sh $ go run client/client.go ``` # Optional command line flags The server and client both take optional command line flags. For example, the client and server run without TLS by default. To enable TLS: ```sh $ go run server/server.go -tls=true ``` and ```sh $ go run client/client.go -tls=true ``` golang-google-grpc-1.6.0/examples/route_guide/client/000077500000000000000000000000001315416461300226035ustar00rootroot00000000000000golang-google-grpc-1.6.0/examples/route_guide/client/client.go000066400000000000000000000132551315416461300244160ustar00rootroot00000000000000/* * * Copyright 2015 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package main implements a simple gRPC client that demonstrates how to use gRPC-Go libraries // to perform unary, client streaming, server streaming and full duplex RPCs. // // It interacts with the route guide service whose definition can be found in routeguide/route_guide.proto. 
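// GetFeature demonstrates the unary pattern, ListFeatures the server-streaming
// pattern, RecordRoute the client-streaming pattern, and RouteChat the
// bidirectional-streaming pattern.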
package main import ( "flag" "io" "log" "math/rand" "time" "golang.org/x/net/context" "google.golang.org/grpc" "google.golang.org/grpc/credentials" pb "google.golang.org/grpc/examples/route_guide/routeguide" "google.golang.org/grpc/testdata" ) var ( tls = flag.Bool("tls", false, "Connection uses TLS if true, else plain TCP") caFile = flag.String("ca_file", "", "The file containning the CA root cert file") serverAddr = flag.String("server_addr", "127.0.0.1:10000", "The server address in the format of host:port") serverHostOverride = flag.String("server_host_override", "x.test.youtube.com", "The server name use to verify the hostname returned by TLS handshake") ) // printFeature gets the feature for the given point. func printFeature(client pb.RouteGuideClient, point *pb.Point) { log.Printf("Getting feature for point (%d, %d)", point.Latitude, point.Longitude) feature, err := client.GetFeature(context.Background(), point) if err != nil { log.Fatalf("%v.GetFeatures(_) = _, %v: ", client, err) } log.Println(feature) } // printFeatures lists all the features within the given bounding Rectangle. func printFeatures(client pb.RouteGuideClient, rect *pb.Rectangle) { log.Printf("Looking for features within %v", rect) stream, err := client.ListFeatures(context.Background(), rect) if err != nil { log.Fatalf("%v.ListFeatures(_) = _, %v", client, err) } for { feature, err := stream.Recv() if err == io.EOF { break } if err != nil { log.Fatalf("%v.ListFeatures(_) = _, %v", client, err) } log.Println(feature) } } // runRecordRoute sends a sequence of points to server and expects to get a RouteSummary from server. func runRecordRoute(client pb.RouteGuideClient) { // Create a random number of random points r := rand.New(rand.NewSource(time.Now().UnixNano())) pointCount := int(r.Int31n(100)) + 2 // Traverse at least two points var points []*pb.Point for i := 0; i < pointCount; i++ { points = append(points, randomPoint(r)) } log.Printf("Traversing %d points.", len(points)) stream, err := client.RecordRoute(context.Background()) if err != nil { log.Fatalf("%v.RecordRoute(_) = _, %v", client, err) } for _, point := range points { if err := stream.Send(point); err != nil { log.Fatalf("%v.Send(%v) = %v", stream, point, err) } } reply, err := stream.CloseAndRecv() if err != nil { log.Fatalf("%v.CloseAndRecv() got error %v, want %v", stream, err, nil) } log.Printf("Route summary: %v", reply) } // runRouteChat receives a sequence of route notes, while sending notes for various locations. func runRouteChat(client pb.RouteGuideClient) { notes := []*pb.RouteNote{ {&pb.Point{Latitude: 0, Longitude: 1}, "First message"}, {&pb.Point{Latitude: 0, Longitude: 2}, "Second message"}, {&pb.Point{Latitude: 0, Longitude: 3}, "Third message"}, {&pb.Point{Latitude: 0, Longitude: 1}, "Fourth message"}, {&pb.Point{Latitude: 0, Longitude: 2}, "Fifth message"}, {&pb.Point{Latitude: 0, Longitude: 3}, "Sixth message"}, } stream, err := client.RouteChat(context.Background()) if err != nil { log.Fatalf("%v.RouteChat(_) = _, %v", client, err) } waitc := make(chan struct{}) go func() { for { in, err := stream.Recv() if err == io.EOF { // read done. 
close(waitc) return } if err != nil { log.Fatalf("Failed to receive a note : %v", err) } log.Printf("Got message %s at point(%d, %d)", in.Message, in.Location.Latitude, in.Location.Longitude) } }() for _, note := range notes { if err := stream.Send(note); err != nil { log.Fatalf("Failed to send a note: %v", err) } } stream.CloseSend() <-waitc } func randomPoint(r *rand.Rand) *pb.Point { lat := (r.Int31n(180) - 90) * 1e7 long := (r.Int31n(360) - 180) * 1e7 return &pb.Point{Latitude: lat, Longitude: long} } func main() { flag.Parse() var opts []grpc.DialOption if *tls { if *caFile == "" { *caFile = testdata.Path("ca.pem") } creds, err := credentials.NewClientTLSFromFile(*caFile, *serverHostOverride) if err != nil { log.Fatalf("Failed to create TLS credentials %v", err) } opts = append(opts, grpc.WithTransportCredentials(creds)) } else { opts = append(opts, grpc.WithInsecure()) } conn, err := grpc.Dial(*serverAddr, opts...) if err != nil { log.Fatalf("fail to dial: %v", err) } defer conn.Close() client := pb.NewRouteGuideClient(conn) // Looking for a valid feature printFeature(client, &pb.Point{Latitude: 409146138, Longitude: -746188906}) // Feature missing. printFeature(client, &pb.Point{Latitude: 0, Longitude: 0}) // Looking for features between 40, -75 and 42, -73. printFeatures(client, &pb.Rectangle{ Lo: &pb.Point{Latitude: 400000000, Longitude: -750000000}, Hi: &pb.Point{Latitude: 420000000, Longitude: -730000000}, }) // RecordRoute runRecordRoute(client) // RouteChat runRouteChat(client) } golang-google-grpc-1.6.0/examples/route_guide/mock_routeguide/000077500000000000000000000000001315416461300245125ustar00rootroot00000000000000golang-google-grpc-1.6.0/examples/route_guide/mock_routeguide/rg_mock.go000066400000000000000000000147711315416461300264740ustar00rootroot00000000000000// Automatically generated by MockGen. DO NOT EDIT! // Source: google.golang.org/grpc/examples/route_guide/routeguide (interfaces: RouteGuideClient,RouteGuide_RouteChatClient) package mock_routeguide import ( gomock "github.com/golang/mock/gomock" context "golang.org/x/net/context" grpc "google.golang.org/grpc" routeguide "google.golang.org/grpc/examples/route_guide/routeguide" metadata "google.golang.org/grpc/metadata" ) // Mock of RouteGuideClient interface type MockRouteGuideClient struct { ctrl *gomock.Controller recorder *_MockRouteGuideClientRecorder } // Recorder for MockRouteGuideClient (not exported) type _MockRouteGuideClientRecorder struct { mock *MockRouteGuideClient } func NewMockRouteGuideClient(ctrl *gomock.Controller) *MockRouteGuideClient { mock := &MockRouteGuideClient{ctrl: ctrl} mock.recorder = &_MockRouteGuideClientRecorder{mock} return mock } func (_m *MockRouteGuideClient) EXPECT() *_MockRouteGuideClientRecorder { return _m.recorder } func (_m *MockRouteGuideClient) GetFeature(_param0 context.Context, _param1 *routeguide.Point, _param2 ...grpc.CallOption) (*routeguide.Feature, error) { _s := []interface{}{_param0, _param1} for _, _x := range _param2 { _s = append(_s, _x) } ret := _m.ctrl.Call(_m, "GetFeature", _s...) ret0, _ := ret[0].(*routeguide.Feature) ret1, _ := ret[1].(error) return ret0, ret1 } func (_mr *_MockRouteGuideClientRecorder) GetFeature(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call { _s := append([]interface{}{arg0, arg1}, arg2...) return _mr.mock.ctrl.RecordCall(_mr.mock, "GetFeature", _s...) 
} func (_m *MockRouteGuideClient) ListFeatures(_param0 context.Context, _param1 *routeguide.Rectangle, _param2 ...grpc.CallOption) (routeguide.RouteGuide_ListFeaturesClient, error) { _s := []interface{}{_param0, _param1} for _, _x := range _param2 { _s = append(_s, _x) } ret := _m.ctrl.Call(_m, "ListFeatures", _s...) ret0, _ := ret[0].(routeguide.RouteGuide_ListFeaturesClient) ret1, _ := ret[1].(error) return ret0, ret1 } func (_mr *_MockRouteGuideClientRecorder) ListFeatures(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call { _s := append([]interface{}{arg0, arg1}, arg2...) return _mr.mock.ctrl.RecordCall(_mr.mock, "ListFeatures", _s...) } func (_m *MockRouteGuideClient) RecordRoute(_param0 context.Context, _param1 ...grpc.CallOption) (routeguide.RouteGuide_RecordRouteClient, error) { _s := []interface{}{_param0} for _, _x := range _param1 { _s = append(_s, _x) } ret := _m.ctrl.Call(_m, "RecordRoute", _s...) ret0, _ := ret[0].(routeguide.RouteGuide_RecordRouteClient) ret1, _ := ret[1].(error) return ret0, ret1 } func (_mr *_MockRouteGuideClientRecorder) RecordRoute(arg0 interface{}, arg1 ...interface{}) *gomock.Call { _s := append([]interface{}{arg0}, arg1...) return _mr.mock.ctrl.RecordCall(_mr.mock, "RecordRoute", _s...) } func (_m *MockRouteGuideClient) RouteChat(_param0 context.Context, _param1 ...grpc.CallOption) (routeguide.RouteGuide_RouteChatClient, error) { _s := []interface{}{_param0} for _, _x := range _param1 { _s = append(_s, _x) } ret := _m.ctrl.Call(_m, "RouteChat", _s...) ret0, _ := ret[0].(routeguide.RouteGuide_RouteChatClient) ret1, _ := ret[1].(error) return ret0, ret1 } func (_mr *_MockRouteGuideClientRecorder) RouteChat(arg0 interface{}, arg1 ...interface{}) *gomock.Call { _s := append([]interface{}{arg0}, arg1...) return _mr.mock.ctrl.RecordCall(_mr.mock, "RouteChat", _s...) 
} // Mock of RouteGuide_RouteChatClient interface type MockRouteGuide_RouteChatClient struct { ctrl *gomock.Controller recorder *_MockRouteGuide_RouteChatClientRecorder } // Recorder for MockRouteGuide_RouteChatClient (not exported) type _MockRouteGuide_RouteChatClientRecorder struct { mock *MockRouteGuide_RouteChatClient } func NewMockRouteGuide_RouteChatClient(ctrl *gomock.Controller) *MockRouteGuide_RouteChatClient { mock := &MockRouteGuide_RouteChatClient{ctrl: ctrl} mock.recorder = &_MockRouteGuide_RouteChatClientRecorder{mock} return mock } func (_m *MockRouteGuide_RouteChatClient) EXPECT() *_MockRouteGuide_RouteChatClientRecorder { return _m.recorder } func (_m *MockRouteGuide_RouteChatClient) CloseSend() error { ret := _m.ctrl.Call(_m, "CloseSend") ret0, _ := ret[0].(error) return ret0 } func (_mr *_MockRouteGuide_RouteChatClientRecorder) CloseSend() *gomock.Call { return _mr.mock.ctrl.RecordCall(_mr.mock, "CloseSend") } func (_m *MockRouteGuide_RouteChatClient) Context() context.Context { ret := _m.ctrl.Call(_m, "Context") ret0, _ := ret[0].(context.Context) return ret0 } func (_mr *_MockRouteGuide_RouteChatClientRecorder) Context() *gomock.Call { return _mr.mock.ctrl.RecordCall(_mr.mock, "Context") } func (_m *MockRouteGuide_RouteChatClient) Header() (metadata.MD, error) { ret := _m.ctrl.Call(_m, "Header") ret0, _ := ret[0].(metadata.MD) ret1, _ := ret[1].(error) return ret0, ret1 } func (_mr *_MockRouteGuide_RouteChatClientRecorder) Header() *gomock.Call { return _mr.mock.ctrl.RecordCall(_mr.mock, "Header") } func (_m *MockRouteGuide_RouteChatClient) Recv() (*routeguide.RouteNote, error) { ret := _m.ctrl.Call(_m, "Recv") ret0, _ := ret[0].(*routeguide.RouteNote) ret1, _ := ret[1].(error) return ret0, ret1 } func (_mr *_MockRouteGuide_RouteChatClientRecorder) Recv() *gomock.Call { return _mr.mock.ctrl.RecordCall(_mr.mock, "Recv") } func (_m *MockRouteGuide_RouteChatClient) RecvMsg(_param0 interface{}) error { ret := _m.ctrl.Call(_m, "RecvMsg", _param0) ret0, _ := ret[0].(error) return ret0 } func (_mr *_MockRouteGuide_RouteChatClientRecorder) RecvMsg(arg0 interface{}) *gomock.Call { return _mr.mock.ctrl.RecordCall(_mr.mock, "RecvMsg", arg0) } func (_m *MockRouteGuide_RouteChatClient) Send(_param0 *routeguide.RouteNote) error { ret := _m.ctrl.Call(_m, "Send", _param0) ret0, _ := ret[0].(error) return ret0 } func (_mr *_MockRouteGuide_RouteChatClientRecorder) Send(arg0 interface{}) *gomock.Call { return _mr.mock.ctrl.RecordCall(_mr.mock, "Send", arg0) } func (_m *MockRouteGuide_RouteChatClient) SendMsg(_param0 interface{}) error { ret := _m.ctrl.Call(_m, "SendMsg", _param0) ret0, _ := ret[0].(error) return ret0 } func (_mr *_MockRouteGuide_RouteChatClientRecorder) SendMsg(arg0 interface{}) *gomock.Call { return _mr.mock.ctrl.RecordCall(_mr.mock, "SendMsg", arg0) } func (_m *MockRouteGuide_RouteChatClient) Trailer() metadata.MD { ret := _m.ctrl.Call(_m, "Trailer") ret0, _ := ret[0].(metadata.MD) return ret0 } func (_mr *_MockRouteGuide_RouteChatClientRecorder) Trailer() *gomock.Call { return _mr.mock.ctrl.RecordCall(_mr.mock, "Trailer") } golang-google-grpc-1.6.0/examples/route_guide/mock_routeguide/rg_mock_test.go000066400000000000000000000041161315416461300275230ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package mock_routeguide_test import ( "fmt" "testing" "github.com/golang/mock/gomock" "github.com/golang/protobuf/proto" "golang.org/x/net/context" rgmock "google.golang.org/grpc/examples/route_guide/mock_routeguide" rgpb "google.golang.org/grpc/examples/route_guide/routeguide" ) var ( msg = &rgpb.RouteNote{ Location: &rgpb.Point{Latitude: 17, Longitude: 29}, Message: "Taxi-cab", } ) func TestRouteChat(t *testing.T) { ctrl := gomock.NewController(t) defer ctrl.Finish() // Create mock for the stream returned by RouteChat stream := rgmock.NewMockRouteGuide_RouteChatClient(ctrl) // set expectation on sending. stream.EXPECT().Send( gomock.Any(), ).Return(nil) // Set expectation on receiving. stream.EXPECT().Recv().Return(msg, nil) stream.EXPECT().CloseSend().Return(nil) // Create mock for the client interface. rgclient := rgmock.NewMockRouteGuideClient(ctrl) // Set expectation on RouteChat rgclient.EXPECT().RouteChat( gomock.Any(), ).Return(stream, nil) if err := testRouteChat(rgclient); err != nil { t.Fatalf("Test failed: %v", err) } } func testRouteChat(client rgpb.RouteGuideClient) error { stream, err := client.RouteChat(context.Background()) if err != nil { return err } if err := stream.Send(msg); err != nil { return err } if err := stream.CloseSend(); err != nil { return err } got, err := stream.Recv() if err != nil { return err } if !proto.Equal(got, msg) { return fmt.Errorf("stream.Recv() = %v, want %v", got, msg) } return nil } golang-google-grpc-1.6.0/examples/route_guide/routeguide/000077500000000000000000000000001315416461300235015ustar00rootroot00000000000000golang-google-grpc-1.6.0/examples/route_guide/routeguide/route_guide.pb.go000066400000000000000000000422661315416461300267550ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: route_guide.proto /* Package routeguide is a generated protocol buffer package. It is generated from these files: route_guide.proto It has these top-level messages: Point Rectangle Feature RouteNote RouteSummary */ package routeguide import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import ( context "golang.org/x/net/context" grpc "google.golang.org/grpc" ) // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package // Points are represented as latitude-longitude pairs in the E7 representation // (degrees multiplied by 10**7 and rounded to the nearest integer). // Latitudes should be in the range +/- 90 degrees and longitude should be in // the range +/- 180 degrees (inclusive). 
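// For example, a latitude of 40.7128 degrees is stored as 407128000 (40.7128 * 1e7).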
type Point struct { Latitude int32 `protobuf:"varint,1,opt,name=latitude" json:"latitude,omitempty"` Longitude int32 `protobuf:"varint,2,opt,name=longitude" json:"longitude,omitempty"` } func (m *Point) Reset() { *m = Point{} } func (m *Point) String() string { return proto.CompactTextString(m) } func (*Point) ProtoMessage() {} func (*Point) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} } func (m *Point) GetLatitude() int32 { if m != nil { return m.Latitude } return 0 } func (m *Point) GetLongitude() int32 { if m != nil { return m.Longitude } return 0 } // A latitude-longitude rectangle, represented as two diagonally opposite // points "lo" and "hi". type Rectangle struct { // One corner of the rectangle. Lo *Point `protobuf:"bytes,1,opt,name=lo" json:"lo,omitempty"` // The other corner of the rectangle. Hi *Point `protobuf:"bytes,2,opt,name=hi" json:"hi,omitempty"` } func (m *Rectangle) Reset() { *m = Rectangle{} } func (m *Rectangle) String() string { return proto.CompactTextString(m) } func (*Rectangle) ProtoMessage() {} func (*Rectangle) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} } func (m *Rectangle) GetLo() *Point { if m != nil { return m.Lo } return nil } func (m *Rectangle) GetHi() *Point { if m != nil { return m.Hi } return nil } // A feature names something at a given point. // // If a feature could not be named, the name is empty. type Feature struct { // The name of the feature. Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"` // The point where the feature is detected. Location *Point `protobuf:"bytes,2,opt,name=location" json:"location,omitempty"` } func (m *Feature) Reset() { *m = Feature{} } func (m *Feature) String() string { return proto.CompactTextString(m) } func (*Feature) ProtoMessage() {} func (*Feature) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} } func (m *Feature) GetName() string { if m != nil { return m.Name } return "" } func (m *Feature) GetLocation() *Point { if m != nil { return m.Location } return nil } // A RouteNote is a message sent while at a given point. type RouteNote struct { // The location from which the message is sent. Location *Point `protobuf:"bytes,1,opt,name=location" json:"location,omitempty"` // The message to be sent. Message string `protobuf:"bytes,2,opt,name=message" json:"message,omitempty"` } func (m *RouteNote) Reset() { *m = RouteNote{} } func (m *RouteNote) String() string { return proto.CompactTextString(m) } func (*RouteNote) ProtoMessage() {} func (*RouteNote) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3} } func (m *RouteNote) GetLocation() *Point { if m != nil { return m.Location } return nil } func (m *RouteNote) GetMessage() string { if m != nil { return m.Message } return "" } // A RouteSummary is received in response to a RecordRoute rpc. // // It contains the number of individual points received, the number of // detected features, and the total distance covered as the cumulative sum of // the distance between each point. type RouteSummary struct { // The number of points received. PointCount int32 `protobuf:"varint,1,opt,name=point_count,json=pointCount" json:"point_count,omitempty"` // The number of known features passed while traversing the route. FeatureCount int32 `protobuf:"varint,2,opt,name=feature_count,json=featureCount" json:"feature_count,omitempty"` // The distance covered in metres. Distance int32 `protobuf:"varint,3,opt,name=distance" json:"distance,omitempty"` // The duration of the traversal in seconds. 
ElapsedTime int32 `protobuf:"varint,4,opt,name=elapsed_time,json=elapsedTime" json:"elapsed_time,omitempty"` } func (m *RouteSummary) Reset() { *m = RouteSummary{} } func (m *RouteSummary) String() string { return proto.CompactTextString(m) } func (*RouteSummary) ProtoMessage() {} func (*RouteSummary) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{4} } func (m *RouteSummary) GetPointCount() int32 { if m != nil { return m.PointCount } return 0 } func (m *RouteSummary) GetFeatureCount() int32 { if m != nil { return m.FeatureCount } return 0 } func (m *RouteSummary) GetDistance() int32 { if m != nil { return m.Distance } return 0 } func (m *RouteSummary) GetElapsedTime() int32 { if m != nil { return m.ElapsedTime } return 0 } func init() { proto.RegisterType((*Point)(nil), "routeguide.Point") proto.RegisterType((*Rectangle)(nil), "routeguide.Rectangle") proto.RegisterType((*Feature)(nil), "routeguide.Feature") proto.RegisterType((*RouteNote)(nil), "routeguide.RouteNote") proto.RegisterType((*RouteSummary)(nil), "routeguide.RouteSummary") } // Reference imports to suppress errors if they are not otherwise used. var _ context.Context var _ grpc.ClientConn // This is a compile-time assertion to ensure that this generated file // is compatible with the grpc package it is being compiled against. const _ = grpc.SupportPackageIsVersion4 // Client API for RouteGuide service type RouteGuideClient interface { // A simple RPC. // // Obtains the feature at a given position. // // A feature with an empty name is returned if there's no feature at the given // position. GetFeature(ctx context.Context, in *Point, opts ...grpc.CallOption) (*Feature, error) // A server-to-client streaming RPC. // // Obtains the Features available within the given Rectangle. Results are // streamed rather than returned at once (e.g. in a response message with a // repeated field), as the rectangle may cover a large area and contain a // huge number of features. ListFeatures(ctx context.Context, in *Rectangle, opts ...grpc.CallOption) (RouteGuide_ListFeaturesClient, error) // A client-to-server streaming RPC. // // Accepts a stream of Points on a route being traversed, returning a // RouteSummary when traversal is completed. RecordRoute(ctx context.Context, opts ...grpc.CallOption) (RouteGuide_RecordRouteClient, error) // A Bidirectional streaming RPC. // // Accepts a stream of RouteNotes sent while a route is being traversed, // while receiving other RouteNotes (e.g. from other users). RouteChat(ctx context.Context, opts ...grpc.CallOption) (RouteGuide_RouteChatClient, error) } type routeGuideClient struct { cc *grpc.ClientConn } func NewRouteGuideClient(cc *grpc.ClientConn) RouteGuideClient { return &routeGuideClient{cc} } func (c *routeGuideClient) GetFeature(ctx context.Context, in *Point, opts ...grpc.CallOption) (*Feature, error) { out := new(Feature) err := grpc.Invoke(ctx, "/routeguide.RouteGuide/GetFeature", in, out, c.cc, opts...) if err != nil { return nil, err } return out, nil } func (c *routeGuideClient) ListFeatures(ctx context.Context, in *Rectangle, opts ...grpc.CallOption) (RouteGuide_ListFeaturesClient, error) { stream, err := grpc.NewClientStream(ctx, &_RouteGuide_serviceDesc.Streams[0], c.cc, "/routeguide.RouteGuide/ListFeatures", opts...) 
if err != nil { return nil, err } x := &routeGuideListFeaturesClient{stream} if err := x.ClientStream.SendMsg(in); err != nil { return nil, err } if err := x.ClientStream.CloseSend(); err != nil { return nil, err } return x, nil } type RouteGuide_ListFeaturesClient interface { Recv() (*Feature, error) grpc.ClientStream } type routeGuideListFeaturesClient struct { grpc.ClientStream } func (x *routeGuideListFeaturesClient) Recv() (*Feature, error) { m := new(Feature) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func (c *routeGuideClient) RecordRoute(ctx context.Context, opts ...grpc.CallOption) (RouteGuide_RecordRouteClient, error) { stream, err := grpc.NewClientStream(ctx, &_RouteGuide_serviceDesc.Streams[1], c.cc, "/routeguide.RouteGuide/RecordRoute", opts...) if err != nil { return nil, err } x := &routeGuideRecordRouteClient{stream} return x, nil } type RouteGuide_RecordRouteClient interface { Send(*Point) error CloseAndRecv() (*RouteSummary, error) grpc.ClientStream } type routeGuideRecordRouteClient struct { grpc.ClientStream } func (x *routeGuideRecordRouteClient) Send(m *Point) error { return x.ClientStream.SendMsg(m) } func (x *routeGuideRecordRouteClient) CloseAndRecv() (*RouteSummary, error) { if err := x.ClientStream.CloseSend(); err != nil { return nil, err } m := new(RouteSummary) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func (c *routeGuideClient) RouteChat(ctx context.Context, opts ...grpc.CallOption) (RouteGuide_RouteChatClient, error) { stream, err := grpc.NewClientStream(ctx, &_RouteGuide_serviceDesc.Streams[2], c.cc, "/routeguide.RouteGuide/RouteChat", opts...) if err != nil { return nil, err } x := &routeGuideRouteChatClient{stream} return x, nil } type RouteGuide_RouteChatClient interface { Send(*RouteNote) error Recv() (*RouteNote, error) grpc.ClientStream } type routeGuideRouteChatClient struct { grpc.ClientStream } func (x *routeGuideRouteChatClient) Send(m *RouteNote) error { return x.ClientStream.SendMsg(m) } func (x *routeGuideRouteChatClient) Recv() (*RouteNote, error) { m := new(RouteNote) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } // Server API for RouteGuide service type RouteGuideServer interface { // A simple RPC. // // Obtains the feature at a given position. // // A feature with an empty name is returned if there's no feature at the given // position. GetFeature(context.Context, *Point) (*Feature, error) // A server-to-client streaming RPC. // // Obtains the Features available within the given Rectangle. Results are // streamed rather than returned at once (e.g. in a response message with a // repeated field), as the rectangle may cover a large area and contain a // huge number of features. ListFeatures(*Rectangle, RouteGuide_ListFeaturesServer) error // A client-to-server streaming RPC. // // Accepts a stream of Points on a route being traversed, returning a // RouteSummary when traversal is completed. RecordRoute(RouteGuide_RecordRouteServer) error // A Bidirectional streaming RPC. // // Accepts a stream of RouteNotes sent while a route is being traversed, // while receiving other RouteNotes (e.g. from other users). 
RouteChat(RouteGuide_RouteChatServer) error } func RegisterRouteGuideServer(s *grpc.Server, srv RouteGuideServer) { s.RegisterService(&_RouteGuide_serviceDesc, srv) } func _RouteGuide_GetFeature_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(Point) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(RouteGuideServer).GetFeature(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/routeguide.RouteGuide/GetFeature", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(RouteGuideServer).GetFeature(ctx, req.(*Point)) } return interceptor(ctx, in, info, handler) } func _RouteGuide_ListFeatures_Handler(srv interface{}, stream grpc.ServerStream) error { m := new(Rectangle) if err := stream.RecvMsg(m); err != nil { return err } return srv.(RouteGuideServer).ListFeatures(m, &routeGuideListFeaturesServer{stream}) } type RouteGuide_ListFeaturesServer interface { Send(*Feature) error grpc.ServerStream } type routeGuideListFeaturesServer struct { grpc.ServerStream } func (x *routeGuideListFeaturesServer) Send(m *Feature) error { return x.ServerStream.SendMsg(m) } func _RouteGuide_RecordRoute_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(RouteGuideServer).RecordRoute(&routeGuideRecordRouteServer{stream}) } type RouteGuide_RecordRouteServer interface { SendAndClose(*RouteSummary) error Recv() (*Point, error) grpc.ServerStream } type routeGuideRecordRouteServer struct { grpc.ServerStream } func (x *routeGuideRecordRouteServer) SendAndClose(m *RouteSummary) error { return x.ServerStream.SendMsg(m) } func (x *routeGuideRecordRouteServer) Recv() (*Point, error) { m := new(Point) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func _RouteGuide_RouteChat_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(RouteGuideServer).RouteChat(&routeGuideRouteChatServer{stream}) } type RouteGuide_RouteChatServer interface { Send(*RouteNote) error Recv() (*RouteNote, error) grpc.ServerStream } type routeGuideRouteChatServer struct { grpc.ServerStream } func (x *routeGuideRouteChatServer) Send(m *RouteNote) error { return x.ServerStream.SendMsg(m) } func (x *routeGuideRouteChatServer) Recv() (*RouteNote, error) { m := new(RouteNote) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } var _RouteGuide_serviceDesc = grpc.ServiceDesc{ ServiceName: "routeguide.RouteGuide", HandlerType: (*RouteGuideServer)(nil), Methods: []grpc.MethodDesc{ { MethodName: "GetFeature", Handler: _RouteGuide_GetFeature_Handler, }, }, Streams: []grpc.StreamDesc{ { StreamName: "ListFeatures", Handler: _RouteGuide_ListFeatures_Handler, ServerStreams: true, }, { StreamName: "RecordRoute", Handler: _RouteGuide_RecordRoute_Handler, ClientStreams: true, }, { StreamName: "RouteChat", Handler: _RouteGuide_RouteChat_Handler, ServerStreams: true, ClientStreams: true, }, }, Metadata: "route_guide.proto", } func init() { proto.RegisterFile("route_guide.proto", fileDescriptor0) } var fileDescriptor0 = []byte{ // 404 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x84, 0x53, 0xdd, 0xca, 0xd3, 0x40, 0x10, 0xfd, 0x36, 0x7e, 0x9f, 0x6d, 0x26, 0x11, 0xe9, 0x88, 0x10, 0xa2, 0xa0, 0x8d, 0x37, 0xbd, 0x31, 0x94, 0x0a, 0x5e, 0x56, 0x6c, 0xc1, 0xde, 0x14, 0xa9, 0xb1, 0xf7, 0x65, 0x4d, 0xc6, 0x74, 0x61, 0x93, 0x0d, 0xc9, 0x06, 
0xf4, 0x01, 0x7c, 0x02, 0x5f, 0x58, 0xb2, 0x49, 0xda, 0x54, 0x5b, 0xbc, 0xdb, 0x39, 0x73, 0xce, 0xfc, 0x9c, 0x61, 0x61, 0x52, 0xaa, 0x5a, 0xd3, 0x21, 0xad, 0x45, 0x42, 0x61, 0x51, 0x2a, 0xad, 0x10, 0x0c, 0x64, 0x90, 0xe0, 0x23, 0x3c, 0xec, 0x94, 0xc8, 0x35, 0xfa, 0x30, 0x96, 0x5c, 0x0b, 0x5d, 0x27, 0xe4, 0xb1, 0xd7, 0x6c, 0xf6, 0x10, 0x9d, 0x62, 0x7c, 0x09, 0xb6, 0x54, 0x79, 0xda, 0x26, 0x2d, 0x93, 0x3c, 0x03, 0xc1, 0x17, 0xb0, 0x23, 0x8a, 0x35, 0xcf, 0x53, 0x49, 0x38, 0x05, 0x4b, 0x2a, 0x53, 0xc0, 0x59, 0x4c, 0xc2, 0x73, 0xa3, 0xd0, 0x74, 0x89, 0x2c, 0xa9, 0x1a, 0xca, 0x51, 0x98, 0x32, 0xd7, 0x29, 0x47, 0x11, 0x6c, 0x61, 0xf4, 0x89, 0xb8, 0xae, 0x4b, 0x42, 0x84, 0xfb, 0x9c, 0x67, 0xed, 0x4c, 0x76, 0x64, 0xde, 0xf8, 0x16, 0xc6, 0x52, 0xc5, 0x5c, 0x0b, 0x95, 0xdf, 0xae, 0x73, 0xa2, 0x04, 0x7b, 0xb0, 0xa3, 0x26, 0xfb, 0x59, 0xe9, 0x4b, 0x2d, 0xfb, 0xaf, 0x16, 0x3d, 0x18, 0x65, 0x54, 0x55, 0x3c, 0x6d, 0x17, 0xb7, 0xa3, 0x3e, 0x0c, 0x7e, 0x33, 0x70, 0x4d, 0xd9, 0xaf, 0x75, 0x96, 0xf1, 0xf2, 0x27, 0xbe, 0x02, 0xa7, 0x68, 0xd4, 0x87, 0x58, 0xd5, 0xb9, 0xee, 0x4c, 0x04, 0x03, 0xad, 0x1b, 0x04, 0xdf, 0xc0, 0x93, 0xef, 0xed, 0x56, 0x1d, 0xa5, 0xb5, 0xd2, 0xed, 0xc0, 0x96, 0xe4, 0xc3, 0x38, 0x11, 0x95, 0xe6, 0x79, 0x4c, 0xde, 0xa3, 0xf6, 0x0e, 0x7d, 0x8c, 0x53, 0x70, 0x49, 0xf2, 0xa2, 0xa2, 0xe4, 0xa0, 0x45, 0x46, 0xde, 0xbd, 0xc9, 0x3b, 0x1d, 0xb6, 0x17, 0x19, 0x2d, 0x7e, 0x59, 0x00, 0x66, 0xaa, 0x4d, 0xb3, 0x0e, 0xbe, 0x07, 0xd8, 0x90, 0xee, 0xbd, 0xfc, 0x77, 0x53, 0xff, 0xd9, 0x10, 0xea, 0x78, 0xc1, 0x1d, 0x2e, 0xc1, 0xdd, 0x8a, 0xaa, 0x17, 0x56, 0xf8, 0x7c, 0x48, 0x3b, 0x5d, 0xfb, 0x86, 0x7a, 0xce, 0x70, 0x09, 0x4e, 0x44, 0xb1, 0x2a, 0x13, 0x33, 0xcb, 0xb5, 0xc6, 0xde, 0x45, 0xc5, 0x81, 0x8f, 0xc1, 0xdd, 0x8c, 0xe1, 0x87, 0xee, 0x64, 0xeb, 0x23, 0xd7, 0x7f, 0x35, 0xef, 0x2f, 0xe9, 0x5f, 0x87, 0x1b, 0xf9, 0x9c, 0xad, 0xe6, 0xf0, 0x42, 0xa8, 0x30, 0x2d, 0x8b, 0x38, 0xa4, 0x1f, 0x3c, 0x2b, 0x24, 0x55, 0x03, 0xfa, 0xea, 0xe9, 0xd9, 0xa3, 0x5d, 0xf3, 0x27, 0x76, 0xec, 0xdb, 0x63, 0xf3, 0x39, 0xde, 0xfd, 0x09, 0x00, 0x00, 0xff, 0xff, 0xc8, 0xe4, 0xef, 0xe6, 0x31, 0x03, 0x00, 0x00, } golang-google-grpc-1.6.0/examples/route_guide/routeguide/route_guide.proto000066400000000000000000000065771315416461300271200ustar00rootroot00000000000000// Copyright 2015 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. syntax = "proto3"; option java_multiple_files = true; option java_package = "io.grpc.examples.routeguide"; option java_outer_classname = "RouteGuideProto"; package routeguide; // Interface exported by the server. service RouteGuide { // A simple RPC. // // Obtains the feature at a given position. // // A feature with an empty name is returned if there's no feature at the given // position. rpc GetFeature(Point) returns (Feature) {} // A server-to-client streaming RPC. // // Obtains the Features available within the given Rectangle. Results are // streamed rather than returned at once (e.g. 
in a response message with a // repeated field), as the rectangle may cover a large area and contain a // huge number of features. rpc ListFeatures(Rectangle) returns (stream Feature) {} // A client-to-server streaming RPC. // // Accepts a stream of Points on a route being traversed, returning a // RouteSummary when traversal is completed. rpc RecordRoute(stream Point) returns (RouteSummary) {} // A Bidirectional streaming RPC. // // Accepts a stream of RouteNotes sent while a route is being traversed, // while receiving other RouteNotes (e.g. from other users). rpc RouteChat(stream RouteNote) returns (stream RouteNote) {} } // Points are represented as latitude-longitude pairs in the E7 representation // (degrees multiplied by 10**7 and rounded to the nearest integer). // Latitudes should be in the range +/- 90 degrees and longitude should be in // the range +/- 180 degrees (inclusive). message Point { int32 latitude = 1; int32 longitude = 2; } // A latitude-longitude rectangle, represented as two diagonally opposite // points "lo" and "hi". message Rectangle { // One corner of the rectangle. Point lo = 1; // The other corner of the rectangle. Point hi = 2; } // A feature names something at a given point. // // If a feature could not be named, the name is empty. message Feature { // The name of the feature. string name = 1; // The point where the feature is detected. Point location = 2; } // A RouteNote is a message sent while at a given point. message RouteNote { // The location from which the message is sent. Point location = 1; // The message to be sent. string message = 2; } // A RouteSummary is received in response to a RecordRoute rpc. // // It contains the number of individual points received, the number of // detected features, and the total distance covered as the cumulative sum of // the distance between each point. message RouteSummary { // The number of points received. int32 point_count = 1; // The number of known features passed while traversing the route. int32 feature_count = 2; // The distance covered in metres. int32 distance = 3; // The duration of the traversal in seconds. int32 elapsed_time = 4; } golang-google-grpc-1.6.0/examples/route_guide/server/000077500000000000000000000000001315416461300226335ustar00rootroot00000000000000golang-google-grpc-1.6.0/examples/route_guide/server/server.go000066400000000000000000000153251315416461300244760ustar00rootroot00000000000000/* * * Copyright 2015 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ //go:generate protoc -I ../routeguide --go_out=plugins=grpc:../routeguide ../routeguide/route_guide.proto // Package main implements a simple gRPC server that demonstrates how to use gRPC-Go libraries // to perform unary, client streaming, server streaming and full duplex RPCs. // // It implements the route guide service whose definition can be found in routeguide/route_guide.proto. 
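// The feature database is loaded at startup from the JSON file named by the
// -json_db_file flag (testdata/route_guide_db.json by default).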
package main import ( "encoding/json" "flag" "fmt" "io" "io/ioutil" "log" "math" "net" "time" "golang.org/x/net/context" "google.golang.org/grpc" "google.golang.org/grpc/credentials" "google.golang.org/grpc/testdata" "github.com/golang/protobuf/proto" pb "google.golang.org/grpc/examples/route_guide/routeguide" ) var ( tls = flag.Bool("tls", false, "Connection uses TLS if true, else plain TCP") certFile = flag.String("cert_file", "", "The TLS cert file") keyFile = flag.String("key_file", "", "The TLS key file") jsonDBFile = flag.String("json_db_file", "testdata/route_guide_db.json", "A json file containing a list of features") port = flag.Int("port", 10000, "The server port") ) type routeGuideServer struct { savedFeatures []*pb.Feature routeNotes map[string][]*pb.RouteNote } // GetFeature returns the feature at the given point. func (s *routeGuideServer) GetFeature(ctx context.Context, point *pb.Point) (*pb.Feature, error) { for _, feature := range s.savedFeatures { if proto.Equal(feature.Location, point) { return feature, nil } } // No feature was found, return an unnamed feature return &pb.Feature{Location: point}, nil } // ListFeatures lists all features contained within the given bounding Rectangle. func (s *routeGuideServer) ListFeatures(rect *pb.Rectangle, stream pb.RouteGuide_ListFeaturesServer) error { for _, feature := range s.savedFeatures { if inRange(feature.Location, rect) { if err := stream.Send(feature); err != nil { return err } } } return nil } // RecordRoute records a route composited of a sequence of points. // // It gets a stream of points, and responds with statistics about the "trip": // number of points, number of known features visited, total distance traveled, and // total time spent. func (s *routeGuideServer) RecordRoute(stream pb.RouteGuide_RecordRouteServer) error { var pointCount, featureCount, distance int32 var lastPoint *pb.Point startTime := time.Now() for { point, err := stream.Recv() if err == io.EOF { endTime := time.Now() return stream.SendAndClose(&pb.RouteSummary{ PointCount: pointCount, FeatureCount: featureCount, Distance: distance, ElapsedTime: int32(endTime.Sub(startTime).Seconds()), }) } if err != nil { return err } pointCount++ for _, feature := range s.savedFeatures { if proto.Equal(feature.Location, point) { featureCount++ } } if lastPoint != nil { distance += calcDistance(lastPoint, point) } lastPoint = point } } // RouteChat receives a stream of message/location pairs, and responds with a stream of all // previous messages at each of those locations. func (s *routeGuideServer) RouteChat(stream pb.RouteGuide_RouteChatServer) error { for { in, err := stream.Recv() if err == io.EOF { return nil } if err != nil { return err } key := serialize(in.Location) if _, present := s.routeNotes[key]; !present { s.routeNotes[key] = []*pb.RouteNote{in} } else { s.routeNotes[key] = append(s.routeNotes[key], in) } for _, note := range s.routeNotes[key] { if err := stream.Send(note); err != nil { return err } } } } // loadFeatures loads features from a JSON file. func (s *routeGuideServer) loadFeatures(filePath string) { file, err := ioutil.ReadFile(filePath) if err != nil { log.Fatalf("Failed to load default features: %v", err) } if err := json.Unmarshal(file, &s.savedFeatures); err != nil { log.Fatalf("Failed to load default features: %v", err) } } func toRadians(num float64) float64 { return num * math.Pi / float64(180) } // calcDistance calculates the distance between two points using the "haversine" formula. 
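// Haversine: a = sin²(Δφ/2) + cos(φ1)·cos(φ2)·sin²(Δλ/2); c = 2·atan2(√a, √(1−a)); distance = R·c,
// where φ is latitude, λ is longitude (both in radians) and R is the earth's radius (6371000 metres here).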
// This code was taken from http://www.movable-type.co.uk/scripts/latlong.html. func calcDistance(p1 *pb.Point, p2 *pb.Point) int32 { const CordFactor float64 = 1e7 const R float64 = float64(6371000) // metres lat1 := float64(p1.Latitude) / CordFactor lat2 := float64(p2.Latitude) / CordFactor lng1 := float64(p1.Longitude) / CordFactor lng2 := float64(p2.Longitude) / CordFactor φ1 := toRadians(lat1) φ2 := toRadians(lat2) Δφ := toRadians(lat2 - lat1) Δλ := toRadians(lng2 - lng1) a := math.Sin(Δφ/2)*math.Sin(Δφ/2) + math.Cos(φ1)*math.Cos(φ2)* math.Sin(Δλ/2)*math.Sin(Δλ/2) c := 2 * math.Atan2(math.Sqrt(a), math.Sqrt(1-a)) distance := R * c return int32(distance) } func inRange(point *pb.Point, rect *pb.Rectangle) bool { left := math.Min(float64(rect.Lo.Longitude), float64(rect.Hi.Longitude)) right := math.Max(float64(rect.Lo.Longitude), float64(rect.Hi.Longitude)) top := math.Max(float64(rect.Lo.Latitude), float64(rect.Hi.Latitude)) bottom := math.Min(float64(rect.Lo.Latitude), float64(rect.Hi.Latitude)) if float64(point.Longitude) >= left && float64(point.Longitude) <= right && float64(point.Latitude) >= bottom && float64(point.Latitude) <= top { return true } return false } func serialize(point *pb.Point) string { return fmt.Sprintf("%d %d", point.Latitude, point.Longitude) } func newServer() *routeGuideServer { s := new(routeGuideServer) s.loadFeatures(*jsonDBFile) s.routeNotes = make(map[string][]*pb.RouteNote) return s } func main() { flag.Parse() lis, err := net.Listen("tcp", fmt.Sprintf(":%d", *port)) if err != nil { log.Fatalf("failed to listen: %v", err) } var opts []grpc.ServerOption if *tls { if *certFile == "" { *certFile = testdata.Path("server1.pem") } if *keyFile == "" { *keyFile = testdata.Path("server1.key") } creds, err := credentials.NewServerTLSFromFile(*certFile, *keyFile) if err != nil { log.Fatalf("Failed to generate credentials %v", err) } opts = []grpc.ServerOption{grpc.Creds(creds)} } grpcServer := grpc.NewServer(opts...) pb.RegisterRouteGuideServer(grpcServer, newServer()) grpcServer.Serve(lis) } golang-google-grpc-1.6.0/examples/route_guide/testdata/000077500000000000000000000000001315416461300231365ustar00rootroot00000000000000golang-google-grpc-1.6.0/examples/route_guide/testdata/route_guide_db.json000066400000000000000000000327101315416461300270140ustar00rootroot00000000000000[{ "location": { "latitude": 407838351, "longitude": -746143763 }, "name": "Patriots Path, Mendham, NJ 07945, USA" }, { "location": { "latitude": 408122808, "longitude": -743999179 }, "name": "101 New Jersey 10, Whippany, NJ 07981, USA" }, { "location": { "latitude": 413628156, "longitude": -749015468 }, "name": "U.S. 
6, Shohola, PA 18458, USA" }, { "location": { "latitude": 419999544, "longitude": -740371136 }, "name": "5 Conners Road, Kingston, NY 12401, USA" }, { "location": { "latitude": 414008389, "longitude": -743951297 }, "name": "Mid Hudson Psychiatric Center, New Hampton, NY 10958, USA" }, { "location": { "latitude": 419611318, "longitude": -746524769 }, "name": "287 Flugertown Road, Livingston Manor, NY 12758, USA" }, { "location": { "latitude": 406109563, "longitude": -742186778 }, "name": "4001 Tremley Point Road, Linden, NJ 07036, USA" }, { "location": { "latitude": 416802456, "longitude": -742370183 }, "name": "352 South Mountain Road, Wallkill, NY 12589, USA" }, { "location": { "latitude": 412950425, "longitude": -741077389 }, "name": "Bailey Turn Road, Harriman, NY 10926, USA" }, { "location": { "latitude": 412144655, "longitude": -743949739 }, "name": "193-199 Wawayanda Road, Hewitt, NJ 07421, USA" }, { "location": { "latitude": 415736605, "longitude": -742847522 }, "name": "406-496 Ward Avenue, Pine Bush, NY 12566, USA" }, { "location": { "latitude": 413843930, "longitude": -740501726 }, "name": "162 Merrill Road, Highland Mills, NY 10930, USA" }, { "location": { "latitude": 410873075, "longitude": -744459023 }, "name": "Clinton Road, West Milford, NJ 07480, USA" }, { "location": { "latitude": 412346009, "longitude": -744026814 }, "name": "16 Old Brook Lane, Warwick, NY 10990, USA" }, { "location": { "latitude": 402948455, "longitude": -747903913 }, "name": "3 Drake Lane, Pennington, NJ 08534, USA" }, { "location": { "latitude": 406337092, "longitude": -740122226 }, "name": "6324 8th Avenue, Brooklyn, NY 11220, USA" }, { "location": { "latitude": 406421967, "longitude": -747727624 }, "name": "1 Merck Access Road, Whitehouse Station, NJ 08889, USA" }, { "location": { "latitude": 416318082, "longitude": -749677716 }, "name": "78-98 Schalck Road, Narrowsburg, NY 12764, USA" }, { "location": { "latitude": 415301720, "longitude": -748416257 }, "name": "282 Lakeview Drive Road, Highland Lake, NY 12743, USA" }, { "location": { "latitude": 402647019, "longitude": -747071791 }, "name": "330 Evelyn Avenue, Hamilton Township, NJ 08619, USA" }, { "location": { "latitude": 412567807, "longitude": -741058078 }, "name": "New York State Reference Route 987E, Southfields, NY 10975, USA" }, { "location": { "latitude": 416855156, "longitude": -744420597 }, "name": "103-271 Tempaloni Road, Ellenville, NY 12428, USA" }, { "location": { "latitude": 404663628, "longitude": -744820157 }, "name": "1300 Airport Road, North Brunswick Township, NJ 08902, USA" }, { "location": { "latitude": 407113723, "longitude": -749746483 }, "name": "" }, { "location": { "latitude": 402133926, "longitude": -743613249 }, "name": "" }, { "location": { "latitude": 400273442, "longitude": -741220915 }, "name": "" }, { "location": { "latitude": 411236786, "longitude": -744070769 }, "name": "" }, { "location": { "latitude": 411633782, "longitude": -746784970 }, "name": "211-225 Plains Road, Augusta, NJ 07822, USA" }, { "location": { "latitude": 415830701, "longitude": -742952812 }, "name": "" }, { "location": { "latitude": 413447164, "longitude": -748712898 }, "name": "165 Pedersen Ridge Road, Milford, PA 18337, USA" }, { "location": { "latitude": 405047245, "longitude": -749800722 }, "name": "100-122 Locktown Road, Frenchtown, NJ 08825, USA" }, { "location": { "latitude": 418858923, "longitude": -746156790 }, "name": "" }, { "location": { "latitude": 417951888, "longitude": -748484944 }, "name": "650-652 Willi Hill Road, Swan Lake, 
NY 12783, USA" }, { "location": { "latitude": 407033786, "longitude": -743977337 }, "name": "26 East 3rd Street, New Providence, NJ 07974, USA" }, { "location": { "latitude": 417548014, "longitude": -740075041 }, "name": "" }, { "location": { "latitude": 410395868, "longitude": -744972325 }, "name": "" }, { "location": { "latitude": 404615353, "longitude": -745129803 }, "name": "" }, { "location": { "latitude": 406589790, "longitude": -743560121 }, "name": "611 Lawrence Avenue, Westfield, NJ 07090, USA" }, { "location": { "latitude": 414653148, "longitude": -740477477 }, "name": "18 Lannis Avenue, New Windsor, NY 12553, USA" }, { "location": { "latitude": 405957808, "longitude": -743255336 }, "name": "82-104 Amherst Avenue, Colonia, NJ 07067, USA" }, { "location": { "latitude": 411733589, "longitude": -741648093 }, "name": "170 Seven Lakes Drive, Sloatsburg, NY 10974, USA" }, { "location": { "latitude": 412676291, "longitude": -742606606 }, "name": "1270 Lakes Road, Monroe, NY 10950, USA" }, { "location": { "latitude": 409224445, "longitude": -748286738 }, "name": "509-535 Alphano Road, Great Meadows, NJ 07838, USA" }, { "location": { "latitude": 406523420, "longitude": -742135517 }, "name": "652 Garden Street, Elizabeth, NJ 07202, USA" }, { "location": { "latitude": 401827388, "longitude": -740294537 }, "name": "349 Sea Spray Court, Neptune City, NJ 07753, USA" }, { "location": { "latitude": 410564152, "longitude": -743685054 }, "name": "13-17 Stanley Street, West Milford, NJ 07480, USA" }, { "location": { "latitude": 408472324, "longitude": -740726046 }, "name": "47 Industrial Avenue, Teterboro, NJ 07608, USA" }, { "location": { "latitude": 412452168, "longitude": -740214052 }, "name": "5 White Oak Lane, Stony Point, NY 10980, USA" }, { "location": { "latitude": 409146138, "longitude": -746188906 }, "name": "Berkshire Valley Management Area Trail, Jefferson, NJ, USA" }, { "location": { "latitude": 404701380, "longitude": -744781745 }, "name": "1007 Jersey Avenue, New Brunswick, NJ 08901, USA" }, { "location": { "latitude": 409642566, "longitude": -746017679 }, "name": "6 East Emerald Isle Drive, Lake Hopatcong, NJ 07849, USA" }, { "location": { "latitude": 408031728, "longitude": -748645385 }, "name": "1358-1474 New Jersey 57, Port Murray, NJ 07865, USA" }, { "location": { "latitude": 413700272, "longitude": -742135189 }, "name": "367 Prospect Road, Chester, NY 10918, USA" }, { "location": { "latitude": 404310607, "longitude": -740282632 }, "name": "10 Simon Lake Drive, Atlantic Highlands, NJ 07716, USA" }, { "location": { "latitude": 409319800, "longitude": -746201391 }, "name": "11 Ward Street, Mount Arlington, NJ 07856, USA" }, { "location": { "latitude": 406685311, "longitude": -742108603 }, "name": "300-398 Jefferson Avenue, Elizabeth, NJ 07201, USA" }, { "location": { "latitude": 419018117, "longitude": -749142781 }, "name": "43 Dreher Road, Roscoe, NY 12776, USA" }, { "location": { "latitude": 412856162, "longitude": -745148837 }, "name": "Swan Street, Pine Island, NY 10969, USA" }, { "location": { "latitude": 416560744, "longitude": -746721964 }, "name": "66 Pleasantview Avenue, Monticello, NY 12701, USA" }, { "location": { "latitude": 405314270, "longitude": -749836354 }, "name": "" }, { "location": { "latitude": 414219548, "longitude": -743327440 }, "name": "" }, { "location": { "latitude": 415534177, "longitude": -742900616 }, "name": "565 Winding Hills Road, Montgomery, NY 12549, USA" }, { "location": { "latitude": 406898530, "longitude": -749127080 }, "name": "231 Rocky Run 
Road, Glen Gardner, NJ 08826, USA" }, { "location": { "latitude": 407586880, "longitude": -741670168 }, "name": "100 Mount Pleasant Avenue, Newark, NJ 07104, USA" }, { "location": { "latitude": 400106455, "longitude": -742870190 }, "name": "517-521 Huntington Drive, Manchester Township, NJ 08759, USA" }, { "location": { "latitude": 400066188, "longitude": -746793294 }, "name": "" }, { "location": { "latitude": 418803880, "longitude": -744102673 }, "name": "40 Mountain Road, Napanoch, NY 12458, USA" }, { "location": { "latitude": 414204288, "longitude": -747895140 }, "name": "" }, { "location": { "latitude": 414777405, "longitude": -740615601 }, "name": "" }, { "location": { "latitude": 415464475, "longitude": -747175374 }, "name": "48 North Road, Forestburgh, NY 12777, USA" }, { "location": { "latitude": 404062378, "longitude": -746376177 }, "name": "" }, { "location": { "latitude": 405688272, "longitude": -749285130 }, "name": "" }, { "location": { "latitude": 400342070, "longitude": -748788996 }, "name": "" }, { "location": { "latitude": 401809022, "longitude": -744157964 }, "name": "" }, { "location": { "latitude": 404226644, "longitude": -740517141 }, "name": "9 Thompson Avenue, Leonardo, NJ 07737, USA" }, { "location": { "latitude": 410322033, "longitude": -747871659 }, "name": "" }, { "location": { "latitude": 407100674, "longitude": -747742727 }, "name": "" }, { "location": { "latitude": 418811433, "longitude": -741718005 }, "name": "213 Bush Road, Stone Ridge, NY 12484, USA" }, { "location": { "latitude": 415034302, "longitude": -743850945 }, "name": "" }, { "location": { "latitude": 411349992, "longitude": -743694161 }, "name": "" }, { "location": { "latitude": 404839914, "longitude": -744759616 }, "name": "1-17 Bergen Court, New Brunswick, NJ 08901, USA" }, { "location": { "latitude": 414638017, "longitude": -745957854 }, "name": "35 Oakland Valley Road, Cuddebackville, NY 12729, USA" }, { "location": { "latitude": 412127800, "longitude": -740173578 }, "name": "" }, { "location": { "latitude": 401263460, "longitude": -747964303 }, "name": "" }, { "location": { "latitude": 412843391, "longitude": -749086026 }, "name": "" }, { "location": { "latitude": 418512773, "longitude": -743067823 }, "name": "" }, { "location": { "latitude": 404318328, "longitude": -740835638 }, "name": "42-102 Main Street, Belford, NJ 07718, USA" }, { "location": { "latitude": 419020746, "longitude": -741172328 }, "name": "" }, { "location": { "latitude": 404080723, "longitude": -746119569 }, "name": "" }, { "location": { "latitude": 401012643, "longitude": -744035134 }, "name": "" }, { "location": { "latitude": 404306372, "longitude": -741079661 }, "name": "" }, { "location": { "latitude": 403966326, "longitude": -748519297 }, "name": "" }, { "location": { "latitude": 405002031, "longitude": -748407866 }, "name": "" }, { "location": { "latitude": 409532885, "longitude": -742200683 }, "name": "" }, { "location": { "latitude": 416851321, "longitude": -742674555 }, "name": "" }, { "location": { "latitude": 406411633, "longitude": -741722051 }, "name": "3387 Richmond Terrace, Staten Island, NY 10303, USA" }, { "location": { "latitude": 413069058, "longitude": -744597778 }, "name": "261 Van Sickle Road, Goshen, NY 10924, USA" }, { "location": { "latitude": 418465462, "longitude": -746859398 }, "name": "" }, { "location": { "latitude": 411733222, "longitude": -744228360 }, "name": "" }, { "location": { "latitude": 410248224, "longitude": -747127767 }, "name": "3 Hasta Way, Newton, NJ 07860, USA" }] 
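The feature entries above unmarshal directly into the generated routeguide types with encoding/json, which is how the server's loadFeatures reads them. A minimal standalone sketch (assuming the grpc-go examples are on the GOPATH and the program is run from the examples/route_guide directory):

```go
package main

import (
	"encoding/json"
	"io/ioutil"
	"log"

	pb "google.golang.org/grpc/examples/route_guide/routeguide"
)

func main() {
	// Read the example feature database.
	data, err := ioutil.ReadFile("testdata/route_guide_db.json")
	if err != nil {
		log.Fatalf("failed to read feature db: %v", err)
	}
	// The lowercase keys ("name", "location", "latitude", "longitude") match
	// the json tags on the generated Feature and Point structs.
	var features []*pb.Feature
	if err := json.Unmarshal(data, &features); err != nil {
		log.Fatalf("failed to parse feature db: %v", err)
	}
	log.Printf("loaded %d features; first: %q", len(features), features[0].Name)
}
```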
golang-google-grpc-1.6.0/go16.go000066400000000000000000000052521315416461300163030ustar00rootroot00000000000000// +build go1.6,!go1.7 /* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "fmt" "io" "net" "net/http" "os" "golang.org/x/net/context" "google.golang.org/grpc/codes" "google.golang.org/grpc/status" "google.golang.org/grpc/transport" ) // dialContext connects to the address on the named network. func dialContext(ctx context.Context, network, address string) (net.Conn, error) { return (&net.Dialer{Cancel: ctx.Done()}).Dial(network, address) } func sendHTTPRequest(ctx context.Context, req *http.Request, conn net.Conn) error { req.Cancel = ctx.Done() if err := req.Write(conn); err != nil { return fmt.Errorf("failed to write the HTTP request: %v", err) } return nil } // toRPCErr converts an error into an error from the status package. func toRPCErr(err error) error { if _, ok := status.FromError(err); ok { return err } switch e := err.(type) { case transport.StreamError: return status.Error(e.Code, e.Desc) case transport.ConnectionError: return status.Error(codes.Unavailable, e.Desc) default: switch err { case context.DeadlineExceeded: return status.Error(codes.DeadlineExceeded, err.Error()) case context.Canceled: return status.Error(codes.Canceled, err.Error()) case ErrClientConnClosing: return status.Error(codes.FailedPrecondition, err.Error()) } } return status.Error(codes.Unknown, err.Error()) } // convertCode converts a standard Go error into its canonical code. Note that // this is only used to translate the error returned by the server applications. func convertCode(err error) codes.Code { switch err { case nil: return codes.OK case io.EOF: return codes.OutOfRange case io.ErrClosedPipe, io.ErrNoProgress, io.ErrShortBuffer, io.ErrShortWrite, io.ErrUnexpectedEOF: return codes.FailedPrecondition case os.ErrInvalid: return codes.InvalidArgument case context.Canceled: return codes.Canceled case context.DeadlineExceeded: return codes.DeadlineExceeded } switch { case os.IsExist(err): return codes.AlreadyExists case os.IsNotExist(err): return codes.NotFound case os.IsPermission(err): return codes.PermissionDenied } return codes.Unknown } golang-google-grpc-1.6.0/go17.go000066400000000000000000000053131315416461300163020ustar00rootroot00000000000000// +build go1.7 /* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package grpc import ( "context" "io" "net" "net/http" "os" netctx "golang.org/x/net/context" "google.golang.org/grpc/codes" "google.golang.org/grpc/status" "google.golang.org/grpc/transport" ) // dialContext connects to the address on the named network. func dialContext(ctx context.Context, network, address string) (net.Conn, error) { return (&net.Dialer{}).DialContext(ctx, network, address) } func sendHTTPRequest(ctx context.Context, req *http.Request, conn net.Conn) error { req = req.WithContext(ctx) if err := req.Write(conn); err != nil { return err } return nil } // toRPCErr converts an error into an error from the status package. func toRPCErr(err error) error { if _, ok := status.FromError(err); ok { return err } switch e := err.(type) { case transport.StreamError: return status.Error(e.Code, e.Desc) case transport.ConnectionError: return status.Error(codes.Unavailable, e.Desc) default: switch err { case context.DeadlineExceeded, netctx.DeadlineExceeded: return status.Error(codes.DeadlineExceeded, err.Error()) case context.Canceled, netctx.Canceled: return status.Error(codes.Canceled, err.Error()) case ErrClientConnClosing: return status.Error(codes.FailedPrecondition, err.Error()) } } return status.Error(codes.Unknown, err.Error()) } // convertCode converts a standard Go error into its canonical code. Note that // this is only used to translate the error returned by the server applications. func convertCode(err error) codes.Code { switch err { case nil: return codes.OK case io.EOF: return codes.OutOfRange case io.ErrClosedPipe, io.ErrNoProgress, io.ErrShortBuffer, io.ErrShortWrite, io.ErrUnexpectedEOF: return codes.FailedPrecondition case os.ErrInvalid: return codes.InvalidArgument case context.Canceled, netctx.Canceled: return codes.Canceled case context.DeadlineExceeded, netctx.DeadlineExceeded: return codes.DeadlineExceeded } switch { case os.IsExist(err): return codes.AlreadyExists case os.IsNotExist(err): return codes.NotFound case os.IsPermission(err): return codes.PermissionDenied } return codes.Unknown } golang-google-grpc-1.6.0/grpclb.go000066400000000000000000000410751315416461300170030ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "errors" "fmt" "math/rand" "net" "sync" "time" "golang.org/x/net/context" "google.golang.org/grpc/codes" lbmpb "google.golang.org/grpc/grpclb/grpc_lb_v1/messages" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/metadata" "google.golang.org/grpc/naming" ) // Client API for LoadBalancer service. // Mostly copied from generated pb.go file. // To avoid circular dependency. type loadBalancerClient struct { cc *ClientConn } func (c *loadBalancerClient) BalanceLoad(ctx context.Context, opts ...CallOption) (*balanceLoadClientStream, error) { desc := &StreamDesc{ StreamName: "BalanceLoad", ServerStreams: true, ClientStreams: true, } stream, err := NewClientStream(ctx, desc, c.cc, "/grpc.lb.v1.LoadBalancer/BalanceLoad", opts...) 
if err != nil { return nil, err } x := &balanceLoadClientStream{stream} return x, nil } type balanceLoadClientStream struct { ClientStream } func (x *balanceLoadClientStream) Send(m *lbmpb.LoadBalanceRequest) error { return x.ClientStream.SendMsg(m) } func (x *balanceLoadClientStream) Recv() (*lbmpb.LoadBalanceResponse, error) { m := new(lbmpb.LoadBalanceResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } // NewGRPCLBBalancer creates a grpclb load balancer. func NewGRPCLBBalancer(r naming.Resolver) Balancer { return &balancer{ r: r, } } type remoteBalancerInfo struct { addr string // the server name used for authentication with the remote LB server. name string } // grpclbAddrInfo consists of the information of a backend server. type grpclbAddrInfo struct { addr Address connected bool // dropForRateLimiting indicates whether this particular request should be // dropped by the client for rate limiting. dropForRateLimiting bool // dropForLoadBalancing indicates whether this particular request should be // dropped by the client for load balancing. dropForLoadBalancing bool } type balancer struct { r naming.Resolver target string mu sync.Mutex seq int // a sequence number to make sure addrCh does not get stale addresses. w naming.Watcher addrCh chan []Address rbs []remoteBalancerInfo addrs []*grpclbAddrInfo next int waitCh chan struct{} done bool rand *rand.Rand clientStats lbmpb.ClientStats } func (b *balancer) watchAddrUpdates(w naming.Watcher, ch chan []remoteBalancerInfo) error { updates, err := w.Next() if err != nil { grpclog.Warningf("grpclb: failed to get next addr update from watcher: %v", err) return err } b.mu.Lock() defer b.mu.Unlock() if b.done { return ErrClientConnClosing } for _, update := range updates { switch update.Op { case naming.Add: var exist bool for _, v := range b.rbs { // TODO: Is the same addr with different server name a different balancer? if update.Addr == v.addr { exist = true break } } if exist { continue } md, ok := update.Metadata.(*naming.AddrMetadataGRPCLB) if !ok { // TODO: Revisit the handling here and may introduce some fallback mechanism. grpclog.Errorf("The name resolution contains unexpected metadata %v", update.Metadata) continue } switch md.AddrType { case naming.Backend: // TODO: Revisit the handling here and may introduce some fallback mechanism. grpclog.Errorf("The name resolution does not give grpclb addresses") continue case naming.GRPCLB: b.rbs = append(b.rbs, remoteBalancerInfo{ addr: update.Addr, name: md.ServerName, }) default: grpclog.Errorf("Received unknow address type %d", md.AddrType) continue } case naming.Delete: for i, v := range b.rbs { if update.Addr == v.addr { copy(b.rbs[i:], b.rbs[i+1:]) b.rbs = b.rbs[:len(b.rbs)-1] break } } default: grpclog.Errorf("Unknown update.Op %v", update.Op) } } // TODO: Fall back to the basic round-robin load balancing if the resulting address is // not a load balancer. 
select { case <-ch: default: } ch <- b.rbs return nil } func convertDuration(d *lbmpb.Duration) time.Duration { if d == nil { return 0 } return time.Duration(d.Seconds)*time.Second + time.Duration(d.Nanos)*time.Nanosecond } func (b *balancer) processServerList(l *lbmpb.ServerList, seq int) { if l == nil { return } servers := l.GetServers() var ( sl []*grpclbAddrInfo addrs []Address ) for _, s := range servers { md := metadata.Pairs("lb-token", s.LoadBalanceToken) ip := net.IP(s.IpAddress) ipStr := ip.String() if ip.To4() == nil { // Add square brackets to ipv6 addresses, otherwise net.Dial() and // net.SplitHostPort() will return too many colons error. ipStr = fmt.Sprintf("[%s]", ipStr) } addr := Address{ Addr: fmt.Sprintf("%s:%d", ipStr, s.Port), Metadata: &md, } sl = append(sl, &grpclbAddrInfo{ addr: addr, dropForRateLimiting: s.DropForRateLimiting, dropForLoadBalancing: s.DropForLoadBalancing, }) addrs = append(addrs, addr) } b.mu.Lock() defer b.mu.Unlock() if b.done || seq < b.seq { return } if len(sl) > 0 { // reset b.next to 0 when replacing the server list. b.next = 0 b.addrs = sl b.addrCh <- addrs } return } func (b *balancer) sendLoadReport(s *balanceLoadClientStream, interval time.Duration, done <-chan struct{}) { ticker := time.NewTicker(interval) defer ticker.Stop() for { select { case <-ticker.C: case <-done: return } b.mu.Lock() stats := b.clientStats b.clientStats = lbmpb.ClientStats{} // Clear the stats. b.mu.Unlock() t := time.Now() stats.Timestamp = &lbmpb.Timestamp{ Seconds: t.Unix(), Nanos: int32(t.Nanosecond()), } if err := s.Send(&lbmpb.LoadBalanceRequest{ LoadBalanceRequestType: &lbmpb.LoadBalanceRequest_ClientStats{ ClientStats: &stats, }, }); err != nil { grpclog.Errorf("grpclb: failed to send load report: %v", err) return } } } func (b *balancer) callRemoteBalancer(lbc *loadBalancerClient, seq int) (retry bool) { ctx, cancel := context.WithCancel(context.Background()) defer cancel() stream, err := lbc.BalanceLoad(ctx) if err != nil { grpclog.Errorf("grpclb: failed to perform RPC to the remote balancer %v", err) return } b.mu.Lock() if b.done { b.mu.Unlock() return } b.mu.Unlock() initReq := &lbmpb.LoadBalanceRequest{ LoadBalanceRequestType: &lbmpb.LoadBalanceRequest_InitialRequest{ InitialRequest: &lbmpb.InitialLoadBalanceRequest{ Name: b.target, }, }, } if err := stream.Send(initReq); err != nil { grpclog.Errorf("grpclb: failed to send init request: %v", err) // TODO: backoff on retry? return true } reply, err := stream.Recv() if err != nil { grpclog.Errorf("grpclb: failed to recv init response: %v", err) // TODO: backoff on retry? return true } initResp := reply.GetInitialResponse() if initResp == nil { grpclog.Errorf("grpclb: reply from remote balancer did not include initial response.") return } // TODO: Support delegation. if initResp.LoadBalancerDelegate != "" { // delegation grpclog.Errorf("TODO: Delegation is not supported yet.") return } streamDone := make(chan struct{}) defer close(streamDone) b.mu.Lock() b.clientStats = lbmpb.ClientStats{} // Clear client stats. b.mu.Unlock() if d := convertDuration(initResp.ClientStatsReportInterval); d > 0 { go b.sendLoadReport(stream, d, streamDone) } // Retrieve the server list. for { reply, err := stream.Recv() if err != nil { grpclog.Errorf("grpclb: failed to recv server list: %v", err) break } b.mu.Lock() if b.done || seq < b.seq { b.mu.Unlock() return } b.seq++ // tick when receiving a new list of servers. 
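// Capture the bumped sequence number while still holding the lock; it is
// handed to processServerList, which discards the list if b.seq has moved on
// again in the meantime (for example because a new balancer connection was
// established).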
seq = b.seq b.mu.Unlock() if serverList := reply.GetServerList(); serverList != nil { b.processServerList(serverList, seq) } } return true } func (b *balancer) Start(target string, config BalancerConfig) error { b.rand = rand.New(rand.NewSource(time.Now().Unix())) // TODO: Fall back to the basic direct connection if there is no name resolver. if b.r == nil { return errors.New("there is no name resolver installed") } b.target = target b.mu.Lock() if b.done { b.mu.Unlock() return ErrClientConnClosing } b.addrCh = make(chan []Address) w, err := b.r.Resolve(target) if err != nil { b.mu.Unlock() grpclog.Errorf("grpclb: failed to resolve address: %v, err: %v", target, err) return err } b.w = w b.mu.Unlock() balancerAddrsCh := make(chan []remoteBalancerInfo, 1) // Spawn a goroutine to monitor the name resolution of remote load balancer. go func() { for { if err := b.watchAddrUpdates(w, balancerAddrsCh); err != nil { grpclog.Warningf("grpclb: the naming watcher stops working due to %v.\n", err) close(balancerAddrsCh) return } } }() // Spawn a goroutine to talk to the remote load balancer. go func() { var ( cc *ClientConn // ccError is closed when there is an error in the current cc. // A new rb should be picked from rbs and connected. ccError chan struct{} rb *remoteBalancerInfo rbs []remoteBalancerInfo rbIdx int ) defer func() { if ccError != nil { select { case <-ccError: default: close(ccError) } } if cc != nil { cc.Close() } }() for { var ok bool select { case rbs, ok = <-balancerAddrsCh: if !ok { return } foundIdx := -1 if rb != nil { for i, trb := range rbs { if trb == *rb { foundIdx = i break } } } if foundIdx >= 0 { if foundIdx >= 1 { // Move the address in use to the beginning of the list. b.rbs[0], b.rbs[foundIdx] = b.rbs[foundIdx], b.rbs[0] rbIdx = 0 } continue // If found, don't dial new cc. } else if len(rbs) > 0 { // Pick a random one from the list, instead of always using the first one. if l := len(rbs); l > 1 && rb != nil { tmpIdx := b.rand.Intn(l - 1) b.rbs[0], b.rbs[tmpIdx] = b.rbs[tmpIdx], b.rbs[0] } rbIdx = 0 rb = &rbs[0] } else { // foundIdx < 0 && len(rbs) <= 0. rb = nil } case <-ccError: ccError = nil if rbIdx < len(rbs)-1 { rbIdx++ rb = &rbs[rbIdx] } else { rb = nil } } if rb == nil { continue } if cc != nil { cc.Close() } // Talk to the remote load balancer to get the server list. var ( err error dopts []DialOption ) if creds := config.DialCreds; creds != nil { if rb.name != "" { if err := creds.OverrideServerName(rb.name); err != nil { grpclog.Warningf("grpclb: failed to override the server name in the credentials: %v", err) continue } } dopts = append(dopts, WithTransportCredentials(creds)) } else { dopts = append(dopts, WithInsecure()) } if dialer := config.Dialer; dialer != nil { // WithDialer takes a different type of function, so we instead use a special DialOption here. dopts = append(dopts, func(o *dialOptions) { o.copts.Dialer = dialer }) } ccError = make(chan struct{}) cc, err = Dial(rb.addr, dopts...) 
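// A Dial error is handled just below by closing ccError, which makes the
// outer select advance to the next remote balancer address in rbs.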
if err != nil { grpclog.Warningf("grpclb: failed to setup a connection to the remote balancer %v: %v", rb.addr, err) close(ccError) continue } b.mu.Lock() b.seq++ // tick when getting a new balancer address seq := b.seq b.next = 0 b.mu.Unlock() go func(cc *ClientConn, ccError chan struct{}) { lbc := &loadBalancerClient{cc} b.callRemoteBalancer(lbc, seq) cc.Close() select { case <-ccError: default: close(ccError) } }(cc, ccError) } }() return nil } func (b *balancer) down(addr Address, err error) { b.mu.Lock() defer b.mu.Unlock() for _, a := range b.addrs { if addr == a.addr { a.connected = false break } } } func (b *balancer) Up(addr Address) func(error) { b.mu.Lock() defer b.mu.Unlock() if b.done { return nil } var cnt int for _, a := range b.addrs { if a.addr == addr { if a.connected { return nil } a.connected = true } if a.connected && !a.dropForRateLimiting && !a.dropForLoadBalancing { cnt++ } } // addr is the only one which is connected. Notify the Get() callers who are blocking. if cnt == 1 && b.waitCh != nil { close(b.waitCh) b.waitCh = nil } return func(err error) { b.down(addr, err) } } func (b *balancer) Get(ctx context.Context, opts BalancerGetOptions) (addr Address, put func(), err error) { var ch chan struct{} b.mu.Lock() if b.done { b.mu.Unlock() err = ErrClientConnClosing return } seq := b.seq defer func() { if err != nil { return } put = func() { s, ok := rpcInfoFromContext(ctx) if !ok { return } b.mu.Lock() defer b.mu.Unlock() if b.done || seq < b.seq { return } b.clientStats.NumCallsFinished++ if !s.bytesSent { b.clientStats.NumCallsFinishedWithClientFailedToSend++ } else if s.bytesReceived { b.clientStats.NumCallsFinishedKnownReceived++ } } }() b.clientStats.NumCallsStarted++ if len(b.addrs) > 0 { if b.next >= len(b.addrs) { b.next = 0 } next := b.next for { a := b.addrs[next] next = (next + 1) % len(b.addrs) if a.connected { if !a.dropForRateLimiting && !a.dropForLoadBalancing { addr = a.addr b.next = next b.mu.Unlock() return } if !opts.BlockingWait { b.next = next if a.dropForLoadBalancing { b.clientStats.NumCallsFinished++ b.clientStats.NumCallsFinishedWithDropForLoadBalancing++ } else if a.dropForRateLimiting { b.clientStats.NumCallsFinished++ b.clientStats.NumCallsFinishedWithDropForRateLimiting++ } b.mu.Unlock() err = Errorf(codes.Unavailable, "%s drops requests", a.addr.Addr) return } } if next == b.next { // Has iterated all the possible address but none is connected. break } } } if !opts.BlockingWait { if len(b.addrs) == 0 { b.clientStats.NumCallsFinished++ b.clientStats.NumCallsFinishedWithClientFailedToSend++ b.mu.Unlock() err = Errorf(codes.Unavailable, "there is no address available") return } // Returns the next addr on b.addrs for a failfast RPC. addr = b.addrs[b.next].addr b.next++ b.mu.Unlock() return } // Wait on b.waitCh for non-failfast RPCs. 
if b.waitCh == nil { ch = make(chan struct{}) b.waitCh = ch } else { ch = b.waitCh } b.mu.Unlock() for { select { case <-ctx.Done(): b.mu.Lock() b.clientStats.NumCallsFinished++ b.clientStats.NumCallsFinishedWithClientFailedToSend++ b.mu.Unlock() err = ctx.Err() return case <-ch: b.mu.Lock() if b.done { b.clientStats.NumCallsFinished++ b.clientStats.NumCallsFinishedWithClientFailedToSend++ b.mu.Unlock() err = ErrClientConnClosing return } if len(b.addrs) > 0 { if b.next >= len(b.addrs) { b.next = 0 } next := b.next for { a := b.addrs[next] next = (next + 1) % len(b.addrs) if a.connected { if !a.dropForRateLimiting && !a.dropForLoadBalancing { addr = a.addr b.next = next b.mu.Unlock() return } if !opts.BlockingWait { b.next = next if a.dropForLoadBalancing { b.clientStats.NumCallsFinished++ b.clientStats.NumCallsFinishedWithDropForLoadBalancing++ } else if a.dropForRateLimiting { b.clientStats.NumCallsFinished++ b.clientStats.NumCallsFinishedWithDropForRateLimiting++ } b.mu.Unlock() err = Errorf(codes.Unavailable, "drop requests for the addreess %s", a.addr.Addr) return } } if next == b.next { // Has iterated all the possible address but none is connected. break } } } // The newly added addr got removed by Down() again. if b.waitCh == nil { ch = make(chan struct{}) b.waitCh = ch } else { ch = b.waitCh } b.mu.Unlock() } } } func (b *balancer) Notify() <-chan []Address { return b.addrCh } func (b *balancer) Close() error { b.mu.Lock() defer b.mu.Unlock() if b.done { return errBalancerClosed } b.done = true if b.waitCh != nil { close(b.waitCh) } if b.addrCh != nil { close(b.addrCh) } if b.w != nil { b.w.Close() } return nil } golang-google-grpc-1.6.0/grpclb/000077500000000000000000000000001315416461300164455ustar00rootroot00000000000000golang-google-grpc-1.6.0/grpclb/grpc_lb_v1/000077500000000000000000000000001315416461300204635ustar00rootroot00000000000000golang-google-grpc-1.6.0/grpclb/grpc_lb_v1/messages/000077500000000000000000000000001315416461300222725ustar00rootroot00000000000000golang-google-grpc-1.6.0/grpclb/grpc_lb_v1/messages/messages.pb.go000066400000000000000000000615361315416461300250430ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: grpc_lb_v1/messages/messages.proto /* Package messages is a generated protocol buffer package. It is generated from these files: grpc_lb_v1/messages/messages.proto It has these top-level messages: Duration Timestamp LoadBalanceRequest InitialLoadBalanceRequest ClientStats LoadBalanceResponse InitialLoadBalanceResponse ServerList Server */ package messages import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type Duration struct { // Signed seconds of the span of time. Must be from -315,576,000,000 // to +315,576,000,000 inclusive. Seconds int64 `protobuf:"varint,1,opt,name=seconds" json:"seconds,omitempty"` // Signed fractions of a second at nanosecond resolution of the span // of time. Durations less than one second are represented with a 0 // `seconds` field and a positive or negative `nanos` field. 
For durations // of one second or more, a non-zero value for the `nanos` field must be // of the same sign as the `seconds` field. Must be from -999,999,999 // to +999,999,999 inclusive. Nanos int32 `protobuf:"varint,2,opt,name=nanos" json:"nanos,omitempty"` } func (m *Duration) Reset() { *m = Duration{} } func (m *Duration) String() string { return proto.CompactTextString(m) } func (*Duration) ProtoMessage() {} func (*Duration) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} } func (m *Duration) GetSeconds() int64 { if m != nil { return m.Seconds } return 0 } func (m *Duration) GetNanos() int32 { if m != nil { return m.Nanos } return 0 } type Timestamp struct { // Represents seconds of UTC time since Unix epoch // 1970-01-01T00:00:00Z. Must be from 0001-01-01T00:00:00Z to // 9999-12-31T23:59:59Z inclusive. Seconds int64 `protobuf:"varint,1,opt,name=seconds" json:"seconds,omitempty"` // Non-negative fractions of a second at nanosecond resolution. Negative // second values with fractions must still have non-negative nanos values // that count forward in time. Must be from 0 to 999,999,999 // inclusive. Nanos int32 `protobuf:"varint,2,opt,name=nanos" json:"nanos,omitempty"` } func (m *Timestamp) Reset() { *m = Timestamp{} } func (m *Timestamp) String() string { return proto.CompactTextString(m) } func (*Timestamp) ProtoMessage() {} func (*Timestamp) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} } func (m *Timestamp) GetSeconds() int64 { if m != nil { return m.Seconds } return 0 } func (m *Timestamp) GetNanos() int32 { if m != nil { return m.Nanos } return 0 } type LoadBalanceRequest struct { // Types that are valid to be assigned to LoadBalanceRequestType: // *LoadBalanceRequest_InitialRequest // *LoadBalanceRequest_ClientStats LoadBalanceRequestType isLoadBalanceRequest_LoadBalanceRequestType `protobuf_oneof:"load_balance_request_type"` } func (m *LoadBalanceRequest) Reset() { *m = LoadBalanceRequest{} } func (m *LoadBalanceRequest) String() string { return proto.CompactTextString(m) } func (*LoadBalanceRequest) ProtoMessage() {} func (*LoadBalanceRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} } type isLoadBalanceRequest_LoadBalanceRequestType interface { isLoadBalanceRequest_LoadBalanceRequestType() } type LoadBalanceRequest_InitialRequest struct { InitialRequest *InitialLoadBalanceRequest `protobuf:"bytes,1,opt,name=initial_request,json=initialRequest,oneof"` } type LoadBalanceRequest_ClientStats struct { ClientStats *ClientStats `protobuf:"bytes,2,opt,name=client_stats,json=clientStats,oneof"` } func (*LoadBalanceRequest_InitialRequest) isLoadBalanceRequest_LoadBalanceRequestType() {} func (*LoadBalanceRequest_ClientStats) isLoadBalanceRequest_LoadBalanceRequestType() {} func (m *LoadBalanceRequest) GetLoadBalanceRequestType() isLoadBalanceRequest_LoadBalanceRequestType { if m != nil { return m.LoadBalanceRequestType } return nil } func (m *LoadBalanceRequest) GetInitialRequest() *InitialLoadBalanceRequest { if x, ok := m.GetLoadBalanceRequestType().(*LoadBalanceRequest_InitialRequest); ok { return x.InitialRequest } return nil } func (m *LoadBalanceRequest) GetClientStats() *ClientStats { if x, ok := m.GetLoadBalanceRequestType().(*LoadBalanceRequest_ClientStats); ok { return x.ClientStats } return nil } // XXX_OneofFuncs is for the internal use of the proto package. 
func (*LoadBalanceRequest) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _LoadBalanceRequest_OneofMarshaler, _LoadBalanceRequest_OneofUnmarshaler, _LoadBalanceRequest_OneofSizer, []interface{}{ (*LoadBalanceRequest_InitialRequest)(nil), (*LoadBalanceRequest_ClientStats)(nil), } } func _LoadBalanceRequest_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*LoadBalanceRequest) // load_balance_request_type switch x := m.LoadBalanceRequestType.(type) { case *LoadBalanceRequest_InitialRequest: b.EncodeVarint(1<<3 | proto.WireBytes) if err := b.EncodeMessage(x.InitialRequest); err != nil { return err } case *LoadBalanceRequest_ClientStats: b.EncodeVarint(2<<3 | proto.WireBytes) if err := b.EncodeMessage(x.ClientStats); err != nil { return err } case nil: default: return fmt.Errorf("LoadBalanceRequest.LoadBalanceRequestType has unexpected type %T", x) } return nil } func _LoadBalanceRequest_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*LoadBalanceRequest) switch tag { case 1: // load_balance_request_type.initial_request if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(InitialLoadBalanceRequest) err := b.DecodeMessage(msg) m.LoadBalanceRequestType = &LoadBalanceRequest_InitialRequest{msg} return true, err case 2: // load_balance_request_type.client_stats if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(ClientStats) err := b.DecodeMessage(msg) m.LoadBalanceRequestType = &LoadBalanceRequest_ClientStats{msg} return true, err default: return false, nil } } func _LoadBalanceRequest_OneofSizer(msg proto.Message) (n int) { m := msg.(*LoadBalanceRequest) // load_balance_request_type switch x := m.LoadBalanceRequestType.(type) { case *LoadBalanceRequest_InitialRequest: s := proto.Size(x.InitialRequest) n += proto.SizeVarint(1<<3 | proto.WireBytes) n += proto.SizeVarint(uint64(s)) n += s case *LoadBalanceRequest_ClientStats: s := proto.Size(x.ClientStats) n += proto.SizeVarint(2<<3 | proto.WireBytes) n += proto.SizeVarint(uint64(s)) n += s case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } type InitialLoadBalanceRequest struct { // Name of load balanced service (IE, balancer.service.com) // length should be less than 256 bytes. Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"` } func (m *InitialLoadBalanceRequest) Reset() { *m = InitialLoadBalanceRequest{} } func (m *InitialLoadBalanceRequest) String() string { return proto.CompactTextString(m) } func (*InitialLoadBalanceRequest) ProtoMessage() {} func (*InitialLoadBalanceRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3} } func (m *InitialLoadBalanceRequest) GetName() string { if m != nil { return m.Name } return "" } // Contains client level statistics that are useful to load balancing. Each // count except the timestamp should be reset to zero after reporting the stats. type ClientStats struct { // The timestamp of generating the report. Timestamp *Timestamp `protobuf:"bytes,1,opt,name=timestamp" json:"timestamp,omitempty"` // The total number of RPCs that started. NumCallsStarted int64 `protobuf:"varint,2,opt,name=num_calls_started,json=numCallsStarted" json:"num_calls_started,omitempty"` // The total number of RPCs that finished. 
NumCallsFinished int64 `protobuf:"varint,3,opt,name=num_calls_finished,json=numCallsFinished" json:"num_calls_finished,omitempty"` // The total number of RPCs that were dropped by the client because of rate // limiting. NumCallsFinishedWithDropForRateLimiting int64 `protobuf:"varint,4,opt,name=num_calls_finished_with_drop_for_rate_limiting,json=numCallsFinishedWithDropForRateLimiting" json:"num_calls_finished_with_drop_for_rate_limiting,omitempty"` // The total number of RPCs that were dropped by the client because of load // balancing. NumCallsFinishedWithDropForLoadBalancing int64 `protobuf:"varint,5,opt,name=num_calls_finished_with_drop_for_load_balancing,json=numCallsFinishedWithDropForLoadBalancing" json:"num_calls_finished_with_drop_for_load_balancing,omitempty"` // The total number of RPCs that failed to reach a server except dropped RPCs. NumCallsFinishedWithClientFailedToSend int64 `protobuf:"varint,6,opt,name=num_calls_finished_with_client_failed_to_send,json=numCallsFinishedWithClientFailedToSend" json:"num_calls_finished_with_client_failed_to_send,omitempty"` // The total number of RPCs that finished and are known to have been received // by a server. NumCallsFinishedKnownReceived int64 `protobuf:"varint,7,opt,name=num_calls_finished_known_received,json=numCallsFinishedKnownReceived" json:"num_calls_finished_known_received,omitempty"` } func (m *ClientStats) Reset() { *m = ClientStats{} } func (m *ClientStats) String() string { return proto.CompactTextString(m) } func (*ClientStats) ProtoMessage() {} func (*ClientStats) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{4} } func (m *ClientStats) GetTimestamp() *Timestamp { if m != nil { return m.Timestamp } return nil } func (m *ClientStats) GetNumCallsStarted() int64 { if m != nil { return m.NumCallsStarted } return 0 } func (m *ClientStats) GetNumCallsFinished() int64 { if m != nil { return m.NumCallsFinished } return 0 } func (m *ClientStats) GetNumCallsFinishedWithDropForRateLimiting() int64 { if m != nil { return m.NumCallsFinishedWithDropForRateLimiting } return 0 } func (m *ClientStats) GetNumCallsFinishedWithDropForLoadBalancing() int64 { if m != nil { return m.NumCallsFinishedWithDropForLoadBalancing } return 0 } func (m *ClientStats) GetNumCallsFinishedWithClientFailedToSend() int64 { if m != nil { return m.NumCallsFinishedWithClientFailedToSend } return 0 } func (m *ClientStats) GetNumCallsFinishedKnownReceived() int64 { if m != nil { return m.NumCallsFinishedKnownReceived } return 0 } type LoadBalanceResponse struct { // Types that are valid to be assigned to LoadBalanceResponseType: // *LoadBalanceResponse_InitialResponse // *LoadBalanceResponse_ServerList LoadBalanceResponseType isLoadBalanceResponse_LoadBalanceResponseType `protobuf_oneof:"load_balance_response_type"` } func (m *LoadBalanceResponse) Reset() { *m = LoadBalanceResponse{} } func (m *LoadBalanceResponse) String() string { return proto.CompactTextString(m) } func (*LoadBalanceResponse) ProtoMessage() {} func (*LoadBalanceResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{5} } type isLoadBalanceResponse_LoadBalanceResponseType interface { isLoadBalanceResponse_LoadBalanceResponseType() } type LoadBalanceResponse_InitialResponse struct { InitialResponse *InitialLoadBalanceResponse `protobuf:"bytes,1,opt,name=initial_response,json=initialResponse,oneof"` } type LoadBalanceResponse_ServerList struct { ServerList *ServerList `protobuf:"bytes,2,opt,name=server_list,json=serverList,oneof"` } func 
(*LoadBalanceResponse_InitialResponse) isLoadBalanceResponse_LoadBalanceResponseType() {} func (*LoadBalanceResponse_ServerList) isLoadBalanceResponse_LoadBalanceResponseType() {} func (m *LoadBalanceResponse) GetLoadBalanceResponseType() isLoadBalanceResponse_LoadBalanceResponseType { if m != nil { return m.LoadBalanceResponseType } return nil } func (m *LoadBalanceResponse) GetInitialResponse() *InitialLoadBalanceResponse { if x, ok := m.GetLoadBalanceResponseType().(*LoadBalanceResponse_InitialResponse); ok { return x.InitialResponse } return nil } func (m *LoadBalanceResponse) GetServerList() *ServerList { if x, ok := m.GetLoadBalanceResponseType().(*LoadBalanceResponse_ServerList); ok { return x.ServerList } return nil } // XXX_OneofFuncs is for the internal use of the proto package. func (*LoadBalanceResponse) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _LoadBalanceResponse_OneofMarshaler, _LoadBalanceResponse_OneofUnmarshaler, _LoadBalanceResponse_OneofSizer, []interface{}{ (*LoadBalanceResponse_InitialResponse)(nil), (*LoadBalanceResponse_ServerList)(nil), } } func _LoadBalanceResponse_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*LoadBalanceResponse) // load_balance_response_type switch x := m.LoadBalanceResponseType.(type) { case *LoadBalanceResponse_InitialResponse: b.EncodeVarint(1<<3 | proto.WireBytes) if err := b.EncodeMessage(x.InitialResponse); err != nil { return err } case *LoadBalanceResponse_ServerList: b.EncodeVarint(2<<3 | proto.WireBytes) if err := b.EncodeMessage(x.ServerList); err != nil { return err } case nil: default: return fmt.Errorf("LoadBalanceResponse.LoadBalanceResponseType has unexpected type %T", x) } return nil } func _LoadBalanceResponse_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*LoadBalanceResponse) switch tag { case 1: // load_balance_response_type.initial_response if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(InitialLoadBalanceResponse) err := b.DecodeMessage(msg) m.LoadBalanceResponseType = &LoadBalanceResponse_InitialResponse{msg} return true, err case 2: // load_balance_response_type.server_list if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(ServerList) err := b.DecodeMessage(msg) m.LoadBalanceResponseType = &LoadBalanceResponse_ServerList{msg} return true, err default: return false, nil } } func _LoadBalanceResponse_OneofSizer(msg proto.Message) (n int) { m := msg.(*LoadBalanceResponse) // load_balance_response_type switch x := m.LoadBalanceResponseType.(type) { case *LoadBalanceResponse_InitialResponse: s := proto.Size(x.InitialResponse) n += proto.SizeVarint(1<<3 | proto.WireBytes) n += proto.SizeVarint(uint64(s)) n += s case *LoadBalanceResponse_ServerList: s := proto.Size(x.ServerList) n += proto.SizeVarint(2<<3 | proto.WireBytes) n += proto.SizeVarint(uint64(s)) n += s case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } type InitialLoadBalanceResponse struct { // This is an application layer redirect that indicates the client should use // the specified server for load balancing. When this field is non-empty in // the response, the client should open a separate connection to the // load_balancer_delegate and call the BalanceLoad method. Its length should // be less than 64 bytes. 
LoadBalancerDelegate string `protobuf:"bytes,1,opt,name=load_balancer_delegate,json=loadBalancerDelegate" json:"load_balancer_delegate,omitempty"` // This interval defines how often the client should send the client stats // to the load balancer. Stats should only be reported when the duration is // positive. ClientStatsReportInterval *Duration `protobuf:"bytes,2,opt,name=client_stats_report_interval,json=clientStatsReportInterval" json:"client_stats_report_interval,omitempty"` } func (m *InitialLoadBalanceResponse) Reset() { *m = InitialLoadBalanceResponse{} } func (m *InitialLoadBalanceResponse) String() string { return proto.CompactTextString(m) } func (*InitialLoadBalanceResponse) ProtoMessage() {} func (*InitialLoadBalanceResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{6} } func (m *InitialLoadBalanceResponse) GetLoadBalancerDelegate() string { if m != nil { return m.LoadBalancerDelegate } return "" } func (m *InitialLoadBalanceResponse) GetClientStatsReportInterval() *Duration { if m != nil { return m.ClientStatsReportInterval } return nil } type ServerList struct { // Contains a list of servers selected by the load balancer. The list will // be updated when server resolutions change or as needed to balance load // across more servers. The client should consume the server list in order // unless instructed otherwise via the client_config. Servers []*Server `protobuf:"bytes,1,rep,name=servers" json:"servers,omitempty"` } func (m *ServerList) Reset() { *m = ServerList{} } func (m *ServerList) String() string { return proto.CompactTextString(m) } func (*ServerList) ProtoMessage() {} func (*ServerList) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{7} } func (m *ServerList) GetServers() []*Server { if m != nil { return m.Servers } return nil } // Contains server information. When none of the [drop_for_*] fields are true, // use the other fields. When drop_for_rate_limiting is true, ignore all other // fields. Use drop_for_load_balancing only when it is true and // drop_for_rate_limiting is false. type Server struct { // A resolved address for the server, serialized in network-byte-order. It may // either be an IPv4 or IPv6 address. IpAddress []byte `protobuf:"bytes,1,opt,name=ip_address,json=ipAddress,proto3" json:"ip_address,omitempty"` // A resolved port number for the server. Port int32 `protobuf:"varint,2,opt,name=port" json:"port,omitempty"` // An opaque but printable token given to the frontend for each pick. All // frontend requests for that pick must include the token in its initial // metadata. The token is used by the backend to verify the request and to // allow the backend to report load to the gRPC LB system. // // Its length is variable but less than 50 bytes. LoadBalanceToken string `protobuf:"bytes,3,opt,name=load_balance_token,json=loadBalanceToken" json:"load_balance_token,omitempty"` // Indicates whether this particular request should be dropped by the client // for rate limiting. DropForRateLimiting bool `protobuf:"varint,4,opt,name=drop_for_rate_limiting,json=dropForRateLimiting" json:"drop_for_rate_limiting,omitempty"` // Indicates whether this particular request should be dropped by the client // for load balancing. 
DropForLoadBalancing bool `protobuf:"varint,5,opt,name=drop_for_load_balancing,json=dropForLoadBalancing" json:"drop_for_load_balancing,omitempty"` } func (m *Server) Reset() { *m = Server{} } func (m *Server) String() string { return proto.CompactTextString(m) } func (*Server) ProtoMessage() {} func (*Server) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{8} } func (m *Server) GetIpAddress() []byte { if m != nil { return m.IpAddress } return nil } func (m *Server) GetPort() int32 { if m != nil { return m.Port } return 0 } func (m *Server) GetLoadBalanceToken() string { if m != nil { return m.LoadBalanceToken } return "" } func (m *Server) GetDropForRateLimiting() bool { if m != nil { return m.DropForRateLimiting } return false } func (m *Server) GetDropForLoadBalancing() bool { if m != nil { return m.DropForLoadBalancing } return false } func init() { proto.RegisterType((*Duration)(nil), "grpc.lb.v1.Duration") proto.RegisterType((*Timestamp)(nil), "grpc.lb.v1.Timestamp") proto.RegisterType((*LoadBalanceRequest)(nil), "grpc.lb.v1.LoadBalanceRequest") proto.RegisterType((*InitialLoadBalanceRequest)(nil), "grpc.lb.v1.InitialLoadBalanceRequest") proto.RegisterType((*ClientStats)(nil), "grpc.lb.v1.ClientStats") proto.RegisterType((*LoadBalanceResponse)(nil), "grpc.lb.v1.LoadBalanceResponse") proto.RegisterType((*InitialLoadBalanceResponse)(nil), "grpc.lb.v1.InitialLoadBalanceResponse") proto.RegisterType((*ServerList)(nil), "grpc.lb.v1.ServerList") proto.RegisterType((*Server)(nil), "grpc.lb.v1.Server") } func init() { proto.RegisterFile("grpc_lb_v1/messages/messages.proto", fileDescriptor0) } var fileDescriptor0 = []byte{ // 709 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x94, 0x55, 0xdd, 0x4e, 0x1b, 0x3b, 0x10, 0x26, 0x27, 0x01, 0x92, 0x09, 0x3a, 0xe4, 0x98, 0x1c, 0x08, 0x14, 0x24, 0xba, 0x52, 0x69, 0x54, 0xd1, 0x20, 0xa0, 0xbd, 0xe8, 0xcf, 0x45, 0x1b, 0x10, 0x0a, 0x2d, 0x17, 0x95, 0x43, 0x55, 0xa9, 0x52, 0x65, 0x39, 0xd9, 0x21, 0x58, 0x6c, 0xec, 0xad, 0xed, 0x04, 0xf5, 0x11, 0xfa, 0x28, 0x7d, 0x8c, 0xaa, 0xcf, 0xd0, 0xf7, 0xa9, 0xd6, 0xbb, 0x9b, 0x5d, 0x20, 0x80, 0x7a, 0x67, 0x8f, 0xbf, 0xf9, 0xbe, 0xf1, 0xac, 0xbf, 0x59, 0xf0, 0x06, 0x3a, 0xec, 0xb3, 0xa0, 0xc7, 0xc6, 0xbb, 0x3b, 0x43, 0x34, 0x86, 0x0f, 0xd0, 0x4c, 0x16, 0xad, 0x50, 0x2b, 0xab, 0x08, 0x44, 0x98, 0x56, 0xd0, 0x6b, 0x8d, 0x77, 0xbd, 0x97, 0x50, 0x3e, 0x1c, 0x69, 0x6e, 0x85, 0x92, 0xa4, 0x01, 0xf3, 0x06, 0xfb, 0x4a, 0xfa, 0xa6, 0x51, 0xd8, 0x2c, 0x34, 0x8b, 0x34, 0xdd, 0x92, 0x3a, 0xcc, 0x4a, 0x2e, 0x95, 0x69, 0xfc, 0xb3, 0x59, 0x68, 0xce, 0xd2, 0x78, 0xe3, 0xbd, 0x82, 0xca, 0xa9, 0x18, 0xa2, 0xb1, 0x7c, 0x18, 0xfe, 0x75, 0xf2, 0xcf, 0x02, 0x90, 0x13, 0xc5, 0xfd, 0x36, 0x0f, 0xb8, 0xec, 0x23, 0xc5, 0xaf, 0x23, 0x34, 0x96, 0x7c, 0x80, 0x45, 0x21, 0x85, 0x15, 0x3c, 0x60, 0x3a, 0x0e, 0x39, 0xba, 0xea, 0xde, 0xa3, 0x56, 0x56, 0x75, 0xeb, 0x38, 0x86, 0xdc, 0xcc, 0xef, 0xcc, 0xd0, 0x7f, 0x93, 0xfc, 0x94, 0xf1, 0x35, 0x2c, 0xf4, 0x03, 0x81, 0xd2, 0x32, 0x63, 0xb9, 0x8d, 0xab, 0xa8, 0xee, 0xad, 0xe4, 0xe9, 0x0e, 0xdc, 0x79, 0x37, 0x3a, 0xee, 0xcc, 0xd0, 0x6a, 0x3f, 0xdb, 0xb6, 0x1f, 0xc0, 0x6a, 0xa0, 0xb8, 0xcf, 0x7a, 0xb1, 0x4c, 0x5a, 0x14, 0xb3, 0xdf, 0x42, 0xf4, 0x76, 0x60, 0xf5, 0xd6, 0x4a, 0x08, 0x81, 0x92, 0xe4, 0x43, 0x74, 0xe5, 0x57, 0xa8, 0x5b, 0x7b, 0xdf, 0x4b, 0x50, 0xcd, 0x89, 0x91, 0x7d, 0xa8, 0xd8, 0xb4, 0x83, 0xc9, 0x3d, 0xff, 0xcf, 0x17, 0x36, 0x69, 0x2f, 0xcd, 0x70, 0xe4, 0x09, 0xfc, 0x27, 0x47, 0x43, 0xd6, 0xe7, 0x41, 0x60, 0xa2, 0x3b, 0x69, 0x8b, 
0xbe, 0xbb, 0x55, 0x91, 0x2e, 0xca, 0xd1, 0xf0, 0x20, 0x8a, 0x77, 0xe3, 0x30, 0xd9, 0x06, 0x92, 0x61, 0xcf, 0x84, 0x14, 0xe6, 0x1c, 0xfd, 0x46, 0xd1, 0x81, 0x6b, 0x29, 0xf8, 0x28, 0x89, 0x13, 0x06, 0xad, 0x9b, 0x68, 0x76, 0x29, 0xec, 0x39, 0xf3, 0xb5, 0x0a, 0xd9, 0x99, 0xd2, 0x4c, 0x73, 0x8b, 0x2c, 0x10, 0x43, 0x61, 0x85, 0x1c, 0x34, 0x4a, 0x8e, 0xe9, 0xf1, 0x75, 0xa6, 0x4f, 0xc2, 0x9e, 0x1f, 0x6a, 0x15, 0x1e, 0x29, 0x4d, 0xb9, 0xc5, 0x93, 0x04, 0x4e, 0x38, 0xec, 0xdc, 0x2b, 0x90, 0x6b, 0x77, 0xa4, 0x30, 0xeb, 0x14, 0x9a, 0x77, 0x28, 0x64, 0xbd, 0x8f, 0x24, 0xbe, 0xc0, 0xd3, 0xdb, 0x24, 0x92, 0x67, 0x70, 0xc6, 0x45, 0x80, 0x3e, 0xb3, 0x8a, 0x19, 0x94, 0x7e, 0x63, 0xce, 0x09, 0x6c, 0x4d, 0x13, 0x88, 0x3f, 0xd5, 0x91, 0xc3, 0x9f, 0xaa, 0x2e, 0x4a, 0x9f, 0x74, 0xe0, 0xe1, 0x14, 0xfa, 0x0b, 0xa9, 0x2e, 0x25, 0xd3, 0xd8, 0x47, 0x31, 0x46, 0xbf, 0x31, 0xef, 0x28, 0x37, 0xae, 0x53, 0xbe, 0x8f, 0x50, 0x34, 0x01, 0x79, 0xbf, 0x0a, 0xb0, 0x74, 0xe5, 0xd9, 0x98, 0x50, 0x49, 0x83, 0xa4, 0x0b, 0xb5, 0xcc, 0x01, 0x71, 0x2c, 0x79, 0x1a, 0x5b, 0xf7, 0x59, 0x20, 0x46, 0x77, 0x66, 0xe8, 0xe2, 0xc4, 0x03, 0x09, 0xe9, 0x0b, 0xa8, 0x1a, 0xd4, 0x63, 0xd4, 0x2c, 0x10, 0xc6, 0x26, 0x1e, 0x58, 0xce, 0xf3, 0x75, 0xdd, 0xf1, 0x89, 0x70, 0x1e, 0x02, 0x33, 0xd9, 0xb5, 0xd7, 0x61, 0xed, 0x9a, 0x03, 0x62, 0xce, 0xd8, 0x02, 0x3f, 0x0a, 0xb0, 0x76, 0x7b, 0x29, 0xe4, 0x19, 0x2c, 0xe7, 0x93, 0x35, 0xf3, 0x31, 0xc0, 0x01, 0xb7, 0xa9, 0x2d, 0xea, 0x41, 0x96, 0xa4, 0x0f, 0x93, 0x33, 0xf2, 0x11, 0xd6, 0xf3, 0x96, 0x65, 0x1a, 0x43, 0xa5, 0x2d, 0x13, 0xd2, 0xa2, 0x1e, 0xf3, 0x20, 0x29, 0xbf, 0x9e, 0x2f, 0x3f, 0x1d, 0x62, 0x74, 0x35, 0xe7, 0x5e, 0xea, 0xf2, 0x8e, 0x93, 0x34, 0xef, 0x0d, 0x40, 0x76, 0x4b, 0xb2, 0x1d, 0x0d, 0xac, 0x68, 0x17, 0x0d, 0xac, 0x62, 0xb3, 0xba, 0x47, 0x6e, 0xb6, 0x83, 0xa6, 0x90, 0x77, 0xa5, 0x72, 0xb1, 0x56, 0xf2, 0x7e, 0x17, 0x60, 0x2e, 0x3e, 0x21, 0x1b, 0x00, 0x22, 0x64, 0xdc, 0xf7, 0x35, 0x9a, 0x78, 0xe4, 0x2d, 0xd0, 0x8a, 0x08, 0xdf, 0xc6, 0x81, 0xc8, 0xfd, 0x91, 0x76, 0x32, 0xf3, 0xdc, 0x3a, 0x32, 0xe3, 0x95, 0x4e, 0x5a, 0x75, 0x81, 0xd2, 0x99, 0xb1, 0x42, 0x6b, 0xb9, 0x46, 0x9c, 0x46, 0x71, 0xb2, 0x0f, 0xcb, 0x77, 0x98, 0xae, 0x4c, 0x97, 0xfc, 0x29, 0x06, 0x7b, 0x0e, 0x2b, 0x77, 0x19, 0xa9, 0x4c, 0xeb, 0xfe, 0x14, 0xd3, 0xb4, 0xe1, 0x73, 0x39, 0xfd, 0x47, 0xf4, 0xe6, 0xdc, 0x4f, 0x62, 0xff, 0x4f, 0x00, 0x00, 0x00, 0xff, 0xff, 0xa3, 0x36, 0x86, 0xa6, 0x4a, 0x06, 0x00, 0x00, } golang-google-grpc-1.6.0/grpclb/grpc_lb_v1/messages/messages.proto000066400000000000000000000132441315416461300251720ustar00rootroot00000000000000// Copyright 2016 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. syntax = "proto3"; package grpc.lb.v1; option go_package = "messages"; message Duration { // Signed seconds of the span of time. Must be from -315,576,000,000 // to +315,576,000,000 inclusive. int64 seconds = 1; // Signed fractions of a second at nanosecond resolution of the span // of time. Durations less than one second are represented with a 0 // `seconds` field and a positive or negative `nanos` field. 
For durations // of one second or more, a non-zero value for the `nanos` field must be // of the same sign as the `seconds` field. Must be from -999,999,999 // to +999,999,999 inclusive. int32 nanos = 2; } message Timestamp { // Represents seconds of UTC time since Unix epoch // 1970-01-01T00:00:00Z. Must be from 0001-01-01T00:00:00Z to // 9999-12-31T23:59:59Z inclusive. int64 seconds = 1; // Non-negative fractions of a second at nanosecond resolution. Negative // second values with fractions must still have non-negative nanos values // that count forward in time. Must be from 0 to 999,999,999 // inclusive. int32 nanos = 2; } message LoadBalanceRequest { oneof load_balance_request_type { // This message should be sent on the first request to the load balancer. InitialLoadBalanceRequest initial_request = 1; // The client stats should be periodically reported to the load balancer // based on the duration defined in the InitialLoadBalanceResponse. ClientStats client_stats = 2; } } message InitialLoadBalanceRequest { // Name of load balanced service (IE, balancer.service.com) // length should be less than 256 bytes. string name = 1; } // Contains client level statistics that are useful to load balancing. Each // count except the timestamp should be reset to zero after reporting the stats. message ClientStats { // The timestamp of generating the report. Timestamp timestamp = 1; // The total number of RPCs that started. int64 num_calls_started = 2; // The total number of RPCs that finished. int64 num_calls_finished = 3; // The total number of RPCs that were dropped by the client because of rate // limiting. int64 num_calls_finished_with_drop_for_rate_limiting = 4; // The total number of RPCs that were dropped by the client because of load // balancing. int64 num_calls_finished_with_drop_for_load_balancing = 5; // The total number of RPCs that failed to reach a server except dropped RPCs. int64 num_calls_finished_with_client_failed_to_send = 6; // The total number of RPCs that finished and are known to have been received // by a server. int64 num_calls_finished_known_received = 7; } message LoadBalanceResponse { oneof load_balance_response_type { // This message should be sent on the first response to the client. InitialLoadBalanceResponse initial_response = 1; // Contains the list of servers selected by the load balancer. The client // should send requests to these servers in the specified order. ServerList server_list = 2; } } message InitialLoadBalanceResponse { // This is an application layer redirect that indicates the client should use // the specified server for load balancing. When this field is non-empty in // the response, the client should open a separate connection to the // load_balancer_delegate and call the BalanceLoad method. Its length should // be less than 64 bytes. string load_balancer_delegate = 1; // This interval defines how often the client should send the client stats // to the load balancer. Stats should only be reported when the duration is // positive. Duration client_stats_report_interval = 2; } message ServerList { // Contains a list of servers selected by the load balancer. The list will // be updated when server resolutions change or as needed to balance load // across more servers. The client should consume the server list in order // unless instructed otherwise via the client_config. repeated Server servers = 1; // Was google.protobuf.Duration expiration_interval. reserved 3; } // Contains server information. 
When none of the [drop_for_*] fields are true, // use the other fields. When drop_for_rate_limiting is true, ignore all other // fields. Use drop_for_load_balancing only when it is true and // drop_for_rate_limiting is false. message Server { // A resolved address for the server, serialized in network-byte-order. It may // either be an IPv4 or IPv6 address. bytes ip_address = 1; // A resolved port number for the server. int32 port = 2; // An opaque but printable token given to the frontend for each pick. All // frontend requests for that pick must include the token in its initial // metadata. The token is used by the backend to verify the request and to // allow the backend to report load to the gRPC LB system. // // Its length is variable but less than 50 bytes. string load_balance_token = 3; // Indicates whether this particular request should be dropped by the client // for rate limiting. bool drop_for_rate_limiting = 4; // Indicates whether this particular request should be dropped by the client // for load balancing. bool drop_for_load_balancing = 5; } golang-google-grpc-1.6.0/grpclb/grpc_lb_v1/service/000077500000000000000000000000001315416461300221235ustar00rootroot00000000000000golang-google-grpc-1.6.0/grpclb/grpc_lb_v1/service/service.pb.go000066400000000000000000000117161315416461300245200ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: grpc_lb_v1/service/service.proto /* Package service is a generated protocol buffer package. It is generated from these files: grpc_lb_v1/service/service.proto It has these top-level messages: */ package service import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import grpc_lb_v1 "google.golang.org/grpc/grpclb/grpc_lb_v1/messages" import ( context "golang.org/x/net/context" grpc "google.golang.org/grpc" ) // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package // Reference imports to suppress errors if they are not otherwise used. var _ context.Context var _ grpc.ClientConn // This is a compile-time assertion to ensure that this generated file // is compatible with the grpc package it is being compiled against. const _ = grpc.SupportPackageIsVersion4 // Client API for LoadBalancer service type LoadBalancerClient interface { // Bidirectional rpc to get a list of servers. BalanceLoad(ctx context.Context, opts ...grpc.CallOption) (LoadBalancer_BalanceLoadClient, error) } type loadBalancerClient struct { cc *grpc.ClientConn } func NewLoadBalancerClient(cc *grpc.ClientConn) LoadBalancerClient { return &loadBalancerClient{cc} } func (c *loadBalancerClient) BalanceLoad(ctx context.Context, opts ...grpc.CallOption) (LoadBalancer_BalanceLoadClient, error) { stream, err := grpc.NewClientStream(ctx, &_LoadBalancer_serviceDesc.Streams[0], c.cc, "/grpc.lb.v1.LoadBalancer/BalanceLoad", opts...) 
if err != nil { return nil, err } x := &loadBalancerBalanceLoadClient{stream} return x, nil } type LoadBalancer_BalanceLoadClient interface { Send(*grpc_lb_v1.LoadBalanceRequest) error Recv() (*grpc_lb_v1.LoadBalanceResponse, error) grpc.ClientStream } type loadBalancerBalanceLoadClient struct { grpc.ClientStream } func (x *loadBalancerBalanceLoadClient) Send(m *grpc_lb_v1.LoadBalanceRequest) error { return x.ClientStream.SendMsg(m) } func (x *loadBalancerBalanceLoadClient) Recv() (*grpc_lb_v1.LoadBalanceResponse, error) { m := new(grpc_lb_v1.LoadBalanceResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } // Server API for LoadBalancer service type LoadBalancerServer interface { // Bidirectional rpc to get a list of servers. BalanceLoad(LoadBalancer_BalanceLoadServer) error } func RegisterLoadBalancerServer(s *grpc.Server, srv LoadBalancerServer) { s.RegisterService(&_LoadBalancer_serviceDesc, srv) } func _LoadBalancer_BalanceLoad_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(LoadBalancerServer).BalanceLoad(&loadBalancerBalanceLoadServer{stream}) } type LoadBalancer_BalanceLoadServer interface { Send(*grpc_lb_v1.LoadBalanceResponse) error Recv() (*grpc_lb_v1.LoadBalanceRequest, error) grpc.ServerStream } type loadBalancerBalanceLoadServer struct { grpc.ServerStream } func (x *loadBalancerBalanceLoadServer) Send(m *grpc_lb_v1.LoadBalanceResponse) error { return x.ServerStream.SendMsg(m) } func (x *loadBalancerBalanceLoadServer) Recv() (*grpc_lb_v1.LoadBalanceRequest, error) { m := new(grpc_lb_v1.LoadBalanceRequest) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } var _LoadBalancer_serviceDesc = grpc.ServiceDesc{ ServiceName: "grpc.lb.v1.LoadBalancer", HandlerType: (*LoadBalancerServer)(nil), Methods: []grpc.MethodDesc{}, Streams: []grpc.StreamDesc{ { StreamName: "BalanceLoad", Handler: _LoadBalancer_BalanceLoad_Handler, ServerStreams: true, ClientStreams: true, }, }, Metadata: "grpc_lb_v1/service/service.proto", } func init() { proto.RegisterFile("grpc_lb_v1/service/service.proto", fileDescriptor0) } var fileDescriptor0 = []byte{ // 142 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x52, 0x48, 0x2f, 0x2a, 0x48, 0x8e, 0xcf, 0x49, 0x8a, 0x2f, 0x33, 0xd4, 0x2f, 0x4e, 0x2d, 0x2a, 0xcb, 0x4c, 0x4e, 0x85, 0xd1, 0x7a, 0x05, 0x45, 0xf9, 0x25, 0xf9, 0x42, 0x5c, 0x20, 0x15, 0x7a, 0x39, 0x49, 0x7a, 0x65, 0x86, 0x52, 0x4a, 0x48, 0xaa, 0x73, 0x53, 0x8b, 0x8b, 0x13, 0xd3, 0x53, 0x8b, 0xe1, 0x0c, 0x88, 0x7a, 0xa3, 0x24, 0x2e, 0x1e, 0x9f, 0xfc, 0xc4, 0x14, 0xa7, 0xc4, 0x9c, 0xc4, 0xbc, 0xe4, 0xd4, 0x22, 0xa1, 0x20, 0x2e, 0x6e, 0x28, 0x1b, 0x24, 0x2c, 0x24, 0xa7, 0x87, 0x30, 0x4f, 0x0f, 0x49, 0x61, 0x50, 0x6a, 0x61, 0x69, 0x6a, 0x71, 0x89, 0x94, 0x3c, 0x4e, 0xf9, 0xe2, 0x82, 0xfc, 0xbc, 0xe2, 0x54, 0x0d, 0x46, 0x03, 0x46, 0x27, 0xce, 0x28, 0x76, 0xa8, 0x23, 0x93, 0xd8, 0xc0, 0xb6, 0x1a, 0x03, 0x02, 0x00, 0x00, 0xff, 0xff, 0x39, 0x4e, 0xb0, 0xf8, 0xc9, 0x00, 0x00, 0x00, } golang-google-grpc-1.6.0/grpclb/grpc_lb_v1/service/service.proto000066400000000000000000000015501315416461300246510ustar00rootroot00000000000000// Copyright 2016 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. 
// You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. syntax = "proto3"; package grpc.lb.v1; option go_package = "service"; import "grpc_lb_v1/messages/messages.proto"; service LoadBalancer { // Bidirectional rpc to get a list of servers. rpc BalanceLoad(stream LoadBalanceRequest) returns (stream LoadBalanceResponse); } golang-google-grpc-1.6.0/grpclb/grpclb_test.go000066400000000000000000000625421315416461300213150ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ //go:generate protoc --go_out=plugins=:. grpc_lb_v1/messages/messages.proto //go:generate protoc --go_out=Mgrpc_lb_v1/messages/messages.proto=google.golang.org/grpc/grpclb/grpc_lb_v1/messages,plugins=grpc:. grpc_lb_v1/service/service.proto // Package grpclb_test is currently used only for grpclb testing. package grpclb_test import ( "errors" "fmt" "io" "net" "strings" "sync" "testing" "time" "github.com/golang/protobuf/proto" "golang.org/x/net/context" "google.golang.org/grpc" "google.golang.org/grpc/codes" "google.golang.org/grpc/credentials" lbmpb "google.golang.org/grpc/grpclb/grpc_lb_v1/messages" lbspb "google.golang.org/grpc/grpclb/grpc_lb_v1/service" "google.golang.org/grpc/metadata" "google.golang.org/grpc/naming" testpb "google.golang.org/grpc/test/grpc_testing" ) var ( lbsn = "bar.com" besn = "foo.com" lbToken = "iamatoken" // Resolver replaces localhost with fakeName in Next(). // Dialer replaces fakeName with localhost when dialing. // This will test that custom dialer is passed from Dial to grpclb. fakeName = "fake.Name" ) type testWatcher struct { // the channel to receives name resolution updates update chan *naming.Update // the side channel to get to know how many updates in a batch side chan int // the channel to notifiy update injector that the update reading is done readDone chan int } func (w *testWatcher) Next() (updates []*naming.Update, err error) { n, ok := <-w.side if !ok { return nil, fmt.Errorf("w.side is closed") } for i := 0; i < n; i++ { u, ok := <-w.update if !ok { break } if u != nil { // Resolver replaces localhost with fakeName in Next(). // Custom dialer will replace fakeName with localhost when dialing. u.Addr = strings.Replace(u.Addr, "localhost", fakeName, 1) updates = append(updates, u) } } w.readDone <- 0 return } func (w *testWatcher) Close() { } // Inject naming resolution updates to the testWatcher. 
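// The batch size is written to w.side first so that Next() knows how many
// updates to read from w.update, and the injector then blocks on w.readDone
// until Next() has consumed the entire batch.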
func (w *testWatcher) inject(updates []*naming.Update) { w.side <- len(updates) for _, u := range updates { w.update <- u } <-w.readDone } type testNameResolver struct { w *testWatcher addrs []string } func (r *testNameResolver) Resolve(target string) (naming.Watcher, error) { r.w = &testWatcher{ update: make(chan *naming.Update, len(r.addrs)), side: make(chan int, 1), readDone: make(chan int), } r.w.side <- len(r.addrs) for _, addr := range r.addrs { r.w.update <- &naming.Update{ Op: naming.Add, Addr: addr, Metadata: &naming.AddrMetadataGRPCLB{ AddrType: naming.GRPCLB, ServerName: lbsn, }, } } go func() { <-r.w.readDone }() return r.w, nil } func (r *testNameResolver) inject(updates []*naming.Update) { if r.w != nil { r.w.inject(updates) } } type serverNameCheckCreds struct { mu sync.Mutex sn string expected string } func (c *serverNameCheckCreds) ServerHandshake(rawConn net.Conn) (net.Conn, credentials.AuthInfo, error) { if _, err := io.WriteString(rawConn, c.sn); err != nil { fmt.Printf("Failed to write the server name %s to the client %v", c.sn, err) return nil, nil, err } return rawConn, nil, nil } func (c *serverNameCheckCreds) ClientHandshake(ctx context.Context, addr string, rawConn net.Conn) (net.Conn, credentials.AuthInfo, error) { c.mu.Lock() defer c.mu.Unlock() b := make([]byte, len(c.expected)) if _, err := rawConn.Read(b); err != nil { fmt.Printf("Failed to read the server name from the server %v", err) return nil, nil, err } if c.expected != string(b) { fmt.Printf("Read the server name %s want %s", string(b), c.expected) return nil, nil, errors.New("received unexpected server name") } return rawConn, nil, nil } func (c *serverNameCheckCreds) Info() credentials.ProtocolInfo { c.mu.Lock() defer c.mu.Unlock() return credentials.ProtocolInfo{} } func (c *serverNameCheckCreds) Clone() credentials.TransportCredentials { c.mu.Lock() defer c.mu.Unlock() return &serverNameCheckCreds{ expected: c.expected, } } func (c *serverNameCheckCreds) OverrideServerName(s string) error { c.mu.Lock() defer c.mu.Unlock() c.expected = s return nil } // fakeNameDialer replaces fakeName with localhost when dialing. // This will test that custom dialer is passed from Dial to grpclb. 
func fakeNameDialer(addr string, timeout time.Duration) (net.Conn, error) { addr = strings.Replace(addr, fakeName, "localhost", 1) return net.DialTimeout("tcp", addr, timeout) } type remoteBalancer struct { sls []*lbmpb.ServerList intervals []time.Duration statsDura time.Duration done chan struct{} mu sync.Mutex stats lbmpb.ClientStats } func newRemoteBalancer(sls []*lbmpb.ServerList, intervals []time.Duration) *remoteBalancer { return &remoteBalancer{ sls: sls, intervals: intervals, done: make(chan struct{}), } } func (b *remoteBalancer) stop() { close(b.done) } func (b *remoteBalancer) BalanceLoad(stream lbspb.LoadBalancer_BalanceLoadServer) error { req, err := stream.Recv() if err != nil { return err } initReq := req.GetInitialRequest() if initReq.Name != besn { return grpc.Errorf(codes.InvalidArgument, "invalid service name: %v", initReq.Name) } resp := &lbmpb.LoadBalanceResponse{ LoadBalanceResponseType: &lbmpb.LoadBalanceResponse_InitialResponse{ InitialResponse: &lbmpb.InitialLoadBalanceResponse{ ClientStatsReportInterval: &lbmpb.Duration{ Seconds: int64(b.statsDura.Seconds()), Nanos: int32(b.statsDura.Nanoseconds() - int64(b.statsDura.Seconds())*1e9), }, }, }, } if err := stream.Send(resp); err != nil { return err } go func() { for { var ( req *lbmpb.LoadBalanceRequest err error ) if req, err = stream.Recv(); err != nil { return } b.mu.Lock() b.stats.NumCallsStarted += req.GetClientStats().NumCallsStarted b.stats.NumCallsFinished += req.GetClientStats().NumCallsFinished b.stats.NumCallsFinishedWithDropForRateLimiting += req.GetClientStats().NumCallsFinishedWithDropForRateLimiting b.stats.NumCallsFinishedWithDropForLoadBalancing += req.GetClientStats().NumCallsFinishedWithDropForLoadBalancing b.stats.NumCallsFinishedWithClientFailedToSend += req.GetClientStats().NumCallsFinishedWithClientFailedToSend b.stats.NumCallsFinishedKnownReceived += req.GetClientStats().NumCallsFinishedKnownReceived b.mu.Unlock() } }() for k, v := range b.sls { time.Sleep(b.intervals[k]) resp = &lbmpb.LoadBalanceResponse{ LoadBalanceResponseType: &lbmpb.LoadBalanceResponse_ServerList{ ServerList: v, }, } if err := stream.Send(resp); err != nil { return err } } <-b.done return nil } type testServer struct { testpb.TestServiceServer addr string } const testmdkey = "testmd" func (s *testServer) EmptyCall(ctx context.Context, in *testpb.Empty) (*testpb.Empty, error) { md, ok := metadata.FromIncomingContext(ctx) if !ok { return nil, grpc.Errorf(codes.Internal, "failed to receive metadata") } if md == nil || md["lb-token"][0] != lbToken { return nil, grpc.Errorf(codes.Internal, "received unexpected metadata: %v", md) } grpc.SetTrailer(ctx, metadata.Pairs(testmdkey, s.addr)) return &testpb.Empty{}, nil } func (s *testServer) FullDuplexCall(stream testpb.TestService_FullDuplexCallServer) error { return nil } func startBackends(sn string, lis ...net.Listener) (servers []*grpc.Server) { for _, l := range lis { creds := &serverNameCheckCreds{ sn: sn, } s := grpc.NewServer(grpc.Creds(creds)) testpb.RegisterTestServiceServer(s, &testServer{addr: l.Addr().String()}) servers = append(servers, s) go func(s *grpc.Server, l net.Listener) { s.Serve(l) }(s, l) } return } func stopBackends(servers []*grpc.Server) { for _, s := range servers { s.Stop() } } type testServers struct { lbAddr string ls *remoteBalancer lb *grpc.Server beIPs []net.IP bePorts []int } func newLoadBalancer(numberOfBackends int) (tss *testServers, cleanup func(), err error) { var ( beListeners []net.Listener ls *remoteBalancer lb *grpc.Server beIPs 
[]net.IP bePorts []int ) for i := 0; i < numberOfBackends; i++ { // Start a backend. beLis, e := net.Listen("tcp", "localhost:0") if e != nil { err = fmt.Errorf("Failed to listen %v", err) return } beIPs = append(beIPs, beLis.Addr().(*net.TCPAddr).IP) bePorts = append(bePorts, beLis.Addr().(*net.TCPAddr).Port) beListeners = append(beListeners, beLis) } backends := startBackends(besn, beListeners...) // Start a load balancer. lbLis, err := net.Listen("tcp", "localhost:0") if err != nil { err = fmt.Errorf("Failed to create the listener for the load balancer %v", err) return } lbCreds := &serverNameCheckCreds{ sn: lbsn, } lb = grpc.NewServer(grpc.Creds(lbCreds)) if err != nil { err = fmt.Errorf("Failed to generate the port number %v", err) return } ls = newRemoteBalancer(nil, nil) lbspb.RegisterLoadBalancerServer(lb, ls) go func() { lb.Serve(lbLis) }() tss = &testServers{ lbAddr: lbLis.Addr().String(), ls: ls, lb: lb, beIPs: beIPs, bePorts: bePorts, } cleanup = func() { defer stopBackends(backends) defer func() { ls.stop() lb.Stop() }() } return } func TestGRPCLB(t *testing.T) { tss, cleanup, err := newLoadBalancer(1) if err != nil { t.Fatalf("failed to create new load balancer: %v", err) } defer cleanup() be := &lbmpb.Server{ IpAddress: tss.beIPs[0], Port: int32(tss.bePorts[0]), LoadBalanceToken: lbToken, } var bes []*lbmpb.Server bes = append(bes, be) sl := &lbmpb.ServerList{ Servers: bes, } tss.ls.sls = []*lbmpb.ServerList{sl} tss.ls.intervals = []time.Duration{0} creds := serverNameCheckCreds{ expected: besn, } ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() cc, err := grpc.DialContext(ctx, besn, grpc.WithBalancer(grpc.NewGRPCLBBalancer(&testNameResolver{addrs: []string{tss.lbAddr}})), grpc.WithBlock(), grpc.WithTransportCredentials(&creds), grpc.WithDialer(fakeNameDialer)) if err != nil { t.Fatalf("Failed to dial to the backend %v", err) } testC := testpb.NewTestServiceClient(cc) if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}); err != nil { t.Fatalf("%v.EmptyCall(_, _) = _, %v, want _, ", testC, err) } cc.Close() } func TestDropRequest(t *testing.T) { tss, cleanup, err := newLoadBalancer(2) if err != nil { t.Fatalf("failed to create new load balancer: %v", err) } defer cleanup() tss.ls.sls = []*lbmpb.ServerList{{ Servers: []*lbmpb.Server{{ IpAddress: tss.beIPs[0], Port: int32(tss.bePorts[0]), LoadBalanceToken: lbToken, DropForLoadBalancing: true, }, { IpAddress: tss.beIPs[1], Port: int32(tss.bePorts[1]), LoadBalanceToken: lbToken, DropForLoadBalancing: false, }}, }} tss.ls.intervals = []time.Duration{0} creds := serverNameCheckCreds{ expected: besn, } ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() cc, err := grpc.DialContext(ctx, besn, grpc.WithBalancer(grpc.NewGRPCLBBalancer(&testNameResolver{addrs: []string{tss.lbAddr}})), grpc.WithBlock(), grpc.WithTransportCredentials(&creds), grpc.WithDialer(fakeNameDialer)) if err != nil { t.Fatalf("Failed to dial to the backend %v", err) } testC := testpb.NewTestServiceClient(cc) // The 1st, non-fail-fast RPC should succeed. This ensures both server // connections are made, because the first one has DropForLoadBalancing set to true. if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}, grpc.FailFast(false)); err != nil { t.Fatalf("%v.SayHello(_, _) = _, %v, want _, ", testC, err) } for i := 0; i < 3; i++ { // Odd fail-fast RPCs should fail, because the 1st backend has DropForLoadBalancing // set to true. 
if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}); grpc.Code(err) != codes.Unavailable { t.Fatalf("%v.EmptyCall(_, _) = _, %v, want _, %s", testC, err, codes.Unavailable) } // Even fail-fast RPCs should succeed since they choose the // non-drop-request backend according to the round robin policy. if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}); err != nil { t.Fatalf("%v.EmptyCall(_, _) = _, %v, want _, ", testC, err) } } cc.Close() } func TestDropRequestFailedNonFailFast(t *testing.T) { tss, cleanup, err := newLoadBalancer(1) if err != nil { t.Fatalf("failed to create new load balancer: %v", err) } defer cleanup() be := &lbmpb.Server{ IpAddress: tss.beIPs[0], Port: int32(tss.bePorts[0]), LoadBalanceToken: lbToken, DropForLoadBalancing: true, } var bes []*lbmpb.Server bes = append(bes, be) sl := &lbmpb.ServerList{ Servers: bes, } tss.ls.sls = []*lbmpb.ServerList{sl} tss.ls.intervals = []time.Duration{0} creds := serverNameCheckCreds{ expected: besn, } ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() cc, err := grpc.DialContext(ctx, besn, grpc.WithBalancer(grpc.NewGRPCLBBalancer(&testNameResolver{addrs: []string{tss.lbAddr}})), grpc.WithBlock(), grpc.WithTransportCredentials(&creds), grpc.WithDialer(fakeNameDialer)) if err != nil { t.Fatalf("Failed to dial to the backend %v", err) } testC := testpb.NewTestServiceClient(cc) ctx, cancel = context.WithTimeout(context.Background(), 10*time.Millisecond) defer cancel() if _, err := testC.EmptyCall(ctx, &testpb.Empty{}, grpc.FailFast(false)); grpc.Code(err) != codes.DeadlineExceeded { t.Fatalf("%v.EmptyCall(_, _) = _, %v, want _, %s", testC, err, codes.DeadlineExceeded) } cc.Close() } // When the balancer in use disconnects, grpclb should connect to the next address from resolved balancer address list. func TestBalancerDisconnects(t *testing.T) { var ( lbAddrs []string lbs []*grpc.Server ) for i := 0; i < 3; i++ { tss, cleanup, err := newLoadBalancer(1) if err != nil { t.Fatalf("failed to create new load balancer: %v", err) } defer cleanup() be := &lbmpb.Server{ IpAddress: tss.beIPs[0], Port: int32(tss.bePorts[0]), LoadBalanceToken: lbToken, } var bes []*lbmpb.Server bes = append(bes, be) sl := &lbmpb.ServerList{ Servers: bes, } tss.ls.sls = []*lbmpb.ServerList{sl} tss.ls.intervals = []time.Duration{0} lbAddrs = append(lbAddrs, tss.lbAddr) lbs = append(lbs, tss.lb) } creds := serverNameCheckCreds{ expected: besn, } ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() resolver := &testNameResolver{ addrs: lbAddrs[:2], } cc, err := grpc.DialContext(ctx, besn, grpc.WithBalancer(grpc.NewGRPCLBBalancer(resolver)), grpc.WithBlock(), grpc.WithTransportCredentials(&creds), grpc.WithDialer(fakeNameDialer)) if err != nil { t.Fatalf("Failed to dial to the backend %v", err) } testC := testpb.NewTestServiceClient(cc) var previousTrailer string trailer := metadata.MD{} if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}, grpc.Trailer(&trailer), grpc.FailFast(false)); err != nil { t.Fatalf("%v.EmptyCall(_, _) = _, %v, want _, ", testC, err) } else { previousTrailer = trailer[testmdkey][0] } // The initial resolver update contains lbs[0] and lbs[1]. // When lbs[0] is stopped, lbs[1] should be used. 
lbs[0].Stop() for { if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}, grpc.Trailer(&trailer), grpc.FailFast(false)); err != nil { t.Fatalf("%v.EmptyCall(_, _) = _, %v, want _, ", testC, err) } else if trailer[testmdkey][0] != previousTrailer { // A new backend server should receive the request. // The trailer contains the backend address, so the trailer should be different from the previous one. previousTrailer = trailer[testmdkey][0] break } time.Sleep(100 * time.Millisecond) } // Inject a update to add lbs[2] to resolved addresses. resolver.inject([]*naming.Update{ {Op: naming.Add, Addr: lbAddrs[2], Metadata: &naming.AddrMetadataGRPCLB{ AddrType: naming.GRPCLB, ServerName: lbsn, }, }, }) // Stop lbs[1]. Now lbs[0] and lbs[1] are all stopped. lbs[2] should be used. lbs[1].Stop() for { if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}, grpc.Trailer(&trailer), grpc.FailFast(false)); err != nil { t.Fatalf("%v.EmptyCall(_, _) = _, %v, want _, ", testC, err) } else if trailer[testmdkey][0] != previousTrailer { // A new backend server should receive the request. // The trailer contains the backend address, so the trailer should be different from the previous one. break } time.Sleep(100 * time.Millisecond) } cc.Close() } type failPreRPCCred struct{} func (failPreRPCCred) GetRequestMetadata(ctx context.Context, uri ...string) (map[string]string, error) { if strings.Contains(uri[0], "failtosend") { return nil, fmt.Errorf("rpc should fail to send") } return nil, nil } func (failPreRPCCred) RequireTransportSecurity() bool { return false } func checkStats(stats *lbmpb.ClientStats, expected *lbmpb.ClientStats) error { if !proto.Equal(stats, expected) { return fmt.Errorf("stats not equal: got %+v, want %+v", stats, expected) } return nil } func runAndGetStats(t *testing.T, dropForLoadBalancing, dropForRateLimiting bool, runRPCs func(*grpc.ClientConn)) lbmpb.ClientStats { tss, cleanup, err := newLoadBalancer(3) if err != nil { t.Fatalf("failed to create new load balancer: %v", err) } defer cleanup() tss.ls.sls = []*lbmpb.ServerList{{ Servers: []*lbmpb.Server{{ IpAddress: tss.beIPs[2], Port: int32(tss.bePorts[2]), LoadBalanceToken: lbToken, DropForLoadBalancing: dropForLoadBalancing, DropForRateLimiting: dropForRateLimiting, }}, }} tss.ls.intervals = []time.Duration{0} tss.ls.statsDura = 100 * time.Millisecond creds := serverNameCheckCreds{expected: besn} ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() cc, err := grpc.DialContext(ctx, besn, grpc.WithBalancer(grpc.NewGRPCLBBalancer(&testNameResolver{addrs: []string{tss.lbAddr}})), grpc.WithTransportCredentials(&creds), grpc.WithPerRPCCredentials(failPreRPCCred{}), grpc.WithBlock(), grpc.WithDialer(fakeNameDialer)) if err != nil { t.Fatalf("Failed to dial to the backend %v", err) } defer cc.Close() runRPCs(cc) time.Sleep(1 * time.Second) tss.ls.mu.Lock() stats := tss.ls.stats tss.ls.mu.Unlock() return stats } const countRPC = 40 func TestGRPCLBStatsUnarySuccess(t *testing.T) { stats := runAndGetStats(t, false, false, func(cc *grpc.ClientConn) { testC := testpb.NewTestServiceClient(cc) // The first non-failfast RPC succeeds, all connections are up. 
if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}, grpc.FailFast(false)); err != nil { t.Fatalf("%v.EmptyCall(_, _) = _, %v, want _, ", testC, err) } for i := 0; i < countRPC-1; i++ { testC.EmptyCall(context.Background(), &testpb.Empty{}) } }) if err := checkStats(&stats, &lbmpb.ClientStats{ NumCallsStarted: int64(countRPC), NumCallsFinished: int64(countRPC), NumCallsFinishedKnownReceived: int64(countRPC), }); err != nil { t.Fatal(err) } } func TestGRPCLBStatsUnaryDropLoadBalancing(t *testing.T) { c := 0 stats := runAndGetStats(t, true, false, func(cc *grpc.ClientConn) { testC := testpb.NewTestServiceClient(cc) for { c++ if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}); err != nil { if strings.Contains(err.Error(), "drops requests") { break } } } for i := 0; i < countRPC; i++ { testC.EmptyCall(context.Background(), &testpb.Empty{}) } }) if err := checkStats(&stats, &lbmpb.ClientStats{ NumCallsStarted: int64(countRPC + c), NumCallsFinished: int64(countRPC + c), NumCallsFinishedWithDropForLoadBalancing: int64(countRPC + 1), NumCallsFinishedWithClientFailedToSend: int64(c - 1), }); err != nil { t.Fatal(err) } } func TestGRPCLBStatsUnaryDropRateLimiting(t *testing.T) { c := 0 stats := runAndGetStats(t, false, true, func(cc *grpc.ClientConn) { testC := testpb.NewTestServiceClient(cc) for { c++ if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}); err != nil { if strings.Contains(err.Error(), "drops requests") { break } } } for i := 0; i < countRPC; i++ { testC.EmptyCall(context.Background(), &testpb.Empty{}) } }) if err := checkStats(&stats, &lbmpb.ClientStats{ NumCallsStarted: int64(countRPC + c), NumCallsFinished: int64(countRPC + c), NumCallsFinishedWithDropForRateLimiting: int64(countRPC + 1), NumCallsFinishedWithClientFailedToSend: int64(c - 1), }); err != nil { t.Fatal(err) } } func TestGRPCLBStatsUnaryFailedToSend(t *testing.T) { stats := runAndGetStats(t, false, false, func(cc *grpc.ClientConn) { testC := testpb.NewTestServiceClient(cc) // The first non-failfast RPC succeeds, all connections are up. if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}, grpc.FailFast(false)); err != nil { t.Fatalf("%v.EmptyCall(_, _) = _, %v, want _, ", testC, err) } for i := 0; i < countRPC-1; i++ { grpc.Invoke(context.Background(), "failtosend", &testpb.Empty{}, nil, cc) } }) if err := checkStats(&stats, &lbmpb.ClientStats{ NumCallsStarted: int64(countRPC), NumCallsFinished: int64(countRPC), NumCallsFinishedWithClientFailedToSend: int64(countRPC - 1), NumCallsFinishedKnownReceived: 1, }); err != nil { t.Fatal(err) } } func TestGRPCLBStatsStreamingSuccess(t *testing.T) { stats := runAndGetStats(t, false, false, func(cc *grpc.ClientConn) { testC := testpb.NewTestServiceClient(cc) // The first non-failfast RPC succeeds, all connections are up. stream, err := testC.FullDuplexCall(context.Background(), grpc.FailFast(false)) if err != nil { t.Fatalf("%v.FullDuplexCall(_, _) = _, %v, want _, ", testC, err) } for { if _, err = stream.Recv(); err == io.EOF { break } } for i := 0; i < countRPC-1; i++ { stream, err = testC.FullDuplexCall(context.Background()) if err == nil { // Wait for stream to end if err is nil. 
for { if _, err = stream.Recv(); err == io.EOF { break } } } } }) if err := checkStats(&stats, &lbmpb.ClientStats{ NumCallsStarted: int64(countRPC), NumCallsFinished: int64(countRPC), NumCallsFinishedKnownReceived: int64(countRPC), }); err != nil { t.Fatal(err) } } func TestGRPCLBStatsStreamingDropLoadBalancing(t *testing.T) { c := 0 stats := runAndGetStats(t, true, false, func(cc *grpc.ClientConn) { testC := testpb.NewTestServiceClient(cc) for { c++ if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}); err != nil { if strings.Contains(err.Error(), "drops requests") { break } } } for i := 0; i < countRPC; i++ { testC.FullDuplexCall(context.Background()) } }) if err := checkStats(&stats, &lbmpb.ClientStats{ NumCallsStarted: int64(countRPC + c), NumCallsFinished: int64(countRPC + c), NumCallsFinishedWithDropForLoadBalancing: int64(countRPC + 1), NumCallsFinishedWithClientFailedToSend: int64(c - 1), }); err != nil { t.Fatal(err) } } func TestGRPCLBStatsStreamingDropRateLimiting(t *testing.T) { c := 0 stats := runAndGetStats(t, false, true, func(cc *grpc.ClientConn) { testC := testpb.NewTestServiceClient(cc) for { c++ if _, err := testC.EmptyCall(context.Background(), &testpb.Empty{}); err != nil { if strings.Contains(err.Error(), "drops requests") { break } } } for i := 0; i < countRPC; i++ { testC.FullDuplexCall(context.Background()) } }) if err := checkStats(&stats, &lbmpb.ClientStats{ NumCallsStarted: int64(countRPC + c), NumCallsFinished: int64(countRPC + c), NumCallsFinishedWithDropForRateLimiting: int64(countRPC + 1), NumCallsFinishedWithClientFailedToSend: int64(c - 1), }); err != nil { t.Fatal(err) } } func TestGRPCLBStatsStreamingFailedToSend(t *testing.T) { stats := runAndGetStats(t, false, false, func(cc *grpc.ClientConn) { testC := testpb.NewTestServiceClient(cc) // The first non-failfast RPC succeeds, all connections are up. stream, err := testC.FullDuplexCall(context.Background(), grpc.FailFast(false)) if err != nil { t.Fatalf("%v.FullDuplexCall(_, _) = _, %v, want _, ", testC, err) } for { if _, err = stream.Recv(); err == io.EOF { break } } for i := 0; i < countRPC-1; i++ { grpc.NewClientStream(context.Background(), &grpc.StreamDesc{}, cc, "failtosend") } }) if err := checkStats(&stats, &lbmpb.ClientStats{ NumCallsStarted: int64(countRPC), NumCallsFinished: int64(countRPC), NumCallsFinishedWithClientFailedToSend: int64(countRPC - 1), NumCallsFinishedKnownReceived: 1, }); err != nil { t.Fatal(err) } } golang-google-grpc-1.6.0/grpclog/000077500000000000000000000000001315416461300166315ustar00rootroot00000000000000golang-google-grpc-1.6.0/grpclog/glogger/000077500000000000000000000000001315416461300202575ustar00rootroot00000000000000golang-google-grpc-1.6.0/grpclog/glogger/glogger.go000066400000000000000000000041371315416461300222410ustar00rootroot00000000000000/* * * Copyright 2015 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package glogger defines glog-based logging for grpc. // Importing this package will install glog as the logger used by grpclog. 
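//
// Illustrative usage sketch (not part of the original file), written as a commented
// example so it reads as package documentation: an application enables glog-based
// gRPC logging by blank-importing this package, which runs its init and installs the
// logger. flag.Parse lets glog pick up its -v and -logtostderr flags; the program
// below is hypothetical.
//
//	package main
//
//	import (
//		"flag"
//
//		"google.golang.org/grpc/grpclog"
//		_ "google.golang.org/grpc/grpclog/glogger" // init installs glog via grpclog.SetLoggerV2
//	)
//
//	func main() {
//		flag.Parse()
//		grpclog.Info("gRPC logs are now routed through glog")
//	}
//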
package glogger import ( "fmt" "github.com/golang/glog" "google.golang.org/grpc/grpclog" ) func init() { grpclog.SetLoggerV2(&glogger{}) } type glogger struct{} func (g *glogger) Info(args ...interface{}) { glog.InfoDepth(2, args...) } func (g *glogger) Infoln(args ...interface{}) { glog.InfoDepth(2, fmt.Sprintln(args...)) } func (g *glogger) Infof(format string, args ...interface{}) { glog.InfoDepth(2, fmt.Sprintf(format, args...)) } func (g *glogger) Warning(args ...interface{}) { glog.WarningDepth(2, args...) } func (g *glogger) Warningln(args ...interface{}) { glog.WarningDepth(2, fmt.Sprintln(args...)) } func (g *glogger) Warningf(format string, args ...interface{}) { glog.WarningDepth(2, fmt.Sprintf(format, args...)) } func (g *glogger) Error(args ...interface{}) { glog.ErrorDepth(2, args...) } func (g *glogger) Errorln(args ...interface{}) { glog.ErrorDepth(2, fmt.Sprintln(args...)) } func (g *glogger) Errorf(format string, args ...interface{}) { glog.ErrorDepth(2, fmt.Sprintf(format, args...)) } func (g *glogger) Fatal(args ...interface{}) { glog.FatalDepth(2, args...) } func (g *glogger) Fatalln(args ...interface{}) { glog.FatalDepth(2, fmt.Sprintln(args...)) } func (g *glogger) Fatalf(format string, args ...interface{}) { glog.FatalDepth(2, fmt.Sprintf(format, args...)) } func (g *glogger) V(l int) bool { return bool(glog.V(glog.Level(l))) } golang-google-grpc-1.6.0/grpclog/grpclog.go000066400000000000000000000071601315416461300206210ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package grpclog defines logging for grpc. // // All logs in transport package only go to verbose level 2. // All logs in other packages in grpc are logged in spite of the verbosity level. // // In the default logger, // severity level can be set by environment variable GRPC_GO_LOG_SEVERITY_LEVEL, // verbosity level can be set by GRPC_GO_LOG_VERBOSITY_LEVEL. package grpclog // import "google.golang.org/grpc/grpclog" import "os" var logger = newLoggerV2() // V reports whether verbosity level l is at least the requested verbose level. func V(l int) bool { return logger.V(l) } // Info logs to the INFO log. func Info(args ...interface{}) { logger.Info(args...) } // Infof logs to the INFO log. Arguments are handled in the manner of fmt.Printf. func Infof(format string, args ...interface{}) { logger.Infof(format, args...) } // Infoln logs to the INFO log. Arguments are handled in the manner of fmt.Println. func Infoln(args ...interface{}) { logger.Infoln(args...) } // Warning logs to the WARNING log. func Warning(args ...interface{}) { logger.Warning(args...) } // Warningf logs to the WARNING log. Arguments are handled in the manner of fmt.Printf. func Warningf(format string, args ...interface{}) { logger.Warningf(format, args...) } // Warningln logs to the WARNING log. Arguments are handled in the manner of fmt.Println. func Warningln(args ...interface{}) { logger.Warningln(args...) } // Error logs to the ERROR log. 
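// Illustrative sketch (not part of the original file): call sites typically guard
// verbose output with V so the message is only formatted when the verbosity
// configured via GRPC_GO_LOG_VERBOSITY_LEVEL is high enough. The function name and
// messages below are hypothetical.
func exampleVerbosityGatedLogging(target string) {
	if V(2) {
		// Only formatted and emitted at verbosity level 2 or higher.
		Infof("resolving target %q", target)
	}
	// Warnings and errors are logged regardless of the verbosity level.
	Warning("this message always reaches the WARNING log")
}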
func Error(args ...interface{}) { logger.Error(args...) } // Errorf logs to the ERROR log. Arguments are handled in the manner of fmt.Printf. func Errorf(format string, args ...interface{}) { logger.Errorf(format, args...) } // Errorln logs to the ERROR log. Arguments are handled in the manner of fmt.Println. func Errorln(args ...interface{}) { logger.Errorln(args...) } // Fatal logs to the FATAL log. Arguments are handled in the manner of fmt.Print. // It calls os.Exit() with exit code 1. func Fatal(args ...interface{}) { logger.Fatal(args...) // Make sure fatal logs will exit. os.Exit(1) } // Fatalf logs to the FATAL log. Arguments are handled in the manner of fmt.Printf. // It calls os.Exit() with exit code 1. func Fatalf(format string, args ...interface{}) { logger.Fatalf(format, args...) // Make sure fatal logs will exit. os.Exit(1) } // Fatalln logs to the FATAL log. Arguments are handled in the manner of fmt.Println. // It calls os.Exit() with exit code 1. func Fatalln(args ...interface{}) { logger.Fatalln(args...) // Make sure fatal logs will exit. os.Exit(1) } // Print prints to the logger. Arguments are handled in the manner of fmt.Print. // Deprecated: use Info. func Print(args ...interface{}) { logger.Info(args...) } // Printf prints to the logger. Arguments are handled in the manner of fmt.Printf. // Deprecated: use Infof. func Printf(format string, args ...interface{}) { logger.Infof(format, args...) } // Println prints to the logger. Arguments are handled in the manner of fmt.Println. // Deprecated: use Infoln. func Println(args ...interface{}) { logger.Infoln(args...) } golang-google-grpc-1.6.0/grpclog/logger.go000066400000000000000000000041151315416461300204400ustar00rootroot00000000000000/* * * Copyright 2015 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpclog // Logger mimics golang's standard Logger as an interface. // Deprecated: use LoggerV2. type Logger interface { Fatal(args ...interface{}) Fatalf(format string, args ...interface{}) Fatalln(args ...interface{}) Print(args ...interface{}) Printf(format string, args ...interface{}) Println(args ...interface{}) } // SetLogger sets the logger that is used in grpc. Call only from // init() functions. // Deprecated: use SetLoggerV2. func SetLogger(l Logger) { logger = &loggerWrapper{Logger: l} } // loggerWrapper wraps Logger into a LoggerV2. type loggerWrapper struct { Logger } func (g *loggerWrapper) Info(args ...interface{}) { g.Logger.Print(args...) } func (g *loggerWrapper) Infoln(args ...interface{}) { g.Logger.Println(args...) } func (g *loggerWrapper) Infof(format string, args ...interface{}) { g.Logger.Printf(format, args...) } func (g *loggerWrapper) Warning(args ...interface{}) { g.Logger.Print(args...) } func (g *loggerWrapper) Warningln(args ...interface{}) { g.Logger.Println(args...) } func (g *loggerWrapper) Warningf(format string, args ...interface{}) { g.Logger.Printf(format, args...) } func (g *loggerWrapper) Error(args ...interface{}) { g.Logger.Print(args...)
} func (g *loggerWrapper) Errorln(args ...interface{}) { g.Logger.Println(args...) } func (g *loggerWrapper) Errorf(format string, args ...interface{}) { g.Logger.Printf(format, args...) } func (g *loggerWrapper) V(l int) bool { // Returns true for all verbose level. return true } golang-google-grpc-1.6.0/grpclog/loggerv2.go000066400000000000000000000144371315416461300207200ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpclog import ( "io" "io/ioutil" "log" "os" "strconv" ) // LoggerV2 does underlying logging work for grpclog. type LoggerV2 interface { // Info logs to INFO log. Arguments are handled in the manner of fmt.Print. Info(args ...interface{}) // Infoln logs to INFO log. Arguments are handled in the manner of fmt.Println. Infoln(args ...interface{}) // Infof logs to INFO log. Arguments are handled in the manner of fmt.Printf. Infof(format string, args ...interface{}) // Warning logs to WARNING log. Arguments are handled in the manner of fmt.Print. Warning(args ...interface{}) // Warningln logs to WARNING log. Arguments are handled in the manner of fmt.Println. Warningln(args ...interface{}) // Warningf logs to WARNING log. Arguments are handled in the manner of fmt.Printf. Warningf(format string, args ...interface{}) // Error logs to ERROR log. Arguments are handled in the manner of fmt.Print. Error(args ...interface{}) // Errorln logs to ERROR log. Arguments are handled in the manner of fmt.Println. Errorln(args ...interface{}) // Errorf logs to ERROR log. Arguments are handled in the manner of fmt.Printf. Errorf(format string, args ...interface{}) // Fatal logs to ERROR log. Arguments are handled in the manner of fmt.Print. // gRPC ensures that all Fatal logs will exit with os.Exit(1). // Implementations may also call os.Exit() with a non-zero exit code. Fatal(args ...interface{}) // Fatalln logs to ERROR log. Arguments are handled in the manner of fmt.Println. // gRPC ensures that all Fatal logs will exit with os.Exit(1). // Implementations may also call os.Exit() with a non-zero exit code. Fatalln(args ...interface{}) // Fatalf logs to ERROR log. Arguments are handled in the manner of fmt.Printf. // gRPC ensures that all Fatal logs will exit with os.Exit(1). // Implementations may also call os.Exit() with a non-zero exit code. Fatalf(format string, args ...interface{}) // V reports whether verbosity level l is at least the requested verbose level. V(l int) bool } // SetLoggerV2 sets logger that is used in grpc to a V2 logger. // Not mutex-protected, should be called before any gRPC functions. func SetLoggerV2(l LoggerV2) { logger = l } const ( // infoLog indicates Info severity. infoLog int = iota // warningLog indicates Warning severity. warningLog // errorLog indicates Error severity. errorLog // fatalLog indicates Fatal severity. fatalLog ) // severityName contains the string representation of each severity. 
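// Illustrative sketch (not part of the original file): an application that wants
// custom log destinations builds a LoggerV2 with the constructors defined below and
// installs it before making any other gRPC call. The writer and verbosity choices
// here are arbitrary examples.
func exampleInstallCustomLoggerV2() {
	// Discard INFO output, send WARNING and above to stderr, and enable verbosity level 2.
	l := NewLoggerV2WithVerbosity(ioutil.Discard, os.Stderr, os.Stderr, 2)
	SetLoggerV2(l)
}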
var severityName = []string{ infoLog: "INFO", warningLog: "WARNING", errorLog: "ERROR", fatalLog: "FATAL", } // loggerT is the default logger used by grpclog. type loggerT struct { m []*log.Logger v int } // NewLoggerV2 creates a loggerV2 with the provided writers. // Fatal logs will be written to errorW, warningW, infoW, followed by exit(1). // Error logs will be written to errorW, warningW and infoW. // Warning logs will be written to warningW and infoW. // Info logs will be written to infoW. func NewLoggerV2(infoW, warningW, errorW io.Writer) LoggerV2 { return NewLoggerV2WithVerbosity(infoW, warningW, errorW, 0) } // NewLoggerV2WithVerbosity creates a loggerV2 with the provided writers and // verbosity level. func NewLoggerV2WithVerbosity(infoW, warningW, errorW io.Writer, v int) LoggerV2 { var m []*log.Logger m = append(m, log.New(infoW, severityName[infoLog]+": ", log.LstdFlags)) m = append(m, log.New(io.MultiWriter(infoW, warningW), severityName[warningLog]+": ", log.LstdFlags)) ew := io.MultiWriter(infoW, warningW, errorW) // ew will be used for error and fatal. m = append(m, log.New(ew, severityName[errorLog]+": ", log.LstdFlags)) m = append(m, log.New(ew, severityName[fatalLog]+": ", log.LstdFlags)) return &loggerT{m: m, v: v} } // newLoggerV2 creates a loggerV2 to be used as default logger. // All logs are written to stderr. func newLoggerV2() LoggerV2 { errorW := ioutil.Discard warningW := ioutil.Discard infoW := ioutil.Discard logLevel := os.Getenv("GRPC_GO_LOG_SEVERITY_LEVEL") switch logLevel { case "", "ERROR", "error": // If env is unset, set level to ERROR. errorW = os.Stderr case "WARNING", "warning": warningW = os.Stderr case "INFO", "info": infoW = os.Stderr } var v int vLevel := os.Getenv("GRPC_GO_LOG_VERBOSITY_LEVEL") if vl, err := strconv.Atoi(vLevel); err == nil { v = vl } return NewLoggerV2WithVerbosity(infoW, warningW, errorW, v) } func (g *loggerT) Info(args ...interface{}) { g.m[infoLog].Print(args...) } func (g *loggerT) Infoln(args ...interface{}) { g.m[infoLog].Println(args...) } func (g *loggerT) Infof(format string, args ...interface{}) { g.m[infoLog].Printf(format, args...) } func (g *loggerT) Warning(args ...interface{}) { g.m[warningLog].Print(args...) } func (g *loggerT) Warningln(args ...interface{}) { g.m[warningLog].Println(args...) } func (g *loggerT) Warningf(format string, args ...interface{}) { g.m[warningLog].Printf(format, args...) } func (g *loggerT) Error(args ...interface{}) { g.m[errorLog].Print(args...) } func (g *loggerT) Errorln(args ...interface{}) { g.m[errorLog].Println(args...) } func (g *loggerT) Errorf(format string, args ...interface{}) { g.m[errorLog].Printf(format, args...) } func (g *loggerT) Fatal(args ...interface{}) { g.m[fatalLog].Fatal(args...) // No need to call os.Exit() again because log.Logger.Fatal() calls os.Exit(). } func (g *loggerT) Fatalln(args ...interface{}) { g.m[fatalLog].Fatalln(args...) // No need to call os.Exit() again because log.Logger.Fatal() calls os.Exit(). } func (g *loggerT) Fatalf(format string, args ...interface{}) { g.m[fatalLog].Fatalf(format, args...) // No need to call os.Exit() again because log.Logger.Fatal() calls os.Exit(). } func (g *loggerT) V(l int) bool { return l <= g.v } golang-google-grpc-1.6.0/grpclog/loggerv2_test.go000066400000000000000000000034761315416461300217600ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpclog import ( "bytes" "fmt" "regexp" "testing" ) func TestLoggerV2Severity(t *testing.T) { buffers := []*bytes.Buffer{new(bytes.Buffer), new(bytes.Buffer), new(bytes.Buffer)} SetLoggerV2(NewLoggerV2(buffers[infoLog], buffers[warningLog], buffers[errorLog])) Info(severityName[infoLog]) Warning(severityName[warningLog]) Error(severityName[errorLog]) for i := 0; i < fatalLog; i++ { buf := buffers[i] // The content of info buffer should be something like: // INFO: 2017/04/07 14:55:42 INFO // WARNING: 2017/04/07 14:55:42 WARNING // ERROR: 2017/04/07 14:55:42 ERROR for j := i; j < fatalLog; j++ { b, err := buf.ReadBytes('\n') if err != nil { t.Fatal(err) } if err := checkLogForSeverity(j, b); err != nil { t.Fatal(err) } } } } // check if b is in the format of: // WARNING: 2017/04/07 14:55:42 WARNING func checkLogForSeverity(s int, b []byte) error { expected := regexp.MustCompile(fmt.Sprintf(`^%s: [0-9]{4}/[0-9]{2}/[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2} %s\n$`, severityName[s], severityName[s])) if m := expected.Match(b); !m { return fmt.Errorf("got: %v, want string in format of: %v", string(b), severityName[s]+": 2016/10/05 17:09:26 "+severityName[s]) } return nil } golang-google-grpc-1.6.0/health/000077500000000000000000000000001315416461300164415ustar00rootroot00000000000000golang-google-grpc-1.6.0/health/grpc_health_v1/000077500000000000000000000000001315416461300213275ustar00rootroot00000000000000golang-google-grpc-1.6.0/health/grpc_health_v1/health.pb.go000066400000000000000000000152231315416461300235260ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: grpc_health_v1/health.proto /* Package grpc_health_v1 is a generated protocol buffer package. It is generated from these files: grpc_health_v1/health.proto It has these top-level messages: HealthCheckRequest HealthCheckResponse */ package grpc_health_v1 import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import ( context "golang.org/x/net/context" grpc "google.golang.org/grpc" ) // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. 
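// Illustrative sketch (not part of the generated file): querying a server's overall
// health with the generated client. The helper name is hypothetical, and the
// *grpc.ClientConn is assumed to be dialed by the caller against a server that
// registered the Health service.
func exampleCheckOverallHealth(cc *grpc.ClientConn) (HealthCheckResponse_ServingStatus, error) {
	client := NewHealthClient(cc)
	// An empty Service name asks about the health of the server as a whole.
	resp, err := client.Check(context.Background(), &HealthCheckRequest{Service: ""})
	if err != nil {
		return HealthCheckResponse_UNKNOWN, err
	}
	return resp.Status, nil
}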
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type HealthCheckResponse_ServingStatus int32 const ( HealthCheckResponse_UNKNOWN HealthCheckResponse_ServingStatus = 0 HealthCheckResponse_SERVING HealthCheckResponse_ServingStatus = 1 HealthCheckResponse_NOT_SERVING HealthCheckResponse_ServingStatus = 2 ) var HealthCheckResponse_ServingStatus_name = map[int32]string{ 0: "UNKNOWN", 1: "SERVING", 2: "NOT_SERVING", } var HealthCheckResponse_ServingStatus_value = map[string]int32{ "UNKNOWN": 0, "SERVING": 1, "NOT_SERVING": 2, } func (x HealthCheckResponse_ServingStatus) String() string { return proto.EnumName(HealthCheckResponse_ServingStatus_name, int32(x)) } func (HealthCheckResponse_ServingStatus) EnumDescriptor() ([]byte, []int) { return fileDescriptor0, []int{1, 0} } type HealthCheckRequest struct { Service string `protobuf:"bytes,1,opt,name=service" json:"service,omitempty"` } func (m *HealthCheckRequest) Reset() { *m = HealthCheckRequest{} } func (m *HealthCheckRequest) String() string { return proto.CompactTextString(m) } func (*HealthCheckRequest) ProtoMessage() {} func (*HealthCheckRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} } func (m *HealthCheckRequest) GetService() string { if m != nil { return m.Service } return "" } type HealthCheckResponse struct { Status HealthCheckResponse_ServingStatus `protobuf:"varint,1,opt,name=status,enum=grpc.health.v1.HealthCheckResponse_ServingStatus" json:"status,omitempty"` } func (m *HealthCheckResponse) Reset() { *m = HealthCheckResponse{} } func (m *HealthCheckResponse) String() string { return proto.CompactTextString(m) } func (*HealthCheckResponse) ProtoMessage() {} func (*HealthCheckResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} } func (m *HealthCheckResponse) GetStatus() HealthCheckResponse_ServingStatus { if m != nil { return m.Status } return HealthCheckResponse_UNKNOWN } func init() { proto.RegisterType((*HealthCheckRequest)(nil), "grpc.health.v1.HealthCheckRequest") proto.RegisterType((*HealthCheckResponse)(nil), "grpc.health.v1.HealthCheckResponse") proto.RegisterEnum("grpc.health.v1.HealthCheckResponse_ServingStatus", HealthCheckResponse_ServingStatus_name, HealthCheckResponse_ServingStatus_value) } // Reference imports to suppress errors if they are not otherwise used. var _ context.Context var _ grpc.ClientConn // This is a compile-time assertion to ensure that this generated file // is compatible with the grpc package it is being compiled against. const _ = grpc.SupportPackageIsVersion4 // Client API for Health service type HealthClient interface { Check(ctx context.Context, in *HealthCheckRequest, opts ...grpc.CallOption) (*HealthCheckResponse, error) } type healthClient struct { cc *grpc.ClientConn } func NewHealthClient(cc *grpc.ClientConn) HealthClient { return &healthClient{cc} } func (c *healthClient) Check(ctx context.Context, in *HealthCheckRequest, opts ...grpc.CallOption) (*HealthCheckResponse, error) { out := new(HealthCheckResponse) err := grpc.Invoke(ctx, "/grpc.health.v1.Health/Check", in, out, c.cc, opts...) 
if err != nil { return nil, err } return out, nil } // Server API for Health service type HealthServer interface { Check(context.Context, *HealthCheckRequest) (*HealthCheckResponse, error) } func RegisterHealthServer(s *grpc.Server, srv HealthServer) { s.RegisterService(&_Health_serviceDesc, srv) } func _Health_Check_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(HealthCheckRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(HealthServer).Check(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/grpc.health.v1.Health/Check", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(HealthServer).Check(ctx, req.(*HealthCheckRequest)) } return interceptor(ctx, in, info, handler) } var _Health_serviceDesc = grpc.ServiceDesc{ ServiceName: "grpc.health.v1.Health", HandlerType: (*HealthServer)(nil), Methods: []grpc.MethodDesc{ { MethodName: "Check", Handler: _Health_Check_Handler, }, }, Streams: []grpc.StreamDesc{}, Metadata: "grpc_health_v1/health.proto", } func init() { proto.RegisterFile("grpc_health_v1/health.proto", fileDescriptor0) } var fileDescriptor0 = []byte{ // 213 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x92, 0x4e, 0x2f, 0x2a, 0x48, 0x8e, 0xcf, 0x48, 0x4d, 0xcc, 0x29, 0xc9, 0x88, 0x2f, 0x33, 0xd4, 0x87, 0xb0, 0xf4, 0x0a, 0x8a, 0xf2, 0x4b, 0xf2, 0x85, 0xf8, 0x40, 0x92, 0x7a, 0x50, 0xa1, 0x32, 0x43, 0x25, 0x3d, 0x2e, 0x21, 0x0f, 0x30, 0xc7, 0x39, 0x23, 0x35, 0x39, 0x3b, 0x28, 0xb5, 0xb0, 0x34, 0xb5, 0xb8, 0x44, 0x48, 0x82, 0x8b, 0xbd, 0x38, 0xb5, 0xa8, 0x2c, 0x33, 0x39, 0x55, 0x82, 0x51, 0x81, 0x51, 0x83, 0x33, 0x08, 0xc6, 0x55, 0x9a, 0xc3, 0xc8, 0x25, 0x8c, 0xa2, 0xa1, 0xb8, 0x20, 0x3f, 0xaf, 0x38, 0x55, 0xc8, 0x93, 0x8b, 0xad, 0xb8, 0x24, 0xb1, 0xa4, 0xb4, 0x18, 0xac, 0x81, 0xcf, 0xc8, 0x50, 0x0f, 0xd5, 0x22, 0x3d, 0x2c, 0x9a, 0xf4, 0x82, 0x41, 0x86, 0xe6, 0xa5, 0x07, 0x83, 0x35, 0x06, 0x41, 0x0d, 0x50, 0xb2, 0xe2, 0xe2, 0x45, 0x91, 0x10, 0xe2, 0xe6, 0x62, 0x0f, 0xf5, 0xf3, 0xf6, 0xf3, 0x0f, 0xf7, 0x13, 0x60, 0x00, 0x71, 0x82, 0x5d, 0x83, 0xc2, 0x3c, 0xfd, 0xdc, 0x05, 0x18, 0x85, 0xf8, 0xb9, 0xb8, 0xfd, 0xfc, 0x43, 0xe2, 0x61, 0x02, 0x4c, 0x46, 0x51, 0x5c, 0x6c, 0x10, 0x8b, 0x84, 0x02, 0xb8, 0x58, 0xc1, 0x96, 0x09, 0x29, 0xe1, 0x75, 0x09, 0xd8, 0xbf, 0x52, 0xca, 0x44, 0xb8, 0x36, 0x89, 0x0d, 0x1c, 0x82, 0xc6, 0x80, 0x00, 0x00, 0x00, 0xff, 0xff, 0x53, 0x2b, 0x65, 0x20, 0x60, 0x01, 0x00, 0x00, } golang-google-grpc-1.6.0/health/grpc_health_v1/health.proto000066400000000000000000000016131315416461300236620ustar00rootroot00000000000000// Copyright 2017 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. 
syntax = "proto3"; package grpc.health.v1; message HealthCheckRequest { string service = 1; } message HealthCheckResponse { enum ServingStatus { UNKNOWN = 0; SERVING = 1; NOT_SERVING = 2; } ServingStatus status = 1; } service Health{ rpc Check(HealthCheckRequest) returns (HealthCheckResponse); } golang-google-grpc-1.6.0/health/health.go000066400000000000000000000043331315416461300202400ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ //go:generate protoc --go_out=plugins=grpc:. grpc_health_v1/health.proto // Package health provides some utility functions to health-check a server. The implementation // is based on protobuf. Users need to write their own implementations if other IDLs are used. package health import ( "sync" "golang.org/x/net/context" "google.golang.org/grpc" "google.golang.org/grpc/codes" healthpb "google.golang.org/grpc/health/grpc_health_v1" ) // Server implements `service Health`. type Server struct { mu sync.Mutex // statusMap stores the serving status of the services this Server monitors. statusMap map[string]healthpb.HealthCheckResponse_ServingStatus } // NewServer returns a new Server. func NewServer() *Server { return &Server{ statusMap: make(map[string]healthpb.HealthCheckResponse_ServingStatus), } } // Check implements `service Health`. func (s *Server) Check(ctx context.Context, in *healthpb.HealthCheckRequest) (*healthpb.HealthCheckResponse, error) { s.mu.Lock() defer s.mu.Unlock() if in.Service == "" { // check the server overall health status. return &healthpb.HealthCheckResponse{ Status: healthpb.HealthCheckResponse_SERVING, }, nil } if status, ok := s.statusMap[in.Service]; ok { return &healthpb.HealthCheckResponse{ Status: status, }, nil } return nil, grpc.Errorf(codes.NotFound, "unknown service") } // SetServingStatus is called when need to reset the serving status of a service // or insert a new service entry into the statusMap. func (s *Server) SetServingStatus(service string, status healthpb.HealthCheckResponse_ServingStatus) { s.mu.Lock() s.statusMap[service] = status s.mu.Unlock() } golang-google-grpc-1.6.0/interceptor.go000066400000000000000000000074231315416461300200670ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "golang.org/x/net/context" ) // UnaryInvoker is called by UnaryClientInterceptor to complete RPCs. 
type UnaryInvoker func(ctx context.Context, method string, req, reply interface{}, cc *ClientConn, opts ...CallOption) error // UnaryClientInterceptor intercepts the execution of a unary RPC on the client. invoker is the handler to complete the RPC // and it is the responsibility of the interceptor to call it. // This is an EXPERIMENTAL API. type UnaryClientInterceptor func(ctx context.Context, method string, req, reply interface{}, cc *ClientConn, invoker UnaryInvoker, opts ...CallOption) error // Streamer is called by StreamClientInterceptor to create a ClientStream. type Streamer func(ctx context.Context, desc *StreamDesc, cc *ClientConn, method string, opts ...CallOption) (ClientStream, error) // StreamClientInterceptor intercepts the creation of ClientStream. It may return a custom ClientStream to intercept all I/O // operations. streamer is the handler to create a ClientStream and it is the responsibility of the interceptor to call it. // This is an EXPERIMENTAL API. type StreamClientInterceptor func(ctx context.Context, desc *StreamDesc, cc *ClientConn, method string, streamer Streamer, opts ...CallOption) (ClientStream, error) // UnaryServerInfo consists of various information about a unary RPC on // server side. All per-rpc information may be mutated by the interceptor. type UnaryServerInfo struct { // Server is the service implementation the user provides. This is read-only. Server interface{} // FullMethod is the full RPC method string, i.e., /package.service/method. FullMethod string } // UnaryHandler defines the handler invoked by UnaryServerInterceptor to complete the normal // execution of a unary RPC. type UnaryHandler func(ctx context.Context, req interface{}) (interface{}, error) // UnaryServerInterceptor provides a hook to intercept the execution of a unary RPC on the server. info // contains all the information of this RPC the interceptor can operate on. And handler is the wrapper // of the service method implementation. It is the responsibility of the interceptor to invoke handler // to complete the RPC. type UnaryServerInterceptor func(ctx context.Context, req interface{}, info *UnaryServerInfo, handler UnaryHandler) (resp interface{}, err error) // StreamServerInfo consists of various information about a streaming RPC on // server side. All per-rpc information may be mutated by the interceptor. type StreamServerInfo struct { // FullMethod is the full RPC method string, i.e., /package.service/method. FullMethod string // IsClientStream indicates whether the RPC is a client streaming RPC. IsClientStream bool // IsServerStream indicates whether the RPC is a server streaming RPC. IsServerStream bool } // StreamServerInterceptor provides a hook to intercept the execution of a streaming RPC on the server. // info contains all the information of this RPC the interceptor can operate on. And handler is the // service method implementation. It is the responsibility of the interceptor to invoke handler to // complete the RPC. type StreamServerInterceptor func(srv interface{}, ss ServerStream, info *StreamServerInfo, handler StreamHandler) error golang-google-grpc-1.6.0/internal/000077500000000000000000000000001315416461300170105ustar00rootroot00000000000000golang-google-grpc-1.6.0/internal/internal.go000066400000000000000000000024501315416461300211540ustar00rootroot00000000000000/* * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package internal contains gRPC-internal code for testing, to avoid polluting // the godoc of the top-level grpc package. package internal // TestingCloseConns closes all existing transports but keeps // grpcServer.lis accepting new connections. // // The provided grpcServer must be of type *grpc.Server. It is untyped // for circular dependency reasons. var TestingCloseConns func(grpcServer interface{}) // TestingUseHandlerImpl enables the http.Handler-based server implementation. // It must be called before Serve and requires TLS credentials. // // The provided grpcServer must be of type *grpc.Server. It is untyped // for circular dependency reasons. var TestingUseHandlerImpl func(grpcServer interface{}) golang-google-grpc-1.6.0/interop/000077500000000000000000000000001315416461300166545ustar00rootroot00000000000000golang-google-grpc-1.6.0/interop/client/000077500000000000000000000000001315416461300201325ustar00rootroot00000000000000golang-google-grpc-1.6.0/interop/client/client.go000066400000000000000000000170731315416461300217470ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package main import ( "flag" "net" "strconv" "google.golang.org/grpc" "google.golang.org/grpc/credentials" "google.golang.org/grpc/credentials/oauth" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/interop" testpb "google.golang.org/grpc/interop/grpc_testing" "google.golang.org/grpc/testdata" ) var ( caFile = flag.String("ca_file", "", "The file containning the CA root cert file") useTLS = flag.Bool("use_tls", false, "Connection uses TLS if true, else plain TCP") testCA = flag.Bool("use_test_ca", false, "Whether to replace platform root CAs with test CA as the CA root") serviceAccountKeyFile = flag.String("service_account_key_file", "", "Path to service account json key file") oauthScope = flag.String("oauth_scope", "", "The scope for OAuth2 tokens") defaultServiceAccount = flag.String("default_service_account", "", "Email of GCE default service account") serverHost = flag.String("server_host", "localhost", "The server host name") serverPort = flag.Int("server_port", 10000, "The server port number") tlsServerName = flag.String("server_host_override", "", "The server name use to verify the hostname returned by TLS handshake if it is not empty. Otherwise, --server_host is used.") testCase = flag.String("test_case", "large_unary", `Configure different test cases. 
Valid options are: empty_unary : empty (zero bytes) request and response; large_unary : single request and (large) response; client_streaming : request streaming with single response; server_streaming : single request with response streaming; ping_pong : full-duplex streaming; empty_stream : full-duplex streaming with zero message; timeout_on_sleeping_server: fullduplex streaming on a sleeping server; compute_engine_creds: large_unary with compute engine auth; service_account_creds: large_unary with service account auth; jwt_token_creds: large_unary with jwt token auth; per_rpc_creds: large_unary with per rpc token; oauth2_auth_token: large_unary with oauth2 token auth; cancel_after_begin: cancellation after metadata has been sent but before payloads are sent; cancel_after_first_response: cancellation after receiving 1st message from the server; status_code_and_message: status code propagated back to client; custom_metadata: server will echo custom metadata; unimplemented_method: client attempts to call unimplemented method; unimplemented_service: client attempts to call unimplemented service.`) ) func main() { flag.Parse() serverAddr := net.JoinHostPort(*serverHost, strconv.Itoa(*serverPort)) var opts []grpc.DialOption if *useTLS { var sn string if *tlsServerName != "" { sn = *tlsServerName } var creds credentials.TransportCredentials if *testCA { var err error if *caFile == "" { *caFile = testdata.Path("ca.pem") } creds, err = credentials.NewClientTLSFromFile(*caFile, sn) if err != nil { grpclog.Fatalf("Failed to create TLS credentials %v", err) } } else { creds = credentials.NewClientTLSFromCert(nil, sn) } opts = append(opts, grpc.WithTransportCredentials(creds)) if *testCase == "compute_engine_creds" { opts = append(opts, grpc.WithPerRPCCredentials(oauth.NewComputeEngine())) } else if *testCase == "service_account_creds" { jwtCreds, err := oauth.NewServiceAccountFromFile(*serviceAccountKeyFile, *oauthScope) if err != nil { grpclog.Fatalf("Failed to create JWT credentials: %v", err) } opts = append(opts, grpc.WithPerRPCCredentials(jwtCreds)) } else if *testCase == "jwt_token_creds" { jwtCreds, err := oauth.NewJWTAccessFromFile(*serviceAccountKeyFile) if err != nil { grpclog.Fatalf("Failed to create JWT credentials: %v", err) } opts = append(opts, grpc.WithPerRPCCredentials(jwtCreds)) } else if *testCase == "oauth2_auth_token" { opts = append(opts, grpc.WithPerRPCCredentials(oauth.NewOauthAccess(interop.GetToken(*serviceAccountKeyFile, *oauthScope)))) } } else { opts = append(opts, grpc.WithInsecure()) } conn, err := grpc.Dial(serverAddr, opts...) if err != nil { grpclog.Fatalf("Fail to dial: %v", err) } defer conn.Close() tc := testpb.NewTestServiceClient(conn) switch *testCase { case "empty_unary": interop.DoEmptyUnaryCall(tc) grpclog.Println("EmptyUnaryCall done") case "large_unary": interop.DoLargeUnaryCall(tc) grpclog.Println("LargeUnaryCall done") case "client_streaming": interop.DoClientStreaming(tc) grpclog.Println("ClientStreaming done") case "server_streaming": interop.DoServerStreaming(tc) grpclog.Println("ServerStreaming done") case "ping_pong": interop.DoPingPong(tc) grpclog.Println("Pingpong done") case "empty_stream": interop.DoEmptyStream(tc) grpclog.Println("Emptystream done") case "timeout_on_sleeping_server": interop.DoTimeoutOnSleepingServer(tc) grpclog.Println("TimeoutOnSleepingServer done") case "compute_engine_creds": if !*useTLS { grpclog.Fatalf("TLS is not enabled. 
TLS is required to execute compute_engine_creds test case.") } interop.DoComputeEngineCreds(tc, *defaultServiceAccount, *oauthScope) grpclog.Println("ComputeEngineCreds done") case "service_account_creds": if !*useTLS { grpclog.Fatalf("TLS is not enabled. TLS is required to execute service_account_creds test case.") } interop.DoServiceAccountCreds(tc, *serviceAccountKeyFile, *oauthScope) grpclog.Println("ServiceAccountCreds done") case "jwt_token_creds": if !*useTLS { grpclog.Fatalf("TLS is not enabled. TLS is required to execute jwt_token_creds test case.") } interop.DoJWTTokenCreds(tc, *serviceAccountKeyFile) grpclog.Println("JWTtokenCreds done") case "per_rpc_creds": if !*useTLS { grpclog.Fatalf("TLS is not enabled. TLS is required to execute per_rpc_creds test case.") } interop.DoPerRPCCreds(tc, *serviceAccountKeyFile, *oauthScope) grpclog.Println("PerRPCCreds done") case "oauth2_auth_token": if !*useTLS { grpclog.Fatalf("TLS is not enabled. TLS is required to execute oauth2_auth_token test case.") } interop.DoOauth2TokenCreds(tc, *serviceAccountKeyFile, *oauthScope) grpclog.Println("Oauth2TokenCreds done") case "cancel_after_begin": interop.DoCancelAfterBegin(tc) grpclog.Println("CancelAfterBegin done") case "cancel_after_first_response": interop.DoCancelAfterFirstResponse(tc) grpclog.Println("CancelAfterFirstResponse done") case "status_code_and_message": interop.DoStatusCodeAndMessage(tc) grpclog.Println("StatusCodeAndMessage done") case "custom_metadata": interop.DoCustomMetadata(tc) grpclog.Println("CustomMetadata done") case "unimplemented_method": interop.DoUnimplementedMethod(conn) grpclog.Println("UnimplementedMethod done") case "unimplemented_service": interop.DoUnimplementedService(testpb.NewUnimplementedServiceClient(conn)) grpclog.Println("UnimplementedService done") default: grpclog.Fatal("Unsupported test case: ", *testCase) } } golang-google-grpc-1.6.0/interop/grpc_testing/000077500000000000000000000000001315416461300213445ustar00rootroot00000000000000golang-google-grpc-1.6.0/interop/grpc_testing/test.pb.go000066400000000000000000001014571315416461300232620ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: grpc_testing/test.proto /* Package grpc_testing is a generated protocol buffer package. It is generated from these files: grpc_testing/test.proto It has these top-level messages: Empty Payload EchoStatus SimpleRequest SimpleResponse StreamingInputCallRequest StreamingInputCallResponse ResponseParameters StreamingOutputCallRequest StreamingOutputCallResponse */ package grpc_testing import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import ( context "golang.org/x/net/context" grpc "google.golang.org/grpc" ) // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package // The type of payload that should be returned. type PayloadType int32 const ( // Compressable text format. PayloadType_COMPRESSABLE PayloadType = 0 // Uncompressable binary format. PayloadType_UNCOMPRESSABLE PayloadType = 1 // Randomly chosen from all other formats defined in this enum. 
PayloadType_RANDOM PayloadType = 2 ) var PayloadType_name = map[int32]string{ 0: "COMPRESSABLE", 1: "UNCOMPRESSABLE", 2: "RANDOM", } var PayloadType_value = map[string]int32{ "COMPRESSABLE": 0, "UNCOMPRESSABLE": 1, "RANDOM": 2, } func (x PayloadType) Enum() *PayloadType { p := new(PayloadType) *p = x return p } func (x PayloadType) String() string { return proto.EnumName(PayloadType_name, int32(x)) } func (x *PayloadType) UnmarshalJSON(data []byte) error { value, err := proto.UnmarshalJSONEnum(PayloadType_value, data, "PayloadType") if err != nil { return err } *x = PayloadType(value) return nil } func (PayloadType) EnumDescriptor() ([]byte, []int) { return fileDescriptor0, []int{0} } type Empty struct { XXX_unrecognized []byte `json:"-"` } func (m *Empty) Reset() { *m = Empty{} } func (m *Empty) String() string { return proto.CompactTextString(m) } func (*Empty) ProtoMessage() {} func (*Empty) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} } // A block of data, to simply increase gRPC message size. type Payload struct { // The type of data in body. Type *PayloadType `protobuf:"varint,1,opt,name=type,enum=grpc.testing.PayloadType" json:"type,omitempty"` // Primary contents of payload. Body []byte `protobuf:"bytes,2,opt,name=body" json:"body,omitempty"` XXX_unrecognized []byte `json:"-"` } func (m *Payload) Reset() { *m = Payload{} } func (m *Payload) String() string { return proto.CompactTextString(m) } func (*Payload) ProtoMessage() {} func (*Payload) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} } func (m *Payload) GetType() PayloadType { if m != nil && m.Type != nil { return *m.Type } return PayloadType_COMPRESSABLE } func (m *Payload) GetBody() []byte { if m != nil { return m.Body } return nil } // A protobuf representation for grpc status. This is used by test // clients to specify a status that the server should attempt to return. type EchoStatus struct { Code *int32 `protobuf:"varint,1,opt,name=code" json:"code,omitempty"` Message *string `protobuf:"bytes,2,opt,name=message" json:"message,omitempty"` XXX_unrecognized []byte `json:"-"` } func (m *EchoStatus) Reset() { *m = EchoStatus{} } func (m *EchoStatus) String() string { return proto.CompactTextString(m) } func (*EchoStatus) ProtoMessage() {} func (*EchoStatus) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} } func (m *EchoStatus) GetCode() int32 { if m != nil && m.Code != nil { return *m.Code } return 0 } func (m *EchoStatus) GetMessage() string { if m != nil && m.Message != nil { return *m.Message } return "" } // Unary request. type SimpleRequest struct { // Desired payload type in the response from the server. // If response_type is RANDOM, server randomly chooses one from other formats. ResponseType *PayloadType `protobuf:"varint,1,opt,name=response_type,json=responseType,enum=grpc.testing.PayloadType" json:"response_type,omitempty"` // Desired payload size in the response from the server. // If response_type is COMPRESSABLE, this denotes the size before compression. ResponseSize *int32 `protobuf:"varint,2,opt,name=response_size,json=responseSize" json:"response_size,omitempty"` // Optional input payload sent along with the request. Payload *Payload `protobuf:"bytes,3,opt,name=payload" json:"payload,omitempty"` // Whether SimpleResponse should include username. FillUsername *bool `protobuf:"varint,4,opt,name=fill_username,json=fillUsername" json:"fill_username,omitempty"` // Whether SimpleResponse should include OAuth scope. 
FillOauthScope *bool `protobuf:"varint,5,opt,name=fill_oauth_scope,json=fillOauthScope" json:"fill_oauth_scope,omitempty"` // Whether server should return a given status ResponseStatus *EchoStatus `protobuf:"bytes,7,opt,name=response_status,json=responseStatus" json:"response_status,omitempty"` XXX_unrecognized []byte `json:"-"` } func (m *SimpleRequest) Reset() { *m = SimpleRequest{} } func (m *SimpleRequest) String() string { return proto.CompactTextString(m) } func (*SimpleRequest) ProtoMessage() {} func (*SimpleRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3} } func (m *SimpleRequest) GetResponseType() PayloadType { if m != nil && m.ResponseType != nil { return *m.ResponseType } return PayloadType_COMPRESSABLE } func (m *SimpleRequest) GetResponseSize() int32 { if m != nil && m.ResponseSize != nil { return *m.ResponseSize } return 0 } func (m *SimpleRequest) GetPayload() *Payload { if m != nil { return m.Payload } return nil } func (m *SimpleRequest) GetFillUsername() bool { if m != nil && m.FillUsername != nil { return *m.FillUsername } return false } func (m *SimpleRequest) GetFillOauthScope() bool { if m != nil && m.FillOauthScope != nil { return *m.FillOauthScope } return false } func (m *SimpleRequest) GetResponseStatus() *EchoStatus { if m != nil { return m.ResponseStatus } return nil } // Unary response, as configured by the request. type SimpleResponse struct { // Payload to increase message size. Payload *Payload `protobuf:"bytes,1,opt,name=payload" json:"payload,omitempty"` // The user the request came from, for verifying authentication was // successful when the client expected it. Username *string `protobuf:"bytes,2,opt,name=username" json:"username,omitempty"` // OAuth scope. OauthScope *string `protobuf:"bytes,3,opt,name=oauth_scope,json=oauthScope" json:"oauth_scope,omitempty"` XXX_unrecognized []byte `json:"-"` } func (m *SimpleResponse) Reset() { *m = SimpleResponse{} } func (m *SimpleResponse) String() string { return proto.CompactTextString(m) } func (*SimpleResponse) ProtoMessage() {} func (*SimpleResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{4} } func (m *SimpleResponse) GetPayload() *Payload { if m != nil { return m.Payload } return nil } func (m *SimpleResponse) GetUsername() string { if m != nil && m.Username != nil { return *m.Username } return "" } func (m *SimpleResponse) GetOauthScope() string { if m != nil && m.OauthScope != nil { return *m.OauthScope } return "" } // Client-streaming request. type StreamingInputCallRequest struct { // Optional input payload sent along with the request. Payload *Payload `protobuf:"bytes,1,opt,name=payload" json:"payload,omitempty"` XXX_unrecognized []byte `json:"-"` } func (m *StreamingInputCallRequest) Reset() { *m = StreamingInputCallRequest{} } func (m *StreamingInputCallRequest) String() string { return proto.CompactTextString(m) } func (*StreamingInputCallRequest) ProtoMessage() {} func (*StreamingInputCallRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{5} } func (m *StreamingInputCallRequest) GetPayload() *Payload { if m != nil { return m.Payload } return nil } // Client-streaming response. type StreamingInputCallResponse struct { // Aggregated size of payloads received from the client. 
AggregatedPayloadSize *int32 `protobuf:"varint,1,opt,name=aggregated_payload_size,json=aggregatedPayloadSize" json:"aggregated_payload_size,omitempty"` XXX_unrecognized []byte `json:"-"` } func (m *StreamingInputCallResponse) Reset() { *m = StreamingInputCallResponse{} } func (m *StreamingInputCallResponse) String() string { return proto.CompactTextString(m) } func (*StreamingInputCallResponse) ProtoMessage() {} func (*StreamingInputCallResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{6} } func (m *StreamingInputCallResponse) GetAggregatedPayloadSize() int32 { if m != nil && m.AggregatedPayloadSize != nil { return *m.AggregatedPayloadSize } return 0 } // Configuration for a particular response. type ResponseParameters struct { // Desired payload sizes in responses from the server. // If response_type is COMPRESSABLE, this denotes the size before compression. Size *int32 `protobuf:"varint,1,opt,name=size" json:"size,omitempty"` // Desired interval between consecutive responses in the response stream in // microseconds. IntervalUs *int32 `protobuf:"varint,2,opt,name=interval_us,json=intervalUs" json:"interval_us,omitempty"` XXX_unrecognized []byte `json:"-"` } func (m *ResponseParameters) Reset() { *m = ResponseParameters{} } func (m *ResponseParameters) String() string { return proto.CompactTextString(m) } func (*ResponseParameters) ProtoMessage() {} func (*ResponseParameters) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{7} } func (m *ResponseParameters) GetSize() int32 { if m != nil && m.Size != nil { return *m.Size } return 0 } func (m *ResponseParameters) GetIntervalUs() int32 { if m != nil && m.IntervalUs != nil { return *m.IntervalUs } return 0 } // Server-streaming request. type StreamingOutputCallRequest struct { // Desired payload type in the response from the server. // If response_type is RANDOM, the payload from each response in the stream // might be of different types. This is to simulate a mixed type of payload // stream. ResponseType *PayloadType `protobuf:"varint,1,opt,name=response_type,json=responseType,enum=grpc.testing.PayloadType" json:"response_type,omitempty"` // Configuration for each expected response message. ResponseParameters []*ResponseParameters `protobuf:"bytes,2,rep,name=response_parameters,json=responseParameters" json:"response_parameters,omitempty"` // Optional input payload sent along with the request. 
Payload *Payload `protobuf:"bytes,3,opt,name=payload" json:"payload,omitempty"` // Whether server should return a given status ResponseStatus *EchoStatus `protobuf:"bytes,7,opt,name=response_status,json=responseStatus" json:"response_status,omitempty"` XXX_unrecognized []byte `json:"-"` } func (m *StreamingOutputCallRequest) Reset() { *m = StreamingOutputCallRequest{} } func (m *StreamingOutputCallRequest) String() string { return proto.CompactTextString(m) } func (*StreamingOutputCallRequest) ProtoMessage() {} func (*StreamingOutputCallRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{8} } func (m *StreamingOutputCallRequest) GetResponseType() PayloadType { if m != nil && m.ResponseType != nil { return *m.ResponseType } return PayloadType_COMPRESSABLE } func (m *StreamingOutputCallRequest) GetResponseParameters() []*ResponseParameters { if m != nil { return m.ResponseParameters } return nil } func (m *StreamingOutputCallRequest) GetPayload() *Payload { if m != nil { return m.Payload } return nil } func (m *StreamingOutputCallRequest) GetResponseStatus() *EchoStatus { if m != nil { return m.ResponseStatus } return nil } // Server-streaming response, as configured by the request and parameters. type StreamingOutputCallResponse struct { // Payload to increase response size. Payload *Payload `protobuf:"bytes,1,opt,name=payload" json:"payload,omitempty"` XXX_unrecognized []byte `json:"-"` } func (m *StreamingOutputCallResponse) Reset() { *m = StreamingOutputCallResponse{} } func (m *StreamingOutputCallResponse) String() string { return proto.CompactTextString(m) } func (*StreamingOutputCallResponse) ProtoMessage() {} func (*StreamingOutputCallResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{9} } func (m *StreamingOutputCallResponse) GetPayload() *Payload { if m != nil { return m.Payload } return nil } func init() { proto.RegisterType((*Empty)(nil), "grpc.testing.Empty") proto.RegisterType((*Payload)(nil), "grpc.testing.Payload") proto.RegisterType((*EchoStatus)(nil), "grpc.testing.EchoStatus") proto.RegisterType((*SimpleRequest)(nil), "grpc.testing.SimpleRequest") proto.RegisterType((*SimpleResponse)(nil), "grpc.testing.SimpleResponse") proto.RegisterType((*StreamingInputCallRequest)(nil), "grpc.testing.StreamingInputCallRequest") proto.RegisterType((*StreamingInputCallResponse)(nil), "grpc.testing.StreamingInputCallResponse") proto.RegisterType((*ResponseParameters)(nil), "grpc.testing.ResponseParameters") proto.RegisterType((*StreamingOutputCallRequest)(nil), "grpc.testing.StreamingOutputCallRequest") proto.RegisterType((*StreamingOutputCallResponse)(nil), "grpc.testing.StreamingOutputCallResponse") proto.RegisterEnum("grpc.testing.PayloadType", PayloadType_name, PayloadType_value) } // Reference imports to suppress errors if they are not otherwise used. var _ context.Context var _ grpc.ClientConn // This is a compile-time assertion to ensure that this generated file // is compatible with the grpc package it is being compiled against. const _ = grpc.SupportPackageIsVersion4 // Client API for TestService service type TestServiceClient interface { // One empty request followed by one empty response. EmptyCall(ctx context.Context, in *Empty, opts ...grpc.CallOption) (*Empty, error) // One request followed by one response. // The server returns the client payload as-is. UnaryCall(ctx context.Context, in *SimpleRequest, opts ...grpc.CallOption) (*SimpleResponse, error) // One request followed by a sequence of responses (streamed download). 
// The server returns the payload with client desired type and sizes. StreamingOutputCall(ctx context.Context, in *StreamingOutputCallRequest, opts ...grpc.CallOption) (TestService_StreamingOutputCallClient, error) // A sequence of requests followed by one response (streamed upload). // The server returns the aggregated size of client payload as the result. StreamingInputCall(ctx context.Context, opts ...grpc.CallOption) (TestService_StreamingInputCallClient, error) // A sequence of requests with each request served by the server immediately. // As one request could lead to multiple responses, this interface // demonstrates the idea of full duplexing. FullDuplexCall(ctx context.Context, opts ...grpc.CallOption) (TestService_FullDuplexCallClient, error) // A sequence of requests followed by a sequence of responses. // The server buffers all the client requests and then serves them in order. A // stream of responses are returned to the client when the server starts with // first request. HalfDuplexCall(ctx context.Context, opts ...grpc.CallOption) (TestService_HalfDuplexCallClient, error) } type testServiceClient struct { cc *grpc.ClientConn } func NewTestServiceClient(cc *grpc.ClientConn) TestServiceClient { return &testServiceClient{cc} } func (c *testServiceClient) EmptyCall(ctx context.Context, in *Empty, opts ...grpc.CallOption) (*Empty, error) { out := new(Empty) err := grpc.Invoke(ctx, "/grpc.testing.TestService/EmptyCall", in, out, c.cc, opts...) if err != nil { return nil, err } return out, nil } func (c *testServiceClient) UnaryCall(ctx context.Context, in *SimpleRequest, opts ...grpc.CallOption) (*SimpleResponse, error) { out := new(SimpleResponse) err := grpc.Invoke(ctx, "/grpc.testing.TestService/UnaryCall", in, out, c.cc, opts...) if err != nil { return nil, err } return out, nil } func (c *testServiceClient) StreamingOutputCall(ctx context.Context, in *StreamingOutputCallRequest, opts ...grpc.CallOption) (TestService_StreamingOutputCallClient, error) { stream, err := grpc.NewClientStream(ctx, &_TestService_serviceDesc.Streams[0], c.cc, "/grpc.testing.TestService/StreamingOutputCall", opts...) if err != nil { return nil, err } x := &testServiceStreamingOutputCallClient{stream} if err := x.ClientStream.SendMsg(in); err != nil { return nil, err } if err := x.ClientStream.CloseSend(); err != nil { return nil, err } return x, nil } type TestService_StreamingOutputCallClient interface { Recv() (*StreamingOutputCallResponse, error) grpc.ClientStream } type testServiceStreamingOutputCallClient struct { grpc.ClientStream } func (x *testServiceStreamingOutputCallClient) Recv() (*StreamingOutputCallResponse, error) { m := new(StreamingOutputCallResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func (c *testServiceClient) StreamingInputCall(ctx context.Context, opts ...grpc.CallOption) (TestService_StreamingInputCallClient, error) { stream, err := grpc.NewClientStream(ctx, &_TestService_serviceDesc.Streams[1], c.cc, "/grpc.testing.TestService/StreamingInputCall", opts...) 
if err != nil { return nil, err } x := &testServiceStreamingInputCallClient{stream} return x, nil } type TestService_StreamingInputCallClient interface { Send(*StreamingInputCallRequest) error CloseAndRecv() (*StreamingInputCallResponse, error) grpc.ClientStream } type testServiceStreamingInputCallClient struct { grpc.ClientStream } func (x *testServiceStreamingInputCallClient) Send(m *StreamingInputCallRequest) error { return x.ClientStream.SendMsg(m) } func (x *testServiceStreamingInputCallClient) CloseAndRecv() (*StreamingInputCallResponse, error) { if err := x.ClientStream.CloseSend(); err != nil { return nil, err } m := new(StreamingInputCallResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func (c *testServiceClient) FullDuplexCall(ctx context.Context, opts ...grpc.CallOption) (TestService_FullDuplexCallClient, error) { stream, err := grpc.NewClientStream(ctx, &_TestService_serviceDesc.Streams[2], c.cc, "/grpc.testing.TestService/FullDuplexCall", opts...) if err != nil { return nil, err } x := &testServiceFullDuplexCallClient{stream} return x, nil } type TestService_FullDuplexCallClient interface { Send(*StreamingOutputCallRequest) error Recv() (*StreamingOutputCallResponse, error) grpc.ClientStream } type testServiceFullDuplexCallClient struct { grpc.ClientStream } func (x *testServiceFullDuplexCallClient) Send(m *StreamingOutputCallRequest) error { return x.ClientStream.SendMsg(m) } func (x *testServiceFullDuplexCallClient) Recv() (*StreamingOutputCallResponse, error) { m := new(StreamingOutputCallResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func (c *testServiceClient) HalfDuplexCall(ctx context.Context, opts ...grpc.CallOption) (TestService_HalfDuplexCallClient, error) { stream, err := grpc.NewClientStream(ctx, &_TestService_serviceDesc.Streams[3], c.cc, "/grpc.testing.TestService/HalfDuplexCall", opts...) if err != nil { return nil, err } x := &testServiceHalfDuplexCallClient{stream} return x, nil } type TestService_HalfDuplexCallClient interface { Send(*StreamingOutputCallRequest) error Recv() (*StreamingOutputCallResponse, error) grpc.ClientStream } type testServiceHalfDuplexCallClient struct { grpc.ClientStream } func (x *testServiceHalfDuplexCallClient) Send(m *StreamingOutputCallRequest) error { return x.ClientStream.SendMsg(m) } func (x *testServiceHalfDuplexCallClient) Recv() (*StreamingOutputCallResponse, error) { m := new(StreamingOutputCallResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } // Server API for TestService service type TestServiceServer interface { // One empty request followed by one empty response. EmptyCall(context.Context, *Empty) (*Empty, error) // One request followed by one response. // The server returns the client payload as-is. UnaryCall(context.Context, *SimpleRequest) (*SimpleResponse, error) // One request followed by a sequence of responses (streamed download). // The server returns the payload with client desired type and sizes. StreamingOutputCall(*StreamingOutputCallRequest, TestService_StreamingOutputCallServer) error // A sequence of requests followed by one response (streamed upload). // The server returns the aggregated size of client payload as the result. StreamingInputCall(TestService_StreamingInputCallServer) error // A sequence of requests with each request served by the server immediately. 
// As one request could lead to multiple responses, this interface // demonstrates the idea of full duplexing. FullDuplexCall(TestService_FullDuplexCallServer) error // A sequence of requests followed by a sequence of responses. // The server buffers all the client requests and then serves them in order. A // stream of responses are returned to the client when the server starts with // first request. HalfDuplexCall(TestService_HalfDuplexCallServer) error } func RegisterTestServiceServer(s *grpc.Server, srv TestServiceServer) { s.RegisterService(&_TestService_serviceDesc, srv) } func _TestService_EmptyCall_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(Empty) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(TestServiceServer).EmptyCall(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/grpc.testing.TestService/EmptyCall", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(TestServiceServer).EmptyCall(ctx, req.(*Empty)) } return interceptor(ctx, in, info, handler) } func _TestService_UnaryCall_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(SimpleRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(TestServiceServer).UnaryCall(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/grpc.testing.TestService/UnaryCall", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(TestServiceServer).UnaryCall(ctx, req.(*SimpleRequest)) } return interceptor(ctx, in, info, handler) } func _TestService_StreamingOutputCall_Handler(srv interface{}, stream grpc.ServerStream) error { m := new(StreamingOutputCallRequest) if err := stream.RecvMsg(m); err != nil { return err } return srv.(TestServiceServer).StreamingOutputCall(m, &testServiceStreamingOutputCallServer{stream}) } type TestService_StreamingOutputCallServer interface { Send(*StreamingOutputCallResponse) error grpc.ServerStream } type testServiceStreamingOutputCallServer struct { grpc.ServerStream } func (x *testServiceStreamingOutputCallServer) Send(m *StreamingOutputCallResponse) error { return x.ServerStream.SendMsg(m) } func _TestService_StreamingInputCall_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(TestServiceServer).StreamingInputCall(&testServiceStreamingInputCallServer{stream}) } type TestService_StreamingInputCallServer interface { SendAndClose(*StreamingInputCallResponse) error Recv() (*StreamingInputCallRequest, error) grpc.ServerStream } type testServiceStreamingInputCallServer struct { grpc.ServerStream } func (x *testServiceStreamingInputCallServer) SendAndClose(m *StreamingInputCallResponse) error { return x.ServerStream.SendMsg(m) } func (x *testServiceStreamingInputCallServer) Recv() (*StreamingInputCallRequest, error) { m := new(StreamingInputCallRequest) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func _TestService_FullDuplexCall_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(TestServiceServer).FullDuplexCall(&testServiceFullDuplexCallServer{stream}) } type TestService_FullDuplexCallServer interface { Send(*StreamingOutputCallResponse) error Recv() (*StreamingOutputCallRequest, error) grpc.ServerStream } type testServiceFullDuplexCallServer struct { 
grpc.ServerStream } func (x *testServiceFullDuplexCallServer) Send(m *StreamingOutputCallResponse) error { return x.ServerStream.SendMsg(m) } func (x *testServiceFullDuplexCallServer) Recv() (*StreamingOutputCallRequest, error) { m := new(StreamingOutputCallRequest) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func _TestService_HalfDuplexCall_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(TestServiceServer).HalfDuplexCall(&testServiceHalfDuplexCallServer{stream}) } type TestService_HalfDuplexCallServer interface { Send(*StreamingOutputCallResponse) error Recv() (*StreamingOutputCallRequest, error) grpc.ServerStream } type testServiceHalfDuplexCallServer struct { grpc.ServerStream } func (x *testServiceHalfDuplexCallServer) Send(m *StreamingOutputCallResponse) error { return x.ServerStream.SendMsg(m) } func (x *testServiceHalfDuplexCallServer) Recv() (*StreamingOutputCallRequest, error) { m := new(StreamingOutputCallRequest) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } var _TestService_serviceDesc = grpc.ServiceDesc{ ServiceName: "grpc.testing.TestService", HandlerType: (*TestServiceServer)(nil), Methods: []grpc.MethodDesc{ { MethodName: "EmptyCall", Handler: _TestService_EmptyCall_Handler, }, { MethodName: "UnaryCall", Handler: _TestService_UnaryCall_Handler, }, }, Streams: []grpc.StreamDesc{ { StreamName: "StreamingOutputCall", Handler: _TestService_StreamingOutputCall_Handler, ServerStreams: true, }, { StreamName: "StreamingInputCall", Handler: _TestService_StreamingInputCall_Handler, ClientStreams: true, }, { StreamName: "FullDuplexCall", Handler: _TestService_FullDuplexCall_Handler, ServerStreams: true, ClientStreams: true, }, { StreamName: "HalfDuplexCall", Handler: _TestService_HalfDuplexCall_Handler, ServerStreams: true, ClientStreams: true, }, }, Metadata: "grpc_testing/test.proto", } // Client API for UnimplementedService service type UnimplementedServiceClient interface { // A call that no server should implement UnimplementedCall(ctx context.Context, in *Empty, opts ...grpc.CallOption) (*Empty, error) } type unimplementedServiceClient struct { cc *grpc.ClientConn } func NewUnimplementedServiceClient(cc *grpc.ClientConn) UnimplementedServiceClient { return &unimplementedServiceClient{cc} } func (c *unimplementedServiceClient) UnimplementedCall(ctx context.Context, in *Empty, opts ...grpc.CallOption) (*Empty, error) { out := new(Empty) err := grpc.Invoke(ctx, "/grpc.testing.UnimplementedService/UnimplementedCall", in, out, c.cc, opts...) 
if err != nil { return nil, err } return out, nil } // Server API for UnimplementedService service type UnimplementedServiceServer interface { // A call that no server should implement UnimplementedCall(context.Context, *Empty) (*Empty, error) } func RegisterUnimplementedServiceServer(s *grpc.Server, srv UnimplementedServiceServer) { s.RegisterService(&_UnimplementedService_serviceDesc, srv) } func _UnimplementedService_UnimplementedCall_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(Empty) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(UnimplementedServiceServer).UnimplementedCall(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/grpc.testing.UnimplementedService/UnimplementedCall", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(UnimplementedServiceServer).UnimplementedCall(ctx, req.(*Empty)) } return interceptor(ctx, in, info, handler) } var _UnimplementedService_serviceDesc = grpc.ServiceDesc{ ServiceName: "grpc.testing.UnimplementedService", HandlerType: (*UnimplementedServiceServer)(nil), Methods: []grpc.MethodDesc{ { MethodName: "UnimplementedCall", Handler: _UnimplementedService_UnimplementedCall_Handler, }, }, Streams: []grpc.StreamDesc{}, Metadata: "grpc_testing/test.proto", } func init() { proto.RegisterFile("grpc_testing/test.proto", fileDescriptor0) } var fileDescriptor0 = []byte{ // 656 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xbc, 0x54, 0x4d, 0x6f, 0xd3, 0x40, 0x10, 0xc5, 0x69, 0x42, 0xda, 0x49, 0x6a, 0xc2, 0x94, 0xaa, 0x6e, 0x8a, 0x44, 0x64, 0x0e, 0x18, 0x24, 0x52, 0x14, 0x09, 0x0e, 0x48, 0x80, 0x4a, 0x9b, 0x8a, 0x4a, 0x6d, 0x53, 0xec, 0xe6, 0x1c, 0x2d, 0xc9, 0xd4, 0xb5, 0xe4, 0x2f, 0xec, 0x75, 0x45, 0x7a, 0xe0, 0xcf, 0xf0, 0x23, 0x38, 0xf0, 0xe7, 0xd0, 0xae, 0xed, 0xc4, 0x49, 0x53, 0xd1, 0xf2, 0x75, 0xca, 0xee, 0x9b, 0x37, 0xb3, 0xf3, 0x66, 0x5e, 0x0c, 0x1b, 0x76, 0x14, 0x0e, 0x07, 0x9c, 0x62, 0xee, 0xf8, 0xf6, 0xb6, 0xf8, 0x6d, 0x87, 0x51, 0xc0, 0x03, 0xac, 0x8b, 0x40, 0x3b, 0x0b, 0xe8, 0x55, 0xa8, 0x74, 0xbd, 0x90, 0x8f, 0xf5, 0x43, 0xa8, 0x9e, 0xb0, 0xb1, 0x1b, 0xb0, 0x11, 0x3e, 0x87, 0x32, 0x1f, 0x87, 0xa4, 0x29, 0x2d, 0xc5, 0x50, 0x3b, 0x9b, 0xed, 0x62, 0x42, 0x3b, 0x23, 0x9d, 0x8e, 0x43, 0x32, 0x25, 0x0d, 0x11, 0xca, 0x9f, 0x82, 0xd1, 0x58, 0x2b, 0xb5, 0x14, 0xa3, 0x6e, 0xca, 0xb3, 0xfe, 0x1a, 0xa0, 0x3b, 0x3c, 0x0f, 0x2c, 0xce, 0x78, 0x12, 0x0b, 0xc6, 0x30, 0x18, 0xa5, 0x05, 0x2b, 0xa6, 0x3c, 0xa3, 0x06, 0x55, 0x8f, 0xe2, 0x98, 0xd9, 0x24, 0x13, 0x57, 0xcc, 0xfc, 0xaa, 0x7f, 0x2f, 0xc1, 0xaa, 0xe5, 0x78, 0xa1, 0x4b, 0x26, 0x7d, 0x4e, 0x28, 0xe6, 0xf8, 0x16, 0x56, 0x23, 0x8a, 0xc3, 0xc0, 0x8f, 0x69, 0x70, 0xb3, 0xce, 0xea, 0x39, 0x5f, 0xdc, 0xf0, 0x71, 0x21, 0x3f, 0x76, 0x2e, 0xd3, 0x17, 0x2b, 0x53, 0x92, 0xe5, 0x5c, 0x12, 0x6e, 0x43, 0x35, 0x4c, 0x2b, 0x68, 0x4b, 0x2d, 0xc5, 0xa8, 0x75, 0xd6, 0x17, 0x96, 0x37, 0x73, 0x96, 0xa8, 0x7a, 0xe6, 0xb8, 0xee, 0x20, 0x89, 0x29, 0xf2, 0x99, 0x47, 0x5a, 0xb9, 0xa5, 0x18, 0xcb, 0x66, 0x5d, 0x80, 0xfd, 0x0c, 0x43, 0x03, 0x1a, 0x92, 0x14, 0xb0, 0x84, 0x9f, 0x0f, 0xe2, 0x61, 0x10, 0x92, 0x56, 0x91, 0x3c, 0x55, 0xe0, 0x3d, 0x01, 0x5b, 0x02, 0xc5, 0x1d, 0xb8, 0x37, 0x6d, 0x52, 0xce, 0x4d, 0xab, 0xca, 0x3e, 0xb4, 0xd9, 0x3e, 0xa6, 0x73, 0x35, 0xd5, 0x89, 0x00, 0x79, 0xd7, 0xbf, 0x82, 0x9a, 0x0f, 0x2e, 0xc5, 0x8b, 0xa2, 0x94, 0x1b, 0x89, 0x6a, 0xc2, 0xf2, 0x44, 0x4f, 0xba, 
0x97, 0xc9, 0x1d, 0x1f, 0x41, 0xad, 0x28, 0x63, 0x49, 0x86, 0x21, 0x98, 0x48, 0xd0, 0x0f, 0x61, 0xd3, 0xe2, 0x11, 0x31, 0xcf, 0xf1, 0xed, 0x03, 0x3f, 0x4c, 0xf8, 0x2e, 0x73, 0xdd, 0x7c, 0x89, 0xb7, 0x6d, 0x45, 0x3f, 0x85, 0xe6, 0xa2, 0x6a, 0x99, 0xb2, 0x57, 0xb0, 0xc1, 0x6c, 0x3b, 0x22, 0x9b, 0x71, 0x1a, 0x0d, 0xb2, 0x9c, 0x74, 0xbb, 0xa9, 0xcd, 0xd6, 0xa7, 0xe1, 0xac, 0xb4, 0x58, 0xb3, 0x7e, 0x00, 0x98, 0xd7, 0x38, 0x61, 0x11, 0xf3, 0x88, 0x53, 0x24, 0x1d, 0x5a, 0x48, 0x95, 0x67, 0x21, 0xd7, 0xf1, 0x39, 0x45, 0x17, 0x4c, 0xec, 0x38, 0xf3, 0x0c, 0xe4, 0x50, 0x3f, 0xd6, 0xbf, 0x95, 0x0a, 0x1d, 0xf6, 0x12, 0x3e, 0x27, 0xf8, 0x4f, 0x5d, 0xfb, 0x11, 0xd6, 0x26, 0xf9, 0xe1, 0xa4, 0x55, 0xad, 0xd4, 0x5a, 0x32, 0x6a, 0x9d, 0xd6, 0x6c, 0x95, 0xab, 0x92, 0x4c, 0x8c, 0xae, 0xca, 0xbc, 0xb5, 0xc7, 0xff, 0x82, 0x29, 0x8f, 0x61, 0x6b, 0xe1, 0x90, 0x7e, 0xd3, 0xa1, 0xcf, 0xde, 0x41, 0xad, 0x30, 0x33, 0x6c, 0x40, 0x7d, 0xb7, 0x77, 0x74, 0x62, 0x76, 0x2d, 0x6b, 0xe7, 0xfd, 0x61, 0xb7, 0x71, 0x07, 0x11, 0xd4, 0xfe, 0xf1, 0x0c, 0xa6, 0x20, 0xc0, 0x5d, 0x73, 0xe7, 0x78, 0xaf, 0x77, 0xd4, 0x28, 0x75, 0x7e, 0x94, 0xa1, 0x76, 0x4a, 0x31, 0xb7, 0x28, 0xba, 0x70, 0x86, 0x84, 0x2f, 0x61, 0x45, 0x7e, 0x02, 0x45, 0x5b, 0xb8, 0x36, 0xa7, 0x4b, 0x04, 0x9a, 0x8b, 0x40, 0xdc, 0x87, 0x95, 0xbe, 0xcf, 0xa2, 0x34, 0x6d, 0x6b, 0x96, 0x31, 0xf3, 0xf9, 0x6a, 0x3e, 0x5c, 0x1c, 0xcc, 0x06, 0xe0, 0xc2, 0xda, 0x82, 0xf9, 0xa0, 0x31, 0x97, 0x74, 0xad, 0xcf, 0x9a, 0x4f, 0x6f, 0xc0, 0x4c, 0xdf, 0x7a, 0xa1, 0xa0, 0x03, 0x78, 0xf5, 0x4f, 0x85, 0x4f, 0xae, 0x29, 0x31, 0xff, 0x27, 0x6e, 0x1a, 0xbf, 0x26, 0xa6, 0x4f, 0x19, 0xe2, 0x29, 0x75, 0x3f, 0x71, 0xdd, 0xbd, 0x24, 0x74, 0xe9, 0xcb, 0x3f, 0xd3, 0x64, 0x28, 0x52, 0x95, 0xfa, 0x81, 0xb9, 0x67, 0xff, 0xe1, 0xa9, 0x4e, 0x1f, 0x1e, 0xf4, 0x7d, 0xb9, 0x41, 0x8f, 0x7c, 0x4e, 0xa3, 0xdc, 0x45, 0x6f, 0xe0, 0xfe, 0x0c, 0x7e, 0x3b, 0x37, 0xfd, 0x0c, 0x00, 0x00, 0xff, 0xff, 0x15, 0x62, 0x93, 0xba, 0xaf, 0x07, 0x00, 0x00, } golang-google-grpc-1.6.0/interop/grpc_testing/test.proto000066400000000000000000000135011315416461300234100ustar00rootroot00000000000000// Copyright 2017 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. // An integration test service that covers all the method signature permutations // of unary/streaming requests/responses. syntax = "proto2"; package grpc.testing; message Empty {} // The type of payload that should be returned. enum PayloadType { // Compressable text format. COMPRESSABLE = 0; // Uncompressable binary format. UNCOMPRESSABLE = 1; // Randomly chosen from all other formats defined in this enum. RANDOM = 2; } // A block of data, to simply increase gRPC message size. message Payload { // The type of data in body. optional PayloadType type = 1; // Primary contents of payload. optional bytes body = 2; } // A protobuf representation for grpc status. This is used by test // clients to specify a status that the server should attempt to return. message EchoStatus { optional int32 code = 1; optional string message = 2; } // Unary request. 
message SimpleRequest { // Desired payload type in the response from the server. // If response_type is RANDOM, server randomly chooses one from other formats. optional PayloadType response_type = 1; // Desired payload size in the response from the server. // If response_type is COMPRESSABLE, this denotes the size before compression. optional int32 response_size = 2; // Optional input payload sent along with the request. optional Payload payload = 3; // Whether SimpleResponse should include username. optional bool fill_username = 4; // Whether SimpleResponse should include OAuth scope. optional bool fill_oauth_scope = 5; // Whether server should return a given status optional EchoStatus response_status = 7; } // Unary response, as configured by the request. message SimpleResponse { // Payload to increase message size. optional Payload payload = 1; // The user the request came from, for verifying authentication was // successful when the client expected it. optional string username = 2; // OAuth scope. optional string oauth_scope = 3; } // Client-streaming request. message StreamingInputCallRequest { // Optional input payload sent along with the request. optional Payload payload = 1; // Not expecting any payload from the response. } // Client-streaming response. message StreamingInputCallResponse { // Aggregated size of payloads received from the client. optional int32 aggregated_payload_size = 1; } // Configuration for a particular response. message ResponseParameters { // Desired payload sizes in responses from the server. // If response_type is COMPRESSABLE, this denotes the size before compression. optional int32 size = 1; // Desired interval between consecutive responses in the response stream in // microseconds. optional int32 interval_us = 2; } // Server-streaming request. message StreamingOutputCallRequest { // Desired payload type in the response from the server. // If response_type is RANDOM, the payload from each response in the stream // might be of different types. This is to simulate a mixed type of payload // stream. optional PayloadType response_type = 1; // Configuration for each expected response message. repeated ResponseParameters response_parameters = 2; // Optional input payload sent along with the request. optional Payload payload = 3; // Whether server should return a given status optional EchoStatus response_status = 7; } // Server-streaming response, as configured by the request and parameters. message StreamingOutputCallResponse { // Payload to increase response size. optional Payload payload = 1; } // A simple service to test the various types of RPCs and experiment with // performance with various types of payload. service TestService { // One empty request followed by one empty response. rpc EmptyCall(Empty) returns (Empty); // One request followed by one response. // The server returns the client payload as-is. rpc UnaryCall(SimpleRequest) returns (SimpleResponse); // One request followed by a sequence of responses (streamed download). // The server returns the payload with client desired type and sizes. rpc StreamingOutputCall(StreamingOutputCallRequest) returns (stream StreamingOutputCallResponse); // A sequence of requests followed by one response (streamed upload). // The server returns the aggregated size of client payload as the result. rpc StreamingInputCall(stream StreamingInputCallRequest) returns (StreamingInputCallResponse); // A sequence of requests with each request served by the server immediately. 
// As one request could lead to multiple responses, this interface // demonstrates the idea of full duplexing. rpc FullDuplexCall(stream StreamingOutputCallRequest) returns (stream StreamingOutputCallResponse); // A sequence of requests followed by a sequence of responses. // The server buffers all the client requests and then serves them in order. A // stream of responses are returned to the client when the server starts with // first request. rpc HalfDuplexCall(stream StreamingOutputCallRequest) returns (stream StreamingOutputCallResponse); } // A simple service NOT implemented at servers so clients can test for // that case. service UnimplementedService { // A call that no server should implement rpc UnimplementedCall(grpc.testing.Empty) returns (grpc.testing.Empty); } golang-google-grpc-1.6.0/interop/http2/000077500000000000000000000000001315416461300177155ustar00rootroot00000000000000golang-google-grpc-1.6.0/interop/http2/negative_http2_client.go000066400000000000000000000114521315416461300245300ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * * * Client used to test http2 error edge cases like GOAWAYs and RST_STREAMs * * Documentation: * https://github.com/grpc/grpc/blob/master/doc/negative-http2-interop-test-descriptions.md */ package main import ( "flag" "net" "strconv" "sync" "time" "github.com/golang/protobuf/proto" "golang.org/x/net/context" "google.golang.org/grpc" "google.golang.org/grpc/codes" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/interop" testpb "google.golang.org/grpc/interop/grpc_testing" ) var ( serverHost = flag.String("server_host", "127.0.0.1", "The server host name") serverPort = flag.Int("server_port", 8080, "The server port number") testCase = flag.String("test_case", "goaway", `Configure different test cases. Valid options are: goaway : client sends two requests, the server will send a goaway in between; rst_after_header : server will send rst_stream after it sends headers; rst_during_data : server will send rst_stream while sending data; rst_after_data : server will send rst_stream after sending data; ping : server will send pings between each http2 frame; max_streams : server will ensure that the max_concurrent_streams limit is upheld;`) largeReqSize = 271828 largeRespSize = 314159 ) func largeSimpleRequest() *testpb.SimpleRequest { pl := interop.ClientNewPayload(testpb.PayloadType_COMPRESSABLE, largeReqSize) return &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseSize: proto.Int32(int32(largeRespSize)), Payload: pl, } } // sends two unary calls. The server asserts that the calls use different connections. func goaway(tc testpb.TestServiceClient) { interop.DoLargeUnaryCall(tc) // sleep to ensure that the client has time to recv the GOAWAY. // TODO(ncteisen): make this less hacky. 
time.Sleep(1 * time.Second) interop.DoLargeUnaryCall(tc) } func rstAfterHeader(tc testpb.TestServiceClient) { req := largeSimpleRequest() reply, err := tc.UnaryCall(context.Background(), req) if reply != nil { grpclog.Fatalf("Client received reply despite server sending rst stream after header") } if grpc.Code(err) != codes.Internal { grpclog.Fatalf("%v.UnaryCall() = _, %v, want _, %v", tc, grpc.Code(err), codes.Internal) } } func rstDuringData(tc testpb.TestServiceClient) { req := largeSimpleRequest() reply, err := tc.UnaryCall(context.Background(), req) if reply != nil { grpclog.Fatalf("Client received reply despite server sending rst stream during data") } if grpc.Code(err) != codes.Unknown { grpclog.Fatalf("%v.UnaryCall() = _, %v, want _, %v", tc, grpc.Code(err), codes.Unknown) } } func rstAfterData(tc testpb.TestServiceClient) { req := largeSimpleRequest() reply, err := tc.UnaryCall(context.Background(), req) if reply != nil { grpclog.Fatalf("Client received reply despite server sending rst stream after data") } if grpc.Code(err) != codes.Internal { grpclog.Fatalf("%v.UnaryCall() = _, %v, want _, %v", tc, grpc.Code(err), codes.Internal) } } func ping(tc testpb.TestServiceClient) { // The server will assert that every ping it sends was ACK-ed by the client. interop.DoLargeUnaryCall(tc) } func maxStreams(tc testpb.TestServiceClient) { interop.DoLargeUnaryCall(tc) var wg sync.WaitGroup for i := 0; i < 15; i++ { wg.Add(1) go func() { defer wg.Done() interop.DoLargeUnaryCall(tc) }() } wg.Wait() } func main() { flag.Parse() serverAddr := net.JoinHostPort(*serverHost, strconv.Itoa(*serverPort)) var opts []grpc.DialOption opts = append(opts, grpc.WithInsecure()) conn, err := grpc.Dial(serverAddr, opts...) if err != nil { grpclog.Fatalf("Fail to dial: %v", err) } defer conn.Close() tc := testpb.NewTestServiceClient(conn) switch *testCase { case "goaway": goaway(tc) grpclog.Println("goaway done") case "rst_after_header": rstAfterHeader(tc) grpclog.Println("rst_after_header done") case "rst_during_data": rstDuringData(tc) grpclog.Println("rst_during_data done") case "rst_after_data": rstAfterData(tc) grpclog.Println("rst_after_data done") case "ping": ping(tc) grpclog.Println("ping done") case "max_streams": maxStreams(tc) grpclog.Println("max_streams done") default: grpclog.Fatal("Unsupported test case: ", *testCase) } } golang-google-grpc-1.6.0/interop/server/000077500000000000000000000000001315416461300201625ustar00rootroot00000000000000golang-google-grpc-1.6.0/interop/server/server.go000066400000000000000000000034431315416461300220230ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package main import ( "flag" "net" "strconv" "google.golang.org/grpc" "google.golang.org/grpc/credentials" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/interop" testpb "google.golang.org/grpc/interop/grpc_testing" "google.golang.org/grpc/testdata" ) var ( useTLS = flag.Bool("use_tls", false, "Connection uses TLS if true, else plain TCP") certFile = flag.String("tls_cert_file", "", "The TLS cert file") keyFile = flag.String("tls_key_file", "", "The TLS key file") port = flag.Int("port", 10000, "The server port") ) func main() { flag.Parse() p := strconv.Itoa(*port) lis, err := net.Listen("tcp", ":"+p) if err != nil { grpclog.Fatalf("failed to listen: %v", err) } var opts []grpc.ServerOption if *useTLS { if *certFile == "" { *certFile = testdata.Path("server1.pem") } if *keyFile == "" { *keyFile = testdata.Path("server1.key") } creds, err := credentials.NewServerTLSFromFile(*certFile, *keyFile) if err != nil { grpclog.Fatalf("Failed to generate credentials %v", err) } opts = []grpc.ServerOption{grpc.Creds(creds)} } server := grpc.NewServer(opts...) testpb.RegisterTestServiceServer(server, interop.NewTestServer()) server.Serve(lis) } golang-google-grpc-1.6.0/interop/test_utils.go000066400000000000000000000613021315416461300214040ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ //go:generate protoc --go_out=plugins=grpc:. grpc_testing/test.proto package interop import ( "fmt" "io" "io/ioutil" "strings" "time" "github.com/golang/protobuf/proto" "golang.org/x/net/context" "golang.org/x/oauth2" "golang.org/x/oauth2/google" "google.golang.org/grpc" "google.golang.org/grpc/codes" "google.golang.org/grpc/grpclog" testpb "google.golang.org/grpc/interop/grpc_testing" "google.golang.org/grpc/metadata" ) var ( reqSizes = []int{27182, 8, 1828, 45904} respSizes = []int{31415, 9, 2653, 58979} largeReqSize = 271828 largeRespSize = 314159 initialMetadataKey = "x-grpc-test-echo-initial" trailingMetadataKey = "x-grpc-test-echo-trailing-bin" ) // ClientNewPayload returns a payload of the given type and size. func ClientNewPayload(t testpb.PayloadType, size int) *testpb.Payload { if size < 0 { grpclog.Fatalf("Requested a response with invalid length %d", size) } body := make([]byte, size) switch t { case testpb.PayloadType_COMPRESSABLE: case testpb.PayloadType_UNCOMPRESSABLE: grpclog.Fatalf("PayloadType UNCOMPRESSABLE is not supported") default: grpclog.Fatalf("Unsupported payload type: %d", t) } return &testpb.Payload{ Type: t.Enum(), Body: body, } } // DoEmptyUnaryCall performs a unary RPC with empty request and response messages. func DoEmptyUnaryCall(tc testpb.TestServiceClient, args ...grpc.CallOption) { reply, err := tc.EmptyCall(context.Background(), &testpb.Empty{}, args...) 
if err != nil { grpclog.Fatal("/TestService/EmptyCall RPC failed: ", err) } if !proto.Equal(&testpb.Empty{}, reply) { grpclog.Fatalf("/TestService/EmptyCall receives %v, want %v", reply, testpb.Empty{}) } } // DoLargeUnaryCall performs a unary RPC with large payload in the request and response. func DoLargeUnaryCall(tc testpb.TestServiceClient, args ...grpc.CallOption) { pl := ClientNewPayload(testpb.PayloadType_COMPRESSABLE, largeReqSize) req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseSize: proto.Int32(int32(largeRespSize)), Payload: pl, } reply, err := tc.UnaryCall(context.Background(), req, args...) if err != nil { grpclog.Fatal("/TestService/UnaryCall RPC failed: ", err) } t := reply.GetPayload().GetType() s := len(reply.GetPayload().GetBody()) if t != testpb.PayloadType_COMPRESSABLE || s != largeRespSize { grpclog.Fatalf("Got the reply with type %d len %d; want %d, %d", t, s, testpb.PayloadType_COMPRESSABLE, largeRespSize) } } // DoClientStreaming performs a client streaming RPC. func DoClientStreaming(tc testpb.TestServiceClient, args ...grpc.CallOption) { stream, err := tc.StreamingInputCall(context.Background(), args...) if err != nil { grpclog.Fatalf("%v.StreamingInputCall(_) = _, %v", tc, err) } var sum int for _, s := range reqSizes { pl := ClientNewPayload(testpb.PayloadType_COMPRESSABLE, s) req := &testpb.StreamingInputCallRequest{ Payload: pl, } if err := stream.Send(req); err != nil { grpclog.Fatalf("%v has error %v while sending %v", stream, err, req) } sum += s } reply, err := stream.CloseAndRecv() if err != nil { grpclog.Fatalf("%v.CloseAndRecv() got error %v, want %v", stream, err, nil) } if reply.GetAggregatedPayloadSize() != int32(sum) { grpclog.Fatalf("%v.CloseAndRecv().GetAggregatePayloadSize() = %v; want %v", stream, reply.GetAggregatedPayloadSize(), sum) } } // DoServerStreaming performs a server streaming RPC. func DoServerStreaming(tc testpb.TestServiceClient, args ...grpc.CallOption) { respParam := make([]*testpb.ResponseParameters, len(respSizes)) for i, s := range respSizes { respParam[i] = &testpb.ResponseParameters{ Size: proto.Int32(int32(s)), } } req := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseParameters: respParam, } stream, err := tc.StreamingOutputCall(context.Background(), req, args...) if err != nil { grpclog.Fatalf("%v.StreamingOutputCall(_) = _, %v", tc, err) } var rpcStatus error var respCnt int var index int for { reply, err := stream.Recv() if err != nil { rpcStatus = err break } t := reply.GetPayload().GetType() if t != testpb.PayloadType_COMPRESSABLE { grpclog.Fatalf("Got the reply of type %d, want %d", t, testpb.PayloadType_COMPRESSABLE) } size := len(reply.GetPayload().GetBody()) if size != int(respSizes[index]) { grpclog.Fatalf("Got reply body of length %d, want %d", size, respSizes[index]) } index++ respCnt++ } if rpcStatus != io.EOF { grpclog.Fatalf("Failed to finish the server streaming rpc: %v", rpcStatus) } if respCnt != len(respSizes) { grpclog.Fatalf("Got %d reply, want %d", len(respSizes), respCnt) } } // DoPingPong performs ping-pong style bi-directional streaming RPC. func DoPingPong(tc testpb.TestServiceClient, args ...grpc.CallOption) { stream, err := tc.FullDuplexCall(context.Background(), args...) 
if err != nil { grpclog.Fatalf("%v.FullDuplexCall(_) = _, %v", tc, err) } var index int for index < len(reqSizes) { respParam := []*testpb.ResponseParameters{ { Size: proto.Int32(int32(respSizes[index])), }, } pl := ClientNewPayload(testpb.PayloadType_COMPRESSABLE, reqSizes[index]) req := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseParameters: respParam, Payload: pl, } if err := stream.Send(req); err != nil { grpclog.Fatalf("%v has error %v while sending %v", stream, err, req) } reply, err := stream.Recv() if err != nil { grpclog.Fatalf("%v.Recv() = %v", stream, err) } t := reply.GetPayload().GetType() if t != testpb.PayloadType_COMPRESSABLE { grpclog.Fatalf("Got the reply of type %d, want %d", t, testpb.PayloadType_COMPRESSABLE) } size := len(reply.GetPayload().GetBody()) if size != int(respSizes[index]) { grpclog.Fatalf("Got reply body of length %d, want %d", size, respSizes[index]) } index++ } if err := stream.CloseSend(); err != nil { grpclog.Fatalf("%v.CloseSend() got %v, want %v", stream, err, nil) } if _, err := stream.Recv(); err != io.EOF { grpclog.Fatalf("%v failed to complete the ping pong test: %v", stream, err) } } // DoEmptyStream sets up a bi-directional streaming with zero message. func DoEmptyStream(tc testpb.TestServiceClient, args ...grpc.CallOption) { stream, err := tc.FullDuplexCall(context.Background(), args...) if err != nil { grpclog.Fatalf("%v.FullDuplexCall(_) = _, %v", tc, err) } if err := stream.CloseSend(); err != nil { grpclog.Fatalf("%v.CloseSend() got %v, want %v", stream, err, nil) } if _, err := stream.Recv(); err != io.EOF { grpclog.Fatalf("%v failed to complete the empty stream test: %v", stream, err) } } // DoTimeoutOnSleepingServer performs an RPC on a sleep server which causes RPC timeout. func DoTimeoutOnSleepingServer(tc testpb.TestServiceClient, args ...grpc.CallOption) { ctx, cancel := context.WithTimeout(context.Background(), 1*time.Millisecond) defer cancel() stream, err := tc.FullDuplexCall(ctx, args...) if err != nil { if grpc.Code(err) == codes.DeadlineExceeded { return } grpclog.Fatalf("%v.FullDuplexCall(_) = _, %v", tc, err) } pl := ClientNewPayload(testpb.PayloadType_COMPRESSABLE, 27182) req := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), Payload: pl, } if err := stream.Send(req); err != nil { if grpc.Code(err) != codes.DeadlineExceeded { grpclog.Fatalf("%v.Send(_) = %v", stream, err) } } if _, err := stream.Recv(); grpc.Code(err) != codes.DeadlineExceeded { grpclog.Fatalf("%v.Recv() = _, %v, want error code %d", stream, err, codes.DeadlineExceeded) } } // DoComputeEngineCreds performs a unary RPC with compute engine auth.
func DoComputeEngineCreds(tc testpb.TestServiceClient, serviceAccount, oauthScope string) { pl := ClientNewPayload(testpb.PayloadType_COMPRESSABLE, largeReqSize) req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseSize: proto.Int32(int32(largeRespSize)), Payload: pl, FillUsername: proto.Bool(true), FillOauthScope: proto.Bool(true), } reply, err := tc.UnaryCall(context.Background(), req) if err != nil { grpclog.Fatal("/TestService/UnaryCall RPC failed: ", err) } user := reply.GetUsername() scope := reply.GetOauthScope() if user != serviceAccount { grpclog.Fatalf("Got user name %q, want %q.", user, serviceAccount) } if !strings.Contains(oauthScope, scope) { grpclog.Fatalf("Got OAuth scope %q which is NOT a substring of %q.", scope, oauthScope) } } func getServiceAccountJSONKey(keyFile string) []byte { jsonKey, err := ioutil.ReadFile(keyFile) if err != nil { grpclog.Fatalf("Failed to read the service account key file: %v", err) } return jsonKey } // DoServiceAccountCreds performs a unary RPC with service account auth. func DoServiceAccountCreds(tc testpb.TestServiceClient, serviceAccountKeyFile, oauthScope string) { pl := ClientNewPayload(testpb.PayloadType_COMPRESSABLE, largeReqSize) req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseSize: proto.Int32(int32(largeRespSize)), Payload: pl, FillUsername: proto.Bool(true), FillOauthScope: proto.Bool(true), } reply, err := tc.UnaryCall(context.Background(), req) if err != nil { grpclog.Fatal("/TestService/UnaryCall RPC failed: ", err) } jsonKey := getServiceAccountJSONKey(serviceAccountKeyFile) user := reply.GetUsername() scope := reply.GetOauthScope() if !strings.Contains(string(jsonKey), user) { grpclog.Fatalf("Got user name %q which is NOT a substring of %q.", user, jsonKey) } if !strings.Contains(oauthScope, scope) { grpclog.Fatalf("Got OAuth scope %q which is NOT a substring of %q.", scope, oauthScope) } } // DoJWTTokenCreds performs a unary RPC with JWT token auth. func DoJWTTokenCreds(tc testpb.TestServiceClient, serviceAccountKeyFile string) { pl := ClientNewPayload(testpb.PayloadType_COMPRESSABLE, largeReqSize) req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseSize: proto.Int32(int32(largeRespSize)), Payload: pl, FillUsername: proto.Bool(true), } reply, err := tc.UnaryCall(context.Background(), req) if err != nil { grpclog.Fatal("/TestService/UnaryCall RPC failed: ", err) } jsonKey := getServiceAccountJSONKey(serviceAccountKeyFile) user := reply.GetUsername() if !strings.Contains(string(jsonKey), user) { grpclog.Fatalf("Got user name %q which is NOT a substring of %q.", user, jsonKey) } } // GetToken obtains an OAUTH token from the input. func GetToken(serviceAccountKeyFile string, oauthScope string) *oauth2.Token { jsonKey := getServiceAccountJSONKey(serviceAccountKeyFile) config, err := google.JWTConfigFromJSON(jsonKey, oauthScope) if err != nil { grpclog.Fatalf("Failed to get the config: %v", err) } token, err := config.TokenSource(context.Background()).Token() if err != nil { grpclog.Fatalf("Failed to get the token: %v", err) } return token } // DoOauth2TokenCreds performs a unary RPC with OAUTH2 token auth. 
func DoOauth2TokenCreds(tc testpb.TestServiceClient, serviceAccountKeyFile, oauthScope string) { pl := ClientNewPayload(testpb.PayloadType_COMPRESSABLE, largeReqSize) req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseSize: proto.Int32(int32(largeRespSize)), Payload: pl, FillUsername: proto.Bool(true), FillOauthScope: proto.Bool(true), } reply, err := tc.UnaryCall(context.Background(), req) if err != nil { grpclog.Fatal("/TestService/UnaryCall RPC failed: ", err) } jsonKey := getServiceAccountJSONKey(serviceAccountKeyFile) user := reply.GetUsername() scope := reply.GetOauthScope() if !strings.Contains(string(jsonKey), user) { grpclog.Fatalf("Got user name %q which is NOT a substring of %q.", user, jsonKey) } if !strings.Contains(oauthScope, scope) { grpclog.Fatalf("Got OAuth scope %q which is NOT a substring of %q.", scope, oauthScope) } } // DoPerRPCCreds performs a unary RPC with per RPC OAUTH2 token. func DoPerRPCCreds(tc testpb.TestServiceClient, serviceAccountKeyFile, oauthScope string) { jsonKey := getServiceAccountJSONKey(serviceAccountKeyFile) pl := ClientNewPayload(testpb.PayloadType_COMPRESSABLE, largeReqSize) req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseSize: proto.Int32(int32(largeRespSize)), Payload: pl, FillUsername: proto.Bool(true), FillOauthScope: proto.Bool(true), } token := GetToken(serviceAccountKeyFile, oauthScope) kv := map[string]string{"authorization": token.TokenType + " " + token.AccessToken} ctx := metadata.NewOutgoingContext(context.Background(), metadata.MD{"authorization": []string{kv["authorization"]}}) reply, err := tc.UnaryCall(ctx, req) if err != nil { grpclog.Fatal("/TestService/UnaryCall RPC failed: ", err) } user := reply.GetUsername() scope := reply.GetOauthScope() if !strings.Contains(string(jsonKey), user) { grpclog.Fatalf("Got user name %q which is NOT a substring of %q.", user, jsonKey) } if !strings.Contains(oauthScope, scope) { grpclog.Fatalf("Got OAuth scope %q which is NOT a substring of %q.", scope, oauthScope) } } var ( testMetadata = metadata.MD{ "key1": []string{"value1"}, "key2": []string{"value2"}, } ) // DoCancelAfterBegin cancels the RPC after metadata has been sent but before payloads are sent. func DoCancelAfterBegin(tc testpb.TestServiceClient, args ...grpc.CallOption) { ctx, cancel := context.WithCancel(metadata.NewOutgoingContext(context.Background(), testMetadata)) stream, err := tc.StreamingInputCall(ctx, args...) if err != nil { grpclog.Fatalf("%v.StreamingInputCall(_) = _, %v", tc, err) } cancel() _, err = stream.CloseAndRecv() if grpc.Code(err) != codes.Canceled { grpclog.Fatalf("%v.CloseAndRecv() got error code %d, want %d", stream, grpc.Code(err), codes.Canceled) } } // DoCancelAfterFirstResponse cancels the RPC after receiving the first message from the server. func DoCancelAfterFirstResponse(tc testpb.TestServiceClient, args ...grpc.CallOption) { ctx, cancel := context.WithCancel(context.Background()) stream, err := tc.FullDuplexCall(ctx, args...) 
if err != nil { grpclog.Fatalf("%v.FullDuplexCall(_) = _, %v", tc, err) } respParam := []*testpb.ResponseParameters{ { Size: proto.Int32(31415), }, } pl := ClientNewPayload(testpb.PayloadType_COMPRESSABLE, 27182) req := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseParameters: respParam, Payload: pl, } if err := stream.Send(req); err != nil { grpclog.Fatalf("%v has error %v while sending %v", stream, err, req) } if _, err := stream.Recv(); err != nil { grpclog.Fatalf("%v.Recv() = %v", stream, err) } cancel() if _, err := stream.Recv(); grpc.Code(err) != codes.Canceled { grpclog.Fatalf("%v compleled with error code %d, want %d", stream, grpc.Code(err), codes.Canceled) } } var ( initialMetadataValue = "test_initial_metadata_value" trailingMetadataValue = "\x0a\x0b\x0a\x0b\x0a\x0b" customMetadata = metadata.Pairs( initialMetadataKey, initialMetadataValue, trailingMetadataKey, trailingMetadataValue, ) ) func validateMetadata(header, trailer metadata.MD) { if len(header[initialMetadataKey]) != 1 { grpclog.Fatalf("Expected exactly one header from server. Received %d", len(header[initialMetadataKey])) } if header[initialMetadataKey][0] != initialMetadataValue { grpclog.Fatalf("Got header %s; want %s", header[initialMetadataKey][0], initialMetadataValue) } if len(trailer[trailingMetadataKey]) != 1 { grpclog.Fatalf("Expected exactly one trailer from server. Received %d", len(trailer[trailingMetadataKey])) } if trailer[trailingMetadataKey][0] != trailingMetadataValue { grpclog.Fatalf("Got trailer %s; want %s", trailer[trailingMetadataKey][0], trailingMetadataValue) } } // DoCustomMetadata checks that metadata is echoed back to the client. func DoCustomMetadata(tc testpb.TestServiceClient, args ...grpc.CallOption) { // Testing with UnaryCall. pl := ClientNewPayload(testpb.PayloadType_COMPRESSABLE, 1) req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseSize: proto.Int32(int32(1)), Payload: pl, } ctx := metadata.NewOutgoingContext(context.Background(), customMetadata) var header, trailer metadata.MD args = append(args, grpc.Header(&header), grpc.Trailer(&trailer)) reply, err := tc.UnaryCall( ctx, req, args..., ) if err != nil { grpclog.Fatal("/TestService/UnaryCall RPC failed: ", err) } t := reply.GetPayload().GetType() s := len(reply.GetPayload().GetBody()) if t != testpb.PayloadType_COMPRESSABLE || s != 1 { grpclog.Fatalf("Got the reply with type %d len %d; want %d, %d", t, s, testpb.PayloadType_COMPRESSABLE, 1) } validateMetadata(header, trailer) // Testing with FullDuplex. stream, err := tc.FullDuplexCall(ctx, args...) 
if err != nil { grpclog.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } respParam := []*testpb.ResponseParameters{ { Size: proto.Int32(1), }, } streamReq := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseParameters: respParam, Payload: pl, } if err := stream.Send(streamReq); err != nil { grpclog.Fatalf("%v has error %v while sending %v", stream, err, streamReq) } streamHeader, err := stream.Header() if err != nil { grpclog.Fatalf("%v.Header() = %v", stream, err) } if _, err := stream.Recv(); err != nil { grpclog.Fatalf("%v.Recv() = %v", stream, err) } if err := stream.CloseSend(); err != nil { grpclog.Fatalf("%v.CloseSend() = %v, want ", stream, err) } if _, err := stream.Recv(); err != io.EOF { grpclog.Fatalf("%v failed to complete the custom metadata test: %v", stream, err) } streamTrailer := stream.Trailer() validateMetadata(streamHeader, streamTrailer) } // DoStatusCodeAndMessage checks that the status code is propagated back to the client. func DoStatusCodeAndMessage(tc testpb.TestServiceClient, args ...grpc.CallOption) { var code int32 = 2 msg := "test status message" expectedErr := grpc.Errorf(codes.Code(code), msg) respStatus := &testpb.EchoStatus{ Code: proto.Int32(code), Message: proto.String(msg), } // Test UnaryCall. req := &testpb.SimpleRequest{ ResponseStatus: respStatus, } if _, err := tc.UnaryCall(context.Background(), req, args...); err == nil || err.Error() != expectedErr.Error() { grpclog.Fatalf("%v.UnaryCall(_, %v) = _, %v, want _, %v", tc, req, err, expectedErr) } // Test FullDuplexCall. stream, err := tc.FullDuplexCall(context.Background(), args...) if err != nil { grpclog.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } streamReq := &testpb.StreamingOutputCallRequest{ ResponseStatus: respStatus, } if err := stream.Send(streamReq); err != nil { grpclog.Fatalf("%v has error %v while sending %v, want ", stream, err, streamReq) } if err := stream.CloseSend(); err != nil { grpclog.Fatalf("%v.CloseSend() = %v, want ", stream, err) } if _, err = stream.Recv(); err.Error() != expectedErr.Error() { grpclog.Fatalf("%v.Recv() returned error %v, want %v", stream, err, expectedErr) } } // DoUnimplementedService attempts to call a method from an unimplemented service. func DoUnimplementedService(tc testpb.UnimplementedServiceClient) { _, err := tc.UnimplementedCall(context.Background(), &testpb.Empty{}) if grpc.Code(err) != codes.Unimplemented { grpclog.Fatalf("%v.UnimplementedCall() = _, %v, want _, %v", tc, grpc.Code(err), codes.Unimplemented) } } // DoUnimplementedMethod attempts to call an unimplemented method. func DoUnimplementedMethod(cc *grpc.ClientConn) { var req, reply proto.Message if err := grpc.Invoke(context.Background(), "/grpc.testing.TestService/UnimplementedCall", req, reply, cc); err == nil || grpc.Code(err) != codes.Unimplemented { grpclog.Fatalf("grpc.Invoke(_, _, _, _, _) = %v, want error code %s", err, codes.Unimplemented) } } type testServer struct { } // NewTestServer creates a test server for test service. 
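// Editorial sketch (not part of the original file): the test service
// implementation below is typically registered on a grpc.Server and served on
// a listener, roughly as follows. The listener argument and the "net" import
// it requires are assumptions made for illustration.
func serveTestService(lis net.Listener) error {
	s := grpc.NewServer()
	testpb.RegisterTestServiceServer(s, NewTestServer())
	return s.Serve(lis)
}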
func NewTestServer() testpb.TestServiceServer { return &testServer{} } func (s *testServer) EmptyCall(ctx context.Context, in *testpb.Empty) (*testpb.Empty, error) { return new(testpb.Empty), nil } func serverNewPayload(t testpb.PayloadType, size int32) (*testpb.Payload, error) { if size < 0 { return nil, fmt.Errorf("requested a response with invalid length %d", size) } body := make([]byte, size) switch t { case testpb.PayloadType_COMPRESSABLE: case testpb.PayloadType_UNCOMPRESSABLE: return nil, fmt.Errorf("payloadType UNCOMPRESSABLE is not supported") default: return nil, fmt.Errorf("unsupported payload type: %d", t) } return &testpb.Payload{ Type: t.Enum(), Body: body, }, nil } func (s *testServer) UnaryCall(ctx context.Context, in *testpb.SimpleRequest) (*testpb.SimpleResponse, error) { status := in.GetResponseStatus() if md, ok := metadata.FromIncomingContext(ctx); ok { if initialMetadata, ok := md[initialMetadataKey]; ok { header := metadata.Pairs(initialMetadataKey, initialMetadata[0]) grpc.SendHeader(ctx, header) } if trailingMetadata, ok := md[trailingMetadataKey]; ok { trailer := metadata.Pairs(trailingMetadataKey, trailingMetadata[0]) grpc.SetTrailer(ctx, trailer) } } if status != nil && *status.Code != 0 { return nil, grpc.Errorf(codes.Code(*status.Code), *status.Message) } pl, err := serverNewPayload(in.GetResponseType(), in.GetResponseSize()) if err != nil { return nil, err } return &testpb.SimpleResponse{ Payload: pl, }, nil } func (s *testServer) StreamingOutputCall(args *testpb.StreamingOutputCallRequest, stream testpb.TestService_StreamingOutputCallServer) error { cs := args.GetResponseParameters() for _, c := range cs { if us := c.GetIntervalUs(); us > 0 { time.Sleep(time.Duration(us) * time.Microsecond) } pl, err := serverNewPayload(args.GetResponseType(), c.GetSize()) if err != nil { return err } if err := stream.Send(&testpb.StreamingOutputCallResponse{ Payload: pl, }); err != nil { return err } } return nil } func (s *testServer) StreamingInputCall(stream testpb.TestService_StreamingInputCallServer) error { var sum int for { in, err := stream.Recv() if err == io.EOF { return stream.SendAndClose(&testpb.StreamingInputCallResponse{ AggregatedPayloadSize: proto.Int32(int32(sum)), }) } if err != nil { return err } p := in.GetPayload().GetBody() sum += len(p) } } func (s *testServer) FullDuplexCall(stream testpb.TestService_FullDuplexCallServer) error { if md, ok := metadata.FromIncomingContext(stream.Context()); ok { if initialMetadata, ok := md[initialMetadataKey]; ok { header := metadata.Pairs(initialMetadataKey, initialMetadata[0]) stream.SendHeader(header) } if trailingMetadata, ok := md[trailingMetadataKey]; ok { trailer := metadata.Pairs(trailingMetadataKey, trailingMetadata[0]) stream.SetTrailer(trailer) } } for { in, err := stream.Recv() if err == io.EOF { // read done. 
return nil } if err != nil { return err } status := in.GetResponseStatus() if status != nil && *status.Code != 0 { return grpc.Errorf(codes.Code(*status.Code), *status.Message) } cs := in.GetResponseParameters() for _, c := range cs { if us := c.GetIntervalUs(); us > 0 { time.Sleep(time.Duration(us) * time.Microsecond) } pl, err := serverNewPayload(in.GetResponseType(), c.GetSize()) if err != nil { return err } if err := stream.Send(&testpb.StreamingOutputCallResponse{ Payload: pl, }); err != nil { return err } } } } func (s *testServer) HalfDuplexCall(stream testpb.TestService_HalfDuplexCallServer) error { var msgBuf []*testpb.StreamingOutputCallRequest for { in, err := stream.Recv() if err == io.EOF { // read done. break } if err != nil { return err } msgBuf = append(msgBuf, in) } for _, m := range msgBuf { cs := m.GetResponseParameters() for _, c := range cs { if us := c.GetIntervalUs(); us > 0 { time.Sleep(time.Duration(us) * time.Microsecond) } pl, err := serverNewPayload(m.GetResponseType(), c.GetSize()) if err != nil { return err } if err := stream.Send(&testpb.StreamingOutputCallResponse{ Payload: pl, }); err != nil { return err } } } return nil } golang-google-grpc-1.6.0/keepalive/000077500000000000000000000000001315416461300171415ustar00rootroot00000000000000golang-google-grpc-1.6.0/keepalive/keepalive.go000066400000000000000000000070111315416461300214340ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package keepalive defines configurable parameters for point-to-point healthcheck. package keepalive import ( "time" ) // ClientParameters is used to set keepalive parameters on the client-side. // These configure how the client will actively probe to notice when a connection is broken // and send pings so intermediaries will be aware of the liveness of the connection. // Make sure these parameters are set in coordination with the keepalive policy on the server, // as incompatible settings can result in closing of connection. type ClientParameters struct { // After a duration of this time if the client doesn't see any activity it pings the server to see if the transport is still alive. Time time.Duration // The current default value is infinity. // After having pinged for keepalive check, the client waits for a duration of Timeout and if no activity is seen even after that // the connection is closed. Timeout time.Duration // The current default value is 20 seconds. // If true, client runs keepalive checks even with no active RPCs. PermitWithoutStream bool // false by default. } // ServerParameters is used to set keepalive and max-age parameters on the server-side. type ServerParameters struct { // MaxConnectionIdle is a duration for the amount of time after which an idle connection would be closed by sending a GoAway. // Idleness duration is defined since the most recent time the number of outstanding RPCs became zero or the connection establishment. MaxConnectionIdle time.Duration // The current default value is infinity. 
// MaxConnectionAge is a duration for the maximum amount of time a connection may exist before it will be closed by sending a GoAway. // A random jitter of +/-10% will be added to MaxConnectionAge to spread out connection storms. MaxConnectionAge time.Duration // The current default value is infinity. // MaxConnectinoAgeGrace is an additive period after MaxConnectionAge after which the connection will be forcibly closed. MaxConnectionAgeGrace time.Duration // The current default value is infinity. // After a duration of this time if the server doesn't see any activity it pings the client to see if the transport is still alive. Time time.Duration // The current default value is 2 hours. // After having pinged for keepalive check, the server waits for a duration of Timeout and if no activity is seen even after that // the connection is closed. Timeout time.Duration // The current default value is 20 seconds. } // EnforcementPolicy is used to set keepalive enforcement policy on the server-side. // Server will close connection with a client that violates this policy. type EnforcementPolicy struct { // MinTime is the minimum amount of time a client should wait before sending a keepalive ping. MinTime time.Duration // The current default value is 5 minutes. // If true, server expects keepalive pings even when there are no active streams(RPCs). PermitWithoutStream bool // false by default. } golang-google-grpc-1.6.0/metadata/000077500000000000000000000000001315416461300167545ustar00rootroot00000000000000golang-google-grpc-1.6.0/metadata/metadata.go000066400000000000000000000101421315416461300210610ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package metadata define the structure of the metadata supported by gRPC library. // Please refer to https://grpc.io/docs/guides/wire.html for more information about custom-metadata. package metadata // import "google.golang.org/grpc/metadata" import ( "fmt" "strings" "golang.org/x/net/context" ) // DecodeKeyValue returns k, v, nil. It is deprecated and should not be used. func DecodeKeyValue(k, v string) (string, string, error) { return k, v, nil } // MD is a mapping from metadata keys to values. Users should use the following // two convenience functions New and Pairs to generate MD. type MD map[string][]string // New creates an MD from a given key-value map. // // Only the following ASCII characters are allowed in keys: // - digits: 0-9 // - uppercase letters: A-Z (normalized to lower) // - lowercase letters: a-z // - special characters: -_. // Uppercase letters are automatically converted to lowercase. // // Keys beginning with "grpc-" are reserved for grpc-internal use only and may // result in errors if set in metadata. func New(m map[string]string) MD { md := MD{} for k, val := range m { key := strings.ToLower(k) md[key] = append(md[key], val) } return md } // Pairs returns an MD formed by the mapping of key, value ... // Pairs panics if len(kv) is odd. 
// // Only the following ASCII characters are allowed in keys: // - digits: 0-9 // - uppercase letters: A-Z (normalized to lower) // - lowercase letters: a-z // - special characters: -_. // Uppercase letters are automatically converted to lowercase. // // Keys beginning with "grpc-" are reserved for grpc-internal use only and may // result in errors if set in metadata. func Pairs(kv ...string) MD { if len(kv)%2 == 1 { panic(fmt.Sprintf("metadata: Pairs got the odd number of input pairs for metadata: %d", len(kv))) } md := MD{} var key string for i, s := range kv { if i%2 == 0 { key = strings.ToLower(s) continue } md[key] = append(md[key], s) } return md } // Len returns the number of items in md. func (md MD) Len() int { return len(md) } // Copy returns a copy of md. func (md MD) Copy() MD { return Join(md) } // Join joins any number of mds into a single MD. // The order of values for each key is determined by the order in which // the mds containing those values are presented to Join. func Join(mds ...MD) MD { out := MD{} for _, md := range mds { for k, v := range md { out[k] = append(out[k], v...) } } return out } type mdIncomingKey struct{} type mdOutgoingKey struct{} // NewIncomingContext creates a new context with incoming md attached. func NewIncomingContext(ctx context.Context, md MD) context.Context { return context.WithValue(ctx, mdIncomingKey{}, md) } // NewOutgoingContext creates a new context with outgoing md attached. func NewOutgoingContext(ctx context.Context, md MD) context.Context { return context.WithValue(ctx, mdOutgoingKey{}, md) } // FromIncomingContext returns the incoming metadata in ctx if it exists. The // returned MD should not be modified. Writing to it may cause races. // Modification should be made to copies of the returned MD. func FromIncomingContext(ctx context.Context) (md MD, ok bool) { md, ok = ctx.Value(mdIncomingKey{}).(MD) return } // FromOutgoingContext returns the outgoing metadata in ctx if it exists. The // returned MD should not be modified. Writing to it may cause races. // Modification should be made to the copies of the returned MD. func FromOutgoingContext(ctx context.Context) (md MD, ok bool) { md, ok = ctx.Value(mdOutgoingKey{}).(MD) return } golang-google-grpc-1.6.0/metadata/metadata_test.go000066400000000000000000000035411315416461300221250ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package metadata import ( "reflect" "testing" ) func TestPairsMD(t *testing.T) { for _, test := range []struct { // input kv []string // output md MD }{ {[]string{}, MD{}}, {[]string{"k1", "v1", "k1", "v2"}, MD{"k1": []string{"v1", "v2"}}}, } { md := Pairs(test.kv...) 
if !reflect.DeepEqual(md, test.md) { t.Fatalf("Pairs(%v) = %v, want %v", test.kv, md, test.md) } } } func TestCopy(t *testing.T) { const key, val = "key", "val" orig := Pairs(key, val) copy := orig.Copy() if !reflect.DeepEqual(orig, copy) { t.Errorf("copied value not equal to the original, got %v, want %v", copy, orig) } orig[key][0] = "foo" if v := copy[key][0]; v != val { t.Errorf("change in original should not affect copy, got %q, want %q", v, val) } } func TestJoin(t *testing.T) { for _, test := range []struct { mds []MD want MD }{ {[]MD{}, MD{}}, {[]MD{Pairs("foo", "bar")}, Pairs("foo", "bar")}, {[]MD{Pairs("foo", "bar"), Pairs("foo", "baz")}, Pairs("foo", "bar", "foo", "baz")}, {[]MD{Pairs("foo", "bar"), Pairs("foo", "baz"), Pairs("zip", "zap")}, Pairs("foo", "bar", "foo", "baz", "zip", "zap")}, } { md := Join(test.mds...) if !reflect.DeepEqual(md, test.want) { t.Errorf("context's metadata is %v, want %v", md, test.want) } } } golang-google-grpc-1.6.0/naming/000077500000000000000000000000001315416461300164455ustar00rootroot00000000000000golang-google-grpc-1.6.0/naming/dns_resolver.go000066400000000000000000000205031315416461300215010ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package naming import ( "errors" "fmt" "net" "strconv" "time" "golang.org/x/net/context" "google.golang.org/grpc/grpclog" ) const ( defaultPort = "443" defaultFreq = time.Minute * 30 ) var ( errMissingAddr = errors.New("missing address") errWatcherClose = errors.New("watcher has been closed") ) // NewDNSResolverWithFreq creates a DNS Resolver that can resolve DNS names, and // create watchers that poll the DNS server using the frequency set by freq. func NewDNSResolverWithFreq(freq time.Duration) (Resolver, error) { return &dnsResolver{freq: freq}, nil } // NewDNSResolver creates a DNS Resolver that can resolve DNS names, and create // watchers that poll the DNS server using the default frequency defined by defaultFreq. func NewDNSResolver() (Resolver, error) { return NewDNSResolverWithFreq(defaultFreq) } // dnsResolver handles name resolution for names following the DNS scheme type dnsResolver struct { // frequency of polling the DNS server that the watchers created by this resolver will use. freq time.Duration } // formatIP returns ok = false if addr is not a valid textual representation of an IP address. // If addr is an IPv4 address, return the addr and ok = true. // If addr is an IPv6 address, return the addr enclosed in square brackets and ok = true. func formatIP(addr string) (addrIP string, ok bool) { ip := net.ParseIP(addr) if ip == nil { return "", false } if ip.To4() != nil { return addr, true } return "[" + addr + "]", true } // parseTarget takes the user input target string, returns formatted host and port info. // If target doesn't specify a port, set the port to be the defaultPort. // If target is in IPv6 format and host-name is enclosed in sqarue brackets, brackets // are strippd when setting the host. 
// examples:
// target: "www.google.com" returns host: "www.google.com", port: "443"
// target: "ipv4-host:80" returns host: "ipv4-host", port: "80"
// target: "[ipv6-host]" returns host: "ipv6-host", port: "443"
// target: ":80" returns host: "localhost", port: "80"
// target: ":" returns host: "localhost", port: "443"
func parseTarget(target string) (host, port string, err error) {
	if target == "" {
		return "", "", errMissingAddr
	}
	if ip := net.ParseIP(target); ip != nil {
		// target is an IPv4 or IPv6 (without brackets) address
		return target, defaultPort, nil
	}
	if host, port, err := net.SplitHostPort(target); err == nil {
		// target has port, i.e. ipv4-host:port, [ipv6-host]:port, host-name:port
		if host == "" {
			// Keep consistent with net.Dial(): If the host is empty, as in ":80", the local system is assumed.
			host = "localhost"
		}
		if port == "" {
			// If the port field is empty (target ends with colon), e.g. "[::1]:", defaultPort is used.
			port = defaultPort
		}
		return host, port, nil
	}
	if host, port, err := net.SplitHostPort(target + ":" + defaultPort); err == nil {
		// target doesn't have port
		return host, port, nil
	}
	return "", "", fmt.Errorf("invalid target address %v", target)
}

// Resolve creates a watcher that watches the name resolution of the target.
func (r *dnsResolver) Resolve(target string) (Watcher, error) {
	host, port, err := parseTarget(target)
	if err != nil {
		return nil, err
	}

	if net.ParseIP(host) != nil {
		ipWatcher := &ipWatcher{
			updateChan: make(chan *Update, 1),
		}
		host, _ = formatIP(host)
		ipWatcher.updateChan <- &Update{Op: Add, Addr: host + ":" + port}
		return ipWatcher, nil
	}

	ctx, cancel := context.WithCancel(context.Background())
	return &dnsWatcher{
		r:      r,
		host:   host,
		port:   port,
		ctx:    ctx,
		cancel: cancel,
		t:      time.NewTimer(0),
	}, nil
}

// dnsWatcher watches for the name resolution update for a specific target.
type dnsWatcher struct {
	r    *dnsResolver
	host string
	port string
	// The latest resolved address set.
	curAddrs map[string]*Update
	ctx      context.Context
	cancel   context.CancelFunc
	t        *time.Timer
}

// ipWatcher watches for the name resolution update for an IP address.
type ipWatcher struct {
	updateChan chan *Update
}

// Next returns the address resolution Update for the target. For IP address,
// the resolution is itself, thus polling the name server is unnecessary. Therefore,
// Next() will return an Update the first time it is called, and will be blocked
// for all following calls as no Update exists until the watcher is closed.
func (i *ipWatcher) Next() ([]*Update, error) {
	u, ok := <-i.updateChan
	if !ok {
		return nil, errWatcherClose
	}
	return []*Update{u}, nil
}

// Close closes the ipWatcher.
func (i *ipWatcher) Close() {
	close(i.updateChan)
}

// AddressType indicates the address type returned by name resolution.
type AddressType uint8

const (
	// Backend indicates the server is a backend server.
	Backend AddressType = iota
	// GRPCLB indicates the server is a grpclb load balancer.
	GRPCLB
)

// AddrMetadataGRPCLB contains the information the name resolver for grpclb should provide. The
// name resolver used by the grpclb balancer is required to provide this type of metadata in
// its address updates.
type AddrMetadataGRPCLB struct {
	// AddrType is the type of server (grpc load balancer or backend).
	AddrType AddressType
	// ServerName is the name of the grpc load balancer. Used for authentication.
ServerName string } // compileUpdate compares the old resolved addresses and newly resolved addresses, // and generates an update list func (w *dnsWatcher) compileUpdate(newAddrs map[string]*Update) []*Update { var res []*Update for a, u := range w.curAddrs { if _, ok := newAddrs[a]; !ok { u.Op = Delete res = append(res, u) } } for a, u := range newAddrs { if _, ok := w.curAddrs[a]; !ok { res = append(res, u) } } return res } func (w *dnsWatcher) lookupSRV() map[string]*Update { newAddrs := make(map[string]*Update) _, srvs, err := lookupSRV(w.ctx, "grpclb", "tcp", w.host) if err != nil { grpclog.Infof("grpc: failed dns SRV record lookup due to %v.\n", err) return nil } for _, s := range srvs { lbAddrs, err := lookupHost(w.ctx, s.Target) if err != nil { grpclog.Warningf("grpc: failed load banlacer address dns lookup due to %v.\n", err) continue } for _, a := range lbAddrs { a, ok := formatIP(a) if !ok { grpclog.Errorf("grpc: failed IP parsing due to %v.\n", err) continue } addr := a + ":" + strconv.Itoa(int(s.Port)) newAddrs[addr] = &Update{Addr: addr, Metadata: AddrMetadataGRPCLB{AddrType: GRPCLB, ServerName: s.Target}} } } return newAddrs } func (w *dnsWatcher) lookupHost() map[string]*Update { newAddrs := make(map[string]*Update) addrs, err := lookupHost(w.ctx, w.host) if err != nil { grpclog.Warningf("grpc: failed dns A record lookup due to %v.\n", err) return nil } for _, a := range addrs { a, ok := formatIP(a) if !ok { grpclog.Errorf("grpc: failed IP parsing due to %v.\n", err) continue } addr := a + ":" + w.port newAddrs[addr] = &Update{Addr: addr} } return newAddrs } func (w *dnsWatcher) lookup() []*Update { newAddrs := w.lookupSRV() if newAddrs == nil { // If failed to get any balancer address (either no corresponding SRV for the // target, or caused by failure during resolution/parsing of the balancer target), // return any A record info available. newAddrs = w.lookupHost() } result := w.compileUpdate(newAddrs) w.curAddrs = newAddrs return result } // Next returns the resolved address update(delta) for the target. If there's no // change, it will sleep for 30 mins and try to resolve again after that. func (w *dnsWatcher) Next() ([]*Update, error) { for { select { case <-w.ctx.Done(): return nil, errWatcherClose case <-w.t.C: } result := w.lookup() // Next lookup should happen after an interval defined by w.r.freq. w.t.Reset(w.r.freq) if len(result) > 0 { return result, nil } } } func (w *dnsWatcher) Close() { w.cancel() } golang-google-grpc-1.6.0/naming/dns_resolver_test.go000066400000000000000000000210541315416461300225420ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package naming import ( "fmt" "net" "reflect" "sync" "testing" "time" ) func newUpdateWithMD(op Operation, addr, lb string) *Update { return &Update{ Op: op, Addr: addr, Metadata: AddrMetadataGRPCLB{AddrType: GRPCLB, ServerName: lb}, } } func toMap(u []*Update) map[string]*Update { m := make(map[string]*Update) for _, v := range u { m[v.Addr] = v } return m } func TestCompileUpdate(t *testing.T) { tests := []struct { oldAddrs []string newAddrs []string want []*Update }{ { []string{}, []string{"1.0.0.1"}, []*Update{{Op: Add, Addr: "1.0.0.1"}}, }, { []string{"1.0.0.1"}, []string{"1.0.0.1"}, []*Update{}, }, { []string{"1.0.0.0"}, []string{"1.0.0.1"}, []*Update{{Op: Delete, Addr: "1.0.0.0"}, {Op: Add, Addr: "1.0.0.1"}}, }, { []string{"1.0.0.1"}, []string{"1.0.0.0"}, []*Update{{Op: Add, Addr: "1.0.0.0"}, {Op: Delete, Addr: "1.0.0.1"}}, }, { []string{"1.0.0.1"}, []string{"1.0.0.1", "1.0.0.2", "1.0.0.3"}, []*Update{{Op: Add, Addr: "1.0.0.2"}, {Op: Add, Addr: "1.0.0.3"}}, }, { []string{"1.0.0.1", "1.0.0.2", "1.0.0.3"}, []string{"1.0.0.0"}, []*Update{{Op: Add, Addr: "1.0.0.0"}, {Op: Delete, Addr: "1.0.0.1"}, {Op: Delete, Addr: "1.0.0.2"}, {Op: Delete, Addr: "1.0.0.3"}}, }, { []string{"1.0.0.1", "1.0.0.3", "1.0.0.5"}, []string{"1.0.0.2", "1.0.0.3", "1.0.0.6"}, []*Update{{Op: Delete, Addr: "1.0.0.1"}, {Op: Add, Addr: "1.0.0.2"}, {Op: Delete, Addr: "1.0.0.5"}, {Op: Add, Addr: "1.0.0.6"}}, }, { []string{"1.0.0.1", "1.0.0.1", "1.0.0.2"}, []string{"1.0.0.1"}, []*Update{{Op: Delete, Addr: "1.0.0.2"}}, }, } var w dnsWatcher for _, c := range tests { w.curAddrs = make(map[string]*Update) newUpdates := make(map[string]*Update) for _, a := range c.oldAddrs { w.curAddrs[a] = &Update{Addr: a} } for _, a := range c.newAddrs { newUpdates[a] = &Update{Addr: a} } r := w.compileUpdate(newUpdates) if !reflect.DeepEqual(toMap(c.want), toMap(r)) { t.Errorf("w(%+v).compileUpdate(%+v) = %+v, want %+v", c.oldAddrs, c.newAddrs, updatesToSlice(r), updatesToSlice(c.want)) } } } func TestResolveFunc(t *testing.T) { tests := []struct { addr string want error }{ // TODO(yuxuanli): More false cases? 
{"www.google.com", nil}, {"foo.bar:12345", nil}, {"127.0.0.1", nil}, {"127.0.0.1:12345", nil}, {"[::1]:80", nil}, {"[2001:db8:a0b:12f0::1]:21", nil}, {":80", nil}, {"127.0.0...1:12345", nil}, {"[fe80::1%lo0]:80", nil}, {"golang.org:http", nil}, {"[2001:db8::1]:http", nil}, {":", nil}, {"", errMissingAddr}, {"[2001:db8:a0b:12f0::1", fmt.Errorf("invalid target address %v", "[2001:db8:a0b:12f0::1")}, } r, err := NewDNSResolver() if err != nil { t.Errorf("%v", err) } for _, v := range tests { _, err := r.Resolve(v.addr) if !reflect.DeepEqual(err, v.want) { t.Errorf("Resolve(%q) = %v, want %v", v.addr, err, v.want) } } } var hostLookupTbl = map[string][]string{ "foo.bar.com": {"1.2.3.4", "5.6.7.8"}, "ipv4.single.fake": {"1.2.3.4"}, "ipv4.multi.fake": {"1.2.3.4", "5.6.7.8", "9.10.11.12"}, "ipv6.single.fake": {"2607:f8b0:400a:801::1001"}, "ipv6.multi.fake": {"2607:f8b0:400a:801::1001", "2607:f8b0:400a:801::1002", "2607:f8b0:400a:801::1003"}, } func hostLookup(host string) ([]string, error) { if addrs, ok := hostLookupTbl[host]; ok { return addrs, nil } return nil, fmt.Errorf("failed to lookup host:%s resolution in hostLookupTbl", host) } var srvLookupTbl = map[string][]*net.SRV{ "_grpclb._tcp.srv.ipv4.single.fake": {&net.SRV{Target: "ipv4.single.fake", Port: 1234}}, "_grpclb._tcp.srv.ipv4.multi.fake": {&net.SRV{Target: "ipv4.multi.fake", Port: 1234}}, "_grpclb._tcp.srv.ipv6.single.fake": {&net.SRV{Target: "ipv6.single.fake", Port: 1234}}, "_grpclb._tcp.srv.ipv6.multi.fake": {&net.SRV{Target: "ipv6.multi.fake", Port: 1234}}, } func srvLookup(service, proto, name string) (string, []*net.SRV, error) { cname := "_" + service + "._" + proto + "." + name if srvs, ok := srvLookupTbl[cname]; ok { return cname, srvs, nil } return "", nil, fmt.Errorf("failed to lookup srv record for %s in srvLookupTbl", cname) } func updatesToSlice(updates []*Update) []Update { res := make([]Update, len(updates)) for i, u := range updates { res[i] = *u } return res } func testResolver(t *testing.T, freq time.Duration, slp time.Duration) { tests := []struct { target string want []*Update }{ { "foo.bar.com", []*Update{{Op: Add, Addr: "1.2.3.4" + colonDefaultPort}, {Op: Add, Addr: "5.6.7.8" + colonDefaultPort}}, }, { "foo.bar.com:1234", []*Update{{Op: Add, Addr: "1.2.3.4:1234"}, {Op: Add, Addr: "5.6.7.8:1234"}}, }, { "srv.ipv4.single.fake", []*Update{newUpdateWithMD(Add, "1.2.3.4:1234", "ipv4.single.fake")}, }, { "srv.ipv4.multi.fake", []*Update{ newUpdateWithMD(Add, "1.2.3.4:1234", "ipv4.multi.fake"), newUpdateWithMD(Add, "5.6.7.8:1234", "ipv4.multi.fake"), newUpdateWithMD(Add, "9.10.11.12:1234", "ipv4.multi.fake")}, }, { "srv.ipv6.single.fake", []*Update{newUpdateWithMD(Add, "[2607:f8b0:400a:801::1001]:1234", "ipv6.single.fake")}, }, { "srv.ipv6.multi.fake", []*Update{ newUpdateWithMD(Add, "[2607:f8b0:400a:801::1001]:1234", "ipv6.multi.fake"), newUpdateWithMD(Add, "[2607:f8b0:400a:801::1002]:1234", "ipv6.multi.fake"), newUpdateWithMD(Add, "[2607:f8b0:400a:801::1003]:1234", "ipv6.multi.fake"), }, }, } for _, a := range tests { r, err := NewDNSResolverWithFreq(freq) if err != nil { t.Fatalf("%v\n", err) } w, err := r.Resolve(a.target) if err != nil { t.Fatalf("%v\n", err) } updates, err := w.Next() if err != nil { t.Fatalf("%v\n", err) } if !reflect.DeepEqual(toMap(a.want), toMap(updates)) { t.Errorf("Resolve(%q) = %+v, want %+v\n", a.target, updatesToSlice(updates), updatesToSlice(a.want)) } var wg sync.WaitGroup wg.Add(1) go func() { defer wg.Done() for { _, err := w.Next() if err != nil { return } t.Error("Execution 
shouldn't reach here, since w.Next() should be blocked until close happen.") } }() // Sleep for sometime to let watcher do more than one lookup time.Sleep(slp) w.Close() wg.Wait() } } func TestResolve(t *testing.T) { defer replaceNetFunc()() testResolver(t, time.Millisecond*5, time.Millisecond*10) } const colonDefaultPort = ":" + defaultPort func TestIPWatcher(t *testing.T) { tests := []struct { target string want []*Update }{ {"127.0.0.1", []*Update{{Op: Add, Addr: "127.0.0.1" + colonDefaultPort}}}, {"127.0.0.1:12345", []*Update{{Op: Add, Addr: "127.0.0.1:12345"}}}, {"::1", []*Update{{Op: Add, Addr: "[::1]" + colonDefaultPort}}}, {"[::1]:12345", []*Update{{Op: Add, Addr: "[::1]:12345"}}}, {"[::1]:", []*Update{{Op: Add, Addr: "[::1]:443"}}}, {"2001:db8:85a3::8a2e:370:7334", []*Update{{Op: Add, Addr: "[2001:db8:85a3::8a2e:370:7334]" + colonDefaultPort}}}, {"[2001:db8:85a3::8a2e:370:7334]", []*Update{{Op: Add, Addr: "[2001:db8:85a3::8a2e:370:7334]" + colonDefaultPort}}}, {"[2001:db8:85a3::8a2e:370:7334]:12345", []*Update{{Op: Add, Addr: "[2001:db8:85a3::8a2e:370:7334]:12345"}}}, {"[2001:db8::1]:http", []*Update{{Op: Add, Addr: "[2001:db8::1]:http"}}}, // TODO(yuxuanli): zone support? } for _, v := range tests { r, err := NewDNSResolverWithFreq(time.Millisecond * 5) if err != nil { t.Fatalf("%v\n", err) } w, err := r.Resolve(v.target) if err != nil { t.Fatalf("%v\n", err) } var updates []*Update var wg sync.WaitGroup wg.Add(1) count := 0 go func() { defer wg.Done() for { u, err := w.Next() if err != nil { return } updates = u count++ } }() // Sleep for sometime to let watcher do more than one lookup time.Sleep(time.Millisecond * 10) w.Close() wg.Wait() if !reflect.DeepEqual(v.want, updates) { t.Errorf("Resolve(%q) = %v, want %+v\n", v.target, updatesToSlice(updates), updatesToSlice(v.want)) } if count != 1 { t.Errorf("IPWatcher Next() should return only once, not %d times\n", count) } } } golang-google-grpc-1.6.0/naming/go17.go000066400000000000000000000016671315416461300175630ustar00rootroot00000000000000// +build go1.6, !go1.8 /* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package naming import ( "net" "golang.org/x/net/context" ) var ( lookupHost = func(ctx context.Context, host string) ([]string, error) { return net.LookupHost(host) } lookupSRV = func(ctx context.Context, service, proto, name string) (string, []*net.SRV, error) { return net.LookupSRV(service, proto, name) } ) golang-google-grpc-1.6.0/naming/go17_test.go000066400000000000000000000021161315416461300206100ustar00rootroot00000000000000// +build go1.6, !go1.8 /* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package naming import ( "net" "golang.org/x/net/context" ) func replaceNetFunc() func() { oldLookupHost := lookupHost oldLookupSRV := lookupSRV lookupHost = func(ctx context.Context, host string) ([]string, error) { return hostLookup(host) } lookupSRV = func(ctx context.Context, service, proto, name string) (string, []*net.SRV, error) { return srvLookup(service, proto, name) } return func() { lookupHost = oldLookupHost lookupSRV = oldLookupSRV } } golang-google-grpc-1.6.0/naming/go18.go000066400000000000000000000013541315416461300175550ustar00rootroot00000000000000// +build go1.8 /* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package naming import "net" var ( lookupHost = net.DefaultResolver.LookupHost lookupSRV = net.DefaultResolver.LookupSRV ) golang-google-grpc-1.6.0/naming/go18_test.go000066400000000000000000000020641315416461300206130ustar00rootroot00000000000000// +build go1.8 /* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package naming import ( "context" "net" ) func replaceNetFunc() func() { oldLookupHost := lookupHost oldLookupSRV := lookupSRV lookupHost = func(ctx context.Context, host string) ([]string, error) { return hostLookup(host) } lookupSRV = func(ctx context.Context, service, proto, name string) (string, []*net.SRV, error) { return srvLookup(service, proto, name) } return func() { lookupHost = oldLookupHost lookupSRV = oldLookupSRV } } golang-google-grpc-1.6.0/naming/naming.go000066400000000000000000000037651315416461300202600ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
* See the License for the specific language governing permissions and * limitations under the License. * */ // Package naming defines the naming API and related data structures for gRPC. // The interface is EXPERIMENTAL and may be suject to change. package naming // Operation defines the corresponding operations for a name resolution change. type Operation uint8 const ( // Add indicates a new address is added. Add Operation = iota // Delete indicates an exisiting address is deleted. Delete ) // Update defines a name resolution update. Notice that it is not valid having both // empty string Addr and nil Metadata in an Update. type Update struct { // Op indicates the operation of the update. Op Operation // Addr is the updated address. It is empty string if there is no address update. Addr string // Metadata is the updated metadata. It is nil if there is no metadata update. // Metadata is not required for a custom naming implementation. Metadata interface{} } // Resolver creates a Watcher for a target to track its resolution changes. type Resolver interface { // Resolve creates a Watcher for target. Resolve(target string) (Watcher, error) } // Watcher watches for the updates on the specified target. type Watcher interface { // Next blocks until an update or error happens. It may return one or more // updates. The first call should get the full set of the results. It should // return an error if and only if Watcher cannot recover. Next() ([]*Update, error) // Close closes the Watcher. Close() } golang-google-grpc-1.6.0/peer/000077500000000000000000000000001315416461300161275ustar00rootroot00000000000000golang-google-grpc-1.6.0/peer/peer.go000066400000000000000000000027501315416461300174150ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package peer defines various peer information associated with RPCs and // corresponding utils. package peer import ( "net" "golang.org/x/net/context" "google.golang.org/grpc/credentials" ) // Peer contains the information of the peer for an RPC, such as the address // and authentication information. type Peer struct { // Addr is the peer address. Addr net.Addr // AuthInfo is the authentication information of the transport. // It is nil if there is no transport security being used. AuthInfo credentials.AuthInfo } type peerKey struct{} // NewContext creates a new context with peer information attached. func NewContext(ctx context.Context, p *Peer) context.Context { return context.WithValue(ctx, peerKey{}, p) } // FromContext returns the peer information in ctx if it exists. func FromContext(ctx context.Context) (p *Peer, ok bool) { p, ok = ctx.Value(peerKey{}).(*Peer) return } golang-google-grpc-1.6.0/proxy.go000066400000000000000000000066121315416461300167110ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "bufio" "errors" "fmt" "io" "net" "net/http" "net/http/httputil" "net/url" "golang.org/x/net/context" ) var ( // errDisabled indicates that proxy is disabled for the address. errDisabled = errors.New("proxy is disabled for the address") // The following variable will be overwritten in the tests. httpProxyFromEnvironment = http.ProxyFromEnvironment ) func mapAddress(ctx context.Context, address string) (string, error) { req := &http.Request{ URL: &url.URL{ Scheme: "https", Host: address, }, } url, err := httpProxyFromEnvironment(req) if err != nil { return "", err } if url == nil { return "", errDisabled } return url.Host, nil } // To read a response from a net.Conn, http.ReadResponse() takes a bufio.Reader. // It's possible that this reader reads more than what's need for the response and stores // those bytes in the buffer. // bufConn wraps the original net.Conn and the bufio.Reader to make sure we don't lose the // bytes in the buffer. type bufConn struct { net.Conn r io.Reader } func (c *bufConn) Read(b []byte) (int, error) { return c.r.Read(b) } func doHTTPConnectHandshake(ctx context.Context, conn net.Conn, addr string) (_ net.Conn, err error) { defer func() { if err != nil { conn.Close() } }() req := (&http.Request{ Method: http.MethodConnect, URL: &url.URL{Host: addr}, Header: map[string][]string{"User-Agent": {grpcUA}}, }) if err := sendHTTPRequest(ctx, req, conn); err != nil { return nil, fmt.Errorf("failed to write the HTTP request: %v", err) } r := bufio.NewReader(conn) resp, err := http.ReadResponse(r, req) if err != nil { return nil, fmt.Errorf("reading server HTTP response: %v", err) } defer resp.Body.Close() if resp.StatusCode != http.StatusOK { dump, err := httputil.DumpResponse(resp, true) if err != nil { return nil, fmt.Errorf("failed to do connect handshake, status code: %s", resp.Status) } return nil, fmt.Errorf("failed to do connect handshake, response: %q", dump) } return &bufConn{Conn: conn, r: r}, nil } // newProxyDialer returns a dialer that connects to proxy first if necessary. // The returned dialer checks if a proxy is necessary, dial to the proxy with the // provided dialer, does HTTP CONNECT handshake and returns the connection. func newProxyDialer(dialer func(context.Context, string) (net.Conn, error)) func(context.Context, string) (net.Conn, error) { return func(ctx context.Context, addr string) (conn net.Conn, err error) { var skipHandshake bool newAddr, err := mapAddress(ctx, addr) if err != nil { if err != errDisabled { return nil, err } skipHandshake = true newAddr = addr } conn, err = dialer(ctx, newAddr) if err != nil { return } if !skipHandshake { conn, err = doHTTPConnectHandshake(ctx, conn, addr) } return } } golang-google-grpc-1.6.0/proxy_test.go000066400000000000000000000077401315416461300177530ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "bufio" "io" "net" "net/http" "net/url" "testing" "time" "golang.org/x/net/context" ) const ( envTestAddr = "1.2.3.4:8080" envProxyAddr = "2.3.4.5:7687" ) // overwriteAndRestore overwrite function httpProxyFromEnvironment and // returns a function to restore the default values. func overwrite(hpfe func(req *http.Request) (*url.URL, error)) func() { backHPFE := httpProxyFromEnvironment httpProxyFromEnvironment = hpfe return func() { httpProxyFromEnvironment = backHPFE } } func TestMapAddressEnv(t *testing.T) { // Overwrite the function in the test and restore them in defer. hpfe := func(req *http.Request) (*url.URL, error) { if req.URL.Host == envTestAddr { return &url.URL{ Scheme: "https", Host: envProxyAddr, }, nil } return nil, nil } defer overwrite(hpfe)() // envTestAddr should be handled by ProxyFromEnvironment. got, err := mapAddress(context.Background(), envTestAddr) if err != nil { t.Error(err) } if got != envProxyAddr { t.Errorf("want %v, got %v", envProxyAddr, got) } } type proxyServer struct { t *testing.T lis net.Listener in net.Conn out net.Conn } func (p *proxyServer) run() { in, err := p.lis.Accept() if err != nil { return } p.in = in req, err := http.ReadRequest(bufio.NewReader(in)) if err != nil { p.t.Errorf("failed to read CONNECT req: %v", err) return } if req.Method != http.MethodConnect || req.UserAgent() != grpcUA { resp := http.Response{StatusCode: http.StatusMethodNotAllowed} resp.Write(p.in) p.in.Close() p.t.Errorf("get wrong CONNECT req: %+v", req) return } out, err := net.Dial("tcp", req.URL.Host) if err != nil { p.t.Errorf("failed to dial to server: %v", err) return } resp := http.Response{StatusCode: http.StatusOK, Proto: "HTTP/1.0"} resp.Write(p.in) p.out = out go io.Copy(p.in, p.out) go io.Copy(p.out, p.in) } func (p *proxyServer) stop() { p.lis.Close() if p.in != nil { p.in.Close() } if p.out != nil { p.out.Close() } } func TestHTTPConnect(t *testing.T) { plis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("failed to listen: %v", err) } p := &proxyServer{t: t, lis: plis} go p.run() defer p.stop() blis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("failed to listen: %v", err) } msg := []byte{4, 3, 5, 2} recvBuf := make([]byte, len(msg), len(msg)) done := make(chan struct{}) go func() { in, err := blis.Accept() if err != nil { t.Errorf("failed to accept: %v", err) return } defer in.Close() in.Read(recvBuf) close(done) }() // Overwrite the function in the test and restore them in defer. hpfe := func(req *http.Request) (*url.URL, error) { return &url.URL{Host: plis.Addr().String()}, nil } defer overwrite(hpfe)() // Dial to proxy server. dialer := newProxyDialer(func(ctx context.Context, addr string) (net.Conn, error) { if deadline, ok := ctx.Deadline(); ok { return net.DialTimeout("tcp", addr, deadline.Sub(time.Now())) } return net.Dial("tcp", addr) }) ctx, cancel := context.WithTimeout(context.Background(), time.Second) defer cancel() c, err := dialer(ctx, blis.Addr().String()) if err != nil { t.Fatalf("http connect Dial failed: %v", err) } defer c.Close() // Send msg on the connection. 
c.Write(msg) <-done // Check received msg. if string(recvBuf) != string(msg) { t.Fatalf("received msg: %v, want %v", recvBuf, msg) } } golang-google-grpc-1.6.0/reflection/000077500000000000000000000000001315416461300173265ustar00rootroot00000000000000golang-google-grpc-1.6.0/reflection/README.md000066400000000000000000000007051315416461300206070ustar00rootroot00000000000000# Reflection Package reflection implements server reflection service. The service implemented is defined in: https://github.com/grpc/grpc/blob/master/src/proto/grpc/reflection/v1alpha/reflection.proto. To register server reflection on a gRPC server: ```go import "google.golang.org/grpc/reflection" s := grpc.NewServer() pb.RegisterYourOwnServer(s, &server{}) // Register reflection service on gRPC server. reflection.Register(s) s.Serve(lis) ``` golang-google-grpc-1.6.0/reflection/grpc_reflection_v1alpha/000077500000000000000000000000001315416461300241075ustar00rootroot00000000000000golang-google-grpc-1.6.0/reflection/grpc_reflection_v1alpha/reflection.pb.go000066400000000000000000000753061315416461300272030ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: grpc_reflection_v1alpha/reflection.proto /* Package grpc_reflection_v1alpha is a generated protocol buffer package. It is generated from these files: grpc_reflection_v1alpha/reflection.proto It has these top-level messages: ServerReflectionRequest ExtensionRequest ServerReflectionResponse FileDescriptorResponse ExtensionNumberResponse ListServiceResponse ServiceResponse ErrorResponse */ package grpc_reflection_v1alpha import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import ( context "golang.org/x/net/context" grpc "google.golang.org/grpc" ) // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package // The message sent by the client when calling ServerReflectionInfo method. type ServerReflectionRequest struct { Host string `protobuf:"bytes,1,opt,name=host" json:"host,omitempty"` // To use reflection service, the client should set one of the following // fields in message_request. The server distinguishes requests by their // defined field and then handles them using corresponding methods. 
// // Types that are valid to be assigned to MessageRequest: // *ServerReflectionRequest_FileByFilename // *ServerReflectionRequest_FileContainingSymbol // *ServerReflectionRequest_FileContainingExtension // *ServerReflectionRequest_AllExtensionNumbersOfType // *ServerReflectionRequest_ListServices MessageRequest isServerReflectionRequest_MessageRequest `protobuf_oneof:"message_request"` } func (m *ServerReflectionRequest) Reset() { *m = ServerReflectionRequest{} } func (m *ServerReflectionRequest) String() string { return proto.CompactTextString(m) } func (*ServerReflectionRequest) ProtoMessage() {} func (*ServerReflectionRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} } type isServerReflectionRequest_MessageRequest interface { isServerReflectionRequest_MessageRequest() } type ServerReflectionRequest_FileByFilename struct { FileByFilename string `protobuf:"bytes,3,opt,name=file_by_filename,json=fileByFilename,oneof"` } type ServerReflectionRequest_FileContainingSymbol struct { FileContainingSymbol string `protobuf:"bytes,4,opt,name=file_containing_symbol,json=fileContainingSymbol,oneof"` } type ServerReflectionRequest_FileContainingExtension struct { FileContainingExtension *ExtensionRequest `protobuf:"bytes,5,opt,name=file_containing_extension,json=fileContainingExtension,oneof"` } type ServerReflectionRequest_AllExtensionNumbersOfType struct { AllExtensionNumbersOfType string `protobuf:"bytes,6,opt,name=all_extension_numbers_of_type,json=allExtensionNumbersOfType,oneof"` } type ServerReflectionRequest_ListServices struct { ListServices string `protobuf:"bytes,7,opt,name=list_services,json=listServices,oneof"` } func (*ServerReflectionRequest_FileByFilename) isServerReflectionRequest_MessageRequest() {} func (*ServerReflectionRequest_FileContainingSymbol) isServerReflectionRequest_MessageRequest() {} func (*ServerReflectionRequest_FileContainingExtension) isServerReflectionRequest_MessageRequest() {} func (*ServerReflectionRequest_AllExtensionNumbersOfType) isServerReflectionRequest_MessageRequest() {} func (*ServerReflectionRequest_ListServices) isServerReflectionRequest_MessageRequest() {} func (m *ServerReflectionRequest) GetMessageRequest() isServerReflectionRequest_MessageRequest { if m != nil { return m.MessageRequest } return nil } func (m *ServerReflectionRequest) GetHost() string { if m != nil { return m.Host } return "" } func (m *ServerReflectionRequest) GetFileByFilename() string { if x, ok := m.GetMessageRequest().(*ServerReflectionRequest_FileByFilename); ok { return x.FileByFilename } return "" } func (m *ServerReflectionRequest) GetFileContainingSymbol() string { if x, ok := m.GetMessageRequest().(*ServerReflectionRequest_FileContainingSymbol); ok { return x.FileContainingSymbol } return "" } func (m *ServerReflectionRequest) GetFileContainingExtension() *ExtensionRequest { if x, ok := m.GetMessageRequest().(*ServerReflectionRequest_FileContainingExtension); ok { return x.FileContainingExtension } return nil } func (m *ServerReflectionRequest) GetAllExtensionNumbersOfType() string { if x, ok := m.GetMessageRequest().(*ServerReflectionRequest_AllExtensionNumbersOfType); ok { return x.AllExtensionNumbersOfType } return "" } func (m *ServerReflectionRequest) GetListServices() string { if x, ok := m.GetMessageRequest().(*ServerReflectionRequest_ListServices); ok { return x.ListServices } return "" } // XXX_OneofFuncs is for the internal use of the proto package. 
func (*ServerReflectionRequest) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _ServerReflectionRequest_OneofMarshaler, _ServerReflectionRequest_OneofUnmarshaler, _ServerReflectionRequest_OneofSizer, []interface{}{ (*ServerReflectionRequest_FileByFilename)(nil), (*ServerReflectionRequest_FileContainingSymbol)(nil), (*ServerReflectionRequest_FileContainingExtension)(nil), (*ServerReflectionRequest_AllExtensionNumbersOfType)(nil), (*ServerReflectionRequest_ListServices)(nil), } } func _ServerReflectionRequest_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*ServerReflectionRequest) // message_request switch x := m.MessageRequest.(type) { case *ServerReflectionRequest_FileByFilename: b.EncodeVarint(3<<3 | proto.WireBytes) b.EncodeStringBytes(x.FileByFilename) case *ServerReflectionRequest_FileContainingSymbol: b.EncodeVarint(4<<3 | proto.WireBytes) b.EncodeStringBytes(x.FileContainingSymbol) case *ServerReflectionRequest_FileContainingExtension: b.EncodeVarint(5<<3 | proto.WireBytes) if err := b.EncodeMessage(x.FileContainingExtension); err != nil { return err } case *ServerReflectionRequest_AllExtensionNumbersOfType: b.EncodeVarint(6<<3 | proto.WireBytes) b.EncodeStringBytes(x.AllExtensionNumbersOfType) case *ServerReflectionRequest_ListServices: b.EncodeVarint(7<<3 | proto.WireBytes) b.EncodeStringBytes(x.ListServices) case nil: default: return fmt.Errorf("ServerReflectionRequest.MessageRequest has unexpected type %T", x) } return nil } func _ServerReflectionRequest_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*ServerReflectionRequest) switch tag { case 3: // message_request.file_by_filename if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } x, err := b.DecodeStringBytes() m.MessageRequest = &ServerReflectionRequest_FileByFilename{x} return true, err case 4: // message_request.file_containing_symbol if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } x, err := b.DecodeStringBytes() m.MessageRequest = &ServerReflectionRequest_FileContainingSymbol{x} return true, err case 5: // message_request.file_containing_extension if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(ExtensionRequest) err := b.DecodeMessage(msg) m.MessageRequest = &ServerReflectionRequest_FileContainingExtension{msg} return true, err case 6: // message_request.all_extension_numbers_of_type if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } x, err := b.DecodeStringBytes() m.MessageRequest = &ServerReflectionRequest_AllExtensionNumbersOfType{x} return true, err case 7: // message_request.list_services if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } x, err := b.DecodeStringBytes() m.MessageRequest = &ServerReflectionRequest_ListServices{x} return true, err default: return false, nil } } func _ServerReflectionRequest_OneofSizer(msg proto.Message) (n int) { m := msg.(*ServerReflectionRequest) // message_request switch x := m.MessageRequest.(type) { case *ServerReflectionRequest_FileByFilename: n += proto.SizeVarint(3<<3 | proto.WireBytes) n += proto.SizeVarint(uint64(len(x.FileByFilename))) n += len(x.FileByFilename) case *ServerReflectionRequest_FileContainingSymbol: n += proto.SizeVarint(4<<3 | proto.WireBytes) n += proto.SizeVarint(uint64(len(x.FileContainingSymbol))) n += 
len(x.FileContainingSymbol) case *ServerReflectionRequest_FileContainingExtension: s := proto.Size(x.FileContainingExtension) n += proto.SizeVarint(5<<3 | proto.WireBytes) n += proto.SizeVarint(uint64(s)) n += s case *ServerReflectionRequest_AllExtensionNumbersOfType: n += proto.SizeVarint(6<<3 | proto.WireBytes) n += proto.SizeVarint(uint64(len(x.AllExtensionNumbersOfType))) n += len(x.AllExtensionNumbersOfType) case *ServerReflectionRequest_ListServices: n += proto.SizeVarint(7<<3 | proto.WireBytes) n += proto.SizeVarint(uint64(len(x.ListServices))) n += len(x.ListServices) case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } // The type name and extension number sent by the client when requesting // file_containing_extension. type ExtensionRequest struct { // Fully-qualified type name. The format should be . ContainingType string `protobuf:"bytes,1,opt,name=containing_type,json=containingType" json:"containing_type,omitempty"` ExtensionNumber int32 `protobuf:"varint,2,opt,name=extension_number,json=extensionNumber" json:"extension_number,omitempty"` } func (m *ExtensionRequest) Reset() { *m = ExtensionRequest{} } func (m *ExtensionRequest) String() string { return proto.CompactTextString(m) } func (*ExtensionRequest) ProtoMessage() {} func (*ExtensionRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} } func (m *ExtensionRequest) GetContainingType() string { if m != nil { return m.ContainingType } return "" } func (m *ExtensionRequest) GetExtensionNumber() int32 { if m != nil { return m.ExtensionNumber } return 0 } // The message sent by the server to answer ServerReflectionInfo method. type ServerReflectionResponse struct { ValidHost string `protobuf:"bytes,1,opt,name=valid_host,json=validHost" json:"valid_host,omitempty"` OriginalRequest *ServerReflectionRequest `protobuf:"bytes,2,opt,name=original_request,json=originalRequest" json:"original_request,omitempty"` // The server set one of the following fields accroding to the message_request // in the request. 
// // Types that are valid to be assigned to MessageResponse: // *ServerReflectionResponse_FileDescriptorResponse // *ServerReflectionResponse_AllExtensionNumbersResponse // *ServerReflectionResponse_ListServicesResponse // *ServerReflectionResponse_ErrorResponse MessageResponse isServerReflectionResponse_MessageResponse `protobuf_oneof:"message_response"` } func (m *ServerReflectionResponse) Reset() { *m = ServerReflectionResponse{} } func (m *ServerReflectionResponse) String() string { return proto.CompactTextString(m) } func (*ServerReflectionResponse) ProtoMessage() {} func (*ServerReflectionResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} } type isServerReflectionResponse_MessageResponse interface { isServerReflectionResponse_MessageResponse() } type ServerReflectionResponse_FileDescriptorResponse struct { FileDescriptorResponse *FileDescriptorResponse `protobuf:"bytes,4,opt,name=file_descriptor_response,json=fileDescriptorResponse,oneof"` } type ServerReflectionResponse_AllExtensionNumbersResponse struct { AllExtensionNumbersResponse *ExtensionNumberResponse `protobuf:"bytes,5,opt,name=all_extension_numbers_response,json=allExtensionNumbersResponse,oneof"` } type ServerReflectionResponse_ListServicesResponse struct { ListServicesResponse *ListServiceResponse `protobuf:"bytes,6,opt,name=list_services_response,json=listServicesResponse,oneof"` } type ServerReflectionResponse_ErrorResponse struct { ErrorResponse *ErrorResponse `protobuf:"bytes,7,opt,name=error_response,json=errorResponse,oneof"` } func (*ServerReflectionResponse_FileDescriptorResponse) isServerReflectionResponse_MessageResponse() {} func (*ServerReflectionResponse_AllExtensionNumbersResponse) isServerReflectionResponse_MessageResponse() { } func (*ServerReflectionResponse_ListServicesResponse) isServerReflectionResponse_MessageResponse() {} func (*ServerReflectionResponse_ErrorResponse) isServerReflectionResponse_MessageResponse() {} func (m *ServerReflectionResponse) GetMessageResponse() isServerReflectionResponse_MessageResponse { if m != nil { return m.MessageResponse } return nil } func (m *ServerReflectionResponse) GetValidHost() string { if m != nil { return m.ValidHost } return "" } func (m *ServerReflectionResponse) GetOriginalRequest() *ServerReflectionRequest { if m != nil { return m.OriginalRequest } return nil } func (m *ServerReflectionResponse) GetFileDescriptorResponse() *FileDescriptorResponse { if x, ok := m.GetMessageResponse().(*ServerReflectionResponse_FileDescriptorResponse); ok { return x.FileDescriptorResponse } return nil } func (m *ServerReflectionResponse) GetAllExtensionNumbersResponse() *ExtensionNumberResponse { if x, ok := m.GetMessageResponse().(*ServerReflectionResponse_AllExtensionNumbersResponse); ok { return x.AllExtensionNumbersResponse } return nil } func (m *ServerReflectionResponse) GetListServicesResponse() *ListServiceResponse { if x, ok := m.GetMessageResponse().(*ServerReflectionResponse_ListServicesResponse); ok { return x.ListServicesResponse } return nil } func (m *ServerReflectionResponse) GetErrorResponse() *ErrorResponse { if x, ok := m.GetMessageResponse().(*ServerReflectionResponse_ErrorResponse); ok { return x.ErrorResponse } return nil } // XXX_OneofFuncs is for the internal use of the proto package. 
func (*ServerReflectionResponse) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _ServerReflectionResponse_OneofMarshaler, _ServerReflectionResponse_OneofUnmarshaler, _ServerReflectionResponse_OneofSizer, []interface{}{ (*ServerReflectionResponse_FileDescriptorResponse)(nil), (*ServerReflectionResponse_AllExtensionNumbersResponse)(nil), (*ServerReflectionResponse_ListServicesResponse)(nil), (*ServerReflectionResponse_ErrorResponse)(nil), } } func _ServerReflectionResponse_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*ServerReflectionResponse) // message_response switch x := m.MessageResponse.(type) { case *ServerReflectionResponse_FileDescriptorResponse: b.EncodeVarint(4<<3 | proto.WireBytes) if err := b.EncodeMessage(x.FileDescriptorResponse); err != nil { return err } case *ServerReflectionResponse_AllExtensionNumbersResponse: b.EncodeVarint(5<<3 | proto.WireBytes) if err := b.EncodeMessage(x.AllExtensionNumbersResponse); err != nil { return err } case *ServerReflectionResponse_ListServicesResponse: b.EncodeVarint(6<<3 | proto.WireBytes) if err := b.EncodeMessage(x.ListServicesResponse); err != nil { return err } case *ServerReflectionResponse_ErrorResponse: b.EncodeVarint(7<<3 | proto.WireBytes) if err := b.EncodeMessage(x.ErrorResponse); err != nil { return err } case nil: default: return fmt.Errorf("ServerReflectionResponse.MessageResponse has unexpected type %T", x) } return nil } func _ServerReflectionResponse_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*ServerReflectionResponse) switch tag { case 4: // message_response.file_descriptor_response if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(FileDescriptorResponse) err := b.DecodeMessage(msg) m.MessageResponse = &ServerReflectionResponse_FileDescriptorResponse{msg} return true, err case 5: // message_response.all_extension_numbers_response if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(ExtensionNumberResponse) err := b.DecodeMessage(msg) m.MessageResponse = &ServerReflectionResponse_AllExtensionNumbersResponse{msg} return true, err case 6: // message_response.list_services_response if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(ListServiceResponse) err := b.DecodeMessage(msg) m.MessageResponse = &ServerReflectionResponse_ListServicesResponse{msg} return true, err case 7: // message_response.error_response if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } msg := new(ErrorResponse) err := b.DecodeMessage(msg) m.MessageResponse = &ServerReflectionResponse_ErrorResponse{msg} return true, err default: return false, nil } } func _ServerReflectionResponse_OneofSizer(msg proto.Message) (n int) { m := msg.(*ServerReflectionResponse) // message_response switch x := m.MessageResponse.(type) { case *ServerReflectionResponse_FileDescriptorResponse: s := proto.Size(x.FileDescriptorResponse) n += proto.SizeVarint(4<<3 | proto.WireBytes) n += proto.SizeVarint(uint64(s)) n += s case *ServerReflectionResponse_AllExtensionNumbersResponse: s := proto.Size(x.AllExtensionNumbersResponse) n += proto.SizeVarint(5<<3 | proto.WireBytes) n += proto.SizeVarint(uint64(s)) n += s case *ServerReflectionResponse_ListServicesResponse: s := proto.Size(x.ListServicesResponse) n += proto.SizeVarint(6<<3 | proto.WireBytes) 
n += proto.SizeVarint(uint64(s)) n += s case *ServerReflectionResponse_ErrorResponse: s := proto.Size(x.ErrorResponse) n += proto.SizeVarint(7<<3 | proto.WireBytes) n += proto.SizeVarint(uint64(s)) n += s case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } // Serialized FileDescriptorProto messages sent by the server answering // a file_by_filename, file_containing_symbol, or file_containing_extension // request. type FileDescriptorResponse struct { // Serialized FileDescriptorProto messages. We avoid taking a dependency on // descriptor.proto, which uses proto2 only features, by making them opaque // bytes instead. FileDescriptorProto [][]byte `protobuf:"bytes,1,rep,name=file_descriptor_proto,json=fileDescriptorProto,proto3" json:"file_descriptor_proto,omitempty"` } func (m *FileDescriptorResponse) Reset() { *m = FileDescriptorResponse{} } func (m *FileDescriptorResponse) String() string { return proto.CompactTextString(m) } func (*FileDescriptorResponse) ProtoMessage() {} func (*FileDescriptorResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3} } func (m *FileDescriptorResponse) GetFileDescriptorProto() [][]byte { if m != nil { return m.FileDescriptorProto } return nil } // A list of extension numbers sent by the server answering // all_extension_numbers_of_type request. type ExtensionNumberResponse struct { // Full name of the base type, including the package name. The format // is . BaseTypeName string `protobuf:"bytes,1,opt,name=base_type_name,json=baseTypeName" json:"base_type_name,omitempty"` ExtensionNumber []int32 `protobuf:"varint,2,rep,packed,name=extension_number,json=extensionNumber" json:"extension_number,omitempty"` } func (m *ExtensionNumberResponse) Reset() { *m = ExtensionNumberResponse{} } func (m *ExtensionNumberResponse) String() string { return proto.CompactTextString(m) } func (*ExtensionNumberResponse) ProtoMessage() {} func (*ExtensionNumberResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{4} } func (m *ExtensionNumberResponse) GetBaseTypeName() string { if m != nil { return m.BaseTypeName } return "" } func (m *ExtensionNumberResponse) GetExtensionNumber() []int32 { if m != nil { return m.ExtensionNumber } return nil } // A list of ServiceResponse sent by the server answering list_services request. type ListServiceResponse struct { // The information of each service may be expanded in the future, so we use // ServiceResponse message to encapsulate it. Service []*ServiceResponse `protobuf:"bytes,1,rep,name=service" json:"service,omitempty"` } func (m *ListServiceResponse) Reset() { *m = ListServiceResponse{} } func (m *ListServiceResponse) String() string { return proto.CompactTextString(m) } func (*ListServiceResponse) ProtoMessage() {} func (*ListServiceResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{5} } func (m *ListServiceResponse) GetService() []*ServiceResponse { if m != nil { return m.Service } return nil } // The information of a single service used by ListServiceResponse to answer // list_services request. type ServiceResponse struct { // Full name of a registered service, including its package name. The format // is . 
Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"` } func (m *ServiceResponse) Reset() { *m = ServiceResponse{} } func (m *ServiceResponse) String() string { return proto.CompactTextString(m) } func (*ServiceResponse) ProtoMessage() {} func (*ServiceResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{6} } func (m *ServiceResponse) GetName() string { if m != nil { return m.Name } return "" } // The error code and error message sent by the server when an error occurs. type ErrorResponse struct { // This field uses the error codes defined in grpc::StatusCode. ErrorCode int32 `protobuf:"varint,1,opt,name=error_code,json=errorCode" json:"error_code,omitempty"` ErrorMessage string `protobuf:"bytes,2,opt,name=error_message,json=errorMessage" json:"error_message,omitempty"` } func (m *ErrorResponse) Reset() { *m = ErrorResponse{} } func (m *ErrorResponse) String() string { return proto.CompactTextString(m) } func (*ErrorResponse) ProtoMessage() {} func (*ErrorResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{7} } func (m *ErrorResponse) GetErrorCode() int32 { if m != nil { return m.ErrorCode } return 0 } func (m *ErrorResponse) GetErrorMessage() string { if m != nil { return m.ErrorMessage } return "" } func init() { proto.RegisterType((*ServerReflectionRequest)(nil), "grpc.reflection.v1alpha.ServerReflectionRequest") proto.RegisterType((*ExtensionRequest)(nil), "grpc.reflection.v1alpha.ExtensionRequest") proto.RegisterType((*ServerReflectionResponse)(nil), "grpc.reflection.v1alpha.ServerReflectionResponse") proto.RegisterType((*FileDescriptorResponse)(nil), "grpc.reflection.v1alpha.FileDescriptorResponse") proto.RegisterType((*ExtensionNumberResponse)(nil), "grpc.reflection.v1alpha.ExtensionNumberResponse") proto.RegisterType((*ListServiceResponse)(nil), "grpc.reflection.v1alpha.ListServiceResponse") proto.RegisterType((*ServiceResponse)(nil), "grpc.reflection.v1alpha.ServiceResponse") proto.RegisterType((*ErrorResponse)(nil), "grpc.reflection.v1alpha.ErrorResponse") } // Reference imports to suppress errors if they are not otherwise used. var _ context.Context var _ grpc.ClientConn // This is a compile-time assertion to ensure that this generated file // is compatible with the grpc package it is being compiled against. const _ = grpc.SupportPackageIsVersion4 // Client API for ServerReflection service type ServerReflectionClient interface { // The reflection service is structured as a bidirectional stream, ensuring // all related requests go to a single server. ServerReflectionInfo(ctx context.Context, opts ...grpc.CallOption) (ServerReflection_ServerReflectionInfoClient, error) } type serverReflectionClient struct { cc *grpc.ClientConn } func NewServerReflectionClient(cc *grpc.ClientConn) ServerReflectionClient { return &serverReflectionClient{cc} } func (c *serverReflectionClient) ServerReflectionInfo(ctx context.Context, opts ...grpc.CallOption) (ServerReflection_ServerReflectionInfoClient, error) { stream, err := grpc.NewClientStream(ctx, &_ServerReflection_serviceDesc.Streams[0], c.cc, "/grpc.reflection.v1alpha.ServerReflection/ServerReflectionInfo", opts...) 
if err != nil { return nil, err } x := &serverReflectionServerReflectionInfoClient{stream} return x, nil } type ServerReflection_ServerReflectionInfoClient interface { Send(*ServerReflectionRequest) error Recv() (*ServerReflectionResponse, error) grpc.ClientStream } type serverReflectionServerReflectionInfoClient struct { grpc.ClientStream } func (x *serverReflectionServerReflectionInfoClient) Send(m *ServerReflectionRequest) error { return x.ClientStream.SendMsg(m) } func (x *serverReflectionServerReflectionInfoClient) Recv() (*ServerReflectionResponse, error) { m := new(ServerReflectionResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } // Server API for ServerReflection service type ServerReflectionServer interface { // The reflection service is structured as a bidirectional stream, ensuring // all related requests go to a single server. ServerReflectionInfo(ServerReflection_ServerReflectionInfoServer) error } func RegisterServerReflectionServer(s *grpc.Server, srv ServerReflectionServer) { s.RegisterService(&_ServerReflection_serviceDesc, srv) } func _ServerReflection_ServerReflectionInfo_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(ServerReflectionServer).ServerReflectionInfo(&serverReflectionServerReflectionInfoServer{stream}) } type ServerReflection_ServerReflectionInfoServer interface { Send(*ServerReflectionResponse) error Recv() (*ServerReflectionRequest, error) grpc.ServerStream } type serverReflectionServerReflectionInfoServer struct { grpc.ServerStream } func (x *serverReflectionServerReflectionInfoServer) Send(m *ServerReflectionResponse) error { return x.ServerStream.SendMsg(m) } func (x *serverReflectionServerReflectionInfoServer) Recv() (*ServerReflectionRequest, error) { m := new(ServerReflectionRequest) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } var _ServerReflection_serviceDesc = grpc.ServiceDesc{ ServiceName: "grpc.reflection.v1alpha.ServerReflection", HandlerType: (*ServerReflectionServer)(nil), Methods: []grpc.MethodDesc{}, Streams: []grpc.StreamDesc{ { StreamName: "ServerReflectionInfo", Handler: _ServerReflection_ServerReflectionInfo_Handler, ServerStreams: true, ClientStreams: true, }, }, Metadata: "grpc_reflection_v1alpha/reflection.proto", } func init() { proto.RegisterFile("grpc_reflection_v1alpha/reflection.proto", fileDescriptor0) } var fileDescriptor0 = []byte{ // 656 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x94, 0x54, 0x51, 0x73, 0xd2, 0x40, 0x10, 0x6e, 0x5a, 0x68, 0x87, 0x85, 0x02, 0x5e, 0x2b, 0xa4, 0x3a, 0x75, 0x98, 0x68, 0x35, 0x75, 0x1c, 0xda, 0xe2, 0x8c, 0x3f, 0x80, 0xaa, 0x83, 0x33, 0xb5, 0x75, 0x0e, 0x5f, 0x1c, 0x1f, 0x6e, 0x02, 0x2c, 0x34, 0x1a, 0x72, 0xf1, 0x2e, 0x45, 0x79, 0xf2, 0x47, 0xf8, 0xa3, 0xfc, 0x4b, 0x3e, 0x3a, 0x77, 0x09, 0x21, 0xa4, 0x44, 0xa7, 0x4f, 0x30, 0xdf, 0xee, 0xde, 0xb7, 0xbb, 0xdf, 0xb7, 0x01, 0x7b, 0x22, 0x82, 0x21, 0x13, 0x38, 0xf6, 0x70, 0x18, 0xba, 0xdc, 0x67, 0xb3, 0x33, 0xc7, 0x0b, 0xae, 0x9d, 0x93, 0x25, 0xd4, 0x0e, 0x04, 0x0f, 0x39, 0x69, 0xaa, 0xcc, 0x76, 0x0a, 0x8e, 0x33, 0xad, 0x3f, 0x9b, 0xd0, 0xec, 0xa3, 0x98, 0xa1, 0xa0, 0x49, 0x90, 0xe2, 0xb7, 0x1b, 0x94, 0x21, 0x21, 0x50, 0xb8, 0xe6, 0x32, 0x34, 0x8d, 0x96, 0x61, 0x97, 0xa8, 0xfe, 0x4f, 0x9e, 0x43, 0x7d, 0xec, 0x7a, 0xc8, 0x06, 0x73, 0xa6, 0x7e, 0x7d, 0x67, 0x8a, 0xe6, 0x96, 0x8a, 0xf7, 0x36, 0x68, 0x55, 0x21, 0xdd, 0xf9, 0xdb, 0x18, 0x27, 0xaf, 0xa0, 0xa1, 0x73, 0x87, 0xdc, 0x0f, 0x1d, 0xd7, 
0x77, 0xfd, 0x09, 0x93, 0xf3, 0xe9, 0x80, 0x7b, 0x66, 0x21, 0xae, 0xd8, 0x57, 0xf1, 0xf3, 0x24, 0xdc, 0xd7, 0x51, 0x32, 0x81, 0x83, 0x6c, 0x1d, 0xfe, 0x08, 0xd1, 0x97, 0x2e, 0xf7, 0xcd, 0x62, 0xcb, 0xb0, 0xcb, 0x9d, 0xe3, 0x76, 0xce, 0x40, 0xed, 0x37, 0x8b, 0xcc, 0x78, 0x8a, 0xde, 0x06, 0x6d, 0xae, 0xb2, 0x24, 0x19, 0xa4, 0x0b, 0x87, 0x8e, 0xe7, 0x2d, 0x1f, 0x67, 0xfe, 0xcd, 0x74, 0x80, 0x42, 0x32, 0x3e, 0x66, 0xe1, 0x3c, 0x40, 0x73, 0x3b, 0xee, 0xf3, 0xc0, 0xf1, 0xbc, 0xa4, 0xec, 0x32, 0x4a, 0xba, 0x1a, 0x7f, 0x9c, 0x07, 0x48, 0x8e, 0x60, 0xd7, 0x73, 0x65, 0xc8, 0x24, 0x8a, 0x99, 0x3b, 0x44, 0x69, 0xee, 0xc4, 0x35, 0x15, 0x05, 0xf7, 0x63, 0xb4, 0x7b, 0x0f, 0x6a, 0x53, 0x94, 0xd2, 0x99, 0x20, 0x13, 0x51, 0x63, 0xd6, 0x18, 0xea, 0xd9, 0x66, 0xc9, 0x33, 0xa8, 0xa5, 0xa6, 0xd6, 0x3d, 0x44, 0xdb, 0xaf, 0x2e, 0x61, 0x4d, 0x7b, 0x0c, 0xf5, 0x6c, 0xdb, 0xe6, 0x66, 0xcb, 0xb0, 0x8b, 0xb4, 0x86, 0xab, 0x8d, 0x5a, 0xbf, 0x0b, 0x60, 0xde, 0x96, 0x58, 0x06, 0xdc, 0x97, 0x48, 0x0e, 0x01, 0x66, 0x8e, 0xe7, 0x8e, 0x58, 0x4a, 0xe9, 0x92, 0x46, 0x7a, 0x4a, 0xee, 0xcf, 0x50, 0xe7, 0xc2, 0x9d, 0xb8, 0xbe, 0xe3, 0x2d, 0xfa, 0xd6, 0x34, 0xe5, 0xce, 0x69, 0xae, 0x02, 0x39, 0x76, 0xa2, 0xb5, 0xc5, 0x4b, 0x8b, 0x61, 0xbf, 0x82, 0xa9, 0x75, 0x1e, 0xa1, 0x1c, 0x0a, 0x37, 0x08, 0xb9, 0x60, 0x22, 0xee, 0x4b, 0x3b, 0xa4, 0xdc, 0x39, 0xc9, 0x25, 0x51, 0x26, 0x7b, 0x9d, 0xd4, 0x2d, 0xc6, 0xe9, 0x6d, 0x50, 0x6d, 0xb9, 0xdb, 0x11, 0xf2, 0x1d, 0x1e, 0xad, 0xd7, 0x3a, 0xa1, 0x2c, 0xfe, 0x67, 0xae, 0x8c, 0x01, 0x52, 0x9c, 0x0f, 0xd7, 0xd8, 0x23, 0x21, 0x1e, 0x41, 0x63, 0xc5, 0x20, 0x4b, 0xc2, 0x6d, 0x4d, 0xf8, 0x22, 0x97, 0xf0, 0x62, 0x69, 0xa0, 0x14, 0xd9, 0x7e, 0xda, 0x57, 0x09, 0xcb, 0x15, 0x54, 0x51, 0x88, 0xf4, 0x06, 0x77, 0xf4, 0xeb, 0x4f, 0xf3, 0xc7, 0x51, 0xe9, 0xa9, 0x77, 0x77, 0x31, 0x0d, 0x74, 0x09, 0xd4, 0x97, 0x86, 0x8d, 0x30, 0xeb, 0x02, 0x1a, 0xeb, 0xf7, 0x4e, 0x3a, 0x70, 0x3f, 0x2b, 0xa5, 0xfe, 0xf0, 0x98, 0x46, 0x6b, 0xcb, 0xae, 0xd0, 0xbd, 0x55, 0x51, 0x3e, 0xa8, 0x90, 0xf5, 0x05, 0x9a, 0x39, 0x2b, 0x25, 0x4f, 0xa0, 0x3a, 0x70, 0x24, 0xea, 0x03, 0x60, 0xfa, 0x1b, 0x13, 0x39, 0xb3, 0xa2, 0x50, 0xe5, 0xff, 0x4b, 0xf5, 0x7d, 0x59, 0x7f, 0x03, 0x5b, 0xeb, 0x6e, 0xe0, 0x13, 0xec, 0xad, 0xd9, 0x26, 0xe9, 0xc2, 0x4e, 0x2c, 0x8b, 0x6e, 0xb4, 0xdc, 0xb1, 0xff, 0xe9, 0xea, 0x54, 0x29, 0x5d, 0x14, 0x5a, 0x47, 0x50, 0xcb, 0x3e, 0x4b, 0xa0, 0x90, 0x6a, 0x5a, 0xff, 0xb7, 0xfa, 0xb0, 0xbb, 0xb2, 0x71, 0x75, 0x79, 0x91, 0x62, 0x43, 0x3e, 0x8a, 0x52, 0x8b, 0xb4, 0xa4, 0x91, 0x73, 0x3e, 0x42, 0xf2, 0x18, 0x22, 0x41, 0x58, 0xac, 0x82, 0x3e, 0xbb, 0x12, 0xad, 0x68, 0xf0, 0x7d, 0x84, 0x75, 0x7e, 0x19, 0x50, 0xcf, 0x9e, 0x1b, 0xf9, 0x09, 0xfb, 0x59, 0xec, 0x9d, 0x3f, 0xe6, 0xe4, 0xce, 0x17, 0xfb, 0xe0, 0xec, 0x0e, 0x15, 0xd1, 0x54, 0xb6, 0x71, 0x6a, 0x0c, 0xb6, 0xb5, 0xf4, 0x2f, 0xff, 0x06, 0x00, 0x00, 0xff, 0xff, 0x85, 0x02, 0x09, 0x9d, 0x9f, 0x06, 0x00, 0x00, } golang-google-grpc-1.6.0/reflection/grpc_reflection_v1alpha/reflection.proto000066400000000000000000000124711315416461300273330ustar00rootroot00000000000000// Copyright 2016 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
// See the License for the specific language governing permissions and // limitations under the License. // Service exported by server reflection syntax = "proto3"; package grpc.reflection.v1alpha; service ServerReflection { // The reflection service is structured as a bidirectional stream, ensuring // all related requests go to a single server. rpc ServerReflectionInfo(stream ServerReflectionRequest) returns (stream ServerReflectionResponse); } // The message sent by the client when calling ServerReflectionInfo method. message ServerReflectionRequest { string host = 1; // To use reflection service, the client should set one of the following // fields in message_request. The server distinguishes requests by their // defined field and then handles them using corresponding methods. oneof message_request { // Find a proto file by the file name. string file_by_filename = 3; // Find the proto file that declares the given fully-qualified symbol name. // This field should be a fully-qualified symbol name // (e.g. .[.] or .). string file_containing_symbol = 4; // Find the proto file which defines an extension extending the given // message type with the given field number. ExtensionRequest file_containing_extension = 5; // Finds the tag numbers used by all known extensions of extendee_type, and // appends them to ExtensionNumberResponse in an undefined order. // Its corresponding method is best-effort: it's not guaranteed that the // reflection service will implement this method, and it's not guaranteed // that this method will provide all extensions. Returns // StatusCode::UNIMPLEMENTED if it's not implemented. // This field should be a fully-qualified type name. The format is // . string all_extension_numbers_of_type = 6; // List the full names of registered services. The content will not be // checked. string list_services = 7; } } // The type name and extension number sent by the client when requesting // file_containing_extension. message ExtensionRequest { // Fully-qualified type name. The format should be . string containing_type = 1; int32 extension_number = 2; } // The message sent by the server to answer ServerReflectionInfo method. message ServerReflectionResponse { string valid_host = 1; ServerReflectionRequest original_request = 2; // The server sets one of the following fields according to the message_request // in the request. oneof message_response { // This message is used to answer file_by_filename, file_containing_symbol, // file_containing_extension requests with transitive dependencies. As // the repeated label is not allowed in oneof fields, we use a // FileDescriptorResponse message to encapsulate the repeated fields. // The reflection service is allowed to avoid sending FileDescriptorProtos // that were previously sent in response to earlier requests in the stream. FileDescriptorResponse file_descriptor_response = 4; // This message is used to answer all_extension_numbers_of_type request. ExtensionNumberResponse all_extension_numbers_response = 5; // This message is used to answer list_services request. ListServiceResponse list_services_response = 6; // This message is used when an error occurs. ErrorResponse error_response = 7; } } // Serialized FileDescriptorProto messages sent by the server answering // a file_by_filename, file_containing_symbol, or file_containing_extension // request. message FileDescriptorResponse { // Serialized FileDescriptorProto messages.
We avoid taking a dependency on // descriptor.proto, which uses proto2 only features, by making them opaque // bytes instead. repeated bytes file_descriptor_proto = 1; } // A list of extension numbers sent by the server answering // all_extension_numbers_of_type request. message ExtensionNumberResponse { // Full name of the base type, including the package name. The format // is . string base_type_name = 1; repeated int32 extension_number = 2; } // A list of ServiceResponse sent by the server answering list_services request. message ListServiceResponse { // The information of each service may be expanded in the future, so we use // ServiceResponse message to encapsulate it. repeated ServiceResponse service = 1; } // The information of a single service used by ListServiceResponse to answer // list_services request. message ServiceResponse { // Full name of a registered service, including its package name. The format // is . string name = 1; } // The error code and error message sent by the server when an error occurs. message ErrorResponse { // This field uses the error codes defined in grpc::StatusCode. int32 error_code = 1; string error_message = 2; } golang-google-grpc-1.6.0/reflection/grpc_testing/000077500000000000000000000000001315416461300220165ustar00rootroot00000000000000golang-google-grpc-1.6.0/reflection/grpc_testing/proto2.pb.go000066400000000000000000000046201315416461300241740ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: proto2.proto /* Package grpc_testing is a generated protocol buffer package. It is generated from these files: proto2.proto proto2_ext.proto proto2_ext2.proto test.proto It has these top-level messages: ToBeExtended Extension AnotherExtension SearchResponse SearchRequest */ package grpc_testing import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. 
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type ToBeExtended struct { Foo *int32 `protobuf:"varint,1,req,name=foo" json:"foo,omitempty"` proto.XXX_InternalExtensions `json:"-"` XXX_unrecognized []byte `json:"-"` } func (m *ToBeExtended) Reset() { *m = ToBeExtended{} } func (m *ToBeExtended) String() string { return proto.CompactTextString(m) } func (*ToBeExtended) ProtoMessage() {} func (*ToBeExtended) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} } var extRange_ToBeExtended = []proto.ExtensionRange{ {10, 30}, } func (*ToBeExtended) ExtensionRangeArray() []proto.ExtensionRange { return extRange_ToBeExtended } func (m *ToBeExtended) GetFoo() int32 { if m != nil && m.Foo != nil { return *m.Foo } return 0 } func init() { proto.RegisterType((*ToBeExtended)(nil), "grpc.testing.ToBeExtended") } func init() { proto.RegisterFile("proto2.proto", fileDescriptor0) } var fileDescriptor0 = []byte{ // 86 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0xe2, 0x29, 0x28, 0xca, 0x2f, 0xc9, 0x37, 0xd2, 0x03, 0x53, 0x42, 0x3c, 0xe9, 0x45, 0x05, 0xc9, 0x7a, 0x25, 0xa9, 0xc5, 0x25, 0x99, 0x79, 0xe9, 0x4a, 0x6a, 0x5c, 0x3c, 0x21, 0xf9, 0x4e, 0xa9, 0xae, 0x15, 0x25, 0xa9, 0x79, 0x29, 0xa9, 0x29, 0x42, 0x02, 0x5c, 0xcc, 0x69, 0xf9, 0xf9, 0x12, 0x8c, 0x0a, 0x4c, 0x1a, 0xac, 0x41, 0x20, 0xa6, 0x16, 0x0b, 0x07, 0x97, 0x80, 0x3c, 0x20, 0x00, 0x00, 0xff, 0xff, 0x74, 0x86, 0x9c, 0x08, 0x44, 0x00, 0x00, 0x00, } golang-google-grpc-1.6.0/reflection/grpc_testing/proto2.proto000066400000000000000000000013041315416461300243260ustar00rootroot00000000000000// Copyright 2017 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. syntax = "proto2"; package grpc.testing; message ToBeExtended { required int32 foo = 1; extensions 10 to 30; } golang-google-grpc-1.6.0/reflection/grpc_testing/proto2_ext.pb.go000066400000000000000000000057061315416461300250620ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: proto2_ext.proto package grpc_testing import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" // Reference imports to suppress errors if they are not otherwise used. 
var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf type Extension struct { Whatzit *int32 `protobuf:"varint,1,opt,name=whatzit" json:"whatzit,omitempty"` XXX_unrecognized []byte `json:"-"` } func (m *Extension) Reset() { *m = Extension{} } func (m *Extension) String() string { return proto.CompactTextString(m) } func (*Extension) ProtoMessage() {} func (*Extension) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{0} } func (m *Extension) GetWhatzit() int32 { if m != nil && m.Whatzit != nil { return *m.Whatzit } return 0 } var E_Foo = &proto.ExtensionDesc{ ExtendedType: (*ToBeExtended)(nil), ExtensionType: (*int32)(nil), Field: 13, Name: "grpc.testing.foo", Tag: "varint,13,opt,name=foo", Filename: "proto2_ext.proto", } var E_Bar = &proto.ExtensionDesc{ ExtendedType: (*ToBeExtended)(nil), ExtensionType: (*Extension)(nil), Field: 17, Name: "grpc.testing.bar", Tag: "bytes,17,opt,name=bar", Filename: "proto2_ext.proto", } var E_Baz = &proto.ExtensionDesc{ ExtendedType: (*ToBeExtended)(nil), ExtensionType: (*SearchRequest)(nil), Field: 19, Name: "grpc.testing.baz", Tag: "bytes,19,opt,name=baz", Filename: "proto2_ext.proto", } func init() { proto.RegisterType((*Extension)(nil), "grpc.testing.Extension") proto.RegisterExtension(E_Foo) proto.RegisterExtension(E_Bar) proto.RegisterExtension(E_Baz) } func init() { proto.RegisterFile("proto2_ext.proto", fileDescriptor1) } var fileDescriptor1 = []byte{ // 179 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x12, 0x28, 0x28, 0xca, 0x2f, 0xc9, 0x37, 0x8a, 0x4f, 0xad, 0x28, 0xd1, 0x03, 0x33, 0x85, 0x78, 0xd2, 0x8b, 0x0a, 0x92, 0xf5, 0x4a, 0x52, 0x8b, 0x4b, 0x32, 0xf3, 0xd2, 0xa5, 0x78, 0x20, 0xf2, 0x10, 0x39, 0x29, 0x2e, 0x90, 0x30, 0x84, 0xad, 0xa4, 0xca, 0xc5, 0xe9, 0x5a, 0x51, 0x92, 0x9a, 0x57, 0x9c, 0x99, 0x9f, 0x27, 0x24, 0xc1, 0xc5, 0x5e, 0x9e, 0x91, 0x58, 0x52, 0x95, 0x59, 0x22, 0xc1, 0xa8, 0xc0, 0xa8, 0xc1, 0x1a, 0x04, 0xe3, 0x5a, 0xe9, 0x70, 0x31, 0xa7, 0xe5, 0xe7, 0x0b, 0x49, 0xe9, 0x21, 0x1b, 0xab, 0x17, 0x92, 0xef, 0x94, 0x0a, 0xd6, 0x9d, 0x92, 0x9a, 0x22, 0xc1, 0x0b, 0xd6, 0x01, 0x52, 0x66, 0xe5, 0xca, 0xc5, 0x9c, 0x94, 0x58, 0x84, 0x57, 0xb5, 0xa0, 0x02, 0xa3, 0x06, 0xb7, 0x91, 0x38, 0xaa, 0x0a, 0xb8, 0x4b, 0x82, 0x40, 0xfa, 0xad, 0x3c, 0x41, 0xc6, 0x54, 0xe1, 0x35, 0x46, 0x18, 0x6c, 0x8c, 0x34, 0xaa, 0x8a, 0xe0, 0xd4, 0xc4, 0xa2, 0xe4, 0x8c, 0xa0, 0xd4, 0xc2, 0xd2, 0xd4, 0xe2, 0x12, 0x90, 0x51, 0x55, 0x80, 0x00, 0x00, 0x00, 0xff, 0xff, 0x71, 0x6b, 0x94, 0x9f, 0x21, 0x01, 0x00, 0x00, } golang-google-grpc-1.6.0/reflection/grpc_testing/proto2_ext.proto000066400000000000000000000015211315416461300252070ustar00rootroot00000000000000// Copyright 2017 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. 
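// proto2_ext.proto is a test-only fixture for the reflection package: it extends ToBeExtended // (declared in proto2.proto) so the reflection tests have proto2 extensions to look up, for // example via file_containing_extension and all_extension_numbers_of_type requests.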
syntax = "proto2"; package grpc.testing; import "proto2.proto"; import "test.proto"; extend ToBeExtended { optional int32 foo = 13; optional Extension bar = 17; optional SearchRequest baz = 19; } message Extension { optional int32 whatzit = 1; } golang-google-grpc-1.6.0/reflection/grpc_testing/proto2_ext2.pb.go000066400000000000000000000053241315416461300251400ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: proto2_ext2.proto package grpc_testing import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf type AnotherExtension struct { Whatchamacallit *int32 `protobuf:"varint,1,opt,name=whatchamacallit" json:"whatchamacallit,omitempty"` XXX_unrecognized []byte `json:"-"` } func (m *AnotherExtension) Reset() { *m = AnotherExtension{} } func (m *AnotherExtension) String() string { return proto.CompactTextString(m) } func (*AnotherExtension) ProtoMessage() {} func (*AnotherExtension) Descriptor() ([]byte, []int) { return fileDescriptor2, []int{0} } func (m *AnotherExtension) GetWhatchamacallit() int32 { if m != nil && m.Whatchamacallit != nil { return *m.Whatchamacallit } return 0 } var E_Frob = &proto.ExtensionDesc{ ExtendedType: (*ToBeExtended)(nil), ExtensionType: (*string)(nil), Field: 23, Name: "grpc.testing.frob", Tag: "bytes,23,opt,name=frob", Filename: "proto2_ext2.proto", } var E_Nitz = &proto.ExtensionDesc{ ExtendedType: (*ToBeExtended)(nil), ExtensionType: (*AnotherExtension)(nil), Field: 29, Name: "grpc.testing.nitz", Tag: "bytes,29,opt,name=nitz", Filename: "proto2_ext2.proto", } func init() { proto.RegisterType((*AnotherExtension)(nil), "grpc.testing.AnotherExtension") proto.RegisterExtension(E_Frob) proto.RegisterExtension(E_Nitz) } func init() { proto.RegisterFile("proto2_ext2.proto", fileDescriptor2) } var fileDescriptor2 = []byte{ // 165 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x12, 0x2c, 0x28, 0xca, 0x2f, 0xc9, 0x37, 0x8a, 0x4f, 0xad, 0x28, 0x31, 0xd2, 0x03, 0xb3, 0x85, 0x78, 0xd2, 0x8b, 0x0a, 0x92, 0xf5, 0x4a, 0x52, 0x8b, 0x4b, 0x32, 0xf3, 0xd2, 0xa5, 0x78, 0x20, 0x0a, 0x20, 0x72, 0x4a, 0x36, 0x5c, 0x02, 0x8e, 0x79, 0xf9, 0x25, 0x19, 0xa9, 0x45, 0xae, 0x15, 0x25, 0xa9, 0x79, 0xc5, 0x99, 0xf9, 0x79, 0x42, 0x1a, 0x5c, 0xfc, 0xe5, 0x19, 0x89, 0x25, 0xc9, 0x19, 0x89, 0xb9, 0x89, 0xc9, 0x89, 0x39, 0x39, 0x99, 0x25, 0x12, 0x8c, 0x0a, 0x8c, 0x1a, 0xac, 0x41, 0xe8, 0xc2, 0x56, 0x7a, 0x5c, 0x2c, 0x69, 0x45, 0xf9, 0x49, 0x42, 0x52, 0x7a, 0xc8, 0x56, 0xe8, 0x85, 0xe4, 0x3b, 0xa5, 0x82, 0x8d, 0x4b, 0x49, 0x4d, 0x91, 0x10, 0x57, 0x60, 0xd4, 0xe0, 0x0c, 0x02, 0xab, 0xb3, 0xf2, 0xe3, 0x62, 0xc9, 0xcb, 0x2c, 0xa9, 0xc2, 0xab, 0x5e, 0x56, 0x81, 0x51, 0x83, 0xdb, 0x48, 0x0e, 0x55, 0x05, 0xba, 0x1b, 0x83, 0xc0, 0xe6, 0x00, 0x02, 0x00, 0x00, 0xff, 0xff, 0xf0, 0x7e, 0x0d, 0x26, 0xed, 0x00, 0x00, 0x00, } golang-google-grpc-1.6.0/reflection/grpc_testing/proto2_ext2.proto000066400000000000000000000014621315416461300252750ustar00rootroot00000000000000// Copyright 2017 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. 
// You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. syntax = "proto2"; package grpc.testing; import "proto2.proto"; extend ToBeExtended { optional string frob = 23; optional AnotherExtension nitz = 29; } message AnotherExtension { optional int32 whatchamacallit = 1; } golang-google-grpc-1.6.0/reflection/grpc_testing/test.pb.go000066400000000000000000000201321315416461300237220ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: test.proto package grpc_testing import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import ( context "golang.org/x/net/context" grpc "google.golang.org/grpc" ) // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf type SearchResponse struct { Results []*SearchResponse_Result `protobuf:"bytes,1,rep,name=results" json:"results,omitempty"` } func (m *SearchResponse) Reset() { *m = SearchResponse{} } func (m *SearchResponse) String() string { return proto.CompactTextString(m) } func (*SearchResponse) ProtoMessage() {} func (*SearchResponse) Descriptor() ([]byte, []int) { return fileDescriptor3, []int{0} } func (m *SearchResponse) GetResults() []*SearchResponse_Result { if m != nil { return m.Results } return nil } type SearchResponse_Result struct { Url string `protobuf:"bytes,1,opt,name=url" json:"url,omitempty"` Title string `protobuf:"bytes,2,opt,name=title" json:"title,omitempty"` Snippets []string `protobuf:"bytes,3,rep,name=snippets" json:"snippets,omitempty"` } func (m *SearchResponse_Result) Reset() { *m = SearchResponse_Result{} } func (m *SearchResponse_Result) String() string { return proto.CompactTextString(m) } func (*SearchResponse_Result) ProtoMessage() {} func (*SearchResponse_Result) Descriptor() ([]byte, []int) { return fileDescriptor3, []int{0, 0} } func (m *SearchResponse_Result) GetUrl() string { if m != nil { return m.Url } return "" } func (m *SearchResponse_Result) GetTitle() string { if m != nil { return m.Title } return "" } func (m *SearchResponse_Result) GetSnippets() []string { if m != nil { return m.Snippets } return nil } type SearchRequest struct { Query string `protobuf:"bytes,1,opt,name=query" json:"query,omitempty"` } func (m *SearchRequest) Reset() { *m = SearchRequest{} } func (m *SearchRequest) String() string { return proto.CompactTextString(m) } func (*SearchRequest) ProtoMessage() {} func (*SearchRequest) Descriptor() ([]byte, []int) { return fileDescriptor3, []int{1} } func (m *SearchRequest) GetQuery() string { if m != nil { return m.Query } return "" } func init() { proto.RegisterType((*SearchResponse)(nil), "grpc.testing.SearchResponse") proto.RegisterType((*SearchResponse_Result)(nil), "grpc.testing.SearchResponse.Result") proto.RegisterType((*SearchRequest)(nil), "grpc.testing.SearchRequest") } // Reference imports to suppress errors if they are not otherwise used. var _ context.Context var _ grpc.ClientConn // This is a compile-time assertion to ensure that this generated file // is compatible with the grpc package it is being compiled against. 
const _ = grpc.SupportPackageIsVersion4 // Client API for SearchService service type SearchServiceClient interface { Search(ctx context.Context, in *SearchRequest, opts ...grpc.CallOption) (*SearchResponse, error) StreamingSearch(ctx context.Context, opts ...grpc.CallOption) (SearchService_StreamingSearchClient, error) } type searchServiceClient struct { cc *grpc.ClientConn } func NewSearchServiceClient(cc *grpc.ClientConn) SearchServiceClient { return &searchServiceClient{cc} } func (c *searchServiceClient) Search(ctx context.Context, in *SearchRequest, opts ...grpc.CallOption) (*SearchResponse, error) { out := new(SearchResponse) err := grpc.Invoke(ctx, "/grpc.testing.SearchService/Search", in, out, c.cc, opts...) if err != nil { return nil, err } return out, nil } func (c *searchServiceClient) StreamingSearch(ctx context.Context, opts ...grpc.CallOption) (SearchService_StreamingSearchClient, error) { stream, err := grpc.NewClientStream(ctx, &_SearchService_serviceDesc.Streams[0], c.cc, "/grpc.testing.SearchService/StreamingSearch", opts...) if err != nil { return nil, err } x := &searchServiceStreamingSearchClient{stream} return x, nil } type SearchService_StreamingSearchClient interface { Send(*SearchRequest) error Recv() (*SearchResponse, error) grpc.ClientStream } type searchServiceStreamingSearchClient struct { grpc.ClientStream } func (x *searchServiceStreamingSearchClient) Send(m *SearchRequest) error { return x.ClientStream.SendMsg(m) } func (x *searchServiceStreamingSearchClient) Recv() (*SearchResponse, error) { m := new(SearchResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } // Server API for SearchService service type SearchServiceServer interface { Search(context.Context, *SearchRequest) (*SearchResponse, error) StreamingSearch(SearchService_StreamingSearchServer) error } func RegisterSearchServiceServer(s *grpc.Server, srv SearchServiceServer) { s.RegisterService(&_SearchService_serviceDesc, srv) } func _SearchService_Search_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(SearchRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(SearchServiceServer).Search(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/grpc.testing.SearchService/Search", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(SearchServiceServer).Search(ctx, req.(*SearchRequest)) } return interceptor(ctx, in, info, handler) } func _SearchService_StreamingSearch_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(SearchServiceServer).StreamingSearch(&searchServiceStreamingSearchServer{stream}) } type SearchService_StreamingSearchServer interface { Send(*SearchResponse) error Recv() (*SearchRequest, error) grpc.ServerStream } type searchServiceStreamingSearchServer struct { grpc.ServerStream } func (x *searchServiceStreamingSearchServer) Send(m *SearchResponse) error { return x.ServerStream.SendMsg(m) } func (x *searchServiceStreamingSearchServer) Recv() (*SearchRequest, error) { m := new(SearchRequest) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } var _SearchService_serviceDesc = grpc.ServiceDesc{ ServiceName: "grpc.testing.SearchService", HandlerType: (*SearchServiceServer)(nil), Methods: []grpc.MethodDesc{ { MethodName: "Search", Handler: _SearchService_Search_Handler, }, }, Streams: []grpc.StreamDesc{ { 
StreamName: "StreamingSearch", Handler: _SearchService_StreamingSearch_Handler, ServerStreams: true, ClientStreams: true, }, }, Metadata: "test.proto", } func init() { proto.RegisterFile("test.proto", fileDescriptor3) } var fileDescriptor3 = []byte{ // 231 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xa4, 0x91, 0xbd, 0x4a, 0xc5, 0x40, 0x10, 0x85, 0x59, 0x83, 0xd1, 0x3b, 0xfe, 0x32, 0x58, 0x84, 0x68, 0x11, 0xae, 0x08, 0xa9, 0x16, 0xb9, 0xd6, 0x56, 0xb6, 0x16, 0xb2, 0x79, 0x82, 0x6b, 0x18, 0xe2, 0x42, 0x4c, 0x36, 0x33, 0x13, 0xc1, 0x87, 0xb1, 0xf5, 0x39, 0x25, 0x59, 0x23, 0x0a, 0x62, 0x63, 0xb7, 0xe7, 0xe3, 0xcc, 0xb7, 0xbb, 0x0c, 0x80, 0x92, 0xa8, 0x0d, 0xdc, 0x6b, 0x8f, 0x87, 0x0d, 0x87, 0xda, 0x4e, 0xc0, 0x77, 0xcd, 0xfa, 0xcd, 0xc0, 0x71, 0x45, 0x5b, 0xae, 0x9f, 0x1c, 0x49, 0xe8, 0x3b, 0x21, 0xbc, 0x85, 0x3d, 0x26, 0x19, 0x5b, 0x95, 0xcc, 0x14, 0x49, 0x79, 0xb0, 0xb9, 0xb4, 0xdf, 0x47, 0xec, 0xcf, 0xba, 0x75, 0x73, 0xd7, 0x2d, 0x33, 0xf9, 0x3d, 0xa4, 0x11, 0xe1, 0x29, 0x24, 0x23, 0xb7, 0x99, 0x29, 0x4c, 0xb9, 0x72, 0xd3, 0x11, 0xcf, 0x60, 0x57, 0xbd, 0xb6, 0x94, 0xed, 0xcc, 0x2c, 0x06, 0xcc, 0x61, 0x5f, 0x3a, 0x1f, 0x02, 0xa9, 0x64, 0x49, 0x91, 0x94, 0x2b, 0xf7, 0x95, 0xd7, 0x57, 0x70, 0xb4, 0xdc, 0x37, 0x8c, 0x24, 0x3a, 0x29, 0x86, 0x91, 0xf8, 0xf5, 0x53, 0x1b, 0xc3, 0xe6, 0xdd, 0x2c, 0xbd, 0x8a, 0xf8, 0xc5, 0xd7, 0x84, 0x77, 0x90, 0x46, 0x80, 0xe7, 0xbf, 0x3f, 0x7f, 0xd6, 0xe5, 0x17, 0x7f, 0xfd, 0x0d, 0x1f, 0xe0, 0xa4, 0x52, 0xa6, 0xed, 0xb3, 0xef, 0x9a, 0x7f, 0xdb, 0x4a, 0x73, 0x6d, 0x1e, 0xd3, 0x79, 0x09, 0x37, 0x1f, 0x01, 0x00, 0x00, 0xff, 0xff, 0x20, 0xd6, 0x09, 0xb8, 0x92, 0x01, 0x00, 0x00, } golang-google-grpc-1.6.0/reflection/grpc_testing/test.proto000066400000000000000000000017441315416461300240700ustar00rootroot00000000000000// Copyright 2017 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. syntax = "proto3"; package grpc.testing; message SearchResponse { message Result { string url = 1; string title = 2; repeated string snippets = 3; } repeated Result results = 1; } message SearchRequest { string query = 1; } service SearchService { rpc Search(SearchRequest) returns (SearchResponse); rpc StreamingSearch(stream SearchRequest) returns (stream SearchResponse); } golang-google-grpc-1.6.0/reflection/grpc_testingv3/000077500000000000000000000000001315416461300222675ustar00rootroot00000000000000golang-google-grpc-1.6.0/reflection/grpc_testingv3/testv3.pb.go000066400000000000000000000207011315416461300244460ustar00rootroot00000000000000// Code generated by protoc-gen-go. // source: testv3.proto // DO NOT EDIT! /* Package grpc_testingv3 is a generated protocol buffer package. 
It is generated from these files: testv3.proto It has these top-level messages: SearchResponseV3 SearchRequestV3 */ package grpc_testingv3 import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import ( context "golang.org/x/net/context" grpc "google.golang.org/grpc" ) // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type SearchResponseV3 struct { Results []*SearchResponseV3_Result `protobuf:"bytes,1,rep,name=results" json:"results,omitempty"` } func (m *SearchResponseV3) Reset() { *m = SearchResponseV3{} } func (m *SearchResponseV3) String() string { return proto.CompactTextString(m) } func (*SearchResponseV3) ProtoMessage() {} func (*SearchResponseV3) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} } func (m *SearchResponseV3) GetResults() []*SearchResponseV3_Result { if m != nil { return m.Results } return nil } type SearchResponseV3_Result struct { Url string `protobuf:"bytes,1,opt,name=url" json:"url,omitempty"` Title string `protobuf:"bytes,2,opt,name=title" json:"title,omitempty"` Snippets []string `protobuf:"bytes,3,rep,name=snippets" json:"snippets,omitempty"` } func (m *SearchResponseV3_Result) Reset() { *m = SearchResponseV3_Result{} } func (m *SearchResponseV3_Result) String() string { return proto.CompactTextString(m) } func (*SearchResponseV3_Result) ProtoMessage() {} func (*SearchResponseV3_Result) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0, 0} } type SearchRequestV3 struct { Query string `protobuf:"bytes,1,opt,name=query" json:"query,omitempty"` } func (m *SearchRequestV3) Reset() { *m = SearchRequestV3{} } func (m *SearchRequestV3) String() string { return proto.CompactTextString(m) } func (*SearchRequestV3) ProtoMessage() {} func (*SearchRequestV3) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} } func init() { proto.RegisterType((*SearchResponseV3)(nil), "grpc.testingv3.SearchResponseV3") proto.RegisterType((*SearchResponseV3_Result)(nil), "grpc.testingv3.SearchResponseV3.Result") proto.RegisterType((*SearchRequestV3)(nil), "grpc.testingv3.SearchRequestV3") } // Reference imports to suppress errors if they are not otherwise used. var _ context.Context var _ grpc.ClientConn // This is a compile-time assertion to ensure that this generated file // is compatible with the grpc package it is being compiled against. const _ = grpc.SupportPackageIsVersion3 // Client API for SearchServiceV3 service type SearchServiceV3Client interface { Search(ctx context.Context, in *SearchRequestV3, opts ...grpc.CallOption) (*SearchResponseV3, error) StreamingSearch(ctx context.Context, opts ...grpc.CallOption) (SearchServiceV3_StreamingSearchClient, error) } type searchServiceV3Client struct { cc *grpc.ClientConn } func NewSearchServiceV3Client(cc *grpc.ClientConn) SearchServiceV3Client { return &searchServiceV3Client{cc} } func (c *searchServiceV3Client) Search(ctx context.Context, in *SearchRequestV3, opts ...grpc.CallOption) (*SearchResponseV3, error) { out := new(SearchResponseV3) err := grpc.Invoke(ctx, "/grpc.testingv3.SearchServiceV3/Search", in, out, c.cc, opts...) 
if err != nil { return nil, err } return out, nil } func (c *searchServiceV3Client) StreamingSearch(ctx context.Context, opts ...grpc.CallOption) (SearchServiceV3_StreamingSearchClient, error) { stream, err := grpc.NewClientStream(ctx, &_SearchServiceV3_serviceDesc.Streams[0], c.cc, "/grpc.testingv3.SearchServiceV3/StreamingSearch", opts...) if err != nil { return nil, err } x := &searchServiceV3StreamingSearchClient{stream} return x, nil } type SearchServiceV3_StreamingSearchClient interface { Send(*SearchRequestV3) error Recv() (*SearchResponseV3, error) grpc.ClientStream } type searchServiceV3StreamingSearchClient struct { grpc.ClientStream } func (x *searchServiceV3StreamingSearchClient) Send(m *SearchRequestV3) error { return x.ClientStream.SendMsg(m) } func (x *searchServiceV3StreamingSearchClient) Recv() (*SearchResponseV3, error) { m := new(SearchResponseV3) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } // Server API for SearchServiceV3 service type SearchServiceV3Server interface { Search(context.Context, *SearchRequestV3) (*SearchResponseV3, error) StreamingSearch(SearchServiceV3_StreamingSearchServer) error } func RegisterSearchServiceV3Server(s *grpc.Server, srv SearchServiceV3Server) { s.RegisterService(&_SearchServiceV3_serviceDesc, srv) } func _SearchServiceV3_Search_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(SearchRequestV3) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(SearchServiceV3Server).Search(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/grpc.testingv3.SearchServiceV3/Search", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(SearchServiceV3Server).Search(ctx, req.(*SearchRequestV3)) } return interceptor(ctx, in, info, handler) } func _SearchServiceV3_StreamingSearch_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(SearchServiceV3Server).StreamingSearch(&searchServiceV3StreamingSearchServer{stream}) } type SearchServiceV3_StreamingSearchServer interface { Send(*SearchResponseV3) error Recv() (*SearchRequestV3, error) grpc.ServerStream } type searchServiceV3StreamingSearchServer struct { grpc.ServerStream } func (x *searchServiceV3StreamingSearchServer) Send(m *SearchResponseV3) error { return x.ServerStream.SendMsg(m) } func (x *searchServiceV3StreamingSearchServer) Recv() (*SearchRequestV3, error) { m := new(SearchRequestV3) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } var _SearchServiceV3_serviceDesc = grpc.ServiceDesc{ ServiceName: "grpc.testingv3.SearchServiceV3", HandlerType: (*SearchServiceV3Server)(nil), Methods: []grpc.MethodDesc{ { MethodName: "Search", Handler: _SearchServiceV3_Search_Handler, }, }, Streams: []grpc.StreamDesc{ { StreamName: "StreamingSearch", Handler: _SearchServiceV3_StreamingSearch_Handler, ServerStreams: true, ClientStreams: true, }, }, Metadata: fileDescriptor0, } func init() { proto.RegisterFile("testv3.proto", fileDescriptor0) } var fileDescriptor0 = []byte{ // 240 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xac, 0x91, 0x41, 0x4b, 0xc3, 0x40, 0x10, 0x85, 0x59, 0x83, 0xd1, 0x8e, 0x62, 0xcb, 0xe2, 0x21, 0xe4, 0x62, 0xe8, 0xa5, 0x39, 0x2d, 0xd2, 0xfd, 0x05, 0x9e, 0xf5, 0xb4, 0x81, 0xe2, 0xb5, 0x86, 0x21, 0x2e, 0xc4, 0x64, 0x3b, 0x33, 0x09, 0xf8, 0x7b, 0xfc, 0x13, 0xfe, 0x3c, 
0x49, 0xd2, 0x08, 0x0a, 0xe2, 0xa5, 0xb7, 0x7d, 0x8f, 0xf7, 0xbe, 0xe5, 0x31, 0x70, 0x2d, 0xc8, 0xd2, 0x5b, 0x13, 0xa8, 0x95, 0x56, 0xdf, 0x54, 0x14, 0x4a, 0x33, 0x58, 0xbe, 0xa9, 0x7a, 0xbb, 0xfe, 0x50, 0xb0, 0x2a, 0x70, 0x4f, 0xe5, 0xab, 0x43, 0x0e, 0x6d, 0xc3, 0xb8, 0xb3, 0xfa, 0x01, 0x2e, 0x08, 0xb9, 0xab, 0x85, 0x13, 0x95, 0x45, 0xf9, 0xd5, 0x76, 0x63, 0x7e, 0xd6, 0xcc, 0xef, 0x8a, 0x71, 0x63, 0xde, 0xcd, 0xbd, 0xf4, 0x09, 0xe2, 0xc9, 0xd2, 0x2b, 0x88, 0x3a, 0xaa, 0x13, 0x95, 0xa9, 0x7c, 0xe1, 0x86, 0xa7, 0xbe, 0x85, 0x73, 0xf1, 0x52, 0x63, 0x72, 0x36, 0x7a, 0x93, 0xd0, 0x29, 0x5c, 0x72, 0xe3, 0x43, 0x40, 0xe1, 0x24, 0xca, 0xa2, 0x7c, 0xe1, 0xbe, 0xf5, 0x7a, 0x03, 0xcb, 0xf9, 0xc7, 0x43, 0x87, 0x2c, 0x3b, 0x3b, 0x40, 0x0e, 0x1d, 0xd2, 0xfb, 0x11, 0x3c, 0x89, 0xed, 0xa7, 0x9a, 0x93, 0x05, 0x52, 0xef, 0xcb, 0x61, 0xcd, 0x23, 0xc4, 0x93, 0xa5, 0xef, 0xfe, 0x9a, 0x71, 0x84, 0xa6, 0xd9, 0x7f, 0x3b, 0xf5, 0x33, 0x2c, 0x0b, 0x21, 0xdc, 0xbf, 0xf9, 0xa6, 0x3a, 0x19, 0x35, 0x57, 0xf7, 0xea, 0x25, 0x1e, 0x0f, 0x64, 0xbf, 0x02, 0x00, 0x00, 0xff, 0xff, 0xd4, 0xe6, 0xa0, 0xf9, 0xb0, 0x01, 0x00, 0x00, } golang-google-grpc-1.6.0/reflection/grpc_testingv3/testv3.proto000066400000000000000000000006451315416461300246110ustar00rootroot00000000000000syntax = "proto3"; package grpc.testingv3; message SearchResponseV3 { message Result { string url = 1; string title = 2; repeated string snippets = 3; } repeated Result results = 1; } message SearchRequestV3 { string query = 1; } service SearchServiceV3 { rpc Search(SearchRequestV3) returns (SearchResponseV3); rpc StreamingSearch(stream SearchRequestV3) returns (stream SearchResponseV3); } golang-google-grpc-1.6.0/reflection/serverreflection.go000066400000000000000000000270551315416461300232470ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ //go:generate protoc --go_out=plugins=grpc:. grpc_reflection_v1alpha/reflection.proto /* Package reflection implements server reflection service. The service implemented is defined in: https://github.com/grpc/grpc/blob/master/src/proto/grpc/reflection/v1alpha/reflection.proto. To register server reflection on a gRPC server: import "google.golang.org/grpc/reflection" s := grpc.NewServer() pb.RegisterYourOwnServer(s, &server{}) // Register reflection service on gRPC server. reflection.Register(s) s.Serve(lis) */ package reflection // import "google.golang.org/grpc/reflection" import ( "bytes" "compress/gzip" "fmt" "io" "io/ioutil" "reflect" "strings" "github.com/golang/protobuf/proto" dpb "github.com/golang/protobuf/protoc-gen-go/descriptor" "google.golang.org/grpc" "google.golang.org/grpc/codes" rpb "google.golang.org/grpc/reflection/grpc_reflection_v1alpha" ) type serverReflectionServer struct { s *grpc.Server // TODO add more cache if necessary serviceInfo map[string]grpc.ServiceInfo // cache for s.GetServiceInfo() } // Register registers the server reflection service on the given gRPC server. 
func Register(s *grpc.Server) { rpb.RegisterServerReflectionServer(s, &serverReflectionServer{ s: s, }) } // protoMessage is used for type assertion on proto messages. // Generated proto message implements function Descriptor(), but Descriptor() // is not part of interface proto.Message. This interface is needed to // call Descriptor(). type protoMessage interface { Descriptor() ([]byte, []int) } // fileDescForType gets the file descriptor for the given type. // The given type should be a proto message. func (s *serverReflectionServer) fileDescForType(st reflect.Type) (*dpb.FileDescriptorProto, error) { m, ok := reflect.Zero(reflect.PtrTo(st)).Interface().(protoMessage) if !ok { return nil, fmt.Errorf("failed to create message from type: %v", st) } enc, _ := m.Descriptor() return s.decodeFileDesc(enc) } // decodeFileDesc does decompression and unmarshalling on the given // file descriptor byte slice. func (s *serverReflectionServer) decodeFileDesc(enc []byte) (*dpb.FileDescriptorProto, error) { raw, err := decompress(enc) if err != nil { return nil, fmt.Errorf("failed to decompress enc: %v", err) } fd := new(dpb.FileDescriptorProto) if err := proto.Unmarshal(raw, fd); err != nil { return nil, fmt.Errorf("bad descriptor: %v", err) } return fd, nil } // decompress does gzip decompression. func decompress(b []byte) ([]byte, error) { r, err := gzip.NewReader(bytes.NewReader(b)) if err != nil { return nil, fmt.Errorf("bad gzipped descriptor: %v", err) } out, err := ioutil.ReadAll(r) if err != nil { return nil, fmt.Errorf("bad gzipped descriptor: %v", err) } return out, nil } func (s *serverReflectionServer) typeForName(name string) (reflect.Type, error) { pt := proto.MessageType(name) if pt == nil { return nil, fmt.Errorf("unknown type: %q", name) } st := pt.Elem() return st, nil } func (s *serverReflectionServer) fileDescContainingExtension(st reflect.Type, ext int32) (*dpb.FileDescriptorProto, error) { m, ok := reflect.Zero(reflect.PtrTo(st)).Interface().(proto.Message) if !ok { return nil, fmt.Errorf("failed to create message from type: %v", st) } var extDesc *proto.ExtensionDesc for id, desc := range proto.RegisteredExtensions(m) { if id == ext { extDesc = desc break } } if extDesc == nil { return nil, fmt.Errorf("failed to find registered extension for extension number %v", ext) } return s.decodeFileDesc(proto.FileDescriptor(extDesc.Filename)) } func (s *serverReflectionServer) allExtensionNumbersForType(st reflect.Type) ([]int32, error) { m, ok := reflect.Zero(reflect.PtrTo(st)).Interface().(proto.Message) if !ok { return nil, fmt.Errorf("failed to create message from type: %v", st) } exts := proto.RegisteredExtensions(m) out := make([]int32, 0, len(exts)) for id := range exts { out = append(out, id) } return out, nil } // fileDescEncodingByFilename finds the file descriptor for given filename, // does marshalling on it and returns the marshalled result. func (s *serverReflectionServer) fileDescEncodingByFilename(name string) ([]byte, error) { enc := proto.FileDescriptor(name) if enc == nil { return nil, fmt.Errorf("unknown file: %v", name) } fd, err := s.decodeFileDesc(enc) if err != nil { return nil, err } return proto.Marshal(fd) } // serviceMetadataForSymbol finds the metadata for name in s.serviceInfo. // name should be a service name or a method name. func (s *serverReflectionServer) serviceMetadataForSymbol(name string) (interface{}, error) { if s.serviceInfo == nil { s.serviceInfo = s.s.GetServiceInfo() } // Check if it's a service name. 
if info, ok := s.serviceInfo[name]; ok { return info.Metadata, nil } // Check if it's a method name. pos := strings.LastIndex(name, ".") // Not a valid method name. if pos == -1 { return nil, fmt.Errorf("unknown symbol: %v", name) } info, ok := s.serviceInfo[name[:pos]] // Substring before last "." is not a service name. if !ok { return nil, fmt.Errorf("unknown symbol: %v", name) } // Search the method name in info.Methods. var found bool for _, m := range info.Methods { if m.Name == name[pos+1:] { found = true break } } if found { return info.Metadata, nil } return nil, fmt.Errorf("unknown symbol: %v", name) } // parseMetadata finds the file descriptor bytes specified meta. // For SupportPackageIsVersion4, m is the name of the proto file, we // call proto.FileDescriptor to get the byte slice. // For SupportPackageIsVersion3, m is a byte slice itself. func parseMetadata(meta interface{}) ([]byte, bool) { // Check if meta is the file name. if fileNameForMeta, ok := meta.(string); ok { return proto.FileDescriptor(fileNameForMeta), true } // Check if meta is the byte slice. if enc, ok := meta.([]byte); ok { return enc, true } return nil, false } // fileDescEncodingContainingSymbol finds the file descriptor containing the given symbol, // does marshalling on it and returns the marshalled result. // The given symbol can be a type, a service or a method. func (s *serverReflectionServer) fileDescEncodingContainingSymbol(name string) ([]byte, error) { var ( fd *dpb.FileDescriptorProto ) // Check if it's a type name. if st, err := s.typeForName(name); err == nil { fd, err = s.fileDescForType(st) if err != nil { return nil, err } } else { // Check if it's a service name or a method name. meta, err := s.serviceMetadataForSymbol(name) // Metadata not found. if err != nil { return nil, err } // Metadata not valid. enc, ok := parseMetadata(meta) if !ok { return nil, fmt.Errorf("invalid file descriptor for symbol: %v", name) } fd, err = s.decodeFileDesc(enc) if err != nil { return nil, err } } return proto.Marshal(fd) } // fileDescEncodingContainingExtension finds the file descriptor containing given extension, // does marshalling on it and returns the marshalled result. func (s *serverReflectionServer) fileDescEncodingContainingExtension(typeName string, extNum int32) ([]byte, error) { st, err := s.typeForName(typeName) if err != nil { return nil, err } fd, err := s.fileDescContainingExtension(st, extNum) if err != nil { return nil, err } return proto.Marshal(fd) } // allExtensionNumbersForTypeName returns all extension numbers for the given type. func (s *serverReflectionServer) allExtensionNumbersForTypeName(name string) ([]int32, error) { st, err := s.typeForName(name) if err != nil { return nil, err } extNums, err := s.allExtensionNumbersForType(st) if err != nil { return nil, err } return extNums, nil } // ServerReflectionInfo is the reflection service handler. 
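//
// A minimal client-side sketch of driving this handler over the generated
// stream API (adapted from the end-to-end tests in serverreflection_test.go;
// conn is assumed to be an established *grpc.ClientConn and error handling is
// elided):
//
//	client := rpb.NewServerReflectionClient(conn)
//	stream, _ := client.ServerReflectionInfo(context.Background())
//	stream.Send(&rpb.ServerReflectionRequest{
//		MessageRequest: &rpb.ServerReflectionRequest_ListServices{},
//	})
//	r, _ := stream.Recv()
//	for _, svc := range r.GetListServicesResponse().Service {
//		fmt.Println(svc.Name)
//	}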
func (s *serverReflectionServer) ServerReflectionInfo(stream rpb.ServerReflection_ServerReflectionInfoServer) error { for { in, err := stream.Recv() if err == io.EOF { return nil } if err != nil { return err } out := &rpb.ServerReflectionResponse{ ValidHost: in.Host, OriginalRequest: in, } switch req := in.MessageRequest.(type) { case *rpb.ServerReflectionRequest_FileByFilename: b, err := s.fileDescEncodingByFilename(req.FileByFilename) if err != nil { out.MessageResponse = &rpb.ServerReflectionResponse_ErrorResponse{ ErrorResponse: &rpb.ErrorResponse{ ErrorCode: int32(codes.NotFound), ErrorMessage: err.Error(), }, } } else { out.MessageResponse = &rpb.ServerReflectionResponse_FileDescriptorResponse{ FileDescriptorResponse: &rpb.FileDescriptorResponse{FileDescriptorProto: [][]byte{b}}, } } case *rpb.ServerReflectionRequest_FileContainingSymbol: b, err := s.fileDescEncodingContainingSymbol(req.FileContainingSymbol) if err != nil { out.MessageResponse = &rpb.ServerReflectionResponse_ErrorResponse{ ErrorResponse: &rpb.ErrorResponse{ ErrorCode: int32(codes.NotFound), ErrorMessage: err.Error(), }, } } else { out.MessageResponse = &rpb.ServerReflectionResponse_FileDescriptorResponse{ FileDescriptorResponse: &rpb.FileDescriptorResponse{FileDescriptorProto: [][]byte{b}}, } } case *rpb.ServerReflectionRequest_FileContainingExtension: typeName := req.FileContainingExtension.ContainingType extNum := req.FileContainingExtension.ExtensionNumber b, err := s.fileDescEncodingContainingExtension(typeName, extNum) if err != nil { out.MessageResponse = &rpb.ServerReflectionResponse_ErrorResponse{ ErrorResponse: &rpb.ErrorResponse{ ErrorCode: int32(codes.NotFound), ErrorMessage: err.Error(), }, } } else { out.MessageResponse = &rpb.ServerReflectionResponse_FileDescriptorResponse{ FileDescriptorResponse: &rpb.FileDescriptorResponse{FileDescriptorProto: [][]byte{b}}, } } case *rpb.ServerReflectionRequest_AllExtensionNumbersOfType: extNums, err := s.allExtensionNumbersForTypeName(req.AllExtensionNumbersOfType) if err != nil { out.MessageResponse = &rpb.ServerReflectionResponse_ErrorResponse{ ErrorResponse: &rpb.ErrorResponse{ ErrorCode: int32(codes.NotFound), ErrorMessage: err.Error(), }, } } else { out.MessageResponse = &rpb.ServerReflectionResponse_AllExtensionNumbersResponse{ AllExtensionNumbersResponse: &rpb.ExtensionNumberResponse{ BaseTypeName: req.AllExtensionNumbersOfType, ExtensionNumber: extNums, }, } } case *rpb.ServerReflectionRequest_ListServices: if s.serviceInfo == nil { s.serviceInfo = s.s.GetServiceInfo() } serviceResponses := make([]*rpb.ServiceResponse, 0, len(s.serviceInfo)) for n := range s.serviceInfo { serviceResponses = append(serviceResponses, &rpb.ServiceResponse{ Name: n, }) } out.MessageResponse = &rpb.ServerReflectionResponse_ListServicesResponse{ ListServicesResponse: &rpb.ListServiceResponse{ Service: serviceResponses, }, } default: return grpc.Errorf(codes.InvalidArgument, "invalid MessageRequest: %v", in.MessageRequest) } if err := stream.Send(out); err != nil { return err } } } golang-google-grpc-1.6.0/reflection/serverreflection_test.go000066400000000000000000000401241315416461300242760ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ //go:generate protoc -I grpc_testing --go_out=plugins=grpc:grpc_testing/ grpc_testing/proto2.proto grpc_testing/proto2_ext.proto grpc_testing/proto2_ext2.proto grpc_testing/test.proto // Note: grpc_testingv3/testv3.pb.go is not re-generated because it was // intentionally generated by an older version of protoc-gen-go. package reflection import ( "fmt" "net" "reflect" "sort" "testing" "github.com/golang/protobuf/proto" dpb "github.com/golang/protobuf/protoc-gen-go/descriptor" "golang.org/x/net/context" "google.golang.org/grpc" rpb "google.golang.org/grpc/reflection/grpc_reflection_v1alpha" pb "google.golang.org/grpc/reflection/grpc_testing" pbv3 "google.golang.org/grpc/reflection/grpc_testingv3" ) var ( s = &serverReflectionServer{} // fileDescriptor of each test proto file. fdTest *dpb.FileDescriptorProto fdTestv3 *dpb.FileDescriptorProto fdProto2 *dpb.FileDescriptorProto fdProto2Ext *dpb.FileDescriptorProto fdProto2Ext2 *dpb.FileDescriptorProto // fileDescriptor marshalled. fdTestByte []byte fdTestv3Byte []byte fdProto2Byte []byte fdProto2ExtByte []byte fdProto2Ext2Byte []byte ) func loadFileDesc(filename string) (*dpb.FileDescriptorProto, []byte) { enc := proto.FileDescriptor(filename) if enc == nil { panic(fmt.Sprintf("failed to find fd for file: %v", filename)) } fd, err := s.decodeFileDesc(enc) if err != nil { panic(fmt.Sprintf("failed to decode enc: %v", err)) } b, err := proto.Marshal(fd) if err != nil { panic(fmt.Sprintf("failed to marshal fd: %v", err)) } return fd, b } func init() { fdTest, fdTestByte = loadFileDesc("test.proto") fdTestv3, fdTestv3Byte = loadFileDesc("testv3.proto") fdProto2, fdProto2Byte = loadFileDesc("proto2.proto") fdProto2Ext, fdProto2ExtByte = loadFileDesc("proto2_ext.proto") fdProto2Ext2, fdProto2Ext2Byte = loadFileDesc("proto2_ext2.proto") } func TestFileDescForType(t *testing.T) { for _, test := range []struct { st reflect.Type wantFd *dpb.FileDescriptorProto }{ {reflect.TypeOf(pb.SearchResponse_Result{}), fdTest}, {reflect.TypeOf(pb.ToBeExtended{}), fdProto2}, } { fd, err := s.fileDescForType(test.st) if err != nil || !proto.Equal(fd, test.wantFd) { t.Errorf("fileDescForType(%q) = %q, %v, want %q, ", test.st, fd, err, test.wantFd) } } } func TestTypeForName(t *testing.T) { for _, test := range []struct { name string want reflect.Type }{ {"grpc.testing.SearchResponse", reflect.TypeOf(pb.SearchResponse{})}, } { r, err := s.typeForName(test.name) if err != nil || r != test.want { t.Errorf("typeForName(%q) = %q, %v, want %q, ", test.name, r, err, test.want) } } } func TestTypeForNameNotFound(t *testing.T) { for _, test := range []string{ "grpc.testing.not_exiting", } { _, err := s.typeForName(test) if err == nil { t.Errorf("typeForName(%q) = _, %v, want _, ", test, err) } } } func TestFileDescContainingExtension(t *testing.T) { for _, test := range []struct { st reflect.Type extNum int32 want *dpb.FileDescriptorProto }{ {reflect.TypeOf(pb.ToBeExtended{}), 13, fdProto2Ext}, {reflect.TypeOf(pb.ToBeExtended{}), 17, fdProto2Ext}, {reflect.TypeOf(pb.ToBeExtended{}), 19, fdProto2Ext}, {reflect.TypeOf(pb.ToBeExtended{}), 23, fdProto2Ext2}, 
{reflect.TypeOf(pb.ToBeExtended{}), 29, fdProto2Ext2}, } { fd, err := s.fileDescContainingExtension(test.st, test.extNum) if err != nil || !proto.Equal(fd, test.want) { t.Errorf("fileDescContainingExtension(%q) = %q, %v, want %q, ", test.st, fd, err, test.want) } } } // intArray is used to sort []int32 type intArray []int32 func (s intArray) Len() int { return len(s) } func (s intArray) Swap(i, j int) { s[i], s[j] = s[j], s[i] } func (s intArray) Less(i, j int) bool { return s[i] < s[j] } func TestAllExtensionNumbersForType(t *testing.T) { for _, test := range []struct { st reflect.Type want []int32 }{ {reflect.TypeOf(pb.ToBeExtended{}), []int32{13, 17, 19, 23, 29}}, } { r, err := s.allExtensionNumbersForType(test.st) sort.Sort(intArray(r)) if err != nil || !reflect.DeepEqual(r, test.want) { t.Errorf("allExtensionNumbersForType(%q) = %v, %v, want %v, ", test.st, r, err, test.want) } } } // Do end2end tests. type server struct{} func (s *server) Search(ctx context.Context, in *pb.SearchRequest) (*pb.SearchResponse, error) { return &pb.SearchResponse{}, nil } func (s *server) StreamingSearch(stream pb.SearchService_StreamingSearchServer) error { return nil } type serverV3 struct{} func (s *serverV3) Search(ctx context.Context, in *pbv3.SearchRequestV3) (*pbv3.SearchResponseV3, error) { return &pbv3.SearchResponseV3{}, nil } func (s *serverV3) StreamingSearch(stream pbv3.SearchServiceV3_StreamingSearchServer) error { return nil } func TestReflectionEnd2end(t *testing.T) { // Start server. lis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("failed to listen: %v", err) } s := grpc.NewServer() pb.RegisterSearchServiceServer(s, &server{}) pbv3.RegisterSearchServiceV3Server(s, &serverV3{}) // Register reflection service on s. Register(s) go s.Serve(lis) // Create client. conn, err := grpc.Dial(lis.Addr().String(), grpc.WithInsecure()) if err != nil { t.Fatalf("cannot connect to server: %v", err) } defer conn.Close() c := rpb.NewServerReflectionClient(conn) stream, err := c.ServerReflectionInfo(context.Background()) if err != nil { t.Fatalf("cannot get ServerReflectionInfo: %v", err) } testFileByFilename(t, stream) testFileByFilenameError(t, stream) testFileContainingSymbol(t, stream) testFileContainingSymbolError(t, stream) testFileContainingExtension(t, stream) testFileContainingExtensionError(t, stream) testAllExtensionNumbersOfType(t, stream) testAllExtensionNumbersOfTypeError(t, stream) testListServices(t, stream) s.Stop() } func testFileByFilename(t *testing.T, stream rpb.ServerReflection_ServerReflectionInfoClient) { for _, test := range []struct { filename string want []byte }{ {"test.proto", fdTestByte}, {"proto2.proto", fdProto2Byte}, {"proto2_ext.proto", fdProto2ExtByte}, } { if err := stream.Send(&rpb.ServerReflectionRequest{ MessageRequest: &rpb.ServerReflectionRequest_FileByFilename{ FileByFilename: test.filename, }, }); err != nil { t.Fatalf("failed to send request: %v", err) } r, err := stream.Recv() if err != nil { // io.EOF is not ok. 
t.Fatalf("failed to recv response: %v", err) } switch r.MessageResponse.(type) { case *rpb.ServerReflectionResponse_FileDescriptorResponse: if !reflect.DeepEqual(r.GetFileDescriptorResponse().FileDescriptorProto[0], test.want) { t.Errorf("FileByFilename(%v)\nreceived: %q,\nwant: %q", test.filename, r.GetFileDescriptorResponse().FileDescriptorProto[0], test.want) } default: t.Errorf("FileByFilename(%v) = %v, want type ", test.filename, r.MessageResponse) } } } func testFileByFilenameError(t *testing.T, stream rpb.ServerReflection_ServerReflectionInfoClient) { for _, test := range []string{ "test.poto", "proo2.proto", "proto2_et.proto", } { if err := stream.Send(&rpb.ServerReflectionRequest{ MessageRequest: &rpb.ServerReflectionRequest_FileByFilename{ FileByFilename: test, }, }); err != nil { t.Fatalf("failed to send request: %v", err) } r, err := stream.Recv() if err != nil { // io.EOF is not ok. t.Fatalf("failed to recv response: %v", err) } switch r.MessageResponse.(type) { case *rpb.ServerReflectionResponse_ErrorResponse: default: t.Errorf("FileByFilename(%v) = %v, want type ", test, r.MessageResponse) } } } func testFileContainingSymbol(t *testing.T, stream rpb.ServerReflection_ServerReflectionInfoClient) { for _, test := range []struct { symbol string want []byte }{ {"grpc.testing.SearchService", fdTestByte}, {"grpc.testing.SearchService.Search", fdTestByte}, {"grpc.testing.SearchService.StreamingSearch", fdTestByte}, {"grpc.testing.SearchResponse", fdTestByte}, {"grpc.testing.ToBeExtended", fdProto2Byte}, // Test support package v3. {"grpc.testingv3.SearchServiceV3", fdTestv3Byte}, {"grpc.testingv3.SearchServiceV3.Search", fdTestv3Byte}, {"grpc.testingv3.SearchServiceV3.StreamingSearch", fdTestv3Byte}, {"grpc.testingv3.SearchResponseV3", fdTestv3Byte}, } { if err := stream.Send(&rpb.ServerReflectionRequest{ MessageRequest: &rpb.ServerReflectionRequest_FileContainingSymbol{ FileContainingSymbol: test.symbol, }, }); err != nil { t.Fatalf("failed to send request: %v", err) } r, err := stream.Recv() if err != nil { // io.EOF is not ok. t.Fatalf("failed to recv response: %v", err) } switch r.MessageResponse.(type) { case *rpb.ServerReflectionResponse_FileDescriptorResponse: if !reflect.DeepEqual(r.GetFileDescriptorResponse().FileDescriptorProto[0], test.want) { t.Errorf("FileContainingSymbol(%v)\nreceived: %q,\nwant: %q", test.symbol, r.GetFileDescriptorResponse().FileDescriptorProto[0], test.want) } default: t.Errorf("FileContainingSymbol(%v) = %v, want type ", test.symbol, r.MessageResponse) } } } func testFileContainingSymbolError(t *testing.T, stream rpb.ServerReflection_ServerReflectionInfoClient) { for _, test := range []string{ "grpc.testing.SerchService", "grpc.testing.SearchService.SearchE", "grpc.tesing.SearchResponse", "gpc.testing.ToBeExtended", } { if err := stream.Send(&rpb.ServerReflectionRequest{ MessageRequest: &rpb.ServerReflectionRequest_FileContainingSymbol{ FileContainingSymbol: test, }, }); err != nil { t.Fatalf("failed to send request: %v", err) } r, err := stream.Recv() if err != nil { // io.EOF is not ok. 
t.Fatalf("failed to recv response: %v", err) } switch r.MessageResponse.(type) { case *rpb.ServerReflectionResponse_ErrorResponse: default: t.Errorf("FileContainingSymbol(%v) = %v, want type ", test, r.MessageResponse) } } } func testFileContainingExtension(t *testing.T, stream rpb.ServerReflection_ServerReflectionInfoClient) { for _, test := range []struct { typeName string extNum int32 want []byte }{ {"grpc.testing.ToBeExtended", 13, fdProto2ExtByte}, {"grpc.testing.ToBeExtended", 17, fdProto2ExtByte}, {"grpc.testing.ToBeExtended", 19, fdProto2ExtByte}, {"grpc.testing.ToBeExtended", 23, fdProto2Ext2Byte}, {"grpc.testing.ToBeExtended", 29, fdProto2Ext2Byte}, } { if err := stream.Send(&rpb.ServerReflectionRequest{ MessageRequest: &rpb.ServerReflectionRequest_FileContainingExtension{ FileContainingExtension: &rpb.ExtensionRequest{ ContainingType: test.typeName, ExtensionNumber: test.extNum, }, }, }); err != nil { t.Fatalf("failed to send request: %v", err) } r, err := stream.Recv() if err != nil { // io.EOF is not ok. t.Fatalf("failed to recv response: %v", err) } switch r.MessageResponse.(type) { case *rpb.ServerReflectionResponse_FileDescriptorResponse: if !reflect.DeepEqual(r.GetFileDescriptorResponse().FileDescriptorProto[0], test.want) { t.Errorf("FileContainingExtension(%v, %v)\nreceived: %q,\nwant: %q", test.typeName, test.extNum, r.GetFileDescriptorResponse().FileDescriptorProto[0], test.want) } default: t.Errorf("FileContainingExtension(%v, %v) = %v, want type ", test.typeName, test.extNum, r.MessageResponse) } } } func testFileContainingExtensionError(t *testing.T, stream rpb.ServerReflection_ServerReflectionInfoClient) { for _, test := range []struct { typeName string extNum int32 }{ {"grpc.testing.ToBExtended", 17}, {"grpc.testing.ToBeExtended", 15}, } { if err := stream.Send(&rpb.ServerReflectionRequest{ MessageRequest: &rpb.ServerReflectionRequest_FileContainingExtension{ FileContainingExtension: &rpb.ExtensionRequest{ ContainingType: test.typeName, ExtensionNumber: test.extNum, }, }, }); err != nil { t.Fatalf("failed to send request: %v", err) } r, err := stream.Recv() if err != nil { // io.EOF is not ok. t.Fatalf("failed to recv response: %v", err) } switch r.MessageResponse.(type) { case *rpb.ServerReflectionResponse_ErrorResponse: default: t.Errorf("FileContainingExtension(%v, %v) = %v, want type ", test.typeName, test.extNum, r.MessageResponse) } } } func testAllExtensionNumbersOfType(t *testing.T, stream rpb.ServerReflection_ServerReflectionInfoClient) { for _, test := range []struct { typeName string want []int32 }{ {"grpc.testing.ToBeExtended", []int32{13, 17, 19, 23, 29}}, } { if err := stream.Send(&rpb.ServerReflectionRequest{ MessageRequest: &rpb.ServerReflectionRequest_AllExtensionNumbersOfType{ AllExtensionNumbersOfType: test.typeName, }, }); err != nil { t.Fatalf("failed to send request: %v", err) } r, err := stream.Recv() if err != nil { // io.EOF is not ok. 
t.Fatalf("failed to recv response: %v", err) } switch r.MessageResponse.(type) { case *rpb.ServerReflectionResponse_AllExtensionNumbersResponse: extNum := r.GetAllExtensionNumbersResponse().ExtensionNumber sort.Sort(intArray(extNum)) if r.GetAllExtensionNumbersResponse().BaseTypeName != test.typeName || !reflect.DeepEqual(extNum, test.want) { t.Errorf("AllExtensionNumbersOfType(%v)\nreceived: %v,\nwant: {%q %v}", r.GetAllExtensionNumbersResponse(), test.typeName, test.typeName, test.want) } default: t.Errorf("AllExtensionNumbersOfType(%v) = %v, want type ", test.typeName, r.MessageResponse) } } } func testAllExtensionNumbersOfTypeError(t *testing.T, stream rpb.ServerReflection_ServerReflectionInfoClient) { for _, test := range []string{ "grpc.testing.ToBeExtendedE", } { if err := stream.Send(&rpb.ServerReflectionRequest{ MessageRequest: &rpb.ServerReflectionRequest_AllExtensionNumbersOfType{ AllExtensionNumbersOfType: test, }, }); err != nil { t.Fatalf("failed to send request: %v", err) } r, err := stream.Recv() if err != nil { // io.EOF is not ok. t.Fatalf("failed to recv response: %v", err) } switch r.MessageResponse.(type) { case *rpb.ServerReflectionResponse_ErrorResponse: default: t.Errorf("AllExtensionNumbersOfType(%v) = %v, want type ", test, r.MessageResponse) } } } func testListServices(t *testing.T, stream rpb.ServerReflection_ServerReflectionInfoClient) { if err := stream.Send(&rpb.ServerReflectionRequest{ MessageRequest: &rpb.ServerReflectionRequest_ListServices{}, }); err != nil { t.Fatalf("failed to send request: %v", err) } r, err := stream.Recv() if err != nil { // io.EOF is not ok. t.Fatalf("failed to recv response: %v", err) } switch r.MessageResponse.(type) { case *rpb.ServerReflectionResponse_ListServicesResponse: services := r.GetListServicesResponse().Service want := []string{ "grpc.testingv3.SearchServiceV3", "grpc.testing.SearchService", "grpc.reflection.v1alpha.ServerReflection", } // Compare service names in response with want. if len(services) != len(want) { t.Errorf("= %v, want service names: %v", services, want) } m := make(map[string]int) for _, e := range services { m[e.Name]++ } for _, e := range want { if m[e] > 0 { m[e]-- continue } t.Errorf("ListService\nreceived: %v,\nwant: %q", services, want) } default: t.Errorf("ListServices = %v, want type ", r.MessageResponse) } } golang-google-grpc-1.6.0/rpc_util.go000066400000000000000000000361541315416461300173550ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "bytes" "compress/gzip" "encoding/binary" "io" "io/ioutil" "math" "sync" "time" "golang.org/x/net/context" "google.golang.org/grpc/codes" "google.golang.org/grpc/credentials" "google.golang.org/grpc/metadata" "google.golang.org/grpc/peer" "google.golang.org/grpc/stats" "google.golang.org/grpc/status" "google.golang.org/grpc/transport" ) // Compressor defines the interface gRPC uses to compress a message. type Compressor interface { // Do compresses p into w. 
Do(w io.Writer, p []byte) error // Type returns the compression algorithm the Compressor uses. Type() string } type gzipCompressor struct { pool sync.Pool } // NewGZIPCompressor creates a Compressor based on GZIP. func NewGZIPCompressor() Compressor { return &gzipCompressor{ pool: sync.Pool{ New: func() interface{} { return gzip.NewWriter(ioutil.Discard) }, }, } } func (c *gzipCompressor) Do(w io.Writer, p []byte) error { z := c.pool.Get().(*gzip.Writer) defer c.pool.Put(z) z.Reset(w) if _, err := z.Write(p); err != nil { return err } return z.Close() } func (c *gzipCompressor) Type() string { return "gzip" } // Decompressor defines the interface gRPC uses to decompress a message. type Decompressor interface { // Do reads the data from r and uncompress them. Do(r io.Reader) ([]byte, error) // Type returns the compression algorithm the Decompressor uses. Type() string } type gzipDecompressor struct { pool sync.Pool } // NewGZIPDecompressor creates a Decompressor based on GZIP. func NewGZIPDecompressor() Decompressor { return &gzipDecompressor{} } func (d *gzipDecompressor) Do(r io.Reader) ([]byte, error) { var z *gzip.Reader switch maybeZ := d.pool.Get().(type) { case nil: newZ, err := gzip.NewReader(r) if err != nil { return nil, err } z = newZ case *gzip.Reader: z = maybeZ if err := z.Reset(r); err != nil { d.pool.Put(z) return nil, err } } defer func() { z.Close() d.pool.Put(z) }() return ioutil.ReadAll(z) } func (d *gzipDecompressor) Type() string { return "gzip" } // callInfo contains all related configuration and information about an RPC. type callInfo struct { failFast bool headerMD metadata.MD trailerMD metadata.MD peer *peer.Peer traceInfo traceInfo // in trace.go maxReceiveMessageSize *int maxSendMessageSize *int creds credentials.PerRPCCredentials } var defaultCallInfo = callInfo{failFast: true} // CallOption configures a Call before it starts or extracts information from // a Call after it completes. type CallOption interface { // before is called before the call is sent to any server. If before // returns a non-nil error, the RPC fails with that error. before(*callInfo) error // after is called after the call has completed. after cannot return an // error, so any failures should be reported via output parameters. after(*callInfo) } // EmptyCallOption does not alter the Call configuration. // It can be embedded in another structure to carry satellite data for use // by interceptors. type EmptyCallOption struct{} func (EmptyCallOption) before(*callInfo) error { return nil } func (EmptyCallOption) after(*callInfo) {} type beforeCall func(c *callInfo) error func (o beforeCall) before(c *callInfo) error { return o(c) } func (o beforeCall) after(c *callInfo) {} type afterCall func(c *callInfo) func (o afterCall) before(c *callInfo) error { return nil } func (o afterCall) after(c *callInfo) { o(c) } // Header returns a CallOptions that retrieves the header metadata // for a unary RPC. func Header(md *metadata.MD) CallOption { return afterCall(func(c *callInfo) { *md = c.headerMD }) } // Trailer returns a CallOptions that retrieves the trailer metadata // for a unary RPC. func Trailer(md *metadata.MD) CallOption { return afterCall(func(c *callInfo) { *md = c.trailerMD }) } // Peer returns a CallOption that retrieves peer information for a // unary RPC. func Peer(peer *peer.Peer) CallOption { return afterCall(func(c *callInfo) { if c.peer != nil { *peer = *c.peer } }) } // FailFast configures the action to take when an RPC is attempted on broken // connections or unreachable servers. 
If failfast is true, the RPC will fail // immediately. Otherwise, the RPC client will block the call until a // connection is available (or the call is canceled or times out) and will retry // the call if it fails due to a transient error. Please refer to // https://github.com/grpc/grpc/blob/master/doc/wait-for-ready.md. // Note: failFast is default to true. func FailFast(failFast bool) CallOption { return beforeCall(func(c *callInfo) error { c.failFast = failFast return nil }) } // MaxCallRecvMsgSize returns a CallOption which sets the maximum message size the client can receive. func MaxCallRecvMsgSize(s int) CallOption { return beforeCall(func(o *callInfo) error { o.maxReceiveMessageSize = &s return nil }) } // MaxCallSendMsgSize returns a CallOption which sets the maximum message size the client can send. func MaxCallSendMsgSize(s int) CallOption { return beforeCall(func(o *callInfo) error { o.maxSendMessageSize = &s return nil }) } // PerRPCCredentials returns a CallOption that sets credentials.PerRPCCredentials // for a call. func PerRPCCredentials(creds credentials.PerRPCCredentials) CallOption { return beforeCall(func(c *callInfo) error { c.creds = creds return nil }) } // The format of the payload: compressed or not? type payloadFormat uint8 const ( compressionNone payloadFormat = iota // no compression compressionMade ) // parser reads complete gRPC messages from the underlying reader. type parser struct { // r is the underlying reader. // See the comment on recvMsg for the permissible // error types. r io.Reader // The header of a gRPC message. Find more detail // at https://grpc.io/docs/guides/wire.html. header [5]byte } // recvMsg reads a complete gRPC message from the stream. // // It returns the message and its payload (compression/encoding) // format. The caller owns the returned msg memory. // // If there is an error, possible values are: // * io.EOF, when no messages remain // * io.ErrUnexpectedEOF // * of type transport.ConnectionError // * of type transport.StreamError // No other error values or types must be returned, which also means // that the underlying io.Reader must not return an incompatible // error. func (p *parser) recvMsg(maxReceiveMessageSize int) (pf payloadFormat, msg []byte, err error) { if _, err := p.r.Read(p.header[:]); err != nil { return 0, nil, err } pf = payloadFormat(p.header[0]) length := binary.BigEndian.Uint32(p.header[1:]) if length == 0 { return pf, nil, nil } if length > uint32(maxReceiveMessageSize) { return 0, nil, Errorf(codes.ResourceExhausted, "grpc: received message larger than max (%d vs. %d)", length, maxReceiveMessageSize) } // TODO(bradfitz,zhaoq): garbage. reuse buffer after proto decoding instead // of making it for each message: msg = make([]byte, int(length)) if _, err := p.r.Read(msg); err != nil { if err == io.EOF { err = io.ErrUnexpectedEOF } return 0, nil, err } return pf, msg, nil } // encode serializes msg and returns a buffer of message header and a buffer of msg. // If msg is nil, it generates the message header and an empty msg buffer. func encode(c Codec, msg interface{}, cp Compressor, cbuf *bytes.Buffer, outPayload *stats.OutPayload) ([]byte, []byte, error) { var b []byte const ( payloadLen = 1 sizeLen = 4 ) if msg != nil { var err error b, err = c.Marshal(msg) if err != nil { return nil, nil, Errorf(codes.Internal, "grpc: error while marshaling: %v", err.Error()) } if outPayload != nil { outPayload.Payload = msg // TODO truncate large payload. 
outPayload.Data = b outPayload.Length = len(b) } if cp != nil { if err := cp.Do(cbuf, b); err != nil { return nil, nil, Errorf(codes.Internal, "grpc: error while compressing: %v", err.Error()) } b = cbuf.Bytes() } } if uint(len(b)) > math.MaxUint32 { return nil, nil, Errorf(codes.ResourceExhausted, "grpc: message too large (%d bytes)", len(b)) } bufHeader := make([]byte, payloadLen+sizeLen) if cp == nil { bufHeader[0] = byte(compressionNone) } else { bufHeader[0] = byte(compressionMade) } // Write length of b into buf binary.BigEndian.PutUint32(bufHeader[payloadLen:], uint32(len(b))) if outPayload != nil { outPayload.WireLength = payloadLen + sizeLen + len(b) } return bufHeader, b, nil } func checkRecvPayload(pf payloadFormat, recvCompress string, dc Decompressor) error { switch pf { case compressionNone: case compressionMade: if dc == nil || recvCompress != dc.Type() { return Errorf(codes.Unimplemented, "grpc: Decompressor is not installed for grpc-encoding %q", recvCompress) } default: return Errorf(codes.Internal, "grpc: received unexpected payload format %d", pf) } return nil } func recv(p *parser, c Codec, s *transport.Stream, dc Decompressor, m interface{}, maxReceiveMessageSize int, inPayload *stats.InPayload) error { pf, d, err := p.recvMsg(maxReceiveMessageSize) if err != nil { return err } if inPayload != nil { inPayload.WireLength = len(d) } if err := checkRecvPayload(pf, s.RecvCompress(), dc); err != nil { return err } if pf == compressionMade { d, err = dc.Do(bytes.NewReader(d)) if err != nil { return Errorf(codes.Internal, "grpc: failed to decompress the received message %v", err) } } if len(d) > maxReceiveMessageSize { // TODO: Revisit the error code. Currently keep it consistent with java // implementation. return Errorf(codes.ResourceExhausted, "grpc: received message larger than max (%d vs. %d)", len(d), maxReceiveMessageSize) } if err := c.Unmarshal(d, m); err != nil { return Errorf(codes.Internal, "grpc: failed to unmarshal the received message %v", err) } if inPayload != nil { inPayload.RecvTime = time.Now() inPayload.Payload = m // TODO truncate large payload. inPayload.Data = d inPayload.Length = len(d) } return nil } type rpcInfo struct { bytesSent bool bytesReceived bool } type rpcInfoContextKey struct{} func newContextWithRPCInfo(ctx context.Context) context.Context { return context.WithValue(ctx, rpcInfoContextKey{}, &rpcInfo{}) } func rpcInfoFromContext(ctx context.Context) (s *rpcInfo, ok bool) { s, ok = ctx.Value(rpcInfoContextKey{}).(*rpcInfo) return } func updateRPCInfoInContext(ctx context.Context, s rpcInfo) { if ss, ok := rpcInfoFromContext(ctx); ok { *ss = s } return } // Code returns the error code for err if it was produced by the rpc system. // Otherwise, it returns codes.Unknown. // // Deprecated; use status.FromError and Code method instead. func Code(err error) codes.Code { if s, ok := status.FromError(err); ok { return s.Code() } return codes.Unknown } // ErrorDesc returns the error description of err if it was produced by the rpc system. // Otherwise, it returns err.Error() or empty string when err is nil. // // Deprecated; use status.FromError and Message method instead. func ErrorDesc(err error) string { if s, ok := status.FromError(err); ok { return s.Message() } return err.Error() } // Errorf returns an error containing an error code and a description; // Errorf returns nil if c is OK. // // Deprecated; use status.Errorf instead. func Errorf(c codes.Code, format string, a ...interface{}) error { return status.Errorf(c, format, a...) 
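// A minimal caller-side sketch of the status-based replacements suggested by
// the deprecation notes on Code, ErrorDesc and Errorf above (err is assumed to
// be an error returned from an RPC made with this package):
//
//	if s, ok := status.FromError(err); ok {
//		switch s.Code() {
//		case codes.NotFound:
//			log.Printf("not found: %s", s.Message())
//		}
//	}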
} // MethodConfig defines the configuration recommended by the service providers for a // particular method. // This is EXPERIMENTAL and subject to change. type MethodConfig struct { // WaitForReady indicates whether RPCs sent to this method should wait until // the connection is ready by default (!failfast). The value specified via the // gRPC client API will override the value set here. WaitForReady *bool // Timeout is the default timeout for RPCs sent to this method. The actual // deadline used will be the minimum of the value specified here and the value // set by the application via the gRPC client API. If either one is not set, // then the other will be used. If neither is set, then the RPC has no deadline. Timeout *time.Duration // MaxReqSize is the maximum allowed payload size for an individual request in a // stream (client->server) in bytes. The size which is measured is the serialized // payload after per-message compression (but before stream compression) in bytes. // The actual value used is the minumum of the value specified here and the value set // by the application via the gRPC client API. If either one is not set, then the other // will be used. If neither is set, then the built-in default is used. MaxReqSize *int // MaxRespSize is the maximum allowed payload size for an individual response in a // stream (server->client) in bytes. MaxRespSize *int } // ServiceConfig is provided by the service provider and contains parameters for how // clients that connect to the service should behave. // This is EXPERIMENTAL and subject to change. type ServiceConfig struct { // LB is the load balancer the service providers recommends. The balancer specified // via grpc.WithBalancer will override this. LB Balancer // Methods contains a map for the methods in this service. // If there is an exact match for a method (i.e. /service/method) in the map, use the corresponding MethodConfig. // If there's no exact match, look for the default config for the service (/service/) and use the corresponding MethodConfig if it exists. // Otherwise, the method has no MethodConfig to use. Methods map[string]MethodConfig } func min(a, b *int) *int { if *a < *b { return a } return b } func getMaxSize(mcMax, doptMax *int, defaultVal int) *int { if mcMax == nil && doptMax == nil { return &defaultVal } if mcMax != nil && doptMax != nil { return min(mcMax, doptMax) } if mcMax != nil { return mcMax } return doptMax } // SupportPackageIsVersion3 is referenced from generated protocol buffer files. // The latest support package version is 4. // SupportPackageIsVersion3 is kept for compability. It will be removed in the // next support package version update. const SupportPackageIsVersion3 = true // SupportPackageIsVersion4 is referenced from generated protocol buffer files // to assert that that code is compatible with this version of the grpc package. // // This constant may be renamed in the future if a change in the generated code // requires a synchronised update of grpc-go and protoc-gen-go. This constant // should not be referenced from any other code. const SupportPackageIsVersion4 = true // Version is the current grpc version. const Version = "1.6.0" const grpcUA = "grpc-go/" + Version golang-google-grpc-1.6.0/rpc_util_test.go000066400000000000000000000145301315416461300204060ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "bytes" "io" "math" "reflect" "testing" "github.com/golang/protobuf/proto" "google.golang.org/grpc/codes" "google.golang.org/grpc/status" perfpb "google.golang.org/grpc/test/codec_perf" "google.golang.org/grpc/transport" ) type fullReader struct { reader io.Reader } func (f fullReader) Read(p []byte) (int, error) { return io.ReadFull(f.reader, p) } var _ CallOption = EmptyCallOption{} // ensure EmptyCallOption implements the interface func TestSimpleParsing(t *testing.T) { bigMsg := bytes.Repeat([]byte{'x'}, 1<<24) for _, test := range []struct { // input p []byte // outputs err error b []byte pt payloadFormat }{ {nil, io.EOF, nil, compressionNone}, {[]byte{0, 0, 0, 0, 0}, nil, nil, compressionNone}, {[]byte{0, 0, 0, 0, 1, 'a'}, nil, []byte{'a'}, compressionNone}, {[]byte{1, 0}, io.ErrUnexpectedEOF, nil, compressionNone}, {[]byte{0, 0, 0, 0, 10, 'a'}, io.ErrUnexpectedEOF, nil, compressionNone}, // Check that messages with length >= 2^24 are parsed. {append([]byte{0, 1, 0, 0, 0}, bigMsg...), nil, bigMsg, compressionNone}, } { buf := fullReader{bytes.NewReader(test.p)} parser := &parser{r: buf} pt, b, err := parser.recvMsg(math.MaxInt32) if err != test.err || !bytes.Equal(b, test.b) || pt != test.pt { t.Fatalf("parser{%v}.recvMsg(_) = %v, %v, %v\nwant %v, %v, %v", test.p, pt, b, err, test.pt, test.b, test.err) } } } func TestMultipleParsing(t *testing.T) { // Set a byte stream consists of 3 messages with their headers. 
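// Each message below is framed the way parser.recvMsg expects: a 1-byte
// payload-format flag (0, i.e. compressionNone) followed by a 4-byte
// big-endian length and then the payload, so {0, 0, 0, 0, 1, 'a'} is the
// uncompressed 1-byte message "a".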
p := []byte{0, 0, 0, 0, 1, 'a', 0, 0, 0, 0, 2, 'b', 'c', 0, 0, 0, 0, 1, 'd'} b := fullReader{bytes.NewReader(p)} parser := &parser{r: b} wantRecvs := []struct { pt payloadFormat data []byte }{ {compressionNone, []byte("a")}, {compressionNone, []byte("bc")}, {compressionNone, []byte("d")}, } for i, want := range wantRecvs { pt, data, err := parser.recvMsg(math.MaxInt32) if err != nil || pt != want.pt || !reflect.DeepEqual(data, want.data) { t.Fatalf("after %d calls, parser{%v}.recvMsg(_) = %v, %v, %v\nwant %v, %v, ", i, p, pt, data, err, want.pt, want.data) } } pt, data, err := parser.recvMsg(math.MaxInt32) if err != io.EOF { t.Fatalf("after %d recvMsgs calls, parser{%v}.recvMsg(_) = %v, %v, %v\nwant _, _, %v", len(wantRecvs), p, pt, data, err, io.EOF) } } func TestEncode(t *testing.T) { for _, test := range []struct { // input msg proto.Message cp Compressor // outputs hdr []byte data []byte err error }{ {nil, nil, []byte{0, 0, 0, 0, 0}, []byte{}, nil}, } { hdr, data, err := encode(protoCodec{}, test.msg, nil, nil, nil) if err != test.err || !bytes.Equal(hdr, test.hdr) || !bytes.Equal(data, test.data) { t.Fatalf("encode(_, _, %v, _) = %v, %v, %v\nwant %v, %v, %v", test.cp, hdr, data, err, test.hdr, test.data, test.err) } } } func TestCompress(t *testing.T) { for _, test := range []struct { // input data []byte cp Compressor dc Decompressor // outputs err error }{ {make([]byte, 1024), NewGZIPCompressor(), NewGZIPDecompressor(), nil}, } { b := new(bytes.Buffer) if err := test.cp.Do(b, test.data); err != test.err { t.Fatalf("Compressor.Do(_, %v) = %v, want %v", test.data, err, test.err) } if b.Len() >= len(test.data) { t.Fatalf("The compressor fails to compress data.") } if p, err := test.dc.Do(b); err != nil || !bytes.Equal(test.data, p) { t.Fatalf("Decompressor.Do(%v) = %v, %v, want %v, ", b, p, err, test.data) } } } func TestToRPCErr(t *testing.T) { for _, test := range []struct { // input errIn error // outputs errOut error }{ {transport.StreamError{Code: codes.Unknown, Desc: ""}, status.Error(codes.Unknown, "")}, {transport.ErrConnClosing, status.Error(codes.Unavailable, transport.ErrConnClosing.Desc)}, } { err := toRPCErr(test.errIn) if _, ok := status.FromError(err); !ok { t.Fatalf("toRPCErr{%v} returned type %T, want %T", test.errIn, err, status.Error(codes.Unknown, "")) } if !reflect.DeepEqual(err, test.errOut) { t.Fatalf("toRPCErr{%v} = %v \nwant %v", test.errIn, err, test.errOut) } } } // bmEncode benchmarks encoding a Protocol Buffer message containing mSize // bytes. func bmEncode(b *testing.B, mSize int) { msg := &perfpb.Buffer{Body: make([]byte, mSize)} encodeHdr, encodeData, _ := encode(protoCodec{}, msg, nil, nil, nil) encodedSz := int64(len(encodeHdr) + len(encodeData)) b.ReportAllocs() b.ResetTimer() for i := 0; i < b.N; i++ { encode(protoCodec{}, msg, nil, nil, nil) } b.SetBytes(encodedSz) } func BenchmarkEncode1B(b *testing.B) { bmEncode(b, 1) } func BenchmarkEncode1KiB(b *testing.B) { bmEncode(b, 1024) } func BenchmarkEncode8KiB(b *testing.B) { bmEncode(b, 8*1024) } func BenchmarkEncode64KiB(b *testing.B) { bmEncode(b, 64*1024) } func BenchmarkEncode512KiB(b *testing.B) { bmEncode(b, 512*1024) } func BenchmarkEncode1MiB(b *testing.B) { bmEncode(b, 1024*1024) } // bmCompressor benchmarks a compressor of a Protocol Buffer message containing // mSize bytes. 
func bmCompressor(b *testing.B, mSize int, cp Compressor) { payload := make([]byte, mSize) cBuf := bytes.NewBuffer(make([]byte, mSize)) b.ReportAllocs() b.ResetTimer() for i := 0; i < b.N; i++ { cp.Do(cBuf, payload) cBuf.Reset() } } func BenchmarkGZIPCompressor1B(b *testing.B) { bmCompressor(b, 1, NewGZIPCompressor()) } func BenchmarkGZIPCompressor1KiB(b *testing.B) { bmCompressor(b, 1024, NewGZIPCompressor()) } func BenchmarkGZIPCompressor8KiB(b *testing.B) { bmCompressor(b, 8*1024, NewGZIPCompressor()) } func BenchmarkGZIPCompressor64KiB(b *testing.B) { bmCompressor(b, 64*1024, NewGZIPCompressor()) } func BenchmarkGZIPCompressor512KiB(b *testing.B) { bmCompressor(b, 512*1024, NewGZIPCompressor()) } func BenchmarkGZIPCompressor1MiB(b *testing.B) { bmCompressor(b, 1024*1024, NewGZIPCompressor()) } golang-google-grpc-1.6.0/server.go000066400000000000000000001023621315416461300170350ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "bytes" "errors" "fmt" "io" "math" "net" "net/http" "reflect" "runtime" "strings" "sync" "time" "golang.org/x/net/context" "golang.org/x/net/http2" "golang.org/x/net/trace" "google.golang.org/grpc/codes" "google.golang.org/grpc/credentials" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/internal" "google.golang.org/grpc/keepalive" "google.golang.org/grpc/metadata" "google.golang.org/grpc/stats" "google.golang.org/grpc/status" "google.golang.org/grpc/tap" "google.golang.org/grpc/transport" ) const ( defaultServerMaxReceiveMessageSize = 1024 * 1024 * 4 defaultServerMaxSendMessageSize = math.MaxInt32 ) type methodHandler func(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor UnaryServerInterceptor) (interface{}, error) // MethodDesc represents an RPC service's method specification. type MethodDesc struct { MethodName string Handler methodHandler } // ServiceDesc represents an RPC service's specification. type ServiceDesc struct { ServiceName string // The pointer to the service interface. Used to check whether the user // provided implementation satisfies the interface requirements. HandlerType interface{} Methods []MethodDesc Streams []StreamDesc Metadata interface{} } // service consists of the information of the server serving this service and // the methods in this service. type service struct { server interface{} // the server for service methods md map[string]*MethodDesc sd map[string]*StreamDesc mdata interface{} } // Server is a gRPC server to serve RPC requests. type Server struct { opts options mu sync.Mutex // guards following lis map[net.Listener]bool conns map[io.Closer]bool serve bool drain bool ctx context.Context cancel context.CancelFunc // A CondVar to let GracefulStop() blocks until all the pending RPCs are finished // and all the transport goes away. 
cv *sync.Cond m map[string]*service // service name -> service info events trace.EventLog } type options struct { creds credentials.TransportCredentials codec Codec cp Compressor dc Decompressor unaryInt UnaryServerInterceptor streamInt StreamServerInterceptor inTapHandle tap.ServerInHandle statsHandler stats.Handler maxConcurrentStreams uint32 maxReceiveMessageSize int maxSendMessageSize int useHandlerImpl bool // use http.Handler-based server unknownStreamDesc *StreamDesc keepaliveParams keepalive.ServerParameters keepalivePolicy keepalive.EnforcementPolicy initialWindowSize int32 initialConnWindowSize int32 } var defaultServerOptions = options{ maxReceiveMessageSize: defaultServerMaxReceiveMessageSize, maxSendMessageSize: defaultServerMaxSendMessageSize, } // A ServerOption sets options such as credentials, codec and keepalive parameters, etc. type ServerOption func(*options) // InitialWindowSize returns a ServerOption that sets window size for stream. // The lower bound for window size is 64K and any value smaller than that will be ignored. func InitialWindowSize(s int32) ServerOption { return func(o *options) { o.initialWindowSize = s } } // InitialConnWindowSize returns a ServerOption that sets window size for a connection. // The lower bound for window size is 64K and any value smaller than that will be ignored. func InitialConnWindowSize(s int32) ServerOption { return func(o *options) { o.initialConnWindowSize = s } } // KeepaliveParams returns a ServerOption that sets keepalive and max-age parameters for the server. func KeepaliveParams(kp keepalive.ServerParameters) ServerOption { return func(o *options) { o.keepaliveParams = kp } } // KeepaliveEnforcementPolicy returns a ServerOption that sets keepalive enforcement policy for the server. func KeepaliveEnforcementPolicy(kep keepalive.EnforcementPolicy) ServerOption { return func(o *options) { o.keepalivePolicy = kep } } // CustomCodec returns a ServerOption that sets a codec for message marshaling and unmarshaling. func CustomCodec(codec Codec) ServerOption { return func(o *options) { o.codec = codec } } // RPCCompressor returns a ServerOption that sets a compressor for outbound messages. func RPCCompressor(cp Compressor) ServerOption { return func(o *options) { o.cp = cp } } // RPCDecompressor returns a ServerOption that sets a decompressor for inbound messages. func RPCDecompressor(dc Decompressor) ServerOption { return func(o *options) { o.dc = dc } } // MaxMsgSize returns a ServerOption to set the max message size in bytes the server can receive. // If this is not set, gRPC uses the default limit. Deprecated: use MaxRecvMsgSize instead. func MaxMsgSize(m int) ServerOption { return MaxRecvMsgSize(m) } // MaxRecvMsgSize returns a ServerOption to set the max message size in bytes the server can receive. // If this is not set, gRPC uses the default 4MB. func MaxRecvMsgSize(m int) ServerOption { return func(o *options) { o.maxReceiveMessageSize = m } } // MaxSendMsgSize returns a ServerOption to set the max message size in bytes the server can send. // If this is not set, gRPC uses the default 4MB. func MaxSendMsgSize(m int) ServerOption { return func(o *options) { o.maxSendMessageSize = m } } // MaxConcurrentStreams returns a ServerOption that will apply a limit on the number // of concurrent streams to each ServerTransport. func MaxConcurrentStreams(n uint32) ServerOption { return func(o *options) { o.maxConcurrentStreams = n } } // Creds returns a ServerOption that sets credentials for server connections. 
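//
// A minimal sketch combining this option with others defined in this file
// (credentials.NewServerTLSFromFile is assumed to be available from the
// credentials package, and the certificate/key paths are placeholders):
//
//	creds, err := credentials.NewServerTLSFromFile("server.crt", "server.key")
//	if err != nil {
//		log.Fatalf("failed to load TLS credentials: %v", err)
//	}
//	s := grpc.NewServer(
//		grpc.Creds(creds),
//		grpc.MaxRecvMsgSize(8*1024*1024),
//		grpc.RPCCompressor(grpc.NewGZIPCompressor()),
//		grpc.RPCDecompressor(grpc.NewGZIPDecompressor()),
//	)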
func Creds(c credentials.TransportCredentials) ServerOption { return func(o *options) { o.creds = c } } // UnaryInterceptor returns a ServerOption that sets the UnaryServerInterceptor for the // server. Only one unary interceptor can be installed. The construction of multiple // interceptors (e.g., chaining) can be implemented at the caller. func UnaryInterceptor(i UnaryServerInterceptor) ServerOption { return func(o *options) { if o.unaryInt != nil { panic("The unary server interceptor was already set and may not be reset.") } o.unaryInt = i } } // StreamInterceptor returns a ServerOption that sets the StreamServerInterceptor for the // server. Only one stream interceptor can be installed. func StreamInterceptor(i StreamServerInterceptor) ServerOption { return func(o *options) { if o.streamInt != nil { panic("The stream server interceptor was already set and may not be reset.") } o.streamInt = i } } // InTapHandle returns a ServerOption that sets the tap handle for all the server // transport to be created. Only one can be installed. func InTapHandle(h tap.ServerInHandle) ServerOption { return func(o *options) { if o.inTapHandle != nil { panic("The tap handle was already set and may not be reset.") } o.inTapHandle = h } } // StatsHandler returns a ServerOption that sets the stats handler for the server. func StatsHandler(h stats.Handler) ServerOption { return func(o *options) { o.statsHandler = h } } // UnknownServiceHandler returns a ServerOption that allows for adding a custom // unknown service handler. The provided method is a bidi-streaming RPC service // handler that will be invoked instead of returning the "unimplemented" gRPC // error whenever a request is received for an unregistered service or method. // The handling function has full access to the Context of the request and the // stream, and the invocation passes through interceptors. func UnknownServiceHandler(streamHandler StreamHandler) ServerOption { return func(o *options) { o.unknownStreamDesc = &StreamDesc{ StreamName: "unknown_service_handler", Handler: streamHandler, // We need to assume that the users of the streamHandler will want to use both. ClientStreams: true, ServerStreams: true, } } } // NewServer creates a gRPC server which has no service registered and has not // started to accept requests yet. func NewServer(opt ...ServerOption) *Server { opts := defaultServerOptions for _, o := range opt { o(&opts) } if opts.codec == nil { // Set the default codec. opts.codec = protoCodec{} } s := &Server{ lis: make(map[net.Listener]bool), opts: opts, conns: make(map[io.Closer]bool), m: make(map[string]*service), } s.cv = sync.NewCond(&s.mu) s.ctx, s.cancel = context.WithCancel(context.Background()) if EnableTracing { _, file, line, _ := runtime.Caller(1) s.events = trace.NewEventLog("grpc.Server", fmt.Sprintf("%s:%d", file, line)) } return s } // printf records an event in s's event log, unless s has been stopped. // REQUIRES s.mu is held. func (s *Server) printf(format string, a ...interface{}) { if s.events != nil { s.events.Printf(format, a...) } } // errorf records an error in s's event log, unless s has been stopped. // REQUIRES s.mu is held. func (s *Server) errorf(format string, a ...interface{}) { if s.events != nil { s.events.Errorf(format, a...) } } // RegisterService registers a service and its implementation to the gRPC // server. It is called from the IDL generated code. This must be called before // invoking Serve. 
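// Illustrative sketch (not part of the original source), tied to the
// UnaryInterceptor option defined above: a minimal interceptor that logs each
// method name and latency. loggingInterceptor is a hypothetical name.
//
//	func loggingInterceptor(ctx context.Context, req interface{},
//		info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
//		start := time.Now()
//		resp, err := handler(ctx, req)
//		grpclog.Infof("method=%s duration=%s err=%v", info.FullMethod, time.Since(start), err)
//		return resp, err
//	}
//
//	// Installed once at construction time:
//	// s := grpc.NewServer(grpc.UnaryInterceptor(loggingInterceptor))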
func (s *Server) RegisterService(sd *ServiceDesc, ss interface{}) { ht := reflect.TypeOf(sd.HandlerType).Elem() st := reflect.TypeOf(ss) if !st.Implements(ht) { grpclog.Fatalf("grpc: Server.RegisterService found the handler of type %v that does not satisfy %v", st, ht) } s.register(sd, ss) } func (s *Server) register(sd *ServiceDesc, ss interface{}) { s.mu.Lock() defer s.mu.Unlock() s.printf("RegisterService(%q)", sd.ServiceName) if s.serve { grpclog.Fatalf("grpc: Server.RegisterService after Server.Serve for %q", sd.ServiceName) } if _, ok := s.m[sd.ServiceName]; ok { grpclog.Fatalf("grpc: Server.RegisterService found duplicate service registration for %q", sd.ServiceName) } srv := &service{ server: ss, md: make(map[string]*MethodDesc), sd: make(map[string]*StreamDesc), mdata: sd.Metadata, } for i := range sd.Methods { d := &sd.Methods[i] srv.md[d.MethodName] = d } for i := range sd.Streams { d := &sd.Streams[i] srv.sd[d.StreamName] = d } s.m[sd.ServiceName] = srv } // MethodInfo contains the information of an RPC including its method name and type. type MethodInfo struct { // Name is the method name only, without the service name or package name. Name string // IsClientStream indicates whether the RPC is a client streaming RPC. IsClientStream bool // IsServerStream indicates whether the RPC is a server streaming RPC. IsServerStream bool } // ServiceInfo contains unary RPC method info, streaming RPC method info and metadata for a service. type ServiceInfo struct { Methods []MethodInfo // Metadata is the metadata specified in ServiceDesc when registering service. Metadata interface{} } // GetServiceInfo returns a map from service names to ServiceInfo. // Service names include the package names, in the form of .. func (s *Server) GetServiceInfo() map[string]ServiceInfo { ret := make(map[string]ServiceInfo) for n, srv := range s.m { methods := make([]MethodInfo, 0, len(srv.md)+len(srv.sd)) for m := range srv.md { methods = append(methods, MethodInfo{ Name: m, IsClientStream: false, IsServerStream: false, }) } for m, d := range srv.sd { methods = append(methods, MethodInfo{ Name: m, IsClientStream: d.ClientStreams, IsServerStream: d.ServerStreams, }) } ret[n] = ServiceInfo{ Methods: methods, Metadata: srv.mdata, } } return ret } var ( // ErrServerStopped indicates that the operation is now illegal because of // the server being stopped. ErrServerStopped = errors.New("grpc: the server has been stopped") ) func (s *Server) useTransportAuthenticator(rawConn net.Conn) (net.Conn, credentials.AuthInfo, error) { if s.opts.creds == nil { return rawConn, nil, nil } return s.opts.creds.ServerHandshake(rawConn) } // Serve accepts incoming connections on the listener lis, creating a new // ServerTransport and service goroutine for each. The service goroutines // read gRPC requests and then call the registered handlers to reply to them. // Serve returns when lis.Accept fails with fatal errors. lis will be closed when // this method returns. // Serve always returns non-nil error. 
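// Illustrative sketch (not part of the original source): the
// register-then-serve sequence described above, as seen from an importing
// package. pb.RegisterGreeterServer and greeterServer stand in for code
// generated from the caller's own .proto file.
//
//	lis, err := net.Listen("tcp", ":50051")
//	if err != nil {
//		log.Fatalf("failed to listen: %v", err)
//	}
//	s := grpc.NewServer()
//	pb.RegisterGreeterServer(s, &greeterServer{}) // must run before Serve
//	if err := s.Serve(lis); err != nil {
//		log.Fatalf("Serve exited: %v", err)
//	}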
func (s *Server) Serve(lis net.Listener) error { s.mu.Lock() s.printf("serving") s.serve = true if s.lis == nil { s.mu.Unlock() lis.Close() return ErrServerStopped } s.lis[lis] = true s.mu.Unlock() defer func() { s.mu.Lock() if s.lis != nil && s.lis[lis] { lis.Close() delete(s.lis, lis) } s.mu.Unlock() }() var tempDelay time.Duration // how long to sleep on accept failure for { rawConn, err := lis.Accept() if err != nil { if ne, ok := err.(interface { Temporary() bool }); ok && ne.Temporary() { if tempDelay == 0 { tempDelay = 5 * time.Millisecond } else { tempDelay *= 2 } if max := 1 * time.Second; tempDelay > max { tempDelay = max } s.mu.Lock() s.printf("Accept error: %v; retrying in %v", err, tempDelay) s.mu.Unlock() timer := time.NewTimer(tempDelay) select { case <-timer.C: case <-s.ctx.Done(): } timer.Stop() continue } s.mu.Lock() s.printf("done serving; Accept = %v", err) s.mu.Unlock() return err } tempDelay = 0 // Start a new goroutine to deal with rawConn // so we don't stall this Accept loop goroutine. go s.handleRawConn(rawConn) } } // handleRawConn is run in its own goroutine and handles a just-accepted // connection that has not had any I/O performed on it yet. func (s *Server) handleRawConn(rawConn net.Conn) { conn, authInfo, err := s.useTransportAuthenticator(rawConn) if err != nil { s.mu.Lock() s.errorf("ServerHandshake(%q) failed: %v", rawConn.RemoteAddr(), err) s.mu.Unlock() grpclog.Warningf("grpc: Server.Serve failed to complete security handshake from %q: %v", rawConn.RemoteAddr(), err) // If serverHandShake returns ErrConnDispatched, keep rawConn open. if err != credentials.ErrConnDispatched { rawConn.Close() } return } s.mu.Lock() if s.conns == nil { s.mu.Unlock() conn.Close() return } s.mu.Unlock() if s.opts.useHandlerImpl { s.serveUsingHandler(conn) } else { s.serveHTTP2Transport(conn, authInfo) } } // serveHTTP2Transport sets up a http/2 transport (using the // gRPC http2 server transport in transport/http2_server.go) and // serves streams on it. // This is run in its own goroutine (it does network I/O in // transport.NewServerTransport). func (s *Server) serveHTTP2Transport(c net.Conn, authInfo credentials.AuthInfo) { config := &transport.ServerConfig{ MaxStreams: s.opts.maxConcurrentStreams, AuthInfo: authInfo, InTapHandle: s.opts.inTapHandle, StatsHandler: s.opts.statsHandler, KeepaliveParams: s.opts.keepaliveParams, KeepalivePolicy: s.opts.keepalivePolicy, InitialWindowSize: s.opts.initialWindowSize, InitialConnWindowSize: s.opts.initialConnWindowSize, } st, err := transport.NewServerTransport("http2", c, config) if err != nil { s.mu.Lock() s.errorf("NewServerTransport(%q) failed: %v", c.RemoteAddr(), err) s.mu.Unlock() c.Close() grpclog.Warningln("grpc: Server.Serve failed to create ServerTransport: ", err) return } if !s.addConn(st) { st.Close() return } s.serveStreams(st) } func (s *Server) serveStreams(st transport.ServerTransport) { defer s.removeConn(st) defer st.Close() var wg sync.WaitGroup st.HandleStreams(func(stream *transport.Stream) { wg.Add(1) go func() { defer wg.Done() s.handleStream(st, stream, s.traceInfo(st, stream)) }() }, func(ctx context.Context, method string) context.Context { if !EnableTracing { return ctx } tr := trace.New("grpc.Recv."+methodFamily(method), method) return trace.NewContext(ctx, tr) }) wg.Wait() } var _ http.Handler = (*Server)(nil) // serveUsingHandler is called from handleRawConn when s is configured // to handle requests via the http.Handler interface. It sets up a // net/http.Server to handle the just-accepted conn. 
The http.Server // is configured to route all incoming requests (all HTTP/2 streams) // to ServeHTTP, which creates a new ServerTransport for each stream. // serveUsingHandler blocks until conn closes. // // This codepath is only used when Server.TestingUseHandlerImpl has // been configured. This lets the end2end tests exercise the ServeHTTP // method as one of the environment types. // // conn is the *tls.Conn that's already been authenticated. func (s *Server) serveUsingHandler(conn net.Conn) { if !s.addConn(conn) { conn.Close() return } defer s.removeConn(conn) h2s := &http2.Server{ MaxConcurrentStreams: s.opts.maxConcurrentStreams, } h2s.ServeConn(conn, &http2.ServeConnOpts{ Handler: s, }) } // ServeHTTP implements the Go standard library's http.Handler // interface by responding to the gRPC request r, by looking up // the requested gRPC method in the gRPC server s. // // The provided HTTP request must have arrived on an HTTP/2 // connection. When using the Go standard library's server, // practically this means that the Request must also have arrived // over TLS. // // To share one port (such as 443 for https) between gRPC and an // existing http.Handler, use a root http.Handler such as: // // if r.ProtoMajor == 2 && strings.HasPrefix( // r.Header.Get("Content-Type"), "application/grpc") { // grpcServer.ServeHTTP(w, r) // } else { // yourMux.ServeHTTP(w, r) // } // // Note that ServeHTTP uses Go's HTTP/2 server implementation which is totally // separate from grpc-go's HTTP/2 server. Performance and features may vary // between the two paths. ServeHTTP does not support some gRPC features // available through grpc-go's HTTP/2 server, and it is currently EXPERIMENTAL // and subject to change. func (s *Server) ServeHTTP(w http.ResponseWriter, r *http.Request) { st, err := transport.NewServerHandlerTransport(w, r) if err != nil { http.Error(w, err.Error(), http.StatusInternalServerError) return } if !s.addConn(st) { st.Close() return } defer s.removeConn(st) s.serveStreams(st) } // traceInfo returns a traceInfo and associates it with stream, if tracing is enabled. // If tracing is not enabled, it returns nil. func (s *Server) traceInfo(st transport.ServerTransport, stream *transport.Stream) (trInfo *traceInfo) { tr, ok := trace.FromContext(stream.Context()) if !ok { return nil } trInfo = &traceInfo{ tr: tr, } trInfo.firstLine.client = false trInfo.firstLine.remoteAddr = st.RemoteAddr() if dl, ok := stream.Context().Deadline(); ok { trInfo.firstLine.deadline = dl.Sub(time.Now()) } return trInfo } func (s *Server) addConn(c io.Closer) bool { s.mu.Lock() defer s.mu.Unlock() if s.conns == nil || s.drain { return false } s.conns[c] = true return true } func (s *Server) removeConn(c io.Closer) { s.mu.Lock() defer s.mu.Unlock() if s.conns != nil { delete(s.conns, c) s.cv.Broadcast() } } func (s *Server) sendResponse(t transport.ServerTransport, stream *transport.Stream, msg interface{}, cp Compressor, opts *transport.Options) error { var ( cbuf *bytes.Buffer outPayload *stats.OutPayload ) if cp != nil { cbuf = new(bytes.Buffer) } if s.opts.statsHandler != nil { outPayload = &stats.OutPayload{} } hdr, data, err := encode(s.opts.codec, msg, cp, cbuf, outPayload) if err != nil { grpclog.Errorln("grpc: server failed to encode response: ", err) return err } if len(data) > s.opts.maxSendMessageSize { return status.Errorf(codes.ResourceExhausted, "grpc: trying to send message larger than max (%d vs. 
%d)", len(data), s.opts.maxSendMessageSize) } err = t.Write(stream, hdr, data, opts) if err == nil && outPayload != nil { outPayload.SentTime = time.Now() s.opts.statsHandler.HandleRPC(stream.Context(), outPayload) } return err } func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport.Stream, srv *service, md *MethodDesc, trInfo *traceInfo) (err error) { sh := s.opts.statsHandler if sh != nil { begin := &stats.Begin{ BeginTime: time.Now(), } sh.HandleRPC(stream.Context(), begin) defer func() { end := &stats.End{ EndTime: time.Now(), } if err != nil && err != io.EOF { end.Error = toRPCErr(err) } sh.HandleRPC(stream.Context(), end) }() } if trInfo != nil { defer trInfo.tr.Finish() trInfo.firstLine.client = false trInfo.tr.LazyLog(&trInfo.firstLine, false) defer func() { if err != nil && err != io.EOF { trInfo.tr.LazyLog(&fmtStringer{"%v", []interface{}{err}}, true) trInfo.tr.SetError() } }() } if s.opts.cp != nil { // NOTE: this needs to be ahead of all handling, https://github.com/grpc/grpc-go/issues/686. stream.SetSendCompress(s.opts.cp.Type()) } p := &parser{r: stream} pf, req, err := p.recvMsg(s.opts.maxReceiveMessageSize) if err == io.EOF { // The entire stream is done (for unary RPC only). return err } if err == io.ErrUnexpectedEOF { err = Errorf(codes.Internal, io.ErrUnexpectedEOF.Error()) } if err != nil { if st, ok := status.FromError(err); ok { if e := t.WriteStatus(stream, st); e != nil { grpclog.Warningf("grpc: Server.processUnaryRPC failed to write status %v", e) } } else { switch st := err.(type) { case transport.ConnectionError: // Nothing to do here. case transport.StreamError: if e := t.WriteStatus(stream, status.New(st.Code, st.Desc)); e != nil { grpclog.Warningf("grpc: Server.processUnaryRPC failed to write status %v", e) } default: panic(fmt.Sprintf("grpc: Unexpected error (%T) from recvMsg: %v", st, st)) } } return err } if err := checkRecvPayload(pf, stream.RecvCompress(), s.opts.dc); err != nil { if st, ok := status.FromError(err); ok { if e := t.WriteStatus(stream, st); e != nil { grpclog.Warningf("grpc: Server.processUnaryRPC failed to write status %v", e) } return err } if e := t.WriteStatus(stream, status.New(codes.Internal, err.Error())); e != nil { grpclog.Warningf("grpc: Server.processUnaryRPC failed to write status %v", e) } // TODO checkRecvPayload always return RPC error. Add a return here if necessary. } var inPayload *stats.InPayload if sh != nil { inPayload = &stats.InPayload{ RecvTime: time.Now(), } } df := func(v interface{}) error { if inPayload != nil { inPayload.WireLength = len(req) } if pf == compressionMade { var err error req, err = s.opts.dc.Do(bytes.NewReader(req)) if err != nil { return Errorf(codes.Internal, err.Error()) } } if len(req) > s.opts.maxReceiveMessageSize { // TODO: Revisit the error code. Currently keep it consistent with // java implementation. return status.Errorf(codes.ResourceExhausted, "grpc: received message larger than max (%d vs. 
%d)", len(req), s.opts.maxReceiveMessageSize) } if err := s.opts.codec.Unmarshal(req, v); err != nil { return status.Errorf(codes.Internal, "grpc: error unmarshalling request: %v", err) } if inPayload != nil { inPayload.Payload = v inPayload.Data = req inPayload.Length = len(req) sh.HandleRPC(stream.Context(), inPayload) } if trInfo != nil { trInfo.tr.LazyLog(&payload{sent: false, msg: v}, true) } return nil } reply, appErr := md.Handler(srv.server, stream.Context(), df, s.opts.unaryInt) if appErr != nil { appStatus, ok := status.FromError(appErr) if !ok { // Convert appErr if it is not a grpc status error. appErr = status.Error(convertCode(appErr), appErr.Error()) appStatus, _ = status.FromError(appErr) } if trInfo != nil { trInfo.tr.LazyLog(stringer(appStatus.Message()), true) trInfo.tr.SetError() } if e := t.WriteStatus(stream, appStatus); e != nil { grpclog.Warningf("grpc: Server.processUnaryRPC failed to write status: %v", e) } return appErr } if trInfo != nil { trInfo.tr.LazyLog(stringer("OK"), false) } opts := &transport.Options{ Last: true, Delay: false, } if err := s.sendResponse(t, stream, reply, s.opts.cp, opts); err != nil { if err == io.EOF { // The entire stream is done (for unary RPC only). return err } if s, ok := status.FromError(err); ok { if e := t.WriteStatus(stream, s); e != nil { grpclog.Warningf("grpc: Server.processUnaryRPC failed to write status: %v", e) } } else { switch st := err.(type) { case transport.ConnectionError: // Nothing to do here. case transport.StreamError: if e := t.WriteStatus(stream, status.New(st.Code, st.Desc)); e != nil { grpclog.Warningf("grpc: Server.processUnaryRPC failed to write status %v", e) } default: panic(fmt.Sprintf("grpc: Unexpected error (%T) from sendResponse: %v", st, st)) } } return err } if trInfo != nil { trInfo.tr.LazyLog(&payload{sent: true, msg: reply}, true) } // TODO: Should we be logging if writing status failed here, like above? // Should the logging be in WriteStatus? Should we ignore the WriteStatus // error or allow the stats handler to see it? 
return t.WriteStatus(stream, status.New(codes.OK, "")) } func (s *Server) processStreamingRPC(t transport.ServerTransport, stream *transport.Stream, srv *service, sd *StreamDesc, trInfo *traceInfo) (err error) { sh := s.opts.statsHandler if sh != nil { begin := &stats.Begin{ BeginTime: time.Now(), } sh.HandleRPC(stream.Context(), begin) defer func() { end := &stats.End{ EndTime: time.Now(), } if err != nil && err != io.EOF { end.Error = toRPCErr(err) } sh.HandleRPC(stream.Context(), end) }() } if s.opts.cp != nil { stream.SetSendCompress(s.opts.cp.Type()) } ss := &serverStream{ t: t, s: stream, p: &parser{r: stream}, codec: s.opts.codec, cp: s.opts.cp, dc: s.opts.dc, maxReceiveMessageSize: s.opts.maxReceiveMessageSize, maxSendMessageSize: s.opts.maxSendMessageSize, trInfo: trInfo, statsHandler: sh, } if ss.cp != nil { ss.cbuf = new(bytes.Buffer) } if trInfo != nil { trInfo.tr.LazyLog(&trInfo.firstLine, false) defer func() { ss.mu.Lock() if err != nil && err != io.EOF { ss.trInfo.tr.LazyLog(&fmtStringer{"%v", []interface{}{err}}, true) ss.trInfo.tr.SetError() } ss.trInfo.tr.Finish() ss.trInfo.tr = nil ss.mu.Unlock() }() } var appErr error var server interface{} if srv != nil { server = srv.server } if s.opts.streamInt == nil { appErr = sd.Handler(server, ss) } else { info := &StreamServerInfo{ FullMethod: stream.Method(), IsClientStream: sd.ClientStreams, IsServerStream: sd.ServerStreams, } appErr = s.opts.streamInt(server, ss, info, sd.Handler) } if appErr != nil { appStatus, ok := status.FromError(appErr) if !ok { switch err := appErr.(type) { case transport.StreamError: appStatus = status.New(err.Code, err.Desc) default: appStatus = status.New(convertCode(appErr), appErr.Error()) } appErr = appStatus.Err() } if trInfo != nil { ss.mu.Lock() ss.trInfo.tr.LazyLog(stringer(appStatus.Message()), true) ss.trInfo.tr.SetError() ss.mu.Unlock() } t.WriteStatus(ss.s, appStatus) // TODO: Should we log an error from WriteStatus here and below? 
return appErr } if trInfo != nil { ss.mu.Lock() ss.trInfo.tr.LazyLog(stringer("OK"), false) ss.mu.Unlock() } return t.WriteStatus(ss.s, status.New(codes.OK, "")) } func (s *Server) handleStream(t transport.ServerTransport, stream *transport.Stream, trInfo *traceInfo) { sm := stream.Method() if sm != "" && sm[0] == '/' { sm = sm[1:] } pos := strings.LastIndex(sm, "/") if pos == -1 { if trInfo != nil { trInfo.tr.LazyLog(&fmtStringer{"Malformed method name %q", []interface{}{sm}}, true) trInfo.tr.SetError() } errDesc := fmt.Sprintf("malformed method name: %q", stream.Method()) if err := t.WriteStatus(stream, status.New(codes.ResourceExhausted, errDesc)); err != nil { if trInfo != nil { trInfo.tr.LazyLog(&fmtStringer{"%v", []interface{}{err}}, true) trInfo.tr.SetError() } grpclog.Warningf("grpc: Server.handleStream failed to write status: %v", err) } if trInfo != nil { trInfo.tr.Finish() } return } service := sm[:pos] method := sm[pos+1:] srv, ok := s.m[service] if !ok { if unknownDesc := s.opts.unknownStreamDesc; unknownDesc != nil { s.processStreamingRPC(t, stream, nil, unknownDesc, trInfo) return } if trInfo != nil { trInfo.tr.LazyLog(&fmtStringer{"Unknown service %v", []interface{}{service}}, true) trInfo.tr.SetError() } errDesc := fmt.Sprintf("unknown service %v", service) if err := t.WriteStatus(stream, status.New(codes.Unimplemented, errDesc)); err != nil { if trInfo != nil { trInfo.tr.LazyLog(&fmtStringer{"%v", []interface{}{err}}, true) trInfo.tr.SetError() } grpclog.Warningf("grpc: Server.handleStream failed to write status: %v", err) } if trInfo != nil { trInfo.tr.Finish() } return } // Unary RPC or Streaming RPC? if md, ok := srv.md[method]; ok { s.processUnaryRPC(t, stream, srv, md, trInfo) return } if sd, ok := srv.sd[method]; ok { s.processStreamingRPC(t, stream, srv, sd, trInfo) return } if trInfo != nil { trInfo.tr.LazyLog(&fmtStringer{"Unknown method %v", []interface{}{method}}, true) trInfo.tr.SetError() } if unknownDesc := s.opts.unknownStreamDesc; unknownDesc != nil { s.processStreamingRPC(t, stream, nil, unknownDesc, trInfo) return } errDesc := fmt.Sprintf("unknown method %v", method) if err := t.WriteStatus(stream, status.New(codes.Unimplemented, errDesc)); err != nil { if trInfo != nil { trInfo.tr.LazyLog(&fmtStringer{"%v", []interface{}{err}}, true) trInfo.tr.SetError() } grpclog.Warningf("grpc: Server.handleStream failed to write status: %v", err) } if trInfo != nil { trInfo.tr.Finish() } } // Stop stops the gRPC server. It immediately closes all open // connections and listeners. // It cancels all active RPCs on the server side and the corresponding // pending RPCs on the client side will get notified by connection // errors. func (s *Server) Stop() { s.mu.Lock() listeners := s.lis s.lis = nil st := s.conns s.conns = nil // interrupt GracefulStop if Stop and GracefulStop are called concurrently. s.cv.Broadcast() s.mu.Unlock() for lis := range listeners { lis.Close() } for c := range st { c.Close() } s.mu.Lock() s.cancel() if s.events != nil { s.events.Finish() s.events = nil } s.mu.Unlock() } // GracefulStop stops the gRPC server gracefully. It stops the server from // accepting new connections and RPCs and blocks until all the pending RPCs are // finished. 
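// Illustrative sketch (not part of the original source): draining a server on
// SIGTERM by calling GracefulStop from a separate goroutine while Serve
// blocks in the main one.
//
//	go func() {
//		sigCh := make(chan os.Signal, 1)
//		signal.Notify(sigCh, syscall.SIGTERM)
//		<-sigCh
//		s.GracefulStop() // stop accepting, then wait for in-flight RPCs
//	}()
//	if err := s.Serve(lis); err != nil {
//		log.Printf("Serve returned: %v", err)
//	}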
func (s *Server) GracefulStop() { s.mu.Lock() defer s.mu.Unlock() if s.conns == nil { return } for lis := range s.lis { lis.Close() } s.lis = nil s.cancel() if !s.drain { for c := range s.conns { c.(transport.ServerTransport).Drain() } s.drain = true } for len(s.conns) != 0 { s.cv.Wait() } s.conns = nil if s.events != nil { s.events.Finish() s.events = nil } } func init() { internal.TestingCloseConns = func(arg interface{}) { arg.(*Server).testingCloseConns() } internal.TestingUseHandlerImpl = func(arg interface{}) { arg.(*Server).opts.useHandlerImpl = true } } // testingCloseConns closes all existing transports but keeps s.lis // accepting new connections. func (s *Server) testingCloseConns() { s.mu.Lock() for c := range s.conns { c.Close() delete(s.conns, c) } s.mu.Unlock() } // SetHeader sets the header metadata. // When called multiple times, all the provided metadata will be merged. // All the metadata will be sent out when one of the following happens: // - grpc.SendHeader() is called; // - The first response is sent out; // - An RPC status is sent out (error or success). func SetHeader(ctx context.Context, md metadata.MD) error { if md.Len() == 0 { return nil } stream, ok := transport.StreamFromContext(ctx) if !ok { return Errorf(codes.Internal, "grpc: failed to fetch the stream from the context %v", ctx) } return stream.SetHeader(md) } // SendHeader sends header metadata. It may be called at most once. // The provided md and headers set by SetHeader() will be sent. func SendHeader(ctx context.Context, md metadata.MD) error { stream, ok := transport.StreamFromContext(ctx) if !ok { return Errorf(codes.Internal, "grpc: failed to fetch the stream from the context %v", ctx) } t := stream.ServerTransport() if t == nil { grpclog.Fatalf("grpc: SendHeader: %v has no ServerTransport to send header metadata.", stream) } if err := t.WriteHeader(stream, md); err != nil { return toRPCErr(err) } return nil } // SetTrailer sets the trailer metadata that will be sent when an RPC returns. // When called more than once, all the provided metadata will be merged. func SetTrailer(ctx context.Context, md metadata.MD) error { if md.Len() == 0 { return nil } stream, ok := transport.StreamFromContext(ctx) if !ok { return Errorf(codes.Internal, "grpc: failed to fetch the stream from the context %v", ctx) } return stream.SetTrailer(md) } golang-google-grpc-1.6.0/server_test.go000066400000000000000000000043661315416461300201010ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package grpc import ( "net" "reflect" "strings" "testing" ) type emptyServiceServer interface{} type testServer struct{} func TestStopBeforeServe(t *testing.T) { lis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("failed to create listener: %v", err) } server := NewServer() server.Stop() err = server.Serve(lis) if err != ErrServerStopped { t.Fatalf("server.Serve() error = %v, want %v", err, ErrServerStopped) } // server.Serve is responsible for closing the listener, even if the // server was already stopped. err = lis.Close() if got, want := ErrorDesc(err), "use of closed"; !strings.Contains(got, want) { t.Errorf("Close() error = %q, want %q", got, want) } } func TestGetServiceInfo(t *testing.T) { testSd := ServiceDesc{ ServiceName: "grpc.testing.EmptyService", HandlerType: (*emptyServiceServer)(nil), Methods: []MethodDesc{ { MethodName: "EmptyCall", Handler: nil, }, }, Streams: []StreamDesc{ { StreamName: "EmptyStream", Handler: nil, ServerStreams: false, ClientStreams: true, }, }, Metadata: []int{0, 2, 1, 3}, } server := NewServer() server.RegisterService(&testSd, &testServer{}) info := server.GetServiceInfo() want := map[string]ServiceInfo{ "grpc.testing.EmptyService": { Methods: []MethodInfo{ { Name: "EmptyCall", IsClientStream: false, IsServerStream: false, }, { Name: "EmptyStream", IsClientStream: true, IsServerStream: false, }}, Metadata: []int{0, 2, 1, 3}, }, } if !reflect.DeepEqual(info, want) { t.Errorf("GetServiceInfo() = %+v, want %+v", info, want) } } golang-google-grpc-1.6.0/stats/000077500000000000000000000000001315416461300163325ustar00rootroot00000000000000golang-google-grpc-1.6.0/stats/grpc_testing/000077500000000000000000000000001315416461300210225ustar00rootroot00000000000000golang-google-grpc-1.6.0/stats/grpc_testing/test.pb.go000066400000000000000000000272171315416461300227410ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: grpc_testing/test.proto /* Package grpc_testing is a generated protocol buffer package. It is generated from these files: grpc_testing/test.proto It has these top-level messages: SimpleRequest SimpleResponse */ package grpc_testing import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import ( context "golang.org/x/net/context" grpc "google.golang.org/grpc" ) // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. 
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package type SimpleRequest struct { Id int32 `protobuf:"varint,2,opt,name=id" json:"id,omitempty"` } func (m *SimpleRequest) Reset() { *m = SimpleRequest{} } func (m *SimpleRequest) String() string { return proto.CompactTextString(m) } func (*SimpleRequest) ProtoMessage() {} func (*SimpleRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} } func (m *SimpleRequest) GetId() int32 { if m != nil { return m.Id } return 0 } type SimpleResponse struct { Id int32 `protobuf:"varint,3,opt,name=id" json:"id,omitempty"` } func (m *SimpleResponse) Reset() { *m = SimpleResponse{} } func (m *SimpleResponse) String() string { return proto.CompactTextString(m) } func (*SimpleResponse) ProtoMessage() {} func (*SimpleResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} } func (m *SimpleResponse) GetId() int32 { if m != nil { return m.Id } return 0 } func init() { proto.RegisterType((*SimpleRequest)(nil), "grpc.testing.SimpleRequest") proto.RegisterType((*SimpleResponse)(nil), "grpc.testing.SimpleResponse") } // Reference imports to suppress errors if they are not otherwise used. var _ context.Context var _ grpc.ClientConn // This is a compile-time assertion to ensure that this generated file // is compatible with the grpc package it is being compiled against. const _ = grpc.SupportPackageIsVersion4 // Client API for TestService service type TestServiceClient interface { // One request followed by one response. // The server returns the client id as-is. UnaryCall(ctx context.Context, in *SimpleRequest, opts ...grpc.CallOption) (*SimpleResponse, error) // A sequence of requests with each request served by the server immediately. // As one request could lead to multiple responses, this interface // demonstrates the idea of full duplexing. FullDuplexCall(ctx context.Context, opts ...grpc.CallOption) (TestService_FullDuplexCallClient, error) // Client stream ClientStreamCall(ctx context.Context, opts ...grpc.CallOption) (TestService_ClientStreamCallClient, error) // Server stream ServerStreamCall(ctx context.Context, in *SimpleRequest, opts ...grpc.CallOption) (TestService_ServerStreamCallClient, error) } type testServiceClient struct { cc *grpc.ClientConn } func NewTestServiceClient(cc *grpc.ClientConn) TestServiceClient { return &testServiceClient{cc} } func (c *testServiceClient) UnaryCall(ctx context.Context, in *SimpleRequest, opts ...grpc.CallOption) (*SimpleResponse, error) { out := new(SimpleResponse) err := grpc.Invoke(ctx, "/grpc.testing.TestService/UnaryCall", in, out, c.cc, opts...) if err != nil { return nil, err } return out, nil } func (c *testServiceClient) FullDuplexCall(ctx context.Context, opts ...grpc.CallOption) (TestService_FullDuplexCallClient, error) { stream, err := grpc.NewClientStream(ctx, &_TestService_serviceDesc.Streams[0], c.cc, "/grpc.testing.TestService/FullDuplexCall", opts...) 
if err != nil { return nil, err } x := &testServiceFullDuplexCallClient{stream} return x, nil } type TestService_FullDuplexCallClient interface { Send(*SimpleRequest) error Recv() (*SimpleResponse, error) grpc.ClientStream } type testServiceFullDuplexCallClient struct { grpc.ClientStream } func (x *testServiceFullDuplexCallClient) Send(m *SimpleRequest) error { return x.ClientStream.SendMsg(m) } func (x *testServiceFullDuplexCallClient) Recv() (*SimpleResponse, error) { m := new(SimpleResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func (c *testServiceClient) ClientStreamCall(ctx context.Context, opts ...grpc.CallOption) (TestService_ClientStreamCallClient, error) { stream, err := grpc.NewClientStream(ctx, &_TestService_serviceDesc.Streams[1], c.cc, "/grpc.testing.TestService/ClientStreamCall", opts...) if err != nil { return nil, err } x := &testServiceClientStreamCallClient{stream} return x, nil } type TestService_ClientStreamCallClient interface { Send(*SimpleRequest) error CloseAndRecv() (*SimpleResponse, error) grpc.ClientStream } type testServiceClientStreamCallClient struct { grpc.ClientStream } func (x *testServiceClientStreamCallClient) Send(m *SimpleRequest) error { return x.ClientStream.SendMsg(m) } func (x *testServiceClientStreamCallClient) CloseAndRecv() (*SimpleResponse, error) { if err := x.ClientStream.CloseSend(); err != nil { return nil, err } m := new(SimpleResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func (c *testServiceClient) ServerStreamCall(ctx context.Context, in *SimpleRequest, opts ...grpc.CallOption) (TestService_ServerStreamCallClient, error) { stream, err := grpc.NewClientStream(ctx, &_TestService_serviceDesc.Streams[2], c.cc, "/grpc.testing.TestService/ServerStreamCall", opts...) if err != nil { return nil, err } x := &testServiceServerStreamCallClient{stream} if err := x.ClientStream.SendMsg(in); err != nil { return nil, err } if err := x.ClientStream.CloseSend(); err != nil { return nil, err } return x, nil } type TestService_ServerStreamCallClient interface { Recv() (*SimpleResponse, error) grpc.ClientStream } type testServiceServerStreamCallClient struct { grpc.ClientStream } func (x *testServiceServerStreamCallClient) Recv() (*SimpleResponse, error) { m := new(SimpleResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } // Server API for TestService service type TestServiceServer interface { // One request followed by one response. // The server returns the client id as-is. UnaryCall(context.Context, *SimpleRequest) (*SimpleResponse, error) // A sequence of requests with each request served by the server immediately. // As one request could lead to multiple responses, this interface // demonstrates the idea of full duplexing. 
FullDuplexCall(TestService_FullDuplexCallServer) error // Client stream ClientStreamCall(TestService_ClientStreamCallServer) error // Server stream ServerStreamCall(*SimpleRequest, TestService_ServerStreamCallServer) error } func RegisterTestServiceServer(s *grpc.Server, srv TestServiceServer) { s.RegisterService(&_TestService_serviceDesc, srv) } func _TestService_UnaryCall_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(SimpleRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(TestServiceServer).UnaryCall(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/grpc.testing.TestService/UnaryCall", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(TestServiceServer).UnaryCall(ctx, req.(*SimpleRequest)) } return interceptor(ctx, in, info, handler) } func _TestService_FullDuplexCall_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(TestServiceServer).FullDuplexCall(&testServiceFullDuplexCallServer{stream}) } type TestService_FullDuplexCallServer interface { Send(*SimpleResponse) error Recv() (*SimpleRequest, error) grpc.ServerStream } type testServiceFullDuplexCallServer struct { grpc.ServerStream } func (x *testServiceFullDuplexCallServer) Send(m *SimpleResponse) error { return x.ServerStream.SendMsg(m) } func (x *testServiceFullDuplexCallServer) Recv() (*SimpleRequest, error) { m := new(SimpleRequest) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func _TestService_ClientStreamCall_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(TestServiceServer).ClientStreamCall(&testServiceClientStreamCallServer{stream}) } type TestService_ClientStreamCallServer interface { SendAndClose(*SimpleResponse) error Recv() (*SimpleRequest, error) grpc.ServerStream } type testServiceClientStreamCallServer struct { grpc.ServerStream } func (x *testServiceClientStreamCallServer) SendAndClose(m *SimpleResponse) error { return x.ServerStream.SendMsg(m) } func (x *testServiceClientStreamCallServer) Recv() (*SimpleRequest, error) { m := new(SimpleRequest) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func _TestService_ServerStreamCall_Handler(srv interface{}, stream grpc.ServerStream) error { m := new(SimpleRequest) if err := stream.RecvMsg(m); err != nil { return err } return srv.(TestServiceServer).ServerStreamCall(m, &testServiceServerStreamCallServer{stream}) } type TestService_ServerStreamCallServer interface { Send(*SimpleResponse) error grpc.ServerStream } type testServiceServerStreamCallServer struct { grpc.ServerStream } func (x *testServiceServerStreamCallServer) Send(m *SimpleResponse) error { return x.ServerStream.SendMsg(m) } var _TestService_serviceDesc = grpc.ServiceDesc{ ServiceName: "grpc.testing.TestService", HandlerType: (*TestServiceServer)(nil), Methods: []grpc.MethodDesc{ { MethodName: "UnaryCall", Handler: _TestService_UnaryCall_Handler, }, }, Streams: []grpc.StreamDesc{ { StreamName: "FullDuplexCall", Handler: _TestService_FullDuplexCall_Handler, ServerStreams: true, ClientStreams: true, }, { StreamName: "ClientStreamCall", Handler: _TestService_ClientStreamCall_Handler, ClientStreams: true, }, { StreamName: "ServerStreamCall", Handler: _TestService_ServerStreamCall_Handler, ServerStreams: true, }, }, Metadata: "grpc_testing/test.proto", } func init() { 
proto.RegisterFile("grpc_testing/test.proto", fileDescriptor0) } var fileDescriptor0 = []byte{ // 202 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x12, 0x4f, 0x2f, 0x2a, 0x48, 0x8e, 0x2f, 0x49, 0x2d, 0x2e, 0xc9, 0xcc, 0x4b, 0xd7, 0x07, 0xd1, 0x7a, 0x05, 0x45, 0xf9, 0x25, 0xf9, 0x42, 0x3c, 0x20, 0x09, 0x3d, 0xa8, 0x84, 0x92, 0x3c, 0x17, 0x6f, 0x70, 0x66, 0x6e, 0x41, 0x4e, 0x6a, 0x50, 0x6a, 0x61, 0x69, 0x6a, 0x71, 0x89, 0x10, 0x1f, 0x17, 0x53, 0x66, 0x8a, 0x04, 0x93, 0x02, 0xa3, 0x06, 0x6b, 0x10, 0x53, 0x66, 0x8a, 0x92, 0x02, 0x17, 0x1f, 0x4c, 0x41, 0x71, 0x41, 0x7e, 0x5e, 0x71, 0x2a, 0x54, 0x05, 0x33, 0x4c, 0x85, 0xd1, 0x09, 0x26, 0x2e, 0xee, 0x90, 0xd4, 0xe2, 0x92, 0xe0, 0xd4, 0xa2, 0xb2, 0xcc, 0xe4, 0x54, 0x21, 0x37, 0x2e, 0xce, 0xd0, 0xbc, 0xc4, 0xa2, 0x4a, 0xe7, 0xc4, 0x9c, 0x1c, 0x21, 0x69, 0x3d, 0x64, 0xeb, 0xf4, 0x50, 0xec, 0x92, 0x92, 0xc1, 0x2e, 0x09, 0xb5, 0xc7, 0x9f, 0x8b, 0xcf, 0xad, 0x34, 0x27, 0xc7, 0xa5, 0xb4, 0x20, 0x27, 0xb5, 0x82, 0x42, 0xc3, 0x34, 0x18, 0x0d, 0x18, 0x85, 0xfc, 0xb9, 0x04, 0x9c, 0x73, 0x32, 0x53, 0xf3, 0x4a, 0x82, 0x4b, 0x8a, 0x52, 0x13, 0x73, 0x29, 0x36, 0x12, 0x64, 0x20, 0xc8, 0xd3, 0xa9, 0x45, 0x54, 0x31, 0xd0, 0x80, 0x31, 0x89, 0x0d, 0x1c, 0x45, 0xc6, 0x80, 0x00, 0x00, 0x00, 0xff, 0xff, 0x4c, 0x43, 0x27, 0x67, 0xbd, 0x01, 0x00, 0x00, } golang-google-grpc-1.6.0/stats/grpc_testing/test.proto000066400000000000000000000025331315416461300230710ustar00rootroot00000000000000// Copyright 2017 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. syntax = "proto3"; package grpc.testing; message SimpleRequest { int32 id = 2; } message SimpleResponse { int32 id = 3; } // A simple test service. service TestService { // One request followed by one response. // The server returns the client id as-is. rpc UnaryCall(SimpleRequest) returns (SimpleResponse); // A sequence of requests with each request served by the server immediately. // As one request could lead to multiple responses, this interface // demonstrates the idea of full duplexing. rpc FullDuplexCall(stream SimpleRequest) returns (stream SimpleResponse); // Client stream rpc ClientStreamCall(stream SimpleRequest) returns (SimpleResponse); // Server stream rpc ServerStreamCall(SimpleRequest) returns (stream SimpleResponse); } golang-google-grpc-1.6.0/stats/handlers.go000066400000000000000000000044461315416461300204710ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package stats import ( "net" "golang.org/x/net/context" ) // ConnTagInfo defines the relevant information needed by connection context tagger. type ConnTagInfo struct { // RemoteAddr is the remote address of the corresponding connection. RemoteAddr net.Addr // LocalAddr is the local address of the corresponding connection. LocalAddr net.Addr } // RPCTagInfo defines the relevant information needed by RPC context tagger. type RPCTagInfo struct { // FullMethodName is the RPC method in the format of /package.service/method. FullMethodName string // FailFast indicates if this RPC is failfast. // This field is only valid on client side, it's always false on server side. FailFast bool } // Handler defines the interface for the related stats handling (e.g., RPCs, connections). type Handler interface { // TagRPC can attach some information to the given context. // The context used for the rest lifetime of the RPC will be derived from // the returned context. TagRPC(context.Context, *RPCTagInfo) context.Context // HandleRPC processes the RPC stats. HandleRPC(context.Context, RPCStats) // TagConn can attach some information to the given context. // The returned context will be used for stats handling. // For conn stats handling, the context used in HandleConn for this // connection will be derived from the context returned. // For RPC stats handling, // - On server side, the context used in HandleRPC for all RPCs on this // connection will be derived from the context returned. // - On client side, the context is not derived from the context returned. TagConn(context.Context, *ConnTagInfo) context.Context // HandleConn processes the Conn stats. HandleConn(context.Context, ConnStats) } golang-google-grpc-1.6.0/stats/stats.go000066400000000000000000000232601315416461300200220ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ //go:generate protoc --go_out=plugins=grpc:. grpc_testing/test.proto // Package stats is for collecting and reporting various network and RPC stats. // This package is for monitoring purpose only. All fields are read-only. // All APIs are experimental. package stats // import "google.golang.org/grpc/stats" import ( "net" "time" "golang.org/x/net/context" ) // RPCStats contains stats information about RPCs. type RPCStats interface { isRPCStats() // IsClient returns true if this RPCStats is from client side. IsClient() bool } // Begin contains stats when an RPC begins. // FailFast is only valid if this Begin is from client side. type Begin struct { // Client is true if this Begin is from client side. Client bool // BeginTime is the time when the RPC begins. BeginTime time.Time // FailFast indicates if this RPC is failfast. FailFast bool } // IsClient indicates if the stats information is from client side. func (s *Begin) IsClient() bool { return s.Client } func (s *Begin) isRPCStats() {} // InPayload contains the information for an incoming payload. type InPayload struct { // Client is true if this InPayload is from client side. 
Client bool // Payload is the payload with original type. Payload interface{} // Data is the serialized message payload. Data []byte // Length is the length of uncompressed data. Length int // WireLength is the length of data on wire (compressed, signed, encrypted). WireLength int // RecvTime is the time when the payload is received. RecvTime time.Time } // IsClient indicates if the stats information is from client side. func (s *InPayload) IsClient() bool { return s.Client } func (s *InPayload) isRPCStats() {} // InHeader contains stats when a header is received. type InHeader struct { // Client is true if this InHeader is from client side. Client bool // WireLength is the wire length of header. WireLength int // The following fields are valid only if Client is false. // FullMethod is the full RPC method string, i.e., /package.service/method. FullMethod string // RemoteAddr is the remote address of the corresponding connection. RemoteAddr net.Addr // LocalAddr is the local address of the corresponding connection. LocalAddr net.Addr // Compression is the compression algorithm used for the RPC. Compression string } // IsClient indicates if the stats information is from client side. func (s *InHeader) IsClient() bool { return s.Client } func (s *InHeader) isRPCStats() {} // InTrailer contains stats when a trailer is received. type InTrailer struct { // Client is true if this InTrailer is from client side. Client bool // WireLength is the wire length of trailer. WireLength int } // IsClient indicates if the stats information is from client side. func (s *InTrailer) IsClient() bool { return s.Client } func (s *InTrailer) isRPCStats() {} // OutPayload contains the information for an outgoing payload. type OutPayload struct { // Client is true if this OutPayload is from client side. Client bool // Payload is the payload with original type. Payload interface{} // Data is the serialized message payload. Data []byte // Length is the length of uncompressed data. Length int // WireLength is the length of data on wire (compressed, signed, encrypted). WireLength int // SentTime is the time when the payload is sent. SentTime time.Time } // IsClient indicates if this stats information is from client side. func (s *OutPayload) IsClient() bool { return s.Client } func (s *OutPayload) isRPCStats() {} // OutHeader contains stats when a header is sent. type OutHeader struct { // Client is true if this OutHeader is from client side. Client bool // WireLength is the wire length of header. WireLength int // The following fields are valid only if Client is true. // FullMethod is the full RPC method string, i.e., /package.service/method. FullMethod string // RemoteAddr is the remote address of the corresponding connection. RemoteAddr net.Addr // LocalAddr is the local address of the corresponding connection. LocalAddr net.Addr // Compression is the compression algorithm used for the RPC. Compression string } // IsClient indicates if this stats information is from client side. func (s *OutHeader) IsClient() bool { return s.Client } func (s *OutHeader) isRPCStats() {} // OutTrailer contains stats when a trailer is sent. type OutTrailer struct { // Client is true if this OutTrailer is from client side. Client bool // WireLength is the wire length of trailer. WireLength int } // IsClient indicates if this stats information is from client side. func (s *OutTrailer) IsClient() bool { return s.Client } func (s *OutTrailer) isRPCStats() {} // End contains stats when an RPC ends. 
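// Illustrative sketch (not part of the original source): a Handler
// implementation (see handlers.go) that type-switches on the RPCStats values
// defined in this file to count bytes on the wire. statsCounter and its
// fields are hypothetical names.
//
//	type statsCounter struct{ recvBytes, sentBytes int64 }
//
//	func (h *statsCounter) TagRPC(ctx context.Context, _ *stats.RPCTagInfo) context.Context   { return ctx }
//	func (h *statsCounter) TagConn(ctx context.Context, _ *stats.ConnTagInfo) context.Context { return ctx }
//	func (h *statsCounter) HandleConn(context.Context, stats.ConnStats)                       {}
//	func (h *statsCounter) HandleRPC(_ context.Context, rs stats.RPCStats) {
//		switch s := rs.(type) {
//		case *stats.InPayload:
//			atomic.AddInt64(&h.recvBytes, int64(s.WireLength))
//		case *stats.OutPayload:
//			atomic.AddInt64(&h.sentBytes, int64(s.WireLength))
//		}
//	}
//
//	// Install with grpc.StatsHandler(&statsCounter{}) on a server or
//	// grpc.WithStatsHandler(&statsCounter{}) on a client connection.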
type End struct { // Client is true if this End is from client side. Client bool // EndTime is the time when the RPC ends. EndTime time.Time // Error is the error the RPC ended with. It is an error generated from // status.Status and can be converted back to status.Status using // status.FromError if non-nil. Error error } // IsClient indicates if this is from client side. func (s *End) IsClient() bool { return s.Client } func (s *End) isRPCStats() {} // ConnStats contains stats information about connections. type ConnStats interface { isConnStats() // IsClient returns true if this ConnStats is from client side. IsClient() bool } // ConnBegin contains the stats of a connection when it is established. type ConnBegin struct { // Client is true if this ConnBegin is from client side. Client bool } // IsClient indicates if this is from client side. func (s *ConnBegin) IsClient() bool { return s.Client } func (s *ConnBegin) isConnStats() {} // ConnEnd contains the stats of a connection when it ends. type ConnEnd struct { // Client is true if this ConnEnd is from client side. Client bool } // IsClient indicates if this is from client side. func (s *ConnEnd) IsClient() bool { return s.Client } func (s *ConnEnd) isConnStats() {} type incomingTagsKey struct{} type outgoingTagsKey struct{} // SetTags attaches stats tagging data to the context, which will be sent in // the outgoing RPC with the header grpc-tags-bin. Subsequent calls to // SetTags will overwrite the values from earlier calls. // // NOTE: this is provided only for backward compatibilty with existing clients // and will likely be removed in an upcoming release. New uses should transmit // this type of data using metadata with a different, non-reserved (i.e. does // not begin with "grpc-") header name. func SetTags(ctx context.Context, b []byte) context.Context { return context.WithValue(ctx, outgoingTagsKey{}, b) } // Tags returns the tags from the context for the inbound RPC. // // NOTE: this is provided only for backward compatibilty with existing clients // and will likely be removed in an upcoming release. New uses should transmit // this type of data using metadata with a different, non-reserved (i.e. does // not begin with "grpc-") header name. func Tags(ctx context.Context) []byte { b, _ := ctx.Value(incomingTagsKey{}).([]byte) return b } // SetIncomingTags attaches stats tagging data to the context, to be read by // the application (not sent in outgoing RPCs). // // This is intended for gRPC-internal use ONLY. func SetIncomingTags(ctx context.Context, b []byte) context.Context { return context.WithValue(ctx, incomingTagsKey{}, b) } // OutgoingTags returns the tags from the context for the outbound RPC. // // This is intended for gRPC-internal use ONLY. func OutgoingTags(ctx context.Context) []byte { b, _ := ctx.Value(outgoingTagsKey{}).([]byte) return b } type incomingTraceKey struct{} type outgoingTraceKey struct{} // SetTrace attaches stats tagging data to the context, which will be sent in // the outgoing RPC with the header grpc-trace-bin. Subsequent calls to // SetTrace will overwrite the values from earlier calls. // // NOTE: this is provided only for backward compatibilty with existing clients // and will likely be removed in an upcoming release. New uses should transmit // this type of data using metadata with a different, non-reserved (i.e. does // not begin with "grpc-") header name. 
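// Illustrative sketch (not part of the original source): propagating tag
// bytes with SetTags and Tags as described above. client, req, serverCtx and
// the tag payload are placeholders.
//
//	// Client side: attach tags to the outgoing context before issuing the RPC.
//	ctx := stats.SetTags(context.Background(), []byte("request-class=batch"))
//	resp, err := client.UnaryCall(ctx, req)
//
//	// Server side: read the tags that arrived in the grpc-tags-bin header.
//	if b := stats.Tags(serverCtx); len(b) > 0 {
//		grpclog.Infof("tags: %q", b)
//	}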
func SetTrace(ctx context.Context, b []byte) context.Context { return context.WithValue(ctx, outgoingTraceKey{}, b) } // Trace returns the trace from the context for the inbound RPC. // // NOTE: this is provided only for backward compatibilty with existing clients // and will likely be removed in an upcoming release. New uses should transmit // this type of data using metadata with a different, non-reserved (i.e. does // not begin with "grpc-") header name. func Trace(ctx context.Context) []byte { b, _ := ctx.Value(incomingTraceKey{}).([]byte) return b } // SetIncomingTrace attaches stats tagging data to the context, to be read by // the application (not sent in outgoing RPCs). It is intended for // gRPC-internal use. func SetIncomingTrace(ctx context.Context, b []byte) context.Context { return context.WithValue(ctx, incomingTraceKey{}, b) } // OutgoingTrace returns the trace from the context for the outbound RPC. It is // intended for gRPC-internal use. func OutgoingTrace(ctx context.Context) []byte { b, _ := ctx.Value(outgoingTraceKey{}).([]byte) return b } golang-google-grpc-1.6.0/stats/stats_test.go000066400000000000000000001036641315416461300210700ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package stats_test import ( "fmt" "io" "net" "reflect" "sync" "testing" "time" "github.com/golang/protobuf/proto" "golang.org/x/net/context" "google.golang.org/grpc" "google.golang.org/grpc/metadata" "google.golang.org/grpc/stats" testpb "google.golang.org/grpc/stats/grpc_testing" ) func init() { grpc.EnableTracing = false } type connCtxKey struct{} type rpcCtxKey struct{} var ( // For headers: testMetadata = metadata.MD{ "key1": []string{"value1"}, "key2": []string{"value2"}, } // For trailers: testTrailerMetadata = metadata.MD{ "tkey1": []string{"trailerValue1"}, "tkey2": []string{"trailerValue2"}, } // The id for which the service handler should return error. errorID int32 = 32202 ) type testServer struct{} func (s *testServer) UnaryCall(ctx context.Context, in *testpb.SimpleRequest) (*testpb.SimpleResponse, error) { md, ok := metadata.FromIncomingContext(ctx) if ok { if err := grpc.SendHeader(ctx, md); err != nil { return nil, grpc.Errorf(grpc.Code(err), "grpc.SendHeader(_, %v) = %v, want ", md, err) } if err := grpc.SetTrailer(ctx, testTrailerMetadata); err != nil { return nil, grpc.Errorf(grpc.Code(err), "grpc.SetTrailer(_, %v) = %v, want ", testTrailerMetadata, err) } } if in.Id == errorID { return nil, fmt.Errorf("got error id: %v", in.Id) } return &testpb.SimpleResponse{Id: in.Id}, nil } func (s *testServer) FullDuplexCall(stream testpb.TestService_FullDuplexCallServer) error { md, ok := metadata.FromIncomingContext(stream.Context()) if ok { if err := stream.SendHeader(md); err != nil { return grpc.Errorf(grpc.Code(err), "%v.SendHeader(%v) = %v, want %v", stream, md, err, nil) } stream.SetTrailer(testTrailerMetadata) } for { in, err := stream.Recv() if err == io.EOF { // read done. 
return nil } if err != nil { return err } if in.Id == errorID { return fmt.Errorf("got error id: %v", in.Id) } if err := stream.Send(&testpb.SimpleResponse{Id: in.Id}); err != nil { return err } } } func (s *testServer) ClientStreamCall(stream testpb.TestService_ClientStreamCallServer) error { md, ok := metadata.FromIncomingContext(stream.Context()) if ok { if err := stream.SendHeader(md); err != nil { return grpc.Errorf(grpc.Code(err), "%v.SendHeader(%v) = %v, want %v", stream, md, err, nil) } stream.SetTrailer(testTrailerMetadata) } for { in, err := stream.Recv() if err == io.EOF { // read done. return stream.SendAndClose(&testpb.SimpleResponse{Id: int32(0)}) } if err != nil { return err } if in.Id == errorID { return fmt.Errorf("got error id: %v", in.Id) } } } func (s *testServer) ServerStreamCall(in *testpb.SimpleRequest, stream testpb.TestService_ServerStreamCallServer) error { md, ok := metadata.FromIncomingContext(stream.Context()) if ok { if err := stream.SendHeader(md); err != nil { return grpc.Errorf(grpc.Code(err), "%v.SendHeader(%v) = %v, want %v", stream, md, err, nil) } stream.SetTrailer(testTrailerMetadata) } if in.Id == errorID { return fmt.Errorf("got error id: %v", in.Id) } for i := 0; i < 5; i++ { if err := stream.Send(&testpb.SimpleResponse{Id: in.Id}); err != nil { return err } } return nil } // test is an end-to-end test. It should be created with the newTest // func, modified as needed, and then started with its startServer method. // It should be cleaned up with the tearDown method. type test struct { t *testing.T compress string clientStatsHandler stats.Handler serverStatsHandler stats.Handler testServer testpb.TestServiceServer // nil means none // srv and srvAddr are set once startServer is called. srv *grpc.Server srvAddr string cc *grpc.ClientConn // nil until requested via clientConn } func (te *test) tearDown() { if te.cc != nil { te.cc.Close() te.cc = nil } te.srv.Stop() } type testConfig struct { compress string } // newTest returns a new test using the provided testing.T and // environment. It is returned with default values. Tests should // modify it before calling its startServer and clientConn methods. func newTest(t *testing.T, tc *testConfig, ch stats.Handler, sh stats.Handler) *test { te := &test{ t: t, compress: tc.compress, clientStatsHandler: ch, serverStatsHandler: sh, } return te } // startServer starts a gRPC server listening. Callers should defer a // call to te.tearDown to clean up. func (te *test) startServer(ts testpb.TestServiceServer) { te.testServer = ts lis, err := net.Listen("tcp", "localhost:0") if err != nil { te.t.Fatalf("Failed to listen: %v", err) } var opts []grpc.ServerOption if te.compress == "gzip" { opts = append(opts, grpc.RPCCompressor(grpc.NewGZIPCompressor()), grpc.RPCDecompressor(grpc.NewGZIPDecompressor()), ) } if te.serverStatsHandler != nil { opts = append(opts, grpc.StatsHandler(te.serverStatsHandler)) } s := grpc.NewServer(opts...) te.srv = s if te.testServer != nil { testpb.RegisterTestServiceServer(s, te.testServer) } go s.Serve(lis) te.srvAddr = lis.Addr().String() } func (te *test) clientConn() *grpc.ClientConn { if te.cc != nil { return te.cc } opts := []grpc.DialOption{grpc.WithInsecure(), grpc.WithBlock()} if te.compress == "gzip" { opts = append(opts, grpc.WithCompressor(grpc.NewGZIPCompressor()), grpc.WithDecompressor(grpc.NewGZIPDecompressor()), ) } if te.clientStatsHandler != nil { opts = append(opts, grpc.WithStatsHandler(te.clientStatsHandler)) } var err error te.cc, err = grpc.Dial(te.srvAddr, opts...) 
if err != nil { te.t.Fatalf("Dial(%q) = %v", te.srvAddr, err) } return te.cc } type rpcType int const ( unaryRPC rpcType = iota clientStreamRPC serverStreamRPC fullDuplexStreamRPC ) type rpcConfig struct { count int // Number of requests and responses for streaming RPCs. success bool // Whether the RPC should succeed or return error. failfast bool callType rpcType // Type of RPC. noLastRecv bool // Whether to call recv for io.EOF. When true, last recv won't be called. Only valid for streaming RPCs. } func (te *test) doUnaryCall(c *rpcConfig) (*testpb.SimpleRequest, *testpb.SimpleResponse, error) { var ( resp *testpb.SimpleResponse req *testpb.SimpleRequest err error ) tc := testpb.NewTestServiceClient(te.clientConn()) if c.success { req = &testpb.SimpleRequest{Id: errorID + 1} } else { req = &testpb.SimpleRequest{Id: errorID} } ctx := metadata.NewOutgoingContext(context.Background(), testMetadata) resp, err = tc.UnaryCall(ctx, req, grpc.FailFast(c.failfast)) return req, resp, err } func (te *test) doFullDuplexCallRoundtrip(c *rpcConfig) ([]*testpb.SimpleRequest, []*testpb.SimpleResponse, error) { var ( reqs []*testpb.SimpleRequest resps []*testpb.SimpleResponse err error ) tc := testpb.NewTestServiceClient(te.clientConn()) stream, err := tc.FullDuplexCall(metadata.NewOutgoingContext(context.Background(), testMetadata), grpc.FailFast(c.failfast)) if err != nil { return reqs, resps, err } var startID int32 if !c.success { startID = errorID } for i := 0; i < c.count; i++ { req := &testpb.SimpleRequest{ Id: int32(i) + startID, } reqs = append(reqs, req) if err = stream.Send(req); err != nil { return reqs, resps, err } var resp *testpb.SimpleResponse if resp, err = stream.Recv(); err != nil { return reqs, resps, err } resps = append(resps, resp) } if err = stream.CloseSend(); err != nil && err != io.EOF { return reqs, resps, err } if !c.noLastRecv { if _, err = stream.Recv(); err != io.EOF { return reqs, resps, err } } else { // In the case of not calling the last recv, sleep to avoid // returning too fast to miss the remaining stats (InTrailer and End). 
time.Sleep(time.Second) } return reqs, resps, nil } func (te *test) doClientStreamCall(c *rpcConfig) ([]*testpb.SimpleRequest, *testpb.SimpleResponse, error) { var ( reqs []*testpb.SimpleRequest resp *testpb.SimpleResponse err error ) tc := testpb.NewTestServiceClient(te.clientConn()) stream, err := tc.ClientStreamCall(metadata.NewOutgoingContext(context.Background(), testMetadata), grpc.FailFast(c.failfast)) if err != nil { return reqs, resp, err } var startID int32 if !c.success { startID = errorID } for i := 0; i < c.count; i++ { req := &testpb.SimpleRequest{ Id: int32(i) + startID, } reqs = append(reqs, req) if err = stream.Send(req); err != nil { return reqs, resp, err } } resp, err = stream.CloseAndRecv() return reqs, resp, err } func (te *test) doServerStreamCall(c *rpcConfig) (*testpb.SimpleRequest, []*testpb.SimpleResponse, error) { var ( req *testpb.SimpleRequest resps []*testpb.SimpleResponse err error ) tc := testpb.NewTestServiceClient(te.clientConn()) var startID int32 if !c.success { startID = errorID } req = &testpb.SimpleRequest{Id: startID} stream, err := tc.ServerStreamCall(metadata.NewOutgoingContext(context.Background(), testMetadata), req, grpc.FailFast(c.failfast)) if err != nil { return req, resps, err } for { var resp *testpb.SimpleResponse resp, err := stream.Recv() if err == io.EOF { return req, resps, nil } else if err != nil { return req, resps, err } resps = append(resps, resp) } } type expectedData struct { method string serverAddr string compression string reqIdx int requests []*testpb.SimpleRequest respIdx int responses []*testpb.SimpleResponse err error failfast bool } type gotData struct { ctx context.Context client bool s interface{} // This could be RPCStats or ConnStats. } const ( begin int = iota end inPayload inHeader inTrailer outPayload outHeader outTrailer connbegin connend ) func checkBegin(t *testing.T, d *gotData, e *expectedData) { var ( ok bool st *stats.Begin ) if st, ok = d.s.(*stats.Begin); !ok { t.Fatalf("got %T, want Begin", d.s) } if d.ctx == nil { t.Fatalf("d.ctx = nil, want ") } if st.BeginTime.IsZero() { t.Fatalf("st.BeginTime = %v, want ", st.BeginTime) } if d.client { if st.FailFast != e.failfast { t.Fatalf("st.FailFast = %v, want %v", st.FailFast, e.failfast) } } } func checkInHeader(t *testing.T, d *gotData, e *expectedData) { var ( ok bool st *stats.InHeader ) if st, ok = d.s.(*stats.InHeader); !ok { t.Fatalf("got %T, want InHeader", d.s) } if d.ctx == nil { t.Fatalf("d.ctx = nil, want ") } // TODO check real length, not just > 0. 
if st.WireLength <= 0 { t.Fatalf("st.Lenght = 0, want > 0") } if !d.client { if st.FullMethod != e.method { t.Fatalf("st.FullMethod = %s, want %v", st.FullMethod, e.method) } if st.LocalAddr.String() != e.serverAddr { t.Fatalf("st.LocalAddr = %v, want %v", st.LocalAddr, e.serverAddr) } if st.Compression != e.compression { t.Fatalf("st.Compression = %v, want %v", st.Compression, e.compression) } if connInfo, ok := d.ctx.Value(connCtxKey{}).(*stats.ConnTagInfo); ok { if connInfo.RemoteAddr != st.RemoteAddr { t.Fatalf("connInfo.RemoteAddr = %v, want %v", connInfo.RemoteAddr, st.RemoteAddr) } if connInfo.LocalAddr != st.LocalAddr { t.Fatalf("connInfo.LocalAddr = %v, want %v", connInfo.LocalAddr, st.LocalAddr) } } else { t.Fatalf("got context %v, want one with connCtxKey", d.ctx) } if rpcInfo, ok := d.ctx.Value(rpcCtxKey{}).(*stats.RPCTagInfo); ok { if rpcInfo.FullMethodName != st.FullMethod { t.Fatalf("rpcInfo.FullMethod = %s, want %v", rpcInfo.FullMethodName, st.FullMethod) } } else { t.Fatalf("got context %v, want one with rpcCtxKey", d.ctx) } } } func checkInPayload(t *testing.T, d *gotData, e *expectedData) { var ( ok bool st *stats.InPayload ) if st, ok = d.s.(*stats.InPayload); !ok { t.Fatalf("got %T, want InPayload", d.s) } if d.ctx == nil { t.Fatalf("d.ctx = nil, want ") } if d.client { b, err := proto.Marshal(e.responses[e.respIdx]) if err != nil { t.Fatalf("failed to marshal message: %v", err) } if reflect.TypeOf(st.Payload) != reflect.TypeOf(e.responses[e.respIdx]) { t.Fatalf("st.Payload = %T, want %T", st.Payload, e.responses[e.respIdx]) } e.respIdx++ if string(st.Data) != string(b) { t.Fatalf("st.Data = %v, want %v", st.Data, b) } if st.Length != len(b) { t.Fatalf("st.Lenght = %v, want %v", st.Length, len(b)) } } else { b, err := proto.Marshal(e.requests[e.reqIdx]) if err != nil { t.Fatalf("failed to marshal message: %v", err) } if reflect.TypeOf(st.Payload) != reflect.TypeOf(e.requests[e.reqIdx]) { t.Fatalf("st.Payload = %T, want %T", st.Payload, e.requests[e.reqIdx]) } e.reqIdx++ if string(st.Data) != string(b) { t.Fatalf("st.Data = %v, want %v", st.Data, b) } if st.Length != len(b) { t.Fatalf("st.Lenght = %v, want %v", st.Length, len(b)) } } // TODO check WireLength and ReceivedTime. if st.RecvTime.IsZero() { t.Fatalf("st.ReceivedTime = %v, want ", st.RecvTime) } } func checkInTrailer(t *testing.T, d *gotData, e *expectedData) { var ( ok bool st *stats.InTrailer ) if st, ok = d.s.(*stats.InTrailer); !ok { t.Fatalf("got %T, want InTrailer", d.s) } if d.ctx == nil { t.Fatalf("d.ctx = nil, want ") } // TODO check real length, not just > 0. if st.WireLength <= 0 { t.Fatalf("st.Lenght = 0, want > 0") } } func checkOutHeader(t *testing.T, d *gotData, e *expectedData) { var ( ok bool st *stats.OutHeader ) if st, ok = d.s.(*stats.OutHeader); !ok { t.Fatalf("got %T, want OutHeader", d.s) } if d.ctx == nil { t.Fatalf("d.ctx = nil, want ") } // TODO check real length, not just > 0. 
if st.WireLength <= 0 { t.Fatalf("st.Lenght = 0, want > 0") } if d.client { if st.FullMethod != e.method { t.Fatalf("st.FullMethod = %s, want %v", st.FullMethod, e.method) } if st.RemoteAddr.String() != e.serverAddr { t.Fatalf("st.RemoteAddr = %v, want %v", st.RemoteAddr, e.serverAddr) } if st.Compression != e.compression { t.Fatalf("st.Compression = %v, want %v", st.Compression, e.compression) } if rpcInfo, ok := d.ctx.Value(rpcCtxKey{}).(*stats.RPCTagInfo); ok { if rpcInfo.FullMethodName != st.FullMethod { t.Fatalf("rpcInfo.FullMethod = %s, want %v", rpcInfo.FullMethodName, st.FullMethod) } } else { t.Fatalf("got context %v, want one with rpcCtxKey", d.ctx) } } } func checkOutPayload(t *testing.T, d *gotData, e *expectedData) { var ( ok bool st *stats.OutPayload ) if st, ok = d.s.(*stats.OutPayload); !ok { t.Fatalf("got %T, want OutPayload", d.s) } if d.ctx == nil { t.Fatalf("d.ctx = nil, want ") } if d.client { b, err := proto.Marshal(e.requests[e.reqIdx]) if err != nil { t.Fatalf("failed to marshal message: %v", err) } if reflect.TypeOf(st.Payload) != reflect.TypeOf(e.requests[e.reqIdx]) { t.Fatalf("st.Payload = %T, want %T", st.Payload, e.requests[e.reqIdx]) } e.reqIdx++ if string(st.Data) != string(b) { t.Fatalf("st.Data = %v, want %v", st.Data, b) } if st.Length != len(b) { t.Fatalf("st.Lenght = %v, want %v", st.Length, len(b)) } } else { b, err := proto.Marshal(e.responses[e.respIdx]) if err != nil { t.Fatalf("failed to marshal message: %v", err) } if reflect.TypeOf(st.Payload) != reflect.TypeOf(e.responses[e.respIdx]) { t.Fatalf("st.Payload = %T, want %T", st.Payload, e.responses[e.respIdx]) } e.respIdx++ if string(st.Data) != string(b) { t.Fatalf("st.Data = %v, want %v", st.Data, b) } if st.Length != len(b) { t.Fatalf("st.Lenght = %v, want %v", st.Length, len(b)) } } // TODO check WireLength and ReceivedTime. if st.SentTime.IsZero() { t.Fatalf("st.SentTime = %v, want ", st.SentTime) } } func checkOutTrailer(t *testing.T, d *gotData, e *expectedData) { var ( ok bool st *stats.OutTrailer ) if st, ok = d.s.(*stats.OutTrailer); !ok { t.Fatalf("got %T, want OutTrailer", d.s) } if d.ctx == nil { t.Fatalf("d.ctx = nil, want ") } if st.Client { t.Fatalf("st IsClient = true, want false") } // TODO check real length, not just > 0. if st.WireLength <= 0 { t.Fatalf("st.Lenght = 0, want > 0") } } func checkEnd(t *testing.T, d *gotData, e *expectedData) { var ( ok bool st *stats.End ) if st, ok = d.s.(*stats.End); !ok { t.Fatalf("got %T, want End", d.s) } if d.ctx == nil { t.Fatalf("d.ctx = nil, want ") } if st.EndTime.IsZero() { t.Fatalf("st.EndTime = %v, want ", st.EndTime) } if grpc.Code(st.Error) != grpc.Code(e.err) || grpc.ErrorDesc(st.Error) != grpc.ErrorDesc(e.err) { t.Fatalf("st.Error = %v, want %v", st.Error, e.err) } } func checkConnBegin(t *testing.T, d *gotData, e *expectedData) { var ( ok bool st *stats.ConnBegin ) if st, ok = d.s.(*stats.ConnBegin); !ok { t.Fatalf("got %T, want ConnBegin", d.s) } if d.ctx == nil { t.Fatalf("d.ctx = nil, want ") } st.IsClient() // TODO remove this. } func checkConnEnd(t *testing.T, d *gotData, e *expectedData) { var ( ok bool st *stats.ConnEnd ) if st, ok = d.s.(*stats.ConnEnd); !ok { t.Fatalf("got %T, want ConnEnd", d.s) } if d.ctx == nil { t.Fatalf("d.ctx = nil, want ") } st.IsClient() // TODO remove this. 
} type statshandler struct { mu sync.Mutex gotRPC []*gotData gotConn []*gotData } func (h *statshandler) TagConn(ctx context.Context, info *stats.ConnTagInfo) context.Context { return context.WithValue(ctx, connCtxKey{}, info) } func (h *statshandler) TagRPC(ctx context.Context, info *stats.RPCTagInfo) context.Context { return context.WithValue(ctx, rpcCtxKey{}, info) } func (h *statshandler) HandleConn(ctx context.Context, s stats.ConnStats) { h.mu.Lock() defer h.mu.Unlock() h.gotConn = append(h.gotConn, &gotData{ctx, s.IsClient(), s}) } func (h *statshandler) HandleRPC(ctx context.Context, s stats.RPCStats) { h.mu.Lock() defer h.mu.Unlock() h.gotRPC = append(h.gotRPC, &gotData{ctx, s.IsClient(), s}) } func checkConnStats(t *testing.T, got []*gotData) { if len(got) <= 0 || len(got)%2 != 0 { for i, g := range got { t.Errorf(" - %v, %T = %+v, ctx: %v", i, g.s, g.s, g.ctx) } t.Fatalf("got %v stats, want even positive number", len(got)) } // The first conn stats must be a ConnBegin. checkConnBegin(t, got[0], nil) // The last conn stats must be a ConnEnd. checkConnEnd(t, got[len(got)-1], nil) } func checkServerStats(t *testing.T, got []*gotData, expect *expectedData, checkFuncs []func(t *testing.T, d *gotData, e *expectedData)) { if len(got) != len(checkFuncs) { for i, g := range got { t.Errorf(" - %v, %T", i, g.s) } t.Fatalf("got %v stats, want %v stats", len(got), len(checkFuncs)) } var rpcctx context.Context for i := 0; i < len(got); i++ { if _, ok := got[i].s.(stats.RPCStats); ok { if rpcctx != nil && got[i].ctx != rpcctx { t.Fatalf("got different contexts with stats %T", got[i].s) } rpcctx = got[i].ctx } } for i, f := range checkFuncs { f(t, got[i], expect) } } func testServerStats(t *testing.T, tc *testConfig, cc *rpcConfig, checkFuncs []func(t *testing.T, d *gotData, e *expectedData)) { h := &statshandler{} te := newTest(t, tc, nil, h) te.startServer(&testServer{}) defer te.tearDown() var ( reqs []*testpb.SimpleRequest resps []*testpb.SimpleResponse err error method string req *testpb.SimpleRequest resp *testpb.SimpleResponse e error ) switch cc.callType { case unaryRPC: method = "/grpc.testing.TestService/UnaryCall" req, resp, e = te.doUnaryCall(cc) reqs = []*testpb.SimpleRequest{req} resps = []*testpb.SimpleResponse{resp} err = e case clientStreamRPC: method = "/grpc.testing.TestService/ClientStreamCall" reqs, resp, e = te.doClientStreamCall(cc) resps = []*testpb.SimpleResponse{resp} err = e case serverStreamRPC: method = "/grpc.testing.TestService/ServerStreamCall" req, resps, e = te.doServerStreamCall(cc) reqs = []*testpb.SimpleRequest{req} err = e case fullDuplexStreamRPC: method = "/grpc.testing.TestService/FullDuplexCall" reqs, resps, err = te.doFullDuplexCallRoundtrip(cc) } if cc.success != (err == nil) { t.Fatalf("cc.success: %v, got error: %v", cc.success, err) } te.cc.Close() te.srv.GracefulStop() // Wait for the server to stop. 
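// Note (descriptive, added for clarity): stats events are delivered to the
// handler callbacks asynchronously, so the loops below poll with a short
// sleep until the expected number of RPC stats and the final *stats.ConnEnd
// event have been recorded before any assertions run.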
for { h.mu.Lock() if len(h.gotRPC) >= len(checkFuncs) { h.mu.Unlock() break } h.mu.Unlock() time.Sleep(10 * time.Millisecond) } for { h.mu.Lock() if _, ok := h.gotConn[len(h.gotConn)-1].s.(*stats.ConnEnd); ok { h.mu.Unlock() break } h.mu.Unlock() time.Sleep(10 * time.Millisecond) } expect := &expectedData{ serverAddr: te.srvAddr, compression: tc.compress, method: method, requests: reqs, responses: resps, err: err, } checkConnStats(t, h.gotConn) checkServerStats(t, h.gotRPC, expect, checkFuncs) } func TestServerStatsUnaryRPC(t *testing.T) { testServerStats(t, &testConfig{compress: ""}, &rpcConfig{success: true, callType: unaryRPC}, []func(t *testing.T, d *gotData, e *expectedData){ checkInHeader, checkBegin, checkInPayload, checkOutHeader, checkOutPayload, checkOutTrailer, checkEnd, }) } func TestServerStatsUnaryRPCError(t *testing.T) { testServerStats(t, &testConfig{compress: ""}, &rpcConfig{success: false, callType: unaryRPC}, []func(t *testing.T, d *gotData, e *expectedData){ checkInHeader, checkBegin, checkInPayload, checkOutHeader, checkOutTrailer, checkEnd, }) } func TestServerStatsClientStreamRPC(t *testing.T) { count := 5 checkFuncs := []func(t *testing.T, d *gotData, e *expectedData){ checkInHeader, checkBegin, checkOutHeader, } ioPayFuncs := []func(t *testing.T, d *gotData, e *expectedData){ checkInPayload, } for i := 0; i < count; i++ { checkFuncs = append(checkFuncs, ioPayFuncs...) } checkFuncs = append(checkFuncs, checkOutPayload, checkOutTrailer, checkEnd, ) testServerStats(t, &testConfig{compress: "gzip"}, &rpcConfig{count: count, success: true, callType: clientStreamRPC}, checkFuncs) } func TestServerStatsClientStreamRPCError(t *testing.T) { count := 1 testServerStats(t, &testConfig{compress: "gzip"}, &rpcConfig{count: count, success: false, callType: clientStreamRPC}, []func(t *testing.T, d *gotData, e *expectedData){ checkInHeader, checkBegin, checkOutHeader, checkInPayload, checkOutTrailer, checkEnd, }) } func TestServerStatsServerStreamRPC(t *testing.T) { count := 5 checkFuncs := []func(t *testing.T, d *gotData, e *expectedData){ checkInHeader, checkBegin, checkInPayload, checkOutHeader, } ioPayFuncs := []func(t *testing.T, d *gotData, e *expectedData){ checkOutPayload, } for i := 0; i < count; i++ { checkFuncs = append(checkFuncs, ioPayFuncs...) } checkFuncs = append(checkFuncs, checkOutTrailer, checkEnd, ) testServerStats(t, &testConfig{compress: "gzip"}, &rpcConfig{count: count, success: true, callType: serverStreamRPC}, checkFuncs) } func TestServerStatsServerStreamRPCError(t *testing.T) { count := 5 testServerStats(t, &testConfig{compress: "gzip"}, &rpcConfig{count: count, success: false, callType: serverStreamRPC}, []func(t *testing.T, d *gotData, e *expectedData){ checkInHeader, checkBegin, checkInPayload, checkOutHeader, checkOutTrailer, checkEnd, }) } func TestServerStatsFullDuplexRPC(t *testing.T) { count := 5 checkFuncs := []func(t *testing.T, d *gotData, e *expectedData){ checkInHeader, checkBegin, checkOutHeader, } ioPayFuncs := []func(t *testing.T, d *gotData, e *expectedData){ checkInPayload, checkOutPayload, } for i := 0; i < count; i++ { checkFuncs = append(checkFuncs, ioPayFuncs...) 
} checkFuncs = append(checkFuncs, checkOutTrailer, checkEnd, ) testServerStats(t, &testConfig{compress: "gzip"}, &rpcConfig{count: count, success: true, callType: fullDuplexStreamRPC}, checkFuncs) } func TestServerStatsFullDuplexRPCError(t *testing.T) { count := 5 testServerStats(t, &testConfig{compress: "gzip"}, &rpcConfig{count: count, success: false, callType: fullDuplexStreamRPC}, []func(t *testing.T, d *gotData, e *expectedData){ checkInHeader, checkBegin, checkOutHeader, checkInPayload, checkOutTrailer, checkEnd, }) } type checkFuncWithCount struct { f func(t *testing.T, d *gotData, e *expectedData) c int // expected count } func checkClientStats(t *testing.T, got []*gotData, expect *expectedData, checkFuncs map[int]*checkFuncWithCount) { var expectLen int for _, v := range checkFuncs { expectLen += v.c } if len(got) != expectLen { for i, g := range got { t.Errorf(" - %v, %T", i, g.s) } t.Fatalf("got %v stats, want %v stats", len(got), expectLen) } var tagInfoInCtx *stats.RPCTagInfo for i := 0; i < len(got); i++ { if _, ok := got[i].s.(stats.RPCStats); ok { tagInfoInCtxNew, _ := got[i].ctx.Value(rpcCtxKey{}).(*stats.RPCTagInfo) if tagInfoInCtx != nil && tagInfoInCtx != tagInfoInCtxNew { t.Fatalf("got context containing different tagInfo with stats %T", got[i].s) } tagInfoInCtx = tagInfoInCtxNew } } for _, s := range got { switch s.s.(type) { case *stats.Begin: if checkFuncs[begin].c <= 0 { t.Fatalf("unexpected stats: %T", s.s) } checkFuncs[begin].f(t, s, expect) checkFuncs[begin].c-- case *stats.OutHeader: if checkFuncs[outHeader].c <= 0 { t.Fatalf("unexpected stats: %T", s.s) } checkFuncs[outHeader].f(t, s, expect) checkFuncs[outHeader].c-- case *stats.OutPayload: if checkFuncs[outPayload].c <= 0 { t.Fatalf("unexpected stats: %T", s.s) } checkFuncs[outPayload].f(t, s, expect) checkFuncs[outPayload].c-- case *stats.InHeader: if checkFuncs[inHeader].c <= 0 { t.Fatalf("unexpected stats: %T", s.s) } checkFuncs[inHeader].f(t, s, expect) checkFuncs[inHeader].c-- case *stats.InPayload: if checkFuncs[inPayload].c <= 0 { t.Fatalf("unexpected stats: %T", s.s) } checkFuncs[inPayload].f(t, s, expect) checkFuncs[inPayload].c-- case *stats.InTrailer: if checkFuncs[inTrailer].c <= 0 { t.Fatalf("unexpected stats: %T", s.s) } checkFuncs[inTrailer].f(t, s, expect) checkFuncs[inTrailer].c-- case *stats.End: if checkFuncs[end].c <= 0 { t.Fatalf("unexpected stats: %T", s.s) } checkFuncs[end].f(t, s, expect) checkFuncs[end].c-- case *stats.ConnBegin: if checkFuncs[connbegin].c <= 0 { t.Fatalf("unexpected stats: %T", s.s) } checkFuncs[connbegin].f(t, s, expect) checkFuncs[connbegin].c-- case *stats.ConnEnd: if checkFuncs[connend].c <= 0 { t.Fatalf("unexpected stats: %T", s.s) } checkFuncs[connend].f(t, s, expect) checkFuncs[connend].c-- default: t.Fatalf("unexpected stats: %T", s.s) } } } func testClientStats(t *testing.T, tc *testConfig, cc *rpcConfig, checkFuncs map[int]*checkFuncWithCount) { h := &statshandler{} te := newTest(t, tc, h, nil) te.startServer(&testServer{}) defer te.tearDown() var ( reqs []*testpb.SimpleRequest resps []*testpb.SimpleResponse method string err error req *testpb.SimpleRequest resp *testpb.SimpleResponse e error ) switch cc.callType { case unaryRPC: method = "/grpc.testing.TestService/UnaryCall" req, resp, e = te.doUnaryCall(cc) reqs = []*testpb.SimpleRequest{req} resps = []*testpb.SimpleResponse{resp} err = e case clientStreamRPC: method = "/grpc.testing.TestService/ClientStreamCall" reqs, resp, e = te.doClientStreamCall(cc) resps = []*testpb.SimpleResponse{resp} err = e 
case serverStreamRPC: method = "/grpc.testing.TestService/ServerStreamCall" req, resps, e = te.doServerStreamCall(cc) reqs = []*testpb.SimpleRequest{req} err = e case fullDuplexStreamRPC: method = "/grpc.testing.TestService/FullDuplexCall" reqs, resps, err = te.doFullDuplexCallRoundtrip(cc) } if cc.success != (err == nil) { t.Fatalf("cc.success: %v, got error: %v", cc.success, err) } te.cc.Close() te.srv.GracefulStop() // Wait for the server to stop. lenRPCStats := 0 for _, v := range checkFuncs { lenRPCStats += v.c } for { h.mu.Lock() if len(h.gotRPC) >= lenRPCStats { h.mu.Unlock() break } h.mu.Unlock() time.Sleep(10 * time.Millisecond) } for { h.mu.Lock() if _, ok := h.gotConn[len(h.gotConn)-1].s.(*stats.ConnEnd); ok { h.mu.Unlock() break } h.mu.Unlock() time.Sleep(10 * time.Millisecond) } expect := &expectedData{ serverAddr: te.srvAddr, compression: tc.compress, method: method, requests: reqs, responses: resps, failfast: cc.failfast, err: err, } checkConnStats(t, h.gotConn) checkClientStats(t, h.gotRPC, expect, checkFuncs) } func TestClientStatsUnaryRPC(t *testing.T) { testClientStats(t, &testConfig{compress: ""}, &rpcConfig{success: true, failfast: false, callType: unaryRPC}, map[int]*checkFuncWithCount{ begin: {checkBegin, 1}, outHeader: {checkOutHeader, 1}, outPayload: {checkOutPayload, 1}, inHeader: {checkInHeader, 1}, inPayload: {checkInPayload, 1}, inTrailer: {checkInTrailer, 1}, end: {checkEnd, 1}, }) } func TestClientStatsUnaryRPCError(t *testing.T) { testClientStats(t, &testConfig{compress: ""}, &rpcConfig{success: false, failfast: false, callType: unaryRPC}, map[int]*checkFuncWithCount{ begin: {checkBegin, 1}, outHeader: {checkOutHeader, 1}, outPayload: {checkOutPayload, 1}, inHeader: {checkInHeader, 1}, inTrailer: {checkInTrailer, 1}, end: {checkEnd, 1}, }) } func TestClientStatsClientStreamRPC(t *testing.T) { count := 5 testClientStats(t, &testConfig{compress: "gzip"}, &rpcConfig{count: count, success: true, failfast: false, callType: clientStreamRPC}, map[int]*checkFuncWithCount{ begin: {checkBegin, 1}, outHeader: {checkOutHeader, 1}, inHeader: {checkInHeader, 1}, outPayload: {checkOutPayload, count}, inTrailer: {checkInTrailer, 1}, inPayload: {checkInPayload, 1}, end: {checkEnd, 1}, }) } func TestClientStatsClientStreamRPCError(t *testing.T) { count := 1 testClientStats(t, &testConfig{compress: "gzip"}, &rpcConfig{count: count, success: false, failfast: false, callType: clientStreamRPC}, map[int]*checkFuncWithCount{ begin: {checkBegin, 1}, outHeader: {checkOutHeader, 1}, inHeader: {checkInHeader, 1}, outPayload: {checkOutPayload, 1}, inTrailer: {checkInTrailer, 1}, end: {checkEnd, 1}, }) } func TestClientStatsServerStreamRPC(t *testing.T) { count := 5 testClientStats(t, &testConfig{compress: "gzip"}, &rpcConfig{count: count, success: true, failfast: false, callType: serverStreamRPC}, map[int]*checkFuncWithCount{ begin: {checkBegin, 1}, outHeader: {checkOutHeader, 1}, outPayload: {checkOutPayload, 1}, inHeader: {checkInHeader, 1}, inPayload: {checkInPayload, count}, inTrailer: {checkInTrailer, 1}, end: {checkEnd, 1}, }) } func TestClientStatsServerStreamRPCError(t *testing.T) { count := 5 testClientStats(t, &testConfig{compress: "gzip"}, &rpcConfig{count: count, success: false, failfast: false, callType: serverStreamRPC}, map[int]*checkFuncWithCount{ begin: {checkBegin, 1}, outHeader: {checkOutHeader, 1}, outPayload: {checkOutPayload, 1}, inHeader: {checkInHeader, 1}, inTrailer: {checkInTrailer, 1}, end: {checkEnd, 1}, }) } func TestClientStatsFullDuplexRPC(t *testing.T) { 
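	// For a successful full-duplex RPC the client-side handler should observe
	// exactly one Begin, OutHeader, InHeader, InTrailer and End event, plus
	// `count` OutPayload and `count` InPayload events, matching the expected
	// counts listed below.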
count := 5 testClientStats(t, &testConfig{compress: "gzip"}, &rpcConfig{count: count, success: true, failfast: false, callType: fullDuplexStreamRPC}, map[int]*checkFuncWithCount{ begin: {checkBegin, 1}, outHeader: {checkOutHeader, 1}, outPayload: {checkOutPayload, count}, inHeader: {checkInHeader, 1}, inPayload: {checkInPayload, count}, inTrailer: {checkInTrailer, 1}, end: {checkEnd, 1}, }) } func TestClientStatsFullDuplexRPCError(t *testing.T) { count := 5 testClientStats(t, &testConfig{compress: "gzip"}, &rpcConfig{count: count, success: false, failfast: false, callType: fullDuplexStreamRPC}, map[int]*checkFuncWithCount{ begin: {checkBegin, 1}, outHeader: {checkOutHeader, 1}, outPayload: {checkOutPayload, 1}, inHeader: {checkInHeader, 1}, inTrailer: {checkInTrailer, 1}, end: {checkEnd, 1}, }) } // If the user doesn't call the last recv() on clientStream. func TestClientStatsFullDuplexRPCNotCallingLastRecv(t *testing.T) { count := 1 testClientStats(t, &testConfig{compress: "gzip"}, &rpcConfig{count: count, success: true, failfast: false, callType: fullDuplexStreamRPC, noLastRecv: true}, map[int]*checkFuncWithCount{ begin: {checkBegin, 1}, outHeader: {checkOutHeader, 1}, outPayload: {checkOutPayload, count}, inHeader: {checkInHeader, 1}, inPayload: {checkInPayload, count}, inTrailer: {checkInTrailer, 1}, end: {checkEnd, 1}, }) } func TestTags(t *testing.T) { b := []byte{5, 2, 4, 3, 1} ctx := stats.SetTags(context.Background(), b) if tg := stats.OutgoingTags(ctx); !reflect.DeepEqual(tg, b) { t.Errorf("OutgoingTags(%v) = %v; want %v", ctx, tg, b) } if tg := stats.Tags(ctx); tg != nil { t.Errorf("Tags(%v) = %v; want nil", ctx, tg) } ctx = stats.SetIncomingTags(context.Background(), b) if tg := stats.Tags(ctx); !reflect.DeepEqual(tg, b) { t.Errorf("Tags(%v) = %v; want %v", ctx, tg, b) } if tg := stats.OutgoingTags(ctx); tg != nil { t.Errorf("OutgoingTags(%v) = %v; want nil", ctx, tg) } } func TestTrace(t *testing.T) { b := []byte{5, 2, 4, 3, 1} ctx := stats.SetTrace(context.Background(), b) if tr := stats.OutgoingTrace(ctx); !reflect.DeepEqual(tr, b) { t.Errorf("OutgoingTrace(%v) = %v; want %v", ctx, tr, b) } if tr := stats.Trace(ctx); tr != nil { t.Errorf("Trace(%v) = %v; want nil", ctx, tr) } ctx = stats.SetIncomingTrace(context.Background(), b) if tr := stats.Trace(ctx); !reflect.DeepEqual(tr, b) { t.Errorf("Trace(%v) = %v; want %v", ctx, tr, b) } if tr := stats.OutgoingTrace(ctx); tr != nil { t.Errorf("OutgoingTrace(%v) = %v; want nil", ctx, tr) } } golang-google-grpc-1.6.0/status/000077500000000000000000000000001315416461300165175ustar00rootroot00000000000000golang-google-grpc-1.6.0/status/status.go000066400000000000000000000116371315416461300204010ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package status implements errors returned by gRPC. These errors are // serialized and transmitted on the wire between server and client, and allow // for additional data to be transmitted via the Details field in the status // proto. 
gRPC service handlers should return an error created by this // package, and gRPC clients should expect a corresponding error to be // returned from the RPC call. // // This package upholds the invariants that a non-nil error may not // contain an OK code, and an OK code must result in a nil error. package status import ( "errors" "fmt" "github.com/golang/protobuf/proto" "github.com/golang/protobuf/ptypes" spb "google.golang.org/genproto/googleapis/rpc/status" "google.golang.org/grpc/codes" ) // statusError is an alias of a status proto. It implements error and Status, // and a nil statusError should never be returned by this package. type statusError spb.Status func (se *statusError) Error() string { p := (*spb.Status)(se) return fmt.Sprintf("rpc error: code = %s desc = %s", codes.Code(p.GetCode()), p.GetMessage()) } func (se *statusError) status() *Status { return &Status{s: (*spb.Status)(se)} } // Status represents an RPC status code, message, and details. It is immutable // and should be created with New, Newf, or FromProto. type Status struct { s *spb.Status } // Code returns the status code contained in s. func (s *Status) Code() codes.Code { if s == nil || s.s == nil { return codes.OK } return codes.Code(s.s.Code) } // Message returns the message contained in s. func (s *Status) Message() string { if s == nil || s.s == nil { return "" } return s.s.Message } // Proto returns s's status as an spb.Status proto message. func (s *Status) Proto() *spb.Status { if s == nil { return nil } return proto.Clone(s.s).(*spb.Status) } // Err returns an immutable error representing s; returns nil if s.Code() is // OK. func (s *Status) Err() error { if s.Code() == codes.OK { return nil } return (*statusError)(s.s) } // New returns a Status representing c and msg. func New(c codes.Code, msg string) *Status { return &Status{s: &spb.Status{Code: int32(c), Message: msg}} } // Newf returns New(c, fmt.Sprintf(format, a...)). func Newf(c codes.Code, format string, a ...interface{}) *Status { return New(c, fmt.Sprintf(format, a...)) } // Error returns an error representing c and msg. If c is OK, returns nil. func Error(c codes.Code, msg string) error { return New(c, msg).Err() } // Errorf returns Error(c, fmt.Sprintf(format, a...)). func Errorf(c codes.Code, format string, a ...interface{}) error { return Error(c, fmt.Sprintf(format, a...)) } // ErrorProto returns an error representing s. If s.Code is OK, returns nil. func ErrorProto(s *spb.Status) error { return FromProto(s).Err() } // FromProto returns a Status representing s. func FromProto(s *spb.Status) *Status { return &Status{s: proto.Clone(s).(*spb.Status)} } // FromError returns a Status representing err if it was produced from this // package, otherwise it returns nil, false. func FromError(err error) (s *Status, ok bool) { if err == nil { return &Status{s: &spb.Status{Code: int32(codes.OK)}}, true } if s, ok := err.(*statusError); ok { return s.status(), true } return nil, false } // WithDetails returns a new status with the provided details messages appended to the status. // If any errors are encountered, it returns nil and the first error encountered. func (s *Status) WithDetails(details ...proto.Message) (*Status, error) { if s.Code() == codes.OK { return nil, errors.New("no error details for status with code OK") } // s.Code() != OK implies that s.Proto() != nil. 
p := s.Proto() for _, detail := range details { any, err := ptypes.MarshalAny(detail) if err != nil { return nil, err } p.Details = append(p.Details, any) } return &Status{s: p}, nil } // Details returns a slice of details messages attached to the status. // If a detail cannot be decoded, the error is returned in place of the detail. func (s *Status) Details() []interface{} { if s == nil || s.s == nil { return nil } details := make([]interface{}, 0, len(s.s.Details)) for _, any := range s.s.Details { detail := &ptypes.DynamicAny{} if err := ptypes.UnmarshalAny(any, detail); err != nil { details = append(details, err) continue } details = append(details, detail.Message) } return details } golang-google-grpc-1.6.0/status/status_test.go000066400000000000000000000147131315416461300214360ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package status import ( "errors" "fmt" "reflect" "testing" "github.com/golang/protobuf/proto" "github.com/golang/protobuf/ptypes" apb "github.com/golang/protobuf/ptypes/any" dpb "github.com/golang/protobuf/ptypes/duration" cpb "google.golang.org/genproto/googleapis/rpc/code" epb "google.golang.org/genproto/googleapis/rpc/errdetails" spb "google.golang.org/genproto/googleapis/rpc/status" "google.golang.org/grpc/codes" ) func TestErrorsWithSameParameters(t *testing.T) { const description = "some description" e1 := Errorf(codes.AlreadyExists, description) e2 := Errorf(codes.AlreadyExists, description) if e1 == e2 || !reflect.DeepEqual(e1, e2) { t.Fatalf("Errors should be equivalent but unique - e1: %v, %v e2: %p, %v", e1.(*statusError), e1, e2.(*statusError), e2) } } func TestFromToProto(t *testing.T) { s := &spb.Status{ Code: int32(codes.Internal), Message: "test test test", Details: []*apb.Any{{TypeUrl: "foo", Value: []byte{3, 2, 1}}}, } err := FromProto(s) if got := err.Proto(); !proto.Equal(s, got) { t.Fatalf("Expected errors to be identical - s: %v got: %v", s, got) } } func TestFromNilProto(t *testing.T) { tests := []*Status{nil, FromProto(nil)} for _, s := range tests { if c := s.Code(); c != codes.OK { t.Errorf("s: %v - Expected s.Code() = OK; got %v", s, c) } if m := s.Message(); m != "" { t.Errorf("s: %v - Expected s.Message() = \"\"; got %q", s, m) } if p := s.Proto(); p != nil { t.Errorf("s: %v - Expected s.Proto() = nil; got %q", s, p) } if e := s.Err(); e != nil { t.Errorf("s: %v - Expected s.Err() = nil; got %v", s, e) } } } func TestError(t *testing.T) { err := Error(codes.Internal, "test description") if got, want := err.Error(), "rpc error: code = Internal desc = test description"; got != want { t.Fatalf("err.Error() = %q; want %q", got, want) } s, _ := FromError(err) if got, want := s.Code(), codes.Internal; got != want { t.Fatalf("err.Code() = %s; want %s", got, want) } if got, want := s.Message(), "test description"; got != want { t.Fatalf("err.Message() = %s; want %s", got, want) } } func TestErrorOK(t *testing.T) { err := Error(codes.OK, "foo") if err != nil { t.Fatalf("Error(codes.OK, _) = 
%p; want nil", err.(*statusError)) } } func TestErrorProtoOK(t *testing.T) { s := &spb.Status{Code: int32(codes.OK)} if got := ErrorProto(s); got != nil { t.Fatalf("ErrorProto(%v) = %v; want nil", s, got) } } func TestFromError(t *testing.T) { code, message := codes.Internal, "test description" err := Error(code, message) s, ok := FromError(err) if !ok || s.Code() != code || s.Message() != message || s.Err() == nil { t.Fatalf("FromError(%v) = %v, %v; want , true", err, s, ok, code, message) } } func TestFromErrorOK(t *testing.T) { code, message := codes.OK, "" s, ok := FromError(nil) if !ok || s.Code() != code || s.Message() != message || s.Err() != nil { t.Fatalf("FromError(nil) = %v, %v; want , true", s, ok, code, message) } } func TestStatus_ErrorDetails(t *testing.T) { tests := []struct { code codes.Code details []proto.Message }{ { code: codes.NotFound, details: nil, }, { code: codes.NotFound, details: []proto.Message{ &epb.ResourceInfo{ ResourceType: "book", ResourceName: "projects/1234/books/5678", Owner: "User", }, }, }, { code: codes.Internal, details: []proto.Message{ &epb.DebugInfo{ StackEntries: []string{ "first stack", "second stack", }, }, }, }, { code: codes.Unavailable, details: []proto.Message{ &epb.RetryInfo{ RetryDelay: &dpb.Duration{Seconds: 60}, }, &epb.ResourceInfo{ ResourceType: "book", ResourceName: "projects/1234/books/5678", Owner: "User", }, }, }, } for _, tc := range tests { s, err := New(tc.code, "").WithDetails(tc.details...) if err != nil { t.Fatalf("(%v).WithDetails(%+v) failed: %v", str(s), tc.details, err) } details := s.Details() for i := range details { if !proto.Equal(details[i].(proto.Message), tc.details[i]) { t.Fatalf("(%v).Details()[%d] = %+v, want %+v", str(s), i, details[i], tc.details[i]) } } } } func TestStatus_WithDetails_Fail(t *testing.T) { tests := []*Status{ nil, FromProto(nil), New(codes.OK, ""), } for _, s := range tests { if s, err := s.WithDetails(); err == nil || s != nil { t.Fatalf("(%v).WithDetails(%+v) = %v, %v; want nil, non-nil", str(s), []proto.Message{}, s, err) } } } func TestStatus_ErrorDetails_Fail(t *testing.T) { tests := []struct { s *Status i []interface{} }{ { nil, nil, }, { FromProto(nil), nil, }, { New(codes.OK, ""), []interface{}{}, }, { FromProto(&spb.Status{ Code: int32(cpb.Code_CANCELLED), Details: []*apb.Any{ { TypeUrl: "", Value: []byte{}, }, mustMarshalAny(&epb.ResourceInfo{ ResourceType: "book", ResourceName: "projects/1234/books/5678", Owner: "User", }), }, }), []interface{}{ errors.New(`message type url "" is invalid`), &epb.ResourceInfo{ ResourceType: "book", ResourceName: "projects/1234/books/5678", Owner: "User", }, }, }, } for _, tc := range tests { got := tc.s.Details() if !reflect.DeepEqual(got, tc.i) { t.Errorf("(%v).Details() = %+v, want %+v", str(tc.s), got, tc.i) } } } func str(s *Status) string { if s == nil { return "nil" } if s.s == nil { return "" } return fmt.Sprintf("", codes.Code(s.s.GetCode()), s.s.GetMessage(), s.s.GetDetails()) } // mustMarshalAny converts a protobuf message to an any. func mustMarshalAny(msg proto.Message) *apb.Any { any, err := ptypes.MarshalAny(msg) if err != nil { panic(fmt.Sprintf("ptypes.MarshalAny(%+v) failed: %v", msg, err)) } return any } golang-google-grpc-1.6.0/stream.go000066400000000000000000000451311315416461300170220ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "bytes" "errors" "io" "sync" "time" "golang.org/x/net/context" "golang.org/x/net/trace" "google.golang.org/grpc/codes" "google.golang.org/grpc/metadata" "google.golang.org/grpc/peer" "google.golang.org/grpc/stats" "google.golang.org/grpc/status" "google.golang.org/grpc/transport" ) // StreamHandler defines the handler called by gRPC server to complete the // execution of a streaming RPC. type StreamHandler func(srv interface{}, stream ServerStream) error // StreamDesc represents a streaming RPC service's method specification. type StreamDesc struct { StreamName string Handler StreamHandler // At least one of these is true. ServerStreams bool ClientStreams bool } // Stream defines the common interface a client or server stream has to satisfy. type Stream interface { // Context returns the context for this stream. Context() context.Context // SendMsg blocks until it sends m, the stream is done or the stream // breaks. // On error, it aborts the stream and returns an RPC status on client // side. On server side, it simply returns the error to the caller. // SendMsg is called by generated code. Also Users can call SendMsg // directly when it is really needed in their use cases. // It's safe to have a goroutine calling SendMsg and another goroutine calling // recvMsg on the same stream at the same time. // But it is not safe to call SendMsg on the same stream in different goroutines. SendMsg(m interface{}) error // RecvMsg blocks until it receives a message or the stream is // done. On client side, it returns io.EOF when the stream is done. On // any other error, it aborts the stream and returns an RPC status. On // server side, it simply returns the error to the caller. // It's safe to have a goroutine calling SendMsg and another goroutine calling // recvMsg on the same stream at the same time. // But it is not safe to call RecvMsg on the same stream in different goroutines. RecvMsg(m interface{}) error } // ClientStream defines the interface a client stream has to satisfy. type ClientStream interface { // Header returns the header metadata received from the server if there // is any. It blocks if the metadata is not ready to read. Header() (metadata.MD, error) // Trailer returns the trailer metadata from the server, if there is any. // It must only be called after stream.CloseAndRecv has returned, or // stream.Recv has returned a non-nil error (including io.EOF). Trailer() metadata.MD // CloseSend closes the send direction of the stream. It closes the stream // when non-nil error is met. CloseSend() error // Stream.SendMsg() may return a non-nil error when something wrong happens sending // the request. The returned error indicates the status of this sending, not the final // status of the RPC. // Always call Stream.RecvMsg() to get the final status if you care about the status of // the RPC. Stream } // NewClientStream creates a new Stream for the client side. This is called // by generated code. 
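// A rough sketch of how a generated streaming stub is expected to call this
// function (the method name and StreamDesc values below are illustrative
// examples only, not definitions from this package):
//
//	stream, err := grpc.NewClientStream(ctx, &grpc.StreamDesc{
//		StreamName:    "FullDuplexCall",
//		ServerStreams: true,
//		ClientStreams: true,
//	}, cc, "/grpc.testing.TestService/FullDuplexCall", opts...)
//	if err != nil {
//		// handle error
//	}
//	// The generated wrapper then exposes typed Send/Recv methods that call
//	// stream.SendMsg and stream.RecvMsg with concrete proto messages.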
func NewClientStream(ctx context.Context, desc *StreamDesc, cc *ClientConn, method string, opts ...CallOption) (_ ClientStream, err error) { if cc.dopts.streamInt != nil { return cc.dopts.streamInt(ctx, desc, cc, method, newClientStream, opts...) } return newClientStream(ctx, desc, cc, method, opts...) } func newClientStream(ctx context.Context, desc *StreamDesc, cc *ClientConn, method string, opts ...CallOption) (_ ClientStream, err error) { var ( t transport.ClientTransport s *transport.Stream put func() cancel context.CancelFunc ) c := defaultCallInfo mc := cc.GetMethodConfig(method) if mc.WaitForReady != nil { c.failFast = !*mc.WaitForReady } if mc.Timeout != nil { ctx, cancel = context.WithTimeout(ctx, *mc.Timeout) defer func() { if err != nil { cancel() } }() } opts = append(cc.dopts.callOptions, opts...) for _, o := range opts { if err := o.before(&c); err != nil { return nil, toRPCErr(err) } } c.maxSendMessageSize = getMaxSize(mc.MaxReqSize, c.maxSendMessageSize, defaultClientMaxSendMessageSize) c.maxReceiveMessageSize = getMaxSize(mc.MaxRespSize, c.maxReceiveMessageSize, defaultClientMaxReceiveMessageSize) callHdr := &transport.CallHdr{ Host: cc.authority, Method: method, // If it's not client streaming, we should already have the request to be sent, // so we don't flush the header. // If it's client streaming, the user may never send a request or send it any // time soon, so we ask the transport to flush the header. Flush: desc.ClientStreams, } if cc.dopts.cp != nil { callHdr.SendCompress = cc.dopts.cp.Type() } if c.creds != nil { callHdr.Creds = c.creds } var trInfo traceInfo if EnableTracing { trInfo.tr = trace.New("grpc.Sent."+methodFamily(method), method) trInfo.firstLine.client = true if deadline, ok := ctx.Deadline(); ok { trInfo.firstLine.deadline = deadline.Sub(time.Now()) } trInfo.tr.LazyLog(&trInfo.firstLine, false) ctx = trace.NewContext(ctx, trInfo.tr) defer func() { if err != nil { // Need to call tr.finish() if error is returned. // Because tr will not be returned to caller. trInfo.tr.LazyPrintf("RPC: [%v]", err) trInfo.tr.SetError() trInfo.tr.Finish() } }() } ctx = newContextWithRPCInfo(ctx) sh := cc.dopts.copts.StatsHandler if sh != nil { ctx = sh.TagRPC(ctx, &stats.RPCTagInfo{FullMethodName: method, FailFast: c.failFast}) begin := &stats.Begin{ Client: true, BeginTime: time.Now(), FailFast: c.failFast, } sh.HandleRPC(ctx, begin) defer func() { if err != nil { // Only handle end stats if err != nil. end := &stats.End{ Client: true, Error: err, } sh.HandleRPC(ctx, end) } }() } gopts := BalancerGetOptions{ BlockingWait: !c.failFast, } for { t, put, err = cc.getTransport(ctx, gopts) if err != nil { // TODO(zhaoq): Probably revisit the error handling. if _, ok := status.FromError(err); ok { return nil, err } if err == errConnClosing || err == errConnUnavailable { if c.failFast { return nil, Errorf(codes.Unavailable, "%v", err) } continue } // All the other errors are treated as Internal errors. return nil, Errorf(codes.Internal, "%v", err) } s, err = t.NewStream(ctx, callHdr) if err != nil { if _, ok := err.(transport.ConnectionError); ok && put != nil { // If error is connection error, transport was sending data on wire, // and we are not sure if anything has been sent on wire. // If error is not connection error, we are sure nothing has been sent. 
updateRPCInfoInContext(ctx, rpcInfo{bytesSent: true, bytesReceived: false}) } if put != nil { put() put = nil } if _, ok := err.(transport.ConnectionError); (ok || err == transport.ErrStreamDrain) && !c.failFast { continue } return nil, toRPCErr(err) } break } // Set callInfo.peer object from stream's context. if peer, ok := peer.FromContext(s.Context()); ok { c.peer = peer } cs := &clientStream{ opts: opts, c: c, desc: desc, codec: cc.dopts.codec, cp: cc.dopts.cp, dc: cc.dopts.dc, cancel: cancel, put: put, t: t, s: s, p: &parser{r: s}, tracing: EnableTracing, trInfo: trInfo, statsCtx: ctx, statsHandler: cc.dopts.copts.StatsHandler, } if cc.dopts.cp != nil { cs.cbuf = new(bytes.Buffer) } // Listen on ctx.Done() to detect cancellation and s.Done() to detect normal termination // when there is no pending I/O operations on this stream. go func() { select { case <-t.Error(): // Incur transport error, simply exit. case <-cc.ctx.Done(): cs.finish(ErrClientConnClosing) cs.closeTransportStream(ErrClientConnClosing) case <-s.Done(): // TODO: The trace of the RPC is terminated here when there is no pending // I/O, which is probably not the optimal solution. cs.finish(s.Status().Err()) cs.closeTransportStream(nil) case <-s.GoAway(): cs.finish(errConnDrain) cs.closeTransportStream(errConnDrain) case <-s.Context().Done(): err := s.Context().Err() cs.finish(err) cs.closeTransportStream(transport.ContextErr(err)) } }() return cs, nil } // clientStream implements a client side Stream. type clientStream struct { opts []CallOption c callInfo t transport.ClientTransport s *transport.Stream p *parser desc *StreamDesc codec Codec cp Compressor cbuf *bytes.Buffer dc Decompressor cancel context.CancelFunc tracing bool // set to EnableTracing when the clientStream is created. mu sync.Mutex put func() closed bool finished bool // trInfo.tr is set when the clientStream is created (if EnableTracing is true), // and is set to nil when the clientStream's finish method is called. trInfo traceInfo // statsCtx keeps the user context for stats handling. // All stats collection should use the statsCtx (instead of the stream context) // so that all the generated stats for a particular RPC can be associated in the processing phase. statsCtx context.Context statsHandler stats.Handler } func (cs *clientStream) Context() context.Context { return cs.s.Context() } func (cs *clientStream) Header() (metadata.MD, error) { m, err := cs.s.Header() if err != nil { if _, ok := err.(transport.ConnectionError); !ok { cs.closeTransportStream(err) } } return m, err } func (cs *clientStream) Trailer() metadata.MD { return cs.s.Trailer() } func (cs *clientStream) SendMsg(m interface{}) (err error) { if cs.tracing { cs.mu.Lock() if cs.trInfo.tr != nil { cs.trInfo.tr.LazyLog(&payload{sent: true, msg: m}, true) } cs.mu.Unlock() } // TODO Investigate how to signal the stats handling party. // generate error stats if err != nil && err != io.EOF? defer func() { if err != nil { cs.finish(err) } if err == nil { return } if err == io.EOF { // Specialize the process for server streaming. SendMsg is only called // once when creating the stream object. io.EOF needs to be skipped when // the rpc is early finished (before the stream object is created.). // TODO: It is probably better to move this into the generated code. 
if !cs.desc.ClientStreams && cs.desc.ServerStreams { err = nil } return } if _, ok := err.(transport.ConnectionError); !ok { cs.closeTransportStream(err) } err = toRPCErr(err) }() var outPayload *stats.OutPayload if cs.statsHandler != nil { outPayload = &stats.OutPayload{ Client: true, } } hdr, data, err := encode(cs.codec, m, cs.cp, cs.cbuf, outPayload) defer func() { if cs.cbuf != nil { cs.cbuf.Reset() } }() if err != nil { return err } if cs.c.maxSendMessageSize == nil { return Errorf(codes.Internal, "callInfo maxSendMessageSize field uninitialized(nil)") } if len(data) > *cs.c.maxSendMessageSize { return Errorf(codes.ResourceExhausted, "trying to send message larger than max (%d vs. %d)", len(data), *cs.c.maxSendMessageSize) } err = cs.t.Write(cs.s, hdr, data, &transport.Options{Last: false}) if err == nil && outPayload != nil { outPayload.SentTime = time.Now() cs.statsHandler.HandleRPC(cs.statsCtx, outPayload) } return err } func (cs *clientStream) RecvMsg(m interface{}) (err error) { var inPayload *stats.InPayload if cs.statsHandler != nil { inPayload = &stats.InPayload{ Client: true, } } if cs.c.maxReceiveMessageSize == nil { return Errorf(codes.Internal, "callInfo maxReceiveMessageSize field uninitialized(nil)") } err = recv(cs.p, cs.codec, cs.s, cs.dc, m, *cs.c.maxReceiveMessageSize, inPayload) defer func() { // err != nil indicates the termination of the stream. if err != nil { cs.finish(err) } }() if err == nil { if cs.tracing { cs.mu.Lock() if cs.trInfo.tr != nil { cs.trInfo.tr.LazyLog(&payload{sent: false, msg: m}, true) } cs.mu.Unlock() } if inPayload != nil { cs.statsHandler.HandleRPC(cs.statsCtx, inPayload) } if !cs.desc.ClientStreams || cs.desc.ServerStreams { return } // Special handling for client streaming rpc. // This recv expects EOF or errors, so we don't collect inPayload. if cs.c.maxReceiveMessageSize == nil { return Errorf(codes.Internal, "callInfo maxReceiveMessageSize field uninitialized(nil)") } err = recv(cs.p, cs.codec, cs.s, cs.dc, m, *cs.c.maxReceiveMessageSize, nil) cs.closeTransportStream(err) if err == nil { return toRPCErr(errors.New("grpc: client streaming protocol violation: get , want ")) } if err == io.EOF { if se := cs.s.Status().Err(); se != nil { return se } cs.finish(err) return nil } return toRPCErr(err) } if _, ok := err.(transport.ConnectionError); !ok { cs.closeTransportStream(err) } if err == io.EOF { if statusErr := cs.s.Status().Err(); statusErr != nil { return statusErr } // Returns io.EOF to indicate the end of the stream. 
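		// err is a named return value and already holds io.EOF at this point,
		// so the bare return below propagates io.EOF to the caller unchanged.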
return } return toRPCErr(err) } func (cs *clientStream) CloseSend() (err error) { err = cs.t.Write(cs.s, nil, nil, &transport.Options{Last: true}) defer func() { if err != nil { cs.finish(err) } }() if err == nil || err == io.EOF { return nil } if _, ok := err.(transport.ConnectionError); !ok { cs.closeTransportStream(err) } err = toRPCErr(err) return } func (cs *clientStream) closeTransportStream(err error) { cs.mu.Lock() if cs.closed { cs.mu.Unlock() return } cs.closed = true cs.mu.Unlock() cs.t.CloseStream(cs.s, err) } func (cs *clientStream) finish(err error) { cs.mu.Lock() defer cs.mu.Unlock() if cs.finished { return } cs.finished = true defer func() { if cs.cancel != nil { cs.cancel() } }() for _, o := range cs.opts { o.after(&cs.c) } if cs.put != nil { updateRPCInfoInContext(cs.s.Context(), rpcInfo{ bytesSent: cs.s.BytesSent(), bytesReceived: cs.s.BytesReceived(), }) cs.put() cs.put = nil } if cs.statsHandler != nil { end := &stats.End{ Client: true, EndTime: time.Now(), } if err != io.EOF { // end.Error is nil if the RPC finished successfully. end.Error = toRPCErr(err) } cs.statsHandler.HandleRPC(cs.statsCtx, end) } if !cs.tracing { return } if cs.trInfo.tr != nil { if err == nil || err == io.EOF { cs.trInfo.tr.LazyPrintf("RPC: [OK]") } else { cs.trInfo.tr.LazyPrintf("RPC: [%v]", err) cs.trInfo.tr.SetError() } cs.trInfo.tr.Finish() cs.trInfo.tr = nil } } // ServerStream defines the interface a server stream has to satisfy. type ServerStream interface { // SetHeader sets the header metadata. It may be called multiple times. // When call multiple times, all the provided metadata will be merged. // All the metadata will be sent out when one of the following happens: // - ServerStream.SendHeader() is called; // - The first response is sent out; // - An RPC status is sent out (error or success). SetHeader(metadata.MD) error // SendHeader sends the header metadata. // The provided md and headers set by SetHeader() will be sent. // It fails if called multiple times. SendHeader(metadata.MD) error // SetTrailer sets the trailer metadata which will be sent with the RPC status. // When called more than once, all the provided metadata will be merged. SetTrailer(metadata.MD) Stream } // serverStream implements a server side Stream. type serverStream struct { t transport.ServerTransport s *transport.Stream p *parser codec Codec cp Compressor dc Decompressor cbuf *bytes.Buffer maxReceiveMessageSize int maxSendMessageSize int trInfo *traceInfo statsHandler stats.Handler mu sync.Mutex // protects trInfo.tr after the service handler runs. 
} func (ss *serverStream) Context() context.Context { return ss.s.Context() } func (ss *serverStream) SetHeader(md metadata.MD) error { if md.Len() == 0 { return nil } return ss.s.SetHeader(md) } func (ss *serverStream) SendHeader(md metadata.MD) error { return ss.t.WriteHeader(ss.s, md) } func (ss *serverStream) SetTrailer(md metadata.MD) { if md.Len() == 0 { return } ss.s.SetTrailer(md) return } func (ss *serverStream) SendMsg(m interface{}) (err error) { defer func() { if ss.trInfo != nil { ss.mu.Lock() if ss.trInfo.tr != nil { if err == nil { ss.trInfo.tr.LazyLog(&payload{sent: true, msg: m}, true) } else { ss.trInfo.tr.LazyLog(&fmtStringer{"%v", []interface{}{err}}, true) ss.trInfo.tr.SetError() } } ss.mu.Unlock() } if err != nil && err != io.EOF { st, _ := status.FromError(toRPCErr(err)) ss.t.WriteStatus(ss.s, st) } }() var outPayload *stats.OutPayload if ss.statsHandler != nil { outPayload = &stats.OutPayload{} } hdr, data, err := encode(ss.codec, m, ss.cp, ss.cbuf, outPayload) defer func() { if ss.cbuf != nil { ss.cbuf.Reset() } }() if err != nil { return err } if len(data) > ss.maxSendMessageSize { return Errorf(codes.ResourceExhausted, "trying to send message larger than max (%d vs. %d)", len(data), ss.maxSendMessageSize) } if err := ss.t.Write(ss.s, hdr, data, &transport.Options{Last: false}); err != nil { return toRPCErr(err) } if outPayload != nil { outPayload.SentTime = time.Now() ss.statsHandler.HandleRPC(ss.s.Context(), outPayload) } return nil } func (ss *serverStream) RecvMsg(m interface{}) (err error) { defer func() { if ss.trInfo != nil { ss.mu.Lock() if ss.trInfo.tr != nil { if err == nil { ss.trInfo.tr.LazyLog(&payload{sent: false, msg: m}, true) } else if err != io.EOF { ss.trInfo.tr.LazyLog(&fmtStringer{"%v", []interface{}{err}}, true) ss.trInfo.tr.SetError() } } ss.mu.Unlock() } if err != nil && err != io.EOF { st, _ := status.FromError(toRPCErr(err)) ss.t.WriteStatus(ss.s, st) } }() var inPayload *stats.InPayload if ss.statsHandler != nil { inPayload = &stats.InPayload{} } if err := recv(ss.p, ss.codec, ss.s, ss.dc, m, ss.maxReceiveMessageSize, inPayload); err != nil { if err == io.EOF { return err } if err == io.ErrUnexpectedEOF { err = Errorf(codes.Internal, io.ErrUnexpectedEOF.Error()) } return toRPCErr(err) } if inPayload != nil { ss.statsHandler.HandleRPC(ss.s.Context(), inPayload) } return nil } golang-google-grpc-1.6.0/stress/000077500000000000000000000000001315416461300165175ustar00rootroot00000000000000golang-google-grpc-1.6.0/stress/client/000077500000000000000000000000001315416461300177755ustar00rootroot00000000000000golang-google-grpc-1.6.0/stress/client/main.go000066400000000000000000000246701315416461300212610ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ //go:generate protoc -I ../grpc_testing --go_out=plugins=grpc:../grpc_testing ../grpc_testing/metrics.proto // client starts an interop client to do stress test and a metrics server to report qps. 
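//
// An illustrative invocation (the flag values below are examples only):
//
//	go run main.go -server_addresses=localhost:8080 \
//		-test_cases=empty_unary:20,large_unary:20,ping_pong:60 \
//		-test_duration_secs=60 -metrics_port=8081
//
// Each -test_cases entry has the form "<case name>:<integer weight>"; the
// weights are relative and are interpreted by parseTestCases and
// weightedRandomTestSelector below.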
package main import ( "flag" "fmt" "math/rand" "net" "strconv" "strings" "sync" "time" "golang.org/x/net/context" "google.golang.org/grpc" "google.golang.org/grpc/codes" "google.golang.org/grpc/credentials" "google.golang.org/grpc/grpclog" "google.golang.org/grpc/interop" testpb "google.golang.org/grpc/interop/grpc_testing" metricspb "google.golang.org/grpc/stress/grpc_testing" "google.golang.org/grpc/testdata" ) var ( serverAddresses = flag.String("server_addresses", "localhost:8080", "a list of server addresses") testCases = flag.String("test_cases", "", "a list of test cases along with the relative weights") testDurationSecs = flag.Int("test_duration_secs", -1, "test duration in seconds") numChannelsPerServer = flag.Int("num_channels_per_server", 1, "Number of channels (i.e connections) to each server") numStubsPerChannel = flag.Int("num_stubs_per_channel", 1, "Number of client stubs per each connection to server") metricsPort = flag.Int("metrics_port", 8081, "The port at which the stress client exposes QPS metrics") useTLS = flag.Bool("use_tls", false, "Connection uses TLS if true, else plain TCP") testCA = flag.Bool("use_test_ca", false, "Whether to replace platform root CAs with test CA as the CA root") tlsServerName = flag.String("server_host_override", "foo.test.google.fr", "The server name use to verify the hostname returned by TLS handshake if it is not empty. Otherwise, --server_host is used.") caFile = flag.String("ca_file", "", "The file containning the CA root cert file") ) // testCaseWithWeight contains the test case type and its weight. type testCaseWithWeight struct { name string weight int } // parseTestCases converts test case string to a list of struct testCaseWithWeight. func parseTestCases(testCaseString string) []testCaseWithWeight { testCaseStrings := strings.Split(testCaseString, ",") testCases := make([]testCaseWithWeight, len(testCaseStrings)) for i, str := range testCaseStrings { testCase := strings.Split(str, ":") if len(testCase) != 2 { panic(fmt.Sprintf("invalid test case with weight: %s", str)) } // Check if test case is supported. switch testCase[0] { case "empty_unary", "large_unary", "client_streaming", "server_streaming", "ping_pong", "empty_stream", "timeout_on_sleeping_server", "cancel_after_begin", "cancel_after_first_response", "status_code_and_message", "custom_metadata": default: panic(fmt.Sprintf("unknown test type: %s", testCase[0])) } testCases[i].name = testCase[0] w, err := strconv.Atoi(testCase[1]) if err != nil { panic(fmt.Sprintf("%v", err)) } testCases[i].weight = w } return testCases } // weightedRandomTestSelector defines a weighted random selector for test case types. type weightedRandomTestSelector struct { tests []testCaseWithWeight totalWeight int } // newWeightedRandomTestSelector constructs a weightedRandomTestSelector with the given list of testCaseWithWeight. func newWeightedRandomTestSelector(tests []testCaseWithWeight) *weightedRandomTestSelector { var totalWeight int for _, t := range tests { totalWeight += t.weight } rand.Seed(time.Now().UnixNano()) return &weightedRandomTestSelector{tests, totalWeight} } func (selector weightedRandomTestSelector) getNextTest() string { random := rand.Intn(selector.totalWeight) var weightSofar int for _, test := range selector.tests { weightSofar += test.weight if random < weightSofar { return test.name } } panic("no test case selected by weightedRandomTestSelector") } // gauge stores the qps of one interop client (one stub). 
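// The stored value is written by performRPCs after every completed call and
// read by the metrics server's GetGauge/GetAllGauges handlers, so access is
// guarded by the RWMutex below.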
type gauge struct { mutex sync.RWMutex val int64 } func (g *gauge) set(v int64) { g.mutex.Lock() defer g.mutex.Unlock() g.val = v } func (g *gauge) get() int64 { g.mutex.RLock() defer g.mutex.RUnlock() return g.val } // server implements metrics server functions. type server struct { mutex sync.RWMutex // gauges is a map from /stress_test/server_/channel_/stub_/qps to its qps gauge. gauges map[string]*gauge } // newMetricsServer returns a new metrics server. func newMetricsServer() *server { return &server{gauges: make(map[string]*gauge)} } // GetAllGauges returns all gauges. func (s *server) GetAllGauges(in *metricspb.EmptyMessage, stream metricspb.MetricsService_GetAllGaugesServer) error { s.mutex.RLock() defer s.mutex.RUnlock() for name, gauge := range s.gauges { if err := stream.Send(&metricspb.GaugeResponse{Name: name, Value: &metricspb.GaugeResponse_LongValue{LongValue: gauge.get()}}); err != nil { return err } } return nil } // GetGauge returns the gauge for the given name. func (s *server) GetGauge(ctx context.Context, in *metricspb.GaugeRequest) (*metricspb.GaugeResponse, error) { s.mutex.RLock() defer s.mutex.RUnlock() if g, ok := s.gauges[in.Name]; ok { return &metricspb.GaugeResponse{Name: in.Name, Value: &metricspb.GaugeResponse_LongValue{LongValue: g.get()}}, nil } return nil, grpc.Errorf(codes.InvalidArgument, "gauge with name %s not found", in.Name) } // createGauge creates a gauge using the given name in metrics server. func (s *server) createGauge(name string) *gauge { s.mutex.Lock() defer s.mutex.Unlock() if _, ok := s.gauges[name]; ok { // gauge already exists. panic(fmt.Sprintf("gauge %s already exists", name)) } var g gauge s.gauges[name] = &g return &g } func startServer(server *server, port int) { lis, err := net.Listen("tcp", ":"+strconv.Itoa(port)) if err != nil { grpclog.Fatalf("failed to listen: %v", err) } s := grpc.NewServer() metricspb.RegisterMetricsServiceServer(s, server) s.Serve(lis) } // performRPCs uses weightedRandomTestSelector to select test case and runs the tests. 
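// exampleGaugeWiring is a hypothetical sketch, not invoked anywhere, of how
// main() below wires one QPS gauge per (server, channel, stub) triple into the
// metrics server: the gauge name encodes the triple, and performRPCs updates
// the gauge as calls complete.
func exampleGaugeWiring(ms *server, serverIdx, connIdx, stubIdx int) *gauge {
	name := fmt.Sprintf("/stress_test/server_%d/channel_%d/stub_%d/qps", serverIdx, connIdx, stubIdx)
	g := ms.createGauge(name)
	g.set(0) // performRPCs overwrites this with calls-per-second as it runs
	return g
}
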
func performRPCs(gauge *gauge, conn *grpc.ClientConn, selector *weightedRandomTestSelector, stop <-chan bool) { client := testpb.NewTestServiceClient(conn) var numCalls int64 startTime := time.Now() for { test := selector.getNextTest() switch test { case "empty_unary": interop.DoEmptyUnaryCall(client, grpc.FailFast(false)) case "large_unary": interop.DoLargeUnaryCall(client, grpc.FailFast(false)) case "client_streaming": interop.DoClientStreaming(client, grpc.FailFast(false)) case "server_streaming": interop.DoServerStreaming(client, grpc.FailFast(false)) case "ping_pong": interop.DoPingPong(client, grpc.FailFast(false)) case "empty_stream": interop.DoEmptyStream(client, grpc.FailFast(false)) case "timeout_on_sleeping_server": interop.DoTimeoutOnSleepingServer(client, grpc.FailFast(false)) case "cancel_after_begin": interop.DoCancelAfterBegin(client, grpc.FailFast(false)) case "cancel_after_first_response": interop.DoCancelAfterFirstResponse(client, grpc.FailFast(false)) case "status_code_and_message": interop.DoStatusCodeAndMessage(client, grpc.FailFast(false)) case "custom_metadata": interop.DoCustomMetadata(client, grpc.FailFast(false)) } numCalls++ gauge.set(int64(float64(numCalls) / time.Since(startTime).Seconds())) select { case <-stop: return default: } } } func logParameterInfo(addresses []string, tests []testCaseWithWeight) { grpclog.Printf("server_addresses: %s", *serverAddresses) grpclog.Printf("test_cases: %s", *testCases) grpclog.Printf("test_duration_secs: %d", *testDurationSecs) grpclog.Printf("num_channels_per_server: %d", *numChannelsPerServer) grpclog.Printf("num_stubs_per_channel: %d", *numStubsPerChannel) grpclog.Printf("metrics_port: %d", *metricsPort) grpclog.Printf("use_tls: %t", *useTLS) grpclog.Printf("use_test_ca: %t", *testCA) grpclog.Printf("server_host_override: %s", *tlsServerName) grpclog.Println("addresses:") for i, addr := range addresses { grpclog.Printf("%d. %s\n", i+1, addr) } grpclog.Println("tests:") for i, test := range tests { grpclog.Printf("%d. %v\n", i+1, test) } } func newConn(address string, useTLS, testCA bool, tlsServerName string) (*grpc.ClientConn, error) { var opts []grpc.DialOption if useTLS { var sn string if tlsServerName != "" { sn = tlsServerName } var creds credentials.TransportCredentials if testCA { var err error if *caFile == "" { *caFile = testdata.Path("ca.pem") } creds, err = credentials.NewClientTLSFromFile(*caFile, sn) if err != nil { grpclog.Fatalf("Failed to create TLS credentials %v", err) } } else { creds = credentials.NewClientTLSFromCert(nil, sn) } opts = append(opts, grpc.WithTransportCredentials(creds)) } else { opts = append(opts, grpc.WithInsecure()) } return grpc.Dial(address, opts...) 
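	// For illustration (hypothetical values): newConn("localhost:8080", false,
	// false, "") takes the plaintext path above and dials with
	// grpc.WithInsecure, while newConn("localhost:8080", true, true,
	// "foo.test.google.fr") loads the test CA from testdata ("ca.pem" unless
	// -ca_file overrides it) and dials with grpc.WithTransportCredentials.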
} func main() { flag.Parse() addresses := strings.Split(*serverAddresses, ",") tests := parseTestCases(*testCases) logParameterInfo(addresses, tests) testSelector := newWeightedRandomTestSelector(tests) metricsServer := newMetricsServer() var wg sync.WaitGroup wg.Add(len(addresses) * *numChannelsPerServer * *numStubsPerChannel) stop := make(chan bool) for serverIndex, address := range addresses { for connIndex := 0; connIndex < *numChannelsPerServer; connIndex++ { conn, err := newConn(address, *useTLS, *testCA, *tlsServerName) if err != nil { grpclog.Fatalf("Fail to dial: %v", err) } defer conn.Close() for clientIndex := 0; clientIndex < *numStubsPerChannel; clientIndex++ { name := fmt.Sprintf("/stress_test/server_%d/channel_%d/stub_%d/qps", serverIndex+1, connIndex+1, clientIndex+1) go func() { defer wg.Done() g := metricsServer.createGauge(name) performRPCs(g, conn, testSelector, stop) }() } } } go startServer(metricsServer, *metricsPort) if *testDurationSecs > 0 { time.Sleep(time.Duration(*testDurationSecs) * time.Second) close(stop) } wg.Wait() grpclog.Printf(" ===== ALL DONE ===== ") } golang-google-grpc-1.6.0/stress/grpc_testing/000077500000000000000000000000001315416461300212075ustar00rootroot00000000000000golang-google-grpc-1.6.0/stress/grpc_testing/metrics.pb.go000066400000000000000000000302001315416461300235770ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: metrics.proto /* Package grpc_testing is a generated protocol buffer package. It is generated from these files: metrics.proto It has these top-level messages: GaugeResponse GaugeRequest EmptyMessage */ package grpc_testing import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import ( context "golang.org/x/net/context" grpc "google.golang.org/grpc" ) // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. 
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package // Reponse message containing the gauge name and value type GaugeResponse struct { Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"` // Types that are valid to be assigned to Value: // *GaugeResponse_LongValue // *GaugeResponse_DoubleValue // *GaugeResponse_StringValue Value isGaugeResponse_Value `protobuf_oneof:"value"` } func (m *GaugeResponse) Reset() { *m = GaugeResponse{} } func (m *GaugeResponse) String() string { return proto.CompactTextString(m) } func (*GaugeResponse) ProtoMessage() {} func (*GaugeResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} } type isGaugeResponse_Value interface { isGaugeResponse_Value() } type GaugeResponse_LongValue struct { LongValue int64 `protobuf:"varint,2,opt,name=long_value,json=longValue,oneof"` } type GaugeResponse_DoubleValue struct { DoubleValue float64 `protobuf:"fixed64,3,opt,name=double_value,json=doubleValue,oneof"` } type GaugeResponse_StringValue struct { StringValue string `protobuf:"bytes,4,opt,name=string_value,json=stringValue,oneof"` } func (*GaugeResponse_LongValue) isGaugeResponse_Value() {} func (*GaugeResponse_DoubleValue) isGaugeResponse_Value() {} func (*GaugeResponse_StringValue) isGaugeResponse_Value() {} func (m *GaugeResponse) GetValue() isGaugeResponse_Value { if m != nil { return m.Value } return nil } func (m *GaugeResponse) GetName() string { if m != nil { return m.Name } return "" } func (m *GaugeResponse) GetLongValue() int64 { if x, ok := m.GetValue().(*GaugeResponse_LongValue); ok { return x.LongValue } return 0 } func (m *GaugeResponse) GetDoubleValue() float64 { if x, ok := m.GetValue().(*GaugeResponse_DoubleValue); ok { return x.DoubleValue } return 0 } func (m *GaugeResponse) GetStringValue() string { if x, ok := m.GetValue().(*GaugeResponse_StringValue); ok { return x.StringValue } return "" } // XXX_OneofFuncs is for the internal use of the proto package. 
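// exampleGaugeOneof is a hypothetical snippet, not part of the generated code,
// showing how the oneof accessors above behave: assign one of the wrapper
// types to Value and read it back with the matching typed getter (the getters
// for the other cases return their zero value).
func exampleGaugeOneof() {
	r := &GaugeResponse{
		Name:  "/stress_test/server_1/channel_1/stub_1/qps",
		Value: &GaugeResponse_LongValue{LongValue: 42},
	}
	fmt.Printf("%s = %d (string case: %q)\n", r.GetName(), r.GetLongValue(), r.GetStringValue())
}
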
func (*GaugeResponse) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { return _GaugeResponse_OneofMarshaler, _GaugeResponse_OneofUnmarshaler, _GaugeResponse_OneofSizer, []interface{}{ (*GaugeResponse_LongValue)(nil), (*GaugeResponse_DoubleValue)(nil), (*GaugeResponse_StringValue)(nil), } } func _GaugeResponse_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { m := msg.(*GaugeResponse) // value switch x := m.Value.(type) { case *GaugeResponse_LongValue: b.EncodeVarint(2<<3 | proto.WireVarint) b.EncodeVarint(uint64(x.LongValue)) case *GaugeResponse_DoubleValue: b.EncodeVarint(3<<3 | proto.WireFixed64) b.EncodeFixed64(math.Float64bits(x.DoubleValue)) case *GaugeResponse_StringValue: b.EncodeVarint(4<<3 | proto.WireBytes) b.EncodeStringBytes(x.StringValue) case nil: default: return fmt.Errorf("GaugeResponse.Value has unexpected type %T", x) } return nil } func _GaugeResponse_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { m := msg.(*GaugeResponse) switch tag { case 2: // value.long_value if wire != proto.WireVarint { return true, proto.ErrInternalBadWireType } x, err := b.DecodeVarint() m.Value = &GaugeResponse_LongValue{int64(x)} return true, err case 3: // value.double_value if wire != proto.WireFixed64 { return true, proto.ErrInternalBadWireType } x, err := b.DecodeFixed64() m.Value = &GaugeResponse_DoubleValue{math.Float64frombits(x)} return true, err case 4: // value.string_value if wire != proto.WireBytes { return true, proto.ErrInternalBadWireType } x, err := b.DecodeStringBytes() m.Value = &GaugeResponse_StringValue{x} return true, err default: return false, nil } } func _GaugeResponse_OneofSizer(msg proto.Message) (n int) { m := msg.(*GaugeResponse) // value switch x := m.Value.(type) { case *GaugeResponse_LongValue: n += proto.SizeVarint(2<<3 | proto.WireVarint) n += proto.SizeVarint(uint64(x.LongValue)) case *GaugeResponse_DoubleValue: n += proto.SizeVarint(3<<3 | proto.WireFixed64) n += 8 case *GaugeResponse_StringValue: n += proto.SizeVarint(4<<3 | proto.WireBytes) n += proto.SizeVarint(uint64(len(x.StringValue))) n += len(x.StringValue) case nil: default: panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) } return n } // Request message containing the gauge name type GaugeRequest struct { Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"` } func (m *GaugeRequest) Reset() { *m = GaugeRequest{} } func (m *GaugeRequest) String() string { return proto.CompactTextString(m) } func (*GaugeRequest) ProtoMessage() {} func (*GaugeRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} } func (m *GaugeRequest) GetName() string { if m != nil { return m.Name } return "" } type EmptyMessage struct { } func (m *EmptyMessage) Reset() { *m = EmptyMessage{} } func (m *EmptyMessage) String() string { return proto.CompactTextString(m) } func (*EmptyMessage) ProtoMessage() {} func (*EmptyMessage) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} } func init() { proto.RegisterType((*GaugeResponse)(nil), "grpc.testing.GaugeResponse") proto.RegisterType((*GaugeRequest)(nil), "grpc.testing.GaugeRequest") proto.RegisterType((*EmptyMessage)(nil), "grpc.testing.EmptyMessage") } // Reference imports to suppress errors if they are not otherwise used. 
var _ context.Context var _ grpc.ClientConn // This is a compile-time assertion to ensure that this generated file // is compatible with the grpc package it is being compiled against. const _ = grpc.SupportPackageIsVersion4 // Client API for MetricsService service type MetricsServiceClient interface { // Returns the values of all the gauges that are currently being maintained by // the service GetAllGauges(ctx context.Context, in *EmptyMessage, opts ...grpc.CallOption) (MetricsService_GetAllGaugesClient, error) // Returns the value of one gauge GetGauge(ctx context.Context, in *GaugeRequest, opts ...grpc.CallOption) (*GaugeResponse, error) } type metricsServiceClient struct { cc *grpc.ClientConn } func NewMetricsServiceClient(cc *grpc.ClientConn) MetricsServiceClient { return &metricsServiceClient{cc} } func (c *metricsServiceClient) GetAllGauges(ctx context.Context, in *EmptyMessage, opts ...grpc.CallOption) (MetricsService_GetAllGaugesClient, error) { stream, err := grpc.NewClientStream(ctx, &_MetricsService_serviceDesc.Streams[0], c.cc, "/grpc.testing.MetricsService/GetAllGauges", opts...) if err != nil { return nil, err } x := &metricsServiceGetAllGaugesClient{stream} if err := x.ClientStream.SendMsg(in); err != nil { return nil, err } if err := x.ClientStream.CloseSend(); err != nil { return nil, err } return x, nil } type MetricsService_GetAllGaugesClient interface { Recv() (*GaugeResponse, error) grpc.ClientStream } type metricsServiceGetAllGaugesClient struct { grpc.ClientStream } func (x *metricsServiceGetAllGaugesClient) Recv() (*GaugeResponse, error) { m := new(GaugeResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func (c *metricsServiceClient) GetGauge(ctx context.Context, in *GaugeRequest, opts ...grpc.CallOption) (*GaugeResponse, error) { out := new(GaugeResponse) err := grpc.Invoke(ctx, "/grpc.testing.MetricsService/GetGauge", in, out, c.cc, opts...) 
if err != nil { return nil, err } return out, nil } // Server API for MetricsService service type MetricsServiceServer interface { // Returns the values of all the gauges that are currently being maintained by // the service GetAllGauges(*EmptyMessage, MetricsService_GetAllGaugesServer) error // Returns the value of one gauge GetGauge(context.Context, *GaugeRequest) (*GaugeResponse, error) } func RegisterMetricsServiceServer(s *grpc.Server, srv MetricsServiceServer) { s.RegisterService(&_MetricsService_serviceDesc, srv) } func _MetricsService_GetAllGauges_Handler(srv interface{}, stream grpc.ServerStream) error { m := new(EmptyMessage) if err := stream.RecvMsg(m); err != nil { return err } return srv.(MetricsServiceServer).GetAllGauges(m, &metricsServiceGetAllGaugesServer{stream}) } type MetricsService_GetAllGaugesServer interface { Send(*GaugeResponse) error grpc.ServerStream } type metricsServiceGetAllGaugesServer struct { grpc.ServerStream } func (x *metricsServiceGetAllGaugesServer) Send(m *GaugeResponse) error { return x.ServerStream.SendMsg(m) } func _MetricsService_GetGauge_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(GaugeRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(MetricsServiceServer).GetGauge(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/grpc.testing.MetricsService/GetGauge", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(MetricsServiceServer).GetGauge(ctx, req.(*GaugeRequest)) } return interceptor(ctx, in, info, handler) } var _MetricsService_serviceDesc = grpc.ServiceDesc{ ServiceName: "grpc.testing.MetricsService", HandlerType: (*MetricsServiceServer)(nil), Methods: []grpc.MethodDesc{ { MethodName: "GetGauge", Handler: _MetricsService_GetGauge_Handler, }, }, Streams: []grpc.StreamDesc{ { StreamName: "GetAllGauges", Handler: _MetricsService_GetAllGauges_Handler, ServerStreams: true, }, }, Metadata: "metrics.proto", } func init() { proto.RegisterFile("metrics.proto", fileDescriptor0) } var fileDescriptor0 = []byte{ // 256 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x7c, 0x91, 0x3f, 0x4f, 0xc3, 0x30, 0x10, 0xc5, 0x6b, 0x5a, 0xfe, 0xf4, 0x70, 0x3b, 0x78, 0xaa, 0xca, 0x40, 0x14, 0x96, 0x4c, 0x11, 0x82, 0x4f, 0x00, 0x08, 0xa5, 0x0c, 0x5d, 0x82, 0xc4, 0x8a, 0xd2, 0x70, 0xb2, 0x22, 0x39, 0x71, 0xf0, 0x5d, 0x2a, 0xf1, 0x49, 0x58, 0xf9, 0xa8, 0xc8, 0x4e, 0x55, 0xa5, 0x08, 0x75, 0xb3, 0x7e, 0xf7, 0xfc, 0xfc, 0x9e, 0x0f, 0x66, 0x35, 0xb2, 0xab, 0x4a, 0x4a, 0x5b, 0x67, 0xd9, 0x2a, 0xa9, 0x5d, 0x5b, 0xa6, 0x8c, 0xc4, 0x55, 0xa3, 0xe3, 0x6f, 0x01, 0xb3, 0xac, 0xe8, 0x34, 0xe6, 0x48, 0xad, 0x6d, 0x08, 0x95, 0x82, 0x49, 0x53, 0xd4, 0xb8, 0x10, 0x91, 0x48, 0xa6, 0x79, 0x38, 0xab, 0x6b, 0x00, 0x63, 0x1b, 0xfd, 0xbe, 0x2d, 0x4c, 0x87, 0x8b, 0x93, 0x48, 0x24, 0xe3, 0xd5, 0x28, 0x9f, 0x7a, 0xf6, 0xe6, 0x91, 0xba, 0x01, 0xf9, 0x61, 0xbb, 0x8d, 0xc1, 0x9d, 0x64, 0x1c, 0x89, 0x44, 0xac, 0x46, 0xf9, 0x65, 0x4f, 0xf7, 0x22, 0x62, 0x57, 0xed, 0x7d, 0x26, 0xfe, 0x05, 0x2f, 0xea, 0x69, 0x10, 0x3d, 0x9e, 0xc3, 0x69, 0x98, 0xc6, 0x31, 0xc8, 0x5d, 0xb0, 0xcf, 0x0e, 0x89, 0xff, 0xcb, 0x15, 0xcf, 0x41, 0x3e, 0xd7, 0x2d, 0x7f, 0xad, 0x91, 0xa8, 0xd0, 0x78, 0xf7, 0x23, 0x60, 0xbe, 0xee, 0xdb, 0xbe, 0xa2, 0xdb, 0x56, 0x25, 0xaa, 0x17, 0x90, 0x19, 0xf2, 0x83, 0x31, 0xc1, 0x8c, 0xd4, 0x32, 0x1d, 0xf6, 0x4f, 0x87, 0xd7, 0x97, 0x57, 0x87, 0xb3, 
0x83, 0x7f, 0xb9, 0x15, 0xea, 0x09, 0x2e, 0x32, 0xe4, 0x40, 0xff, 0xda, 0x0c, 0x93, 0x1e, 0xb5, 0xd9, 0x9c, 0x85, 0x2d, 0xdc, 0xff, 0x06, 0x00, 0x00, 0xff, 0xff, 0x5e, 0x7d, 0xb2, 0xc9, 0x96, 0x01, 0x00, 0x00, } golang-google-grpc-1.6.0/stress/grpc_testing/metrics.proto000066400000000000000000000027431315416461300237500ustar00rootroot00000000000000// Copyright 2015-2016 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. // Contains the definitions for a metrics service and the type of metrics // exposed by the service. // // Currently, 'Gauge' (i.e a metric that represents the measured value of // something at an instant of time) is the only metric type supported by the // service. syntax = "proto3"; package grpc.testing; // Reponse message containing the gauge name and value message GaugeResponse { string name = 1; oneof value { int64 long_value = 2; double double_value = 3; string string_value = 4; } } // Request message containing the gauge name message GaugeRequest { string name = 1; } message EmptyMessage {} service MetricsService { // Returns the values of all the gauges that are currently being maintained by // the service rpc GetAllGauges(EmptyMessage) returns (stream GaugeResponse); // Returns the value of one gauge rpc GetGauge(GaugeRequest) returns (GaugeResponse); } golang-google-grpc-1.6.0/stress/metrics_client/000077500000000000000000000000001315416461300215235ustar00rootroot00000000000000golang-google-grpc-1.6.0/stress/metrics_client/main.go000066400000000000000000000042741315416461300230050ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package main import ( "flag" "fmt" "io" "golang.org/x/net/context" "google.golang.org/grpc" "google.golang.org/grpc/grpclog" metricspb "google.golang.org/grpc/stress/grpc_testing" ) var ( metricsServerAddress = flag.String("metrics_server_address", "", "The metrics server addresses in the fomrat :") totalOnly = flag.Bool("total_only", false, "If true, this prints only the total value of all gauges") ) func printMetrics(client metricspb.MetricsServiceClient, totalOnly bool) { stream, err := client.GetAllGauges(context.Background(), &metricspb.EmptyMessage{}) if err != nil { grpclog.Fatalf("failed to call GetAllGuages: %v", err) } var ( overallQPS int64 rpcStatus error ) for { gaugeResponse, err := stream.Recv() if err != nil { rpcStatus = err break } if _, ok := gaugeResponse.GetValue().(*metricspb.GaugeResponse_LongValue); !ok { panic(fmt.Sprintf("gauge %s is not a long value", gaugeResponse.Name)) } v := gaugeResponse.GetLongValue() if !totalOnly { grpclog.Printf("%s: %d", gaugeResponse.Name, v) } overallQPS += v } if rpcStatus != io.EOF { grpclog.Fatalf("failed to finish server streaming: %v", rpcStatus) } grpclog.Printf("overall qps: %d", overallQPS) } func main() { flag.Parse() if *metricsServerAddress == "" { grpclog.Fatalf("Metrics server address is empty.") } conn, err := grpc.Dial(*metricsServerAddress, grpc.WithInsecure()) if err != nil { grpclog.Fatalf("cannot connect to metrics server: %v", err) } defer conn.Close() c := metricspb.NewMetricsServiceClient(conn) printMetrics(c, *totalOnly) } golang-google-grpc-1.6.0/tap/000077500000000000000000000000001315416461300157605ustar00rootroot00000000000000golang-google-grpc-1.6.0/tap/tap.go000066400000000000000000000040611315416461300170740ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package tap defines the function handles which are executed on the transport // layer of gRPC-Go and related information. Everything here is EXPERIMENTAL. package tap import ( "golang.org/x/net/context" ) // Info defines the relevant information needed by the handles. type Info struct { // FullMethodName is the string of grpc method (in the format of // /package.service/method). FullMethodName string // TODO: More to be added. } // ServerInHandle defines the function which runs before a new stream is created // on the server side. If it returns a non-nil error, the stream will not be // created and a RST_STREAM will be sent back to the client with REFUSED_STREAM. // The client will receive an RPC error "code = Unavailable, desc = stream // terminated by RST_STREAM with error code: REFUSED_STREAM". // // It's intended to be used in situations where you don't want to waste the // resources to accept the new stream (e.g. rate-limiting). And the content of // the error will be ignored and won't be sent back to the client. For other // general usages, please use interceptors. // // Note that it is executed in the per-connection I/O goroutine(s) instead of // per-RPC goroutine. 
Therefore, users should NOT have any // blocking/time-consuming work in this handle. Otherwise all the RPCs would // slow down. Also, for the same reason, this handle won't be called // concurrently by gRPC. type ServerInHandle func(ctx context.Context, info *Info) (context.Context, error) golang-google-grpc-1.6.0/test/000077500000000000000000000000001315416461300161535ustar00rootroot00000000000000golang-google-grpc-1.6.0/test/bufconn/000077500000000000000000000000001315416461300176055ustar00rootroot00000000000000golang-google-grpc-1.6.0/test/bufconn/bufconn.go000066400000000000000000000115371315416461300215750ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package bufconn provides a net.Conn implemented by a buffer and related // dialing and listening functionality. package bufconn import ( "fmt" "io" "net" "sync" "time" ) // Listener implements a net.Listener that creates local, buffered net.Conns // via its Accept and Dial method. type Listener struct { mu sync.Mutex sz int ch chan net.Conn done chan struct{} } var errClosed = fmt.Errorf("Closed") // Listen returns a Listener that can only be contacted by its own Dialers and // creates buffered connections between the two. func Listen(sz int) *Listener { return &Listener{sz: sz, ch: make(chan net.Conn), done: make(chan struct{})} } // Accept blocks until Dial is called, then returns a net.Conn for the server // half of the connection. func (l *Listener) Accept() (net.Conn, error) { select { case <-l.done: return nil, errClosed case c := <-l.ch: return c, nil } } // Close stops the listener. func (l *Listener) Close() error { l.mu.Lock() defer l.mu.Unlock() select { case <-l.done: // Already closed. break default: close(l.done) } return nil } // Addr reports the address of the listener. func (l *Listener) Addr() net.Addr { return addr{} } // Dial creates an in-memory full-duplex network connection, unblocks Accept by // providing it the server half of the connection, and returns the client half // of the connection. func (l *Listener) Dial() (net.Conn, error) { p1, p2 := newPipe(l.sz), newPipe(l.sz) select { case <-l.done: return nil, errClosed case l.ch <- &conn{p1, p2}: return &conn{p2, p1}, nil } } type pipe struct { mu sync.Mutex // buf contains the data in the pipe. It is a ring buffer of fixed capacity, // with r and w pointing to the offset to read and write, respsectively. // // Data is read between [r, w) and written to [w, r), wrapping around the end // of the slice if necessary. // // The buffer is empty if r == len(buf), otherwise if r == w, it is full. // // w and r are always in the range [0, cap(buf)) and [0, len(buf)]. 
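	//
	// A worked illustration of these invariants, assuming cap(buf) == 4: a
	// fresh pipe has len(buf) == 0 and r == w == 0, so r == len(buf) and the
	// pipe is empty. Writing 3 bytes grows buf to length 3 with r == 0 and
	// w == 3 (neither empty nor full). Reading those 3 bytes moves r to
	// 3 == len(buf), i.e. empty again. Writing 4 more bytes then wraps w past
	// the end of the slice and leaves r == w == 3 with r < len(buf), which is
	// exactly the full condition checked by full() below.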
buf []byte w, r int wwait sync.Cond rwait sync.Cond closed bool } func newPipe(sz int) *pipe { p := &pipe{buf: make([]byte, 0, sz)} p.wwait.L = &p.mu p.rwait.L = &p.mu return p } func (p *pipe) empty() bool { return p.r == len(p.buf) } func (p *pipe) full() bool { return p.r < len(p.buf) && p.r == p.w } func (p *pipe) Read(b []byte) (n int, err error) { p.mu.Lock() defer p.mu.Unlock() // Block until p has data. for { if p.closed { return 0, io.ErrClosedPipe } if !p.empty() { break } p.rwait.Wait() } wasFull := p.full() n = copy(b, p.buf[p.r:len(p.buf)]) p.r += n if p.r == cap(p.buf) { p.r = 0 p.buf = p.buf[:p.w] } // Signal a blocked writer, if any if wasFull { p.wwait.Signal() } return n, nil } func (p *pipe) Write(b []byte) (n int, err error) { p.mu.Lock() defer p.mu.Unlock() if p.closed { return 0, io.ErrClosedPipe } for len(b) > 0 { // Block until p is not full. for { if p.closed { return 0, io.ErrClosedPipe } if !p.full() { break } p.wwait.Wait() } wasEmpty := p.empty() end := cap(p.buf) if p.w < p.r { end = p.r } x := copy(p.buf[p.w:end], b) b = b[x:] n += x p.w += x if p.w > len(p.buf) { p.buf = p.buf[:p.w] } if p.w == cap(p.buf) { p.w = 0 } // Signal a blocked reader, if any. if wasEmpty { p.rwait.Signal() } } return n, nil } func (p *pipe) Close() error { p.mu.Lock() defer p.mu.Unlock() p.closed = true // Signal all blocked readers and writers to return an error. p.rwait.Broadcast() p.wwait.Broadcast() return nil } type conn struct { io.ReadCloser io.WriteCloser } func (c *conn) Close() error { err1 := c.ReadCloser.Close() err2 := c.WriteCloser.Close() if err1 != nil { return err1 } return err2 } func (*conn) LocalAddr() net.Addr { return addr{} } func (*conn) RemoteAddr() net.Addr { return addr{} } func (c *conn) SetDeadline(t time.Time) error { return fmt.Errorf("unsupported") } func (c *conn) SetReadDeadline(t time.Time) error { return fmt.Errorf("unsupported") } func (c *conn) SetWriteDeadline(t time.Time) error { return fmt.Errorf("unsupported") } type addr struct{} func (addr) Network() string { return "bufconn" } func (addr) String() string { return "bufconn" } golang-google-grpc-1.6.0/test/bufconn/bufconn_test.go000066400000000000000000000062301315416461300226260ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package bufconn import ( "fmt" "io" "net" "reflect" "testing" "time" ) func testRW(r io.Reader, w io.Writer) error { for i := 0; i < 20; i++ { d := make([]byte, i) for j := 0; j < i; j++ { d[j] = byte(i - j) } var rn int var rerr error b := make([]byte, i) done := make(chan struct{}) go func() { for rn < len(b) && rerr == nil { var x int x, rerr = r.Read(b[rn:]) rn += x } close(done) }() wn, werr := w.Write(d) if wn != i || werr != nil { return fmt.Errorf("%v: w.Write(%v) = %v, %v; want %v, nil", i, d, wn, werr, i) } select { case <-done: case <-time.After(500 * time.Millisecond): return fmt.Errorf("%v: r.Read never returned", i) } if rn != i || rerr != nil { return fmt.Errorf("%v: r.Read = %v, %v; want %v, nil", i, rn, rerr, i) } if !reflect.DeepEqual(b, d) { return fmt.Errorf("%v: r.Read read %v; want %v", i, b, d) } } return nil } func TestPipe(t *testing.T) { p := newPipe(10) if err := testRW(p, p); err != nil { t.Fatalf(err.Error()) } } func TestPipeClose(t *testing.T) { p := newPipe(10) p.Close() if _, err := p.Write(nil); err != io.ErrClosedPipe { t.Fatalf("p.Write = _, %v; want _, %v", err, io.ErrClosedPipe) } if _, err := p.Read(nil); err != io.ErrClosedPipe { t.Fatalf("p.Read = _, %v; want _, %v", err, io.ErrClosedPipe) } } func TestConn(t *testing.T) { p1, p2 := newPipe(10), newPipe(10) c1, c2 := &conn{p1, p2}, &conn{p2, p1} if err := testRW(c1, c2); err != nil { t.Fatalf(err.Error()) } if err := testRW(c2, c1); err != nil { t.Fatalf(err.Error()) } } func TestListener(t *testing.T) { l := Listen(7) var s net.Conn var serr error done := make(chan struct{}) go func() { s, serr = l.Accept() close(done) }() c, cerr := l.Dial() <-done if cerr != nil || serr != nil { t.Fatalf("cerr = %v, serr = %v; want nil, nil", cerr, serr) } if err := testRW(c, s); err != nil { t.Fatalf(err.Error()) } if err := testRW(s, c); err != nil { t.Fatalf(err.Error()) } } func TestCloseWhileDialing(t *testing.T) { l := Listen(7) var c net.Conn var err error done := make(chan struct{}) go func() { c, err = l.Dial() close(done) }() l.Close() <-done if c != nil || err != errClosed { t.Fatalf("c, err = %v, %v; want nil, %v", c, err, errClosed) } } func TestCloseWhileAccepting(t *testing.T) { l := Listen(7) var c net.Conn var err error done := make(chan struct{}) go func() { c, err = l.Accept() close(done) }() l.Close() <-done if c != nil || err != errClosed { t.Fatalf("c, err = %v, %v; want nil, %v", c, err, errClosed) } } golang-google-grpc-1.6.0/test/codec_perf/000077500000000000000000000000001315416461300202445ustar00rootroot00000000000000golang-google-grpc-1.6.0/test/codec_perf/perf.pb.go000066400000000000000000000041121315416461300221250ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: codec_perf/perf.proto /* Package codec_perf is a generated protocol buffer package. It is generated from these files: codec_perf/perf.proto It has these top-level messages: Buffer */ package codec_perf import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. 
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package // Buffer is a message that contains a body of bytes that is used to exercise // encoding and decoding overheads. type Buffer struct { Body []byte `protobuf:"bytes,1,opt,name=body" json:"body,omitempty"` XXX_unrecognized []byte `json:"-"` } func (m *Buffer) Reset() { *m = Buffer{} } func (m *Buffer) String() string { return proto.CompactTextString(m) } func (*Buffer) ProtoMessage() {} func (*Buffer) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} } func (m *Buffer) GetBody() []byte { if m != nil { return m.Body } return nil } func init() { proto.RegisterType((*Buffer)(nil), "codec.perf.Buffer") } func init() { proto.RegisterFile("codec_perf/perf.proto", fileDescriptor0) } var fileDescriptor0 = []byte{ // 78 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x12, 0x4d, 0xce, 0x4f, 0x49, 0x4d, 0x8e, 0x2f, 0x48, 0x2d, 0x4a, 0xd3, 0x07, 0x11, 0x7a, 0x05, 0x45, 0xf9, 0x25, 0xf9, 0x42, 0x5c, 0x60, 0x61, 0x3d, 0x90, 0x88, 0x92, 0x0c, 0x17, 0x9b, 0x53, 0x69, 0x5a, 0x5a, 0x6a, 0x91, 0x90, 0x10, 0x17, 0x4b, 0x52, 0x7e, 0x4a, 0xa5, 0x04, 0xa3, 0x02, 0xa3, 0x06, 0x4f, 0x10, 0x98, 0x0d, 0x08, 0x00, 0x00, 0xff, 0xff, 0xdc, 0x93, 0x4c, 0x5f, 0x41, 0x00, 0x00, 0x00, } golang-google-grpc-1.6.0/test/codec_perf/perf.proto000066400000000000000000000016051315416461300222670ustar00rootroot00000000000000// Copyright 2017 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. // Messages used for performance tests that may not reference grpc directly for // reasons of import cycles. syntax = "proto2"; package codec.perf; // Buffer is a message that contains a body of bytes that is used to exercise // encoding and decoding overheads. message Buffer { optional bytes body = 1; } golang-google-grpc-1.6.0/test/end2end_test.go000066400000000000000000004607211315416461300210720ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ //go:generate protoc --go_out=plugins=grpc:. codec_perf/perf.proto //go:generate protoc --go_out=plugins=grpc:. 
grpc_testing/test.proto package test import ( "bytes" "crypto/tls" "errors" "flag" "fmt" "io" "math" "net" "os" "reflect" "runtime" "sort" "strings" "sync" "syscall" "testing" "time" "github.com/golang/protobuf/proto" anypb "github.com/golang/protobuf/ptypes/any" "golang.org/x/net/context" "golang.org/x/net/http2" spb "google.golang.org/genproto/googleapis/rpc/status" "google.golang.org/grpc" "google.golang.org/grpc/codes" "google.golang.org/grpc/connectivity" "google.golang.org/grpc/credentials" _ "google.golang.org/grpc/grpclog/glogger" "google.golang.org/grpc/health" healthpb "google.golang.org/grpc/health/grpc_health_v1" "google.golang.org/grpc/internal" "google.golang.org/grpc/metadata" "google.golang.org/grpc/peer" "google.golang.org/grpc/stats" "google.golang.org/grpc/status" "google.golang.org/grpc/tap" testpb "google.golang.org/grpc/test/grpc_testing" "google.golang.org/grpc/testdata" ) var ( // For headers: testMetadata = metadata.MD{ "key1": []string{"value1"}, "key2": []string{"value2"}, "key3-bin": []string{"binvalue1", string([]byte{1, 2, 3})}, } testMetadata2 = metadata.MD{ "key1": []string{"value12"}, "key2": []string{"value22"}, } // For trailers: testTrailerMetadata = metadata.MD{ "tkey1": []string{"trailerValue1"}, "tkey2": []string{"trailerValue2"}, "tkey3-bin": []string{"trailerbinvalue1", string([]byte{3, 2, 1})}, } testTrailerMetadata2 = metadata.MD{ "tkey1": []string{"trailerValue12"}, "tkey2": []string{"trailerValue22"}, } // capital "Key" is illegal in HTTP/2. malformedHTTP2Metadata = metadata.MD{ "Key": []string{"foo"}, } testAppUA = "myApp1/1.0 myApp2/0.9" failAppUA = "fail-this-RPC" detailedError = status.ErrorProto(&spb.Status{ Code: int32(codes.DataLoss), Message: "error for testing: " + failAppUA, Details: []*anypb.Any{{ TypeUrl: "url", Value: []byte{6, 0, 0, 6, 1, 3}, }}, }) ) var raceMode bool // set by race_test.go in race mode type testServer struct { security string // indicate the authentication protocol used by this server. earlyFail bool // whether to error out the execution of a service handler prematurely. setAndSendHeader bool // whether to call setHeader and sendHeader. setHeaderOnly bool // whether to only call setHeader, not sendHeader. multipleSetTrailer bool // whether to call setTrailer multiple times. } func (s *testServer) EmptyCall(ctx context.Context, in *testpb.Empty) (*testpb.Empty, error) { if md, ok := metadata.FromIncomingContext(ctx); ok { // For testing purpose, returns an error if user-agent is failAppUA. // To test that client gets the correct error. 
if ua, ok := md["user-agent"]; !ok || strings.HasPrefix(ua[0], failAppUA) { return nil, detailedError } var str []string for _, entry := range md["user-agent"] { str = append(str, "ua", entry) } grpc.SendHeader(ctx, metadata.Pairs(str...)) } return new(testpb.Empty), nil } func newPayload(t testpb.PayloadType, size int32) (*testpb.Payload, error) { if size < 0 { return nil, fmt.Errorf("Requested a response with invalid length %d", size) } body := make([]byte, size) switch t { case testpb.PayloadType_COMPRESSABLE: case testpb.PayloadType_UNCOMPRESSABLE: return nil, fmt.Errorf("PayloadType UNCOMPRESSABLE is not supported") default: return nil, fmt.Errorf("Unsupported payload type: %d", t) } return &testpb.Payload{ Type: t.Enum(), Body: body, }, nil } func (s *testServer) UnaryCall(ctx context.Context, in *testpb.SimpleRequest) (*testpb.SimpleResponse, error) { md, ok := metadata.FromIncomingContext(ctx) if ok { if _, exists := md[":authority"]; !exists { return nil, grpc.Errorf(codes.DataLoss, "expected an :authority metadata: %v", md) } if s.setAndSendHeader { if err := grpc.SetHeader(ctx, md); err != nil { return nil, grpc.Errorf(grpc.Code(err), "grpc.SetHeader(_, %v) = %v, want ", md, err) } if err := grpc.SendHeader(ctx, testMetadata2); err != nil { return nil, grpc.Errorf(grpc.Code(err), "grpc.SendHeader(_, %v) = %v, want ", testMetadata2, err) } } else if s.setHeaderOnly { if err := grpc.SetHeader(ctx, md); err != nil { return nil, grpc.Errorf(grpc.Code(err), "grpc.SetHeader(_, %v) = %v, want ", md, err) } if err := grpc.SetHeader(ctx, testMetadata2); err != nil { return nil, grpc.Errorf(grpc.Code(err), "grpc.SetHeader(_, %v) = %v, want ", testMetadata2, err) } } else { if err := grpc.SendHeader(ctx, md); err != nil { return nil, grpc.Errorf(grpc.Code(err), "grpc.SendHeader(_, %v) = %v, want ", md, err) } } if err := grpc.SetTrailer(ctx, testTrailerMetadata); err != nil { return nil, grpc.Errorf(grpc.Code(err), "grpc.SetTrailer(_, %v) = %v, want ", testTrailerMetadata, err) } if s.multipleSetTrailer { if err := grpc.SetTrailer(ctx, testTrailerMetadata2); err != nil { return nil, grpc.Errorf(grpc.Code(err), "grpc.SetTrailer(_, %v) = %v, want ", testTrailerMetadata2, err) } } } pr, ok := peer.FromContext(ctx) if !ok { return nil, grpc.Errorf(codes.DataLoss, "failed to get peer from ctx") } if pr.Addr == net.Addr(nil) { return nil, grpc.Errorf(codes.DataLoss, "failed to get peer address") } if s.security != "" { // Check Auth info var authType, serverName string switch info := pr.AuthInfo.(type) { case credentials.TLSInfo: authType = info.AuthType() serverName = info.State.ServerName default: return nil, grpc.Errorf(codes.Unauthenticated, "Unknown AuthInfo type") } if authType != s.security { return nil, grpc.Errorf(codes.Unauthenticated, "Wrong auth type: got %q, want %q", authType, s.security) } if serverName != "x.test.youtube.com" { return nil, grpc.Errorf(codes.Unauthenticated, "Unknown server name %q", serverName) } } // Simulate some service delay. 
time.Sleep(time.Second) payload, err := newPayload(in.GetResponseType(), in.GetResponseSize()) if err != nil { return nil, err } return &testpb.SimpleResponse{ Payload: payload, }, nil } func (s *testServer) StreamingOutputCall(args *testpb.StreamingOutputCallRequest, stream testpb.TestService_StreamingOutputCallServer) error { if md, ok := metadata.FromIncomingContext(stream.Context()); ok { if _, exists := md[":authority"]; !exists { return grpc.Errorf(codes.DataLoss, "expected an :authority metadata: %v", md) } // For testing purpose, returns an error if user-agent is failAppUA. // To test that client gets the correct error. if ua, ok := md["user-agent"]; !ok || strings.HasPrefix(ua[0], failAppUA) { return grpc.Errorf(codes.DataLoss, "error for testing: "+failAppUA) } } cs := args.GetResponseParameters() for _, c := range cs { if us := c.GetIntervalUs(); us > 0 { time.Sleep(time.Duration(us) * time.Microsecond) } payload, err := newPayload(args.GetResponseType(), c.GetSize()) if err != nil { return err } if err := stream.Send(&testpb.StreamingOutputCallResponse{ Payload: payload, }); err != nil { return err } } return nil } func (s *testServer) StreamingInputCall(stream testpb.TestService_StreamingInputCallServer) error { var sum int for { in, err := stream.Recv() if err == io.EOF { return stream.SendAndClose(&testpb.StreamingInputCallResponse{ AggregatedPayloadSize: proto.Int32(int32(sum)), }) } if err != nil { return err } p := in.GetPayload().GetBody() sum += len(p) if s.earlyFail { return grpc.Errorf(codes.NotFound, "not found") } } } func (s *testServer) FullDuplexCall(stream testpb.TestService_FullDuplexCallServer) error { md, ok := metadata.FromIncomingContext(stream.Context()) if ok { if s.setAndSendHeader { if err := stream.SetHeader(md); err != nil { return grpc.Errorf(grpc.Code(err), "%v.SetHeader(_, %v) = %v, want ", stream, md, err) } if err := stream.SendHeader(testMetadata2); err != nil { return grpc.Errorf(grpc.Code(err), "%v.SendHeader(_, %v) = %v, want ", stream, testMetadata2, err) } } else if s.setHeaderOnly { if err := stream.SetHeader(md); err != nil { return grpc.Errorf(grpc.Code(err), "%v.SetHeader(_, %v) = %v, want ", stream, md, err) } if err := stream.SetHeader(testMetadata2); err != nil { return grpc.Errorf(grpc.Code(err), "%v.SetHeader(_, %v) = %v, want ", stream, testMetadata2, err) } } else { if err := stream.SendHeader(md); err != nil { return grpc.Errorf(grpc.Code(err), "%v.SendHeader(%v) = %v, want %v", stream, md, err, nil) } } stream.SetTrailer(testTrailerMetadata) if s.multipleSetTrailer { stream.SetTrailer(testTrailerMetadata2) } } for { in, err := stream.Recv() if err == io.EOF { // read done. return nil } if err != nil { // to facilitate testSvrWriteStatusEarlyWrite if grpc.Code(err) == codes.ResourceExhausted { return grpc.Errorf(codes.Internal, "fake error for test testSvrWriteStatusEarlyWrite. true error: %s", err.Error()) } return err } cs := in.GetResponseParameters() for _, c := range cs { if us := c.GetIntervalUs(); us > 0 { time.Sleep(time.Duration(us) * time.Microsecond) } payload, err := newPayload(in.GetResponseType(), c.GetSize()) if err != nil { return err } if err := stream.Send(&testpb.StreamingOutputCallResponse{ Payload: payload, }); err != nil { // to facilitate testSvrWriteStatusEarlyWrite if grpc.Code(err) == codes.ResourceExhausted { return grpc.Errorf(codes.Internal, "fake error for test testSvrWriteStatusEarlyWrite. 
true error: %s", err.Error()) } return err } } } } func (s *testServer) HalfDuplexCall(stream testpb.TestService_HalfDuplexCallServer) error { var msgBuf []*testpb.StreamingOutputCallRequest for { in, err := stream.Recv() if err == io.EOF { // read done. break } if err != nil { return err } msgBuf = append(msgBuf, in) } for _, m := range msgBuf { cs := m.GetResponseParameters() for _, c := range cs { if us := c.GetIntervalUs(); us > 0 { time.Sleep(time.Duration(us) * time.Microsecond) } payload, err := newPayload(m.GetResponseType(), c.GetSize()) if err != nil { return err } if err := stream.Send(&testpb.StreamingOutputCallResponse{ Payload: payload, }); err != nil { return err } } } return nil } type env struct { name string network string // The type of network such as tcp, unix, etc. security string // The security protocol such as TLS, SSH, etc. httpHandler bool // whether to use the http.Handler ServerTransport; requires TLS balancer bool // whether to use balancer } func (e env) runnable() bool { if runtime.GOOS == "windows" && e.network == "unix" { return false } return true } func (e env) dialer(addr string, timeout time.Duration) (net.Conn, error) { return net.DialTimeout(e.network, addr, timeout) } var ( tcpClearEnv = env{name: "tcp-clear", network: "tcp", balancer: true} tcpTLSEnv = env{name: "tcp-tls", network: "tcp", security: "tls", balancer: true} unixClearEnv = env{name: "unix-clear", network: "unix", balancer: true} unixTLSEnv = env{name: "unix-tls", network: "unix", security: "tls", balancer: true} handlerEnv = env{name: "handler-tls", network: "tcp", security: "tls", httpHandler: true, balancer: true} noBalancerEnv = env{name: "no-balancer", network: "tcp", security: "tls", balancer: false} allEnv = []env{tcpClearEnv, tcpTLSEnv, unixClearEnv, unixTLSEnv, handlerEnv, noBalancerEnv} ) var onlyEnv = flag.String("only_env", "", "If non-empty, one of 'tcp-clear', 'tcp-tls', 'unix-clear', 'unix-tls', or 'handler-tls' to only run the tests for that environment. Empty means all.") func listTestEnv() (envs []env) { if *onlyEnv != "" { for _, e := range allEnv { if e.name == *onlyEnv { if !e.runnable() { panic(fmt.Sprintf("--only_env environment %q does not run on %s", *onlyEnv, runtime.GOOS)) } return []env{e} } } panic(fmt.Sprintf("invalid --only_env value %q", *onlyEnv)) } for _, e := range allEnv { if e.runnable() { envs = append(envs, e) } } return envs } // test is an end-to-end test. It should be created with the newTest // func, modified as needed, and then started with its startServer method. // It should be cleaned up with the tearDown method. 
type test struct { t *testing.T e env ctx context.Context // valid for life of test, before tearDown cancel context.CancelFunc // Configurable knobs, after newTest returns: testServer testpb.TestServiceServer // nil means none healthServer *health.Server // nil means disabled maxStream uint32 tapHandle tap.ServerInHandle maxMsgSize *int maxClientReceiveMsgSize *int maxClientSendMsgSize *int maxServerReceiveMsgSize *int maxServerSendMsgSize *int userAgent string clientCompression bool serverCompression bool unaryClientInt grpc.UnaryClientInterceptor streamClientInt grpc.StreamClientInterceptor unaryServerInt grpc.UnaryServerInterceptor streamServerInt grpc.StreamServerInterceptor unknownHandler grpc.StreamHandler sc <-chan grpc.ServiceConfig customCodec grpc.Codec serverInitialWindowSize int32 serverInitialConnWindowSize int32 clientInitialWindowSize int32 clientInitialConnWindowSize int32 perRPCCreds credentials.PerRPCCredentials // srv and srvAddr are set once startServer is called. srv *grpc.Server srvAddr string cc *grpc.ClientConn // nil until requested via clientConn restoreLogs func() // nil unless declareLogNoise is used } func (te *test) tearDown() { if te.cancel != nil { te.cancel() te.cancel = nil } if te.cc != nil { te.cc.Close() te.cc = nil } if te.restoreLogs != nil { te.restoreLogs() te.restoreLogs = nil } if te.srv != nil { te.srv.Stop() } } // newTest returns a new test using the provided testing.T and // environment. It is returned with default values. Tests should // modify it before calling its startServer and clientConn methods. func newTest(t *testing.T, e env) *test { te := &test{ t: t, e: e, maxStream: math.MaxUint32, } te.ctx, te.cancel = context.WithCancel(context.Background()) return te } // startServer starts a gRPC server listening. Callers should defer a // call to te.tearDown to clean up. 
func (te *test) startServer(ts testpb.TestServiceServer) { te.testServer = ts te.t.Logf("Running test in %s environment...", te.e.name) sopts := []grpc.ServerOption{grpc.MaxConcurrentStreams(te.maxStream)} if te.maxMsgSize != nil { sopts = append(sopts, grpc.MaxMsgSize(*te.maxMsgSize)) } if te.maxServerReceiveMsgSize != nil { sopts = append(sopts, grpc.MaxRecvMsgSize(*te.maxServerReceiveMsgSize)) } if te.maxServerSendMsgSize != nil { sopts = append(sopts, grpc.MaxSendMsgSize(*te.maxServerSendMsgSize)) } if te.tapHandle != nil { sopts = append(sopts, grpc.InTapHandle(te.tapHandle)) } if te.serverCompression { sopts = append(sopts, grpc.RPCCompressor(grpc.NewGZIPCompressor()), grpc.RPCDecompressor(grpc.NewGZIPDecompressor()), ) } if te.unaryServerInt != nil { sopts = append(sopts, grpc.UnaryInterceptor(te.unaryServerInt)) } if te.streamServerInt != nil { sopts = append(sopts, grpc.StreamInterceptor(te.streamServerInt)) } if te.unknownHandler != nil { sopts = append(sopts, grpc.UnknownServiceHandler(te.unknownHandler)) } if te.serverInitialWindowSize > 0 { sopts = append(sopts, grpc.InitialWindowSize(te.serverInitialWindowSize)) } if te.serverInitialConnWindowSize > 0 { sopts = append(sopts, grpc.InitialConnWindowSize(te.serverInitialConnWindowSize)) } la := "localhost:0" switch te.e.network { case "unix": la = "/tmp/testsock" + fmt.Sprintf("%d", time.Now().UnixNano()) syscall.Unlink(la) } lis, err := net.Listen(te.e.network, la) if err != nil { te.t.Fatalf("Failed to listen: %v", err) } switch te.e.security { case "tls": creds, err := credentials.NewServerTLSFromFile(testdata.Path("server1.pem"), testdata.Path("server1.key")) if err != nil { te.t.Fatalf("Failed to generate credentials %v", err) } sopts = append(sopts, grpc.Creds(creds)) case "clientAlwaysFailCred": sopts = append(sopts, grpc.Creds(clientAlwaysFailCred{})) case "clientTimeoutCreds": sopts = append(sopts, grpc.Creds(&clientTimeoutCreds{})) } if te.customCodec != nil { sopts = append(sopts, grpc.CustomCodec(te.customCodec)) } s := grpc.NewServer(sopts...) 
te.srv = s if te.e.httpHandler { internal.TestingUseHandlerImpl(s) } if te.healthServer != nil { healthpb.RegisterHealthServer(s, te.healthServer) } if te.testServer != nil { testpb.RegisterTestServiceServer(s, te.testServer) } addr := la switch te.e.network { case "unix": default: _, port, err := net.SplitHostPort(lis.Addr().String()) if err != nil { te.t.Fatalf("Failed to parse listener address: %v", err) } addr = "localhost:" + port } go s.Serve(lis) te.srvAddr = addr } func (te *test) clientConn() *grpc.ClientConn { if te.cc != nil { return te.cc } opts := []grpc.DialOption{ grpc.WithDialer(te.e.dialer), grpc.WithUserAgent(te.userAgent), } if te.sc != nil { opts = append(opts, grpc.WithServiceConfig(te.sc)) } if te.clientCompression { opts = append(opts, grpc.WithCompressor(grpc.NewGZIPCompressor()), grpc.WithDecompressor(grpc.NewGZIPDecompressor()), ) } if te.unaryClientInt != nil { opts = append(opts, grpc.WithUnaryInterceptor(te.unaryClientInt)) } if te.streamClientInt != nil { opts = append(opts, grpc.WithStreamInterceptor(te.streamClientInt)) } if te.maxMsgSize != nil { opts = append(opts, grpc.WithMaxMsgSize(*te.maxMsgSize)) } if te.maxClientReceiveMsgSize != nil { opts = append(opts, grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(*te.maxClientReceiveMsgSize))) } if te.maxClientSendMsgSize != nil { opts = append(opts, grpc.WithDefaultCallOptions(grpc.MaxCallSendMsgSize(*te.maxClientSendMsgSize))) } switch te.e.security { case "tls": creds, err := credentials.NewClientTLSFromFile(testdata.Path("ca.pem"), "x.test.youtube.com") if err != nil { te.t.Fatalf("Failed to load credentials: %v", err) } opts = append(opts, grpc.WithTransportCredentials(creds)) case "clientAlwaysFailCred": opts = append(opts, grpc.WithTransportCredentials(clientAlwaysFailCred{})) case "clientTimeoutCreds": opts = append(opts, grpc.WithTransportCredentials(&clientTimeoutCreds{})) default: opts = append(opts, grpc.WithInsecure()) } if te.e.balancer { opts = append(opts, grpc.WithBalancer(grpc.RoundRobin(nil))) } if te.clientInitialWindowSize > 0 { opts = append(opts, grpc.WithInitialWindowSize(te.clientInitialWindowSize)) } if te.clientInitialConnWindowSize > 0 { opts = append(opts, grpc.WithInitialConnWindowSize(te.clientInitialConnWindowSize)) } if te.perRPCCreds != nil { opts = append(opts, grpc.WithPerRPCCredentials(te.perRPCCreds)) } if te.customCodec != nil { opts = append(opts, grpc.WithCodec(te.customCodec)) } var err error te.cc, err = grpc.Dial(te.srvAddr, opts...) if err != nil { te.t.Fatalf("Dial(%q) = %v", te.srvAddr, err) } return te.cc } func (te *test) declareLogNoise(phrases ...string) { te.restoreLogs = declareLogNoise(te.t, phrases...) 
} func (te *test) withServerTester(fn func(st *serverTester)) { c, err := te.e.dialer(te.srvAddr, 10*time.Second) if err != nil { te.t.Fatal(err) } defer c.Close() if te.e.security == "tls" { c = tls.Client(c, &tls.Config{ InsecureSkipVerify: true, NextProtos: []string{http2.NextProtoTLS}, }) } st := newServerTesterFromConn(te.t, c) st.greet() fn(st) } func TestTimeoutOnDeadServer(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testTimeoutOnDeadServer(t, e) } } func testTimeoutOnDeadServer(t *testing.T, e env) { te := newTest(t, e) te.userAgent = testAppUA te.declareLogNoise( "transport: http2Client.notifyError got notified that the client transport was broken EOF", "grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing", "grpc: addrConn.resetTransport failed to create client transport: connection error", ) te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}, grpc.FailFast(false)); err != nil { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, ", err) } te.srv.Stop() ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond) _, err := tc.EmptyCall(ctx, &testpb.Empty{}, grpc.FailFast(false)) cancel() if e.balancer && grpc.Code(err) != codes.DeadlineExceeded { // If e.balancer == nil, the ac will stop reconnecting because the dialer returns non-temp error, // the error will be an internal error. t.Fatalf("TestService/EmptyCall(%v, _) = _, %v, want _, error code: %s", ctx, err, codes.DeadlineExceeded) } awaitNewConnLogOutput() } func TestServerGracefulStopIdempotent(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { if e.name == "handler-tls" { continue } testServerGracefulStopIdempotent(t, e) } } func testServerGracefulStopIdempotent(t *testing.T, e env) { te := newTest(t, e) te.userAgent = testAppUA te.startServer(&testServer{security: e.security}) defer te.tearDown() for i := 0; i < 3; i++ { te.srv.GracefulStop() } } func TestServerGoAway(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { if e.name == "handler-tls" { continue } testServerGoAway(t, e) } } func testServerGoAway(t *testing.T, e env) { te := newTest(t, e) te.userAgent = testAppUA te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) // Finish an RPC to make sure the connection is good. if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}, grpc.FailFast(false)); err != nil { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, ", err) } ch := make(chan struct{}) go func() { te.srv.GracefulStop() close(ch) }() // Loop until the server side GoAway signal is propagated to the client. for { ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond) if _, err := tc.EmptyCall(ctx, &testpb.Empty{}); err != nil && grpc.Code(err) != codes.DeadlineExceeded { cancel() break } cancel() } // A new RPC should fail. 
if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); grpc.Code(err) != codes.Unavailable && grpc.Code(err) != codes.Internal { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, %s or %s", err, codes.Unavailable, codes.Internal) } <-ch awaitNewConnLogOutput() } func TestServerGoAwayPendingRPC(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { if e.name == "handler-tls" { continue } testServerGoAwayPendingRPC(t, e) } } func testServerGoAwayPendingRPC(t *testing.T, e env) { te := newTest(t, e) te.userAgent = testAppUA te.declareLogNoise( "transport: http2Client.notifyError got notified that the client transport was broken EOF", "grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing", "grpc: addrConn.resetTransport failed to create client transport: connection error", ) te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) ctx, cancel := context.WithCancel(context.Background()) stream, err := tc.FullDuplexCall(ctx, grpc.FailFast(false)) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } // Finish an RPC to make sure the connection is good. if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}, grpc.FailFast(false)); err != nil { t.Fatalf("%v.EmptyCall(_, _, _) = _, %v, want _, ", tc, err) } ch := make(chan struct{}) go func() { te.srv.GracefulStop() close(ch) }() // Loop until the server side GoAway signal is propagated to the client. for { ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond) if _, err := tc.EmptyCall(ctx, &testpb.Empty{}, grpc.FailFast(false)); err != nil { cancel() break } cancel() } respParam := []*testpb.ResponseParameters{ { Size: proto.Int32(1), }, } payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, int32(100)) if err != nil { t.Fatal(err) } req := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseParameters: respParam, Payload: payload, } // The existing RPC should be still good to proceed. if err := stream.Send(req); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, req, err) } if _, err := stream.Recv(); err != nil { t.Fatalf("%v.Recv() = _, %v, want _, ", stream, err) } cancel() <-ch awaitNewConnLogOutput() } func TestServerMultipleGoAwayPendingRPC(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { if e.name == "handler-tls" { continue } testServerMultipleGoAwayPendingRPC(t, e) } } func testServerMultipleGoAwayPendingRPC(t *testing.T, e env) { te := newTest(t, e) te.userAgent = testAppUA te.declareLogNoise( "transport: http2Client.notifyError got notified that the client transport was broken EOF", "grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing", "grpc: addrConn.resetTransport failed to create client transport: connection error", ) te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) ctx, cancel := context.WithCancel(context.Background()) stream, err := tc.FullDuplexCall(ctx, grpc.FailFast(false)) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } // Finish an RPC to make sure the connection is good. 
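// The EmptyCall below is only a warm-up: it guarantees the transport is fully
// established before the two concurrent GracefulStop calls are started, so
// that both of them observe the still-open stream (see the ch1/ch2 checks
// further down).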
if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}, grpc.FailFast(false)); err != nil { t.Fatalf("%v.EmptyCall(_, _, _) = _, %v, want _, ", tc, err) } ch1 := make(chan struct{}) go func() { te.srv.GracefulStop() close(ch1) }() ch2 := make(chan struct{}) go func() { te.srv.GracefulStop() close(ch2) }() // Loop until the server side GoAway signal is propagated to the client. for { ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond) if _, err := tc.EmptyCall(ctx, &testpb.Empty{}, grpc.FailFast(false)); err != nil { cancel() break } cancel() } select { case <-ch1: t.Fatal("GracefulStop() terminated early") case <-ch2: t.Fatal("GracefulStop() terminated early") default: } respParam := []*testpb.ResponseParameters{ { Size: proto.Int32(1), }, } payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, int32(100)) if err != nil { t.Fatal(err) } req := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseParameters: respParam, Payload: payload, } // The existing RPC should be still good to proceed. if err := stream.Send(req); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, req, err) } if _, err := stream.Recv(); err != nil { t.Fatalf("%v.Recv() = _, %v, want _, ", stream, err) } if err := stream.CloseSend(); err != nil { t.Fatalf("%v.CloseSend() = %v, want ", stream, err) } <-ch1 <-ch2 cancel() awaitNewConnLogOutput() } func TestConcurrentClientConnCloseAndServerGoAway(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { if e.name == "handler-tls" { continue } testConcurrentClientConnCloseAndServerGoAway(t, e) } } func testConcurrentClientConnCloseAndServerGoAway(t *testing.T, e env) { te := newTest(t, e) te.userAgent = testAppUA te.declareLogNoise( "transport: http2Client.notifyError got notified that the client transport was broken EOF", "grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing", "grpc: addrConn.resetTransport failed to create client transport: connection error", ) te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}, grpc.FailFast(false)); err != nil { t.Fatalf("%v.EmptyCall(_, _, _) = _, %v, want _, ", tc, err) } ch := make(chan struct{}) // Close ClientConn and Server concurrently. go func() { te.srv.GracefulStop() close(ch) }() go func() { cc.Close() }() <-ch } func TestConcurrentServerStopAndGoAway(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { if e.name == "handler-tls" { continue } testConcurrentServerStopAndGoAway(t, e) } } func testConcurrentServerStopAndGoAway(t *testing.T, e env) { te := newTest(t, e) te.userAgent = testAppUA te.declareLogNoise( "transport: http2Client.notifyError got notified that the client transport was broken EOF", "grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing", "grpc: addrConn.resetTransport failed to create client transport: connection error", ) te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) stream, err := tc.FullDuplexCall(context.Background(), grpc.FailFast(false)) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } // Finish an RPC to make sure the connection is good. 
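// This test first lets GracefulStop send its GoAway and then calls Stop while
// the drain is still in progress; the open stream is expected to break, i.e.
// either Send or Recv on it should return an error (checked below).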
if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}, grpc.FailFast(false)); err != nil { t.Fatalf("%v.EmptyCall(_, _, _) = _, %v, want _, ", tc, err) } ch := make(chan struct{}) go func() { te.srv.GracefulStop() close(ch) }() // Loop until the server side GoAway signal is propagated to the client. for { ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond) if _, err := tc.EmptyCall(ctx, &testpb.Empty{}, grpc.FailFast(false)); err != nil { cancel() break } cancel() } // Stop the server and close all the connections. te.srv.Stop() respParam := []*testpb.ResponseParameters{ { Size: proto.Int32(1), }, } payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, int32(100)) if err != nil { t.Fatal(err) } req := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseParameters: respParam, Payload: payload, } if err := stream.Send(req); err == nil { if _, err := stream.Recv(); err == nil { t.Fatalf("%v.Recv() = _, %v, want _, ", stream, err) } } <-ch awaitNewConnLogOutput() } func TestClientConnCloseAfterGoAwayWithActiveStream(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { if e.name == "handler-tls" { continue } testClientConnCloseAfterGoAwayWithActiveStream(t, e) } } func testClientConnCloseAfterGoAwayWithActiveStream(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) if _, err := tc.FullDuplexCall(context.Background()); err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want _, ", tc, err) } done := make(chan struct{}) go func() { te.srv.GracefulStop() close(done) }() time.Sleep(time.Second) cc.Close() timeout := time.NewTimer(time.Second) select { case <-done: case <-timeout.C: t.Fatalf("Test timed out.") } } func TestFailFast(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testFailFast(t, e) } } func testFailFast(t *testing.T, e env) { te := newTest(t, e) te.userAgent = testAppUA te.declareLogNoise( "transport: http2Client.notifyError got notified that the client transport was broken EOF", "grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing", "grpc: addrConn.resetTransport failed to create client transport: connection error", ) te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); err != nil { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, ", err) } // Stop the server and tear down all the existing connections. te.srv.Stop() // Loop until the server teardown is propagated to the client. for { _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}) if grpc.Code(err) == codes.Unavailable { break } fmt.Printf("%v.EmptyCall(_, _) = _, %v", tc, err) time.Sleep(10 * time.Millisecond) } // The client keeps reconnecting and ongoing fail-fast RPCs should fail with codes.Unavailable.
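// These calls use the default fail-fast behaviour, so while the client keeps
// trying to reconnect they should fail immediately with Unavailable rather
// than block; passing grpc.FailFast(false) would instead make them wait for a
// ready transport.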
if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); grpc.Code(err) != codes.Unavailable { t.Fatalf("TestService/EmptyCall(_, _, _) = _, %v, want _, error code: %s", err, codes.Unavailable) } if _, err := tc.StreamingInputCall(context.Background()); grpc.Code(err) != codes.Unavailable { t.Fatalf("TestService/StreamingInputCall(_) = _, %v, want _, error code: %s", err, codes.Unavailable) } awaitNewConnLogOutput() } func testServiceConfigSetup(t *testing.T, e env) (*test, chan grpc.ServiceConfig) { te := newTest(t, e) // We write before read. ch := make(chan grpc.ServiceConfig, 1) te.sc = ch te.userAgent = testAppUA te.declareLogNoise( "transport: http2Client.notifyError got notified that the client transport was broken EOF", "grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing", "grpc: addrConn.resetTransport failed to create client transport: connection error", "Failed to dial : context canceled; please retry.", ) return te, ch } func newBool(b bool) (a *bool) { return &b } func newInt(b int) (a *int) { return &b } func newDuration(b time.Duration) (a *time.Duration) { a = new(time.Duration) *a = b return } func TestServiceConfigGetMethodConfig(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testGetMethodConfig(t, e) } } func testGetMethodConfig(t *testing.T, e env) { te, ch := testServiceConfigSetup(t, e) defer te.tearDown() mc1 := grpc.MethodConfig{ WaitForReady: newBool(true), Timeout: newDuration(time.Millisecond), } mc2 := grpc.MethodConfig{WaitForReady: newBool(false)} m := make(map[string]grpc.MethodConfig) m["/grpc.testing.TestService/EmptyCall"] = mc1 m["/grpc.testing.TestService/"] = mc2 sc := grpc.ServiceConfig{ Methods: m, } ch <- sc cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) // The following RPCs are expected to become non-fail-fast ones with 1ms deadline. if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); grpc.Code(err) != codes.DeadlineExceeded { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, %s", err, codes.DeadlineExceeded) } m = make(map[string]grpc.MethodConfig) m["/grpc.testing.TestService/UnaryCall"] = mc1 m["/grpc.testing.TestService/"] = mc2 sc = grpc.ServiceConfig{ Methods: m, } ch <- sc // Wait for the new service config to propagate. for { if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); grpc.Code(err) == codes.DeadlineExceeded { continue } break } // The following RPCs are expected to become fail-fast. if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); grpc.Code(err) != codes.Unavailable { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, %s", err, codes.Unavailable) } } func TestServiceConfigWaitForReady(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testServiceConfigWaitForReady(t, e) } } func testServiceConfigWaitForReady(t *testing.T, e env) { te, ch := testServiceConfigSetup(t, e) defer te.tearDown() // Case1: Client API set failfast to be false, and service config set wait_for_ready to be false, Client API should win, and the rpc will wait until deadline exceeds. mc := grpc.MethodConfig{ WaitForReady: newBool(false), Timeout: newDuration(time.Millisecond), } m := make(map[string]grpc.MethodConfig) m["/grpc.testing.TestService/EmptyCall"] = mc m["/grpc.testing.TestService/FullDuplexCall"] = mc sc := grpc.ServiceConfig{ Methods: m, } ch <- sc cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) // The following RPCs are expected to become non-fail-fast ones with 1ms deadline. 
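// Note that no server is started in this test, so a wait-for-ready RPC can
// only finish when the 1ms Timeout from the service config expires. Case1
// checks that the per-call grpc.FailFast(false) option takes precedence over
// WaitForReady=false in the service config.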
if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}, grpc.FailFast(false)); grpc.Code(err) != codes.DeadlineExceeded { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, %s", err, codes.DeadlineExceeded) } if _, err := tc.FullDuplexCall(context.Background(), grpc.FailFast(false)); grpc.Code(err) != codes.DeadlineExceeded { t.Fatalf("TestService/FullDuplexCall(_) = _, %v, want %s", err, codes.DeadlineExceeded) } // Generate a service config update. // Case2: Client API does not set failfast, and service config set wait_for_ready to be true, and the rpc will wait until deadline exceeds. mc.WaitForReady = newBool(true) m = make(map[string]grpc.MethodConfig) m["/grpc.testing.TestService/EmptyCall"] = mc m["/grpc.testing.TestService/FullDuplexCall"] = mc sc = grpc.ServiceConfig{ Methods: m, } ch <- sc // Wait for the new service config to take effect. mc = cc.GetMethodConfig("/grpc.testing.TestService/EmptyCall") for { if !*mc.WaitForReady { time.Sleep(100 * time.Millisecond) mc = cc.GetMethodConfig("/grpc.testing.TestService/EmptyCall") continue } break } // The following RPCs are expected to become non-fail-fast ones with 1ms deadline. if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); grpc.Code(err) != codes.DeadlineExceeded { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, %s", err, codes.DeadlineExceeded) } if _, err := tc.FullDuplexCall(context.Background()); grpc.Code(err) != codes.DeadlineExceeded { t.Fatalf("TestService/FullDuplexCall(_) = _, %v, want %s", err, codes.DeadlineExceeded) } } func TestServiceConfigTimeout(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testServiceConfigTimeout(t, e) } } func testServiceConfigTimeout(t *testing.T, e env) { te, ch := testServiceConfigSetup(t, e) defer te.tearDown() // Case1: Client API sets timeout to be 1ns and ServiceConfig sets timeout to be 1hr. Timeout should be 1ns (min of 1ns and 1hr) and the rpc will wait until deadline exceeds. mc := grpc.MethodConfig{ Timeout: newDuration(time.Hour), } m := make(map[string]grpc.MethodConfig) m["/grpc.testing.TestService/EmptyCall"] = mc m["/grpc.testing.TestService/FullDuplexCall"] = mc sc := grpc.ServiceConfig{ Methods: m, } ch <- sc cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) // The following RPCs are expected to become non-fail-fast ones with 1ns deadline. ctx, cancel := context.WithTimeout(context.Background(), time.Nanosecond) if _, err := tc.EmptyCall(ctx, &testpb.Empty{}, grpc.FailFast(false)); grpc.Code(err) != codes.DeadlineExceeded { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, %s", err, codes.DeadlineExceeded) } cancel() ctx, cancel = context.WithTimeout(context.Background(), time.Nanosecond) if _, err := tc.FullDuplexCall(ctx, grpc.FailFast(false)); grpc.Code(err) != codes.DeadlineExceeded { t.Fatalf("TestService/FullDuplexCall(_) = _, %v, want %s", err, codes.DeadlineExceeded) } cancel() // Generate a service config update. // Case2: Client API sets timeout to be 1hr and ServiceConfig sets timeout to be 1ns. Timeout should be 1ns (min of 1ns and 1hr) and the rpc will wait until deadline exceeds. mc.Timeout = newDuration(time.Nanosecond) m = make(map[string]grpc.MethodConfig) m["/grpc.testing.TestService/EmptyCall"] = mc m["/grpc.testing.TestService/FullDuplexCall"] = mc sc = grpc.ServiceConfig{ Methods: m, } ch <- sc // Wait for the new service config to take effect. 
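// The service config is delivered asynchronously through the channel handed to
// grpc.WithServiceConfig (see clientConn above), so the test polls
// cc.GetMethodConfig until the new 1ns Timeout becomes visible before running
// the Case2 RPCs.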
mc = cc.GetMethodConfig("/grpc.testing.TestService/FullDuplexCall") for { if *mc.Timeout != time.Nanosecond { time.Sleep(100 * time.Millisecond) mc = cc.GetMethodConfig("/grpc.testing.TestService/FullDuplexCall") continue } break } ctx, cancel = context.WithTimeout(context.Background(), time.Hour) if _, err := tc.EmptyCall(ctx, &testpb.Empty{}, grpc.FailFast(false)); grpc.Code(err) != codes.DeadlineExceeded { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, %s", err, codes.DeadlineExceeded) } cancel() ctx, cancel = context.WithTimeout(context.Background(), time.Hour) if _, err := tc.FullDuplexCall(ctx, grpc.FailFast(false)); grpc.Code(err) != codes.DeadlineExceeded { t.Fatalf("TestService/FullDuplexCall(_) = _, %v, want %s", err, codes.DeadlineExceeded) } cancel() } func TestServiceConfigMaxMsgSize(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testServiceConfigMaxMsgSize(t, e) } } func testServiceConfigMaxMsgSize(t *testing.T, e env) { // Setting up values and objects shared across all test cases. const smallSize = 1 const largeSize = 1024 const extraLargeSize = 2048 smallPayload, err := newPayload(testpb.PayloadType_COMPRESSABLE, smallSize) if err != nil { t.Fatal(err) } largePayload, err := newPayload(testpb.PayloadType_COMPRESSABLE, largeSize) if err != nil { t.Fatal(err) } extraLargePayload, err := newPayload(testpb.PayloadType_COMPRESSABLE, extraLargeSize) if err != nil { t.Fatal(err) } mc := grpc.MethodConfig{ MaxReqSize: newInt(extraLargeSize), MaxRespSize: newInt(extraLargeSize), } m := make(map[string]grpc.MethodConfig) m["/grpc.testing.TestService/UnaryCall"] = mc m["/grpc.testing.TestService/FullDuplexCall"] = mc sc := grpc.ServiceConfig{ Methods: m, } // Case1: sc set maxReqSize to 2048 (send), maxRespSize to 2048 (recv). te1, ch1 := testServiceConfigSetup(t, e) te1.startServer(&testServer{security: e.security}) defer te1.tearDown() ch1 <- sc tc := testpb.NewTestServiceClient(te1.clientConn()) req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseSize: proto.Int32(int32(extraLargeSize)), Payload: smallPayload, } // Test for unary RPC recv. if _, err := tc.UnaryCall(context.Background(), req); err == nil || grpc.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } // Test for unary RPC send. req.Payload = extraLargePayload req.ResponseSize = proto.Int32(int32(smallSize)) if _, err := tc.UnaryCall(context.Background(), req); err == nil || grpc.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } // Test for streaming RPC recv. respParam := []*testpb.ResponseParameters{ { Size: proto.Int32(int32(extraLargeSize)), }, } sreq := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseParameters: respParam, Payload: smallPayload, } stream, err := tc.FullDuplexCall(te1.ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if err := stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } if _, err := stream.Recv(); err == nil || grpc.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Recv() = _, %v, want _, error code: %s", stream, err, codes.ResourceExhausted) } // Test for streaming RPC send. 
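// For the send direction the limit is enforced on the client before the
// message goes out on the wire, so the Send of the extra-large payload below
// is itself expected to fail with ResourceExhausted.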
respParam[0].Size = proto.Int32(int32(smallSize)) sreq.Payload = extraLargePayload stream, err = tc.FullDuplexCall(te1.ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if err := stream.Send(sreq); err == nil || grpc.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Send(%v) = %v, want _, error code: %s", stream, sreq, err, codes.ResourceExhausted) } // Case2: Client API set maxReqSize to 1024 (send), maxRespSize to 1024 (recv). Sc sets maxReqSize to 2048 (send), maxRespSize to 2048 (recv). te2, ch2 := testServiceConfigSetup(t, e) te2.maxClientReceiveMsgSize = newInt(1024) te2.maxClientSendMsgSize = newInt(1024) te2.startServer(&testServer{security: e.security}) defer te2.tearDown() ch2 <- sc tc = testpb.NewTestServiceClient(te2.clientConn()) // Test for unary RPC recv. req.Payload = smallPayload req.ResponseSize = proto.Int32(int32(largeSize)) if _, err := tc.UnaryCall(context.Background(), req); err == nil || grpc.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } // Test for unary RPC send. req.Payload = largePayload req.ResponseSize = proto.Int32(int32(smallSize)) if _, err := tc.UnaryCall(context.Background(), req); err == nil || grpc.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } // Test for streaming RPC recv. stream, err = tc.FullDuplexCall(te2.ctx) respParam[0].Size = proto.Int32(int32(largeSize)) sreq.Payload = smallPayload if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if err := stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } if _, err := stream.Recv(); err == nil || grpc.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Recv() = _, %v, want _, error code: %s", stream, err, codes.ResourceExhausted) } // Test for streaming RPC send. respParam[0].Size = proto.Int32(int32(smallSize)) sreq.Payload = largePayload stream, err = tc.FullDuplexCall(te2.ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if err := stream.Send(sreq); err == nil || grpc.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Send(%v) = %v, want _, error code: %s", stream, sreq, err, codes.ResourceExhausted) } // Case3: Client API set maxReqSize to 4096 (send), maxRespSize to 4096 (recv). Sc sets maxReqSize to 2048 (send), maxRespSize to 2048 (recv). te3, ch3 := testServiceConfigSetup(t, e) te3.maxClientReceiveMsgSize = newInt(4096) te3.maxClientSendMsgSize = newInt(4096) te3.startServer(&testServer{security: e.security}) defer te3.tearDown() ch3 <- sc tc = testpb.NewTestServiceClient(te3.clientConn()) // Test for unary RPC recv. req.Payload = smallPayload req.ResponseSize = proto.Int32(int32(largeSize)) if _, err := tc.UnaryCall(context.Background(), req); err != nil { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want ", err) } req.ResponseSize = proto.Int32(int32(extraLargeSize)) if _, err := tc.UnaryCall(context.Background(), req); err == nil || grpc.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } // Test for unary RPC send. 
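// Case3 sets the dial options to 4096 while the service config still says
// 2048, and the checks below show that the smaller of the two limits wins:
// the 1KB request is accepted, the 2KB one is rejected. Case2 above is the
// mirror image, with the dial option (1024) being the smaller value.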
req.Payload = largePayload req.ResponseSize = proto.Int32(int32(smallSize)) if _, err := tc.UnaryCall(context.Background(), req); err != nil { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want ", err) } req.Payload = extraLargePayload if _, err := tc.UnaryCall(context.Background(), req); err == nil || grpc.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } // Test for streaming RPC recv. stream, err = tc.FullDuplexCall(te3.ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } respParam[0].Size = proto.Int32(int32(largeSize)) sreq.Payload = smallPayload if err := stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } if _, err := stream.Recv(); err != nil { t.Fatalf("%v.Recv() = _, %v, want ", stream, err) } respParam[0].Size = proto.Int32(int32(extraLargeSize)) if err := stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } if _, err := stream.Recv(); err == nil || grpc.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Recv() = _, %v, want _, error code: %s", stream, err, codes.ResourceExhausted) } // Test for streaming RPC send. respParam[0].Size = proto.Int32(int32(smallSize)) sreq.Payload = largePayload stream, err = tc.FullDuplexCall(te3.ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if err := stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } sreq.Payload = extraLargePayload if err := stream.Send(sreq); err == nil || grpc.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Send(%v) = %v, want _, error code: %s", stream, sreq, err, codes.ResourceExhausted) } } func TestMaxMsgSizeClientDefault(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testMaxMsgSizeClientDefault(t, e) } } func testMaxMsgSizeClientDefault(t *testing.T, e env) { te := newTest(t, e) te.userAgent = testAppUA te.declareLogNoise( "transport: http2Client.notifyError got notified that the client transport was broken EOF", "grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing", "grpc: addrConn.resetTransport failed to create client transport: connection error", "Failed to dial : context canceled; please retry.", ) te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) const smallSize = 1 const largeSize = 4 * 1024 * 1024 smallPayload, err := newPayload(testpb.PayloadType_COMPRESSABLE, smallSize) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseSize: proto.Int32(int32(largeSize)), Payload: smallPayload, } // Test for unary RPC recv. if _, err := tc.UnaryCall(context.Background(), req); err == nil || grpc.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } respParam := []*testpb.ResponseParameters{ { Size: proto.Int32(int32(largeSize)), }, } sreq := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseParameters: respParam, Payload: smallPayload, } // Test for streaming RPC recv. 
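// No explicit size options are set in this test, so the client falls back to
// the library's default receive limit (4MB in this release); the 4MB response
// payload plus proto framing just exceeds it, hence the ResourceExhausted
// expected below.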
stream, err := tc.FullDuplexCall(te.ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if err := stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } if _, err := stream.Recv(); err == nil || grpc.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Recv() = _, %v, want _, error code: %s", stream, err, codes.ResourceExhausted) } } func TestMaxMsgSizeClientAPI(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testMaxMsgSizeClientAPI(t, e) } } func testMaxMsgSizeClientAPI(t *testing.T, e env) { te := newTest(t, e) te.userAgent = testAppUA // To avoid error on server side. te.maxServerSendMsgSize = newInt(5 * 1024 * 1024) te.maxClientReceiveMsgSize = newInt(1024) te.maxClientSendMsgSize = newInt(1024) te.declareLogNoise( "transport: http2Client.notifyError got notified that the client transport was broken EOF", "grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing", "grpc: addrConn.resetTransport failed to create client transport: connection error", "Failed to dial : context canceled; please retry.", ) te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) const smallSize = 1 const largeSize = 1024 smallPayload, err := newPayload(testpb.PayloadType_COMPRESSABLE, smallSize) if err != nil { t.Fatal(err) } largePayload, err := newPayload(testpb.PayloadType_COMPRESSABLE, largeSize) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseSize: proto.Int32(int32(largeSize)), Payload: smallPayload, } // Test for unary RPC recv. if _, err := tc.UnaryCall(context.Background(), req); err == nil || grpc.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } // Test for unary RPC send. req.Payload = largePayload req.ResponseSize = proto.Int32(int32(smallSize)) if _, err := tc.UnaryCall(context.Background(), req); err == nil || grpc.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } respParam := []*testpb.ResponseParameters{ { Size: proto.Int32(int32(largeSize)), }, } sreq := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseParameters: respParam, Payload: smallPayload, } // Test for streaming RPC recv. stream, err := tc.FullDuplexCall(te.ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if err := stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } if _, err := stream.Recv(); err == nil || grpc.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Recv() = _, %v, want _, error code: %s", stream, err, codes.ResourceExhausted) } // Test for streaming RPC send. 
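// The 1KB caps in this test come from te.maxClientReceiveMsgSize and
// te.maxClientSendMsgSize, which clientConn turns into default call options.
// Outside this harness the equivalent setup would look roughly like the
// following sketch (addr and conn are placeholders):
//
//	conn, err := grpc.Dial(addr,
//		grpc.WithDefaultCallOptions(
//			grpc.MaxCallRecvMsgSize(1024),
//			grpc.MaxCallSendMsgSize(1024),
//		),
//	)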
respParam[0].Size = proto.Int32(int32(smallSize)) sreq.Payload = largePayload stream, err = tc.FullDuplexCall(te.ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if err := stream.Send(sreq); err == nil || grpc.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Send(%v) = %v, want _, error code: %s", stream, sreq, err, codes.ResourceExhausted) } } func TestMaxMsgSizeServerAPI(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testMaxMsgSizeServerAPI(t, e) } } func testMaxMsgSizeServerAPI(t *testing.T, e env) { te := newTest(t, e) te.userAgent = testAppUA te.maxServerReceiveMsgSize = newInt(1024) te.maxServerSendMsgSize = newInt(1024) te.declareLogNoise( "transport: http2Client.notifyError got notified that the client transport was broken EOF", "grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing", "grpc: addrConn.resetTransport failed to create client transport: connection error", "Failed to dial : context canceled; please retry.", ) te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) const smallSize = 1 const largeSize = 1024 smallPayload, err := newPayload(testpb.PayloadType_COMPRESSABLE, smallSize) if err != nil { t.Fatal(err) } largePayload, err := newPayload(testpb.PayloadType_COMPRESSABLE, largeSize) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseSize: proto.Int32(int32(largeSize)), Payload: smallPayload, } // Test for unary RPC send. if _, err := tc.UnaryCall(context.Background(), req); err == nil || grpc.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } // Test for unary RPC recv. req.Payload = largePayload req.ResponseSize = proto.Int32(int32(smallSize)) if _, err := tc.UnaryCall(context.Background(), req); err == nil || grpc.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } respParam := []*testpb.ResponseParameters{ { Size: proto.Int32(int32(largeSize)), }, } sreq := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseParameters: respParam, Payload: smallPayload, } // Test for streaming RPC send. stream, err := tc.FullDuplexCall(te.ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if err := stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } if _, err := stream.Recv(); err == nil || grpc.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Recv() = _, %v, want _, error code: %s", stream, err, codes.ResourceExhausted) } // Test for streaming RPC recv. 
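// Here the limits live on the server (te.maxServerReceiveMsgSize and
// te.maxServerSendMsgSize), so the ResourceExhausted seen below is a status
// generated on the server and returned to the client, rather than a local
// client-side check as in the previous two tests.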
respParam[0].Size = proto.Int32(int32(smallSize)) sreq.Payload = largePayload stream, err = tc.FullDuplexCall(te.ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if err := stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } if _, err := stream.Recv(); err == nil || grpc.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Recv() = _, %v, want _, error code: %s", stream, err, codes.ResourceExhausted) } } func TestTap(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { if e.name == "handler-tls" { continue } testTap(t, e) } } type myTap struct { cnt int } func (t *myTap) handle(ctx context.Context, info *tap.Info) (context.Context, error) { if info != nil { if info.FullMethodName == "/grpc.testing.TestService/EmptyCall" { t.cnt++ } else if info.FullMethodName == "/grpc.testing.TestService/UnaryCall" { return nil, fmt.Errorf("tap error") } } return ctx, nil } func testTap(t *testing.T, e env) { te := newTest(t, e) te.userAgent = testAppUA ttap := &myTap{} te.tapHandle = ttap.handle te.declareLogNoise( "transport: http2Client.notifyError got notified that the client transport was broken EOF", "grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing", "grpc: addrConn.resetTransport failed to create client transport: connection error", ) te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); err != nil { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, ", err) } if ttap.cnt != 1 { t.Fatalf("Get the count in ttap %d, want 1", ttap.cnt) } payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, 31) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseSize: proto.Int32(45), Payload: payload, } if _, err := tc.UnaryCall(context.Background(), req); grpc.Code(err) != codes.Unavailable { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, %s", err, codes.Unavailable) } } func healthCheck(d time.Duration, cc *grpc.ClientConn, serviceName string) (*healthpb.HealthCheckResponse, error) { ctx, cancel := context.WithTimeout(context.Background(), d) defer cancel() hc := healthpb.NewHealthClient(cc) req := &healthpb.HealthCheckRequest{ Service: serviceName, } return hc.Check(ctx, req) } func TestHealthCheckOnSuccess(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testHealthCheckOnSuccess(t, e) } } func testHealthCheckOnSuccess(t *testing.T, e env) { te := newTest(t, e) hs := health.NewServer() hs.SetServingStatus("grpc.health.v1.Health", 1) te.healthServer = hs te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() if _, err := healthCheck(1*time.Second, cc, "grpc.health.v1.Health"); err != nil { t.Fatalf("Health/Check(_, _) = _, %v, want _, ", err) } } func TestHealthCheckOnFailure(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testHealthCheckOnFailure(t, e) } } func testHealthCheckOnFailure(t *testing.T, e env) { defer leakCheck(t)() te := newTest(t, e) te.declareLogNoise( "Failed to dial ", "grpc: the client connection is closing; please retry", ) hs := health.NewServer() hs.SetServingStatus("grpc.health.v1.HealthCheck", 1) te.healthServer = hs te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() wantErr := grpc.Errorf(codes.DeadlineExceeded, "context deadline 
exceeded") if _, err := healthCheck(0*time.Second, cc, "grpc.health.v1.Health"); !reflect.DeepEqual(err, wantErr) { t.Fatalf("Health/Check(_, _) = _, %v, want _, error code %s", err, codes.DeadlineExceeded) } awaitNewConnLogOutput() } func TestHealthCheckOff(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { // TODO(bradfitz): Temporarily skip this env due to #619. if e.name == "handler-tls" { continue } testHealthCheckOff(t, e) } } func testHealthCheckOff(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security}) defer te.tearDown() want := grpc.Errorf(codes.Unimplemented, "unknown service grpc.health.v1.Health") if _, err := healthCheck(1*time.Second, te.clientConn(), ""); !reflect.DeepEqual(err, want) { t.Fatalf("Health/Check(_, _) = _, %v, want _, %v", err, want) } } func TestUnknownHandler(t *testing.T) { defer leakCheck(t)() // An example unknownHandler that returns a different code and a different method, making sure that we do not // expose what methods are implemented to a client that is not authenticated. unknownHandler := func(srv interface{}, stream grpc.ServerStream) error { return grpc.Errorf(codes.Unauthenticated, "user unauthenticated") } for _, e := range listTestEnv() { // TODO(bradfitz): Temporarily skip this env due to #619. if e.name == "handler-tls" { continue } testUnknownHandler(t, e, unknownHandler) } } func testUnknownHandler(t *testing.T, e env, unknownHandler grpc.StreamHandler) { te := newTest(t, e) te.unknownHandler = unknownHandler te.startServer(&testServer{security: e.security}) defer te.tearDown() want := grpc.Errorf(codes.Unauthenticated, "user unauthenticated") if _, err := healthCheck(1*time.Second, te.clientConn(), ""); !reflect.DeepEqual(err, want) { t.Fatalf("Health/Check(_, _) = _, %v, want _, %v", err, want) } } func TestHealthCheckServingStatus(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testHealthCheckServingStatus(t, e) } } func testHealthCheckServingStatus(t *testing.T, e env) { te := newTest(t, e) hs := health.NewServer() te.healthServer = hs te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() out, err := healthCheck(1*time.Second, cc, "") if err != nil { t.Fatalf("Health/Check(_, _) = _, %v, want _, ", err) } if out.Status != healthpb.HealthCheckResponse_SERVING { t.Fatalf("Got the serving status %v, want SERVING", out.Status) } wantErr := grpc.Errorf(codes.NotFound, "unknown service") if _, err := healthCheck(1*time.Second, cc, "grpc.health.v1.Health"); !reflect.DeepEqual(err, wantErr) { t.Fatalf("Health/Check(_, _) = _, %v, want _, error code %s", err, codes.NotFound) } hs.SetServingStatus("grpc.health.v1.Health", healthpb.HealthCheckResponse_SERVING) out, err = healthCheck(1*time.Second, cc, "grpc.health.v1.Health") if err != nil { t.Fatalf("Health/Check(_, _) = _, %v, want _, ", err) } if out.Status != healthpb.HealthCheckResponse_SERVING { t.Fatalf("Got the serving status %v, want SERVING", out.Status) } hs.SetServingStatus("grpc.health.v1.Health", healthpb.HealthCheckResponse_NOT_SERVING) out, err = healthCheck(1*time.Second, cc, "grpc.health.v1.Health") if err != nil { t.Fatalf("Health/Check(_, _) = _, %v, want _, ", err) } if out.Status != healthpb.HealthCheckResponse_NOT_SERVING { t.Fatalf("Got the serving status %v, want NOT_SERVING", out.Status) } } func TestErrorChanNoIO(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testErrorChanNoIO(t, e) } } func testErrorChanNoIO(t *testing.T, e env) 
{ te := newTest(t, e) te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) if _, err := tc.FullDuplexCall(context.Background()); err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } } func TestEmptyUnaryWithUserAgent(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testEmptyUnaryWithUserAgent(t, e) } } func testEmptyUnaryWithUserAgent(t *testing.T, e env) { te := newTest(t, e) te.userAgent = testAppUA te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) var header metadata.MD reply, err := tc.EmptyCall(context.Background(), &testpb.Empty{}, grpc.Header(&header)) if err != nil || !proto.Equal(&testpb.Empty{}, reply) { t.Fatalf("TestService/EmptyCall(_, _) = %v, %v, want %v, ", reply, err, &testpb.Empty{}) } if v, ok := header["ua"]; !ok || !strings.HasPrefix(v[0], testAppUA) { t.Fatalf("header[\"ua\"] = %q, %t, want string with prefix %q, true", v, ok, testAppUA) } te.srv.Stop() } func TestFailedEmptyUnary(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { if e.name == "handler-tls" { // This test covers status details, but // Grpc-Status-Details-Bin is not supported in handler_server. continue } testFailedEmptyUnary(t, e) } } func testFailedEmptyUnary(t *testing.T, e env) { te := newTest(t, e) te.userAgent = failAppUA te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) ctx := metadata.NewOutgoingContext(context.Background(), testMetadata) wantErr := detailedError if _, err := tc.EmptyCall(ctx, &testpb.Empty{}); !reflect.DeepEqual(err, wantErr) { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, %v", err, wantErr) } } func TestLargeUnary(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testLargeUnary(t, e) } } func testLargeUnary(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) const argSize = 271828 const respSize = 314159 payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, argSize) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseSize: proto.Int32(respSize), Payload: payload, } reply, err := tc.UnaryCall(context.Background(), req) if err != nil { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, ", err) } pt := reply.GetPayload().GetType() ps := len(reply.GetPayload().GetBody()) if pt != testpb.PayloadType_COMPRESSABLE || ps != respSize { t.Fatalf("Got the reply with type %d len %d; want %d, %d", pt, ps, testpb.PayloadType_COMPRESSABLE, respSize) } } // Test backward-compatibility API for setting msg size limit. func TestExceedMsgLimit(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testExceedMsgLimit(t, e) } } func testExceedMsgLimit(t *testing.T, e env) { te := newTest(t, e) te.maxMsgSize = newInt(1024) te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) argSize := int32(*te.maxMsgSize + 1) const smallSize = 1 payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, argSize) if err != nil { t.Fatal(err) } smallPayload, err := newPayload(testpb.PayloadType_COMPRESSABLE, smallSize) if err != nil { t.Fatal(err) } // Test on server side for unary RPC.
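// te.maxMsgSize exercises the older single-knob API, which caps both requests
// and responses at 1024 bytes; on the client this is grpc.WithMaxMsgSize (see
// clientConn above), and the server side presumably uses the matching
// grpc.MaxMsgSize server option. Roughly, as a sketch only:
//
//	conn, err := grpc.Dial(addr, grpc.WithMaxMsgSize(1024)) // client side
//	srv := grpc.NewServer(grpc.MaxMsgSize(1024))            // server side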
req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseSize: proto.Int32(smallSize), Payload: payload, } if _, err := tc.UnaryCall(context.Background(), req); err == nil || grpc.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } // Test on client side for unary RPC. req.ResponseSize = proto.Int32(int32(*te.maxMsgSize) + 1) req.Payload = smallPayload if _, err := tc.UnaryCall(context.Background(), req); err == nil || grpc.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } // Test on server side for streaming RPC. stream, err := tc.FullDuplexCall(te.ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } respParam := []*testpb.ResponseParameters{ { Size: proto.Int32(1), }, } spayload, err := newPayload(testpb.PayloadType_COMPRESSABLE, int32(*te.maxMsgSize+1)) if err != nil { t.Fatal(err) } sreq := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseParameters: respParam, Payload: spayload, } if err := stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } if _, err := stream.Recv(); err == nil || grpc.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Recv() = _, %v, want _, error code: %s", stream, err, codes.ResourceExhausted) } // Test on client side for streaming RPC. stream, err = tc.FullDuplexCall(te.ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } respParam[0].Size = proto.Int32(int32(*te.maxMsgSize) + 1) sreq.Payload = smallPayload if err := stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } if _, err := stream.Recv(); err == nil || grpc.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Recv() = _, %v, want _, error code: %s", stream, err, codes.ResourceExhausted) } } func TestPeerClientSide(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testPeerClientSide(t, e) } } func testPeerClientSide(t *testing.T, e env) { te := newTest(t, e) te.userAgent = testAppUA te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) peer := new(peer.Peer) if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}, grpc.Peer(peer), grpc.FailFast(false)); err != nil { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, ", err) } pa := peer.Addr.String() if e.network == "unix" { if pa != te.srvAddr { t.Fatalf("peer.Addr = %v, want %v", pa, te.srvAddr) } return } _, pp, err := net.SplitHostPort(pa) if err != nil { t.Fatalf("Failed to parse address from peer.") } _, sp, err := net.SplitHostPort(te.srvAddr) if err != nil { t.Fatalf("Failed to parse address of test server.") } if pp != sp { t.Fatalf("peer.Addr = localhost:%v, want localhost:%v", pp, sp) } } // TestPeerNegative tests that if call fails setting peer // doesn't cause a segmentation fault. 
// issue#1141 https://github.com/grpc/grpc-go/issues/1141 func TestPeerNegative(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testPeerNegative(t, e) } } func testPeerNegative(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) peer := new(peer.Peer) ctx, cancel := context.WithCancel(context.Background()) cancel() tc.EmptyCall(ctx, &testpb.Empty{}, grpc.Peer(peer)) } func TestPeerFailedRPC(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testPeerFailedRPC(t, e) } } func testPeerFailedRPC(t *testing.T, e env) { te := newTest(t, e) te.maxServerReceiveMsgSize = newInt(1 * 1024) te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) // first make a successful request to the server if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); err != nil { t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, ", err) } // make a second request that will be rejected by the server const largeSize = 5 * 1024 largePayload, err := newPayload(testpb.PayloadType_COMPRESSABLE, largeSize) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), Payload: largePayload, } peer := new(peer.Peer) if _, err := tc.UnaryCall(context.Background(), req, grpc.Peer(peer)); err == nil || grpc.Code(err) != codes.ResourceExhausted { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code: %s", err, codes.ResourceExhausted) } else { pa := peer.Addr.String() if e.network == "unix" { if pa != te.srvAddr { t.Fatalf("peer.Addr = %v, want %v", pa, te.srvAddr) } return } _, pp, err := net.SplitHostPort(pa) if err != nil { t.Fatalf("Failed to parse address from peer.") } _, sp, err := net.SplitHostPort(te.srvAddr) if err != nil { t.Fatalf("Failed to parse address of test server.") } if pp != sp { t.Fatalf("peer.Addr = localhost:%v, want localhost:%v", pp, sp) } } } func TestMetadataUnaryRPC(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testMetadataUnaryRPC(t, e) } } func testMetadataUnaryRPC(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) const argSize = 2718 const respSize = 314 payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, argSize) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseSize: proto.Int32(respSize), Payload: payload, } var header, trailer metadata.MD ctx := metadata.NewOutgoingContext(context.Background(), testMetadata) if _, err := tc.UnaryCall(ctx, req, grpc.Header(&header), grpc.Trailer(&trailer)); err != nil { t.Fatalf("TestService.UnaryCall(%v, _, _, _) = _, %v; want _, ", ctx, err) } // Ignore optional response headers that Servers may set: if header != nil { delete(header, "trailer") // RFC 2616 says server SHOULD (but optional) declare trailers delete(header, "date") // the Date header is also optional delete(header, "user-agent") } if !reflect.DeepEqual(header, testMetadata) { t.Fatalf("Received header metadata %v, want %v", header, testMetadata) } if !reflect.DeepEqual(trailer, testTrailerMetadata) { t.Fatalf("Received trailer metadata %v, want %v", trailer, testTrailerMetadata) } } func TestMultipleSetTrailerUnaryRPC(t *testing.T) { defer leakCheck(t)() for _, e := range 
listTestEnv() { testMultipleSetTrailerUnaryRPC(t, e) } } func testMultipleSetTrailerUnaryRPC(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security, multipleSetTrailer: true}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) const ( argSize = 1 respSize = 1 ) payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, argSize) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseSize: proto.Int32(respSize), Payload: payload, } var trailer metadata.MD ctx := metadata.NewOutgoingContext(context.Background(), testMetadata) if _, err := tc.UnaryCall(ctx, req, grpc.Trailer(&trailer), grpc.FailFast(false)); err != nil { t.Fatalf("TestService.UnaryCall(%v, _, _, _) = _, %v; want _, ", ctx, err) } expectedTrailer := metadata.Join(testTrailerMetadata, testTrailerMetadata2) if !reflect.DeepEqual(trailer, expectedTrailer) { t.Fatalf("Received trailer metadata %v, want %v", trailer, expectedTrailer) } } func TestMultipleSetTrailerStreamingRPC(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testMultipleSetTrailerStreamingRPC(t, e) } } func testMultipleSetTrailerStreamingRPC(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security, multipleSetTrailer: true}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) ctx := metadata.NewOutgoingContext(context.Background(), testMetadata) stream, err := tc.FullDuplexCall(ctx, grpc.FailFast(false)) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if err := stream.CloseSend(); err != nil { t.Fatalf("%v.CloseSend() got %v, want %v", stream, err, nil) } if _, err := stream.Recv(); err != io.EOF { t.Fatalf("%v failed to complele the FullDuplexCall: %v", stream, err) } trailer := stream.Trailer() expectedTrailer := metadata.Join(testTrailerMetadata, testTrailerMetadata2) if !reflect.DeepEqual(trailer, expectedTrailer) { t.Fatalf("Received trailer metadata %v, want %v", trailer, expectedTrailer) } } func TestSetAndSendHeaderUnaryRPC(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { if e.name == "handler-tls" { continue } testSetAndSendHeaderUnaryRPC(t, e) } } // To test header metadata is sent on SendHeader(). func testSetAndSendHeaderUnaryRPC(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security, setAndSendHeader: true}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) const ( argSize = 1 respSize = 1 ) payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, argSize) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseSize: proto.Int32(respSize), Payload: payload, } var header metadata.MD ctx := metadata.NewOutgoingContext(context.Background(), testMetadata) if _, err := tc.UnaryCall(ctx, req, grpc.Header(&header), grpc.FailFast(false)); err != nil { t.Fatalf("TestService.UnaryCall(%v, _, _, _) = _, %v; want _, ", ctx, err) } delete(header, "user-agent") expectedHeader := metadata.Join(testMetadata, testMetadata2) if !reflect.DeepEqual(header, expectedHeader) { t.Fatalf("Received header metadata %v, want %v", header, expectedHeader) } } func TestMultipleSetHeaderUnaryRPC(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { if e.name == "handler-tls" { continue } testMultipleSetHeaderUnaryRPC(t, e) } } // To test header metadata is sent when sending response. 
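// With setHeaderOnly the test server is expected to call SetHeader more than
// once and never call SendHeader itself, so the accumulated header is flushed
// together with the response; the client should then see the joined metadata
// (testMetadata + testMetadata2), which is what the check below asserts.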
func testMultipleSetHeaderUnaryRPC(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security, setHeaderOnly: true}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) const ( argSize = 1 respSize = 1 ) payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, argSize) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseSize: proto.Int32(respSize), Payload: payload, } var header metadata.MD ctx := metadata.NewOutgoingContext(context.Background(), testMetadata) if _, err := tc.UnaryCall(ctx, req, grpc.Header(&header), grpc.FailFast(false)); err != nil { t.Fatalf("TestService.UnaryCall(%v, _, _, _) = _, %v; want _, ", ctx, err) } delete(header, "user-agent") expectedHeader := metadata.Join(testMetadata, testMetadata2) if !reflect.DeepEqual(header, expectedHeader) { t.Fatalf("Received header metadata %v, want %v", header, expectedHeader) } } func TestMultipleSetHeaderUnaryRPCError(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { if e.name == "handler-tls" { continue } testMultipleSetHeaderUnaryRPCError(t, e) } } // To test header metadata is sent when sending status. func testMultipleSetHeaderUnaryRPCError(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security, setHeaderOnly: true}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) const ( argSize = 1 respSize = -1 // Invalid respSize to make RPC fail. ) payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, argSize) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseSize: proto.Int32(respSize), Payload: payload, } var header metadata.MD ctx := metadata.NewOutgoingContext(context.Background(), testMetadata) if _, err := tc.UnaryCall(ctx, req, grpc.Header(&header), grpc.FailFast(false)); err == nil { t.Fatalf("TestService.UnaryCall(%v, _, _, _) = _, %v; want _, ", ctx, err) } delete(header, "user-agent") expectedHeader := metadata.Join(testMetadata, testMetadata2) if !reflect.DeepEqual(header, expectedHeader) { t.Fatalf("Received header metadata %v, want %v", header, expectedHeader) } } func TestSetAndSendHeaderStreamingRPC(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { if e.name == "handler-tls" { continue } testSetAndSendHeaderStreamingRPC(t, e) } } // To test header metadata is sent on SendHeader(). 
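// Streaming variant of the SendHeader test: the client drains the stream and
// only then reads the header via stream.Header(), again expecting the joined
// testMetadata and testMetadata2.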
func testSetAndSendHeaderStreamingRPC(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security, setAndSendHeader: true}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) const ( argSize = 1 respSize = 1 ) ctx := metadata.NewOutgoingContext(context.Background(), testMetadata) stream, err := tc.FullDuplexCall(ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if err := stream.CloseSend(); err != nil { t.Fatalf("%v.CloseSend() got %v, want %v", stream, err, nil) } if _, err := stream.Recv(); err != io.EOF { t.Fatalf("%v failed to complele the FullDuplexCall: %v", stream, err) } header, err := stream.Header() if err != nil { t.Fatalf("%v.Header() = _, %v, want _, ", stream, err) } delete(header, "user-agent") expectedHeader := metadata.Join(testMetadata, testMetadata2) if !reflect.DeepEqual(header, expectedHeader) { t.Fatalf("Received header metadata %v, want %v", header, expectedHeader) } } func TestMultipleSetHeaderStreamingRPC(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { if e.name == "handler-tls" { continue } testMultipleSetHeaderStreamingRPC(t, e) } } // To test header metadata is sent when sending response. func testMultipleSetHeaderStreamingRPC(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security, setHeaderOnly: true}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) const ( argSize = 1 respSize = 1 ) ctx := metadata.NewOutgoingContext(context.Background(), testMetadata) stream, err := tc.FullDuplexCall(ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, argSize) if err != nil { t.Fatal(err) } req := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseParameters: []*testpb.ResponseParameters{ {Size: proto.Int32(respSize)}, }, Payload: payload, } if err := stream.Send(req); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, req, err) } if _, err := stream.Recv(); err != nil { t.Fatalf("%v.Recv() = %v, want ", stream, err) } if err := stream.CloseSend(); err != nil { t.Fatalf("%v.CloseSend() got %v, want %v", stream, err, nil) } if _, err := stream.Recv(); err != io.EOF { t.Fatalf("%v failed to complele the FullDuplexCall: %v", stream, err) } header, err := stream.Header() if err != nil { t.Fatalf("%v.Header() = _, %v, want _, ", stream, err) } delete(header, "user-agent") expectedHeader := metadata.Join(testMetadata, testMetadata2) if !reflect.DeepEqual(header, expectedHeader) { t.Fatalf("Received header metadata %v, want %v", header, expectedHeader) } } func TestMultipleSetHeaderStreamingRPCError(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { if e.name == "handler-tls" { continue } testMultipleSetHeaderStreamingRPCError(t, e) } } // To test header metadata is sent when sending status. 
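// Error variant: the respSize of -1 below makes the handler fail, and the
// point of the test is that headers set via SetHeader still reach the client
// together with the error status.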
func testMultipleSetHeaderStreamingRPCError(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security, setHeaderOnly: true}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) const ( argSize = 1 respSize = -1 ) ctx := metadata.NewOutgoingContext(context.Background(), testMetadata) stream, err := tc.FullDuplexCall(ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, argSize) if err != nil { t.Fatal(err) } req := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseParameters: []*testpb.ResponseParameters{ {Size: proto.Int32(respSize)}, }, Payload: payload, } if err := stream.Send(req); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, req, err) } if _, err := stream.Recv(); err == nil { t.Fatalf("%v.Recv() = %v, want ", stream, err) } header, err := stream.Header() if err != nil { t.Fatalf("%v.Header() = _, %v, want _, ", stream, err) } delete(header, "user-agent") expectedHeader := metadata.Join(testMetadata, testMetadata2) if !reflect.DeepEqual(header, expectedHeader) { t.Fatalf("Received header metadata %v, want %v", header, expectedHeader) } if err := stream.CloseSend(); err != nil { t.Fatalf("%v.CloseSend() got %v, want %v", stream, err, nil) } } // TestMalformedHTTP2Metadata verifies the error returned when the client // sends illegal metadata. func TestMalformedHTTP2Metadata(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { if e.name == "handler-tls" { // Failed with "server stops accepting new RPCs". // Server stops accepting new RPCs when the client sends an illegal http2 header. continue } testMalformedHTTP2Metadata(t, e) } } func testMalformedHTTP2Metadata(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, 2718) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseSize: proto.Int32(314), Payload: payload, } ctx := metadata.NewOutgoingContext(context.Background(), malformedHTTP2Metadata) if _, err := tc.UnaryCall(ctx, req); grpc.Code(err) != codes.Internal { t.Fatalf("TestService.UnaryCall(%v, _) = _, %v; want _, %s", ctx, err, codes.Internal) } } func performOneRPC(t *testing.T, tc testpb.TestServiceClient, wg *sync.WaitGroup) { defer wg.Done() const argSize = 2718 const respSize = 314 payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, argSize) if err != nil { t.Error(err) return } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseSize: proto.Int32(respSize), Payload: payload, } reply, err := tc.UnaryCall(context.Background(), req, grpc.FailFast(false)) if err != nil { t.Errorf("TestService/UnaryCall(_, _) = _, %v, want _, ", err) return } pt := reply.GetPayload().GetType() ps := len(reply.GetPayload().GetBody()) if pt != testpb.PayloadType_COMPRESSABLE || ps != respSize { t.Errorf("Got reply with type %d len %d; want %d, %d", pt, ps, testpb.PayloadType_COMPRESSABLE, respSize) return } } func TestRetry(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { if e.name == "handler-tls" { // In race mode, with go1.6, the test never returns with handler_server.
continue } testRetry(t, e) } } // This test mimics a user who sends 1000 RPCs concurrently on a faulty transport. // TODO(zhaoq): Refactor to make this clearer and add more cases to test racy // and error-prone paths. func testRetry(t *testing.T, e env) { te := newTest(t, e) te.declareLogNoise("transport: http2Client.notifyError got notified that the client transport was broken") te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) var wg sync.WaitGroup numRPC := 1000 rpcSpacing := 2 * time.Millisecond if raceMode { // The race detector has a limit on how many goroutines it can track. // This test is near the upper limit, and goes over the limit // depending on the environment (the http.Handler environment uses // more goroutines) t.Logf("Shortening test in race mode.") numRPC /= 2 rpcSpacing *= 2 } wg.Add(1) go func() { // Halfway through starting RPCs, kill all connections: time.Sleep(time.Duration(numRPC/2) * rpcSpacing) // The server shuts down the network connection to make a // transport error which will be detected by the client side // code. internal.TestingCloseConns(te.srv) wg.Done() }() // All these RPCs should succeed eventually. for i := 0; i < numRPC; i++ { time.Sleep(rpcSpacing) wg.Add(1) go performOneRPC(t, tc, &wg) } wg.Wait() } func TestRPCTimeout(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testRPCTimeout(t, e) } } // TODO(zhaoq): Have a better test coverage of timeout and cancellation mechanism. func testRPCTimeout(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) const argSize = 2718 const respSize = 314 payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, argSize) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseSize: proto.Int32(respSize), Payload: payload, } for i := -1; i <= 10; i++ { ctx, cancel := context.WithTimeout(context.Background(), time.Duration(i)*time.Millisecond) if _, err := tc.UnaryCall(ctx, req); grpc.Code(err) != codes.DeadlineExceeded { t.Fatalf("TestService/UnaryCallv(_, _) = _, %v; want , error code: %s", err, codes.DeadlineExceeded) } cancel() } } func TestCancel(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testCancel(t, e) } } func testCancel(t *testing.T, e env) { te := newTest(t, e) te.declareLogNoise("grpc: the client connection is closing; please retry") te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) const argSize = 2718 const respSize = 314 payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, argSize) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseSize: proto.Int32(respSize), Payload: payload, } ctx, cancel := context.WithCancel(context.Background()) time.AfterFunc(1*time.Millisecond, cancel) if r, err := tc.UnaryCall(ctx, req); grpc.Code(err) != codes.Canceled { t.Fatalf("TestService/UnaryCall(_, _) = %v, %v; want _, error code: %s", r, err, codes.Canceled) } awaitNewConnLogOutput() } func TestCancelNoIO(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testCancelNoIO(t, e) } } func testCancelNoIO(t *testing.T, e env) { te := newTest(t, e) te.declareLogNoise("http2Client.notifyError got notified that the client transport was 
broken") te.maxStream = 1 // Only allows 1 live stream per server transport. te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) // Start one blocked RPC for which we'll never send streaming // input. This will consume the 1 maximum concurrent streams, // causing future RPCs to hang. ctx, cancelFirst := context.WithCancel(context.Background()) _, err := tc.StreamingInputCall(ctx) if err != nil { t.Fatalf("%v.StreamingInputCall(_) = _, %v, want _, ", tc, err) } // Loop until the ClientConn receives the initial settings // frame from the server, notifying it about the maximum // concurrent streams. We know when it's received it because // an RPC will fail with codes.DeadlineExceeded instead of // succeeding. // TODO(bradfitz): add internal test hook for this (Issue 534) for { ctx, cancelSecond := context.WithTimeout(context.Background(), 250*time.Millisecond) _, err := tc.StreamingInputCall(ctx) cancelSecond() if err == nil { time.Sleep(50 * time.Millisecond) continue } if grpc.Code(err) == codes.DeadlineExceeded { break } t.Fatalf("%v.StreamingInputCall(_) = _, %v, want _, %s", tc, err, codes.DeadlineExceeded) } // If there are any RPCs in flight before the client receives // the max streams setting, let them be expired. // TODO(bradfitz): add internal test hook for this (Issue 534) time.Sleep(500 * time.Millisecond) ch := make(chan struct{}) go func() { defer close(ch) // This should be blocked until the 1st is canceled. ctx, cancelThird := context.WithTimeout(context.Background(), 2*time.Second) if _, err := tc.StreamingInputCall(ctx); err != nil { t.Errorf("%v.StreamingInputCall(_) = _, %v, want _, ", tc, err) } cancelThird() }() cancelFirst() <-ch } // The following tests the gRPC streaming RPC implementations. // TODO(zhaoq): Have better coverage on error cases. 
var ( reqSizes = []int{27182, 8, 1828, 45904} respSizes = []int{31415, 9, 2653, 58979} ) func TestNoService(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testNoService(t, e) } } func testNoService(t *testing.T, e env) { te := newTest(t, e) te.startServer(nil) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) stream, err := tc.FullDuplexCall(te.ctx, grpc.FailFast(false)) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if _, err := stream.Recv(); grpc.Code(err) != codes.Unimplemented { t.Fatalf("stream.Recv() = _, %v, want _, error code %s", err, codes.Unimplemented) } } func TestPingPong(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testPingPong(t, e) } } func testPingPong(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) stream, err := tc.FullDuplexCall(te.ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } var index int for index < len(reqSizes) { respParam := []*testpb.ResponseParameters{ { Size: proto.Int32(int32(respSizes[index])), }, } payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, int32(reqSizes[index])) if err != nil { t.Fatal(err) } req := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseParameters: respParam, Payload: payload, } if err := stream.Send(req); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, req, err) } reply, err := stream.Recv() if err != nil { t.Fatalf("%v.Recv() = %v, want ", stream, err) } pt := reply.GetPayload().GetType() if pt != testpb.PayloadType_COMPRESSABLE { t.Fatalf("Got the reply of type %d, want %d", pt, testpb.PayloadType_COMPRESSABLE) } size := len(reply.GetPayload().GetBody()) if size != int(respSizes[index]) { t.Fatalf("Got reply body of length %d, want %d", size, respSizes[index]) } index++ } if err := stream.CloseSend(); err != nil { t.Fatalf("%v.CloseSend() got %v, want %v", stream, err, nil) } if _, err := stream.Recv(); err != io.EOF { t.Fatalf("%v failed to complete the ping pong test: %v", stream, err) } } func TestMetadataStreamingRPC(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testMetadataStreamingRPC(t, e) } } func testMetadataStreamingRPC(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) ctx := metadata.NewOutgoingContext(te.ctx, testMetadata) stream, err := tc.FullDuplexCall(ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } go func() { headerMD, err := stream.Header() if e.security == "tls" { delete(headerMD, "transport_security_type") } delete(headerMD, "trailer") // ignore if present delete(headerMD, "user-agent") if err != nil || !reflect.DeepEqual(testMetadata, headerMD) { t.Errorf("#1 %v.Header() = %v, %v, want %v, ", stream, headerMD, err, testMetadata) } // test the cached value.
headerMD, err = stream.Header() delete(headerMD, "trailer") // ignore if present delete(headerMD, "user-agent") if err != nil || !reflect.DeepEqual(testMetadata, headerMD) { t.Errorf("#2 %v.Header() = %v, %v, want %v, ", stream, headerMD, err, testMetadata) } err = func() error { for index := 0; index < len(reqSizes); index++ { respParam := []*testpb.ResponseParameters{ { Size: proto.Int32(int32(respSizes[index])), }, } payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, int32(reqSizes[index])) if err != nil { return err } req := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseParameters: respParam, Payload: payload, } if err := stream.Send(req); err != nil { return fmt.Errorf("%v.Send(%v) = %v, want ", stream, req, err) } } return nil }() // Tell the server we're done sending args. stream.CloseSend() if err != nil { t.Error(err) } }() for { if _, err := stream.Recv(); err != nil { break } } trailerMD := stream.Trailer() if !reflect.DeepEqual(testTrailerMetadata, trailerMD) { t.Fatalf("%v.Trailer() = %v, want %v", stream, trailerMD, testTrailerMetadata) } } func TestServerStreaming(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testServerStreaming(t, e) } } func testServerStreaming(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) respParam := make([]*testpb.ResponseParameters, len(respSizes)) for i, s := range respSizes { respParam[i] = &testpb.ResponseParameters{ Size: proto.Int32(int32(s)), } } req := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseParameters: respParam, } stream, err := tc.StreamingOutputCall(context.Background(), req) if err != nil { t.Fatalf("%v.StreamingOutputCall(_) = _, %v, want ", tc, err) } var rpcStatus error var respCnt int var index int for { reply, err := stream.Recv() if err != nil { rpcStatus = err break } pt := reply.GetPayload().GetType() if pt != testpb.PayloadType_COMPRESSABLE { t.Fatalf("Got the reply of type %d, want %d", pt, testpb.PayloadType_COMPRESSABLE) } size := len(reply.GetPayload().GetBody()) if size != int(respSizes[index]) { t.Fatalf("Got reply body of length %d, want %d", size, respSizes[index]) } index++ respCnt++ } if rpcStatus != io.EOF { t.Fatalf("Failed to finish the server streaming rpc: %v, want ", rpcStatus) } if respCnt != len(respSizes) { t.Fatalf("Got %d replies, want %d", respCnt, len(respSizes)) } } func TestFailedServerStreaming(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testFailedServerStreaming(t, e) } } func testFailedServerStreaming(t *testing.T, e env) { te := newTest(t, e) te.userAgent = failAppUA te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) respParam := make([]*testpb.ResponseParameters, len(respSizes)) for i, s := range respSizes { respParam[i] = &testpb.ResponseParameters{ Size: proto.Int32(int32(s)), } } req := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseParameters: respParam, } ctx := metadata.NewOutgoingContext(te.ctx, testMetadata) stream, err := tc.StreamingOutputCall(ctx, req) if err != nil { t.Fatalf("%v.StreamingOutputCall(_) = _, %v, want ", tc, err) } wantErr := grpc.Errorf(codes.DataLoss, "error for testing: "+failAppUA) if _, err := stream.Recv(); !reflect.DeepEqual(err, wantErr) { t.Fatalf("%v.Recv() = _, %v, 
want _, %v", stream, err, wantErr) } } // concurrentSendServer is a TestServiceServer whose // StreamingOutputCall makes ten serial Send calls, sending payloads // "0".."9", inclusive. TestServerStreamingConcurrent verifies they // were received in the correct order, and that there were no races. // // All other TestServiceServer methods crash if called. type concurrentSendServer struct { testpb.TestServiceServer } func (s concurrentSendServer) StreamingOutputCall(args *testpb.StreamingOutputCallRequest, stream testpb.TestService_StreamingOutputCallServer) error { for i := 0; i < 10; i++ { stream.Send(&testpb.StreamingOutputCallResponse{ Payload: &testpb.Payload{ Body: []byte{'0' + uint8(i)}, }, }) } return nil } // Tests doing a bunch of concurrent streaming output calls. func TestServerStreamingConcurrent(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testServerStreamingConcurrent(t, e) } } func testServerStreamingConcurrent(t *testing.T, e env) { te := newTest(t, e) te.startServer(concurrentSendServer{}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) doStreamingCall := func() { req := &testpb.StreamingOutputCallRequest{} stream, err := tc.StreamingOutputCall(context.Background(), req) if err != nil { t.Errorf("%v.StreamingOutputCall(_) = _, %v, want ", tc, err) return } var ngot int var buf bytes.Buffer for { reply, err := stream.Recv() if err == io.EOF { break } if err != nil { t.Fatal(err) } ngot++ if buf.Len() > 0 { buf.WriteByte(',') } buf.Write(reply.GetPayload().GetBody()) } if want := 10; ngot != want { t.Errorf("Got %d replies, want %d", ngot, want) } if got, want := buf.String(), "0,1,2,3,4,5,6,7,8,9"; got != want { t.Errorf("Got replies %q; want %q", got, want) } } var wg sync.WaitGroup for i := 0; i < 20; i++ { wg.Add(1) go func() { defer wg.Done() doStreamingCall() }() } wg.Wait() } func generatePayloadSizes() [][]int { reqSizes := [][]int{ {27182, 8, 1828, 45904}, } num8KPayloads := 1024 eightKPayloads := []int{} for i := 0; i < num8KPayloads; i++ { eightKPayloads = append(eightKPayloads, (1 << 13)) } reqSizes = append(reqSizes, eightKPayloads) num2MPayloads := 8 twoMPayloads := []int{} for i := 0; i < num2MPayloads; i++ { twoMPayloads = append(twoMPayloads, (1 << 21)) } reqSizes = append(reqSizes, twoMPayloads) return reqSizes } func TestClientStreaming(t *testing.T) { defer leakCheck(t)() for _, s := range generatePayloadSizes() { for _, e := range listTestEnv() { testClientStreaming(t, e, s) } } } func testClientStreaming(t *testing.T, e env, sizes []int) { te := newTest(t, e) te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) ctx, cancel := context.WithTimeout(te.ctx, time.Second*30) defer cancel() stream, err := tc.StreamingInputCall(ctx) if err != nil { t.Fatalf("%v.StreamingInputCall(_) = _, %v, want ", tc, err) } var sum int for _, s := range sizes { payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, int32(s)) if err != nil { t.Fatal(err) } req := &testpb.StreamingInputCallRequest{ Payload: payload, } if err := stream.Send(req); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, req, err) } sum += s } reply, err := stream.CloseAndRecv() if err != nil { t.Fatalf("%v.CloseAndRecv() got error %v, want %v", stream, err, nil) } if reply.GetAggregatedPayloadSize() != int32(sum) { t.Fatalf("%v.CloseAndRecv().GetAggregatePayloadSize() = %v; want %v", stream, reply.GetAggregatedPayloadSize(), sum) } } func TestClientStreamingError(t 
*testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { if e.name == "handler-tls" { continue } testClientStreamingError(t, e) } } func testClientStreamingError(t *testing.T, e env) { te := newTest(t, e) te.startServer(&testServer{security: e.security, earlyFail: true}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) stream, err := tc.StreamingInputCall(te.ctx) if err != nil { t.Fatalf("%v.StreamingInputCall(_) = _, %v, want ", tc, err) } payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, 1) if err != nil { t.Fatal(err) } req := &testpb.StreamingInputCallRequest{ Payload: payload, } // The 1st request should go through. if err := stream.Send(req); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, req, err) } for { if err := stream.Send(req); err != io.EOF { continue } if _, err := stream.CloseAndRecv(); grpc.Code(err) != codes.NotFound { t.Fatalf("%v.CloseAndRecv() = %v, want error %s", stream, err, codes.NotFound) } break } } func TestExceedMaxStreamsLimit(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testExceedMaxStreamsLimit(t, e) } } func testExceedMaxStreamsLimit(t *testing.T, e env) { te := newTest(t, e) te.declareLogNoise( "http2Client.notifyError got notified that the client transport was broken", "Conn.resetTransport failed to create client transport", "grpc: the connection is closing", ) te.maxStream = 1 // Only allows 1 live stream per server transport. te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) _, err := tc.StreamingInputCall(te.ctx) if err != nil { t.Fatalf("%v.StreamingInputCall(_) = _, %v, want _, ", tc, err) } // Loop until receiving the new max stream setting from the server. for { ctx, cancel := context.WithTimeout(context.Background(), time.Second) defer cancel() _, err := tc.StreamingInputCall(ctx) if err == nil { time.Sleep(time.Second) continue } if grpc.Code(err) == codes.DeadlineExceeded { break } t.Fatalf("%v.StreamingInputCall(_) = _, %v, want _, %s", tc, err, codes.DeadlineExceeded) } } const defaultMaxStreamsClient = 100 func TestExceedDefaultMaxStreamsLimit(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { if e.name == "handler-tls" { // The default max stream limit in handler_server is not 100? continue } testExceedDefaultMaxStreamsLimit(t, e) } } func testExceedDefaultMaxStreamsLimit(t *testing.T, e env) { te := newTest(t, e) te.declareLogNoise( "http2Client.notifyError got notified that the client transport was broken", "Conn.resetTransport failed to create client transport", "grpc: the connection is closing", ) // When masStream is set to 0 the server doesn't send a settings frame for // MaxConcurrentStreams, essentially allowing infinite (math.MaxInt32) streams. // In such a case, there should be a default cap on the client-side. te.maxStream = 0 te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) // Create as many streams as a client can. for i := 0; i < defaultMaxStreamsClient; i++ { if _, err := tc.StreamingInputCall(te.ctx); err != nil { t.Fatalf("%v.StreamingInputCall(_) = _, %v, want _, ", tc, err) } } // Trying to create one more should timeout. 
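// With defaultMaxStreamsClient streams already open and no limit advertised by
// the server, the extra stream below should block on the client side until the
// one-second deadline fires and the call fails with DeadlineExceeded.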
ctx, cancel := context.WithTimeout(context.Background(), time.Second) defer cancel() _, err := tc.StreamingInputCall(ctx) if err == nil || grpc.Code(err) != codes.DeadlineExceeded { t.Fatalf("%v.StreamingInputCall(_) = _, %v, want _, %s", tc, err, codes.DeadlineExceeded) } } func TestStreamsQuotaRecovery(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testStreamsQuotaRecovery(t, e) } } func testStreamsQuotaRecovery(t *testing.T, e env) { te := newTest(t, e) te.declareLogNoise( "http2Client.notifyError got notified that the client transport was broken", "Conn.resetTransport failed to create client transport", "grpc: the connection is closing", ) te.maxStream = 1 // Allows 1 live stream. te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) if _, err := tc.StreamingInputCall(context.Background()); err != nil { t.Fatalf("%v.StreamingInputCall(_) = _, %v, want _, ", tc, err) } // Loop until the new max stream setting is effective. for { ctx, cancel := context.WithTimeout(context.Background(), time.Second) defer cancel() _, err := tc.StreamingInputCall(ctx) if err == nil { time.Sleep(time.Second) continue } if grpc.Code(err) == codes.DeadlineExceeded { break } t.Fatalf("%v.StreamingInputCall(_) = _, %v, want _, %s", tc, err, codes.DeadlineExceeded) } var wg sync.WaitGroup for i := 0; i < 10; i++ { wg.Add(1) go func() { defer wg.Done() payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, 314) if err != nil { t.Error(err) return } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseSize: proto.Int32(1592), Payload: payload, } // No rpc should go through due to the max streams limit. ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond) defer cancel() if _, err := tc.UnaryCall(ctx, req, grpc.FailFast(false)); grpc.Code(err) != codes.DeadlineExceeded { t.Errorf("TestService/UnaryCall(_, _) = _, %v, want _, %s", err, codes.DeadlineExceeded) } }() } wg.Wait() } func TestCompressServerHasNoSupport(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testCompressServerHasNoSupport(t, e) } } func testCompressServerHasNoSupport(t *testing.T, e env) { te := newTest(t, e) te.serverCompression = false te.clientCompression = true te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) const argSize = 271828 const respSize = 314159 payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, argSize) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseSize: proto.Int32(respSize), Payload: payload, } if _, err := tc.UnaryCall(context.Background(), req); err == nil || grpc.Code(err) != codes.Unimplemented { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code %s", err, codes.Unimplemented) } // Streaming RPC stream, err := tc.FullDuplexCall(context.Background()) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } respParam := []*testpb.ResponseParameters{ { Size: proto.Int32(31415), }, } payload, err = newPayload(testpb.PayloadType_COMPRESSABLE, int32(31415)) if err != nil { t.Fatal(err) } sreq := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseParameters: respParam, Payload: payload, } if err := stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } if _, err := 
stream.Recv(); err == nil || grpc.Code(err) != codes.Unimplemented { t.Fatalf("%v.Recv() = %v, want error code %s", stream, err, codes.Unimplemented) } } func TestCompressOK(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testCompressOK(t, e) } } func testCompressOK(t *testing.T, e env) { te := newTest(t, e) te.serverCompression = true te.clientCompression = true te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) // Unary call const argSize = 271828 const respSize = 314159 payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, argSize) if err != nil { t.Fatal(err) } req := &testpb.SimpleRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseSize: proto.Int32(respSize), Payload: payload, } ctx := metadata.NewOutgoingContext(context.Background(), metadata.Pairs("something", "something")) if _, err := tc.UnaryCall(ctx, req); err != nil { t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, ", err) } // Streaming RPC ctx, cancel := context.WithCancel(context.Background()) defer cancel() stream, err := tc.FullDuplexCall(ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } respParam := []*testpb.ResponseParameters{ { Size: proto.Int32(31415), }, } payload, err = newPayload(testpb.PayloadType_COMPRESSABLE, int32(31415)) if err != nil { t.Fatal(err) } sreq := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseParameters: respParam, Payload: payload, } if err := stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } if _, err := stream.Recv(); err != nil { t.Fatalf("%v.Recv() = %v, want ", stream, err) } } func TestUnaryClientInterceptor(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testUnaryClientInterceptor(t, e) } } func failOkayRPC(ctx context.Context, method string, req, reply interface{}, cc *grpc.ClientConn, invoker grpc.UnaryInvoker, opts ...grpc.CallOption) error { err := invoker(ctx, method, req, reply, cc, opts...) if err == nil { return grpc.Errorf(codes.NotFound, "") } return err } func testUnaryClientInterceptor(t *testing.T, e env) { te := newTest(t, e) te.userAgent = testAppUA te.unaryClientInt = failOkayRPC te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); grpc.Code(err) != codes.NotFound { t.Fatalf("%v.EmptyCall(_, _) = _, %v, want _, error code %s", tc, err, codes.NotFound) } } func TestStreamClientInterceptor(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testStreamClientInterceptor(t, e) } } func failOkayStream(ctx context.Context, desc *grpc.StreamDesc, cc *grpc.ClientConn, method string, streamer grpc.Streamer, opts ...grpc.CallOption) (grpc.ClientStream, error) { s, err := streamer(ctx, desc, cc, method, opts...) 
if err == nil { return nil, grpc.Errorf(codes.NotFound, "") } return s, nil } func testStreamClientInterceptor(t *testing.T, e env) { te := newTest(t, e) te.streamClientInt = failOkayStream te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) respParam := []*testpb.ResponseParameters{ { Size: proto.Int32(int32(1)), }, } payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, int32(1)) if err != nil { t.Fatal(err) } req := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseParameters: respParam, Payload: payload, } if _, err := tc.StreamingOutputCall(context.Background(), req); grpc.Code(err) != codes.NotFound { t.Fatalf("%v.StreamingOutputCall(_) = _, %v, want _, error code %s", tc, err, codes.NotFound) } } func TestUnaryServerInterceptor(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testUnaryServerInterceptor(t, e) } } func errInjector(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) { return nil, grpc.Errorf(codes.PermissionDenied, "") } func testUnaryServerInterceptor(t *testing.T, e env) { te := newTest(t, e) te.unaryServerInt = errInjector te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); grpc.Code(err) != codes.PermissionDenied { t.Fatalf("%v.EmptyCall(_, _) = _, %v, want _, error code %s", tc, err, codes.PermissionDenied) } } func TestStreamServerInterceptor(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { // TODO(bradfitz): Temporarily skip this env due to #619. if e.name == "handler-tls" { continue } testStreamServerInterceptor(t, e) } } func fullDuplexOnly(srv interface{}, ss grpc.ServerStream, info *grpc.StreamServerInfo, handler grpc.StreamHandler) error { if info.FullMethod == "/grpc.testing.TestService/FullDuplexCall" { return handler(srv, ss) } // Reject the other methods. return grpc.Errorf(codes.PermissionDenied, "") } func testStreamServerInterceptor(t *testing.T, e env) { te := newTest(t, e) te.streamServerInt = fullDuplexOnly te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) respParam := []*testpb.ResponseParameters{ { Size: proto.Int32(int32(1)), }, } payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, int32(1)) if err != nil { t.Fatal(err) } req := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseParameters: respParam, Payload: payload, } s1, err := tc.StreamingOutputCall(context.Background(), req) if err != nil { t.Fatalf("%v.StreamingOutputCall(_) = _, %v, want _, ", tc, err) } if _, err := s1.Recv(); grpc.Code(err) != codes.PermissionDenied { t.Fatalf("%v.StreamingInputCall(_) = _, %v, want _, error code %s", tc, err, codes.PermissionDenied) } s2, err := tc.FullDuplexCall(context.Background()) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if err := s2.Send(req); err != nil { t.Fatalf("%v.Send(_) = %v, want ", s2, err) } if _, err := s2.Recv(); err != nil { t.Fatalf("%v.Recv() = _, %v, want _, ", s2, err) } } // funcServer implements methods of TestServiceServer using funcs, // similar to an http.HandlerFunc. // Any unimplemented method will crash. Tests implement the method(s) // they need. 
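// A minimal usage sketch (mirroring the tests below): populate only the
// handler a test needs, for example
//
//   ts := &funcServer{unaryCall: func(ctx context.Context, in *testpb.SimpleRequest) (*testpb.SimpleResponse, error) {
//       return new(testpb.SimpleResponse), nil
//   }}
//   te.startServer(ts)
//
// and leave the other func fields nil so that an unexpected method call panics.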
type funcServer struct { testpb.TestServiceServer unaryCall func(ctx context.Context, in *testpb.SimpleRequest) (*testpb.SimpleResponse, error) streamingInputCall func(stream testpb.TestService_StreamingInputCallServer) error } func (s *funcServer) UnaryCall(ctx context.Context, in *testpb.SimpleRequest) (*testpb.SimpleResponse, error) { return s.unaryCall(ctx, in) } func (s *funcServer) StreamingInputCall(stream testpb.TestService_StreamingInputCallServer) error { return s.streamingInputCall(stream) } func TestClientRequestBodyErrorUnexpectedEOF(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testClientRequestBodyErrorUnexpectedEOF(t, e) } } func testClientRequestBodyErrorUnexpectedEOF(t *testing.T, e env) { te := newTest(t, e) ts := &funcServer{unaryCall: func(ctx context.Context, in *testpb.SimpleRequest) (*testpb.SimpleResponse, error) { errUnexpectedCall := errors.New("unexpected call func server method") t.Error(errUnexpectedCall) return nil, errUnexpectedCall }} te.startServer(ts) defer te.tearDown() te.withServerTester(func(st *serverTester) { st.writeHeadersGRPC(1, "/grpc.testing.TestService/UnaryCall") // Say we have 5 bytes coming, but set END_STREAM flag: st.writeData(1, true, []byte{0, 0, 0, 0, 5}) st.wantAnyFrame() // wait for server to crash (it used to crash) }) } func TestClientRequestBodyErrorCloseAfterLength(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testClientRequestBodyErrorCloseAfterLength(t, e) } } func testClientRequestBodyErrorCloseAfterLength(t *testing.T, e env) { te := newTest(t, e) te.declareLogNoise("Server.processUnaryRPC failed to write status") ts := &funcServer{unaryCall: func(ctx context.Context, in *testpb.SimpleRequest) (*testpb.SimpleResponse, error) { errUnexpectedCall := errors.New("unexpected call func server method") t.Error(errUnexpectedCall) return nil, errUnexpectedCall }} te.startServer(ts) defer te.tearDown() te.withServerTester(func(st *serverTester) { st.writeHeadersGRPC(1, "/grpc.testing.TestService/UnaryCall") // say we're sending 5 bytes, but then close the connection instead. st.writeData(1, false, []byte{0, 0, 0, 0, 5}) st.cc.Close() }) } func TestClientRequestBodyErrorCancel(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testClientRequestBodyErrorCancel(t, e) } } func testClientRequestBodyErrorCancel(t *testing.T, e env) { te := newTest(t, e) gotCall := make(chan bool, 1) ts := &funcServer{unaryCall: func(ctx context.Context, in *testpb.SimpleRequest) (*testpb.SimpleResponse, error) { gotCall <- true return new(testpb.SimpleResponse), nil }} te.startServer(ts) defer te.tearDown() te.withServerTester(func(st *serverTester) { st.writeHeadersGRPC(1, "/grpc.testing.TestService/UnaryCall") // Say we have 5 bytes coming, but cancel it instead. st.writeRSTStream(1, http2.ErrCodeCancel) st.writeData(1, false, []byte{0, 0, 0, 0, 5}) // Verify we didn't a call yet. select { case <-gotCall: t.Fatal("unexpected call") default: } // And now send an uncanceled (but still invalid), just to get a response. 
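// Stream 3 carries the 5-byte gRPC length prefix for a zero-length message
// with END_STREAM set; that decodes as an empty SimpleRequest, so the handler
// runs (gotCall fires) and the server is expected to write some frame back.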
st.writeHeadersGRPC(3, "/grpc.testing.TestService/UnaryCall") st.writeData(3, true, []byte{0, 0, 0, 0, 0}) <-gotCall st.wantAnyFrame() }) } func TestClientRequestBodyErrorCancelStreamingInput(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testClientRequestBodyErrorCancelStreamingInput(t, e) } } func testClientRequestBodyErrorCancelStreamingInput(t *testing.T, e env) { te := newTest(t, e) recvErr := make(chan error, 1) ts := &funcServer{streamingInputCall: func(stream testpb.TestService_StreamingInputCallServer) error { _, err := stream.Recv() recvErr <- err return nil }} te.startServer(ts) defer te.tearDown() te.withServerTester(func(st *serverTester) { st.writeHeadersGRPC(1, "/grpc.testing.TestService/StreamingInputCall") // Say we have 5 bytes coming, but cancel it instead. st.writeData(1, false, []byte{0, 0, 0, 0, 5}) st.writeRSTStream(1, http2.ErrCodeCancel) var got error select { case got = <-recvErr: case <-time.After(3 * time.Second): t.Fatal("timeout waiting for error") } if grpc.Code(got) != codes.Canceled { t.Errorf("error = %#v; want error code %s", got, codes.Canceled) } }) } const clientAlwaysFailCredErrorMsg = "clientAlwaysFailCred always fails" var errClientAlwaysFailCred = errors.New(clientAlwaysFailCredErrorMsg) type clientAlwaysFailCred struct{} func (c clientAlwaysFailCred) ClientHandshake(ctx context.Context, addr string, rawConn net.Conn) (net.Conn, credentials.AuthInfo, error) { return nil, nil, errClientAlwaysFailCred } func (c clientAlwaysFailCred) ServerHandshake(rawConn net.Conn) (net.Conn, credentials.AuthInfo, error) { return rawConn, nil, nil } func (c clientAlwaysFailCred) Info() credentials.ProtocolInfo { return credentials.ProtocolInfo{} } func (c clientAlwaysFailCred) Clone() credentials.TransportCredentials { return nil } func (c clientAlwaysFailCred) OverrideServerName(s string) error { return nil } func TestDialWithBlockErrorOnBadCertificates(t *testing.T) { te := newTest(t, env{name: "bad-cred", network: "tcp", security: "clientAlwaysFailCred", balancer: true}) te.startServer(&testServer{security: te.e.security}) defer te.tearDown() var ( err error opts []grpc.DialOption ) opts = append(opts, grpc.WithTransportCredentials(clientAlwaysFailCred{}), grpc.WithBlock()) te.cc, err = grpc.Dial(te.srvAddr, opts...) 
if err != errClientAlwaysFailCred { te.t.Fatalf("Dial(%q) = %v, want %v", te.srvAddr, err, errClientAlwaysFailCred) } } func TestFailFastRPCErrorOnBadCertificates(t *testing.T) { te := newTest(t, env{name: "bad-cred", network: "tcp", security: "clientAlwaysFailCred", balancer: true}) te.startServer(&testServer{security: te.e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); !strings.Contains(err.Error(), clientAlwaysFailCredErrorMsg) { te.t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want err.Error() contains %q", err, clientAlwaysFailCredErrorMsg) } } func TestFailFastRPCWithNoBalancerErrorOnBadCertificates(t *testing.T) { te := newTest(t, env{name: "bad-cred", network: "tcp", security: "clientAlwaysFailCred", balancer: false}) te.startServer(&testServer{security: te.e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); !strings.Contains(err.Error(), clientAlwaysFailCredErrorMsg) { te.t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want err.Error() contains %q", err, clientAlwaysFailCredErrorMsg) } } func TestNonFailFastRPCWithNoBalancerErrorOnBadCertificates(t *testing.T) { te := newTest(t, env{name: "bad-cred", network: "tcp", security: "clientAlwaysFailCred", balancer: false}) te.startServer(&testServer{security: te.e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}, grpc.FailFast(false)); !strings.Contains(err.Error(), clientAlwaysFailCredErrorMsg) { te.t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want err.Error() contains %q", err, clientAlwaysFailCredErrorMsg) } } type clientTimeoutCreds struct { timeoutReturned bool } func (c *clientTimeoutCreds) ClientHandshake(ctx context.Context, addr string, rawConn net.Conn) (net.Conn, credentials.AuthInfo, error) { if !c.timeoutReturned { c.timeoutReturned = true return nil, nil, context.DeadlineExceeded } return rawConn, nil, nil } func (c *clientTimeoutCreds) ServerHandshake(rawConn net.Conn) (net.Conn, credentials.AuthInfo, error) { return rawConn, nil, nil } func (c *clientTimeoutCreds) Info() credentials.ProtocolInfo { return credentials.ProtocolInfo{} } func (c *clientTimeoutCreds) Clone() credentials.TransportCredentials { return nil } func (c *clientTimeoutCreds) OverrideServerName(s string) error { return nil } func TestNonFailFastRPCSucceedOnTimeoutCreds(t *testing.T) { te := newTest(t, env{name: "timeout-cred", network: "tcp", security: "clientTimeoutCreds", balancer: false}) te.userAgent = testAppUA te.startServer(&testServer{security: te.e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) // This unary call should succeed, because ClientHandshake will succeed for the second time. 
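// grpc.FailFast(false) makes this a wait-for-ready call: instead of failing on
// the first handshake error, the RPC waits until a transport is ready, which
// lets the second (successful) handshake attempt serve it.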
if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}, grpc.FailFast(false)); err != nil { te.t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want ", err) } } type serverDispatchCred struct { ready chan struct{} rawConn net.Conn } func newServerDispatchCred() *serverDispatchCred { return &serverDispatchCred{ ready: make(chan struct{}), } } func (c *serverDispatchCred) ClientHandshake(ctx context.Context, addr string, rawConn net.Conn) (net.Conn, credentials.AuthInfo, error) { return rawConn, nil, nil } func (c *serverDispatchCred) ServerHandshake(rawConn net.Conn) (net.Conn, credentials.AuthInfo, error) { c.rawConn = rawConn close(c.ready) return nil, nil, credentials.ErrConnDispatched } func (c *serverDispatchCred) Info() credentials.ProtocolInfo { return credentials.ProtocolInfo{} } func (c *serverDispatchCred) Clone() credentials.TransportCredentials { return nil } func (c *serverDispatchCred) OverrideServerName(s string) error { return nil } func (c *serverDispatchCred) getRawConn() net.Conn { <-c.ready return c.rawConn } func TestServerCredsDispatch(t *testing.T) { lis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Failed to listen: %v", err) } cred := newServerDispatchCred() s := grpc.NewServer(grpc.Creds(cred)) go s.Serve(lis) defer s.Stop() cc, err := grpc.Dial(lis.Addr().String(), grpc.WithTransportCredentials(cred)) if err != nil { t.Fatalf("grpc.Dial(%q) = %v", lis.Addr().String(), err) } defer cc.Close() // Check rawConn is not closed. if n, err := cred.getRawConn().Write([]byte{0}); n <= 0 || err != nil { t.Errorf("Read() = %v, %v; want n>0, ", n, err) } } func TestFlowControlLogicalRace(t *testing.T) { // Test for a regression of https://github.com/grpc/grpc-go/issues/632, // and other flow control bugs. defer leakCheck(t)() const ( itemCount = 100 itemSize = 1 << 10 recvCount = 2 maxFailures = 3 requestTimeout = time.Second * 5 ) requestCount := 10000 if raceMode { requestCount = 1000 } lis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Failed to listen: %v", err) } defer lis.Close() s := grpc.NewServer() testpb.RegisterTestServiceServer(s, &flowControlLogicalRaceServer{ itemCount: itemCount, itemSize: itemSize, }) defer s.Stop() go s.Serve(lis) ctx := context.Background() cc, err := grpc.Dial(lis.Addr().String(), grpc.WithInsecure(), grpc.WithBlock()) if err != nil { t.Fatalf("grpc.Dial(%q) = %v", lis.Addr().String(), err) } defer cc.Close() cl := testpb.NewTestServiceClient(cc) failures := 0 for i := 0; i < requestCount; i++ { ctx, cancel := context.WithTimeout(ctx, requestTimeout) output, err := cl.StreamingOutputCall(ctx, &testpb.StreamingOutputCallRequest{}) if err != nil { t.Fatalf("StreamingOutputCall; err = %q", err) } j := 0 loop: for ; j < recvCount; j++ { _, err := output.Recv() if err != nil { if err == io.EOF { break loop } switch grpc.Code(err) { case codes.DeadlineExceeded: break loop default: t.Fatalf("Recv; err = %q", err) } } } cancel() <-ctx.Done() if j < recvCount { t.Errorf("got %d responses to request %d", j, i) failures++ if failures >= maxFailures { // Continue past the first failure to see if the connection is // entirely broken, or if only a single RPC was affected break } } } } type flowControlLogicalRaceServer struct { testpb.TestServiceServer itemSize int itemCount int } func (s *flowControlLogicalRaceServer) StreamingOutputCall(req *testpb.StreamingOutputCallRequest, srv testpb.TestService_StreamingOutputCallServer) error { for i := 0; i < s.itemCount; i++ { err := 
srv.Send(&testpb.StreamingOutputCallResponse{ Payload: &testpb.Payload{ // Sending a large stream of data which the client reject // helps to trigger some types of flow control bugs. // // Reallocating memory here is inefficient, but the stress it // puts on the GC leads to more frequent flow control // failures. The GC likely causes more variety in the // goroutine scheduling orders. Body: bytes.Repeat([]byte("a"), s.itemSize), }, }) if err != nil { return err } } return nil } // interestingGoroutines returns all goroutines we care about for the purpose // of leak checking. It excludes testing or runtime ones. func interestingGoroutines() (gs []string) { buf := make([]byte, 2<<20) buf = buf[:runtime.Stack(buf, true)] for _, g := range strings.Split(string(buf), "\n\n") { sl := strings.SplitN(g, "\n", 2) if len(sl) != 2 { continue } stack := strings.TrimSpace(sl[1]) if strings.HasPrefix(stack, "testing.RunTests") { continue } if stack == "" || strings.Contains(stack, "testing.Main(") || strings.Contains(stack, "testing.tRunner(") || strings.Contains(stack, "testing.(*M).") || strings.Contains(stack, "runtime.goexit") || strings.Contains(stack, "created by runtime.gc") || strings.Contains(stack, "created by runtime/trace.Start") || strings.Contains(stack, "created by google3/base/go/log.init") || strings.Contains(stack, "interestingGoroutines") || strings.Contains(stack, "runtime.MHeap_Scavenger") || strings.Contains(stack, "signal.signal_recv") || strings.Contains(stack, "sigterm.handler") || strings.Contains(stack, "runtime_mcall") || strings.Contains(stack, "(*loggingT).flushDaemon") || strings.Contains(stack, "goroutine in C code") { continue } gs = append(gs, g) } sort.Strings(gs) return } // leakCheck snapshots the currently-running goroutines and returns a // function to be run at the end of tests to see whether any // goroutines leaked. func leakCheck(t testing.TB) func() { orig := map[string]bool{} for _, g := range interestingGoroutines() { orig[g] = true } return func() { // Loop, waiting for goroutines to shut down. // Wait up to 10 seconds, but finish as quickly as possible. deadline := time.Now().Add(10 * time.Second) for { var leaked []string for _, g := range interestingGoroutines() { if !orig[g] { leaked = append(leaked, g) } } if len(leaked) == 0 { return } if time.Now().Before(deadline) { time.Sleep(50 * time.Millisecond) continue } for _, g := range leaked { t.Errorf("Leaked goroutine: %v", g) } return } } } type lockingWriter struct { mu sync.Mutex w io.Writer } func (lw *lockingWriter) Write(p []byte) (n int, err error) { lw.mu.Lock() defer lw.mu.Unlock() return lw.w.Write(p) } func (lw *lockingWriter) setWriter(w io.Writer) { lw.mu.Lock() defer lw.mu.Unlock() lw.w = w } var testLogOutput = &lockingWriter{w: os.Stderr} // awaitNewConnLogOutput waits for any of grpc.NewConn's goroutines to // terminate, if they're still running. It spams logs with this // message. We wait for it so our log filter is still // active. Otherwise the "defer restore()" at the top of various test // functions restores our log filter and then the goroutine spams. func awaitNewConnLogOutput() { awaitLogOutput(50*time.Millisecond, "grpc: the client connection is closing; please retry") } func awaitLogOutput(maxWait time.Duration, phrase string) { pb := []byte(phrase) timer := time.NewTimer(maxWait) defer timer.Stop() wakeup := make(chan bool, 1) for { if logOutputHasContents(pb, wakeup) { return } select { case <-timer.C: // Too slow. Oh well. 
return case <-wakeup: } } } func logOutputHasContents(v []byte, wakeup chan<- bool) bool { testLogOutput.mu.Lock() defer testLogOutput.mu.Unlock() fw, ok := testLogOutput.w.(*filterWriter) if !ok { return false } fw.mu.Lock() defer fw.mu.Unlock() if bytes.Contains(fw.buf.Bytes(), v) { return true } fw.wakeup = wakeup return false } var verboseLogs = flag.Bool("verbose_logs", false, "show all grpclog output, without filtering") func noop() {} // declareLogNoise declares that t is expected to emit the following noisy phrases, // even on success. Those phrases will be filtered from grpclog output // and only be shown if *verbose_logs or t ends up failing. // The returned restore function should be called with defer to be run // before the test ends. func declareLogNoise(t *testing.T, phrases ...string) (restore func()) { if *verboseLogs { return noop } fw := &filterWriter{dst: os.Stderr, filter: phrases} testLogOutput.setWriter(fw) return func() { if t.Failed() { fw.mu.Lock() defer fw.mu.Unlock() if fw.buf.Len() > 0 { t.Logf("Complete log output:\n%s", fw.buf.Bytes()) } } testLogOutput.setWriter(os.Stderr) } } type filterWriter struct { dst io.Writer filter []string mu sync.Mutex buf bytes.Buffer wakeup chan<- bool // if non-nil, gets true on write } func (fw *filterWriter) Write(p []byte) (n int, err error) { fw.mu.Lock() fw.buf.Write(p) if fw.wakeup != nil { select { case fw.wakeup <- true: default: } } fw.mu.Unlock() ps := string(p) for _, f := range fw.filter { if strings.Contains(ps, f) { return len(p), nil } } return fw.dst.Write(p) } // stubServer is a server that is easy to customize within individual test // cases. type stubServer struct { // Guarantees we satisfy this interface; panics if unimplemented methods are called. testpb.TestServiceServer // Customizable implementations of server handlers. emptyCall func(ctx context.Context, in *testpb.Empty) (*testpb.Empty, error) fullDuplexCall func(stream testpb.TestService_FullDuplexCallServer) error // A client connected to this service the test may use. Created in Start(). client testpb.TestServiceClient cleanups []func() // Lambdas executed in Stop(); populated by Start(). } func (ss *stubServer) EmptyCall(ctx context.Context, in *testpb.Empty) (*testpb.Empty, error) { return ss.emptyCall(ctx, in) } func (ss *stubServer) FullDuplexCall(stream testpb.TestService_FullDuplexCallServer) error { return ss.fullDuplexCall(stream) } // Start starts the server and creates a client connected to it. func (ss *stubServer) Start(sopts []grpc.ServerOption) error { lis, err := net.Listen("tcp", "localhost:0") if err != nil { return fmt.Errorf(`net.Listen("tcp", "localhost:0") = %v`, err) } ss.cleanups = append(ss.cleanups, func() { lis.Close() }) s := grpc.NewServer(sopts...) testpb.RegisterTestServiceServer(s, ss) go s.Serve(lis) ss.cleanups = append(ss.cleanups, s.Stop) cc, err := grpc.Dial(lis.Addr().String(), grpc.WithInsecure(), grpc.WithBlock()) if err != nil { return fmt.Errorf("grpc.Dial(%q) = %v", lis.Addr().String(), err) } ss.cleanups = append(ss.cleanups, func() { cc.Close() }) ss.client = testpb.NewTestServiceClient(cc) return nil } func (ss *stubServer) Stop() { for i := len(ss.cleanups) - 1; i >= 0; i-- { ss.cleanups[i]() } } func TestUnaryProxyDoesNotForwardMetadata(t *testing.T) { const mdkey = "somedata" // endpoint ensures mdkey is NOT in metadata and returns an error if it is. 
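// The point of this test: the proxy below reuses its incoming context for the
// outgoing call, and the client only transmits metadata attached via
// NewOutgoingContext, so the caller's mdkey must not leak to the endpoint.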
endpoint := &stubServer{ emptyCall: func(ctx context.Context, in *testpb.Empty) (*testpb.Empty, error) { if md, ok := metadata.FromIncomingContext(ctx); !ok || md[mdkey] != nil { return nil, status.Errorf(codes.Internal, "endpoint: md=%v; want !contains(%q)", md, mdkey) } return &testpb.Empty{}, nil }, } if err := endpoint.Start(nil); err != nil { t.Fatalf("Error starting endpoint server: %v", err) } defer endpoint.Stop() // proxy ensures mdkey IS in metadata, then forwards the RPC to endpoint // without explicitly copying the metadata. proxy := &stubServer{ emptyCall: func(ctx context.Context, in *testpb.Empty) (*testpb.Empty, error) { if md, ok := metadata.FromIncomingContext(ctx); !ok || md[mdkey] == nil { return nil, status.Errorf(codes.Internal, "proxy: md=%v; want contains(%q)", md, mdkey) } return endpoint.client.EmptyCall(ctx, in) }, } if err := proxy.Start(nil); err != nil { t.Fatalf("Error starting proxy server: %v", err) } defer proxy.Stop() ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second) defer cancel() md := metadata.Pairs(mdkey, "val") ctx = metadata.NewOutgoingContext(ctx, md) // Sanity check that endpoint properly errors when it sees mdkey. _, err := endpoint.client.EmptyCall(ctx, &testpb.Empty{}) if s, ok := status.FromError(err); !ok || s.Code() != codes.Internal { t.Fatalf("endpoint.client.EmptyCall(_, _) = _, %v; want _, ", err) } if _, err := proxy.client.EmptyCall(ctx, &testpb.Empty{}); err != nil { t.Fatal(err.Error()) } } func TestStreamingProxyDoesNotForwardMetadata(t *testing.T) { const mdkey = "somedata" // doFDC performs a FullDuplexCall with client and returns the error from the // first stream.Recv call, or nil if that error is io.EOF. Calls t.Fatal if // the stream cannot be established. doFDC := func(ctx context.Context, client testpb.TestServiceClient) error { stream, err := client.FullDuplexCall(ctx) if err != nil { t.Fatalf("Unwanted error: %v", err) } if _, err := stream.Recv(); err != io.EOF { return err } return nil } // endpoint ensures mdkey is NOT in metadata and returns an error if it is. endpoint := &stubServer{ fullDuplexCall: func(stream testpb.TestService_FullDuplexCallServer) error { ctx := stream.Context() if md, ok := metadata.FromIncomingContext(ctx); !ok || md[mdkey] != nil { return status.Errorf(codes.Internal, "endpoint: md=%v; want !contains(%q)", md, mdkey) } return nil }, } if err := endpoint.Start(nil); err != nil { t.Fatalf("Error starting endpoint server: %v", err) } defer endpoint.Stop() // proxy ensures mdkey IS in metadata, then forwards the RPC to endpoint // without explicitly copying the metadata. proxy := &stubServer{ fullDuplexCall: func(stream testpb.TestService_FullDuplexCallServer) error { ctx := stream.Context() if md, ok := metadata.FromIncomingContext(ctx); !ok || md[mdkey] == nil { return status.Errorf(codes.Internal, "endpoint: md=%v; want !contains(%q)", md, mdkey) } return doFDC(ctx, endpoint.client) }, } if err := proxy.Start(nil); err != nil { t.Fatalf("Error starting proxy server: %v", err) } defer proxy.Stop() ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second) defer cancel() md := metadata.Pairs(mdkey, "val") ctx = metadata.NewOutgoingContext(ctx, md) // Sanity check that endpoint properly errors when it sees mdkey in ctx. 
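// Calling the endpoint directly with the outgoing metadata attached should
// trip its check (codes.Internal); going through the proxy should not, because
// the incoming metadata is not forwarded automatically.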
err := doFDC(ctx, endpoint.client) if s, ok := status.FromError(err); !ok || s.Code() != codes.Internal { t.Fatalf("stream.Recv() = _, %v; want _, ", err) } if err := doFDC(ctx, proxy.client); err != nil { t.Fatalf("doFDC(_, proxy.client) = %v; want nil", err) } } func TestStatsTagsAndTrace(t *testing.T) { // Data added to context by client (typically in a stats handler). tags := []byte{1, 5, 2, 4, 3} trace := []byte{5, 2, 1, 3, 4} // endpoint ensures Tags() and Trace() in context match those that were added // by the client and returns an error if not. endpoint := &stubServer{ emptyCall: func(ctx context.Context, in *testpb.Empty) (*testpb.Empty, error) { md, _ := metadata.FromIncomingContext(ctx) if tg := stats.Tags(ctx); !reflect.DeepEqual(tg, tags) { return nil, status.Errorf(codes.Internal, "stats.Tags(%v)=%v; want %v", ctx, tg, tags) } if !reflect.DeepEqual(md["grpc-tags-bin"], []string{string(tags)}) { return nil, status.Errorf(codes.Internal, "md['grpc-tags-bin']=%v; want %v", md["grpc-tags-bin"], tags) } if tr := stats.Trace(ctx); !reflect.DeepEqual(tr, trace) { return nil, status.Errorf(codes.Internal, "stats.Trace(%v)=%v; want %v", ctx, tr, trace) } if !reflect.DeepEqual(md["grpc-trace-bin"], []string{string(trace)}) { return nil, status.Errorf(codes.Internal, "md['grpc-trace-bin']=%v; want %v", md["grpc-trace-bin"], trace) } return &testpb.Empty{}, nil }, } if err := endpoint.Start(nil); err != nil { t.Fatalf("Error starting endpoint server: %v", err) } defer endpoint.Stop() ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second) defer cancel() testCases := []struct { ctx context.Context want codes.Code }{ {ctx: ctx, want: codes.Internal}, {ctx: stats.SetTags(ctx, tags), want: codes.Internal}, {ctx: stats.SetTrace(ctx, trace), want: codes.Internal}, {ctx: stats.SetTags(stats.SetTrace(ctx, tags), tags), want: codes.Internal}, {ctx: stats.SetTags(stats.SetTrace(ctx, trace), tags), want: codes.OK}, } for _, tc := range testCases { _, err := endpoint.client.EmptyCall(tc.ctx, &testpb.Empty{}) if tc.want == codes.OK && err != nil { t.Fatalf("endpoint.client.EmptyCall(%v, _) = _, %v; want _, nil", tc.ctx, err) } if s, ok := status.FromError(err); !ok || s.Code() != tc.want { t.Fatalf("endpoint.client.EmptyCall(%v, _) = _, %v; want _, ", tc.ctx, err, tc.want) } } } func TestTapTimeout(t *testing.T) { sopts := []grpc.ServerOption{ grpc.InTapHandle(func(ctx context.Context, _ *tap.Info) (context.Context, error) { c, cancel := context.WithCancel(ctx) // Call cancel instead of setting a deadline so we can detect which error // occurred -- this cancellation (desired) or the client's deadline // expired (indicating this cancellation did not affect the RPC). time.AfterFunc(10*time.Millisecond, cancel) return c, nil }), } ss := &stubServer{ emptyCall: func(ctx context.Context, in *testpb.Empty) (*testpb.Empty, error) { <-ctx.Done() return &testpb.Empty{}, nil }, } if err := ss.Start(sopts); err != nil { t.Fatalf("Error starting endpoint server: %v", err) } defer ss.Stop() // This was known to be flaky; test several times. for i := 0; i < 10; i++ { // Set our own deadline in case the server hangs. 
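// The tap handle cancels the stream context after 10ms, so the handler's
// <-ctx.Done() returns and the client should observe codes.Canceled well
// before this two-second guard deadline.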
ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second) res, err := ss.client.EmptyCall(ctx, &testpb.Empty{}) cancel() if s, ok := status.FromError(err); !ok || s.Code() != codes.Canceled { t.Fatalf("ss.client.EmptyCall(context.Background(), _) = %v, %v; want nil, ", res, err) } } } type windowSizeConfig struct { serverStream int32 serverConn int32 clientStream int32 clientConn int32 } func max(a, b int32) int32 { if a > b { return a } return b } func TestConfigurableWindowSizeWithLargeWindow(t *testing.T) { defer leakCheck(t)() wc := windowSizeConfig{ serverStream: 8 * 1024 * 1024, serverConn: 12 * 1024 * 1024, clientStream: 6 * 1024 * 1024, clientConn: 8 * 1024 * 1024, } for _, e := range listTestEnv() { testConfigurableWindowSize(t, e, wc) } } func TestConfigurableWindowSizeWithSmallWindow(t *testing.T) { defer leakCheck(t)() wc := windowSizeConfig{ serverStream: 1, serverConn: 1, clientStream: 1, clientConn: 1, } for _, e := range listTestEnv() { testConfigurableWindowSize(t, e, wc) } } func testConfigurableWindowSize(t *testing.T, e env, wc windowSizeConfig) { te := newTest(t, e) te.serverInitialWindowSize = wc.serverStream te.serverInitialConnWindowSize = wc.serverConn te.clientInitialWindowSize = wc.clientStream te.clientInitialConnWindowSize = wc.clientConn te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) stream, err := tc.FullDuplexCall(context.Background()) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } numOfIter := 11 // Set message size to exhaust largest of window sizes. messageSize := max(max(wc.serverStream, wc.serverConn), max(wc.clientStream, wc.clientConn)) / int32(numOfIter-1) messageSize = max(messageSize, 64*1024) payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, messageSize) if err != nil { t.Fatal(err) } respParams := []*testpb.ResponseParameters{ { Size: proto.Int32(messageSize), }, } req := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseParameters: respParams, Payload: payload, } for i := 0; i < numOfIter; i++ { if err := stream.Send(req); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, req, err) } if _, err := stream.Recv(); err != nil { t.Fatalf("%v.Recv() = _, %v, want _, ", stream, err) } } if err := stream.CloseSend(); err != nil { t.Fatalf("%v.CloseSend() = %v, want ", stream, err) } } var ( // test authdata authdata = map[string]string{ "test-key": "test-value", "test-key2-bin": string([]byte{1, 2, 3}), } ) type testPerRPCCredentials struct{} func (cr testPerRPCCredentials) GetRequestMetadata(ctx context.Context, uri ...string) (map[string]string, error) { return authdata, nil } func (cr testPerRPCCredentials) RequireTransportSecurity() bool { return false } func authHandle(ctx context.Context, info *tap.Info) (context.Context, error) { md, ok := metadata.FromIncomingContext(ctx) if !ok { return ctx, fmt.Errorf("didn't find metadata in context") } for k, vwant := range authdata { vgot, ok := md[k] if !ok { return ctx, fmt.Errorf("didn't find authdata key %v in context", k) } if vgot[0] != vwant { return ctx, fmt.Errorf("for key %v, got value %v, want %v", k, vgot, vwant) } } return ctx, nil } func TestPerRPCCredentialsViaDialOptions(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testPerRPCCredentialsViaDialOptions(t, e) } } func testPerRPCCredentialsViaDialOptions(t *testing.T, e env) { te := newTest(t, e) te.tapHandle = authHandle 
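// Install the credentials through the test harness's dial options; authHandle (the server-side tap handler) then checks that every key/value pair in authdata arrives as incoming metadata. Outside this harness, the client wiring would presumably look like grpc.Dial(addr, grpc.WithPerRPCCredentials(testPerRPCCredentials{})) -- a sketch for illustration, not code taken from this file.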
te.perRPCCreds = testPerRPCCredentials{} te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); err != nil { t.Fatalf("Test failed. Reason: %v", err) } } func TestPerRPCCredentialsViaCallOptions(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testPerRPCCredentialsViaCallOptions(t, e) } } func testPerRPCCredentialsViaCallOptions(t *testing.T, e env) { te := newTest(t, e) te.tapHandle = authHandle te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}, grpc.PerRPCCredentials(testPerRPCCredentials{})); err != nil { t.Fatalf("Test failed. Reason: %v", err) } } func TestPerRPCCredentialsViaDialOptionsAndCallOptions(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testPerRPCCredentialsViaDialOptionsAndCallOptions(t, e) } } func testPerRPCCredentialsViaDialOptionsAndCallOptions(t *testing.T, e env) { te := newTest(t, e) te.perRPCCreds = testPerRPCCredentials{} // When credentials are provided via both dial options and call options, // we apply both sets. te.tapHandle = func(ctx context.Context, _ *tap.Info) (context.Context, error) { md, ok := metadata.FromIncomingContext(ctx) if !ok { return ctx, fmt.Errorf("couldn't find metadata in context") } for k, vwant := range authdata { vgot, ok := md[k] if !ok { return ctx, fmt.Errorf("couldn't find metadata for key %v", k) } if len(vgot) != 2 { return ctx, fmt.Errorf("len of value for key %v was %v, want 2", k, len(vgot)) } if vgot[0] != vwant || vgot[1] != vwant { return ctx, fmt.Errorf("value for %v was %v, want [%v, %v]", k, vgot, vwant, vwant) } } return ctx, nil } te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() tc := testpb.NewTestServiceClient(cc) if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}, grpc.PerRPCCredentials(testPerRPCCredentials{})); err != nil { t.Fatalf("Test failed. Reason: %v", err) } } func TestWaitForReadyConnection(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testWaitForReadyConnection(t, e) } } func testWaitForReadyConnection(t *testing.T, e env) { te := newTest(t, e) te.userAgent = testAppUA te.startServer(&testServer{security: e.security}) defer te.tearDown() cc := te.clientConn() // Non-blocking dial. tc := testpb.NewTestServiceClient(cc) ctx, cancel := context.WithTimeout(context.Background(), time.Second) defer cancel() state := cc.GetState() // Wait for connection to be Ready. for ; state != connectivity.Ready && cc.WaitForStateChange(ctx, state); state = cc.GetState() { } if state != connectivity.Ready { t.Fatalf("Want connection state to be Ready, got %v", state) } ctx, cancel = context.WithTimeout(context.Background(), time.Second) defer cancel() // Make a fail-fast RPC. if _, err := tc.EmptyCall(ctx, &testpb.Empty{}); err != nil { t.Fatalf("TestService/EmptyCall(_,_) = _, %v, want _, nil", err) } } type errCodec struct { noError bool } func (c *errCodec) Marshal(v interface{}) ([]byte, error) { if c.noError { return []byte{}, nil } return nil, fmt.Errorf("3987^12 + 4365^12 = 4472^12") } func (c *errCodec) Unmarshal(data []byte, v interface{}) error { return nil } func (c *errCodec) String() string { return "Fermat's near-miss." 
} func TestEncodeDoesntPanic(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testEncodeDoesntPanic(t, e) } } func testEncodeDoesntPanic(t *testing.T, e env) { te := newTest(t, e) erc := &errCodec{} te.customCodec = erc te.startServer(&testServer{security: e.security}) defer te.tearDown() te.customCodec = nil tc := testpb.NewTestServiceClient(te.clientConn()) // Failure case, should not panic. tc.EmptyCall(context.Background(), &testpb.Empty{}) erc.noError = true // Passing case. if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); err != nil { t.Fatalf("EmptyCall(_, _) = _, %v, want _, ", err) } } func TestSvrWriteStatusEarlyWrite(t *testing.T) { defer leakCheck(t)() for _, e := range listTestEnv() { testSvrWriteStatusEarlyWrite(t, e) } } func testSvrWriteStatusEarlyWrite(t *testing.T, e env) { te := newTest(t, e) const smallSize = 1024 const largeSize = 2048 const extraLargeSize = 4096 te.maxServerReceiveMsgSize = newInt(largeSize) te.maxServerSendMsgSize = newInt(largeSize) smallPayload, err := newPayload(testpb.PayloadType_COMPRESSABLE, smallSize) if err != nil { t.Fatal(err) } extraLargePayload, err := newPayload(testpb.PayloadType_COMPRESSABLE, extraLargeSize) if err != nil { t.Fatal(err) } te.startServer(&testServer{security: e.security}) defer te.tearDown() tc := testpb.NewTestServiceClient(te.clientConn()) respParam := []*testpb.ResponseParameters{ { Size: proto.Int32(int32(smallSize)), }, } sreq := &testpb.StreamingOutputCallRequest{ ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(), ResponseParameters: respParam, Payload: extraLargePayload, } // Test recv case: server receives a message larger than maxServerReceiveMsgSize. stream, err := tc.FullDuplexCall(te.ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if err = stream.Send(sreq); err != nil { t.Fatalf("%v.Send() = _, %v, want ", stream, err) } if _, err = stream.Recv(); err == nil || grpc.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Recv() = _, %v, want _, error code: %s", stream, err, codes.ResourceExhausted) } // Test send case: server sends a message larger than maxServerSendMsgSize. sreq.Payload = smallPayload respParam[0].Size = proto.Int32(int32(extraLargeSize)) stream, err = tc.FullDuplexCall(te.ctx) if err != nil { t.Fatalf("%v.FullDuplexCall(_) = _, %v, want ", tc, err) } if err = stream.Send(sreq); err != nil { t.Fatalf("%v.Send(%v) = %v, want ", stream, sreq, err) } if _, err = stream.Recv(); err == nil || grpc.Code(err) != codes.ResourceExhausted { t.Fatalf("%v.Recv() = _, %v, want _, error code: %s", stream, err, codes.ResourceExhausted) } } golang-google-grpc-1.6.0/test/grpc_testing/000077500000000000000000000000001315416461300206435ustar00rootroot00000000000000golang-google-grpc-1.6.0/test/grpc_testing/test.pb.go000066400000000000000000000713611315416461300225610ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: grpc_testing/test.proto /* Package grpc_testing is a generated protocol buffer package. 
It is generated from these files: grpc_testing/test.proto It has these top-level messages: Empty Payload SimpleRequest SimpleResponse StreamingInputCallRequest StreamingInputCallResponse ResponseParameters StreamingOutputCallRequest StreamingOutputCallResponse */ package grpc_testing import proto "github.com/golang/protobuf/proto" import fmt "fmt" import math "math" import ( context "golang.org/x/net/context" grpc "google.golang.org/grpc" ) // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package // The type of payload that should be returned. type PayloadType int32 const ( // Compressable text format. PayloadType_COMPRESSABLE PayloadType = 0 // Uncompressable binary format. PayloadType_UNCOMPRESSABLE PayloadType = 1 // Randomly chosen from all other formats defined in this enum. PayloadType_RANDOM PayloadType = 2 ) var PayloadType_name = map[int32]string{ 0: "COMPRESSABLE", 1: "UNCOMPRESSABLE", 2: "RANDOM", } var PayloadType_value = map[string]int32{ "COMPRESSABLE": 0, "UNCOMPRESSABLE": 1, "RANDOM": 2, } func (x PayloadType) Enum() *PayloadType { p := new(PayloadType) *p = x return p } func (x PayloadType) String() string { return proto.EnumName(PayloadType_name, int32(x)) } func (x *PayloadType) UnmarshalJSON(data []byte) error { value, err := proto.UnmarshalJSONEnum(PayloadType_value, data, "PayloadType") if err != nil { return err } *x = PayloadType(value) return nil } func (PayloadType) EnumDescriptor() ([]byte, []int) { return fileDescriptor0, []int{0} } type Empty struct { XXX_unrecognized []byte `json:"-"` } func (m *Empty) Reset() { *m = Empty{} } func (m *Empty) String() string { return proto.CompactTextString(m) } func (*Empty) ProtoMessage() {} func (*Empty) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} } // A block of data, to simply increase gRPC message size. type Payload struct { // The type of data in body. Type *PayloadType `protobuf:"varint,1,opt,name=type,enum=grpc.testing.PayloadType" json:"type,omitempty"` // Primary contents of payload. Body []byte `protobuf:"bytes,2,opt,name=body" json:"body,omitempty"` XXX_unrecognized []byte `json:"-"` } func (m *Payload) Reset() { *m = Payload{} } func (m *Payload) String() string { return proto.CompactTextString(m) } func (*Payload) ProtoMessage() {} func (*Payload) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} } func (m *Payload) GetType() PayloadType { if m != nil && m.Type != nil { return *m.Type } return PayloadType_COMPRESSABLE } func (m *Payload) GetBody() []byte { if m != nil { return m.Body } return nil } // Unary request. type SimpleRequest struct { // Desired payload type in the response from the server. // If response_type is RANDOM, server randomly chooses one from other formats. ResponseType *PayloadType `protobuf:"varint,1,opt,name=response_type,json=responseType,enum=grpc.testing.PayloadType" json:"response_type,omitempty"` // Desired payload size in the response from the server. // If response_type is COMPRESSABLE, this denotes the size before compression. 
ResponseSize *int32 `protobuf:"varint,2,opt,name=response_size,json=responseSize" json:"response_size,omitempty"` // Optional input payload sent along with the request. Payload *Payload `protobuf:"bytes,3,opt,name=payload" json:"payload,omitempty"` // Whether SimpleResponse should include username. FillUsername *bool `protobuf:"varint,4,opt,name=fill_username,json=fillUsername" json:"fill_username,omitempty"` // Whether SimpleResponse should include OAuth scope. FillOauthScope *bool `protobuf:"varint,5,opt,name=fill_oauth_scope,json=fillOauthScope" json:"fill_oauth_scope,omitempty"` XXX_unrecognized []byte `json:"-"` } func (m *SimpleRequest) Reset() { *m = SimpleRequest{} } func (m *SimpleRequest) String() string { return proto.CompactTextString(m) } func (*SimpleRequest) ProtoMessage() {} func (*SimpleRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} } func (m *SimpleRequest) GetResponseType() PayloadType { if m != nil && m.ResponseType != nil { return *m.ResponseType } return PayloadType_COMPRESSABLE } func (m *SimpleRequest) GetResponseSize() int32 { if m != nil && m.ResponseSize != nil { return *m.ResponseSize } return 0 } func (m *SimpleRequest) GetPayload() *Payload { if m != nil { return m.Payload } return nil } func (m *SimpleRequest) GetFillUsername() bool { if m != nil && m.FillUsername != nil { return *m.FillUsername } return false } func (m *SimpleRequest) GetFillOauthScope() bool { if m != nil && m.FillOauthScope != nil { return *m.FillOauthScope } return false } // Unary response, as configured by the request. type SimpleResponse struct { // Payload to increase message size. Payload *Payload `protobuf:"bytes,1,opt,name=payload" json:"payload,omitempty"` // The user the request came from, for verifying authentication was // successful when the client expected it. Username *string `protobuf:"bytes,2,opt,name=username" json:"username,omitempty"` // OAuth scope. OauthScope *string `protobuf:"bytes,3,opt,name=oauth_scope,json=oauthScope" json:"oauth_scope,omitempty"` XXX_unrecognized []byte `json:"-"` } func (m *SimpleResponse) Reset() { *m = SimpleResponse{} } func (m *SimpleResponse) String() string { return proto.CompactTextString(m) } func (*SimpleResponse) ProtoMessage() {} func (*SimpleResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3} } func (m *SimpleResponse) GetPayload() *Payload { if m != nil { return m.Payload } return nil } func (m *SimpleResponse) GetUsername() string { if m != nil && m.Username != nil { return *m.Username } return "" } func (m *SimpleResponse) GetOauthScope() string { if m != nil && m.OauthScope != nil { return *m.OauthScope } return "" } // Client-streaming request. type StreamingInputCallRequest struct { // Optional input payload sent along with the request. Payload *Payload `protobuf:"bytes,1,opt,name=payload" json:"payload,omitempty"` XXX_unrecognized []byte `json:"-"` } func (m *StreamingInputCallRequest) Reset() { *m = StreamingInputCallRequest{} } func (m *StreamingInputCallRequest) String() string { return proto.CompactTextString(m) } func (*StreamingInputCallRequest) ProtoMessage() {} func (*StreamingInputCallRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{4} } func (m *StreamingInputCallRequest) GetPayload() *Payload { if m != nil { return m.Payload } return nil } // Client-streaming response. type StreamingInputCallResponse struct { // Aggregated size of payloads received from the client. 
AggregatedPayloadSize *int32 `protobuf:"varint,1,opt,name=aggregated_payload_size,json=aggregatedPayloadSize" json:"aggregated_payload_size,omitempty"` XXX_unrecognized []byte `json:"-"` } func (m *StreamingInputCallResponse) Reset() { *m = StreamingInputCallResponse{} } func (m *StreamingInputCallResponse) String() string { return proto.CompactTextString(m) } func (*StreamingInputCallResponse) ProtoMessage() {} func (*StreamingInputCallResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{5} } func (m *StreamingInputCallResponse) GetAggregatedPayloadSize() int32 { if m != nil && m.AggregatedPayloadSize != nil { return *m.AggregatedPayloadSize } return 0 } // Configuration for a particular response. type ResponseParameters struct { // Desired payload sizes in responses from the server. // If response_type is COMPRESSABLE, this denotes the size before compression. Size *int32 `protobuf:"varint,1,opt,name=size" json:"size,omitempty"` // Desired interval between consecutive responses in the response stream in // microseconds. IntervalUs *int32 `protobuf:"varint,2,opt,name=interval_us,json=intervalUs" json:"interval_us,omitempty"` XXX_unrecognized []byte `json:"-"` } func (m *ResponseParameters) Reset() { *m = ResponseParameters{} } func (m *ResponseParameters) String() string { return proto.CompactTextString(m) } func (*ResponseParameters) ProtoMessage() {} func (*ResponseParameters) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{6} } func (m *ResponseParameters) GetSize() int32 { if m != nil && m.Size != nil { return *m.Size } return 0 } func (m *ResponseParameters) GetIntervalUs() int32 { if m != nil && m.IntervalUs != nil { return *m.IntervalUs } return 0 } // Server-streaming request. type StreamingOutputCallRequest struct { // Desired payload type in the response from the server. // If response_type is RANDOM, the payload from each response in the stream // might be of different types. This is to simulate a mixed type of payload // stream. ResponseType *PayloadType `protobuf:"varint,1,opt,name=response_type,json=responseType,enum=grpc.testing.PayloadType" json:"response_type,omitempty"` // Configuration for each expected response message. ResponseParameters []*ResponseParameters `protobuf:"bytes,2,rep,name=response_parameters,json=responseParameters" json:"response_parameters,omitempty"` // Optional input payload sent along with the request. Payload *Payload `protobuf:"bytes,3,opt,name=payload" json:"payload,omitempty"` XXX_unrecognized []byte `json:"-"` } func (m *StreamingOutputCallRequest) Reset() { *m = StreamingOutputCallRequest{} } func (m *StreamingOutputCallRequest) String() string { return proto.CompactTextString(m) } func (*StreamingOutputCallRequest) ProtoMessage() {} func (*StreamingOutputCallRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{7} } func (m *StreamingOutputCallRequest) GetResponseType() PayloadType { if m != nil && m.ResponseType != nil { return *m.ResponseType } return PayloadType_COMPRESSABLE } func (m *StreamingOutputCallRequest) GetResponseParameters() []*ResponseParameters { if m != nil { return m.ResponseParameters } return nil } func (m *StreamingOutputCallRequest) GetPayload() *Payload { if m != nil { return m.Payload } return nil } // Server-streaming response, as configured by the request and parameters. type StreamingOutputCallResponse struct { // Payload to increase response size. 
Payload *Payload `protobuf:"bytes,1,opt,name=payload" json:"payload,omitempty"` XXX_unrecognized []byte `json:"-"` } func (m *StreamingOutputCallResponse) Reset() { *m = StreamingOutputCallResponse{} } func (m *StreamingOutputCallResponse) String() string { return proto.CompactTextString(m) } func (*StreamingOutputCallResponse) ProtoMessage() {} func (*StreamingOutputCallResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{8} } func (m *StreamingOutputCallResponse) GetPayload() *Payload { if m != nil { return m.Payload } return nil } func init() { proto.RegisterType((*Empty)(nil), "grpc.testing.Empty") proto.RegisterType((*Payload)(nil), "grpc.testing.Payload") proto.RegisterType((*SimpleRequest)(nil), "grpc.testing.SimpleRequest") proto.RegisterType((*SimpleResponse)(nil), "grpc.testing.SimpleResponse") proto.RegisterType((*StreamingInputCallRequest)(nil), "grpc.testing.StreamingInputCallRequest") proto.RegisterType((*StreamingInputCallResponse)(nil), "grpc.testing.StreamingInputCallResponse") proto.RegisterType((*ResponseParameters)(nil), "grpc.testing.ResponseParameters") proto.RegisterType((*StreamingOutputCallRequest)(nil), "grpc.testing.StreamingOutputCallRequest") proto.RegisterType((*StreamingOutputCallResponse)(nil), "grpc.testing.StreamingOutputCallResponse") proto.RegisterEnum("grpc.testing.PayloadType", PayloadType_name, PayloadType_value) } // Reference imports to suppress errors if they are not otherwise used. var _ context.Context var _ grpc.ClientConn // This is a compile-time assertion to ensure that this generated file // is compatible with the grpc package it is being compiled against. const _ = grpc.SupportPackageIsVersion4 // Client API for TestService service type TestServiceClient interface { // One empty request followed by one empty response. EmptyCall(ctx context.Context, in *Empty, opts ...grpc.CallOption) (*Empty, error) // One request followed by one response. // The server returns the client payload as-is. UnaryCall(ctx context.Context, in *SimpleRequest, opts ...grpc.CallOption) (*SimpleResponse, error) // One request followed by a sequence of responses (streamed download). // The server returns the payload with client desired type and sizes. StreamingOutputCall(ctx context.Context, in *StreamingOutputCallRequest, opts ...grpc.CallOption) (TestService_StreamingOutputCallClient, error) // A sequence of requests followed by one response (streamed upload). // The server returns the aggregated size of client payload as the result. StreamingInputCall(ctx context.Context, opts ...grpc.CallOption) (TestService_StreamingInputCallClient, error) // A sequence of requests with each request served by the server immediately. // As one request could lead to multiple responses, this interface // demonstrates the idea of full duplexing. FullDuplexCall(ctx context.Context, opts ...grpc.CallOption) (TestService_FullDuplexCallClient, error) // A sequence of requests followed by a sequence of responses. // The server buffers all the client requests and then serves them in order. A // stream of responses are returned to the client when the server starts with // first request. 
HalfDuplexCall(ctx context.Context, opts ...grpc.CallOption) (TestService_HalfDuplexCallClient, error) } type testServiceClient struct { cc *grpc.ClientConn } func NewTestServiceClient(cc *grpc.ClientConn) TestServiceClient { return &testServiceClient{cc} } func (c *testServiceClient) EmptyCall(ctx context.Context, in *Empty, opts ...grpc.CallOption) (*Empty, error) { out := new(Empty) err := grpc.Invoke(ctx, "/grpc.testing.TestService/EmptyCall", in, out, c.cc, opts...) if err != nil { return nil, err } return out, nil } func (c *testServiceClient) UnaryCall(ctx context.Context, in *SimpleRequest, opts ...grpc.CallOption) (*SimpleResponse, error) { out := new(SimpleResponse) err := grpc.Invoke(ctx, "/grpc.testing.TestService/UnaryCall", in, out, c.cc, opts...) if err != nil { return nil, err } return out, nil } func (c *testServiceClient) StreamingOutputCall(ctx context.Context, in *StreamingOutputCallRequest, opts ...grpc.CallOption) (TestService_StreamingOutputCallClient, error) { stream, err := grpc.NewClientStream(ctx, &_TestService_serviceDesc.Streams[0], c.cc, "/grpc.testing.TestService/StreamingOutputCall", opts...) if err != nil { return nil, err } x := &testServiceStreamingOutputCallClient{stream} if err := x.ClientStream.SendMsg(in); err != nil { return nil, err } if err := x.ClientStream.CloseSend(); err != nil { return nil, err } return x, nil } type TestService_StreamingOutputCallClient interface { Recv() (*StreamingOutputCallResponse, error) grpc.ClientStream } type testServiceStreamingOutputCallClient struct { grpc.ClientStream } func (x *testServiceStreamingOutputCallClient) Recv() (*StreamingOutputCallResponse, error) { m := new(StreamingOutputCallResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func (c *testServiceClient) StreamingInputCall(ctx context.Context, opts ...grpc.CallOption) (TestService_StreamingInputCallClient, error) { stream, err := grpc.NewClientStream(ctx, &_TestService_serviceDesc.Streams[1], c.cc, "/grpc.testing.TestService/StreamingInputCall", opts...) if err != nil { return nil, err } x := &testServiceStreamingInputCallClient{stream} return x, nil } type TestService_StreamingInputCallClient interface { Send(*StreamingInputCallRequest) error CloseAndRecv() (*StreamingInputCallResponse, error) grpc.ClientStream } type testServiceStreamingInputCallClient struct { grpc.ClientStream } func (x *testServiceStreamingInputCallClient) Send(m *StreamingInputCallRequest) error { return x.ClientStream.SendMsg(m) } func (x *testServiceStreamingInputCallClient) CloseAndRecv() (*StreamingInputCallResponse, error) { if err := x.ClientStream.CloseSend(); err != nil { return nil, err } m := new(StreamingInputCallResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func (c *testServiceClient) FullDuplexCall(ctx context.Context, opts ...grpc.CallOption) (TestService_FullDuplexCallClient, error) { stream, err := grpc.NewClientStream(ctx, &_TestService_serviceDesc.Streams[2], c.cc, "/grpc.testing.TestService/FullDuplexCall", opts...) 
if err != nil { return nil, err } x := &testServiceFullDuplexCallClient{stream} return x, nil } type TestService_FullDuplexCallClient interface { Send(*StreamingOutputCallRequest) error Recv() (*StreamingOutputCallResponse, error) grpc.ClientStream } type testServiceFullDuplexCallClient struct { grpc.ClientStream } func (x *testServiceFullDuplexCallClient) Send(m *StreamingOutputCallRequest) error { return x.ClientStream.SendMsg(m) } func (x *testServiceFullDuplexCallClient) Recv() (*StreamingOutputCallResponse, error) { m := new(StreamingOutputCallResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func (c *testServiceClient) HalfDuplexCall(ctx context.Context, opts ...grpc.CallOption) (TestService_HalfDuplexCallClient, error) { stream, err := grpc.NewClientStream(ctx, &_TestService_serviceDesc.Streams[3], c.cc, "/grpc.testing.TestService/HalfDuplexCall", opts...) if err != nil { return nil, err } x := &testServiceHalfDuplexCallClient{stream} return x, nil } type TestService_HalfDuplexCallClient interface { Send(*StreamingOutputCallRequest) error Recv() (*StreamingOutputCallResponse, error) grpc.ClientStream } type testServiceHalfDuplexCallClient struct { grpc.ClientStream } func (x *testServiceHalfDuplexCallClient) Send(m *StreamingOutputCallRequest) error { return x.ClientStream.SendMsg(m) } func (x *testServiceHalfDuplexCallClient) Recv() (*StreamingOutputCallResponse, error) { m := new(StreamingOutputCallResponse) if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } return m, nil } // Server API for TestService service type TestServiceServer interface { // One empty request followed by one empty response. EmptyCall(context.Context, *Empty) (*Empty, error) // One request followed by one response. // The server returns the client payload as-is. UnaryCall(context.Context, *SimpleRequest) (*SimpleResponse, error) // One request followed by a sequence of responses (streamed download). // The server returns the payload with client desired type and sizes. StreamingOutputCall(*StreamingOutputCallRequest, TestService_StreamingOutputCallServer) error // A sequence of requests followed by one response (streamed upload). // The server returns the aggregated size of client payload as the result. StreamingInputCall(TestService_StreamingInputCallServer) error // A sequence of requests with each request served by the server immediately. // As one request could lead to multiple responses, this interface // demonstrates the idea of full duplexing. FullDuplexCall(TestService_FullDuplexCallServer) error // A sequence of requests followed by a sequence of responses. // The server buffers all the client requests and then serves them in order. A // stream of responses are returned to the client when the server starts with // first request. 
HalfDuplexCall(TestService_HalfDuplexCallServer) error } func RegisterTestServiceServer(s *grpc.Server, srv TestServiceServer) { s.RegisterService(&_TestService_serviceDesc, srv) } func _TestService_EmptyCall_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(Empty) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(TestServiceServer).EmptyCall(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/grpc.testing.TestService/EmptyCall", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(TestServiceServer).EmptyCall(ctx, req.(*Empty)) } return interceptor(ctx, in, info, handler) } func _TestService_UnaryCall_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(SimpleRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(TestServiceServer).UnaryCall(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/grpc.testing.TestService/UnaryCall", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(TestServiceServer).UnaryCall(ctx, req.(*SimpleRequest)) } return interceptor(ctx, in, info, handler) } func _TestService_StreamingOutputCall_Handler(srv interface{}, stream grpc.ServerStream) error { m := new(StreamingOutputCallRequest) if err := stream.RecvMsg(m); err != nil { return err } return srv.(TestServiceServer).StreamingOutputCall(m, &testServiceStreamingOutputCallServer{stream}) } type TestService_StreamingOutputCallServer interface { Send(*StreamingOutputCallResponse) error grpc.ServerStream } type testServiceStreamingOutputCallServer struct { grpc.ServerStream } func (x *testServiceStreamingOutputCallServer) Send(m *StreamingOutputCallResponse) error { return x.ServerStream.SendMsg(m) } func _TestService_StreamingInputCall_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(TestServiceServer).StreamingInputCall(&testServiceStreamingInputCallServer{stream}) } type TestService_StreamingInputCallServer interface { SendAndClose(*StreamingInputCallResponse) error Recv() (*StreamingInputCallRequest, error) grpc.ServerStream } type testServiceStreamingInputCallServer struct { grpc.ServerStream } func (x *testServiceStreamingInputCallServer) SendAndClose(m *StreamingInputCallResponse) error { return x.ServerStream.SendMsg(m) } func (x *testServiceStreamingInputCallServer) Recv() (*StreamingInputCallRequest, error) { m := new(StreamingInputCallRequest) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func _TestService_FullDuplexCall_Handler(srv interface{}, stream grpc.ServerStream) error { return srv.(TestServiceServer).FullDuplexCall(&testServiceFullDuplexCallServer{stream}) } type TestService_FullDuplexCallServer interface { Send(*StreamingOutputCallResponse) error Recv() (*StreamingOutputCallRequest, error) grpc.ServerStream } type testServiceFullDuplexCallServer struct { grpc.ServerStream } func (x *testServiceFullDuplexCallServer) Send(m *StreamingOutputCallResponse) error { return x.ServerStream.SendMsg(m) } func (x *testServiceFullDuplexCallServer) Recv() (*StreamingOutputCallRequest, error) { m := new(StreamingOutputCallRequest) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } func _TestService_HalfDuplexCall_Handler(srv interface{}, 
stream grpc.ServerStream) error { return srv.(TestServiceServer).HalfDuplexCall(&testServiceHalfDuplexCallServer{stream}) } type TestService_HalfDuplexCallServer interface { Send(*StreamingOutputCallResponse) error Recv() (*StreamingOutputCallRequest, error) grpc.ServerStream } type testServiceHalfDuplexCallServer struct { grpc.ServerStream } func (x *testServiceHalfDuplexCallServer) Send(m *StreamingOutputCallResponse) error { return x.ServerStream.SendMsg(m) } func (x *testServiceHalfDuplexCallServer) Recv() (*StreamingOutputCallRequest, error) { m := new(StreamingOutputCallRequest) if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err } return m, nil } var _TestService_serviceDesc = grpc.ServiceDesc{ ServiceName: "grpc.testing.TestService", HandlerType: (*TestServiceServer)(nil), Methods: []grpc.MethodDesc{ { MethodName: "EmptyCall", Handler: _TestService_EmptyCall_Handler, }, { MethodName: "UnaryCall", Handler: _TestService_UnaryCall_Handler, }, }, Streams: []grpc.StreamDesc{ { StreamName: "StreamingOutputCall", Handler: _TestService_StreamingOutputCall_Handler, ServerStreams: true, }, { StreamName: "StreamingInputCall", Handler: _TestService_StreamingInputCall_Handler, ClientStreams: true, }, { StreamName: "FullDuplexCall", Handler: _TestService_FullDuplexCall_Handler, ServerStreams: true, ClientStreams: true, }, { StreamName: "HalfDuplexCall", Handler: _TestService_HalfDuplexCall_Handler, ServerStreams: true, ClientStreams: true, }, }, Metadata: "grpc_testing/test.proto", } func init() { proto.RegisterFile("grpc_testing/test.proto", fileDescriptor0) } var fileDescriptor0 = []byte{ // 582 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xbc, 0x54, 0xdd, 0x6e, 0xd3, 0x4c, 0x10, 0xfd, 0xb6, 0x49, 0xbe, 0x34, 0x93, 0xd4, 0x8a, 0x36, 0xaa, 0xea, 0xba, 0x48, 0x58, 0xe6, 0x02, 0x83, 0x44, 0x8a, 0x22, 0xc1, 0x25, 0xa8, 0xb4, 0xa9, 0xa8, 0x94, 0x26, 0xc1, 0x4e, 0xae, 0xa3, 0x25, 0xd9, 0x1a, 0x4b, 0x8e, 0xbd, 0xac, 0xd7, 0x15, 0xe9, 0x05, 0x2f, 0xc6, 0xcb, 0xf0, 0x10, 0x3c, 0x00, 0x5a, 0xff, 0x24, 0x4e, 0xe2, 0x8a, 0x14, 0x04, 0x57, 0xb6, 0x67, 0xce, 0x9c, 0x39, 0xc7, 0x33, 0xbb, 0x70, 0xe4, 0x70, 0x36, 0x9d, 0x08, 0x1a, 0x0a, 0xd7, 0x77, 0x4e, 0xe5, 0xb3, 0xcd, 0x78, 0x20, 0x02, 0xdc, 0x90, 0x89, 0x76, 0x9a, 0x30, 0xaa, 0x50, 0xe9, 0xce, 0x99, 0x58, 0x18, 0x3d, 0xa8, 0x0e, 0xc9, 0xc2, 0x0b, 0xc8, 0x0c, 0xbf, 0x80, 0xb2, 0x58, 0x30, 0xaa, 0x22, 0x1d, 0x99, 0x4a, 0xe7, 0xb8, 0x9d, 0x2f, 0x68, 0xa7, 0xa0, 0xd1, 0x82, 0x51, 0x2b, 0x86, 0x61, 0x0c, 0xe5, 0x8f, 0xc1, 0x6c, 0xa1, 0xee, 0xe9, 0xc8, 0x6c, 0x58, 0xf1, 0xbb, 0xf1, 0x03, 0xc1, 0x81, 0xed, 0xce, 0x99, 0x47, 0x2d, 0xfa, 0x39, 0xa2, 0xa1, 0xc0, 0x6f, 0xe0, 0x80, 0xd3, 0x90, 0x05, 0x7e, 0x48, 0x27, 0xbb, 0xb1, 0x37, 0x32, 0xbc, 0xfc, 0xc2, 0x4f, 0x72, 0xf5, 0xa1, 0x7b, 0x47, 0xe3, 0x76, 0x95, 0x15, 0xc8, 0x76, 0xef, 0x28, 0x3e, 0x85, 0x2a, 0x4b, 0x18, 0xd4, 0x92, 0x8e, 0xcc, 0x7a, 0xe7, 0xb0, 0x90, 0xde, 0xca, 0x50, 0x92, 0xf5, 0xc6, 0xf5, 0xbc, 0x49, 0x14, 0x52, 0xee, 0x93, 0x39, 0x55, 0xcb, 0x3a, 0x32, 0xf7, 0xad, 0x86, 0x0c, 0x8e, 0xd3, 0x18, 0x36, 0xa1, 0x19, 0x83, 0x02, 0x12, 0x89, 0x4f, 0x93, 0x70, 0x1a, 0x30, 0xaa, 0x56, 0x62, 0x9c, 0x22, 0xe3, 0x03, 0x19, 0xb6, 0x65, 0xd4, 0xf8, 0x0a, 0x4a, 0xe6, 0x3a, 0x51, 0x95, 0x57, 0x84, 0x76, 0x52, 0xa4, 0xc1, 0xfe, 0x52, 0x8c, 0xb4, 0x58, 0xb3, 0x96, 0xdf, 0xf8, 0x31, 0xd4, 0xf3, 0x1a, 0x4a, 0x71, 0x1a, 0x82, 0x55, 0xff, 0x1e, 0x1c, 0xdb, 0x82, 0x53, 0x32, 0x77, 0x7d, 0xe7, 0xca, 0x67, 0x91, 0x38, 0x27, 0x9e, 0x97, 0x4d, 0xe0, 
0xa1, 0x52, 0x8c, 0x11, 0x68, 0x45, 0x6c, 0xa9, 0xb3, 0xd7, 0x70, 0x44, 0x1c, 0x87, 0x53, 0x87, 0x08, 0x3a, 0x9b, 0xa4, 0x35, 0xc9, 0x68, 0x50, 0x3c, 0x9a, 0xc3, 0x55, 0x3a, 0xa5, 0x96, 0x33, 0x32, 0xae, 0x00, 0x67, 0x1c, 0x43, 0xc2, 0xc9, 0x9c, 0x0a, 0xca, 0x43, 0xb9, 0x44, 0xb9, 0xd2, 0xf8, 0x5d, 0xda, 0x75, 0x7d, 0x41, 0xf9, 0x2d, 0x91, 0x03, 0x4a, 0x07, 0x0e, 0x59, 0x68, 0x1c, 0x1a, 0xdf, 0x51, 0x4e, 0xe1, 0x20, 0x12, 0x1b, 0x86, 0xff, 0x74, 0xe5, 0x3e, 0x40, 0x6b, 0x59, 0xcf, 0x96, 0x52, 0xd5, 0x3d, 0xbd, 0x64, 0xd6, 0x3b, 0xfa, 0x3a, 0xcb, 0xb6, 0x25, 0x0b, 0xf3, 0x6d, 0x9b, 0x0f, 0x5d, 0x50, 0xa3, 0x0f, 0x27, 0x85, 0x0e, 0x7f, 0x73, 0xbd, 0x9e, 0xbf, 0x85, 0x7a, 0xce, 0x30, 0x6e, 0x42, 0xe3, 0x7c, 0x70, 0x3d, 0xb4, 0xba, 0xb6, 0x7d, 0xf6, 0xae, 0xd7, 0x6d, 0xfe, 0x87, 0x31, 0x28, 0xe3, 0xfe, 0x5a, 0x0c, 0x61, 0x80, 0xff, 0xad, 0xb3, 0xfe, 0xc5, 0xe0, 0xba, 0xb9, 0xd7, 0xf9, 0x56, 0x86, 0xfa, 0x88, 0x86, 0xc2, 0xa6, 0xfc, 0xd6, 0x9d, 0x52, 0xfc, 0x0a, 0x6a, 0xf1, 0x05, 0x22, 0x65, 0xe1, 0xd6, 0x7a, 0xf7, 0x38, 0xa1, 0x15, 0x05, 0xf1, 0x25, 0xd4, 0xc6, 0x3e, 0xe1, 0x49, 0xd9, 0xc9, 0x3a, 0x62, 0xed, 0xe2, 0xd0, 0x1e, 0x15, 0x27, 0xd3, 0x1f, 0xe0, 0x41, 0xab, 0xe0, 0xff, 0x60, 0x73, 0xa3, 0xe8, 0xde, 0x25, 0xd1, 0x9e, 0xed, 0x80, 0x4c, 0x7a, 0xbd, 0x44, 0xd8, 0x05, 0xbc, 0x7d, 0x22, 0xf0, 0xd3, 0x7b, 0x28, 0x36, 0x4f, 0xa0, 0x66, 0xfe, 0x1a, 0x98, 0xb4, 0x32, 0x65, 0x2b, 0xe5, 0x32, 0xf2, 0xbc, 0x8b, 0x88, 0x79, 0xf4, 0xcb, 0x5f, 0xf3, 0x64, 0xa2, 0xd8, 0x95, 0xf2, 0x9e, 0x78, 0x37, 0xff, 0xa0, 0xd5, 0xcf, 0x00, 0x00, 0x00, 0xff, 0xff, 0xb8, 0xa6, 0x30, 0x01, 0x96, 0x06, 0x00, 0x00, } golang-google-grpc-1.6.0/test/grpc_testing/test.proto000066400000000000000000000122741315416461300227150ustar00rootroot00000000000000// Copyright 2017 gRPC authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. // An integration test service that covers all the method signature permutations // of unary/streaming requests/responses. syntax = "proto2"; package grpc.testing; message Empty {} // The type of payload that should be returned. enum PayloadType { // Compressable text format. COMPRESSABLE = 0; // Uncompressable binary format. UNCOMPRESSABLE = 1; // Randomly chosen from all other formats defined in this enum. RANDOM = 2; } // A block of data, to simply increase gRPC message size. message Payload { // The type of data in body. optional PayloadType type = 1; // Primary contents of payload. optional bytes body = 2; } // Unary request. message SimpleRequest { // Desired payload type in the response from the server. // If response_type is RANDOM, server randomly chooses one from other formats. optional PayloadType response_type = 1; // Desired payload size in the response from the server. // If response_type is COMPRESSABLE, this denotes the size before compression. optional int32 response_size = 2; // Optional input payload sent along with the request. optional Payload payload = 3; // Whether SimpleResponse should include username. 
optional bool fill_username = 4; // Whether SimpleResponse should include OAuth scope. optional bool fill_oauth_scope = 5; } // Unary response, as configured by the request. message SimpleResponse { // Payload to increase message size. optional Payload payload = 1; // The user the request came from, for verifying authentication was // successful when the client expected it. optional string username = 2; // OAuth scope. optional string oauth_scope = 3; } // Client-streaming request. message StreamingInputCallRequest { // Optional input payload sent along with the request. optional Payload payload = 1; // Not expecting any payload from the response. } // Client-streaming response. message StreamingInputCallResponse { // Aggregated size of payloads received from the client. optional int32 aggregated_payload_size = 1; } // Configuration for a particular response. message ResponseParameters { // Desired payload sizes in responses from the server. // If response_type is COMPRESSABLE, this denotes the size before compression. optional int32 size = 1; // Desired interval between consecutive responses in the response stream in // microseconds. optional int32 interval_us = 2; } // Server-streaming request. message StreamingOutputCallRequest { // Desired payload type in the response from the server. // If response_type is RANDOM, the payload from each response in the stream // might be of different types. This is to simulate a mixed type of payload // stream. optional PayloadType response_type = 1; // Configuration for each expected response message. repeated ResponseParameters response_parameters = 2; // Optional input payload sent along with the request. optional Payload payload = 3; } // Server-streaming response, as configured by the request and parameters. message StreamingOutputCallResponse { // Payload to increase response size. optional Payload payload = 1; } // A simple service to test the various types of RPCs and experiment with // performance with various types of payload. service TestService { // One empty request followed by one empty response. rpc EmptyCall(Empty) returns (Empty); // One request followed by one response. // The server returns the client payload as-is. rpc UnaryCall(SimpleRequest) returns (SimpleResponse); // One request followed by a sequence of responses (streamed download). // The server returns the payload with client desired type and sizes. rpc StreamingOutputCall(StreamingOutputCallRequest) returns (stream StreamingOutputCallResponse); // A sequence of requests followed by one response (streamed upload). // The server returns the aggregated size of client payload as the result. rpc StreamingInputCall(stream StreamingInputCallRequest) returns (StreamingInputCallResponse); // A sequence of requests with each request served by the server immediately. // As one request could lead to multiple responses, this interface // demonstrates the idea of full duplexing. rpc FullDuplexCall(stream StreamingOutputCallRequest) returns (stream StreamingOutputCallResponse); // A sequence of requests followed by a sequence of responses. // The server buffers all the client requests and then serves them in order. A // stream of responses are returned to the client when the server starts with // first request. rpc HalfDuplexCall(stream StreamingOutputCallRequest) returns (stream StreamingOutputCallResponse); } golang-google-grpc-1.6.0/test/race.go000066400000000000000000000012301315416461300174100ustar00rootroot00000000000000// +build race /* * Copyright 2016 gRPC authors. 
* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package test func init() { raceMode = true } golang-google-grpc-1.6.0/test/servertester.go000066400000000000000000000161771315416461300212530ustar00rootroot00000000000000/* * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package test import ( "bytes" "errors" "io" "strings" "testing" "time" "golang.org/x/net/http2" "golang.org/x/net/http2/hpack" ) // This is a subset of http2's serverTester type. // // serverTester wraps a io.ReadWriter (acting like the underlying // network connection) and provides utility methods to read and write // http2 frames. // // NOTE(bradfitz): this could eventually be exported somewhere. Others // have asked for it too. For now I'm still experimenting with the // API and don't feel like maintaining a stable testing API. type serverTester struct { cc io.ReadWriteCloser // client conn t testing.TB fr *http2.Framer // writing headers: headerBuf bytes.Buffer hpackEnc *hpack.Encoder // reading frames: frc chan http2.Frame frErrc chan error readTimer *time.Timer } func newServerTesterFromConn(t testing.TB, cc io.ReadWriteCloser) *serverTester { st := &serverTester{ t: t, cc: cc, frc: make(chan http2.Frame, 1), frErrc: make(chan error, 1), } st.hpackEnc = hpack.NewEncoder(&st.headerBuf) st.fr = http2.NewFramer(cc, cc) st.fr.ReadMetaHeaders = hpack.NewDecoder(4096 /*initialHeaderTableSize*/, nil) return st } func (st *serverTester) readFrame() (http2.Frame, error) { go func() { fr, err := st.fr.ReadFrame() if err != nil { st.frErrc <- err } else { st.frc <- fr } }() t := time.NewTimer(2 * time.Second) defer t.Stop() select { case f := <-st.frc: return f, nil case err := <-st.frErrc: return nil, err case <-t.C: return nil, errors.New("timeout waiting for frame") } } // greet initiates the client's HTTP/2 connection into a state where // frames may be sent. func (st *serverTester) greet() { st.writePreface() st.writeInitialSettings() st.wantSettings() st.writeSettingsAck() for { f, err := st.readFrame() if err != nil { st.t.Fatal(err) } switch f := f.(type) { case *http2.WindowUpdateFrame: // grpc's transport/http2_server sends this // before the settings ack. The Go http2 // server uses a setting instead. 
case *http2.SettingsFrame: if f.IsAck() { return } st.t.Fatalf("during greet, got non-ACK settings frame") default: st.t.Fatalf("during greet, unexpected frame type %T", f) } } } func (st *serverTester) writePreface() { n, err := st.cc.Write([]byte(http2.ClientPreface)) if err != nil { st.t.Fatalf("Error writing client preface: %v", err) } if n != len(http2.ClientPreface) { st.t.Fatalf("Writing client preface, wrote %d bytes; want %d", n, len(http2.ClientPreface)) } } func (st *serverTester) writeInitialSettings() { if err := st.fr.WriteSettings(); err != nil { st.t.Fatalf("Error writing initial SETTINGS frame from client to server: %v", err) } } func (st *serverTester) writeSettingsAck() { if err := st.fr.WriteSettingsAck(); err != nil { st.t.Fatalf("Error writing ACK of server's SETTINGS: %v", err) } } func (st *serverTester) wantSettings() *http2.SettingsFrame { f, err := st.readFrame() if err != nil { st.t.Fatalf("Error while expecting a SETTINGS frame: %v", err) } sf, ok := f.(*http2.SettingsFrame) if !ok { st.t.Fatalf("got a %T; want *SettingsFrame", f) } return sf } func (st *serverTester) wantSettingsAck() { f, err := st.readFrame() if err != nil { st.t.Fatal(err) } sf, ok := f.(*http2.SettingsFrame) if !ok { st.t.Fatalf("Wanting a settings ACK, received a %T", f) } if !sf.IsAck() { st.t.Fatal("Settings Frame didn't have ACK set") } } // wait for any activity from the server func (st *serverTester) wantAnyFrame() http2.Frame { f, err := st.fr.ReadFrame() if err != nil { st.t.Fatal(err) } return f } func (st *serverTester) encodeHeaderField(k, v string) { err := st.hpackEnc.WriteField(hpack.HeaderField{Name: k, Value: v}) if err != nil { st.t.Fatalf("HPACK encoding error for %q/%q: %v", k, v, err) } } // encodeHeader encodes headers and returns their HPACK bytes. headers // must contain an even number of key/value pairs. There may be // multiple pairs for keys (e.g. "cookie"). The :method, :path, and // :scheme headers default to GET, / and https. func (st *serverTester) encodeHeader(headers ...string) []byte { if len(headers)%2 == 1 { panic("odd number of kv args") } st.headerBuf.Reset() if len(headers) == 0 { // Fast path, mostly for benchmarks, so test code doesn't pollute // profiles when we're looking to improve server allocations. st.encodeHeaderField(":method", "GET") st.encodeHeaderField(":path", "/") st.encodeHeaderField(":scheme", "https") return st.headerBuf.Bytes() } if len(headers) == 2 && headers[0] == ":method" { // Another fast path for benchmarks. st.encodeHeaderField(":method", headers[1]) st.encodeHeaderField(":path", "/") st.encodeHeaderField(":scheme", "https") return st.headerBuf.Bytes() } pseudoCount := map[string]int{} keys := []string{":method", ":path", ":scheme"} vals := map[string][]string{ ":method": {"GET"}, ":path": {"/"}, ":scheme": {"https"}, } for len(headers) > 0 { k, v := headers[0], headers[1] headers = headers[2:] if _, ok := vals[k]; !ok { keys = append(keys, k) } if strings.HasPrefix(k, ":") { pseudoCount[k]++ if pseudoCount[k] == 1 { vals[k] = []string{v} } else { // Allows testing of invalid headers w/ dup pseudo fields. 
vals[k] = append(vals[k], v) } } else { vals[k] = append(vals[k], v) } } for _, k := range keys { for _, v := range vals[k] { st.encodeHeaderField(k, v) } } return st.headerBuf.Bytes() } func (st *serverTester) writeHeadersGRPC(streamID uint32, path string) { st.writeHeaders(http2.HeadersFrameParam{ StreamID: streamID, BlockFragment: st.encodeHeader( ":method", "POST", ":path", path, "content-type", "application/grpc", "te", "trailers", ), EndStream: false, EndHeaders: true, }) } func (st *serverTester) writeHeaders(p http2.HeadersFrameParam) { if err := st.fr.WriteHeaders(p); err != nil { st.t.Fatalf("Error writing HEADERS: %v", err) } } func (st *serverTester) writeData(streamID uint32, endStream bool, data []byte) { if err := st.fr.WriteData(streamID, endStream, data); err != nil { st.t.Fatalf("Error writing DATA: %v", err) } } func (st *serverTester) writeRSTStream(streamID uint32, code http2.ErrCode) { if err := st.fr.WriteRSTStream(streamID, code); err != nil { st.t.Fatalf("Error writing RST_STREAM: %v", err) } } func (st *serverTester) writeDataPadded(streamID uint32, endStream bool, data, padding []byte) { if err := st.fr.WriteDataPadded(streamID, endStream, data, padding); err != nil { st.t.Fatalf("Error writing DATA with padding: %v", err) } } golang-google-grpc-1.6.0/testdata/000077500000000000000000000000001315416461300170055ustar00rootroot00000000000000golang-google-grpc-1.6.0/testdata/ca.pem000066400000000000000000000015271315416461300201000ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIICSjCCAbOgAwIBAgIJAJHGGR4dGioHMA0GCSqGSIb3DQEBCwUAMFYxCzAJBgNV BAYTAkFVMRMwEQYDVQQIEwpTb21lLVN0YXRlMSEwHwYDVQQKExhJbnRlcm5ldCBX aWRnaXRzIFB0eSBMdGQxDzANBgNVBAMTBnRlc3RjYTAeFw0xNDExMTEyMjMxMjla Fw0yNDExMDgyMjMxMjlaMFYxCzAJBgNVBAYTAkFVMRMwEQYDVQQIEwpTb21lLVN0 YXRlMSEwHwYDVQQKExhJbnRlcm5ldCBXaWRnaXRzIFB0eSBMdGQxDzANBgNVBAMT BnRlc3RjYTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEAwEDfBV5MYdlHVHJ7 +L4nxrZy7mBfAVXpOc5vMYztssUI7mL2/iYujiIXM+weZYNTEpLdjyJdu7R5gGUu g1jSVK/EPHfc74O7AyZU34PNIP4Sh33N+/A5YexrNgJlPY+E3GdVYi4ldWJjgkAd Qah2PH5ACLrIIC6tRka9hcaBlIECAwEAAaMgMB4wDAYDVR0TBAUwAwEB/zAOBgNV HQ8BAf8EBAMCAgQwDQYJKoZIhvcNAQELBQADgYEAHzC7jdYlzAVmddi/gdAeKPau sPBG/C2HCWqHzpCUHcKuvMzDVkY/MP2o6JIW2DBbY64bO/FceExhjcykgaYtCH/m oIU63+CFOTtR7otyQAWHqXa7q4SbCDlG7DyRFxqG0txPtGvy12lgldA2+RgcigQG Dfcog5wrJytaQ6UA0wE= -----END CERTIFICATE----- golang-google-grpc-1.6.0/testdata/server1.key000066400000000000000000000016201315416461300211050ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIICdQIBADANBgkqhkiG9w0BAQEFAASCAl8wggJbAgEAAoGBAOHDFScoLCVJpYDD M4HYtIdV6Ake/sMNaaKdODjDMsux/4tDydlumN+fm+AjPEK5GHhGn1BgzkWF+slf 3BxhrA/8dNsnunstVA7ZBgA/5qQxMfGAq4wHNVX77fBZOgp9VlSMVfyd9N8YwbBY AckOeUQadTi2X1S6OgJXgQ0m3MWhAgMBAAECgYAn7qGnM2vbjJNBm0VZCkOkTIWm V10okw7EPJrdL2mkre9NasghNXbE1y5zDshx5Nt3KsazKOxTT8d0Jwh/3KbaN+YY tTCbKGW0pXDRBhwUHRcuRzScjli8Rih5UOCiZkhefUTcRb6xIhZJuQy71tjaSy0p dHZRmYyBYO2YEQ8xoQJBAPrJPhMBkzmEYFtyIEqAxQ/o/A6E+E4w8i+KM7nQCK7q K4JXzyXVAjLfyBZWHGM2uro/fjqPggGD6QH1qXCkI4MCQQDmdKeb2TrKRh5BY1LR 81aJGKcJ2XbcDu6wMZK4oqWbTX2KiYn9GB0woM6nSr/Y6iy1u145YzYxEV/iMwff DJULAkB8B2MnyzOg0pNFJqBJuH29bKCcHa8gHJzqXhNO5lAlEbMK95p/P2Wi+4Hd aiEIAF1BF326QJcvYKmwSmrORp85AkAlSNxRJ50OWrfMZnBgzVjDx3xG6KsFQVk2 ol6VhqL6dFgKUORFUWBvnKSyhjJxurlPEahV6oo6+A+mPhFY8eUvAkAZQyTdupP3 XEFQKctGz+9+gKkemDp7LBBMEMBXrGTLPhpEfcjv/7KPdnFHYmhYeBTBnuVmTVWe F98XJ7tIFfJq -----END PRIVATE KEY----- golang-google-grpc-1.6.0/testdata/server1.pem000066400000000000000000000017041315416461300211010ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- 
MIICnDCCAgWgAwIBAgIBBzANBgkqhkiG9w0BAQsFADBWMQswCQYDVQQGEwJBVTET MBEGA1UECBMKU29tZS1TdGF0ZTEhMB8GA1UEChMYSW50ZXJuZXQgV2lkZ2l0cyBQ dHkgTHRkMQ8wDQYDVQQDEwZ0ZXN0Y2EwHhcNMTUxMTA0MDIyMDI0WhcNMjUxMTAx MDIyMDI0WjBlMQswCQYDVQQGEwJVUzERMA8GA1UECBMISWxsaW5vaXMxEDAOBgNV BAcTB0NoaWNhZ28xFTATBgNVBAoTDEV4YW1wbGUsIENvLjEaMBgGA1UEAxQRKi50 ZXN0Lmdvb2dsZS5jb20wgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAOHDFSco LCVJpYDDM4HYtIdV6Ake/sMNaaKdODjDMsux/4tDydlumN+fm+AjPEK5GHhGn1Bg zkWF+slf3BxhrA/8dNsnunstVA7ZBgA/5qQxMfGAq4wHNVX77fBZOgp9VlSMVfyd 9N8YwbBYAckOeUQadTi2X1S6OgJXgQ0m3MWhAgMBAAGjazBpMAkGA1UdEwQCMAAw CwYDVR0PBAQDAgXgME8GA1UdEQRIMEaCECoudGVzdC5nb29nbGUuZnKCGHdhdGVy em9vaS50ZXN0Lmdvb2dsZS5iZYISKi50ZXN0LnlvdXR1YmUuY29thwTAqAEDMA0G CSqGSIb3DQEBCwUAA4GBAJFXVifQNub1LUP4JlnX5lXNlo8FxZ2a12AFQs+bzoJ6 hM044EDjqyxUqSbVePK0ni3w1fHQB5rY9yYC5f8G7aqqTY1QOhoUk8ZTSTRpnkTh y4jjdvTZeLDVBlueZUTDRmy2feY5aZIU18vFDK08dTG0A87pppuv1LNIR3loveU8 -----END CERTIFICATE----- golang-google-grpc-1.6.0/testdata/testdata.go000066400000000000000000000030531315416461300211460ustar00rootroot00000000000000/* * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package testdata import ( "log" "os" "path/filepath" ) // Path returns the absolute path the given relative file or directory path, // relative to the google.golang.org/grpc/testdata directory in the user's GOPATH. // If rel is already absolute, it is returned unmodified. func Path(rel string) string { if filepath.IsAbs(rel) { return rel } v, err := goPackagePath("google.golang.org/grpc/testdata") if err != nil { log.Fatalf("Error finding google.golang.org/grpc/testdata directory: %v", err) } return filepath.Join(v, rel) } func goPackagePath(pkg string) (path string, err error) { gp := os.Getenv("GOPATH") if gp == "" { return path, os.ErrNotExist } for _, p := range filepath.SplitList(gp) { dir := filepath.Join(p, "src", filepath.FromSlash(pkg)) fi, err := os.Stat(dir) if os.IsNotExist(err) { continue } if err != nil { return "", err } if !fi.IsDir() { continue } return dir, nil } return path, os.ErrNotExist } golang-google-grpc-1.6.0/trace.go000066400000000000000000000050771315416461300166320ustar00rootroot00000000000000/* * * Copyright 2015 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package grpc import ( "bytes" "fmt" "io" "net" "strings" "time" "golang.org/x/net/trace" ) // EnableTracing controls whether to trace RPCs using the golang.org/x/net/trace package. // This should only be set before any RPCs are sent or received by this program. 
var EnableTracing = true // methodFamily returns the trace family for the given method. // It turns "/pkg.Service/GetFoo" into "pkg.Service". func methodFamily(m string) string { m = strings.TrimPrefix(m, "/") // remove leading slash if i := strings.Index(m, "/"); i >= 0 { m = m[:i] // remove everything from second slash } if i := strings.LastIndex(m, "."); i >= 0 { m = m[i+1:] // cut down to last dotted component } return m } // traceInfo contains tracing information for an RPC. type traceInfo struct { tr trace.Trace firstLine firstLine } // firstLine is the first line of an RPC trace. type firstLine struct { client bool // whether this is a client (outgoing) RPC remoteAddr net.Addr deadline time.Duration // may be zero } func (f *firstLine) String() string { var line bytes.Buffer io.WriteString(&line, "RPC: ") if f.client { io.WriteString(&line, "to") } else { io.WriteString(&line, "from") } fmt.Fprintf(&line, " %v deadline:", f.remoteAddr) if f.deadline != 0 { fmt.Fprint(&line, f.deadline) } else { io.WriteString(&line, "none") } return line.String() } // payload represents an RPC request or response payload. type payload struct { sent bool // whether this is an outgoing payload msg interface{} // e.g. a proto.Message // TODO(dsymonds): add stringifying info to codec, and limit how much we hold here? } func (p payload) String() string { if p.sent { return fmt.Sprintf("sent: %v", p.msg) } return fmt.Sprintf("recv: %v", p.msg) } type fmtStringer struct { format string a []interface{} } func (f *fmtStringer) String() string { return fmt.Sprintf(f.format, f.a...) } type stringer string func (s stringer) String() string { return string(s) } golang-google-grpc-1.6.0/transport/000077500000000000000000000000001315416461300172305ustar00rootroot00000000000000golang-google-grpc-1.6.0/transport/bdp_estimator.go000066400000000000000000000102771315416461300224220ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package transport import ( "sync" "time" ) const ( // bdpLimit is the maximum value the flow control windows // will be increased to. bdpLimit = (1 << 20) * 4 // alpha is a constant factor used to keep a moving average // of RTTs. alpha = 0.9 // If the current bdp sample is greater than or equal to // our beta * our estimated bdp and the current bandwidth // sample is the maximum bandwidth observed so far, we // increase our bbp estimate by a factor of gamma. beta = 0.66 // To put our bdp to be smaller than or equal to twice the real BDP, // we should multiply our current sample with 4/3, however to round things out // we use 2 as the multiplication factor. gamma = 2 ) var ( // Adding arbitrary data to ping so that its ack can be // identified. // Easter-egg: what does the ping message say? bdpPing = &ping{data: [8]byte{2, 4, 16, 16, 9, 14, 7, 7}} ) type bdpEstimator struct { // sentAt is the time when the ping was sent. sentAt time.Time mu sync.Mutex // bdp is the current bdp estimate. 
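	// The estimate approximates the connection's bandwidth-delay product
	// (roughly measured bandwidth * round-trip time), i.e. the amount of
	// data that must be in flight to keep the path fully utilized. It
	// starts at the transport's initial window size and is only ever
	// raised, up to bdpLimit, by calculate below.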
bdp uint32 // sample is the number of bytes received in one measurement cycle. sample uint32 // bwMax is the maximum bandwidth noted so far (bytes/sec). bwMax float64 // bool to keep track of the begining of a new measurement cycle. isSent bool // Callback to update the window sizes. updateFlowControl func(n uint32) // sampleCount is the number of samples taken so far. sampleCount uint64 // round trip time (seconds) rtt float64 } // timesnap registers the time bdp ping was sent out so that // network rtt can be calculated when its ack is recieved. // It is called (by controller) when the bdpPing is // being written on the wire. func (b *bdpEstimator) timesnap(d [8]byte) { if bdpPing.data != d { return } b.sentAt = time.Now() } // add adds bytes to the current sample for calculating bdp. // It returns true only if a ping must be sent. This can be used // by the caller (handleData) to make decision about batching // a window update with it. func (b *bdpEstimator) add(n uint32) bool { b.mu.Lock() defer b.mu.Unlock() if b.bdp == bdpLimit { return false } if !b.isSent { b.isSent = true b.sample = n b.sentAt = time.Time{} b.sampleCount++ return true } b.sample += n return false } // calculate is called when an ack for a bdp ping is received. // Here we calculate the current bdp and bandwidth sample and // decide if the flow control windows should go up. func (b *bdpEstimator) calculate(d [8]byte) { // Check if the ping acked for was the bdp ping. if bdpPing.data != d { return } b.mu.Lock() rttSample := time.Since(b.sentAt).Seconds() if b.sampleCount < 10 { // Bootstrap rtt with an average of first 10 rtt samples. b.rtt += (rttSample - b.rtt) / float64(b.sampleCount) } else { // Heed to the recent past more. b.rtt += (rttSample - b.rtt) * float64(alpha) } b.isSent = false // The number of bytes accumalated so far in the sample is smaller // than or equal to 1.5 times the real BDP on a saturated connection. bwCurrent := float64(b.sample) / (b.rtt * float64(1.5)) if bwCurrent > b.bwMax { b.bwMax = bwCurrent } // If the current sample (which is smaller than or equal to the 1.5 times the real BDP) is // greater than or equal to 2/3rd our perceived bdp AND this is the maximum bandwidth seen so far, we // should update our perception of the network BDP. if float64(b.sample) >= beta*float64(b.bdp) && bwCurrent == b.bwMax && b.bdp != bdpLimit { sampleFloat := float64(b.sample) b.bdp = uint32(gamma * sampleFloat) if b.bdp > bdpLimit { b.bdp = bdpLimit } bdp := b.bdp b.mu.Unlock() b.updateFlowControl(bdp) return } b.mu.Unlock() } golang-google-grpc-1.6.0/transport/control.go000066400000000000000000000147451315416461300212520ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package transport import ( "fmt" "math" "sync" "time" "golang.org/x/net/http2" ) const ( // The default value of flow control window size in HTTP2 spec. defaultWindowSize = 65535 // The initial window size for flow control. 
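	// A fixed 64KB window is small relative to many real paths: as a
	// rough worked example with assumed numbers, a 50 Mbit/s link with a
	// 100ms round-trip time has a bandwidth-delay product of roughly
	// 6.25 MB/s * 0.1s, i.e. about 625KB, so throughput stays capped well
	// below link capacity until the bdpEstimator (bdp_estimator.go) grows
	// the windows, up to bdpLimit.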
initialWindowSize = defaultWindowSize // for an RPC infinity = time.Duration(math.MaxInt64) defaultClientKeepaliveTime = infinity defaultClientKeepaliveTimeout = time.Duration(20 * time.Second) defaultMaxStreamsClient = 100 defaultMaxConnectionIdle = infinity defaultMaxConnectionAge = infinity defaultMaxConnectionAgeGrace = infinity defaultServerKeepaliveTime = time.Duration(2 * time.Hour) defaultServerKeepaliveTimeout = time.Duration(20 * time.Second) defaultKeepalivePolicyMinTime = time.Duration(5 * time.Minute) // max window limit set by HTTP2 Specs. maxWindowSize = math.MaxInt32 ) // The following defines various control items which could flow through // the control buffer of transport. They represent different aspects of // control tasks, e.g., flow control, settings, streaming resetting, etc. type windowUpdate struct { streamID uint32 increment uint32 flush bool } func (*windowUpdate) item() {} type settings struct { ack bool ss []http2.Setting } func (*settings) item() {} type resetStream struct { streamID uint32 code http2.ErrCode } func (*resetStream) item() {} type goAway struct { code http2.ErrCode debugData []byte headsUp bool closeConn bool } func (*goAway) item() {} type flushIO struct { } func (*flushIO) item() {} type ping struct { ack bool data [8]byte } func (*ping) item() {} // quotaPool is a pool which accumulates the quota and sends it to acquire() // when it is available. type quotaPool struct { c chan int mu sync.Mutex quota int } // newQuotaPool creates a quotaPool which has quota q available to consume. func newQuotaPool(q int) *quotaPool { qb := "aPool{ c: make(chan int, 1), } if q > 0 { qb.c <- q } else { qb.quota = q } return qb } // add cancels the pending quota sent on acquired, incremented by v and sends // it back on acquire. func (qb *quotaPool) add(v int) { qb.mu.Lock() defer qb.mu.Unlock() select { case n := <-qb.c: qb.quota += n default: } qb.quota += v if qb.quota <= 0 { return } // After the pool has been created, this is the only place that sends on // the channel. Since mu is held at this point and any quota that was sent // on the channel has been retrieved, we know that this code will always // place any positive quota value on the channel. select { case qb.c <- qb.quota: qb.quota = 0 default: } } // acquire returns the channel on which available quota amounts are sent. func (qb *quotaPool) acquire() <-chan int { return qb.c } // inFlow deals with inbound flow control type inFlow struct { mu sync.Mutex // The inbound flow control limit for pending data. limit uint32 // pendingData is the overall data which have been received but not been // consumed by applications. pendingData uint32 // The amount of data the application has consumed but grpc has not sent // window update for them. Used to reduce window update frequency. pendingUpdate uint32 // delta is the extra window update given by receiver when an application // is reading data bigger in size than the inFlow limit. delta uint32 } // newLimit updates the inflow window to a new value n. // It assumes that n is always greater than the old limit. func (f *inFlow) newLimit(n uint32) uint32 { f.mu.Lock() defer f.mu.Unlock() d := n - f.limit f.limit = n return d } func (f *inFlow) maybeAdjust(n uint32) uint32 { if n > uint32(math.MaxInt32) { n = uint32(math.MaxInt32) } f.mu.Lock() defer f.mu.Unlock() // estSenderQuota is the receiver's view of the maximum number of bytes the sender // can send without a window update. 
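	// Rough worked example (illustrative numbers only): with limit=65535,
	// pendingData=16384 and pendingUpdate=0, the sender is believed to
	// have 49151 bytes of quota left. If the application then asks to
	// read a 200000-byte message (n=200000), estUntransmittedData is
	// 183616 > 49151, so a delta window update covering the full 200000
	// bytes is granted (limit+n is still far below maxWindowSize),
	// letting the sender push the whole message without stalling.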
estSenderQuota := int32(f.limit - (f.pendingData + f.pendingUpdate)) // estUntransmittedData is the maximum number of bytes the sends might not have put // on the wire yet. A value of 0 or less means that we have already received all or // more bytes than the application is requesting to read. estUntransmittedData := int32(n - f.pendingData) // Casting into int32 since it could be negative. // This implies that unless we send a window update, the sender won't be able to send all the bytes // for this message. Therefore we must send an update over the limit since there's an active read // request from the application. if estUntransmittedData > estSenderQuota { // Sender's window shouldn't go more than 2^31 - 1 as speecified in the HTTP spec. if f.limit+n > maxWindowSize { f.delta = maxWindowSize - f.limit } else { // Send a window update for the whole message and not just the difference between // estUntransmittedData and estSenderQuota. This will be helpful in case the message // is padded; We will fallback on the current available window(at least a 1/4th of the limit). f.delta = n } return f.delta } return 0 } // onData is invoked when some data frame is received. It updates pendingData. func (f *inFlow) onData(n uint32) error { f.mu.Lock() defer f.mu.Unlock() f.pendingData += n if f.pendingData+f.pendingUpdate > f.limit+f.delta { return fmt.Errorf("received %d-bytes data exceeding the limit %d bytes", f.pendingData+f.pendingUpdate, f.limit) } return nil } // onRead is invoked when the application reads the data. It returns the window size // to be sent to the peer. func (f *inFlow) onRead(n uint32) uint32 { f.mu.Lock() defer f.mu.Unlock() if f.pendingData == 0 { return 0 } f.pendingData -= n if n > f.delta { n -= f.delta f.delta = 0 } else { f.delta -= n n = 0 } f.pendingUpdate += n if f.pendingUpdate >= f.limit/4 { wu := f.pendingUpdate f.pendingUpdate = 0 return wu } return 0 } func (f *inFlow) resetPendingUpdate() uint32 { f.mu.Lock() defer f.mu.Unlock() n := f.pendingUpdate f.pendingUpdate = 0 return n } golang-google-grpc-1.6.0/transport/go16.go000066400000000000000000000024521315416461300203360ustar00rootroot00000000000000// +build go1.6,!go1.7 /* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package transport import ( "net" "google.golang.org/grpc/codes" "golang.org/x/net/context" ) // dialContext connects to the address on the named network. func dialContext(ctx context.Context, network, address string) (net.Conn, error) { return (&net.Dialer{Cancel: ctx.Done()}).Dial(network, address) } // ContextErr converts the error from context package into a StreamError. 
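// DeadlineExceeded maps to codes.DeadlineExceeded and Canceled maps to
// codes.Canceled; any other error is reported as codes.Internal, since the
// context package is only expected to produce those two errors.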
func ContextErr(err error) StreamError { switch err { case context.DeadlineExceeded: return streamErrorf(codes.DeadlineExceeded, "%v", err) case context.Canceled: return streamErrorf(codes.Canceled, "%v", err) } return streamErrorf(codes.Internal, "Unexpected error from context packet: %v", err) } golang-google-grpc-1.6.0/transport/go17.go000066400000000000000000000025311315416461300203350ustar00rootroot00000000000000// +build go1.7 /* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package transport import ( "context" "net" "google.golang.org/grpc/codes" netctx "golang.org/x/net/context" ) // dialContext connects to the address on the named network. func dialContext(ctx context.Context, network, address string) (net.Conn, error) { return (&net.Dialer{}).DialContext(ctx, network, address) } // ContextErr converts the error from context package into a StreamError. func ContextErr(err error) StreamError { switch err { case context.DeadlineExceeded, netctx.DeadlineExceeded: return streamErrorf(codes.DeadlineExceeded, "%v", err) case context.Canceled, netctx.Canceled: return streamErrorf(codes.Canceled, "%v", err) } return streamErrorf(codes.Internal, "Unexpected error from context packet: %v", err) } golang-google-grpc-1.6.0/transport/handler_server.go000066400000000000000000000262561315416461300225750ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // This file is the implementation of a gRPC server using HTTP/2 which // uses the standard Go http2 Server implementation (via the // http.Handler interface), rather than speaking low-level HTTP/2 // frames itself. It is the implementation of *grpc.Server.ServeHTTP. package transport import ( "errors" "fmt" "io" "net" "net/http" "strings" "sync" "time" "github.com/golang/protobuf/proto" "golang.org/x/net/context" "golang.org/x/net/http2" "google.golang.org/grpc/codes" "google.golang.org/grpc/credentials" "google.golang.org/grpc/metadata" "google.golang.org/grpc/peer" "google.golang.org/grpc/status" ) // NewServerHandlerTransport returns a ServerTransport handling gRPC // from inside an http.Handler. It requires that the http Server // supports HTTP/2. 
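//
// One way this transport typically gets exercised (a rough sketch; the
// address and certificate paths here are illustrative, not defaults):
//
//	grpcServer := grpc.NewServer()
//	// ... register services on grpcServer ...
//	// *grpc.Server implements http.Handler via ServeHTTP, and net/http
//	// negotiates HTTP/2 automatically over TLS, which satisfies the
//	// ProtoMajor == 2 check below.
//	http.ListenAndServeTLS(":8080", "cert.pem", "key.pem", grpcServer)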
func NewServerHandlerTransport(w http.ResponseWriter, r *http.Request) (ServerTransport, error) { if r.ProtoMajor != 2 { return nil, errors.New("gRPC requires HTTP/2") } if r.Method != "POST" { return nil, errors.New("invalid gRPC request method") } if !validContentType(r.Header.Get("Content-Type")) { return nil, errors.New("invalid gRPC request content-type") } if _, ok := w.(http.Flusher); !ok { return nil, errors.New("gRPC requires a ResponseWriter supporting http.Flusher") } if _, ok := w.(http.CloseNotifier); !ok { return nil, errors.New("gRPC requires a ResponseWriter supporting http.CloseNotifier") } st := &serverHandlerTransport{ rw: w, req: r, closedCh: make(chan struct{}), writes: make(chan func()), } if v := r.Header.Get("grpc-timeout"); v != "" { to, err := decodeTimeout(v) if err != nil { return nil, streamErrorf(codes.Internal, "malformed time-out: %v", err) } st.timeoutSet = true st.timeout = to } var metakv []string if r.Host != "" { metakv = append(metakv, ":authority", r.Host) } for k, vv := range r.Header { k = strings.ToLower(k) if isReservedHeader(k) && !isWhitelistedPseudoHeader(k) { continue } for _, v := range vv { v, err := decodeMetadataHeader(k, v) if err != nil { return nil, streamErrorf(codes.InvalidArgument, "malformed binary metadata: %v", err) } metakv = append(metakv, k, v) } } st.headerMD = metadata.Pairs(metakv...) return st, nil } // serverHandlerTransport is an implementation of ServerTransport // which replies to exactly one gRPC request (exactly one HTTP request), // using the net/http.Handler interface. This http.Handler is guaranteed // at this point to be speaking over HTTP/2, so it's able to speak valid // gRPC. type serverHandlerTransport struct { rw http.ResponseWriter req *http.Request timeoutSet bool timeout time.Duration didCommonHeaders bool headerMD metadata.MD closeOnce sync.Once closedCh chan struct{} // closed on Close // writes is a channel of code to run serialized in the // ServeHTTP (HandleStreams) goroutine. The channel is closed // when WriteStatus is called. writes chan func() mu sync.Mutex // streamDone indicates whether WriteStatus has been called and writes channel // has been closed. streamDone bool } func (ht *serverHandlerTransport) Close() error { ht.closeOnce.Do(ht.closeCloseChanOnce) return nil } func (ht *serverHandlerTransport) closeCloseChanOnce() { close(ht.closedCh) } func (ht *serverHandlerTransport) RemoteAddr() net.Addr { return strAddr(ht.req.RemoteAddr) } // strAddr is a net.Addr backed by either a TCP "ip:port" string, or // the empty string if unknown. type strAddr string func (a strAddr) Network() string { if a != "" { // Per the documentation on net/http.Request.RemoteAddr, if this is // set, it's set to the IP:port of the peer (hence, TCP): // https://golang.org/pkg/net/http/#Request // // If we want to support Unix sockets later, we can // add our own grpc-specific convention within the // grpc codebase to set RemoteAddr to a different // format, or probably better: we can attach it to the // context and use that from serverHandlerTransport.RemoteAddr. return "tcp" } return "" } func (a strAddr) String() string { return string(a) } // do runs fn in the ServeHTTP goroutine. func (ht *serverHandlerTransport) do(fn func()) error { // Avoid a panic writing to closed channel. Imperfect but maybe good enough. 
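	// The shape of the guard is, schematically (illustrative only):
	//
	//	select {
	//	case <-done: // transport already closed: fail fast
	//	default:
	//		select {
	//		case ch <- work: // normal path
	//		case <-done:     // closed while blocked on the send
	//		}
	//	}
	//
	// The outer non-blocking select avoids queueing work on a transport
	// already known to be closed; the inner select covers a Close that
	// happens while this goroutine is blocked sending on ht.writes.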
select { case <-ht.closedCh: return ErrConnClosing default: select { case ht.writes <- fn: return nil case <-ht.closedCh: return ErrConnClosing } } } func (ht *serverHandlerTransport) WriteStatus(s *Stream, st *status.Status) error { ht.mu.Lock() if ht.streamDone { ht.mu.Unlock() return nil } ht.mu.Unlock() err := ht.do(func() { ht.writeCommonHeaders(s) // And flush, in case no header or body has been sent yet. // This forces a separation of headers and trailers if this is the // first call (for example, in end2end tests's TestNoService). ht.rw.(http.Flusher).Flush() h := ht.rw.Header() h.Set("Grpc-Status", fmt.Sprintf("%d", st.Code())) if m := st.Message(); m != "" { h.Set("Grpc-Message", encodeGrpcMessage(m)) } if p := st.Proto(); p != nil && len(p.Details) > 0 { stBytes, err := proto.Marshal(p) if err != nil { // TODO: return error instead, when callers are able to handle it. panic(err) } h.Set("Grpc-Status-Details-Bin", encodeBinHeader(stBytes)) } if md := s.Trailer(); len(md) > 0 { for k, vv := range md { // Clients don't tolerate reading restricted headers after some non restricted ones were sent. if isReservedHeader(k) { continue } for _, v := range vv { // http2 ResponseWriter mechanism to send undeclared Trailers after // the headers have possibly been written. h.Add(http2.TrailerPrefix+k, encodeMetadataHeader(k, v)) } } } }) close(ht.writes) ht.mu.Lock() ht.streamDone = true ht.mu.Unlock() return err } // writeCommonHeaders sets common headers on the first write // call (Write, WriteHeader, or WriteStatus). func (ht *serverHandlerTransport) writeCommonHeaders(s *Stream) { if ht.didCommonHeaders { return } ht.didCommonHeaders = true h := ht.rw.Header() h["Date"] = nil // suppress Date to make tests happy; TODO: restore h.Set("Content-Type", "application/grpc") // Predeclare trailers we'll set later in WriteStatus (after the body). // This is a SHOULD in the HTTP RFC, and the way you add (known) // Trailers per the net/http.ResponseWriter contract. // See https://golang.org/pkg/net/http/#ResponseWriter // and https://golang.org/pkg/net/http/#example_ResponseWriter_trailers h.Add("Trailer", "Grpc-Status") h.Add("Trailer", "Grpc-Message") h.Add("Trailer", "Grpc-Status-Details-Bin") if s.sendCompress != "" { h.Set("Grpc-Encoding", s.sendCompress) } } func (ht *serverHandlerTransport) Write(s *Stream, hdr []byte, data []byte, opts *Options) error { return ht.do(func() { ht.writeCommonHeaders(s) ht.rw.Write(hdr) ht.rw.Write(data) if !opts.Delay { ht.rw.(http.Flusher).Flush() } }) } func (ht *serverHandlerTransport) WriteHeader(s *Stream, md metadata.MD) error { return ht.do(func() { ht.writeCommonHeaders(s) h := ht.rw.Header() for k, vv := range md { // Clients don't tolerate reading restricted headers after some non restricted ones were sent. if isReservedHeader(k) { continue } for _, v := range vv { v = encodeMetadataHeader(k, v) h.Add(k, v) } } ht.rw.WriteHeader(200) ht.rw.(http.Flusher).Flush() }) } func (ht *serverHandlerTransport) HandleStreams(startStream func(*Stream), traceCtx func(context.Context, string) context.Context) { // With this transport type there will be exactly 1 stream: this HTTP request. var ctx context.Context var cancel context.CancelFunc if ht.timeoutSet { ctx, cancel = context.WithTimeout(context.Background(), ht.timeout) } else { ctx, cancel = context.WithCancel(context.Background()) } // requestOver is closed when either the request's context is done // or the status has been written via WriteStatus. 
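	// The rest of this function coordinates four pieces (summarized here;
	// see the code below): (1) a watcher goroutine cancels ctx if the
	// client goes away or the transport is closed before the RPC is done,
	// (2) a reader goroutine copies req.Body into the stream's receive
	// buffer, (3) startStream hands the stream to the gRPC server while
	// runStream executes the write closures queued through do() until
	// WriteStatus closes the writes channel, and (4) requestOver is then
	// closed so the watcher exits, the request body is closed, and
	// HandleStreams waits on readerDone before returning.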
requestOver := make(chan struct{}) // clientGone receives a single value if peer is gone, either // because the underlying connection is dead or because the // peer sends an http2 RST_STREAM. clientGone := ht.rw.(http.CloseNotifier).CloseNotify() go func() { select { case <-requestOver: return case <-ht.closedCh: case <-clientGone: } cancel() }() req := ht.req s := &Stream{ id: 0, // irrelevant requestRead: func(int) {}, cancel: cancel, buf: newRecvBuffer(), st: ht, method: req.URL.Path, recvCompress: req.Header.Get("grpc-encoding"), } pr := &peer.Peer{ Addr: ht.RemoteAddr(), } if req.TLS != nil { pr.AuthInfo = credentials.TLSInfo{State: *req.TLS} } ctx = metadata.NewIncomingContext(ctx, ht.headerMD) ctx = peer.NewContext(ctx, pr) s.ctx = newContextWithStream(ctx, s) s.trReader = &transportReader{ reader: &recvBufferReader{ctx: s.ctx, recv: s.buf}, windowHandler: func(int) {}, } // readerDone is closed when the Body.Read-ing goroutine exits. readerDone := make(chan struct{}) go func() { defer close(readerDone) // TODO: minimize garbage, optimize recvBuffer code/ownership const readSize = 8196 for buf := make([]byte, readSize); ; { n, err := req.Body.Read(buf) if n > 0 { s.buf.put(recvMsg{data: buf[:n:n]}) buf = buf[n:] } if err != nil { s.buf.put(recvMsg{err: mapRecvMsgError(err)}) return } if len(buf) == 0 { buf = make([]byte, readSize) } } }() // startStream is provided by the *grpc.Server's serveStreams. // It starts a goroutine serving s and exits immediately. // The goroutine that is started is the one that then calls // into ht, calling WriteHeader, Write, WriteStatus, Close, etc. startStream(s) ht.runStream() close(requestOver) // Wait for reading goroutine to finish. req.Body.Close() <-readerDone } func (ht *serverHandlerTransport) runStream() { for { select { case fn, ok := <-ht.writes: if !ok { return } fn() case <-ht.closedCh: return } } } func (ht *serverHandlerTransport) Drain() { panic("Drain() is not implemented") } // mapRecvMsgError returns the non-nil err into the appropriate // error value as expected by callers of *grpc.parser.recvMsg. // In particular, in can only be: // * io.EOF // * io.ErrUnexpectedEOF // * of type transport.ConnectionError // * of type transport.StreamError func mapRecvMsgError(err error) error { if err == io.EOF || err == io.ErrUnexpectedEOF { return err } if se, ok := err.(http2.StreamError); ok { if code, ok := http2ErrConvTab[se.Code]; ok { return StreamError{ Code: code, Desc: se.Error(), } } } return connectionErrorf(true, err, err.Error()) } golang-google-grpc-1.6.0/transport/handler_server_test.go000066400000000000000000000270221315416461300236240ustar00rootroot00000000000000/* * * Copyright 2016 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package transport import ( "errors" "fmt" "io" "net/http" "net/http/httptest" "net/url" "reflect" "testing" "time" "github.com/golang/protobuf/proto" dpb "github.com/golang/protobuf/ptypes/duration" "golang.org/x/net/context" epb "google.golang.org/genproto/googleapis/rpc/errdetails" "google.golang.org/grpc/codes" "google.golang.org/grpc/metadata" "google.golang.org/grpc/status" ) func TestHandlerTransport_NewServerHandlerTransport(t *testing.T) { type testCase struct { name string req *http.Request wantErr string modrw func(http.ResponseWriter) http.ResponseWriter check func(*serverHandlerTransport, *testCase) error } tests := []testCase{ { name: "http/1.1", req: &http.Request{ ProtoMajor: 1, ProtoMinor: 1, }, wantErr: "gRPC requires HTTP/2", }, { name: "bad method", req: &http.Request{ ProtoMajor: 2, Method: "GET", Header: http.Header{}, RequestURI: "/", }, wantErr: "invalid gRPC request method", }, { name: "bad content type", req: &http.Request{ ProtoMajor: 2, Method: "POST", Header: http.Header{ "Content-Type": {"application/foo"}, }, RequestURI: "/service/foo.bar", }, wantErr: "invalid gRPC request content-type", }, { name: "not flusher", req: &http.Request{ ProtoMajor: 2, Method: "POST", Header: http.Header{ "Content-Type": {"application/grpc"}, }, RequestURI: "/service/foo.bar", }, modrw: func(w http.ResponseWriter) http.ResponseWriter { // Return w without its Flush method type onlyCloseNotifier interface { http.ResponseWriter http.CloseNotifier } return struct{ onlyCloseNotifier }{w.(onlyCloseNotifier)} }, wantErr: "gRPC requires a ResponseWriter supporting http.Flusher", }, { name: "not closenotifier", req: &http.Request{ ProtoMajor: 2, Method: "POST", Header: http.Header{ "Content-Type": {"application/grpc"}, }, RequestURI: "/service/foo.bar", }, modrw: func(w http.ResponseWriter) http.ResponseWriter { // Return w without its CloseNotify method type onlyFlusher interface { http.ResponseWriter http.Flusher } return struct{ onlyFlusher }{w.(onlyFlusher)} }, wantErr: "gRPC requires a ResponseWriter supporting http.CloseNotifier", }, { name: "valid", req: &http.Request{ ProtoMajor: 2, Method: "POST", Header: http.Header{ "Content-Type": {"application/grpc"}, }, URL: &url.URL{ Path: "/service/foo.bar", }, RequestURI: "/service/foo.bar", }, check: func(t *serverHandlerTransport, tt *testCase) error { if t.req != tt.req { return fmt.Errorf("t.req = %p; want %p", t.req, tt.req) } if t.rw == nil { return errors.New("t.rw = nil; want non-nil") } return nil }, }, { name: "with timeout", req: &http.Request{ ProtoMajor: 2, Method: "POST", Header: http.Header{ "Content-Type": []string{"application/grpc"}, "Grpc-Timeout": {"200m"}, }, URL: &url.URL{ Path: "/service/foo.bar", }, RequestURI: "/service/foo.bar", }, check: func(t *serverHandlerTransport, tt *testCase) error { if !t.timeoutSet { return errors.New("timeout not set") } if want := 200 * time.Millisecond; t.timeout != want { return fmt.Errorf("timeout = %v; want %v", t.timeout, want) } return nil }, }, { name: "with bad timeout", req: &http.Request{ ProtoMajor: 2, Method: "POST", Header: http.Header{ "Content-Type": []string{"application/grpc"}, "Grpc-Timeout": {"tomorrow"}, }, URL: &url.URL{ Path: "/service/foo.bar", }, RequestURI: "/service/foo.bar", }, wantErr: `stream error: code = Internal desc = "malformed time-out: transport: timeout unit is not recognized: \"tomorrow\""`, }, { name: "with metadata", req: &http.Request{ ProtoMajor: 2, Method: "POST", Header: http.Header{ "Content-Type": []string{"application/grpc"}, 
"meta-foo": {"foo-val"}, "meta-bar": {"bar-val1", "bar-val2"}, "user-agent": {"x/y a/b"}, }, URL: &url.URL{ Path: "/service/foo.bar", }, RequestURI: "/service/foo.bar", }, check: func(ht *serverHandlerTransport, tt *testCase) error { want := metadata.MD{ "meta-bar": {"bar-val1", "bar-val2"}, "user-agent": {"x/y a/b"}, "meta-foo": {"foo-val"}, } if !reflect.DeepEqual(ht.headerMD, want) { return fmt.Errorf("metdata = %#v; want %#v", ht.headerMD, want) } return nil }, }, } for _, tt := range tests { rw := newTestHandlerResponseWriter() if tt.modrw != nil { rw = tt.modrw(rw) } got, gotErr := NewServerHandlerTransport(rw, tt.req) if (gotErr != nil) != (tt.wantErr != "") || (gotErr != nil && gotErr.Error() != tt.wantErr) { t.Errorf("%s: error = %v; want %q", tt.name, gotErr, tt.wantErr) continue } if gotErr != nil { continue } if tt.check != nil { if err := tt.check(got.(*serverHandlerTransport), &tt); err != nil { t.Errorf("%s: %v", tt.name, err) } } } } type testHandlerResponseWriter struct { *httptest.ResponseRecorder closeNotify chan bool } func (w testHandlerResponseWriter) CloseNotify() <-chan bool { return w.closeNotify } func (w testHandlerResponseWriter) Flush() {} func newTestHandlerResponseWriter() http.ResponseWriter { return testHandlerResponseWriter{ ResponseRecorder: httptest.NewRecorder(), closeNotify: make(chan bool, 1), } } type handleStreamTest struct { t *testing.T bodyw *io.PipeWriter req *http.Request rw testHandlerResponseWriter ht *serverHandlerTransport } func newHandleStreamTest(t *testing.T) *handleStreamTest { bodyr, bodyw := io.Pipe() req := &http.Request{ ProtoMajor: 2, Method: "POST", Header: http.Header{ "Content-Type": {"application/grpc"}, }, URL: &url.URL{ Path: "/service/foo.bar", }, RequestURI: "/service/foo.bar", Body: bodyr, } rw := newTestHandlerResponseWriter().(testHandlerResponseWriter) ht, err := NewServerHandlerTransport(rw, req) if err != nil { t.Fatal(err) } return &handleStreamTest{ t: t, bodyw: bodyw, ht: ht.(*serverHandlerTransport), rw: rw, } } func TestHandlerTransport_HandleStreams(t *testing.T) { st := newHandleStreamTest(t) handleStream := func(s *Stream) { if want := "/service/foo.bar"; s.method != want { t.Errorf("stream method = %q; want %q", s.method, want) } st.bodyw.Close() // no body st.ht.WriteStatus(s, status.New(codes.OK, "")) } st.ht.HandleStreams( func(s *Stream) { go handleStream(s) }, func(ctx context.Context, method string) context.Context { return ctx }, ) wantHeader := http.Header{ "Date": nil, "Content-Type": {"application/grpc"}, "Trailer": {"Grpc-Status", "Grpc-Message", "Grpc-Status-Details-Bin"}, "Grpc-Status": {"0"}, } if !reflect.DeepEqual(st.rw.HeaderMap, wantHeader) { t.Errorf("Header+Trailer Map: %#v; want %#v", st.rw.HeaderMap, wantHeader) } } // Tests that codes.Unimplemented will close the body, per comment in handler_server.go. func TestHandlerTransport_HandleStreams_Unimplemented(t *testing.T) { handleStreamCloseBodyTest(t, codes.Unimplemented, "thingy is unimplemented") } // Tests that codes.InvalidArgument will close the body, per comment in handler_server.go. 
func TestHandlerTransport_HandleStreams_InvalidArgument(t *testing.T) { handleStreamCloseBodyTest(t, codes.InvalidArgument, "bad arg") } func handleStreamCloseBodyTest(t *testing.T, statusCode codes.Code, msg string) { st := newHandleStreamTest(t) handleStream := func(s *Stream) { st.ht.WriteStatus(s, status.New(statusCode, msg)) } st.ht.HandleStreams( func(s *Stream) { go handleStream(s) }, func(ctx context.Context, method string) context.Context { return ctx }, ) wantHeader := http.Header{ "Date": nil, "Content-Type": {"application/grpc"}, "Trailer": {"Grpc-Status", "Grpc-Message", "Grpc-Status-Details-Bin"}, "Grpc-Status": {fmt.Sprint(uint32(statusCode))}, "Grpc-Message": {encodeGrpcMessage(msg)}, } if !reflect.DeepEqual(st.rw.HeaderMap, wantHeader) { t.Errorf("Header+Trailer mismatch.\n got: %#v\nwant: %#v", st.rw.HeaderMap, wantHeader) } } func TestHandlerTransport_HandleStreams_Timeout(t *testing.T) { bodyr, bodyw := io.Pipe() req := &http.Request{ ProtoMajor: 2, Method: "POST", Header: http.Header{ "Content-Type": {"application/grpc"}, "Grpc-Timeout": {"200m"}, }, URL: &url.URL{ Path: "/service/foo.bar", }, RequestURI: "/service/foo.bar", Body: bodyr, } rw := newTestHandlerResponseWriter().(testHandlerResponseWriter) ht, err := NewServerHandlerTransport(rw, req) if err != nil { t.Fatal(err) } runStream := func(s *Stream) { defer bodyw.Close() select { case <-s.ctx.Done(): case <-time.After(5 * time.Second): t.Errorf("timeout waiting for ctx.Done") return } err := s.ctx.Err() if err != context.DeadlineExceeded { t.Errorf("ctx.Err = %v; want %v", err, context.DeadlineExceeded) return } ht.WriteStatus(s, status.New(codes.DeadlineExceeded, "too slow")) } ht.HandleStreams( func(s *Stream) { go runStream(s) }, func(ctx context.Context, method string) context.Context { return ctx }, ) wantHeader := http.Header{ "Date": nil, "Content-Type": {"application/grpc"}, "Trailer": {"Grpc-Status", "Grpc-Message", "Grpc-Status-Details-Bin"}, "Grpc-Status": {"4"}, "Grpc-Message": {encodeGrpcMessage("too slow")}, } if !reflect.DeepEqual(rw.HeaderMap, wantHeader) { t.Errorf("Header+Trailer Map mismatch.\n got: %#v\nwant: %#v", rw.HeaderMap, wantHeader) } } func TestHandlerTransport_HandleStreams_ErrDetails(t *testing.T) { errDetails := []proto.Message{ &epb.RetryInfo{ RetryDelay: &dpb.Duration{Seconds: 60}, }, &epb.ResourceInfo{ ResourceType: "foo bar", ResourceName: "service.foo.bar", Owner: "User", }, } statusCode := codes.ResourceExhausted msg := "you are being throttled" st, err := status.New(statusCode, msg).WithDetails(errDetails...) if err != nil { t.Fatal(err) } stBytes, err := proto.Marshal(st.Proto()) if err != nil { t.Fatal(err) } hst := newHandleStreamTest(t) handleStream := func(s *Stream) { hst.ht.WriteStatus(s, st) } hst.ht.HandleStreams( func(s *Stream) { go handleStream(s) }, func(ctx context.Context, method string) context.Context { return ctx }, ) wantHeader := http.Header{ "Date": nil, "Content-Type": {"application/grpc"}, "Trailer": {"Grpc-Status", "Grpc-Message", "Grpc-Status-Details-Bin"}, "Grpc-Status": {fmt.Sprint(uint32(statusCode))}, "Grpc-Message": {encodeGrpcMessage(msg)}, "Grpc-Status-Details-Bin": {encodeBinHeader(stBytes)}, } if !reflect.DeepEqual(hst.rw.HeaderMap, wantHeader) { t.Errorf("Header+Trailer mismatch.\n got: %#v\nwant: %#v", hst.rw.HeaderMap, wantHeader) } } golang-google-grpc-1.6.0/transport/http2_client.go000066400000000000000000001157271315416461300221730ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. 
* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package transport import ( "bytes" "io" "math" "net" "strings" "sync" "sync/atomic" "time" "golang.org/x/net/context" "golang.org/x/net/http2" "golang.org/x/net/http2/hpack" "google.golang.org/grpc/codes" "google.golang.org/grpc/credentials" "google.golang.org/grpc/keepalive" "google.golang.org/grpc/metadata" "google.golang.org/grpc/peer" "google.golang.org/grpc/stats" "google.golang.org/grpc/status" ) // http2Client implements the ClientTransport interface with HTTP2. type http2Client struct { ctx context.Context target string // server name/addr userAgent string md interface{} conn net.Conn // underlying communication channel remoteAddr net.Addr localAddr net.Addr authInfo credentials.AuthInfo // auth info about the connection nextID uint32 // the next stream ID to be used // writableChan synchronizes write access to the transport. // A writer acquires the write lock by sending a value on writableChan // and releases it by receiving from writableChan. writableChan chan int // shutdownChan is closed when Close is called. // Blocking operations should select on shutdownChan to avoid // blocking forever after Close. // TODO(zhaoq): Maybe have a channel context? shutdownChan chan struct{} // errorChan is closed to notify the I/O error to the caller. errorChan chan struct{} // goAway is closed to notify the upper layer (i.e., addrConn.transportMonitor) // that the server sent GoAway on this transport. goAway chan struct{} // awakenKeepalive is used to wake up keepalive when after it has gone dormant. awakenKeepalive chan struct{} framer *framer hBuf *bytes.Buffer // the buffer for HPACK encoding hEnc *hpack.Encoder // HPACK encoder // controlBuf delivers all the control related tasks (e.g., window // updates, reset streams, and various settings) to the controller. controlBuf *controlBuffer fc *inFlow // sendQuotaPool provides flow control to outbound message. sendQuotaPool *quotaPool // streamsQuota limits the max number of concurrent streams. streamsQuota *quotaPool // The scheme used: https if TLS is on, http otherwise. scheme string isSecure bool creds []credentials.PerRPCCredentials // Boolean to keep track of reading activity on transport. // 1 is true and 0 is false. activity uint32 // Accessed atomically. kp keepalive.ClientParameters statsHandler stats.Handler initialWindowSize int32 bdpEst *bdpEstimator outQuotaVersion uint32 mu sync.Mutex // guard the following variables state transportState // the state of underlying connection activeStreams map[uint32]*Stream // The max number of concurrent streams maxStreams int // the per-stream outbound flow control window size set by the peer. streamSendQuota uint32 // prevGoAway ID records the Last-Stream-ID in the previous GOAway frame. prevGoAwayID uint32 // goAwayReason records the http2.ErrCode and debug data received with the // GoAway frame. 
goAwayReason GoAwayReason } func dial(ctx context.Context, fn func(context.Context, string) (net.Conn, error), addr string) (net.Conn, error) { if fn != nil { return fn(ctx, addr) } return dialContext(ctx, "tcp", addr) } func isTemporary(err error) bool { switch err { case io.EOF: // Connection closures may be resolved upon retry, and are thus // treated as temporary. return true case context.DeadlineExceeded: // In Go 1.7, context.DeadlineExceeded implements Timeout(), and this // special case is not needed. Until then, we need to keep this // clause. return true } switch err := err.(type) { case interface { Temporary() bool }: return err.Temporary() case interface { Timeout() bool }: // Timeouts may be resolved upon retry, and are thus treated as // temporary. return err.Timeout() } return false } // newHTTP2Client constructs a connected ClientTransport to addr based on HTTP2 // and starts to receive messages on it. Non-nil error returns if construction // fails. func newHTTP2Client(ctx context.Context, addr TargetInfo, opts ConnectOptions) (_ ClientTransport, err error) { scheme := "http" conn, err := dial(ctx, opts.Dialer, addr.Addr) if err != nil { if opts.FailOnNonTempDialError { return nil, connectionErrorf(isTemporary(err), err, "transport: error while dialing: %v", err) } return nil, connectionErrorf(true, err, "transport: Error while dialing %v", err) } // Any further errors will close the underlying connection defer func(conn net.Conn) { if err != nil { conn.Close() } }(conn) var ( isSecure bool authInfo credentials.AuthInfo ) if creds := opts.TransportCredentials; creds != nil { scheme = "https" conn, authInfo, err = creds.ClientHandshake(ctx, addr.Addr, conn) if err != nil { // Credentials handshake errors are typically considered permanent // to avoid retrying on e.g. bad certificates. temp := isTemporary(err) return nil, connectionErrorf(temp, err, "transport: authentication handshake failed: %v", err) } isSecure = true } kp := opts.KeepaliveParams // Validate keepalive parameters. if kp.Time == 0 { kp.Time = defaultClientKeepaliveTime } if kp.Timeout == 0 { kp.Timeout = defaultClientKeepaliveTimeout } dynamicWindow := true icwz := int32(initialWindowSize) if opts.InitialConnWindowSize >= defaultWindowSize { icwz = opts.InitialConnWindowSize dynamicWindow = false } var buf bytes.Buffer t := &http2Client{ ctx: ctx, target: addr.Addr, userAgent: opts.UserAgent, md: addr.Metadata, conn: conn, remoteAddr: conn.RemoteAddr(), localAddr: conn.LocalAddr(), authInfo: authInfo, // The client initiated stream id is odd starting from 1. 
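		// Per RFC 7540 section 5.1.1 client-initiated streams carry odd
		// identifiers (server-initiated ones would be even), and newStream
		// below advances nextID by 2, so streams on this transport are
		// numbered 1, 3, 5, ...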
nextID: 1, writableChan: make(chan int, 1), shutdownChan: make(chan struct{}), errorChan: make(chan struct{}), goAway: make(chan struct{}), awakenKeepalive: make(chan struct{}, 1), framer: newFramer(conn), hBuf: &buf, hEnc: hpack.NewEncoder(&buf), controlBuf: newControlBuffer(), fc: &inFlow{limit: uint32(icwz)}, sendQuotaPool: newQuotaPool(defaultWindowSize), scheme: scheme, state: reachable, activeStreams: make(map[uint32]*Stream), isSecure: isSecure, creds: opts.PerRPCCredentials, maxStreams: defaultMaxStreamsClient, streamsQuota: newQuotaPool(defaultMaxStreamsClient), streamSendQuota: defaultWindowSize, kp: kp, statsHandler: opts.StatsHandler, initialWindowSize: initialWindowSize, } if opts.InitialWindowSize >= defaultWindowSize { t.initialWindowSize = opts.InitialWindowSize dynamicWindow = false } if dynamicWindow { t.bdpEst = &bdpEstimator{ bdp: initialWindowSize, updateFlowControl: t.updateFlowControl, } } // Make sure awakenKeepalive can't be written upon. // keepalive routine will make it writable, if need be. t.awakenKeepalive <- struct{}{} if t.statsHandler != nil { t.ctx = t.statsHandler.TagConn(t.ctx, &stats.ConnTagInfo{ RemoteAddr: t.remoteAddr, LocalAddr: t.localAddr, }) connBegin := &stats.ConnBegin{ Client: true, } t.statsHandler.HandleConn(t.ctx, connBegin) } // Start the reader goroutine for incoming message. Each transport has // a dedicated goroutine which reads HTTP2 frame from network. Then it // dispatches the frame to the corresponding stream entity. go t.reader() // Send connection preface to server. n, err := t.conn.Write(clientPreface) if err != nil { t.Close() return nil, connectionErrorf(true, err, "transport: failed to write client preface: %v", err) } if n != len(clientPreface) { t.Close() return nil, connectionErrorf(true, err, "transport: preface mismatch, wrote %d bytes; want %d", n, len(clientPreface)) } if t.initialWindowSize != defaultWindowSize { err = t.framer.writeSettings(true, http2.Setting{ ID: http2.SettingInitialWindowSize, Val: uint32(t.initialWindowSize), }) } else { err = t.framer.writeSettings(true) } if err != nil { t.Close() return nil, connectionErrorf(true, err, "transport: failed to write initial settings frame: %v", err) } // Adjust the connection flow control window if needed. if delta := uint32(icwz - defaultWindowSize); delta > 0 { if err := t.framer.writeWindowUpdate(true, 0, delta); err != nil { t.Close() return nil, connectionErrorf(true, err, "transport: failed to write window update: %v", err) } } go t.controller() if t.kp.Time != infinity { go t.keepalive() } t.writableChan <- 0 return t, nil } func (t *http2Client) newStream(ctx context.Context, callHdr *CallHdr) *Stream { // TODO(zhaoq): Handle uint32 overflow of Stream.id. s := &Stream{ id: t.nextID, done: make(chan struct{}), goAway: make(chan struct{}), method: callHdr.Method, sendCompress: callHdr.SendCompress, buf: newRecvBuffer(), fc: &inFlow{limit: uint32(t.initialWindowSize)}, sendQuotaPool: newQuotaPool(int(t.streamSendQuota)), headerChan: make(chan struct{}), } t.nextID += 2 s.requestRead = func(n int) { t.adjustWindow(s, uint32(n)) } // The client side stream context should have exactly the same life cycle with the user provided context. // That means, s.ctx should be read-only. And s.ctx is done iff ctx is done. // So we use the original context here instead of creating a copy. 
s.ctx = ctx s.trReader = &transportReader{ reader: &recvBufferReader{ ctx: s.ctx, goAway: s.goAway, recv: s.buf, }, windowHandler: func(n int) { t.updateWindow(s, uint32(n)) }, } return s } // NewStream creates a stream and registers it into the transport as "active" // streams. func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (_ *Stream, err error) { pr := &peer.Peer{ Addr: t.remoteAddr, } // Attach Auth info if there is any. if t.authInfo != nil { pr.AuthInfo = t.authInfo } ctx = peer.NewContext(ctx, pr) var ( authData = make(map[string]string) audience string ) // Create an audience string only if needed. if len(t.creds) > 0 || callHdr.Creds != nil { // Construct URI required to get auth request metadata. // Omit port if it is the default one. host := strings.TrimSuffix(callHdr.Host, ":443") pos := strings.LastIndex(callHdr.Method, "/") if pos == -1 { pos = len(callHdr.Method) } audience = "https://" + host + callHdr.Method[:pos] } for _, c := range t.creds { data, err := c.GetRequestMetadata(ctx, audience) if err != nil { return nil, streamErrorf(codes.Internal, "transport: %v", err) } for k, v := range data { // Capital header names are illegal in HTTP/2. k = strings.ToLower(k) authData[k] = v } } callAuthData := make(map[string]string) // Check if credentials.PerRPCCredentials were provided via call options. // Note: if these credentials are provided both via dial options and call // options, then both sets of credentials will be applied. if callCreds := callHdr.Creds; callCreds != nil { if !t.isSecure && callCreds.RequireTransportSecurity() { return nil, streamErrorf(codes.Unauthenticated, "transport: cannot send secure credentials on an insecure conneciton") } data, err := callCreds.GetRequestMetadata(ctx, audience) if err != nil { return nil, streamErrorf(codes.Internal, "transport: %v", err) } for k, v := range data { // Capital header names are illegal in HTTP/2 k = strings.ToLower(k) callAuthData[k] = v } } t.mu.Lock() if t.activeStreams == nil { t.mu.Unlock() return nil, ErrConnClosing } if t.state == draining { t.mu.Unlock() return nil, ErrStreamDrain } if t.state != reachable { t.mu.Unlock() return nil, ErrConnClosing } t.mu.Unlock() sq, err := wait(ctx, nil, nil, t.shutdownChan, t.streamsQuota.acquire()) if err != nil { return nil, err } // Returns the quota balance back. if sq > 1 { t.streamsQuota.add(sq - 1) } if _, err := wait(ctx, nil, nil, t.shutdownChan, t.writableChan); err != nil { // Return the quota back now because there is no stream returned to the caller. if _, ok := err.(StreamError); ok { t.streamsQuota.add(1) } return nil, err } t.mu.Lock() if t.state == draining { t.mu.Unlock() t.streamsQuota.add(1) // Need to make t writable again so that the rpc in flight can still proceed. t.writableChan <- 0 return nil, ErrStreamDrain } if t.state != reachable { t.mu.Unlock() return nil, ErrConnClosing } s := t.newStream(ctx, callHdr) t.activeStreams[s.id] = s // If the number of active streams change from 0 to 1, then check if keepalive // has gone dormant. If so, wake it up. if len(t.activeStreams) == 1 { select { case t.awakenKeepalive <- struct{}{}: t.framer.writePing(false, false, [8]byte{}) // Fill the awakenKeepalive channel again as this channel must be // kept non-writable except at the point that the keepalive() // goroutine is waiting either to be awaken or shutdown. t.awakenKeepalive <- struct{}{} default: } } t.mu.Unlock() // HPACK encodes various headers. Note that once WriteField(...) 
is // called, the corresponding headers/continuation frame has to be sent // because hpack.Encoder is stateful. t.hBuf.Reset() t.hEnc.WriteField(hpack.HeaderField{Name: ":method", Value: "POST"}) t.hEnc.WriteField(hpack.HeaderField{Name: ":scheme", Value: t.scheme}) t.hEnc.WriteField(hpack.HeaderField{Name: ":path", Value: callHdr.Method}) t.hEnc.WriteField(hpack.HeaderField{Name: ":authority", Value: callHdr.Host}) t.hEnc.WriteField(hpack.HeaderField{Name: "content-type", Value: "application/grpc"}) t.hEnc.WriteField(hpack.HeaderField{Name: "user-agent", Value: t.userAgent}) t.hEnc.WriteField(hpack.HeaderField{Name: "te", Value: "trailers"}) if callHdr.SendCompress != "" { t.hEnc.WriteField(hpack.HeaderField{Name: "grpc-encoding", Value: callHdr.SendCompress}) } if dl, ok := ctx.Deadline(); ok { // Send out timeout regardless its value. The server can detect timeout context by itself. timeout := dl.Sub(time.Now()) t.hEnc.WriteField(hpack.HeaderField{Name: "grpc-timeout", Value: encodeTimeout(timeout)}) } for k, v := range authData { t.hEnc.WriteField(hpack.HeaderField{Name: k, Value: encodeMetadataHeader(k, v)}) } for k, v := range callAuthData { t.hEnc.WriteField(hpack.HeaderField{Name: k, Value: encodeMetadataHeader(k, v)}) } var ( endHeaders bool ) if b := stats.OutgoingTags(ctx); b != nil { t.hEnc.WriteField(hpack.HeaderField{Name: "grpc-tags-bin", Value: encodeBinHeader(b)}) } if b := stats.OutgoingTrace(ctx); b != nil { t.hEnc.WriteField(hpack.HeaderField{Name: "grpc-trace-bin", Value: encodeBinHeader(b)}) } if md, ok := metadata.FromOutgoingContext(ctx); ok { for k, vv := range md { // HTTP doesn't allow you to set pseudoheaders after non pseudoheaders were set. if isReservedHeader(k) { continue } for _, v := range vv { t.hEnc.WriteField(hpack.HeaderField{Name: k, Value: encodeMetadataHeader(k, v)}) } } } if md, ok := t.md.(*metadata.MD); ok { for k, vv := range *md { if isReservedHeader(k) { continue } for _, v := range vv { t.hEnc.WriteField(hpack.HeaderField{Name: k, Value: encodeMetadataHeader(k, v)}) } } } first := true bufLen := t.hBuf.Len() // Sends the headers in a single batch even when they span multiple frames. for !endHeaders { size := t.hBuf.Len() if size > http2MaxFrameLen { size = http2MaxFrameLen } else { endHeaders = true } var flush bool if callHdr.Flush && endHeaders { flush = true } if first { // Sends a HeadersFrame to server to start a new stream. p := http2.HeadersFrameParam{ StreamID: s.id, BlockFragment: t.hBuf.Next(size), EndStream: false, EndHeaders: endHeaders, } // Do a force flush for the buffered frames iff it is the last headers frame // and there is header metadata to be sent. Otherwise, there is flushing until // the corresponding data frame is written. err = t.framer.writeHeaders(flush, p) first = false } else { // Sends Continuation frames for the leftover headers. err = t.framer.writeContinuation(flush, s.id, endHeaders, t.hBuf.Next(size)) } if err != nil { t.notifyError(err) return nil, connectionErrorf(true, err, "transport: %v", err) } } s.mu.Lock() s.bytesSent = true s.mu.Unlock() if t.statsHandler != nil { outHeader := &stats.OutHeader{ Client: true, WireLength: bufLen, FullMethod: callHdr.Method, RemoteAddr: t.remoteAddr, LocalAddr: t.localAddr, Compression: callHdr.SendCompress, } t.statsHandler.HandleRPC(s.ctx, outHeader) } t.writableChan <- 0 return s, nil } // CloseStream clears the footprint of a stream when the stream is not needed any more. // This must not be executed in reader's goroutine. 
func (t *http2Client) CloseStream(s *Stream, err error) { t.mu.Lock() if t.activeStreams == nil { t.mu.Unlock() return } if err != nil { // notify in-flight streams, before the deletion s.write(recvMsg{err: err}) } delete(t.activeStreams, s.id) if t.state == draining && len(t.activeStreams) == 0 { // The transport is draining and s is the last live stream on t. t.mu.Unlock() t.Close() return } t.mu.Unlock() // rstStream is true in case the stream is being closed at the client-side // and the server needs to be intimated about it by sending a RST_STREAM // frame. // To make sure this frame is written to the wire before the headers of the // next stream waiting for streamsQuota, we add to streamsQuota pool only // after having acquired the writableChan to send RST_STREAM out (look at // the controller() routine). var rstStream bool var rstError http2.ErrCode defer func() { // In case, the client doesn't have to send RST_STREAM to server // we can safely add back to streamsQuota pool now. if !rstStream { t.streamsQuota.add(1) return } t.controlBuf.put(&resetStream{s.id, rstError}) }() s.mu.Lock() rstStream = s.rstStream rstError = s.rstError if s.state == streamDone { s.mu.Unlock() return } if !s.headerDone { close(s.headerChan) s.headerDone = true } s.state = streamDone s.mu.Unlock() if _, ok := err.(StreamError); ok { rstStream = true rstError = http2.ErrCodeCancel } } // Close kicks off the shutdown process of the transport. This should be called // only once on a transport. Once it is called, the transport should not be // accessed any more. func (t *http2Client) Close() (err error) { t.mu.Lock() if t.state == closing { t.mu.Unlock() return } if t.state == reachable || t.state == draining { close(t.errorChan) } t.state = closing t.mu.Unlock() close(t.shutdownChan) err = t.conn.Close() t.mu.Lock() streams := t.activeStreams t.activeStreams = nil t.mu.Unlock() // Notify all active streams. for _, s := range streams { s.mu.Lock() if !s.headerDone { close(s.headerChan) s.headerDone = true } s.mu.Unlock() s.write(recvMsg{err: ErrConnClosing}) } if t.statsHandler != nil { connEnd := &stats.ConnEnd{ Client: true, } t.statsHandler.HandleConn(t.ctx, connEnd) } return } func (t *http2Client) GracefulClose() error { t.mu.Lock() switch t.state { case unreachable: // The server may close the connection concurrently. t is not available for // any streams. Close it now. t.mu.Unlock() t.Close() return nil case closing: t.mu.Unlock() return nil } if t.state == draining { t.mu.Unlock() return nil } t.state = draining active := len(t.activeStreams) t.mu.Unlock() if active == 0 { return t.Close() } return nil } // Write formats the data into HTTP2 data frame(s) and sends it out. The caller // should proceed only if Write returns nil. // TODO(zhaoq): opts.Delay is ignored in this implementation. Support it later // if it improves the performance. func (t *http2Client) Write(s *Stream, hdr []byte, data []byte, opts *Options) error { secondStart := http2MaxFrameLen - len(hdr)%http2MaxFrameLen if len(data) < secondStart { secondStart = len(data) } hdr = append(hdr, data[:secondStart]...) data = data[secondStart:] isLastSlice := (len(data) == 0) r := bytes.NewBuffer(hdr) var ( p []byte oqv uint32 ) for { oqv = atomic.LoadUint32(&t.outQuotaVersion) if r.Len() > 0 || p != nil { size := http2MaxFrameLen // Wait until the stream has some quota to send the data. 
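			// Both the per-stream and the per-transport quota pools below
			// must be tapped before any bytes go out, and whatever is
			// over-acquired is returned. Rough worked example (illustrative
			// numbers only): with http2MaxFrameLen at 16KB, a stream quota
			// grant of 8192 and a transport grant of 20480, size becomes
			// 8192; if only 4096 bytes remain to send, ps = 4096, so 4096
			// is handed back to the stream pool and 16384 back to the
			// transport pool.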
sq, err := wait(s.ctx, s.done, s.goAway, t.shutdownChan, s.sendQuotaPool.acquire()) if err != nil { return err } // Wait until the transport has some quota to send the data. tq, err := wait(s.ctx, s.done, s.goAway, t.shutdownChan, t.sendQuotaPool.acquire()) if err != nil { return err } if sq < size { size = sq } if tq < size { size = tq } if p == nil { p = r.Next(size) } ps := len(p) if ps < sq { // Overbooked stream quota. Return it back. s.sendQuotaPool.add(sq - ps) } if ps < tq { // Overbooked transport quota. Return it back. t.sendQuotaPool.add(tq - ps) } } var ( endStream bool forceFlush bool ) // Indicate there is a writer who is about to write a data frame. t.framer.adjustNumWriters(1) // Got some quota. Try to acquire writing privilege on the transport. if _, err := wait(s.ctx, s.done, s.goAway, t.shutdownChan, t.writableChan); err != nil { if _, ok := err.(StreamError); ok || err == io.EOF { // Return the connection quota back. t.sendQuotaPool.add(len(p)) } if t.framer.adjustNumWriters(-1) == 0 { // This writer is the last one in this batch and has the // responsibility to flush the buffered frames. It queues // a flush request to controlBuf instead of flushing directly // in order to avoid the race with other writing or flushing. t.controlBuf.put(&flushIO{}) } return err } select { case <-s.ctx.Done(): t.sendQuotaPool.add(len(p)) if t.framer.adjustNumWriters(-1) == 0 { t.controlBuf.put(&flushIO{}) } t.writableChan <- 0 return ContextErr(s.ctx.Err()) default: } if oqv != atomic.LoadUint32(&t.outQuotaVersion) { // InitialWindowSize settings frame must have been received after we // acquired send quota but before we got the writable channel. // We must forsake this write. t.sendQuotaPool.add(len(p)) s.sendQuotaPool.add(len(p)) if t.framer.adjustNumWriters(-1) == 0 { t.controlBuf.put(&flushIO{}) } t.writableChan <- 0 continue } if r.Len() == 0 { if isLastSlice { if opts.Last { endStream = true } if t.framer.adjustNumWriters(0) == 1 { // Do a force flush iff this is last frame for the entire gRPC message // and the caller is the only writer at this moment. forceFlush = true } } else { isLastSlice = true if len(data) != 0 { r = bytes.NewBuffer(data) } } } // If WriteData fails, all the pending streams will be handled // by http2Client.Close(). No explicit CloseStream() needs to be // invoked. if err := t.framer.writeData(forceFlush, s.id, endStream, p); err != nil { t.notifyError(err) return connectionErrorf(true, err, "transport: %v", err) } p = nil if t.framer.adjustNumWriters(-1) == 0 { t.framer.flushWrite() } t.writableChan <- 0 if r.Len() == 0 { break } } if !opts.Last { return nil } s.mu.Lock() if s.state != streamDone { s.state = streamWriteDone } s.mu.Unlock() return nil } func (t *http2Client) getStream(f http2.Frame) (*Stream, bool) { t.mu.Lock() defer t.mu.Unlock() s, ok := t.activeStreams[f.Header().StreamID] return s, ok } // adjustWindow sends out extra window update over the initial window size // of stream if the application is requesting data larger in size than // the window. func (t *http2Client) adjustWindow(s *Stream, n uint32) { s.mu.Lock() defer s.mu.Unlock() if s.state == streamDone { return } if w := s.fc.maybeAdjust(n); w > 0 { // Piggyback conneciton's window update along. if cw := t.fc.resetPendingUpdate(); cw > 0 { t.controlBuf.put(&windowUpdate{0, cw, false}) } t.controlBuf.put(&windowUpdate{s.id, w, true}) } } // updateWindow adjusts the inbound quota for the stream and the transport. 
// Window updates will deliver to the controller for sending when // the cumulative quota exceeds the corresponding threshold. func (t *http2Client) updateWindow(s *Stream, n uint32) { s.mu.Lock() defer s.mu.Unlock() if s.state == streamDone { return } if w := s.fc.onRead(n); w > 0 { if cw := t.fc.resetPendingUpdate(); cw > 0 { t.controlBuf.put(&windowUpdate{0, cw, false}) } t.controlBuf.put(&windowUpdate{s.id, w, true}) } } // updateFlowControl updates the incoming flow control windows // for the transport and the stream based on the current bdp // estimation. func (t *http2Client) updateFlowControl(n uint32) { t.mu.Lock() for _, s := range t.activeStreams { s.fc.newLimit(n) } t.initialWindowSize = int32(n) t.mu.Unlock() t.controlBuf.put(&windowUpdate{0, t.fc.newLimit(n), false}) t.controlBuf.put(&settings{ ack: false, ss: []http2.Setting{ { ID: http2.SettingInitialWindowSize, Val: uint32(n), }, }, }) } func (t *http2Client) handleData(f *http2.DataFrame) { size := f.Header().Length var sendBDPPing bool if t.bdpEst != nil { sendBDPPing = t.bdpEst.add(uint32(size)) } // Decouple connection's flow control from application's read. // An update on connection's flow control should not depend on // whether user application has read the data or not. Such a // restriction is already imposed on the stream's flow control, // and therefore the sender will be blocked anyways. // Decoupling the connection flow control will prevent other // active(fast) streams from starving in presence of slow or // inactive streams. // // Furthermore, if a bdpPing is being sent out we can piggyback // connection's window update for the bytes we just received. if sendBDPPing { t.controlBuf.put(&windowUpdate{0, uint32(size), false}) t.controlBuf.put(bdpPing) } else { if err := t.fc.onData(uint32(size)); err != nil { t.notifyError(connectionErrorf(true, err, "%v", err)) return } if w := t.fc.onRead(uint32(size)); w > 0 { t.controlBuf.put(&windowUpdate{0, w, true}) } } // Select the right stream to dispatch. s, ok := t.getStream(f) if !ok { return } if size > 0 { s.mu.Lock() if s.state == streamDone { s.mu.Unlock() return } if err := s.fc.onData(uint32(size)); err != nil { s.rstStream = true s.rstError = http2.ErrCodeFlowControl s.finish(status.New(codes.Internal, err.Error())) s.mu.Unlock() s.write(recvMsg{err: io.EOF}) return } if f.Header().Flags.Has(http2.FlagDataPadded) { if w := s.fc.onRead(uint32(size) - uint32(len(f.Data()))); w > 0 { t.controlBuf.put(&windowUpdate{s.id, w, true}) } } s.mu.Unlock() // TODO(bradfitz, zhaoq): A copy is required here because there is no // guarantee f.Data() is consumed before the arrival of next frame. // Can this copy be eliminated? if len(f.Data()) > 0 { data := make([]byte, len(f.Data())) copy(data, f.Data()) s.write(recvMsg{data: data}) } } // The server has closed the stream without sending trailers. Record that // the read direction is closed, and set the status appropriately. 
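// From the caller's point of view this case surfaces as the RPC failing with
// codes.Internal; a rough, hypothetical caller-side sketch:
//
//	err := stream.RecvMsg(&reply)
//	// err carries codes.Internal with the message
//	// "server closed the stream without sending trailers"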
if f.FrameHeader.Flags.Has(http2.FlagDataEndStream) { s.mu.Lock() if s.state == streamDone { s.mu.Unlock() return } s.finish(status.New(codes.Internal, "server closed the stream without sending trailers")) s.mu.Unlock() s.write(recvMsg{err: io.EOF}) } } func (t *http2Client) handleRSTStream(f *http2.RSTStreamFrame) { s, ok := t.getStream(f) if !ok { return } s.mu.Lock() if s.state == streamDone { s.mu.Unlock() return } if !s.headerDone { close(s.headerChan) s.headerDone = true } statusCode, ok := http2ErrConvTab[http2.ErrCode(f.ErrCode)] if !ok { warningf("transport: http2Client.handleRSTStream found no mapped gRPC status for the received http2 error %v", f.ErrCode) statusCode = codes.Unknown } s.finish(status.Newf(statusCode, "stream terminated by RST_STREAM with error code: %v", f.ErrCode)) s.mu.Unlock() s.write(recvMsg{err: io.EOF}) } func (t *http2Client) handleSettings(f *http2.SettingsFrame) { if f.IsAck() { return } var ss []http2.Setting f.ForeachSetting(func(s http2.Setting) error { ss = append(ss, s) return nil }) // The settings will be applied once the ack is sent. t.controlBuf.put(&settings{ack: true, ss: ss}) } func (t *http2Client) handlePing(f *http2.PingFrame) { if f.IsAck() { // Maybe it's a BDP ping. if t.bdpEst != nil { t.bdpEst.calculate(f.Data) } return } pingAck := &ping{ack: true} copy(pingAck.data[:], f.Data[:]) t.controlBuf.put(pingAck) } func (t *http2Client) handleGoAway(f *http2.GoAwayFrame) { t.mu.Lock() if t.state != reachable && t.state != draining { t.mu.Unlock() return } if f.ErrCode == http2.ErrCodeEnhanceYourCalm { infof("Client received GoAway with http2.ErrCodeEnhanceYourCalm.") } id := f.LastStreamID if id > 0 && id%2 != 1 { t.mu.Unlock() t.notifyError(connectionErrorf(true, nil, "received illegal http2 GOAWAY frame: stream ID %d is even", f.LastStreamID)) return } // A client can recieve multiple GoAways from server (look at https://github.com/grpc/grpc-go/issues/1387). // The idea is that the first GoAway will be sent with an ID of MaxInt32 and the second GoAway will be sent after an RTT delay // with the ID of the last stream the server will process. // Therefore, when we get the first GoAway we don't really close any streams. While in case of second GoAway we // close all streams created after the second GoAwayId. This way streams that were in-flight while the GoAway from server // was being sent don't get killed. select { case <-t.goAway: // t.goAway has been closed (i.e.,multiple GoAways). // If there are multiple GoAways the first one should always have an ID greater than the following ones. if id > t.prevGoAwayID { t.mu.Unlock() t.notifyError(connectionErrorf(true, nil, "received illegal http2 GOAWAY frame: previously recv GOAWAY frame with LastStramID %d, currently recv %d", id, f.LastStreamID)) return } default: t.setGoAwayReason(f) close(t.goAway) t.state = draining } // All streams with IDs greater than the GoAwayId // and smaller than the previous GoAway ID should be killed. upperLimit := t.prevGoAwayID if upperLimit == 0 { // This is the first GoAway Frame. upperLimit = math.MaxUint32 // Kill all streams after the GoAway ID. } for streamID, stream := range t.activeStreams { if streamID > id && streamID <= upperLimit { close(stream.goAway) } } t.prevGoAwayID = id active := len(t.activeStreams) t.mu.Unlock() if active == 0 { t.Close() } } // setGoAwayReason sets the value of t.goAwayReason based // on the GoAway frame received. // It expects a lock on transport's mutext to be held by // the caller. 
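// For example (mirroring the mapping implemented just below), a GOAWAY frame
// carrying
//
//	ErrCode:   http2.ErrCodeEnhanceYourCalm
//	DebugData: "too_many_pings"
//
// is recorded as TooManyPings, which callers can use to back off their
// keepalive pings; every other GOAWAY is recorded as NoReason.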
func (t *http2Client) setGoAwayReason(f *http2.GoAwayFrame) { t.goAwayReason = NoReason switch f.ErrCode { case http2.ErrCodeEnhanceYourCalm: if string(f.DebugData()) == "too_many_pings" { t.goAwayReason = TooManyPings } } } func (t *http2Client) GetGoAwayReason() GoAwayReason { t.mu.Lock() defer t.mu.Unlock() return t.goAwayReason } func (t *http2Client) handleWindowUpdate(f *http2.WindowUpdateFrame) { id := f.Header().StreamID incr := f.Increment if id == 0 { t.sendQuotaPool.add(int(incr)) return } if s, ok := t.getStream(f); ok { s.sendQuotaPool.add(int(incr)) } } // operateHeaders takes action on the decoded headers. func (t *http2Client) operateHeaders(frame *http2.MetaHeadersFrame) { s, ok := t.getStream(frame) if !ok { return } s.mu.Lock() s.bytesReceived = true s.mu.Unlock() var state decodeState if err := state.decodeResponseHeader(frame); err != nil { s.mu.Lock() if !s.headerDone { close(s.headerChan) s.headerDone = true } s.mu.Unlock() s.write(recvMsg{err: err}) // Something wrong. Stops reading even when there is remaining. return } endStream := frame.StreamEnded() var isHeader bool defer func() { if t.statsHandler != nil { if isHeader { inHeader := &stats.InHeader{ Client: true, WireLength: int(frame.Header().Length), } t.statsHandler.HandleRPC(s.ctx, inHeader) } else { inTrailer := &stats.InTrailer{ Client: true, WireLength: int(frame.Header().Length), } t.statsHandler.HandleRPC(s.ctx, inTrailer) } } }() s.mu.Lock() if !endStream { s.recvCompress = state.encoding } if !s.headerDone { if !endStream && len(state.mdata) > 0 { s.header = state.mdata } close(s.headerChan) s.headerDone = true isHeader = true } if !endStream || s.state == streamDone { s.mu.Unlock() return } if len(state.mdata) > 0 { s.trailer = state.mdata } s.finish(state.status()) s.mu.Unlock() s.write(recvMsg{err: io.EOF}) } func handleMalformedHTTP2(s *Stream, err error) { s.mu.Lock() if !s.headerDone { close(s.headerChan) s.headerDone = true } s.mu.Unlock() s.write(recvMsg{err: err}) } // reader runs as a separate goroutine in charge of reading data from network // connection. // // TODO(zhaoq): currently one reader per transport. Investigate whether this is // optimal. // TODO(zhaoq): Check the validity of the incoming frame sequence. func (t *http2Client) reader() { // Check the validity of server preface. frame, err := t.framer.readFrame() if err != nil { t.notifyError(err) return } atomic.CompareAndSwapUint32(&t.activity, 0, 1) sf, ok := frame.(*http2.SettingsFrame) if !ok { t.notifyError(err) return } t.handleSettings(sf) // loop to keep reading incoming messages on this transport. for { frame, err := t.framer.readFrame() atomic.CompareAndSwapUint32(&t.activity, 0, 1) if err != nil { // Abort an active stream if the http2.Framer returns a // http2.StreamError. This can happen only if the server's response // is malformed http2. if se, ok := err.(http2.StreamError); ok { t.mu.Lock() s := t.activeStreams[se.StreamID] t.mu.Unlock() if s != nil { // use error detail to provide better err message handleMalformedHTTP2(s, streamErrorf(http2ErrConvTab[se.Code], "%v", t.framer.errorDetail())) } continue } else { // Transport error. 
t.notifyError(err) return } } switch frame := frame.(type) { case *http2.MetaHeadersFrame: t.operateHeaders(frame) case *http2.DataFrame: t.handleData(frame) case *http2.RSTStreamFrame: t.handleRSTStream(frame) case *http2.SettingsFrame: t.handleSettings(frame) case *http2.PingFrame: t.handlePing(frame) case *http2.GoAwayFrame: t.handleGoAway(frame) case *http2.WindowUpdateFrame: t.handleWindowUpdate(frame) default: errorf("transport: http2Client.reader got unhandled frame type %v.", frame) } } } func (t *http2Client) applySettings(ss []http2.Setting) { for _, s := range ss { switch s.ID { case http2.SettingMaxConcurrentStreams: // TODO(zhaoq): This is a hack to avoid significant refactoring of the // code to deal with the unrealistic int32 overflow. Probably will try // to find a better way to handle this later. if s.Val > math.MaxInt32 { s.Val = math.MaxInt32 } t.mu.Lock() ms := t.maxStreams t.maxStreams = int(s.Val) t.mu.Unlock() t.streamsQuota.add(int(s.Val) - ms) case http2.SettingInitialWindowSize: t.mu.Lock() for _, stream := range t.activeStreams { // Adjust the sending quota for each stream. stream.sendQuotaPool.add(int(s.Val) - int(t.streamSendQuota)) } t.streamSendQuota = s.Val t.mu.Unlock() atomic.AddUint32(&t.outQuotaVersion, 1) } } } // controller running in a separate goroutine takes charge of sending control // frames (e.g., window update, reset stream, setting, etc.) to the server. func (t *http2Client) controller() { for { select { case i := <-t.controlBuf.get(): t.controlBuf.load() select { case <-t.writableChan: switch i := i.(type) { case *windowUpdate: t.framer.writeWindowUpdate(i.flush, i.streamID, i.increment) case *settings: if i.ack { t.framer.writeSettingsAck(true) t.applySettings(i.ss) } else { t.framer.writeSettings(true, i.ss...) } case *resetStream: // If the server needs to be to intimated about stream closing, // then we need to make sure the RST_STREAM frame is written to // the wire before the headers of the next stream waiting on // streamQuota. We ensure this by adding to the streamsQuota pool // only after having acquired the writableChan to send RST_STREAM. t.streamsQuota.add(1) t.framer.writeRSTStream(true, i.streamID, i.code) case *flushIO: t.framer.flushWrite() case *ping: if !i.ack { t.bdpEst.timesnap(i.data) } t.framer.writePing(true, i.ack, i.data) default: errorf("transport: http2Client.controller got unexpected item type %v\n", i) } t.writableChan <- 0 continue case <-t.shutdownChan: return } case <-t.shutdownChan: return } } } // keepalive running in a separate goroutune makes sure the connection is alive by sending pings. func (t *http2Client) keepalive() { p := &ping{data: [8]byte{}} timer := time.NewTimer(t.kp.Time) for { select { case <-timer.C: if atomic.CompareAndSwapUint32(&t.activity, 1, 0) { timer.Reset(t.kp.Time) continue } // Check if keepalive should go dormant. t.mu.Lock() if len(t.activeStreams) < 1 && !t.kp.PermitWithoutStream { // Make awakenKeepalive writable. <-t.awakenKeepalive t.mu.Unlock() select { case <-t.awakenKeepalive: // If the control gets here a ping has been sent // need to reset the timer with keepalive.Timeout. case <-t.shutdownChan: return } } else { t.mu.Unlock() // Send ping. t.controlBuf.put(p) } // By the time control gets here a ping has been sent one way or the other. 
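// The t.kp values driving this loop come from the dial options. A minimal
// sketch of how a client might enable keepalive (the dial option name is
// assumed; the field names match the t.kp usage above):
//
//	conn, err := grpc.Dial(addr, grpc.WithKeepaliveParams(keepalive.ClientParameters{
//	    Time:                30 * time.Second, // ping after 30s without activity
//	    Timeout:             10 * time.Second, // wait this long for the ping ack
//	    PermitWithoutStream: true,             // keep pinging even with no RPCs
//	}))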
timer.Reset(t.kp.Timeout) select { case <-timer.C: if atomic.CompareAndSwapUint32(&t.activity, 1, 0) { timer.Reset(t.kp.Time) continue } t.Close() return case <-t.shutdownChan: if !timer.Stop() { <-timer.C } return } case <-t.shutdownChan: if !timer.Stop() { <-timer.C } return } } } func (t *http2Client) Error() <-chan struct{} { return t.errorChan } func (t *http2Client) GoAway() <-chan struct{} { return t.goAway } func (t *http2Client) notifyError(err error) { t.mu.Lock() // make sure t.errorChan is closed only once. if t.state == draining { t.mu.Unlock() t.Close() return } if t.state == reachable { t.state = unreachable close(t.errorChan) infof("transport: http2Client.notifyError got notified that the client transport was broken %v.", err) } t.mu.Unlock() } golang-google-grpc-1.6.0/transport/http2_server.go000066400000000000000000001037301315416461300222120ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package transport import ( "bytes" "errors" "io" "math" "math/rand" "net" "strconv" "sync" "sync/atomic" "time" "github.com/golang/protobuf/proto" "golang.org/x/net/context" "golang.org/x/net/http2" "golang.org/x/net/http2/hpack" "google.golang.org/grpc/codes" "google.golang.org/grpc/credentials" "google.golang.org/grpc/keepalive" "google.golang.org/grpc/metadata" "google.golang.org/grpc/peer" "google.golang.org/grpc/stats" "google.golang.org/grpc/status" "google.golang.org/grpc/tap" ) // ErrIllegalHeaderWrite indicates that setting header is illegal because of // the stream's state. var ErrIllegalHeaderWrite = errors.New("transport: the stream is done or WriteHeader was already called") // http2Server implements the ServerTransport interface with HTTP2. type http2Server struct { ctx context.Context conn net.Conn remoteAddr net.Addr localAddr net.Addr maxStreamID uint32 // max stream ID ever seen authInfo credentials.AuthInfo // auth info about the connection inTapHandle tap.ServerInHandle // writableChan synchronizes write access to the transport. // A writer acquires the write lock by receiving a value on writableChan // and releases it by sending on writableChan. writableChan chan int // shutdownChan is closed when Close is called. // Blocking operations should select on shutdownChan to avoid // blocking forever after Close. shutdownChan chan struct{} framer *framer hBuf *bytes.Buffer // the buffer for HPACK encoding hEnc *hpack.Encoder // HPACK encoder // The max number of concurrent streams. maxStreams uint32 // controlBuf delivers all the control related tasks (e.g., window // updates, reset streams, and various settings) to the controller. controlBuf *controlBuffer fc *inFlow // sendQuotaPool provides flow control to outbound message. sendQuotaPool *quotaPool stats stats.Handler // Flag to keep track of reading activity on transport. // 1 is true and 0 is false. activity uint32 // Accessed atomically. // Keepalive and max-age parameters for the server. kp keepalive.ServerParameters // Keepalive enforcement policy. 
kep keepalive.EnforcementPolicy // The time instance last ping was received. lastPingAt time.Time // Number of times the client has violated keepalive ping policy so far. pingStrikes uint8 // Flag to signify that number of ping strikes should be reset to 0. // This is set whenever data or header frames are sent. // 1 means yes. resetPingStrikes uint32 // Accessed atomically. initialWindowSize int32 bdpEst *bdpEstimator outQuotaVersion uint32 mu sync.Mutex // guard the following // drainChan is initialized when drain(...) is called the first time. // After which the server writes out the first GoAway(with ID 2^31-1) frame. // Then an independent goroutine will be launched to later send the second GoAway. // During this time we don't want to write another first GoAway(with ID 2^31 -1) frame. // Thus call to drain(...) will be a no-op if drainChan is already initialized since draining is // already underway. drainChan chan struct{} state transportState activeStreams map[uint32]*Stream // the per-stream outbound flow control window size set by the peer. streamSendQuota uint32 // idle is the time instant when the connection went idle. // This is either the begining of the connection or when the number of // RPCs go down to 0. // When the connection is busy, this value is set to 0. idle time.Time } // newHTTP2Server constructs a ServerTransport based on HTTP2. ConnectionError is // returned if something goes wrong. func newHTTP2Server(conn net.Conn, config *ServerConfig) (_ ServerTransport, err error) { framer := newFramer(conn) // Send initial settings as connection preface to client. var isettings []http2.Setting // TODO(zhaoq): Have a better way to signal "no limit" because 0 is // permitted in the HTTP2 spec. maxStreams := config.MaxStreams if maxStreams == 0 { maxStreams = math.MaxUint32 } else { isettings = append(isettings, http2.Setting{ ID: http2.SettingMaxConcurrentStreams, Val: maxStreams, }) } dynamicWindow := true iwz := int32(initialWindowSize) if config.InitialWindowSize >= defaultWindowSize { iwz = config.InitialWindowSize dynamicWindow = false } icwz := int32(initialWindowSize) if config.InitialConnWindowSize >= defaultWindowSize { icwz = config.InitialConnWindowSize dynamicWindow = false } if iwz != defaultWindowSize { isettings = append(isettings, http2.Setting{ ID: http2.SettingInitialWindowSize, Val: uint32(iwz)}) } if err := framer.writeSettings(true, isettings...); err != nil { return nil, connectionErrorf(true, err, "transport: %v", err) } // Adjust the connection flow control window if needed. if delta := uint32(icwz - defaultWindowSize); delta > 0 { if err := framer.writeWindowUpdate(true, 0, delta); err != nil { return nil, connectionErrorf(true, err, "transport: %v", err) } } kp := config.KeepaliveParams if kp.MaxConnectionIdle == 0 { kp.MaxConnectionIdle = defaultMaxConnectionIdle } if kp.MaxConnectionAge == 0 { kp.MaxConnectionAge = defaultMaxConnectionAge } // Add a jitter to MaxConnectionAge. 
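// getJitter (defined at the bottom of this file) returns a value roughly
// uniform in +/-10% of its argument, so connections created together do not
// all hit MaxConnectionAge at the same instant:
//
//	r := int64(v / 10)
//	j := rgen.Int63n(2*r) - r // j in [-v/10, v/10)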
kp.MaxConnectionAge += getJitter(kp.MaxConnectionAge) if kp.MaxConnectionAgeGrace == 0 { kp.MaxConnectionAgeGrace = defaultMaxConnectionAgeGrace } if kp.Time == 0 { kp.Time = defaultServerKeepaliveTime } if kp.Timeout == 0 { kp.Timeout = defaultServerKeepaliveTimeout } kep := config.KeepalivePolicy if kep.MinTime == 0 { kep.MinTime = defaultKeepalivePolicyMinTime } var buf bytes.Buffer t := &http2Server{ ctx: context.Background(), conn: conn, remoteAddr: conn.RemoteAddr(), localAddr: conn.LocalAddr(), authInfo: config.AuthInfo, framer: framer, hBuf: &buf, hEnc: hpack.NewEncoder(&buf), maxStreams: maxStreams, inTapHandle: config.InTapHandle, controlBuf: newControlBuffer(), fc: &inFlow{limit: uint32(icwz)}, sendQuotaPool: newQuotaPool(defaultWindowSize), state: reachable, writableChan: make(chan int, 1), shutdownChan: make(chan struct{}), activeStreams: make(map[uint32]*Stream), streamSendQuota: defaultWindowSize, stats: config.StatsHandler, kp: kp, idle: time.Now(), kep: kep, initialWindowSize: iwz, } if dynamicWindow { t.bdpEst = &bdpEstimator{ bdp: initialWindowSize, updateFlowControl: t.updateFlowControl, } } if t.stats != nil { t.ctx = t.stats.TagConn(t.ctx, &stats.ConnTagInfo{ RemoteAddr: t.remoteAddr, LocalAddr: t.localAddr, }) connBegin := &stats.ConnBegin{} t.stats.HandleConn(t.ctx, connBegin) } go t.controller() go t.keepalive() t.writableChan <- 0 return t, nil } // operateHeader takes action on the decoded headers. func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func(*Stream), traceCtx func(context.Context, string) context.Context) (close bool) { streamID := frame.Header().StreamID var state decodeState for _, hf := range frame.Fields { if err := state.processHeaderField(hf); err != nil { if se, ok := err.(StreamError); ok { t.controlBuf.put(&resetStream{streamID, statusCodeConvTab[se.Code]}) } return } } buf := newRecvBuffer() s := &Stream{ id: streamID, st: t, buf: buf, fc: &inFlow{limit: uint32(t.initialWindowSize)}, recvCompress: state.encoding, method: state.method, } if frame.StreamEnded() { // s is just created by the caller. No lock needed. s.state = streamReadDone } if state.timeoutSet { s.ctx, s.cancel = context.WithTimeout(t.ctx, state.timeout) } else { s.ctx, s.cancel = context.WithCancel(t.ctx) } pr := &peer.Peer{ Addr: t.remoteAddr, } // Attach Auth info if there is any. if t.authInfo != nil { pr.AuthInfo = t.authInfo } s.ctx = peer.NewContext(s.ctx, pr) // Cache the current stream to the context so that the server application // can find out. Required when the server wants to send some metadata // back to the client (unary call only). s.ctx = newContextWithStream(s.ctx, s) // Attach the received metadata to the context. 
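// The metadata attached here is what server handler code later reads back; a
// minimal sketch of the consuming side (handler and message types are
// illustrative, not part of this package):
//
//	func (s *server) SomeMethod(ctx context.Context, in *Request) (*Reply, error) {
//	    md, _ := metadata.FromIncomingContext(ctx) // md mirrors state.mdata
//	    ...
//	}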
if len(state.mdata) > 0 { s.ctx = metadata.NewIncomingContext(s.ctx, state.mdata) } if state.statsTags != nil { s.ctx = stats.SetIncomingTags(s.ctx, state.statsTags) } if state.statsTrace != nil { s.ctx = stats.SetIncomingTrace(s.ctx, state.statsTrace) } if t.inTapHandle != nil { var err error info := &tap.Info{ FullMethodName: state.method, } s.ctx, err = t.inTapHandle(s.ctx, info) if err != nil { warningf("transport: http2Server.operateHeaders got an error from InTapHandle: %v", err) t.controlBuf.put(&resetStream{s.id, http2.ErrCodeRefusedStream}) return } } t.mu.Lock() if t.state != reachable { t.mu.Unlock() return } if uint32(len(t.activeStreams)) >= t.maxStreams { t.mu.Unlock() t.controlBuf.put(&resetStream{streamID, http2.ErrCodeRefusedStream}) return } if streamID%2 != 1 || streamID <= t.maxStreamID { t.mu.Unlock() // illegal gRPC stream id. errorf("transport: http2Server.HandleStreams received an illegal stream id: %v", streamID) return true } t.maxStreamID = streamID s.sendQuotaPool = newQuotaPool(int(t.streamSendQuota)) t.activeStreams[streamID] = s if len(t.activeStreams) == 1 { t.idle = time.Time{} } t.mu.Unlock() s.requestRead = func(n int) { t.adjustWindow(s, uint32(n)) } s.ctx = traceCtx(s.ctx, s.method) if t.stats != nil { s.ctx = t.stats.TagRPC(s.ctx, &stats.RPCTagInfo{FullMethodName: s.method}) inHeader := &stats.InHeader{ FullMethod: s.method, RemoteAddr: t.remoteAddr, LocalAddr: t.localAddr, Compression: s.recvCompress, WireLength: int(frame.Header().Length), } t.stats.HandleRPC(s.ctx, inHeader) } s.trReader = &transportReader{ reader: &recvBufferReader{ ctx: s.ctx, recv: s.buf, }, windowHandler: func(n int) { t.updateWindow(s, uint32(n)) }, } handle(s) return } // HandleStreams receives incoming streams using the given handler. This is // typically run in a separate goroutine. // traceCtx attaches trace to ctx and returns the new context. func (t *http2Server) HandleStreams(handle func(*Stream), traceCtx func(context.Context, string) context.Context) { // Check the validity of client preface. 
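// clientPreface (see http_util.go) is []byte(http2.ClientPreface), the fixed
// 24-byte sequence every HTTP/2 client must send first:
//
//	"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"
//
// Anything else read here is treated as a bogus peer and the connection is
// closed.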
preface := make([]byte, len(clientPreface)) if _, err := io.ReadFull(t.conn, preface); err != nil { // Only log if it isn't a simple tcp accept check (ie: tcp balancer doing open/close socket) if err != io.EOF { errorf("transport: http2Server.HandleStreams failed to receive the preface from client: %v", err) } t.Close() return } if !bytes.Equal(preface, clientPreface) { errorf("transport: http2Server.HandleStreams received bogus greeting from client: %q", preface) t.Close() return } frame, err := t.framer.readFrame() if err == io.EOF || err == io.ErrUnexpectedEOF { t.Close() return } if err != nil { errorf("transport: http2Server.HandleStreams failed to read initial settings frame: %v", err) t.Close() return } atomic.StoreUint32(&t.activity, 1) sf, ok := frame.(*http2.SettingsFrame) if !ok { errorf("transport: http2Server.HandleStreams saw invalid preface type %T from client", frame) t.Close() return } t.handleSettings(sf) for { frame, err := t.framer.readFrame() atomic.StoreUint32(&t.activity, 1) if err != nil { if se, ok := err.(http2.StreamError); ok { t.mu.Lock() s := t.activeStreams[se.StreamID] t.mu.Unlock() if s != nil { t.closeStream(s) } t.controlBuf.put(&resetStream{se.StreamID, se.Code}) continue } if err == io.EOF || err == io.ErrUnexpectedEOF { t.Close() return } warningf("transport: http2Server.HandleStreams failed to read frame: %v", err) t.Close() return } switch frame := frame.(type) { case *http2.MetaHeadersFrame: if t.operateHeaders(frame, handle, traceCtx) { t.Close() break } case *http2.DataFrame: t.handleData(frame) case *http2.RSTStreamFrame: t.handleRSTStream(frame) case *http2.SettingsFrame: t.handleSettings(frame) case *http2.PingFrame: t.handlePing(frame) case *http2.WindowUpdateFrame: t.handleWindowUpdate(frame) case *http2.GoAwayFrame: // TODO: Handle GoAway from the client appropriately. default: errorf("transport: http2Server.HandleStreams found unhandled frame type %v.", frame) } } } func (t *http2Server) getStream(f http2.Frame) (*Stream, bool) { t.mu.Lock() defer t.mu.Unlock() if t.activeStreams == nil { // The transport is closing. return nil, false } s, ok := t.activeStreams[f.Header().StreamID] if !ok { // The stream is already done. return nil, false } return s, true } // adjustWindow sends out extra window update over the initial window size // of stream if the application is requesting data larger in size than // the window. func (t *http2Server) adjustWindow(s *Stream, n uint32) { s.mu.Lock() defer s.mu.Unlock() if s.state == streamDone { return } if w := s.fc.maybeAdjust(n); w > 0 { if cw := t.fc.resetPendingUpdate(); cw > 0 { t.controlBuf.put(&windowUpdate{0, cw, false}) } t.controlBuf.put(&windowUpdate{s.id, w, true}) } } // updateWindow adjusts the inbound quota for the stream and the transport. // Window updates will deliver to the controller for sending when // the cumulative quota exceeds the corresponding threshold. func (t *http2Server) updateWindow(s *Stream, n uint32) { s.mu.Lock() defer s.mu.Unlock() if s.state == streamDone { return } if w := s.fc.onRead(n); w > 0 { if cw := t.fc.resetPendingUpdate(); cw > 0 { t.controlBuf.put(&windowUpdate{0, cw, false}) } t.controlBuf.put(&windowUpdate{s.id, w, true}) } } // updateFlowControl updates the incoming flow control windows // for the transport and the stream based on the current bdp // estimation. 
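// Condensed sketch of the effect: when the BDP estimator decides the window
// should grow to n bytes, the transport raises its own receive limits and
// tells the peer to use n as the per-stream initial window:
//
//	t.controlBuf.put(&windowUpdate{0, t.fc.newLimit(n), false}) // connection level
//	t.controlBuf.put(&settings{ss: []http2.Setting{
//	    {ID: http2.SettingInitialWindowSize, Val: n}}})         // stream level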
func (t *http2Server) updateFlowControl(n uint32) { t.mu.Lock() for _, s := range t.activeStreams { s.fc.newLimit(n) } t.initialWindowSize = int32(n) t.mu.Unlock() t.controlBuf.put(&windowUpdate{0, t.fc.newLimit(n), false}) t.controlBuf.put(&settings{ ack: false, ss: []http2.Setting{ { ID: http2.SettingInitialWindowSize, Val: uint32(n), }, }, }) } func (t *http2Server) handleData(f *http2.DataFrame) { size := f.Header().Length var sendBDPPing bool if t.bdpEst != nil { sendBDPPing = t.bdpEst.add(uint32(size)) } // Decouple connection's flow control from application's read. // An update on connection's flow control should not depend on // whether user application has read the data or not. Such a // restriction is already imposed on the stream's flow control, // and therefore the sender will be blocked anyways. // Decoupling the connection flow control will prevent other // active(fast) streams from starving in presence of slow or // inactive streams. // // Furthermore, if a bdpPing is being sent out we can piggyback // connection's window update for the bytes we just received. if sendBDPPing { t.controlBuf.put(&windowUpdate{0, uint32(size), false}) t.controlBuf.put(bdpPing) } else { if err := t.fc.onData(uint32(size)); err != nil { errorf("transport: http2Server %v", err) t.Close() return } if w := t.fc.onRead(uint32(size)); w > 0 { t.controlBuf.put(&windowUpdate{0, w, true}) } } // Select the right stream to dispatch. s, ok := t.getStream(f) if !ok { return } if size > 0 { s.mu.Lock() if s.state == streamDone { s.mu.Unlock() return } if err := s.fc.onData(uint32(size)); err != nil { s.mu.Unlock() t.closeStream(s) t.controlBuf.put(&resetStream{s.id, http2.ErrCodeFlowControl}) return } if f.Header().Flags.Has(http2.FlagDataPadded) { if w := s.fc.onRead(uint32(size) - uint32(len(f.Data()))); w > 0 { t.controlBuf.put(&windowUpdate{s.id, w, true}) } } s.mu.Unlock() // TODO(bradfitz, zhaoq): A copy is required here because there is no // guarantee f.Data() is consumed before the arrival of next frame. // Can this copy be eliminated? if len(f.Data()) > 0 { data := make([]byte, len(f.Data())) copy(data, f.Data()) s.write(recvMsg{data: data}) } } if f.Header().Flags.Has(http2.FlagDataEndStream) { // Received the end of stream from the client. s.mu.Lock() if s.state != streamDone { s.state = streamReadDone } s.mu.Unlock() s.write(recvMsg{err: io.EOF}) } } func (t *http2Server) handleRSTStream(f *http2.RSTStreamFrame) { s, ok := t.getStream(f) if !ok { return } t.closeStream(s) } func (t *http2Server) handleSettings(f *http2.SettingsFrame) { if f.IsAck() { return } var ss []http2.Setting f.ForeachSetting(func(s http2.Setting) error { ss = append(ss, s) return nil }) // The settings will be applied once the ack is sent. t.controlBuf.put(&settings{ack: true, ss: ss}) } const ( maxPingStrikes = 2 defaultPingTimeout = 2 * time.Hour ) func (t *http2Server) handlePing(f *http2.PingFrame) { if f.IsAck() { if f.Data == goAwayPing.data && t.drainChan != nil { close(t.drainChan) return } // Maybe it's a BDP ping. if t.bdpEst != nil { t.bdpEst.calculate(f.Data) } return } pingAck := &ping{ack: true} copy(pingAck.data[:], f.Data[:]) t.controlBuf.put(pingAck) now := time.Now() defer func() { t.lastPingAt = now }() // A reset ping strikes means that we don't need to check for policy // violation for this ping and the pingStrikes counter should be set // to 0. 
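// Enforcement sketch: with no active streams (and PermitWithoutStream unset)
// a ping arriving within defaultPingTimeout (2h) of the previous one counts
// as a strike; with active streams the bound is kep.MinTime. More than
// maxPingStrikes (2) strikes triggers
//
//	&goAway{code: http2.ErrCodeEnhanceYourCalm,
//	        debugData: []byte("too_many_pings"), closeConn: true}
//
// which is what the client side later reports as the TooManyPings GoAwayReason.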
if atomic.CompareAndSwapUint32(&t.resetPingStrikes, 1, 0) { t.pingStrikes = 0 return } t.mu.Lock() ns := len(t.activeStreams) t.mu.Unlock() if ns < 1 && !t.kep.PermitWithoutStream { // Keepalive shouldn't be active thus, this new ping should // have come after atleast defaultPingTimeout. if t.lastPingAt.Add(defaultPingTimeout).After(now) { t.pingStrikes++ } } else { // Check if keepalive policy is respected. if t.lastPingAt.Add(t.kep.MinTime).After(now) { t.pingStrikes++ } } if t.pingStrikes > maxPingStrikes { // Send goaway and close the connection. t.controlBuf.put(&goAway{code: http2.ErrCodeEnhanceYourCalm, debugData: []byte("too_many_pings"), closeConn: true}) } } func (t *http2Server) handleWindowUpdate(f *http2.WindowUpdateFrame) { id := f.Header().StreamID incr := f.Increment if id == 0 { t.sendQuotaPool.add(int(incr)) return } if s, ok := t.getStream(f); ok { s.sendQuotaPool.add(int(incr)) } } func (t *http2Server) writeHeaders(s *Stream, b *bytes.Buffer, endStream bool) error { first := true endHeaders := false var err error defer func() { if err == nil { // Reset ping strikes when seding headers since that might cause the // peer to send ping. atomic.StoreUint32(&t.resetPingStrikes, 1) } }() // Sends the headers in a single batch. for !endHeaders { size := t.hBuf.Len() if size > http2MaxFrameLen { size = http2MaxFrameLen } else { endHeaders = true } if first { p := http2.HeadersFrameParam{ StreamID: s.id, BlockFragment: b.Next(size), EndStream: endStream, EndHeaders: endHeaders, } err = t.framer.writeHeaders(endHeaders, p) first = false } else { err = t.framer.writeContinuation(endHeaders, s.id, endHeaders, b.Next(size)) } if err != nil { t.Close() return connectionErrorf(true, err, "transport: %v", err) } } return nil } // WriteHeader sends the header metedata md back to the client. func (t *http2Server) WriteHeader(s *Stream, md metadata.MD) error { s.mu.Lock() if s.headerOk || s.state == streamDone { s.mu.Unlock() return ErrIllegalHeaderWrite } s.headerOk = true if md.Len() > 0 { if s.header.Len() > 0 { s.header = metadata.Join(s.header, md) } else { s.header = md } } md = s.header s.mu.Unlock() if _, err := wait(s.ctx, nil, nil, t.shutdownChan, t.writableChan); err != nil { return err } t.hBuf.Reset() t.hEnc.WriteField(hpack.HeaderField{Name: ":status", Value: "200"}) t.hEnc.WriteField(hpack.HeaderField{Name: "content-type", Value: "application/grpc"}) if s.sendCompress != "" { t.hEnc.WriteField(hpack.HeaderField{Name: "grpc-encoding", Value: s.sendCompress}) } for k, vv := range md { if isReservedHeader(k) { // Clients don't tolerate reading restricted headers after some non restricted ones were sent. continue } for _, v := range vv { t.hEnc.WriteField(hpack.HeaderField{Name: k, Value: encodeMetadataHeader(k, v)}) } } bufLen := t.hBuf.Len() if err := t.writeHeaders(s, t.hBuf, false); err != nil { return err } if t.stats != nil { outHeader := &stats.OutHeader{ WireLength: bufLen, } t.stats.HandleRPC(s.Context(), outHeader) } t.writableChan <- 0 return nil } // WriteStatus sends stream status to the client and terminates the stream. // There is no further I/O operations being able to perform on this stream. // TODO(zhaoq): Now it indicates the end of entire stream. Revisit if early // OK is adopted. 
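// The status travels as HTTP/2 trailers. For a status with code NotFound (5)
// and message "no such user", the trailer block written below is roughly:
//
//	grpc-status:  5
//	grpc-message: no such user
//
// plus grpc-status-details-bin when st.Proto() carries details, and any user
// trailer metadata set on the stream.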
func (t *http2Server) WriteStatus(s *Stream, st *status.Status) error { var headersSent, hasHeader bool s.mu.Lock() if s.state == streamDone { s.mu.Unlock() return nil } if s.headerOk { headersSent = true } if s.header.Len() > 0 { hasHeader = true } s.mu.Unlock() if !headersSent && hasHeader { t.WriteHeader(s, nil) headersSent = true } // Always write a status regardless of context cancellation unless the stream // is terminated (e.g. by a RST_STREAM, GOAWAY, or transport error). The // server's application code is already done so it is fine to ignore s.ctx. select { case <-t.shutdownChan: return ErrConnClosing case <-t.writableChan: } t.hBuf.Reset() if !headersSent { t.hEnc.WriteField(hpack.HeaderField{Name: ":status", Value: "200"}) t.hEnc.WriteField(hpack.HeaderField{Name: "content-type", Value: "application/grpc"}) } t.hEnc.WriteField( hpack.HeaderField{ Name: "grpc-status", Value: strconv.Itoa(int(st.Code())), }) t.hEnc.WriteField(hpack.HeaderField{Name: "grpc-message", Value: encodeGrpcMessage(st.Message())}) if p := st.Proto(); p != nil && len(p.Details) > 0 { stBytes, err := proto.Marshal(p) if err != nil { // TODO: return error instead, when callers are able to handle it. panic(err) } t.hEnc.WriteField(hpack.HeaderField{Name: "grpc-status-details-bin", Value: encodeBinHeader(stBytes)}) } // Attach the trailer metadata. for k, vv := range s.trailer { // Clients don't tolerate reading restricted headers after some non restricted ones were sent. if isReservedHeader(k) { continue } for _, v := range vv { t.hEnc.WriteField(hpack.HeaderField{Name: k, Value: encodeMetadataHeader(k, v)}) } } bufLen := t.hBuf.Len() if err := t.writeHeaders(s, t.hBuf, true); err != nil { t.Close() return err } if t.stats != nil { outTrailer := &stats.OutTrailer{ WireLength: bufLen, } t.stats.HandleRPC(s.Context(), outTrailer) } t.closeStream(s) t.writableChan <- 0 return nil } // Write converts the data into HTTP2 data frame and sends it out. Non-nil error // is returns if it fails (e.g., framing error, transport error). func (t *http2Server) Write(s *Stream, hdr []byte, data []byte, opts *Options) (err error) { // TODO(zhaoq): Support multi-writers for a single stream. secondStart := http2MaxFrameLen - len(hdr)%http2MaxFrameLen if len(data) < secondStart { secondStart = len(data) } hdr = append(hdr, data[:secondStart]...) data = data[secondStart:] isLastSlice := (len(data) == 0) var writeHeaderFrame bool s.mu.Lock() if s.state == streamDone { s.mu.Unlock() return streamErrorf(codes.Unknown, "the stream has been done") } if !s.headerOk { writeHeaderFrame = true } s.mu.Unlock() if writeHeaderFrame { t.WriteHeader(s, nil) } r := bytes.NewBuffer(hdr) var ( p []byte oqv uint32 ) for { if r.Len() == 0 && p == nil { return nil } oqv = atomic.LoadUint32(&t.outQuotaVersion) size := http2MaxFrameLen // Wait until the stream has some quota to send the data. sq, err := wait(s.ctx, nil, nil, t.shutdownChan, s.sendQuotaPool.acquire()) if err != nil { return err } // Wait until the transport has some quota to send the data. tq, err := wait(s.ctx, nil, nil, t.shutdownChan, t.sendQuotaPool.acquire()) if err != nil { return err } if sq < size { size = sq } if tq < size { size = tq } if p == nil { p = r.Next(size) } ps := len(p) if ps < sq { // Overbooked stream quota. Return it back. s.sendQuotaPool.add(sq - ps) } if ps < tq { // Overbooked transport quota. Return it back. t.sendQuotaPool.add(tq - ps) } t.framer.adjustNumWriters(1) // Got some quota. Try to acquire writing privilege on the // transport. 
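// writableChan is effectively a one-slot token: receiving from it grants the
// exclusive right to use t.framer, and sending 0 back releases it. The
// pattern used throughout this file is, schematically:
//
//	<-t.writableChan        // acquire (via wait, so shutdown can interrupt)
//	t.framer.writeData(...) // any framer writes
//	t.writableChan <- 0     // release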
if _, err := wait(s.ctx, nil, nil, t.shutdownChan, t.writableChan); err != nil { if _, ok := err.(StreamError); ok { // Return the connection quota back. t.sendQuotaPool.add(ps) } if t.framer.adjustNumWriters(-1) == 0 { // This writer is the last one in this batch and has the // responsibility to flush the buffered frames. It queues // a flush request to controlBuf instead of flushing directly // in order to avoid the race with other writing or flushing. t.controlBuf.put(&flushIO{}) } return err } select { case <-s.ctx.Done(): t.sendQuotaPool.add(ps) if t.framer.adjustNumWriters(-1) == 0 { t.controlBuf.put(&flushIO{}) } t.writableChan <- 0 return ContextErr(s.ctx.Err()) default: } if oqv != atomic.LoadUint32(&t.outQuotaVersion) { // InitialWindowSize settings frame must have been received after we // acquired send quota but before we got the writable channel. // We must forsake this write. t.sendQuotaPool.add(ps) s.sendQuotaPool.add(ps) if t.framer.adjustNumWriters(-1) == 0 { t.controlBuf.put(&flushIO{}) } t.writableChan <- 0 continue } var forceFlush bool if r.Len() == 0 { if isLastSlice { if t.framer.adjustNumWriters(0) == 1 && !opts.Last { forceFlush = true } } else { r = bytes.NewBuffer(data) isLastSlice = true } } // Reset ping strikes when sending data since this might cause // the peer to send ping. atomic.StoreUint32(&t.resetPingStrikes, 1) if err := t.framer.writeData(forceFlush, s.id, false, p); err != nil { t.Close() return connectionErrorf(true, err, "transport: %v", err) } p = nil if t.framer.adjustNumWriters(-1) == 0 { t.framer.flushWrite() } t.writableChan <- 0 } } func (t *http2Server) applySettings(ss []http2.Setting) { for _, s := range ss { if s.ID == http2.SettingInitialWindowSize { t.mu.Lock() defer t.mu.Unlock() for _, stream := range t.activeStreams { stream.sendQuotaPool.add(int(s.Val) - int(t.streamSendQuota)) } t.streamSendQuota = s.Val atomic.AddUint32(&t.outQuotaVersion, 1) } } } // keepalive running in a separate goroutine does the following: // 1. Gracefully closes an idle connection after a duration of keepalive.MaxConnectionIdle. // 2. Gracefully closes any connection after a duration of keepalive.MaxConnectionAge. // 3. Forcibly closes a connection after an additive period of keepalive.MaxConnectionAgeGrace over keepalive.MaxConnectionAge. // 4. Makes sure a connection is alive by sending pings with a frequency of keepalive.Time and closes a non-responsive connection // after an additional duration of keepalive.Timeout. func (t *http2Server) keepalive() { p := &ping{} var pingSent bool maxIdle := time.NewTimer(t.kp.MaxConnectionIdle) maxAge := time.NewTimer(t.kp.MaxConnectionAge) keepalive := time.NewTimer(t.kp.Time) // NOTE: All exit paths of this function should reset their // respecitve timers. A failure to do so will cause the // following clean-up to deadlock and eventually leak. defer func() { if !maxIdle.Stop() { <-maxIdle.C } if !maxAge.Stop() { <-maxAge.C } if !keepalive.Stop() { <-keepalive.C } }() for { select { case <-maxIdle.C: t.mu.Lock() idle := t.idle if idle.IsZero() { // The connection is non-idle. t.mu.Unlock() maxIdle.Reset(t.kp.MaxConnectionIdle) continue } val := t.kp.MaxConnectionIdle - time.Since(idle) t.mu.Unlock() if val <= 0 { // The connection has been idle for a duration of keepalive.MaxConnectionIdle or more. // Gracefully close the connection. t.drain(http2.ErrCodeNo, []byte{}) // Reseting the timer so that the clean-up doesn't deadlock. 
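// The kp values used by this loop come from the server options. A minimal
// configuration sketch (the server option name is assumed; the field names
// match the kp usage above):
//
//	srv := grpc.NewServer(grpc.KeepaliveParams(keepalive.ServerParameters{
//	    MaxConnectionIdle: 5 * time.Minute,
//	    MaxConnectionAge:  30 * time.Minute,
//	    Time:              2 * time.Hour,
//	    Timeout:           20 * time.Second,
//	}))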
maxIdle.Reset(infinity) return } maxIdle.Reset(val) case <-maxAge.C: t.drain(http2.ErrCodeNo, []byte{}) maxAge.Reset(t.kp.MaxConnectionAgeGrace) select { case <-maxAge.C: // Close the connection after grace period. t.Close() // Reseting the timer so that the clean-up doesn't deadlock. maxAge.Reset(infinity) case <-t.shutdownChan: } return case <-keepalive.C: if atomic.CompareAndSwapUint32(&t.activity, 1, 0) { pingSent = false keepalive.Reset(t.kp.Time) continue } if pingSent { t.Close() // Reseting the timer so that the clean-up doesn't deadlock. keepalive.Reset(infinity) return } pingSent = true t.controlBuf.put(p) keepalive.Reset(t.kp.Timeout) case <-t.shutdownChan: return } } } var goAwayPing = &ping{data: [8]byte{1, 6, 1, 8, 0, 3, 3, 9}} // controller running in a separate goroutine takes charge of sending control // frames (e.g., window update, reset stream, setting, etc.) to the server. func (t *http2Server) controller() { for { select { case i := <-t.controlBuf.get(): t.controlBuf.load() select { case <-t.writableChan: switch i := i.(type) { case *windowUpdate: t.framer.writeWindowUpdate(i.flush, i.streamID, i.increment) case *settings: if i.ack { t.framer.writeSettingsAck(true) t.applySettings(i.ss) } else { t.framer.writeSettings(true, i.ss...) } case *resetStream: t.framer.writeRSTStream(true, i.streamID, i.code) case *goAway: t.mu.Lock() if t.state == closing { t.mu.Unlock() // The transport is closing. return } sid := t.maxStreamID if !i.headsUp { // Stop accepting more streams now. t.state = draining activeStreams := len(t.activeStreams) t.mu.Unlock() t.framer.writeGoAway(true, sid, i.code, i.debugData) if i.closeConn || activeStreams == 0 { // Abruptly close the connection following the GoAway. t.Close() } t.writableChan <- 0 continue } t.mu.Unlock() // For a graceful close, send out a GoAway with stream ID of MaxUInt32, // Follow that with a ping and wait for the ack to come back or a timer // to expire. During this time accept new streams since they might have // originated before the GoAway reaches the client. // After getting the ack or timer expiration send out another GoAway this // time with an ID of the max stream server intends to process. t.framer.writeGoAway(true, math.MaxUint32, http2.ErrCodeNo, []byte{}) t.framer.writePing(true, false, goAwayPing.data) go func() { timer := time.NewTimer(time.Minute) defer timer.Stop() select { case <-t.drainChan: case <-timer.C: case <-t.shutdownChan: return } t.controlBuf.put(&goAway{code: i.code, debugData: i.debugData}) }() case *flushIO: t.framer.flushWrite() case *ping: if !i.ack { t.bdpEst.timesnap(i.data) } t.framer.writePing(true, i.ack, i.data) default: errorf("transport: http2Server.controller got unexpected item type %v\n", i) } t.writableChan <- 0 continue case <-t.shutdownChan: return } case <-t.shutdownChan: return } } } // Close starts shutting down the http2Server transport. // TODO(zhaoq): Now the destruction is not blocked on any pending streams. This // could cause some resource issue. Revisit this later. func (t *http2Server) Close() (err error) { t.mu.Lock() if t.state == closing { t.mu.Unlock() return errors.New("transport: Close() was already called") } t.state = closing streams := t.activeStreams t.activeStreams = nil t.mu.Unlock() close(t.shutdownChan) err = t.conn.Close() // Cancel all active streams. 
for _, s := range streams { s.cancel() } if t.stats != nil { connEnd := &stats.ConnEnd{} t.stats.HandleConn(t.ctx, connEnd) } return } // closeStream clears the footprint of a stream when the stream is not needed // any more. func (t *http2Server) closeStream(s *Stream) { t.mu.Lock() delete(t.activeStreams, s.id) if len(t.activeStreams) == 0 { t.idle = time.Now() } if t.state == draining && len(t.activeStreams) == 0 { defer t.Close() } t.mu.Unlock() // In case stream sending and receiving are invoked in separate // goroutines (e.g., bi-directional streaming), cancel needs to be // called to interrupt the potential blocking on other goroutines. s.cancel() s.mu.Lock() if s.state == streamDone { s.mu.Unlock() return } s.state = streamDone s.mu.Unlock() } func (t *http2Server) RemoteAddr() net.Addr { return t.remoteAddr } func (t *http2Server) Drain() { t.drain(http2.ErrCodeNo, []byte{}) } func (t *http2Server) drain(code http2.ErrCode, debugData []byte) { t.mu.Lock() defer t.mu.Unlock() if t.drainChan != nil { return } t.drainChan = make(chan struct{}) t.controlBuf.put(&goAway{code: code, debugData: debugData, headsUp: true}) } var rgen = rand.New(rand.NewSource(time.Now().UnixNano())) func getJitter(v time.Duration) time.Duration { if v == infinity { return 0 } // Generate a jitter between +/- 10% of the value. r := int64(v / 10) j := rgen.Int63n(2*r) - r return time.Duration(j) } golang-google-grpc-1.6.0/transport/http_util.go000066400000000000000000000403471315416461300216030ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ package transport import ( "bufio" "bytes" "encoding/base64" "fmt" "io" "net" "net/http" "strconv" "strings" "sync/atomic" "time" "github.com/golang/protobuf/proto" "golang.org/x/net/http2" "golang.org/x/net/http2/hpack" spb "google.golang.org/genproto/googleapis/rpc/status" "google.golang.org/grpc/codes" "google.golang.org/grpc/status" ) const ( // http2MaxFrameLen specifies the max length of a HTTP2 frame. http2MaxFrameLen = 16384 // 16KB frame // http://http2.github.io/http2-spec/#SettingValues http2InitHeaderTableSize = 4096 // http2IOBufSize specifies the buffer size for sending frames. 
http2IOBufSize = 32 * 1024 ) var ( clientPreface = []byte(http2.ClientPreface) http2ErrConvTab = map[http2.ErrCode]codes.Code{ http2.ErrCodeNo: codes.Internal, http2.ErrCodeProtocol: codes.Internal, http2.ErrCodeInternal: codes.Internal, http2.ErrCodeFlowControl: codes.ResourceExhausted, http2.ErrCodeSettingsTimeout: codes.Internal, http2.ErrCodeStreamClosed: codes.Internal, http2.ErrCodeFrameSize: codes.Internal, http2.ErrCodeRefusedStream: codes.Unavailable, http2.ErrCodeCancel: codes.Canceled, http2.ErrCodeCompression: codes.Internal, http2.ErrCodeConnect: codes.Internal, http2.ErrCodeEnhanceYourCalm: codes.ResourceExhausted, http2.ErrCodeInadequateSecurity: codes.PermissionDenied, http2.ErrCodeHTTP11Required: codes.FailedPrecondition, } statusCodeConvTab = map[codes.Code]http2.ErrCode{ codes.Internal: http2.ErrCodeInternal, codes.Canceled: http2.ErrCodeCancel, codes.Unavailable: http2.ErrCodeRefusedStream, codes.ResourceExhausted: http2.ErrCodeEnhanceYourCalm, codes.PermissionDenied: http2.ErrCodeInadequateSecurity, } httpStatusConvTab = map[int]codes.Code{ // 400 Bad Request - INTERNAL. http.StatusBadRequest: codes.Internal, // 401 Unauthorized - UNAUTHENTICATED. http.StatusUnauthorized: codes.Unauthenticated, // 403 Forbidden - PERMISSION_DENIED. http.StatusForbidden: codes.PermissionDenied, // 404 Not Found - UNIMPLEMENTED. http.StatusNotFound: codes.Unimplemented, // 429 Too Many Requests - UNAVAILABLE. http.StatusTooManyRequests: codes.Unavailable, // 502 Bad Gateway - UNAVAILABLE. http.StatusBadGateway: codes.Unavailable, // 503 Service Unavailable - UNAVAILABLE. http.StatusServiceUnavailable: codes.Unavailable, // 504 Gateway timeout - UNAVAILABLE. http.StatusGatewayTimeout: codes.Unavailable, } ) // Records the states during HPACK decoding. Must be reset once the // decoding of the entire headers are finished. type decodeState struct { encoding string // statusGen caches the stream status received from the trailer the server // sent. Client side only. Do not access directly. After all trailers are // parsed, use the status method to retrieve the status. statusGen *status.Status // rawStatusCode and rawStatusMsg are set from the raw trailer fields and are not // intended for direct access outside of parsing. rawStatusCode *int rawStatusMsg string httpStatus *int // Server side only fields. timeoutSet bool timeout time.Duration method string // key-value metadata map from the peer. mdata map[string][]string statsTags []byte statsTrace []byte } // isReservedHeader checks whether hdr belongs to HTTP2 headers // reserved by gRPC protocol. Any other headers are classified as the // user-specified metadata. func isReservedHeader(hdr string) bool { if hdr != "" && hdr[0] == ':' { return true } switch hdr { case "content-type", "grpc-message-type", "grpc-encoding", "grpc-message", "grpc-status", "grpc-timeout", "grpc-status-details-bin", "te": return true default: return false } } // isWhitelistedPseudoHeader checks whether hdr belongs to HTTP2 pseudoheaders // that should be propagated into metadata visible to users. func isWhitelistedPseudoHeader(hdr string) bool { switch hdr { case ":authority": return true default: return false } } func validContentType(t string) bool { e := "application/grpc" if !strings.HasPrefix(t, e) { return false } // Support variations on the content-type // (e.g. "application/grpc+blah", "application/grpc;blah"). 
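// Concretely (mirroring the cases in TestValidContentType in
// http_util_test.go):
//
//	"application/grpc", "application/grpc+blah", "application/grpc;blah" -> accepted
//	"application/grpcd", "application/grp"                               -> rejected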
if len(t) > len(e) && t[len(e)] != '+' && t[len(e)] != ';' { return false } return true } func (d *decodeState) status() *status.Status { if d.statusGen == nil { // No status-details were provided; generate status using code/msg. d.statusGen = status.New(codes.Code(int32(*(d.rawStatusCode))), d.rawStatusMsg) } return d.statusGen } const binHdrSuffix = "-bin" func encodeBinHeader(v []byte) string { return base64.RawStdEncoding.EncodeToString(v) } func decodeBinHeader(v string) ([]byte, error) { if len(v)%4 == 0 { // Input was padded, or padding was not necessary. return base64.StdEncoding.DecodeString(v) } return base64.RawStdEncoding.DecodeString(v) } func encodeMetadataHeader(k, v string) string { if strings.HasSuffix(k, binHdrSuffix) { return encodeBinHeader(([]byte)(v)) } return v } func decodeMetadataHeader(k, v string) (string, error) { if strings.HasSuffix(k, binHdrSuffix) { b, err := decodeBinHeader(v) return string(b), err } return v, nil } func (d *decodeState) decodeResponseHeader(frame *http2.MetaHeadersFrame) error { for _, hf := range frame.Fields { if err := d.processHeaderField(hf); err != nil { return err } } // If grpc status exists, no need to check further. if d.rawStatusCode != nil || d.statusGen != nil { return nil } // If grpc status doesn't exist and http status doesn't exist, // then it's a malformed header. if d.httpStatus == nil { return streamErrorf(codes.Internal, "malformed header: doesn't contain status(gRPC or HTTP)") } if *(d.httpStatus) != http.StatusOK { code, ok := httpStatusConvTab[*(d.httpStatus)] if !ok { code = codes.Unknown } return streamErrorf(code, http.StatusText(*(d.httpStatus))) } // gRPC status doesn't exist and http status is OK. // Set rawStatusCode to be unknown and return nil error. // So that, if the stream has ended this Unknown status // will be propogated to the user. // Otherwise, it will be ignored. In which case, status from // a later trailer, that has StreamEnded flag set, is propogated. 
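// For reference, when :status is not 200 the gRPC code is taken from
// httpStatusConvTab above:
//
//	:status 404                -> codes.Unimplemented
//	:status 429, 502, 503, 504 -> codes.Unavailable
//	unmapped statuses          -> codes.Unknown
//
// The fall-through below handles the remaining case: :status 200 with no
// grpc-status header.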
code := int(codes.Unknown) d.rawStatusCode = &code return nil } func (d *decodeState) addMetadata(k, v string) { if d.mdata == nil { d.mdata = make(map[string][]string) } d.mdata[k] = append(d.mdata[k], v) } func (d *decodeState) processHeaderField(f hpack.HeaderField) error { switch f.Name { case "content-type": if !validContentType(f.Value) { return streamErrorf(codes.FailedPrecondition, "transport: received the unexpected content-type %q", f.Value) } case "grpc-encoding": d.encoding = f.Value case "grpc-status": code, err := strconv.Atoi(f.Value) if err != nil { return streamErrorf(codes.Internal, "transport: malformed grpc-status: %v", err) } d.rawStatusCode = &code case "grpc-message": d.rawStatusMsg = decodeGrpcMessage(f.Value) case "grpc-status-details-bin": v, err := decodeBinHeader(f.Value) if err != nil { return streamErrorf(codes.Internal, "transport: malformed grpc-status-details-bin: %v", err) } s := &spb.Status{} if err := proto.Unmarshal(v, s); err != nil { return streamErrorf(codes.Internal, "transport: malformed grpc-status-details-bin: %v", err) } d.statusGen = status.FromProto(s) case "grpc-timeout": d.timeoutSet = true var err error if d.timeout, err = decodeTimeout(f.Value); err != nil { return streamErrorf(codes.Internal, "transport: malformed time-out: %v", err) } case ":path": d.method = f.Value case ":status": code, err := strconv.Atoi(f.Value) if err != nil { return streamErrorf(codes.Internal, "transport: malformed http-status: %v", err) } d.httpStatus = &code case "grpc-tags-bin": v, err := decodeBinHeader(f.Value) if err != nil { return streamErrorf(codes.Internal, "transport: malformed grpc-tags-bin: %v", err) } d.statsTags = v d.addMetadata(f.Name, string(v)) case "grpc-trace-bin": v, err := decodeBinHeader(f.Value) if err != nil { return streamErrorf(codes.Internal, "transport: malformed grpc-trace-bin: %v", err) } d.statsTrace = v d.addMetadata(f.Name, string(v)) default: if isReservedHeader(f.Name) && !isWhitelistedPseudoHeader(f.Name) { break } v, err := decodeMetadataHeader(f.Name, f.Value) if err != nil { errorf("Failed to decode metadata header (%q, %q): %v", f.Name, f.Value, err) return nil } d.addMetadata(f.Name, string(v)) } return nil } type timeoutUnit uint8 const ( hour timeoutUnit = 'H' minute timeoutUnit = 'M' second timeoutUnit = 'S' millisecond timeoutUnit = 'm' microsecond timeoutUnit = 'u' nanosecond timeoutUnit = 'n' ) func timeoutUnitToDuration(u timeoutUnit) (d time.Duration, ok bool) { switch u { case hour: return time.Hour, true case minute: return time.Minute, true case second: return time.Second, true case millisecond: return time.Millisecond, true case microsecond: return time.Microsecond, true case nanosecond: return time.Nanosecond, true default: } return } const maxTimeoutValue int64 = 100000000 - 1 // div does integer division and round-up the result. Note that this is // equivalent to (d+r-1)/r but has less chance to overflow. func div(d, r time.Duration) int64 { if m := d % r; m > 0 { return int64(d/r + 1) } return int64(d / r) } // TODO(zhaoq): It is the simplistic and not bandwidth efficient. Improve it. 
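// Examples of the wire format produced below (taken from TestTimeoutEncode in
// http_util_test.go); values are rounded up and kept to at most 8 digits plus
// a unit suffix:
//
//	12345678ns  -> "12345678n"
//	123456789ns -> "123457u"  (rounded up to the next microsecond)
//	123456789s  -> "2057614M" (rounded up to the next minute)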
func encodeTimeout(t time.Duration) string { if t <= 0 { return "0n" } if d := div(t, time.Nanosecond); d <= maxTimeoutValue { return strconv.FormatInt(d, 10) + "n" } if d := div(t, time.Microsecond); d <= maxTimeoutValue { return strconv.FormatInt(d, 10) + "u" } if d := div(t, time.Millisecond); d <= maxTimeoutValue { return strconv.FormatInt(d, 10) + "m" } if d := div(t, time.Second); d <= maxTimeoutValue { return strconv.FormatInt(d, 10) + "S" } if d := div(t, time.Minute); d <= maxTimeoutValue { return strconv.FormatInt(d, 10) + "M" } // Note that maxTimeoutValue * time.Hour > MaxInt64. return strconv.FormatInt(div(t, time.Hour), 10) + "H" } func decodeTimeout(s string) (time.Duration, error) { size := len(s) if size < 2 { return 0, fmt.Errorf("transport: timeout string is too short: %q", s) } unit := timeoutUnit(s[size-1]) d, ok := timeoutUnitToDuration(unit) if !ok { return 0, fmt.Errorf("transport: timeout unit is not recognized: %q", s) } t, err := strconv.ParseInt(s[:size-1], 10, 64) if err != nil { return 0, err } return d * time.Duration(t), nil } const ( spaceByte = ' ' tildaByte = '~' percentByte = '%' ) // encodeGrpcMessage is used to encode status code in header field // "grpc-message". // It checks to see if each individual byte in msg is an // allowable byte, and then either percent encoding or passing it through. // When percent encoding, the byte is converted into hexadecimal notation // with a '%' prepended. func encodeGrpcMessage(msg string) string { if msg == "" { return "" } lenMsg := len(msg) for i := 0; i < lenMsg; i++ { c := msg[i] if !(c >= spaceByte && c < tildaByte && c != percentByte) { return encodeGrpcMessageUnchecked(msg) } } return msg } func encodeGrpcMessageUnchecked(msg string) string { var buf bytes.Buffer lenMsg := len(msg) for i := 0; i < lenMsg; i++ { c := msg[i] if c >= spaceByte && c < tildaByte && c != percentByte { buf.WriteByte(c) } else { buf.WriteString(fmt.Sprintf("%%%02X", c)) } } return buf.String() } // decodeGrpcMessage decodes the msg encoded by encodeGrpcMessage. func decodeGrpcMessage(msg string) string { if msg == "" { return "" } lenMsg := len(msg) for i := 0; i < lenMsg; i++ { if msg[i] == percentByte && i+2 < lenMsg { return decodeGrpcMessageUnchecked(msg) } } return msg } func decodeGrpcMessageUnchecked(msg string) string { var buf bytes.Buffer lenMsg := len(msg) for i := 0; i < lenMsg; i++ { c := msg[i] if c == percentByte && i+2 < lenMsg { parsed, err := strconv.ParseUint(msg[i+1:i+3], 16, 8) if err != nil { buf.WriteByte(c) } else { buf.WriteByte(byte(parsed)) i += 2 } } else { buf.WriteByte(c) } } return buf.String() } type framer struct { numWriters int32 reader io.Reader writer *bufio.Writer fr *http2.Framer } func newFramer(conn net.Conn) *framer { f := &framer{ reader: bufio.NewReaderSize(conn, http2IOBufSize), writer: bufio.NewWriterSize(conn, http2IOBufSize), } f.fr = http2.NewFramer(f.writer, f.reader) // Opt-in to Frame reuse API on framer to reduce garbage. // Frames aren't safe to read from after a subsequent call to ReadFrame. f.fr.SetReuseFrames() f.fr.ReadMetaHeaders = hpack.NewDecoder(http2InitHeaderTableSize, nil) return f } func (f *framer) adjustNumWriters(i int32) int32 { return atomic.AddInt32(&f.numWriters, i) } // The following writeXXX functions can only be called when the caller gets // unblocked from writableChan channel (i.e., owns the privilege to write). 
func (f *framer) writeContinuation(forceFlush bool, streamID uint32, endHeaders bool, headerBlockFragment []byte) error { if err := f.fr.WriteContinuation(streamID, endHeaders, headerBlockFragment); err != nil { return err } if forceFlush { return f.writer.Flush() } return nil } func (f *framer) writeData(forceFlush bool, streamID uint32, endStream bool, data []byte) error { if err := f.fr.WriteData(streamID, endStream, data); err != nil { return err } if forceFlush { return f.writer.Flush() } return nil } func (f *framer) writeGoAway(forceFlush bool, maxStreamID uint32, code http2.ErrCode, debugData []byte) error { if err := f.fr.WriteGoAway(maxStreamID, code, debugData); err != nil { return err } if forceFlush { return f.writer.Flush() } return nil } func (f *framer) writeHeaders(forceFlush bool, p http2.HeadersFrameParam) error { if err := f.fr.WriteHeaders(p); err != nil { return err } if forceFlush { return f.writer.Flush() } return nil } func (f *framer) writePing(forceFlush, ack bool, data [8]byte) error { if err := f.fr.WritePing(ack, data); err != nil { return err } if forceFlush { return f.writer.Flush() } return nil } func (f *framer) writePriority(forceFlush bool, streamID uint32, p http2.PriorityParam) error { if err := f.fr.WritePriority(streamID, p); err != nil { return err } if forceFlush { return f.writer.Flush() } return nil } func (f *framer) writePushPromise(forceFlush bool, p http2.PushPromiseParam) error { if err := f.fr.WritePushPromise(p); err != nil { return err } if forceFlush { return f.writer.Flush() } return nil } func (f *framer) writeRSTStream(forceFlush bool, streamID uint32, code http2.ErrCode) error { if err := f.fr.WriteRSTStream(streamID, code); err != nil { return err } if forceFlush { return f.writer.Flush() } return nil } func (f *framer) writeSettings(forceFlush bool, settings ...http2.Setting) error { if err := f.fr.WriteSettings(settings...); err != nil { return err } if forceFlush { return f.writer.Flush() } return nil } func (f *framer) writeSettingsAck(forceFlush bool) error { if err := f.fr.WriteSettingsAck(); err != nil { return err } if forceFlush { return f.writer.Flush() } return nil } func (f *framer) writeWindowUpdate(forceFlush bool, streamID, incr uint32) error { if err := f.fr.WriteWindowUpdate(streamID, incr); err != nil { return err } if forceFlush { return f.writer.Flush() } return nil } func (f *framer) flushWrite() error { return f.writer.Flush() } func (f *framer) readFrame() (http2.Frame, error) { return f.fr.ReadFrame() } func (f *framer) errorDetail() error { return f.fr.ErrorDetail() } golang-google-grpc-1.6.0/transport/http_util_test.go000066400000000000000000000105421315416461300226340ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package transport import ( "fmt" "reflect" "testing" "time" ) func TestTimeoutEncode(t *testing.T) { for _, test := range []struct { in string out string }{ {"12345678ns", "12345678n"}, {"123456789ns", "123457u"}, {"12345678us", "12345678u"}, {"123456789us", "123457m"}, {"12345678ms", "12345678m"}, {"123456789ms", "123457S"}, {"12345678s", "12345678S"}, {"123456789s", "2057614M"}, {"12345678m", "12345678M"}, {"123456789m", "2057614H"}, } { d, err := time.ParseDuration(test.in) if err != nil { t.Fatalf("failed to parse duration string %s: %v", test.in, err) } out := encodeTimeout(d) if out != test.out { t.Fatalf("timeoutEncode(%s) = %s, want %s", test.in, out, test.out) } } } func TestTimeoutDecode(t *testing.T) { for _, test := range []struct { // input s string // output d time.Duration err error }{ {"1234S", time.Second * 1234, nil}, {"1234x", 0, fmt.Errorf("transport: timeout unit is not recognized: %q", "1234x")}, {"1", 0, fmt.Errorf("transport: timeout string is too short: %q", "1")}, {"", 0, fmt.Errorf("transport: timeout string is too short: %q", "")}, } { d, err := decodeTimeout(test.s) if d != test.d || fmt.Sprint(err) != fmt.Sprint(test.err) { t.Fatalf("timeoutDecode(%q) = %d, %v, want %d, %v", test.s, int64(d), err, int64(test.d), test.err) } } } func TestValidContentType(t *testing.T) { tests := []struct { h string want bool }{ {"application/grpc", true}, {"application/grpc+", true}, {"application/grpc+blah", true}, {"application/grpc;", true}, {"application/grpc;blah", true}, {"application/grpcd", false}, {"application/grpd", false}, {"application/grp", false}, } for _, tt := range tests { got := validContentType(tt.h) if got != tt.want { t.Errorf("validContentType(%q) = %v; want %v", tt.h, got, tt.want) } } } func TestEncodeGrpcMessage(t *testing.T) { for _, tt := range []struct { input string expected string }{ {"", ""}, {"Hello", "Hello"}, {"my favorite character is \u0000", "my favorite character is %00"}, {"my favorite character is %", "my favorite character is %25"}, } { actual := encodeGrpcMessage(tt.input) if tt.expected != actual { t.Errorf("encodeGrpcMessage(%v) = %v, want %v", tt.input, actual, tt.expected) } } } func TestDecodeGrpcMessage(t *testing.T) { for _, tt := range []struct { input string expected string }{ {"", ""}, {"Hello", "Hello"}, {"H%61o", "Hao"}, {"H%6", "H%6"}, {"%G0", "%G0"}, {"%E7%B3%BB%E7%BB%9F", "系统"}, } { actual := decodeGrpcMessage(tt.input) if tt.expected != actual { t.Errorf("dncodeGrpcMessage(%v) = %v, want %v", tt.input, actual, tt.expected) } } } const binaryValue = string(128) func TestEncodeMetadataHeader(t *testing.T) { for _, test := range []struct { // input kin string vin string // output vout string }{ {"key", "abc", "abc"}, {"KEY", "abc", "abc"}, {"key-bin", "abc", "YWJj"}, {"key-bin", binaryValue, "woA"}, } { v := encodeMetadataHeader(test.kin, test.vin) if !reflect.DeepEqual(v, test.vout) { t.Fatalf("encodeMetadataHeader(%q, %q) = %q, want %q", test.kin, test.vin, v, test.vout) } } } func TestDecodeMetadataHeader(t *testing.T) { for _, test := range []struct { // input kin string vin string // output vout string err error }{ {"a", "abc", "abc", nil}, {"key-bin", "Zm9vAGJhcg==", "foo\x00bar", nil}, {"key-bin", "Zm9vAGJhcg", "foo\x00bar", nil}, {"key-bin", "woA=", binaryValue, nil}, {"a", "abc,efg", "abc,efg", nil}, } { v, err := decodeMetadataHeader(test.kin, test.vin) if !reflect.DeepEqual(v, test.vout) || !reflect.DeepEqual(err, test.err) { t.Fatalf("decodeMetadataHeader(%q, %q) = %q, %v, want %q, %v", test.kin, test.vin, 
v, err, test.vout, test.err) } } } golang-google-grpc-1.6.0/transport/log.go000066400000000000000000000023661315416461300203470ustar00rootroot00000000000000/* * * Copyright 2017 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // This file contains wrappers for grpclog functions. // The transport package only logs to verbose level 2 by default. package transport import "google.golang.org/grpc/grpclog" const logLevel = 2 func infof(format string, args ...interface{}) { if grpclog.V(logLevel) { grpclog.Infof(format, args...) } } func warningf(format string, args ...interface{}) { if grpclog.V(logLevel) { grpclog.Warningf(format, args...) } } func errorf(format string, args ...interface{}) { if grpclog.V(logLevel) { grpclog.Errorf(format, args...) } } func fatalf(format string, args ...interface{}) { if grpclog.V(logLevel) { grpclog.Fatalf(format, args...) } } golang-google-grpc-1.6.0/transport/transport.go000066400000000000000000000521421315416461300216170ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // Package transport defines and implements message oriented communication // channel to complete various transactions (e.g., an RPC). package transport // import "google.golang.org/grpc/transport" import ( "fmt" "io" "net" "sync" "golang.org/x/net/context" "golang.org/x/net/http2" "google.golang.org/grpc/codes" "google.golang.org/grpc/credentials" "google.golang.org/grpc/keepalive" "google.golang.org/grpc/metadata" "google.golang.org/grpc/stats" "google.golang.org/grpc/status" "google.golang.org/grpc/tap" ) // recvMsg represents the received msg from the transport. All transport // protocol specific info has been removed. type recvMsg struct { data []byte // nil: received some data // io.EOF: stream is completed. data is nil. // other non-nil error: transport failure. data is nil. err error } // recvBuffer is an unbounded channel of recvMsg structs. // Note recvBuffer differs from controlBuffer only in that recvBuffer // holds a channel of only recvMsg structs instead of objects implementing "item" interface. 
// recvBuffer is written to much more often than // controlBuffer and using strict recvMsg structs helps avoid allocation in "recvBuffer.put" type recvBuffer struct { c chan recvMsg mu sync.Mutex backlog []recvMsg } func newRecvBuffer() *recvBuffer { b := &recvBuffer{ c: make(chan recvMsg, 1), } return b } func (b *recvBuffer) put(r recvMsg) { b.mu.Lock() defer b.mu.Unlock() if len(b.backlog) == 0 { select { case b.c <- r: return default: } } b.backlog = append(b.backlog, r) } func (b *recvBuffer) load() { b.mu.Lock() defer b.mu.Unlock() if len(b.backlog) > 0 { select { case b.c <- b.backlog[0]: b.backlog[0] = recvMsg{} b.backlog = b.backlog[1:] default: } } } // get returns the channel that receives a recvMsg in the buffer. // // Upon receipt of a recvMsg, the caller should call load to send another // recvMsg onto the channel if there is any. func (b *recvBuffer) get() <-chan recvMsg { return b.c } // recvBufferReader implements io.Reader interface to read the data from // recvBuffer. type recvBufferReader struct { ctx context.Context goAway chan struct{} recv *recvBuffer last []byte // Stores the remaining data in the previous calls. err error } // Read reads the next len(p) bytes from last. If last is drained, it tries to // read additional data from recv. It blocks if there no additional data available // in recv. If Read returns any non-nil error, it will continue to return that error. func (r *recvBufferReader) Read(p []byte) (n int, err error) { if r.err != nil { return 0, r.err } n, r.err = r.read(p) return n, r.err } func (r *recvBufferReader) read(p []byte) (n int, err error) { if r.last != nil && len(r.last) > 0 { // Read remaining data left in last call. copied := copy(p, r.last) r.last = r.last[copied:] return copied, nil } select { case <-r.ctx.Done(): return 0, ContextErr(r.ctx.Err()) case <-r.goAway: return 0, ErrStreamDrain case m := <-r.recv.get(): r.recv.load() if m.err != nil { return 0, m.err } copied := copy(p, m.data) r.last = m.data[copied:] return copied, nil } } // All items in an out of a controlBuffer should be the same type. type item interface { item() } // controlBuffer is an unbounded channel of item. type controlBuffer struct { c chan item mu sync.Mutex backlog []item } func newControlBuffer() *controlBuffer { b := &controlBuffer{ c: make(chan item, 1), } return b } func (b *controlBuffer) put(r item) { b.mu.Lock() defer b.mu.Unlock() if len(b.backlog) == 0 { select { case b.c <- r: return default: } } b.backlog = append(b.backlog, r) } func (b *controlBuffer) load() { b.mu.Lock() defer b.mu.Unlock() if len(b.backlog) > 0 { select { case b.c <- b.backlog[0]: b.backlog[0] = nil b.backlog = b.backlog[1:] default: } } } // get returns the channel that receives an item in the buffer. // // Upon receipt of an item, the caller should call load to send another // item onto the channel if there is any. func (b *controlBuffer) get() <-chan item { return b.c } type streamState uint8 const ( streamActive streamState = iota streamWriteDone // EndStream sent streamReadDone // EndStream received streamDone // the entire stream is finished. ) // Stream represents an RPC in the transport layer. type Stream struct { id uint32 // nil for client side Stream. st ServerTransport // ctx is the associated context of the stream. ctx context.Context // cancel is always nil for client side Stream. cancel context.CancelFunc // done is closed when the final status arrives. done chan struct{} // goAway is closed when the server sent GoAways signal before this stream was initiated. 
	goAway chan struct{}
	// method records the associated RPC method of the stream.
	method       string
	recvCompress string
	sendCompress string
	buf          *recvBuffer
	trReader     io.Reader
	fc           *inFlow
	recvQuota    uint32
	// TODO: Remove this unused variable.
	// The accumulated inbound quota pending for window update.
	updateQuota uint32
	// Callback to state application's intentions to read data. This
	// is used to adjust flow control, if need be.
	requestRead   func(int)
	sendQuotaPool *quotaPool
	// Close headerChan to indicate the end of reception of header metadata.
	headerChan chan struct{}
	// header caches the received header metadata.
	header metadata.MD
	// The key-value map of trailer metadata.
	trailer metadata.MD

	mu sync.RWMutex // guard the following
	// headerOk becomes true when the first header is about to be sent.
	headerOk bool
	state    streamState
	// true iff headerChan is closed. Used to avoid closing headerChan
	// multiple times.
	headerDone bool
	// the status error received from the server.
	status *status.Status
	// rstStream indicates whether a RST_STREAM frame needs to be sent
	// to the server to signify that this stream is closing.
	rstStream bool
	// rstError is the error that needs to be sent along with the RST_STREAM frame.
	rstError http2.ErrCode
	// bytesSent and bytesReceived indicate whether any bytes have been sent or
	// received on this stream.
	bytesSent     bool
	bytesReceived bool
}

// RecvCompress returns the compression algorithm applied to the inbound
// message. It is an empty string if there is no compression applied.
func (s *Stream) RecvCompress() string {
	return s.recvCompress
}

// SetSendCompress sets the compression algorithm to the stream.
func (s *Stream) SetSendCompress(str string) {
	s.sendCompress = str
}

// Done returns a channel which is closed when it receives the final status
// from the server.
func (s *Stream) Done() <-chan struct{} {
	return s.done
}

// GoAway returns a channel which is closed when the server sent a GoAway signal
// before this stream was initiated.
func (s *Stream) GoAway() <-chan struct{} {
	return s.goAway
}

// Header acquires the key-value pairs of header metadata once they are
// available. It blocks until i) the metadata is ready or ii) there is no
// header metadata or iii) the stream is canceled/expired.
func (s *Stream) Header() (metadata.MD, error) {
	var err error
	select {
	case <-s.ctx.Done():
		err = ContextErr(s.ctx.Err())
	case <-s.goAway:
		err = ErrStreamDrain
	case <-s.headerChan:
		return s.header.Copy(), nil
	}
	// Even if the stream is closed, header is returned if available.
	select {
	case <-s.headerChan:
		return s.header.Copy(), nil
	default:
	}
	return nil, err
}

// Trailer returns the cached trailer metadata. Note that if it is not called
// after the entire stream is done, it could return an empty MD. Client
// side only.
func (s *Stream) Trailer() metadata.MD {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.trailer.Copy()
}

// ServerTransport returns the underlying ServerTransport for the stream.
// The client side stream always returns nil.
func (s *Stream) ServerTransport() ServerTransport {
	return s.st
}

// Context returns the context of the stream.
func (s *Stream) Context() context.Context {
	return s.ctx
}

// Method returns the method for the stream.
func (s *Stream) Method() string {
	return s.method
}

// Status returns the status received from the server.
func (s *Stream) Status() *status.Status {
	return s.status
}

// SetHeader sets the header metadata. This can be called multiple times.
// Server side only.
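// NOTE: Illustrative sketch, not part of the original source. It shows a
// typical server-side response flow on a Stream: cache header metadata with
// SetHeader, send the reply, then finish with WriteStatus (the Write and
// WriteStatus sequence mirrors the test handlers in transport_test.go). The
// function name and the st, s, md and resp parameters are assumed inputs.
func exampleServerReply(st ServerTransport, s *Stream, md metadata.MD, resp []byte) error {
	// Cached header metadata is sent along with the first headers frame.
	if err := s.SetHeader(md); err != nil {
		return err
	}
	if err := st.Write(s, resp, nil, &Options{}); err != nil {
		return err
	}
	// WriteStatus is the final call made on the stream.
	return st.WriteStatus(s, status.New(codes.OK, ""))
}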
func (s *Stream) SetHeader(md metadata.MD) error { s.mu.Lock() defer s.mu.Unlock() if s.headerOk || s.state == streamDone { return ErrIllegalHeaderWrite } if md.Len() == 0 { return nil } s.header = metadata.Join(s.header, md) return nil } // SetTrailer sets the trailer metadata which will be sent with the RPC status // by the server. This can be called multiple times. Server side only. func (s *Stream) SetTrailer(md metadata.MD) error { if md.Len() == 0 { return nil } s.mu.Lock() defer s.mu.Unlock() s.trailer = metadata.Join(s.trailer, md) return nil } func (s *Stream) write(m recvMsg) { s.buf.put(m) } // Read reads all p bytes from the wire for this stream. func (s *Stream) Read(p []byte) (n int, err error) { // Don't request a read if there was an error earlier if er := s.trReader.(*transportReader).er; er != nil { return 0, er } s.requestRead(len(p)) return io.ReadFull(s.trReader, p) } // tranportReader reads all the data available for this Stream from the transport and // passes them into the decoder, which converts them into a gRPC message stream. // The error is io.EOF when the stream is done or another non-nil error if // the stream broke. type transportReader struct { reader io.Reader // The handler to control the window update procedure for both this // particular stream and the associated transport. windowHandler func(int) er error } func (t *transportReader) Read(p []byte) (n int, err error) { n, err = t.reader.Read(p) if err != nil { t.er = err return } t.windowHandler(n) return } // finish sets the stream's state and status, and closes the done channel. // s.mu must be held by the caller. st must always be non-nil. func (s *Stream) finish(st *status.Status) { s.status = st s.state = streamDone close(s.done) } // BytesSent indicates whether any bytes have been sent on this stream. func (s *Stream) BytesSent() bool { s.mu.Lock() defer s.mu.Unlock() return s.bytesSent } // BytesReceived indicates whether any bytes have been received on this stream. func (s *Stream) BytesReceived() bool { s.mu.Lock() defer s.mu.Unlock() return s.bytesReceived } // GoString is implemented by Stream so context.String() won't // race when printing %#v. func (s *Stream) GoString() string { return fmt.Sprintf("", s, s.method) } // The key to save transport.Stream in the context. type streamKey struct{} // newContextWithStream creates a new context from ctx and attaches stream // to it. func newContextWithStream(ctx context.Context, stream *Stream) context.Context { return context.WithValue(ctx, streamKey{}, stream) } // StreamFromContext returns the stream saved in ctx. func StreamFromContext(ctx context.Context) (s *Stream, ok bool) { s, ok = ctx.Value(streamKey{}).(*Stream) return } // state of transport type transportState int const ( reachable transportState = iota unreachable closing draining ) // ServerConfig consists of all the configurations to establish a server transport. type ServerConfig struct { MaxStreams uint32 AuthInfo credentials.AuthInfo InTapHandle tap.ServerInHandle StatsHandler stats.Handler KeepaliveParams keepalive.ServerParameters KeepalivePolicy keepalive.EnforcementPolicy InitialWindowSize int32 InitialConnWindowSize int32 } // NewServerTransport creates a ServerTransport with conn or non-nil error // if it fails. func NewServerTransport(protocol string, conn net.Conn, config *ServerConfig) (ServerTransport, error) { return newHTTP2Server(conn, config) } // ConnectOptions covers all relevant options for communicating with the server. 
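// NOTE: Illustrative sketch, not part of the original source. It mirrors the
// accept loop used by the test server in transport_test.go: each accepted
// net.Conn is wrapped in a ServerTransport via NewServerTransport, whose
// streams are then served by HandleStreams. The function name and the lis,
// config and handle parameters are assumed inputs.
func exampleServeLoop(lis net.Listener, config *ServerConfig, handle func(*Stream)) error {
	for {
		conn, err := lis.Accept()
		if err != nil {
			return err
		}
		st, err := NewServerTransport("http2", conn, config)
		if err != nil {
			conn.Close() // drop this connection, keep serving
			continue
		}
		go st.HandleStreams(handle, func(ctx context.Context, method string) context.Context {
			return ctx
		})
	}
}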
type ConnectOptions struct { // UserAgent is the application user agent. UserAgent string // Authority is the :authority pseudo-header to use. This field has no effect if // TransportCredentials is set. Authority string // Dialer specifies how to dial a network address. Dialer func(context.Context, string) (net.Conn, error) // FailOnNonTempDialError specifies if gRPC fails on non-temporary dial errors. FailOnNonTempDialError bool // PerRPCCredentials stores the PerRPCCredentials required to issue RPCs. PerRPCCredentials []credentials.PerRPCCredentials // TransportCredentials stores the Authenticator required to setup a client connection. TransportCredentials credentials.TransportCredentials // KeepaliveParams stores the keepalive parameters. KeepaliveParams keepalive.ClientParameters // StatsHandler stores the handler for stats. StatsHandler stats.Handler // InitialWindowSize sets the intial window size for a stream. InitialWindowSize int32 // InitialConnWindowSize sets the intial window size for a connection. InitialConnWindowSize int32 } // TargetInfo contains the information of the target such as network address and metadata. type TargetInfo struct { Addr string Metadata interface{} } // NewClientTransport establishes the transport with the required ConnectOptions // and returns it to the caller. func NewClientTransport(ctx context.Context, target TargetInfo, opts ConnectOptions) (ClientTransport, error) { return newHTTP2Client(ctx, target, opts) } // Options provides additional hints and information for message // transmission. type Options struct { // Last indicates whether this write is the last piece for // this stream. Last bool // Delay is a hint to the transport implementation for whether // the data could be buffered for a batching write. The // Transport implementation may ignore the hint. Delay bool } // CallHdr carries the information of a particular RPC. type CallHdr struct { // Host specifies the peer's host. Host string // Method specifies the operation to perform. Method string // RecvCompress specifies the compression algorithm applied on // inbound messages. RecvCompress string // SendCompress specifies the compression algorithm applied on // outbound message. SendCompress string // Creds specifies credentials.PerRPCCredentials for a call. Creds credentials.PerRPCCredentials // Flush indicates whether a new stream command should be sent // to the peer without waiting for the first data. This is // only a hint. // If it's true, the transport may modify the flush decision // for performance purposes. // If it's false, new stream will never be flushed. Flush bool } // ClientTransport is the common interface for all gRPC client-side transport // implementations. type ClientTransport interface { // Close tears down this transport. Once it returns, the transport // should not be accessed any more. The caller must make sure this // is called only once. Close() error // GracefulClose starts to tear down the transport. It stops accepting // new RPCs and wait the completion of the pending RPCs. GracefulClose() error // Write sends the data for the given stream. A nil stream indicates // the write is to be performed on the transport as a whole. Write(s *Stream, hdr []byte, data []byte, opts *Options) error // NewStream creates a Stream for an RPC. NewStream(ctx context.Context, callHdr *CallHdr) (*Stream, error) // CloseStream clears the footprint of a stream when the stream is // not needed any more. The err indicates the error incurred when // CloseStream is called. 
Must be called when a stream is finished // unless the associated transport is closing. CloseStream(stream *Stream, err error) // Error returns a channel that is closed when some I/O error // happens. Typically the caller should have a goroutine to monitor // this in order to take action (e.g., close the current transport // and create a new one) in error case. It should not return nil // once the transport is initiated. Error() <-chan struct{} // GoAway returns a channel that is closed when ClientTransport // receives the draining signal from the server (e.g., GOAWAY frame in // HTTP/2). GoAway() <-chan struct{} // GetGoAwayReason returns the reason why GoAway frame was received. GetGoAwayReason() GoAwayReason } // ServerTransport is the common interface for all gRPC server-side transport // implementations. // // Methods may be called concurrently from multiple goroutines, but // Write methods for a given Stream will be called serially. type ServerTransport interface { // HandleStreams receives incoming streams using the given handler. HandleStreams(func(*Stream), func(context.Context, string) context.Context) // WriteHeader sends the header metadata for the given stream. // WriteHeader may not be called on all streams. WriteHeader(s *Stream, md metadata.MD) error // Write sends the data for the given stream. // Write may not be called on all streams. Write(s *Stream, hdr []byte, data []byte, opts *Options) error // WriteStatus sends the status of a stream to the client. WriteStatus is // the final call made on a stream and always occurs. WriteStatus(s *Stream, st *status.Status) error // Close tears down the transport. Once it is called, the transport // should not be accessed any more. All the pending streams and their // handlers will be terminated asynchronously. Close() error // RemoteAddr returns the remote network address. RemoteAddr() net.Addr // Drain notifies the client this ServerTransport stops accepting new RPCs. Drain() } // streamErrorf creates an StreamError with the specified error code and description. func streamErrorf(c codes.Code, format string, a ...interface{}) StreamError { return StreamError{ Code: c, Desc: fmt.Sprintf(format, a...), } } // connectionErrorf creates an ConnectionError with the specified error description. func connectionErrorf(temp bool, e error, format string, a ...interface{}) ConnectionError { return ConnectionError{ Desc: fmt.Sprintf(format, a...), temp: temp, err: e, } } // ConnectionError is an error that results in the termination of the // entire connection and the retry of all the active streams. type ConnectionError struct { Desc string temp bool err error } func (e ConnectionError) Error() string { return fmt.Sprintf("connection error: desc = %q", e.Desc) } // Temporary indicates if this connection error is temporary or fatal. func (e ConnectionError) Temporary() bool { return e.temp } // Origin returns the original error of this connection error. func (e ConnectionError) Origin() error { // Never return nil error here. // If the original error is nil, return itself. if e.err == nil { return e } return e.err } var ( // ErrConnClosing indicates that the transport is closing. ErrConnClosing = connectionErrorf(true, nil, "transport is closing") // ErrStreamDrain indicates that the stream is rejected by the server because // the server stops accepting new RPCs. ErrStreamDrain = streamErrorf(codes.Unavailable, "the server stops accepting new RPCs") ) // TODO: See if we can replace StreamError with status package errors. 
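// NOTE: Illustrative sketch, not part of the original source. It shows one
// way a caller might distinguish the two error kinds defined around here:
// a ConnectionError terminates the whole transport and is retryable only if
// Temporary() reports so, while a StreamError affects a single RPC and
// carries its own gRPC code (for example, ErrStreamDrain above uses
// codes.Unavailable). The function name and the codes.Unavailable mapping for
// broken transports are assumptions of this sketch.
func exampleClassifyError(err error) (retryable bool, code codes.Code) {
	switch e := err.(type) {
	case ConnectionError:
		// The whole transport is broken; retry only if it is temporary.
		return e.Temporary(), codes.Unavailable
	case StreamError:
		// Only this RPC failed; surface its gRPC code.
		return false, e.Code
	default:
		return false, codes.Unknown
	}
}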
// StreamError is an error that only affects one stream within a connection. type StreamError struct { Code codes.Code Desc string } func (e StreamError) Error() string { return fmt.Sprintf("stream error: code = %s desc = %q", e.Code, e.Desc) } // wait blocks until it can receive from ctx.Done, closing, or proceed. // If it receives from ctx.Done, it returns 0, the StreamError for ctx.Err. // If it receives from done, it returns 0, io.EOF if ctx is not done; otherwise // it return the StreamError for ctx.Err. // If it receives from goAway, it returns 0, ErrStreamDrain. // If it receives from closing, it returns 0, ErrConnClosing. // If it receives from proceed, it returns the received integer, nil. func wait(ctx context.Context, done, goAway, closing <-chan struct{}, proceed <-chan int) (int, error) { select { case <-ctx.Done(): return 0, ContextErr(ctx.Err()) case <-done: // User cancellation has precedence. select { case <-ctx.Done(): return 0, ContextErr(ctx.Err()) default: } return 0, io.EOF case <-goAway: return 0, ErrStreamDrain case <-closing: return 0, ErrConnClosing case i := <-proceed: return i, nil } } // GoAwayReason contains the reason for the GoAway frame received. type GoAwayReason uint8 const ( // Invalid indicates that no GoAway frame is received. Invalid GoAwayReason = 0 // NoReason is the default value when GoAway frame is received. NoReason GoAwayReason = 1 // TooManyPings indicates that a GoAway frame with ErrCodeEnhanceYourCalm // was recieved and that the debug data said "too_many_pings". TooManyPings GoAwayReason = 2 ) golang-google-grpc-1.6.0/transport/transport_test.go000066400000000000000000001735441315416461300226700ustar00rootroot00000000000000/* * * Copyright 2014 gRPC authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* */ package transport import ( "bufio" "bytes" "encoding/binary" "errors" "fmt" "io" "math" "net" "net/http" "reflect" "strconv" "strings" "sync" "testing" "time" "golang.org/x/net/context" "golang.org/x/net/http2" "golang.org/x/net/http2/hpack" "google.golang.org/grpc/codes" "google.golang.org/grpc/keepalive" "google.golang.org/grpc/status" ) type server struct { lis net.Listener port string startedErr chan error // error (or nil) with server start value mu sync.Mutex conns map[ServerTransport]bool } var ( expectedRequest = []byte("ping") expectedResponse = []byte("pong") expectedRequestLarge = make([]byte, initialWindowSize*2) expectedResponseLarge = make([]byte, initialWindowSize*2) expectedInvalidHeaderField = "invalid/content-type" ) type testStreamHandler struct { t *http2Server } type hType int const ( normal hType = iota suspended misbehaved encodingRequiredStatus invalidHeaderField delayRead delayWrite pingpong ) func (h *testStreamHandler) handleStream(t *testing.T, s *Stream) { req := expectedRequest resp := expectedResponse if s.Method() == "foo.Large" { req = expectedRequestLarge resp = expectedResponseLarge } p := make([]byte, len(req)) _, err := s.Read(p) if err != nil { return } if !bytes.Equal(p, req) { t.Fatalf("handleStream got %v, want %v", p, req) } // send a response back to the client. h.t.Write(s, resp, nil, &Options{}) // send the trailer to end the stream. h.t.WriteStatus(s, status.New(codes.OK, "")) } func (h *testStreamHandler) handleStreamPingPong(t *testing.T, s *Stream) { header := make([]byte, 5) for i := 0; i < 10; i++ { if _, err := s.Read(header); err != nil { t.Fatalf("Error on server while reading data header: %v", err) } sz := binary.BigEndian.Uint32(header[1:]) msg := make([]byte, int(sz)) if _, err := s.Read(msg); err != nil { t.Fatalf("Error on server while reading message: %v", err) } buf := make([]byte, sz+5) buf[0] = byte(0) binary.BigEndian.PutUint32(buf[1:], uint32(sz)) copy(buf[5:], msg) h.t.Write(s, buf, nil, &Options{}) } } func (h *testStreamHandler) handleStreamMisbehave(t *testing.T, s *Stream) { conn, ok := s.ServerTransport().(*http2Server) if !ok { t.Fatalf("Failed to convert %v to *http2Server", s.ServerTransport()) } var sent int p := make([]byte, http2MaxFrameLen) for sent < initialWindowSize { <-conn.writableChan n := initialWindowSize - sent // The last message may be smaller than http2MaxFrameLen if n <= http2MaxFrameLen { if s.Method() == "foo.Connection" { // Violate connection level flow control window of client but do not // violate any stream level windows. p = make([]byte, n) } else { // Violate stream level flow control window of client. p = make([]byte, n+1) } } if err := conn.framer.writeData(true, s.id, false, p); err != nil { conn.writableChan <- 0 break } conn.writableChan <- 0 sent += len(p) } } func (h *testStreamHandler) handleStreamEncodingRequiredStatus(t *testing.T, s *Stream) { // raw newline is not accepted by http2 framer so it must be encoded. 
h.t.WriteStatus(s, encodingTestStatus) } func (h *testStreamHandler) handleStreamInvalidHeaderField(t *testing.T, s *Stream) { <-h.t.writableChan h.t.hBuf.Reset() h.t.hEnc.WriteField(hpack.HeaderField{Name: "content-type", Value: expectedInvalidHeaderField}) if err := h.t.writeHeaders(s, h.t.hBuf, false); err != nil { t.Fatalf("Failed to write headers: %v", err) } h.t.writableChan <- 0 } func (h *testStreamHandler) handleStreamDelayRead(t *testing.T, s *Stream) { req := expectedRequest resp := expectedResponse if s.Method() == "foo.Large" { req = expectedRequestLarge resp = expectedResponseLarge } p := make([]byte, len(req)) // Wait before reading. Give time to client to start sending // before server starts reading. time.Sleep(2 * time.Second) _, err := s.Read(p) if err != nil { t.Fatalf("s.Read(_) = _, %v, want _, ", err) return } if !bytes.Equal(p, req) { t.Fatalf("handleStream got %v, want %v", p, req) } // send a response back to the client. h.t.Write(s, resp, nil, &Options{}) // send the trailer to end the stream. h.t.WriteStatus(s, status.New(codes.OK, "")) } func (h *testStreamHandler) handleStreamDelayWrite(t *testing.T, s *Stream) { req := expectedRequest resp := expectedResponse if s.Method() == "foo.Large" { req = expectedRequestLarge resp = expectedResponseLarge } p := make([]byte, len(req)) _, err := s.Read(p) if err != nil { t.Fatalf("s.Read(_) = _, %v, want _, ", err) return } if !bytes.Equal(p, req) { t.Fatalf("handleStream got %v, want %v", p, req) } // Wait before sending. Give time to client to start reading // before server starts sending. time.Sleep(2 * time.Second) h.t.Write(s, resp, nil, &Options{}) // send the trailer to end the stream. h.t.WriteStatus(s, status.New(codes.OK, "")) } // start starts server. Other goroutines should block on s.readyChan for further operations. func (s *server) start(t *testing.T, port int, serverConfig *ServerConfig, ht hType) { var err error if port == 0 { s.lis, err = net.Listen("tcp", "localhost:0") } else { s.lis, err = net.Listen("tcp", "localhost:"+strconv.Itoa(port)) } if err != nil { s.startedErr <- fmt.Errorf("failed to listen: %v", err) return } _, p, err := net.SplitHostPort(s.lis.Addr().String()) if err != nil { s.startedErr <- fmt.Errorf("failed to parse listener address: %v", err) return } s.port = p s.conns = make(map[ServerTransport]bool) s.startedErr <- nil for { conn, err := s.lis.Accept() if err != nil { return } transport, err := NewServerTransport("http2", conn, serverConfig) if err != nil { return } s.mu.Lock() if s.conns == nil { s.mu.Unlock() transport.Close() return } s.conns[transport] = true s.mu.Unlock() h := &testStreamHandler{transport.(*http2Server)} switch ht { case suspended: go transport.HandleStreams(func(*Stream) {}, // Do nothing to handle the stream. 
func(ctx context.Context, method string) context.Context { return ctx }) case misbehaved: go transport.HandleStreams(func(s *Stream) { go h.handleStreamMisbehave(t, s) }, func(ctx context.Context, method string) context.Context { return ctx }) case encodingRequiredStatus: go transport.HandleStreams(func(s *Stream) { go h.handleStreamEncodingRequiredStatus(t, s) }, func(ctx context.Context, method string) context.Context { return ctx }) case invalidHeaderField: go transport.HandleStreams(func(s *Stream) { go h.handleStreamInvalidHeaderField(t, s) }, func(ctx context.Context, method string) context.Context { return ctx }) case delayRead: go transport.HandleStreams(func(s *Stream) { go h.handleStreamDelayRead(t, s) }, func(ctx context.Context, method string) context.Context { return ctx }) case delayWrite: go transport.HandleStreams(func(s *Stream) { go h.handleStreamDelayWrite(t, s) }, func(ctx context.Context, method string) context.Context { return ctx }) case pingpong: go transport.HandleStreams(func(s *Stream) { go h.handleStreamPingPong(t, s) }, func(ctx context.Context, method string) context.Context { return ctx }) default: go transport.HandleStreams(func(s *Stream) { go h.handleStream(t, s) }, func(ctx context.Context, method string) context.Context { return ctx }) } } } func (s *server) wait(t *testing.T, timeout time.Duration) { select { case err := <-s.startedErr: if err != nil { t.Fatal(err) } case <-time.After(timeout): t.Fatalf("Timed out after %v waiting for server to be ready", timeout) } } func (s *server) stop() { s.lis.Close() s.mu.Lock() for c := range s.conns { c.Close() } s.conns = nil s.mu.Unlock() } func setUp(t *testing.T, port int, maxStreams uint32, ht hType) (*server, ClientTransport) { return setUpWithOptions(t, port, &ServerConfig{MaxStreams: maxStreams}, ht, ConnectOptions{}) } func setUpWithOptions(t *testing.T, port int, serverConfig *ServerConfig, ht hType, copts ConnectOptions) (*server, ClientTransport) { server := &server{startedErr: make(chan error, 1)} go server.start(t, port, serverConfig, ht) server.wait(t, 2*time.Second) addr := "localhost:" + server.port var ( ct ClientTransport connErr error ) target := TargetInfo{ Addr: addr, } ct, connErr = NewClientTransport(context.Background(), target, copts) if connErr != nil { t.Fatalf("failed to create transport: %v", connErr) } return server, ct } func setUpWithNoPingServer(t *testing.T, copts ConnectOptions, done chan net.Conn) ClientTransport { lis, err := net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Failed to listen: %v", err) } // Launch a non responsive server. go func() { defer lis.Close() conn, err := lis.Accept() if err != nil { t.Errorf("Error at server-side while accepting: %v", err) close(done) return } done <- conn }() tr, err := NewClientTransport(context.Background(), TargetInfo{Addr: lis.Addr().String()}, copts) if err != nil { // Server clean-up. lis.Close() if conn, ok := <-done; ok { conn.Close() } t.Fatalf("Failed to dial: %v", err) } return tr } // TestInflightStreamClosing ensures that closing in-flight stream // sends StreamError to concurrent stream reader. 
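// NOTE: Illustrative sketch, not part of the original source. It condenses
// the client-side round trip exercised by TestClientSendAndReceive later in
// this file: open a stream, write the request as the last message, then read
// the response (a further Read would report io.EOF once the trailers arrive).
// The function name is hypothetical.
func exampleUnaryRoundTrip(ct ClientTransport) ([]byte, error) {
	s, err := ct.NewStream(context.Background(), &CallHdr{Host: "localhost", Method: "foo.Small"})
	if err != nil {
		return nil, err
	}
	if err := ct.Write(s, expectedRequest, nil, &Options{Last: true}); err != nil && err != io.EOF {
		return nil, err
	}
	p := make([]byte, len(expectedResponse))
	if _, err := s.Read(p); err != nil {
		return nil, err
	}
	return p, nil
}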
func TestInflightStreamClosing(t *testing.T) { serverConfig := &ServerConfig{} server, client := setUpWithOptions(t, 0, serverConfig, suspended, ConnectOptions{}) defer server.stop() defer client.Close() stream, err := client.NewStream(context.Background(), &CallHdr{}) if err != nil { t.Fatalf("Client failed to create RPC request: %v", err) } donec := make(chan struct{}) serr := StreamError{Desc: "client connection is closing"} go func() { defer close(donec) if _, err := stream.Read(make([]byte, defaultWindowSize)); err != serr { t.Errorf("unexpected Stream error %v, expected %v", err, serr) } }() // should unblock concurrent stream.Read client.CloseStream(stream, serr) // wait for stream.Read error timeout := time.NewTimer(5 * time.Second) select { case <-donec: if !timeout.Stop() { <-timeout.C } case <-timeout.C: t.Fatalf("Test timed out, expected a StreamError.") } } // TestMaxConnectionIdle tests that a server will send GoAway to a idle client. // An idle client is one who doesn't make any RPC calls for a duration of // MaxConnectionIdle time. func TestMaxConnectionIdle(t *testing.T) { serverConfig := &ServerConfig{ KeepaliveParams: keepalive.ServerParameters{ MaxConnectionIdle: 2 * time.Second, }, } server, client := setUpWithOptions(t, 0, serverConfig, suspended, ConnectOptions{}) defer server.stop() defer client.Close() stream, err := client.NewStream(context.Background(), &CallHdr{Flush: true}) if err != nil { t.Fatalf("Client failed to create RPC request: %v", err) } stream.mu.Lock() stream.rstStream = true stream.mu.Unlock() client.CloseStream(stream, nil) // wait for server to see that closed stream and max-age logic to send goaway after no new RPCs are mode timeout := time.NewTimer(time.Second * 4) select { case <-client.GoAway(): if !timeout.Stop() { <-timeout.C } case <-timeout.C: t.Fatalf("Test timed out, expected a GoAway from the server.") } } // TestMaxConenctionIdleNegative tests that a server will not send GoAway to a non-idle(busy) client. func TestMaxConnectionIdleNegative(t *testing.T) { serverConfig := &ServerConfig{ KeepaliveParams: keepalive.ServerParameters{ MaxConnectionIdle: 2 * time.Second, }, } server, client := setUpWithOptions(t, 0, serverConfig, suspended, ConnectOptions{}) defer server.stop() defer client.Close() _, err := client.NewStream(context.Background(), &CallHdr{Flush: true}) if err != nil { t.Fatalf("Client failed to create RPC request: %v", err) } timeout := time.NewTimer(time.Second * 4) select { case <-client.GoAway(): if !timeout.Stop() { <-timeout.C } t.Fatalf("A non-idle client received a GoAway.") case <-timeout.C: } } // TestMaxConnectionAge tests that a server will send GoAway after a duration of MaxConnectionAge. func TestMaxConnectionAge(t *testing.T) { serverConfig := &ServerConfig{ KeepaliveParams: keepalive.ServerParameters{ MaxConnectionAge: 2 * time.Second, }, } server, client := setUpWithOptions(t, 0, serverConfig, suspended, ConnectOptions{}) defer server.stop() defer client.Close() _, err := client.NewStream(context.Background(), &CallHdr{}) if err != nil { t.Fatalf("Client failed to create stream: %v", err) } // Wait for max-age logic to send GoAway. timeout := time.NewTimer(4 * time.Second) select { case <-client.GoAway(): if !timeout.Stop() { <-timeout.C } case <-timeout.C: t.Fatalf("Test timer out, expected a GoAway from the server.") } } // TestKeepaliveServer tests that a server closes connection with a client that doesn't respond to keepalive pings. 
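// NOTE: Illustrative sketch, not part of the original source. The keepalive
// tests below pair a pinging client with either a silent or an enforcing
// server; this helper shows the client-side configuration they typically use:
// ping after Time of inactivity, wait Timeout for the ack, and (optionally)
// ping even when no RPCs are active. The function name is hypothetical.
func exampleKeepaliveClientOptions() ConnectOptions {
	return ConnectOptions{KeepaliveParams: keepalive.ClientParameters{
		Time:                2 * time.Second, // send a ping after 2s of inactivity
		Timeout:             1 * time.Second, // wait 1s for the ping ack
		PermitWithoutStream: true,            // keep pinging even with no active RPCs
	}}
}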
func TestKeepaliveServer(t *testing.T) { serverConfig := &ServerConfig{ KeepaliveParams: keepalive.ServerParameters{ Time: 2 * time.Second, Timeout: 1 * time.Second, }, } server, c := setUpWithOptions(t, 0, serverConfig, suspended, ConnectOptions{}) defer server.stop() defer c.Close() client, err := net.Dial("tcp", server.lis.Addr().String()) if err != nil { t.Fatalf("Failed to dial: %v", err) } defer client.Close() // Set read deadline on client conn so that it doesn't block forever in errorsome cases. client.SetReadDeadline(time.Now().Add(10 * time.Second)) // Wait for keepalive logic to close the connection. time.Sleep(4 * time.Second) b := make([]byte, 24) for { _, err = client.Read(b) if err == nil { continue } if err != io.EOF { t.Fatalf("client.Read(_) = _,%v, want io.EOF", err) } break } } // TestKeepaliveServerNegative tests that a server doesn't close connection with a client that responds to keepalive pings. func TestKeepaliveServerNegative(t *testing.T) { serverConfig := &ServerConfig{ KeepaliveParams: keepalive.ServerParameters{ Time: 2 * time.Second, Timeout: 1 * time.Second, }, } server, client := setUpWithOptions(t, 0, serverConfig, suspended, ConnectOptions{}) defer server.stop() defer client.Close() // Give keepalive logic some time by sleeping. time.Sleep(4 * time.Second) // Assert that client is still active. clientTr := client.(*http2Client) clientTr.mu.Lock() defer clientTr.mu.Unlock() if clientTr.state != reachable { t.Fatalf("Test failed: Expected server-client connection to be healthy.") } } func TestKeepaliveClientClosesIdleTransport(t *testing.T) { done := make(chan net.Conn, 1) tr := setUpWithNoPingServer(t, ConnectOptions{KeepaliveParams: keepalive.ClientParameters{ Time: 2 * time.Second, // Keepalive time = 2 sec. Timeout: 1 * time.Second, // Keepalive timeout = 1 sec. PermitWithoutStream: true, // Run keepalive even with no RPCs. }}, done) defer tr.Close() conn, ok := <-done if !ok { t.Fatalf("Server didn't return connection object") } defer conn.Close() // Sleep for keepalive to close the connection. time.Sleep(4 * time.Second) // Assert that the connection was closed. ct := tr.(*http2Client) ct.mu.Lock() defer ct.mu.Unlock() if ct.state == reachable { t.Fatalf("Test Failed: Expected client transport to have closed.") } } func TestKeepaliveClientStaysHealthyOnIdleTransport(t *testing.T) { done := make(chan net.Conn, 1) tr := setUpWithNoPingServer(t, ConnectOptions{KeepaliveParams: keepalive.ClientParameters{ Time: 2 * time.Second, // Keepalive time = 2 sec. Timeout: 1 * time.Second, // Keepalive timeout = 1 sec. }}, done) defer tr.Close() conn, ok := <-done if !ok { t.Fatalf("server didn't reutrn connection object") } defer conn.Close() // Give keepalive some time. time.Sleep(4 * time.Second) // Assert that connections is still healthy. ct := tr.(*http2Client) ct.mu.Lock() defer ct.mu.Unlock() if ct.state != reachable { t.Fatalf("Test failed: Expected client transport to be healthy.") } } func TestKeepaliveClientClosesWithActiveStreams(t *testing.T) { done := make(chan net.Conn, 1) tr := setUpWithNoPingServer(t, ConnectOptions{KeepaliveParams: keepalive.ClientParameters{ Time: 2 * time.Second, // Keepalive time = 2 sec. Timeout: 1 * time.Second, // Keepalive timeout = 1 sec. }}, done) defer tr.Close() conn, ok := <-done if !ok { t.Fatalf("Server didn't return connection object") } defer conn.Close() // Create a stream. 
_, err := tr.NewStream(context.Background(), &CallHdr{Flush: true}) if err != nil { t.Fatalf("Failed to create a new stream: %v", err) } // Give keepalive some time. time.Sleep(4 * time.Second) // Assert that transport was closed. ct := tr.(*http2Client) ct.mu.Lock() defer ct.mu.Unlock() if ct.state == reachable { t.Fatalf("Test failed: Expected client transport to have closed.") } } func TestKeepaliveClientStaysHealthyWithResponsiveServer(t *testing.T) { s, tr := setUpWithOptions(t, 0, &ServerConfig{MaxStreams: math.MaxUint32}, normal, ConnectOptions{KeepaliveParams: keepalive.ClientParameters{ Time: 2 * time.Second, // Keepalive time = 2 sec. Timeout: 1 * time.Second, // Keepalive timeout = 1 sec. PermitWithoutStream: true, // Run keepalive even with no RPCs. }}) defer s.stop() defer tr.Close() // Give keep alive some time. time.Sleep(4 * time.Second) // Assert that transport is healthy. ct := tr.(*http2Client) ct.mu.Lock() defer ct.mu.Unlock() if ct.state != reachable { t.Fatalf("Test failed: Expected client transport to be healthy.") } } func TestKeepaliveServerEnforcementWithAbusiveClientNoRPC(t *testing.T) { serverConfig := &ServerConfig{ KeepalivePolicy: keepalive.EnforcementPolicy{ MinTime: 2 * time.Second, }, } clientOptions := ConnectOptions{ KeepaliveParams: keepalive.ClientParameters{ Time: 50 * time.Millisecond, Timeout: 50 * time.Millisecond, PermitWithoutStream: true, }, } server, client := setUpWithOptions(t, 0, serverConfig, normal, clientOptions) defer server.stop() defer client.Close() timeout := time.NewTimer(2 * time.Second) select { case <-client.GoAway(): if !timeout.Stop() { <-timeout.C } case <-timeout.C: t.Fatalf("Test failed: Expected a GoAway from server.") } time.Sleep(500 * time.Millisecond) ct := client.(*http2Client) ct.mu.Lock() defer ct.mu.Unlock() if ct.state == reachable { t.Fatalf("Test failed: Expected the connection to be closed.") } } func TestKeepaliveServerEnforcementWithAbusiveClientWithRPC(t *testing.T) { serverConfig := &ServerConfig{ KeepalivePolicy: keepalive.EnforcementPolicy{ MinTime: 2 * time.Second, }, } clientOptions := ConnectOptions{ KeepaliveParams: keepalive.ClientParameters{ Time: 50 * time.Millisecond, Timeout: 50 * time.Millisecond, }, } server, client := setUpWithOptions(t, 0, serverConfig, suspended, clientOptions) defer server.stop() defer client.Close() if _, err := client.NewStream(context.Background(), &CallHdr{Flush: true}); err != nil { t.Fatalf("Client failed to create stream.") } timeout := time.NewTimer(2 * time.Second) select { case <-client.GoAway(): if !timeout.Stop() { <-timeout.C } case <-timeout.C: t.Fatalf("Test failed: Expected a GoAway from server.") } time.Sleep(500 * time.Millisecond) ct := client.(*http2Client) ct.mu.Lock() defer ct.mu.Unlock() if ct.state == reachable { t.Fatalf("Test failed: Expected the connection to be closed.") } } func TestKeepaliveServerEnforcementWithObeyingClientNoRPC(t *testing.T) { serverConfig := &ServerConfig{ KeepalivePolicy: keepalive.EnforcementPolicy{ MinTime: 100 * time.Millisecond, PermitWithoutStream: true, }, } clientOptions := ConnectOptions{ KeepaliveParams: keepalive.ClientParameters{ Time: 101 * time.Millisecond, Timeout: 50 * time.Millisecond, PermitWithoutStream: true, }, } server, client := setUpWithOptions(t, 0, serverConfig, normal, clientOptions) defer server.stop() defer client.Close() // Give keepalive enough time. time.Sleep(2 * time.Second) // Assert that connection is healthy. 
ct := client.(*http2Client) ct.mu.Lock() defer ct.mu.Unlock() if ct.state != reachable { t.Fatalf("Test failed: Expected connection to be healthy.") } } func TestKeepaliveServerEnforcementWithObeyingClientWithRPC(t *testing.T) { serverConfig := &ServerConfig{ KeepalivePolicy: keepalive.EnforcementPolicy{ MinTime: 100 * time.Millisecond, }, } clientOptions := ConnectOptions{ KeepaliveParams: keepalive.ClientParameters{ Time: 101 * time.Millisecond, Timeout: 50 * time.Millisecond, }, } server, client := setUpWithOptions(t, 0, serverConfig, suspended, clientOptions) defer server.stop() defer client.Close() if _, err := client.NewStream(context.Background(), &CallHdr{Flush: true}); err != nil { t.Fatalf("Client failed to create stream.") } // Give keepalive enough time. time.Sleep(2 * time.Second) // Assert that connection is healthy. ct := client.(*http2Client) ct.mu.Lock() defer ct.mu.Unlock() if ct.state != reachable { t.Fatalf("Test failed: Expected connection to be healthy.") } } func TestClientSendAndReceive(t *testing.T) { server, ct := setUp(t, 0, math.MaxUint32, normal) callHdr := &CallHdr{ Host: "localhost", Method: "foo.Small", } s1, err1 := ct.NewStream(context.Background(), callHdr) if err1 != nil { t.Fatalf("failed to open stream: %v", err1) } if s1.id != 1 { t.Fatalf("wrong stream id: %d", s1.id) } s2, err2 := ct.NewStream(context.Background(), callHdr) if err2 != nil { t.Fatalf("failed to open stream: %v", err2) } if s2.id != 3 { t.Fatalf("wrong stream id: %d", s2.id) } opts := Options{ Last: true, Delay: false, } if err := ct.Write(s1, expectedRequest, nil, &opts); err != nil && err != io.EOF { t.Fatalf("failed to send data: %v", err) } p := make([]byte, len(expectedResponse)) _, recvErr := s1.Read(p) if recvErr != nil || !bytes.Equal(p, expectedResponse) { t.Fatalf("Error: %v, want ; Result: %v, want %v", recvErr, p, expectedResponse) } _, recvErr = s1.Read(p) if recvErr != io.EOF { t.Fatalf("Error: %v; want ", recvErr) } ct.Close() server.stop() } func TestClientErrorNotify(t *testing.T) { server, ct := setUp(t, 0, math.MaxUint32, normal) go server.stop() // ct.reader should detect the error and activate ct.Error(). <-ct.Error() ct.Close() } func performOneRPC(ct ClientTransport) { callHdr := &CallHdr{ Host: "localhost", Method: "foo.Small", } s, err := ct.NewStream(context.Background(), callHdr) if err != nil { return } opts := Options{ Last: true, Delay: false, } if err := ct.Write(s, expectedRequest, nil, &opts); err == nil || err == io.EOF { time.Sleep(5 * time.Millisecond) // The following s.Recv()'s could error out because the // underlying transport is gone. 
// // Read response p := make([]byte, len(expectedResponse)) s.Read(p) // Read io.EOF s.Read(p) } } func TestClientMix(t *testing.T) { s, ct := setUp(t, 0, math.MaxUint32, normal) go func(s *server) { time.Sleep(5 * time.Second) s.stop() }(s) go func(ct ClientTransport) { <-ct.Error() ct.Close() }(ct) for i := 0; i < 1000; i++ { time.Sleep(10 * time.Millisecond) go performOneRPC(ct) } } func TestLargeMessage(t *testing.T) { server, ct := setUp(t, 0, math.MaxUint32, normal) callHdr := &CallHdr{ Host: "localhost", Method: "foo.Large", } var wg sync.WaitGroup for i := 0; i < 2; i++ { wg.Add(1) go func() { defer wg.Done() s, err := ct.NewStream(context.Background(), callHdr) if err != nil { t.Errorf("%v.NewStream(_, _) = _, %v, want _, ", ct, err) } if err := ct.Write(s, expectedRequestLarge, nil, &Options{Last: true, Delay: false}); err != nil && err != io.EOF { t.Errorf("%v.Write(_, _, _) = %v, want ", ct, err) } p := make([]byte, len(expectedResponseLarge)) if _, err := s.Read(p); err != nil || !bytes.Equal(p, expectedResponseLarge) { t.Errorf("s.Read(%v) = _, %v, want %v, ", err, p, expectedResponse) } if _, err = s.Read(p); err != io.EOF { t.Errorf("Failed to complete the stream %v; want ", err) } }() } wg.Wait() ct.Close() server.stop() } func TestLargeMessageWithDelayRead(t *testing.T) { server, ct := setUp(t, 0, math.MaxUint32, delayRead) callHdr := &CallHdr{ Host: "localhost", Method: "foo.Large", } var wg sync.WaitGroup for i := 0; i < 2; i++ { wg.Add(1) go func() { defer wg.Done() s, err := ct.NewStream(context.Background(), callHdr) if err != nil { t.Errorf("%v.NewStream(_, _) = _, %v, want _, ", ct, err) } if err := ct.Write(s, expectedRequestLarge, nil, &Options{Last: true, Delay: false}); err != nil && err != io.EOF { t.Errorf("%v.Write(_, _, _) = %v, want ", ct, err) } p := make([]byte, len(expectedResponseLarge)) // Give time to server to begin sending before client starts reading. time.Sleep(2 * time.Second) if _, err := s.Read(p); err != nil || !bytes.Equal(p, expectedResponseLarge) { t.Errorf("s.Read(_) = _, %v, want _, ", err) } if _, err = s.Read(p); err != io.EOF { t.Errorf("Failed to complete the stream %v; want ", err) } }() } wg.Wait() ct.Close() server.stop() } func TestLargeMessageDelayWrite(t *testing.T) { server, ct := setUp(t, 0, math.MaxUint32, delayWrite) callHdr := &CallHdr{ Host: "localhost", Method: "foo.Large", } var wg sync.WaitGroup for i := 0; i < 2; i++ { wg.Add(1) go func() { defer wg.Done() s, err := ct.NewStream(context.Background(), callHdr) if err != nil { t.Errorf("%v.NewStream(_, _) = _, %v, want _, ", ct, err) } // Give time to server to start reading before client starts sending. 
time.Sleep(2 * time.Second) if err := ct.Write(s, expectedRequestLarge, nil, &Options{Last: true, Delay: false}); err != nil && err != io.EOF { t.Errorf("%v.Write(_, _, _) = %v, want ", ct, err) } p := make([]byte, len(expectedResponseLarge)) if _, err := s.Read(p); err != nil || !bytes.Equal(p, expectedResponseLarge) { t.Errorf("io.ReadFull(%v) = _, %v, want %v, ", err, p, expectedResponse) } if _, err = s.Read(p); err != io.EOF { t.Errorf("Failed to complete the stream %v; want ", err) } }() } wg.Wait() ct.Close() server.stop() } func TestGracefulClose(t *testing.T) { server, ct := setUp(t, 0, math.MaxUint32, normal) callHdr := &CallHdr{ Host: "localhost", Method: "foo.Small", } s, err := ct.NewStream(context.Background(), callHdr) if err != nil { t.Fatalf("%v.NewStream(_, _) = _, %v, want _, ", ct, err) } if err = ct.GracefulClose(); err != nil { t.Fatalf("%v.GracefulClose() = %v, want ", ct, err) } var wg sync.WaitGroup // Expect the failure for all the follow-up streams because ct has been closed gracefully. for i := 0; i < 100; i++ { wg.Add(1) go func() { defer wg.Done() if _, err := ct.NewStream(context.Background(), callHdr); err != ErrStreamDrain { t.Errorf("%v.NewStream(_, _) = _, %v, want _, %v", ct, err, ErrStreamDrain) } }() } opts := Options{ Last: true, Delay: false, } // The stream which was created before graceful close can still proceed. if err := ct.Write(s, expectedRequest, nil, &opts); err != nil && err != io.EOF { t.Fatalf("%v.Write(_, _, _) = %v, want ", ct, err) } p := make([]byte, len(expectedResponse)) if _, err := s.Read(p); err != nil || !bytes.Equal(p, expectedResponse) { t.Fatalf("s.Read(%v) = _, %v, want %v, ", err, p, expectedResponse) } if _, err = s.Read(p); err != io.EOF { t.Fatalf("Failed to complete the stream %v; want ", err) } wg.Wait() ct.Close() server.stop() } func TestLargeMessageSuspension(t *testing.T) { server, ct := setUp(t, 0, math.MaxUint32, suspended) callHdr := &CallHdr{ Host: "localhost", Method: "foo.Large", } // Set a long enough timeout for writing a large message out. ctx, cancel := context.WithTimeout(context.Background(), time.Second) defer cancel() s, err := ct.NewStream(ctx, callHdr) if err != nil { t.Fatalf("failed to open stream: %v", err) } // Write should not be done successfully due to flow control. msg := make([]byte, initialWindowSize*8) err = ct.Write(s, msg, nil, &Options{Last: true, Delay: false}) expectedErr := streamErrorf(codes.DeadlineExceeded, "%v", context.DeadlineExceeded) if err != expectedErr { t.Fatalf("Write got %v, want %v", err, expectedErr) } ct.Close() server.stop() } func TestMaxStreams(t *testing.T) { server, ct := setUp(t, 0, 1, suspended) callHdr := &CallHdr{ Host: "localhost", Method: "foo.Large", } // Have a pending stream which takes all streams quota. s, err := ct.NewStream(context.Background(), callHdr) if err != nil { t.Fatalf("Failed to open stream: %v", err) } cc, ok := ct.(*http2Client) if !ok { t.Fatalf("Failed to convert %v to *http2Client", ct) } done := make(chan struct{}) ch := make(chan int) ready := make(chan struct{}) go func() { for { select { case <-time.After(5 * time.Millisecond): select { case ch <- 0: case <-ready: return } case <-time.After(5 * time.Second): close(done) return case <-ready: return } } }() for { select { case <-ch: case <-done: t.Fatalf("Client has not received the max stream setting in 5 seconds.") } cc.mu.Lock() // cc.maxStreams should be equal to 1 after having received settings frame from // server. 
if cc.maxStreams == 1 { cc.mu.Unlock() select { case <-cc.streamsQuota.acquire(): t.Fatalf("streamsQuota.acquire() becomes readable mistakenly.") default: cc.streamsQuota.mu.Lock() quota := cc.streamsQuota.quota cc.streamsQuota.mu.Unlock() if quota != 0 { t.Fatalf("streamsQuota.quota got non-zero quota mistakenly.") } } break } cc.mu.Unlock() } close(ready) // Close the pending stream so that the streams quota becomes available for the next new stream. ct.CloseStream(s, nil) select { case i := <-cc.streamsQuota.acquire(): if i != 1 { t.Fatalf("streamsQuota.acquire() got %d quota, want 1.", i) } cc.streamsQuota.add(i) default: t.Fatalf("streamsQuota.acquire() is not readable.") } if _, err := ct.NewStream(context.Background(), callHdr); err != nil { t.Fatalf("Failed to open stream: %v", err) } ct.Close() server.stop() } func TestServerContextCanceledOnClosedConnection(t *testing.T) { server, ct := setUp(t, 0, math.MaxUint32, suspended) callHdr := &CallHdr{ Host: "localhost", Method: "foo", } var sc *http2Server // Wait until the server transport is setup. for { server.mu.Lock() if len(server.conns) == 0 { server.mu.Unlock() time.Sleep(time.Millisecond) continue } for k := range server.conns { var ok bool sc, ok = k.(*http2Server) if !ok { t.Fatalf("Failed to convert %v to *http2Server", k) } } server.mu.Unlock() break } cc, ok := ct.(*http2Client) if !ok { t.Fatalf("Failed to convert %v to *http2Client", ct) } s, err := ct.NewStream(context.Background(), callHdr) if err != nil { t.Fatalf("Failed to open stream: %v", err) } // Make sure the headers frame is flushed out. <-cc.writableChan if err = cc.framer.writeData(true, s.id, false, make([]byte, http2MaxFrameLen)); err != nil { t.Fatalf("Failed to write data: %v", err) } cc.writableChan <- 0 // Loop until the server side stream is created. var ss *Stream for { time.Sleep(time.Second) sc.mu.Lock() if len(sc.activeStreams) == 0 { sc.mu.Unlock() continue } ss = sc.activeStreams[s.id] sc.mu.Unlock() break } cc.Close() select { case <-ss.Context().Done(): if ss.Context().Err() != context.Canceled { t.Fatalf("ss.Context().Err() got %v, want %v", ss.Context().Err(), context.Canceled) } case <-time.After(5 * time.Second): t.Fatalf("Failed to cancel the context of the sever side stream.") } server.stop() } func TestClientConnDecoupledFromApplicationRead(t *testing.T) { connectOptions := ConnectOptions{ InitialWindowSize: defaultWindowSize, InitialConnWindowSize: defaultWindowSize, } server, client := setUpWithOptions(t, 0, &ServerConfig{}, suspended, connectOptions) defer server.stop() defer client.Close() waitWhileTrue(t, func() (bool, error) { server.mu.Lock() defer server.mu.Unlock() if len(server.conns) == 0 { return true, fmt.Errorf("timed-out while waiting for connection to be created on the server") } return false, nil }) var st *http2Server server.mu.Lock() for k := range server.conns { st = k.(*http2Server) } server.mu.Unlock() cstream1, err := client.NewStream(context.Background(), &CallHdr{Flush: true}) if err != nil { t.Fatalf("Client failed to create first stream. Err: %v", err) } var sstream1 *Stream // Access stream on the server. waitWhileTrue(t, func() (bool, error) { st.mu.Lock() defer st.mu.Unlock() if len(st.activeStreams) != 1 { return true, fmt.Errorf("timed-out while waiting for server to have created a stream") } for _, v := range st.activeStreams { sstream1 = v } return false, nil }) // Exhaust client's connection window. 
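// The server pushes a defaultWindowSize-byte data frame on sstream1, enough to use up the connection-level // window the client advertised at setup; if connection-level flow control were tied to application reads, the // later write on the second stream would stall. The <-writableChan / writableChan <- 0 pair around each framer // call appears to be how this transport version serializes access to the shared framer.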
<-st.writableChan if err := st.framer.writeData(true, sstream1.id, true, make([]byte, defaultWindowSize)); err != nil { st.writableChan <- 0 t.Fatalf("Server failed to write data. Err: %v", err) } st.writableChan <- 0 // Create another stream on client. cstream2, err := client.NewStream(context.Background(), &CallHdr{Flush: true}) if err != nil { t.Fatalf("Client failed to create second stream. Err: %v", err) } var sstream2 *Stream waitWhileTrue(t, func() (bool, error) { st.mu.Lock() defer st.mu.Unlock() if len(st.activeStreams) != 2 { return true, fmt.Errorf("timed-out while waiting for server to have created the second stream") } for _, v := range st.activeStreams { if v.id == cstream2.id { sstream2 = v } } if sstream2 == nil { return true, fmt.Errorf("didn't find stream corresponding to client cstream.id: %v on the server", cstream2.id) } return false, nil }) // Server should be able to send data on the new stream, even though the client hasn't read anything on the first stream. <-st.writableChan if err := st.framer.writeData(true, sstream2.id, true, make([]byte, defaultWindowSize)); err != nil { st.writableChan <- 0 t.Fatalf("Server failed to write data. Err: %v", err) } st.writableChan <- 0 // Client should be able to read data on second stream. if _, err := cstream2.Read(make([]byte, defaultWindowSize)); err != nil { t.Fatalf("_.Read(_) = _, %v, want _, ", err) } // Client should be able to read data on first stream. if _, err := cstream1.Read(make([]byte, defaultWindowSize)); err != nil { t.Fatalf("_.Read(_) = _, %v, want _, ", err) } } func TestServerConnDecoupledFromApplicationRead(t *testing.T) { serverConfig := &ServerConfig{ InitialWindowSize: defaultWindowSize, InitialConnWindowSize: defaultWindowSize, } server, client := setUpWithOptions(t, 0, serverConfig, suspended, ConnectOptions{}) defer server.stop() defer client.Close() waitWhileTrue(t, func() (bool, error) { server.mu.Lock() defer server.mu.Unlock() if len(server.conns) == 0 { return true, fmt.Errorf("timed-out while waiting for connection to be created on the server") } return false, nil }) var st *http2Server server.mu.Lock() for k := range server.conns { st = k.(*http2Server) } server.mu.Unlock() cstream1, err := client.NewStream(context.Background(), &CallHdr{Flush: true}) if err != nil { t.Fatalf("Failed to create 1st stream. Err: %v", err) } // Exhaust server's connection window. if err := client.Write(cstream1, make([]byte, defaultWindowSize), nil, &Options{Last: true}); err != nil { t.Fatalf("Client failed to write data. Err: %v", err) } //Client should be able to create another stream and send data on it. cstream2, err := client.NewStream(context.Background(), &CallHdr{Flush: true}) if err != nil { t.Fatalf("Failed to create 2nd stream. Err: %v", err) } if err := client.Write(cstream2, make([]byte, defaultWindowSize), nil, &Options{}); err != nil { t.Fatalf("Client failed to write data. Err: %v", err) } // Get the streams on server. waitWhileTrue(t, func() (bool, error) { st.mu.Lock() defer st.mu.Unlock() if len(st.activeStreams) != 2 { return true, fmt.Errorf("timed-out while waiting for server to have created the streams") } return false, nil }) var sstream1 *Stream st.mu.Lock() for _, v := range st.activeStreams { if v.id == 1 { sstream1 = v } } st.mu.Unlock() // Trying to write more on a max-ed out stream should result in a RST_STREAM from the server. 
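// cstream2 already carries defaultWindowSize bytes that the server-side application never read, so its // per-stream inbound window should be fully consumed. The extra 1-byte DATA frame written directly through // the framer below is therefore a flow-control violation, and the test waits for the client to observe the // resulting reset as the gRPC code mapped from http2.ErrCodeFlowControl via http2ErrConvTab.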
ct := client.(*http2Client) <-ct.writableChan if err := ct.framer.writeData(true, cstream2.id, true, make([]byte, 1)); err != nil { t.Fatalf("Client failed to write. Err: %v", err) } ct.writableChan <- 0 code := http2ErrConvTab[http2.ErrCodeFlowControl] waitWhileTrue(t, func() (bool, error) { cstream2.mu.Lock() defer cstream2.mu.Unlock() if cstream2.status.Code() != code { return true, fmt.Errorf("want code = %v, got %v", code, cstream2.status.Code()) } return false, nil }) // Reading from the stream on server should succeed. if _, err := sstream1.Read(make([]byte, defaultWindowSize)); err != nil { t.Fatalf("_.Read(_) = %v, want ", err) } if _, err := sstream1.Read(make([]byte, 1)); err != io.EOF { t.Fatalf("_.Read(_) = %v, want io.EOF", err) } } func TestServerWithMisbehavedClient(t *testing.T) { server, ct := setUp(t, 0, math.MaxUint32, suspended) callHdr := &CallHdr{ Host: "localhost", Method: "foo", } var sc *http2Server // Wait until the server transport is setup. for { server.mu.Lock() if len(server.conns) == 0 { server.mu.Unlock() time.Sleep(time.Millisecond) continue } for k := range server.conns { var ok bool sc, ok = k.(*http2Server) if !ok { t.Fatalf("Failed to convert %v to *http2Server", k) } } server.mu.Unlock() break } cc, ok := ct.(*http2Client) if !ok { t.Fatalf("Failed to convert %v to *http2Client", ct) } // Test server behavior for violation of stream flow control window size restriction. s, err := ct.NewStream(context.Background(), callHdr) if err != nil { t.Fatalf("Failed to open stream: %v", err) } var sent int // Drain the stream flow control window <-cc.writableChan if err = cc.framer.writeData(true, s.id, false, make([]byte, http2MaxFrameLen)); err != nil { t.Fatalf("Failed to write data: %v", err) } cc.writableChan <- 0 sent += http2MaxFrameLen // Wait until the server creates the corresponding stream and receive some data. var ss *Stream for { time.Sleep(time.Millisecond) sc.mu.Lock() if len(sc.activeStreams) == 0 { sc.mu.Unlock() continue } ss = sc.activeStreams[s.id] sc.mu.Unlock() ss.fc.mu.Lock() if ss.fc.pendingData > 0 { ss.fc.mu.Unlock() break } ss.fc.mu.Unlock() } if ss.fc.pendingData != http2MaxFrameLen || ss.fc.pendingUpdate != 0 || sc.fc.pendingData != 0 || sc.fc.pendingUpdate != 0 { t.Fatalf("Server mistakenly updates inbound flow control params: got %d, %d, %d, %d; want %d, %d, %d, %d", ss.fc.pendingData, ss.fc.pendingUpdate, sc.fc.pendingData, sc.fc.pendingUpdate, http2MaxFrameLen, 0, 0, 0) } // Keep sending until the server inbound window is drained for that stream. for sent <= initialWindowSize { <-cc.writableChan if err = cc.framer.writeData(true, s.id, false, make([]byte, 1)); err != nil { t.Fatalf("Failed to write data: %v", err) } cc.writableChan <- 0 sent++ } // Server sent a resetStream for s already. code := http2ErrConvTab[http2.ErrCodeFlowControl] if _, err := s.Read(make([]byte, 1)); err != io.EOF { t.Fatalf("%v got err %v want ", s, err) } if s.status.Code() != code { t.Fatalf("%v got status %v; want Code=%v", s, s.status, code) } ct.CloseStream(s, nil) ct.Close() server.stop() } func TestClientWithMisbehavedServer(t *testing.T) { // Turn off BDP estimation so that the server can // violate stream window. 
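// Pinning InitialWindowSize in ConnectOptions keeps the client's stream window fixed, so the misbehaved // server handler can overrun it deterministically. The assertions below expect the client to notice the // violation (s.fc.pendingData growing past initialWindowSize with no window updates sent) and to fail the // stream with codes.Internal.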
connectOptions := ConnectOptions{ InitialWindowSize: initialWindowSize, } server, ct := setUpWithOptions(t, 0, &ServerConfig{}, misbehaved, connectOptions) callHdr := &CallHdr{ Host: "localhost", Method: "foo.Stream", } conn, ok := ct.(*http2Client) if !ok { t.Fatalf("Failed to convert %v to *http2Client", ct) } // Test the logic for the violation of stream flow control window size restriction. s, err := ct.NewStream(context.Background(), callHdr) if err != nil { t.Fatalf("Failed to open stream: %v", err) } d := make([]byte, 1) if err := ct.Write(s, d, nil, &Options{Last: true, Delay: false}); err != nil && err != io.EOF { t.Fatalf("Failed to write: %v", err) } // Read without window update. for { p := make([]byte, http2MaxFrameLen) if _, err = s.trReader.(*transportReader).reader.Read(p); err != nil { break } } if s.fc.pendingData <= initialWindowSize || s.fc.pendingUpdate != 0 || conn.fc.pendingData != 0 || conn.fc.pendingUpdate != 0 { t.Fatalf("Client mistakenly updates inbound flow control params: got %d, %d, %d, %d; want >%d, %d, %d, >%d", s.fc.pendingData, s.fc.pendingUpdate, conn.fc.pendingData, conn.fc.pendingUpdate, initialWindowSize, 0, 0, 0) } if err != io.EOF { t.Fatalf("Got err %v, want ", err) } if s.status.Code() != codes.Internal { t.Fatalf("Got s.status %v, want s.status.Code()=Internal", s.status) } conn.CloseStream(s, err) ct.Close() server.stop() } var encodingTestStatus = status.New(codes.Internal, "\n") func TestEncodingRequiredStatus(t *testing.T) { server, ct := setUp(t, 0, math.MaxUint32, encodingRequiredStatus) callHdr := &CallHdr{ Host: "localhost", Method: "foo", } s, err := ct.NewStream(context.Background(), callHdr) if err != nil { return } opts := Options{ Last: true, Delay: false, } if err := ct.Write(s, expectedRequest, nil, &opts); err != nil && err != io.EOF { t.Fatalf("Failed to write the request: %v", err) } p := make([]byte, http2MaxFrameLen) if _, err := s.trReader.(*transportReader).Read(p); err != io.EOF { t.Fatalf("Read got error %v, want %v", err, io.EOF) } if !reflect.DeepEqual(s.Status(), encodingTestStatus) { t.Fatalf("stream with status %v, want %v", s.Status(), encodingTestStatus) } ct.Close() server.stop() } func TestInvalidHeaderField(t *testing.T) { server, ct := setUp(t, 0, math.MaxUint32, invalidHeaderField) callHdr := &CallHdr{ Host: "localhost", Method: "foo", } s, err := ct.NewStream(context.Background(), callHdr) if err != nil { return } opts := Options{ Last: true, Delay: false, } if err := ct.Write(s, expectedRequest, nil, &opts); err != nil && err != io.EOF { t.Fatalf("Failed to write the request: %v", err) } p := make([]byte, http2MaxFrameLen) _, err = s.trReader.(*transportReader).Read(p) if se, ok := err.(StreamError); !ok || se.Code != codes.FailedPrecondition || !strings.Contains(err.Error(), expectedInvalidHeaderField) { t.Fatalf("Read got error %v, want error with code %s and contains %q", err, codes.FailedPrecondition, expectedInvalidHeaderField) } ct.Close() server.stop() } func TestStreamContext(t *testing.T) { expectedStream := &Stream{} ctx := newContextWithStream(context.Background(), expectedStream) s, ok := StreamFromContext(ctx) if !ok || expectedStream != s { t.Fatalf("GetStreamFromContext(%v) = %v, %t, want: %v, true", ctx, s, ok, expectedStream) } } func TestIsReservedHeader(t *testing.T) { tests := []struct { h string want bool }{ {"", false}, // but should be rejected earlier {"foo", false}, {"content-type", true}, {"grpc-message-type", true}, {"grpc-encoding", true}, {"grpc-message", true}, {"grpc-status", 
true}, {"grpc-timeout", true}, {"te", true}, } for _, tt := range tests { got := isReservedHeader(tt.h) if got != tt.want { t.Errorf("isReservedHeader(%q) = %v; want %v", tt.h, got, tt.want) } } } func TestContextErr(t *testing.T) { for _, test := range []struct { // input errIn error // outputs errOut StreamError }{ {context.DeadlineExceeded, StreamError{codes.DeadlineExceeded, context.DeadlineExceeded.Error()}}, {context.Canceled, StreamError{codes.Canceled, context.Canceled.Error()}}, } { err := ContextErr(test.errIn) if err != test.errOut { t.Fatalf("ContextErr{%v} = %v \nwant %v", test.errIn, err, test.errOut) } } } func max(a, b int32) int32 { if a > b { return a } return b } type windowSizeConfig struct { serverStream int32 serverConn int32 clientStream int32 clientConn int32 } func TestAccountCheckWindowSizeWithLargeWindow(t *testing.T) { wc := windowSizeConfig{ serverStream: 10 * 1024 * 1024, serverConn: 12 * 1024 * 1024, clientStream: 6 * 1024 * 1024, clientConn: 8 * 1024 * 1024, } testAccountCheckWindowSize(t, wc) } func TestAccountCheckWindowSizeWithSmallWindow(t *testing.T) { wc := windowSizeConfig{ serverStream: defaultWindowSize, // Note this is smaller than initialConnWindowSize which is the current default. serverConn: defaultWindowSize, clientStream: defaultWindowSize, clientConn: defaultWindowSize, } testAccountCheckWindowSize(t, wc) } func testAccountCheckWindowSize(t *testing.T, wc windowSizeConfig) { serverConfig := &ServerConfig{ InitialWindowSize: wc.serverStream, InitialConnWindowSize: wc.serverConn, } connectOptions := ConnectOptions{ InitialWindowSize: wc.clientStream, InitialConnWindowSize: wc.clientConn, } server, client := setUpWithOptions(t, 0, serverConfig, suspended, connectOptions) defer server.stop() defer client.Close() // Wait for server conns to be populated with new server transport. waitWhileTrue(t, func() (bool, error) { server.mu.Lock() defer server.mu.Unlock() if len(server.conns) == 0 { return true, fmt.Errorf("timed out waiting for server transport to be created") } return false, nil }) var st *http2Server server.mu.Lock() for k := range server.conns { st = k.(*http2Server) } server.mu.Unlock() ct := client.(*http2Client) cstream, err := client.NewStream(context.Background(), &CallHdr{Flush: true}) if err != nil { t.Fatalf("Failed to create stream. Err: %v", err) } // Wait for server to receive headers. waitWhileTrue(t, func() (bool, error) { st.mu.Lock() defer st.mu.Unlock() if len(st.activeStreams) == 0 { return true, fmt.Errorf("timed out waiting for server to receive headers") } return false, nil }) // Sleeping to make sure the settings are applied in case of negative test. time.Sleep(time.Second) waitWhileTrue(t, func() (bool, error) { st.fc.mu.Lock() lim := st.fc.limit st.fc.mu.Unlock() if lim != uint32(serverConfig.InitialConnWindowSize) { return true, fmt.Errorf("Server transport flow control window size: got %v, want %v", lim, serverConfig.InitialConnWindowSize) } return false, nil }) ctx, cancel := context.WithTimeout(context.Background(), time.Second) serverSendQuota, err := wait(ctx, nil, nil, nil, st.sendQuotaPool.acquire()) if err != nil { t.Fatalf("Error while acquiring sendQuota on server. 
Err: %v", err) } cancel() st.sendQuotaPool.add(serverSendQuota) if serverSendQuota != int(connectOptions.InitialConnWindowSize) { t.Fatalf("Server send quota(%v) not equal to client's window size(%v) on conn.", serverSendQuota, connectOptions.InitialConnWindowSize) } st.mu.Lock() ssq := st.streamSendQuota st.mu.Unlock() if ssq != uint32(connectOptions.InitialWindowSize) { t.Fatalf("Server stream send quota(%v) not equal to client's window size(%v) on stream.", ssq, connectOptions.InitialWindowSize) } ct.fc.mu.Lock() limit := ct.fc.limit ct.fc.mu.Unlock() if limit != uint32(connectOptions.InitialConnWindowSize) { t.Fatalf("Client transport flow control window size is %v, want %v", limit, connectOptions.InitialConnWindowSize) } ctx, cancel = context.WithTimeout(context.Background(), time.Second) clientSendQuota, err := wait(ctx, nil, nil, nil, ct.sendQuotaPool.acquire()) if err != nil { t.Fatalf("Error while acquiring sendQuota on client. Err: %v", err) } cancel() ct.sendQuotaPool.add(clientSendQuota) if clientSendQuota != int(serverConfig.InitialConnWindowSize) { t.Fatalf("Client send quota(%v) not equal to server's window size(%v) on conn.", clientSendQuota, serverConfig.InitialConnWindowSize) } ct.mu.Lock() ssq = ct.streamSendQuota ct.mu.Unlock() if ssq != uint32(serverConfig.InitialWindowSize) { t.Fatalf("Client stream send quota(%v) not equal to server's window size(%v) on stream.", ssq, serverConfig.InitialWindowSize) } cstream.fc.mu.Lock() limit = cstream.fc.limit cstream.fc.mu.Unlock() if limit != uint32(connectOptions.InitialWindowSize) { t.Fatalf("Client stream flow control window size is %v, want %v", limit, connectOptions.InitialWindowSize) } var sstream *Stream st.mu.Lock() for _, v := range st.activeStreams { sstream = v } st.mu.Unlock() sstream.fc.mu.Lock() limit = sstream.fc.limit sstream.fc.mu.Unlock() if limit != uint32(serverConfig.InitialWindowSize) { t.Fatalf("Server stream flow control window size is %v, want %v", limit, serverConfig.InitialWindowSize) } } // Check accounting on both sides after sending and receiving large messages. func TestAccountCheckExpandingWindow(t *testing.T) { server, client := setUp(t, 0, 0, pingpong) defer server.stop() defer client.Close() waitWhileTrue(t, func() (bool, error) { server.mu.Lock() defer server.mu.Unlock() if len(server.conns) == 0 { return true, fmt.Errorf("timed out while waiting for server transport to be created") } return false, nil }) var st *http2Server server.mu.Lock() for k := range server.conns { st = k.(*http2Server) } server.mu.Unlock() ct := client.(*http2Client) cstream, err := client.NewStream(context.Background(), &CallHdr{Flush: true}) if err != nil { t.Fatalf("Failed to create stream. 
Err: %v", err) } msgSize := 65535 * 16 * 2 msg := make([]byte, msgSize) buf := make([]byte, msgSize+5) buf[0] = byte(0) binary.BigEndian.PutUint32(buf[1:], uint32(msgSize)) copy(buf[5:], msg) opts := Options{} header := make([]byte, 5) for i := 1; i <= 10; i++ { if err := ct.Write(cstream, buf, nil, &opts); err != nil { t.Fatalf("Error on client while writing message: %v", err) } if _, err := cstream.Read(header); err != nil { t.Fatalf("Error on client while reading data frame header: %v", err) } sz := binary.BigEndian.Uint32(header[1:]) recvMsg := make([]byte, int(sz)) if _, err := cstream.Read(recvMsg); err != nil { t.Fatalf("Error on client while reading data: %v", err) } if len(recvMsg) != len(msg) { t.Fatalf("Length of message received by client: %v, want: %v", len(recvMsg), len(msg)) } } var sstream *Stream st.mu.Lock() for _, v := range st.activeStreams { sstream = v } st.mu.Unlock() waitWhileTrue(t, func() (bool, error) { // Check that pendingData and delta on flow control windows on both sides are 0. cstream.fc.mu.Lock() if cstream.fc.delta != 0 { cstream.fc.mu.Unlock() return true, fmt.Errorf("delta on flow control window of client stream is non-zero") } if cstream.fc.pendingData != 0 { cstream.fc.mu.Unlock() return true, fmt.Errorf("pendingData on flow control window of client stream is non-zero") } cstream.fc.mu.Unlock() sstream.fc.mu.Lock() if sstream.fc.delta != 0 { sstream.fc.mu.Unlock() return true, fmt.Errorf("delta on flow control window of server stream is non-zero") } if sstream.fc.pendingData != 0 { sstream.fc.mu.Unlock() return true, fmt.Errorf("pendingData on flow control window of sercer stream is non-zero") } sstream.fc.mu.Unlock() ct.fc.mu.Lock() if ct.fc.delta != 0 { ct.fc.mu.Unlock() return true, fmt.Errorf("delta on flow control window of client transport is non-zero") } if ct.fc.pendingData != 0 { ct.fc.mu.Unlock() return true, fmt.Errorf("pendingData on flow control window of client transport is non-zero") } ct.fc.mu.Unlock() st.fc.mu.Lock() if st.fc.delta != 0 { st.fc.mu.Unlock() return true, fmt.Errorf("delta on flow control window of server transport is non-zero") } if st.fc.pendingData != 0 { st.fc.mu.Unlock() return true, fmt.Errorf("pendingData on flow control window of server transport is non-zero") } st.fc.mu.Unlock() // Check flow conrtrol window on client stream is equal to out flow on server stream. ctx, cancel := context.WithTimeout(context.Background(), time.Second) serverStreamSendQuota, err := wait(ctx, nil, nil, nil, sstream.sendQuotaPool.acquire()) cancel() if err != nil { return true, fmt.Errorf("error while acquiring server stream send quota. Err: %v", err) } sstream.sendQuotaPool.add(serverStreamSendQuota) cstream.fc.mu.Lock() clientEst := cstream.fc.limit - cstream.fc.pendingUpdate cstream.fc.mu.Unlock() if uint32(serverStreamSendQuota) != clientEst { return true, fmt.Errorf("server stream outflow: %v, estimated by client: %v", serverStreamSendQuota, clientEst) } // Check flow control window on server stream is equal to out flow on client stream. ctx, cancel = context.WithTimeout(context.Background(), time.Second) clientStreamSendQuota, err := wait(ctx, nil, nil, nil, cstream.sendQuotaPool.acquire()) cancel() if err != nil { return true, fmt.Errorf("error while acquiring client stream send quota. 
Err: %v", err) } cstream.sendQuotaPool.add(clientStreamSendQuota) sstream.fc.mu.Lock() serverEst := sstream.fc.limit - sstream.fc.pendingUpdate sstream.fc.mu.Unlock() if uint32(clientStreamSendQuota) != serverEst { return true, fmt.Errorf("client stream outflow: %v. estimated by server: %v", clientStreamSendQuota, serverEst) } // Check flow control window on client transport is equal to out flow of server transport. ctx, cancel = context.WithTimeout(context.Background(), time.Second) serverTrSendQuota, err := wait(ctx, nil, nil, nil, st.sendQuotaPool.acquire()) cancel() if err != nil { return true, fmt.Errorf("error while acquring server transport send quota. Err: %v", err) } st.sendQuotaPool.add(serverTrSendQuota) ct.fc.mu.Lock() clientEst = ct.fc.limit - ct.fc.pendingUpdate ct.fc.mu.Unlock() if uint32(serverTrSendQuota) != clientEst { return true, fmt.Errorf("server transport outflow: %v, estimated by client: %v", serverTrSendQuota, clientEst) } // Check flow control window on server transport is equal to out flow of client transport. ctx, cancel = context.WithTimeout(context.Background(), time.Second) clientTrSendQuota, err := wait(ctx, nil, nil, nil, ct.sendQuotaPool.acquire()) cancel() if err != nil { return true, fmt.Errorf("error while acquiring client transport send quota. Err: %v", err) } ct.sendQuotaPool.add(clientTrSendQuota) st.fc.mu.Lock() serverEst = st.fc.limit - st.fc.pendingUpdate st.fc.mu.Unlock() if uint32(clientTrSendQuota) != serverEst { return true, fmt.Errorf("client transport outflow: %v, estimated by client: %v", clientTrSendQuota, serverEst) } return false, nil }) } func waitWhileTrue(t *testing.T, condition func() (bool, error)) { var ( wait bool err error ) timer := time.NewTimer(time.Second * 5) for { wait, err = condition() if wait { select { case <-timer.C: t.Fatalf(err.Error()) default: time.Sleep(50 * time.Millisecond) continue } } if !timer.Stop() { <-timer.C } break } } // A function of type writeHeaders writes out // http status with the given stream ID using the given framer. type writeHeaders func(*http2.Framer, uint32, int) error func writeOneHeader(framer *http2.Framer, sid uint32, httpStatus int) error { var buf bytes.Buffer henc := hpack.NewEncoder(&buf) henc.WriteField(hpack.HeaderField{Name: ":status", Value: fmt.Sprint(httpStatus)}) if err := framer.WriteHeaders(http2.HeadersFrameParam{ StreamID: sid, BlockFragment: buf.Bytes(), EndStream: true, EndHeaders: true, }); err != nil { return err } return nil } func writeTwoHeaders(framer *http2.Framer, sid uint32, httpStatus int) error { var buf bytes.Buffer henc := hpack.NewEncoder(&buf) henc.WriteField(hpack.HeaderField{ Name: ":status", Value: fmt.Sprint(http.StatusOK), }) if err := framer.WriteHeaders(http2.HeadersFrameParam{ StreamID: sid, BlockFragment: buf.Bytes(), EndHeaders: true, }); err != nil { return err } buf.Reset() henc.WriteField(hpack.HeaderField{ Name: ":status", Value: fmt.Sprint(httpStatus), }) if err := framer.WriteHeaders(http2.HeadersFrameParam{ StreamID: sid, BlockFragment: buf.Bytes(), EndStream: true, EndHeaders: true, }); err != nil { return err } return nil } type httpServer struct { conn net.Conn httpStatus int wh writeHeaders } func (s *httpServer) start(t *testing.T, lis net.Listener) { // Launch an HTTP server to send back header with httpStatus. go func() { var err error s.conn, err = lis.Accept() if err != nil { t.Errorf("Error accepting connection: %v", err) return } defer s.conn.Close() // Read preface sent by client. 
if _, err = io.ReadFull(s.conn, make([]byte, len(http2.ClientPreface))); err != nil { t.Errorf("Error at server-side while reading preface from cleint. Err: %v", err) return } reader := bufio.NewReaderSize(s.conn, http2IOBufSize) writer := bufio.NewWriterSize(s.conn, http2IOBufSize) framer := http2.NewFramer(writer, reader) if err = framer.WriteSettingsAck(); err != nil { t.Errorf("Error at server-side while sending Settings ack. Err: %v", err) return } var sid uint32 // Read frames until a header is received. for { frame, err := framer.ReadFrame() if err != nil { t.Errorf("Error at server-side while reading frame. Err: %v", err) return } if hframe, ok := frame.(*http2.HeadersFrame); ok { sid = hframe.Header().StreamID break } } if err = s.wh(framer, sid, s.httpStatus); err != nil { t.Errorf("Error at server-side while writing headers. Err: %v", err) return } writer.Flush() }() } func (s *httpServer) cleanUp() { if s.conn != nil { s.conn.Close() } } func setUpHTTPStatusTest(t *testing.T, httpStatus int, wh writeHeaders) (stream *Stream, cleanUp func()) { var ( err error lis net.Listener server *httpServer client ClientTransport ) cleanUp = func() { if lis != nil { lis.Close() } if server != nil { server.cleanUp() } if client != nil { client.Close() } } defer func() { if err != nil { cleanUp() } }() lis, err = net.Listen("tcp", "localhost:0") if err != nil { t.Fatalf("Failed to listen. Err: %v", err) } server = &httpServer{ httpStatus: httpStatus, wh: wh, } server.start(t, lis) client, err = newHTTP2Client(context.Background(), TargetInfo{Addr: lis.Addr().String()}, ConnectOptions{}) if err != nil { t.Fatalf("Error creating client. Err: %v", err) } stream, err = client.NewStream(context.Background(), &CallHdr{Method: "bogus/method", Flush: true}) if err != nil { t.Fatalf("Error creating stream at client-side. Err: %v", err) } return } func TestHTTPToGRPCStatusMapping(t *testing.T) { for k := range httpStatusConvTab { testHTTPToGRPCStatusMapping(t, k, writeOneHeader) } } func testHTTPToGRPCStatusMapping(t *testing.T, httpStatus int, wh writeHeaders) { stream, cleanUp := setUpHTTPStatusTest(t, httpStatus, wh) defer cleanUp() want := httpStatusConvTab[httpStatus] buf := make([]byte, 8) _, err := stream.Read(buf) if err == nil { t.Fatalf("Stream.Read(_) unexpectedly returned no error. Expected stream error with code %v", want) } serr, ok := err.(StreamError) if !ok { t.Fatalf("err.(Type) = %T, want StreamError", err) } if want != serr.Code { t.Fatalf("Want error code: %v, got: %v", want, serr.Code) } } func TestHTTPStatusOKAndMissingGRPCStatus(t *testing.T) { stream, cleanUp := setUpHTTPStatusTest(t, http.StatusOK, writeOneHeader) defer cleanUp() buf := make([]byte, 8) _, err := stream.Read(buf) if err != io.EOF { t.Fatalf("stream.Read(_) = _, %v, want _, io.EOF", err) } want := codes.Unknown stream.mu.Lock() defer stream.mu.Unlock() if stream.status.Code() != want { t.Fatalf("Status code of stream: %v, want: %v", stream.status.Code(), want) } } func TestHTTPStatusNottOKAndMissingGRPCStatusInSecondHeader(t *testing.T) { testHTTPToGRPCStatusMapping(t, http.StatusUnauthorized, writeTwoHeaders) } // If any error occurs on a call to Stream.Read, future calls // should continue to return that same error. 
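// The test below builds a bare Stream around a recvBuffer with no real transport behind it, injects a recvMsg // carrying a non-nil error, and checks that the first Read reports that error and that later Reads keep // reporting the same error even after further messages, including one with a different error, have been // queued.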
func TestReadGivesSameErrorAfterAnyErrorOccurs(t *testing.T) { testRecvBuffer := newRecvBuffer() s := &Stream{ ctx: context.Background(), goAway: make(chan struct{}), buf: testRecvBuffer, requestRead: func(int) {}, } s.trReader = &transportReader{ reader: &recvBufferReader{ ctx: s.ctx, goAway: s.goAway, recv: s.buf, }, windowHandler: func(int) {}, } testData := make([]byte, 1) testData[0] = 5 testErr := errors.New("test error") s.write(recvMsg{data: testData, err: testErr}) inBuf := make([]byte, 1) actualCount, actualErr := s.Read(inBuf) if actualCount != 0 { t.Errorf("actualCount, _ := s.Read(_) differs; want 0; got %v", actualCount) } if actualErr.Error() != testErr.Error() { t.Errorf("_ , actualErr := s.Read(_) differs; want actualErr.Error() to be %v; got %v", testErr.Error(), actualErr.Error()) } s.write(recvMsg{data: testData, err: nil}) s.write(recvMsg{data: testData, err: errors.New("different error from first")}) for i := 0; i < 2; i++ { inBuf := make([]byte, 1) actualCount, actualErr := s.Read(inBuf) if actualCount != 0 { t.Errorf("actualCount, _ := s.Read(_) differs; want %v; got %v", 0, actualCount) } if actualErr.Error() != testErr.Error() { t.Errorf("_ , actualErr := s.Read(_) differs; want actualErr.Error() to be %v; got %v", testErr.Error(), actualErr.Error()) } } } golang-google-grpc-1.6.0/vet.sh000077500000000000000000000047421315416461300163400ustar00rootroot00000000000000#!/bin/bash set -ex # Exit on error; debugging enabled. set -o pipefail # Fail a pipe if any sub-command fails. die() { echo "$@" >&2 exit 1 } # TODO: Remove this check and the mangling below once "context" is imported # directly. if git status --porcelain | read; then die "Uncommitted or untracked files found; commit changes first" fi # Undo any edits made by this script. cleanup() { git reset --hard HEAD } trap cleanup EXIT # Check proto in manual runs or cron runs. if [[ "$TRAVIS" != "true" || "$TRAVIS_EVENT_TYPE" = "cron" ]]; then check_proto="true" fi if [ "$1" = "-install" ]; then go get -d \ google.golang.org/grpc/... go get -u \ github.com/golang/lint/golint \ golang.org/x/tools/cmd/goimports \ honnef.co/go/tools/cmd/staticcheck \ github.com/golang/protobuf/protoc-gen-go \ golang.org/x/tools/cmd/stringer if [[ "$check_proto" = "true" ]]; then if [[ "$TRAVIS" = "true" ]]; then PROTOBUF_VERSION=3.3.0 cd /home/travis wget https://github.com/google/protobuf/releases/download/v$PROTOBUF_VERSION/$basename-linux-x86_64.zip unzip $basename-linux-x86_64.zip bin/protoc --version elif ! which protoc > /dev/null; then die "Please install protoc into your path" fi fi exit 0 elif [[ "$#" -ne 0 ]]; then die "Unknown argument(s): $*" fi git ls-files "*.go" | xargs grep -L "\(Copyright [0-9]\{4,\} gRPC authors\)\|DO NOT EDIT" 2>&1 | tee /dev/stderr | (! read) gofmt -s -d -l . 2>&1 | tee /dev/stderr | (! read) goimports -l . 2>&1 | tee /dev/stderr | (! read) golint ./... 2>&1 | (grep -vE "(_mock|_string|\.pb)\.go:" || true) | tee /dev/stderr | (! read) # Rewrite golang.org/x/net/context -> context imports (see grpc/grpc-go#1484). # TODO: Remove this mangling once "context" is imported directly (grpc/grpc-go#711). git ls-files "*.go" | xargs sed -i 's:"golang.org/x/net/context":"context":' set +o pipefail # TODO: Stop filtering pb.go files once golang/protobuf#214 is fixed. # TODO: Remove clientconn exception once go1.6 support is removed. go tool vet -all . 2>&1 | grep -vE 'clientconn.go:.*cancel' | grep -vF '.pb.go:' | tee /dev/stderr | (! 
read) set -o pipefail git reset --hard HEAD if [[ "$check_proto" = "true" ]]; then PATH=/home/travis/bin:$PATH make proto && \ git status --porcelain 2>&1 | (! read) || \ (git status; git --no-pager diff; exit 1) fi # TODO(menghanl): fix errors in transport_test. staticcheck -ignore google.golang.org/grpc/transport/transport_test.go:SA2002 ./...