golang-github-confluentinc-confluent-kafka-go-0.11.6/.github/ISSUE_TEMPLATE

Description
===========


How to reproduce
================


Checklist
=========

Please provide the following information:

 - [ ] confluent-kafka-go and librdkafka version (`LibraryVersion()`):
 - [ ] Apache Kafka broker version:
 - [ ] Client configuration: `ConfigMap{...}`
 - [ ] Operating system:
 - [ ] Provide client logs (with `"debug": ".."` as necessary)
 - [ ] Provide broker log excerpts
 - [ ] Critical issue


golang-github-confluentinc-confluent-kafka-go-0.11.6/.gitignore

*~
\#*
*.prof
tmp-build


golang-github-confluentinc-confluent-kafka-go-0.11.6/.travis.yml

language: go
go:
  - 1.7
  - 1.8
  - 1.9

osx_image: xcode9.2

os:
  - linux
  - osx

env:
  global:
    - PKG_CONFIG_PATH="$HOME/gopath/src/github.com/confluentinc/confluent-kafka-go/tmp-build/lib/pkgconfig"
    - LD_LIBRARY_PATH="$HOME/gopath/src/github.com/confluentinc/confluent-kafka-go/tmp-build/lib"
    - DYLD_LIBRARY_PATH="$HOME/gopath/src/github.com/confluentinc/confluent-kafka-go/tmp-build/lib"
    - PATH="$PATH:$GOPATH/bin"
    - LIBRDKAFKA_VERSION=master

# Travis OSX worker has problems running our Go binaries for 1.7 and 1.8,
# workaround for now is to skip exec for those.
before_install:
  - if [[ $TRAVIS_OS_NAME == linux ]]; then wget -qO - https://packages.confluent.io/deb/5.0/archive.key | sudo apt-key add - ; fi
  - if [[ $TRAVIS_OS_NAME == linux ]]; then sudo add-apt-repository "deb [arch=amd64] https://packages.confluent.io/deb/5.0 stable main" -y ; fi
  - if [[ $TRAVIS_OS_NAME == linux ]]; then sudo apt-get update -q ; fi
  - if [[ $TRAVIS_OS_NAME == linux ]]; then sudo apt-get install confluent-librdkafka-plugins -y ; fi
  - rm -rf tmp-build
  - bash mk/bootstrap-librdkafka.sh ${LIBRDKAFKA_VERSION} tmp-build
  # golint requires Go >= 1.9
  - if [[ ! $TRAVIS_GO_VERSION =~ ^1\.[78] ]] ; then go get -u golang.org/x/lint/golint && touch .do_lint ; fi
  - if [[ $TRAVIS_OS_NAME == osx && $TRAVIS_GO_VERSION =~ ^1\.[78] ]] ; then touch .no_exec ; fi

install:
  - go get -tags static ./...
  - go install -tags static ./...

script:
  - if [[ -f .do_lint ]]; then golint -set_exit_status ./... ; fi
  - if [[ ! -f .no_exec ]]; then go test -timeout 60s -v -tags static ./... ; fi
  - if [[ ! -f .no_exec ]]; then go-kafkacat --help ; fi


golang-github-confluentinc-confluent-kafka-go-0.11.6/LICENSE

Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:

(a) You must give any other recipients of the Work or Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.

You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.

6. Trademarks.
This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "{}" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives.

Copyright {yyyy} {name of copyright owner}

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

golang-github-confluentinc-confluent-kafka-go-0.11.6/README.md

Confluent's Golang Client for Apache Kafka™
=====================================================

**confluent-kafka-go** is Confluent's Golang client for [Apache Kafka](http://kafka.apache.org/) and the [Confluent Platform](https://www.confluent.io/product/compare/).

Features:

- **High performance** - confluent-kafka-go is a lightweight wrapper around [librdkafka](https://github.com/edenhill/librdkafka), a finely tuned C client.

- **Reliability** - There are a lot of details to get right when writing an Apache Kafka client. We get them right in one place (librdkafka) and leverage this work across all of our clients (also [confluent-kafka-python](https://github.com/confluentinc/confluent-kafka-python) and [confluent-kafka-dotnet](https://github.com/confluentinc/confluent-kafka-dotnet)).

- **Supported** - Commercial support is offered by [Confluent](https://confluent.io/).

- **Future proof** - Confluent, founded by the creators of Kafka, is building a [streaming platform](https://www.confluent.io/product/compare/) with Apache Kafka at its core. It's high priority for us that client features keep pace with core Apache Kafka and components of the [Confluent Platform](https://www.confluent.io/product/compare/).

The Golang bindings provide a high-level Producer and Consumer with support for the balanced consumer groups of Apache Kafka 0.9 and above.

See the [API documentation](http://docs.confluent.io/current/clients/confluent-kafka-go/index.html) for more information.

**License**: [Apache License v2.0](http://www.apache.org/licenses/LICENSE-2.0)

Examples
========

High-level balanced consumer

```golang
import (
    "fmt"
    "github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {

    c, err := kafka.NewConsumer(&kafka.ConfigMap{
        "bootstrap.servers": "localhost",
        "group.id":          "myGroup",
        "auto.offset.reset": "earliest",
    })

    if err != nil {
        panic(err)
    }

    c.SubscribeTopics([]string{"myTopic", "^aRegex.*[Tt]opic"}, nil)

    for {
        msg, err := c.ReadMessage(-1)
        if err == nil {
            fmt.Printf("Message on %s: %s\n", msg.TopicPartition, string(msg.Value))
        } else {
            // The client will automatically try to recover from all errors.
            fmt.Printf("Consumer error: %v (%v)\n", err, msg)
        }
    }

    c.Close()
}
```

Producer

```golang
import (
    "fmt"
    "github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {

    p, err := kafka.NewProducer(&kafka.ConfigMap{"bootstrap.servers": "localhost"})
    if err != nil {
        panic(err)
    }

    defer p.Close()

    // Delivery report handler for produced messages
    go func() {
        for e := range p.Events() {
            switch ev := e.(type) {
            case *kafka.Message:
                if ev.TopicPartition.Error != nil {
                    fmt.Printf("Delivery failed: %v\n", ev.TopicPartition)
                } else {
                    fmt.Printf("Delivered message to %v\n", ev.TopicPartition)
                }
            }
        }
    }()

    // Produce messages to topic (asynchronously)
    topic := "myTopic"
    for _, word := range []string{"Welcome", "to", "the", "Confluent", "Kafka", "Golang", "client"} {
        p.Produce(&kafka.Message{
            TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
            Value:          []byte(word),
        }, nil)
    }

    // Wait for message deliveries before shutting down
    p.Flush(15 * 1000)
}
```

More elaborate examples are available in the [examples](examples) directory, including [how to configure](examples/confluent_cloud_example) the Go client for use with [Confluent Cloud](https://www.confluent.io/confluent-cloud/).
Getting Started
===============

Installing librdkafka
---------------------

This client for Go depends on librdkafka v0.11.5 or later, so you either need to install librdkafka through your OS/distribution's package manager, or download and build it from source.

- For Debian and Ubuntu based distros, install `librdkafka-dev` from the standard repositories or using [Confluent's Deb repository](http://docs.confluent.io/current/installation.html#installation-apt).
- For Redhat based distros, install `librdkafka-devel` using [Confluent's YUM repository](http://docs.confluent.io/current/installation.html#rpm-packages-via-yum).
- For MacOS X, install `librdkafka` from Homebrew. You may also need to `brew install pkg-config` if you don't already have it.
- For Windows, see the `librdkafka.redist` NuGet package.

Build from source:

    git clone https://github.com/edenhill/librdkafka.git
    cd librdkafka
    ./configure --prefix /usr
    make
    sudo make install

Install the client
-------------------

```
go get -u github.com/confluentinc/confluent-kafka-go/kafka
```

See the [examples](examples) for usage details.

Note that the development of librdkafka and the Go client are kept in sync. So if you use HEAD on master of the Go client, then you need to use HEAD on master of librdkafka. See this [issue](https://github.com/confluentinc/confluent-kafka-go/issues/61#issuecomment-303746159) for more details.

API Strands
===========

There are two main API strands: channel based or function based.

Channel Based Consumer
----------------------

Messages, errors and events are posted on the consumer.Events channel for the application to read.

Pros:

 * Possibly more Golang:ish
 * Makes reading from multiple channels easy
 * Fast

Cons:

 * Outdated events and messages may be consumed due to the buffering nature of channels. The extent is limited, but not remedied, by the Events channel buffer size (`go.events.channel.size`).

See [examples/consumer_channel_example](examples/consumer_channel_example)

Function Based Consumer
-----------------------

Messages, errors and events are polled through the consumer.Poll() function.

Pros:

 * More direct mapping to underlying librdkafka functionality.

Cons:

 * Makes it harder to read from multiple channels, but a go-routine easily solves that (see Cons in channel based consumer above about outdated events).
 * Slower than the channel consumer.

See [examples/consumer_example](examples/consumer_example)

Channel Based Producer
----------------------

Application writes messages to the producer.ProducerChannel. Delivery reports are emitted on the producer.Events or specified private channel.

Pros:

 * Go:ish
 * Proper channel backpressure if librdkafka internal queue is full.

Cons:

 * Double queueing: messages are first queued in the channel (size is configurable) and then inside librdkafka.

See [examples/producer_channel_example](examples/producer_channel_example)

Function Based Producer
-----------------------

Application calls producer.Produce() to produce messages. Delivery reports are emitted on the producer.Events or specified private channel.

Pros:

 * Go:ish

Cons:

 * Produce() is a non-blocking call, if the internal librdkafka queue is full the call will fail (see the sketch below for one way to handle this).
 * Somewhat slower than the channel producer.
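The following minimal sketch, which is not part of this repository, shows one way to handle the queue-full failure mode of the function-based producer by backing off and retrying. The `produceWithRetry` helper name, the `localhost` broker, the topic name and the backoff interval are all illustrative assumptions:

```golang
package main

import (
    "fmt"
    "time"

    "github.com/confluentinc/confluent-kafka-go/kafka"
)

// produceWithRetry retries Produce() whenever librdkafka's internal queue
// is full, instead of failing the whole send.
func produceWithRetry(p *kafka.Producer, msg *kafka.Message) error {
    for {
        err := p.Produce(msg, nil)
        if err == nil {
            return nil
        }
        if kErr, ok := err.(kafka.Error); ok && kErr.Code() == kafka.ErrQueueFull {
            // Queue full: wait a bit for deliveries to drain it, then retry.
            time.Sleep(100 * time.Millisecond)
            continue
        }
        return err
    }
}

func main() {
    p, err := kafka.NewProducer(&kafka.ConfigMap{"bootstrap.servers": "localhost"})
    if err != nil {
        panic(err)
    }
    defer p.Close()

    // Drain delivery reports so completed messages free up queue space.
    go func() {
        for range p.Events() {
        }
    }()

    topic := "myTopic"
    if err := produceWithRetry(p, &kafka.Message{
        TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
        Value:          []byte("hello"),
    }); err != nil {
        fmt.Printf("Produce failed: %v\n", err)
    }

    p.Flush(15 * 1000)
}
```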
See [examples/producer_example](examples/producer_example)

Static Builds
=============

**NOTE**: Requires pkg-config

To link your application statically with librdkafka append `-tags static` to your application's `go build` command, e.g.:

    $ cd kafkatest/go_verifiable_consumer
    $ go build -tags static

This will create a binary with librdkafka statically linked, do note however that any librdkafka dependencies (such as ssl, sasl2, lz4, etc, depending on librdkafka build configuration) will be linked dynamically and thus required on the target system.

To create a completely static binary append `-tags static_all` instead. This requires all dependencies to be available as static libraries (e.g., libsasl2.a). Static libraries are typically not installed by default but are available in the corresponding `..-dev` or `..-devel` packages (e.g., libsasl2-dev).

After a successful static build verify the dependencies by running `ldd ./your_program` (or `otool -L ./your_program` on OSX), librdkafka should not be listed.

Tests
=====

See [kafka/README](kafka/README.md)

Contributing
------------

Contributions to the code, examples, documentation, et al., are very much appreciated.

Make your changes, run gofmt, tests, etc, push your branch, create a PR, and [sign the CLA](http://clabot.confluent.io/cla).


golang-github-confluentinc-confluent-kafka-go-0.11.6/examples/.gitignore

consumer_channel_example/consumer_channel_example
consumer_example/consumer_example
producer_channel_example/producer_channel_example
producer_example/producer_example
go-kafkacat/go-kafkacat
admin_describe_config/admin_describe_config
admin_delete_topics/admin_delete_topics
admin_create_topic/admin_create_topic


golang-github-confluentinc-confluent-kafka-go-0.11.6/examples/README

Examples:

  consumer_channel_example - Channel based consumer
  consumer_example         - Function & callback based consumer

  producer_channel_example - Channel based producer
  producer_example         - Function based producer

  go-kafkacat              - Channel based kafkacat Go clone

Usage example:

  $ cd consumer_example
  $ go build   (or 'go install')
  $ ./consumer_example    # see usage
  $ ./consumer_example mybroker mygroup mytopic

golang-github-confluentinc-confluent-kafka-go-0.11.6/examples/admin_create_topic/admin_create_topic.go

// Create topic
package main

/**
 * Copyright 2018 Confluent Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import (
    "context"
    "fmt"
    "github.com/confluentinc/confluent-kafka-go/kafka"
    "os"
    "strconv"
    "time"
)

func main() {

    if len(os.Args) != 5 {
        fmt.Fprintf(os.Stderr,
            "Usage: %s <broker> <topic> <partition-count> <replication-factor>\n",
            os.Args[0])
        os.Exit(1)
    }

    broker := os.Args[1]
    topic := os.Args[2]
    numParts, err := strconv.Atoi(os.Args[3])
    if err != nil {
        fmt.Printf("Invalid partition count: %s: %v\n", os.Args[3], err)
        os.Exit(1)
    }
    replicationFactor, err := strconv.Atoi(os.Args[4])
    if err != nil {
        fmt.Printf("Invalid replication factor: %s: %v\n", os.Args[4], err)
        os.Exit(1)
    }

    // Create a new AdminClient.
    // AdminClient can also be instantiated using an existing
    // Producer or Consumer instance, see NewAdminClientFromProducer and
    // NewAdminClientFromConsumer.
    a, err := kafka.NewAdminClient(&kafka.ConfigMap{"bootstrap.servers": broker})
    if err != nil {
        fmt.Printf("Failed to create Admin client: %s\n", err)
        os.Exit(1)
    }

    // Contexts are used to abort or limit the amount of time
    // the Admin call blocks waiting for a result.
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()

    // Create topics on cluster.
    // Set Admin options to wait for the operation to finish (or at most 60s)
    maxDur, err := time.ParseDuration("60s")
    if err != nil {
        panic("ParseDuration(60s)")
    }
    results, err := a.CreateTopics(
        ctx,
        // Multiple topics can be created simultaneously
        // by providing more TopicSpecification structs here.
        []kafka.TopicSpecification{{
            Topic:             topic,
            NumPartitions:     numParts,
            ReplicationFactor: replicationFactor}},
        // Admin options
        kafka.SetAdminOperationTimeout(maxDur))
    if err != nil {
        fmt.Printf("Failed to create topic: %v\n", err)
        os.Exit(1)
    }

    // Print results
    for _, result := range results {
        fmt.Printf("%s\n", result)
    }

    a.Close()
}

golang-github-confluentinc-confluent-kafka-go-0.11.6/examples/admin_delete_topics/admin_delete_topics.go

// Delete topics
package main

/**
 * Copyright 2018 Confluent Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import (
    "context"
    "fmt"
    "github.com/confluentinc/confluent-kafka-go/kafka"
    "os"
    "time"
)

func main() {

    if len(os.Args) < 3 {
        fmt.Fprintf(os.Stderr, "Usage: %s <broker> <topic1> <topic2> ..\n", os.Args[0])
        os.Exit(1)
    }

    broker := os.Args[1]
    topics := os.Args[2:]

    // Create a new AdminClient.
    // AdminClient can also be instantiated using an existing
    // Producer or Consumer instance, see NewAdminClientFromProducer and
    // NewAdminClientFromConsumer.
    a, err := kafka.NewAdminClient(&kafka.ConfigMap{"bootstrap.servers": broker})
    if err != nil {
        fmt.Printf("Failed to create Admin client: %s\n", err)
        os.Exit(1)
    }

    // Contexts are used to abort or limit the amount of time
    // the Admin call blocks waiting for a result.
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()

    // Delete topics on cluster
    // Set Admin options to wait for the operation to finish (or at most 60s)
    maxDur, err := time.ParseDuration("60s")
    if err != nil {
        panic("ParseDuration(60s)")
    }

    results, err := a.DeleteTopics(ctx, topics, kafka.SetAdminOperationTimeout(maxDur))
    if err != nil {
        fmt.Printf("Failed to delete topics: %v\n", err)
        os.Exit(1)
    }

    // Print results
    for _, result := range results {
        fmt.Printf("%s\n", result)
    }

    a.Close()
}
fmt.Printf("%60s = %-60.60s %-20s Read-only:%v Sensitive:%v\n", entry.Name, entry.Value, entry.Source, entry.IsReadOnly, entry.IsSensitive) } } a.Close() } golang-github-confluentinc-confluent-kafka-go-0.11.6/examples/confluent_cloud_example/000077500000000000000000000000001336406275100312075ustar00rootroot00000000000000confluent_cloud_example.go000066400000000000000000000057511336406275100363650ustar00rootroot00000000000000golang-github-confluentinc-confluent-kafka-go-0.11.6/examples/confluent_cloud_example// This is a simple example demonstrating how to produce a message to // Confluent Cloud then read it back again. // // https://www.confluent.io/confluent-cloud/ // // Auto-creation of topics is disabled in Confluent Cloud. You will need to // use the ccloud cli to create the go-test-topic topic before running this // example. // // $ ccloud topic create go-test-topic // // The , and parameters // are available via the Confluent Cloud web interface. For more information, // refer to the quick-start: // // https://docs.confluent.io/current/cloud-quickstart.html package main /** * Copyright 2018 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import ( "time" "fmt" "github.com/confluentinc/confluent-kafka-go/kafka" ) func main() { p, err := kafka.NewProducer(&kafka.ConfigMap{ "bootstrap.servers": "", "broker.version.fallback": "0.10.0.0", "api.version.fallback.ms": 0, "sasl.mechanisms": "PLAIN", "security.protocol": "SASL_SSL", "sasl.username": "", "sasl.password": "",}) if err != nil { panic(fmt.Sprintf("Failed to create producer: %s", err)) } value := "golang test value" topic := "go-test-topic" p.Produce(&kafka.Message{ TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny}, Value: []byte(value), }, nil) // Wait for delivery report e := <-p.Events() m := e.(*kafka.Message) if m.TopicPartition.Error != nil { fmt.Printf("failed to deliver message: %v\n", m.TopicPartition) } else { fmt.Printf("delivered to topic %s [%d] at offset %v\n", *m.TopicPartition.Topic, m.TopicPartition.Partition, m.TopicPartition.Offset) } p.Close() c, err := kafka.NewConsumer(&kafka.ConfigMap{ "bootstrap.servers": "", "broker.version.fallback": "0.10.0.0", "api.version.fallback.ms": 0, "sasl.mechanisms": "PLAIN", "security.protocol": "SASL_SSL", "sasl.username": "", "sasl.password": "", "session.timeout.ms": 6000, "group.id": "my-group", "default.topic.config": kafka.ConfigMap{"auto.offset.reset": "earliest"},}) if err != nil { panic(fmt.Sprintf("Failed to create consumer: %s", err)) } topics := []string { topic } c.SubscribeTopics(topics, nil) for { msg, err := c.ReadMessage(100 * time.Millisecond) if err == nil { fmt.Printf("consumed: %s: %s\n", msg.TopicPartition, string(msg.Value)) } } c.Close() } 

golang-github-confluentinc-confluent-kafka-go-0.11.6/examples/consumer_channel_example/consumer_channel_example.go

// Example channel-based high-level Apache Kafka consumer
package main

/**
 * Copyright 2016 Confluent Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import (
    "fmt"
    "github.com/confluentinc/confluent-kafka-go/kafka"
    "os"
    "os/signal"
    "syscall"
)

func main() {

    if len(os.Args) < 4 {
        fmt.Fprintf(os.Stderr, "Usage: %s <broker> <group> <topics..>\n",
            os.Args[0])
        os.Exit(1)
    }

    broker := os.Args[1]
    group := os.Args[2]
    topics := os.Args[3:]

    sigchan := make(chan os.Signal, 1)
    signal.Notify(sigchan, syscall.SIGINT, syscall.SIGTERM)

    c, err := kafka.NewConsumer(&kafka.ConfigMap{
        "bootstrap.servers":               broker,
        "group.id":                        group,
        "session.timeout.ms":              6000,
        "go.events.channel.enable":        true,
        "go.application.rebalance.enable": true,
        "default.topic.config":            kafka.ConfigMap{"auto.offset.reset": "earliest"}})

    if err != nil {
        fmt.Fprintf(os.Stderr, "Failed to create consumer: %s\n", err)
        os.Exit(1)
    }

    fmt.Printf("Created Consumer %v\n", c)

    err = c.SubscribeTopics(topics, nil)

    run := true

    for run == true {
        select {
        case sig := <-sigchan:
            fmt.Printf("Caught signal %v: terminating\n", sig)
            run = false

        case ev := <-c.Events():
            switch e := ev.(type) {
            case kafka.AssignedPartitions:
                fmt.Fprintf(os.Stderr, "%% %v\n", e)
                c.Assign(e.Partitions)
            case kafka.RevokedPartitions:
                fmt.Fprintf(os.Stderr, "%% %v\n", e)
                c.Unassign()
            case *kafka.Message:
                fmt.Printf("%% Message on %s:\n%s\n",
                    e.TopicPartition, string(e.Value))
            case kafka.PartitionEOF:
                fmt.Printf("%% Reached %v\n", e)
            case kafka.Error:
                fmt.Fprintf(os.Stderr, "%% Error: %v\n", e)
                run = false
            }
        }
    }

    fmt.Printf("Closing consumer\n")
    c.Close()
}

golang-github-confluentinc-confluent-kafka-go-0.11.6/examples/consumer_example/consumer_example.go

// Example function-based high-level Apache Kafka consumer
package main

/**
 * Copyright 2016 Confluent Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

// consumer_example implements a consumer using the non-channel Poll() API
// to retrieve messages and events.

import (
    "fmt"
    "github.com/confluentinc/confluent-kafka-go/kafka"
    "os"
    "os/signal"
    "syscall"
)

func main() {

    if len(os.Args) < 4 {
        fmt.Fprintf(os.Stderr, "Usage: %s <broker> <group> <topics..>\n",
            os.Args[0])
        os.Exit(1)
    }

    broker := os.Args[1]
    group := os.Args[2]
    topics := os.Args[3:]

    sigchan := make(chan os.Signal, 1)
    signal.Notify(sigchan, syscall.SIGINT, syscall.SIGTERM)

    c, err := kafka.NewConsumer(&kafka.ConfigMap{
        "bootstrap.servers":    broker,
        "group.id":             group,
        "session.timeout.ms":   6000,
        "default.topic.config": kafka.ConfigMap{"auto.offset.reset": "earliest"}})

    if err != nil {
        fmt.Fprintf(os.Stderr, "Failed to create consumer: %s\n", err)
        os.Exit(1)
    }

    fmt.Printf("Created Consumer %v\n", c)

    err = c.SubscribeTopics(topics, nil)

    run := true

    for run == true {
        select {
        case sig := <-sigchan:
            fmt.Printf("Caught signal %v: terminating\n", sig)
            run = false
        default:
            ev := c.Poll(100)
            if ev == nil {
                continue
            }

            switch e := ev.(type) {
            case *kafka.Message:
                fmt.Printf("%% Message on %s:\n%s\n",
                    e.TopicPartition, string(e.Value))
                if e.Headers != nil {
                    fmt.Printf("%% Headers: %v\n", e.Headers)
                }
            case kafka.PartitionEOF:
                fmt.Printf("%% Reached %v\n", e)
            case kafka.Error:
                fmt.Fprintf(os.Stderr, "%% Error: %v\n", e)
                run = false
            default:
                fmt.Printf("Ignored %v\n", e)
            }
        }
    }

    fmt.Printf("Closing consumer\n")
    c.Close()
}
*/ import ( "bufio" "fmt" "github.com/confluentinc/confluent-kafka-go/kafka" "gopkg.in/alecthomas/kingpin.v2" "os" "os/signal" "strings" "syscall" ) var ( verbosity = 1 exitEOF = false eofCnt = 0 partitionCnt = 0 keyDelim = "" sigs chan os.Signal ) func runProducer(config *kafka.ConfigMap, topic string, partition int32) { p, err := kafka.NewProducer(config) if err != nil { fmt.Fprintf(os.Stderr, "Failed to create producer: %s\n", err) os.Exit(1) } fmt.Fprintf(os.Stderr, "Created Producer %v, topic %s [%d]\n", p, topic, partition) tp := kafka.TopicPartition{Topic: &topic, Partition: partition} go func(drs chan kafka.Event) { for ev := range drs { m, ok := ev.(*kafka.Message) if !ok { continue } if m.TopicPartition.Error != nil { fmt.Fprintf(os.Stderr, "%% Delivery error: %v\n", m.TopicPartition) } else if verbosity >= 2 { fmt.Fprintf(os.Stderr, "%% Delivered %v\n", m) } } }(p.Events()) reader := bufio.NewReader(os.Stdin) stdinChan := make(chan string) go func() { for true { line, err := reader.ReadString('\n') if err != nil { break } line = strings.TrimSuffix(line, "\n") if len(line) == 0 { continue } stdinChan <- line } close(stdinChan) }() run := true for run == true { select { case sig := <-sigs: fmt.Fprintf(os.Stderr, "%% Terminating on signal %v\n", sig) run = false case line, ok := <-stdinChan: if !ok { run = false break } msg := kafka.Message{TopicPartition: tp} if keyDelim != "" { vec := strings.SplitN(line, keyDelim, 2) if len(vec[0]) > 0 { msg.Key = ([]byte)(vec[0]) } if len(vec) == 2 && len(vec[1]) > 0 { msg.Value = ([]byte)(vec[1]) } } else { msg.Value = ([]byte)(line) } p.ProduceChannel() <- &msg } } fmt.Fprintf(os.Stderr, "%% Flushing %d message(s)\n", p.Len()) p.Flush(10000) fmt.Fprintf(os.Stderr, "%% Closing\n") p.Close() } func runConsumer(config *kafka.ConfigMap, topics []string) { c, err := kafka.NewConsumer(config) if err != nil { fmt.Fprintf(os.Stderr, "Failed to create consumer: %s\n", err) os.Exit(1) } fmt.Fprintf(os.Stderr, "%% Created Consumer %v\n", c) c.SubscribeTopics(topics, nil) run := true for run == true { select { case sig := <-sigs: fmt.Fprintf(os.Stderr, "%% Terminating on signal %v\n", sig) run = false case ev := <-c.Events(): switch e := ev.(type) { case kafka.AssignedPartitions: fmt.Fprintf(os.Stderr, "%% %v\n", e) c.Assign(e.Partitions) partitionCnt = len(e.Partitions) eofCnt = 0 case kafka.RevokedPartitions: fmt.Fprintf(os.Stderr, "%% %v\n", e) c.Unassign() partitionCnt = 0 eofCnt = 0 case *kafka.Message: if verbosity >= 2 { fmt.Fprintf(os.Stderr, "%% %v:\n", e.TopicPartition) } if keyDelim != "" { if e.Key != nil { fmt.Printf("%s%s", string(e.Key), keyDelim) } else { fmt.Printf("%s", keyDelim) } } fmt.Println(string(e.Value)) case kafka.PartitionEOF: fmt.Fprintf(os.Stderr, "%% Reached %v\n", e) eofCnt++ if exitEOF && eofCnt >= partitionCnt { run = false } case kafka.Error: fmt.Fprintf(os.Stderr, "%% Error: %v\n", e) run = false case kafka.OffsetsCommitted: if verbosity >= 2 { fmt.Fprintf(os.Stderr, "%% %v\n", e) } default: fmt.Fprintf(os.Stderr, "%% Unhandled event %T ignored: %v\n", e, e) } } } fmt.Fprintf(os.Stderr, "%% Closing consumer\n") c.Close() } type configArgs struct { conf kafka.ConfigMap } func (c *configArgs) String() string { return "FIXME" } func (c *configArgs) Set(value string) error { return c.conf.Set(value) } func (c *configArgs) IsCumulative() bool { return true } func main() { sigs = make(chan os.Signal) signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM) _, libver := kafka.LibraryVersion() 
    kingpin.Version(fmt.Sprintf("confluent-kafka-go (librdkafka v%s)", libver))

    // Default config
    var confargs configArgs
    confargs.conf = kafka.ConfigMap{"session.timeout.ms": 6000}

    /* General options */
    brokers := kingpin.Flag("broker", "Bootstrap broker(s)").Required().String()
    kingpin.Flag("config", "Configuration property (prop=val)").Short('X').PlaceHolder("PROP=VAL").SetValue(&confargs)
    keyDelimArg := kingpin.Flag("key-delim", "Key and value delimiter (empty string=dont print/parse key)").Default("").String()
    verbosityArg := kingpin.Flag("verbosity", "Output verbosity level").Short('v').Default("1").Int()

    /* Producer mode options */
    modeP := kingpin.Command("produce", "Produce messages")
    topic := modeP.Flag("topic", "Topic to produce to").Required().String()
    partition := modeP.Flag("partition", "Partition to produce to").Default("-1").Int()

    /* Consumer mode options */
    modeC := kingpin.Command("consume", "Consume messages").Default()
    group := modeC.Flag("group", "Consumer group").Required().String()
    topics := modeC.Arg("topic", "Topic(s) to subscribe to").Required().Strings()
    initialOffset := modeC.Flag("offset", "Initial offset").Short('o').Default(kafka.OffsetBeginning.String()).String()
    exitEOFArg := modeC.Flag("eof", "Exit when EOF is reached for all partitions").Bool()

    mode := kingpin.Parse()

    verbosity = *verbosityArg
    keyDelim = *keyDelimArg
    exitEOF = *exitEOFArg
    confargs.conf["bootstrap.servers"] = *brokers

    switch mode {
    case "produce":
        confargs.conf["default.topic.config"] = kafka.ConfigMap{"produce.offset.report": true}
        runProducer((*kafka.ConfigMap)(&confargs.conf), *topic, int32(*partition))

    case "consume":
        confargs.conf["group.id"] = *group
        confargs.conf["go.events.channel.enable"] = true
        confargs.conf["go.application.rebalance.enable"] = true
        confargs.conf["default.topic.config"] = kafka.ConfigMap{"auto.offset.reset": *initialOffset}
        runConsumer((*kafka.ConfigMap)(&confargs.conf), *topics)
    }
}

golang-github-confluentinc-confluent-kafka-go-0.11.6/examples/producer_channel_example/producer_channel_example.go

// Example channel-based Apache Kafka producer
package main

/**
 * Copyright 2016 Confluent Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import (
    "fmt"
    "github.com/confluentinc/confluent-kafka-go/kafka"
    "os"
)

func main() {

    if len(os.Args) != 3 {
        fmt.Fprintf(os.Stderr, "Usage: %s <broker> <topic>\n",
            os.Args[0])
        os.Exit(1)
    }

    broker := os.Args[1]
    topic := os.Args[2]

    p, err := kafka.NewProducer(&kafka.ConfigMap{"bootstrap.servers": broker})
    if err != nil {
        fmt.Printf("Failed to create producer: %s\n", err)
        os.Exit(1)
    }

    fmt.Printf("Created Producer %v\n", p)

    doneChan := make(chan bool)

    go func() {
        defer close(doneChan)
        for e := range p.Events() {
            switch ev := e.(type) {
            case *kafka.Message:
                m := ev
                if m.TopicPartition.Error != nil {
                    fmt.Printf("Delivery failed: %v\n", m.TopicPartition.Error)
                } else {
                    fmt.Printf("Delivered message to topic %s [%d] at offset %v\n",
                        *m.TopicPartition.Topic, m.TopicPartition.Partition, m.TopicPartition.Offset)
                }
                return

            default:
                fmt.Printf("Ignored event: %s\n", ev)
            }
        }
    }()

    value := "Hello Go!"
    p.ProduceChannel() <- &kafka.Message{TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny}, Value: []byte(value)}

    // wait for delivery report goroutine to finish
    _ = <-doneChan

    p.Close()
}
err = p.Produce(&kafka.Message{ TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny}, Value: []byte(value), Headers: []kafka.Header{{Key: "myTestHeader", Value: []byte("header values are binary")}}, }, deliveryChan) e := <-deliveryChan m := e.(*kafka.Message) if m.TopicPartition.Error != nil { fmt.Printf("Delivery failed: %v\n", m.TopicPartition.Error) } else { fmt.Printf("Delivered message to topic %s [%d] at offset %v\n", *m.TopicPartition.Topic, m.TopicPartition.Partition, m.TopicPartition.Offset) } close(deliveryChan) } golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/000077500000000000000000000000001336406275100235505ustar00rootroot00000000000000golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/.gitignore000066400000000000000000000000621336406275100255360ustar00rootroot00000000000000testconf.json go_rdkafka_generr/go_rdkafka_generr golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/00version.go000066400000000000000000000040541336406275100257270ustar00rootroot00000000000000package kafka /** * Copyright 2016-2018 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import ( "fmt" ) /* #include //Minimum required librdkafka version. This is checked both during //build-time and runtime. //Make sure to keep the MIN_RD_KAFKA_VERSION, MIN_VER_ERRSTR and #error //defines and strings in sync. // #define MIN_RD_KAFKA_VERSION 0x0000b0500 #ifdef __APPLE__ #define MIN_VER_ERRSTR "confluent-kafka-go requires librdkafka v0.11.5 or later. Install the latest version of librdkafka from Homebrew by running `brew install librdkafka` or `brew upgrade librdkafka`" #else #define MIN_VER_ERRSTR "confluent-kafka-go requires librdkafka v0.11.5 or later. Install the latest version of librdkafka from the Confluent repositories, see http://docs.confluent.io/current/installation.html" #endif #if RD_KAFKA_VERSION < MIN_RD_KAFKA_VERSION #ifdef __APPLE__ #error "confluent-kafka-go requires librdkafka v0.11.5 or later. Install the latest version of librdkafka from Homebrew by running `brew install librdkafka` or `brew upgrade librdkafka`" #else #error "confluent-kafka-go requires librdkafka v0.11.5 or later. 

golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/README.md

# Information for confluent-kafka-go developers

Whenever librdkafka error codes are updated make sure to run generate before building:

```
  $ (cd go_rdkafka_generr && go install) && go generate
  $ go build
```


## Testing

Some of the tests included in this directory, the benchmark and integration tests in particular, require an existing Kafka cluster and a testconf.json configuration file to provide tests with bootstrap brokers, topic name, etc.

The format of testconf.json is a JSON object:
```
{
  "Brokers": "<bootstrap-brokers>",
  "Topic": "<test-topic-name>"
}
```

See testconf-example.json for an example and full set of available options.

To run unit-tests:
```
$ go test
```

To run benchmark tests:
```
$ go test -bench .
```

For the code coverage:
```
$ go test -coverprofile=coverage.out -bench=.
$ go tool cover -func=coverage.out
```


## Build tags (static linking)

Different build types are supported through Go build tags (`-tags ..`), these tags should be specified on the **application** build command.

 * `static` - Build with librdkafka linked statically (but librdkafka dependencies linked dynamically).
 * `static_all` - Build with all libraries linked statically.
 * neither - Build with librdkafka (and its dependencies) linked dynamically.


## Generating HTML documentation

To generate one-page HTML documentation run the mk/doc-gen.py script from the top-level directory. This script requires the beautifulsoup4 Python package.

```
$ source .../your/virtualenv/bin/activate
$ pip install beautifulsoup4
...
$ mk/doc-gen.py > kafka.html
```

golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/adminapi.go

/**
 * Copyright 2018 Confluent Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package kafka

import (
    "context"
    "fmt"
    "strings"
    "time"
    "unsafe"
)

/*
#include <stdlib.h>
#include <librdkafka/rdkafka.h>

static const rd_kafka_topic_result_t *
topic_result_by_idx (const rd_kafka_topic_result_t **topics, size_t cnt, size_t idx) {
    if (idx >= cnt)
        return NULL;
    return topics[idx];
}

static const rd_kafka_ConfigResource_t *
ConfigResource_by_idx (const rd_kafka_ConfigResource_t **res, size_t cnt, size_t idx) {
    if (idx >= cnt)
        return NULL;
    return res[idx];
}

static const rd_kafka_ConfigEntry_t *
ConfigEntry_by_idx (const rd_kafka_ConfigEntry_t **entries, size_t cnt, size_t idx) {
    if (idx >= cnt)
        return NULL;
    return entries[idx];
}
*/
import "C"

// AdminClient is derived from an existing Producer or Consumer
type AdminClient struct {
    handle    *handle
    isDerived bool // Derived from existing client handle
}

func durationToMilliseconds(t time.Duration) int {
    if t > 0 {
        return (int)(t.Seconds() * 1000.0)
    }
    return (int)(t)
}

// TopicResult provides per-topic operation result (error) information.
type TopicResult struct {
    // Topic name
    Topic string
    // Error, if any, of result. Check with `Error.Code() != ErrNoError`.
    Error Error
}

// String returns a human-readable representation of a TopicResult.
func (t TopicResult) String() string {
    if t.Error.code == 0 {
        return t.Topic
    }
    return fmt.Sprintf("%s (%s)", t.Topic, t.Error.str)
}

// TopicSpecification holds parameters for creating a new topic.
// TopicSpecification is analogous to NewTopic in the Java Topic Admin API.
type TopicSpecification struct {
    // Topic name to create.
    Topic string
    // Number of partitions in topic.
    NumPartitions int
    // Default replication factor for the topic's partitions, or zero
    // if an explicit ReplicaAssignment is set.
    ReplicationFactor int
    // (Optional) Explicit replica assignment. The outer array is
    // indexed by the partition number, while the inner per-partition array
    // contains the replica broker ids. The first broker in each
    // broker id list will be the preferred replica.
    ReplicaAssignment [][]int32
    // Topic configuration.
    Config map[string]string
}

// PartitionsSpecification holds parameters for creating additional partitions for a topic.
// PartitionsSpecification is analogous to NewPartitions in the Java Topic Admin API.
type PartitionsSpecification struct {
    // Topic to create more partitions for.
    Topic string
    // New partition count for topic, must be higher than current partition count.
    IncreaseTo int
    // (Optional) Explicit replica assignment. The outer array is
    // indexed by the new partition index (i.e., 0 for the first added
    // partition), while the inner per-partition array
    // contains the replica broker ids. The first broker in each
    // broker id list will be the preferred replica.
    ReplicaAssignment [][]int32
}

// ResourceType represents an Apache Kafka resource type
type ResourceType int

const (
    // ResourceUnknown - Unknown
    ResourceUnknown = ResourceType(C.RD_KAFKA_RESOURCE_UNKNOWN)
    // ResourceAny - match any resource type (DescribeConfigs)
    ResourceAny = ResourceType(C.RD_KAFKA_RESOURCE_ANY)
    // ResourceTopic - Topic
    ResourceTopic = ResourceType(C.RD_KAFKA_RESOURCE_TOPIC)
    // ResourceGroup - Group
    ResourceGroup = ResourceType(C.RD_KAFKA_RESOURCE_GROUP)
    // ResourceBroker - Broker
    ResourceBroker = ResourceType(C.RD_KAFKA_RESOURCE_BROKER)
)

// String returns the human-readable representation of a ResourceType
func (t ResourceType) String() string {
    return C.GoString(C.rd_kafka_ResourceType_name(C.rd_kafka_ResourceType_t(t)))
}

// ResourceTypeFromString translates a resource type name/string to
// a ResourceType value.
func ResourceTypeFromString(typeString string) (ResourceType, error) { switch strings.ToUpper(typeString) { case "ANY": return ResourceAny, nil case "TOPIC": return ResourceTopic, nil case "GROUP": return ResourceGroup, nil case "BROKER": return ResourceBroker, nil default: return ResourceUnknown, newGoError(ErrInvalidArg) } } // ConfigSource represents an Apache Kafka config source type ConfigSource int const ( // ConfigSourceUnknown is the default value ConfigSourceUnknown = ConfigSource(C.RD_KAFKA_CONFIG_SOURCE_UNKNOWN_CONFIG) // ConfigSourceDynamicTopic is dynamic topic config that is configured for a specific topic ConfigSourceDynamicTopic = ConfigSource(C.RD_KAFKA_CONFIG_SOURCE_DYNAMIC_TOPIC_CONFIG) // ConfigSourceDynamicBroker is dynamic broker config that is configured for a specific broker ConfigSourceDynamicBroker = ConfigSource(C.RD_KAFKA_CONFIG_SOURCE_DYNAMIC_BROKER_CONFIG) // ConfigSourceDynamicDefaultBroker is dynamic broker config that is configured as default for all brokers in the cluster ConfigSourceDynamicDefaultBroker = ConfigSource(C.RD_KAFKA_CONFIG_SOURCE_DYNAMIC_DEFAULT_BROKER_CONFIG) // ConfigSourceStaticBroker is static broker config provided as broker properties at startup (e.g. from server.properties file) ConfigSourceStaticBroker = ConfigSource(C.RD_KAFKA_CONFIG_SOURCE_STATIC_BROKER_CONFIG) // ConfigSourceDefault is built-in default configuration for configs that have a default value ConfigSourceDefault = ConfigSource(C.RD_KAFKA_CONFIG_SOURCE_DEFAULT_CONFIG) ) // String returns the human-readable representation of a ConfigSource type func (t ConfigSource) String() string { return C.GoString(C.rd_kafka_ConfigSource_name(C.rd_kafka_ConfigSource_t(t))) } // ConfigResource holds parameters for altering an Apache Kafka configuration resource type ConfigResource struct { // Type of resource to set. Type ResourceType // Name of resource to set. Name string // Config entries to set. // Configuration updates are atomic, any configuration property not provided // here will be reverted (by the broker) to its default value. // Use DescribeConfigs to retrieve the list of current configuration entry values. Config []ConfigEntry } // String returns a human-readable representation of a ConfigResource func (c ConfigResource) String() string { return fmt.Sprintf("Resource(%s, %s)", c.Type, c.Name) } // AlterOperation specifies the operation to perform on the ConfigEntry. // Currently only AlterOperationSet. type AlterOperation int const ( // AlterOperationSet sets/overwrites the configuration setting. AlterOperationSet = iota ) // String returns the human-readable representation of an AlterOperation func (o AlterOperation) String() string { switch o { case AlterOperationSet: return "Set" default: return fmt.Sprintf("Unknown%d?", int(o)) } } // ConfigEntry holds parameters for altering a resource's configuration. type ConfigEntry struct { // Name of configuration entry, e.g., topic configuration property name. Name string // Value of configuration entry. Value string // Operation to perform on the entry. Operation AlterOperation } // StringMapToConfigEntries creates a new map of ConfigEntry objects from the // provided string map. The AlterOperation is set on each created entry. 
func StringMapToConfigEntries(stringMap map[string]string, operation AlterOperation) []ConfigEntry { var ceList []ConfigEntry for k, v := range stringMap { ceList = append(ceList, ConfigEntry{Name: k, Value: v, Operation: operation}) } return ceList } // String returns a human-readable representation of a ConfigEntry. func (c ConfigEntry) String() string { return fmt.Sprintf("%v %s=\"%s\"", c.Operation, c.Name, c.Value) } // ConfigEntryResult contains the result of a single configuration entry from a // DescribeConfigs request. type ConfigEntryResult struct { // Name of configuration entry, e.g., topic configuration property name. Name string // Value of configuration entry. Value string // Source indicates the configuration source. Source ConfigSource // IsReadOnly indicates whether the configuration entry can be altered. IsReadOnly bool // IsSensitive indicates whether the configuration entry contains sensitive information, in which case the value will be unset. IsSensitive bool // IsSynonym indicates whether the configuration entry is a synonym for another configuration property. IsSynonym bool // Synonyms contains a map of configuration entries that are synonyms to this configuration entry. Synonyms map[string]ConfigEntryResult } // String returns a human-readable representation of a ConfigEntryResult. func (c ConfigEntryResult) String() string { return fmt.Sprintf("%s=\"%s\"", c.Name, c.Value) } // setFromC sets up a ConfigEntryResult from a C ConfigEntry func configEntryResultFromC(cEntry *C.rd_kafka_ConfigEntry_t) (entry ConfigEntryResult) { entry.Name = C.GoString(C.rd_kafka_ConfigEntry_name(cEntry)) cValue := C.rd_kafka_ConfigEntry_value(cEntry) if cValue != nil { entry.Value = C.GoString(cValue) } entry.Source = ConfigSource(C.rd_kafka_ConfigEntry_source(cEntry)) entry.IsReadOnly = cint2bool(C.rd_kafka_ConfigEntry_is_read_only(cEntry)) entry.IsSensitive = cint2bool(C.rd_kafka_ConfigEntry_is_sensitive(cEntry)) entry.IsSynonym = cint2bool(C.rd_kafka_ConfigEntry_is_synonym(cEntry)) var cSynCnt C.size_t cSyns := C.rd_kafka_ConfigEntry_synonyms(cEntry, &cSynCnt) if cSynCnt > 0 { entry.Synonyms = make(map[string]ConfigEntryResult) } for si := 0; si < int(cSynCnt); si++ { cSyn := C.ConfigEntry_by_idx(cSyns, cSynCnt, C.size_t(si)) Syn := configEntryResultFromC(cSyn) entry.Synonyms[Syn.Name] = Syn } return entry } // ConfigResourceResult provides the result for a resource from a AlterConfigs or // DescribeConfigs request. type ConfigResourceResult struct { // Type of returned result resource. Type ResourceType // Name of returned result resource. Name string // Error, if any, of returned result resource. Error Error // Config entries, if any, of returned result resource. Config map[string]ConfigEntryResult } // String returns a human-readable representation of a ConfigResourceResult. func (c ConfigResourceResult) String() string { if c.Error.Code() != 0 { return fmt.Sprintf("ResourceResult(%s, %s, \"%v\")", c.Type, c.Name, c.Error) } return fmt.Sprintf("ResourceResult(%s, %s, %d config(s))", c.Type, c.Name, len(c.Config)) } // waitResult waits for a result event on cQueue or the ctx to be cancelled, whichever happens // first. // The returned result event is checked for errors its error is returned if set. 
func (a *AdminClient) waitResult(ctx context.Context, cQueue *C.rd_kafka_queue_t, cEventType C.rd_kafka_event_type_t) (rkev *C.rd_kafka_event_t, err error) { resultChan := make(chan *C.rd_kafka_event_t) closeChan := make(chan bool) // never written to, just closed go func() { for { select { case _, ok := <-closeChan: if !ok { // Context cancelled/timed out close(resultChan) return } default: // Wait for result event for at most 50ms // to avoid blocking for too long if // context is cancelled. rkev := C.rd_kafka_queue_poll(cQueue, 50) if rkev != nil { resultChan <- rkev close(resultChan) return } } } }() select { case rkev = <-resultChan: // Result type check if cEventType != C.rd_kafka_event_type(rkev) { err = newErrorFromString(ErrInvalidType, fmt.Sprintf("Expected %d result event, not %d", (int)(cEventType), (int)(C.rd_kafka_event_type(rkev)))) C.rd_kafka_event_destroy(rkev) return nil, err } // Generic error handling cErr := C.rd_kafka_event_error(rkev) if cErr != 0 { err = newErrorFromCString(cErr, C.rd_kafka_event_error_string(rkev)) C.rd_kafka_event_destroy(rkev) return nil, err } close(closeChan) return rkev, nil case <-ctx.Done(): // signal close to go-routine close(closeChan) // wait for close from go-routine to make sure it is done // using cQueue before we return. rkev, ok := <-resultChan if ok { // throw away result since context was cancelled C.rd_kafka_event_destroy(rkev) } return nil, ctx.Err() } } // cToTopicResults converts a C topic_result_t array to Go TopicResult list. func (a *AdminClient) cToTopicResults(cTopicRes **C.rd_kafka_topic_result_t, cCnt C.size_t) (result []TopicResult, err error) { result = make([]TopicResult, int(cCnt)) for i := 0; i < int(cCnt); i++ { cTopic := C.topic_result_by_idx(cTopicRes, cCnt, C.size_t(i)) result[i].Topic = C.GoString(C.rd_kafka_topic_result_name(cTopic)) result[i].Error = newErrorFromCString( C.rd_kafka_topic_result_error(cTopic), C.rd_kafka_topic_result_error_string(cTopic)) } return result, nil } // cConfigResourceToResult converts a C ConfigResource result array to Go ConfigResourceResult func (a *AdminClient) cConfigResourceToResult(cRes **C.rd_kafka_ConfigResource_t, cCnt C.size_t) (result []ConfigResourceResult, err error) { result = make([]ConfigResourceResult, int(cCnt)) for i := 0; i < int(cCnt); i++ { cRes := C.ConfigResource_by_idx(cRes, cCnt, C.size_t(i)) result[i].Type = ResourceType(C.rd_kafka_ConfigResource_type(cRes)) result[i].Name = C.GoString(C.rd_kafka_ConfigResource_name(cRes)) result[i].Error = newErrorFromCString( C.rd_kafka_ConfigResource_error(cRes), C.rd_kafka_ConfigResource_error_string(cRes)) var cConfigCnt C.size_t cConfigs := C.rd_kafka_ConfigResource_configs(cRes, &cConfigCnt) if cConfigCnt > 0 { result[i].Config = make(map[string]ConfigEntryResult) } for ci := 0; ci < int(cConfigCnt); ci++ { cEntry := C.ConfigEntry_by_idx(cConfigs, cConfigCnt, C.size_t(ci)) entry := configEntryResultFromC(cEntry) result[i].Config[entry.Name] = entry } } return result, nil } // CreateTopics creates topics in cluster. // // The list of TopicSpecification objects define the per-topic partition count, replicas, etc. // // Topic creation is non-atomic and may succeed for some topics but fail for others, // make sure to check the result for topic-specific errors. // // Note: TopicSpecification is analogous to NewTopic in the Java Topic Admin API. 
func (a *AdminClient) CreateTopics(ctx context.Context, topics []TopicSpecification, options ...CreateTopicsAdminOption) (result []TopicResult, err error) { cTopics := make([]*C.rd_kafka_NewTopic_t, len(topics)) cErrstrSize := C.size_t(512) cErrstr := (*C.char)(C.malloc(cErrstrSize)) defer C.free(unsafe.Pointer(cErrstr)) // Convert Go TopicSpecifications to C TopicSpecifications for i, topic := range topics { var cReplicationFactor C.int if topic.ReplicationFactor == 0 { cReplicationFactor = -1 } else { cReplicationFactor = C.int(topic.ReplicationFactor) } if topic.ReplicaAssignment != nil { if cReplicationFactor != -1 { return nil, newErrorFromString(ErrInvalidArg, "TopicSpecification.ReplicationFactor and TopicSpecification.ReplicaAssignment are mutually exclusive") } if len(topic.ReplicaAssignment) != topic.NumPartitions { return nil, newErrorFromString(ErrInvalidArg, "TopicSpecification.ReplicaAssignment must contain exactly TopicSpecification.NumPartitions partitions") } } else if cReplicationFactor == -1 { return nil, newErrorFromString(ErrInvalidArg, "TopicSpecification.ReplicationFactor or TopicSpecification.ReplicaAssignment must be specified") } cTopics[i] = C.rd_kafka_NewTopic_new( C.CString(topic.Topic), C.int(topic.NumPartitions), cReplicationFactor, cErrstr, cErrstrSize) if cTopics[i] == nil { return nil, newErrorFromString(ErrInvalidArg, fmt.Sprintf("Topic %s: %s", topic.Topic, C.GoString(cErrstr))) } defer C.rd_kafka_NewTopic_destroy(cTopics[i]) for p, replicas := range topic.ReplicaAssignment { cReplicas := make([]C.int32_t, len(replicas)) for ri, replica := range replicas { cReplicas[ri] = C.int32_t(replica) } cErr := C.rd_kafka_NewTopic_set_replica_assignment( cTopics[i], C.int32_t(p), (*C.int32_t)(&cReplicas[0]), C.size_t(len(cReplicas)), cErrstr, cErrstrSize) if cErr != 0 { return nil, newCErrorFromString(cErr, fmt.Sprintf("Failed to set replica assignment for topic %s partition %d: %s", topic.Topic, p, C.GoString(cErrstr))) } } for key, value := range topic.Config { cErr := C.rd_kafka_NewTopic_set_config( cTopics[i], C.CString(key), C.CString(value)) if cErr != 0 { return nil, newCErrorFromString(cErr, fmt.Sprintf("Failed to set config %s=%s for topic %s", key, value, topic.Topic)) } } } // Convert Go AdminOptions (if any) to C AdminOptions genericOptions := make([]AdminOption, len(options)) for i := range options { genericOptions[i] = options[i] } cOptions, err := adminOptionsSetup(a.handle, C.RD_KAFKA_ADMIN_OP_CREATETOPICS, genericOptions) if err != nil { return nil, err } defer C.rd_kafka_AdminOptions_destroy(cOptions) // Create temporary queue for async operation cQueue := C.rd_kafka_queue_new(a.handle.rk) defer C.rd_kafka_queue_destroy(cQueue) // Asynchronous call C.rd_kafka_CreateTopics( a.handle.rk, (**C.rd_kafka_NewTopic_t)(&cTopics[0]), C.size_t(len(cTopics)), cOptions, cQueue) // Wait for result, error or context timeout rkev, err := a.waitResult(ctx, cQueue, C.RD_KAFKA_EVENT_CREATETOPICS_RESULT) if err != nil { return nil, err } defer C.rd_kafka_event_destroy(rkev) cRes := C.rd_kafka_event_CreateTopics_result(rkev) // Convert result from C to Go var cCnt C.size_t cTopicRes := C.rd_kafka_CreateTopics_result_topics(cRes, &cCnt) return a.cToTopicResults(cTopicRes, cCnt) } // DeleteTopics deletes a batch of topics. // // This operation is not transactional and may succeed for a subset of topics while // failing others. // It may take several seconds after the DeleteTopics result returns success for // all the brokers to become aware that the topics are gone. 
During this time, // topic metadata and configuration may continue to return information about deleted topics. // // Requires broker version >= 0.10.1.0 func (a *AdminClient) DeleteTopics(ctx context.Context, topics []string, options ...DeleteTopicsAdminOption) (result []TopicResult, err error) { cTopics := make([]*C.rd_kafka_DeleteTopic_t, len(topics)) cErrstrSize := C.size_t(512) cErrstr := (*C.char)(C.malloc(cErrstrSize)) defer C.free(unsafe.Pointer(cErrstr)) // Convert Go DeleteTopics to C DeleteTopics for i, topic := range topics { cTopics[i] = C.rd_kafka_DeleteTopic_new(C.CString(topic)) if cTopics[i] == nil { return nil, newErrorFromString(ErrInvalidArg, fmt.Sprintf("Invalid arguments for topic %s", topic)) } defer C.rd_kafka_DeleteTopic_destroy(cTopics[i]) } // Convert Go AdminOptions (if any) to C AdminOptions genericOptions := make([]AdminOption, len(options)) for i := range options { genericOptions[i] = options[i] } cOptions, err := adminOptionsSetup(a.handle, C.RD_KAFKA_ADMIN_OP_DELETETOPICS, genericOptions) if err != nil { return nil, err } defer C.rd_kafka_AdminOptions_destroy(cOptions) // Create temporary queue for async operation cQueue := C.rd_kafka_queue_new(a.handle.rk) defer C.rd_kafka_queue_destroy(cQueue) // Asynchronous call C.rd_kafka_DeleteTopics( a.handle.rk, (**C.rd_kafka_DeleteTopic_t)(&cTopics[0]), C.size_t(len(cTopics)), cOptions, cQueue) // Wait for result, error or context timeout rkev, err := a.waitResult(ctx, cQueue, C.RD_KAFKA_EVENT_DELETETOPICS_RESULT) if err != nil { return nil, err } defer C.rd_kafka_event_destroy(rkev) cRes := C.rd_kafka_event_DeleteTopics_result(rkev) // Convert result from C to Go var cCnt C.size_t cTopicRes := C.rd_kafka_DeleteTopics_result_topics(cRes, &cCnt) return a.cToTopicResults(cTopicRes, cCnt) } // CreatePartitions creates additional partitions for topics. 
func (a *AdminClient) CreatePartitions(ctx context.Context, partitions []PartitionsSpecification, options ...CreatePartitionsAdminOption) (result []TopicResult, err error) { cParts := make([]*C.rd_kafka_NewPartitions_t, len(partitions)) cErrstrSize := C.size_t(512) cErrstr := (*C.char)(C.malloc(cErrstrSize)) defer C.free(unsafe.Pointer(cErrstr)) // Convert Go PartitionsSpecification to C NewPartitions for i, part := range partitions { cParts[i] = C.rd_kafka_NewPartitions_new(C.CString(part.Topic), C.size_t(part.IncreaseTo), cErrstr, cErrstrSize) if cParts[i] == nil { return nil, newErrorFromString(ErrInvalidArg, fmt.Sprintf("Topic %s: %s", part.Topic, C.GoString(cErrstr))) } defer C.rd_kafka_NewPartitions_destroy(cParts[i]) for pidx, replicas := range part.ReplicaAssignment { cReplicas := make([]C.int32_t, len(replicas)) for ri, replica := range replicas { cReplicas[ri] = C.int32_t(replica) } cErr := C.rd_kafka_NewPartitions_set_replica_assignment( cParts[i], C.int32_t(pidx), (*C.int32_t)(&cReplicas[0]), C.size_t(len(cReplicas)), cErrstr, cErrstrSize) if cErr != 0 { return nil, newCErrorFromString(cErr, fmt.Sprintf("Failed to set replica assignment for topic %s new partition index %d: %s", part.Topic, pidx, C.GoString(cErrstr))) } } } // Convert Go AdminOptions (if any) to C AdminOptions genericOptions := make([]AdminOption, len(options)) for i := range options { genericOptions[i] = options[i] } cOptions, err := adminOptionsSetup(a.handle, C.RD_KAFKA_ADMIN_OP_CREATEPARTITIONS, genericOptions) if err != nil { return nil, err } defer C.rd_kafka_AdminOptions_destroy(cOptions) // Create temporary queue for async operation cQueue := C.rd_kafka_queue_new(a.handle.rk) defer C.rd_kafka_queue_destroy(cQueue) // Asynchronous call C.rd_kafka_CreatePartitions( a.handle.rk, (**C.rd_kafka_NewPartitions_t)(&cParts[0]), C.size_t(len(cParts)), cOptions, cQueue) // Wait for result, error or context timeout rkev, err := a.waitResult(ctx, cQueue, C.RD_KAFKA_EVENT_CREATEPARTITIONS_RESULT) if err != nil { return nil, err } defer C.rd_kafka_event_destroy(rkev) cRes := C.rd_kafka_event_CreatePartitions_result(rkev) // Convert result from C to Go var cCnt C.size_t cTopicRes := C.rd_kafka_CreatePartitions_result_topics(cRes, &cCnt) return a.cToTopicResults(cTopicRes, cCnt) } // AlterConfigs alters/updates cluster resource configuration. // // Updates are not transactional so they may succeed for a subset // of the provided resources while others fail. // The configuration for a particular resource is updated atomically, // replacing values using the provided ConfigEntrys and reverting // unspecified ConfigEntrys to their default values. // // Requires broker version >=0.11.0.0 // // AlterConfigs will replace all existing configuration for // the provided resources with the new configuration given, // reverting all other configuration to their default values. // // Multiple resources and resource types may be set, but at most one // resource of type ResourceBroker is allowed per call since these // resource requests must be sent to the broker specified in the resource. 
func (a *AdminClient) AlterConfigs(ctx context.Context, resources []ConfigResource, options ...AlterConfigsAdminOption) (result []ConfigResourceResult, err error) { cRes := make([]*C.rd_kafka_ConfigResource_t, len(resources)) cErrstrSize := C.size_t(512) cErrstr := (*C.char)(C.malloc(cErrstrSize)) defer C.free(unsafe.Pointer(cErrstr)) // Convert Go ConfigResources to C ConfigResources for i, res := range resources { cRes[i] = C.rd_kafka_ConfigResource_new( C.rd_kafka_ResourceType_t(res.Type), C.CString(res.Name)) if cRes[i] == nil { return nil, newErrorFromString(ErrInvalidArg, fmt.Sprintf("Invalid arguments for resource %v", res)) } defer C.rd_kafka_ConfigResource_destroy(cRes[i]) for _, entry := range res.Config { var cErr C.rd_kafka_resp_err_t switch entry.Operation { case AlterOperationSet: cErr = C.rd_kafka_ConfigResource_set_config( cRes[i], C.CString(entry.Name), C.CString(entry.Value)) default: panic(fmt.Sprintf("Invalid ConfigEntry.Operation: %v", entry.Operation)) } if cErr != 0 { return nil, newCErrorFromString(cErr, fmt.Sprintf("Failed to add configuration %s: %s", entry, C.GoString(C.rd_kafka_err2str(cErr)))) } } } // Convert Go AdminOptions (if any) to C AdminOptions genericOptions := make([]AdminOption, len(options)) for i := range options { genericOptions[i] = options[i] } cOptions, err := adminOptionsSetup(a.handle, C.RD_KAFKA_ADMIN_OP_ALTERCONFIGS, genericOptions) if err != nil { return nil, err } defer C.rd_kafka_AdminOptions_destroy(cOptions) // Create temporary queue for async operation cQueue := C.rd_kafka_queue_new(a.handle.rk) defer C.rd_kafka_queue_destroy(cQueue) // Asynchronous call C.rd_kafka_AlterConfigs( a.handle.rk, (**C.rd_kafka_ConfigResource_t)(&cRes[0]), C.size_t(len(cRes)), cOptions, cQueue) // Wait for result, error or context timeout rkev, err := a.waitResult(ctx, cQueue, C.RD_KAFKA_EVENT_ALTERCONFIGS_RESULT) if err != nil { return nil, err } defer C.rd_kafka_event_destroy(rkev) cResult := C.rd_kafka_event_AlterConfigs_result(rkev) // Convert results from C to Go var cCnt C.size_t cResults := C.rd_kafka_AlterConfigs_result_resources(cResult, &cCnt) return a.cConfigResourceToResult(cResults, cCnt) } // DescribeConfigs retrieves configuration for cluster resources. // // The returned configuration includes default values, use // ConfigEntryResult.IsDefault or ConfigEntryResult.Source to distinguish // default values from manually configured settings. // // The value of config entries where .IsSensitive is true // will always be nil to avoid disclosing sensitive // information, such as security settings. // // Configuration entries where .IsReadOnly is true can't be modified // (with AlterConfigs). // // Synonym configuration entries are returned if the broker supports // it (broker version >= 1.1.0). See .Synonyms. // // Requires broker version >=0.11.0.0 // // Multiple resources and resource types may be requested, but at most // one resource of type ResourceBroker is allowed per call // since these resource requests must be sent to the broker specified // in the resource. 
func (a *AdminClient) DescribeConfigs(ctx context.Context, resources []ConfigResource, options ...DescribeConfigsAdminOption) (result []ConfigResourceResult, err error) { cRes := make([]*C.rd_kafka_ConfigResource_t, len(resources)) cErrstrSize := C.size_t(512) cErrstr := (*C.char)(C.malloc(cErrstrSize)) defer C.free(unsafe.Pointer(cErrstr)) // Convert Go ConfigResources to C ConfigResources for i, res := range resources { cRes[i] = C.rd_kafka_ConfigResource_new( C.rd_kafka_ResourceType_t(res.Type), C.CString(res.Name)) if cRes[i] == nil { return nil, newErrorFromString(ErrInvalidArg, fmt.Sprintf("Invalid arguments for resource %v", res)) } defer C.rd_kafka_ConfigResource_destroy(cRes[i]) } // Convert Go AdminOptions (if any) to C AdminOptions genericOptions := make([]AdminOption, len(options)) for i := range options { genericOptions[i] = options[i] } cOptions, err := adminOptionsSetup(a.handle, C.RD_KAFKA_ADMIN_OP_DESCRIBECONFIGS, genericOptions) if err != nil { return nil, err } defer C.rd_kafka_AdminOptions_destroy(cOptions) // Create temporary queue for async operation cQueue := C.rd_kafka_queue_new(a.handle.rk) defer C.rd_kafka_queue_destroy(cQueue) // Asynchronous call C.rd_kafka_DescribeConfigs( a.handle.rk, (**C.rd_kafka_ConfigResource_t)(&cRes[0]), C.size_t(len(cRes)), cOptions, cQueue) // Wait for result, error or context timeout rkev, err := a.waitResult(ctx, cQueue, C.RD_KAFKA_EVENT_DESCRIBECONFIGS_RESULT) if err != nil { return nil, err } defer C.rd_kafka_event_destroy(rkev) cResult := C.rd_kafka_event_DescribeConfigs_result(rkev) // Convert results from C to Go var cCnt C.size_t cResults := C.rd_kafka_DescribeConfigs_result_resources(cResult, &cCnt) return a.cConfigResourceToResult(cResults, cCnt) } // GetMetadata queries broker for cluster and topic metadata. // If topic is non-nil only information about that topic is returned, else if // allTopics is false only information about locally used topics is returned, // else information about all topics is returned. // GetMetadata is equivalent to listTopics, describeTopics and describeCluster in the Java API. func (a *AdminClient) GetMetadata(topic *string, allTopics bool, timeoutMs int) (*Metadata, error) { return getMetadata(a, topic, allTopics, timeoutMs) } // String returns a human readable name for an AdminClient instance func (a *AdminClient) String() string { return fmt.Sprintf("admin-%s", a.handle.String()) } // get_handle implements the Handle interface func (a *AdminClient) gethandle() *handle { return a.handle } // Close an AdminClient instance. func (a *AdminClient) Close() { if a.isDerived { // Derived AdminClient needs no cleanup. a.handle = &handle{} return } a.handle.cleanup() C.rd_kafka_destroy(a.handle.rk) } // NewAdminClient creats a new AdminClient instance with a new underlying client instance func NewAdminClient(conf *ConfigMap) (*AdminClient, error) { err := versionCheck() if err != nil { return nil, err } a := &AdminClient{} a.handle = &handle{} // Convert ConfigMap to librdkafka conf_t cConf, err := conf.convert() if err != nil { return nil, err } cErrstr := (*C.char)(C.malloc(C.size_t(256))) defer C.free(unsafe.Pointer(cErrstr)) // Create librdkafka producer instance. The Producer is somewhat cheaper than // the consumer, but any instance type can be used for Admin APIs. 
a.handle.rk = C.rd_kafka_new(C.RD_KAFKA_PRODUCER, cConf, cErrstr, 256) if a.handle.rk == nil { return nil, newErrorFromCString(C.RD_KAFKA_RESP_ERR__INVALID_ARG, cErrstr) } a.isDerived = false a.handle.setup() return a, nil } // NewAdminClientFromProducer derives a new AdminClient from an existing Producer instance. // The AdminClient will use the same configuration and connections as the parent instance. func NewAdminClientFromProducer(p *Producer) (a *AdminClient, err error) { if p.handle.rk == nil { return nil, newErrorFromString(ErrInvalidArg, "Can't derive AdminClient from closed producer") } a = &AdminClient{} a.handle = &p.handle a.isDerived = true return a, nil } // NewAdminClientFromConsumer derives a new AdminClient from an existing Consumer instance. // The AdminClient will use the same configuration and connections as the parent instance. func NewAdminClientFromConsumer(c *Consumer) (a *AdminClient, err error) { if c.handle.rk == nil { return nil, newErrorFromString(ErrInvalidArg, "Can't derive AdminClient from closed consumer") } a = &AdminClient{} a.handle = &c.handle a.isDerived = true return a, nil } golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/adminapi_test.go000066400000000000000000000173651336406275100267340ustar00rootroot00000000000000/** * Copyright 2018 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package kafka import ( "context" "strings" "testing" "time" ) func testAdminAPIs(what string, a *AdminClient, t *testing.T) { t.Logf("AdminClient API testing on %s: %s", a, what) expDuration, err := time.ParseDuration("0.1s") if err != nil { t.Fatalf("%s", err) } confStrings := map[string]string{ "some.topic.config": "unchecked", "these.are.verified": "on the broker", "and.this.is": "just", "a": "unit test"} // Correct input, fail with timeout ctx, cancel := context.WithTimeout(context.Background(), expDuration) defer cancel() res, err := a.CreateTopics( ctx, []TopicSpecification{ { Topic: "mytopic", NumPartitions: 7, ReplicationFactor: 3, }, { Topic: "mytopic2", NumPartitions: 2, ReplicaAssignment: [][]int32{ []int32{1, 2, 3}, []int32{3, 2, 1}, }, }, { Topic: "mytopic3", NumPartitions: 10000, ReplicationFactor: 90, Config: confStrings, }, }) if res != nil || err == nil { t.Fatalf("Expected CreateTopics to fail, but got result: %v, err: %v", res, err) } if ctx.Err() != context.DeadlineExceeded { t.Fatalf("Expected DeadlineExceeded, not %v, %v", ctx.Err(), err) } // Incorrect input, fail with ErrInvalidArg ctx, cancel = context.WithTimeout(context.Background(), expDuration) defer cancel() res, err = a.CreateTopics( ctx, []TopicSpecification{ { // Must not specify both ReplicationFactor and ReplicaAssignment Topic: "mytopic", NumPartitions: 2, ReplicationFactor: 3, ReplicaAssignment: [][]int32{ []int32{1, 2, 3}, []int32{3, 2, 1}, }, }, }) if res != nil || err == nil { t.Fatalf("Expected CreateTopics to fail, but got result: %v, err: %v", res, err) } if ctx.Err() != nil { t.Fatalf("Did not expect context to fail: %v", ctx.Err()) } if err.(Error).Code() != ErrInvalidArg { t.Fatalf("Expected ErrInvalidArg, not %v", err) } // Incorrect input, fail with ErrInvalidArg ctx, cancel = context.WithTimeout(context.Background(), expDuration) defer cancel() res, err = a.CreateTopics( ctx, []TopicSpecification{ { // ReplicaAssignment must be same length as Numpartitions Topic: "mytopic", NumPartitions: 7, ReplicaAssignment: [][]int32{ []int32{1, 2, 3}, []int32{3, 2, 1}, }, }, }) if res != nil || err == nil { t.Fatalf("Expected CreateTopics to fail, but got result: %v, err: %v", res, err) } if ctx.Err() != nil { t.Fatalf("Did not expect context to fail: %v", ctx.Err()) } if err.(Error).Code() != ErrInvalidArg { t.Fatalf("Expected ErrInvalidArg, not %v", err) } // Correct input, using options ctx, cancel = context.WithTimeout(context.Background(), expDuration) defer cancel() res, err = a.CreateTopics( ctx, []TopicSpecification{ { Topic: "mytopic4", NumPartitions: 9, ReplicaAssignment: [][]int32{ []int32{1}, []int32{2}, []int32{3}, []int32{4}, []int32{1}, []int32{2}, []int32{3}, []int32{4}, []int32{1}, }, Config: map[string]string{ "some.topic.config": "unchecked", "these.are.verified": "on the broker", "and.this.is": "just", "a": "unit test", }, }, }, SetAdminValidateOnly(false)) if res != nil || err == nil { t.Fatalf("Expected CreateTopics to fail, but got result: %v, err: %v", res, err) } if ctx.Err() != context.DeadlineExceeded { t.Fatalf("Expected DeadlineExceeded, not %v", ctx.Err()) } // // Remaining APIs // Timeout code is identical for all APIs, no need to test // them for each API. 
// ctx, cancel = context.WithTimeout(context.Background(), expDuration) defer cancel() res, err = a.CreatePartitions( ctx, []PartitionsSpecification{ { Topic: "topic", IncreaseTo: 19, ReplicaAssignment: [][]int32{ []int32{3234522}, []int32{99999}, }, }, { Topic: "topic2", IncreaseTo: 2, ReplicaAssignment: [][]int32{ []int32{99999}, }, }, }) if res != nil || err == nil { t.Fatalf("Expected CreatePartitions to fail, but got result: %v, err: %v", res, err) } if ctx.Err() != context.DeadlineExceeded { t.Fatalf("Expected DeadlineExceeded, not %v", ctx.Err()) } ctx, cancel = context.WithTimeout(context.Background(), expDuration) defer cancel() res, err = a.DeleteTopics( ctx, []string{"topic1", "topic2"}) if res != nil || err == nil { t.Fatalf("Expected DeleteTopics to fail, but got result: %v, err: %v", res, err) } if ctx.Err() != context.DeadlineExceeded { t.Fatalf("Expected DeadlineExceeded, not %v for error %v", ctx.Err(), err) } ctx, cancel = context.WithTimeout(context.Background(), expDuration) defer cancel() cres, err := a.AlterConfigs( ctx, []ConfigResource{{Type: ResourceTopic, Name: "topic"}}) if cres != nil || err == nil { t.Fatalf("Expected AlterConfigs to fail, but got result: %v, err: %v", cres, err) } if ctx.Err() != context.DeadlineExceeded { t.Fatalf("Expected DeadlineExceeded, not %v", ctx.Err()) } ctx, cancel = context.WithTimeout(context.Background(), expDuration) defer cancel() cres, err = a.DescribeConfigs( ctx, []ConfigResource{{Type: ResourceTopic, Name: "topic"}}) if cres != nil || err == nil { t.Fatalf("Expected DescribeConfigs to fail, but got result: %v, err: %v", cres, err) } if ctx.Err() != context.DeadlineExceeded { t.Fatalf("Expected DeadlineExceeded, not %v", ctx.Err()) } } // TestAdminAPIs dry-tests most Admin APIs, no broker is needed. 
func TestAdminAPIs(t *testing.T) { a, err := NewAdminClient(&ConfigMap{}) if err != nil { t.Fatalf("%s", err) } testAdminAPIs("Non-derived, no config", a, t) a.Close() a, err = NewAdminClient(&ConfigMap{"retries": 1234}) if err != nil { t.Fatalf("%s", err) } testAdminAPIs("Non-derived, config", a, t) a.Close() // Test derived clients c, err := NewConsumer(&ConfigMap{"group.id": "test"}) if err != nil { t.Fatalf("%s", err) } defer c.Close() a, err = NewAdminClientFromConsumer(c) if err != nil { t.Fatalf("%s", err) } if !strings.Contains(a.String(), c.String()) { t.Fatalf("Expected derived client %s to have similar name to parent %s", a, c) } testAdminAPIs("Derived from consumer", a, t) a.Close() a, err = NewAdminClientFromConsumer(c) if err != nil { t.Fatalf("%s", err) } if !strings.Contains(a.String(), c.String()) { t.Fatalf("Expected derived client %s to have similar name to parent %s", a, c) } testAdminAPIs("Derived from same consumer", a, t) a.Close() p, err := NewProducer(&ConfigMap{}) if err != nil { t.Fatalf("%s", err) } defer p.Close() a, err = NewAdminClientFromProducer(p) if err != nil { t.Fatalf("%s", err) } if !strings.Contains(a.String(), p.String()) { t.Fatalf("Expected derived client %s to have similar name to parent %s", a, p) } testAdminAPIs("Derived from Producer", a, t) a.Close() a, err = NewAdminClientFromProducer(p) if err != nil { t.Fatalf("%s", err) } if !strings.Contains(a.String(), p.String()) { t.Fatalf("Expected derived client %s to have similar name to parent %s", a, p) } testAdminAPIs("Derived from same Producer", a, t) a.Close() } golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/adminoptions.go000066400000000000000000000160651336406275100266130ustar00rootroot00000000000000/** * Copyright 2018 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package kafka import ( "fmt" "time" "unsafe" ) /* #include #include */ import "C" // AdminOptionOperationTimeout sets the broker's operation timeout, such as the // timeout for CreateTopics to complete the creation of topics on the controller // before returning a result to the application. // // CreateTopics, DeleteTopics, CreatePartitions: // a value 0 will return immediately after triggering topic // creation, while > 0 will wait this long for topic creation to propagate // in cluster. // // Default: 0 (return immediately). // // Valid for CreateTopics, DeleteTopics, CreatePartitions. 
type AdminOptionOperationTimeout struct { isSet bool val time.Duration } func (ao AdminOptionOperationTimeout) supportsCreateTopics() { } func (ao AdminOptionOperationTimeout) supportsDeleteTopics() { } func (ao AdminOptionOperationTimeout) supportsCreatePartitions() { } func (ao AdminOptionOperationTimeout) apply(cOptions *C.rd_kafka_AdminOptions_t) error { if !ao.isSet { return nil } cErrstrSize := C.size_t(512) cErrstr := (*C.char)(C.malloc(cErrstrSize)) defer C.free(unsafe.Pointer(cErrstr)) cErr := C.rd_kafka_AdminOptions_set_operation_timeout( cOptions, C.int(durationToMilliseconds(ao.val)), cErrstr, cErrstrSize) if cErr != 0 { C.rd_kafka_AdminOptions_destroy(cOptions) return newCErrorFromString(cErr, fmt.Sprintf("Failed to set operation timeout: %s", C.GoString(cErrstr))) } return nil } // SetAdminOperationTimeout sets the broker's operation timeout, such as the // timeout for CreateTopics to complete the creation of topics on the controller // before returning a result to the application. // // CreateTopics, DeleteTopics, CreatePartitions: // a value 0 will return immediately after triggering topic // creation, while > 0 will wait this long for topic creation to propagate // in cluster. // // Default: 0 (return immediately). // // Valid for CreateTopics, DeleteTopics, CreatePartitions. func SetAdminOperationTimeout(t time.Duration) (ao AdminOptionOperationTimeout) { ao.isSet = true ao.val = t return ao } // AdminOptionRequestTimeout sets the overall request timeout, including broker // lookup, request transmission, operation time on broker, and response. // // Default: `socket.timeout.ms`. // // Valid for all Admin API methods. type AdminOptionRequestTimeout struct { isSet bool val time.Duration } func (ao AdminOptionRequestTimeout) supportsCreateTopics() { } func (ao AdminOptionRequestTimeout) supportsDeleteTopics() { } func (ao AdminOptionRequestTimeout) supportsCreatePartitions() { } func (ao AdminOptionRequestTimeout) supportsAlterConfigs() { } func (ao AdminOptionRequestTimeout) supportsDescribeConfigs() { } func (ao AdminOptionRequestTimeout) apply(cOptions *C.rd_kafka_AdminOptions_t) error { if !ao.isSet { return nil } cErrstrSize := C.size_t(512) cErrstr := (*C.char)(C.malloc(cErrstrSize)) defer C.free(unsafe.Pointer(cErrstr)) cErr := C.rd_kafka_AdminOptions_set_request_timeout( cOptions, C.int(durationToMilliseconds(ao.val)), cErrstr, cErrstrSize) if cErr != 0 { C.rd_kafka_AdminOptions_destroy(cOptions) return newCErrorFromString(cErr, fmt.Sprintf("%s", C.GoString(cErrstr))) } return nil } // SetAdminRequestTimeout sets the overall request timeout, including broker // lookup, request transmission, operation time on broker, and response. // // Default: `socket.timeout.ms`. // // Valid for all Admin API methods. func SetAdminRequestTimeout(t time.Duration) (ao AdminOptionRequestTimeout) { ao.isSet = true ao.val = t return ao } // AdminOptionValidateOnly tells the broker to only validate the request, // without performing the requested operation (create topics, etc). // // Default: false. 
// // Valid for CreateTopics, CreatePartitions, AlterConfigs type AdminOptionValidateOnly struct { isSet bool val bool } func (ao AdminOptionValidateOnly) supportsCreateTopics() { } func (ao AdminOptionValidateOnly) supportsCreatePartitions() { } func (ao AdminOptionValidateOnly) supportsAlterConfigs() { } func (ao AdminOptionValidateOnly) apply(cOptions *C.rd_kafka_AdminOptions_t) error { if !ao.isSet { return nil } cErrstrSize := C.size_t(512) cErrstr := (*C.char)(C.malloc(cErrstrSize)) defer C.free(unsafe.Pointer(cErrstr)) cErr := C.rd_kafka_AdminOptions_set_validate_only( cOptions, bool2cint(ao.val), cErrstr, cErrstrSize) if cErr != 0 { C.rd_kafka_AdminOptions_destroy(cOptions) return newCErrorFromString(cErr, fmt.Sprintf("%s", C.GoString(cErrstr))) } return nil } // SetAdminValidateOnly tells the broker to only validate the request, // without performing the requested operation (create topics, etc). // // Default: false. // // Valid for CreateTopics, DeleteTopics, CreatePartitions, AlterConfigs func SetAdminValidateOnly(validateOnly bool) (ao AdminOptionValidateOnly) { ao.isSet = true ao.val = validateOnly return ao } // CreateTopicsAdminOption - see setters. // // See SetAdminRequestTimeout, SetAdminOperationTimeout, SetAdminValidateOnly. type CreateTopicsAdminOption interface { supportsCreateTopics() apply(cOptions *C.rd_kafka_AdminOptions_t) error } // DeleteTopicsAdminOption - see setters. // // See SetAdminRequestTimeout, SetAdminOperationTimeout. type DeleteTopicsAdminOption interface { supportsDeleteTopics() apply(cOptions *C.rd_kafka_AdminOptions_t) error } // CreatePartitionsAdminOption - see setters. // // See SetAdminRequestTimeout, SetAdminOperationTimeout, SetAdminValidateOnly. type CreatePartitionsAdminOption interface { supportsCreatePartitions() apply(cOptions *C.rd_kafka_AdminOptions_t) error } // AlterConfigsAdminOption - see setters. // // See SetAdminRequestTimeout, SetAdminValidateOnly, SetAdminIncremental. type AlterConfigsAdminOption interface { supportsAlterConfigs() apply(cOptions *C.rd_kafka_AdminOptions_t) error } // DescribeConfigsAdminOption - see setters. // // See SetAdminRequestTimeout. type DescribeConfigsAdminOption interface { supportsDescribeConfigs() apply(cOptions *C.rd_kafka_AdminOptions_t) error } // AdminOption is a generic type not to be used directly. // // See CreateTopicsAdminOption et.al. type AdminOption interface { apply(cOptions *C.rd_kafka_AdminOptions_t) error } func adminOptionsSetup(h *handle, opType C.rd_kafka_admin_op_t, options []AdminOption) (*C.rd_kafka_AdminOptions_t, error) { cOptions := C.rd_kafka_AdminOptions_new(h.rk, opType) for _, opt := range options { if opt == nil { continue } err := opt.apply(cOptions) if err != nil { return nil, err } } return cOptions, nil } golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/api.html000066400000000000000000002414011336406275100252110ustar00rootroot00000000000000 kafka - The Go Programming Language
...

Package kafka

import "github.com/confluentinc/confluent-kafka-go/kafka"
Overview
Index

Overview ▾

Package kafka provides high-level Apache Kafka producers and consumers using bindings on top of the librdkafka C library.

High-level Consumer

* Decide if you want to read messages and events from the `.Events()` channel (set `"go.events.channel.enable": true`) or by calling `.Poll()`.

* Create a Consumer with `kafka.NewConsumer()` providing at least the `bootstrap.servers` and `group.id` configuration properties.

* Call `.Subscribe()` (or `.SubscribeTopics()` to subscribe to multiple topics) to join the group with the specified subscription set. Subscriptions are atomic; calling `.Subscribe*()` again will leave the group and rejoin with the new set of topics.

* Start reading events and messages from either the `.Events` channel or by calling `.Poll()`.

* When the group has rebalanced each client member is assigned a (sub-)set of topic+partitions. By default the consumer will start fetching messages for its assigned partitions at this point, but your application may enable rebalance events to get an insight into what the assigned partitions were as well as set the initial offsets. To do this you need to pass `"go.application.rebalance.enable": true` to the `NewConsumer()` call mentioned above. You will (eventually) see a `kafka.AssignedPartitions` event with the assigned partition set. You can optionally modify the initial offsets (they'll default to stored offsets, and if there are no previously stored offsets they fall back to `"default.topic.config": ConfigMap{"auto.offset.reset": ..}`, which defaults to the `latest` message) and then call `.Assign(partitions)` to start consuming. If you don't need to modify the initial offsets you will not need to call `.Assign()`; the client will do so automatically for you if you don't.

* As messages are fetched they will be made available on either the `.Events` channel or by calling `.Poll()`, look for event type `*kafka.Message`.

* Handle messages, events and errors to your liking.

* When you are done consuming call `.Close()` to commit final offsets and leave the consumer group.
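
Putting these steps together, a minimal poll-based consumer might look like the following sketch; the broker address, group id and topic name are placeholders:

package main

import (
    "fmt"
    "os"

    "github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
    c, err := kafka.NewConsumer(&kafka.ConfigMap{
        "bootstrap.servers": "localhost:9092", // placeholder broker
        "group.id":          "myGroup",        // placeholder group
    })
    if err != nil {
        fmt.Fprintf(os.Stderr, "Failed to create consumer: %v\n", err)
        os.Exit(1)
    }
    defer c.Close() // commits final offsets and leaves the group

    c.SubscribeTopics([]string{"myTopic"}, nil)

    for {
        switch ev := c.Poll(100).(type) {
        case *kafka.Message:
            fmt.Printf("Message on %s: %s\n", ev.TopicPartition, string(ev.Value))
        case kafka.Error:
            fmt.Fprintf(os.Stderr, "Error: %v\n", ev)
            return
        }
    }
}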

Producer

* Create a Producer with `kafka.NewProducer()` providing at least the `bootstrap.servers` configuration properties.

* Messages may now be produced either by sending a `*kafka.Message` on the `.ProduceChannel()` or by calling `.Produce()`.

* Producing is an asynchronous operation so the client notifies the application of per-message produce success or failure through something called delivery reports. Delivery reports are by default emitted on the `.Events()` channel as `*kafka.Message` and you should check `msg.TopicPartition.Error` for `nil` to find out if the message was successfully delivered or not. It is also possible to direct delivery reports to alternate channels by providing a non-nil `chan Event` channel to `.Produce()`. If no delivery reports are wanted they can be completely disabled by setting configuration property `"go.delivery.reports": false`.

* When you are done producing messages you will need to make sure all messages are indeed delivered to the broker (or have failed); remember that this is an asynchronous client so some of your messages may be lingering in internal channels or transmission queues. To do this you can either keep track of the messages you've produced and wait for their corresponding delivery reports, or call the convenience function `.Flush()` that will block until all message deliveries are done or the provided timeout elapses.

* Finally call `.Close()` to decommission the producer.
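
A minimal producer following the steps above might look like this sketch; the broker address and topic name are placeholders:

package main

import (
    "fmt"

    "github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
    p, err := kafka.NewProducer(&kafka.ConfigMap{"bootstrap.servers": "localhost:9092"})
    if err != nil {
        panic(err)
    }
    defer p.Close()

    // Delivery reports are emitted on the Events() channel.
    go func() {
        for e := range p.Events() {
            if m, ok := e.(*kafka.Message); ok {
                if m.TopicPartition.Error != nil {
                    fmt.Printf("Delivery failed: %v\n", m.TopicPartition.Error)
                } else {
                    fmt.Printf("Delivered to %v\n", m.TopicPartition)
                }
            }
        }
    }()

    topic := "myTopic"
    if err := p.Produce(&kafka.Message{
        TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
        Value:          []byte("hello"),
    }, nil); err != nil {
        fmt.Printf("Produce failed: %v\n", err)
    }

    // Block until outstanding deliveries complete or 15s elapses.
    p.Flush(15 * 1000)
}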

Events

Apart from emitting messages and delivery reports the client also communicates with the application through a number of different event types. An application may choose to handle or ignore these events.

Consumer events

* `*kafka.Message` - a fetched message.

* `AssignedPartitions` - The assigned partition set for this client following a rebalance. Requires `go.application.rebalance.enable`

* `RevokedPartitions` - The counterpart to `AssignedPartitions` following a rebalance. `AssignedPartitions` and `RevokedPartitions` are symmetrical. Requires `go.application.rebalance.enable`

* `PartitionEOF` - Consumer has reached the end of a partition. NOTE: The consumer will keep trying to fetch new messages for the partition.

* `OffsetsCommitted` - Offset commit results (when `enable.auto.commit` is enabled).

Producer events

* `*kafka.Message` - delivery report for produced message. Check `.TopicPartition.Error` for delivery result.

Generic events for both Consumer and Producer

* `KafkaError` - client (error codes are prefixed with _) or broker error. These errors are normally just informational since the client will try its best to automatically recover (eventually).

Hint: If your application registers a signal notification (signal.Notify), make sure the signals channel is buffered to avoid possible complications with blocking Poll() calls.
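
As a sketch of that hint, a buffered signal channel combined with a Poll() loop could look like this (assumes an existing consumer `c` and the fmt, os, os/signal and syscall imports):

sigchan := make(chan os.Signal, 1) // buffered, as recommended above
signal.Notify(sigchan, syscall.SIGINT, syscall.SIGTERM)

run := true
for run {
    select {
    case sig := <-sigchan:
        fmt.Printf("Caught signal %v: terminating\n", sig)
        run = false
    default:
        if ev := c.Poll(100); ev != nil {
            fmt.Printf("%v\n", ev) // handle the event types listed above
        }
    }
}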

Index ▾

Constants
func LibraryVersion() (int, string)
type AssignedPartitions
func (e AssignedPartitions) String() string
type BrokerMetadata
type ConfigMap
func (m ConfigMap) Set(kv string) error
func (m ConfigMap) SetKey(key string, value ConfigValue) error
type ConfigValue
type Consumer
func NewConsumer(conf *ConfigMap) (*Consumer, error)
func (c *Consumer) Assign(partitions []TopicPartition) (err error)
func (c *Consumer) Close() (err error)
func (c *Consumer) Commit() ([]TopicPartition, error)
func (c *Consumer) CommitMessage(m *Message) ([]TopicPartition, error)
func (c *Consumer) CommitOffsets(offsets []TopicPartition) ([]TopicPartition, error)
func (c *Consumer) Events() chan Event
func (c *Consumer) GetMetadata(topic *string, allTopics bool, timeoutMs int) (*Metadata, error)
func (c *Consumer) Poll(timeoutMs int) (event Event)
func (c *Consumer) QueryWatermarkOffsets(topic string, partition int32, timeoutMs int) (low, high int64, err error)
func (c *Consumer) String() string
func (c *Consumer) Subscribe(topic string, rebalanceCb RebalanceCb) error
func (c *Consumer) SubscribeTopics(topics []string, rebalanceCb RebalanceCb) (err error)
func (c *Consumer) Unassign() (err error)
func (c *Consumer) Unsubscribe() (err error)
type Error
func (e Error) Code() ErrorCode
func (e Error) Error() string
func (e Error) String() string
type ErrorCode
func (c ErrorCode) String() string
type Event
type Handle
type Message
func (m *Message) String() string
type Metadata
type Offset
func NewOffset(offset interface{}) (Offset, error)
func OffsetTail(relativeOffset Offset) Offset
func (o Offset) Set(offset interface{}) error
func (o Offset) String() string
type OffsetsCommitted
func (o OffsetsCommitted) String() string
type PartitionEOF
func (p PartitionEOF) String() string
type PartitionMetadata
type Producer
func NewProducer(conf *ConfigMap) (*Producer, error)
func (p *Producer) Close()
func (p *Producer) Events() chan Event
func (p *Producer) Flush(timeoutMs int) int
func (p *Producer) GetMetadata(topic *string, allTopics bool, timeoutMs int) (*Metadata, error)
func (p *Producer) Len() int
func (p *Producer) Produce(msg *Message, deliveryChan chan Event) error
func (p *Producer) ProduceChannel() chan *Message
func (p *Producer) QueryWatermarkOffsets(topic string, partition int32, timeoutMs int) (low, high int64, err error)
func (p *Producer) String() string
type RebalanceCb
type RevokedPartitions
func (e RevokedPartitions) String() string
type TimestampType
func (t TimestampType) String() string
type TopicMetadata
type TopicPartition
func (p TopicPartition) String() string

Package files

build_dynamic.go config.go consumer.go error.go event.go generated_errors.go handle.go kafka.go message.go metadata.go misc.go producer.go testhelpers.go

Constants

const (
    // TimestampNotAvailable indicates no timestamp was set, or not available due to lacking broker support
    TimestampNotAvailable = TimestampType(C.RD_KAFKA_TIMESTAMP_NOT_AVAILABLE)
    // TimestampCreateTime indicates timestamp set by producer (source time)
    TimestampCreateTime = TimestampType(C.RD_KAFKA_TIMESTAMP_CREATE_TIME)
    // TimestampLogAppendTime indicates timestamp set by broker (store time)
    TimestampLogAppendTime = TimestampType(C.RD_KAFKA_TIMESTAMP_LOG_APPEND_TIME)
)
const OffsetBeginning = Offset(C.RD_KAFKA_OFFSET_BEGINNING)

Earliest offset (logical)

const OffsetEnd = Offset(C.RD_KAFKA_OFFSET_END)

Latest offset (logical)

const OffsetInvalid = Offset(C.RD_KAFKA_OFFSET_INVALID)

Invalid/unspecified offset

const OffsetStored = Offset(C.RD_KAFKA_OFFSET_STORED)

Use stored offset

const PartitionAny = int32(C.RD_KAFKA_PARTITION_UA)

Any partition (for partitioning), or unspecified value (for all other cases)
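
To illustrate how these logical offsets are typically used, here is a small sketch (assumes an existing consumer `c`; the topic name is a placeholder) that assigns a partition starting from the earliest available message:

topic := "myTopic"
err := c.Assign([]kafka.TopicPartition{
    {Topic: &topic, Partition: 0, Offset: kafka.OffsetBeginning},
})
if err != nil {
    // handle assignment error
}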

func LibraryVersion

func LibraryVersion() (int, string)

LibraryVersion returns the underlying librdkafka library version as a (version_int, version_str) tuple.
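
For example, to log the linked librdkafka version at startup:

ver, verstr := kafka.LibraryVersion()
fmt.Printf("using librdkafka %s (0x%x)\n", verstr, ver)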

type AssignedPartitions

type AssignedPartitions struct {
    Partitions []TopicPartition
}

AssignedPartitions consumer group rebalance event: assigned partition set

func (AssignedPartitions) String

func (e AssignedPartitions) String() string

type BrokerMetadata

type BrokerMetadata struct {
    ID   int32
    Host string
    Port int
}

BrokerMetadata contains per-broker metadata

type ConfigMap

type ConfigMap map[string]ConfigValue

ConfigMap is a map containing standard librdkafka configuration properties as documented in: https://github.com/edenhill/librdkafka/tree/master/CONFIGURATION.md

The special property "default.topic.config" (optional) is a ConfigMap containing default topic configuration properties.

func (ConfigMap) Set

func (m ConfigMap) Set(kv string) error

Set implements flag.Set (command line argument parser) as a convenience for `-X key=value` config.
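
For example, a `-X`-style command-line value can be applied directly; the property shown is arbitrary:

cm := kafka.ConfigMap{}
if err := cm.Set("bootstrap.servers=localhost:9092"); err != nil {
    // the argument must be on "key=value" form
}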

func (ConfigMap) SetKey

func (m ConfigMap) SetKey(key string, value ConfigValue) error

SetKey sets configuration property key to value. For user convenience a key prefixed with {topic}. will be set on the "default.topic.config" sub-map.
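
A short sketch of both forms; the property names are examples only:

cm := kafka.ConfigMap{}
cm.SetKey("group.id", "myGroup")
// The "{topic}." prefix routes the property into "default.topic.config":
cm.SetKey("{topic}.auto.offset.reset", "earliest")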

type ConfigValue

type ConfigValue interface{}

ConfigValue supports the following types:

bool, int, string, any type with the standard String() interface

type Consumer

type Consumer struct {
    // contains filtered or unexported fields
}

Consumer implements a High-level Apache Kafka Consumer instance

func NewConsumer

func NewConsumer(conf *ConfigMap) (*Consumer, error)

NewConsumer creates a new high-level Consumer instance.

Supported special configuration properties:

go.application.rebalance.enable (bool, false) - Forward rebalancing responsibility to application via the Events() channel.
                                     If set to true the app must handle the AssignedPartitions and
                                     RevokedPartitions events and call Assign() and Unassign()
                                     respectively.
go.events.channel.enable (bool, false) - Enable the Events() channel. Messages and events will be pushed on the Events() channel and the Poll() interface will be disabled. (Experimental)
go.events.channel.size (int, 1000) - Events() channel size

WARNING: Due to the buffering nature of channels (and queues in general) the use of the events channel risks receiving outdated events and messages. Minimizing go.events.channel.size reduces the risk and number of outdated events and messages but does not eliminate it completely. With a channel size of 1 at most one event or message may be outdated.
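
A sketch of channel-based consumption with a reduced channel size, per the warning above (broker, group and topic are placeholders):

c, err := kafka.NewConsumer(&kafka.ConfigMap{
    "bootstrap.servers":        "localhost:9092",
    "group.id":                 "myGroup",
    "go.events.channel.enable": true,
    "go.events.channel.size":   10, // small buffer limits outdated events
})
if err != nil {
    panic(err)
}
c.SubscribeTopics([]string{"myTopic"}, nil)
for ev := range c.Events() {
    if m, ok := ev.(*kafka.Message); ok {
        fmt.Printf("%s\n", string(m.Value))
    }
}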

func (*Consumer) Assign

func (c *Consumer) Assign(partitions []TopicPartition) (err error)

Assign an atomic set of partitions to consume. This replaces the current assignment.

func (*Consumer) Close

func (c *Consumer) Close() (err error)

Close Consumer instance. The object is no longer usable after this call.

func (*Consumer) Commit

func (c *Consumer) Commit() ([]TopicPartition, error)

Commit offsets for currently assigned partitions. This is a blocking call. Returns the committed offsets on success.

func (*Consumer) CommitMessage

func (c *Consumer) CommitMessage(m *Message) ([]TopicPartition, error)

CommitMessage commits the offset based on the provided message. This is a blocking call. Returns the committed offsets on success.
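
A sketch of at-least-once processing with manual commits; it assumes `enable.auto.commit` is disabled, an existing consumer `c`, and a hypothetical application handler `handle()`:

if m, ok := c.Poll(100).(*kafka.Message); ok {
    handle(m) // hypothetical handler; process before committing
    if _, err := c.CommitMessage(m); err != nil {
        // commit failed; the message may be redelivered
    }
}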

func (*Consumer) CommitOffsets

func (c *Consumer) CommitOffsets(offsets []TopicPartition) ([]TopicPartition, error)

CommitOffsets commits the provided list of offsets. This is a blocking call. Returns the committed offsets on success.

func (*Consumer) Events

func (c *Consumer) Events() chan Event

Events returns the Events channel (if enabled)

func (*Consumer) GetMetadata

func (c *Consumer) GetMetadata(topic *string, allTopics bool, timeoutMs int) (*Metadata, error)

GetMetadata queries broker for cluster and topic metadata. If topic is non-nil, only information about that topic is returned; else if allTopics is false, only information about locally used topics is returned; else information about all topics is returned.
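
For example, to list all topics known to the cluster (assumes an existing consumer `c`; field names follow this package's Metadata type):

md, err := c.GetMetadata(nil, true, 5000)
if err == nil {
    for name, t := range md.Topics {
        fmt.Printf("%s: %d partition(s)\n", name, len(t.Partitions))
    }
}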

func (*Consumer) Poll

func (c *Consumer) Poll(timeoutMs int) (event Event)

Poll the consumer for messages or events.

Will block for at most timeoutMs milliseconds

The following callbacks may be triggered:

Subscribe()'s rebalanceCb

Returns nil on timeout, else an Event

func (*Consumer) QueryWatermarkOffsets

func (c *Consumer) QueryWatermarkOffsets(topic string, partition int32, timeoutMs int) (low, high int64, err error)

QueryWatermarkOffsets returns the broker's low and high offsets for the given topic and partition.
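
For example, to estimate how many messages a partition currently holds (topic and partition are placeholders):

low, high, err := c.QueryWatermarkOffsets("myTopic", 0, 5000)
if err == nil {
    fmt.Printf("partition 0 spans offsets [%d, %d): ~%d message(s)\n", low, high, high-low)
}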

func (*Consumer) String

func (c *Consumer) String() string

String returns a human readable name for a Consumer instance

func (*Consumer) Subscribe

func (c *Consumer) Subscribe(topic string, rebalanceCb RebalanceCb) error

Subscribe to a single topic. This replaces the current subscription.

func (*Consumer) SubscribeTopics

func (c *Consumer) SubscribeTopics(topics []string, rebalanceCb RebalanceCb) (err error)

SubscribeTopics subscribes to the provided list of topics. This replaces the current subscription.
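
A sketch with an optional rebalance callback; pass nil instead to use the default handling:

err := c.SubscribeTopics([]string{"topicA", "topicB"},
    func(c *kafka.Consumer, ev kafka.Event) error {
        fmt.Printf("Rebalance event: %v\n", ev)
        return nil
    })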

func (*Consumer) Unassign

func (c *Consumer) Unassign() (err error)

Unassign the current set of partitions to consume.

func (*Consumer) Unsubscribe

func (c *Consumer) Unsubscribe() (err error)

Unsubscribe from the current subscription, if any.

type Error

type Error struct {
    // contains filtered or unexported fields
}

Error provides a Kafka-specific error container

func (Error) Code

func (e Error) Code() ErrorCode

Code returns the ErrorCode of an Error

func (Error) Error

func (e Error) Error() string

Error returns a human readable representation of an Error. Same as Error.String()

func (Error) String

func (e Error) String() string

String returns a human readable representation of an Error

type ErrorCode

type ErrorCode int

ErrorCode is the integer representation of local and broker error codes

const (
    // ErrBadMsg Local: Bad message format
    ErrBadMsg ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__BAD_MSG)
    // ErrBadCompression Local: Invalid compressed data
    ErrBadCompression ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__BAD_COMPRESSION)
    // ErrDestroy Local: Broker handle destroyed
    ErrDestroy ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__DESTROY)
    // ErrFail Local: Communication failure with broker
    ErrFail ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__FAIL)
    // ErrTransport Local: Broker transport failure
    ErrTransport ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__TRANSPORT)
    // ErrCritSysResource Local: Critical system resource failure
    ErrCritSysResource ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__CRIT_SYS_RESOURCE)
    // ErrResolve Local: Host resolution failure
    ErrResolve ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__RESOLVE)
    // ErrMsgTimedOut Local: Message timed out
    ErrMsgTimedOut ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__MSG_TIMED_OUT)
    // ErrPartitionEOF Broker: No more messages
    ErrPartitionEOF ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__PARTITION_EOF)
    // ErrUnknownPartition Local: Unknown partition
    ErrUnknownPartition ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__UNKNOWN_PARTITION)
    // ErrFs Local: File or filesystem error
    ErrFs ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__FS)
    // ErrUnknownTopic Local: Unknown topic
    ErrUnknownTopic ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__UNKNOWN_TOPIC)
    // ErrAllBrokersDown Local: All broker connections are down
    ErrAllBrokersDown ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__ALL_BROKERS_DOWN)
    // ErrInvalidArg Local: Invalid argument or configuration
    ErrInvalidArg ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__INVALID_ARG)
    // ErrTimedOut Local: Timed out
    ErrTimedOut ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__TIMED_OUT)
    // ErrQueueFull Local: Queue full
    ErrQueueFull ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__QUEUE_FULL)
    // ErrIsrInsuff Local: ISR count insufficient
    ErrIsrInsuff ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__ISR_INSUFF)
    // ErrNodeUpdate Local: Broker node update
    ErrNodeUpdate ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__NODE_UPDATE)
    // ErrSsl Local: SSL error
    ErrSsl ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__SSL)
    // ErrWaitCoord Local: Waiting for coordinator
    ErrWaitCoord ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__WAIT_COORD)
    // ErrUnknownGroup Local: Unknown group
    ErrUnknownGroup ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__UNKNOWN_GROUP)
    // ErrInProgress Local: Operation in progress
    ErrInProgress ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__IN_PROGRESS)
    // ErrPrevInProgress Local: Previous operation in progress
    ErrPrevInProgress ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__PREV_IN_PROGRESS)
    // ErrExistingSubscription Local: Existing subscription
    ErrExistingSubscription ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__EXISTING_SUBSCRIPTION)
    // ErrAssignPartitions Local: Assign partitions
    ErrAssignPartitions ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__ASSIGN_PARTITIONS)
    // ErrRevokePartitions Local: Revoke partitions
    ErrRevokePartitions ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__REVOKE_PARTITIONS)
    // ErrConflict Local: Conflicting use
    ErrConflict ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__CONFLICT)
    // ErrState Local: Erroneous state
    ErrState ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__STATE)
    // ErrUnknownProtocol Local: Unknown protocol
    ErrUnknownProtocol ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__UNKNOWN_PROTOCOL)
    // ErrNotImplemented Local: Not implemented
    ErrNotImplemented ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__NOT_IMPLEMENTED)
    // ErrAuthentication Local: Authentication failure
    ErrAuthentication ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__AUTHENTICATION)
    // ErrNoOffset Local: No offset stored
    ErrNoOffset ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__NO_OFFSET)
    // ErrOutdated Local: Outdated
    ErrOutdated ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__OUTDATED)
    // ErrTimedOutQueue Local: Timed out in queue
    ErrTimedOutQueue ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__TIMED_OUT_QUEUE)
    // ErrUnknown Unknown broker error
    ErrUnknown ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_UNKNOWN)
    // ErrNoError Success
    ErrNoError ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_NO_ERROR)
    // ErrOffsetOutOfRange Broker: Offset out of range
    ErrOffsetOutOfRange ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_OFFSET_OUT_OF_RANGE)
    // ErrInvalidMsg Broker: Invalid message
    ErrInvalidMsg ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_INVALID_MSG)
    // ErrUnknownTopicOrPart Broker: Unknown topic or partition
    ErrUnknownTopicOrPart ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_UNKNOWN_TOPIC_OR_PART)
    // ErrInvalidMsgSize Broker: Invalid message size
    ErrInvalidMsgSize ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_INVALID_MSG_SIZE)
    // ErrLeaderNotAvailable Broker: Leader not available
    ErrLeaderNotAvailable ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_LEADER_NOT_AVAILABLE)
    // ErrNotLeaderForPartition Broker: Not leader for partition
    ErrNotLeaderForPartition ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_NOT_LEADER_FOR_PARTITION)
    // ErrRequestTimedOut Broker: Request timed out
    ErrRequestTimedOut ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_REQUEST_TIMED_OUT)
    // ErrBrokerNotAvailable Broker: Broker not available
    ErrBrokerNotAvailable ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_BROKER_NOT_AVAILABLE)
    // ErrReplicaNotAvailable Broker: Replica not available
    ErrReplicaNotAvailable ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_REPLICA_NOT_AVAILABLE)
    // ErrMsgSizeTooLarge Broker: Message size too large
    ErrMsgSizeTooLarge ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_MSG_SIZE_TOO_LARGE)
    // ErrStaleCtrlEpoch Broker: StaleControllerEpochCode
    ErrStaleCtrlEpoch ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_STALE_CTRL_EPOCH)
    // ErrOffsetMetadataTooLarge Broker: Offset metadata string too large
    ErrOffsetMetadataTooLarge ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_OFFSET_METADATA_TOO_LARGE)
    // ErrNetworkException Broker: Broker disconnected before response received
    ErrNetworkException ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_NETWORK_EXCEPTION)
    // ErrGroupLoadInProgress Broker: Group coordinator load in progress
    ErrGroupLoadInProgress ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_GROUP_LOAD_IN_PROGRESS)
    // ErrGroupCoordinatorNotAvailable Broker: Group coordinator not available
    ErrGroupCoordinatorNotAvailable ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_GROUP_COORDINATOR_NOT_AVAILABLE)
    // ErrNotCoordinatorForGroup Broker: Not coordinator for group
    ErrNotCoordinatorForGroup ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_NOT_COORDINATOR_FOR_GROUP)
    // ErrTopicException Broker: Invalid topic
    ErrTopicException ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_TOPIC_EXCEPTION)
    // ErrRecordListTooLarge Broker: Message batch larger than configured server segment size
    ErrRecordListTooLarge ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_RECORD_LIST_TOO_LARGE)
    // ErrNotEnoughReplicas Broker: Not enough in-sync replicas
    ErrNotEnoughReplicas ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_NOT_ENOUGH_REPLICAS)
    // ErrNotEnoughReplicasAfterAppend Broker: Message(s) written to insufficient number of in-sync replicas
    ErrNotEnoughReplicasAfterAppend ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_NOT_ENOUGH_REPLICAS_AFTER_APPEND)
    // ErrInvalidRequiredAcks Broker: Invalid required acks value
    ErrInvalidRequiredAcks ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_INVALID_REQUIRED_ACKS)
    // ErrIllegalGeneration Broker: Specified group generation id is not valid
    ErrIllegalGeneration ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_ILLEGAL_GENERATION)
    // ErrInconsistentGroupProtocol Broker: Inconsistent group protocol
    ErrInconsistentGroupProtocol ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_INCONSISTENT_GROUP_PROTOCOL)
    // ErrInvalidGroupID Broker: Invalid group.id
    ErrInvalidGroupID ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_INVALID_GROUP_ID)
    // ErrUnknownMemberID Broker: Unknown member
    ErrUnknownMemberID ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_UNKNOWN_MEMBER_ID)
    // ErrInvalidSessionTimeout Broker: Invalid session timeout
    ErrInvalidSessionTimeout ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_INVALID_SESSION_TIMEOUT)
    // ErrRebalanceInProgress Broker: Group rebalance in progress
    ErrRebalanceInProgress ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_REBALANCE_IN_PROGRESS)
    // ErrInvalidCommitOffsetSize Broker: Commit offset data size is not valid
    ErrInvalidCommitOffsetSize ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_INVALID_COMMIT_OFFSET_SIZE)
    // ErrTopicAuthorizationFailed Broker: Topic authorization failed
    ErrTopicAuthorizationFailed ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_TOPIC_AUTHORIZATION_FAILED)
    // ErrGroupAuthorizationFailed Broker: Group authorization failed
    ErrGroupAuthorizationFailed ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_GROUP_AUTHORIZATION_FAILED)
    // ErrClusterAuthorizationFailed Broker: Cluster authorization failed
    ErrClusterAuthorizationFailed ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_CLUSTER_AUTHORIZATION_FAILED)
    // ErrInvalidTimestamp Broker: Invalid timestamp
    ErrInvalidTimestamp ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_INVALID_TIMESTAMP)
    // ErrUnsupportedSaslMechanism Broker: Unsupported SASL mechanism
    ErrUnsupportedSaslMechanism ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_UNSUPPORTED_SASL_MECHANISM)
    // ErrIllegalSaslState Broker: Request not valid in current SASL state
    ErrIllegalSaslState ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_ILLEGAL_SASL_STATE)
    // ErrUnsupportedVersion Broker: API version not supported
    ErrUnsupportedVersion ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_UNSUPPORTED_VERSION)
)

func (ErrorCode) String

func (c ErrorCode) String() string

String returns a human-readable representation of an error code

type Event

type Event interface {
    // String returns a human-readable representation of the event
    String() string
}

Event is a generic interface for all events emitted by the client
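
Concrete event types are distinguished with a type switch. A sketch for a channel-based consumer created with go.events.channel.enable=true and go.application.rebalance.enable=true (so the application must call Assign()/Unassign() itself):

    for ev := range c.Events() {
        switch e := ev.(type) {
        case kafka.AssignedPartitions:
            c.Assign(e.Partitions)
        case kafka.RevokedPartitions:
            c.Unassign()
        case *kafka.Message:
            fmt.Printf("%s\n", e.Value)
        case kafka.Error:
            fmt.Printf("Error: %v\n", e)
        }
    }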

type Handle

type Handle interface {
    // contains filtered or unexported methods
}

Handle represents a generic client handle containing common parts for both Producer and Consumer.

type Message

type Message struct {
    TopicPartition TopicPartition
    Value          []byte
    Key            []byte
    Timestamp      time.Time
    TimestampType  TimestampType
    Opaque         interface{}
}

Message represents a Kafka message

func (*Message) String

func (m *Message) String() string

String returns a human-readable representation of a Message. Key and payload are not represented.

type Metadata

type Metadata struct {
    Brokers []BrokerMetadata
    Topics  map[string]TopicMetadata

    OriginatingBroker BrokerMetadata
}

Metadata contains broker and topic metadata for all (matching) topics

type Offset

type Offset int64

Offset type (int64) with support for canonical names

func NewOffset

func NewOffset(offset interface{}) (Offset, error)

NewOffset creates a new Offset using the provided logical string, or an absolute int64 offset value. Logical offsets: "beginning", "earliest", "end", "latest", "unset", "invalid", "stored"
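
A sketch of both forms; the values are illustrative:

    off, err := kafka.NewOffset("earliest") // logical name
    if err == nil {
        fmt.Println(off)
    }
    off, err = kafka.NewOffset(int64(12345)) // absolute offset
    if err == nil {
        fmt.Println(off)
    }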

func OffsetTail

func OffsetTail(relativeOffset Offset) Offset

OffsetTail returns the logical offset relativeOffset from the current end of the partition
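
For example, to start reading 100 messages before the current end of partition 0, a sketch with an illustrative topic name and an existing Consumer c:

    topic := "mytopic"
    err := c.Assign([]kafka.TopicPartition{
        {Topic: &topic, Partition: 0, Offset: kafka.OffsetTail(100)},
    })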

func (Offset) Set

func (o Offset) Set(offset interface{}) error

Set offset value, see NewOffset()

func (Offset) String

func (o Offset) String() string

type OffsetsCommitted

type OffsetsCommitted struct {
    Error   error
    Offsets []TopicPartition
}

OffsetsCommitted reports committed offsets

func (OffsetsCommitted) String

func (o OffsetsCommitted) String() string

type PartitionEOF

type PartitionEOF TopicPartition

PartitionEOF consumer reached end of partition

func (PartitionEOF) String

func (p PartitionEOF) String() string

type PartitionMetadata

type PartitionMetadata struct {
    ID       int32
    Error    Error
    Leader   int32
    Replicas []int32
    Isrs     []int32
}

PartitionMetadata contains per-partition metadata

type Producer

type Producer struct {
    // contains filtered or unexported fields
}

Producer implements a high-level Apache Kafka Producer instance

func NewProducer

func NewProducer(conf *ConfigMap) (*Producer, error)

NewProducer creates a new high-level Producer instance.

conf is a *ConfigMap with standard librdkafka configuration properties, as documented in https://github.com/edenhill/librdkafka/tree/master/CONFIGURATION.md

Supported special configuration properties:

go.batch.producer (bool, false) - Enable batch producer (experimental for increased performance).
                                  These batches do not relate to Kafka message batches in any way.
go.delivery.reports (bool, true) - Forward per-message delivery reports to the
                                   Events() channel.
go.produce.channel.size (int, 1000000) - ProduceChannel() buffer size (in number of messages)
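
A minimal construction sketch; the broker address is an assumption:

    p, err := kafka.NewProducer(&kafka.ConfigMap{
        "bootstrap.servers": "localhost:9092", // assumed local broker
    })
    if err != nil {
        fmt.Printf("Failed to create producer: %v\n", err)
        return
    }
    defer p.Close()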

func (*Producer) Close

func (p *Producer) Close()

Close a Producer instance. The Producer object or its channels are no longer usable after this call.

func (*Producer) Events

func (p *Producer) Events() chan Event

Events returns the Events channel (read)

func (*Producer) Flush

func (p *Producer) Flush(timeoutMs int) int

Flush and wait for outstanding messages and requests to complete delivery. Includes messages on ProduceChannel. Runs until the number of outstanding messages reaches zero or timeoutMs elapses. Returns the number of outstanding events still un-flushed.
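
A common shutdown pattern is to flush before closing, so queued messages get a chance to be delivered; a sketch with a 15 s bound:

    remaining := p.Flush(15 * 1000)
    if remaining > 0 {
        fmt.Printf("%d message(s) still un-flushed\n", remaining)
    }
    p.Close()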

func (*Producer) GetMetadata

func (p *Producer) GetMetadata(topic *string, allTopics bool, timeoutMs int) (*Metadata, error)

GetMetadata queries broker for cluster and topic metadata. If topic is non-nil, only information about that topic is returned; otherwise, if allTopics is false, only information about locally used topics is returned; otherwise, information about all topics is returned.

func (*Producer) Len

func (p *Producer) Len() int

Len returns the number of messages and requests waiting to be transmitted to the broker as well as delivery reports queued for the application. Includes messages on ProduceChannel.

func (*Producer) Produce

func (p *Producer) Produce(msg *Message, deliveryChan chan Event) error

Produce a single message. This is an asynchronous call that enqueues the message on the internal transmit queue, thus returning immediately. The delivery report will be sent on the provided deliveryChan if specified, or on the Producer object's Events() channel if not. Returns an error if the message could not be enqueued.
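
A sketch that produces one message and waits for its delivery report on a dedicated channel (topic name and payload are illustrative):

    topic := "mytopic"
    deliveryChan := make(chan kafka.Event)
    err := p.Produce(&kafka.Message{
        TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
        Value:          []byte("hello"),
    }, deliveryChan)
    if err == nil {
        m := (<-deliveryChan).(*kafka.Message)
        if m.TopicPartition.Error != nil {
            fmt.Printf("Delivery failed: %v\n", m.TopicPartition.Error)
        }
    }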

func (*Producer) ProduceChannel

func (p *Producer) ProduceChannel() chan *Message

ProduceChannel returns the produce *Message channel (write)

func (*Producer) QueryWatermarkOffsets

func (p *Producer) QueryWatermarkOffsets(topic string, partition int32, timeoutMs int) (low, high int64, err error)

QueryWatermarkOffsets returns the broker's low and high offsets for the given topic and partition.

func (*Producer) String

func (p *Producer) String() string

String returns a human-readable name for a Producer instance

type RebalanceCb

type RebalanceCb func(*Consumer, Event) error

RebalanceCb provides a per-Subscribe*() rebalance event callback. The passed Event will be either AssignedPartitions or RevokedPartitions
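
A sketch of a callback that takes over assignment handling; if the callback does not call Assign()/Unassign() itself, the default assignment is applied after it returns:

    cb := func(c *kafka.Consumer, ev kafka.Event) error {
        switch e := ev.(type) {
        case kafka.AssignedPartitions:
            fmt.Printf("Assigned: %v\n", e.Partitions)
            c.Assign(e.Partitions)
        case kafka.RevokedPartitions:
            fmt.Printf("Revoked: %v\n", e.Partitions)
            c.Unassign()
        }
        return nil
    }
    err := c.SubscribeTopics([]string{"mytopic"}, cb)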

type RevokedPartitions

type RevokedPartitions struct {
    Partitions []TopicPartition
}

RevokedPartitions consumer group rebalance event: revoked partition set

func (RevokedPartitions) String

func (e RevokedPartitions) String() string

type TimestampType

type TimestampType int

TimestampType is the Message timestamp type or source

func (TimestampType) String

func (t TimestampType) String() string

type TopicMetadata

type TopicMetadata struct {
    Topic      string
    Partitions []PartitionMetadata
    Error      Error
}

TopicMetadata contains per-topic metadata

type TopicPartition

type TopicPartition struct {
    Topic     *string
    Partition int32
    Offset    Offset
    Error     error
}

TopicPartition is a generic placeholder for a Topic+Partition and optionally Offset.

func (TopicPartition) String

func (p TopicPartition) String() string
golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/build_dynamic.go000066400000000000000000000001371336406275100267030ustar00rootroot00000000000000// +build !static // +build !static_all package kafka // #cgo pkg-config: rdkafka import "C" golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/build_static.go000066400000000000000000000001451336406275100265450ustar00rootroot00000000000000// +build static // +build !static_all package kafka // #cgo pkg-config: rdkafka-static import "C" golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/build_static_all.go000066400000000000000000000001761336406275100274010ustar00rootroot00000000000000// +build !static // +build static_all package kafka // #cgo pkg-config: rdkafka-static // #cgo LDFLAGS: -static import "C" golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/config.go000066400000000000000000000147071336406275100253550ustar00rootroot00000000000000package kafka /** * Copyright 2016 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import ( "fmt" "reflect" "strings" "unsafe" ) /* #include #include */ import "C" // ConfigValue supports the following types: // bool, int, string, any type with the standard String() interface type ConfigValue interface{} // ConfigMap is a map contaning standard librdkafka configuration properties as documented in: // https://github.com/edenhill/librdkafka/tree/master/CONFIGURATION.md // // The special property "default.topic.config" (optional) is a ConfigMap containing default topic // configuration properties. type ConfigMap map[string]ConfigValue // SetKey sets configuration property key to value. // For user convenience a key prefixed with {topic}. will be // set on the "default.topic.config" sub-map. func (m ConfigMap) SetKey(key string, value ConfigValue) error { if strings.HasPrefix(key, "{topic}.") { _, found := m["default.topic.config"] if !found { m["default.topic.config"] = ConfigMap{} } m["default.topic.config"].(ConfigMap)[strings.TrimPrefix(key, "{topic}.")] = value } else { m[key] = value } return nil } // Set implements flag.Set (command line argument parser) as a convenience // for `-X key=value` config. func (m ConfigMap) Set(kv string) error { i := strings.Index(kv, "=") if i == -1 { return Error{ErrInvalidArg, "Expected key=value"} } k := kv[:i] v := kv[i+1:] return m.SetKey(k, v) } func value2string(v ConfigValue) (ret string, errstr string) { switch x := v.(type) { case bool: if x { ret = "true" } else { ret = "false" } case int: ret = fmt.Sprintf("%d", x) case string: ret = x case fmt.Stringer: ret = x.String() default: return "", fmt.Sprintf("Invalid value type %T", v) } return ret, "" } // rdkAnyconf abstracts rd_kafka_conf_t and rd_kafka_topic_conf_t // into a common interface. 
type rdkAnyconf interface { set(cKey *C.char, cVal *C.char, cErrstr *C.char, errstrSize int) C.rd_kafka_conf_res_t } func anyconfSet(anyconf rdkAnyconf, key string, val ConfigValue) (err error) { value, errstr := value2string(val) if errstr != "" { return Error{ErrInvalidArg, fmt.Sprintf("%s for key %s (expected string,bool,int,ConfigMap)", errstr, key)} } cKey := C.CString(key) cVal := C.CString(value) cErrstr := (*C.char)(C.malloc(C.size_t(128))) defer C.free(unsafe.Pointer(cErrstr)) if anyconf.set(cKey, cVal, cErrstr, 128) != C.RD_KAFKA_CONF_OK { C.free(unsafe.Pointer(cKey)) C.free(unsafe.Pointer(cVal)) return newErrorFromCString(C.RD_KAFKA_RESP_ERR__INVALID_ARG, cErrstr) } return nil } // we need these typedefs to workaround a crash in golint // when parsing the set() methods below type rdkConf C.rd_kafka_conf_t type rdkTopicConf C.rd_kafka_topic_conf_t func (cConf *rdkConf) set(cKey *C.char, cVal *C.char, cErrstr *C.char, errstrSize int) C.rd_kafka_conf_res_t { return C.rd_kafka_conf_set((*C.rd_kafka_conf_t)(cConf), cKey, cVal, cErrstr, C.size_t(errstrSize)) } func (ctopicConf *rdkTopicConf) set(cKey *C.char, cVal *C.char, cErrstr *C.char, errstrSize int) C.rd_kafka_conf_res_t { return C.rd_kafka_topic_conf_set((*C.rd_kafka_topic_conf_t)(ctopicConf), cKey, cVal, cErrstr, C.size_t(errstrSize)) } func configConvertAnyconf(m ConfigMap, anyconf rdkAnyconf) (err error) { // set plugins first, any plugin-specific configuration depends on // the plugin to have already been set pluginPaths, ok := m["plugin.library.paths"] if ok { err = anyconfSet(anyconf, "plugin.library.paths", pluginPaths) if err != nil { return err } } for k, v := range m { if k == "plugin.library.paths" { continue } switch v.(type) { case ConfigMap: /* Special sub-ConfigMap, only used for default.topic.config */ if k != "default.topic.config" { return Error{ErrInvalidArg, fmt.Sprintf("Invalid type for key %s", k)} } var cTopicConf = C.rd_kafka_topic_conf_new() err = configConvertAnyconf(v.(ConfigMap), (*rdkTopicConf)(cTopicConf)) if err != nil { C.rd_kafka_topic_conf_destroy(cTopicConf) return err } C.rd_kafka_conf_set_default_topic_conf( (*C.rd_kafka_conf_t)(anyconf.(*rdkConf)), (*C.rd_kafka_topic_conf_t)((*rdkTopicConf)(cTopicConf))) default: err = anyconfSet(anyconf, k, v) if err != nil { return err } } } return nil } // convert ConfigMap to C rd_kafka_conf_t * func (m ConfigMap) convert() (cConf *C.rd_kafka_conf_t, err error) { cConf = C.rd_kafka_conf_new() err = configConvertAnyconf(m, (*rdkConf)(cConf)) if err != nil { C.rd_kafka_conf_destroy(cConf) return nil, err } return cConf, nil } // get finds key in the configmap and returns its value. // If the key is not found defval is returned. // If the key is found but the type is mismatched an error is returned. func (m ConfigMap) get(key string, defval ConfigValue) (ConfigValue, error) { if strings.HasPrefix(key, "{topic}.") { defconfCv, found := m["default.topic.config"] if !found { return defval, nil } return defconfCv.(ConfigMap).get(strings.TrimPrefix(key, "{topic}."), defval) } v, ok := m[key] if !ok { return defval, nil } if defval != nil && reflect.TypeOf(defval) != reflect.TypeOf(v) { return nil, Error{ErrInvalidArg, fmt.Sprintf("%s expects type %T, not %T", key, defval, v)} } return v, nil } // extract performs a get() and if found deletes the key. 
func (m ConfigMap) extract(key string, defval ConfigValue) (ConfigValue, error) { v, err := m.get(key, defval) if err != nil { return nil, err } delete(m, key) return v, nil } func (m ConfigMap) clone() ConfigMap { m2 := make(ConfigMap) for k, v := range m { m2[k] = v } return m2 } // Get finds the given key in the ConfigMap and returns its value. // If the key is not found `defval` is returned. // If the key is found but the type does not match that of `defval` (unless nil) // an ErrInvalidArg error is returned. func (m ConfigMap) Get(key string, defval ConfigValue) (ConfigValue, error) { return m.get(key, defval) } golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/config_test.go000066400000000000000000000101311336406275100263770ustar00rootroot00000000000000/** * Copyright 2016 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package kafka import ( "fmt" "testing" ) // A custom type with Stringer interface to be used to test config map APIs type HostPortType struct { Host string Port int } // implements String() interface func (hp HostPortType) String() string { return fmt.Sprintf("%s:%d", hp.Host, hp.Port) } //Test config map APIs func TestConfigMapAPIs(t *testing.T) { config := &ConfigMap{} // set a good key via SetKey() err := config.SetKey("bootstrap.servers", testconf.Brokers) if err != nil { t.Errorf("Failed to set key via SetKey(). Error: %s\n", err) } // test custom Stringer type hostPort := HostPortType{Host: "localhost", Port: 9092} err = config.SetKey("bootstrap.servers", hostPort) if err != nil { t.Errorf("Failed to set custom Stringer type via SetKey(). Error: %s\n", err) } // test boolean type err = config.SetKey("{topic}.produce.offset.report", true) if err != nil { t.Errorf("Failed to set key via SetKey(). Error: %s\n", err) } // test offset literal string err = config.SetKey("{topic}.auto.offset.reset", "earliest") if err != nil { t.Errorf("Failed to set key via SetKey(). Error: %s\n", err) } //test offset constant err = config.SetKey("{topic}.auto.offset.reset", OffsetBeginning) if err != nil { t.Errorf("Failed to set key via SetKey(). Error: %s\n", err) } //test integer offset err = config.SetKey("{topic}.message.timeout.ms", 10) if err != nil { t.Errorf("Failed to set integer value via SetKey(). Error: %s\n", err) } // set a good key-value pair via Set() err = config.Set("group.id=test.id") if err != nil { t.Errorf("Failed to set key-value pair via Set(). 
Error: %s\n", err) } // negative test cases // set a bad key-value pair via Set() err = config.Set("group.id:test.id2") if err == nil { t.Errorf("Expected failure when setting invalid key-value pair via Set()\n") } // get string value v, err := config.Get("group.id", nil) if err != nil { t.Errorf("Expected Get(group.id) to succeed: %s\n", err) } if v == nil { t.Errorf("Expected Get(group.id) to return non-nil value\n") } if v.(string) != "test.id" { t.Errorf("group.id mismatch: %s\n", v) } // get string value but request int dummyInt := 12 _, err = config.Get("group.id", dummyInt) if err == nil { t.Errorf("Expected Get(group.id) to fail\n") } // get integer value v, err = config.Get("{topic}.message.timeout.ms", dummyInt) if err != nil { t.Errorf("Expected Get(message.timeout.ms) to succeed: %s\n", err) } if v == nil { t.Errorf("Expected Get(message.timeout.ms) to return non-nil value\n") } if v.(int) != 10 { t.Errorf("message.timeout.ms mismatch: %d\n", v.(int)) } // get unknown value v, err = config.Get("dummy.value.not.found", nil) if v != nil { t.Errorf("Expected nil for dummy value, got %v\n", v) } } // Test that plugins will always be configured before their config options func TestConfigPluginPaths(t *testing.T) { config := &ConfigMap{ "plugin.library.paths": "monitoring-interceptor", } _, err := config.convert() if err != nil { t.Skipf("Missing monitoring-interceptor: %s", err) } config = &ConfigMap{ "plugin.library.paths": "monitoring-interceptor", "confluent.monitoring.interceptor.icdebug": true, } // convert() would fail randomly due to random order of ConfigMap key iteration. // running convert() once gave the test case a 50% failure chance, // running it 100 times gives ~100% for i := 1; i <= 100; i++ { _, err := config.convert() if err != nil { t.Fatalf("Failed to convert. Error: %s\n", err) } } } golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/consumer.go000066400000000000000000000434351336406275100257430ustar00rootroot00000000000000package kafka /** * Copyright 2016 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import ( "fmt" "math" "time" "unsafe" ) /* #include #include static rd_kafka_topic_partition_t *_c_rdkafka_topic_partition_list_entry(rd_kafka_topic_partition_list_t *rktparlist, int idx) { return idx < rktparlist->cnt ? &rktparlist->elems[idx] : NULL; } */ import "C" // RebalanceCb provides a per-Subscribe*() rebalance event callback. 
// The passed Event will be either AssignedPartitions or RevokedPartitions type RebalanceCb func(*Consumer, Event) error // Consumer implements a High-level Apache Kafka Consumer instance type Consumer struct { events chan Event handle handle eventsChanEnable bool readerTermChan chan bool rebalanceCb RebalanceCb appReassigned bool appRebalanceEnable bool // config setting } // Strings returns a human readable name for a Consumer instance func (c *Consumer) String() string { return c.handle.String() } // getHandle implements the Handle interface func (c *Consumer) gethandle() *handle { return &c.handle } // Subscribe to a single topic // This replaces the current subscription func (c *Consumer) Subscribe(topic string, rebalanceCb RebalanceCb) error { return c.SubscribeTopics([]string{topic}, rebalanceCb) } // SubscribeTopics subscribes to the provided list of topics. // This replaces the current subscription. func (c *Consumer) SubscribeTopics(topics []string, rebalanceCb RebalanceCb) (err error) { ctopics := C.rd_kafka_topic_partition_list_new(C.int(len(topics))) defer C.rd_kafka_topic_partition_list_destroy(ctopics) for _, topic := range topics { ctopic := C.CString(topic) defer C.free(unsafe.Pointer(ctopic)) C.rd_kafka_topic_partition_list_add(ctopics, ctopic, C.RD_KAFKA_PARTITION_UA) } e := C.rd_kafka_subscribe(c.handle.rk, ctopics) if e != C.RD_KAFKA_RESP_ERR_NO_ERROR { return newError(e) } c.rebalanceCb = rebalanceCb c.handle.currAppRebalanceEnable = c.rebalanceCb != nil || c.appRebalanceEnable return nil } // Unsubscribe from the current subscription, if any. func (c *Consumer) Unsubscribe() (err error) { C.rd_kafka_unsubscribe(c.handle.rk) return nil } // Assign an atomic set of partitions to consume. // This replaces the current assignment. func (c *Consumer) Assign(partitions []TopicPartition) (err error) { c.appReassigned = true cparts := newCPartsFromTopicPartitions(partitions) defer C.rd_kafka_topic_partition_list_destroy(cparts) e := C.rd_kafka_assign(c.handle.rk, cparts) if e != C.RD_KAFKA_RESP_ERR_NO_ERROR { return newError(e) } return nil } // Unassign the current set of partitions to consume. func (c *Consumer) Unassign() (err error) { c.appReassigned = true e := C.rd_kafka_assign(c.handle.rk, nil) if e != C.RD_KAFKA_RESP_ERR_NO_ERROR { return newError(e) } return nil } // commit offsets for specified offsets. // If offsets is nil the currently assigned partitions' offsets are committed. // This is a blocking call, caller will need to wrap in go-routine to // get async or throw-away behaviour. 
func (c *Consumer) commit(offsets []TopicPartition) (committedOffsets []TopicPartition, err error) { var rkqu *C.rd_kafka_queue_t rkqu = C.rd_kafka_queue_new(c.handle.rk) defer C.rd_kafka_queue_destroy(rkqu) var coffsets *C.rd_kafka_topic_partition_list_t if offsets != nil { coffsets = newCPartsFromTopicPartitions(offsets) defer C.rd_kafka_topic_partition_list_destroy(coffsets) } cErr := C.rd_kafka_commit_queue(c.handle.rk, coffsets, rkqu, nil, nil) if cErr != C.RD_KAFKA_RESP_ERR_NO_ERROR { return nil, newError(cErr) } rkev := C.rd_kafka_queue_poll(rkqu, C.int(-1)) if rkev == nil { // shouldn't happen return nil, newError(C.RD_KAFKA_RESP_ERR__DESTROY) } defer C.rd_kafka_event_destroy(rkev) if C.rd_kafka_event_type(rkev) != C.RD_KAFKA_EVENT_OFFSET_COMMIT { panic(fmt.Sprintf("Expected OFFSET_COMMIT, got %s", C.GoString(C.rd_kafka_event_name(rkev)))) } cErr = C.rd_kafka_event_error(rkev) if cErr != C.RD_KAFKA_RESP_ERR_NO_ERROR { return nil, newErrorFromCString(cErr, C.rd_kafka_event_error_string(rkev)) } cRetoffsets := C.rd_kafka_event_topic_partition_list(rkev) if cRetoffsets == nil { // no offsets, no error return nil, nil } committedOffsets = newTopicPartitionsFromCparts(cRetoffsets) return committedOffsets, nil } // Commit offsets for currently assigned partitions // This is a blocking call. // Returns the committed offsets on success. func (c *Consumer) Commit() ([]TopicPartition, error) { return c.commit(nil) } // CommitMessage commits offset based on the provided message. // This is a blocking call. // Returns the committed offsets on success. func (c *Consumer) CommitMessage(m *Message) ([]TopicPartition, error) { if m.TopicPartition.Error != nil { return nil, Error{ErrInvalidArg, "Can't commit errored message"} } offsets := []TopicPartition{m.TopicPartition} offsets[0].Offset++ return c.commit(offsets) } // CommitOffsets commits the provided list of offsets // This is a blocking call. // Returns the committed offsets on success. func (c *Consumer) CommitOffsets(offsets []TopicPartition) ([]TopicPartition, error) { return c.commit(offsets) } // StoreOffsets stores the provided list of offsets that will be committed // to the offset store according to `auto.commit.interval.ms` or manual // offset-less Commit(). // // Returns the stored offsets on success. If at least one offset couldn't be stored, // an error and a list of offsets is returned. Each offset can be checked for // specific errors via its `.Error` member. func (c *Consumer) StoreOffsets(offsets []TopicPartition) (storedOffsets []TopicPartition, err error) { coffsets := newCPartsFromTopicPartitions(offsets) defer C.rd_kafka_topic_partition_list_destroy(coffsets) cErr := C.rd_kafka_offsets_store(c.handle.rk, coffsets) // coffsets might be annotated with an error storedOffsets = newTopicPartitionsFromCparts(coffsets) if cErr != C.RD_KAFKA_RESP_ERR_NO_ERROR { return storedOffsets, newError(cErr) } return storedOffsets, nil } // Seek seeks the given topic partitions using the offset from the TopicPartition. // // If timeoutMs is not 0 the call will wait this long for the // seek to be performed. If the timeout is reached the internal state // will be unknown and this function returns ErrTimedOut. // If timeoutMs is 0 it will initiate the seek but return // immediately without any error reporting (e.g., async). // // Seek() may only be used for partitions already being consumed // (through Assign() or implicitly through a self-rebalanced Subscribe()). 
// To set the starting offset it is preferred to use Assign() and provide // a starting offset for each partition. // // Returns an error on failure or nil otherwise. func (c *Consumer) Seek(partition TopicPartition, timeoutMs int) error { rkt := c.handle.getRkt(*partition.Topic) cErr := C.rd_kafka_seek(rkt, C.int32_t(partition.Partition), C.int64_t(partition.Offset), C.int(timeoutMs)) if cErr != C.RD_KAFKA_RESP_ERR_NO_ERROR { return newError(cErr) } return nil } // Poll the consumer for messages or events. // // Will block for at most timeoutMs milliseconds // // The following callbacks may be triggered: // Subscribe()'s rebalanceCb // // Returns nil on timeout, else an Event func (c *Consumer) Poll(timeoutMs int) (event Event) { ev, _ := c.handle.eventPoll(nil, timeoutMs, 1, nil) return ev } // Events returns the Events channel (if enabled) func (c *Consumer) Events() chan Event { return c.events } // ReadMessage polls the consumer for a message. // // This is a conveniance API that wraps Poll() and only returns // messages or errors. All other event types are discarded. // // The call will block for at most `timeout` waiting for // a new message or error. `timeout` may be set to -1 for // indefinite wait. // // Timeout is returned as (nil, err) where err is `kafka.(Error).Code == Kafka.ErrTimedOut`. // // Messages are returned as (msg, nil), // while general errors are returned as (nil, err), // and partition-specific errors are returned as (msg, err) where // msg.TopicPartition provides partition-specific information (such as topic, partition and offset). // // All other event types, such as PartitionEOF, AssignedPartitions, etc, are silently discarded. // func (c *Consumer) ReadMessage(timeout time.Duration) (*Message, error) { var absTimeout time.Time var timeoutMs int if timeout > 0 { absTimeout = time.Now().Add(timeout) timeoutMs = (int)(timeout.Seconds() * 1000.0) } else { timeoutMs = (int)(timeout) } for { ev := c.Poll(timeoutMs) switch e := ev.(type) { case *Message: if e.TopicPartition.Error != nil { return e, e.TopicPartition.Error } return e, nil case Error: return nil, e default: // Ignore other event types } if timeout > 0 { // Calculate remaining time timeoutMs = int(math.Max(0.0, absTimeout.Sub(time.Now()).Seconds()*1000.0)) } if timeoutMs == 0 && ev == nil { return nil, newError(C.RD_KAFKA_RESP_ERR__TIMED_OUT) } } } // Close Consumer instance. // The object is no longer usable after this call. func (c *Consumer) Close() (err error) { if c.eventsChanEnable { // Wait for consumerReader() to terminate (by closing readerTermChan) close(c.readerTermChan) c.handle.waitTerminated(1) close(c.events) } C.rd_kafka_queue_destroy(c.handle.rkq) c.handle.rkq = nil e := C.rd_kafka_consumer_close(c.handle.rk) if e != C.RD_KAFKA_RESP_ERR_NO_ERROR { return newError(e) } c.handle.cleanup() C.rd_kafka_destroy(c.handle.rk) return nil } // NewConsumer creates a new high-level Consumer instance. // // Supported special configuration properties: // go.application.rebalance.enable (bool, false) - Forward rebalancing responsibility to application via the Events() channel. // If set to true the app must handle the AssignedPartitions and // RevokedPartitions events and call Assign() and Unassign() // respectively. // go.events.channel.enable (bool, false) - Enable the Events() channel. Messages and events will be pushed on the Events() channel and the Poll() interface will be disabled. 
(Experimental) // go.events.channel.size (int, 1000) - Events() channel size // // WARNING: Due to the buffering nature of channels (and queues in general) the // use of the events channel risks receiving outdated events and // messages. Minimizing go.events.channel.size reduces the risk // and number of outdated events and messages but does not eliminate // the factor completely. With a channel size of 1 at most one // event or message may be outdated. func NewConsumer(conf *ConfigMap) (*Consumer, error) { err := versionCheck() if err != nil { return nil, err } // before we do anything with the configuration, create a copy such that // the original is not mutated. confCopy := conf.clone() groupid, _ := confCopy.get("group.id", nil) if groupid == nil { // without a group.id the underlying cgrp subsystem in librdkafka wont get started // and without it there is no way to consume assigned partitions. // So for now require the group.id, this might change in the future. return nil, newErrorFromString(ErrInvalidArg, "Required property group.id not set") } c := &Consumer{} v, err := confCopy.extract("go.application.rebalance.enable", false) if err != nil { return nil, err } c.appRebalanceEnable = v.(bool) v, err = confCopy.extract("go.events.channel.enable", false) if err != nil { return nil, err } c.eventsChanEnable = v.(bool) v, err = confCopy.extract("go.events.channel.size", 1000) if err != nil { return nil, err } eventsChanSize := v.(int) cConf, err := confCopy.convert() if err != nil { return nil, err } cErrstr := (*C.char)(C.malloc(C.size_t(256))) defer C.free(unsafe.Pointer(cErrstr)) C.rd_kafka_conf_set_events(cConf, C.RD_KAFKA_EVENT_REBALANCE|C.RD_KAFKA_EVENT_OFFSET_COMMIT|C.RD_KAFKA_EVENT_STATS|C.RD_KAFKA_EVENT_ERROR) c.handle.rk = C.rd_kafka_new(C.RD_KAFKA_CONSUMER, cConf, cErrstr, 256) if c.handle.rk == nil { return nil, newErrorFromCString(C.RD_KAFKA_RESP_ERR__INVALID_ARG, cErrstr) } C.rd_kafka_poll_set_consumer(c.handle.rk) c.handle.c = c c.handle.setup() c.handle.rkq = C.rd_kafka_queue_get_consumer(c.handle.rk) if c.handle.rkq == nil { // no cgrp (no group.id configured), revert to main queue. c.handle.rkq = C.rd_kafka_queue_get_main(c.handle.rk) } if c.eventsChanEnable { c.events = make(chan Event, eventsChanSize) c.readerTermChan = make(chan bool) /* Start rdkafka consumer queue reader -> events writer goroutine */ go consumerReader(c, c.readerTermChan) } return c, nil } // rebalance calls the application's rebalance callback, if any. // Returns true if the underlying assignment was updated, else false. func (c *Consumer) rebalance(ev Event) bool { c.appReassigned = false if c.rebalanceCb != nil { c.rebalanceCb(c, ev) } return c.appReassigned } // consumerReader reads messages and events from the librdkafka consumer queue // and posts them on the consumer channel. // Runs until termChan closes func consumerReader(c *Consumer, termChan chan bool) { out: for true { select { case _ = <-termChan: break out default: _, term := c.handle.eventPoll(c.events, 100, 1000, termChan) if term { break out } } } c.handle.terminatedChan <- "consumerReader" return } // GetMetadata queries broker for cluster and topic metadata. // If topic is non-nil only information about that topic is returned, else if // allTopics is false only information about locally used topics is returned, // else information about all topics is returned. // GetMetadata is equivalent to listTopics, describeTopics and describeCluster in the Java API. 
func (c *Consumer) GetMetadata(topic *string, allTopics bool, timeoutMs int) (*Metadata, error) { return getMetadata(c, topic, allTopics, timeoutMs) } // QueryWatermarkOffsets returns the broker's low and high offsets for the given topic // and partition. func (c *Consumer) QueryWatermarkOffsets(topic string, partition int32, timeoutMs int) (low, high int64, err error) { return queryWatermarkOffsets(c, topic, partition, timeoutMs) } // OffsetsForTimes looks up offsets by timestamp for the given partitions. // // The returned offset for each partition is the earliest offset whose // timestamp is greater than or equal to the given timestamp in the // corresponding partition. // // The timestamps to query are represented as `.Offset` in the `times` // argument and the looked up offsets are represented as `.Offset` in the returned // `offsets` list. // // The function will block for at most timeoutMs milliseconds. // // Duplicate Topic+Partitions are not supported. // Per-partition errors may be returned in the `.Error` field. func (c *Consumer) OffsetsForTimes(times []TopicPartition, timeoutMs int) (offsets []TopicPartition, err error) { return offsetsForTimes(c, times, timeoutMs) } // Subscription returns the current subscription as set by Subscribe() func (c *Consumer) Subscription() (topics []string, err error) { var cTopics *C.rd_kafka_topic_partition_list_t cErr := C.rd_kafka_subscription(c.handle.rk, &cTopics) if cErr != C.RD_KAFKA_RESP_ERR_NO_ERROR { return nil, newError(cErr) } defer C.rd_kafka_topic_partition_list_destroy(cTopics) topicCnt := int(cTopics.cnt) topics = make([]string, topicCnt) for i := 0; i < topicCnt; i++ { crktpar := C._c_rdkafka_topic_partition_list_entry(cTopics, C.int(i)) topics[i] = C.GoString(crktpar.topic) } return topics, nil } // Assignment returns the current partition assignments func (c *Consumer) Assignment() (partitions []TopicPartition, err error) { var cParts *C.rd_kafka_topic_partition_list_t cErr := C.rd_kafka_assignment(c.handle.rk, &cParts) if cErr != C.RD_KAFKA_RESP_ERR_NO_ERROR { return nil, newError(cErr) } defer C.rd_kafka_topic_partition_list_destroy(cParts) partitions = newTopicPartitionsFromCparts(cParts) return partitions, nil } // Committed retrieves committed offsets for the given set of partitions func (c *Consumer) Committed(partitions []TopicPartition, timeoutMs int) (offsets []TopicPartition, err error) { cparts := newCPartsFromTopicPartitions(partitions) defer C.rd_kafka_topic_partition_list_destroy(cparts) cerr := C.rd_kafka_committed(c.handle.rk, cparts, C.int(timeoutMs)) if cerr != C.RD_KAFKA_RESP_ERR_NO_ERROR { return nil, newError(cerr) } return newTopicPartitionsFromCparts(cparts), nil } // Pause consumption for the provided list of partitions // // Note that messages already enqueued on the consumer's Event channel // (if `go.events.channel.enable` has been set) will NOT be purged by // this call, set `go.events.channel.size` accordingly. 
func (c *Consumer) Pause(partitions []TopicPartition) (err error) { cparts := newCPartsFromTopicPartitions(partitions) defer C.rd_kafka_topic_partition_list_destroy(cparts) cerr := C.rd_kafka_pause_partitions(c.handle.rk, cparts) if cerr != C.RD_KAFKA_RESP_ERR_NO_ERROR { return newError(cerr) } return nil } // Resume consumption for the provided list of partitions func (c *Consumer) Resume(partitions []TopicPartition) (err error) { cparts := newCPartsFromTopicPartitions(partitions) defer C.rd_kafka_topic_partition_list_destroy(cparts) cerr := C.rd_kafka_resume_partitions(c.handle.rk, cparts) if cerr != C.RD_KAFKA_RESP_ERR_NO_ERROR { return newError(cerr) } return nil } golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/consumer_performance_test.go000066400000000000000000000102141336406275100313500ustar00rootroot00000000000000/** * Copyright 2016 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package kafka import ( "fmt" "math/rand" "testing" "time" ) // consumerPerfTest measures the consumer performance using a pre-primed (produced to) topic func consumerPerfTest(b *testing.B, testname string, msgcnt int, useChannel bool, consumeFunc func(c *Consumer, rd *ratedisp, expCnt int), rebalanceCb func(c *Consumer, event Event) error) { r := testconsumerInit(b) if r == -1 { b.Skipf("Missing testconf.json") return } if msgcnt == 0 { msgcnt = r } rand.Seed(int64(time.Now().Unix())) conf := ConfigMap{"bootstrap.servers": testconf.Brokers, "go.events.channel.enable": useChannel, "group.id": fmt.Sprintf("go_cperf_%d", rand.Intn(1000000)), "session.timeout.ms": 6000, "api.version.request": "true", "enable.auto.commit": false, "debug": ",", "default.topic.config": ConfigMap{"auto.offset.reset": "earliest"}} conf.updateFromTestconf() c, err := NewConsumer(&conf) if err != nil { panic(err) } expCnt := msgcnt b.Logf("%s, expecting %d messages", testname, expCnt) c.Subscribe(testconf.Topic, rebalanceCb) rd := ratedispStart(b, testname, 10) consumeFunc(c, &rd, expCnt) rd.print("TOTAL: ") c.Close() b.SetBytes(rd.size) } // handleEvent returns false if processing should stop, else true func handleEvent(c *Consumer, rd *ratedisp, expCnt int, ev Event) bool { switch e := ev.(type) { case *Message: if e.TopicPartition.Error != nil { rd.b.Logf("Error: %v", e.TopicPartition) } if rd.cnt == 0 { // start measuring time from first message to avoid // including rebalancing time. 
rd.b.ResetTimer() rd.reset() } rd.tick(1, int64(len(e.Value))) if rd.cnt >= int64(expCnt) { return false } case PartitionEOF: break // silence default: rd.b.Fatalf("Consumer error: %v", e) } return true } // consume messages through the Events channel func eventChannelConsumer(c *Consumer, rd *ratedisp, expCnt int) { for ev := range c.Events() { if !handleEvent(c, rd, expCnt, ev) { break } } } // consume messages through the Poll() interface func eventPollConsumer(c *Consumer, rd *ratedisp, expCnt int) { for true { ev := c.Poll(100) if ev == nil { // timeout continue } if !handleEvent(c, rd, expCnt, ev) { break } } } var testconsumerInited = false // Produce messages to consume (if needed) // Query watermarks of topic to see if we need to prime it at all. // NOTE: This wont work for compacted topics.. // returns the number of messages to consume func testconsumerInit(b *testing.B) int { if testconsumerInited { return testconf.PerfMsgCount } if !testconfRead() { return -1 } msgcnt := testconf.PerfMsgCount currcnt, err := getMessageCountInTopic(testconf.Topic) if err == nil { b.Logf("Topic %s has %d messages, need %d", testconf.Topic, currcnt, msgcnt) } if currcnt < msgcnt { producerPerfTest(b, "Priming producer", msgcnt, false, false, true, func(p *Producer, m *Message, drChan chan Event) { p.ProduceChannel() <- m }) } testconsumerInited = true b.ResetTimer() return msgcnt } func BenchmarkConsumerChannelPerformance(b *testing.B) { consumerPerfTest(b, "Channel Consumer", 0, true, eventChannelConsumer, nil) } func BenchmarkConsumerPollPerformance(b *testing.B) { consumerPerfTest(b, "Poll Consumer", 0, false, eventPollConsumer, nil) } func BenchmarkConsumerPollRebalancePerformance(b *testing.B) { consumerPerfTest(b, "Poll Consumer (rebalance callback)", 0, false, eventPollConsumer, func(c *Consumer, event Event) error { b.Logf("Rebalanced: %s", event) return nil }) } golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/consumer_test.go000066400000000000000000000147341336406275100270020ustar00rootroot00000000000000/** * Copyright 2016 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package kafka import ( "fmt" "os" "reflect" "sort" "testing" "time" ) // TestConsumerAPIs dry-tests most Consumer APIs, no broker is needed. 
func TestConsumerAPIs(t *testing.T) { c, err := NewConsumer(&ConfigMap{}) if err == nil { t.Fatalf("Expected NewConsumer() to fail without group.id") } c, err = NewConsumer(&ConfigMap{ "group.id": "gotest", "socket.timeout.ms": 10, "session.timeout.ms": 10, "enable.auto.offset.store": false, // permit StoreOffsets() }) if err != nil { t.Fatalf("%s", err) } t.Logf("Consumer %s", c) err = c.Subscribe("gotest", nil) if err != nil { t.Errorf("Subscribe failed: %s", err) } err = c.SubscribeTopics([]string{"gotest1", "gotest2"}, func(my_c *Consumer, ev Event) error { t.Logf("%s", ev) return nil }) if err != nil { t.Errorf("SubscribeTopics failed: %s", err) } _, err = c.Commit() if err != nil && err.(Error).Code() != ErrNoOffset { t.Errorf("Commit() failed: %s", err) } err = c.Unsubscribe() if err != nil { t.Errorf("Unsubscribe failed: %s", err) } topic := "gotest" stored, err := c.StoreOffsets([]TopicPartition{{Topic: &topic, Partition: 0, Offset: 1}}) if err != nil && err.(Error).Code() != ErrUnknownPartition { t.Errorf("StoreOffsets() failed: %s", err) toppar := stored[0] if toppar.Error != nil && toppar.Error.(Error).Code() == ErrUnknownPartition { t.Errorf("StoreOffsets() TopicPartition error: %s", toppar.Error) } } var empty []TopicPartition stored, err = c.StoreOffsets(empty) if err != nil { t.Errorf("StoreOffsets(empty) failed: %s", err) } topic1 := "gotest1" topic2 := "gotest2" err = c.Assign([]TopicPartition{{Topic: &topic1, Partition: 2}, {Topic: &topic2, Partition: 1}}) if err != nil { t.Errorf("Assign failed: %s", err) } err = c.Seek(TopicPartition{Topic: &topic1, Partition: 2, Offset: -1}, 1000) if err != nil { t.Errorf("Seek failed: %s", err) } // Pause & Resume err = c.Pause([]TopicPartition{{Topic: &topic1, Partition: 2}, {Topic: &topic2, Partition: 1}}) if err != nil { t.Errorf("Pause failed: %s", err) } err = c.Resume([]TopicPartition{{Topic: &topic1, Partition: 2}, {Topic: &topic2, Partition: 1}}) if err != nil { t.Errorf("Resume failed: %s", err) } err = c.Unassign() if err != nil { t.Errorf("Unassign failed: %s", err) } topic = "mytopic" // OffsetsForTimes offsets, err := c.OffsetsForTimes([]TopicPartition{{Topic: &topic, Offset: 12345}}, 100) t.Logf("OffsetsForTimes() returned Offsets %s and error %s\n", offsets, err) if err == nil { t.Errorf("OffsetsForTimes() should have failed\n") } if offsets != nil { t.Errorf("OffsetsForTimes() failed but returned non-nil Offsets: %s\n", offsets) } // Committed offsets, err = c.Committed([]TopicPartition{{Topic: &topic, Partition: 5}}, 10) t.Logf("Committed() returned Offsets %s and error %s\n", offsets, err) if err == nil { t.Errorf("Committed() should have failed\n") } if offsets != nil { t.Errorf("Committed() failed but returned non-nil Offsets: %s\n", offsets) } err = c.Close() if err != nil { t.Errorf("Close failed: %s", err) } } func TestConsumerSubscription(t *testing.T) { c, err := NewConsumer(&ConfigMap{"group.id": "gotest"}) if err != nil { t.Fatalf("%s", err) } topics := []string{"gotest1", "gotest2", "gotest3"} sort.Strings(topics) err = c.SubscribeTopics(topics, nil) if err != nil { t.Fatalf("SubscribeTopics failed: %s", err) } subscription, err := c.Subscription() if err != nil { t.Fatalf("Subscription() failed: %s", err) } sort.Strings(subscription) t.Logf("Compare Subscription %v to original list of topics %v\n", subscription, topics) r := reflect.DeepEqual(topics, subscription) if r != true { t.Fatalf("Subscription() %v does not match original topics %v", subscription, topics) } c.Close() } func 
TestConsumerAssignment(t *testing.T) { c, err := NewConsumer(&ConfigMap{"group.id": "gotest"}) if err != nil { t.Fatalf("%s", err) } topic0 := "topic0" topic1 := "topic1" partitions := TopicPartitions{ {Topic: &topic1, Partition: 1}, {Topic: &topic1, Partition: 3}, {Topic: &topic0, Partition: 2}} sort.Sort(partitions) err = c.Assign(partitions) if err != nil { t.Fatalf("Assign failed: %s", err) } assignment, err := c.Assignment() if err != nil { t.Fatalf("Assignment() failed: %s", err) } sort.Sort(TopicPartitions(assignment)) t.Logf("Compare Assignment %v to original list of partitions %v\n", assignment, partitions) // Use Logf instead of Errorf for timeout-checking errors on CI builds // since CI environments are unreliable timing-wise. tmoutFunc := t.Errorf _, onCi := os.LookupEnv("CI") if onCi { tmoutFunc = t.Logf } // Test ReadMessage() for _, tmout := range []time.Duration{0, 200 * time.Millisecond} { start := time.Now() m, err := c.ReadMessage(tmout) duration := time.Since(start) t.Logf("ReadMessage(%v) ret %v and %v in %v", tmout, m, err, duration) if m != nil || err == nil { t.Errorf("Expected ReadMessage to fail: %v, %v", m, err) } if err.(Error).Code() != ErrTimedOut { t.Errorf("Expected ReadMessage to fail with ErrTimedOut, not %v", err) } if tmout == 0 { if duration.Seconds() > 0.1 { tmoutFunc("Expected ReadMessage(%v) to fail after max 100ms, not %v", tmout, duration) } } else if tmout > 0 { if duration.Seconds() < tmout.Seconds()*0.75 || duration.Seconds() > tmout.Seconds()*1.25 { tmoutFunc("Expected ReadMessage() to fail after %v -+25%%, not %v", tmout, duration) } } } // reflect.DeepEqual() can't be used since TopicPartition.Topic // is a pointer to a string rather than a string and the pointer // will differ between partitions and assignment. // Instead do a simple stringification + string compare. if fmt.Sprintf("%v", assignment) != fmt.Sprintf("%v", partitions) { t.Fatalf("Assignment() %v does not match original partitions %v", assignment, partitions) } c.Close() } golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/error.go000066400000000000000000000035601336406275100252340ustar00rootroot00000000000000package kafka /** * Copyright 2016 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ // Automatically generate error codes from librdkafka // See README for instructions //go:generate $GOPATH/bin/go_rdkafka_generr generated_errors.go /* #include */ import "C" // Error provides a Kafka-specific error container type Error struct { code ErrorCode str string } func newError(code C.rd_kafka_resp_err_t) (err Error) { return Error{ErrorCode(code), ""} } func newGoError(code ErrorCode) (err Error) { return Error{code, ""} } func newErrorFromString(code ErrorCode, str string) (err Error) { return Error{code, str} } func newErrorFromCString(code C.rd_kafka_resp_err_t, cstr *C.char) (err Error) { var str string if cstr != nil { str = C.GoString(cstr) } else { str = "" } return Error{ErrorCode(code), str} } func newCErrorFromString(code C.rd_kafka_resp_err_t, str string) (err Error) { return newErrorFromString(ErrorCode(code), str) } // Error returns a human readable representation of an Error // Same as Error.String() func (e Error) Error() string { return e.String() } // String returns a human readable representation of an Error func (e Error) String() string { if len(e.str) > 0 { return e.str } return e.code.String() } // Code returns the ErrorCode of an Error func (e Error) Code() ErrorCode { return e.code } golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/event.go000066400000000000000000000216341336406275100252260ustar00rootroot00000000000000package kafka /** * Copyright 2016 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/

import (
	"fmt"
	"os"
	"unsafe"
)

/*
#include <stdlib.h>
#include <librdkafka/rdkafka.h>
#include "glue_rdkafka.h"

#ifdef RD_KAFKA_V_HEADERS
void chdrs_to_tmphdrs (rd_kafka_headers_t *chdrs, tmphdr_t *tmphdrs) {
   size_t i = 0;
   const char *name;
   const void *val;
   size_t size;

   while (!rd_kafka_header_get_all(chdrs, i,
                                   &tmphdrs[i].key,
                                   &tmphdrs[i].val,
                                   (size_t *)&tmphdrs[i].size))
     i++;
}
#endif

rd_kafka_event_t *_rk_queue_poll (rd_kafka_queue_t *rkq, int timeoutMs,
                                  rd_kafka_event_type_t *evtype,
                                  fetched_c_msg_t *fcMsg,
                                  rd_kafka_event_t *prev_rkev) {
    rd_kafka_event_t *rkev;

    if (prev_rkev)
      rd_kafka_event_destroy(prev_rkev);

    rkev = rd_kafka_queue_poll(rkq, timeoutMs);
    *evtype = rd_kafka_event_type(rkev);

    if (*evtype == RD_KAFKA_EVENT_FETCH) {
#ifdef RD_KAFKA_V_HEADERS
        rd_kafka_headers_t *hdrs;
#endif

        fcMsg->msg = (rd_kafka_message_t *)rd_kafka_event_message_next(rkev);
        fcMsg->ts = rd_kafka_message_timestamp(fcMsg->msg, &fcMsg->tstype);

#ifdef RD_KAFKA_V_HEADERS
        if (!rd_kafka_message_headers(fcMsg->msg, &hdrs)) {
            fcMsg->tmphdrsCnt = rd_kafka_header_cnt(hdrs);
            fcMsg->tmphdrs = malloc(sizeof(*fcMsg->tmphdrs) * fcMsg->tmphdrsCnt);
            chdrs_to_tmphdrs(hdrs, fcMsg->tmphdrs);
        } else {
#else
        if (1) {
#endif
            fcMsg->tmphdrs = NULL;
            fcMsg->tmphdrsCnt = 0;
        }
    }

    return rkev;
}
*/
import "C"

// Event generic interface
type Event interface {
	// String returns a human-readable representation of the event
	String() string
}

// Specific event types

// Stats statistics event
type Stats struct {
	statsJSON string
}

func (e Stats) String() string {
	return e.statsJSON
}

// AssignedPartitions consumer group rebalance event: assigned partition set
type AssignedPartitions struct {
	Partitions []TopicPartition
}

func (e AssignedPartitions) String() string {
	return fmt.Sprintf("AssignedPartitions: %v", e.Partitions)
}

// RevokedPartitions consumer group rebalance event: revoked partition set
type RevokedPartitions struct {
	Partitions []TopicPartition
}

func (e RevokedPartitions) String() string {
	return fmt.Sprintf("RevokedPartitions: %v", e.Partitions)
}

// PartitionEOF consumer reached end of partition
type PartitionEOF TopicPartition

func (p PartitionEOF) String() string {
	return fmt.Sprintf("EOF at %s", TopicPartition(p))
}

// OffsetsCommitted reports committed offsets
type OffsetsCommitted struct {
	Error   error
	Offsets []TopicPartition
}

func (o OffsetsCommitted) String() string {
	return fmt.Sprintf("OffsetsCommitted (%v, %v)", o.Error, o.Offsets)
}

// eventPoll polls an event from the handler's C rd_kafka_queue_t,
// translates it into an Event type and then sends it on `channel` if non-nil, else returns the Event.
// termChan is an optional channel to monitor along with producing to channel
// to indicate that `channel` is being terminated.
// returns an (event Event, terminate bool) tuple, where terminate indicates
// if termChan received a termination event.
func (h *handle) eventPoll(channel chan Event, timeoutMs int, maxEvents int, termChan chan bool) (Event, bool) {

	var prevRkev *C.rd_kafka_event_t
	term := false

	var retval Event

	if channel == nil {
		maxEvents = 1
	}
out:
	for evcnt := 0; evcnt < maxEvents; evcnt++ {
		var evtype C.rd_kafka_event_type_t
		var fcMsg C.fetched_c_msg_t
		rkev := C._rk_queue_poll(h.rkq, C.int(timeoutMs), &evtype, &fcMsg, prevRkev)
		prevRkev = rkev
		timeoutMs = 0

		retval = nil

		switch evtype {
		case C.RD_KAFKA_EVENT_FETCH:
			// Consumer fetch event, new message.
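		// Editorial note: the C helper _rk_queue_poll() above has already
		// pulled the message pointer, its timestamp and (when librdkafka
		// was built with RD_KAFKA_V_HEADERS) its headers into fcMsg, so a
		// single cgo round-trip services the entire fetch.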
// Extracted into temporary fcMsg for optimization retval = h.newMessageFromFcMsg(&fcMsg) case C.RD_KAFKA_EVENT_REBALANCE: // Consumer rebalance event // If the app provided a RebalanceCb to Subscribe*() or // has go.application.rebalance.enable=true we create an event // and forward it to the application thru the RebalanceCb or the // Events channel respectively. // Since librdkafka requires the rebalance event to be "acked" by // the application to synchronize state we keep track of if the // application performed Assign() or Unassign(), but this only works for // the non-channel case. For the channel case we assume the application // calls Assign() / Unassign(). // Failure to do so will "hang" the consumer, e.g., it wont start consuming // and it wont close cleanly, so this error case should be visible // immediately to the application developer. appReassigned := false if C.rd_kafka_event_error(rkev) == C.RD_KAFKA_RESP_ERR__ASSIGN_PARTITIONS { if h.currAppRebalanceEnable { // Application must perform Assign() call var ev AssignedPartitions ev.Partitions = newTopicPartitionsFromCparts(C.rd_kafka_event_topic_partition_list(rkev)) if channel != nil || h.c.rebalanceCb == nil { retval = ev appReassigned = true } else { appReassigned = h.c.rebalance(ev) } } if !appReassigned { C.rd_kafka_assign(h.rk, C.rd_kafka_event_topic_partition_list(rkev)) } } else { if h.currAppRebalanceEnable { // Application must perform Unassign() call var ev RevokedPartitions ev.Partitions = newTopicPartitionsFromCparts(C.rd_kafka_event_topic_partition_list(rkev)) if channel != nil || h.c.rebalanceCb == nil { retval = ev appReassigned = true } else { appReassigned = h.c.rebalance(ev) } } if !appReassigned { C.rd_kafka_assign(h.rk, nil) } } case C.RD_KAFKA_EVENT_ERROR: // Error event cErr := C.rd_kafka_event_error(rkev) switch cErr { case C.RD_KAFKA_RESP_ERR__PARTITION_EOF: crktpar := C.rd_kafka_event_topic_partition(rkev) if crktpar == nil { break } defer C.rd_kafka_topic_partition_destroy(crktpar) var peof PartitionEOF setupTopicPartitionFromCrktpar((*TopicPartition)(&peof), crktpar) retval = peof default: retval = newErrorFromCString(cErr, C.rd_kafka_event_error_string(rkev)) } case C.RD_KAFKA_EVENT_STATS: retval = &Stats{C.GoString(C.rd_kafka_event_stats(rkev))} case C.RD_KAFKA_EVENT_DR: // Producer Delivery Report event // Each such event contains delivery reports for all // messages in the produced batch. // Forward delivery reports to per-message's response channel // or to the global Producer.Events channel, or none. 
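		// Illustrative sketch (editorial, not part of the original source)
		// of the two application-side patterns the forwarding below maps to;
		// assumes an already-created *Producer p and a populated *Message msg:
		//
		//	// 1) per-message delivery channel:
		//	drChan := make(chan Event, 1)
		//	_ = p.Produce(msg, drChan)
		//	dr := (<-drChan).(*Message) // dr.TopicPartition.Error reports failure
		//
		//	// 2) global Events channel (the fwdDr case below):
		//	_ = p.Produce(msg, nil)
		//	dr = (<-p.Events()).(*Message)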
rkmessages := make([]*C.rd_kafka_message_t, int(C.rd_kafka_event_message_count(rkev))) cnt := int(C.rd_kafka_event_message_array(rkev, (**C.rd_kafka_message_t)(unsafe.Pointer(&rkmessages[0])), C.size_t(len(rkmessages)))) for _, rkmessage := range rkmessages[:cnt] { msg := h.newMessageFromC(rkmessage) var ch *chan Event if rkmessage._private != nil { // Find cgoif by id cg, found := h.cgoGet((int)((uintptr)(rkmessage._private))) if found { cdr := cg.(cgoDr) if cdr.deliveryChan != nil { ch = &cdr.deliveryChan } msg.Opaque = cdr.opaque } } if ch == nil && h.fwdDr { ch = &channel } if ch != nil { select { case *ch <- msg: case <-termChan: break out } } else { retval = msg break out } } case C.RD_KAFKA_EVENT_OFFSET_COMMIT: // Offsets committed cErr := C.rd_kafka_event_error(rkev) coffsets := C.rd_kafka_event_topic_partition_list(rkev) var offsets []TopicPartition if coffsets != nil { offsets = newTopicPartitionsFromCparts(coffsets) } if cErr != C.RD_KAFKA_RESP_ERR_NO_ERROR { retval = OffsetsCommitted{newErrorFromCString(cErr, C.rd_kafka_event_error_string(rkev)), offsets} } else { retval = OffsetsCommitted{nil, offsets} } case C.RD_KAFKA_EVENT_NONE: // poll timed out: no events available break out default: if rkev != nil { fmt.Fprintf(os.Stderr, "Ignored event %s\n", C.GoString(C.rd_kafka_event_name(rkev))) } } if retval != nil { if channel != nil { select { case channel <- retval: case <-termChan: retval = nil term = true break out } } else { break out } } } if prevRkev != nil { C.rd_kafka_event_destroy(prevRkev) } return retval, term } golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/event_test.go000066400000000000000000000023511336406275100262600ustar00rootroot00000000000000/** * Copyright 2016 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package kafka import ( "testing" ) // TestEventAPIs dry-tests the public event related APIs, no broker is needed. func TestEventAPIs(t *testing.T) { assignedPartitions := AssignedPartitions{} t.Logf("%s\n", assignedPartitions.String()) revokedPartitions := RevokedPartitions{} t.Logf("%s\n", revokedPartitions.String()) topic := "test" partition := PartitionEOF{Topic: &topic} t.Logf("%s\n", partition.String()) partition = PartitionEOF{} t.Logf("%s\n", partition.String()) committedOffsets := OffsetsCommitted{} t.Logf("%s\n", committedOffsets.String()) stats := Stats{"{\"name\": \"Producer-1\"}"} t.Logf("Stats: %s\n", stats.String()) } golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/generated_errors.go000066400000000000000000000365731336406275100274470ustar00rootroot00000000000000package kafka // Copyright 2016 Confluent Inc. 
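// Editorial note (illustrative, not generated output): application code,
// importing this package as kafka, usually matches the constants below
// through the Error.Code() accessor, e.g.
//
//	if kerr, ok := err.(kafka.Error); ok && kerr.Code() == kafka.ErrTimedOut {
//		// benign timeout: poll again
//	}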
// AUTOMATICALLY GENERATED BY /home/maglun/gocode/bin/go_rdkafka_generr ON 2018-10-11 09:26:58.938371378 +0200 CEST m=+0.001256618 USING librdkafka 0.11.5 /* #include */ import "C" // ErrorCode is the integer representation of local and broker error codes type ErrorCode int // String returns a human readable representation of an error code func (c ErrorCode) String() string { return C.GoString(C.rd_kafka_err2str(C.rd_kafka_resp_err_t(c))) } const ( // ErrBadMsg Local: Bad message format ErrBadMsg ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__BAD_MSG) // ErrBadCompression Local: Invalid compressed data ErrBadCompression ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__BAD_COMPRESSION) // ErrDestroy Local: Broker handle destroyed ErrDestroy ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__DESTROY) // ErrFail Local: Communication failure with broker ErrFail ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__FAIL) // ErrTransport Local: Broker transport failure ErrTransport ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__TRANSPORT) // ErrCritSysResource Local: Critical system resource failure ErrCritSysResource ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__CRIT_SYS_RESOURCE) // ErrResolve Local: Host resolution failure ErrResolve ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__RESOLVE) // ErrMsgTimedOut Local: Message timed out ErrMsgTimedOut ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__MSG_TIMED_OUT) // ErrPartitionEOF Broker: No more messages ErrPartitionEOF ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__PARTITION_EOF) // ErrUnknownPartition Local: Unknown partition ErrUnknownPartition ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__UNKNOWN_PARTITION) // ErrFs Local: File or filesystem error ErrFs ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__FS) // ErrUnknownTopic Local: Unknown topic ErrUnknownTopic ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__UNKNOWN_TOPIC) // ErrAllBrokersDown Local: All broker connections are down ErrAllBrokersDown ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__ALL_BROKERS_DOWN) // ErrInvalidArg Local: Invalid argument or configuration ErrInvalidArg ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__INVALID_ARG) // ErrTimedOut Local: Timed out ErrTimedOut ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__TIMED_OUT) // ErrQueueFull Local: Queue full ErrQueueFull ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__QUEUE_FULL) // ErrIsrInsuff Local: ISR count insufficient ErrIsrInsuff ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__ISR_INSUFF) // ErrNodeUpdate Local: Broker node update ErrNodeUpdate ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__NODE_UPDATE) // ErrSsl Local: SSL error ErrSsl ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__SSL) // ErrWaitCoord Local: Waiting for coordinator ErrWaitCoord ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__WAIT_COORD) // ErrUnknownGroup Local: Unknown group ErrUnknownGroup ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__UNKNOWN_GROUP) // ErrInProgress Local: Operation in progress ErrInProgress ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__IN_PROGRESS) // ErrPrevInProgress Local: Previous operation in progress ErrPrevInProgress ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__PREV_IN_PROGRESS) // ErrExistingSubscription Local: Existing subscription ErrExistingSubscription ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__EXISTING_SUBSCRIPTION) // ErrAssignPartitions Local: Assign partitions ErrAssignPartitions ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__ASSIGN_PARTITIONS) // ErrRevokePartitions Local: Revoke partitions ErrRevokePartitions ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__REVOKE_PARTITIONS) // ErrConflict Local: Conflicting use ErrConflict 
ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__CONFLICT) // ErrState Local: Erroneous state ErrState ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__STATE) // ErrUnknownProtocol Local: Unknown protocol ErrUnknownProtocol ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__UNKNOWN_PROTOCOL) // ErrNotImplemented Local: Not implemented ErrNotImplemented ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__NOT_IMPLEMENTED) // ErrAuthentication Local: Authentication failure ErrAuthentication ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__AUTHENTICATION) // ErrNoOffset Local: No offset stored ErrNoOffset ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__NO_OFFSET) // ErrOutdated Local: Outdated ErrOutdated ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__OUTDATED) // ErrTimedOutQueue Local: Timed out in queue ErrTimedOutQueue ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__TIMED_OUT_QUEUE) // ErrUnsupportedFeature Local: Required feature not supported by broker ErrUnsupportedFeature ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__UNSUPPORTED_FEATURE) // ErrWaitCache Local: Awaiting cache update ErrWaitCache ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__WAIT_CACHE) // ErrIntr Local: Operation interrupted ErrIntr ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__INTR) // ErrKeySerialization Local: Key serialization error ErrKeySerialization ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__KEY_SERIALIZATION) // ErrValueSerialization Local: Value serialization error ErrValueSerialization ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__VALUE_SERIALIZATION) // ErrKeyDeserialization Local: Key deserialization error ErrKeyDeserialization ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__KEY_DESERIALIZATION) // ErrValueDeserialization Local: Value deserialization error ErrValueDeserialization ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__VALUE_DESERIALIZATION) // ErrPartial Local: Partial response ErrPartial ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__PARTIAL) // ErrReadOnly Local: Read-only object ErrReadOnly ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__READ_ONLY) // ErrNoent Local: No such entry ErrNoent ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__NOENT) // ErrUnderflow Local: Read underflow ErrUnderflow ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__UNDERFLOW) // ErrInvalidType Local: Invalid type ErrInvalidType ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR__INVALID_TYPE) // ErrUnknown Unknown broker error ErrUnknown ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_UNKNOWN) // ErrNoError Success ErrNoError ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_NO_ERROR) // ErrOffsetOutOfRange Broker: Offset out of range ErrOffsetOutOfRange ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_OFFSET_OUT_OF_RANGE) // ErrInvalidMsg Broker: Invalid message ErrInvalidMsg ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_INVALID_MSG) // ErrUnknownTopicOrPart Broker: Unknown topic or partition ErrUnknownTopicOrPart ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_UNKNOWN_TOPIC_OR_PART) // ErrInvalidMsgSize Broker: Invalid message size ErrInvalidMsgSize ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_INVALID_MSG_SIZE) // ErrLeaderNotAvailable Broker: Leader not available ErrLeaderNotAvailable ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_LEADER_NOT_AVAILABLE) // ErrNotLeaderForPartition Broker: Not leader for partition ErrNotLeaderForPartition ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_NOT_LEADER_FOR_PARTITION) // ErrRequestTimedOut Broker: Request timed out ErrRequestTimedOut ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_REQUEST_TIMED_OUT) // ErrBrokerNotAvailable Broker: Broker not available ErrBrokerNotAvailable ErrorCode = 
ErrorCode(C.RD_KAFKA_RESP_ERR_BROKER_NOT_AVAILABLE) // ErrReplicaNotAvailable Broker: Replica not available ErrReplicaNotAvailable ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_REPLICA_NOT_AVAILABLE) // ErrMsgSizeTooLarge Broker: Message size too large ErrMsgSizeTooLarge ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_MSG_SIZE_TOO_LARGE) // ErrStaleCtrlEpoch Broker: StaleControllerEpochCode ErrStaleCtrlEpoch ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_STALE_CTRL_EPOCH) // ErrOffsetMetadataTooLarge Broker: Offset metadata string too large ErrOffsetMetadataTooLarge ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_OFFSET_METADATA_TOO_LARGE) // ErrNetworkException Broker: Broker disconnected before response received ErrNetworkException ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_NETWORK_EXCEPTION) // ErrGroupLoadInProgress Broker: Group coordinator load in progress ErrGroupLoadInProgress ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_GROUP_LOAD_IN_PROGRESS) // ErrGroupCoordinatorNotAvailable Broker: Group coordinator not available ErrGroupCoordinatorNotAvailable ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_GROUP_COORDINATOR_NOT_AVAILABLE) // ErrNotCoordinatorForGroup Broker: Not coordinator for group ErrNotCoordinatorForGroup ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_NOT_COORDINATOR_FOR_GROUP) // ErrTopicException Broker: Invalid topic ErrTopicException ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_TOPIC_EXCEPTION) // ErrRecordListTooLarge Broker: Message batch larger than configured server segment size ErrRecordListTooLarge ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_RECORD_LIST_TOO_LARGE) // ErrNotEnoughReplicas Broker: Not enough in-sync replicas ErrNotEnoughReplicas ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_NOT_ENOUGH_REPLICAS) // ErrNotEnoughReplicasAfterAppend Broker: Message(s) written to insufficient number of in-sync replicas ErrNotEnoughReplicasAfterAppend ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_NOT_ENOUGH_REPLICAS_AFTER_APPEND) // ErrInvalidRequiredAcks Broker: Invalid required acks value ErrInvalidRequiredAcks ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_INVALID_REQUIRED_ACKS) // ErrIllegalGeneration Broker: Specified group generation id is not valid ErrIllegalGeneration ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_ILLEGAL_GENERATION) // ErrInconsistentGroupProtocol Broker: Inconsistent group protocol ErrInconsistentGroupProtocol ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_INCONSISTENT_GROUP_PROTOCOL) // ErrInvalidGroupID Broker: Invalid group.id ErrInvalidGroupID ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_INVALID_GROUP_ID) // ErrUnknownMemberID Broker: Unknown member ErrUnknownMemberID ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_UNKNOWN_MEMBER_ID) // ErrInvalidSessionTimeout Broker: Invalid session timeout ErrInvalidSessionTimeout ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_INVALID_SESSION_TIMEOUT) // ErrRebalanceInProgress Broker: Group rebalance in progress ErrRebalanceInProgress ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_REBALANCE_IN_PROGRESS) // ErrInvalidCommitOffsetSize Broker: Commit offset data size is not valid ErrInvalidCommitOffsetSize ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_INVALID_COMMIT_OFFSET_SIZE) // ErrTopicAuthorizationFailed Broker: Topic authorization failed ErrTopicAuthorizationFailed ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_TOPIC_AUTHORIZATION_FAILED) // ErrGroupAuthorizationFailed Broker: Group authorization failed ErrGroupAuthorizationFailed ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_GROUP_AUTHORIZATION_FAILED) // ErrClusterAuthorizationFailed Broker: Cluster authorization failed 
ErrClusterAuthorizationFailed ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_CLUSTER_AUTHORIZATION_FAILED) // ErrInvalidTimestamp Broker: Invalid timestamp ErrInvalidTimestamp ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_INVALID_TIMESTAMP) // ErrUnsupportedSaslMechanism Broker: Unsupported SASL mechanism ErrUnsupportedSaslMechanism ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_UNSUPPORTED_SASL_MECHANISM) // ErrIllegalSaslState Broker: Request not valid in current SASL state ErrIllegalSaslState ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_ILLEGAL_SASL_STATE) // ErrUnsupportedVersion Broker: API version not supported ErrUnsupportedVersion ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_UNSUPPORTED_VERSION) // ErrTopicAlreadyExists Broker: Topic already exists ErrTopicAlreadyExists ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_TOPIC_ALREADY_EXISTS) // ErrInvalidPartitions Broker: Invalid number of partitions ErrInvalidPartitions ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_INVALID_PARTITIONS) // ErrInvalidReplicationFactor Broker: Invalid replication factor ErrInvalidReplicationFactor ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_INVALID_REPLICATION_FACTOR) // ErrInvalidReplicaAssignment Broker: Invalid replica assignment ErrInvalidReplicaAssignment ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_INVALID_REPLICA_ASSIGNMENT) // ErrInvalidConfig Broker: Configuration is invalid ErrInvalidConfig ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_INVALID_CONFIG) // ErrNotController Broker: Not controller for cluster ErrNotController ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_NOT_CONTROLLER) // ErrInvalidRequest Broker: Invalid request ErrInvalidRequest ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_INVALID_REQUEST) // ErrUnsupportedForMessageFormat Broker: Message format on broker does not support request ErrUnsupportedForMessageFormat ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_UNSUPPORTED_FOR_MESSAGE_FORMAT) // ErrPolicyViolation Broker: Isolation policy volation ErrPolicyViolation ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_POLICY_VIOLATION) // ErrOutOfOrderSequenceNumber Broker: Broker received an out of order sequence number ErrOutOfOrderSequenceNumber ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_OUT_OF_ORDER_SEQUENCE_NUMBER) // ErrDuplicateSequenceNumber Broker: Broker received a duplicate sequence number ErrDuplicateSequenceNumber ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_DUPLICATE_SEQUENCE_NUMBER) // ErrInvalidProducerEpoch Broker: Producer attempted an operation with an old epoch ErrInvalidProducerEpoch ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_INVALID_PRODUCER_EPOCH) // ErrInvalidTxnState Broker: Producer attempted a transactional operation in an invalid state ErrInvalidTxnState ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_INVALID_TXN_STATE) // ErrInvalidProducerIDMapping Broker: Producer attempted to use a producer id which is not currently assigned to its transactional id ErrInvalidProducerIDMapping ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_INVALID_PRODUCER_ID_MAPPING) // ErrInvalidTransactionTimeout Broker: Transaction timeout is larger than the maximum value allowed by the broker's max.transaction.timeout.ms ErrInvalidTransactionTimeout ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_INVALID_TRANSACTION_TIMEOUT) // ErrConcurrentTransactions Broker: Producer attempted to update a transaction while another concurrent operation on the same transaction was ongoing ErrConcurrentTransactions ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_CONCURRENT_TRANSACTIONS) // ErrTransactionCoordinatorFenced Broker: Indicates that the transaction coordinator sending a 
WriteTxnMarker is no longer the current coordinator for a given producer ErrTransactionCoordinatorFenced ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_TRANSACTION_COORDINATOR_FENCED) // ErrTransactionalIDAuthorizationFailed Broker: Transactional Id authorization failed ErrTransactionalIDAuthorizationFailed ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_TRANSACTIONAL_ID_AUTHORIZATION_FAILED) // ErrSecurityDisabled Broker: Security features are disabled ErrSecurityDisabled ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_SECURITY_DISABLED) // ErrOperationNotAttempted Broker: Operation not attempted ErrOperationNotAttempted ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_OPERATION_NOT_ATTEMPTED) ) golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/glue_rdkafka.h000066400000000000000000000022341336406275100263410ustar00rootroot00000000000000/** * Copyright 2016 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once /** * Glue between Go, Cgo and librdkafka */ /** * Temporary C to Go header representation */ typedef struct tmphdr_s { const char *key; const void *val; // producer: malloc()ed by Go code if size > 0 // consumer: owned by librdkafka ssize_t size; } tmphdr_t; /** * Represents a fetched C message, with all extra fields extracted * to struct fields. */ typedef struct fetched_c_msg { rd_kafka_message_t *msg; rd_kafka_timestamp_type_t tstype; int64_t ts; tmphdr_t *tmphdrs; size_t tmphdrsCnt; } fetched_c_msg_t; golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/go_rdkafka_generr/000077500000000000000000000000001336406275100272025ustar00rootroot00000000000000golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/go_rdkafka_generr/go_rdkafka_generr.go000066400000000000000000000052651336406275100331730ustar00rootroot00000000000000// confluent-kafka-go internal tool to generate error constants from librdkafka package main /** * Copyright 2016 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import ( "fmt" "os" "strings" "time" ) /* #cgo pkg-config: --static rdkafka #cgo LDFLAGS: -lrdkafka #include static const char *errdesc_to_string (const struct rd_kafka_err_desc *ed, int idx) { return ed[idx].name; } static const char *errdesc_to_desc (const struct rd_kafka_err_desc *ed, int idx) { return ed[idx].desc; } */ import "C" func camelCase(s string) string { ret := "" for _, v := range strings.Split(s, "_") { if len(v) == 0 { continue } ret += strings.ToUpper((string)(v[0])) + strings.ToLower(v[1:]) } return ret } func main() { outfile := os.Args[1] f, err := os.Create(outfile) if err != nil { panic(err) } defer f.Close() f.WriteString("package kafka\n") f.WriteString("// Copyright 2016 Confluent Inc.\n") f.WriteString(fmt.Sprintf("// AUTOMATICALLY GENERATED BY %s ON %v USING librdkafka %s\n", os.Args[0], time.Now(), C.GoString(C.rd_kafka_version_str()))) var errdescs *C.struct_rd_kafka_err_desc var csize C.size_t C.rd_kafka_get_err_descs(&errdescs, &csize) f.WriteString(` /* #include */ import "C" // ErrorCode is the integer representation of local and broker error codes type ErrorCode int // String returns a human readable representation of an error code func (c ErrorCode) String() string { return C.GoString(C.rd_kafka_err2str(C.rd_kafka_resp_err_t(c))) } const ( `) for i := 0; i < int(csize); i++ { orig := C.GoString(C.errdesc_to_string(errdescs, C.int(i))) if len(orig) == 0 { continue } desc := C.GoString(C.errdesc_to_desc(errdescs, C.int(i))) if len(desc) == 0 { continue } errname := "Err" + camelCase(orig) // Special handling to please golint // Eof -> EOF // Id -> ID errname = strings.Replace(errname, "Eof", "EOF", -1) errname = strings.Replace(errname, "Id", "ID", -1) f.WriteString(fmt.Sprintf(" // %s %s\n", errname, desc)) f.WriteString(fmt.Sprintf(" %s ErrorCode = ErrorCode(C.RD_KAFKA_RESP_ERR_%s)\n", errname, orig)) } f.WriteString(")\n") } golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/handle.go000066400000000000000000000120031336406275100253260ustar00rootroot00000000000000package kafka /** * Copyright 2016 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import ( "fmt" "sync" "unsafe" ) /* #include #include */ import "C" // Handle represents a generic client handle containing common parts for // both Producer and Consumer. 
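// Both Producer and Consumer carry a handle and return it from the
// unexported gethandle(), which is what lets package-internal helpers
// accept either client type through this interface.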
type Handle interface { gethandle() *handle } // Common instance handle for both Producer and Consumer type handle struct { rk *C.rd_kafka_t rkq *C.rd_kafka_queue_t // Termination of background go-routines terminatedChan chan string // string is go-routine name // Topic <-> rkt caches rktCacheLock sync.Mutex // topic name -> rkt cache rktCache map[string]*C.rd_kafka_topic_t // rkt -> topic name cache rktNameCache map[*C.rd_kafka_topic_t]string // // cgo map // Maps C callbacks based on cgoid back to its Go object cgoLock sync.Mutex cgoidNext uintptr cgomap map[int]cgoif // // producer // p *Producer // Forward delivery reports on Producer.Events channel fwdDr bool // // consumer // c *Consumer // Forward rebalancing ack responsibility to application (current setting) currAppRebalanceEnable bool } func (h *handle) String() string { return C.GoString(C.rd_kafka_name(h.rk)) } func (h *handle) setup() { h.rktCache = make(map[string]*C.rd_kafka_topic_t) h.rktNameCache = make(map[*C.rd_kafka_topic_t]string) h.cgomap = make(map[int]cgoif) h.terminatedChan = make(chan string, 10) } func (h *handle) cleanup() { for _, crkt := range h.rktCache { C.rd_kafka_topic_destroy(crkt) } if h.rkq != nil { C.rd_kafka_queue_destroy(h.rkq) } } // waitTerminated waits termination of background go-routines. // termCnt is the number of goroutines expected to signal termination completion // on h.terminatedChan func (h *handle) waitTerminated(termCnt int) { // Wait for termCnt termination-done events from goroutines for ; termCnt > 0; termCnt-- { _ = <-h.terminatedChan } } // getRkt0 finds or creates and returns a C topic_t object from the local cache. func (h *handle) getRkt0(topic string, ctopic *C.char, doLock bool) (crkt *C.rd_kafka_topic_t) { if doLock { h.rktCacheLock.Lock() defer h.rktCacheLock.Unlock() } crkt, ok := h.rktCache[topic] if ok { return crkt } if ctopic == nil { ctopic = C.CString(topic) defer C.free(unsafe.Pointer(ctopic)) } crkt = C.rd_kafka_topic_new(h.rk, ctopic, nil) if crkt == nil { panic(fmt.Sprintf("Unable to create new C topic \"%s\": %s", topic, C.GoString(C.rd_kafka_err2str(C.rd_kafka_last_error())))) } h.rktCache[topic] = crkt h.rktNameCache[crkt] = topic return crkt } // getRkt finds or creates and returns a C topic_t object from the local cache. func (h *handle) getRkt(topic string) (crkt *C.rd_kafka_topic_t) { return h.getRkt0(topic, nil, true) } // getTopicNameFromRkt returns the topic name for a C topic_t object, preferably // using the local cache to avoid a cgo call. func (h *handle) getTopicNameFromRkt(crkt *C.rd_kafka_topic_t) (topic string) { h.rktCacheLock.Lock() defer h.rktCacheLock.Unlock() topic, ok := h.rktNameCache[crkt] if ok { return topic } // we need our own copy/refcount of the crkt ctopic := C.rd_kafka_topic_name(crkt) topic = C.GoString(ctopic) crkt = h.getRkt0(topic, ctopic, false /* dont lock */) return topic } // cgoif is a generic interface for holding Go state passed as opaque // value to the C code. // Since pointers to complex Go types cannot be passed to C we instead create // a cgoif object, generate a unique id that is added to the cgomap, // and then pass that id to the C code. When the C code callback is called we // use the id to look up the cgoif object in the cgomap. type cgoif interface{} // delivery report cgoif container type cgoDr struct { deliveryChan chan Event opaque interface{} } // cgoPut adds object cg to the handle's cgo map and returns a // unique id for the added entry. // Thread-safe. 
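// The returned id is what travels through C (e.g. as the message _private
// pointer) instead of a Go pointer; a later cgoGet() resolves and removes
// it. Illustrative round-trip (editorial sketch):
//
//	id := h.cgoPut(cgoDr{deliveryChan: drChan, opaque: msg.Opaque})
//	// ... uintptr(id) crosses into C and back ...
//	cg, found := h.cgoGet(id)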
// FIXME: the uniquity of the id is questionable over time. func (h *handle) cgoPut(cg cgoif) (cgoid int) { h.cgoLock.Lock() defer h.cgoLock.Unlock() h.cgoidNext++ if h.cgoidNext == 0 { h.cgoidNext++ } cgoid = (int)(h.cgoidNext) h.cgomap[cgoid] = cg return cgoid } // cgoGet looks up cgoid in the cgo map, deletes the reference from the map // and returns the object, if found. Else returns nil, false. // Thread-safe. func (h *handle) cgoGet(cgoid int) (cg cgoif, found bool) { if cgoid == 0 { return nil, false } h.cgoLock.Lock() defer h.cgoLock.Unlock() cg, found = h.cgomap[cgoid] if found { delete(h.cgomap, cgoid) } return cg, found } golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/header.go000066400000000000000000000036261336406275100253360ustar00rootroot00000000000000package kafka /** * Copyright 2018 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import ( "fmt" "strconv" ) /* #include #include #include "glue_rdkafka.h" */ import "C" // Header represents a single Kafka message header. // // Message headers are made up of a list of Header elements, retaining their original insert // order and allowing for duplicate Keys. // // Key is a human readable string identifying the header. // Value is the key's binary value, Kafka does not put any restrictions on the format of // of the Value but it should be made relatively compact. // The value may be a byte array, empty, or nil. // // NOTE: Message headers are not available on producer delivery report messages. type Header struct { Key string // Header name (utf-8 string) Value []byte // Header value (nil, empty, or binary) } // String returns the Header Key and data in a human representable possibly truncated form // suitable for displaying to the user. func (h Header) String() string { if h.Value == nil { return fmt.Sprintf("%s=nil", h.Key) } valueLen := len(h.Value) if valueLen == 0 { return fmt.Sprintf("%s=", h.Key) } truncSize := valueLen trunc := "" if valueLen > 50+15 { truncSize = 50 trunc = fmt.Sprintf("(%d more bytes)", valueLen-truncSize) } return fmt.Sprintf("%s=%s%s", h.Key, strconv.Quote(string(h.Value[:truncSize])), trunc) } golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/header_test.go000066400000000000000000000023441336406275100263710ustar00rootroot00000000000000/** * Copyright 2016 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package kafka import ( "testing" ) // TestHeader tests the Header type func TestHeader(t *testing.T) { hdr := Header{"MyHdr1", []byte("a string")} if hdr.String() != "MyHdr1=\"a string\"" { t.Errorf("Unexpected: %s", hdr.String()) } hdr = Header{"MyHdr2", []byte("a longer string that will be truncated right here <-- so you wont see this part.")} if hdr.String() != "MyHdr2=\"a longer string that will be truncated right here \"(30 more bytes)" { t.Errorf("Unexpected: %s", hdr.String()) } hdr = Header{"MyHdr3", []byte{1, 2, 3, 4}} if hdr.String() != "MyHdr3=\"\\x01\\x02\\x03\\x04\"" { t.Errorf("Unexpected: %s", hdr.String()) } } golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/integration_test.go000066400000000000000000001214461336406275100274710ustar00rootroot00000000000000/** * Copyright 2016 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package kafka import ( "context" "encoding/binary" "fmt" "math/rand" "path" "reflect" "runtime" "testing" "time" ) // producer test control type producerCtrl struct { silent bool withDr bool // use delivery channel batchProducer bool // enable batch producer } // define commitMode with constants type commitMode string const ( ViaCommitMessageAPI = "CommitMessage" ViaCommitOffsetsAPI = "CommitOffsets" ViaCommitAPI = "Commit" ) // consumer test control type consumerCtrl struct { autoCommit bool // set enable.auto.commit property useChannel bool commitMode commitMode // which commit api to use } type testmsgType struct { msg Message expectedError Error } // msgtracker tracks messages type msgtracker struct { t *testing.T msgcnt int64 errcnt int64 // count of failed messages msgs []*Message } // msgtrackerStart sets up a new message tracker func msgtrackerStart(t *testing.T, expectedCnt int) (mt msgtracker) { mt = msgtracker{t: t} mt.msgs = make([]*Message, expectedCnt) return mt } var testMsgsInit = false var p0TestMsgs []*testmsgType // partition 0 test messages // pAllTestMsgs holds messages for various partitions including PartitionAny and invalid partitions var pAllTestMsgs []*testmsgType // createTestMessages populates p0TestMsgs and pAllTestMsgs func createTestMessages() { if testMsgsInit { return } defer func() { testMsgsInit = true }() testmsgs := make([]*testmsgType, 100) i := 0 // a test message with default initialization testmsgs[i] = &testmsgType{msg: Message{TopicPartition: TopicPartition{Topic: &testconf.Topic, Partition: 0}}} i++ // a test message for partition 0 with only Opaque specified testmsgs[i] = &testmsgType{msg: Message{TopicPartition: TopicPartition{Topic: &testconf.Topic, Partition: 0}, Opaque: fmt.Sprintf("Op%d", i), }} i++ // a test message for partition 0 with empty Value and Keys testmsgs[i] = &testmsgType{msg: Message{TopicPartition: TopicPartition{Topic: &testconf.Topic, Partition: 0}, Value: []byte(""), Key: []byte(""), Opaque: fmt.Sprintf("Op%d", i), }} i++ // a test message for partition 0 with Value, Key, and Opaque testmsgs[i] = &testmsgType{msg: Message{TopicPartition: TopicPartition{Topic: 
&testconf.Topic, Partition: 0}, Value: []byte(fmt.Sprintf("value%d", i)), Key: []byte(fmt.Sprintf("key%d", i)), Opaque: fmt.Sprintf("Op%d", i), }} i++ // a test message for partition 0 without Value testmsgs[i] = &testmsgType{msg: Message{TopicPartition: TopicPartition{Topic: &testconf.Topic, Partition: 0}, Key: []byte(fmt.Sprintf("key%d", i)), Opaque: fmt.Sprintf("Op%d", i), }} i++ // a test message for partition 0 without Key testmsgs[i] = &testmsgType{msg: Message{TopicPartition: TopicPartition{Topic: &testconf.Topic, Partition: 0}, Value: []byte(fmt.Sprintf("value%d", i)), Opaque: fmt.Sprintf("Op%d", i), }} i++ p0TestMsgs = testmsgs[:i] // a test message for PartitonAny with Value, Key, and Opaque testmsgs[i] = &testmsgType{msg: Message{TopicPartition: TopicPartition{Topic: &testconf.Topic, Partition: PartitionAny}, Value: []byte(fmt.Sprintf("value%d", i)), Key: []byte(fmt.Sprintf("key%d", i)), Opaque: fmt.Sprintf("Op%d", i), }} i++ // a test message for a non-existent partition with Value, Key, and Opaque. // It should generate ErrUnknownPartition testmsgs[i] = &testmsgType{expectedError: Error{ErrUnknownPartition, ""}, msg: Message{TopicPartition: TopicPartition{Topic: &testconf.Topic, Partition: int32(10000)}, Value: []byte(fmt.Sprintf("value%d", i)), Key: []byte(fmt.Sprintf("key%d", i)), Opaque: fmt.Sprintf("Op%d", i), }} i++ pAllTestMsgs = testmsgs[:i] } // consume messages through the Poll() interface func eventTestPollConsumer(c *Consumer, mt *msgtracker, expCnt int) { for true { ev := c.Poll(100) if ev == nil { // timeout continue } if !handleTestEvent(c, mt, expCnt, ev) { break } } } // consume messages through the Events channel func eventTestChannelConsumer(c *Consumer, mt *msgtracker, expCnt int) { for ev := range c.Events() { if !handleTestEvent(c, mt, expCnt, ev) { break } } } // handleTestEvent returns false if processing should stop, else true. Tracks the message received func handleTestEvent(c *Consumer, mt *msgtracker, expCnt int, ev Event) bool { switch e := ev.(type) { case *Message: if e.TopicPartition.Error != nil { mt.t.Errorf("Error: %v", e.TopicPartition) } mt.msgs[mt.msgcnt] = e mt.msgcnt++ if mt.msgcnt >= int64(expCnt) { return false } case PartitionEOF: break // silence default: mt.t.Fatalf("Consumer error: %v", e) } return true } // delivery event handler. Tracks the message received func deliveryTestHandler(t *testing.T, expCnt int64, deliveryChan chan Event, mt *msgtracker, doneChan chan int64) { for ev := range deliveryChan { m, ok := ev.(*Message) if !ok { continue } mt.msgs[mt.msgcnt] = m mt.msgcnt++ if m.TopicPartition.Error != nil { mt.errcnt++ // log it and check it later t.Logf("Message delivery error: %v", m.TopicPartition) } t.Logf("Delivered %d/%d to %s, error count %d", mt.msgcnt, expCnt, m.TopicPartition, mt.errcnt) if mt.msgcnt >= expCnt { break } } doneChan <- mt.msgcnt close(doneChan) } // producerTest produces messages in to topic. 
Verifies delivered messages func producerTest(t *testing.T, testname string, testmsgs []*testmsgType, pc producerCtrl, produceFunc func(p *Producer, m *Message, drChan chan Event)) { if !testconfRead() { t.Skipf("Missing testconf.json") } if testmsgs == nil { createTestMessages() testmsgs = pAllTestMsgs } //get the number of messages prior to producing more messages prerunMsgCnt, err := getMessageCountInTopic(testconf.Topic) if err != nil { t.Fatalf("Cannot get message count, Error: %s\n", err) } conf := ConfigMap{"bootstrap.servers": testconf.Brokers, "go.batch.producer": pc.batchProducer, "go.delivery.reports": pc.withDr, "queue.buffering.max.messages": len(testmsgs), "api.version.request": "true", "broker.version.fallback": "0.9.0.1", "default.topic.config": ConfigMap{"acks": 1}} conf.updateFromTestconf() p, err := NewProducer(&conf) if err != nil { panic(err) } mt := msgtrackerStart(t, len(testmsgs)) var doneChan chan int64 var drChan chan Event if pc.withDr { doneChan = make(chan int64) drChan = p.Events() go deliveryTestHandler(t, int64(len(testmsgs)), p.Events(), &mt, doneChan) } if !pc.silent { t.Logf("%s: produce %d messages", testname, len(testmsgs)) } for i := 0; i < len(testmsgs); i++ { t.Logf("producing message %d: %v\n", i, testmsgs[i].msg) produceFunc(p, &testmsgs[i].msg, drChan) } if !pc.silent { t.Logf("produce done") } // Wait for messages in-flight and in-queue to get delivered. if !pc.silent { t.Logf("%s: %d messages in queue", testname, p.Len()) } r := p.Flush(10000) if r > 0 { t.Errorf("%s: %d messages remains in queue after Flush()", testname, r) } if pc.withDr { mt.msgcnt = <-doneChan } else { mt.msgcnt = int64(len(testmsgs)) } if !pc.silent { t.Logf("delivered %d messages\n", mt.msgcnt) } p.Close() //get the number of messages afterward postrunMsgCnt, err := getMessageCountInTopic(testconf.Topic) if err != nil { t.Fatalf("Cannot get message count, Error: %s\n", err) } if !pc.silent { t.Logf("prerun message count: %d, postrun count %d, delta: %d\n", prerunMsgCnt, postrunMsgCnt, postrunMsgCnt-prerunMsgCnt) t.Logf("deliveried message count: %d, error message count %d\n", mt.msgcnt, mt.errcnt) } // verify the count and messages only if we get the delivered messages if pc.withDr { if int64(postrunMsgCnt-prerunMsgCnt) != (mt.msgcnt - mt.errcnt) { t.Errorf("Expected topic message count %d, got %d\n", prerunMsgCnt+int(mt.msgcnt-mt.errcnt), postrunMsgCnt) } verifyMessages(t, mt.msgs, testmsgs) } } // consumerTest consumes messages from a pre-primed (produced to) topic func consumerTest(t *testing.T, testname string, msgcnt int, cc consumerCtrl, consumeFunc func(c *Consumer, mt *msgtracker, expCnt int), rebalanceCb func(c *Consumer, event Event) error) { if msgcnt == 0 { createTestMessages() producerTest(t, "Priming producer", p0TestMsgs, producerCtrl{}, func(p *Producer, m *Message, drChan chan Event) { p.ProduceChannel() <- m }) msgcnt = len(p0TestMsgs) } conf := ConfigMap{"bootstrap.servers": testconf.Brokers, "go.events.channel.enable": cc.useChannel, "group.id": testconf.GroupID, "session.timeout.ms": 6000, "api.version.request": "true", "enable.auto.commit": cc.autoCommit, "debug": ",", "default.topic.config": ConfigMap{"auto.offset.reset": "earliest"}} conf.updateFromTestconf() c, err := NewConsumer(&conf) if err != nil { panic(err) } defer c.Close() expCnt := msgcnt mt := msgtrackerStart(t, expCnt) t.Logf("%s, expecting %d messages", testname, expCnt) c.Subscribe(testconf.Topic, rebalanceCb) consumeFunc(c, &mt, expCnt) //test commits switch cc.commitMode { case 
ViaCommitMessageAPI: // verify CommitMessage() API for _, message := range mt.msgs { _, commitErr := c.CommitMessage(message) if commitErr != nil { t.Errorf("Cannot commit message. Error: %s\n", commitErr) } } case ViaCommitOffsetsAPI: // verify CommitOffset partitions := make([]TopicPartition, len(mt.msgs)) for index, message := range mt.msgs { partitions[index] = message.TopicPartition } _, commitErr := c.CommitOffsets(partitions) if commitErr != nil { t.Errorf("Failed to commit using CommitOffsets. Error: %s\n", commitErr) } case ViaCommitAPI: // verify Commit() API _, commitErr := c.Commit() if commitErr != nil { t.Errorf("Failed to commit. Error: %s", commitErr) } } // Trigger RevokePartitions c.Unsubscribe() // Handle RevokePartitions c.Poll(500) } //Test consumer QueryWatermarkOffsets API func TestConsumerQueryWatermarkOffsets(t *testing.T) { if !testconfRead() { t.Skipf("Missing testconf.json") } // getMessageCountInTopic() uses consumer QueryWatermarkOffsets() API to // get the number of messages in a topic msgcnt, err := getMessageCountInTopic(testconf.Topic) if err != nil { t.Errorf("Cannot get message size. Error: %s\n", err) } // Prime topic with test messages createTestMessages() producerTest(t, "Priming producer", p0TestMsgs, producerCtrl{silent: true}, func(p *Producer, m *Message, drChan chan Event) { p.ProduceChannel() <- m }) // getMessageCountInTopic() uses consumer QueryWatermarkOffsets() API to // get the number of messages in a topic newmsgcnt, err := getMessageCountInTopic(testconf.Topic) if err != nil { t.Errorf("Cannot get message size. Error: %s\n", err) } if newmsgcnt-msgcnt != len(p0TestMsgs) { t.Errorf("Incorrect offsets. Expected message count %d, got %d\n", len(p0TestMsgs), newmsgcnt-msgcnt) } } //TestConsumerOffsetsForTimes func TestConsumerOffsetsForTimes(t *testing.T) { if !testconfRead() { t.Skipf("Missing testconf.json") } conf := ConfigMap{"bootstrap.servers": testconf.Brokers, "group.id": testconf.GroupID, "api.version.request": true} conf.updateFromTestconf() c, err := NewConsumer(&conf) if err != nil { panic(err) } defer c.Close() // Prime topic with test messages createTestMessages() producerTest(t, "Priming producer", p0TestMsgs, producerCtrl{silent: true}, func(p *Producer, m *Message, drChan chan Event) { p.ProduceChannel() <- m }) times := make([]TopicPartition, 1) times[0] = TopicPartition{Topic: &testconf.Topic, Partition: 0, Offset: 12345} offsets, err := c.OffsetsForTimes(times, 5000) if err != nil { t.Errorf("OffsetsForTimes() failed: %s\n", err) return } if len(offsets) != 1 { t.Errorf("OffsetsForTimes() returned wrong length %d, expected 1\n", len(offsets)) return } if *offsets[0].Topic != testconf.Topic || offsets[0].Partition != 0 { t.Errorf("OffsetsForTimes() returned wrong topic/partition\n") return } if offsets[0].Error != nil { t.Errorf("OffsetsForTimes() returned error for partition 0: %s\n", err) return } low, _, err := c.QueryWatermarkOffsets(testconf.Topic, 0, 5*1000) if err != nil { t.Errorf("Failed to query watermark offsets for topic %s. Error: %s\n", testconf.Topic, err) return } t.Logf("OffsetsForTimes() returned offset %d for timestamp %d\n", offsets[0].Offset, times[0].Offset) // Since we're using a phony low timestamp it is assumed that the returned // offset will be oldest message. 
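	// (OffsetsForTimes() interprets the given Offset field as a millisecond
	// timestamp and returns, per partition, the earliest offset whose message
	// timestamp is >= that value; 12345ms after the epoch predates every
	// message in the topic, so the low watermark is the expected answer.)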
if offsets[0].Offset != Offset(low) { t.Errorf("OffsetsForTimes() returned invalid offset %d for timestamp %d, expected %d\n", offsets[0].Offset, times[0].Offset, low) return } } // test consumer GetMetadata API func TestConsumerGetMetadata(t *testing.T) { if !testconfRead() { t.Skipf("Missing testconf.json") } config := &ConfigMap{"bootstrap.servers": testconf.Brokers, "group.id": testconf.GroupID} config.updateFromTestconf() // Create consumer c, err := NewConsumer(config) if err != nil { t.Errorf("Failed to create consumer: %s\n", err) return } defer c.Close() metaData, err := c.GetMetadata(&testconf.Topic, false, 5*1000) if err != nil { t.Errorf("Failed to get meta data for topic %s. Error: %s\n", testconf.Topic, err) return } t.Logf("Meta data for topic %s: %v\n", testconf.Topic, metaData) metaData, err = c.GetMetadata(nil, true, 5*1000) if err != nil { t.Errorf("Failed to get meta data, Error: %s\n", err) return } t.Logf("Meta data for consumer: %v\n", metaData) } //Test producer QueryWatermarkOffsets API func TestProducerQueryWatermarkOffsets(t *testing.T) { if !testconfRead() { t.Skipf("Missing testconf.json") } config := &ConfigMap{"bootstrap.servers": testconf.Brokers} config.updateFromTestconf() // Create producer p, err := NewProducer(config) if err != nil { t.Errorf("Failed to create producer: %s\n", err) return } defer p.Close() low, high, err := p.QueryWatermarkOffsets(testconf.Topic, 0, 5*1000) if err != nil { t.Errorf("Failed to query watermark offsets for topic %s. Error: %s\n", testconf.Topic, err) return } cnt := high - low t.Logf("Watermark offsets fo topic %s: low=%d, high=%d\n", testconf.Topic, low, high) createTestMessages() producerTest(t, "Priming producer", p0TestMsgs, producerCtrl{silent: true}, func(p *Producer, m *Message, drChan chan Event) { p.ProduceChannel() <- m }) low, high, err = p.QueryWatermarkOffsets(testconf.Topic, 0, 5*1000) if err != nil { t.Errorf("Failed to query watermark offsets for topic %s. Error: %s\n", testconf.Topic, err) return } t.Logf("Watermark offsets fo topic %s: low=%d, high=%d\n", testconf.Topic, low, high) newcnt := high - low t.Logf("count = %d, New count = %d\n", cnt, newcnt) if newcnt-cnt != int64(len(p0TestMsgs)) { t.Errorf("Incorrect offsets. Expected message count %d, got %d\n", len(p0TestMsgs), newcnt-cnt) } } //Test producer GetMetadata API func TestProducerGetMetadata(t *testing.T) { if !testconfRead() { t.Skipf("Missing testconf.json") } config := &ConfigMap{"bootstrap.servers": testconf.Brokers} config.updateFromTestconf() // Create producer p, err := NewProducer(config) if err != nil { t.Errorf("Failed to create producer: %s\n", err) return } defer p.Close() metaData, err := p.GetMetadata(&testconf.Topic, false, 5*1000) if err != nil { t.Errorf("Failed to get meta data for topic %s. 
Error: %s\n", testconf.Topic, err) return } t.Logf("Meta data for topic %s: %v\n", testconf.Topic, metaData) metaData, err = p.GetMetadata(nil, true, 5*1000) if err != nil { t.Errorf("Failed to get meta data, Error: %s\n", err) return } t.Logf("Meta data for producer: %v\n", metaData) } // test producer function-based API without delivery report func TestProducerFunc(t *testing.T) { producerTest(t, "Function producer (without DR)", nil, producerCtrl{}, func(p *Producer, m *Message, drChan chan Event) { err := p.Produce(m, drChan) if err != nil { t.Errorf("Produce() failed: %v", err) } }) } // test producer function-based API with delivery report func TestProducerFuncDR(t *testing.T) { producerTest(t, "Function producer (with DR)", nil, producerCtrl{withDr: true}, func(p *Producer, m *Message, drChan chan Event) { err := p.Produce(m, drChan) if err != nil { t.Errorf("Produce() failed: %v", err) } }) } // test producer with bad messages func TestProducerWithBadMessages(t *testing.T) { conf := ConfigMap{"bootstrap.servers": testconf.Brokers} conf.updateFromTestconf() p, err := NewProducer(&conf) if err != nil { panic(err) } defer p.Close() // producing a nil message should return an error without crash err = p.Produce(nil, p.Events()) if err == nil { t.Errorf("Producing a nil message should return error\n") } else { t.Logf("Producing a nil message returns expected error: %s\n", err) } // producing a blank message (with nil Topic) should return an error without crash err = p.Produce(&Message{}, p.Events()) if err == nil { t.Errorf("Producing a blank message should return error\n") } else { t.Logf("Producing a blank message returns expected error: %s\n", err) } } // test producer channel-based API without delivery report func TestProducerChannel(t *testing.T) { producerTest(t, "Channel producer (without DR)", nil, producerCtrl{}, func(p *Producer, m *Message, drChan chan Event) { p.ProduceChannel() <- m }) } // test producer channel-based API with delivery report func TestProducerChannelDR(t *testing.T) { producerTest(t, "Channel producer (with DR)", nil, producerCtrl{withDr: true}, func(p *Producer, m *Message, drChan chan Event) { p.ProduceChannel() <- m }) } // test batch producer channel-based API without delivery report func TestProducerBatchChannel(t *testing.T) { producerTest(t, "Channel producer (without DR, batch channel)", nil, producerCtrl{batchProducer: true}, func(p *Producer, m *Message, drChan chan Event) { p.ProduceChannel() <- m }) } // test batch producer channel-based API with delivery report func TestProducerBatchChannelDR(t *testing.T) { producerTest(t, "Channel producer (DR, batch channel)", nil, producerCtrl{withDr: true, batchProducer: true}, func(p *Producer, m *Message, drChan chan Event) { p.ProduceChannel() <- m }) } // use opaque string to locate the matching test message for message verification func findExpectedMessage(expected []*testmsgType, opaque string) *testmsgType { for i, m := range expected { if expected[i].msg.Opaque != nil && expected[i].msg.Opaque.(string) == opaque { return m } } return nil } // verify the message content against the expected func verifyMessages(t *testing.T, msgs []*Message, expected []*testmsgType) { if len(msgs) != len(expected) { t.Errorf("Expected %d messages, got %d instead\n", len(expected), len(msgs)) return } for _, m := range msgs { if m.Opaque == nil { continue // No way to look up the corresponding expected message, let it go } testmsg := findExpectedMessage(expected, m.Opaque.(string)) if testmsg == nil { t.Errorf("Cannot 
find a matching expected message for message %v\n", m) continue } em := testmsg.msg if m.TopicPartition.Error != nil { if m.TopicPartition.Error != testmsg.expectedError { t.Errorf("Expected error %s, but got error %s\n", testmsg.expectedError, m.TopicPartition.Error) } continue } // check partition if em.TopicPartition.Partition == PartitionAny { if m.TopicPartition.Partition < 0 { t.Errorf("Expected partition %d, got %d\n", em.TopicPartition.Partition, m.TopicPartition.Partition) } } else if em.TopicPartition.Partition != m.TopicPartition.Partition { t.Errorf("Expected partition %d, got %d\n", em.TopicPartition.Partition, m.TopicPartition.Partition) } //check Key, Value, and Opaque if string(m.Key) != string(em.Key) { t.Errorf("Expected Key %v, got %v\n", m.Key, em.Key) } if string(m.Value) != string(em.Value) { t.Errorf("Expected Value %v, got %v\n", m.Value, em.Value) } if m.Opaque.(string) != em.Opaque.(string) { t.Errorf("Expected Opaque %v, got %v\n", m.Opaque, em.Opaque) } } } // test consumer APIs with various message commit modes func consumerTestWithCommits(t *testing.T, testname string, msgcnt int, useChannel bool, consumeFunc func(c *Consumer, mt *msgtracker, expCnt int), rebalanceCb func(c *Consumer, event Event) error) { consumerTest(t, testname+" auto commit", msgcnt, consumerCtrl{useChannel: useChannel, autoCommit: true}, consumeFunc, rebalanceCb) consumerTest(t, testname+" using CommitMessage() API", msgcnt, consumerCtrl{useChannel: useChannel, commitMode: ViaCommitMessageAPI}, consumeFunc, rebalanceCb) consumerTest(t, testname+" using CommitOffsets() API", msgcnt, consumerCtrl{useChannel: useChannel, commitMode: ViaCommitOffsetsAPI}, consumeFunc, rebalanceCb) consumerTest(t, testname+" using Commit() API", msgcnt, consumerCtrl{useChannel: useChannel, commitMode: ViaCommitAPI}, consumeFunc, rebalanceCb) } // test consumer channel-based API func TestConsumerChannel(t *testing.T) { consumerTestWithCommits(t, "Channel Consumer", 0, true, eventTestChannelConsumer, nil) } // test consumer poll-based API func TestConsumerPoll(t *testing.T) { consumerTestWithCommits(t, "Poll Consumer", 0, false, eventTestPollConsumer, nil) } // test consumer poll-based API with rebalance callback func TestConsumerPollRebalance(t *testing.T) { consumerTestWithCommits(t, "Poll Consumer (rebalance callback)", 0, false, eventTestPollConsumer, func(c *Consumer, event Event) error { t.Logf("Rebalanced: %s", event) return nil }) } // Test Committed() API func TestConsumerCommitted(t *testing.T) { consumerTestWithCommits(t, "Poll Consumer (rebalance callback, verify Committed())", 0, false, eventTestPollConsumer, func(c *Consumer, event Event) error { t.Logf("Rebalanced: %s", event) rp, ok := event.(RevokedPartitions) if ok { offsets, err := c.Committed(rp.Partitions, 5000) if err != nil { t.Errorf("Failed to get committed offsets: %s\n", err) return nil } t.Logf("Retrieved Committed offsets: %s\n", offsets) if len(offsets) != len(rp.Partitions) || len(rp.Partitions) == 0 { t.Errorf("Invalid number of partitions %d, should be %d (and >0)\n", len(offsets), len(rp.Partitions)) } // Verify proper offsets: at least one partition needs // to have a committed offset. 
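			// (Editorial note: a partition with no committed offset reports
			// the negative sentinel OffsetInvalid, so counting entries with
			// p.Offset >= 0 suffices here.)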
validCnt := 0 for _, p := range offsets { if p.Error != nil { t.Errorf("Committed() partition error: %v: %v", p, p.Error) } else if p.Offset >= 0 { validCnt++ } } if validCnt == 0 { t.Errorf("Committed(): no partitions with valid offsets: %v", offsets) } } return nil }) } // TestProducerConsumerTimestamps produces messages with timestamps // and verifies them on consumption. // Requires librdkafka >=0.9.4 and Kafka >=0.10.0.0 func TestProducerConsumerTimestamps(t *testing.T) { numver, strver := LibraryVersion() if numver < 0x00090400 { t.Skipf("Requires librdkafka >=0.9.4 (currently on %s)", strver) } if !testconfRead() { t.Skipf("Missing testconf.json") } consumerConf := ConfigMap{"bootstrap.servers": testconf.Brokers, "go.events.channel.enable": true, "group.id": testconf.Topic, } consumerConf.updateFromTestconf() /* Create consumer and find recognizable message, verify timestamp. * The consumer is started before the producer to make sure * the message isn't missed. */ t.Logf("Creating consumer") c, err := NewConsumer(&consumerConf) if err != nil { t.Fatalf("NewConsumer: %v", err) } t.Logf("Assign %s [0]", testconf.Topic) err = c.Assign([]TopicPartition{{Topic: &testconf.Topic, Partition: 0, Offset: OffsetEnd}}) if err != nil { t.Fatalf("Assign: %v", err) } /* Wait until EOF is reached so we dont miss the produced message */ for ev := range c.Events() { t.Logf("Awaiting initial EOF") _, ok := ev.(PartitionEOF) if ok { break } } /* * Create producer and produce one recognizable message with timestamp */ producerConf := ConfigMap{"bootstrap.servers": testconf.Brokers} producerConf.updateFromTestconf() t.Logf("Creating producer") p, err := NewProducer(&producerConf) if err != nil { t.Fatalf("NewProducer: %v", err) } drChan := make(chan Event, 1) /* Offset the timestamp to avoid comparison with system clock */ future, _ := time.ParseDuration("87658h") // 10y timestamp := time.Now().Add(future) key := fmt.Sprintf("TS: %v", timestamp) t.Logf("Producing message with timestamp %v", timestamp) err = p.Produce(&Message{ TopicPartition: TopicPartition{Topic: &testconf.Topic, Partition: 0}, Key: []byte(key), Timestamp: timestamp}, drChan) if err != nil { t.Fatalf("Produce: %v", err) } // Wait for delivery t.Logf("Awaiting delivery report") ev := <-drChan m, ok := ev.(*Message) if !ok { t.Fatalf("drChan: Expected *Message, got %v", ev) } if m.TopicPartition.Error != nil { t.Fatalf("Delivery failed: %v", m.TopicPartition) } t.Logf("Produced message to %v", m.TopicPartition) producedOffset := m.TopicPartition.Offset p.Close() /* Now consume messages, waiting for that recognizable one. 
*/ t.Logf("Consuming messages") outer: for ev := range c.Events() { switch m := ev.(type) { case *Message: if m.TopicPartition.Error != nil { continue } if m.Key == nil || string(m.Key) != key { continue } t.Logf("Found message at %v with timestamp %s %s", m.TopicPartition, m.TimestampType, m.Timestamp) if m.TopicPartition.Offset != producedOffset { t.Fatalf("Produced Offset %d does not match consumed offset %d", producedOffset, m.TopicPartition.Offset) } if m.TimestampType != TimestampCreateTime { t.Fatalf("Expected timestamp CreateTime, not %s", m.TimestampType) } /* Since Kafka timestamps are milliseconds we need to * shave off some precision for the comparison */ if m.Timestamp.UnixNano()/1000000 != timestamp.UnixNano()/1000000 { t.Fatalf("Expected timestamp %v (%d), not %v (%d)", timestamp, timestamp.UnixNano(), m.Timestamp, m.Timestamp.UnixNano()) } break outer default: } } c.Close() } // TestProducerConsumerHeaders produces messages with headers // and verifies them on consumption. // Requires librdkafka >=0.11.4 and Kafka >=0.11.0.0 func TestProducerConsumerHeaders(t *testing.T) { numver, strver := LibraryVersion() if numver < 0x000b0400 { t.Skipf("Requires librdkafka >=0.11.4 (currently on %s, 0x%x)", strver, numver) } if !testconfRead() { t.Skipf("Missing testconf.json") } conf := ConfigMap{"bootstrap.servers": testconf.Brokers, "api.version.request": true, "enable.auto.commit": false, "group.id": testconf.Topic, } conf.updateFromTestconf() /* * Create producer and produce a couple of messages with and without * headers. */ t.Logf("Creating producer") p, err := NewProducer(&conf) if err != nil { t.Fatalf("NewProducer: %v", err) } drChan := make(chan Event, 1) // prepare some header values bigBytes := make([]byte, 2500) for i := 0; i < len(bigBytes); i++ { bigBytes[i] = byte(i) } myVarint := make([]byte, binary.MaxVarintLen64) myVarintLen := binary.PutVarint(myVarint, 12345678901234) expMsgHeaders := [][]Header{ { {"msgid", []byte("1")}, {"a key with SPACES ", bigBytes[:15]}, {"BIGONE!", bigBytes}, }, { {"msgid", []byte("2")}, {"myVarint", myVarint[:myVarintLen]}, {"empty", []byte("")}, {"theNullIsNil", nil}, }, nil, // no headers { {"msgid", []byte("4")}, {"order", []byte("1")}, {"order", []byte("2")}, {"order", nil}, {"order", []byte("4")}, }, } t.Logf("Producing %d messages", len(expMsgHeaders)) for _, hdrs := range expMsgHeaders { err = p.Produce(&Message{ TopicPartition: TopicPartition{Topic: &testconf.Topic, Partition: 0}, Headers: hdrs}, drChan) } if err != nil { t.Fatalf("Produce: %v", err) } var firstOffset Offset = OffsetInvalid for range expMsgHeaders { ev := <-drChan m, ok := ev.(*Message) if !ok { t.Fatalf("drChan: Expected *Message, got %v", ev) } if m.TopicPartition.Error != nil { t.Fatalf("Delivery failed: %v", m.TopicPartition) } t.Logf("Produced message to %v", m.TopicPartition) if firstOffset == OffsetInvalid { firstOffset = m.TopicPartition.Offset } } p.Close() /* Now consume the produced messages and verify the headers */ t.Logf("Creating consumer starting at offset %v", firstOffset) c, err := NewConsumer(&conf) if err != nil { t.Fatalf("NewConsumer: %v", err) } err = c.Assign([]TopicPartition{{Topic: &testconf.Topic, Partition: 0, Offset: firstOffset}}) if err != nil { t.Fatalf("Assign: %v", err) } for n, hdrs := range expMsgHeaders { m, err := c.ReadMessage(-1) if err != nil { t.Fatalf("Expected message #%d, not error %v", n, err) } if m.Headers == nil { if hdrs == nil { continue } t.Fatalf("Expected message #%d to have headers", n) } if hdrs == nil { 
t.Fatalf("Expected message #%d not to have headers, but found %v", n, m.Headers) } // Compare headers if !reflect.DeepEqual(hdrs, m.Headers) { t.Fatalf("Expected message #%d headers to match %v, but found %v", n, hdrs, m.Headers) } t.Logf("Message #%d headers matched: %v", n, m.Headers) } c.Close() } func createAdminClient(t *testing.T) (a *AdminClient) { numver, strver := LibraryVersion() if numver < 0x000b0500 { t.Skipf("Requires librdkafka >=0.11.5 (currently on %s, 0x%x)", strver, numver) } if !testconfRead() { t.Skipf("Missing testconf.json") } conf := ConfigMap{"bootstrap.servers": testconf.Brokers} conf.updateFromTestconf() /* * Create producer and produce a couple of messages with and without * headers. */ a, err := NewAdminClient(&conf) if err != nil { t.Fatalf("NewAdminClient: %v", err) } return a } func validateTopicResult(t *testing.T, result []TopicResult, expError map[string]Error) { for _, res := range result { exp, ok := expError[res.Topic] if !ok { t.Errorf("Result for unexpected topic %s", res) continue } if res.Error.Code() != exp.Code() { t.Errorf("Topic %s: expected \"%s\", got \"%s\"", res.Topic, exp, res.Error) continue } t.Logf("Topic %s: matched expected \"%s\"", res.Topic, res.Error) } } func TestAdminTopics(t *testing.T) { rand.Seed(time.Now().Unix()) a := createAdminClient(t) defer a.Close() brokerList, err := getBrokerList(a) if err != nil { t.Fatalf("Failed to retrieve broker list: %v", err) } // Few and Many replica sets use in these tests var fewReplicas []int32 if len(brokerList) < 2 { fewReplicas = brokerList } else { fewReplicas = brokerList[0:2] } var manyReplicas []int32 if len(brokerList) < 5 { manyReplicas = brokerList } else { manyReplicas = brokerList[0:5] } const topicCnt = 7 newTopics := make([]TopicSpecification, topicCnt) expError := map[string]Error{} for i := 0; i < topicCnt; i++ { topic := fmt.Sprintf("%s-create-%d-%d", testconf.Topic, i, rand.Intn(100000)) newTopics[i] = TopicSpecification{ Topic: topic, NumPartitions: 1 + i*2, } if (i % 1) == 0 { newTopics[i].ReplicationFactor = len(fewReplicas) } else { newTopics[i].ReplicationFactor = len(manyReplicas) } expError[newTopics[i].Topic] = Error{} // No error var useReplicas []int32 if i == 2 { useReplicas = fewReplicas } else if i == 3 { useReplicas = manyReplicas } else if i == topicCnt-1 { newTopics[i].ReplicationFactor = len(brokerList) + 10 expError[newTopics[i].Topic] = Error{code: ErrInvalidReplicationFactor} } if len(useReplicas) > 0 { newTopics[i].ReplicaAssignment = make([][]int32, newTopics[i].NumPartitions) newTopics[i].ReplicationFactor = 0 for p := 0; p < newTopics[i].NumPartitions; p++ { newTopics[i].ReplicaAssignment[p] = useReplicas } } } maxDuration, err := time.ParseDuration("30s") if err != nil { t.Fatalf("%s", err) } // First just validate the topics, don't create t.Logf("Validating topics before creation\n") ctx, cancel := context.WithTimeout(context.Background(), maxDuration) defer cancel() result, err := a.CreateTopics(ctx, newTopics, SetAdminValidateOnly(true)) if err != nil { t.Fatalf("CreateTopics(ValidateOnly) failed: %s", err) } validateTopicResult(t, result, expError) // Now create the topics t.Logf("Creating topics\n") ctx, cancel = context.WithTimeout(context.Background(), maxDuration) defer cancel() result, err = a.CreateTopics(ctx, newTopics, SetAdminValidateOnly(false)) if err != nil { t.Fatalf("CreateTopics() failed: %s", err) } validateTopicResult(t, result, expError) // Attempt to create the topics again, should all fail. 
t.Logf("Attempt to re-create topics, should all fail\n") for k := range expError { if expError[k].code == ErrNoError { expError[k] = Error{code: ErrTopicAlreadyExists} } } ctx, cancel = context.WithTimeout(context.Background(), maxDuration) defer cancel() result, err = a.CreateTopics(ctx, newTopics) if err != nil { t.Fatalf("CreateTopics#2() failed: %s", err) } validateTopicResult(t, result, expError) // Add partitions to some of the topics t.Logf("Create new partitions for a subset of topics\n") newParts := make([]PartitionsSpecification, topicCnt/2) expError = map[string]Error{} for i := 0; i < topicCnt/2; i++ { topic := newTopics[i].Topic newParts[i] = PartitionsSpecification{ Topic: topic, IncreaseTo: newTopics[i].NumPartitions + 3, } if i == 1 { // Invalid partition count (less than current) newParts[i].IncreaseTo = newTopics[i].NumPartitions - 1 expError[topic] = Error{code: ErrInvalidPartitions} } else { expError[topic] = Error{} } t.Logf("Creating new partitions for %s: %d -> %d: expecting %v\n", topic, newTopics[i].NumPartitions, newParts[i].IncreaseTo, expError[topic]) } ctx, cancel = context.WithTimeout(context.Background(), maxDuration) defer cancel() result, err = a.CreatePartitions(ctx, newParts) if err != nil { t.Fatalf("CreatePartitions() failed: %s", err) } validateTopicResult(t, result, expError) // FIXME: wait for topics to become available in metadata instead time.Sleep(5000 * time.Millisecond) // Delete the topics deleteTopics := make([]string, topicCnt) for i := 0; i < topicCnt; i++ { deleteTopics[i] = newTopics[i].Topic if i == topicCnt-1 { expError[deleteTopics[i]] = Error{code: ErrUnknownTopicOrPart} } else { expError[deleteTopics[i]] = Error{} } } ctx, cancel = context.WithTimeout(context.Background(), maxDuration) defer cancel() result2, err := a.DeleteTopics(ctx, deleteTopics) if err != nil { t.Fatalf("DeleteTopics() failed: %s", err) } validateTopicResult(t, result2, expError) } func validateConfig(t *testing.T, results []ConfigResourceResult, expResults []ConfigResourceResult, checkConfigEntries bool) { _, file, line, _ := runtime.Caller(1) caller := fmt.Sprintf("%s:%d", path.Base(file), line) if len(results) != len(expResults) { t.Fatalf("%s: Expected %d results, got %d: %v", caller, len(expResults), len(results), results) } for i, result := range results { expResult := expResults[i] if result.Error.Code() != expResult.Error.Code() { t.Errorf("%s: %v: Expected %v, got %v", caller, result, expResult.Error.Code(), result.Error.Code()) continue } if !checkConfigEntries { continue } matchCnt := 0 for _, expEntry := range expResult.Config { entry, ok := result.Config[expEntry.Name] if !ok { t.Errorf("%s: %v: expected config %s not found in result", caller, result, expEntry.Name) continue } if entry.Value != expEntry.Value { t.Errorf("%s: %v: expected config %s to have value \"%s\", not \"%s\"", caller, result, expEntry.Name, expEntry.Value, entry.Value) continue } matchCnt++ } if matchCnt != len(expResult.Config) { t.Errorf("%s: %v: only %d/%d expected configs matched", caller, result, matchCnt, len(expResult.Config)) } } if t.Failed() { t.Fatalf("%s: ConfigResourceResult validation failed: see previous errors", caller) } } func TestAdminConfig(t *testing.T) { rand.Seed(time.Now().Unix()) a := createAdminClient(t) defer a.Close() // Steps: // 1) Create a topic, providing initial non-default configuration // 2) Read back config to verify // 3) Alter config // 4) Read back config to verify // 5) Delete the topic topic := fmt.Sprintf("%s-config-%d", testconf.Topic, 
rand.Intn(100000)) // Expected config expResources := []ConfigResourceResult{ { Type: ResourceTopic, Name: topic, Config: map[string]ConfigEntryResult{ "compression.type": ConfigEntryResult{ Name: "compression.type", Value: "snappy", }, }, }, } // Create topic newTopics := []TopicSpecification{{ Topic: topic, NumPartitions: 1, ReplicationFactor: 1, Config: map[string]string{"compression.type": "snappy"}, }} ctx, cancel := context.WithCancel(context.Background()) defer cancel() topicResult, err := a.CreateTopics(ctx, newTopics) if err != nil { t.Fatalf("Create topic request failed: %v", err) } if topicResult[0].Error.Code() != ErrNoError { t.Fatalf("Failed to create topic %s: %s", topic, topicResult[0].Error) } // Wait for topic to show up in metadata before performing // subsequent operations on it, otherwise we risk DescribeConfigs() // failing with UnknownTopic. (this is really a broker issue). // Sometimes even the metadata is not enough, so we add an // arbitrary 10s sleep too. t.Logf("Waiting for new topic %s to show up in metadata and stabilize", topic) err = waitTopicInMetadata(a, topic, 10*1000) // 10s if err != nil { t.Fatalf("%v", err) } t.Logf("Topic %s now in metadata, waiting another 10s for stabilization", topic) time.Sleep(10 * time.Second) // Read back config to validate configResources := []ConfigResource{{Type: ResourceTopic, Name: topic}} describeRes, err := a.DescribeConfigs(ctx, configResources) if err != nil { t.Fatalf("Describe configs request failed: %v", err) } validateConfig(t, describeRes, expResources, true) // Alter some configs. // Configuration alterations are currently atomic, all values // need to be passed, otherwise non-passed values will be reverted // to their default values. // Future versions will allow incremental updates: // https://cwiki.apache.org/confluence/display/KAFKA/KIP-339%3A+Create+a+new+IncrementalAlterConfigs+API newConfig := make(map[string]string) for _, entry := range describeRes[0].Config { newConfig[entry.Name] = entry.Value } // Change something newConfig["retention.ms"] = "86400000" newConfig["message.timestamp.type"] = "LogAppendTime" for k, v := range newConfig { expResources[0].Config[k] = ConfigEntryResult{Name: k, Value: v} } configResources = []ConfigResource{{Type: ResourceTopic, Name: topic, Config: StringMapToConfigEntries(newConfig, AlterOperationSet)}} alterRes, err := a.AlterConfigs(ctx, configResources) if err != nil { t.Fatalf("Alter configs request failed: %v", err) } validateConfig(t, alterRes, expResources, false) // Read back config to validate configResources = []ConfigResource{{Type: ResourceTopic, Name: topic}} describeRes, err = a.DescribeConfigs(ctx, configResources) if err != nil { t.Fatalf("Describe configs request failed: %v", err) } validateConfig(t, describeRes, expResources, true) // Delete the topic // FIXME: wait for topics to become available in metadata instead time.Sleep(5000 * time.Millisecond) topicResult, err = a.DeleteTopics(ctx, []string{topic}) if err != nil { t.Fatalf("DeleteTopics() failed: %s", err) } if topicResult[0].Error.Code() != ErrNoError { t.Fatalf("Failed to delete topic %s: %s", topic, topicResult[0].Error) } } // Test AdminClient GetMetadata API func TestAdminGetMetadata(t *testing.T) { if !testconfRead() { t.Skipf("Missing testconf.json") } config := &ConfigMap{"bootstrap.servers": testconf.Brokers} config.updateFromTestconf() // Create Admin client a, err := NewAdminClient(config) if err != nil { t.Errorf("Failed to create Admin client: %s\n", err) return } defer a.Close() 
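// AlterConfigs (as exercised in TestAdminConfig above) replaces a topic's
// configuration atomically, so the safe pattern is read-modify-write.
// A sketch, assuming `a` is a connected AdminClient and the topic name is
// hypothetical:
//
//	res, _ := a.DescribeConfigs(ctx,
//		[]ConfigResource{{Type: ResourceTopic, Name: "mytopic"}})
//	cfg := make(map[string]string)
//	for _, entry := range res[0].Config {
//		cfg[entry.Name] = entry.Value // start from all current values
//	}
//	cfg["retention.ms"] = "86400000" // then change only what's needed
//	_, _ = a.AlterConfigs(ctx, []ConfigResource{{
//		Type: ResourceTopic, Name: "mytopic",
//		Config: StringMapToConfigEntries(cfg, AlterOperationSet)}})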
metaData, err := a.GetMetadata(&testconf.Topic, false, 5*1000) if err != nil { t.Errorf("Failed to get meta data for topic %s. Error: %s\n", testconf.Topic, err) return } t.Logf("Meta data for topic %s: %v\n", testconf.Topic, metaData) metaData, err = a.GetMetadata(nil, true, 5*1000) if err != nil { t.Errorf("Failed to get meta data, Error: %s\n", err) return } t.Logf("Meta data for admin client: %v\n", metaData) } golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/kafka.go000066400000000000000000000212201336406275100251510ustar00rootroot00000000000000/** * Copyright 2016 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ // Package kafka provides high-level Apache Kafka producer and consumer // clients using bindings on top of the librdkafka C library. // // // High-level Consumer // // * Decide if you want to read messages and events from the `.Events()` channel // (set `"go.events.channel.enable": true`) or by calling `.Poll()`. // // * Create a Consumer with `kafka.NewConsumer()` providing at // least the `bootstrap.servers` and `group.id` configuration properties. // // * Call `.Subscribe()` (or `.SubscribeTopics()` to subscribe to multiple topics) // to join the group with the specified subscription set. // Subscriptions are atomic; calling `.Subscribe*()` again will leave // the group and rejoin with the new set of topics. // // * Start reading events and messages from either the `.Events` channel // or by calling `.Poll()`. // // * When the group has rebalanced each client member is assigned a // (sub-)set of topic+partitions. // By default the consumer will start fetching messages for its assigned // partitions at this point, but your application may enable rebalance // events to get an insight into what the assigned partitions were // as well as set the initial offsets. To do this you need to pass // `"go.application.rebalance.enable": true` to the `NewConsumer()` call // mentioned above. You will (eventually) see a `kafka.AssignedPartitions` event // with the assigned partition set. You can optionally modify the initial // offsets (they'll default to stored offsets and if there are no previously stored // offsets it will fall back to `"default.topic.config": ConfigMap{"auto.offset.reset": ..}` // which defaults to the `latest` message) and then call `.Assign(partitions)` // to start consuming. If you don't need to modify the initial offsets you will // not need to call `.Assign()`, the client will do so automatically for you if // you don't. // // * As messages are fetched they will be made available on either the // `.Events` channel or by calling `.Poll()`, look for event type `*kafka.Message`. // // * Handle messages, events and errors to your liking. // // * When you are done consuming call `.Close()` to commit final offsets // and leave the consumer group. // // // // Producer // // * Create a Producer with `kafka.NewProducer()` providing at least // the `bootstrap.servers` configuration property. 
// // * Messages may now be produced either by sending a `*kafka.Message` // on the `.ProduceChannel` or by calling `.Produce()`. // // * Producing is an asynchronous operation so the client notifies the application // of per-message produce success or failure through delivery reports. // Delivery reports are by default emitted on the `.Events()` channel as `*kafka.Message` // and you should check `msg.TopicPartition.Error` for `nil` to find out if the message // was successfully delivered or not. // It is also possible to direct delivery reports to alternate channels // by providing a non-nil `chan Event` channel to `.Produce()`. // If no delivery reports are wanted they can be completely disabled by // setting configuration property `"go.delivery.reports": false`. // // * When you are done producing messages you will need to make sure all messages // are indeed delivered to the broker (or failed), remember that this is // an asynchronous client so some of your messages may be lingering in internal // channels or transmission queues. // To do this you can either keep track of the messages you've produced // and wait for their corresponding delivery reports, or call the convenience // function `.Flush()` that will block until all message deliveries are done // or the provided timeout elapses. // // * Finally call `.Close()` to decommission the producer. // // // Events // // Apart from emitting messages and delivery reports the client also communicates // with the application through a number of different event types. // An application may choose to handle or ignore these events. // // Consumer events // // * `*kafka.Message` - a fetched message. // // * `AssignedPartitions` - The assigned partition set for this client following a rebalance. // Requires `go.application.rebalance.enable` // // * `RevokedPartitions` - The counterpart to `AssignedPartitions` following a rebalance. // `AssignedPartitions` and `RevokedPartitions` are symmetrical. // Requires `go.application.rebalance.enable` // // * `PartitionEOF` - Consumer has reached the end of a partition. // NOTE: The consumer will keep trying to fetch new messages for the partition. // // * `OffsetsCommitted` - Offset commit results (when `enable.auto.commit` is enabled). // // // Producer events // // * `*kafka.Message` - delivery report for produced message. // Check `.TopicPartition.Error` for delivery result. // // // Generic events for both Consumer and Producer // // * `KafkaError` - client (error codes are prefixed with _) or broker error. // These errors are normally just informational since the // client will try its best to automatically recover (eventually). // // // Hint: If your application registers a signal notification // (signal.Notify), make sure the signals channel is buffered to avoid // possible complications with blocking Poll() calls. // // Note: The Confluent Kafka Go client is safe for concurrent use. package kafka import ( "fmt" "unsafe" ) /* #include <stdlib.h> #include <string.h> #include <librdkafka/rdkafka.h> static rd_kafka_topic_partition_t *_c_rdkafka_topic_partition_list_entry(rd_kafka_topic_partition_list_t *rktparlist, int idx) { return idx < rktparlist->cnt ? &rktparlist->elems[idx] : NULL; } */ import "C" // PartitionAny represents any partition (for partitioning), // or unspecified value (for all other cases) const PartitionAny = int32(C.RD_KAFKA_PARTITION_UA) // TopicPartition is a generic placeholder for a Topic+Partition and optionally Offset. 
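// A minimal end-to-end sketch condensing the producer walkthrough above;
// the broker address and topic name are hypothetical placeholders and
// error handling is trimmed to the essentials:
//
//	p, err := NewProducer(&ConfigMap{"bootstrap.servers": "localhost:9092"})
//	if err != nil {
//		panic(err)
//	}
//	topic := "mytopic"
//	_ = p.Produce(&Message{
//		TopicPartition: TopicPartition{Topic: &topic, Partition: PartitionAny},
//		Value:          []byte("hello")}, nil)
//	e := <-p.Events() // delivery report arrives as *Message
//	if m, ok := e.(*Message); ok && m.TopicPartition.Error != nil {
//		// handle failed delivery
//	}
//	p.Flush(15 * 1000) // wait up to 15s for outstanding deliveries
//	p.Close()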
type TopicPartition struct { Topic *string Partition int32 Offset Offset Error error } func (p TopicPartition) String() string { topic := "" if p.Topic != nil { topic = *p.Topic } if p.Error != nil { return fmt.Sprintf("%s[%d]@%s(%s)", topic, p.Partition, p.Offset, p.Error) } return fmt.Sprintf("%s[%d]@%s", topic, p.Partition, p.Offset) } // TopicPartitions is a slice of TopicPartitions that also implements // the sort interface type TopicPartitions []TopicPartition func (tps TopicPartitions) Len() int { return len(tps) } func (tps TopicPartitions) Less(i, j int) bool { if *tps[i].Topic < *tps[j].Topic { return true } else if *tps[i].Topic > *tps[j].Topic { return false } return tps[i].Partition < tps[j].Partition } func (tps TopicPartitions) Swap(i, j int) { tps[i], tps[j] = tps[j], tps[i] } // new_cparts_from_TopicPartitions creates a new C rd_kafka_topic_partition_list_t // from a TopicPartition array. func newCPartsFromTopicPartitions(partitions []TopicPartition) (cparts *C.rd_kafka_topic_partition_list_t) { cparts = C.rd_kafka_topic_partition_list_new(C.int(len(partitions))) for _, part := range partitions { ctopic := C.CString(*part.Topic) defer C.free(unsafe.Pointer(ctopic)) rktpar := C.rd_kafka_topic_partition_list_add(cparts, ctopic, C.int32_t(part.Partition)) rktpar.offset = C.int64_t(part.Offset) } return cparts } func setupTopicPartitionFromCrktpar(partition *TopicPartition, crktpar *C.rd_kafka_topic_partition_t) { topic := C.GoString(crktpar.topic) partition.Topic = &topic partition.Partition = int32(crktpar.partition) partition.Offset = Offset(crktpar.offset) if crktpar.err != C.RD_KAFKA_RESP_ERR_NO_ERROR { partition.Error = newError(crktpar.err) } } func newTopicPartitionsFromCparts(cparts *C.rd_kafka_topic_partition_list_t) (partitions []TopicPartition) { partcnt := int(cparts.cnt) partitions = make([]TopicPartition, partcnt) for i := 0; i < partcnt; i++ { crktpar := C._c_rdkafka_topic_partition_list_entry(cparts, C.int(i)) setupTopicPartitionFromCrktpar(&partitions[i], crktpar) } return partitions } // LibraryVersion returns the underlying librdkafka library version as a // (version_int, version_str) tuple. func LibraryVersion() (int, string) { ver := (int)(C.rd_kafka_version()) verstr := C.GoString(C.rd_kafka_version_str()) return ver, verstr } golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/kafka_test.go000066400000000000000000000076671336406275100262330ustar00rootroot00000000000000/** * Copyright 2016 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package kafka import ( "testing" ) //Test LibraryVersion() func TestLibraryVersion(t *testing.T) { ver, verstr := LibraryVersion() if ver >= 0x00090200 { t.Logf("Library version %d: %s\n", ver, verstr) } else { t.Errorf("Unexpected Library version %d: %s\n", ver, verstr) } } //Test Offset APIs func TestOffsetAPIs(t *testing.T) { offsets := []Offset{OffsetBeginning, OffsetEnd, OffsetInvalid, OffsetStored, 1001} for _, offset := range offsets { t.Logf("Offset: %s\n", offset.String()) } // test known offset strings testOffsets := map[string]Offset{"beginning": OffsetBeginning, "earliest": OffsetBeginning, "end": OffsetEnd, "latest": OffsetEnd, "unset": OffsetInvalid, "invalid": OffsetInvalid, "stored": OffsetStored} for key, expectedOffset := range testOffsets { offset, err := NewOffset(key) if err != nil { t.Errorf("Cannot create offset for %s, error: %s\n", key, err) } else { if offset != expectedOffset { t.Errorf("Offset does not equal expected: %s != %s\n", offset, expectedOffset) } } } // test numeric string conversion offset, err := NewOffset("10") if err != nil { t.Errorf("Cannot create offset for 10, error: %s\n", err) } else { if offset != Offset(10) { t.Errorf("Offset does not equal expected: %s != %s\n", offset, Offset(10)) } } // test integer offset var intOffset = 10 offset, err = NewOffset(intOffset) if err != nil { t.Errorf("Cannot create offset for int 10, Error: %s\n", err) } else { if offset != Offset(10) { t.Errorf("Offset does not equal expected: %s != %s\n", offset, Offset(10)) } } // test int64 offset var int64Offset int64 = 10 offset, err = NewOffset(int64Offset) if err != nil { t.Errorf("Cannot create offset for int64 10, Error: %s \n", err) } else { if offset != Offset(10) { t.Errorf("Offset does not equal expected: %s != %s\n", offset, Offset(10)) } } // test invalid string offset invalidOffsetString := "what is this offset" offset, err = NewOffset(invalidOffsetString) if err == nil { t.Errorf("Expected error for this string offset. Error: %s\n", err) } else if offset != Offset(0) { t.Errorf("Expected offset (%v), got (%v)\n", Offset(0), offset) } t.Logf("Offset for string (%s): %v\n", invalidOffsetString, offset) // test double offset doubleOffset := 12.15 offset, err = NewOffset(doubleOffset) if err == nil { t.Errorf("Expected error for this double offset: %f. Error: %s\n", doubleOffset, err) } else if offset != OffsetInvalid { t.Errorf("Expected offset (%v), got (%v)\n", OffsetInvalid, offset) } t.Logf("Offset for double (%f): %v\n", doubleOffset, offset) // test change offset via Set() offset, err = NewOffset("beginning") if err != nil { t.Errorf("Cannot create offset for 'beginning'. Error: %s\n", err) } // test change to a logical offset err = offset.Set("latest") if err != nil { t.Errorf("Cannot set offset to 'latest'. Error: %s \n", err) } else if offset != OffsetEnd { t.Errorf("Failed to change offset. Expect (%v), got (%v)\n", OffsetEnd, offset) } // test change to an integer offset err = offset.Set(int(10)) if err != nil { t.Errorf("Cannot set offset to (%v). Error: %s \n", 10, err) } else if offset != 10 { t.Errorf("Failed to change offset. Expect (%v), got (%v)\n", 10, offset) } // test OffsetTail() tail := OffsetTail(offset) t.Logf("offset tail %v\n", tail) } golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/message.go000066400000000000000000000133241336406275100255260ustar00rootroot00000000000000package kafka /** * Copyright 2016 Confluent Inc. 
* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import ( "fmt" "time" "unsafe" ) /* #include <string.h> #include <stdlib.h> #include <librdkafka/rdkafka.h> #include "glue_rdkafka.h" void setup_rkmessage (rd_kafka_message_t *rkmessage, rd_kafka_topic_t *rkt, int32_t partition, const void *payload, size_t len, void *key, size_t keyLen, void *opaque) { rkmessage->rkt = rkt; rkmessage->partition = partition; rkmessage->payload = (void *)payload; rkmessage->len = len; rkmessage->key = (void *)key; rkmessage->key_len = keyLen; rkmessage->_private = opaque; } */ import "C" // TimestampType is the Message timestamp type or source // type TimestampType int const ( // TimestampNotAvailable indicates no timestamp was set, or not available due to lacking broker support TimestampNotAvailable = TimestampType(C.RD_KAFKA_TIMESTAMP_NOT_AVAILABLE) // TimestampCreateTime indicates timestamp set by producer (source time) TimestampCreateTime = TimestampType(C.RD_KAFKA_TIMESTAMP_CREATE_TIME) // TimestampLogAppendTime indicates timestamp set by broker (store time) TimestampLogAppendTime = TimestampType(C.RD_KAFKA_TIMESTAMP_LOG_APPEND_TIME) ) func (t TimestampType) String() string { switch t { case TimestampCreateTime: return "CreateTime" case TimestampLogAppendTime: return "LogAppendTime" case TimestampNotAvailable: fallthrough default: return "NotAvailable" } } // Message represents a Kafka message type Message struct { TopicPartition TopicPartition Value []byte Key []byte Timestamp time.Time TimestampType TimestampType Opaque interface{} Headers []Header } // String returns a human readable representation of a Message. // Key and payload are not represented. 
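// A sketch of a fully populated Message as passed to Produce(); the topic
// name is a hypothetical placeholder (Timestamp requires librdkafka
// >=0.9.4, Headers require >=0.11.4):
//
//	topic := "mytopic"
//	msg := &Message{
//		TopicPartition: TopicPartition{Topic: &topic, Partition: PartitionAny},
//		Key:            []byte("key"),
//		Value:          []byte("value"),
//		Timestamp:      time.Now(),
//		Headers:        []Header{{Key: "trace-id", Value: []byte("abc123")}},
//	}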
func (m *Message) String() string { var topic string if m.TopicPartition.Topic != nil { topic = *m.TopicPartition.Topic } else { topic = "" } return fmt.Sprintf("%s[%d]@%s", topic, m.TopicPartition.Partition, m.TopicPartition.Offset) } func (h *handle) getRktFromMessage(msg *Message) (crkt *C.rd_kafka_topic_t) { if msg.TopicPartition.Topic == nil { return nil } return h.getRkt(*msg.TopicPartition.Topic) } func (h *handle) newMessageFromFcMsg(fcMsg *C.fetched_c_msg_t) (msg *Message) { msg = &Message{} if fcMsg.ts != -1 { ts := int64(fcMsg.ts) msg.TimestampType = TimestampType(fcMsg.tstype) msg.Timestamp = time.Unix(ts/1000, (ts%1000)*1000000) } if fcMsg.tmphdrsCnt > 0 { msg.Headers = make([]Header, fcMsg.tmphdrsCnt) for n := range msg.Headers { tmphdr := (*[1 << 30]C.tmphdr_t)(unsafe.Pointer(fcMsg.tmphdrs))[n] msg.Headers[n].Key = C.GoString(tmphdr.key) if tmphdr.val != nil { msg.Headers[n].Value = C.GoBytes(unsafe.Pointer(tmphdr.val), C.int(tmphdr.size)) } else { msg.Headers[n].Value = nil } } C.free(unsafe.Pointer(fcMsg.tmphdrs)) } h.setupMessageFromC(msg, fcMsg.msg) return msg } // setupMessageFromC sets up a message object from a C rd_kafka_message_t func (h *handle) setupMessageFromC(msg *Message, cmsg *C.rd_kafka_message_t) { if cmsg.rkt != nil { topic := h.getTopicNameFromRkt(cmsg.rkt) msg.TopicPartition.Topic = &topic } msg.TopicPartition.Partition = int32(cmsg.partition) if cmsg.payload != nil { msg.Value = C.GoBytes(unsafe.Pointer(cmsg.payload), C.int(cmsg.len)) } if cmsg.key != nil { msg.Key = C.GoBytes(unsafe.Pointer(cmsg.key), C.int(cmsg.key_len)) } msg.TopicPartition.Offset = Offset(cmsg.offset) if cmsg.err != 0 { msg.TopicPartition.Error = newError(cmsg.err) } } // newMessageFromC creates a new message object from a C rd_kafka_message_t // NOTE: For use with Producer: does not set message timestamp fields. func (h *handle) newMessageFromC(cmsg *C.rd_kafka_message_t) (msg *Message) { msg = &Message{} h.setupMessageFromC(msg, cmsg) return msg } // messageToC sets up cmsg as a clone of msg func (h *handle) messageToC(msg *Message, cmsg *C.rd_kafka_message_t) { var valp unsafe.Pointer var keyp unsafe.Pointer // to circumvent Cgo constraints we need to allocate C heap memory // for both Value and Key (one allocation back to back) // and copy the bytes from Value and Key to the C memory. // We later tell librdkafka (in produce()) to free the // C memory pointer when it is done. var payload unsafe.Pointer valueLen := 0 keyLen := 0 if msg.Value != nil { valueLen = len(msg.Value) } if msg.Key != nil { keyLen = len(msg.Key) } allocLen := valueLen + keyLen if allocLen > 0 { payload = C.malloc(C.size_t(allocLen)) if valueLen > 0 { copy((*[1 << 30]byte)(payload)[0:valueLen], msg.Value) valp = payload } if keyLen > 0 { copy((*[1 << 30]byte)(payload)[valueLen:allocLen], msg.Key) keyp = unsafe.Pointer(&((*[1 << 31]byte)(payload)[valueLen])) } } cmsg.rkt = h.getRktFromMessage(msg) cmsg.partition = C.int32_t(msg.TopicPartition.Partition) cmsg.payload = valp cmsg.len = C.size_t(valueLen) cmsg.key = keyp cmsg.key_len = C.size_t(keyLen) cmsg._private = nil } // used for testing messageToC performance func (h *handle) messageToCDummy(msg *Message) { var cmsg C.rd_kafka_message_t h.messageToC(msg, &cmsg) } golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/message_test.go000066400000000000000000000017661336406275100265740ustar00rootroot00000000000000/** * Copyright 2016 Confluent Inc. 
* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package kafka import ( "testing" ) //Test TimestampType func TestTimestampType(t *testing.T) { timestampMap := map[TimestampType]string{TimestampCreateTime: "CreateTime", TimestampLogAppendTime: "LogAppendTime", TimestampNotAvailable: "NotAvailable"} for ts, desc := range timestampMap { if ts.String() != desc { t.Errorf("Wrong timestamp description for %s, expected %s\n", desc, ts.String()) } } } golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/metadata.go000066400000000000000000000107311336406275100256610ustar00rootroot00000000000000/** * Copyright 2016 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package kafka import ( "unsafe" ) /* #include #include struct rd_kafka_metadata_broker *_getMetadata_broker_element(struct rd_kafka_metadata *m, int i) { return &m->brokers[i]; } struct rd_kafka_metadata_topic *_getMetadata_topic_element(struct rd_kafka_metadata *m, int i) { return &m->topics[i]; } struct rd_kafka_metadata_partition *_getMetadata_partition_element(struct rd_kafka_metadata *m, int topic_idx, int partition_idx) { return &m->topics[topic_idx].partitions[partition_idx]; } int32_t _get_int32_element (int32_t *arr, int i) { return arr[i]; } */ import "C" // BrokerMetadata contains per-broker metadata type BrokerMetadata struct { ID int32 Host string Port int } // PartitionMetadata contains per-partition metadata type PartitionMetadata struct { ID int32 Error Error Leader int32 Replicas []int32 Isrs []int32 } // TopicMetadata contains per-topic metadata type TopicMetadata struct { Topic string Partitions []PartitionMetadata Error Error } // Metadata contains broker and topic metadata for all (matching) topics type Metadata struct { Brokers []BrokerMetadata Topics map[string]TopicMetadata OriginatingBroker BrokerMetadata } // getMetadata queries broker for cluster and topic metadata. // If topic is non-nil only information about that topic is returned, else if // allTopics is false only information about locally used topics is returned, // else information about all topics is returned. 
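// The lookup modes described above are reachable from the application
// through the GetMetadata wrappers on Producer, Consumer and AdminClient.
// A sketch, assuming `p` is a connected Producer and the topic name is
// hypothetical:
//
//	topic := "mytopic"
//	md, _ := p.GetMetadata(&topic, false, 5000) // this topic only
//	md, _ = p.GetMetadata(nil, false, 5000)     // locally used topics
//	md, _ = p.GetMetadata(nil, true, 5000)      // all topics in the cluster
//	fmt.Printf("%d brokers, %d topics\n", len(md.Brokers), len(md.Topics))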
func getMetadata(H Handle, topic *string, allTopics bool, timeoutMs int) (*Metadata, error) { h := H.gethandle() var rkt *C.rd_kafka_topic_t if topic != nil { rkt = h.getRkt(*topic) } var cMd *C.struct_rd_kafka_metadata cErr := C.rd_kafka_metadata(h.rk, bool2cint(allTopics), rkt, &cMd, C.int(timeoutMs)) if cErr != C.RD_KAFKA_RESP_ERR_NO_ERROR { return nil, newError(cErr) } m := Metadata{} defer C.rd_kafka_metadata_destroy(cMd) m.Brokers = make([]BrokerMetadata, cMd.broker_cnt) for i := 0; i < int(cMd.broker_cnt); i++ { b := C._getMetadata_broker_element(cMd, C.int(i)) m.Brokers[i] = BrokerMetadata{int32(b.id), C.GoString(b.host), int(b.port)} } m.Topics = make(map[string]TopicMetadata, int(cMd.topic_cnt)) for i := 0; i < int(cMd.topic_cnt); i++ { t := C._getMetadata_topic_element(cMd, C.int(i)) thisTopic := C.GoString(t.topic) m.Topics[thisTopic] = TopicMetadata{Topic: thisTopic, Error: newError(t.err), Partitions: make([]PartitionMetadata, int(t.partition_cnt))} for j := 0; j < int(t.partition_cnt); j++ { p := C._getMetadata_partition_element(cMd, C.int(i), C.int(j)) m.Topics[thisTopic].Partitions[j] = PartitionMetadata{ ID: int32(p.id), Error: newError(p.err), Leader: int32(p.leader)} m.Topics[thisTopic].Partitions[j].Replicas = make([]int32, int(p.replica_cnt)) for ir := 0; ir < int(p.replica_cnt); ir++ { m.Topics[thisTopic].Partitions[j].Replicas[ir] = int32(C._get_int32_element(p.replicas, C.int(ir))) } m.Topics[thisTopic].Partitions[j].Isrs = make([]int32, int(p.isr_cnt)) for ii := 0; ii < int(p.isr_cnt); ii++ { m.Topics[thisTopic].Partitions[j].Isrs[ii] = int32(C._get_int32_element(p.isrs, C.int(ii))) } } } m.OriginatingBroker = BrokerMetadata{int32(cMd.orig_broker_id), C.GoString(cMd.orig_broker_name), 0} return &m, nil } // queryWatermarkOffsets returns the broker's low and high offsets for the given topic // and partition. func queryWatermarkOffsets(H Handle, topic string, partition int32, timeoutMs int) (low, high int64, err error) { h := H.gethandle() ctopic := C.CString(topic) defer C.free(unsafe.Pointer(ctopic)) var cLow, cHigh C.int64_t e := C.rd_kafka_query_watermark_offsets(h.rk, ctopic, C.int32_t(partition), &cLow, &cHigh, C.int(timeoutMs)) if e != C.RD_KAFKA_RESP_ERR_NO_ERROR { return 0, 0, newError(e) } low = int64(cLow) high = int64(cHigh) return low, high, nil } golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/metadata_test.go000066400000000000000000000027371336406275100267270ustar00rootroot00000000000000/** * Copyright 2016 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package kafka import ( "testing" ) // TestMetadataAPIs dry-tests the Metadata APIs, no broker is needed. 
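// A sketch of the public wrapper around queryWatermarkOffsets above,
// assuming `p` is a connected Producer; topic and partition are
// hypothetical:
//
//	low, high, err := p.QueryWatermarkOffsets("mytopic", 0, 5000)
//	if err == nil {
//		backlog := high - low // messages currently retained in partition 0
//		_ = backlog
//	}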
func TestMetadataAPIs(t *testing.T) { p, err := NewProducer(&ConfigMap{"socket.timeout.ms": 10}) if err != nil { t.Fatalf("%s", err) } metadata, err := p.GetMetadata(nil, true, 10) if err == nil { t.Errorf("Expected GetMetadata to fail") } topic := "gotest" metadata, err = p.GetMetadata(&topic, false, 10) if err == nil { t.Errorf("Expected GetMetadata to fail") } metadata, err = p.GetMetadata(nil, false, 10) if err == nil { t.Errorf("Expected GetMetadata to fail") } p.Close() c, err := NewConsumer(&ConfigMap{"group.id": "gotest"}) if err != nil { t.Fatalf("%s", err) } metadata, err = c.GetMetadata(nil, true, 10) if err == nil { t.Errorf("Expected GetMetadata to fail") } if metadata != nil { t.Errorf("Return value should be nil") } c.Close() } golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/misc.go000066400000000000000000000015261336406275100250360ustar00rootroot00000000000000/** * Copyright 2016 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package kafka import "C" // bool2int converts a bool to a C.int (1 or 0) func bool2cint(b bool) C.int { if b { return 1 } return 0 } // cint2bool converts a C.int to a bool func cint2bool(v C.int) bool { if v == 0 { return false } return true } golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/offset.go000066400000000000000000000074151336406275100253740ustar00rootroot00000000000000/** * Copyright 2017 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package kafka import ( "fmt" "strconv" ) /* #include #include static int64_t _c_rdkafka_offset_tail(int64_t rel) { return RD_KAFKA_OFFSET_TAIL(rel); } */ import "C" // Offset type (int64) with support for canonical names type Offset int64 // OffsetBeginning represents the earliest offset (logical) const OffsetBeginning = Offset(C.RD_KAFKA_OFFSET_BEGINNING) // OffsetEnd represents the latest offset (logical) const OffsetEnd = Offset(C.RD_KAFKA_OFFSET_END) // OffsetInvalid represents an invalid/unspecified offset const OffsetInvalid = Offset(C.RD_KAFKA_OFFSET_INVALID) // OffsetStored represents a stored offset const OffsetStored = Offset(C.RD_KAFKA_OFFSET_STORED) func (o Offset) String() string { switch o { case OffsetBeginning: return "beginning" case OffsetEnd: return "end" case OffsetInvalid: return "unset" case OffsetStored: return "stored" default: return fmt.Sprintf("%d", int64(o)) } } // Set offset value, see NewOffset() func (o *Offset) Set(offset interface{}) error { n, err := NewOffset(offset) if err == nil { *o = n } return err } // NewOffset creates a new Offset using the provided logical string, or an // absolute int64 offset value. // Logical offsets: "beginning", "earliest", "end", "latest", "unset", "invalid", "stored" func NewOffset(offset interface{}) (Offset, error) { switch v := offset.(type) { case string: switch v { case "beginning": fallthrough case "earliest": return Offset(OffsetBeginning), nil case "end": fallthrough case "latest": return Offset(OffsetEnd), nil case "unset": fallthrough case "invalid": return Offset(OffsetInvalid), nil case "stored": return Offset(OffsetStored), nil default: off, err := strconv.Atoi(v) return Offset(off), err } case int: return Offset((int64)(v)), nil case int64: return Offset(v), nil default: return OffsetInvalid, newErrorFromString(ErrInvalidArg, fmt.Sprintf("Invalid offset type: %t", v)) } } // OffsetTail returns the logical offset relativeOffset from current end of partition func OffsetTail(relativeOffset Offset) Offset { return Offset(C._c_rdkafka_offset_tail(C.int64_t(relativeOffset))) } // offsetsForTimes looks up offsets by timestamp for the given partitions. // // The returned offset for each partition is the earliest offset whose // timestamp is greater than or equal to the given timestamp in the // corresponding partition. // // The timestamps to query are represented as `.Offset` in the `times` // argument and the looked up offsets are represented as `.Offset` in the returned // `offsets` list. // // The function will block for at most timeoutMs milliseconds. // // Duplicate Topic+Partitions are not supported. // Per-partition errors may be returned in the `.Error` field. func offsetsForTimes(H Handle, times []TopicPartition, timeoutMs int) (offsets []TopicPartition, err error) { cparts := newCPartsFromTopicPartitions(times) defer C.rd_kafka_topic_partition_list_destroy(cparts) cerr := C.rd_kafka_offsets_for_times(H.gethandle().rk, cparts, C.int(timeoutMs)) if cerr != C.RD_KAFKA_RESP_ERR_NO_ERROR { return nil, newError(cerr) } return newTopicPartitionsFromCparts(cparts), nil } golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/producer.go000066400000000000000000000412541336406275100257300ustar00rootroot00000000000000/** * Copyright 2016 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package kafka import ( "fmt" "math" "time" "unsafe" ) /* #include #include #include "glue_rdkafka.h" #ifdef RD_KAFKA_V_HEADERS // Convert tmphdrs to chdrs (created by this function). // If tmphdr.size == -1: value is considered Null // tmphdr.size == 0: value is considered empty (ignored) // tmphdr.size > 0: value is considered non-empty // // WARNING: The header keys and values will be freed by this function. void tmphdrs_to_chdrs (tmphdr_t *tmphdrs, size_t tmphdrsCnt, rd_kafka_headers_t **chdrs) { size_t i; *chdrs = rd_kafka_headers_new(tmphdrsCnt); for (i = 0 ; i < tmphdrsCnt ; i++) { rd_kafka_header_add(*chdrs, tmphdrs[i].key, -1, tmphdrs[i].size == -1 ? NULL : (tmphdrs[i].size == 0 ? "" : tmphdrs[i].val), tmphdrs[i].size == -1 ? 0 : tmphdrs[i].size); if (tmphdrs[i].size > 0) free((void *)tmphdrs[i].val); free((void *)tmphdrs[i].key); } } #else void free_tmphdrs (tmphdr_t *tmphdrs, size_t tmphdrsCnt) { size_t i; for (i = 0 ; i < tmphdrsCnt ; i++) { if (tmphdrs[i].size > 0) free((void *)tmphdrs[i].val); free((void *)tmphdrs[i].key); } } #endif rd_kafka_resp_err_t do_produce (rd_kafka_t *rk, rd_kafka_topic_t *rkt, int32_t partition, int msgflags, int valIsNull, void *val, size_t val_len, int keyIsNull, void *key, size_t key_len, int64_t timestamp, tmphdr_t *tmphdrs, size_t tmphdrsCnt, uintptr_t cgoid) { void *valp = valIsNull ? NULL : val; void *keyp = keyIsNull ? 
NULL : key; #ifdef RD_KAFKA_V_TIMESTAMP rd_kafka_resp_err_t err; #ifdef RD_KAFKA_V_HEADERS rd_kafka_headers_t *hdrs = NULL; #endif #endif if (tmphdrsCnt > 0) { #ifdef RD_KAFKA_V_HEADERS tmphdrs_to_chdrs(tmphdrs, tmphdrsCnt, &hdrs); #else free_tmphdrs(tmphdrs, tmphdrsCnt); return RD_KAFKA_RESP_ERR__NOT_IMPLEMENTED; #endif } #ifdef RD_KAFKA_V_TIMESTAMP err = rd_kafka_producev(rk, RD_KAFKA_V_RKT(rkt), RD_KAFKA_V_PARTITION(partition), RD_KAFKA_V_MSGFLAGS(msgflags), RD_KAFKA_V_VALUE(valp, val_len), RD_KAFKA_V_KEY(keyp, key_len), RD_KAFKA_V_TIMESTAMP(timestamp), #ifdef RD_KAFKA_V_HEADERS RD_KAFKA_V_HEADERS(hdrs), #endif RD_KAFKA_V_OPAQUE((void *)cgoid), RD_KAFKA_V_END); #ifdef RD_KAFKA_V_HEADERS if (err && hdrs) rd_kafka_headers_destroy(hdrs); #endif return err; #else if (timestamp) return RD_KAFKA_RESP_ERR__NOT_IMPLEMENTED; if (rd_kafka_produce(rkt, partition, msgflags, valp, val_len, keyp, key_len, (void *)cgoid) == -1) return rd_kafka_last_error(); else return RD_KAFKA_RESP_ERR_NO_ERROR; #endif } */ import "C" // Producer implements a High-level Apache Kafka Producer instance type Producer struct { events chan Event produceChannel chan *Message handle handle // Terminates the poller() goroutine pollerTermChan chan bool } // String returns a human readable name for a Producer instance func (p *Producer) String() string { return p.handle.String() } // get_handle implements the Handle interface func (p *Producer) gethandle() *handle { return &p.handle } func (p *Producer) produce(msg *Message, msgFlags int, deliveryChan chan Event) error { if msg == nil || msg.TopicPartition.Topic == nil || len(*msg.TopicPartition.Topic) == 0 { return newErrorFromString(ErrInvalidArg, "") } crkt := p.handle.getRkt(*msg.TopicPartition.Topic) // Three problems: // 1) There's a difference between an empty Value or Key (length 0, proper pointer) and // a null Value or Key (length 0, null pointer). // 2) we need to be able to send a null Value or Key, but the unsafe.Pointer(&slice[0]) // dereference can't be performed on a nil slice. // 3) cgo's pointer checking requires the unsafe.Pointer(slice..) call to be made // in the call to the C function. // // Solution: // Keep track of whether the Value or Key were nil (1), but let the valp and keyp pointers // point to a 1-byte slice (but the length to send is still 0) so that the dereference (2) // works. // Then perform the unsafe.Pointer() on the valp and keyp pointers (which now either point // to the original msg.Value and msg.Key or to the 1-byte slices) in the call to C (3). // var valp []byte var keyp []byte oneByte := []byte{0} var valIsNull C.int var keyIsNull C.int var valLen int var keyLen int if msg.Value == nil { valIsNull = 1 valLen = 0 valp = oneByte } else { valLen = len(msg.Value) if valLen > 0 { valp = msg.Value } else { valp = oneByte } } if msg.Key == nil { keyIsNull = 1 keyLen = 0 keyp = oneByte } else { keyLen = len(msg.Key) if keyLen > 0 { keyp = msg.Key } else { keyp = oneByte } } var cgoid int // Per-message state that needs to be retained through the C code: // delivery channel (if specified) // message opaque (if specified) // Since these cant be passed as opaque pointers to the C code, // due to cgo constraints, we add them to a per-producer map for lookup // when the C code triggers the callbacks or events. 
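// From the application side, that per-message state is simply the optional
// delivery channel and the Opaque value; a sketch (topic hypothetical,
// error handling elided):
//
//	drChan := make(chan Event, 1)
//	topic := "mytopic"
//	_ = p.Produce(&Message{
//		TopicPartition: TopicPartition{Topic: &topic, Partition: PartitionAny},
//		Value:          []byte("v"),
//		Opaque:         "request-42", // returned untouched in the report
//	}, drChan)
//	rep := (<-drChan).(*Message) // rep.Opaque.(string) == "request-42"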
if deliveryChan != nil || msg.Opaque != nil { cgoid = p.handle.cgoPut(cgoDr{deliveryChan: deliveryChan, opaque: msg.Opaque}) } var timestamp int64 if !msg.Timestamp.IsZero() { timestamp = msg.Timestamp.UnixNano() / 1000000 } // Convert headers to C-friendly tmphdrs var tmphdrs []C.tmphdr_t tmphdrsCnt := len(msg.Headers) if tmphdrsCnt > 0 { tmphdrs = make([]C.tmphdr_t, tmphdrsCnt) for n, hdr := range msg.Headers { // Make a copy of the key // to avoid runtime panic with // foreign Go pointers in cgo. tmphdrs[n].key = C.CString(hdr.Key) if hdr.Value != nil { tmphdrs[n].size = C.ssize_t(len(hdr.Value)) if tmphdrs[n].size > 0 { // Make a copy of the value // to avoid runtime panic with // foreign Go pointers in cgo. tmphdrs[n].val = C.CBytes(hdr.Value) } } else { // null value tmphdrs[n].size = C.ssize_t(-1) } } } else { // no headers, need a dummy tmphdrs of size 1 to avoid index // out of bounds panic in do_produce() call below. // tmphdrsCnt will be 0. tmphdrs = []C.tmphdr_t{{nil, nil, 0}} } cErr := C.do_produce(p.handle.rk, crkt, C.int32_t(msg.TopicPartition.Partition), C.int(msgFlags)|C.RD_KAFKA_MSG_F_COPY, valIsNull, unsafe.Pointer(&valp[0]), C.size_t(valLen), keyIsNull, unsafe.Pointer(&keyp[0]), C.size_t(keyLen), C.int64_t(timestamp), (*C.tmphdr_t)(unsafe.Pointer(&tmphdrs[0])), C.size_t(tmphdrsCnt), (C.uintptr_t)(cgoid)) if cErr != C.RD_KAFKA_RESP_ERR_NO_ERROR { if cgoid != 0 { p.handle.cgoGet(cgoid) } return newError(cErr) } return nil } // Produce single message. // This is an asynchronous call that enqueues the message on the internal // transmit queue, thus returning immediately. // The delivery report will be sent on the provided deliveryChan if specified, // or on the Producer object's Events() channel if not. // msg.Timestamp requires librdkafka >= 0.9.4 (else returns ErrNotImplemented), // api.version.request=true, and broker >= 0.10.0.0. // msg.Headers requires librdkafka >= 0.11.4 (else returns ErrNotImplemented), // api.version.request=true, and broker >= 0.11.0.0. // Returns an error if message could not be enqueued. func (p *Producer) Produce(msg *Message, deliveryChan chan Event) error { return p.produce(msg, 0, deliveryChan) } // Produce a batch of messages. // These batches do not relate to the message batches sent to the broker, the latter // are collected on the fly internally in librdkafka. // WARNING: This is an experimental API. // NOTE: timestamps and headers are not supported with this API. func (p *Producer) produceBatch(topic string, msgs []*Message, msgFlags int) error { crkt := p.handle.getRkt(topic) cmsgs := make([]C.rd_kafka_message_t, len(msgs)) for i, m := range msgs { p.handle.messageToC(m, &cmsgs[i]) } r := C.rd_kafka_produce_batch(crkt, C.RD_KAFKA_PARTITION_UA, C.int(msgFlags)|C.RD_KAFKA_MSG_F_FREE, (*C.rd_kafka_message_t)(&cmsgs[0]), C.int(len(msgs))) if r == -1 { return newError(C.rd_kafka_last_error()) } return nil } // Events returns the Events channel (read) func (p *Producer) Events() chan Event { return p.events } // ProduceChannel returns the produce *Message channel (write) func (p *Producer) ProduceChannel() chan *Message { return p.produceChannel } // Len returns the number of messages and requests waiting to be transmitted to the broker // as well as delivery reports queued for the application. // Includes messages on ProduceChannel. func (p *Producer) Len() int { return len(p.produceChannel) + len(p.events) + int(C.rd_kafka_outq_len(p.handle.rk)) } // Flush and wait for outstanding messages and requests to complete delivery. 
// Includes messages on ProduceChannel. // Runs until value reaches zero or on timeoutMs. // Returns the number of outstanding events still un-flushed. func (p *Producer) Flush(timeoutMs int) int { termChan := make(chan bool) // unused stand-in termChan d, _ := time.ParseDuration(fmt.Sprintf("%dms", timeoutMs)) tEnd := time.Now().Add(d) for p.Len() > 0 { remain := tEnd.Sub(time.Now()).Seconds() if remain <= 0.0 { return p.Len() } p.handle.eventPoll(p.events, int(math.Min(100, remain*1000)), 1000, termChan) } return 0 } // Close a Producer instance. // The Producer object or its channels are no longer usable after this call. func (p *Producer) Close() { // Wait for poller() (signaled by closing pollerTermChan) // and channel_producer() (signaled by closing ProduceChannel) close(p.pollerTermChan) close(p.produceChannel) p.handle.waitTerminated(2) close(p.events) p.handle.cleanup() C.rd_kafka_destroy(p.handle.rk) } // NewProducer creates a new high-level Producer instance. // // conf is a *ConfigMap with standard librdkafka configuration properties, see here: // // // // // // Supported special configuration properties: // go.batch.producer (bool, false) - EXPERIMENTAL: Enable batch producer (for increased performance). // These batches do not relate to Kafka message batches in any way. // Note: timestamps and headers are not supported with this interface. // go.delivery.reports (bool, true) - Forward per-message delivery reports to the // Events() channel. // go.events.channel.size (int, 1000000) - Events() channel size // go.produce.channel.size (int, 1000000) - ProduceChannel() buffer size (in number of messages) // func NewProducer(conf *ConfigMap) (*Producer, error) { err := versionCheck() if err != nil { return nil, err } p := &Producer{} // before we do anything with the configuration, create a copy such that // the original is not mutated. confCopy := conf.clone() v, err := confCopy.extract("go.batch.producer", false) if err != nil { return nil, err } batchProducer := v.(bool) v, err = confCopy.extract("go.delivery.reports", true) if err != nil { return nil, err } p.handle.fwdDr = v.(bool) v, err = confCopy.extract("go.events.channel.size", 1000000) if err != nil { return nil, err } eventsChanSize := v.(int) v, err = confCopy.extract("go.produce.channel.size", 1000000) if err != nil { return nil, err } produceChannelSize := v.(int) if int(C.rd_kafka_version()) < 0x01000000 { // produce.offset.report is no longer used in librdkafka >= v1.0.0 v, _ = confCopy.extract("{topic}.produce.offset.report", nil) if v == nil { // Enable offset reporting by default, unless overriden. 
confCopy.SetKey("{topic}.produce.offset.report", true) } } // Convert ConfigMap to librdkafka conf_t cConf, err := confCopy.convert() if err != nil { return nil, err } cErrstr := (*C.char)(C.malloc(C.size_t(256))) defer C.free(unsafe.Pointer(cErrstr)) C.rd_kafka_conf_set_events(cConf, C.RD_KAFKA_EVENT_DR|C.RD_KAFKA_EVENT_STATS|C.RD_KAFKA_EVENT_ERROR) // Create librdkafka producer instance p.handle.rk = C.rd_kafka_new(C.RD_KAFKA_PRODUCER, cConf, cErrstr, 256) if p.handle.rk == nil { return nil, newErrorFromCString(C.RD_KAFKA_RESP_ERR__INVALID_ARG, cErrstr) } p.handle.p = p p.handle.setup() p.handle.rkq = C.rd_kafka_queue_get_main(p.handle.rk) p.events = make(chan Event, eventsChanSize) p.produceChannel = make(chan *Message, produceChannelSize) p.pollerTermChan = make(chan bool) go poller(p, p.pollerTermChan) // non-batch or batch producer, only one must be used if batchProducer { go channelBatchProducer(p) } else { go channelProducer(p) } return p, nil } // channel_producer serves the ProduceChannel channel func channelProducer(p *Producer) { for m := range p.produceChannel { err := p.produce(m, C.RD_KAFKA_MSG_F_BLOCK, nil) if err != nil { m.TopicPartition.Error = err p.events <- m } } p.handle.terminatedChan <- "channelProducer" } // channelBatchProducer serves the ProduceChannel channel and attempts to // improve cgo performance by using the produceBatch() interface. func channelBatchProducer(p *Producer) { var buffered = make(map[string][]*Message) bufferedCnt := 0 const batchSize int = 1000000 totMsgCnt := 0 totBatchCnt := 0 for m := range p.produceChannel { buffered[*m.TopicPartition.Topic] = append(buffered[*m.TopicPartition.Topic], m) bufferedCnt++ loop2: for true { select { case m, ok := <-p.produceChannel: if !ok { break loop2 } if m == nil { panic("nil message received on ProduceChannel") } if m.TopicPartition.Topic == nil { panic(fmt.Sprintf("message without Topic received on ProduceChannel: %v", m)) } buffered[*m.TopicPartition.Topic] = append(buffered[*m.TopicPartition.Topic], m) bufferedCnt++ if bufferedCnt >= batchSize { break loop2 } default: break loop2 } } totBatchCnt++ totMsgCnt += len(buffered) for topic, buffered2 := range buffered { err := p.produceBatch(topic, buffered2, C.RD_KAFKA_MSG_F_BLOCK) if err != nil { for _, m = range buffered2 { m.TopicPartition.Error = err p.events <- m } } } buffered = make(map[string][]*Message) bufferedCnt = 0 } p.handle.terminatedChan <- "channelBatchProducer" } // poller polls the rd_kafka_t handle for events until signalled for termination func poller(p *Producer, termChan chan bool) { out: for true { select { case _ = <-termChan: break out default: _, term := p.handle.eventPoll(p.events, 100, 1000, termChan) if term { break out } break } } p.handle.terminatedChan <- "poller" } // GetMetadata queries broker for cluster and topic metadata. // If topic is non-nil only information about that topic is returned, else if // allTopics is false only information about locally used topics is returned, // else information about all topics is returned. // GetMetadata is equivalent to listTopics, describeTopics and describeCluster in the Java API. func (p *Producer) GetMetadata(topic *string, allTopics bool, timeoutMs int) (*Metadata, error) { return getMetadata(p, topic, allTopics, timeoutMs) } // QueryWatermarkOffsets returns the broker's low and high offsets for the given topic // and partition. 
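//
// A minimal usage sketch (assuming an existing *Producer "p" and a topic
// named "mytopic" with a partition 0):
//
//	low, high, err := p.QueryWatermarkOffsets("mytopic", 0, 5000)
//	if err == nil {
//		fmt.Printf("partition 0 currently spans offsets %d..%d\n", low, high)
//	}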
func (p *Producer) QueryWatermarkOffsets(topic string, partition int32, timeoutMs int) (low, high int64, err error) { return queryWatermarkOffsets(p, topic, partition, timeoutMs) } // OffsetsForTimes looks up offsets by timestamp for the given partitions. // // The returned offset for each partition is the earliest offset whose // timestamp is greater than or equal to the given timestamp in the // corresponding partition. // // The timestamps to query are represented as `.Offset` in the `times` // argument and the looked up offsets are represented as `.Offset` in the returned // `offsets` list. // // The function will block for at most timeoutMs milliseconds. // // Duplicate Topic+Partitions are not supported. // Per-partition errors may be returned in the `.Error` field. func (p *Producer) OffsetsForTimes(times []TopicPartition, timeoutMs int) (offsets []TopicPartition, err error) { return offsetsForTimes(p, times, timeoutMs) } golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/producer_performance_test.go000066400000000000000000000126641336406275100313530ustar00rootroot00000000000000/** * Copyright 2016 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package kafka import ( "fmt" "strings" "testing" ) func deliveryHandler(b *testing.B, expCnt int64, deliveryChan chan Event, doneChan chan int64) { var cnt, size int64 for ev := range deliveryChan { m, ok := ev.(*Message) if !ok { continue } if m.TopicPartition.Error != nil { b.Errorf("Message delivery error: %v", m.TopicPartition) break } cnt++ // b.Logf("Delivered %d/%d to %s", cnt, expCnt, m.TopicPartition) if m.Value != nil { size += int64(len(m.Value)) } if cnt >= expCnt { break } } doneChan <- cnt doneChan <- size close(doneChan) } func producerPerfTest(b *testing.B, testname string, msgcnt int, withDr bool, batchProducer bool, silent bool, produceFunc func(p *Producer, m *Message, drChan chan Event)) { if !testconfRead() { b.Skipf("Missing testconf.json") } if msgcnt == 0 { msgcnt = testconf.PerfMsgCount } conf := ConfigMap{"bootstrap.servers": testconf.Brokers, "go.batch.producer": batchProducer, "go.delivery.reports": withDr, "queue.buffering.max.messages": msgcnt, "api.version.request": "true", "broker.version.fallback": "0.9.0.1", "default.topic.config": ConfigMap{"acks": 1}} conf.updateFromTestconf() p, err := NewProducer(&conf) if err != nil { panic(err) } topic := testconf.Topic partition := int32(-1) size := testconf.PerfMsgSize pattern := "Hello" buf := []byte(strings.Repeat(pattern, size/len(pattern))) var doneChan chan int64 var drChan chan Event if withDr { doneChan = make(chan int64) drChan = p.Events() go deliveryHandler(b, int64(msgcnt), p.Events(), doneChan) } if !silent { b.Logf("%s: produce %d messages", testname, msgcnt) } displayInterval := 5.0 if !silent { displayInterval = 1000.0 } rd := ratedispStart(b, fmt.Sprintf("%s: produce", testname), displayInterval) rdDelivery := ratedispStart(b, fmt.Sprintf("%s: delivery", testname), displayInterval) for i := 0; i < msgcnt; i++ { m := Message{TopicPartition: 
TopicPartition{Topic: &topic, Partition: partition}, Value: buf} produceFunc(p, &m, drChan) rd.tick(1, int64(size)) } if !silent { rd.print("produce done: ") } // Wait for messages in-flight and in-queue to get delivered. if !silent { b.Logf("%s: %d messages in queue", testname, p.Len()) } r := p.Flush(10000) if r > 0 { b.Errorf("%s: %d messages remain in queue after Flush()", testname, r) } // Close producer p.Close() var deliveryCnt, deliverySize int64 if withDr { deliveryCnt = <-doneChan deliverySize = <-doneChan } else { deliveryCnt = int64(msgcnt) deliverySize = deliveryCnt * int64(size) } rdDelivery.tick(deliveryCnt, deliverySize) rd.print("TOTAL: ") b.SetBytes(deliverySize) } func BenchmarkProducerFunc(b *testing.B) { producerPerfTest(b, "Function producer (without DR)", 0, false, false, false, func(p *Producer, m *Message, drChan chan Event) { err := p.Produce(m, drChan) if err != nil { b.Errorf("Produce() failed: %v", err) } }) } func BenchmarkProducerFuncDR(b *testing.B) { producerPerfTest(b, "Function producer (with DR)", 0, true, false, false, func(p *Producer, m *Message, drChan chan Event) { err := p.Produce(m, drChan) if err != nil { b.Errorf("Produce() failed: %v", err) } }) } func BenchmarkProducerChannel(b *testing.B) { producerPerfTest(b, "Channel producer (without DR)", 0, false, false, false, func(p *Producer, m *Message, drChan chan Event) { p.ProduceChannel() <- m }) } func BenchmarkProducerChannelDR(b *testing.B) { producerPerfTest(b, "Channel producer (with DR)", testconf.PerfMsgCount, true, false, false, func(p *Producer, m *Message, drChan chan Event) { p.ProduceChannel() <- m }) } func BenchmarkProducerBatchChannel(b *testing.B) { producerPerfTest(b, "Channel producer (without DR, batch channel)", 0, false, true, false, func(p *Producer, m *Message, drChan chan Event) { p.ProduceChannel() <- m }) } func BenchmarkProducerBatchChannelDR(b *testing.B) { producerPerfTest(b, "Channel producer (DR, batch channel)", 0, true, true, false, func(p *Producer, m *Message, drChan chan Event) { p.ProduceChannel() <- m }) } func BenchmarkProducerInternalMessageInstantiation(b *testing.B) { topic := "test" buf := []byte(strings.Repeat("Ten bytes!", 10)) v := 0 for i := 0; i < b.N; i++ { msg := Message{TopicPartition: TopicPartition{Topic: &topic, Partition: 0}, Value: buf} v += int(msg.TopicPartition.Partition) // avoid msg unused error } } func BenchmarkProducerInternalMessageToC(b *testing.B) { p, err := NewProducer(&ConfigMap{}) if err != nil { b.Fatalf("NewProducer failed: %s", err) } b.ResetTimer() topic := "test" buf := []byte(strings.Repeat("Ten bytes!", 10)) for i := 0; i < b.N; i++ { msg := Message{TopicPartition: TopicPartition{Topic: &topic, Partition: 0}, Value: buf} p.handle.messageToCDummy(&msg) } } golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/producer_test.go000066400000000000000000000156001336406275100267630ustar00rootroot00000000000000/** * Copyright 2016 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
*/ package kafka import ( "encoding/binary" "encoding/json" "reflect" "testing" "time" ) // TestProducerAPIs dry-tests all Producer APIs, no broker is needed. func TestProducerAPIs(t *testing.T) { // expected message dr count on events channel expMsgCnt := 0 p, err := NewProducer(&ConfigMap{ "socket.timeout.ms": 10, "default.topic.config": ConfigMap{"message.timeout.ms": 10}}) if err != nil { t.Fatalf("%s", err) } t.Logf("Producer %s", p) drChan := make(chan Event, 10) topic1 := "gotest" topic2 := "gotest2" // Produce with function, DR on passed drChan err = p.Produce(&Message{TopicPartition: TopicPartition{Topic: &topic1, Partition: 0}, Value: []byte("Own drChan"), Key: []byte("This is my key")}, drChan) if err != nil { t.Errorf("Produce failed: %s", err) } // Produce with function, use default DR channel (Events) err = p.Produce(&Message{TopicPartition: TopicPartition{Topic: &topic2, Partition: 0}, Value: []byte("Events DR"), Key: []byte("This is my key")}, nil) if err != nil { t.Errorf("Produce failed: %s", err) } expMsgCnt++ // Produce with function and timestamp, // success depends on librdkafka version err = p.Produce(&Message{TopicPartition: TopicPartition{Topic: &topic2, Partition: 0}, Timestamp: time.Now()}, nil) numver, strver := LibraryVersion() t.Logf("Produce with timestamp on %s returned: %s", strver, err) if numver < 0x00090400 { if err == nil || err.(Error).Code() != ErrNotImplemented { t.Errorf("Expected Produce with timestamp to fail with ErrNotImplemented on %s (0x%x), got: %s", strver, numver, err) } } else { if err != nil { t.Errorf("Produce with timestamp failed on %s: %s", strver, err) } } if err == nil { expMsgCnt++ } // Produce through ProducerChannel, uses default DR channel (Events), // pass Opaque object. myOpq := "My opaque" p.ProduceChannel() <- &Message{TopicPartition: TopicPartition{Topic: &topic2, Partition: 0}, Opaque: &myOpq, Value: []byte("ProducerChannel"), Key: []byte("This is my key")} expMsgCnt++ // Len() will not report messages on private delivery report chans (our drChan for example), // so expect at least 2 messages, not 3. // And completely ignore the timestamp message. 
if p.Len() < 2 { t.Errorf("Expected at least 2 messages (+requests) in queue, only %d reported", p.Len()) } // Message Headers varIntHeader := make([]byte, binary.MaxVarintLen64) varIntLen := binary.PutVarint(varIntHeader, 123456789) myHeaders := []Header{ {"thisHdrIsNullOrNil", nil}, {"empty", []byte("")}, {"MyVarIntHeader", varIntHeader[:varIntLen]}, {"mystring", []byte("This is a simple string")}, } p.ProduceChannel() <- &Message{TopicPartition: TopicPartition{Topic: &topic2, Partition: 0}, Value: []byte("Headers"), Headers: myHeaders} expMsgCnt++ // // Now wait for messages to time out so that delivery reports are triggered // // drChan (1 message) ev := <-drChan m := ev.(*Message) if string(m.Value) != "Own drChan" { t.Errorf("DR for wrong message (wanted 'Own drChan'), got %s", string(m.Value)) } else if m.TopicPartition.Error == nil { t.Errorf("Expected error for message") } else { t.Logf("Message %s", m.TopicPartition) } close(drChan) // Events chan (3 messages and possibly events) for msgCnt := 0; msgCnt < expMsgCnt; { ev = <-p.Events() switch e := ev.(type) { case *Message: msgCnt++ if (string)(e.Value) == "ProducerChannel" { s := e.Opaque.(*string) if s != &myOpq { t.Errorf("Opaque should point to %v, not %v", &myOpq, s) } if *s != myOpq { t.Errorf("Opaque should be \"%s\", not \"%v\"", myOpq, *s) } t.Logf("Message \"%s\" with opaque \"%s\"\n", (string)(e.Value), *s) } else if (string)(e.Value) == "Headers" { if e.Opaque != nil { t.Errorf("Message opaque should be nil, not %v", e.Opaque) } if !reflect.DeepEqual(e.Headers, myHeaders) { // FIXME: Headers are currently not available on the delivery report. // t.Errorf("Message headers should be %v, not %v", myHeaders, e.Headers) } } else { if e.Opaque != nil { t.Errorf("Message opaque should be nil, not %v", e.Opaque) } } default: t.Logf("Ignored event %s", e) } } r := p.Flush(2000) if r > 0 { t.Errorf("Expected empty queue after Flush, still has %d", r) } // OffsetsForTimes offsets, err := p.OffsetsForTimes([]TopicPartition{{Topic: &topic2, Offset: 12345}}, 100) t.Logf("OffsetsForTimes() returned Offsets %s and error %s\n", offsets, err) if err == nil { t.Errorf("OffsetsForTimes() should have failed\n") } if offsets != nil { t.Errorf("OffsetsForTimes() failed but returned non-nil Offsets: %s\n", offsets) } } // TestProducerBufferSafety verifies issue #24, passing any type of memory backed buffer // (JSON in this case) to Produce() func TestProducerBufferSafety(t *testing.T) { p, err := NewProducer(&ConfigMap{ "socket.timeout.ms": 10, "default.topic.config": ConfigMap{"message.timeout.ms": 10}}) if err != nil { t.Fatalf("%s", err) } topic := "gotest" value, _ := json.Marshal(struct{ M string }{M: "Hello Go!"}) empty := []byte("") // Try combinations of Value and Key: json value, empty, nil p.Produce(&Message{TopicPartition: TopicPartition{Topic: &topic}, Value: value, Key: nil}, nil) p.Produce(&Message{TopicPartition: TopicPartition{Topic: &topic}, Value: value, Key: value}, nil) p.Produce(&Message{TopicPartition: TopicPartition{Topic: &topic}, Value: nil, Key: value}, nil) p.Produce(&Message{TopicPartition: TopicPartition{Topic: &topic}, Value: nil, Key: nil}, nil) p.Produce(&Message{TopicPartition: TopicPartition{Topic: &topic}, Value: empty, Key: nil}, nil) p.Produce(&Message{TopicPartition: TopicPartition{Topic: &topic}, Value: empty, Key: empty}, nil) p.Produce(&Message{TopicPartition: TopicPartition{Topic: &topic}, Value: nil, Key: empty}, nil) p.Produce(&Message{TopicPartition: TopicPartition{Topic: &topic}, Value: value, 
Key: empty}, nil) p.Produce(&Message{TopicPartition: TopicPartition{Topic: &topic}, Value: value, Key: value}, nil) p.Produce(&Message{TopicPartition: TopicPartition{Topic: &topic}, Value: empty, Key: value}, nil) // And Headers p.Produce(&Message{TopicPartition: TopicPartition{Topic: &topic}, Value: empty, Key: value, Headers: []Header{{"hdr", value}, {"hdr2", empty}, {"hdr3", nil}}}, nil) p.Flush(100) p.Close() } golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/stats_event_test.go000066400000000000000000000064421336406275100275030ustar00rootroot00000000000000/** * Copyright 2017 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package kafka import ( "encoding/json" "testing" "time" ) // handleStatsEvent checks for stats event and signals on statsReceived func handleStatsEvent(t *testing.T, eventCh chan Event, statsReceived chan bool) { for ev := range eventCh { switch e := ev.(type) { case *Stats: t.Logf("Stats: %v", e) // test if the stats string can be decoded into JSON var raw map[string]interface{} err := json.Unmarshal([]byte(e.String()), &raw) // decode the JSON string if err != nil { t.Fatalf("json unmarshal error: %s", err) } t.Logf("Stats['name']: %s", raw["name"]) close(statsReceived) return default: t.Logf("Ignored event: %v", e) } } } // TestStatsEventProducerFunc dry-tests stats event, no broker is needed. func TestStatsEventProducerFunc(t *testing.T) { testProducerFunc(t, false) } func TestStatsEventProducerChannel(t *testing.T) { testProducerFunc(t, true) } func testProducerFunc(t *testing.T, withProducerChannel bool) { p, err := NewProducer(&ConfigMap{ "statistics.interval.ms": 50, "socket.timeout.ms": 10, "default.topic.config": ConfigMap{"message.timeout.ms": 10}}) if err != nil { t.Fatalf("%s", err) } defer p.Close() t.Logf("Producer %s", p) topic1 := "gotest" // go routine to check for stats event statsReceived := make(chan bool) go handleStatsEvent(t, p.Events(), statsReceived) if withProducerChannel { p.ProduceChannel() <- &Message{TopicPartition: TopicPartition{Topic: &topic1, Partition: 0}, Value: []byte("Own drChan"), Key: []byte("This is my key")} } else { err = p.Produce(&Message{TopicPartition: TopicPartition{Topic: &topic1, Partition: 0}, Value: []byte("Own drChan"), Key: []byte("This is my key")}, nil) if err != nil { t.Errorf("Produce failed: %s", err) } } select { case <-statsReceived: t.Logf("Stats received") case <-time.After(time.Second * 3): t.Fatalf("Expected stats but got none") } return } // TestStatsEventConsumerChannel dry-tests stats event for consumer, no broker is needed.
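// (librdkafka emits a Stats event every statistics.interval.ms from its
// internal timer, with or without a broker connection, which is what makes
// these dry tests possible.)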
func TestStatsEventConsumerChannel(t *testing.T) { c, err := NewConsumer(&ConfigMap{ "group.id": "gotest", "statistics.interval.ms": 50, "go.events.channel.enable": true, "socket.timeout.ms": 10, "session.timeout.ms": 10}) if err != nil { t.Fatalf("%s", err) } defer c.Close() t.Logf("Consumer %s", c) // go routine to check for stats event statsReceived := make(chan bool) go handleStatsEvent(t, c.Events(), statsReceived) err = c.Subscribe("gotest", nil) if err != nil { t.Errorf("Subscribe failed: %s", err) } select { case <-statsReceived: t.Logf("Stats received") case <-time.After(time.Second * 3): t.Fatalf("Expected stats but got none") } } golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/testconf-example.json000066400000000000000000000003001336406275100277120ustar00rootroot00000000000000{ "Brokers": "mybroker or $BROKERS env", "Topic": "test", "GroupID": "testgroup", "PerfMsgCount": 1000000, "PerfMsgSize": 100, "Config": ["api.version.request=true"] } golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/testhelpers.go000066400000000000000000000101051336406275100264360ustar00rootroot00000000000000package kafka /** * Copyright 2016 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import ( "encoding/json" "fmt" "os" "time" ) /* #include <librdkafka/rdkafka.h> */ import "C" var testconf struct { Brokers string Topic string GroupID string PerfMsgCount int PerfMsgSize int Config []string conf ConfigMap } // testconf_read reads the test suite config file testconf.json which must // contain at least Brokers and Topic string properties. // Returns true if the testconf was found and usable, false if no such file, or panics // if the file format is wrong. func testconfRead() bool { cf, err := os.Open("testconf.json") if err != nil { fmt.Fprintf(os.Stderr, "%% testconf.json not found - ignoring test\n") return false } // Default values testconf.PerfMsgCount = 2000000 testconf.PerfMsgSize = 100 testconf.GroupID = "testgroup" jp := json.NewDecoder(cf) err = jp.Decode(&testconf) if err != nil { panic(fmt.Sprintf("Failed to parse testconf: %s", err)) } cf.Close() if testconf.Brokers[0] == '$' { // Read broker list from environment variable testconf.Brokers = os.Getenv(testconf.Brokers[1:]) } if testconf.Brokers == "" || testconf.Topic == "" { panic("Missing Brokers or Topic in testconf.json") } return true } // update existing ConfigMap with key=value pairs from testconf.Config func (cm *ConfigMap) updateFromTestconf() error { if testconf.Config == nil { return nil } // Translate "key=value" pairs in Config to ConfigMap for _, s := range testconf.Config { err := cm.Set(s) if err != nil { return err } } return nil } // Return the number of messages available in all partitions of a topic. // WARNING: This uses watermark offsets so it will be incorrect for compacted topics.
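// The count is the sum of (high - low) watermarks over all partitions;
// e.g. watermarks (0,100), (5,50) and (10,10) on a three-partition topic
// yield 100 + 45 + 0 = 145 messages.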
func getMessageCountInTopic(topic string) (int, error) { // Create consumer config := &ConfigMap{"bootstrap.servers": testconf.Brokers, "group.id": testconf.GroupID} config.updateFromTestconf() c, err := NewConsumer(config) if err != nil { return 0, err } // get metadata for the topic to find out number of partitions metadata, err := c.GetMetadata(&topic, false, 5*1000) if err != nil { return 0, err } t, ok := metadata.Topics[topic] if !ok { return 0, newError(C.RD_KAFKA_RESP_ERR__UNKNOWN_TOPIC) } cnt := 0 for _, p := range t.Partitions { low, high, err := c.QueryWatermarkOffsets(topic, p.ID, 5*1000) if err != nil { continue } cnt += int(high - low) } return cnt, nil } // getBrokerList returns a list of brokers (ids) in the cluster func getBrokerList(H Handle) (brokers []int32, err error) { md, err := getMetadata(H, nil, true, 15*1000) if err != nil { return nil, err } brokers = make([]int32, len(md.Brokers)) for i, mdBroker := range md.Brokers { brokers[i] = mdBroker.ID } return brokers, nil } // waitTopicInMetadata waits for the given topic to show up in metadata func waitTopicInMetadata(H Handle, topic string, timeoutMs int) error { d, _ := time.ParseDuration(fmt.Sprintf("%dms", timeoutMs)) tEnd := time.Now().Add(d) for { remain := tEnd.Sub(time.Now()).Seconds() if remain < 0.0 { return newErrorFromString(ErrTimedOut, fmt.Sprintf("Timed out waiting for topic %s to appear in metadata", topic)) } md, err := getMetadata(H, nil, true, int(remain*1000)) if err != nil { return err } for _, t := range md.Topics { if t.Topic != topic { continue } if t.Error.Code() != ErrNoError || len(t.Partitions) < 1 { continue } // Proper topic found in metadata return nil } time.Sleep(500 * time.Millisecond) } } golang-github-confluentinc-confluent-kafka-go-0.11.6/kafka/testhelpers_test.go000066400000000000000000000033651336406275100275070ustar00rootroot00000000000000/** * Copyright 2016 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package kafka import ( "testing" "time" ) // ratedisp tracks and prints message & byte rates type ratedisp struct { name string start time.Time lastPrint time.Time every float64 cnt int64 size int64 b *testing.B } // ratedisp_start sets up a new rate displayer func ratedispStart(b *testing.B, name string, every float64) (pf ratedisp) { now := time.Now() return ratedisp{name: name, start: now, lastPrint: now, b: b, every: every} } // reset start time and counters func (rd *ratedisp) reset() { rd.start = time.Now() rd.cnt = 0 rd.size = 0 } // print the current (accumulated) rate func (rd *ratedisp) print(pfx string) { elapsed := time.Since(rd.start).Seconds() rd.b.Logf("%s: %s%d messages in %fs (%.0f msgs/s), %d bytes (%.3fMb/s)", rd.name, pfx, rd.cnt, elapsed, float64(rd.cnt)/elapsed, rd.size, (float64(rd.size)/elapsed)/(1024*1024)) } // tick adds cnt messages of total size size (in bytes) to the rate displayer // and also prints running stats every rd.every seconds.
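// For example, with every=1.0 a benchmark calling tick(1, 100) per
// 100-byte message gets a running rate line printed roughly once a second.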
func (rd *ratedisp) tick(cnt, size int64) { rd.cnt += cnt rd.size += size if time.Since(rd.lastPrint).Seconds() >= rd.every { rd.print("") rd.lastPrint = time.Now() } } golang-github-confluentinc-confluent-kafka-go-0.11.6/kafkatest/000077500000000000000000000000001336406275100244505ustar00rootroot00000000000000golang-github-confluentinc-confluent-kafka-go-0.11.6/kafkatest/.gitignore000066400000000000000000000001341336406275100264360ustar00rootroot00000000000000go_verifiable_consumer/go_verifiable_consumer go_verifiable_producer/go_verifiable_producer golang-github-confluentinc-confluent-kafka-go-0.11.6/kafkatest/README.md000066400000000000000000000024511336406275100257310ustar00rootroot00000000000000Contains kafkatest compatible clients for plugging in with the official Apache Kafka client tests Instructions ============ **Build both clients with statically linked librdkafka:** $ mkdir ~/src/kafka/tests/go $ cd go_verifiable_producer $ go build -tags static $ cp go_verifiable_producer ~/src/kafka/tests/go $ cd go_verifiable_consumer $ go build -tags static $ cp go_verifiable_consumer ~/src/kafka/tests/go **Install librdkafka's dependencies on kafkatest VMs:** $ cd ~/src/kafka # your Kafka git checkout $ for n in $(vagrant status | grep running | awk '{print $1}') ; do \ vagrant ssh $n -c 'sudo apt-get install -y libssl1.0.0 libsasl2-modules-gssapi-mit liblz4-1 zlib1g' ; done *Note*: There is also a deploy.sh script in this directory that can be used on the VMs to do the same. **Run kafkatests using Go client:** $ cd ~/src/kafka # your Kafka git checkout $ source ~/src/venv2.7/bin/activate # your virtualenv containing ducktape $ vagrant rsync # to copy go_verifiable_* clients to worker instances $ ducktape --debug tests/kafkatest/tests/client --globals $GOPATH/src/github.com/confluentinc/confluent-kafka-go/kafkatest/globals.json # Go do something else for 40 minutes # Come back and look at the results golang-github-confluentinc-confluent-kafka-go-0.11.6/kafkatest/deploy.sh000066400000000000000000000002761336406275100263050ustar00rootroot00000000000000#!/bin/bash # # Per-instance Go kafkatest client dependency deployment. # Installs required dependencies. sudo apt-get install -y libsasl2 libsasl2-modules-gssapi-mit libssl1.1.0 liblz4-1 golang-github-confluentinc-confluent-kafka-go-0.11.6/kafkatest/globals.json000066400000000000000000000007021336406275100267650ustar00rootroot00000000000000{"VerifiableConsumer": { "class": "kafkatest.services.verifiable_client.VerifiableClientApp", "exec_cmd": "/vagrant/tests/go/go_verifiable_consumer --debug=cgrp,topic,protocol,broker -X api.version.request=true" }, "VerifiableProducer": { "class": "kafkatest.services.verifiable_client.VerifiableClientApp", "exec_cmd": "/vagrant/tests/go/go_verifiable_producer --debug=topic,protocol,broker -X api.version.request=true" } } golang-github-confluentinc-confluent-kafka-go-0.11.6/kafkatest/go_verifiable_consumer/000077500000000000000000000000001336406275100311605ustar00rootroot00000000000000go_verifiable_consumer.go000066400000000000000000000270441336406275100361470ustar00rootroot00000000000000golang-github-confluentinc-confluent-kafka-go-0.11.6/kafkatest/go_verifiable_consumer// Apache Kafka kafkatest VerifiableConsumer implemented in Go package main /** * Copyright 2016 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License.
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import ( "encoding/json" "fmt" "github.com/confluentinc/confluent-kafka-go/kafka" "gopkg.in/alecthomas/kingpin.v2" "os" "os/signal" "strings" "syscall" "time" ) var ( verbosity = 1 sigs chan os.Signal ) func fatal(why string) { fmt.Fprintf(os.Stderr, "%% FATAL ERROR: %s", why) panic(why) } func send(name string, msg map[string]interface{}) { if msg == nil { msg = make(map[string]interface{}) } msg["name"] = name msg["_time"] = time.Now().Format("2006-01-02 15:04:05.000") b, err := json.Marshal(msg) if err != nil { fatal(fmt.Sprintf("json.Marshal failed: %v", err)) } fmt.Println(string(b)) } func partitionsToMap(partitions []kafka.TopicPartition) []map[string]interface{} { parts := make([]map[string]interface{}, len(partitions)) for i, tp := range partitions { parts[i] = map[string]interface{}{"topic": *tp.Topic, "partition": tp.Partition, "offset": tp.Offset} } return parts } func sendOffsetsCommitted(offsets []kafka.TopicPartition, err error) { if len(state.currAssignment) == 0 { // Don't emit offsets_committed if there is no current assignment. // This happens when auto_commit is enabled since we also // force a manual commit on rebalance to make sure // offsets_committed is emitted prior to partitions_revoked, // so the builtin auto committer will also kick in and post // this later OffsetsCommitted event which we simply ignore. fmt.Fprintf(os.Stderr, "%% Ignore OffsetsCommitted(%v) without a valid assignment\n", err) return } msg := make(map[string]interface{}) if err != nil { msg["success"] = false msg["error"] = fmt.Sprintf("%v", err) kerr, ok := err.(kafka.Error) if ok && kerr.Code() == kafka.ErrNoOffset { fmt.Fprintf(os.Stderr, "%% No offsets to commit\n") return } fmt.Fprintf(os.Stderr, "%% Commit failed: %v", msg["error"]) } else { msg["success"] = true } if offsets != nil { msg["offsets"] = partitionsToMap(offsets) } // Make sure we report consumption before commit, // otherwise tests may fail because of commit > consumed sendRecordsConsumed(true) send("offsets_committed", msg) } func sendPartitions(name string, partitions []kafka.TopicPartition) { msg := make(map[string]interface{}) msg["partitions"] = partitionsToMap(partitions) send(name, msg) } type assignedPartition struct { tp kafka.TopicPartition consumedMsgs int minOffset int64 maxOffset int64 } func assignmentKey(tp kafka.TopicPartition) string { return fmt.Sprintf("%s-%d", *tp.Topic, tp.Partition) } func findAssignment(tp kafka.TopicPartition) *assignedPartition { a, ok := state.currAssignment[assignmentKey(tp)] if !ok { return nil } return a } func addAssignment(tp kafka.TopicPartition) { state.currAssignment[assignmentKey(tp)] = &assignedPartition{tp: tp, minOffset: -1, maxOffset: -1} } func clearCurrAssignment() { state.currAssignment = make(map[string]*assignedPartition) } type commState struct { run bool consumedMsgs int consumedMsgsLastReported int consumedMsgsAtLastCommit int currAssignment map[string]*assignedPartition maxMessages int autoCommit bool asyncCommit bool c *kafka.Consumer termOnRevoke bool } var state commState func sendRecordsConsumed(immediate bool) { if len(state.currAssignment) == 0 ||
(!immediate && state.consumedMsgsLastReported+1000 > state.consumedMsgs) { return } msg := map[string]interface{}{} msg["count"] = state.consumedMsgs - state.consumedMsgsLastReported parts := make([]map[string]interface{}, len(state.currAssignment)) i := 0 for _, a := range state.currAssignment { if a.minOffset == -1 { // Skip partitions that haven't had any messages since last time. // This is to circumvent some minOffset checks in kafkatest. continue } parts[i] = map[string]interface{}{"topic": *a.tp.Topic, "partition": a.tp.Partition, "consumed_msgs": a.consumedMsgs, "minOffset": a.minOffset, "maxOffset": a.maxOffset} a.minOffset = -1 i++ } msg["partitions"] = parts[0:i] send("records_consumed", msg) state.consumedMsgsLastReported = state.consumedMsgs } // do_commit commits every 1000 messages or whenever there is a consume timeout, or when immediate==true func doCommit(immediate bool, async bool) { if !immediate && (state.autoCommit || state.consumedMsgsAtLastCommit+1000 > state.consumedMsgs) { return } async = state.asyncCommit fmt.Fprintf(os.Stderr, "%% Committing %d messages (async=%v)\n", state.consumedMsgs-state.consumedMsgsAtLastCommit, async) state.consumedMsgsAtLastCommit = state.consumedMsgs var waitCommitted chan bool if !async { waitCommitted = make(chan bool) } go func() { offsets, err := state.c.Commit() sendOffsetsCommitted(offsets, err) if !async { close(waitCommitted) } }() if !async { _, _ = <-waitCommitted } } // returns false when consumer should terminate, else true to keep running. func handleMsg(m *kafka.Message) bool { if verbosity >= 2 { fmt.Fprintf(os.Stderr, "%% Message received: %v:\n", m.TopicPartition) } a := findAssignment(m.TopicPartition) if a == nil { fmt.Fprintf(os.Stderr, "%% Received message on unassigned partition: %v\n", m.TopicPartition) return true } a.consumedMsgs++ offset := int64(m.TopicPartition.Offset) if a.minOffset == -1 { a.minOffset = offset } if a.maxOffset < offset { a.maxOffset = offset } state.consumedMsgs++ sendRecordsConsumed(false) doCommit(false, state.asyncCommit) if state.maxMessages > 0 && state.consumedMsgs >= state.maxMessages { // ignore extra messages return false } return true } // handle_event handles an event as returned by Poll(). func handleEvent(c *kafka.Consumer, ev kafka.Event) { switch e := ev.(type) { case kafka.AssignedPartitions: if len(state.currAssignment) > 0 { fatal(fmt.Sprintf("Assign: currAssignment should have been empty: %v", state.currAssignment)) } state.currAssignment = make(map[string]*assignedPartition) for _, tp := range e.Partitions { addAssignment(tp) } sendPartitions("partitions_assigned", e.Partitions) c.Assign(e.Partitions) case kafka.RevokedPartitions: sendRecordsConsumed(true) doCommit(true, false) sendPartitions("partitions_revoked", e.Partitions) clearCurrAssignment() c.Unassign() if state.termOnRevoke { state.run = false } case kafka.OffsetsCommitted: sendOffsetsCommitted(e.Offsets, e.Error) case *kafka.Message: state.run = handleMsg(e) case kafka.Error: if e.Code() == kafka.ErrUnknownTopicOrPart { fmt.Fprintf(os.Stderr, "%% Ignoring transient error: %v\n", e) } else { fatal(fmt.Sprintf("%% Error: %v\n", e)) } default: fmt.Fprintf(os.Stderr, "%% Unhandled event %T ignored: %v\n", e, e) } } // main_loop serves consumer events, signals, etc. // will run for at most (roughly) the given timeout in seconds.
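// Each iteration multiplexes the overall deadline, termination signals and
// a 1-second housekeeping tick (reporting consumed records and committing
// offsets), and otherwise falls through to drain all pending consumer
// events via Poll(0).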
func mainLoop(c *kafka.Consumer, timeout int) { tmout := time.NewTicker(time.Duration(timeout) * time.Second) every1s := time.NewTicker(1 * time.Second) out: for state.run == true { select { case _ = <-tmout.C: tmout.Stop() break out case sig := <-sigs: fmt.Fprintf(os.Stderr, "%% Terminating on signal %v\n", sig) state.run = false case _ = <-every1s.C: // Report consumed messages sendRecordsConsumed(true) // Commit on timeout as well (not just every 1000 messages) doCommit(false, state.asyncCommit) default: //case _ = <-time.After(100000 * time.Microsecond): for true { ev := c.Poll(0) if ev == nil { break } handleEvent(c, ev) } } } } func runConsumer(config *kafka.ConfigMap, topic string) { c, err := kafka.NewConsumer(config) if err != nil { fatal(fmt.Sprintf("Failed to create consumer: %v", err)) } _, verstr := kafka.LibraryVersion() fmt.Fprintf(os.Stderr, "%% Created Consumer %v (%s)\n", c, verstr) state.c = c c.Subscribe(topic, nil) send("startup_complete", nil) state.run = true mainLoop(c, 10*60) tTermBegin := time.Now() fmt.Fprintf(os.Stderr, "%% Consumer shutting down\n") sendRecordsConsumed(true) // Final commit (if auto commit is disabled) doCommit(false, false) c.Unsubscribe() // Wait for rebalance, final offset commits, etc. state.run = true state.termOnRevoke = true mainLoop(c, 10) fmt.Fprintf(os.Stderr, "%% Closing consumer\n") c.Close() msg := make(map[string]interface{}) msg["_shutdown_duration"] = time.Since(tTermBegin).Seconds() send("shutdown_complete", msg) } func main() { sigs = make(chan os.Signal, 1) signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM) // Default config conf := kafka.ConfigMap{"default.topic.config": kafka.ConfigMap{"auto.offset.reset": "earliest"}} /* Required options */ group := kingpin.Flag("group-id", "Consumer group").Required().String() topic := kingpin.Flag("topic", "Topic to consume").Required().String() brokers := kingpin.Flag("broker-list", "Bootstrap broker(s)").Required().String() sessionTimeout := kingpin.Flag("session-timeout", "Session timeout").Required().Int() /* Optionals */ enableAutocommit := kingpin.Flag("enable-autocommit", "Enable auto-commit").Default("true").Bool() maxMessages := kingpin.Flag("max-messages", "Max messages to consume").Default("10000000").Int() javaAssignmentStrategy := kingpin.Flag("assignment-strategy", "Assignment strategy (Java class name)").String() configFile := kingpin.Flag("consumer.config", "Config file").File() debug := kingpin.Flag("debug", "Debug flags").String() xconf := kingpin.Flag("--property", "CSV separated key=value librdkafka configuration properties").Short('X').String() kingpin.Parse() conf["bootstrap.servers"] = *brokers conf["group.id"] = *group conf["session.timeout.ms"] = *sessionTimeout conf["enable.auto.commit"] = *enableAutocommit if len(*debug) > 0 { conf["debug"] = *debug } /* Convert Java assignment strategy(s) (CSV) to librdkafka one. * "[java.class.path.]Strategy[Assignor],.." -> "strategy,.." 
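 * e.g. "org.apache.kafka.clients.consumer.RoundRobinAssignor" -> "roundrobin"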
*/ if javaAssignmentStrategy != nil && len(*javaAssignmentStrategy) > 0 { var strats []string for _, jstrat := range strings.Split(*javaAssignmentStrategy, ",") { s := strings.Split(jstrat, ".") strats = append(strats, strings.ToLower(strings.TrimSuffix(s[len(s)-1], "Assignor"))) } conf["partition.assignment.strategy"] = strings.Join(strats, ",") fmt.Fprintf(os.Stderr, "%% Mapped %s -> %s\n", *javaAssignmentStrategy, conf["partition.assignment.strategy"]) } if *configFile != nil { fmt.Fprintf(os.Stderr, "%% Ignoring config file %v\n", *configFile) } conf["go.events.channel.enable"] = false conf["go.application.rebalance.enable"] = true if len(*xconf) > 0 { for _, kv := range strings.Split(*xconf, ",") { x := strings.Split(kv, "=") if len(x) != 2 { panic("-X expects a ,-separated list of confprop=val pairs") } conf[x[0]] = x[1] } } fmt.Println("Config: ", conf) state.autoCommit = *enableAutocommit state.maxMessages = *maxMessages runConsumer((*kafka.ConfigMap)(&conf), *topic) } golang-github-confluentinc-confluent-kafka-go-0.11.6/kafkatest/go_verifiable_producer/000077500000000000000000000000001336406275100311505ustar00rootroot00000000000000go_verifiable_producer.go000066400000000000000000000142231336406275100361220ustar00rootroot00000000000000golang-github-confluentinc-confluent-kafka-go-0.11.6/kafkatest/go_verifiable_producer// Apache Kafka kafkatest VerifiableProducer implemented in Go package main /** * Copyright 2016 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import ( "encoding/json" "fmt" "github.com/confluentinc/confluent-kafka-go/kafka" "gopkg.in/alecthomas/kingpin.v2" "os" "os/signal" "strings" "syscall" "time" ) var ( verbosity = 1 sigs chan os.Signal ) func send(name string, msg map[string]interface{}) { if msg == nil { msg = make(map[string]interface{}) } msg["name"] = name msg["_time"] = time.Now().Format("2006-01-02 15:04:05.000") b, err := json.Marshal(msg) if err != nil { panic(err) } fmt.Println(string(b)) } func partitionsToMap(partitions []kafka.TopicPartition) []map[string]interface{} { parts := make([]map[string]interface{}, len(partitions)) for i, tp := range partitions { parts[i] = map[string]interface{}{"topic": *tp.Topic, "partition": tp.Partition} } return parts } func sendPartitions(name string, partitions []kafka.TopicPartition) { msg := make(map[string]interface{}) msg["partitions"] = partitionsToMap(partitions) send(name, msg) } type commState struct { maxMessages int // messages to send msgCnt int // messages produced deliveryCnt int // messages delivered errCnt int // messages failed to deliver valuePrefix string throughput int p *kafka.Producer } var state commState // handle_dr handles delivery reports // returns false when producer should terminate, else true to keep running. 
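// A successful report is emitted to stdout as a JSON event looking roughly
// like this (illustrative values only):
//
//	{"name": "producer_send_success", "topic": "test", "partition": 0,
//	 "offset": 42, "key": "mykey", "value": "myvalue", "_time": "..."}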
func handleDr(m *kafka.Message) bool { if verbosity >= 2 { fmt.Fprintf(os.Stderr, "%% DR: %v:\n", m.TopicPartition) } if m.TopicPartition.Error != nil { state.errCnt++ errmsg := make(map[string]interface{}) errmsg["message"] = m.TopicPartition.Error.Error() errmsg["topic"] = *m.TopicPartition.Topic errmsg["partition"] = m.TopicPartition.Partition errmsg["key"] = (string)(m.Key) errmsg["value"] = (string)(m.Value) send("producer_send_error", errmsg) } else { state.deliveryCnt++ drmsg := make(map[string]interface{}) drmsg["topic"] = *m.TopicPartition.Topic drmsg["partition"] = m.TopicPartition.Partition drmsg["offset"] = m.TopicPartition.Offset drmsg["key"] = (string)(m.Key) drmsg["value"] = (string)(m.Value) send("producer_send_success", drmsg) } if state.deliveryCnt+state.errCnt >= state.maxMessages { // we're done return false } return true } func runProducer(config *kafka.ConfigMap, topic string) { p, err := kafka.NewProducer(config) if err != nil { fmt.Fprintf(os.Stderr, "Failed to create producer: %s\n", err) os.Exit(1) } _, verstr := kafka.LibraryVersion() fmt.Fprintf(os.Stderr, "%% Created Producer %v (%s)\n", p, verstr) state.p = p send("startup_complete", nil) run := true throttle := time.NewTicker(time.Second / (time.Duration)(state.throughput)) for run == true { select { case <-throttle.C: // produce a message (async) on each throttler tick value := fmt.Sprintf("%s%d", state.valuePrefix, state.msgCnt) state.msgCnt++ err := p.Produce(&kafka.Message{ TopicPartition: kafka.TopicPartition{ Topic: &topic, Partition: kafka.PartitionAny}, Value: []byte(value)}, nil) if err != nil { fmt.Fprintf(os.Stderr, "%% Produce failed: %v\n", err) state.errCnt++ } if state.msgCnt == state.maxMessages { // all messages sent, now wait for deliveries throttle.Stop() } case sig := <-sigs: fmt.Fprintf(os.Stderr, "%% Terminating on signal %v\n", sig) run = false case ev := <-p.Events(): switch e := ev.(type) { case *kafka.Message: run = handleDr(e) case kafka.Error: fmt.Fprintf(os.Stderr, "%% Error: %v\n", e) run = false default: fmt.Fprintf(os.Stderr, "%% Unhandled event %T ignored: %v\n", e, e) } } } fmt.Fprintf(os.Stderr, "%% Closing, %d/%d messages delivered, %d failed\n", state.deliveryCnt, state.msgCnt, state.errCnt) p.Close() send("shutdown_complete", nil) } func main() { sigs = make(chan os.Signal) signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM) // Default config conf := kafka.ConfigMap{"default.topic.config": kafka.ConfigMap{ "auto.offset.reset": "earliest", "produce.offset.report": true}} /* Required options */ topic := kingpin.Flag("topic", "Topic").Required().String() brokers := kingpin.Flag("broker-list", "Bootstrap broker(s)").Required().String() /* Optionals */ throughput := kingpin.Flag("throughput", "Msgs/s").Default("1000000").Int() maxMessages := kingpin.Flag("max-messages", "Max message count").Default("1000000").Int() valuePrefix := kingpin.Flag("value-prefix", "Payload value string prefix").Default("").String() acks := kingpin.Flag("acks", "Required acks").Default("all").String() configFile := kingpin.Flag("producer.config", "Config file").File() debug := kingpin.Flag("debug", "Debug flags").String() xconf := kingpin.Flag("--property", "CSV separated key=value librdkafka configuration properties").Short('X').String() kingpin.Parse() conf["bootstrap.servers"] = *brokers conf["default.topic.config"].(kafka.ConfigMap).SetKey("acks", *acks) if len(*debug) > 0 { conf["debug"] = *debug } if len(*xconf) > 0 { for _, kv := range strings.Split(*xconf, ",") { x := strings.Split(kv, "=") 
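// each -X pair is expected to look like "confprop=val",
// e.g. "api.version.request=true"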
if len(x) != 2 { panic("-X expects a ,-separated list of confprop=val pairs") } conf[x[0]] = x[1] } } fmt.Println("Config: ", conf) if *configFile != nil { fmt.Fprintf(os.Stderr, "%% Ignoring config file %v\n", *configFile) } if len(*valuePrefix) > 0 { state.valuePrefix = fmt.Sprintf("%s.", *valuePrefix) } else { state.valuePrefix = "" } state.throughput = *throughput state.maxMessages = *maxMessages runProducer((*kafka.ConfigMap)(&conf), *topic) } golang-github-confluentinc-confluent-kafka-go-0.11.6/mk/000077500000000000000000000000001336406275100231025ustar00rootroot00000000000000golang-github-confluentinc-confluent-kafka-go-0.11.6/mk/Makefile000066400000000000000000000006011336406275100245370ustar00rootroot00000000000000# Convenience helper Makefile for simplifying tasks in all sub-dirs # of this git repo. # # Usage (from top-level dir): # make -f mk/Makefile "<command> <args..>" # # E.g., to run 'go vet' on all Go code: # make -f mk/Makefile "go vet" # # # DIRS?=$(shell find . -xdev -type f -name '*.go' -exec dirname {} \; | sort | uniq) %: @(for d in $(DIRS) ; do (cd "$$d" && $@) ; done) golang-github-confluentinc-confluent-kafka-go-0.11.6/mk/bootstrap-librdkafka.sh000077500000000000000000000011461336406275100275500ustar00rootroot00000000000000#!/bin/bash # # # Downloads, builds and installs librdkafka into <install-prefix> # set -e VERSION=$1 PREFIXDIR=$2 if [[ -z "$VERSION" ]]; then echo "Usage: $0 <librdkafka-version> [<install-prefix>]" 1>&2 exit 1 fi if [[ -z "$PREFIXDIR" ]]; then PREFIXDIR=tmp-build fi if [[ $PREFIXDIR != /* ]]; then PREFIXDIR="$PWD/$PREFIXDIR" fi mkdir -p "$PREFIXDIR/librdkafka" pushd "$PREFIXDIR/librdkafka" test -f configure || curl -sL "https://github.com/edenhill/librdkafka/archive/${VERSION}.tar.gz" | \ tar -xz --strip-components=1 -f - ./configure --prefix="$PREFIXDIR" make -j make install popd golang-github-confluentinc-confluent-kafka-go-0.11.6/mk/doc-gen.py000077500000000000000000000021571336406275100247760ustar00rootroot00000000000000#!/usr/bin/env python # Extract godoc HTML documentation for our packages, # remove some nonsense, update some links and make it ready # for inclusion in Confluent doc tree. import subprocess, re from bs4 import BeautifulSoup if __name__ == '__main__': # Use godoc client to extract our package docs html_in = subprocess.check_output(["godoc", "-url=/pkg/github.com/confluentinc/confluent-kafka-go/kafka"]) # Parse HTML soup = BeautifulSoup(html_in, 'html.parser') # Remove topbar (Blog, Search, etc) topbar = soup.find(id="topbar").decompose() # Remove "Subdirectories" soup.find(id="pkg-subdirectories").decompose() soup.find(attrs={"class":"pkg-dir"}).decompose() for t in soup.find_all(href="#pkg-subdirectories"): t.decompose() # Use golang.org for external resources (such as CSS and JS) for t in soup.find_all(href=re.compile(r'^/')): t['href'] = '//golang.org' + t['href'] for t in soup.find_all(src=re.compile(r'^/')): t['src'] = '//golang.org' + t['src'] # Write updated HTML to stdout print(soup.prettify().encode('utf-8'))
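# Example invocation (an assumed workflow; requires the godoc tool and the
# beautifulsoup4 package to be installed, run from the repository root):
#
#   python mk/doc-gen.py > confluent-kafka-go-docs.html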