confluent-kafka-1.1.0/0000755000076500000240000000000013513111321014644 5ustar ryanstaff00000000000000confluent-kafka-1.1.0/LICENSE.txt0000644000076500000240000010033013446646122016505 0ustar ryanstaff00000000000000############################################################################## # The source distribution of confluent-kafka-python is covered by the # # Apache 2.0 license. # ############################################################################## Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "{}" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright {yyyy} {name of copyright owner} Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ############################################################################## # The binary wheel distribution of confluent-kafka-python contains # # additional software with the following licenses: # ############################################################################## ############################################################################## # OpenSSL # ############################################################################## OpenSSL License --------------- /* ==================================================================== * Copyright (c) 1998-2017 The OpenSSL Project. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * 3. All advertising materials mentioning features or use of this * software must display the following acknowledgment: * "This product includes software developed by the OpenSSL Project * for use in the OpenSSL Toolkit. (http://www.openssl.org/)" * * 4. The names "OpenSSL Toolkit" and "OpenSSL Project" must not be used to * endorse or promote products derived from this software without * prior written permission. For written permission, please contact * openssl-core@openssl.org. * * 5. Products derived from this software may not be called "OpenSSL" * nor may "OpenSSL" appear in their names without prior written * permission of the OpenSSL Project. * * 6. Redistributions of any form whatsoever must retain the following * acknowledgment: * "This product includes software developed by the OpenSSL Project * for use in the OpenSSL Toolkit (http://www.openssl.org/)" * * THIS SOFTWARE IS PROVIDED BY THE OpenSSL PROJECT ``AS IS'' AND ANY * EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE OpenSSL PROJECT OR * ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED * OF THE POSSIBILITY OF SUCH DAMAGE. * ==================================================================== * * This product includes cryptographic software written by Eric Young * (eay@cryptsoft.com). This product includes software written by Tim * Hudson (tjh@cryptsoft.com). * */ Original SSLeay License ----------------------- /* Copyright (C) 1995-1998 Eric Young (eay@cryptsoft.com) * All rights reserved. * * This package is an SSL implementation written * by Eric Young (eay@cryptsoft.com). * The implementation was written so as to conform with Netscapes SSL. * * This library is free for commercial and non-commercial use as long as * the following conditions are aheared to. The following conditions * apply to all code found in this distribution, be it the RC4, RSA, * lhash, DES, etc., code; not just the SSL code. The SSL documentation * included with this distribution is covered by the same copyright terms * except that the holder is Tim Hudson (tjh@cryptsoft.com). * * Copyright remains Eric Young's, and as such any Copyright notices in * the code are not to be removed. * If this package is used in a product, Eric Young should be given attribution * as the author of the parts of the library used. * This can be in the form of a textual message at program startup or * in documentation (online or textual) provided with the package. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. All advertising materials mentioning features or use of this software * must display the following acknowledgement: * "This product includes cryptographic software written by * Eric Young (eay@cryptsoft.com)" * The word 'cryptographic' can be left out if the rouines from the library * being used are not cryptographic related :-). * 4. If you include any Windows specific code (or a derivative thereof) from * the apps directory (application code) you must include an acknowledgement: * "This product includes software written by Tim Hudson (tjh@cryptsoft.com)" * * THIS SOFTWARE IS PROVIDED BY ERIC YOUNG ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * The licence and distribution terms for any publically available version or * derivative of this code cannot be changed. i.e. this code cannot simply be * copied and put under another distribution licence * [including the GNU Public Licence.] */ ############################################################################## # zlib # ############################################################################## Copyright (C) 1995-1998 Jean-loup Gailly and Mark Adler This software is provided 'as-is', without any express or implied warranty. In no event will the authors be held liable for any damages arising from the use of this software. Permission is granted to anyone to use this software for any purpose, including commercial applications, and to alter it and redistribute it freely, subject to the following restrictions: 1. The origin of this software must not be misrepresented; you must not claim that you wrote the original software. If you use this software in a product, an acknowledgment in the product documentation would be appreciated but is not required. 2. Altered source versions must be plainly marked as such, and must not be misrepresented as being the original software. 3. This notice may not be removed or altered from any source distribution. Jean-loup Gailly Mark Adler jloup@gzip.org madler@alumni.caltech.edu ############################################################################## # librdkafka licenses # ############################################################################## LICENSE -------------------------------------------------------------- librdkafka - Apache Kafka C driver library Copyright (c) 2012, Magnus Edenhill All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. LICENSE.crc32c -------------------------------------------------------------- # For src/crc32c.c copied (with modifications) from # http://stackoverflow.com/a/17646775/1821055 /* crc32c.c -- compute CRC-32C using the Intel crc32 instruction * Copyright (C) 2013 Mark Adler * Version 1.1 1 Aug 2013 Mark Adler */ /* This software is provided 'as-is', without any express or implied warranty. In no event will the author be held liable for any damages arising from the use of this software. Permission is granted to anyone to use this software for any purpose, including commercial applications, and to alter it and redistribute it freely, subject to the following restrictions: 1. The origin of this software must not be misrepresented; you must not claim that you wrote the original software. If you use this software in a product, an acknowledgment in the product documentation would be appreciated but is not required. 2. Altered source versions must be plainly marked as such, and must not be misrepresented as being the original software. 3. This notice may not be removed or altered from any source distribution. Mark Adler madler@alumni.caltech.edu */ LICENSE.lz4 -------------------------------------------------------------- src/xxhash.[ch] src/lz4*.[ch]: git@github.com:lz4/lz4.git e2827775ee80d2ef985858727575df31fc60f1f3 LZ4 Library Copyright (c) 2011-2016, Yann Collet All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. LICENSE.pycrc -------------------------------------------------------------- The following license applies to the files rdcrc32.c and rdcrc32.h which have been generated by the pycrc tool. ============================================================================ Copyright (c) 2006-2012, Thomas Pircher Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. LICENSE.queue -------------------------------------------------------------- For sys/queue.h: * Copyright (c) 1991, 1993 * The Regents of the University of California. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 4. Neither the name of the University nor the names of its contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * @(#)queue.h 8.5 (Berkeley) 8/20/94 * $FreeBSD$ LICENSE.regexp -------------------------------------------------------------- regexp.c and regexp.h from https://github.com/ccxvii/minilibs sha 875c33568b5a4aa4fb3dd0c52ea98f7f0e5ca684 " These libraries are in the public domain (or the equivalent where that is not possible). You can do anything you want with them. You have no legal obligation to do anything else, although I appreciate attribution. " LICENSE.snappy -------------------------------------------------------------- ###################################################################### # LICENSE.snappy covers files: snappy.c, snappy.h, snappy_compat.h # # originally retrieved from http://github.com/andikleen/snappy-c # # git revision 8015f2d28739b9a6076ebaa6c53fe27bc238d219 # ###################################################################### The snappy-c code is under the same license as the original snappy source Copyright 2011 Intel Corporation All Rights Reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of Intel Corporation nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. LICENSE.tinycthread -------------------------------------------------------------- From https://github.com/tinycthread/tinycthread/README.txt c57166cd510ffb5022dd5f127489b131b61441b9 License ------- Copyright (c) 2012 Marcus Geelnard 2013-2014 Evan Nemerson This software is provided 'as-is', without any express or implied warranty. In no event will the authors be held liable for any damages arising from the use of this software. Permission is granted to anyone to use this software for any purpose, including commercial applications, and to alter it and redistribute it freely, subject to the following restrictions: 1. The origin of this software must not be misrepresented; you must not claim that you wrote the original software. If you use this software in a product, an acknowledgment in the product documentation would be appreciated but is not required. 2. Altered source versions must be plainly marked as such, and must not be misrepresented as being the original software. 3. This notice may not be removed or altered from any source distribution. LICENSE.wingetopt -------------------------------------------------------------- For the files wingetopt.c wingetopt.h downloaded from https://github.com/alex85k/wingetopt /* * Copyright (c) 2002 Todd C. Miller * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. * * Sponsored in part by the Defense Advanced Research Projects * Agency (DARPA) and Air Force Research Laboratory, Air Force * Materiel Command, USAF, under agreement number F39502-99-1-0512. */ /*- * Copyright (c) 2000 The NetBSD Foundation, Inc. * All rights reserved. * * This code is derived from software contributed to The NetBSD Foundation * by Dieter Baron and Thomas Klausner. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE * POSSIBILITY OF SUCH DAMAGE. */ confluent-kafka-1.1.0/MANIFEST.in0000644000076500000240000000014713446646122016425 0ustar ryanstaff00000000000000include README.md include LICENSE.txt include test-requirements.txt include confluent_kafka/src/*.[ch] confluent-kafka-1.1.0/PKG-INFO0000644000076500000240000000051313513111321015740 0ustar ryanstaff00000000000000Metadata-Version: 2.1 Name: confluent-kafka Version: 1.1.0 Summary: Confluent's Python client for Apache Kafka Home-page: https://github.com/confluentinc/confluent-kafka-python Author: Confluent Inc Author-email: support@confluent.io License: UNKNOWN Description: UNKNOWN Platform: UNKNOWN Provides-Extra: dev Provides-Extra: avro confluent-kafka-1.1.0/README.md0000644000076500000240000002014513446646122016146 0ustar ryanstaff00000000000000Confluent's Python Client for Apache KafkaTM ======================================================= **confluent-kafka-python** is Confluent's Python client for [Apache Kafka](http://kafka.apache.org/) and the [Confluent Platform](https://www.confluent.io/product/compare/). Features: - **High performance** - confluent-kafka-python is a lightweight wrapper around [librdkafka](https://github.com/edenhill/librdkafka), a finely tuned C client. - **Reliability** - There are a lot of details to get right when writing an Apache Kafka client. We get them right in one place (librdkafka) and leverage this work across all of our clients (also [confluent-kafka-go](https://github.com/confluentinc/confluent-kafka-go) and [confluent-kafka-dotnet](https://github.com/confluentinc/confluent-kafka-dotnet)). - **Supported** - Commercial support is offered by [Confluent](https://confluent.io/). - **Future proof** - Confluent, founded by the creators of Kafka, is building a [streaming platform](https://www.confluent.io/product/compare/) with Apache Kafka at its core. It's high priority for us that client features keep pace with core Apache Kafka and components of the [Confluent Platform](https://www.confluent.io/product/compare/). The Python bindings provides a high-level Producer and Consumer with support for the balanced consumer groups of Apache Kafka >= 0.9. See the [API documentation](http://docs.confluent.io/current/clients/confluent-kafka-python/index.html) for more info. **License**: [Apache License v2.0](http://www.apache.org/licenses/LICENSE-2.0) Usage ===== **Producer:** ```python from confluent_kafka import Producer p = Producer({'bootstrap.servers': 'mybroker1,mybroker2'}) def delivery_report(err, msg): """ Called once for each message produced to indicate delivery result. Triggered by poll() or flush(). """ if err is not None: print('Message delivery failed: {}'.format(err)) else: print('Message delivered to {} [{}]'.format(msg.topic(), msg.partition())) for data in some_data_source: # Trigger any available delivery report callbacks from previous produce() calls p.poll(0) # Asynchronously produce a message, the delivery report callback # will be triggered from poll() above, or flush() below, when the message has # been successfully delivered or failed permanently. p.produce('mytopic', data.encode('utf-8'), callback=delivery_report) # Wait for any outstanding messages to be delivered and delivery report # callbacks to be triggered. p.flush() ``` **High-level Consumer:** ```python from confluent_kafka import Consumer, KafkaError c = Consumer({ 'bootstrap.servers': 'mybroker', 'group.id': 'mygroup', 'auto.offset.reset': 'earliest' }) c.subscribe(['mytopic']) while True: msg = c.poll(1.0) if msg is None: continue if msg.error(): print("Consumer error: {}".format(msg.error())) continue print('Received message: {}'.format(msg.value().decode('utf-8'))) c.close() ``` **AvroProducer** ```python from confluent_kafka import avro from confluent_kafka.avro import AvroProducer value_schema_str = """ { "namespace": "my.test", "name": "value", "type": "record", "fields" : [ { "name" : "name", "type" : "string" } ] } """ key_schema_str = """ { "namespace": "my.test", "name": "key", "type": "record", "fields" : [ { "name" : "name", "type" : "string" } ] } """ value_schema = avro.loads(value_schema_str) key_schema = avro.loads(key_schema_str) value = {"name": "Value"} key = {"name": "Key"} avroProducer = AvroProducer({ 'bootstrap.servers': 'mybroker,mybroker2', 'schema.registry.url': 'http://schem_registry_host:port' }, default_key_schema=key_schema, default_value_schema=value_schema) avroProducer.produce(topic='my_topic', value=value, key=key) avroProducer.flush() ``` **AvroConsumer** ```python from confluent_kafka import KafkaError from confluent_kafka.avro import AvroConsumer from confluent_kafka.avro.serializer import SerializerError c = AvroConsumer({ 'bootstrap.servers': 'mybroker,mybroker2', 'group.id': 'groupid', 'schema.registry.url': 'http://127.0.0.1:8081'}) c.subscribe(['my_topic']) while True: try: msg = c.poll(10) except SerializerError as e: print("Message deserialization failed for {}: {}".format(msg, e)) break if msg is None: continue if msg.error(): print("AvroConsumer error: {}".format(msg.error())) continue print(msg.value()) c.close() ``` See the [examples](examples) directory for more examples, including [how to configure](examples/confluent_cloud.py) the python client for use with [Confluent Cloud](https://www.confluent.io/confluent-cloud/). Install ======= **NOTE:** The pre-built Linux wheels do NOT contain SASL Kerberos/GSSAPI support. If you need SASL Kerberos/GSSAPI support you must install librdkafka and its dependencies using the repositories below and then build confluent-kafka using the command in the "Install from source from PyPi" section below. **Install self-contained binary wheels for OSX and Linux from PyPi:** $ pip install confluent-kafka **Install AvroProducer and AvroConsumer:** $ pip install confluent-kafka[avro] **Install from source from PyPi** *(requires librdkafka + dependencies to be installed separately)*: $ pip install --no-binary :all: confluent-kafka For source install, see *Prerequisites* below. Broker Compatibility ==================== The Python client (as well as the underlying C library librdkafka) supports all broker versions >= 0.8. But due to the nature of the Kafka protocol in broker versions 0.8 and 0.9 it is not safe for a client to assume what protocol version is actually supported by the broker, thus you will need to hint the Python client what protocol version it may use. This is done through two configuration settings: * `broker.version.fallback=YOUR_BROKER_VERSION` (default 0.9.0.1) * `api.version.request=true|false` (default true) When using a Kafka 0.10 broker or later you don't need to do anything (`api.version.request=true` is the default). If you use Kafka broker 0.9 or 0.8 you must set `api.version.request=false` and set `broker.version.fallback` to your broker version, e.g `broker.version.fallback=0.9.0.1`. More info here: https://github.com/edenhill/librdkafka/wiki/Broker-version-compatibility Prerequisites ============= * Python >= 2.7 or Python 3.x * [librdkafka](https://github.com/edenhill/librdkafka) >= 0.11.5 (latest release is embedded in wheels) librdkafka is embedded in the macosx manylinux wheels, for other platforms, SASL Kerberos/GSSAPI support or when a specific version of librdkafka is desired, following these guidelines: * For **Debian/Ubuntu** based systems, add this APT repo and then do `sudo apt-get install librdkafka-dev python-dev`: http://docs.confluent.io/current/installation.html#installation-apt * For **RedHat** and **RPM**-based distros, add this YUM repo and then do `sudo yum install librdkafka-devel python-devel`: http://docs.confluent.io/current/installation.html#rpm-packages-via-yum * On **OSX**, use **homebrew** and do `brew install librdkafka` Build ===== $ python setup.py build If librdkafka is installed in a non-standard location provide the include and library directories with: $ C_INCLUDE_PATH=/path/to/include LIBRARY_PATH=/path/to/lib python setup.py ... Tests ===== **Run unit-tests:** In order to run full test suite, simply execute: $ tox -r **NOTE**: Requires `tox` (please install with `pip install tox`), several supported versions of Python on your path, and `librdkafka` [installed](tools/bootstrap-librdkafka.sh) into `tmp-build`. **Integration tests:** See [tests/README.md](tests/README.md) for instructions on how to run integration tests. Generate Documentation ====================== Install sphinx and sphinx_rtd_theme packages: $ pip install sphinx sphinx_rtd_theme Build HTML docs: $ make docs or: $ python setup.py build_sphinx Documentation will be generated in `docs/_build/`. confluent-kafka-1.1.0/confluent_kafka/0000755000076500000240000000000013513111321017776 5ustar ryanstaff00000000000000confluent-kafka-1.1.0/confluent_kafka/__init__.py0000644000076500000240000000641213510607721022125 0ustar ryanstaff00000000000000__all__ = ['cimpl', 'admin', 'avro', 'kafkatest'] from .cimpl import (Consumer, # noqa KafkaError, KafkaException, Message, Producer, TopicPartition, libversion, version, TIMESTAMP_NOT_AVAILABLE, TIMESTAMP_CREATE_TIME, TIMESTAMP_LOG_APPEND_TIME, OFFSET_BEGINNING, OFFSET_END, OFFSET_STORED, OFFSET_INVALID) __version__ = version()[0] class ThrottleEvent (object): """ ThrottleEvent contains details about a throttled request. Set up a throttle callback by setting the ``throttle_cb`` configuration property to a callable that takes a ThrottleEvent object as its only argument. The callback will be triggered from poll(), consume() or flush() when a request has been throttled by the broker. This class is typically not user instantiated. :ivar str broker_name: The hostname of the broker which throttled the request :ivar int broker_id: The broker id :ivar float throttle_time: The amount of time (in seconds) the broker throttled (delayed) the request """ def __init__(self, broker_name, broker_id, throttle_time): self.broker_name = broker_name self.broker_id = broker_id self.throttle_time = throttle_time def __str__(self): return "{}/{} throttled for {} ms".format(self.broker_name, self.broker_id, int(self.throttle_time * 1000)) def _resolve_plugins(plugins): """ Resolve embedded plugins from the wheel's library directory. For internal module use only. :param str plugins: The plugin.library.paths value """ import os from sys import platform # Location of __init__.py and the embedded library directory basedir = os.path.dirname(__file__) if platform in ('win32', 'cygwin'): paths_sep = ';' ext = '.dll' libdir = basedir elif platform in ('linux', 'linux2'): paths_sep = ':' ext = '.so' libdir = os.path.join(basedir, '.libs') elif platform == 'darwin': paths_sep = ':' ext = '.dylib' libdir = os.path.join(basedir, '.dylibs') else: # Unknown platform, there are probably no embedded plugins. return plugins if not os.path.isdir(libdir): # No embedded library directory, probably not a wheel installation. return plugins resolved = [] for plugin in plugins.split(paths_sep): if '/' in plugin or '\\' in plugin: # Path specified, leave unchanged resolved.append(plugin) continue # See if the plugin can be found in the wheel's # embedded library directory. # The user might not have supplied a file extension, so try both. good = None for file in [plugin, plugin + ext]: fpath = os.path.join(libdir, file) if os.path.isfile(fpath): good = fpath break if good is not None: resolved.append(good) else: resolved.append(plugin) return paths_sep.join(resolved) confluent-kafka-1.1.0/confluent_kafka/admin/0000755000076500000240000000000013513111321021066 5ustar ryanstaff00000000000000confluent-kafka-1.1.0/confluent_kafka/admin/__init__.py0000644000076500000240000005466713446646122023242 0ustar ryanstaff00000000000000""" Kafka Admin client: create, view, alter, delete topics and resources. """ from ..cimpl import (KafkaException, # noqa _AdminClientImpl, NewTopic, NewPartitions, CONFIG_SOURCE_UNKNOWN_CONFIG, CONFIG_SOURCE_DYNAMIC_TOPIC_CONFIG, CONFIG_SOURCE_DYNAMIC_BROKER_CONFIG, CONFIG_SOURCE_DYNAMIC_DEFAULT_BROKER_CONFIG, CONFIG_SOURCE_STATIC_BROKER_CONFIG, CONFIG_SOURCE_DEFAULT_CONFIG, RESOURCE_UNKNOWN, RESOURCE_ANY, RESOURCE_TOPIC, RESOURCE_GROUP, RESOURCE_BROKER) import concurrent.futures import functools from enum import Enum class ConfigSource(Enum): """ Config sources returned in ConfigEntry by `describe_configs()`. """ UNKNOWN_CONFIG = CONFIG_SOURCE_UNKNOWN_CONFIG #: DYNAMIC_TOPIC_CONFIG = CONFIG_SOURCE_DYNAMIC_TOPIC_CONFIG #: DYNAMIC_BROKER_CONFIG = CONFIG_SOURCE_DYNAMIC_BROKER_CONFIG #: DYNAMIC_DEFAULT_BROKER_CONFIG = CONFIG_SOURCE_DYNAMIC_DEFAULT_BROKER_CONFIG #: STATIC_BROKER_CONFIG = CONFIG_SOURCE_STATIC_BROKER_CONFIG #: DEFAULT_CONFIG = CONFIG_SOURCE_DEFAULT_CONFIG #: class ConfigEntry(object): """ ConfigEntry is returned by describe_configs() for each configuration entry for the specified resource. This class is typically not user instantiated. :ivar str name: Configuration property name. :ivar str value: Configuration value (or None if not set or is_sensitive==True). :ivar ConfigSource source: Configuration source. :ivar bool is_read_only: Indicates if configuration property is read-only. :ivar bool is_default: Indicates if configuration property is using its default value. :ivar bool is_sensitive: Indicates if configuration property value contains sensitive information (such as security settings), in which case .value is None. :ivar bool is_synonym: Indicates if configuration property is a synonym for the parent configuration entry. :ivar list synonyms: A ConfigEntry list of synonyms and alternate sources for this configuration property. """ def __init__(self, name, value, source=ConfigSource.UNKNOWN_CONFIG, is_read_only=False, is_default=False, is_sensitive=False, is_synonym=False, synonyms=[]): """ This class is typically not user instantiated. """ super(ConfigEntry, self).__init__() self.name = name self.value = value self.source = source self.is_read_only = bool(is_read_only) self.is_default = bool(is_default) self.is_sensitive = bool(is_sensitive) self.is_synonym = bool(is_synonym) self.synonyms = synonyms def __repr__(self): return "ConfigEntry(%s=\"%s\")" % (self.name, self.value) def __str__(self): return "%s=\"%s\"" % (self.name, self.value) @functools.total_ordering class ConfigResource(object): """ Class representing resources that have configs. Instantiate with a resource type and a resource name. """ class Type(Enum): """ ConfigResource.Type depicts the type of a Kafka resource. """ UNKNOWN = RESOURCE_UNKNOWN #: Resource type is not known or not set. ANY = RESOURCE_ANY #: Match any resource, used for lookups. TOPIC = RESOURCE_TOPIC #: Topic resource. Resource name is topic name GROUP = RESOURCE_GROUP #: Group resource. Resource name is group.id BROKER = RESOURCE_BROKER #: Broker resource. Resource name is broker id def __init__(self, restype, name, set_config=None, described_configs=None, error=None): """ :param ConfigResource.Type restype: Resource type. :param str name: Resource name, depending on restype. For RESOURCE_BROKER the resource name is the broker id. :param dict set_config: Configuration to set/overwrite. Dict of str, str. :param dict described_configs: For internal use only. :param KafkaError error: For internal use only. """ super(ConfigResource, self).__init__() if name is None: raise ValueError("Expected resource name to be a string") if type(restype) == str: # Allow resource type to be specified as case-insensitive string, for convenience. try: restype = ConfigResource.Type[restype.upper()] except KeyError: raise ValueError("Unknown resource type \"%s\": should be a ConfigResource.Type" % restype) elif type(restype) == int: # The C-code passes restype as an int, convert to Type. restype = ConfigResource.Type(restype) self.restype = restype self.restype_int = int(self.restype.value) # for the C code self.name = name if set_config is not None: self.set_config_dict = set_config.copy() else: self.set_config_dict = dict() self.configs = described_configs self.error = error def __repr__(self): if self.error is not None: return "ConfigResource(%s,%s,%r)" % (self.restype, self.name, self.error) else: return "ConfigResource(%s,%s)" % (self.restype, self.name) def __hash__(self): return hash((self.restype, self.name)) def __lt__(self, other): if self.restype < other.restype: return True return self.name.__lt__(other.name) def __eq__(self, other): return self.restype == other.restype and self.name == other.name def __len__(self): """ :rtype: int :returns: number of configuration entries/operations """ return len(self.set_config_dict) def set_config(self, name, value, overwrite=True): """ Set/Overwrite configuration entry Any configuration properties that are not included will be reverted to their default values. As a workaround use describe_configs() to retrieve the current configuration and overwrite the settings you want to change. :param str name: Configuration property name :param str value: Configuration value :param bool overwrite: If True overwrite entry if already exists (default). If False do nothing if entry already exists. """ if not overwrite and name in self.set_config_dict: return self.set_config_dict[name] = value class AdminClient (_AdminClientImpl): """ The Kafka AdminClient provides admin operations for Kafka brokers, topics, groups, and other resource types supported by the broker. The Admin API methods are asynchronous and returns a dict of concurrent.futures.Future objects keyed by the entity. The entity is a topic name for create_topics(), delete_topics(), create_partitions(), and a ConfigResource for alter_configs(), describe_configs(). All the futures for a single API call will currently finish/fail at the same time (backed by the same protocol request), but this might change in future versions of the client. See examples/adminapi.py for example usage. For more information see the Java Admin API documentation: https://docs.confluent.io/current/clients/javadocs/org/apache/kafka/clients/admin/package-frame.html Requires broker version v0.11.0.0 or later. """ def __init__(self, conf): """ Create a new AdminClient using the provided configuration dictionary. The AdminClient is a standard Kafka protocol client, supporting the standard librdkafka configuration properties as specified at https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md At least 'bootstrap.servers' should be configured. """ super(AdminClient, self).__init__(conf) @staticmethod def _make_topics_result(f, futmap): """ Map per-topic results to per-topic futures in futmap. The result value of each (successful) future is None. """ try: result = f.result() for topic, error in result.items(): fut = futmap.get(topic, None) if fut is None: raise RuntimeError("Topic {} not found in future-map: {}".format(topic, futmap)) if error is not None: # Topic-level exception fut.set_exception(KafkaException(error)) else: # Topic-level success fut.set_result(None) except Exception as e: # Request-level exception, raise the same for all topics for topic, fut in futmap.items(): fut.set_exception(e) @staticmethod def _make_resource_result(f, futmap): """ Map per-resource results to per-resource futures in futmap. The result value of each (successful) future is a ConfigResource. """ try: result = f.result() for resource, configs in result.items(): fut = futmap.get(resource, None) if fut is None: raise RuntimeError("Resource {} not found in future-map: {}".format(resource, futmap)) if resource.error is not None: # Resource-level exception fut.set_exception(KafkaException(resource.error)) else: # Resource-level success # configs will be a dict for describe_configs() # and None for alter_configs() fut.set_result(configs) except Exception as e: # Request-level exception, raise the same for all resources for resource, fut in futmap.items(): fut.set_exception(e) @staticmethod def _make_futures(futmap_keys, class_check, make_result_fn): """ Create futures and a futuremap for the keys in futmap_keys, and create a request-level future to be bassed to the C API. """ futmap = {} for key in futmap_keys: if class_check is not None and not isinstance(key, class_check): raise ValueError("Expected list of {}".format(type(class_check))) futmap[key] = concurrent.futures.Future() if not futmap[key].set_running_or_notify_cancel(): raise RuntimeError("Future was cancelled prematurely") # Create an internal future for the entire request, # this future will trigger _make_..._result() and set result/exception # per topic,future in futmap. f = concurrent.futures.Future() f.add_done_callback(lambda f: make_result_fn(f, futmap)) if not f.set_running_or_notify_cancel(): raise RuntimeError("Future was cancelled prematurely") return f, futmap def create_topics(self, new_topics, **kwargs): """ Create new topics in cluster. The future result() value is None. :param list(NewTopic) new_topics: New topics to be created. :param float operation_timeout: Set broker's operation timeout in seconds, controlling how long the CreateTopics request will block on the broker waiting for the topic creation to propagate in the cluster. A value of 0 returns immediately. Default: 0 :param float request_timeout: Set the overall request timeout in seconds, including broker lookup, request transmission, operation time on broker, and response. Default: `socket.timeout.ms*1000.0` :param bool validate_only: Tell broker to only validate the request, without creating the topic. Default: False :returns: a dict of futures for each topic, keyed by the topic name. :rtype: dict() :raises KafkaException: Operation failed locally or on broker. :raises TypeException: Invalid input. :raises ValueException: Invalid input. """ f, futmap = AdminClient._make_futures([x.topic for x in new_topics], None, AdminClient._make_topics_result) super(AdminClient, self).create_topics(new_topics, f, **kwargs) return futmap def delete_topics(self, topics, **kwargs): """ Delete topics. The future result() value is None. :param list(str) topics: Topics to mark for deletion. :param float operation_timeout: Set broker's operation timeout in seconds, controlling how long the DeleteTopics request will block on the broker waiting for the topic deletion to propagate in the cluster. A value of 0 returns immediately. Default: 0 :param float request_timeout: Set the overall request timeout in seconds, including broker lookup, request transmission, operation time on broker, and response. Default: `socket.timeout.ms*1000.0` :returns: a dict of futures for each topic, keyed by the topic name. :rtype: dict() :raises KafkaException: Operation failed locally or on broker. :raises TypeException: Invalid input. :raises ValueException: Invalid input. """ f, futmap = AdminClient._make_futures(topics, None, AdminClient._make_topics_result) super(AdminClient, self).delete_topics(topics, f, **kwargs) return futmap def create_partitions(self, new_partitions, **kwargs): """ Create additional partitions for the given topics. The future result() value is None. :param list(NewPartitions) new_partitions: New partitions to be created. :param float operation_timeout: Set broker's operation timeout in seconds, controlling how long the CreatePartitions request will block on the broker waiting for the partition creation to propagate in the cluster. A value of 0 returns immediately. Default: 0 :param float request_timeout: Set the overall request timeout in seconds, including broker lookup, request transmission, operation time on broker, and response. Default: `socket.timeout.ms*1000.0` :param bool validate_only: Tell broker to only validate the request, without creating the partitions. Default: False :returns: a dict of futures for each topic, keyed by the topic name. :rtype: dict() :raises KafkaException: Operation failed locally or on broker. :raises TypeException: Invalid input. :raises ValueException: Invalid input. """ f, futmap = AdminClient._make_futures([x.topic for x in new_partitions], None, AdminClient._make_topics_result) super(AdminClient, self).create_partitions(new_partitions, f, **kwargs) return futmap def describe_configs(self, resources, **kwargs): """ Get configuration for the specified resources. The future result() value is a dict(). :warning: Multiple resources and resource types may be requested, but at most one resource of type RESOURCE_BROKER is allowed per call since these resource requests must be sent to the broker specified in the resource. :param list(ConfigResource) resources: Resources to get configuration for. :param float request_timeout: Set the overall request timeout in seconds, including broker lookup, request transmission, operation time on broker, and response. Default: `socket.timeout.ms*1000.0` :param bool validate_only: Tell broker to only validate the request, without creating the partitions. Default: False :returns: a dict of futures for each resource, keyed by the ConfigResource. :rtype: dict() :raises KafkaException: Operation failed locally or on broker. :raises TypeException: Invalid input. :raises ValueException: Invalid input. """ f, futmap = AdminClient._make_futures(resources, ConfigResource, AdminClient._make_resource_result) super(AdminClient, self).describe_configs(resources, f, **kwargs) return futmap def alter_configs(self, resources, **kwargs): """ Update configuration values for the specified resources. Updates are not transactional so they may succeed for a subset of the provided resources while the others fail. The configuration for a particular resource is updated atomically, replacing the specified values while reverting unspecified configuration entries to their default values. The future result() value is None. :warning: alter_configs() will replace all existing configuration for the provided resources with the new configuration given, reverting all other configuration for the resource back to their default values. :warning: Multiple resources and resource types may be specified, but at most one resource of type RESOURCE_BROKER is allowed per call since these resource requests must be sent to the broker specified in the resource. :param list(ConfigResource) resources: Resources to update configuration for. :param float request_timeout: Set the overall request timeout in seconds, including broker lookup, request transmission, operation time on broker, and response. Default: `socket.timeout.ms*1000.0`. :param bool validate_only: Tell broker to only validate the request, without altering the configuration. Default: False :returns: a dict of futures for each resource, keyed by the ConfigResource. :rtype: dict() :raises KafkaException: Operation failed locally or on broker. :raises TypeException: Invalid input. :raises ValueException: Invalid input. """ f, futmap = AdminClient._make_futures(resources, ConfigResource, AdminClient._make_resource_result) super(AdminClient, self).alter_configs(resources, f, **kwargs) return futmap class ClusterMetadata (object): """ ClusterMetadata as returned by list_topics() contains information about the Kafka cluster, brokers, and topics. This class is typically not user instantiated. :ivar str cluster_id: Cluster id string, if supported by broker, else None. :ivar id controller_id: Current controller broker id, or -1. :ivar dict brokers: Map of brokers indexed by the int broker id. Value is BrokerMetadata object. :ivar dict topics: Map of topics indexed by the topic name. Value is TopicMetadata object. :ivar int orig_broker_id: The broker this metadata originated from. :ivar str orig_broker_name: Broker name/address this metadata originated from. """ def __init__(self): self.cluster_id = None self.controller_id = -1 self.brokers = {} self.topics = {} self.orig_broker_id = -1 self.orig_broker_name = None def __repr__(self): return "ClusterMetadata({})".format(self.cluster_id) def __str__(self): return str(self.cluster_id) class BrokerMetadata (object): """ BrokerMetadata contains information about a Kafka broker. This class is typically not user instantiated. :ivar int id: Broker id. :ivar str host: Broker hostname. :ivar int port: Broker port. """ def __init__(self): self.id = -1 self.host = None self.port = -1 def __repr__(self): return "BrokerMetadata({}, {}:{})".format(self.id, self.host, self.port) def __str__(self): return "{}:{}/{}".format(self.host, self.port, self.id) class TopicMetadata (object): """ TopicMetadata contains information about a Kafka topic. This class is typically not user instantiated. :ivar str topic: Topic name. :ivar dict partitions: Map of partitions indexed by partition id. Value is PartitionMetadata object. :ivar KafkaError error: Topic error, or None. Value is a KafkaError object. """ def __init__(self): self.topic = None self.partitions = {} self.error = None def __repr__(self): if self.error is not None: return "TopicMetadata({}, {} partitions, {})".format(self.topic, len(self.partitions), self.error) else: return "TopicMetadata({}, {} partitions)".format(self.topic, len(self.partitions)) def __str__(self): return self.topic class PartitionMetadata (object): """ PartitionsMetadata contains information about a Kafka partition. This class is typically not user instantiated. :ivar int id: Partition id. :ivar int leader: Current leader broker for this partition, or -1. :ivar list(int) replicas: List of replica broker ids for this partition. :ivar list(int) isrs: List of in-sync-replica broker ids for this partition. :ivar KafkaError error: Partition error, or None. Value is a KafkaError object. :warning: Depending on cluster state the broker ids referenced in leader, replicas and isrs may temporarily not be reported in ClusterMetadata.brokers. Always check the availability of a broker id in the brokers dict. """ def __init__(self): self.id = -1 self.leader = -1 self.replicas = [] self.isrs = [] self.error = None def __repr__(self): if self.error is not None: return "PartitionMetadata({}, {})".format(self.id, self.error) else: return "PartitionMetadata({})".format(self.id) def __str__(self): return "{}".format(self.id) confluent-kafka-1.1.0/confluent_kafka/avro/0000755000076500000240000000000013513111321020745 5ustar ryanstaff00000000000000confluent-kafka-1.1.0/confluent_kafka/avro/__init__.py0000644000076500000240000001554513446646122023111 0ustar ryanstaff00000000000000""" Avro schema registry module: Deals with encoding and decoding of messages with avro schemas """ from confluent_kafka import Producer, Consumer from confluent_kafka.avro.error import ClientError from confluent_kafka.avro.load import load, loads # noqa from confluent_kafka.avro.cached_schema_registry_client import CachedSchemaRegistryClient from confluent_kafka.avro.serializer import (SerializerError, # noqa KeySerializerError, ValueSerializerError) from confluent_kafka.avro.serializer.message_serializer import MessageSerializer class AvroProducer(Producer): """ Kafka Producer client which does avro schema encoding to messages. Handles schema registration, Message serialization. Constructor takes below parameters. :param dict config: Config parameters containing url for schema registry (``schema.registry.url``) and the standard Kafka client configuration (``bootstrap.servers`` et.al). :param str default_key_schema: Optional default avro schema for key :param str default_value_schema: Optional default avro schema for value """ def __init__(self, config, default_key_schema=None, default_value_schema=None, schema_registry=None): sr_conf = {key.replace("schema.registry.", ""): value for key, value in config.items() if key.startswith("schema.registry")} if sr_conf.get("basic.auth.credentials.source") == 'SASL_INHERIT': sr_conf['sasl.mechanisms'] = config.get('sasl.mechanisms', '') sr_conf['sasl.username'] = config.get('sasl.username', '') sr_conf['sasl.password'] = config.get('sasl.password', '') ap_conf = {key: value for key, value in config.items() if not key.startswith("schema.registry")} if schema_registry is None: schema_registry = CachedSchemaRegistryClient(sr_conf) elif sr_conf.get("url", None) is not None: raise ValueError("Cannot pass schema_registry along with schema.registry.url config") super(AvroProducer, self).__init__(ap_conf) self._serializer = MessageSerializer(schema_registry) self._key_schema = default_key_schema self._value_schema = default_value_schema def produce(self, **kwargs): """ Asynchronously sends message to Kafka by encoding with specified or default avro schema. :param str topic: topic name :param object value: An object to serialize :param str value_schema: Avro schema for value :param object key: An object to serialize :param str key_schema: Avro schema for key Plus any other parameters accepted by confluent_kafka.Producer.produce :raises SerializerError: On serialization failure :raises BufferError: If producer queue is full. :raises KafkaException: For other produce failures. """ # get schemas from kwargs if defined key_schema = kwargs.pop('key_schema', self._key_schema) value_schema = kwargs.pop('value_schema', self._value_schema) topic = kwargs.pop('topic', None) if not topic: raise ClientError("Topic name not specified.") value = kwargs.pop('value', None) key = kwargs.pop('key', None) if value is not None: if value_schema: value = self._serializer.encode_record_with_schema(topic, value_schema, value) else: raise ValueSerializerError("Avro schema required for values") if key is not None: if key_schema: key = self._serializer.encode_record_with_schema(topic, key_schema, key, True) else: raise KeySerializerError("Avro schema required for key") super(AvroProducer, self).produce(topic, value, key, **kwargs) class AvroConsumer(Consumer): """ Kafka Consumer client which does avro schema decoding of messages. Handles message deserialization. Constructor takes below parameters :param dict config: Config parameters containing url for schema registry (``schema.registry.url``) and the standard Kafka client configuration (``bootstrap.servers`` et.al) :param schema reader_key_schema: a reader schema for the message key :param schema reader_value_schema: a reader schema for the message value :raises ValueError: For invalid configurations """ def __init__(self, config, schema_registry=None, reader_key_schema=None, reader_value_schema=None): sr_conf = {key.replace("schema.registry.", ""): value for key, value in config.items() if key.startswith("schema.registry")} if sr_conf.get("basic.auth.credentials.source") == 'SASL_INHERIT': sr_conf['sasl.mechanisms'] = config.get('sasl.mechanisms', '') sr_conf['sasl.username'] = config.get('sasl.username', '') sr_conf['sasl.password'] = config.get('sasl.password', '') ap_conf = {key: value for key, value in config.items() if not key.startswith("schema.registry")} if schema_registry is None: schema_registry = CachedSchemaRegistryClient(sr_conf) elif sr_conf.get("url", None) is not None: raise ValueError("Cannot pass schema_registry along with schema.registry.url config") super(AvroConsumer, self).__init__(ap_conf) self._serializer = MessageSerializer(schema_registry, reader_key_schema, reader_value_schema) def poll(self, timeout=None): """ This is an overriden method from confluent_kafka.Consumer class. This handles message deserialization using avro schema :param float timeout: Poll timeout in seconds (default: indefinite) :returns: message object with deserialized key and value as dict objects :rtype: Message """ if timeout is None: timeout = -1 message = super(AvroConsumer, self).poll(timeout) if message is None: return None if not message.error(): try: if message.value() is not None: decoded_value = self._serializer.decode_message(message.value(), is_key=False) message.set_value(decoded_value) if message.key() is not None: decoded_key = self._serializer.decode_message(message.key(), is_key=True) message.set_key(decoded_key) except SerializerError as e: raise SerializerError("Message deserialization failed for message at {} [{}] offset {}: {}".format( message.topic(), message.partition(), message.offset(), e)) return message confluent-kafka-1.1.0/confluent_kafka/avro/cached_schema_registry_client.py0000644000076500000240000004047513473730310027360 0ustar ryanstaff00000000000000#!/usr/bin/env python # # Copyright 2016 Confluent Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # derived from https://github.com/verisign/python-confluent-schemaregistry.git # import json import logging import warnings from collections import defaultdict from requests import Session, utils from .error import ClientError from . import loads # Python 2 considers int an instance of str try: string_type = basestring # noqa except NameError: string_type = str VALID_LEVELS = ['NONE', 'FULL', 'FORWARD', 'BACKWARD'] VALID_METHODS = ['GET', 'POST', 'PUT', 'DELETE'] VALID_AUTH_PROVIDERS = ['URL', 'USER_INFO', 'SASL_INHERIT'] # Common accept header sent ACCEPT_HDR = "application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json" log = logging.getLogger(__name__) class CachedSchemaRegistryClient(object): """ A client that talks to a Schema Registry over HTTP See http://confluent.io/docs/current/schema-registry/docs/intro.html for more information. .. deprecated:: Use CachedSchemaRegistryClient(dict: config) instead. Existing params ca_location, cert_location and key_location will be replaced with their librdkafka equivalents: `ssl.ca.location`, `ssl.certificate.location` and `ssl.key.location` respectively. Errors communicating to the server will result in a ClientError being raised. :param str|dict url: url(deprecated) to schema registry or dictionary containing client configuration. :param str ca_location: File or directory path to CA certificate(s) for verifying the Schema Registry key. :param str cert_location: Path to client's public key used for authentication. :param str key_location: Path to client's private key used for authentication. """ def __init__(self, url, max_schemas_per_subject=1000, ca_location=None, cert_location=None, key_location=None): # In order to maintain compatibility the url(conf in future versions) param has been preserved for now. conf = url if not isinstance(url, dict): conf = { 'url': url, 'ssl.ca.location': ca_location, 'ssl.certificate.location': cert_location, 'ssl.key.location': key_location } warnings.warn( "CachedSchemaRegistry constructor is being deprecated. " "Use CachedSchemaRegistryClient(dict: config) instead. " "Existing params ca_location, cert_location and key_location will be replaced with their " "librdkafka equivalents as keys in the conf dict: `ssl.ca.location`, `ssl.certificate.location` and " "`ssl.key.location` respectively", category=DeprecationWarning, stacklevel=2) """Construct a Schema Registry client""" # Ensure URL valid scheme is included; http[s] url = conf.get('url', '') if not isinstance(url, string_type): raise TypeError("URL must be of type str") if not url.startswith('http'): raise ValueError("Invalid URL provided for Schema Registry") self.url = url.rstrip('/') # subj => { schema => id } self.subject_to_schema_ids = defaultdict(dict) # id => avro_schema self.id_to_schema = defaultdict(dict) # subj => { schema => version } self.subject_to_schema_versions = defaultdict(dict) s = Session() ca_path = conf.pop('ssl.ca.location', None) if ca_path is not None: s.verify = ca_path s.cert = self._configure_client_tls(conf) s.auth = self._configure_basic_auth(conf) self.url = conf.pop('url') self._session = s if len(conf) > 0: raise ValueError("Unrecognized configuration properties: {}".format(conf.keys())) def __del__(self): self.close() def __enter__(self): return self def __exit__(self, *args): self.close() def close(self): self._session.close() @staticmethod def _configure_basic_auth(conf): url = conf['url'] auth_provider = conf.pop('basic.auth.credentials.source', 'URL').upper() if auth_provider not in VALID_AUTH_PROVIDERS: raise ValueError("schema.registry.basic.auth.credentials.source must be one of {}" .format(VALID_AUTH_PROVIDERS)) if auth_provider == 'SASL_INHERIT': if conf.pop('sasl.mechanism', '').upper() is ['GSSAPI']: raise ValueError("SASL_INHERIT does not support SASL mechanisms GSSAPI") auth = (conf.pop('sasl.username', ''), conf.pop('sasl.password', '')) elif auth_provider == 'USER_INFO': auth = tuple(conf.pop('basic.auth.user.info', '').split(':')) else: auth = utils.get_auth_from_url(url) conf['url'] = utils.urldefragauth(url) return auth @staticmethod def _configure_client_tls(conf): cert = conf.pop('ssl.certificate.location', None), conf.pop('ssl.key.location', None) # Both values can be None or no values can be None if bool(cert[0]) != bool(cert[1]): raise ValueError( "Both schema.registry.ssl.certificate.location and schema.registry.ssl.key.location must be set") return cert def _send_request(self, url, method='GET', body=None, headers={}): if method not in VALID_METHODS: raise ClientError("Method {} is invalid; valid methods include {}".format(method, VALID_METHODS)) _headers = {'Accept': ACCEPT_HDR} if body: _headers["Content-Length"] = str(len(body)) _headers["Content-Type"] = "application/vnd.schemaregistry.v1+json" _headers.update(headers) response = self._session.request(method, url, headers=_headers, json=body) # Returned by Jetty not SR so the payload is not json encoded try: return response.json(), response.status_code except ValueError: return response.content, response.status_code @staticmethod def _add_to_cache(cache, subject, schema, value): sub_cache = cache[subject] sub_cache[schema] = value def _cache_schema(self, schema, schema_id, subject=None, version=None): # don't overwrite anything if schema_id in self.id_to_schema: schema = self.id_to_schema[schema_id] else: self.id_to_schema[schema_id] = schema if subject: self._add_to_cache(self.subject_to_schema_ids, subject, schema, schema_id) if version: self._add_to_cache(self.subject_to_schema_versions, subject, schema, version) def register(self, subject, avro_schema): """ POST /subjects/(string: subject)/versions Register a schema with the registry under the given subject and receive a schema id. avro_schema must be a parsed schema from the python avro library Multiple instances of the same schema will result in cache misses. :param str subject: subject name :param schema avro_schema: Avro schema to be registered :returns: schema_id :rtype: int """ schemas_to_id = self.subject_to_schema_ids[subject] schema_id = schemas_to_id.get(avro_schema, None) if schema_id is not None: return schema_id # send it up url = '/'.join([self.url, 'subjects', subject, 'versions']) # body is { schema : json_string } body = {'schema': json.dumps(avro_schema.to_json())} result, code = self._send_request(url, method='POST', body=body) if (code == 401 or code == 403): raise ClientError("Unauthorized access. Error code:" + str(code)) elif code == 409: raise ClientError("Incompatible Avro schema:" + str(code)) elif code == 422: raise ClientError("Invalid Avro schema:" + str(code)) elif not (code >= 200 and code <= 299): raise ClientError("Unable to register schema. Error code:" + str(code)) # result is a dict schema_id = result['id'] # cache it self._cache_schema(avro_schema, schema_id, subject) return schema_id def delete_subject(self, subject): """ DELETE /subjects/(string: subject) Deletes the specified subject and its associated compatibility level if registered. It is recommended to use this API only when a topic needs to be recycled or in development environments. :param subject: subject name :returns: version of the schema deleted under this subject :rtype: (int) """ url = '/'.join([self.url, 'subjects', subject]) result, code = self._send_request(url, method="DELETE") if not (code >= 200 and code <= 299): raise ClientError('Unable to delete subject: {}'.format(result)) return result def get_by_id(self, schema_id): """ GET /schemas/ids/{int: id} Retrieve a parsed avro schema by id or None if not found :param int schema_id: int value :returns: Avro schema :rtype: schema """ if schema_id in self.id_to_schema: return self.id_to_schema[schema_id] # fetch from the registry url = '/'.join([self.url, 'schemas', 'ids', str(schema_id)]) result, code = self._send_request(url) if code == 404: log.error("Schema not found:" + str(code)) return None elif not (code >= 200 and code <= 299): log.error("Unable to get schema for the specific ID:" + str(code)) return None else: # need to parse the schema schema_str = result.get("schema") try: result = loads(schema_str) # cache it self._cache_schema(result, schema_id) return result except ClientError as e: # bad schema - should not happen raise ClientError("Received bad schema (id %s) from registry: %s" % (schema_id, e)) def get_latest_schema(self, subject): """ GET /subjects/(string: subject)/versions/(versionId: version) Return the latest 3-tuple of: (the schema id, the parsed avro schema, the schema version) for a particular subject. This call always contacts the registry. If the subject is not found, (None,None,None) is returned. :param str subject: subject name :returns: (schema_id, schema, version) :rtype: (string, schema, int) """ url = '/'.join([self.url, 'subjects', subject, 'versions', 'latest']) result, code = self._send_request(url) if code == 404: log.error("Schema not found:" + str(code)) return (None, None, None) elif code == 422: log.error("Invalid version:" + str(code)) return (None, None, None) elif not (code >= 200 and code <= 299): return (None, None, None) schema_id = result['id'] version = result['version'] if schema_id in self.id_to_schema: schema = self.id_to_schema[schema_id] else: try: schema = loads(result['schema']) except ClientError: # bad schema - should not happen raise self._cache_schema(schema, schema_id, subject, version) return (schema_id, schema, version) def get_version(self, subject, avro_schema): """ POST /subjects/(string: subject) Get the version of a schema for a given subject. Returns None if not found. :param str subject: subject name :param: schema avro_schema: Avro schema :returns: version :rtype: int """ schemas_to_version = self.subject_to_schema_versions[subject] version = schemas_to_version.get(avro_schema, None) if version is not None: return version url = '/'.join([self.url, 'subjects', subject]) body = {'schema': json.dumps(avro_schema.to_json())} result, code = self._send_request(url, method='POST', body=body) if code == 404: log.error("Not found:" + str(code)) return None elif not (code >= 200 and code <= 299): log.error("Unable to get version of a schema:" + str(code)) return None schema_id = result['id'] version = result['version'] self._cache_schema(avro_schema, schema_id, subject, version) return version def test_compatibility(self, subject, avro_schema, version='latest'): """ POST /compatibility/subjects/(string: subject)/versions/(versionId: version) Test the compatibility of a candidate parsed schema for a given subject. By default the latest version is checked against. :param: str subject: subject name :param: schema avro_schema: Avro schema :return: True if compatible, False if not compatible :rtype: bool """ url = '/'.join([self.url, 'compatibility', 'subjects', subject, 'versions', str(version)]) body = {'schema': json.dumps(avro_schema.to_json())} try: result, code = self._send_request(url, method='POST', body=body) if code == 404: log.error(("Subject or version not found:" + str(code))) return False elif code == 422: log.error(("Invalid subject or schema:" + str(code))) return False elif code >= 200 and code <= 299: return result.get('is_compatible') else: log.error("Unable to check the compatibility: " + str(code)) return False except Exception as e: log.error("_send_request() failed: %s", e) return False def update_compatibility(self, level, subject=None): """ PUT /config/(string: subject) Update the compatibility level for a subject. Level must be one of: :param str level: ex: 'NONE','FULL','FORWARD', or 'BACKWARD' """ if level not in VALID_LEVELS: raise ClientError("Invalid level specified: %s" % (str(level))) url = '/'.join([self.url, 'config']) if subject: url += '/' + subject body = {"compatibility": level} result, code = self._send_request(url, method='PUT', body=body) if code >= 200 and code <= 299: return result['compatibility'] else: raise ClientError("Unable to update level: %s. Error code: %d" % (str(level)), code) def get_compatibility(self, subject=None): """ GET /config Get the current compatibility level for a subject. Result will be one of: :param str subject: subject name :raises ClientError: if the request was unsuccessful or an invalid compatibility level was returned :returns: one of 'NONE','FULL','FORWARD', or 'BACKWARD' :rtype: bool """ url = '/'.join([self.url, 'config']) if subject: url = '/'.join([url, subject]) result, code = self._send_request(url) is_successful_request = code >= 200 and code <= 299 if not is_successful_request: raise ClientError('Unable to fetch compatibility level. Error code: %d' % code) compatibility = result.get('compatibilityLevel', None) if compatibility not in VALID_LEVELS: if compatibility is None: error_msg_suffix = 'No compatibility was returned' else: error_msg_suffix = str(compatibility) raise ClientError('Invalid compatibility level received: %s' % error_msg_suffix) return compatibility confluent-kafka-1.1.0/confluent_kafka/avro/error.py0000644000076500000240000000175613446646122022502 0ustar ryanstaff00000000000000#!/usr/bin/env python # # Copyright 2017 Confluent Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # class ClientError(Exception): """ Error thrown by Schema Registry clients """ def __init__(self, message, http_code=None): self.message = message self.http_code = http_code super(ClientError, self).__init__(self.__str__()) def __repr__(self): return "ClientError(error={error})".format(error=self.message) def __str__(self): return self.message confluent-kafka-1.1.0/confluent_kafka/avro/load.py0000644000076500000240000000274413446646122022266 0ustar ryanstaff00000000000000#!/usr/bin/env python # # Copyright 2017 Confluent Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # import sys from confluent_kafka.avro.error import ClientError def loads(schema_str): """ Parse a schema given a schema string """ try: if sys.version_info[0] < 3: return schema.parse(schema_str) else: return schema.Parse(schema_str) except schema.SchemaParseException as e: raise ClientError("Schema parse failed: %s" % (str(e))) def load(fp): """ Parse a schema from a file path """ with open(fp) as f: return loads(f.read()) # avro.schema.RecordSchema and avro.schema.PrimitiveSchema classes are not hashable. Hence defining them explicitly as # a quick fix def _hash_func(self): return hash(str(self)) try: from avro import schema schema.RecordSchema.__hash__ = _hash_func schema.PrimitiveSchema.__hash__ = _hash_func schema.UnionSchema.__hash__ = _hash_func except ImportError: schema = None confluent-kafka-1.1.0/confluent_kafka/avro/serializer/0000755000076500000240000000000013513111321023116 5ustar ryanstaff00000000000000confluent-kafka-1.1.0/confluent_kafka/avro/serializer/__init__.py0000644000076500000240000000211013446646122025242 0ustar ryanstaff00000000000000#!/usr/bin/env python # # Copyright 2016 Confluent Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # class SerializerError(Exception): """Generic error from serializer package""" def __init__(self, message): self.message = message def __repr__(self): return '{klass}(error={error})'.format( klass=self.__class__.__name__, error=self.message ) def __str__(self): return self.message class KeySerializerError(SerializerError): pass class ValueSerializerError(SerializerError): pass confluent-kafka-1.1.0/confluent_kafka/avro/serializer/message_serializer.py0000644000076500000240000002015113446646122027365 0ustar ryanstaff00000000000000#!/usr/bin/env python # # Copyright 2016 Confluent Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # derived from https://github.com/verisign/python-confluent-schemaregistry.git # import io import logging import struct import sys import traceback import avro import avro.io from confluent_kafka.avro import ClientError from confluent_kafka.avro.serializer import (SerializerError, KeySerializerError, ValueSerializerError) log = logging.getLogger(__name__) MAGIC_BYTE = 0 HAS_FAST = False try: from fastavro import schemaless_reader, schemaless_writer HAS_FAST = True except ImportError: pass class ContextStringIO(io.BytesIO): """ Wrapper to allow use of StringIO via 'with' constructs. """ def __enter__(self): return self def __exit__(self, *args): self.close() return False class MessageSerializer(object): """ A helper class that can serialize and deserialize messages that need to be encoded or decoded using the schema registry. All encode_* methods return a buffer that can be sent to kafka. All decode_* methods expect a buffer received from kafka. """ def __init__(self, registry_client, reader_key_schema=None, reader_value_schema=None): self.registry_client = registry_client self.id_to_decoder_func = {} self.id_to_writers = {} self.reader_key_schema = reader_key_schema self.reader_value_schema = reader_value_schema # Encoder support def _get_encoder_func(self, writer_schema): if HAS_FAST: schema = writer_schema.to_json() return lambda record, fp: schemaless_writer(fp, schema, record) writer = avro.io.DatumWriter(writer_schema) return lambda record, fp: writer.write(record, avro.io.BinaryEncoder(fp)) def encode_record_with_schema(self, topic, schema, record, is_key=False): """ Given a parsed avro schema, encode a record for the given topic. The record is expected to be a dictionary. The schema is registered with the subject of 'topic-value' :param str topic: Topic name :param schema schema: Avro Schema :param dict record: An object to serialize :param bool is_key: If the record is a key :returns: Encoded record with schema ID as bytes :rtype: bytes """ serialize_err = KeySerializerError if is_key else ValueSerializerError subject_suffix = ('-key' if is_key else '-value') # get the latest schema for the subject subject = topic + subject_suffix # register it schema_id = self.registry_client.register(subject, schema) if not schema_id: message = "Unable to retrieve schema id for subject %s" % (subject) raise serialize_err(message) # cache writer self.id_to_writers[schema_id] = self._get_encoder_func(schema) return self.encode_record_with_schema_id(schema_id, record, is_key=is_key) def encode_record_with_schema_id(self, schema_id, record, is_key=False): """ Encode a record with a given schema id. The record must be a python dictionary. :param int schema_id: integer ID :param dict record: An object to serialize :param bool is_key: If the record is a key :returns: decoder function :rtype: func """ serialize_err = KeySerializerError if is_key else ValueSerializerError # use slow avro if schema_id not in self.id_to_writers: # get the writer + schema try: schema = self.registry_client.get_by_id(schema_id) if not schema: raise serialize_err("Schema does not exist") self.id_to_writers[schema_id] = self._get_encoder_func(schema) except ClientError: exc_type, exc_value, exc_traceback = sys.exc_info() raise serialize_err(repr(traceback.format_exception(exc_type, exc_value, exc_traceback))) # get the writer writer = self.id_to_writers[schema_id] with ContextStringIO() as outf: # Write the magic byte and schema ID in network byte order (big endian) outf.write(struct.pack('>bI', MAGIC_BYTE, schema_id)) # write the record to the rest of the buffer writer(record, outf) return outf.getvalue() # Decoder support def _get_decoder_func(self, schema_id, payload, is_key=False): if schema_id in self.id_to_decoder_func: return self.id_to_decoder_func[schema_id] # fetch writer schema from schema reg try: writer_schema_obj = self.registry_client.get_by_id(schema_id) except ClientError as e: raise SerializerError("unable to fetch schema with id %d: %s" % (schema_id, str(e))) if writer_schema_obj is None: raise SerializerError("unable to fetch schema with id %d" % (schema_id)) curr_pos = payload.tell() reader_schema_obj = self.reader_key_schema if is_key else self.reader_value_schema if HAS_FAST: # try to use fast avro try: writer_schema = writer_schema_obj.to_json() reader_schema = reader_schema_obj.to_json() schemaless_reader(payload, writer_schema) # If we reach this point, this means we have fastavro and it can # do this deserialization. Rewind since this method just determines # the reader function and we need to deserialize again along the # normal path. payload.seek(curr_pos) self.id_to_decoder_func[schema_id] = lambda p: schemaless_reader( p, writer_schema, reader_schema) return self.id_to_decoder_func[schema_id] except Exception: # Fast avro failed, fall thru to standard avro below. pass # here means we should just delegate to slow avro # rewind payload.seek(curr_pos) # Avro DatumReader py2/py3 inconsistency, hence no param keywords # should be revisited later # https://github.com/apache/avro/blob/master/lang/py3/avro/io.py#L459 # https://github.com/apache/avro/blob/master/lang/py/src/avro/io.py#L423 # def __init__(self, writers_schema=None, readers_schema=None) # def __init__(self, writer_schema=None, reader_schema=None) avro_reader = avro.io.DatumReader(writer_schema_obj, reader_schema_obj) def decoder(p): bin_decoder = avro.io.BinaryDecoder(p) return avro_reader.read(bin_decoder) self.id_to_decoder_func[schema_id] = decoder return self.id_to_decoder_func[schema_id] def decode_message(self, message, is_key=False): """ Decode a message from kafka that has been encoded for use with the schema registry. :param str|bytes or None message: message key or value to be decoded :returns: Decoded message contents. :rtype dict: """ if message is None: return None if len(message) <= 5: raise SerializerError("message is too small to decode") with ContextStringIO(message) as payload: magic, schema_id = struct.unpack('>bI', payload.read(5)) if magic != MAGIC_BYTE: raise SerializerError("message does not start with magic byte") decoder_func = self._get_decoder_func(schema_id, payload, is_key) return decoder_func(payload) confluent-kafka-1.1.0/confluent_kafka/kafkatest/0000755000076500000240000000000013513111321021753 5ustar ryanstaff00000000000000confluent-kafka-1.1.0/confluent_kafka/kafkatest/__init__.py0000644000076500000240000000012513446646122024103 0ustar ryanstaff00000000000000""" Python client implementations of the official Kafka tests/kafkatest clients. """ confluent-kafka-1.1.0/confluent_kafka/kafkatest/verifiable_client.py0000644000076500000240000000675213446646122026026 0ustar ryanstaff00000000000000# Copyright 2016 Confluent Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import datetime import json import os import re import signal import socket import sys import time class VerifiableClient(object): """ Generic base class for a kafkatest verifiable client. Implements the common kafkatest protocol and semantics. """ def __init__(self, conf): """ """ super(VerifiableClient, self).__init__() self.conf = conf self.conf['client.id'] = 'python@' + socket.gethostname() self.run = True signal.signal(signal.SIGTERM, self.sig_term) self.dbg('Pid is %d' % os.getpid()) def sig_term(self, sig, frame): self.dbg('SIGTERM') self.run = False @staticmethod def _timestamp(): return time.strftime('%H:%M:%S', time.localtime()) def dbg(self, s): """ Debugging printout """ sys.stderr.write('%% %s DEBUG: %s\n' % (self._timestamp(), s)) def err(self, s, term=False): """ Error printout, if term=True the process will terminate immediately. """ sys.stderr.write('%% %s ERROR: %s\n' % (self._timestamp(), s)) if term: sys.stderr.write('%% FATAL ERROR ^\n') sys.exit(1) def send(self, d): """ Send dict as JSON to stdout for consumtion by kafkatest handler """ d['_time'] = str(datetime.datetime.now()) self.dbg('SEND: %s' % json.dumps(d)) sys.stdout.write('%s\n' % json.dumps(d)) sys.stdout.flush() @staticmethod def set_config(conf, args): """ Set client config properties using args dict. """ for n, v in args.iteritems(): if v is None: continue if n.startswith('topicconf_'): conf[n[10:]] = v continue if not n.startswith('conf_'): # App config, skip continue # Remove conf_ prefix n = n[5:] # Handle known Java properties to librdkafka properties. if n == 'partition.assignment.strategy': # Convert Java class name to config value. # "org.apache.kafka.clients.consumer.RangeAssignor" -> "range" conf[n] = re.sub(r'org.apache.kafka.clients.consumer.(\w+)Assignor', lambda x: x.group(1).lower(), v) else: conf[n] = v @staticmethod def read_config_file(path): """Read (java client) config file and return dict with properties""" conf = {} with open(path, 'r') as f: for line in f: line = line.strip() if line.startswith('#') or len(line) == 0: continue fi = line.find('=') if fi < 1: raise Exception('%s: invalid line, no key=value pair: %s' % (path, line)) k = line[:fi] v = line[fi+1:] conf[k] = v return conf confluent-kafka-1.1.0/confluent_kafka/kafkatest/verifiable_consumer.py0000755000076500000240000002647513446646122026412 0ustar ryanstaff00000000000000#!/usr/bin/env python # # Copyright 2016 Confluent Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # import argparse import os import time from confluent_kafka import Consumer, KafkaError, KafkaException from verifiable_client import VerifiableClient class VerifiableConsumer(VerifiableClient): """ confluent-kafka-python backed VerifiableConsumer class for use with Kafka's kafkatests client tests. """ def __init__(self, conf): """ conf is a config dict passed to confluent_kafka.Consumer() """ super(VerifiableConsumer, self).__init__(conf) self.conf['on_commit'] = self.on_commit self.consumer = Consumer(**conf) self.consumed_msgs = 0 self.consumed_msgs_last_reported = 0 self.consumed_msgs_at_last_commit = 0 self.use_auto_commit = False self.use_async_commit = False self.max_msgs = -1 self.assignment = [] self.assignment_dict = dict() def find_assignment(self, topic, partition): """ Find and return existing assignment based on topic and partition, or None on miss. """ skey = '%s %d' % (topic, partition) return self.assignment_dict.get(skey) def send_records_consumed(self, immediate=False): """ Send records_consumed, every 100 messages, on timeout, or if immediate is set. """ if self.consumed_msgs <= self.consumed_msgs_last_reported + (0 if immediate else 100): return if len(self.assignment) == 0: return d = {'name': 'records_consumed', 'count': self.consumed_msgs - self.consumed_msgs_last_reported, 'partitions': []} for a in self.assignment: if a.min_offset == -1: # Skip partitions that havent had any messages since last time. # This is to circumvent some minOffset checks in kafkatest. continue d['partitions'].append(a.to_dict()) a.min_offset = -1 self.send(d) self.consumed_msgs_last_reported = self.consumed_msgs def send_assignment(self, evtype, partitions): """ Send assignment update, evtype is either 'assigned' or 'revoked' """ d = {'name': 'partitions_' + evtype, 'partitions': [{'topic': x.topic, 'partition': x.partition} for x in partitions]} self.send(d) def on_assign(self, consumer, partitions): """ Rebalance on_assign callback """ old_assignment = self.assignment self.assignment = [AssignedPartition(p.topic, p.partition) for p in partitions] # Move over our last seen offsets so that we can report a proper # minOffset even after a rebalance loop. for a in old_assignment: b = self.find_assignment(a.topic, a.partition) b.min_offset = a.min_offset self.assignment_dict = {a.skey: a for a in self.assignment} self.send_assignment('assigned', partitions) def on_revoke(self, consumer, partitions): """ Rebalance on_revoke callback """ # Send final consumed records prior to rebalancing to make sure # latest consumed is in par with what is going to be committed. self.send_records_consumed(immediate=True) self.do_commit(immediate=True, asynchronous=False) self.assignment = list() self.assignment_dict = dict() self.send_assignment('revoked', partitions) def on_commit(self, err, partitions): """ Offsets Committed callback """ if err is not None and err.code() == KafkaError._NO_OFFSET: self.dbg('on_commit(): no offsets to commit') return # Report consumed messages to make sure consumed position >= committed position self.send_records_consumed(immediate=True) d = {'name': 'offsets_committed', 'offsets': []} if err is not None: d['success'] = False d['error'] = str(err) else: d['success'] = True d['error'] = '' for p in partitions: pd = {'topic': p.topic, 'partition': p.partition, 'offset': p.offset} if p.error is not None: pd['error'] = str(p.error) d['offsets'].append(pd) if len(self.assignment) == 0: self.dbg('Not sending offsets_committed: No current assignment: would be: %s' % d) return self.send(d) def do_commit(self, immediate=False, asynchronous=None): """ Commit every 1000 messages or whenever there is a consume timeout or immediate. """ if (self.use_auto_commit or self.consumed_msgs_at_last_commit + (0 if immediate else 1000) > self.consumed_msgs): return # Make sure we report consumption before commit, # otherwise tests may fail because of commit > consumed if self.consumed_msgs_at_last_commit < self.consumed_msgs: self.send_records_consumed(immediate=True) if asynchronous is None: async_mode = self.use_async_commit else: async_mode = asynchronous self.dbg('Committing %d messages (Async=%s)' % (self.consumed_msgs - self.consumed_msgs_at_last_commit, async_mode)) retries = 3 while True: try: self.dbg('Commit') offsets = self.consumer.commit(asynchronous=async_mode) self.dbg('Commit done: offsets %s' % offsets) if not async_mode: self.on_commit(None, offsets) break except KafkaException as e: if e.args[0].code() == KafkaError._NO_OFFSET: self.dbg('No offsets to commit') break elif e.args[0].code() in (KafkaError.REQUEST_TIMED_OUT, KafkaError.NOT_COORDINATOR_FOR_GROUP, KafkaError._WAIT_COORD): self.dbg('Commit failed: %s (%d retries)' % (str(e), retries)) if retries <= 0: raise retries -= 1 time.sleep(1) continue else: raise self.consumed_msgs_at_last_commit = self.consumed_msgs def msg_consume(self, msg): """ Handle consumed message (or error event) """ if msg.error(): self.err('Consume failed: %s' % msg.error(), term=False) return if False: self.dbg('Read msg from %s [%d] @ %d' % (msg.topic(), msg.partition(), msg.offset())) if self.max_msgs >= 0 and self.consumed_msgs >= self.max_msgs: return # ignore extra messages # Find assignment. a = self.find_assignment(msg.topic(), msg.partition()) if a is None: self.err('Received message on unassigned partition %s [%d] @ %d' % (msg.topic(), msg.partition(), msg.offset()), term=True) a.consumed_msgs += 1 if a.min_offset == -1: a.min_offset = msg.offset() if a.max_offset < msg.offset(): a.max_offset = msg.offset() self.consumed_msgs += 1 self.consumer.store_offsets(message=msg) self.send_records_consumed(immediate=False) self.do_commit(immediate=False) class AssignedPartition(object): """ Local state container for assigned partition. """ def __init__(self, topic, partition): super(AssignedPartition, self).__init__() self.topic = topic self.partition = partition self.skey = '%s %d' % (self.topic, self.partition) self.consumed_msgs = 0 self.min_offset = -1 self.max_offset = 0 def to_dict(self): """ Return a dict of this partition's state """ return {'topic': self.topic, 'partition': self.partition, 'minOffset': self.min_offset, 'maxOffset': self.max_offset} if __name__ == '__main__': parser = argparse.ArgumentParser(description='Verifiable Python Consumer') parser.add_argument('--topic', action='append', type=str, required=True) parser.add_argument('--group-id', dest='conf_group.id', required=True) parser.add_argument('--broker-list', dest='conf_bootstrap.servers', required=True) parser.add_argument('--session-timeout', type=int, dest='conf_session.timeout.ms', default=6000) parser.add_argument('--enable-autocommit', action='store_true', dest='conf_enable.auto.commit', default=False) parser.add_argument('--max-messages', type=int, dest='max_messages', default=-1) parser.add_argument('--assignment-strategy', dest='conf_partition.assignment.strategy') parser.add_argument('--reset-policy', dest='topicconf_auto.offset.reset', default='earliest') parser.add_argument('--consumer.config', dest='consumer_config') parser.add_argument('-X', nargs=1, dest='extra_conf', action='append', help='Configuration property', default=[]) args = vars(parser.parse_args()) conf = {'broker.version.fallback': '0.9.0', # Do explicit manual offset stores to avoid race conditions # where a message is consumed from librdkafka but not yet handled # by the Python code that keeps track of last consumed offset. 'enable.auto.offset.store': False} if args.get('consumer_config', None) is not None: args.update(VerifiableClient.read_config_file(args['consumer_config'])) args.update([x[0].split('=') for x in args.get('extra_conf', [])]) VerifiableClient.set_config(conf, args) vc = VerifiableConsumer(conf) vc.use_auto_commit = args['conf_enable.auto.commit'] vc.max_msgs = args['max_messages'] vc.dbg('Pid %d' % os.getpid()) vc.dbg('Using config: %s' % conf) vc.dbg('Subscribing to %s' % args['topic']) vc.consumer.subscribe(args['topic'], on_assign=vc.on_assign, on_revoke=vc.on_revoke) try: while vc.run: msg = vc.consumer.poll(timeout=1.0) if msg is None: # Timeout. # Try reporting consumed messages vc.send_records_consumed(immediate=True) # Commit every poll() timeout instead of on every message. # Also commit on every 1000 messages, whichever comes first. vc.do_commit(immediate=True) continue # Handle message (or error event) vc.msg_consume(msg) except KeyboardInterrupt: vc.dbg('KeyboardInterrupt') vc.run = False pass vc.dbg('Closing consumer') vc.send_records_consumed(immediate=True) if not vc.use_auto_commit: vc.do_commit(immediate=True, asynchronous=False) vc.consumer.close() vc.send({'name': 'shutdown_complete'}) vc.dbg('All done') confluent-kafka-1.1.0/confluent_kafka/kafkatest/verifiable_producer.py0000755000076500000240000001176513446646122026376 0ustar ryanstaff00000000000000#!/usr/bin/env python # # Copyright 2016 Confluent Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # import argparse import time from confluent_kafka import Producer, KafkaException from verifiable_client import VerifiableClient class VerifiableProducer(VerifiableClient): """ confluent-kafka-python backed VerifiableProducer class for use with Kafka's kafkatests client tests. """ def __init__(self, conf): """ conf is a config dict passed to confluent_kafka.Producer() """ super(VerifiableProducer, self).__init__(conf) self.conf['on_delivery'] = self.dr_cb self.producer = Producer(**self.conf) self.num_acked = 0 self.num_sent = 0 self.num_err = 0 def dr_cb(self, err, msg): """ Per-message Delivery report callback. Called from poll() """ if err: self.num_err += 1 self.send({'name': 'producer_send_error', 'message': str(err), 'topic': msg.topic(), 'key': msg.key(), 'value': msg.value()}) else: self.num_acked += 1 self.send({'name': 'producer_send_success', 'topic': msg.topic(), 'partition': msg.partition(), 'offset': msg.offset(), 'key': msg.key(), 'value': msg.value()}) pass if __name__ == '__main__': parser = argparse.ArgumentParser(description='Verifiable Python Producer') parser.add_argument('--topic', type=str, required=True) parser.add_argument('--throughput', type=int, default=0) parser.add_argument('--broker-list', dest='conf_bootstrap.servers', required=True) parser.add_argument('--max-messages', type=int, dest='max_msgs', default=1000000) # avoid infinite parser.add_argument('--value-prefix', dest='value_prefix', type=str, default=None) parser.add_argument('--acks', type=int, dest='topicconf_request.required.acks', default=-1) parser.add_argument('--message-create-time', type=int, dest='create_time', default=0) parser.add_argument('--producer.config', dest='producer_config') parser.add_argument('-X', nargs=1, dest='extra_conf', action='append', help='Configuration property', default=[]) args = vars(parser.parse_args()) conf = {'broker.version.fallback': '0.9.0', 'produce.offset.report': True} if args.get('producer_config', None) is not None: args.update(VerifiableClient.read_config_file(args['producer_config'])) args.update([x[0].split('=') for x in args.get('extra_conf', [])]) VerifiableClient.set_config(conf, args) vp = VerifiableProducer(conf) vp.max_msgs = args['max_msgs'] throughput = args['throughput'] topic = args['topic'] if args['value_prefix'] is not None: value_fmt = args['value_prefix'] + '.%d' else: value_fmt = '%d' if throughput > 0: delay = 1.0/throughput else: delay = 0 vp.dbg('Producing %d messages at a rate of %d/s' % (vp.max_msgs, throughput)) try: for i in range(0, vp.max_msgs): if not vp.run: break t_end = time.time() + delay while vp.run: try: vp.producer.produce(topic, value=(value_fmt % i), timestamp=args.get('create_time', 0)) vp.num_sent += 1 except KafkaException as e: vp.err('produce() #%d/%d failed: %s' % (i, vp.max_msgs, str(e))) vp.num_err += 1 except BufferError: vp.dbg('Local produce queue full (produced %d/%d msgs), waiting for deliveries..' % (i, vp.max_msgs)) vp.producer.poll(timeout=0.5) continue break # Delay to achieve desired throughput, # but make sure poll is called at least once # to serve DRs. while True: remaining = max(0, t_end - time.time()) vp.producer.poll(timeout=remaining) if remaining <= 0.00000001: break except KeyboardInterrupt: pass # Flush remaining messages to broker. vp.dbg('Flushing') try: vp.producer.flush(5) except KeyboardInterrupt: pass vp.send({'name': 'shutdown_complete', '_qlen': len(vp.producer)}) vp.dbg('All done') confluent-kafka-1.1.0/confluent_kafka/src/0000755000076500000240000000000013513111321020565 5ustar ryanstaff00000000000000confluent-kafka-1.1.0/confluent_kafka/src/Admin.c0000644000076500000240000016053713446646122022016 0ustar ryanstaff00000000000000/** * Copyright 2018 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include "confluent_kafka.h" #include /**************************************************************************** * * * Admin Client API * * ****************************************************************************/ static int Admin_clear (Handle *self) { Handle_clear(self); return 0; } static void Admin_dealloc (Handle *self) { PyObject_GC_UnTrack(self); if (self->rk) { CallState cs; CallState_begin(self, &cs); rd_kafka_destroy(self->rk); CallState_end(self, &cs); } Admin_clear(self); Py_TYPE(self)->tp_free((PyObject *)self); } static int Admin_traverse (Handle *self, visitproc visit, void *arg) { Handle_traverse(self, visit, arg); return 0; } /** * @name AdminOptions * * */ #define Admin_options_def_int (-12345) #define Admin_options_def_float ((float)Admin_options_def_int) struct Admin_options { int validate_only; /* needs special bool parsing */ float request_timeout; /* parser: f */ float operation_timeout; /* parser: f */ int broker; /* parser: i */ }; /**@brief "unset" value initializers for Admin_options * Make sure this is kept up to date with Admin_options above. */ #define Admin_options_INITIALIZER { \ Admin_options_def_int, Admin_options_def_float, \ Admin_options_def_float, Admin_options_def_int, \ } #define Admin_options_is_set_int(v) ((v) != Admin_options_def_int) #define Admin_options_is_set_float(v) Admin_options_is_set_int((int)(v)) /** * @brief Convert Admin_options to rd_kafka_AdminOptions_t. * * @param forApi is the librdkafka name of the admin API that these options * will be used for, e.g., "CreateTopics". * @param future is set as the options opaque. * * @returns a new C admin options object on success, or NULL on failure in * which case an exception is raised. */ static rd_kafka_AdminOptions_t * Admin_options_to_c (Handle *self, rd_kafka_admin_op_t for_api, const struct Admin_options *options, PyObject *future) { rd_kafka_AdminOptions_t *c_options; rd_kafka_resp_err_t err; char errstr[512]; c_options = rd_kafka_AdminOptions_new(self->rk, for_api); if (!c_options) { PyErr_Format(PyExc_RuntimeError, "This Admin API method " "is unsupported by librdkafka %s", rd_kafka_version_str()); return NULL; } rd_kafka_AdminOptions_set_opaque(c_options, (void *)future); if (Admin_options_is_set_int(options->validate_only) && (err = rd_kafka_AdminOptions_set_validate_only( c_options, options->validate_only, errstr, sizeof(errstr)))) goto err; if (Admin_options_is_set_float(options->request_timeout) && (err = rd_kafka_AdminOptions_set_request_timeout( c_options, (int)(options->request_timeout * 1000.0f), errstr, sizeof(errstr)))) goto err; if (Admin_options_is_set_float(options->operation_timeout) && (err = rd_kafka_AdminOptions_set_operation_timeout( c_options, (int)(options->operation_timeout * 1000.0f), errstr, sizeof(errstr)))) goto err; if (Admin_options_is_set_int(options->broker) && (err = rd_kafka_AdminOptions_set_broker( c_options, (int32_t)options->broker, errstr, sizeof(errstr)))) goto err; return c_options; err: rd_kafka_AdminOptions_destroy(c_options); PyErr_Format(PyExc_ValueError, "%s", errstr); return NULL; } /** * @brief Translate Python list(list(int)) replica assignments and set * on the specified generic C object using a setter based on * forApi. * * @returns 1 on success or 0 on error in which case an exception is raised. */ static int Admin_set_replica_assignment (const char *forApi, void *c_obj, PyObject *ra, int min_count, int max_count, const char *err_count_desc) { int pi; if (!PyList_Check(ra) || (int)PyList_Size(ra) < min_count || (int)PyList_Size(ra) > max_count) { PyErr_Format(PyExc_ValueError, "replica_assignment must be " "a list of int lists with an " "outer size of %s", err_count_desc); return 0; } for (pi = 0 ; pi < (int)PyList_Size(ra) ; pi++) { size_t ri; PyObject *replicas = PyList_GET_ITEM(ra, pi); rd_kafka_resp_err_t err; int32_t *c_replicas; size_t replica_cnt; char errstr[512]; if (!PyList_Check(replicas) || (replica_cnt = (size_t)PyList_Size(replicas)) < 1) { PyErr_Format( PyExc_ValueError, "replica_assignment must be " "a list of int lists with an " "outer size of %s", err_count_desc); return 0; } c_replicas = malloc(sizeof(*c_replicas) * replica_cnt); for (ri = 0 ; ri < replica_cnt ; ri++) { PyObject *replica = PyList_GET_ITEM(replicas, ri); if (!cfl_PyInt_Check(replica)) { PyErr_Format( PyExc_ValueError, "replica_assignment must be " "a list of int lists with an " "outer size of %s", err_count_desc); free(c_replicas); return 0; } c_replicas[ri] = (int32_t)cfl_PyInt_AsInt(replica); } if (!strcmp(forApi, "CreateTopics")) err = rd_kafka_NewTopic_set_replica_assignment( (rd_kafka_NewTopic_t *)c_obj, (int32_t)pi, c_replicas, replica_cnt, errstr, sizeof(errstr)); else if (!strcmp(forApi, "CreatePartitions")) err = rd_kafka_NewPartitions_set_replica_assignment( (rd_kafka_NewPartitions_t *)c_obj, (int32_t)pi, c_replicas, replica_cnt, errstr, sizeof(errstr)); else { /* Should never be reached */ err = RD_KAFKA_RESP_ERR__UNSUPPORTED_FEATURE; snprintf(errstr, sizeof(errstr), "Unsupported forApi %s", forApi); } free(c_replicas); if (err) { PyErr_SetString( PyExc_ValueError, errstr); return 0; } } return 1; } /** * @brief Translate a dict to ConfigResource set_config() calls, * or to NewTopic_add_config() calls. * * * @returns 1 on success or 0 if an exception was raised. */ static int Admin_config_dict_to_c (void *c_obj, PyObject *dict, const char *op_name) { Py_ssize_t pos = 0; PyObject *ko, *vo; while (PyDict_Next(dict, &pos, &ko, &vo)) { PyObject *ks, *ks8; PyObject *vs = NULL, *vs8 = NULL; const char *k; const char *v; rd_kafka_resp_err_t err; if (!(ks = cfl_PyObject_Unistr(ko))) { PyErr_Format(PyExc_ValueError, "expected %s config name to be unicode " "string", op_name); return 0; } k = cfl_PyUnistr_AsUTF8(ks, &ks8); if (!(vs = cfl_PyObject_Unistr(vo)) || !(v = cfl_PyUnistr_AsUTF8(vs, &vs8))) { PyErr_Format(PyExc_ValueError, "expect %s config value for %s " "to be unicode string", op_name, k); Py_XDECREF(vs); Py_XDECREF(vs8); Py_DECREF(ks); Py_XDECREF(ks8); return 0; } if (!strcmp(op_name, "set_config")) err = rd_kafka_ConfigResource_set_config( (rd_kafka_ConfigResource_t *)c_obj, k, v); else if (!strcmp(op_name, "newtopic_set_config")) err = rd_kafka_NewTopic_set_config( (rd_kafka_NewTopic_t *)c_obj, k, v); else err = RD_KAFKA_RESP_ERR__NOT_IMPLEMENTED; if (err) { PyErr_Format(PyExc_ValueError, "%s config %s failed: %s", op_name, k, rd_kafka_err2str(err)); Py_XDECREF(vs); Py_XDECREF(vs8); Py_DECREF(ks); Py_XDECREF(ks8); return 0; } Py_XDECREF(vs); Py_XDECREF(vs8); Py_DECREF(ks); Py_XDECREF(ks8); } return 1; } /** * @brief create_topics */ static PyObject *Admin_create_topics (Handle *self, PyObject *args, PyObject *kwargs) { PyObject *topics = NULL, *future, *validate_only_obj = NULL; static char *kws[] = { "topics", "future", /* options */ "validate_only", "request_timeout", "operation_timeout", NULL }; struct Admin_options options = Admin_options_INITIALIZER; rd_kafka_AdminOptions_t *c_options = NULL; int tcnt; int i; rd_kafka_NewTopic_t **c_objs; rd_kafka_queue_t *rkqu; CallState cs; /* topics is a list of NewTopic objects. */ if (!PyArg_ParseTupleAndKeywords(args, kwargs, "OO|Off", kws, &topics, &future, &validate_only_obj, &options.request_timeout, &options.operation_timeout)) return NULL; if (!PyList_Check(topics) || (tcnt = (int)PyList_Size(topics)) < 1) { PyErr_SetString(PyExc_ValueError, "Expected non-empty list of NewTopic objects"); return NULL; } if (validate_only_obj && !cfl_PyBool_get(validate_only_obj, "validate_only", &options.validate_only)) return NULL; c_options = Admin_options_to_c(self, RD_KAFKA_ADMIN_OP_CREATETOPICS, &options, future); if (!c_options) return NULL; /* Exception raised by options_to_c() */ /* options_to_c() sets future as the opaque, which is used in the * background_event_cb to set the results on the future as the * admin operation is finished, so we need to keep our own refcount. */ Py_INCREF(future); /* * Parse the list of NewTopics and convert to corresponding C types. */ c_objs = malloc(sizeof(*c_objs) * tcnt); for (i = 0 ; i < tcnt ; i++) { NewTopic *newt = (NewTopic *)PyList_GET_ITEM(topics, i); char errstr[512]; int r; r = PyObject_IsInstance((PyObject *)newt, (PyObject *)&NewTopicType); if (r == -1) goto err; /* Exception raised by IsInstance() */ else if (r == 0) { PyErr_SetString(PyExc_ValueError, "Expected list of NewTopic objects"); goto err; } c_objs[i] = rd_kafka_NewTopic_new(newt->topic, newt->num_partitions, newt->replication_factor, errstr, sizeof(errstr)); if (!c_objs[i]) { PyErr_Format(PyExc_ValueError, "Invalid NewTopic(%s): %s", newt->topic, errstr); i++; goto err; } if (newt->replica_assignment) { if (newt->replication_factor != -1) { PyErr_SetString(PyExc_ValueError, "replication_factor and " "replica_assignment are " "mutually exclusive"); i++; goto err; } if (!Admin_set_replica_assignment( "CreateTopics", (void *)c_objs[i], newt->replica_assignment, newt->num_partitions, newt->num_partitions, "num_partitions")) { i++; goto err; } } if (newt->config) { if (!Admin_config_dict_to_c((void *)c_objs[i], newt->config, "newtopic_set_config")) { i++; goto err; } } } /* Use librdkafka's background thread queue to automatically dispatch * Admin_background_event_cb() when the admin operation is finished. */ rkqu = rd_kafka_queue_get_background(self->rk); /* * Call CreateTopics. * * We need to set up a CallState and release GIL here since * the background_event_cb may be triggered immediately. */ CallState_begin(self, &cs); rd_kafka_CreateTopics(self->rk, c_objs, tcnt, c_options, rkqu); CallState_end(self, &cs); rd_kafka_NewTopic_destroy_array(c_objs, tcnt); rd_kafka_AdminOptions_destroy(c_options); free(c_objs); rd_kafka_queue_destroy(rkqu); /* drop reference from get_background */ Py_RETURN_NONE; err: rd_kafka_NewTopic_destroy_array(c_objs, i); rd_kafka_AdminOptions_destroy(c_options); free(c_objs); Py_DECREF(future); /* from options_to_c() */ return NULL; } /** * @brief delete_topics */ static PyObject *Admin_delete_topics (Handle *self, PyObject *args, PyObject *kwargs) { PyObject *topics = NULL, *future; static char *kws[] = { "topics", "future", /* options */ "request_timeout", "operation_timeout", NULL }; struct Admin_options options = Admin_options_INITIALIZER; rd_kafka_AdminOptions_t *c_options = NULL; int tcnt; int i; rd_kafka_DeleteTopic_t **c_objs; rd_kafka_queue_t *rkqu; CallState cs; /* topics is a list of strings. */ if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O!O|ff", kws, (PyObject *)&PyList_Type, &topics, &future, &options.request_timeout, &options.operation_timeout)) return NULL; if (!PyList_Check(topics) || (tcnt = (int)PyList_Size(topics)) < 1) { PyErr_SetString(PyExc_ValueError, "Expected non-empty list of topic strings"); return NULL; } c_options = Admin_options_to_c(self, RD_KAFKA_ADMIN_OP_DELETETOPICS, &options, future); if (!c_options) return NULL; /* Exception raised by options_to_c() */ /* options_to_c() sets opaque to the future object, which is used in the * background_event_cb to set the results on the future as the * admin operation is finished, so we need to keep our own refcount. */ Py_INCREF(future); /* * Parse the list of strings and convert to corresponding C types. */ c_objs = malloc(sizeof(*c_objs) * tcnt); for (i = 0 ; i < tcnt ; i++) { PyObject *topic = PyList_GET_ITEM(topics, i); PyObject *utopic; PyObject *uotopic = NULL; if (topic == Py_None || !(utopic = cfl_PyObject_Unistr(topic))) { PyErr_Format(PyExc_ValueError, "Expected list of topic strings, " "not %s", ((PyTypeObject *)PyObject_Type(topic))-> tp_name); goto err; } c_objs[i] = rd_kafka_DeleteTopic_new( cfl_PyUnistr_AsUTF8(utopic, &uotopic)); Py_XDECREF(utopic); Py_XDECREF(uotopic); } /* Use librdkafka's background thread queue to automatically dispatch * Admin_background_event_cb() when the admin operation is finished. */ rkqu = rd_kafka_queue_get_background(self->rk); /* * Call DeleteTopics. * * We need to set up a CallState and release GIL here since * the event_cb may be triggered immediately. */ CallState_begin(self, &cs); rd_kafka_DeleteTopics(self->rk, c_objs, tcnt, c_options, rkqu); CallState_end(self, &cs); rd_kafka_DeleteTopic_destroy_array(c_objs, i); rd_kafka_AdminOptions_destroy(c_options); free(c_objs); rd_kafka_queue_destroy(rkqu); /* drop reference from get_background */ Py_RETURN_NONE; err: rd_kafka_DeleteTopic_destroy_array(c_objs, i); rd_kafka_AdminOptions_destroy(c_options); free(c_objs); Py_DECREF(future); /* from options_to_c() */ return NULL; } /** * @brief create_partitions */ static PyObject *Admin_create_partitions (Handle *self, PyObject *args, PyObject *kwargs) { PyObject *topics = NULL, *future, *validate_only_obj = NULL; static char *kws[] = { "topics", "future", /* options */ "validate_only", "request_timeout", "operation_timeout", NULL }; struct Admin_options options = Admin_options_INITIALIZER; rd_kafka_AdminOptions_t *c_options = NULL; int tcnt; int i; rd_kafka_NewPartitions_t **c_objs; rd_kafka_queue_t *rkqu; CallState cs; /* topics is a list of NewPartitions_t objects. */ if (!PyArg_ParseTupleAndKeywords(args, kwargs, "OO|Off", kws, &topics, &future, &validate_only_obj, &options.request_timeout, &options.operation_timeout)) return NULL; if (!PyList_Check(topics) || (tcnt = (int)PyList_Size(topics)) < 1) { PyErr_SetString(PyExc_ValueError, "Expected non-empty list of " "NewPartitions objects"); return NULL; } if (validate_only_obj && !cfl_PyBool_get(validate_only_obj, "validate_only", &options.validate_only)) return NULL; c_options = Admin_options_to_c(self, RD_KAFKA_ADMIN_OP_CREATEPARTITIONS, &options, future); if (!c_options) return NULL; /* Exception raised by options_to_c() */ /* options_to_c() sets future as the opaque, which is used in the * event_cb to set the results on the future as the admin operation * is finished, so we need to keep our own refcount. */ Py_INCREF(future); /* * Parse the list of NewPartitions and convert to corresponding C types. */ c_objs = malloc(sizeof(*c_objs) * tcnt); for (i = 0 ; i < tcnt ; i++) { NewPartitions *newp = (NewPartitions *)PyList_GET_ITEM(topics, i); char errstr[512]; int r; r = PyObject_IsInstance((PyObject *)newp, (PyObject *)&NewPartitionsType); if (r == -1) goto err; /* Exception raised by IsInstance() */ else if (r == 0) { PyErr_SetString(PyExc_ValueError, "Expected list of " "NewPartitions objects"); goto err; } c_objs[i] = rd_kafka_NewPartitions_new(newp->topic, newp->new_total_count, errstr, sizeof(errstr)); if (!c_objs[i]) { PyErr_Format(PyExc_ValueError, "Invalid NewPartitions(%s): %s", newp->topic, errstr); goto err; } if (newp->replica_assignment && !Admin_set_replica_assignment( "CreatePartitions", (void *)c_objs[i], newp->replica_assignment, 1, newp->new_total_count, "new_total_count - " "existing partition count")) { i++; goto err; /* Exception raised by set_..() */ } } /* Use librdkafka's background thread queue to automatically dispatch * Admin_background_event_cb() when the admin operation is finished. */ rkqu = rd_kafka_queue_get_background(self->rk); /* * Call CreatePartitions * * We need to set up a CallState and release GIL here since * the event_cb may be triggered immediately. */ CallState_begin(self, &cs); rd_kafka_CreatePartitions(self->rk, c_objs, tcnt, c_options, rkqu); CallState_end(self, &cs); rd_kafka_NewPartitions_destroy_array(c_objs, tcnt); rd_kafka_AdminOptions_destroy(c_options); free(c_objs); rd_kafka_queue_destroy(rkqu); /* drop reference from get_background */ Py_RETURN_NONE; err: rd_kafka_NewPartitions_destroy_array(c_objs, i); rd_kafka_AdminOptions_destroy(c_options); free(c_objs); Py_DECREF(future); /* from options_to_c() */ return NULL; } /** * @brief describe_configs */ static PyObject *Admin_describe_configs (Handle *self, PyObject *args, PyObject *kwargs) { PyObject *resources, *future; static char *kws[] = { "resources", "future", /* options */ "request_timeout", "broker", NULL }; struct Admin_options options = Admin_options_INITIALIZER; rd_kafka_AdminOptions_t *c_options = NULL; PyObject *ConfigResource_type; int cnt, i; rd_kafka_ConfigResource_t **c_objs; rd_kafka_queue_t *rkqu; CallState cs; /* topics is a list of NewPartitions_t objects. */ if (!PyArg_ParseTupleAndKeywords(args, kwargs, "OO|fi", kws, &resources, &future, &options.request_timeout, &options.broker)) return NULL; if (!PyList_Check(resources) || (cnt = (int)PyList_Size(resources)) < 1) { PyErr_SetString(PyExc_ValueError, "Expected non-empty list of ConfigResource " "objects"); return NULL; } c_options = Admin_options_to_c(self, RD_KAFKA_ADMIN_OP_DESCRIBECONFIGS, &options, future); if (!c_options) return NULL; /* Exception raised by options_to_c() */ /* Look up the ConfigResource class so we can check if the provided * topics are of correct type. * Since this is not in the fast path we treat ourselves * to the luxury of looking up this for each call. */ ConfigResource_type = cfl_PyObject_lookup("confluent_kafka.admin", "ConfigResource"); if (!ConfigResource_type) { rd_kafka_AdminOptions_destroy(c_options); return NULL; /* Exception raised by lookup() */ } /* options_to_c() sets future as the opaque, which is used in the * event_cb to set the results on the future as the admin operation * is finished, so we need to keep our own refcount. */ Py_INCREF(future); /* * Parse the list of ConfigResources and convert to * corresponding C types. */ c_objs = malloc(sizeof(*c_objs) * cnt); for (i = 0 ; i < cnt ; i++) { PyObject *res = PyList_GET_ITEM(resources, i); int r; int restype; char *resname; r = PyObject_IsInstance(res, ConfigResource_type); if (r == -1) goto err; /* Exception raised by IsInstance() */ else if (r == 0) { PyErr_SetString(PyExc_ValueError, "Expected list of " "ConfigResource objects"); goto err; } if (!cfl_PyObject_GetInt(res, "restype_int", &restype, 0, 0)) goto err; if (!cfl_PyObject_GetString(res, "name", &resname, NULL, 0)) goto err; c_objs[i] = rd_kafka_ConfigResource_new( (rd_kafka_ResourceType_t)restype, resname); if (!c_objs[i]) { PyErr_Format(PyExc_ValueError, "Invalid ConfigResource(%d,%s)", restype, resname); free(resname); goto err; } free(resname); } /* Use librdkafka's background thread queue to automatically dispatch * Admin_background_event_cb() when the admin operation is finished. */ rkqu = rd_kafka_queue_get_background(self->rk); /* * Call DescribeConfigs * * We need to set up a CallState and release GIL here since * the event_cb may be triggered immediately. */ CallState_begin(self, &cs); rd_kafka_DescribeConfigs(self->rk, c_objs, cnt, c_options, rkqu); CallState_end(self, &cs); rd_kafka_ConfigResource_destroy_array(c_objs, cnt); rd_kafka_AdminOptions_destroy(c_options); free(c_objs); rd_kafka_queue_destroy(rkqu); /* drop reference from get_background */ Py_DECREF(ConfigResource_type); /* from lookup() */ Py_RETURN_NONE; err: rd_kafka_ConfigResource_destroy_array(c_objs, i); rd_kafka_AdminOptions_destroy(c_options); free(c_objs); Py_DECREF(ConfigResource_type); /* from lookup() */ Py_DECREF(future); /* from options_to_c() */ return NULL; } /** * @brief alter_configs */ static PyObject *Admin_alter_configs (Handle *self, PyObject *args, PyObject *kwargs) { PyObject *resources, *future; PyObject *validate_only_obj = NULL; static char *kws[] = { "resources", "future", /* options */ "validate_only", "request_timeout", "broker", NULL }; struct Admin_options options = Admin_options_INITIALIZER; rd_kafka_AdminOptions_t *c_options = NULL; PyObject *ConfigResource_type; int cnt, i; rd_kafka_ConfigResource_t **c_objs; rd_kafka_queue_t *rkqu; CallState cs; /* topics is a list of NewPartitions_t objects. */ if (!PyArg_ParseTupleAndKeywords(args, kwargs, "OO|Ofi", kws, &resources, &future, &validate_only_obj, &options.request_timeout, &options.broker)) return NULL; if (!PyList_Check(resources) || (cnt = (int)PyList_Size(resources)) < 1) { PyErr_SetString(PyExc_ValueError, "Expected non-empty list of ConfigResource " "objects"); return NULL; } if (validate_only_obj && !cfl_PyBool_get(validate_only_obj, "validate_only", &options.validate_only)) return NULL; c_options = Admin_options_to_c(self, RD_KAFKA_ADMIN_OP_ALTERCONFIGS, &options, future); if (!c_options) return NULL; /* Exception raised by options_to_c() */ /* Look up the ConfigResource class so we can check if the provided * topics are of correct type. * Since this is not in the fast path we treat ourselves * to the luxury of looking up this for each call. */ ConfigResource_type = cfl_PyObject_lookup("confluent_kafka.admin", "ConfigResource"); if (!ConfigResource_type) { rd_kafka_AdminOptions_destroy(c_options); return NULL; /* Exception raised by find() */ } /* options_to_c() sets future as the opaque, which is used in the * event_cb to set the results on the future as the admin operation * is finished, so we need to keep our own refcount. */ Py_INCREF(future); /* * Parse the list of ConfigResources and convert to * corresponding C types. */ c_objs = malloc(sizeof(*c_objs) * cnt); for (i = 0 ; i < cnt ; i++) { PyObject *res = PyList_GET_ITEM(resources, i); int r; int restype; char *resname; PyObject *dict; r = PyObject_IsInstance(res, ConfigResource_type); if (r == -1) goto err; /* Exception raised by IsInstance() */ else if (r == 0) { PyErr_SetString(PyExc_ValueError, "Expected list of " "ConfigResource objects"); goto err; } if (!cfl_PyObject_GetInt(res, "restype_int", &restype, 0, 0)) goto err; if (!cfl_PyObject_GetString(res, "name", &resname, NULL, 0)) goto err; c_objs[i] = rd_kafka_ConfigResource_new( (rd_kafka_ResourceType_t)restype, resname); if (!c_objs[i]) { PyErr_Format(PyExc_ValueError, "Invalid ConfigResource(%d,%s)", restype, resname); free(resname); goto err; } free(resname); /* * Translate and apply config entries in the various dicts. */ if (!cfl_PyObject_GetAttr(res, "set_config_dict", &dict, &PyDict_Type, 1)) { i++; goto err; } if (!Admin_config_dict_to_c(c_objs[i], dict, "set_config")) { Py_DECREF(dict); i++; goto err; } Py_DECREF(dict); } /* Use librdkafka's background thread queue to automatically dispatch * Admin_background_event_cb() when the admin operation is finished. */ rkqu = rd_kafka_queue_get_background(self->rk); /* * Call AlterConfigs * * We need to set up a CallState and release GIL here since * the event_cb may be triggered immediately. */ CallState_begin(self, &cs); rd_kafka_AlterConfigs(self->rk, c_objs, cnt, c_options, rkqu); CallState_end(self, &cs); rd_kafka_ConfigResource_destroy_array(c_objs, cnt); rd_kafka_AdminOptions_destroy(c_options); free(c_objs); rd_kafka_queue_destroy(rkqu); /* drop reference from get_background */ Py_DECREF(ConfigResource_type); /* from lookup() */ Py_RETURN_NONE; err: rd_kafka_ConfigResource_destroy_array(c_objs, i); rd_kafka_AdminOptions_destroy(c_options); free(c_objs); Py_DECREF(ConfigResource_type); /* from lookup() */ Py_DECREF(future); /* from options_to_c() */ return NULL; } /** * @brief Call rd_kafka_poll() and keep track of crashing callbacks. * @returns -1 if callback crashed (or poll() failed), else the number * of events served. */ static int Admin_poll0 (Handle *self, int tmout) { int r; CallState cs; CallState_begin(self, &cs); r = rd_kafka_poll(self->rk, tmout); if (!CallState_end(self, &cs)) { return -1; } return r; } static PyObject *Admin_poll (Handle *self, PyObject *args, PyObject *kwargs) { double tmout; int r; static char *kws[] = { "timeout", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "d", kws, &tmout)) return NULL; r = Admin_poll0(self, (int)(tmout * 1000)); if (r == -1) return NULL; return cfl_PyInt_FromInt(r); } static PyMethodDef Admin_methods[] = { { "create_topics", (PyCFunction)Admin_create_topics, METH_VARARGS|METH_KEYWORDS, ".. py:function:: create_topics(topics, future, [validate_only, request_timeout, operation_timeout])\n" "\n" " Create new topics.\n" "\n" " This method should not be used directly, use confluent_kafka.AdminClient.create_topics()\n" }, { "delete_topics", (PyCFunction)Admin_delete_topics, METH_VARARGS|METH_KEYWORDS, ".. py:function:: delete_topics(topics, future, [request_timeout, operation_timeout])\n" "\n" " This method should not be used directly, use confluent_kafka.AdminClient.delete_topics()\n" }, { "create_partitions", (PyCFunction)Admin_create_partitions, METH_VARARGS|METH_KEYWORDS, ".. py:function:: create_partitions(topics, future, [validate_only, request_timeout, operation_timeout])\n" "\n" " This method should not be used directly, use confluent_kafka.AdminClient.create_partitions()\n" }, { "describe_configs", (PyCFunction)Admin_describe_configs, METH_VARARGS|METH_KEYWORDS, ".. py:function:: describe_configs(resources, future, [request_timeout, broker])\n" "\n" " This method should not be used directly, use confluent_kafka.AdminClient.describe_configs()\n" }, { "alter_configs", (PyCFunction)Admin_alter_configs, METH_VARARGS|METH_KEYWORDS, ".. py:function:: alter_configs(resources, future, [request_timeout, broker])\n" "\n" " This method should not be used directly, use confluent_kafka.AdminClient.alter_configs()\n" }, { "poll", (PyCFunction)Admin_poll, METH_VARARGS|METH_KEYWORDS, ".. py:function:: poll([timeout])\n" "\n" " Polls the Admin client for event callbacks, such as error_cb, " "stats_cb, etc, if registered.\n" "\n" " There is no need to call poll() if no callbacks have been registered.\n" "\n" " :param float timeout: Maximum time to block waiting for events. (Seconds)\n" " :returns: Number of events processed (callbacks served)\n" " :rtype: int\n" "\n" }, { "list_topics", (PyCFunction)list_topics, METH_VARARGS|METH_KEYWORDS, list_topics_doc }, { NULL } }; static Py_ssize_t Admin__len__ (Handle *self) { return rd_kafka_outq_len(self->rk); } static PySequenceMethods Admin_seq_methods = { (lenfunc)Admin__len__ /* sq_length */ }; /** * @brief Convert C topic_result_t array to topic-indexed dict. */ static PyObject * Admin_c_topic_result_to_py (const rd_kafka_topic_result_t **c_result, size_t cnt) { PyObject *result; size_t ti; result = PyDict_New(); for (ti = 0 ; ti < cnt ; ti++) { PyObject *error; error = KafkaError_new_or_None( rd_kafka_topic_result_error(c_result[ti]), rd_kafka_topic_result_error_string(c_result[ti])); PyDict_SetItemString( result, rd_kafka_topic_result_name(c_result[ti]), error); Py_DECREF(error); } return result; } /** * @brief Convert C ConfigEntry array to dict of py ConfigEntry objects. */ static PyObject * Admin_c_ConfigEntries_to_py (PyObject *ConfigEntry_type, const rd_kafka_ConfigEntry_t **c_configs, size_t config_cnt) { PyObject *dict; size_t ci; dict = PyDict_New(); for (ci = 0 ; ci < config_cnt ; ci++) { PyObject *kwargs, *args; const rd_kafka_ConfigEntry_t *ent = c_configs[ci]; const rd_kafka_ConfigEntry_t **c_synonyms; PyObject *entry, *synonyms; size_t synonym_cnt; const char *val; kwargs = PyDict_New(); cfl_PyDict_SetString(kwargs, "name", rd_kafka_ConfigEntry_name(ent)); val = rd_kafka_ConfigEntry_value(ent); if (val) cfl_PyDict_SetString(kwargs, "value", val); else PyDict_SetItemString(kwargs, "value", Py_None); cfl_PyDict_SetInt(kwargs, "source", (int)rd_kafka_ConfigEntry_source(ent)); cfl_PyDict_SetInt(kwargs, "is_read_only", rd_kafka_ConfigEntry_is_read_only(ent)); cfl_PyDict_SetInt(kwargs, "is_default", rd_kafka_ConfigEntry_is_default(ent)); cfl_PyDict_SetInt(kwargs, "is_sensitive", rd_kafka_ConfigEntry_is_sensitive(ent)); cfl_PyDict_SetInt(kwargs, "is_synonym", rd_kafka_ConfigEntry_is_synonym(ent)); c_synonyms = rd_kafka_ConfigEntry_synonyms(ent, &synonym_cnt); synonyms = Admin_c_ConfigEntries_to_py(ConfigEntry_type, c_synonyms, synonym_cnt); if (!synonyms) { Py_DECREF(kwargs); Py_DECREF(dict); return NULL; } PyDict_SetItemString(kwargs, "synonyms", synonyms); Py_DECREF(synonyms); args = PyTuple_New(0); entry = PyObject_Call(ConfigEntry_type, args, kwargs); Py_DECREF(args); Py_DECREF(kwargs); if (!entry) { Py_DECREF(dict); return NULL; } PyDict_SetItemString(dict, rd_kafka_ConfigEntry_name(ent), entry); Py_DECREF(entry); } return dict; } /** * @brief Convert C ConfigResource array to dict indexed by ConfigResource * with the value of dict(ConfigEntry). * * @param ret_configs If true, return configs rather than None. */ static PyObject * Admin_c_ConfigResource_result_to_py (const rd_kafka_ConfigResource_t **c_resources, size_t cnt, int ret_configs) { PyObject *result; PyObject *ConfigResource_type; PyObject *ConfigEntry_type; size_t ri; ConfigResource_type = cfl_PyObject_lookup("confluent_kafka.admin", "ConfigResource"); if (!ConfigResource_type) return NULL; ConfigEntry_type = cfl_PyObject_lookup("confluent_kafka.admin", "ConfigEntry"); if (!ConfigEntry_type) { Py_DECREF(ConfigResource_type); return NULL; } result = PyDict_New(); for (ri = 0 ; ri < cnt ; ri++) { const rd_kafka_ConfigResource_t *c_res = c_resources[ri]; const rd_kafka_ConfigEntry_t **c_configs; PyObject *kwargs, *wrap; PyObject *key; PyObject *configs, *error; size_t config_cnt; c_configs = rd_kafka_ConfigResource_configs(c_res, &config_cnt); configs = Admin_c_ConfigEntries_to_py(ConfigEntry_type, c_configs, config_cnt); if (!configs) goto err; error = KafkaError_new_or_None( rd_kafka_ConfigResource_error(c_res), rd_kafka_ConfigResource_error_string(c_res)); kwargs = PyDict_New(); cfl_PyDict_SetInt(kwargs, "restype", (int)rd_kafka_ConfigResource_type(c_res)); cfl_PyDict_SetString(kwargs, "name", rd_kafka_ConfigResource_name(c_res)); PyDict_SetItemString(kwargs, "described_configs", configs); PyDict_SetItemString(kwargs, "error", error); Py_DECREF(error); /* Instantiate ConfigResource */ wrap = PyTuple_New(0); key = PyObject_Call(ConfigResource_type, wrap, kwargs); Py_DECREF(wrap); Py_DECREF(kwargs); if (!key) { Py_DECREF(configs); goto err; } /* Set result to dict[ConfigResource(..)] = configs | None * depending on ret_configs */ if (ret_configs) PyDict_SetItem(result, key, configs); else PyDict_SetItem(result, key, Py_None); Py_DECREF(configs); Py_DECREF(key); } return result; err: Py_DECREF(ConfigResource_type); Py_DECREF(ConfigEntry_type); Py_DECREF(result); return NULL; } /** * @brief Event callback triggered from librdkafka's background thread * when Admin API results are ready. * * The rkev opaque (not \p opaque) is the future PyObject * which we'll set the result on. * * @locality background rdkafka thread */ static void Admin_background_event_cb (rd_kafka_t *rk, rd_kafka_event_t *rkev, void *opaque) { PyObject *future = (PyObject *)rd_kafka_event_opaque(rkev); const rd_kafka_topic_result_t **c_topic_res; size_t c_topic_res_cnt; PyGILState_STATE gstate; PyObject *error, *method, *ret; PyObject *result = NULL; PyObject *exctype = NULL, *exc = NULL, *excargs = NULL; /* Acquire GIL */ gstate = PyGILState_Ensure(); /* Generic request-level error handling. */ error = KafkaError_new_or_None(rd_kafka_event_error(rkev), rd_kafka_event_error_string(rkev)); if (error != Py_None) goto raise; switch (rd_kafka_event_type(rkev)) { case RD_KAFKA_EVENT_CREATETOPICS_RESULT: { const rd_kafka_CreateTopics_result_t *c_res; c_res = rd_kafka_event_CreateTopics_result(rkev); c_topic_res = rd_kafka_CreateTopics_result_topics( c_res, &c_topic_res_cnt); result = Admin_c_topic_result_to_py(c_topic_res, c_topic_res_cnt); break; } case RD_KAFKA_EVENT_DELETETOPICS_RESULT: { const rd_kafka_DeleteTopics_result_t *c_res; c_res = rd_kafka_event_DeleteTopics_result(rkev); c_topic_res = rd_kafka_DeleteTopics_result_topics( c_res, &c_topic_res_cnt); result = Admin_c_topic_result_to_py(c_topic_res, c_topic_res_cnt); break; } case RD_KAFKA_EVENT_CREATEPARTITIONS_RESULT: { const rd_kafka_CreatePartitions_result_t *c_res; c_res = rd_kafka_event_CreatePartitions_result(rkev); c_topic_res = rd_kafka_CreatePartitions_result_topics( c_res, &c_topic_res_cnt); result = Admin_c_topic_result_to_py(c_topic_res, c_topic_res_cnt); break; } case RD_KAFKA_EVENT_DESCRIBECONFIGS_RESULT: { const rd_kafka_ConfigResource_t **c_resources; size_t resource_cnt; c_resources = rd_kafka_DescribeConfigs_result_resources( rd_kafka_event_DescribeConfigs_result(rkev), &resource_cnt); result = Admin_c_ConfigResource_result_to_py( c_resources, resource_cnt, 1/* return configs */); break; } case RD_KAFKA_EVENT_ALTERCONFIGS_RESULT: { const rd_kafka_ConfigResource_t **c_resources; size_t resource_cnt; c_resources = rd_kafka_AlterConfigs_result_resources( rd_kafka_event_AlterConfigs_result(rkev), &resource_cnt); result = Admin_c_ConfigResource_result_to_py( c_resources, resource_cnt, 0/* return None instead of (the empty) configs */); break; } default: Py_DECREF(error); /* Py_None */ error = KafkaError_new0(RD_KAFKA_RESP_ERR__UNSUPPORTED_FEATURE, "Unsupported event type %s", rd_kafka_event_name(rkev)); goto raise; } if (!result) { Py_DECREF(error); /* Py_None */ if (!PyErr_Occurred()) { error = KafkaError_new0(RD_KAFKA_RESP_ERR__INVALID_ARG, "BUG: Event %s handling failed " "but no exception raised", rd_kafka_event_name(rkev)); } else { /* Extract the exception type and message * and pass it as an error to raise and subsequently * the future. * We loose the backtrace here unfortunately, so * these errors are a bit cryptic. */ PyObject *trace = NULL; /* Fetch (and clear) currently raised exception */ PyErr_Fetch(&exctype, &error, &trace); Py_XDECREF(trace); } goto raise; } /* * Call future.set_result() */ method = cfl_PyUnistr(_FromString("set_result")); ret = PyObject_CallMethodObjArgs(future, method, result, NULL); Py_XDECREF(ret); Py_XDECREF(result); Py_DECREF(future); Py_DECREF(method); /* Release GIL */ PyGILState_Release(gstate); rd_kafka_event_destroy(rkev); return; raise: /* * Pass an exception to future.set_exception(). */ if (!exctype) { /* No previous exception raised, use KafkaException */ exctype = KafkaException; Py_INCREF(exctype); } /* Create a new exception based on exception type and error. */ excargs = PyTuple_New(1); Py_INCREF(error); /* tuple's reference */ PyTuple_SET_ITEM(excargs, 0, error); exc = ((PyTypeObject *)exctype)->tp_new( (PyTypeObject *)exctype, NULL, NULL); exc->ob_type->tp_init(exc, excargs, NULL); Py_DECREF(excargs); Py_XDECREF(exctype); Py_XDECREF(error); /* from error source above */ /* * Call future.set_exception(exc) */ method = cfl_PyUnistr(_FromString("set_exception")); ret = PyObject_CallMethodObjArgs(future, method, exc, NULL); Py_XDECREF(ret); Py_DECREF(exc); Py_DECREF(future); Py_DECREF(method); /* Release GIL */ PyGILState_Release(gstate); rd_kafka_event_destroy(rkev); } static int Admin_init (PyObject *selfobj, PyObject *args, PyObject *kwargs) { Handle *self = (Handle *)selfobj; char errstr[256]; rd_kafka_conf_t *conf; if (self->rk) { PyErr_SetString(PyExc_RuntimeError, "Admin already __init__:ialized"); return -1; } self->type = PY_RD_KAFKA_ADMIN; if (!(conf = common_conf_setup(PY_RD_KAFKA_ADMIN, self, args, kwargs))) return -1; rd_kafka_conf_set_background_event_cb(conf, Admin_background_event_cb); /* There is no dedicated ADMIN client type in librdkafka, the Admin * API can use either PRODUCER or CONSUMER. * We choose PRODUCER since it is more lightweight than a * CONSUMER instance. */ self->rk = rd_kafka_new(RD_KAFKA_PRODUCER, conf, errstr, sizeof(errstr)); if (!self->rk) { cfl_PyErr_Format(rd_kafka_last_error(), "Failed to create admin client: %s", errstr); rd_kafka_conf_destroy(conf); return -1; } /* Forward log messages to poll queue */ if (self->logger) rd_kafka_set_log_queue(self->rk, NULL); return 0; } static PyObject *Admin_new (PyTypeObject *type, PyObject *args, PyObject *kwargs) { return type->tp_alloc(type, 0); } PyTypeObject AdminType = { PyVarObject_HEAD_INIT(NULL, 0) "cimpl._AdminClientImpl", /*tp_name*/ sizeof(Handle), /*tp_basicsize*/ 0, /*tp_itemsize*/ (destructor)Admin_dealloc, /*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ 0, /*tp_compare*/ 0, /*tp_repr*/ 0, /*tp_as_number*/ &Admin_seq_methods, /*tp_as_sequence*/ 0, /*tp_as_mapping*/ 0, /*tp_hash */ 0, /*tp_call*/ 0, /*tp_str*/ 0, /*tp_getattro*/ 0, /*tp_setattro*/ 0, /*tp_as_buffer*/ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_GC, /*tp_flags*/ "Kafka Admin Client\n" "\n" ".. py:function:: Admin(**kwargs)\n" "\n" " Create new AdminClient instance using provided configuration dict.\n" "\n" "This class should not be used directly, use confluent_kafka.AdminClient\n." "\n" ".. py:function:: len()\n" "\n" " :returns: Number Kafka protocol requests waiting to be delivered to, or returned from, broker.\n" " :rtype: int\n" "\n", /*tp_doc*/ (traverseproc)Admin_traverse, /* tp_traverse */ (inquiry)Admin_clear, /* tp_clear */ 0, /* tp_richcompare */ 0, /* tp_weaklistoffset */ 0, /* tp_iter */ 0, /* tp_iternext */ Admin_methods, /* tp_methods */ 0, /* tp_members */ 0, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ Admin_init, /* tp_init */ 0, /* tp_alloc */ Admin_new /* tp_new */ }; confluent-kafka-1.1.0/confluent_kafka/src/AdminTypes.c0000644000076500000240000004302013446646122023026 0ustar ryanstaff00000000000000/** * Copyright 2018 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include "confluent_kafka.h" #include /**************************************************************************** * * * Admin Client types * * ****************************************************************************/ /**************************************************************************** * * * NewTopic * * * * ****************************************************************************/ static int NewTopic_clear (NewTopic *self) { if (self->topic) { free(self->topic); self->topic = NULL; } if (self->replica_assignment) { Py_DECREF(self->replica_assignment); self->replica_assignment = NULL; } if (self->config) { Py_DECREF(self->config); self->config = NULL; } return 0; } static void NewTopic_dealloc (NewTopic *self) { PyObject_GC_UnTrack(self); NewTopic_clear(self); Py_TYPE(self)->tp_free((PyObject *)self); } static int NewTopic_init (PyObject *self0, PyObject *args, PyObject *kwargs) { NewTopic *self = (NewTopic *)self0; const char *topic; static char *kws[] = { "topic", "num_partitions", "replication_factor", "replica_assignment", "config", NULL }; self->replication_factor = -1; self->replica_assignment = NULL; self->config = NULL; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "si|iOO", kws, &topic, &self->num_partitions, &self->replication_factor, &self->replica_assignment, &self->config)) return -1; if (self->config) { if (!PyDict_Check(self->config)) { PyErr_SetString(PyExc_TypeError, "config must be a dict of strings"); return -1; } Py_INCREF(self->config); } Py_XINCREF(self->replica_assignment); self->topic = strdup(topic); return 0; } static PyObject *NewTopic_new (PyTypeObject *type, PyObject *args, PyObject *kwargs) { PyObject *self = type->tp_alloc(type, 1); return self; } static int NewTopic_traverse (NewTopic *self, visitproc visit, void *arg) { if (self->replica_assignment) Py_VISIT(self->replica_assignment); if (self->config) Py_VISIT(self->config); return 0; } static PyMemberDef NewTopic_members[] = { { "topic", T_STRING, offsetof(NewTopic, topic), READONLY, ":py:attribute:topic - Topic name (string)" }, { "num_partitions", T_INT, offsetof(NewTopic, num_partitions), 0, ":py:attribute: Number of partitions (int)" }, { "replication_factor", T_INT, offsetof(NewTopic, replication_factor), 0, " :py:attribute: Replication factor (int).\n" "Must be set to -1 if a replica_assignment is specified.\n" }, { "replica_assignment", T_OBJECT, offsetof(NewTopic, replica_assignment), 0, ":py:attribute: Replication assignment (list of lists).\n" "The outer list index represents the partition index, the inner " "list is the replica assignment (broker ids) for that partition.\n" "replication_factor and replica_assignment are mutually exclusive.\n" }, { "config", T_OBJECT, offsetof(NewTopic, config), 0, ":py:attribute: Optional topic configuration.\n" "See http://kafka.apache.org/documentation.html#topicconfigs.\n" }, { NULL } }; static PyObject *NewTopic_str0 (NewTopic *self) { return cfl_PyUnistr( _FromFormat("NewTopic(topic=%s,num_partitions=%d)", self->topic, self->num_partitions)); } static PyObject * NewTopic_richcompare (NewTopic *self, PyObject *o2, int op) { NewTopic *a = self, *b; int tr, pr; int r; PyObject *result; if (Py_TYPE(o2) != Py_TYPE(self)) { PyErr_SetNone(PyExc_NotImplementedError); return NULL; } b = (NewTopic *)o2; tr = strcmp(a->topic, b->topic); pr = a->num_partitions - b->num_partitions; switch (op) { case Py_LT: r = tr < 0 || (tr == 0 && pr < 0); break; case Py_LE: r = tr < 0 || (tr == 0 && pr <= 0); break; case Py_EQ: r = (tr == 0 && pr == 0); break; case Py_NE: r = (tr != 0 || pr != 0); break; case Py_GT: r = tr > 0 || (tr == 0 && pr > 0); break; case Py_GE: r = tr > 0 || (tr == 0 && pr >= 0); break; default: r = 0; break; } result = r ? Py_True : Py_False; Py_INCREF(result); return result; } static long NewTopic_hash (NewTopic *self) { PyObject *topic = cfl_PyUnistr(_FromString(self->topic)); long r = PyObject_Hash(topic) ^ self->num_partitions; Py_DECREF(topic); return r; } PyTypeObject NewTopicType = { PyVarObject_HEAD_INIT(NULL, 0) "cimpl.NewTopic", /*tp_name*/ sizeof(NewTopic), /*tp_basicsize*/ 0, /*tp_itemsize*/ (destructor)NewTopic_dealloc, /*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ 0, /*tp_compare*/ (reprfunc)NewTopic_str0, /*tp_repr*/ 0, /*tp_as_number*/ 0, /*tp_as_sequence*/ 0, /*tp_as_mapping*/ (hashfunc)NewTopic_hash, /*tp_hash */ 0, /*tp_call*/ 0, /*tp_str*/ PyObject_GenericGetAttr, /*tp_getattro*/ 0, /*tp_setattro*/ 0, /*tp_as_buffer*/ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_GC, /*tp_flags*/ "NewTopic specifies per-topic settings for passing to " "passed to AdminClient.create_topics().\n" "\n" ".. py:function:: NewTopic(topic, num_partitions, [replication_factor], [replica_assignment], [config])\n" "\n" " Instantiate a NewTopic object.\n" "\n" " :param string topic: Topic name\n" " :param int num_partitions: Number of partitions to create\n" " :param int replication_factor: Replication factor of partitions, or -1 if replica_assignment is used.\n" " :param list replica_assignment: List of lists with the replication assignment for each new partition.\n" " :param dict config: Dict (str:str) of topic configuration. See http://kafka.apache.org/documentation.html#topicconfigs\n" " :rtype: NewTopic\n" "\n" "\n", /*tp_doc*/ (traverseproc)NewTopic_traverse, /* tp_traverse */ (inquiry)NewTopic_clear, /* tp_clear */ (richcmpfunc)NewTopic_richcompare, /* tp_richcompare */ 0, /* tp_weaklistoffset */ 0, /* tp_iter */ 0, /* tp_iternext */ 0, /* tp_methods */ NewTopic_members,/* tp_members */ 0, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ NewTopic_init, /* tp_init */ 0, /* tp_alloc */ NewTopic_new /* tp_new */ }; /**************************************************************************** * * * NewPartitions * * * * ****************************************************************************/ static int NewPartitions_clear (NewPartitions *self) { if (self->topic) { free(self->topic); self->topic = NULL; } if (self->replica_assignment) { Py_DECREF(self->replica_assignment); self->replica_assignment = NULL; } return 0; } static void NewPartitions_dealloc (NewPartitions *self) { PyObject_GC_UnTrack(self); NewPartitions_clear(self); Py_TYPE(self)->tp_free((PyObject *)self); } static int NewPartitions_init (PyObject *self0, PyObject *args, PyObject *kwargs) { NewPartitions *self = (NewPartitions *)self0; const char *topic; static char *kws[] = { "topic", "new_total_count", "replica_assignment", NULL }; self->replica_assignment = NULL; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "si|O", kws, &topic, &self->new_total_count, &self->replica_assignment)) return -1; self->topic = strdup(topic); Py_XINCREF(self->replica_assignment); return 0; } static PyObject *NewPartitions_new (PyTypeObject *type, PyObject *args, PyObject *kwargs) { PyObject *self = type->tp_alloc(type, 1); return self; } static int NewPartitions_traverse (NewPartitions *self, visitproc visit, void *arg) { if (self->replica_assignment) Py_VISIT(self->replica_assignment); return 0; } static PyMemberDef NewPartitions_members[] = { { "topic", T_STRING, offsetof(NewPartitions, topic), READONLY, ":py:attribute:topic - Topic name (string)" }, { "new_total_count", T_INT, offsetof(NewPartitions, new_total_count), 0, ":py:attribute: Total number of partitions (int)" }, { "replica_assignment", T_OBJECT, offsetof(NewPartitions, replica_assignment), 0, ":py:attribute: Replication assignment (list of lists).\n" "The outer list index represents the partition index, the inner " "list is the replica assignment (broker ids) for that partition.\n" }, { NULL } }; static PyObject *NewPartitions_str0 (NewPartitions *self) { return cfl_PyUnistr( _FromFormat("NewPartitions(topic=%s,new_total_count=%d)", self->topic, self->new_total_count)); } static PyObject * NewPartitions_richcompare (NewPartitions *self, PyObject *o2, int op) { NewPartitions *a = self, *b; int tr, pr; int r; PyObject *result; if (Py_TYPE(o2) != Py_TYPE(self)) { PyErr_SetNone(PyExc_NotImplementedError); return NULL; } b = (NewPartitions *)o2; tr = strcmp(a->topic, b->topic); pr = a->new_total_count - b->new_total_count; switch (op) { case Py_LT: r = tr < 0 || (tr == 0 && pr < 0); break; case Py_LE: r = tr < 0 || (tr == 0 && pr <= 0); break; case Py_EQ: r = (tr == 0 && pr == 0); break; case Py_NE: r = (tr != 0 || pr != 0); break; case Py_GT: r = tr > 0 || (tr == 0 && pr > 0); break; case Py_GE: r = tr > 0 || (tr == 0 && pr >= 0); break; default: r = 0; break; } result = r ? Py_True : Py_False; Py_INCREF(result); return result; } static long NewPartitions_hash (NewPartitions *self) { PyObject *topic = cfl_PyUnistr(_FromString(self->topic)); long r = PyObject_Hash(topic) ^ self->new_total_count; Py_DECREF(topic); return r; } PyTypeObject NewPartitionsType = { PyVarObject_HEAD_INIT(NULL, 0) "cimpl.NewPartitions", /*tp_name*/ sizeof(NewPartitions), /*tp_basicsize*/ 0, /*tp_itemsize*/ (destructor)NewPartitions_dealloc, /*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ 0, /*tp_compare*/ (reprfunc)NewPartitions_str0, /*tp_repr*/ 0, /*tp_as_number*/ 0, /*tp_as_sequence*/ 0, /*tp_as_mapping*/ (hashfunc)NewPartitions_hash, /*tp_hash */ 0, /*tp_call*/ 0, /*tp_str*/ PyObject_GenericGetAttr, /*tp_getattro*/ 0, /*tp_setattro*/ 0, /*tp_as_buffer*/ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_GC, /*tp_flags*/ "NewPartitions specifies per-topic settings for passing to " "passed to AdminClient.create_partitions().\n" "\n" ".. py:function:: NewPartitions(topic, new_total_count, [replication_factor], [replica_assignment])\n" "\n" " Instantiate a NewPartitions object.\n" "\n" " :param string topic: Topic name\n" " :param int new_total_cnt: Increase the topic's partition count to this value.\n" " :param list replica_assignment: List of lists with the replication assignment for each new partition.\n" " :rtype: NewPartitions\n" "\n" "\n", /*tp_doc*/ (traverseproc)NewPartitions_traverse, /* tp_traverse */ (inquiry)NewPartitions_clear, /* tp_clear */ (richcmpfunc)NewPartitions_richcompare, /* tp_richcompare */ 0, /* tp_weaklistoffset */ 0, /* tp_iter */ 0, /* tp_iternext */ 0, /* tp_methods */ NewPartitions_members,/* tp_members */ 0, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ NewPartitions_init, /* tp_init */ 0, /* tp_alloc */ NewPartitions_new /* tp_new */ }; /** * @brief Finalize type objects */ int AdminTypes_Ready (void) { int r; r = PyType_Ready(&NewTopicType); if (r < 0) return r; r = PyType_Ready(&NewPartitionsType); if (r < 0) return r; return r; } /** * @brief Add Admin types to module */ void AdminTypes_AddObjects (PyObject *m) { Py_INCREF(&NewTopicType); PyModule_AddObject(m, "NewTopic", (PyObject *)&NewTopicType); Py_INCREF(&NewPartitionsType); PyModule_AddObject(m, "NewPartitions", (PyObject *)&NewPartitionsType); /* rd_kafka_ConfigSource_t */ PyModule_AddIntConstant(m, "CONFIG_SOURCE_UNKNOWN_CONFIG", RD_KAFKA_CONFIG_SOURCE_UNKNOWN_CONFIG); PyModule_AddIntConstant(m, "CONFIG_SOURCE_DYNAMIC_TOPIC_CONFIG", RD_KAFKA_CONFIG_SOURCE_DYNAMIC_TOPIC_CONFIG); PyModule_AddIntConstant(m, "CONFIG_SOURCE_DYNAMIC_BROKER_CONFIG", RD_KAFKA_CONFIG_SOURCE_DYNAMIC_BROKER_CONFIG); PyModule_AddIntConstant(m, "CONFIG_SOURCE_DYNAMIC_DEFAULT_BROKER_CONFIG", RD_KAFKA_CONFIG_SOURCE_DYNAMIC_DEFAULT_BROKER_CONFIG); PyModule_AddIntConstant(m, "CONFIG_SOURCE_STATIC_BROKER_CONFIG", RD_KAFKA_CONFIG_SOURCE_STATIC_BROKER_CONFIG); PyModule_AddIntConstant(m, "CONFIG_SOURCE_DEFAULT_CONFIG", RD_KAFKA_CONFIG_SOURCE_DEFAULT_CONFIG); /* rd_kafka_ResourceType_t */ PyModule_AddIntConstant(m, "RESOURCE_UNKNOWN", RD_KAFKA_RESOURCE_UNKNOWN); PyModule_AddIntConstant(m, "RESOURCE_ANY", RD_KAFKA_RESOURCE_ANY); PyModule_AddIntConstant(m, "RESOURCE_TOPIC", RD_KAFKA_RESOURCE_TOPIC); PyModule_AddIntConstant(m, "RESOURCE_GROUP", RD_KAFKA_RESOURCE_GROUP); PyModule_AddIntConstant(m, "RESOURCE_BROKER", RD_KAFKA_RESOURCE_BROKER); } confluent-kafka-1.1.0/confluent_kafka/src/Consumer.c0000644000076500000240000013147713500170126022545 0ustar ryanstaff00000000000000/** * Copyright 2016 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include "confluent_kafka.h" /**************************************************************************** * * * Consumer * * * * ****************************************************************************/ static void Consumer_clear0 (Handle *self) { if (self->u.Consumer.on_assign) { Py_DECREF(self->u.Consumer.on_assign); self->u.Consumer.on_assign = NULL; } if (self->u.Consumer.on_revoke) { Py_DECREF(self->u.Consumer.on_revoke); self->u.Consumer.on_revoke = NULL; } if (self->u.Consumer.on_commit) { Py_DECREF(self->u.Consumer.on_commit); self->u.Consumer.on_commit = NULL; } if (self->u.Consumer.rkqu) { rd_kafka_queue_destroy(self->u.Consumer.rkqu); self->u.Consumer.rkqu = NULL; } } static int Consumer_clear (Handle *self) { Consumer_clear0(self); Handle_clear(self); return 0; } static void Consumer_dealloc (Handle *self) { PyObject_GC_UnTrack(self); Consumer_clear0(self); if (self->rk) { CallState cs; CallState_begin(self, &cs); /* If application has not called c.close() then * rd_kafka_destroy() will, and that might trigger * callbacks to be called from consumer_close(). * This should probably be fixed in librdkafka, * or the application. */ rd_kafka_destroy(self->rk); CallState_end(self, &cs); } Handle_clear(self); Py_TYPE(self)->tp_free((PyObject *)self); } static int Consumer_traverse (Handle *self, visitproc visit, void *arg) { if (self->u.Consumer.on_assign) Py_VISIT(self->u.Consumer.on_assign); if (self->u.Consumer.on_revoke) Py_VISIT(self->u.Consumer.on_revoke); if (self->u.Consumer.on_commit) Py_VISIT(self->u.Consumer.on_commit); Handle_traverse(self, visit, arg); return 0; } static PyObject *Consumer_subscribe (Handle *self, PyObject *args, PyObject *kwargs) { rd_kafka_topic_partition_list_t *topics; static char *kws[] = { "topics", "on_assign", "on_revoke", NULL }; PyObject *tlist, *on_assign = NULL, *on_revoke = NULL; Py_ssize_t pos = 0; rd_kafka_resp_err_t err; if (!self->rk) { PyErr_SetString(PyExc_RuntimeError, "Consumer closed"); return NULL; } if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|OO", kws, &tlist, &on_assign, &on_revoke)) return NULL; if (!PyList_Check(tlist)) { PyErr_Format(PyExc_TypeError, "expected list of topic unicode strings"); return NULL; } if (on_assign && !PyCallable_Check(on_assign)) { PyErr_Format(PyExc_TypeError, "on_assign expects a callable"); return NULL; } if (on_revoke && !PyCallable_Check(on_revoke)) { PyErr_Format(PyExc_TypeError, "on_revoke expects a callable"); return NULL; } topics = rd_kafka_topic_partition_list_new((int)PyList_Size(tlist)); for (pos = 0 ; pos < PyList_Size(tlist) ; pos++) { PyObject *o = PyList_GetItem(tlist, pos); PyObject *uo, *uo8; if (!(uo = cfl_PyObject_Unistr(o))) { PyErr_Format(PyExc_TypeError, "expected list of unicode strings"); rd_kafka_topic_partition_list_destroy(topics); return NULL; } rd_kafka_topic_partition_list_add(topics, cfl_PyUnistr_AsUTF8(uo, &uo8), RD_KAFKA_PARTITION_UA); Py_XDECREF(uo8); Py_DECREF(uo); } err = rd_kafka_subscribe(self->rk, topics); rd_kafka_topic_partition_list_destroy(topics); if (err) { cfl_PyErr_Format(err, "Failed to set subscription: %s", rd_kafka_err2str(err)); return NULL; } /* * Update rebalance callbacks */ if (self->u.Consumer.on_assign) { Py_DECREF(self->u.Consumer.on_assign); self->u.Consumer.on_assign = NULL; } if (on_assign) { self->u.Consumer.on_assign = on_assign; Py_INCREF(self->u.Consumer.on_assign); } if (self->u.Consumer.on_revoke) { Py_DECREF(self->u.Consumer.on_revoke); self->u.Consumer.on_revoke = NULL; } if (on_revoke) { self->u.Consumer.on_revoke = on_revoke; Py_INCREF(self->u.Consumer.on_revoke); } Py_RETURN_NONE; } static PyObject *Consumer_unsubscribe (Handle *self, PyObject *ignore) { rd_kafka_resp_err_t err; if (!self->rk) { PyErr_SetString(PyExc_RuntimeError, "Consumer closed"); return NULL; } err = rd_kafka_unsubscribe(self->rk); if (err) { cfl_PyErr_Format(err, "Failed to remove subscription: %s", rd_kafka_err2str(err)); return NULL; } Py_RETURN_NONE; } static PyObject *Consumer_assign (Handle *self, PyObject *tlist) { rd_kafka_topic_partition_list_t *c_parts; rd_kafka_resp_err_t err; if (!self->rk) { PyErr_SetString(PyExc_RuntimeError, "Consumer closed"); return NULL; } if (!(c_parts = py_to_c_parts(tlist))) return NULL; self->u.Consumer.rebalance_assigned++; err = rd_kafka_assign(self->rk, c_parts); rd_kafka_topic_partition_list_destroy(c_parts); if (err) { cfl_PyErr_Format(err, "Failed to set assignment: %s", rd_kafka_err2str(err)); return NULL; } Py_RETURN_NONE; } static PyObject *Consumer_unassign (Handle *self, PyObject *ignore) { rd_kafka_resp_err_t err; if (!self->rk) { PyErr_SetString(PyExc_RuntimeError, "Consumer closed"); return NULL; } self->u.Consumer.rebalance_assigned++; err = rd_kafka_assign(self->rk, NULL); if (err) { cfl_PyErr_Format(err, "Failed to remove assignment: %s", rd_kafka_err2str(err)); return NULL; } Py_RETURN_NONE; } static PyObject *Consumer_assignment (Handle *self, PyObject *args, PyObject *kwargs) { PyObject *plist; rd_kafka_topic_partition_list_t *c_parts; rd_kafka_resp_err_t err; if (!self->rk) { PyErr_SetString(PyExc_RuntimeError, "Consumer closed"); return NULL; } err = rd_kafka_assignment(self->rk, &c_parts); if (err) { cfl_PyErr_Format(err, "Failed to get assignment: %s", rd_kafka_err2str(err)); return NULL; } plist = c_parts_to_py(c_parts); rd_kafka_topic_partition_list_destroy(c_parts); return plist; } /** * @brief Global offset commit on_commit callback trampoline triggered * from poll() et.al */ static void Consumer_offset_commit_cb (rd_kafka_t *rk, rd_kafka_resp_err_t err, rd_kafka_topic_partition_list_t *c_parts, void *opaque) { Handle *self = opaque; PyObject *parts, *k_err, *args, *result; CallState *cs; if (!self->u.Consumer.on_commit) return; cs = CallState_get(self); /* Insantiate error object */ k_err = KafkaError_new_or_None(err, NULL); /* Construct list of TopicPartition based on 'c_parts' */ if (c_parts) parts = c_parts_to_py(c_parts); else parts = PyList_New(0); args = Py_BuildValue("(OO)", k_err, parts); Py_DECREF(k_err); Py_DECREF(parts); if (!args) { cfl_PyErr_Format(RD_KAFKA_RESP_ERR__FAIL, "Unable to build callback args"); CallState_crash(cs); CallState_resume(cs); return; } result = PyObject_CallObject(self->u.Consumer.on_commit, args); Py_DECREF(args); if (result) Py_DECREF(result); else { CallState_crash(cs); rd_kafka_yield(rk); } CallState_resume(cs); } /** * @brief Simple struct to pass results from commit from offset_commit_return_cb * back to offset_commit() return value. */ struct commit_return { rd_kafka_resp_err_t err; rd_kafka_topic_partition_list_t *c_parts; }; /** * @brief Simple offset_commit_cb to pass the callback information * as return value from commit() through the commit_return struct. * Triggered from rd_kafka_commit_queue(). */ static void Consumer_offset_commit_return_cb (rd_kafka_t *rk, rd_kafka_resp_err_t err, rd_kafka_topic_partition_list_t *c_parts, void *opaque) { struct commit_return *commit_return = opaque; commit_return->err = err; if (c_parts) commit_return->c_parts = rd_kafka_topic_partition_list_copy(c_parts); } static PyObject *Consumer_commit (Handle *self, PyObject *args, PyObject *kwargs) { rd_kafka_resp_err_t err; PyObject *msg = NULL, *offsets = NULL, *async_o = NULL; rd_kafka_topic_partition_list_t *c_offsets; int async = 1; static char *kws[] = { "message", "offsets", "async", "asynchronous", NULL }; rd_kafka_queue_t *rkqu = NULL; struct commit_return commit_return; PyThreadState *thread_state; if (!self->rk) { PyErr_SetString(PyExc_RuntimeError, "Consumer closed"); return NULL; } if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|OOOO", kws, &msg, &offsets, &async_o, &async_o)) return NULL; if (msg && offsets) { PyErr_SetString(PyExc_ValueError, "message and offsets are mutually exclusive"); return NULL; } if (async_o) async = PyObject_IsTrue(async_o); if (offsets) { if (!(c_offsets = py_to_c_parts(offsets))) return NULL; } else if (msg) { Message *m; PyObject *uo8; if (PyObject_Type((PyObject *)msg) != (PyObject *)&MessageType) { PyErr_Format(PyExc_TypeError, "expected %s", MessageType.tp_name); return NULL; } m = (Message *)msg; c_offsets = rd_kafka_topic_partition_list_new(1); rd_kafka_topic_partition_list_add( c_offsets, cfl_PyUnistr_AsUTF8(m->topic, &uo8), m->partition)->offset =m->offset + 1; Py_XDECREF(uo8); } else { c_offsets = NULL; } if (async) { /* Async mode: Use consumer queue for offset commit * served by consumer_poll() */ rkqu = self->u.Consumer.rkqu; } else { /* Sync mode: Let commit_queue() trigger the callback. */ memset(&commit_return, 0, sizeof(commit_return)); /* Unlock GIL while we are blocking. */ thread_state = PyEval_SaveThread(); } err = rd_kafka_commit_queue(self->rk, c_offsets, rkqu, async ? Consumer_offset_commit_cb : Consumer_offset_commit_return_cb, async ? (void *)self : (void *)&commit_return); if (c_offsets) rd_kafka_topic_partition_list_destroy(c_offsets); if (!async) { /* Re-lock GIL */ PyEval_RestoreThread(thread_state); /* Honour inner error (richer) from offset_commit_return_cb */ if (commit_return.err) err = commit_return.err; } if (err) { /* Outer error from commit_queue() */ if (!async && commit_return.c_parts) rd_kafka_topic_partition_list_destroy(commit_return.c_parts); cfl_PyErr_Format(err, "Commit failed: %s", rd_kafka_err2str(err)); return NULL; } if (async) { /* async commit returns None when commit is in progress */ Py_RETURN_NONE; } else { PyObject *plist; /* sync commit returns the topic,partition,offset,err list */ assert(commit_return.c_parts); plist = c_parts_to_py(commit_return.c_parts); rd_kafka_topic_partition_list_destroy(commit_return.c_parts); return plist; } } static PyObject *Consumer_store_offsets (Handle *self, PyObject *args, PyObject *kwargs) { #if RD_KAFKA_VERSION < 0x000b0000 PyErr_Format(PyExc_NotImplementedError, "Consumer store_offsets require " "confluent-kafka-python built for librdkafka " "version >=v0.11.0 (librdkafka runtime 0x%x, " "buildtime 0x%x)", rd_kafka_version(), RD_KAFKA_VERSION); return NULL; #else rd_kafka_resp_err_t err; PyObject *msg = NULL, *offsets = NULL; rd_kafka_topic_partition_list_t *c_offsets; static char *kws[] = { "message", "offsets", NULL }; if (!self->rk) { PyErr_SetString(PyExc_RuntimeError, "Consumer closed"); return NULL; } if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|OO", kws, &msg, &offsets)) return NULL; if (msg && offsets) { PyErr_SetString(PyExc_ValueError, "message and offsets are mutually exclusive"); return NULL; } if (!msg && !offsets) { PyErr_SetString(PyExc_ValueError, "expected either message or offsets"); return NULL; } if (offsets) { if (!(c_offsets = py_to_c_parts(offsets))) return NULL; } else { Message *m; PyObject *uo8; if (PyObject_Type((PyObject *)msg) != (PyObject *)&MessageType) { PyErr_Format(PyExc_TypeError, "expected %s", MessageType.tp_name); return NULL; } m = (Message *)msg; c_offsets = rd_kafka_topic_partition_list_new(1); rd_kafka_topic_partition_list_add( c_offsets, cfl_PyUnistr_AsUTF8(m->topic, &uo8), m->partition)->offset = m->offset + 1; Py_XDECREF(uo8); } err = rd_kafka_offsets_store(self->rk, c_offsets); rd_kafka_topic_partition_list_destroy(c_offsets); if (err) { cfl_PyErr_Format(err, "StoreOffsets failed: %s", rd_kafka_err2str(err)); return NULL; } Py_RETURN_NONE; #endif } static PyObject *Consumer_committed (Handle *self, PyObject *args, PyObject *kwargs) { PyObject *plist; rd_kafka_topic_partition_list_t *c_parts; rd_kafka_resp_err_t err; double tmout = -1.0f; static char *kws[] = { "partitions", "timeout", NULL }; if (!self->rk) { PyErr_SetString(PyExc_RuntimeError, "Consumer closed"); return NULL; } if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|d", kws, &plist, &tmout)) return NULL; if (!(c_parts = py_to_c_parts(plist))) return NULL; Py_BEGIN_ALLOW_THREADS; err = rd_kafka_committed(self->rk, c_parts, tmout >= 0 ? (int)(tmout * 1000.0f) : -1); Py_END_ALLOW_THREADS; if (err) { rd_kafka_topic_partition_list_destroy(c_parts); cfl_PyErr_Format(err, "Failed to get committed offsets: %s", rd_kafka_err2str(err)); return NULL; } plist = c_parts_to_py(c_parts); rd_kafka_topic_partition_list_destroy(c_parts); return plist; } static PyObject *Consumer_position (Handle *self, PyObject *args, PyObject *kwargs) { PyObject *plist; rd_kafka_topic_partition_list_t *c_parts; rd_kafka_resp_err_t err; static char *kws[] = { "partitions", NULL }; if (!self->rk) { PyErr_SetString(PyExc_RuntimeError, "Consumer closed"); return NULL; } if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O", kws, &plist)) return NULL; if (!(c_parts = py_to_c_parts(plist))) return NULL; err = rd_kafka_position(self->rk, c_parts); if (err) { rd_kafka_topic_partition_list_destroy(c_parts); cfl_PyErr_Format(err, "Failed to get position: %s", rd_kafka_err2str(err)); return NULL; } plist = c_parts_to_py(c_parts); rd_kafka_topic_partition_list_destroy(c_parts); return plist; } static PyObject *Consumer_pause(Handle *self, PyObject *args, PyObject *kwargs) { PyObject *plist; rd_kafka_topic_partition_list_t *c_parts; rd_kafka_resp_err_t err; static char *kws[] = {"partitions", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O", kws, &plist)) return NULL; if (!(c_parts = py_to_c_parts(plist))) return NULL; err = rd_kafka_pause_partitions(self->rk, c_parts); rd_kafka_topic_partition_list_destroy(c_parts); if (err) { cfl_PyErr_Format(err, "Failed to pause partitions: %s", rd_kafka_err2str(err)); return NULL; } Py_RETURN_NONE; } static PyObject *Consumer_resume (Handle *self, PyObject *args, PyObject *kwargs) { PyObject *plist; rd_kafka_topic_partition_list_t *c_parts; rd_kafka_resp_err_t err; static char *kws[] = {"partitions", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O", kws, &plist)) return NULL; if (!(c_parts = py_to_c_parts(plist))) return NULL; err = rd_kafka_resume_partitions(self->rk, c_parts); rd_kafka_topic_partition_list_destroy(c_parts); if (err) { cfl_PyErr_Format(err, "Failed to resume partitions: %s", rd_kafka_err2str(err)); return NULL; } Py_RETURN_NONE; } static PyObject *Consumer_seek (Handle *self, PyObject *args, PyObject *kwargs) { TopicPartition *tp; rd_kafka_resp_err_t err; static char *kws[] = { "partition", NULL }; rd_kafka_topic_t *rkt; if (!self->rk) { PyErr_SetString(PyExc_RuntimeError, "Consumer closed"); return NULL; } if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O", kws, (PyObject **)&tp)) return NULL; if (PyObject_Type((PyObject *)tp) != (PyObject *)&TopicPartitionType) { PyErr_Format(PyExc_TypeError, "expected %s", TopicPartitionType.tp_name); return NULL; } rkt = rd_kafka_topic_new(self->rk, tp->topic, NULL); if (!rkt) { cfl_PyErr_Format(rd_kafka_last_error(), "Failed to get topic object for " "topic \"%s\": %s", tp->topic, rd_kafka_err2str(rd_kafka_last_error())); return NULL; } Py_BEGIN_ALLOW_THREADS; err = rd_kafka_seek(rkt, tp->partition, tp->offset, -1); Py_END_ALLOW_THREADS; rd_kafka_topic_destroy(rkt); if (err) { cfl_PyErr_Format(err, "Failed to seek to offset %"CFL_PRId64": %s", tp->offset, rd_kafka_err2str(err)); return NULL; } Py_RETURN_NONE; } static PyObject *Consumer_get_watermark_offsets (Handle *self, PyObject *args, PyObject *kwargs) { TopicPartition *tp; rd_kafka_resp_err_t err; double tmout = -1.0f; int cached = 0; int64_t low = RD_KAFKA_OFFSET_INVALID, high = RD_KAFKA_OFFSET_INVALID; static char *kws[] = { "partition", "timeout", "cached", NULL }; PyObject *rtup; if (!self->rk) { PyErr_SetString(PyExc_RuntimeError, "Consumer closed"); return NULL; } if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|db", kws, (PyObject **)&tp, &tmout, &cached)) return NULL; if (PyObject_Type((PyObject *)tp) != (PyObject *)&TopicPartitionType) { PyErr_Format(PyExc_TypeError, "expected %s", TopicPartitionType.tp_name); return NULL; } if (cached) { err = rd_kafka_get_watermark_offsets(self->rk, tp->topic, tp->partition, &low, &high); } else { Py_BEGIN_ALLOW_THREADS; err = rd_kafka_query_watermark_offsets(self->rk, tp->topic, tp->partition, &low, &high, tmout >= 0 ? (int)(tmout * 1000.0f) : -1); Py_END_ALLOW_THREADS; } if (err) { cfl_PyErr_Format(err, "Failed to get watermark offsets: %s", rd_kafka_err2str(err)); return NULL; } rtup = PyTuple_New(2); PyTuple_SetItem(rtup, 0, PyLong_FromLongLong(low)); PyTuple_SetItem(rtup, 1, PyLong_FromLongLong(high)); return rtup; } static PyObject *Consumer_offsets_for_times (Handle *self, PyObject *args, PyObject *kwargs) { #if RD_KAFKA_VERSION < 0x000b0000 PyErr_Format(PyExc_NotImplementedError, "Consumer offsets_for_times require " "confluent-kafka-python built for librdkafka " "version >=v0.11.0 (librdkafka runtime 0x%x, " "buildtime 0x%x)", rd_kafka_version(), RD_KAFKA_VERSION); return NULL; #else PyObject *plist; double tmout = -1.0f; rd_kafka_topic_partition_list_t *c_parts; rd_kafka_resp_err_t err; static char *kws[] = { "partitions", "timeout", NULL }; if (!self->rk) { PyErr_SetString(PyExc_RuntimeError, "Consumer closed"); return NULL; } if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|d", kws, &plist, &tmout)) return NULL; if (!(c_parts = py_to_c_parts(plist))) return NULL; Py_BEGIN_ALLOW_THREADS; err = rd_kafka_offsets_for_times(self->rk, c_parts, tmout >= 0 ? (int)(tmout * 1000.0f) : -1); Py_END_ALLOW_THREADS; if (err) { rd_kafka_topic_partition_list_destroy(c_parts); cfl_PyErr_Format(err, "Failed to get offsets: %s", rd_kafka_err2str(err)); return NULL; } plist = c_parts_to_py(c_parts); rd_kafka_topic_partition_list_destroy(c_parts); return plist; #endif } static PyObject *Consumer_poll (Handle *self, PyObject *args, PyObject *kwargs) { double tmout = -1.0f; static char *kws[] = { "timeout", NULL }; rd_kafka_message_t *rkm; PyObject *msgobj; CallState cs; if (!self->rk) { PyErr_SetString(PyExc_RuntimeError, "Consumer closed"); return NULL; } if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|d", kws, &tmout)) return NULL; CallState_begin(self, &cs); rkm = rd_kafka_consumer_poll(self->rk, tmout >= 0 ? (int)(tmout * 1000.0f) : -1); if (!CallState_end(self, &cs)) { if (rkm) rd_kafka_message_destroy(rkm); return NULL; } if (!rkm) Py_RETURN_NONE; msgobj = Message_new0(self, rkm); #ifdef RD_KAFKA_V_HEADERS // Have to detach headers outside Message_new0 because it declares the // rk message as a const rd_kafka_message_detach_headers(rkm, &((Message *)msgobj)->c_headers); #endif rd_kafka_message_destroy(rkm); return msgobj; } static PyObject *Consumer_consume (Handle *self, PyObject *args, PyObject *kwargs) { unsigned int num_messages = 1; double tmout = -1.0f; static char *kws[] = { "num_messages", "timeout", NULL }; rd_kafka_message_t **rkmessages; PyObject *msglist; rd_kafka_queue_t *rkqu = self->u.Consumer.rkqu; CallState cs; Py_ssize_t i, n; if (!self->rk) { PyErr_SetString(PyExc_RuntimeError, "Consumer closed"); return NULL; } if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|Id", kws, &num_messages, &tmout)) return NULL; if (num_messages > 1000000) { PyErr_SetString(PyExc_ValueError, "num_messages must be between 0 and 1000000 (1M)"); return NULL; } CallState_begin(self, &cs); rkmessages = malloc(num_messages * sizeof(rd_kafka_message_t *)); n = (Py_ssize_t)rd_kafka_consume_batch_queue(rkqu, tmout >= 0 ? (int)(tmout * 1000.0f) : -1, rkmessages, num_messages); if (!CallState_end(self, &cs)) { for (i = 0; i < n; i++) { rd_kafka_message_destroy(rkmessages[i]); } free(rkmessages); return NULL; } if (n < 0) { free(rkmessages); cfl_PyErr_Format(rd_kafka_last_error(), "%s", rd_kafka_err2str(rd_kafka_last_error())); return NULL; } msglist = PyList_New(n); for (i = 0; i < n; i++) { PyObject *msgobj = Message_new0(self, rkmessages[i]); #ifdef RD_KAFKA_V_HEADERS // Have to detach headers outside Message_new0 because it declares the // rk message as a const rd_kafka_message_detach_headers(rkmessages[i], &((Message *)msgobj)->c_headers); #endif PyList_SET_ITEM(msglist, i, msgobj); rd_kafka_message_destroy(rkmessages[i]); } free(rkmessages); return msglist; } static PyObject *Consumer_close (Handle *self, PyObject *ignore) { CallState cs; if (!self->rk) { PyErr_SetString(PyExc_RuntimeError, "Consumer already closed"); return NULL; } CallState_begin(self, &cs); rd_kafka_consumer_close(self->rk); if (self->u.Consumer.rkqu) { rd_kafka_queue_destroy(self->u.Consumer.rkqu); self->u.Consumer.rkqu = NULL; } rd_kafka_destroy(self->rk); self->rk = NULL; if (!CallState_end(self, &cs)) return NULL; Py_RETURN_NONE; } static PyMethodDef Consumer_methods[] = { { "subscribe", (PyCFunction)Consumer_subscribe, METH_VARARGS|METH_KEYWORDS, ".. py:function:: subscribe(topics, [on_assign=None], [on_revoke=None])\n" "\n" " Set subscription to supplied list of topics\n" " This replaces a previous subscription.\n" "\n" " Regexp pattern subscriptions are supported by prefixing " "the topic string with ``\"^\"``, e.g.::\n" "\n" " consumer.subscribe([\"^my_topic.*\", \"^another[0-9]-?[a-z]+$\", \"not_a_regex\"])\n" "\n" " :param list(str) topics: List of topics (strings) to subscribe to.\n" " :param callable on_assign: callback to provide handling of " "customized offsets on completion of a successful partition " "re-assignment.\n" " :param callable on_revoke: callback to provide handling of " "offset commits to a customized store on the start of a " "rebalance operation.\n" "\n" " :raises KafkaException:\n" " :raises: RuntimeError if called on a closed consumer\n" "\n" "\n" ".. py:function:: on_assign(consumer, partitions)\n" ".. py:function:: on_revoke(consumer, partitions)\n" "\n" " :param Consumer consumer: Consumer instance.\n" " :param list(TopicPartition) partitions: Absolute list of partitions being assigned or revoked.\n" "\n" }, { "unsubscribe", (PyCFunction)Consumer_unsubscribe, METH_NOARGS, " Remove current subscription.\n" "\n" " :raises: KafkaException\n" " :raises: RuntimeError if called on a closed consumer\n" "\n" }, { "poll", (PyCFunction)Consumer_poll, METH_VARARGS|METH_KEYWORDS, ".. py:function:: poll([timeout=None])\n" "\n" " Consume messages, calls callbacks and returns events.\n" "\n" " The application must check the returned :py:class:`Message` " "object's :py:func:`Message.error()` method to distinguish " "between proper messages (error() returns None), or an event or " "error (see error().code() for specifics).\n" "\n" " .. note: Callbacks may be called from this method, " "such as ``on_assign``, ``on_revoke``, et.al.\n" "\n" " :param float timeout: Maximum time to block waiting for message, event or callback. (Seconds)\n" " :returns: A Message object or None on timeout\n" " :rtype: :py:class:`Message` or None\n" " :raises: RuntimeError if called on a closed consumer\n" "\n" }, { "consume", (PyCFunction)Consumer_consume, METH_VARARGS|METH_KEYWORDS, ".. py:function:: consume([num_messages=1], [timeout=-1])\n" "\n" " Consume messages, calls callbacks and returns list of messages " "(possibly empty on timeout).\n" "\n" " The application must check the returned :py:class:`Message` " "object's :py:func:`Message.error()` method to distinguish " "between proper messages (error() returns None), or an event or " "error for each :py:class:`Message` in the list (see error().code() " "for specifics).\n" "\n" " .. note: Callbacks may be called from this method, " "such as ``on_assign``, ``on_revoke``, et.al.\n" "\n" " :param int num_messages: Maximum number of messages to return (default: 1).\n" " :param float timeout: Maximum time to block waiting for message, event or callback (default: infinite (-1)). (Seconds)\n" " :returns: A list of Message objects (possibly empty on timeout)\n" " :rtype: list(Message)\n" " :raises RuntimeError: if called on a closed consumer\n" " :raises KafkaError: in case of internal error\n" " :raises ValueError: if num_messages > 1M\n" "\n" }, { "assign", (PyCFunction)Consumer_assign, METH_O, ".. py:function:: assign(partitions)\n" "\n" " Set consumer partition assignment to the provided list of " ":py:class:`TopicPartition` and starts consuming.\n" "\n" " :param list(TopicPartition) partitions: List of topic+partitions and optionally initial offsets to start consuming.\n" " :raises: RuntimeError if called on a closed consumer\n" "\n" }, { "unassign", (PyCFunction)Consumer_unassign, METH_NOARGS, " Removes the current partition assignment and stops consuming.\n" "\n" " :raises KafkaException:\n" " :raises RuntimeError: if called on a closed consumer\n" "\n" }, { "assignment", (PyCFunction)Consumer_assignment, METH_VARARGS|METH_KEYWORDS, " Returns the current partition assignment.\n" "\n" " :returns: List of assigned topic+partitions.\n" " :rtype: list(TopicPartition)\n" " :raises: KafkaException\n" " :raises: RuntimeError if called on a closed consumer\n" "\n" }, { "store_offsets", (PyCFunction)Consumer_store_offsets, METH_VARARGS|METH_KEYWORDS, ".. py:function:: store_offsets([message=None], [offsets=None])\n" "\n" " Store offsets for a message or a list of offsets.\n" "\n" " ``message`` and ``offsets`` are mutually exclusive. " "The stored offsets will be committed according to 'auto.commit.interval.ms' or manual " "offset-less :py:meth:`commit`. " "Note that 'enable.auto.offset.store' must be set to False when using this API.\n" "\n" " :param confluent_kafka.Message message: Store message's offset+1.\n" " :param list(TopicPartition) offsets: List of topic+partitions+offsets to store.\n" " :rtype: None\n" " :raises: KafkaException\n" " :raises: RuntimeError if called on a closed consumer\n" "\n" }, { "commit", (PyCFunction)Consumer_commit, METH_VARARGS|METH_KEYWORDS, ".. py:function:: commit([message=None], [offsets=None], [asynchronous=True])\n" "\n" " Commit a message or a list of offsets.\n" "\n" " ``message`` and ``offsets`` are mutually exclusive, if neither is set " "the current partition assignment's offsets are used instead. " "The consumer relies on your use of this method if you have set 'enable.auto.commit' to False\n" "\n" " :param confluent_kafka.Message message: Commit message's offset+1.\n" " :param list(TopicPartition) offsets: List of topic+partitions+offsets to commit.\n" " :param bool asynchronous: Asynchronous commit, return None immediately. " "If False the commit() call will block until the commit succeeds or " "fails and the committed offsets will be returned (on success). Note that specific partitions may have failed and the .err field of each partition will need to be checked for success.\n" " :rtype: None|list(TopicPartition)\n" " :raises: KafkaException\n" " :raises: RuntimeError if called on a closed consumer\n" "\n" }, { "committed", (PyCFunction)Consumer_committed, METH_VARARGS|METH_KEYWORDS, ".. py:function:: committed(partitions, [timeout=None])\n" "\n" " Retrieve committed offsets for the list of partitions.\n" "\n" " :param list(TopicPartition) partitions: List of topic+partitions " "to query for stored offsets.\n" " :param float timeout: Request timeout. (Seconds)\n" " :returns: List of topic+partitions with offset and possibly error set.\n" " :rtype: list(TopicPartition)\n" " :raises: KafkaException\n" " :raises: RuntimeError if called on a closed consumer\n" "\n" }, { "position", (PyCFunction)Consumer_position, METH_VARARGS|METH_KEYWORDS, ".. py:function:: position(partitions, [timeout=None])\n" "\n" " Retrieve current positions (offsets) for the list of partitions.\n" "\n" " :param list(TopicPartition) partitions: List of topic+partitions " "to return current offsets for. The current offset is the offset of the " "last consumed message + 1.\n" " :returns: List of topic+partitions with offset and possibly error set.\n" " :rtype: list(TopicPartition)\n" " :raises: KafkaException\n" " :raises: RuntimeError if called on a closed consumer\n" "\n" }, { "pause", (PyCFunction)Consumer_pause, METH_VARARGS|METH_KEYWORDS, ".. py:function:: pause(partitions)\n" "\n" " Pause consumption for the provided list of partitions.\n" "\n" " :param list(TopicPartition) partitions: List of topic+partitions " "to pause.\n" " :rtype: None\n" " :raises: KafkaException\n" "\n" }, { "resume", (PyCFunction)Consumer_resume, METH_VARARGS|METH_KEYWORDS, ".. py:function:: resume(partitions)\n" "\n" " Resume consumption for the provided list of partitions.\n" "\n" " :param list(TopicPartition) partitions: List of topic+partitions " "to resume.\n" " :rtype: None\n" " :raises: KafkaException\n" "\n" }, { "seek", (PyCFunction)Consumer_seek, METH_VARARGS|METH_KEYWORDS, ".. py:function:: seek(partition)\n" "\n" " Set consume position for partition to offset.\n" " The offset may be an absolute (>=0) or a\n" " logical offset (:py:const:`OFFSET_BEGINNING` et.al).\n" "\n" " seek() may only be used to update the consume offset of an\n" " actively consumed partition (i.e., after :py:const:`assign()`),\n" " to set the starting offset of partition not being consumed instead\n" " pass the offset in an `assign()` call.\n" "\n" " :param TopicPartition partition: Topic+partition+offset to seek to.\n" "\n" " :raises: KafkaException\n" "\n" }, { "get_watermark_offsets", (PyCFunction)Consumer_get_watermark_offsets, METH_VARARGS|METH_KEYWORDS, ".. py:function:: get_watermark_offsets(partition, [timeout=None], [cached=False])\n" "\n" " Retrieve low and high offsets for partition.\n" "\n" " :param TopicPartition partition: Topic+partition to return offsets for.\n" " :param float timeout: Request timeout (when cached=False). (Seconds)\n" " :param bool cached: Instead of querying the broker used cached information. " "Cached values: The low offset is updated periodically (if statistics.interval.ms is set) while " "the high offset is updated on each message fetched from the broker for this partition.\n" " :returns: Tuple of (low,high) on success or None on timeout. " "The high offset is the offset of the last message + 1.\n" " :rtype: tuple(int,int)\n" " :raises: KafkaException\n" " :raises: RuntimeError if called on a closed consumer\n" "\n" }, { "offsets_for_times", (PyCFunction)Consumer_offsets_for_times, METH_VARARGS|METH_KEYWORDS, ".. py:function:: offsets_for_times(partitions, [timeout=None])\n" "\n" " offsets_for_times looks up offsets by timestamp for the given partitions.\n" "\n" " The returned offsets for each partition is the earliest offset whose\n" " timestamp is greater than or equal to the given timestamp in the\n" " corresponding partition.\n" "\n" " :param list(TopicPartition) partitions: topic+partitions with timestamps in the TopicPartition.offset field.\n" " :param float timeout: Request timeout. (Seconds)\n" " :returns: list of topic+partition with offset field set and possibly error set\n" " :rtype: list(TopicPartition)\n" " :raises: KafkaException\n" " :raises: RuntimeError if called on a closed consumer\n" "\n" }, { "close", (PyCFunction)Consumer_close, METH_NOARGS, "\n" " Close down and terminate the Kafka Consumer.\n" "\n" " Actions performed:\n" "\n" " - Stops consuming\n" " - Commits offsets - except if the consumer property 'enable.auto.commit' is set to False\n" " - Leave consumer group\n" "\n" " .. note: Registered callbacks may be called from this method, " "see :py:func::`poll()` for more info.\n" "\n" " :rtype: None\n" " :raises: RuntimeError if called on a closed consumer\n" "\n" }, { "list_topics", (PyCFunction)list_topics, METH_VARARGS|METH_KEYWORDS, list_topics_doc }, { NULL } }; static void Consumer_rebalance_cb (rd_kafka_t *rk, rd_kafka_resp_err_t err, rd_kafka_topic_partition_list_t *c_parts, void *opaque) { Handle *self = opaque; CallState *cs; cs = CallState_get(self); self->u.Consumer.rebalance_assigned = 0; if ((err == RD_KAFKA_RESP_ERR__ASSIGN_PARTITIONS && self->u.Consumer.on_assign) || (err == RD_KAFKA_RESP_ERR__REVOKE_PARTITIONS && self->u.Consumer.on_revoke)) { PyObject *parts; PyObject *args, *result; /* Construct list of TopicPartition based on 'c_parts' */ parts = c_parts_to_py(c_parts); args = Py_BuildValue("(OO)", self, parts); Py_DECREF(parts); if (!args) { cfl_PyErr_Format(RD_KAFKA_RESP_ERR__FAIL, "Unable to build callback args"); CallState_crash(cs); CallState_resume(cs); return; } result = PyObject_CallObject( err == RD_KAFKA_RESP_ERR__ASSIGN_PARTITIONS ? self->u.Consumer.on_assign : self->u.Consumer.on_revoke, args); Py_DECREF(args); if (result) Py_DECREF(result); else { CallState_crash(cs); rd_kafka_yield(rk); } } /* Fallback: librdkafka needs the rebalance_cb to call assign() * to synchronize state, if the user did not do this from callback, * or there was no callback, or the callback failed, then we perform * that assign() call here instead. */ if (!self->u.Consumer.rebalance_assigned) { if (err == RD_KAFKA_RESP_ERR__ASSIGN_PARTITIONS) rd_kafka_assign(rk, c_parts); else rd_kafka_assign(rk, NULL); } CallState_resume(cs); } static int Consumer_init (PyObject *selfobj, PyObject *args, PyObject *kwargs) { Handle *self = (Handle *)selfobj; char errstr[256]; rd_kafka_conf_t *conf; if (self->rk) { PyErr_SetString(PyExc_RuntimeError, "Consumer already initialized"); return -1; } self->type = RD_KAFKA_CONSUMER; if (!(conf = common_conf_setup(RD_KAFKA_CONSUMER, self, args, kwargs))) return -1; /* Exception raised by ..conf_setup() */ rd_kafka_conf_set_rebalance_cb(conf, Consumer_rebalance_cb); rd_kafka_conf_set_offset_commit_cb(conf, Consumer_offset_commit_cb); self->rk = rd_kafka_new(RD_KAFKA_CONSUMER, conf, errstr, sizeof(errstr)); if (!self->rk) { cfl_PyErr_Format(rd_kafka_last_error(), "Failed to create consumer: %s", errstr); rd_kafka_conf_destroy(conf); return -1; } /* Forward log messages to main queue which is then forwarded * to the consumer queue */ if (self->logger) rd_kafka_set_log_queue(self->rk, NULL); rd_kafka_poll_set_consumer(self->rk); self->u.Consumer.rkqu = rd_kafka_queue_get_consumer(self->rk); assert(self->u.Consumer.rkqu); return 0; } static PyObject *Consumer_new (PyTypeObject *type, PyObject *args, PyObject *kwargs) { return type->tp_alloc(type, 0); } PyTypeObject ConsumerType = { PyVarObject_HEAD_INIT(NULL, 0) "cimpl.Consumer", /*tp_name*/ sizeof(Handle), /*tp_basicsize*/ 0, /*tp_itemsize*/ (destructor)Consumer_dealloc, /*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ 0, /*tp_compare*/ 0, /*tp_repr*/ 0, /*tp_as_number*/ 0, /*tp_as_sequence*/ 0, /*tp_as_mapping*/ 0, /*tp_hash */ 0, /*tp_call*/ 0, /*tp_str*/ 0, /*tp_getattro*/ 0, /*tp_setattro*/ 0, /*tp_as_buffer*/ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_GC, /*tp_flags*/ "High-level Kafka Consumer\n" "\n" ".. py:function:: Consumer(config)\n" "\n" " :param dict config: Configuration properties. At a minimum ``group.id`` **must** be set," " ``bootstrap.servers`` **should** be set" "\n" "Create new Consumer instance using provided configuration dict.\n" "\n" " Special configuration properties:\n" " ``on_commit``: Optional callback will be called when a commit " "request has succeeded or failed.\n" "\n" "\n" ".. py:function:: on_commit(err, partitions)\n" "\n" " :param Consumer consumer: Consumer instance.\n" " :param KafkaError err: Commit error object, or None on success.\n" " :param list(TopicPartition) partitions: List of partitions with " "their committed offsets or per-partition errors.\n" "\n" "\n", /*tp_doc*/ (traverseproc)Consumer_traverse, /* tp_traverse */ (inquiry)Consumer_clear, /* tp_clear */ 0, /* tp_richcompare */ 0, /* tp_weaklistoffset */ 0, /* tp_iter */ 0, /* tp_iternext */ Consumer_methods, /* tp_methods */ 0, /* tp_members */ 0, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ Consumer_init, /* tp_init */ 0, /* tp_alloc */ Consumer_new /* tp_new */ }; confluent-kafka-1.1.0/confluent_kafka/src/Metadata.c0000644000076500000240000003131113446646122022471 0ustar ryanstaff00000000000000/** * Copyright 2018 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include "confluent_kafka.h" /** * @name Cluster and topic metadata retrieval * */ /** * @returns a dict, * or NULL (and exception) on error. */ static PyObject * c_partitions_to_py (Handle *self, const rd_kafka_metadata_partition_t *c_partitions, int partition_cnt) { PyObject *PartitionMetadata_type; PyObject *dict; int i; PartitionMetadata_type = cfl_PyObject_lookup("confluent_kafka.admin", "PartitionMetadata"); if (!PartitionMetadata_type) return NULL; dict = PyDict_New(); if (!dict) goto err; for (i = 0 ; i < partition_cnt ; i++) { PyObject *partition, *key; PyObject *error, *replicas, *isrs; partition = PyObject_CallObject(PartitionMetadata_type, NULL); if (!partition) goto err; key = cfl_PyInt_FromInt(c_partitions[i].id); if (PyDict_SetItem(dict, key, partition) == -1) { Py_DECREF(key); Py_DECREF(partition); goto err; } Py_DECREF(key); Py_DECREF(partition); if (cfl_PyObject_SetInt(partition, "id", (int)c_partitions[i].id) == -1) goto err; if (cfl_PyObject_SetInt(partition, "leader", (int)c_partitions[i].leader) == -1) goto err; error = KafkaError_new_or_None(c_partitions[i].err, NULL); if (PyObject_SetAttrString(partition, "error", error) == -1) { Py_DECREF(error); goto err; } Py_DECREF(error); /* replicas */ replicas = cfl_int32_array_to_py_list( c_partitions[i].replicas, (size_t)c_partitions[i].replica_cnt); if (!replicas) goto err; if (PyObject_SetAttrString(partition, "replicas", replicas) == -1) { Py_DECREF(replicas); goto err; } Py_DECREF(replicas); /* isrs */ isrs = cfl_int32_array_to_py_list( c_partitions[i].isrs, (size_t)c_partitions[i].isr_cnt); if (!isrs) goto err; if (PyObject_SetAttrString(partition, "isrs", isrs) == -1) { Py_DECREF(isrs); goto err; } Py_DECREF(isrs); } Py_DECREF(PartitionMetadata_type); return dict; err: Py_DECREF(PartitionMetadata_type); Py_XDECREF(dict); return NULL; } /** * @returns a dict, or NULL (and exception) on error. */ static PyObject * c_topics_to_py (Handle *self, const rd_kafka_metadata_topic_t *c_topics, int topic_cnt) { PyObject *TopicMetadata_type; PyObject *dict; int i; TopicMetadata_type = cfl_PyObject_lookup("confluent_kafka.admin", "TopicMetadata"); if (!TopicMetadata_type) return NULL; dict = PyDict_New(); if (!dict) goto err; for (i = 0 ; i < topic_cnt ; i++) { PyObject *topic; PyObject *error, *partitions; topic = PyObject_CallObject(TopicMetadata_type, NULL); if (!topic) goto err; if (PyDict_SetItemString(dict, c_topics[i].topic, topic) == -1) { Py_DECREF(topic); goto err; } Py_DECREF(topic); if (cfl_PyObject_SetString(topic, "topic", c_topics[i].topic) == -1) goto err; error = KafkaError_new_or_None(c_topics[i].err, NULL); if (PyObject_SetAttrString(topic, "error", error) == -1) { Py_DECREF(error); goto err; } Py_DECREF(error); /* partitions dict */ partitions = c_partitions_to_py(self, c_topics[i].partitions, c_topics[i].partition_cnt); if (!partitions) goto err; if (PyObject_SetAttrString(topic, "partitions", partitions) == -1) { Py_DECREF(partitions); goto err; } Py_DECREF(partitions); } Py_DECREF(TopicMetadata_type); return dict; err: Py_DECREF(TopicMetadata_type); Py_XDECREF(dict); return NULL; } /** * @returns a dict, or NULL (and exception) on error. */ static PyObject *c_brokers_to_py (Handle *self, const rd_kafka_metadata_broker_t *c_brokers, int broker_cnt) { PyObject *BrokerMetadata_type; PyObject *dict; int i; BrokerMetadata_type = cfl_PyObject_lookup("confluent_kafka.admin", "BrokerMetadata"); if (!BrokerMetadata_type) return NULL; dict = PyDict_New(); if (!dict) goto err; for (i = 0 ; i < broker_cnt ; i++) { PyObject *broker; PyObject *key; broker = PyObject_CallObject(BrokerMetadata_type, NULL); if (!broker) goto err; key = cfl_PyInt_FromInt(c_brokers[i].id); if (PyDict_SetItem(dict, key, broker) == -1) { Py_DECREF(key); Py_DECREF(broker); goto err; } Py_DECREF(broker); if (PyObject_SetAttrString(broker, "id", key) == -1) { Py_DECREF(key); goto err; } Py_DECREF(key); if (cfl_PyObject_SetString(broker, "host", c_brokers[i].host) == -1) goto err; if (cfl_PyObject_SetInt(broker, "port", (int)c_brokers[i].port) == -1) goto err; } Py_DECREF(BrokerMetadata_type); return dict; err: Py_DECREF(BrokerMetadata_type); Py_XDECREF(dict); return NULL; } /** * @returns a ClusterMetadata object populated with all metadata information * from \p metadata, or NULL on error in which case an exception * has been raised. */ static PyObject * c_metadata_to_py (Handle *self, const rd_kafka_metadata_t *metadata) { PyObject *ClusterMetadata_type; PyObject *cluster = NULL, *brokers, *topics; #if RD_KAFKA_VERSION >= 0x000b0500 char *cluster_id; #endif ClusterMetadata_type = cfl_PyObject_lookup("confluent_kafka.admin", "ClusterMetadata"); if (!ClusterMetadata_type) return NULL; cluster = PyObject_CallObject(ClusterMetadata_type, NULL); Py_DECREF(ClusterMetadata_type); if (!cluster) return NULL; #if RD_KAFKA_VERSION >= 0x000b0500 if (cfl_PyObject_SetInt( cluster, "controller_id", (int)rd_kafka_controllerid(self->rk, 0)) == -1) goto err; if ((cluster_id = rd_kafka_clusterid(self->rk, 0))) { if (cfl_PyObject_SetString(cluster, "cluster_id", cluster_id) == -1) { free(cluster_id); goto err; } free(cluster_id); } #endif if (cfl_PyObject_SetInt(cluster, "orig_broker_id", (int)metadata->orig_broker_id) == -1) goto err; if (metadata->orig_broker_name && cfl_PyObject_SetString(cluster, "orig_broker_name", metadata->orig_broker_name) == -1) goto err; /* Create and set 'brokers' dict */ brokers = c_brokers_to_py(self, metadata->brokers, metadata->broker_cnt); if (!brokers) goto err; if (PyObject_SetAttrString(cluster, "brokers", brokers) == -1) { Py_DECREF(brokers); goto err; } Py_DECREF(brokers); /* Create and set 'topics' dict */ topics = c_topics_to_py(self, metadata->topics, metadata->topic_cnt); if (!topics) goto err; if (PyObject_SetAttrString(cluster, "topics", topics) == -1) { Py_DECREF(topics); goto err; } Py_DECREF(topics); return cluster; err: Py_XDECREF(cluster); return NULL; } PyObject * list_topics (Handle *self, PyObject *args, PyObject *kwargs) { CallState cs; PyObject *result = NULL; rd_kafka_resp_err_t err; const rd_kafka_metadata_t *metadata = NULL; rd_kafka_topic_t *only_rkt = NULL; const char *topic = NULL; double timeout = -1.0f; static char *kws[] = {"topic", "timeout", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|zd", kws, &topic, &timeout)) return NULL; if (topic != NULL) { if (!(only_rkt = rd_kafka_topic_new(self->rk, topic, NULL))) { return PyErr_Format( PyExc_RuntimeError, "Unable to create topic object " "for \"%s\": %s", topic, rd_kafka_err2str(rd_kafka_last_error())); } } CallState_begin(self, &cs); err = rd_kafka_metadata(self->rk, !only_rkt, only_rkt, &metadata, timeout >= 0 ? (int)(timeout * 1000.0f) : -1); if (!CallState_end(self, &cs)) { /* Exception raised */ goto end; } if (err != RD_KAFKA_RESP_ERR_NO_ERROR) { cfl_PyErr_Format(err, "Failed to get metadata: %s", rd_kafka_err2str(err)); goto end; } result = c_metadata_to_py(self, metadata); end: if (metadata != NULL) { rd_kafka_metadata_destroy(metadata); } if (only_rkt != NULL) { rd_kafka_topic_destroy(only_rkt); } return result; } const char list_topics_doc[] = PyDoc_STR( ".. py:function:: list_topics([topic=None], [timeout=-1])\n" "\n" " Request Metadata from cluster.\n" " This method provides the same information as " " listTopics(), describeTopics() and describeCluster() in " " the Java Admin client.\n" "\n" " :param str topic: If specified, only request info about this topic, else return for all topics in cluster. Warning: If auto.create.topics.enable is set to true on the broker and an unknown topic is specified it will be created.\n" " :param float timeout: Maximum response time before timing out, or -1 for infinite timeout.\n" " :rtype: ClusterMetadata \n" " :raises: KafkaException \n"); confluent-kafka-1.1.0/confluent_kafka/src/Producer.c0000644000076500000240000004204613450477633022547 0ustar ryanstaff00000000000000/** * Copyright 2016 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include "confluent_kafka.h" /** * @brief KNOWN ISSUES * * - Partitioners will cause a dead-lock with librdkafka, because: * GIL + topic lock in topic_new is different lock order than * topic lock in msg_partitioner + GIL. * This needs to be sorted out in librdkafka, preferably making the * partitioner run without any locks taken. * Until this is fixed the partitioner is ignored and librdkafka's * default will be used. * */ /**************************************************************************** * * * Producer * * * * ****************************************************************************/ /** * Per-message state. */ struct Producer_msgstate { Handle *self; PyObject *dr_cb; }; /** * Create a new per-message state. * Returns NULL if neither dr_cb or partitioner_cb is set. */ static __inline struct Producer_msgstate * Producer_msgstate_new (Handle *self, PyObject *dr_cb) { struct Producer_msgstate *msgstate; msgstate = calloc(1, sizeof(*msgstate)); msgstate->self = self; if (dr_cb) { msgstate->dr_cb = dr_cb; Py_INCREF(dr_cb); } return msgstate; } static __inline void Producer_msgstate_destroy (struct Producer_msgstate *msgstate) { if (msgstate->dr_cb) Py_DECREF(msgstate->dr_cb); free(msgstate); } static void Producer_clear0 (Handle *self) { if (self->u.Producer.default_dr_cb) { Py_DECREF(self->u.Producer.default_dr_cb); self->u.Producer.default_dr_cb = NULL; } } static int Producer_clear (Handle *self) { Producer_clear0(self); Handle_clear(self); return 0; } static void Producer_dealloc (Handle *self) { PyObject_GC_UnTrack(self); Producer_clear0(self); if (self->rk) { CallState cs; CallState_begin(self, &cs); rd_kafka_destroy(self->rk); CallState_end(self, &cs); } Handle_clear(self); Py_TYPE(self)->tp_free((PyObject *)self); } static int Producer_traverse (Handle *self, visitproc visit, void *arg) { if (self->u.Producer.default_dr_cb) Py_VISIT(self->u.Producer.default_dr_cb); Handle_traverse(self, visit, arg); return 0; } static void dr_msg_cb (rd_kafka_t *rk, const rd_kafka_message_t *rkm, void *opaque) { struct Producer_msgstate *msgstate = rkm->_private; Handle *self = opaque; CallState *cs; PyObject *args; PyObject *result; PyObject *msgobj; if (!msgstate) return; cs = CallState_get(self); if (!msgstate->dr_cb) { /* No callback defined */ goto done; } /* Skip callback if delivery.report.only.error=true */ if (self->u.Producer.dr_only_error && !rkm->err) goto done; msgobj = Message_new0(self, rkm); args = Py_BuildValue("(OO)", ((Message *)msgobj)->error, msgobj); Py_DECREF(msgobj); if (!args) { cfl_PyErr_Format(RD_KAFKA_RESP_ERR__FAIL, "Unable to build callback args"); CallState_crash(cs); goto done; } result = PyObject_CallObject(msgstate->dr_cb, args); Py_DECREF(args); if (result) Py_DECREF(result); else { CallState_crash(cs); rd_kafka_yield(rk); } done: Producer_msgstate_destroy(msgstate); CallState_resume(cs); } #if HAVE_PRODUCEV static rd_kafka_resp_err_t Producer_producev (Handle *self, const char *topic, int32_t partition, const void *value, size_t value_len, const void *key, size_t key_len, void *opaque, int64_t timestamp #ifdef RD_KAFKA_V_HEADERS ,rd_kafka_headers_t *headers #endif ) { return rd_kafka_producev(self->rk, RD_KAFKA_V_MSGFLAGS(RD_KAFKA_MSG_F_COPY), RD_KAFKA_V_TOPIC(topic), RD_KAFKA_V_PARTITION(partition), RD_KAFKA_V_KEY(key, (size_t)key_len), RD_KAFKA_V_VALUE((void *)value, (size_t)value_len), RD_KAFKA_V_TIMESTAMP(timestamp), #ifdef RD_KAFKA_V_HEADERS RD_KAFKA_V_HEADERS(headers), #endif RD_KAFKA_V_OPAQUE(opaque), RD_KAFKA_V_END); } #else static rd_kafka_resp_err_t Producer_produce0 (Handle *self, const char *topic, int32_t partition, const void *value, size_t value_len, const void *key, size_t key_len, void *opaque) { rd_kafka_topic_t *rkt; rd_kafka_resp_err_t err = RD_KAFKA_RESP_ERR_NO_ERROR; if (!(rkt = rd_kafka_topic_new(self->rk, topic, NULL))) return RD_KAFKA_RESP_ERR__INVALID_ARG; if (rd_kafka_produce(rkt, partition, RD_KAFKA_MSG_F_COPY, (void *)value, value_len, (void *)key, key_len, opaque) == -1) err = rd_kafka_last_error(); rd_kafka_topic_destroy(rkt); return err; } #endif static PyObject *Producer_produce (Handle *self, PyObject *args, PyObject *kwargs) { const char *topic, *value = NULL, *key = NULL; int value_len = 0, key_len = 0; int partition = RD_KAFKA_PARTITION_UA; PyObject *headers = NULL, *dr_cb = NULL, *dr_cb2 = NULL; long long timestamp = 0; rd_kafka_resp_err_t err; struct Producer_msgstate *msgstate; #ifdef RD_KAFKA_V_HEADERS rd_kafka_headers_t *rd_headers = NULL; #endif static char *kws[] = { "topic", "value", "key", "partition", "callback", "on_delivery", /* Alias */ "timestamp", "headers", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "s|z#z#iOOLO" , kws, &topic, &value, &value_len, &key, &key_len, &partition, &dr_cb, &dr_cb2, ×tamp, &headers)) return NULL; #if !HAVE_PRODUCEV if (timestamp) { PyErr_Format(PyExc_NotImplementedError, "Producer timestamps require " "confluent-kafka-python built for librdkafka " "version >=v0.9.4 (librdkafka runtime 0x%x, " "buildtime 0x%x)", rd_kafka_version(), RD_KAFKA_VERSION); return NULL; } #endif #ifndef RD_KAFKA_V_HEADERS if (headers) { PyErr_Format(PyExc_NotImplementedError, "Producer message headers requires " "confluent-kafka-python built for librdkafka " "version >=v0.11.4 (librdkafka runtime 0x%x, " "buildtime 0x%x)", rd_kafka_version(), RD_KAFKA_VERSION); return NULL; } #else if (headers) { if(!(rd_headers = py_headers_to_c(headers))) return NULL; } #endif if (dr_cb2 && !dr_cb) /* Alias */ dr_cb = dr_cb2; if (!dr_cb || dr_cb == Py_None) dr_cb = self->u.Producer.default_dr_cb; /* Create msgstate if necessary, may return NULL if no callbacks * are wanted. */ msgstate = Producer_msgstate_new(self, dr_cb); /* Produce message */ #if HAVE_PRODUCEV err = Producer_producev(self, topic, partition, value, value_len, key, key_len, msgstate, timestamp #ifdef RD_KAFKA_V_HEADERS ,rd_headers #endif ); #else err = Producer_produce0(self, topic, partition, value, value_len, key, key_len, msgstate); #endif if (err) { if (msgstate) Producer_msgstate_destroy(msgstate); if (err == RD_KAFKA_RESP_ERR__QUEUE_FULL) PyErr_Format(PyExc_BufferError, "%s", rd_kafka_err2str(err)); else cfl_PyErr_Format(err, "Unable to produce message: %s", rd_kafka_err2str(err)); return NULL; } Py_RETURN_NONE; } /** * @brief Call rd_kafka_poll() and keep track of crashing callbacks. * @returns -1 if callback crashed (or poll() failed), else the number * of events served. */ static int Producer_poll0 (Handle *self, int tmout) { int r; CallState cs; CallState_begin(self, &cs); r = rd_kafka_poll(self->rk, tmout); if (!CallState_end(self, &cs)) { return -1; } return r; } static PyObject *Producer_poll (Handle *self, PyObject *args, PyObject *kwargs) { double tmout; int r; static char *kws[] = { "timeout", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "d", kws, &tmout)) return NULL; r = Producer_poll0(self, (int)(tmout * 1000)); if (r == -1) return NULL; return cfl_PyInt_FromInt(r); } static PyObject *Producer_flush (Handle *self, PyObject *args, PyObject *kwargs) { double tmout = -1; int qlen; static char *kws[] = { "timeout", NULL }; #if RD_KAFKA_VERSION >= 0x00090300 CallState cs; #endif if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|d", kws, &tmout)) return NULL; #if RD_KAFKA_VERSION >= 0x00090300 CallState_begin(self, &cs); rd_kafka_flush(self->rk, tmout < 0 ? -1 : (int)(tmout * 1000)); if (!CallState_end(self, &cs)) return NULL; qlen = rd_kafka_outq_len(self->rk); #else while ((qlen = rd_kafka_outq_len(self->rk)) > 0) { if (Producer_poll0(self, 500) == -1) return NULL; } #endif return cfl_PyInt_FromInt(qlen); } static PyMethodDef Producer_methods[] = { { "produce", (PyCFunction)Producer_produce, METH_VARARGS|METH_KEYWORDS, ".. py:function:: produce(topic, [value], [key], [partition], [on_delivery], [timestamp], [headers])\n" "\n" " Produce message to topic.\n" " This is an asynchronous operation, an application may use the " "``callback`` (alias ``on_delivery``) argument to pass a function " "(or lambda) that will be called from :py:func:`poll()` when the " "message has been successfully delivered or permanently fails delivery.\n" "\n" " Currently message headers are not supported on the message returned to the " "callback. The ``msg.headers()`` will return None even if the original message " "had headers set.\n" "\n" " :param str topic: Topic to produce message to\n" " :param str|bytes value: Message payload\n" " :param str|bytes key: Message key\n" " :param int partition: Partition to produce to, else uses the " "configured built-in partitioner.\n" " :param func on_delivery(err,msg): Delivery report callback to call " "(from :py:func:`poll()` or :py:func:`flush()`) on successful or " "failed delivery\n" " :param int timestamp: Message timestamp (CreateTime) in milliseconds since epoch UTC (requires librdkafka >= v0.9.4, api.version.request=true, and broker >= 0.10.0.0). Default value is current time.\n" "\n" " :param headers dict|list: Message headers to set on the message. The header key must be a string while the value must be binary, unicode or None. Accepts a list of (key,value) or a dict. (Requires librdkafka >= v0.11.4 and broker version >= 0.11.0.0)\n" " :rtype: None\n" " :raises BufferError: if the internal producer message queue is " "full (``queue.buffering.max.messages`` exceeded)\n" " :raises KafkaException: for other errors, see exception code\n" " :raises NotImplementedError: if timestamp is specified without underlying library support.\n" "\n" }, { "poll", (PyCFunction)Producer_poll, METH_VARARGS|METH_KEYWORDS, ".. py:function:: poll([timeout])\n" "\n" " Polls the producer for events and calls the corresponding " "callbacks (if registered).\n" "\n" " Callbacks:\n" "\n" " - ``on_delivery`` callbacks from :py:func:`produce()`\n" " - ...\n" "\n" " :param float timeout: Maximum time to block waiting for events. (Seconds)\n" " :returns: Number of events processed (callbacks served)\n" " :rtype: int\n" "\n" }, { "flush", (PyCFunction)Producer_flush, METH_VARARGS|METH_KEYWORDS, ".. py:function:: flush([timeout])\n" "\n" " Wait for all messages in the Producer queue to be delivered.\n" " This is a convenience method that calls :py:func:`poll()` until " ":py:func:`len()` is zero or the optional timeout elapses.\n" "\n" " :param: float timeout: Maximum time to block (requires librdkafka >= v0.9.4). (Seconds)\n" " :returns: Number of messages still in queue.\n" "\n" ".. note:: See :py:func:`poll()` for a description on what " "callbacks may be triggered.\n" "\n" }, { "list_topics", (PyCFunction)list_topics, METH_VARARGS|METH_KEYWORDS, list_topics_doc }, { NULL } }; static Py_ssize_t Producer__len__ (Handle *self) { return rd_kafka_outq_len(self->rk); } static PySequenceMethods Producer_seq_methods = { (lenfunc)Producer__len__ /* sq_length */ }; static int Producer_init (PyObject *selfobj, PyObject *args, PyObject *kwargs) { Handle *self = (Handle *)selfobj; char errstr[256]; rd_kafka_conf_t *conf; if (self->rk) { PyErr_SetString(PyExc_RuntimeError, "Producer already __init__:ialized"); return -1; } self->type = RD_KAFKA_PRODUCER; if (!(conf = common_conf_setup(RD_KAFKA_PRODUCER, self, args, kwargs))) return -1; rd_kafka_conf_set_dr_msg_cb(conf, dr_msg_cb); self->rk = rd_kafka_new(RD_KAFKA_PRODUCER, conf, errstr, sizeof(errstr)); if (!self->rk) { cfl_PyErr_Format(rd_kafka_last_error(), "Failed to create producer: %s", errstr); rd_kafka_conf_destroy(conf); return -1; } /* Forward log messages to poll queue */ if (self->logger) rd_kafka_set_log_queue(self->rk, NULL); return 0; } static PyObject *Producer_new (PyTypeObject *type, PyObject *args, PyObject *kwargs) { return type->tp_alloc(type, 0); } PyTypeObject ProducerType = { PyVarObject_HEAD_INIT(NULL, 0) "cimpl.Producer", /*tp_name*/ sizeof(Handle), /*tp_basicsize*/ 0, /*tp_itemsize*/ (destructor)Producer_dealloc, /*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ 0, /*tp_compare*/ 0, /*tp_repr*/ 0, /*tp_as_number*/ &Producer_seq_methods, /*tp_as_sequence*/ 0, /*tp_as_mapping*/ 0, /*tp_hash */ 0, /*tp_call*/ 0, /*tp_str*/ 0, /*tp_getattro*/ 0, /*tp_setattro*/ 0, /*tp_as_buffer*/ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_GC, /*tp_flags*/ "Asynchronous Kafka Producer\n" "\n" ".. py:function:: Producer(config)\n" "\n" " :param dict config: Configuration properties. At a minimum ``bootstrap.servers`` **should** be set\n" "\n" " Create new Producer instance using provided configuration dict.\n" "\n" "\n" ".. py:function:: len()\n" "\n" " :returns: Number of messages and Kafka protocol requests waiting to be delivered to broker.\n" " :rtype: int\n" "\n", /*tp_doc*/ (traverseproc)Producer_traverse, /* tp_traverse */ (inquiry)Producer_clear, /* tp_clear */ 0, /* tp_richcompare */ 0, /* tp_weaklistoffset */ 0, /* tp_iter */ 0, /* tp_iternext */ Producer_methods, /* tp_methods */ 0, /* tp_members */ 0, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ Producer_init, /* tp_init */ 0, /* tp_alloc */ Producer_new /* tp_new */ }; confluent-kafka-1.1.0/confluent_kafka/src/confluent_kafka.c0000644000076500000240000020510113513052761024076 0ustar ryanstaff00000000000000/** * Copyright 2016 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include "confluent_kafka.h" #include /** * @brief KNOWN ISSUES * * - Partitioners will cause a dead-lock with librdkafka, because: * GIL + topic lock in topic_new is different lock order than * topic lock in msg_partitioner + GIL. * This needs to be sorted out in librdkafka, preferably making the * partitioner run without any locks taken. * Until this is fixed the partitioner is ignored and librdkafka's * default will be used. * - KafkaError type .tp_doc allocation is lost on exit. * */ PyObject *KafkaException; /**************************************************************************** * * * KafkaError * * * FIXME: Pre-create simple instances for each error code, only instantiate * a new object if a rich error string is provided. * ****************************************************************************/ typedef struct { #ifdef PY3 PyException_HEAD #else PyObject_HEAD /* Standard fields of PyBaseExceptionObject which we inherit from. */ PyObject *dict; PyObject *args; PyObject *message; #endif rd_kafka_resp_err_t code; /* Error code */ char *str; /* Human readable representation of error, if one * was provided by librdkafka. * Else falls back on err2str(). */ int fatal; /**< Set to true if a fatal error. */ } KafkaError; static void cfl_PyErr_Fatal (rd_kafka_resp_err_t err, const char *reason); static PyObject *KafkaError_code (KafkaError *self, PyObject *ignore) { return cfl_PyInt_FromInt(self->code); } static PyObject *KafkaError_str (KafkaError *self, PyObject *ignore) { if (self->str) return cfl_PyUnistr(_FromString(self->str)); else return cfl_PyUnistr(_FromString(rd_kafka_err2str(self->code))); } static PyObject *KafkaError_name (KafkaError *self, PyObject *ignore) { /* FIXME: Pre-create name objects */ return cfl_PyUnistr(_FromString(rd_kafka_err2name(self->code))); } static PyObject *KafkaError_fatal (KafkaError *self, PyObject *ignore) { PyObject *ret = self->fatal ? Py_True : Py_False; Py_INCREF(ret); return ret; } static PyObject *KafkaError_test_raise_fatal (KafkaError *null, PyObject *ignore) { cfl_PyErr_Fatal(RD_KAFKA_RESP_ERR__INVALID_ARG, "This is a fatal exception for testing purposes"); return NULL; } static PyMethodDef KafkaError_methods[] = { { "code", (PyCFunction)KafkaError_code, METH_NOARGS, " Returns the error/event code for comparison to" "KafkaError..\n" "\n" " :returns: error/event code\n" " :rtype: int\n" "\n" }, { "str", (PyCFunction)KafkaError_str, METH_NOARGS, " Returns the human-readable error/event string.\n" "\n" " :returns: error/event message string\n" " :rtype: str\n" "\n" }, { "name", (PyCFunction)KafkaError_name, METH_NOARGS, " Returns the enum name for error/event.\n" "\n" " :returns: error/event enum name string\n" " :rtype: str\n" "\n" }, { "fatal", (PyCFunction)KafkaError_fatal, METH_NOARGS, " :returns: True if this a fatal error, else False.\n" " :rtype: bool\n" "\n" }, { "_test_raise_fatal", (PyCFunction)KafkaError_test_raise_fatal, METH_NOARGS|METH_STATIC }, { NULL } }; static void KafkaError_clear (PyObject *self0) { KafkaError *self = (KafkaError *)self0; if (self->str) { free(self->str); self->str = NULL; } } static void KafkaError_dealloc (PyObject *self0) { KafkaError *self = (KafkaError *)self0; KafkaError_clear(self0);; PyObject_GC_UnTrack(self0); Py_TYPE(self)->tp_free(self0); } static int KafkaError_traverse (KafkaError *self, visitproc visit, void *arg) { return 0; } static PyObject *KafkaError_str0 (KafkaError *self) { return cfl_PyUnistr(_FromFormat("KafkaError{%scode=%s,val=%d,str=\"%s\"}", self->fatal?"FATAL,":"", rd_kafka_err2name(self->code), self->code, self->str ? self->str : rd_kafka_err2str(self->code))); } static long KafkaError_hash (KafkaError *self) { return self->code; } static PyTypeObject KafkaErrorType; static PyObject* KafkaError_richcompare (KafkaError *self, PyObject *o2, int op) { int code2; int r; PyObject *result; if (Py_TYPE(o2) == &KafkaErrorType) code2 = ((KafkaError *)o2)->code; else code2 = cfl_PyInt_AsInt(o2); switch (op) { case Py_LT: r = self->code < code2; break; case Py_LE: r = self->code <= code2; break; case Py_EQ: r = self->code == code2; break; case Py_NE: r = self->code != code2; break; case Py_GT: r = self->code > code2; break; case Py_GE: r = self->code >= code2; break; default: r = 0; break; } result = r ? Py_True : Py_False; Py_INCREF(result); return result; } static PyTypeObject KafkaErrorType = { PyVarObject_HEAD_INIT(NULL, 0) "cimpl.KafkaError", /*tp_name*/ sizeof(KafkaError), /*tp_basicsize*/ 0, /*tp_itemsize*/ (destructor)KafkaError_dealloc, /*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ 0, /*tp_compare*/ (reprfunc)KafkaError_str0, /*tp_repr*/ 0, /*tp_as_number*/ 0, /*tp_as_sequence*/ 0, /*tp_as_mapping*/ (hashfunc)KafkaError_hash, /*tp_hash */ 0, /*tp_call*/ 0, /*tp_str*/ PyObject_GenericGetAttr, /*tp_getattro*/ 0, /*tp_setattro*/ 0, /*tp_as_buffer*/ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_BASE_EXC_SUBCLASS | Py_TPFLAGS_HAVE_GC, /*tp_flags*/ "Kafka error and event object\n" "\n" " The KafkaError class serves multiple purposes:\n" "\n" " - Propagation of errors\n" " - Propagation of events\n" " - Exceptions\n" "\n" " This class is not user-instantiable.\n" "\n", /*tp_doc*/ (traverseproc)KafkaError_traverse, /* tp_traverse */ (inquiry)KafkaError_clear, /* tp_clear */ (richcmpfunc)KafkaError_richcompare, /* tp_richcompare */ 0, /* tp_weaklistoffset */ 0, /* tp_iter */ 0, /* tp_iternext */ KafkaError_methods, /* tp_methods */ 0, /* tp_members */ 0, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ 0, /* tp_init */ 0 /* tp_alloc */ }; /** * @brief Initialize a KafkaError object. */ static void KafkaError_init (KafkaError *self, rd_kafka_resp_err_t code, const char *str) { self->code = code; self->fatal = 0; if (str) self->str = strdup(str); else self->str = NULL; } /** * @brief Internal factory to create KafkaError object. */ PyObject *KafkaError_new0 (rd_kafka_resp_err_t err, const char *fmt, ...) { KafkaError *self; va_list ap; char buf[512]; self = (KafkaError *)KafkaErrorType. tp_alloc(&KafkaErrorType, 0); if (!self) return NULL; if (fmt) { va_start(ap, fmt); vsnprintf(buf, sizeof(buf), fmt, ap); va_end(ap); } KafkaError_init(self, err, fmt ? buf : rd_kafka_err2str(err)); return (PyObject *)self; } /** * @brief Internal factory to create KafkaError object. * @returns a new KafkaError object if \p err != 0, else a None object. */ PyObject *KafkaError_new_or_None (rd_kafka_resp_err_t err, const char *str) { if (!err) Py_RETURN_NONE; if (str) return KafkaError_new0(err, "%s", str); else return KafkaError_new0(err, NULL); } /** * @brief Raise exception from fatal error. */ static void cfl_PyErr_Fatal (rd_kafka_resp_err_t err, const char *reason) { PyObject *eo = KafkaError_new0(err, "%s", reason); ((KafkaError *)eo)->fatal = 1; PyErr_SetObject(KafkaException, eo); } /**************************************************************************** * * * Message * * * * ****************************************************************************/ /** * @returns a Message's error object, if any, else None. * @remark The error object refcount is increased by this function. */ PyObject *Message_error (Message *self, PyObject *ignore) { if (self->error) { Py_INCREF(self->error); return self->error; } else Py_RETURN_NONE; } static PyObject *Message_value (Message *self, PyObject *ignore) { if (self->value) { Py_INCREF(self->value); return self->value; } else Py_RETURN_NONE; } static PyObject *Message_key (Message *self, PyObject *ignore) { if (self->key) { Py_INCREF(self->key); return self->key; } else Py_RETURN_NONE; } static PyObject *Message_topic (Message *self, PyObject *ignore) { if (self->topic) { Py_INCREF(self->topic); return self->topic; } else Py_RETURN_NONE; } static PyObject *Message_partition (Message *self, PyObject *ignore) { if (self->partition != RD_KAFKA_PARTITION_UA) return cfl_PyInt_FromInt(self->partition); else Py_RETURN_NONE; } static PyObject *Message_offset (Message *self, PyObject *ignore) { if (self->offset >= 0) return PyLong_FromLongLong(self->offset); else Py_RETURN_NONE; } static PyObject *Message_timestamp (Message *self, PyObject *ignore) { return Py_BuildValue("iL", self->tstype, self->timestamp); } static PyObject *Message_headers (Message *self, PyObject *ignore) { #ifdef RD_KAFKA_V_HEADERS if (self->headers) { Py_INCREF(self->headers); return self->headers; } else if (self->c_headers) { self->headers = c_headers_to_py(self->c_headers); rd_kafka_headers_destroy(self->c_headers); self->c_headers = NULL; Py_INCREF(self->headers); return self->headers; } else { Py_RETURN_NONE; } #else Py_RETURN_NONE; #endif } static PyObject *Message_set_headers (Message *self, PyObject *new_headers) { if (self->headers) Py_DECREF(self->headers); self->headers = new_headers; Py_INCREF(self->headers); Py_RETURN_NONE; } static PyObject *Message_set_value (Message *self, PyObject *new_val) { if (self->value) Py_DECREF(self->value); self->value = new_val; Py_INCREF(self->value); Py_RETURN_NONE; } static PyObject *Message_set_key (Message *self, PyObject *new_key) { if (self->key) Py_DECREF(self->key); self->key = new_key; Py_INCREF(self->key); Py_RETURN_NONE; } static PyMethodDef Message_methods[] = { { "error", (PyCFunction)Message_error, METH_NOARGS, " The message object is also used to propagate errors and events, " "an application must check error() to determine if the Message " "is a proper message (error() returns None) or an error or event " "(error() returns a KafkaError object)\n" "\n" " :rtype: None or :py:class:`KafkaError`\n" "\n" }, { "value", (PyCFunction)Message_value, METH_NOARGS, " :returns: message value (payload) or None if not available.\n" " :rtype: str|bytes or None\n" "\n" }, { "key", (PyCFunction)Message_key, METH_NOARGS, " :returns: message key or None if not available.\n" " :rtype: str|bytes or None\n" "\n" }, { "topic", (PyCFunction)Message_topic, METH_NOARGS, " :returns: topic name or None if not available.\n" " :rtype: str or None\n" "\n" }, { "partition", (PyCFunction)Message_partition, METH_NOARGS, " :returns: partition number or None if not available.\n" " :rtype: int or None\n" "\n" }, { "offset", (PyCFunction)Message_offset, METH_NOARGS, " :returns: message offset or None if not available.\n" " :rtype: int or None\n" "\n" }, { "timestamp", (PyCFunction)Message_timestamp, METH_NOARGS, "Retrieve timestamp type and timestamp from message.\n" "The timestamp type is one of:\n" " * :py:const:`TIMESTAMP_NOT_AVAILABLE`" " - Timestamps not supported by broker\n" " * :py:const:`TIMESTAMP_CREATE_TIME` " " - Message creation time (or source / producer time)\n" " * :py:const:`TIMESTAMP_LOG_APPEND_TIME` " " - Broker receive time\n" "\n" "The returned timestamp should be ignored if the timestamp type is " ":py:const:`TIMESTAMP_NOT_AVAILABLE`.\n" "\n" " The timestamp is the number of milliseconds since the epoch (UTC).\n" "\n" " Timestamps require broker version 0.10.0.0 or later and \n" " ``{'api.version.request': True}`` configured on the client.\n" "\n" " :returns: tuple of message timestamp type, and timestamp.\n" " :rtype: (int, int)\n" "\n" }, { "headers", (PyCFunction)Message_headers, METH_NOARGS, " Retrieve the headers set on a message. Each header is a key value" "pair. Please note that header keys are ordered and can repeat.\n" "\n" " :returns: list of two-tuples, one (key, value) pair for each header.\n" " :rtype: [(str, bytes),...] or None.\n" "\n" }, { "set_headers", (PyCFunction)Message_set_headers, METH_O, " Set the field 'Message.headers' with new value.\n" "\n" " :param object value: Message.headers.\n" " :returns: None.\n" " :rtype: None\n" "\n" }, { "set_value", (PyCFunction)Message_set_value, METH_O, " Set the field 'Message.value' with new value.\n" "\n" " :param object value: Message.value.\n" " :returns: None.\n" " :rtype: None\n" "\n" }, { "set_key", (PyCFunction)Message_set_key, METH_O, " Set the field 'Message.key' with new value.\n" "\n" " :param object value: Message.key.\n" " :returns: None.\n" " :rtype: None\n" "\n" }, { NULL } }; static int Message_clear (Message *self) { if (self->topic) { Py_DECREF(self->topic); self->topic = NULL; } if (self->value) { Py_DECREF(self->value); self->value = NULL; } if (self->key) { Py_DECREF(self->key); self->key = NULL; } if (self->error) { Py_DECREF(self->error); self->error = NULL; } if (self->headers) { Py_DECREF(self->headers); self->headers = NULL; } #ifdef RD_KAFKA_V_HEADERS if (self->c_headers){ rd_kafka_headers_destroy(self->c_headers); self->c_headers = NULL; } #endif return 0; } static void Message_dealloc (Message *self) { Message_clear(self); PyObject_GC_UnTrack(self); Py_TYPE(self)->tp_free((PyObject *)self); } static int Message_traverse (Message *self, visitproc visit, void *arg) { if (self->topic) Py_VISIT(self->topic); if (self->value) Py_VISIT(self->value); if (self->key) Py_VISIT(self->key); if (self->error) Py_VISIT(self->error); if (self->headers) Py_VISIT(self->headers); return 0; } static Py_ssize_t Message__len__ (Message *self) { return self->value ? PyObject_Length(self->value) : 0; } static PySequenceMethods Message_seq_methods = { (lenfunc)Message__len__ /* sq_length */ }; PyTypeObject MessageType = { PyVarObject_HEAD_INIT(NULL, 0) "cimpl.Message", /*tp_name*/ sizeof(Message), /*tp_basicsize*/ 0, /*tp_itemsize*/ (destructor)Message_dealloc, /*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ 0, /*tp_compare*/ 0, /*tp_repr*/ 0, /*tp_as_number*/ &Message_seq_methods, /*tp_as_sequence*/ 0, /*tp_as_mapping*/ 0, /*tp_hash */ 0, /*tp_call*/ 0, /*tp_str*/ PyObject_GenericGetAttr, /*tp_getattro*/ 0, /*tp_setattro*/ 0, /*tp_as_buffer*/ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_GC, /*tp_flags*/ "The Message object represents either a single consumed or " "produced message, or an event (:py:func:`error()` is not None).\n" "\n" "An application must check with :py:func:`error()` to see if the " "object is a proper message (error() returns None) or an " "error/event.\n" "\n" "This class is not user-instantiable.\n" "\n" ".. py:function:: len()\n" "\n" " :returns: Message value (payload) size in bytes\n" " :rtype: int\n" "\n", /*tp_doc*/ (traverseproc)Message_traverse, /* tp_traverse */ (inquiry)Message_clear, /* tp_clear */ 0, /* tp_richcompare */ 0, /* tp_weaklistoffset */ 0, /* tp_iter */ 0, /* tp_iternext */ Message_methods, /* tp_methods */ 0, /* tp_members */ 0, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ 0, /* tp_init */ 0 /* tp_alloc */ }; /** * @brief Internal factory to create Message object from message_t */ PyObject *Message_new0 (const Handle *handle, const rd_kafka_message_t *rkm) { Message *self; self = (Message *)MessageType.tp_alloc(&MessageType, 0); if (!self) return NULL; /* Only use message error string on Consumer, for Producers * it will contain the original message payload. */ self->error = KafkaError_new_or_None( rkm->err, (rkm->err && handle->type != RD_KAFKA_PRODUCER) ? rd_kafka_message_errstr(rkm) : NULL); if (rkm->rkt) self->topic = cfl_PyUnistr( _FromString(rd_kafka_topic_name(rkm->rkt))); if (rkm->payload) self->value = cfl_PyBin(_FromStringAndSize(rkm->payload, rkm->len)); if (rkm->key) self->key = cfl_PyBin( _FromStringAndSize(rkm->key, rkm->key_len)); self->partition = rkm->partition; self->offset = rkm->offset; self->timestamp = rd_kafka_message_timestamp(rkm, &self->tstype); return (PyObject *)self; } /**************************************************************************** * * * TopicPartition * * * * ****************************************************************************/ static int TopicPartition_clear (TopicPartition *self) { if (self->topic) { free(self->topic); self->topic = NULL; } if (self->error) { Py_DECREF(self->error); self->error = NULL; } return 0; } static void TopicPartition_setup (TopicPartition *self, const char *topic, int partition, long long offset, rd_kafka_resp_err_t err) { self->topic = strdup(topic); self->partition = partition; self->offset = offset; self->error = KafkaError_new_or_None(err, NULL); } static void TopicPartition_dealloc (TopicPartition *self) { PyObject_GC_UnTrack(self); TopicPartition_clear(self); Py_TYPE(self)->tp_free((PyObject *)self); } static int TopicPartition_init (PyObject *self, PyObject *args, PyObject *kwargs) { const char *topic; int partition = RD_KAFKA_PARTITION_UA; long long offset = RD_KAFKA_OFFSET_INVALID; static char *kws[] = { "topic", "partition", "offset", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "s|iL", kws, &topic, &partition, &offset)) return -1; TopicPartition_setup((TopicPartition *)self, topic, partition, offset, 0); return 0; } static PyObject *TopicPartition_new (PyTypeObject *type, PyObject *args, PyObject *kwargs) { PyObject *self = type->tp_alloc(type, 1); return self; } static int TopicPartition_traverse (TopicPartition *self, visitproc visit, void *arg) { if (self->error) Py_VISIT(self->error); return 0; } static PyMemberDef TopicPartition_members[] = { { "topic", T_STRING, offsetof(TopicPartition, topic), READONLY, ":attribute topic: Topic name (string)" }, { "partition", T_INT, offsetof(TopicPartition, partition), 0, ":attribute partition: Partition number (int)" }, { "offset", T_LONGLONG, offsetof(TopicPartition, offset), 0, ":attribute offset: Offset (long)\n" "\n" "Either an absolute offset (>=0) or a logical offset: " " :py:const:`OFFSET_BEGINNING`," " :py:const:`OFFSET_END`," " :py:const:`OFFSET_STORED`," " :py:const:`OFFSET_INVALID`\n" }, { "error", T_OBJECT, offsetof(TopicPartition, error), READONLY, ":attribute error: Indicates an error (with :py:class:`KafkaError`) unless None." }, { NULL } }; static PyObject *TopicPartition_str0 (TopicPartition *self) { PyObject *errstr = NULL; PyObject *errstr8 = NULL; const char *c_errstr = NULL; PyObject *ret; char offset_str[40]; snprintf(offset_str, sizeof(offset_str), "%"CFL_PRId64"", self->offset); if (self->error != Py_None) { errstr = cfl_PyObject_Unistr(self->error); c_errstr = cfl_PyUnistr_AsUTF8(errstr, &errstr8); } ret = cfl_PyUnistr( _FromFormat("TopicPartition{topic=%s,partition=%"CFL_PRId32 ",offset=%s,error=%s}", self->topic, self->partition, offset_str, c_errstr ? c_errstr : "None")); Py_XDECREF(errstr8); Py_XDECREF(errstr); return ret; } static PyObject * TopicPartition_richcompare (TopicPartition *self, PyObject *o2, int op) { TopicPartition *a = self, *b; int tr, pr; int r; PyObject *result; if (Py_TYPE(o2) != Py_TYPE(self)) { PyErr_SetNone(PyExc_NotImplementedError); return NULL; } b = (TopicPartition *)o2; tr = strcmp(a->topic, b->topic); pr = a->partition - b->partition; switch (op) { case Py_LT: r = tr < 0 || (tr == 0 && pr < 0); break; case Py_LE: r = tr < 0 || (tr == 0 && pr <= 0); break; case Py_EQ: r = (tr == 0 && pr == 0); break; case Py_NE: r = (tr != 0 || pr != 0); break; case Py_GT: r = tr > 0 || (tr == 0 && pr > 0); break; case Py_GE: r = tr > 0 || (tr == 0 && pr >= 0); break; default: r = 0; break; } result = r ? Py_True : Py_False; Py_INCREF(result); return result; } static long TopicPartition_hash (TopicPartition *self) { PyObject *topic = cfl_PyUnistr(_FromString(self->topic)); long r = PyObject_Hash(topic) ^ self->partition; Py_DECREF(topic); return r; } PyTypeObject TopicPartitionType = { PyVarObject_HEAD_INIT(NULL, 0) "cimpl.TopicPartition", /*tp_name*/ sizeof(TopicPartition), /*tp_basicsize*/ 0, /*tp_itemsize*/ (destructor)TopicPartition_dealloc, /*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ 0, /*tp_compare*/ (reprfunc)TopicPartition_str0, /*tp_repr*/ 0, /*tp_as_number*/ 0, /*tp_as_sequence*/ 0, /*tp_as_mapping*/ (hashfunc)TopicPartition_hash, /*tp_hash */ 0, /*tp_call*/ 0, /*tp_str*/ PyObject_GenericGetAttr, /*tp_getattro*/ 0, /*tp_setattro*/ 0, /*tp_as_buffer*/ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_GC, /*tp_flags*/ "TopicPartition is a generic type to hold a single partition and " "various information about it.\n" "\n" "It is typically used to provide a list of topics or partitions for " "various operations, such as :py:func:`Consumer.assign()`.\n" "\n" ".. py:function:: TopicPartition(topic, [partition], [offset])\n" "\n" " Instantiate a TopicPartition object.\n" "\n" " :param string topic: Topic name\n" " :param int partition: Partition id\n" " :param int offset: Initial partition offset\n" " :rtype: TopicPartition\n" "\n" "\n", /*tp_doc*/ (traverseproc)TopicPartition_traverse, /* tp_traverse */ (inquiry)TopicPartition_clear, /* tp_clear */ (richcmpfunc)TopicPartition_richcompare, /* tp_richcompare */ 0, /* tp_weaklistoffset */ 0, /* tp_iter */ 0, /* tp_iternext */ 0, /* tp_methods */ TopicPartition_members,/* tp_members */ 0, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ TopicPartition_init, /* tp_init */ 0, /* tp_alloc */ TopicPartition_new /* tp_new */ }; /** * @brief Internal factory to create a TopicPartition object. */ static PyObject *TopicPartition_new0 (const char *topic, int partition, long long offset, rd_kafka_resp_err_t err) { TopicPartition *self; self = (TopicPartition *)TopicPartitionType.tp_new( &TopicPartitionType, NULL, NULL); TopicPartition_setup(self, topic, partition, offset, err); return (PyObject *)self; } /** * @brief Convert C rd_kafka_topic_partition_list_t to Python list(TopicPartition). * * @returns The new Python list object. */ PyObject *c_parts_to_py (const rd_kafka_topic_partition_list_t *c_parts) { PyObject *parts; size_t i; parts = PyList_New(c_parts->cnt); for (i = 0 ; i < (size_t)c_parts->cnt ; i++) { const rd_kafka_topic_partition_t *rktpar = &c_parts->elems[i]; PyList_SET_ITEM(parts, i, TopicPartition_new0( rktpar->topic, rktpar->partition, rktpar->offset, rktpar->err)); } return parts; } /** * @brief Convert Python list(TopicPartition) to C rd_kafka_topic_partition_list_t. * * @returns The new C list on success or NULL on error. */ rd_kafka_topic_partition_list_t *py_to_c_parts (PyObject *plist) { rd_kafka_topic_partition_list_t *c_parts; size_t i; if (!PyList_Check(plist)) { PyErr_SetString(PyExc_TypeError, "requires list of TopicPartition"); return NULL; } c_parts = rd_kafka_topic_partition_list_new((int)PyList_Size(plist)); for (i = 0 ; i < (size_t)PyList_Size(plist) ; i++) { TopicPartition *tp = (TopicPartition *) PyList_GetItem(plist, i); if (PyObject_Type((PyObject *)tp) != (PyObject *)&TopicPartitionType) { PyErr_Format(PyExc_TypeError, "expected %s", TopicPartitionType.tp_name); rd_kafka_topic_partition_list_destroy(c_parts); return NULL; } rd_kafka_topic_partition_list_add(c_parts, tp->topic, tp->partition)->offset = tp->offset; } return c_parts; } #ifdef RD_KAFKA_V_HEADERS /** * @brief Translate Python \p key and \p value to C types and set on * provided \p rd_headers object. * * @returns 1 on success or 0 if an exception was raised. */ static int py_header_to_c (rd_kafka_headers_t *rd_headers, PyObject *key, PyObject *value) { PyObject *ks, *ks8, *vo8 = NULL; const char *k; const void *v = NULL; Py_ssize_t vsize = 0; rd_kafka_resp_err_t err; if (!(ks = cfl_PyObject_Unistr(key))) { PyErr_SetString(PyExc_TypeError, "expected header key to be unicode " "string"); return 0; } k = cfl_PyUnistr_AsUTF8(ks, &ks8); if (value != Py_None) { if (cfl_PyBin(_Check(value))) { /* Proper binary */ if (cfl_PyBin(_AsStringAndSize(value, (char **)&v, &vsize)) == -1) { Py_DECREF(ks); Py_XDECREF(ks8); return 0; } } else if (cfl_PyUnistr(_Check(value))) { /* Unicode string, translate to utf-8. */ v = cfl_PyUnistr_AsUTF8(value, &vo8); if (!v) { Py_DECREF(ks); Py_XDECREF(ks8); return 0; } vsize = (Py_ssize_t)strlen(v); } else { PyErr_Format(PyExc_TypeError, "expected header value to be " "None, binary, or unicode string, not %s", ((PyTypeObject *)PyObject_Type(value))-> tp_name); Py_DECREF(ks); Py_XDECREF(ks8); return 0; } } if ((err = rd_kafka_header_add(rd_headers, k, -1, v, vsize))) { cfl_PyErr_Format(err, "Unable to add message header \"%s\": " "%s", k, rd_kafka_err2str(err)); Py_DECREF(ks); Py_XDECREF(ks8); Py_XDECREF(vo8); return 0; } Py_DECREF(ks); Py_XDECREF(ks8); Py_XDECREF(vo8); return 1; } /** * @brief Convert Python list of tuples to rd_kafka_headers_t * * Header names must be unicode strong. * Header values may be None, binary or unicode string, the latter is * automatically encoded as utf-8. */ static rd_kafka_headers_t *py_headers_list_to_c (PyObject *hdrs) { int i, len; rd_kafka_headers_t *rd_headers = NULL; len = (int)PyList_Size(hdrs); rd_headers = rd_kafka_headers_new(len); for (i = 0; i < len; i++) { PyObject *tuple = PyList_GET_ITEM(hdrs, i); if (!PyTuple_Check(tuple) || PyTuple_Size(tuple) != 2) { rd_kafka_headers_destroy(rd_headers); PyErr_SetString(PyExc_TypeError, "Headers are expected to be a " "list of (key, value) tuples"); return NULL; } if (!py_header_to_c(rd_headers, PyTuple_GET_ITEM(tuple, 0), PyTuple_GET_ITEM(tuple, 1))) { rd_kafka_headers_destroy(rd_headers); return NULL; } } return rd_headers; } /** * @brief Convert Python dict to rd_kafka_headers_t */ static rd_kafka_headers_t *py_headers_dict_to_c (PyObject *hdrs) { int len; Py_ssize_t pos = 0; rd_kafka_headers_t *rd_headers = NULL; PyObject *ko, *vo; len = (int)PyDict_Size(hdrs); rd_headers = rd_kafka_headers_new(len); while (PyDict_Next(hdrs, &pos, &ko, &vo)) { if (!py_header_to_c(rd_headers, ko, vo)) { rd_kafka_headers_destroy(rd_headers); return NULL; } } return rd_headers; } /** * @brief Convert Python list[(header_key, header_value),...]) to C rd_kafka_topic_partition_list_t. * * @returns The new Python list[(header_key, header_value),...] object. */ rd_kafka_headers_t *py_headers_to_c (PyObject *hdrs) { if (PyList_Check(hdrs)) { return py_headers_list_to_c(hdrs); } else if (PyDict_Check(hdrs)) { return py_headers_dict_to_c(hdrs); } else { PyErr_Format(PyExc_TypeError, "expected headers to be " "dict or list of (key, value) tuples, not %s", ((PyTypeObject *)PyObject_Type(hdrs))->tp_name); return NULL; } } /** * @brief Convert rd_kafka_headers_t to Python list[(header_key, header_value),...]) * * @returns The new C headers on success or NULL on error. */ PyObject *c_headers_to_py (rd_kafka_headers_t *headers) { size_t idx = 0; size_t header_size = 0; const char *header_key; const void *header_value; size_t header_value_size; PyObject *header_list; header_size = rd_kafka_header_cnt(headers); header_list = PyList_New(header_size); while (!rd_kafka_header_get_all(headers, idx++, &header_key, &header_value, &header_value_size)) { // Create one (key, value) tuple for each header PyObject *header_tuple = PyTuple_New(2); PyTuple_SetItem(header_tuple, 0, cfl_PyUnistr(_FromString(header_key)) ); if (header_value) { PyTuple_SetItem(header_tuple, 1, cfl_PyBin(_FromStringAndSize(header_value, header_value_size)) ); } else { PyTuple_SetItem(header_tuple, 1, Py_None); } PyList_SET_ITEM(header_list, idx-1, header_tuple); } return header_list; } #endif /**************************************************************************** * * * Common callbacks * * * * ****************************************************************************/ static void error_cb (rd_kafka_t *rk, int err, const char *reason, void *opaque) { Handle *h = opaque; PyObject *eo, *result; CallState *cs; cs = CallState_get(h); /* If the client raised a fatal error we'll raise an exception * rather than calling the error callback. */ if (err == RD_KAFKA_RESP_ERR__FATAL) { char errstr[512]; err = rd_kafka_fatal_error(rk, errstr, sizeof(errstr)); cfl_PyErr_Fatal(err, errstr); goto crash; } if (!h->error_cb) { /* No callback defined */ goto done; } eo = KafkaError_new0(err, "%s", reason); result = PyObject_CallFunctionObjArgs(h->error_cb, eo, NULL); Py_DECREF(eo); if (result) Py_DECREF(result); else { crash: CallState_crash(cs); rd_kafka_yield(h->rk); } done: CallState_resume(cs); } /** * @brief librdkafka throttle callback triggered by poll() or flush(), triggers the * corresponding Python throttle_cb */ static void throttle_cb (rd_kafka_t *rk, const char *broker_name, int32_t broker_id, int throttle_time_ms, void *opaque) { Handle *h = opaque; PyObject *ThrottleEvent_type, *throttle_event; PyObject *result, *args; CallState *cs; cs = CallState_get(h); if (!h->throttle_cb) { /* No callback defined */ goto done; } ThrottleEvent_type = cfl_PyObject_lookup("confluent_kafka", "ThrottleEvent"); if (!ThrottleEvent_type) { /* ThrottleEvent class not found */ goto err; } args = Py_BuildValue("(sid)", broker_name, broker_id, (double)throttle_time_ms/1000); throttle_event = PyObject_Call(ThrottleEvent_type, args, NULL); Py_DECREF(args); Py_DECREF(ThrottleEvent_type); if (!throttle_event) { /* Failed to instantiate ThrottleEvent object */ goto err; } result = PyObject_CallFunctionObjArgs(h->throttle_cb, throttle_event, NULL); Py_DECREF(throttle_event); if (result) { /* throttle_cb executed successfully */ Py_DECREF(result); goto done; } /** * Stop callback dispatcher, return err to application * fall-through to unlock GIL */ err: CallState_crash(cs); rd_kafka_yield(h->rk); done: CallState_resume(cs); } static int stats_cb(rd_kafka_t *rk, char *json, size_t json_len, void *opaque) { Handle *h = opaque; PyObject *eo = NULL, *result = NULL; CallState *cs = NULL; cs = CallState_get(h); if (json_len == 0) { /* No data returned*/ goto done; } eo = Py_BuildValue("s", json); result = PyObject_CallFunctionObjArgs(h->stats_cb, eo, NULL); Py_DECREF(eo); if (result) Py_DECREF(result); else { CallState_crash(cs); rd_kafka_yield(h->rk); } done: CallState_resume(cs); return 0; } static void log_cb (const rd_kafka_t *rk, int level, const char *fac, const char *buf) { Handle *h = rd_kafka_opaque(rk); PyObject *result; CallState *cs; static const int level_map[8] = { /* Map syslog levels to python logging levels */ 50, /* LOG_EMERG -> logging.CRITICAL */ 50, /* LOG_ALERT -> logging.CRITICAL */ 50, /* LOG_CRIT -> logging.CRITICAL */ 40, /* LOG_ERR -> logging.ERROR */ 30, /* LOG_WARNING -> logging.WARNING */ 20, /* LOG_NOTICE -> logging.INFO */ 20, /* LOG_INFO -> logging.INFO */ 10, /* LOG_DEBUG -> logging.DEBUG */ }; cs = CallState_get(h); result = PyObject_CallMethod(h->logger, "log", "issss", level_map[level], "%s [%s] %s", fac, rd_kafka_name(rk), buf); if (result) Py_DECREF(result); else { CallState_crash(cs); rd_kafka_yield(h->rk); } CallState_resume(cs); } /**************************************************************************** * * * Common helpers * * * * ****************************************************************************/ /** * Clear Python object references in Handle */ void Handle_clear (Handle *h) { if (h->error_cb) { Py_DECREF(h->error_cb); h->error_cb = NULL; } if (h->throttle_cb) { Py_DECREF(h->throttle_cb); h->throttle_cb = NULL; } if (h->stats_cb) { Py_DECREF(h->stats_cb); h->stats_cb = NULL; } if (h->logger) { Py_DECREF(h->logger); h->logger = NULL; } if (h->initiated) { #ifdef WITH_PY_TSS PyThread_tss_delete(&h->tlskey); #else PyThread_delete_key(h->tlskey); #endif } } /** * GC traversal for Python object references */ int Handle_traverse (Handle *h, visitproc visit, void *arg) { if (h->error_cb) Py_VISIT(h->error_cb); if (h->throttle_cb) Py_VISIT(h->throttle_cb); if (h->stats_cb) Py_VISIT(h->stats_cb); return 0; } /** * @brief Set single special producer config value. * * @returns 1 if handled, 0 if unknown, or -1 on failure (exception raised). */ static int producer_conf_set_special (Handle *self, rd_kafka_conf_t *conf, const char *name, PyObject *valobj) { if (!strcmp(name, "on_delivery")) { if (!PyCallable_Check(valobj)) { cfl_PyErr_Format( RD_KAFKA_RESP_ERR__INVALID_ARG, "%s requires a callable " "object", name); return -1; } self->u.Producer.default_dr_cb = valobj; Py_INCREF(self->u.Producer.default_dr_cb); return 1; } else if (!strcmp(name, "delivery.report.only.error")) { /* Since we allocate msgstate for each produced message * with a callback we can't use delivery.report.only.error * as-is, as we wouldn't be able to ever free those msgstates. * Instead we shortcut this setting in the Python client, * providing the same functionality from dr_msg_cb trampoline. */ if (!cfl_PyBool_get(valobj, name, &self->u.Producer.dr_only_error)) return -1; return 1; } return 0; /* Not handled */ } /** * @brief Set single special consumer config value. * * @returns 1 if handled, 0 if unknown, or -1 on failure (exception raised). */ static int consumer_conf_set_special (Handle *self, rd_kafka_conf_t *conf, const char *name, PyObject *valobj) { if (!strcmp(name, "on_commit")) { if (!PyCallable_Check(valobj)) { cfl_PyErr_Format( RD_KAFKA_RESP_ERR__INVALID_ARG, "%s requires a callable " "object", name); return -1; } self->u.Consumer.on_commit = valobj; Py_INCREF(self->u.Consumer.on_commit); return 1; } return 0; } /** * @brief Call out to __init__.py _resolve_plugins() to see if any * of the specified `plugin.library.paths` are found in the * wheel's embedded library directory, and if so change the * path to use these libraries. * * @returns a possibly updated plugin.library.paths string object which * must be DECREF:ed, or NULL if an exception was raised. */ static PyObject *resolve_plugins (PyObject *plugins) { PyObject *resolved; PyObject *module, *function; module = PyImport_ImportModule("confluent_kafka"); if (!module) return NULL; function = PyObject_GetAttrString(module, "_resolve_plugins"); if (!function) { PyErr_SetString(PyExc_RuntimeError, "confluent_kafka._resolve_plugins() not found"); Py_DECREF(module); return NULL; } resolved = PyObject_CallFunctionObjArgs(function, plugins, NULL); Py_DECREF(function); Py_DECREF(module); if (!resolved) { PyErr_SetString(PyExc_RuntimeError, "confluent_kafka._resolve_plugins() failed"); return NULL; } return resolved; } /** * @brief Remove property from confidct and set rd_kafka_conf with its value * * @param vo The property value object * * @returns 1 on success or 0 on failure (exception raised). */ static int common_conf_set_special(PyObject *confdict, rd_kafka_conf_t *conf, const char *name, PyObject *vo) { const char *v; char errstr[256]; PyObject *vs; PyObject *vs8 = NULL; if (!(vs = cfl_PyObject_Unistr(vo))) { PyErr_Format(PyExc_TypeError, "expected configuration property %s " "as type unicode string", name); return 0; } v = cfl_PyUnistr_AsUTF8(vs, &vs8); if (rd_kafka_conf_set(conf, name, v, errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) { cfl_PyErr_Format(RD_KAFKA_RESP_ERR__INVALID_ARG, "%s", errstr); Py_DECREF(vs); Py_XDECREF(vs8); return 0; } Py_DECREF(vs); Py_XDECREF(vs8); PyDict_DelItemString(confdict, name); return 1; } /** * Common config setup for Kafka client handles. * * Returns a conf object on success or NULL on failure in which case * an exception has been raised. */ rd_kafka_conf_t *common_conf_setup (rd_kafka_type_t ktype, Handle *h, PyObject *args, PyObject *kwargs) { rd_kafka_conf_t *conf; Py_ssize_t pos = 0; PyObject *ko, *vo; PyObject *confdict = NULL; if (rd_kafka_version() < MIN_RD_KAFKA_VERSION) { PyErr_Format(PyExc_RuntimeError, "%s: librdkafka version %s (0x%x) detected", MIN_VER_ERRSTR, rd_kafka_version_str(), rd_kafka_version()); return NULL; } /* Supported parameter constellations: * - kwargs (conf={..}, logger=..) * - args and kwargs ({..}, logger=..) * - args ({..}) * When both args and kwargs are present the kwargs take * precedence in case of duplicate keys. * All keys map to configuration properties. * * Copy configuration dict to avoid manipulating application config. */ if (args && PyTuple_Size(args)) { if (!PyTuple_Check(args) || PyTuple_Size(args) > 1) { PyErr_SetString(PyExc_TypeError, "expected tuple containing single dict"); return NULL; } else if (PyTuple_Size(args) == 1 && !PyDict_Check((confdict = PyTuple_GetItem(args, 0)))) { PyErr_SetString(PyExc_TypeError, "expected configuration dict"); return NULL; } confdict = PyDict_Copy(confdict); } if (!confdict) { if (!kwargs) { PyErr_SetString(PyExc_TypeError, "expected configuration dict"); return NULL; } confdict = PyDict_Copy(kwargs); } else if (kwargs) { /* Update confdict with kwargs */ PyDict_Update(confdict, kwargs); } if (ktype == RD_KAFKA_CONSUMER && !PyDict_GetItemString(confdict, "group.id")) { PyErr_SetString(PyExc_ValueError, "Failed to create consumer: group.id must be set"); Py_DECREF(confdict); return NULL; } conf = rd_kafka_conf_new(); /* * Set debug contexts first to capture all events including plugin loading */ if ((vo = PyDict_GetItemString(confdict, "debug")) && !common_conf_set_special(confdict, conf, "debug", vo)) goto outer_err; /* * Plugins must be configured prior to handling any of their * configuration properties. * Dicts are unordered so we explicitly check for, set, and delete the * plugin paths here. * This ensures plugin configuration properties are handled in the * correct order. */ if ((vo = PyDict_GetItemString(confdict, "plugin.library.paths"))) { /* Resolve plugin paths */ PyObject *resolved; resolved = resolve_plugins(vo); if (!resolved) goto outer_err; if (!common_conf_set_special(confdict, conf, "plugin.library.paths", resolved)) { Py_DECREF(resolved); goto outer_err; } Py_DECREF(resolved); } if ((vo = PyDict_GetItemString(confdict, "default.topic.config"))) { /* TODO: uncomment for 1.0 release PyErr_Warn(PyExc_DeprecationWarning, "default.topic.config has being deprecated, " "set default topic configuration values in the global dict"); */ if (PyDict_Update(confdict, vo) == -1) { goto outer_err; } PyDict_DelItemString(confdict, "default.topic.config"); } /* Convert config dict to config key-value pairs. */ while (PyDict_Next(confdict, &pos, &ko, &vo)) { PyObject *ks; PyObject *ks8 = NULL; PyObject *vs = NULL, *vs8 = NULL; const char *k; const char *v; char errstr[256]; int r = 0; if (!(ks = cfl_PyObject_Unistr(ko))) { PyErr_SetString(PyExc_TypeError, "expected configuration property name " "as type unicode string"); goto inner_err; } k = cfl_PyUnistr_AsUTF8(ks, &ks8); if (!strcmp(k, "error_cb")) { if (!PyCallable_Check(vo)) { PyErr_SetString(PyExc_TypeError, "expected error_cb property " "as a callable function"); goto inner_err; } if (h->error_cb) { Py_DECREF(h->error_cb); h->error_cb = NULL; } if (vo != Py_None) { h->error_cb = vo; Py_INCREF(h->error_cb); } Py_XDECREF(ks8); Py_DECREF(ks); continue; } else if (!strcmp(k, "throttle_cb")) { if (!PyCallable_Check(vo)) { PyErr_SetString(PyExc_ValueError, "expected throttle_cb property " "as a callable function"); goto inner_err; } if (h->throttle_cb) { Py_DECREF(h->throttle_cb); h->throttle_cb = NULL; } if (vo != Py_None) { h->throttle_cb = vo; Py_INCREF(h->throttle_cb); } Py_XDECREF(ks8); Py_DECREF(ks); continue; } else if (!strcmp(k, "stats_cb")) { if (!PyCallable_Check(vo)) { PyErr_SetString(PyExc_TypeError, "expected stats_cb property " "as a callable function"); goto inner_err; } if (h->stats_cb) { Py_DECREF(h->stats_cb); h->stats_cb = NULL; } if (vo != Py_None) { h->stats_cb = vo; Py_INCREF(h->stats_cb); } Py_XDECREF(ks8); Py_DECREF(ks); continue; } else if (!strcmp(k, "logger")) { if (h->logger) { Py_DECREF(h->logger); h->logger = NULL; } if (vo != Py_None) { h->logger = vo; Py_INCREF(h->logger); } Py_XDECREF(ks8); Py_DECREF(ks); continue; } /* Special handling for certain config keys. */ if (ktype == RD_KAFKA_PRODUCER) r = producer_conf_set_special(h, conf, k, vo); else if (ktype == RD_KAFKA_CONSUMER) r = consumer_conf_set_special(h, conf, k, vo); if (r == -1) { /* Error */ goto inner_err; } else if (r == 1) { /* Handled */ continue; } /* * Pass configuration property through to librdkafka. */ if (vo == Py_None) { v = NULL; } else { if (!(vs = cfl_PyObject_Unistr(vo))) { PyErr_SetString(PyExc_TypeError, "expected configuration " "property value as type " "unicode string"); goto inner_err; } v = cfl_PyUnistr_AsUTF8(vs, &vs8); } if (rd_kafka_conf_set(conf, k, v, errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) { cfl_PyErr_Format(RD_KAFKA_RESP_ERR__INVALID_ARG, "%s", errstr); goto inner_err; } Py_XDECREF(vs8); Py_XDECREF(vs); Py_XDECREF(ks8); Py_DECREF(ks); continue; inner_err: Py_XDECREF(vs8); Py_XDECREF(vs); Py_XDECREF(ks8); Py_XDECREF(ks); goto outer_err; } Py_DECREF(confdict); rd_kafka_conf_set_error_cb(conf, error_cb); if (h->throttle_cb) rd_kafka_conf_set_throttle_cb(conf, throttle_cb); if (h->stats_cb) rd_kafka_conf_set_stats_cb(conf, stats_cb); if (h->logger) { /* Write logs to log queue (which is forwarded * to the polled queue in the Producer/Consumer constructors) */ rd_kafka_conf_set(conf, "log.queue", "true", NULL, 0); rd_kafka_conf_set_log_cb(conf, log_cb); } rd_kafka_conf_set_opaque(conf, h); #ifdef WITH_PY_TSS if (PyThread_tss_create(&h->tlskey)) { PyErr_SetString(PyExc_RuntimeError, "Failed to initialize thread local storage"); rd_kafka_conf_destroy(conf); return NULL; } #else h->tlskey = PyThread_create_key(); #endif h->initiated = 1; return conf; outer_err: Py_DECREF(confdict); rd_kafka_conf_destroy(conf); return NULL; } /** * @brief Initialiase a CallState and unlock the GIL prior to a * possibly blocking external call. */ void CallState_begin (Handle *h, CallState *cs) { cs->thread_state = PyEval_SaveThread(); assert(cs->thread_state != NULL); cs->crashed = 0; #ifdef WITH_PY_TSS PyThread_tss_set(&h->tlskey, cs); #else PyThread_set_key_value(h->tlskey, cs); #endif } /** * @brief Relock the GIL after external call is done. * @returns 0 if a Python signal was raised or a callback crashed, else 1. */ int CallState_end (Handle *h, CallState *cs) { #ifdef WITH_PY_TSS PyThread_tss_set(&h->tlskey, NULL); #else PyThread_delete_key_value(h->tlskey); #endif PyEval_RestoreThread(cs->thread_state); if (PyErr_CheckSignals() == -1 || cs->crashed) return 0; return 1; } /** * @brief Get the current thread's CallState and re-locks the GIL. */ CallState *CallState_get (Handle *h) { CallState *cs; #ifdef WITH_PY_TSS cs = PyThread_tss_get(&h->tlskey); #else cs = PyThread_get_key_value(h->tlskey); #endif assert(cs != NULL); assert(cs->thread_state != NULL); PyEval_RestoreThread(cs->thread_state); cs->thread_state = NULL; return cs; } /** * @brief Un-locks the GIL to resume blocking external call. */ void CallState_resume (CallState *cs) { assert(cs->thread_state == NULL); cs->thread_state = PyEval_SaveThread(); } /** * @brief Indicate that call crashed. */ void CallState_crash (CallState *cs) { cs->crashed++; } /** * @brief Find class/type/object \p typename in \p modulename * * @returns a new reference to the object. * * @raises a TypeError exception if the type is not found. */ PyObject *cfl_PyObject_lookup (const char *modulename, const char *typename) { PyObject *module = PyImport_ImportModule(modulename); PyObject *obj; if (!modulename) { PyErr_Format(PyExc_TypeError, "Module %s not found when looking up %s.%s", modulename, modulename, typename); return NULL; } obj = PyObject_GetAttrString(module, typename); if (!obj) { Py_DECREF(module); PyErr_Format(PyExc_TypeError, "No such class/type/object: %s.%s", modulename, typename); return NULL; } return obj; } void cfl_PyDict_SetString (PyObject *dict, const char *name, const char *val) { PyObject *vo = cfl_PyUnistr(_FromString(val)); PyDict_SetItemString(dict, name, vo); Py_DECREF(vo); } void cfl_PyDict_SetInt (PyObject *dict, const char *name, int val) { PyObject *vo = cfl_PyInt_FromInt(val); PyDict_SetItemString(dict, name, vo); Py_DECREF(vo); } int cfl_PyObject_SetString (PyObject *o, const char *name, const char *val) { PyObject *vo = cfl_PyUnistr(_FromString(val)); int r = PyObject_SetAttrString(o, name, vo); Py_DECREF(vo); return r; } int cfl_PyObject_SetInt (PyObject *o, const char *name, int val) { PyObject *vo = cfl_PyInt_FromInt(val); int r = PyObject_SetAttrString(o, name, vo); Py_DECREF(vo); return r; } /** * @brief Get attribute \p attr_name from \p object and verify it is * of type \p py_type. * * @param py_type the value type of \p attr_name must match \p py_type, unless * \p py_type is NULL. * * @returns 1 if \p valp was updated with the object (new reference) or NULL * if not matched and not required, or * 0 if an exception was raised. */ int cfl_PyObject_GetAttr (PyObject *object, const char *attr_name, PyObject **valp, const PyTypeObject *py_type, int required) { PyObject *o; o = PyObject_GetAttrString(object, attr_name); if (!o) { if (!required) { *valp = NULL; return 1; } PyErr_Format(PyExc_TypeError, "Required attribute .%s missing", attr_name); return 0; } if (py_type && Py_TYPE(o) != py_type) { Py_DECREF(o); PyErr_Format(PyExc_TypeError, "Expected .%s to be %s type, not %s", attr_name, py_type->tp_name, ((PyTypeObject *)PyObject_Type(o))->tp_name); return 0; } *valp = o; return 1; } /** * @brief Get attribute \p attr_name from \p object and make sure it is * an integer type. * * @returns 1 if \p valp was updated with either the object value, or \p defval. * 0 if an exception was raised. */ int cfl_PyObject_GetInt (PyObject *object, const char *attr_name, int *valp, int defval, int required) { PyObject *o; if (!cfl_PyObject_GetAttr(object, attr_name, &o, #ifdef PY3 &PyLong_Type, #else &PyInt_Type, #endif required)) return 0; if (!o) { *valp = defval; return 1; } *valp = cfl_PyInt_AsInt(o); Py_DECREF(o); return 1; } /** * @brief Checks that \p object is a bool (or boolable) and sets * \p *valp according to the object. * * @returns 1 if \p valp was set, or 0 if \p object is not a boolable object. * An exception is raised in the error case. */ int cfl_PyBool_get (PyObject *object, const char *name, int *valp) { if (!PyBool_Check(object)) { PyErr_Format(PyExc_TypeError, "Expected %s to be bool type, not %s", name, ((PyTypeObject *)PyObject_Type(object))->tp_name); return 0; } *valp = object == Py_True; return 1; } /** * @brief Get attribute \p attr_name from \p object and make sure it is * a string type. * * @returns 1 if \p valp was updated with a newly allocated copy of either the * object value (UTF8), or \p defval. * 0 if an exception was raised. */ int cfl_PyObject_GetString (PyObject *object, const char *attr_name, char **valp, const char *defval, int required) { PyObject *o, *uo, *uop; if (!cfl_PyObject_GetAttr(object, attr_name, &o, #ifdef PY3 &PyUnicode_Type, #else /* Python 2: support both str and unicode * let PyObject_Unistr() do the * proper conversion below. */ NULL, #endif required)) return 0; if (!o) { *valp = defval ? strdup(defval) : NULL; return 1; } if (!(uo = cfl_PyObject_Unistr(o))) { Py_DECREF(o); PyErr_Format(PyExc_TypeError, "Expected .%s to be a unicode string type, not %s", attr_name, ((PyTypeObject *)PyObject_Type(o))->tp_name); return 0; } Py_DECREF(o); *valp = (char *)cfl_PyUnistr_AsUTF8(uo, &uop); if (!*valp) { Py_DECREF(uo); Py_XDECREF(uop); return 0; /* exception raised by AsUTF8 */ } *valp = strdup(*valp); Py_DECREF(uo); Py_XDECREF(uop); return 1; } /** * @returns a Python list of longs based on the input int32_t array */ PyObject *cfl_int32_array_to_py_list (const int32_t *arr, size_t cnt) { PyObject *list; size_t i; list = PyList_New((Py_ssize_t)cnt); if (!list) return NULL; for (i = 0 ; i < cnt ; i++) PyList_SET_ITEM(list, (Py_ssize_t)i, cfl_PyInt_FromInt(arr[i])); return list; } /**************************************************************************** * * * Base * * * * ****************************************************************************/ static PyObject *libversion (PyObject *self, PyObject *args) { return Py_BuildValue("si", rd_kafka_version_str(), rd_kafka_version()); } /* * Version hex representation * 0xMMmmRRPP * MM=major, mm=minor, RR=revision, PP=patchlevel (not used) */ static PyObject *version (PyObject *self, PyObject *args) { return Py_BuildValue("si", "1.1.0", 0x01010000); } static PyMethodDef cimpl_methods[] = { {"libversion", libversion, METH_NOARGS, " Retrieve librdkafka version string and integer\n" "\n" " :returns: (version_string, version_int) tuple\n" " :rtype: tuple(str,int)\n" "\n" }, {"version", version, METH_NOARGS, " Retrieve module version string and integer\n" "\n" " :returns: (version_string, version_int) tuple\n" " :rtype: tuple(str,int)\n" "\n" }, { NULL } }; /** * @brief Add librdkafka error enums to KafkaError's type dict. * @returns an updated doc string containing all error constants. */ static char *KafkaError_add_errs (PyObject *dict, const char *origdoc) { const struct rd_kafka_err_desc *descs; size_t cnt; size_t i; char *doc; size_t dof = 0, dsize; /* RST grid table column widths */ #define _COL1_W 50 #define _COL2_W 100 /* Must be larger than COL1 */ char dash[_COL2_W], eq[_COL2_W]; rd_kafka_get_err_descs(&descs, &cnt); memset(dash, '-', sizeof(dash)); memset(eq, '=', sizeof(eq)); /* Setup output doc buffer. */ dof = strlen(origdoc); dsize = dof + 500 + (cnt * 200); doc = malloc(dsize); memcpy(doc, origdoc, dof+1); #define _PRINT(...) do { \ char tmpdoc[512]; \ size_t _len; \ _len = snprintf(tmpdoc, sizeof(tmpdoc), __VA_ARGS__); \ if (_len > sizeof(tmpdoc)) _len = sizeof(tmpdoc)-1; \ if (dof + _len >= dsize) { \ dsize += 2; \ doc = realloc(doc, dsize); \ } \ memcpy(doc+dof, tmpdoc, _len+1); \ dof += _len; \ } while (0) /* Error constant table header (RST grid table) */ _PRINT("Error and event constants:\n\n" "+-%.*s-+-%.*s-+\n" "| %-*.*s | %-*.*s |\n" "+=%.*s=+=%.*s=+\n", _COL1_W, dash, _COL2_W, dash, _COL1_W, _COL1_W, "Constant", _COL2_W, _COL2_W, "Description", _COL1_W, eq, _COL2_W, eq); for (i = 0 ; i < cnt ; i++) { PyObject *code; if (!descs[i].desc) continue; code = cfl_PyInt_FromInt(descs[i].code); PyDict_SetItemString(dict, descs[i].name, code); Py_DECREF(code); _PRINT("| %-*.*s | %-*.*s |\n" "+-%.*s-+-%.*s-+\n", _COL1_W, _COL1_W, descs[i].name, _COL2_W, _COL2_W, descs[i].desc, _COL1_W, dash, _COL2_W, dash); } _PRINT("\n"); return doc; // FIXME: leak } #ifdef PY3 static struct PyModuleDef cimpl_moduledef = { PyModuleDef_HEAD_INIT, "cimpl", /* m_name */ "Confluent's Python client for Apache Kafka (C implementation)", /* m_doc */ -1, /* m_size */ cimpl_methods, /* m_methods */ }; #endif static PyObject *_init_cimpl (void) { PyObject *m; PyEval_InitThreads(); if (PyType_Ready(&KafkaErrorType) < 0) return NULL; if (PyType_Ready(&MessageType) < 0) return NULL; if (PyType_Ready(&TopicPartitionType) < 0) return NULL; if (PyType_Ready(&ProducerType) < 0) return NULL; if (PyType_Ready(&ConsumerType) < 0) return NULL; if (PyType_Ready(&AdminType) < 0) return NULL; if (AdminTypes_Ready() < 0) return NULL; #ifdef PY3 m = PyModule_Create(&cimpl_moduledef); #else m = Py_InitModule3("cimpl", cimpl_methods, "Confluent's Python client for Apache Kafka (C implementation)"); #endif if (!m) return NULL; Py_INCREF(&KafkaErrorType); KafkaErrorType.tp_doc = KafkaError_add_errs(KafkaErrorType.tp_dict, KafkaErrorType.tp_doc); PyModule_AddObject(m, "KafkaError", (PyObject *)&KafkaErrorType); Py_INCREF(&MessageType); PyModule_AddObject(m, "Message", (PyObject *)&MessageType); Py_INCREF(&TopicPartitionType); PyModule_AddObject(m, "TopicPartition", (PyObject *)&TopicPartitionType); Py_INCREF(&ProducerType); PyModule_AddObject(m, "Producer", (PyObject *)&ProducerType); Py_INCREF(&ConsumerType); PyModule_AddObject(m, "Consumer", (PyObject *)&ConsumerType); Py_INCREF(&AdminType); PyModule_AddObject(m, "_AdminClientImpl", (PyObject *)&AdminType); AdminTypes_AddObjects(m); #if PY_VERSION_HEX >= 0x02070000 KafkaException = PyErr_NewExceptionWithDoc( "cimpl.KafkaException", "Kafka exception that wraps the :py:class:`KafkaError` " "class.\n" "\n" "Use ``exception.args[0]`` to extract the " ":py:class:`KafkaError` object\n" "\n", NULL, NULL); #else KafkaException = PyErr_NewException("cimpl.KafkaException", NULL, NULL); #endif Py_INCREF(KafkaException); PyModule_AddObject(m, "KafkaException", KafkaException); PyModule_AddIntConstant(m, "TIMESTAMP_NOT_AVAILABLE", RD_KAFKA_TIMESTAMP_NOT_AVAILABLE); PyModule_AddIntConstant(m, "TIMESTAMP_CREATE_TIME", RD_KAFKA_TIMESTAMP_CREATE_TIME); PyModule_AddIntConstant(m, "TIMESTAMP_LOG_APPEND_TIME", RD_KAFKA_TIMESTAMP_LOG_APPEND_TIME); PyModule_AddIntConstant(m, "OFFSET_BEGINNING", RD_KAFKA_OFFSET_BEGINNING); PyModule_AddIntConstant(m, "OFFSET_END", RD_KAFKA_OFFSET_END); PyModule_AddIntConstant(m, "OFFSET_STORED", RD_KAFKA_OFFSET_STORED); PyModule_AddIntConstant(m, "OFFSET_INVALID", RD_KAFKA_OFFSET_INVALID); return m; } #ifdef PY3 PyMODINIT_FUNC PyInit_cimpl (void) { return _init_cimpl(); } #else PyMODINIT_FUNC initcimpl (void) { _init_cimpl(); } #endif confluent-kafka-1.1.0/confluent_kafka/src/confluent_kafka.h0000644000076500000240000003023513446646122024114 0ustar ryanstaff00000000000000/** * Copyright 2016 Confluent Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include #include #include #include #include #ifdef _MSC_VER /* Windows */ #define CFL_PRId64 "I64d" #define CFL_PRId32 "I32d" #else /* C99 */ #include #define CFL_PRId64 PRId64 #define CFL_PRId32 PRId32 #endif /** * Minimum required librdkafka version. This is checked both during * build-time (just below) and runtime (see confluent_kafka.c). * Make sure to keep the MIN_RD_KAFKA_VERSION, MIN_VER_ERRSTR and #error * defines and strings in sync. */ #define MIN_RD_KAFKA_VERSION 0x01000000 #ifdef __APPLE__ #define MIN_VER_ERRSTR "confluent-kafka-python requires librdkafka v1.0.0 or later. Install the latest version of librdkafka from Homebrew by running `brew install librdkafka` or `brew upgrade librdkafka`" #else #define MIN_VER_ERRSTR "confluent-kafka-python requires librdkafka v1.0.0 or later. Install the latest version of librdkafka from the Confluent repositories, see http://docs.confluent.io/current/installation.html" #endif #if RD_KAFKA_VERSION < MIN_RD_KAFKA_VERSION #ifdef __APPLE__ #error "confluent-kafka-python requires librdkafka v1.0.0 or later. Install the latest version of librdkafka from Homebrew by running `brew install librdkafka` or `brew upgrade librdkafka`" #else #error "confluent-kafka-python requires librdkafka v1.0.0 or later. Install the latest version of librdkafka from the Confluent repositories, see http://docs.confluent.io/current/installation.html" #endif #endif #if PY_MAJOR_VERSION >= 3 #define PY3 #include #if PY_MINOR_VERSION >= 7 #define WITH_PY_TSS #endif #endif /** * librdkafka feature detection */ #ifdef RD_KAFKA_V_TIMESTAMP #define HAVE_PRODUCEV 1 /* rd_kafka_producev() */ #endif /**************************************************************************** * * * Python 2 & 3 portability * * Binary data (we call it cfl_PyBin): * Python 2: string * Python 3: bytes * * Unicode Strings (we call it cfl_PyUnistr): * Python 2: unicode * Python 3: strings * ****************************************************************************/ #ifdef PY3 /* Python 3 */ /** * @brief Binary type, use as cfl_PyBin(_X(A,B)) where _X() is the type-less * suffix of a PyBytes/Str_X() function */ #define cfl_PyBin(X) PyBytes ## X /** * @brief Unicode type, same usage as PyBin() */ #define cfl_PyUnistr(X) PyUnicode ## X /** * @returns Unicode Python object as char * in UTF-8 encoding * @param uobjp might be set to NULL or a new object reference (depending * on Python version) which needs to be cleaned up with * Py_XDECREF() after finished use of the returned string. */ static __inline const char * cfl_PyUnistr_AsUTF8 (PyObject *o, PyObject **uobjp) { *uobjp = NULL; /* No intermediary object needed in Py3 */ return PyUnicode_AsUTF8(o); } /** * @returns Unicode Python string object */ #define cfl_PyObject_Unistr(X) PyObject_Str(X) #else /* Python 2 */ /* See comments above */ #define cfl_PyBin(X) PyString ## X #define cfl_PyUnistr(X) PyUnicode ## X /** * @returns NULL if object \p can't be represented as UTF8, else a temporary * char string with a lifetime equal of \p o and \p uobjp */ static __inline const char * cfl_PyUnistr_AsUTF8 (PyObject *o, PyObject **uobjp) { if (!PyUnicode_Check(o)) { PyObject *uo; if (!(uo = PyUnicode_FromObject(o))) { *uobjp = NULL; return NULL; } /*UTF8 intermediary object on Py2*/ *uobjp = PyUnicode_AsUTF8String(o); Py_DECREF(uo); } else { /*UTF8 intermediary object on Py2*/ *uobjp = PyUnicode_AsUTF8String(o); } if (!*uobjp) return NULL; return PyBytes_AsString(*uobjp); } #define cfl_PyObject_Unistr(X) PyObject_Unicode(X) #endif /**************************************************************************** * * * KafkaError * * * * ****************************************************************************/ extern PyObject *KafkaException; PyObject *KafkaError_new0 (rd_kafka_resp_err_t err, const char *fmt, ...); PyObject *KafkaError_new_or_None (rd_kafka_resp_err_t err, const char *str); /** * @brief Raise an exception using KafkaError. * \p err and and \p ... (string representation of error) is set on the returned * KafkaError object. */ #define cfl_PyErr_Format(err,...) do { \ PyObject *_eo = KafkaError_new0(err, __VA_ARGS__); \ PyErr_SetObject(KafkaException, _eo); \ } while (0) /**************************************************************************** * * * Common instance handle for both Producer and Consumer * * * * ****************************************************************************/ typedef struct { PyObject_HEAD rd_kafka_t *rk; PyObject *error_cb; PyObject *throttle_cb; PyObject *stats_cb; int initiated; /* Thread-Local-Storage key */ #ifdef WITH_PY_TSS Py_tss_t tlskey; #else int tlskey; #endif rd_kafka_type_t type; /* Producer or consumer */ PyObject *logger; union { /** * Producer */ struct { PyObject *default_dr_cb; int dr_only_error; /**< delivery.report.only.error */ } Producer; /** * Consumer */ struct { int rebalance_assigned; /* Rebalance: Callback performed assign() call.*/ PyObject *on_assign; /* Rebalance: on_assign callback */ PyObject *on_revoke; /* Rebalance: on_revoke callback */ PyObject *on_commit; /* Commit callback */ rd_kafka_queue_t *rkqu; /* Consumer queue */ } Consumer; } u; } Handle; void Handle_clear (Handle *h); int Handle_traverse (Handle *h, visitproc visit, void *arg); /** * @brief Current thread's state for "blocking" calls to librdkafka. */ typedef struct { PyThreadState *thread_state; int crashed; /* Callback crashed */ } CallState; /** * @brief Initialiase a CallState and unlock the GIL prior to a * possibly blocking external call. */ void CallState_begin (Handle *h, CallState *cs); /** * @brief Relock the GIL after external call is done, remove TLS state. * @returns 0 if a Python signal was raised or a callback crashed, else 1. */ int CallState_end (Handle *h, CallState *cs); /** * @brief Get the current thread's CallState and re-locks the GIL. */ CallState *CallState_get (Handle *h); /** * @brief Un-locks the GIL to resume blocking external call. */ void CallState_resume (CallState *cs); /** * @brief Indicate that call crashed. */ void CallState_crash (CallState *cs); /** * @brief Python 3 renamed the internal PyInt type to PyLong, but the * type is still exposed as 'int' in Python. * We use the (cfl_)PyInt name for both Python 2 and 3 to mean an int, * assuming it will be at least 31 bits+signed on all platforms. */ #ifdef PY3 #define cfl_PyInt_Check(o) PyLong_Check(o) #define cfl_PyInt_AsInt(o) (int)PyLong_AsLong(o) #define cfl_PyInt_FromInt(v) PyLong_FromLong(v) #else #define cfl_PyInt_Check(o) PyInt_Check(o) #define cfl_PyInt_AsInt(o) (int)PyInt_AsLong(o) #define cfl_PyInt_FromInt(v) PyInt_FromLong(v) #endif PyObject *cfl_PyObject_lookup (const char *modulename, const char *typename); void cfl_PyDict_SetString (PyObject *dict, const char *name, const char *val); void cfl_PyDict_SetInt (PyObject *dict, const char *name, int val); int cfl_PyObject_SetString (PyObject *o, const char *name, const char *val); int cfl_PyObject_SetInt (PyObject *o, const char *name, int val); int cfl_PyObject_GetAttr (PyObject *object, const char *attr_name, PyObject **valp, const PyTypeObject *py_type, int required); int cfl_PyObject_GetInt (PyObject *object, const char *attr_name, int *valp, int defval, int required); int cfl_PyObject_GetString (PyObject *object, const char *attr_name, char **valp, const char *defval, int required); int cfl_PyBool_get (PyObject *object, const char *name, int *valp); PyObject *cfl_int32_array_to_py_list (const int32_t *arr, size_t cnt); /**************************************************************************** * * * TopicPartition * * * * ****************************************************************************/ typedef struct { PyObject_HEAD char *topic; int partition; int64_t offset; PyObject *error; } TopicPartition; extern PyTypeObject TopicPartitionType; /**************************************************************************** * * * Common * * * * ****************************************************************************/ #define PY_RD_KAFKA_ADMIN 100 /* There is no Admin client type in librdkafka, * so we use the producer type for now, * but we need to differentiate between a * proper producer and an admin client in the * python code in some places. */ rd_kafka_conf_t *common_conf_setup (rd_kafka_type_t ktype, Handle *h, PyObject *args, PyObject *kwargs); PyObject *c_parts_to_py (const rd_kafka_topic_partition_list_t *c_parts); rd_kafka_topic_partition_list_t *py_to_c_parts (PyObject *plist); PyObject *list_topics (Handle *self, PyObject *args, PyObject *kwargs); extern const char list_topics_doc[]; #ifdef RD_KAFKA_V_HEADERS rd_kafka_headers_t *py_headers_to_c (PyObject *hdrs); PyObject *c_headers_to_py (rd_kafka_headers_t *headers); #endif /**************************************************************************** * * * Message * * * * ****************************************************************************/ /** * @brief confluent_kafka.Message object */ typedef struct { PyObject_HEAD PyObject *topic; PyObject *value; PyObject *key; PyObject *headers; #ifdef RD_KAFKA_V_HEADERS rd_kafka_headers_t *c_headers; #endif PyObject *error; int32_t partition; int64_t offset; int64_t timestamp; rd_kafka_timestamp_type_t tstype; } Message; extern PyTypeObject MessageType; PyObject *Message_new0 (const Handle *handle, const rd_kafka_message_t *rkm); PyObject *Message_error (Message *self, PyObject *ignore); /**************************************************************************** * * * Producer * * * * ****************************************************************************/ extern PyTypeObject ProducerType; /**************************************************************************** * * * Consumer * * * * ****************************************************************************/ extern PyTypeObject ConsumerType; /**************************************************************************** * * * AdminClient types * * * * ****************************************************************************/ typedef struct { PyObject_HEAD char *topic; int num_partitions; int replication_factor; PyObject *replica_assignment; /**< list */ PyObject *config; /**< dict */ } NewTopic; extern PyTypeObject NewTopicType; typedef struct { PyObject_HEAD char *topic; int new_total_count; PyObject *replica_assignment; } NewPartitions; extern PyTypeObject NewPartitionsType; int AdminTypes_Ready (void); void AdminTypes_AddObjects (PyObject *m); /**************************************************************************** * * * AdminClient * * * * ****************************************************************************/ extern PyTypeObject AdminType; confluent-kafka-1.1.0/confluent_kafka.egg-info/0000755000076500000240000000000013513111321021470 5ustar ryanstaff00000000000000confluent-kafka-1.1.0/confluent_kafka.egg-info/PKG-INFO0000644000076500000240000000051313513111321022564 0ustar ryanstaff00000000000000Metadata-Version: 2.1 Name: confluent-kafka Version: 1.1.0 Summary: Confluent's Python client for Apache Kafka Home-page: https://github.com/confluentinc/confluent-kafka-python Author: Confluent Inc Author-email: support@confluent.io License: UNKNOWN Description: UNKNOWN Platform: UNKNOWN Provides-Extra: dev Provides-Extra: avro confluent-kafka-1.1.0/confluent_kafka.egg-info/SOURCES.txt0000644000076500000240000000172113513111321023355 0ustar ryanstaff00000000000000LICENSE.txt MANIFEST.in README.md setup.py test-requirements.txt confluent_kafka/__init__.py confluent_kafka.egg-info/PKG-INFO confluent_kafka.egg-info/SOURCES.txt confluent_kafka.egg-info/dependency_links.txt confluent_kafka.egg-info/requires.txt confluent_kafka.egg-info/top_level.txt confluent_kafka/admin/__init__.py confluent_kafka/avro/__init__.py confluent_kafka/avro/cached_schema_registry_client.py confluent_kafka/avro/error.py confluent_kafka/avro/load.py confluent_kafka/avro/serializer/__init__.py confluent_kafka/avro/serializer/message_serializer.py confluent_kafka/kafkatest/__init__.py confluent_kafka/kafkatest/verifiable_client.py confluent_kafka/kafkatest/verifiable_consumer.py confluent_kafka/kafkatest/verifiable_producer.py confluent_kafka/src/Admin.c confluent_kafka/src/AdminTypes.c confluent_kafka/src/Consumer.c confluent_kafka/src/Metadata.c confluent_kafka/src/Producer.c confluent_kafka/src/confluent_kafka.c confluent_kafka/src/confluent_kafka.hconfluent-kafka-1.1.0/confluent_kafka.egg-info/dependency_links.txt0000644000076500000240000000000113513111321025536 0ustar ryanstaff00000000000000 confluent-kafka-1.1.0/confluent_kafka.egg-info/requires.txt0000644000076500000240000000032413513111321024067 0ustar ryanstaff00000000000000 [:python_version < "3.2"] futures requests [:python_version < "3.4"] enum34 [avro] fastavro requests [avro:python_version < "3.0"] avro [avro:python_version > "3.0"] avro-python3 [dev] pytest==4.6.4 flake8 confluent-kafka-1.1.0/confluent_kafka.egg-info/top_level.txt0000644000076500000240000000002013513111321024212 0ustar ryanstaff00000000000000confluent_kafka confluent-kafka-1.1.0/setup.cfg0000644000076500000240000000004613513111321016465 0ustar ryanstaff00000000000000[egg_info] tag_build = tag_date = 0 confluent-kafka-1.1.0/setup.py0000755000076500000240000000354113513052761016400 0ustar ryanstaff00000000000000#!/usr/bin/env python import os from setuptools import setup, find_packages from distutils.core import Extension import platform INSTALL_REQUIRES = [ 'futures;python_version<"3.2"', 'enum34;python_version<"3.4"', 'requests;python_version<"3.2"' ] # On Un*x the library is linked as -lrdkafka, # while on windows we need the full librdkafka name. if platform.system() == 'Windows': librdkafka_libname = 'librdkafka' else: librdkafka_libname = 'rdkafka' module = Extension('confluent_kafka.cimpl', libraries=[librdkafka_libname], sources=['confluent_kafka/src/confluent_kafka.c', 'confluent_kafka/src/Producer.c', 'confluent_kafka/src/Consumer.c', 'confluent_kafka/src/Metadata.c', 'confluent_kafka/src/AdminTypes.c', 'confluent_kafka/src/Admin.c']) def get_install_requirements(path): content = open(os.path.join(os.path.dirname(__file__), path)).read() return [ req for req in content.split("\n") if req != '' and not req.startswith('#') ] setup(name='confluent-kafka', version='1.1.0', description='Confluent\'s Python client for Apache Kafka', author='Confluent Inc', author_email='support@confluent.io', url='https://github.com/confluentinc/confluent-kafka-python', ext_modules=[module], packages=find_packages(exclude=("tests", "tests.*")), data_files=[('', ['LICENSE.txt'])], install_requires=INSTALL_REQUIRES, extras_require={ 'avro': [ 'fastavro', 'requests', 'avro;python_version<"3.0"', 'avro-python3;python_version>"3.0"' ], 'dev': get_install_requirements("test-requirements.txt") }) confluent-kafka-1.1.0/test-requirements.txt0000644000076500000240000000002513513052761021116 0ustar ryanstaff00000000000000pytest==4.6.4 flake8