mqtree-1.0.6/README.md

mqtree: Index tree for MQTT topic filters
====================================================

mqtree is an Erlang NIF implementation of an N-ary tree that stores MQTT topic filters for efficient matching.

# System requirements

To compile mqtree you need:

- GNU Make.
- GCC.
- Erlang/OTP 17.5 or higher.

# Compiling

```
$ git clone https://github.com/processone/mqtree.git
$ cd mqtree
$ make
```

# API

## new/0
```erlang
-spec new() -> tree().
```
Creates a new tree. The tree is mutable, just like ETS, so there is no need to keep an updated version between calls. The created tree is destroyed when it gets garbage collected.

Complexity: `O(1)`.

**NOTE**: a registered tree (see [register/2](#register2)) is not subject to garbage collection until [unregister/1](#unregister1) is called **explicitly**.

## insert/2
```erlang
-spec insert(Tree :: tree(), Filter :: iodata()) -> ok.
```
Inserts `Filter` into `Tree` and increases its reference counter. The reference counter is increased every time the same filter is inserted into the tree and decreased when the filter is deleted, see [delete/2](#delete2).

Complexity: `O(H)` where `H` is the number of slashes (`/`) in `Filter`.

**NOTE**: no checks are performed on the filter being inserted: it's up to the caller to ensure the filter conforms to the MQTT specification.

## delete/2
```erlang
-spec delete(Tree :: tree(), Filter :: iodata()) -> ok.
```
Deletes `Filter` from `Tree` and decreases its reference counter. Nothing is done if the filter is not found in the tree.

Complexity: `O(H)` where `H` is the number of slashes (`/`) in `Filter`.

**NOTE**: no checks are performed on the filter being deleted: it's up to the caller to ensure the filter conforms to the MQTT specification.

## match/2
```erlang
-spec match(Tree :: tree(), Path :: iodata()) -> [binary()].
```
Finds the filters in `Tree` that match `Path` according to the MQTT specification.

Complexity: `O(2^H)` in the worst case, where `H` is the number of slashes (`/`) in `Path`. Note that the worst case is only reached when an attacker manages to store a massive amount of filters containing the `+` meta-symbol in the tree. The obvious protection is to restrict the filter depth. Another approach is to "deduplicate" filters during subscription registration, e.g. the filters `a/+`, `+/b` and `+/+` should be merged into the single filter `+/+`.

**NOTE**: no checks are performed on the path being matched: it's up to the caller to ensure the path conforms to the MQTT specification.

**NOTE**: any path starting with `$` won't match filters starting with `+` or `#`. This is in accordance with the MQTT specification.

## refc/2
```erlang
-spec refc(Tree :: tree(), Filter :: iodata()) -> non_neg_integer().
```
Returns the reference counter of `Filter` in `Tree`. In particular, zero (0) is returned if the filter is not found in the tree.

Complexity: `O(H)` where `H` is the number of slashes (`/`) in `Filter`.

**NOTE**: no checks are performed on the filter being searched: it's up to the caller to ensure the filter conforms to the MQTT specification.

## clear/1
```erlang
-spec clear(Tree :: tree()) -> ok.
```
Deletes all filters from `Tree`.

Complexity: `O(N)` where `N` is the number of filters in the tree.

## size/1
```erlang
-spec size(Tree :: tree()) -> non_neg_integer().
```
Returns the size of `Tree`, that is, the number of filters in the tree (irrespective of their reference counters).

Complexity: `O(N)` where `N` is the number of filters in the tree.

## is_empty/1
```erlang
-spec is_empty(Tree :: tree()) -> boolean().
```
Returns `true` if `Tree` holds no filters. Returns `false` otherwise.

Complexity: `O(1)`.

## register/2
```erlang
-spec register(RegName :: atom(), Tree :: tree()) -> ok.
```
Associates `RegName` with `Tree`. The tree then becomes available via a call to [whereis/1](#whereis1). Fails with a `badarg` exception if:

- `RegName` is already in use (even by the tree being registered)
- `RegName` is the atom `undefined`
- Either `RegName` or `Tree` has an invalid type

It is safe to register an already registered tree under another name: in this case the old name is freed automatically.

Complexity: `O(1)`.

**NOTE**: a registered tree is not subject to garbage collection. You must call [unregister/1](#unregister1) **explicitly** if you want the tree to be freed by the garbage collector.

## unregister/1
```erlang
-spec unregister(RegName :: atom()) -> ok.
```
Removes the registered name `RegName` associated with a tree. Fails with a `badarg` exception if `RegName` is not a registered name.

Complexity: `O(1)`.

## whereis/1
```erlang
-spec whereis(RegName :: atom()) -> Tree :: tree() | undefined.
```
Returns the `Tree` associated with the registered name `RegName`, or `undefined` if no tree is registered under that name.

Complexity: `O(1)`.
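# Example

A minimal usage sketch of the API described above. The registered name `my_tree` and the sample topics are arbitrary, and the order of the binaries returned by `match/2` is not specified:

```erlang
T = mqtree:new(),
ok = mqtree:insert(T, "a/b"),
ok = mqtree:insert(T, "a/+"),
ok = mqtree:insert(T, "#"),
ok = mqtree:insert(T, "a/b"),    %% same filter again: its reference counter becomes 2

mqtree:match(T, "a/b"),          %% [<<"#">>, <<"a/+">>, <<"a/b">>] (order unspecified)
mqtree:match(T, "a/c"),          %% [<<"#">>, <<"a/+">>]
mqtree:match(T, "$SYS/uptime"),  %% [] -- paths starting with $ don't match + or #

2 = mqtree:refc(T, "a/b"),
ok = mqtree:delete(T, "a/b"),
1 = mqtree:refc(T, "a/b"),

%% Registering keeps the tree from being garbage collected
%% and makes it reachable by name:
ok = mqtree:register(my_tree, T),
T = mqtree:whereis(my_tree),
ok = mqtree:unregister(my_tree).
```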
mqtree-1.0.6/rebar.config.script

%%%----------------------------------------------------------------------
%%% File    : rebar.config.script
%%% Author  : Evgeniy Khramtsov
%%% Purpose : Rebar build script. Compliant with rebar and rebar3.
%%% Created : 8 May 2013 by Evgeniy Khramtsov
%%%
%%% Copyright (C) 2002-2017 ProcessOne, SARL. All Rights Reserved.
%%%
%%% Licensed under the Apache License, Version 2.0 (the "License");
%%% you may not use this file except in compliance with the License.
%%% You may obtain a copy of the License at
%%%
%%%     http://www.apache.org/licenses/LICENSE-2.0
%%%
%%% Unless required by applicable law or agreed to in writing, software
%%% distributed under the License is distributed on an "AS IS" BASIS,
%%% WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
%%% See the License for the specific language governing permissions and
%%% limitations under the License.
%%% %%%---------------------------------------------------------------------- IsRebar3 = case erlang:function_exported(rebar3, main, 1) of true -> true; _ -> lists:keymember(mix, 1, application:loaded_applications()) end, ModCfg0 = fun(F, Cfg, [Key|Tail], Op, Default) -> {OldVal,PartCfg} = case lists:keytake(Key, 1, Cfg) of {value, {_, V1}, V2} -> {V1, V2}; false -> {if Tail == [] -> Default; true -> [] end, Cfg} end, case Tail of [] -> [{Key, Op(OldVal)} | PartCfg]; _ -> [{Key, F(F, OldVal, Tail, Op, Default)} | PartCfg] end end, ModCfg = fun(Cfg, Keys, Op, Default) -> ModCfg0(ModCfg0, Cfg, Keys, Op, Default) end, ModCfgS = fun(Cfg, Keys, Val) -> ModCfg0(ModCfg0, Cfg, Keys, fun(_V) -> Val end, "") end, FilterConfig = fun(F, Cfg, [{Path, true, ModFun, Default} | Tail]) -> F(F, ModCfg0(ModCfg0, Cfg, Path, ModFun, Default), Tail); (F, Cfg, [_ | Tail]) -> F(F, Cfg, Tail); (F, Cfg, []) -> Cfg end, AppendStr = fun(Append) -> fun("") -> Append; (Val) -> Val ++ " " ++ Append end end, AppendList = fun(Append) -> fun(Val) -> Val ++ Append end end, Rebar3DepsFilter = fun(DepsList) -> lists:map(fun({DepName,_, {git,_, {tag,Version}}}) -> {DepName, Version}; (Dep) -> Dep end, DepsList) end, GlobalDepsFilter = fun(Deps) -> DepNames = lists:map(fun({DepName, _, _}) -> DepName; ({DepName, _}) -> DepName end, Deps), lists:filtermap(fun(Dep) -> case code:lib_dir(Dep) of {error, _} -> {true,"Unable to locate dep '"++atom_to_list(Dep)++"' in system deps."}; _ -> false end end, DepNames) end, Rules = [ {[deps], IsRebar3, Rebar3DepsFilter, []}, {[plugins], IsRebar3, AppendList([rebar3_hex, pc]), []}, {[provider_hooks], IsRebar3, AppendList([{pre, [ {compile, {pc, compile}}, {clean, {pc, clean}} ]}]), []}, {[deps], os:getenv("USE_GLOBAL_DEPS") /= false, GlobalDepsFilter, []} ], Config = FilterConfig(FilterConfig, CONFIG, Rules), %io:format("Rules:~n~p~n~nCONFIG:~n~p~n~nConfig:~n~p~n", [Rules, CONFIG, Config]), Config. %% Local Variables: %% mode: erlang %% End: %% vim: set filetype=erlang tabstop=8: mqtree-1.0.6/CHANGELOG.md0000644000232200023220000000020013605316475015174 0ustar debalancedebalance# Version 1.0.6 * Updating p1_utils to version 1.0.17. * Fix repo url in README # Version 1.0.5 * Improve dialyzer handling mqtree-1.0.6/Makefile0000644000232200023220000000362513605316475015041 0ustar debalancedebalanceREBAR=./rebar all: src src: $(REBAR) get-deps compile clean: $(REBAR) clean distclean: clean rm -f config.status rm -f config.log rm -rf autom4te.cache rm -rf deps rm -rf ebin rm -rf priv rm -f vars.config rm -f compile_commands.json rm -rf dialyzer test: all mkdir -p .eunit/priv/lib cp priv/lib/mqtree.* .eunit/priv/lib/ $(REBAR) -v skip_deps=true eunit xref: all $(REBAR) skip_deps=true xref deps := $(wildcard deps/*/ebin) dialyzer/erlang.plt: @mkdir -p dialyzer @dialyzer --build_plt --output_plt dialyzer/erlang.plt \ -o dialyzer/erlang.log --apps kernel stdlib erts; \ status=$$? ; if [ $$status -ne 2 ]; then exit $$status; else exit 0; fi dialyzer/deps.plt: @mkdir -p dialyzer @dialyzer --build_plt --output_plt dialyzer/deps.plt \ -o dialyzer/deps.log $(deps); \ status=$$? ; if [ $$status -ne 2 ]; then exit $$status; else exit 0; fi dialyzer/mqtree.plt: @mkdir -p dialyzer @dialyzer --build_plt --output_plt dialyzer/mqtree.plt \ -o dialyzer/mqtree.log ebin; \ status=$$? ; if [ $$status -ne 2 ]; then exit $$status; else exit 0; fi erlang_plt: dialyzer/erlang.plt @dialyzer --plt dialyzer/erlang.plt --check_plt -o dialyzer/erlang.log; \ status=$$? 
; if [ $$status -ne 2 ]; then exit $$status; else exit 0; fi deps_plt: dialyzer/deps.plt @dialyzer --plt dialyzer/deps.plt --check_plt -o dialyzer/deps.log; \ status=$$? ; if [ $$status -ne 2 ]; then exit $$status; else exit 0; fi mqtree_plt: dialyzer/mqtree.plt @dialyzer --plt dialyzer/mqtree.plt --check_plt -o dialyzer/mqtree.log; \ status=$$? ; if [ $$status -ne 2 ]; then exit $$status; else exit 0; fi dialyzer: erlang_plt deps_plt mqtree_plt @dialyzer --plts dialyzer/*.plt --no_check_plt \ --get_warnings -o dialyzer/error.log ebin; \ status=$$? ; if [ $$status -ne 2 ]; then exit $$status; else exit 0; fi check-syntax: gcc -o nul -S ${CHK_SOURCES} .PHONY: clean src test all dialyzer erlang_plt deps_plt mqtree_plt mqtree-1.0.6/c_src/0000755000232200023220000000000013605316475014464 5ustar debalancedebalancemqtree-1.0.6/c_src/uthash.h0000644000232200023220000020755313605316475016145 0ustar debalancedebalance/* Copyright (c) 2003-2017, Troy D. Hanson http://troydhanson.github.com/uthash/ All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #ifndef UTHASH_H #define UTHASH_H #define UTHASH_VERSION 2.0.2 #include /* memcmp,strlen */ #include /* ptrdiff_t */ #include /* exit() */ /* These macros use decltype or the earlier __typeof GNU extension. As decltype is only available in newer compilers (VS2010 or gcc 4.3+ when compiling c++ source) this code uses whatever method is needed or, for VS2008 where neither is available, uses casting workarounds. 
*/ #if defined(_MSC_VER) /* MS compiler */ #if _MSC_VER >= 1600 && defined(__cplusplus) /* VS2010 or newer in C++ mode */ #define DECLTYPE(x) (decltype(x)) #else /* VS2008 or older (or VS2010 in C mode) */ #define NO_DECLTYPE #define DECLTYPE(x) #endif #elif defined(__BORLANDC__) || defined(__LCC__) || defined(__WATCOMC__) #define NO_DECLTYPE #define DECLTYPE(x) #else /* GNU, Sun and other compilers */ #define DECLTYPE(x) (__typeof(x)) #endif #ifdef NO_DECLTYPE #define DECLTYPE_ASSIGN(dst,src) \ do { \ char **_da_dst = (char**)(&(dst)); \ *_da_dst = (char*)(src); \ } while (0) #else #define DECLTYPE_ASSIGN(dst,src) \ do { \ (dst) = DECLTYPE(dst)(src); \ } while (0) #endif /* a number of the hash function use uint32_t which isn't defined on Pre VS2010 */ #if defined(_WIN32) #if defined(_MSC_VER) && _MSC_VER >= 1600 #include #elif defined(__WATCOMC__) || defined(__MINGW32__) || defined(__CYGWIN__) #include #else typedef unsigned int uint32_t; typedef unsigned char uint8_t; #endif #elif defined(__GNUC__) && !defined(__VXWORKS__) #include #else typedef unsigned int uint32_t; typedef unsigned char uint8_t; #endif #ifndef uthash_fatal #define uthash_fatal(msg) exit(-1) /* fatal error (out of memory,etc) */ #endif #ifndef uthash_malloc #define uthash_malloc(sz) malloc(sz) /* malloc fcn */ #endif #ifndef uthash_free #define uthash_free(ptr,sz) free(ptr) /* free fcn */ #endif #ifndef uthash_strlen #define uthash_strlen(s) strlen(s) #endif #ifndef uthash_memcmp #define uthash_memcmp(a,b,n) memcmp(a,b,n) #endif #ifndef uthash_noexpand_fyi #define uthash_noexpand_fyi(tbl) /* can be defined to log noexpand */ #endif #ifndef uthash_expand_fyi #define uthash_expand_fyi(tbl) /* can be defined to log expands */ #endif /* initial number of buckets */ #define HASH_INITIAL_NUM_BUCKETS 32U /* initial number of buckets */ #define HASH_INITIAL_NUM_BUCKETS_LOG2 5U /* lg2 of initial number of buckets */ #define HASH_BKT_CAPACITY_THRESH 10U /* expand when bucket count reaches */ /* calculate the element whose hash handle address is hhp */ #define ELMT_FROM_HH(tbl,hhp) ((void*)(((char*)(hhp)) - ((tbl)->hho))) /* calculate the hash handle from element address elp */ #define HH_FROM_ELMT(tbl,elp) ((UT_hash_handle *)(((char*)(elp)) + ((tbl)->hho))) #define HASH_VALUE(keyptr,keylen,hashv) \ do { \ HASH_FCN(keyptr, keylen, hashv); \ } while (0) #define HASH_FIND_BYHASHVALUE(hh,head,keyptr,keylen,hashval,out) \ do { \ (out) = NULL; \ if (head) { \ unsigned _hf_bkt; \ HASH_TO_BKT(hashval, (head)->hh.tbl->num_buckets, _hf_bkt); \ if (HASH_BLOOM_TEST((head)->hh.tbl, hashval) != 0) { \ HASH_FIND_IN_BKT((head)->hh.tbl, hh, (head)->hh.tbl->buckets[ _hf_bkt ], keyptr, keylen, hashval, out); \ } \ } \ } while (0) #define HASH_FIND(hh,head,keyptr,keylen,out) \ do { \ unsigned _hf_hashv; \ HASH_VALUE(keyptr, keylen, _hf_hashv); \ HASH_FIND_BYHASHVALUE(hh, head, keyptr, keylen, _hf_hashv, out); \ } while (0) #ifdef HASH_BLOOM #define HASH_BLOOM_BITLEN (1UL << HASH_BLOOM) #define HASH_BLOOM_BYTELEN (HASH_BLOOM_BITLEN/8UL) + (((HASH_BLOOM_BITLEN%8UL)!=0UL) ? 
1UL : 0UL) #define HASH_BLOOM_MAKE(tbl) \ do { \ (tbl)->bloom_nbits = HASH_BLOOM; \ (tbl)->bloom_bv = (uint8_t*)uthash_malloc(HASH_BLOOM_BYTELEN); \ if (!((tbl)->bloom_bv)) { uthash_fatal( "out of memory"); } \ memset((tbl)->bloom_bv, 0, HASH_BLOOM_BYTELEN); \ (tbl)->bloom_sig = HASH_BLOOM_SIGNATURE; \ } while (0) #define HASH_BLOOM_FREE(tbl) \ do { \ uthash_free((tbl)->bloom_bv, HASH_BLOOM_BYTELEN); \ } while (0) #define HASH_BLOOM_BITSET(bv,idx) (bv[(idx)/8U] |= (1U << ((idx)%8U))) #define HASH_BLOOM_BITTEST(bv,idx) (bv[(idx)/8U] & (1U << ((idx)%8U))) #define HASH_BLOOM_ADD(tbl,hashv) \ HASH_BLOOM_BITSET((tbl)->bloom_bv, (hashv & (uint32_t)((1ULL << (tbl)->bloom_nbits) - 1U))) #define HASH_BLOOM_TEST(tbl,hashv) \ HASH_BLOOM_BITTEST((tbl)->bloom_bv, (hashv & (uint32_t)((1ULL << (tbl)->bloom_nbits) - 1U))) #else #define HASH_BLOOM_MAKE(tbl) #define HASH_BLOOM_FREE(tbl) #define HASH_BLOOM_ADD(tbl,hashv) #define HASH_BLOOM_TEST(tbl,hashv) (1) #define HASH_BLOOM_BYTELEN 0U #endif #define HASH_MAKE_TABLE(hh,head) \ do { \ (head)->hh.tbl = (UT_hash_table*)uthash_malloc( \ sizeof(UT_hash_table)); \ if (!((head)->hh.tbl)) { uthash_fatal( "out of memory"); } \ memset((head)->hh.tbl, 0, sizeof(UT_hash_table)); \ (head)->hh.tbl->tail = &((head)->hh); \ (head)->hh.tbl->num_buckets = HASH_INITIAL_NUM_BUCKETS; \ (head)->hh.tbl->log2_num_buckets = HASH_INITIAL_NUM_BUCKETS_LOG2; \ (head)->hh.tbl->hho = (char*)(&(head)->hh) - (char*)(head); \ (head)->hh.tbl->buckets = (UT_hash_bucket*)uthash_malloc( \ HASH_INITIAL_NUM_BUCKETS*sizeof(struct UT_hash_bucket)); \ if (! (head)->hh.tbl->buckets) { uthash_fatal( "out of memory"); } \ memset((head)->hh.tbl->buckets, 0, \ HASH_INITIAL_NUM_BUCKETS*sizeof(struct UT_hash_bucket)); \ HASH_BLOOM_MAKE((head)->hh.tbl); \ (head)->hh.tbl->signature = HASH_SIGNATURE; \ } while (0) #define HASH_REPLACE_BYHASHVALUE_INORDER(hh,head,fieldname,keylen_in,hashval,add,replaced,cmpfcn) \ do { \ (replaced) = NULL; \ HASH_FIND_BYHASHVALUE(hh, head, &((add)->fieldname), keylen_in, hashval, replaced); \ if (replaced) { \ HASH_DELETE(hh, head, replaced); \ } \ HASH_ADD_KEYPTR_BYHASHVALUE_INORDER(hh, head, &((add)->fieldname), keylen_in, hashval, add, cmpfcn); \ } while (0) #define HASH_REPLACE_BYHASHVALUE(hh,head,fieldname,keylen_in,hashval,add,replaced) \ do { \ (replaced) = NULL; \ HASH_FIND_BYHASHVALUE(hh, head, &((add)->fieldname), keylen_in, hashval, replaced); \ if (replaced) { \ HASH_DELETE(hh, head, replaced); \ } \ HASH_ADD_KEYPTR_BYHASHVALUE(hh, head, &((add)->fieldname), keylen_in, hashval, add); \ } while (0) #define HASH_REPLACE(hh,head,fieldname,keylen_in,add,replaced) \ do { \ unsigned _hr_hashv; \ HASH_VALUE(&((add)->fieldname), keylen_in, _hr_hashv); \ HASH_REPLACE_BYHASHVALUE(hh, head, fieldname, keylen_in, _hr_hashv, add, replaced); \ } while (0) #define HASH_REPLACE_INORDER(hh,head,fieldname,keylen_in,add,replaced,cmpfcn) \ do { \ unsigned _hr_hashv; \ HASH_VALUE(&((add)->fieldname), keylen_in, _hr_hashv); \ HASH_REPLACE_BYHASHVALUE_INORDER(hh, head, fieldname, keylen_in, _hr_hashv, add, replaced, cmpfcn); \ } while (0) #define HASH_APPEND_LIST(hh, head, add) \ do { \ (add)->hh.next = NULL; \ (add)->hh.prev = ELMT_FROM_HH((head)->hh.tbl, (head)->hh.tbl->tail); \ (head)->hh.tbl->tail->next = (add); \ (head)->hh.tbl->tail = &((add)->hh); \ } while (0) #define HASH_ADD_KEYPTR_BYHASHVALUE_INORDER(hh,head,keyptr,keylen_in,hashval,add,cmpfcn) \ do { \ unsigned _ha_bkt; \ (add)->hh.hashv = (hashval); \ (add)->hh.key = (char*) (keyptr); \ (add)->hh.keylen = (unsigned) 
(keylen_in); \ if (!(head)) { \ (add)->hh.next = NULL; \ (add)->hh.prev = NULL; \ (head) = (add); \ HASH_MAKE_TABLE(hh, head); \ } else { \ void *_hs_iter = (head); \ (add)->hh.tbl = (head)->hh.tbl; \ do { \ if (cmpfcn(DECLTYPE(head)(_hs_iter), add) > 0) \ break; \ } while ((_hs_iter = HH_FROM_ELMT((head)->hh.tbl, _hs_iter)->next)); \ if (_hs_iter) { \ (add)->hh.next = _hs_iter; \ if (((add)->hh.prev = HH_FROM_ELMT((head)->hh.tbl, _hs_iter)->prev)) { \ HH_FROM_ELMT((head)->hh.tbl, (add)->hh.prev)->next = (add); \ } else { \ (head) = (add); \ } \ HH_FROM_ELMT((head)->hh.tbl, _hs_iter)->prev = (add); \ } else { \ HASH_APPEND_LIST(hh, head, add); \ } \ } \ (head)->hh.tbl->num_items++; \ HASH_TO_BKT(hashval, (head)->hh.tbl->num_buckets, _ha_bkt); \ HASH_ADD_TO_BKT((head)->hh.tbl->buckets[_ha_bkt], &(add)->hh); \ HASH_BLOOM_ADD((head)->hh.tbl, hashval); \ HASH_EMIT_KEY(hh, head, keyptr, keylen_in); \ HASH_FSCK(hh, head); \ } while (0) #define HASH_ADD_KEYPTR_INORDER(hh,head,keyptr,keylen_in,add,cmpfcn) \ do { \ unsigned _hs_hashv; \ HASH_VALUE(keyptr, keylen_in, _hs_hashv); \ HASH_ADD_KEYPTR_BYHASHVALUE_INORDER(hh, head, keyptr, keylen_in, _hs_hashv, add, cmpfcn); \ } while (0) #define HASH_ADD_BYHASHVALUE_INORDER(hh,head,fieldname,keylen_in,hashval,add,cmpfcn) \ HASH_ADD_KEYPTR_BYHASHVALUE_INORDER(hh, head, &((add)->fieldname), keylen_in, hashval, add, cmpfcn) #define HASH_ADD_INORDER(hh,head,fieldname,keylen_in,add,cmpfcn) \ HASH_ADD_KEYPTR_INORDER(hh, head, &((add)->fieldname), keylen_in, add, cmpfcn) #define HASH_ADD_KEYPTR_BYHASHVALUE(hh,head,keyptr,keylen_in,hashval,add) \ do { \ unsigned _ha_bkt; \ (add)->hh.hashv = (hashval); \ (add)->hh.key = (char*) (keyptr); \ (add)->hh.keylen = (unsigned) (keylen_in); \ if (!(head)) { \ (add)->hh.next = NULL; \ (add)->hh.prev = NULL; \ (head) = (add); \ HASH_MAKE_TABLE(hh, head); \ } else { \ (add)->hh.tbl = (head)->hh.tbl; \ HASH_APPEND_LIST(hh, head, add); \ } \ (head)->hh.tbl->num_items++; \ HASH_TO_BKT(hashval, (head)->hh.tbl->num_buckets, _ha_bkt); \ HASH_ADD_TO_BKT((head)->hh.tbl->buckets[_ha_bkt], &(add)->hh); \ HASH_BLOOM_ADD((head)->hh.tbl, hashval); \ HASH_EMIT_KEY(hh, head, keyptr, keylen_in); \ HASH_FSCK(hh, head); \ } while (0) #define HASH_ADD_KEYPTR(hh,head,keyptr,keylen_in,add) \ do { \ unsigned _ha_hashv; \ HASH_VALUE(keyptr, keylen_in, _ha_hashv); \ HASH_ADD_KEYPTR_BYHASHVALUE(hh, head, keyptr, keylen_in, _ha_hashv, add); \ } while (0) #define HASH_ADD_BYHASHVALUE(hh,head,fieldname,keylen_in,hashval,add) \ HASH_ADD_KEYPTR_BYHASHVALUE(hh, head, &((add)->fieldname), keylen_in, hashval, add) #define HASH_ADD(hh,head,fieldname,keylen_in,add) \ HASH_ADD_KEYPTR(hh, head, &((add)->fieldname), keylen_in, add) #define HASH_TO_BKT(hashv,num_bkts,bkt) \ do { \ bkt = ((hashv) & ((num_bkts) - 1U)); \ } while (0) /* delete "delptr" from the hash table. * "the usual" patch-up process for the app-order doubly-linked-list. * The use of _hd_hh_del below deserves special explanation. * These used to be expressed using (delptr) but that led to a bug * if someone used the same symbol for the head and deletee, like * HASH_DELETE(hh,users,users); * We want that to work, but by changing the head (users) below * we were forfeiting our ability to further refer to the deletee (users) * in the patch-up process. Solution: use scratch space to * copy the deletee pointer, then the latter references are via that * scratch pointer rather than through the repointed (users) symbol. 
*/ #define HASH_DELETE(hh,head,delptr) \ do { \ struct UT_hash_handle *_hd_hh_del; \ if ( ((delptr)->hh.prev == NULL) && ((delptr)->hh.next == NULL) ) { \ uthash_free((head)->hh.tbl->buckets, \ (head)->hh.tbl->num_buckets*sizeof(struct UT_hash_bucket) ); \ HASH_BLOOM_FREE((head)->hh.tbl); \ uthash_free((head)->hh.tbl, sizeof(UT_hash_table)); \ head = NULL; \ } else { \ unsigned _hd_bkt; \ _hd_hh_del = &((delptr)->hh); \ if ((delptr) == ELMT_FROM_HH((head)->hh.tbl,(head)->hh.tbl->tail)) { \ (head)->hh.tbl->tail = \ (UT_hash_handle*)((ptrdiff_t)((delptr)->hh.prev) + \ (head)->hh.tbl->hho); \ } \ if ((delptr)->hh.prev != NULL) { \ ((UT_hash_handle*)((ptrdiff_t)((delptr)->hh.prev) + \ (head)->hh.tbl->hho))->next = (delptr)->hh.next; \ } else { \ DECLTYPE_ASSIGN(head,(delptr)->hh.next); \ } \ if (_hd_hh_del->next != NULL) { \ ((UT_hash_handle*)((ptrdiff_t)_hd_hh_del->next + \ (head)->hh.tbl->hho))->prev = \ _hd_hh_del->prev; \ } \ HASH_TO_BKT( _hd_hh_del->hashv, (head)->hh.tbl->num_buckets, _hd_bkt); \ HASH_DEL_IN_BKT(hh,(head)->hh.tbl->buckets[_hd_bkt], _hd_hh_del); \ (head)->hh.tbl->num_items--; \ } \ HASH_FSCK(hh,head); \ } while (0) /* convenience forms of HASH_FIND/HASH_ADD/HASH_DEL */ #define HASH_FIND_STR(head,findstr,out) \ HASH_FIND(hh,head,findstr,(unsigned)uthash_strlen(findstr),out) #define HASH_ADD_STR(head,strfield,add) \ HASH_ADD(hh,head,strfield[0],(unsigned)uthash_strlen(add->strfield),add) #define HASH_REPLACE_STR(head,strfield,add,replaced) \ HASH_REPLACE(hh,head,strfield[0],(unsigned)uthash_strlen(add->strfield),add,replaced) #define HASH_FIND_INT(head,findint,out) \ HASH_FIND(hh,head,findint,sizeof(int),out) #define HASH_ADD_INT(head,intfield,add) \ HASH_ADD(hh,head,intfield,sizeof(int),add) #define HASH_REPLACE_INT(head,intfield,add,replaced) \ HASH_REPLACE(hh,head,intfield,sizeof(int),add,replaced) #define HASH_FIND_PTR(head,findptr,out) \ HASH_FIND(hh,head,findptr,sizeof(void *),out) #define HASH_ADD_PTR(head,ptrfield,add) \ HASH_ADD(hh,head,ptrfield,sizeof(void *),add) #define HASH_REPLACE_PTR(head,ptrfield,add,replaced) \ HASH_REPLACE(hh,head,ptrfield,sizeof(void *),add,replaced) #define HASH_DEL(head,delptr) \ HASH_DELETE(hh,head,delptr) /* HASH_FSCK checks hash integrity on every add/delete when HASH_DEBUG is defined. * This is for uthash developer only; it compiles away if HASH_DEBUG isn't defined. */ #ifdef HASH_DEBUG #define HASH_OOPS(...) 
do { fprintf(stderr,__VA_ARGS__); exit(-1); } while (0) #define HASH_FSCK(hh,head) \ do { \ struct UT_hash_handle *_thh; \ if (head) { \ unsigned _bkt_i; \ unsigned _count; \ char *_prev; \ _count = 0; \ for( _bkt_i = 0; _bkt_i < (head)->hh.tbl->num_buckets; _bkt_i++) { \ unsigned _bkt_count = 0; \ _thh = (head)->hh.tbl->buckets[_bkt_i].hh_head; \ _prev = NULL; \ while (_thh) { \ if (_prev != (char*)(_thh->hh_prev)) { \ HASH_OOPS("invalid hh_prev %p, actual %p\n", \ _thh->hh_prev, _prev ); \ } \ _bkt_count++; \ _prev = (char*)(_thh); \ _thh = _thh->hh_next; \ } \ _count += _bkt_count; \ if ((head)->hh.tbl->buckets[_bkt_i].count != _bkt_count) { \ HASH_OOPS("invalid bucket count %u, actual %u\n", \ (head)->hh.tbl->buckets[_bkt_i].count, _bkt_count); \ } \ } \ if (_count != (head)->hh.tbl->num_items) { \ HASH_OOPS("invalid hh item count %u, actual %u\n", \ (head)->hh.tbl->num_items, _count ); \ } \ /* traverse hh in app order; check next/prev integrity, count */ \ _count = 0; \ _prev = NULL; \ _thh = &(head)->hh; \ while (_thh) { \ _count++; \ if (_prev !=(char*)(_thh->prev)) { \ HASH_OOPS("invalid prev %p, actual %p\n", \ _thh->prev, _prev ); \ } \ _prev = (char*)ELMT_FROM_HH((head)->hh.tbl, _thh); \ _thh = ( _thh->next ? (UT_hash_handle*)((char*)(_thh->next) + \ (head)->hh.tbl->hho) : NULL ); \ } \ if (_count != (head)->hh.tbl->num_items) { \ HASH_OOPS("invalid app item count %u, actual %u\n", \ (head)->hh.tbl->num_items, _count ); \ } \ } \ } while (0) #else #define HASH_FSCK(hh,head) #endif /* When compiled with -DHASH_EMIT_KEYS, length-prefixed keys are emitted to * the descriptor to which this macro is defined for tuning the hash function. * The app can #include to get the prototype for write(2). */ #ifdef HASH_EMIT_KEYS #define HASH_EMIT_KEY(hh,head,keyptr,fieldlen) \ do { \ unsigned _klen = fieldlen; \ write(HASH_EMIT_KEYS, &_klen, sizeof(_klen)); \ write(HASH_EMIT_KEYS, keyptr, (unsigned long)fieldlen); \ } while (0) #else #define HASH_EMIT_KEY(hh,head,keyptr,fieldlen) #endif /* default to Jenkin's hash unless overridden e.g. DHASH_FUNCTION=HASH_SAX */ #ifdef HASH_FUNCTION #define HASH_FCN HASH_FUNCTION #else #define HASH_FCN HASH_JEN #endif /* The Bernstein hash function, used in Perl prior to v5.6. Note (x<<5+x)=x*33. 
*/ #define HASH_BER(key,keylen,hashv) \ do { \ unsigned _hb_keylen=(unsigned)keylen; \ const unsigned char *_hb_key=(const unsigned char*)(key); \ (hashv) = 0; \ while (_hb_keylen-- != 0U) { \ (hashv) = (((hashv) << 5) + (hashv)) + *_hb_key++; \ } \ } while (0) /* SAX/FNV/OAT/JEN hash functions are macro variants of those listed at * http://eternallyconfuzzled.com/tuts/algorithms/jsw_tut_hashing.aspx */ #define HASH_SAX(key,keylen,hashv) \ do { \ unsigned _sx_i; \ const unsigned char *_hs_key=(const unsigned char*)(key); \ hashv = 0; \ for(_sx_i=0; _sx_i < keylen; _sx_i++) { \ hashv ^= (hashv << 5) + (hashv >> 2) + _hs_key[_sx_i]; \ } \ } while (0) /* FNV-1a variation */ #define HASH_FNV(key,keylen,hashv) \ do { \ unsigned _fn_i; \ const unsigned char *_hf_key=(const unsigned char*)(key); \ hashv = 2166136261U; \ for(_fn_i=0; _fn_i < keylen; _fn_i++) { \ hashv = hashv ^ _hf_key[_fn_i]; \ hashv = hashv * 16777619U; \ } \ } while (0) #define HASH_OAT(key,keylen,hashv) \ do { \ unsigned _ho_i; \ const unsigned char *_ho_key=(const unsigned char*)(key); \ hashv = 0; \ for(_ho_i=0; _ho_i < keylen; _ho_i++) { \ hashv += _ho_key[_ho_i]; \ hashv += (hashv << 10); \ hashv ^= (hashv >> 6); \ } \ hashv += (hashv << 3); \ hashv ^= (hashv >> 11); \ hashv += (hashv << 15); \ } while (0) #define HASH_JEN_MIX(a,b,c) \ do { \ a -= b; a -= c; a ^= ( c >> 13 ); \ b -= c; b -= a; b ^= ( a << 8 ); \ c -= a; c -= b; c ^= ( b >> 13 ); \ a -= b; a -= c; a ^= ( c >> 12 ); \ b -= c; b -= a; b ^= ( a << 16 ); \ c -= a; c -= b; c ^= ( b >> 5 ); \ a -= b; a -= c; a ^= ( c >> 3 ); \ b -= c; b -= a; b ^= ( a << 10 ); \ c -= a; c -= b; c ^= ( b >> 15 ); \ } while (0) #define HASH_JEN(key,keylen,hashv) \ do { \ unsigned _hj_i,_hj_j,_hj_k; \ unsigned const char *_hj_key=(unsigned const char*)(key); \ hashv = 0xfeedbeefu; \ _hj_i = _hj_j = 0x9e3779b9u; \ _hj_k = (unsigned)(keylen); \ while (_hj_k >= 12U) { \ _hj_i += (_hj_key[0] + ( (unsigned)_hj_key[1] << 8 ) \ + ( (unsigned)_hj_key[2] << 16 ) \ + ( (unsigned)_hj_key[3] << 24 ) ); \ _hj_j += (_hj_key[4] + ( (unsigned)_hj_key[5] << 8 ) \ + ( (unsigned)_hj_key[6] << 16 ) \ + ( (unsigned)_hj_key[7] << 24 ) ); \ hashv += (_hj_key[8] + ( (unsigned)_hj_key[9] << 8 ) \ + ( (unsigned)_hj_key[10] << 16 ) \ + ( (unsigned)_hj_key[11] << 24 ) ); \ \ HASH_JEN_MIX(_hj_i, _hj_j, hashv); \ \ _hj_key += 12; \ _hj_k -= 12U; \ } \ hashv += (unsigned)(keylen); \ switch ( _hj_k ) { \ case 11: hashv += ( (unsigned)_hj_key[10] << 24 ); /* FALLTHROUGH */ \ case 10: hashv += ( (unsigned)_hj_key[9] << 16 ); /* FALLTHROUGH */ \ case 9: hashv += ( (unsigned)_hj_key[8] << 8 ); /* FALLTHROUGH */ \ case 8: _hj_j += ( (unsigned)_hj_key[7] << 24 ); /* FALLTHROUGH */ \ case 7: _hj_j += ( (unsigned)_hj_key[6] << 16 ); /* FALLTHROUGH */ \ case 6: _hj_j += ( (unsigned)_hj_key[5] << 8 ); /* FALLTHROUGH */ \ case 5: _hj_j += _hj_key[4]; /* FALLTHROUGH */ \ case 4: _hj_i += ( (unsigned)_hj_key[3] << 24 ); /* FALLTHROUGH */ \ case 3: _hj_i += ( (unsigned)_hj_key[2] << 16 ); /* FALLTHROUGH */ \ case 2: _hj_i += ( (unsigned)_hj_key[1] << 8 ); /* FALLTHROUGH */ \ case 1: _hj_i += _hj_key[0]; \ } \ HASH_JEN_MIX(_hj_i, _hj_j, hashv); \ } while (0) /* The Paul Hsieh hash function */ #undef get16bits #if (defined(__GNUC__) && defined(__i386__)) || defined(__WATCOMC__) \ || defined(_MSC_VER) || defined (__BORLANDC__) || defined (__TURBOC__) #define get16bits(d) (*((const uint16_t *) (d))) #endif #if !defined (get16bits) #define get16bits(d) ((((uint32_t)(((const uint8_t *)(d))[1])) << 8) \ +(uint32_t)(((const uint8_t 
*)(d))[0]) ) #endif #define HASH_SFH(key,keylen,hashv) \ do { \ unsigned const char *_sfh_key=(unsigned const char*)(key); \ uint32_t _sfh_tmp, _sfh_len = (uint32_t)keylen; \ \ unsigned _sfh_rem = _sfh_len & 3U; \ _sfh_len >>= 2; \ hashv = 0xcafebabeu; \ \ /* Main loop */ \ for (;_sfh_len > 0U; _sfh_len--) { \ hashv += get16bits (_sfh_key); \ _sfh_tmp = ((uint32_t)(get16bits (_sfh_key+2)) << 11) ^ hashv; \ hashv = (hashv << 16) ^ _sfh_tmp; \ _sfh_key += 2U*sizeof (uint16_t); \ hashv += hashv >> 11; \ } \ \ /* Handle end cases */ \ switch (_sfh_rem) { \ case 3: hashv += get16bits (_sfh_key); \ hashv ^= hashv << 16; \ hashv ^= (uint32_t)(_sfh_key[sizeof (uint16_t)]) << 18; \ hashv += hashv >> 11; \ break; \ case 2: hashv += get16bits (_sfh_key); \ hashv ^= hashv << 11; \ hashv += hashv >> 17; \ break; \ case 1: hashv += *_sfh_key; \ hashv ^= hashv << 10; \ hashv += hashv >> 1; \ } \ \ /* Force "avalanching" of final 127 bits */ \ hashv ^= hashv << 3; \ hashv += hashv >> 5; \ hashv ^= hashv << 4; \ hashv += hashv >> 17; \ hashv ^= hashv << 25; \ hashv += hashv >> 6; \ } while (0) #ifdef HASH_USING_NO_STRICT_ALIASING /* The MurmurHash exploits some CPU's (x86,x86_64) tolerance for unaligned reads. * For other types of CPU's (e.g. Sparc) an unaligned read causes a bus error. * MurmurHash uses the faster approach only on CPU's where we know it's safe. * * Note the preprocessor built-in defines can be emitted using: * * gcc -m64 -dM -E - < /dev/null (on gcc) * cc -## a.c (where a.c is a simple test file) (Sun Studio) */ #if (defined(__i386__) || defined(__x86_64__) || defined(_M_IX86)) #define MUR_GETBLOCK(p,i) p[i] #else /* non intel */ #define MUR_PLUS0_ALIGNED(p) (((unsigned long)p & 3UL) == 0UL) #define MUR_PLUS1_ALIGNED(p) (((unsigned long)p & 3UL) == 1UL) #define MUR_PLUS2_ALIGNED(p) (((unsigned long)p & 3UL) == 2UL) #define MUR_PLUS3_ALIGNED(p) (((unsigned long)p & 3UL) == 3UL) #define WP(p) ((uint32_t*)((unsigned long)(p) & ~3UL)) #if (defined(__BIG_ENDIAN__) || defined(SPARC) || defined(__ppc__) || defined(__ppc64__)) #define MUR_THREE_ONE(p) ((((*WP(p))&0x00ffffff) << 8) | (((*(WP(p)+1))&0xff000000) >> 24)) #define MUR_TWO_TWO(p) ((((*WP(p))&0x0000ffff) <<16) | (((*(WP(p)+1))&0xffff0000) >> 16)) #define MUR_ONE_THREE(p) ((((*WP(p))&0x000000ff) <<24) | (((*(WP(p)+1))&0xffffff00) >> 8)) #else /* assume little endian non-intel */ #define MUR_THREE_ONE(p) ((((*WP(p))&0xffffff00) >> 8) | (((*(WP(p)+1))&0x000000ff) << 24)) #define MUR_TWO_TWO(p) ((((*WP(p))&0xffff0000) >>16) | (((*(WP(p)+1))&0x0000ffff) << 16)) #define MUR_ONE_THREE(p) ((((*WP(p))&0xff000000) >>24) | (((*(WP(p)+1))&0x00ffffff) << 8)) #endif #define MUR_GETBLOCK(p,i) (MUR_PLUS0_ALIGNED(p) ? ((p)[i]) : \ (MUR_PLUS1_ALIGNED(p) ? MUR_THREE_ONE(p) : \ (MUR_PLUS2_ALIGNED(p) ? 
MUR_TWO_TWO(p) : \ MUR_ONE_THREE(p)))) #endif #define MUR_ROTL32(x,r) (((x) << (r)) | ((x) >> (32 - (r)))) #define MUR_FMIX(_h) \ do { \ _h ^= _h >> 16; \ _h *= 0x85ebca6bu; \ _h ^= _h >> 13; \ _h *= 0xc2b2ae35u; \ _h ^= _h >> 16; \ } while (0) #define HASH_MUR(key,keylen,hashv) \ do { \ const uint8_t *_mur_data = (const uint8_t*)(key); \ const int _mur_nblocks = (int)(keylen) / 4; \ uint32_t _mur_h1 = 0xf88D5353u; \ uint32_t _mur_c1 = 0xcc9e2d51u; \ uint32_t _mur_c2 = 0x1b873593u; \ uint32_t _mur_k1 = 0; \ const uint8_t *_mur_tail; \ const uint32_t *_mur_blocks = (const uint32_t*)(_mur_data+(_mur_nblocks*4)); \ int _mur_i; \ for(_mur_i = -_mur_nblocks; _mur_i!=0; _mur_i++) { \ _mur_k1 = MUR_GETBLOCK(_mur_blocks,_mur_i); \ _mur_k1 *= _mur_c1; \ _mur_k1 = MUR_ROTL32(_mur_k1,15); \ _mur_k1 *= _mur_c2; \ \ _mur_h1 ^= _mur_k1; \ _mur_h1 = MUR_ROTL32(_mur_h1,13); \ _mur_h1 = (_mur_h1*5U) + 0xe6546b64u; \ } \ _mur_tail = (const uint8_t*)(_mur_data + (_mur_nblocks*4)); \ _mur_k1=0; \ switch((keylen) & 3U) { \ case 3: _mur_k1 ^= (uint32_t)_mur_tail[2] << 16; /* FALLTHROUGH */ \ case 2: _mur_k1 ^= (uint32_t)_mur_tail[1] << 8; /* FALLTHROUGH */ \ case 1: _mur_k1 ^= (uint32_t)_mur_tail[0]; \ _mur_k1 *= _mur_c1; \ _mur_k1 = MUR_ROTL32(_mur_k1,15); \ _mur_k1 *= _mur_c2; \ _mur_h1 ^= _mur_k1; \ } \ _mur_h1 ^= (uint32_t)(keylen); \ MUR_FMIX(_mur_h1); \ hashv = _mur_h1; \ } while (0) #endif /* HASH_USING_NO_STRICT_ALIASING */ /* iterate over items in a known bucket to find desired item */ #define HASH_FIND_IN_BKT(tbl,hh,head,keyptr,keylen_in,hashval,out) \ do { \ if ((head).hh_head != NULL) { \ DECLTYPE_ASSIGN(out, ELMT_FROM_HH(tbl, (head).hh_head)); \ } else { \ (out) = NULL; \ } \ while ((out) != NULL) { \ if ((out)->hh.hashv == (hashval) && (out)->hh.keylen == (keylen_in)) { \ if (uthash_memcmp((out)->hh.key, keyptr, keylen_in) == 0) { \ break; \ } \ } \ if ((out)->hh.hh_next != NULL) { \ DECLTYPE_ASSIGN(out, ELMT_FROM_HH(tbl, (out)->hh.hh_next)); \ } else { \ (out) = NULL; \ } \ } \ } while (0) /* add an item to a bucket */ #define HASH_ADD_TO_BKT(head,addhh) \ do { \ head.count++; \ (addhh)->hh_next = head.hh_head; \ (addhh)->hh_prev = NULL; \ if (head.hh_head != NULL) { (head).hh_head->hh_prev = (addhh); } \ (head).hh_head=addhh; \ if ((head.count >= ((head.expand_mult+1U) * HASH_BKT_CAPACITY_THRESH)) \ && ((addhh)->tbl->noexpand != 1U)) { \ HASH_EXPAND_BUCKETS((addhh)->tbl); \ } \ } while (0) /* remove an item from a given bucket */ #define HASH_DEL_IN_BKT(hh,head,hh_del) \ (head).count--; \ if ((head).hh_head == hh_del) { \ (head).hh_head = hh_del->hh_next; \ } \ if (hh_del->hh_prev) { \ hh_del->hh_prev->hh_next = hh_del->hh_next; \ } \ if (hh_del->hh_next) { \ hh_del->hh_next->hh_prev = hh_del->hh_prev; \ } /* Bucket expansion has the effect of doubling the number of buckets * and redistributing the items into the new buckets. Ideally the * items will distribute more or less evenly into the new buckets * (the extent to which this is true is a measure of the quality of * the hash function as it applies to the key domain). * * With the items distributed into more buckets, the chain length * (item count) in each bucket is reduced. Thus by expanding buckets * the hash keeps a bound on the chain length. This bounded chain * length is the essence of how a hash provides constant time lookup. * * The calculation of tbl->ideal_chain_maxlen below deserves some * explanation. First, keep in mind that we're calculating the ideal * maximum chain length based on the *new* (doubled) bucket count. 
* In fractions this is just n/b (n=number of items,b=new num buckets). * Since the ideal chain length is an integer, we want to calculate * ceil(n/b). We don't depend on floating point arithmetic in this * hash, so to calculate ceil(n/b) with integers we could write * * ceil(n/b) = (n/b) + ((n%b)?1:0) * * and in fact a previous version of this hash did just that. * But now we have improved things a bit by recognizing that b is * always a power of two. We keep its base 2 log handy (call it lb), * so now we can write this with a bit shift and logical AND: * * ceil(n/b) = (n>>lb) + ( (n & (b-1)) ? 1:0) * */ #define HASH_EXPAND_BUCKETS(tbl) \ do { \ unsigned _he_bkt; \ unsigned _he_bkt_i; \ struct UT_hash_handle *_he_thh, *_he_hh_nxt; \ UT_hash_bucket *_he_new_buckets, *_he_newbkt; \ _he_new_buckets = (UT_hash_bucket*)uthash_malloc( \ 2UL * tbl->num_buckets * sizeof(struct UT_hash_bucket)); \ if (!_he_new_buckets) { uthash_fatal( "out of memory"); } \ memset(_he_new_buckets, 0, \ 2UL * tbl->num_buckets * sizeof(struct UT_hash_bucket)); \ tbl->ideal_chain_maxlen = \ (tbl->num_items >> (tbl->log2_num_buckets+1U)) + \ (((tbl->num_items & ((tbl->num_buckets*2U)-1U)) != 0U) ? 1U : 0U); \ tbl->nonideal_items = 0; \ for(_he_bkt_i = 0; _he_bkt_i < tbl->num_buckets; _he_bkt_i++) \ { \ _he_thh = tbl->buckets[ _he_bkt_i ].hh_head; \ while (_he_thh != NULL) { \ _he_hh_nxt = _he_thh->hh_next; \ HASH_TO_BKT( _he_thh->hashv, tbl->num_buckets*2U, _he_bkt); \ _he_newbkt = &(_he_new_buckets[ _he_bkt ]); \ if (++(_he_newbkt->count) > tbl->ideal_chain_maxlen) { \ tbl->nonideal_items++; \ _he_newbkt->expand_mult = _he_newbkt->count / \ tbl->ideal_chain_maxlen; \ } \ _he_thh->hh_prev = NULL; \ _he_thh->hh_next = _he_newbkt->hh_head; \ if (_he_newbkt->hh_head != NULL) { _he_newbkt->hh_head->hh_prev = \ _he_thh; } \ _he_newbkt->hh_head = _he_thh; \ _he_thh = _he_hh_nxt; \ } \ } \ uthash_free( tbl->buckets, tbl->num_buckets*sizeof(struct UT_hash_bucket) ); \ tbl->num_buckets *= 2U; \ tbl->log2_num_buckets++; \ tbl->buckets = _he_new_buckets; \ tbl->ineff_expands = (tbl->nonideal_items > (tbl->num_items >> 1)) ? \ (tbl->ineff_expands+1U) : 0U; \ if (tbl->ineff_expands > 1U) { \ tbl->noexpand=1; \ uthash_noexpand_fyi(tbl); \ } \ uthash_expand_fyi(tbl); \ } while (0) /* This is an adaptation of Simon Tatham's O(n log(n)) mergesort */ /* Note that HASH_SORT assumes the hash handle name to be hh. * HASH_SRT was added to allow the hash handle name to be passed in. */ #define HASH_SORT(head,cmpfcn) HASH_SRT(hh,head,cmpfcn) #define HASH_SRT(hh,head,cmpfcn) \ do { \ unsigned _hs_i; \ unsigned _hs_looping,_hs_nmerges,_hs_insize,_hs_psize,_hs_qsize; \ struct UT_hash_handle *_hs_p, *_hs_q, *_hs_e, *_hs_list, *_hs_tail; \ if (head != NULL) { \ _hs_insize = 1; \ _hs_looping = 1; \ _hs_list = &((head)->hh); \ while (_hs_looping != 0U) { \ _hs_p = _hs_list; \ _hs_list = NULL; \ _hs_tail = NULL; \ _hs_nmerges = 0; \ while (_hs_p != NULL) { \ _hs_nmerges++; \ _hs_q = _hs_p; \ _hs_psize = 0; \ for ( _hs_i = 0; _hs_i < _hs_insize; _hs_i++ ) { \ _hs_psize++; \ _hs_q = (UT_hash_handle*)((_hs_q->next != NULL) ? \ ((void*)((char*)(_hs_q->next) + \ (head)->hh.tbl->hho)) : NULL); \ if (! (_hs_q) ) { break; } \ } \ _hs_qsize = _hs_insize; \ while ((_hs_psize > 0U) || ((_hs_qsize > 0U) && (_hs_q != NULL))) {\ if (_hs_psize == 0U) { \ _hs_e = _hs_q; \ _hs_q = (UT_hash_handle*)((_hs_q->next != NULL) ? 
\ ((void*)((char*)(_hs_q->next) + \ (head)->hh.tbl->hho)) : NULL); \ _hs_qsize--; \ } else if ( (_hs_qsize == 0U) || (_hs_q == NULL) ) { \ _hs_e = _hs_p; \ if (_hs_p != NULL){ \ _hs_p = (UT_hash_handle*)((_hs_p->next != NULL) ? \ ((void*)((char*)(_hs_p->next) + \ (head)->hh.tbl->hho)) : NULL); \ } \ _hs_psize--; \ } else if (( \ cmpfcn(DECLTYPE(head)(ELMT_FROM_HH((head)->hh.tbl,_hs_p)), \ DECLTYPE(head)(ELMT_FROM_HH((head)->hh.tbl,_hs_q))) \ ) <= 0) { \ _hs_e = _hs_p; \ if (_hs_p != NULL){ \ _hs_p = (UT_hash_handle*)((_hs_p->next != NULL) ? \ ((void*)((char*)(_hs_p->next) + \ (head)->hh.tbl->hho)) : NULL); \ } \ _hs_psize--; \ } else { \ _hs_e = _hs_q; \ _hs_q = (UT_hash_handle*)((_hs_q->next != NULL) ? \ ((void*)((char*)(_hs_q->next) + \ (head)->hh.tbl->hho)) : NULL); \ _hs_qsize--; \ } \ if ( _hs_tail != NULL ) { \ _hs_tail->next = ((_hs_e != NULL) ? \ ELMT_FROM_HH((head)->hh.tbl,_hs_e) : NULL); \ } else { \ _hs_list = _hs_e; \ } \ if (_hs_e != NULL) { \ _hs_e->prev = ((_hs_tail != NULL) ? \ ELMT_FROM_HH((head)->hh.tbl,_hs_tail) : NULL); \ } \ _hs_tail = _hs_e; \ } \ _hs_p = _hs_q; \ } \ if (_hs_tail != NULL){ \ _hs_tail->next = NULL; \ } \ if ( _hs_nmerges <= 1U ) { \ _hs_looping=0; \ (head)->hh.tbl->tail = _hs_tail; \ DECLTYPE_ASSIGN(head,ELMT_FROM_HH((head)->hh.tbl, _hs_list)); \ } \ _hs_insize *= 2U; \ } \ HASH_FSCK(hh,head); \ } \ } while (0) /* This function selects items from one hash into another hash. * The end result is that the selected items have dual presence * in both hashes. There is no copy of the items made; rather * they are added into the new hash through a secondary hash * hash handle that must be present in the structure. */ #define HASH_SELECT(hh_dst, dst, hh_src, src, cond) \ do { \ unsigned _src_bkt, _dst_bkt; \ void *_last_elt=NULL, *_elt; \ UT_hash_handle *_src_hh, *_dst_hh, *_last_elt_hh=NULL; \ ptrdiff_t _dst_hho = ((char*)(&(dst)->hh_dst) - (char*)(dst)); \ if (src != NULL) { \ for(_src_bkt=0; _src_bkt < (src)->hh_src.tbl->num_buckets; _src_bkt++) { \ for(_src_hh = (src)->hh_src.tbl->buckets[_src_bkt].hh_head; \ _src_hh != NULL; \ _src_hh = _src_hh->hh_next) { \ _elt = ELMT_FROM_HH((src)->hh_src.tbl, _src_hh); \ if (cond(_elt)) { \ _dst_hh = (UT_hash_handle*)(((char*)_elt) + _dst_hho); \ _dst_hh->key = _src_hh->key; \ _dst_hh->keylen = _src_hh->keylen; \ _dst_hh->hashv = _src_hh->hashv; \ _dst_hh->prev = _last_elt; \ _dst_hh->next = NULL; \ if (_last_elt_hh != NULL) { _last_elt_hh->next = _elt; } \ if (dst == NULL) { \ DECLTYPE_ASSIGN(dst,_elt); \ HASH_MAKE_TABLE(hh_dst,dst); \ } else { \ _dst_hh->tbl = (dst)->hh_dst.tbl; \ } \ HASH_TO_BKT(_dst_hh->hashv, _dst_hh->tbl->num_buckets, _dst_bkt); \ HASH_ADD_TO_BKT(_dst_hh->tbl->buckets[_dst_bkt],_dst_hh); \ (dst)->hh_dst.tbl->num_items++; \ _last_elt = _elt; \ _last_elt_hh = _dst_hh; \ } \ } \ } \ } \ HASH_FSCK(hh_dst,dst); \ } while (0) #define HASH_CLEAR(hh,head) \ do { \ if (head != NULL) { \ uthash_free((head)->hh.tbl->buckets, \ (head)->hh.tbl->num_buckets*sizeof(struct UT_hash_bucket)); \ HASH_BLOOM_FREE((head)->hh.tbl); \ uthash_free((head)->hh.tbl, sizeof(UT_hash_table)); \ (head)=NULL; \ } \ } while (0) #define HASH_OVERHEAD(hh,head) \ ((head != NULL) ? 
( \ (size_t)(((head)->hh.tbl->num_items * sizeof(UT_hash_handle)) + \ ((head)->hh.tbl->num_buckets * sizeof(UT_hash_bucket)) + \ sizeof(UT_hash_table) + \ (HASH_BLOOM_BYTELEN))) : 0U) #ifdef NO_DECLTYPE #define HASH_ITER(hh,head,el,tmp) \ for(((el)=(head)), ((*(char**)(&(tmp)))=(char*)((head!=NULL)?(head)->hh.next:NULL)); \ (el) != NULL; ((el)=(tmp)), ((*(char**)(&(tmp)))=(char*)((tmp!=NULL)?(tmp)->hh.next:NULL))) #else #define HASH_ITER(hh,head,el,tmp) \ for(((el)=(head)), ((tmp)=DECLTYPE(el)((head!=NULL)?(head)->hh.next:NULL)); \ (el) != NULL; ((el)=(tmp)), ((tmp)=DECLTYPE(el)((tmp!=NULL)?(tmp)->hh.next:NULL))) #endif /* obtain a count of items in the hash */ #define HASH_COUNT(head) HASH_CNT(hh,head) #define HASH_CNT(hh,head) ((head != NULL)?((head)->hh.tbl->num_items):0U) typedef struct UT_hash_bucket { struct UT_hash_handle *hh_head; unsigned count; /* expand_mult is normally set to 0. In this situation, the max chain length * threshold is enforced at its default value, HASH_BKT_CAPACITY_THRESH. (If * the bucket's chain exceeds this length, bucket expansion is triggered). * However, setting expand_mult to a non-zero value delays bucket expansion * (that would be triggered by additions to this particular bucket) * until its chain length reaches a *multiple* of HASH_BKT_CAPACITY_THRESH. * (The multiplier is simply expand_mult+1). The whole idea of this * multiplier is to reduce bucket expansions, since they are expensive, in * situations where we know that a particular bucket tends to be overused. * It is better to let its chain length grow to a longer yet-still-bounded * value, than to do an O(n) bucket expansion too often. */ unsigned expand_mult; } UT_hash_bucket; /* random signature used only to find hash tables in external analysis */ #define HASH_SIGNATURE 0xa0111fe1u #define HASH_BLOOM_SIGNATURE 0xb12220f2u typedef struct UT_hash_table { UT_hash_bucket *buckets; unsigned num_buckets, log2_num_buckets; unsigned num_items; struct UT_hash_handle *tail; /* tail hh in app order, for fast append */ ptrdiff_t hho; /* hash handle offset (byte pos of hash handle in element */ /* in an ideal situation (all buckets used equally), no bucket would have * more than ceil(#items/#buckets) items. that's the ideal chain length. */ unsigned ideal_chain_maxlen; /* nonideal_items is the number of items in the hash whose chain position * exceeds the ideal chain maxlen. these items pay the penalty for an uneven * hash distribution; reaching them in a chain traversal takes >ideal steps */ unsigned nonideal_items; /* ineffective expands occur when a bucket doubling was performed, but * afterward, more than half the items in the hash had nonideal chain * positions. If this happens on two consecutive expansions we inhibit any * further expansion, as it's not helping; this happens when the hash * function isn't a good fit for the key domain. When expansion is inhibited * the hash will still work, albeit no longer in constant time. 
*/ unsigned ineff_expands, noexpand; uint32_t signature; /* used only to find hash tables in external analysis */ #ifdef HASH_BLOOM uint32_t bloom_sig; /* used only to test bloom exists in external analysis */ uint8_t *bloom_bv; uint8_t bloom_nbits; #endif } UT_hash_table; typedef struct UT_hash_handle { struct UT_hash_table *tbl; void *prev; /* prev element in app order */ void *next; /* next element in app order */ struct UT_hash_handle *hh_prev; /* previous hh in bucket order */ struct UT_hash_handle *hh_next; /* next hh in bucket order */ void *key; /* ptr to enclosing struct's key */ unsigned keylen; /* enclosing struct's key len */ unsigned hashv; /* result of hash-fcn(key) */ } UT_hash_handle; #endif /* UTHASH_H */ mqtree-1.0.6/c_src/mqtree.c0000644000232200023220000004243513605316475016135 0ustar debalancedebalance/* * @author Evgeny Khramtsov * @copyright (C) 2002-2019 ProcessOne, SARL. All Rights Reserved. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ #include #include #include #include "uthash.h" void __free(void *ptr, size_t size) { enif_free(ptr); } #undef uthash_malloc #undef uthash_free #define uthash_malloc enif_alloc #define uthash_free __free /**************************************************************** * Structures/Globals definitions * ****************************************************************/ typedef struct __tree_t { char *key; char *val; int refc; struct __tree_t *sub; UT_hash_handle hh; } tree_t; typedef struct { tree_t *tree; char *name; ErlNifRWLock *lock; } state_t; typedef struct { char *name; state_t *state; UT_hash_handle hh; } registry_t; static ErlNifResourceType *tree_state_t = NULL; static registry_t *registry = NULL; static ErlNifRWLock *registry_lock = NULL; /**************************************************************** * MQTT Tree Manipulation * ****************************************************************/ tree_t *tree_new(char *key, size_t len) { tree_t *tree = enif_alloc(sizeof(tree_t)); if (tree) { memset(tree, 0, sizeof(tree_t)); if (key && len) { tree->key = enif_alloc(len); if (tree->key) { memcpy(tree->key, key, len); } else { enif_free(tree); tree = NULL; } } } return tree; } void tree_free(tree_t *t) { tree_t *found, *iter; if (t) { enif_free(t->key); enif_free(t->val); HASH_ITER(hh, t->sub, found, iter) { HASH_DEL(t->sub, found); tree_free(found); } memset(t, 0, sizeof(tree_t)); enif_free(t); } } void tree_clear(tree_t *root) { tree_t *found, *iter; HASH_ITER(hh, root->sub, found, iter) { HASH_DEL(root->sub, found); tree_free(found); } } int tree_add(tree_t *root, char *path, size_t size) { int i = 0; size_t len; tree_t *t = root; tree_t *found, *new; while (i<=size) { len = strlen(path+i) + 1; HASH_FIND_STR(t->sub, path+i, found); if (found) { i += len; t = found; } else { new = tree_new(path+i, len); if (new) { HASH_ADD_STR(t->sub, key, new); i += len; t = new; } else return ENOMEM; } } if (!t->val) { t->val = enif_alloc(size+1); if (t->val) { t->val[size] = 0; for (i=0; ival[i] = c ? 
c : '/'; } } else return ENOMEM; } t->refc++; return 0; } int tree_del(tree_t *root, char *path, size_t i, size_t size) { tree_t *found; if (i<=size) { HASH_FIND_STR(root->sub, path+i, found); if (found) { i += strlen(path+i) + 1; int deleted = tree_del(found, path, i, size); if (deleted) { HASH_DEL(root->sub, found); tree_free(found); } } } else if (root->refc) { root->refc--; if (!root->refc) { enif_free(root->val); root->val = NULL; } } return !root->refc && !root->sub; } void tree_size(tree_t *tree, size_t *size) { tree_t *found, *iter; HASH_ITER(hh, tree->sub, found, iter) { if (found->refc) (*size)++; tree_size(found, size); } } int tree_refc(tree_t *tree, char *path, size_t i, size_t size) { tree_t *found; if (i<=size) { HASH_FIND_STR(tree->sub, path+i, found); if (found) { i += strlen(path+i) + 1; return tree_refc(found, path, i, size); } else { return 0; } } else return tree->refc; } /**************************************************************** * Registration * ****************************************************************/ void delete_registry_entry(registry_t *entry) { /* registry_lock must be RW-locked! */ HASH_DEL(registry, entry); entry->state->name = NULL; enif_release_resource(entry->state); enif_free(entry->name); enif_free(entry); } int register_tree(char *name, state_t *state) { registry_t *entry, *found; entry = enif_alloc(sizeof(registry_t)); if (!entry) return ENOMEM; entry->name = enif_alloc(strlen(name) + 1); if (!entry->name) { free(entry); return ENOMEM; } entry->state = state; strcpy(entry->name, name); enif_rwlock_rwlock(registry_lock); HASH_FIND_STR(registry, name, found); if (found) { enif_rwlock_rwunlock(registry_lock); enif_free(entry->name); enif_free(entry); return EINVAL; } else { if (state->name) { /* Unregistering previously registered name */ HASH_FIND_STR(registry, state->name, found); if (found) delete_registry_entry(found); } enif_keep_resource(state); HASH_ADD_STR(registry, name, entry); state->name = entry->name; enif_rwlock_rwunlock(registry_lock); return 0; } } int unregister_tree(char *name) { registry_t *entry; int ret; enif_rwlock_rwlock(registry_lock); HASH_FIND_STR(registry, name, entry); if (entry) { delete_registry_entry(entry); ret = 0; } else { ret = EINVAL; } enif_rwlock_rwunlock(registry_lock); return ret; } /**************************************************************** * NIF helpers * ****************************************************************/ static ERL_NIF_TERM cons(ErlNifEnv *env, char *str, ERL_NIF_TERM tail) { if (str) { size_t len = strlen(str); ERL_NIF_TERM head; unsigned char *buf = enif_make_new_binary(env, len, &head); if (buf) { memcpy(buf, str, len); return enif_make_list_cell(env, head, tail); } } return tail; } static void match(ErlNifEnv *env, tree_t *root, char *path, size_t i, size_t size, ERL_NIF_TERM *acc) { tree_t *found; size_t len = 0; if (i<=size) { HASH_FIND_STR(root->sub, path+i, found); if (found) { len = strlen(path+i) + 1; match(env, found, path, i+len, size, acc); }; if (i || path[0] != '$') { HASH_FIND_STR(root->sub, "+", found); if (found) { len = strlen(path+i) + 1; match(env, found, path, i+len, size, acc); } HASH_FIND_STR(root->sub, "#", found); if (found) { *acc = cons(env, found->val, *acc); } } } else { *acc = cons(env, root->val, *acc); HASH_FIND_STR(root->sub, "#", found); if (found) *acc = cons(env, found->val, *acc); } } static void to_list(ErlNifEnv *env, tree_t *root, ERL_NIF_TERM *acc) { tree_t *found, *iter; HASH_ITER(hh, root->sub, found, iter) { if (found->val) { size_t 
len = strlen(found->val); ERL_NIF_TERM refc = enif_make_int(env, found->refc); ERL_NIF_TERM val; unsigned char *buf = enif_make_new_binary(env, len, &val); if (buf) { memcpy(buf, found->val, len); *acc = enif_make_list_cell(env, enif_make_tuple2(env, val, refc), *acc); } }; to_list(env, found, acc); } } static ERL_NIF_TERM dump(ErlNifEnv *env, tree_t *tree) { tree_t *found, *iter; ERL_NIF_TERM tail, head; tail = enif_make_list(env, 0); HASH_ITER(hh, tree->sub, found, iter) { head = dump(env, found); tail = enif_make_list_cell(env, head, tail); } if (tree->key) { ERL_NIF_TERM part, path; part = enif_make_string(env, tree->key, ERL_NIF_LATIN1); if (tree->val) path = enif_make_string(env, tree->val, ERL_NIF_LATIN1); else path = enif_make_atom(env, "none"); return enif_make_tuple4(env, part, path, enif_make_int(env, tree->refc), tail); } else return tail; } static ERL_NIF_TERM raise(ErlNifEnv *env, int err) { switch (err) { case ENOMEM: return enif_raise_exception(env, enif_make_atom(env, "enomem")); default: return enif_make_badarg(env); } } void prep_path(char *path, ErlNifBinary *bin) { int i; unsigned char c; path[bin->size] = 0; for (i=0; isize; i++) { c = bin->data[i]; path[i] = (c == '/') ? 0 : c; } } /**************************************************************** * Constructors/Destructors * ****************************************************************/ static state_t *init_tree_state(ErlNifEnv *env) { state_t *state = enif_alloc_resource(tree_state_t, sizeof(state_t)); if (state) { memset(state, 0, sizeof(state_t)); state->tree = tree_new(NULL, 0); state->lock = enif_rwlock_create("mqtree_lock"); if (state->tree && state->lock) return state; else enif_release_resource(state); } return NULL; } static void destroy_tree_state(ErlNifEnv *env, void *data) { state_t *state = (state_t *) data; if (state) { tree_free(state->tree); if (state->lock) enif_rwlock_destroy(state->lock); } memset(state, 0, sizeof(state_t)); } /**************************************************************** * NIF definitions * ****************************************************************/ static int load(ErlNifEnv* env, void** priv, ERL_NIF_TERM max) { registry_lock = enif_rwlock_create("mqtree_registry"); if (registry_lock) { ErlNifResourceFlags flags = ERL_NIF_RT_CREATE | ERL_NIF_RT_TAKEOVER; tree_state_t = enif_open_resource_type(env, NULL, "mqtree_state", destroy_tree_state, flags, NULL); return 0; } return ENOMEM; } static void unload(ErlNifEnv* env, void* priv) { if (registry_lock) { enif_rwlock_destroy(registry_lock); registry_lock = NULL; } } static ERL_NIF_TERM new_0(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[]) { ERL_NIF_TERM result; state_t *state = init_tree_state(env); if (state) { result = enif_make_resource(env, state); enif_release_resource(state); } else result = raise(env, ENOMEM); return result; } static ERL_NIF_TERM insert_2(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[]) { state_t *state; ErlNifBinary path_bin; if (!enif_get_resource(env, argv[0], tree_state_t, (void *) &state) || !enif_inspect_iolist_as_binary(env, argv[1], &path_bin)) return raise(env, EINVAL); if (!path_bin.size) return enif_make_atom(env, "ok"); char path[path_bin.size+1]; prep_path(path, &path_bin); enif_rwlock_rwlock(state->lock); int ret = tree_add(state->tree, path, path_bin.size); enif_rwlock_rwunlock(state->lock); if (!ret) return enif_make_atom(env, "ok"); else return raise(env, ret); } static ERL_NIF_TERM delete_2(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[]) { state_t *state; ErlNifBinary 
path_bin; if (!enif_get_resource(env, argv[0], tree_state_t, (void *) &state) || !enif_inspect_iolist_as_binary(env, argv[1], &path_bin)) return raise(env, EINVAL); if (!path_bin.size) return enif_make_atom(env, "ok"); char path[path_bin.size+1]; prep_path(path, &path_bin); enif_rwlock_rwlock(state->lock); tree_del(state->tree, path, 0, path_bin.size); enif_rwlock_rwunlock(state->lock); return enif_make_atom(env, "ok"); } static ERL_NIF_TERM match_2(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[]) { state_t *state; ErlNifBinary path_bin; ERL_NIF_TERM result = enif_make_list(env, 0); if (!enif_get_resource(env, argv[0], tree_state_t, (void *) &state) || !enif_inspect_iolist_as_binary(env, argv[1], &path_bin)) return raise(env, EINVAL); if (!path_bin.size) return result; char path[path_bin.size+1]; prep_path(path, &path_bin); enif_rwlock_rlock(state->lock); match(env, state->tree, path, 0, path_bin.size, &result); enif_rwlock_runlock(state->lock); return result; } static ERL_NIF_TERM refc_2(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[]) { state_t *state; ErlNifBinary path_bin; if (!enif_get_resource(env, argv[0], tree_state_t, (void *) &state) || !enif_inspect_iolist_as_binary(env, argv[1], &path_bin)) return raise(env, EINVAL); if (!path_bin.size) return enif_make_int(env, 0); char path[path_bin.size+1]; prep_path(path, &path_bin); enif_rwlock_rlock(state->lock); int refc = tree_refc(state->tree, path, 0, path_bin.size); enif_rwlock_runlock(state->lock); return enif_make_int(env, refc); } static ERL_NIF_TERM clear_1(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[]) { state_t *state; if (!enif_get_resource(env, argv[0], tree_state_t, (void *) &state)) return raise(env, EINVAL); enif_rwlock_rwlock(state->lock); tree_clear(state->tree); enif_rwlock_rwunlock(state->lock); return enif_make_atom(env, "ok"); } static ERL_NIF_TERM size_1(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[]) { state_t *state; size_t size = 0; if (!enif_get_resource(env, argv[0], tree_state_t, (void *) &state)) return raise(env, EINVAL); enif_rwlock_rlock(state->lock); tree_size(state->tree, &size); enif_rwlock_runlock(state->lock); return enif_make_uint64(env, (ErlNifUInt64) size); } static ERL_NIF_TERM is_empty_1(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[]) { state_t *state; if (!enif_get_resource(env, argv[0], tree_state_t, (void *) &state)) return raise(env, EINVAL); enif_rwlock_rlock(state->lock); char *ret = state->tree->sub ? 
"false" : "true"; enif_rwlock_runlock(state->lock); return enif_make_atom(env, ret); } static ERL_NIF_TERM to_list_1(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[]) { state_t *state; ERL_NIF_TERM result = enif_make_list(env, 0); if (!enif_get_resource(env, argv[0], tree_state_t, (void *) &state)) return raise(env, EINVAL); enif_rwlock_rlock(state->lock); to_list(env, state->tree, &result); enif_rwlock_runlock(state->lock); return result; } static ERL_NIF_TERM dump_1(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[]) { state_t *state; if (!enif_get_resource(env, argv[0], tree_state_t, (void *) &state)) return raise(env, EINVAL); enif_rwlock_rlock(state->lock); ERL_NIF_TERM result = dump(env, state->tree); enif_rwlock_runlock(state->lock); return result; } static ERL_NIF_TERM register_2(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[]) { state_t *state; unsigned int len; int ret; if (!enif_get_atom_length(env, argv[0], &len, ERL_NIF_LATIN1) || !enif_get_resource(env, argv[1], tree_state_t, (void *) &state)) return raise(env, EINVAL); char name[len+1]; enif_get_atom(env, argv[0], name, len+1, ERL_NIF_LATIN1); if (!strcmp(name, "undefined")) return raise(env, EINVAL); ret = register_tree(name, state); if (ret) return raise(env, ret); else return enif_make_atom(env, "ok"); } static ERL_NIF_TERM unregister_1(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[]) { unsigned int len; int ret; if (!enif_get_atom_length(env, argv[0], &len, ERL_NIF_LATIN1)) return raise(env, EINVAL); char name[len+1]; enif_get_atom(env, argv[0], name, len+1, ERL_NIF_LATIN1); ret = unregister_tree(name); if (ret) return raise(env, ret); else return enif_make_atom(env, "ok"); } static ERL_NIF_TERM whereis_1(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[]) { unsigned int len; registry_t *entry; ERL_NIF_TERM result; if (!enif_get_atom_length(env, argv[0], &len, ERL_NIF_LATIN1)) return raise(env, EINVAL); char name[len+1]; enif_get_atom(env, argv[0], name, len+1, ERL_NIF_LATIN1); enif_rwlock_rlock(registry_lock); HASH_FIND_STR(registry, name, entry); if (entry) result = enif_make_resource(env, entry->state); else result = enif_make_atom(env, "undefined"); enif_rwlock_runlock(registry_lock); return result; } static ERL_NIF_TERM registered_0(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[]) { registry_t *entry, *iter; ERL_NIF_TERM result = enif_make_list(env, 0); enif_rwlock_rlock(registry_lock); HASH_ITER(hh, registry, entry, iter) { result = enif_make_list_cell(env, enif_make_atom(env, entry->name), result); } enif_rwlock_runlock(registry_lock); return result; } static ErlNifFunc nif_funcs[] = { {"new", 0, new_0}, {"insert", 2, insert_2}, {"delete", 2, delete_2}, {"match", 2, match_2}, {"refc", 2, refc_2}, {"clear", 1, clear_1}, {"size", 1, size_1}, {"is_empty", 1, is_empty_1}, {"to_list", 1, to_list_1}, {"dump", 1, dump_1}, {"register", 2, register_2}, {"unregister", 1, unregister_1}, {"whereis", 1, whereis_1}, {"registered", 0, registered_0} }; ERL_NIF_INIT(mqtree, nif_funcs, load, NULL, NULL, unload) mqtree-1.0.6/src/0000755000232200023220000000000013605316475014162 5ustar debalancedebalancemqtree-1.0.6/src/mqtree.app.src0000644000232200023220000000246113605316475016752 0ustar debalancedebalance%%%------------------------------------------------------------------- %%% @author Evgeny Khramtsov %%% @copyright (C) 2002-2019 ProcessOne, SARL. All Rights Reserved. %%% %%% Licensed under the Apache License, Version 2.0 (the "License"); %%% you may not use this file except in compliance with the License. 
%%% You may obtain a copy of the License at %%% %%% http://www.apache.org/licenses/LICENSE-2.0 %%% %%% Unless required by applicable law or agreed to in writing, software %%% distributed under the License is distributed on an "AS IS" BASIS, %%% WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. %%% See the License for the specific language governing permissions and %%% limitations under the License. %%% %%%------------------------------------------------------------------- {application, mqtree, [ {description, "Index tree for MQTT topic filters"}, {vsn, "1.0.6"}, {registered, []}, {applications, [kernel, stdlib]}, {env, []}, %% hex.pm packaging: {files, ["src/", "c_src/*.c", "c_src/*.h", "rebar.config", "rebar.config.script", "README.md", "LICENSE"]}, {licenses, ["Apache 2.0"]}, {links, [{"Github", "https://github.com/processone/mqtree"}]} ]}. %% Local Variables: %% mode: erlang %% End: %% vim: set filetype=erlang tabstop=8: mqtree-1.0.6/src/mqtree.erl0000644000232200023220000000712213605316475016165 0ustar debalancedebalance%%%------------------------------------------------------------------- %%% @author Evgeny Khramtsov %%% @copyright (C) 2002-2019 ProcessOne, SARL. All Rights Reserved. %%% %%% Licensed under the Apache License, Version 2.0 (the "License"); %%% you may not use this file except in compliance with the License. %%% You may obtain a copy of the License at %%% %%% http://www.apache.org/licenses/LICENSE-2.0 %%% %%% Unless required by applicable law or agreed to in writing, software %%% distributed under the License is distributed on an "AS IS" BASIS, %%% WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. %%% See the License for the specific language governing permissions and %%% limitations under the License. %%% %%%------------------------------------------------------------------- -module(mqtree). -on_load(load_nif/0). %% API -export([new/0, insert/2, delete/2, match/2, refc/2, clear/1, size/1, is_empty/1]). -export([register/2, unregister/1, whereis/1, registered/0]). %% For debugging -export([dump/1, to_list/1]). -type path() :: iodata(). -opaque tree() :: reference(). -export_type([tree/0, path/0]). %%%=================================================================== %%% API %%%=================================================================== -spec new() -> tree(). new() -> erlang:nif_error({nif_not_loaded, ?MODULE}). -spec insert(tree(), path()) -> ok. insert(_Tree, _Path) -> erlang:nif_error({nif_not_loaded, ?MODULE}). -spec delete(tree(), path()) -> ok. delete(_Tree, _Path) -> erlang:nif_error({nif_not_loaded, ?MODULE}). -spec match(tree(), path()) -> [binary()]. match(_Tree, _Path) -> erlang:nif_error({nif_not_loaded, ?MODULE}). -spec refc(tree(), path()) -> non_neg_integer(). refc(_Tree, _Path) -> erlang:nif_error({nif_not_loaded, ?MODULE}). -spec clear(tree()) -> ok. clear(_Tree) -> erlang:nif_error({nif_not_loaded, ?MODULE}). -spec size(tree()) -> non_neg_integer(). size(_Tree) -> erlang:nif_error({nif_not_loaded, ?MODULE}). -spec is_empty(tree()) -> boolean(). is_empty(_Tree) -> erlang:nif_error({nif_not_loaded, ?MODULE}). -spec register(atom(), tree()) -> ok. register(_Name, _Tree) -> erlang:nif_error({nif_not_loaded, ?MODULE}). -spec unregister(atom()) -> ok. unregister(_Name) -> erlang:nif_error({nif_not_loaded, ?MODULE}). -spec whereis(atom()) -> tree() | undefined. whereis(_Name) -> erlang:nif_error({nif_not_loaded, ?MODULE}). -spec registered() -> [atom()]. 
registered() -> erlang:nif_error({nif_not_loaded, ?MODULE}). %%%=================================================================== %%% For testing/debugging %%%=================================================================== -type tree_node() :: {string(), string() | none, non_neg_integer(), [tree_node()]}. -spec dump(tree()) -> [tree_node()]. dump(_Tree) -> erlang:nif_error({nif_not_loaded, ?MODULE}). -spec to_list(tree()) -> [{binary(), non_neg_integer()}]. to_list(_Tree) -> erlang:nif_error({nif_not_loaded, ?MODULE}). %%%=================================================================== %%% Internal functions %%%=================================================================== load_nif() -> Path = p1_nif_utils:get_so_path(?MODULE, [?MODULE], atom_to_list(?MODULE)), case erlang:load_nif(Path, 0) of ok -> ok; {error, {upgrade, _}} -> ok; {error, {Reason, Text}} -> error_logger:error_msg("Failed to load NIF ~s: ~s (~p)", [Path, Text, Reason]), erlang:nif_error(Reason) end. mqtree-1.0.6/rebar.config0000644000232200023220000000276613605316475015670 0ustar debalancedebalance%%%------------------------------------------------------------------- %%% @author Evgeny Khramtsov %%% @copyright (C) 2002-2019 ProcessOne, SARL. All Rights Reserved. %%% %%% Licensed under the Apache License, Version 2.0 (the "License"); %%% you may not use this file except in compliance with the License. %%% You may obtain a copy of the License at %%% %%% http://www.apache.org/licenses/LICENSE-2.0 %%% %%% Unless required by applicable law or agreed to in writing, software %%% distributed under the License is distributed on an "AS IS" BASIS, %%% WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. %%% See the License for the specific language governing permissions and %%% limitations under the License. %%% %%%------------------------------------------------------------------- {erl_opts, [debug_info, {src_dirs, ["src"]}]}. {port_env, [{"CFLAGS", "$CFLAGS -std=c99 -g -O2 -Wall"}, {"LDFLAGS", "$LDFLAGS -lpthread"}]}. {port_specs, [{"priv/lib/mqtree.so", ["c_src/mqtree.c"]}]}. {deps, [{p1_utils, ".*", {git, "https://github.com/processone/p1_utils", {tag, "1.0.17"}}}]}. {clean_files, ["c_src/mqtree.gcda", "c_src/mqtree.gcno"]}. {cover_enabled, true}. {cover_export_enabled, true}. {xref_checks, [undefined_function_calls, undefined_functions, deprecated_function_calls, deprecated_functions]}. {profiles, [{test, [{erl_opts, [{src_dirs, ["test"]}]}]}]}. %% Local Variables: %% mode: erlang %% End: %% vim: set filetype=erlang tabstop=8: mqtree-1.0.6/test/0000755000232200023220000000000013605316475014352 5ustar debalancedebalancemqtree-1.0.6/test/mqtree_test.erl0000644000232200023220000002410313605316475017412 0ustar debalancedebalance%%%------------------------------------------------------------------- %%% @author Evgeny Khramtsov %%% @copyright (C) 2002-2019 ProcessOne, SARL. All Rights Reserved. %%% %%% Licensed under the Apache License, Version 2.0 (the "License"); %%% you may not use this file except in compliance with the License. %%% You may obtain a copy of the License at %%% %%% http://www.apache.org/licenses/LICENSE-2.0 %%% %%% Unless required by applicable law or agreed to in writing, software %%% distributed under the License is distributed on an "AS IS" BASIS, %%% WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. %%% See the License for the specific language governing permissions and %%% limitations under the License. 
%%% %%%------------------------------------------------------------------- -module(mqtree_test). -include_lib("eunit/include/eunit.hrl"). -define(assertTree(L), case L of [] -> ?assertEqual([], mqtree:dump(T)), ?assertEqual([], mqtree:to_list(T)); _ -> ?assertEqual(L, lists:sort(mqtree:to_list(T))) end). -define(assertInsert(E), ?assertEqual(ok, mqtree:insert(T, E))). -define(assertDelete(E), ?assertEqual(ok, mqtree:delete(T, E))). %%%=================================================================== %%% Tests %%%=================================================================== new_test() -> T = mqtree:new(), ?assertTree([]). insert_test() -> T = mqtree:new(), Path = <<"/a/b/c">>, ?assertInsert(Path), ?assertTree([{Path, 1}]). is_empty_test() -> T = mqtree:new(), ?assert(mqtree:is_empty(T)), ?assertInsert(<<"/">>), ?assert(not mqtree:is_empty(T)). insert_then_delete_test() -> T = mqtree:new(), Path = <<"a/b">>, ?assertInsert(Path), ?assertDelete(Path), ?assertTree([]). insert_empty_then_delete_empty_test() -> T = mqtree:new(), ?assertInsert(<<>>), ?assertTree([]), ?assertDelete(<<>>), ?assertTree([]). insert_then_delete_empty_test() -> T = mqtree:new(), Path = <<"/a/b">>, ?assertInsert(Path), ?assertTree([{Path, 1}]), ?assertDelete(<<>>), ?assertTree([{Path, 1}]). insert_then_delete_shuffle_test_() -> {timeout, 60, fun insert_then_delete_shuffle/0}. insert_then_delete_shuffle() -> T = mqtree:new(), Check = lists:sort(rand_paths()), lists:foldl( fun(insert, Refc) -> lists:foreach( fun(Path) -> ?assertInsert(Path) end, rand_paths()), Refc1 = Refc+1, ?assertTree([{P, Refc1} || P <- Check]), Refc1; (delete, Refc) -> lists:foreach( fun(Path) -> ?assertDelete(Path) end, rand_paths()), Refc1 = Refc-1, case Refc1 of 0 -> ?assertTree([]); _ -> ?assertTree([{P, Refc1} || P <- Check]) end, Refc1 end, 0, rand_funs()). refc_test() -> T = mqtree:new(), lists:foreach( fun(Refc) -> lists:foreach( fun(P) -> ?assertEqual(Refc, mqtree:refc(T, P)), ?assertInsert(P) end, rand_paths()) end, lists:seq(0, 5)), lists:foreach( fun(Refc) -> lists:foreach( fun(P) -> ?assertDelete(P), ?assertEqual(Refc, mqtree:refc(T, P)) end, rand_paths()) end, lists:seq(5, 0, -1)). clear_test() -> T = mqtree:new(), lists:foreach( fun(_) -> lists:foreach(fun(P) -> ?assertInsert(P) end, rand_paths()), ?assertEqual(ok, mqtree:clear(T)), ?assertTree([]) end, lists:seq(1, 10)). clear_empty_test() -> T = mqtree:new(), ?assertEqual(ok, mqtree:clear(T)), ?assertTree([]). size_test() -> T = mqtree:new(), ?assertEqual(0, mqtree:size(T)), Paths = rand_paths(), lists:foreach( fun(_) -> lists:foreach(fun(P) -> ?assertInsert(P) end, rand_paths()), ?assert(mqtree:size(T) == length(Paths)) end, [1,2,3]), ?assertEqual(ok, mqtree:clear(T)), ?assertEqual(0, mqtree:size(T)). delete_non_existent_test() -> T = mqtree:new(), lists:foreach( fun(_) -> lists:foreach(fun(P) -> ?assertDelete(P) end, rand_paths()), ?assertTree([]) end, lists:seq(1, 10)). insert_then_delete_non_existent_test() -> T = mqtree:new(), Inserts = rand_paths("@$%&*"), Check = [{P, 1} || P <- lists:sort(Inserts)], lists:foreach(fun(P) -> ?assertInsert(P) end, Inserts), lists:foreach( fun(_) -> lists:foreach(fun(P) -> ?assertDelete(P) end, rand_paths()), ?assertTree(Check) end, lists:seq(1, 10)). match_all_test() -> T = mqtree:new(), lists:foreach( fun(_) -> ?assertInsert("#"), lists:foreach( fun(P) -> ?assertEqual([<<"#">>], mqtree:match(T, P)) end, rand_paths()) end, lists:seq(1, 10)). 
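%% --- Editor's sketch (not part of the original test suite) ---
%% A minimal, hypothetical usage example added for illustration: it walks the
%% same multi-level ("#") and single-level ("+") wildcard semantics that the
%% surrounding match tests exercise. The function name wildcard_usage_sketch/0
%% is an assumption, not an upstream helper; it only calls the documented
%% mqtree API (new/0, insert/2, match/2).
wildcard_usage_sketch() ->
    T = mqtree:new(),
    ok = mqtree:insert(T, <<"#">>),
    ok = mqtree:insert(T, <<"a/+">>),
    %% "#" matches any non-"$" topic, so a path unrelated to "a/+" yields only "#".
    [<<"#">>] = mqtree:match(T, <<"x/y">>),
    %% Both filters match "a/b"; match/2 gives no ordering guarantee, so sort.
    [<<"#">>, <<"a/+">>] = lists:sort(mqtree:match(T, <<"a/b">>)),
    ok.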
match_none_test() -> T = mqtree:new(), lists:foreach( fun(P) -> ?assertEqual([], mqtree:match(T, P)) end, rand_paths()). match_exact_test() -> T = mqtree:new(), lists:foreach(fun(P) -> ?assertInsert(P) end, rand_paths()), lists:foreach( fun(P) -> ?assertEqual([P], mqtree:match(T, P)) end, rand_paths()). match_tail_test() -> T = mqtree:new(), Filter = <<"a/b/#">>, ?assertInsert(Filter), ?assertEqual([], mqtree:match(T, "a/bc")), ?assertEqual([Filter], mqtree:match(T, "a/b")), ?assertEqual([Filter], mqtree:match(T, "a/b/")), ?assertEqual([Filter], mqtree:match(T, "a/b/c")), ?assertEqual([Filter], mqtree:match(T, "a/b/c/d")). match_plus_test() -> T = mqtree:new(), Filter = lists:sort([<<A, $/, B>> || A<-"+a", B<-"+b"]), lists:foreach(fun(P) -> ?assertInsert(P) end, Filter), ?assertEqual([<<"+/+">>], mqtree:match(T, "/")), ?assertEqual([<<"+/+">>], mqtree:match(T, "x/")), ?assertEqual([<<"+/+">>], mqtree:match(T, "/y")), ?assertEqual([<<"+/+">>], mqtree:match(T, "x/y")), ?assertEqual([<<"+/+">>, <<"a/+">>], mqtree:match(T, "a/")), ?assertEqual([<<"+/+">>, <<"a/+">>], mqtree:match(T, "a/y")), ?assertEqual([<<"+/+">>, <<"+/b">>], mqtree:match(T, "/b")), ?assertEqual([<<"+/+">>, <<"+/b">>], mqtree:match(T, "x/b")), ?assertEqual(Filter, lists:sort(mqtree:match(T, "a/b"))). 'match_begins_with_$_test'() -> T = mqtree:new(), Filters = ["#", "+", "+/", "+/+"], Topics = [<<"$SYS">>, <<"$SYS/some">>, <<"$">>, <<"$/some">>], lists:foreach(fun(P) -> ?assertInsert(P) end, Filters ++ Topics), lists:foreach(fun(P) -> ?assertEqual([P], mqtree:match(T, P)) end, Topics). whereis_non_existent_test() -> ?assertEqual(undefined, mqtree:whereis(test_tree)). unregister_non_existent_test() -> ?assertError(badarg, mqtree:unregister(test_tree)). register_test() -> T = mqtree:new(), ?assertEqual(ok, mqtree:register(test_tree, T)), ?assertEqual(T, mqtree:whereis(test_tree)), ?assertEqual(ok, mqtree:unregister(test_tree)). double_register_same_tree_test() -> T = mqtree:new(), ?assertEqual(ok, mqtree:register(test_tree, T)), ?assertError(badarg, mqtree:register(test_tree, T)), ?assertEqual(ok, mqtree:unregister(test_tree)). double_register_another_tree_test() -> T1 = mqtree:new(), T2 = mqtree:new(), ?assertEqual(ok, mqtree:register(test_tree, T1)), ?assertError(badarg, mqtree:register(test_tree, T2)), ?assertEqual(ok, mqtree:unregister(test_tree)). unregister_test() -> T = mqtree:new(), ?assertEqual(ok, mqtree:register(test_tree, T)), ?assertEqual(ok, mqtree:unregister(test_tree)), ?assertEqual(undefined, mqtree:whereis(test_tree)). double_unregister_test() -> T = mqtree:new(), ?assertEqual(ok, mqtree:register(test_tree, T)), ?assertEqual(ok, mqtree:unregister(test_tree)), ?assertError(badarg, mqtree:unregister(test_tree)). rename_test() -> T = mqtree:new(), ?assertEqual(ok, mqtree:register(test_tree_1, T)), ?assertEqual(ok, mqtree:register(test_tree_2, T)), ?assertEqual(undefined, mqtree:whereis(test_tree_1)), ?assertError(badarg, mqtree:unregister(test_tree_1)), ?assertEqual(T, mqtree:whereis(test_tree_2)), ?assertEqual(ok, mqtree:unregister(test_tree_2)). register_undefined_test() -> T = mqtree:new(), ?assertError(badarg, mqtree:register(undefined, T)).
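%% --- Editor's sketch (not part of the original test suite) ---
%% A hypothetical example illustrating the reference-counting behaviour that
%% refc_test/0 above relies on: insert/2 increments a filter's counter,
%% delete/2 decrements it, and the filter stays matchable until the counter
%% drops to zero. The function name refc_usage_sketch/0 is an assumption; it
%% only calls the documented mqtree API.
refc_usage_sketch() ->
    T = mqtree:new(),
    ok = mqtree:insert(T, <<"a/b">>),
    ok = mqtree:insert(T, <<"a/b">>),
    2 = mqtree:refc(T, <<"a/b">>),
    ok = mqtree:delete(T, <<"a/b">>),
    %% Still present: one reference remains after a single delete.
    1 = mqtree:refc(T, <<"a/b">>),
    [<<"a/b">>] = mqtree:match(T, <<"a/b">>),
    ok = mqtree:delete(T, <<"a/b">>),
    0 = mqtree:refc(T, <<"a/b">>),
    [] = mqtree:match(T, <<"a/b">>),
    ok.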
registered_test() -> Names = [list_to_atom("test_tree_" ++ integer_to_list(I)) || I <- lists:seq(1, 9)], lists:foldl( fun(Name, Acc) -> ?assertEqual(Acc, lists:sort(mqtree:registered())), T = mqtree:new(), ?assertEqual(ok, mqtree:register(Name, T)), [Name|Acc] end, [], lists:reverse(Names)), lists:foldl( fun(_, [Name|Acc]) -> ?assertEqual(ok, mqtree:unregister(Name)), ?assertEqual(Acc, lists:sort(mqtree:registered())), Acc end, Names, Names). %%%=================================================================== %%% Internal functions %%%=================================================================== rand_paths() -> rand_paths("/abcd"). rand_paths(Set) -> L1 = [{p1_rand:uniform(), <<A>>} || A<-Set], L2 = [{p1_rand:uniform(), <<A, $/, B>>} || A<-Set, B<-Set], L3 = [{p1_rand:uniform(), <<A, $/, B, $/, C>>} || A<-Set, B<-Set, C<-Set], L4 = [{p1_rand:uniform(), <<A, $/, B, $/, C, $/, D>>} || A<-Set, B<-Set, C<-Set, D<-Set], L5 = [{p1_rand:uniform(), <<A, $/, B, $/, C, $/, D, $/, E>>} || A<-Set, B<-Set, C<-Set, D<-Set, E<-Set], [Path || {_, Path} <- lists:keysort(1, L1++L2++L3++L4++L5)]. rand_funs() -> lists:flatmap( fun(_) -> I = p1_rand:uniform(5), Inserts = lists:duplicate(I, insert), Deletes = lists:duplicate(I, delete), Inserts ++ Deletes end, [1,2,3,4,5]). mqtree-1.0.6/LICENSE0000644000232200023220000002613613605316475014404 0ustar debalancedebalance Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.