===== sparsehash-2.0.2/AUTHORS =====

google-sparsehash@googlegroups.com

===== sparsehash-2.0.2/NEWS =====

== 23 February 2012 ==

A backwards incompatibility arose from flattening the include headers structure for the folder. This is now fixed in 2.0.2. You only need to upgrade if you had previously included files from the folder.

== 1 February 2012 ==

A minor bug related to the namespace switch from google to sparsehash stopped the build from working when perftools is also installed. This is now fixed in 2.0.1. You only need to upgrade if you have perftools installed.

== 31 January 2012 ==

I've just released sparsehash 2.0. The `google-sparsehash` project has been renamed to `sparsehash`. I (csilvers) am stepping down as maintainer, to be replaced by the team of Donovan Hide and Geoff Pike. Welcome to the team, Donovan and Geoff! Donovan has been an active contributor to sparsehash bug reports and discussions in the past, and Geoff has been closely involved with sparsehash inside Google (in addition to writing the [http://code.google.com/p/cityhash CityHash hash function]). The two of them together should be a formidable force. For good.

I bumped the major version number up to 2 to reflect the new community ownership of the project. All the [http://sparsehash.googlecode.com/svn/tags/sparsehash-2.0/ChangeLog changes] are related to the renaming. The only functional change from sparsehash 1.12 is that I've renamed the `google/` include-directory to be `sparsehash/` instead. New code should `#include <sparsehash/sparse_hash_map>`/etc. I've kept the old names around as forwarding headers to the new, so `#include <google/sparse_hash_map>` will continue to work.
Note that the classes and functions remain in the `google` C++ namespace (I didn't change that to `sparsehash` as well); I think that's a trickier transition, and can happen in a future release.

=== 18 January 2012 ===

The `google-sparsehash` Google Code page has been renamed to `sparsehash`, in preparation for the project being renamed to `sparsehash`. In the coming weeks, I'll be stepping down as maintainer for the sparsehash project, and as part of that Google is relinquishing ownership of the project; it will then be entirely community run. The name change reflects that shift.

=== 20 December 2011 ===

I've just released sparsehash 1.12. This release features improved I/O (serialization) support. Support is finally added to serialize and unserialize `dense_hash_map`/`set`, paralleling the existing code for `sparse_hash_map`/`set`. In addition, the serialization API has gotten simpler, with a single `serialize()` method to write to disk, and an `unserialize()` method to read from disk. Finally, support has gotten more generic, with built-in support for both C `FILE*`s and C++ streams, and an extension mechanism to support arbitrary sources and sinks.

There are also more minor changes, including minor bugfixes, an improved deleted-key test, and a minor addition to the `sparsetable` API. See the [http://google-sparsehash.googlecode.com/svn/tags/sparsehash-1.12/ChangeLog ChangeLog] for full details.

=== 23 June 2011 ===

I've just released sparsehash 1.11. The major user-visible change is that the default behavior is improved -- using the hash_map/set is faster -- for hashtables where the key is a pointer. We now notice that case and ignore the low 2-3 bits (which are almost always 0 for pointers) when hashing.

Another user-visible change is that we've removed the tests for whether the STL (vector, pair, etc) is defined in the 'std' namespace. gcc 2.95 is the most recent compiler I know of to put STL types and functions in the global namespace.
If you need to use such an old compiler, do not update to the latest sparsehash release.

We've also changed the internal tools we use to integrate Googler-supplied patches to sparsehash into the opensource release. These new tools should result in more frequent updates with better change descriptions. They will also result in future ChangeLog entries being much more verbose (for better or for worse).

A full list of changes is described in [http://google-sparsehash.googlecode.com/svn/tags/sparsehash-1.11/ChangeLog ChangeLog].

=== 21 January 2011 ===

I've just released sparsehash 1.10. This fixes a performance regression in sparsehash 1.8, where sparse_hash_map would copy hashtable keys by value even when the key was explicitly a reference. It also fixes compiler warnings from MSVC 10, which uses some c++0x features that did not interact well with sparsehash.

There is no reason to upgrade unless you use references for your hashtable keys, or compile with MSVC 10. A full list of changes is described in [http://google-sparsehash.googlecode.com/svn/tags/sparsehash-1.10/ChangeLog ChangeLog].

=== 24 September 2010 ===

I've just released sparsehash 1.9. This fixes a size regression in sparsehash 1.8, where the new allocator would take up space in `sparse_hash_map`, doubling the sparse_hash_map overhead (from 1-2 bits per bucket to 3 or so). All users are encouraged to upgrade.

This change also marks enums as being Plain Old Data, which can speed up hashtables with enum keys and/or values.

A full list of changes is described in [http://google-sparsehash.googlecode.com/svn/tags/sparsehash-1.9/ChangeLog ChangeLog].

=== 29 July 2010 ===

I've just released sparsehash 1.8. This includes improved support for `Allocator`, including supporting the allocator constructor arg and `get_allocator()` access method.
To work around a bug in gcc 4.0.x, I've renamed the static variables `HT_OCCUPANCY_FLT` and `HT_SHRINK_FLT` to `HT_OCCUPANCY_PCT` and `HT_SHRINK_PCT`, and changed their type from float to int. This should not be a user-visible change, since these variables are only used in the internal hashtable classes (sparsehash clients should use `max_load_factor()` and `min_load_factor()` instead of modifying these static variables), but if you do access these constants, you will need to change your code.

Internally, the biggest change is a revamp of the test suite. It now has more complete coverage, and a more capable timing tester. There are other, more minor changes as well. A full list of changes is described in the [http://google-sparsehash.googlecode.com/svn/tags/sparsehash-1.8/ChangeLog ChangeLog].

=== 31 March 2010 ===

I've just released sparsehash 1.7. The major news here is the addition of `Allocator` support. Previously, these hashtable classes would just ignore the `Allocator` template parameter. They now respect it, and even inherit `size_type`, `pointer`, etc. from the allocator class. By default, they use a special allocator we provide that uses libc `malloc` and `free` to allocate. The hash classes notice when this special allocator is being used, and use `realloc` when they can. This means that the default allocator is significantly faster than custom allocators are likely to be (since realloc-like functionality is not supported by STL allocators).

There are a few more minor changes as well. A full list of changes is described in the [http://google-sparsehash.googlecode.com/svn/tags/sparsehash-1.7/ChangeLog ChangeLog].

=== 11 January 2010 ===

I've just released sparsehash 1.6. The API has widened a bit with the addition of `deleted_key()` and `empty_key()`, which let you query what values these keys have. A few rather obscure bugs have been fixed (such as an error when copying one hashtable into another when the empty_keys differ).
A full list of changes is described in the [http://google-sparsehash.googlecode.com/svn/tags/sparsehash-1.6/ChangeLog ChangeLog].

=== 9 May 2009 ===

I've just released sparsehash 1.5.1. Hot on the heels of sparsehash 1.5, this release fixes a longstanding bug in the sparsehash code, where `equal_range` would always return an empty range. It now works as documented. All sparsehash users are encouraged to upgrade.

=== 7 May 2009 ===

I've just released sparsehash 1.5. This release introduces tr1 compatibility: I've added `rehash`, `begin(i)`, and other methods that are expected to be part of the `unordered_map` API once `tr1` is introduced. This allows `sparse_hash_map`, `dense_hash_map`, `sparse_hash_set`, and `dense_hash_set` to be (almost) drop-in replacements for `unordered_map` and `unordered_set`.

There is no need to upgrade unless you need this functionality, or need one of the other, more minor, changes described in the [http://google-sparsehash.googlecode.com/svn/tags/sparsehash-1.5/ChangeLog ChangeLog].
===== sparsehash-2.0.2/vsprojects/ (Visual Studio project files) =====

hashtable_test/hashtable_test.vcproj
time_hash_map/time_hash_map.vcproj
type_traits_unittest/type_traits_unittest.vcproj
simple_test/simple_test.vcproj
sparsetable_unittest/sparsetable_unittest.vcproj
libc_allocator_with_realloc_test/libc_allocator_with_realloc_test.vcproj

===== sparsehash-2.0.2/README =====

This directory contains several hash-map implementations, similar in API to SGI's hash_map class, but with different performance characteristics. sparse_hash_map uses very little space overhead, 1-2 bits per entry. dense_hash_map is very fast, particularly on lookup.
(sparse_hash_set and dense_hash_set are the set versions of these routines.) On the other hand, these classes have requirements that may not make them appropriate for all applications.

All these implementations use a hashtable with internal quadratic probing. This method is space-efficient -- there is no pointer overhead -- and time-efficient for good hash functions.

COMPILING
---------
To compile test applications with these classes, run ./configure followed by make. To install these header files on your system, run 'make install'. (On Windows, the instructions are different; see README_windows.txt.) See INSTALL for more details.

This code should work on any modern C++ system. It has been tested on Linux (Ubuntu, Fedora, RedHat, Debian), Solaris 10 x86, FreeBSD 6.0, OS X 10.3 and 10.4, and Windows under both VC++7 and VC++8.

USING
-----
See the html files in the doc directory for small example programs that use these classes. It's enough to just include the header file:

   #include <sparsehash/sparse_hash_map>  // or sparse_hash_set, dense_hash_map, ...
   google::sparse_hash_set<int> number_mapper;

and use the class the way you would other hash-map implementations. (Though see "API" below for caveats.)

By default (you can change it via a flag to ./configure), these hash implementations are defined in the google namespace.

API
---
The API for sparse_hash_map, dense_hash_map, sparse_hash_set, and dense_hash_set is a superset of the API of SGI's hash_map class. See doc/sparse_hash_map.html, et al., for more information about the API.

The usage of these classes differs from SGI's hash_map, and other hashtable implementations, in the following major ways:

1) dense_hash_map requires you to set aside one key value as the 'empty bucket' value, set via the set_empty_key() method. This *MUST* be called before you can use the dense_hash_map. It is illegal to insert any elements into a dense_hash_map whose key is equal to the empty-key.
2) For both dense_hash_map and sparse_hash_map, if you wish to delete elements from the hashtable, you must set aside a key value as the 'deleted bucket' value, set via the set_deleted_key() method. If your hash-map is insert-only, there is no need to call this method. If you call set_deleted_key(), it is illegal to insert any elements into a dense_hash_map or sparse_hash_map whose key is equal to the deleted-key.

3) These hash-map implementations support I/O. See below.

There are also some smaller differences:

1) The constructor takes an optional argument that specifies the number of elements you expect to insert into the hashtable. This differs from SGI's hash_map implementation, which takes an optional number of buckets.

2) erase() does not immediately reclaim memory. As a consequence, erase() does not invalidate any iterators, making loops like this correct:

      for (it = ht.begin(); it != ht.end(); ++it)
        if (...) ht.erase(it);

   As another consequence, a series of erase() calls can leave your hashtable using more memory than it needs to. The hashtable will automatically compact at the next call to insert(), but to manually compact a hashtable, you can call

      ht.resize(0)

I/O
---
In addition to the normal hash-map operations, sparse_hash_map can read and write hashtables to disk. (dense_hash_map also has the API, but it has not yet been implemented, and writes will always fail.)

In the simplest case, writing a hashtable is as easy as calling two methods on the hashtable:

   ht.write_metadata(fp);
   ht.write_nopointer_data(fp);

Reading in this data is equally simple:

   google::sparse_hash_map<...> ht;
   ht.read_metadata(fp);
   ht.read_nopointer_data(fp);

The above is sufficient if the key and value do not contain any pointers: they are basic C types or agglomerations of basic C types. If the key and/or value do contain pointers, you can still store the hashtable by replacing write_nopointer_data() with a custom writing routine. See sparse_hash_map.html et al. for more information.
SPARSETABLE
-----------
In addition to the hash-map and hash-set classes, this package also provides sparsetable.h, an array implementation that uses space proportional to the number of elements in the array, rather than the maximum element index. It uses very little space overhead: 1 bit per entry. See doc/sparsetable.html for the API.

RESOURCE USAGE
--------------
* sparse_hash_map has memory overhead of about 2 bits per hash-map entry.
* dense_hash_map has a factor of 2-3 memory overhead: if your hashtable data takes X bytes, dense_hash_map will use 3X-4X memory total.

Hashtables tend to double in size when resizing, creating an additional 50% space overhead. dense_hash_map does in fact have a significant "high water mark" memory use requirement. sparse_hash_map, however, is written to need very little space overhead when resizing: only a few bits per hashtable entry.

PERFORMANCE
-----------
You can compile and run the included file time_hash_map.cc to examine the performance of sparse_hash_map, dense_hash_map, and your native hash_map implementation on your system. One test against the SGI hash_map implementation gave the following timing information for a simple find() call:

   SGI hash_map:     22 ns
   dense_hash_map:   13 ns
   sparse_hash_map: 117 ns
   SGI map:         113 ns

See doc/performance.html for more detailed charts on resource usage and performance data.

---
16 March 2005
(Last updated: 12 September 2010)

===== sparsehash-2.0.2/install-sh =====

#!/bin/sh
# install - install a program, script, or datafile

scriptversion=2009-04-28.21; # UTC

# This originates from X11R5 (mit/util/scripts/install.sh), which was
# later released in X11R6 (xc/config/util/install.sh) with the
# following copyright and license.
#
# Copyright (C) 1994 X Consortium
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to
# deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL THE
# X CONSORTIUM BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN
# AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNEC-
# TION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#
# Except as contained in this notice, the name of the X Consortium shall not
# be used in advertising or otherwise to promote the sale, use or other deal-
# ings in this Software without prior written authorization from the X Consor-
# tium.
#
#
# FSF changes to this file are in the public domain.
#
# Calling this script install-sh is preferred over install.sh, to prevent
# `make' implicit rules from creating a file called install from it
# when there is no Makefile.
#
# This script is compatible with the BSD install script, but was written
# from scratch.

nl='
'
IFS=" "" $nl"

# set DOITPROG to echo to test this script

# Don't use :- since 4.3BSD and earlier shells don't like it.
doit=${DOITPROG-}
if test -z "$doit"; then
  doit_exec=exec
else
  doit_exec=$doit
fi

# Put in absolute file names if you don't have them in your path;
# or use environment vars.
chgrpprog=${CHGRPPROG-chgrp}
chmodprog=${CHMODPROG-chmod}
chownprog=${CHOWNPROG-chown}
cmpprog=${CMPPROG-cmp}
cpprog=${CPPROG-cp}
mkdirprog=${MKDIRPROG-mkdir}
mvprog=${MVPROG-mv}
rmprog=${RMPROG-rm}
stripprog=${STRIPPROG-strip}

posix_glob='?'
initialize_posix_glob='
  test "$posix_glob" != "?" || {
    if (set -f) 2>/dev/null; then
      posix_glob=
    else
      posix_glob=:
    fi
  }
'

posix_mkdir=

# Desired mode of installed file.
mode=0755

chgrpcmd=
chmodcmd=$chmodprog
chowncmd=
mvcmd=$mvprog
rmcmd="$rmprog -f"
stripcmd=

src=
dst=
dir_arg=
dst_arg=

copy_on_change=false
no_target_directory=

usage="\
Usage: $0 [OPTION]... [-T] SRCFILE DSTFILE
   or: $0 [OPTION]... SRCFILES... DIRECTORY
   or: $0 [OPTION]... -t DIRECTORY SRCFILES...
   or: $0 [OPTION]... -d DIRECTORIES...

In the 1st form, copy SRCFILE to DSTFILE.
In the 2nd and 3rd, copy all SRCFILES to DIRECTORY.
In the 4th, create DIRECTORIES.

Options:
     --help     display this help and exit.
     --version  display version info and exit.

  -c            (ignored)
  -C            install only if different (preserve the last data modification time)
  -d            create directories instead of installing files.
  -g GROUP      $chgrpprog installed files to GROUP.
  -m MODE       $chmodprog installed files to MODE.
  -o USER       $chownprog installed files to USER.
  -s            $stripprog installed files.
  -t DIRECTORY  install into DIRECTORY.
  -T            report an error if DSTFILE is a directory.
Environment variables override the default commands:
  CHGRPPROG CHMODPROG CHOWNPROG CMPPROG CPPROG MKDIRPROG MVPROG
  RMPROG STRIPPROG
"

while test $# -ne 0; do
  case $1 in
    -c) ;;

    -C) copy_on_change=true;;

    -d) dir_arg=true;;

    -g) chgrpcmd="$chgrpprog $2"
        shift;;

    --help) echo "$usage"; exit $?;;

    -m) mode=$2
        case $mode in
          *' '* | *'	'* | *'
'* | *'*'* | *'?'* | *'['*)
            echo "$0: invalid mode: $mode" >&2
            exit 1;;
        esac
        shift;;

    -o) chowncmd="$chownprog $2"
        shift;;

    -s) stripcmd=$stripprog;;

    -t) dst_arg=$2
        shift;;

    -T) no_target_directory=true;;

    --version) echo "$0 $scriptversion"; exit $?;;

    --) shift
        break;;

    -*) echo "$0: invalid option: $1" >&2
        exit 1;;

    *)  break;;
  esac
  shift
done

if test $# -ne 0 && test -z "$dir_arg$dst_arg"; then
  # When -d is used, all remaining arguments are directories to create.
  # When -t is used, the destination is already specified.
  # Otherwise, the last argument is the destination.  Remove it from $@.
  for arg
  do
    if test -n "$dst_arg"; then
      # $@ is not empty: it contains at least $arg.
      set fnord "$@" "$dst_arg"
      shift # fnord
    fi
    shift # arg
    dst_arg=$arg
  done
fi

if test $# -eq 0; then
  if test -z "$dir_arg"; then
    echo "$0: no input file specified." >&2
    exit 1
  fi
  # It's OK to call `install-sh -d' without argument.
  # This can happen when creating conditional directories.
  exit 0
fi

if test -z "$dir_arg"; then
  trap '(exit $?); exit' 1 2 13 15

  # Set umask so as not to create temps with too-generous modes.
  # However, 'strip' requires both read and write access to temps.
  case $mode in
    # Optimize common cases.
    *644) cp_umask=133;;
    *755) cp_umask=22;;

    *[0-7])
      if test -z "$stripcmd"; then
        u_plus_rw=
      else
        u_plus_rw='% 200'
      fi
      cp_umask=`expr '(' 777 - $mode % 1000 ')' $u_plus_rw`;;
    *)
      if test -z "$stripcmd"; then
        u_plus_rw=
      else
        u_plus_rw=,u+rw
      fi
      cp_umask=$mode$u_plus_rw;;
  esac
fi

for src
do
  # Protect names starting with `-'.
  case $src in
    -*) src=./$src;;
  esac

  if test -n "$dir_arg"; then
    dst=$src
    dstdir=$dst
    test -d "$dstdir"
    dstdir_status=$?
  else

    # Waiting for this to be detected by the "$cpprog $src $dsttmp" command
    # might cause directories to be created, which would be especially bad
    # if $src (and thus $dsttmp) contains '*'.
    if test ! -f "$src" && test ! -d "$src"; then
      echo "$0: $src does not exist." >&2
      exit 1
    fi

    if test -z "$dst_arg"; then
      echo "$0: no destination specified." >&2
      exit 1
    fi

    dst=$dst_arg
    # Protect names starting with `-'.
    case $dst in
      -*) dst=./$dst;;
    esac

    # If destination is a directory, append the input filename; won't work
    # if double slashes aren't ignored.
    if test -d "$dst"; then
      if test -n "$no_target_directory"; then
        echo "$0: $dst_arg: Is a directory" >&2
        exit 1
      fi
      dstdir=$dst
      dst=$dstdir/`basename "$src"`
      dstdir_status=0
    else
      # Prefer dirname, but fall back on a substitute if dirname fails.
      dstdir=`
        (dirname "$dst") 2>/dev/null ||
        expr X"$dst" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \
             X"$dst" : 'X\(//\)[^/]' \| \
             X"$dst" : 'X\(//\)$' \| \
             X"$dst" : 'X\(/\)' \| . 2>/dev/null ||
        echo X"$dst" |
            sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{
                   s//\1/
                   q
                 }
                 /^X\(\/\/\)[^/].*/{
                   s//\1/
                   q
                 }
                 /^X\(\/\/\)$/{
                   s//\1/
                   q
                 }
                 /^X\(\/\).*/{
                   s//\1/
                   q
                 }
                 s/.*/./; q'
      `

      test -d "$dstdir"
      dstdir_status=$?
    fi
  fi

  obsolete_mkdir_used=false

  if test $dstdir_status != 0; then
    case $posix_mkdir in
      '')
        # Create intermediate dirs using mode 755 as modified by the umask.
        # This is like FreeBSD 'install' as of 1997-10-28.
        umask=`umask`
        case $stripcmd.$umask in
          # Optimize common cases.
          *[2367][2367]) mkdir_umask=$umask;;
          .*0[02][02] | .[02][02] | .[02]) mkdir_umask=22;;

          *[0-7])
            mkdir_umask=`expr $umask + 22 \
              - $umask % 100 % 40 + $umask % 20 \
              - $umask % 10 % 4 + $umask % 2
            `;;
          *) mkdir_umask=$umask,go-w;;
        esac

        # With -d, create the new directory with the user-specified mode.
        # Otherwise, rely on $mkdir_umask.
        if test -n "$dir_arg"; then
          mkdir_mode=-m$mode
        else
          mkdir_mode=
        fi

        posix_mkdir=false
        case $umask in
          *[123567][0-7][0-7])
            # POSIX mkdir -p sets u+wx bits regardless of umask, which
            # is incompatible with FreeBSD 'install' when (umask & 300) != 0.
            ;;
          *)
            tmpdir=${TMPDIR-/tmp}/ins$RANDOM-$$
            trap 'ret=$?; rmdir "$tmpdir/d" "$tmpdir" 2>/dev/null; exit $ret' 0

            if (umask $mkdir_umask &&
                exec $mkdirprog $mkdir_mode -p -- "$tmpdir/d") >/dev/null 2>&1
            then
              if test -z "$dir_arg" || {
                   # Check for POSIX incompatibilities with -m.
                   # HP-UX 11.23 and IRIX 6.5 mkdir -m -p sets group- or
                   # other-writeable bit of parent directory when it shouldn't.
                   # FreeBSD 6.1 mkdir -m -p sets mode of existing directory.
                   ls_ld_tmpdir=`ls -ld "$tmpdir"`
                   case $ls_ld_tmpdir in
                     d????-?r-*) different_mode=700;;
                     d????-?--*) different_mode=755;;
                     *) false;;
                   esac &&
                   $mkdirprog -m$different_mode -p -- "$tmpdir" && {
                     ls_ld_tmpdir_1=`ls -ld "$tmpdir"`
                     test "$ls_ld_tmpdir" = "$ls_ld_tmpdir_1"
                   }
                 }
              then posix_mkdir=:
              fi
              rmdir "$tmpdir/d" "$tmpdir"
            else
              # Remove any dirs left behind by ancient mkdir implementations.
              rmdir ./$mkdir_mode ./-p ./-- 2>/dev/null
            fi
            trap '' 0;;
        esac;;
    esac

    if
      $posix_mkdir && (
        umask $mkdir_umask &&
        $doit_exec $mkdirprog $mkdir_mode -p -- "$dstdir"
      )
    then :
    else

      # The umask is ridiculous, or mkdir does not conform to POSIX,
      # or it failed possibly due to a race condition.  Create the
      # directory the slow way, step by step, checking for races as we go.

      case $dstdir in
        /*) prefix='/';;
        -*) prefix='./';;
        *)  prefix='';;
      esac

      eval "$initialize_posix_glob"

      oIFS=$IFS
      IFS=/
      $posix_glob set -f
      set fnord $dstdir
      shift
      $posix_glob set +f
      IFS=$oIFS

      prefixes=

      for d
      do
        test -z "$d" && continue

        prefix=$prefix$d
        if test -d "$prefix"; then
          prefixes=
        else
          if $posix_mkdir; then
            (umask=$mkdir_umask &&
             $doit_exec $mkdirprog $mkdir_mode -p -- "$dstdir") && break
            # Don't fail if two instances are running concurrently.
            test -d "$prefix" || exit 1
          else
            case $prefix in
              *\'*) qprefix=`echo "$prefix" | sed "s/'/'\\\\\\\\''/g"`;;
              *) qprefix=$prefix;;
            esac
            prefixes="$prefixes '$qprefix'"
          fi
        fi
        prefix=$prefix/
      done

      if test -n "$prefixes"; then
        # Don't fail if two instances are running concurrently.
        (umask $mkdir_umask &&
         eval "\$doit_exec \$mkdirprog $prefixes") ||
          test -d "$dstdir" || exit 1
        obsolete_mkdir_used=true
      fi
    fi
  fi

  if test -n "$dir_arg"; then
    { test -z "$chowncmd" || $doit $chowncmd "$dst"; } &&
    { test -z "$chgrpcmd" || $doit $chgrpcmd "$dst"; } &&
    { test "$obsolete_mkdir_used$chowncmd$chgrpcmd" = false ||
      test -z "$chmodcmd" || $doit $chmodcmd $mode "$dst"; } || exit 1
  else

    # Make a couple of temp file names in the proper directory.
    dsttmp=$dstdir/_inst.$$_
    rmtmp=$dstdir/_rm.$$_

    # Trap to clean up those temp files at exit.
    trap 'ret=$?; rm -f "$dsttmp" "$rmtmp" && exit $ret' 0

    # Copy the file name to the temp name.
    (umask $cp_umask && $doit_exec $cpprog "$src" "$dsttmp") &&

    # and set any options; do chmod last to preserve setuid bits.
    #
    # If any of these fail, we abort the whole thing.  If we want to
    # ignore errors from any of these, just make sure not to ignore
    # errors from the above "$doit $cpprog $src $dsttmp" command.
    #
    { test -z "$chowncmd" || $doit $chowncmd "$dsttmp"; } &&
    { test -z "$chgrpcmd" || $doit $chgrpcmd "$dsttmp"; } &&
    { test -z "$stripcmd" || $doit $stripcmd "$dsttmp"; } &&
    { test -z "$chmodcmd" || $doit $chmodcmd $mode "$dsttmp"; } &&

    # If -C, don't bother to copy if it wouldn't change the file.
    if $copy_on_change &&
       old=`LC_ALL=C ls -dlL "$dst" 2>/dev/null` &&
       new=`LC_ALL=C ls -dlL "$dsttmp" 2>/dev/null` &&

       eval "$initialize_posix_glob" &&
       $posix_glob set -f &&
       set X $old && old=:$2:$4:$5:$6 &&
       set X $new && new=:$2:$4:$5:$6 &&
       $posix_glob set +f &&

       test "$old" = "$new" &&
       $cmpprog "$dst" "$dsttmp" >/dev/null 2>&1
    then
      rm -f "$dsttmp"
    else
      # Rename the file to the real destination.
      $doit $mvcmd -f "$dsttmp" "$dst" 2>/dev/null ||

      # The rename failed, perhaps because mv can't rename something else
      # to itself, or perhaps because mv is so ancient that it does not
      # support -f.
      {
        # Now remove or move aside any old file at destination location.
        # We try this two ways since rm can't unlink itself on some
        # systems and the destination file might be busy for other
        # reasons.  In this case, the final cleanup might fail but the new
        # file should still install successfully.
        {
          test ! -f "$dst" ||
          $doit $rmcmd -f "$dst" 2>/dev/null ||
          { $doit $mvcmd -f "$dst" "$rmtmp" 2>/dev/null &&
            { $doit $rmcmd -f "$rmtmp" 2>/dev/null; :; }
          } ||
          { echo "$0: cannot unlink or rename $dst" >&2
            (exit 1); exit 1
          }
        } &&

        # Now rename the file to the real destination.
        $doit $mvcmd "$dsttmp" "$dst"
      }
    fi || exit 1

    trap '' 0
  fi
done

# Local variables:
# eval: (add-hook 'write-file-hooks 'time-stamp)
# time-stamp-start: "scriptversion="
# time-stamp-format: "%:y-%02m-%02d.%02H"
# time-stamp-time-zone: "UTC"
# time-stamp-end: "; # UTC"
# End:

===== sparsehash-2.0.2/README_windows.txt =====

This project has been ported to Windows. A working solution file exists in this directory: sparsehash.sln

You can load this solution file into either VC++ 7.1 (Visual Studio 2003) or VC++ 8.0 (Visual Studio 2005) -- in the latter case, it will automatically convert the files to the latest format for you.

When you build the solution, it will create a number of unittests, which you can run by hand (or, more easily, under the Visual Studio debugger) to make sure everything is working properly on your system. The binaries will end up in a directory called "debug" or "release" in the top-level directory (next to the .sln file).

Note that these systems are set to build in Debug mode by default. You may want to change them to Release mode.

I have little experience with Windows programming, so there may be better ways to set this up than I've done!
If you run across any problems, please post to the google-sparsehash Google Group, or report them on the sparsehash Google Code site:
   http://groups.google.com/group/google-sparsehash
   http://code.google.com/p/sparsehash/issues/list

-- craig

===== sparsehash-2.0.2/src/sparsehash/template_util.h =====

// Copyright 2005 Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED.
// IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// ----
//
// Template metaprogramming utility functions.
//
// This code is compiled directly on many platforms, including client
// platforms like Windows, Mac, and embedded systems.  Before making
// any changes here, make sure that you're not breaking any platforms.
//
// The names chosen here reflect those used in tr1 and the boost::mpl
// library; there are similar operations used in the Loki library as
// well.  I prefer the boost names for 2 reasons:
// 1. I think that portions of the Boost libraries are more likely to
//    be included in the c++ standard.
// 2. It is not impossible that some of the boost libraries will be
//    included in our own build in the future.
// Both of these outcomes means that we may be able to directly replace
// some of these with boost equivalents.

#ifndef BASE_TEMPLATE_UTIL_H_
#define BASE_TEMPLATE_UTIL_H_

#include <sparsehash/internal/sparseconfig.h>

_START_GOOGLE_NAMESPACE_

// Types small_ and big_ are guaranteed such that sizeof(small_) <
// sizeof(big_)
typedef char small_;

struct big_ {
  char dummy[2];
};

// Identity metafunction.
template <class T>
struct identity_ {
  typedef T type;
};

// integral_constant, defined in tr1, is a wrapper for an integer
// value.  We don't really need this generality; we could get away
// with hardcoding the integer type to bool.  We use the fully
// general integer_constant for compatibility with tr1.
template <class T, T v>
struct integral_constant {
  static const T value = v;
  typedef T value_type;
  typedef integral_constant<T, v> type;
};

template <class T, T v> const T integral_constant<T, v>::value;

// Abbreviations: true_type and false_type are structs that represent boolean
// true and false values.  Also define the boost::mpl versions of those names,
// true_ and false_.
typedef integral_constant<bool, true>  true_type;
typedef integral_constant<bool, false> false_type;
typedef true_type  true_;
typedef false_type false_;

// if_ is a templatized conditional statement.
// if_<cond, A, B> is a compile time evaluation of cond.
// if_<>::type contains A if cond is true, B otherwise.
template <bool cond, typename A, typename B>
struct if_ { typedef A type; };

template <typename A, typename B>
struct if_<false, A, B> { typedef B type; };

// type_equals_ is a template type comparator, similar to Loki IsSameType.
// type_equals_<A, B>::value is true iff "A" is the same type as "B".
//
// New code should prefer base::is_same, defined in base/type_traits.h.
// It is functionally identical, but is_same is the standard spelling.
template <typename A, typename B>
struct type_equals_ : public false_ { };

template <typename A>
struct type_equals_<A, A> : public true_ { };

// and_ is a template && operator.
// and_<A, B>::value evaluates "A::value && B::value".
template <typename A, typename B>
struct and_ : public integral_constant<bool, (A::value && B::value)> { };

// or_ is a template || operator.
// or_<A, B>::value evaluates "A::value || B::value".
template <typename A, typename B>
struct or_ : public integral_constant<bool, (A::value || B::value)> { };

_END_GOOGLE_NAMESPACE_

#endif  // BASE_TEMPLATE_UTIL_H_
sparsehash-2.0.2/src/sparsehash/internal/0000775000175000017500000000000011721550526015427 500000000000000sparsehash-2.0.2/src/sparsehash/internal/sparsehashtable.h0000664000175000017500000014673211721252346020675 00000000000000// Copyright (c) 2005, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // --- // // A sparse hashtable is a particular implementation of // a hashtable: one that is meant to minimize memory use. // It does this by using a *sparse table* (cf sparsetable.h), // which uses between 1 and 2 bits to store empty buckets // (we may need another bit for hashtables that support deletion). // // When empty buckets are so cheap, an appealing hashtable // implementation is internal probing, in which the hashtable // is a single table, and collisions are resolved by trying // to insert again in another bucket. The most cache-efficient // internal probing schemes are linear probing (which suffers, // alas, from clumping) and quadratic probing, which is what // we implement by default. // // Deleted buckets are a bit of a pain. 
We have to somehow mark // deleted buckets (the probing must distinguish them from empty // buckets). The most principled way is to have another bitmap, // but that's annoying and takes up space. Instead we let the // user specify an "impossible" key. We set deleted buckets // to have the impossible key. // // Note it is possible to change the value of the delete key // on the fly; you can even remove it, though after that point // the hashtable is insert_only until you set it again. // // You probably shouldn't use this code directly. Use // sparse_hash_map<> or sparse_hash_set<> instead. // // You can modify the following, below: // HT_OCCUPANCY_PCT -- how full before we double size // HT_EMPTY_PCT -- how empty before we halve size // HT_MIN_BUCKETS -- smallest bucket size // HT_DEFAULT_STARTING_BUCKETS -- default bucket size at construct-time // // You can also change enlarge_factor (which defaults to // HT_OCCUPANCY_PCT), and shrink_factor (which defaults to // HT_EMPTY_PCT) with set_resizing_parameters(). // // How to decide what values to use? // shrink_factor's default of .4 * OCCUPANCY_PCT, is probably good. // HT_MIN_BUCKETS is probably unnecessary since you can specify // (indirectly) the starting number of buckets at construct-time. // For enlarge_factor, you can use this chart to try to trade-off // expected lookup time to the space taken up. By default, this // code uses quadratic probing, though you can change it to linear // via _JUMP below if you really want to. // // From http://www.augustana.ca/~mohrj/courses/1999.fall/csc210/lecture_notes/hashing.html // NUMBER OF PROBES / LOOKUP Successful Unsuccessful // Quadratic collision resolution 1 - ln(1-L) - L/2 1/(1-L) - L - ln(1-L) // Linear collision resolution [1+1/(1-L)]/2 [1+1/(1-L)2]/2 // // -- enlarge_factor -- 0.10 0.50 0.60 0.75 0.80 0.90 0.99 // QUADRATIC COLLISION RES. 
//    probes/successful lookup    1.05  1.44  1.62  2.01  2.21  2.85  5.11
//    probes/unsuccessful lookup  1.11  2.19  2.82  4.64  5.81  11.4  103.6
// LINEAR COLLISION RES.
//    probes/successful lookup    1.06  1.5   1.75  2.5   3.0   5.5   50.5
//    probes/unsuccessful lookup  1.12  2.5   3.6   8.5   13.0  50.0  5000.0
//
// The value type is required to be copy constructible and default
// constructible, but it need not be (and commonly isn't) assignable.

#ifndef _SPARSEHASHTABLE_H_
#define _SPARSEHASHTABLE_H_

#include <sparsehash/internal/sparseconfig.h>
#include <assert.h>
#include <algorithm>                 // For swap(), eg
#include <iterator>                  // for iterator tags
#include <limits>                    // for numeric_limits
#include <utility>                   // for pair
#include <sparsehash/type_traits.h>  // for remove_const
#include <sparsehash/internal/hashtable-common.h>
#include <sparsehash/sparsetable>    // IWYU pragma: export
#include <stdexcept>                 // For length_error

_START_GOOGLE_NAMESPACE_

namespace base {   // just to make google->opensource transition easier
using GOOGLE_NAMESPACE::remove_const;
}

#ifndef SPARSEHASH_STAT_UPDATE
#define SPARSEHASH_STAT_UPDATE(x) ((void) 0)
#endif

// The probing method
// Linear probing
// #define JUMP_(key, num_probes)    ( 1 )
// Quadratic probing
#define JUMP_(key, num_probes)    ( num_probes )

// The smaller this is, the faster lookup is (because the group bitmap is
// smaller) and the faster insert is, because there's less to move.
// On the other hand, there are more groups.  Since group::size_type is
// a short, this number should be of the form 32*x + 16 to avoid waste.
static const u_int16_t DEFAULT_GROUP_SIZE = 48;   // fits in 1.5 words

// Hashtable class, used to implement the hashed associative containers
// hash_set and hash_map.
//
// Value: what is stored in the table (each bucket is a Value).
// Key: something in a 1-to-1 correspondence to a Value, that can be used
//      to search for a Value in the table (find() takes a Key).
// HashFcn: Takes a Key and returns an integer, the more unique the better.
// ExtractKey: given a Value, returns the unique Key associated with it.
//             Must inherit from unary_function<Value, const Key&>, or at
//             least have a result_type enum indicating the return type of
//             operator().
// SetKey: given a Value* and a Key, modifies the value such that // ExtractKey(value) == key. We guarantee this is only called // with key == deleted_key. // EqualKey: Given two Keys, says whether they are the same (that is, // if they are both associated with the same Value). // Alloc: STL allocator to use to allocate memory. template class sparse_hashtable; template struct sparse_hashtable_iterator; template struct sparse_hashtable_const_iterator; // As far as iterating, we're basically just a sparsetable // that skips over deleted elements. template struct sparse_hashtable_iterator { private: typedef typename A::template rebind::other value_alloc_type; public: typedef sparse_hashtable_iterator iterator; typedef sparse_hashtable_const_iterator const_iterator; typedef typename sparsetable::nonempty_iterator st_iterator; typedef std::forward_iterator_tag iterator_category; // very little defined! typedef V value_type; typedef typename value_alloc_type::difference_type difference_type; typedef typename value_alloc_type::size_type size_type; typedef typename value_alloc_type::reference reference; typedef typename value_alloc_type::pointer pointer; // "Real" constructor and default constructor sparse_hashtable_iterator(const sparse_hashtable *h, st_iterator it, st_iterator it_end) : ht(h), pos(it), end(it_end) { advance_past_deleted(); } sparse_hashtable_iterator() { } // not ever used internally // The default destructor is fine; we don't define one // The default operator= is fine; we don't define one // Happy dereferencer reference operator*() const { return *pos; } pointer operator->() const { return &(operator*()); } // Arithmetic. 
The only hard part is making sure that // we're not on a marked-deleted array element void advance_past_deleted() { while ( pos != end && ht->test_deleted(*this) ) ++pos; } iterator& operator++() { assert(pos != end); ++pos; advance_past_deleted(); return *this; } iterator operator++(int) { iterator tmp(*this); ++*this; return tmp; } // Comparison. bool operator==(const iterator& it) const { return pos == it.pos; } bool operator!=(const iterator& it) const { return pos != it.pos; } // The actual data const sparse_hashtable *ht; st_iterator pos, end; }; // Now do it all again, but with const-ness! template struct sparse_hashtable_const_iterator { private: typedef typename A::template rebind::other value_alloc_type; public: typedef sparse_hashtable_iterator iterator; typedef sparse_hashtable_const_iterator const_iterator; typedef typename sparsetable::const_nonempty_iterator st_iterator; typedef std::forward_iterator_tag iterator_category; // very little defined! typedef V value_type; typedef typename value_alloc_type::difference_type difference_type; typedef typename value_alloc_type::size_type size_type; typedef typename value_alloc_type::const_reference reference; typedef typename value_alloc_type::const_pointer pointer; // "Real" constructor and default constructor sparse_hashtable_const_iterator(const sparse_hashtable *h, st_iterator it, st_iterator it_end) : ht(h), pos(it), end(it_end) { advance_past_deleted(); } // This lets us convert regular iterators to const iterators sparse_hashtable_const_iterator() { } // never used internally sparse_hashtable_const_iterator(const iterator &it) : ht(it.ht), pos(it.pos), end(it.end) { } // The default destructor is fine; we don't define one // The default operator= is fine; we don't define one // Happy dereferencer reference operator*() const { return *pos; } pointer operator->() const { return &(operator*()); } // Arithmetic. 
The only hard part is making sure that // we're not on a marked-deleted array element void advance_past_deleted() { while ( pos != end && ht->test_deleted(*this) ) ++pos; } const_iterator& operator++() { assert(pos != end); ++pos; advance_past_deleted(); return *this; } const_iterator operator++(int) { const_iterator tmp(*this); ++*this; return tmp; } // Comparison. bool operator==(const const_iterator& it) const { return pos == it.pos; } bool operator!=(const const_iterator& it) const { return pos != it.pos; } // The actual data const sparse_hashtable *ht; st_iterator pos, end; }; // And once again, but this time freeing up memory as we iterate template struct sparse_hashtable_destructive_iterator { private: typedef typename A::template rebind::other value_alloc_type; public: typedef sparse_hashtable_destructive_iterator iterator; typedef typename sparsetable::destructive_iterator st_iterator; typedef std::forward_iterator_tag iterator_category; // very little defined! typedef V value_type; typedef typename value_alloc_type::difference_type difference_type; typedef typename value_alloc_type::size_type size_type; typedef typename value_alloc_type::reference reference; typedef typename value_alloc_type::pointer pointer; // "Real" constructor and default constructor sparse_hashtable_destructive_iterator(const sparse_hashtable *h, st_iterator it, st_iterator it_end) : ht(h), pos(it), end(it_end) { advance_past_deleted(); } sparse_hashtable_destructive_iterator() { } // never used internally // The default destructor is fine; we don't define one // The default operator= is fine; we don't define one // Happy dereferencer reference operator*() const { return *pos; } pointer operator->() const { return &(operator*()); } // Arithmetic. 
The only hard part is making sure that // we're not on a marked-deleted array element void advance_past_deleted() { while ( pos != end && ht->test_deleted(*this) ) ++pos; } iterator& operator++() { assert(pos != end); ++pos; advance_past_deleted(); return *this; } iterator operator++(int) { iterator tmp(*this); ++*this; return tmp; } // Comparison. bool operator==(const iterator& it) const { return pos == it.pos; } bool operator!=(const iterator& it) const { return pos != it.pos; } // The actual data const sparse_hashtable *ht; st_iterator pos, end; }; template class sparse_hashtable { private: typedef typename Alloc::template rebind::other value_alloc_type; public: typedef Key key_type; typedef Value value_type; typedef HashFcn hasher; typedef EqualKey key_equal; typedef Alloc allocator_type; typedef typename value_alloc_type::size_type size_type; typedef typename value_alloc_type::difference_type difference_type; typedef typename value_alloc_type::reference reference; typedef typename value_alloc_type::const_reference const_reference; typedef typename value_alloc_type::pointer pointer; typedef typename value_alloc_type::const_pointer const_pointer; typedef sparse_hashtable_iterator iterator; typedef sparse_hashtable_const_iterator const_iterator; typedef sparse_hashtable_destructive_iterator destructive_iterator; // These come from tr1. For us they're the same as regular iterators. typedef iterator local_iterator; typedef const_iterator const_local_iterator; // How full we let the table get before we resize, by default. // Knuth says .8 is good -- higher causes us to probe too much, // though it saves memory. static const int HT_OCCUPANCY_PCT; // = 80 (out of 100); // How empty we let the table get before we resize lower, by default. // (0.0 means never resize lower.) // It should be less than OCCUPANCY_PCT / 2 or we thrash resizing static const int HT_EMPTY_PCT; // = 0.4 * HT_OCCUPANCY_PCT; // Minimum size we're willing to let hashtables be. 
// Must be a power of two, and at least 4. // Note, however, that for a given hashtable, the initial size is a // function of the first constructor arg, and may be >HT_MIN_BUCKETS. static const size_type HT_MIN_BUCKETS = 4; // By default, if you don't specify a hashtable size at // construction-time, we use this size. Must be a power of two, and // at least HT_MIN_BUCKETS. static const size_type HT_DEFAULT_STARTING_BUCKETS = 32; // ITERATOR FUNCTIONS iterator begin() { return iterator(this, table.nonempty_begin(), table.nonempty_end()); } iterator end() { return iterator(this, table.nonempty_end(), table.nonempty_end()); } const_iterator begin() const { return const_iterator(this, table.nonempty_begin(), table.nonempty_end()); } const_iterator end() const { return const_iterator(this, table.nonempty_end(), table.nonempty_end()); } // These come from tr1 unordered_map. They iterate over 'bucket' n. // For sparsehashtable, we could consider each 'group' to be a bucket, // I guess, but I don't really see the point. We'll just consider // bucket n to be the n-th element of the sparsetable, if it's occupied, // or some empty element, otherwise. 
local_iterator begin(size_type i) { if (table.test(i)) return local_iterator(this, table.get_iter(i), table.nonempty_end()); else return local_iterator(this, table.nonempty_end(), table.nonempty_end()); } local_iterator end(size_type i) { local_iterator it = begin(i); if (table.test(i) && !test_deleted(i)) ++it; return it; } const_local_iterator begin(size_type i) const { if (table.test(i)) return const_local_iterator(this, table.get_iter(i), table.nonempty_end()); else return const_local_iterator(this, table.nonempty_end(), table.nonempty_end()); } const_local_iterator end(size_type i) const { const_local_iterator it = begin(i); if (table.test(i) && !test_deleted(i)) ++it; return it; } // This is used when resizing destructive_iterator destructive_begin() { return destructive_iterator(this, table.destructive_begin(), table.destructive_end()); } destructive_iterator destructive_end() { return destructive_iterator(this, table.destructive_end(), table.destructive_end()); } // ACCESSOR FUNCTIONS for the things we templatize on, basically hasher hash_funct() const { return settings; } key_equal key_eq() const { return key_info; } allocator_type get_allocator() const { return table.get_allocator(); } // Accessor function for statistics gathering. int num_table_copies() const { return settings.num_ht_copies(); } private: // We need to copy values when we set the special marker for deleted // elements, but, annoyingly, we can't just use the copy assignment // operator because value_type might not be assignable (it's often // pair). We use explicit destructor invocation and // placement new to get around this. Arg. void set_value(pointer dst, const_reference src) { dst->~value_type(); // delete the old value, if any new(dst) value_type(src); } // This is used as a tag for the copy constructor, saying to destroy its // arg We have two ways of destructively copying: with potentially growing // the hashtable as we copy, and without. 
To make sure the outside world // can't do a destructive copy, we make the typename private. enum MoveDontCopyT {MoveDontCopy, MoveDontGrow}; // DELETE HELPER FUNCTIONS // This lets the user describe a key that will indicate deleted // table entries. This key should be an "impossible" entry -- // if you try to insert it for real, you won't be able to retrieve it! // (NB: while you pass in an entire value, only the key part is looked // at. This is just because I don't know how to assign just a key.) private: void squash_deleted() { // gets rid of any deleted entries we have if ( num_deleted ) { // get rid of deleted before writing sparse_hashtable tmp(MoveDontGrow, *this); swap(tmp); // now we are tmp } assert(num_deleted == 0); } // Test if the given key is the deleted indicator. Requires // num_deleted > 0, for correctness of read(), and because that // guarantees that key_info.delkey is valid. bool test_deleted_key(const key_type& key) const { assert(num_deleted > 0); return equals(key_info.delkey, key); } public: void set_deleted_key(const key_type &key) { // It's only safe to change what "deleted" means if we purge deleted guys squash_deleted(); settings.set_use_deleted(true); key_info.delkey = key; } void clear_deleted_key() { squash_deleted(); settings.set_use_deleted(false); } key_type deleted_key() const { assert(settings.use_deleted() && "Must set deleted key before calling deleted_key"); return key_info.delkey; } // These are public so the iterators can use them // True if the item at position bucknum is "deleted" marker bool test_deleted(size_type bucknum) const { // Invariant: !use_deleted() implies num_deleted is 0. assert(settings.use_deleted() || num_deleted == 0); return num_deleted > 0 && table.test(bucknum) && test_deleted_key(get_key(table.unsafe_get(bucknum))); } bool test_deleted(const iterator &it) const { // Invariant: !use_deleted() implies num_deleted is 0. 
assert(settings.use_deleted() || num_deleted == 0); return num_deleted > 0 && test_deleted_key(get_key(*it)); } bool test_deleted(const const_iterator &it) const { // Invariant: !use_deleted() implies num_deleted is 0. assert(settings.use_deleted() || num_deleted == 0); return num_deleted > 0 && test_deleted_key(get_key(*it)); } bool test_deleted(const destructive_iterator &it) const { // Invariant: !use_deleted() implies num_deleted is 0. assert(settings.use_deleted() || num_deleted == 0); return num_deleted > 0 && test_deleted_key(get_key(*it)); } private: void check_use_deleted(const char* caller) { (void)caller; // could log it if the assert failed assert(settings.use_deleted()); } // Set it so test_deleted is true. true if object didn't used to be deleted. // TODO(csilvers): make these private (also in densehashtable.h) bool set_deleted(iterator &it) { check_use_deleted("set_deleted()"); bool retval = !test_deleted(it); // &* converts from iterator to value-type. set_key(&(*it), key_info.delkey); return retval; } // Set it so test_deleted is false. true if object used to be deleted. bool clear_deleted(iterator &it) { check_use_deleted("clear_deleted()"); // Happens automatically when we assign something else in its place. return test_deleted(it); } // We also allow to set/clear the deleted bit on a const iterator. // We allow a const_iterator for the same reason you can delete a // const pointer: it's convenient, and semantically you can't use // 'it' after it's been deleted anyway, so its const-ness doesn't // really matter. bool set_deleted(const_iterator &it) { check_use_deleted("set_deleted()"); bool retval = !test_deleted(it); set_key(const_cast(&(*it)), key_info.delkey); return retval; } // Set it so test_deleted is false. true if object used to be deleted. 
bool clear_deleted(const_iterator &it) { check_use_deleted("clear_deleted()"); return test_deleted(it); } // FUNCTIONS CONCERNING SIZE public: size_type size() const { return table.num_nonempty() - num_deleted; } size_type max_size() const { return table.max_size(); } bool empty() const { return size() == 0; } size_type bucket_count() const { return table.size(); } size_type max_bucket_count() const { return max_size(); } // These are tr1 methods. Their idea of 'bucket' doesn't map well to // what we do. We just say every bucket has 0 or 1 items in it. size_type bucket_size(size_type i) const { return begin(i) == end(i) ? 0 : 1; } private: // Because of the above, size_type(-1) is never legal; use it for errors static const size_type ILLEGAL_BUCKET = size_type(-1); // Used after a string of deletes. Returns true if we actually shrunk. // TODO(csilvers): take a delta so we can take into account inserts // done after shrinking. Maybe make part of the Settings class? bool maybe_shrink() { assert(table.num_nonempty() >= num_deleted); assert((bucket_count() & (bucket_count()-1)) == 0); // is a power of two assert(bucket_count() >= HT_MIN_BUCKETS); bool retval = false; // If you construct a hashtable with < HT_DEFAULT_STARTING_BUCKETS, // we'll never shrink until you get relatively big, and we'll never // shrink below HT_DEFAULT_STARTING_BUCKETS. Otherwise, something // like "dense_hash_set x; x.insert(4); x.erase(4);" will // shrink us down to HT_MIN_BUCKETS buckets, which is too small. 
    const size_type num_remain = table.num_nonempty() - num_deleted;
    const size_type shrink_threshold = settings.shrink_threshold();
    if (shrink_threshold > 0 &&
        num_remain < shrink_threshold &&
        bucket_count() > HT_DEFAULT_STARTING_BUCKETS) {
      const float shrink_factor = settings.shrink_factor();
      size_type sz = bucket_count() / 2;    // find how much we should shrink
      while (sz > HT_DEFAULT_STARTING_BUCKETS &&
             num_remain < static_cast<size_type>(sz * shrink_factor)) {
        sz /= 2;                            // stay a power of 2
      }
      sparse_hashtable tmp(MoveDontCopy, *this, sz);
      swap(tmp);                            // now we are tmp
      retval = true;
    }
    settings.set_consider_shrink(false);    // because we just considered it
    return retval;
  }

  // We'll let you resize a hashtable -- though this makes us copy all!
  // When you resize, you say, "make it big enough for this many more elements"
  // Returns true if we actually resized, false if size was already ok.
  bool resize_delta(size_type delta) {
    bool did_resize = false;
    if ( settings.consider_shrink() ) {  // see if lots of deletes happened
      if ( maybe_shrink() )
        did_resize = true;
    }
    if (table.num_nonempty() >=
        (std::numeric_limits<size_type>::max)() - delta) {
      throw std::length_error("resize overflow");
    }
    if ( bucket_count() >= HT_MIN_BUCKETS &&
         (table.num_nonempty() + delta) <= settings.enlarge_threshold() )
      return did_resize;                    // we're ok as we are

    // Sometimes, we need to resize just to get rid of all the
    // "deleted" buckets that are clogging up the hashtable.  So when
    // deciding whether to resize, count the deleted buckets (which
    // are currently taking up room).  But later, when we decide what
    // size to resize to, *don't* count deleted buckets, since they
    // get discarded during the resize.
    const size_type needed_size =
        settings.min_buckets(table.num_nonempty() + delta, 0);
    if ( needed_size <= bucket_count() )    // we have enough buckets
      return did_resize;

    size_type resize_to =
        settings.min_buckets(table.num_nonempty() - num_deleted + delta,
                             bucket_count());
    if (resize_to < needed_size &&          // may double resize_to
        resize_to < (std::numeric_limits<size_type>::max)() / 2) {
      // This situation means that we have enough deleted elements,
      // that once we purge them, we won't actually have needed to
      // grow.  But we may want to grow anyway: if we just purge one
      // element, say, we'll have to grow anyway next time we
      // insert.  Might as well grow now, since we're already going
      // through the trouble of copying (in order to purge the
      // deleted elements).
      const size_type target =
          static_cast<size_type>(settings.shrink_size(resize_to*2));
      if (table.num_nonempty() - num_deleted + delta >= target) {
        // Good, we won't be below the shrink threshold even if we double.
        resize_to *= 2;
      }
    }
    sparse_hashtable tmp(MoveDontCopy, *this, resize_to);
    swap(tmp);                              // now we are tmp
    return true;
  }

  // Used to actually do the rehashing when we grow/shrink a hashtable
  void copy_from(const sparse_hashtable &ht, size_type min_buckets_wanted) {
    clear();            // clear table, set num_deleted to 0

    // If we need to change the size of our table, do it now
    const size_type resize_to =
        settings.min_buckets(ht.size(), min_buckets_wanted);
    if ( resize_to > bucket_count() ) {     // we don't have enough buckets
      table.resize(resize_to);              // sets the number of buckets
      settings.reset_thresholds(bucket_count());
    }

    // We use a normal iterator to get non-deleted buckets from ht.
    // We could use insert() here, but since we know there are
    // no duplicates and no deleted items, we can be more efficient
    assert((bucket_count() & (bucket_count()-1)) == 0);      // a power of two
    for ( const_iterator it = ht.begin(); it != ht.end(); ++it ) {
      size_type num_probes = 0;             // how many times we've probed
      size_type bucknum;
      const size_type
bucket_count_minus_one = bucket_count() - 1; for (bucknum = hash(get_key(*it)) & bucket_count_minus_one; table.test(bucknum); // not empty bucknum = (bucknum + JUMP_(key, num_probes)) & bucket_count_minus_one) { ++num_probes; assert(num_probes < bucket_count() && "Hashtable is full: an error in key_equal<> or hash<>"); } table.set(bucknum, *it); // copies the value to here } settings.inc_num_ht_copies(); } // Implementation is like copy_from, but it destroys the table of the // "from" guy by freeing sparsetable memory as we iterate. This is // useful in resizing, since we're throwing away the "from" guy anyway. void move_from(MoveDontCopyT mover, sparse_hashtable &ht, size_type min_buckets_wanted) { clear(); // clear table, set num_deleted to 0 // If we need to change the size of our table, do it now size_type resize_to; if ( mover == MoveDontGrow ) resize_to = ht.bucket_count(); // keep same size as old ht else // MoveDontCopy resize_to = settings.min_buckets(ht.size(), min_buckets_wanted); if ( resize_to > bucket_count() ) { // we don't have enough buckets table.resize(resize_to); // sets the number of buckets settings.reset_thresholds(bucket_count()); } // We use a normal iterator to get non-deleted bcks from ht // We could use insert() here, but since we know there are // no duplicates and no deleted items, we can be more efficient assert( (bucket_count() & (bucket_count()-1)) == 0); // a power of two // THIS IS THE MAJOR LINE THAT DIFFERS FROM COPY_FROM(): for ( destructive_iterator it = ht.destructive_begin(); it != ht.destructive_end(); ++it ) { size_type num_probes = 0; // how many times we've probed size_type bucknum; for ( bucknum = hash(get_key(*it)) & (bucket_count()-1); // h % buck_cnt table.test(bucknum); // not empty bucknum = (bucknum + JUMP_(key, num_probes)) & (bucket_count()-1) ) { ++num_probes; assert(num_probes < bucket_count() && "Hashtable is full: an error in key_equal<> or hash<>"); } table.set(bucknum, *it); // copies the value to here } 
settings.inc_num_ht_copies(); } // Required by the spec for hashed associative container public: // Though the docs say this should be num_buckets, I think it's much // more useful as num_elements. As a special feature, calling with // req_elements==0 will cause us to shrink if we can, saving space. void resize(size_type req_elements) { // resize to this or larger if ( settings.consider_shrink() || req_elements == 0 ) maybe_shrink(); if ( req_elements > table.num_nonempty() ) // we only grow resize_delta(req_elements - table.num_nonempty()); } // Get and change the value of shrink_factor and enlarge_factor. The // description at the beginning of this file explains how to choose // the values. Setting the shrink parameter to 0.0 ensures that the // table never shrinks. void get_resizing_parameters(float* shrink, float* grow) const { *shrink = settings.shrink_factor(); *grow = settings.enlarge_factor(); } void set_resizing_parameters(float shrink, float grow) { settings.set_resizing_parameters(shrink, grow); settings.reset_thresholds(bucket_count()); } // CONSTRUCTORS -- as required by the specs, we take a size, // but also let you specify a hashfunction, key comparator, // and key extractor. We also define a copy constructor and =. // DESTRUCTOR -- the default is fine, surprisingly. explicit sparse_hashtable(size_type expected_max_items_in_table = 0, const HashFcn& hf = HashFcn(), const EqualKey& eql = EqualKey(), const ExtractKey& ext = ExtractKey(), const SetKey& set = SetKey(), const Alloc& alloc = Alloc()) : settings(hf), key_info(ext, set, eql), num_deleted(0), table((expected_max_items_in_table == 0 ? HT_DEFAULT_STARTING_BUCKETS : settings.min_buckets(expected_max_items_in_table, 0)), alloc) { settings.reset_thresholds(bucket_count()); } // As a convenience for resize(), we allow an optional second argument // which lets you make this new hashtable a different size than ht. 
// We also provide a mechanism of saying you want to "move" the ht argument // into us instead of copying. sparse_hashtable(const sparse_hashtable& ht, size_type min_buckets_wanted = HT_DEFAULT_STARTING_BUCKETS) : settings(ht.settings), key_info(ht.key_info), num_deleted(0), table(0, ht.get_allocator()) { settings.reset_thresholds(bucket_count()); copy_from(ht, min_buckets_wanted); // copy_from() ignores deleted entries } sparse_hashtable(MoveDontCopyT mover, sparse_hashtable& ht, size_type min_buckets_wanted = HT_DEFAULT_STARTING_BUCKETS) : settings(ht.settings), key_info(ht.key_info), num_deleted(0), table(0, ht.get_allocator()) { settings.reset_thresholds(bucket_count()); move_from(mover, ht, min_buckets_wanted); // ignores deleted entries } sparse_hashtable& operator= (const sparse_hashtable& ht) { if (&ht == this) return *this; // don't copy onto ourselves settings = ht.settings; key_info = ht.key_info; num_deleted = ht.num_deleted; // copy_from() calls clear and sets num_deleted to 0 too copy_from(ht, HT_MIN_BUCKETS); // we purposefully don't copy the allocator, which may not be copyable return *this; } // Many STL algorithms use swap instead of copy constructors void swap(sparse_hashtable& ht) { std::swap(settings, ht.settings); std::swap(key_info, ht.key_info); std::swap(num_deleted, ht.num_deleted); table.swap(ht.table); settings.reset_thresholds(bucket_count()); // also resets consider_shrink ht.settings.reset_thresholds(ht.bucket_count()); // we purposefully don't swap the allocator, which may not be swap-able } // It's always nice to be able to clear a table without deallocating it void clear() { if (!empty() || (num_deleted != 0)) { table.clear(); } settings.reset_thresholds(bucket_count()); num_deleted = 0; } // LOOKUP ROUTINES private: // Returns a pair of positions: 1st where the object is, 2nd where // it would go if you wanted to insert it. 1st is ILLEGAL_BUCKET // if object is not found; 2nd is ILLEGAL_BUCKET if it is. 
  // Note: because of deletions where-to-insert is not trivial: it's the
  // first deleted bucket we see, as long as we don't find the key later
  std::pair<size_type, size_type> find_position(const key_type &key) const {
    size_type num_probes = 0;               // how many times we've probed
    const size_type bucket_count_minus_one = bucket_count() - 1;
    size_type bucknum = hash(key) & bucket_count_minus_one;
    size_type insert_pos = ILLEGAL_BUCKET;  // where we would insert
    SPARSEHASH_STAT_UPDATE(total_lookups += 1);
    while ( 1 ) {                           // probe until something happens
      if ( !table.test(bucknum) ) {         // bucket is empty
        SPARSEHASH_STAT_UPDATE(total_probes += num_probes);
        if ( insert_pos == ILLEGAL_BUCKET )  // found no prior place to insert
          return std::pair<size_type, size_type>(ILLEGAL_BUCKET, bucknum);
        else
          return std::pair<size_type, size_type>(ILLEGAL_BUCKET, insert_pos);
      } else if ( test_deleted(bucknum) ) {  // keep searching, but mark to insert
        if ( insert_pos == ILLEGAL_BUCKET )
          insert_pos = bucknum;
      } else if ( equals(key, get_key(table.unsafe_get(bucknum))) ) {
        SPARSEHASH_STAT_UPDATE(total_probes += num_probes);
        return std::pair<size_type, size_type>(bucknum, ILLEGAL_BUCKET);
      }
      ++num_probes;                         // we're doing another probe
      bucknum = (bucknum + JUMP_(key, num_probes)) & bucket_count_minus_one;
      assert(num_probes < bucket_count()
             && "Hashtable is full: an error in key_equal<> or hash<>");
    }
  }

 public:
  iterator find(const key_type& key) {
    if ( size() == 0 ) return end();
    std::pair<size_type, size_type> pos = find_position(key);
    if ( pos.first == ILLEGAL_BUCKET )      // alas, not there
      return end();
    else
      return iterator(this, table.get_iter(pos.first), table.nonempty_end());
  }

  const_iterator find(const key_type& key) const {
    if ( size() == 0 ) return end();
    std::pair<size_type, size_type> pos = find_position(key);
    if ( pos.first == ILLEGAL_BUCKET )      // alas, not there
      return end();
    else
      return const_iterator(this,
                            table.get_iter(pos.first), table.nonempty_end());
  }

  // This is a tr1 method: the bucket a given key is in, or what bucket
  // it would be put in, if it were to be inserted.  Shrug.
  size_type bucket(const key_type& key) const {
    std::pair<size_type, size_type> pos = find_position(key);
    return pos.first == ILLEGAL_BUCKET ? pos.second : pos.first;
  }

  // Counts how many elements have key key.  For maps, it's either 0 or 1.
  size_type count(const key_type &key) const {
    std::pair<size_type, size_type> pos = find_position(key);
    return pos.first == ILLEGAL_BUCKET ? 0 : 1;
  }

  // Likewise, equal_range doesn't really make sense for us.  Oh well.
  std::pair<iterator, iterator> equal_range(const key_type& key) {
    iterator pos = find(key);      // either an iterator or end
    if (pos == end()) {
      return std::pair<iterator, iterator>(pos, pos);
    } else {
      const iterator startpos = pos++;
      return std::pair<iterator, iterator>(startpos, pos);
    }
  }
  std::pair<const_iterator, const_iterator>
  equal_range(const key_type& key) const {
    const_iterator pos = find(key);      // either an iterator or end
    if (pos == end()) {
      return std::pair<const_iterator, const_iterator>(pos, pos);
    } else {
      const const_iterator startpos = pos++;
      return std::pair<const_iterator, const_iterator>(startpos, pos);
    }
  }

  // INSERTION ROUTINES
 private:
  // Private method used by insert_noresize and find_or_insert.
  iterator insert_at(const_reference obj, size_type pos) {
    if (size() >= max_size()) {
      throw std::length_error("insert overflow");
    }
    if ( test_deleted(pos) ) {     // just replace if it's been deleted
      // The set() below will undelete this object.
      // We just worry about stats here.
      assert(num_deleted > 0);
      --num_deleted;               // used to be, now it isn't
    }
    table.set(pos, obj);
    return iterator(this, table.get_iter(pos), table.nonempty_end());
  }

  // If you know *this is big enough to hold obj, use this routine
  std::pair<iterator, bool> insert_noresize(const_reference obj) {
    // First, double-check we're not inserting delkey
    assert((!settings.use_deleted() || !equals(get_key(obj), key_info.delkey))
           && "Inserting the deleted key");
    const std::pair<size_type, size_type> pos = find_position(get_key(obj));
    if ( pos.first != ILLEGAL_BUCKET) {      // object was already there
      return std::pair<iterator, bool>(
          iterator(this, table.get_iter(pos.first), table.nonempty_end()),
          false);            // false: we didn't insert
    } else {                 // pos.second says where to put it
      return std::pair<iterator, bool>(insert_at(obj, pos.second), true);
    }
  }

  // Specializations of insert(it, it) depending on the power of the iterator:
  // (1) Iterator supports operator-, resize before inserting
  template <class ForwardIterator>
  void insert(ForwardIterator f, ForwardIterator l, std::forward_iterator_tag) {
    size_t dist = std::distance(f, l);
    if (dist >= (std::numeric_limits<size_type>::max)()) {
      throw std::length_error("insert-range overflow");
    }
    resize_delta(static_cast<size_type>(dist));
    for ( ; dist > 0; --dist, ++f) {
      insert_noresize(*f);
    }
  }

  // (2) Arbitrary iterator, can't tell how much to resize
  template <class InputIterator>
  void insert(InputIterator f, InputIterator l, std::input_iterator_tag) {
    for ( ; f != l; ++f)
      insert(*f);
  }

 public:
  // This is the normal insert routine, used by the outside world
  std::pair<iterator, bool> insert(const_reference obj) {
    resize_delta(1);         // adding an object, grow if need be
    return insert_noresize(obj);
  }

  // When inserting a lot at a time, we specialize on the type of iterator
  template <class InputIterator>
  void insert(InputIterator f, InputIterator l) {
    // specializes on iterator type
    insert(f, l,
           typename std::iterator_traits<InputIterator>::iterator_category());
  }

  // DefaultValue is a functor that takes a key and returns a value_type
  // representing the default value to be inserted if none is found.
  template <class DefaultValue>
  value_type& find_or_insert(const key_type& key) {
    // First, double-check we're not inserting delkey
    assert((!settings.use_deleted() || !equals(key, key_info.delkey))
           && "Inserting the deleted key");
    const std::pair<size_type, size_type> pos = find_position(key);
    DefaultValue default_value;
    if ( pos.first != ILLEGAL_BUCKET) {  // object was already there
      return *table.get_iter(pos.first);
    } else if (resize_delta(1)) {        // needed to rehash to make room
      // Since we resized, we can't use pos, so recalculate where to insert.
      return *insert_noresize(default_value(key)).first;
    } else {                             // no need to rehash, insert right here
      return *insert_at(default_value(key), pos.second);
    }
  }

  // DELETION ROUTINES
  size_type erase(const key_type& key) {
    // First, double-check we're not erasing delkey.
    assert((!settings.use_deleted() || !equals(key, key_info.delkey))
           && "Erasing the deleted key");
    assert(!settings.use_deleted() || !equals(key, key_info.delkey));
    const_iterator pos = find(key);    // shrug: shouldn't need to be const
    if ( pos != end() ) {
      assert(!test_deleted(pos));  // or find() shouldn't have returned it
      set_deleted(pos);
      ++num_deleted;
      // will think about shrink after next insert
      settings.set_consider_shrink(true);
      return 1;                    // because we deleted one thing
    } else {
      return 0;                    // because we deleted nothing
    }
  }

  // We return the iterator past the deleted item.
  void erase(iterator pos) {
    if ( pos == end() ) return;    // sanity check
    if ( set_deleted(pos) ) {      // true if object has been newly deleted
      ++num_deleted;
      // will think about shrink after next insert
      settings.set_consider_shrink(true);
    }
  }

  void erase(iterator f, iterator l) {
    for ( ; f != l; ++f) {
      if ( set_deleted(f) )        // should always be true
        ++num_deleted;
    }
    // will think about shrink after next insert
    settings.set_consider_shrink(true);
  }

  // We allow you to erase a const_iterator just like we allow you to
  // erase an iterator.  This is in parallel to 'delete': you can delete
  // a const pointer just like a non-const pointer.
The logic is that // you can't use the object after it's erased anyway, so it doesn't matter // if it's const or not. void erase(const_iterator pos) { if ( pos == end() ) return; // sanity check if ( set_deleted(pos) ) { // true if object has been newly deleted ++num_deleted; // will think about shrink after next insert settings.set_consider_shrink(true); } } void erase(const_iterator f, const_iterator l) { for ( ; f != l; ++f) { if ( set_deleted(f) ) // should always be true ++num_deleted; } // will think about shrink after next insert settings.set_consider_shrink(true); } // COMPARISON bool operator==(const sparse_hashtable& ht) const { if (size() != ht.size()) { return false; } else if (this == &ht) { return true; } else { // Iterate through the elements in "this" and see if the // corresponding element is in ht for ( const_iterator it = begin(); it != end(); ++it ) { const_iterator it2 = ht.find(get_key(*it)); if ((it2 == ht.end()) || (*it != *it2)) { return false; } } return true; } } bool operator!=(const sparse_hashtable& ht) const { return !(*this == ht); } // I/O // We support reading and writing hashtables to disk. NOTE that // this only stores the hashtable metadata, not the stuff you've // actually put in the hashtable! Alas, since I don't know how to // write a hasher or key_equal, you have to make sure everything // but the table is the same. We compact before writing. // // The OUTPUT type needs to support a Write() operation. File and // OutputBuffer are appropriate types to pass in. // // The INPUT type needs to support a Read() operation. File and // InputBuffer are appropriate types to pass in. 
  template <typename OUTPUT>
  bool write_metadata(OUTPUT *fp) {
    squash_deleted();            // so we don't have to worry about delkey
    return table.write_metadata(fp);
  }

  template <typename INPUT>
  bool read_metadata(INPUT *fp) {
    num_deleted = 0;             // since we got rid before writing
    const bool result = table.read_metadata(fp);
    settings.reset_thresholds(bucket_count());
    return result;
  }

  // Only meaningful if value_type is a POD.
  template <typename OUTPUT>
  bool write_nopointer_data(OUTPUT *fp) {
    return table.write_nopointer_data(fp);
  }

  // Only meaningful if value_type is a POD.
  template <typename INPUT>
  bool read_nopointer_data(INPUT *fp) {
    return table.read_nopointer_data(fp);
  }

  // INPUT and OUTPUT must be either a FILE, *or* a C++ stream
  //    (istream, ostream, etc) *or* a class providing
  //    Read(void*, size_t) and Write(const void*, size_t)
  //    (respectively), which writes a buffer into a stream
  //    (which the INPUT/OUTPUT instance presumably owns).

  typedef sparsehash_internal::pod_serializer<value_type> NopointerSerializer;

  // ValueSerializer: a functor.  operator()(OUTPUT*, const value_type&)
  template <typename ValueSerializer, typename OUTPUT>
  bool serialize(ValueSerializer serializer, OUTPUT *fp) {
    squash_deleted();            // so we don't have to worry about delkey
    return table.serialize(serializer, fp);
  }

  // ValueSerializer: a functor.  operator()(INPUT*, value_type*)
  template <typename ValueSerializer, typename INPUT>
  bool unserialize(ValueSerializer serializer, INPUT *fp) {
    num_deleted = 0;             // since we got rid before writing
    const bool result = table.unserialize(serializer, fp);
    settings.reset_thresholds(bucket_count());
    return result;
  }

 private:
  // Table is the main storage class.
  typedef sparsetable<value_type, DEFAULT_GROUP_SIZE, value_alloc_type> Table;

  // Package templated functors with the other types to eliminate memory
  // needed for storing these zero-size operators.  Since ExtractKey and
  // hasher's operator() might have the same function signature, they
  // must be packaged in different classes.
  struct Settings :
      sparsehash_internal::sh_hashtable_settings<key_type, hasher,
                                                 size_type, HT_MIN_BUCKETS> {
    explicit Settings(const hasher& hf)
        : sparsehash_internal::sh_hashtable_settings<key_type, hasher,
              size_type, HT_MIN_BUCKETS>(
            hf, HT_OCCUPANCY_PCT / 100.0f, HT_EMPTY_PCT / 100.0f) {}
  };

  // KeyInfo stores delete key and packages zero-size functors:
  // ExtractKey and SetKey.
  class KeyInfo : public ExtractKey, public SetKey, public EqualKey {
   public:
    KeyInfo(const ExtractKey& ek, const SetKey& sk, const EqualKey& eq)
        : ExtractKey(ek), SetKey(sk), EqualKey(eq) {
    }
    // We want to return the exact same type as ExtractKey: Key or const Key&
    typename ExtractKey::result_type get_key(const_reference v) const {
      return ExtractKey::operator()(v);
    }
    void set_key(pointer v, const key_type& k) const {
      SetKey::operator()(v, k);
    }
    bool equals(const key_type& a, const key_type& b) const {
      return EqualKey::operator()(a, b);
    }

    // Which key marks deleted entries.
    // TODO(csilvers): make a pointer, and get rid of use_deleted (benchmark!)
    typename base::remove_const<key_type>::type delkey;
  };

  // Utility functions to access the templated operators
  size_type hash(const key_type& v) const {
    return settings.hash(v);
  }
  bool equals(const key_type& a, const key_type& b) const {
    return key_info.equals(a, b);
  }
  typename ExtractKey::result_type get_key(const_reference v) const {
    return key_info.get_key(v);
  }
  void set_key(pointer v, const key_type& k) const {
    key_info.set_key(v, k);
  }

 private:
  // Actual data
  Settings settings;
  KeyInfo key_info;
  size_type num_deleted;  // how many occupied buckets are marked deleted
  Table table;            // holds num_buckets and num_elements too
};

// We need a global swap as well
template <class V, class K, class HF, class ExK, class SetK, class EqK, class A>
inline void swap(sparse_hashtable<V, K, HF, ExK, SetK, EqK, A> &x,
                 sparse_hashtable<V, K, HF, ExK, SetK, EqK, A> &y) {
  x.swap(y);
}

#undef JUMP_

template <class V, class K, class HF, class ExK, class SetK, class EqK, class A>
const typename sparse_hashtable<V, K, HF, ExK, SetK, EqK, A>::size_type
    sparse_hashtable<V, K, HF, ExK, SetK, EqK, A>::ILLEGAL_BUCKET;

// How full we let the table get before we resize.
// Knuth says .8 is good -- higher causes us to probe too much,
// though saves memory
template <class V, class K, class HF, class ExK, class SetK, class EqK, class A>
const int sparse_hashtable<V, K, HF, ExK, SetK, EqK, A>::HT_OCCUPANCY_PCT = 80;

// How empty we let the table get before we resize lower.
// It should be less than OCCUPANCY_PCT / 2 or we thrash resizing
template <class V, class K, class HF, class ExK, class SetK, class EqK, class A>
const int sparse_hashtable<V, K, HF, ExK, SetK, EqK, A>::HT_EMPTY_PCT
    = static_cast<int>(
        0.4 * sparse_hashtable<V, K, HF, ExK, SetK, EqK, A>::HT_OCCUPANCY_PCT);

_END_GOOGLE_NAMESPACE_

#endif /* _SPARSEHASHTABLE_H_ */
sparsehash-2.0.2/src/sparsehash/internal/libc_allocator_with_realloc.h0000664000175000017500000000751311721252346023232 00000000000000
// Copyright (c) 2010, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED.
// IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// ---

#ifndef UTIL_GTL_LIBC_ALLOCATOR_WITH_REALLOC_H_
#define UTIL_GTL_LIBC_ALLOCATOR_WITH_REALLOC_H_

#include <sparsehash/internal/sparseconfig.h>
#include <stdlib.h>           // for malloc/realloc/free
#include <stddef.h>           // for ptrdiff_t
#include <new>                // for placement new

_START_GOOGLE_NAMESPACE_

template <class T>
class libc_allocator_with_realloc {
 public:
  typedef T value_type;
  typedef size_t size_type;
  typedef ptrdiff_t difference_type;

  typedef T* pointer;
  typedef const T* const_pointer;
  typedef T& reference;
  typedef const T& const_reference;

  libc_allocator_with_realloc() {}
  libc_allocator_with_realloc(const libc_allocator_with_realloc&) {}
  ~libc_allocator_with_realloc() {}

  pointer address(reference r) const  { return &r; }
  const_pointer address(const_reference r) const  { return &r; }

  pointer allocate(size_type n, const_pointer = 0) {
    return static_cast<pointer>(malloc(n * sizeof(value_type)));
  }
  void deallocate(pointer p, size_type) {
    free(p);
  }
  pointer reallocate(pointer p, size_type n) {
    return static_cast<pointer>(realloc(p, n * sizeof(value_type)));
  }

  size_type max_size() const  {
    return static_cast<size_type>(-1) / sizeof(value_type);
  }

  void construct(pointer p, const value_type& val) {
    new(p) value_type(val);
  }
  void destroy(pointer p) { p->~value_type(); }

  template <class U>
  libc_allocator_with_realloc(const libc_allocator_with_realloc<U>&) {}

  template <class U>
  struct rebind {
    typedef libc_allocator_with_realloc<U> other;
  };
};

// libc_allocator_with_realloc<void> specialization.
template<>
class libc_allocator_with_realloc<void> {
 public:
  typedef void value_type;
  typedef size_t size_type;
  typedef ptrdiff_t difference_type;
  typedef void* pointer;
  typedef const void* const_pointer;

  template <class U>
  struct rebind {
    typedef libc_allocator_with_realloc<U> other;
  };
};

template <class T>
inline bool operator==(const libc_allocator_with_realloc<T>&,
                       const libc_allocator_with_realloc<T>&) {
  return true;
}

template <class T>
inline bool operator!=(const libc_allocator_with_realloc<T>&,
                       const libc_allocator_with_realloc<T>&) {
  return false;
}

_END_GOOGLE_NAMESPACE_

#endif  // UTIL_GTL_LIBC_ALLOCATOR_WITH_REALLOC_H_
sparsehash-2.0.2/src/sparsehash/internal/hashtable-common.h0000664000175000017500000003404711721252346020750 00000000000000
// Copyright (c) 2010, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED.
// IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// ---
//
// Provides classes shared by both sparse and dense hashtable.
//
// sh_hashtable_settings has parameters for growing and shrinking
// a hashtable.  It also packages zero-size functor (ie. hasher).
//
// Other functions and classes provide common code for serializing
// and deserializing hashtables to a stream (such as a FILE*).

#ifndef UTIL_GTL_HASHTABLE_COMMON_H_
#define UTIL_GTL_HASHTABLE_COMMON_H_

#include <sparsehash/internal/sparseconfig.h>
#include <assert.h>
#include <stdio.h>
#include <stddef.h>             // for size_t
#include <iosfwd>
#include <stdexcept>            // For length_error

_START_GOOGLE_NAMESPACE_

template <bool> struct SparsehashCompileAssert { };
#define SPARSEHASH_COMPILE_ASSERT(expr, msg) \
  typedef SparsehashCompileAssert<(bool(expr))> msg[bool(expr) ? 1 : -1]

namespace sparsehash_internal {

// Adaptor methods for reading/writing data from an INPUT or OUTPUT
// variable passed to serialize() or unserialize().  For now we
// have implemented INPUT/OUTPUT for FILE*, istream*/ostream* (note
// they are pointers, unlike typical use), or else a pointer to
// something that supports a Read()/Write() method.
//
// For technical reasons, we implement read_data/write_data in two
// stages.  The actual work is done in *_data_internal, which takes
// the stream argument twice: once as a template type, and once with
// normal type information.  (We only use the second version.)  We do
// this because of how C++ picks what function overload to use.
// If we implemented this the naive way:
//    bool read_data(istream* is, const void* data, size_t length);
//    template<typename T>
//    bool read_data(T* fp, const void* data, size_t length);
// C++ would prefer the second version for every stream type except
// istream.  However, we want C++ to prefer the first version for
// streams that are *subclasses* of istream, such as istringstream.
// This is not possible given the way template types are resolved.  So
// we split the stream argument in two, one of which is templated and
// one of which is not.  The specialized functions (like the istream
// version above) ignore the template arg and use the second, 'type'
// arg, getting subclass matching as normal.  The 'catch-all'
// functions (the second version above) use the template arg to deduce
// the type, and use a second, void* arg to achieve the desired
// 'catch-all' semantics.

// ----- low-level I/O for FILE* ----

template <typename Ignored>
inline bool read_data_internal(Ignored*, FILE* fp,
                               void* data, size_t length) {
  return fread(data, length, 1, fp) == 1;
}

template <typename Ignored>
inline bool write_data_internal(Ignored*, FILE* fp,
                                const void* data, size_t length) {
  return fwrite(data, length, 1, fp) == 1;
}

// ----- low-level I/O for iostream ----

// We want the caller to be responsible for #including <iostream>, not
// us, because iostream is a big header!  According to the standard,
// it's only legal to delay the instantiation the way we want to if
// the istream/ostream is a template type.  So we jump through hoops.
template <typename ISTREAM>
inline bool read_data_internal_for_istream(ISTREAM* fp,
                                           void* data, size_t length) {
  return fp->read(reinterpret_cast<char*>(data), length).good();
}
template <typename Ignored>
inline bool read_data_internal(Ignored*, std::istream* fp,
                               void* data, size_t length) {
  return read_data_internal_for_istream(fp, data, length);
}

template <typename OSTREAM>
inline bool write_data_internal_for_ostream(OSTREAM* fp,
                                            const void* data, size_t length) {
  return fp->write(reinterpret_cast<const char*>(data), length).good();
}
template <typename Ignored>
inline bool write_data_internal(Ignored*, std::ostream* fp,
                                const void* data, size_t length) {
  return write_data_internal_for_ostream(fp, data, length);
}

// ----- low-level I/O for custom streams ----

// The INPUT type needs to support a Read() method that takes a
// buffer and a length and returns the number of bytes read.
template <typename INPUT>
inline bool read_data_internal(INPUT* fp, void*,
                               void* data, size_t length) {
  return static_cast<size_t>(fp->Read(data, length)) == length;
}

// The OUTPUT type needs to support a Write() operation that takes
// a buffer and a length and returns the number of bytes written.
template <typename OUTPUT>
inline bool write_data_internal(OUTPUT* fp, void*,
                                const void* data, size_t length) {
  return static_cast<size_t>(fp->Write(data, length)) == length;
}

// ----- low-level I/O: the public API ----

template <typename INPUT>
inline bool read_data(INPUT* fp, void* data, size_t length) {
  return read_data_internal(fp, fp, data, length);
}

template <typename OUTPUT>
inline bool write_data(OUTPUT* fp, const void* data, size_t length) {
  return write_data_internal(fp, fp, data, length);
}

// Uses read_data() and write_data() to read/write an integer.
// length is the number of bytes to read/write (which may differ
// from sizeof(IntType), allowing us to save on a 32-bit system
// and load on a 64-bit system).  Excess bytes are taken to be 0.
// INPUT and OUTPUT must match legal inputs to read/write_data (above).
template <typename INPUT, typename IntType>
bool read_bigendian_number(INPUT* fp, IntType* value, size_t length) {
  *value = 0;
  unsigned char byte;
  // We require IntType to be unsigned or else the shifting gets all screwy.
  SPARSEHASH_COMPILE_ASSERT(static_cast<IntType>(-1) > static_cast<IntType>(0),
                            serializing_int_requires_an_unsigned_type);
  for (size_t i = 0; i < length; ++i) {
    if (!read_data(fp, &byte, sizeof(byte))) return false;
    *value |= static_cast<IntType>(byte) << ((length - 1 - i) * 8);
  }
  return true;
}

template <typename OUTPUT, typename IntType>
bool write_bigendian_number(OUTPUT* fp, IntType value, size_t length) {
  unsigned char byte;
  // We require IntType to be unsigned or else the shifting gets all screwy.
  SPARSEHASH_COMPILE_ASSERT(static_cast<IntType>(-1) > static_cast<IntType>(0),
                            serializing_int_requires_an_unsigned_type);
  for (size_t i = 0; i < length; ++i) {
    byte = (sizeof(value) <= length-1 - i)
        ? 0 : static_cast<unsigned char>((value >> ((length-1 - i) * 8)) & 255);
    if (!write_data(fp, &byte, sizeof(byte))) return false;
  }
  return true;
}

// If your keys and values are simple enough, you can pass this
// serializer to serialize()/unserialize().  "Simple enough" means
// value_type is a POD type that contains no pointers.  Note,
// however, we don't try to normalize endianness.
// This is the type used for NopointerSerializer.
template <typename value_type> struct pod_serializer {
  template <typename INPUT>
  bool operator()(INPUT* fp, value_type* value) const {
    return read_data(fp, value, sizeof(*value));
  }

  template <typename OUTPUT>
  bool operator()(OUTPUT* fp, const value_type& value) const {
    return write_data(fp, &value, sizeof(value));
  }
};

// Settings contains parameters for growing and shrinking the table.
// It also packages zero-size functor (ie. hasher).
//
// It does some munging of the hash value in cases where we think
// (fear) the original hash function might not be very good.  In
// particular, the default hash of pointers is the identity hash,
// so probably all the low bits are 0.  We identify when we think
// we're hashing a pointer, and chop off the low bits.
// Note this isn't perfect: even when the key is a pointer, we can't tell
// for sure that the hash is the identity hash.  If it's not, this
// is needless work (and possibly, though not likely, harmful).
template <typename Key, typename HashFunc,
          typename SizeType, int HT_MIN_BUCKETS>
class sh_hashtable_settings : public HashFunc {
 public:
  typedef Key key_type;
  typedef HashFunc hasher;
  typedef SizeType size_type;

 public:
  sh_hashtable_settings(const hasher& hf,
                        const float ht_occupancy_flt,
                        const float ht_empty_flt)
      : hasher(hf),
        enlarge_threshold_(0),
        shrink_threshold_(0),
        consider_shrink_(false),
        use_empty_(false),
        use_deleted_(false),
        num_ht_copies_(0) {
    set_enlarge_factor(ht_occupancy_flt);
    set_shrink_factor(ht_empty_flt);
  }

  size_type hash(const key_type& v) const {
    // We munge the hash value when we don't trust hasher::operator().
    return hash_munger<Key>::MungedHash(hasher::operator()(v));
  }

  float enlarge_factor() const { return enlarge_factor_; }
  void set_enlarge_factor(float f) { enlarge_factor_ = f; }
  float shrink_factor() const { return shrink_factor_; }
  void set_shrink_factor(float f) { shrink_factor_ = f; }

  size_type enlarge_threshold() const { return enlarge_threshold_; }
  void set_enlarge_threshold(size_type t) { enlarge_threshold_ = t; }
  size_type shrink_threshold() const { return shrink_threshold_; }
  void set_shrink_threshold(size_type t) { shrink_threshold_ = t; }

  size_type enlarge_size(size_type x) const {
    return static_cast<size_type>(x * enlarge_factor_);
  }
  size_type shrink_size(size_type x) const {
    return static_cast<size_type>(x * shrink_factor_);
  }

  bool consider_shrink() const { return consider_shrink_; }
  void set_consider_shrink(bool t) { consider_shrink_ = t; }

  bool use_empty() const { return use_empty_; }
  void set_use_empty(bool t) { use_empty_ = t; }

  bool use_deleted() const { return use_deleted_; }
  void set_use_deleted(bool t) { use_deleted_ = t; }

  size_type num_ht_copies() const {
    return static_cast<size_type>(num_ht_copies_);
  }
  void inc_num_ht_copies() { ++num_ht_copies_; }

  // Reset the enlarge and shrink thresholds
  void reset_thresholds(size_type num_buckets) {
    set_enlarge_threshold(enlarge_size(num_buckets));
    set_shrink_threshold(shrink_size(num_buckets));
    // whatever caused us to reset already considered
    set_consider_shrink(false);
  }

  // Caller is responsible for calling reset_threshold right after
  // set_resizing_parameters.
  void set_resizing_parameters(float shrink, float grow) {
    assert(shrink >= 0.0);
    assert(grow <= 1.0);
    if (shrink > grow/2.0f)
      shrink = grow / 2.0f;    // otherwise we thrash hashtable size
    set_shrink_factor(shrink);
    set_enlarge_factor(grow);
  }

  // This is the smallest size a hashtable can be without being too crowded
  // If you like, you can give a min #buckets as well as a min #elts
  size_type min_buckets(size_type num_elts, size_type min_buckets_wanted) {
    float enlarge = enlarge_factor();
    size_type sz = HT_MIN_BUCKETS;             // min buckets allowed
    while ( sz < min_buckets_wanted ||
            num_elts >= static_cast<size_type>(sz * enlarge) ) {
      // This just prevents overflowing size_type, since sz can exceed
      // max_size() here.
      if (static_cast<size_type>(sz * 2) < sz) {
        throw std::length_error("resize overflow");  // protect against overflow
      }
      sz *= 2;
    }
    return sz;
  }

 private:
  template <class HashKey> class hash_munger {
   public:
    static size_t MungedHash(size_t hash) {
      return hash;
    }
  };
  // This matches when the hashtable key is a pointer.
  template <class HashKey> class hash_munger<HashKey*> {
   public:
    static size_t MungedHash(size_t hash) {
      // TODO(csilvers): consider rotating instead:
      //    static const int shift = (sizeof(void *) == 4) ? 2 : 3;
      //    return (hash << (sizeof(hash) * 8) - shift)) | (hash >> shift);
      // This matters if we ever change sparse/dense_hash_* to compare
      // hashes before comparing actual values.  It's speedy on x86.
return hash / sizeof(void*); // get rid of known-0 bits } }; size_type enlarge_threshold_; // table.size() * enlarge_factor size_type shrink_threshold_; // table.size() * shrink_factor float enlarge_factor_; // how full before resize float shrink_factor_; // how empty before resize // consider_shrink=true if we should try to shrink before next insert bool consider_shrink_; bool use_empty_; // used only by densehashtable, not sparsehashtable bool use_deleted_; // false until delkey has been set // num_ht_copies is a counter incremented every Copy/Move unsigned int num_ht_copies_; }; } // namespace sparsehash_internal #undef SPARSEHASH_COMPILE_ASSERT _END_GOOGLE_NAMESPACE_ #endif // UTIL_GTL_HASHTABLE_COMMON_H_ sparsehash-2.0.2/src/sparsehash/internal/densehashtable.h0000664000175000017500000015161311721252346020500 00000000000000// Copyright (c) 2005, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // --- // // A dense hashtable is a particular implementation of // a hashtable: one that is meant to minimize memory allocation. // It does this by using an array to store all the data. We // steal a value from the key space to indicate "empty" array // elements (ie indices where no item lives) and another to indicate // "deleted" elements. // // (Note it is possible to change the value of the delete key // on the fly; you can even remove it, though after that point // the hashtable is insert_only until you set it again. The empty // value however can't be changed.) // // To minimize allocation and pointer overhead, we use internal // probing, in which the hashtable is a single table, and collisions // are resolved by trying to insert again in another bucket. The // most cache-efficient internal probing schemes are linear probing // (which suffers, alas, from clumping) and quadratic probing, which // is what we implement by default. // // Type requirements: value_type is required to be Copy Constructible // and Default Constructible. It is not required to be (and commonly // isn't) Assignable. // // You probably shouldn't use this code directly. Use dense_hash_map<> // or dense_hash_set<> instead. 
// You can change the following below:
// HT_OCCUPANCY_PCT      -- how full before we double size
// HT_EMPTY_PCT          -- how empty before we halve size
// HT_MIN_BUCKETS        -- default smallest bucket size
//
// You can also change enlarge_factor (which defaults to
// HT_OCCUPANCY_PCT), and shrink_factor (which defaults to
// HT_EMPTY_PCT) with set_resizing_parameters().
//
// How to decide what values to use?
// shrink_factor's default of .4 * OCCUPANCY_PCT is probably good.
// HT_MIN_BUCKETS is probably unnecessary since you can specify
// (indirectly) the starting number of buckets at construct-time.
// For enlarge_factor, you can use this chart to try to trade off
// expected lookup time against the space taken up.  By default, this
// code uses quadratic probing, though you can change it to linear
// via JUMP_ below if you really want to.
//
// From http://www.augustana.ca/~mohrj/courses/1999.fall/csc210/lecture_notes/hashing.html
// NUMBER OF PROBES / LOOKUP       Successful            Unsuccessful
// Quadratic collision resolution  1 - ln(1-L) - L/2     1/(1-L) - L - ln(1-L)
// Linear collision resolution     [1+1/(1-L)]/2         [1+1/(1-L)^2]/2
//
// -- enlarge_factor --           0.10  0.50  0.60  0.75  0.80  0.90  0.99
// QUADRATIC COLLISION RES.
//    probes/successful lookup    1.05  1.44  1.62  2.01  2.21  2.85  5.11
//    probes/unsuccessful lookup  1.11  2.19  2.82  4.64  5.81  11.4  103.6
// LINEAR COLLISION RES.
//    probes/successful lookup    1.06  1.5   1.75  2.5   3.0   5.5   50.5
//    probes/unsuccessful lookup  1.12  2.5   3.6   8.5   13.0  50.0  5000.0

#ifndef _DENSEHASHTABLE_H_
#define _DENSEHASHTABLE_H_

#include <sparsehash/internal/sparseconfig.h>
#include <assert.h>
#include <stdio.h>              // for FILE, fwrite, fread
#include <algorithm>            // For swap(), eg
#include <iterator>             // For iterator tags
#include <limits>               // for numeric_limits
#include <memory>               // For uninitialized_fill
#include <utility>              // for pair
#include <sparsehash/internal/hashtable-common.h>
#include <sparsehash/internal/libc_allocator_with_realloc.h>
#include <sparsehash/type_traits.h>
#include <stdexcept>            // For length_error

_START_GOOGLE_NAMESPACE_

namespace base {   // just to make google->opensource transition easier
using GOOGLE_NAMESPACE::true_type;
using GOOGLE_NAMESPACE::false_type;
using GOOGLE_NAMESPACE::integral_constant;
using GOOGLE_NAMESPACE::is_same;
using GOOGLE_NAMESPACE::remove_const;
}

// The probing method
// Linear probing
// #define JUMP_(key, num_probes)    ( 1 )
// Quadratic probing
#define JUMP_(key, num_probes)    ( num_probes )

// Hashtable class, used to implement the hashed associative containers
// hash_set and hash_map.
//
// Value: what is stored in the table (each bucket is a Value).
// Key: something in a 1-to-1 correspondence to a Value, that can be used
//      to search for a Value in the table (find() takes a Key).
// HashFcn: Takes a Key and returns an integer, the more unique the better.
// ExtractKey: given a Value, returns the unique Key associated with it.
//             Must inherit from unary_function, or at least have a
//             result_type enum indicating the return type of operator().
// SetKey: given a Value* and a Key, modifies the value such that
//         ExtractKey(value) == key.  We guarantee this is only called
//         with key == deleted_key or key == empty_key.
// EqualKey: Given two Keys, says whether they are the same (that is,
//           if they are both associated with the same Value).
// Alloc: STL allocator to use to allocate memory.
template <class Value, class Key, class HashFcn,
          class ExtractKey, class SetKey, class EqualKey, class Alloc>
class dense_hashtable;

template <class V, class K, class HF, class ExK, class SetK, class EqK, class A>
struct dense_hashtable_iterator;

template <class V, class K, class HF, class ExK, class SetK, class EqK, class A>
struct dense_hashtable_const_iterator;

// We're just an array, but we need to skip over empty and deleted elements
template <class V, class K, class HF, class ExK, class SetK, class EqK, class A>
struct dense_hashtable_iterator {
 private:
  typedef typename A::template rebind<V>::other value_alloc_type;

 public:
  typedef dense_hashtable_iterator<V,K,HF,ExK,SetK,EqK,A>       iterator;
  typedef dense_hashtable_const_iterator<V,K,HF,ExK,SetK,EqK,A> const_iterator;

  typedef std::forward_iterator_tag iterator_category;  // very little defined!
  typedef V value_type;
  typedef typename value_alloc_type::difference_type difference_type;
  typedef typename value_alloc_type::size_type size_type;
  typedef typename value_alloc_type::reference reference;
  typedef typename value_alloc_type::pointer pointer;

  // "Real" constructor and default constructor
  dense_hashtable_iterator(const dense_hashtable<V,K,HF,ExK,SetK,EqK,A> *h,
                           pointer it, pointer it_end, bool advance)
    : ht(h), pos(it), end(it_end)   {
    if (advance)  advance_past_empty_and_deleted();
  }
  dense_hashtable_iterator() { }
  // The default destructor is fine; we don't define one
  // The default operator= is fine; we don't define one

  // Happy dereferencer
  reference operator*() const { return *pos; }
  pointer operator->() const { return &(operator*()); }

  // Arithmetic.  The only hard part is making sure that
  // we're not on an empty or marked-deleted array element
  void advance_past_empty_and_deleted() {
    while ( pos != end && (ht->test_empty(*this) || ht->test_deleted(*this)) )
      ++pos;
  }
  iterator& operator++()   {
    assert(pos != end); ++pos; advance_past_empty_and_deleted(); return *this;
  }
  iterator operator++(int) { iterator tmp(*this); ++*this; return tmp; }

  // Comparison.
  bool operator==(const iterator& it) const { return pos == it.pos; }
  bool operator!=(const iterator& it) const { return pos != it.pos; }

  // The actual data
  const dense_hashtable<V,K,HF,ExK,SetK,EqK,A> *ht;
  pointer pos, end;
};

// Now do it all again, but with const-ness!
template <class V, class K, class HF, class ExK, class SetK, class EqK, class A>
struct dense_hashtable_const_iterator {
 private:
  typedef typename A::template rebind<V>::other value_alloc_type;

 public:
  typedef dense_hashtable_iterator<V,K,HF,ExK,SetK,EqK,A>       iterator;
  typedef dense_hashtable_const_iterator<V,K,HF,ExK,SetK,EqK,A> const_iterator;

  typedef std::forward_iterator_tag iterator_category;  // very little defined!
  typedef V value_type;
  typedef typename value_alloc_type::difference_type difference_type;
  typedef typename value_alloc_type::size_type size_type;
  typedef typename value_alloc_type::const_reference reference;
  typedef typename value_alloc_type::const_pointer pointer;

  // "Real" constructor and default constructor
  dense_hashtable_const_iterator(
      const dense_hashtable<V,K,HF,ExK,SetK,EqK,A> *h,
      pointer it, pointer it_end, bool advance)
    : ht(h), pos(it), end(it_end)   {
    if (advance)  advance_past_empty_and_deleted();
  }
  dense_hashtable_const_iterator()
    : ht(NULL), pos(pointer()), end(pointer()) { }
  // This lets us convert regular iterators to const iterators
  dense_hashtable_const_iterator(const iterator &it)
    : ht(it.ht), pos(it.pos), end(it.end) { }
  // The default destructor is fine; we don't define one
  // The default operator= is fine; we don't define one

  // Happy dereferencer
  reference operator*() const { return *pos; }
  pointer operator->() const { return &(operator*()); }

  // Arithmetic.  The only hard part is making sure that
  // we're not on an empty or marked-deleted array element
  void advance_past_empty_and_deleted() {
    while ( pos != end && (ht->test_empty(*this) || ht->test_deleted(*this)) )
      ++pos;
  }
  const_iterator& operator++()   {
    assert(pos != end); ++pos; advance_past_empty_and_deleted(); return *this;
  }
  const_iterator operator++(int) {
    const_iterator tmp(*this); ++*this; return tmp;
  }

  // Comparison.
  bool operator==(const const_iterator& it) const { return pos == it.pos; }
  bool operator!=(const const_iterator& it) const { return pos != it.pos; }

  // The actual data
  const dense_hashtable<V,K,HF,ExK,SetK,EqK,A> *ht;
  pointer pos, end;
};

template <class Value, class Key, class HashFcn,
          class ExtractKey, class SetKey, class EqualKey, class Alloc>
class dense_hashtable {
 private:
  typedef typename Alloc::template rebind<Value>::other value_alloc_type;

 public:
  typedef Key key_type;
  typedef Value value_type;
  typedef HashFcn hasher;
  typedef EqualKey key_equal;
  typedef Alloc allocator_type;

  typedef typename value_alloc_type::size_type size_type;
  typedef typename value_alloc_type::difference_type difference_type;
  typedef typename value_alloc_type::reference reference;
  typedef typename value_alloc_type::const_reference const_reference;
  typedef typename value_alloc_type::pointer pointer;
  typedef typename value_alloc_type::const_pointer const_pointer;
  typedef dense_hashtable_iterator<Value, Key, HashFcn,
                                   ExtractKey, SetKey, EqualKey, Alloc>
  iterator;

  typedef dense_hashtable_const_iterator<Value, Key, HashFcn,
                                         ExtractKey, SetKey, EqualKey, Alloc>
  const_iterator;

  // These come from tr1.  For us they're the same as regular iterators.
  typedef iterator local_iterator;
  typedef const_iterator const_local_iterator;

  // How full we let the table get before we resize, by default.
  // Knuth says .8 is good -- higher causes us to probe too much,
  // though it saves memory.
  static const int HT_OCCUPANCY_PCT;  // defined at the bottom of this file

  // How empty we let the table get before we resize lower, by default.
  // (0.0 means never resize lower.)
  // It should be less than OCCUPANCY_PCT / 2 or we thrash resizing
  static const int HT_EMPTY_PCT;  // defined at the bottom of this file

  // Minimum size we're willing to let hashtables be.
  // Must be a power of two, and at least 4.
  // Note, however, that for a given hashtable, the initial size is a
  // function of the first constructor arg, and may be >HT_MIN_BUCKETS.
  static const size_type HT_MIN_BUCKETS = 4;

  // By default, if you don't specify a hashtable size at
  // construction-time, we use this size.  Must be a power of two, and
  // at least HT_MIN_BUCKETS.
static const size_type HT_DEFAULT_STARTING_BUCKETS = 32; // ITERATOR FUNCTIONS iterator begin() { return iterator(this, table, table + num_buckets, true); } iterator end() { return iterator(this, table + num_buckets, table + num_buckets, true); } const_iterator begin() const { return const_iterator(this, table, table+num_buckets,true);} const_iterator end() const { return const_iterator(this, table + num_buckets, table+num_buckets,true);} // These come from tr1 unordered_map. They iterate over 'bucket' n. // We'll just consider bucket n to be the n-th element of the table. local_iterator begin(size_type i) { return local_iterator(this, table + i, table + i+1, false); } local_iterator end(size_type i) { local_iterator it = begin(i); if (!test_empty(i) && !test_deleted(i)) ++it; return it; } const_local_iterator begin(size_type i) const { return const_local_iterator(this, table + i, table + i+1, false); } const_local_iterator end(size_type i) const { const_local_iterator it = begin(i); if (!test_empty(i) && !test_deleted(i)) ++it; return it; } // ACCESSOR FUNCTIONS for the things we templatize on, basically hasher hash_funct() const { return settings; } key_equal key_eq() const { return key_info; } allocator_type get_allocator() const { return allocator_type(val_info); } // Accessor function for statistics gathering. int num_table_copies() const { return settings.num_ht_copies(); } private: // Annoyingly, we can't copy values around, because they might have // const components (they're probably pair). We use // explicit destructor invocation and placement new to get around // this. Arg. void set_value(pointer dst, const_reference src) { dst->~value_type(); // delete the old value, if any new(dst) value_type(src); } void destroy_buckets(size_type first, size_type last) { for ( ; first != last; ++first) table[first].~value_type(); } // DELETE HELPER FUNCTIONS // This lets the user describe a key that will indicate deleted // table entries. 
This key should be an "impossible" entry -- // if you try to insert it for real, you won't be able to retrieve it! // (NB: while you pass in an entire value, only the key part is looked // at. This is just because I don't know how to assign just a key.) private: void squash_deleted() { // gets rid of any deleted entries we have if ( num_deleted ) { // get rid of deleted before writing dense_hashtable tmp(*this); // copying will get rid of deleted swap(tmp); // now we are tmp } assert(num_deleted == 0); } // Test if the given key is the deleted indicator. Requires // num_deleted > 0, for correctness of read(), and because that // guarantees that key_info.delkey is valid. bool test_deleted_key(const key_type& key) const { assert(num_deleted > 0); return equals(key_info.delkey, key); } public: void set_deleted_key(const key_type &key) { // the empty indicator (if specified) and the deleted indicator // must be different assert((!settings.use_empty() || !equals(key, get_key(val_info.emptyval))) && "Passed the empty-key to set_deleted_key"); // It's only safe to change what "deleted" means if we purge deleted guys squash_deleted(); settings.set_use_deleted(true); key_info.delkey = key; } void clear_deleted_key() { squash_deleted(); settings.set_use_deleted(false); } key_type deleted_key() const { assert(settings.use_deleted() && "Must set deleted key before calling deleted_key"); return key_info.delkey; } // These are public so the iterators can use them // True if the item at position bucknum is "deleted" marker bool test_deleted(size_type bucknum) const { // Invariant: !use_deleted() implies num_deleted is 0. assert(settings.use_deleted() || num_deleted == 0); return num_deleted > 0 && test_deleted_key(get_key(table[bucknum])); } bool test_deleted(const iterator &it) const { // Invariant: !use_deleted() implies num_deleted is 0. 
    assert(settings.use_deleted() || num_deleted == 0);
    return num_deleted > 0 && test_deleted_key(get_key(*it));
  }
  bool test_deleted(const const_iterator &it) const {
    // Invariant: !use_deleted() implies num_deleted is 0.
    assert(settings.use_deleted() || num_deleted == 0);
    return num_deleted > 0 && test_deleted_key(get_key(*it));
  }

 private:
  void check_use_deleted(const char* caller) {
    (void)caller;    // could log it if the assert failed
    assert(settings.use_deleted());
  }

  // Set it so test_deleted is true.  true if object didn't used to be deleted.
  bool set_deleted(iterator &it) {
    check_use_deleted("set_deleted()");
    bool retval = !test_deleted(it);
    // &* converts from iterator to value-type.
    set_key(&(*it), key_info.delkey);
    return retval;
  }
  // Set it so test_deleted is false.  true if object used to be deleted.
  bool clear_deleted(iterator &it) {
    check_use_deleted("clear_deleted()");
    // Happens automatically when we assign something else in its place.
    return test_deleted(it);
  }

  // We also allow to set/clear the deleted bit on a const iterator.
  // We allow a const_iterator for the same reason you can delete a
  // const pointer: it's convenient, and semantically you can't use
  // 'it' after it's been deleted anyway, so its const-ness doesn't
  // really matter.
  bool set_deleted(const_iterator &it) {
    check_use_deleted("set_deleted()");
    bool retval = !test_deleted(it);
    set_key(const_cast<pointer>(&(*it)), key_info.delkey);
    return retval;
  }
  // Set it so test_deleted is false.  true if object used to be deleted.
  bool clear_deleted(const_iterator &it) {
    check_use_deleted("clear_deleted()");
    return test_deleted(it);
  }

  // EMPTY HELPER FUNCTIONS
  // This lets the user describe a key that will indicate empty (unused)
  // table entries.  This key should be an "impossible" entry --
  // if you try to insert it for real, you won't be able to retrieve it!
  // (NB: while you pass in an entire value, only the key part is looked
  // at.  This is just because I don't know how to assign just a key.)
public: // These are public so the iterators can use them // True if the item at position bucknum is "empty" marker bool test_empty(size_type bucknum) const { assert(settings.use_empty()); // we always need to know what's empty! return equals(get_key(val_info.emptyval), get_key(table[bucknum])); } bool test_empty(const iterator &it) const { assert(settings.use_empty()); // we always need to know what's empty! return equals(get_key(val_info.emptyval), get_key(*it)); } bool test_empty(const const_iterator &it) const { assert(settings.use_empty()); // we always need to know what's empty! return equals(get_key(val_info.emptyval), get_key(*it)); } private: void fill_range_with_empty(pointer table_start, pointer table_end) { std::uninitialized_fill(table_start, table_end, val_info.emptyval); } public: // TODO(csilvers): change all callers of this to pass in a key instead, // and take a const key_type instead of const value_type. void set_empty_key(const_reference val) { // Once you set the empty key, you can't change it assert(!settings.use_empty() && "Calling set_empty_key multiple times"); // The deleted indicator (if specified) and the empty indicator // must be different. 
assert((!settings.use_deleted() || !equals(get_key(val), key_info.delkey)) && "Setting the empty key the same as the deleted key"); settings.set_use_empty(true); set_value(&val_info.emptyval, val); assert(!table); // must set before first use // num_buckets was set in constructor even though table was NULL table = val_info.allocate(num_buckets); assert(table); fill_range_with_empty(table, table + num_buckets); } // TODO(user): return a key_type rather than a value_type value_type empty_key() const { assert(settings.use_empty()); return val_info.emptyval; } // FUNCTIONS CONCERNING SIZE public: size_type size() const { return num_elements - num_deleted; } size_type max_size() const { return val_info.max_size(); } bool empty() const { return size() == 0; } size_type bucket_count() const { return num_buckets; } size_type max_bucket_count() const { return max_size(); } size_type nonempty_bucket_count() const { return num_elements; } // These are tr1 methods. Their idea of 'bucket' doesn't map well to // what we do. We just say every bucket has 0 or 1 items in it. size_type bucket_size(size_type i) const { return begin(i) == end(i) ? 0 : 1; } private: // Because of the above, size_type(-1) is never legal; use it for errors static const size_type ILLEGAL_BUCKET = size_type(-1); // Used after a string of deletes. Returns true if we actually shrunk. // TODO(csilvers): take a delta so we can take into account inserts // done after shrinking. Maybe make part of the Settings class? bool maybe_shrink() { assert(num_elements >= num_deleted); assert((bucket_count() & (bucket_count()-1)) == 0); // is a power of two assert(bucket_count() >= HT_MIN_BUCKETS); bool retval = false; // If you construct a hashtable with < HT_DEFAULT_STARTING_BUCKETS, // we'll never shrink until you get relatively big, and we'll never // shrink below HT_DEFAULT_STARTING_BUCKETS. 
  // Otherwise, something
  // like "dense_hash_set<int> x; x.insert(4); x.erase(4);" will
  // shrink us down to HT_MIN_BUCKETS buckets, which is too small.
    const size_type num_remain = num_elements - num_deleted;
    const size_type shrink_threshold = settings.shrink_threshold();
    if (shrink_threshold > 0 && num_remain < shrink_threshold &&
        bucket_count() > HT_DEFAULT_STARTING_BUCKETS) {
      const float shrink_factor = settings.shrink_factor();
      size_type sz = bucket_count() / 2;    // find how much we should shrink
      while (sz > HT_DEFAULT_STARTING_BUCKETS &&
             num_remain < sz * shrink_factor) {
        sz /= 2;                            // stay a power of 2
      }
      dense_hashtable tmp(*this, sz);       // Do the actual resizing
      swap(tmp);                            // now we are tmp
      retval = true;
    }
    settings.set_consider_shrink(false);    // because we just considered it
    return retval;
  }

  // We'll let you resize a hashtable -- though this makes us copy all!
  // When you resize, you say, "make it big enough for this many more elements"
  // Returns true if we actually resized, false if size was already ok.
  bool resize_delta(size_type delta) {
    bool did_resize = false;
    if ( settings.consider_shrink() ) {  // see if lots of deletes happened
      if ( maybe_shrink() )
        did_resize = true;
    }
    if (num_elements >=
        (std::numeric_limits<size_type>::max)() - delta) {
      throw std::length_error("resize overflow");
    }
    if ( bucket_count() >= HT_MIN_BUCKETS &&
         (num_elements + delta) <= settings.enlarge_threshold() )
      return did_resize;                          // we're ok as we are

    // Sometimes, we need to resize just to get rid of all the
    // "deleted" buckets that are clogging up the hashtable.  So when
    // deciding whether to resize, count the deleted buckets (which
    // are currently taking up room).  But later, when we decide what
    // size to resize to, *don't* count deleted buckets, since they
    // get discarded during the resize.
    const size_type needed_size = settings.min_buckets(num_elements + delta, 0);
    if ( needed_size <= bucket_count() )      // we have enough buckets
      return did_resize;

    size_type resize_to =
      settings.min_buckets(num_elements - num_deleted + delta, bucket_count());

    if (resize_to < needed_size &&    // may double resize_to
        resize_to < (std::numeric_limits<size_type>::max)() / 2) {
      // This situation means that we have enough deleted elements,
      // that once we purge them, we won't actually have needed to
      // grow.  But we may want to grow anyway: if we just purge one
      // element, say, we'll have to grow anyway next time we
      // insert.  Might as well grow now, since we're already going
      // through the trouble of copying (in order to purge the
      // deleted elements).
      const size_type target =
          static_cast<size_type>(settings.shrink_size(resize_to*2));
      if (num_elements - num_deleted + delta >= target) {
        // Good, we won't be below the shrink threshold even if we double.
        resize_to *= 2;
      }
    }
    dense_hashtable tmp(*this, resize_to);
    swap(tmp);   // now we are tmp
    return true;
  }

  // We require table be not-NULL and empty before calling this.
void resize_table(size_type /*old_size*/, size_type new_size, base::true_type) { table = val_info.realloc_or_die(table, new_size); } void resize_table(size_type old_size, size_type new_size, base::false_type) { val_info.deallocate(table, old_size); table = val_info.allocate(new_size); } // Used to actually do the rehashing when we grow/shrink a hashtable void copy_from(const dense_hashtable &ht, size_type min_buckets_wanted) { clear_to_size(settings.min_buckets(ht.size(), min_buckets_wanted)); // We use a normal iterator to get non-deleted bcks from ht // We could use insert() here, but since we know there are // no duplicates and no deleted items, we can be more efficient assert((bucket_count() & (bucket_count()-1)) == 0); // a power of two for ( const_iterator it = ht.begin(); it != ht.end(); ++it ) { size_type num_probes = 0; // how many times we've probed size_type bucknum; const size_type bucket_count_minus_one = bucket_count() - 1; for (bucknum = hash(get_key(*it)) & bucket_count_minus_one; !test_empty(bucknum); // not empty bucknum = (bucknum + JUMP_(key, num_probes)) & bucket_count_minus_one) { ++num_probes; assert(num_probes < bucket_count() && "Hashtable is full: an error in key_equal<> or hash<>"); } set_value(&table[bucknum], *it); // copies the value to here num_elements++; } settings.inc_num_ht_copies(); } // Required by the spec for hashed associative container public: // Though the docs say this should be num_buckets, I think it's much // more useful as num_elements. As a special feature, calling with // req_elements==0 will cause us to shrink if we can, saving space. void resize(size_type req_elements) { // resize to this or larger if ( settings.consider_shrink() || req_elements == 0 ) maybe_shrink(); if ( req_elements > num_elements ) resize_delta(req_elements - num_elements); } // Get and change the value of shrink_factor and enlarge_factor. The // description at the beginning of this file explains how to choose // the values. 
  // Setting the shrink parameter to 0.0 ensures that the
  // table never shrinks.
  void get_resizing_parameters(float* shrink, float* grow) const {
    *shrink = settings.shrink_factor();
    *grow = settings.enlarge_factor();
  }
  void set_resizing_parameters(float shrink, float grow) {
    settings.set_resizing_parameters(shrink, grow);
    settings.reset_thresholds(bucket_count());
  }

  // CONSTRUCTORS -- as required by the specs, we take a size,
  // but also let you specify a hashfunction, key comparator,
  // and key extractor.  We also define a copy constructor and =.
  // DESTRUCTOR -- needs to free the table
  explicit dense_hashtable(size_type expected_max_items_in_table = 0,
                           const HashFcn& hf = HashFcn(),
                           const EqualKey& eql = EqualKey(),
                           const ExtractKey& ext = ExtractKey(),
                           const SetKey& set = SetKey(),
                           const Alloc& alloc = Alloc())
      : settings(hf),
        key_info(ext, set, eql),
        num_deleted(0),
        num_elements(0),
        num_buckets(expected_max_items_in_table == 0
                        ? HT_DEFAULT_STARTING_BUCKETS
                        : settings.min_buckets(expected_max_items_in_table, 0)),
        val_info(alloc_impl<value_alloc_type>(alloc)),
        table(NULL) {
    // table is NULL until emptyval is set.  However, we set num_buckets
    // here so we know how much space to allocate once emptyval is set
    settings.reset_thresholds(bucket_count());
  }

  // As a convenience for resize(), we allow an optional second argument
  // which lets you make this new hashtable a different size than ht
  dense_hashtable(const dense_hashtable& ht,
                  size_type min_buckets_wanted = HT_DEFAULT_STARTING_BUCKETS)
      : settings(ht.settings),
        key_info(ht.key_info),
        num_deleted(0),
        num_elements(0),
        num_buckets(0),
        val_info(ht.val_info),
        table(NULL) {
    if (!ht.settings.use_empty()) {
      // If use_empty isn't set, copy_from will crash, so we do our own copying.
      assert(ht.empty());
      num_buckets = settings.min_buckets(ht.size(), min_buckets_wanted);
      settings.reset_thresholds(bucket_count());
      return;
    }
    settings.reset_thresholds(bucket_count());
    copy_from(ht, min_buckets_wanted);   // copy_from() ignores deleted entries
  }

  dense_hashtable& operator= (const dense_hashtable& ht) {
    if (&ht == this)  return *this;        // don't copy onto ourselves
    if (!ht.settings.use_empty()) {
      assert(ht.empty());
      dense_hashtable empty_table(ht);  // empty table with ht's thresholds
      this->swap(empty_table);
      return *this;
    }
    settings = ht.settings;
    key_info = ht.key_info;
    set_value(&val_info.emptyval, ht.val_info.emptyval);
    // copy_from() calls clear and sets num_deleted to 0 too
    copy_from(ht, HT_MIN_BUCKETS);
    // we purposefully don't copy the allocator, which may not be copyable
    return *this;
  }

  ~dense_hashtable() {
    if (table) {
      destroy_buckets(0, num_buckets);
      val_info.deallocate(table, num_buckets);
    }
  }

  // Many STL algorithms use swap instead of copy constructors
  void swap(dense_hashtable& ht) {
    std::swap(settings, ht.settings);
    std::swap(key_info, ht.key_info);
    std::swap(num_deleted, ht.num_deleted);
    std::swap(num_elements, ht.num_elements);
    std::swap(num_buckets, ht.num_buckets);
    { value_type tmp;     // for annoying reasons, swap() doesn't work
      set_value(&tmp, val_info.emptyval);
      set_value(&val_info.emptyval, ht.val_info.emptyval);
      set_value(&ht.val_info.emptyval, tmp);
    }
    std::swap(table, ht.table);
    settings.reset_thresholds(bucket_count());  // also resets consider_shrink
    ht.settings.reset_thresholds(ht.bucket_count());
    // we purposefully don't swap the allocator, which may not be swap-able
  }

 private:
  void clear_to_size(size_type new_num_buckets) {
    if (!table) {
      table = val_info.allocate(new_num_buckets);
    } else {
      destroy_buckets(0, num_buckets);
      if (new_num_buckets != num_buckets) {   // resize, if necessary
        typedef base::integral_constant<bool,
            base::is_same<value_alloc_type,
                          libc_allocator_with_realloc<value_type> >::value>
            realloc_ok;
        resize_table(num_buckets, new_num_buckets, realloc_ok());
      }
    }
    assert(table);
    fill_range_with_empty(table, table + new_num_buckets);
    num_elements = 0;
    num_deleted = 0;
    num_buckets = new_num_buckets;          // our new size
    settings.reset_thresholds(bucket_count());
  }

 public:
  // It's always nice to be able to clear a table without deallocating it
  void clear() {
    // If the table is already empty, and the number of buckets is
    // already as we desire, there's nothing to do.
    const size_type new_num_buckets = settings.min_buckets(0, 0);
    if (num_elements == 0 && new_num_buckets == num_buckets) {
      return;
    }
    clear_to_size(new_num_buckets);
  }

  // Clear the table without resizing it.
  // Mimics the stl_hashtable's behaviour when clear()-ing in that it
  // does not modify the bucket count
  void clear_no_resize() {
    if (num_elements > 0) {
      assert(table);
      destroy_buckets(0, num_buckets);
      fill_range_with_empty(table, table + num_buckets);
    }
    // don't consider to shrink before another erase()
    settings.reset_thresholds(bucket_count());
    num_elements = 0;
    num_deleted = 0;
  }

  // LOOKUP ROUTINES
 private:
  // Returns a pair of positions: 1st where the object is, 2nd where
  // it would go if you wanted to insert it.  1st is ILLEGAL_BUCKET
  // if object is not found; 2nd is ILLEGAL_BUCKET if it is.
  // Note: because of deletions where-to-insert is not trivial: it's the
  // first deleted bucket we see, as long as we don't find the key later
  std::pair<size_type, size_type> find_position(const key_type &key) const {
    size_type num_probes = 0;              // how many times we've probed
    const size_type bucket_count_minus_one = bucket_count() - 1;
    size_type bucknum = hash(key) & bucket_count_minus_one;
    size_type insert_pos = ILLEGAL_BUCKET; // where we would insert
    while ( 1 ) {                          // probe until something happens
      if ( test_empty(bucknum) ) {         // bucket is empty
        if ( insert_pos == ILLEGAL_BUCKET )  // found no prior place to insert
          return std::pair<size_type,size_type>(ILLEGAL_BUCKET, bucknum);
        else
          return std::pair<size_type,size_type>(ILLEGAL_BUCKET, insert_pos);

      } else if ( test_deleted(bucknum) ) {// keep searching, but mark to insert
        if ( insert_pos == ILLEGAL_BUCKET )
          insert_pos = bucknum;

      } else if ( equals(key, get_key(table[bucknum])) ) {
        return std::pair<size_type,size_type>(bucknum, ILLEGAL_BUCKET);
      }
      ++num_probes;                        // we're doing another probe
      bucknum = (bucknum + JUMP_(key, num_probes)) & bucket_count_minus_one;
      assert(num_probes < bucket_count()
             && "Hashtable is full: an error in key_equal<> or hash<>");
    }
  }

 public:
  iterator find(const key_type& key) {
    if ( size() == 0 ) return end();
    std::pair<size_type, size_type> pos = find_position(key);
    if ( pos.first == ILLEGAL_BUCKET )     // alas, not there
      return end();
    else
      return iterator(this, table + pos.first, table + num_buckets, false);
  }

  const_iterator find(const key_type& key) const {
    if ( size() == 0 ) return end();
    std::pair<size_type, size_type> pos = find_position(key);
    if ( pos.first == ILLEGAL_BUCKET )     // alas, not there
      return end();
    else
      return const_iterator(this, table + pos.first, table+num_buckets, false);
  }

  // This is a tr1 method: the bucket a given key is in, or what bucket
  // it would be put in, if it were to be inserted.  Shrug.
  size_type bucket(const key_type& key) const {
    std::pair<size_type, size_type> pos = find_position(key);
    return pos.first == ILLEGAL_BUCKET ? pos.second : pos.first;
  }

  // Counts how many elements have key key.  For maps, it's either 0 or 1.
  size_type count(const key_type &key) const {
    std::pair<size_type, size_type> pos = find_position(key);
    return pos.first == ILLEGAL_BUCKET ? 0 : 1;
  }

  // Likewise, equal_range doesn't really make sense for us.  Oh well.
  std::pair<iterator,iterator> equal_range(const key_type& key) {
    iterator pos = find(key);      // either an iterator or end
    if (pos == end()) {
      return std::pair<iterator,iterator>(pos, pos);
    } else {
      const iterator startpos = pos++;
      return std::pair<iterator,iterator>(startpos, pos);
    }
  }
  std::pair<const_iterator,const_iterator> equal_range(const key_type& key)
      const {
    const_iterator pos = find(key);      // either an iterator or end
    if (pos == end()) {
      return std::pair<const_iterator,const_iterator>(pos, pos);
    } else {
      const const_iterator startpos = pos++;
      return std::pair<const_iterator,const_iterator>(startpos, pos);
    }
  }

  // INSERTION ROUTINES
 private:
  // Private method used by insert_noresize and find_or_insert.
  iterator insert_at(const_reference obj, size_type pos) {
    if (size() >= max_size()) {
      throw std::length_error("insert overflow");
    }
    if ( test_deleted(pos) ) {      // just replace if it's been del.
      // shrug: shouldn't need to be const.
      const_iterator delpos(this, table + pos, table + num_buckets, false);
      clear_deleted(delpos);
      assert( num_deleted > 0);
      --num_deleted;                // used to be, now it isn't
    } else {
      ++num_elements;               // replacing an empty bucket
    }
    set_value(&table[pos], obj);
    return iterator(this, table + pos, table + num_buckets, false);
  }

  // If you know *this is big enough to hold obj, use this routine
  std::pair<iterator, bool> insert_noresize(const_reference obj) {
    // First, double-check we're not inserting delkey or emptyval
    assert((!settings.use_empty() ||
            !equals(get_key(obj), get_key(val_info.emptyval)))
           && "Inserting the empty key");
    assert((!settings.use_deleted() || !equals(get_key(obj), key_info.delkey))
           && "Inserting the deleted key");
    const std::pair<size_type,size_type> pos = find_position(get_key(obj));
    if ( pos.first != ILLEGAL_BUCKET) {      // object was already there
      return std::pair<iterator,bool>(iterator(this, table + pos.first,
                                               table + num_buckets, false),
                                      false);  // false: we didn't insert
    } else {                                 // pos.second says where to put it
      return std::pair<iterator,bool>(insert_at(obj, pos.second), true);
    }
  }

  // Specializations of insert(it, it) depending on the power of the iterator:
  // (1) Iterator supports operator-, resize before inserting
  template <class ForwardIterator>
  void insert(ForwardIterator f, ForwardIterator l, std::forward_iterator_tag) {
    size_t dist = std::distance(f, l);
    if (dist >= (std::numeric_limits<size_type>::max)()) {
      throw std::length_error("insert-range overflow");
    }
    resize_delta(static_cast<size_type>(dist));
    for ( ; dist > 0; --dist, ++f) {
      insert_noresize(*f);
    }
  }

  // (2) Arbitrary iterator, can't tell how much to resize
  template <class InputIterator>
  void insert(InputIterator f, InputIterator l, std::input_iterator_tag) {
    for ( ; f != l; ++f)
      insert(*f);
  }

 public:
  // This is the normal insert routine, used by the outside world
  std::pair<iterator, bool> insert(const_reference obj) {
    resize_delta(1);                      // adding an object, grow if need be
    return insert_noresize(obj);
  }

  // When inserting a lot at a time, we specialize on the type of iterator
  template <class InputIterator>
  void insert(InputIterator f, InputIterator l) {
    // specializes on iterator type
    insert(f, l,
           typename std::iterator_traits<InputIterator>::iterator_category());
  }

  // DefaultValue is a functor that takes a key and returns a value_type
  // representing the default value to be inserted if none is found.
  template <class DefaultValue>
  value_type& find_or_insert(const key_type& key) {
    // First, double-check we're not inserting emptykey or delkey
    assert((!settings.use_empty() || !equals(key, get_key(val_info.emptyval)))
           && "Inserting the empty key");
    assert((!settings.use_deleted() || !equals(key, key_info.delkey))
           && "Inserting the deleted key");
    const std::pair<size_type,size_type> pos = find_position(key);
    DefaultValue default_value;
    if ( pos.first != ILLEGAL_BUCKET) {  // object was already there
      return table[pos.first];
    } else if (resize_delta(1)) {        // needed to rehash to make room
      // Since we resized, we can't use pos, so recalculate where to insert.
      return *insert_noresize(default_value(key)).first;
    } else {                             // no need to rehash, insert right here
      return *insert_at(default_value(key), pos.second);
    }
  }

  // DELETION ROUTINES
  size_type erase(const key_type& key) {
    // First, double-check we're not trying to erase delkey or emptyval.
    assert((!settings.use_empty() || !equals(key, get_key(val_info.emptyval)))
           && "Erasing the empty key");
    assert((!settings.use_deleted() || !equals(key, key_info.delkey))
           && "Erasing the deleted key");
    const_iterator pos = find(key);   // shrug: shouldn't need to be const
    if ( pos != end() ) {
      assert(!test_deleted(pos));  // or find() shouldn't have returned it
      set_deleted(pos);
      ++num_deleted;
      settings.set_consider_shrink(true);  // will think about shrink after next insert
      return 1;                    // because we deleted one thing
    } else {
      return 0;                    // because we deleted nothing
    }
  }

  // We return the iterator past the deleted item.
  void erase(iterator pos) {
    if ( pos == end() ) return;    // sanity check
    if ( set_deleted(pos) ) {      // true if object has been newly deleted
      ++num_deleted;
      // will think about shrink after next insert
      settings.set_consider_shrink(true);
    }
  }

  void erase(iterator f, iterator l) {
    for ( ; f != l; ++f) {
      if ( set_deleted(f) )        // should always be true
        ++num_deleted;
    }
    // will think about shrink after next insert
    settings.set_consider_shrink(true);
  }

  // We allow you to erase a const_iterator just like we allow you to
  // erase an iterator.  This is in parallel to 'delete': you can delete
  // a const pointer just like a non-const pointer.  The logic is that
  // you can't use the object after it's erased anyway, so it doesn't matter
  // if it's const or not.
  void erase(const_iterator pos) {
    if ( pos == end() ) return;    // sanity check
    if ( set_deleted(pos) ) {      // true if object has been newly deleted
      ++num_deleted;
      // will think about shrink after next insert
      settings.set_consider_shrink(true);
    }
  }
  void erase(const_iterator f, const_iterator l) {
    for ( ; f != l; ++f) {
      if ( set_deleted(f) )        // should always be true
        ++num_deleted;
    }
    // will think about shrink after next insert
    settings.set_consider_shrink(true);
  }

  // COMPARISON
  bool operator==(const dense_hashtable& ht) const {
    if (size() != ht.size()) {
      return false;
    } else if (this == &ht) {
      return true;
    } else {
      // Iterate through the elements in "this" and see if the
      // corresponding element is in ht
      for ( const_iterator it = begin(); it != end(); ++it ) {
        const_iterator it2 = ht.find(get_key(*it));
        if ((it2 == ht.end()) || (*it != *it2)) {
          return false;
        }
      }
      return true;
    }
  }
  bool operator!=(const dense_hashtable& ht) const {
    return !(*this == ht);
  }

  // I/O
  // We support reading and writing hashtables to disk.  Alas, since
  // I don't know how to write a hasher or key_equal, you have to make
  // sure everything but the table is the same.  We compact before writing.
 private:
  // Every time the disk format changes, this should probably change too
  typedef unsigned long MagicNumberType;
  static const MagicNumberType MAGIC_NUMBER = 0x13578642;

 public:
  // I/O -- this is an add-on for writing hash table to disk
  //
  // INPUT and OUTPUT must be either a FILE, *or* a C++ stream
  // (istream, ostream, etc) *or* a class providing
  // Read(void*, size_t) and Write(const void*, size_t)
  // (respectively), which writes a buffer into a stream
  // (which the INPUT/OUTPUT instance presumably owns).

  typedef sparsehash_internal::pod_serializer<value_type> NopointerSerializer;

  // ValueSerializer: a functor.  operator()(OUTPUT*, const value_type&)
  template <typename ValueSerializer, typename OUTPUT>
  bool serialize(ValueSerializer serializer, OUTPUT *fp) {
    squash_deleted();           // so we don't have to worry about delkey
    if ( !sparsehash_internal::write_bigendian_number(fp, MAGIC_NUMBER, 4) )
      return false;
    if ( !sparsehash_internal::write_bigendian_number(fp, num_buckets, 8) )
      return false;
    if ( !sparsehash_internal::write_bigendian_number(fp, num_elements, 8) )
      return false;
    // Now write a bitmap of non-empty buckets.
    for ( size_type i = 0; i < num_buckets; i += 8 ) {
      unsigned char bits = 0;
      for ( int bit = 0; bit < 8; ++bit ) {
        if ( i + bit < num_buckets && !test_empty(i + bit) )
          bits |= (1 << bit);
      }
      if ( !sparsehash_internal::write_data(fp, &bits, sizeof(bits)) )
        return false;
      for ( int bit = 0; bit < 8; ++bit ) {
        if ( bits & (1 << bit) ) {
          if ( !serializer(fp, table[i + bit]) )  return false;
        }
      }
    }
    return true;
  }

  // INPUT: anything we've written an overload of read_data() for.
  // ValueSerializer: a functor.  operator()(INPUT*, value_type*)
  template <typename ValueSerializer, typename INPUT>
  bool unserialize(ValueSerializer serializer, INPUT *fp) {
    assert(settings.use_empty() && "empty_key not set for read");

    clear();                        // just to be consistent
    MagicNumberType magic_read;
    if ( !sparsehash_internal::read_bigendian_number(fp, &magic_read, 4) )
      return false;
    if ( magic_read != MAGIC_NUMBER ) {
      return false;
    }
    size_type new_num_buckets;
    if ( !sparsehash_internal::read_bigendian_number(fp, &new_num_buckets, 8) )
      return false;
    clear_to_size(new_num_buckets);
    if ( !sparsehash_internal::read_bigendian_number(fp, &num_elements, 8) )
      return false;

    // Read the bitmap of non-empty buckets.
    for (size_type i = 0; i < num_buckets; i += 8) {
      unsigned char bits;
      if ( !sparsehash_internal::read_data(fp, &bits, sizeof(bits)) )
        return false;
      for ( int bit = 0; bit < 8; ++bit ) {
        if ( i + bit < num_buckets && (bits & (1 << bit)) ) {  // not empty
          if ( !serializer(fp, &table[i + bit]) )  return false;
        }
      }
    }
    return true;
  }

 private:
  template <class A>
  class alloc_impl : public A {
   public:
    typedef typename A::pointer pointer;
    typedef typename A::size_type size_type;

    // Convert a normal allocator to one that has realloc_or_die()
    alloc_impl(const A& a) : A(a) { }

    // realloc_or_die should only be used when using the default
    // allocator (libc_allocator_with_realloc).
    pointer realloc_or_die(pointer /*ptr*/, size_type /*n*/) {
      fprintf(stderr, "realloc_or_die is only supported for "
                      "libc_allocator_with_realloc\n");
      exit(1);
      return NULL;
    }
  };

  // A template specialization of alloc_impl for
  // libc_allocator_with_realloc that can handle realloc_or_die.
  template <class A>
  class alloc_impl<libc_allocator_with_realloc<A> >
      : public libc_allocator_with_realloc<A> {
   public:
    typedef typename libc_allocator_with_realloc<A>::pointer pointer;
    typedef typename libc_allocator_with_realloc<A>::size_type size_type;

    alloc_impl(const libc_allocator_with_realloc<A>& a)
        : libc_allocator_with_realloc<A>(a) { }

    pointer realloc_or_die(pointer ptr, size_type n) {
      pointer retval = this->reallocate(ptr, n);
      if (retval == NULL) {
        fprintf(stderr, "sparsehash: FATAL ERROR: failed to reallocate "
                "%lu elements for ptr %p",
                static_cast<unsigned long>(n), ptr);
        exit(1);
      }
      return retval;
    }
  };

  // Package allocator with emptyval to eliminate memory needed for
  // the zero-size allocator.
  // If new fields are added to this class, we should add them to
  // operator= and swap.
  class ValInfo : public alloc_impl<value_alloc_type> {
   public:
    typedef typename alloc_impl<value_alloc_type>::value_type value_type;

    ValInfo(const alloc_impl<value_alloc_type>& a)
        : alloc_impl<value_alloc_type>(a), emptyval() { }
    ValInfo(const ValInfo& v)
        : alloc_impl<value_alloc_type>(v), emptyval(v.emptyval) { }

    value_type emptyval;    // which key marks unused entries
  };

  // Package functors with another class to eliminate memory needed for
  // zero-size functors.  Since ExtractKey and hasher's operator() might
  // have the same function signature, they must be packaged in
  // different classes.
  struct Settings :
      sparsehash_internal::sh_hashtable_settings<key_type, hasher,
                                                 size_type, HT_MIN_BUCKETS> {
    explicit Settings(const hasher& hf)
        : sparsehash_internal::sh_hashtable_settings<key_type, hasher,
              size_type, HT_MIN_BUCKETS>(
            hf, HT_OCCUPANCY_PCT / 100.0f, HT_EMPTY_PCT / 100.0f) {}
  };

  // Packages ExtractKey and SetKey functors.
  class KeyInfo : public ExtractKey, public SetKey, public EqualKey {
   public:
    KeyInfo(const ExtractKey& ek, const SetKey& sk, const EqualKey& eq)
        : ExtractKey(ek), SetKey(sk), EqualKey(eq) { }

    // We want to return the exact same type as ExtractKey: Key or const Key&
    typename ExtractKey::result_type get_key(const_reference v) const {
      return ExtractKey::operator()(v);
    }
    void set_key(pointer v, const key_type& k) const {
      SetKey::operator()(v, k);
    }
    bool equals(const key_type& a, const key_type& b) const {
      return EqualKey::operator()(a, b);
    }

    // Which key marks deleted entries.
    // TODO(csilvers): make a pointer, and get rid of use_deleted (benchmark!)
    typename base::remove_const<key_type>::type delkey;
  };

  // Utility functions to access the templated operators
  size_type hash(const key_type& v) const {
    return settings.hash(v);
  }
  bool equals(const key_type& a, const key_type& b) const {
    return key_info.equals(a, b);
  }
  typename ExtractKey::result_type get_key(const_reference v) const {
    return key_info.get_key(v);
  }
  void set_key(pointer v, const key_type& k) const {
    key_info.set_key(v, k);
  }

 private:
  // Actual data
  Settings settings;
  KeyInfo key_info;

  size_type num_deleted;  // how many occupied buckets are marked deleted
  size_type num_elements;
  size_type num_buckets;
  ValInfo val_info;       // holds emptyval, and also the allocator
  pointer table;
};

// We need a global swap as well
template <class V, class K, class HF, class ExK, class SetK, class EqK, class A>
inline void swap(dense_hashtable<V,K,HF,ExK,SetK,EqK,A> &x,
                 dense_hashtable<V,K,HF,ExK,SetK,EqK,A> &y) {
  x.swap(y);
}

#undef JUMP_

template <class V, class K, class HF, class ExK, class SetK, class EqK, class A>
const typename dense_hashtable<V,K,HF,ExK,SetK,EqK,A>::size_type
  dense_hashtable<V,K,HF,ExK,SetK,EqK,A>::ILLEGAL_BUCKET;

// How full we let the table get before we resize.  Knuth says .8 is
// good -- higher causes us to probe too much, though saves memory.
// However, we go with .5, getting better performance at the cost of
// more space (a trade-off densehashtable explicitly chooses to make).
// Feel free to play around with different values, though, via
// max_load_factor() and/or set_resizing_parameters().
template <class V, class K, class HF, class ExK, class SetK, class EqK, class A>
const int dense_hashtable<V,K,HF,ExK,SetK,EqK,A>::HT_OCCUPANCY_PCT = 50;

// How empty we let the table get before we resize lower.
// It should be less than OCCUPANCY_PCT / 2 or we thrash resizing.
template <class V, class K, class HF, class ExK, class SetK, class EqK, class A>
const int dense_hashtable<V,K,HF,ExK,SetK,EqK,A>::HT_EMPTY_PCT
    = static_cast<int>(
        0.4 * dense_hashtable<V,K,HF,ExK,SetK,EqK,A>::HT_OCCUPANCY_PCT);

_END_GOOGLE_NAMESPACE_

#endif /* _DENSEHASHTABLE_H_ */
sparsehash-2.0.2/src/sparsehash/sparse_hash_map
// Copyright (c) 2005, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED.
// IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// ---
//
// This is just a very thin wrapper over sparsehashtable.h, just
// like sgi stl's stl_hash_map is a very thin wrapper over
// stl_hashtable.  The major thing we define is operator[], because
// we have a concept of a data_type which stl_hashtable doesn't
// (it only has a key and a value).
//
// We adhere mostly to the STL semantics for hash-map.  One important
// exception is that insert() may invalidate iterators entirely -- STL
// semantics are that insert() may reorder iterators, but they all
// still refer to something valid in the hashtable.  Not so for us.
// Likewise, insert() may invalidate pointers into the hashtable.
// (Whether insert invalidates iterators and pointers depends on
// whether it results in a hashtable resize).  On the plus side,
// delete() doesn't invalidate iterators or pointers at all, or even
// change the ordering of elements.
//
// Here are a few "power user" tips:
//
//    1) set_deleted_key():
//         Unlike STL's hash_map, if you want to use erase() you
//         *must* call set_deleted_key() after construction.
//
//    2) resize(0):
//         When an item is deleted, its memory isn't freed right
//         away.  This is what allows you to iterate over a hashtable
//         and call erase() without invalidating the iterator.
//         To force the memory to be freed, call resize(0).
//         For tr1 compatibility, this can also be called as rehash(0).
//
//    3) min_load_factor(0.0)
//         Setting the minimum load factor to 0.0 guarantees that
//         the hash table will never shrink.
//
// Roughly speaking:
//   (1) dense_hash_map: fastest, uses the most memory unless entries are small
//   (2) sparse_hash_map: slowest, uses the least memory
//   (3) hash_map / unordered_map (STL): in the middle
//
// Typically I use sparse_hash_map when I care about space and/or when
// I need to save the hashtable on disk.  I use hash_map otherwise.  I
// don't personally use dense_hash_map ever; some people use it for
// small maps with lots of lookups.
//
// - dense_hash_map has, typically, about 78% memory overhead (if your
//   data takes up X bytes, the hash_map uses .78X more bytes in overhead).
// - sparse_hash_map has about 4 bits overhead per entry.
// - sparse_hash_map can be 3-7 times slower than the others for lookup and,
//   especially, inserts.  See time_hash_map.cc for details.
//
// See /usr/(local/)?doc/sparsehash-*/sparse_hash_map.html
// for information about how to use this class.

#ifndef _SPARSE_HASH_MAP_H_
#define _SPARSE_HASH_MAP_H_

#include <sparsehash/internal/sparseconfig.h>
#include <stdlib.h>                  // needed by stl_alloc
#include <functional>                // for equal_to<>, select1st<>, etc
#include <memory>                    // for alloc
#include <utility>                   // for pair<>
#include <sparsehash/type_traits.h>
#include <sparsehash/internal/sparsehashtable.h>  // IWYU pragma: export
#include HASH_FUN_H                  // for hash<>

_START_GOOGLE_NAMESPACE_

template <class Key, class T,
          class HashFcn = SPARSEHASH_HASH<Key>,  // defined in sparseconfig.h
          class EqualKey = std::equal_to<Key>,
          class Alloc = libc_allocator_with_realloc<std::pair<const Key, T> > >
class sparse_hash_map {
 private:
  // Apparently select1st is not stl-standard, so we define our own
  struct SelectKey {
    typedef const Key& result_type;
    const Key& operator()(const std::pair<const Key, T>& p) const {
      return p.first;
    }
  };
  struct SetKey {
    void operator()(std::pair<const Key, T>* value, const Key& new_key) const {
      *const_cast<Key*>(&value->first) = new_key;
      // It would be nice to clear the rest of value here as well, in
      // case it's taking up a lot of memory.  We do this by clearing
      // the value.  This assumes T has a zero-arg constructor!
      value->second = T();
    }
  };
  // For operator[].
  struct DefaultValue {
    std::pair<const Key, T> operator()(const Key& key) {
      return std::make_pair(key, T());
    }
  };

  // The actual data
  typedef sparse_hashtable<std::pair<const Key, T>, Key, HashFcn, SelectKey,
                           SetKey, EqualKey, Alloc> ht;
  ht rep;

 public:
  typedef typename ht::key_type key_type;
  typedef T data_type;
  typedef T mapped_type;
  typedef typename ht::value_type value_type;
  typedef typename ht::hasher hasher;
  typedef typename ht::key_equal key_equal;
  typedef Alloc allocator_type;

  typedef typename ht::size_type size_type;
  typedef typename ht::difference_type difference_type;
  typedef typename ht::pointer pointer;
  typedef typename ht::const_pointer const_pointer;
  typedef typename ht::reference reference;
  typedef typename ht::const_reference const_reference;

  typedef typename ht::iterator iterator;
  typedef typename ht::const_iterator const_iterator;
  typedef typename ht::local_iterator local_iterator;
  typedef typename ht::const_local_iterator const_local_iterator;

  // Iterator functions
  iterator begin()             { return rep.begin(); }
  iterator end()               { return rep.end(); }
  const_iterator begin() const { return rep.begin(); }
  const_iterator end() const   { return rep.end(); }

  // These come from tr1's unordered_map. For us, a bucket has 0 or 1 elements.
  local_iterator begin(size_type i)             { return rep.begin(i); }
  local_iterator end(size_type i)               { return rep.end(i); }
  const_local_iterator begin(size_type i) const { return rep.begin(i); }
  const_local_iterator end(size_type i) const   { return rep.end(i); }

  // Accessor functions
  allocator_type get_allocator() const { return rep.get_allocator(); }
  hasher hash_funct() const            { return rep.hash_funct(); }
  hasher hash_function() const         { return hash_funct(); }
  key_equal key_eq() const             { return rep.key_eq(); }

  // Constructors
  explicit sparse_hash_map(size_type expected_max_items_in_table = 0,
                           const hasher& hf = hasher(),
                           const key_equal& eql = key_equal(),
                           const allocator_type& alloc = allocator_type())
      : rep(expected_max_items_in_table, hf, eql,
            SelectKey(), SetKey(), alloc) {
  }

  template <class InputIterator>
  sparse_hash_map(InputIterator f, InputIterator l,
                  size_type expected_max_items_in_table = 0,
                  const hasher& hf = hasher(),
                  const key_equal& eql = key_equal(),
                  const allocator_type& alloc = allocator_type())
      : rep(expected_max_items_in_table, hf, eql,
            SelectKey(), SetKey(), alloc) {
    rep.insert(f, l);
  }
  // We use the default copy constructor
  // We use the default operator=()
  // We use the default destructor

  void clear()                   { rep.clear(); }
  void swap(sparse_hash_map& hs) { rep.swap(hs.rep); }

  // Functions concerning size
  size_type size() const             { return rep.size(); }
  size_type max_size() const         { return rep.max_size(); }
  bool empty() const                 { return rep.empty(); }
  size_type bucket_count() const     { return rep.bucket_count(); }
  size_type max_bucket_count() const { return rep.max_bucket_count(); }

  // These are tr1 methods.  bucket() is the bucket the key is or would be in.
  size_type bucket_size(size_type i) const    { return rep.bucket_size(i); }
  size_type bucket(const key_type& key) const { return rep.bucket(key); }
  float load_factor() const {
    return size() * 1.0f / bucket_count();
  }
  float max_load_factor() const {
    float shrink, grow;
    rep.get_resizing_parameters(&shrink, &grow);
    return grow;
  }
  void max_load_factor(float new_grow) {
    float shrink, grow;
    rep.get_resizing_parameters(&shrink, &grow);
    rep.set_resizing_parameters(shrink, new_grow);
  }
  // These aren't tr1 methods but perhaps ought to be.
  float min_load_factor() const {
    float shrink, grow;
    rep.get_resizing_parameters(&shrink, &grow);
    return shrink;
  }
  void min_load_factor(float new_shrink) {
    float shrink, grow;
    rep.get_resizing_parameters(&shrink, &grow);
    rep.set_resizing_parameters(new_shrink, grow);
  }
  // Deprecated; use min_load_factor() or max_load_factor() instead.
  void set_resizing_parameters(float shrink, float grow) {
    rep.set_resizing_parameters(shrink, grow);
  }

  void resize(size_type hint)    { rep.resize(hint); }
  void rehash(size_type hint)    { resize(hint); }      // the tr1 name

  // Lookup routines
  iterator find(const key_type& key)             { return rep.find(key); }
  const_iterator find(const key_type& key) const { return rep.find(key); }

  data_type& operator[](const key_type& key) {  // This is our value-add!
    // If key is in the hashtable, returns find(key)->second,
    // otherwise returns insert(value_type(key, T()).first->second.
    // Note it does not create an empty T unless the find fails.
    return rep.template find_or_insert<DefaultValue>(key).second;
  }

  size_type count(const key_type& key) const { return rep.count(key); }

  std::pair<iterator, iterator> equal_range(const key_type& key) {
    return rep.equal_range(key);
  }
  std::pair<const_iterator, const_iterator>
  equal_range(const key_type& key) const {
    return rep.equal_range(key);
  }

  // Insertion routines
  std::pair<iterator, bool> insert(const value_type& obj) {
    return rep.insert(obj);
  }
  template <class InputIterator>
  void insert(InputIterator f, InputIterator l)   { rep.insert(f, l); }
  void insert(const_iterator f, const_iterator l) { rep.insert(f, l); }
  // Required for std::insert_iterator; the passed-in iterator is ignored.
  iterator insert(iterator, const value_type& obj) { return insert(obj).first; }

  // Deletion routines
  // THESE ARE NON-STANDARD!  I make you specify an "impossible" key
  // value to identify deleted buckets.  You can change the key as
  // time goes on, or get rid of it entirely to be insert-only.
  void set_deleted_key(const key_type& key) { rep.set_deleted_key(key); }
  void clear_deleted_key()                  { rep.clear_deleted_key(); }
  key_type deleted_key() const              { return rep.deleted_key(); }

  // These are standard
  size_type erase(const key_type& key) { return rep.erase(key); }
  void erase(iterator it)              { rep.erase(it); }
  void erase(iterator f, iterator l)   { rep.erase(f, l); }

  // Comparison
  bool operator==(const sparse_hash_map& hs) const { return rep == hs.rep; }
  bool operator!=(const sparse_hash_map& hs) const { return rep != hs.rep; }

  // I/O -- this is an add-on for writing metainformation to disk
  //
  // For maximum flexibility, this does not assume a particular
  // file type (though it will probably be a FILE *).  We just pass
  // the fp through to rep.

  // If your keys and values are simple enough, you can pass this
  // serializer to serialize()/unserialize().  "Simple enough" means
  // value_type is a POD type that contains no pointers.  Note,
  // however, we don't try to normalize endianness.
  typedef typename ht::NopointerSerializer NopointerSerializer;

  // serializer: a class providing operator()(OUTPUT*, const value_type&)
  //    (writing value_type to OUTPUT).  You can specify a
  //    NopointerSerializer object if appropriate (see above).
  // fp: either a FILE*, OR an ostream*/subclass_of_ostream*, OR a
  //    pointer to a class providing size_t Write(const void*, size_t),
  //    which writes a buffer into a stream (which fp presumably
  //    owns) and returns the number of bytes successfully written.
  //    Note basic_ostream<not_char> is not currently supported.
  template <typename ValueSerializer, typename OUTPUT>
  bool serialize(ValueSerializer serializer, OUTPUT* fp) {
    return rep.serialize(serializer, fp);
  }

  // serializer: a functor providing operator()(INPUT*, value_type*)
  //    (reading from INPUT and into value_type).  You can specify a
  //    NopointerSerializer object if appropriate (see above).
  // fp: either a FILE*, OR an istream*/subclass_of_istream*, OR a
  //    pointer to a class providing size_t Read(void*, size_t),
  //    which reads into a buffer from a stream (which fp presumably
  //    owns) and returns the number of bytes successfully read.
  //    Note basic_istream<not_char> is not currently supported.
  // NOTE: Since value_type is std::pair<const Key, T>, ValueSerializer
  // may need to do a const cast in order to fill in the key.
  // NOTE: if Key or T are not POD types, the serializer MUST use
  // placement-new to initialize their values, rather than a normal
  // equals-assignment or similar.  (The value_type* passed into the
  // serializer points to garbage memory.)
  template <typename ValueSerializer, typename INPUT>
  bool unserialize(ValueSerializer serializer, INPUT* fp) {
    return rep.unserialize(serializer, fp);
  }

  // The four methods below are DEPRECATED.
  // Use serialize() and unserialize() for new code.
  template <typename OUTPUT>
  bool write_metadata(OUTPUT *fp)       { return rep.write_metadata(fp); }
  template <typename INPUT>
  bool read_metadata(INPUT *fp)         { return rep.read_metadata(fp); }
  template <typename OUTPUT>
  bool write_nopointer_data(OUTPUT *fp) { return rep.write_nopointer_data(fp); }
  template <typename INPUT>
  bool read_nopointer_data(INPUT *fp)   { return rep.read_nopointer_data(fp); }
};

// We need a global swap as well
template <class Key, class T, class HashFcn, class EqualKey, class Alloc>
inline void swap(sparse_hash_map<Key, T, HashFcn, EqualKey, Alloc>& hm1,
                 sparse_hash_map<Key, T, HashFcn, EqualKey, Alloc>& hm2) {
  hm1.swap(hm2);
}

_END_GOOGLE_NAMESPACE_

#endif /* _SPARSE_HASH_MAP_H_ */
sparsehash-2.0.2/src/sparsehash/sparse_hash_set
// Copyright (c) 2005, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED.
// IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// ---
//
// This is just a very thin wrapper over sparsehashtable.h, just
// like sgi stl's stl_hash_set is a very thin wrapper over
// stl_hashtable.  The major thing we define is operator[], because
// we have a concept of a data_type which stl_hashtable doesn't
// (it only has a key and a value).
//
// This is more different from sparse_hash_map than you might think,
// because all iterators for sets are const (you obviously can't
// change the key, and for sets there is no value).
//
// We adhere mostly to the STL semantics for hash-map.  One important
// exception is that insert() may invalidate iterators entirely -- STL
// semantics are that insert() may reorder iterators, but they all
// still refer to something valid in the hashtable.  Not so for us.
// Likewise, insert() may invalidate pointers into the hashtable.
// (Whether insert invalidates iterators and pointers depends on
// whether it results in a hashtable resize).  On the plus side,
// delete() doesn't invalidate iterators or pointers at all, or even
// change the ordering of elements.
//
// Here are a few "power user" tips:
//
//    1) set_deleted_key():
//         Unlike STL's hash_map, if you want to use erase() you
//         *must* call set_deleted_key() after construction.
//
//    2) resize(0):
//         When an item is deleted, its memory isn't freed right
//         away.  This allows you to iterate over a hashtable,
//         and call erase(), without invalidating the iterator.
//         To force the memory to be freed, call resize(0).
//         For tr1 compatibility, this can also be called as rehash(0).
//
//    3) min_load_factor(0.0)
//         Setting the minimum load factor to 0.0 guarantees that
//         the hash table will never shrink.
//
// Roughly speaking:
//   (1) dense_hash_set: fastest, uses the most memory unless entries are small
//   (2) sparse_hash_set: slowest, uses the least memory
//   (3) hash_set / unordered_set (STL): in the middle
//
// Typically I use sparse_hash_set when I care about space and/or when
// I need to save the hashtable on disk.  I use hash_set otherwise.  I
// don't personally use dense_hash_set ever; some people use it for
// small sets with lots of lookups.
//
// - dense_hash_set has, typically, about 78% memory overhead (if your
//   data takes up X bytes, the hash_set uses .78X more bytes in overhead).
// - sparse_hash_set has about 4 bits overhead per entry.
// - sparse_hash_set can be 3-7 times slower than the others for lookup and,
//   especially, inserts.  See time_hash_map.cc for details.
//
// See /usr/(local/)?doc/sparsehash-*/sparse_hash_set.html
// for information about how to use this class.
#ifndef _SPARSE_HASH_SET_H_
#define _SPARSE_HASH_SET_H_

#include <sparsehash/internal/sparseconfig.h>
#include <stdlib.h>                  // needed by stl_alloc
#include <functional>                // for equal_to<>
#include <memory>                    // for alloc (which we don't use)
#include <utility>                   // for pair<>
#include <sparsehash/type_traits.h>
#include <sparsehash/internal/sparsehashtable.h>  // IWYU pragma: export
#include HASH_FUN_H                  // for hash<>

_START_GOOGLE_NAMESPACE_

template <class Value,
          class HashFcn = SPARSEHASH_HASH<Value>,  // defined in sparseconfig.h
          class EqualKey = std::equal_to<Value>,
          class Alloc = libc_allocator_with_realloc<Value> >
class sparse_hash_set {
 private:
  // Apparently identity is not stl-standard, so we define our own
  struct Identity {
    typedef const Value& result_type;
    const Value& operator()(const Value& v) const { return v; }
  };
  struct SetKey {
    void operator()(Value* value, const Value& new_key) const {
      *value = new_key;
    }
  };

  typedef sparse_hashtable<Value, Value, HashFcn, Identity, SetKey,
                           EqualKey, Alloc> ht;
  ht rep;

 public:
  typedef typename ht::key_type key_type;
  typedef typename ht::value_type value_type;
  typedef typename ht::hasher hasher;
  typedef typename ht::key_equal key_equal;
  typedef Alloc allocator_type;

  typedef typename ht::size_type size_type;
  typedef typename ht::difference_type difference_type;
  typedef typename ht::const_pointer pointer;
  typedef typename ht::const_pointer const_pointer;
  typedef typename ht::const_reference reference;
  typedef typename ht::const_reference const_reference;

  typedef typename ht::const_iterator iterator;
  typedef typename ht::const_iterator const_iterator;
  typedef typename ht::const_local_iterator local_iterator;
  typedef typename ht::const_local_iterator const_local_iterator;

  // Iterator functions -- recall all iterators are const
  iterator begin() const { return rep.begin(); }
  iterator end() const   { return rep.end(); }

  // These come from tr1's unordered_set. For us, a bucket has 0 or 1 elements.
  local_iterator begin(size_type i) const { return rep.begin(i); }
  local_iterator end(size_type i) const   { return rep.end(i); }

  // Accessor functions
  allocator_type get_allocator() const { return rep.get_allocator(); }
  hasher hash_funct() const            { return rep.hash_funct(); }
  hasher hash_function() const         { return hash_funct(); }  // tr1 name
  key_equal key_eq() const             { return rep.key_eq(); }

  // Constructors
  explicit sparse_hash_set(size_type expected_max_items_in_table = 0,
                           const hasher& hf = hasher(),
                           const key_equal& eql = key_equal(),
                           const allocator_type& alloc = allocator_type())
      : rep(expected_max_items_in_table, hf, eql,
            Identity(), SetKey(), alloc) {
  }

  template <class InputIterator>
  sparse_hash_set(InputIterator f, InputIterator l,
                  size_type expected_max_items_in_table = 0,
                  const hasher& hf = hasher(),
                  const key_equal& eql = key_equal(),
                  const allocator_type& alloc = allocator_type())
      : rep(expected_max_items_in_table, hf, eql,
            Identity(), SetKey(), alloc) {
    rep.insert(f, l);
  }
  // We use the default copy constructor
  // We use the default operator=()
  // We use the default destructor

  void clear()                   { rep.clear(); }
  void swap(sparse_hash_set& hs) { rep.swap(hs.rep); }

  // Functions concerning size
  size_type size() const             { return rep.size(); }
  size_type max_size() const         { return rep.max_size(); }
  bool empty() const                 { return rep.empty(); }
  size_type bucket_count() const     { return rep.bucket_count(); }
  size_type max_bucket_count() const { return rep.max_bucket_count(); }

  // These are tr1 methods.  bucket() is the bucket the key is or would be in.
  size_type bucket_size(size_type i) const    { return rep.bucket_size(i); }
  size_type bucket(const key_type& key) const { return rep.bucket(key); }
  float load_factor() const {
    return size() * 1.0f / bucket_count();
  }
  float max_load_factor() const {
    float shrink, grow;
    rep.get_resizing_parameters(&shrink, &grow);
    return grow;
  }
  void max_load_factor(float new_grow) {
    float shrink, grow;
    rep.get_resizing_parameters(&shrink, &grow);
    rep.set_resizing_parameters(shrink, new_grow);
  }
  // These aren't tr1 methods but perhaps ought to be.
  float min_load_factor() const {
    float shrink, grow;
    rep.get_resizing_parameters(&shrink, &grow);
    return shrink;
  }
  void min_load_factor(float new_shrink) {
    float shrink, grow;
    rep.get_resizing_parameters(&shrink, &grow);
    rep.set_resizing_parameters(new_shrink, grow);
  }
  // Deprecated; use min_load_factor() or max_load_factor() instead.
  void set_resizing_parameters(float shrink, float grow) {
    rep.set_resizing_parameters(shrink, grow);
  }

  void resize(size_type hint)    { rep.resize(hint); }
  void rehash(size_type hint)    { resize(hint); }      // the tr1 name

  // Lookup routines
  iterator find(const key_type& key) const { return rep.find(key); }

  size_type count(const key_type& key) const { return rep.count(key); }

  std::pair<iterator, iterator> equal_range(const key_type& key) const {
    return rep.equal_range(key);
  }

  // Insertion routines
  std::pair<iterator, bool> insert(const value_type& obj) {
    std::pair<typename ht::iterator, bool> p = rep.insert(obj);
    return std::pair<iterator, bool>(p.first, p.second);  // const to non-const
  }
  template <class InputIterator>
  void insert(InputIterator f, InputIterator l)   { rep.insert(f, l); }
  void insert(const_iterator f, const_iterator l) { rep.insert(f, l); }
  // Required for std::insert_iterator; the passed-in iterator is ignored.
  iterator insert(iterator, const value_type& obj) { return insert(obj).first; }

  // Deletion routines
  // THESE ARE NON-STANDARD!  I make you specify an "impossible" key
  // value to identify deleted buckets.  You can change the key as
  // time goes on, or get rid of it entirely to be insert-only.
  void set_deleted_key(const key_type& key) { rep.set_deleted_key(key); }
  void clear_deleted_key()                  { rep.clear_deleted_key(); }
  key_type deleted_key() const              { return rep.deleted_key(); }

  // These are standard
  size_type erase(const key_type& key) { return rep.erase(key); }
  void erase(iterator it)              { rep.erase(it); }
  void erase(iterator f, iterator l)   { rep.erase(f, l); }

  // Comparison
  bool operator==(const sparse_hash_set& hs) const { return rep == hs.rep; }
  bool operator!=(const sparse_hash_set& hs) const { return rep != hs.rep; }

  // I/O -- this is an add-on for writing metainformation to disk
  //
  // For maximum flexibility, this does not assume a particular
  // file type (though it will probably be a FILE *).  We just pass
  // the fp through to rep.

  // If your keys and values are simple enough, you can pass this
  // serializer to serialize()/unserialize().  "Simple enough" means
  // value_type is a POD type that contains no pointers.  Note,
  // however, we don't try to normalize endianness.
  typedef typename ht::NopointerSerializer NopointerSerializer;

  // serializer: a class providing operator()(OUTPUT*, const value_type&)
  //    (writing value_type to OUTPUT).  You can specify a
  //    NopointerSerializer object if appropriate (see above).
  // fp: either a FILE*, OR an ostream*/subclass_of_ostream*, OR a
  //    pointer to a class providing size_t Write(const void*, size_t),
  //    which writes a buffer into a stream (which fp presumably
  //    owns) and returns the number of bytes successfully written.
  //    Note basic_ostream<not_char> is not currently supported.
  template <typename ValueSerializer, typename OUTPUT>
  bool serialize(ValueSerializer serializer, OUTPUT* fp) {
    return rep.serialize(serializer, fp);
  }

  // serializer: a functor providing operator()(INPUT*, value_type*)
  //    (reading from INPUT and into value_type).  You can specify a
  //    NopointerSerializer object if appropriate (see above).
  // fp: either a FILE*, OR an istream*/subclass_of_istream*, OR a
  //    pointer to a class providing size_t Read(void*, size_t),
  //    which reads into a buffer from a stream (which fp presumably
  //    owns) and returns the number of bytes successfully read.
  //    Note basic_istream<not_char> is not currently supported.
  // NOTE: Since value_type is const Key, ValueSerializer
  // may need to do a const cast in order to fill in the key.
  // NOTE: if Key is not a POD type, the serializer MUST use
  // placement-new to initialize its value, rather than a normal
  // equals-assignment or similar.  (The value_type* passed into
  // the serializer points to garbage memory.)
  template <typename ValueSerializer, typename INPUT>
  bool unserialize(ValueSerializer serializer, INPUT* fp) {
    return rep.unserialize(serializer, fp);
  }

  // The four methods below are DEPRECATED.
  // Use serialize() and unserialize() for new code.
  template <typename OUTPUT>
  bool write_metadata(OUTPUT *fp)       { return rep.write_metadata(fp); }
  template <typename INPUT>
  bool read_metadata(INPUT *fp)         { return rep.read_metadata(fp); }
  template <typename OUTPUT>
  bool write_nopointer_data(OUTPUT *fp) { return rep.write_nopointer_data(fp); }
  template <typename INPUT>
  bool read_nopointer_data(INPUT *fp)   { return rep.read_nopointer_data(fp); }
};

template <class Val, class HashFcn, class EqualKey, class Alloc>
inline void swap(sparse_hash_set<Val, HashFcn, EqualKey, Alloc>& hs1,
                 sparse_hash_set<Val, HashFcn, EqualKey, Alloc>& hs2) {
  hs1.swap(hs2);
}

_END_GOOGLE_NAMESPACE_

#endif /* _SPARSE_HASH_SET_H_ */
sparsehash-2.0.2/src/sparsehash/type_traits.h
// Copyright (c) 2006, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// ----
//
// This code is compiled directly on many platforms, including client
// platforms like Windows, Mac, and embedded systems.  Before making
// any changes here, make sure that you're not breaking any platforms.
//
// Define a small subset of tr1 type traits.  The traits we define are:
//   is_integral
//   is_floating_point
//   is_pointer
//   is_enum
//   is_reference
//   is_pod
//   has_trivial_constructor
//   has_trivial_copy
//   has_trivial_assign
//   has_trivial_destructor
//   remove_const
//   remove_volatile
//   remove_cv
//   remove_reference
//   add_reference
//   remove_pointer
//   is_same
//   is_convertible
// We can add more type traits as required.
#ifndef BASE_TYPE_TRAITS_H_
#define BASE_TYPE_TRAITS_H_

#include <sparsehash/internal/sparseconfig.h>
#include <utility>                  // For pair
#include <sparsehash/template_util.h>  // For true_type and false_type

_START_GOOGLE_NAMESPACE_

template <class T> struct is_integral;
template <class T> struct is_floating_point;
template <class T> struct is_pointer;
// MSVC can't compile this correctly, and neither can gcc 3.3.5 (at least)
#if !defined(_MSC_VER) && !(defined(__GNUC__) && __GNUC__ <= 3)
// is_enum uses is_convertible, which is not available on MSVC.
template <class T> struct is_enum;
#endif
template <class T> struct is_reference;
template <class T> struct is_pod;
template <class T> struct has_trivial_constructor;
template <class T> struct has_trivial_copy;
template <class T> struct has_trivial_assign;
template <class T> struct has_trivial_destructor;
template <class T> struct remove_const;
template <class T> struct remove_volatile;
template <class T> struct remove_cv;
template <class T> struct remove_reference;
template <class T> struct add_reference;
template <class T> struct remove_pointer;
template <class T, class U> struct is_same;
#if !defined(_MSC_VER) && !(defined(__GNUC__) && __GNUC__ <= 3)
template <class From, class To> struct is_convertible;
#endif

// is_integral is false except for the built-in integer types.  A
// cv-qualified type is integral if and only if the underlying type is.
template <class T> struct is_integral : false_type { };
template<> struct is_integral<bool> : true_type { };
template<> struct is_integral<char> : true_type { };
template<> struct is_integral<unsigned char> : true_type { };
template<> struct is_integral<signed char> : true_type { };
#if defined(_MSC_VER)
// wchar_t is not by default a distinct type from unsigned short in
// Microsoft C.
// See http://msdn2.microsoft.com/en-us/library/dh8che7s(VS.80).aspx
template<> struct is_integral<__wchar_t> : true_type { };
#else
template<> struct is_integral<wchar_t> : true_type { };
#endif
template<> struct is_integral<short> : true_type { };
template<> struct is_integral<unsigned short> : true_type { };
template<> struct is_integral<int> : true_type { };
template<> struct is_integral<unsigned int> : true_type { };
template<> struct is_integral<long> : true_type { };
template<> struct is_integral<unsigned long> : true_type { };
#ifdef HAVE_LONG_LONG
template<> struct is_integral<long long> : true_type { };
template<> struct is_integral<unsigned long long> : true_type { };
#endif
template <class T> struct is_integral<const T> : is_integral<T> { };
template <class T> struct is_integral<volatile T> : is_integral<T> { };
template <class T> struct is_integral<const volatile T> : is_integral<T> { };

// is_floating_point is false except for the built-in floating-point types.
// A cv-qualified type is floating-point if and only if the underlying
// type is.
template <class T> struct is_floating_point : false_type { };
template<> struct is_floating_point<float> : true_type { };
template<> struct is_floating_point<double> : true_type { };
template<> struct is_floating_point<long double> : true_type { };
template <class T> struct is_floating_point<const T>
    : is_floating_point<T> { };
template <class T> struct is_floating_point<volatile T>
    : is_floating_point<T> { };
template <class T> struct is_floating_point<const volatile T>
    : is_floating_point<T> { };

// is_pointer is false except for pointer types.  A cv-qualified type (e.g.
// "int* const", as opposed to "int const*") is a pointer if and only if
// the underlying type is.
template <class T> struct is_pointer : false_type { };
template <class T> struct is_pointer<T*> : true_type { };
template <class T> struct is_pointer<const T> : is_pointer<T> { };
template <class T> struct is_pointer<volatile T> : is_pointer<T> { };
template <class T> struct is_pointer<const volatile T> : is_pointer<T> { };

#if !defined(_MSC_VER) && !(defined(__GNUC__) && __GNUC__ <= 3)

namespace internal {

template <class T> struct is_class_or_union {
  template <class U> static small_ tester(void (U::*)());
  template <class U> static big_ tester(...);
  static const bool value = sizeof(tester<T>(0)) == sizeof(small_);
};

// is_convertible chokes if the first argument is an array.  That's why
// we use add_reference here.
template <bool NotUnum, class T> struct is_enum_impl
    : is_convertible<typename add_reference<T>::type, int> { };
template <class T> struct is_enum_impl<true, T> : false_type { };

}  // namespace internal

// Specified by TR1 [4.5.1] primary type categories.

// Implementation note:
//
// Each type is either void, integral, floating point, array, pointer,
// reference, member object pointer, member function pointer, enum,
// union or class.  Out of these, only integral, floating point, reference,
// class and enum types are potentially convertible to int.  Therefore,
// if a type is not a reference, integral, floating point or class and
// is convertible to int, it's an enum.  Adding cv-qualification to a type
// does not change whether it's an enum.
//
// Is-convertible-to-int check is done only if all other checks pass,
// because it can't be used with some types (e.g. void or classes with
// inaccessible conversion operators).
template <class T> struct is_enum
    : internal::is_enum_impl<
          is_same<T, void>::value ||
              is_integral<T>::value ||
              is_floating_point<T>::value ||
              is_reference<T>::value ||
              internal::is_class_or_union<T>::value,
          T> { };

template <class T> struct is_enum<const T> : is_enum<T> { };
template <class T> struct is_enum<volatile T> : is_enum<T> { };
template <class T> struct is_enum<const volatile T> : is_enum<T> { };

#endif

// is_reference is false except for reference types.
template<typename T> struct is_reference : false_type {};
template<typename T> struct is_reference<T&> : true_type {};

// We can't get is_pod right without compiler help, so fail conservatively.
// We will assume it's false except for arithmetic types, enumerations,
// pointers and cv-qualified versions thereof.  Note that std::pair<T, U>
// is not a POD even if T and U are PODs.
template <class T> struct is_pod
    : integral_constant<bool, (is_integral<T>::value ||
                               is_floating_point<T>::value ||
#if !defined(_MSC_VER) && !(defined(__GNUC__) && __GNUC__ <= 3)
                               // is_enum is not available on MSVC.
                               is_enum<T>::value ||
#endif
                               is_pointer<T>::value)> { };
template <class T> struct is_pod<const T> : is_pod<T> { };
template <class T> struct is_pod<volatile T> : is_pod<T> { };
template <class T> struct is_pod<const volatile T> : is_pod<T> { };

// We can't get has_trivial_constructor right without compiler help, so
// fail conservatively.  We will assume it's false except for: (1) types
// for which is_pod is true. (2) std::pair of types with trivial
// constructors. (3) array of a type with a trivial constructor.
// (4) const versions thereof.
template <class T> struct has_trivial_constructor : is_pod<T> { };
template <class T, class U> struct has_trivial_constructor<std::pair<T, U> >
    : integral_constant<bool,
                        (has_trivial_constructor<T>::value &&
                         has_trivial_constructor<U>::value)> { };
template <class A, int N> struct has_trivial_constructor<A[N]>
    : has_trivial_constructor<A> { };
template <class T> struct has_trivial_constructor<const T>
    : has_trivial_constructor<T> { };

// We can't get has_trivial_copy right without compiler help, so fail
// conservatively.  We will assume it's false except for: (1) types
// for which is_pod is true. (2) std::pair of types with trivial copy
// constructors. (3) array of a type with a trivial copy constructor.
// (4) const versions thereof.
template <class T> struct has_trivial_copy : is_pod<T> { };
template <class T, class U> struct has_trivial_copy<std::pair<T, U> >
    : integral_constant<bool,
                        (has_trivial_copy<T>::value &&
                         has_trivial_copy<U>::value)> { };
template <class A, int N> struct has_trivial_copy<A[N]>
    : has_trivial_copy<A> { };
template <class T> struct has_trivial_copy<const T> : has_trivial_copy<T> { };

// We can't get has_trivial_assign right without compiler help, so fail
// conservatively.  We will assume it's false except for: (1) types
// for which is_pod is true. (2) std::pair of types with trivial copy
// constructors. (3) array of a type with a trivial assign constructor.
template <class T> struct has_trivial_assign : is_pod<T> { };
template <class T, class U> struct has_trivial_assign<std::pair<T, U> >
    : integral_constant<bool,
                        (has_trivial_assign<T>::value &&
                         has_trivial_assign<U>::value)> { };
template <class A, int N> struct has_trivial_assign<A[N]>
    : has_trivial_assign<A> { };

// We can't get has_trivial_destructor right without compiler help, so
// fail conservatively.  We will assume it's false except for: (1) types
// for which is_pod is true. (2) std::pair of types with trivial
// destructors. (3) array of a type with a trivial destructor.
// (4) const versions thereof.
template <class T> struct has_trivial_destructor : is_pod<T> { };
template <class T, class U> struct has_trivial_destructor<std::pair<T, U> >
    : integral_constant<bool,
                        (has_trivial_destructor<T>::value &&
                         has_trivial_destructor<U>::value)> { };
template <class A, int N> struct has_trivial_destructor<A[N]>
    : has_trivial_destructor<A> { };
template <class T> struct has_trivial_destructor<const T>
    : has_trivial_destructor<T> { };

// Specified by TR1 [4.7.1]
template<typename T> struct remove_const { typedef T type; };
template<typename T> struct remove_const<T const> { typedef T type; };
template<typename T> struct remove_volatile { typedef T type; };
template<typename T> struct remove_volatile<T volatile> { typedef T type; };
template<typename T> struct remove_cv {
  typedef typename remove_const<typename remove_volatile<T>::type>::type type;
};

// Specified by TR1 [4.7.2] Reference modifications.
template<typename T> struct remove_reference { typedef T type; };
template<typename T> struct remove_reference<T&> { typedef T type; };

template <typename T> struct add_reference { typedef T& type; };
template <typename T> struct add_reference<T&> { typedef T& type; };

// Specified by TR1 [4.7.4] Pointer modifications.
template<typename T> struct remove_pointer { typedef T type; };
template<typename T> struct remove_pointer<T*> { typedef T type; };
template<typename T> struct remove_pointer<T* const> { typedef T type; };
template<typename T> struct remove_pointer<T* volatile> { typedef T type; };
template<typename T> struct remove_pointer<T* const volatile> {
  typedef T type; };

// Specified by TR1 [4.6] Relationships between types
template<typename T, typename U> struct is_same : public false_type { };
template<typename T> struct is_same<T, T> : public true_type { };

// Specified by TR1 [4.6] Relationships between types
#if !defined(_MSC_VER) && !(defined(__GNUC__) && __GNUC__ <= 3)
namespace internal {

// This class is an implementation detail for is_convertible, and you
// don't need to know how it works to use is_convertible.  For those
// who care: we declare two different functions, one whose argument is
// of type To and one with a variadic argument list.  We give them
// return types of different size, so we can use sizeof to trick the
// compiler into telling us which function it would have chosen if we
// had called it with an argument of type From.  See Alexandrescu's
// _Modern C++ Design_ for more details on this sort of trick.

template <typename From, typename To>
struct ConvertHelper {
  static small_ Test(To);
  static big_ Test(...);
  static From Create();
};

}  // namespace internal

// Inherits from true_type if From is convertible to To, false_type otherwise.
template <typename From, typename To>
struct is_convertible
    : integral_constant<bool,
                        sizeof(internal::ConvertHelper<From, To>::Test(
                                   internal::ConvertHelper<From, To>::Create()))
                        == sizeof(small_)> {
};
#endif

_END_GOOGLE_NAMESPACE_

// Right now these macros are no-ops, and mostly just document the fact
// these types are PODs, for human use.  They may be made more contentful
// later.  The typedef is just to make it legal to put a semicolon after
// these macros.
#define DECLARE_POD(TypeName) typedef int Dummy_Type_For_DECLARE_POD
#define DECLARE_NESTED_POD(TypeName) DECLARE_POD(TypeName)
#define PROPAGATE_POD_FROM_TEMPLATE_ARGUMENT(TemplateName) \
    typedef int Dummy_Type_For_PROPAGATE_POD_FROM_TEMPLATE_ARGUMENT
#define ENFORCE_POD(TypeName) typedef int Dummy_Type_For_ENFORCE_POD

#endif  // BASE_TYPE_TRAITS_H_
sparsehash-2.0.2/src/sparsehash/sparsetable0000664000175000017500000023045611721252346015774 00000000000000
// Copyright (c) 2005, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// ---
//
//
// A sparsetable is a random container that implements a sparse array,
// that is, an array that uses very little memory to store unassigned
// indices (in this case, between 1-2 bits per unassigned index).  For
// instance, if you allocate an array of size 5 and assign a[2] =
// <big struct>, then a[2] will take up a lot of memory but a[0], a[1],
// a[3], and a[4] will not.  Array elements that have a value are
// called "assigned".  Array elements that have no value yet, or have
// had their value cleared using erase() or clear(), are called
// "unassigned".
//
// Unassigned values seem to have the default value of T (see below).
// Nevertheless, there is a difference between an unassigned index and
// one explicitly assigned the value of T().  The latter is considered
// assigned.
//
// Access to an array element is constant time, as is insertion and
// deletion.  Insertion and deletion may be fairly slow, however:
// because of this container's memory economy, each insert and delete
// causes a memory reallocation.
//
// NOTE: You should not test(), get(), or set() any index that is
// greater than sparsetable.size().  If you need to do that, call
// resize() first.
//
// --- Template parameters
// PARAMETER   DESCRIPTION                          DEFAULT
// T           The value of the array: the type of  --
//             object that is stored in the array.
//
// GROUP_SIZE  How large each "group" in the table  48
//             is (see below).  Larger values use
//             a little less memory but cause most
//             operations to be a little slower
//
// Alloc:      Allocator to use to allocate memory. libc_allocator_with_realloc
//
// --- Model of
// Random Access Container
//
// --- Type requirements
// T must be Copy Constructible.  It need not be Assignable.
//
// --- Public base classes
// None.
//
// --- Members
// Type members
//
// MEMBER                 WHERE DEFINED  DESCRIPTION
// value_type             container      The type of object, T, stored in the array
// allocator_type         container      Allocator to use
// pointer                container      Pointer to p
// const_pointer          container      Const pointer to p
// reference              container      Reference to t
// const_reference        container      Const reference to t
// size_type              container      An unsigned integral type
// difference_type        container      A signed integral type
// iterator [*]           container      Iterator used to iterate over a sparsetable
// const_iterator         container      Const iterator used to iterate over a table
// reverse_iterator       reversible     Iterator used to iterate backwards over
//                        container      a sparsetable
// const_reverse_iterator reversible container   Guess
// nonempty_iterator [+]  sparsetable    Iterates over assigned
//                                       array elements only
// const_nonempty_iterator  sparsetable  Iterates over assigned
//                                       array elements only
// reverse_nonempty_iterator  sparsetable  Iterates backwards over
//                                       assigned array elements only
// const_reverse_nonempty_iterator  sparsetable  Iterates backwards over
//                                       assigned array elements only
//
// [*] All iterators are const in a sparsetable (though nonempty_iterators
//     may not be).  Use get() and set() to assign values, not iterators.
//
// [+] iterators are random-access iterators.  nonempty_iterators are
//     bidirectional iterators.

// Iterator members
// MEMBER                   WHERE DEFINED  DESCRIPTION
//
// iterator begin()         container      An iterator to the beginning of the table
// iterator end()           container      An iterator to the end of the table
// const_iterator           container      A const_iterator pointing to the
//   begin() const                         beginning of a sparsetable
// const_iterator           container      A const_iterator pointing to the
//   end() const                           end of a sparsetable
//
// reverse_iterator         reversible     Points to beginning of a reversed
//   rbegin()               container      sparsetable
// reverse_iterator         reversible     Points to end of a reversed table
//   rend()                 container
// const_reverse_iterator   reversible     Points to beginning of a
//   rbegin() const         container      reversed sparsetable
// const_reverse_iterator   reversible     Points to end of a reversed table
//   rend() const           container
//
// nonempty_iterator        sparsetable    Points to first assigned element
//   begin()                               of a sparsetable
// nonempty_iterator        sparsetable    Points past last assigned element
//   end()                                 of a sparsetable
// const_nonempty_iterator  sparsetable    Points to first assigned element
//   begin() const                         of a sparsetable
// const_nonempty_iterator  sparsetable    Points past last assigned element
//   end() const                           of a sparsetable
//
// reverse_nonempty_iterator  sparsetable  Points to first assigned element
//   begin()                               of a reversed sparsetable
// reverse_nonempty_iterator  sparsetable  Points past last assigned element
//   end()                                 of a reversed sparsetable
// const_reverse_nonempty_iterator  sparsetable  Points to first assigned
//   begin() const                         elt of a reversed sparsetable
// const_reverse_nonempty_iterator  sparsetable  Points past last assigned
//   end() const                           elt of a reversed sparsetable
//
//
// Other members
// MEMBER                   WHERE DEFINED  DESCRIPTION
// sparsetable()            sparsetable    A table of size 0; must resize()
//                                         before using.
// sparsetable(size_type size)  sparsetable  A table of size size.  All
//                                         indices are unassigned.
// sparsetable(
//   const sparsetable &tbl)  sparsetable  Copy constructor
// ~sparsetable()           sparsetable    The destructor
// sparsetable &operator=(  sparsetable    The assignment operator
//   const sparsetable &tbl)
//
// void resize(size_type size)  sparsetable  Grow or shrink a table to
//                                         have size indices [*]
//
// void swap(sparsetable &x)  sparsetable  Swap two sparsetables
// void swap(sparsetable &x,  sparsetable  Swap two sparsetables
//   sparsetable &y)                       (global, not member, function)
//
// size_type size() const   sparsetable    Number of "buckets" in the table
// size_type max_size() const  sparsetable  Max allowed size of a sparsetable
// bool empty() const       sparsetable    true if size() == 0
// size_type num_nonempty() const  sparsetable  Number of assigned "buckets"
//
// const_reference get(     sparsetable    Value at index i, or default
//   size_type i) const                    value if i is unassigned
// const_reference operator[](  sparsetable  Identical to get(i) [+]
//   difference_type i) const
// reference set(size_type i,  sparsetable  Set element at index i to
//   const_reference val)                  be a copy of val
// bool test(size_type i)   sparsetable    True if element at index i
//   const                                 has been assigned to
// bool test(iterator pos)  sparsetable    True if element pointed to
//   const                                 by pos has been assigned to
// void erase(iterator pos)  sparsetable   Set element pointed to by
//                                         pos to be unassigned [!]
// void erase(size_type i)  sparsetable    Set element i to be unassigned
// void erase(iterator start,  sparsetable  Erases all elements between
//   iterator end)                         start and end
// void clear()             sparsetable    Erases all elements in the table
//
// I/O versions exist for both FILE* and for File* (Google2-style files):
// bool write_metadata(FILE *fp)  sparsetable  Writes a sparsetable to the
// bool write_metadata(File *fp)           given file.  true if write
//                                         completes successfully
// bool read_metadata(FILE *fp)  sparsetable  Replaces sparsetable with
// bool read_metadata(File *fp)            version read from fp.  true
//                                         if read completes successfully
// bool write_nopointer_data(FILE *fp)     Read/write the data stored in
// bool read_nopointer_data(FILE *fp)      the table, if it's simple
//
// bool operator==(         forward        Tests two tables for equality.
//   const sparsetable &t1,  container     This is a global function,
//   const sparsetable &t2)                not a member function.
// bool operator<(          forward        Lexicographical comparison.
//   const sparsetable &t1,  container     This is a global function,
//   const sparsetable &t2)                not a member function.
//
// [*] If you shrink a sparsetable using resize(), assigned elements
// past the end of the table are removed using erase().  If you grow
// a sparsetable, new unassigned indices are created.
//
// [+] Note that operator[] returns a const reference.  You must use
// set() to change the value of a table element.
//
// [!] Unassignment also calls the destructor.
//
// Iterators are invalidated whenever an item is inserted or
// deleted (ie set() or erase() is used) or when the size of
// the table changes (ie resize() or clear() is used).
//
// See doc/sparsetable.html for more information about how to use this class.
//
// Note: this uses STL style for naming, rather than Google naming.
// That's because this is an STL-y container.

#ifndef UTIL_GTL_SPARSETABLE_H_
#define UTIL_GTL_SPARSETABLE_H_

#include <sparsehash/internal/sparseconfig.h>
#include <stdlib.h>             // for malloc/free
#include <stdio.h>              // to read/write tables
#include <string.h>             // for memcpy
#ifdef HAVE_STDINT_H
#include <stdint.h>             // the normal place uint16_t is defined
#endif
#ifdef HAVE_SYS_TYPES_H
#include <sys/types.h>          // the normal place u_int16_t is defined
#endif
#ifdef HAVE_INTTYPES_H
#include <inttypes.h>           // a third place for uint16_t or u_int16_t
#endif
#include <assert.h>             // for bounds checking
#include <iterator>             // to define reverse_iterator for me
#include <algorithm>            // equal, lexicographical_compare, swap,...
#include <memory>               // uninitialized_copy, uninitialized_fill
#include <vector>               // a sparsetable is a vector of groups
#include <sparsehash/type_traits.h>
#include <sparsehash/internal/hashtable-common.h>
#include <sparsehash/internal/libc_allocator_with_realloc.h>

// A lot of work to get a type that's guaranteed to be 16 bits...
#ifndef HAVE_U_INT16_T
# if defined HAVE_UINT16_T
    typedef uint16_t u_int16_t;   // true on solaris, possibly other C99 libc's
# elif defined HAVE___UINT16
    typedef __int16 int16_t;      // true on vc++7
    typedef unsigned __int16 u_int16_t;
# else
    // Cannot find a 16-bit integer type.  Hoping for the best with "short"...
    typedef short int int16_t;
    typedef unsigned short int u_int16_t;
# endif
#endif

_START_GOOGLE_NAMESPACE_

namespace base {   // just to make google->opensource transition easier
  using GOOGLE_NAMESPACE::true_type;
  using GOOGLE_NAMESPACE::false_type;
  using GOOGLE_NAMESPACE::integral_constant;
  using GOOGLE_NAMESPACE::has_trivial_copy;
  using GOOGLE_NAMESPACE::has_trivial_destructor;
  using GOOGLE_NAMESPACE::is_same;
}

// The smaller this is, the faster lookup is (because the group bitmap is
// smaller) and the faster insert is, because there's less to move.
// On the other hand, there are more groups.  Since group::size_type is
// a short, this number should be of the form 32*x + 16 to avoid waste.
static const u_int16_t DEFAULT_SPARSEGROUP_SIZE = 48;   // fits in 1.5 words

// Our iterator as simple as iterators can be: basically it's just
// the index into our table.  Dereference, the only complicated
// thing, we punt to the table class.  This just goes to show how
// much machinery STL requires to do even the most trivial tasks.
//
// A NOTE ON ASSIGNING:
// A sparse table does not actually allocate memory for entries
// that are not filled.  Because of this, it becomes complicated
// to have a non-const iterator: we don't know, if the iterator points
// to a not-filled bucket, whether you plan to fill it with something
// or whether you plan to read its value (in which case you'll get
// the default bucket value).  Therefore, while we can define const
// operations in a pretty 'normal' way, for non-const operations, we
// define something that returns a helper object with operator= and
// operator& that allocate a bucket lazily.
// We use this for table[]
// and also for regular table iterators.

template <class tabletype>
class table_element_adaptor {
 public:
  typedef typename tabletype::value_type value_type;
  typedef typename tabletype::size_type size_type;
  typedef typename tabletype::reference reference;
  typedef typename tabletype::pointer pointer;

  table_element_adaptor(tabletype *tbl, size_type p)
      : table(tbl), pos(p) { }
  table_element_adaptor& operator= (const value_type &val) {
    table->set(pos, val);
    return *this;
  }
  operator value_type() { return table->get(pos); }   // we look like a value
  pointer operator& () { return &table->mutating_get(pos); }

 private:
  tabletype* table;
  size_type pos;
};

// Our iterator as simple as iterators can be: basically it's just
// the index into our table.  Dereference, the only complicated
// thing, we punt to the table class.  This just goes to show how
// much machinery STL requires to do even the most trivial tasks.
//
// By templatizing over tabletype, we have one iterator type which
// we can use for both sparsetables and sparsebins.  In fact it
// works on any class that allows size() and operator[] (eg vector),
// as long as it does the standard STL typedefs too (eg value_type).

template <class tabletype>
class table_iterator {
 public:
  typedef table_iterator iterator;

  typedef std::random_access_iterator_tag iterator_category;
  typedef typename tabletype::value_type value_type;
  typedef typename tabletype::difference_type difference_type;
  typedef typename tabletype::size_type size_type;
  typedef table_element_adaptor<tabletype> reference;
  typedef table_element_adaptor<tabletype>* pointer;

  // The "real" constructor
  table_iterator(tabletype *tbl, size_type p)
      : table(tbl), pos(p) { }
  // The default constructor, used when I define vars of type table::iterator
  table_iterator() : table(NULL), pos(0) { }
  // The copy constructor, for when I say table::iterator foo = tbl.begin()
  // The default destructor is fine; we don't define one
  // The default operator= is fine; we don't define one

  // The main thing our iterator does is dereference.  If the table entry
  // we point to is empty, we return the default value type.
  // This is the big different function from the const iterator.
  reference operator*() {
    return table_element_adaptor<tabletype>(table, pos);
  }
  pointer operator->() { return &(operator*()); }

  // Helper function to assert things are ok; eg pos is still in range
  void check() const {
    assert(table);
    assert(pos <= table->size());
  }

  // Arithmetic: we just do arithmetic on pos.  We don't even need to
  // do bounds checking, since STL doesn't consider that its job.  :-)
  iterator& operator+=(size_type t) { pos += t; check(); return *this; }
  iterator& operator-=(size_type t) { pos -= t; check(); return *this; }
  iterator& operator++() { ++pos; check(); return *this; }
  iterator& operator--() { --pos; check(); return *this; }
  iterator operator++(int) {
    iterator tmp(*this);     // for x++
    ++pos; check(); return tmp;
  }
  iterator operator--(int) {
    iterator tmp(*this);     // for x--
    --pos; check(); return tmp;
  }
  iterator operator+(difference_type i) const {
    iterator tmp(*this);
    tmp += i; return tmp;
  }
  iterator operator-(difference_type i) const {
    iterator tmp(*this);
    tmp -= i; return tmp;
  }
  difference_type operator-(iterator it) const {   // for "x = it2 - it"
    assert(table == it.table);
    return pos - it.pos;
  }
  reference operator[](difference_type n) const {
    return *(*this + n);        // simple though not totally efficient
  }

  // Comparisons.
  bool operator==(const iterator& it) const {
    return table == it.table && pos == it.pos;
  }
  bool operator<(const iterator& it) const {
    assert(table == it.table);  // life is bad bad bad otherwise
    return pos < it.pos;
  }
  bool operator!=(const iterator& it) const { return !(*this == it); }
  bool operator<=(const iterator& it) const { return !(it < *this); }
  bool operator>(const iterator& it) const { return it < *this; }
  bool operator>=(const iterator& it) const { return !(*this < it); }

  // Here's the info we actually need to be an iterator
  tabletype *table;    // so we can dereference and bounds-check
  size_type pos;       // index into the table
};

// support for "3 + iterator" has to be defined outside the class, alas
template <class tabletype>
table_iterator<tabletype> operator+(
    typename table_iterator<tabletype>::difference_type i,
    table_iterator<tabletype> it) {
  return it + i;       // so people can say it2 = 3 + it
}

template <class tabletype>
class const_table_iterator {
 public:
  typedef table_iterator<tabletype> iterator;
  typedef const_table_iterator const_iterator;

  typedef std::random_access_iterator_tag iterator_category;
  typedef typename tabletype::value_type value_type;
  typedef typename tabletype::difference_type difference_type;
  typedef typename tabletype::size_type size_type;
  typedef typename tabletype::const_reference reference;  // we're const-only
  typedef typename tabletype::const_pointer pointer;

  // The "real" constructor
  const_table_iterator(const tabletype *tbl, size_type p)
      : table(tbl), pos(p) { }
  // The default constructor, used when I define vars of type table::iterator
  const_table_iterator() : table(NULL), pos(0) { }
  // The copy constructor, for when I say table::iterator foo = tbl.begin()
  // Also converts normal iterators to const iterators
  const_table_iterator(const iterator &from)
      : table(from.table), pos(from.pos) { }
  // The default destructor is fine; we don't define one
  // The default operator= is fine; we don't define one

  // The main thing our iterator does is dereference.  If the table entry
  // we point to is empty, we return the default value type.
  reference operator*() const { return (*table)[pos]; }
  pointer operator->() const { return &(operator*()); }

  // Helper function to assert things are ok; eg pos is still in range
  void check() const {
    assert(table);
    assert(pos <= table->size());
  }

  // Arithmetic: we just do arithmetic on pos.  We don't even need to
  // do bounds checking, since STL doesn't consider that its job.  :-)
  const_iterator& operator+=(size_type t) { pos += t; check(); return *this; }
  const_iterator& operator-=(size_type t) { pos -= t; check(); return *this; }
  const_iterator& operator++() { ++pos; check(); return *this; }
  const_iterator& operator--() { --pos; check(); return *this; }
  const_iterator operator++(int) {
    const_iterator tmp(*this);   // for x++
    ++pos; check(); return tmp;
  }
  const_iterator operator--(int) {
    const_iterator tmp(*this);   // for x--
    --pos; check(); return tmp;
  }
  const_iterator operator+(difference_type i) const {
    const_iterator tmp(*this);
    tmp += i; return tmp;
  }
  const_iterator operator-(difference_type i) const {
    const_iterator tmp(*this);
    tmp -= i; return tmp;
  }
  difference_type operator-(const_iterator it) const {  // for "x = it2 - it"
    assert(table == it.table);
    return pos - it.pos;
  }
  reference operator[](difference_type n) const {
    return *(*this + n);        // simple though not totally efficient
  }

  // Comparisons.
  bool operator==(const const_iterator& it) const {
    return table == it.table && pos == it.pos;
  }
  bool operator<(const const_iterator& it) const {
    assert(table == it.table);  // life is bad bad bad otherwise
    return pos < it.pos;
  }
  bool operator!=(const const_iterator& it) const { return !(*this == it); }
  bool operator<=(const const_iterator& it) const { return !(it < *this); }
  bool operator>(const const_iterator& it) const { return it < *this; }
  bool operator>=(const const_iterator& it) const { return !(*this < it); }

  // Here's the info we actually need to be an iterator
  const tabletype *table;   // so we can dereference and bounds-check
  size_type pos;            // index into the table
};

// support for "3 + iterator" has to be defined outside the class, alas
template <class tabletype>
const_table_iterator<tabletype> operator+(
    typename const_table_iterator<tabletype>::difference_type i,
    const_table_iterator<tabletype> it) {
  return it + i;       // so people can say it2 = 3 + it
}

// ---------------------------------------------------------------------------

/*
// This is a 2-D iterator.
You specify a begin and end over a list
// of *containers*.  We iterate over each container by iterating over
// it.  It's actually simple:
//    VECTOR.begin() VECTOR[0].begin()  --------> VECTOR[0].end() ---,
//     |          ________________________________________________/
//     |          \_> VECTOR[1].begin()  -------->  VECTOR[1].end() -,
//     |          ___________________________________________________/
//     v          \_> ......
//  VECTOR.end()
//
// It's impossible to do random access on one of these things in constant
// time, so it's just a bidirectional iterator.
//
// Unfortunately, because we need to use this for a non-empty iterator,
// we use nonempty_begin() and nonempty_end() instead of begin() and end()
// (though only going across, not down).
*/

#define TWOD_BEGIN_      nonempty_begin
#define TWOD_END_        nonempty_end
#define TWOD_ITER_       nonempty_iterator
#define TWOD_CONST_ITER_ const_nonempty_iterator

template <class containertype>
class two_d_iterator {
 public:
  typedef two_d_iterator iterator;

  typedef std::bidirectional_iterator_tag iterator_category;
  // apparently some versions of VC++ have trouble with two ::'s in a typename
  typedef typename containertype::value_type _tmp_vt;
  typedef typename _tmp_vt::value_type value_type;
  typedef typename _tmp_vt::difference_type difference_type;
  typedef typename _tmp_vt::reference reference;
  typedef typename _tmp_vt::pointer pointer;

  // The "real" constructor.  begin and end specify how many rows we have
  // (in the diagram above); we always iterate over each row completely.
  two_d_iterator(typename containertype::iterator begin,
                 typename containertype::iterator end,
                 typename containertype::iterator curr)
      : row_begin(begin), row_end(end), row_current(curr), col_current() {
    if ( row_current != row_end ) {
      col_current = row_current->TWOD_BEGIN_();
      advance_past_end();              // in case cur->begin() == cur->end()
    }
  }
  // If you want to start at an arbitrary place, you can, I guess
  two_d_iterator(typename containertype::iterator begin,
                 typename containertype::iterator end,
                 typename containertype::iterator curr,
                 typename containertype::value_type::TWOD_ITER_ col)
      : row_begin(begin), row_end(end), row_current(curr), col_current(col) {
    advance_past_end();                // in case cur->begin() == cur->end()
  }
  // The default constructor, used when I define vars of type table::iterator
  two_d_iterator() : row_begin(), row_end(), row_current(), col_current() { }
  // The default destructor is fine; we don't define one
  // The default operator= is fine; we don't define one

  // Happy dereferencer
  reference operator*() const  { return *col_current; }
  pointer operator->() const   { return &(operator*()); }

  // Arithmetic: we just do arithmetic on pos.  We don't even need to
  // do bounds checking, since STL doesn't consider that its job.  :-)
  // NOTE: this is not amortized constant time!  What do we do about it?
  void advance_past_end() {            // used when col_current points to end()
    while ( col_current == row_current->TWOD_END_() ) {  // end of current row
      ++row_current;                   // go to beginning of next
      if ( row_current != row_end )    // col is irrelevant at end
        col_current = row_current->TWOD_BEGIN_();
      else
        break;                         // don't go past row_end
    }
  }

  iterator& operator++() {
    assert(row_current != row_end);    // how to ++ from there?
    ++col_current;
    advance_past_end();                // in case col_current is at end()
    return *this;
  }
  iterator& operator--() {
    while ( row_current == row_end ||
            col_current == row_current->TWOD_BEGIN_() ) {
      assert(row_current != row_begin);
      --row_current;
      col_current = row_current->TWOD_END_();   // this is 1 too far
    }
    --col_current;
    return *this;
  }
  iterator operator++(int) { iterator tmp(*this); ++*this; return tmp; }
  iterator operator--(int) { iterator tmp(*this); --*this; return tmp; }

  // Comparisons.
  bool operator==(const iterator& it) const {
    return ( row_begin == it.row_begin &&
             row_end == it.row_end &&
             row_current == it.row_current &&
             (row_current == row_end || col_current == it.col_current) );
  }
  bool operator!=(const iterator& it) const { return !(*this == it); }

  // Here's the info we actually need to be an iterator
  // These need to be public so we convert from iterator to const_iterator
  typename containertype::iterator row_begin, row_end, row_current;
  typename containertype::value_type::TWOD_ITER_ col_current;
};

// The same thing again, but this time const.
// :-(

template <class containertype>
class const_two_d_iterator {
 public:
  typedef const_two_d_iterator iterator;

  typedef std::bidirectional_iterator_tag iterator_category;
  // apparently some versions of VC++ have trouble with two ::'s in a typename
  typedef typename containertype::value_type _tmp_vt;
  typedef typename _tmp_vt::value_type value_type;
  typedef typename _tmp_vt::difference_type difference_type;
  typedef typename _tmp_vt::const_reference reference;
  typedef typename _tmp_vt::const_pointer pointer;

  const_two_d_iterator(typename containertype::const_iterator begin,
                       typename containertype::const_iterator end,
                       typename containertype::const_iterator curr)
      : row_begin(begin), row_end(end), row_current(curr), col_current() {
    if ( curr != end ) {
      col_current = curr->TWOD_BEGIN_();
      advance_past_end();              // in case cur->begin() == cur->end()
    }
  }
  const_two_d_iterator(typename containertype::const_iterator begin,
                       typename containertype::const_iterator end,
                       typename containertype::const_iterator curr,
                       typename containertype::value_type::TWOD_CONST_ITER_ col)
      : row_begin(begin), row_end(end), row_current(curr), col_current(col) {
    advance_past_end();                // in case cur->begin() == cur->end()
  }
  const_two_d_iterator()
      : row_begin(), row_end(), row_current(), col_current() { }
  // Need this explicitly so we can convert normal iterators to const iterators
  const_two_d_iterator(const two_d_iterator<containertype>& it)
      : row_begin(it.row_begin), row_end(it.row_end),
        row_current(it.row_current), col_current(it.col_current) { }

  typename containertype::const_iterator row_begin, row_end, row_current;
  typename containertype::value_type::TWOD_CONST_ITER_ col_current;

  // EVERYTHING FROM HERE DOWN IS THE SAME AS THE NON-CONST ITERATOR
  reference operator*() const  { return *col_current; }
  pointer operator->() const   { return &(operator*()); }

  void advance_past_end() {            // used when col_current points to end()
    while ( col_current == row_current->TWOD_END_() ) {  // end of current row
      ++row_current;                   // go to beginning of next
      if ( row_current != row_end )    // col is irrelevant at end
        col_current = row_current->TWOD_BEGIN_();
      else
        break;                         // don't go past row_end
    }
  }

  iterator& operator++() {
    assert(row_current != row_end);    // how to ++ from there?
    ++col_current;
    advance_past_end();                // in case col_current is at end()
    return *this;
  }
  iterator& operator--() {
    while ( row_current == row_end ||
            col_current == row_current->TWOD_BEGIN_() ) {
      assert(row_current != row_begin);
      --row_current;
      col_current = row_current->TWOD_END_();   // this is 1 too far
    }
    --col_current;
    return *this;
  }
  iterator operator++(int) { iterator tmp(*this); ++*this; return tmp; }
  iterator operator--(int) { iterator tmp(*this); --*this; return tmp; }

  bool operator==(const iterator& it) const {
    return ( row_begin == it.row_begin &&
             row_end == it.row_end &&
             row_current == it.row_current &&
             (row_current == row_end || col_current == it.col_current) );
  }
  bool operator!=(const iterator& it) const { return !(*this == it); }
};

// We provide yet another version, to be as frugal with memory as
// possible.  This one frees each block of memory as it finishes
// iterating over it.  By the end, the entire table is freed.
// For understandable reasons, you can only iterate over it once,
// which is why it's an input iterator
template <class containertype>
class destructive_two_d_iterator {
 public:
  typedef destructive_two_d_iterator iterator;

  typedef std::input_iterator_tag iterator_category;
  // apparently some versions of VC++ have trouble with two ::'s in a typename
  typedef typename containertype::value_type _tmp_vt;
  typedef typename _tmp_vt::value_type value_type;
  typedef typename _tmp_vt::difference_type difference_type;
  typedef typename _tmp_vt::reference reference;
  typedef typename _tmp_vt::pointer pointer;

  destructive_two_d_iterator(typename containertype::iterator begin,
                             typename containertype::iterator end,
                             typename containertype::iterator curr)
      : row_begin(begin), row_end(end), row_current(curr), col_current() {
    if ( curr != end ) {
      col_current = curr->TWOD_BEGIN_();
      advance_past_end();              // in case cur->begin() == cur->end()
    }
  }
  destructive_two_d_iterator(typename containertype::iterator begin,
                             typename containertype::iterator end,
                             typename containertype::iterator curr,
                             typename containertype::value_type::TWOD_ITER_ col)
      : row_begin(begin), row_end(end), row_current(curr), col_current(col) {
    advance_past_end();                // in case cur->begin() == cur->end()
  }
  destructive_two_d_iterator()
      : row_begin(), row_end(), row_current(), col_current() { }

  typename containertype::iterator row_begin, row_end, row_current;
  typename containertype::value_type::TWOD_ITER_ col_current;

  // This is the part that destroys
  void advance_past_end() {            // used when col_current points to end()
    while ( col_current == row_current->TWOD_END_() ) {  // end of current row
      row_current->clear();            // the destructive part
      // It would be nice if we could decrement sparsetable->num_buckets here
      ++row_current;                   // go to beginning of next
      if ( row_current != row_end )    // col is irrelevant at end
        col_current = row_current->TWOD_BEGIN_();
      else
        break;                         // don't go past row_end
    }
  }

  // EVERYTHING FROM HERE DOWN IS THE SAME AS THE REGULAR ITERATOR
  reference operator*() const  { return *col_current; }
  pointer operator->() const   { return &(operator*()); }

  iterator& operator++() {
    assert(row_current != row_end);    // how to ++ from there?
    ++col_current;
    advance_past_end();                // in case col_current is at end()
    return *this;
  }
  iterator operator++(int) { iterator tmp(*this); ++*this; return tmp; }

  bool operator==(const iterator& it) const {
    return ( row_begin == it.row_begin &&
             row_end == it.row_end &&
             row_current == it.row_current &&
             (row_current == row_end || col_current == it.col_current) );
  }
  bool operator!=(const iterator& it) const { return !(*this == it); }
};

#undef TWOD_BEGIN_
#undef TWOD_END_
#undef TWOD_ITER_
#undef TWOD_CONST_ITER_


// SPARSE-TABLE
// ------------
// The idea is that a table with (logically) t buckets is divided
// into t/M *groups* of M buckets each.  (M is a constant set in
// GROUP_SIZE for efficiency.)  Each group is stored sparsely.
// Thus, inserting into the table causes some array to grow, which is
// slow but still constant time.  Lookup involves doing a
// logical-position-to-sparse-position lookup, which is also slow but
// constant time.  The larger M is, the slower these operations are
// but the less overhead (slightly).
//
// To store the sparse array, we store a bitmap B, where B[i] = 1 iff
// bucket i is non-empty.  Then to look up bucket i we really look up
// array[# of 1s before i in B].  This is constant time for fixed M.
//
// Terminology: the position of an item in the overall table (from
// 1 .. t) is called its "location."  The logical position in a group
// (from 1 .. M ) is called its "position."  The actual location in
// the array (from 1 .. # of non-empty buckets in the group) is
// called its "offset."
template <class T, u_int16_t GROUP_SIZE, class Alloc>
class sparsegroup {
 private:
  typedef typename Alloc::template rebind<T>::other value_alloc_type;

 public:
  // Basic types
  typedef T value_type;
  typedef Alloc allocator_type;
  typedef typename value_alloc_type::reference reference;
  typedef typename value_alloc_type::const_reference const_reference;
  typedef typename value_alloc_type::pointer pointer;
  typedef typename value_alloc_type::const_pointer const_pointer;

  typedef table_iterator<sparsegroup<T, GROUP_SIZE, Alloc> > iterator;
  typedef const_table_iterator<sparsegroup<T, GROUP_SIZE, Alloc> >
      const_iterator;
  typedef table_element_adaptor<sparsegroup<T, GROUP_SIZE, Alloc> >
      element_adaptor;
  typedef u_int16_t size_type;         // max # of buckets
  typedef int16_t difference_type;
  typedef std::reverse_iterator<const_iterator> const_reverse_iterator;
  typedef std::reverse_iterator<iterator> reverse_iterator; // from iterator.h

  // These are our special iterators, that go over non-empty buckets in a
  // group.  These aren't const-only because you can change non-empty bcks.
  typedef pointer nonempty_iterator;
  typedef const_pointer const_nonempty_iterator;
  typedef std::reverse_iterator<nonempty_iterator> reverse_nonempty_iterator;
  typedef std::reverse_iterator<const_nonempty_iterator>
      const_reverse_nonempty_iterator;

  // Iterator functions
  iterator begin()             { return iterator(this, 0); }
  const_iterator begin() const { return const_iterator(this, 0); }
  iterator end()               { return iterator(this, size()); }
  const_iterator end() const   { return const_iterator(this, size()); }
  reverse_iterator rbegin()    { return reverse_iterator(end()); }
  const_reverse_iterator rbegin() const {
    return const_reverse_iterator(end());
  }
  reverse_iterator rend()      { return reverse_iterator(begin()); }
  const_reverse_iterator rend() const {
    return const_reverse_iterator(begin());
  }

  // We'll have versions for our special non-empty iterator too
  nonempty_iterator nonempty_begin()             { return group; }
  const_nonempty_iterator nonempty_begin() const { return group; }
  nonempty_iterator nonempty_end() {
    return group + settings.num_buckets;
  }
  const_nonempty_iterator nonempty_end() const {
    return group + settings.num_buckets;
  }
  reverse_nonempty_iterator nonempty_rbegin() {
    return reverse_nonempty_iterator(nonempty_end());
  }
  const_reverse_nonempty_iterator nonempty_rbegin() const {
    return const_reverse_nonempty_iterator(nonempty_end());
  }
  reverse_nonempty_iterator nonempty_rend() {
    return reverse_nonempty_iterator(nonempty_begin());
  }
  const_reverse_nonempty_iterator nonempty_rend() const {
    return const_reverse_nonempty_iterator(nonempty_begin());
  }

  // This gives us the "default" value to return for an empty bucket.
  // We just use the default constructor on T, the template type
  const_reference default_value() const {
    static value_type defaultval = value_type();
    return defaultval;
  }

 private:
  // We need to do all this bit manipulation, of course.  ick
  static size_type charbit(size_type i)  { return i >> 3; }
  static size_type modbit(size_type i)   { return 1 << (i&7); }
  int  bmtest(size_type i) const  { return bitmap[charbit(i)] & modbit(i); }
  void bmset(size_type i)         { bitmap[charbit(i)] |= modbit(i); }
  void bmclear(size_type i)       { bitmap[charbit(i)] &= ~modbit(i); }

  pointer allocate_group(size_type n) {
    pointer retval = settings.allocate(n);
    if (retval == NULL) {
      // We really should use PRIuS here, but I don't want to have to add
      // a whole new configure option, with concomitant macro namespace
      // pollution, just to print this (unlikely) error message.  So I cast.
      fprintf(stderr,
              "sparsehash FATAL ERROR: failed to allocate %lu groups\n",
              static_cast<unsigned long>(n));
      exit(1);
    }
    return retval;
  }

  void free_group() {
    if (!group)  return;
    pointer end_it = group + settings.num_buckets;
    for (pointer p = group; p != end_it; ++p)
      p->~value_type();
    settings.deallocate(group, settings.num_buckets);
    group = NULL;
  }

  static size_type bits_in_char(unsigned char c) {
    // We could make these ints.  The tradeoff is size (eg does it overwhelm
    // the cache?) vs efficiency in referencing sub-word-sized array elements.
    static const char bits_in[256] = {
      0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4,
      1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
      1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
      2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
      1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
      2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
      2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
      3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
      1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
      2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
      2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
      3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
      2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
      3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
      3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
      4, 5, 5, 6, 5, 6, 6, 7, 5, 6, 6, 7, 6, 7, 7, 8,
    };
    return bits_in[c];
  }

 public:                                 // get_iter() in sparsetable needs it
  // We need a small function that tells us how many set bits there are
  // in positions 0..i-1 of the bitmap.  It uses a big table.
  // We make it static so templates don't allocate lots of these tables.
  // There are lots of ways to do this calculation (called 'popcount').
  // The 8-bit table lookup is one of the fastest, though this
  // implementation suffers from not doing any loop unrolling.  See, eg,
  //   http://www.dalkescientific.com/writings/diary/archive/2008/07/03/hakmem_and_other_popcounts.html
  //   http://gurmeetsingh.wordpress.com/2008/08/05/fast-bit-counting-routines/
  static size_type pos_to_offset(const unsigned char *bm, size_type pos) {
    size_type retval = 0;

    // [Note: condition pos > 8 is an optimization; convince yourself we
    // give exactly the same result as if we had pos >= 8 here instead.]
    for ( ; pos > 8; pos -= 8 )            // bm[0..pos/8-1]
      retval += bits_in_char(*bm++);       // chars we want *all* bits in
    return retval + bits_in_char(*bm & ((1 << pos)-1));  // char including pos
  }

  size_type pos_to_offset(size_type pos) const {  // not static but still const
    return pos_to_offset(bitmap, pos);
  }

  // Returns the (logical) position in the bm[] array, i, such that
  // bm[i] is the offset-th set bit in the array.  It is the inverse
  // of pos_to_offset.  get_pos() uses this function to find the index
  // of a nonempty_iterator in the table.  Bit-twiddling from
  // http://hackersdelight.org/basics.pdf
  static size_type offset_to_pos(const unsigned char *bm, size_type offset) {
    size_type retval = 0;
    // This is sizeof(this->bitmap).
    const size_type group_size = (GROUP_SIZE-1) / 8 + 1;
    for (size_type i = 0; i < group_size; i++) {   // forward scan
      const size_type pop_count = bits_in_char(*bm);
      if (pop_count > offset) {
        unsigned char last_bm = *bm;
        for (; offset > 0; offset--) {
          last_bm &= (last_bm-1);          // remove right-most set bit
        }
        // Clear all bits to the left of the rightmost bit (the &),
        // and then clear the rightmost bit but set all bits to the
        // right of it (the -1).
        last_bm = (last_bm & -last_bm) - 1;
        retval += bits_in_char(last_bm);
        return retval;
      }
      offset -= pop_count;
      retval += 8;
      bm++;
    }
    return retval;
  }

  size_type offset_to_pos(size_type offset) const {
    return offset_to_pos(bitmap, offset);
  }

 public:
  // Constructors -- default and copy -- and destructor
  explicit sparsegroup(allocator_type& a)
      : group(0), settings(alloc_impl<value_alloc_type>(a)) {
    memset(bitmap, 0, sizeof(bitmap));
  }
  sparsegroup(const sparsegroup& x) : group(0), settings(x.settings) {
    if ( settings.num_buckets ) {
      group = allocate_group(x.settings.num_buckets);
      std::uninitialized_copy(x.group, x.group + x.settings.num_buckets,
                              group);
    }
    memcpy(bitmap, x.bitmap, sizeof(bitmap));
  }
  ~sparsegroup() { free_group(); }

  // Operator= is just like the copy constructor, I guess
  // TODO(austern): Make this exception safe.
  // Handle exceptions in value_type's copy constructor.
  sparsegroup &operator=(const sparsegroup& x) {
    if ( &x == this ) return *this;        // x = x
    if ( x.settings.num_buckets == 0 ) {
      free_group();
    } else {
      pointer p = allocate_group(x.settings.num_buckets);
      std::uninitialized_copy(x.group, x.group + x.settings.num_buckets, p);
      free_group();
      group = p;
    }
    memcpy(bitmap, x.bitmap, sizeof(bitmap));
    settings.num_buckets = x.settings.num_buckets;
    return *this;
  }

  // Many STL algorithms use swap instead of copy constructors
  void swap(sparsegroup& x) {
    std::swap(group, x.group);             // defined in <algorithm>
    for ( int i = 0; i < sizeof(bitmap) / sizeof(*bitmap); ++i )
      std::swap(bitmap[i], x.bitmap[i]);   // swap not defined on arrays
    std::swap(settings.num_buckets, x.settings.num_buckets);
    // we purposefully don't swap the allocator, which may not be swap-able
  }

  // It's always nice to be able to clear a table without deallocating it
  void clear() {
    free_group();
    memset(bitmap, 0, sizeof(bitmap));
    settings.num_buckets = 0;
  }

  // Functions that tell you about size.  Alas, these aren't so useful
  // because our table is always fixed size.
  size_type size() const          { return GROUP_SIZE; }
  size_type max_size() const      { return GROUP_SIZE; }
  bool empty() const              { return false; }
  // We also may want to know how many *used* buckets there are
  size_type num_nonempty() const  { return settings.num_buckets; }

  // get()/set() are explicitly const/non-const.  You can use [] if
  // you want something that can be either (potentially more expensive).
  const_reference get(size_type i) const {
    if ( bmtest(i) )                       // bucket i is occupied
      return group[pos_to_offset(bitmap, i)];
    else
      return default_value();              // return the default reference
  }

  // TODO(csilvers): make protected + friend
  // This is used by sparse_hashtable to get an element from the table
  // when we know it exists.
  const_reference unsafe_get(size_type i) const {
    assert(bmtest(i));
    return group[pos_to_offset(bitmap, i)];
  }

  // TODO(csilvers): make protected + friend
  reference mutating_get(size_type i) {    // fills bucket i before getting
    if ( !bmtest(i) )
      set(i, default_value());
    return group[pos_to_offset(bitmap, i)];
  }

  // Syntactic sugar.  It's easy to return a const reference.  To
  // return a non-const reference, we need to use the assigner adaptor.
  const_reference operator[](size_type i) const {
    return get(i);
  }
  element_adaptor operator[](size_type i) {
    return element_adaptor(this, i);
  }

 private:
  // Create space at group[offset], assuming value_type has trivial
  // copy constructor and destructor, and the allocator_type is
  // the default libc_allocator_with_realloc.  (Really, we want it to have
  // "trivial move", because that's what realloc and memmove both do.
  // But there's no way to capture that using type_traits, so we
  // pretend that move(x, y) is equivalent to "x.~T(); new(x) T(y);"
  // which is pretty much correct, if a bit conservative.)
  void set_aux(size_type offset, base::true_type) {
    group = settings.realloc_or_die(group, settings.num_buckets+1);
    // This is equivalent to memmove(), but faster on my Intel P4,
    // at least with gcc4.1 -O2 / glibc 2.3.6.
    for (size_type i = settings.num_buckets; i > offset; --i)
      memcpy(group + i, group + i-1, sizeof(*group));
  }

  // Create space at group[offset], without special assumptions about
  // value_type and allocator_type.
  void set_aux(size_type offset, base::false_type) {
    // This is valid because 0 <= offset <= num_buckets
    pointer p = allocate_group(settings.num_buckets + 1);
    std::uninitialized_copy(group, group + offset, p);
    std::uninitialized_copy(group + offset, group + settings.num_buckets,
                            p + offset + 1);
    free_group();
    group = p;
  }

 public:
  // This returns a reference to the inserted item (which is a copy of val).
  // TODO(austern): Make this exception safe: handle exceptions from
  // value_type's copy constructor.
  reference set(size_type i, const_reference val) {
    size_type offset =
        pos_to_offset(bitmap, i);          // where we'll find (or insert)
    if ( bmtest(i) ) {
      // Delete the old value, which we're replacing with the new one
      group[offset].~value_type();
    } else {
      typedef base::integral_constant<bool,
          (base::has_trivial_copy<value_type>::value &&
           base::has_trivial_destructor<value_type>::value &&
           base::is_same<
               allocator_type,
               libc_allocator_with_realloc<value_type> >::value)>
          realloc_and_memmove_ok;  // we pretend mv(x,y) == "x.~T(); new(x) T(y)"
      set_aux(offset, realloc_and_memmove_ok());
      ++settings.num_buckets;
      bmset(i);
    }
    // This does the actual inserting.  Since we made the array using
    // malloc, we use "placement new" to just call the constructor.
    new(&group[offset]) value_type(val);
    return group[offset];
  }

  // We let you see if a bucket is non-empty without retrieving it
  bool test(size_type i) const {
    return bmtest(i) != 0;
  }
  bool test(iterator pos) const {
    return bmtest(pos.pos) != 0;
  }

 private:
  // Shrink the array, assuming value_type has trivial copy
  // constructor and destructor, and the allocator_type is the default
  // libc_allocator_with_realloc.  (Really, we want it to have "trivial
  // move", because that's what realloc and memmove both do.  But
  // there's no way to capture that using type_traits, so we pretend
  // that move(x, y) is equivalent to "x.~T(); new(x) T(y);"
  // which is pretty much correct, if a bit conservative.)
  void erase_aux(size_type offset, base::true_type) {
    // This isn't technically necessary, since we know we have a
    // trivial destructor, but is a cheap way to get a bit more safety.
    group[offset].~value_type();
    // This is equivalent to memmove(), but faster on my Intel P4,
    // at least with gcc4.1 -O2 / glibc 2.3.6.
    assert(settings.num_buckets > 0);
    for (size_type i = offset; i < settings.num_buckets-1; ++i)
      memcpy(group + i, group + i+1, sizeof(*group));  // hopefully inlined!
    group = settings.realloc_or_die(group, settings.num_buckets-1);
  }

  // Shrink the array, without any special assumptions about value_type and
  // allocator_type.
  void erase_aux(size_type offset, base::false_type) {
    // This is valid because 0 <= offset < num_buckets.  Note the inequality.
    pointer p = allocate_group(settings.num_buckets - 1);
    std::uninitialized_copy(group, group + offset, p);
    std::uninitialized_copy(group + offset + 1, group + settings.num_buckets,
                            p + offset);
    free_group();
    group = p;
  }

 public:
  // This takes the specified elements out of the group.  This is
  // "undefining", rather than "clearing".
  // TODO(austern): Make this exception safe: handle exceptions from
  // value_type's copy constructor.
  void erase(size_type i) {
    if ( bmtest(i) ) {                     // trivial to erase empty bucket
      size_type offset =
          pos_to_offset(bitmap, i);        // where we'll find it
      if ( settings.num_buckets == 1 ) {
        free_group();
        group = NULL;
      } else {
        typedef base::integral_constant<bool,
            (base::has_trivial_copy<value_type>::value &&
             base::has_trivial_destructor<value_type>::value &&
             base::is_same<
                 allocator_type,
                 libc_allocator_with_realloc<value_type> >::value)>
            realloc_and_memmove_ok;  // pretend mv(x,y) == "x.~T(); new(x) T(y)"
        erase_aux(offset, realloc_and_memmove_ok());
      }
      --settings.num_buckets;
      bmclear(i);
    }
  }

  void erase(iterator pos) {
    erase(pos.pos);
  }

  void erase(iterator start_it, iterator end_it) {
    // This could be more efficient, but to do so we'd need to make
    // bmclear() clear a range of indices.  Doesn't seem worth it.
    for ( ; start_it != end_it; ++start_it )
      erase(start_it);
  }

  // I/O
  // We support reading and writing groups to disk.  We don't store
  // the actual array contents (which we don't know how to store),
  // just the bitmap and size.  Meant to be used with table I/O.
  template <typename OUTPUT>
  bool write_metadata(OUTPUT *fp) const {
    // we explicitly set num_buckets to a u_int16_t
    assert(sizeof(settings.num_buckets) == 2);
    if ( !sparsehash_internal::write_bigendian_number(fp, settings.num_buckets,
                                                      2) )
      return false;
    if ( !sparsehash_internal::write_data(fp, bitmap, sizeof(bitmap)) )
      return false;
    return true;
  }

  // Reading destroys the old group contents!  Returns true if all was ok.
  template <typename INPUT>
  bool read_metadata(INPUT *fp) {
    clear();
    if ( !sparsehash_internal::read_bigendian_number(fp, &settings.num_buckets,
                                                     2) )
      return false;
    if ( !sparsehash_internal::read_data(fp, bitmap, sizeof(bitmap)) )
      return false;
    // We'll allocate the space, but we won't fill it: it will be
    // left as uninitialized raw memory.
    group = allocate_group(settings.num_buckets);
    return true;
  }

  // Again, only meaningful if value_type is a POD.
  template <typename INPUT>
  bool read_nopointer_data(INPUT *fp) {
    for ( nonempty_iterator it = nonempty_begin();
          it != nonempty_end(); ++it ) {
      if ( !sparsehash_internal::read_data(fp, &(*it), sizeof(*it)) )
        return false;
    }
    return true;
  }

  // If your keys and values are simple enough, we can write them
  // to disk for you.  "simple enough" means POD and no pointers.
  // However, we don't try to normalize endianness.
  template <typename OUTPUT>
  bool write_nopointer_data(OUTPUT *fp) const {
    for ( const_nonempty_iterator it = nonempty_begin();
          it != nonempty_end(); ++it ) {
      if ( !sparsehash_internal::write_data(fp, &(*it), sizeof(*it)) )
        return false;
    }
    return true;
  }

  // Comparisons.  We only need to define == and < -- we get
  // != > <= >= via relops.h (which we happily included above).
  // Note the comparisons are pretty arbitrary: we compare
  // values of the first index that isn't equal (using default
  // value for empty buckets).
  bool operator==(const sparsegroup& x) const {
    return ( settings.num_buckets == x.settings.num_buckets &&
             memcmp(bitmap, x.bitmap, sizeof(bitmap)) == 0 &&
             std::equal(begin(), end(), x.begin()) );  // from <algorithm>
  }

  bool operator<(const sparsegroup& x) const {         // also from <algorithm>
    return std::lexicographical_compare(begin(), end(), x.begin(), x.end());
  }
  bool operator!=(const sparsegroup& x) const { return !(*this == x); }
  bool operator<=(const sparsegroup& x) const { return !(x < *this); }
  bool operator>(const sparsegroup& x)  const { return x < *this; }
  bool operator>=(const sparsegroup& x) const { return !(*this < x); }

 private:
  template <class A>
  class alloc_impl : public A {
   public:
    typedef typename A::pointer pointer;
    typedef typename A::size_type size_type;

    // Convert a normal allocator to one that has realloc_or_die()
    alloc_impl(const A& a) : A(a) { }

    // realloc_or_die should only be used when using the default
    // allocator (libc_allocator_with_realloc).
    pointer realloc_or_die(pointer /*ptr*/, size_type /*n*/) {
      fprintf(stderr, "realloc_or_die is only supported for "
                      "libc_allocator_with_realloc\n");
      exit(1);
      return NULL;
    }
  };

  // A template specialization of alloc_impl for
  // libc_allocator_with_realloc that can handle realloc_or_die.
  template <class A>
  class alloc_impl<libc_allocator_with_realloc<A> >
      : public libc_allocator_with_realloc<A> {
   public:
    typedef typename libc_allocator_with_realloc<A>::pointer pointer;
    typedef typename libc_allocator_with_realloc<A>::size_type size_type;

    alloc_impl(const libc_allocator_with_realloc<A>& a)
        : libc_allocator_with_realloc<A>(a) { }

    pointer realloc_or_die(pointer ptr, size_type n) {
      pointer retval = this->reallocate(ptr, n);
      if (retval == NULL) {
        fprintf(stderr, "sparsehash: FATAL ERROR: failed to reallocate "
                "%lu elements for ptr %p",
                static_cast<unsigned long>(n), ptr);
        exit(1);
      }
      return retval;
    }
  };

  // Package allocator with num_buckets to eliminate memory needed for the
  // zero-size allocator.
  // If new fields are added to this class, we should add them to
  // operator= and swap.
  class Settings : public alloc_impl<value_alloc_type> {
   public:
    Settings(const alloc_impl<value_alloc_type>& a, u_int16_t n = 0)
        : alloc_impl<value_alloc_type>(a), num_buckets(n) { }
    Settings(const Settings& s)
        : alloc_impl<value_alloc_type>(s), num_buckets(s.num_buckets) { }

    u_int16_t num_buckets;                    // limits GROUP_SIZE to 64K
  };

  // The actual data
  pointer group;                              // (small) array of T's
  Settings settings;                          // allocator and num_buckets
  unsigned char bitmap[(GROUP_SIZE-1)/8 + 1]; // fancy math is so we round up
};

// We need a global swap as well
template <class T, u_int16_t GROUP_SIZE, class Alloc>
inline void swap(sparsegroup<T,GROUP_SIZE,Alloc> &x,
                 sparsegroup<T,GROUP_SIZE,Alloc> &y) {
  x.swap(y);
}

// ---------------------------------------------------------------------------

template <class T, u_int16_t GROUP_SIZE = DEFAULT_SPARSEGROUP_SIZE,
          class Alloc = libc_allocator_with_realloc<T> >
class sparsetable {
 private:
  typedef typename Alloc::template rebind<T>::other value_alloc_type;
  typedef typename Alloc::template rebind<
      sparsegroup<T, GROUP_SIZE, value_alloc_type> >::other vector_alloc;

 public:
  // Basic types
  typedef T value_type;                       // stolen from stl_vector.h
  typedef Alloc allocator_type;
  typedef typename value_alloc_type::size_type size_type;
  typedef typename value_alloc_type::difference_type difference_type;
  typedef typename value_alloc_type::reference reference;
  typedef typename value_alloc_type::const_reference const_reference;
  typedef typename value_alloc_type::pointer pointer;
  typedef typename value_alloc_type::const_pointer const_pointer;
  typedef table_iterator< sparsetable<T, GROUP_SIZE, Alloc> > iterator;
  typedef const_table_iterator< sparsetable<T, GROUP_SIZE, Alloc> >
      const_iterator;
  typedef table_element_adaptor< sparsetable<T, GROUP_SIZE, Alloc> >
      element_adaptor;
  typedef std::reverse_iterator<const_iterator> const_reverse_iterator;
  typedef std::reverse_iterator<iterator> reverse_iterator; // from iterator.h

  // These are our special iterators, that go over non-empty buckets in a
  // table.  These aren't const only because you can change non-empty bcks.
  typedef two_d_iterator< std::vector< sparsegroup<T, GROUP_SIZE,
                                                   value_alloc_type>,
                                       vector_alloc> >
      nonempty_iterator;
  typedef const_two_d_iterator< std::vector< sparsegroup<T, GROUP_SIZE,
                                                         value_alloc_type>,
                                             vector_alloc> >
      const_nonempty_iterator;
  typedef std::reverse_iterator<nonempty_iterator> reverse_nonempty_iterator;
  typedef std::reverse_iterator<const_nonempty_iterator>
      const_reverse_nonempty_iterator;
  // Another special iterator: it frees memory as it iterates (used to resize)
  typedef destructive_two_d_iterator< std::vector< sparsegroup<T, GROUP_SIZE,
                                                               value_alloc_type>,
                                                   vector_alloc> >
      destructive_iterator;

  // Iterator functions
  iterator begin()             { return iterator(this, 0); }
  const_iterator begin() const { return const_iterator(this, 0); }
  iterator end()               { return iterator(this, size()); }
  const_iterator end() const   { return const_iterator(this, size()); }
  reverse_iterator rbegin()    { return reverse_iterator(end()); }
  const_reverse_iterator rbegin() const {
    return const_reverse_iterator(end());
  }
  reverse_iterator rend()      { return reverse_iterator(begin()); }
  const_reverse_iterator rend() const {
    return const_reverse_iterator(begin());
  }

  // Versions for our special non-empty iterator
  nonempty_iterator nonempty_begin() {
    return nonempty_iterator(groups.begin(), groups.end(), groups.begin());
  }
  const_nonempty_iterator nonempty_begin() const {
    return const_nonempty_iterator(groups.begin(), groups.end(),
                                   groups.begin());
  }
  nonempty_iterator nonempty_end() {
    return nonempty_iterator(groups.begin(), groups.end(), groups.end());
  }
  const_nonempty_iterator nonempty_end() const {
    return const_nonempty_iterator(groups.begin(), groups.end(), groups.end());
  }
  reverse_nonempty_iterator nonempty_rbegin() {
    return reverse_nonempty_iterator(nonempty_end());
  }
  const_reverse_nonempty_iterator nonempty_rbegin() const {
    return const_reverse_nonempty_iterator(nonempty_end());
  }
  reverse_nonempty_iterator nonempty_rend() {
    return reverse_nonempty_iterator(nonempty_begin());
  }
  const_reverse_nonempty_iterator nonempty_rend() const {
    return const_reverse_nonempty_iterator(nonempty_begin());
  }
  destructive_iterator
  destructive_begin() {
    return destructive_iterator(groups.begin(), groups.end(), groups.begin());
  }
  destructive_iterator destructive_end() {
    return destructive_iterator(groups.begin(), groups.end(), groups.end());
  }

  typedef sparsegroup<T, GROUP_SIZE, value_alloc_type> group_type;
  typedef std::vector<group_type, vector_alloc> group_vector_type;

  typedef typename group_vector_type::reference GroupsReference;
  typedef typename group_vector_type::const_reference GroupsConstReference;
  typedef typename group_vector_type::iterator GroupsIterator;
  typedef typename group_vector_type::const_iterator GroupsConstIterator;

  // How to deal with the proper group
  static size_type num_groups(size_type num) {  // how many to hold num buckets
    return num == 0 ? 0 : ((num-1) / GROUP_SIZE) + 1;
  }

  u_int16_t pos_in_group(size_type i) const {
    return static_cast<u_int16_t>(i % GROUP_SIZE);
  }
  size_type group_num(size_type i) const {
    return i / GROUP_SIZE;
  }
  GroupsReference which_group(size_type i) {
    return groups[group_num(i)];
  }
  GroupsConstReference which_group(size_type i) const {
    return groups[group_num(i)];
  }

 public:
  // Constructors -- default, normal (when you specify size), and copy
  explicit sparsetable(size_type sz = 0, Alloc alloc = Alloc())
      : groups(vector_alloc(alloc)), settings(alloc, sz) {
    groups.resize(num_groups(sz), group_type(settings));
  }
  // We can get away with using the default copy constructor,
  // and default destructor, and hence the default operator=.  Huzzah!
  // Many STL algorithms use swap instead of copy constructors
  void swap(sparsetable& x) {
    std::swap(groups, x.groups);              // defined in stl_algobase.h
    std::swap(settings.table_size, x.settings.table_size);
    std::swap(settings.num_buckets, x.settings.num_buckets);
  }

  // It's always nice to be able to clear a table without deallocating it
  void clear() {
    GroupsIterator group;
    for ( group = groups.begin(); group != groups.end(); ++group ) {
      group->clear();
    }
    settings.num_buckets = 0;
  }

  // ACCESSOR FUNCTIONS for the things we templatize on, basically
  allocator_type get_allocator() const {
    return allocator_type(settings);
  }

  // Functions that tell you about size.
  // NOTE: empty() is non-intuitive!  It does not tell you the number
  // of not-empty buckets (use num_nonempty() for that).  Instead
  // it says whether you've allocated any buckets or not.
  size_type size() const           { return settings.table_size; }
  size_type max_size() const       { return settings.max_size(); }
  bool empty() const               { return settings.table_size == 0; }
  // We also may want to know how many *used* buckets there are
  size_type num_nonempty() const   { return settings.num_buckets; }

  // OK, we'll let you resize one of these puppies
  void resize(size_type new_size) {
    groups.resize(num_groups(new_size), group_type(settings));
    if ( new_size < settings.table_size) {
      // lower num_buckets, clear last group
      if ( pos_in_group(new_size) > 0 )     // need to clear inside last group
        groups.back().erase(groups.back().begin() + pos_in_group(new_size),
                            groups.back().end());
      settings.num_buckets = 0;             // refigure # of used buckets
      GroupsConstIterator group;
      for ( group = groups.begin(); group != groups.end(); ++group )
        settings.num_buckets += group->num_nonempty();
    }
    settings.table_size = new_size;
  }

  // We let you see if a bucket is non-empty without retrieving it
  bool test(size_type i) const {
    assert(i < settings.table_size);
    return which_group(i).test(pos_in_group(i));
  }
  bool test(iterator pos) const {
    return
        which_group(pos.pos).test(pos_in_group(pos.pos));
  }
  bool test(const_iterator pos) const {
    return which_group(pos.pos).test(pos_in_group(pos.pos));
  }

  // We only return const_references because it's really hard to
  // return something settable for empty buckets.  Use set() instead.
  const_reference get(size_type i) const {
    assert(i < settings.table_size);
    return which_group(i).get(pos_in_group(i));
  }

  // TODO(csilvers): make protected + friend
  // This is used by sparse_hashtable to get an element from the table
  // when we know it exists (because the caller has called test(i)).
  const_reference unsafe_get(size_type i) const {
    assert(i < settings.table_size);
    assert(test(i));
    return which_group(i).unsafe_get(pos_in_group(i));
  }

  // TODO(csilvers): make protected + friend element_adaptor
  reference mutating_get(size_type i) {    // fills bucket i before getting
    assert(i < settings.table_size);
    typename group_type::size_type old_numbuckets =
        which_group(i).num_nonempty();
    reference retval = which_group(i).mutating_get(pos_in_group(i));
    settings.num_buckets += which_group(i).num_nonempty() - old_numbuckets;
    return retval;
  }
  // Syntactic sugar.  As in sparsegroup, the non-const version is harder
  const_reference operator[](size_type i) const {
    return get(i);
  }
  element_adaptor operator[](size_type i) {
    return element_adaptor(this, i);
  }

  // Needed for hashtables, gets as a nonempty_iterator.  Crashes for empty bcks
  const_nonempty_iterator get_iter(size_type i) const {
    assert(test(i));   // how can a nonempty_iterator point to an empty bucket?
    return const_nonempty_iterator(
        groups.begin(), groups.end(),
        groups.begin() + group_num(i),
        (groups[group_num(i)].nonempty_begin() +
         groups[group_num(i)].pos_to_offset(pos_in_group(i))));
  }
  // For nonempty we can return a non-const version
  nonempty_iterator get_iter(size_type i) {
    assert(test(i));   // how can a nonempty_iterator point to an empty bucket?
    return nonempty_iterator(
        groups.begin(), groups.end(),
        groups.begin() + group_num(i),
        (groups[group_num(i)].nonempty_begin() +
         groups[group_num(i)].pos_to_offset(pos_in_group(i))));
  }

  // And the reverse transformation.
  size_type get_pos(const const_nonempty_iterator it) const {
    difference_type current_row = it.row_current - it.row_begin;
    difference_type current_col = (it.col_current -
                                   groups[current_row].nonempty_begin());
    return ((current_row * GROUP_SIZE) +
            groups[current_row].offset_to_pos(current_col));
  }

  // This returns a reference to the inserted item (which is a copy of val)
  // The trick is to figure out whether we're replacing or inserting anew
  reference set(size_type i, const_reference val) {
    assert(i < settings.table_size);
    typename group_type::size_type old_numbuckets =
        which_group(i).num_nonempty();
    reference retval = which_group(i).set(pos_in_group(i), val);
    settings.num_buckets += which_group(i).num_nonempty() - old_numbuckets;
    return retval;
  }

  // This takes the specified elements out of the table.  This is
  // "undefining", rather than "clearing".
  void erase(size_type i) {
    assert(i < settings.table_size);
    typename group_type::size_type old_numbuckets =
        which_group(i).num_nonempty();
    which_group(i).erase(pos_in_group(i));
    settings.num_buckets += which_group(i).num_nonempty() - old_numbuckets;
  }

  void erase(iterator pos) {
    erase(pos.pos);
  }

  void erase(iterator start_it, iterator end_it) {
    // This could be more efficient, but then we'd need to figure
    // out if we spanned groups or not.  Doesn't seem worth it.
    for ( ; start_it != end_it; ++start_it )
      erase(start_it);
  }

  // We support reading and writing tables to disk.  We don't store
  // the actual array contents (which we don't know how to store),
  // just the groups and sizes.  Returns true if all went ok.
 private:
  // Every time the disk format changes, this should probably change too
  typedef unsigned long MagicNumberType;
  static const MagicNumberType MAGIC_NUMBER = 0x24687531;

  // Old versions of this code write all data in 32 bits.  We need to
  // support these files as well as having support for 64-bit systems.
  // So we use the following encoding scheme: for values < 2^32-1, we
  // store in 4 bytes in big-endian order.  For values > 2^32, we
  // store 0xFFFFFFFF followed by 8 bytes in big-endian order.  This
  // causes us to mis-read old-version code that stores exactly
  // 0xFFFFFFFF, but I don't think that is likely to have happened for
  // these particular values.
  template <typename OUTPUT, typename IntType>
  static bool write_32_or_64(OUTPUT* fp, IntType value) {
    if ( value < 0xFFFFFFFFULL ) {        // fits in 4 bytes
      if ( !sparsehash_internal::write_bigendian_number(fp, value, 4) )
        return false;
    } else {
      if ( !sparsehash_internal::write_bigendian_number(fp, 0xFFFFFFFFUL, 4) )
        return false;
      if ( !sparsehash_internal::write_bigendian_number(fp, value, 8) )
        return false;
    }
    return true;
  }

  template <typename INPUT, typename IntType>
  static bool read_32_or_64(INPUT* fp, IntType *value) {  // reads into value
    MagicNumberType first4 = 0;   // a convenient 32-bit unsigned type
    if ( !sparsehash_internal::read_bigendian_number(fp, &first4, 4) )
      return false;
    if ( first4 < 0xFFFFFFFFULL ) {
      *value = first4;
    } else {
      if ( !sparsehash_internal::read_bigendian_number(fp, value, 8) )
        return false;
    }
    return true;
  }

 public:
  // read/write_metadata() and read_write/nopointer_data() are DEPRECATED.
  // Use serialize() and unserialize(), below, for new code.
  template <typename OUTPUT>
  bool write_metadata(OUTPUT *fp) const {
    if ( !write_32_or_64(fp, MAGIC_NUMBER) )  return false;
    if ( !write_32_or_64(fp, settings.table_size) )  return false;
    if ( !write_32_or_64(fp, settings.num_buckets) )  return false;

    GroupsConstIterator group;
    for ( group = groups.begin(); group != groups.end(); ++group )
      if ( group->write_metadata(fp) == false )  return false;
    return true;
  }

  // Reading destroys the old table contents!  Returns true if read ok.
  template <typename INPUT>
  bool read_metadata(INPUT *fp) {
    size_type magic_read = 0;
    if ( !read_32_or_64(fp, &magic_read) )  return false;
    if ( magic_read != MAGIC_NUMBER ) {
      clear();                        // just to be consistent
      return false;
    }

    if ( !read_32_or_64(fp, &settings.table_size) )  return false;
    if ( !read_32_or_64(fp, &settings.num_buckets) )  return false;

    resize(settings.table_size);                   // so the vector's sized ok

    GroupsIterator group;
    for ( group = groups.begin(); group != groups.end(); ++group )
      if ( group->read_metadata(fp) == false )  return false;
    return true;
  }

  // This code is identical to that for SparseGroup
  // If your keys and values are simple enough, we can write them
  // to disk for you.  "simple enough" means no pointers.
  // However, we don't try to normalize endianness
  bool write_nopointer_data(FILE *fp) const {
    for ( const_nonempty_iterator it = nonempty_begin();
          it != nonempty_end(); ++it ) {
      if ( !fwrite(&*it, sizeof(*it), 1, fp) )  return false;
    }
    return true;
  }

  // When reading, we have to override the potential const-ness of *it
  bool read_nopointer_data(FILE *fp) {
    for ( nonempty_iterator it = nonempty_begin();
          it != nonempty_end(); ++it ) {
      if ( !fread(reinterpret_cast<void*>(&(*it)), sizeof(*it), 1, fp) )
        return false;
    }
    return true;
  }

  // INPUT and OUTPUT must be either a FILE, *or* a C++ stream
  // (istream, ostream, etc) *or* a class providing
  // Read(void*, size_t) and Write(const void*, size_t)
  // (respectively), which writes a buffer into a stream
  // (which the INPUT/OUTPUT instance presumably owns).
  typedef sparsehash_internal::pod_serializer<value_type> NopointerSerializer;

  // ValueSerializer: a functor.  operator()(OUTPUT*, const value_type&)
  template <typename ValueSerializer, typename OUTPUT>
  bool serialize(ValueSerializer serializer, OUTPUT *fp) {
    if ( !write_metadata(fp) )
      return false;
    for ( const_nonempty_iterator it = nonempty_begin();
          it != nonempty_end(); ++it ) {
      if ( !serializer(fp, *it) )  return false;
    }
    return true;
  }

  // ValueSerializer: a functor.  operator()(INPUT*, value_type*)
  template <typename ValueSerializer, typename INPUT>
  bool unserialize(ValueSerializer serializer, INPUT *fp) {
    clear();
    if ( !read_metadata(fp) )
      return false;
    for ( nonempty_iterator it = nonempty_begin();
          it != nonempty_end(); ++it ) {
      if ( !serializer(fp, &*it) )  return false;
    }
    return true;
  }

  // Comparisons.  Note the comparisons are pretty arbitrary: we
  // compare values of the first index that isn't equal (using default
  // value for empty buckets).
  bool operator==(const sparsetable& x) const {
    return ( settings.table_size == x.settings.table_size &&
             settings.num_buckets == x.settings.num_buckets &&
             groups == x.groups );
  }

  bool operator<(const sparsetable& x) const {
    return std::lexicographical_compare(begin(), end(), x.begin(), x.end());
  }
  bool operator!=(const sparsetable& x) const { return !(*this == x); }
  bool operator<=(const sparsetable& x) const { return !(x < *this); }
  bool operator>(const sparsetable& x) const { return x < *this; }
  bool operator>=(const sparsetable& x) const { return !(*this < x); }

 private:
  // Package allocator with table_size and num_buckets to eliminate memory
  // needed for the zero-size allocator.
  // If new fields are added to this class, we should add them to
  // operator= and swap.
  class Settings : public allocator_type {
   public:
    typedef typename allocator_type::size_type size_type;

    Settings(const allocator_type& a, size_type sz = 0, size_type n = 0)
        : allocator_type(a), table_size(sz), num_buckets(n) { }

    Settings(const Settings& s)
        : allocator_type(s),
          table_size(s.table_size), num_buckets(s.num_buckets) { }

    size_type table_size;          // how many buckets they want
    size_type num_buckets;         // number of non-empty buckets
  };

  // The actual data
  group_vector_type groups;        // our list of groups
  Settings settings;               // allocator, table size, buckets
};

// We need a global swap as well
template <class T, u_int16_t GROUP_SIZE, class Alloc>
inline void swap(sparsetable<T,GROUP_SIZE,Alloc> &x,
                 sparsetable<T,GROUP_SIZE,Alloc> &y) {
  x.swap(y);
}

_END_GOOGLE_NAMESPACE_

#endif  // UTIL_GTL_SPARSETABLE_H_

sparsehash-2.0.2/src/sparsehash/dense_hash_map

// Copyright (c) 2005, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED.
// IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// ----
//
// This is just a very thin wrapper over densehashtable.h, just
// like sgi stl's stl_hash_map is a very thin wrapper over
// stl_hashtable.  The major thing we define is operator[], because
// we have a concept of a data_type which stl_hashtable doesn't
// (it only has a key and a value).
//
// NOTE: this is exactly like sparse_hash_map.h, with the word
// "sparse" replaced by "dense", except for the addition of
// set_empty_key().
//
// YOU MUST CALL SET_EMPTY_KEY() IMMEDIATELY AFTER CONSTRUCTION.
//
// Otherwise your program will die in mysterious ways.  (Note if you
// use the constructor that takes an InputIterator range, you pass in
// the empty key in the constructor, rather than after.  As a result,
// this constructor differs from the standard STL version.)
//
// In other respects, we adhere mostly to the STL semantics for
// hash-map.  One important exception is that insert() may invalidate
// iterators entirely -- STL semantics are that insert() may reorder
// iterators, but they all still refer to something valid in the
// hashtable.  Not so for us.  Likewise, insert() may invalidate
// pointers into the hashtable.  (Whether insert invalidates iterators
// and pointers depends on whether it results in a hashtable resize).
// On the plus side, delete() doesn't invalidate iterators or pointers
// at all, or even change the ordering of elements.
//
// Here are a few "power user" tips:
//
//    1) set_deleted_key():
//         If you want to use erase() you *must* call set_deleted_key(),
//         in addition to set_empty_key(), after construction.
//         The deleted and empty keys must differ.
//
//    2) resize(0):
//         When an item is deleted, its memory isn't freed right
//         away.  This allows you to iterate over a hashtable,
//         and call erase(), without invalidating the iterator.
//         To force the memory to be freed, call resize(0).
//         For tr1 compatibility, this can also be called as rehash(0).
//
//    3) min_load_factor(0.0)
//         Setting the minimum load factor to 0.0 guarantees that
//         the hash table will never shrink.
//
// Roughly speaking:
//   (1) dense_hash_map: fastest, uses the most memory unless entries are small
//   (2) sparse_hash_map: slowest, uses the least memory
//   (3) hash_map / unordered_map (STL): in the middle
//
// Typically I use sparse_hash_map when I care about space and/or when
// I need to save the hashtable on disk.  I use hash_map otherwise.  I
// don't personally use dense_hash_map ever; some people use it for
// small maps with lots of lookups.
//
// - dense_hash_map has, typically, about 78% memory overhead (if your
//   data takes up X bytes, the hash_map uses .78X more bytes in overhead).
// - sparse_hash_map has about 4 bits overhead per entry.
// - sparse_hash_map can be 3-7 times slower than the others for lookup and,
//   especially, inserts.  See time_hash_map.cc for details.
//
// See /usr/(local/)?doc/sparsehash-*/dense_hash_map.html
// for information about how to use this class.
#ifndef _DENSE_HASH_MAP_H_
#define _DENSE_HASH_MAP_H_

#include <sparsehash/internal/sparseconfig.h>
#include <stdio.h>                   // needed by stl_alloc
#include <functional>                // for equal_to<>, select1st<>, etc
#include <memory>                    // for alloc
#include <utility>                   // for pair<>
#include <sparsehash/internal/densehashtable.h>  // IWYU pragma: export
#include <sparsehash/internal/libc_allocator_with_realloc.h>
#include HASH_FUN_H                  // for hash<>

_START_GOOGLE_NAMESPACE_

template <class Key, class T,
          class HashFcn = SPARSEHASH_HASH<Key>,  // defined in sparseconfig.h
          class EqualKey = std::equal_to<Key>,
          class Alloc = libc_allocator_with_realloc<std::pair<const Key, T> > >
class dense_hash_map {
 private:
  // Apparently select1st is not stl-standard, so we define our own
  struct SelectKey {
    typedef const Key& result_type;
    const Key& operator()(const std::pair<const Key, T>& p) const {
      return p.first;
    }
  };
  struct SetKey {
    void operator()(std::pair<const Key, T>* value, const Key& new_key) const {
      *const_cast<Key*>(&value->first) = new_key;
      // It would be nice to clear the rest of value here as well, in
      // case it's taking up a lot of memory.  We do this by clearing
      // the value.  This assumes T has a zero-arg constructor!
      value->second = T();
    }
  };
  // For operator[].
  struct DefaultValue {
    std::pair<const Key, T> operator()(const Key& key) {
      return std::make_pair(key, T());
    }
  };

  // The actual data
  typedef dense_hashtable<std::pair<const Key, T>, Key, HashFcn, SelectKey,
                          SetKey, EqualKey, Alloc> ht;
  ht rep;

 public:
  typedef typename ht::key_type key_type;
  typedef T data_type;
  typedef T mapped_type;
  typedef typename ht::value_type value_type;
  typedef typename ht::hasher hasher;
  typedef typename ht::key_equal key_equal;
  typedef Alloc allocator_type;

  typedef typename ht::size_type size_type;
  typedef typename ht::difference_type difference_type;
  typedef typename ht::pointer pointer;
  typedef typename ht::const_pointer const_pointer;
  typedef typename ht::reference reference;
  typedef typename ht::const_reference const_reference;

  typedef typename ht::iterator iterator;
  typedef typename ht::const_iterator const_iterator;
  typedef typename ht::local_iterator local_iterator;
  typedef typename ht::const_local_iterator const_local_iterator;

  // Iterator functions
  iterator begin()             { return rep.begin(); }
  iterator end()               { return
      rep.end(); }
  const_iterator begin() const { return rep.begin(); }
  const_iterator end() const   { return rep.end(); }

  // These come from tr1's unordered_map. For us, a bucket has 0 or 1 elements.
  local_iterator begin(size_type i)             { return rep.begin(i); }
  local_iterator end(size_type i)               { return rep.end(i); }
  const_local_iterator begin(size_type i) const { return rep.begin(i); }
  const_local_iterator end(size_type i) const   { return rep.end(i); }

  // Accessor functions
  allocator_type get_allocator() const { return rep.get_allocator(); }
  hasher hash_funct() const            { return rep.hash_funct(); }
  hasher hash_function() const         { return hash_funct(); }
  key_equal key_eq() const             { return rep.key_eq(); }

  // Constructors
  explicit dense_hash_map(size_type expected_max_items_in_table = 0,
                          const hasher& hf = hasher(),
                          const key_equal& eql = key_equal(),
                          const allocator_type& alloc = allocator_type())
      : rep(expected_max_items_in_table, hf, eql, SelectKey(), SetKey(),
            alloc) {
  }

  template <class InputIterator>
  dense_hash_map(InputIterator f, InputIterator l,
                 const key_type& empty_key_val,
                 size_type expected_max_items_in_table = 0,
                 const hasher& hf = hasher(),
                 const key_equal& eql = key_equal(),
                 const allocator_type& alloc = allocator_type())
      : rep(expected_max_items_in_table, hf, eql, SelectKey(), SetKey(),
            alloc) {
    set_empty_key(empty_key_val);
    rep.insert(f, l);
  }
  // We use the default copy constructor
  // We use the default operator=()
  // We use the default destructor

  void clear()                        { rep.clear(); }
  // This clears the hash map without resizing it down to the minimum
  // bucket count, but rather keeps the number of buckets constant
  void clear_no_resize()              { rep.clear_no_resize(); }
  void swap(dense_hash_map& hs)       { rep.swap(hs.rep); }

  // Functions concerning size
  size_type size() const              { return rep.size(); }
  size_type max_size() const          { return rep.max_size(); }
  bool empty() const                  { return rep.empty(); }
  size_type bucket_count() const      { return rep.bucket_count(); }
  size_type max_bucket_count() const  { return
      rep.max_bucket_count(); }

  // These are tr1 methods.  bucket() is the bucket the key is or would be in.
  size_type bucket_size(size_type i) const    { return rep.bucket_size(i); }
  size_type bucket(const key_type& key) const { return rep.bucket(key); }
  float load_factor() const {
    return size() * 1.0f / bucket_count();
  }
  float max_load_factor() const {
    float shrink, grow;
    rep.get_resizing_parameters(&shrink, &grow);
    return grow;
  }
  void max_load_factor(float new_grow) {
    float shrink, grow;
    rep.get_resizing_parameters(&shrink, &grow);
    rep.set_resizing_parameters(shrink, new_grow);
  }
  // These aren't tr1 methods but perhaps ought to be.
  float min_load_factor() const {
    float shrink, grow;
    rep.get_resizing_parameters(&shrink, &grow);
    return shrink;
  }
  void min_load_factor(float new_shrink) {
    float shrink, grow;
    rep.get_resizing_parameters(&shrink, &grow);
    rep.set_resizing_parameters(new_shrink, grow);
  }
  // Deprecated; use min_load_factor() or max_load_factor() instead.
  void set_resizing_parameters(float shrink, float grow) {
    rep.set_resizing_parameters(shrink, grow);
  }

  void resize(size_type hint)         { rep.resize(hint); }
  void rehash(size_type hint)         { resize(hint); }  // the tr1 name

  // Lookup routines
  iterator find(const key_type& key)             { return rep.find(key); }
  const_iterator find(const key_type& key) const { return rep.find(key); }

  data_type& operator[](const key_type& key) {  // This is our value-add!
    // If key is in the hashtable, returns find(key)->second,
    // otherwise returns insert(value_type(key, T()).first->second.
    // Note it does not create an empty T unless the find fails.
    return rep.template find_or_insert<DefaultValue>(key).second;
  }

  size_type count(const key_type& key) const { return rep.count(key); }

  std::pair<iterator, iterator> equal_range(const key_type& key) {
    return rep.equal_range(key);
  }
  std::pair<const_iterator, const_iterator> equal_range(const key_type& key)
      const {
    return rep.equal_range(key);
  }

  // Insertion routines
  std::pair<iterator, bool> insert(const value_type& obj) {
    return rep.insert(obj);
  }
  template <class InputIterator>
  void insert(InputIterator f, InputIterator l) {
    rep.insert(f, l);
  }
  void insert(const_iterator f, const_iterator l) {
    rep.insert(f, l);
  }
  // Required for std::insert_iterator; the passed-in iterator is ignored.
  iterator insert(iterator, const value_type& obj) {
    return insert(obj).first;
  }

  // Deletion and empty routines
  // THESE ARE NON-STANDARD!  I make you specify an "impossible" key
  // value to identify deleted and empty buckets.  You can change the
  // deleted key as time goes on, or get rid of it entirely to be insert-only.
  void set_empty_key(const key_type& key) {   // YOU MUST CALL THIS!
    rep.set_empty_key(value_type(key, data_type()));   // rep wants a value
  }
  key_type empty_key() const {
    return rep.empty_key().first;              // rep returns a value
  }

  void set_deleted_key(const key_type& key)   { rep.set_deleted_key(key); }
  void clear_deleted_key()                    { rep.clear_deleted_key(); }
  key_type deleted_key() const                { return rep.deleted_key(); }

  // These are standard
  size_type erase(const key_type& key)        { return rep.erase(key); }
  void erase(iterator it)                     { rep.erase(it); }
  void erase(iterator f, iterator l)          { rep.erase(f, l); }

  // Comparison
  bool operator==(const dense_hash_map& hs) const { return rep == hs.rep; }
  bool operator!=(const dense_hash_map& hs) const { return rep != hs.rep; }

  // I/O -- this is an add-on for writing hash map to disk
  //
  // For maximum flexibility, this does not assume a particular
  // file type (though it will probably be a FILE *).  We just pass
  // the fp through to rep.

  // If your keys and values are simple enough, you can pass this
  // serializer to serialize()/unserialize().  "Simple enough" means
  // value_type is a POD type that contains no pointers.  Note,
  // however, we don't try to normalize endianness.
  typedef typename ht::NopointerSerializer NopointerSerializer;

  // serializer: a class providing operator()(OUTPUT*, const value_type&)
  //    (writing value_type to OUTPUT).  You can specify a
  //    NopointerSerializer object if appropriate (see above).
  // fp: either a FILE*, OR an ostream*/subclass_of_ostream*, OR a
  //    pointer to a class providing size_t Write(const void*, size_t),
  //    which writes a buffer into a stream (which fp presumably
  //    owns) and returns the number of bytes successfully written.
  //    Note basic_ostream is not currently supported.
  template <typename ValueSerializer, typename OUTPUT>
  bool serialize(ValueSerializer serializer, OUTPUT* fp) {
    return rep.serialize(serializer, fp);
  }

  // serializer: a functor providing operator()(INPUT*, value_type*)
  //    (reading from INPUT and into value_type).  You can specify a
  //    NopointerSerializer object if appropriate (see above).
  // fp: either a FILE*, OR an istream*/subclass_of_istream*, OR a
  //    pointer to a class providing size_t Read(void*, size_t),
  //    which reads into a buffer from a stream (which fp presumably
  //    owns) and returns the number of bytes successfully read.
  //    Note basic_istream is not currently supported.
  // NOTE: Since value_type is std::pair<const Key, T>, ValueSerializer
  // may need to do a const cast in order to fill in the key.
  template <typename ValueSerializer, typename INPUT>
  bool unserialize(ValueSerializer serializer, INPUT* fp) {
    return rep.unserialize(serializer, fp);
  }
};

// We need a global swap as well
template <class Key, class T, class HashFcn, class EqualKey, class Alloc>
inline void swap(dense_hash_map<Key, T, HashFcn, EqualKey, Alloc>& hm1,
                 dense_hash_map<Key, T, HashFcn, EqualKey, Alloc>& hm2) {
  hm1.swap(hm2);
}

_END_GOOGLE_NAMESPACE_

#endif /* _DENSE_HASH_MAP_H_ */

sparsehash-2.0.2/src/sparsehash/dense_hash_set

// Copyright (c) 2005, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// ---
//
// This is just a very thin wrapper over densehashtable.h, just
// like sgi stl's stl_hash_set is a very thin wrapper over
// stl_hashtable.  (Unlike dense_hash_map, we define no operator[]
// here: a set has no data_type, only keys.)
//
// This is more different from dense_hash_map than you might think,
// because all iterators for sets are const (you obviously can't
// change the key, and for sets there is no value).
// // NOTE: this is exactly like sparse_hash_set.h, with the word // "sparse" replaced by "dense", except for the addition of // set_empty_key(). // // YOU MUST CALL SET_EMPTY_KEY() IMMEDIATELY AFTER CONSTRUCTION. // // Otherwise your program will die in mysterious ways. (Note if you // use the constructor that takes an InputIterator range, you pass in // the empty key in the constructor, rather than after. As a result, // this constructor differs from the standard STL version.) // // In other respects, we adhere mostly to the STL semantics for // hash-map. One important exception is that insert() may invalidate // iterators entirely -- STL semantics are that insert() may reorder // iterators, but they all still refer to something valid in the // hashtable. Not so for us. Likewise, insert() may invalidate // pointers into the hashtable. (Whether insert invalidates iterators // and pointers depends on whether it results in a hashtable resize). // On the plus side, delete() doesn't invalidate iterators or pointers // at all, or even change the ordering of elements. // // Here are a few "power user" tips: // // 1) set_deleted_key(): // If you want to use erase() you must call set_deleted_key(), // in addition to set_empty_key(), after construction. // The deleted and empty keys must differ. // // 2) resize(0): // When an item is deleted, its memory isn't freed right // away. This allows you to iterate over a hashtable, // and call erase(), without invalidating the iterator. // To force the memory to be freed, call resize(0). // For tr1 compatibility, this can also be called as rehash(0). // // 3) min_load_factor(0.0) // Setting the minimum load factor to 0.0 guarantees that // the hash table will never shrink. 
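The empty-key and deleted-key requirements above follow from how the underlying dense table marks bucket state. Below is a minimal, self-contained sketch of that sentinel-based open-addressing scheme; it is illustrative only (`TinyDenseSet` and everything in it are invented names, not sparsehash code; the real logic, with resizing and real hashing, lives in densehashtable.h):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Fixed-capacity set of ints with linear probing.  One "impossible" key
// value marks never-used buckets (empty) and another marks erased ones
// (deleted), so probe chains stay intact after an erase.
class TinyDenseSet {
 public:
  TinyDenseSet(int empty_key, int deleted_key, std::size_t buckets)
      : empty_(empty_key), deleted_(deleted_key), slots_(buckets, empty_key) {}

  bool contains(int key) const {
    std::size_t i = hash(key);
    for (std::size_t n = 0; n < slots_.size(); ++n, i = (i + 1) % slots_.size()) {
      if (slots_[i] == key) return true;
      // An empty bucket ends the probe chain.  A deleted bucket must NOT:
      // the key may have been placed beyond it before the erase happened.
      if (slots_[i] == empty_) return false;
    }
    return false;
  }

  bool insert(int key) {
    if (contains(key)) return false;  // no duplicates
    std::size_t i = hash(key);
    for (std::size_t n = 0; n < slots_.size(); ++n, i = (i + 1) % slots_.size()) {
      if (slots_[i] == empty_ || slots_[i] == deleted_) {
        slots_[i] = key;  // deleted buckets are reusable
        return true;
      }
    }
    return false;  // table full; a real implementation would resize
  }

  bool erase(int key) {
    std::size_t i = hash(key);
    for (std::size_t n = 0; n < slots_.size(); ++n, i = (i + 1) % slots_.size()) {
      if (slots_[i] == key) { slots_[i] = deleted_; return true; }
      if (slots_[i] == empty_) return false;
    }
    return false;
  }

 private:
  std::size_t hash(int key) const {
    return static_cast<std::size_t>(key) % slots_.size();
  }
  int empty_, deleted_;
  std::vector<int> slots_;
};
```

Note how erase() only overwrites the slot with the deleted sentinel rather than freeing anything; this is why, as described above, erase() never invalidates iterators and why an explicit resize(0) is needed to actually reclaim memory.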
//
// Roughly speaking:
//   (1) dense_hash_set: fastest, uses the most memory unless entries are small
//   (2) sparse_hash_set: slowest, uses the least memory
//   (3) hash_set / unordered_set (STL): in the middle
//
// Typically I use sparse_hash_set when I care about space and/or when
// I need to save the hashtable on disk.  I use hash_set otherwise.  I
// don't personally use dense_hash_set ever; some people use it for
// small sets with lots of lookups.
//
// - dense_hash_set has, typically, about 78% memory overhead (if your
//   data takes up X bytes, the hash_set uses .78X more bytes in overhead).
// - sparse_hash_set has about 4 bits overhead per entry.
// - sparse_hash_set can be 3-7 times slower than the others for lookup and,
//   especially, inserts.  See time_hash_map.cc for details.
//
// See /usr/(local/)?doc/sparsehash-*/dense_hash_set.html
// for information about how to use this class.

#ifndef _DENSE_HASH_SET_H_
#define _DENSE_HASH_SET_H_

#include <sparsehash/internal/sparseconfig.h>
#include <algorithm>        // needed by stl_alloc
#include <functional>       // for equal_to<>, select1st<>, etc
#include <memory>           // for alloc
#include <utility>          // for pair<>
#include <sparsehash/internal/densehashtable.h>  // IWYU pragma: export
#include <sparsehash/internal/libc_allocator_with_realloc.h>
#include HASH_FUN_H         // for hash<>

_START_GOOGLE_NAMESPACE_

template <class Value,
          class HashFcn = SPARSEHASH_HASH<Value>,  // defined in sparseconfig.h
          class EqualKey = std::equal_to<Value>,
          class Alloc = libc_allocator_with_realloc<Value> >
class dense_hash_set {
 private:
  // Apparently identity is not stl-standard, so we define our own
  struct Identity {
    typedef const Value& result_type;
    const Value& operator()(const Value& v) const { return v; }
  };
  struct SetKey {
    void operator()(Value* value, const Value& new_key) const {
      *value = new_key;
    }
  };

  // The actual data
  typedef dense_hashtable<Value, Value, HashFcn, Identity, SetKey,
                          EqualKey, Alloc> ht;
  ht rep;

 public:
  typedef typename ht::key_type key_type;
  typedef typename ht::value_type value_type;
  typedef typename ht::hasher hasher;
  typedef typename ht::key_equal key_equal;
  typedef Alloc allocator_type;

  typedef typename ht::size_type size_type;
  typedef typename ht::difference_type difference_type;
  typedef typename
      ht::const_pointer pointer;
  typedef typename ht::const_pointer const_pointer;
  typedef typename ht::const_reference reference;
  typedef typename ht::const_reference const_reference;

  typedef typename ht::const_iterator iterator;
  typedef typename ht::const_iterator const_iterator;
  typedef typename ht::const_local_iterator local_iterator;
  typedef typename ht::const_local_iterator const_local_iterator;

  // Iterator functions -- recall all iterators are const
  iterator begin() const { return rep.begin(); }
  iterator end() const { return rep.end(); }

  // These come from tr1's unordered_set.  For us, a bucket has 0 or 1 elements.
  local_iterator begin(size_type i) const { return rep.begin(i); }
  local_iterator end(size_type i) const { return rep.end(i); }

  // Accessor functions
  allocator_type get_allocator() const { return rep.get_allocator(); }
  hasher hash_funct() const { return rep.hash_funct(); }
  hasher hash_function() const { return hash_funct(); }  // tr1 name
  key_equal key_eq() const { return rep.key_eq(); }

  // Constructors
  explicit dense_hash_set(size_type expected_max_items_in_table = 0,
                          const hasher& hf = hasher(),
                          const key_equal& eql = key_equal(),
                          const allocator_type& alloc = allocator_type())
      : rep(expected_max_items_in_table, hf, eql, Identity(), SetKey(), alloc) {
  }

  template <class InputIterator>
  dense_hash_set(InputIterator f, InputIterator l,
                 const key_type& empty_key_val,
                 size_type expected_max_items_in_table = 0,
                 const hasher& hf = hasher(),
                 const key_equal& eql = key_equal(),
                 const allocator_type& alloc = allocator_type())
      : rep(expected_max_items_in_table, hf, eql, Identity(), SetKey(), alloc) {
    set_empty_key(empty_key_val);
    rep.insert(f, l);
  }
  // We use the default copy constructor
  // We use the default operator=()
  // We use the default destructor

  void clear() { rep.clear(); }
  // This clears the hash set without resizing it down to the minimum
  // bucket count, but rather keeps the number of buckets constant
  void clear_no_resize() { rep.clear_no_resize(); }
  void swap(dense_hash_set&
            hs) { rep.swap(hs.rep); }

  // Functions concerning size
  size_type size() const { return rep.size(); }
  size_type max_size() const { return rep.max_size(); }
  bool empty() const { return rep.empty(); }
  size_type bucket_count() const { return rep.bucket_count(); }
  size_type max_bucket_count() const { return rep.max_bucket_count(); }

  // These are tr1 methods.  bucket() is the bucket the key is or would be in.
  size_type bucket_size(size_type i) const { return rep.bucket_size(i); }
  size_type bucket(const key_type& key) const { return rep.bucket(key); }
  float load_factor() const { return size() * 1.0f / bucket_count(); }
  float max_load_factor() const {
    float shrink, grow;
    rep.get_resizing_parameters(&shrink, &grow);
    return grow;
  }
  void max_load_factor(float new_grow) {
    float shrink, grow;
    rep.get_resizing_parameters(&shrink, &grow);
    rep.set_resizing_parameters(shrink, new_grow);
  }
  // These aren't tr1 methods but perhaps ought to be.
  float min_load_factor() const {
    float shrink, grow;
    rep.get_resizing_parameters(&shrink, &grow);
    return shrink;
  }
  void min_load_factor(float new_shrink) {
    float shrink, grow;
    rep.get_resizing_parameters(&shrink, &grow);
    rep.set_resizing_parameters(new_shrink, grow);
  }

  // Deprecated; use min_load_factor() or max_load_factor() instead.
  void set_resizing_parameters(float shrink, float grow) {
    rep.set_resizing_parameters(shrink, grow);
  }

  void resize(size_type hint) { rep.resize(hint); }
  void rehash(size_type hint) { resize(hint); }  // the tr1 name

  // Lookup routines
  iterator find(const key_type& key) const { return rep.find(key); }

  size_type count(const key_type& key) const { return rep.count(key); }

  std::pair<iterator, iterator> equal_range(const key_type& key) const {
    return rep.equal_range(key);
  }

  // Insertion routines
  std::pair<iterator, bool> insert(const value_type& obj) {
    std::pair<typename ht::iterator, bool> p = rep.insert(obj);
    return std::pair<iterator, bool>(p.first, p.second);  // const to non-const
  }
  template <class InputIterator>
  void insert(InputIterator f, InputIterator l) {
    rep.insert(f, l);
  }
  void insert(const_iterator f, const_iterator l) {
    rep.insert(f, l);
  }
  // Required for std::insert_iterator; the passed-in iterator is ignored.
  iterator insert(iterator, const value_type& obj) {
    return insert(obj).first;
  }

  // Deletion and empty routines
  // THESE ARE NON-STANDARD!  I make you specify an "impossible" key
  // value to identify deleted and empty buckets.  You can change the
  // deleted key as time goes on, or get rid of it entirely to be insert-only.
  void set_empty_key(const key_type& key) { rep.set_empty_key(key); }
  key_type empty_key() const { return rep.empty_key(); }

  void set_deleted_key(const key_type& key) { rep.set_deleted_key(key); }
  void clear_deleted_key() { rep.clear_deleted_key(); }
  key_type deleted_key() const { return rep.deleted_key(); }

  // These are standard
  size_type erase(const key_type& key) { return rep.erase(key); }
  void erase(iterator it) { rep.erase(it); }
  void erase(iterator f, iterator l) { rep.erase(f, l); }

  // Comparison
  bool operator==(const dense_hash_set& hs) const { return rep == hs.rep; }
  bool operator!=(const dense_hash_set& hs) const { return rep != hs.rep; }

  // I/O -- this is an add-on for writing metainformation to disk
  //
  // For maximum flexibility, this does not assume a particular
  // file type (though it will probably be a FILE *).
  // We just pass the fp through to rep.

  // If your keys and values are simple enough, you can pass this
  // serializer to serialize()/unserialize().  "Simple enough" means
  // value_type is a POD type that contains no pointers.  Note,
  // however, we don't try to normalize endianness.
  typedef typename ht::NopointerSerializer NopointerSerializer;

  // serializer: a class providing operator()(OUTPUT*, const value_type&)
  //    (writing value_type to OUTPUT).  You can specify a
  //    NopointerSerializer object if appropriate (see above).
  // fp: either a FILE*, OR an ostream*/subclass_of_ostream*, OR a
  //    pointer to a class providing size_t Write(const void*, size_t),
  //    which writes a buffer into a stream (which fp presumably
  //    owns) and returns the number of bytes successfully written.
  //    Note basic_ostream<not_char> is not currently supported.
  template <typename ValueSerializer, typename OUTPUT>
  bool serialize(ValueSerializer serializer, OUTPUT* fp) {
    return rep.serialize(serializer, fp);
  }

  // serializer: a functor providing operator()(INPUT*, value_type*)
  //    (reading from INPUT and into value_type).  You can specify a
  //    NopointerSerializer object if appropriate (see above).
  // fp: either a FILE*, OR an istream*/subclass_of_istream*, OR a
  //    pointer to a class providing size_t Read(void*, size_t),
  //    which reads into a buffer from a stream (which fp presumably
  //    owns) and returns the number of bytes successfully read.
  //    Note basic_istream<not_char> is not currently supported.
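The serializer contract described above can be sketched without sparsehash at all: a functor providing the two operator() overloads, doing raw-byte I/O on a pointer-free POD value_type through a FILE*. `Point` and `PodSerializer` below are illustrative names, not part of the library, and like NopointerSerializer this makes no attempt to normalize endianness:

```cpp
#include <cassert>
#include <cstdio>

struct Point { int x, y; };  // POD, no pointers -- "simple enough"

struct PodSerializer {
  // writing side: operator()(OUTPUT*, const value_type&)
  bool operator()(std::FILE* fp, const Point& v) const {
    return std::fwrite(&v, sizeof(v), 1, fp) == 1;
  }
  // reading side: operator()(INPUT*, value_type*)
  bool operator()(std::FILE* fp, Point* v) const {
    return std::fread(v, sizeof(*v), 1, fp) == 1;
  }
};
```

A functor of this shape is what serialize()/unserialize() invoke once per element; returning false aborts the operation.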
template bool unserialize(ValueSerializer serializer, INPUT* fp) { return rep.unserialize(serializer, fp); } }; template inline void swap(dense_hash_set& hs1, dense_hash_set& hs2) { hs1.swap(hs2); } _END_GOOGLE_NAMESPACE_ #endif /* _DENSE_HASH_SET_H_ */ sparsehash-2.0.2/src/google/0000775000175000017500000000000011721550526012726 500000000000000sparsehash-2.0.2/src/google/sparsehash/0000775000175000017500000000000011721550526015067 500000000000000sparsehash-2.0.2/src/google/sparsehash/sparsehashtable.h0000664000175000017500000000341411721252346020332 00000000000000// Copyright (c) 2012, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // Header files have moved from the google directory to the sparsehash // directory. This forwarding file is provided only for backwards // compatibility. Use in all new code. #include sparsehash-2.0.2/src/google/sparsehash/libc_allocator_with_realloc.h0000664000175000017500000000343011721252346022664 00000000000000// Copyright (c) 2012, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // Header files have moved from the google directory to the sparsehash // directory. This forwarding file is provided only for backwards // compatibility. Use in all new code. #include sparsehash-2.0.2/src/google/sparsehash/hashtable-common.h0000664000175000017500000000341511721252346020403 00000000000000// Copyright (c) 2012, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // Header files have moved from the google directory to the sparsehash // directory. This forwarding file is provided only for backwards // compatibility. Use in all new code. #include sparsehash-2.0.2/src/google/sparsehash/densehashtable.h0000664000175000017500000000341311721252346020132 00000000000000// Copyright (c) 2012, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // Header files have moved from the google directory to the sparsehash // directory. This forwarding file is provided only for backwards // compatibility. Use in all new code. #include sparsehash-2.0.2/src/google/template_util.h0000664000175000017500000000340111721252346015664 00000000000000// Copyright (c) 2012, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // Header files have moved from the google directory to the sparsehash // directory. This forwarding file is provided only for backwards // compatibility. Use in all new code. #include sparsehash-2.0.2/src/google/sparse_hash_map0000664000175000017500000000340111721252346015723 00000000000000// Copyright (c) 2012, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // Header files have moved from the google directory to the sparsehash // directory. This forwarding file is provided only for backwards // compatibility. Use in all new code. #include sparsehash-2.0.2/src/google/sparse_hash_set0000664000175000017500000000340111721252346015741 00000000000000// Copyright (c) 2012, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // Header files have moved from the google directory to the sparsehash // directory. This forwarding file is provided only for backwards // compatibility. Use in all new code. #include sparsehash-2.0.2/src/google/type_traits.h0000664000175000017500000000337711721252346015377 00000000000000// Copyright (c) 2012, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // Header files have moved from the google directory to the sparsehash // directory. This forwarding file is provided only for backwards // compatibility. Use in all new code. #include sparsehash-2.0.2/src/google/sparsetable0000664000175000017500000000337511721252346015105 00000000000000// Copyright (c) 2012, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // Header files have moved from the google directory to the sparsehash // directory. This forwarding file is provided only for backwards // compatibility. Use in all new code. #include sparsehash-2.0.2/src/google/dense_hash_map0000664000175000017500000000340011721252346015523 00000000000000// Copyright (c) 2012, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // Header files have moved from the google directory to the sparsehash // directory. This forwarding file is provided only for backwards // compatibility. Use in all new code. #include sparsehash-2.0.2/src/google/dense_hash_set0000664000175000017500000000340011721252346015541 00000000000000// Copyright (c) 2012, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // Header files have moved from the google directory to the sparsehash // directory. This forwarding file is provided only for backwards // compatibility. Use in all new code. #include sparsehash-2.0.2/src/template_util_unittest.cc0000664000175000017500000001015311721252346016507 00000000000000// Copyright 2005 Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// ----
//
// These tests are really compile time tests.
// If you try to step through this in a debugger
// you will not see any evaluations, merely that
// value is assigned true or false sequentially.

#include <config.h>
#include <sparsehash/template_util.h>
#include <string>
#include "testutil.h"

using namespace GOOGLE_NAMESPACE;

namespace {

TEST(TemplateUtilTest, TestSize) {
  EXPECT_GT(sizeof(GOOGLE_NAMESPACE::big_), sizeof(GOOGLE_NAMESPACE::small_));
}

TEST(TemplateUtilTest, TestIntegralConstants) {
  // test the built-in types.
  EXPECT_TRUE(true_type::value);
  EXPECT_FALSE(false_type::value);

  typedef integral_constant<int, 1> one_type;
  EXPECT_EQ(1, one_type::value);
}

TEST(TemplateUtilTest, TestTemplateIf) {
  typedef if_<true, true_type, false_type>::type if_true;
  EXPECT_TRUE(if_true::value);

  typedef if_<false, true_type, false_type>::type if_false;
  EXPECT_FALSE(if_false::value);
}

TEST(TemplateUtilTest, TestTemplateTypeEquals) {
  // Check that the TemplateTypeEquals works correctly.
  bool value = false;

  // Test the same type is true.
  value = type_equals_<int, int>::value;
  EXPECT_TRUE(value);

  // Test different types are false.
  value = type_equals_<int, float>::value;
  EXPECT_FALSE(value);

  // Test type aliasing.
  typedef const int foo;
  value = type_equals_<foo, const int>::value;
  EXPECT_TRUE(value);
}

TEST(TemplateUtilTest, TestTemplateAndOr) {
  // Check that the TemplateTypeEquals works correctly.
  bool value = false;

  // Yes && Yes == true.
  value = and_<true, true>::value;
  EXPECT_TRUE(value);
  // Yes && No == false.
  value = and_<true, false>::value;
  EXPECT_FALSE(value);
  // No && Yes == false.
  value = and_<false, true>::value;
  EXPECT_FALSE(value);
  // No && No == false.
  value = and_<false, false>::value;
  EXPECT_FALSE(value);

  // Yes || Yes == true.
  value = or_<true, true>::value;
  EXPECT_TRUE(value);
  // Yes || No == true.
  value = or_<true, false>::value;
  EXPECT_TRUE(value);
  // No || Yes == true.
  value = or_<false, true>::value;
  EXPECT_TRUE(value);
  // No || No == false.
  value = or_<false, false>::value;
  EXPECT_FALSE(value);
}

TEST(TemplateUtilTest, TestIdentity) {
  EXPECT_TRUE(
      (type_equals_<identity_<int>::type, int>::value));
  EXPECT_TRUE(
      (type_equals_<identity_<void>::type, void>::value));
}

}  // namespace

#include <iostream>
int main(int, char **) {
  // All the work is done in the static constructors.  If they don't
  // die, the tests have all passed.
  std::cout << "PASS\n";
  return 0;
}
sparsehash-2.0.2/src/config.h.include0000664000175000017500000000117311721252346014433 00000000000000/***
 *** These are #defines that autoheader puts in config.h.in that we
 *** want to show up in sparseconfig.h, the minimal config.h file
 *** #included by all our .h files.  The reason we don't take
 *** everything that autoheader emits is that we have to include a
 *** config.h in installed header files, and we want to minimize the
 *** number of #defines we make so as to not pollute the namespace.
 ***/
GOOGLE_NAMESPACE
HASH_NAMESPACE
HASH_FUN_H
SPARSEHASH_HASH
HAVE_UINT16_T
HAVE_U_INT16_T
HAVE___UINT16
HAVE_LONG_LONG
HAVE_SYS_TYPES_H
HAVE_STDINT_H
HAVE_INTTYPES_H
HAVE_MEMCPY
_END_GOOGLE_NAMESPACE_
_START_GOOGLE_NAMESPACE_
sparsehash-2.0.2/src/libc_allocator_with_realloc_test.cc0000664000175000017500000001053711721252346020452 00000000000000// Copyright (c) 2010, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// ---

#include <config.h>
#include <stdio.h>
#include <stdlib.h>
#include <string>
#include <vector>
#include <iostream>
#include <sparsehash/internal/libc_allocator_with_realloc.h>
#include "testutil.h"

using std::cerr;
using std::cout;
using std::string;
using std::basic_string;
using std::char_traits;
using std::vector;
using GOOGLE_NAMESPACE::libc_allocator_with_realloc;

#define arraysize(a)  ( sizeof(a) / sizeof(*(a)) )

namespace {

typedef libc_allocator_with_realloc<int> int_alloc;
typedef int_alloc::rebind<int*>::other intp_alloc;

// cstring allocates from libc_allocator_with_realloc.
typedef basic_string<char, char_traits<char>, libc_allocator_with_realloc<char> > cstring;
typedef vector<cstring, libc_allocator_with_realloc<cstring> > cstring_vector;

TEST(LibcAllocatorWithReallocTest, Allocate) {
  int_alloc alloc;
  intp_alloc palloc;

  int** parray = palloc.allocate(1024);
  for (int i = 0; i < 16; ++i) {
    parray[i] = alloc.allocate(i * 1024 + 1);
  }
  for (int i = 0; i < 16; ++i) {
    alloc.deallocate(parray[i], i * 1024 + 1);
  }
  palloc.deallocate(parray, 1024);

  int* p = alloc.allocate(4096);
  p[0] = 1;
  p[1023] = 2;
  p[4095] = 3;
  p = alloc.reallocate(p, 8192);
  EXPECT_EQ(1, p[0]);
  EXPECT_EQ(2, p[1023]);
  EXPECT_EQ(3, p[4095]);
  p = alloc.reallocate(p, 1024);
  EXPECT_EQ(1, p[0]);
  EXPECT_EQ(2, p[1023]);
  alloc.deallocate(p, 1024);
}

TEST(LibcAllocatorWithReallocTest, TestSTL) {
  // Test strings copied from base/arena_unittest.cc
  static const char* test_strings[] = {
      "aback", "abaft", "abandon", "abandoned", "abandoning", "abandonment",
      "abandons", "abase", "abased", "abasement", "abasements", "abases",
      "abash", "abashed", "abashes", "abashing", "abasing", "abate",
      "abated", "abatement", "abatements", "abater", "abates", "abating",
      "abbe", "abbey", "abbeys", "abbot", "abbots", "abbreviate",
      "abbreviated", "abbreviates", "abbreviating", "abbreviation",
      "abbreviations", "abdomen", "abdomens", "abdominal", "abduct",
      "abducted", "abduction", "abductions", "abductor", "abductors",
      "abducts", "Abe", "abed", "Abel", "Abelian", "Abelson", "Aberdeen",
      "Abernathy", "aberrant", "aberration", "aberrations", "abet", "abets",
      "abetted", "abetter", "abetting", "abeyance", "abhor", "abhorred",
      "abhorrent", "abhorrer", "abhorring", "abhors", "abide", "abided",
      "abides", "abiding"};

  cstring_vector v;
  for (size_t i = 0; i < arraysize(test_strings); ++i) {
    v.push_back(test_strings[i]);
  }
  for (size_t i = arraysize(test_strings); i > 0; --i) {
    EXPECT_EQ(cstring(test_strings[i-1]), v.back());
    v.pop_back();
  }
}

}  // namespace

int main(int, char **) {
  // All the work is done in the static constructors.  If they don't
  // die, the tests have all passed.
cout << "PASS\n"; return 0; } sparsehash-2.0.2/src/sparsetable_unittest.cc0000664000175000017500000007476411721252346016166 00000000000000// Copyright (c) 2005, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // --- // // Since sparsetable is templatized, it's important that we test every // function in every class in this file -- not just to see if it // works, but even if it compiles. #include #include #include #include #include // for size_t #include // defines unlink() on some windows platforms(?) 
#ifdef HAVE_UNISTD_H
# include <unistd.h>
#endif             // for unlink()
#include <memory>  // for allocator
#include <string>
#include <sparsehash/sparsetable>

using std::string;
using std::allocator;
using GOOGLE_NAMESPACE::sparsetable;
using GOOGLE_NAMESPACE::DEFAULT_SPARSEGROUP_SIZE;

typedef u_int16_t uint16;

string FLAGS_test_tmpdir = "/tmp/";

// Many sparsetable operations return a size_t.  Rather than have to
// use PRIuS everywhere, we'll just cast to a "big enough" value.
#define UL(x)  ( static_cast<unsigned long>(x) )

static char outbuf[10240];     // big enough for these tests
static char* out = outbuf;     // where to write next
#define LEFT  (outbuf + sizeof(outbuf) - out)

#define TEST(cond)  out += snprintf(out, LEFT, #cond "? %s\n", \
                                    (cond) ? "yes" : "no");

inline string AsString(int n) {
  const int N = 64;
  char buf[N];
  snprintf(buf, N, "%d", n);
  return string(buf);
}

// Test sparsetable with a POD type, int.
void TestInt() {
  out += snprintf(out, LEFT, "int test\n");
  sparsetable<int> x(7), y(70), z;
  x.set(4, 10);
  y.set(12, -12);
  y.set(47, -47);
  y.set(48, -48);
  y.set(49, -49);

  const sparsetable<int> constx(x);
  const sparsetable<int> consty(y);

  // ----------------------------------------------------------------------
  // Test the plain iterators
  for ( sparsetable<int>::iterator it = x.begin(); it != x.end(); ++it ) {
    out += snprintf(out, LEFT, "x[%lu]: %d\n", UL(it - x.begin()), int(*it));
  }
  for ( sparsetable<int>::const_iterator it = x.begin();
        it != x.end(); ++it ) {
    out += snprintf(out, LEFT, "x[%lu]: %d\n", UL(it - x.begin()), *it);
  }
  for ( sparsetable<int>::reverse_iterator it = x.rbegin();
        it != x.rend(); ++it ) {
    out += snprintf(out, LEFT, "x[%lu]: %d\n", UL(x.rend()-1 - it), int(*it));
  }
  for ( sparsetable<int>::const_reverse_iterator it = constx.rbegin();
        it != constx.rend(); ++it ) {
    out += snprintf(out, LEFT, "x[%lu]: %d\n", UL(constx.rend()-1 - it), *it);
  }
  for ( sparsetable<int>::iterator it = z.begin(); it != z.end(); ++it ) {
    out += snprintf(out, LEFT, "z[%lu]: %d\n", UL(it - z.begin()), int(*it));
  }
  { // array version
    out += snprintf(out, LEFT, "x[3]: %d\n",
int(x[3])); out += snprintf(out, LEFT, "x[4]: %d\n", int(x[4])); out += snprintf(out, LEFT, "x[5]: %d\n", int(x[5])); } { sparsetable::iterator it; // non-const version out += snprintf(out, LEFT, "x[4]: %d\n", int(x.begin()[4])); it = x.begin() + 4; // should point to the non-zero value out += snprintf(out, LEFT, "x[4]: %d\n", int(*it)); it--; --it; it += 5; it -= 2; it++; ++it; it = it - 3; it = 1 + it; // now at 5 out += snprintf(out, LEFT, "x[3]: %d\n", int(it[-2])); out += snprintf(out, LEFT, "x[4]: %d\n", int(it[-1])); *it = 55; out += snprintf(out, LEFT, "x[5]: %d\n", int(it[0])); out += snprintf(out, LEFT, "x[5]: %d\n", int(*it)); int *x6 = &(it[1]); *x6 = 66; out += snprintf(out, LEFT, "x[6]: %d\n", int(*(it + 1))); // Let's test comparitors as well TEST(it == it); TEST(!(it != it)); TEST(!(it < it)); TEST(!(it > it)); TEST(it <= it); TEST(it >= it); sparsetable::iterator it_minus_1 = it - 1; TEST(!(it == it_minus_1)); TEST(it != it_minus_1); TEST(!(it < it_minus_1)); TEST(it > it_minus_1); TEST(!(it <= it_minus_1)); TEST(it >= it_minus_1); TEST(!(it_minus_1 == it)); TEST(it_minus_1 != it); TEST(it_minus_1 < it); TEST(!(it_minus_1 > it)); TEST(it_minus_1 <= it); TEST(!(it_minus_1 >= it)); sparsetable::iterator it_plus_1 = it + 1; TEST(!(it == it_plus_1)); TEST(it != it_plus_1); TEST(it < it_plus_1); TEST(!(it > it_plus_1)); TEST(it <= it_plus_1); TEST(!(it >= it_plus_1)); TEST(!(it_plus_1 == it)); TEST(it_plus_1 != it); TEST(!(it_plus_1 < it)); TEST(it_plus_1 > it); TEST(!(it_plus_1 <= it)); TEST(it_plus_1 >= it); } { sparsetable::const_iterator it; // const version out += snprintf(out, LEFT, "x[4]: %d\n", int(x.begin()[4])); it = x.begin() + 4; // should point to the non-zero value out += snprintf(out, LEFT, "x[4]: %d\n", *it); it--; --it; it += 5; it -= 2; it++; ++it; it = it - 3; it = 1 + it; // now at 5 out += snprintf(out, LEFT, "x[3]: %d\n", it[-2]); out += snprintf(out, LEFT, "x[4]: %d\n", it[-1]); out += snprintf(out, LEFT, "x[5]: %d\n", *it); out 
+= snprintf(out, LEFT, "x[6]: %d\n", *(it + 1)); // Let's test comparitors as well TEST(it == it); TEST(!(it != it)); TEST(!(it < it)); TEST(!(it > it)); TEST(it <= it); TEST(it >= it); sparsetable::const_iterator it_minus_1 = it - 1; TEST(!(it == it_minus_1)); TEST(it != it_minus_1); TEST(!(it < it_minus_1)); TEST(it > it_minus_1); TEST(!(it <= it_minus_1)); TEST(it >= it_minus_1); TEST(!(it_minus_1 == it)); TEST(it_minus_1 != it); TEST(it_minus_1 < it); TEST(!(it_minus_1 > it)); TEST(it_minus_1 <= it); TEST(!(it_minus_1 >= it)); sparsetable::const_iterator it_plus_1 = it + 1; TEST(!(it == it_plus_1)); TEST(it != it_plus_1); TEST(it < it_plus_1); TEST(!(it > it_plus_1)); TEST(it <= it_plus_1); TEST(!(it >= it_plus_1)); TEST(!(it_plus_1 == it)); TEST(it_plus_1 != it); TEST(!(it_plus_1 < it)); TEST(it_plus_1 > it); TEST(!(it_plus_1 <= it)); TEST(it_plus_1 >= it); } TEST(x.begin() == x.begin() + 1 - 1); TEST(x.begin() < x.end()); TEST(z.begin() < z.end()); TEST(z.begin() <= z.end()); TEST(z.begin() == z.end()); // ---------------------------------------------------------------------- // Test the non-empty iterators for ( sparsetable::nonempty_iterator it = x.nonempty_begin(); it != x.nonempty_end(); ++it ) { out += snprintf(out, LEFT, "x[??]: %d\n", *it); } for ( sparsetable::const_nonempty_iterator it = y.nonempty_begin(); it != y.nonempty_end(); ++it ) { out += snprintf(out, LEFT, "y[??]: %d\n", *it); } for ( sparsetable::reverse_nonempty_iterator it = y.nonempty_rbegin(); it != y.nonempty_rend(); ++it ) { out += snprintf(out, LEFT, "y[??]: %d\n", *it); } for ( sparsetable::const_reverse_nonempty_iterator it = consty.nonempty_rbegin(); it != consty.nonempty_rend(); ++it ) { out += snprintf(out, LEFT, "y[??]: %d\n", *it); } for ( sparsetable::nonempty_iterator it = z.nonempty_begin(); it != z.nonempty_end(); ++it ) { out += snprintf(out, LEFT, "z[??]: %d\n", *it); } { sparsetable::nonempty_iterator it; // non-const version out += snprintf(out, LEFT, "first non-empty 
y: %d\n", *y.nonempty_begin()); out += snprintf(out, LEFT, "first non-empty x: %d\n", *x.nonempty_begin()); it = x.nonempty_begin(); ++it; // should be at end --it; out += snprintf(out, LEFT, "first non-empty x: %d\n", *it++); it--; out += snprintf(out, LEFT, "first non-empty x: %d\n", *it++); } { sparsetable::const_nonempty_iterator it; // non-const version out += snprintf(out, LEFT, "first non-empty y: %d\n", *y.nonempty_begin()); out += snprintf(out, LEFT, "first non-empty x: %d\n", *x.nonempty_begin()); it = x.nonempty_begin(); ++it; // should be at end --it; out += snprintf(out, LEFT, "first non-empty x: %d\n", *it++); it--; out += snprintf(out, LEFT, "first non-empty x: %d\n", *it++); } TEST(x.begin() == x.begin() + 1 - 1); TEST(z.begin() != z.end()); // ---------------------------------------------------------------------- // Test the non-empty iterators get_pos function sparsetable gp(100); for (int i = 0; i < 100; i += 9) { gp.set(i,i); } for (sparsetable::const_nonempty_iterator it = gp.nonempty_begin(); it != gp.nonempty_end(); ++it) { out += snprintf(out, LEFT, "get_pos() for const nonempty_iterator: %u == %lu\n", *it, UL(gp.get_pos(it))); } for (sparsetable::nonempty_iterator it = gp.nonempty_begin(); it != gp.nonempty_end(); ++it) { out += snprintf(out, LEFT, "get_pos() for nonempty_iterator: %u == %lu\n", *it, UL(gp.get_pos(it))); } // ---------------------------------------------------------------------- // Test sparsetable functions out += snprintf(out, LEFT, "x has %lu/%lu buckets, " "y %lu/%lu, z %lu/%lu\n", UL(x.num_nonempty()), UL(x.size()), UL(y.num_nonempty()), UL(y.size()), UL(z.num_nonempty()), UL(z.size())); y.resize(48); // should get rid of 48 and 49 y.resize(70); // 48 and 49 should still be gone out += snprintf(out, LEFT, "y shrank and grew: it's now %lu/%lu\n", UL(y.num_nonempty()), UL(y.size())); out += snprintf(out, LEFT, "y[12] = %d, y.get(12) = %d\n", int(y[12]), y.get(12)); y.erase(12); out += snprintf(out, LEFT, "y[12] cleared. 
y now %lu/%lu. " "y[12] = %d, y.get(12) = %d\n", UL(y.num_nonempty()), UL(y.size()), int(y[12]), y.get(12)); swap(x, y); y.clear(); TEST(y == z); y.resize(70); for ( int i = 10; i < 40; ++i ) y[i] = -i; y.erase(y.begin() + 15, y.begin() + 30); y.erase(y.begin() + 34); y.erase(12); y.resize(38); y.resize(10000); y[9898] = -9898; for ( sparsetable::const_iterator it = y.begin(); it != y.end(); ++it ) { if ( y.test(it) ) out += snprintf(out, LEFT, "y[%lu] is set\n", UL(it - y.begin())); } out += snprintf(out, LEFT, "That's %lu set buckets\n", UL(y.num_nonempty())); out += snprintf(out, LEFT, "Starting from y[32]...\n"); for ( sparsetable::const_nonempty_iterator it = y.get_iter(32); it != y.nonempty_end(); ++it ) out += snprintf(out, LEFT, "y[??] = %d\n", *it); out += snprintf(out, LEFT, "From y[32] down...\n"); for ( sparsetable::nonempty_iterator it = y.get_iter(32); it != y.nonempty_begin(); ) out += snprintf(out, LEFT, "y[??] = %d\n", *--it); // ---------------------------------------------------------------------- // Test I/O using deprecated read/write_metadata string filestr = FLAGS_test_tmpdir + "/.sparsetable.test"; const char *file = filestr.c_str(); FILE *fp = fopen(file, "wb"); if ( fp == NULL ) { // maybe we can't write to /tmp/. 
Try the current directory file = ".sparsetable.test"; fp = fopen(file, "wb"); } if ( fp == NULL ) { out += snprintf(out, LEFT, "Can't open %s, skipping disk write...\n", file); } else { y.write_metadata(fp); // only write meta-information y.write_nopointer_data(fp); fclose(fp); } fp = fopen(file, "rb"); if ( fp == NULL ) { out += snprintf(out, LEFT, "Can't open %s, skipping disk read...\n", file); } else { sparsetable y2; y2.read_metadata(fp); y2.read_nopointer_data(fp); fclose(fp); for ( sparsetable::const_iterator it = y2.begin(); it != y2.end(); ++it ) { if ( y2.test(it) ) out += snprintf(out, LEFT, "y2[%lu] is %d\n", UL(it - y2.begin()), *it); } out += snprintf(out, LEFT, "That's %lu set buckets\n", UL(y2.num_nonempty())); } unlink(file); // ---------------------------------------------------------------------- // Also test I/O using serialize()/unserialize() fp = fopen(file, "wb"); if ( fp == NULL ) { out += snprintf(out, LEFT, "Can't open %s, skipping disk write...\n", file); } else { y.serialize(sparsetable::NopointerSerializer(), fp); fclose(fp); } fp = fopen(file, "rb"); if ( fp == NULL ) { out += snprintf(out, LEFT, "Can't open %s, skipping disk read...\n", file); } else { sparsetable y2; y2.unserialize(sparsetable::NopointerSerializer(), fp); fclose(fp); for ( sparsetable::const_iterator it = y2.begin(); it != y2.end(); ++it ) { if ( y2.test(it) ) out += snprintf(out, LEFT, "y2[%lu] is %d\n", UL(it - y2.begin()), *it); } out += snprintf(out, LEFT, "That's %lu set buckets\n", UL(y2.num_nonempty())); } unlink(file); } // Test sparsetable with a non-POD type, std::string void TestString() { out += snprintf(out, LEFT, "string test\n"); sparsetable x(7), y(70), z; x.set(4, "foo"); y.set(12, "orange"); y.set(47, "grape"); y.set(48, "pear"); y.set(49, "apple"); // ---------------------------------------------------------------------- // Test the plain iterators for ( sparsetable::iterator it = x.begin(); it != x.end(); ++it ) { out += snprintf(out, LEFT, 
"x[%lu]: %s\n", UL(it - x.begin()), static_cast(*it).c_str()); } for ( sparsetable::iterator it = z.begin(); it != z.end(); ++it ) { out += snprintf(out, LEFT, "z[%lu]: %s\n", UL(it - z.begin()), static_cast(*it).c_str()); } TEST(x.begin() == x.begin() + 1 - 1); TEST(x.begin() < x.end()); TEST(z.begin() < z.end()); TEST(z.begin() <= z.end()); TEST(z.begin() == z.end()); // ---------------------------------------------------------------------- // Test the non-empty iterators for ( sparsetable::nonempty_iterator it = x.nonempty_begin(); it != x.nonempty_end(); ++it ) { out += snprintf(out, LEFT, "x[??]: %s\n", it->c_str()); } for ( sparsetable::const_nonempty_iterator it = y.nonempty_begin(); it != y.nonempty_end(); ++it ) { out += snprintf(out, LEFT, "y[??]: %s\n", it->c_str()); } for ( sparsetable::nonempty_iterator it = z.nonempty_begin(); it != z.nonempty_end(); ++it ) { out += snprintf(out, LEFT, "z[??]: %s\n", it->c_str()); } // ---------------------------------------------------------------------- // Test sparsetable functions out += snprintf(out, LEFT, "x has %lu/%lu buckets, y %lu/%lu, z %lu/%lu\n", UL(x.num_nonempty()), UL(x.size()), UL(y.num_nonempty()), UL(y.size()), UL(z.num_nonempty()), UL(z.size())); y.resize(48); // should get rid of 48 and 49 y.resize(70); // 48 and 49 should still be gone out += snprintf(out, LEFT, "y shrank and grew: it's now %lu/%lu\n", UL(y.num_nonempty()), UL(y.size())); out += snprintf(out, LEFT, "y[12] = %s, y.get(12) = %s\n", static_cast(y[12]).c_str(), y.get(12).c_str()); y.erase(12); out += snprintf(out, LEFT, "y[12] cleared. y now %lu/%lu. 
" "y[12] = %s, y.get(12) = %s\n", UL(y.num_nonempty()), UL(y.size()), static_cast(y[12]).c_str(), static_cast(y.get(12)).c_str()); swap(x, y); y.clear(); TEST(y == z); y.resize(70); for ( int i = 10; i < 40; ++i ) y.set(i, AsString(-i)); y.erase(y.begin() + 15, y.begin() + 30); y.erase(y.begin() + 34); y.erase(12); y.resize(38); y.resize(10000); y.set(9898, AsString(-9898)); for ( sparsetable::const_iterator it = y.begin(); it != y.end(); ++it ) { if ( y.test(it) ) out += snprintf(out, LEFT, "y[%lu] is set\n", UL(it - y.begin())); } out += snprintf(out, LEFT, "That's %lu set buckets\n", UL(y.num_nonempty())); out += snprintf(out, LEFT, "Starting from y[32]...\n"); for ( sparsetable::const_nonempty_iterator it = y.get_iter(32); it != y.nonempty_end(); ++it ) out += snprintf(out, LEFT, "y[??] = %s\n", it->c_str()); out += snprintf(out, LEFT, "From y[32] down...\n"); for ( sparsetable::nonempty_iterator it = y.get_iter(32); it != y.nonempty_begin(); ) out += snprintf(out, LEFT, "y[??] = %s\n", (*--it).c_str()); } // An instrumented allocator that keeps track of all calls to // allocate/deallocate/construct/destroy. It stores the number of times // they were called and the values they were called with. Such information is // stored in the following global variables. 
static size_t sum_allocate_bytes;
static size_t sum_deallocate_bytes;

void ResetAllocatorCounters() {
  sum_allocate_bytes = 0;
  sum_deallocate_bytes = 0;
}

template <class T>
class instrumented_allocator {
 public:
  typedef T value_type;
  typedef uint16 size_type;
  typedef ptrdiff_t difference_type;

  typedef T* pointer;
  typedef const T* const_pointer;
  typedef T& reference;
  typedef const T& const_reference;

  instrumented_allocator() {}
  instrumented_allocator(const instrumented_allocator&) {}
  ~instrumented_allocator() {}

  pointer address(reference r) const { return &r; }
  const_pointer address(const_reference r) const { return &r; }

  pointer allocate(size_type n, const_pointer = 0) {
    sum_allocate_bytes += n * sizeof(value_type);
    return static_cast<pointer>(malloc(n * sizeof(value_type)));
  }
  void deallocate(pointer p, size_type n) {
    sum_deallocate_bytes += n * sizeof(value_type);
    free(p);
  }

  size_type max_size() const {
    return static_cast<size_type>(-1) / sizeof(value_type);
  }

  void construct(pointer p, const value_type& val) {
    new(p) value_type(val);
  }
  void destroy(pointer p) { p->~value_type(); }

  template <class U>
  explicit instrumented_allocator(const instrumented_allocator<U>&) {}

  template <class U>
  struct rebind {
    typedef instrumented_allocator<U> other;
  };

 private:
  void operator=(const instrumented_allocator&);
};

template <class T>
inline bool operator==(const instrumented_allocator<T>&,
                       const instrumented_allocator<T>&) {
  return true;
}

template <class T>
inline bool operator!=(const instrumented_allocator<T>&,
                       const instrumented_allocator<T>&) {
  return false;
}

// Test sparsetable with instrumented_allocator.
void TestAllocator() {
  out += snprintf(out, LEFT, "allocator test\n");
  ResetAllocatorCounters();

  // POD (int32) with instrumented_allocator.
  typedef sparsetable<int, DEFAULT_SPARSEGROUP_SIZE,
                      instrumented_allocator<int> > IntSparseTable;
  IntSparseTable* s1 = new IntSparseTable(10000);
  TEST(sum_allocate_bytes > 0);
  for (int i = 0; i < 10000; ++i) {
    s1->set(i, 0);
  }
  TEST(sum_allocate_bytes >= 10000 * sizeof(int));
  ResetAllocatorCounters();
  delete s1;
  TEST(sum_deallocate_bytes >= 10000 * sizeof(int));

  IntSparseTable* s2 = new IntSparseTable(1000);
  IntSparseTable* s3 = new IntSparseTable(1000);
  for (int i = 0; i < 1000; ++i) {
    s2->set(i, 0);
    s3->set(i, 0);
  }
  TEST(sum_allocate_bytes >= 2000 * sizeof(int));
  ResetAllocatorCounters();
  s3->clear();
  TEST(sum_deallocate_bytes >= 1000 * sizeof(int));
  ResetAllocatorCounters();
  s2->swap(*s3);  // s2 is empty after the swap
  s2->clear();
  TEST(sum_deallocate_bytes < 1000 * sizeof(int));
  for (int i = 0; i < s3->size(); ++i) {
    s3->erase(i);
  }
  TEST(sum_deallocate_bytes >= 1000 * sizeof(int));
  delete s2;
  delete s3;

  // POD (int) with default allocator.
  sparsetable<int> x, y;
  for (int s = 1000; s <= 40000; s += 1000) {
    x.resize(s);
    for (int i = 0; i < s; ++i) {
      x.set(i, i + 1);
    }
    y = x;
    for (int i = 0; i < s; ++i) {
      y.erase(i);
    }
    y.swap(x);
  }
  TEST(x.num_nonempty() == 0);
  out += snprintf(out, LEFT, "y[0]: %d\n", int(y[0]));
  out += snprintf(out, LEFT, "y[39999]: %d\n", int(y[39999]));
  y.clear();

  // POD (int) with std allocator.
  sparsetable<int, DEFAULT_SPARSEGROUP_SIZE, allocator<int> > u, v;
  for (int s = 1000; s <= 40000; s += 1000) {
    u.resize(s);
    for (int i = 0; i < s; ++i) {
      u.set(i, i + 1);
    }
    v = u;
    for (int i = 0; i < s; ++i) {
      v.erase(i);
    }
    v.swap(u);
  }
  TEST(u.num_nonempty() == 0);
  out += snprintf(out, LEFT, "v[0]: %d\n", int(v[0]));
  out += snprintf(out, LEFT, "v[39999]: %d\n", int(v[39999]));
  v.clear();

  // Non-POD (string) with default allocator.
sparsetable a, b; for (int s = 1000; s <= 40000; s += 1000) { a.resize(s); for (int i = 0; i < s; ++i) { a.set(i, "aa"); } b = a; for (int i = 0; i < s; ++i) { b.erase(i); } b.swap(a); } TEST(a.num_nonempty() == 0); out += snprintf(out, LEFT, "b[0]: %s\n", b.get(0).c_str()); out += snprintf(out, LEFT, "b[39999]: %s\n", b.get(39999).c_str()); b.clear(); } // The expected output from all of the above: TestInt(), TestString() and // TestAllocator(). static const char g_expected[] = ( "int test\n" "x[0]: 0\n" "x[1]: 0\n" "x[2]: 0\n" "x[3]: 0\n" "x[4]: 10\n" "x[5]: 0\n" "x[6]: 0\n" "x[0]: 0\n" "x[1]: 0\n" "x[2]: 0\n" "x[3]: 0\n" "x[4]: 10\n" "x[5]: 0\n" "x[6]: 0\n" "x[6]: 0\n" "x[5]: 0\n" "x[4]: 10\n" "x[3]: 0\n" "x[2]: 0\n" "x[1]: 0\n" "x[0]: 0\n" "x[6]: 0\n" "x[5]: 0\n" "x[4]: 10\n" "x[3]: 0\n" "x[2]: 0\n" "x[1]: 0\n" "x[0]: 0\n" "x[3]: 0\n" "x[4]: 10\n" "x[5]: 0\n" "x[4]: 10\n" "x[4]: 10\n" "x[3]: 0\n" "x[4]: 10\n" "x[5]: 55\n" "x[5]: 55\n" "x[6]: 66\n" "it == it? yes\n" "!(it != it)? yes\n" "!(it < it)? yes\n" "!(it > it)? yes\n" "it <= it? yes\n" "it >= it? yes\n" "!(it == it_minus_1)? yes\n" "it != it_minus_1? yes\n" "!(it < it_minus_1)? yes\n" "it > it_minus_1? yes\n" "!(it <= it_minus_1)? yes\n" "it >= it_minus_1? yes\n" "!(it_minus_1 == it)? yes\n" "it_minus_1 != it? yes\n" "it_minus_1 < it? yes\n" "!(it_minus_1 > it)? yes\n" "it_minus_1 <= it? yes\n" "!(it_minus_1 >= it)? yes\n" "!(it == it_plus_1)? yes\n" "it != it_plus_1? yes\n" "it < it_plus_1? yes\n" "!(it > it_plus_1)? yes\n" "it <= it_plus_1? yes\n" "!(it >= it_plus_1)? yes\n" "!(it_plus_1 == it)? yes\n" "it_plus_1 != it? yes\n" "!(it_plus_1 < it)? yes\n" "it_plus_1 > it? yes\n" "!(it_plus_1 <= it)? yes\n" "it_plus_1 >= it? yes\n" "x[4]: 10\n" "x[4]: 10\n" "x[3]: 0\n" "x[4]: 10\n" "x[5]: 55\n" "x[6]: 66\n" "it == it? yes\n" "!(it != it)? yes\n" "!(it < it)? yes\n" "!(it > it)? yes\n" "it <= it? yes\n" "it >= it? yes\n" "!(it == it_minus_1)? yes\n" "it != it_minus_1? yes\n" "!(it < it_minus_1)? 
yes\n" "it > it_minus_1? yes\n" "!(it <= it_minus_1)? yes\n" "it >= it_minus_1? yes\n" "!(it_minus_1 == it)? yes\n" "it_minus_1 != it? yes\n" "it_minus_1 < it? yes\n" "!(it_minus_1 > it)? yes\n" "it_minus_1 <= it? yes\n" "!(it_minus_1 >= it)? yes\n" "!(it == it_plus_1)? yes\n" "it != it_plus_1? yes\n" "it < it_plus_1? yes\n" "!(it > it_plus_1)? yes\n" "it <= it_plus_1? yes\n" "!(it >= it_plus_1)? yes\n" "!(it_plus_1 == it)? yes\n" "it_plus_1 != it? yes\n" "!(it_plus_1 < it)? yes\n" "it_plus_1 > it? yes\n" "!(it_plus_1 <= it)? yes\n" "it_plus_1 >= it? yes\n" "x.begin() == x.begin() + 1 - 1? yes\n" "x.begin() < x.end()? yes\n" "z.begin() < z.end()? no\n" "z.begin() <= z.end()? yes\n" "z.begin() == z.end()? yes\n" "x[??]: 10\n" "x[??]: 55\n" "x[??]: 66\n" "y[??]: -12\n" "y[??]: -47\n" "y[??]: -48\n" "y[??]: -49\n" "y[??]: -49\n" "y[??]: -48\n" "y[??]: -47\n" "y[??]: -12\n" "y[??]: -49\n" "y[??]: -48\n" "y[??]: -47\n" "y[??]: -12\n" "first non-empty y: -12\n" "first non-empty x: 10\n" "first non-empty x: 10\n" "first non-empty x: 10\n" "first non-empty y: -12\n" "first non-empty x: 10\n" "first non-empty x: 10\n" "first non-empty x: 10\n" "x.begin() == x.begin() + 1 - 1? yes\n" "z.begin() != z.end()? 
no\n" "get_pos() for const nonempty_iterator: 0 == 0\n" "get_pos() for const nonempty_iterator: 9 == 9\n" "get_pos() for const nonempty_iterator: 18 == 18\n" "get_pos() for const nonempty_iterator: 27 == 27\n" "get_pos() for const nonempty_iterator: 36 == 36\n" "get_pos() for const nonempty_iterator: 45 == 45\n" "get_pos() for const nonempty_iterator: 54 == 54\n" "get_pos() for const nonempty_iterator: 63 == 63\n" "get_pos() for const nonempty_iterator: 72 == 72\n" "get_pos() for const nonempty_iterator: 81 == 81\n" "get_pos() for const nonempty_iterator: 90 == 90\n" "get_pos() for const nonempty_iterator: 99 == 99\n" "get_pos() for nonempty_iterator: 0 == 0\n" "get_pos() for nonempty_iterator: 9 == 9\n" "get_pos() for nonempty_iterator: 18 == 18\n" "get_pos() for nonempty_iterator: 27 == 27\n" "get_pos() for nonempty_iterator: 36 == 36\n" "get_pos() for nonempty_iterator: 45 == 45\n" "get_pos() for nonempty_iterator: 54 == 54\n" "get_pos() for nonempty_iterator: 63 == 63\n" "get_pos() for nonempty_iterator: 72 == 72\n" "get_pos() for nonempty_iterator: 81 == 81\n" "get_pos() for nonempty_iterator: 90 == 90\n" "get_pos() for nonempty_iterator: 99 == 99\n" "x has 3/7 buckets, y 4/70, z 0/0\n" "y shrank and grew: it's now 2/70\n" "y[12] = -12, y.get(12) = -12\n" "y[12] cleared. y now 1/70. y[12] = 0, y.get(12) = 0\n" "y == z? no\n" "y[10] is set\n" "y[11] is set\n" "y[13] is set\n" "y[14] is set\n" "y[30] is set\n" "y[31] is set\n" "y[32] is set\n" "y[33] is set\n" "y[35] is set\n" "y[36] is set\n" "y[37] is set\n" "y[9898] is set\n" "That's 12 set buckets\n" "Starting from y[32]...\n" "y[??] = -32\n" "y[??] = -33\n" "y[??] = -35\n" "y[??] = -36\n" "y[??] = -37\n" "y[??] = -9898\n" "From y[32] down...\n" "y[??] = -31\n" "y[??] = -30\n" "y[??] = -14\n" "y[??] = -13\n" "y[??] = -11\n" "y[??] 
= -10\n" "y2[10] is -10\n" "y2[11] is -11\n" "y2[13] is -13\n" "y2[14] is -14\n" "y2[30] is -30\n" "y2[31] is -31\n" "y2[32] is -32\n" "y2[33] is -33\n" "y2[35] is -35\n" "y2[36] is -36\n" "y2[37] is -37\n" "y2[9898] is -9898\n" "That's 12 set buckets\n" "y2[10] is -10\n" "y2[11] is -11\n" "y2[13] is -13\n" "y2[14] is -14\n" "y2[30] is -30\n" "y2[31] is -31\n" "y2[32] is -32\n" "y2[33] is -33\n" "y2[35] is -35\n" "y2[36] is -36\n" "y2[37] is -37\n" "y2[9898] is -9898\n" "That's 12 set buckets\n" "string test\n" "x[0]: \n" "x[1]: \n" "x[2]: \n" "x[3]: \n" "x[4]: foo\n" "x[5]: \n" "x[6]: \n" "x.begin() == x.begin() + 1 - 1? yes\n" "x.begin() < x.end()? yes\n" "z.begin() < z.end()? no\n" "z.begin() <= z.end()? yes\n" "z.begin() == z.end()? yes\n" "x[??]: foo\n" "y[??]: orange\n" "y[??]: grape\n" "y[??]: pear\n" "y[??]: apple\n" "x has 1/7 buckets, y 4/70, z 0/0\n" "y shrank and grew: it's now 2/70\n" "y[12] = orange, y.get(12) = orange\n" "y[12] cleared. y now 1/70. y[12] = , y.get(12) = \n" "y == z? no\n" "y[10] is set\n" "y[11] is set\n" "y[13] is set\n" "y[14] is set\n" "y[30] is set\n" "y[31] is set\n" "y[32] is set\n" "y[33] is set\n" "y[35] is set\n" "y[36] is set\n" "y[37] is set\n" "y[9898] is set\n" "That's 12 set buckets\n" "Starting from y[32]...\n" "y[??] = -32\n" "y[??] = -33\n" "y[??] = -35\n" "y[??] = -36\n" "y[??] = -37\n" "y[??] = -9898\n" "From y[32] down...\n" "y[??] = -31\n" "y[??] = -30\n" "y[??] = -14\n" "y[??] = -13\n" "y[??] = -11\n" "y[??] = -10\n" "allocator test\n" "sum_allocate_bytes > 0? yes\n" "sum_allocate_bytes >= 10000 * sizeof(int)? yes\n" "sum_deallocate_bytes >= 10000 * sizeof(int)? yes\n" "sum_allocate_bytes >= 2000 * sizeof(int)? yes\n" "sum_deallocate_bytes >= 1000 * sizeof(int)? yes\n" "sum_deallocate_bytes < 1000 * sizeof(int)? yes\n" "sum_deallocate_bytes >= 1000 * sizeof(int)? yes\n" "x.num_nonempty() == 0? yes\n" "y[0]: 1\n" "y[39999]: 40000\n" "u.num_nonempty() == 0? 
yes\n" "v[0]: 1\n" "v[39999]: 40000\n" "a.num_nonempty() == 0? yes\n" "b[0]: aa\n" "b[39999]: aa\n" ); // defined at bottom of file for ease of maintenance int main(int argc, char **argv) { // though we ignore the args (void)argc; (void)argv; TestInt(); TestString(); TestAllocator(); // Finally, check to see if our output (in out) is what it's supposed to be. const size_t r = sizeof(g_expected) - 1; if ( r != static_cast<size_t>(out - outbuf) || // output not the same size memcmp(outbuf, g_expected, r) ) { // or bytes differed fprintf(stderr, "TESTS FAILED\n\nEXPECTED:\n\n%s\n\nACTUAL:\n\n%s\n\n", g_expected, outbuf); return 1; } else { printf("PASS.\n"); return 0; } } sparsehash-2.0.2/src/config.h.in0000664000175000017500000000666211721254574013431 00000000000000/* src/config.h.in. Generated from configure.ac by autoheader. */ /* Namespace for Google classes */ #undef GOOGLE_NAMESPACE /* the location of the header defining hash functions */ #undef HASH_FUN_H /* the location of <hash_map> or <ext/hash_map> */ #undef HASH_MAP_H /* the namespace of the hash<> function */ #undef HASH_NAMESPACE /* the location of <hash_set> or <ext/hash_set> */ #undef HASH_SET_H /* Define to 1 if you have the <google/malloc_extension.h> header file. */ #undef HAVE_GOOGLE_MALLOC_EXTENSION_H /* define if the compiler has hash_map */ #undef HAVE_HASH_MAP /* define if the compiler has hash_set */ #undef HAVE_HASH_SET /* Define to 1 if you have the <inttypes.h> header file. */ #undef HAVE_INTTYPES_H /* Define to 1 if the system has the type `long long'. */ #undef HAVE_LONG_LONG /* Define to 1 if you have the `memcpy' function. */ #undef HAVE_MEMCPY /* Define to 1 if you have the `memmove' function. */ #undef HAVE_MEMMOVE /* Define to 1 if you have the <memory.h> header file. */ #undef HAVE_MEMORY_H /* define if the compiler implements namespaces */ #undef HAVE_NAMESPACES /* Define if you have POSIX threads libraries and header files. */ #undef HAVE_PTHREAD /* Define to 1 if you have the <stdint.h> header file. */ #undef HAVE_STDINT_H /* Define to 1 if you have the <stdlib.h> header file.
*/ #undef HAVE_STDLIB_H /* Define to 1 if you have the <strings.h> header file. */ #undef HAVE_STRINGS_H /* Define to 1 if you have the <string.h> header file. */ #undef HAVE_STRING_H /* Define to 1 if you have the <sys/resource.h> header file. */ #undef HAVE_SYS_RESOURCE_H /* Define to 1 if you have the <sys/stat.h> header file. */ #undef HAVE_SYS_STAT_H /* Define to 1 if you have the <sys/time.h> header file. */ #undef HAVE_SYS_TIME_H /* Define to 1 if you have the <sys/types.h> header file. */ #undef HAVE_SYS_TYPES_H /* Define to 1 if you have the <sys/utsname.h> header file. */ #undef HAVE_SYS_UTSNAME_H /* Define to 1 if the system has the type `uint16_t'. */ #undef HAVE_UINT16_T /* Define to 1 if you have the <unistd.h> header file. */ #undef HAVE_UNISTD_H /* define if the compiler supports unordered_{map,set} */ #undef HAVE_UNORDERED_MAP /* Define to 1 if the system has the type `u_int16_t'. */ #undef HAVE_U_INT16_T /* Define to 1 if the system has the type `__uint16'. */ #undef HAVE___UINT16 /* Name of package */ #undef PACKAGE /* Define to the address where bug reports for this package should be sent. */ #undef PACKAGE_BUGREPORT /* Define to the full name of this package. */ #undef PACKAGE_NAME /* Define to the full name and version of this package. */ #undef PACKAGE_STRING /* Define to the one symbol short name of this package. */ #undef PACKAGE_TARNAME /* Define to the home page for this package. */ #undef PACKAGE_URL /* Define to the version of this package. */ #undef PACKAGE_VERSION /* Define to necessary symbol if this constant uses a non-standard name on your system. */ #undef PTHREAD_CREATE_JOINABLE /* The system-provided hash function including the namespace. */ #undef SPARSEHASH_HASH /* The system-provided hash function, in namespace HASH_NAMESPACE. */ #undef SPARSEHASH_HASH_NO_NAMESPACE /* Define to 1 if you have the ANSI C header files.
*/ #undef STDC_HEADERS /* Version number of package */ #undef VERSION /* Stops putting the code inside the Google namespace */ #undef _END_GOOGLE_NAMESPACE_ /* Puts following code inside the Google namespace */ #undef _START_GOOGLE_NAMESPACE_ sparsehash-2.0.2/src/simple_test.cc0000664000175000017500000001101611721252346014227 00000000000000// Copyright (c) 2007, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // --- // // This tests mostly that we can #include the files correctly // and have them work. 
This unittest purposefully does not // #include <config.h>; it's meant to emulate what a 'regular // install' of sparsehash would be able to see. #include #include #include #include #include #include #include #include #include #define CHECK_IFF(cond, when) do { \ if (when) { \ if (!(cond)) { \ puts("ERROR: " #cond " failed when " #when " is true\n"); \ exit(1); \ } \ } else { \ if (cond) { \ puts("ERROR: " #cond " succeeded when " #when " is false\n"); \ exit(1); \ } \ } \ } while (0) int main(int argc, char**) { // Run with an argument to get verbose output const bool verbose = argc > 1; google::sparse_hash_set<int> sset; google::sparse_hash_map<int, int> smap; google::dense_hash_set<int> dset; google::dense_hash_map<int, int> dmap; dset.set_empty_key(-1); dmap.set_empty_key(-1); for (int i = 0; i < 100; i += 10) { // go by tens sset.insert(i); smap[i] = i+1; dset.insert(i + 5); dmap[i+5] = i+6; } if (verbose) { for (google::sparse_hash_set<int>::const_iterator it = sset.begin(); it != sset.end(); ++it) printf("sset: %d\n", *it); for (google::sparse_hash_map<int, int>::const_iterator it = smap.begin(); it != smap.end(); ++it) printf("smap: %d -> %d\n", it->first, it->second); for (google::dense_hash_set<int>::const_iterator it = dset.begin(); it != dset.end(); ++it) printf("dset: %d\n", *it); for (google::dense_hash_map<int, int>::const_iterator it = dmap.begin(); it != dmap.end(); ++it) printf("dmap: %d -> %d\n", it->first, it->second); } for (int i = 0; i < 100; i++) { CHECK_IFF(sset.find(i) != sset.end(), (i % 10) == 0); CHECK_IFF(smap.find(i) != smap.end(), (i % 10) == 0); CHECK_IFF(smap.find(i) != smap.end() && smap.find(i)->second == i+1, (i % 10) == 0); CHECK_IFF(dset.find(i) != dset.end(), (i % 10) == 5); CHECK_IFF(dmap.find(i) != dmap.end(), (i % 10) == 5); CHECK_IFF(dmap.find(i) != dmap.end() && dmap.find(i)->second == i+1, (i % 10) == 5); } printf("PASS\n"); return 0; } sparsehash-2.0.2/src/hashtable_test.cc0000664000175000017500000022267411721252346014703 00000000000000// Copyright (c) 2010, Google Inc.
// All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // --- // // This tests densehashtable // This tests dense_hash_set // This tests dense_hash_map // This tests sparsehashtable // This tests sparse_hash_set // This tests sparse_hash_map // // This test replaces hashtable_unittest.cc, which was becoming // unreadable. This file is opaque but hopefully not unreadable -- at // least, not the tests! 
// // Note that since all these classes are templatized, it's important // to call every public method on the class: not just to make sure // they work, but to make sure they even compile. #include #include #include #include // for size_t #include #include #ifdef HAVE_STDINT_H # include #endif // for uintptr_t #include #include #include #include // for class typeinfo (returned by typeid) #include #include #include #include "hash_test_interface.h" #include "testutil.h" namespace testing = GOOGLE_NAMESPACE::testing; using std::cout; using std::pair; using std::set; using std::string; using std::vector; using GOOGLE_NAMESPACE::dense_hash_map; using GOOGLE_NAMESPACE::dense_hash_set; using GOOGLE_NAMESPACE::sparse_hash_map; using GOOGLE_NAMESPACE::sparse_hash_set; using GOOGLE_NAMESPACE::sparsetable; using GOOGLE_NAMESPACE::HashtableInterface_SparseHashMap; using GOOGLE_NAMESPACE::HashtableInterface_SparseHashSet; using GOOGLE_NAMESPACE::HashtableInterface_SparseHashtable; using GOOGLE_NAMESPACE::HashtableInterface_DenseHashMap; using GOOGLE_NAMESPACE::HashtableInterface_DenseHashSet; using GOOGLE_NAMESPACE::HashtableInterface_DenseHashtable; namespace sparsehash_internal = GOOGLE_NAMESPACE::sparsehash_internal; typedef unsigned char uint8; #ifdef _MSC_VER // Below, we purposefully test having a very small allocator size. // This causes some "type conversion too small" errors when using this // allocator with sparsetable buckets. We're testing to make sure we // handle that situation ok, so we don't need the compiler warnings. #pragma warning(disable:4244) #endif namespace { #ifndef _MSC_VER // windows defines its own version # ifdef __MINGW32__ // mingw has trouble writing to /tmp static string TmpFile(const char* basename) { return string("./#") + basename; } # else static string TmpFile(const char* basename) { string kTmpdir = "/tmp"; return kTmpdir + "/" + basename; } # endif #endif // Used as a value in some of the hashtable tests. 
It's just some // arbitrary user-defined type with non-trivial memory management. struct ValueType { public: ValueType() : s_(kDefault) { } ValueType(const char* init_s) : s_(kDefault) { set_s(init_s); } ~ValueType() { set_s(NULL); } ValueType(const ValueType& that) : s_(kDefault) { operator=(that); } void operator=(const ValueType& that) { set_s(that.s_); } bool operator==(const ValueType& that) const { return strcmp(this->s(), that.s()) == 0; } void set_s(const char* new_s) { if (s_ != kDefault) free(const_cast<char*>(s_)); s_ = (new_s == NULL ? kDefault : reinterpret_cast<const char*>(strdup(new_s))); } const char* s() const { return s_; } private: const char* s_; static const char* const kDefault; }; const char* const ValueType::kDefault = "hi"; // This is used by the low-level sparse/dense_hashtable classes, // which support the most general relationship between keys and // values: the key is derived from the value through some arbitrary // function. (For classes like sparse_hash_map, the 'value' is a // key/data pair, and the function to derive the key is // FirstElementOfPair.) KeyToValue is the inverse of this function, // so GetKey(KeyToValue(key)) == key. To keep the tests a bit // simpler, we've chosen to make the key and value actually be the // same type, which is why we need only one template argument for the // types, rather than two (one for the key and one for the value). template <class KeyAndValueT, class KeyToValue> struct SetKey { void operator()(KeyAndValueT* value, const KeyAndValueT& new_key) const { *value = KeyToValue()(new_key); } }; // A hash function that keeps track of how often it's called. We use // a simple djb-hash so we don't depend on how STL hashes. We use // this same method to do the key-comparison, so we can keep track // of comparison-counts too.
struct Hasher { explicit Hasher(int i=0) : id_(i), num_hashes_(0), num_compares_(0) { } int id() const { return id_; } int num_hashes() const { return num_hashes_; } int num_compares() const { return num_compares_; } size_t operator()(int a) const { num_hashes_++; return static_cast<size_t>(a); } size_t operator()(const char* a) const { num_hashes_++; size_t hash = 0; for (size_t i = 0; a[i]; i++ ) hash = 33 * hash + a[i]; return hash; } size_t operator()(const string& a) const { num_hashes_++; size_t hash = 0; for (size_t i = 0; i < a.length(); i++ ) hash = 33 * hash + a[i]; return hash; } size_t operator()(const int* a) const { num_hashes_++; return static_cast<size_t>(reinterpret_cast<uintptr_t>(a)); } bool operator()(int a, int b) const { num_compares_++; return a == b; } bool operator()(const string& a, const string& b) const { num_compares_++; return a == b; } bool operator()(const char* a, const char* b) const { num_compares_++; // The 'a == b' test is necessary, in case a and b are both NULL. return (a == b || (a && b && strcmp(a, b) == 0)); } private: mutable int id_; mutable int num_hashes_; mutable int num_compares_; }; // Allocator that allows controlling its size in various ways, to test // allocator overflow. Because we use this allocator in a vector, we // need to define != and swap for gcc.
template <class T, class SizeT = int, SizeT MAX_SIZE = static_cast<SizeT>(~0)> struct Alloc { typedef T value_type; typedef SizeT size_type; typedef ptrdiff_t difference_type; typedef T* pointer; typedef const T* const_pointer; typedef T& reference; typedef const T& const_reference; explicit Alloc(int i=0, int* count=NULL) : id_(i), count_(count) {} ~Alloc() {} pointer address(reference r) const { return &r; } const_pointer address(const_reference r) const { return &r; } pointer allocate(size_type n, const_pointer = 0) { if (count_) ++(*count_); return static_cast<pointer>(malloc(n * sizeof(value_type))); } void deallocate(pointer p, size_type) { free(p); } pointer reallocate(pointer p, size_type n) { if (count_) ++(*count_); return static_cast<pointer>(realloc(p, n * sizeof(value_type))); } size_type max_size() const { return static_cast<size_type>(MAX_SIZE); } void construct(pointer p, const value_type& val) { new(p) value_type(val); } void destroy(pointer p) { p->~value_type(); } bool is_custom_alloc() const { return true; } template <class U> Alloc(const Alloc<U, SizeT, MAX_SIZE>& that) : id_(that.id_), count_(that.count_) { } template <class U> struct rebind { typedef Alloc<U, SizeT, MAX_SIZE> other; }; bool operator==(const Alloc& that) { return this->id_ == that.id_ && this->count_ == that.count_; } bool operator!=(const Alloc& that) { return !this->operator==(that); } int id() const { return id_; } // I have to make these public so the constructor used for rebinding // can see them. Normally, I'd just make them private and say: // template friend struct Alloc; // but MSVC 7.1 barfs on that. So public it is. But no peeking! public: int id_; int* count_; }; // Below are a few fun routines that convert a value into a key, used // for dense_hashtable and sparse_hashtable. It's our responsibility // to make sure, when we insert values into these objects, that the // values match the keys we insert them under. To allow us to use // these routines for SetKey as well, we require all these functions // be their own inverse: f(f(x)) == x.
template <class Value> struct Negation { typedef Value result_type; Value operator()(Value& v) { return -v; } const Value operator()(const Value& v) const { return -v; } }; struct Capital { typedef string result_type; string operator()(string& s) { return string(1, s[0] ^ 32) + s.substr(1); } const string operator()(const string& s) const { return string(1, s[0] ^ 32) + s.substr(1); } }; struct Identity { // lame, I know, but an important case to test. typedef const char* result_type; const char* operator()(const char* s) const { return s; } }; // This is just to avoid memory leaks -- it's a global pointer to // all the memory allocated by UniqueObjectHelper. We'll use it // to semi-test sparsetable as well. :-) sparsetable<char*> g_unique_charstar_objects(16); // This is an object-generator: pass in an index, and it will return a // unique object of type ItemType. We provide specializations for the // types we actually support. template <typename ItemType> ItemType UniqueObjectHelper(int index); template<> int UniqueObjectHelper<int>(int index) { return index; } template<> string UniqueObjectHelper<string>(int index) { char buffer[64]; snprintf(buffer, sizeof(buffer), "%d", index); return buffer; } template<> char* UniqueObjectHelper<char*>(int index) { // First grow the table if need be. sparsetable<char*>::size_type table_size = g_unique_charstar_objects.size(); while (index >= static_cast<int>(table_size)) { assert(table_size * 2 > table_size); // avoid overflow problems table_size *= 2; } if (table_size > g_unique_charstar_objects.size()) g_unique_charstar_objects.resize(table_size); if (!g_unique_charstar_objects.test(index)) { char buffer[64]; snprintf(buffer, sizeof(buffer), "%d", index); g_unique_charstar_objects[index] = strdup(buffer); } return g_unique_charstar_objects.get(index); } template<> const char* UniqueObjectHelper<const char*>(int index) { return UniqueObjectHelper<char*>(index); } template<> ValueType UniqueObjectHelper<ValueType>(int index) { return ValueType(UniqueObjectHelper<string>(index).c_str()); } template<> pair<const int, int> UniqueObjectHelper<pair<const int, int> >(int index) { return pair<const int, int>(index, index + 1); } template<> pair<const string, string> UniqueObjectHelper<pair<const string, string> >(int index) { return pair<const string, string>( UniqueObjectHelper<string>(index), UniqueObjectHelper<string>(index + 1)); } template<> pair<const char* const, ValueType> UniqueObjectHelper<pair<const char* const, ValueType> >(int index) { return pair<const char* const, ValueType>( UniqueObjectHelper<char*>(index), UniqueObjectHelper<ValueType>(index+1)); } class ValueSerializer { public: bool operator()(FILE* fp, const int& value) { return fwrite(&value, sizeof(value), 1, fp) == 1; } bool operator()(FILE* fp, int* value) { return fread(value, sizeof(*value), 1, fp) == 1; } bool operator()(FILE* fp, const string& value) { const int size = value.size(); return (*this)(fp, size) && fwrite(value.c_str(), size, 1, fp) == 1; } bool operator()(FILE* fp, string* value) { int size; if (!(*this)(fp, &size)) return false; char* buf = new char[size]; if (fread(buf, size, 1, fp) != 1) { delete[] buf; return false; } new(value) string(buf, size); delete[] buf; return true; } template <typename OUTPUT> bool operator()(OUTPUT* fp, const ValueType& v) { return (*this)(fp, string(v.s())); } template <typename INPUT> bool operator()(INPUT* fp, ValueType* v) { string data; if (!(*this)(fp, &data)) return false; new(v) ValueType(data.c_str()); return true; } template <typename OUTPUT> bool operator()(OUTPUT* fp, const char* const& value) { // Just store the index.
return (*this)(fp, atoi(value)); } template <typename INPUT> bool operator()(INPUT* fp, const char** value) { // Look up via index. int index; if (!(*this)(fp, &index)) return false; *value = UniqueObjectHelper<char*>(index); return true; } template <typename OUTPUT, typename First, typename Second> bool operator()(OUTPUT* fp, std::pair<const First, Second>* value) { return (*this)(fp, const_cast<First*>(&value->first)) && (*this)(fp, &value->second); } template <typename INPUT, typename First, typename Second> bool operator()(INPUT* fp, const std::pair<const First, Second>& value) { return (*this)(fp, value.first) && (*this)(fp, value.second); } }; template <typename HashtableType> class HashtableTest : public ::testing::Test { public: HashtableTest() : ht_() { } // Give syntactically-prettier access to UniqueObjectHelper. typename HashtableType::value_type UniqueObject(int index) { return UniqueObjectHelper<typename HashtableType::value_type>(index); } typename HashtableType::key_type UniqueKey(int index) { return this->ht_.get_key(this->UniqueObject(index)); } protected: HashtableType ht_; }; } // These are used to specify the empty key and deleted key in some // contexts. They can't be in the unnamed namespace, or static, // because the template code requires external linkage.
extern const string kEmptyString("--empty string--"); extern const string kDeletedString("--deleted string--"); extern const int kEmptyInt = 0; extern const int kDeletedInt = -1234676543; // an unlikely-to-pick int extern const char* const kEmptyCharStar = "--empty char*--"; extern const char* const kDeletedCharStar = "--deleted char*--"; namespace { #define INT_HASHTABLES \ HashtableInterface_SparseHashMap >, \ HashtableInterface_SparseHashSet >, \ /* This is a table where the key associated with a value is -value */ \ HashtableInterface_SparseHashtable, \ SetKey >, \ Hasher, Alloc >, \ HashtableInterface_DenseHashMap >, \ HashtableInterface_DenseHashSet >, \ HashtableInterface_DenseHashtable, \ SetKey >, \ Hasher, Alloc > #define STRING_HASHTABLES \ HashtableInterface_SparseHashMap >, \ HashtableInterface_SparseHashSet >, \ /* This is a table where the key associated with a value is Cap(value) */ \ HashtableInterface_SparseHashtable, \ Hasher, Alloc >, \ HashtableInterface_DenseHashMap >, \ HashtableInterface_DenseHashSet >, \ HashtableInterface_DenseHashtable, \ Hasher, Alloc > // I'd like to use ValueType keys for SparseHashtable<> and // DenseHashtable<> but I can't due to memory-management woes (nobody // really owns the char* involved). So instead I do something simpler. #define CHARSTAR_HASHTABLES \ HashtableInterface_SparseHashMap >, \ HashtableInterface_SparseHashSet >, \ /* This is a table where each value is its own key. */ \ HashtableInterface_SparseHashtable, \ Hasher, Alloc >, \ HashtableInterface_DenseHashMap >, \ HashtableInterface_DenseHashSet >, \ HashtableInterface_DenseHashtable, \ Hasher, Alloc > // This is the list of types we run each test against. // We need to define the same class 4 times due to limitations in the // testing framework. Basically, we associate each class below with // the set of types we want to run tests on it with. 
template <typename HashtableType> class HashtableIntTest : public HashtableTest<HashtableType> { }; template <typename HashtableType> class HashtableStringTest : public HashtableTest<HashtableType> { }; template <typename HashtableType> class HashtableCharStarTest : public HashtableTest<HashtableType> { }; template <typename HashtableType> class HashtableAllTest : public HashtableTest<HashtableType> { }; typedef testing::TypeList6<INT_HASHTABLES> IntHashtables; typedef testing::TypeList6<STRING_HASHTABLES> StringHashtables; typedef testing::TypeList6<CHARSTAR_HASHTABLES> CharStarHashtables; typedef testing::TypeList18<INT_HASHTABLES, STRING_HASHTABLES, CHARSTAR_HASHTABLES> AllHashtables; TYPED_TEST_CASE_6(HashtableIntTest, IntHashtables); TYPED_TEST_CASE_6(HashtableStringTest, StringHashtables); TYPED_TEST_CASE_6(HashtableCharStarTest, CharStarHashtables); TYPED_TEST_CASE_18(HashtableAllTest, AllHashtables); // ------------------------------------------------------------------------ // First, some testing of the underlying infrastructure. TEST(HashtableCommonTest, HashMunging) { const Hasher hasher; // We don't munge the hash value on non-pointer template types. { const sparsehash_internal::sh_hashtable_settings settings(hasher, 0.0, 0.0); const int v = 1000; EXPECT_EQ(hasher(v), settings.hash(v)); } { // We do munge the hash value on pointer template types. const sparsehash_internal::sh_hashtable_settings settings(hasher, 0.0, 0.0); int* v = NULL; v += 0x10000; // get a non-trivial pointer value EXPECT_NE(hasher(v), settings.hash(v)); } { const sparsehash_internal::sh_hashtable_settings settings(hasher, 0.0, 0.0); const int* v = NULL; v += 0x10000; // get a non-trivial pointer value EXPECT_NE(hasher(v), settings.hash(v)); } } // ------------------------------------------------------------------------ // If the first arg to TYPED_TEST is HashtableIntTest, it will run // this test on all the hashtable types, with key=int and value=int. // Likewise, HashtableStringTest will have string key/values, and // HashtableCharStarTest will have char* keys and -- just to mix it up // a little -- ValueType values. HashtableAllTest will run all three // key/value types on all 6 hashtables types, for 18 test-runs total // per test.
// // In addition, TYPED_TEST makes available the magic keyword // TypeParam, which is the type being used for the current test. // This first set of tests just tests the public API, going through // the public typedefs and methods in turn. It goes approximately // in the definition-order in sparse_hash_map.h. TYPED_TEST(HashtableIntTest, Typedefs) { // Make sure all the standard STL-y typedefs are defined. The exact // key/value types don't matter here, so we only bother testing on // the int tables. This is just a compile-time "test"; nothing here // can fail at runtime. this->ht_.set_deleted_key(-2); // just so deleted_key succeeds typename TypeParam::key_type kt; typename TypeParam::value_type vt; typename TypeParam::hasher h; typename TypeParam::key_equal ke; typename TypeParam::allocator_type at; typename TypeParam::size_type st; typename TypeParam::difference_type dt; typename TypeParam::pointer p; typename TypeParam::const_pointer cp; // I can't declare variables of reference-type, since I have nothing // to point them to, so I just make sure that these types exist. typedef typename TypeParam::reference r; typedef typename TypeParam::const_reference cf; typename TypeParam::iterator i; typename TypeParam::const_iterator ci; typename TypeParam::local_iterator li; typename TypeParam::const_local_iterator cli; // Now make sure the variables are used, so the compiler doesn't // complain. Where possible, I "use" the variable by calling the // method that's supposed to return the unique instance of the // relevant type (eg. get_allocator()). Otherwise, I try to call a // different, arbitrary function that returns the type. Sometimes // the type isn't used at all, and there's no good way to use the // variable. kt = this->ht_.deleted_key(); (void)vt; // value_type may not be copyable. Easiest not to try. 
h = this->ht_.hash_funct(); ke = this->ht_.key_eq(); at = this->ht_.get_allocator(); st = this->ht_.size(); (void)dt; (void)p; (void)cp; i = this->ht_.begin(); ci = this->ht_.begin(); li = this->ht_.begin(0); cli = this->ht_.begin(0); } TYPED_TEST(HashtableAllTest, NormalIterators) { EXPECT_TRUE(this->ht_.begin() == this->ht_.end()); this->ht_.insert(this->UniqueObject(1)); { typename TypeParam::iterator it = this->ht_.begin(); EXPECT_TRUE(it != this->ht_.end()); ++it; EXPECT_TRUE(it == this->ht_.end()); } } TEST(HashtableTest, ModifyViaIterator) { // This only works for hash-maps, since only they have non-const values. { sparse_hash_map<int, int> ht; ht[1] = 2; sparse_hash_map<int, int>::iterator it = ht.find(1); EXPECT_TRUE(it != ht.end()); EXPECT_EQ(1, it->first); EXPECT_EQ(2, it->second); it->second = 5; it = ht.find(1); EXPECT_TRUE(it != ht.end()); EXPECT_EQ(5, it->second); } { dense_hash_map<int, int> ht; ht.set_empty_key(0); ht[1] = 2; dense_hash_map<int, int>::iterator it = ht.find(1); EXPECT_TRUE(it != ht.end()); EXPECT_EQ(1, it->first); EXPECT_EQ(2, it->second); it->second = 5; it = ht.find(1); EXPECT_TRUE(it != ht.end()); EXPECT_EQ(5, it->second); } } TYPED_TEST(HashtableAllTest, ConstIterators) { this->ht_.insert(this->UniqueObject(1)); typename TypeParam::const_iterator it = this->ht_.begin(); EXPECT_TRUE(it != this->ht_.end()); ++it; EXPECT_TRUE(it == this->ht_.end()); } TYPED_TEST(HashtableAllTest, LocalIterators) { // Now, tr1 begin/end (the local iterator that takes a bucket-number). // ht::bucket() returns the bucket that this key would be inserted in. this->ht_.insert(this->UniqueObject(1)); const typename TypeParam::size_type bucknum = this->ht_.bucket(this->UniqueKey(1)); typename TypeParam::local_iterator b = this->ht_.begin(bucknum); typename TypeParam::local_iterator e = this->ht_.end(bucknum); EXPECT_TRUE(b != e); b++; EXPECT_TRUE(b == e); // Check an empty bucket. We can just xor the bottom bit and be sure // of getting a legal bucket, since #buckets is always a power of 2.
EXPECT_TRUE(this->ht_.begin(bucknum ^ 1) == this->ht_.end(bucknum ^ 1)); // Another test, this time making sure we're using the right types. typename TypeParam::local_iterator b2 = this->ht_.begin(bucknum ^ 1); typename TypeParam::local_iterator e2 = this->ht_.end(bucknum ^ 1); EXPECT_TRUE(b2 == e2); } TYPED_TEST(HashtableAllTest, ConstLocalIterators) { this->ht_.insert(this->UniqueObject(1)); const typename TypeParam::size_type bucknum = this->ht_.bucket(this->UniqueKey(1)); typename TypeParam::const_local_iterator b = this->ht_.begin(bucknum); typename TypeParam::const_local_iterator e = this->ht_.end(bucknum); EXPECT_TRUE(b != e); b++; EXPECT_TRUE(b == e); typename TypeParam::const_local_iterator b2 = this->ht_.begin(bucknum ^ 1); typename TypeParam::const_local_iterator e2 = this->ht_.end(bucknum ^ 1); EXPECT_TRUE(b2 == e2); } TYPED_TEST(HashtableAllTest, Iterating) { // Test a bit more iterating than just one ++. this->ht_.insert(this->UniqueObject(1)); this->ht_.insert(this->UniqueObject(11)); this->ht_.insert(this->UniqueObject(111)); this->ht_.insert(this->UniqueObject(1111)); this->ht_.insert(this->UniqueObject(11111)); this->ht_.insert(this->UniqueObject(111111)); this->ht_.insert(this->UniqueObject(1111111)); this->ht_.insert(this->UniqueObject(11111111)); this->ht_.insert(this->UniqueObject(111111111)); typename TypeParam::iterator it = this->ht_.begin(); for (int i = 1; i <= 9; i++) { // start at 1 so i is never 0 // && here makes it easier to tell what loop iteration the test failed on. EXPECT_TRUE(i && (it++ != this->ht_.end())); } EXPECT_TRUE(it == this->ht_.end()); } TYPED_TEST(HashtableIntTest, Constructors) { // The key/value types don't matter here, so I just test on one set // of tables, the ones with int keys, which can easily handle the // placement-news we have to do below. 
  Hasher hasher(1);   // 1 is a unique id
  int alloc_count = 0;
  Alloc<typename TypeParam::key_type> alloc(2, &alloc_count);

  TypeParam ht_noarg;
  TypeParam ht_onearg(100);
  TypeParam ht_twoarg(100, hasher);
  TypeParam ht_threearg(100, hasher, hasher);  // hasher serves as key_equal too
  TypeParam ht_fourarg(100, hasher, hasher, alloc);

  // The allocator should have been called at least once, for the last ht.
  EXPECT_LE(1, alloc_count);
  int old_alloc_count = alloc_count;

  const typename TypeParam::value_type input[] = {
    this->UniqueObject(1),
    this->UniqueObject(2),
    this->UniqueObject(4),
    this->UniqueObject(8)
  };
  const int num_inputs = sizeof(input) / sizeof(input[0]);
  const typename TypeParam::value_type *begin = &input[0];
  const typename TypeParam::value_type *end = begin + num_inputs;
  TypeParam ht_iter_noarg(begin, end);
  TypeParam ht_iter_onearg(begin, end, 100);
  TypeParam ht_iter_twoarg(begin, end, 100, hasher);
  TypeParam ht_iter_threearg(begin, end, 100, hasher, hasher);
  TypeParam ht_iter_fourarg(begin, end, 100, hasher, hasher, alloc);
  // Now the allocator should have been called more.
  EXPECT_GT(alloc_count, old_alloc_count);
  old_alloc_count = alloc_count;

  // Let's do a lot more inserting and make sure the alloc-count goes up.
  for (int i = 2; i < 2000; i++)
    ht_fourarg.insert(this->UniqueObject(i));
  EXPECT_GT(alloc_count, old_alloc_count);

  EXPECT_LT(ht_noarg.bucket_count(), 100u);
  EXPECT_GE(ht_onearg.bucket_count(), 100u);
  EXPECT_GE(ht_twoarg.bucket_count(), 100u);
  EXPECT_GE(ht_threearg.bucket_count(), 100u);
  EXPECT_GE(ht_fourarg.bucket_count(), 100u);
  EXPECT_GE(ht_iter_onearg.bucket_count(), 100u);

  // When we pass in a hasher -- it can serve both as the hash-function
  // and the key-equal function -- its id should be 1.  Where we don't
  // pass it in and use the default Hasher object, the id should be 0.
  EXPECT_EQ(0, ht_noarg.hash_funct().id());
  EXPECT_EQ(0, ht_noarg.key_eq().id());
  EXPECT_EQ(0, ht_onearg.hash_funct().id());
  EXPECT_EQ(0, ht_onearg.key_eq().id());
  EXPECT_EQ(1, ht_twoarg.hash_funct().id());
  EXPECT_EQ(0, ht_twoarg.key_eq().id());
  EXPECT_EQ(1, ht_threearg.hash_funct().id());
  EXPECT_EQ(1, ht_threearg.key_eq().id());

  EXPECT_EQ(0, ht_iter_noarg.hash_funct().id());
  EXPECT_EQ(0, ht_iter_noarg.key_eq().id());
  EXPECT_EQ(0, ht_iter_onearg.hash_funct().id());
  EXPECT_EQ(0, ht_iter_onearg.key_eq().id());
  EXPECT_EQ(1, ht_iter_twoarg.hash_funct().id());
  EXPECT_EQ(0, ht_iter_twoarg.key_eq().id());
  EXPECT_EQ(1, ht_iter_threearg.hash_funct().id());
  EXPECT_EQ(1, ht_iter_threearg.key_eq().id());

  // Likewise for the allocator.
  EXPECT_EQ(0, ht_threearg.get_allocator().id());
  EXPECT_EQ(0, ht_iter_threearg.get_allocator().id());
  EXPECT_EQ(2, ht_fourarg.get_allocator().id());
  EXPECT_EQ(2, ht_iter_fourarg.get_allocator().id());
}

TYPED_TEST(HashtableAllTest, OperatorEquals) {
  {
    TypeParam ht1, ht2;
    ht1.set_deleted_key(this->UniqueKey(1));
    ht2.set_deleted_key(this->UniqueKey(2));

    ht1.insert(this->UniqueObject(10));
    ht2.insert(this->UniqueObject(20));
    EXPECT_FALSE(ht1 == ht2);
    ht1 = ht2;
    EXPECT_TRUE(ht1 == ht2);
  }
  {
    TypeParam ht1, ht2;
    ht1.insert(this->UniqueObject(30));
    ht1 = ht2;
    EXPECT_EQ(0u, ht1.size());
  }
  {
    TypeParam ht1, ht2;
    ht1.set_deleted_key(this->UniqueKey(1));
    ht2.insert(this->UniqueObject(1));   // has same key as ht1.delkey
    ht1 = ht2;                           // should reset deleted-key to 'unset'
    EXPECT_EQ(1u, ht1.size());
    EXPECT_EQ(1u, ht1.count(this->UniqueKey(1)));
  }
}

TYPED_TEST(HashtableAllTest, Clear) {
  for (int i = 1; i < 200; i++) {
    this->ht_.insert(this->UniqueObject(i));
  }
  this->ht_.clear();
  EXPECT_EQ(0u, this->ht_.size());
  // TODO(csilvers): do we want to enforce that the hashtable has or
  // has not shrunk?  It does for dense_* but not sparse_*.
}

TYPED_TEST(HashtableAllTest, ClearNoResize) {
  if (!this->ht_.supports_clear_no_resize())
    return;
  typename TypeParam::size_type empty_bucket_count = this->ht_.bucket_count();
  int last_element = 1;
  while (this->ht_.bucket_count() == empty_bucket_count) {
    this->ht_.insert(this->UniqueObject(last_element));
    ++last_element;
  }
  typename TypeParam::size_type last_bucket_count = this->ht_.bucket_count();
  this->ht_.clear_no_resize();
  EXPECT_EQ(last_bucket_count, this->ht_.bucket_count());
  EXPECT_TRUE(this->ht_.empty());

  // When inserting the same number of elements again, no resize
  // should be necessary.
  for (int i = 1; i < last_element; ++i) {
    this->ht_.insert(this->UniqueObject(last_element + i));
    EXPECT_EQ(last_bucket_count, this->ht_.bucket_count());
  }
}

TYPED_TEST(HashtableAllTest, Swap) {
  // Let's make a second hashtable with its own hasher, key_equal, etc.
  Hasher hasher(1);   // 1 is a unique id
  TypeParam other_ht(200, hasher, hasher);

  this->ht_.set_deleted_key(this->UniqueKey(1));
  other_ht.set_deleted_key(this->UniqueKey(2));

  for (int i = 3; i < 2000; i++) {
    this->ht_.insert(this->UniqueObject(i));
  }
  this->ht_.erase(this->UniqueKey(1000));
  other_ht.insert(this->UniqueObject(2001));
  typename TypeParam::size_type expected_buckets = other_ht.bucket_count();

  this->ht_.swap(other_ht);

  EXPECT_EQ(this->UniqueKey(2), this->ht_.deleted_key());
  EXPECT_EQ(this->UniqueKey(1), other_ht.deleted_key());

  EXPECT_EQ(1, this->ht_.hash_funct().id());
  EXPECT_EQ(0, other_ht.hash_funct().id());

  EXPECT_EQ(1, this->ht_.key_eq().id());
  EXPECT_EQ(0, other_ht.key_eq().id());

  EXPECT_EQ(expected_buckets, this->ht_.bucket_count());
  EXPECT_GT(other_ht.bucket_count(), 200u);

  EXPECT_EQ(1u, this->ht_.size());
  EXPECT_EQ(1996u, other_ht.size());    // because we erased 1000

  EXPECT_EQ(0u, this->ht_.count(this->UniqueKey(111)));
  EXPECT_EQ(1u, other_ht.count(this->UniqueKey(111)));
  EXPECT_EQ(1u, this->ht_.count(this->UniqueKey(2001)));
  EXPECT_EQ(0u, other_ht.count(this->UniqueKey(2001)));
  EXPECT_EQ(0u, this->ht_.count(this->UniqueKey(1000)));
  EXPECT_EQ(0u, other_ht.count(this->UniqueKey(1000)));

  // We purposefully don't swap allocs -- they're not necessarily swappable.

  // Now swap back, using the free-function swap.
  // NOTE: MSVC seems to have trouble with this free swap, not quite
  // sure why.  I've given up trying to fix it though.
#ifdef _MSC_VER
  other_ht.swap(this->ht_);
#else
  swap(this->ht_, other_ht);
#endif

  EXPECT_EQ(this->UniqueKey(1), this->ht_.deleted_key());
  EXPECT_EQ(this->UniqueKey(2), other_ht.deleted_key());
  EXPECT_EQ(0, this->ht_.hash_funct().id());
  EXPECT_EQ(1, other_ht.hash_funct().id());
  EXPECT_EQ(1996u, this->ht_.size());
  EXPECT_EQ(1u, other_ht.size());
  EXPECT_EQ(1u, this->ht_.count(this->UniqueKey(111)));
  EXPECT_EQ(0u, other_ht.count(this->UniqueKey(111)));

  // A user reported a crash with this code using swap to clear.
  // We've since fixed the bug; this prevents a regression.
  TypeParam swap_to_clear_ht;
  swap_to_clear_ht.set_deleted_key(this->UniqueKey(1));
  for (int i = 2; i < 10000; ++i) {
    swap_to_clear_ht.insert(this->UniqueObject(i));
  }
  TypeParam empty_ht;
  empty_ht.swap(swap_to_clear_ht);
  swap_to_clear_ht.set_deleted_key(this->UniqueKey(1));
  for (int i = 2; i < 10000; ++i) {
    swap_to_clear_ht.insert(this->UniqueObject(i));
  }
}

TYPED_TEST(HashtableAllTest, Size) {
  EXPECT_EQ(0u, this->ht_.size());
  for (int i = 1; i < 1000; i++) {    // go through some resizes
    this->ht_.insert(this->UniqueObject(i));
    EXPECT_EQ(static_cast<typename TypeParam::size_type>(i), this->ht_.size());
  }
  this->ht_.clear();
  EXPECT_EQ(0u, this->ht_.size());

  this->ht_.set_deleted_key(this->UniqueKey(1));
  EXPECT_EQ(0u, this->ht_.size());    // deleted key doesn't count
  for (int i = 2; i < 1000; i++) {    // go through some resizes
    this->ht_.insert(this->UniqueObject(i));
    this->ht_.erase(this->UniqueKey(i));
    EXPECT_EQ(0u, this->ht_.size());
  }
}

TEST(HashtableTest, MaxSizeAndMaxBucketCount) {
  // The max size depends on the allocator.  So we can't use the
  // built-in allocator type; instead, we make our own types.
  sparse_hash_set<int, Hasher, Hasher, Alloc<int> > ht_default;
  sparse_hash_set<int, Hasher, Hasher, Alloc<int, unsigned char> > ht_char;
  sparse_hash_set<int, Hasher, Hasher, Alloc<int, unsigned char, 104> > ht_104;

  EXPECT_GE(ht_default.max_size(), 256u);
  EXPECT_EQ(255u, ht_char.max_size());
  EXPECT_EQ(104u, ht_104.max_size());

  // In our implementations, MaxBucketCount == MaxSize.
  EXPECT_EQ(ht_default.max_size(), ht_default.max_bucket_count());
  EXPECT_EQ(ht_char.max_size(), ht_char.max_bucket_count());
  EXPECT_EQ(ht_104.max_size(), ht_104.max_bucket_count());
}

TYPED_TEST(HashtableAllTest, Empty) {
  EXPECT_TRUE(this->ht_.empty());
  this->ht_.insert(this->UniqueObject(1));
  EXPECT_FALSE(this->ht_.empty());
  this->ht_.clear();
  EXPECT_TRUE(this->ht_.empty());

  TypeParam empty_ht;
  this->ht_.insert(this->UniqueObject(1));
  this->ht_.swap(empty_ht);
  EXPECT_TRUE(this->ht_.empty());
}

TYPED_TEST(HashtableAllTest, BucketCount) {
  TypeParam ht(100);
  // constructor arg is number of *items* to be inserted, not the
  // number of buckets, so we expect more buckets.
  EXPECT_GT(ht.bucket_count(), 100u);
  for (int i = 1; i < 200; i++) {
    ht.insert(this->UniqueObject(i));
  }
  EXPECT_GT(ht.bucket_count(), 200u);
}

TYPED_TEST(HashtableAllTest, BucketAndBucketSize) {
  const typename TypeParam::size_type expected_bucknum =
      this->ht_.bucket(this->UniqueKey(1));
  EXPECT_EQ(0u, this->ht_.bucket_size(expected_bucknum));
  this->ht_.insert(this->UniqueObject(1));
  EXPECT_EQ(expected_bucknum, this->ht_.bucket(this->UniqueKey(1)));
  EXPECT_EQ(1u, this->ht_.bucket_size(expected_bucknum));

  // Check that a bucket we didn't insert into, has a 0 size.  Since
  // we have an even number of buckets, bucknum^1 is guaranteed in range.
  EXPECT_EQ(0u, this->ht_.bucket_size(expected_bucknum ^ 1));
}

TYPED_TEST(HashtableAllTest, LoadFactor) {
  const typename TypeParam::size_type kSize = 16536;
  // Check growing past various thresholds and then shrinking below
  // them.
  for (float grow_threshold = 0.2f;
       grow_threshold <= 0.8f;
       grow_threshold += 0.2f) {
    TypeParam ht;
    ht.set_deleted_key(this->UniqueKey(1));
    ht.max_load_factor(grow_threshold);
    ht.min_load_factor(0.0);
    EXPECT_EQ(grow_threshold, ht.max_load_factor());
    EXPECT_EQ(0.0, ht.min_load_factor());

    ht.resize(kSize);
    size_t bucket_count = ht.bucket_count();
    // Erase and insert an element to set consider_shrink = true,
    // which should not cause a shrink because the threshold is 0.0.
    ht.insert(this->UniqueObject(2));
    ht.erase(this->UniqueKey(2));
    for (int i = 2;; ++i) {
      ht.insert(this->UniqueObject(i));
      if (static_cast<float>(ht.size()) / bucket_count < grow_threshold) {
        EXPECT_EQ(bucket_count, ht.bucket_count());
      } else {
        EXPECT_GT(ht.bucket_count(), bucket_count);
        break;
      }
    }
    // Now set a shrink threshold 1% below the current size and remove
    // items until the size falls below that.
    const float shrink_threshold =
        static_cast<float>(ht.size()) / ht.bucket_count() - 0.01f;

    // This time around, check the old set_resizing_parameters interface.
    ht.set_resizing_parameters(shrink_threshold, 1.0);
    EXPECT_EQ(1.0, ht.max_load_factor());
    EXPECT_EQ(shrink_threshold, ht.min_load_factor());

    bucket_count = ht.bucket_count();
    for (int i = 2;; ++i) {
      ht.erase(this->UniqueKey(i));
      // A resize is only triggered by an insert, so add and remove a
      // value every iteration to trigger the shrink as soon as the
      // threshold is passed.
      ht.erase(this->UniqueKey(i+1));
      ht.insert(this->UniqueObject(i+1));
      if (static_cast<float>(ht.size()) / bucket_count > shrink_threshold) {
        EXPECT_EQ(bucket_count, ht.bucket_count());
      } else {
        EXPECT_LT(ht.bucket_count(), bucket_count);
        break;
      }
    }
  }
}

TYPED_TEST(HashtableAllTest, ResizeAndRehash) {
  // resize() and rehash() are synonyms.  rehash() is the tr1 name.
  TypeParam ht(10000);
  ht.max_load_factor(0.8f);    // for consistency's sake
  for (int i = 1; i < 100; ++i)
    ht.insert(this->UniqueObject(i));
  ht.resize(0);
  // Now ht should be as small as possible.
  EXPECT_LT(ht.bucket_count(), 300u);
  ht.rehash(9000);    // use the 'rehash' version of the name
  // Bucket count should be next power of 2, after considering max_load_factor.
  EXPECT_EQ(16384u, ht.bucket_count());
  for (int i = 101; i < 200; ++i)
    ht.insert(this->UniqueObject(i));
  // Adding a few hundred buckets shouldn't have caused a resize yet.
  EXPECT_EQ(ht.bucket_count(), 16384u);
}

TYPED_TEST(HashtableAllTest, FindAndCountAndEqualRange) {
  pair<typename TypeParam::iterator, typename TypeParam::iterator> eq_pair;
  pair<typename TypeParam::const_iterator,
       typename TypeParam::const_iterator> const_eq_pair;

  EXPECT_TRUE(this->ht_.empty());
  EXPECT_TRUE(this->ht_.find(this->UniqueKey(1)) == this->ht_.end());
  EXPECT_EQ(0u, this->ht_.count(this->UniqueKey(1)));
  eq_pair = this->ht_.equal_range(this->UniqueKey(1));
  EXPECT_TRUE(eq_pair.first == eq_pair.second);

  this->ht_.insert(this->UniqueObject(1));
  EXPECT_FALSE(this->ht_.empty());
  this->ht_.insert(this->UniqueObject(11));
  this->ht_.insert(this->UniqueObject(111));
  this->ht_.insert(this->UniqueObject(1111));
  this->ht_.insert(this->UniqueObject(11111));
  this->ht_.insert(this->UniqueObject(111111));
  this->ht_.insert(this->UniqueObject(1111111));
  this->ht_.insert(this->UniqueObject(11111111));
  this->ht_.insert(this->UniqueObject(111111111));
  EXPECT_EQ(9u, this->ht_.size());
  typename TypeParam::const_iterator it = this->ht_.find(this->UniqueKey(1));
  EXPECT_EQ(it.key(), this->UniqueKey(1));

  // Allow testing the const version of the methods as well.
  const TypeParam ht = this->ht_;

  // Some successful lookups (via find, count, and equal_range).
  EXPECT_TRUE(this->ht_.find(this->UniqueKey(1)) != this->ht_.end());
  EXPECT_EQ(1u, this->ht_.count(this->UniqueKey(1)));
  eq_pair = this->ht_.equal_range(this->UniqueKey(1));
  EXPECT_TRUE(eq_pair.first != eq_pair.second);
  EXPECT_EQ(eq_pair.first.key(), this->UniqueKey(1));
  ++eq_pair.first;
  EXPECT_TRUE(eq_pair.first == eq_pair.second);

  EXPECT_TRUE(ht.find(this->UniqueKey(1)) != ht.end());
  EXPECT_EQ(1u, ht.count(this->UniqueKey(1)));
  const_eq_pair = ht.equal_range(this->UniqueKey(1));
  EXPECT_TRUE(const_eq_pair.first != const_eq_pair.second);
  EXPECT_EQ(const_eq_pair.first.key(), this->UniqueKey(1));
  ++const_eq_pair.first;
  EXPECT_TRUE(const_eq_pair.first == const_eq_pair.second);

  EXPECT_TRUE(this->ht_.find(this->UniqueKey(11111)) != this->ht_.end());
  EXPECT_EQ(1u, this->ht_.count(this->UniqueKey(11111)));
  eq_pair = this->ht_.equal_range(this->UniqueKey(11111));
  EXPECT_TRUE(eq_pair.first != eq_pair.second);
  EXPECT_EQ(eq_pair.first.key(), this->UniqueKey(11111));
  ++eq_pair.first;
  EXPECT_TRUE(eq_pair.first == eq_pair.second);

  EXPECT_TRUE(ht.find(this->UniqueKey(11111)) != ht.end());
  EXPECT_EQ(1u, ht.count(this->UniqueKey(11111)));
  const_eq_pair = ht.equal_range(this->UniqueKey(11111));
  EXPECT_TRUE(const_eq_pair.first != const_eq_pair.second);
  EXPECT_EQ(const_eq_pair.first.key(), this->UniqueKey(11111));
  ++const_eq_pair.first;
  EXPECT_TRUE(const_eq_pair.first == const_eq_pair.second);

  // Some unsuccessful lookups (via find, count, and equal_range).
  EXPECT_TRUE(this->ht_.find(this->UniqueKey(11112)) == this->ht_.end());
  EXPECT_EQ(0u, this->ht_.count(this->UniqueKey(11112)));
  eq_pair = this->ht_.equal_range(this->UniqueKey(11112));
  EXPECT_TRUE(eq_pair.first == eq_pair.second);

  EXPECT_TRUE(ht.find(this->UniqueKey(11112)) == ht.end());
  EXPECT_EQ(0u, ht.count(this->UniqueKey(11112)));
  const_eq_pair = ht.equal_range(this->UniqueKey(11112));
  EXPECT_TRUE(const_eq_pair.first == const_eq_pair.second);

  EXPECT_TRUE(this->ht_.find(this->UniqueKey(11110)) == this->ht_.end());
  EXPECT_EQ(0u, this->ht_.count(this->UniqueKey(11110)));
  eq_pair = this->ht_.equal_range(this->UniqueKey(11110));
  EXPECT_TRUE(eq_pair.first == eq_pair.second);

  EXPECT_TRUE(ht.find(this->UniqueKey(11110)) == ht.end());
  EXPECT_EQ(0u, ht.count(this->UniqueKey(11110)));
  const_eq_pair = ht.equal_range(this->UniqueKey(11110));
  EXPECT_TRUE(const_eq_pair.first == const_eq_pair.second);
}

TYPED_TEST(HashtableAllTest, BracketInsert) {
  // Tests operator[], for those types that support it.
  if (!this->ht_.supports_brackets())
    return;

  // bracket_equal is equivalent to ht_[a] == b.  It should insert a if
  // it doesn't already exist.
  EXPECT_TRUE(this->ht_.bracket_equal(this->UniqueKey(1),
                                      this->ht_.default_data()));
  EXPECT_TRUE(this->ht_.find(this->UniqueKey(1)) != this->ht_.end());

  // bracket_assign is equivalent to ht_[a] = b.
  this->ht_.bracket_assign(this->UniqueKey(2),
                           this->ht_.get_data(this->UniqueObject(4)));
  EXPECT_TRUE(this->ht_.find(this->UniqueKey(2)) != this->ht_.end());
  EXPECT_TRUE(this->ht_.bracket_equal(
      this->UniqueKey(2), this->ht_.get_data(this->UniqueObject(4))));

  this->ht_.bracket_assign(
      this->UniqueKey(2), this->ht_.get_data(this->UniqueObject(6)));
  EXPECT_TRUE(this->ht_.bracket_equal(
      this->UniqueKey(2), this->ht_.get_data(this->UniqueObject(6))));
  // bracket_equal shouldn't have modified the value.
  EXPECT_TRUE(this->ht_.bracket_equal(
      this->UniqueKey(2), this->ht_.get_data(this->UniqueObject(6))));

  // Verify that an operator[] that doesn't cause a resize, also
  // doesn't require an extra rehash.
  TypeParam ht(100);
  EXPECT_EQ(0, ht.hash_funct().num_hashes());
  ht.bracket_assign(this->UniqueKey(2), ht.get_data(this->UniqueObject(2)));
  EXPECT_EQ(1, ht.hash_funct().num_hashes());

  // And overwriting, likewise, should only cause one extra hash.
  ht.bracket_assign(this->UniqueKey(2), ht.get_data(this->UniqueObject(2)));
  EXPECT_EQ(2, ht.hash_funct().num_hashes());
}

TYPED_TEST(HashtableAllTest, InsertValue) {
  // First, try some straightforward insertions.
  EXPECT_TRUE(this->ht_.empty());
  this->ht_.insert(this->UniqueObject(1));
  EXPECT_FALSE(this->ht_.empty());
  this->ht_.insert(this->UniqueObject(11));
  this->ht_.insert(this->UniqueObject(111));
  this->ht_.insert(this->UniqueObject(1111));
  this->ht_.insert(this->UniqueObject(11111));
  this->ht_.insert(this->UniqueObject(111111));
  this->ht_.insert(this->UniqueObject(1111111));
  this->ht_.insert(this->UniqueObject(11111111));
  this->ht_.insert(this->UniqueObject(111111111));
  EXPECT_EQ(9u, this->ht_.size());
  EXPECT_EQ(1u, this->ht_.count(this->UniqueKey(1)));
  EXPECT_EQ(1u, this->ht_.count(this->UniqueKey(1111)));

  // Check the return type.
  pair<typename TypeParam::iterator, bool> insert_it;
  insert_it = this->ht_.insert(this->UniqueObject(1));
  EXPECT_EQ(false, insert_it.second);   // false: already present
  EXPECT_TRUE(*insert_it.first == this->UniqueObject(1));

  insert_it = this->ht_.insert(this->UniqueObject(2));
  EXPECT_EQ(true, insert_it.second);    // true: not already present
  EXPECT_TRUE(*insert_it.first == this->UniqueObject(2));
}

TYPED_TEST(HashtableIntTest, InsertRange) {
  // We just test the ints here, to make the placement-new easier.
  TypeParam ht_source;
  ht_source.insert(this->UniqueObject(10));
  ht_source.insert(this->UniqueObject(100));
  ht_source.insert(this->UniqueObject(1000));
  ht_source.insert(this->UniqueObject(10000));
  ht_source.insert(this->UniqueObject(100000));
  ht_source.insert(this->UniqueObject(1000000));

  const typename TypeParam::value_type input[] = {
    // This is a copy of the first element in ht_source.
    *ht_source.begin(),
    this->UniqueObject(2),
    this->UniqueObject(4),
    this->UniqueObject(8)
  };

  set<typename TypeParam::value_type> set_input;
  set_input.insert(this->UniqueObject(1111111));
  set_input.insert(this->UniqueObject(111111));
  set_input.insert(this->UniqueObject(11111));
  set_input.insert(this->UniqueObject(1111));
  set_input.insert(this->UniqueObject(111));
  set_input.insert(this->UniqueObject(11));

  // Insert from ht_source, an iterator of the same type as us.
  typename TypeParam::const_iterator begin = ht_source.begin();
  typename TypeParam::const_iterator end = begin;
  std::advance(end, 3);
  this->ht_.insert(begin, end);    // insert 3 elements from ht_source
  EXPECT_EQ(3u, this->ht_.size());
  EXPECT_TRUE(*this->ht_.begin() == this->UniqueObject(10) ||
              *this->ht_.begin() == this->UniqueObject(100) ||
              *this->ht_.begin() == this->UniqueObject(1000) ||
              *this->ht_.begin() == this->UniqueObject(10000) ||
              *this->ht_.begin() == this->UniqueObject(100000) ||
              *this->ht_.begin() == this->UniqueObject(1000000));

  // And insert from set_input, a separate, non-random-access iterator.
  typename set<typename TypeParam::value_type>::const_iterator set_begin;
  typename set<typename TypeParam::value_type>::const_iterator set_end;
  set_begin = set_input.begin();
  set_end = set_begin;
  std::advance(set_end, 3);
  this->ht_.insert(set_begin, set_end);
  EXPECT_EQ(6u, this->ht_.size());

  // Insert from input as well, a separate, random-access iterator.
  // The first element of input overlaps with an existing element
  // of ht_, so this should only up the size by 2.
  this->ht_.insert(&input[0], &input[3]);
  EXPECT_EQ(8u, this->ht_.size());
}

TEST(HashtableTest, InsertValueToMap) {
  // For the maps in particular, ensure that inserting doesn't change
  // the value.
  sparse_hash_map<int, int> shm;
  pair<sparse_hash_map<int, int>::iterator, bool> shm_it;
  shm[1] = 2;   // test a different method of inserting
  shm_it = shm.insert(pair<int, int>(1, 3));
  EXPECT_EQ(false, shm_it.second);
  EXPECT_EQ(1, shm_it.first->first);
  EXPECT_EQ(2, shm_it.first->second);
  shm_it.first->second = 20;
  EXPECT_EQ(20, shm[1]);

  shm_it = shm.insert(pair<int, int>(2, 4));
  EXPECT_EQ(true, shm_it.second);
  EXPECT_EQ(2, shm_it.first->first);
  EXPECT_EQ(4, shm_it.first->second);
  EXPECT_EQ(4, shm[2]);

  // Do it all again, with dense_hash_map.
  dense_hash_map<int, int> dhm;
  dhm.set_empty_key(0);
  pair<dense_hash_map<int, int>::iterator, bool> dhm_it;
  dhm[1] = 2;   // test a different method of inserting
  dhm_it = dhm.insert(pair<int, int>(1, 3));
  EXPECT_EQ(false, dhm_it.second);
  EXPECT_EQ(1, dhm_it.first->first);
  EXPECT_EQ(2, dhm_it.first->second);
  dhm_it.first->second = 20;
  EXPECT_EQ(20, dhm[1]);

  dhm_it = dhm.insert(pair<int, int>(2, 4));
  EXPECT_EQ(true, dhm_it.second);
  EXPECT_EQ(2, dhm_it.first->first);
  EXPECT_EQ(4, dhm_it.first->second);
  EXPECT_EQ(4, dhm[2]);
}

TYPED_TEST(HashtableStringTest, EmptyKey) {
  // Only run the string tests, to make it easier to know what the
  // empty key should be.
  if (!this->ht_.supports_empty_key())
    return;
  EXPECT_EQ(kEmptyString, this->ht_.empty_key());
}

TYPED_TEST(HashtableAllTest, DeletedKey) {
  if (!this->ht_.supports_deleted_key())
    return;
  this->ht_.insert(this->UniqueObject(10));
  this->ht_.insert(this->UniqueObject(20));
  this->ht_.set_deleted_key(this->UniqueKey(1));
  EXPECT_EQ(this->ht_.deleted_key(), this->UniqueKey(1));
  EXPECT_EQ(2u, this->ht_.size());
  this->ht_.erase(this->UniqueKey(20));
  EXPECT_EQ(1u, this->ht_.size());
  // Changing the deleted key is fine.
  this->ht_.set_deleted_key(this->UniqueKey(2));
  EXPECT_EQ(this->ht_.deleted_key(), this->UniqueKey(2));
  EXPECT_EQ(1u, this->ht_.size());
}

TYPED_TEST(HashtableAllTest, Erase) {
  this->ht_.set_deleted_key(this->UniqueKey(1));
  EXPECT_EQ(0u, this->ht_.erase(this->UniqueKey(20)));
  this->ht_.insert(this->UniqueObject(10));
  this->ht_.insert(this->UniqueObject(20));
  EXPECT_EQ(1u, this->ht_.erase(this->UniqueKey(20)));
  EXPECT_EQ(1u, this->ht_.size());
  EXPECT_EQ(0u, this->ht_.erase(this->UniqueKey(20)));
  EXPECT_EQ(1u, this->ht_.size());
  EXPECT_EQ(0u, this->ht_.erase(this->UniqueKey(19)));
  EXPECT_EQ(1u, this->ht_.size());

  typename TypeParam::iterator it = this->ht_.find(this->UniqueKey(10));
  EXPECT_TRUE(it != this->ht_.end());
  this->ht_.erase(it);
  EXPECT_EQ(0u, this->ht_.size());

  for (int i = 10; i < 100; i++)
    this->ht_.insert(this->UniqueObject(i));
  EXPECT_EQ(90u, this->ht_.size());
  this->ht_.erase(this->ht_.begin(), this->ht_.end());
  EXPECT_EQ(0u, this->ht_.size());
}

TYPED_TEST(HashtableAllTest, EraseDoesNotResize) {
  this->ht_.set_deleted_key(this->UniqueKey(1));
  for (int i = 10; i < 2000; i++) {
    this->ht_.insert(this->UniqueObject(i));
  }
  const typename TypeParam::size_type old_count = this->ht_.bucket_count();
  for (int i = 10; i < 1000; i++) {    // erase half one at a time
    EXPECT_EQ(1u, this->ht_.erase(this->UniqueKey(i)));
  }
  this->ht_.erase(this->ht_.begin(), this->ht_.end());  // and the rest at once
  EXPECT_EQ(0u, this->ht_.size());
  EXPECT_EQ(old_count, this->ht_.bucket_count());
}

TYPED_TEST(HashtableAllTest, Equals) {
  // The real test here is whether two hashtables are equal if they
  // have the same items but in a different order.
  TypeParam ht1;
  TypeParam ht2;

  EXPECT_TRUE(ht1 == ht1);
  EXPECT_FALSE(ht1 != ht1);
  EXPECT_TRUE(ht1 == ht2);
  EXPECT_FALSE(ht1 != ht2);
  ht1.set_deleted_key(this->UniqueKey(1));
  // Only the contents affect equality, not things like deleted-key.
  EXPECT_TRUE(ht1 == ht2);
  EXPECT_FALSE(ht1 != ht2);
  ht1.resize(2000);
  EXPECT_TRUE(ht1 == ht2);
  // The choice of allocator/etc doesn't matter either.
  Hasher hasher(1);
  Alloc<typename TypeParam::key_type> alloc(2, NULL);
  TypeParam ht3(5, hasher, hasher, alloc);
  EXPECT_TRUE(ht1 == ht3);
  EXPECT_FALSE(ht1 != ht3);

  ht1.insert(this->UniqueObject(2));
  EXPECT_TRUE(ht1 != ht2);
  EXPECT_FALSE(ht1 == ht2);   // this should hold as well!

  ht2.insert(this->UniqueObject(2));
  EXPECT_TRUE(ht1 == ht2);

  for (int i = 3; i <= 2000; i++) {
    ht1.insert(this->UniqueObject(i));
  }
  for (int i = 2000; i >= 3; i--) {
    ht2.insert(this->UniqueObject(i));
  }
  EXPECT_TRUE(ht1 == ht2);
}

TEST(HashtableTest, IntIO) {
  // Since the set case is just a special (easier) case than the map case, I
  // just test on sparse_hash_map.  This handles the easy case where we can
  // use the standard reader and writer.
  sparse_hash_map<int, int> ht_out;
  ht_out.set_deleted_key(0);
  for (int i = 1; i < 1000; i++) {
    ht_out[i] = i * i;
  }
  ht_out.erase(563);   // just to test having some erased keys when we write
  ht_out.erase(22);

  string file(TmpFile("intio"));
  FILE* fp = fopen(file.c_str(), "wb");
  EXPECT_TRUE(fp != NULL);
  EXPECT_TRUE(ht_out.write_metadata(fp));
  EXPECT_TRUE(ht_out.write_nopointer_data(fp));
  fclose(fp);

  sparse_hash_map<int, int> ht_in;
  fp = fopen(file.c_str(), "rb");
  EXPECT_TRUE(fp != NULL);
  EXPECT_TRUE(ht_in.read_metadata(fp));
  EXPECT_TRUE(ht_in.read_nopointer_data(fp));
  fclose(fp);

  EXPECT_EQ(1, ht_in[1]);
  EXPECT_EQ(998001, ht_in[999]);
  EXPECT_EQ(100, ht_in[10]);
  EXPECT_EQ(441, ht_in[21]);
  EXPECT_EQ(0, ht_in[22]);    // should not have been saved
  EXPECT_EQ(0, ht_in[563]);
}

TEST(HashtableTest, StringIO) {
  // Since the set case is just a special (easier) case than the map case,
  // I just test on sparse_hash_map.  This handles the difficult case where
  // we have to write our own custom reader/writer for the data.
  sparse_hash_map<string, string> ht_out;
  ht_out.set_deleted_key(string(""));
  for (int i = 32; i < 128; i++) {
    // This maps ' ' (ASCII 32) to 32 spaces, '!' (33) to 33 '!'s, etc.
    ht_out[string(1, i)] = string(i, i);
  }
  ht_out.erase("c");   // just to test having some erased keys when we write
  ht_out.erase("y");

  string file(TmpFile("stringio"));
  FILE* fp = fopen(file.c_str(), "wb");
  EXPECT_TRUE(fp != NULL);
  EXPECT_TRUE(ht_out.write_metadata(fp));
  for (sparse_hash_map<string, string>::const_iterator it = ht_out.begin();
       it != ht_out.end(); ++it) {
    const string::size_type first_size = it->first.length();
    fwrite(&first_size, sizeof(first_size), 1, fp);  // ignore endianness issues
    fwrite(it->first.c_str(), first_size, 1, fp);
    const string::size_type second_size = it->second.length();
    fwrite(&second_size, sizeof(second_size), 1, fp);
    fwrite(it->second.c_str(), second_size, 1, fp);
  }
  fclose(fp);

  sparse_hash_map<string, string> ht_in;
  fp = fopen(file.c_str(), "rb");
  EXPECT_TRUE(fp != NULL);
  EXPECT_TRUE(ht_in.read_metadata(fp));
  for (sparse_hash_map<string, string>::iterator it = ht_in.begin();
       it != ht_in.end(); ++it) {
    string::size_type first_size;
    EXPECT_EQ(1u, fread(&first_size, sizeof(first_size), 1, fp));
    char* first = new char[first_size];
    EXPECT_EQ(1u, fread(first, first_size, 1, fp));
    string::size_type second_size;
    EXPECT_EQ(1u, fread(&second_size, sizeof(second_size), 1, fp));
    char* second = new char[second_size];
    EXPECT_EQ(1u, fread(second, second_size, 1, fp));
    // it points to garbage, so we have to use placement-new to initialize.
    // We also have to use const-cast since it->first is const.
    new(const_cast<string*>(&it->first)) string(first, first_size);
    new(&it->second) string(second, second_size);
    delete[] first;
    delete[] second;
  }
  fclose(fp);

  EXPECT_EQ(string("                                "), ht_in[" "]);
  EXPECT_EQ(string("+++++++++++++++++++++++++++++++++++++++++++"), ht_in["+"]);
  EXPECT_EQ(string(""), ht_in["c"]);    // should not have been saved
  EXPECT_EQ(string(""), ht_in["y"]);
}

TYPED_TEST(HashtableAllTest, Serialization) {
  if (!this->ht_.supports_serialization())
    return;
  TypeParam ht_out;
  ht_out.set_deleted_key(this->UniqueKey(2000));
  for (int i = 1; i < 100; i++) {
    ht_out.insert(this->UniqueObject(i));
  }
  // Just to test having some erased keys when we write.
  ht_out.erase(this->UniqueKey(56));
  ht_out.erase(this->UniqueKey(22));

  string file(TmpFile("serialization"));
  FILE* fp = fopen(file.c_str(), "wb");
  EXPECT_TRUE(fp != NULL);
  EXPECT_TRUE(ht_out.serialize(ValueSerializer(), fp));
  fclose(fp);

  TypeParam ht_in;
  fp = fopen(file.c_str(), "rb");
  EXPECT_TRUE(fp != NULL);
  EXPECT_TRUE(ht_in.unserialize(ValueSerializer(), fp));
  fclose(fp);

  EXPECT_EQ(this->UniqueObject(1), *ht_in.find(this->UniqueKey(1)));
  EXPECT_EQ(this->UniqueObject(99), *ht_in.find(this->UniqueKey(99)));
  EXPECT_FALSE(ht_in.count(this->UniqueKey(100)));
  EXPECT_EQ(this->UniqueObject(21), *ht_in.find(this->UniqueKey(21)));
  // should not have been saved
  EXPECT_FALSE(ht_in.count(this->UniqueKey(22)));
  EXPECT_FALSE(ht_in.count(this->UniqueKey(56)));
}

TYPED_TEST(HashtableIntTest, NopointerSerialization) {
  if (!this->ht_.supports_serialization())
    return;
  TypeParam ht_out;
  ht_out.set_deleted_key(this->UniqueKey(2000));
  for (int i = 1; i < 100; i++) {
    ht_out.insert(this->UniqueObject(i));
  }
  // Just to test having some erased keys when we write.
  ht_out.erase(this->UniqueKey(56));
  ht_out.erase(this->UniqueKey(22));

  string file(TmpFile("nopointer_serialization"));
  FILE* fp = fopen(file.c_str(), "wb");
  EXPECT_TRUE(fp != NULL);
  EXPECT_TRUE(ht_out.serialize(typename TypeParam::NopointerSerializer(), fp));
  fclose(fp);

  TypeParam ht_in;
  fp = fopen(file.c_str(), "rb");
  EXPECT_TRUE(fp != NULL);
  EXPECT_TRUE(ht_in.unserialize(typename TypeParam::NopointerSerializer(), fp));
  fclose(fp);

  EXPECT_EQ(this->UniqueObject(1), *ht_in.find(this->UniqueKey(1)));
  EXPECT_EQ(this->UniqueObject(99), *ht_in.find(this->UniqueKey(99)));
  EXPECT_FALSE(ht_in.count(this->UniqueKey(100)));
  EXPECT_EQ(this->UniqueObject(21), *ht_in.find(this->UniqueKey(21)));
  // should not have been saved
  EXPECT_FALSE(ht_in.count(this->UniqueKey(22)));
  EXPECT_FALSE(ht_in.count(this->UniqueKey(56)));
}

// We don't support serializing to a string by default, but you can do
// it by writing your own custom input/output class.
class StringIO {
 public:
  explicit StringIO(string* s) : s_(s) {}
  size_t Write(const void* buf, size_t len) {
    s_->append(reinterpret_cast<const char*>(buf), len);
    return len;
  }
  size_t Read(void* buf, size_t len) {
    if (s_->length() < len)
      len = s_->length();
    memcpy(reinterpret_cast<char*>(buf), s_->data(), len);
    s_->erase(0, len);
    return len;
  }
 private:
  string* const s_;
};

TYPED_TEST(HashtableIntTest, SerializingToString) {
  if (!this->ht_.supports_serialization())
    return;
  TypeParam ht_out;
  ht_out.set_deleted_key(this->UniqueKey(2000));
  for (int i = 1; i < 100; i++) {
    ht_out.insert(this->UniqueObject(i));
  }
  // Just to test having some erased keys when we write.
  ht_out.erase(this->UniqueKey(56));
  ht_out.erase(this->UniqueKey(22));

  string stringbuf;
  StringIO stringio(&stringbuf);
  EXPECT_TRUE(ht_out.serialize(typename TypeParam::NopointerSerializer(),
                               &stringio));
  TypeParam ht_in;
  EXPECT_TRUE(ht_in.unserialize(typename TypeParam::NopointerSerializer(),
                                &stringio));
  EXPECT_EQ(this->UniqueObject(1), *ht_in.find(this->UniqueKey(1)));
  EXPECT_EQ(this->UniqueObject(99), *ht_in.find(this->UniqueKey(99)));
  EXPECT_FALSE(ht_in.count(this->UniqueKey(100)));
  EXPECT_EQ(this->UniqueObject(21), *ht_in.find(this->UniqueKey(21)));
  // should not have been saved
  EXPECT_FALSE(ht_in.count(this->UniqueKey(22)));
  EXPECT_FALSE(ht_in.count(this->UniqueKey(56)));
}

// An easier way to do the above would be to use the existing stream methods.
TYPED_TEST(HashtableIntTest, SerializingToStringStream) {
  if (!this->ht_.supports_serialization())
    return;
  TypeParam ht_out;
  ht_out.set_deleted_key(this->UniqueKey(2000));
  for (int i = 1; i < 100; i++) {
    ht_out.insert(this->UniqueObject(i));
  }
  // Just to test having some erased keys when we write.
  ht_out.erase(this->UniqueKey(56));
  ht_out.erase(this->UniqueKey(22));

  std::stringstream string_buffer;
  EXPECT_TRUE(ht_out.serialize(typename TypeParam::NopointerSerializer(),
                               &string_buffer));
  TypeParam ht_in;
  EXPECT_TRUE(ht_in.unserialize(typename TypeParam::NopointerSerializer(),
                                &string_buffer));
  EXPECT_EQ(this->UniqueObject(1), *ht_in.find(this->UniqueKey(1)));
  EXPECT_EQ(this->UniqueObject(99), *ht_in.find(this->UniqueKey(99)));
  EXPECT_FALSE(ht_in.count(this->UniqueKey(100)));
  EXPECT_EQ(this->UniqueObject(21), *ht_in.find(this->UniqueKey(21)));
  // should not have been saved
  EXPECT_FALSE(ht_in.count(this->UniqueKey(22)));
  EXPECT_FALSE(ht_in.count(this->UniqueKey(56)));
}

// Verify that the metadata serialization is endianness and word size
// agnostic.
TYPED_TEST(HashtableAllTest, MetadataSerializationAndEndianness) {
  TypeParam ht_out;
  string kExpectedDense("\x13W\x86""B\0\0\0\0\0\0\0 \0\0\0\0\0\0\0\0\0\0\0\0",
                        24);
  string kExpectedSparse("$hu1\0\0\0 \0\0\0\0\0\0\0\0\0\0\0\0", 20);

  if (ht_out.supports_readwrite()) {
    string file(TmpFile("metadata_serialization"));
    FILE* fp = fopen(file.c_str(), "wb");
    EXPECT_TRUE(fp != NULL);

    EXPECT_TRUE(ht_out.write_metadata(fp));
    EXPECT_TRUE(ht_out.write_nopointer_data(fp));

    const size_t num_bytes = ftell(fp);
    fclose(fp);
    fp = fopen(file.c_str(), "rb");
    EXPECT_LE(num_bytes, static_cast<size_t>(24));
    char contents[24];
    EXPECT_EQ(num_bytes, fread(contents, 1, num_bytes, fp));
    EXPECT_EQ(EOF, fgetc(fp));   // check we're *exactly* the right size
    fclose(fp);
    // TODO(csilvers): check type of ht_out instead of looking at the 1st byte.
    if (contents[0] == kExpectedDense[0]) {
      EXPECT_EQ(kExpectedDense, string(contents, num_bytes));
    } else {
      EXPECT_EQ(kExpectedSparse, string(contents, num_bytes));
    }
  }

  // Do it again with new-style serialization.  Here we can use StringIO.
  if (ht_out.supports_serialization()) {
    string stringbuf;
    StringIO stringio(&stringbuf);
    EXPECT_TRUE(ht_out.serialize(typename TypeParam::NopointerSerializer(),
                                 &stringio));
    if (stringbuf[0] == kExpectedDense[0]) {
      EXPECT_EQ(kExpectedDense, stringbuf);
    } else {
      EXPECT_EQ(kExpectedSparse, stringbuf);
    }
  }
}

// ------------------------------------------------------------------------
// The above tests test the general API for correctness.  These tests
// test a few corner cases that have tripped us up in the past, and
// more general, cross-API issues like memory management.

TYPED_TEST(HashtableAllTest, BracketOperatorCrashing) {
  this->ht_.set_deleted_key(this->UniqueKey(1));
  for (int iters = 0; iters < 10; iters++) {
    // We start at 33 because after shrinking, we'll be at 32 buckets.
    for (int i = 33; i < 133; i++) {
      this->ht_.bracket_assign(this->UniqueKey(i),
                               this->ht_.get_data(this->UniqueObject(i)));
    }
    this->ht_.clear_no_resize();
    // This will force a shrink on the next insert, which we want to test.
    this->ht_.bracket_assign(this->UniqueKey(2),
                             this->ht_.get_data(this->UniqueObject(2)));
    this->ht_.erase(this->UniqueKey(2));
  }
}

// For data types with trivial copy-constructors and destructors, we
// should use an optimized routine for data-copying, that involves
// memmove.  We test this by keeping count of how many times the
// copy-constructor is called; it should be much less with the
// optimized code.
struct Memmove {
 public:
  Memmove(): i(0) {}
  explicit Memmove(int ival): i(ival) {}
  Memmove(const Memmove& that) { this->i = that.i; num_copies++; }
  int i;
  static int num_copies;
};
int Memmove::num_copies = 0;

struct NoMemmove {
 public:
  NoMemmove(): i(0) {}
  explicit NoMemmove(int ival): i(ival) {}
  NoMemmove(const NoMemmove& that) { this->i = that.i; num_copies++; }
  int i;
  static int num_copies;
};
int NoMemmove::num_copies = 0;

}  // unnamed namespace

// This is what tells the hashtable code it can use memmove for this class:
_START_GOOGLE_NAMESPACE_
template<> struct has_trivial_copy<Memmove> : true_type { };
template<> struct has_trivial_destructor<Memmove> : true_type { };
_END_GOOGLE_NAMESPACE_

namespace {

TEST(HashtableTest, SimpleDataTypeOptimizations) {
  // Only sparsehashtable optimizes moves in this way.
  sparse_hash_map<int, Memmove> memmove;
  sparse_hash_map<int, NoMemmove> nomemmove;
  sparse_hash_map<int, Memmove, Hasher, Hasher,
                  Alloc<int> > memmove_nonstandard_alloc;

  Memmove::num_copies = 0;
  for (int i = 10000; i > 0; i--) {
    memmove[i] = Memmove(i);
  }
  const int memmove_copies = Memmove::num_copies;

  NoMemmove::num_copies = 0;
  for (int i = 10000; i > 0; i--) {
    nomemmove[i] = NoMemmove(i);
  }
  const int nomemmove_copies = NoMemmove::num_copies;

  Memmove::num_copies = 0;
  for (int i = 10000; i > 0; i--) {
    memmove_nonstandard_alloc[i] = Memmove(i);
  }
  const int memmove_nonstandard_alloc_copies = Memmove::num_copies;

  EXPECT_GT(nomemmove_copies, memmove_copies);
  EXPECT_EQ(nomemmove_copies, memmove_nonstandard_alloc_copies);
}

TYPED_TEST(HashtableAllTest, ResizeHysteresis) {
  // We want to make sure that when we create a hashtable, and then
  // add and delete one element, the size of the hashtable doesn't
  // change.
  this->ht_.set_deleted_key(this->UniqueKey(1));
  typename TypeParam::size_type old_bucket_count = this->ht_.bucket_count();
  this->ht_.insert(this->UniqueObject(4));
  this->ht_.erase(this->UniqueKey(4));
  this->ht_.insert(this->UniqueObject(4));
  this->ht_.erase(this->UniqueKey(4));
  EXPECT_EQ(old_bucket_count, this->ht_.bucket_count());

  // Try it again, but with a hashtable that starts very small
  TypeParam ht(2);
  EXPECT_LT(ht.bucket_count(), 32u);   // verify we really do start small
  ht.set_deleted_key(this->UniqueKey(1));
  old_bucket_count = ht.bucket_count();
  ht.insert(this->UniqueObject(4));
  ht.erase(this->UniqueKey(4));
  ht.insert(this->UniqueObject(4));
  ht.erase(this->UniqueKey(4));
  EXPECT_EQ(old_bucket_count, ht.bucket_count());
}

TEST(HashtableTest, ConstKey) {
  // Sometimes people write hash_map<const int, int>, even though the
  // const isn't necessary.  Make sure we handle this cleanly.
  sparse_hash_map<const int, int> shm;
  shm.set_deleted_key(1);
  shm[10] = 20;

  dense_hash_map<const int, int> dhm;
  dhm.set_empty_key(1);
  dhm.set_deleted_key(2);
  dhm[10] = 20;
}

TYPED_TEST(HashtableAllTest, ResizeActuallyResizes) {
  // This tests for a problem we had where we could repeatedly "resize"
  // a hashtable to the same size it was before, on every insert.
  const typename TypeParam::size_type kSize = 1<<10;   // Pick any power of 2
  const float kResize = 0.8f;    // anything between 0.5 and 1 is fine.
  const int kThreshold = static_cast<int>(kSize * kResize - 1);
  this->ht_.set_resizing_parameters(0, kResize);
  this->ht_.set_deleted_key(this->UniqueKey(kThreshold + 100));

  // Get right up to the resizing threshold.
  for (int i = 0; i <= kThreshold; i++) {
    this->ht_.insert(this->UniqueObject(i+1));
  }
  // The bucket count should equal kSize.
  EXPECT_EQ(kSize, this->ht_.bucket_count());

  // Now start doing erase+insert pairs.  This should cause us to
  // copy the hashtable at most once.
  const int pre_copies = this->ht_.num_table_copies();
  for (int i = 0; i < static_cast<int>(kSize); i++) {
    this->ht_.erase(this->UniqueKey(kThreshold));
    this->ht_.insert(this->UniqueObject(kThreshold));
  }
  EXPECT_LT(this->ht_.num_table_copies(), pre_copies + 2);

  // Now create a hashtable where we go right to the threshold, then
  // delete everything and do one insert.  Even though our hashtable
  // is now tiny, we should still have at least kSize buckets, because
  // our shrink threshold is 0.
  TypeParam ht2;
  ht2.set_deleted_key(this->UniqueKey(kThreshold + 100));
  ht2.set_resizing_parameters(0, kResize);
  EXPECT_LT(ht2.bucket_count(), kSize);
  for (int i = 0; i <= kThreshold; i++) {
    ht2.insert(this->UniqueObject(i+1));
  }
  EXPECT_EQ(ht2.bucket_count(), kSize);
  for (int i = 0; i <= kThreshold; i++) {
    ht2.erase(this->UniqueKey(i+1));
    EXPECT_EQ(ht2.bucket_count(), kSize);
  }
  ht2.insert(this->UniqueObject(kThreshold+2));
  EXPECT_GE(ht2.bucket_count(), kSize);
}

template <typename T>
class DenseIntMap : public dense_hash_map<int, T> {
 public:
  DenseIntMap() { this->set_empty_key(0); }
};

class DenseStringSet : public dense_hash_set<string, Hasher, Hasher> {
 public:
  DenseStringSet() { this->set_empty_key(string("")); }
};

TEST(HashtableTest, NestedHashtables) {
  // People can do better than to have a hash_map of hash_maps, but we
  // should still support it.  I try a few different mappings.
  sparse_hash_map<string, sparse_hash_map<int, string>, Hasher, Hasher> ht1;
  sparse_hash_map<string, DenseStringSet, Hasher, Hasher> ht2;
  dense_hash_map<int, DenseIntMap<int>, Hasher, Hasher> ht3;
  ht3.set_empty_key(0);

  ht1["hi"];   // create a sub-ht with the default values
  ht1["lo"][1] = "there";
  sparse_hash_map<string, sparse_hash_map<int, string>, Hasher, Hasher>
      ht1copy = ht1;

  ht2["hi"];
  ht2["hi"].insert("lo");
  sparse_hash_map<string, DenseStringSet, Hasher, Hasher> ht2copy = ht2;

  ht3[1];
  ht3[2][3] = 4;
  dense_hash_map<int, DenseIntMap<int>, Hasher, Hasher> ht3copy = ht3;
}

TEST(HashtableDeathTest, ResizeOverflow) {
  dense_hash_map<int, int> ht;
  EXPECT_DEATH(ht.resize(static_cast<size_t>(-1)), "overflows size_type");

  sparse_hash_map<int, int> ht2;
  EXPECT_DEATH(ht2.resize(static_cast<size_t>(-1)), "overflows size_type");
}

TEST(HashtableDeathTest, InsertSizeTypeOverflow) {
  static const int kMax = 256;
  vector<int> test_data(kMax);
  for (int i = 0; i < kMax; ++i) {
    test_data[i] = i+1000;
  }

  sparse_hash_set<int, Hasher, Hasher, Alloc<int, uint8, 10> > shs;
  dense_hash_set<int, Hasher, Hasher, Alloc<int, uint8, 10> > dhs;
  dhs.set_empty_key(-1);

  // Test we are using the correct allocator
  EXPECT_TRUE(shs.get_allocator().is_custom_alloc());
  EXPECT_TRUE(dhs.get_allocator().is_custom_alloc());

  // Test size_type overflow in insert(it, it)
  EXPECT_DEATH(dhs.insert(test_data.begin(), test_data.end()),
               "overflows size_type");
  EXPECT_DEATH(shs.insert(test_data.begin(), test_data.end()),
               "overflows size_type");
}

TEST(HashtableDeathTest, InsertMaxSizeOverflow) {
  static const int kMax = 256;
  vector<int> test_data(kMax);
  for (int i = 0; i < kMax; ++i) {
    test_data[i] = i+1000;
  }

  sparse_hash_set<int, Hasher, Hasher, Alloc<int, uint8, 10> > shs;
  dense_hash_set<int, Hasher, Hasher, Alloc<int, uint8, 10> > dhs;
  dhs.set_empty_key(-1);

  // Test max_size overflow
  EXPECT_DEATH(dhs.insert(test_data.begin(), test_data.begin() + 11),
               "exceed max_size");
  EXPECT_DEATH(shs.insert(test_data.begin(), test_data.begin() + 11),
               "exceed max_size");
}

TEST(HashtableDeathTest, ResizeSizeTypeOverflow) {
  // Test min-buckets overflow, when we want to resize too close to size_type
  sparse_hash_set<int, Hasher, Hasher, Alloc<int, uint8, 10> > shs;
  dense_hash_set<int, Hasher, Hasher, Alloc<int, uint8, 10> > dhs;
  dhs.set_empty_key(-1);

  EXPECT_DEATH(dhs.resize(250), "overflows size_type");   // 9+250 > 256
  EXPECT_DEATH(shs.resize(250), "overflows size_type");
}

TEST(HashtableDeathTest, ResizeDeltaOverflow) {
  static const int kMax = 256;
  vector<int> test_data(kMax);
  for (int i = 0; i < kMax; ++i) {
    test_data[i] = i+1000;
  }

  sparse_hash_set<int, Hasher, Hasher, Alloc<int, uint8, 10> > shs;
  dense_hash_set<int, Hasher, Hasher, Alloc<int, uint8, 10> > dhs;
  dhs.set_empty_key(-1);
  for (int i = 0; i < 9; i++) {
    dhs.insert(i);
    shs.insert(i);
  }
  EXPECT_DEATH(dhs.insert(test_data.begin(), test_data.begin() + 250),
               "overflows size_type");   // 9+250 > 256
  EXPECT_DEATH(shs.insert(test_data.begin(), test_data.begin() + 250),
               "overflows size_type");
}

// ------------------------------------------------------------------------
// This informational "test" comes last so it's easy to see.
// Also, benchmarks.

TYPED_TEST(HashtableAllTest, ClassSizes) {
  std::cout << "sizeof(" << typeid(TypeParam).name() << "): "
            << sizeof(this->ht_) << "\n";
}

}  // unnamed namespace

int main(int, char **) {
  // All the work is done in the static constructors.  If they don't
  // die, the tests have all passed.
  cout << "PASS\n";
  return 0;
}

sparsehash-2.0.2/src/windows/sparsehash/internal/sparseconfig.h

/*
 * NOTE: This file is for internal use only.
 * Do not use these #defines in your own program!
 */

/* Namespace for Google classes */
#define GOOGLE_NAMESPACE ::google

/* the location of the header defining hash functions */
#define HASH_FUN_H <hash_map>

/* the namespace of the hash<> function */
#define HASH_NAMESPACE stdext

/* Define to 1 if you have the <inttypes.h> header file. */
#undef HAVE_INTTYPES_H

/* Define to 1 if the system has the type `long long'. */
#define HAVE_LONG_LONG 1

/* Define to 1 if you have the `memcpy' function. */
#define HAVE_MEMCPY 1

/* Define to 1 if you have the <stdint.h> header file. */
#undef HAVE_STDINT_H

/* Define to 1 if you have the <sys/types.h> header file. */
#define HAVE_SYS_TYPES_H 1

/* Define to 1 if the system has the type `uint16_t'. */
#undef HAVE_UINT16_T

/* Define to 1 if the system has the type `u_int16_t'. */
#undef HAVE_U_INT16_T

/* Define to 1 if the system has the type `__uint16'. */
#define HAVE___UINT16 1

/* The system-provided hash function including the namespace. */
#define SPARSEHASH_HASH HASH_NAMESPACE::hash_compare

/* The system-provided hash function, in namespace HASH_NAMESPACE. */
#define SPARSEHASH_HASH_NO_NAMESPACE hash_compare

/* Stops putting the code inside the Google namespace */
#define _END_GOOGLE_NAMESPACE_ }

/* Puts following code inside the Google namespace */
#define _START_GOOGLE_NAMESPACE_ namespace google {
sparsehash-2.0.2/src/windows/google/sparsehash/sparseconfig.h
sparsehash-2.0.2/src/windows/port.cc

/* Copyright (c) 2007, Google Inc.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *
 *     * Redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above
 * copyright notice, this list of conditions and the following disclaimer
 * in the documentation and/or other materials provided with the
 * distribution.
 *     * Neither the name of Google Inc. nor the names of its
 * contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * ---
 * Author: Craig Silverstein
 */

#include <stdio.h>

#ifndef WIN32
# error You should only be including windows/port.cc in a windows environment!
#endif

#include "config.h"
#include <stdarg.h>    // for va_list, va_start, va_end
#include "port.h"

// Calls the windows _vsnprintf, but always NUL-terminate.
int snprintf(char *str, size_t size, const char *format, ...) {
  if (size == 0)        // not even room for a \0?
    return -1;          // not what C99 says to do, but what windows does
  str[size-1] = '\0';
  va_list ap;
  va_start(ap, format);
  const int r = _vsnprintf(str, size-1, format, ap);
  va_end(ap);
  return r;
}

std::string TmpFile(const char* basename) {
  char tmppath_buffer[1024];
  int tmppath_len = GetTempPathA(sizeof(tmppath_buffer), tmppath_buffer);
  if (tmppath_len <= 0 || tmppath_len >= sizeof(tmppath_buffer)) {
    return basename;           // an error, so just bail on tmppath
  }
  snprintf(tmppath_buffer + tmppath_len, sizeof(tmppath_buffer) - tmppath_len,
           "\\%s", basename);
  return tmppath_buffer;
}

sparsehash-2.0.2/src/windows/config.h

#ifndef GOOGLE_SPARSEHASH_WINDOWS_CONFIG_H_
#define GOOGLE_SPARSEHASH_WINDOWS_CONFIG_H_

/* src/config.h.in.  Generated from configure.ac by autoheader. */

/* Namespace for Google classes */
#define GOOGLE_NAMESPACE ::google

/* the location of the header defining hash functions */
#define HASH_FUN_H <hash_map>

/* the location of <hash_map> or <ext/hash_map> */
#define HASH_MAP_H <hash_map>

/* the namespace of the hash<> function */
#define HASH_NAMESPACE stdext

/* the location of <hash_set> or <ext/hash_set> */
#define HASH_SET_H <hash_set>

/* Define to 1 if you have the <google/malloc_extension.h> header file. */
#undef HAVE_GOOGLE_MALLOC_EXTENSION_H

/* define if the compiler has hash_map */
#define HAVE_HASH_MAP 1

/* define if the compiler has hash_set */
#define HAVE_HASH_SET 1

/* Define to 1 if you have the <inttypes.h> header file. */
#undef HAVE_INTTYPES_H

/* Define to 1 if the system has the type `long long'. */
#define HAVE_LONG_LONG 1

/* Define to 1 if you have the `memcpy' function. */
#define HAVE_MEMCPY 1

/* Define to 1 if you have the `memmove' function. */
#define HAVE_MEMMOVE 1

/* Define to 1 if you have the <memory.h> header file. */
#undef HAVE_MEMORY_H

/* define if the compiler implements namespaces */
#define HAVE_NAMESPACES 1

/* Define if you have POSIX threads libraries and header files. */
#undef HAVE_PTHREAD

/* Define to 1 if you have the <stdint.h> header file.
 */
#undef HAVE_STDINT_H

/* Define to 1 if you have the <stdlib.h> header file. */
#define HAVE_STDLIB_H 1

/* Define to 1 if you have the <strings.h> header file. */
#undef HAVE_STRINGS_H

/* Define to 1 if you have the <string.h> header file. */
#define HAVE_STRING_H 1

/* Define to 1 if you have the <sys/resource.h> header file. */
#undef HAVE_SYS_RESOURCE_H

/* Define to 1 if you have the <sys/stat.h> header file. */
#define HAVE_SYS_STAT_H 1

/* Define to 1 if you have the <sys/time.h> header file. */
#undef HAVE_SYS_TIME_H

/* Define to 1 if you have the <sys/types.h> header file. */
#define HAVE_SYS_TYPES_H 1

/* Define to 1 if you have the <sys/utsname.h> header file. */
#undef HAVE_SYS_UTSNAME_H

/* Define to 1 if the system has the type `uint16_t'. */
#undef HAVE_UINT16_T

/* Define to 1 if you have the <unistd.h> header file. */
#undef HAVE_UNISTD_H

/* define if the compiler supports unordered_{map,set} */
#undef HAVE_UNORDERED_MAP

/* Define to 1 if the system has the type `u_int16_t'. */
#undef HAVE_U_INT16_T

/* Define to 1 if the system has the type `__uint16'. */
#define HAVE___UINT16 1

/* Name of package */
#undef PACKAGE

/* Define to the address where bug reports for this package should be sent. */
#undef PACKAGE_BUGREPORT

/* Define to the full name of this package. */
#undef PACKAGE_NAME

/* Define to the full name and version of this package. */
#undef PACKAGE_STRING

/* Define to the one symbol short name of this package. */
#undef PACKAGE_TARNAME

/* Define to the home page for this package. */
#undef PACKAGE_URL

/* Define to the version of this package. */
#undef PACKAGE_VERSION

/* Define to necessary symbol if this constant uses a non-standard name on
   your system. */
#undef PTHREAD_CREATE_JOINABLE

/* The system-provided hash function including the namespace. */
#define SPARSEHASH_HASH HASH_NAMESPACE::hash_compare

/* The system-provided hash function, in namespace HASH_NAMESPACE. */
#define SPARSEHASH_HASH_NO_NAMESPACE hash_compare

/* Define to 1 if you have the ANSI C header files.
 */
#define STDC_HEADERS 1

/* Version number of package */
#undef VERSION

/* Stops putting the code inside the Google namespace */
#define _END_GOOGLE_NAMESPACE_ }

/* Puts following code inside the Google namespace */
#define _START_GOOGLE_NAMESPACE_ namespace google {

// ---------------------------------------------------------------------
// Extra stuff not found in config.h.in

#define HAVE_WINDOWS_H 1   // used in time_hash_map

// This makes sure the definitions in config.h and sparseconfig.h match
// up.  If they don't, the compiler will complain about redefinition.
#include <sparsehash/internal/sparseconfig.h>

// TODO(csilvers): include windows/port.h in every relevant source file instead?
#include "windows/port.h"

#endif /* GOOGLE_SPARSEHASH_WINDOWS_CONFIG_H_ */
sparsehash-2.0.2/src/windows/port.h

/* Copyright (c) 2007, Google Inc.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *
 *     * Redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above
 * copyright notice, this list of conditions and the following disclaimer
 * in the documentation and/or other materials provided with the
 * distribution.
 *     * Neither the name of Google Inc. nor the names of its
 * contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * ---
 * Author: Craig Silverstein
 *
 * These are some portability typedefs and defines to make it a bit
 * easier to compile this code -- in particular, unittests -- under VC++.
 * Other portability code is found in windows/sparsehash/internal/sparseconfig.h.
 *
 * Several of these are taken from glib:
 * http://developer.gnome.org/doc/API/glib/glib-windows-compatability-functions.html
 */

#ifndef SPARSEHASH_WINDOWS_PORT_H_
#define SPARSEHASH_WINDOWS_PORT_H_

/* NOTE: the angle-bracket include targets were lost in extraction; the
 * names below are reconstructed from what this header uses. */
#include <string>            /* for the std::string TmpFile() declaration */
#include "config.h"

#ifdef WIN32

#define WIN32_LEAN_AND_MEAN  /* We always want minimal includes */
#include <windows.h>
#include <io.h>              /* because we so often use open/close/etc */
#include <stdio.h>           /* for the snprintf() declaration (size_t) */

// 4996: Yes, we're ok using the "unsafe" functions like _vsnprintf and fopen
// 4127: We use "while (1)" sometimes: yes, we know it's a constant
// 4181: type_traits_test is explicitly testing 'qualifier applied to reference'
#pragma warning(disable:4996 4127 4181)

// file I/O
#define unlink  _unlink
#define strdup  _strdup

// We can't just use _snprintf as a drop-in replacement, because it
// doesn't always NUL-terminate. :-(
extern int snprintf(char *str, size_t size, const char *format, ...);
extern std::string TmpFile(const char* basename);   // used in hashtable_unittest

#endif  /* WIN32 */

#endif  /* SPARSEHASH_WINDOWS_PORT_H_ */

sparsehash-2.0.2/src/windows/port.cc

/* Copyright (c) 2007, Google Inc.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *
 *     * Redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above
 * copyright notice, this list of conditions and the following disclaimer
 * in the documentation and/or other materials provided with the
 * distribution.
 *     * Neither the name of Google Inc. nor the names of its
 * contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * ---
 * Author: Craig Silverstein
 */

#include <stdio.h>   /* reconstructed -- the include's target was lost in extraction */

#ifndef WIN32
# error You should only be including windows/port.cc in a windows environment!
#endif

#include "config.h"
#include <stdarg.h>    // for va_list, va_start, va_end
#include "port.h"

// Calls the windows _vsnprintf, but always NUL-terminates.
int snprintf(char *str, size_t size, const char *format, ...) {
  if (size == 0)    // not even room for a \0?
    return -1;      // not what C99 says to do, but what windows does
  str[size-1] = '\0';
  va_list ap;
  va_start(ap, format);
  const int r = _vsnprintf(str, size-1, format, ap);
  va_end(ap);
  return r;
}

std::string TmpFile(const char* basename) {
  char tmppath_buffer[1024];
  int tmppath_len = GetTempPathA(sizeof(tmppath_buffer), tmppath_buffer);
  if (tmppath_len <= 0 || tmppath_len >= sizeof(tmppath_buffer)) {
    return basename;    // an error, so just bail on tmppath
  }
  snprintf(tmppath_buffer + tmppath_len, sizeof(tmppath_buffer) - tmppath_len,
           "\\%s", basename);
  return tmppath_buffer;
}

sparsehash-2.0.2/src/windows/config.h

#ifndef GOOGLE_SPARSEHASH_WINDOWS_CONFIG_H_
#define GOOGLE_SPARSEHASH_WINDOWS_CONFIG_H_

/* src/config.h.in.  Generated from configure.ac by autoheader.
 */

/* Namespace for Google classes */
#define GOOGLE_NAMESPACE  ::google

/* the location of the header defining hash functions */
#define HASH_FUN_H  <hash_map>   /* reconstructed */

/* the location of <hash_map> or <unordered_map> */
#define HASH_MAP_H  <hash_map>   /* reconstructed */

/* the namespace of the hash<> function */
#define HASH_NAMESPACE  stdext

/* the location of <hash_set> or <unordered_set> */
#define HASH_SET_H  <hash_set>   /* reconstructed */

/* Define to 1 if you have the <google/malloc_extension.h> header file. */
#undef HAVE_GOOGLE_MALLOC_EXTENSION_H

/* define if the compiler has hash_map */
#define HAVE_HASH_MAP 1

/* define if the compiler has hash_set */
#define HAVE_HASH_SET 1

/* Define to 1 if you have the <inttypes.h> header file. */
#undef HAVE_INTTYPES_H

/* Define to 1 if the system has the type `long long'. */
#define HAVE_LONG_LONG 1

/* Define to 1 if you have the `memcpy' function. */
#define HAVE_MEMCPY 1

/* Define to 1 if you have the `memmove' function. */
#define HAVE_MEMMOVE 1

/* Define to 1 if you have the <memory.h> header file. */
#undef HAVE_MEMORY_H

/* define if the compiler implements namespaces */
#define HAVE_NAMESPACES 1

/* Define if you have POSIX threads libraries and header files. */
#undef HAVE_PTHREAD

/* Define to 1 if you have the <stdint.h> header file. */
#undef HAVE_STDINT_H

/* Define to 1 if you have the <stdlib.h> header file. */
#define HAVE_STDLIB_H 1

/* Define to 1 if you have the <strings.h> header file. */
#undef HAVE_STRINGS_H

/* Define to 1 if you have the <string.h> header file. */
#define HAVE_STRING_H 1

/* Define to 1 if you have the <sys/resource.h> header file. */
#undef HAVE_SYS_RESOURCE_H

/* Define to 1 if you have the <sys/stat.h> header file. */
#define HAVE_SYS_STAT_H 1

/* Define to 1 if you have the <sys/time.h> header file. */
#undef HAVE_SYS_TIME_H

/* Define to 1 if you have the <sys/types.h> header file. */
#define HAVE_SYS_TYPES_H 1

/* Define to 1 if you have the <sys/utsname.h> header file. */
#undef HAVE_SYS_UTSNAME_H

/* Define to 1 if the system has the type `uint16_t'. */
#undef HAVE_UINT16_T

/* Define to 1 if you have the <unistd.h> header file. */
#undef HAVE_UNISTD_H

/* define if the compiler supports unordered_{map,set} */
#undef HAVE_UNORDERED_MAP

/* Define to 1 if the system has the type `u_int16_t'. */
#undef HAVE_U_INT16_T

/* Define to 1 if the system has the type `__uint16'. */
#define HAVE___UINT16 1

/* Name of package */
#undef PACKAGE

/* Define to the address where bug reports for this package should be sent. */
#undef PACKAGE_BUGREPORT

/* Define to the full name of this package. */
#undef PACKAGE_NAME

/* Define to the full name and version of this package. */
#undef PACKAGE_STRING

/* Define to the one symbol short name of this package. */
#undef PACKAGE_TARNAME

/* Define to the home page for this package. */
#undef PACKAGE_URL

/* Define to the version of this package. */
#undef PACKAGE_VERSION

/* Define to necessary symbol if this constant uses a non-standard name on
   your system. */
#undef PTHREAD_CREATE_JOINABLE

/* The system-provided hash function including the namespace. */
#define SPARSEHASH_HASH  HASH_NAMESPACE::hash_compare

/* The system-provided hash function, in namespace HASH_NAMESPACE. */
#define SPARSEHASH_HASH_NO_NAMESPACE  hash_compare

/* Define to 1 if you have the ANSI C header files. */
#define STDC_HEADERS 1

/* Version number of package */
#undef VERSION

/* Stops putting the code inside the Google namespace */
#define _END_GOOGLE_NAMESPACE_  }

/* Puts following code inside the Google namespace */
#define _START_GOOGLE_NAMESPACE_  namespace google {

// ---------------------------------------------------------------------
// Extra stuff not found in config.h.in

#define HAVE_WINDOWS_H 1   // used in time_hash_map

// This makes sure the definitions in config.h and sparseconfig.h match
// up.  If they don't, the compiler will complain about redefinition.
#include <sparsehash/internal/sparseconfig.h>

// TODO(csilvers): include windows/port.h in every relevant source file instead?
#include "windows/port.h"

#endif  /* GOOGLE_SPARSEHASH_WINDOWS_CONFIG_H_ */

sparsehash-2.0.2/src/hash_test_interface.h

// Copyright (c) 2010, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// ---
//
// This implements a uniform interface for all 6 hash implementations:
//    dense_hashtable, dense_hash_map, dense_hash_set
//    sparse_hashtable, sparse_hash_map, sparse_hash_set
// This is intended to be used for testing, to provide a single routine
// that can easily test all 6 implementations.
//
// The main reasons to specialize are to (1) provide dummy
// implementations for methods that are only needed for some of the
// implementations (for instance, set_empty_key()), and (2) provide a
// uniform interface to just the keys -- for instance, we provide
// wrappers around the iterators that define it.key, which gives the
// "key" part of the bucket (*it or it->first, depending on the class).

#ifndef UTIL_GTL_HASH_TEST_INTERFACE_H_
#define UTIL_GTL_HASH_TEST_INTERFACE_H_

// NOTE: the angle-bracket include targets were lost in extraction; the
// list below is reconstructed from the six implementations this file wraps.
#include <config.h>
#include <functional>   // for equal_to<>
#include <sparsehash/dense_hash_map>
#include <sparsehash/dense_hash_set>
#include <sparsehash/sparse_hash_map>
#include <sparsehash/sparse_hash_set>
#include <sparsehash/internal/densehashtable.h>
#include <sparsehash/internal/sparsehashtable.h>
#include HASH_FUN_H     // for hash<>

_START_GOOGLE_NAMESPACE_

// This is the "default" interface, which just passes everything
// through to the underlying hashtable.  You'll need to subclass it to
// specialize behavior for an individual hashtable.
template <class HT>
class BaseHashtableInterface {
 public:
  virtual ~BaseHashtableInterface() {}

  typedef typename HT::key_type key_type;
  typedef typename HT::value_type value_type;
  typedef typename HT::hasher hasher;
  typedef typename HT::key_equal key_equal;
  typedef typename HT::allocator_type allocator_type;
  typedef typename HT::size_type size_type;
  typedef typename HT::difference_type difference_type;
  typedef typename HT::pointer pointer;
  typedef typename HT::const_pointer const_pointer;
  typedef typename HT::reference reference;
  typedef typename HT::const_reference const_reference;

  class const_iterator;

  class iterator : public HT::iterator {
   public:
    iterator() : parent_(NULL) { }   // this allows code like "iterator it;"
    iterator(typename HT::iterator it, const BaseHashtableInterface* parent)
        : HT::iterator(it), parent_(parent) { }
    key_type key() { return parent_->it_to_key(*this); }
   private:
    friend class BaseHashtableInterface::const_iterator;  // for its ctor
    const BaseHashtableInterface* parent_;
  };

  class const_iterator : public HT::const_iterator {
   public:
    const_iterator() : parent_(NULL) { }
    const_iterator(typename HT::const_iterator it, const
BaseHashtableInterface* parent) : HT::const_iterator(it), parent_(parent) { } const_iterator(typename HT::iterator it, BaseHashtableInterface* parent) : HT::const_iterator(it), parent_(parent) { } // The parameter type here *should* just be "iterator", but MSVC // gets confused by that, so I'm overly specific. const_iterator(typename BaseHashtableInterface::iterator it) : HT::const_iterator(it), parent_(it.parent_) { } key_type key() { return parent_->it_to_key(*this); } private: const BaseHashtableInterface* parent_; }; class const_local_iterator; class local_iterator : public HT::local_iterator { public: local_iterator() : parent_(NULL) { } local_iterator(typename HT::local_iterator it, const BaseHashtableInterface* parent) : HT::local_iterator(it), parent_(parent) { } key_type key() { return parent_->it_to_key(*this); } private: friend class BaseHashtableInterface::const_local_iterator; // for its ctor const BaseHashtableInterface* parent_; }; class const_local_iterator : public HT::const_local_iterator { public: const_local_iterator() : parent_(NULL) { } const_local_iterator(typename HT::const_local_iterator it, const BaseHashtableInterface* parent) : HT::const_local_iterator(it), parent_(parent) { } const_local_iterator(typename HT::local_iterator it, BaseHashtableInterface* parent) : HT::const_local_iterator(it), parent_(parent) { } const_local_iterator(local_iterator it) : HT::const_local_iterator(it), parent_(it.parent_) { } key_type key() { return parent_->it_to_key(*this); } private: const BaseHashtableInterface* parent_; }; iterator begin() { return iterator(ht_.begin(), this); } iterator end() { return iterator(ht_.end(), this); } const_iterator begin() const { return const_iterator(ht_.begin(), this); } const_iterator end() const { return const_iterator(ht_.end(), this); } local_iterator begin(size_type i) { return local_iterator(ht_.begin(i), this); } local_iterator end(size_type i) { return local_iterator(ht_.end(i), this); } const_local_iterator 
begin(size_type i) const { return const_local_iterator(ht_.begin(i), this); } const_local_iterator end(size_type i) const { return const_local_iterator(ht_.end(i), this); } hasher hash_funct() const { return ht_.hash_funct(); } hasher hash_function() const { return ht_.hash_function(); } key_equal key_eq() const { return ht_.key_eq(); } allocator_type get_allocator() const { return ht_.get_allocator(); } BaseHashtableInterface(size_type expected_max_items_in_table, const hasher& hf, const key_equal& eql, const allocator_type& alloc) : ht_(expected_max_items_in_table, hf, eql, alloc) { } // Not all ht_'s support this constructor: you should only call it // from a subclass if you know your ht supports it. Otherwise call // the previous constructor, followed by 'insert(f, l);'. template BaseHashtableInterface(InputIterator f, InputIterator l, size_type expected_max_items_in_table, const hasher& hf, const key_equal& eql, const allocator_type& alloc) : ht_(f, l, expected_max_items_in_table, hf, eql, alloc) { } // This is the version of the constructor used by dense_*, which // requires an empty key in the constructor. template BaseHashtableInterface(InputIterator f, InputIterator l, key_type empty_k, size_type expected_max_items_in_table, const hasher& hf, const key_equal& eql, const allocator_type& alloc) : ht_(f, l, empty_k, expected_max_items_in_table, hf, eql, alloc) { } // This is the constructor appropriate for {dense,sparse}hashtable. template BaseHashtableInterface(size_type expected_max_items_in_table, const hasher& hf, const key_equal& eql, const ExtractKey& ek, const SetKey& sk, const allocator_type& alloc) : ht_(expected_max_items_in_table, hf, eql, ek, sk, alloc) { } void clear() { ht_.clear(); } void swap(BaseHashtableInterface& other) { ht_.swap(other.ht_); } // Only part of the API for some hashtable implementations. 
void clear_no_resize() { clear(); } size_type size() const { return ht_.size(); } size_type max_size() const { return ht_.max_size(); } bool empty() const { return ht_.empty(); } size_type bucket_count() const { return ht_.bucket_count(); } size_type max_bucket_count() const { return ht_.max_bucket_count(); } size_type bucket_size(size_type i) const { return ht_.bucket_size(i); } size_type bucket(const key_type& key) const { return ht_.bucket(key); } float load_factor() const { return ht_.load_factor(); } float max_load_factor() const { return ht_.max_load_factor(); } void max_load_factor(float grow) { ht_.max_load_factor(grow); } float min_load_factor() const { return ht_.min_load_factor(); } void min_load_factor(float shrink) { ht_.min_load_factor(shrink); } void set_resizing_parameters(float shrink, float grow) { ht_.set_resizing_parameters(shrink, grow); } void resize(size_type hint) { ht_.resize(hint); } void rehash(size_type hint) { ht_.rehash(hint); } iterator find(const key_type& key) { return iterator(ht_.find(key), this); } const_iterator find(const key_type& key) const { return const_iterator(ht_.find(key), this); } // Rather than try to implement operator[], which doesn't make much // sense for set types, we implement two methods: bracket_equal and // bracket_assign. By default, bracket_equal(a, b) returns true if // ht[a] == b, and false otherwise. (Note that this follows // operator[] semantics exactly, including inserting a if it's not // already in the hashtable, before doing the equality test.) For // sets, which have no operator[], b is ignored, and bracket_equal // returns true if key is in the set and false otherwise. // bracket_assign(a, b) is equivalent to ht[a] = b. For sets, b is // ignored, and bracket_assign is equivalent to ht.insert(a). 
template bool bracket_equal(const key_type& key, const AssignValue& expected) { return ht_[key] == expected; } template void bracket_assign(const key_type& key, const AssignValue& value) { ht_[key] = value; } size_type count(const key_type& key) const { return ht_.count(key); } std::pair equal_range(const key_type& key) { std::pair r = ht_.equal_range(key); return std::pair(iterator(r.first, this), iterator(r.second, this)); } std::pair equal_range(const key_type& key) const { std::pair r = ht_.equal_range(key); return std::pair( const_iterator(r.first, this), const_iterator(r.second, this)); } const_iterator random_element(class ACMRandom* r) const { return const_iterator(ht_.random_element(r), this); } iterator random_element(class ACMRandom* r) { return iterator(ht_.random_element(r), this); } std::pair insert(const value_type& obj) { std::pair r = ht_.insert(obj); return std::pair(iterator(r.first, this), r.second); } template void insert(InputIterator f, InputIterator l) { ht_.insert(f, l); } void insert(typename HT::const_iterator f, typename HT::const_iterator l) { ht_.insert(f, l); } iterator insert(typename HT::iterator, const value_type& obj) { return iterator(insert(obj).first, this); } // These will commonly need to be overridden by the child. 
void set_empty_key(const key_type& k) { ht_.set_empty_key(k); } void clear_empty_key() { ht_.clear_empty_key(); } key_type empty_key() const { return ht_.empty_key(); } void set_deleted_key(const key_type& k) { ht_.set_deleted_key(k); } void clear_deleted_key() { ht_.clear_deleted_key(); } key_type deleted_key() const { return ht_.deleted_key(); } size_type erase(const key_type& key) { return ht_.erase(key); } void erase(typename HT::iterator it) { ht_.erase(it); } void erase(typename HT::iterator f, typename HT::iterator l) { ht_.erase(f, l); } bool operator==(const BaseHashtableInterface& other) const { return ht_ == other.ht_; } bool operator!=(const BaseHashtableInterface& other) const { return ht_ != other.ht_; } template bool serialize(ValueSerializer serializer, OUTPUT *fp) { return ht_.serialize(serializer, fp); } template bool unserialize(ValueSerializer serializer, INPUT *fp) { return ht_.unserialize(serializer, fp); } template bool write_metadata(OUTPUT *fp) { return ht_.write_metadata(fp); } template bool read_metadata(INPUT *fp) { return ht_.read_metadata(fp); } template bool write_nopointer_data(OUTPUT *fp) { return ht_.write_nopointer_data(fp); } template bool read_nopointer_data(INPUT *fp) { return ht_.read_nopointer_data(fp); } // low-level stats int num_table_copies() const { return ht_.num_table_copies(); } // Not part of the hashtable API, but is provided to make testing easier. virtual key_type get_key(const value_type& value) const = 0; // All subclasses should define get_data(value_type) as well. I don't // provide an abstract-virtual definition here, because the return type // differs between subclasses (not all subclasses define data_type). //virtual data_type get_data(const value_type& value) const = 0; //virtual data_type default_data() const = 0; // These allow introspection into the interface. "Supports" means // that the implementation of this functionality isn't a noop. 
virtual bool supports_clear_no_resize() const = 0; virtual bool supports_empty_key() const = 0; virtual bool supports_deleted_key() const = 0; virtual bool supports_brackets() const = 0; // has a 'real' operator[] virtual bool supports_readwrite() const = 0; virtual bool supports_num_table_copies() const = 0; virtual bool supports_serialization() const = 0; protected: HT ht_; // These are what subclasses have to define to get class-specific behavior virtual key_type it_to_key(const iterator& it) const = 0; virtual key_type it_to_key(const const_iterator& it) const = 0; virtual key_type it_to_key(const local_iterator& it) const = 0; virtual key_type it_to_key(const const_local_iterator& it) const = 0; }; // --------------------------------------------------------------------- template , class EqualKey = std::equal_to, class Alloc = libc_allocator_with_realloc > > class HashtableInterface_SparseHashMap : public BaseHashtableInterface< sparse_hash_map > { private: typedef sparse_hash_map ht; typedef BaseHashtableInterface p; // parent public: explicit HashtableInterface_SparseHashMap( typename p::size_type expected_max_items = 0, const typename p::hasher& hf = typename p::hasher(), const typename p::key_equal& eql = typename p::key_equal(), const typename p::allocator_type& alloc = typename p::allocator_type()) : BaseHashtableInterface(expected_max_items, hf, eql, alloc) { } template HashtableInterface_SparseHashMap( InputIterator f, InputIterator l, typename p::size_type expected_max_items = 0, const typename p::hasher& hf = typename p::hasher(), const typename p::key_equal& eql = typename p::key_equal(), const typename p::allocator_type& alloc = typename p::allocator_type()) : BaseHashtableInterface(f, l, expected_max_items, hf, eql, alloc) { } typename p::key_type get_key(const typename p::value_type& value) const { return value.first; } typename ht::data_type get_data(const typename p::value_type& value) const { return value.second; } typename ht::data_type 
default_data() const { return typename ht::data_type(); } bool supports_clear_no_resize() const { return false; } bool supports_empty_key() const { return false; } bool supports_deleted_key() const { return true; } bool supports_brackets() const { return true; } bool supports_readwrite() const { return true; } bool supports_num_table_copies() const { return false; } bool supports_serialization() const { return true; } void set_empty_key(const typename p::key_type& k) { } void clear_empty_key() { } typename p::key_type empty_key() const { return typename p::key_type(); } int num_table_copies() const { return 0; } typedef typename ht::NopointerSerializer NopointerSerializer; protected: template friend void swap(HashtableInterface_SparseHashMap& a, HashtableInterface_SparseHashMap& b); typename p::key_type it_to_key(const typename p::iterator& it) const { return it->first; } typename p::key_type it_to_key(const typename p::const_iterator& it) const { return it->first; } typename p::key_type it_to_key(const typename p::local_iterator& it) const { return it->first; } typename p::key_type it_to_key(const typename p::const_local_iterator& it) const { return it->first; } }; template void swap(HashtableInterface_SparseHashMap& a, HashtableInterface_SparseHashMap& b) { swap(a.ht_, b.ht_); } // --------------------------------------------------------------------- template , class EqualKey = std::equal_to, class Alloc = libc_allocator_with_realloc > class HashtableInterface_SparseHashSet : public BaseHashtableInterface< sparse_hash_set > { private: typedef sparse_hash_set ht; typedef BaseHashtableInterface p; // parent public: // Bizarrely, MSVC 8.0 has trouble with the (perfectly fine) // typename's in this constructor, and this constructor alone, out // of all the ones in the file. So for MSVC, we take some typenames // out, which is technically invalid C++, but MSVC doesn't seem to // mind. 
#ifdef _MSC_VER explicit HashtableInterface_SparseHashSet( typename p::size_type expected_max_items = 0, const typename p::hasher& hf = p::hasher(), const typename p::key_equal& eql = p::key_equal(), const typename p::allocator_type& alloc = p::allocator_type()) : BaseHashtableInterface(expected_max_items, hf, eql, alloc) { } #else explicit HashtableInterface_SparseHashSet( typename p::size_type expected_max_items = 0, const typename p::hasher& hf = typename p::hasher(), const typename p::key_equal& eql = typename p::key_equal(), const typename p::allocator_type& alloc = typename p::allocator_type()) : BaseHashtableInterface(expected_max_items, hf, eql, alloc) { } #endif template HashtableInterface_SparseHashSet( InputIterator f, InputIterator l, typename p::size_type expected_max_items = 0, const typename p::hasher& hf = typename p::hasher(), const typename p::key_equal& eql = typename p::key_equal(), const typename p::allocator_type& alloc = typename p::allocator_type()) : BaseHashtableInterface(f, l, expected_max_items, hf, eql, alloc) { } template bool bracket_equal(const typename p::key_type& key, const AssignValue&) { return this->ht_.find(key) != this->ht_.end(); } template void bracket_assign(const typename p::key_type& key, const AssignValue&) { this->ht_.insert(key); } typename p::key_type get_key(const typename p::value_type& value) const { return value; } // For sets, the only 'data' is that an item is actually inserted. 
bool get_data(const typename p::value_type&) const { return true; } bool default_data() const { return true; } bool supports_clear_no_resize() const { return false; } bool supports_empty_key() const { return false; } bool supports_deleted_key() const { return true; } bool supports_brackets() const { return false; } bool supports_readwrite() const { return true; } bool supports_num_table_copies() const { return false; } bool supports_serialization() const { return true; } void set_empty_key(const typename p::key_type& k) { } void clear_empty_key() { } typename p::key_type empty_key() const { return typename p::key_type(); } int num_table_copies() const { return 0; } typedef typename ht::NopointerSerializer NopointerSerializer; protected: template friend void swap(HashtableInterface_SparseHashSet& a, HashtableInterface_SparseHashSet& b); typename p::key_type it_to_key(const typename p::iterator& it) const { return *it; } typename p::key_type it_to_key(const typename p::const_iterator& it) const { return *it; } typename p::key_type it_to_key(const typename p::local_iterator& it) const { return *it; } typename p::key_type it_to_key(const typename p::const_local_iterator& it) const { return *it; } }; template void swap(HashtableInterface_SparseHashSet& a, HashtableInterface_SparseHashSet& b) { swap(a.ht_, b.ht_); } // --------------------------------------------------------------------- template class HashtableInterface_SparseHashtable : public BaseHashtableInterface< sparse_hashtable > { private: typedef sparse_hashtable ht; typedef BaseHashtableInterface p; // parent public: explicit HashtableInterface_SparseHashtable( typename p::size_type expected_max_items = 0, const typename p::hasher& hf = typename p::hasher(), const typename p::key_equal& eql = typename p::key_equal(), const typename p::allocator_type& alloc = typename p::allocator_type()) : BaseHashtableInterface(expected_max_items, hf, eql, ExtractKey(), SetKey(), alloc) { } template 
HashtableInterface_SparseHashtable( InputIterator f, InputIterator l, typename p::size_type expected_max_items = 0, const typename p::hasher& hf = typename p::hasher(), const typename p::key_equal& eql = typename p::key_equal(), const typename p::allocator_type& alloc = typename p::allocator_type()) : BaseHashtableInterface(expected_max_items, hf, eql, ExtractKey(), SetKey(), alloc) { this->insert(f, l); } float max_load_factor() const { float shrink, grow; this->ht_.get_resizing_parameters(&shrink, &grow); return grow; } void max_load_factor(float new_grow) { float shrink, grow; this->ht_.get_resizing_parameters(&shrink, &grow); this->ht_.set_resizing_parameters(shrink, new_grow); } float min_load_factor() const { float shrink, grow; this->ht_.get_resizing_parameters(&shrink, &grow); return shrink; } void min_load_factor(float new_shrink) { float shrink, grow; this->ht_.get_resizing_parameters(&shrink, &grow); this->ht_.set_resizing_parameters(new_shrink, grow); } template bool bracket_equal(const typename p::key_type&, const AssignValue&) { return false; } template void bracket_assign(const typename p::key_type&, const AssignValue&) { } typename p::key_type get_key(const typename p::value_type& value) const { return extract_key(value); } typename p::value_type get_data(const typename p::value_type& value) const { return value; } typename p::value_type default_data() const { return typename p::value_type(); } bool supports_clear_no_resize() const { return false; } bool supports_empty_key() const { return false; } bool supports_deleted_key() const { return true; } bool supports_brackets() const { return false; } bool supports_readwrite() const { return true; } bool supports_num_table_copies() const { return true; } bool supports_serialization() const { return true; } void set_empty_key(const typename p::key_type& k) { } void clear_empty_key() { } typename p::key_type empty_key() const { return typename p::key_type(); } // These tr1 names aren't defined for 
sparse_hashtable. typename p::hasher hash_function() { return this->hash_funct(); } void rehash(typename p::size_type hint) { this->resize(hint); } // TODO(csilvers): also support/test destructive_begin()/destructive_end()? typedef typename ht::NopointerSerializer NopointerSerializer; protected: template friend void swap( HashtableInterface_SparseHashtable& a, HashtableInterface_SparseHashtable& b); typename p::key_type it_to_key(const typename p::iterator& it) const { return extract_key(*it); } typename p::key_type it_to_key(const typename p::const_iterator& it) const { return extract_key(*it); } typename p::key_type it_to_key(const typename p::local_iterator& it) const { return extract_key(*it); } typename p::key_type it_to_key(const typename p::const_local_iterator& it) const { return extract_key(*it); } private: ExtractKey extract_key; }; template void swap(HashtableInterface_SparseHashtable& a, HashtableInterface_SparseHashtable& b) { swap(a.ht_, b.ht_); } // --------------------------------------------------------------------- // Unlike dense_hash_map, the wrapper class takes an extra template // value saying what the empty key is. 
template , class EqualKey = std::equal_to, class Alloc = libc_allocator_with_realloc > > class HashtableInterface_DenseHashMap : public BaseHashtableInterface< dense_hash_map > { private: typedef dense_hash_map ht; typedef BaseHashtableInterface p; // parent public: explicit HashtableInterface_DenseHashMap( typename p::size_type expected_max_items = 0, const typename p::hasher& hf = typename p::hasher(), const typename p::key_equal& eql = typename p::key_equal(), const typename p::allocator_type& alloc = typename p::allocator_type()) : BaseHashtableInterface(expected_max_items, hf, eql, alloc) { this->set_empty_key(EMPTY_KEY); } template HashtableInterface_DenseHashMap( InputIterator f, InputIterator l, typename p::size_type expected_max_items = 0, const typename p::hasher& hf = typename p::hasher(), const typename p::key_equal& eql = typename p::key_equal(), const typename p::allocator_type& alloc = typename p::allocator_type()) : BaseHashtableInterface(f, l, EMPTY_KEY, expected_max_items, hf, eql, alloc) { } void clear_no_resize() { this->ht_.clear_no_resize(); } typename p::key_type get_key(const typename p::value_type& value) const { return value.first; } typename ht::data_type get_data(const typename p::value_type& value) const { return value.second; } typename ht::data_type default_data() const { return typename ht::data_type(); } bool supports_clear_no_resize() const { return true; } bool supports_empty_key() const { return true; } bool supports_deleted_key() const { return true; } bool supports_brackets() const { return true; } bool supports_readwrite() const { return false; } bool supports_num_table_copies() const { return false; } bool supports_serialization() const { return true; } typedef typename ht::NopointerSerializer NopointerSerializer; template bool write_metadata(OUTPUT *) { return false; } template bool read_metadata(INPUT *) { return false; } template bool write_nopointer_data(OUTPUT *) { return false; } template bool read_nopointer_data(INPUT 
*) { return false; } int num_table_copies() const { return 0; } protected: template friend void swap(HashtableInterface_DenseHashMap& a, HashtableInterface_DenseHashMap& b); typename p::key_type it_to_key(const typename p::iterator& it) const { return it->first; } typename p::key_type it_to_key(const typename p::const_iterator& it) const { return it->first; } typename p::key_type it_to_key(const typename p::local_iterator& it) const { return it->first; } typename p::key_type it_to_key(const typename p::const_local_iterator& it) const { return it->first; } }; template void swap(HashtableInterface_DenseHashMap& a, HashtableInterface_DenseHashMap& b) { swap(a.ht_, b.ht_); } // --------------------------------------------------------------------- // Unlike dense_hash_set, the wrapper class takes an extra template // value saying what the empty key is. template , class EqualKey = std::equal_to, class Alloc = libc_allocator_with_realloc > class HashtableInterface_DenseHashSet : public BaseHashtableInterface< dense_hash_set > { private: typedef dense_hash_set ht; typedef BaseHashtableInterface p; // parent public: explicit HashtableInterface_DenseHashSet( typename p::size_type expected_max_items = 0, const typename p::hasher& hf = typename p::hasher(), const typename p::key_equal& eql = typename p::key_equal(), const typename p::allocator_type& alloc = typename p::allocator_type()) : BaseHashtableInterface(expected_max_items, hf, eql, alloc) { this->set_empty_key(EMPTY_KEY); } template HashtableInterface_DenseHashSet( InputIterator f, InputIterator l, typename p::size_type expected_max_items = 0, const typename p::hasher& hf = typename p::hasher(), const typename p::key_equal& eql = typename p::key_equal(), const typename p::allocator_type& alloc = typename p::allocator_type()) : BaseHashtableInterface(f, l, EMPTY_KEY, expected_max_items, hf, eql, alloc) { } void clear_no_resize() { this->ht_.clear_no_resize(); } template bool bracket_equal(const typename p::key_type& 
key, const AssignValue&) { return this->ht_.find(key) != this->ht_.end(); } template void bracket_assign(const typename p::key_type& key, const AssignValue&) { this->ht_.insert(key); } typename p::key_type get_key(const typename p::value_type& value) const { return value; } bool get_data(const typename p::value_type&) const { return true; } bool default_data() const { return true; } bool supports_clear_no_resize() const { return true; } bool supports_empty_key() const { return true; } bool supports_deleted_key() const { return true; } bool supports_brackets() const { return false; } bool supports_readwrite() const { return false; } bool supports_num_table_copies() const { return false; } bool supports_serialization() const { return true; } typedef typename ht::NopointerSerializer NopointerSerializer; template bool write_metadata(OUTPUT *) { return false; } template bool read_metadata(INPUT *) { return false; } template bool write_nopointer_data(OUTPUT *) { return false; } template bool read_nopointer_data(INPUT *) { return false; } int num_table_copies() const { return 0; } protected: template friend void swap(HashtableInterface_DenseHashSet& a, HashtableInterface_DenseHashSet& b); typename p::key_type it_to_key(const typename p::iterator& it) const { return *it; } typename p::key_type it_to_key(const typename p::const_iterator& it) const { return *it; } typename p::key_type it_to_key(const typename p::local_iterator& it) const { return *it; } typename p::key_type it_to_key(const typename p::const_local_iterator& it) const { return *it; } }; template void swap(HashtableInterface_DenseHashSet& a, HashtableInterface_DenseHashSet& b) { swap(a.ht_, b.ht_); } // --------------------------------------------------------------------- // Unlike dense_hashtable, the wrapper class takes an extra template // value saying what the empty key is. 
template class HashtableInterface_DenseHashtable : public BaseHashtableInterface< dense_hashtable > { private: typedef dense_hashtable ht; typedef BaseHashtableInterface p; // parent public: explicit HashtableInterface_DenseHashtable( typename p::size_type expected_max_items = 0, const typename p::hasher& hf = typename p::hasher(), const typename p::key_equal& eql = typename p::key_equal(), const typename p::allocator_type& alloc = typename p::allocator_type()) : BaseHashtableInterface(expected_max_items, hf, eql, ExtractKey(), SetKey(), alloc) { this->set_empty_key(EMPTY_KEY); } template HashtableInterface_DenseHashtable( InputIterator f, InputIterator l, typename p::size_type expected_max_items = 0, const typename p::hasher& hf = typename p::hasher(), const typename p::key_equal& eql = typename p::key_equal(), const typename p::allocator_type& alloc = typename p::allocator_type()) : BaseHashtableInterface(expected_max_items, hf, eql, ExtractKey(), SetKey(), alloc) { this->set_empty_key(EMPTY_KEY); this->insert(f, l); } void clear_no_resize() { this->ht_.clear_no_resize(); } float max_load_factor() const { float shrink, grow; this->ht_.get_resizing_parameters(&shrink, &grow); return grow; } void max_load_factor(float new_grow) { float shrink, grow; this->ht_.get_resizing_parameters(&shrink, &grow); this->ht_.set_resizing_parameters(shrink, new_grow); } float min_load_factor() const { float shrink, grow; this->ht_.get_resizing_parameters(&shrink, &grow); return shrink; } void min_load_factor(float new_shrink) { float shrink, grow; this->ht_.get_resizing_parameters(&shrink, &grow); this->ht_.set_resizing_parameters(new_shrink, grow); } template bool bracket_equal(const typename p::key_type&, const AssignValue&) { return false; } template void bracket_assign(const typename p::key_type&, const AssignValue&) { } typename p::key_type get_key(const typename p::value_type& value) const { return extract_key(value); } typename p::value_type get_data(const typename 
p::value_type& value) const { return value; } typename p::value_type default_data() const { return typename p::value_type(); } bool supports_clear_no_resize() const { return true; } bool supports_empty_key() const { return true; } bool supports_deleted_key() const { return true; } bool supports_brackets() const { return false; } bool supports_readwrite() const { return false; } bool supports_num_table_copies() const { return true; } bool supports_serialization() const { return true; } typedef typename ht::NopointerSerializer NopointerSerializer; template bool write_metadata(OUTPUT *) { return false; } template bool read_metadata(INPUT *) { return false; } template bool write_nopointer_data(OUTPUT *) { return false; } template bool read_nopointer_data(INPUT *) { return false; } // These tr1 names aren't defined for dense_hashtable. typename p::hasher hash_function() { return this->hash_funct(); } void rehash(typename p::size_type hint) { this->resize(hint); } protected: template friend void swap( HashtableInterface_DenseHashtable& a, HashtableInterface_DenseHashtable& b); typename p::key_type it_to_key(const typename p::iterator& it) const { return extract_key(*it); } typename p::key_type it_to_key(const typename p::const_iterator& it) const { return extract_key(*it); } typename p::key_type it_to_key(const typename p::local_iterator& it) const { return extract_key(*it); } typename p::key_type it_to_key(const typename p::const_local_iterator& it) const { return extract_key(*it); } private: ExtractKey extract_key; }; template void swap(HashtableInterface_DenseHashtable& a, HashtableInterface_DenseHashtable& b) { swap(a.ht_, b.ht_); } _END_GOOGLE_NAMESPACE_ #endif // UTIL_GTL_HASH_TEST_INTERFACE_H_ sparsehash-2.0.2/src/time_hash_map.cc // Copyright (c) 2005, Google Inc. // All rights reserved.
// // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // --- // Authors: Sanjay Ghemawat and Craig Silverstein // Time various hash map implementations // // Below, times are per-call. "Memory use" is "bytes in use by // application" as reported by tcmalloc, compared before and after the // function call. This does not really report fragmentation, which is // not bad for the sparse* routines but bad for the dense* ones. 
// // The tests generally yield best-case performance because the // code uses sequential keys; on the other hand, "map_fetch_random" does // lookups in a pseudorandom order. Also, "stresshashfunction" is // a stress test of sorts. It uses keys from an arithmetic sequence, which, // if combined with a quick-and-dirty hash function, will yield worse // performance than the otherwise similar "map_predict/grow." // // Consider doing the following to get good numbers: // // 1. Run the tests on a machine with no X service. Make sure no other // processes are running. // 2. Minimize compiled-code differences. Compare results from the same // binary, if possible, instead of comparing results from two different // binaries. // // See PERFORMANCE for the output of one example run. #include #include #ifdef HAVE_INTTYPES_H # include #endif // for uintptr_t #include #include #include extern "C" { #include #ifdef HAVE_SYS_TIME_H # include #endif #ifdef HAVE_SYS_RESOURCE_H # include #endif #ifdef HAVE_SYS_UTSNAME_H # include #endif // for uname() } // The functions that we call on each map, that differ for different types. // By default each is a noop, but we redefine them for types that need them. 
#include #include HASH_MAP_H #include #include #include #include #include using std::map; using std::swap; using std::vector; using GOOGLE_NAMESPACE::dense_hash_map; using GOOGLE_NAMESPACE::sparse_hash_map; static bool FLAGS_test_sparse_hash_map = true; static bool FLAGS_test_dense_hash_map = true; static bool FLAGS_test_hash_map = true; static bool FLAGS_test_map = true; static bool FLAGS_test_4_bytes = true; static bool FLAGS_test_8_bytes = true; static bool FLAGS_test_16_bytes = true; static bool FLAGS_test_256_bytes = true; #if defined(HAVE_UNORDERED_MAP) using HASH_NAMESPACE::unordered_map; #elif defined(HAVE_HASH_MAP) || defined(_MSC_VER) using HASH_NAMESPACE::hash_map; #endif static const int kDefaultIters = 10000000; // A version of each of the hashtable classes we test, that has been // augmented to provide a common interface. For instance, the // sparse_hash_map and dense_hash_map versions set empty-key and // deleted-key (we can do this because all our tests use int-like // keys), so the users don't have to. The hash_map version adds // resize(), so users can just call resize() for all tests without // worrying about whether the map-type supports it or not. template class EasyUseSparseHashMap : public sparse_hash_map { public: EasyUseSparseHashMap() { this->set_deleted_key(-1); } }; template class EasyUseDenseHashMap : public dense_hash_map { public: EasyUseDenseHashMap() { this->set_empty_key(-1); this->set_deleted_key(-2); } }; // For pointers, we only set the empty key.
template class EasyUseSparseHashMap : public sparse_hash_map { public: EasyUseSparseHashMap() { } }; template class EasyUseDenseHashMap : public dense_hash_map { public: EasyUseDenseHashMap() { this->set_empty_key((K*)(~0)); } }; #if defined(HAVE_UNORDERED_MAP) template class EasyUseHashMap : public unordered_map { public: // resize() is called rehash() in tr1 void resize(size_t r) { this->rehash(r); } }; #elif defined(_MSC_VER) template class EasyUseHashMap : public hash_map { public: void resize(size_t r) { } }; #elif defined(HAVE_HASH_MAP) template class EasyUseHashMap : public hash_map { public: // Don't need to do anything: hash_map is already easy to use! }; #endif template class EasyUseMap : public map { public: void resize(size_t) { } // map<> doesn't support resize }; // Returns the number of hashes that have been done since the last // call to NumHashesSinceLastCall(). This is shared across all // HashObject instances, which isn't super-OO, but avoids two issues: // (1) making HashObject bigger than it ought to be (this is very // important for our testing), and (2) having to pass around // HashObject objects everywhere, which is annoying. static int g_num_hashes; static int g_num_copies; int NumHashesSinceLastCall() { int retval = g_num_hashes; g_num_hashes = 0; return retval; } int NumCopiesSinceLastCall() { int retval = g_num_copies; g_num_copies = 0; return retval; } /* * These are the objects we hash. Size is the size of the object * (must be > sizeof(int)). Hashsize is how many of these bytes we * use when hashing (must be > sizeof(int) and <= Size).
*/ template class HashObject { public: typedef HashObject class_type; HashObject() {} HashObject(int i) : i_(i) { memset(buffer_, i & 255, sizeof(buffer_)); // a "random" char } HashObject(const HashObject& that) { operator=(that); } void operator=(const HashObject& that) { g_num_copies++; this->i_ = that.i_; memcpy(this->buffer_, that.buffer_, sizeof(this->buffer_)); } size_t Hash() const { g_num_hashes++; int hashval = i_; for (size_t i = 0; i < Hashsize - sizeof(i_); ++i) { hashval += buffer_[i]; } return SPARSEHASH_HASH()(hashval); } bool operator==(const class_type& that) const { return this->i_ == that.i_; } bool operator< (const class_type& that) const { return this->i_ < that.i_; } bool operator<=(const class_type& that) const { return this->i_ <= that.i_; } private: int i_; // the key used for hashing char buffer_[Size - sizeof(int)]; }; // A specialization for the case sizeof(buffer_) == 0 template<> class HashObject { public: typedef HashObject class_type; HashObject() {} HashObject(int i) : i_(i) {} HashObject(const HashObject& that) { operator=(that); } void operator=(const HashObject& that) { g_num_copies++; this->i_ = that.i_; } size_t Hash() const { g_num_hashes++; return SPARSEHASH_HASH()(i_); } bool operator==(const class_type& that) const { return this->i_ == that.i_; } bool operator< (const class_type& that) const { return this->i_ < that.i_; } bool operator<=(const class_type& that) const { return this->i_ <= that.i_; } private: int i_; // the key used for hashing }; _START_GOOGLE_NAMESPACE_ // Let the hashtable implementations know it can use an optimized memcpy, // because the compiler defines both the destructor and copy constructor. template struct has_trivial_copy< HashObject > : true_type { }; template struct has_trivial_destructor< HashObject > : true_type { }; _END_GOOGLE_NAMESPACE_ class HashFn { public: template size_t operator()(const HashObject& obj) const { return obj.Hash(); } // Do the identity hash for pointers. 
template size_t operator()(const HashObject* obj) const { return reinterpret_cast(obj); } // Less operator for MSVC's hash containers. template bool operator()(const HashObject& a, const HashObject& b) const { return a < b; } template bool operator()(const HashObject* a, const HashObject* b) const { return a < b; } // These two public members are required by msvc. 4 and 8 are defaults. static const size_t bucket_size = 4; static const size_t min_buckets = 8; }; /* * Measure resource usage. */ class Rusage { public: /* Start collecting usage */ Rusage() { Reset(); } /* Reset collection */ void Reset(); /* Show usage, in seconds */ double UserTime(); private: #if defined HAVE_SYS_RESOURCE_H struct rusage start; #elif defined HAVE_WINDOWS_H long long int start; #else time_t start_time_t; #endif }; inline void Rusage::Reset() { #if defined HAVE_SYS_RESOURCE_H getrusage(RUSAGE_SELF, &start); #elif defined HAVE_WINDOWS_H start = GetTickCount(); #else time(&start_time_t); #endif } inline double Rusage::UserTime() { #if defined HAVE_SYS_RESOURCE_H struct rusage u; getrusage(RUSAGE_SELF, &u); struct timeval result; result.tv_sec = u.ru_utime.tv_sec - start.ru_utime.tv_sec; result.tv_usec = u.ru_utime.tv_usec - start.ru_utime.tv_usec; return double(result.tv_sec) + double(result.tv_usec) / 1000000.0; #elif defined HAVE_WINDOWS_H return double(GetTickCount() - start) / 1000.0; #else time_t now; time(&now); return now - start_time_t; #endif } static void print_uname() { #ifdef HAVE_SYS_UTSNAME_H struct utsname u; if (uname(&u) == 0) { printf("%s %s %s %s %s\n", u.sysname, u.nodename, u.release, u.version, u.machine); } #endif } // Generate stamp for this run static void stamp_run(int iters) { time_t now = time(0); printf("======\n"); fflush(stdout); print_uname(); printf("Average over %d iterations\n", iters); fflush(stdout); // don't need asctime_r/gmtime_r: we're not threaded printf("Current time (GMT): %s", asctime(gmtime(&now))); } // This depends on the malloc 
implementation for exactly what it does // -- and thus requires work after the fact to make sense of the // numbers -- and also is likely thrown off by the memory management // STL tries to do on its own. #ifdef HAVE_GOOGLE_MALLOC_EXTENSION_H #include static size_t CurrentMemoryUsage() { size_t result; if (MallocExtension::instance()->GetNumericProperty( "generic.current_allocated_bytes", &result)) { return result; } else { return 0; } } #else /* not HAVE_GOOGLE_MALLOC_EXTENSION_H */ static size_t CurrentMemoryUsage() { return 0; } #endif static void report(char const* title, double t, int iters, size_t start_memory, size_t end_memory) { // Construct heap growth report text if applicable char heap[100] = ""; if (end_memory > start_memory) { snprintf(heap, sizeof(heap), "%7.1f MB", (end_memory - start_memory) / 1048576.0); } printf("%-20s %6.1f ns (%8d hashes, %8d copies)%s\n", title, (t * 1000000000.0 / iters), NumHashesSinceLastCall(), NumCopiesSinceLastCall(), heap); fflush(stdout); } template static void time_map_grow(int iters) { MapType set; Rusage t; const size_t start = CurrentMemoryUsage(); t.Reset(); for (int i = 0; i < iters; i++) { set[i] = i+1; } double ut = t.UserTime(); const size_t finish = CurrentMemoryUsage(); report("map_grow", ut, iters, start, finish); } template static void time_map_grow_predicted(int iters) { MapType set; Rusage t; const size_t start = CurrentMemoryUsage(); set.resize(iters); t.Reset(); for (int i = 0; i < iters; i++) { set[i] = i+1; } double ut = t.UserTime(); const size_t finish = CurrentMemoryUsage(); report("map_predict/grow", ut, iters, start, finish); } template static void time_map_replace(int iters) { MapType set; Rusage t; int i; for (i = 0; i < iters; i++) { set[i] = i+1; } t.Reset(); for (i = 0; i < iters; i++) { set[i] = i+1; } double ut = t.UserTime(); report("map_replace", ut, iters, 0, 0); } template static void time_map_fetch(int iters, const vector& indices, char const* title) { MapType set; Rusage t; int r; 
int i; for (i = 0; i < iters; i++) { set[i] = i+1; } r = 1; t.Reset(); for (i = 0; i < iters; i++) { r ^= static_cast(set.find(indices[i]) != set.end()); } double ut = t.UserTime(); srand(r); // keep compiler from optimizing away r (we never call rand()) report(title, ut, iters, 0, 0); } template static void time_map_fetch_sequential(int iters) { vector v(iters); for (int i = 0; i < iters; i++) { v[i] = i; } time_map_fetch(iters, v, "map_fetch_sequential"); } // Apply a pseudorandom permutation to the given vector. static void shuffle(vector* v) { srand(9); for (int n = v->size(); n >= 2; n--) { swap((*v)[n - 1], (*v)[static_cast(rand()) % n]); } } template static void time_map_fetch_random(int iters) { vector v(iters); for (int i = 0; i < iters; i++) { v[i] = i; } shuffle(&v); time_map_fetch(iters, v, "map_fetch_random"); } template static void time_map_fetch_empty(int iters) { MapType set; Rusage t; int r; int i; r = 1; t.Reset(); for (i = 0; i < iters; i++) { r ^= static_cast(set.find(i) != set.end()); } double ut = t.UserTime(); srand(r); // keep compiler from optimizing away r (we never call rand()) report("map_fetch_empty", ut, iters, 0, 0); } template static void time_map_remove(int iters) { MapType set; Rusage t; int i; for (i = 0; i < iters; i++) { set[i] = i+1; } t.Reset(); for (i = 0; i < iters; i++) { set.erase(i); } double ut = t.UserTime(); report("map_remove", ut, iters, 0, 0); } template static void time_map_toggle(int iters) { MapType set; Rusage t; int i; const size_t start = CurrentMemoryUsage(); t.Reset(); for (i = 0; i < iters; i++) { set[i] = i+1; set.erase(i); } double ut = t.UserTime(); const size_t finish = CurrentMemoryUsage(); report("map_toggle", ut, iters, start, finish); } template static void time_map_iterate(int iters) { MapType set; Rusage t; int r; int i; for (i = 0; i < iters; i++) { set[i] = i+1; } r = 1; t.Reset(); for (typename MapType::const_iterator it = set.begin(), it_end = set.end(); it != it_end; ++it) { r ^= it->second; 
} double ut = t.UserTime(); srand(r); // keep compiler from optimizing away r (we never call rand()) report("map_iterate", ut, iters, 0, 0); } template static void stresshashfunction(int desired_insertions, int map_size, int stride) { Rusage t; int num_insertions = 0; // One measurement of user time (in seconds) is done for each iteration of // the outer loop. The times are summed. double total_seconds = 0; const int k = desired_insertions / map_size; MapType set; for (int o = 0; o < k; o++) { set.clear(); set.resize(map_size); t.Reset(); const int maxint = (1ull << (sizeof(int) * 8 - 1)) - 1; // Use n arithmetic sequences. Using just one may lead to overflow // if stride * map_size > maxint. Compute n by requiring // stride * map_size/n < maxint, i.e., map_size/(maxint/stride) < n char* key; // something we can do math on const int n = map_size / (maxint / stride) + 1; for (int i = 0; i < n; i++) { key = NULL; key += i; for (int j = 0; j < map_size/n; j++) { key += stride; set[reinterpret_cast(key)] = ++num_insertions; } } total_seconds += t.UserTime(); } printf("stresshashfunction map_size=%d stride=%d: %.1fns/insertion\n", map_size, stride, total_seconds * 1e9 / num_insertions); } template static void stresshashfunction(int num_inserts) { static const int kMapSizes[] = {256, 1024}; for (unsigned i = 0; i < sizeof(kMapSizes) / sizeof(kMapSizes[0]); i++) { const int map_size = kMapSizes[i]; for (int stride = 1; stride <= map_size; stride *= map_size) { stresshashfunction(num_inserts, map_size, stride); } } } template static void measure_map(const char* label, int obj_size, int iters, bool stress_hash_function) { printf("\n%s (%d byte objects, %d iterations):\n", label, obj_size, iters); if (1) time_map_grow(iters); if (1) time_map_grow_predicted(iters); if (1) time_map_replace(iters); if (1) time_map_fetch_random(iters); if (1) time_map_fetch_sequential(iters); if (1) time_map_fetch_empty(iters); if (1) time_map_remove(iters); if (1) time_map_toggle(iters); if (1) 
time_map_iterate(iters); // This last test is useful only if the map type uses hashing. // And it's slow, so use fewer iterations. if (stress_hash_function) { // Blank line in the output makes clear that what follows isn't part of the // table of results that we just printed. puts(""); stresshashfunction(iters / 4); } } template static void test_all_maps(int obj_size, int iters) { const bool stress_hash_function = obj_size <= 8; if (FLAGS_test_sparse_hash_map) measure_map< EasyUseSparseHashMap, EasyUseSparseHashMap >( "SPARSE_HASH_MAP", obj_size, iters, stress_hash_function); if (FLAGS_test_dense_hash_map) measure_map< EasyUseDenseHashMap, EasyUseDenseHashMap >( "DENSE_HASH_MAP", obj_size, iters, stress_hash_function); if (FLAGS_test_hash_map) measure_map< EasyUseHashMap, EasyUseHashMap >( "STANDARD HASH_MAP", obj_size, iters, stress_hash_function); if (FLAGS_test_map) measure_map< EasyUseMap, EasyUseMap >( "STANDARD MAP", obj_size, iters, false); } int main(int argc, char** argv) { int iters = kDefaultIters; if (argc > 1) { // first arg is # of iterations iters = atoi(argv[1]); } stamp_run(iters); #ifndef HAVE_SYS_RESOURCE_H printf("\n*** WARNING ***: sys/resource.h was not found, so all times\n" " reported are wall-clock time, not user time\n"); #endif // It would be nice to set these at run-time, but by setting them at // compile-time, we allow optimizations that make it as fast to use // a HashObject as it would be to use just a straight int/char // buffer. To keep memory use similar, we normalize the number of // iterations based on size.
if (FLAGS_test_4_bytes) test_all_maps< HashObject<4,4> >(4, iters/1); if (FLAGS_test_8_bytes) test_all_maps< HashObject<8,8> >(8, iters/2); if (FLAGS_test_16_bytes) test_all_maps< HashObject<16,16> >(16, iters/4); if (FLAGS_test_256_bytes) test_all_maps< HashObject<256,256> >(256, iters/32); return 0; } sparsehash-2.0.2/src/type_traits_unittest.cc // Copyright (c) 2006, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// ---- #include #include #include #include // for exit() #include #include #include #include "testutil.h" typedef int int32; typedef long int64; using std::string; using std::vector; using std::pair; using GOOGLE_NAMESPACE::add_reference; using GOOGLE_NAMESPACE::has_trivial_assign; using GOOGLE_NAMESPACE::has_trivial_constructor; using GOOGLE_NAMESPACE::has_trivial_copy; using GOOGLE_NAMESPACE::has_trivial_destructor; #if !defined(_MSC_VER) && !(defined(__GNUC__) && __GNUC__ <= 3) using GOOGLE_NAMESPACE::is_convertible; using GOOGLE_NAMESPACE::is_enum; #endif using GOOGLE_NAMESPACE::is_floating_point; using GOOGLE_NAMESPACE::is_integral; using GOOGLE_NAMESPACE::is_pointer; using GOOGLE_NAMESPACE::is_pod; using GOOGLE_NAMESPACE::is_reference; using GOOGLE_NAMESPACE::is_same; using GOOGLE_NAMESPACE::remove_const; using GOOGLE_NAMESPACE::remove_cv; using GOOGLE_NAMESPACE::remove_pointer; using GOOGLE_NAMESPACE::remove_reference; using GOOGLE_NAMESPACE::remove_volatile; // This assertion produces errors like "error: invalid use of // incomplete type 'struct ::AssertTypesEq'" // when it fails. template struct AssertTypesEq; template struct AssertTypesEq {}; #define COMPILE_ASSERT_TYPES_EQ(T, U) static_cast(AssertTypesEq()) // A user-defined POD type. struct A { int n_; }; // A user-defined non-POD type with a trivial copy constructor. class B { public: explicit B(int n) : n_(n) { } private: int n_; }; // Another user-defined non-POD type with a trivial copy constructor. // We will explicitly declare C to have a trivial copy constructor // by specializing has_trivial_copy. class C { public: explicit C(int n) : n_(n) { } private: int n_; }; _START_GOOGLE_NAMESPACE_ template<> struct has_trivial_copy : true_type { }; _END_GOOGLE_NAMESPACE_ // Another user-defined non-POD type with a trivial assignment operator. // We will explicitly declare C to have a trivial assignment operator // by specializing has_trivial_assign. 
class D { public: explicit D(int n) : n_(n) { } private: int n_; }; _START_GOOGLE_NAMESPACE_ template<> struct has_trivial_assign : true_type { }; _END_GOOGLE_NAMESPACE_ // Another user-defined non-POD type with a trivial constructor. // We will explicitly declare E to have a trivial constructor // by specializing has_trivial_constructor. class E { public: int n_; }; _START_GOOGLE_NAMESPACE_ template<> struct has_trivial_constructor : true_type { }; _END_GOOGLE_NAMESPACE_ // Another user-defined non-POD type with a trivial destructor. // We will explicitly declare E to have a trivial destructor // by specializing has_trivial_destructor. class F { public: explicit F(int n) : n_(n) { } private: int n_; }; _START_GOOGLE_NAMESPACE_ template<> struct has_trivial_destructor : true_type { }; _END_GOOGLE_NAMESPACE_ enum G {}; union H {}; class I { public: operator int() const; }; class J { private: operator int() const; }; namespace { // A base class and a derived class that inherits from it, used for // testing conversion type traits. class Base { public: virtual ~Base() { } }; class Derived : public Base { }; TEST(TypeTraitsTest, TestIsInteger) { // Verify that is_integral is true for all integer types. EXPECT_TRUE(is_integral::value); EXPECT_TRUE(is_integral::value); EXPECT_TRUE(is_integral::value); EXPECT_TRUE(is_integral::value); EXPECT_TRUE(is_integral::value); EXPECT_TRUE(is_integral::value); EXPECT_TRUE(is_integral::value); EXPECT_TRUE(is_integral::value); EXPECT_TRUE(is_integral::value); EXPECT_TRUE(is_integral::value); EXPECT_TRUE(is_integral::value); // Verify that is_integral is false for a few non-integer types. EXPECT_FALSE(is_integral::value); EXPECT_FALSE(is_integral::value); EXPECT_FALSE(is_integral::value); EXPECT_FALSE(is_integral::value); EXPECT_FALSE(is_integral::value); EXPECT_FALSE((is_integral >::value)); // Verify that cv-qualified integral types are still integral, and // cv-qualified non-integral types are still non-integral. 
EXPECT_TRUE(is_integral::value); EXPECT_TRUE(is_integral::value); EXPECT_TRUE(is_integral::value); EXPECT_FALSE(is_integral::value); EXPECT_FALSE(is_integral::value); EXPECT_FALSE(is_integral::value); } TEST(TypeTraitsTest, TestIsFloating) { // Verify that is_floating_point is true for all floating-point types. EXPECT_TRUE(is_floating_point::value); EXPECT_TRUE(is_floating_point::value); EXPECT_TRUE(is_floating_point::value); // Verify that is_floating_point is false for a few non-float types. EXPECT_FALSE(is_floating_point::value); EXPECT_FALSE(is_floating_point::value); EXPECT_FALSE(is_floating_point::value); EXPECT_FALSE(is_floating_point::value); EXPECT_FALSE(is_floating_point::value); EXPECT_FALSE((is_floating_point >::value)); // Verify that cv-qualified floating point types are still floating, and // cv-qualified non-floating types are still non-floating. EXPECT_TRUE(is_floating_point::value); EXPECT_TRUE(is_floating_point::value); EXPECT_TRUE(is_floating_point::value); EXPECT_FALSE(is_floating_point::value); EXPECT_FALSE(is_floating_point::value); EXPECT_FALSE(is_floating_point::value); } TEST(TypeTraitsTest, TestIsPointer) { // Verify that is_pointer is true for some pointer types. EXPECT_TRUE(is_pointer::value); EXPECT_TRUE(is_pointer::value); EXPECT_TRUE(is_pointer::value); EXPECT_TRUE(is_pointer::value); EXPECT_TRUE(is_pointer::value); // Verify that is_pointer is false for some non-pointer types. EXPECT_FALSE(is_pointer::value); EXPECT_FALSE(is_pointer::value); EXPECT_FALSE(is_pointer::value); EXPECT_FALSE(is_pointer >::value); EXPECT_FALSE(is_pointer::value); // A function pointer is a pointer, but a function type, or a function // reference type, is not. EXPECT_TRUE(is_pointer::value); EXPECT_FALSE(is_pointer::value); EXPECT_FALSE(is_pointer::value); // Verify that is_pointer is true for some cv-qualified pointer types, // and false for some cv-qualified non-pointer types. 
EXPECT_TRUE(is_pointer::value); EXPECT_TRUE(is_pointer::value); EXPECT_TRUE(is_pointer::value); EXPECT_FALSE(is_pointer::value); EXPECT_FALSE(is_pointer >::value); EXPECT_FALSE(is_pointer::value); } TEST(TypeTraitsTest, TestIsEnum) { // is_enum isn't supported on MSVC or gcc 3.x #if !defined(_MSC_VER) && !(defined(__GNUC__) && __GNUC__ <= 3) // Verify that is_enum is true for enum types. EXPECT_TRUE(is_enum::value); EXPECT_TRUE(is_enum::value); EXPECT_TRUE(is_enum::value); EXPECT_TRUE(is_enum::value); // Verify that is_enum is false for a few non-enum types. EXPECT_FALSE(is_enum::value); EXPECT_FALSE(is_enum::value); EXPECT_FALSE(is_enum::value); EXPECT_FALSE(is_enum::value); EXPECT_FALSE(is_enum::value); EXPECT_FALSE(is_enum::value); EXPECT_FALSE(is_enum::value); EXPECT_FALSE(is_enum::value); EXPECT_FALSE(is_enum::value); EXPECT_FALSE(is_enum::value); EXPECT_FALSE(is_enum::value); EXPECT_FALSE(is_enum::value); EXPECT_FALSE(is_enum::value); EXPECT_FALSE(is_enum::value); EXPECT_FALSE(is_enum::value); EXPECT_FALSE(is_enum::value); EXPECT_FALSE(is_enum::value); #endif } TEST(TypeTraitsTest, TestIsReference) { // Verifies that is_reference is true for all reference types. typedef float& RefFloat; EXPECT_TRUE(is_reference::value); EXPECT_TRUE(is_reference::value); EXPECT_TRUE(is_reference::value); EXPECT_TRUE(is_reference::value); EXPECT_TRUE(is_reference::value); EXPECT_TRUE(is_reference::value); EXPECT_TRUE(is_reference::value); EXPECT_TRUE(is_reference::value); // Verifies that is_reference is false for all non-reference types. 
EXPECT_FALSE(is_reference::value); EXPECT_FALSE(is_reference::value); EXPECT_FALSE(is_reference::value); EXPECT_FALSE(is_reference::value); EXPECT_FALSE(is_reference::value); EXPECT_FALSE(is_reference::value); EXPECT_FALSE(is_reference::value); } TEST(TypeTraitsTest, TestAddReference) { COMPILE_ASSERT_TYPES_EQ(int&, add_reference::type); COMPILE_ASSERT_TYPES_EQ(const int&, add_reference::type); COMPILE_ASSERT_TYPES_EQ(volatile int&, add_reference::type); COMPILE_ASSERT_TYPES_EQ(const volatile int&, add_reference::type); COMPILE_ASSERT_TYPES_EQ(int&, add_reference::type); COMPILE_ASSERT_TYPES_EQ(const int&, add_reference::type); COMPILE_ASSERT_TYPES_EQ(volatile int&, add_reference::type); COMPILE_ASSERT_TYPES_EQ(const volatile int&, add_reference::type); } TEST(TypeTraitsTest, TestIsPod) { // Verify that arithmetic types and pointers are marked as PODs. EXPECT_TRUE(is_pod::value); EXPECT_TRUE(is_pod::value); EXPECT_TRUE(is_pod::value); EXPECT_TRUE(is_pod::value); EXPECT_TRUE(is_pod::value); EXPECT_TRUE(is_pod::value); EXPECT_TRUE(is_pod::value); EXPECT_TRUE(is_pod::value); EXPECT_TRUE(is_pod::value); EXPECT_TRUE(is_pod::value); EXPECT_TRUE(is_pod::value); EXPECT_TRUE(is_pod::value); EXPECT_TRUE(is_pod::value); EXPECT_TRUE(is_pod::value); EXPECT_TRUE(is_pod::value); EXPECT_TRUE(is_pod::value); EXPECT_TRUE(is_pod::value); EXPECT_TRUE(is_pod::value); EXPECT_TRUE(is_pod::value); EXPECT_TRUE(is_pod::value); EXPECT_TRUE(is_pod::value); #if !defined(_MSC_VER) && !(defined(__GNUC__) && __GNUC__ <= 3) EXPECT_TRUE(is_pod::value); EXPECT_TRUE(is_pod::value); EXPECT_TRUE(is_pod::value); EXPECT_TRUE(is_pod::value); #endif // Verify that some non-POD types are not marked as PODs. 
EXPECT_FALSE(is_pod::value); EXPECT_FALSE(is_pod::value); EXPECT_FALSE((is_pod >::value)); EXPECT_FALSE(is_pod::value); EXPECT_FALSE(is_pod::value); EXPECT_FALSE(is_pod::value); EXPECT_FALSE(is_pod::value); EXPECT_FALSE(is_pod::value); EXPECT_FALSE(is_pod::value); } TEST(TypeTraitsTest, TestHasTrivialConstructor) { // Verify that arithmetic types and pointers have trivial constructors. EXPECT_TRUE(has_trivial_constructor::value); EXPECT_TRUE(has_trivial_constructor::value); EXPECT_TRUE(has_trivial_constructor::value); EXPECT_TRUE(has_trivial_constructor::value); EXPECT_TRUE(has_trivial_constructor::value); EXPECT_TRUE(has_trivial_constructor::value); EXPECT_TRUE(has_trivial_constructor::value); EXPECT_TRUE(has_trivial_constructor::value); EXPECT_TRUE(has_trivial_constructor::value); EXPECT_TRUE(has_trivial_constructor::value); EXPECT_TRUE(has_trivial_constructor::value); EXPECT_TRUE(has_trivial_constructor::value); EXPECT_TRUE(has_trivial_constructor::value); EXPECT_TRUE(has_trivial_constructor::value); EXPECT_TRUE(has_trivial_constructor::value); EXPECT_TRUE(has_trivial_constructor::value); EXPECT_TRUE(has_trivial_constructor::value); EXPECT_TRUE(has_trivial_constructor::value); // Verify that pairs and arrays of such types have trivial // constructors. typedef int int10[10]; EXPECT_TRUE((has_trivial_constructor >::value)); EXPECT_TRUE(has_trivial_constructor::value); // Verify that pairs of types without trivial constructors // are not marked as trivial. EXPECT_FALSE((has_trivial_constructor >::value)); EXPECT_FALSE((has_trivial_constructor >::value)); // Verify that types without trivial constructors are // correctly marked as such. EXPECT_FALSE(has_trivial_constructor::value); EXPECT_FALSE(has_trivial_constructor >::value); // Verify that E, which we have declared to have a trivial // constructor, is correctly marked as such. 
EXPECT_TRUE(has_trivial_constructor::value); } TEST(TypeTraitsTest, TestHasTrivialCopy) { // Verify that arithmetic types and pointers have trivial copy // constructors. EXPECT_TRUE(has_trivial_copy::value); EXPECT_TRUE(has_trivial_copy::value); EXPECT_TRUE(has_trivial_copy::value); EXPECT_TRUE(has_trivial_copy::value); EXPECT_TRUE(has_trivial_copy::value); EXPECT_TRUE(has_trivial_copy::value); EXPECT_TRUE(has_trivial_copy::value); EXPECT_TRUE(has_trivial_copy::value); EXPECT_TRUE(has_trivial_copy::value); EXPECT_TRUE(has_trivial_copy::value); EXPECT_TRUE(has_trivial_copy::value); EXPECT_TRUE(has_trivial_copy::value); EXPECT_TRUE(has_trivial_copy::value); EXPECT_TRUE(has_trivial_copy::value); EXPECT_TRUE(has_trivial_copy::value); EXPECT_TRUE(has_trivial_copy::value); EXPECT_TRUE(has_trivial_copy::value); EXPECT_TRUE(has_trivial_copy::value); // Verify that pairs and arrays of such types have trivial // copy constructors. typedef int int10[10]; EXPECT_TRUE((has_trivial_copy >::value)); EXPECT_TRUE(has_trivial_copy::value); // Verify that pairs of types without trivial copy constructors // are not marked as trivial. EXPECT_FALSE((has_trivial_copy >::value)); EXPECT_FALSE((has_trivial_copy >::value)); // Verify that types without trivial copy constructors are // correctly marked as such. EXPECT_FALSE(has_trivial_copy::value); EXPECT_FALSE(has_trivial_copy >::value); // Verify that C, which we have declared to have a trivial // copy constructor, is correctly marked as such. EXPECT_TRUE(has_trivial_copy::value); } TEST(TypeTraitsTest, TestHasTrivialAssign) { // Verify that arithmetic types and pointers have trivial assignment // operators. 
EXPECT_TRUE(has_trivial_assign::value); EXPECT_TRUE(has_trivial_assign::value); EXPECT_TRUE(has_trivial_assign::value); EXPECT_TRUE(has_trivial_assign::value); EXPECT_TRUE(has_trivial_assign::value); EXPECT_TRUE(has_trivial_assign::value); EXPECT_TRUE(has_trivial_assign::value); EXPECT_TRUE(has_trivial_assign::value); EXPECT_TRUE(has_trivial_assign::value); EXPECT_TRUE(has_trivial_assign::value); EXPECT_TRUE(has_trivial_assign::value); EXPECT_TRUE(has_trivial_assign::value); EXPECT_TRUE(has_trivial_assign::value); EXPECT_TRUE(has_trivial_assign::value); EXPECT_TRUE(has_trivial_assign::value); EXPECT_TRUE(has_trivial_assign::value); EXPECT_TRUE(has_trivial_assign::value); EXPECT_TRUE(has_trivial_assign::value); // Verify that pairs and arrays of such types have trivial // assignment operators. typedef int int10[10]; EXPECT_TRUE((has_trivial_assign >::value)); EXPECT_TRUE(has_trivial_assign::value); // Verify that pairs of types without trivial assignment operators // are not marked as trivial. EXPECT_FALSE((has_trivial_assign >::value)); EXPECT_FALSE((has_trivial_assign >::value)); // Verify that types without trivial assignment operators are // correctly marked as such. EXPECT_FALSE(has_trivial_assign::value); EXPECT_FALSE(has_trivial_assign >::value); // Verify that D, which we have declared to have a trivial // assignment operator, is correctly marked as such. EXPECT_TRUE(has_trivial_assign::value); } TEST(TypeTraitsTest, TestHasTrivialDestructor) { // Verify that arithmetic types and pointers have trivial destructors. 
EXPECT_TRUE(has_trivial_destructor::value); EXPECT_TRUE(has_trivial_destructor::value); EXPECT_TRUE(has_trivial_destructor::value); EXPECT_TRUE(has_trivial_destructor::value); EXPECT_TRUE(has_trivial_destructor::value); EXPECT_TRUE(has_trivial_destructor::value); EXPECT_TRUE(has_trivial_destructor::value); EXPECT_TRUE(has_trivial_destructor::value); EXPECT_TRUE(has_trivial_destructor::value); EXPECT_TRUE(has_trivial_destructor::value); EXPECT_TRUE(has_trivial_destructor::value); EXPECT_TRUE(has_trivial_destructor::value); EXPECT_TRUE(has_trivial_destructor::value); EXPECT_TRUE(has_trivial_destructor::value); EXPECT_TRUE(has_trivial_destructor::value); EXPECT_TRUE(has_trivial_destructor::value); EXPECT_TRUE(has_trivial_destructor::value); EXPECT_TRUE(has_trivial_destructor::value); // Verify that pairs and arrays of such types have trivial // destructors. typedef int int10[10]; EXPECT_TRUE((has_trivial_destructor >::value)); EXPECT_TRUE(has_trivial_destructor::value); // Verify that pairs of types without trivial destructors // are not marked as trivial. EXPECT_FALSE((has_trivial_destructor >::value)); EXPECT_FALSE((has_trivial_destructor >::value)); // Verify that types without trivial destructors are // correctly marked as such. EXPECT_FALSE(has_trivial_destructor::value); EXPECT_FALSE(has_trivial_destructor >::value); // Verify that F, which we have declared to have a trivial // destructor, is correctly marked as such. EXPECT_TRUE(has_trivial_destructor::value); } // Tests remove_pointer. 
TEST(TypeTraitsTest, TestRemovePointer) {
  COMPILE_ASSERT_TYPES_EQ(int, remove_pointer<int>::type);
  COMPILE_ASSERT_TYPES_EQ(int, remove_pointer<int*>::type);
  COMPILE_ASSERT_TYPES_EQ(const int, remove_pointer<const int*>::type);
  COMPILE_ASSERT_TYPES_EQ(int, remove_pointer<int* const>::type);
  COMPILE_ASSERT_TYPES_EQ(int, remove_pointer<int* volatile>::type);
}

TEST(TypeTraitsTest, TestRemoveConst) {
  COMPILE_ASSERT_TYPES_EQ(int, remove_const<int>::type);
  COMPILE_ASSERT_TYPES_EQ(int, remove_const<const int>::type);
  COMPILE_ASSERT_TYPES_EQ(int *, remove_const<int * const>::type);
  // TR1 examples.
  COMPILE_ASSERT_TYPES_EQ(const int *, remove_const<const int *>::type);
  COMPILE_ASSERT_TYPES_EQ(volatile int, remove_const<const volatile int>::type);
}

TEST(TypeTraitsTest, TestRemoveVolatile) {
  COMPILE_ASSERT_TYPES_EQ(int, remove_volatile<int>::type);
  COMPILE_ASSERT_TYPES_EQ(int, remove_volatile<volatile int>::type);
  COMPILE_ASSERT_TYPES_EQ(int *, remove_volatile<int * volatile>::type);
  // TR1 examples.
  COMPILE_ASSERT_TYPES_EQ(volatile int *, remove_volatile<volatile int *>::type);
  COMPILE_ASSERT_TYPES_EQ(const int, remove_volatile<const volatile int>::type);
}

TEST(TypeTraitsTest, TestRemoveCV) {
  COMPILE_ASSERT_TYPES_EQ(int, remove_cv<int>::type);
  COMPILE_ASSERT_TYPES_EQ(int, remove_cv<volatile int>::type);
  COMPILE_ASSERT_TYPES_EQ(int, remove_cv<const int>::type);
  COMPILE_ASSERT_TYPES_EQ(int *, remove_cv<int * const volatile>::type);
  // TR1 examples.
  COMPILE_ASSERT_TYPES_EQ(const volatile int *, remove_cv<const volatile int *>::type);
  COMPILE_ASSERT_TYPES_EQ(int, remove_cv<const volatile int>::type);
}

TEST(TypeTraitsTest, TestRemoveReference) {
  COMPILE_ASSERT_TYPES_EQ(int, remove_reference<int>::type);
  COMPILE_ASSERT_TYPES_EQ(int, remove_reference<int&>::type);
  COMPILE_ASSERT_TYPES_EQ(const int, remove_reference<const int&>::type);
  COMPILE_ASSERT_TYPES_EQ(int*, remove_reference<int* &>::type);
}

TEST(TypeTraitsTest, TestIsSame) {
  EXPECT_TRUE((is_same<int32, int32>::value));
  EXPECT_FALSE((is_same<int32, int64>::value));
  EXPECT_FALSE((is_same<int64, int32>::value));
  EXPECT_FALSE((is_same<int, const int>::value));

  EXPECT_TRUE((is_same<void, void>::value));
  EXPECT_FALSE((is_same<void, int>::value));
  EXPECT_FALSE((is_same<int, void>::value));

  EXPECT_TRUE((is_same<int*, int*>::value));
  EXPECT_TRUE((is_same<void*, void*>::value));
  EXPECT_FALSE((is_same<int*, void*>::value));
  EXPECT_FALSE((is_same<void*, int*>::value));
  EXPECT_FALSE((is_same<void*, const void*>::value));
  EXPECT_FALSE((is_same<void*, void* const>::value));

  EXPECT_TRUE((is_same<Base*, Base*>::value));
  EXPECT_TRUE((is_same<Derived*, Derived*>::value));
  EXPECT_FALSE((is_same<Base*, Derived*>::value));
  EXPECT_FALSE((is_same<Derived*, Base*>::value));
}

TEST(TypeTraitsTest, TestConvertible) {
#if !defined(_MSC_VER) && !(defined(__GNUC__) && __GNUC__ <= 3)
  EXPECT_TRUE((is_convertible<int, int>::value));
  EXPECT_TRUE((is_convertible<int, long>::value));
  EXPECT_TRUE((is_convertible<long, int>::value));

  EXPECT_TRUE((is_convertible<int*, void*>::value));
  EXPECT_FALSE((is_convertible<void*, int*>::value));

  EXPECT_TRUE((is_convertible<Derived*, Base*>::value));
  EXPECT_FALSE((is_convertible<Base*, Derived*>::value));
  EXPECT_TRUE((is_convertible<Derived*, const Base*>::value));
  EXPECT_FALSE((is_convertible<const Derived*, Base*>::value));
#endif
}

}  // namespace

#include <iostream>

int main(int, char **) {
  // All the work is done in the static constructors.  If they don't
  // die, the tests have all passed.
  std::cout << "PASS\n";
  return 0;
}

sparsehash-2.0.2/src/testutil.h

// Copyright (c) 2010, Google Inc.
// All rights reserved.
// // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // --- // This macro mimics a unittest framework, but is a bit less flexible // than most. It requires a superclass to derive from, and does all // work in global constructors. The tricky part is implementing // TYPED_TEST. 
#ifndef SPARSEHASH_TEST_UTIL_H_
#define SPARSEHASH_TEST_UTIL_H_

#include <sparsehash/internal/sparseconfig.h>
#include "config.h"
#include <stdio.h>
#include <stdlib.h>     // for exit
#include <stdexcept>    // for length_error

_START_GOOGLE_NAMESPACE_

namespace testing {

#define EXPECT_TRUE(cond)  do {                         \
  if (!(cond)) {                                        \
    ::fputs("Test failed: " #cond "\n", stderr);        \
    ::exit(1);                                          \
  }                                                     \
} while (0)

#define EXPECT_FALSE(a)  EXPECT_TRUE(!(a))
#define EXPECT_EQ(a, b)  EXPECT_TRUE((a) == (b))
#define EXPECT_NE(a, b)  EXPECT_TRUE((a) != (b))
#define EXPECT_LT(a, b)  EXPECT_TRUE((a) < (b))
#define EXPECT_GT(a, b)  EXPECT_TRUE((a) > (b))
#define EXPECT_LE(a, b)  EXPECT_TRUE((a) <= (b))
#define EXPECT_GE(a, b)  EXPECT_TRUE((a) >= (b))

#define EXPECT_DEATH(cmd, expected_error_string)                          \
  try {                                                                   \
    cmd;                                                                  \
    EXPECT_FALSE("did not see expected error: " #expected_error_string);  \
  } catch (const std::length_error&) {                                    \
    /* Good, the cmd failed. */                                           \
  }

#define TEST(suitename, testname)                                   \
  class TEST_##suitename##_##testname {                             \
   public:                                                          \
    TEST_##suitename##_##testname() {                               \
      ::fputs("Running " #suitename "." #testname "\n", stderr);    \
      Run();                                                        \
    }                                                               \
    void Run();                                                     \
  };                                                                \
  static TEST_##suitename##_##testname                              \
      test_instance_##suitename##_##testname;                       \
  void TEST_##suitename##_##testname::Run()

template<class C1, class C2, class C3, class C4, class C5, class C6>
struct TypeList6 {
  typedef C1 type1;
  typedef C2 type2;
  typedef C3 type3;
  typedef C4 type4;
  typedef C5 type5;
  typedef C6 type6;
};

// I need to list 18 types here, for code below to compile, though
// only the first 6 are ever used.
#define TYPED_TEST_CASE_6(classname, typelist) \ typedef typelist::type1 classname##_type1; \ typedef typelist::type2 classname##_type2; \ typedef typelist::type3 classname##_type3; \ typedef typelist::type4 classname##_type4; \ typedef typelist::type5 classname##_type5; \ typedef typelist::type6 classname##_type6; \ static const int classname##_numtypes = 6; \ typedef typelist::type1 classname##_type7; \ typedef typelist::type1 classname##_type8; \ typedef typelist::type1 classname##_type9; \ typedef typelist::type1 classname##_type10; \ typedef typelist::type1 classname##_type11; \ typedef typelist::type1 classname##_type12; \ typedef typelist::type1 classname##_type13; \ typedef typelist::type1 classname##_type14; \ typedef typelist::type1 classname##_type15; \ typedef typelist::type1 classname##_type16; \ typedef typelist::type1 classname##_type17; \ typedef typelist::type1 classname##_type18; template struct TypeList18 { typedef C1 type1; typedef C2 type2; typedef C3 type3; typedef C4 type4; typedef C5 type5; typedef C6 type6; typedef C7 type7; typedef C8 type8; typedef C9 type9; typedef C10 type10; typedef C11 type11; typedef C12 type12; typedef C13 type13; typedef C14 type14; typedef C15 type15; typedef C16 type16; typedef C17 type17; typedef C18 type18; }; #define TYPED_TEST_CASE_18(classname, typelist) \ typedef typelist::type1 classname##_type1; \ typedef typelist::type2 classname##_type2; \ typedef typelist::type3 classname##_type3; \ typedef typelist::type4 classname##_type4; \ typedef typelist::type5 classname##_type5; \ typedef typelist::type6 classname##_type6; \ typedef typelist::type7 classname##_type7; \ typedef typelist::type8 classname##_type8; \ typedef typelist::type9 classname##_type9; \ typedef typelist::type10 classname##_type10; \ typedef typelist::type11 classname##_type11; \ typedef typelist::type12 classname##_type12; \ typedef typelist::type13 classname##_type13; \ typedef typelist::type14 classname##_type14; \ typedef typelist::type15 
classname##_type15; \ typedef typelist::type16 classname##_type16; \ typedef typelist::type17 classname##_type17; \ typedef typelist::type18 classname##_type18; \ static const int classname##_numtypes = 18; #define TYPED_TEST(superclass, testname) \ template \ class TEST_onetype_##superclass##_##testname : \ public superclass { \ public: \ TEST_onetype_##superclass##_##testname() { \ Run(); \ } \ private: \ void Run(); \ }; \ class TEST_typed_##superclass##_##testname { \ public: \ explicit TEST_typed_##superclass##_##testname() { \ if (superclass##_numtypes >= 1) { \ ::fputs("Running " #superclass "." #testname ".1\n", stderr); \ TEST_onetype_##superclass##_##testname t; \ } \ if (superclass##_numtypes >= 2) { \ ::fputs("Running " #superclass "." #testname ".2\n", stderr); \ TEST_onetype_##superclass##_##testname t; \ } \ if (superclass##_numtypes >= 3) { \ ::fputs("Running " #superclass "." #testname ".3\n", stderr); \ TEST_onetype_##superclass##_##testname t; \ } \ if (superclass##_numtypes >= 4) { \ ::fputs("Running " #superclass "." #testname ".4\n", stderr); \ TEST_onetype_##superclass##_##testname t; \ } \ if (superclass##_numtypes >= 5) { \ ::fputs("Running " #superclass "." #testname ".5\n", stderr); \ TEST_onetype_##superclass##_##testname t; \ } \ if (superclass##_numtypes >= 6) { \ ::fputs("Running " #superclass "." #testname ".6\n", stderr); \ TEST_onetype_##superclass##_##testname t; \ } \ if (superclass##_numtypes >= 7) { \ ::fputs("Running " #superclass "." #testname ".7\n", stderr); \ TEST_onetype_##superclass##_##testname t; \ } \ if (superclass##_numtypes >= 8) { \ ::fputs("Running " #superclass "." #testname ".8\n", stderr); \ TEST_onetype_##superclass##_##testname t; \ } \ if (superclass##_numtypes >= 9) { \ ::fputs("Running " #superclass "." #testname ".9\n", stderr); \ TEST_onetype_##superclass##_##testname t; \ } \ if (superclass##_numtypes >= 10) { \ ::fputs("Running " #superclass "." 
#testname ".10\n", stderr); \ TEST_onetype_##superclass##_##testname t; \ } \ if (superclass##_numtypes >= 11) { \ ::fputs("Running " #superclass "." #testname ".11\n", stderr); \ TEST_onetype_##superclass##_##testname t; \ } \ if (superclass##_numtypes >= 12) { \ ::fputs("Running " #superclass "." #testname ".12\n", stderr); \ TEST_onetype_##superclass##_##testname t; \ } \ if (superclass##_numtypes >= 13) { \ ::fputs("Running " #superclass "." #testname ".13\n", stderr); \ TEST_onetype_##superclass##_##testname t; \ } \ if (superclass##_numtypes >= 14) { \ ::fputs("Running " #superclass "." #testname ".14\n", stderr); \ TEST_onetype_##superclass##_##testname t; \ } \ if (superclass##_numtypes >= 15) { \ ::fputs("Running " #superclass "." #testname ".15\n", stderr); \ TEST_onetype_##superclass##_##testname t; \ } \ if (superclass##_numtypes >= 16) { \ ::fputs("Running " #superclass "." #testname ".16\n", stderr); \ TEST_onetype_##superclass##_##testname t; \ } \ if (superclass##_numtypes >= 17) { \ ::fputs("Running " #superclass "." #testname ".17\n", stderr); \ TEST_onetype_##superclass##_##testname t; \ } \ if (superclass##_numtypes >= 18) { \ ::fputs("Running " #superclass "." #testname ".18\n", stderr); \ TEST_onetype_##superclass##_##testname t; \ } \ } \ }; \ static TEST_typed_##superclass##_##testname \ test_instance_typed_##superclass##_##testname; \ template \ void TEST_onetype_##superclass##_##testname::Run() // This is a dummy class just to make converting from internal-google // to opensourcing easier. class Test { }; } // namespace testing _END_GOOGLE_NAMESPACE_ #endif // SPARSEHASH_TEST_UTIL_H_ sparsehash-2.0.2/src/simple_compat_test.cc0000664000175000017500000001107211721252046015571 00000000000000// Copyright (c) 2007, Google Inc. // All rights reserved. 
// // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // --- // // This tests mostly that we can #include the files correctly // and have them work. It is like simple_test.cc but uses the // compatibility #include directory (google/) rather than the // canonical one (sparsehash/). This unittest purposefully does // not #include ; it's meant to emulate what a 'regular // install' of sparsehash would be able to see. 
#include <stdio.h>
#include <stdlib.h>
#include <google/sparse_hash_map>
#include <google/sparse_hash_set>
#include <google/dense_hash_map>
#include <google/dense_hash_set>
#include <google/sparsetable>

#define CHECK_IFF(cond, when) do {                                    \
  if (when) {                                                         \
    if (!(cond)) {                                                    \
      puts("ERROR: " #cond " failed when " #when " is true\n");       \
      exit(1);                                                        \
    }                                                                 \
  } else {                                                            \
    if (cond) {                                                       \
      puts("ERROR: " #cond " succeeded when " #when " is false\n");   \
      exit(1);                                                        \
    }                                                                 \
  }                                                                   \
} while (0)

int main(int argc, char**) {
  // Run with an argument to get verbose output
  const bool verbose = argc > 1;

  google::sparse_hash_set<int> sset;
  google::sparse_hash_map<int, int> smap;
  google::dense_hash_set<int> dset;
  google::dense_hash_map<int, int> dmap;
  dset.set_empty_key(-1);
  dmap.set_empty_key(-1);

  for (int i = 0; i < 100; i += 10) {   // go by tens
    sset.insert(i);
    smap[i] = i+1;
    dset.insert(i + 5);
    dmap[i+5] = i+6;
  }

  if (verbose) {
    for (google::sparse_hash_set<int>::const_iterator it = sset.begin();
         it != sset.end(); ++it)
      printf("sset: %d\n", *it);
    for (google::sparse_hash_map<int, int>::const_iterator it = smap.begin();
         it != smap.end(); ++it)
      printf("smap: %d -> %d\n", it->first, it->second);
    for (google::dense_hash_set<int>::const_iterator it = dset.begin();
         it != dset.end(); ++it)
      printf("dset: %d\n", *it);
    for (google::dense_hash_map<int, int>::const_iterator it = dmap.begin();
         it != dmap.end(); ++it)
      printf("dmap: %d -> %d\n", it->first, it->second);
  }

  for (int i = 0; i < 100; i++) {
    CHECK_IFF(sset.find(i) != sset.end(), (i % 10) == 0);
    CHECK_IFF(smap.find(i) != smap.end(), (i % 10) == 0);
    CHECK_IFF(smap.find(i) != smap.end() && smap.find(i)->second == i+1,
              (i % 10) == 0);
    CHECK_IFF(dset.find(i) != dset.end(), (i % 10) == 5);
    CHECK_IFF(dmap.find(i) != dmap.end(), (i % 10) == 5);
    CHECK_IFF(dmap.find(i) != dmap.end() && dmap.find(i)->second == i+1,
              (i % 10) == 5);
  }
  printf("PASS\n");
  return 0;
}

sparsehash-2.0.2/experimental/libchash.c

/* Copyright (c) 1998 - 2005, Google Inc.
 * All rights reserved.
* * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following disclaimer * in the documentation and/or other materials provided with the * distribution. * * Neither the name of Google Inc. nor the names of its * contributors may be used to endorse or promote products derived from * this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * * --- * Author: Craig Silverstein * * This library is intended to be used for in-memory hash tables, * though it provides rudimentary permanent-storage capabilities. * It attempts to be fast, portable, and small. The best algorithm * to fulfill these goals is an internal probing hashing algorithm, * as in Knuth, _Art of Computer Programming_, vol III. Unlike * chained (open) hashing, it doesn't require a pointer for every * item, yet it is still constant time lookup in practice. 
* * Also to save space, we let the contents (both data and key) that * you insert be a union: if the key/data is small, we store it * directly in the hashtable, otherwise we store a pointer to it. * To keep you from having to figure out which, use KEY_PTR and * PTR_KEY to convert between the arguments to these functions and * a pointer to the real data. For instance: * char key[] = "ab", *key2; * HTItem *bck; HashTable *ht; * HashInsert(ht, PTR_KEY(ht, key), 0); * bck = HashFind(ht, PTR_KEY(ht, "ab")); * key2 = KEY_PTR(ht, bck->key); * * There are a rich set of operations supported: * AllocateHashTable() -- Allocates a hashtable structure and * returns it. * cchKey: if it's a positive number, then each key is a * fixed-length record of that length. If it's 0, * the key is assumed to be a \0-terminated string. * fSaveKey: normally, you are responsible for allocating * space for the key. If this is 1, we make a * copy of the key for you. * ClearHashTable() -- Removes everything from a hashtable * FreeHashTable() -- Frees memory used by a hashtable * * HashFind() -- takes a key (use PTR_KEY) and returns the * HTItem containing that key, or NULL if the * key is not in the hashtable. * HashFindLast() -- returns the item found by last HashFind() * HashFindOrInsert() -- inserts the key/data pair if the key * is not already in the hashtable, or * returns the appropraite HTItem if it is. * HashFindOrInsertItem() -- takes key/data as an HTItem. * HashInsert() -- adds a key/data pair to the hashtable. What * it does if the key is already in the table * depends on the value of SAMEKEY_OVERWRITE. * HashInsertItem() -- takes key/data as an HTItem. * HashDelete() -- removes a key/data pair from the hashtable, * if it's there. RETURNS 1 if it was there, * 0 else. * If you use sparse tables and never delete, the full data * space is available. Otherwise we steal -2 (maybe -3), * so you can't have data fields with those values. 
* HashDeleteLast() -- deletes the item returned by the last Find(). * * HashFirstBucket() -- used to iterate over the buckets in a * hashtable. DON'T INSERT OR DELETE WHILE * ITERATING! You can't nest iterations. * HashNextBucket() -- RETURNS NULL at the end of iterating. * * HashSetDeltaGoalSize() -- if you're going to insert 1000 items * at once, call this fn with arg 1000. * It grows the table more intelligently. * * HashSave() -- saves the hashtable to a file. It saves keys ok, * but it doesn't know how to interpret the data field, * so if the data field is a pointer to some complex * structure, you must send a function that takes a * file pointer and a pointer to the structure, and * write whatever you want to write. It should return * the number of bytes written. If the file is NULL, * it should just return the number of bytes it would * write, without writing anything. * If your data field is just an integer, not a * pointer, just send NULL for the function. * HashLoad() -- loads a hashtable. It needs a function that takes * a file and the size of the structure, and expects * you to read in the structure and return a pointer * to it. You must do memory allocation, etc. If * the data is just a number, send NULL. * HashLoadKeys() -- unlike HashLoad(), doesn't load the data off disk * until needed. This saves memory, but if you look * up the same key a lot, it does a disk access each * time. * You can't do Insert() or Delete() on hashtables that were loaded * from disk. * * See libchash.h for parameters you can modify. Make sure LOG_WORD_SIZE * is defined correctly for your machine! (5 for 32 bit words, 6 for 64). 
 */

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>        /* for strcmp, memcmp, etc */
#include <sys/types.h>     /* ULTRIX needs this for in.h */
#include <netinet/in.h>    /* for reading/writing hashtables */
#include "libchash.h"      /* all the types */

/* if keys are stored directly but cchKey is less than sizeof(ulong), */
/* this cuts off the bits at the end */
char grgKeyTruncMask[sizeof(ulong)][sizeof(ulong)];
#define KEY_TRUNC(ht, key)                                                    \
   ( STORES_PTR(ht) || (ht)->cchKey == sizeof(ulong)                          \
     ? (key) : ((key) & *(ulong *)&(grgKeyTruncMask[(ht)->cchKey][0])) )

/* round num up to a multiple of wordsize.  (LOG_WORD_SIZE-3 is in bytes) */
#define WORD_ROUND(num)      ( ((num-1) | ((1<<(LOG_WORD_SIZE-3))-1)) + 1 )

#define NULL_TERMINATED  0   /* val of cchKey if keys are null-term strings */

/* Useful operations we do to keys: compare them, copy them, free them */

#define KEY_CMP(ht, key1, key2)   ( !STORES_PTR(ht) ? (key1) - (key2) :       \
                                    (key1) == (key2) ? 0 :                    \
                                    HashKeySize(ht) == NULL_TERMINATED ?      \
                                       strcmp((char *)key1, (char *)key2) :   \
                                       memcmp((void *)key1, (void *)key2,     \
                                              HashKeySize(ht)) )

#define COPY_KEY(ht, keyTo, keyFrom) do                                       \
   if ( !STORES_PTR(ht) || !(ht)->fSaveKeys )                                 \
      (keyTo) = (keyFrom);                  /* just copy pointer or info */   \
   else if ( (ht)->cchKey == NULL_TERMINATED )      /* copy 0-term.ed str */  \
   {                                                                          \
      (keyTo) = (ulong)HTsmalloc( WORD_ROUND(strlen((char *)(keyFrom))+1) );  \
      strcpy((char *)(keyTo), (char *)(keyFrom));                             \
   }                                                                          \
   else                                                                       \
   {                                                                          \
      (keyTo) = (ulong) HTsmalloc( WORD_ROUND((ht)->cchKey) );                \
      memcpy( (char *)(keyTo), (char *)(keyFrom), (ht)->cchKey);              \
   }                                                                          \
   while ( 0 )

#define FREE_KEY(ht, key) do                                                  \
   if ( STORES_PTR(ht) && (ht)->fSaveKeys )                                   \
      if ( (ht)->cchKey == NULL_TERMINATED )                                  \
         HTfree((char *)(key), WORD_ROUND(strlen((char *)(key))+1));          \
      else                                                                    \
         HTfree((char *)(key), WORD_ROUND((ht)->cchKey));                     \
   while ( 0 )

/* the following are useful for bitmaps */
/* Format is like this (if 1 word = 4 bits):
      3210 7654 ba98 fedc ...
 */

typedef ulong          HTBitmapPart;   /* this has to be unsigned, for >> */
typedef HTBitmapPart   HTBitmap[1<<LOG_BM_WORDS];
#define BM_BYTES(cBuckets)  /* we must ensure it's a multiple of word size */ \
   ( (((cBuckets) + 8*sizeof(ulong)-1) >> LOG_WORD_SIZE) << (LOG_WORD_SIZE-3) )

#define MOD2(i, logmod)      ( (i) & ((1<<(logmod))-1) )
#define DIV_NUM_ENTRIES(i)   ( (i) >> LOG_WORD_SIZE )
#define MOD_NUM_ENTRIES(i)   ( MOD2(i, LOG_WORD_SIZE) )
#define MODBIT(i)            ( ((ulong)1) << MOD_NUM_ENTRIES(i) )

#define TEST_BITMAP(bm, i)   ( (bm)[DIV_NUM_ENTRIES(i)] & MODBIT(i) ? 1 : 0 )
#define SET_BITMAP(bm, i)    (bm)[DIV_NUM_ENTRIES(i)] |= MODBIT(i)
#define CLEAR_BITMAP(bm, i)  (bm)[DIV_NUM_ENTRIES(i)] &= ~MODBIT(i)

/* the following are useful for reading and writing hashtables */
#define READ_UL(fp, data)                      \
   do {                                        \
      long _ul;                                \
      fread(&_ul, sizeof(_ul), 1, (fp));       \
      data = ntohl(_ul);                       \
   } while (0)

#define WRITE_UL(fp, data)                     \
   do {                                        \
      long _ul = htonl((long)(data));          \
      fwrite(&_ul, sizeof(_ul), 1, (fp));      \
   } while (0)

/* Moves data from disk to memory if necessary.  Note dataRead cannot be   *
 * NULL, because then we might as well (and do) load the data into memory  */
#define LOAD_AND_RETURN(ht, loadCommand)     /* lC returns an HTItem * */     \
   if ( !(ht)->fpData )                      /* data is stored in memory */   \
      return (loadCommand);                                                   \
   else                                      /* must read data off of disk */\
   {                                                                          \
      int cchData;                                                            \
      HTItem *bck;                                                            \
      if ( (ht)->bckData.data )  free((char *)(ht)->bckData.data);            \
      ht->bckData.data = (ulong)NULL;        /* needed if loadCommand fails */\
      bck = (loadCommand);                                                    \
      if ( bck == NULL )                /* loadCommand failed: key not found */\
         return NULL;                                                         \
      else                                                                    \
         (ht)->bckData = *bck;                                                \
      fseek(ht->fpData, (ht)->bckData.data, SEEK_SET);                        \
      READ_UL((ht)->fpData, cchData);                                         \
      (ht)->bckData.data = (ulong)(ht)->dataRead((ht)->fpData, cchData);      \
      return &((ht)->bckData);                                                \
   }


/* ======================================================================== */
/*                          UTILITY ROUTINES                                */
/*                       ----------------------                             */

/* HTsmalloc() -- safe malloc
 *    allocates memory, or crashes if the allocation fails.
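The rounding and bitmap macros above are self-contained enough to exercise on their own. A minimal sketch, assuming `LOG_WORD_SIZE` is 5 (32-bit words, one of the two configurations the file supports); the helper `bitmap_roundtrip` is mine, not libchash's:

```c
typedef unsigned long ulong;
#define LOG_WORD_SIZE 5      /* pretend 32-bit words for this sketch */

/* same shapes as the macros in the file */
#define WORD_ROUND(num)      ( ((num-1) | ((1<<(LOG_WORD_SIZE-3))-1)) + 1 )
#define MOD2(i, logmod)      ( (i) & ((1<<(logmod))-1) )
#define DIV_NUM_ENTRIES(i)   ( (i) >> LOG_WORD_SIZE )
#define MOD_NUM_ENTRIES(i)   ( MOD2(i, LOG_WORD_SIZE) )
#define MODBIT(i)            ( ((ulong)1) << MOD_NUM_ENTRIES(i) )
#define TEST_BITMAP(bm, i)   ( (bm)[DIV_NUM_ENTRIES(i)] & MODBIT(i) ? 1 : 0 )
#define SET_BITMAP(bm, i)    (bm)[DIV_NUM_ENTRIES(i)] |= MODBIT(i)
#define CLEAR_BITMAP(bm, i)  (bm)[DIV_NUM_ENTRIES(i)] &= ~MODBIT(i)

/* set, test, then clear one logical bit; returns 1 if all steps behave */
static int bitmap_roundtrip(ulong i)
{
   ulong bm[4] = {0, 0, 0, 0};       /* room for at least 128 logical bits */
   SET_BITMAP(bm, i);
   if ( !TEST_BITMAP(bm, i) )  return 0;
   CLEAR_BITMAP(bm, i);
   return !TEST_BITMAP(bm, i);
}
```

WORD_ROUND(1) is 4 and WORD_ROUND(5) is 8 in this configuration: sizes get padded up to whole 4-byte words.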
 */
static void *HTsmalloc(unsigned long size)
{
   void *retval;

   if ( size == 0 )
      return NULL;
   retval = (void *)malloc(size);
   if ( !retval )
   {
      fprintf(stderr, "HTsmalloc: Unable to allocate %lu bytes of memory\n",
              size);
      exit(1);
   }
   return retval;
}

/* HTscalloc() -- safe calloc
 *    allocates memory and initializes it to 0, or crashes if
 *    the allocation fails.
 */
static void *HTscalloc(unsigned long size)
{
   void *retval;

   retval = (void *)calloc(size, 1);
   if ( !retval && size > 0 )
   {
      fprintf(stderr, "HTscalloc: Unable to allocate %lu bytes of memory\n",
              size);
      exit(1);
   }
   return retval;
}

/* HTsrealloc() -- safe realloc
 *    grows a previously allocated block to a new size, or crashes
 *    if the allocation fails.
 */
static void *HTsrealloc(void *ptr, unsigned long new_size, long delta)
{
   if ( ptr == NULL )
      return HTsmalloc(new_size);
   ptr = realloc(ptr, new_size);
   if ( !ptr && new_size > 0 )
   {
      fprintf(stderr, "HTsrealloc: Unable to reallocate %lu bytes of memory\n",
              new_size);
      exit(1);
   }
   return ptr;
}

/* HTfree() -- keep track of memory use
 *    frees memory using free, but updates count of how much memory
 *    is being used.
 */
static void HTfree(void *ptr, unsigned long size)
{
   if ( size > 0 )          /* some systems seem to not like freeing NULL */
      free(ptr);
}

/*************************************************************************\
| HTcopy()                                                                |
|     Sometimes we interpret data as a ulong.  But ulongs must be         |
|     aligned on some machines, so instead of casting we copy.            |
\*************************************************************************/

unsigned long HTcopy(char *ul)
{
   unsigned long retval;

   memcpy(&retval, ul, sizeof(retval));
   return retval;
}

/*************************************************************************\
| HTSetupKeyTrunc()                                                       |
|     If keys are stored directly but cchKey is less than                 |
|     sizeof(ulong), this cuts off the bits at the end.
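The copy-instead-of-cast trick behind HTcopy() can be demonstrated standalone; the names below (`copy_ul`, `misaligned_roundtrip`) are illustrative stand-ins, not libchash functions:

```c
#include <string.h>

/* Like HTcopy(): fetch a ulong from a possibly-unaligned address by
 * memcpy-ing into a local variable instead of casting the pointer
 * (a cast-and-dereference can fault on alignment-strict machines). */
static unsigned long copy_ul(const char *p)
{
   unsigned long v;
   memcpy(&v, p, sizeof(v));
   return v;
}

/* write a known value at an odd (misaligned) offset and read it back */
static int misaligned_roundtrip(void)
{
   char buf[sizeof(unsigned long) + 1];
   unsigned long x = 0x12345678UL;
   memcpy(buf + 1, &x, sizeof(x));
   return copy_ul(buf + 1) == x;
}
```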
| \*************************************************************************/ static void HTSetupKeyTrunc(void) { int i, j; for ( i = 0; i < sizeof(unsigned long); i++ ) for ( j = 0; j < sizeof(unsigned long); j++ ) grgKeyTruncMask[i][j] = j < i ? 255 : 0; /* chars have 8 bits */ } /* ======================================================================== */ /* TABLE ROUTINES */ /* -------------------- */ /* The idea is that a hashtable with (logically) t buckets is divided * into t/M groups of M buckets each. (M is a constant set in * LOG_BM_WORDS for efficiency.) Each group is stored sparsely. * Thus, inserting into the table causes some array to grow, which is * slow but still constant time. Lookup involves doing a * logical-position-to-sparse-position lookup, which is also slow but * constant time. The larger M is, the slower these operations are * but the less overhead (slightly). * * To store the sparse array, we store a bitmap B, where B[i] = 1 iff * bucket i is non-empty. Then to look up bucket i we really look up * array[# of 1s before i in B]. This is constant time for fixed M. * * Terminology: the position of an item in the overall table (from * 1 .. t) is called its "location." The logical position in a group * (from 1 .. M ) is called its "position." The actual location in * the array (from 1 .. # of non-empty buckets in the group) is * called its "offset." 
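The lookup rule described above -- the bucket at logical position i lives at array[# of 1s before i in B] -- is a popcount rank query. A standalone sketch, using gcc/clang's `__builtin_popcount` in place of the byte-table EntriesUpto() that the real code uses:

```c
/* bucket i of a group lives at array[popcount(B & ((1 << i) - 1))]:
 * its offset is the number of occupied buckets before it.
 * __builtin_popcount is a gcc/clang builtin, standing in here for the
 * portable table-driven EntriesUpto(). */
static int rank_before(unsigned bm, int i)
{
   return __builtin_popcount(bm & ((1u << i) - 1));
}
```

With buckets 2, 5, and 9 occupied, the three live entries sit at offsets 0, 1, and 2 of the packed array, which is what rank_before returns for those positions.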
* * The following operations are supported: * o Allocate an array with t buckets, all empty * o Free a array (but not whatever was stored in the buckets) * o Tell whether or not a bucket is empty * o Return a bucket with a given location * o Set the value of a bucket at a given location * o Iterate through all the buckets in the array * o Read and write an occupancy bitmap to disk * o Return how much memory is being allocated by the array structure */ #ifndef SparseBucket /* by default, each bucket holds an HTItem */ #define SparseBucket HTItem #endif typedef struct SparseBin { SparseBucket *binSparse; HTBitmap bmOccupied; /* bmOccupied[i] is 1 if bucket i has an item */ short cOccupied; /* size of binSparse; useful for iterators, eg */ } SparseBin; typedef struct SparseIterator { long posGroup; long posOffset; SparseBin *binSparse; /* state info, to avoid args for NextBucket() */ ulong cBuckets; } SparseIterator; #define LOG_LOW_BIN_SIZE ( LOG_BM_WORDS+LOG_WORD_SIZE ) #define SPARSE_GROUPS(cBuckets) ( (((cBuckets)-1) >> LOG_LOW_BIN_SIZE) + 1 ) /* we need a small function to figure out # of items set in the bm */ static HTOffset EntriesUpto(HTBitmapPart *bm, int i) { /* returns # of set bits in 0..i-1 */ HTOffset retval = 0; static HTOffset rgcBits[256] = /* # of bits set in one char */ {0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4, 1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5, 1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7, 1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7, 3, 4, 4, 5, 4, 5, 
5, 6, 4, 5, 5, 6, 5, 6, 6, 7, 4, 5, 5, 6, 5, 6, 6, 7, 5, 6, 6, 7, 6, 7, 7, 8}; if ( i == 0 ) return 0; for ( ; i > sizeof(*bm)*8; i -= sizeof(*bm)*8, bm++ ) { /* think of it as loop unrolling */ #if LOG_WORD_SIZE >= 3 /* 1 byte per word, or more */ retval += rgcBits[*bm & 255]; /* get the low byte */ #if LOG_WORD_SIZE >= 4 /* at least 2 bytes */ retval += rgcBits[(*bm >> 8) & 255]; #if LOG_WORD_SIZE >= 5 /* at least 4 bytes */ retval += rgcBits[(*bm >> 16) & 255]; retval += rgcBits[(*bm >> 24) & 255]; #if LOG_WORD_SIZE >= 6 /* 8 bytes! */ retval += rgcBits[(*bm >> 32) & 255]; retval += rgcBits[(*bm >> 40) & 255]; retval += rgcBits[(*bm >> 48) & 255]; retval += rgcBits[(*bm >> 56) & 255]; #if LOG_WORD_SIZE >= 7 /* not a concern for a while... */ #error Need to rewrite EntriesUpto to support such big words #endif /* >8 bytes */ #endif /* 8 bytes */ #endif /* 4 bytes */ #endif /* 2 bytes */ #endif /* 1 byte */ } switch ( i ) { /* from 0 to 63 */ case 0: return retval; #if LOG_WORD_SIZE >= 3 /* 1 byte per word, or more */ case 1: case 2: case 3: case 4: case 5: case 6: case 7: case 8: return (retval + rgcBits[*bm & ((1 << i)-1)]); #if LOG_WORD_SIZE >= 4 /* at least 2 bytes */ case 9: case 10: case 11: case 12: case 13: case 14: case 15: case 16: return (retval + rgcBits[*bm & 255] + rgcBits[(*bm >> 8) & ((1 << (i-8))-1)]); #if LOG_WORD_SIZE >= 5 /* at least 4 bytes */ case 17: case 18: case 19: case 20: case 21: case 22: case 23: case 24: return (retval + rgcBits[*bm & 255] + rgcBits[(*bm >> 8) & 255] + rgcBits[(*bm >> 16) & ((1 << (i-16))-1)]); case 25: case 26: case 27: case 28: case 29: case 30: case 31: case 32: return (retval + rgcBits[*bm & 255] + rgcBits[(*bm >> 8) & 255] + rgcBits[(*bm >> 16) & 255] + rgcBits[(*bm >> 24) & ((1 << (i-24))-1)]); #if LOG_WORD_SIZE >= 6 /* 8 bytes! 
*/ case 33: case 34: case 35: case 36: case 37: case 38: case 39: case 40: return (retval + rgcBits[*bm & 255] + rgcBits[(*bm >> 8) & 255] + rgcBits[(*bm >> 16) & 255] + rgcBits[(*bm >> 24) & 255] + rgcBits[(*bm >> 32) & ((1 << (i-32))-1)]); case 41: case 42: case 43: case 44: case 45: case 46: case 47: case 48: return (retval + rgcBits[*bm & 255] + rgcBits[(*bm >> 8) & 255] + rgcBits[(*bm >> 16) & 255] + rgcBits[(*bm >> 24) & 255] + rgcBits[(*bm >> 32) & 255] + rgcBits[(*bm >> 40) & ((1 << (i-40))-1)]); case 49: case 50: case 51: case 52: case 53: case 54: case 55: case 56: return (retval + rgcBits[*bm & 255] + rgcBits[(*bm >> 8) & 255] + rgcBits[(*bm >> 16) & 255] + rgcBits[(*bm >> 24) & 255] + rgcBits[(*bm >> 32) & 255] + rgcBits[(*bm >> 40) & 255] + rgcBits[(*bm >> 48) & ((1 << (i-48))-1)]); case 57: case 58: case 59: case 60: case 61: case 62: case 63: case 64: return (retval + rgcBits[*bm & 255] + rgcBits[(*bm >> 8) & 255] + rgcBits[(*bm >> 16) & 255] + rgcBits[(*bm >> 24) & 255] + rgcBits[(*bm >> 32) & 255] + rgcBits[(*bm >> 40) & 255] + rgcBits[(*bm >> 48) & 255] + rgcBits[(*bm >> 56) & ((1 << (i-56))-1)]); #endif /* 8 bytes */ #endif /* 4 bytes */ #endif /* 2 bytes */ #endif /* 1 byte */ } assert("" == "word size is too big in EntriesUpto()"); return -1; } #define SPARSE_POS_TO_OFFSET(bm, i) ( EntriesUpto(&((bm)[0]), i) ) #define SPARSE_BUCKET(bin, location) \ ( (bin)[(location) >> LOG_LOW_BIN_SIZE].binSparse + \ SPARSE_POS_TO_OFFSET((bin)[(location)>>LOG_LOW_BIN_SIZE].bmOccupied, \ MOD2(location, LOG_LOW_BIN_SIZE)) ) /*************************************************************************\ | SparseAllocate() | | SparseFree() | | Allocates, sets-to-empty, and frees a sparse array. All you need | | to tell me is how many buckets you want. I return the number of | | buckets I actually allocated, setting the array as a parameter. | | Note that you have to set auxilliary parameters, like cOccupied. 
| \*************************************************************************/ static ulong SparseAllocate(SparseBin **pbinSparse, ulong cBuckets) { int cGroups = SPARSE_GROUPS(cBuckets); *pbinSparse = (SparseBin *) HTscalloc(sizeof(**pbinSparse) * cGroups); return cGroups << LOG_LOW_BIN_SIZE; } static SparseBin *SparseFree(SparseBin *binSparse, ulong cBuckets) { ulong iGroup, cGroups = SPARSE_GROUPS(cBuckets); for ( iGroup = 0; iGroup < cGroups; iGroup++ ) HTfree(binSparse[iGroup].binSparse, (sizeof(*binSparse[iGroup].binSparse) * binSparse[iGroup].cOccupied)); HTfree(binSparse, sizeof(*binSparse) * cGroups); return NULL; } /*************************************************************************\ | SparseIsEmpty() | | SparseFind() | | You give me a location (ie a number between 1 and t), and I | | return the bucket at that location, or NULL if the bucket is | | empty. It's OK to call Find() on an empty table. | \*************************************************************************/ static int SparseIsEmpty(SparseBin *binSparse, ulong location) { return !TEST_BITMAP(binSparse[location>>LOG_LOW_BIN_SIZE].bmOccupied, MOD2(location, LOG_LOW_BIN_SIZE)); } static SparseBucket *SparseFind(SparseBin *binSparse, ulong location) { if ( SparseIsEmpty(binSparse, location) ) return NULL; return SPARSE_BUCKET(binSparse, location); } /*************************************************************************\ | SparseInsert() | | You give me a location, and contents to put there, and I insert | | into that location and RETURN a pointer to the location. If | | bucket was already occupied, I write over the contents only if | | *pfOverwrite is 1. We set *pfOverwrite to 1 if there was someone | | there (whether or not we overwrote) and 0 else. 
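The insertion path described above boils down to growing the packed bucket array by one and memmove-ing the tail up to open a hole at the rank offset. A minimal standalone sketch of that step (the helper names are mine, and a plain int array stands in for SparseBucket):

```c
#include <string.h>

/* Open a hole at `offset` in a packed array of cOccupied entries (the
 * array must already have room for one more), then drop the new entry
 * in -- the same memmove dance SparseInsert() does after realloc. */
static void open_hole_and_insert(int *arr, int cOccupied, int offset, int val)
{
   memmove(arr + offset + 1, arr + offset,
           (cOccupied - offset) * sizeof(*arr));
   arr[offset] = val;
}

static int demo(void)
{
   int arr[4] = {10, 30, 40, 0};     /* 3 occupied, room for one more */
   open_hole_and_insert(arr, 3, 1, 20);
   return arr[0] == 10 && arr[1] == 20 && arr[2] == 30 && arr[3] == 40;
}
```

memmove (not memcpy) matters here because source and destination overlap.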
| \*************************************************************************/
static SparseBucket *SparseInsert(SparseBin *binSparse, SparseBucket *bckInsert,
                                  ulong location, int *pfOverwrite)
{
   SparseBucket *bckPlace;
   HTOffset offset;

   bckPlace = SparseFind(binSparse, location);
   if ( bckPlace )                /* means we replace old contents */
   {
      if ( *pfOverwrite )
         *bckPlace = *bckInsert;
      *pfOverwrite = 1;
      return bckPlace;
   }

   binSparse += (location >> LOG_LOW_BIN_SIZE);
   offset = SPARSE_POS_TO_OFFSET(binSparse->bmOccupied,
                                 MOD2(location, LOG_LOW_BIN_SIZE));
   binSparse->binSparse = (SparseBucket *)
      HTsrealloc(binSparse->binSparse,
                 sizeof(*binSparse->binSparse) * ++binSparse->cOccupied,
                 sizeof(*binSparse->binSparse));
   memmove(binSparse->binSparse + offset+1,
           binSparse->binSparse + offset,
           (binSparse->cOccupied-1 - offset) * sizeof(*binSparse->binSparse));
   binSparse->binSparse[offset] = *bckInsert;
   SET_BITMAP(binSparse->bmOccupied, MOD2(location, LOG_LOW_BIN_SIZE));
   *pfOverwrite = 0;
   return binSparse->binSparse + offset;
}

/*************************************************************************\
| SparseFirstBucket()                                                     |
| SparseNextBucket()                                                      |
| SparseCurrentBit()                                                      |
|     Iterate through the occupied buckets of a sparse hashtable.  You    |
|     must, of course, have allocated space yourself for the iterator.    |
\*************************************************************************/

static SparseBucket *SparseNextBucket(SparseIterator *iter)
{
   if ( iter->posOffset != -1 &&      /* not called from FirstBucket()?
*/ (++iter->posOffset < iter->binSparse[iter->posGroup].cOccupied) ) return iter->binSparse[iter->posGroup].binSparse + iter->posOffset; iter->posOffset = 0; /* start the next group */ for ( iter->posGroup++; iter->posGroup < SPARSE_GROUPS(iter->cBuckets); iter->posGroup++ ) if ( iter->binSparse[iter->posGroup].cOccupied > 0 ) return iter->binSparse[iter->posGroup].binSparse; /* + 0 */ return NULL; /* all remaining groups were empty */ } static SparseBucket *SparseFirstBucket(SparseIterator *iter, SparseBin *binSparse, ulong cBuckets) { iter->binSparse = binSparse; /* set it up for NextBucket() */ iter->cBuckets = cBuckets; iter->posOffset = -1; /* when we advance, we're at 0 */ iter->posGroup = -1; return SparseNextBucket(iter); } /*************************************************************************\ | SparseWrite() | | SparseRead() | | These are routines for storing a sparse hashtable onto disk. We | | store the number of buckets and a bitmap indicating which buckets | | are allocated (occupied). The actual contents of the buckets | | must be stored separately. 
| \*************************************************************************/
static void SparseWrite(FILE *fp, SparseBin *binSparse, ulong cBuckets)
{
   ulong i, j;

   WRITE_UL(fp, cBuckets);
   for ( i = 0; i < SPARSE_GROUPS(cBuckets); i++ )
      for ( j = 0; j < (1<<LOG_BM_WORDS); j++ )
         WRITE_UL(fp, binSparse[i].bmOccupied[j]);
}

static ulong SparseRead(FILE *fp, SparseBin **pbinSparse)
{
   ulong i, j, cBuckets;

   READ_UL(fp, cBuckets);
   cBuckets = SparseAllocate(pbinSparse, cBuckets);
   for ( i = 0; i < SPARSE_GROUPS(cBuckets); i++ )
      for ( j = 0; j < (1<<LOG_BM_WORDS); j++ )
         READ_UL(fp, (*pbinSparse)[i].bmOccupied[j]);
   return cBuckets;
}

static ulong SparseMemory(ulong cBuckets, ulong cOccupied)
{
   return ( cOccupied * sizeof(SparseBucket) +
            SPARSE_GROUPS(cBuckets) * sizeof(SparseBin) );
}


/* ======================================================================== */
/*                            DENSE TABLES                                  */
/*                         ------------------                               */

#ifndef DenseBucket            /* by default, each bucket holds an HTItem */
#define DenseBucket HTItem
#endif

typedef struct DenseBin {      /* needs to be a struct for C typing reasons */
   DenseBucket *rgBuckets;     /* a bin is an array of buckets */
} DenseBin;

typedef struct DenseIterator {
   long pos;                   /* the actual iterator */
   DenseBin *bin;              /* state info, to avoid args for NextBucket() */
   ulong cBuckets;
} DenseIterator;

#define DENSE_IS_EMPTY(rgBuckets, i)     ( (rgBuckets)[i].data == (ulong)EMPTY )
#define DENSE_SET_EMPTY(rgBuckets, i)    (rgBuckets)[i].data = (ulong)EMPTY
#define DENSE_SET_OCCUPIED(rgBuckets, i)    /* nothing to do */

static void DenseClear(DenseBin *bin, ulong cBuckets)
{
   while ( cBuckets-- )
      DENSE_SET_EMPTY(bin->rgBuckets, cBuckets);
}

static ulong DenseAllocate(DenseBin **pbin, ulong cBuckets)
{
   *pbin = (DenseBin *) HTsmalloc(sizeof(**pbin));
   (*pbin)->rgBuckets = (DenseBucket *)
      HTsmalloc(sizeof(*(*pbin)->rgBuckets) * cBuckets);
   DenseClear(*pbin, cBuckets);
   return cBuckets;
}

static DenseBin *DenseFree(DenseBin *bin, ulong cBuckets)
{
   HTfree(bin->rgBuckets, sizeof(*bin->rgBuckets) * cBuckets);
   HTfree(bin, sizeof(*bin));
   return NULL;
}

static int DenseIsEmpty(DenseBin *bin, ulong location)
{
   return DENSE_IS_EMPTY(bin->rgBuckets, location);
}

static DenseBucket *DenseFind(DenseBin *bin, ulong location)
{
   if ( DenseIsEmpty(bin, location) )
      return NULL;
   return bin->rgBuckets + location;
}

static DenseBucket *DenseInsert(DenseBin *bin, DenseBucket *bckInsert,
                                ulong location, int *pfOverwrite)
{
   DenseBucket *bckPlace;

   bckPlace = DenseFind(bin, location);
   if ( bckPlace )               /* means something is already there */
   {
      if ( *pfOverwrite )
         *bckPlace = *bckInsert;
      *pfOverwrite = 1;          /* set to 1 to indicate someone was there */
      return bckPlace;
   }
   else
   {
      bin->rgBuckets[location] = *bckInsert;
      *pfOverwrite = 0;
      return bin->rgBuckets + location;
   }
}

static DenseBucket *DenseNextBucket(DenseIterator *iter)
{
   for ( iter->pos++; iter->pos < iter->cBuckets; iter->pos++ )
      if ( !DenseIsEmpty(iter->bin, iter->pos) )
         return iter->bin->rgBuckets + iter->pos;
   return NULL;                  /* all remaining buckets were empty */
}

static DenseBucket *DenseFirstBucket(DenseIterator *iter,
                                     DenseBin *bin, ulong cBuckets)
{
   iter->bin = bin;              /* set it up for NextBucket() */
   iter->cBuckets = cBuckets;
   iter->pos = -1;               /* thus the next bucket will be 0 */
   return DenseNextBucket(iter);
}

static void DenseWrite(FILE *fp, DenseBin *bin, ulong cBuckets)
{
   ulong pos = 0, bit, bm;
WRITE_UL(fp, cBuckets); while ( pos < cBuckets ) { bm = 0; for ( bit = 0; bit < 8*sizeof(ulong); bit++ ) { if ( !DenseIsEmpty(bin, pos) ) SET_BITMAP(&bm, bit); /* in fks-hash.h */ if ( ++pos == cBuckets ) break; } WRITE_UL(fp, bm); } } static ulong DenseRead(FILE *fp, DenseBin **pbin) { ulong pos = 0, bit, bm, cBuckets; READ_UL(fp, cBuckets); cBuckets = DenseAllocate(pbin, cBuckets); while ( pos < cBuckets ) { READ_UL(fp, bm); for ( bit = 0; bit < 8*sizeof(ulong); bit++ ) { if ( TEST_BITMAP(&bm, bit) ) /* in fks-hash.h */ DENSE_SET_OCCUPIED((*pbin)->rgBuckets, pos); else DENSE_SET_EMPTY((*pbin)->rgBuckets, pos); if ( ++pos == cBuckets ) break; } } return cBuckets; } static ulong DenseMemory(ulong cBuckets, ulong cOccupied) { return cBuckets * sizeof(DenseBucket); } /* ======================================================================== */ /* HASHING ROUTINES */ /* ---------------------- */ /* Implements a simple quadratic hashing scheme. We have a single hash * table of size t and a single hash function h(x). When inserting an * item, first we try h(x) % t. If it's occupied, we try h(x) + * i*(i-1)/2 % t for increasing values of i until we hit a not-occupied * space. To make this dynamic, we double the size of the hash table as * soon as more than half the cells are occupied. When deleting, we can * choose to shrink the hashtable when less than a quarter of the * cells are occupied, or we can choose never to shrink the hashtable. * For lookup, we check h(x) + i*(i-1)/2 % t (starting with i=0) until * we get a match or we hit an empty space. Note that as a result, * we can't make a cell empty on deletion, or lookups may end prematurely. * Instead we mark the cell as "deleted." We thus steal the value * DELETED as a possible "data" value. As long as data are pointers, * that's ok. * The hash increment we use, i(i-1)/2, is not the standard quadratic * hash increment, which is i^2. 
 * i(i-1)/2 covers the entire bucket space when the hashtable size is a
 * power of two, as it is for us.  In fact, the first n probes cover n
 * distinct buckets; then it repeats.  This guarantees insertion will
 * always succeed.
 *    If you want linear hashing instead, set JUMP in chash.h.  You can
 * also change various other parameters there.
 */

/*************************************************************************\
| Hash()                                                                  |
|     The hash function I use is due to Bob Jenkins (see                  |
|     http://burtleburtle.net/bob/hash/evahash.html                       |
|     According to http://burtleburtle.net/bob/c/lookup2.c,               |
|     his implementation is public domain.)                               |
|     It takes 36 instructions, in 18 cycles if you're lucky.             |
|        hashing depends on the fact that the hashtable size is always a  |
|     power of 2.  cBuckets is probably ht->cBuckets.                     |
\*************************************************************************/

#if LOG_WORD_SIZE == 5                      /* 32 bit words */

#define mix(a,b,c) \
{ \
  a -= b; a -= c; a ^= (c>>13); \
  b -= c; b -= a; b ^= (a<<8); \
  c -= a; c -= b; c ^= (b>>13); \
  a -= b; a -= c; a ^= (c>>12); \
  b -= c; b -= a; b ^= (a<<16); \
  c -= a; c -= b; c ^= (b>>5); \
  a -= b; a -= c; a ^= (c>>3); \
  b -= c; b -= a; b ^= (a<<10); \
  c -= a; c -= b; c ^= (b>>15); \
}
#ifdef WORD_HASH           /* play with this on little-endian machines */
#define WORD_AT(ptr)    ( *(ulong *)(ptr) )
#else
#define WORD_AT(ptr)    ( (ptr)[0] + ((ulong)(ptr)[1]<<8) + \
                          ((ulong)(ptr)[2]<<16) + ((ulong)(ptr)[3]<<24) )
#endif

#elif LOG_WORD_SIZE == 6                    /* 64 bit words */

#define mix(a,b,c) \
{ \
  a -= b; a -= c; a ^= (c>>43); \
  b -= c; b -= a; b ^= (a<<9); \
  c -= a; c -= b; c ^= (b>>8); \
  a -= b; a -= c; a ^= (c>>38); \
  b -= c; b -= a; b ^= (a<<23); \
  c -= a; c -= b; c ^= (b>>5); \
  a -= b; a -= c; a ^= (c>>35); \
  b -= c; b -= a; b ^= (a<<49); \
  c -= a; c -= b; c ^= (b>>11); \
  a -= b; a -= c; a ^= (c>>12); \
  b -= c; b -= a; b ^= (a<<18); \
  c -= a; c -= b; c ^= (b>>22); \
}
#ifdef WORD_HASH           /* alpha is little-endian, btw */
#define WORD_AT(ptr)    ( *(ulong *)(ptr) )
#else
#define WORD_AT(ptr) ( (ptr)[0] + ((ulong)(ptr)[1]<<8) + \ ((ulong)(ptr)[2]<<16) + ((ulong)(ptr)[3]<<24) + \ ((ulong)(ptr)[4]<<32) + ((ulong)(ptr)[5]<<40) + \ ((ulong)(ptr)[6]<<48) + ((ulong)(ptr)[7]<<56) ) #endif #else /* neither 32 or 64 bit words */ #error This hash function can only hash 32 or 64 bit words. Sorry. #endif static ulong Hash(HashTable *ht, char *key, ulong cBuckets) { ulong a, b, c, cchKey, cchKeyOrig; cchKeyOrig = ht->cchKey == NULL_TERMINATED ? strlen(key) : ht->cchKey; a = b = c = 0x9e3779b9; /* the golden ratio; an arbitrary value */ for ( cchKey = cchKeyOrig; cchKey >= 3 * sizeof(ulong); cchKey -= 3 * sizeof(ulong), key += 3 * sizeof(ulong) ) { a += WORD_AT(key); b += WORD_AT(key + sizeof(ulong)); c += WORD_AT(key + sizeof(ulong)*2); mix(a,b,c); } c += cchKeyOrig; switch ( cchKey ) { /* deal with rest. Cases fall through */ #if LOG_WORD_SIZE == 5 case 11: c += (ulong)key[10]<<24; case 10: c += (ulong)key[9]<<16; case 9 : c += (ulong)key[8]<<8; /* the first byte of c is reserved for the length */ case 8 : b += WORD_AT(key+4); a+= WORD_AT(key); break; case 7 : b += (ulong)key[6]<<16; case 6 : b += (ulong)key[5]<<8; case 5 : b += key[4]; case 4 : a += WORD_AT(key); break; case 3 : a += (ulong)key[2]<<16; case 2 : a += (ulong)key[1]<<8; case 1 : a += key[0]; /* case 0 : nothing left to add */ #elif LOG_WORD_SIZE == 6 case 23: c += (ulong)key[22]<<56; case 22: c += (ulong)key[21]<<48; case 21: c += (ulong)key[20]<<40; case 20: c += (ulong)key[19]<<32; case 19: c += (ulong)key[18]<<24; case 18: c += (ulong)key[17]<<16; case 17: c += (ulong)key[16]<<8; /* the first byte of c is reserved for the length */ case 16: b += WORD_AT(key+8); a+= WORD_AT(key); break; case 15: b += (ulong)key[14]<<48; case 14: b += (ulong)key[13]<<40; case 13: b += (ulong)key[12]<<32; case 12: b += (ulong)key[11]<<24; case 11: b += (ulong)key[10]<<16; case 10: b += (ulong)key[ 9]<<8; case 9: b += (ulong)key[ 8]; case 8: a += WORD_AT(key); break; case 7: a += (ulong)key[ 
6]<<48; case 6: a += (ulong)key[ 5]<<40; case 5: a += (ulong)key[ 4]<<32; case 4: a += (ulong)key[ 3]<<24; case 3: a += (ulong)key[ 2]<<16; case 2: a += (ulong)key[ 1]<<8; case 1: a += (ulong)key[ 0]; /* case 0: nothing left to add */ #endif } mix(a,b,c); return c & (cBuckets-1); } /*************************************************************************\ | Rehash() | | You give me a hashtable, a new size, and a bucket to follow, and | | I resize the hashtable's bin to be the new size, rehashing | | everything in it. I keep particular track of the bucket you pass | | in, and RETURN a pointer to where the item in the bucket got to. | | (If you pass in NULL, I return an arbitrary pointer.) | \*************************************************************************/ static HTItem *Rehash(HashTable *ht, ulong cNewBuckets, HTItem *bckWatch) { Table *tableNew; ulong iBucketFirst; HTItem *bck, *bckNew = NULL; ulong offset; /* the i in h(x) + i*(i-1)/2 */ int fOverwrite = 0; /* not an issue: there can be no collisions */ assert( ht->table ); cNewBuckets = Table(Allocate)(&tableNew, cNewBuckets); /* Since we RETURN the new position of bckWatch, we want * * to make sure it doesn't get moved due to some table * * rehashing that comes after it's inserted. Thus, we * * have to put it in last. This makes the loop weird. 
*/ for ( bck = HashFirstBucket(ht); ; bck = HashNextBucket(ht) ) { if ( bck == NULL ) /* we're done iterating, so look at bckWatch */ { bck = bckWatch; if ( bck == NULL ) /* I guess bckWatch wasn't specified */ break; } else if ( bck == bckWatch ) continue; /* ignore if we see it during the iteration */ offset = 0; /* a new i for a new bucket */ for ( iBucketFirst = Hash(ht, KEY_PTR(ht, bck->key), cNewBuckets); !Table(IsEmpty)(tableNew, iBucketFirst); iBucketFirst = (iBucketFirst + JUMP(KEY_PTR(ht,bck->key), offset)) & (cNewBuckets-1) ) ; bckNew = Table(Insert)(tableNew, bck, iBucketFirst, &fOverwrite); if ( bck == bckWatch ) /* we're done with the last thing to do */ break; } Table(Free)(ht->table, ht->cBuckets); ht->table = tableNew; ht->cBuckets = cNewBuckets; ht->cDeletedItems = 0; return bckNew; /* new position of bckWatch, which was inserted last */ } /*************************************************************************\ | Find() | | Does the quadratic searching stuff. RETURNS NULL if we don't | | find an object with the given key, and a pointer to the Item | | holding the key, if we do. Also sets posLastFind. If piEmpty is | | non-NULL, we set it to the first open bucket we pass; helpful for | | doing a later insert if the search fails, for instance. 
| \*************************************************************************/ static HTItem *Find(HashTable *ht, ulong key, ulong *piEmpty) { ulong iBucketFirst; HTItem *item; ulong offset = 0; /* the i in h(x) + i*(i-1)/2 */ int fFoundEmpty = 0; /* set when we pass over an empty bucket */ ht->posLastFind = NULL; /* set up for failure: a new find starts */ if ( ht->table == NULL ) /* empty hash table: find is bound to fail */ return NULL; iBucketFirst = Hash(ht, KEY_PTR(ht, key), ht->cBuckets); while ( 1 ) /* now try all i > 0 */ { item = Table(Find)(ht->table, iBucketFirst); if ( item == NULL ) /* it's not in the table */ { if ( piEmpty && !fFoundEmpty ) *piEmpty = iBucketFirst; return NULL; } else { if ( IS_BCK_DELETED(item) ) /* always 0 ifdef INSERT_ONLY */ { if ( piEmpty && !fFoundEmpty ) { *piEmpty = iBucketFirst; fFoundEmpty = 1; } } else if ( !KEY_CMP(ht, key, item->key) ) /* must be occupied */ { ht->posLastFind = item; return item; /* we found it! */ } } iBucketFirst = ((iBucketFirst + JUMP(KEY_PTR(ht, key), offset)) & (ht->cBuckets-1)); } } /*************************************************************************\ | Insert() | | If an item with the key already exists in the hashtable, RETURNS | | a pointer to the item (replacing its data if fOverwrite is 1). | | If not, we find the first place-to-insert (which Find() is nice | | enough to set for us) and insert the item there, RETURNing a | | pointer to the item. We might grow the hashtable if it's getting | | full. Note we include buckets holding DELETED when determining | | fullness, because they slow down searching. 
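The probe loop in Find() above relies on the coverage claim made earlier: with a power-of-two table, the offsets 0, 1, 3, 6, ... (i.e. i*(i-1)/2) visit every bucket before repeating. That property can be checked directly with a small standalone sketch (table size 64 and the helper name are arbitrary choices of mine):

```c
#include <string.h>

/* Probe h, h+1, h+3, h+6, ... around a table of T = 64 buckets (any
 * power of two works the same way) and check that T probes visit all
 * T buckets, as the quadratic-hashing comment claims. */
static int probes_cover_all(unsigned h)
{
   enum { T = 64 };
   char seen[T];
   unsigned pos = h, i;

   memset(seen, 0, sizeof(seen));
   for ( i = 1; i <= T; i++ )      /* T probes: offsets 0, 1, 3, 6, ... */
   {
      seen[pos & (T-1)] = 1;
      pos += i;                    /* the increment grows by 1 each time */
   }
   for ( i = 0; i < T; i++ )
      if ( !seen[i] )
         return 0;
   return 1;
}
```

The same check with the textbook increment i^2 would fail: plain quadratic probing does not cover all buckets of a power-of-two table, which is why the triangular-number increment is used here.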
| \*************************************************************************/ static ulong NextPow2(ulong x) /* returns next power of 2 > x, or 2^31 */ { if ( ((x << 1) >> 1) != x ) /* next power of 2 overflows */ x >>= 1; /* so we return highest power of 2 we can */ while ( (x & (x-1)) != 0 ) /* blacks out all but the top bit */ x &= (x-1); return x << 1; /* makes it the *next* power of 2 */ } static HTItem *Insert(HashTable *ht, ulong key, ulong data, int fOverwrite) { HTItem *item, bckInsert; ulong iEmpty; /* first empty bucket key probes */ if ( ht->table == NULL ) /* empty hash table: find is bound to fail */ return NULL; item = Find(ht, key, &iEmpty); ht->posLastFind = NULL; /* last operation is insert, not find */ if ( item ) { if ( fOverwrite ) item->data = data; /* key already matches */ return item; } COPY_KEY(ht, bckInsert.key, key); /* make our own copy of the key */ bckInsert.data = data; /* oh, and the data too */ item = Table(Insert)(ht->table, &bckInsert, iEmpty, &fOverwrite); if ( fOverwrite ) /* we overwrote a deleted bucket */ ht->cDeletedItems--; ht->cItems++; /* insert couldn't have overwritten */ if ( ht->cDeltaGoalSize > 0 ) /* closer to our goal size */ ht->cDeltaGoalSize--; if ( ht->cItems + ht->cDeletedItems >= ht->cBuckets * OCCUPANCY_PCT || ht->cDeltaGoalSize < 0 ) /* we must've overestimated # of deletes */ item = Rehash(ht, NextPow2((ulong)(((ht->cDeltaGoalSize > 0 ? ht->cDeltaGoalSize : 0) + ht->cItems) / OCCUPANCY_PCT)), item); return item; } /*************************************************************************\ | Delete() | | Removes the item from the hashtable, and if fShrink is 1, will | | shrink the hashtable if it's too small (ie even after halving, | | the ht would be less than half full, though in order to avoid | | oscillating table size, we insist that after halving the ht would | | be less than 40% full). RETURNS 1 if the item was found, 0 else. 
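NextPow2() above is easy to exercise in isolation; the copy below reproduces the function so it can be tested standalone:

```c
typedef unsigned long ulong;

/* copy of NextPow2() above: returns the next power of 2 > x */
static ulong NextPow2(ulong x)
{
   if ( ((x << 1) >> 1) != x )     /* next power of 2 overflows */
      x >>= 1;                     /* so we return highest power of 2 we can */
   while ( (x & (x-1)) != 0 )      /* blanks out all but the top bit */
      x &= (x-1);
   return x << 1;                  /* makes it the *next* power of 2 */
}
```

Note the result is strictly greater than x: NextPow2(8) is 16, not 8, which suits the grow-on-rehash callers.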
| | If fLastFindSet is true, then this function is basically | | DeleteLastFind. | \*************************************************************************/ static int Delete(HashTable *ht, ulong key, int fShrink, int fLastFindSet) { if ( !fLastFindSet && !Find(ht, key, NULL) ) return 0; SET_BCK_DELETED(ht, ht->posLastFind); /* find set this, how nice */ ht->cItems--; ht->cDeletedItems++; if ( ht->cDeltaGoalSize < 0 ) /* heading towards our goal of deletion */ ht->cDeltaGoalSize++; if ( fShrink && ht->cItems < ht->cBuckets * OCCUPANCY_PCT*0.4 && ht->cDeltaGoalSize >= 0 /* wait until we're done deleting */ && (ht->cBuckets >> 1) >= MIN_HASH_SIZE ) /* shrink */ Rehash(ht, NextPow2((ulong)((ht->cItems+ht->cDeltaGoalSize)/OCCUPANCY_PCT)), NULL); ht->posLastFind = NULL; /* last operation is delete, not find */ return 1; } /* ======================================================================== */ /* USER-VISIBLE API */ /* ---------------------- */ /*************************************************************************\ | AllocateHashTable() | | ClearHashTable() | | FreeHashTable() | | Allocate() allocates a hash table and sets up size parameters. | | Free() frees it. Clear() deletes all the items from the hash | | table, but frees not. | | cchKey is < 0 if the keys you send me are meant to be pointers | | to \0-terminated strings. Then -cchKey is the maximum key size. | | If cchKey < one word (ulong), the keys you send me are the keys | | themselves; else the keys you send me are pointers to the data. | | If fSaveKeys is 1, we copy any keys given to us to insert. We | | also free these keys when freeing the hash table. If it's 0, the | | user is responsible for key space management. | | AllocateHashTable() RETURNS a hash table; the others TAKE one. 
\*************************************************************************/

HashTable *AllocateHashTable(int cchKey, int fSaveKeys)
{
   HashTable *ht;

   ht = (HashTable *) HTsmalloc(sizeof(*ht));   /* set everything to 0 */
   ht->cBuckets = Table(Allocate)(&ht->table, MIN_HASH_SIZE);
   ht->cchKey = cchKey <= 0 ? NULL_TERMINATED : cchKey;
   ht->cItems = 0;
   ht->cDeletedItems = 0;
   ht->fSaveKeys = fSaveKeys;
   ht->cDeltaGoalSize = 0;
   ht->iter = HTsmalloc( sizeof(TableIterator) );

   ht->fpData = NULL;                 /* set by HashLoad, maybe */
   ht->bckData.data = (ulong) NULL;   /* this must be done */
   HTSetupKeyTrunc();                 /* in util.c */
   return ht;
}

void ClearHashTable(HashTable *ht)
{
   HTItem *bck;

   if ( STORES_PTR(ht) && ht->fSaveKeys )     /* need to free keys */
      for ( bck = HashFirstBucket(ht); bck; bck = HashNextBucket(ht) )
      {
         FREE_KEY(ht, bck->key);
         if ( ht->fSaveKeys == 2 )   /* this means key stored in one block */
            break;                   /* ...so only free once */
      }
   Table(Free)(ht->table, ht->cBuckets);
   ht->cBuckets = Table(Allocate)(&ht->table, MIN_HASH_SIZE);

   ht->cItems = 0;
   ht->cDeletedItems = 0;
   ht->cDeltaGoalSize = 0;
   ht->posLastFind = NULL;
   ht->fpData = NULL;                 /* no longer HashLoading */
   if ( ht->bckData.data )
      free( (char *)(ht)->bckData.data);
   ht->bckData.data = (ulong) NULL;
}

void FreeHashTable(HashTable *ht)
{
   ClearHashTable(ht);
   if ( ht->iter )
      HTfree(ht->iter, sizeof(TableIterator));
   if ( ht->table )
      Table(Free)(ht->table, ht->cBuckets);
   free(ht);
}

/*************************************************************************\
| HashFind()                                                              |
| HashFindLast()                                                          |
|     HashFind(): looks in h(x) + i(i-1)/2 % t as i goes up from 0        |
|     until we either find the key or hit an empty bucket.  RETURNS a     |
|     pointer to the item in the hit bucket, if we find it, else          |
|     RETURNS NULL.                                                       |
|     HashFindLast() returns the item returned by the last                |
|     HashFind(), which may be NULL if the last HashFind() failed.        |
|     LOAD_AND_RETURN reads the data from off disk, if necessary.         |
\*************************************************************************/

HTItem *HashFind(HashTable *ht, ulong key)
{
   LOAD_AND_RETURN(ht, Find(ht, KEY_TRUNC(ht, key), NULL));
}

HTItem *HashFindLast(HashTable *ht)
{
   LOAD_AND_RETURN(ht, ht->posLastFind);
}

/*************************************************************************\
| HashFindOrInsert()                                                      |
| HashFindOrInsertItem()                                                  |
| HashInsert()                                                            |
| HashInsertItem()                                                        |
| HashDelete()                                                            |
| HashDeleteLast()                                                        |
|     Pretty obvious what these guys do.  Some take buckets (items),      |
|     some take keys and data separately.  All things RETURN the bucket   |
|     (a pointer into the hashtable) if appropriate.                      |
\*************************************************************************/

HTItem *HashFindOrInsert(HashTable *ht, ulong key, ulong dataInsert)
{
      /* This is equivalent to Insert without samekey-overwrite */
   return Insert(ht, KEY_TRUNC(ht, key), dataInsert, 0);
}

HTItem *HashFindOrInsertItem(HashTable *ht, HTItem *pItem)
{
   return HashFindOrInsert(ht, pItem->key, pItem->data);
}

HTItem *HashInsert(HashTable *ht, ulong key, ulong data)
{
   return Insert(ht, KEY_TRUNC(ht, key), data, SAMEKEY_OVERWRITE);
}

HTItem *HashInsertItem(HashTable *ht, HTItem *pItem)
{
   return HashInsert(ht, pItem->key, pItem->data);
}

int HashDelete(HashTable *ht, ulong key)
{
   return Delete(ht, KEY_TRUNC(ht, key), !FAST_DELETE, 0);
}

int HashDeleteLast(HashTable *ht)
{
   if ( !ht->posLastFind )            /* last find failed */
      return 0;
   return Delete(ht, 0, !FAST_DELETE, 1);   /* no need to specify a key */
}

/*************************************************************************\
| HashFirstBucket()                                                       |
| HashNextBucket()                                                        |
|     Iterates through the items in the hashtable by iterating through    |
|     the table.  Since we know about deleted buckets and loading data    |
|     off disk, and the table doesn't, our job is to take care of these   |
|     things.  RETURNS a bucket, or NULL after the last bucket.           |
\*************************************************************************/

HTItem *HashFirstBucket(HashTable *ht)
{
   HTItem *retval;

   for ( retval = Table(FirstBucket)(ht->iter, ht->table, ht->cBuckets);
         retval;  retval = Table(NextBucket)(ht->iter) )
      if ( !IS_BCK_DELETED(retval) )
         LOAD_AND_RETURN(ht, retval);
   return NULL;
}

HTItem *HashNextBucket(HashTable *ht)
{
   HTItem *retval;

   while ( (retval=Table(NextBucket)(ht->iter)) )
      if ( !IS_BCK_DELETED(retval) )
         LOAD_AND_RETURN(ht, retval);
   return NULL;
}

/*************************************************************************\
| HashSetDeltaGoalSize()                                                  |
|     If we're going to insert 100 items, set the delta goal size to      |
|     100 and we take that into account when inserting.  Likewise, if     |
|     we're going to delete 100 items, set it to -100 and we won't        |
|     rehash until all 100 have been done.  It's ok to be wrong, but      |
|     it's efficient to be right.  Returns the delta value.               |
\*************************************************************************/

int HashSetDeltaGoalSize(HashTable *ht, int delta)
{
   ht->cDeltaGoalSize = delta;
#if FAST_DELETE == 1 || defined INSERT_ONLY
   if ( ht->cDeltaGoalSize < 0 )      /* for fast delete, we never */
      ht->cDeltaGoalSize = 0;         /* ...rehash after deletion */
#endif
   return ht->cDeltaGoalSize;
}

/*************************************************************************\
| HashSave()                                                              |
| HashLoad()                                                              |
| HashLoadKeys()                                                          |
|     Routines for saving and loading the hashtable from disk.  We can    |
|     then use the hashtable in two ways: loading it back into memory     |
|     (HashLoad()) or loading only the keys into memory, in which case    |
|     the data for a given key is loaded off disk when the key is         |
|     retrieved.  The data is freed when something new is retrieved in    |
|     its place, so this is not a "lazy-load" scheme.                     |
|     The key is saved automatically and restored upon load, but the      |
|     user needs to specify a routine for reading and writing the data.   |
|     fSaveKeys is of course set to 1 when you read in a hashtable.       |
|     HashLoad RETURNS a newly allocated hashtable.                       |
|     DATA_WRITE() takes an fp and a char * (representing the data        |
|     field), and must perform two separate tasks.  If fp is NULL,        |
|     return the number of bytes written.  If not, writes the data to     |
|     disk at the place the fp points to.                                 |
|     DATA_READ() takes an fp and the number of bytes in the data         |
|     field, and returns a char * which points to wherever you've         |
|     written the data.  Thus, you must allocate memory for the data.     |
|     Both dataRead and dataWrite may be NULL if you just wish to         |
|     store the data field directly, as an integer.                       |
\*************************************************************************/

void HashSave(FILE *fp, HashTable *ht, int (*dataWrite)(FILE *, char *))
{
   long cchData, posStart;
   HTItem *bck;

   /* File format: magic number (4 bytes)
      : cchKey (one word)
      : cItems (one word)
      : cDeletedItems (one word)
      : table info (buckets and a bitmap)
      : cchAllKeys (one word)
      Then the keys, in a block.  If cchKey is NULL_TERMINATED, the keys
      are null-terminated too, otherwise this takes up cchKey*cItems bytes.
      Note that keys are not written for DELETED buckets.
      Then the data:
      : EITHER DELETED (one word) to indicate it's a deleted bucket,
      : OR number of bytes for this (non-empty) bucket's data (one word).
        This is not stored if dataWrite == NULL since the size is known
        to be sizeof(ul).  Plus:
      : the data for this bucket (variable length)
      All words are in network byte order. */

   fprintf(fp, "%s", MAGIC_KEY);
   WRITE_UL(fp, ht->cchKey);      /* WRITE_UL, READ_UL, etc in fks-hash.h */
   WRITE_UL(fp, ht->cItems);
   WRITE_UL(fp, ht->cDeletedItems);
   Table(Write)(fp, ht->table, ht->cBuckets);   /* writes cBuckets too */

   WRITE_UL(fp, 0);            /* to be replaced with sizeof(key block) */
   posStart = ftell(fp);
   for ( bck = HashFirstBucket(ht); bck; bck = HashNextBucket(ht) )
      fwrite(KEY_PTR(ht, bck->key), 1,
             (ht->cchKey == NULL_TERMINATED ?
              strlen(KEY_PTR(ht, bck->key))+1 : ht->cchKey),
             fp);
   cchData = ftell(fp) - posStart;
   fseek(fp, posStart - sizeof(unsigned long), SEEK_SET);
   WRITE_UL(fp, cchData);
   fseek(fp, 0, SEEK_END);     /* done with our sojourn at the header */

      /* Unlike HashFirstBucket, TableFirstBucket iters through deleted bcks */
   for ( bck = Table(FirstBucket)(ht->iter, ht->table, ht->cBuckets);
         bck;  bck = Table(NextBucket)(ht->iter) )
      if ( dataWrite == NULL || IS_BCK_DELETED(bck) )
         WRITE_UL(fp, bck->data);
      else                     /* write cchData followed by the data */
      {
         WRITE_UL(fp, (*dataWrite)(NULL, (char *)bck->data));
         (*dataWrite)(fp, (char *)bck->data);
      }
}

static HashTable *HashDoLoad(FILE *fp, char * (*dataRead)(FILE *, int),
                             HashTable *ht)
{
   ulong cchKey;
   char szMagicKey[4], *rgchKeys;
   HTItem *bck;

   fread(szMagicKey, 1, 4, fp);
   if ( strncmp(szMagicKey, MAGIC_KEY, 4) )
   {
      fprintf(stderr,
              "ERROR: not a hash table (magic key is %4.4s, not %s)\n",
              szMagicKey, MAGIC_KEY);
      exit(3);
   }
   Table(Free)(ht->table, ht->cBuckets); /* allocated in AllocateHashTable */

   READ_UL(fp, ht->cchKey);
   READ_UL(fp, ht->cItems);
   READ_UL(fp, ht->cDeletedItems);
   ht->cBuckets = Table(Read)(fp, &ht->table);  /* next is the table info */

   READ_UL(fp, cchKey);
   rgchKeys = (char *) HTsmalloc( cchKey );     /* stores all the keys */
   fread(rgchKeys, 1, cchKey, fp);

      /* We use the table iterator so we don't try to LOAD_AND_RETURN */
   for ( bck = Table(FirstBucket)(ht->iter, ht->table, ht->cBuckets);
         bck;  bck = Table(NextBucket)(ht->iter) )
   {
      READ_UL(fp, bck->data);      /* all we need if dataRead is NULL */
      if ( IS_BCK_DELETED(bck) )   /* always 0 if defined(INSERT_ONLY) */
         continue;                 /* this is why we read the data first */
      if ( dataRead != NULL )      /* if it's null, we're done */
         if ( !ht->fpData )        /* load data into memory */
            bck->data = (ulong)dataRead(fp, bck->data);
         else                      /* store location of data on disk */
         {
            fseek(fp, bck->data, SEEK_CUR); /* bck->data held size of data */
            bck->data = ftell(fp) - bck->data - sizeof(unsigned long);
         }
      if ( ht->cchKey ==
           NULL_TERMINATED )       /* now read the key */
      {
         bck->key = (ulong) rgchKeys;
         rgchKeys = strchr(rgchKeys, '\0') + 1;  /* read past the string */
      }
      else
      {
         if ( STORES_PTR(ht) )     /* small keys stored directly */
            bck->key = (ulong) rgchKeys;
         else
            memcpy(&bck->key, rgchKeys, ht->cchKey);
         rgchKeys += ht->cchKey;
      }
   }
   if ( !STORES_PTR(ht) )          /* keys are stored directly */
      HTfree(rgchKeys - cchKey, cchKey);  /* we've advanced rgchK to end */
   return ht;
}

HashTable *HashLoad(FILE *fp, char * (*dataRead)(FILE *, int))
{
   HashTable *ht;
   ht = AllocateHashTable(0, 2); /* cchKey set later, fSaveKey should be 2! */
   return HashDoLoad(fp, dataRead, ht);
}

HashTable *HashLoadKeys(FILE *fp, char * (*dataRead)(FILE *, int))
{
   HashTable *ht;

   if ( dataRead == NULL )
      return HashLoad(fp, NULL);  /* no reason not to load the data here */
   ht = AllocateHashTable(0, 2); /* cchKey set later, fSaveKey should be 2! */
   ht->fpData = fp;              /* tells HashDoLoad() to only load keys */
   ht->dataRead = dataRead;
   return HashDoLoad(fp, dataRead, ht);
}

/*************************************************************************\
| PrintHashTable()                                                        |
|     A debugging tool.  Prints the entire contents of the hash table,    |
|     like so: : key of the contents.  Returns number of bytes            |
|     allocated.  If time is not -1, we print it as the time required     |
|     for the hash.  If iForm is 0, we just print the stats.  If it's     |
|     1, we print the keys and data too, but the keys are printed as      |
|     ulongs.  If it's 2, we print the keys correctly (as long numbers    |
|     or as strings).                                                     |
\*************************************************************************/

ulong PrintHashTable(HashTable *ht, double time, int iForm)
{
   ulong cbData = 0, cbBin = 0, cItems = 0, cOccupied = 0;
   HTItem *item;

   printf("HASH TABLE.\n");
   if ( time > -1.0 )
   {
      printf("----------\n");
      printf("Time: %27.2f\n", time);
   }

   for ( item = Table(FirstBucket)(ht->iter, ht->table, ht->cBuckets);
         item;  item = Table(NextBucket)(ht->iter) )
   {
      cOccupied++;                /* this includes deleted buckets */
      if ( IS_BCK_DELETED(item) ) /* we don't need you for anything else */
         continue;
      cItems++;                   /* this is for a sanity check */
      if ( STORES_PTR(ht) )
         cbData += ht->cchKey == NULL_TERMINATED ?
                   WORD_ROUND(strlen((char *)item->key)+1) : ht->cchKey;
      else
         cbBin -= sizeof(item->key), cbData += sizeof(item->key);
      cbBin -= sizeof(item->data), cbData += sizeof(item->data);

      if ( iForm != 0 )           /* we want the actual contents */
      {
         if ( iForm == 2 && ht->cchKey == NULL_TERMINATED )
            printf("%s/%lu\n", (char *)item->key, item->data);
         else if ( iForm == 2 && STORES_PTR(ht) )
            printf("%.*s/%lu\n", (int)ht->cchKey, (char *)item->key,
                   item->data);
         else          /* either key actually is a ulong, or iForm == 1 */
            printf("%lu/%lu\n", item->key, item->data);
      }
   }
   assert( cItems == ht->cItems );       /* sanity check */
   cbBin = Table(Memory)(ht->cBuckets, cOccupied);

   printf("----------\n");
   printf("%lu buckets (%lu bytes).  %lu empty.  %lu hold deleted items.\n"
          "%lu items (%lu bytes).\n"
          "%lu bytes total.  %lu bytes (%2.1f%%) of this is ht overhead.\n",
          ht->cBuckets, cbBin, ht->cBuckets - cOccupied,
          cOccupied - ht->cItems, ht->cItems, cbData,
          cbData + cbBin, cbBin, cbBin*100.0/(cbBin+cbData));

   return cbData + cbBin;
}

sparsehash-2.0.2/experimental/libchash.h

/* Copyright (c) 1998 - 2005, Google Inc.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *
 *     * Redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above
 * copyright notice, this list of conditions and the following disclaimer
 * in the documentation and/or other materials provided with the
 * distribution.
 *     * Neither the name of Google Inc. nor the names of its
 * contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * ---
 * Author: Craig Silverstein
 *
 * This library is intended to be used for in-memory hash tables,
 * though it provides rudimentary permanent-storage capabilities.
 * It attempts to be fast, portable, and small.  The best algorithm
 * to fulfill these goals is an internal probing hashing algorithm,
 * as in Knuth, _Art of Computer Programming_, vol III.  Unlike
 * chained (open) hashing, it doesn't require a pointer for every
 * item, yet it is still constant time lookup in practice.
 *
 * Also to save space, we let the contents (both data and key) that
 * you insert be a union: if the key/data is small, we store it
 * directly in the hashtable, otherwise we store a pointer to it.
 * To keep you from having to figure out which, use KEY_PTR and
 * PTR_KEY to convert between the arguments to these functions and
 * a pointer to the real data.  For instance:
 *     char key[] = "ab", *key2;
 *     HTItem *bck; HashTable *ht;
 *     HashInsert(ht, PTR_KEY(ht, key), 0);
 *     bck = HashFind(ht, PTR_KEY(ht, "ab"));
 *     key2 = KEY_PTR(ht, bck->key);
 *
 * There is a rich set of operations supported:
 *    AllocateHashTable() -- Allocates a hashtable structure and
 *                           returns it.
 *        cchKey: if it's a positive number, then each key is a
 *                fixed-length record of that length.  If it's 0,
 *                the key is assumed to be a \0-terminated string.
 *        fSaveKey: normally, you are responsible for allocating
 *                  space for the key.  If this is 1, we make a
 *                  copy of the key for you.
 *    ClearHashTable() -- Removes everything from a hashtable
 *    FreeHashTable() -- Frees memory used by a hashtable
 *
 *    HashFind() -- takes a key (use PTR_KEY) and returns the
 *                  HTItem containing that key, or NULL if the
 *                  key is not in the hashtable.
 *    HashFindLast() -- returns the item found by last HashFind()
 *    HashFindOrInsert() -- inserts the key/data pair if the key
 *                          is not already in the hashtable, or
 *                          returns the appropriate HTItem if it is.
 *    HashFindOrInsertItem() -- takes key/data as an HTItem.
 *    HashInsert() -- adds a key/data pair to the hashtable.  What
 *                    it does if the key is already in the table
 *                    depends on the value of SAMEKEY_OVERWRITE.
 *    HashInsertItem() -- takes key/data as an HTItem.
 *    HashDelete() -- removes a key/data pair from the hashtable,
 *                    if it's there.  RETURNS 1 if it was there,
 *                    0 else.
 *        If you use sparse tables and never delete, the full data
 *        space is available.  Otherwise we steal -2 (maybe -3),
 *        so you can't have data fields with those values.
 *    HashDeleteLast() -- deletes the item returned by the last Find().
 *
 *    HashFirstBucket() -- used to iterate over the buckets in a
 *                         hashtable.  DON'T INSERT OR DELETE WHILE
 *                         ITERATING!  You can't nest iterations.
 *    HashNextBucket() -- RETURNS NULL at the end of iterating.
 *
 *    HashSetDeltaGoalSize() -- if you're going to insert 1000 items
 *                              at once, call this fn with arg 1000.
 *                              It grows the table more intelligently.
 *
 *    HashSave() -- saves the hashtable to a file.  It saves keys ok,
 *        but it doesn't know how to interpret the data field, so if
 *        the data field is a pointer to some complex structure, you
 *        must send a function that takes a file pointer and a pointer
 *        to the structure, and write whatever you want to write.  It
 *        should return the number of bytes written.  If the file is
 *        NULL, it should just return the number of bytes it would
 *        write, without writing anything.
 *        If your data field is just an integer, not a pointer, just
 *        send NULL for the function.
 *    HashLoad() -- loads a hashtable.  It needs a function that takes
 *        a file and the size of the structure, and expects you to read
 *        in the structure and return a pointer to it.  You must do
 *        memory allocation, etc.  If the data is just a number, send
 *        NULL.
 *    HashLoadKeys() -- unlike HashLoad(), doesn't load the data off
 *        disk until needed.  This saves memory, but if you look up
 *        the same key a lot, it does a disk access each time.
 *        You can't do Insert() or Delete() on hashtables that were
 *        loaded from disk.
 */

#include <stdio.h>
#include <sys/types.h>         /* includes definition of "ulong", we hope */
#define ulong u_long

#define MAGIC_KEY "CHsh"       /* when we save the file */

#ifndef LOG_WORD_SIZE          /* 5 for 32 bit words, 6 for 64 */
#if defined (__LP64__) || defined (_LP64)
#define LOG_WORD_SIZE   6      /* log_2(sizeof(ulong)) [in bits] */
#else
#define LOG_WORD_SIZE   5      /* log_2(sizeof(ulong)) [in bits] */
#endif
#endif

   /* The following gives a speed/time tradeoff: how many buckets are *
    * in each bin.
0 gives 32 buckets/bin, which is a good number. */
#ifndef LOG_BM_WORDS
#define LOG_BM_WORDS    0      /* each group has 2^L_B_W * 32 buckets */
#endif

   /* The following are all parameters that affect performance. */
#ifndef JUMP
#define JUMP(key, offset)   ( ++(offset) )  /* ( 1 ) for linear hashing */
#endif
#ifndef Table
#define Table(x)            Sparse##x     /* Dense##x for dense tables */
#endif
#ifndef FAST_DELETE
#define FAST_DELETE         0      /* if it's 1, we never shrink the ht */
#endif
#ifndef SAMEKEY_OVERWRITE
#define SAMEKEY_OVERWRITE   1    /* overwrite item with our key on insert? */
#endif
#ifndef OCCUPANCY_PCT
#define OCCUPANCY_PCT       0.5    /* large PCT means smaller and slower */
#endif
#ifndef MIN_HASH_SIZE
#define MIN_HASH_SIZE       512    /* ht size when first created */
#endif

   /* When deleting a bucket, we can't just empty it (future hashes  *
    * may fail); instead we set the data field to DELETED.  Thus you *
    * should set DELETED to a data value you never use.  Better yet, *
    * if you don't need to delete, define INSERT_ONLY.               */
#ifndef INSERT_ONLY
#define DELETED              -2UL
#define IS_BCK_DELETED(bck)  ( (bck) && (bck)->data == DELETED )
#define SET_BCK_DELETED(ht, bck)  do { (bck)->data = DELETED;            \
                                       FREE_KEY(ht, (bck)->key); } while ( 0 )
#else
#define IS_BCK_DELETED(bck)  0
#define SET_BCK_DELETED(ht, bck)                                         \
   do { fprintf(stderr, "Deletion not supported for insert-only hashtable\n");\
        exit(2); } while ( 0 )
#endif

   /* We need the following only for dense buckets (Dense##x above).  *
    * If you need to, set this to a value you'll never use for data.  */
#define EMPTY   -3UL           /* steal more of the bck->data space */


   /* This is what an item is.  Either can be cast to a pointer.
    */
typedef struct {
   ulong data;       /* 4 bytes for data: either a pointer or an integer */
   ulong key;        /* 4 bytes for the key: either a pointer or an int */
} HTItem;

struct Table(Bin);                          /* defined in chash.c, I hope */
struct Table(Iterator);
typedef struct Table(Bin)       Table;      /* Expands to SparseBin, etc */
typedef struct Table(Iterator)  TableIterator;

   /* for STORES_PTR to work ok, cchKey MUST BE DEFINED 1st, cItems 2nd! */
typedef struct HashTable {
   ulong cchKey;        /* the length of the key, or 0 if it's \0 terminated */
   ulong cItems;        /* number of items currently in the hashtable */
   ulong cDeletedItems; /* # of buckets holding DELETE in the hashtable */
   ulong cBuckets;      /* size of the table */
   Table *table;        /* The actual contents of the hashtable */
   int fSaveKeys;       /* 1 if we copy keys locally; 2 if keys in one block */
   int cDeltaGoalSize;  /* # of coming inserts (or deletes, if <0) we expect */
   HTItem *posLastFind; /* position of last Find() command */
   TableIterator *iter; /* used in First/NextBucket */

   FILE *fpData;        /* if non-NULL, what item->data points into */
   char * (*dataRead)(FILE *, int); /* how to load data from disk */
   HTItem bckData;      /* holds data after being loaded from disk */
} HashTable;

   /* Small keys are stored and passed directly, but large keys are
    * stored and passed as pointers.  To make it easier to remember
    * what to pass, we provide two functions:
    *   PTR_KEY: give it a pointer to your data, and it returns
    *            something appropriate to send to Hash() functions or
    *            be stored in a data field.
    *   KEY_PTR: give it something returned by a Hash() routine, and
    *            it returns a (char *) pointer to the actual data.
    */
#define HashKeySize(ht)   ( ((ulong *)(ht))[0] )  /* this is how we inline */
#define HashSize(ht)      ( ((ulong *)(ht))[1] )  /* ...a la C++ :-) */

#define STORES_PTR(ht)    ( HashKeySize(ht) == 0 || \
                            HashKeySize(ht) > sizeof(ulong) )
#define KEY_PTR(ht, key)  ( STORES_PTR(ht) ?
                            (char *)(key) : (char *)&(key) )
#ifdef DONT_HAVE_TO_WORRY_ABOUT_BUS_ERRORS
#define PTR_KEY(ht, ptr)  ( STORES_PTR(ht) ? (ulong)(ptr) : *(ulong *)(ptr) )
#else
#define PTR_KEY(ht, ptr)  ( STORES_PTR(ht) ? (ulong)(ptr) : HTcopy((char *)ptr))
#endif


   /* Function prototypes */
unsigned long HTcopy(char *pul);        /* for PTR_KEY, not for users */
struct HashTable *AllocateHashTable(int cchKey, int fSaveKeys);
void ClearHashTable(struct HashTable *ht);
void FreeHashTable(struct HashTable *ht);
HTItem *HashFind(struct HashTable *ht, ulong key);
HTItem *HashFindLast(struct HashTable *ht);
HTItem *HashFindOrInsert(struct HashTable *ht, ulong key, ulong dataInsert);
HTItem *HashFindOrInsertItem(struct HashTable *ht, HTItem *pItem);
HTItem *HashInsert(struct HashTable *ht, ulong key, ulong data);
HTItem *HashInsertItem(struct HashTable *ht, HTItem *pItem);
int HashDelete(struct HashTable *ht, ulong key);
int HashDeleteLast(struct HashTable *ht);
HTItem *HashFirstBucket(struct HashTable *ht);
HTItem *HashNextBucket(struct HashTable *ht);
int HashSetDeltaGoalSize(struct HashTable *ht, int delta);
void HashSave(FILE *fp, struct HashTable *ht, int (*write)(FILE *, char *));
struct HashTable *HashLoad(FILE *fp, char * (*read)(FILE *, int));
struct HashTable *HashLoadKeys(FILE *fp, char * (*read)(FILE *, int));

sparsehash-2.0.2/experimental/README

This is a C version of sparsehash (and also, maybe, densehash) that
I wrote way back when, and served as the inspiration for the C++
version.  The API for the C version is much uglier than the C++,
because of the lack of template support.

I believe the class works, but I'm not convinced it's really flexible
or easy enough to use.

It would be nice to rework this C class to follow the C++ API as
closely as possible (eg have a set_deleted_key() instead of using a
#define like this code does now).
I believe the code compiles and runs, if anybody is interested in
using it now, but it's subject to major change in the future, as
people work on it.

Craig Silverstein
20 March 2005
sparsehash-2.0.2/experimental/.svn/text-base/example.c.svn-base

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include "libchash.h"

static void TestInsert() {
  struct HashTable* ht;
  HTItem* bck;

  ht = AllocateHashTable(1, 0);    /* value is 1 byte, 0: don't copy keys */

  HashInsert(ht, PTR_KEY(ht, "January"), 31);  /* 0: don't overwrite old val */
  bck = HashInsert(ht, PTR_KEY(ht, "February"), 28);
  bck = HashInsert(ht, PTR_KEY(ht, "March"), 31);

  bck = HashFind(ht, PTR_KEY(ht, "February"));
  assert(bck);
  assert(bck->data == 28);

  FreeHashTable(ht);
}

static void TestFindOrInsert() {
  struct HashTable* ht;
  int i;
  int iterations = 1000000;
  int range = 30;                  /* random number between 1 and 30 */

  ht = AllocateHashTable(4, 0);    /* value is 4 bytes, 0: don't copy keys */

  /* We'll test how good rand() is as a random number generator */
  for (i = 0; i < iterations; ++i) {
    int key = rand() % range;
    HTItem* bck = HashFindOrInsert(ht, key, 0);   /* initialize to 0 */
    bck->data++;                   /* found one more of them */
  }

  for (i = 0; i < range; ++i) {
    HTItem* bck = HashFind(ht, i);
    if (bck) {
      printf("%3lu: %lu\n", bck->key, bck->data);   /* key/data are ulongs */
    } else {
      printf("%3d: 0\n", i);
    }
  }

  FreeHashTable(ht);
}

int main(int argc, char** argv) {
  TestInsert();
  TestFindOrInsert();
  return 0;
}
(ulong)(ptr) : HTcopy((char *)ptr)) #endif /* Function prototypes */ unsigned long HTcopy(char *pul); /* for PTR_KEY, not for users */ struct HashTable *AllocateHashTable(int cchKey, int fSaveKeys); void ClearHashTable(struct HashTable *ht); void FreeHashTable(struct HashTable *ht); HTItem *HashFind(struct HashTable *ht, ulong key); HTItem *HashFindLast(struct HashTable *ht); HTItem *HashFindOrInsert(struct HashTable *ht, ulong key, ulong dataInsert); HTItem *HashFindOrInsertItem(struct HashTable *ht, HTItem *pItem); HTItem *HashInsert(struct HashTable *ht, ulong key, ulong data); HTItem *HashInsertItem(struct HashTable *ht, HTItem *pItem); int HashDelete(struct HashTable *ht, ulong key); int HashDeleteLast(struct HashTable *ht); HTItem *HashFirstBucket(struct HashTable *ht); HTItem *HashNextBucket(struct HashTable *ht); int HashSetDeltaGoalSize(struct HashTable *ht, int delta); void HashSave(FILE *fp, struct HashTable *ht, int (*write)(FILE *, char *)); struct HashTable *HashLoad(FILE *fp, char * (*read)(FILE *, int)); struct HashTable *HashLoadKeys(FILE *fp, char * (*read)(FILE *, int)); sparsehash-2.0.2/experimental/.svn/text-base/Makefile.svn-base0000444000175000017500000000031211721252346021224 00000000000000example: example.o libchash.o $(CC) $(CFLAGS) $(LDFLAGS) -o $@ $^ .SUFFIXES: .c .o .h .c.o: $(CC) -c $(CPPFLAGS) $(CFLAGS) -o $@ $< example.o: example.c libchash.h libchash.o: libchash.c libchash.h sparsehash-2.0.2/experimental/.svn/text-base/libchash.c.svn-base0000444000175000017500000020105311721252346021512 00000000000000/* Copyright (c) 1998 - 2005, Google Inc. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. 
* * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following disclaimer * in the documentation and/or other materials provided with the * distribution. * * Neither the name of Google Inc. nor the names of its * contributors may be used to endorse or promote products derived from * this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * * --- * Author: Craig Silverstein * * This library is intended to be used for in-memory hash tables, * though it provides rudimentary permanent-storage capabilities. * It attempts to be fast, portable, and small. The best algorithm * to fulfill these goals is an internal probing hashing algorithm, * as in Knuth, _Art of Computer Programming_, vol III. Unlike * chained (open) hashing, it doesn't require a pointer for every * item, yet it is still constant time lookup in practice. * * Also to save space, we let the contents (both data and key) that * you insert be a union: if the key/data is small, we store it * directly in the hashtable, otherwise we store a pointer to it. 
 * To keep you from having to figure out which, use KEY_PTR and
 * PTR_KEY to convert between the arguments to these functions and
 * a pointer to the real data.  For instance:
 *    char key[] = "ab", *key2;
 *    HTItem *bck; HashTable *ht;
 *    HashInsert(ht, PTR_KEY(ht, key), 0);
 *    bck = HashFind(ht, PTR_KEY(ht, "ab"));
 *    key2 = KEY_PTR(ht, bck->key);
 *
 * There is a rich set of operations supported:
 *    AllocateHashTable() -- Allocates a hashtable structure and
 *                           returns it.
 *        cchKey: if it's a positive number, then each key is a
 *                fixed-length record of that length.  If it's 0,
 *                the key is assumed to be a \0-terminated string.
 *        fSaveKey: normally, you are responsible for allocating
 *                  space for the key.  If this is 1, we make a
 *                  copy of the key for you.
 *    ClearHashTable() -- Removes everything from a hashtable
 *    FreeHashTable() -- Frees memory used by a hashtable
 *
 *    HashFind() -- takes a key (use PTR_KEY) and returns the
 *                  HTItem containing that key, or NULL if the
 *                  key is not in the hashtable.
 *    HashFindLast() -- returns the item found by the last HashFind()
 *    HashFindOrInsert() -- inserts the key/data pair if the key
 *                          is not already in the hashtable, or
 *                          returns the appropriate HTItem if it is.
 *    HashFindOrInsertItem() -- takes key/data as an HTItem.
 *    HashInsert() -- adds a key/data pair to the hashtable.  What
 *                    it does if the key is already in the table
 *                    depends on the value of SAMEKEY_OVERWRITE.
 *    HashInsertItem() -- takes key/data as an HTItem.
 *    HashDelete() -- removes a key/data pair from the hashtable,
 *                    if it's there.  RETURNS 1 if it was there,
 *                    0 otherwise.
 *        If you use sparse tables and never delete, the full data
 *        space is available.  Otherwise we steal -2 (maybe -3),
 *        so you can't have data fields with those values.
 *    HashDeleteLast() -- deletes the item returned by the last Find().
 *
 *    HashFirstBucket() -- used to iterate over the buckets in a
 *                         hashtable.  DON'T INSERT OR DELETE WHILE
 *                         ITERATING!  You can't nest iterations.
 *    HashNextBucket() -- RETURNS NULL at the end of iterating.
 *
 *    HashSetDeltaGoalSize() -- if you're going to insert 1000 items
 *                              at once, call this fn with arg 1000.
 *                              It grows the table more intelligently.
 *
 *    HashSave() -- saves the hashtable to a file.  It saves keys OK,
 *        but it doesn't know how to interpret the data field, so if
 *        the data field is a pointer to some complex structure, you
 *        must send a function that takes a file pointer and a pointer
 *        to the structure, and writes whatever you want to write.  It
 *        should return the number of bytes written.  If the file is
 *        NULL, it should just return the number of bytes it would
 *        write, without writing anything.
 *        If your data field is just an integer, not a pointer, just
 *        send NULL for the function.
 *    HashLoad() -- loads a hashtable.  It needs a function that takes
 *        a file and the size of the structure, and expects you to
 *        read in the structure and return a pointer to it.  You must
 *        do memory allocation, etc.  If the data is just a number,
 *        send NULL.
 *    HashLoadKeys() -- unlike HashLoad(), doesn't load the data off
 *        disk until needed.  This saves memory, but if you look up
 *        the same key a lot, it does a disk access each time.
 *    You can't do Insert() or Delete() on hashtables that were loaded
 *    from disk.
 *
 * See libchash.h for parameters you can modify.  Make sure LOG_WORD_SIZE
 * is defined correctly for your machine!  (5 for 32 bit words, 6 for 64).
 */

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>            /* for strcmp, memcmp, etc */
#include <sys/types.h>         /* ULTRIX needs this for in.h */
#include <netinet/in.h>        /* for reading/writing hashtables */
#include "libchash.h"          /* all the types */

/* if keys are stored directly but cchKey is less than sizeof(ulong), */
/* this cuts off the bits at the end                                  */
char grgKeyTruncMask[sizeof(ulong)][sizeof(ulong)];
#define KEY_TRUNC(ht, key)                                                 \
   ( STORES_PTR(ht) || (ht)->cchKey == sizeof(ulong)                       \
     ? (key) : ((key) & *(ulong *)&(grgKeyTruncMask[(ht)->cchKey][0])) )

/* round num up to a multiple of wordsize.
(LOG_WORD_SIZE-3 is in bytes) */ #define WORD_ROUND(num) ( ((num-1) | ((1<<(LOG_WORD_SIZE-3))-1)) + 1 ) #define NULL_TERMINATED 0 /* val of cchKey if keys are null-term strings */ /* Useful operations we do to keys: compare them, copy them, free them */ #define KEY_CMP(ht, key1, key2) ( !STORES_PTR(ht) ? (key1) - (key2) : \ (key1) == (key2) ? 0 : \ HashKeySize(ht) == NULL_TERMINATED ? \ strcmp((char *)key1, (char *)key2) :\ memcmp((void *)key1, (void *)key2, \ HashKeySize(ht)) ) #define COPY_KEY(ht, keyTo, keyFrom) do \ if ( !STORES_PTR(ht) || !(ht)->fSaveKeys ) \ (keyTo) = (keyFrom); /* just copy pointer or info */\ else if ( (ht)->cchKey == NULL_TERMINATED ) /* copy 0-term.ed str */\ { \ (keyTo) = (ulong)HTsmalloc( WORD_ROUND(strlen((char *)(keyFrom))+1) ); \ strcpy((char *)(keyTo), (char *)(keyFrom)); \ } \ else \ { \ (keyTo) = (ulong) HTsmalloc( WORD_ROUND((ht)->cchKey) ); \ memcpy( (char *)(keyTo), (char *)(keyFrom), (ht)->cchKey); \ } \ while ( 0 ) #define FREE_KEY(ht, key) do \ if ( STORES_PTR(ht) && (ht)->fSaveKeys ) \ if ( (ht)->cchKey == NULL_TERMINATED ) \ HTfree((char *)(key), WORD_ROUND(strlen((char *)(key))+1)); \ else \ HTfree((char *)(key), WORD_ROUND((ht)->cchKey)); \ while ( 0 ) /* the following are useful for bitmaps */ /* Format is like this (if 1 word = 4 bits): 3210 7654 ba98 fedc ... */ typedef ulong HTBitmapPart; /* this has to be unsigned, for >> */ typedef HTBitmapPart HTBitmap[1<> LOG_WORD_SIZE) << (LOG_WORD_SIZE-3) ) #define MOD2(i, logmod) ( (i) & ((1<<(logmod))-1) ) #define DIV_NUM_ENTRIES(i) ( (i) >> LOG_WORD_SIZE ) #define MOD_NUM_ENTRIES(i) ( MOD2(i, LOG_WORD_SIZE) ) #define MODBIT(i) ( ((ulong)1) << MOD_NUM_ENTRIES(i) ) #define TEST_BITMAP(bm, i) ( (bm)[DIV_NUM_ENTRIES(i)] & MODBIT(i) ? 
1 : 0 ) #define SET_BITMAP(bm, i) (bm)[DIV_NUM_ENTRIES(i)] |= MODBIT(i) #define CLEAR_BITMAP(bm, i) (bm)[DIV_NUM_ENTRIES(i)] &= ~MODBIT(i) /* the following are useful for reading and writing hashtables */ #define READ_UL(fp, data) \ do { \ long _ul; \ fread(&_ul, sizeof(_ul), 1, (fp)); \ data = ntohl(_ul); \ } while (0) #define WRITE_UL(fp, data) \ do { \ long _ul = htonl((long)(data)); \ fwrite(&_ul, sizeof(_ul), 1, (fp)); \ } while (0) /* Moves data from disk to memory if necessary. Note dataRead cannot be * * NULL, because then we might as well (and do) load the data into memory */ #define LOAD_AND_RETURN(ht, loadCommand) /* lC returns an HTItem * */ \ if ( !(ht)->fpData ) /* data is stored in memory */ \ return (loadCommand); \ else /* must read data off of disk */ \ { \ int cchData; \ HTItem *bck; \ if ( (ht)->bckData.data ) free((char *)(ht)->bckData.data); \ ht->bckData.data = (ulong)NULL; /* needed if loadCommand fails */ \ bck = (loadCommand); \ if ( bck == NULL ) /* loadCommand failed: key not found */ \ return NULL; \ else \ (ht)->bckData = *bck; \ fseek(ht->fpData, (ht)->bckData.data, SEEK_SET); \ READ_UL((ht)->fpData, cchData); \ (ht)->bckData.data = (ulong)(ht)->dataRead((ht)->fpData, cchData); \ return &((ht)->bckData); \ } /* ======================================================================== */ /* UTILITY ROUTINES */ /* ---------------------- */ /* HTsmalloc() -- safe malloc * allocates memory, or crashes if the allocation fails. */ static void *HTsmalloc(unsigned long size) { void *retval; if ( size == 0 ) return NULL; retval = (void *)malloc(size); if ( !retval ) { fprintf(stderr, "HTsmalloc: Unable to allocate %lu bytes of memory\n", size); exit(1); } return retval; } /* HTscalloc() -- safe calloc * allocates memory and initializes it to 0, or crashes if * the allocation fails. 
 */
static void *HTscalloc(unsigned long size)
{
   void *retval;

   retval = (void *)calloc(size, 1);
   if ( !retval && size > 0 )
   {
      fprintf(stderr, "HTscalloc: Unable to allocate %lu bytes of memory\n",
              size);
      exit(1);
   }
   return retval;
}

/* HTsrealloc() -- safe realloc
 *    grows a previously allocated block of memory, or crashes if
 *    the allocation fails.
 */
static void *HTsrealloc(void *ptr, unsigned long new_size, long delta)
{
   if ( ptr == NULL )
      return HTsmalloc(new_size);
   ptr = realloc(ptr, new_size);
   if ( !ptr && new_size > 0 )
   {
      fprintf(stderr, "HTsrealloc: Unable to reallocate %lu bytes of memory\n",
              new_size);
      exit(1);
   }
   return ptr;
}

/* HTfree() -- keep track of memory use
 *    frees memory using free, but updates count of how much memory
 *    is being used.
 */
static void HTfree(void *ptr, unsigned long size)
{
   if ( size > 0 )        /* some systems seem to not like freeing NULL */
      free(ptr);
}

/*************************************************************************\
| HTcopy()                                                                |
|     Sometimes we interpret data as a ulong.  But ulongs must be         |
|     aligned on some machines, so instead of casting we copy.            |
\*************************************************************************/

unsigned long HTcopy(char *ul)
{
   unsigned long retval;

   memcpy(&retval, ul, sizeof(retval));
   return retval;
}

/*************************************************************************\
| HTSetupKeyTrunc()                                                       |
|     If keys are stored directly but cchKey is less than                 |
|     sizeof(ulong), this cuts off the bits at the end.                   |
\*************************************************************************/

static void HTSetupKeyTrunc(void)
{
   int i, j;

   for ( i = 0; i < sizeof(unsigned long); i++ )
      for ( j = 0; j < sizeof(unsigned long); j++ )
         grgKeyTruncMask[i][j] = j < i ?
255 : 0; /* chars have 8 bits */ } /* ======================================================================== */ /* TABLE ROUTINES */ /* -------------------- */ /* The idea is that a hashtable with (logically) t buckets is divided * into t/M groups of M buckets each. (M is a constant set in * LOG_BM_WORDS for efficiency.) Each group is stored sparsely. * Thus, inserting into the table causes some array to grow, which is * slow but still constant time. Lookup involves doing a * logical-position-to-sparse-position lookup, which is also slow but * constant time. The larger M is, the slower these operations are * but the less overhead (slightly). * * To store the sparse array, we store a bitmap B, where B[i] = 1 iff * bucket i is non-empty. Then to look up bucket i we really look up * array[# of 1s before i in B]. This is constant time for fixed M. * * Terminology: the position of an item in the overall table (from * 1 .. t) is called its "location." The logical position in a group * (from 1 .. M ) is called its "position." The actual location in * the array (from 1 .. # of non-empty buckets in the group) is * called its "offset." 
* * The following operations are supported: * o Allocate an array with t buckets, all empty * o Free a array (but not whatever was stored in the buckets) * o Tell whether or not a bucket is empty * o Return a bucket with a given location * o Set the value of a bucket at a given location * o Iterate through all the buckets in the array * o Read and write an occupancy bitmap to disk * o Return how much memory is being allocated by the array structure */ #ifndef SparseBucket /* by default, each bucket holds an HTItem */ #define SparseBucket HTItem #endif typedef struct SparseBin { SparseBucket *binSparse; HTBitmap bmOccupied; /* bmOccupied[i] is 1 if bucket i has an item */ short cOccupied; /* size of binSparse; useful for iterators, eg */ } SparseBin; typedef struct SparseIterator { long posGroup; long posOffset; SparseBin *binSparse; /* state info, to avoid args for NextBucket() */ ulong cBuckets; } SparseIterator; #define LOG_LOW_BIN_SIZE ( LOG_BM_WORDS+LOG_WORD_SIZE ) #define SPARSE_GROUPS(cBuckets) ( (((cBuckets)-1) >> LOG_LOW_BIN_SIZE) + 1 ) /* we need a small function to figure out # of items set in the bm */ static HTOffset EntriesUpto(HTBitmapPart *bm, int i) { /* returns # of set bits in 0..i-1 */ HTOffset retval = 0; static HTOffset rgcBits[256] = /* # of bits set in one char */ {0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4, 1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5, 1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7, 1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7, 3, 4, 4, 5, 4, 5, 
5, 6, 4, 5, 5, 6, 5, 6, 6, 7, 4, 5, 5, 6, 5, 6, 6, 7, 5, 6, 6, 7, 6, 7, 7, 8}; if ( i == 0 ) return 0; for ( ; i > sizeof(*bm)*8; i -= sizeof(*bm)*8, bm++ ) { /* think of it as loop unrolling */ #if LOG_WORD_SIZE >= 3 /* 1 byte per word, or more */ retval += rgcBits[*bm & 255]; /* get the low byte */ #if LOG_WORD_SIZE >= 4 /* at least 2 bytes */ retval += rgcBits[(*bm >> 8) & 255]; #if LOG_WORD_SIZE >= 5 /* at least 4 bytes */ retval += rgcBits[(*bm >> 16) & 255]; retval += rgcBits[(*bm >> 24) & 255]; #if LOG_WORD_SIZE >= 6 /* 8 bytes! */ retval += rgcBits[(*bm >> 32) & 255]; retval += rgcBits[(*bm >> 40) & 255]; retval += rgcBits[(*bm >> 48) & 255]; retval += rgcBits[(*bm >> 56) & 255]; #if LOG_WORD_SIZE >= 7 /* not a concern for a while... */ #error Need to rewrite EntriesUpto to support such big words #endif /* >8 bytes */ #endif /* 8 bytes */ #endif /* 4 bytes */ #endif /* 2 bytes */ #endif /* 1 byte */ } switch ( i ) { /* from 0 to 63 */ case 0: return retval; #if LOG_WORD_SIZE >= 3 /* 1 byte per word, or more */ case 1: case 2: case 3: case 4: case 5: case 6: case 7: case 8: return (retval + rgcBits[*bm & ((1 << i)-1)]); #if LOG_WORD_SIZE >= 4 /* at least 2 bytes */ case 9: case 10: case 11: case 12: case 13: case 14: case 15: case 16: return (retval + rgcBits[*bm & 255] + rgcBits[(*bm >> 8) & ((1 << (i-8))-1)]); #if LOG_WORD_SIZE >= 5 /* at least 4 bytes */ case 17: case 18: case 19: case 20: case 21: case 22: case 23: case 24: return (retval + rgcBits[*bm & 255] + rgcBits[(*bm >> 8) & 255] + rgcBits[(*bm >> 16) & ((1 << (i-16))-1)]); case 25: case 26: case 27: case 28: case 29: case 30: case 31: case 32: return (retval + rgcBits[*bm & 255] + rgcBits[(*bm >> 8) & 255] + rgcBits[(*bm >> 16) & 255] + rgcBits[(*bm >> 24) & ((1 << (i-24))-1)]); #if LOG_WORD_SIZE >= 6 /* 8 bytes! 
*/ case 33: case 34: case 35: case 36: case 37: case 38: case 39: case 40: return (retval + rgcBits[*bm & 255] + rgcBits[(*bm >> 8) & 255] + rgcBits[(*bm >> 16) & 255] + rgcBits[(*bm >> 24) & 255] + rgcBits[(*bm >> 32) & ((1 << (i-32))-1)]); case 41: case 42: case 43: case 44: case 45: case 46: case 47: case 48: return (retval + rgcBits[*bm & 255] + rgcBits[(*bm >> 8) & 255] + rgcBits[(*bm >> 16) & 255] + rgcBits[(*bm >> 24) & 255] + rgcBits[(*bm >> 32) & 255] + rgcBits[(*bm >> 40) & ((1 << (i-40))-1)]); case 49: case 50: case 51: case 52: case 53: case 54: case 55: case 56: return (retval + rgcBits[*bm & 255] + rgcBits[(*bm >> 8) & 255] + rgcBits[(*bm >> 16) & 255] + rgcBits[(*bm >> 24) & 255] + rgcBits[(*bm >> 32) & 255] + rgcBits[(*bm >> 40) & 255] + rgcBits[(*bm >> 48) & ((1 << (i-48))-1)]); case 57: case 58: case 59: case 60: case 61: case 62: case 63: case 64: return (retval + rgcBits[*bm & 255] + rgcBits[(*bm >> 8) & 255] + rgcBits[(*bm >> 16) & 255] + rgcBits[(*bm >> 24) & 255] + rgcBits[(*bm >> 32) & 255] + rgcBits[(*bm >> 40) & 255] + rgcBits[(*bm >> 48) & 255] + rgcBits[(*bm >> 56) & ((1 << (i-56))-1)]); #endif /* 8 bytes */ #endif /* 4 bytes */ #endif /* 2 bytes */ #endif /* 1 byte */ } assert("" == "word size is too big in EntriesUpto()"); return -1; } #define SPARSE_POS_TO_OFFSET(bm, i) ( EntriesUpto(&((bm)[0]), i) ) #define SPARSE_BUCKET(bin, location) \ ( (bin)[(location) >> LOG_LOW_BIN_SIZE].binSparse + \ SPARSE_POS_TO_OFFSET((bin)[(location)>>LOG_LOW_BIN_SIZE].bmOccupied, \ MOD2(location, LOG_LOW_BIN_SIZE)) ) /*************************************************************************\ | SparseAllocate() | | SparseFree() | | Allocates, sets-to-empty, and frees a sparse array. All you need | | to tell me is how many buckets you want. I return the number of | | buckets I actually allocated, setting the array as a parameter. | | Note that you have to set auxilliary parameters, like cOccupied. 
| \*************************************************************************/ static ulong SparseAllocate(SparseBin **pbinSparse, ulong cBuckets) { int cGroups = SPARSE_GROUPS(cBuckets); *pbinSparse = (SparseBin *) HTscalloc(sizeof(**pbinSparse) * cGroups); return cGroups << LOG_LOW_BIN_SIZE; } static SparseBin *SparseFree(SparseBin *binSparse, ulong cBuckets) { ulong iGroup, cGroups = SPARSE_GROUPS(cBuckets); for ( iGroup = 0; iGroup < cGroups; iGroup++ ) HTfree(binSparse[iGroup].binSparse, (sizeof(*binSparse[iGroup].binSparse) * binSparse[iGroup].cOccupied)); HTfree(binSparse, sizeof(*binSparse) * cGroups); return NULL; } /*************************************************************************\ | SparseIsEmpty() | | SparseFind() | | You give me a location (ie a number between 1 and t), and I | | return the bucket at that location, or NULL if the bucket is | | empty. It's OK to call Find() on an empty table. | \*************************************************************************/ static int SparseIsEmpty(SparseBin *binSparse, ulong location) { return !TEST_BITMAP(binSparse[location>>LOG_LOW_BIN_SIZE].bmOccupied, MOD2(location, LOG_LOW_BIN_SIZE)); } static SparseBucket *SparseFind(SparseBin *binSparse, ulong location) { if ( SparseIsEmpty(binSparse, location) ) return NULL; return SPARSE_BUCKET(binSparse, location); } /*************************************************************************\ | SparseInsert() | | You give me a location, and contents to put there, and I insert | | into that location and RETURN a pointer to the location. If | | bucket was already occupied, I write over the contents only if | | *pfOverwrite is 1. We set *pfOverwrite to 1 if there was someone | | there (whether or not we overwrote) and 0 else. 
| \*************************************************************************/ static SparseBucket *SparseInsert(SparseBin *binSparse, SparseBucket *bckInsert, ulong location, int *pfOverwrite) { SparseBucket *bckPlace; HTOffset offset; bckPlace = SparseFind(binSparse, location); if ( bckPlace ) /* means we replace old contents */ { if ( *pfOverwrite ) *bckPlace = *bckInsert; *pfOverwrite = 1; return bckPlace; } binSparse += (location >> LOG_LOW_BIN_SIZE); offset = SPARSE_POS_TO_OFFSET(binSparse->bmOccupied, MOD2(location, LOG_LOW_BIN_SIZE)); binSparse->binSparse = (SparseBucket *) HTsrealloc(binSparse->binSparse, sizeof(*binSparse->binSparse) * ++binSparse->cOccupied, sizeof(*binSparse->binSparse)); memmove(binSparse->binSparse + offset+1, binSparse->binSparse + offset, (binSparse->cOccupied-1 - offset) * sizeof(*binSparse->binSparse)); binSparse->binSparse[offset] = *bckInsert; SET_BITMAP(binSparse->bmOccupied, MOD2(location, LOG_LOW_BIN_SIZE)); *pfOverwrite = 0; return binSparse->binSparse + offset; } /*************************************************************************\ | SparseFirstBucket() | | SparseNextBucket() | | SparseCurrentBit() | | Iterate through the occupied buckets of a dense hashtable. You | | must, of course, have allocated space yourself for the iterator. | \*************************************************************************/ static SparseBucket *SparseNextBucket(SparseIterator *iter) { if ( iter->posOffset != -1 && /* not called from FirstBucket()? 
*/ (++iter->posOffset < iter->binSparse[iter->posGroup].cOccupied) ) return iter->binSparse[iter->posGroup].binSparse + iter->posOffset; iter->posOffset = 0; /* start the next group */ for ( iter->posGroup++; iter->posGroup < SPARSE_GROUPS(iter->cBuckets); iter->posGroup++ ) if ( iter->binSparse[iter->posGroup].cOccupied > 0 ) return iter->binSparse[iter->posGroup].binSparse; /* + 0 */ return NULL; /* all remaining groups were empty */ } static SparseBucket *SparseFirstBucket(SparseIterator *iter, SparseBin *binSparse, ulong cBuckets) { iter->binSparse = binSparse; /* set it up for NextBucket() */ iter->cBuckets = cBuckets; iter->posOffset = -1; /* when we advance, we're at 0 */ iter->posGroup = -1; return SparseNextBucket(iter); } /*************************************************************************\ | SparseWrite() | | SparseRead() | | These are routines for storing a sparse hashtable onto disk. We | | store the number of buckets and a bitmap indicating which buckets | | are allocated (occupied). The actual contents of the buckets | | must be stored separately. 
| \*************************************************************************/ static void SparseWrite(FILE *fp, SparseBin *binSparse, ulong cBuckets) { ulong i, j; WRITE_UL(fp, cBuckets); for ( i = 0; i < SPARSE_GROUPS(cBuckets); i++ ) for ( j = 0; j < (1<rgBuckets, cBuckets); } static ulong DenseAllocate(DenseBin **pbin, ulong cBuckets) { *pbin = (DenseBin *) HTsmalloc(sizeof(*pbin)); (*pbin)->rgBuckets = (DenseBucket *) HTsmalloc(sizeof(*(*pbin)->rgBuckets) * cBuckets); DenseClear(*pbin, cBuckets); return cBuckets; } static DenseBin *DenseFree(DenseBin *bin, ulong cBuckets) { HTfree(bin->rgBuckets, sizeof(*bin->rgBuckets) * cBuckets); HTfree(bin, sizeof(*bin)); return NULL; } static int DenseIsEmpty(DenseBin *bin, ulong location) { return DENSE_IS_EMPTY(bin->rgBuckets, location); } static DenseBucket *DenseFind(DenseBin *bin, ulong location) { if ( DenseIsEmpty(bin, location) ) return NULL; return bin->rgBuckets + location; } static DenseBucket *DenseInsert(DenseBin *bin, DenseBucket *bckInsert, ulong location, int *pfOverwrite) { DenseBucket *bckPlace; bckPlace = DenseFind(bin, location); if ( bckPlace ) /* means something is already there */ { if ( *pfOverwrite ) *bckPlace = *bckInsert; *pfOverwrite = 1; /* set to 1 to indicate someone was there */ return bckPlace; } else { bin->rgBuckets[location] = *bckInsert; *pfOverwrite = 0; return bin->rgBuckets + location; } } static DenseBucket *DenseNextBucket(DenseIterator *iter) { for ( iter->pos++; iter->pos < iter->cBuckets; iter->pos++ ) if ( !DenseIsEmpty(iter->bin, iter->pos) ) return iter->bin->rgBuckets + iter->pos; return NULL; /* all remaining groups were empty */ } static DenseBucket *DenseFirstBucket(DenseIterator *iter, DenseBin *bin, ulong cBuckets) { iter->bin = bin; /* set it up for NextBucket() */ iter->cBuckets = cBuckets; iter->pos = -1; /* thus the next bucket will be 0 */ return DenseNextBucket(iter); } static void DenseWrite(FILE *fp, DenseBin *bin, ulong cBuckets) { ulong pos = 0, bit, bm; 
WRITE_UL(fp, cBuckets); while ( pos < cBuckets ) { bm = 0; for ( bit = 0; bit < 8*sizeof(ulong); bit++ ) { if ( !DenseIsEmpty(bin, pos) ) SET_BITMAP(&bm, bit); /* in fks-hash.h */ if ( ++pos == cBuckets ) break; } WRITE_UL(fp, bm); } } static ulong DenseRead(FILE *fp, DenseBin **pbin) { ulong pos = 0, bit, bm, cBuckets; READ_UL(fp, cBuckets); cBuckets = DenseAllocate(pbin, cBuckets); while ( pos < cBuckets ) { READ_UL(fp, bm); for ( bit = 0; bit < 8*sizeof(ulong); bit++ ) { if ( TEST_BITMAP(&bm, bit) ) /* in fks-hash.h */ DENSE_SET_OCCUPIED((*pbin)->rgBuckets, pos); else DENSE_SET_EMPTY((*pbin)->rgBuckets, pos); if ( ++pos == cBuckets ) break; } } return cBuckets; } static ulong DenseMemory(ulong cBuckets, ulong cOccupied) { return cBuckets * sizeof(DenseBucket); } /* ======================================================================== */ /* HASHING ROUTINES */ /* ---------------------- */ /* Implements a simple quadratic hashing scheme. We have a single hash * table of size t and a single hash function h(x). When inserting an * item, first we try h(x) % t. If it's occupied, we try h(x) + * i*(i-1)/2 % t for increasing values of i until we hit a not-occupied * space. To make this dynamic, we double the size of the hash table as * soon as more than half the cells are occupied. When deleting, we can * choose to shrink the hashtable when less than a quarter of the * cells are occupied, or we can choose never to shrink the hashtable. * For lookup, we check h(x) + i*(i-1)/2 % t (starting with i=0) until * we get a match or we hit an empty space. Note that as a result, * we can't make a cell empty on deletion, or lookups may end prematurely. * Instead we mark the cell as "deleted." We thus steal the value * DELETED as a possible "data" value. As long as data are pointers, * that's ok. * The hash increment we use, i(i-1)/2, is not the standard quadratic * hash increment, which is i^2. 
i(i-1)/2 covers the entire bucket space * when the hashtable size is a power of two, as it is for us. In fact, * the first n probes cover n distinct buckets; then it repeats. This * guarantees insertion will always succeed. * If you linear hashing, set JUMP in chash.h. You can also change * various other parameters there. */ /*************************************************************************\ | Hash() | | The hash function I use is due to Bob Jenkins (see | | http://burtleburtle.net/bob/hash/evahash.html | | According to http://burtleburtle.net/bob/c/lookup2.c, | | his implementation is public domain.) | | It takes 36 instructions, in 18 cycles if you're lucky. | | hashing depends on the fact the hashtable size is always a | | power of 2. cBuckets is probably ht->cBuckets. | \*************************************************************************/ #if LOG_WORD_SIZE == 5 /* 32 bit words */ #define mix(a,b,c) \ { \ a -= b; a -= c; a ^= (c>>13); \ b -= c; b -= a; b ^= (a<<8); \ c -= a; c -= b; c ^= (b>>13); \ a -= b; a -= c; a ^= (c>>12); \ b -= c; b -= a; b ^= (a<<16); \ c -= a; c -= b; c ^= (b>>5); \ a -= b; a -= c; a ^= (c>>3); \ b -= c; b -= a; b ^= (a<<10); \ c -= a; c -= b; c ^= (b>>15); \ } #ifdef WORD_HASH /* play with this on little-endian machines */ #define WORD_AT(ptr) ( *(ulong *)(ptr) ) #else #define WORD_AT(ptr) ( (ptr)[0] + ((ulong)(ptr)[1]<<8) + \ ((ulong)(ptr)[2]<<16) + ((ulong)(ptr)[3]<<24) ) #endif #elif LOG_WORD_SIZE == 6 /* 64 bit words */ #define mix(a,b,c) \ { \ a -= b; a -= c; a ^= (c>>43); \ b -= c; b -= a; b ^= (a<<9); \ c -= a; c -= b; c ^= (b>>8); \ a -= b; a -= c; a ^= (c>>38); \ b -= c; b -= a; b ^= (a<<23); \ c -= a; c -= b; c ^= (b>>5); \ a -= b; a -= c; a ^= (c>>35); \ b -= c; b -= a; b ^= (a<<49); \ c -= a; c -= b; c ^= (b>>11); \ a -= b; a -= c; a ^= (c>>12); \ b -= c; b -= a; b ^= (a<<18); \ c -= a; c -= b; c ^= (b>>22); \ } #ifdef WORD_HASH /* alpha is little-endian, btw */ #define WORD_AT(ptr) ( *(ulong *)(ptr) ) #else 
#define WORD_AT(ptr) ( (ptr)[0] + ((ulong)(ptr)[1]<<8) + \ ((ulong)(ptr)[2]<<16) + ((ulong)(ptr)[3]<<24) + \ ((ulong)(ptr)[4]<<32) + ((ulong)(ptr)[5]<<40) + \ ((ulong)(ptr)[6]<<48) + ((ulong)(ptr)[7]<<56) ) #endif #else /* neither 32 or 64 bit words */ #error This hash function can only hash 32 or 64 bit words. Sorry. #endif static ulong Hash(HashTable *ht, char *key, ulong cBuckets) { ulong a, b, c, cchKey, cchKeyOrig; cchKeyOrig = ht->cchKey == NULL_TERMINATED ? strlen(key) : ht->cchKey; a = b = c = 0x9e3779b9; /* the golden ratio; an arbitrary value */ for ( cchKey = cchKeyOrig; cchKey >= 3 * sizeof(ulong); cchKey -= 3 * sizeof(ulong), key += 3 * sizeof(ulong) ) { a += WORD_AT(key); b += WORD_AT(key + sizeof(ulong)); c += WORD_AT(key + sizeof(ulong)*2); mix(a,b,c); } c += cchKeyOrig; switch ( cchKey ) { /* deal with rest. Cases fall through */ #if LOG_WORD_SIZE == 5 case 11: c += (ulong)key[10]<<24; case 10: c += (ulong)key[9]<<16; case 9 : c += (ulong)key[8]<<8; /* the first byte of c is reserved for the length */ case 8 : b += WORD_AT(key+4); a+= WORD_AT(key); break; case 7 : b += (ulong)key[6]<<16; case 6 : b += (ulong)key[5]<<8; case 5 : b += key[4]; case 4 : a += WORD_AT(key); break; case 3 : a += (ulong)key[2]<<16; case 2 : a += (ulong)key[1]<<8; case 1 : a += key[0]; /* case 0 : nothing left to add */ #elif LOG_WORD_SIZE == 6 case 23: c += (ulong)key[22]<<56; case 22: c += (ulong)key[21]<<48; case 21: c += (ulong)key[20]<<40; case 20: c += (ulong)key[19]<<32; case 19: c += (ulong)key[18]<<24; case 18: c += (ulong)key[17]<<16; case 17: c += (ulong)key[16]<<8; /* the first byte of c is reserved for the length */ case 16: b += WORD_AT(key+8); a+= WORD_AT(key); break; case 15: b += (ulong)key[14]<<48; case 14: b += (ulong)key[13]<<40; case 13: b += (ulong)key[12]<<32; case 12: b += (ulong)key[11]<<24; case 11: b += (ulong)key[10]<<16; case 10: b += (ulong)key[ 9]<<8; case 9: b += (ulong)key[ 8]; case 8: a += WORD_AT(key); break; case 7: a += (ulong)key[ 
6]<<48; case 6: a += (ulong)key[ 5]<<40; case 5: a += (ulong)key[ 4]<<32; case 4: a += (ulong)key[ 3]<<24; case 3: a += (ulong)key[ 2]<<16; case 2: a += (ulong)key[ 1]<<8; case 1: a += (ulong)key[ 0]; /* case 0: nothing left to add */ #endif } mix(a,b,c); return c & (cBuckets-1); } /*************************************************************************\ | Rehash() | | You give me a hashtable, a new size, and a bucket to follow, and | | I resize the hashtable's bin to be the new size, rehashing | | everything in it. I keep particular track of the bucket you pass | | in, and RETURN a pointer to where the item in the bucket got to. | | (If you pass in NULL, I return an arbitrary pointer.) | \*************************************************************************/ static HTItem *Rehash(HashTable *ht, ulong cNewBuckets, HTItem *bckWatch) { Table *tableNew; ulong iBucketFirst; HTItem *bck, *bckNew = NULL; ulong offset; /* the i in h(x) + i*(i-1)/2 */ int fOverwrite = 0; /* not an issue: there can be no collisions */ assert( ht->table ); cNewBuckets = Table(Allocate)(&tableNew, cNewBuckets); /* Since we RETURN the new position of bckWatch, we want * * to make sure it doesn't get moved due to some table * * rehashing that comes after it's inserted. Thus, we * * have to put it in last. This makes the loop weird. 
*/ for ( bck = HashFirstBucket(ht); ; bck = HashNextBucket(ht) ) { if ( bck == NULL ) /* we're done iterating, so look at bckWatch */ { bck = bckWatch; if ( bck == NULL ) /* I guess bckWatch wasn't specified */ break; } else if ( bck == bckWatch ) continue; /* ignore if we see it during the iteration */ offset = 0; /* a new i for a new bucket */ for ( iBucketFirst = Hash(ht, KEY_PTR(ht, bck->key), cNewBuckets); !Table(IsEmpty)(tableNew, iBucketFirst); iBucketFirst = (iBucketFirst + JUMP(KEY_PTR(ht,bck->key), offset)) & (cNewBuckets-1) ) ; bckNew = Table(Insert)(tableNew, bck, iBucketFirst, &fOverwrite); if ( bck == bckWatch ) /* we're done with the last thing to do */ break; } Table(Free)(ht->table, ht->cBuckets); ht->table = tableNew; ht->cBuckets = cNewBuckets; ht->cDeletedItems = 0; return bckNew; /* new position of bckWatch, which was inserted last */ } /*************************************************************************\ | Find() | | Does the quadratic searching stuff. RETURNS NULL if we don't | | find an object with the given key, and a pointer to the Item | | holding the key, if we do. Also sets posLastFind. If piEmpty is | | non-NULL, we set it to the first open bucket we pass; helpful for | | doing a later insert if the search fails, for instance. 
| \*************************************************************************/ static HTItem *Find(HashTable *ht, ulong key, ulong *piEmpty) { ulong iBucketFirst; HTItem *item; ulong offset = 0; /* the i in h(x) + i*(i-1)/2 */ int fFoundEmpty = 0; /* set when we pass over an empty bucket */ ht->posLastFind = NULL; /* set up for failure: a new find starts */ if ( ht->table == NULL ) /* empty hash table: find is bound to fail */ return NULL; iBucketFirst = Hash(ht, KEY_PTR(ht, key), ht->cBuckets); while ( 1 ) /* now try all i > 0 */ { item = Table(Find)(ht->table, iBucketFirst); if ( item == NULL ) /* it's not in the table */ { if ( piEmpty && !fFoundEmpty ) *piEmpty = iBucketFirst; return NULL; } else { if ( IS_BCK_DELETED(item) ) /* always 0 ifdef INSERT_ONLY */ { if ( piEmpty && !fFoundEmpty ) { *piEmpty = iBucketFirst; fFoundEmpty = 1; } } else if ( !KEY_CMP(ht, key, item->key) ) /* must be occupied */ { ht->posLastFind = item; return item; /* we found it! */ } } iBucketFirst = ((iBucketFirst + JUMP(KEY_PTR(ht, key), offset)) & (ht->cBuckets-1)); } } /*************************************************************************\ | Insert() | | If an item with the key already exists in the hashtable, RETURNS | | a pointer to the item (replacing its data if fOverwrite is 1). | | If not, we find the first place-to-insert (which Find() is nice | | enough to set for us) and insert the item there, RETURNing a | | pointer to the item. We might grow the hashtable if it's getting | | full. Note we include buckets holding DELETED when determining | | fullness, because they slow down searching. 
| \*************************************************************************/ static ulong NextPow2(ulong x) /* returns next power of 2 > x, or 2^31 */ { if ( ((x << 1) >> 1) != x ) /* next power of 2 overflows */ x >>= 1; /* so we return highest power of 2 we can */ while ( (x & (x-1)) != 0 ) /* clears all but the top bit */ x &= (x-1); return x << 1; /* makes it the *next* power of 2 */ } static HTItem *Insert(HashTable *ht, ulong key, ulong data, int fOverwrite) { HTItem *item, bckInsert; ulong iEmpty; /* first empty bucket the key probes */ if ( ht->table == NULL ) /* empty hash table: find is bound to fail */ return NULL; item = Find(ht, key, &iEmpty); ht->posLastFind = NULL; /* last operation is insert, not find */ if ( item ) { if ( fOverwrite ) item->data = data; /* key already matches */ return item; } COPY_KEY(ht, bckInsert.key, key); /* make our own copy of the key */ bckInsert.data = data; /* oh, and the data too */ item = Table(Insert)(ht->table, &bckInsert, iEmpty, &fOverwrite); if ( fOverwrite ) /* we overwrote a deleted bucket */ ht->cDeletedItems--; ht->cItems++; /* insert couldn't have overwritten */ if ( ht->cDeltaGoalSize > 0 ) /* closer to our goal size */ ht->cDeltaGoalSize--; if ( ht->cItems + ht->cDeletedItems >= ht->cBuckets * OCCUPANCY_PCT || ht->cDeltaGoalSize < 0 ) /* we must've overestimated # of deletes */ item = Rehash(ht, NextPow2((ulong)(((ht->cDeltaGoalSize > 0 ? ht->cDeltaGoalSize : 0) + ht->cItems) / OCCUPANCY_PCT)), item); return item; } /*************************************************************************\ | Delete() | | Removes the item from the hashtable, and if fShrink is 1, will | | shrink the hashtable if it's too empty (i.e., even after halving, | | the ht would be less than half full, though in order to avoid | | oscillating table size, we insist that after halving the ht would | | be less than 40% full). RETURNS 1 if the item was found, 0 else.
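The rounding behavior of the NextPow2() helper above can be pinned down with a standalone copy (repeated here, under a demo name, so the sketch compiles on its own):

```c
#include <assert.h>

/* Standalone copy of the NextPow2() helper, for illustration: returns
 * the smallest power of 2 strictly greater than x, clamped to the
 * largest representable power of 2 if the doubling would overflow. */
static unsigned long NextPow2Demo(unsigned long x)
{
    if ( ((x << 1) >> 1) != x )   /* doubling the top bit would overflow */
        x >>= 1;                  /* ...so back off to the highest one */
    while ( (x & (x - 1)) != 0 )  /* clear all but the top set bit */
        x &= (x - 1);
    return x << 1;                /* one power of 2 above the top bit */
}
```

Insert() relies on this to keep bucket counts at powers of two, so the `& (cBuckets-1)` masking in Hash() and the probe loop stays valid.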
| | If fLastFindSet is true, then this function is basically | | DeleteLastFind. | \*************************************************************************/ static int Delete(HashTable *ht, ulong key, int fShrink, int fLastFindSet) { if ( !fLastFindSet && !Find(ht, key, NULL) ) return 0; SET_BCK_DELETED(ht, ht->posLastFind); /* find set this, how nice */ ht->cItems--; ht->cDeletedItems++; if ( ht->cDeltaGoalSize < 0 ) /* heading towards our goal of deletion */ ht->cDeltaGoalSize++; if ( fShrink && ht->cItems < ht->cBuckets * OCCUPANCY_PCT*0.4 && ht->cDeltaGoalSize >= 0 /* wait until we're done deleting */ && (ht->cBuckets >> 1) >= MIN_HASH_SIZE ) /* shrink */ Rehash(ht, NextPow2((ulong)((ht->cItems+ht->cDeltaGoalSize)/OCCUPANCY_PCT)), NULL); ht->posLastFind = NULL; /* last operation is delete, not find */ return 1; } /* ======================================================================== */ /* USER-VISIBLE API */ /* ---------------------- */ /*************************************************************************\ | AllocateHashTable() | | ClearHashTable() | | FreeHashTable() | | Allocate() allocates a hash table and sets up size parameters. | | Free() frees it. Clear() deletes all the items from the hash | | table, but does not free it. | | cchKey is < 0 if the keys you send me are meant to be pointers | | to \0-terminated strings. Then -cchKey is the maximum key size. | | If cchKey < one word (ulong), the keys you send me are the keys | | themselves; else the keys you send me are pointers to the data. | | If fSaveKeys is 1, we copy any keys given to us to insert. We | | also free these keys when freeing the hash table. If it's 0, the | | user is responsible for key space management. | | AllocateHashTable() RETURNS a hash table; the others TAKE one.
| \*************************************************************************/ HashTable *AllocateHashTable(int cchKey, int fSaveKeys) { HashTable *ht; ht = (HashTable *) HTsmalloc(sizeof(*ht)); /* set everything to 0 */ ht->cBuckets = Table(Allocate)(&ht->table, MIN_HASH_SIZE); ht->cchKey = cchKey <= 0 ? NULL_TERMINATED : cchKey; ht->cItems = 0; ht->cDeletedItems = 0; ht->fSaveKeys = fSaveKeys; ht->cDeltaGoalSize = 0; ht->iter = HTsmalloc( sizeof(TableIterator) ); ht->fpData = NULL; /* set by HashLoad, maybe */ ht->bckData.data = (ulong) NULL; /* this must be done */ HTSetupKeyTrunc(); /* in util.c */ return ht; } void ClearHashTable(HashTable *ht) { HTItem *bck; if ( STORES_PTR(ht) && ht->fSaveKeys ) /* need to free keys */ for ( bck = HashFirstBucket(ht); bck; bck = HashNextBucket(ht) ) { FREE_KEY(ht, bck->key); if ( ht->fSaveKeys == 2 ) /* this means key stored in one block */ break; /* ...so only free once */ } Table(Free)(ht->table, ht->cBuckets); ht->cBuckets = Table(Allocate)(&ht->table, MIN_HASH_SIZE); ht->cItems = 0; ht->cDeletedItems = 0; ht->cDeltaGoalSize = 0; ht->posLastFind = NULL; ht->fpData = NULL; /* no longer HashLoading */ if ( ht->bckData.data ) free( (char *)(ht)->bckData.data); ht->bckData.data = (ulong) NULL; } void FreeHashTable(HashTable *ht) { ClearHashTable(ht); if ( ht->iter ) HTfree(ht->iter, sizeof(TableIterator)); if ( ht->table ) Table(Free)(ht->table, ht->cBuckets); free(ht); } /*************************************************************************\ | HashFind() | | HashFindLast() | | HashFind(): looks in h(x) + i(i-1)/2 % t as i goes up from 0 | | until we either find the key or hit an empty bucket. RETURNS a | | pointer to the item in the hit bucket, if we find it, else | | RETURNS NULL. | | HashFindLast() returns the item returned by the last | | HashFind(), which may be NULL if the last HashFind() failed. | | LOAD_AND_RETURN reads the data from off disk, if necessary. 
| \*************************************************************************/ HTItem *HashFind(HashTable *ht, ulong key) { LOAD_AND_RETURN(ht, Find(ht, KEY_TRUNC(ht, key), NULL)); } HTItem *HashFindLast(HashTable *ht) { LOAD_AND_RETURN(ht, ht->posLastFind); } /*************************************************************************\ | HashFindOrInsert() | | HashFindOrInsertItem() | | HashInsert() | | HashInsertItem() | | HashDelete() | | HashDeleteLast() | | Pretty obvious what these guys do. Some take buckets (items), | | some take keys and data separately. All things RETURN the bucket | | (a pointer into the hashtable) if appropriate. | \*************************************************************************/ HTItem *HashFindOrInsert(HashTable *ht, ulong key, ulong dataInsert) { /* This is equivalent to Insert without samekey-overwrite */ return Insert(ht, KEY_TRUNC(ht, key), dataInsert, 0); } HTItem *HashFindOrInsertItem(HashTable *ht, HTItem *pItem) { return HashFindOrInsert(ht, pItem->key, pItem->data); } HTItem *HashInsert(HashTable *ht, ulong key, ulong data) { return Insert(ht, KEY_TRUNC(ht, key), data, SAMEKEY_OVERWRITE); } HTItem *HashInsertItem(HashTable *ht, HTItem *pItem) { return HashInsert(ht, pItem->key, pItem->data); } int HashDelete(HashTable *ht, ulong key) { return Delete(ht, KEY_TRUNC(ht, key), !FAST_DELETE, 0); } int HashDeleteLast(HashTable *ht) { if ( !ht->posLastFind ) /* last find failed */ return 0; return Delete(ht, 0, !FAST_DELETE, 1); /* no need to specify a key */ } /*************************************************************************\ | HashFirstBucket() | | HashNextBucket() | | Iterates through the items in the hashtable by iterating through | | the table. Since we know about deleted buckets and loading data | | off disk, and the table doesn't, our job is to take care of these | | things. RETURNS a bucket, or NULL after the last bucket. 
| \*************************************************************************/ HTItem *HashFirstBucket(HashTable *ht) { HTItem *retval; for ( retval = Table(FirstBucket)(ht->iter, ht->table, ht->cBuckets); retval; retval = Table(NextBucket)(ht->iter) ) if ( !IS_BCK_DELETED(retval) ) LOAD_AND_RETURN(ht, retval); return NULL; } HTItem *HashNextBucket(HashTable *ht) { HTItem *retval; while ( (retval=Table(NextBucket)(ht->iter)) ) if ( !IS_BCK_DELETED(retval) ) LOAD_AND_RETURN(ht, retval); return NULL; } /*************************************************************************\ | HashSetDeltaGoalSize() | | If we're going to insert 100 items, set the delta goal size to | | 100 and we take that into account when inserting. Likewise, if | | we're going to delete 10 items, set it to -100 and we won't | | rehash until all 100 have been done. It's ok to be wrong, but | | it's efficient to be right. Returns the delta value. | \*************************************************************************/ int HashSetDeltaGoalSize(HashTable *ht, int delta) { ht->cDeltaGoalSize = delta; #if FAST_DELETE == 1 || defined INSERT_ONLY if ( ht->cDeltaGoalSize < 0 ) /* for fast delete, we never */ ht->cDeltaGoalSize = 0; /* ...rehash after deletion */ #endif return ht->cDeltaGoalSize; } /*************************************************************************\ | HashSave() | | HashLoad() | | HashLoadKeys() | | Routines for saving and loading the hashtable from disk. We can | | then use the hashtable in two ways: loading it back into memory | | (HashLoad()) or loading only the keys into memory, in which case | | the data for a given key is loaded off disk when the key is | | retrieved. The data is freed when something new is retrieved in | | its place, so this is not a "lazy-load" scheme. | | The key is saved automatically and restored upon load, but the | | user needs to specify a routine for reading and writing the data. | | fSaveKeys is of course set to 1 when you read in a hashtable. 
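The pre-sizing hint from HashSetDeltaGoalSize() above feeds directly into the growth rule in Insert(). The following standalone sketch (illustrative only: OCCUPANCY_PCT is assumed to be 0.5, matching the "more than half the cells occupied" growth rule described earlier, and NextPow2 is repeated so the sketch is self-contained) computes the bucket count a rehash grows to:

```c
#include <assert.h>

#define OCCUPANCY_PCT 0.5  /* assumed: grow once over half the cells fill */

/* Copy of the NextPow2() helper, repeated so this sketch stands alone. */
static unsigned long NextPow2(unsigned long x)
{
    if ( ((x << 1) >> 1) != x )
        x >>= 1;
    while ( (x & (x - 1)) != 0 )
        x &= (x - 1);
    return x << 1;
}

/* Bucket count Insert() rehashes to: room for the items already present
 * plus any outstanding insertion goal, at the target occupancy. */
static unsigned long GrownSize(unsigned long cItems, long cDeltaGoalSize)
{
    unsigned long goal = cDeltaGoalSize > 0 ? (unsigned long)cDeltaGoalSize : 0;
    return NextPow2((unsigned long)((goal + cItems) / OCCUPANCY_PCT));
}
```

So announcing 100 upcoming inserts on a table already holding 10 items makes the next rehash jump straight to GrownSize(10, 100) = 256 buckets, rather than doubling several times along the way.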
| | HashLoad RETURNS a newly allocated hashtable. | | DATA_WRITE() takes an fp and a char * (representing the data | | field), and must perform two separate tasks. If fp is NULL, | | return the number of bytes written. If not, writes the data to | | disk at the place the fp points to. | | DATA_READ() takes an fp and the number of bytes in the data | | field, and returns a char * which points to wherever you've | | written the data. Thus, you must allocate memory for the data. | | Both dataRead and dataWrite may be NULL if you just wish to | | store the data field directly, as an integer. | \*************************************************************************/ void HashSave(FILE *fp, HashTable *ht, int (*dataWrite)(FILE *, char *)) { long cchData, posStart; HTItem *bck; /* File format: magic number (4 bytes) : cchKey (one word) : cItems (one word) : cDeletedItems (one word) : table info (buckets and a bitmap) : cchAllKeys (one word) Then the keys, in a block. If cchKey is NULL_TERMINATED, the keys are null-terminated too, otherwise this takes up cchKey*cItems bytes. Note that keys are not written for DELETED buckets. Then the data: : EITHER DELETED (one word) to indicate it's a deleted bucket, : OR number of bytes for this (non-empty) bucket's data (one word). This is not stored if dataWrite == NULL since the size is known to be sizeof(ul). Plus: : the data for this bucket (variable length) All words are in network byte order. */ fprintf(fp, "%s", MAGIC_KEY); WRITE_UL(fp, ht->cchKey); /* WRITE_UL, READ_UL, etc in fks-hash.h */ WRITE_UL(fp, ht->cItems); WRITE_UL(fp, ht->cDeletedItems); Table(Write)(fp, ht->table, ht->cBuckets); /* writes cBuckets too */ WRITE_UL(fp, 0); /* to be replaced with sizeof(key block) */ posStart = ftell(fp); for ( bck = HashFirstBucket(ht); bck; bck = HashNextBucket(ht) ) fwrite(KEY_PTR(ht, bck->key), 1, (ht->cchKey == NULL_TERMINATED ? 
strlen(KEY_PTR(ht, bck->key))+1 : ht->cchKey), fp); cchData = ftell(fp) - posStart; fseek(fp, posStart - sizeof(unsigned long), SEEK_SET); WRITE_UL(fp, cchData); fseek(fp, 0, SEEK_END); /* done with our sojourn at the header */ /* Unlike HashFirstBucket, TableFirstBucket iters through deleted bcks */ for ( bck = Table(FirstBucket)(ht->iter, ht->table, ht->cBuckets); bck; bck = Table(NextBucket)(ht->iter) ) if ( dataWrite == NULL || IS_BCK_DELETED(bck) ) WRITE_UL(fp, bck->data); else /* write cchData followed by the data */ { WRITE_UL(fp, (*dataWrite)(NULL, (char *)bck->data)); (*dataWrite)(fp, (char *)bck->data); } } static HashTable *HashDoLoad(FILE *fp, char * (*dataRead)(FILE *, int), HashTable *ht) { ulong cchKey; char szMagicKey[4], *rgchKeys; HTItem *bck; fread(szMagicKey, 1, 4, fp); if ( strncmp(szMagicKey, MAGIC_KEY, 4) ) { fprintf(stderr, "ERROR: not a hash table (magic key is %4.4s, not %s)\n", szMagicKey, MAGIC_KEY); exit(3); } Table(Free)(ht->table, ht->cBuckets); /* allocated in AllocateHashTable */ READ_UL(fp, ht->cchKey); READ_UL(fp, ht->cItems); READ_UL(fp, ht->cDeletedItems); ht->cBuckets = Table(Read)(fp, &ht->table); /* next is the table info */ READ_UL(fp, cchKey); rgchKeys = (char *) HTsmalloc( cchKey ); /* stores all the keys */ fread(rgchKeys, 1, cchKey, fp); /* We use the table iterator so we don't try to LOAD_AND_RETURN */ for ( bck = Table(FirstBucket)(ht->iter, ht->table, ht->cBuckets); bck; bck = Table(NextBucket)(ht->iter) ) { READ_UL(fp, bck->data); /* all we need if dataRead is NULL */ if ( IS_BCK_DELETED(bck) ) /* always 0 if defined(INSERT_ONLY) */ continue; /* this is why we read the data first */ if ( dataRead != NULL ) /* if it's null, we're done */ if ( !ht->fpData ) /* load data into memory */ bck->data = (ulong)dataRead(fp, bck->data); else /* store location of data on disk */ { fseek(fp, bck->data, SEEK_CUR); /* bck->data held size of data */ bck->data = ftell(fp) - bck->data - sizeof(unsigned long); } if ( ht->cchKey == 
NULL_TERMINATED ) /* now read the key */ { bck->key = (ulong) rgchKeys; rgchKeys = strchr(rgchKeys, '\0') + 1; /* read past the string */ } else { if ( STORES_PTR(ht) ) /* small keys stored directly */ bck->key = (ulong) rgchKeys; else memcpy(&bck->key, rgchKeys, ht->cchKey); rgchKeys += ht->cchKey; } } if ( !STORES_PTR(ht) ) /* keys are stored directly */ HTfree(rgchKeys - cchKey, cchKey); /* we've advanced rgchK to end */ return ht; } HashTable *HashLoad(FILE *fp, char * (*dataRead)(FILE *, int)) { HashTable *ht; ht = AllocateHashTable(0, 2); /* cchKey set later, fSaveKey should be 2! */ return HashDoLoad(fp, dataRead, ht); } HashTable *HashLoadKeys(FILE *fp, char * (*dataRead)(FILE *, int)) { HashTable *ht; if ( dataRead == NULL ) return HashLoad(fp, NULL); /* no reason not to load the data here */ ht = AllocateHashTable(0, 2); /* cchKey set later, fSaveKey should be 2! */ ht->fpData = fp; /* tells HashDoLoad() to only load keys */ ht->dataRead = dataRead; return HashDoLoad(fp, dataRead, ht); } /*************************************************************************\ | PrintHashTable() | | A debugging tool. Prints the entire contents of the hash table, | | like so: : key of the contents. Returns number of bytes | | allocated. If time is not -1, we print it as the time required | | for the hash. If iForm is 0, we just print the stats. If it's | | 1, we print the keys and data too, but the keys are printed as | | ulongs. If it's 2, we print the keys correctly (as long numbers | | or as strings). 
| \*************************************************************************/ ulong PrintHashTable(HashTable *ht, double time, int iForm) { ulong cbData = 0, cbBin = 0, cItems = 0, cOccupied = 0; HTItem *item; printf("HASH TABLE.\n"); if ( time > -1.0 ) { printf("----------\n"); printf("Time: %27.2f\n", time); } for ( item = Table(FirstBucket)(ht->iter, ht->table, ht->cBuckets); item; item = Table(NextBucket)(ht->iter) ) { cOccupied++; /* this includes deleted buckets */ if ( IS_BCK_DELETED(item) ) /* we don't need you for anything else */ continue; cItems++; /* this is for a sanity check */ if ( STORES_PTR(ht) ) cbData += ht->cchKey == NULL_TERMINATED ? WORD_ROUND(strlen((char *)item->key)+1) : ht->cchKey; else cbBin -= sizeof(item->key), cbData += sizeof(item->key); cbBin -= sizeof(item->data), cbData += sizeof(item->data); if ( iForm != 0 ) /* we want the actual contents */ { if ( iForm == 2 && ht->cchKey == NULL_TERMINATED ) printf("%s/%lu\n", (char *)item->key, item->data); else if ( iForm == 2 && STORES_PTR(ht) ) printf("%.*s/%lu\n", (int)ht->cchKey, (char *)item->key, item->data); else /* either key actually is a ulong, or iForm == 1 */ printf("%lu/%lu\n", item->key, item->data); } } assert( cItems == ht->cItems ); /* sanity check */ cbBin = Table(Memory)(ht->cBuckets, cOccupied); printf("----------\n"); printf("%lu buckets (%lu bytes). %lu empty. %lu hold deleted items.\n" "%lu items (%lu bytes).\n" "%lu bytes total. 
%lu bytes (%2.1f%%) of this is ht overhead.\n", ht->cBuckets, cbBin, ht->cBuckets - cOccupied, cOccupied - ht->cItems, ht->cItems, cbData, cbData + cbBin, cbBin, cbBin*100.0/(cbBin+cbData)); return cbData + cbBin; }
sparsehash-2.0.2/experimental/Makefile
example: example.o libchash.o $(CC) $(CFLAGS) $(LDFLAGS) -o $@ $^ .SUFFIXES: .c .o .h .c.o: $(CC) -c $(CPPFLAGS) $(CFLAGS) -o $@ $< example.o: example.c libchash.h libchash.o: libchash.c libchash.h
sparsehash-2.0.2/experimental/example.c
#include <stdio.h> #include <stdlib.h> #include <assert.h> #include "libchash.h" static void TestInsert() { struct HashTable* ht; HTItem* bck; ht = AllocateHashTable(1, 0); /* value is 1 byte, 0: don't copy keys */ HashInsert(ht, PTR_KEY(ht, "January"), 31); /* 0: don't overwrite old val */ bck = HashInsert(ht, PTR_KEY(ht, "February"), 28); bck = HashInsert(ht, PTR_KEY(ht, "March"), 31); bck = HashFind(ht, PTR_KEY(ht, "February")); assert(bck); assert(bck->data == 28); FreeHashTable(ht); } static void TestFindOrInsert() { struct HashTable* ht; int i; int iterations = 1000000; int range = 30; /* random number between 1 and 30 */ ht = AllocateHashTable(4, 0); /* value is 4 bytes, 0: don't copy keys */ /* We'll test how good rand() is as a random number generator */ for (i = 0; i < iterations; ++i) { int key = rand() % range; HTItem* bck = HashFindOrInsert(ht, key, 0); /* initialize to 0 */ bck->data++; /* found one more of them */ } for (i = 0; i < range; ++i) { HTItem* bck = HashFind(ht, i); if (bck) { printf("%3d: %d\n", bck->key, bck->data); } else { printf("%3d: 0\n", i); } } FreeHashTable(ht); } int main(int argc, char** argv) { TestInsert(); 
TestFindOrInsert(); return 0; } sparsehash-2.0.2/Makefile.am0000664000175000017500000002236511721252016012640 00000000000000## Process this file with automake to produce Makefile.in # Make sure that when we re-make ./configure, we get the macros we need ACLOCAL_AMFLAGS = -I m4 # This is so we can #include AM_CPPFLAGS = -I$(top_srcdir)/src # These are good warnings to turn on by default if GCC AM_CXXFLAGS = -Wall -W -Wwrite-strings -Woverloaded-virtual -Wshadow endif docdir = $(prefix)/share/doc/$(PACKAGE)-$(VERSION) ## This is for HTML and other documentation you want to install. ## Add your documentation files (in doc/) in addition to these boilerplate ## Also add a TODO file if you have one dist_doc_DATA = AUTHORS COPYING ChangeLog INSTALL NEWS README README_windows.txt \ TODO \ doc/dense_hash_map.html \ doc/dense_hash_set.html \ doc/sparse_hash_map.html \ doc/sparse_hash_set.html \ doc/sparsetable.html \ doc/implementation.html \ doc/performance.html \ doc/index.html \ doc/designstyle.css ## The libraries (.so's) you want to install lib_LTLIBRARIES = ## The location of the windows project file for each binary we make WINDOWS_PROJECTS = sparsehash.sln ## unittests you want to run when people type 'make check'. ## TESTS is for binary unittests, check_SCRIPTS for script-based unittests. ## TESTS_ENVIRONMENT sets environment variables for when you run unittest, ## but it only seems to take effect for *binary* unittests (argh!) TESTS = check_SCRIPTS = TESTS_ENVIRONMENT = ## This should always include $(TESTS), but may also include other ## binaries that you compile but don't want automatically installed. noinst_PROGRAMS = $(TESTS) time_hash_map WINDOWS_PROJECTS += vsprojects/time_hash_map/time_hash_map.vcproj ## vvvv RULES TO MAKE THE LIBRARIES, BINARIES, AND UNITTESTS # All our .h files need to read the config information in config.h. 
The # autoheader config.h has too much info, including PACKAGENAME, that # might conflict with other config.h's an application might #include. # Thus, we create a "minimal" config.h, called sparseconfig.h, that # includes only the #defines we really need, and that are unlikely to # change from system to system. NOTE: The awk command is equivalent to # fgrep -B2 -f$(top_builddir)/src/config.h.include $(top_builddir)/src/config.h # | fgrep -vx -e -- > _sparsehash_config # For correctness, it depends on the fact config.h.include does not have # any lines starting with #. src/sparsehash/internal/sparseconfig.h: $(top_builddir)/src/config.h \ $(top_srcdir)/src/config.h.include [ -d $(@D) ] || mkdir -p $(@D) echo "/*" > $(@D)/_sparsehash_config echo " * NOTE: This file is for internal use only." >> $(@D)/_sparsehash_config echo " * Do not use these #defines in your own program!" >> $(@D)/_sparsehash_config echo " */" >> $(@D)/_sparsehash_config $(AWK) '{prevline=currline; currline=$$0;} \ /^#/ {in_second_file = 1;} \ !in_second_file {if (currline !~ /^ *$$/) {inc[currline]=0}}; \ in_second_file { for (i in inc) { \ if (index(currline, i) != 0) { \ print "\n"prevline"\n"currline; \ delete inc[i]; \ } \ } }' \ $(top_srcdir)/src/config.h.include $(top_builddir)/src/config.h \ >> $(@D)/_sparsehash_config mv -f $(@D)/_sparsehash_config $@ # This is how we tell automake about auto-generated .h files BUILT_SOURCES = src/sparsehash/internal/sparseconfig.h CLEANFILES = src/sparsehash/internal/sparseconfig.h sparsehashincludedir = $(includedir)/sparsehash ## The .h files you want to install (that is, .h files that people ## who install this package can include in their own applications.) 
sparsehashinclude_HEADERS = \ src/sparsehash/dense_hash_map \ src/sparsehash/dense_hash_set \ src/sparsehash/sparse_hash_map \ src/sparsehash/sparse_hash_set \ src/sparsehash/sparsetable \ src/sparsehash/template_util.h \ src/sparsehash/type_traits.h internalincludedir = $(sparsehashincludedir)/internal internalinclude_HEADERS = \ src/sparsehash/internal/densehashtable.h \ src/sparsehash/internal/sparsehashtable.h \ src/sparsehash/internal/hashtable-common.h \ src/sparsehash/internal/libc_allocator_with_realloc.h nodist_internalinclude_HEADERS = src/sparsehash/internal/sparseconfig.h # This is for backwards compatibility only. googleincludedir = $(includedir)/google googleinclude_HEADERS = \ src/google/dense_hash_map \ src/google/dense_hash_set \ src/google/sparse_hash_map \ src/google/sparse_hash_set \ src/google/sparsetable \ src/google/template_util.h \ src/google/type_traits.h googleinternalincludedir = $(includedir)/google/sparsehash googleinternalinclude_HEADERS= \ src/google/sparsehash/densehashtable.h \ src/google/sparsehash/sparsehashtable.h \ src/google/sparsehash/hashtable-common.h \ src/google/sparsehash/libc_allocator_with_realloc.h TESTS += template_util_unittest # TODO(csilvers): Update windows projects for template_util_unittest. 
# WINDOWS_PROJECTS += vsprojects/template_util_unittest/template_util_unittest.vcproj template_util_unittest_SOURCES = \ src/template_util_unittest.cc \ src/sparsehash/template_util.h nodist_template_util_unittest_SOURCES = $(nodist_internalinclude_HEADERS) TESTS += type_traits_unittest WINDOWS_PROJECTS += vsprojects/type_traits_unittest/type_traits_unittest.vcproj type_traits_unittest_SOURCES = \ src/type_traits_unittest.cc \ $(internalinclude_HEADERS) \ src/sparsehash/type_traits.h nodist_type_traits_unittest_SOURCES = $(nodist_internalinclude_HEADERS) TESTS += libc_allocator_with_realloc_test WINDOWS_PROJECTS += vsprojects/libc_allocator_with_realloc_test/libc_allocator_with_realloc_test.vcproj libc_allocator_with_realloc_test_SOURCES = \ src/libc_allocator_with_realloc_test.cc \ $(internalinclude_HEADERS) \ src/sparsehash/internal/libc_allocator_with_realloc.h TESTS += sparsetable_unittest WINDOWS_PROJECTS += vsprojects/sparsetable_unittest/sparsetable_unittest.vcproj sparsetable_unittest_SOURCES = \ src/sparsetable_unittest.cc \ $(internalinclude_HEADERS) \ src/sparsehash/sparsetable nodist_sparsetable_unittest_SOURCES = $(nodist_internalinclude_HEADERS) TESTS += hashtable_test WINDOWS_PROJECTS += vsprojects/hashtable_test/hashtable_test.vcproj hashtable_test_SOURCES = \ src/hashtable_test.cc \ src/hash_test_interface.h \ src/testutil.h \ $(sparsehashinclude_HEADERS) \ $(internalinclude_HEADERS) nodist_hashtable_test_SOURCES = $(nodist_internalinclude_HEADERS) TESTS += simple_test WINDOWS_PROJECTS += vsprojects/simple_test/simple_test.vcproj simple_test_SOURCES = \ src/simple_test.cc \ $(internalinclude_HEADERS) nodist_simple_test_SOURCES = $(nodist_internalinclude_HEADERS) TESTS += simple_compat_test simple_compat_test_SOURCES = \ src/simple_compat_test.cc \ $(internalinclude_HEADERS) \ $(googleinclude_HEADERS) \ $(googleinternalinclude_HEADERS) nodist_simple_compat_test_SOURCES = $(nodist_internalinclude_HEADERS) time_hash_map_SOURCES = \ 
src/time_hash_map.cc \ $(internalinclude_HEADERS) \ $(sparsehashinclude_HEADERS) nodist_time_hash_map_SOURCES = $(nodist_internalinclude_HEADERS) # If tcmalloc is installed, use it with time_hash_map; it gives us # heap-usage statistics for the hash_map routines, which is very nice time_hash_map_CXXFLAGS = @tcmalloc_flags@ $(AM_CXXFLAGS) time_hash_map_LDFLAGS = @tcmalloc_flags@ time_hash_map_LDADD = @tcmalloc_libs@ ## ^^^^ END OF RULES TO MAKE THE LIBRARIES, BINARIES, AND UNITTESTS rpm: dist-gzip packages/rpm.sh packages/rpm/rpm.spec @cd packages && ./rpm.sh ${PACKAGE} ${VERSION} deb: dist-gzip packages/deb.sh packages/deb/* @cd packages && ./deb.sh ${PACKAGE} ${VERSION} # http://linux.die.net/man/1/pkg-config, http://pkg-config.freedesktop.org/wiki pkgconfigdir = $(libdir)/pkgconfig pkgconfig_DATA = lib${PACKAGE}.pc CLEANFILES += $(pkgconfig_DATA) # I get the description and URL lines from the rpm spec. I use sed to # try to rewrite exec_prefix, libdir, and includedir in terms of # prefix, if possible. lib${PACKAGE}.pc: Makefile packages/rpm/rpm.spec echo 'prefix=$(prefix)' > "$@".tmp echo 'exec_prefix='`echo '$(exec_prefix)' | sed 's@^$(prefix)@$${prefix}@'` >> "$@".tmp echo 'libdir='`echo '$(libdir)' | sed 's@^$(exec_prefix)@$${exec_prefix}@'` >> "$@".tmp echo 'includedir='`echo '$(includedir)' | sed 's@^$(prefix)@$${prefix}@'` >> "$@".tmp echo '' >> "$@".tmp echo 'Name: $(PACKAGE)' >> "$@".tmp echo 'Version: $(VERSION)' >> "$@".tmp -grep '^Summary:' $(top_srcdir)/packages/rpm/rpm.spec | sed s/^Summary:/Description:/ | head -n1 >> "$@".tmp -grep '^URL: ' $(top_srcdir)/packages/rpm/rpm.spec >> "$@".tmp echo 'Requires:' >> "$@".tmp echo 'Libs:' >> "$@".tmp echo 'Cflags: -I$${includedir}' >> "$@".tmp mv -f "$@".tmp "$@" # Windows wants write permission to .vcproj files and maybe even sln files. 
dist-hook: test -e "$(distdir)/vsprojects" \ && chmod -R u+w $(distdir)/*.sln $(distdir)/vsprojects/ EXTRA_DIST = packages/rpm.sh packages/rpm/rpm.spec packages/deb.sh packages/deb \ src/config.h.include src/windows $(WINDOWS_PROJECTS) experimental sparsehash-2.0.2/INSTALL0000644000175000017500000003633211721254575011646 00000000000000Installation Instructions ************************* Copyright (C) 1994, 1995, 1996, 1999, 2000, 2001, 2002, 2004, 2005, 2006, 2007, 2008, 2009 Free Software Foundation, Inc. Copying and distribution of this file, with or without modification, are permitted in any medium without royalty provided the copyright notice and this notice are preserved. This file is offered as-is, without warranty of any kind. Basic Installation ================== Briefly, the shell commands `./configure; make; make install' should configure, build, and install this package. The following more-detailed instructions are generic; see the `README' file for instructions specific to this package. Some packages provide this `INSTALL' file but do not implement all of the features documented below. The lack of an optional feature in a given package is not necessarily a bug. More recommendations for GNU packages can be found in *note Makefile Conventions: (standards)Makefile Conventions. The `configure' shell script attempts to guess correct values for various system-dependent variables used during compilation. It uses those values to create a `Makefile' in each directory of the package. It may also create one or more `.h' files containing system-dependent definitions. Finally, it creates a shell script `config.status' that you can run in the future to recreate the current configuration, and a file `config.log' containing compiler output (useful mainly for debugging `configure'). It can also use an optional file (typically called `config.cache' and enabled with `--cache-file=config.cache' or simply `-C') that saves the results of its tests to speed up reconfiguring. 
Caching is disabled by default to prevent problems with accidental use of stale cache files. If you need to do unusual things to compile the package, please try to figure out how `configure' could check whether to do them, and mail diffs or instructions to the address given in the `README' so they can be considered for the next release. If you are using the cache, and at some point `config.cache' contains results you don't want to keep, you may remove or edit it. The file `configure.ac' (or `configure.in') is used to create `configure' by a program called `autoconf'. You need `configure.ac' if you want to change it or regenerate `configure' using a newer version of `autoconf'. The simplest way to compile this package is: 1. `cd' to the directory containing the package's source code and type `./configure' to configure the package for your system. Running `configure' might take a while. While running, it prints some messages telling which features it is checking for. 2. Type `make' to compile the package. 3. Optionally, type `make check' to run any self-tests that come with the package, generally using the just-built uninstalled binaries. 4. Type `make install' to install the programs and any data files and documentation. When installing into a prefix owned by root, it is recommended that the package be configured and built as a regular user, and only the `make install' phase executed with root privileges. 5. Optionally, type `make installcheck' to repeat any self-tests, but this time using the binaries in their final installed location. This target does not install anything. Running this target as a regular user, particularly if the prior `make install' required root privileges, verifies that the installation completed correctly. 6. You can remove the program binaries and object files from the source code directory by typing `make clean'. 
To also remove the files that `configure' created (so you can compile the package for a different kind of computer), type `make distclean'. There is also a `make maintainer-clean' target, but that is intended mainly for the package's developers. If you use it, you may have to get all sorts of other programs in order to regenerate files that came with the distribution. 7. Often, you can also type `make uninstall' to remove the installed files again. In practice, not all packages have tested that uninstallation works correctly, even though it is required by the GNU Coding Standards. 8. Some packages, particularly those that use Automake, provide `make distcheck', which can be used by developers to test that all other targets like `make install' and `make uninstall' work correctly. This target is generally not run by end users. Compilers and Options ===================== Some systems require unusual options for compilation or linking that the `configure' script does not know about. Run `./configure --help' for details on some of the pertinent environment variables. You can give `configure' initial values for configuration parameters by setting variables in the command line or in the environment. Here is an example: ./configure CC=c99 CFLAGS=-g LIBS=-lposix *Note Defining Variables::, for more details. Compiling For Multiple Architectures ==================================== You can compile the package for more than one kind of computer at the same time, by placing the object files for each architecture in their own directory. To do this, you can use GNU `make'. `cd' to the directory where you want the object files and executables to go and run the `configure' script. `configure' automatically checks for the source code in the directory that `configure' is in and in `..'. This is known as a "VPATH" build. With a non-GNU `make', it is safer to compile the package for one architecture at a time in the source code directory. 
After you have installed the package for one architecture, use `make distclean' before reconfiguring for another architecture. On MacOS X 10.5 and later systems, you can create libraries and executables that work on multiple system types--known as "fat" or "universal" binaries--by specifying multiple `-arch' options to the compiler but only a single `-arch' option to the preprocessor. Like this: ./configure CC="gcc -arch i386 -arch x86_64 -arch ppc -arch ppc64" \ CXX="g++ -arch i386 -arch x86_64 -arch ppc -arch ppc64" \ CPP="gcc -E" CXXCPP="g++ -E" This is not guaranteed to produce working output in all cases; you may have to build one architecture at a time and combine the results using the `lipo' tool if you have problems. Installation Names ================== By default, `make install' installs the package's commands under `/usr/local/bin', include files under `/usr/local/include', etc. You can specify an installation prefix other than `/usr/local' by giving `configure' the option `--prefix=PREFIX', where PREFIX must be an absolute file name. You can specify separate installation prefixes for architecture-specific files and architecture-independent files. If you pass the option `--exec-prefix=PREFIX' to `configure', the package uses PREFIX as the prefix for installing programs and libraries. Documentation and other data files still use the regular prefix. In addition, if you use an unusual directory layout you can give options like `--bindir=DIR' to specify different values for particular kinds of files. Run `configure --help' for a list of the directories you can set and what kinds of files go in them. In general, the default for these options is expressed in terms of `${prefix}', so that specifying just `--prefix' will affect all of the other directory specifications that were not explicitly provided. 
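The `${prefix}`-relative defaults described above can be sketched in plain shell. This is an illustration only, not part of the package's build files; the directory values are the conventional Autoconf defaults, and `/opt/sparsehash` is a made-up example prefix:

```shell
# Sketch only: how ${prefix}-relative defaults compose.  Directories left at
# their defaults all move together when --prefix changes.
prefix=/usr/local
exec_prefix="${prefix}"            # default: tracks ${prefix}
libdir="${exec_prefix}/lib"        # default: tracks ${exec_prefix}
includedir="${prefix}/include"
echo "${libdir}"                   # -> /usr/local/lib

prefix=/opt/sparsehash             # as if run with --prefix=/opt/sparsehash
exec_prefix="${prefix}"            # defaults re-derived from the new prefix
libdir="${exec_prefix}/lib"
includedir="${prefix}/include"
echo "${libdir}"                   # -> /opt/sparsehash/lib
```

A directory given explicitly at configure time (say `--libdir=/somewhere/else`) would not follow the prefix this way, which is why the text above distinguishes directories "expressed in terms of `${prefix}'" from ones that were not.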
The most portable way to affect installation locations is to pass the correct locations to `configure'; however, many packages provide one or both of the following shortcuts of passing variable assignments to the `make install' command line to change installation locations without having to reconfigure or recompile. The first method involves providing an override variable for each affected directory. For example, `make install prefix=/alternate/directory' will choose an alternate location for all directory configuration variables that were expressed in terms of `${prefix}'. Any directories that were specified during `configure', but not in terms of `${prefix}', must each be overridden at install time for the entire installation to be relocated. The approach of makefile variable overrides for each directory variable is required by the GNU Coding Standards, and ideally causes no recompilation. However, some platforms have known limitations with the semantics of shared libraries that end up requiring recompilation when using this method, particularly noticeable in packages that use GNU Libtool. The second method involves providing the `DESTDIR' variable. For example, `make install DESTDIR=/alternate/directory' will prepend `/alternate/directory' before all installation names. The approach of `DESTDIR' overrides is not required by the GNU Coding Standards, and does not work on platforms that have drive letters. On the other hand, it does better at avoiding recompilation issues, and works well even when some directory options were not specified in terms of `${prefix}' at `configure' time. Optional Features ================= If the package supports it, you can cause programs to be installed with an extra prefix or suffix on their names by giving `configure' the option `--program-prefix=PREFIX' or `--program-suffix=SUFFIX'. Some packages pay attention to `--enable-FEATURE' options to `configure', where FEATURE indicates an optional part of the package. 
They may also pay attention to `--with-PACKAGE' options, where PACKAGE is something like `gnu-as' or `x' (for the X Window System). The `README' should mention any `--enable-' and `--with-' options that the package recognizes. For packages that use the X Window System, `configure' can usually find the X include and library files automatically, but if it doesn't, you can use the `configure' options `--x-includes=DIR' and `--x-libraries=DIR' to specify their locations. Some packages offer the ability to configure how verbose the execution of `make' will be. For these packages, running `./configure --enable-silent-rules' sets the default to minimal output, which can be overridden with `make V=1'; while running `./configure --disable-silent-rules' sets the default to verbose, which can be overridden with `make V=0'. Particular systems ================== On HP-UX, the default C compiler is not ANSI C compatible. If GNU CC is not installed, it is recommended to use the following options in order to use an ANSI C compiler: ./configure CC="cc -Ae -D_XOPEN_SOURCE=500" and if that doesn't work, install pre-built binaries of GCC for HP-UX. On OSF/1 a.k.a. Tru64, some versions of the default C compiler cannot parse its `<wchar.h>' header file. The option `-nodtk' can be used as a workaround. If GNU CC is not installed, it is therefore recommended to try ./configure CC="cc" and if that doesn't work, try ./configure CC="cc -nodtk" On Solaris, don't put `/usr/ucb' early in your `PATH'. This directory contains several dysfunctional programs; working variants of these programs are available in `/usr/bin'. So, if you need `/usr/ucb' in your `PATH', put it _after_ `/usr/bin'. On Haiku, software installed for all users goes in `/boot/common', not `/usr/local'. 
It is recommended to use the following options: ./configure --prefix=/boot/common Specifying the System Type ========================== There may be some features `configure' cannot figure out automatically, but needs to determine by the type of machine the package will run on. Usually, assuming the package is built to be run on the _same_ architectures, `configure' can figure that out, but if it prints a message saying it cannot guess the machine type, give it the `--build=TYPE' option. TYPE can either be a short name for the system type, such as `sun4', or a canonical name which has the form: CPU-COMPANY-SYSTEM where SYSTEM can have one of these forms: OS KERNEL-OS See the file `config.sub' for the possible values of each field. If `config.sub' isn't included in this package, then this package doesn't need to know the machine type. If you are _building_ compiler tools for cross-compiling, you should use the option `--target=TYPE' to select the type of system they will produce code for. If you want to _use_ a cross compiler, that generates code for a platform different from the build platform, you should specify the "host" platform (i.e., that on which the generated programs will eventually be run) with `--host=TYPE'. Sharing Defaults ================ If you want to set default values for `configure' scripts to share, you can create a site shell script called `config.site' that gives default values for variables like `CC', `cache_file', and `prefix'. `configure' looks for `PREFIX/share/config.site' if it exists, then `PREFIX/etc/config.site' if it exists. Or, you can set the `CONFIG_SITE' environment variable to the location of the site script. A warning: not all `configure' scripts look for a site script. Defining Variables ================== Variables not defined in a site shell script can be set in the environment passed to `configure'. However, some packages may run configure again during the build, and the customized values of these variables may be lost. 
In order to avoid this problem, you should set them in the `configure' command line, using `VAR=value'. For example: ./configure CC=/usr/local2/bin/gcc causes the specified `gcc' to be used as the C compiler (unless it is overridden in the site shell script). Unfortunately, this technique does not work for `CONFIG_SHELL' due to an Autoconf bug. Until the bug is fixed you can use this workaround: CONFIG_SHELL=/bin/bash /bin/bash ./configure CONFIG_SHELL=/bin/bash `configure' Invocation ====================== `configure' recognizes the following options to control how it operates. `--help' `-h' Print a summary of all of the options to `configure', and exit. `--help=short' `--help=recursive' Print a summary of the options unique to this package's `configure', and exit. The `short' variant lists options used only in the top level, while the `recursive' variant lists options also present in any nested packages. `--version' `-V' Print the version of Autoconf used to generate the `configure' script, and exit. `--cache-file=FILE' Enable the cache: use and save the results of the tests in FILE, traditionally `config.cache'. FILE defaults to `/dev/null' to disable caching. `--config-cache' `-C' Alias for `--cache-file=config.cache'. `--quiet' `--silent' `-q' Do not print messages saying which checks are being made. To suppress all normal output, redirect it to `/dev/null' (any error messages will still be shown). `--srcdir=DIR' Look for the package's source code in directory DIR. Usually `configure' can determine that directory automatically. `--prefix=DIR' Use DIR as the installation prefix. *note Installation Names:: for more details, including other options available for fine-tuning the installation locations. `--no-create' `-n' Run the configure checks, but stop before creating any output files. `configure' also accepts some other, not widely useful, options. Run `configure --help' for more details. 
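As a postscript to the `DESTDIR' discussion in the Installation Names section above: a staged install simply prepends the staging directory to every installation name, leaving the live system untouched. The effect can be sketched with plain shell (the paths are illustrative, and `mktemp -d` stands in for a real `DESTDIR=/alternate/directory`):

```shell
# Sketch only: DESTDIR prepends a staging root to each installation name.
destdir="$(mktemp -d)"                 # stand-in for DESTDIR=/alternate/directory
includedir=/usr/local/include          # configure-time value (illustrative)

# 'make install DESTDIR=...' would populate ${destdir}${includedir}, not
# ${includedir} itself -- no root privileges or system changes needed:
mkdir -p "${destdir}${includedir}/sparsehash"
ls -d "${destdir}${includedir}/sparsehash"
```

This is why `DESTDIR' is the usual mechanism for building binary packages (the `deb' and `rpm' targets in this package's Makefile rely on exactly this kind of staged tree), while `--prefix' is the mechanism for genuinely relocating an installation.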
sparsehash-2.0.2/sparsehash.sln0000775000175000017500000000731311721252346013470 00000000000000Microsoft Visual Studio Solution File, Format Version 8.00 Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "type_traits_unittest", "vsprojects\type_traits_unittest\type_traits_unittest.vcproj", "{008CCFED-7D7B-46F8-8E13-03837A2258B3}" ProjectSection(ProjectDependencies) = postProject EndProjectSection EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "sparsetable_unittest", "vsprojects\sparsetable_unittest\sparsetable_unittest.vcproj", "{E420867B-8BFA-4739-99EC-E008AB762FF9}" ProjectSection(ProjectDependencies) = postProject EndProjectSection EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "hashtable_test", "vsprojects\hashtable_test\hashtable_test.vcproj", "{FCDB3718-F01C-4DE4-B9F5-E10F2C5C0535}" ProjectSection(ProjectDependencies) = postProject EndProjectSection EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "simple_test", "vsprojects\simple_test\simple_test.vcproj", "{FCDB3718-F01C-4DE4-B9F5-E10F2C5C0538}" ProjectSection(ProjectDependencies) = postProject EndProjectSection EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "libc_allocator_with_realloc_test", "vsprojects\libc_allocator_with_realloc_test\libc_allocator_with_realloc_test.vcproj", "{FCDB3718-F01C-4DE4-B9F5-E10F2C5C0539}" ProjectSection(ProjectDependencies) = postProject EndProjectSection EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "time_hash_map", "vsprojects\time_hash_map\time_hash_map.vcproj", "{A74E5DB8-5295-487A-AB1D-23859F536F45}" ProjectSection(ProjectDependencies) = postProject EndProjectSection EndProject Global GlobalSection(SolutionConfiguration) = preSolution Debug = Debug Release = Release EndGlobalSection GlobalSection(ProjectDependencies) = postSolution EndGlobalSection GlobalSection(ProjectConfiguration) = postSolution {008CCFED-7D7B-46F8-8E13-03837A2258B3}.Debug.ActiveCfg = Debug|Win32 
{008CCFED-7D7B-46F8-8E13-03837A2258B3}.Debug.Build.0 = Debug|Win32 {008CCFED-7D7B-46F8-8E13-03837A2258B3}.Release.ActiveCfg = Release|Win32 {008CCFED-7D7B-46F8-8E13-03837A2258B3}.Release.Build.0 = Release|Win32 {E420867B-8BFA-4739-99EC-E008AB762FF9}.Debug.ActiveCfg = Debug|Win32 {E420867B-8BFA-4739-99EC-E008AB762FF9}.Debug.Build.0 = Debug|Win32 {E420867B-8BFA-4739-99EC-E008AB762FF9}.Release.ActiveCfg = Release|Win32 {E420867B-8BFA-4739-99EC-E008AB762FF9}.Release.Build.0 = Release|Win32 {FCDB3718-F01C-4DE4-B9F5-E10F2C5C0535}.Debug.ActiveCfg = Debug|Win32 {FCDB3718-F01C-4DE4-B9F5-E10F2C5C0535}.Debug.Build.0 = Debug|Win32 {FCDB3718-F01C-4DE4-B9F5-E10F2C5C0535}.Release.ActiveCfg = Release|Win32 {FCDB3718-F01C-4DE4-B9F5-E10F2C5C0535}.Release.Build.0 = Release|Win32 {FCDB3718-F01C-4DE4-B9F5-E10F2C5C0538}.Debug.ActiveCfg = Debug|Win32 {FCDB3718-F01C-4DE4-B9F5-E10F2C5C0538}.Debug.Build.0 = Debug|Win32 {FCDB3718-F01C-4DE4-B9F5-E10F2C5C0538}.Release.ActiveCfg = Release|Win32 {FCDB3718-F01C-4DE4-B9F5-E10F2C5C0538}.Release.Build.0 = Release|Win32 {FCDB3718-F01C-4DE4-B9F5-E10F2C5C0539}.Debug.ActiveCfg = Debug|Win32 {FCDB3718-F01C-4DE4-B9F5-E10F2C5C0539}.Debug.Build.0 = Debug|Win32 {FCDB3718-F01C-4DE4-B9F5-E10F2C5C0539}.Release.ActiveCfg = Release|Win32 {FCDB3718-F01C-4DE4-B9F5-E10F2C5C0539}.Release.Build.0 = Release|Win32 {A74E5DB8-5295-487A-AB1D-23859F536F45}.Debug.ActiveCfg = Debug|Win32 {A74E5DB8-5295-487A-AB1D-23859F536F45}.Debug.Build.0 = Debug|Win32 {A74E5DB8-5295-487A-AB1D-23859F536F45}.Release.ActiveCfg = Release|Win32 {A74E5DB8-5295-487A-AB1D-23859F536F45}.Release.Build.0 = Release|Win32 EndGlobalSection GlobalSection(ExtensibilityGlobals) = postSolution EndGlobalSection GlobalSection(ExtensibilityAddIns) = postSolution EndGlobalSection EndGlobal sparsehash-2.0.2/packages/0000775000175000017500000000000011721550526012441 500000000000000sparsehash-2.0.2/packages/deb/0000775000175000017500000000000011721550007013165 
500000000000000sparsehash-2.0.2/packages/deb/sparsehash.dirs0000664000175000017500000000012011721252346016127 00000000000000usr/include usr/include/google usr/include/sparsehash usr/lib usr/lib/pkgconfig sparsehash-2.0.2/packages/deb/README0000664000175000017500000000045711721252346014000 00000000000000The list of files here isn't complete. For a step-by-step guide on how to set this package up correctly, check out http://www.debian.org/doc/maint-guide/ Most of the files that are in this directory are boilerplate. However, you may need to change the list of binary-arch dependencies in 'rules'. sparsehash-2.0.2/packages/deb/.svn/0000775000175000017500000000000011721255316014056 500000000000000sparsehash-2.0.2/packages/deb/.svn/text-base/0000775000175000017500000000000011721252346015752 500000000000000sparsehash-2.0.2/packages/deb/.svn/text-base/control.svn-base0000444000175000017500000000124311721252346021006 00000000000000Source: sparsehash Section: libdevel Priority: optional Maintainer: Google Inc. and others Build-Depends: debhelper (>= 4.0.0) Standards-Version: 3.6.1 Package: sparsehash Section: libs Architecture: any Description: hash_map and hash_set classes with minimal space overhead This package contains several hash-map implementations, similar in API to SGI's hash_map class, but with different performance characteristics. sparse_hash_map uses very little space overhead: 1-2 bits per entry. dense_hash_map is typically faster than the default SGI STL implementation. This package also includes hash-set analogues of these classes. sparsehash-2.0.2/packages/deb/.svn/text-base/rules.svn-base0000444000175000017500000000554211721252346020466 00000000000000#!/usr/bin/make -f # -*- makefile -*- # Sample debian/rules that uses debhelper. # This file was originally written by Joey Hess and Craig Small. # As a special exception, when this file is copied by dh-make into a # dh-make output file, you may use that output file without restriction. 
# This special exception was added by Craig Small in version 0.37 of dh-make. # Uncomment this to turn on verbose mode. #export DH_VERBOSE=1 # These are used for cross-compiling and for saving the configure script # from having to guess our platform (since we know it already) DEB_HOST_GNU_TYPE ?= $(shell dpkg-architecture -qDEB_HOST_GNU_TYPE) DEB_BUILD_GNU_TYPE ?= $(shell dpkg-architecture -qDEB_BUILD_GNU_TYPE) CFLAGS = -Wall -g ifneq (,$(findstring noopt,$(DEB_BUILD_OPTIONS))) CFLAGS += -O0 else CFLAGS += -O2 endif ifeq (,$(findstring nostrip,$(DEB_BUILD_OPTIONS))) INSTALL_PROGRAM += -s endif # shared library versions, option 1 #version=2.0.5 #major=2 # option 2, assuming the library is created as src/.libs/libfoo.so.2.0.5 or so version=`ls src/.libs/lib*.so.* | \ awk '{if (match($$0,/[0-9]+\.[0-9]+\.[0-9]+$$/)) print substr($$0,RSTART)}'` major=`ls src/.libs/lib*.so.* | \ awk '{if (match($$0,/\.so\.[0-9]+$$/)) print substr($$0,RSTART+4)}'` config.status: configure dh_testdir # Add here commands to configure the package. CFLAGS="$(CFLAGS)" ./configure --host=$(DEB_HOST_GNU_TYPE) --build=$(DEB_BUILD_GNU_TYPE) --prefix=/usr --mandir=\$${prefix}/share/man --infodir=\$${prefix}/share/info build: build-stamp build-stamp: config.status dh_testdir # Add here commands to compile the package. $(MAKE) touch build-stamp clean: dh_testdir dh_testroot rm -f build-stamp # Add here commands to clean up after the build process. -$(MAKE) distclean ifneq "$(wildcard /usr/share/misc/config.sub)" "" cp -f /usr/share/misc/config.sub config.sub endif ifneq "$(wildcard /usr/share/misc/config.guess)" "" cp -f /usr/share/misc/config.guess config.guess endif dh_clean install: build dh_testdir dh_testroot dh_clean -k dh_installdirs # Add here commands to install the package into debian/tmp $(MAKE) install DESTDIR=$(CURDIR)/debian/tmp # Build architecture-independent files here. binary-indep: build install # We have nothing to do by default. # Build architecture-dependent files here. 
binary-arch: build install dh_testdir dh_testroot dh_installchangelogs ChangeLog dh_installdocs dh_installexamples dh_install --sourcedir=debian/tmp # dh_installmenu # dh_installdebconf # dh_installlogrotate # dh_installemacsen # dh_installpam # dh_installmime # dh_installinit # dh_installcron # dh_installinfo dh_installman dh_link dh_strip dh_compress dh_fixperms # dh_perl # dh_python dh_makeshlibs dh_installdeb dh_shlibdeps dh_gencontrol dh_md5sums dh_builddeb binary: binary-indep binary-arch .PHONY: build clean binary-indep binary-arch binary install sparsehash-2.0.2/packages/deb/.svn/text-base/copyright.svn-base0000444000175000017500000000327511721252346021345 00000000000000This package was debianized by Donovan Hide on Wed, 01 Feb 2012 02:57:48 +0000. It was downloaded from http://code.google.com/p/sparsehash/downloads/list Upstream Author: google-sparsehash@googlegroups.com Copyright (c) 2005, Google Inc. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of Google Inc. nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. sparsehash-2.0.2/packages/deb/.svn/text-base/docs.svn-base0000444000175000017500000000037211721252346020260 00000000000000AUTHORS COPYING ChangeLog INSTALL NEWS README TODO doc/dense_hash_map.html doc/dense_hash_set.html doc/sparse_hash_map.html doc/sparse_hash_set.html doc/sparsetable.html doc/implementation.html doc/performance.html doc/index.html doc/designstyle.css sparsehash-2.0.2/packages/deb/.svn/text-base/sparsehash.dirs.svn-base0000444000175000017500000000012011721252346022420 00000000000000usr/include usr/include/google usr/include/sparsehash usr/lib usr/lib/pkgconfig sparsehash-2.0.2/packages/deb/.svn/text-base/README.svn-base0000444000175000017500000000045711721252346020271 00000000000000The list of files here isn't complete. For a step-by-step guide on how to set this package up correctly, check out http://www.debian.org/doc/maint-guide/ Most of the files that are in this directory are boilerplate. However, you may need to change the list of binary-arch dependencies in 'rules'. 
sparsehash-2.0.2/packages/deb/.svn/text-base/sparsehash.install.svn-base0000444000175000017500000000024511721252346023135 00000000000000usr/include/google/* usr/include/sparsehash/* usr/lib/pkgconfig/* debian/tmp/usr/include/google/* debian/tmp/usr/include/sparsehash/* debian/tmp/usr/lib/pkgconfig/* sparsehash-2.0.2/packages/deb/.svn/text-base/changelog.svn-base0000444000175000017500000000776411721252346021273 00000000000000sparsehash (2.0.1-1) unstable; urgency=low * New upstream release. -- Google Inc. and others Wed, 01 Feb 2012 02:57:48 +0000 sparsehash (2.0-1) unstable; urgency=low * New upstream release. -- Google Inc. and others Tue, 31 Jan 2012 11:33:04 -0800 sparsehash (1.12-1) unstable; urgency=low * New upstream release. -- Google Inc. Tue, 20 Dec 2011 21:04:04 -0800 sparsehash (1.11-1) unstable; urgency=low * New upstream release. -- Google Inc. Thu, 23 Jun 2011 21:12:58 -0700 sparsehash (1.10-1) unstable; urgency=low * New upstream release. -- Google Inc. Thu, 20 Jan 2011 16:07:39 -0800 sparsehash (1.9-1) unstable; urgency=low * New upstream release. -- Google Inc. Fri, 24 Sep 2010 11:37:50 -0700 sparsehash (1.8.1-1) unstable; urgency=low * New upstream release. -- Google Inc. Thu, 29 Jul 2010 15:01:29 -0700 sparsehash (1.8-1) unstable; urgency=low * New upstream release. -- Google Inc. Thu, 29 Jul 2010 09:53:26 -0700 sparsehash (1.7-1) unstable; urgency=low * New upstream release. -- Google Inc. Wed, 31 Mar 2010 12:32:03 -0700 sparsehash (1.6-1) unstable; urgency=low * New upstream release. -- Google Inc. Fri, 08 Jan 2010 14:47:55 -0800 sparsehash (1.5.2-1) unstable; urgency=low * New upstream release. -- Google Inc. Tue, 12 May 2009 14:16:38 -0700 sparsehash (1.5.1-1) unstable; urgency=low * New upstream release. -- Google Inc. Fri, 08 May 2009 15:23:44 -0700 sparsehash (1.5-1) unstable; urgency=low * New upstream release. -- Google Inc. Wed, 06 May 2009 11:28:49 -0700 sparsehash (1.4-1) unstable; urgency=low * New upstream release. -- Google Inc. 
Wed, 28 Jan 2009 17:11:31 -0800 sparsehash (1.3-1) unstable; urgency=low * New upstream release. -- Google Inc. Thu, 06 Nov 2008 15:06:09 -0800 sparsehash (1.2-1) unstable; urgency=low * New upstream release. -- Google Inc. Thu, 18 Sep 2008 13:53:20 -0700 sparsehash (1.1-1) unstable; urgency=low * New upstream release. -- Google Inc. Mon, 11 Feb 2008 16:30:11 -0800 sparsehash (1.0-1) unstable; urgency=low * New upstream release. We are now out of beta. -- Google Inc. Tue, 13 Nov 2007 15:15:46 -0800 sparsehash (0.9.1-1) unstable; urgency=low * New upstream release. -- Google Inc. Fri, 12 Oct 2007 12:35:24 -0700 sparsehash (0.9-1) unstable; urgency=low * New upstream release. -- Google Inc. Tue, 09 Oct 2007 14:15:21 -0700 sparsehash (0.8-1) unstable; urgency=low * New upstream release. -- Google Inc. Tue, 03 Jul 2007 12:55:04 -0700 sparsehash (0.7-1) unstable; urgency=low * New upstream release. -- Google Inc. Mon, 11 Jun 2007 11:33:41 -0700 sparsehash (0.6-1) unstable; urgency=low * New upstream release. -- Google Inc. Tue, 20 Mar 2007 17:29:34 -0700 sparsehash (0.5-1) unstable; urgency=low * New upstream release. -- Google Inc. Sat, 21 Oct 2006 13:47:47 -0700 sparsehash (0.4-1) unstable; urgency=low * New upstream release. -- Google Inc. Sun, 23 Apr 2006 22:42:35 -0700 sparsehash (0.3-1) unstable; urgency=low * New upstream release. -- Google Inc. Thu, 03 Nov 2005 20:12:31 -0800 sparsehash (0.2-1) unstable; urgency=low * New upstream release. -- Google Inc. Mon, 02 May 2005 07:04:46 -0700 sparsehash (0.1-1) unstable; urgency=low * Initial release. -- Google Inc. 
Tue, 15 Feb 2005 07:17:02 -0800 sparsehash-2.0.2/packages/deb/.svn/text-base/compat.svn-base0000444000175000017500000000000211721252346020601 000000000000004 sparsehash-2.0.2/packages/deb/.svn/props/0000775000175000017500000000000011721252346015221 500000000000000sparsehash-2.0.2/packages/deb/.svn/tmp/0000775000175000017500000000000011721252346014656 500000000000000sparsehash-2.0.2/packages/deb/.svn/tmp/text-base/0000775000175000017500000000000011721252346016552 500000000000000sparsehash-2.0.2/packages/deb/.svn/tmp/props/0000775000175000017500000000000011721252346016021 500000000000000sparsehash-2.0.2/packages/deb/.svn/tmp/prop-base/0000775000175000017500000000000011721252346016546 500000000000000sparsehash-2.0.2/packages/deb/.svn/entries0000444000175000017500000000300311721252346015362 0000000000000010 dir 113 https://sparsehash.googlecode.com/svn/trunk/packages/deb https://sparsehash.googlecode.com/svn 2012-02-02T22:46:54.449012Z 113 csilvers 21bedea4-f223-4c8b-73d6-85019ffb75a9 control file 2012-02-22T20:49:42.943762Z 8261546cb30188dd474c74916c161fcf 2012-01-31T23:50:02.386177Z 106 csilvers 675 sparsehash.dirs file 2012-02-22T20:49:42.943762Z 2dc533a8415133f45a5e803a1e28279c 2012-01-31T23:50:02.386177Z 106 csilvers 80 compat file 2012-02-22T20:49:42.943762Z 48a24b70a0b376535542b996af517398 2007-03-22T00:33:42.310464Z 6 csilvers 2 sparsehash.install file 2012-02-22T20:49:42.943762Z 533a0097660ac72db58ae7cc5fa7c8db 2012-01-31T23:50:02.386177Z 106 csilvers 165 changelog file 2012-02-22T20:49:42.943762Z 39bc6e17f8aa4f7c271aaedad10fe6f5 2012-02-01T03:10:59.454942Z 109 donovanhide 4084 docs file 2012-02-22T20:49:42.943762Z 5abf2a8d8096d2c61dd2f421405c2191 2007-06-11T19:35:30.179649Z 19 csilvers 250 copyright file 2012-02-22T20:49:42.943762Z b6ac709225eaeb679e3db6dbd4509b6e 2012-02-02T22:46:54.449012Z 113 csilvers 1725 rules file 2012-02-22T20:49:42.943762Z d4819f5489a5760835fcc7478acbf164 2007-03-22T00:33:42.310464Z 6 csilvers has-props 2914 README file 
2012-02-22T20:49:42.943762Z d4c29fa922136ba5bb1a0129b80b369d 2007-03-22T00:33:42.310464Z 6 csilvers 303 sparsehash-2.0.2/packages/deb/.svn/all-wcprops0000444000175000017500000000165311721252346016165 00000000000000K 25 svn:wc:ra_dav:version-url V 36 /svn/!svn/ver/113/trunk/packages/deb END control K 25 svn:wc:ra_dav:version-url V 44 /svn/!svn/ver/106/trunk/packages/deb/control END sparsehash.dirs K 25 svn:wc:ra_dav:version-url V 52 /svn/!svn/ver/106/trunk/packages/deb/sparsehash.dirs END compat K 25 svn:wc:ra_dav:version-url V 41 /svn/!svn/ver/6/trunk/packages/deb/compat END sparsehash.install K 25 svn:wc:ra_dav:version-url V 55 /svn/!svn/ver/106/trunk/packages/deb/sparsehash.install END changelog K 25 svn:wc:ra_dav:version-url V 46 /svn/!svn/ver/109/trunk/packages/deb/changelog END docs K 25 svn:wc:ra_dav:version-url V 40 /svn/!svn/ver/19/trunk/packages/deb/docs END copyright K 25 svn:wc:ra_dav:version-url V 46 /svn/!svn/ver/113/trunk/packages/deb/copyright END rules K 25 svn:wc:ra_dav:version-url V 40 /svn/!svn/ver/6/trunk/packages/deb/rules END README K 25 svn:wc:ra_dav:version-url V 41 /svn/!svn/ver/6/trunk/packages/deb/README END sparsehash-2.0.2/packages/deb/.svn/prop-base/0000775000175000017500000000000011721252346015746 500000000000000sparsehash-2.0.2/packages/deb/.svn/prop-base/rules.svn-base0000444000175000017500000000003611721252346020453 00000000000000K 14 svn:executable V 1 * END sparsehash-2.0.2/packages/deb/docs0000664000175000017500000000037211721252346013767 00000000000000AUTHORS COPYING ChangeLog INSTALL NEWS README TODO doc/dense_hash_map.html doc/dense_hash_set.html doc/sparse_hash_map.html doc/sparse_hash_set.html doc/sparsetable.html doc/implementation.html doc/performance.html doc/index.html doc/designstyle.css sparsehash-2.0.2/packages/deb/control0000664000175000017500000000124311721252346014515 00000000000000Source: sparsehash Section: libdevel Priority: optional Maintainer: Google Inc. 
and others Build-Depends: debhelper (>= 4.0.0) Standards-Version: 3.6.1 Package: sparsehash Section: libs Architecture: any Description: hash_map and hash_set classes with minimal space overhead This package contains several hash-map implementations, similar in API to SGI's hash_map class, but with different performance characteristics. sparse_hash_map uses very little space overhead: 1-2 bits per entry. dense_hash_map is typically faster than the default SGI STL implementation. This package also includes hash-set analogues of these classes. sparsehash-2.0.2/packages/deb/rules0000775000175000017500000000554211721252346014200 00000000000000#!/usr/bin/make -f # -*- makefile -*- # Sample debian/rules that uses debhelper. # This file was originally written by Joey Hess and Craig Small. # As a special exception, when this file is copied by dh-make into a # dh-make output file, you may use that output file without restriction. # This special exception was added by Craig Small in version 0.37 of dh-make. # Uncomment this to turn on verbose mode. #export DH_VERBOSE=1 # These are used for cross-compiling and for saving the configure script # from having to guess our platform (since we know it already) DEB_HOST_GNU_TYPE ?= $(shell dpkg-architecture -qDEB_HOST_GNU_TYPE) DEB_BUILD_GNU_TYPE ?= $(shell dpkg-architecture -qDEB_BUILD_GNU_TYPE) CFLAGS = -Wall -g ifneq (,$(findstring noopt,$(DEB_BUILD_OPTIONS))) CFLAGS += -O0 else CFLAGS += -O2 endif ifeq (,$(findstring nostrip,$(DEB_BUILD_OPTIONS))) INSTALL_PROGRAM += -s endif # shared library versions, option 1 #version=2.0.5 #major=2 # option 2, assuming the library is created as src/.libs/libfoo.so.2.0.5 or so version=`ls src/.libs/lib*.so.* | \ awk '{if (match($$0,/[0-9]+\.[0-9]+\.[0-9]+$$/)) print substr($$0,RSTART)}'` major=`ls src/.libs/lib*.so.* | \ awk '{if (match($$0,/\.so\.[0-9]+$$/)) print substr($$0,RSTART+4)}'` config.status: configure dh_testdir # Add here commands to configure the package. 
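The debian/rules fragment above derives the shared-library version numbers ("option 2") by pattern-matching the filenames libtool leaves in `src/.libs/`. That awk extraction can be sketched in isolation; the two filenames below are hypothetical stand-ins for a built soname and its major-version symlink, since nothing has actually been compiled here:

```shell
# Hypothetical libtool outputs: the fully-versioned soname and its major symlink
full="src/.libs/libsparsehash.so.2.0.5"
link="src/.libs/libsparsehash.so.2"

# Mirrors the 'version=' rule: grab the trailing X.Y.Z from the versioned name.
# The regex only matches the fully-versioned file, so the symlink line is ignored.
version=$(printf '%s\n' "$full" "$link" | \
  awk '{if (match($0,/[0-9]+\.[0-9]+\.[0-9]+$/)) print substr($0,RSTART)}')

# Mirrors the 'major=' rule: grab the digits after the final ".so." — this
# only matches the symlink, whose name ends in ".so.<major>".
major=$(printf '%s\n' "$full" "$link" | \
  awk '{if (match($0,/\.so\.[0-9]+$/)) print substr($0,RSTART+4)}')

echo "version=$version major=$major"
```

Because each regex is anchored at end-of-string, exactly one of the two listed filenames matches each rule, which is why the real rules file can run the same `ls src/.libs/lib*.so.*` pipeline for both variables.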
CFLAGS="$(CFLAGS)" ./configure --host=$(DEB_HOST_GNU_TYPE) --build=$(DEB_BUILD_GNU_TYPE) --prefix=/usr --mandir=\$${prefix}/share/man --infodir=\$${prefix}/share/info build: build-stamp build-stamp: config.status dh_testdir # Add here commands to compile the package. $(MAKE) touch build-stamp clean: dh_testdir dh_testroot rm -f build-stamp # Add here commands to clean up after the build process. -$(MAKE) distclean ifneq "$(wildcard /usr/share/misc/config.sub)" "" cp -f /usr/share/misc/config.sub config.sub endif ifneq "$(wildcard /usr/share/misc/config.guess)" "" cp -f /usr/share/misc/config.guess config.guess endif dh_clean install: build dh_testdir dh_testroot dh_clean -k dh_installdirs # Add here commands to install the package into debian/tmp $(MAKE) install DESTDIR=$(CURDIR)/debian/tmp # Build architecture-independent files here. binary-indep: build install # We have nothing to do by default. # Build architecture-dependent files here. binary-arch: build install dh_testdir dh_testroot dh_installchangelogs ChangeLog dh_installdocs dh_installexamples dh_install --sourcedir=debian/tmp # dh_installmenu # dh_installdebconf # dh_installlogrotate # dh_installemacsen # dh_installpam # dh_installmime # dh_installinit # dh_installcron # dh_installinfo dh_installman dh_link dh_strip dh_compress dh_fixperms # dh_perl # dh_python dh_makeshlibs dh_installdeb dh_shlibdeps dh_gencontrol dh_md5sums dh_builddeb binary: binary-indep binary-arch .PHONY: build clean binary-indep binary-arch binary install sparsehash-2.0.2/packages/deb/sparsehash.install0000664000175000017500000000024511721252346016644 00000000000000usr/include/google/* usr/include/sparsehash/* usr/lib/pkgconfig/* debian/tmp/usr/include/google/* debian/tmp/usr/include/sparsehash/* debian/tmp/usr/lib/pkgconfig/* sparsehash-2.0.2/packages/deb/copyright0000664000175000017500000000330211721550007015036 00000000000000This package was debianized by Donovan Hide on Wed, Thu, 23 Feb 2012 23:47:18 +0000. 
It was downloaded from http://code.google.com/p/sparsehash/downloads/list Upstream Author: google-sparsehash@googlegroups.com Copyright (c) 2005, Google Inc. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of Google Inc. nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. sparsehash-2.0.2/packages/deb/changelog0000664000175000017500000001023511721550007014760 00000000000000sparsehash (2.0.2-1) unstable; urgency=low * New upstream release. -- Google Inc. and others Thu, 23 Feb 2012 23:47:18 +0000 sparsehash (2.0.1-1) unstable; urgency=low * New upstream release. -- Google Inc. 
and others Wed, 01 Feb 2012 02:57:48 +0000 sparsehash (2.0-1) unstable; urgency=low * New upstream release. -- Google Inc. and others Tue, 31 Jan 2012 11:33:04 -0800 sparsehash (1.12-1) unstable; urgency=low * New upstream release. -- Google Inc. Tue, 20 Dec 2011 21:04:04 -0800 sparsehash (1.11-1) unstable; urgency=low * New upstream release. -- Google Inc. Thu, 23 Jun 2011 21:12:58 -0700 sparsehash (1.10-1) unstable; urgency=low * New upstream release. -- Google Inc. Thu, 20 Jan 2011 16:07:39 -0800 sparsehash (1.9-1) unstable; urgency=low * New upstream release. -- Google Inc. Fri, 24 Sep 2010 11:37:50 -0700 sparsehash (1.8.1-1) unstable; urgency=low * New upstream release. -- Google Inc. Thu, 29 Jul 2010 15:01:29 -0700 sparsehash (1.8-1) unstable; urgency=low * New upstream release. -- Google Inc. Thu, 29 Jul 2010 09:53:26 -0700 sparsehash (1.7-1) unstable; urgency=low * New upstream release. -- Google Inc. Wed, 31 Mar 2010 12:32:03 -0700 sparsehash (1.6-1) unstable; urgency=low * New upstream release. -- Google Inc. Fri, 08 Jan 2010 14:47:55 -0800 sparsehash (1.5.2-1) unstable; urgency=low * New upstream release. -- Google Inc. Tue, 12 May 2009 14:16:38 -0700 sparsehash (1.5.1-1) unstable; urgency=low * New upstream release. -- Google Inc. Fri, 08 May 2009 15:23:44 -0700 sparsehash (1.5-1) unstable; urgency=low * New upstream release. -- Google Inc. Wed, 06 May 2009 11:28:49 -0700 sparsehash (1.4-1) unstable; urgency=low * New upstream release. -- Google Inc. Wed, 28 Jan 2009 17:11:31 -0800 sparsehash (1.3-1) unstable; urgency=low * New upstream release. -- Google Inc. Thu, 06 Nov 2008 15:06:09 -0800 sparsehash (1.2-1) unstable; urgency=low * New upstream release. -- Google Inc. Thu, 18 Sep 2008 13:53:20 -0700 sparsehash (1.1-1) unstable; urgency=low * New upstream release. -- Google Inc. Mon, 11 Feb 2008 16:30:11 -0800 sparsehash (1.0-1) unstable; urgency=low * New upstream release. We are now out of beta. -- Google Inc. 
Tue, 13 Nov 2007 15:15:46 -0800 sparsehash (0.9.1-1) unstable; urgency=low * New upstream release. -- Google Inc. Fri, 12 Oct 2007 12:35:24 -0700 sparsehash (0.9-1) unstable; urgency=low * New upstream release. -- Google Inc. Tue, 09 Oct 2007 14:15:21 -0700 sparsehash (0.8-1) unstable; urgency=low * New upstream release. -- Google Inc. Tue, 03 Jul 2007 12:55:04 -0700 sparsehash (0.7-1) unstable; urgency=low * New upstream release. -- Google Inc. Mon, 11 Jun 2007 11:33:41 -0700 sparsehash (0.6-1) unstable; urgency=low * New upstream release. -- Google Inc. Tue, 20 Mar 2007 17:29:34 -0700 sparsehash (0.5-1) unstable; urgency=low * New upstream release. -- Google Inc. Sat, 21 Oct 2006 13:47:47 -0700 sparsehash (0.4-1) unstable; urgency=low * New upstream release. -- Google Inc. Sun, 23 Apr 2006 22:42:35 -0700 sparsehash (0.3-1) unstable; urgency=low * New upstream release. -- Google Inc. Thu, 03 Nov 2005 20:12:31 -0800 sparsehash (0.2-1) unstable; urgency=low * New upstream release. -- Google Inc. Mon, 02 May 2005 07:04:46 -0700 sparsehash (0.1-1) unstable; urgency=low * Initial release. -- Google Inc. Tue, 15 Feb 2005 07:17:02 -0800 sparsehash-2.0.2/packages/deb/compat0000664000175000017500000000000211721252346014310 000000000000004 sparsehash-2.0.2/packages/deb.sh0000775000175000017500000000500011721252346013444 00000000000000#!/bin/bash -e # This takes one commandline argument, the name of the package. If no # name is given, then we'll end up just using the name associated with # an arbitrary .tar.gz file in the rootdir. That's fine: there's probably # only one. # # Run this from the 'packages' directory, just under rootdir ## Set LIB to lib if exporting a library, empty-string else LIB= #LIB=lib PACKAGE="$1" VERSION="$2" # We can only build Debian packages, if the Debian build tools are installed if [ \! -x /usr/bin/debuild ]; then echo "Cannot find /usr/bin/debuild. Not building Debian packages." 
1>&2 exit 0 fi # Double-check we're in the packages directory, just under rootdir if [ \! -r ../Makefile -a \! -r ../INSTALL ]; then echo "Must run $0 in the 'packages' directory, under the root directory." 1>&2 echo "Also, you must run \"make dist\" before running this script." 1>&2 exit 0 fi # Find the top directory for this package topdir="${PWD%/*}" # Find the tar archive built by "make dist" archive="${PACKAGE}-${VERSION}" archive_with_underscore="${PACKAGE}_${VERSION}" if [ -z "${archive}" ]; then echo "Cannot find ../$PACKAGE*.tar.gz. Run \"make dist\" first." 1>&2 exit 0 fi # Create a pristine directory for building the Debian package files trap 'rm -rf '`pwd`/tmp'; exit $?' EXIT SIGHUP SIGINT SIGTERM rm -rf tmp mkdir -p tmp cd tmp # Debian has very specific requirements about the naming of build # directories, and tar archives. It also wants to write all generated # packages to the parent of the source directory. We accommodate these # requirements by building directly from the tar file. ln -s "${topdir}/${archive}.tar.gz" "${LIB}${archive}.orig.tar.gz" # Some version of debuilder want foo.orig.tar.gz with _ between versions. ln -s "${topdir}/${archive}.tar.gz" "${LIB}${archive_with_underscore}.orig.tar.gz" tar zfx "${LIB}${archive}.orig.tar.gz" [ -n "${LIB}" ] && mv "${archive}" "${LIB}${archive}" cd "${LIB}${archive}" # This is one of those 'specific requirements': where the deb control files live cp -a "packages/deb" "debian" # Now, we can call Debian's standard build tool debuild -uc -us cd ../.. # get back to the original top-level dir # We'll put the result in a subdirectory that's named after the OS version # we've made this .deb file for. 
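deb.sh satisfies debuild's strict source-naming requirements by symlinking the `make dist` tarball under both spellings that different debuild versions accept. A minimal sketch of just that name construction (the package and version values here are illustrative, not taken from a real build):

```shell
# Illustrative inputs; deb.sh receives these as "$1" and "$2"
PACKAGE="sparsehash"
VERSION="2.0.2"

# Hyphenated form, matching the directory the tarball unpacks into
archive="${PACKAGE}-${VERSION}"
# Underscored form, which some debuild versions require for the orig tarball
archive_with_underscore="${PACKAGE}_${VERSION}"

# deb.sh symlinks the same dist tarball under both orig-tarball names
echo "${archive}.orig.tar.gz"
echo "${archive_with_underscore}.orig.tar.gz"
```

The script then untars the hyphenated archive, copies `packages/deb` to `debian/` (a layout debuild mandates), and invokes `debuild -uc -us` to build unsigned packages.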
destdir="debian-$(cat /etc/debian_version 2>/dev/null || echo UNKNOWN)" rm -rf "$destdir" mkdir -p "$destdir" mv $(find tmp -mindepth 1 -maxdepth 1 -type f) "$destdir" echo echo "The Debian package files are located in $PWD/$destdir" sparsehash-2.0.2/packages/rpm.sh0000775000175000017500000000514511721252346013522 00000000000000#!/bin/sh -e # Run this from the 'packages' directory, just under rootdir # We can only build rpm packages, if the rpm build tools are installed if [ \! -x /usr/bin/rpmbuild ] then echo "Cannot find /usr/bin/rpmbuild. Not building an rpm." 1>&2 exit 0 fi # Check the commandline flags PACKAGE="$1" VERSION="$2" fullname="${PACKAGE}-${VERSION}" archive=../$fullname.tar.gz if [ -z "$1" -o -z "$2" ] then echo "Usage: $0 " 1>&2 exit 0 fi # Double-check we're in the packages directory, just under rootdir if [ \! -r ../Makefile -a \! -r ../INSTALL ] then echo "Must run $0 in the 'packages' directory, under the root directory." 1>&2 echo "Also, you must run \"make dist\" before running this script." 1>&2 exit 0 fi if [ \! -r "$archive" ] then echo "Cannot find $archive. Run \"make dist\" first." 1>&2 exit 0 fi # Create the directory where the input lives, and where the output should live RPM_SOURCE_DIR="/tmp/rpmsource-$fullname" RPM_BUILD_DIR="/tmp/rpmbuild-$fullname" trap 'rm -rf $RPM_SOURCE_DIR $RPM_BUILD_DIR; exit $?' EXIT SIGHUP SIGINT SIGTERM rm -rf "$RPM_SOURCE_DIR" "$RPM_BUILD_DIR" mkdir "$RPM_SOURCE_DIR" mkdir "$RPM_BUILD_DIR" cp "$archive" "$RPM_SOURCE_DIR" # rpmbuild -- as far as I can tell -- asks the OS what CPU it has. # This may differ from what kind of binaries gcc produces. dpkg # does a better job of this, so if we can run 'dpkg --print-architecture' # to get the build CPU, we use that in preference of the rpmbuild # default. 
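rpm.sh prefers `dpkg --print-architecture` over rpmbuild's own CPU guess, falling back silently to rpmbuild's default when dpkg is absent. The command-with-fallback idiom it uses can be shown on its own; a deliberately nonexistent command stands in for dpkg here to exercise the fallback path:

```shell
# Same idiom as rpm.sh: capture the command's output, or an empty string if
# the command is missing or fails (its error message goes to /dev/null).
target=$(this-command-does-not-exist --print-architecture 2>/dev/null || echo "")

# Only extend the rpmbuild invocation when an architecture was actually found
if [ -n "$target" ]; then
  target=" --target $target"
fi

echo "target='${target}'"
```

On a Debian-derived system the real script would instead produce something like ` --target amd64`, which is then spliced into the `rpmbuild -bb` command line.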
target=`dpkg --print-architecture 2>/dev/null || echo ""` if [ -n "$target" ] then target=" --target $target" fi rpmbuild -bb rpm/rpm.spec $target \ --define "NAME $PACKAGE" \ --define "VERSION $VERSION" \ --define "_sourcedir $RPM_SOURCE_DIR" \ --define "_builddir $RPM_BUILD_DIR" \ --define "_rpmdir $RPM_SOURCE_DIR" # We put the output in a directory based on what system we've built for destdir=rpm-unknown if [ -r /etc/issue ] then grep "Red Hat.*release 7" /etc/issue >/dev/null 2>&1 && destdir=rh7 grep "Red Hat.*release 8" /etc/issue >/dev/null 2>&1 && destdir=rh8 grep "Red Hat.*release 9" /etc/issue >/dev/null 2>&1 && destdir=rh9 grep "Fedora Core.*release 1" /etc/issue >/dev/null 2>&1 && destdir=fc1 grep "Fedora Core.*release 2" /etc/issue >/dev/null 2>&1 && destdir=fc2 grep "Fedora Core.*release 3" /etc/issue >/dev/null 2>&1 && destdir=fc3 fi rm -rf "$destdir" mkdir -p "$destdir" # We want to get not only the main package but devel etc, hence the middle * mv "$RPM_SOURCE_DIR"/*/"${PACKAGE}"-*"${VERSION}"*.rpm "$destdir" echo echo "The rpm package file(s) are located in $PWD/$destdir" sparsehash-2.0.2/packages/rpm/0000775000175000017500000000000011721550526013237 500000000000000sparsehash-2.0.2/packages/rpm/rpm.spec0000664000175000017500000000415111721252346014631 00000000000000%define RELEASE 1 %define rel %{?CUSTOM_RELEASE} %{!?CUSTOM_RELEASE:%RELEASE} %define prefix /usr Name: %NAME Summary: hash_map and hash_set classes with minimal space overhead Version: %VERSION Release: %rel Group: Development/Libraries URL: http://code.google.com/p/sparsehash License: BSD Vendor: Google Inc. and others Packager: Google Inc. and others Source: http://%{NAME}.googlecode.com/files/%{NAME}-%{VERSION}.tar.gz Distribution: Redhat 7 and above. Buildroot: %{_tmppath}/%{name}-root Prefix: %prefix Buildarch: noarch %description The %name package contains several hash-map implementations, similar in API to the SGI hash_map class, but with different performance characteristics. 
sparse_hash_map uses very little space overhead: 1-2 bits per entry. dense_hash_map is typically faster than the default SGI STL implementation. This package also includes hash-set analogues of these classes. %changelog * Wed Apr 22 2009 - Change build rule to use %configure instead of ./configure - Change install to use DESTDIR instead of prefix for make install - Use wildcards for doc/ and lib/ directories - Use {_includedir} instead of {prefix}/include * Fri Jan 14 2005 - First draft %prep %setup %build # I can't use '% configure', because it defines -m32 which breaks on # my development environment for some reason. But I do take # as much from % configure (in /usr/lib/rpm/macros) as I can. ./configure --prefix=%{_prefix} --exec-prefix=%{_exec_prefix} --bindir=%{_bindir} --sbindir=%{_sbindir} --sysconfdir=%{_sysconfdir} --datadir=%{_datadir} --includedir=%{_includedir} --libdir=%{_libdir} --libexecdir=%{_libexecdir} --localstatedir=%{_localstatedir} --sharedstatedir=%{_sharedstatedir} --mandir=%{_mandir} --infodir=%{_infodir} make %install rm -rf $RPM_BUILD_ROOT make DESTDIR=$RPM_BUILD_ROOT install %clean rm -rf $RPM_BUILD_ROOT %files %defattr(-,root,root) %docdir %{prefix}/share/doc/%{NAME}-%{VERSION} %{prefix}/share/doc/%{NAME}-%{VERSION}/* %{_includedir}/google %{_includedir}/sparsehash %{_libdir}/pkgconfig/*.pc sparsehash-2.0.2/configure.ac0000664000175000017500000000537411721254326013101 00000000000000## Process this file with autoconf to produce configure. ## In general, the safest way to proceed is to run ./autogen.sh # make sure we're interpreted by some minimal autoconf AC_PREREQ(2.57) AC_INIT(sparsehash, 2.0.2, google-sparsehash@googlegroups.com) # The argument here is just something that should be in the current directory # (for sanity checking) AC_CONFIG_SRCDIR(README) AM_INIT_AUTOMAKE([dist-zip]) AM_CONFIG_HEADER(src/config.h) # Checks for programs. 
AC_PROG_CXX AC_PROG_CC AC_PROG_CPP AM_CONDITIONAL(GCC, test "$GCC" = yes) # let the Makefile know if we're gcc # Check whether some low-level functions/files are available AC_HEADER_STDC AC_CHECK_FUNCS(memcpy memmove) AC_CHECK_TYPES([uint16_t]) # defined in C99 systems AC_CHECK_TYPES([u_int16_t]) # defined in BSD-derived systems, and gnu AC_CHECK_TYPES([__uint16]) # defined in some windows systems (vc7) AC_CHECK_TYPES([long long]) # probably defined everywhere, but... # These are 'only' needed for unittests AC_CHECK_HEADERS(sys/resource.h unistd.h sys/time.h sys/utsname.h) # If you have google-perftools installed, we can do a bit more testing. # We not only want to set HAVE_MALLOC_EXTENSION_H, we also want to set # a variable to let the Makefile to know to link in tcmalloc. AC_LANG([C++]) AC_CHECK_HEADERS(google/malloc_extension.h, tcmalloc_libs=-ltcmalloc, tcmalloc_libs=) # On some systems, when linking in tcmalloc you also need to link in # pthread. That's a bug somewhere, but we'll work around it for now. tcmalloc_flags="" if test -n "$tcmalloc_libs"; then ACX_PTHREAD tcmalloc_flags="\$(PTHREAD_CFLAGS)" tcmalloc_libs="$tcmalloc_libs \$(PTHREAD_LIBS)" fi AC_SUBST(tcmalloc_flags) AC_SUBST(tcmalloc_libs) # Figure out where hash_map lives and also hash_fun.h (or stl_hash_fun.h). # This also tells us what namespace hash code lives in. AC_CXX_STL_HASH AC_CXX_STL_HASH_FUN # Find out what namespace the user wants our classes to be defined in. # TODO(csilvers): change this to default to sparsehash instead. AC_DEFINE_GOOGLE_NAMESPACE(google) # In unix-based systems, hash is always defined as hash<> (in namespace. # HASH_NAMESPACE.) So we can use a simple AC_DEFINE here. On # windows, and possibly on future unix STL implementations, this # macro will evaluate to something different.) 
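The configure.ac fragment above links tcmalloc into the unittests only when `google/malloc_extension.h` is found, and then also pulls in pthread flags to work around the linking bug it mentions. The shell logic that the generated configure script effectively runs is a simple conditional flag accumulation; the sketch below hard-codes a successful header check (in a real run, AC_CHECK_HEADERS sets `tcmalloc_libs`):

```shell
# Pretend AC_CHECK_HEADERS found the tcmalloc header and set this for us
tcmalloc_libs="-ltcmalloc"

# Mirror of the configure.ac logic: only when tcmalloc was detected do the
# pthread compile flags and libraries get appended alongside it. The
# $(PTHREAD_*) references are left literal for the Makefile to expand later.
tcmalloc_flags=""
if [ -n "$tcmalloc_libs" ]; then
  tcmalloc_flags='$(PTHREAD_CFLAGS)'
  tcmalloc_libs="$tcmalloc_libs \$(PTHREAD_LIBS)"
fi

echo "flags=$tcmalloc_flags libs=$tcmalloc_libs"
```

AC_SUBST then exports both variables so the Makefile sees the unexpanded `$(PTHREAD_CFLAGS)`/`$(PTHREAD_LIBS)` references, which ACX_PTHREAD has defined.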
AC_DEFINE(SPARSEHASH_HASH_NO_NAMESPACE, hash, [The system-provided hash function, in namespace HASH_NAMESPACE.]) # Do *not* define this in terms of SPARSEHASH_HASH_NO_NAMESPACE, because # SPARSEHASH_HASH is exported to sparseconfig.h, but S_H_NO_NAMESPACE isn't. AC_DEFINE(SPARSEHASH_HASH, HASH_NAMESPACE::hash, [The system-provided hash function including the namespace.]) # Write generated configuration file AC_CONFIG_FILES([Makefile]) AC_OUTPUT sparsehash-2.0.2/config.sub0000755000175000017500000010460611721254575012600 00000000000000#! /bin/sh # Configuration validation subroutine script. # Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, # 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, # 2011 Free Software Foundation, Inc. timestamp='2011-03-23' # This file is (in principle) common to ALL GNU software. # The presence of a machine in this file suggests that SOME GNU software # can handle that machine. It does not imply ALL GNU software can. # # This file is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street - Fifth Floor, Boston, MA # 02110-1301, USA. # # As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that program. # Please send patches to . 
Submit a context # diff and a properly formatted GNU ChangeLog entry. # # Configuration subroutine to validate and canonicalize a configuration type. # Supply the specified configuration type as an argument. # If it is invalid, we print an error message on stderr and exit with code 1. # Otherwise, we print the canonical config type on stdout and succeed. # You can get the latest version of this script from: # http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.sub;hb=HEAD # This file is supposed to be the same for all GNU packages # and recognize all the CPU types, system types and aliases # that are meaningful with *any* GNU software. # Each package is responsible for reporting which valid configurations # it does not support. The user should be able to distinguish # a failure to support a valid configuration from a meaningless # configuration. # The goal of this file is to map all the various variations of a given # machine specification into a single specification in the form: # CPU_TYPE-MANUFACTURER-OPERATING_SYSTEM # or in some cases, the newer four-part form: # CPU_TYPE-MANUFACTURER-KERNEL-OPERATING_SYSTEM # It is wrong to echo any other type of specification. me=`echo "$0" | sed -e 's,.*/,,'` usage="\ Usage: $0 [OPTION] CPU-MFR-OPSYS $0 [OPTION] ALIAS Canonicalize a configuration name. Operation modes: -h, --help print this help, then exit -t, --time-stamp print date of last modification, then exit -v, --version print version number, then exit Report bugs and patches to ." version="\ GNU config.sub ($timestamp) Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE." help=" Try \`$me --help' for more information." 
# Parse command line while test $# -gt 0 ; do case $1 in --time-stamp | --time* | -t ) echo "$timestamp" ; exit ;; --version | -v ) echo "$version" ; exit ;; --help | --h* | -h ) echo "$usage"; exit ;; -- ) # Stop option processing shift; break ;; - ) # Use stdin as input. break ;; -* ) echo "$me: invalid option $1$help" exit 1 ;; *local*) # First pass through any local machine types. echo $1 exit ;; * ) break ;; esac done case $# in 0) echo "$me: missing argument$help" >&2 exit 1;; 1) ;; *) echo "$me: too many arguments$help" >&2 exit 1;; esac # Separate what the user gave into CPU-COMPANY and OS or KERNEL-OS (if any). # Here we must recognize all the valid KERNEL-OS combinations. maybe_os=`echo $1 | sed 's/^\(.*\)-\([^-]*-[^-]*\)$/\2/'` case $maybe_os in nto-qnx* | linux-gnu* | linux-android* | linux-dietlibc | linux-newlib* | \ linux-uclibc* | uclinux-uclibc* | uclinux-gnu* | kfreebsd*-gnu* | \ knetbsd*-gnu* | netbsd*-gnu* | \ kopensolaris*-gnu* | \ storm-chaos* | os2-emx* | rtmk-nova*) os=-$maybe_os basic_machine=`echo $1 | sed 's/^\(.*\)-\([^-]*-[^-]*\)$/\1/'` ;; *) basic_machine=`echo $1 | sed 's/-[^-]*$//'` if [ $basic_machine != $1 ] then os=`echo $1 | sed 's/.*-/-/'` else os=; fi ;; esac ### Let's recognize common machines as not being operating systems so ### that things like config.sub decstation-3100 work. We also ### recognize some manufacturers as not being operating systems, so we ### can provide default operating systems below. case $os in -sun*os*) # Prevent following clause from handling this invalid input. 
;; -dec* | -mips* | -sequent* | -encore* | -pc532* | -sgi* | -sony* | \ -att* | -7300* | -3300* | -delta* | -motorola* | -sun[234]* | \ -unicom* | -ibm* | -next | -hp | -isi* | -apollo | -altos* | \ -convergent* | -ncr* | -news | -32* | -3600* | -3100* | -hitachi* |\ -c[123]* | -convex* | -sun | -crds | -omron* | -dg | -ultra | -tti* | \ -harris | -dolphin | -highlevel | -gould | -cbm | -ns | -masscomp | \ -apple | -axis | -knuth | -cray | -microblaze) os= basic_machine=$1 ;; -bluegene*) os=-cnk ;; -sim | -cisco | -oki | -wec | -winbond) os= basic_machine=$1 ;; -scout) ;; -wrs) os=-vxworks basic_machine=$1 ;; -chorusos*) os=-chorusos basic_machine=$1 ;; -chorusrdb) os=-chorusrdb basic_machine=$1 ;; -hiux*) os=-hiuxwe2 ;; -sco6) os=-sco5v6 basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -sco5) os=-sco3.2v5 basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -sco4) os=-sco3.2v4 basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -sco3.2.[4-9]*) os=`echo $os | sed -e 's/sco3.2./sco3.2v/'` basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -sco3.2v[4-9]*) # Don't forget version if it is 3.2v4 or newer. basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -sco5v6*) # Don't forget version if it is 3.2v4 or newer. basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -sco*) os=-sco3.2v2 basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -udk*) basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -isc) os=-isc2.2 basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -clix*) basic_machine=clipper-intergraph ;; -isc*) basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -lynx*) os=-lynxos ;; -ptx*) basic_machine=`echo $1 | sed -e 's/86-.*/86-sequent/'` ;; -windowsnt*) os=`echo $os | sed -e 's/windowsnt/winnt/'` ;; -psos*) os=-psos ;; -mint | -mint[0-9]*) basic_machine=m68k-atari os=-mint ;; esac # Decode aliases for certain CPU-COMPANY combinations. case $basic_machine in # Recognize the basic CPU types without company name. 
# Some are omitted here because they have special meanings below. 1750a | 580 \ | a29k \ | alpha | alphaev[4-8] | alphaev56 | alphaev6[78] | alphapca5[67] \ | alpha64 | alpha64ev[4-8] | alpha64ev56 | alpha64ev6[78] | alpha64pca5[67] \ | am33_2.0 \ | arc | arm | arm[bl]e | arme[lb] | armv[2345] | armv[345][lb] | avr | avr32 \ | bfin \ | c4x | clipper \ | d10v | d30v | dlx | dsp16xx \ | fido | fr30 | frv \ | h8300 | h8500 | hppa | hppa1.[01] | hppa2.0 | hppa2.0[nw] | hppa64 \ | i370 | i860 | i960 | ia64 \ | ip2k | iq2000 \ | lm32 \ | m32c | m32r | m32rle | m68000 | m68k | m88k \ | maxq | mb | microblaze | mcore | mep | metag \ | mips | mipsbe | mipseb | mipsel | mipsle \ | mips16 \ | mips64 | mips64el \ | mips64octeon | mips64octeonel \ | mips64orion | mips64orionel \ | mips64r5900 | mips64r5900el \ | mips64vr | mips64vrel \ | mips64vr4100 | mips64vr4100el \ | mips64vr4300 | mips64vr4300el \ | mips64vr5000 | mips64vr5000el \ | mips64vr5900 | mips64vr5900el \ | mipsisa32 | mipsisa32el \ | mipsisa32r2 | mipsisa32r2el \ | mipsisa64 | mipsisa64el \ | mipsisa64r2 | mipsisa64r2el \ | mipsisa64sb1 | mipsisa64sb1el \ | mipsisa64sr71k | mipsisa64sr71kel \ | mipstx39 | mipstx39el \ | mn10200 | mn10300 \ | moxie \ | mt \ | msp430 \ | nds32 | nds32le | nds32be \ | nios | nios2 \ | ns16k | ns32k \ | open8 \ | or32 \ | pdp10 | pdp11 | pj | pjl \ | powerpc | powerpc64 | powerpc64le | powerpcle \ | pyramid \ | rx \ | score \ | sh | sh[1234] | sh[24]a | sh[24]aeb | sh[23]e | sh[34]eb | sheb | shbe | shle | sh[1234]le | sh3ele \ | sh64 | sh64le \ | sparc | sparc64 | sparc64b | sparc64v | sparc86x | sparclet | sparclite \ | sparcv8 | sparcv9 | sparcv9b | sparcv9v \ | spu \ | tahoe | tic4x | tic54x | tic55x | tic6x | tic80 | tron \ | ubicom32 \ | v850 | v850e \ | we32k \ | x86 | xc16x | xstormy16 | xtensa \ | z8k | z80) basic_machine=$basic_machine-unknown ;; c54x) basic_machine=tic54x-unknown ;; c55x) basic_machine=tic55x-unknown ;; c6x) basic_machine=tic6x-unknown ;; m6811 | m68hc11 | 
m6812 | m68hc12 | picochip) # Motorola 68HC11/12. basic_machine=$basic_machine-unknown os=-none ;; m88110 | m680[12346]0 | m683?2 | m68360 | m5200 | v70 | w65 | z8k) ;; ms1) basic_machine=mt-unknown ;; strongarm | thumb | xscale) basic_machine=arm-unknown ;; xscaleeb) basic_machine=armeb-unknown ;; xscaleel) basic_machine=armel-unknown ;; # We use `pc' rather than `unknown' # because (1) that's what they normally are, and # (2) the word "unknown" tends to confuse beginning users. i*86 | x86_64) basic_machine=$basic_machine-pc ;; # Object if more than one company name word. *-*-*) echo Invalid configuration \`$1\': machine \`$basic_machine\' not recognized 1>&2 exit 1 ;; # Recognize the basic CPU types with company name. 580-* \ | a29k-* \ | alpha-* | alphaev[4-8]-* | alphaev56-* | alphaev6[78]-* \ | alpha64-* | alpha64ev[4-8]-* | alpha64ev56-* | alpha64ev6[78]-* \ | alphapca5[67]-* | alpha64pca5[67]-* | arc-* \ | arm-* | armbe-* | armle-* | armeb-* | armv*-* \ | avr-* | avr32-* \ | bfin-* | bs2000-* \ | c[123]* | c30-* | [cjt]90-* | c4x-* \ | clipper-* | craynv-* | cydra-* \ | d10v-* | d30v-* | dlx-* \ | elxsi-* \ | f30[01]-* | f700-* | fido-* | fr30-* | frv-* | fx80-* \ | h8300-* | h8500-* \ | hppa-* | hppa1.[01]-* | hppa2.0-* | hppa2.0[nw]-* | hppa64-* \ | i*86-* | i860-* | i960-* | ia64-* \ | ip2k-* | iq2000-* \ | lm32-* \ | m32c-* | m32r-* | m32rle-* \ | m68000-* | m680[012346]0-* | m68360-* | m683?2-* | m68k-* \ | m88110-* | m88k-* | maxq-* | mcore-* | metag-* | microblaze-* \ | mips-* | mipsbe-* | mipseb-* | mipsel-* | mipsle-* \ | mips16-* \ | mips64-* | mips64el-* \ | mips64octeon-* | mips64octeonel-* \ | mips64orion-* | mips64orionel-* \ | mips64r5900-* | mips64r5900el-* \ | mips64vr-* | mips64vrel-* \ | mips64vr4100-* | mips64vr4100el-* \ | mips64vr4300-* | mips64vr4300el-* \ | mips64vr5000-* | mips64vr5000el-* \ | mips64vr5900-* | mips64vr5900el-* \ | mipsisa32-* | mipsisa32el-* \ | mipsisa32r2-* | mipsisa32r2el-* \ | mipsisa64-* | mipsisa64el-* \ | 
mipsisa64r2-* | mipsisa64r2el-* \ | mipsisa64sb1-* | mipsisa64sb1el-* \ | mipsisa64sr71k-* | mipsisa64sr71kel-* \ | mipstx39-* | mipstx39el-* \ | mmix-* \ | mt-* \ | msp430-* \ | nds32-* | nds32le-* | nds32be-* \ | nios-* | nios2-* \ | none-* | np1-* | ns16k-* | ns32k-* \ | open8-* \ | orion-* \ | pdp10-* | pdp11-* | pj-* | pjl-* | pn-* | power-* \ | powerpc-* | powerpc64-* | powerpc64le-* | powerpcle-* \ | pyramid-* \ | romp-* | rs6000-* | rx-* \ | sh-* | sh[1234]-* | sh[24]a-* | sh[24]aeb-* | sh[23]e-* | sh[34]eb-* | sheb-* | shbe-* \ | shle-* | sh[1234]le-* | sh3ele-* | sh64-* | sh64le-* \ | sparc-* | sparc64-* | sparc64b-* | sparc64v-* | sparc86x-* | sparclet-* \ | sparclite-* \ | sparcv8-* | sparcv9-* | sparcv9b-* | sparcv9v-* | sv1-* | sx?-* \ | tahoe-* \ | tic30-* | tic4x-* | tic54x-* | tic55x-* | tic6x-* | tic80-* \ | tile-* | tilegx-* \ | tron-* \ | ubicom32-* \ | v850-* | v850e-* | vax-* \ | we32k-* \ | x86-* | x86_64-* | xc16x-* | xps100-* \ | xstormy16-* | xtensa*-* \ | ymp-* \ | z8k-* | z80-*) ;; # Recognize the basic CPU types without company name, with glob match. xtensa*) basic_machine=$basic_machine-unknown ;; # Recognize the various machine names and aliases which stand # for a CPU type and a company and sometimes even an OS. 
386bsd) basic_machine=i386-unknown os=-bsd ;; 3b1 | 7300 | 7300-att | att-7300 | pc7300 | safari | unixpc) basic_machine=m68000-att ;; 3b*) basic_machine=we32k-att ;; a29khif) basic_machine=a29k-amd os=-udi ;; abacus) basic_machine=abacus-unknown ;; adobe68k) basic_machine=m68010-adobe os=-scout ;; alliant | fx80) basic_machine=fx80-alliant ;; altos | altos3068) basic_machine=m68k-altos ;; am29k) basic_machine=a29k-none os=-bsd ;; amd64) basic_machine=x86_64-pc ;; amd64-*) basic_machine=x86_64-`echo $basic_machine | sed 's/^[^-]*-//'` ;; amdahl) basic_machine=580-amdahl os=-sysv ;; amiga | amiga-*) basic_machine=m68k-unknown ;; amigaos | amigados) basic_machine=m68k-unknown os=-amigaos ;; amigaunix | amix) basic_machine=m68k-unknown os=-sysv4 ;; apollo68) basic_machine=m68k-apollo os=-sysv ;; apollo68bsd) basic_machine=m68k-apollo os=-bsd ;; aros) basic_machine=i386-pc os=-aros ;; aux) basic_machine=m68k-apple os=-aux ;; balance) basic_machine=ns32k-sequent os=-dynix ;; blackfin) basic_machine=bfin-unknown os=-linux ;; blackfin-*) basic_machine=bfin-`echo $basic_machine | sed 's/^[^-]*-//'` os=-linux ;; bluegene*) basic_machine=powerpc-ibm os=-cnk ;; c54x-*) basic_machine=tic54x-`echo $basic_machine | sed 's/^[^-]*-//'` ;; c55x-*) basic_machine=tic55x-`echo $basic_machine | sed 's/^[^-]*-//'` ;; c6x-*) basic_machine=tic6x-`echo $basic_machine | sed 's/^[^-]*-//'` ;; c90) basic_machine=c90-cray os=-unicos ;; cegcc) basic_machine=arm-unknown os=-cegcc ;; convex-c1) basic_machine=c1-convex os=-bsd ;; convex-c2) basic_machine=c2-convex os=-bsd ;; convex-c32) basic_machine=c32-convex os=-bsd ;; convex-c34) basic_machine=c34-convex os=-bsd ;; convex-c38) basic_machine=c38-convex os=-bsd ;; cray | j90) basic_machine=j90-cray os=-unicos ;; craynv) basic_machine=craynv-cray os=-unicosmp ;; cr16 | cr16-*) basic_machine=cr16-unknown os=-elf ;; crds | unos) basic_machine=m68k-crds ;; crisv32 | crisv32-* | etraxfs*) basic_machine=crisv32-axis ;; cris | cris-* | etrax*) 
basic_machine=cris-axis ;; crx) basic_machine=crx-unknown os=-elf ;; da30 | da30-*) basic_machine=m68k-da30 ;; decstation | decstation-3100 | pmax | pmax-* | pmin | dec3100 | decstatn) basic_machine=mips-dec ;; decsystem10* | dec10*) basic_machine=pdp10-dec os=-tops10 ;; decsystem20* | dec20*) basic_machine=pdp10-dec os=-tops20 ;; delta | 3300 | motorola-3300 | motorola-delta \ | 3300-motorola | delta-motorola) basic_machine=m68k-motorola ;; delta88) basic_machine=m88k-motorola os=-sysv3 ;; dicos) basic_machine=i686-pc os=-dicos ;; djgpp) basic_machine=i586-pc os=-msdosdjgpp ;; dpx20 | dpx20-*) basic_machine=rs6000-bull os=-bosx ;; dpx2* | dpx2*-bull) basic_machine=m68k-bull os=-sysv3 ;; ebmon29k) basic_machine=a29k-amd os=-ebmon ;; elxsi) basic_machine=elxsi-elxsi os=-bsd ;; encore | umax | mmax) basic_machine=ns32k-encore ;; es1800 | OSE68k | ose68k | ose | OSE) basic_machine=m68k-ericsson os=-ose ;; fx2800) basic_machine=i860-alliant ;; genix) basic_machine=ns32k-ns ;; gmicro) basic_machine=tron-gmicro os=-sysv ;; go32) basic_machine=i386-pc os=-go32 ;; h3050r* | hiux*) basic_machine=hppa1.1-hitachi os=-hiuxwe2 ;; h8300hms) basic_machine=h8300-hitachi os=-hms ;; h8300xray) basic_machine=h8300-hitachi os=-xray ;; h8500hms) basic_machine=h8500-hitachi os=-hms ;; harris) basic_machine=m88k-harris os=-sysv3 ;; hp300-*) basic_machine=m68k-hp ;; hp300bsd) basic_machine=m68k-hp os=-bsd ;; hp300hpux) basic_machine=m68k-hp os=-hpux ;; hp3k9[0-9][0-9] | hp9[0-9][0-9]) basic_machine=hppa1.0-hp ;; hp9k2[0-9][0-9] | hp9k31[0-9]) basic_machine=m68000-hp ;; hp9k3[2-9][0-9]) basic_machine=m68k-hp ;; hp9k6[0-9][0-9] | hp6[0-9][0-9]) basic_machine=hppa1.0-hp ;; hp9k7[0-79][0-9] | hp7[0-79][0-9]) basic_machine=hppa1.1-hp ;; hp9k78[0-9] | hp78[0-9]) # FIXME: really hppa2.0-hp basic_machine=hppa1.1-hp ;; hp9k8[67]1 | hp8[67]1 | hp9k80[24] | hp80[24] | hp9k8[78]9 | hp8[78]9 | hp9k893 | hp893) # FIXME: really hppa2.0-hp basic_machine=hppa1.1-hp ;; hp9k8[0-9][13679] | hp8[0-9][13679]) 
basic_machine=hppa1.1-hp ;; hp9k8[0-9][0-9] | hp8[0-9][0-9]) basic_machine=hppa1.0-hp ;; hppa-next) os=-nextstep3 ;; hppaosf) basic_machine=hppa1.1-hp os=-osf ;; hppro) basic_machine=hppa1.1-hp os=-proelf ;; i370-ibm* | ibm*) basic_machine=i370-ibm ;; # I'm not sure what "Sysv32" means. Should this be sysv3.2? i*86v32) basic_machine=`echo $1 | sed -e 's/86.*/86-pc/'` os=-sysv32 ;; i*86v4*) basic_machine=`echo $1 | sed -e 's/86.*/86-pc/'` os=-sysv4 ;; i*86v) basic_machine=`echo $1 | sed -e 's/86.*/86-pc/'` os=-sysv ;; i*86sol2) basic_machine=`echo $1 | sed -e 's/86.*/86-pc/'` os=-solaris2 ;; i386mach) basic_machine=i386-mach os=-mach ;; i386-vsta | vsta) basic_machine=i386-unknown os=-vsta ;; iris | iris4d) basic_machine=mips-sgi case $os in -irix*) ;; *) os=-irix4 ;; esac ;; isi68 | isi) basic_machine=m68k-isi os=-sysv ;; m68knommu) basic_machine=m68k-unknown os=-linux ;; m68knommu-*) basic_machine=m68k-`echo $basic_machine | sed 's/^[^-]*-//'` os=-linux ;; m88k-omron*) basic_machine=m88k-omron ;; magnum | m3230) basic_machine=mips-mips os=-sysv ;; merlin) basic_machine=ns32k-utek os=-sysv ;; microblaze) basic_machine=microblaze-xilinx ;; mingw32) basic_machine=i386-pc os=-mingw32 ;; mingw32ce) basic_machine=arm-unknown os=-mingw32ce ;; miniframe) basic_machine=m68000-convergent ;; *mint | -mint[0-9]* | *MiNT | *MiNT[0-9]*) basic_machine=m68k-atari os=-mint ;; mips3*-*) basic_machine=`echo $basic_machine | sed -e 's/mips3/mips64/'` ;; mips3*) basic_machine=`echo $basic_machine | sed -e 's/mips3/mips64/'`-unknown ;; monitor) basic_machine=m68k-rom68k os=-coff ;; morphos) basic_machine=powerpc-unknown os=-morphos ;; msdos) basic_machine=i386-pc os=-msdos ;; ms1-*) basic_machine=`echo $basic_machine | sed -e 's/ms1-/mt-/'` ;; mvs) basic_machine=i370-ibm os=-mvs ;; ncr3000) basic_machine=i486-ncr os=-sysv4 ;; netbsd386) basic_machine=i386-unknown os=-netbsd ;; netwinder) basic_machine=armv4l-rebel os=-linux ;; news | news700 | news800 | news900) basic_machine=m68k-sony 
os=-newsos ;; news1000) basic_machine=m68030-sony os=-newsos ;; news-3600 | risc-news) basic_machine=mips-sony os=-newsos ;; necv70) basic_machine=v70-nec os=-sysv ;; next | m*-next ) basic_machine=m68k-next case $os in -nextstep* ) ;; -ns2*) os=-nextstep2 ;; *) os=-nextstep3 ;; esac ;; nh3000) basic_machine=m68k-harris os=-cxux ;; nh[45]000) basic_machine=m88k-harris os=-cxux ;; nindy960) basic_machine=i960-intel os=-nindy ;; mon960) basic_machine=i960-intel os=-mon960 ;; nonstopux) basic_machine=mips-compaq os=-nonstopux ;; np1) basic_machine=np1-gould ;; neo-tandem) basic_machine=neo-tandem ;; nse-tandem) basic_machine=nse-tandem ;; nsr-tandem) basic_machine=nsr-tandem ;; op50n-* | op60c-*) basic_machine=hppa1.1-oki os=-proelf ;; openrisc | openrisc-*) basic_machine=or32-unknown ;; os400) basic_machine=powerpc-ibm os=-os400 ;; OSE68000 | ose68000) basic_machine=m68000-ericsson os=-ose ;; os68k) basic_machine=m68k-none os=-os68k ;; pa-hitachi) basic_machine=hppa1.1-hitachi os=-hiuxwe2 ;; paragon) basic_machine=i860-intel os=-osf ;; parisc) basic_machine=hppa-unknown os=-linux ;; parisc-*) basic_machine=hppa-`echo $basic_machine | sed 's/^[^-]*-//'` os=-linux ;; pbd) basic_machine=sparc-tti ;; pbb) basic_machine=m68k-tti ;; pc532 | pc532-*) basic_machine=ns32k-pc532 ;; pc98) basic_machine=i386-pc ;; pc98-*) basic_machine=i386-`echo $basic_machine | sed 's/^[^-]*-//'` ;; pentium | p5 | k5 | k6 | nexgen | viac3) basic_machine=i586-pc ;; pentiumpro | p6 | 6x86 | athlon | athlon_*) basic_machine=i686-pc ;; pentiumii | pentium2 | pentiumiii | pentium3) basic_machine=i686-pc ;; pentium4) basic_machine=i786-pc ;; pentium-* | p5-* | k5-* | k6-* | nexgen-* | viac3-*) basic_machine=i586-`echo $basic_machine | sed 's/^[^-]*-//'` ;; pentiumpro-* | p6-* | 6x86-* | athlon-*) basic_machine=i686-`echo $basic_machine | sed 's/^[^-]*-//'` ;; pentiumii-* | pentium2-* | pentiumiii-* | pentium3-*) basic_machine=i686-`echo $basic_machine | sed 's/^[^-]*-//'` ;; pentium4-*) 
basic_machine=i786-`echo $basic_machine | sed 's/^[^-]*-//'` ;; pn) basic_machine=pn-gould ;; power) basic_machine=power-ibm ;; ppc | ppcbe) basic_machine=powerpc-unknown ;; ppc-* | ppcbe-*) basic_machine=powerpc-`echo $basic_machine | sed 's/^[^-]*-//'` ;; ppcle | powerpclittle | ppc-le | powerpc-little) basic_machine=powerpcle-unknown ;; ppcle-* | powerpclittle-*) basic_machine=powerpcle-`echo $basic_machine | sed 's/^[^-]*-//'` ;; ppc64) basic_machine=powerpc64-unknown ;; ppc64-*) basic_machine=powerpc64-`echo $basic_machine | sed 's/^[^-]*-//'` ;; ppc64le | powerpc64little | ppc64-le | powerpc64-little) basic_machine=powerpc64le-unknown ;; ppc64le-* | powerpc64little-*) basic_machine=powerpc64le-`echo $basic_machine | sed 's/^[^-]*-//'` ;; ps2) basic_machine=i386-ibm ;; pw32) basic_machine=i586-unknown os=-pw32 ;; rdos) basic_machine=i386-pc os=-rdos ;; rom68k) basic_machine=m68k-rom68k os=-coff ;; rm[46]00) basic_machine=mips-siemens ;; rtpc | rtpc-*) basic_machine=romp-ibm ;; s390 | s390-*) basic_machine=s390-ibm ;; s390x | s390x-*) basic_machine=s390x-ibm ;; sa29200) basic_machine=a29k-amd os=-udi ;; sb1) basic_machine=mipsisa64sb1-unknown ;; sb1el) basic_machine=mipsisa64sb1el-unknown ;; sde) basic_machine=mipsisa32-sde os=-elf ;; sei) basic_machine=mips-sei os=-seiux ;; sequent) basic_machine=i386-sequent ;; sh) basic_machine=sh-hitachi os=-hms ;; sh5el) basic_machine=sh5le-unknown ;; sh64) basic_machine=sh64-unknown ;; sparclite-wrs | simso-wrs) basic_machine=sparclite-wrs os=-vxworks ;; sps7) basic_machine=m68k-bull os=-sysv2 ;; spur) basic_machine=spur-unknown ;; st2000) basic_machine=m68k-tandem ;; stratus) basic_machine=i860-stratus os=-sysv4 ;; strongarm-* | thumb-*) basic_machine=arm-`echo $basic_machine | sed 's/^[^-]*-//'` ;; sun2) basic_machine=m68000-sun ;; sun2os3) basic_machine=m68000-sun os=-sunos3 ;; sun2os4) basic_machine=m68000-sun os=-sunos4 ;; sun3os3) basic_machine=m68k-sun os=-sunos3 ;; sun3os4) basic_machine=m68k-sun os=-sunos4 ;; 
sun4os3) basic_machine=sparc-sun os=-sunos3 ;; sun4os4) basic_machine=sparc-sun os=-sunos4 ;; sun4sol2) basic_machine=sparc-sun os=-solaris2 ;; sun3 | sun3-*) basic_machine=m68k-sun ;; sun4) basic_machine=sparc-sun ;; sun386 | sun386i | roadrunner) basic_machine=i386-sun ;; sv1) basic_machine=sv1-cray os=-unicos ;; symmetry) basic_machine=i386-sequent os=-dynix ;; t3e) basic_machine=alphaev5-cray os=-unicos ;; t90) basic_machine=t90-cray os=-unicos ;; # This must be matched before tile*. tilegx*) basic_machine=tilegx-unknown os=-linux-gnu ;; tile*) basic_machine=tile-unknown os=-linux-gnu ;; tx39) basic_machine=mipstx39-unknown ;; tx39el) basic_machine=mipstx39el-unknown ;; toad1) basic_machine=pdp10-xkl os=-tops20 ;; tower | tower-32) basic_machine=m68k-ncr ;; tpf) basic_machine=s390x-ibm os=-tpf ;; udi29k) basic_machine=a29k-amd os=-udi ;; ultra3) basic_machine=a29k-nyu os=-sym1 ;; v810 | necv810) basic_machine=v810-nec os=-none ;; vaxv) basic_machine=vax-dec os=-sysv ;; vms) basic_machine=vax-dec os=-vms ;; vpp*|vx|vx-*) basic_machine=f301-fujitsu ;; vxworks960) basic_machine=i960-wrs os=-vxworks ;; vxworks68) basic_machine=m68k-wrs os=-vxworks ;; vxworks29k) basic_machine=a29k-wrs os=-vxworks ;; w65*) basic_machine=w65-wdc os=-none ;; w89k-*) basic_machine=hppa1.1-winbond os=-proelf ;; xbox) basic_machine=i686-pc os=-mingw32 ;; xps | xps100) basic_machine=xps100-honeywell ;; xscale-* | xscalee[bl]-*) basic_machine=`echo $basic_machine | sed 's/^xscale/arm/'` ;; ymp) basic_machine=ymp-cray os=-unicos ;; z8k-*-coff) basic_machine=z8k-unknown os=-sim ;; z80-*-coff) basic_machine=z80-unknown os=-sim ;; none) basic_machine=none-none os=-none ;; # Here we handle the default manufacturer of certain CPU types. It is in # some cases the only manufacturer, in others, it is the most popular. 
w89k) basic_machine=hppa1.1-winbond ;; op50n) basic_machine=hppa1.1-oki ;; op60c) basic_machine=hppa1.1-oki ;; romp) basic_machine=romp-ibm ;; mmix) basic_machine=mmix-knuth ;; rs6000) basic_machine=rs6000-ibm ;; vax) basic_machine=vax-dec ;; pdp10) # there are many clones, so DEC is not a safe bet basic_machine=pdp10-unknown ;; pdp11) basic_machine=pdp11-dec ;; we32k) basic_machine=we32k-att ;; sh[1234] | sh[24]a | sh[24]aeb | sh[34]eb | sh[1234]le | sh[23]ele) basic_machine=sh-unknown ;; sparc | sparcv8 | sparcv9 | sparcv9b | sparcv9v) basic_machine=sparc-sun ;; cydra) basic_machine=cydra-cydrome ;; orion) basic_machine=orion-highlevel ;; orion105) basic_machine=clipper-highlevel ;; mac | mpw | mac-mpw) basic_machine=m68k-apple ;; pmac | pmac-mpw) basic_machine=powerpc-apple ;; *-unknown) # Make sure to match an already-canonicalized machine name. ;; *) echo Invalid configuration \`$1\': machine \`$basic_machine\' not recognized 1>&2 exit 1 ;; esac # Here we canonicalize certain aliases for manufacturers. case $basic_machine in *-digital*) basic_machine=`echo $basic_machine | sed 's/digital.*/dec/'` ;; *-commodore*) basic_machine=`echo $basic_machine | sed 's/commodore.*/cbm/'` ;; *) ;; esac # Decode manufacturer-specific aliases for certain operating systems. if [ x"$os" != x"" ] then case $os in # First match some system type aliases # that might get confused with valid system types. # -solaris* is a basic system type, with this one exception. -auroraux) os=-auroraux ;; -solaris1 | -solaris1.*) os=`echo $os | sed -e 's|solaris1|sunos4|'` ;; -solaris) os=-solaris2 ;; -svr4*) os=-sysv4 ;; -unixware*) os=-sysv4.2uw ;; -gnu/linux*) os=`echo $os | sed -e 's|gnu/linux|linux-gnu|'` ;; # First accept the basic system types. # The portable systems comes first. # Each alternative MUST END IN A *, to match a version number. # -sysv* is not here because it comes later, after sysvr4. 
-gnu* | -bsd* | -mach* | -minix* | -genix* | -ultrix* | -irix* \ | -*vms* | -sco* | -esix* | -isc* | -aix* | -cnk* | -sunos | -sunos[34]*\ | -hpux* | -unos* | -osf* | -luna* | -dgux* | -auroraux* | -solaris* \ | -sym* | -kopensolaris* \ | -amigaos* | -amigados* | -msdos* | -newsos* | -unicos* | -aof* \ | -aos* | -aros* \ | -nindy* | -vxsim* | -vxworks* | -ebmon* | -hms* | -mvs* \ | -clix* | -riscos* | -uniplus* | -iris* | -rtu* | -xenix* \ | -hiux* | -386bsd* | -knetbsd* | -mirbsd* | -netbsd* \ | -openbsd* | -solidbsd* \ | -ekkobsd* | -kfreebsd* | -freebsd* | -riscix* | -lynxos* \ | -bosx* | -nextstep* | -cxux* | -aout* | -elf* | -oabi* \ | -ptx* | -coff* | -ecoff* | -winnt* | -domain* | -vsta* \ | -udi* | -eabi* | -lites* | -ieee* | -go32* | -aux* \ | -chorusos* | -chorusrdb* | -cegcc* \ | -cygwin* | -pe* | -psos* | -moss* | -proelf* | -rtems* \ | -mingw32* | -linux-gnu* | -linux-android* \ | -linux-newlib* | -linux-uclibc* \ | -uxpv* | -beos* | -mpeix* | -udk* \ | -interix* | -uwin* | -mks* | -rhapsody* | -darwin* | -opened* \ | -openstep* | -oskit* | -conix* | -pw32* | -nonstopux* \ | -storm-chaos* | -tops10* | -tenex* | -tops20* | -its* \ | -os2* | -vos* | -palmos* | -uclinux* | -nucleus* \ | -morphos* | -superux* | -rtmk* | -rtmk-nova* | -windiss* \ | -powermax* | -dnix* | -nx6 | -nx7 | -sei* | -dragonfly* \ | -skyos* | -haiku* | -rdos* | -toppers* | -drops* | -es*) # Remember, each alternative MUST END IN *, to match a version number. 
;; -qnx*) case $basic_machine in x86-* | i*86-*) ;; *) os=-nto$os ;; esac ;; -nto-qnx*) ;; -nto*) os=`echo $os | sed -e 's|nto|nto-qnx|'` ;; -sim | -es1800* | -hms* | -xray | -os68k* | -none* | -v88r* \ | -windows* | -osx | -abug | -netware* | -os9* | -beos* | -haiku* \ | -macos* | -mpw* | -magic* | -mmixware* | -mon960* | -lnews*) ;; -mac*) os=`echo $os | sed -e 's|mac|macos|'` ;; -linux-dietlibc) os=-linux-dietlibc ;; -linux*) os=`echo $os | sed -e 's|linux|linux-gnu|'` ;; -sunos5*) os=`echo $os | sed -e 's|sunos5|solaris2|'` ;; -sunos6*) os=`echo $os | sed -e 's|sunos6|solaris3|'` ;; -opened*) os=-openedition ;; -os400*) os=-os400 ;; -wince*) os=-wince ;; -osfrose*) os=-osfrose ;; -osf*) os=-osf ;; -utek*) os=-bsd ;; -dynix*) os=-bsd ;; -acis*) os=-aos ;; -atheos*) os=-atheos ;; -syllable*) os=-syllable ;; -386bsd) os=-bsd ;; -ctix* | -uts*) os=-sysv ;; -nova*) os=-rtmk-nova ;; -ns2 ) os=-nextstep2 ;; -nsk*) os=-nsk ;; # Preserve the version number of sinix5. -sinix5.*) os=`echo $os | sed -e 's|sinix|sysv|'` ;; -sinix*) os=-sysv4 ;; -tpf*) os=-tpf ;; -triton*) os=-sysv3 ;; -oss*) os=-sysv3 ;; -svr4) os=-sysv4 ;; -svr3) os=-sysv3 ;; -sysvr4) os=-sysv4 ;; # This must come after -sysvr4. -sysv*) ;; -ose*) os=-ose ;; -es1800*) os=-ose ;; -xenix) os=-xenix ;; -*mint | -mint[0-9]* | -*MiNT | -MiNT[0-9]*) os=-mint ;; -aros*) os=-aros ;; -kaos*) os=-kaos ;; -zvmoe) os=-zvmoe ;; -dicos*) os=-dicos ;; -nacl*) ;; -none) ;; *) # Get rid of the `-' at the beginning of $os. os=`echo $os | sed 's/[^-]*-//'` echo Invalid configuration \`$1\': system \`$os\' not recognized 1>&2 exit 1 ;; esac else # Here we handle the default operating systems that come with various machines. # The value should be what the vendor currently ships out the door with their # machine or put another way, the most popular os provided with the machine. 
# Note that if you're going to try to match "-MANUFACTURER" here (say, # "-sun"), then you have to tell the case statement up towards the top # that MANUFACTURER isn't an operating system. Otherwise, code above # will signal an error saying that MANUFACTURER isn't an operating # system, and we'll never get to this point. case $basic_machine in score-*) os=-elf ;; spu-*) os=-elf ;; *-acorn) os=-riscix1.2 ;; arm*-rebel) os=-linux ;; arm*-semi) os=-aout ;; c4x-* | tic4x-*) os=-coff ;; tic54x-*) os=-coff ;; tic55x-*) os=-coff ;; tic6x-*) os=-coff ;; # This must come before the *-dec entry. pdp10-*) os=-tops20 ;; pdp11-*) os=-none ;; *-dec | vax-*) os=-ultrix4.2 ;; m68*-apollo) os=-domain ;; i386-sun) os=-sunos4.0.2 ;; m68000-sun) os=-sunos3 # This also exists in the configure program, but was not the # default. # os=-sunos4 ;; m68*-cisco) os=-aout ;; mep-*) os=-elf ;; mips*-cisco) os=-elf ;; mips*-*) os=-elf ;; or32-*) os=-coff ;; *-tti) # must be before sparc entry or we get the wrong os. os=-sysv3 ;; sparc-* | *-sun) os=-sunos4.1.1 ;; *-be) os=-beos ;; *-haiku) os=-haiku ;; *-ibm) os=-aix ;; *-knuth) os=-mmixware ;; *-wec) os=-proelf ;; *-winbond) os=-proelf ;; *-oki) os=-proelf ;; *-hp) os=-hpux ;; *-hitachi) os=-hiux ;; i860-* | *-att | *-ncr | *-altos | *-motorola | *-convergent) os=-sysv ;; *-cbm) os=-amigaos ;; *-dg) os=-dgux ;; *-dolphin) os=-sysv3 ;; m68k-ccur) os=-rtu ;; m88k-omron*) os=-luna ;; *-next ) os=-nextstep ;; *-sequent) os=-ptx ;; *-crds) os=-unos ;; *-ns) os=-genix ;; i370-*) os=-mvs ;; *-next) os=-nextstep3 ;; *-gould) os=-sysv ;; *-highlevel) os=-bsd ;; *-encore) os=-bsd ;; *-sgi) os=-irix ;; *-siemens) os=-sysv4 ;; *-masscomp) os=-rtu ;; f30[01]-fujitsu | f700-fujitsu) os=-uxpv ;; *-rom68k) os=-coff ;; *-*bug) os=-coff ;; *-apple) os=-macos ;; *-atari*) os=-mint ;; *) os=-none ;; esac fi # Here we handle the case where we know the os, and the CPU type, but not the # manufacturer. We pick the logical manufacturer. 
vendor=unknown case $basic_machine in *-unknown) case $os in -riscix*) vendor=acorn ;; -sunos*) vendor=sun ;; -cnk*|-aix*) vendor=ibm ;; -beos*) vendor=be ;; -hpux*) vendor=hp ;; -mpeix*) vendor=hp ;; -hiux*) vendor=hitachi ;; -unos*) vendor=crds ;; -dgux*) vendor=dg ;; -luna*) vendor=omron ;; -genix*) vendor=ns ;; -mvs* | -opened*) vendor=ibm ;; -os400*) vendor=ibm ;; -ptx*) vendor=sequent ;; -tpf*) vendor=ibm ;; -vxsim* | -vxworks* | -windiss*) vendor=wrs ;; -aux*) vendor=apple ;; -hms*) vendor=hitachi ;; -mpw* | -macos*) vendor=apple ;; -*mint | -mint[0-9]* | -*MiNT | -MiNT[0-9]*) vendor=atari ;; -vos*) vendor=stratus ;; esac basic_machine=`echo $basic_machine | sed "s/unknown/$vendor/"` ;; esac echo $basic_machine$os exit # Local variables: # eval: (add-hook 'write-file-hooks 'time-stamp) # time-stamp-start: "timestamp='" # time-stamp-format: "%:y-%02m-%02d" # time-stamp-end: "'" # End: sparsehash-2.0.2/depcomp0000755000175000017500000004426711721254575012200 00000000000000#! /bin/sh # depcomp - compile a program generating dependencies as side-effects scriptversion=2009-04-28.21; # UTC # Copyright (C) 1999, 2000, 2003, 2004, 2005, 2006, 2007, 2009 Free # Software Foundation, Inc. # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2, or (at your option) # any later version. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # You should have received a copy of the GNU General Public License # along with this program. If not, see . 
# As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that program. # Originally written by Alexandre Oliva . case $1 in '') echo "$0: No command. Try \`$0 --help' for more information." 1>&2 exit 1; ;; -h | --h*) cat <<\EOF Usage: depcomp [--help] [--version] PROGRAM [ARGS] Run PROGRAMS ARGS to compile a file, generating dependencies as side-effects. Environment variables: depmode Dependency tracking mode. source Source file read by `PROGRAMS ARGS'. object Object file output by `PROGRAMS ARGS'. DEPDIR directory where to store dependencies. depfile Dependency file to output. tmpdepfile Temporary file to use when outputing dependencies. libtool Whether libtool is used (yes/no). Report bugs to . EOF exit $? ;; -v | --v*) echo "depcomp $scriptversion" exit $? ;; esac if test -z "$depmode" || test -z "$source" || test -z "$object"; then echo "depcomp: Variables source, object and depmode must be set" 1>&2 exit 1 fi # Dependencies for sub/bar.o or sub/bar.obj go into sub/.deps/bar.Po. depfile=${depfile-`echo "$object" | sed 's|[^\\/]*$|'${DEPDIR-.deps}'/&|;s|\.\([^.]*\)$|.P\1|;s|Pobj$|Po|'`} tmpdepfile=${tmpdepfile-`echo "$depfile" | sed 's/\.\([^.]*\)$/.T\1/'`} rm -f "$tmpdepfile" # Some modes work just like other modes, but use different flags. We # parameterize here, but still list the modes in the big case below, # to make depend.m4 easier to write. Note that we *cannot* use a case # here, because this file can only contain one case statement. if test "$depmode" = hp; then # HP compiler uses -M and no extra arg. gccflag=-M depmode=gcc fi if test "$depmode" = dashXmstdout; then # This is just like dashmstdout with a different argument. 
dashmflag=-xM depmode=dashmstdout fi cygpath_u="cygpath -u -f -" if test "$depmode" = msvcmsys; then # This is just like msvisualcpp but w/o cygpath translation. # Just convert the backslash-escaped backslashes to single forward # slashes to satisfy depend.m4 cygpath_u="sed s,\\\\\\\\,/,g" depmode=msvisualcpp fi case "$depmode" in gcc3) ## gcc 3 implements dependency tracking that does exactly what ## we want. Yay! Note: for some reason libtool 1.4 doesn't like ## it if -MD -MP comes after the -MF stuff. Hmm. ## Unfortunately, FreeBSD c89 acceptance of flags depends upon ## the command line argument order; so add the flags where they ## appear in depend2.am. Note that the slowdown incurred here ## affects only configure: in makefiles, %FASTDEP% shortcuts this. for arg do case $arg in -c) set fnord "$@" -MT "$object" -MD -MP -MF "$tmpdepfile" "$arg" ;; *) set fnord "$@" "$arg" ;; esac shift # fnord shift # $arg done "$@" stat=$? if test $stat -eq 0; then : else rm -f "$tmpdepfile" exit $stat fi mv "$tmpdepfile" "$depfile" ;; gcc) ## There are various ways to get dependency output from gcc. Here's ## why we pick this rather obscure method: ## - Don't want to use -MD because we'd like the dependencies to end ## up in a subdir. Having to rename by hand is ugly. ## (We might end up doing this anyway to support other compilers.) ## - The DEPENDENCIES_OUTPUT environment variable makes gcc act like ## -MM, not -M (despite what the docs say). ## - Using -M directly means running the compiler twice (even worse ## than renaming). if test -z "$gccflag"; then gccflag=-MD, fi "$@" -Wp,"$gccflag$tmpdepfile" stat=$? if test $stat -eq 0; then : else rm -f "$tmpdepfile" exit $stat fi rm -f "$depfile" echo "$object : \\" > "$depfile" alpha=ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz ## The second -e expression handles DOS-style file names with drive letters. 
sed -e 's/^[^:]*: / /' \ -e 's/^['$alpha']:\/[^:]*: / /' < "$tmpdepfile" >> "$depfile" ## This next piece of magic avoids the `deleted header file' problem. ## The problem is that when a header file which appears in a .P file ## is deleted, the dependency causes make to die (because there is ## typically no way to rebuild the header). We avoid this by adding ## dummy dependencies for each header file. Too bad gcc doesn't do ## this for us directly. tr ' ' ' ' < "$tmpdepfile" | ## Some versions of gcc put a space before the `:'. On the theory ## that the space means something, we add a space to the output as ## well. ## Some versions of the HPUX 10.20 sed can't process this invocation ## correctly. Breaking it into two sed invocations is a workaround. sed -e 's/^\\$//' -e '/^$/d' -e '/:$/d' | sed -e 's/$/ :/' >> "$depfile" rm -f "$tmpdepfile" ;; hp) # This case exists only to let depend.m4 do its work. It works by # looking at the text of this script. This case will never be run, # since it is checked for above. exit 1 ;; sgi) if test "$libtool" = yes; then "$@" "-Wp,-MDupdate,$tmpdepfile" else "$@" -MDupdate "$tmpdepfile" fi stat=$? if test $stat -eq 0; then : else rm -f "$tmpdepfile" exit $stat fi rm -f "$depfile" if test -f "$tmpdepfile"; then # yes, the sourcefile depend on other files echo "$object : \\" > "$depfile" # Clip off the initial element (the dependent). Don't try to be # clever and replace this with sed code, as IRIX sed won't handle # lines with more than a fixed number of characters (4096 in # IRIX 6.2 sed, 8192 in IRIX 6.5). We also remove comment lines; # the IRIX cc adds comments like `#:fec' to the end of the # dependency line. tr ' ' ' ' < "$tmpdepfile" \ | sed -e 's/^.*\.o://' -e 's/#.*$//' -e '/^$/ d' | \ tr ' ' ' ' >> "$depfile" echo >> "$depfile" # The second pass generates a dummy entry for each header file. 
tr ' ' ' ' < "$tmpdepfile" \ | sed -e 's/^.*\.o://' -e 's/#.*$//' -e '/^$/ d' -e 's/$/:/' \ >> "$depfile" else # The sourcefile does not contain any dependencies, so just # store a dummy comment line, to avoid errors with the Makefile # "include basename.Plo" scheme. echo "#dummy" > "$depfile" fi rm -f "$tmpdepfile" ;; aix) # The C for AIX Compiler uses -M and outputs the dependencies # in a .u file. In older versions, this file always lives in the # current directory. Also, the AIX compiler puts `$object:' at the # start of each line; $object doesn't have directory information. # Version 6 uses the directory in both cases. dir=`echo "$object" | sed -e 's|/[^/]*$|/|'` test "x$dir" = "x$object" && dir= base=`echo "$object" | sed -e 's|^.*/||' -e 's/\.o$//' -e 's/\.lo$//'` if test "$libtool" = yes; then tmpdepfile1=$dir$base.u tmpdepfile2=$base.u tmpdepfile3=$dir.libs/$base.u "$@" -Wc,-M else tmpdepfile1=$dir$base.u tmpdepfile2=$dir$base.u tmpdepfile3=$dir$base.u "$@" -M fi stat=$? if test $stat -eq 0; then : else rm -f "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3" exit $stat fi for tmpdepfile in "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3" do test -f "$tmpdepfile" && break done if test -f "$tmpdepfile"; then # Each line is of the form `foo.o: dependent.h'. # Do two passes, one to just change these to # `$object: dependent.h' and one to simply `dependent.h:'. sed -e "s,^.*\.[a-z]*:,$object:," < "$tmpdepfile" > "$depfile" # That's a tab and a space in the []. sed -e 's,^.*\.[a-z]*:[ ]*,,' -e 's,$,:,' < "$tmpdepfile" >> "$depfile" else # The sourcefile does not contain any dependencies, so just # store a dummy comment line, to avoid errors with the Makefile # "include basename.Plo" scheme. echo "#dummy" > "$depfile" fi rm -f "$tmpdepfile" ;; icc) # Intel's C compiler understands `-MD -MF file'. However on # icc -MD -MF foo.d -c -o sub/foo.o sub/foo.c # ICC 7.0 will fill foo.d with something like # foo.o: sub/foo.c # foo.o: sub/foo.h # which is wrong. 
# We want: # sub/foo.o: sub/foo.c # sub/foo.o: sub/foo.h # sub/foo.c: # sub/foo.h: # ICC 7.1 will output # foo.o: sub/foo.c sub/foo.h # and will wrap long lines using \ : # foo.o: sub/foo.c ... \ # sub/foo.h ... \ # ... "$@" -MD -MF "$tmpdepfile" stat=$? if test $stat -eq 0; then : else rm -f "$tmpdepfile" exit $stat fi rm -f "$depfile" # Each line is of the form `foo.o: dependent.h', # or `foo.o: dep1.h dep2.h \', or ` dep3.h dep4.h \'. # Do two passes, one to just change these to # `$object: dependent.h' and one to simply `dependent.h:'. sed "s,^[^:]*:,$object :," < "$tmpdepfile" > "$depfile" # Some versions of the HPUX 10.20 sed can't process this invocation # correctly. Breaking it into two sed invocations is a workaround. sed 's,^[^:]*: \(.*\)$,\1,;s/^\\$//;/^$/d;/:$/d' < "$tmpdepfile" | sed -e 's/$/ :/' >> "$depfile" rm -f "$tmpdepfile" ;; hp2) # The "hp" stanza above does not work with aCC (C++) and HP's ia64 # compilers, which have integrated preprocessors. The correct option # to use with these is +Maked; it writes dependencies to a file named # 'foo.d', which lands next to the object file, wherever that # happens to be. # Much of this is similar to the tru64 case; see comments there. dir=`echo "$object" | sed -e 's|/[^/]*$|/|'` test "x$dir" = "x$object" && dir= base=`echo "$object" | sed -e 's|^.*/||' -e 's/\.o$//' -e 's/\.lo$//'` if test "$libtool" = yes; then tmpdepfile1=$dir$base.d tmpdepfile2=$dir.libs/$base.d "$@" -Wc,+Maked else tmpdepfile1=$dir$base.d tmpdepfile2=$dir$base.d "$@" +Maked fi stat=$? if test $stat -eq 0; then : else rm -f "$tmpdepfile1" "$tmpdepfile2" exit $stat fi for tmpdepfile in "$tmpdepfile1" "$tmpdepfile2" do test -f "$tmpdepfile" && break done if test -f "$tmpdepfile"; then sed -e "s,^.*\.[a-z]*:,$object:," "$tmpdepfile" > "$depfile" # Add `dependent.h:' lines. 
sed -ne '2,${ s/^ *// s/ \\*$// s/$/:/ p }' "$tmpdepfile" >> "$depfile" else echo "#dummy" > "$depfile" fi rm -f "$tmpdepfile" "$tmpdepfile2" ;; tru64) # The Tru64 compiler uses -MD to generate dependencies as a side # effect. `cc -MD -o foo.o ...' puts the dependencies into `foo.o.d'. # At least on Alpha/Redhat 6.1, Compaq CCC V6.2-504 seems to put # dependencies in `foo.d' instead, so we check for that too. # Subdirectories are respected. dir=`echo "$object" | sed -e 's|/[^/]*$|/|'` test "x$dir" = "x$object" && dir= base=`echo "$object" | sed -e 's|^.*/||' -e 's/\.o$//' -e 's/\.lo$//'` if test "$libtool" = yes; then # With Tru64 cc, shared objects can also be used to make a # static library. This mechanism is used in libtool 1.4 series to # handle both shared and static libraries in a single compilation. # With libtool 1.4, dependencies were output in $dir.libs/$base.lo.d. # # With libtool 1.5 this exception was removed, and libtool now # generates 2 separate objects for the 2 libraries. These two # compilations output dependencies in $dir.libs/$base.o.d and # in $dir$base.o.d. We have to check for both files, because # one of the two compilations can be disabled. We should prefer # $dir$base.o.d over $dir.libs/$base.o.d because the latter is # automatically cleaned when .libs/ is deleted, while ignoring # the former would cause a distcleancheck panic. tmpdepfile1=$dir.libs/$base.lo.d # libtool 1.4 tmpdepfile2=$dir$base.o.d # libtool 1.5 tmpdepfile3=$dir.libs/$base.o.d # libtool 1.5 tmpdepfile4=$dir.libs/$base.d # Compaq CCC V6.2-504 "$@" -Wc,-MD else tmpdepfile1=$dir$base.o.d tmpdepfile2=$dir$base.d tmpdepfile3=$dir$base.d tmpdepfile4=$dir$base.d "$@" -MD fi stat=$? 
   if test $stat -eq 0; then :
   else
      rm -f "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3" "$tmpdepfile4"
      exit $stat
   fi

   for tmpdepfile in "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3" "$tmpdepfile4"
   do
     test -f "$tmpdepfile" && break
   done
   if test -f "$tmpdepfile"; then
      sed -e "s,^.*\.[a-z]*:,$object:," < "$tmpdepfile" > "$depfile"
      # That's a tab and a space in the [].
      sed -e 's,^.*\.[a-z]*:[	 ]*,,' -e 's,$,:,' < "$tmpdepfile" >> "$depfile"
   else
      echo "#dummy" > "$depfile"
   fi
   rm -f "$tmpdepfile"
   ;;

#nosideeffect)
  # This comment above is used by automake to tell side-effect
  # dependency tracking mechanisms from slower ones.

dashmstdout)
  # Important note: in order to support this mode, a compiler *must*
  # always write the preprocessed file to stdout, regardless of -o.
  "$@" || exit $?

  # Remove the call to Libtool.
  if test "$libtool" = yes; then
    while test "X$1" != 'X--mode=compile'; do
      shift
    done
    shift
  fi

  # Remove `-o $object'.
  IFS=" "
  for arg
  do
    case $arg in
    -o)
      shift
      ;;
    $object)
      shift
      ;;
    *)
      set fnord "$@" "$arg"
      shift # fnord
      shift # $arg
      ;;
    esac
  done

  test -z "$dashmflag" && dashmflag=-M
  # Require at least two characters before searching for `:'
  # in the target name.  This is to cope with DOS-style filenames:
  # a dependency such as `c:/foo/bar' could be seen as target `c' otherwise.
  "$@" $dashmflag |
    sed 's:^[	 ]*[^:	 ][^:][^:]*\:[	 ]*:'"$object"'\: :' > "$tmpdepfile"
  rm -f "$depfile"
  cat < "$tmpdepfile" > "$depfile"
  tr ' ' '
' < "$tmpdepfile" | \
## Some versions of the HPUX 10.20 sed can't process this invocation
## correctly.  Breaking it into two sed invocations is a workaround.
    sed -e 's/^\\$//' -e '/^$/d' -e '/:$/d' | sed -e 's/$/ :/' >> "$depfile"
  rm -f "$tmpdepfile"
  ;;

dashXmstdout)
  # This case only exists to satisfy depend.m4.  It is never actually
  # run, as this mode is specially recognized in the preamble.
  exit 1
  ;;

makedepend)
  "$@" || exit $?
  # Remove any Libtool call
  if test "$libtool" = yes; then
    while test "X$1" != 'X--mode=compile'; do
      shift
    done
    shift
  fi
  # X makedepend
  shift
  cleared=no eat=no
  for arg
  do
    case $cleared in
    no)
      set ""; shift
      cleared=yes ;;
    esac
    if test $eat = yes; then
      eat=no
      continue
    fi
    case "$arg" in
    -D*|-I*)
      set fnord "$@" "$arg"; shift ;;
    # Strip any option that makedepend may not understand.  Remove
    # the object too, otherwise makedepend will parse it as a source file.
    -arch)
      eat=yes ;;
    -*|$object)
      ;;
    *)
      set fnord "$@" "$arg"; shift ;;
    esac
  done
  obj_suffix=`echo "$object" | sed 's/^.*\././'`
  touch "$tmpdepfile"
  ${MAKEDEPEND-makedepend} -o"$obj_suffix" -f"$tmpdepfile" "$@"
  rm -f "$depfile"
  cat < "$tmpdepfile" > "$depfile"
  sed '1,2d' "$tmpdepfile" | tr ' ' '
' | \
## Some versions of the HPUX 10.20 sed can't process this invocation
## correctly.  Breaking it into two sed invocations is a workaround.
    sed -e 's/^\\$//' -e '/^$/d' -e '/:$/d' | sed -e 's/$/ :/' >> "$depfile"
  rm -f "$tmpdepfile" "$tmpdepfile".bak
  ;;

cpp)
  # Important note: in order to support this mode, a compiler *must*
  # always write the preprocessed file to stdout.
  "$@" || exit $?

  # Remove the call to Libtool.
  if test "$libtool" = yes; then
    while test "X$1" != 'X--mode=compile'; do
      shift
    done
    shift
  fi

  # Remove `-o $object'.
  IFS=" "
  for arg
  do
    case $arg in
    -o)
      shift
      ;;
    $object)
      shift
      ;;
    *)
      set fnord "$@" "$arg"
      shift # fnord
      shift # $arg
      ;;
    esac
  done

  "$@" -E |
    sed -n -e '/^# [0-9][0-9]* "\([^"]*\)".*/ s:: \1 \\:p' \
       -e '/^#line [0-9][0-9]* "\([^"]*\)".*/ s:: \1 \\:p' |
    sed '$ s: \\$::' > "$tmpdepfile"
  rm -f "$depfile"
  echo "$object : \\" > "$depfile"
  cat < "$tmpdepfile" >> "$depfile"
  sed < "$tmpdepfile" '/^$/d;s/^ //;s/ \\$//;s/$/ :/' >> "$depfile"
  rm -f "$tmpdepfile"
  ;;

msvisualcpp)
  # Important note: in order to support this mode, a compiler *must*
  # always write the preprocessed file to stdout.
  "$@" || exit $?

  # Remove the call to Libtool.
  if test "$libtool" = yes; then
    while test "X$1" != 'X--mode=compile'; do
      shift
    done
    shift
  fi

  IFS=" "
  for arg
  do
    case "$arg" in
    -o)
      shift
      ;;
    $object)
      shift
      ;;
    "-Gm"|"/Gm"|"-Gi"|"/Gi"|"-ZI"|"/ZI")
	set fnord "$@"
	shift
	shift
	;;
    *)
	set fnord "$@" "$arg"
	shift
	shift
	;;
    esac
  done
  "$@" -E 2>/dev/null |
  sed -n '/^#line [0-9][0-9]* "\([^"]*\)"/ s::\1:p' | $cygpath_u | sort -u > "$tmpdepfile"
  rm -f "$depfile"
  echo "$object : \\" > "$depfile"
  sed < "$tmpdepfile" -n -e 's% %\\ %g' -e '/^\(.*\)$/ s:: \1 \\:p' >> "$depfile"
  echo "	" >> "$depfile"
  sed < "$tmpdepfile" -n -e 's% %\\ %g' -e '/^\(.*\)$/ s::\1\::p' >> "$depfile"
  rm -f "$tmpdepfile"
  ;;

msvcmsys)
  # This case exists only to let depend.m4 do its work.  It works by
  # looking at the text of this script.  This case will never be run,
  # since it is checked for above.
  exit 1
  ;;

none)
  exec "$@"
  ;;

*)
  echo "Unknown depmode $depmode" 1>&2
  exit 1
  ;;
esac

exit 0

# Local Variables:
# mode: shell-script
# sh-indentation: 2
# eval: (add-hook 'write-file-hooks 'time-stamp)
# time-stamp-start: "scriptversion="
# time-stamp-format: "%:y-%02m-%02d.%02H"
# time-stamp-time-zone: "UTC"
# time-stamp-end: "; # UTC"
# End:
sparsehash-2.0.2/aclocal.m40000664000175000017500000010456711721254573012453 00000000000000
# generated automatically by aclocal 1.11.1 -*- Autoconf -*-

# Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004,
# 2005, 2006, 2007, 2008, 2009  Free Software Foundation, Inc.
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.
m4_ifndef([AC_AUTOCONF_VERSION],
  [m4_copy([m4_PACKAGE_VERSION], [AC_AUTOCONF_VERSION])])dnl
m4_if(m4_defn([AC_AUTOCONF_VERSION]), [2.68],,
[m4_warning([this file was generated for autoconf 2.68.
You have another version of autoconf.  It may work, but is not guaranteed to.
If you have problems, you may need to regenerate the build system entirely.
To do so, use the procedure documented by the package, typically `autoreconf'.])])

# Copyright (C) 2002, 2003, 2005, 2006, 2007, 2008  Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.

# AM_AUTOMAKE_VERSION(VERSION)
# ----------------------------
# Automake X.Y traces this macro to ensure aclocal.m4 has been
# generated from the m4 files accompanying Automake X.Y.
# (This private macro should not be called outside this file.)
AC_DEFUN([AM_AUTOMAKE_VERSION],
[am__api_version='1.11'
dnl Some users find AM_AUTOMAKE_VERSION and mistake it for a way to
dnl require some minimum version.  Point them to the right macro.
m4_if([$1], [1.11.1], [],
      [AC_FATAL([Do not call $0, use AM_INIT_AUTOMAKE([$1]).])])dnl
])

# _AM_AUTOCONF_VERSION(VERSION)
# -----------------------------
# aclocal traces this macro to find the Autoconf version.
# This is a private macro too.  Using m4_define simplifies
# the logic in aclocal, which can simply ignore this definition.
m4_define([_AM_AUTOCONF_VERSION], [])

# AM_SET_CURRENT_AUTOMAKE_VERSION
# -------------------------------
# Call AM_AUTOMAKE_VERSION and AM_AUTOMAKE_VERSION so they can be traced.
# This function is AC_REQUIREd by AM_INIT_AUTOMAKE.
AC_DEFUN([AM_SET_CURRENT_AUTOMAKE_VERSION],
[AM_AUTOMAKE_VERSION([1.11.1])dnl
m4_ifndef([AC_AUTOCONF_VERSION],
  [m4_copy([m4_PACKAGE_VERSION], [AC_AUTOCONF_VERSION])])dnl
_AM_AUTOCONF_VERSION(m4_defn([AC_AUTOCONF_VERSION]))])

# AM_AUX_DIR_EXPAND                                         -*- Autoconf -*-

# Copyright (C) 2001, 2003, 2005  Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.

# For projects using AC_CONFIG_AUX_DIR([foo]), Autoconf sets
# $ac_aux_dir to `$srcdir/foo'.  In other projects, it is set to
# `$srcdir', `$srcdir/..', or `$srcdir/../..'.
#
# Of course, Automake must honor this variable whenever it calls a
# tool from the auxiliary directory.  The problem is that $srcdir (and
# therefore $ac_aux_dir as well) can be either absolute or relative,
# depending on how configure is run.  This is pretty annoying, since
# it makes $ac_aux_dir quite unusable in subdirectories: in the top
# source directory, any form will work fine, but in subdirectories a
# relative path needs to be adjusted first.
#
# $ac_aux_dir/missing
#    fails when called from a subdirectory if $ac_aux_dir is relative
# $top_srcdir/$ac_aux_dir/missing
#    fails if $ac_aux_dir is absolute,
#    fails when called from a subdirectory in a VPATH build with
#          a relative $ac_aux_dir
#
# The reason of the latter failure is that $top_srcdir and $ac_aux_dir
# are both prefixed by $srcdir.  In an in-source build this is usually
# harmless because $srcdir is `.', but things will broke when you
# start a VPATH build or use an absolute $srcdir.
#
# So we could use something similar to $top_srcdir/$ac_aux_dir/missing,
# iff we strip the leading $srcdir from $ac_aux_dir.
# That would be:
#   am_aux_dir='\$(top_srcdir)/'`expr "$ac_aux_dir" : "$srcdir//*\(.*\)"`
# and then we would define $MISSING as
#   MISSING="\${SHELL} $am_aux_dir/missing"
# This will work as long as MISSING is not called from configure, because
# unfortunately $(top_srcdir) has no meaning in configure.
# However there are other variables, like CC, which are often used in
# configure, and could therefore not use this "fixed" $ac_aux_dir.
#
# Another solution, used here, is to always expand $ac_aux_dir to an
# absolute PATH.  The drawback is that using absolute paths prevent a
# configured tree to be moved without reconfiguration.

AC_DEFUN([AM_AUX_DIR_EXPAND],
[dnl Rely on autoconf to set up CDPATH properly.
AC_PREREQ([2.50])dnl
# expand $ac_aux_dir to an absolute path
am_aux_dir=`cd $ac_aux_dir && pwd`
])

# AM_CONDITIONAL                                            -*- Autoconf -*-

# Copyright (C) 1997, 2000, 2001, 2003, 2004, 2005, 2006, 2008
# Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.

# serial 9

# AM_CONDITIONAL(NAME, SHELL-CONDITION)
# -------------------------------------
# Define a conditional.
AC_DEFUN([AM_CONDITIONAL],
[AC_PREREQ(2.52)dnl
 ifelse([$1], [TRUE],  [AC_FATAL([$0: invalid condition: $1])],
	[$1], [FALSE], [AC_FATAL([$0: invalid condition: $1])])dnl
AC_SUBST([$1_TRUE])dnl
AC_SUBST([$1_FALSE])dnl
_AM_SUBST_NOTMAKE([$1_TRUE])dnl
_AM_SUBST_NOTMAKE([$1_FALSE])dnl
m4_define([_AM_COND_VALUE_$1], [$2])dnl
if $2; then
  $1_TRUE=
  $1_FALSE='#'
else
  $1_TRUE='#'
  $1_FALSE=
fi
AC_CONFIG_COMMANDS_PRE(
[if test -z "${$1_TRUE}" && test -z "${$1_FALSE}"; then
  AC_MSG_ERROR([[conditional "$1" was never defined.
Usually this means the macro was only invoked conditionally.]])
fi])])

# Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2009
# Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.

# serial 10

# There are a few dirty hacks below to avoid letting `AC_PROG_CC' be
# written in clear, in which case automake, when reading aclocal.m4,
# will think it sees a *use*, and therefore will trigger all it's
# C support machinery.  Also note that it means that autoscan, seeing
# CC etc. in the Makefile, will ask for an AC_PROG_CC use...


# _AM_DEPENDENCIES(NAME)
# ----------------------
# See how the compiler implements dependency checking.
# NAME is "CC", "CXX", "GCJ", or "OBJC".
# We try a few techniques and use that to set a single cache variable.
#
# We don't AC_REQUIRE the corresponding AC_PROG_CC since the latter was
# modified to invoke _AM_DEPENDENCIES(CC); we would have a circular
# dependency, and given that the user is not expected to run this macro,
# just rely on AC_PROG_CC.
AC_DEFUN([_AM_DEPENDENCIES],
[AC_REQUIRE([AM_SET_DEPDIR])dnl
AC_REQUIRE([AM_OUTPUT_DEPENDENCY_COMMANDS])dnl
AC_REQUIRE([AM_MAKE_INCLUDE])dnl
AC_REQUIRE([AM_DEP_TRACK])dnl

ifelse([$1], CC,   [depcc="$CC"   am_compiler_list=],
       [$1], CXX,  [depcc="$CXX"  am_compiler_list=],
       [$1], OBJC, [depcc="$OBJC" am_compiler_list='gcc3 gcc'],
       [$1], UPC,  [depcc="$UPC"  am_compiler_list=],
       [$1], GCJ,  [depcc="$GCJ"  am_compiler_list='gcc3 gcc'],
                   [depcc="$$1"   am_compiler_list=])

AC_CACHE_CHECK([dependency style of $depcc],
               [am_cv_$1_dependencies_compiler_type],
[if test -z "$AMDEP_TRUE" && test -f "$am_depcomp"; then
  # We make a subdir and do the tests there.  Otherwise we can end up
  # making bogus files that we don't know about and never remove.  For
  # instance it was reported that on HP-UX the gcc test will end up
  # making a dummy file named `D' -- because `-MD' means `put the output
  # in D'.
  mkdir conftest.dir
  # Copy depcomp to subdir because otherwise we won't find it if we're
  # using a relative directory.
  cp "$am_depcomp" conftest.dir
  cd conftest.dir
  # We will build objects and dependencies in a subdirectory because
  # it helps to detect inapplicable dependency modes.  For instance
  # both Tru64's cc and ICC support -MD to output dependencies as a
  # side effect of compilation, but ICC will put the dependencies in
  # the current directory while Tru64 will put them in the object
  # directory.
  mkdir sub

  am_cv_$1_dependencies_compiler_type=none
  if test "$am_compiler_list" = ""; then
     am_compiler_list=`sed -n ['s/^#*\([a-zA-Z0-9]*\))$/\1/p'] < ./depcomp`
  fi
  am__universal=false
  m4_case([$1], [CC],
    [case " $depcc " in #(
     *\ -arch\ *\ -arch\ *) am__universal=true ;;
     esac],
    [CXX],
    [case " $depcc " in #(
     *\ -arch\ *\ -arch\ *) am__universal=true ;;
     esac])

  for depmode in $am_compiler_list; do
    # Setup a source with many dependencies, because some compilers
    # like to wrap large dependency lists on column 80 (with \), and
    # we should not choose a depcomp mode which is confused by this.
    #
    # We need to recreate these files for each test, as the compiler may
    # overwrite some of them when testing with obscure command lines.
    # This happens at least with the AIX C compiler.
    : > sub/conftest.c
    for i in 1 2 3 4 5 6; do
      echo '#include "conftst'$i'.h"' >> sub/conftest.c
      # Using `: > sub/conftst$i.h' creates only sub/conftst1.h with
      # Solaris 8's {/usr,}/bin/sh.
      touch sub/conftst$i.h
    done
    echo "${am__include} ${am__quote}sub/conftest.Po${am__quote}" > confmf

    # We check with `-c' and `-o' for the sake of the "dashmstdout"
    # mode.  It turns out that the SunPro C++ compiler does not properly
    # handle `-M -o', and we need to detect this.  Also, some Intel
    # versions had trouble with output in subdirs
    am__obj=sub/conftest.${OBJEXT-o}
    am__minus_obj="-o $am__obj"
    case $depmode in
    gcc)
      # This depmode causes a compiler race in universal mode.
      test "$am__universal" = false || continue
      ;;
    nosideeffect)
      # after this tag, mechanisms are not by side-effect, so they'll
      # only be used when explicitly requested
      if test "x$enable_dependency_tracking" = xyes; then
	continue
      else
	break
      fi
      ;;
    msvisualcpp | msvcmsys)
      # This compiler won't grok `-c -o', but also, the minuso test has
      # not run yet.  These depmodes are late enough in the game, and
      # so weak that their functioning should not be impacted.
      am__obj=conftest.${OBJEXT-o}
      am__minus_obj=
      ;;
    none) break ;;
    esac
    if depmode=$depmode \
       source=sub/conftest.c object=$am__obj \
       depfile=sub/conftest.Po tmpdepfile=sub/conftest.TPo \
       $SHELL ./depcomp $depcc -c $am__minus_obj sub/conftest.c \
         >/dev/null 2>conftest.err &&
       grep sub/conftst1.h sub/conftest.Po > /dev/null 2>&1 &&
       grep sub/conftst6.h sub/conftest.Po > /dev/null 2>&1 &&
       grep $am__obj sub/conftest.Po > /dev/null 2>&1 &&
       ${MAKE-make} -s -f confmf > /dev/null 2>&1; then
      # icc doesn't choke on unknown options, it will just issue warnings
      # or remarks (even with -Werror).  So we grep stderr for any message
      # that says an option was ignored or not supported.
      # When given -MP, icc 7.0 and 7.1 complain thusly:
      #   icc: Command line warning: ignoring option '-M'; no argument required
      # The diagnosis changed in icc 8.0:
      #   icc: Command line remark: option '-MP' not supported
      if (grep 'ignoring option' conftest.err ||
          grep 'not supported' conftest.err) >/dev/null 2>&1; then :; else
        am_cv_$1_dependencies_compiler_type=$depmode
        break
      fi
    fi
  done

  cd ..
  rm -rf conftest.dir
else
  am_cv_$1_dependencies_compiler_type=none
fi
])
AC_SUBST([$1DEPMODE], [depmode=$am_cv_$1_dependencies_compiler_type])
AM_CONDITIONAL([am__fastdep$1], [
  test "x$enable_dependency_tracking" != xno \
  && test "$am_cv_$1_dependencies_compiler_type" = gcc3])
])


# AM_SET_DEPDIR
# -------------
# Choose a directory name for dependency files.
# This macro is AC_REQUIREd in _AM_DEPENDENCIES
AC_DEFUN([AM_SET_DEPDIR],
[AC_REQUIRE([AM_SET_LEADING_DOT])dnl
AC_SUBST([DEPDIR], ["${am__leading_dot}deps"])dnl
])


# AM_DEP_TRACK
# ------------
AC_DEFUN([AM_DEP_TRACK],
[AC_ARG_ENABLE(dependency-tracking,
[  --disable-dependency-tracking  speeds up one-time build
  --enable-dependency-tracking   do not reject slow dependency extractors])
if test "x$enable_dependency_tracking" != xno; then
  am_depcomp="$ac_aux_dir/depcomp"
  AMDEPBACKSLASH='\'
fi
AM_CONDITIONAL([AMDEP], [test "x$enable_dependency_tracking" != xno])
AC_SUBST([AMDEPBACKSLASH])dnl
_AM_SUBST_NOTMAKE([AMDEPBACKSLASH])dnl
])

# Generate code to set up dependency tracking.              -*- Autoconf -*-

# Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2008
# Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.

#serial 5

# _AM_OUTPUT_DEPENDENCY_COMMANDS
# ------------------------------
AC_DEFUN([_AM_OUTPUT_DEPENDENCY_COMMANDS],
[{
  # Autoconf 2.62 quotes --file arguments for eval, but not when files
  # are listed without --file.  Let's play safe and only enable the eval
  # if we detect the quoting.
  case $CONFIG_FILES in
  *\'*) eval set x "$CONFIG_FILES" ;;
  *)   set x $CONFIG_FILES ;;
  esac
  shift
  for mf
  do
    # Strip MF so we end up with the name of the file.
    mf=`echo "$mf" | sed -e 's/:.*$//'`
    # Check whether this is an Automake generated Makefile or not.
    # We used to match only the files named `Makefile.in', but
    # some people rename them; so instead we look at the file content.
    # Grep'ing the first line is not enough: some people post-process
    # each Makefile.in and add a new line on top of each file to say so.
    # Grep'ing the whole file is not good either: AIX grep has a line
    # limit of 2048, but all sed's we know have understand at least 4000.
    if sed -n 's,^#.*generated by automake.*,X,p' "$mf" | grep X >/dev/null 2>&1; then
      dirpart=`AS_DIRNAME("$mf")`
    else
      continue
    fi
    # Extract the definition of DEPDIR, am__include, and am__quote
    # from the Makefile without running `make'.
    DEPDIR=`sed -n 's/^DEPDIR = //p' < "$mf"`
    test -z "$DEPDIR" && continue
    am__include=`sed -n 's/^am__include = //p' < "$mf"`
    test -z "am__include" && continue
    am__quote=`sed -n 's/^am__quote = //p' < "$mf"`
    # When using ansi2knr, U may be empty or an underscore; expand it
    U=`sed -n 's/^U = //p' < "$mf"`
    # Find all dependency output files, they are included files with
    # $(DEPDIR) in their names.  We invoke sed twice because it is the
    # simplest approach to changing $(DEPDIR) to its actual value in the
    # expansion.
    for file in `sed -n "
      s/^$am__include $am__quote\(.*(DEPDIR).*\)$am__quote"'$/\1/p' <"$mf" | \
	 sed -e 's/\$(DEPDIR)/'"$DEPDIR"'/g' -e 's/\$U/'"$U"'/g'`; do
      # Make sure the directory exists.
      test -f "$dirpart/$file" && continue
      fdir=`AS_DIRNAME(["$file"])`
      AS_MKDIR_P([$dirpart/$fdir])
      # echo "creating $dirpart/$file"
      echo '# dummy' > "$dirpart/$file"
    done
  done
}
])# _AM_OUTPUT_DEPENDENCY_COMMANDS


# AM_OUTPUT_DEPENDENCY_COMMANDS
# -----------------------------
# This macro should only be invoked once -- use via AC_REQUIRE.
#
# This code is only required when automatic dependency tracking
# is enabled.  FIXME.  This creates each `.P' file that we will
# need in order to bootstrap the dependency handling code.
AC_DEFUN([AM_OUTPUT_DEPENDENCY_COMMANDS],
[AC_CONFIG_COMMANDS([depfiles],
     [test x"$AMDEP_TRUE" != x"" || _AM_OUTPUT_DEPENDENCY_COMMANDS],
     [AMDEP_TRUE="$AMDEP_TRUE" ac_aux_dir="$ac_aux_dir"])
])

# Copyright (C) 1996, 1997, 2000, 2001, 2003, 2005
# Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.

# serial 8

# AM_CONFIG_HEADER is obsolete.
# It has been replaced by AC_CONFIG_HEADERS.
AU_DEFUN([AM_CONFIG_HEADER], [AC_CONFIG_HEADERS($@)])

# Do all the work for Automake.                             -*- Autoconf -*-

# Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004,
# 2005, 2006, 2008, 2009 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.

# serial 16

# This macro actually does too much.  Some checks are only needed if
# your package does certain things.  But this isn't really a big deal.

# AM_INIT_AUTOMAKE(PACKAGE, VERSION, [NO-DEFINE])
# AM_INIT_AUTOMAKE([OPTIONS])
# -----------------------------------------------
# The call with PACKAGE and VERSION arguments is the old style
# call (pre autoconf-2.50), which is being phased out.  PACKAGE
# and VERSION should now be passed to AC_INIT and removed from
# the call to AM_INIT_AUTOMAKE.
# We support both call styles for the transition.  After
# the next Automake release, Autoconf can make the AC_INIT
# arguments mandatory, and then we can depend on a new Autoconf
# release and drop the old call support.
AC_DEFUN([AM_INIT_AUTOMAKE],
[AC_PREREQ([2.62])dnl
dnl Autoconf wants to disallow AM_ names.  We explicitly allow
dnl the ones we care about.
m4_pattern_allow([^AM_[A-Z]+FLAGS$])dnl
AC_REQUIRE([AM_SET_CURRENT_AUTOMAKE_VERSION])dnl
AC_REQUIRE([AC_PROG_INSTALL])dnl
if test "`cd $srcdir && pwd`" != "`pwd`"; then
  # Use -I$(srcdir) only when $(srcdir) != ., so that make's output
  # is not polluted with repeated "-I."
  AC_SUBST([am__isrc], [' -I$(srcdir)'])_AM_SUBST_NOTMAKE([am__isrc])dnl
  # test to see if srcdir already configured
  if test -f $srcdir/config.status; then
    AC_MSG_ERROR([source directory already configured; run "make distclean" there first])
  fi
fi

# test whether we have cygpath
if test -z "$CYGPATH_W"; then
  if (cygpath --version) >/dev/null 2>/dev/null; then
    CYGPATH_W='cygpath -w'
  else
    CYGPATH_W=echo
  fi
fi
AC_SUBST([CYGPATH_W])

# Define the identity of the package.
dnl Distinguish between old-style and new-style calls.
m4_ifval([$2],
[m4_ifval([$3], [_AM_SET_OPTION([no-define])])dnl
 AC_SUBST([PACKAGE], [$1])dnl
 AC_SUBST([VERSION], [$2])],
[_AM_SET_OPTIONS([$1])dnl
dnl Diagnose old-style AC_INIT with new-style AM_AUTOMAKE_INIT.
m4_if(m4_ifdef([AC_PACKAGE_NAME], 1)m4_ifdef([AC_PACKAGE_VERSION], 1), 11,,
  [m4_fatal([AC_INIT should be called with package and version arguments])])dnl
 AC_SUBST([PACKAGE], ['AC_PACKAGE_TARNAME'])dnl
 AC_SUBST([VERSION], ['AC_PACKAGE_VERSION'])])dnl

_AM_IF_OPTION([no-define],,
[AC_DEFINE_UNQUOTED(PACKAGE, "$PACKAGE", [Name of package])
 AC_DEFINE_UNQUOTED(VERSION, "$VERSION", [Version number of package])])dnl

# Some tools Automake needs.
AC_REQUIRE([AM_SANITY_CHECK])dnl
AC_REQUIRE([AC_ARG_PROGRAM])dnl
AM_MISSING_PROG(ACLOCAL, aclocal-${am__api_version})
AM_MISSING_PROG(AUTOCONF, autoconf)
AM_MISSING_PROG(AUTOMAKE, automake-${am__api_version})
AM_MISSING_PROG(AUTOHEADER, autoheader)
AM_MISSING_PROG(MAKEINFO, makeinfo)
AC_REQUIRE([AM_PROG_INSTALL_SH])dnl
AC_REQUIRE([AM_PROG_INSTALL_STRIP])dnl
AC_REQUIRE([AM_PROG_MKDIR_P])dnl
# We need awk for the "check" target.  The system "awk" is bad on
# some platforms.
AC_REQUIRE([AC_PROG_AWK])dnl
AC_REQUIRE([AC_PROG_MAKE_SET])dnl
AC_REQUIRE([AM_SET_LEADING_DOT])dnl
_AM_IF_OPTION([tar-ustar], [_AM_PROG_TAR([ustar])],
	      [_AM_IF_OPTION([tar-pax], [_AM_PROG_TAR([pax])],
			     [_AM_PROG_TAR([v7])])])
_AM_IF_OPTION([no-dependencies],,
[AC_PROVIDE_IFELSE([AC_PROG_CC],
		  [_AM_DEPENDENCIES(CC)],
		  [define([AC_PROG_CC],
			  defn([AC_PROG_CC])[_AM_DEPENDENCIES(CC)])])dnl
AC_PROVIDE_IFELSE([AC_PROG_CXX],
		  [_AM_DEPENDENCIES(CXX)],
		  [define([AC_PROG_CXX],
			  defn([AC_PROG_CXX])[_AM_DEPENDENCIES(CXX)])])dnl
AC_PROVIDE_IFELSE([AC_PROG_OBJC],
		  [_AM_DEPENDENCIES(OBJC)],
		  [define([AC_PROG_OBJC],
			  defn([AC_PROG_OBJC])[_AM_DEPENDENCIES(OBJC)])])dnl
])
_AM_IF_OPTION([silent-rules], [AC_REQUIRE([AM_SILENT_RULES])])dnl
dnl The `parallel-tests' driver may need to know about EXEEXT, so add the
dnl `am__EXEEXT' conditional if _AM_COMPILER_EXEEXT was seen.  This macro
dnl is hooked onto _AC_COMPILER_EXEEXT early, see below.
AC_CONFIG_COMMANDS_PRE(dnl
[m4_provide_if([_AM_COMPILER_EXEEXT],
  [AM_CONDITIONAL([am__EXEEXT], [test -n "$EXEEXT"])])])dnl
])

dnl Hook into `_AC_COMPILER_EXEEXT' early to learn its expansion.  Do not
dnl add the conditional right here, as _AC_COMPILER_EXEEXT may be further
dnl mangled by Autoconf and run in a shell conditional statement.
m4_define([_AC_COMPILER_EXEEXT],
m4_defn([_AC_COMPILER_EXEEXT])[m4_provide([_AM_COMPILER_EXEEXT])])


# When config.status generates a header, we must update the stamp-h file.
# This file resides in the same directory as the config header
# that is generated.  The stamp files are numbered to have different names.

# Autoconf calls _AC_AM_CONFIG_HEADER_HOOK (when defined) in the
# loop where config.status creates the headers, so we can generate
# our stamp files there.
AC_DEFUN([_AC_AM_CONFIG_HEADER_HOOK],
[# Compute $1's index in $config_headers.
_am_arg=$1
_am_stamp_count=1
for _am_header in $config_headers :; do
  case $_am_header in
    $_am_arg | $_am_arg:* )
      break ;;
    * )
      _am_stamp_count=`expr $_am_stamp_count + 1` ;;
  esac
done
echo "timestamp for $_am_arg" >`AS_DIRNAME(["$_am_arg"])`/stamp-h[]$_am_stamp_count])

# Copyright (C) 2001, 2003, 2005, 2008  Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.

# AM_PROG_INSTALL_SH
# ------------------
# Define $install_sh.
AC_DEFUN([AM_PROG_INSTALL_SH],
[AC_REQUIRE([AM_AUX_DIR_EXPAND])dnl
if test x"${install_sh}" != xset; then
  case $am_aux_dir in
  *\ * | *\	*)
    install_sh="\${SHELL} '$am_aux_dir/install-sh'" ;;
  *)
    install_sh="\${SHELL} $am_aux_dir/install-sh"
  esac
fi
AC_SUBST(install_sh)])

# Copyright (C) 2003, 2005  Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.

# serial 2

# Check whether the underlying file-system supports filenames
# with a leading dot.  For instance MS-DOS doesn't.
AC_DEFUN([AM_SET_LEADING_DOT],
[rm -rf .tst 2>/dev/null
mkdir .tst 2>/dev/null
if test -d .tst; then
  am__leading_dot=.
else
  am__leading_dot=_
fi
rmdir .tst 2>/dev/null
AC_SUBST([am__leading_dot])])

# Check to see how 'make' treats includes.	            -*- Autoconf -*-

# Copyright (C) 2001, 2002, 2003, 2005, 2009  Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.

# serial 4

# AM_MAKE_INCLUDE()
# -----------------
# Check to see how make treats includes.
AC_DEFUN([AM_MAKE_INCLUDE],
[am_make=${MAKE-make}
cat > confinc << 'END'
am__doit:
	@echo this is the am__doit target
.PHONY: am__doit
END
# If we don't find an include directive, just comment out the code.
AC_MSG_CHECKING([for style of include used by $am_make])
am__include="#"
am__quote=
_am_result=none
# First try GNU make style include.
echo "include confinc" > confmf
# Ignore all kinds of additional output from `make'.
case `$am_make -s -f confmf 2> /dev/null` in #(
*the\ am__doit\ target*)
  am__include=include
  am__quote=
  _am_result=GNU
  ;;
esac
# Now try BSD make style include.
if test "$am__include" = "#"; then
   echo '.include "confinc"' > confmf
   case `$am_make -s -f confmf 2> /dev/null` in #(
   *the\ am__doit\ target*)
     am__include=.include
     am__quote="\""
     _am_result=BSD
     ;;
   esac
fi
AC_SUBST([am__include])
AC_SUBST([am__quote])
AC_MSG_RESULT([$_am_result])
rm -f confinc confmf
])

# Fake the existence of programs that GNU maintainers use.  -*- Autoconf -*-

# Copyright (C) 1997, 1999, 2000, 2001, 2003, 2004, 2005, 2008
# Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.

# serial 6

# AM_MISSING_PROG(NAME, PROGRAM)
# ------------------------------
AC_DEFUN([AM_MISSING_PROG],
[AC_REQUIRE([AM_MISSING_HAS_RUN])
$1=${$1-"${am_missing_run}$2"}
AC_SUBST($1)])


# AM_MISSING_HAS_RUN
# ------------------
# Define MISSING if not defined so far and test if it supports --run.
# If it does, set am_missing_run to use it, otherwise, to nothing.
AC_DEFUN([AM_MISSING_HAS_RUN],
[AC_REQUIRE([AM_AUX_DIR_EXPAND])dnl
AC_REQUIRE_AUX_FILE([missing])dnl
if test x"${MISSING+set}" != xset; then
  case $am_aux_dir in
  *\ * | *\	*)
    MISSING="\${SHELL} \"$am_aux_dir/missing\"" ;;
  *)
    MISSING="\${SHELL} $am_aux_dir/missing" ;;
  esac
fi
# Use eval to expand $SHELL
if eval "$MISSING --run true"; then
  am_missing_run="$MISSING --run "
else
  am_missing_run=
  AC_MSG_WARN([`missing' script is too old or missing])
fi
])

# Copyright (C) 2003, 2004, 2005, 2006  Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.

# AM_PROG_MKDIR_P
# ---------------
# Check for `mkdir -p'.
AC_DEFUN([AM_PROG_MKDIR_P],
[AC_PREREQ([2.60])dnl
AC_REQUIRE([AC_PROG_MKDIR_P])dnl
dnl Automake 1.8 to 1.9.6 used to define mkdir_p.  We now use MKDIR_P,
dnl while keeping a definition of mkdir_p for backward compatibility.
dnl @MKDIR_P@ is magic: AC_OUTPUT adjusts its value for each Makefile.
dnl However we cannot define mkdir_p as $(MKDIR_P) for the sake of
dnl Makefile.ins that do not define MKDIR_P, so we do our own
dnl adjustment using top_builddir (which is defined more often than
dnl MKDIR_P).
AC_SUBST([mkdir_p], ["$MKDIR_P"])dnl
case $mkdir_p in
  [[\\/$]]* | ?:[[\\/]]*) ;;
  */*) mkdir_p="\$(top_builddir)/$mkdir_p" ;;
esac
])

# Helper functions for option handling.                     -*- Autoconf -*-

# Copyright (C) 2001, 2002, 2003, 2005, 2008  Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.

# serial 4

# _AM_MANGLE_OPTION(NAME)
# -----------------------
AC_DEFUN([_AM_MANGLE_OPTION],
[[_AM_OPTION_]m4_bpatsubst($1, [[^a-zA-Z0-9_]], [_])])

# _AM_SET_OPTION(NAME)
# ------------------------------
# Set option NAME.
# Presently that only means defining a flag for this option.
AC_DEFUN([_AM_SET_OPTION],
[m4_define(_AM_MANGLE_OPTION([$1]), 1)])

# _AM_SET_OPTIONS(OPTIONS)
# ----------------------------------
# OPTIONS is a space-separated list of Automake options.
AC_DEFUN([_AM_SET_OPTIONS],
[m4_foreach_w([_AM_Option], [$1], [_AM_SET_OPTION(_AM_Option)])])

# _AM_IF_OPTION(OPTION, IF-SET, [IF-NOT-SET])
# -------------------------------------------
# Execute IF-SET if OPTION is set, IF-NOT-SET otherwise.
AC_DEFUN([_AM_IF_OPTION],
[m4_ifset(_AM_MANGLE_OPTION([$1]), [$2], [$3])])

# Check to make sure that the build environment is sane.    -*- Autoconf -*-

# Copyright (C) 1996, 1997, 2000, 2001, 2003, 2005, 2008
# Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.

# serial 5

# AM_SANITY_CHECK
# ---------------
AC_DEFUN([AM_SANITY_CHECK],
[AC_MSG_CHECKING([whether build environment is sane])
# Just in case
sleep 1
echo timestamp > conftest.file
# Reject unsafe characters in $srcdir or the absolute working directory
# name.  Accept space and tab only in the latter.
am_lf='
'
case `pwd` in
  *[[\\\"\#\$\&\'\`$am_lf]]*)
    AC_MSG_ERROR([unsafe absolute working directory name]);;
esac
case $srcdir in
  *[[\\\"\#\$\&\'\`$am_lf\ \	]]*)
    AC_MSG_ERROR([unsafe srcdir value: `$srcdir']);;
esac

# Do `set' in a subshell so we don't clobber the current shell's
# arguments.  Must try -L first in case configure is actually a
# symlink; some systems play weird games with the mod time of symlinks
# (eg FreeBSD returns the mod time of the symlink's containing
# directory).
if (
   set X `ls -Lt "$srcdir/configure" conftest.file 2> /dev/null`
   if test "$[*]" = "X"; then
      # -L didn't work.
      set X `ls -t "$srcdir/configure" conftest.file`
   fi
   rm -f conftest.file
   if test "$[*]" != "X $srcdir/configure conftest.file" \
      && test "$[*]" != "X conftest.file $srcdir/configure"; then

      # If neither matched, then we have a broken ls.  This can happen
      # if, for instance, CONFIG_SHELL is bash and it inherits a
      # broken ls alias from the environment.  This has actually
      # happened.  Such a system could not be considered "sane".
      AC_MSG_ERROR([ls -t appears to fail.  Make sure there is not a broken
alias in your environment])
   fi

   test "$[2]" = conftest.file
   )
then
   # Ok.
   :
else
   AC_MSG_ERROR([newly created file is older than distributed files!
Check your system clock])
fi
AC_MSG_RESULT(yes)])

# Copyright (C) 2001, 2003, 2005 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.

# AM_PROG_INSTALL_STRIP
# ---------------------
# One issue with vendor `install' (even GNU) is that you can't
# specify the program used to strip binaries.  This is especially
# annoying in cross-compiling environments, where the build's strip
# is unlikely to handle the host's binaries.
# Fortunately install-sh will honor a STRIPPROG variable, so we
# always use install-sh in `make install-strip', and initialize
# STRIPPROG with the value of the STRIP variable (set by the user).
AC_DEFUN([AM_PROG_INSTALL_STRIP],
[AC_REQUIRE([AM_PROG_INSTALL_SH])dnl
# Installed binaries are usually stripped using `strip' when the user
# runs `make install-strip'.  However `strip' might not be the right
# tool to use in cross-compilation environments, therefore Automake
# will honor the `STRIP' environment variable to overrule this program.
dnl Don't test for $cross_compiling = yes, because it might be `maybe'.
if test "$cross_compiling" != no; then
  AC_CHECK_TOOL([STRIP], [strip], :)
fi
INSTALL_STRIP_PROGRAM="\$(install_sh) -c -s"
AC_SUBST([INSTALL_STRIP_PROGRAM])])

# Copyright (C) 2006, 2008 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.

# serial 2

# _AM_SUBST_NOTMAKE(VARIABLE)
# ---------------------------
# Prevent Automake from outputting VARIABLE = @VARIABLE@ in Makefile.in.
# This macro is traced by Automake.
AC_DEFUN([_AM_SUBST_NOTMAKE])

# AM_SUBST_NOTMAKE(VARIABLE)
# ---------------------------
# Public sister of _AM_SUBST_NOTMAKE.
AC_DEFUN([AM_SUBST_NOTMAKE], [_AM_SUBST_NOTMAKE($@)])

# Check how to create a tarball.                            -*- Autoconf -*-

# Copyright (C) 2004, 2005 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.

# serial 2

# _AM_PROG_TAR(FORMAT)
# --------------------
# Check how to create a tarball in format FORMAT.
# FORMAT should be one of `v7', `ustar', or `pax'.
#
# Substitute a variable $(am__tar) that is a command
# writing to stdout a FORMAT-tarball containing the directory
# $tardir.
#     tardir=directory && $(am__tar) > result.tar
#
# Substitute a variable $(am__untar) that extracts such
# a tarball read from stdin.
#     $(am__untar) < result.tar
AC_DEFUN([_AM_PROG_TAR],
[# Always define AMTAR for backward compatibility.
AM_MISSING_PROG([AMTAR], [tar])
m4_if([$1], [v7],
     [am__tar='${AMTAR} chof - "$$tardir"'; am__untar='${AMTAR} xf -'],
     [m4_case([$1], [ustar],, [pax],,
              [m4_fatal([Unknown tar format])])
AC_MSG_CHECKING([how to create a $1 tar archive])
# Loop over all known methods to create a tar archive until one works.
_am_tools='gnutar m4_if([$1], [ustar], [plaintar]) pax cpio none'
_am_tools=${am_cv_prog_tar_$1-$_am_tools}
# Do not fold the above two lines into one, because Tru64 sh and
# Solaris sh will not grok spaces in the rhs of `-'.
for _am_tool in $_am_tools
do
  case $_am_tool in
  gnutar)
    for _am_tar in tar gnutar gtar;
    do
      AM_RUN_LOG([$_am_tar --version]) && break
    done
    am__tar="$_am_tar --format=m4_if([$1], [pax], [posix], [$1]) -chf - "'"$$tardir"'
    am__tar_="$_am_tar --format=m4_if([$1], [pax], [posix], [$1]) -chf - "'"$tardir"'
    am__untar="$_am_tar -xf -"
    ;;
  plaintar)
    # Must skip GNU tar: if it does not support --format= it doesn't create
    # ustar tarball either.
    (tar --version) >/dev/null 2>&1 && continue
    am__tar='tar chf - "$$tardir"'
    am__tar_='tar chf - "$tardir"'
    am__untar='tar xf -'
    ;;
  pax)
    am__tar='pax -L -x $1 -w "$$tardir"'
    am__tar_='pax -L -x $1 -w "$tardir"'
    am__untar='pax -r'
    ;;
  cpio)
    am__tar='find "$$tardir" -print | cpio -o -H $1 -L'
    am__tar_='find "$tardir" -print | cpio -o -H $1 -L'
    am__untar='cpio -i -H $1 -d'
    ;;
  none)
    am__tar=false
    am__tar_=false
    am__untar=false
    ;;
  esac

  # If the value was cached, stop now.  We just wanted to have am__tar
  # and am__untar set.
  test -n "${am_cv_prog_tar_$1}" && break

  # tar/untar a dummy directory, and stop if the command works
  rm -rf conftest.dir
  mkdir conftest.dir
  echo GrepMe > conftest.dir/file
  AM_RUN_LOG([tardir=conftest.dir && eval $am__tar_ >conftest.tar])
  rm -rf conftest.dir
  if test -s conftest.tar; then
    AM_RUN_LOG([$am__untar <conftest.tar])
    grep GrepMe conftest.dir/file >/dev/null 2>&1 && break
  fi
done
rm -rf conftest.dir

AC_CACHE_VAL([am_cv_prog_tar_$1], [am_cv_prog_tar_$1=$_am_tool])
AC_MSG_RESULT([$am_cv_prog_tar_$1])])
AC_SUBST([am__tar])
AC_SUBST([am__untar])
]) # _AM_PROG_TAR

m4_include([m4/acx_pthread.m4])
m4_include([m4/google_namespace.m4])
m4_include([m4/namespaces.m4])
m4_include([m4/stl_hash.m4])
m4_include([m4/stl_hash_fun.m4])
sparsehash-2.0.2/Makefile.in0000664000175000017500000017121411721254575012663 00000000000000
# Makefile.in generated by automake 1.11.1 from Makefile.am.
# @configure_input@

# Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002,
# 2003, 2004, 2005, 2006, 2007, 2008, 2009 Free Software Foundation,
# Inc.
# This Makefile.in is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.
@SET_MAKE@ VPATH = @srcdir@ pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ pkglibexecdir = $(libexecdir)/@PACKAGE@ am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd install_sh_DATA = $(install_sh) -c -m 644 install_sh_PROGRAM = $(install_sh) -c install_sh_SCRIPT = $(install_sh) -c INSTALL_HEADER = $(INSTALL_DATA) transform = $(program_transform_name) NORMAL_INSTALL = : PRE_INSTALL = : POST_INSTALL = : NORMAL_UNINSTALL = : PRE_UNINSTALL = : POST_UNINSTALL = : build_triplet = @build@ host_triplet = @host@ TESTS = template_util_unittest$(EXEEXT) type_traits_unittest$(EXEEXT) \ libc_allocator_with_realloc_test$(EXEEXT) \ sparsetable_unittest$(EXEEXT) hashtable_test$(EXEEXT) \ simple_test$(EXEEXT) simple_compat_test$(EXEEXT) noinst_PROGRAMS = $(am__EXEEXT_1) time_hash_map$(EXEEXT) subdir = . DIST_COMMON = README $(am__configure_deps) $(dist_doc_DATA) \ $(googleinclude_HEADERS) $(googleinternalinclude_HEADERS) \ $(internalinclude_HEADERS) $(sparsehashinclude_HEADERS) \ $(srcdir)/Makefile.am $(srcdir)/Makefile.in \ $(top_srcdir)/configure $(top_srcdir)/src/config.h.in AUTHORS \ COPYING ChangeLog INSTALL NEWS TODO config.guess config.sub \ depcomp install-sh missing ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 am__aclocal_m4_deps = $(top_srcdir)/m4/acx_pthread.m4 \ $(top_srcdir)/m4/google_namespace.m4 \ $(top_srcdir)/m4/namespaces.m4 $(top_srcdir)/m4/stl_hash.m4 \ $(top_srcdir)/m4/stl_hash_fun.m4 $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) am__CONFIG_DISTCLEAN_FILES = config.status config.cache config.log \ configure.lineno config.status.lineno mkinstalldirs = $(install_sh) -d CONFIG_HEADER = $(top_builddir)/src/config.h CONFIG_CLEAN_FILES = CONFIG_CLEAN_VPATH_FILES = am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; am__vpath_adj = case $$p in \ $(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \ *) f=$$p;; \ esac; 
am__strip_dir = f=`echo $$p | sed -e 's|^.*/||'`; am__install_max = 40 am__nobase_strip_setup = \ srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*|]/\\\\&/g'` am__nobase_strip = \ for p in $$list; do echo "$$p"; done | sed -e "s|$$srcdirstrip/||" am__nobase_list = $(am__nobase_strip_setup); \ for p in $$list; do echo "$$p $$p"; done | \ sed "s| $$srcdirstrip/| |;"' / .*\//!s/ .*/ ./; s,\( .*\)/[^/]*$$,\1,' | \ $(AWK) 'BEGIN { files["."] = "" } { files[$$2] = files[$$2] " " $$1; \ if (++n[$$2] == $(am__install_max)) \ { print $$2, files[$$2]; n[$$2] = 0; files[$$2] = "" } } \ END { for (dir in files) print dir, files[dir] }' am__base_list = \ sed '$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;s/\n/ /g' | \ sed '$$!N;$$!N;$$!N;$$!N;s/\n/ /g' am__installdirs = "$(DESTDIR)$(libdir)" "$(DESTDIR)$(docdir)" \ "$(DESTDIR)$(pkgconfigdir)" "$(DESTDIR)$(googleincludedir)" \ "$(DESTDIR)$(googleinternalincludedir)" \ "$(DESTDIR)$(internalincludedir)" \ "$(DESTDIR)$(internalincludedir)" \ "$(DESTDIR)$(sparsehashincludedir)" LTLIBRARIES = $(lib_LTLIBRARIES) am__EXEEXT_1 = template_util_unittest$(EXEEXT) \ type_traits_unittest$(EXEEXT) \ libc_allocator_with_realloc_test$(EXEEXT) \ sparsetable_unittest$(EXEEXT) hashtable_test$(EXEEXT) \ simple_test$(EXEEXT) simple_compat_test$(EXEEXT) PROGRAMS = $(noinst_PROGRAMS) am__objects_1 = am_hashtable_test_OBJECTS = hashtable_test.$(OBJEXT) $(am__objects_1) \ $(am__objects_1) nodist_hashtable_test_OBJECTS = $(am__objects_1) hashtable_test_OBJECTS = $(am_hashtable_test_OBJECTS) \ $(nodist_hashtable_test_OBJECTS) hashtable_test_LDADD = $(LDADD) am_libc_allocator_with_realloc_test_OBJECTS = \ libc_allocator_with_realloc_test.$(OBJEXT) $(am__objects_1) libc_allocator_with_realloc_test_OBJECTS = \ $(am_libc_allocator_with_realloc_test_OBJECTS) libc_allocator_with_realloc_test_LDADD = $(LDADD) am_simple_compat_test_OBJECTS = simple_compat_test.$(OBJEXT) \ $(am__objects_1) $(am__objects_1) $(am__objects_1) nodist_simple_compat_test_OBJECTS = 
$(am__objects_1) simple_compat_test_OBJECTS = $(am_simple_compat_test_OBJECTS) \ $(nodist_simple_compat_test_OBJECTS) simple_compat_test_LDADD = $(LDADD) am_simple_test_OBJECTS = simple_test.$(OBJEXT) $(am__objects_1) nodist_simple_test_OBJECTS = $(am__objects_1) simple_test_OBJECTS = $(am_simple_test_OBJECTS) \ $(nodist_simple_test_OBJECTS) simple_test_LDADD = $(LDADD) am_sparsetable_unittest_OBJECTS = sparsetable_unittest.$(OBJEXT) \ $(am__objects_1) nodist_sparsetable_unittest_OBJECTS = $(am__objects_1) sparsetable_unittest_OBJECTS = $(am_sparsetable_unittest_OBJECTS) \ $(nodist_sparsetable_unittest_OBJECTS) sparsetable_unittest_LDADD = $(LDADD) am_template_util_unittest_OBJECTS = template_util_unittest.$(OBJEXT) nodist_template_util_unittest_OBJECTS = $(am__objects_1) template_util_unittest_OBJECTS = $(am_template_util_unittest_OBJECTS) \ $(nodist_template_util_unittest_OBJECTS) template_util_unittest_LDADD = $(LDADD) am_time_hash_map_OBJECTS = time_hash_map-time_hash_map.$(OBJEXT) \ $(am__objects_1) $(am__objects_1) nodist_time_hash_map_OBJECTS = $(am__objects_1) time_hash_map_OBJECTS = $(am_time_hash_map_OBJECTS) \ $(nodist_time_hash_map_OBJECTS) time_hash_map_DEPENDENCIES = time_hash_map_LINK = $(CXXLD) $(time_hash_map_CXXFLAGS) $(CXXFLAGS) \ $(time_hash_map_LDFLAGS) $(LDFLAGS) -o $@ am_type_traits_unittest_OBJECTS = type_traits_unittest.$(OBJEXT) \ $(am__objects_1) nodist_type_traits_unittest_OBJECTS = $(am__objects_1) type_traits_unittest_OBJECTS = $(am_type_traits_unittest_OBJECTS) \ $(nodist_type_traits_unittest_OBJECTS) type_traits_unittest_LDADD = $(LDADD) DEFAULT_INCLUDES = -I.@am__isrc@ -I$(top_builddir)/src depcomp = $(SHELL) $(top_srcdir)/depcomp am__depfiles_maybe = depfiles am__mv = mv -f CXXCOMPILE = $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) \ $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) CXXLD = $(CXX) CXXLINK = $(CXXLD) $(AM_CXXFLAGS) $(CXXFLAGS) $(AM_LDFLAGS) $(LDFLAGS) \ -o $@ COMPILE = $(CC) $(DEFS) $(DEFAULT_INCLUDES) 
$(INCLUDES) $(AM_CPPFLAGS) \ $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) CCLD = $(CC) LINK = $(CCLD) $(AM_CFLAGS) $(CFLAGS) $(AM_LDFLAGS) $(LDFLAGS) -o $@ SOURCES = $(hashtable_test_SOURCES) $(nodist_hashtable_test_SOURCES) \ $(libc_allocator_with_realloc_test_SOURCES) \ $(simple_compat_test_SOURCES) \ $(nodist_simple_compat_test_SOURCES) $(simple_test_SOURCES) \ $(nodist_simple_test_SOURCES) $(sparsetable_unittest_SOURCES) \ $(nodist_sparsetable_unittest_SOURCES) \ $(template_util_unittest_SOURCES) \ $(nodist_template_util_unittest_SOURCES) \ $(time_hash_map_SOURCES) $(nodist_time_hash_map_SOURCES) \ $(type_traits_unittest_SOURCES) \ $(nodist_type_traits_unittest_SOURCES) DIST_SOURCES = $(hashtable_test_SOURCES) \ $(libc_allocator_with_realloc_test_SOURCES) \ $(simple_compat_test_SOURCES) $(simple_test_SOURCES) \ $(sparsetable_unittest_SOURCES) \ $(template_util_unittest_SOURCES) $(time_hash_map_SOURCES) \ $(type_traits_unittest_SOURCES) DATA = $(dist_doc_DATA) $(pkgconfig_DATA) HEADERS = $(googleinclude_HEADERS) $(googleinternalinclude_HEADERS) \ $(internalinclude_HEADERS) $(nodist_internalinclude_HEADERS) \ $(sparsehashinclude_HEADERS) ETAGS = etags CTAGS = ctags am__tty_colors = \ red=; grn=; lgn=; blu=; std= DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) distdir = $(PACKAGE)-$(VERSION) top_distdir = $(distdir) am__remove_distdir = \ { test ! -d "$(distdir)" \ || { find "$(distdir)" -type d ! -perm -200 -exec chmod u+w {} ';' \ && rm -fr "$(distdir)"; }; } DIST_ARCHIVES = $(distdir).tar.gz $(distdir).zip GZIP_ENV = --best distuninstallcheck_listfiles = find . -type f -print distcleancheck_listfiles = find . 
-type f -print ACLOCAL = @ACLOCAL@ AMTAR = @AMTAR@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ CC = @CC@ CCDEPMODE = @CCDEPMODE@ CFLAGS = @CFLAGS@ CPP = @CPP@ CPPFLAGS = @CPPFLAGS@ CXX = @CXX@ CXXCPP = @CXXCPP@ CXXDEPMODE = @CXXDEPMODE@ CXXFLAGS = @CXXFLAGS@ CYGPATH_W = @CYGPATH_W@ DEFS = @DEFS@ DEPDIR = @DEPDIR@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ EGREP = @EGREP@ EXEEXT = @EXEEXT@ GREP = @GREP@ INSTALL = @INSTALL@ INSTALL_DATA = @INSTALL_DATA@ INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ LDFLAGS = @LDFLAGS@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LTLIBOBJS = @LTLIBOBJS@ MAKEINFO = @MAKEINFO@ MKDIR_P = @MKDIR_P@ OBJEXT = @OBJEXT@ PACKAGE = @PACKAGE@ PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = @PACKAGE_TARNAME@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PATH_SEPARATOR = @PATH_SEPARATOR@ PTHREAD_CC = @PTHREAD_CC@ PTHREAD_CFLAGS = @PTHREAD_CFLAGS@ PTHREAD_LIBS = @PTHREAD_LIBS@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ STRIP = @STRIP@ VERSION = @VERSION@ abs_builddir = @abs_builddir@ abs_srcdir = @abs_srcdir@ abs_top_builddir = @abs_top_builddir@ abs_top_srcdir = @abs_top_srcdir@ ac_ct_CC = @ac_ct_CC@ ac_ct_CXX = @ac_ct_CXX@ acx_pthread_config = @acx_pthread_config@ am__include = @am__include@ am__leading_dot = @am__leading_dot@ am__quote = @am__quote@ am__tar = @am__tar@ am__untar = @am__untar@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ build_cpu = @build_cpu@ build_os = @build_os@ build_vendor = @build_vendor@ builddir = @builddir@ datadir = @datadir@ datarootdir = @datarootdir@ docdir = $(prefix)/share/doc/$(PACKAGE)-$(VERSION) dvidir = @dvidir@ exec_prefix = @exec_prefix@ host = @host@ host_alias = @host_alias@ host_cpu = @host_cpu@ host_os = @host_os@ host_vendor = @host_vendor@ htmldir = @htmldir@ includedir = @includedir@ infodir 
= @infodir@ install_sh = @install_sh@ libdir = @libdir@ libexecdir = @libexecdir@ localedir = @localedir@ localstatedir = @localstatedir@ mandir = @mandir@ mkdir_p = @mkdir_p@ oldincludedir = @oldincludedir@ pdfdir = @pdfdir@ prefix = @prefix@ program_transform_name = @program_transform_name@ psdir = @psdir@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ sysconfdir = @sysconfdir@ target_alias = @target_alias@ tcmalloc_flags = @tcmalloc_flags@ tcmalloc_libs = @tcmalloc_libs@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ # Make sure that when we re-make ./configure, we get the macros we need ACLOCAL_AMFLAGS = -I m4 # This is so we can #include AM_CPPFLAGS = -I$(top_srcdir)/src # These are good warnings to turn on by default @GCC_TRUE@AM_CXXFLAGS = -Wall -W -Wwrite-strings -Woverloaded-virtual -Wshadow dist_doc_DATA = AUTHORS COPYING ChangeLog INSTALL NEWS README README_windows.txt \ TODO \ doc/dense_hash_map.html \ doc/dense_hash_set.html \ doc/sparse_hash_map.html \ doc/sparse_hash_set.html \ doc/sparsetable.html \ doc/implementation.html \ doc/performance.html \ doc/index.html \ doc/designstyle.css lib_LTLIBRARIES = WINDOWS_PROJECTS = sparsehash.sln \ vsprojects/time_hash_map/time_hash_map.vcproj \ vsprojects/type_traits_unittest/type_traits_unittest.vcproj \ vsprojects/libc_allocator_with_realloc_test/libc_allocator_with_realloc_test.vcproj \ vsprojects/sparsetable_unittest/sparsetable_unittest.vcproj \ vsprojects/hashtable_test/hashtable_test.vcproj \ vsprojects/simple_test/simple_test.vcproj check_SCRIPTS = TESTS_ENVIRONMENT = # This is how we tell automake about auto-generated .h files BUILT_SOURCES = src/sparsehash/internal/sparseconfig.h CLEANFILES = src/sparsehash/internal/sparseconfig.h $(pkgconfig_DATA) sparsehashincludedir = $(includedir)/sparsehash sparsehashinclude_HEADERS = \ src/sparsehash/dense_hash_map \ src/sparsehash/dense_hash_set \ src/sparsehash/sparse_hash_map \ 
src/sparsehash/sparse_hash_set \ src/sparsehash/sparsetable \ src/sparsehash/template_util.h \ src/sparsehash/type_traits.h internalincludedir = $(sparsehashincludedir)/internal internalinclude_HEADERS = \ src/sparsehash/internal/densehashtable.h \ src/sparsehash/internal/sparsehashtable.h \ src/sparsehash/internal/hashtable-common.h \ src/sparsehash/internal/libc_allocator_with_realloc.h nodist_internalinclude_HEADERS = src/sparsehash/internal/sparseconfig.h # This is for backwards compatibility only. googleincludedir = $(includedir)/google googleinclude_HEADERS = \ src/google/dense_hash_map \ src/google/dense_hash_set \ src/google/sparse_hash_map \ src/google/sparse_hash_set \ src/google/sparsetable \ src/google/template_util.h \ src/google/type_traits.h googleinternalincludedir = $(includedir)/google/sparsehash googleinternalinclude_HEADERS = \ src/google/sparsehash/densehashtable.h \ src/google/sparsehash/sparsehashtable.h \ src/google/sparsehash/hashtable-common.h \ src/google/sparsehash/libc_allocator_with_realloc.h # TODO(csilvers): Update windows projects for template_util_unittest. 
# WINDOWS_PROJECTS += vsprojects/template_util_unittest/template_util_unittest.vcproj template_util_unittest_SOURCES = \ src/template_util_unittest.cc \ src/sparsehash/template_util.h nodist_template_util_unittest_SOURCES = $(nodist_internalinclude_HEADERS) type_traits_unittest_SOURCES = \ src/type_traits_unittest.cc \ $(internalinclude_HEADERS) \ src/sparsehash/type_traits.h nodist_type_traits_unittest_SOURCES = $(nodist_internalinclude_HEADERS) libc_allocator_with_realloc_test_SOURCES = \ src/libc_allocator_with_realloc_test.cc \ $(internalinclude_HEADERS) \ src/sparsehash/internal/libc_allocator_with_realloc.h sparsetable_unittest_SOURCES = \ src/sparsetable_unittest.cc \ $(internalinclude_HEADERS) \ src/sparsehash/sparsetable nodist_sparsetable_unittest_SOURCES = $(nodist_internalinclude_HEADERS) hashtable_test_SOURCES = \ src/hashtable_test.cc \ src/hash_test_interface.h \ src/testutil.h \ $(sparsehashinclude_HEADERS) \ $(internalinclude_HEADERS) nodist_hashtable_test_SOURCES = $(nodist_internalinclude_HEADERS) simple_test_SOURCES = \ src/simple_test.cc \ $(internalinclude_HEADERS) nodist_simple_test_SOURCES = $(nodist_internalinclude_HEADERS) simple_compat_test_SOURCES = \ src/simple_compat_test.cc \ $(internalinclude_HEADERS) \ $(googleinclude_HEADERS) \ $(googleinternalinclude_HEADERS) nodist_simple_compat_test_SOURCES = $(nodist_internalinclude_HEADERS) time_hash_map_SOURCES = \ src/time_hash_map.cc \ $(internalinclude_HEADERS) \ $(sparsehashinclude_HEADERS) nodist_time_hash_map_SOURCES = $(nodist_internalinclude_HEADERS) # If tcmalloc is installed, use it with time_hash_map; it gives us # heap-usage statistics for the hash_map routines, which is very nice time_hash_map_CXXFLAGS = @tcmalloc_flags@ $(AM_CXXFLAGS) time_hash_map_LDFLAGS = @tcmalloc_flags@ time_hash_map_LDADD = @tcmalloc_libs@ # http://linux.die.net/man/1/pkg-config, http://pkg-config.freedesktop.org/wiki pkgconfigdir = $(libdir)/pkgconfig pkgconfig_DATA = lib${PACKAGE}.pc EXTRA_DIST = 
packages/rpm.sh packages/rpm/rpm.spec packages/deb.sh packages/deb \ src/config.h.include src/windows $(WINDOWS_PROJECTS) experimental all: $(BUILT_SOURCES) $(MAKE) $(AM_MAKEFLAGS) all-am .SUFFIXES: .SUFFIXES: .cc .o .obj am--refresh: @: $(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(am__configure_deps) @for dep in $?; do \ case '$(am__configure_deps)' in \ *$$dep*) \ echo ' cd $(srcdir) && $(AUTOMAKE) --gnu'; \ $(am__cd) $(srcdir) && $(AUTOMAKE) --gnu \ && exit 0; \ exit 1;; \ esac; \ done; \ echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu Makefile'; \ $(am__cd) $(top_srcdir) && \ $(AUTOMAKE) --gnu Makefile .PRECIOUS: Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status @case '$?' in \ *config.status*) \ echo ' $(SHELL) ./config.status'; \ $(SHELL) ./config.status;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__depfiles_maybe)'; \ cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__depfiles_maybe);; \ esac; $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) $(SHELL) ./config.status --recheck $(top_srcdir)/configure: $(am__configure_deps) $(am__cd) $(srcdir) && $(AUTOCONF) $(ACLOCAL_M4): $(am__aclocal_m4_deps) $(am__cd) $(srcdir) && $(ACLOCAL) $(ACLOCAL_AMFLAGS) $(am__aclocal_m4_deps): src/config.h: src/stamp-h1 @if test ! 
-f $@; then \ rm -f src/stamp-h1; \ $(MAKE) $(AM_MAKEFLAGS) src/stamp-h1; \ else :; fi src/stamp-h1: $(top_srcdir)/src/config.h.in $(top_builddir)/config.status @rm -f src/stamp-h1 cd $(top_builddir) && $(SHELL) ./config.status src/config.h $(top_srcdir)/src/config.h.in: $(am__configure_deps) ($(am__cd) $(top_srcdir) && $(AUTOHEADER)) rm -f src/stamp-h1 touch $@ distclean-hdr: -rm -f src/config.h src/stamp-h1 install-libLTLIBRARIES: $(lib_LTLIBRARIES) @$(NORMAL_INSTALL) test -z "$(libdir)" || $(MKDIR_P) "$(DESTDIR)$(libdir)" @list='$(lib_LTLIBRARIES)'; test -n "$(libdir)" || list=; \ list2=; for p in $$list; do \ if test -f $$p; then \ list2="$$list2 $$p"; \ else :; fi; \ done; \ test -z "$$list2" || { \ echo " $(INSTALL) $(INSTALL_STRIP_FLAG) $$list '$(DESTDIR)$(libdir)'"; \ $(INSTALL) $(INSTALL_STRIP_FLAG) $$list "$(DESTDIR)$(libdir)"; \ } uninstall-libLTLIBRARIES: @$(NORMAL_UNINSTALL) @list='$(lib_LTLIBRARIES)'; test -n "$(libdir)" || list=; \ for p in $$list; do \ $(am__strip_dir) \ echo " rm -f '$(DESTDIR)$(libdir)/$$f'"; \ rm -f "$(DESTDIR)$(libdir)/$$f"; \ done clean-libLTLIBRARIES: -test -z "$(lib_LTLIBRARIES)" || rm -f $(lib_LTLIBRARIES) @list='$(lib_LTLIBRARIES)'; for p in $$list; do \ dir="`echo $$p | sed -e 's|/[^/]*$$||'`"; \ test "$$dir" != "$$p" || dir=.; \ echo "rm -f \"$${dir}/so_locations\""; \ rm -f "$${dir}/so_locations"; \ done clean-noinstPROGRAMS: -test -z "$(noinst_PROGRAMS)" || rm -f $(noinst_PROGRAMS) hashtable_test$(EXEEXT): $(hashtable_test_OBJECTS) $(hashtable_test_DEPENDENCIES) @rm -f hashtable_test$(EXEEXT) $(CXXLINK) $(hashtable_test_OBJECTS) $(hashtable_test_LDADD) $(LIBS) libc_allocator_with_realloc_test$(EXEEXT): $(libc_allocator_with_realloc_test_OBJECTS) $(libc_allocator_with_realloc_test_DEPENDENCIES) @rm -f libc_allocator_with_realloc_test$(EXEEXT) $(CXXLINK) $(libc_allocator_with_realloc_test_OBJECTS) $(libc_allocator_with_realloc_test_LDADD) $(LIBS) simple_compat_test$(EXEEXT): $(simple_compat_test_OBJECTS) 
$(simple_compat_test_DEPENDENCIES) @rm -f simple_compat_test$(EXEEXT) $(CXXLINK) $(simple_compat_test_OBJECTS) $(simple_compat_test_LDADD) $(LIBS) simple_test$(EXEEXT): $(simple_test_OBJECTS) $(simple_test_DEPENDENCIES) @rm -f simple_test$(EXEEXT) $(CXXLINK) $(simple_test_OBJECTS) $(simple_test_LDADD) $(LIBS) sparsetable_unittest$(EXEEXT): $(sparsetable_unittest_OBJECTS) $(sparsetable_unittest_DEPENDENCIES) @rm -f sparsetable_unittest$(EXEEXT) $(CXXLINK) $(sparsetable_unittest_OBJECTS) $(sparsetable_unittest_LDADD) $(LIBS) template_util_unittest$(EXEEXT): $(template_util_unittest_OBJECTS) $(template_util_unittest_DEPENDENCIES) @rm -f template_util_unittest$(EXEEXT) $(CXXLINK) $(template_util_unittest_OBJECTS) $(template_util_unittest_LDADD) $(LIBS) time_hash_map$(EXEEXT): $(time_hash_map_OBJECTS) $(time_hash_map_DEPENDENCIES) @rm -f time_hash_map$(EXEEXT) $(time_hash_map_LINK) $(time_hash_map_OBJECTS) $(time_hash_map_LDADD) $(LIBS) type_traits_unittest$(EXEEXT): $(type_traits_unittest_OBJECTS) $(type_traits_unittest_DEPENDENCIES) @rm -f type_traits_unittest$(EXEEXT) $(CXXLINK) $(type_traits_unittest_OBJECTS) $(type_traits_unittest_LDADD) $(LIBS) mostlyclean-compile: -rm -f *.$(OBJEXT) distclean-compile: -rm -f *.tab.c @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/hashtable_test.Po@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libc_allocator_with_realloc_test.Po@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/simple_compat_test.Po@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/simple_test.Po@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/sparsetable_unittest.Po@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/template_util_unittest.Po@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/time_hash_map-time_hash_map.Po@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/type_traits_unittest.Po@am__quote@ .cc.o: @am__fastdepCXX_TRUE@ $(CXXCOMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o 
$@ $< @am__fastdepCXX_TRUE@ $(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Po @AMDEP_TRUE@@am__fastdepCXX_FALSE@ source='$<' object='$@' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCXX_FALSE@ DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCXX_FALSE@ $(CXXCOMPILE) -c -o $@ $< .cc.obj: @am__fastdepCXX_TRUE@ $(CXXCOMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ `$(CYGPATH_W) '$<'` @am__fastdepCXX_TRUE@ $(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Po @AMDEP_TRUE@@am__fastdepCXX_FALSE@ source='$<' object='$@' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCXX_FALSE@ DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCXX_FALSE@ $(CXXCOMPILE) -c -o $@ `$(CYGPATH_W) '$<'` hashtable_test.o: src/hashtable_test.cc @am__fastdepCXX_TRUE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -MT hashtable_test.o -MD -MP -MF $(DEPDIR)/hashtable_test.Tpo -c -o hashtable_test.o `test -f 'src/hashtable_test.cc' || echo '$(srcdir)/'`src/hashtable_test.cc @am__fastdepCXX_TRUE@ $(am__mv) $(DEPDIR)/hashtable_test.Tpo $(DEPDIR)/hashtable_test.Po @AMDEP_TRUE@@am__fastdepCXX_FALSE@ source='src/hashtable_test.cc' object='hashtable_test.o' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCXX_FALSE@ DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCXX_FALSE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -c -o hashtable_test.o `test -f 'src/hashtable_test.cc' || echo '$(srcdir)/'`src/hashtable_test.cc hashtable_test.obj: src/hashtable_test.cc @am__fastdepCXX_TRUE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -MT hashtable_test.obj -MD -MP -MF $(DEPDIR)/hashtable_test.Tpo -c -o hashtable_test.obj `if test -f 'src/hashtable_test.cc'; then $(CYGPATH_W) 'src/hashtable_test.cc'; else $(CYGPATH_W) '$(srcdir)/src/hashtable_test.cc'; fi` @am__fastdepCXX_TRUE@ $(am__mv) 
$(DEPDIR)/hashtable_test.Tpo $(DEPDIR)/hashtable_test.Po @AMDEP_TRUE@@am__fastdepCXX_FALSE@ source='src/hashtable_test.cc' object='hashtable_test.obj' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCXX_FALSE@ DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCXX_FALSE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -c -o hashtable_test.obj `if test -f 'src/hashtable_test.cc'; then $(CYGPATH_W) 'src/hashtable_test.cc'; else $(CYGPATH_W) '$(srcdir)/src/hashtable_test.cc'; fi` libc_allocator_with_realloc_test.o: src/libc_allocator_with_realloc_test.cc @am__fastdepCXX_TRUE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -MT libc_allocator_with_realloc_test.o -MD -MP -MF $(DEPDIR)/libc_allocator_with_realloc_test.Tpo -c -o libc_allocator_with_realloc_test.o `test -f 'src/libc_allocator_with_realloc_test.cc' || echo '$(srcdir)/'`src/libc_allocator_with_realloc_test.cc @am__fastdepCXX_TRUE@ $(am__mv) $(DEPDIR)/libc_allocator_with_realloc_test.Tpo $(DEPDIR)/libc_allocator_with_realloc_test.Po @AMDEP_TRUE@@am__fastdepCXX_FALSE@ source='src/libc_allocator_with_realloc_test.cc' object='libc_allocator_with_realloc_test.o' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCXX_FALSE@ DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCXX_FALSE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -c -o libc_allocator_with_realloc_test.o `test -f 'src/libc_allocator_with_realloc_test.cc' || echo '$(srcdir)/'`src/libc_allocator_with_realloc_test.cc libc_allocator_with_realloc_test.obj: src/libc_allocator_with_realloc_test.cc @am__fastdepCXX_TRUE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -MT libc_allocator_with_realloc_test.obj -MD -MP -MF $(DEPDIR)/libc_allocator_with_realloc_test.Tpo -c -o 
libc_allocator_with_realloc_test.obj `if test -f 'src/libc_allocator_with_realloc_test.cc'; then $(CYGPATH_W) 'src/libc_allocator_with_realloc_test.cc'; else $(CYGPATH_W) '$(srcdir)/src/libc_allocator_with_realloc_test.cc'; fi` @am__fastdepCXX_TRUE@ $(am__mv) $(DEPDIR)/libc_allocator_with_realloc_test.Tpo $(DEPDIR)/libc_allocator_with_realloc_test.Po @AMDEP_TRUE@@am__fastdepCXX_FALSE@ source='src/libc_allocator_with_realloc_test.cc' object='libc_allocator_with_realloc_test.obj' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCXX_FALSE@ DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCXX_FALSE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -c -o libc_allocator_with_realloc_test.obj `if test -f 'src/libc_allocator_with_realloc_test.cc'; then $(CYGPATH_W) 'src/libc_allocator_with_realloc_test.cc'; else $(CYGPATH_W) '$(srcdir)/src/libc_allocator_with_realloc_test.cc'; fi` simple_compat_test.o: src/simple_compat_test.cc @am__fastdepCXX_TRUE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -MT simple_compat_test.o -MD -MP -MF $(DEPDIR)/simple_compat_test.Tpo -c -o simple_compat_test.o `test -f 'src/simple_compat_test.cc' || echo '$(srcdir)/'`src/simple_compat_test.cc @am__fastdepCXX_TRUE@ $(am__mv) $(DEPDIR)/simple_compat_test.Tpo $(DEPDIR)/simple_compat_test.Po @AMDEP_TRUE@@am__fastdepCXX_FALSE@ source='src/simple_compat_test.cc' object='simple_compat_test.o' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCXX_FALSE@ DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCXX_FALSE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -c -o simple_compat_test.o `test -f 'src/simple_compat_test.cc' || echo '$(srcdir)/'`src/simple_compat_test.cc simple_compat_test.obj: src/simple_compat_test.cc @am__fastdepCXX_TRUE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) 
$(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -MT simple_compat_test.obj -MD -MP -MF $(DEPDIR)/simple_compat_test.Tpo -c -o simple_compat_test.obj `if test -f 'src/simple_compat_test.cc'; then $(CYGPATH_W) 'src/simple_compat_test.cc'; else $(CYGPATH_W) '$(srcdir)/src/simple_compat_test.cc'; fi` @am__fastdepCXX_TRUE@ $(am__mv) $(DEPDIR)/simple_compat_test.Tpo $(DEPDIR)/simple_compat_test.Po @AMDEP_TRUE@@am__fastdepCXX_FALSE@ source='src/simple_compat_test.cc' object='simple_compat_test.obj' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCXX_FALSE@ DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCXX_FALSE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -c -o simple_compat_test.obj `if test -f 'src/simple_compat_test.cc'; then $(CYGPATH_W) 'src/simple_compat_test.cc'; else $(CYGPATH_W) '$(srcdir)/src/simple_compat_test.cc'; fi` simple_test.o: src/simple_test.cc @am__fastdepCXX_TRUE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -MT simple_test.o -MD -MP -MF $(DEPDIR)/simple_test.Tpo -c -o simple_test.o `test -f 'src/simple_test.cc' || echo '$(srcdir)/'`src/simple_test.cc @am__fastdepCXX_TRUE@ $(am__mv) $(DEPDIR)/simple_test.Tpo $(DEPDIR)/simple_test.Po @AMDEP_TRUE@@am__fastdepCXX_FALSE@ source='src/simple_test.cc' object='simple_test.o' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCXX_FALSE@ DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCXX_FALSE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -c -o simple_test.o `test -f 'src/simple_test.cc' || echo '$(srcdir)/'`src/simple_test.cc simple_test.obj: src/simple_test.cc @am__fastdepCXX_TRUE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -MT simple_test.obj -MD -MP -MF $(DEPDIR)/simple_test.Tpo -c -o simple_test.obj `if test -f 
'src/simple_test.cc'; then $(CYGPATH_W) 'src/simple_test.cc'; else $(CYGPATH_W) '$(srcdir)/src/simple_test.cc'; fi` @am__fastdepCXX_TRUE@ $(am__mv) $(DEPDIR)/simple_test.Tpo $(DEPDIR)/simple_test.Po @AMDEP_TRUE@@am__fastdepCXX_FALSE@ source='src/simple_test.cc' object='simple_test.obj' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCXX_FALSE@ DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCXX_FALSE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -c -o simple_test.obj `if test -f 'src/simple_test.cc'; then $(CYGPATH_W) 'src/simple_test.cc'; else $(CYGPATH_W) '$(srcdir)/src/simple_test.cc'; fi` sparsetable_unittest.o: src/sparsetable_unittest.cc @am__fastdepCXX_TRUE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -MT sparsetable_unittest.o -MD -MP -MF $(DEPDIR)/sparsetable_unittest.Tpo -c -o sparsetable_unittest.o `test -f 'src/sparsetable_unittest.cc' || echo '$(srcdir)/'`src/sparsetable_unittest.cc @am__fastdepCXX_TRUE@ $(am__mv) $(DEPDIR)/sparsetable_unittest.Tpo $(DEPDIR)/sparsetable_unittest.Po @AMDEP_TRUE@@am__fastdepCXX_FALSE@ source='src/sparsetable_unittest.cc' object='sparsetable_unittest.o' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCXX_FALSE@ DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCXX_FALSE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -c -o sparsetable_unittest.o `test -f 'src/sparsetable_unittest.cc' || echo '$(srcdir)/'`src/sparsetable_unittest.cc sparsetable_unittest.obj: src/sparsetable_unittest.cc @am__fastdepCXX_TRUE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -MT sparsetable_unittest.obj -MD -MP -MF $(DEPDIR)/sparsetable_unittest.Tpo -c -o sparsetable_unittest.obj `if test -f 'src/sparsetable_unittest.cc'; then $(CYGPATH_W) 'src/sparsetable_unittest.cc'; else 
$(CYGPATH_W) '$(srcdir)/src/sparsetable_unittest.cc'; fi` @am__fastdepCXX_TRUE@ $(am__mv) $(DEPDIR)/sparsetable_unittest.Tpo $(DEPDIR)/sparsetable_unittest.Po @AMDEP_TRUE@@am__fastdepCXX_FALSE@ source='src/sparsetable_unittest.cc' object='sparsetable_unittest.obj' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCXX_FALSE@ DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCXX_FALSE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -c -o sparsetable_unittest.obj `if test -f 'src/sparsetable_unittest.cc'; then $(CYGPATH_W) 'src/sparsetable_unittest.cc'; else $(CYGPATH_W) '$(srcdir)/src/sparsetable_unittest.cc'; fi` template_util_unittest.o: src/template_util_unittest.cc @am__fastdepCXX_TRUE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -MT template_util_unittest.o -MD -MP -MF $(DEPDIR)/template_util_unittest.Tpo -c -o template_util_unittest.o `test -f 'src/template_util_unittest.cc' || echo '$(srcdir)/'`src/template_util_unittest.cc @am__fastdepCXX_TRUE@ $(am__mv) $(DEPDIR)/template_util_unittest.Tpo $(DEPDIR)/template_util_unittest.Po @AMDEP_TRUE@@am__fastdepCXX_FALSE@ source='src/template_util_unittest.cc' object='template_util_unittest.o' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCXX_FALSE@ DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCXX_FALSE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -c -o template_util_unittest.o `test -f 'src/template_util_unittest.cc' || echo '$(srcdir)/'`src/template_util_unittest.cc template_util_unittest.obj: src/template_util_unittest.cc @am__fastdepCXX_TRUE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -MT template_util_unittest.obj -MD -MP -MF $(DEPDIR)/template_util_unittest.Tpo -c -o template_util_unittest.obj `if test -f 'src/template_util_unittest.cc'; 
then $(CYGPATH_W) 'src/template_util_unittest.cc'; else $(CYGPATH_W) '$(srcdir)/src/template_util_unittest.cc'; fi` @am__fastdepCXX_TRUE@ $(am__mv) $(DEPDIR)/template_util_unittest.Tpo $(DEPDIR)/template_util_unittest.Po @AMDEP_TRUE@@am__fastdepCXX_FALSE@ source='src/template_util_unittest.cc' object='template_util_unittest.obj' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCXX_FALSE@ DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCXX_FALSE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -c -o template_util_unittest.obj `if test -f 'src/template_util_unittest.cc'; then $(CYGPATH_W) 'src/template_util_unittest.cc'; else $(CYGPATH_W) '$(srcdir)/src/template_util_unittest.cc'; fi` time_hash_map-time_hash_map.o: src/time_hash_map.cc @am__fastdepCXX_TRUE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(time_hash_map_CXXFLAGS) $(CXXFLAGS) -MT time_hash_map-time_hash_map.o -MD -MP -MF $(DEPDIR)/time_hash_map-time_hash_map.Tpo -c -o time_hash_map-time_hash_map.o `test -f 'src/time_hash_map.cc' || echo '$(srcdir)/'`src/time_hash_map.cc @am__fastdepCXX_TRUE@ $(am__mv) $(DEPDIR)/time_hash_map-time_hash_map.Tpo $(DEPDIR)/time_hash_map-time_hash_map.Po @AMDEP_TRUE@@am__fastdepCXX_FALSE@ source='src/time_hash_map.cc' object='time_hash_map-time_hash_map.o' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCXX_FALSE@ DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCXX_FALSE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(time_hash_map_CXXFLAGS) $(CXXFLAGS) -c -o time_hash_map-time_hash_map.o `test -f 'src/time_hash_map.cc' || echo '$(srcdir)/'`src/time_hash_map.cc time_hash_map-time_hash_map.obj: src/time_hash_map.cc @am__fastdepCXX_TRUE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(time_hash_map_CXXFLAGS) $(CXXFLAGS) -MT time_hash_map-time_hash_map.obj -MD -MP -MF 
$(DEPDIR)/time_hash_map-time_hash_map.Tpo -c -o time_hash_map-time_hash_map.obj `if test -f 'src/time_hash_map.cc'; then $(CYGPATH_W) 'src/time_hash_map.cc'; else $(CYGPATH_W) '$(srcdir)/src/time_hash_map.cc'; fi` @am__fastdepCXX_TRUE@ $(am__mv) $(DEPDIR)/time_hash_map-time_hash_map.Tpo $(DEPDIR)/time_hash_map-time_hash_map.Po @AMDEP_TRUE@@am__fastdepCXX_FALSE@ source='src/time_hash_map.cc' object='time_hash_map-time_hash_map.obj' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCXX_FALSE@ DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCXX_FALSE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(time_hash_map_CXXFLAGS) $(CXXFLAGS) -c -o time_hash_map-time_hash_map.obj `if test -f 'src/time_hash_map.cc'; then $(CYGPATH_W) 'src/time_hash_map.cc'; else $(CYGPATH_W) '$(srcdir)/src/time_hash_map.cc'; fi` type_traits_unittest.o: src/type_traits_unittest.cc @am__fastdepCXX_TRUE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -MT type_traits_unittest.o -MD -MP -MF $(DEPDIR)/type_traits_unittest.Tpo -c -o type_traits_unittest.o `test -f 'src/type_traits_unittest.cc' || echo '$(srcdir)/'`src/type_traits_unittest.cc @am__fastdepCXX_TRUE@ $(am__mv) $(DEPDIR)/type_traits_unittest.Tpo $(DEPDIR)/type_traits_unittest.Po @AMDEP_TRUE@@am__fastdepCXX_FALSE@ source='src/type_traits_unittest.cc' object='type_traits_unittest.o' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCXX_FALSE@ DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCXX_FALSE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -c -o type_traits_unittest.o `test -f 'src/type_traits_unittest.cc' || echo '$(srcdir)/'`src/type_traits_unittest.cc type_traits_unittest.obj: src/type_traits_unittest.cc @am__fastdepCXX_TRUE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -MT 
type_traits_unittest.obj -MD -MP -MF $(DEPDIR)/type_traits_unittest.Tpo -c -o type_traits_unittest.obj `if test -f 'src/type_traits_unittest.cc'; then $(CYGPATH_W) 'src/type_traits_unittest.cc'; else $(CYGPATH_W) '$(srcdir)/src/type_traits_unittest.cc'; fi` @am__fastdepCXX_TRUE@ $(am__mv) $(DEPDIR)/type_traits_unittest.Tpo $(DEPDIR)/type_traits_unittest.Po @AMDEP_TRUE@@am__fastdepCXX_FALSE@ source='src/type_traits_unittest.cc' object='type_traits_unittest.obj' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCXX_FALSE@ DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCXX_FALSE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -c -o type_traits_unittest.obj `if test -f 'src/type_traits_unittest.cc'; then $(CYGPATH_W) 'src/type_traits_unittest.cc'; else $(CYGPATH_W) '$(srcdir)/src/type_traits_unittest.cc'; fi` install-dist_docDATA: $(dist_doc_DATA) @$(NORMAL_INSTALL) test -z "$(docdir)" || $(MKDIR_P) "$(DESTDIR)$(docdir)" @list='$(dist_doc_DATA)'; test -n "$(docdir)" || list=; \ for p in $$list; do \ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \ echo "$$d$$p"; \ done | $(am__base_list) | \ while read files; do \ echo " $(INSTALL_DATA) $$files '$(DESTDIR)$(docdir)'"; \ $(INSTALL_DATA) $$files "$(DESTDIR)$(docdir)" || exit $$?; \ done uninstall-dist_docDATA: @$(NORMAL_UNINSTALL) @list='$(dist_doc_DATA)'; test -n "$(docdir)" || list=; \ files=`for p in $$list; do echo $$p; done | sed -e 's|^.*/||'`; \ test -n "$$files" || exit 0; \ echo " ( cd '$(DESTDIR)$(docdir)' && rm -f" $$files ")"; \ cd "$(DESTDIR)$(docdir)" && rm -f $$files install-pkgconfigDATA: $(pkgconfig_DATA) @$(NORMAL_INSTALL) test -z "$(pkgconfigdir)" || $(MKDIR_P) "$(DESTDIR)$(pkgconfigdir)" @list='$(pkgconfig_DATA)'; test -n "$(pkgconfigdir)" || list=; \ for p in $$list; do \ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \ echo "$$d$$p"; \ done | $(am__base_list) | \ while read files; do \ echo " $(INSTALL_DATA) 
$$files '$(DESTDIR)$(pkgconfigdir)'"; \ $(INSTALL_DATA) $$files "$(DESTDIR)$(pkgconfigdir)" || exit $$?; \ done uninstall-pkgconfigDATA: @$(NORMAL_UNINSTALL) @list='$(pkgconfig_DATA)'; test -n "$(pkgconfigdir)" || list=; \ files=`for p in $$list; do echo $$p; done | sed -e 's|^.*/||'`; \ test -n "$$files" || exit 0; \ echo " ( cd '$(DESTDIR)$(pkgconfigdir)' && rm -f" $$files ")"; \ cd "$(DESTDIR)$(pkgconfigdir)" && rm -f $$files install-googleincludeHEADERS: $(googleinclude_HEADERS) @$(NORMAL_INSTALL) test -z "$(googleincludedir)" || $(MKDIR_P) "$(DESTDIR)$(googleincludedir)" @list='$(googleinclude_HEADERS)'; test -n "$(googleincludedir)" || list=; \ for p in $$list; do \ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \ echo "$$d$$p"; \ done | $(am__base_list) | \ while read files; do \ echo " $(INSTALL_HEADER) $$files '$(DESTDIR)$(googleincludedir)'"; \ $(INSTALL_HEADER) $$files "$(DESTDIR)$(googleincludedir)" || exit $$?; \ done uninstall-googleincludeHEADERS: @$(NORMAL_UNINSTALL) @list='$(googleinclude_HEADERS)'; test -n "$(googleincludedir)" || list=; \ files=`for p in $$list; do echo $$p; done | sed -e 's|^.*/||'`; \ test -n "$$files" || exit 0; \ echo " ( cd '$(DESTDIR)$(googleincludedir)' && rm -f" $$files ")"; \ cd "$(DESTDIR)$(googleincludedir)" && rm -f $$files install-googleinternalincludeHEADERS: $(googleinternalinclude_HEADERS) @$(NORMAL_INSTALL) test -z "$(googleinternalincludedir)" || $(MKDIR_P) "$(DESTDIR)$(googleinternalincludedir)" @list='$(googleinternalinclude_HEADERS)'; test -n "$(googleinternalincludedir)" || list=; \ for p in $$list; do \ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \ echo "$$d$$p"; \ done | $(am__base_list) | \ while read files; do \ echo " $(INSTALL_HEADER) $$files '$(DESTDIR)$(googleinternalincludedir)'"; \ $(INSTALL_HEADER) $$files "$(DESTDIR)$(googleinternalincludedir)" || exit $$?; \ done uninstall-googleinternalincludeHEADERS: @$(NORMAL_UNINSTALL) @list='$(googleinternalinclude_HEADERS)'; test -n 
"$(googleinternalincludedir)" || list=; \ files=`for p in $$list; do echo $$p; done | sed -e 's|^.*/||'`; \ test -n "$$files" || exit 0; \ echo " ( cd '$(DESTDIR)$(googleinternalincludedir)' && rm -f" $$files ")"; \ cd "$(DESTDIR)$(googleinternalincludedir)" && rm -f $$files install-internalincludeHEADERS: $(internalinclude_HEADERS) @$(NORMAL_INSTALL) test -z "$(internalincludedir)" || $(MKDIR_P) "$(DESTDIR)$(internalincludedir)" @list='$(internalinclude_HEADERS)'; test -n "$(internalincludedir)" || list=; \ for p in $$list; do \ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \ echo "$$d$$p"; \ done | $(am__base_list) | \ while read files; do \ echo " $(INSTALL_HEADER) $$files '$(DESTDIR)$(internalincludedir)'"; \ $(INSTALL_HEADER) $$files "$(DESTDIR)$(internalincludedir)" || exit $$?; \ done uninstall-internalincludeHEADERS: @$(NORMAL_UNINSTALL) @list='$(internalinclude_HEADERS)'; test -n "$(internalincludedir)" || list=; \ files=`for p in $$list; do echo $$p; done | sed -e 's|^.*/||'`; \ test -n "$$files" || exit 0; \ echo " ( cd '$(DESTDIR)$(internalincludedir)' && rm -f" $$files ")"; \ cd "$(DESTDIR)$(internalincludedir)" && rm -f $$files install-nodist_internalincludeHEADERS: $(nodist_internalinclude_HEADERS) @$(NORMAL_INSTALL) test -z "$(internalincludedir)" || $(MKDIR_P) "$(DESTDIR)$(internalincludedir)" @list='$(nodist_internalinclude_HEADERS)'; test -n "$(internalincludedir)" || list=; \ for p in $$list; do \ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \ echo "$$d$$p"; \ done | $(am__base_list) | \ while read files; do \ echo " $(INSTALL_HEADER) $$files '$(DESTDIR)$(internalincludedir)'"; \ $(INSTALL_HEADER) $$files "$(DESTDIR)$(internalincludedir)" || exit $$?; \ done uninstall-nodist_internalincludeHEADERS: @$(NORMAL_UNINSTALL) @list='$(nodist_internalinclude_HEADERS)'; test -n "$(internalincludedir)" || list=; \ files=`for p in $$list; do echo $$p; done | sed -e 's|^.*/||'`; \ test -n "$$files" || exit 0; \ echo " ( cd 
'$(DESTDIR)$(internalincludedir)' && rm -f" $$files ")"; \ cd "$(DESTDIR)$(internalincludedir)" && rm -f $$files install-sparsehashincludeHEADERS: $(sparsehashinclude_HEADERS) @$(NORMAL_INSTALL) test -z "$(sparsehashincludedir)" || $(MKDIR_P) "$(DESTDIR)$(sparsehashincludedir)" @list='$(sparsehashinclude_HEADERS)'; test -n "$(sparsehashincludedir)" || list=; \ for p in $$list; do \ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \ echo "$$d$$p"; \ done | $(am__base_list) | \ while read files; do \ echo " $(INSTALL_HEADER) $$files '$(DESTDIR)$(sparsehashincludedir)'"; \ $(INSTALL_HEADER) $$files "$(DESTDIR)$(sparsehashincludedir)" || exit $$?; \ done uninstall-sparsehashincludeHEADERS: @$(NORMAL_UNINSTALL) @list='$(sparsehashinclude_HEADERS)'; test -n "$(sparsehashincludedir)" || list=; \ files=`for p in $$list; do echo $$p; done | sed -e 's|^.*/||'`; \ test -n "$$files" || exit 0; \ echo " ( cd '$(DESTDIR)$(sparsehashincludedir)' && rm -f" $$files ")"; \ cd "$(DESTDIR)$(sparsehashincludedir)" && rm -f $$files ID: $(HEADERS) $(SOURCES) $(LISP) $(TAGS_FILES) list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \ unique=`for i in $$list; do \ if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ done | \ $(AWK) '{ files[$$0] = 1; nonempty = 1; } \ END { if (nonempty) { for (i in files) print i; }; }'`; \ mkid -fID $$unique tags: TAGS TAGS: $(HEADERS) $(SOURCES) $(TAGS_DEPENDENCIES) \ $(TAGS_FILES) $(LISP) set x; \ here=`pwd`; \ list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \ unique=`for i in $$list; do \ if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ done | \ $(AWK) '{ files[$$0] = 1; nonempty = 1; } \ END { if (nonempty) { for (i in files) print i; }; }'`; \ shift; \ if test -z "$(ETAGS_ARGS)$$*$$unique"; then :; else \ test -n "$$unique" || unique=$$empty_fix; \ if test $$# -gt 0; then \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ "$$@" $$unique; \ else \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ 
$$unique; \ fi; \ fi ctags: CTAGS CTAGS: $(HEADERS) $(SOURCES) $(TAGS_DEPENDENCIES) \ $(TAGS_FILES) $(LISP) list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \ unique=`for i in $$list; do \ if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ done | \ $(AWK) '{ files[$$0] = 1; nonempty = 1; } \ END { if (nonempty) { for (i in files) print i; }; }'`; \ test -z "$(CTAGS_ARGS)$$unique" \ || $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \ $$unique GTAGS: here=`$(am__cd) $(top_builddir) && pwd` \ && $(am__cd) $(top_srcdir) \ && gtags -i $(GTAGS_ARGS) "$$here" distclean-tags: -rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags check-TESTS: $(TESTS) @failed=0; all=0; xfail=0; xpass=0; skip=0; \ srcdir=$(srcdir); export srcdir; \ list=' $(TESTS) '; \ $(am__tty_colors); \ if test -n "$$list"; then \ for tst in $$list; do \ if test -f ./$$tst; then dir=./; \ elif test -f $$tst; then dir=; \ else dir="$(srcdir)/"; fi; \ if $(TESTS_ENVIRONMENT) $${dir}$$tst; then \ all=`expr $$all + 1`; \ case " $(XFAIL_TESTS) " in \ *[\ \ ]$$tst[\ \ ]*) \ xpass=`expr $$xpass + 1`; \ failed=`expr $$failed + 1`; \ col=$$red; res=XPASS; \ ;; \ *) \ col=$$grn; res=PASS; \ ;; \ esac; \ elif test $$? 
-ne 77; then \ all=`expr $$all + 1`; \ case " $(XFAIL_TESTS) " in \ *[\ \ ]$$tst[\ \ ]*) \ xfail=`expr $$xfail + 1`; \ col=$$lgn; res=XFAIL; \ ;; \ *) \ failed=`expr $$failed + 1`; \ col=$$red; res=FAIL; \ ;; \ esac; \ else \ skip=`expr $$skip + 1`; \ col=$$blu; res=SKIP; \ fi; \ echo "$${col}$$res$${std}: $$tst"; \ done; \ if test "$$all" -eq 1; then \ tests="test"; \ All=""; \ else \ tests="tests"; \ All="All "; \ fi; \ if test "$$failed" -eq 0; then \ if test "$$xfail" -eq 0; then \ banner="$$All$$all $$tests passed"; \ else \ if test "$$xfail" -eq 1; then failures=failure; else failures=failures; fi; \ banner="$$All$$all $$tests behaved as expected ($$xfail expected $$failures)"; \ fi; \ else \ if test "$$xpass" -eq 0; then \ banner="$$failed of $$all $$tests failed"; \ else \ if test "$$xpass" -eq 1; then passes=pass; else passes=passes; fi; \ banner="$$failed of $$all $$tests did not behave as expected ($$xpass unexpected $$passes)"; \ fi; \ fi; \ dashes="$$banner"; \ skipped=""; \ if test "$$skip" -ne 0; then \ if test "$$skip" -eq 1; then \ skipped="($$skip test was not run)"; \ else \ skipped="($$skip tests were not run)"; \ fi; \ test `echo "$$skipped" | wc -c` -le `echo "$$banner" | wc -c` || \ dashes="$$skipped"; \ fi; \ report=""; \ if test "$$failed" -ne 0 && test -n "$(PACKAGE_BUGREPORT)"; then \ report="Please report to $(PACKAGE_BUGREPORT)"; \ test `echo "$$report" | wc -c` -le `echo "$$banner" | wc -c` || \ dashes="$$report"; \ fi; \ dashes=`echo "$$dashes" | sed s/./=/g`; \ if test "$$failed" -eq 0; then \ echo "$$grn$$dashes"; \ else \ echo "$$red$$dashes"; \ fi; \ echo "$$banner"; \ test -z "$$skipped" || echo "$$skipped"; \ test -z "$$report" || echo "$$report"; \ echo "$$dashes$$std"; \ test "$$failed" -eq 0; \ else :; fi distdir: $(DISTFILES) $(am__remove_distdir) test -d "$(distdir)" || mkdir "$(distdir)" @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ 
list='$(DISTFILES)'; \ dist_files=`for file in $$list; do echo $$file; done | \ sed -e "s|^$$srcdirstrip/||;t" \ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ case $$dist_files in \ */*) $(MKDIR_P) `echo "$$dist_files" | \ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ sort -u` ;; \ esac; \ for file in $$dist_files; do \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ if test -d $$d/$$file; then \ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ if test -d "$(distdir)/$$file"; then \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ else \ test -f "$(distdir)/$$file" \ || cp -p $$d/$$file "$(distdir)/$$file" \ || exit 1; \ fi; \ done $(MAKE) $(AM_MAKEFLAGS) \ top_distdir="$(top_distdir)" distdir="$(distdir)" \ dist-hook -test -n "$(am__skip_mode_fix)" \ || find "$(distdir)" -type d ! -perm -755 \ -exec chmod u+rwx,go+rx {} \; -o \ ! -type d ! -perm -444 -links 1 -exec chmod a+r {} \; -o \ ! -type d ! -perm -400 -exec chmod a+r {} \; -o \ ! -type d ! 
-perm -444 -exec $(install_sh) -c -m a+r {} {} \; \ || chmod -R a+r "$(distdir)" dist-gzip: distdir tardir=$(distdir) && $(am__tar) | GZIP=$(GZIP_ENV) gzip -c >$(distdir).tar.gz $(am__remove_distdir) dist-bzip2: distdir tardir=$(distdir) && $(am__tar) | bzip2 -9 -c >$(distdir).tar.bz2 $(am__remove_distdir) dist-lzma: distdir tardir=$(distdir) && $(am__tar) | lzma -9 -c >$(distdir).tar.lzma $(am__remove_distdir) dist-xz: distdir tardir=$(distdir) && $(am__tar) | xz -c >$(distdir).tar.xz $(am__remove_distdir) dist-tarZ: distdir tardir=$(distdir) && $(am__tar) | compress -c >$(distdir).tar.Z $(am__remove_distdir) dist-shar: distdir shar $(distdir) | GZIP=$(GZIP_ENV) gzip -c >$(distdir).shar.gz $(am__remove_distdir) dist-zip: distdir -rm -f $(distdir).zip zip -rq $(distdir).zip $(distdir) $(am__remove_distdir) dist dist-all: distdir tardir=$(distdir) && $(am__tar) | GZIP=$(GZIP_ENV) gzip -c >$(distdir).tar.gz -rm -f $(distdir).zip zip -rq $(distdir).zip $(distdir) $(am__remove_distdir) # This target untars the dist file and tries a VPATH configuration. Then # it guarantees that the distribution is self-contained by making another # tarfile. distcheck: dist case '$(DIST_ARCHIVES)' in \ *.tar.gz*) \ GZIP=$(GZIP_ENV) gzip -dc $(distdir).tar.gz | $(am__untar) ;;\ *.tar.bz2*) \ bzip2 -dc $(distdir).tar.bz2 | $(am__untar) ;;\ *.tar.lzma*) \ lzma -dc $(distdir).tar.lzma | $(am__untar) ;;\ *.tar.xz*) \ xz -dc $(distdir).tar.xz | $(am__untar) ;;\ *.tar.Z*) \ uncompress -c $(distdir).tar.Z | $(am__untar) ;;\ *.shar.gz*) \ GZIP=$(GZIP_ENV) gzip -dc $(distdir).shar.gz | unshar ;;\ *.zip*) \ unzip $(distdir).zip ;;\ esac chmod -R a-w $(distdir); chmod a+w $(distdir) mkdir $(distdir)/_build mkdir $(distdir)/_inst chmod a-w $(distdir) test -d $(distdir)/_build || exit 0; \ dc_install_base=`$(am__cd) $(distdir)/_inst && pwd | sed -e 's,^[^:\\/]:[\\/],/,'` \ && dc_destdir="$${TMPDIR-/tmp}/am-dc-$$$$/" \ && am__cwd=`pwd` \ && $(am__cd) $(distdir)/_build \ && ../configure --srcdir=.. 
--prefix="$$dc_install_base" \ $(DISTCHECK_CONFIGURE_FLAGS) \ && $(MAKE) $(AM_MAKEFLAGS) \ && $(MAKE) $(AM_MAKEFLAGS) dvi \ && $(MAKE) $(AM_MAKEFLAGS) check \ && $(MAKE) $(AM_MAKEFLAGS) install \ && $(MAKE) $(AM_MAKEFLAGS) installcheck \ && $(MAKE) $(AM_MAKEFLAGS) uninstall \ && $(MAKE) $(AM_MAKEFLAGS) distuninstallcheck_dir="$$dc_install_base" \ distuninstallcheck \ && chmod -R a-w "$$dc_install_base" \ && ({ \ (cd ../.. && umask 077 && mkdir "$$dc_destdir") \ && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" install \ && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" uninstall \ && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" \ distuninstallcheck_dir="$$dc_destdir" distuninstallcheck; \ } || { rm -rf "$$dc_destdir"; exit 1; }) \ && rm -rf "$$dc_destdir" \ && $(MAKE) $(AM_MAKEFLAGS) dist \ && rm -rf $(DIST_ARCHIVES) \ && $(MAKE) $(AM_MAKEFLAGS) distcleancheck \ && cd "$$am__cwd" \ || exit 1 $(am__remove_distdir) @(echo "$(distdir) archives ready for distribution: "; \ list='$(DIST_ARCHIVES)'; for i in $$list; do echo $$i; done) | \ sed -e 1h -e 1s/./=/g -e 1p -e 1x -e '$$p' -e '$$x' distuninstallcheck: @$(am__cd) '$(distuninstallcheck_dir)' \ && test `$(distuninstallcheck_listfiles) | wc -l` -le 1 \ || { echo "ERROR: files left after uninstall:" ; \ if test -n "$(DESTDIR)"; then \ echo " (check DESTDIR support)"; \ fi ; \ $(distuninstallcheck_listfiles) ; \ exit 1; } >&2 distcleancheck: distclean @if test '$(srcdir)' = . 
; then \ echo "ERROR: distcleancheck can only run from a VPATH build" ; \ exit 1 ; \ fi @test `$(distcleancheck_listfiles) | wc -l` -eq 0 \ || { echo "ERROR: files left in build directory after distclean:" ; \ $(distcleancheck_listfiles) ; \ exit 1; } >&2 check-am: all-am $(MAKE) $(AM_MAKEFLAGS) $(check_SCRIPTS) $(MAKE) $(AM_MAKEFLAGS) check-TESTS check: $(BUILT_SOURCES) $(MAKE) $(AM_MAKEFLAGS) check-am all-am: Makefile $(LTLIBRARIES) $(PROGRAMS) $(DATA) $(HEADERS) installdirs: for dir in "$(DESTDIR)$(libdir)" "$(DESTDIR)$(docdir)" "$(DESTDIR)$(pkgconfigdir)" "$(DESTDIR)$(googleincludedir)" "$(DESTDIR)$(googleinternalincludedir)" "$(DESTDIR)$(internalincludedir)" "$(DESTDIR)$(internalincludedir)" "$(DESTDIR)$(sparsehashincludedir)"; do \ test -z "$$dir" || $(MKDIR_P) "$$dir"; \ done install: $(BUILT_SOURCES) $(MAKE) $(AM_MAKEFLAGS) install-am install-exec: install-exec-am install-data: install-data-am uninstall: uninstall-am install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-am install-strip: $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ `test -z '$(STRIP)' || \ echo "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'"` install mostlyclean-generic: clean-generic: -test -z "$(CLEANFILES)" || rm -f $(CLEANFILES) distclean-generic: -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES) -test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." 
-test -z "$(BUILT_SOURCES)" || rm -f $(BUILT_SOURCES) clean: clean-am clean-am: clean-generic clean-libLTLIBRARIES clean-noinstPROGRAMS \ mostlyclean-am distclean: distclean-am -rm -f $(am__CONFIG_DISTCLEAN_FILES) -rm -rf ./$(DEPDIR) -rm -f Makefile distclean-am: clean-am distclean-compile distclean-generic \ distclean-hdr distclean-tags dvi: dvi-am dvi-am: html: html-am html-am: info: info-am info-am: install-data-am: install-dist_docDATA install-googleincludeHEADERS \ install-googleinternalincludeHEADERS \ install-internalincludeHEADERS \ install-nodist_internalincludeHEADERS install-pkgconfigDATA \ install-sparsehashincludeHEADERS install-dvi: install-dvi-am install-dvi-am: install-exec-am: install-libLTLIBRARIES install-html: install-html-am install-html-am: install-info: install-info-am install-info-am: install-man: install-pdf: install-pdf-am install-pdf-am: install-ps: install-ps-am install-ps-am: installcheck-am: maintainer-clean: maintainer-clean-am -rm -f $(am__CONFIG_DISTCLEAN_FILES) -rm -rf $(top_srcdir)/autom4te.cache -rm -rf ./$(DEPDIR) -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-generic mostlyclean: mostlyclean-am mostlyclean-am: mostlyclean-compile mostlyclean-generic pdf: pdf-am pdf-am: ps: ps-am ps-am: uninstall-am: uninstall-dist_docDATA uninstall-googleincludeHEADERS \ uninstall-googleinternalincludeHEADERS \ uninstall-internalincludeHEADERS uninstall-libLTLIBRARIES \ uninstall-nodist_internalincludeHEADERS \ uninstall-pkgconfigDATA uninstall-sparsehashincludeHEADERS .MAKE: all check check-am install install-am install-strip .PHONY: CTAGS GTAGS all all-am am--refresh check check-TESTS check-am \ clean clean-generic clean-libLTLIBRARIES clean-noinstPROGRAMS \ ctags dist dist-all dist-bzip2 dist-gzip dist-hook dist-lzma \ dist-shar dist-tarZ dist-xz dist-zip distcheck distclean \ distclean-compile distclean-generic distclean-hdr \ distclean-tags distcleancheck distdir distuninstallcheck dvi \ dvi-am html html-am info info-am 
install install-am \ install-data install-data-am install-dist_docDATA install-dvi \ install-dvi-am install-exec install-exec-am \ install-googleincludeHEADERS \ install-googleinternalincludeHEADERS install-html \ install-html-am install-info install-info-am \ install-internalincludeHEADERS install-libLTLIBRARIES \ install-man install-nodist_internalincludeHEADERS install-pdf \ install-pdf-am install-pkgconfigDATA install-ps install-ps-am \ install-sparsehashincludeHEADERS install-strip installcheck \ installcheck-am installdirs maintainer-clean \ maintainer-clean-generic mostlyclean mostlyclean-compile \ mostlyclean-generic pdf pdf-am ps ps-am tags uninstall \ uninstall-am uninstall-dist_docDATA \ uninstall-googleincludeHEADERS \ uninstall-googleinternalincludeHEADERS \ uninstall-internalincludeHEADERS uninstall-libLTLIBRARIES \ uninstall-nodist_internalincludeHEADERS \ uninstall-pkgconfigDATA uninstall-sparsehashincludeHEADERS # All our .h files need to read the config information in config.h. The # autoheader config.h has too much info, including PACKAGENAME, that # might conflict with other config.h's an application might #include. # Thus, we create a "minimal" config.h, called sparseconfig.h, that # includes only the #defines we really need, and that are unlikely to # change from system to system. NOTE: The awk command is equivalent to # fgrep -B2 -f$(top_builddir)/src/config.h.include $(top_builddir)/src/config.h # | fgrep -vx -e -- > _sparsehash_config # For correctness, it depends on the fact config.h.include does not have # any lines starting with #. src/sparsehash/internal/sparseconfig.h: $(top_builddir)/src/config.h \ $(top_srcdir)/src/config.h.include [ -d $(@D) ] || mkdir -p $(@D) echo "/*" > $(@D)/_sparsehash_config echo " * NOTE: This file is for internal use only." >> $(@D)/_sparsehash_config echo " * Do not use these #defines in your own program!" 
>> $(@D)/_sparsehash_config echo " */" >> $(@D)/_sparsehash_config $(AWK) '{prevline=currline; currline=$$0;} \ /^#/ {in_second_file = 1;} \ !in_second_file {if (currline !~ /^ *$$/) {inc[currline]=0}}; \ in_second_file { for (i in inc) { \ if (index(currline, i) != 0) { \ print "\n"prevline"\n"currline; \ delete inc[i]; \ } \ } }' \ $(top_srcdir)/src/config.h.include $(top_builddir)/src/config.h \ >> $(@D)/_sparsehash_config mv -f $(@D)/_sparsehash_config $@ rpm: dist-gzip packages/rpm.sh packages/rpm/rpm.spec @cd packages && ./rpm.sh ${PACKAGE} ${VERSION} deb: dist-gzip packages/deb.sh packages/deb/* @cd packages && ./deb.sh ${PACKAGE} ${VERSION} # I get the description and URL lines from the rpm spec. I use sed to # try to rewrite exec_prefix, libdir, and includedir in terms of # prefix, if possible. lib${PACKAGE}.pc: Makefile packages/rpm/rpm.spec echo 'prefix=$(prefix)' > "$@".tmp echo 'exec_prefix='`echo '$(exec_prefix)' | sed 's@^$(prefix)@$${prefix}@'` >> "$@".tmp echo 'libdir='`echo '$(libdir)' | sed 's@^$(exec_prefix)@$${exec_prefix}@'` >> "$@".tmp echo 'includedir='`echo '$(includedir)' | sed 's@^$(prefix)@$${prefix}@'` >> "$@".tmp echo '' >> "$@".tmp echo 'Name: $(PACKAGE)' >> "$@".tmp echo 'Version: $(VERSION)' >> "$@".tmp -grep '^Summary:' $(top_srcdir)/packages/rpm/rpm.spec | sed s/^Summary:/Description:/ | head -n1 >> "$@".tmp -grep '^URL: ' $(top_srcdir)/packages/rpm/rpm.spec >> "$@".tmp echo 'Requires:' >> "$@".tmp echo 'Libs:' >> "$@".tmp echo 'Cflags: -I$${includedir}' >> "$@".tmp mv -f "$@".tmp "$@" # Windows wants write permission to .vcproj files and maybe even sln files. dist-hook: test -e "$(distdir)/vsprojects" \ && chmod -R u+w $(distdir)/*.sln $(distdir)/vsprojects/ # Tell versions [3.59,3.63) of GNU make to not export all variables. # Otherwise a system limit (for SysV at least) may be exceeded. 
.NOEXPORT:

dense_hash_set<Key, HashFcn, EqualKey, Alloc>

[Note: this document is formatted similarly to the SGI STL implementation documentation pages, and refers to concepts and classes defined there. However, neither this document nor the code it describes is associated with SGI, nor is it necessary to have SGI's STL implementation installed in order to use this class.]

dense_hash_set<Key, HashFcn, EqualKey, Alloc>

Looking up an element in a dense_hash_set by its key is efficient, so dense_hash_set is useful for "dictionaries" where the order of elements is irrelevant. If it is important for the elements to be in a particular order, however, then set is more appropriate.

dense_hash_set is distinguished from other hash-set implementations by its speed and by the ability to save and restore contents to disk. On the other hand, this hash-set implementation can use significantly more space than other hash-set implementations, and it also has requirements -- for instance, for a distinguished "empty key" -- that may not be easy for all applications to satisfy.

This class is appropriate for applications that need speedy access to relatively small "dictionaries" stored in memory, or for applications that need these dictionaries to be persistent. (See the implementation note for details.)

Example

(Note: this example uses SGI semantics for hash<> -- the kind used by gcc and most Unix compiler suites -- and not Dinkumware semantics -- the kind used by Microsoft Visual Studio. If you are using MSVC, this example will not compile as-is: you'll need to change hash to hash_compare, and you won't use eqstr at all. See the MSVC documentation for hash_map and hash_compare, for more details.)
#include <iostream>
#include <cstring>    // for strcmp, used by eqstr below
#include <sparsehash/dense_hash_set>

using google::dense_hash_set;      // namespace where class lives by default
using std::cout;
using std::endl;
using ext::hash;  // or __gnu_cxx::hash, or maybe tr1::hash, depending on your OS

struct eqstr
{
  bool operator()(const char* s1, const char* s2) const
  {
    return (s1 == s2) || (s1 && s2 && strcmp(s1, s2) == 0);
  }
};

void lookup(const dense_hash_set<const char*, hash<const char*>, eqstr>& Set,
            const char* word)
{
  dense_hash_set<const char*, hash<const char*>, eqstr>::const_iterator it
    = Set.find(word);
  cout << word << ": "
       << (it != Set.end() ? "present" : "not present")
       << endl;
}

int main()
{
  dense_hash_set<const char*, hash<const char*>, eqstr> Set;
  Set.set_empty_key(NULL);
  Set.insert("kiwi");
  Set.insert("plum");
  Set.insert("apple");
  Set.insert("mango");
  Set.insert("apricot");
  Set.insert("banana");

  lookup(Set, "mango");
  lookup(Set, "apple");
  lookup(Set, "durian");
}

Definition

Defined in the header dense_hash_set. This class is not part of the C++ standard, though it is mostly compatible with the tr1 class unordered_set.

Template parameters

Parameter | Description | Default
Key The hash_set's key and value type. This is also defined as dense_hash_set::key_type and dense_hash_set::value_type.  
HashFcn The hash function used by the hash_set. This is also defined as dense_hash_set::hasher.
Note: Hashtable performance depends heavily on the choice of hash function. See the performance page for more information.
hash<Key>
EqualKey The hash_set key equality function: a binary predicate that determines whether two keys are equal. This is also defined as dense_hash_set::key_equal. equal_to<Key>
Alloc The STL allocator to use. By default, uses the provided allocator libc_allocator_with_realloc, which likely gives better performance than other STL allocators due to its built-in support for realloc, which this container takes advantage of. If you use an allocator other than the default, note that this container imposes an additional requirement on the STL allocator type beyond those in [lib.allocator.requirements]: it does not support allocators that define alternate memory models. That is, it assumes that pointer, const_pointer, size_type, and difference_type are just T*, const T*, size_t, and ptrdiff_t, respectively. This is also defined as dense_hash_set::allocator_type.

Model of

Unique Hashed Associative Container, Simple Associative Container

Type requirements

  • Key is Assignable.
  • EqualKey is a Binary Predicate whose argument type is Key.
  • EqualKey is an equivalence relation.
  • Alloc is an Allocator.

Public base classes

None.

Members

Member | Where defined | Description
value_type Container The type of object, T, stored in the hash_set.
key_type Associative Container The key type associated with value_type.
hasher Hashed Associative Container The dense_hash_set's hash function.
key_equal Hashed Associative Container Function object that compares keys for equality.
allocator_type Unordered Associative Container (tr1) The type of the Allocator given as a template parameter.
pointer Container Pointer to T.
reference Container Reference to T
const_reference Container Const reference to T
size_type Container An unsigned integral type.
difference_type Container A signed integral type.
iterator Container Iterator used to iterate through a dense_hash_set.
const_iterator Container Const iterator used to iterate through a dense_hash_set. (iterator and const_iterator are the same type.)
local_iterator Unordered Associative Container (tr1) Iterator used to iterate through a subset of dense_hash_set.
const_local_iterator Unordered Associative Container (tr1) Const iterator used to iterate through a subset of dense_hash_set.
iterator begin() const Container Returns an iterator pointing to the beginning of the dense_hash_set.
iterator end() const Container Returns an iterator pointing to the end of the dense_hash_set.
local_iterator begin(size_type i) Unordered Associative Container (tr1) Returns a local_iterator pointing to the beginning of bucket i in the dense_hash_set.
local_iterator end(size_type i) Unordered Associative Container (tr1) Returns a local_iterator pointing to the end of bucket i in the dense_hash_set. For dense_hash_set, each bucket contains either 0 or 1 item.
const_local_iterator begin(size_type i) const Unordered Associative Container (tr1) Returns a const_local_iterator pointing to the beginning of bucket i in the dense_hash_set.
const_local_iterator end(size_type i) const Unordered Associative Container (tr1) Returns a const_local_iterator pointing to the end of bucket i in the dense_hash_set. For dense_hash_set, each bucket contains either 0 or 1 item.
size_type size() const Container Returns the size of the dense_hash_set.
size_type max_size() const Container Returns the largest possible size of the dense_hash_set.
bool empty() const Container true if the dense_hash_set's size is 0.
size_type bucket_count() const Hashed Associative Container Returns the number of buckets used by the dense_hash_set.
size_type max_bucket_count() const Hashed Associative Container Returns the largest possible number of buckets used by the dense_hash_set.
size_type bucket_size(size_type i) const Unordered Associative Container (tr1) Returns the number of elements in bucket i. For dense_hash_set, this will be either 0 or 1.
size_type bucket(const key_type& key) const Unordered Associative Container (tr1) If the key exists in the set, returns the index of the bucket containing the given key; otherwise, returns the bucket the key would be inserted into. This value may be passed to begin(size_type) and end(size_type).
float load_factor() const Unordered Associative Container (tr1) The number of elements in the dense_hash_set divided by the number of buckets.
float max_load_factor() const Unordered Associative Container (tr1) The maximum load factor before increasing the number of buckets in the dense_hash_set.
void max_load_factor(float new_grow) Unordered Associative Container (tr1) Sets the maximum load factor before increasing the number of buckets in the dense_hash_set.
float min_load_factor() const dense_hash_set The minimum load factor before decreasing the number of buckets in the dense_hash_set.
void min_load_factor(float new_grow) dense_hash_set Sets the minimum load factor before decreasing the number of buckets in the dense_hash_set.
void set_resizing_parameters(float shrink, float grow) dense_hash_set DEPRECATED. See below.
void resize(size_type n) Hashed Associative Container Increases the bucket count to hold at least n items. [2] [3]
void rehash(size_type n) Unordered Associative Container (tr1) Increases the bucket count to hold at least n items. This is identical to resize. [2] [3]
hasher hash_funct() const Hashed Associative Container Returns the hasher object used by the dense_hash_set.
hasher hash_function() const Unordered Associative Container (tr1) Returns the hasher object used by the dense_hash_set. This is identical to hash_funct.
key_equal key_eq() const Hashed Associative Container Returns the key_equal object used by the dense_hash_set.
allocator_type get_allocator() const Unordered Associative Container (tr1) Returns the allocator_type object used by the dense_hash_set: either the one passed in to the constructor, or a default Alloc instance.
dense_hash_set() Container Creates an empty dense_hash_set.
dense_hash_set(size_type n) Hashed Associative Container Creates an empty dense_hash_set that's optimized for holding up to n items. [3]
dense_hash_set(size_type n, const hasher& h) Hashed Associative Container Creates an empty dense_hash_set that's optimized for up to n items, using h as the hash function.
dense_hash_set(size_type n, const hasher& h, const key_equal& k) Hashed Associative Container Creates an empty dense_hash_set that's optimized for up to n items, using h as the hash function and k as the key equal function.
dense_hash_set(size_type n, const hasher& h, const key_equal& k, const allocator_type& a) Unordered Associative Container (tr1) Creates an empty dense_hash_set that's optimized for up to n items, using h as the hash function, k as the key equal function, and a as the allocator object.
template <class InputIterator>
dense_hash_set(InputIterator f, InputIterator l) 
[1]
Unique Hashed Associative Container Creates a dense_hash_set with a copy of a range.
template <class InputIterator>
dense_hash_set(InputIterator f, InputIterator l, size_type n) 
[1]
Unique Hashed Associative Container Creates a hash_set with a copy of a range that's optimized to hold up to n items.
template <class InputIterator>
dense_hash_set(InputIterator f, InputIterator l, size_type n, const
hasher& h) 
[1]
Unique Hashed Associative Container Creates a hash_set with a copy of a range that's optimized to hold up to n items, using h as the hash function.
template <class InputIterator>
dense_hash_set(InputIterator f, InputIterator l, size_type n, const
hasher& h, const key_equal& k) 
[1]
Unique Hashed Associative Container Creates a hash_set with a copy of a range that's optimized for holding up to n items, using h as the hash function and k as the key equal function.
template <class InputIterator>
dense_hash_set(InputIterator f, InputIterator l, size_type n, const
hasher& h, const key_equal& k, const allocator_type& a) 
[1]
Unordered Associative Container (tr1) Creates a hash_set with a copy of a range that's optimized for holding up to n items, using h as the hash function, k as the key equal function, and a as the allocator object.
dense_hash_set(const hash_set&) Container The copy constructor.
dense_hash_set& operator=(const hash_set&) Container The assignment operator
void swap(hash_set&) Container Swaps the contents of two hash_sets.
pair<iterator, bool> insert(const value_type& x)
Unique Associative Container Inserts x into the dense_hash_set.
template <class InputIterator>
void insert(InputIterator f, InputIterator l) 
[1]
Unique Associative Container Inserts a range into the dense_hash_set.
void set_empty_key(const key_type& key) [4] dense_hash_set See below.
void set_deleted_key(const key_type& key) [4] dense_hash_set See below.
void clear_deleted_key() [4] dense_hash_set See below.
void erase(iterator pos) Associative Container Erases the element pointed to by pos. [4]
size_type erase(const key_type& k) Associative Container Erases the element whose key is k. [4]
void erase(iterator first, iterator last) Associative Container Erases all elements in a range. [4]
void clear() Associative Container Erases all of the elements.
void clear_no_resize() dense_hash_set See below.
iterator find(const key_type& k) const Associative Container Finds an element whose key is k.
size_type count(const key_type& k) const Unique Associative Container Counts the number of elements whose key is k.
pair<iterator, iterator> equal_range(const
key_type& k) const
Associative Container Finds a range containing all elements whose key is k.
template <ValueSerializer, OUTPUT> bool serialize(ValueSerializer serializer, OUTPUT *fp) dense_hash_set See below.
template <ValueSerializer, INPUT> bool unserialize(ValueSerializer serializer, INPUT *fp) dense_hash_set See below.
NopointerSerializer dense_hash_set See below.
bool write_metadata(FILE *fp) dense_hash_set DEPRECATED. See below.
bool read_metadata(FILE *fp) dense_hash_set DEPRECATED. See below.
bool write_nopointer_data(FILE *fp) dense_hash_set DEPRECATED. See below.
bool read_nopointer_data(FILE *fp) dense_hash_set DEPRECATED. See below.
bool operator==(const hash_set&, const hash_set&)
Hashed Associative Container Tests two hash_sets for equality. This is a global function, not a member function.

New members

These members are not defined in the Unique Hashed Associative Container, Simple Associative Container, or tr1's Unordered Associative Container requirements, but are specific to dense_hash_set.
Member | Description
void set_empty_key(const key_type& key) Sets the distinguished "empty" key to key. This must be called immediately after construction, before any other dense_hash_set operation. [4]
void set_deleted_key(const key_type& key) Sets the distinguished "deleted" key to key. This must be called before any calls to erase(). [4]
void clear_deleted_key() Clears the distinguished "deleted" key. After this is called, calls to erase() are not valid on this object. [4]
void clear_no_resize() Clears the hashtable like clear() does, but does not recover the memory used for hashtable buckets. (The memory used by the items in the hashtable is still recovered.) This can save time for applications that want to reuse a dense_hash_set many times, each time with a similar number of objects.
void set_resizing_parameters(float shrink, float grow) This function is DEPRECATED. It is equivalent to calling min_load_factor(shrink); max_load_factor(grow).
template <ValueSerializer, OUTPUT> bool serialize(ValueSerializer serializer, OUTPUT *fp) Emit a serialization of the hash_set to a stream. See below.
template <ValueSerializer, INPUT> bool unserialize(ValueSerializer serializer, INPUT *fp) Read in a serialization of a hash_set from a stream, replacing the existing hash_set contents with the serialized contents. See below.
bool write_metadata(FILE *fp) This function is DEPRECATED. See below.
bool read_metadata(FILE *fp) This function is DEPRECATED. See below.
bool write_nopointer_data(FILE *fp) This function is DEPRECATED. See below.
bool read_nopointer_data(FILE *fp) This function is DEPRECATED. See below.

Notes

[1] This member function relies on member template functions, which may not be supported by all compilers. If your compiler supports member templates, you can call this function with any type of input iterator. If your compiler does not yet support member templates, though, then the arguments must either be of type const value_type* or of type dense_hash_set::const_iterator.

[2] In order to preserve iterators, erasing hashtable elements does not cause a hashtable to resize. This means that after a string of erase() calls, the hashtable will use more space than is required. At a cost of invalidating all current iterators, you can call resize() to manually compact the hashtable. The hashtable promotes too-small resize() arguments to the smallest legal value, so to compact a hashtable, it's sufficient to call resize(0).

[3] Unlike some other hashtable implementations, the optional n in the calls to the constructor, resize, and rehash indicates not the desired number of buckets that should be allocated, but instead the expected number of items to be inserted. The class then sizes the hash-set appropriately for the number of items specified. It's not an error to actually insert more or fewer items into the hashtable, but the implementation is most efficient -- does the fewest hashtable resizes -- if the number of inserted items is n or slightly less.

[4] dense_hash_set requires you call set_empty_key() immediately after constructing the hash-set, and before calling any other dense_hash_set method. (This is the largest difference between the dense_hash_set API and other hash-set APIs. See implementation.html for why this is necessary.) The argument to set_empty_key() should be a key-value that is never used for legitimate hash-set entries. If you have no such key value, you will be unable to use dense_hash_set. It is an error to call insert() with an item whose key is the "empty key."

dense_hash_set also requires you call set_deleted_key() before calling erase(). The argument to set_deleted_key() should be a key-value that is never used for legitimate hash-set entries. It must be different from the key-value used for set_empty_key(). It is an error to call erase() without first calling set_deleted_key(), and it is also an error to call insert() with an item whose key is the "deleted key."

There is no need to call set_deleted_key if you do not wish to call erase() on the hash-set.

It is acceptable to change the deleted-key at any time by calling set_deleted_key() with a new argument. You can also call clear_deleted_key(), at which point all keys become valid for insertion but no hashtable entries can be deleted until set_deleted_key() is called again.

Input/Output

It is possible to save and restore dense_hash_set objects to an arbitrary stream (such as a disk file) using the serialize() and unserialize() methods.

Each of these methods takes two arguments: a serializer, which says how to write hashtable items to disk, and a stream, which can be a C++ stream (istream or its subclasses for input, ostream or its subclasses for output), a FILE*, or a user-defined type (as described below).

dense_hash_set is a Hashed Associative Container that stores objects of type Key. dense_hash_set is a Simple Associative Container, meaning that its value type, as well as its key type, is Key. It is also a Unique Associative Container, meaning that no two elements have keys that compare equal using EqualKey.

The serializer is a functor that takes a stream and a single hashtable element (a value_type) and copies the hashtable element to the stream (for serialize()) or fills the hashtable element contents from the stream (for unserialize()), returning true on success or false on error. The copy-in and copy-out functions can be provided in a single functor. Here is a sample serializer that reads/writes a hashtable element for a string hash_set to a FILE*:

struct StringSerializer {
  bool operator()(FILE* fp, const std::string& value) const {
    assert(value.length() <= 255);   // we only support writing small strings
    const unsigned char size = value.length();
    if (fwrite(&size, 1, 1, fp) != 1)
      return false;
    if (size > 0 && fwrite(value.data(), size, 1, fp) != 1)  // skip write for empty strings
      return false;
    return true;
  }
  bool operator()(FILE* fp, std::string* value) const {
    unsigned char size;    // all strings are <= 255 chars long
    if (fread(&size, 1, 1, fp) != 1)
      return false;
    if (size > 0) {
      char* buf = new char[size];
      if (fread(buf, size, 1, fp) != 1) {
        delete[] buf;
        return false;
      }
      value->assign(buf, size);
      delete[] buf;
    } else {
      value->clear();
    }
    return true;
  }
};

Here is the functor being used in code (error checking omitted):

   dense_hash_set<string> myset = CreateSet();
   FILE* fp = fopen("hashtable.data", "w");
   myset.serialize(StringSerializer(), fp);
   fclose(fp);

   dense_hash_set<string> myset2;
   FILE* fp_in = fopen("hashtable.data", "r");
   myset2.unserialize(StringSerializer(), fp_in);
   fclose(fp_in);
   assert(myset == myset2);

Note that this example serializer can only serialize to a FILE*. If you want to also be able to use this serializer with C++ streams, you will need to write two more overloads of operator(): one that reads from an istream, and one that writes to an ostream. Likewise if you want to support serializing to a custom class.

If the key is "simple" enough, you can use the pre-supplied functor NopointerSerializer. This copies the hashtable data using the equivalent of a memcpy<>. Native C data types can be serialized this way, as can structs of native C data types. Pointers and STL objects cannot.

Note that NopointerSerializer() does not do any endian conversion. Thus, it is only appropriate when you intend to read the data on the same endian architecture as you write the data.

If you wish to serialize to your own stream type, you can do so by creating an object which supports two methods:

   bool Write(const void* data, size_t length);
   bool Read(void* data, size_t length);

Write() writes length bytes of data to a stream (presumably a stream owned by the object), while Read() reads length bytes from the stream into data. Both return true on success or false on error.

To unserialize a hashtable from a stream, you will typically create a new dense_hash_set object, then call unserialize() on it. unserialize() destroys the old contents of the object. You must pass in the appropriate ValueSerializer for the data being read in.

Both serialize() and unserialize() return true on success, or false if there was an error streaming the data.

Note that serialize() is not a const method, since it purges deleted elements before serializing. It is not safe to serialize from two threads at once, without synchronization.

NOTE: older versions of dense_hash_set provided a different API, consisting of read_metadata(), read_nopointer_data(), write_metadata(), write_nopointer_data(). These methods were never implemented and always did nothing but return false. You should exclusively use the new API for serialization.

Validity of Iterators

erase() is guaranteed not to invalidate any iterators -- except for any iterators pointing to the item being erased, of course. insert() invalidates all iterators, as does resize().

This is implemented by making erase() not resize the hashtable. If you desire maximum space efficiency, you can call resize(0) after a string of erase() calls, to force the hashtable to resize to the smallest possible size.

In addition to invalidating iterators, insert() and resize() invalidate all pointers into the hashtable. If you want to store a pointer to an object held in a dense_hash_set, either do so after finishing hashtable inserts, or store the object on the heap and a pointer to it in the dense_hash_set.

See also

The following are SGI STL, and some Google STL, concepts and classes related to dense_hash_set.

hash_set, Associative Container, Hashed Associative Container, Simple Associative Container, Unique Hashed Associative Container, set, map, multiset, multimap, hash_map, hash_multiset, hash_multimap, sparse_hash_set, sparse_hash_map, dense_hash_map

sparse_hash_set<Key, HashFcn, EqualKey, Alloc>

[Note: this document is formatted similarly to the SGI STL implementation documentation pages, and refers to concepts and classes defined there. However, neither this document nor the code it describes is associated with SGI, nor is it necessary to have SGI's STL implementation installed in order to use this class.]

sparse_hash_set<Key, HashFcn, EqualKey, Alloc>

sparse_hash_set is a Hashed Associative Container that stores objects of type Key. sparse_hash_set is a Simple Associative Container, meaning that its value type, as well as its key type, is Key. It is also a Unique Associative Container, meaning that no two elements have keys that compare equal using EqualKey.

Looking up an element in a sparse_hash_set by its key is efficient, so sparse_hash_set is useful for "dictionaries" where the order of elements is irrelevant. If it is important for the elements to be in a particular order, however, then set is more appropriate.

sparse_hash_set is distinguished from other hash-set implementations by its stingy use of memory and by the ability to save and restore contents to disk. On the other hand, this hash-set implementation, while still efficient, is slower than other hash-set implementations, and it also has requirements -- for instance, for a distinguished "deleted key" -- that may not be easy for all applications to satisfy.

This class is appropriate for applications that need to store large "dictionaries" in memory, or for applications that need these dictionaries to be persistent.

Example

(Note: this example uses SGI semantics for hash<> -- the kind used by gcc and most Unix compiler suites -- and not Dinkumware semantics -- the kind used by Microsoft Visual Studio. If you are using MSVC, this example will not compile as-is: you'll need to change hash to hash_compare, and you won't use eqstr at all. See the MSVC documentation for hash_map and hash_compare, for more details.)
#include <iostream>
#include <cstring>    // for strcmp, used by eqstr below
#include <sparsehash/sparse_hash_set>

using google::sparse_hash_set;      // namespace where class lives by default
using std::cout;
using std::endl;
using ext::hash;  // or __gnu_cxx::hash, or maybe tr1::hash, depending on your OS

struct eqstr
{
  bool operator()(const char* s1, const char* s2) const
  {
    return (s1 == s2) || (s1 && s2 && strcmp(s1, s2) == 0);
  }
};

void lookup(const sparse_hash_set<const char*, hash<const char*>, eqstr>& Set,
            const char* word)
{
  sparse_hash_set<const char*, hash<const char*>, eqstr>::const_iterator it
    = Set.find(word);
  cout << word << ": "
       << (it != Set.end() ? "present" : "not present")
       << endl;
}

int main()
{
  sparse_hash_set<const char*, hash<const char*>, eqstr> Set;
  Set.insert("kiwi");
  Set.insert("plum");
  Set.insert("apple");
  Set.insert("mango");
  Set.insert("apricot");
  Set.insert("banana");

  lookup(Set, "mango");
  lookup(Set, "apple");
  lookup(Set, "durian");
}

Definition

Defined in the header sparse_hash_set. This class is not part of the C++ standard, though it is mostly compatible with the tr1 class unordered_set.

Template parameters

Parameter | Description | Default
Key The hash_set's key and value type. This is also defined as sparse_hash_set::key_type and sparse_hash_set::value_type.  
HashFcn The hash function used by the hash_set. This is also defined as sparse_hash_set::hasher.
Note: Hashtable performance depends heavily on the choice of hash function. See the performance page for more information.
hash<Key>
EqualKey The hash_set key equality function: a binary predicate that determines whether two keys are equal. This is also defined as sparse_hash_set::key_equal. equal_to<Key>
Alloc The STL allocator to use. By default, uses the provided allocator libc_allocator_with_realloc, which likely gives better performance than other STL allocators due to its built-in support for realloc, which this container takes advantage of. If you use an allocator other than the default, note that this container imposes an additional requirement on the STL allocator type beyond those in [lib.allocator.requirements]: it does not support allocators that define alternate memory models. That is, it assumes that pointer, const_pointer, size_type, and difference_type are just T*, const T*, size_t, and ptrdiff_t, respectively. This is also defined as sparse_hash_set::allocator_type.

Model of

Unique Hashed Associative Container, Simple Associative Container

Type requirements

  • Key is Assignable.
  • EqualKey is a Binary Predicate whose argument type is Key.
  • EqualKey is an equivalence relation.
  • Alloc is an Allocator.

Public base classes

None.

Members

Member | Where defined | Description
value_type Container The type of object, T, stored in the hash_set.
key_type Associative Container The key type associated with value_type.
hasher Hashed Associative Container The sparse_hash_set's hash function.
key_equal Hashed Associative Container Function object that compares keys for equality.
allocator_type Unordered Associative Container (tr1) The type of the Allocator given as a template parameter.
pointer Container Pointer to T.
reference Container Reference to T
const_reference Container Const reference to T
size_type Container An unsigned integral type.
difference_type Container A signed integral type.
iterator Container Iterator used to iterate through a sparse_hash_set.
const_iterator Container Const iterator used to iterate through a sparse_hash_set. (iterator and const_iterator are the same type.)
local_iterator Unordered Associative Container (tr1) Iterator used to iterate through a subset of sparse_hash_set.
const_local_iterator Unordered Associative Container (tr1) Const iterator used to iterate through a subset of sparse_hash_set.
iterator begin() const Container Returns an iterator pointing to the beginning of the sparse_hash_set.
iterator end() const Container Returns an iterator pointing to the end of the sparse_hash_set.
local_iterator begin(size_type i) Unordered Associative Container (tr1) Returns a local_iterator pointing to the beginning of bucket i in the sparse_hash_set.
local_iterator end(size_type i) Unordered Associative Container (tr1) Returns a local_iterator pointing to the end of bucket i in the sparse_hash_set. For sparse_hash_set, each bucket contains either 0 or 1 item.
const_local_iterator begin(size_type i) const Unordered Associative Container (tr1) Returns a const_local_iterator pointing to the beginning of bucket i in the sparse_hash_set.
const_local_iterator end(size_type i) const Unordered Associative Container (tr1) Returns a const_local_iterator pointing to the end of bucket i in the sparse_hash_set. For sparse_hash_set, each bucket contains either 0 or 1 item.
size_type size() const Container Returns the size of the sparse_hash_set.
size_type max_size() const Container Returns the largest possible size of the sparse_hash_set.
bool empty() const Container true if the sparse_hash_set's size is 0.
size_type bucket_count() const Hashed Associative Container Returns the number of buckets used by the sparse_hash_set.
size_type max_bucket_count() const Hashed Associative Container Returns the largest possible number of buckets used by the sparse_hash_set.
size_type bucket_size(size_type i) const Unordered Associative Container (tr1) Returns the number of elements in bucket i. For sparse_hash_set, this will be either 0 or 1.
size_type bucket(const key_type& key) const Unordered Associative Container (tr1) If the key exists in the set, returns the index of the bucket containing the given key; otherwise, returns the bucket the key would be inserted into. This value may be passed to begin(size_type) and end(size_type).
float load_factor() const Unordered Associative Container (tr1) The number of elements in the sparse_hash_set divided by the number of buckets.
float max_load_factor() const Unordered Associative Container (tr1) The maximum load factor before increasing the number of buckets in the sparse_hash_set.
void max_load_factor(float new_grow) Unordered Associative Container (tr1) Sets the maximum load factor before increasing the number of buckets in the sparse_hash_set.
float min_load_factor() const sparse_hash_set The minimum load factor before decreasing the number of buckets in the sparse_hash_set.
void min_load_factor(float new_grow) sparse_hash_set Sets the minimum load factor before decreasing the number of buckets in the sparse_hash_set.
void set_resizing_parameters(float shrink, float grow) sparse_hash_set DEPRECATED. See below.
void resize(size_type n) Hashed Associative Container Increases the bucket count to hold at least n items. [2] [3]
void rehash(size_type n) Unordered Associative Container (tr1) Increases the bucket count to hold at least n items. This is identical to resize. [2] [3]
hasher hash_funct() const Hashed Associative Container Returns the hasher object used by the sparse_hash_set.
hasher hash_function() const Unordered Associative Container (tr1) Returns the hasher object used by the sparse_hash_set. This is identical to hash_funct.
key_equal key_eq() const Hashed Associative Container Returns the key_equal object used by the sparse_hash_set.
allocator_type get_allocator() const Unordered Associative Container (tr1) Returns the allocator_type object used by the sparse_hash_set: either the one passed in to the constructor, or a default Alloc instance.
sparse_hash_set() Container Creates an empty sparse_hash_set.
sparse_hash_set(size_type n) Hashed Associative Container Creates an empty sparse_hash_set that's optimized for holding up to n items. [3]
sparse_hash_set(size_type n, const hasher& h) Hashed Associative Container Creates an empty sparse_hash_set that's optimized for up to n items, using h as the hash function.
sparse_hash_set(size_type n, const hasher& h, const key_equal& k) Hashed Associative Container Creates an empty sparse_hash_set that's optimized for up to n items, using h as the hash function and k as the key equal function.
sparse_hash_set(size_type n, const hasher& h, const key_equal& k, const allocator_type& a) Unordered Associative Container (tr1) Creates an empty sparse_hash_set that's optimized for up to n items, using h as the hash function, k as the key equal function, and a as the allocator object.
template <class InputIterator> sparse_hash_set(InputIterator f, InputIterator l) [1] Unique Hashed Associative Container Creates a sparse_hash_set with a copy of a range.
template <class InputIterator> sparse_hash_set(InputIterator f, InputIterator l, size_type n) [1] Unique Hashed Associative Container Creates a sparse_hash_set with a copy of a range that's optimized to hold up to n items.
template <class InputIterator> sparse_hash_set(InputIterator f, InputIterator l, size_type n, const hasher& h) [1] Unique Hashed Associative Container Creates a sparse_hash_set with a copy of a range that's optimized to hold up to n items, using h as the hash function.
template <class InputIterator> sparse_hash_set(InputIterator f, InputIterator l, size_type n, const hasher& h, const key_equal& k) [1] Unique Hashed Associative Container Creates a sparse_hash_set with a copy of a range that's optimized for holding up to n items, using h as the hash function and k as the key equal function.
template <class InputIterator> sparse_hash_set(InputIterator f, InputIterator l, size_type n, const hasher& h, const key_equal& k, const allocator_type& a) [1] Unordered Associative Container (tr1) Creates a sparse_hash_set with a copy of a range that's optimized for holding up to n items, using h as the hash function, k as the key equal function, and a as the allocator object.
sparse_hash_set(const sparse_hash_set&) Container The copy constructor.
sparse_hash_set& operator=(const sparse_hash_set&) Container The assignment operator.
void swap(sparse_hash_set&) Container Swaps the contents of two sparse_hash_sets.
pair<iterator, bool> insert(const value_type& x) Unique Associative Container Inserts x into the sparse_hash_set.
template <class InputIterator> void insert(InputIterator f, InputIterator l) [1] Unique Associative Container Inserts a range into the sparse_hash_set.
void set_deleted_key(const key_type& key) [4] sparse_hash_set See below.
void clear_deleted_key() [4] sparse_hash_set See below.
void erase(iterator pos) Associative Container Erases the element pointed to by pos. [4]
size_type erase(const key_type& k) Associative Container Erases the element whose key is k. [4]
void erase(iterator first, iterator last) Associative Container Erases all elements in a range. [4]
void clear() Associative Container Erases all of the elements.
iterator find(const key_type& k) const Associative Container Finds an element whose key is k.
size_type count(const key_type& k) const Unique Associative Container Counts the number of elements whose key is k.
pair<iterator, iterator> equal_range(const key_type& k) const Associative Container Finds a range containing all elements whose key is k.
template <typename ValueSerializer, typename OUTPUT> bool serialize(ValueSerializer serializer, OUTPUT *fp) sparse_hash_set See below.
template <typename ValueSerializer, typename INPUT> bool unserialize(ValueSerializer serializer, INPUT *fp) sparse_hash_set See below.
NopointerSerializer sparse_hash_set See below.
bool write_metadata(FILE *fp) sparse_hash_set DEPRECATED. See below.
bool read_metadata(FILE *fp) sparse_hash_set DEPRECATED. See below.
bool write_nopointer_data(FILE *fp) sparse_hash_set DEPRECATED. See below.
bool read_nopointer_data(FILE *fp) sparse_hash_set DEPRECATED. See below.
bool operator==(const sparse_hash_set&, const sparse_hash_set&) Hashed Associative Container Tests two sparse_hash_sets for equality. This is a global function, not a member function.

New members

These members are not defined in the Unique Hashed Associative Container, Simple Associative Container, or tr1's Unordered Associative Container requirements, but are specific to sparse_hash_set.
MemberDescription
void set_deleted_key(const key_type& key) Sets the distinguished "deleted" key to key. This must be called before any calls to erase(). [4]
void clear_deleted_key() Clears the distinguished "deleted" key. After this is called, calls to erase() are not valid on this object. [4]
void set_resizing_parameters(float shrink, float grow) This function is DEPRECATED. It is equivalent to calling min_load_factor(shrink); max_load_factor(grow).
template <typename ValueSerializer, typename OUTPUT> bool serialize(ValueSerializer serializer, OUTPUT *fp) Emit a serialization of the hash_set to a stream. See below.
template <typename ValueSerializer, typename INPUT> bool unserialize(ValueSerializer serializer, INPUT *fp) Read in a serialization of a hash_set from a stream, replacing the existing hash_set contents with the serialized contents. See below.
bool write_metadata(FILE *fp) This function is DEPRECATED. See below.
bool read_metadata(FILE *fp) This function is DEPRECATED. See below.
bool write_nopointer_data(FILE *fp) This function is DEPRECATED. See below.
bool read_nopointer_data(FILE *fp) This function is DEPRECATED. See below.

Notes

[1] This member function relies on member template functions, which may not be supported by all compilers. If your compiler supports member templates, you can call this function with any type of input iterator. If your compiler does not yet support member templates, though, then the arguments must either be of type const value_type* or of type sparse_hash_set::const_iterator.

[2] In order to preserve iterators, erasing hashtable elements does not cause a hashtable to resize. This means that after a string of erase() calls, the hashtable will use more space than is required. At a cost of invalidating all current iterators, you can call resize() to manually compact the hashtable. The hashtable promotes too-small resize() arguments to the smallest legal value, so to compact a hashtable, it's sufficient to call resize(0).

[3] Unlike some other hashtable implementations, the optional n in the calls to the constructor, resize, and rehash indicates not the desired number of buckets that should be allocated, but instead the expected number of items to be inserted. The class then sizes the hash-set appropriately for the number of items specified. It's not an error to actually insert more or fewer items into the hashtable, but the implementation is most efficient -- does the fewest hashtable resizes -- if the number of inserted items is n or slightly less.

[4] sparse_hash_set requires you call set_deleted_key() before calling erase(). (This is the largest difference between the sparse_hash_set API and other hash-set APIs. See implementation.html for why this is necessary.) The argument to set_deleted_key() should be a key-value that is never used for legitimate hash-set entries. It is an error to call erase() without first calling set_deleted_key(), and it is also an error to call insert() with an item whose key is the "deleted key."

There is no need to call set_deleted_key if you do not wish to call erase() on the hash-set.

It is acceptable to change the deleted-key at any time by calling set_deleted_key() with a new argument. You can also call clear_deleted_key(), at which point all keys become valid for insertion but no hashtable entries can be deleted until set_deleted_key() is called again.

Input/Output

It is possible to save and restore sparse_hash_set objects to an arbitrary stream (such as a disk file) using the serialize() and unserialize() methods.

Each of these methods takes two arguments: a serializer, which says how to write hashtable items to disk, and a stream, which can be a C++ stream (istream or its subclasses for input, ostream or its subclasses for output), a FILE*, or a user-defined type (as described below).

The serializer is a functor that takes a stream and a single hashtable element (a value_type) and copies the hashtable element to the stream (for serialize()) or fills the hashtable element contents from the stream (for unserialize()), and returns true on success or false on error. The copy-in and copy-out functions can be provided in a single functor. Here is a sample serializer that reads and writes a hashtable element for a string hash_set to a FILE*:

struct StringSerializer {
  // Write: one length byte, followed by the string's bytes.
  bool operator()(FILE* fp, const std::string& value) const {
    assert(value.length() <= 255);   // we only support writing small strings
    const unsigned char size = value.length();
    if (fwrite(&size, 1, 1, fp) != 1)
      return false;
    if (size > 0 && fwrite(value.data(), size, 1, fp) != 1)
      return false;                  // fwrite of 0 items returns 0, so guard
    return true;
  }
  // Read: fill in *value, which points to uninitialized memory.
  bool operator()(FILE* fp, std::string* value) const {
    unsigned char size;    // all strings are <= 255 chars long
    if (fread(&size, 1, 1, fp) != 1)
      return false;
    char* buf = new char[size];
    if (size > 0 && fread(buf, size, 1, fp) != 1) {
      delete[] buf;
      return false;
    }
    new(value) std::string(buf, size);   // placement new; see note below
    delete[] buf;
    return true;
  }
};

Here is the functor being used in code (error checking omitted):

   sparse_hash_set<string> myset = CreateSet();
   FILE* fp = fopen("hashtable.data", "wb");   // binary mode, for portability
   myset.serialize(StringSerializer(), fp);
   fclose(fp);

   sparse_hash_set<string> myset2;
   FILE* fp_in = fopen("hashtable.data", "rb");
   myset2.unserialize(StringSerializer(), fp_in);
   fclose(fp_in);
   assert(myset == myset2);

Important note: the code above uses placement-new to instantiate the string. This is required for any non-POD type. The value_type passed in to the unserializer points to garbage memory, so it is not safe to assign to it directly if doing so causes a destructor to be called.

Also note that this example serializer can only serialize to a FILE*. If you want to also be able to use this serializer with C++ streams, you will need to write two more overloads of operator()'s, one that reads from an istream, and one that writes to an ostream. Likewise if you want to support serializing to a custom class.

If the key is "simple" enough, you can use the pre-supplied functor NopointerSerializer. This copies the hashtable data using the equivalent of a memcpy(). Native C data types can be serialized this way, as can structs of native C data types. Pointers and STL objects cannot.

Note that NopointerSerializer() does not do any endian conversion. Thus, it is only appropriate when you intend to read the data on the same endian architecture as you write the data.

If you wish to serialize to your own stream type, you can do so by creating an object which supports two methods:

   bool Write(const void* data, size_t length);
   bool Read(void* data, size_t length);

Write() writes length bytes of data to a stream (presumably a stream owned by the object), while Read() reads data bytes from the stream into data. Both return true on success or false on error.
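A minimal in-memory example of such an object (a sketch; everything except the Write/Read signatures is invented here):

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <vector>

// Satisfies the serialization stream interface: Write() appends bytes to
// an internal buffer, Read() consumes them from the front.
class MemStream {
 public:
  MemStream() : pos_(0) {}
  bool Write(const void* data, size_t length) {
    const unsigned char* p = static_cast<const unsigned char*>(data);
    buf_.insert(buf_.end(), p, p + length);
    return true;
  }
  bool Read(void* data, size_t length) {
    if (pos_ + length > buf_.size())
      return false;                       // not enough bytes left
    std::memcpy(data, buf_.data() + pos_, length);
    pos_ += length;
    return true;
  }
 private:
  std::vector<unsigned char> buf_;
  size_t pos_;
};
```

A MemStream* could then be passed as the second argument to serialize() and unserialize(), in place of a FILE* or C++ stream.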

To unserialize a hashtable from a stream, you will typically create a new sparse_hash_set object, then call unserialize() on it. unserialize() destroys the old contents of the object. You must pass in the appropriate ValueSerializer for the data being read in.

Both serialize() and unserialize() return true on success, or false if there was an error streaming the data.

Note that serialize() is not a const method, since it purges deleted elements before serializing. It is not safe to serialize from two threads at once without synchronization.

NOTE: older versions of sparse_hash_set provided a different API, consisting of read_metadata(), read_nopointer_data(), write_metadata(), write_nopointer_data(). Writing to disk consisted of a call to write_metadata() followed by write_nopointer_data() (if the hash data was POD) or a custom loop over the hashtable buckets to write the data (otherwise). Reading from disk was similar. Prefer the new API for new code.

Validity of Iterators

erase() is guaranteed not to invalidate any iterators -- except for any iterators pointing to the item being erased, of course. insert() invalidates all iterators, as does resize().

This is implemented by making erase() not resize the hashtable. If you desire maximum space efficiency, you can call resize(0) after a string of erase() calls, to force the hashtable to resize to the smallest possible size.

In addition to invalidating iterators, insert() and resize() invalidate all pointers into the hashtable. If you want to store a pointer to an object held in a sparse_hash_set, either do so after finishing hashtable inserts, or store the object on the heap and a pointer to it in the sparse_hash_set.

See also

The following are SGI STL, and some Google STL, concepts and classes related to sparse_hash_set.

hash_set, Associative Container, Hashed Associative Container, Simple Associative Container, Unique Hashed Associative Container, set, map, multiset, multimap, hash_map, hash_multiset, hash_multimap, sparsetable, sparse_hash_map, dense_hash_set, dense_hash_map

dense_hash_map<Key, Data, HashFcn, EqualKey, Alloc>

[Note: this document is formatted similarly to the SGI STL implementation documentation pages, and refers to concepts and classes defined there. However, neither this document nor the code it describes is associated with SGI, nor is it necessary to have SGI's STL implementation installed in order to use this class.]

dense_hash_map<Key, Data, HashFcn, EqualKey, Alloc>

dense_hash_map is a Hashed Associative Container that associates objects of type Key with objects of type Data. dense_hash_map is a Pair Associative Container, meaning that its value type is pair<const Key, Data>. It is also a Unique Associative Container, meaning that no two elements have keys that compare equal using EqualKey.

Looking up an element in a dense_hash_map by its key is efficient, so dense_hash_map is useful for "dictionaries" where the order of elements is irrelevant. If it is important for the elements to be in a particular order, however, then map is more appropriate.

dense_hash_map is distinguished from other hash-map implementations by its speed and by the ability to save and restore contents to disk. On the other hand, this hash-map implementation can use significantly more space than other hash-map implementations, and it also has requirements -- for instance, for a distinguished "empty key" -- that may not be easy for all applications to satisfy.

This class is appropriate for applications that need speedy access to relatively small "dictionaries" stored in memory, or for applications that need these dictionaries to be persistent. [implementation note]

Example

(Note: this example uses SGI semantics for hash<> -- the kind used by gcc and most Unix compiler suites -- and not Dinkumware semantics -- the kind used by Microsoft Visual Studio. If you are using MSVC, this example will not compile as-is: you'll need to change hash to hash_compare, and you won't use eqstr at all. See the MSVC documentation for hash_map and hash_compare, for more details.)
#include <iostream>
#include <sparsehash/dense_hash_map>

using google::dense_hash_map;      // namespace where class lives by default
using std::cout;
using std::endl;
using ext::hash;  // or __gnu_cxx::hash, or maybe tr1::hash, depending on your OS

struct eqstr
{
  bool operator()(const char* s1, const char* s2) const
  {
    return (s1 == s2) || (s1 && s2 && strcmp(s1, s2) == 0);
  }
};

int main()
{
  dense_hash_map<const char*, int, hash<const char*>, eqstr> months;
  
  months.set_empty_key(NULL);
  months["january"] = 31;
  months["february"] = 28;
  months["march"] = 31;
  months["april"] = 30;
  months["may"] = 31;
  months["june"] = 30;
  months["july"] = 31;
  months["august"] = 31;
  months["september"] = 30;
  months["october"] = 31;
  months["november"] = 30;
  months["december"] = 31;
  
  cout << "september -> " << months["september"] << endl;
  cout << "april     -> " << months["april"] << endl;
  cout << "june      -> " << months["june"] << endl;
  cout << "november  -> " << months["november"] << endl;
}

Definition

Defined in the header dense_hash_map. This class is not part of the C++ standard, though it is mostly compatible with the tr1 class unordered_map.

Template parameters

ParameterDescriptionDefault
Key The hash_map's key type. This is also defined as dense_hash_map::key_type.  
Data The hash_map's data type. This is also defined as dense_hash_map::data_type. [7]  
HashFcn The hash function used by the hash_map. This is also defined as dense_hash_map::hasher.
Note: Hashtable performance depends heavily on the choice of hash function. See the performance page for more information.
hash<Key>
EqualKey The hash_map key equality function: a binary predicate that determines whether two keys are equal. This is also defined as dense_hash_map::key_equal. equal_to<Key>
Alloc The STL allocator to use. By default, uses the provided allocator libc_allocator_with_realloc, which likely gives better performance than other STL allocators due to its built-in support for realloc, which this container takes advantage of. If you use an allocator other than the default, note that this container imposes an additional requirement on the STL allocator type beyond those in [lib.allocator.requirements]: it does not support allocators that define alternate memory models. That is, it assumes that pointer, const_pointer, size_type, and difference_type are just T*, const T*, size_t, and ptrdiff_t, respectively. This is also defined as dense_hash_map::allocator_type.

Model of

Unique Hashed Associative Container, Pair Associative Container

Type requirements

  • Key is Assignable.
  • EqualKey is a Binary Predicate whose argument type is Key.
  • EqualKey is an equivalence relation.
  • Alloc is an Allocator.

Public base classes

None.

Members

MemberWhere definedDescription
key_type Associative Container The dense_hash_map's key type, Key.
data_type Pair Associative Container The type of object associated with the keys.
value_type Pair Associative Container The type of object, pair<const key_type, data_type>, stored in the hash_map.
hasher Hashed Associative Container The dense_hash_map's hash function.
key_equal Hashed Associative Container Function object that compares keys for equality.
allocator_type Unordered Associative Container (tr1) The type of the Allocator given as a template parameter.
pointer Container Pointer to T.
reference Container Reference to T
const_reference Container Const reference to T
size_type Container An unsigned integral type.
difference_type Container A signed integral type.
iterator Container Iterator used to iterate through a dense_hash_map. [1]
const_iterator Container Const iterator used to iterate through a dense_hash_map.
local_iterator Unordered Associative Container (tr1) Iterator used to iterate through a subset of dense_hash_map. [1]
const_local_iterator Unordered Associative Container (tr1) Const iterator used to iterate through a subset of dense_hash_map.
iterator begin() Container Returns an iterator pointing to the beginning of the dense_hash_map.
iterator end() Container Returns an iterator pointing to the end of the dense_hash_map.
const_iterator begin() const Container Returns an const_iterator pointing to the beginning of the dense_hash_map.
const_iterator end() const Container Returns an const_iterator pointing to the end of the dense_hash_map.
local_iterator begin(size_type i) Unordered Associative Container (tr1) Returns a local_iterator pointing to the beginning of bucket i in the dense_hash_map.
local_iterator end(size_type i) Unordered Associative Container (tr1) Returns a local_iterator pointing to the end of bucket i in the dense_hash_map. For dense_hash_map, each bucket contains either 0 or 1 item.
const_local_iterator begin(size_type i) const Unordered Associative Container (tr1) Returns a const_local_iterator pointing to the beginning of bucket i in the dense_hash_map.
const_local_iterator end(size_type i) const Unordered Associative Container (tr1) Returns a const_local_iterator pointing to the end of bucket i in the dense_hash_map. For dense_hash_map, each bucket contains either 0 or 1 item.
size_type size() const Container Returns the size of the dense_hash_map.
size_type max_size() const Container Returns the largest possible size of the dense_hash_map.
bool empty() const Container true if the dense_hash_map's size is 0.
size_type bucket_count() const Hashed Associative Container Returns the number of buckets used by the dense_hash_map.
size_type max_bucket_count() const Hashed Associative Container Returns the largest possible number of buckets used by the dense_hash_map.
size_type bucket_size(size_type i) const Unordered Associative Container (tr1) Returns the number of elements in bucket i. For dense_hash_map, this will be either 0 or 1.
size_type bucket(const key_type& key) const Unordered Associative Container (tr1) If the key exists in the map, returns the index of the bucket containing the given key, otherwise, return the bucket the key would be inserted into. This value may be passed to begin(size_type) and end(size_type).
float load_factor() const Unordered Associative Container (tr1) The number of elements in the dense_hash_map divided by the number of buckets.
float max_load_factor() const Unordered Associative Container (tr1) The maximum load factor before increasing the number of buckets in the dense_hash_map.
void max_load_factor(float new_grow) Unordered Associative Container (tr1) Sets the maximum load factor before increasing the number of buckets in the dense_hash_map.
float min_load_factor() const dense_hash_map The minimum load factor before decreasing the number of buckets in the dense_hash_map.
void min_load_factor(float new_grow) dense_hash_map Sets the minimum load factor before decreasing the number of buckets in the dense_hash_map.
void set_resizing_parameters(float shrink, float grow) dense_hash_map DEPRECATED. See below.
void resize(size_type n) Hashed Associative Container Increases the bucket count to hold at least n items. [4] [5]
void rehash(size_type n) Unordered Associative Container (tr1) Increases the bucket count to hold at least n items. This is identical to resize. [4] [5]
hasher hash_funct() const Hashed Associative Container Returns the hasher object used by the dense_hash_map.
hasher hash_function() const Unordered Associative Container (tr1) Returns the hasher object used by the dense_hash_map. This is identical to hash_funct.
key_equal key_eq() const Hashed Associative Container Returns the key_equal object used by the dense_hash_map.
allocator_type get_allocator() const Unordered Associative Container (tr1) Returns the allocator_type object used by the dense_hash_map: either the one passed in to the constructor, or a default Alloc instance.
dense_hash_map() Container Creates an empty dense_hash_map.
dense_hash_map(size_type n) Hashed Associative Container Creates an empty dense_hash_map that's optimized for holding up to n items. [5]
dense_hash_map(size_type n, const hasher& h) Hashed Associative Container Creates an empty dense_hash_map that's optimized for up to n items, using h as the hash function.
dense_hash_map(size_type n, const hasher& h, const key_equal& k) Hashed Associative Container Creates an empty dense_hash_map that's optimized for up to n items, using h as the hash function and k as the key equal function.
dense_hash_map(size_type n, const hasher& h, const key_equal& k, const allocator_type& a) Unordered Associative Container (tr1) Creates an empty dense_hash_map that's optimized for up to n items, using h as the hash function, k as the key equal function, and a as the allocator object.
template <class InputIterator> dense_hash_map(InputIterator f, InputIterator l) [2] Unique Hashed Associative Container Creates a dense_hash_map with a copy of a range.
template <class InputIterator> dense_hash_map(InputIterator f, InputIterator l, size_type n) [2] Unique Hashed Associative Container Creates a dense_hash_map with a copy of a range that's optimized to hold up to n items.
template <class InputIterator> dense_hash_map(InputIterator f, InputIterator l, size_type n, const hasher& h) [2] Unique Hashed Associative Container Creates a dense_hash_map with a copy of a range that's optimized to hold up to n items, using h as the hash function.
template <class InputIterator> dense_hash_map(InputIterator f, InputIterator l, size_type n, const hasher& h, const key_equal& k) [2] Unique Hashed Associative Container Creates a dense_hash_map with a copy of a range that's optimized for holding up to n items, using h as the hash function and k as the key equal function.
template <class InputIterator> dense_hash_map(InputIterator f, InputIterator l, size_type n, const hasher& h, const key_equal& k, const allocator_type& a) [2] Unordered Associative Container (tr1) Creates a dense_hash_map with a copy of a range that's optimized for holding up to n items, using h as the hash function, k as the key equal function, and a as the allocator object.
dense_hash_map(const dense_hash_map&) Container The copy constructor.
dense_hash_map& operator=(const dense_hash_map&) Container The assignment operator.
void swap(dense_hash_map&) Container Swaps the contents of two dense_hash_maps.
pair<iterator, bool> insert(const value_type& x) Unique Associative Container Inserts x into the dense_hash_map.
template <class InputIterator> void insert(InputIterator f, InputIterator l) [2] Unique Associative Container Inserts a range into the dense_hash_map.
void set_empty_key(const key_type& key) [6] dense_hash_map See below.
void set_deleted_key(const key_type& key) [6] dense_hash_map See below.
void clear_deleted_key() [6] dense_hash_map See below.
void erase(iterator pos) Associative Container Erases the element pointed to by pos. [6]
size_type erase(const key_type& k) Associative Container Erases the element whose key is k. [6]
void erase(iterator first, iterator last) Associative Container Erases all elements in a range. [6]
void clear() Associative Container Erases all of the elements.
void clear_no_resize() dense_hash_map See below.
const_iterator find(const key_type& k) const Associative Container Finds an element whose key is k.
iterator find(const key_type& k) Associative Container Finds an element whose key is k.
size_type count(const key_type& k) const Unique Associative Container Counts the number of elements whose key is k.
pair<const_iterator, const_iterator> equal_range(const key_type& k) const Associative Container Finds a range containing all elements whose key is k.
pair<iterator, iterator> equal_range(const key_type& k) Associative Container Finds a range containing all elements whose key is k.
data_type& operator[](const key_type& k) [3] dense_hash_map See below.
template <typename ValueSerializer, typename OUTPUT> bool serialize(ValueSerializer serializer, OUTPUT *fp) dense_hash_map See below.
template <typename ValueSerializer, typename INPUT> bool unserialize(ValueSerializer serializer, INPUT *fp) dense_hash_map See below.
NopointerSerializer dense_hash_map See below.
bool write_metadata(FILE *fp) dense_hash_map DEPRECATED. See below.
bool read_metadata(FILE *fp) dense_hash_map DEPRECATED. See below.
bool write_nopointer_data(FILE *fp) dense_hash_map DEPRECATED. See below.
bool read_nopointer_data(FILE *fp) dense_hash_map DEPRECATED. See below.
bool operator==(const dense_hash_map&, const dense_hash_map&) Hashed Associative Container Tests two dense_hash_maps for equality. This is a global function, not a member function.

New members

These members are not defined in the Unique Hashed Associative Container, Pair Associative Container, or tr1's Unordered Associative Container requirements, but are specific to dense_hash_map.
MemberDescription
void set_empty_key(const key_type& key) Sets the distinguished "empty" key to key. This must be called immediately after construction, before any other dense_hash_map operation. [6]
void set_deleted_key(const key_type& key) Sets the distinguished "deleted" key to key. This must be called before any calls to erase(). [6]
void clear_deleted_key() Clears the distinguished "deleted" key. After this is called, calls to erase() are not valid on this object. [6]
void clear_no_resize() Clears the hashtable like clear() does, but does not recover the memory used for hashtable buckets. (The memory used by the items in the hashtable is still recovered.) This can save time for applications that want to reuse a dense_hash_map many times, each time with a similar number of objects.
data_type& operator[](const key_type& k) [3] Returns a reference to the object that is associated with a particular key. If the dense_hash_map does not already contain such an object, operator[] inserts the default object data_type(). [3]
void set_resizing_parameters(float shrink, float grow) This function is DEPRECATED. It is equivalent to calling min_load_factor(shrink); max_load_factor(grow).
template <typename ValueSerializer, typename OUTPUT> bool serialize(ValueSerializer serializer, OUTPUT *fp) Emit a serialization of the hash_map to a stream. See below.
template <typename ValueSerializer, typename INPUT> bool unserialize(ValueSerializer serializer, INPUT *fp) Read in a serialization of a hash_map from a stream, replacing the existing hash_map contents with the serialized contents. See below.
bool write_metadata(FILE *fp) This function is DEPRECATED. See below.
bool read_metadata(FILE *fp) This function is DEPRECATED. See below.
bool write_nopointer_data(FILE *fp) This function is DEPRECATED. See below.
bool read_nopointer_data(FILE *fp) This function is DEPRECATED. See below.

Notes

[1] dense_hash_map::iterator is not a mutable iterator, because dense_hash_map::value_type is not Assignable. That is, if i is of type dense_hash_map::iterator and p is of type dense_hash_map::value_type, then *i = p is not a valid expression. However, dense_hash_map::iterator isn't a constant iterator either, because it can be used to modify the object that it points to. Using the same notation as above, (*i).second = p is a valid expression.

[2] This member function relies on member template functions, which may not be supported by all compilers. If your compiler supports member templates, you can call this function with any type of input iterator. If your compiler does not yet support member templates, though, then the arguments must either be of type const value_type* or of type dense_hash_map::const_iterator.

[3] Since operator[] might insert a new element into the dense_hash_map, it can't possibly be a const member function. Note that the definition of operator[] is extremely simple: m[k] is equivalent to (*((m.insert(value_type(k, data_type()))).first)).second. Strictly speaking, this member function is unnecessary: it exists only for convenience.
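This identity is not specific to dense_hash_map; any pair-associative container satisfies it. A standalone sketch (using std::map so it compiles without the sparsehash headers) that checks that m[k] and the insert()-based expansion name the same element:

```cpp
#include <map>
#include <string>

// Returns true if m[k] and the insert()-based expansion of operator[]
// refer to the very same stored element (same address).
bool operator_bracket_identity(std::map<std::string, int>& m,
                               const std::string& k) {
  // (*((m.insert(value_type(k, data_type()))).first)).second, spelled out:
  int& via_insert =
      (*((m.insert(std::map<std::string, int>::value_type(k, int()))).first)).second;
  return &via_insert == &m[k];
}
```

If the key is absent, the insert() call creates the element with a default-constructed data_type, and operator[] then finds that same element.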

[4] In order to preserve iterators, erasing hashtable elements does not cause a hashtable to resize. This means that after a string of erase() calls, the hashtable will use more space than is required. At a cost of invalidating all current iterators, you can call resize() to manually compact the hashtable. The hashtable promotes too-small resize() arguments to the smallest legal value, so to compact a hashtable, it's sufficient to call resize(0).

[5] Unlike some other hashtable implementations, the optional n in the calls to the constructor, resize, and rehash indicates not the desired number of buckets that should be allocated, but instead the expected number of items to be inserted. The class then sizes the hash-map appropriately for the number of items specified. It's not an error to actually insert more or fewer items into the hashtable, but the implementation is most efficient -- does the fewest hashtable resizes -- if the number of inserted items is n or slightly less.

[6] dense_hash_map requires you call set_empty_key() immediately after constructing the hash-map, and before calling any other dense_hash_map method. (This is the largest difference between the dense_hash_map API and other hash-map APIs. See implementation.html for why this is necessary.) The argument to set_empty_key() should be a key-value that is never used for legitimate hash-map entries. If you have no such key value, you will be unable to use dense_hash_map. It is an error to call insert() with an item whose key is the "empty key."

dense_hash_map also requires you call set_deleted_key() before calling erase(). The argument to set_deleted_key() should be a key-value that is never used for legitimate hash-map entries. It must be different from the key-value used for set_empty_key(). It is an error to call erase() without first calling set_deleted_key(), and it is also an error to call insert() with an item whose key is the "deleted key."

There is no need to call set_deleted_key if you do not wish to call erase() on the hash-map.

It is acceptable to change the deleted-key at any time by calling set_deleted_key() with a new argument. You can also call clear_deleted_key(), at which point all keys become valid for insertion but no hashtable entries can be deleted until set_deleted_key() is called again.

[7] dense_hash_map requires that data_type has a zero-argument default constructor. This is because dense_hash_map uses the special value pair(empty_key, data_type()) to denote empty buckets, and thus needs to be able to create data_type using a zero-argument constructor.

If your data_type does not have a zero-argument default constructor, there are several workarounds:

  • Store a pointer to data_type in the map, instead of data_type directly. This may yield faster code as well, since hashtable-resizes will just have to move pointers around, rather than copying the entire data_type.
  • Add a zero-argument default constructor to data_type.
  • Subclass data_type and add a zero-argument default constructor to the subclass.

Input/Output

It is possible to save and restore dense_hash_map objects to an arbitrary stream (such as a disk file) using the serialize() and unserialize() methods.

Each of these methods takes two arguments: a serializer, which says how to write hashtable items to disk, and a stream, which can be a C++ stream (istream or its subclasses for input, ostream or its subclasses for output), a FILE*, or a user-defined type (as described below).

The serializer is a functor that takes a stream and a single hashtable element (a value_type, which is a pair of the key and data) and either copies the hashtable element to the stream (for serialize()) or fills the hashtable element contents from the stream (for unserialize()), returning true on success or false on error. The copy-in and copy-out functions can be provided in a single functor. Here is a sample serializer that reads and writes a hashtable element for an int-to-string hash_map to a FILE*:

struct StringToIntSerializer {
  bool operator()(FILE* fp, const std::pair<const int, std::string>& value) const {
    // Write the key.  We ignore endianness for this example.
    if (fwrite(&value.first, sizeof(value.first), 1, fp) != 1)
      return false;
    // Write the value.
    assert(value.second.length() <= 255);   // we only support writing small strings
    const unsigned char size = value.second.length();
    if (fwrite(&size, 1, 1, fp) != 1)
      return false;
    // Guard against size == 0: fwrite() returns 0 when asked to write a
    // zero-byte member, which would otherwise be misread as an error.
    if (size > 0 && fwrite(value.second.data(), size, 1, fp) != 1)
      return false;
    return true;
  }
  bool operator()(FILE* fp, std::pair<const int, std::string>* value) const {
    // Read the key.  Note the need for const_cast to get around
    // the fact hash_map keys are always const.
    if (fread(const_cast<int*>(&value->first), sizeof(value->first), 1, fp) != 1)
      return false;
    // Read the value.
    unsigned char size;    // all strings are <= 255 chars long
    if (fread(&size, 1, 1, fp) != 1)
      return false;
    char* buf = new char[size];
    // Guard against size == 0, for which fread() returns 0, not 1.
    if (size > 0 && fread(buf, size, 1, fp) != 1) {
      delete[] buf;
      return false;
    }
    value->second.assign(buf, size);
    delete[] buf;
    return true;
  }
};

Here is the functor being used in code (error checking omitted):

   dense_hash_map<int, string> mymap = CreateMap();
   FILE* fp = fopen("hashtable.data", "wb");
   mymap.serialize(StringToIntSerializer(), fp);
   fclose(fp);

   dense_hash_map<int, string> mymap2;
   FILE* fp_in = fopen("hashtable.data", "rb");
   mymap2.unserialize(StringToIntSerializer(), fp_in);
   fclose(fp_in);
   assert(mymap == mymap2);

Note that this example serializer can only serialize to a FILE*. If you want to also be able to use this serializer with C++ streams, you will need to write two more overloads of operator(): one that reads from an istream, and one that writes to an ostream. Likewise if you want to support serializing to a custom class.
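For instance, the two stream overloads might look like the following sketch (shown as a separate functor for brevity; in practice you would add these operator() overloads to StringToIntSerializer alongside the FILE* pair). The wire format matches the FILE* version above:

```cpp
#include <iostream>
#include <sstream>  // for the std::stringstream usage below
#include <string>
#include <utility>

struct StringToIntStreamSerializer {
  // ostream overload: write the key, a one-byte length, then the bytes.
  bool operator()(std::ostream* os,
                  const std::pair<const int, std::string>& value) const {
    os->write(reinterpret_cast<const char*>(&value.first), sizeof(value.first));
    const unsigned char size = static_cast<unsigned char>(value.second.length());
    os->write(reinterpret_cast<const char*>(&size), 1);
    os->write(value.second.data(), size);
    return os->good();
  }
  // istream overload: const_cast around the const key, as in the FILE* version.
  bool operator()(std::istream* is,
                  std::pair<const int, std::string>* value) const {
    is->read(reinterpret_cast<char*>(const_cast<int*>(&value->first)),
             sizeof(value->first));
    unsigned char size = 0;
    is->read(reinterpret_cast<char*>(&size), 1);
    std::string buf(size, '\0');
    if (size > 0) is->read(&buf[0], size);
    value->second.swap(buf);
    return !is->fail();
  }
};
```

A std::stringstream works for both directions, since it derives from both istream and ostream; the container calls these overloads on its own internal elements during serialize()/unserialize().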

If both the key and data are "simple" enough, you can use the pre-supplied functor NopointerSerializer. This copies the hashtable data using the equivalent of a memcpy(). Native C data types can be serialized this way, as can structs of native C data types. Pointers and STL objects cannot.

Note that NopointerSerializer() does not do any endian conversion. Thus, it is only appropriate when you intend to read the data on the same endian architecture as you write the data.

If you wish to serialize to your own stream type, you can do so by creating an object which supports two methods:

   bool Write(const void* data, size_t length);
   bool Read(void* data, size_t length);

Write() writes length bytes of data to a stream (presumably a stream owned by the object), while Read() reads length bytes from the stream into data. Both return true on success or false on error.
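As an illustration (this class is not part of sparsehash), here is a minimal in-memory source/sink backed by a std::string that satisfies the Write()/Read() protocol:

```cpp
#include <cstddef>
#include <cstring>
#include <string>

// A toy stream: Write() appends to an internal buffer, Read() consumes
// from it.  An object like this can be passed as the second argument to
// serialize() and unserialize().
class StringStream {
 public:
  StringStream() : read_pos_(0) {}
  bool Write(const void* data, size_t length) {
    buf_.append(static_cast<const char*>(data), length);
    return true;
  }
  bool Read(void* data, size_t length) {
    if (read_pos_ + length > buf_.size()) return false;  // not enough bytes
    std::memcpy(data, buf_.data() + read_pos_, length);
    read_pos_ += length;
    return true;
  }
 private:
  std::string buf_;
  size_t read_pos_;
};
```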

To unserialize a hashtable from a stream, you will typically create a new dense_hash_map object, then call unserialize() on it. unserialize() destroys the old contents of the object. You must pass in the appropriate ValueSerializer for the data being read in.

Both serialize() and unserialize() return true on success, or false if there was an error streaming the data.

Note that serialize() is not a const method, since it purges deleted elements before serializing. It is not safe to serialize from two threads at once, without synchronization.

NOTE: older versions of dense_hash_map provided a different API, consisting of read_metadata(), read_nopointer_data(), write_metadata(), write_nopointer_data(). These methods were never implemented and always did nothing but return false. You should exclusively use the new API for serialization.

Validity of Iterators

erase() is guaranteed not to invalidate any iterators -- except for any iterators pointing to the item being erased, of course. insert() invalidates all iterators, as does resize().

This is implemented by making erase() not resize the hashtable. If you desire maximum space efficiency, you can call resize(0) after a string of erase() calls, to force the hashtable to resize to the smallest possible size.

In addition to invalidating iterators, insert() and resize() invalidate all pointers into the hashtable. If you want to store a pointer to an object held in a dense_hash_map, either do so after finishing hashtable inserts, or store the object on the heap and a pointer to it in the dense_hash_map.

See also

The following are SGI STL, and some Google STL, concepts and classes related to dense_hash_map.

hash_map, Associative Container, Hashed Associative Container, Pair Associative Container, Unique Hashed Associative Container, set, map, multiset, multimap, hash_set, hash_multiset, hash_multimap, sparse_hash_map, sparse_hash_set, dense_hash_set

Performance notes: sparse_hash, dense_hash, sparsetable

Performance Numbers

Here are some performance numbers from an example desktop machine, taken from a version of time_hash_map that was instrumented to also report memory allocation information (this modification is not included by default because it required a big hack to do, including modifying the STL code to not try to do its own freelist management).

Note there are lots of caveats on these numbers: they may differ from machine to machine and compiler to compiler, and they only test a very particular usage pattern that may not match how you use hashtables -- for instance, they test hashtables with very small keys. However, they're still useful for a baseline comparison of the various hashtable implementations.

These figures are from a 2.80GHz Pentium 4 with 2GB of memory. The 'standard' hash_map and map implementations are the SGI STL code included with gcc2, compiled with gcc 2.95.3 -g -O2.

======
Average over 10000000 iterations
Wed Dec  8 14:56:38 PST 2004

SPARSE_HASH_MAP:
map_grow                  665 ns
map_predict/grow          303 ns
map_replace               177 ns
map_fetch                 117 ns
map_remove                192 ns
memory used in map_grow    84.3956 Mbytes

DENSE_HASH_MAP:
map_grow                   84 ns
map_predict/grow           22 ns
map_replace                18 ns
map_fetch                  13 ns
map_remove                 23 ns
memory used in map_grow   256.0000 Mbytes

STANDARD HASH_MAP:
map_grow                  162 ns
map_predict/grow          107 ns
map_replace                44 ns
map_fetch                  22 ns
map_remove                124 ns
memory used in map_grow   204.1643 Mbytes

STANDARD MAP:
map_grow                  297 ns
map_predict/grow          282 ns
map_replace               113 ns
map_fetch                 113 ns
map_remove                238 ns
memory used in map_grow   236.8081 Mbytes

A Note on Hash Functions

For good performance, the sparsehash hash routines depend on a good hash function: one that distributes data evenly. Many hashtable implementations come with sub-optimal hash functions that can degrade performance. For instance, the hash function given in Knuth's _Art of Computer Programming_, and the default string hash function in SGI's STL implementation, both distribute certain data sets unevenly, leading to poor performance.

As an example, in one test of the default SGI STL string hash function against the Hsieh hash function (see below), for a particular set of string keys, the Hsieh function resulted in hashtable lookups that were 20 times as fast as the STLPort hash function. The string keys were chosen to be "hard" to hash well, so these results may not be typical, but they are suggestive.

There has been much research over the years into good hash functions. Here are some hash functions of note.
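Whichever hash function you pick, plugging it in just means passing it as the HashFcn template parameter. As a concrete illustration, here is 32-bit-style FNV-1a (not part of sparsehash; shown standalone so it compiles without the sparsehash headers), usable as, e.g., sparse_hash_map<std::string, int, StringFnv1a>:

```cpp
#include <cstddef>
#include <string>

// FNV-1a: a simple string hash with good distribution for most data.
// The constants are the standard 32-bit FNV offset basis and prime.
struct StringFnv1a {
  size_t operator()(const std::string& s) const {
    size_t h = 2166136261u;                 // FNV offset basis
    for (size_t i = 0; i < s.size(); ++i) {
      h ^= static_cast<unsigned char>(s[i]);
      h *= 16777619u;                       // FNV prime
    }
    return h;
  }
};
```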

sparse_hash_map<Key, Data, HashFcn, EqualKey, Alloc>

[Note: this document is formatted similarly to the SGI STL implementation documentation pages, and refers to concepts and classes defined there. However, neither this document nor the code it describes is associated with SGI, nor is it necessary to have SGI's STL implementation installed in order to use this class.]

sparse_hash_map<Key, Data, HashFcn, EqualKey, Alloc>

sparse_hash_map is a Hashed Associative Container that associates objects of type Key with objects of type Data. sparse_hash_map is a Pair Associative Container, meaning that its value type is pair<const Key, Data>. It is also a Unique Associative Container, meaning that no two elements have keys that compare equal using EqualKey.

Looking up an element in a sparse_hash_map by its key is efficient, so sparse_hash_map is useful for "dictionaries" where the order of elements is irrelevant. If it is important for the elements to be in a particular order, however, then map is more appropriate.

sparse_hash_map is distinguished from other hash-map implementations by its stingy use of memory and by the ability to save and restore contents to disk. On the other hand, this hash-map implementation, while still efficient, is slower than other hash-map implementations, and it also has requirements -- for instance, for a distinguished "deleted key" -- that may not be easy for all applications to satisfy.

This class is appropriate for applications that need to store large "dictionaries" in memory, or for applications that need these dictionaries to be persistent.

Example

(Note: this example uses SGI semantics for hash<> -- the kind used by gcc and most Unix compiler suites -- and not Dinkumware semantics -- the kind used by Microsoft Visual Studio. If you are using MSVC, this example will not compile as-is: you'll need to change hash to hash_compare, and you won't use eqstr at all. See the MSVC documentation for hash_map and hash_compare, for more details.)
#include <iostream>
#include <sparsehash/sparse_hash_map>

using google::sparse_hash_map;      // namespace where class lives by default
using std::cout;
using std::endl;
using ext::hash;  // or __gnu_cxx::hash, or maybe tr1::hash, depending on your OS

struct eqstr
{
  bool operator()(const char* s1, const char* s2) const
  {
    return (s1 == s2) || (s1 && s2 && strcmp(s1, s2) == 0);
  }
};

int main()
{
  sparse_hash_map<const char*, int, hash<const char*>, eqstr> months;
  
  months["january"] = 31;
  months["february"] = 28;
  months["march"] = 31;
  months["april"] = 30;
  months["may"] = 31;
  months["june"] = 30;
  months["july"] = 31;
  months["august"] = 31;
  months["september"] = 30;
  months["october"] = 31;
  months["november"] = 30;
  months["december"] = 31;
  
  cout << "september -> " << months["september"] << endl;
  cout << "april     -> " << months["april"] << endl;
  cout << "june      -> " << months["june"] << endl;
  cout << "november  -> " << months["november"] << endl;
}

Definition

Defined in the header sparse_hash_map. This class is not part of the C++ standard, though it is mostly compatible with the tr1 class unordered_map.

Template parameters

ParameterDescriptionDefault
Key The hash_map's key type. This is also defined as sparse_hash_map::key_type.  
Data The hash_map's data type. This is also defined as sparse_hash_map::data_type.  
HashFcn The hash function used by the hash_map. This is also defined as sparse_hash_map::hasher.
Note: Hashtable performance depends heavily on the choice of hash function. See the performance page for more information.
hash<Key>
EqualKey The hash_map key equality function: a binary predicate that determines whether two keys are equal. This is also defined as sparse_hash_map::key_equal. equal_to<Key>
Alloc The STL allocator to use. By default, uses the provided allocator libc_allocator_with_realloc, which likely gives better performance than other STL allocators due to its built-in support for realloc, which this container takes advantage of. If you use an allocator other than the default, note that this container imposes an additional requirement on the STL allocator type beyond those in [lib.allocator.requirements]: it does not support allocators that define alternate memory models. That is, it assumes that pointer, const_pointer, size_type, and difference_type are just T*, const T*, size_t, and ptrdiff_t, respectively. This is also defined as sparse_hash_map::allocator_type.

Model of

Unique Hashed Associative Container, Pair Associative Container

Type requirements

  • Key is Assignable.
  • EqualKey is a Binary Predicate whose argument type is Key.
  • EqualKey is an equivalence relation.
  • Alloc is an Allocator.

Public base classes

None.

Members

MemberWhere definedDescription
key_type Associative Container The sparse_hash_map's key type, Key.
data_type Pair Associative Container The type of object associated with the keys.
value_type Pair Associative Container The type of object, pair<const key_type, data_type>, stored in the hash_map.
hasher Hashed Associative Container The sparse_hash_map's hash function.
key_equal Hashed Associative Container Function object that compares keys for equality.
allocator_type Unordered Associative Container (tr1) The type of the Allocator given as a template parameter.
pointer Container Pointer to T.
reference Container Reference to T.
const_reference Container Const reference to T.
size_type Container An unsigned integral type.
difference_type Container A signed integral type.
iterator Container Iterator used to iterate through a sparse_hash_map. [1]
const_iterator Container Const iterator used to iterate through a sparse_hash_map.
local_iterator Unordered Associative Container (tr1) Iterator used to iterate through a subset of sparse_hash_map. [1]
const_local_iterator Unordered Associative Container (tr1) Const iterator used to iterate through a subset of sparse_hash_map.
iterator begin() Container Returns an iterator pointing to the beginning of the sparse_hash_map.
iterator end() Container Returns an iterator pointing to the end of the sparse_hash_map.
const_iterator begin() const Container Returns a const_iterator pointing to the beginning of the sparse_hash_map.
const_iterator end() const Container Returns a const_iterator pointing to the end of the sparse_hash_map.
local_iterator begin(size_type i) Unordered Associative Container (tr1) Returns a local_iterator pointing to the beginning of bucket i in the sparse_hash_map.
local_iterator end(size_type i) Unordered Associative Container (tr1) Returns a local_iterator pointing to the end of bucket i in the sparse_hash_map. For sparse_hash_map, each bucket contains either 0 or 1 item.
const_local_iterator begin(size_type i) const Unordered Associative Container (tr1) Returns a const_local_iterator pointing to the beginning of bucket i in the sparse_hash_map.
const_local_iterator end(size_type i) const Unordered Associative Container (tr1) Returns a const_local_iterator pointing to the end of bucket i in the sparse_hash_map. For sparse_hash_map, each bucket contains either 0 or 1 item.
size_type size() const Container Returns the size of the sparse_hash_map.
size_type max_size() const Container Returns the largest possible size of the sparse_hash_map.
bool empty() const Container true if the sparse_hash_map's size is 0.
size_type bucket_count() const Hashed Associative Container Returns the number of buckets used by the sparse_hash_map.
size_type max_bucket_count() const Hashed Associative Container Returns the largest possible number of buckets used by the sparse_hash_map.
size_type bucket_size(size_type i) const Unordered Associative Container (tr1) Returns the number of elements in bucket i. For sparse_hash_map, this will be either 0 or 1.
size_type bucket(const key_type& key) const Unordered Associative Container (tr1) If the key exists in the map, returns the index of the bucket containing the given key; otherwise, returns the bucket the key would be inserted into. This value may be passed to begin(size_type) and end(size_type).
float load_factor() const Unordered Associative Container (tr1) The number of elements in the sparse_hash_map divided by the number of buckets.
float max_load_factor() const Unordered Associative Container (tr1) The maximum load factor before increasing the number of buckets in the sparse_hash_map.
void max_load_factor(float new_grow) Unordered Associative Container (tr1) Sets the maximum load factor before increasing the number of buckets in the sparse_hash_map.
float min_load_factor() const sparse_hash_map The minimum load factor before decreasing the number of buckets in the sparse_hash_map.
void min_load_factor(float new_grow) sparse_hash_map Sets the minimum load factor before decreasing the number of buckets in the sparse_hash_map.
void set_resizing_parameters(float shrink, float grow) sparse_hash_map DEPRECATED. See below.
void resize(size_type n) Hashed Associative Container Increases the bucket count to hold at least n items. [4] [5]
void rehash(size_type n) Unordered Associative Container (tr1) Increases the bucket count to hold at least n items. This is identical to resize. [4] [5]
hasher hash_funct() const Hashed Associative Container Returns the hasher object used by the sparse_hash_map.
hasher hash_function() const Unordered Associative Container (tr1) Returns the hasher object used by the sparse_hash_map. This is identical to hash_funct.
key_equal key_eq() const Hashed Associative Container Returns the key_equal object used by the sparse_hash_map.
allocator_type get_allocator() const Unordered Associative Container (tr1) Returns the allocator_type object used by the sparse_hash_map: either the one passed in to the constructor, or a default Alloc instance.
sparse_hash_map() Container Creates an empty sparse_hash_map.
sparse_hash_map(size_type n) Hashed Associative Container Creates an empty sparse_hash_map that's optimized for holding up to n items. [5]
sparse_hash_map(size_type n, const hasher& h) Hashed Associative Container Creates an empty sparse_hash_map that's optimized for up to n items, using h as the hash function.
sparse_hash_map(size_type n, const hasher& h, const key_equal& k) Hashed Associative Container Creates an empty sparse_hash_map that's optimized for up to n items, using h as the hash function and k as the key equal function.
sparse_hash_map(size_type n, const hasher& h, const key_equal& k, const allocator_type& a) Unordered Associative Container (tr1) Creates an empty sparse_hash_map that's optimized for up to n items, using h as the hash function, k as the key equal function, and a as the allocator object.
template <class InputIterator>
sparse_hash_map(InputIterator f, InputIterator l) 
[2]
Unique Hashed Associative Container Creates a sparse_hash_map with a copy of a range.
template <class InputIterator>
sparse_hash_map(InputIterator f, InputIterator l, size_type n) 
[2]
Unique Hashed Associative Container Creates a hash_map with a copy of a range that's optimized to hold up to n items.
template <class InputIterator>
sparse_hash_map(InputIterator f, InputIterator l, size_type n, const
hasher& h) 
[2]
Unique Hashed Associative Container Creates a hash_map with a copy of a range that's optimized to hold up to n items, using h as the hash function.
template <class InputIterator>
sparse_hash_map(InputIterator f, InputIterator l, size_type n, const
hasher& h, const key_equal& k) 
[2]
Unique Hashed Associative Container Creates a hash_map with a copy of a range that's optimized for holding up to n items, using h as the hash function and k as the key equal function.
template <class InputIterator>
sparse_hash_map(InputIterator f, InputIterator l, size_type n, const
hasher& h, const key_equal& k, const allocator_type& a) 
[2]
Unordered Associative Container (tr1) Creates a hash_map with a copy of a range that's optimized for holding up to n items, using h as the hash function, k as the key equal function, and a as the allocator object.
sparse_hash_map(const hash_map&) Container The copy constructor.
sparse_hash_map& operator=(const hash_map&) Container The assignment operator
void swap(hash_map&) Container Swaps the contents of two hash_maps.
pair<iterator, bool> insert(const value_type& x)
Unique Associative Container Inserts x into the sparse_hash_map.
template <class InputIterator>
void insert(InputIterator f, InputIterator l) 
[2]
Unique Associative Container Inserts a range into the sparse_hash_map.
void set_deleted_key(const key_type& key) [6] sparse_hash_map See below.
void clear_deleted_key() [6] sparse_hash_map See below.
void erase(iterator pos) Associative Container Erases the element pointed to by pos. [6]
size_type erase(const key_type& k) Associative Container Erases the element whose key is k. [6]
void erase(iterator first, iterator last) Associative Container Erases all elements in a range. [6]
void clear() Associative Container Erases all of the elements.
const_iterator find(const key_type& k) const Associative Container Finds an element whose key is k.
iterator find(const key_type& k) Associative Container Finds an element whose key is k.
size_type count(const key_type& k) const Unique Associative Container Counts the number of elements whose key is k.
pair<const_iterator, const_iterator> equal_range(const
key_type& k) const 
Associative Container Finds a range containing all elements whose key is k.
pair<iterator, iterator> equal_range(const
key_type& k) 
Associative Container Finds a range containing all elements whose key is k.
data_type& operator[](const key_type& k) [3] 
sparse_hash_map See below.
template <typename ValueSerializer, typename OUTPUT> bool serialize(ValueSerializer serializer, OUTPUT *fp) sparse_hash_map See below.
template <typename ValueSerializer, typename INPUT> bool unserialize(ValueSerializer serializer, INPUT *fp) sparse_hash_map See below.
NopointerSerializer sparse_hash_map See below.
bool write_metadata(FILE *fp) sparse_hash_map DEPRECATED. See below.
bool read_metadata(FILE *fp) sparse_hash_map DEPRECATED. See below.
bool write_nopointer_data(FILE *fp) sparse_hash_map DEPRECATED. See below.
bool read_nopointer_data(FILE *fp) sparse_hash_map DEPRECATED. See below.
bool operator==(const hash_map&, const hash_map&)
Hashed Associative Container Tests two hash_maps for equality. This is a global function, not a member function.

New members

These members are not defined in the Unique Hashed Associative Container, Pair Associative Container, or tr1's Unordered Associative Container requirements, but are specific to sparse_hash_map.
MemberDescription
void set_deleted_key(const key_type& key) Sets the distinguished "deleted" key to key. This must be called before any calls to erase(). [6]
void clear_deleted_key() Clears the distinguished "deleted" key. After this is called, calls to erase() are not valid on this object. [6]
data_type& 
operator[](const key_type& k) [3]
Returns a reference to the object that is associated with a particular key. If the sparse_hash_map does not already contain such an object, operator[] inserts the default object data_type(). [3]
void set_resizing_parameters(float shrink, float grow) This function is DEPRECATED. It is equivalent to calling min_load_factor(shrink); max_load_factor(grow).
template <typename ValueSerializer, typename OUTPUT> bool serialize(ValueSerializer serializer, OUTPUT *fp) Emit a serialization of the hash_map to a stream. See below.
template <typename ValueSerializer, typename INPUT> bool unserialize(ValueSerializer serializer, INPUT *fp) Read in a serialization of a hash_map from a stream, replacing the existing hash_map contents with the serialized contents. See below.
bool write_metadata(FILE *fp) This function is DEPRECATED. See below.
bool read_metadata(FILE *fp) This function is DEPRECATED. See below.
bool write_nopointer_data(FILE *fp) This function is DEPRECATED. See below.
bool read_nopointer_data(FILE *fp) This function is DEPRECATED. See below.

Notes

[1] sparse_hash_map::iterator is not a mutable iterator, because sparse_hash_map::value_type is not Assignable. That is, if i is of type sparse_hash_map::iterator and p is of type sparse_hash_map::value_type, then *i = p is not a valid expression. However, sparse_hash_map::iterator isn't a constant iterator either, because it can be used to modify the object that it points to. Using the same notation as above, (*i).second = p is a valid expression.

[2] This member function relies on member template functions, which may not be supported by all compilers. If your compiler supports member templates, you can call this function with any type of input iterator. If your compiler does not yet support member templates, though, then the arguments must either be of type const value_type* or of type sparse_hash_map::const_iterator.

[3] Since operator[] might insert a new element into the sparse_hash_map, it can't possibly be a const member function. Note that the definition of operator[] is extremely simple: m[k] is equivalent to (*((m.insert(value_type(k, data_type()))).first)).second. Strictly speaking, this member function is unnecessary: it exists only for convenience.

[4] In order to preserve iterators, erasing hashtable elements does not cause a hashtable to resize. This means that after a string of erase() calls, the hashtable will use more space than is required. At a cost of invalidating all current iterators, you can call resize() to manually compact the hashtable. The hashtable promotes too-small resize() arguments to the smallest legal value, so to compact a hashtable, it's sufficient to call resize(0).

[5] Unlike some other hashtable implementations, the optional n in the calls to the constructor, resize, and rehash indicates not the desired number of buckets that should be allocated, but instead the expected number of items to be inserted. The class then sizes the hash-map appropriately for the number of items specified. It's not an error to actually insert more or fewer items into the hashtable, but the implementation is most efficient -- does the fewest hashtable resizes -- if the number of inserted items is n or slightly less.

[6] sparse_hash_map requires you call set_deleted_key() before calling erase(). (This is the largest difference between the sparse_hash_map API and other hash-map APIs. See implementation.html for why this is necessary.) The argument to set_deleted_key() should be a key-value that is never used for legitimate hash-map entries. It is an error to call erase() without first calling set_deleted_key(), and it is also an error to call insert() with an item whose key is the "deleted key."

There is no need to call set_deleted_key if you do not wish to call erase() on the hash-map.

It is acceptable to change the deleted-key at any time by calling set_deleted_key() with a new argument. You can also call clear_deleted_key(), at which point all keys become valid for insertion but no hashtable entries can be deleted until set_deleted_key() is called again.

Note: If you use set_deleted_key, it is also necessary that data_type has a zero-argument default constructor. This is because sparse_hash_map uses the special value pair(deleted_key, data_type()) to denote deleted buckets, and thus needs to be able to create data_type using a zero-argument constructor.

If your data_type does not have a zero-argument default constructor, there are several workarounds:

  • Store a pointer to data_type in the map, instead of data_type directly. This may yield faster code as well, since hashtable-resizes will just have to move pointers around, rather than copying the entire data_type.
  • Add a zero-argument default constructor to data_type.
  • Subclass data_type and add a zero-argument default constructor to the subclass.

If you do not use set_deleted_key, then there is no requirement that data_type have a zero-argument default constructor.

Input/Output

It is possible to save and restore sparse_hash_map objects to an arbitrary stream (such as a disk file) using the serialize() and unserialize() methods.

Each of these methods takes two arguments: a serializer, which says how to write hashtable items to disk, and a stream, which can be a C++ stream (istream or its subclasses for input, ostream or its subclasses for output), a FILE*, or a user-defined type (as described below).

The serializer is a functor that takes a stream and a single hashtable element (a value_type, which is a pair of the key and data). It copies the hashtable element to the stream (for serialize()) or fills the hashtable element contents from the stream (for unserialize()), and returns true on success or false on error. The copy-in and copy-out functions can be provided in a single functor. Here is a sample serializer that reads/writes a hashtable element for an int-to-string hash_map to a FILE*:

struct StringToIntSerializer {
  bool operator()(FILE* fp, const std::pair<const int, std::string>& value) const {
    // Write the key.  We ignore endianness for this example.
    if (fwrite(&value.first, sizeof(value.first), 1, fp) != 1)
      return false;
    // Write the value.
    assert(value.second.length() <= 255);   // we only support writing small strings
    const unsigned char size = value.second.length();
    if (fwrite(&size, 1, 1, fp) != 1)
      return false;
    if (fwrite(value.second.data(), size, 1, fp) != 1)
      return false;
    return true;
  }
  bool operator()(FILE* fp, std::pair<const int, std::string>* value) const {
    // Read the key.  Note the need for const_cast to get around
    // the fact hash_map keys are always const.
    if (fread(const_cast<int*>(&value->first), sizeof(value->first), 1, fp) != 1)
      return false;
    // Read the value.
    unsigned char size;    // all strings are <= 255 chars long
    if (fread(&size, 1, 1, fp) != 1)
      return false;
    char* buf = new char[size];
    if (fread(buf, size, 1, fp) != 1) {
      delete[] buf;
      return false;
    }
    new(&value->second) std::string(buf, size);
    delete[] buf;
    return true;
  }
};

Here is the functor being used in code (error checking omitted):

   sparse_hash_map<int, string> mymap = CreateMap();
   FILE* fp = fopen("hashtable.data", "w");
   mymap.serialize(StringToIntSerializer(), fp);
   fclose(fp);

   sparse_hash_map<int, string> mymap2;
   FILE* fp_in = fopen("hashtable.data", "r");
   mymap2.unserialize(StringToIntSerializer(), fp_in);
   fclose(fp_in);
   assert(mymap == mymap2);

Important note: the code above uses placement-new to instantiate the string. This is required for any non-POD type (which is why we didn't need to worry about this to read in the integer key). The value_type passed in to the unserializer points to garbage memory, so it is not safe to assign to it directly if doing so causes a destructor to be called.

Also note that this example serializer can only serialize to a FILE*. If you want to also be able to use this serializer with C++ streams, you will need to write two more overloads of operator(): one that reads from an istream, and one that writes to an ostream. Likewise if you want to support serializing to a custom class.

If both the key and data are "simple" enough, you can use the pre-supplied functor NopointerSerializer. This copies the hashtable data using the equivalent of a memcpy(). Native C data types can be serialized this way, as can structs of native C data types. Pointers and STL objects cannot.

Note that NopointerSerializer() does not do any endian conversion. Thus, it is only appropriate when you intend to read the data on the same endian architecture as you write the data.

If you wish to serialize to your own stream type, you can do so by creating an object which supports two methods:

   bool Write(const void* data, size_t length);
   bool Read(void* data, size_t length);

Write() writes length bytes of data to a stream (presumably a stream owned by the object), while Read() reads length bytes from the stream into data. Both return true on success or false on error.

To unserialize a hashtable from a stream, you will typically create a new sparse_hash_map object, then call unserialize() on it. unserialize() destroys the old contents of the object. You must pass in the appropriate ValueSerializer for the data being read in.

Both serialize() and unserialize() return true on success, or false if there was an error streaming the data.

Note that serialize() is not a const method, since it purges deleted elements before serializing. It is not safe to serialize from two threads at once, without synchronization.

NOTE: older versions of sparse_hash_map provided a different API, consisting of read_metadata(), read_nopointer_data(), write_metadata(), write_nopointer_data(). Writing to disk consisted of a call to write_metadata() followed by write_nopointer_data() (if the hash data was POD) or a custom loop over the hashtable buckets to write the data (otherwise). Reading from disk was similar. Prefer the new API for new code.

Validity of Iterators

erase() is guaranteed not to invalidate any iterators -- except for any iterators pointing to the item being erased, of course. insert() invalidates all iterators, as does resize().

This is implemented by making erase() not resize the hashtable. If you desire maximum space efficiency, you can call resize(0) after a string of erase() calls, to force the hashtable to resize to the smallest possible size.

In addition to invalidating iterators, insert() and resize() invalidate all pointers into the hashtable. If you want to store a pointer to an object held in a sparse_hash_map, either do so after finishing hashtable inserts, or store the object on the heap and a pointer to it in the sparse_hash_map.

See also

The following are SGI STL, and some Google STL, concepts and classes related to sparse_hash_map.

hash_map, Associative Container, Hashed Associative Container, Pair Associative Container, Unique Hashed Associative Container, set, map, multiset, multimap, hash_set, hash_multiset, hash_multimap, sparsetable, sparse_hash_set, dense_hash_set, dense_hash_map

sparsehash-2.0.2/doc/designstyle.css

body {
  background-color: #ffffff;
  color: black;
  margin-right: 1in;
  margin-left: 1in;
}

h1, h2, h3, h4, h5, h6 {
  color: #3366ff;
  font-family: sans-serif;
}
@media print {
  /* Darker version for printing */
  h1, h2, h3, h4, h5, h6 {
    color: #000080;
    font-family: helvetica, sans-serif;
  }
}

h1 { text-align: center; font-size: 18pt; }
h2 { margin-left: -0.5in; }
h3 { margin-left: -0.25in; }
h4 { margin-left: -0.125in; }
hr { margin-left: -1in; }

/* Definition lists: definition term bold */
dt { font-weight: bold; }

address { text-align: right; }

/* Use the code/pre/samp/var tags for bits of code and for variables and objects. */
code, pre, samp, var { color: #006000; }

/* Use the file tag for file and directory paths and names. */
file { color: #905050; font-family: monospace; }

/* Use the kbd tag for stuff the user should type. */
kbd { color: #600000; }

div.note p {
  float: right;
  width: 3in;
  margin-right: 0%;
  padding: 1px;
  border: 2px solid #6060a0;
  background-color: #fffff0;
}

UL.nobullets {
  list-style-type: none;
  list-style-image: none;
  margin-left: -1em;
}

/* Pretty printing styles.  See prettify.js. */
.str { color: #080; }
.kwd { color: #008; }
.com { color: #800; }
.typ { color: #606; }
.lit { color: #066; }
.pun { color: #660; }
.pln { color: #000; }
.tag { color: #008; }
.atn { color: #606; }
.atv { color: #080; }
pre.prettyprint { padding: 2px; border: 1px solid #888; }
.embsrc { background: #eee; }

@media print {
  .str { color: #060; }
  .kwd { color: #006; font-weight: bold; }
  .com { color: #600; font-style: italic; }
  .typ { color: #404; font-weight: bold; }
  .lit { color: #044; }
  .pun { color: #440; }
  .pln { color: #000; }
  .tag { color: #006; font-weight: bold; }
  .atn { color: #404; }
  .atv { color: #060; }
}

/* Table Column Headers */
.hdr  { color: #006; font-weight: bold; background-color: #dddddd; }
.hdr2 { color: #006; background-color: #eeeeee; }

sparsehash-2.0.2/doc/index.html

Sparsehash Package (formerly Google Sparsehash)


The sparsehash package consists of two hashtable implementations: sparse, which is designed to be very space efficient, and dense, which is designed to be very time efficient. For each one, the package provides both a hash-map and a hash-set, to mirror the classes in the common STL implementation.

Documentation on how to use these classes:

In addition to the hash-map (and hash-set) classes, there's also a lower-level class that implements a "sparse" array. This class can be useful in its own right; consider using it when you'd normally use a sparse_hash_map, but your keys are all small-ish integers.

There is also a doc explaining the implementation details of these classes, for those who are curious. And finally, you can see some performance comparisons, both among the various classes here and between these implementations and other standard hashtable implementations.


Craig Silverstein
Last modified: Thu Jan 25 17:58:02 PST 2007
sparsehash-2.0.2/doc/sparsetable.html

[Note: this document is formatted similarly to the SGI STL implementation documentation pages, and refers to concepts and classes defined there. However, neither this document nor the code it describes is associated with SGI, nor is it necessary to have SGI's STL implementation installed in order to use this class.]

sparsetable<T, GROUP_SIZE>

A sparsetable is a Random Access Container that supports constant time random access to elements, and constant time insertion and removal of elements. It implements the "array" or "table" abstract data type. The number of elements in a sparsetable is set at constructor time, though you can change it at any time by calling resize().

sparsetable is distinguished from other array implementations, including the default C implementation, in its stingy use of memory -- in particular, unused array elements require only about 1 bit of memory to store, rather than sizeof(T) bytes -- and by the ability to save and restore contents to disk. On the other hand, this array implementation, while still efficient, is slower than other array implementations.

A sparsetable distinguishes between table elements that have been assigned and those that are unassigned. Assigned table elements are those that have had a value set via set(), operator[], assignment via an iterator, and so forth. Unassigned table elements are those that have not had a value set in one of these ways, or that have been explicitly unassigned via a call to erase() or clear(). Lookup is valid on both assigned and unassigned table elements; for unassigned elements, lookup returns the default value T().

This class is appropriate for applications that need to store large arrays in memory, or for applications that need these arrays to be persistent.

Example

#include <sparsehash/sparsetable>
#include <iostream>

using google::sparsetable;      // namespace where class lives by default
using std::cout;

int main() {
  sparsetable<int> t(100);
  t[5] = 6;
  cout << "t[5] = " << t[5] << "\n";
  cout << "Default value = " << t[99] << "\n";   // unassigned: prints T(), i.e. 0
  return 0;
}

Definition

Defined in the header sparsetable. This class is not part of the C++ standard.

Template parameters

Parameter | Description | Default
T The sparsetable's value type: the type of object that is stored in the table.  
GROUP_SIZE The number of elements in each sparsetable group (see the implementation doc for more details on this value). This almost never need be specified; the default template parameter value works well in all situations.  

Model of

Random Access Container

Type requirements

None, except for those imposed by the requirements of Random Access Container

Public base classes

None.

Members

Member | Where defined | Description
value_type Container The type of object, T, stored in the table.
pointer Container Pointer to T.
reference Container Reference to T.
const_reference Container Const reference to T.
size_type Container An unsigned integral type.
difference_type Container A signed integral type.
iterator Container Iterator used to iterate through a sparsetable.
const_iterator Container Const iterator used to iterate through a sparsetable.
reverse_iterator Reversible Container Iterator used to iterate backwards through a sparsetable.
const_reverse_iterator Reversible Container Const iterator used to iterate backwards through a sparsetable.
nonempty_iterator sparsetable Iterator used to iterate through the assigned elements of the sparsetable.
const_nonempty_iterator sparsetable Const iterator used to iterate through the assigned elements of the sparsetable.
reverse_nonempty_iterator sparsetable Iterator used to iterate backwards through the assigned elements of the sparsetable.
const_reverse_nonempty_iterator sparsetable Const iterator used to iterate backwards through the assigned elements of the sparsetable.
destructive_iterator sparsetable Iterator used to iterate through the assigned elements of the sparsetable, erasing elements as it iterates. [1]
iterator begin() Container Returns an iterator pointing to the beginning of the sparsetable.
iterator end() Container Returns an iterator pointing to the end of the sparsetable.
const_iterator begin() const Container Returns an const_iterator pointing to the beginning of the sparsetable.
const_iterator end() const Container Returns an const_iterator pointing to the end of the sparsetable.
reverse_iterator rbegin() Reversible Container Returns a reverse_iterator pointing to the beginning of the reversed sparsetable.
reverse_iterator rend() Reversible Container Returns a reverse_iterator pointing to the end of the reversed sparsetable.
const_reverse_iterator rbegin() const Reversible Container Returns a const_reverse_iterator pointing to the beginning of the reversed sparsetable.
const_reverse_iterator rend() const Reversible Container Returns a const_reverse_iterator pointing to the end of the reversed sparsetable.
nonempty_iterator nonempty_begin() sparsetable Returns a nonempty_iterator pointing to the first assigned element of the sparsetable.
nonempty_iterator nonempty_end() sparsetable Returns a nonempty_iterator pointing to the end of the sparsetable.
const_nonempty_iterator nonempty_begin() const sparsetable Returns a const_nonempty_iterator pointing to the first assigned element of the sparsetable.
const_nonempty_iterator nonempty_end() const sparsetable Returns a const_nonempty_iterator pointing to the end of the sparsetable.
reverse_nonempty_iterator nonempty_rbegin() sparsetable Returns a reverse_nonempty_iterator pointing to the first assigned element of the reversed sparsetable.
reverse_nonempty_iterator nonempty_rend() sparsetable Returns a reverse_nonempty_iterator pointing to the end of the reversed sparsetable.
const_reverse_nonempty_iterator nonempty_rbegin() const sparsetable Returns a const_reverse_nonempty_iterator pointing to the first assigned element of the reversed sparsetable.
const_reverse_nonempty_iterator nonempty_rend() const sparsetable Returns a const_reverse_nonempty_iterator pointing to the end of the reversed sparsetable.
destructive_iterator destructive_begin() sparsetable Returns a destructive_iterator pointing to the first assigned element of the sparsetable.
destructive_iterator destructive_end() sparsetable Returns a destructive_iterator pointing to the end of the sparsetable.
size_type size() const Container Returns the size of the sparsetable.
size_type max_size() const Container Returns the largest possible size of the sparsetable.
bool empty() const Container true if the sparsetable's size is 0.
size_type num_nonempty() const sparsetable Returns the number of sparsetable elements that are currently assigned.
sparsetable(size_type n) Container Creates a sparsetable with n elements.
sparsetable(const sparsetable&) Container The copy constructor.
~sparsetable() Container The destructor.
sparsetable& operator=(const sparsetable&) Container The assignment operator.
void swap(sparsetable&) Container Swaps the contents of two sparsetables.
reference operator[](size_type n) Random Access Container Returns the n'th element. [2]
const_reference operator[](size_type n) const Random Access Container Returns the n'th element.
bool test(size_type i) const sparsetable true if the i'th element of the sparsetable is assigned.
bool test(iterator pos) const sparsetable true if the sparsetable element pointed to by pos is assigned.
bool test(const_iterator pos) const sparsetable true if the sparsetable element pointed to by pos is assigned.
const_reference get(size_type i) const sparsetable returns the i'th element of the sparsetable.
reference set(size_type i, const_reference val) sparsetable Sets the i'th element of the sparsetable to value val.
void erase(size_type i) sparsetable Erases the i'th element of the sparsetable.
void erase(iterator pos) sparsetable Erases the element of the sparsetable pointed to by pos.
void erase(iterator first, iterator last) sparsetable Erases the elements of the sparsetable in the range [first, last).
void clear() sparsetable Erases all of the elements.
void resize(size_type n) sparsetable Changes the size of sparsetable to n.
bool write_metadata(FILE *fp) sparsetable See below.
bool read_metadata(FILE *fp) sparsetable See below.
bool write_nopointer_data(FILE *fp) sparsetable See below.
bool read_nopointer_data(FILE *fp) sparsetable See below.
bool operator==(const sparsetable&, const sparsetable&)
Forward Container Tests two sparsetables for equality. This is a global function, not a member function.
bool operator<(const sparsetable&, const sparsetable&)
Forward Container Lexicographical comparison. This is a global function, not a member function.

New members

These members are not defined in the Random Access Container requirement, but are specific to sparsetable.
Member | Description
nonempty_iterator Iterator used to iterate through the assigned elements of the sparsetable.
const_nonempty_iterator Const iterator used to iterate through the assigned elements of the sparsetable.
reverse_nonempty_iterator Iterator used to iterate backwards through the assigned elements of the sparsetable.
const_reverse_nonempty_iterator Const iterator used to iterate backwards through the assigned elements of the sparsetable.
destructive_iterator Iterator used to iterate through the assigned elements of the sparsetable, erasing elements as it iterates. [1]
nonempty_iterator nonempty_begin() Returns a nonempty_iterator pointing to the first assigned element of the sparsetable.
nonempty_iterator nonempty_end() Returns a nonempty_iterator pointing to the end of the sparsetable.
const_nonempty_iterator nonempty_begin() const Returns a const_nonempty_iterator pointing to the first assigned element of the sparsetable.
const_nonempty_iterator nonempty_end() const Returns a const_nonempty_iterator pointing to the end of the sparsetable.
reverse_nonempty_iterator nonempty_rbegin() Returns a reverse_nonempty_iterator pointing to the first assigned element of the reversed sparsetable.
reverse_nonempty_iterator nonempty_rend() Returns a reverse_nonempty_iterator pointing to the end of the reversed sparsetable.
const_reverse_nonempty_iterator nonempty_rbegin() const Returns a const_reverse_nonempty_iterator pointing to the first assigned element of the reversed sparsetable.
const_reverse_nonempty_iterator nonempty_rend() const Returns a const_reverse_nonempty_iterator pointing to the end of the reversed sparsetable.
destructive_iterator destructive_begin() Returns a destructive_iterator pointing to the first assigned element of the sparsetable.
destructive_iterator destructive_end() Returns a destructive_iterator pointing to the end of the sparsetable.
size_type num_nonempty() const Returns the number of sparsetable elements that are currently assigned.
bool test(size_type i) const true if the i'th element of the sparsetable is assigned.
bool test(iterator pos) const true if the sparsetable element pointed to by pos is assigned.
bool test(const_iterator pos) const true if the sparsetable element pointed to by pos is assigned.
const_reference get(size_type i) const returns the i'th element of the sparsetable. If the i'th element is assigned, the assigned value is returned, otherwise, the default value T() is returned.
reference set(size_type i, const_reference val) Sets the i'th element of the sparsetable to value val, and returns a reference to the i'th element of the table. This operation causes the i'th element to be assigned.
void erase(size_type i) Erases the i'th element of the sparsetable. This operation causes the i'th element to be unassigned.
void erase(iterator pos) Erases the element of the sparsetable pointed to by pos. This operation causes that element to be unassigned.
void erase(iterator first, iterator last) Erases the elements of the sparsetable in the range [first, last). This operation causes these elements to be unassigned.
void clear() Erases all of the elements. This causes all elements to be unassigned.
void resize(size_type n) Changes the size of sparsetable to n. If n is greater than the old size, new, unassigned elements are appended. If n is less than the old size, all elements in positions >= n are deleted.
bool write_metadata(FILE *fp) Write sparsetable metadata to fp. See below.
bool read_metadata(FILE *fp) Read sparsetable metadata from fp. See below.
bool write_nopointer_data(FILE *fp) Write sparsetable contents to fp. This is valid only if the table's value type is "plain" data. See below.
bool read_nopointer_data(FILE *fp) Read sparsetable contents from fp. This is valid only if the table's value type is "plain" data. See below.

Notes

[1] sparsetable::destructive_iterator iterates through a sparsetable like a normal iterator, but ++it may delete the element being iterated past. Obviously, this iterator can only be used once on a given table! One application of this iterator is to copy data from a sparsetable to some other data structure without using extra memory to store the data in both places during the copy.

[2] Since operator[] might insert a new element into the sparsetable, it can't possibly be a const member function. In theory, since it might insert a new element, it should cause the element it refers to to become assigned. However, this is undesirable when operator[] is used to examine elements rather than to assign them. Thus, as an implementation trick, operator[] does not really return a reference. Instead it returns an object that behaves almost exactly like a reference. This object delays marking the corresponding sparsetable element as assigned until the object is actually assigned to.

For a bit more detail: the object returned by operator[] is an opaque type which defines operator=, operator reference(), and operator&. The first operator controls assigning to the value. The second controls examining the value. The third controls pointing to the value.

All three operators perform exactly as an object of type reference would perform. Problems arise only when this object is accessed in situations where C++ cannot do the conversion by default. By far the most common situation is with variadic functions such as printf. In such situations, you may need to manually cast the object to the right type:

   printf("%d", static_cast<typename table::reference>(table[i]));

Input/Output

It is possible to save and restore sparsetable objects to disk. Storage takes place in two steps. The first writes the table metadata. The second writes the actual data.

To write a sparsetable to disk, first call write_metadata() on an open file pointer. This saves the sparsetable information in a byte-order-independent format.

After the metadata has been written to disk, you must write the actual data stored in the sparsetable to disk. If the value is "simple" enough, you can do this by calling write_nopointer_data(). "Simple" data is data that can be safely copied to disk via fwrite(). Native C data types fall into this category, as do structs of native C data types. Pointers and STL objects do not.

Note that write_nopointer_data() does not do any endian conversion. Thus, it is only appropriate when you intend to read the data on the same endian architecture as you write the data.

If you cannot use write_nopointer_data() for any reason, you can write the data yourself by iterating over the sparsetable with a const_nonempty_iterator and writing the key and data in any manner you wish.

To read the hashtable information from disk, first you must create a sparsetable object. Then open a file pointer to point to the saved sparsetable, and call read_metadata(). If you saved the data via write_nopointer_data(), you can follow the read_metadata() call with a call to read_nopointer_data(). This is all that is needed.

If you saved the data through a custom write routine, you must call a custom read routine to read in the data. To do this, iterate over the sparsetable with a nonempty_iterator; this works because the metadata has already been set up. For each iterator item, you can read the key and value from disk, and set it appropriately. The code might look like this:

   for (sparsetable<int*>::nonempty_iterator it = t.nonempty_begin();
        it != t.nonempty_end(); ++it) {
       *it = new int;
       fread(*it, sizeof(int), 1, fp);
   }

Here's another example, where the item stored in the sparsetable is a C++ object with a non-trivial constructor. In this case, you must use "placement new" to construct the object at the correct memory location.

   for (sparsetable<ComplicatedCppClass>::nonempty_iterator it = t.nonempty_begin();
        it != t.nonempty_end(); ++it) {
       int constructor_arg;   // ComplicatedCppClass takes an int to construct
       fread(&constructor_arg, sizeof(int), 1, fp);
       new (&(*it)) ComplicatedCppClass(constructor_arg);     // placement new
   }

See also

The following are SGI STL concepts and classes related to sparsetable.

Container, Random Access Container, sparse_hash_set, sparse_hash_map

sparsehash-2.0.2/doc/implementation.html

Implementation of sparse_hash_map, dense_hash_map, and sparsetable

This document contains a few notes on how the data structures in this package are implemented. This discussion refers at several points to the classic text in this area: Knuth, The Art of Computer Programming, Vol 3, Hashing.

sparsetable

For specificity, consider the declaration

   sparsetable<Foo> t(100);        // a sparse array with 100 elements

A sparsetable is a random access container that implements a sparse array, that is, an array that uses very little memory to store unassigned indices (in this case, between 1-2 bits per unassigned index). For instance, if you allocate an array of size 5 and assign a[2] = [big struct], then a[2] will take up a lot of memory but a[0], a[1], a[3], and a[4] will not. Array elements that have a value are called "assigned". Array elements that have no value yet, or have had their value cleared using erase() or clear(), are called "unassigned". For assigned elements, lookups return the assigned value; for unassigned elements, they return the default value, which for t is Foo().

sparsetable is implemented as an array of "groups". Each group is responsible for M array indices. The first group knows about t[0]..t[M-1], the second about t[M]..t[2M-1], and so forth. (M is 48 by default.) At construct time, t creates an array of (99/M + 1) groups. From this point on, all operations -- insert, delete, lookup -- are passed to the appropriate group. In particular, any operation on t[i] is actually performed on (t.group[i / M])[i % M].

Each group consists of a vector, which holds assigned values, and a bitmap of size M, which indicates which indices are assigned. A lookup works as follows: the group is asked to look up index i, where i < M. The group looks at bitmap[i]. If it's 0, the lookup fails. If it's 1, then the group has to find the appropriate value in the vector.

find()

Finding the appropriate vector element is the most expensive part of the lookup. The code counts all bitmap entries <= i that are set to 1. (There's at least 1 of them, since bitmap[i] is 1.) Suppose there are 4 such entries. Then the right value to return is the 4th element of the vector: vector[3]. This takes time O(M), which is a constant since M is a constant.

insert()

Insert starts with a lookup. If the lookup succeeds, the code merely replaces vector[3] with the new value. If the lookup fails, then the code must insert a new entry into the middle of the vector. Again, to insert at position i, the code must count all the bitmap entries <= i that are set to 1. This indicates the position to insert into the vector. All vector entries above that position must be moved to make room for the new entry. This takes time, but still constant time since the vector has size at most M.

(Inserts could be made faster by using a list instead of a vector to hold group values, but this would use much more memory, since each list element requires a full pointer of overhead.)

The only metadata that needs to be updated, after the actual value is inserted, is to set bitmap[i] to 1. No other counts must be maintained.

delete()

Deletes are similar to inserts. They start with a lookup. If it fails, the delete is a noop. Otherwise, the appropriate entry is removed from the vector, all the vector elements above it are moved down one, and bitmap[i] is set to 0.

iterators

Sparsetable iterators pose a special burden. They must iterate over unassigned array values, but the act of iterating should not cause an assignment to happen -- otherwise, iterating over a sparsetable would cause it to take up much more room. For const iterators, the matter is simple: the iterator is merely programmed to return the default value -- Foo() -- when dereferenced while pointing to an unassigned entry.

For non-const iterators, such simple techniques fail. Instead, dereferencing a sparsetable_iterator returns an opaque object that acts like a Foo in almost all situations, but isn't actually a Foo. (It does this by defining operator=(), operator value_type(), and, most sneakily, operator&().) This works in almost all cases. If it doesn't, an explicit cast to value_type will solve the problem:

   printf("%d", static_cast<Foo>(*t.find(0)));

To avoid such problems, consider using get() and set() instead of an iterator:

   for (int i = 0; i < t.size(); ++i)
      if (t.get(i) == ...)  t.set(i, ...);

Sparsetable also has a special class of iterator, besides normal and const: nonempty_iterator. This only iterates over array values that are assigned. This is particularly fast given the sparsetable implementation, since it can ignore the bitmaps entirely and just iterate over the various group vectors.

Resource use

The space overhead for a sparsetable of size N is N + 48N/M bits. For the default value of M, this is exactly 2 bits per array entry. (That's for 32-bit pointers; for machines with 64-bit pointers, it's N + 80N/M bits, or 2.67 bits per entry.) A larger M would use less overhead -- approaching 1 bit per array entry -- but take longer for inserts, deletes, and lookups. A smaller M would use more overhead but make operations somewhat faster.

You can also look at some specific performance numbers.


sparse_hash_set

For specificity, consider the declaration

   sparse_hash_set<Foo> t;

sparse_hash_set is a hashtable. For more information on hashtables, see Knuth. Hashtables are basically arrays with complicated logic on top of them. sparse_hash_set uses a sparsetable to implement the underlying array.

In particular, sparse_hash_set stores its data in a sparsetable using quadratic internal probing (see Knuth). Many hashtable implementations use external probing, so each table element is actually a pointer chain, holding many hashtable values. sparse_hash_set, on the other hand, always stores at most one value in each table location. If the hashtable wants to store a second value at a given table location, it can't; it's forced to look somewhere else.

insert()

As a specific example, suppose t is a new sparse_hash_set. It then holds a sparsetable of size 32. The code for t.insert(foo) works as follows:

1) Call hash<Foo>(foo) to convert foo into an integer i. (hash<Foo> is the default hash function; you can specify a different one in the template arguments.)

2a) Look at t.sparsetable[i % 32]. If it's unassigned, assign it to foo. foo is now in the hashtable.

2b) If t.sparsetable[i % 32] is assigned, and its value is foo, then do nothing: foo was already in t and the insert is a noop.

2c) If t.sparsetable[i % 32] is assigned, but to a value other than foo, look at t.sparsetable[(i+1) % 32]. If that also fails, try t.sparsetable[(i+3) % 32], then t.sparsetable[(i+6) % 32]. In general, keep trying the next triangular number.

3) If the table is now "too full" -- say, 25 of the 32 table entries are now assigned -- grow the table by creating a new sparsetable that's twice as big, and rehashing every single element from the old table into the new one. This keeps the table from ever filling up.

4) If the table is now "too empty" -- say, only 3 of the 32 table entries are now assigned -- shrink the table by creating a new sparsetable that's half as big, and rehashing every element as in the growing case. This keeps the table overhead proportional to the number of elements in the table.

Instead of using triangular numbers as offsets, one could just use regular integers: try i, then i+1, then i+2, then i+3. This has bad 'clumping' behavior, as explored in Knuth. Quadratic probing, using the triangular numbers, avoids the clumping while keeping cache coherency in the common case. As long as the table size is a power of 2, the quadratic-probing method described above will explore every table element if necessary, to find a good place to insert.

(As a side note, using a table size that's a power of two has several advantages, including the speed of calculating (i % table_size). On the other hand, power-of-two tables are not very forgiving of a poor hash function. Make sure your hash function is a good one! There are plenty of dos and don'ts on the web (and in Knuth), for writing hash functions.)

The "too full" value, also called the "maximum occupancy", determines a time-space tradeoff: in general, the higher it is, the less space is wasted but the more probes must be performed for each insert. sparse_hash_set uses a high maximum occupancy, since space is more important than speed for this data structure.

The "too empty" value is not necessary for performance but helps with space use. It's rare for hashtable implementations to check this value at insert() time -- after all, how will inserting cause a hashtable to get too small? However, the sparse_hash_set implementation never resizes on erase(); it's nice to have an erase() that does not invalidate iterators. Thus, the first insert() after a long string of erase()s could well trigger a hashtable shrink.

find()

find() works similarly to insert(). The only difference is in step (2a): if the value is unassigned, the lookup fails immediately.

delete()

delete() is tricky in an internal-probing scheme. The obvious implementation of just "unassigning" the relevant table entry doesn't work. Consider the following scenario:

    t.insert(foo1);         // foo1 hashes to 4, is put in table[4]
    t.insert(foo2);         // foo2 hashes to 4, is put in table[5]
    t.erase(foo1);          // table[4] is now 'unassigned'
    t.lookup(foo2);         // fails since table[hash(foo2)] is unassigned

To avoid these failure situations, delete(foo1) is actually implemented by replacing foo1 by a special 'delete' value in the hashtable. This 'delete' value causes the table entry to be considered unassigned for the purposes of insertion -- if foo3 hashes to 4 as well, it can go into table[4] no problem -- but assigned for the purposes of lookup.

What is this special 'delete' value? The delete value has to be an element of type Foo, since the table can't hold anything else. It obviously must be an element the client would never want to insert on its own, or else the code couldn't distinguish deleted entries from 'real' entries with the same value. There's no way to determine a good value automatically. The client has to specify it explicitly. This is what the set_deleted_key() method does.

Note that set_deleted_key() is only necessary if the client actually wants to call t.erase(). For insert-only hash-sets, set_deleted_key() is unnecessary.

When copying the hashtable, either to grow it or shrink it, the special 'delete' values are not copied into the new table. The copy-time rehash makes them unnecessary.

Resource use

The data is stored in a sparsetable, so space use is the same as for sparsetable. However, by default the sparse_hash_set implementation tries to keep about half the table buckets empty, to keep lookup-chains short. Since sparsetable has about 2 bits of overhead per bucket (or 2.5 bits on 64-bit systems), sparse_hash_set has about 4-5 bits of overhead per hashtable item.

Time use is also determined in large part by the sparsetable implementation. However, there is also an extra probing cost in hashtables, which depends in large part on the "too full" value. It should be rare to need more than 4-5 probes per lookup, and usually significantly fewer will suffice.

A note on growing and shrinking the hashtable: all hashtable implementations use the most memory when growing a hashtable, since they must have room for both the old table and the new table at the same time. sparse_hash_set is careful to delete entries from the old hashtable as soon as they're copied into the new one, to minimize this space overhead. (It does this efficiently by using its knowledge of the sparsetable class and copying one sparsetable group at a time.)

You can also look at some specific performance numbers.


sparse_hash_map

sparse_hash_map is implemented identically to sparse_hash_set. The only difference is that instead of storing just Foo in each table entry, the data structure stores pair<Foo, Value>.


dense_hash_set

The hashtable aspects of dense_hash_set are identical to sparse_hash_set: it uses quadratic internal probing, and resizes hashtables in exactly the same way. The difference is in the underlying array: instead of using a sparsetable, dense_hash_set uses a C array. This means much more space is used, especially if Foo is big. However, it makes all operations faster, since sparsetable has memory management overhead that C arrays do not.

The use of C arrays instead of sparsetables points to one immediate complication dense_hash_set has that sparse_hash_set does not: the need to distinguish assigned from unassigned entries. In a sparsetable, this is accomplished by a bitmap. dense_hash_set, on the other hand, uses a dedicated value to specify unassigned entries. Thus, dense_hash_set has two special values: one to indicate deleted table entries, and one to indicate unassigned table entries. At construct time, all table entries are initialized to 'unassigned'.

dense_hash_set provides the method set_empty_key() to indicate the value that should be used for unassigned entries. Like set_deleted_key(), set_empty_key() requires a value that will not be used by the client for any legitimate purpose. Unlike set_deleted_key(), set_empty_key() is always required, no matter what hashtable operations the client wishes to perform.

Resource use

This implementation is fast because, even though dense_hash_set may not be space-efficient, most lookups are localized: a single lookup may need to access table[i], and maybe table[i+1] and table[i+3], but nothing beyond that. For all but the biggest data structures, these will frequently be in a single cache line.

This implementation takes, for every unused bucket, space as big as the key-type. Usually between half and two-thirds of the buckets are empty.

The doubling method used by dense_hash_set tends to work poorly with most memory allocators. This is because memory allocators tend to have memory 'buckets' whose sizes are powers of two. Since each doubling of a dense_hash_set doubles its memory use, a single hashtable doubling will require a new memory 'bucket' from the memory allocator, leaving the old bucket stranded as fragmented memory. Hence, this data structure is not recommended for insert-heavy use in memory-constrained situations.

You can also look at some specific performance numbers.


dense_hash_map

dense_hash_map is identical to dense_hash_set except for what values are stored in each table entry.


Craig Silverstein
Thu Jan 6 20:15:42 PST 2005
sparsehash-2.0.2/TODO0000664000175000017500000000206711721252346011277 000000000000001) TODO: I/O implementation in densehashtable.h 2) TODO: document SPARSEHASH_STAT_UPDATE macro, and also macros that tweak performance. Perhaps add support to these to the API? 3) TODO: support exceptions? 4) BUG: sparsetable's operator[] doesn't work well with printf: you need to explicitly cast the result to value_type to print it. (It works fine with streams.) 5) TODO: consider rewriting dense_hash_map to use a 'groups' scheme, like sparsetable, but without the sparse-allocation within a group. This makes resizing have better memory-use properties. The downside is that probes across groups might take longer since groups are not contiguous in memory. Making groups the same size as a cache-line, and ensuring they're loaded on cache-line boundaries, might help. Needs careful testing to make sure it doesn't hurt performance. 6) TODO: Get the C-only version of sparsehash in experimental/ ready for prime-time. 7) TODO: use cmake (www.cmake.org) to make it easy to isntall this on a windows system. 
--- 28 February 2007 sparsehash-2.0.2/m4/0000775000175000017500000000000011721550526011203 500000000000000sparsehash-2.0.2/m4/namespaces.m40000664000175000017500000000133711721252345013506 00000000000000# Checks whether the compiler implements namespaces AC_DEFUN([AC_CXX_NAMESPACES], [AC_CACHE_CHECK(whether the compiler implements namespaces, ac_cv_cxx_namespaces, [AC_LANG_SAVE AC_LANG_CPLUSPLUS AC_TRY_COMPILE([namespace Outer { namespace Inner { int i = 0; }}], [using namespace Outer::Inner; return i;], ac_cv_cxx_namespaces=yes, ac_cv_cxx_namespaces=no) AC_LANG_RESTORE]) if test "$ac_cv_cxx_namespaces" = yes; then AC_DEFINE(HAVE_NAMESPACES, 1, [define if the compiler implements namespaces]) fi]) sparsehash-2.0.2/m4/acx_pthread.m40000664000175000017500000003404511721252345013653 00000000000000# This was retrieved from # http://svn.0pointer.de/viewvc/trunk/common/acx_pthread.m4?revision=1277&root=avahi # See also (perhaps for new versions?) # http://svn.0pointer.de/viewvc/trunk/common/acx_pthread.m4?root=avahi # # We've rewritten the inconsistency check code (from avahi), to work # more broadly. In particular, it no longer assumes ld accepts -zdefs. # This caused a restructing of the code, but the functionality has only # changed a little. dnl @synopsis ACX_PTHREAD([ACTION-IF-FOUND[, ACTION-IF-NOT-FOUND]]) dnl dnl @summary figure out how to build C programs using POSIX threads dnl dnl This macro figures out how to build C programs using POSIX threads. dnl It sets the PTHREAD_LIBS output variable to the threads library and dnl linker flags, and the PTHREAD_CFLAGS output variable to any special dnl C compiler flags that are needed. (The user can also force certain dnl compiler flags/libs to be tested by setting these environment dnl variables.) dnl dnl Also sets PTHREAD_CC to any special C compiler that is needed for dnl multi-threaded programs (defaults to the value of CC otherwise). dnl (This is necessary on AIX to use the special cc_r compiler alias.) 
dnl dnl NOTE: You are assumed to not only compile your program with these dnl flags, but also link it with them as well. e.g. you should link dnl with $PTHREAD_CC $CFLAGS $PTHREAD_CFLAGS $LDFLAGS ... $PTHREAD_LIBS dnl $LIBS dnl dnl If you are only building threads programs, you may wish to use dnl these variables in your default LIBS, CFLAGS, and CC: dnl dnl LIBS="$PTHREAD_LIBS $LIBS" dnl CFLAGS="$CFLAGS $PTHREAD_CFLAGS" dnl CC="$PTHREAD_CC" dnl dnl In addition, if the PTHREAD_CREATE_JOINABLE thread-attribute dnl constant has a nonstandard name, defines PTHREAD_CREATE_JOINABLE to dnl that name (e.g. PTHREAD_CREATE_UNDETACHED on AIX). dnl dnl ACTION-IF-FOUND is a list of shell commands to run if a threads dnl library is found, and ACTION-IF-NOT-FOUND is a list of commands to dnl run it if it is not found. If ACTION-IF-FOUND is not specified, the dnl default action will define HAVE_PTHREAD. dnl dnl Please let the authors know if this macro fails on any platform, or dnl if you have any other suggestions or comments. This macro was based dnl on work by SGJ on autoconf scripts for FFTW (www.fftw.org) (with dnl help from M. Frigo), as well as ac_pthread and hb_pthread macros dnl posted by Alejandro Forero Cuervo to the autoconf macro repository. dnl We are also grateful for the helpful feedback of numerous users. dnl dnl @category InstalledPackages dnl @author Steven G. Johnson dnl @version 2006-05-29 dnl @license GPLWithACException dnl dnl Checks for GCC shared/pthread inconsistency based on work by dnl Marcin Owsiany AC_DEFUN([ACX_PTHREAD], [ AC_REQUIRE([AC_CANONICAL_HOST]) AC_LANG_SAVE AC_LANG_C acx_pthread_ok=no # We used to check for pthread.h first, but this fails if pthread.h # requires special compiler flags (e.g. on True64 or Sequent). # It gets checked for in the link test anyway. 
# First of all, check if the user has set any of the PTHREAD_LIBS, # etcetera environment variables, and if threads linking works using # them: if test x"$PTHREAD_LIBS$PTHREAD_CFLAGS" != x; then save_CFLAGS="$CFLAGS" CFLAGS="$CFLAGS $PTHREAD_CFLAGS" save_LIBS="$LIBS" LIBS="$PTHREAD_LIBS $LIBS" AC_MSG_CHECKING([for pthread_join in LIBS=$PTHREAD_LIBS with CFLAGS=$PTHREAD_CFLAGS]) AC_TRY_LINK_FUNC(pthread_join, acx_pthread_ok=yes) AC_MSG_RESULT($acx_pthread_ok) if test x"$acx_pthread_ok" = xno; then PTHREAD_LIBS="" PTHREAD_CFLAGS="" fi LIBS="$save_LIBS" CFLAGS="$save_CFLAGS" fi # We must check for the threads library under a number of different # names; the ordering is very important because some systems # (e.g. DEC) have both -lpthread and -lpthreads, where one of the # libraries is broken (non-POSIX). # Create a list of thread flags to try. Items starting with a "-" are # C compiler flags, and other items are library names, except for "none" # which indicates that we try without any flags at all, and "pthread-config" # which is a program returning the flags for the Pth emulation library. acx_pthread_flags="pthreads none -Kthread -kthread lthread -pthread -pthreads -mthreads pthread --thread-safe -mt pthread-config" # The ordering *is* (sometimes) important. 
Some notes on the # individual items follow: # pthreads: AIX (must check this before -lpthread) # none: in case threads are in libc; should be tried before -Kthread and # other compiler flags to prevent continual compiler warnings # -Kthread: Sequent (threads in libc, but -Kthread needed for pthread.h) # -kthread: FreeBSD kernel threads (preferred to -pthread since SMP-able) # lthread: LinuxThreads port on FreeBSD (also preferred to -pthread) # -pthread: Linux/gcc (kernel threads), BSD/gcc (userland threads) # -pthreads: Solaris/gcc # -mthreads: Mingw32/gcc, Lynx/gcc # -mt: Sun Workshop C (may only link SunOS threads [-lthread], but it # doesn't hurt to check since this sometimes defines pthreads too; # also defines -D_REENTRANT) # ... -mt is also the pthreads flag for HP/aCC # pthread: Linux, etcetera # --thread-safe: KAI C++ # pthread-config: use pthread-config program (for GNU Pth library) case "${host_cpu}-${host_os}" in *solaris*) # On Solaris (at least, for some versions), libc contains stubbed # (non-functional) versions of the pthreads routines, so link-based # tests will erroneously succeed. (We need to link with -pthreads/-mt/ # -lpthread.) (The stubs are missing pthread_cleanup_push, or rather # a function called by this macro, so we could check for that, but # who knows whether they'll stub that too in a future libc.) 
So, # we'll just look for -pthreads and -lpthread first: acx_pthread_flags="-pthreads pthread -mt -pthread $acx_pthread_flags" ;; esac if test x"$acx_pthread_ok" = xno; then for flag in $acx_pthread_flags; do case $flag in none) AC_MSG_CHECKING([whether pthreads work without any flags]) ;; -*) AC_MSG_CHECKING([whether pthreads work with $flag]) PTHREAD_CFLAGS="$flag" ;; pthread-config) AC_CHECK_PROG(acx_pthread_config, pthread-config, yes, no) if test x"$acx_pthread_config" = xno; then continue; fi PTHREAD_CFLAGS="`pthread-config --cflags`" PTHREAD_LIBS="`pthread-config --ldflags` `pthread-config --libs`" ;; *) AC_MSG_CHECKING([for the pthreads library -l$flag]) PTHREAD_LIBS="-l$flag" ;; esac save_LIBS="$LIBS" save_CFLAGS="$CFLAGS" LIBS="$PTHREAD_LIBS $LIBS" CFLAGS="$CFLAGS $PTHREAD_CFLAGS" # Check for various functions. We must include pthread.h, # since some functions may be macros. (On the Sequent, we # need a special flag -Kthread to make this header compile.) # We check for pthread_join because it is in -lpthread on IRIX # while pthread_create is in libc. We check for pthread_attr_init # due to DEC craziness with -lpthreads. We check for # pthread_cleanup_push because it is one of the few pthread # functions on Solaris that doesn't have a non-functional libc stub. # We try pthread_create on general principles. AC_TRY_LINK([#include ], [pthread_t th; pthread_join(th, 0); pthread_attr_init(0); pthread_cleanup_push(0, 0); pthread_create(0,0,0,0); pthread_cleanup_pop(0); ], [acx_pthread_ok=yes]) LIBS="$save_LIBS" CFLAGS="$save_CFLAGS" AC_MSG_RESULT($acx_pthread_ok) if test "x$acx_pthread_ok" = xyes; then break; fi PTHREAD_LIBS="" PTHREAD_CFLAGS="" done fi # Various other checks: if test "x$acx_pthread_ok" = xyes; then save_LIBS="$LIBS" LIBS="$PTHREAD_LIBS $LIBS" save_CFLAGS="$CFLAGS" CFLAGS="$CFLAGS $PTHREAD_CFLAGS" # Detect AIX lossage: JOINABLE attribute is called UNDETACHED. 
AC_MSG_CHECKING([for joinable pthread attribute]) attr_name=unknown for attr in PTHREAD_CREATE_JOINABLE PTHREAD_CREATE_UNDETACHED; do AC_TRY_LINK([#include ], [int attr=$attr; return attr;], [attr_name=$attr; break]) done AC_MSG_RESULT($attr_name) if test "$attr_name" != PTHREAD_CREATE_JOINABLE; then AC_DEFINE_UNQUOTED(PTHREAD_CREATE_JOINABLE, $attr_name, [Define to necessary symbol if this constant uses a non-standard name on your system.]) fi AC_MSG_CHECKING([if more special flags are required for pthreads]) flag=no case "${host_cpu}-${host_os}" in *-aix* | *-freebsd* | *-darwin*) flag="-D_THREAD_SAFE";; *solaris* | *-osf* | *-hpux*) flag="-D_REENTRANT";; esac AC_MSG_RESULT(${flag}) if test "x$flag" != xno; then PTHREAD_CFLAGS="$flag $PTHREAD_CFLAGS" fi LIBS="$save_LIBS" CFLAGS="$save_CFLAGS" # More AIX lossage: must compile with xlc_r or cc_r if test x"$GCC" != xyes; then AC_CHECK_PROGS(PTHREAD_CC, xlc_r cc_r, ${CC}) else PTHREAD_CC=$CC fi # The next part tries to detect GCC inconsistency with -shared on some # architectures and systems. The problem is that in certain # configurations, when -shared is specified, GCC "forgets" to # internally use various flags which are still necessary. # # Prepare the flags # save_CFLAGS="$CFLAGS" save_LIBS="$LIBS" save_CC="$CC" # Try with the flags determined by the earlier checks. # # -Wl,-z,defs forces link-time symbol resolution, so that the # linking checks with -shared actually have any value # # FIXME: -fPIC is required for -shared on many architectures, # so we specify it here, but the right way would probably be to # properly detect whether it is actually required. CFLAGS="-shared -fPIC -Wl,-z,defs $CFLAGS $PTHREAD_CFLAGS" LIBS="$PTHREAD_LIBS $LIBS" CC="$PTHREAD_CC" # In order not to create several levels of indentation, we test # the value of "$done" until we find the cure or run out of ideas. done="no" # First, make sure the CFLAGS we added are actually accepted by our # compiler. 
If not (and OS X's ld, for instance, does not accept -z), # then we can't do this test. if test x"$done" = xno; then AC_MSG_CHECKING([whether to check for GCC pthread/shared inconsistencies]) AC_TRY_LINK(,, , [done=yes]) if test "x$done" = xyes ; then AC_MSG_RESULT([no]) else AC_MSG_RESULT([yes]) fi fi if test x"$done" = xno; then AC_MSG_CHECKING([whether -pthread is sufficient with -shared]) AC_TRY_LINK([#include ], [pthread_t th; pthread_join(th, 0); pthread_attr_init(0); pthread_cleanup_push(0, 0); pthread_create(0,0,0,0); pthread_cleanup_pop(0); ], [done=yes]) if test "x$done" = xyes; then AC_MSG_RESULT([yes]) else AC_MSG_RESULT([no]) fi fi # # Linux gcc on some architectures such as mips/mipsel forgets # about -lpthread # if test x"$done" = xno; then AC_MSG_CHECKING([whether -lpthread fixes that]) LIBS="-lpthread $PTHREAD_LIBS $save_LIBS" AC_TRY_LINK([#include ], [pthread_t th; pthread_join(th, 0); pthread_attr_init(0); pthread_cleanup_push(0, 0); pthread_create(0,0,0,0); pthread_cleanup_pop(0); ], [done=yes]) if test "x$done" = xyes; then AC_MSG_RESULT([yes]) PTHREAD_LIBS="-lpthread $PTHREAD_LIBS" else AC_MSG_RESULT([no]) fi fi # # FreeBSD 4.10 gcc forgets to use -lc_r instead of -lc # if test x"$done" = xno; then AC_MSG_CHECKING([whether -lc_r fixes that]) LIBS="-lc_r $PTHREAD_LIBS $save_LIBS" AC_TRY_LINK([#include ], [pthread_t th; pthread_join(th, 0); pthread_attr_init(0); pthread_cleanup_push(0, 0); pthread_create(0,0,0,0); pthread_cleanup_pop(0); ], [done=yes]) if test "x$done" = xyes; then AC_MSG_RESULT([yes]) PTHREAD_LIBS="-lc_r $PTHREAD_LIBS" else AC_MSG_RESULT([no]) fi fi if test x"$done" = xno; then # OK, we have run out of ideas AC_MSG_WARN([Impossible to determine how to use pthreads with shared libraries]) # so it's not safe to assume that we may use pthreads acx_pthread_ok=no fi AC_MSG_CHECKING([whether what we have so far is sufficient with -nostdlib]) CFLAGS="-nostdlib $CFLAGS" # we need c with nostdlib LIBS="$LIBS -lc" AC_TRY_LINK([#include 
], [pthread_t th; pthread_join(th, 0); pthread_attr_init(0); pthread_cleanup_push(0, 0); pthread_create(0,0,0,0); pthread_cleanup_pop(0); ], [done=yes],[done=no]) if test "x$done" = xyes; then AC_MSG_RESULT([yes]) else AC_MSG_RESULT([no]) fi if test x"$done" = xno; then AC_MSG_CHECKING([whether -lpthread saves the day]) LIBS="-lpthread $LIBS" AC_TRY_LINK([#include ], [pthread_t th; pthread_join(th, 0); pthread_attr_init(0); pthread_cleanup_push(0, 0); pthread_create(0,0,0,0); pthread_cleanup_pop(0); ], [done=yes],[done=no]) if test "x$done" = xyes; then AC_MSG_RESULT([yes]) PTHREAD_LIBS="$PTHREAD_LIBS -lpthread" else AC_MSG_RESULT([no]) AC_MSG_WARN([Impossible to determine how to use pthreads with shared libraries and -nostdlib]) fi fi CFLAGS="$save_CFLAGS" LIBS="$save_LIBS" CC="$save_CC" else PTHREAD_CC="$CC" fi AC_SUBST(PTHREAD_LIBS) AC_SUBST(PTHREAD_CFLAGS) AC_SUBST(PTHREAD_CC) # Finally, execute ACTION-IF-FOUND/ACTION-IF-NOT-FOUND: if test x"$acx_pthread_ok" = xyes; then ifelse([$1],,AC_DEFINE(HAVE_PTHREAD,1,[Define if you have POSIX threads libraries and header files.]),[$1]) : else acx_pthread_ok=no $2 fi AC_LANG_RESTORE ])dnl ACX_PTHREAD sparsehash-2.0.2/m4/stl_hash_fun.m40000664000175000017500000000272311721252345014044 00000000000000# We just try to figure out where hash<> is defined. It's in some file # that ends in hash_fun.h... # # Ideally we'd use AC_CACHE_CHECK, but that only lets us store one value # at a time, and we need to store two (filename and namespace). # prints messages itself, so we have to do the message-printing ourselves # via AC_MSG_CHECKING + AC_MSG_RESULT. (TODO(csilvers): can we cache?) # # tr1/functional_hash.h: new gcc's with tr1 support # stl_hash_fun.h: old gcc's (gc2.95?) 
# ext/hash_fun.h: newer gcc's (gcc4) # stl/_hash_fun.h: STLport AC_DEFUN([AC_CXX_STL_HASH_FUN], [AC_REQUIRE([AC_CXX_STL_HASH]) AC_MSG_CHECKING(how to include hash_fun directly) AC_LANG_SAVE AC_LANG_CPLUSPLUS ac_cv_cxx_stl_hash_fun="" for location in functional tr1/functional \ ext/hash_fun.h ext/stl_hash_fun.h \ hash_fun.h stl_hash_fun.h \ stl/_hash_fun.h; do if test -z "$ac_cv_cxx_stl_hash_fun"; then AC_TRY_COMPILE([#include <$location>], [int x = ${ac_cv_cxx_hash_namespace}::hash()(5)], [ac_cv_cxx_stl_hash_fun="<$location>";]) fi done AC_LANG_RESTORE AC_DEFINE_UNQUOTED(HASH_FUN_H,$ac_cv_cxx_stl_hash_fun, [the location of the header defining hash functions]) AC_DEFINE_UNQUOTED(HASH_NAMESPACE,$ac_cv_cxx_hash_namespace, [the namespace of the hash<> function]) AC_MSG_RESULT([$ac_cv_cxx_stl_hash_fun]) ]) sparsehash-2.0.2/m4/google_namespace.m40000664000175000017500000000412111721252345014651 00000000000000# Allow users to override the namespace we define our application's classes in # Arg $1 is the default namespace to use if --enable-namespace isn't present. # In general, $1 should be 'google', so we put all our exported symbols in a # unique namespace that is not likely to conflict with anyone else. However, # when it makes sense -- for instance, when publishing stl-like code -- you # may want to go with a different default, like 'std'. # We guarantee the invariant that GOOGLE_NAMESPACE starts with ::, # unless it's the empty string. Thus, it's always safe to do # GOOGLE_NAMESPACE::foo and be sure you're getting the foo that's # actually in the google namespace, and not some other namespace that # the namespace rules might kick in. AC_DEFUN([AC_DEFINE_GOOGLE_NAMESPACE], [google_namespace_default=[$1] AC_ARG_ENABLE(namespace, [ --enable-namespace=FOO to define these Google classes in the FOO namespace. --disable-namespace to define them in the global namespace. 
Default is to define them in namespace $1.], [case "$enableval" in yes) google_namespace="$google_namespace_default" ;; no) google_namespace="" ;; *) google_namespace="$enableval" ;; esac], [google_namespace="$google_namespace_default"]) if test -n "$google_namespace"; then ac_google_namespace="::$google_namespace" ac_google_start_namespace="namespace $google_namespace {" ac_google_end_namespace="}" else ac_google_namespace="" ac_google_start_namespace="" ac_google_end_namespace="" fi AC_DEFINE_UNQUOTED(GOOGLE_NAMESPACE, $ac_google_namespace, Namespace for Google classes) AC_DEFINE_UNQUOTED(_START_GOOGLE_NAMESPACE_, $ac_google_start_namespace, Puts following code inside the Google namespace) AC_DEFINE_UNQUOTED(_END_GOOGLE_NAMESPACE_, $ac_google_end_namespace, Stops putting the code inside the Google namespace) ]) sparsehash-2.0.2/m4/stl_hash.m40000664000175000017500000000621611721252345013175 00000000000000# We check two things: where the include file is for # unordered_map/hash_map (we prefer the first form), and what # namespace unordered/hash_map lives in within that include file. We # include AC_TRY_COMPILE for all the combinations we've seen in the # wild. We define HASH_MAP_H to the location of the header file, and # HASH_NAMESPACE to the namespace the class (unordered_map or # hash_map) is in. We define HAVE_UNORDERED_MAP if the class we found # is named unordered_map, or leave it undefined if not. # This also checks if unordered map exists. AC_DEFUN([AC_CXX_STL_HASH], [AC_REQUIRE([AC_CXX_NAMESPACES]) AC_MSG_CHECKING(the location of hash_map) AC_LANG_SAVE AC_LANG_CPLUSPLUS ac_cv_cxx_hash_map="" # First try unordered_map, but not on gcc's before 4.2 -- I've # seen unexplainable unordered_map bugs with -O2 on older gcc's. 
AC_TRY_COMPILE([#if defined(__GNUC__) && (__GNUC__ < 4 || (__GNUC__ == 4 && __GNUC_MINOR__ < 2)) # error GCC too old for unordered_map #endif ], [/* no program body necessary */], [stl_hash_old_gcc=no], [stl_hash_old_gcc=yes]) for location in unordered_map tr1/unordered_map; do for namespace in std std::tr1; do if test -z "$ac_cv_cxx_hash_map" -a "$stl_hash_old_gcc" != yes; then # Some older gcc's have a buggy tr1, so test a bit of code. AC_TRY_COMPILE([#include <$location>], [const ${namespace}::unordered_map t; return t.find(5) == t.end();], [ac_cv_cxx_hash_map="<$location>"; ac_cv_cxx_hash_namespace="$namespace"; ac_cv_cxx_have_unordered_map="yes";]) fi done done # Now try hash_map for location in ext/hash_map hash_map; do for namespace in __gnu_cxx "" std stdext; do if test -z "$ac_cv_cxx_hash_map"; then AC_TRY_COMPILE([#include <$location>], [${namespace}::hash_map t], [ac_cv_cxx_hash_map="<$location>"; ac_cv_cxx_hash_namespace="$namespace"; ac_cv_cxx_have_unordered_map="no";]) fi done done ac_cv_cxx_hash_set=`echo "$ac_cv_cxx_hash_map" | sed s/map/set/`; if test -n "$ac_cv_cxx_hash_map"; then AC_DEFINE(HAVE_HASH_MAP, 1, [define if the compiler has hash_map]) AC_DEFINE(HAVE_HASH_SET, 1, [define if the compiler has hash_set]) AC_DEFINE_UNQUOTED(HASH_MAP_H,$ac_cv_cxx_hash_map, [the location of or ]) AC_DEFINE_UNQUOTED(HASH_SET_H,$ac_cv_cxx_hash_set, [the location of or ]) AC_DEFINE_UNQUOTED(HASH_NAMESPACE,$ac_cv_cxx_hash_namespace, [the namespace of hash_map/hash_set]) if test "$ac_cv_cxx_have_unordered_map" = yes; then AC_DEFINE(HAVE_UNORDERED_MAP,1, [define if the compiler supports unordered_{map,set}]) fi AC_MSG_RESULT([$ac_cv_cxx_hash_map]) else AC_MSG_RESULT() AC_MSG_WARN([could not find an STL hash_map]) fi ]) sparsehash-2.0.2/config.guess0000755000175000017500000012673011721254575013137 00000000000000#! /bin/sh # Attempt to guess a canonical system name. 
# Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, # 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, # 2011 Free Software Foundation, Inc. timestamp='2011-05-11' # This file is free software; you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street - Fifth Floor, Boston, MA # 02110-1301, USA. # # As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that program. # Originally written by Per Bothner. Please send patches (context # diff format) to and include a ChangeLog # entry. # # This script attempts to guess a canonical system name similar to # config.sub. If it succeeds, it prints the system name on stdout, and # exits with 0. Otherwise, it exits with 1. # # You can get the latest version of this script from: # http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.guess;hb=HEAD me=`echo "$0" | sed -e 's,.*/,,'` usage="\ Usage: $0 [OPTION] Output the configuration name of the system \`$me' is run on. Operation modes: -h, --help print this help, then exit -t, --time-stamp print date of last modification, then exit -v, --version print version number, then exit Report bugs and patches to ." 
version="\ GNU config.guess ($timestamp) Originally written by Per Bothner. Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE." help=" Try \`$me --help' for more information." # Parse command line while test $# -gt 0 ; do case $1 in --time-stamp | --time* | -t ) echo "$timestamp" ; exit ;; --version | -v ) echo "$version" ; exit ;; --help | --h* | -h ) echo "$usage"; exit ;; -- ) # Stop option processing shift; break ;; - ) # Use stdin as input. break ;; -* ) echo "$me: invalid option $1$help" >&2 exit 1 ;; * ) break ;; esac done if test $# != 0; then echo "$me: too many arguments$help" >&2 exit 1 fi trap 'exit 1' 1 2 15 # CC_FOR_BUILD -- compiler used by this script. Note that the use of a # compiler to aid in system detection is discouraged as it requires # temporary files to be created and, as you can see below, it is a # headache to deal with in a portable fashion. # Historically, `CC_FOR_BUILD' used to be named `HOST_CC'. We still # use `HOST_CC' if defined, but it is deprecated. # Portable tmp directory creation inspired by the Autoconf team. 
set_cc_for_build='
trap "exitcode=\$?; (rm -f \$tmpfiles 2>/dev/null; rmdir \$tmp 2>/dev/null) && exit \$exitcode" 0 ;
trap "rm -f \$tmpfiles 2>/dev/null; rmdir \$tmp 2>/dev/null; exit 1" 1 2 13 15 ;
: ${TMPDIR=/tmp} ;
 { tmp=`(umask 077 && mktemp -d "$TMPDIR/cgXXXXXX") 2>/dev/null` && test -n "$tmp" && test -d "$tmp" ; } ||
 { test -n "$RANDOM" && tmp=$TMPDIR/cg$$-$RANDOM && (umask 077 && mkdir $tmp) ; } ||
 { tmp=$TMPDIR/cg-$$ && (umask 077 && mkdir $tmp) && echo "Warning: creating insecure temp directory" >&2 ; } ||
 { echo "$me: cannot create a temporary directory in $TMPDIR" >&2 ; exit 1 ; } ;
dummy=$tmp/dummy ;
tmpfiles="$dummy.c $dummy.o $dummy.rel $dummy" ;
case $CC_FOR_BUILD,$HOST_CC,$CC in
 ,,)    echo "int x;" > $dummy.c ;
	for c in cc gcc c89 c99 ; do
	  if ($c -c -o $dummy.o $dummy.c) >/dev/null 2>&1 ; then
	     CC_FOR_BUILD="$c"; break ;
	  fi ;
	done ;
	if test x"$CC_FOR_BUILD" = x ; then
	  CC_FOR_BUILD=no_compiler_found ;
	fi
	;;
 ,,*)   CC_FOR_BUILD=$CC ;;
 ,*,*)  CC_FOR_BUILD=$HOST_CC ;;
esac ; set_cc_for_build= ;'

# This is needed to find uname on a Pyramid OSx when run in the BSD universe.
# (ghazi@noc.rutgers.edu 1994-08-24)
if (test -f /.attbin/uname) >/dev/null 2>&1 ; then
	PATH=$PATH:/.attbin ; export PATH
fi

UNAME_MACHINE=`(uname -m) 2>/dev/null` || UNAME_MACHINE=unknown
UNAME_RELEASE=`(uname -r) 2>/dev/null` || UNAME_RELEASE=unknown
UNAME_SYSTEM=`(uname -s) 2>/dev/null`  || UNAME_SYSTEM=unknown
UNAME_VERSION=`(uname -v) 2>/dev/null` || UNAME_VERSION=unknown

# Note: order is significant - the case branches are not exclusive.

case "${UNAME_MACHINE}:${UNAME_SYSTEM}:${UNAME_RELEASE}:${UNAME_VERSION}" in
    *:NetBSD:*:*)
	# NetBSD (nbsd) targets should (where applicable) match one or
	# more of the tuples: *-*-netbsdelf*, *-*-netbsdaout*,
	# *-*-netbsdecoff* and *-*-netbsd*.  For targets that recently
	# switched to ELF, *-*-netbsd* would select the old
	# object file format.  This provides both forward
	# compatibility and a consistent mechanism for selecting the
	# object file format.
	#
	# Note: NetBSD doesn't particularly care about the vendor
	# portion of the name.  We always set it to "unknown".
	sysctl="sysctl -n hw.machine_arch"
	UNAME_MACHINE_ARCH=`(/sbin/$sysctl 2>/dev/null || \
	    /usr/sbin/$sysctl 2>/dev/null || echo unknown)`
	case "${UNAME_MACHINE_ARCH}" in
	    armeb) machine=armeb-unknown ;;
	    arm*) machine=arm-unknown ;;
	    sh3el) machine=shl-unknown ;;
	    sh3eb) machine=sh-unknown ;;
	    sh5el) machine=sh5le-unknown ;;
	    *) machine=${UNAME_MACHINE_ARCH}-unknown ;;
	esac
	# The Operating System including object format, if it has switched
	# to ELF recently, or will in the future.
	case "${UNAME_MACHINE_ARCH}" in
	    arm*|i386|m68k|ns32k|sh3*|sparc|vax)
		eval $set_cc_for_build
		if echo __ELF__ | $CC_FOR_BUILD -E - 2>/dev/null \
			| grep -q __ELF__
		then
		    # Once all utilities can be ECOFF (netbsdecoff) or a.out (netbsdaout).
		    # Return netbsd for either.  FIX?
		    os=netbsd
		else
		    os=netbsdelf
		fi
		;;
	    *)
		os=netbsd
		;;
	esac
	# The OS release
	# Debian GNU/NetBSD machines have a different userland, and
	# thus, need a distinct triplet. However, they do not need
	# kernel version information, so it can be replaced with a
	# suitable tag, in the style of linux-gnu.
	case "${UNAME_VERSION}" in
	    Debian*)
		release='-gnu'
		;;
	    *)
		release=`echo ${UNAME_RELEASE}|sed -e 's/[-_].*/\./'`
		;;
	esac
	# Since CPU_TYPE-MANUFACTURER-KERNEL-OPERATING_SYSTEM:
	# contains redundant information, the shorter form:
	# CPU_TYPE-MANUFACTURER-OPERATING_SYSTEM is used.
	echo "${machine}-${os}${release}"
	exit ;;
    *:OpenBSD:*:*)
	UNAME_MACHINE_ARCH=`arch | sed 's/OpenBSD.//'`
	echo ${UNAME_MACHINE_ARCH}-unknown-openbsd${UNAME_RELEASE}
	exit ;;
    *:ekkoBSD:*:*)
	echo ${UNAME_MACHINE}-unknown-ekkobsd${UNAME_RELEASE}
	exit ;;
    *:SolidBSD:*:*)
	echo ${UNAME_MACHINE}-unknown-solidbsd${UNAME_RELEASE}
	exit ;;
    macppc:MirBSD:*:*)
	echo powerpc-unknown-mirbsd${UNAME_RELEASE}
	exit ;;
    *:MirBSD:*:*)
	echo ${UNAME_MACHINE}-unknown-mirbsd${UNAME_RELEASE}
	exit ;;
    alpha:OSF1:*:*)
	case $UNAME_RELEASE in
	*4.0)
		UNAME_RELEASE=`/usr/sbin/sizer -v | awk '{print $3}'`
		;;
	*5.*)
		UNAME_RELEASE=`/usr/sbin/sizer -v | awk '{print $4}'`
		;;
	esac
	# According to Compaq, /usr/sbin/psrinfo has been available on
	# OSF/1 and Tru64 systems produced since 1995.  I hope that
	# covers most systems running today.  This code pipes the CPU
	# types through head -n 1, so we only detect the type of CPU 0.
	ALPHA_CPU_TYPE=`/usr/sbin/psrinfo -v | sed -n -e 's/^  The alpha \(.*\) processor.*$/\1/p' | head -n 1`
	case "$ALPHA_CPU_TYPE" in
	    "EV4 (21064)")
		UNAME_MACHINE="alpha" ;;
	    "EV4.5 (21064)")
		UNAME_MACHINE="alpha" ;;
	    "LCA4 (21066/21068)")
		UNAME_MACHINE="alpha" ;;
	    "EV5 (21164)")
		UNAME_MACHINE="alphaev5" ;;
	    "EV5.6 (21164A)")
		UNAME_MACHINE="alphaev56" ;;
	    "EV5.6 (21164PC)")
		UNAME_MACHINE="alphapca56" ;;
	    "EV5.7 (21164PC)")
		UNAME_MACHINE="alphapca57" ;;
	    "EV6 (21264)")
		UNAME_MACHINE="alphaev6" ;;
	    "EV6.7 (21264A)")
		UNAME_MACHINE="alphaev67" ;;
	    "EV6.8CB (21264C)")
		UNAME_MACHINE="alphaev68" ;;
	    "EV6.8AL (21264B)")
		UNAME_MACHINE="alphaev68" ;;
	    "EV6.8CX (21264D)")
		UNAME_MACHINE="alphaev68" ;;
	    "EV6.9A (21264/EV69A)")
		UNAME_MACHINE="alphaev69" ;;
	    "EV7 (21364)")
		UNAME_MACHINE="alphaev7" ;;
	    "EV7.9 (21364A)")
		UNAME_MACHINE="alphaev79" ;;
	esac
	# A Pn.n version is a patched version.
	# A Vn.n version is a released version.
	# A Tn.n version is a released field test version.
	# A Xn.n version is an unreleased experimental baselevel.
	# 1.2 uses "1.2" for uname -r.
	echo ${UNAME_MACHINE}-dec-osf`echo ${UNAME_RELEASE} | sed -e 's/^[PVTX]//' | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz'`
	# Reset EXIT trap before exiting to avoid spurious non-zero exit code.
	exitcode=$?
	trap '' 0
	exit $exitcode ;;
    Alpha\ *:Windows_NT*:*)
	# How do we know it's Interix rather than the generic POSIX subsystem?
	# Should we change UNAME_MACHINE based on the output of uname instead
	# of the specific Alpha model?
	echo alpha-pc-interix
	exit ;;
    21064:Windows_NT:50:3)
	echo alpha-dec-winnt3.5
	exit ;;
    Amiga*:UNIX_System_V:4.0:*)
	echo m68k-unknown-sysv4
	exit ;;
    *:[Aa]miga[Oo][Ss]:*:*)
	echo ${UNAME_MACHINE}-unknown-amigaos
	exit ;;
    *:[Mm]orph[Oo][Ss]:*:*)
	echo ${UNAME_MACHINE}-unknown-morphos
	exit ;;
    *:OS/390:*:*)
	echo i370-ibm-openedition
	exit ;;
    *:z/VM:*:*)
	echo s390-ibm-zvmoe
	exit ;;
    *:OS400:*:*)
	echo powerpc-ibm-os400
	exit ;;
    arm:RISC*:1.[012]*:*|arm:riscix:1.[012]*:*)
	echo arm-acorn-riscix${UNAME_RELEASE}
	exit ;;
    arm:riscos:*:*|arm:RISCOS:*:*)
	echo arm-unknown-riscos
	exit ;;
    SR2?01:HI-UX/MPP:*:* | SR8000:HI-UX/MPP:*:*)
	echo hppa1.1-hitachi-hiuxmpp
	exit ;;
    Pyramid*:OSx*:*:* | MIS*:OSx*:*:* | MIS*:SMP_DC-OSx*:*:*)
	# akee@wpdis03.wpafb.af.mil (Earle F. Ake) contributed MIS and NILE.
	if test "`(/bin/universe) 2>/dev/null`" = att ; then
		echo pyramid-pyramid-sysv3
	else
		echo pyramid-pyramid-bsd
	fi
	exit ;;
    NILE*:*:*:dcosx)
	echo pyramid-pyramid-svr4
	exit ;;
    DRS?6000:unix:4.0:6*)
	echo sparc-icl-nx6
	exit ;;
    DRS?6000:UNIX_SV:4.2*:7* | DRS?6000:isis:4.2*:7*)
	case `/usr/bin/uname -p` in
	    sparc) echo sparc-icl-nx7; exit ;;
	esac ;;
    s390x:SunOS:*:*)
	echo ${UNAME_MACHINE}-ibm-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'`
	exit ;;
    sun4H:SunOS:5.*:*)
	echo sparc-hal-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'`
	exit ;;
    sun4*:SunOS:5.*:* | tadpole*:SunOS:5.*:*)
	echo sparc-sun-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'`
	exit ;;
    i86pc:AuroraUX:5.*:* | i86xen:AuroraUX:5.*:*)
	echo i386-pc-auroraux${UNAME_RELEASE}
	exit ;;
    i86pc:SunOS:5.*:* | i86xen:SunOS:5.*:*)
	eval $set_cc_for_build
	SUN_ARCH="i386"
	# If there is a compiler, see if it is configured for 64-bit objects.
	# Note that the Sun cc does not turn __LP64__ into 1 like gcc does.
	# This test works for both compilers.
	if [ "$CC_FOR_BUILD" != 'no_compiler_found' ]; then
	    if (echo '#ifdef __amd64'; echo IS_64BIT_ARCH; echo '#endif') | \
		(CCOPTS= $CC_FOR_BUILD -E - 2>/dev/null) | \
		grep IS_64BIT_ARCH >/dev/null
	    then
		SUN_ARCH="x86_64"
	    fi
	fi
	echo ${SUN_ARCH}-pc-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'`
	exit ;;
    sun4*:SunOS:6*:*)
	# According to config.sub, this is the proper way to canonicalize
	# SunOS6.  Hard to guess exactly what SunOS6 will be like, but
	# it's likely to be more like Solaris than SunOS4.
	echo sparc-sun-solaris3`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'`
	exit ;;
    sun4*:SunOS:*:*)
	case "`/usr/bin/arch -k`" in
	    Series*|S4*)
		UNAME_RELEASE=`uname -v`
		;;
	esac
	# Japanese Language versions have a version number like `4.1.3-JL'.
	echo sparc-sun-sunos`echo ${UNAME_RELEASE}|sed -e 's/-/_/'`
	exit ;;
    sun3*:SunOS:*:*)
	echo m68k-sun-sunos${UNAME_RELEASE}
	exit ;;
    sun*:*:4.2BSD:*)
	UNAME_RELEASE=`(sed 1q /etc/motd | awk '{print substr($5,1,3)}') 2>/dev/null`
	test "x${UNAME_RELEASE}" = "x" && UNAME_RELEASE=3
	case "`/bin/arch`" in
	    sun3)
		echo m68k-sun-sunos${UNAME_RELEASE}
		;;
	    sun4)
		echo sparc-sun-sunos${UNAME_RELEASE}
		;;
	esac
	exit ;;
    aushp:SunOS:*:*)
	echo sparc-auspex-sunos${UNAME_RELEASE}
	exit ;;
    # The situation for MiNT is a little confusing.  The machine name
    # can be virtually everything (everything which is not
    # "atarist" or "atariste" at least should have a processor
    # > m68000).  The system name ranges from "MiNT" over "FreeMiNT"
    # to the lowercase version "mint" (or "freemint").  Finally
    # the system name "TOS" denotes a system which is actually not
    # MiNT.  But MiNT is downward compatible to TOS, so this should
    # be no problem.
    atarist[e]:*MiNT:*:* | atarist[e]:*mint:*:* | atarist[e]:*TOS:*:*)
	echo m68k-atari-mint${UNAME_RELEASE}
	exit ;;
    atari*:*MiNT:*:* | atari*:*mint:*:* | atarist[e]:*TOS:*:*)
	echo m68k-atari-mint${UNAME_RELEASE}
	exit ;;
    *falcon*:*MiNT:*:* | *falcon*:*mint:*:* | *falcon*:*TOS:*:*)
	echo m68k-atari-mint${UNAME_RELEASE}
	exit ;;
    milan*:*MiNT:*:* | milan*:*mint:*:* | *milan*:*TOS:*:*)
	echo m68k-milan-mint${UNAME_RELEASE}
	exit ;;
    hades*:*MiNT:*:* | hades*:*mint:*:* | *hades*:*TOS:*:*)
	echo m68k-hades-mint${UNAME_RELEASE}
	exit ;;
    *:*MiNT:*:* | *:*mint:*:* | *:*TOS:*:*)
	echo m68k-unknown-mint${UNAME_RELEASE}
	exit ;;
    m68k:machten:*:*)
	echo m68k-apple-machten${UNAME_RELEASE}
	exit ;;
    powerpc:machten:*:*)
	echo powerpc-apple-machten${UNAME_RELEASE}
	exit ;;
    RISC*:Mach:*:*)
	echo mips-dec-mach_bsd4.3
	exit ;;
    RISC*:ULTRIX:*:*)
	echo mips-dec-ultrix${UNAME_RELEASE}
	exit ;;
    VAX*:ULTRIX*:*:*)
	echo vax-dec-ultrix${UNAME_RELEASE}
	exit ;;
    2020:CLIX:*:* | 2430:CLIX:*:*)
	echo clipper-intergraph-clix${UNAME_RELEASE}
	exit ;;
    mips:*:*:UMIPS | mips:*:*:RISCos)
	eval $set_cc_for_build
	sed 's/^	//' << EOF >$dummy.c
	#ifdef __cplusplus
	#include <stdio.h>  /* for printf() prototype */
	int main (int argc, char *argv[]) {
	#else
	int main (argc, argv) int argc; char *argv[]; {
	#endif
	#if defined (host_mips) && defined (MIPSEB)
	#if defined (SYSTYPE_SYSV)
	printf ("mips-mips-riscos%ssysv\n", argv[1]); exit (0);
	#endif
	#if defined (SYSTYPE_SVR4)
	printf ("mips-mips-riscos%ssvr4\n", argv[1]); exit (0);
	#endif
	#if defined (SYSTYPE_BSD43) || defined(SYSTYPE_BSD)
	printf ("mips-mips-riscos%sbsd\n", argv[1]); exit (0);
	#endif
	#endif
	exit (-1);
	}
EOF
	$CC_FOR_BUILD -o $dummy $dummy.c &&
	  dummyarg=`echo "${UNAME_RELEASE}" | sed -n 's/\([0-9]*\).*/\1/p'` &&
	  SYSTEM_NAME=`$dummy $dummyarg` &&
	    { echo "$SYSTEM_NAME"; exit; }
	echo mips-mips-riscos${UNAME_RELEASE}
	exit ;;
    Motorola:PowerMAX_OS:*:*)
	echo powerpc-motorola-powermax
	exit ;;
    Motorola:*:4.3:PL8-*)
	echo powerpc-harris-powermax
	exit ;;
    Night_Hawk:*:*:PowerMAX_OS | Synergy:PowerMAX_OS:*:*)
	echo powerpc-harris-powermax
	exit ;;
    Night_Hawk:Power_UNIX:*:*)
	echo powerpc-harris-powerunix
	exit ;;
    m88k:CX/UX:7*:*)
	echo m88k-harris-cxux7
	exit ;;
    m88k:*:4*:R4*)
	echo m88k-motorola-sysv4
	exit ;;
    m88k:*:3*:R3*)
	echo m88k-motorola-sysv3
	exit ;;
    AViiON:dgux:*:*)
	# DG/UX returns AViiON for all architectures
	UNAME_PROCESSOR=`/usr/bin/uname -p`
	if [ $UNAME_PROCESSOR = mc88100 ] || [ $UNAME_PROCESSOR = mc88110 ]
	then
	    if [ ${TARGET_BINARY_INTERFACE}x = m88kdguxelfx ] || \
	       [ ${TARGET_BINARY_INTERFACE}x = x ]
	    then
		echo m88k-dg-dgux${UNAME_RELEASE}
	    else
		echo m88k-dg-dguxbcs${UNAME_RELEASE}
	    fi
	else
	    echo i586-dg-dgux${UNAME_RELEASE}
	fi
	exit ;;
    M88*:DolphinOS:*:*)	# DolphinOS (SVR3)
	echo m88k-dolphin-sysv3
	exit ;;
    M88*:*:R3*:*)
	# Delta 88k system running SVR3
	echo m88k-motorola-sysv3
	exit ;;
    XD88*:*:*:*) # Tektronix XD88 system running UTekV (SVR3)
	echo m88k-tektronix-sysv3
	exit ;;
    Tek43[0-9][0-9]:UTek:*:*) # Tektronix 4300 system running UTek (BSD)
	echo m68k-tektronix-bsd
	exit ;;
    *:IRIX*:*:*)
	echo mips-sgi-irix`echo ${UNAME_RELEASE}|sed -e 's/-/_/g'`
	exit ;;
    ????????:AIX?:[12].1:2)	# AIX 2.2.1 or AIX 2.1.1 is RT/PC AIX.
	echo romp-ibm-aix	# uname -m gives an 8 hex-code CPU id
	exit ;;			# Note that: echo "'`uname -s`'" gives 'AIX '
    i*86:AIX:*:*)
	echo i386-ibm-aix
	exit ;;
    ia64:AIX:*:*)
	if [ -x /usr/bin/oslevel ] ; then
		IBM_REV=`/usr/bin/oslevel`
	else
		IBM_REV=${UNAME_VERSION}.${UNAME_RELEASE}
	fi
	echo ${UNAME_MACHINE}-ibm-aix${IBM_REV}
	exit ;;
    *:AIX:2:3)
	if grep bos325 /usr/include/stdio.h >/dev/null 2>&1; then
		eval $set_cc_for_build
		sed 's/^		//' << EOF >$dummy.c
		#include <sys/systemcfg.h>

		main()
			{
			if (!__power_pc())
				exit(1);
			puts("powerpc-ibm-aix3.2.5");
			exit(0);
			}
EOF
		if $CC_FOR_BUILD -o $dummy $dummy.c && SYSTEM_NAME=`$dummy`
		then
			echo "$SYSTEM_NAME"
		else
			echo rs6000-ibm-aix3.2.5
		fi
	elif grep bos324 /usr/include/stdio.h >/dev/null 2>&1; then
		echo rs6000-ibm-aix3.2.4
	else
		echo rs6000-ibm-aix3.2
	fi
	exit ;;
    *:AIX:*:[4567])
	IBM_CPU_ID=`/usr/sbin/lsdev -C -c processor -S available | sed 1q | awk '{ print $1 }'`
	if /usr/sbin/lsattr -El ${IBM_CPU_ID} | grep ' POWER' >/dev/null 2>&1; then
		IBM_ARCH=rs6000
	else
		IBM_ARCH=powerpc
	fi
	if [ -x /usr/bin/oslevel ] ; then
		IBM_REV=`/usr/bin/oslevel`
	else
		IBM_REV=${UNAME_VERSION}.${UNAME_RELEASE}
	fi
	echo ${IBM_ARCH}-ibm-aix${IBM_REV}
	exit ;;
    *:AIX:*:*)
	echo rs6000-ibm-aix
	exit ;;
    ibmrt:4.4BSD:*|romp-ibm:BSD:*)
	echo romp-ibm-bsd4.4
	exit ;;
    ibmrt:*BSD:*|romp-ibm:BSD:*)	# covers RT/PC BSD and
	echo romp-ibm-bsd${UNAME_RELEASE}	# 4.3 with uname added to
	exit ;;				# report: romp-ibm BSD 4.3
    *:BOSX:*:*)
	echo rs6000-bull-bosx
	exit ;;
    DPX/2?00:B.O.S.:*:*)
	echo m68k-bull-sysv3
	exit ;;
    9000/[34]??:4.3bsd:1.*:*)
	echo m68k-hp-bsd
	exit ;;
    hp300:4.4BSD:*:* | 9000/[34]??:4.3bsd:2.*:*)
	echo m68k-hp-bsd4.4
	exit ;;
    9000/[34678]??:HP-UX:*:*)
	HPUX_REV=`echo ${UNAME_RELEASE}|sed -e 's/[^.]*.[0B]*//'`
	case "${UNAME_MACHINE}" in
	    9000/31? )            HP_ARCH=m68000 ;;
	    9000/[34]?? )         HP_ARCH=m68k ;;
	    9000/[678][0-9][0-9])
		if [ -x /usr/bin/getconf ]; then
		    sc_cpu_version=`/usr/bin/getconf SC_CPU_VERSION 2>/dev/null`
		    sc_kernel_bits=`/usr/bin/getconf SC_KERNEL_BITS 2>/dev/null`
		    case "${sc_cpu_version}" in
		      523) HP_ARCH="hppa1.0" ;; # CPU_PA_RISC1_0
		      528) HP_ARCH="hppa1.1" ;; # CPU_PA_RISC1_1
		      532)                      # CPU_PA_RISC2_0
			case "${sc_kernel_bits}" in
			  32) HP_ARCH="hppa2.0n" ;;
			  64) HP_ARCH="hppa2.0w" ;;
			  '') HP_ARCH="hppa2.0" ;;   # HP-UX 10.20
			esac ;;
		    esac
		fi
		if [ "${HP_ARCH}" = "" ]; then
		    eval $set_cc_for_build
		    sed 's/^		//' << EOF >$dummy.c

		#define _HPUX_SOURCE
		#include <stdlib.h>
		#include <unistd.h>

		int main ()
		{
		#if defined(_SC_KERNEL_BITS)
		    long bits = sysconf(_SC_KERNEL_BITS);
		#endif
		    long cpu  = sysconf (_SC_CPU_VERSION);

		    switch (cpu)
			{
			case CPU_PA_RISC1_0: puts ("hppa1.0"); break;
			case CPU_PA_RISC1_1: puts ("hppa1.1"); break;
			case CPU_PA_RISC2_0:
		#if defined(_SC_KERNEL_BITS)
			    switch (bits)
				{
				case 64: puts ("hppa2.0w"); break;
				case 32: puts ("hppa2.0n"); break;
				default: puts ("hppa2.0"); break;
				} break;
		#else  /* !defined(_SC_KERNEL_BITS) */
			    puts ("hppa2.0"); break;
		#endif
			default: puts ("hppa1.0"); break;
			}
		    exit (0);
		}
EOF
		    (CCOPTS= $CC_FOR_BUILD -o $dummy $dummy.c 2>/dev/null) && HP_ARCH=`$dummy`
		    test -z "$HP_ARCH" && HP_ARCH=hppa
		fi ;;
	esac
	if [ ${HP_ARCH} = "hppa2.0w" ]
	then
	    eval $set_cc_for_build

	    # hppa2.0w-hp-hpux* has a 64-bit kernel and a compiler generating
	    # 32-bit code.  hppa64-hp-hpux* has the same kernel and a compiler
	    # generating 64-bit code.  GNU and HP use different nomenclature:
	    #
	    # $ CC_FOR_BUILD=cc ./config.guess
	    # => hppa2.0w-hp-hpux11.23
	    # $ CC_FOR_BUILD="cc +DA2.0w" ./config.guess
	    # => hppa64-hp-hpux11.23

	    if echo __LP64__ | (CCOPTS= $CC_FOR_BUILD -E - 2>/dev/null) |
		grep -q __LP64__
	    then
		HP_ARCH="hppa2.0w"
	    else
		HP_ARCH="hppa64"
	    fi
	fi
	echo ${HP_ARCH}-hp-hpux${HPUX_REV}
	exit ;;
    ia64:HP-UX:*:*)
	HPUX_REV=`echo ${UNAME_RELEASE}|sed -e 's/[^.]*.[0B]*//'`
	echo ia64-hp-hpux${HPUX_REV}
	exit ;;
    3050*:HI-UX:*:*)
	eval $set_cc_for_build
	sed 's/^	//' << EOF >$dummy.c
	#include <unistd.h>
	int
	main ()
	{
	  long cpu = sysconf (_SC_CPU_VERSION);
	  /* The order matters, because CPU_IS_HP_MC68K erroneously returns
	     true for CPU_PA_RISC1_0.  CPU_IS_PA_RISC returns correct
	     results, however.  */
	  if (CPU_IS_PA_RISC (cpu))
	    {
	      switch (cpu)
		{
		  case CPU_PA_RISC1_0: puts ("hppa1.0-hitachi-hiuxwe2"); break;
		  case CPU_PA_RISC1_1: puts ("hppa1.1-hitachi-hiuxwe2"); break;
		  case CPU_PA_RISC2_0: puts ("hppa2.0-hitachi-hiuxwe2"); break;
		  default: puts ("hppa-hitachi-hiuxwe2"); break;
		}
	    }
	  else if (CPU_IS_HP_MC68K (cpu))
	    puts ("m68k-hitachi-hiuxwe2");
	  else puts ("unknown-hitachi-hiuxwe2");
	  exit (0);
	}
EOF
	$CC_FOR_BUILD -o $dummy $dummy.c && SYSTEM_NAME=`$dummy` &&
		{ echo "$SYSTEM_NAME"; exit; }
	echo unknown-hitachi-hiuxwe2
	exit ;;
    9000/7??:4.3bsd:*:* | 9000/8?[79]:4.3bsd:*:* )
	echo hppa1.1-hp-bsd
	exit ;;
    9000/8??:4.3bsd:*:*)
	echo hppa1.0-hp-bsd
	exit ;;
    *9??*:MPE/iX:*:* | *3000*:MPE/iX:*:*)
	echo hppa1.0-hp-mpeix
	exit ;;
    hp7??:OSF1:*:* | hp8?[79]:OSF1:*:* )
	echo hppa1.1-hp-osf
	exit ;;
    hp8??:OSF1:*:*)
	echo hppa1.0-hp-osf
	exit ;;
    i*86:OSF1:*:*)
	if [ -x /usr/sbin/sysversion ] ; then
	    echo ${UNAME_MACHINE}-unknown-osf1mk
	else
	    echo ${UNAME_MACHINE}-unknown-osf1
	fi
	exit ;;
    parisc*:Lites*:*:*)
	echo hppa1.1-hp-lites
	exit ;;
    C1*:ConvexOS:*:* | convex:ConvexOS:C1*:*)
	echo c1-convex-bsd
	exit ;;
    C2*:ConvexOS:*:* | convex:ConvexOS:C2*:*)
	if getsysinfo -f scalar_acc
	then echo c32-convex-bsd
	else echo c2-convex-bsd
	fi
	exit ;;
    C34*:ConvexOS:*:* | convex:ConvexOS:C34*:*)
	echo c34-convex-bsd
	exit ;;
    C38*:ConvexOS:*:* | convex:ConvexOS:C38*:*)
	echo c38-convex-bsd
	exit ;;
    C4*:ConvexOS:*:* | convex:ConvexOS:C4*:*)
	echo c4-convex-bsd
	exit ;;
    CRAY*Y-MP:*:*:*)
	echo ymp-cray-unicos${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/'
	exit ;;
    CRAY*[A-Z]90:*:*:*)
	echo ${UNAME_MACHINE}-cray-unicos${UNAME_RELEASE} \
	| sed -e 's/CRAY.*\([A-Z]90\)/\1/' \
	      -e y/ABCDEFGHIJKLMNOPQRSTUVWXYZ/abcdefghijklmnopqrstuvwxyz/ \
	      -e 's/\.[^.]*$/.X/'
	exit ;;
    CRAY*TS:*:*:*)
	echo t90-cray-unicos${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/'
	exit ;;
    CRAY*T3E:*:*:*)
	echo alphaev5-cray-unicosmk${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/'
	exit ;;
    CRAY*SV1:*:*:*)
	echo sv1-cray-unicos${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/'
	exit ;;
    *:UNICOS/mp:*:*)
	echo craynv-cray-unicosmp${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/'
	exit ;;
    F30[01]:UNIX_System_V:*:* | F700:UNIX_System_V:*:*)
	FUJITSU_PROC=`uname -m | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz'`
	FUJITSU_SYS=`uname -p | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz' | sed -e 's/\///'`
	FUJITSU_REL=`echo ${UNAME_RELEASE} | sed -e 's/ /_/'`
	echo "${FUJITSU_PROC}-fujitsu-${FUJITSU_SYS}${FUJITSU_REL}"
	exit ;;
    5000:UNIX_System_V:4.*:*)
	FUJITSU_SYS=`uname -p | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz' | sed -e 's/\///'`
	FUJITSU_REL=`echo ${UNAME_RELEASE} | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz' | sed -e 's/ /_/'`
	echo "sparc-fujitsu-${FUJITSU_SYS}${FUJITSU_REL}"
	exit ;;
    i*86:BSD/386:*:* | i*86:BSD/OS:*:* | *:Ascend\ Embedded/OS:*:*)
	echo ${UNAME_MACHINE}-pc-bsdi${UNAME_RELEASE}
	exit ;;
    sparc*:BSD/OS:*:*)
	echo sparc-unknown-bsdi${UNAME_RELEASE}
	exit ;;
    *:BSD/OS:*:*)
	echo ${UNAME_MACHINE}-unknown-bsdi${UNAME_RELEASE}
	exit ;;
    *:FreeBSD:*:*)
	case ${UNAME_MACHINE} in
	    pc98)
		echo i386-unknown-freebsd`echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'` ;;
	    amd64)
		echo x86_64-unknown-freebsd`echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'` ;;
	    *)
		echo ${UNAME_MACHINE}-unknown-freebsd`echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'` ;;
	esac
	exit ;;
    i*:CYGWIN*:*)
	echo ${UNAME_MACHINE}-pc-cygwin
	exit ;;
    *:MINGW*:*)
	echo ${UNAME_MACHINE}-pc-mingw32
	exit ;;
    i*:windows32*:*)
	# uname -m includes "-pc" on this system.
	echo ${UNAME_MACHINE}-mingw32
	exit ;;
    i*:PW*:*)
	echo ${UNAME_MACHINE}-pc-pw32
	exit ;;
    *:Interix*:*)
	case ${UNAME_MACHINE} in
	    x86)
		echo i586-pc-interix${UNAME_RELEASE}
		exit ;;
	    authenticamd | genuineintel | EM64T)
		echo x86_64-unknown-interix${UNAME_RELEASE}
		exit ;;
	    IA64)
		echo ia64-unknown-interix${UNAME_RELEASE}
		exit ;;
	esac ;;
    [345]86:Windows_95:* | [345]86:Windows_98:* | [345]86:Windows_NT:*)
	echo i${UNAME_MACHINE}-pc-mks
	exit ;;
    8664:Windows_NT:*)
	echo x86_64-pc-mks
	exit ;;
    i*:Windows_NT*:* | Pentium*:Windows_NT*:*)
	# How do we know it's Interix rather than the generic POSIX subsystem?
	# It also conflicts with pre-2.0 versions of AT&T UWIN. Should we
	# UNAME_MACHINE based on the output of uname instead of i386?
	echo i586-pc-interix
	exit ;;
    i*:UWIN*:*)
	echo ${UNAME_MACHINE}-pc-uwin
	exit ;;
    amd64:CYGWIN*:*:* | x86_64:CYGWIN*:*:*)
	echo x86_64-unknown-cygwin
	exit ;;
    p*:CYGWIN*:*)
	echo powerpcle-unknown-cygwin
	exit ;;
    prep*:SunOS:5.*:*)
	echo powerpcle-unknown-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'`
	exit ;;
    *:GNU:*:*)
	# the GNU system
	echo `echo ${UNAME_MACHINE}|sed -e 's,[-/].*$,,'`-unknown-gnu`echo ${UNAME_RELEASE}|sed -e 's,/.*$,,'`
	exit ;;
    *:GNU/*:*:*)
	# other systems with GNU libc and userland
	echo ${UNAME_MACHINE}-unknown-`echo ${UNAME_SYSTEM} | sed 's,^[^/]*/,,' | tr '[A-Z]' '[a-z]'``echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'`-gnu
	exit ;;
    i*86:Minix:*:*)
	echo ${UNAME_MACHINE}-pc-minix
	exit ;;
    alpha:Linux:*:*)
	case `sed -n '/^cpu model/s/^.*: \(.*\)/\1/p' < /proc/cpuinfo` in
	  EV5)   UNAME_MACHINE=alphaev5 ;;
	  EV56)  UNAME_MACHINE=alphaev56 ;;
	  PCA56) UNAME_MACHINE=alphapca56 ;;
	  PCA57) UNAME_MACHINE=alphapca56 ;;
	  EV6)   UNAME_MACHINE=alphaev6 ;;
	  EV67)  UNAME_MACHINE=alphaev67 ;;
	  EV68*) UNAME_MACHINE=alphaev68 ;;
	esac
	objdump --private-headers /bin/sh | grep -q ld.so.1
	if test "$?" = 0 ; then LIBC="libc1" ; else LIBC="" ; fi
	echo ${UNAME_MACHINE}-unknown-linux-gnu${LIBC}
	exit ;;
    arm*:Linux:*:*)
	eval $set_cc_for_build
	if echo __ARM_EABI__ | $CC_FOR_BUILD -E - 2>/dev/null \
	    | grep -q __ARM_EABI__
	then
	    echo ${UNAME_MACHINE}-unknown-linux-gnu
	else
	    if echo __ARM_PCS_VFP | $CC_FOR_BUILD -E - 2>/dev/null \
		| grep -q __ARM_PCS_VFP
	    then
		echo ${UNAME_MACHINE}-unknown-linux-gnueabi
	    else
		echo ${UNAME_MACHINE}-unknown-linux-gnueabihf
	    fi
	fi
	exit ;;
    avr32*:Linux:*:*)
	echo ${UNAME_MACHINE}-unknown-linux-gnu
	exit ;;
    cris:Linux:*:*)
	echo cris-axis-linux-gnu
	exit ;;
    crisv32:Linux:*:*)
	echo crisv32-axis-linux-gnu
	exit ;;
    frv:Linux:*:*)
	echo frv-unknown-linux-gnu
	exit ;;
    i*86:Linux:*:*)
	LIBC=gnu
	eval $set_cc_for_build
	sed 's/^	//' << EOF >$dummy.c
	#ifdef __dietlibc__
	LIBC=dietlibc
	#endif
EOF
	eval `$CC_FOR_BUILD -E $dummy.c 2>/dev/null | grep '^LIBC'`
	echo "${UNAME_MACHINE}-pc-linux-${LIBC}"
	exit ;;
    ia64:Linux:*:*)
	echo ${UNAME_MACHINE}-unknown-linux-gnu
	exit ;;
    m32r*:Linux:*:*)
	echo ${UNAME_MACHINE}-unknown-linux-gnu
	exit ;;
    m68*:Linux:*:*)
	echo ${UNAME_MACHINE}-unknown-linux-gnu
	exit ;;
    mips:Linux:*:* | mips64:Linux:*:*)
	eval $set_cc_for_build
	sed 's/^	//' << EOF >$dummy.c
	#undef CPU
	#undef ${UNAME_MACHINE}
	#undef ${UNAME_MACHINE}el
	#if defined(__MIPSEL__) || defined(__MIPSEL) || defined(_MIPSEL) || defined(MIPSEL)
	CPU=${UNAME_MACHINE}el
	#else
	#if defined(__MIPSEB__) || defined(__MIPSEB) || defined(_MIPSEB) || defined(MIPSEB)
	CPU=${UNAME_MACHINE}
	#else
	CPU=
	#endif
	#endif
EOF
	eval `$CC_FOR_BUILD -E $dummy.c 2>/dev/null | grep '^CPU'`
	test x"${CPU}" != x && { echo "${CPU}-unknown-linux-gnu"; exit; }
	;;
    or32:Linux:*:*)
	echo or32-unknown-linux-gnu
	exit ;;
    padre:Linux:*:*)
	echo sparc-unknown-linux-gnu
	exit ;;
    parisc64:Linux:*:* | hppa64:Linux:*:*)
	echo hppa64-unknown-linux-gnu
	exit ;;
    parisc:Linux:*:* | hppa:Linux:*:*)
	# Look for CPU level
	case `grep '^cpu[^a-z]*:' /proc/cpuinfo 2>/dev/null | cut -d' ' -f2` in
	  PA7*) echo hppa1.1-unknown-linux-gnu ;;
	  PA8*) echo hppa2.0-unknown-linux-gnu ;;
	  *)    echo hppa-unknown-linux-gnu ;;
	esac
	exit ;;
    ppc64:Linux:*:*)
	echo powerpc64-unknown-linux-gnu
	exit ;;
    ppc:Linux:*:*)
	echo powerpc-unknown-linux-gnu
	exit ;;
    s390:Linux:*:* | s390x:Linux:*:*)
	echo ${UNAME_MACHINE}-ibm-linux
	exit ;;
    sh64*:Linux:*:*)
	echo ${UNAME_MACHINE}-unknown-linux-gnu
	exit ;;
    sh*:Linux:*:*)
	echo ${UNAME_MACHINE}-unknown-linux-gnu
	exit ;;
    sparc:Linux:*:* | sparc64:Linux:*:*)
	echo ${UNAME_MACHINE}-unknown-linux-gnu
	exit ;;
    tile*:Linux:*:*)
	echo ${UNAME_MACHINE}-tilera-linux-gnu
	exit ;;
    vax:Linux:*:*)
	echo ${UNAME_MACHINE}-dec-linux-gnu
	exit ;;
    x86_64:Linux:*:*)
	echo x86_64-unknown-linux-gnu
	exit ;;
    xtensa*:Linux:*:*)
	echo ${UNAME_MACHINE}-unknown-linux-gnu
	exit ;;
    i*86:DYNIX/ptx:4*:*)
	# ptx 4.0 does uname -s correctly, with DYNIX/ptx in there.
	# earlier versions are messed up and put the nodename in both
	# sysname and nodename.
	echo i386-sequent-sysv4
	exit ;;
    i*86:UNIX_SV:4.2MP:2.*)
	# Unixware is an offshoot of SVR4, but it has its own version
	# number series starting with 2...
	# I am not positive that other SVR4 systems won't match this,
	# I just have to hope.  -- rms.
	# Use sysv4.2uw... so that sysv4* matches it.
	echo ${UNAME_MACHINE}-pc-sysv4.2uw${UNAME_VERSION}
	exit ;;
    i*86:OS/2:*:*)
	# If we were able to find `uname', then EMX Unix compatibility
	# is probably installed.
	echo ${UNAME_MACHINE}-pc-os2-emx
	exit ;;
    i*86:XTS-300:*:STOP)
	echo ${UNAME_MACHINE}-unknown-stop
	exit ;;
    i*86:atheos:*:*)
	echo ${UNAME_MACHINE}-unknown-atheos
	exit ;;
    i*86:syllable:*:*)
	echo ${UNAME_MACHINE}-pc-syllable
	exit ;;
    i*86:LynxOS:2.*:* | i*86:LynxOS:3.[01]*:* | i*86:LynxOS:4.[02]*:*)
	echo i386-unknown-lynxos${UNAME_RELEASE}
	exit ;;
    i*86:*DOS:*:*)
	echo ${UNAME_MACHINE}-pc-msdosdjgpp
	exit ;;
    i*86:*:4.*:* | i*86:SYSTEM_V:4.*:*)
	UNAME_REL=`echo ${UNAME_RELEASE} | sed 's/\/MP$//'`
	if grep Novell /usr/include/link.h >/dev/null 2>/dev/null; then
		echo ${UNAME_MACHINE}-univel-sysv${UNAME_REL}
	else
		echo ${UNAME_MACHINE}-pc-sysv${UNAME_REL}
	fi
	exit ;;
    i*86:*:5:[678]*)
	# UnixWare 7.x, OpenUNIX and OpenServer 6.
	case `/bin/uname -X | grep "^Machine"` in
	    *486*)	     UNAME_MACHINE=i486 ;;
	    *Pentium)	     UNAME_MACHINE=i586 ;;
	    *Pent*|*Celeron) UNAME_MACHINE=i686 ;;
	esac
	echo ${UNAME_MACHINE}-unknown-sysv${UNAME_RELEASE}${UNAME_SYSTEM}${UNAME_VERSION}
	exit ;;
    i*86:*:3.2:*)
	if test -f /usr/options/cb.name; then
		UNAME_REL=`sed -n 's/.*Version //p' </usr/options/cb.name`
		echo ${UNAME_MACHINE}-pc-isc$UNAME_REL
	elif /bin/uname -X 2>/dev/null >/dev/null ; then
		UNAME_REL=`(/bin/uname -X|grep Release|sed -e 's/.*= //')`
		(/bin/uname -X|grep i80486 >/dev/null) && UNAME_MACHINE=i486
		(/bin/uname -X|grep '^Machine.*Pentium' >/dev/null) \
			&& UNAME_MACHINE=i586
		(/bin/uname -X|grep '^Machine.*Pent *II' >/dev/null) \
			&& UNAME_MACHINE=i686
		(/bin/uname -X|grep '^Machine.*Pentium Pro' >/dev/null) \
			&& UNAME_MACHINE=i686
		echo ${UNAME_MACHINE}-pc-sco$UNAME_REL
	else
		echo ${UNAME_MACHINE}-pc-sysv32
	fi
	exit ;;
    pc:*:*:*)
	# Left here for compatibility:
	# uname -m prints for DJGPP always 'pc', but it prints nothing about
	# the processor, so we play safe by assuming i586.
	# Note: whatever this is, it MUST be the same as what config.sub
	# prints for the "djgpp" host, or else GDB configury will decide that
	# this is a cross-build.
	echo i586-pc-msdosdjgpp
	exit ;;
    Intel:Mach:3*:*)
	echo i386-pc-mach3
	exit ;;
    paragon:*:*:*)
	echo i860-intel-osf1
	exit ;;
    i860:*:4.*:*) # i860-SVR4
	if grep Stardent /usr/include/sys/uadmin.h >/dev/null 2>&1 ; then
	  echo i860-stardent-sysv${UNAME_RELEASE} # Stardent Vistra i860-SVR4
	else # Add other i860-SVR4 vendors below as they are discovered.
	  echo i860-unknown-sysv${UNAME_RELEASE}  # Unknown i860-SVR4
	fi
	exit ;;
    mini*:CTIX:SYS*5:*)
	# "miniframe"
	echo m68010-convergent-sysv
	exit ;;
    mc68k:UNIX:SYSTEM5:3.51m)
	echo m68k-convergent-sysv
	exit ;;
    M680?0:D-NIX:5.3:*)
	echo m68k-diab-dnix
	exit ;;
    M68*:*:R3V[5678]*:*)
	test -r /sysV68 && { echo 'm68k-motorola-sysv'; exit; } ;;
    3[345]??:*:4.0:3.0 | 3[34]??A:*:4.0:3.0 | 3[34]??,*:*:4.0:3.0 | 3[34]??/*:*:4.0:3.0 | 4400:*:4.0:3.0 | 4850:*:4.0:3.0 | SKA40:*:4.0:3.0 | SDS2:*:4.0:3.0 | SHG2:*:4.0:3.0 | S7501*:*:4.0:3.0)
	OS_REL=''
	test -r /etc/.relid \
	&& OS_REL=.`sed -n 's/[^ ]* [^ ]* \([0-9][0-9]\).*/\1/p' < /etc/.relid`
	/bin/uname -p 2>/dev/null | grep 86 >/dev/null \
	  && { echo i486-ncr-sysv4.3${OS_REL}; exit; }
	/bin/uname -p 2>/dev/null | /bin/grep entium >/dev/null \
	  && { echo i586-ncr-sysv4.3${OS_REL}; exit; } ;;
    3[34]??:*:4.0:* | 3[34]??,*:*:4.0:*)
	/bin/uname -p 2>/dev/null | grep 86 >/dev/null \
	  && { echo i486-ncr-sysv4; exit; } ;;
    NCR*:*:4.2:* | MPRAS*:*:4.2:*)
	OS_REL='.3'
	test -r /etc/.relid \
	    && OS_REL=.`sed -n 's/[^ ]* [^ ]* \([0-9][0-9]\).*/\1/p' < /etc/.relid`
	/bin/uname -p 2>/dev/null | grep 86 >/dev/null \
	    && { echo i486-ncr-sysv4.3${OS_REL}; exit; }
	/bin/uname -p 2>/dev/null | /bin/grep entium >/dev/null \
	    && { echo i586-ncr-sysv4.3${OS_REL}; exit; }
	/bin/uname -p 2>/dev/null | /bin/grep pteron >/dev/null \
	    && { echo i586-ncr-sysv4.3${OS_REL}; exit; } ;;
    m68*:LynxOS:2.*:* | m68*:LynxOS:3.0*:*)
	echo m68k-unknown-lynxos${UNAME_RELEASE}
	exit ;;
    mc68030:UNIX_System_V:4.*:*)
	echo m68k-atari-sysv4
	exit ;;
    TSUNAMI:LynxOS:2.*:*)
	echo sparc-unknown-lynxos${UNAME_RELEASE}
	exit ;;
    rs6000:LynxOS:2.*:*)
	echo rs6000-unknown-lynxos${UNAME_RELEASE}
	exit ;;
    PowerPC:LynxOS:2.*:* | PowerPC:LynxOS:3.[01]*:* | PowerPC:LynxOS:4.[02]*:*)
	echo powerpc-unknown-lynxos${UNAME_RELEASE}
	exit ;;
    SM[BE]S:UNIX_SV:*:*)
	echo mips-dde-sysv${UNAME_RELEASE}
	exit ;;
    RM*:ReliantUNIX-*:*:*)
	echo mips-sni-sysv4
	exit ;;
    RM*:SINIX-*:*:*)
	echo mips-sni-sysv4
	exit ;;
    *:SINIX-*:*:*)
	if uname -p 2>/dev/null >/dev/null ; then
		UNAME_MACHINE=`(uname -p) 2>/dev/null`
		echo ${UNAME_MACHINE}-sni-sysv4
	else
		echo ns32k-sni-sysv
	fi
	exit ;;
    PENTIUM:*:4.0*:*)	# Unisys `ClearPath HMP IX 4000' SVR4/MP effort
			# says <Richard.M.Bartel@ccMail.Census.GOV>
	echo i586-unisys-sysv4
	exit ;;
    *:UNIX_System_V:4*:FTX*)
	# From Gerald Hewes <hewes@openmarket.com>.
	# How about differentiating between stratus architectures? -djm
	echo hppa1.1-stratus-sysv4
	exit ;;
    *:*:*:FTX*)
	# From seanf@swdc.stratus.com.
	echo i860-stratus-sysv4
	exit ;;
    i*86:VOS:*:*)
	# From Paul.Green@stratus.com.
	echo ${UNAME_MACHINE}-stratus-vos
	exit ;;
    *:VOS:*:*)
	# From Paul.Green@stratus.com.
	echo hppa1.1-stratus-vos
	exit ;;
    mc68*:A/UX:*:*)
	echo m68k-apple-aux${UNAME_RELEASE}
	exit ;;
    news*:NEWS-OS:6*:*)
	echo mips-sony-newsos6
	exit ;;
    R[34]000:*System_V*:*:* | R4000:UNIX_SYSV:*:* | R*000:UNIX_SV:*:*)
	if [ -d /usr/nec ]; then
		echo mips-nec-sysv${UNAME_RELEASE}
	else
		echo mips-unknown-sysv${UNAME_RELEASE}
	fi
	exit ;;
    BeBox:BeOS:*:*)	# BeOS running on hardware made by Be, PPC only.
	echo powerpc-be-beos
	exit ;;
    BeMac:BeOS:*:*)	# BeOS running on Mac or Mac clone, PPC only.
	echo powerpc-apple-beos
	exit ;;
    BePC:BeOS:*:*)	# BeOS running on Intel PC compatible.
	echo i586-pc-beos
	exit ;;
    BePC:Haiku:*:*)	# Haiku running on Intel PC compatible.
	echo i586-pc-haiku
	exit ;;
    SX-4:SUPER-UX:*:*)
	echo sx4-nec-superux${UNAME_RELEASE}
	exit ;;
    SX-5:SUPER-UX:*:*)
	echo sx5-nec-superux${UNAME_RELEASE}
	exit ;;
    SX-6:SUPER-UX:*:*)
	echo sx6-nec-superux${UNAME_RELEASE}
	exit ;;
    SX-7:SUPER-UX:*:*)
	echo sx7-nec-superux${UNAME_RELEASE}
	exit ;;
    SX-8:SUPER-UX:*:*)
	echo sx8-nec-superux${UNAME_RELEASE}
	exit ;;
    SX-8R:SUPER-UX:*:*)
	echo sx8r-nec-superux${UNAME_RELEASE}
	exit ;;
    Power*:Rhapsody:*:*)
	echo powerpc-apple-rhapsody${UNAME_RELEASE}
	exit ;;
    *:Rhapsody:*:*)
	echo ${UNAME_MACHINE}-apple-rhapsody${UNAME_RELEASE}
	exit ;;
    *:Darwin:*:*)
	UNAME_PROCESSOR=`uname -p` || UNAME_PROCESSOR=unknown
	case $UNAME_PROCESSOR in
	    i386)
		eval $set_cc_for_build
		if [ "$CC_FOR_BUILD" != 'no_compiler_found' ]; then
		    if (echo '#ifdef __LP64__'; echo IS_64BIT_ARCH; echo '#endif') | \
			(CCOPTS= $CC_FOR_BUILD -E - 2>/dev/null) | \
			grep IS_64BIT_ARCH >/dev/null
		    then
			UNAME_PROCESSOR="x86_64"
		    fi
		fi ;;
	    unknown) UNAME_PROCESSOR=powerpc ;;
	esac
	echo ${UNAME_PROCESSOR}-apple-darwin${UNAME_RELEASE}
	exit ;;
    *:procnto*:*:* | *:QNX:[0123456789]*:*)
	UNAME_PROCESSOR=`uname -p`
	if test "$UNAME_PROCESSOR" = "x86"; then
		UNAME_PROCESSOR=i386
		UNAME_MACHINE=pc
	fi
	echo ${UNAME_PROCESSOR}-${UNAME_MACHINE}-nto-qnx${UNAME_RELEASE}
	exit ;;
    *:QNX:*:4*)
	echo i386-pc-qnx
	exit ;;
    NEO-?:NONSTOP_KERNEL:*:*)
	echo neo-tandem-nsk${UNAME_RELEASE}
	exit ;;
    NSE-?:NONSTOP_KERNEL:*:*)
	echo nse-tandem-nsk${UNAME_RELEASE}
	exit ;;
    NSR-?:NONSTOP_KERNEL:*:*)
	echo nsr-tandem-nsk${UNAME_RELEASE}
	exit ;;
    *:NonStop-UX:*:*)
	echo mips-compaq-nonstopux
	exit ;;
    BS2000:POSIX*:*:*)
	echo bs2000-siemens-sysv
	exit ;;
    DS/*:UNIX_System_V:*:*)
	echo ${UNAME_MACHINE}-${UNAME_SYSTEM}-${UNAME_RELEASE}
	exit ;;
    *:Plan9:*:*)
	# "uname -m" is not consistent, so use $cputype instead. 386
	# is converted to i386 for consistency with other x86
	# operating systems.
if test "$cputype" = "386"; then UNAME_MACHINE=i386 else UNAME_MACHINE="$cputype" fi echo ${UNAME_MACHINE}-unknown-plan9 exit ;; *:TOPS-10:*:*) echo pdp10-unknown-tops10 exit ;; *:TENEX:*:*) echo pdp10-unknown-tenex exit ;; KS10:TOPS-20:*:* | KL10:TOPS-20:*:* | TYPE4:TOPS-20:*:*) echo pdp10-dec-tops20 exit ;; XKL-1:TOPS-20:*:* | TYPE5:TOPS-20:*:*) echo pdp10-xkl-tops20 exit ;; *:TOPS-20:*:*) echo pdp10-unknown-tops20 exit ;; *:ITS:*:*) echo pdp10-unknown-its exit ;; SEI:*:*:SEIUX) echo mips-sei-seiux${UNAME_RELEASE} exit ;; *:DragonFly:*:*) echo ${UNAME_MACHINE}-unknown-dragonfly`echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'` exit ;; *:*VMS:*:*) UNAME_MACHINE=`(uname -p) 2>/dev/null` case "${UNAME_MACHINE}" in A*) echo alpha-dec-vms ; exit ;; I*) echo ia64-dec-vms ; exit ;; V*) echo vax-dec-vms ; exit ;; esac ;; *:XENIX:*:SysV) echo i386-pc-xenix exit ;; i*86:skyos:*:*) echo ${UNAME_MACHINE}-pc-skyos`echo ${UNAME_RELEASE}` | sed -e 's/ .*$//' exit ;; i*86:rdos:*:*) echo ${UNAME_MACHINE}-pc-rdos exit ;; i*86:AROS:*:*) echo ${UNAME_MACHINE}-pc-aros exit ;; esac #echo '(No uname command or uname output not recognized.)' 1>&2 #echo "${UNAME_MACHINE}:${UNAME_SYSTEM}:${UNAME_RELEASE}:${UNAME_VERSION}" 1>&2 eval $set_cc_for_build cat >$dummy.c <<EOF #ifdef _SEQUENT_ # include <sys/types.h> # include <sys/utsname.h> #endif main () { #if defined (sony) #if defined (MIPSEB) /* BFD wants "bsd" instead of "newsos". Perhaps BFD should be changed, I don't know....
*/ printf ("mips-sony-bsd\n"); exit (0); #else #include <sys/param.h> printf ("m68k-sony-newsos%s\n", #ifdef NEWSOS4 "4" #else "" #endif ); exit (0); #endif #endif #if defined (__arm) && defined (__acorn) && defined (__unix) printf ("arm-acorn-riscix\n"); exit (0); #endif #if defined (hp300) && !defined (hpux) printf ("m68k-hp-bsd\n"); exit (0); #endif #if defined (NeXT) #if !defined (__ARCHITECTURE__) #define __ARCHITECTURE__ "m68k" #endif int version; version=`(hostinfo | sed -n 's/.*NeXT Mach \([0-9]*\).*/\1/p') 2>/dev/null`; if (version < 4) printf ("%s-next-nextstep%d\n", __ARCHITECTURE__, version); else printf ("%s-next-openstep%d\n", __ARCHITECTURE__, version); exit (0); #endif #if defined (MULTIMAX) || defined (n16) #if defined (UMAXV) printf ("ns32k-encore-sysv\n"); exit (0); #else #if defined (CMU) printf ("ns32k-encore-mach\n"); exit (0); #else printf ("ns32k-encore-bsd\n"); exit (0); #endif #endif #endif #if defined (__386BSD__) printf ("i386-pc-bsd\n"); exit (0); #endif #if defined (sequent) #if defined (i386) printf ("i386-sequent-dynix\n"); exit (0); #endif #if defined (ns32000) printf ("ns32k-sequent-dynix\n"); exit (0); #endif #endif #if defined (_SEQUENT_) struct utsname un; uname(&un); if (strncmp(un.version, "V2", 2) == 0) { printf ("i386-sequent-ptx2\n"); exit (0); } if (strncmp(un.version, "V1", 2) == 0) { /* XXX is V1 correct?
*/ printf ("i386-sequent-ptx1\n"); exit (0); } printf ("i386-sequent-ptx\n"); exit (0); #endif #if defined (vax) # if !defined (ultrix) # include <sys/param.h> # if defined (BSD) # if BSD == 43 printf ("vax-dec-bsd4.3\n"); exit (0); # else # if BSD == 199006 printf ("vax-dec-bsd4.3reno\n"); exit (0); # else printf ("vax-dec-bsd\n"); exit (0); # endif # endif # else printf ("vax-dec-bsd\n"); exit (0); # endif # else printf ("vax-dec-ultrix\n"); exit (0); # endif #endif #if defined (alliant) && defined (i860) printf ("i860-alliant-bsd\n"); exit (0); #endif exit (1); } EOF $CC_FOR_BUILD -o $dummy $dummy.c 2>/dev/null && SYSTEM_NAME=`$dummy` && { echo "$SYSTEM_NAME"; exit; } # Apollos put the system type in the environment. test -d /usr/apollo && { echo ${ISP}-apollo-${SYSTYPE}; exit; } # Convex versions that predate uname can use getsysinfo(1) if [ -x /usr/convex/getsysinfo ] then case `getsysinfo -f cpu_type` in c1*) echo c1-convex-bsd exit ;; c2*) if getsysinfo -f scalar_acc then echo c32-convex-bsd else echo c2-convex-bsd fi exit ;; c34*) echo c34-convex-bsd exit ;; c38*) echo c38-convex-bsd exit ;; c4*) echo c4-convex-bsd exit ;; esac fi cat >&2 <<EOF $0: unable to guess system type This script, last modified $timestamp, has failed to recognize the operating system you are running. It is advised that you download the most up to date version of the config scripts from http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.guess;hb=HEAD and http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.sub;hb=HEAD If the version you run ($0) is already up to date, please send the following data and any information you think might be pertinent to <config-patches@gnu.org> in order to provide the needed information to handle your system.
config.guess timestamp = $timestamp uname -m = `(uname -m) 2>/dev/null || echo unknown` uname -r = `(uname -r) 2>/dev/null || echo unknown` uname -s = `(uname -s) 2>/dev/null || echo unknown` uname -v = `(uname -v) 2>/dev/null || echo unknown` /usr/bin/uname -p = `(/usr/bin/uname -p) 2>/dev/null` /bin/uname -X = `(/bin/uname -X) 2>/dev/null` hostinfo = `(hostinfo) 2>/dev/null` /bin/universe = `(/bin/universe) 2>/dev/null` /usr/bin/arch -k = `(/usr/bin/arch -k) 2>/dev/null` /bin/arch = `(/bin/arch) 2>/dev/null` /usr/bin/oslevel = `(/usr/bin/oslevel) 2>/dev/null` /usr/convex/getsysinfo = `(/usr/convex/getsysinfo) 2>/dev/null` UNAME_MACHINE = ${UNAME_MACHINE} UNAME_RELEASE = ${UNAME_RELEASE} UNAME_SYSTEM = ${UNAME_SYSTEM} UNAME_VERSION = ${UNAME_VERSION} EOF exit 1 # Local variables: # eval: (add-hook 'write-file-hooks 'time-stamp) # time-stamp-start: "timestamp='" # time-stamp-format: "%:y-%02m-%02d" # time-stamp-end: "'" # End: sparsehash-2.0.2/configure0000775000175000017500000064775011721254574012531 00000000000000#! /bin/sh # Guess values for system-dependent variables and create Makefiles. # Generated by GNU Autoconf 2.68 for sparsehash 2.0.2. # # Report bugs to <google-sparsehash@googlegroups.com>. # # # Copyright (C) 1992, 1993, 1994, 1995, 1996, 1998, 1999, 2000, 2001, # 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010 Free Software # Foundation, Inc. # # # This configure script is free software; the Free Software Foundation # gives unlimited permission to copy, distribute and modify it. ## -------------------- ## ## M4sh Initialization. ## ## -------------------- ## # Be more Bourne compatible DUALCASE=1; export DUALCASE # for MKS sh if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then : emulate sh NULLCMD=: # Pre-4.2 versions of Zsh do word splitting on ${1+"$@"}, which # is contrary to our usage. Disable this feature.
alias -g '${1+"$@"}'='"$@"' setopt NO_GLOB_SUBST else case `(set -o) 2>/dev/null` in #( *posix*) : set -o posix ;; #( *) : ;; esac fi as_nl=' ' export as_nl # Printing a long string crashes Solaris 7 /usr/bin/printf. as_echo='\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\' as_echo=$as_echo$as_echo$as_echo$as_echo$as_echo as_echo=$as_echo$as_echo$as_echo$as_echo$as_echo$as_echo # Prefer a ksh shell builtin over an external printf program on Solaris, # but without wasting forks for bash or zsh. if test -z "$BASH_VERSION$ZSH_VERSION" \ && (test "X`print -r -- $as_echo`" = "X$as_echo") 2>/dev/null; then as_echo='print -r --' as_echo_n='print -rn --' elif (test "X`printf %s $as_echo`" = "X$as_echo") 2>/dev/null; then as_echo='printf %s\n' as_echo_n='printf %s' else if test "X`(/usr/ucb/echo -n -n $as_echo) 2>/dev/null`" = "X-n $as_echo"; then as_echo_body='eval /usr/ucb/echo -n "$1$as_nl"' as_echo_n='/usr/ucb/echo -n' else as_echo_body='eval expr "X$1" : "X\\(.*\\)"' as_echo_n_body='eval arg=$1; case $arg in #( *"$as_nl"*) expr "X$arg" : "X\\(.*\\)$as_nl"; arg=`expr "X$arg" : ".*$as_nl\\(.*\\)"`;; esac; expr "X$arg" : "X\\(.*\\)" | tr -d "$as_nl" ' export as_echo_n_body as_echo_n='sh -c $as_echo_n_body as_echo' fi export as_echo_body as_echo='sh -c $as_echo_body as_echo' fi # The user is always right. if test "${PATH_SEPARATOR+set}" != set; then PATH_SEPARATOR=: (PATH='/bin;/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 && { (PATH='/bin:/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 || PATH_SEPARATOR=';' } fi # IFS # We need space, tab and new line, in precisely that order. Quoting is # there to prevent editors from complaining about space-tab. # (If _AS_PATH_WALK were called with IFS unset, it would disable word # splitting by setting IFS to empty value.) IFS=" "" $as_nl" # Find who we are. Look in the path if we contain no directory separator. 
as_myself= case $0 in #(( *[\\/]* ) as_myself=$0 ;; *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. test -r "$as_dir/$0" && as_myself=$as_dir/$0 && break done IFS=$as_save_IFS ;; esac # We did not find ourselves, most probably we were run as `sh COMMAND' # in which case we are not to be found in the path. if test "x$as_myself" = x; then as_myself=$0 fi if test ! -f "$as_myself"; then $as_echo "$as_myself: error: cannot find myself; rerun with an absolute file name" >&2 exit 1 fi # Unset variables that we do not need and which cause bugs (e.g. in # pre-3.0 UWIN ksh). But do not cause bugs in bash 2.01; the "|| exit 1" # suppresses any "Segmentation fault" message there. '((' could # trigger a bug in pdksh 5.2.14. for as_var in BASH_ENV ENV MAIL MAILPATH do eval test x\${$as_var+set} = xset \ && ( (unset $as_var) || exit 1) >/dev/null 2>&1 && unset $as_var || : done PS1='$ ' PS2='> ' PS4='+ ' # NLS nuisances. LC_ALL=C export LC_ALL LANGUAGE=C export LANGUAGE # CDPATH. (unset CDPATH) >/dev/null 2>&1 && unset CDPATH if test "x$CONFIG_SHELL" = x; then as_bourne_compatible="if test -n \"\${ZSH_VERSION+set}\" && (emulate sh) >/dev/null 2>&1; then : emulate sh NULLCMD=: # Pre-4.2 versions of Zsh do word splitting on \${1+\"\$@\"}, which # is contrary to our usage. Disable this feature. 
alias -g '\${1+\"\$@\"}'='\"\$@\"' setopt NO_GLOB_SUBST else case \`(set -o) 2>/dev/null\` in #( *posix*) : set -o posix ;; #( *) : ;; esac fi " as_required="as_fn_return () { (exit \$1); } as_fn_success () { as_fn_return 0; } as_fn_failure () { as_fn_return 1; } as_fn_ret_success () { return 0; } as_fn_ret_failure () { return 1; } exitcode=0 as_fn_success || { exitcode=1; echo as_fn_success failed.; } as_fn_failure && { exitcode=1; echo as_fn_failure succeeded.; } as_fn_ret_success || { exitcode=1; echo as_fn_ret_success failed.; } as_fn_ret_failure && { exitcode=1; echo as_fn_ret_failure succeeded.; } if ( set x; as_fn_ret_success y && test x = \"\$1\" ); then : else exitcode=1; echo positional parameters were not saved. fi test x\$exitcode = x0 || exit 1" as_suggested=" as_lineno_1=";as_suggested=$as_suggested$LINENO;as_suggested=$as_suggested" as_lineno_1a=\$LINENO as_lineno_2=";as_suggested=$as_suggested$LINENO;as_suggested=$as_suggested" as_lineno_2a=\$LINENO eval 'test \"x\$as_lineno_1'\$as_run'\" != \"x\$as_lineno_2'\$as_run'\" && test \"x\`expr \$as_lineno_1'\$as_run' + 1\`\" = \"x\$as_lineno_2'\$as_run'\"' || exit 1 test \$(( 1 + 1 )) = 2 || exit 1" if (eval "$as_required") 2>/dev/null; then : as_have_required=yes else as_have_required=no fi if test x$as_have_required = xyes && (eval "$as_suggested") 2>/dev/null; then : else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR as_found=false for as_dir in /bin$PATH_SEPARATOR/usr/bin$PATH_SEPARATOR$PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. as_found=: case $as_dir in #( /*) for as_base in sh bash ksh sh5; do # Try only shells that exist, to save several forks. 
as_shell=$as_dir/$as_base if { test -f "$as_shell" || test -f "$as_shell.exe"; } && { $as_echo "$as_bourne_compatible""$as_required" | as_run=a "$as_shell"; } 2>/dev/null; then : CONFIG_SHELL=$as_shell as_have_required=yes if { $as_echo "$as_bourne_compatible""$as_suggested" | as_run=a "$as_shell"; } 2>/dev/null; then : break 2 fi fi done;; esac as_found=false done $as_found || { if { test -f "$SHELL" || test -f "$SHELL.exe"; } && { $as_echo "$as_bourne_compatible""$as_required" | as_run=a "$SHELL"; } 2>/dev/null; then : CONFIG_SHELL=$SHELL as_have_required=yes fi; } IFS=$as_save_IFS if test "x$CONFIG_SHELL" != x; then : # We cannot yet assume a decent shell, so we have to provide a # neutralization value for shells without unset; and this also # works around shells that cannot unset nonexistent variables. # Preserve -v and -x to the replacement shell. BASH_ENV=/dev/null ENV=/dev/null (unset BASH_ENV) >/dev/null 2>&1 && unset BASH_ENV ENV export CONFIG_SHELL case $- in # (((( *v*x* | *x*v* ) as_opts=-vx ;; *v* ) as_opts=-v ;; *x* ) as_opts=-x ;; * ) as_opts= ;; esac exec "$CONFIG_SHELL" $as_opts "$as_myself" ${1+"$@"} fi if test x$as_have_required = xno; then : $as_echo "$0: This script requires a shell more modern than all" $as_echo "$0: the shells that I found on your system." if test x${ZSH_VERSION+set} = xset ; then $as_echo "$0: In particular, zsh $ZSH_VERSION has bugs and should" $as_echo "$0: be upgraded to zsh 4.3.4 or later." else $as_echo "$0: Please tell bug-autoconf@gnu.org and $0: google-sparsehash@googlegroups.com about your system, $0: including any error possibly output before this $0: message. Then install a modern shell, or manually run $0: the script under such a shell if you do have one." fi exit 1 fi fi fi SHELL=${CONFIG_SHELL-/bin/sh} export SHELL # Unset more variables known to interfere with behavior of common tools. CLICOLOR_FORCE= GREP_OPTIONS= unset CLICOLOR_FORCE GREP_OPTIONS ## --------------------- ## ## M4sh Shell Functions. 
## ## --------------------- ## # as_fn_unset VAR # --------------- # Portably unset VAR. as_fn_unset () { { eval $1=; unset $1;} } as_unset=as_fn_unset # as_fn_set_status STATUS # ----------------------- # Set $? to STATUS, without forking. as_fn_set_status () { return $1 } # as_fn_set_status # as_fn_exit STATUS # ----------------- # Exit the shell with STATUS, even in a "trap 0" or "set -e" context. as_fn_exit () { set +e as_fn_set_status $1 exit $1 } # as_fn_exit # as_fn_mkdir_p # ------------- # Create "$as_dir" as a directory, including parents if necessary. as_fn_mkdir_p () { case $as_dir in #( -*) as_dir=./$as_dir;; esac test -d "$as_dir" || eval $as_mkdir_p || { as_dirs= while :; do case $as_dir in #( *\'*) as_qdir=`$as_echo "$as_dir" | sed "s/'/'\\\\\\\\''/g"`;; #'( *) as_qdir=$as_dir;; esac as_dirs="'$as_qdir' $as_dirs" as_dir=`$as_dirname -- "$as_dir" || $as_expr X"$as_dir" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$as_dir" : 'X\(//\)[^/]' \| \ X"$as_dir" : 'X\(//\)$' \| \ X"$as_dir" : 'X\(/\)' \| . 2>/dev/null || $as_echo X"$as_dir" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` test -d "$as_dir" && break done test -z "$as_dirs" || eval "mkdir $as_dirs" } || test -d "$as_dir" || as_fn_error $? "cannot create directory $as_dir" } # as_fn_mkdir_p # as_fn_append VAR VALUE # ---------------------- # Append the text in VALUE to the end of the definition contained in VAR. Take # advantage of any shell optimizations that allow amortized linear growth over # repeated appends, instead of the typical quadratic growth present in naive # implementations. if (eval "as_var=1; as_var+=2; test x\$as_var = x12") 2>/dev/null; then : eval 'as_fn_append () { eval $1+=\$2 }' else as_fn_append () { eval $1=\$$1\$2 } fi # as_fn_append # as_fn_arith ARG... # ------------------ # Perform arithmetic evaluation on the ARGs, and store the result in the # global $as_val. 
Take advantage of shells that can avoid forks. The arguments # must be portable across $(()) and expr. if (eval "test \$(( 1 + 1 )) = 2") 2>/dev/null; then : eval 'as_fn_arith () { as_val=$(( $* )) }' else as_fn_arith () { as_val=`expr "$@" || test $? -eq 1` } fi # as_fn_arith # as_fn_error STATUS ERROR [LINENO LOG_FD] # ---------------------------------------- # Output "`basename $0`: error: ERROR" to stderr. If LINENO and LOG_FD are # provided, also output the error to LOG_FD, referencing LINENO. Then exit the # script with STATUS, using 1 if that was 0. as_fn_error () { as_status=$1; test $as_status -eq 0 && as_status=1 if test "$4"; then as_lineno=${as_lineno-"$3"} as_lineno_stack=as_lineno_stack=$as_lineno_stack $as_echo "$as_me:${as_lineno-$LINENO}: error: $2" >&$4 fi $as_echo "$as_me: error: $2" >&2 as_fn_exit $as_status } # as_fn_error if expr a : '\(a\)' >/dev/null 2>&1 && test "X`expr 00001 : '.*\(...\)'`" = X001; then as_expr=expr else as_expr=false fi if (basename -- /) >/dev/null 2>&1 && test "X`basename -- / 2>&1`" = "X/"; then as_basename=basename else as_basename=false fi if (as_dir=`dirname -- /` && test "X$as_dir" = X/) >/dev/null 2>&1; then as_dirname=dirname else as_dirname=false fi as_me=`$as_basename -- "$0" || $as_expr X/"$0" : '.*/\([^/][^/]*\)/*$' \| \ X"$0" : 'X\(//\)$' \| \ X"$0" : 'X\(/\)' \| . 2>/dev/null || $as_echo X/"$0" | sed '/^.*\/\([^/][^/]*\)\/*$/{ s//\1/ q } /^X\/\(\/\/\)$/{ s//\1/ q } /^X\/\(\/\).*/{ s//\1/ q } s/.*/./; q'` # Avoid depending upon Character Ranges. as_cr_letters='abcdefghijklmnopqrstuvwxyz' as_cr_LETTERS='ABCDEFGHIJKLMNOPQRSTUVWXYZ' as_cr_Letters=$as_cr_letters$as_cr_LETTERS as_cr_digits='0123456789' as_cr_alnum=$as_cr_Letters$as_cr_digits as_lineno_1=$LINENO as_lineno_1a=$LINENO as_lineno_2=$LINENO as_lineno_2a=$LINENO eval 'test "x$as_lineno_1'$as_run'" != "x$as_lineno_2'$as_run'" && test "x`expr $as_lineno_1'$as_run' + 1`" = "x$as_lineno_2'$as_run'"' || { # Blame Lee E. McMahon (1931-1989) for sed's syntax. 
:-) sed -n ' p /[$]LINENO/= ' <$as_myself | sed ' s/[$]LINENO.*/&-/ t lineno b :lineno N :loop s/[$]LINENO\([^'$as_cr_alnum'_].*\n\)\(.*\)/\2\1\2/ t loop s/-\n.*// ' >$as_me.lineno && chmod +x "$as_me.lineno" || { $as_echo "$as_me: error: cannot create $as_me.lineno; rerun with a POSIX shell" >&2; as_fn_exit 1; } # Don't try to exec as it changes $[0], causing all sort of problems # (the dirname of $[0] is not the place where we might find the # original and so on. Autoconf is especially sensitive to this). . "./$as_me.lineno" # Exit status is that of the last command. exit } ECHO_C= ECHO_N= ECHO_T= case `echo -n x` in #((((( -n*) case `echo 'xy\c'` in *c*) ECHO_T=' ';; # ECHO_T is single tab character. xy) ECHO_C='\c';; *) echo `echo ksh88 bug on AIX 6.1` > /dev/null ECHO_T=' ';; esac;; *) ECHO_N='-n';; esac rm -f conf$$ conf$$.exe conf$$.file if test -d conf$$.dir; then rm -f conf$$.dir/conf$$.file else rm -f conf$$.dir mkdir conf$$.dir 2>/dev/null fi if (echo >conf$$.file) 2>/dev/null; then if ln -s conf$$.file conf$$ 2>/dev/null; then as_ln_s='ln -s' # ... but there are two gotchas: # 1) On MSYS, both `ln -s file dir' and `ln file dir' fail. # 2) DJGPP < 2.04 has no symlinks; `ln -s' creates a wrapper executable. # In both cases, we have to default to `cp -p'. ln -s conf$$.file conf$$.dir 2>/dev/null && test ! -f conf$$.exe || as_ln_s='cp -p' elif ln conf$$.file conf$$ 2>/dev/null; then as_ln_s=ln else as_ln_s='cp -p' fi else as_ln_s='cp -p' fi rm -f conf$$ conf$$.exe conf$$.dir/conf$$.file conf$$.file rmdir conf$$.dir 2>/dev/null if mkdir -p . 
2>/dev/null; then as_mkdir_p='mkdir -p "$as_dir"' else test -d ./-p && rmdir ./-p as_mkdir_p=false fi if test -x / >/dev/null 2>&1; then as_test_x='test -x' else if ls -dL / >/dev/null 2>&1; then as_ls_L_option=L else as_ls_L_option= fi as_test_x=' eval sh -c '\'' if test -d "$1"; then test -d "$1/."; else case $1 in #( -*)set "./$1";; esac; case `ls -ld'$as_ls_L_option' "$1" 2>/dev/null` in #(( ???[sx]*):;;*)false;;esac;fi '\'' sh ' fi as_executable_p=$as_test_x # Sed expression to map a string onto a valid CPP name. as_tr_cpp="eval sed 'y%*$as_cr_letters%P$as_cr_LETTERS%;s%[^_$as_cr_alnum]%_%g'" # Sed expression to map a string onto a valid variable name. as_tr_sh="eval sed 'y%*+%pp%;s%[^_$as_cr_alnum]%_%g'" test -n "$DJDIR" || exec 7<&0 </dev/null 6>&1 # Name of the host. # hostname on some systems (SVR3.2, old GNU/Linux) returns a bogus exit status, # so uname gets run too. ac_hostname=`(hostname || uname -n) 2>/dev/null | sed 1q` # # Initializations. # ac_default_prefix=/usr/local ac_clean_files= ac_config_libobj_dir=. LIBOBJS= cross_compiling=no subdirs= MFLAGS= MAKEFLAGS= # Identity of this package. PACKAGE_NAME='sparsehash' PACKAGE_TARNAME='sparsehash' PACKAGE_VERSION='2.0.2' PACKAGE_STRING='sparsehash 2.0.2' PACKAGE_BUGREPORT='google-sparsehash@googlegroups.com' PACKAGE_URL='' ac_unique_file="README" # Factoring default headers for most tests.
ac_includes_default="\ #include <stdio.h> #ifdef HAVE_SYS_TYPES_H # include <sys/types.h> #endif #ifdef HAVE_SYS_STAT_H # include <sys/stat.h> #endif #ifdef STDC_HEADERS # include <stdlib.h> # include <stddef.h> #else # ifdef HAVE_STDLIB_H # include <stdlib.h> # endif #endif #ifdef HAVE_STRING_H # if !defined STDC_HEADERS && defined HAVE_MEMORY_H # include <memory.h> # endif # include <string.h> #endif #ifdef HAVE_STRINGS_H # include <strings.h> #endif #ifdef HAVE_INTTYPES_H # include <inttypes.h> #endif #ifdef HAVE_STDINT_H # include <stdint.h> #endif #ifdef HAVE_UNISTD_H # include <unistd.h> #endif" ac_subst_vars='am__EXEEXT_FALSE am__EXEEXT_TRUE LTLIBOBJS LIBOBJS tcmalloc_libs tcmalloc_flags PTHREAD_CFLAGS PTHREAD_LIBS PTHREAD_CC acx_pthread_config host_os host_vendor host_cpu host build_os build_vendor build_cpu build CXXCPP EGREP GREP GCC_FALSE GCC_TRUE CPP am__fastdepCC_FALSE am__fastdepCC_TRUE CCDEPMODE ac_ct_CC CFLAGS CC am__fastdepCXX_FALSE am__fastdepCXX_TRUE CXXDEPMODE AMDEPBACKSLASH AMDEP_FALSE AMDEP_TRUE am__quote am__include DEPDIR OBJEXT EXEEXT ac_ct_CXX CPPFLAGS LDFLAGS CXXFLAGS CXX am__untar am__tar AMTAR am__leading_dot SET_MAKE AWK mkdir_p MKDIR_P INSTALL_STRIP_PROGRAM STRIP install_sh MAKEINFO AUTOHEADER AUTOMAKE AUTOCONF ACLOCAL VERSION PACKAGE CYGPATH_W am__isrc INSTALL_DATA INSTALL_SCRIPT INSTALL_PROGRAM target_alias host_alias build_alias LIBS ECHO_T ECHO_N ECHO_C DEFS mandir localedir libdir psdir pdfdir dvidir htmldir infodir docdir oldincludedir includedir localstatedir sharedstatedir sysconfdir datadir datarootdir libexecdir sbindir bindir program_transform_name prefix exec_prefix PACKAGE_URL PACKAGE_BUGREPORT PACKAGE_STRING PACKAGE_VERSION PACKAGE_TARNAME PACKAGE_NAME PATH_SEPARATOR SHELL' ac_subst_files='' ac_user_opts=' enable_option_checking enable_dependency_tracking enable_namespace ' ac_precious_vars='build_alias host_alias target_alias CXX CXXFLAGS LDFLAGS LIBS CPPFLAGS CCC CC CFLAGS CPP CXXCPP' # Initialize some variables set by options.
ac_init_help= ac_init_version=false ac_unrecognized_opts= ac_unrecognized_sep= # The variables have the same names as the options, with # dashes changed to underlines. cache_file=/dev/null exec_prefix=NONE no_create= no_recursion= prefix=NONE program_prefix=NONE program_suffix=NONE program_transform_name=s,x,x, silent= site= srcdir= verbose= x_includes=NONE x_libraries=NONE # Installation directory options. # These are left unexpanded so users can "make install exec_prefix=/foo" # and all the variables that are supposed to be based on exec_prefix # by default will actually change. # Use braces instead of parens because sh, perl, etc. also accept them. # (The list follows the same order as the GNU Coding Standards.) bindir='${exec_prefix}/bin' sbindir='${exec_prefix}/sbin' libexecdir='${exec_prefix}/libexec' datarootdir='${prefix}/share' datadir='${datarootdir}' sysconfdir='${prefix}/etc' sharedstatedir='${prefix}/com' localstatedir='${prefix}/var' includedir='${prefix}/include' oldincludedir='/usr/include' docdir='${datarootdir}/doc/${PACKAGE_TARNAME}' infodir='${datarootdir}/info' htmldir='${docdir}' dvidir='${docdir}' pdfdir='${docdir}' psdir='${docdir}' libdir='${exec_prefix}/lib' localedir='${datarootdir}/locale' mandir='${datarootdir}/man' ac_prev= ac_dashdash= for ac_option do # If the previous option needs an argument, assign it. if test -n "$ac_prev"; then eval $ac_prev=\$ac_option ac_prev= continue fi case $ac_option in *=?*) ac_optarg=`expr "X$ac_option" : '[^=]*=\(.*\)'` ;; *=) ac_optarg= ;; *) ac_optarg=yes ;; esac # Accept the important Cygnus configure options, so we can diagnose typos. 
case $ac_dashdash$ac_option in --) ac_dashdash=yes ;; -bindir | --bindir | --bindi | --bind | --bin | --bi) ac_prev=bindir ;; -bindir=* | --bindir=* | --bindi=* | --bind=* | --bin=* | --bi=*) bindir=$ac_optarg ;; -build | --build | --buil | --bui | --bu) ac_prev=build_alias ;; -build=* | --build=* | --buil=* | --bui=* | --bu=*) build_alias=$ac_optarg ;; -cache-file | --cache-file | --cache-fil | --cache-fi \ | --cache-f | --cache- | --cache | --cach | --cac | --ca | --c) ac_prev=cache_file ;; -cache-file=* | --cache-file=* | --cache-fil=* | --cache-fi=* \ | --cache-f=* | --cache-=* | --cache=* | --cach=* | --cac=* | --ca=* | --c=*) cache_file=$ac_optarg ;; --config-cache | -C) cache_file=config.cache ;; -datadir | --datadir | --datadi | --datad) ac_prev=datadir ;; -datadir=* | --datadir=* | --datadi=* | --datad=*) datadir=$ac_optarg ;; -datarootdir | --datarootdir | --datarootdi | --datarootd | --dataroot \ | --dataroo | --dataro | --datar) ac_prev=datarootdir ;; -datarootdir=* | --datarootdir=* | --datarootdi=* | --datarootd=* \ | --dataroot=* | --dataroo=* | --dataro=* | --datar=*) datarootdir=$ac_optarg ;; -disable-* | --disable-*) ac_useropt=`expr "x$ac_option" : 'x-*disable-\(.*\)'` # Reject names that are not valid shell variable names. expr "x$ac_useropt" : ".*[^-+._$as_cr_alnum]" >/dev/null && as_fn_error $? 
"invalid feature name: $ac_useropt" ac_useropt_orig=$ac_useropt ac_useropt=`$as_echo "$ac_useropt" | sed 's/[-+.]/_/g'` case $ac_user_opts in *" "enable_$ac_useropt" "*) ;; *) ac_unrecognized_opts="$ac_unrecognized_opts$ac_unrecognized_sep--disable-$ac_useropt_orig" ac_unrecognized_sep=', ';; esac eval enable_$ac_useropt=no ;; -docdir | --docdir | --docdi | --doc | --do) ac_prev=docdir ;; -docdir=* | --docdir=* | --docdi=* | --doc=* | --do=*) docdir=$ac_optarg ;; -dvidir | --dvidir | --dvidi | --dvid | --dvi | --dv) ac_prev=dvidir ;; -dvidir=* | --dvidir=* | --dvidi=* | --dvid=* | --dvi=* | --dv=*) dvidir=$ac_optarg ;; -enable-* | --enable-*) ac_useropt=`expr "x$ac_option" : 'x-*enable-\([^=]*\)'` # Reject names that are not valid shell variable names. expr "x$ac_useropt" : ".*[^-+._$as_cr_alnum]" >/dev/null && as_fn_error $? "invalid feature name: $ac_useropt" ac_useropt_orig=$ac_useropt ac_useropt=`$as_echo "$ac_useropt" | sed 's/[-+.]/_/g'` case $ac_user_opts in *" "enable_$ac_useropt" "*) ;; *) ac_unrecognized_opts="$ac_unrecognized_opts$ac_unrecognized_sep--enable-$ac_useropt_orig" ac_unrecognized_sep=', ';; esac eval enable_$ac_useropt=\$ac_optarg ;; -exec-prefix | --exec_prefix | --exec-prefix | --exec-prefi \ | --exec-pref | --exec-pre | --exec-pr | --exec-p | --exec- \ | --exec | --exe | --ex) ac_prev=exec_prefix ;; -exec-prefix=* | --exec_prefix=* | --exec-prefix=* | --exec-prefi=* \ | --exec-pref=* | --exec-pre=* | --exec-pr=* | --exec-p=* | --exec-=* \ | --exec=* | --exe=* | --ex=*) exec_prefix=$ac_optarg ;; -gas | --gas | --ga | --g) # Obsolete; use --with-gas. 
with_gas=yes ;; -help | --help | --hel | --he | -h) ac_init_help=long ;; -help=r* | --help=r* | --hel=r* | --he=r* | -hr*) ac_init_help=recursive ;; -help=s* | --help=s* | --hel=s* | --he=s* | -hs*) ac_init_help=short ;; -host | --host | --hos | --ho) ac_prev=host_alias ;; -host=* | --host=* | --hos=* | --ho=*) host_alias=$ac_optarg ;; -htmldir | --htmldir | --htmldi | --htmld | --html | --htm | --ht) ac_prev=htmldir ;; -htmldir=* | --htmldir=* | --htmldi=* | --htmld=* | --html=* | --htm=* \ | --ht=*) htmldir=$ac_optarg ;; -includedir | --includedir | --includedi | --included | --include \ | --includ | --inclu | --incl | --inc) ac_prev=includedir ;; -includedir=* | --includedir=* | --includedi=* | --included=* | --include=* \ | --includ=* | --inclu=* | --incl=* | --inc=*) includedir=$ac_optarg ;; -infodir | --infodir | --infodi | --infod | --info | --inf) ac_prev=infodir ;; -infodir=* | --infodir=* | --infodi=* | --infod=* | --info=* | --inf=*) infodir=$ac_optarg ;; -libdir | --libdir | --libdi | --libd) ac_prev=libdir ;; -libdir=* | --libdir=* | --libdi=* | --libd=*) libdir=$ac_optarg ;; -libexecdir | --libexecdir | --libexecdi | --libexecd | --libexec \ | --libexe | --libex | --libe) ac_prev=libexecdir ;; -libexecdir=* | --libexecdir=* | --libexecdi=* | --libexecd=* | --libexec=* \ | --libexe=* | --libex=* | --libe=*) libexecdir=$ac_optarg ;; -localedir | --localedir | --localedi | --localed | --locale) ac_prev=localedir ;; -localedir=* | --localedir=* | --localedi=* | --localed=* | --locale=*) localedir=$ac_optarg ;; -localstatedir | --localstatedir | --localstatedi | --localstated \ | --localstate | --localstat | --localsta | --localst | --locals) ac_prev=localstatedir ;; -localstatedir=* | --localstatedir=* | --localstatedi=* | --localstated=* \ | --localstate=* | --localstat=* | --localsta=* | --localst=* | --locals=*) localstatedir=$ac_optarg ;; -mandir | --mandir | --mandi | --mand | --man | --ma | --m) ac_prev=mandir ;; -mandir=* | --mandir=* | --mandi=* | 
--mand=* | --man=* | --ma=* | --m=*) mandir=$ac_optarg ;; -nfp | --nfp | --nf) # Obsolete; use --without-fp. with_fp=no ;; -no-create | --no-create | --no-creat | --no-crea | --no-cre \ | --no-cr | --no-c | -n) no_create=yes ;; -no-recursion | --no-recursion | --no-recursio | --no-recursi \ | --no-recurs | --no-recur | --no-recu | --no-rec | --no-re | --no-r) no_recursion=yes ;; -oldincludedir | --oldincludedir | --oldincludedi | --oldincluded \ | --oldinclude | --oldinclud | --oldinclu | --oldincl | --oldinc \ | --oldin | --oldi | --old | --ol | --o) ac_prev=oldincludedir ;; -oldincludedir=* | --oldincludedir=* | --oldincludedi=* | --oldincluded=* \ | --oldinclude=* | --oldinclud=* | --oldinclu=* | --oldincl=* | --oldinc=* \ | --oldin=* | --oldi=* | --old=* | --ol=* | --o=*) oldincludedir=$ac_optarg ;; -prefix | --prefix | --prefi | --pref | --pre | --pr | --p) ac_prev=prefix ;; -prefix=* | --prefix=* | --prefi=* | --pref=* | --pre=* | --pr=* | --p=*) prefix=$ac_optarg ;; -program-prefix | --program-prefix | --program-prefi | --program-pref \ | --program-pre | --program-pr | --program-p) ac_prev=program_prefix ;; -program-prefix=* | --program-prefix=* | --program-prefi=* \ | --program-pref=* | --program-pre=* | --program-pr=* | --program-p=*) program_prefix=$ac_optarg ;; -program-suffix | --program-suffix | --program-suffi | --program-suff \ | --program-suf | --program-su | --program-s) ac_prev=program_suffix ;; -program-suffix=* | --program-suffix=* | --program-suffi=* \ | --program-suff=* | --program-suf=* | --program-su=* | --program-s=*) program_suffix=$ac_optarg ;; -program-transform-name | --program-transform-name \ | --program-transform-nam | --program-transform-na \ | --program-transform-n | --program-transform- \ | --program-transform | --program-transfor \ | --program-transfo | --program-transf \ | --program-trans | --program-tran \ | --progr-tra | --program-tr | --program-t) ac_prev=program_transform_name ;; -program-transform-name=* | 
--program-transform-name=* \ | --program-transform-nam=* | --program-transform-na=* \ | --program-transform-n=* | --program-transform-=* \ | --program-transform=* | --program-transfor=* \ | --program-transfo=* | --program-transf=* \ | --program-trans=* | --program-tran=* \ | --progr-tra=* | --program-tr=* | --program-t=*) program_transform_name=$ac_optarg ;; -pdfdir | --pdfdir | --pdfdi | --pdfd | --pdf | --pd) ac_prev=pdfdir ;; -pdfdir=* | --pdfdir=* | --pdfdi=* | --pdfd=* | --pdf=* | --pd=*) pdfdir=$ac_optarg ;; -psdir | --psdir | --psdi | --psd | --ps) ac_prev=psdir ;; -psdir=* | --psdir=* | --psdi=* | --psd=* | --ps=*) psdir=$ac_optarg ;; -q | -quiet | --quiet | --quie | --qui | --qu | --q \ | -silent | --silent | --silen | --sile | --sil) silent=yes ;; -sbindir | --sbindir | --sbindi | --sbind | --sbin | --sbi | --sb) ac_prev=sbindir ;; -sbindir=* | --sbindir=* | --sbindi=* | --sbind=* | --sbin=* \ | --sbi=* | --sb=*) sbindir=$ac_optarg ;; -sharedstatedir | --sharedstatedir | --sharedstatedi \ | --sharedstated | --sharedstate | --sharedstat | --sharedsta \ | --sharedst | --shareds | --shared | --share | --shar \ | --sha | --sh) ac_prev=sharedstatedir ;; -sharedstatedir=* | --sharedstatedir=* | --sharedstatedi=* \ | --sharedstated=* | --sharedstate=* | --sharedstat=* | --sharedsta=* \ | --sharedst=* | --shareds=* | --shared=* | --share=* | --shar=* \ | --sha=* | --sh=*) sharedstatedir=$ac_optarg ;; -site | --site | --sit) ac_prev=site ;; -site=* | --site=* | --sit=*) site=$ac_optarg ;; -srcdir | --srcdir | --srcdi | --srcd | --src | --sr) ac_prev=srcdir ;; -srcdir=* | --srcdir=* | --srcdi=* | --srcd=* | --src=* | --sr=*) srcdir=$ac_optarg ;; -sysconfdir | --sysconfdir | --sysconfdi | --sysconfd | --sysconf \ | --syscon | --sysco | --sysc | --sys | --sy) ac_prev=sysconfdir ;; -sysconfdir=* | --sysconfdir=* | --sysconfdi=* | --sysconfd=* | --sysconf=* \ | --syscon=* | --sysco=* | --sysc=* | --sys=* | --sy=*) sysconfdir=$ac_optarg ;; -target | --target | --targe | 
--targ | --tar | --ta | --t) ac_prev=target_alias ;; -target=* | --target=* | --targe=* | --targ=* | --tar=* | --ta=* | --t=*) target_alias=$ac_optarg ;; -v | -verbose | --verbose | --verbos | --verbo | --verb) verbose=yes ;; -version | --version | --versio | --versi | --vers | -V) ac_init_version=: ;; -with-* | --with-*) ac_useropt=`expr "x$ac_option" : 'x-*with-\([^=]*\)'` # Reject names that are not valid shell variable names. expr "x$ac_useropt" : ".*[^-+._$as_cr_alnum]" >/dev/null && as_fn_error $? "invalid package name: $ac_useropt" ac_useropt_orig=$ac_useropt ac_useropt=`$as_echo "$ac_useropt" | sed 's/[-+.]/_/g'` case $ac_user_opts in *" "with_$ac_useropt" "*) ;; *) ac_unrecognized_opts="$ac_unrecognized_opts$ac_unrecognized_sep--with-$ac_useropt_orig" ac_unrecognized_sep=', ';; esac eval with_$ac_useropt=\$ac_optarg ;; -without-* | --without-*) ac_useropt=`expr "x$ac_option" : 'x-*without-\(.*\)'` # Reject names that are not valid shell variable names. expr "x$ac_useropt" : ".*[^-+._$as_cr_alnum]" >/dev/null && as_fn_error $? "invalid package name: $ac_useropt" ac_useropt_orig=$ac_useropt ac_useropt=`$as_echo "$ac_useropt" | sed 's/[-+.]/_/g'` case $ac_user_opts in *" "with_$ac_useropt" "*) ;; *) ac_unrecognized_opts="$ac_unrecognized_opts$ac_unrecognized_sep--without-$ac_useropt_orig" ac_unrecognized_sep=', ';; esac eval with_$ac_useropt=no ;; --x) # Obsolete; use --with-x. 
with_x=yes ;; -x-includes | --x-includes | --x-include | --x-includ | --x-inclu \ | --x-incl | --x-inc | --x-in | --x-i) ac_prev=x_includes ;; -x-includes=* | --x-includes=* | --x-include=* | --x-includ=* | --x-inclu=* \ | --x-incl=* | --x-inc=* | --x-in=* | --x-i=*) x_includes=$ac_optarg ;; -x-libraries | --x-libraries | --x-librarie | --x-librari \ | --x-librar | --x-libra | --x-libr | --x-lib | --x-li | --x-l) ac_prev=x_libraries ;; -x-libraries=* | --x-libraries=* | --x-librarie=* | --x-librari=* \ | --x-librar=* | --x-libra=* | --x-libr=* | --x-lib=* | --x-li=* | --x-l=*) x_libraries=$ac_optarg ;; -*) as_fn_error $? "unrecognized option: \`$ac_option' Try \`$0 --help' for more information" ;; *=*) ac_envvar=`expr "x$ac_option" : 'x\([^=]*\)='` # Reject names that are not valid shell variable names. case $ac_envvar in #( '' | [0-9]* | *[!_$as_cr_alnum]* ) as_fn_error $? "invalid variable name: \`$ac_envvar'" ;; esac eval $ac_envvar=\$ac_optarg export $ac_envvar ;; *) # FIXME: should be removed in autoconf 3.0. $as_echo "$as_me: WARNING: you should use --build, --host, --target" >&2 expr "x$ac_option" : ".*[^-._$as_cr_alnum]" >/dev/null && $as_echo "$as_me: WARNING: invalid host type: $ac_option" >&2 : "${build_alias=$ac_option} ${host_alias=$ac_option} ${target_alias=$ac_option}" ;; esac done if test -n "$ac_prev"; then ac_option=--`echo $ac_prev | sed 's/_/-/g'` as_fn_error $? "missing argument to $ac_option" fi if test -n "$ac_unrecognized_opts"; then case $enable_option_checking in no) ;; fatal) as_fn_error $? "unrecognized options: $ac_unrecognized_opts" ;; *) $as_echo "$as_me: WARNING: unrecognized options: $ac_unrecognized_opts" >&2 ;; esac fi # Check all directory arguments for consistency. for ac_var in exec_prefix prefix bindir sbindir libexecdir datarootdir \ datadir sysconfdir sharedstatedir localstatedir includedir \ oldincludedir docdir infodir htmldir dvidir pdfdir psdir \ libdir localedir mandir do eval ac_val=\$$ac_var # Remove trailing slashes. 
case $ac_val in */ ) ac_val=`expr "X$ac_val" : 'X\(.*[^/]\)' \| "X$ac_val" : 'X\(.*\)'` eval $ac_var=\$ac_val;; esac # Be sure to have absolute directory names. case $ac_val in [\\/$]* | ?:[\\/]* ) continue;; NONE | '' ) case $ac_var in *prefix ) continue;; esac;; esac as_fn_error $? "expected an absolute directory name for --$ac_var: $ac_val" done # There might be people who depend on the old broken behavior: `$host' # used to hold the argument of --host etc. # FIXME: To remove some day. build=$build_alias host=$host_alias target=$target_alias # FIXME: To remove some day. if test "x$host_alias" != x; then if test "x$build_alias" = x; then cross_compiling=maybe $as_echo "$as_me: WARNING: if you wanted to set the --build type, don't use --host. If a cross compiler is detected then cross compile mode will be used" >&2 elif test "x$build_alias" != "x$host_alias"; then cross_compiling=yes fi fi ac_tool_prefix= test -n "$host_alias" && ac_tool_prefix=$host_alias- test "$silent" = yes && exec 6>/dev/null ac_pwd=`pwd` && test -n "$ac_pwd" && ac_ls_di=`ls -di .` && ac_pwd_ls_di=`cd "$ac_pwd" && ls -di .` || as_fn_error $? "working directory cannot be determined" test "X$ac_ls_di" = "X$ac_pwd_ls_di" || as_fn_error $? "pwd does not report name of working directory" # Find the source files, if location was not specified. if test -z "$srcdir"; then ac_srcdir_defaulted=yes # Try the directory containing this script, then the parent directory. ac_confdir=`$as_dirname -- "$as_myself" || $as_expr X"$as_myself" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$as_myself" : 'X\(//\)[^/]' \| \ X"$as_myself" : 'X\(//\)$' \| \ X"$as_myself" : 'X\(/\)' \| . 2>/dev/null || $as_echo X"$as_myself" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` srcdir=$ac_confdir if test ! -r "$srcdir/$ac_unique_file"; then srcdir=.. fi else ac_srcdir_defaulted=no fi if test ! 
-r "$srcdir/$ac_unique_file"; then test "$ac_srcdir_defaulted" = yes && srcdir="$ac_confdir or .." as_fn_error $? "cannot find sources ($ac_unique_file) in $srcdir" fi ac_msg="sources are in $srcdir, but \`cd $srcdir' does not work" ac_abs_confdir=`( cd "$srcdir" && test -r "./$ac_unique_file" || as_fn_error $? "$ac_msg" pwd)` # When building in place, set srcdir=. if test "$ac_abs_confdir" = "$ac_pwd"; then srcdir=. fi # Remove unnecessary trailing slashes from srcdir. # Double slashes in file names in object file debugging info # mess up M-x gdb in Emacs. case $srcdir in */) srcdir=`expr "X$srcdir" : 'X\(.*[^/]\)' \| "X$srcdir" : 'X\(.*\)'`;; esac for ac_var in $ac_precious_vars; do eval ac_env_${ac_var}_set=\${${ac_var}+set} eval ac_env_${ac_var}_value=\$${ac_var} eval ac_cv_env_${ac_var}_set=\${${ac_var}+set} eval ac_cv_env_${ac_var}_value=\$${ac_var} done # # Report the --help message. # if test "$ac_init_help" = "long"; then # Omit some internal or obsolete options to make the list less imposing. # This message is too long to be a string in the A/UX 3.1 sh. cat <<_ACEOF \`configure' configures sparsehash 2.0.2 to adapt to many kinds of systems. Usage: $0 [OPTION]... [VAR=VALUE]... To assign environment variables (e.g., CC, CFLAGS...), specify them as VAR=VALUE. See below for descriptions of some of the useful variables. Defaults for the options are specified in brackets. Configuration: -h, --help display this help and exit --help=short display options specific to this package --help=recursive display the short help of all the included packages -V, --version display version information and exit -q, --quiet, --silent do not print \`checking ...' 
messages --cache-file=FILE cache test results in FILE [disabled] -C, --config-cache alias for \`--cache-file=config.cache' -n, --no-create do not create output files --srcdir=DIR find the sources in DIR [configure dir or \`..'] Installation directories: --prefix=PREFIX install architecture-independent files in PREFIX [$ac_default_prefix] --exec-prefix=EPREFIX install architecture-dependent files in EPREFIX [PREFIX] By default, \`make install' will install all the files in \`$ac_default_prefix/bin', \`$ac_default_prefix/lib' etc. You can specify an installation prefix other than \`$ac_default_prefix' using \`--prefix', for instance \`--prefix=\$HOME'. For better control, use the options below. Fine tuning of the installation directories: --bindir=DIR user executables [EPREFIX/bin] --sbindir=DIR system admin executables [EPREFIX/sbin] --libexecdir=DIR program executables [EPREFIX/libexec] --sysconfdir=DIR read-only single-machine data [PREFIX/etc] --sharedstatedir=DIR modifiable architecture-independent data [PREFIX/com] --localstatedir=DIR modifiable single-machine data [PREFIX/var] --libdir=DIR object code libraries [EPREFIX/lib] --includedir=DIR C header files [PREFIX/include] --oldincludedir=DIR C header files for non-gcc [/usr/include] --datarootdir=DIR read-only arch.-independent data root [PREFIX/share] --datadir=DIR read-only architecture-independent data [DATAROOTDIR] --infodir=DIR info documentation [DATAROOTDIR/info] --localedir=DIR locale-dependent data [DATAROOTDIR/locale] --mandir=DIR man documentation [DATAROOTDIR/man] --docdir=DIR documentation root [DATAROOTDIR/doc/sparsehash] --htmldir=DIR html documentation [DOCDIR] --dvidir=DIR dvi documentation [DOCDIR] --pdfdir=DIR pdf documentation [DOCDIR] --psdir=DIR ps documentation [DOCDIR] _ACEOF cat <<\_ACEOF Program names: --program-prefix=PREFIX prepend PREFIX to installed program names --program-suffix=SUFFIX append SUFFIX to installed program names --program-transform-name=PROGRAM run sed PROGRAM on 
installed program names System types: --build=BUILD configure for building on BUILD [guessed] --host=HOST cross-compile to build programs to run on HOST [BUILD] _ACEOF fi if test -n "$ac_init_help"; then case $ac_init_help in short | recursive ) echo "Configuration of sparsehash 2.0.2:";; esac cat <<\_ACEOF Optional Features: --disable-option-checking ignore unrecognized --enable/--with options --disable-FEATURE do not include FEATURE (same as --enable-FEATURE=no) --enable-FEATURE[=ARG] include FEATURE [ARG=yes] --disable-dependency-tracking speeds up one-time build --enable-dependency-tracking do not reject slow dependency extractors --enable-namespace=FOO to define these Google classes in the FOO namespace. --disable-namespace to define them in the global namespace. Default is to define them in namespace google. Some influential environment variables: CXX C++ compiler command CXXFLAGS C++ compiler flags LDFLAGS linker flags, e.g. -L<lib dir> if you have libraries in a nonstandard directory LIBS libraries to pass to the linker, e.g. -l<library> CPPFLAGS (Objective) C/C++ preprocessor flags, e.g. -I<include dir> if you have headers in a nonstandard directory CC C compiler command CFLAGS C compiler flags CPP C preprocessor CXXCPP C++ preprocessor Use these variables to override the choices made by `configure' or to help it to find libraries and programs with nonstandard names/locations. Report bugs to <google-sparsehash@googlegroups.com>. _ACEOF ac_status=$? fi if test "$ac_init_help" = "recursive"; then # If there are subdirs, report their specific --help. for ac_dir in : $ac_subdirs_all; do test "x$ac_dir" = x: && continue test -d "$ac_dir" || { cd "$srcdir" && ac_pwd=`pwd` && srcdir=. && test -d "$ac_dir"; } || continue ac_builddir=. case "$ac_dir" in .) ac_dir_suffix= ac_top_builddir_sub=. ac_top_build_prefix= ;; *) ac_dir_suffix=/`$as_echo "$ac_dir" | sed 's|^\.[\\/]||'` # A ".." for each directory in $ac_dir_suffix. 
ac_top_builddir_sub=`$as_echo "$ac_dir_suffix" | sed 's|/[^\\/]*|/..|g;s|/||'` case $ac_top_builddir_sub in "") ac_top_builddir_sub=. ac_top_build_prefix= ;; *) ac_top_build_prefix=$ac_top_builddir_sub/ ;; esac ;; esac ac_abs_top_builddir=$ac_pwd ac_abs_builddir=$ac_pwd$ac_dir_suffix # for backward compatibility: ac_top_builddir=$ac_top_build_prefix case $srcdir in .) # We are building in place. ac_srcdir=. ac_top_srcdir=$ac_top_builddir_sub ac_abs_top_srcdir=$ac_pwd ;; [\\/]* | ?:[\\/]* ) # Absolute name. ac_srcdir=$srcdir$ac_dir_suffix; ac_top_srcdir=$srcdir ac_abs_top_srcdir=$srcdir ;; *) # Relative name. ac_srcdir=$ac_top_build_prefix$srcdir$ac_dir_suffix ac_top_srcdir=$ac_top_build_prefix$srcdir ac_abs_top_srcdir=$ac_pwd/$srcdir ;; esac ac_abs_srcdir=$ac_abs_top_srcdir$ac_dir_suffix cd "$ac_dir" || { ac_status=$?; continue; } # Check for guested configure. if test -f "$ac_srcdir/configure.gnu"; then echo && $SHELL "$ac_srcdir/configure.gnu" --help=recursive elif test -f "$ac_srcdir/configure"; then echo && $SHELL "$ac_srcdir/configure" --help=recursive else $as_echo "$as_me: WARNING: no configuration information is in $ac_dir" >&2 fi || ac_status=$? cd "$ac_pwd" || { ac_status=$?; break; } done fi test -n "$ac_init_help" && exit $ac_status if $ac_init_version; then cat <<\_ACEOF sparsehash configure 2.0.2 generated by GNU Autoconf 2.68 Copyright (C) 2010 Free Software Foundation, Inc. This configure script is free software; the Free Software Foundation gives unlimited permission to copy, distribute and modify it. _ACEOF exit fi ## ------------------------ ## ## Autoconf initialization. ## ## ------------------------ ## # ac_fn_cxx_try_compile LINENO # ---------------------------- # Try to compile conftest.$ac_ext, and return whether this succeeded. 
ac_fn_cxx_try_compile () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack rm -f conftest.$ac_objext if { { ac_try="$ac_compile" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_compile") 2>conftest.err ac_status=$? if test -s conftest.err; then grep -v '^ *+' conftest.err >conftest.er1 cat conftest.er1 >&5 mv -f conftest.er1 conftest.err fi $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && { test -z "$ac_cxx_werror_flag" || test ! -s conftest.err } && test -s conftest.$ac_objext; then : ac_retval=0 else $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=1 fi eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_cxx_try_compile # ac_fn_c_try_compile LINENO # -------------------------- # Try to compile conftest.$ac_ext, and return whether this succeeded. ac_fn_c_try_compile () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack rm -f conftest.$ac_objext if { { ac_try="$ac_compile" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_compile") 2>conftest.err ac_status=$? if test -s conftest.err; then grep -v '^ *+' conftest.err >conftest.er1 cat conftest.er1 >&5 mv -f conftest.er1 conftest.err fi $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && { test -z "$ac_c_werror_flag" || test ! 
-s conftest.err } && test -s conftest.$ac_objext; then : ac_retval=0 else $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=1 fi eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_c_try_compile # ac_fn_c_try_cpp LINENO # ---------------------- # Try to preprocess conftest.$ac_ext, and return whether this succeeded. ac_fn_c_try_cpp () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack if { { ac_try="$ac_cpp conftest.$ac_ext" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_cpp conftest.$ac_ext") 2>conftest.err ac_status=$? if test -s conftest.err; then grep -v '^ *+' conftest.err >conftest.er1 cat conftest.er1 >&5 mv -f conftest.er1 conftest.err fi $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } > conftest.i && { test -z "$ac_c_preproc_warn_flag$ac_c_werror_flag" || test ! -s conftest.err }; then : ac_retval=0 else $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=1 fi eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_c_try_cpp # ac_fn_c_try_run LINENO # ---------------------- # Try to link conftest.$ac_ext, and return whether this succeeded. Assumes # that executables *can* be run. ac_fn_c_try_run () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack if { { ac_try="$ac_link" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_link") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? 
= $ac_status" >&5 test $ac_status = 0; } && { ac_try='./conftest$ac_exeext' { { case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_try") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; }; then : ac_retval=0 else $as_echo "$as_me: program exited with status $ac_status" >&5 $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=$ac_status fi rm -rf conftest.dSYM conftest_ipa8_conftest.oo eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_c_try_run # ac_fn_c_try_link LINENO # ----------------------- # Try to link conftest.$ac_ext, and return whether this succeeded. ac_fn_c_try_link () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack rm -f conftest.$ac_objext conftest$ac_exeext if { { ac_try="$ac_link" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_link") 2>conftest.err ac_status=$? if test -s conftest.err; then grep -v '^ *+' conftest.err >conftest.er1 cat conftest.er1 >&5 mv -f conftest.er1 conftest.err fi $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && { test -z "$ac_c_werror_flag" || test ! 
-s conftest.err } && test -s conftest$ac_exeext && { test "$cross_compiling" = yes || $as_test_x conftest$ac_exeext }; then : ac_retval=0 else $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=1 fi # Delete the IPA/IPO (Inter Procedural Analysis/Optimization) information # created by the PGI compiler (conftest_ipa8_conftest.oo), as it would # interfere with the next link command; also delete a directory that is # left behind by Apple's compiler. We do this before executing the actions. rm -rf conftest.dSYM conftest_ipa8_conftest.oo eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_c_try_link # ac_fn_c_check_func LINENO FUNC VAR # ---------------------------------- # Tests whether FUNC exists, setting the cache variable VAR accordingly ac_fn_c_check_func () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $2" >&5 $as_echo_n "checking for $2... " >&6; } if eval \${$3+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Define $2 to an innocuous variant, in case <limits.h> declares $2. For example, HP-UX 11i <limits.h> declares gettimeofday. */ #define $2 innocuous_$2 /* System header to define __stub macros and hopefully few prototypes, which can conflict with char $2 (); below. Prefer <limits.h> to <assert.h> if __STDC__ is defined, since <limits.h> exists even on freestanding compilers. */ #ifdef __STDC__ # include <limits.h> #else # include <assert.h> #endif #undef $2 /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char $2 (); /* The GNU C library defines this for functions which it implements to always fail with ENOSYS. Some functions are actually named something starting with __ and the normal name is an alias. 
*/ #if defined __stub_$2 || defined __stub___$2 choke me #endif int main () { return $2 (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : eval "$3=yes" else eval "$3=no" fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext fi eval ac_res=\$$3 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 $as_echo "$ac_res" >&6; } eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno } # ac_fn_c_check_func # ac_fn_c_check_type LINENO TYPE VAR INCLUDES # ------------------------------------------- # Tests whether TYPE exists after having included INCLUDES, setting cache # variable VAR accordingly. ac_fn_c_check_type () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $2" >&5 $as_echo_n "checking for $2... " >&6; } if eval \${$3+:} false; then : $as_echo_n "(cached) " >&6 else eval "$3=no" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ $4 int main () { if (sizeof ($2)) return 0; ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ $4 int main () { if (sizeof (($2))) return 0; ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : else eval "$3=yes" fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi eval ac_res=\$$3 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 $as_echo "$ac_res" >&6; } eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno } # ac_fn_c_check_type # ac_fn_c_check_header_compile LINENO HEADER VAR INCLUDES # ------------------------------------------------------- # Tests whether HEADER exists and can be compiled using the include files in # INCLUDES, setting the cache variable VAR accordingly. 
ac_fn_c_check_header_compile () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $2" >&5 $as_echo_n "checking for $2... " >&6; } if eval \${$3+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ $4 #include <$2> _ACEOF if ac_fn_c_try_compile "$LINENO"; then : eval "$3=yes" else eval "$3=no" fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi eval ac_res=\$$3 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 $as_echo "$ac_res" >&6; } eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno } # ac_fn_c_check_header_compile # ac_fn_c_check_header_mongrel LINENO HEADER VAR INCLUDES # ------------------------------------------------------- # Tests whether HEADER exists, giving a warning if it cannot be compiled using # the include files in INCLUDES and setting the cache variable VAR # accordingly. ac_fn_c_check_header_mongrel () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack if eval \${$3+:} false; then : { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $2" >&5 $as_echo_n "checking for $2... " >&6; } if eval \${$3+:} false; then : $as_echo_n "(cached) " >&6 fi eval ac_res=\$$3 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 $as_echo "$ac_res" >&6; } else # Is the header compilable? { $as_echo "$as_me:${as_lineno-$LINENO}: checking $2 usability" >&5 $as_echo_n "checking $2 usability... " >&6; } cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ $4 #include <$2> _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_header_compiler=yes else ac_header_compiler=no fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_header_compiler" >&5 $as_echo "$ac_header_compiler" >&6; } # Is the header present? 
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking $2 presence" >&5 $as_echo_n "checking $2 presence... " >&6; } cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <$2> _ACEOF if ac_fn_c_try_cpp "$LINENO"; then : ac_header_preproc=yes else ac_header_preproc=no fi rm -f conftest.err conftest.i conftest.$ac_ext { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_header_preproc" >&5 $as_echo "$ac_header_preproc" >&6; } # So? What about this header? case $ac_header_compiler:$ac_header_preproc:$ac_c_preproc_warn_flag in #(( yes:no: ) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $2: accepted by the compiler, rejected by the preprocessor!" >&5 $as_echo "$as_me: WARNING: $2: accepted by the compiler, rejected by the preprocessor!" >&2;} { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $2: proceeding with the compiler's result" >&5 $as_echo "$as_me: WARNING: $2: proceeding with the compiler's result" >&2;} ;; no:yes:* ) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $2: present but cannot be compiled" >&5 $as_echo "$as_me: WARNING: $2: present but cannot be compiled" >&2;} { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $2: check for missing prerequisite headers?" >&5 $as_echo "$as_me: WARNING: $2: check for missing prerequisite headers?" 
>&2;} { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $2: see the Autoconf documentation" >&5 $as_echo "$as_me: WARNING: $2: see the Autoconf documentation" >&2;} { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $2: section \"Present But Cannot Be Compiled\"" >&5 $as_echo "$as_me: WARNING: $2: section \"Present But Cannot Be Compiled\"" >&2;} { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $2: proceeding with the compiler's result" >&5 $as_echo "$as_me: WARNING: $2: proceeding with the compiler's result" >&2;} ( $as_echo "## ------------------------------------------------- ## ## Report this to google-sparsehash@googlegroups.com ## ## ------------------------------------------------- ##" ) | sed "s/^/$as_me: WARNING: /" >&2 ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $2" >&5 $as_echo_n "checking for $2... " >&6; } if eval \${$3+:} false; then : $as_echo_n "(cached) " >&6 else eval "$3=\$ac_header_compiler" fi eval ac_res=\$$3 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 $as_echo "$ac_res" >&6; } fi eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno } # ac_fn_c_check_header_mongrel # ac_fn_cxx_try_cpp LINENO # ------------------------ # Try to preprocess conftest.$ac_ext, and return whether this succeeded. ac_fn_cxx_try_cpp () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack if { { ac_try="$ac_cpp conftest.$ac_ext" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_cpp conftest.$ac_ext") 2>conftest.err ac_status=$? if test -s conftest.err; then grep -v '^ *+' conftest.err >conftest.er1 cat conftest.er1 >&5 mv -f conftest.er1 conftest.err fi $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } > conftest.i && { test -z "$ac_cxx_preproc_warn_flag$ac_cxx_werror_flag" || test ! 
-s conftest.err }; then : ac_retval=0 else $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=1 fi eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_cxx_try_cpp # ac_fn_cxx_check_header_mongrel LINENO HEADER VAR INCLUDES # --------------------------------------------------------- # Tests whether HEADER exists, giving a warning if it cannot be compiled using # the include files in INCLUDES and setting the cache variable VAR # accordingly. ac_fn_cxx_check_header_mongrel () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack if eval \${$3+:} false; then : { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $2" >&5 $as_echo_n "checking for $2... " >&6; } if eval \${$3+:} false; then : $as_echo_n "(cached) " >&6 fi eval ac_res=\$$3 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 $as_echo "$ac_res" >&6; } else # Is the header compilable? { $as_echo "$as_me:${as_lineno-$LINENO}: checking $2 usability" >&5 $as_echo_n "checking $2 usability... " >&6; } cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ $4 #include <$2> _ACEOF if ac_fn_cxx_try_compile "$LINENO"; then : ac_header_compiler=yes else ac_header_compiler=no fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_header_compiler" >&5 $as_echo "$ac_header_compiler" >&6; } # Is the header present? { $as_echo "$as_me:${as_lineno-$LINENO}: checking $2 presence" >&5 $as_echo_n "checking $2 presence... " >&6; } cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <$2> _ACEOF if ac_fn_cxx_try_cpp "$LINENO"; then : ac_header_preproc=yes else ac_header_preproc=no fi rm -f conftest.err conftest.i conftest.$ac_ext { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_header_preproc" >&5 $as_echo "$ac_header_preproc" >&6; } # So? What about this header? 
case $ac_header_compiler:$ac_header_preproc:$ac_cxx_preproc_warn_flag in #(( yes:no: ) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $2: accepted by the compiler, rejected by the preprocessor!" >&5 $as_echo "$as_me: WARNING: $2: accepted by the compiler, rejected by the preprocessor!" >&2;} { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $2: proceeding with the compiler's result" >&5 $as_echo "$as_me: WARNING: $2: proceeding with the compiler's result" >&2;} ;; no:yes:* ) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $2: present but cannot be compiled" >&5 $as_echo "$as_me: WARNING: $2: present but cannot be compiled" >&2;} { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $2: check for missing prerequisite headers?" >&5 $as_echo "$as_me: WARNING: $2: check for missing prerequisite headers?" >&2;} { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $2: see the Autoconf documentation" >&5 $as_echo "$as_me: WARNING: $2: see the Autoconf documentation" >&2;} { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $2: section \"Present But Cannot Be Compiled\"" >&5 $as_echo "$as_me: WARNING: $2: section \"Present But Cannot Be Compiled\"" >&2;} { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $2: proceeding with the compiler's result" >&5 $as_echo "$as_me: WARNING: $2: proceeding with the compiler's result" >&2;} ( $as_echo "## ------------------------------------------------- ## ## Report this to google-sparsehash@googlegroups.com ## ## ------------------------------------------------- ##" ) | sed "s/^/$as_me: WARNING: /" >&2 ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $2" >&5 $as_echo_n "checking for $2... 
" >&6; } if eval \${$3+:} false; then : $as_echo_n "(cached) " >&6 else eval "$3=\$ac_header_compiler" fi eval ac_res=\$$3 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 $as_echo "$ac_res" >&6; } fi eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno } # ac_fn_cxx_check_header_mongrel cat >config.log <<_ACEOF This file contains any messages produced by compilers while running configure, to aid debugging if configure makes a mistake. It was created by sparsehash $as_me 2.0.2, which was generated by GNU Autoconf 2.68. Invocation command line was $ $0 $@ _ACEOF exec 5>>config.log { cat <<_ASUNAME ## --------- ## ## Platform. ## ## --------- ## hostname = `(hostname || uname -n) 2>/dev/null | sed 1q` uname -m = `(uname -m) 2>/dev/null || echo unknown` uname -r = `(uname -r) 2>/dev/null || echo unknown` uname -s = `(uname -s) 2>/dev/null || echo unknown` uname -v = `(uname -v) 2>/dev/null || echo unknown` /usr/bin/uname -p = `(/usr/bin/uname -p) 2>/dev/null || echo unknown` /bin/uname -X = `(/bin/uname -X) 2>/dev/null || echo unknown` /bin/arch = `(/bin/arch) 2>/dev/null || echo unknown` /usr/bin/arch -k = `(/usr/bin/arch -k) 2>/dev/null || echo unknown` /usr/convex/getsysinfo = `(/usr/convex/getsysinfo) 2>/dev/null || echo unknown` /usr/bin/hostinfo = `(/usr/bin/hostinfo) 2>/dev/null || echo unknown` /bin/machine = `(/bin/machine) 2>/dev/null || echo unknown` /usr/bin/oslevel = `(/usr/bin/oslevel) 2>/dev/null || echo unknown` /bin/universe = `(/bin/universe) 2>/dev/null || echo unknown` _ASUNAME as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. $as_echo "PATH: $as_dir" done IFS=$as_save_IFS } >&5 cat >&5 <<_ACEOF ## ----------- ## ## Core tests. ## ## ----------- ## _ACEOF # Keep a trace of the command line. # Strip out --no-create and --no-recursion so they do not pile up. # Strip out --silent because we don't want to record it for future runs. 
# Also quote any args containing shell meta-characters. # Make two passes to allow for proper duplicate-argument suppression. ac_configure_args= ac_configure_args0= ac_configure_args1= ac_must_keep_next=false for ac_pass in 1 2 do for ac_arg do case $ac_arg in -no-create | --no-c* | -n | -no-recursion | --no-r*) continue ;; -q | -quiet | --quiet | --quie | --qui | --qu | --q \ | -silent | --silent | --silen | --sile | --sil) continue ;; *\'*) ac_arg=`$as_echo "$ac_arg" | sed "s/'/'\\\\\\\\''/g"` ;; esac case $ac_pass in 1) as_fn_append ac_configure_args0 " '$ac_arg'" ;; 2) as_fn_append ac_configure_args1 " '$ac_arg'" if test $ac_must_keep_next = true; then ac_must_keep_next=false # Got value, back to normal. else case $ac_arg in *=* | --config-cache | -C | -disable-* | --disable-* \ | -enable-* | --enable-* | -gas | --g* | -nfp | --nf* \ | -q | -quiet | --q* | -silent | --sil* | -v | -verb* \ | -with-* | --with-* | -without-* | --without-* | --x) case "$ac_configure_args0 " in "$ac_configure_args1"*" '$ac_arg' "* ) continue ;; esac ;; -* ) ac_must_keep_next=true ;; esac fi as_fn_append ac_configure_args " '$ac_arg'" ;; esac done done { ac_configure_args0=; unset ac_configure_args0;} { ac_configure_args1=; unset ac_configure_args1;} # When interrupted or exit'd, cleanup temporary files, and complete # config.log. We remove comments because anyway the quotes in there # would cause problems or look ugly. # WARNING: Use '\'' to represent an apostrophe within the trap. # WARNING: Do not start the trap code with a newline, due to a FreeBSD 4.0 bug. trap 'exit_status=$? # Save into config.log some information that might help in debugging. { echo $as_echo "## ---------------- ## ## Cache variables. 
## ## ---------------- ##" echo # The following way of writing the cache mishandles newlines in values, ( for ac_var in `(set) 2>&1 | sed -n '\''s/^\([a-zA-Z_][a-zA-Z0-9_]*\)=.*/\1/p'\''`; do eval ac_val=\$$ac_var case $ac_val in #( *${as_nl}*) case $ac_var in #( *_cv_*) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: cache variable $ac_var contains a newline" >&5 $as_echo "$as_me: WARNING: cache variable $ac_var contains a newline" >&2;} ;; esac case $ac_var in #( _ | IFS | as_nl) ;; #( BASH_ARGV | BASH_SOURCE) eval $ac_var= ;; #( *) { eval $ac_var=; unset $ac_var;} ;; esac ;; esac done (set) 2>&1 | case $as_nl`(ac_space='\'' '\''; set) 2>&1` in #( *${as_nl}ac_space=\ *) sed -n \ "s/'\''/'\''\\\\'\'''\''/g; s/^\\([_$as_cr_alnum]*_cv_[_$as_cr_alnum]*\\)=\\(.*\\)/\\1='\''\\2'\''/p" ;; #( *) sed -n "/^[_$as_cr_alnum]*_cv_[_$as_cr_alnum]*=/p" ;; esac | sort ) echo $as_echo "## ----------------- ## ## Output variables. ## ## ----------------- ##" echo for ac_var in $ac_subst_vars do eval ac_val=\$$ac_var case $ac_val in *\'\''*) ac_val=`$as_echo "$ac_val" | sed "s/'\''/'\''\\\\\\\\'\'''\''/g"`;; esac $as_echo "$ac_var='\''$ac_val'\''" done | sort echo if test -n "$ac_subst_files"; then $as_echo "## ------------------- ## ## File substitutions. ## ## ------------------- ##" echo for ac_var in $ac_subst_files do eval ac_val=\$$ac_var case $ac_val in *\'\''*) ac_val=`$as_echo "$ac_val" | sed "s/'\''/'\''\\\\\\\\'\'''\''/g"`;; esac $as_echo "$ac_var='\''$ac_val'\''" done | sort echo fi if test -s confdefs.h; then $as_echo "## ----------- ## ## confdefs.h. 
## ## ----------- ##" echo cat confdefs.h echo fi test "$ac_signal" != 0 && $as_echo "$as_me: caught signal $ac_signal" $as_echo "$as_me: exit $exit_status" } >&5 rm -f core *.core core.conftest.* && rm -f -r conftest* confdefs* conf$$* $ac_clean_files && exit $exit_status ' 0 for ac_signal in 1 2 13 15; do trap 'ac_signal='$ac_signal'; as_fn_exit 1' $ac_signal done ac_signal=0 # confdefs.h avoids OS command line length limits that DEFS can exceed. rm -f -r conftest* confdefs.h $as_echo "/* confdefs.h */" > confdefs.h # Predefined preprocessor variables. cat >>confdefs.h <<_ACEOF #define PACKAGE_NAME "$PACKAGE_NAME" _ACEOF cat >>confdefs.h <<_ACEOF #define PACKAGE_TARNAME "$PACKAGE_TARNAME" _ACEOF cat >>confdefs.h <<_ACEOF #define PACKAGE_VERSION "$PACKAGE_VERSION" _ACEOF cat >>confdefs.h <<_ACEOF #define PACKAGE_STRING "$PACKAGE_STRING" _ACEOF cat >>confdefs.h <<_ACEOF #define PACKAGE_BUGREPORT "$PACKAGE_BUGREPORT" _ACEOF cat >>confdefs.h <<_ACEOF #define PACKAGE_URL "$PACKAGE_URL" _ACEOF # Let the site file select an alternate cache file if it wants to. # Prefer an explicitly selected file to automatically selected ones. ac_site_file1=NONE ac_site_file2=NONE if test -n "$CONFIG_SITE"; then # We do not want a PATH search for config.site. case $CONFIG_SITE in #(( -*) ac_site_file1=./$CONFIG_SITE;; */*) ac_site_file1=$CONFIG_SITE;; *) ac_site_file1=./$CONFIG_SITE;; esac elif test "x$prefix" != xNONE; then ac_site_file1=$prefix/share/config.site ac_site_file2=$prefix/etc/config.site else ac_site_file1=$ac_default_prefix/share/config.site ac_site_file2=$ac_default_prefix/etc/config.site fi for ac_site_file in "$ac_site_file1" "$ac_site_file2" do test "x$ac_site_file" = xNONE && continue if test /dev/null != "$ac_site_file" && test -r "$ac_site_file"; then { $as_echo "$as_me:${as_lineno-$LINENO}: loading site script $ac_site_file" >&5 $as_echo "$as_me: loading site script $ac_site_file" >&6;} sed 's/^/| /' "$ac_site_file" >&5 . 
"$ac_site_file" \ || { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "failed to load site script $ac_site_file See \`config.log' for more details" "$LINENO" 5; } fi done if test -r "$cache_file"; then # Some versions of bash will fail to source /dev/null (special files # actually), so we avoid doing that. DJGPP emulates it as a regular file. if test /dev/null != "$cache_file" && test -f "$cache_file"; then { $as_echo "$as_me:${as_lineno-$LINENO}: loading cache $cache_file" >&5 $as_echo "$as_me: loading cache $cache_file" >&6;} case $cache_file in [\\/]* | ?:[\\/]* ) . "$cache_file";; *) . "./$cache_file";; esac fi else { $as_echo "$as_me:${as_lineno-$LINENO}: creating cache $cache_file" >&5 $as_echo "$as_me: creating cache $cache_file" >&6;} >$cache_file fi # Check that the precious variables saved in the cache have kept the same # value. ac_cache_corrupted=false for ac_var in $ac_precious_vars; do eval ac_old_set=\$ac_cv_env_${ac_var}_set eval ac_new_set=\$ac_env_${ac_var}_set eval ac_old_val=\$ac_cv_env_${ac_var}_value eval ac_new_val=\$ac_env_${ac_var}_value case $ac_old_set,$ac_new_set in set,) { $as_echo "$as_me:${as_lineno-$LINENO}: error: \`$ac_var' was set to \`$ac_old_val' in the previous run" >&5 $as_echo "$as_me: error: \`$ac_var' was set to \`$ac_old_val' in the previous run" >&2;} ac_cache_corrupted=: ;; ,set) { $as_echo "$as_me:${as_lineno-$LINENO}: error: \`$ac_var' was not set in the previous run" >&5 $as_echo "$as_me: error: \`$ac_var' was not set in the previous run" >&2;} ac_cache_corrupted=: ;; ,);; *) if test "x$ac_old_val" != "x$ac_new_val"; then # differences in whitespace do not lead to failure. 
ac_old_val_w=`echo x $ac_old_val` ac_new_val_w=`echo x $ac_new_val` if test "$ac_old_val_w" != "$ac_new_val_w"; then { $as_echo "$as_me:${as_lineno-$LINENO}: error: \`$ac_var' has changed since the previous run:" >&5 $as_echo "$as_me: error: \`$ac_var' has changed since the previous run:" >&2;} ac_cache_corrupted=: else { $as_echo "$as_me:${as_lineno-$LINENO}: warning: ignoring whitespace changes in \`$ac_var' since the previous run:" >&5 $as_echo "$as_me: warning: ignoring whitespace changes in \`$ac_var' since the previous run:" >&2;} eval $ac_var=\$ac_old_val fi { $as_echo "$as_me:${as_lineno-$LINENO}: former value: \`$ac_old_val'" >&5 $as_echo "$as_me: former value: \`$ac_old_val'" >&2;} { $as_echo "$as_me:${as_lineno-$LINENO}: current value: \`$ac_new_val'" >&5 $as_echo "$as_me: current value: \`$ac_new_val'" >&2;} fi;; esac # Pass precious variables to config.status. if test "$ac_new_set" = set; then case $ac_new_val in *\'*) ac_arg=$ac_var=`$as_echo "$ac_new_val" | sed "s/'/'\\\\\\\\''/g"` ;; *) ac_arg=$ac_var=$ac_new_val ;; esac case " $ac_configure_args " in *" '$ac_arg' "*) ;; # Avoid dups. Use of quotes ensures accuracy. *) as_fn_append ac_configure_args " '$ac_arg'" ;; esac fi done if $ac_cache_corrupted; then { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} { $as_echo "$as_me:${as_lineno-$LINENO}: error: changes in the environment can compromise the build" >&5 $as_echo "$as_me: error: changes in the environment can compromise the build" >&2;} as_fn_error $? "run \`make distclean' and/or \`rm $cache_file' and start over" "$LINENO" 5 fi ## -------------------- ## ## Main body of script. 
## ## -------------------- ## ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu # The argument here is just something that should be in the current directory # (for sanity checking) am__api_version='1.11' ac_aux_dir= for ac_dir in "$srcdir" "$srcdir/.." "$srcdir/../.."; do if test -f "$ac_dir/install-sh"; then ac_aux_dir=$ac_dir ac_install_sh="$ac_aux_dir/install-sh -c" break elif test -f "$ac_dir/install.sh"; then ac_aux_dir=$ac_dir ac_install_sh="$ac_aux_dir/install.sh -c" break elif test -f "$ac_dir/shtool"; then ac_aux_dir=$ac_dir ac_install_sh="$ac_aux_dir/shtool install -c" break fi done if test -z "$ac_aux_dir"; then as_fn_error $? "cannot find install-sh, install.sh, or shtool in \"$srcdir\" \"$srcdir/..\" \"$srcdir/../..\"" "$LINENO" 5 fi # These three variables are undocumented and unsupported, # and are intended to be withdrawn in a future Autoconf release. # They can cause serious problems if a builder's source tree is in a directory # whose full name contains unusual characters. ac_config_guess="$SHELL $ac_aux_dir/config.guess" # Please don't use this var. ac_config_sub="$SHELL $ac_aux_dir/config.sub" # Please don't use this var. ac_configure="$SHELL $ac_aux_dir/configure" # Please don't use this var. # Find a good install program. We prefer a C program (faster), # so one script is as good as another. 
But avoid the broken or # incompatible versions: # SysV /etc/install, /usr/sbin/install # SunOS /usr/etc/install # IRIX /sbin/install # AIX /bin/install # AmigaOS /C/install, which installs bootblocks on floppy discs # AIX 4 /usr/bin/installbsd, which doesn't work without a -g flag # AFS /usr/afsws/bin/install, which mishandles nonexistent args # SVR4 /usr/ucb/install, which tries to use the nonexistent group "staff" # OS/2's system install, which has a completely different semantic # ./install, which can be erroneously created by make from ./install.sh. # Reject install programs that cannot install multiple files. { $as_echo "$as_me:${as_lineno-$LINENO}: checking for a BSD-compatible install" >&5 $as_echo_n "checking for a BSD-compatible install... " >&6; } if test -z "$INSTALL"; then if ${ac_cv_path_install+:} false; then : $as_echo_n "(cached) " >&6 else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. # Account for people who put trailing slashes in PATH elements. case $as_dir/ in #(( ./ | .// | /[cC]/* | \ /etc/* | /usr/sbin/* | /usr/etc/* | /sbin/* | /usr/afsws/bin/* | \ ?:[\\/]os2[\\/]install[\\/]* | ?:[\\/]OS2[\\/]INSTALL[\\/]* | \ /usr/ucb/* ) ;; *) # OSF1 and SCO ODT 3.0 have their own names for install. # Don't use installbsd from OSF since it installs stuff as root # by default. for ac_prog in ginstall scoinst install; do for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_prog$ac_exec_ext" && $as_test_x "$as_dir/$ac_prog$ac_exec_ext"; }; then if test $ac_prog = install && grep dspmsg "$as_dir/$ac_prog$ac_exec_ext" >/dev/null 2>&1; then # AIX install. It has an incompatible calling convention. : elif test $ac_prog = install && grep pwplus "$as_dir/$ac_prog$ac_exec_ext" >/dev/null 2>&1; then # program-specific install script used by HP pwplus--don't use. 
: else rm -rf conftest.one conftest.two conftest.dir echo one > conftest.one echo two > conftest.two mkdir conftest.dir if "$as_dir/$ac_prog$ac_exec_ext" -c conftest.one conftest.two "`pwd`/conftest.dir" && test -s conftest.one && test -s conftest.two && test -s conftest.dir/conftest.one && test -s conftest.dir/conftest.two then ac_cv_path_install="$as_dir/$ac_prog$ac_exec_ext -c" break 3 fi fi fi done done ;; esac done IFS=$as_save_IFS rm -rf conftest.one conftest.two conftest.dir fi if test "${ac_cv_path_install+set}" = set; then INSTALL=$ac_cv_path_install else # As a last resort, use the slow shell script. Don't cache a # value for INSTALL within a source directory, because that will # break other packages using the cache if that directory is # removed, or if the value is a relative name. INSTALL=$ac_install_sh fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $INSTALL" >&5 $as_echo "$INSTALL" >&6; } # Use test -z because SunOS4 sh mishandles braces in ${var-val}. # It thinks the first close brace ends the variable substitution. test -z "$INSTALL_PROGRAM" && INSTALL_PROGRAM='${INSTALL}' test -z "$INSTALL_SCRIPT" && INSTALL_SCRIPT='${INSTALL}' test -z "$INSTALL_DATA" && INSTALL_DATA='${INSTALL} -m 644' { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether build environment is sane" >&5 $as_echo_n "checking whether build environment is sane... " >&6; } # Just in case sleep 1 echo timestamp > conftest.file # Reject unsafe characters in $srcdir or the absolute working directory # name. Accept space and tab only in the latter. am_lf=' ' case `pwd` in *[\\\"\#\$\&\'\`$am_lf]*) as_fn_error $? "unsafe absolute working directory name" "$LINENO" 5;; esac case $srcdir in *[\\\"\#\$\&\'\`$am_lf\ \ ]*) as_fn_error $? "unsafe srcdir value: \`$srcdir'" "$LINENO" 5;; esac # Do `set' in a subshell so we don't clobber the current shell's # arguments. 
Must try -L first in case configure is actually a # symlink; some systems play weird games with the mod time of symlinks # (eg FreeBSD returns the mod time of the symlink's containing # directory). if ( set X `ls -Lt "$srcdir/configure" conftest.file 2> /dev/null` if test "$*" = "X"; then # -L didn't work. set X `ls -t "$srcdir/configure" conftest.file` fi rm -f conftest.file if test "$*" != "X $srcdir/configure conftest.file" \ && test "$*" != "X conftest.file $srcdir/configure"; then # If neither matched, then we have a broken ls. This can happen # if, for instance, CONFIG_SHELL is bash and it inherits a # broken ls alias from the environment. This has actually # happened. Such a system could not be considered "sane". as_fn_error $? "ls -t appears to fail. Make sure there is not a broken alias in your environment" "$LINENO" 5 fi test "$2" = conftest.file ) then # Ok. : else as_fn_error $? "newly created file is older than distributed files! Check your system clock" "$LINENO" 5 fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } test "$program_prefix" != NONE && program_transform_name="s&^&$program_prefix&;$program_transform_name" # Use a double $ so make ignores it. test "$program_suffix" != NONE && program_transform_name="s&\$&$program_suffix&;$program_transform_name" # Double any \ or $. # By default was `s,x,x', remove it if useless. 
ac_script='s/[\\$]/&&/g;s/;s,x,x,$//' program_transform_name=`$as_echo "$program_transform_name" | sed "$ac_script"` # expand $ac_aux_dir to an absolute path am_aux_dir=`cd $ac_aux_dir && pwd` if test x"${MISSING+set}" != xset; then case $am_aux_dir in *\ * | *\ *) MISSING="\${SHELL} \"$am_aux_dir/missing\"" ;; *) MISSING="\${SHELL} $am_aux_dir/missing" ;; esac fi # Use eval to expand $SHELL if eval "$MISSING --run true"; then am_missing_run="$MISSING --run " else am_missing_run= { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: \`missing' script is too old or missing" >&5 $as_echo "$as_me: WARNING: \`missing' script is too old or missing" >&2;} fi if test x"${install_sh}" != xset; then case $am_aux_dir in *\ * | *\ *) install_sh="\${SHELL} '$am_aux_dir/install-sh'" ;; *) install_sh="\${SHELL} $am_aux_dir/install-sh" esac fi # Installed binaries are usually stripped using `strip' when the user # run `make install-strip'. However `strip' might not be the right # tool to use in cross-compilation environments, therefore Automake # will honor the `STRIP' environment variable to overrule this program. if test "$cross_compiling" != no; then if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args. set dummy ${ac_tool_prefix}strip; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_STRIP+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$STRIP"; then ac_cv_prog_STRIP="$STRIP" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_STRIP="${ac_tool_prefix}strip" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi STRIP=$ac_cv_prog_STRIP if test -n "$STRIP"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $STRIP" >&5 $as_echo "$STRIP" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_STRIP"; then ac_ct_STRIP=$STRIP # Extract the first word of "strip", so it can be a program name with args. set dummy strip; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_STRIP+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_STRIP"; then ac_cv_prog_ac_ct_STRIP="$ac_ct_STRIP" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_ac_ct_STRIP="strip" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_STRIP=$ac_cv_prog_ac_ct_STRIP if test -n "$ac_ct_STRIP"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_STRIP" >&5 $as_echo "$ac_ct_STRIP" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_STRIP" = x; then STRIP=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac STRIP=$ac_ct_STRIP fi else STRIP="$ac_cv_prog_STRIP" fi fi INSTALL_STRIP_PROGRAM="\$(install_sh) -c -s" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for a thread-safe mkdir -p" >&5 $as_echo_n "checking for a thread-safe mkdir -p... " >&6; } if test -z "$MKDIR_P"; then if ${ac_cv_path_mkdir+:} false; then : $as_echo_n "(cached) " >&6 else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH$PATH_SEPARATOR/opt/sfw/bin do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_prog in mkdir gmkdir; do for ac_exec_ext in '' $ac_executable_extensions; do { test -f "$as_dir/$ac_prog$ac_exec_ext" && $as_test_x "$as_dir/$ac_prog$ac_exec_ext"; } || continue case `"$as_dir/$ac_prog$ac_exec_ext" --version 2>&1` in #( 'mkdir (GNU coreutils) '* | \ 'mkdir (coreutils) '* | \ 'mkdir (fileutils) '4.1*) ac_cv_path_mkdir=$as_dir/$ac_prog$ac_exec_ext break 3;; esac done done done IFS=$as_save_IFS fi test -d ./--version && rmdir ./--version if test "${ac_cv_path_mkdir+set}" = set; then MKDIR_P="$ac_cv_path_mkdir -p" else # As a last resort, use the slow shell script. 
Don't cache a # value for MKDIR_P within a source directory, because that will # break other packages using the cache if that directory is # removed, or if the value is a relative name. MKDIR_P="$ac_install_sh -d" fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MKDIR_P" >&5 $as_echo "$MKDIR_P" >&6; } mkdir_p="$MKDIR_P" case $mkdir_p in [\\/$]* | ?:[\\/]*) ;; */*) mkdir_p="\$(top_builddir)/$mkdir_p" ;; esac for ac_prog in gawk mawk nawk awk do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_AWK+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$AWK"; then ac_cv_prog_AWK="$AWK" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_AWK="$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi AWK=$ac_cv_prog_AWK if test -n "$AWK"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $AWK" >&5 $as_echo "$AWK" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$AWK" && break done { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether ${MAKE-make} sets \$(MAKE)" >&5 $as_echo_n "checking whether ${MAKE-make} sets \$(MAKE)... " >&6; } set x ${MAKE-make} ac_make=`$as_echo "$2" | sed 's/+/p/g; s/[^a-zA-Z0-9_]/_/g'` if eval \${ac_cv_prog_make_${ac_make}_set+:} false; then : $as_echo_n "(cached) " >&6 else cat >conftest.make <<\_ACEOF SHELL = /bin/sh all: @echo '@@@%%%=$(MAKE)=@@@%%%' _ACEOF # GNU make sometimes prints "make[1]: Entering ...", which would confuse us. 
case `${MAKE-make} -f conftest.make 2>/dev/null` in *@@@%%%=?*=@@@%%%*) eval ac_cv_prog_make_${ac_make}_set=yes;; *) eval ac_cv_prog_make_${ac_make}_set=no;; esac rm -f conftest.make fi if eval test \$ac_cv_prog_make_${ac_make}_set = yes; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } SET_MAKE= else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } SET_MAKE="MAKE=${MAKE-make}" fi rm -rf .tst 2>/dev/null mkdir .tst 2>/dev/null if test -d .tst; then am__leading_dot=. else am__leading_dot=_ fi rmdir .tst 2>/dev/null if test "`cd $srcdir && pwd`" != "`pwd`"; then # Use -I$(srcdir) only when $(srcdir) != ., so that make's output # is not polluted with repeated "-I." am__isrc=' -I$(srcdir)' # test to see if srcdir already configured if test -f $srcdir/config.status; then as_fn_error $? "source directory already configured; run \"make distclean\" there first" "$LINENO" 5 fi fi # test whether we have cygpath if test -z "$CYGPATH_W"; then if (cygpath --version) >/dev/null 2>/dev/null; then CYGPATH_W='cygpath -w' else CYGPATH_W=echo fi fi # Define the identity of the package. PACKAGE='sparsehash' VERSION='2.0.2' cat >>confdefs.h <<_ACEOF #define PACKAGE "$PACKAGE" _ACEOF cat >>confdefs.h <<_ACEOF #define VERSION "$VERSION" _ACEOF # Some tools Automake needs. ACLOCAL=${ACLOCAL-"${am_missing_run}aclocal-${am__api_version}"} AUTOCONF=${AUTOCONF-"${am_missing_run}autoconf"} AUTOMAKE=${AUTOMAKE-"${am_missing_run}automake-${am__api_version}"} AUTOHEADER=${AUTOHEADER-"${am_missing_run}autoheader"} MAKEINFO=${MAKEINFO-"${am_missing_run}makeinfo"} # We need awk for the "check" target. The system "awk" is bad on # some platforms. # Always define AMTAR for backward compatibility. AMTAR=${AMTAR-"${am_missing_run}tar"} am__tar='${AMTAR} chof - "$$tardir"'; am__untar='${AMTAR} xf -' ac_config_headers="$ac_config_headers src/config.h" # Checks for programs. 
ac_ext=cpp ac_cpp='$CXXCPP $CPPFLAGS' ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_cxx_compiler_gnu if test -z "$CXX"; then if test -n "$CCC"; then CXX=$CCC else if test -n "$ac_tool_prefix"; then for ac_prog in g++ c++ gpp aCC CC cxx cc++ cl.exe FCC KCC RCC xlC_r xlC do # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args. set dummy $ac_tool_prefix$ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_CXX+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$CXX"; then ac_cv_prog_CXX="$CXX" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_CXX="$ac_tool_prefix$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi CXX=$ac_cv_prog_CXX if test -n "$CXX"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CXX" >&5 $as_echo "$CXX" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$CXX" && break done fi if test -z "$CXX"; then ac_ct_CXX=$CXX for ac_prog in g++ c++ gpp aCC CC cxx cc++ cl.exe FCC KCC RCC xlC_r xlC do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_CXX+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_CXX"; then ac_cv_prog_ac_ct_CXX="$ac_ct_CXX" # Let the user override the test. 
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_ac_ct_CXX="$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_CXX=$ac_cv_prog_ac_ct_CXX if test -n "$ac_ct_CXX"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_CXX" >&5 $as_echo "$ac_ct_CXX" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$ac_ct_CXX" && break done if test "x$ac_ct_CXX" = x; then CXX="g++" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac CXX=$ac_ct_CXX fi fi fi fi # Provide some information about the compiler. $as_echo "$as_me:${as_lineno-$LINENO}: checking for C++ compiler version" >&5 set X $ac_compile ac_compiler=$2 for ac_option in --version -v -V -qversion; do { { ac_try="$ac_compiler $ac_option >&5" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_compiler $ac_option >&5") 2>conftest.err ac_status=$? if test -s conftest.err; then sed '10a\ ... rest of stderr output deleted ... 10q' conftest.err >conftest.er1 cat conftest.er1 >&5 fi rm -f conftest.er1 conftest.err $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } done cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ int main () { ; return 0; } _ACEOF ac_clean_files_save=$ac_clean_files ac_clean_files="$ac_clean_files a.out a.out.dSYM a.exe b.out" # Try to create an executable without -o first, disregard a.out. # It will help us diagnose broken compilers, and finding out an intuition # of exeext. { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the C++ compiler works" >&5 $as_echo_n "checking whether the C++ compiler works... " >&6; } ac_link_default=`$as_echo "$ac_link" | sed 's/ -o *conftest[^ ]*//'` # The possible output files: ac_files="a.out conftest.exe conftest a.exe a_out.exe b.out conftest.*" ac_rmfiles= for ac_file in $ac_files do case $ac_file in *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM | *.o | *.obj ) ;; * ) ac_rmfiles="$ac_rmfiles $ac_file";; esac done rm -f $ac_rmfiles if { { ac_try="$ac_link_default" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_link_default") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then : # Autoconf-2.13 could set the ac_cv_exeext variable to `no'. # So ignore a value of `no', otherwise this would lead to `EXEEXT = no' # in a Makefile. We should not override ac_cv_exeext if it was cached, # so that the user can short-circuit this test for compilers unknown to # Autoconf. for ac_file in $ac_files '' do test -f "$ac_file" || continue case $ac_file in *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM | *.o | *.obj ) ;; [ab].out ) # We found the default executable, but exeext='' is most # certainly right. 
break;; *.* ) if test "${ac_cv_exeext+set}" = set && test "$ac_cv_exeext" != no; then :; else ac_cv_exeext=`expr "$ac_file" : '[^.]*\(\..*\)'` fi # We set ac_cv_exeext here because the later test for it is not # safe: cross compilers may not add the suffix if given an `-o' # argument, so we may need to know it at that point already. # Even if this section looks crufty: it has the advantage of # actually working. break;; * ) break;; esac done test "$ac_cv_exeext" = no && ac_cv_exeext= else ac_file='' fi if test -z "$ac_file"; then : { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error 77 "C++ compiler cannot create executables See \`config.log' for more details" "$LINENO" 5; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for C++ compiler default output file name" >&5 $as_echo_n "checking for C++ compiler default output file name... " >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_file" >&5 $as_echo "$ac_file" >&6; } ac_exeext=$ac_cv_exeext rm -f -r a.out a.out.dSYM a.exe conftest$ac_cv_exeext b.out ac_clean_files=$ac_clean_files_save { $as_echo "$as_me:${as_lineno-$LINENO}: checking for suffix of executables" >&5 $as_echo_n "checking for suffix of executables... " >&6; } if { { ac_try="$ac_link" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_link") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then : # If both `conftest.exe' and `conftest' are `present' (well, observable) # catch `conftest.exe'. 
For instance with Cygwin, `ls conftest' will # work properly (i.e., refer to `conftest.exe'), while it won't with # `rm'. for ac_file in conftest.exe conftest conftest.*; do test -f "$ac_file" || continue case $ac_file in *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM | *.o | *.obj ) ;; *.* ) ac_cv_exeext=`expr "$ac_file" : '[^.]*\(\..*\)'` break;; * ) break;; esac done else { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "cannot compute suffix of executables: cannot compile and link See \`config.log' for more details" "$LINENO" 5; } fi rm -f conftest conftest$ac_cv_exeext { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_exeext" >&5 $as_echo "$ac_cv_exeext" >&6; } rm -f conftest.$ac_ext EXEEXT=$ac_cv_exeext ac_exeext=$EXEEXT cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include int main () { FILE *f = fopen ("conftest.out", "w"); return ferror (f) || fclose (f) != 0; ; return 0; } _ACEOF ac_clean_files="$ac_clean_files conftest.out" # Check that the compiler produces executables we can run. If not, either # the compiler is broken, or we cross compile. { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether we are cross compiling" >&5 $as_echo_n "checking whether we are cross compiling... " >&6; } if test "$cross_compiling" != yes; then { { ac_try="$ac_link" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_link") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? 
= $ac_status" >&5 test $ac_status = 0; } if { ac_try='./conftest$ac_cv_exeext' { { case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_try") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; }; then cross_compiling=no else if test "$cross_compiling" = maybe; then cross_compiling=yes else { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "cannot run C++ compiled programs. If you meant to cross compile, use \`--host'. See \`config.log' for more details" "$LINENO" 5; } fi fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $cross_compiling" >&5 $as_echo "$cross_compiling" >&6; } rm -f conftest.$ac_ext conftest$ac_cv_exeext conftest.out ac_clean_files=$ac_clean_files_save { $as_echo "$as_me:${as_lineno-$LINENO}: checking for suffix of object files" >&5 $as_echo_n "checking for suffix of object files... " >&6; } if ${ac_cv_objext+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF rm -f conftest.o conftest.obj if { { ac_try="$ac_compile" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_compile") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? 
= $ac_status" >&5 test $ac_status = 0; }; then : for ac_file in conftest.o conftest.obj conftest.*; do test -f "$ac_file" || continue; case $ac_file in *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM ) ;; *) ac_cv_objext=`expr "$ac_file" : '.*\.\(.*\)'` break;; esac done else $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "cannot compute suffix of object files: cannot compile See \`config.log' for more details" "$LINENO" 5; } fi rm -f conftest.$ac_cv_objext conftest.$ac_ext fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_objext" >&5 $as_echo "$ac_cv_objext" >&6; } OBJEXT=$ac_cv_objext ac_objext=$OBJEXT { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether we are using the GNU C++ compiler" >&5 $as_echo_n "checking whether we are using the GNU C++ compiler... " >&6; } if ${ac_cv_cxx_compiler_gnu+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { #ifndef __GNUC__ choke me #endif ; return 0; } _ACEOF if ac_fn_cxx_try_compile "$LINENO"; then : ac_compiler_gnu=yes else ac_compiler_gnu=no fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext ac_cv_cxx_compiler_gnu=$ac_compiler_gnu fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_cxx_compiler_gnu" >&5 $as_echo "$ac_cv_cxx_compiler_gnu" >&6; } if test $ac_compiler_gnu = yes; then GXX=yes else GXX= fi ac_test_CXXFLAGS=${CXXFLAGS+set} ac_save_CXXFLAGS=$CXXFLAGS { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether $CXX accepts -g" >&5 $as_echo_n "checking whether $CXX accepts -g... 
" >&6; } if ${ac_cv_prog_cxx_g+:} false; then : $as_echo_n "(cached) " >&6 else ac_save_cxx_werror_flag=$ac_cxx_werror_flag ac_cxx_werror_flag=yes ac_cv_prog_cxx_g=no CXXFLAGS="-g" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_cxx_try_compile "$LINENO"; then : ac_cv_prog_cxx_g=yes else CXXFLAGS="" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_cxx_try_compile "$LINENO"; then : else ac_cxx_werror_flag=$ac_save_cxx_werror_flag CXXFLAGS="-g" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_cxx_try_compile "$LINENO"; then : ac_cv_prog_cxx_g=yes fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext ac_cxx_werror_flag=$ac_save_cxx_werror_flag fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cxx_g" >&5 $as_echo "$ac_cv_prog_cxx_g" >&6; } if test "$ac_test_CXXFLAGS" = set; then CXXFLAGS=$ac_save_CXXFLAGS elif test $ac_cv_prog_cxx_g = yes; then if test "$GXX" = yes; then CXXFLAGS="-g -O2" else CXXFLAGS="-g" fi else if test "$GXX" = yes; then CXXFLAGS="-O2" else CXXFLAGS= fi fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu DEPDIR="${am__leading_dot}deps" ac_config_commands="$ac_config_commands depfiles" am_make=${MAKE-make} cat > confinc << 'END' am__doit: @echo this is the am__doit target .PHONY: am__doit END # If we don't find an include directive, just comment out the code. { $as_echo "$as_me:${as_lineno-$LINENO}: checking for style of include used by $am_make" >&5 $as_echo_n "checking for style of include used by $am_make... 
" >&6; } am__include="#" am__quote= _am_result=none # First try GNU make style include. echo "include confinc" > confmf # Ignore all kinds of additional output from `make'. case `$am_make -s -f confmf 2> /dev/null` in #( *the\ am__doit\ target*) am__include=include am__quote= _am_result=GNU ;; esac # Now try BSD make style include. if test "$am__include" = "#"; then echo '.include "confinc"' > confmf case `$am_make -s -f confmf 2> /dev/null` in #( *the\ am__doit\ target*) am__include=.include am__quote="\"" _am_result=BSD ;; esac fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $_am_result" >&5 $as_echo "$_am_result" >&6; } rm -f confinc confmf # Check whether --enable-dependency-tracking was given. if test "${enable_dependency_tracking+set}" = set; then : enableval=$enable_dependency_tracking; fi if test "x$enable_dependency_tracking" != xno; then am_depcomp="$ac_aux_dir/depcomp" AMDEPBACKSLASH='\' fi if test "x$enable_dependency_tracking" != xno; then AMDEP_TRUE= AMDEP_FALSE='#' else AMDEP_TRUE='#' AMDEP_FALSE= fi depcc="$CXX" am_compiler_list= { $as_echo "$as_me:${as_lineno-$LINENO}: checking dependency style of $depcc" >&5 $as_echo_n "checking dependency style of $depcc... " >&6; } if ${am_cv_CXX_dependencies_compiler_type+:} false; then : $as_echo_n "(cached) " >&6 else if test -z "$AMDEP_TRUE" && test -f "$am_depcomp"; then # We make a subdir and do the tests there. Otherwise we can end up # making bogus files that we don't know about and never remove. For # instance it was reported that on HP-UX the gcc test will end up # making a dummy file named `D' -- because `-MD' means `put the output # in D'. mkdir conftest.dir # Copy depcomp to subdir because otherwise we won't find it if we're # using a relative directory. cp "$am_depcomp" conftest.dir cd conftest.dir # We will build objects and dependencies in a subdirectory because # it helps to detect inapplicable dependency modes. 
For instance # both Tru64's cc and ICC support -MD to output dependencies as a # side effect of compilation, but ICC will put the dependencies in # the current directory while Tru64 will put them in the object # directory. mkdir sub am_cv_CXX_dependencies_compiler_type=none if test "$am_compiler_list" = ""; then am_compiler_list=`sed -n 's/^#*\([a-zA-Z0-9]*\))$/\1/p' < ./depcomp` fi am__universal=false case " $depcc " in #( *\ -arch\ *\ -arch\ *) am__universal=true ;; esac for depmode in $am_compiler_list; do # Setup a source with many dependencies, because some compilers # like to wrap large dependency lists on column 80 (with \), and # we should not choose a depcomp mode which is confused by this. # # We need to recreate these files for each test, as the compiler may # overwrite some of them when testing with obscure command lines. # This happens at least with the AIX C compiler. : > sub/conftest.c for i in 1 2 3 4 5 6; do echo '#include "conftst'$i'.h"' >> sub/conftest.c # Using `: > sub/conftst$i.h' creates only sub/conftst1.h with # Solaris 8's {/usr,}/bin/sh. touch sub/conftst$i.h done echo "${am__include} ${am__quote}sub/conftest.Po${am__quote}" > confmf # We check with `-c' and `-o' for the sake of the "dashmstdout" # mode. It turns out that the SunPro C++ compiler does not properly # handle `-M -o', and we need to detect this. Also, some Intel # versions had trouble with output in subdirs am__obj=sub/conftest.${OBJEXT-o} am__minus_obj="-o $am__obj" case $depmode in gcc) # This depmode causes a compiler race in universal mode. test "$am__universal" = false || continue ;; nosideeffect) # after this tag, mechanisms are not by side-effect, so they'll # only be used when explicitly requested if test "x$enable_dependency_tracking" = xyes; then continue else break fi ;; msvisualcpp | msvcmsys) # This compiler won't grok `-c -o', but also, the minuso test has # not run yet. 
These depmodes are late enough in the game, and # so weak that their functioning should not be impacted. am__obj=conftest.${OBJEXT-o} am__minus_obj= ;; none) break ;; esac if depmode=$depmode \ source=sub/conftest.c object=$am__obj \ depfile=sub/conftest.Po tmpdepfile=sub/conftest.TPo \ $SHELL ./depcomp $depcc -c $am__minus_obj sub/conftest.c \ >/dev/null 2>conftest.err && grep sub/conftst1.h sub/conftest.Po > /dev/null 2>&1 && grep sub/conftst6.h sub/conftest.Po > /dev/null 2>&1 && grep $am__obj sub/conftest.Po > /dev/null 2>&1 && ${MAKE-make} -s -f confmf > /dev/null 2>&1; then # icc doesn't choke on unknown options, it will just issue warnings # or remarks (even with -Werror). So we grep stderr for any message # that says an option was ignored or not supported. # When given -MP, icc 7.0 and 7.1 complain thusly: # icc: Command line warning: ignoring option '-M'; no argument required # The diagnosis changed in icc 8.0: # icc: Command line remark: option '-MP' not supported if (grep 'ignoring option' conftest.err || grep 'not supported' conftest.err) >/dev/null 2>&1; then :; else am_cv_CXX_dependencies_compiler_type=$depmode break fi fi done cd .. rm -rf conftest.dir else am_cv_CXX_dependencies_compiler_type=none fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $am_cv_CXX_dependencies_compiler_type" >&5 $as_echo "$am_cv_CXX_dependencies_compiler_type" >&6; } CXXDEPMODE=depmode=$am_cv_CXX_dependencies_compiler_type if test "x$enable_dependency_tracking" != xno \ && test "$am_cv_CXX_dependencies_compiler_type" = gcc3; then am__fastdepCXX_TRUE= am__fastdepCXX_FALSE='#' else am__fastdepCXX_TRUE='#' am__fastdepCXX_FALSE= fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}gcc", so it can be a program name with args. 
set dummy ${ac_tool_prefix}gcc; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_CC="${ac_tool_prefix}gcc" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi CC=$ac_cv_prog_CC if test -n "$CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 $as_echo "$CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_CC"; then ac_ct_CC=$CC # Extract the first word of "gcc", so it can be a program name with args. set dummy gcc; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_CC"; then ac_cv_prog_ac_ct_CC="$ac_ct_CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_ac_ct_CC="gcc" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_CC=$ac_cv_prog_ac_ct_CC if test -n "$ac_ct_CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_CC" >&5 $as_echo "$ac_ct_CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_CC" = x; then CC="" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac CC=$ac_ct_CC fi else CC="$ac_cv_prog_CC" fi if test -z "$CC"; then if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}cc", so it can be a program name with args. set dummy ${ac_tool_prefix}cc; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_CC="${ac_tool_prefix}cc" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi CC=$ac_cv_prog_CC if test -n "$CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 $as_echo "$CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi fi if test -z "$CC"; then # Extract the first word of "cc", so it can be a program name with args. set dummy cc; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else ac_prog_rejected=no as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then if test "$as_dir/$ac_word$ac_exec_ext" = "/usr/ucb/cc"; then ac_prog_rejected=yes continue fi ac_cv_prog_CC="cc" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS if test $ac_prog_rejected = yes; then # We found a bogon in the path, so make sure we never use it. set dummy $ac_cv_prog_CC shift if test $# != 0; then # We chose a different compiler from the bogus one. # However, it has the same basename, so the bogon will be chosen # first if we set CC to just the basename; use the full file name. 
shift ac_cv_prog_CC="$as_dir/$ac_word${1+' '}$@" fi fi fi fi CC=$ac_cv_prog_CC if test -n "$CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 $as_echo "$CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$CC"; then if test -n "$ac_tool_prefix"; then for ac_prog in cl.exe do # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args. set dummy $ac_tool_prefix$ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_CC="$ac_tool_prefix$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi CC=$ac_cv_prog_CC if test -n "$CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 $as_echo "$CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$CC" && break done fi if test -z "$CC"; then ac_ct_CC=$CC for ac_prog in cl.exe do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_CC"; then ac_cv_prog_ac_ct_CC="$ac_ct_CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_ac_ct_CC="$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_CC=$ac_cv_prog_ac_ct_CC if test -n "$ac_ct_CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_CC" >&5 $as_echo "$ac_ct_CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$ac_ct_CC" && break done if test "x$ac_ct_CC" = x; then CC="" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac CC=$ac_ct_CC fi fi fi test -z "$CC" && { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "no acceptable C compiler found in \$PATH See \`config.log' for more details" "$LINENO" 5; } # Provide some information about the compiler. $as_echo "$as_me:${as_lineno-$LINENO}: checking for C compiler version" >&5 set X $ac_compile ac_compiler=$2 for ac_option in --version -v -V -qversion; do { { ac_try="$ac_compiler $ac_option >&5" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_compiler $ac_option >&5") 2>conftest.err ac_status=$? if test -s conftest.err; then sed '10a\ ... rest of stderr output deleted ... 10q' conftest.err >conftest.er1 cat conftest.er1 >&5 fi rm -f conftest.er1 conftest.err $as_echo "$as_me:${as_lineno-$LINENO}: \$? 
= $ac_status" >&5 test $ac_status = 0; } done { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether we are using the GNU C compiler" >&5 $as_echo_n "checking whether we are using the GNU C compiler... " >&6; } if ${ac_cv_c_compiler_gnu+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { #ifndef __GNUC__ choke me #endif ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_compiler_gnu=yes else ac_compiler_gnu=no fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext ac_cv_c_compiler_gnu=$ac_compiler_gnu fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_c_compiler_gnu" >&5 $as_echo "$ac_cv_c_compiler_gnu" >&6; } if test $ac_compiler_gnu = yes; then GCC=yes else GCC= fi ac_test_CFLAGS=${CFLAGS+set} ac_save_CFLAGS=$CFLAGS { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether $CC accepts -g" >&5 $as_echo_n "checking whether $CC accepts -g... " >&6; } if ${ac_cv_prog_cc_g+:} false; then : $as_echo_n "(cached) " >&6 else ac_save_c_werror_flag=$ac_c_werror_flag ac_c_werror_flag=yes ac_cv_prog_cc_g=no CFLAGS="-g" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_cv_prog_cc_g=yes else CFLAGS="" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : else ac_c_werror_flag=$ac_save_c_werror_flag CFLAGS="-g" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_cv_prog_cc_g=yes fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext ac_c_werror_flag=$ac_save_c_werror_flag fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cc_g" >&5 $as_echo "$ac_cv_prog_cc_g" >&6; } if test "$ac_test_CFLAGS" = set; then CFLAGS=$ac_save_CFLAGS elif test $ac_cv_prog_cc_g = yes; then if test "$GCC" = yes; then CFLAGS="-g -O2" else CFLAGS="-g" fi else if test "$GCC" = yes; then CFLAGS="-O2" else CFLAGS= fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $CC option to accept ISO C89" >&5 $as_echo_n "checking for $CC option to accept ISO C89... " >&6; } if ${ac_cv_prog_cc_c89+:} false; then : $as_echo_n "(cached) " >&6 else ac_cv_prog_cc_c89=no ac_save_CC=$CC cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <stdarg.h> #include <stdio.h> #include <sys/types.h> #include <sys/stat.h> /* Most of the following tests are stolen from RCS 5.7's src/conf.sh. */ struct buf { int x; }; FILE * (*rcsopen) (struct buf *, struct stat *, int); static char *e (p, i) char **p; int i; { return p[i]; } static char *f (char * (*g) (char **, int), char **p, ...) { char *s; va_list v; va_start (v,p); s = g (p, va_arg (v,int)); va_end (v); return s; } /* OSF 4.0 Compaq cc is some sort of almost-ANSI by default. It has function prototypes and stuff, but not '\xHH' hex character constants. These don't provoke an error unfortunately, instead are silently treated as 'x'. The following induces an error, until -std is added to get proper ANSI mode. Curiously '\x00'!='x' always comes out true, for an array size at least. It's necessary to write '\x00'==0 to get something that's true only with -std. */ int osf4_cc_array ['\x00' == 0 ? 1 : -1]; /* IBM C 6 for AIX is almost-ANSI by default, but it replaces macro parameters inside strings and character constants. 
*/ #define FOO(x) 'x' int xlc6_cc_array[FOO(a) == 'x' ? 1 : -1]; int test (int i, double x); struct s1 {int (*f) (int a);}; struct s2 {int (*f) (double a);}; int pairnames (int, char **, FILE *(*)(struct buf *, struct stat *, int), int, int); int argc; char **argv; int main () { return f (e, argv, 0) != argv[0] || f (e, argv, 1) != argv[1]; ; return 0; } _ACEOF for ac_arg in '' -qlanglvl=extc89 -qlanglvl=ansi -std \ -Ae "-Aa -D_HPUX_SOURCE" "-Xc -D__EXTENSIONS__" do CC="$ac_save_CC $ac_arg" if ac_fn_c_try_compile "$LINENO"; then : ac_cv_prog_cc_c89=$ac_arg fi rm -f core conftest.err conftest.$ac_objext test "x$ac_cv_prog_cc_c89" != "xno" && break done rm -f conftest.$ac_ext CC=$ac_save_CC fi # AC_CACHE_VAL case "x$ac_cv_prog_cc_c89" in x) { $as_echo "$as_me:${as_lineno-$LINENO}: result: none needed" >&5 $as_echo "none needed" >&6; } ;; xno) { $as_echo "$as_me:${as_lineno-$LINENO}: result: unsupported" >&5 $as_echo "unsupported" >&6; } ;; *) CC="$CC $ac_cv_prog_cc_c89" { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cc_c89" >&5 $as_echo "$ac_cv_prog_cc_c89" >&6; } ;; esac if test "x$ac_cv_prog_cc_c89" != xno; then : fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu depcc="$CC" am_compiler_list= { $as_echo "$as_me:${as_lineno-$LINENO}: checking dependency style of $depcc" >&5 $as_echo_n "checking dependency style of $depcc... " >&6; } if ${am_cv_CC_dependencies_compiler_type+:} false; then : $as_echo_n "(cached) " >&6 else if test -z "$AMDEP_TRUE" && test -f "$am_depcomp"; then # We make a subdir and do the tests there. Otherwise we can end up # making bogus files that we don't know about and never remove. For # instance it was reported that on HP-UX the gcc test will end up # making a dummy file named `D' -- because `-MD' means `put the output # in D'. 
mkdir conftest.dir # Copy depcomp to subdir because otherwise we won't find it if we're # using a relative directory. cp "$am_depcomp" conftest.dir cd conftest.dir # We will build objects and dependencies in a subdirectory because # it helps to detect inapplicable dependency modes. For instance # both Tru64's cc and ICC support -MD to output dependencies as a # side effect of compilation, but ICC will put the dependencies in # the current directory while Tru64 will put them in the object # directory. mkdir sub am_cv_CC_dependencies_compiler_type=none if test "$am_compiler_list" = ""; then am_compiler_list=`sed -n 's/^#*\([a-zA-Z0-9]*\))$/\1/p' < ./depcomp` fi am__universal=false case " $depcc " in #( *\ -arch\ *\ -arch\ *) am__universal=true ;; esac for depmode in $am_compiler_list; do # Setup a source with many dependencies, because some compilers # like to wrap large dependency lists on column 80 (with \), and # we should not choose a depcomp mode which is confused by this. # # We need to recreate these files for each test, as the compiler may # overwrite some of them when testing with obscure command lines. # This happens at least with the AIX C compiler. : > sub/conftest.c for i in 1 2 3 4 5 6; do echo '#include "conftst'$i'.h"' >> sub/conftest.c # Using `: > sub/conftst$i.h' creates only sub/conftst1.h with # Solaris 8's {/usr,}/bin/sh. touch sub/conftst$i.h done echo "${am__include} ${am__quote}sub/conftest.Po${am__quote}" > confmf # We check with `-c' and `-o' for the sake of the "dashmstdout" # mode. It turns out that the SunPro C++ compiler does not properly # handle `-M -o', and we need to detect this. Also, some Intel # versions had trouble with output in subdirs am__obj=sub/conftest.${OBJEXT-o} am__minus_obj="-o $am__obj" case $depmode in gcc) # This depmode causes a compiler race in universal mode. 
test "$am__universal" = false || continue ;; nosideeffect) # after this tag, mechanisms are not by side-effect, so they'll # only be used when explicitly requested if test "x$enable_dependency_tracking" = xyes; then continue else break fi ;; msvisualcpp | msvcmsys) # This compiler won't grok `-c -o', but also, the minuso test has # not run yet. These depmodes are late enough in the game, and # so weak that their functioning should not be impacted. am__obj=conftest.${OBJEXT-o} am__minus_obj= ;; none) break ;; esac if depmode=$depmode \ source=sub/conftest.c object=$am__obj \ depfile=sub/conftest.Po tmpdepfile=sub/conftest.TPo \ $SHELL ./depcomp $depcc -c $am__minus_obj sub/conftest.c \ >/dev/null 2>conftest.err && grep sub/conftst1.h sub/conftest.Po > /dev/null 2>&1 && grep sub/conftst6.h sub/conftest.Po > /dev/null 2>&1 && grep $am__obj sub/conftest.Po > /dev/null 2>&1 && ${MAKE-make} -s -f confmf > /dev/null 2>&1; then # icc doesn't choke on unknown options, it will just issue warnings # or remarks (even with -Werror). So we grep stderr for any message # that says an option was ignored or not supported. # When given -MP, icc 7.0 and 7.1 complain thusly: # icc: Command line warning: ignoring option '-M'; no argument required # The diagnosis changed in icc 8.0: # icc: Command line remark: option '-MP' not supported if (grep 'ignoring option' conftest.err || grep 'not supported' conftest.err) >/dev/null 2>&1; then :; else am_cv_CC_dependencies_compiler_type=$depmode break fi fi done cd .. 
rm -rf conftest.dir else am_cv_CC_dependencies_compiler_type=none fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $am_cv_CC_dependencies_compiler_type" >&5 $as_echo "$am_cv_CC_dependencies_compiler_type" >&6; } CCDEPMODE=depmode=$am_cv_CC_dependencies_compiler_type if test "x$enable_dependency_tracking" != xno \ && test "$am_cv_CC_dependencies_compiler_type" = gcc3; then am__fastdepCC_TRUE= am__fastdepCC_FALSE='#' else am__fastdepCC_TRUE='#' am__fastdepCC_FALSE= fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to run the C preprocessor" >&5 $as_echo_n "checking how to run the C preprocessor... " >&6; } # On Suns, sometimes $CPP names a directory. if test -n "$CPP" && test -d "$CPP"; then CPP= fi if test -z "$CPP"; then if ${ac_cv_prog_CPP+:} false; then : $as_echo_n "(cached) " >&6 else # Double quotes because CPP needs to be expanded for CPP in "$CC -E" "$CC -E -traditional-cpp" "/lib/cpp" do ac_preproc_ok=false for ac_c_preproc_warn_flag in '' yes do # Use a header file that comes with gcc, so configuring glibc # with a fresh cross-compiler works. # Prefer <limits.h> to <assert.h> if __STDC__ is defined, since # <limits.h> exists even on freestanding compilers. # On the NeXT, cc -E runs the code through the compiler's parser, # not just through cpp. "Syntax error" is here to catch this case. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #ifdef __STDC__ # include <limits.h> #else # include <assert.h> #endif Syntax error _ACEOF if ac_fn_c_try_cpp "$LINENO"; then : else # Broken: fails on valid input. continue fi rm -f conftest.err conftest.i conftest.$ac_ext # OK, works on sane cases. Now check whether nonexistent headers # can be detected and how. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ #include <ac_nonexistent.h> _ACEOF if ac_fn_c_try_cpp "$LINENO"; then : # Broken: success on invalid input. continue else # Passes both tests. ac_preproc_ok=: break fi rm -f conftest.err conftest.i conftest.$ac_ext done # Because of `break', _AC_PREPROC_IFELSE's cleaning code was skipped. rm -f conftest.i conftest.err conftest.$ac_ext if $ac_preproc_ok; then : break fi done ac_cv_prog_CPP=$CPP fi CPP=$ac_cv_prog_CPP else ac_cv_prog_CPP=$CPP fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CPP" >&5 $as_echo "$CPP" >&6; } ac_preproc_ok=false for ac_c_preproc_warn_flag in '' yes do # Use a header file that comes with gcc, so configuring glibc # with a fresh cross-compiler works. # Prefer <limits.h> to <assert.h> if __STDC__ is defined, since # <limits.h> exists even on freestanding compilers. # On the NeXT, cc -E runs the code through the compiler's parser, # not just through cpp. "Syntax error" is here to catch this case. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #ifdef __STDC__ # include <limits.h> #else # include <assert.h> #endif Syntax error _ACEOF if ac_fn_c_try_cpp "$LINENO"; then : else # Broken: fails on valid input. continue fi rm -f conftest.err conftest.i conftest.$ac_ext # OK, works on sane cases. Now check whether nonexistent headers # can be detected and how. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <ac_nonexistent.h> _ACEOF if ac_fn_c_try_cpp "$LINENO"; then : # Broken: success on invalid input. continue else # Passes both tests. ac_preproc_ok=: break fi rm -f conftest.err conftest.i conftest.$ac_ext done # Because of `break', _AC_PREPROC_IFELSE's cleaning code was skipped. rm -f conftest.i conftest.err conftest.$ac_ext if $ac_preproc_ok; then : else { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? 
"C preprocessor \"$CPP\" fails sanity check See \`config.log' for more details" "$LINENO" 5; } fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu if test "$GCC" = yes; then GCC_TRUE= GCC_FALSE='#' else GCC_TRUE='#' GCC_FALSE= fi # let the Makefile know if we're gcc # Check whether some low-level functions/files are available { $as_echo "$as_me:${as_lineno-$LINENO}: checking for grep that handles long lines and -e" >&5 $as_echo_n "checking for grep that handles long lines and -e... " >&6; } if ${ac_cv_path_GREP+:} false; then : $as_echo_n "(cached) " >&6 else if test -z "$GREP"; then ac_path_GREP_found=false # Loop through the user's path and test for each of PROGNAME-LIST as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH$PATH_SEPARATOR/usr/xpg4/bin do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_prog in grep ggrep; do for ac_exec_ext in '' $ac_executable_extensions; do ac_path_GREP="$as_dir/$ac_prog$ac_exec_ext" { test -f "$ac_path_GREP" && $as_test_x "$ac_path_GREP"; } || continue # Check for GNU ac_path_GREP and select it if it is found. 
# Check for GNU $ac_path_GREP case `"$ac_path_GREP" --version 2>&1` in *GNU*) ac_cv_path_GREP="$ac_path_GREP" ac_path_GREP_found=:;; *) ac_count=0 $as_echo_n 0123456789 >"conftest.in" while : do cat "conftest.in" "conftest.in" >"conftest.tmp" mv "conftest.tmp" "conftest.in" cp "conftest.in" "conftest.nl" $as_echo 'GREP' >> "conftest.nl" "$ac_path_GREP" -e 'GREP$' -e '-(cannot match)-' < "conftest.nl" >"conftest.out" 2>/dev/null || break diff "conftest.out" "conftest.nl" >/dev/null 2>&1 || break as_fn_arith $ac_count + 1 && ac_count=$as_val if test $ac_count -gt ${ac_path_GREP_max-0}; then # Best one so far, save it but keep looking for a better one ac_cv_path_GREP="$ac_path_GREP" ac_path_GREP_max=$ac_count fi # 10*(2^10) chars as input seems more than enough test $ac_count -gt 10 && break done rm -f conftest.in conftest.tmp conftest.nl conftest.out;; esac $ac_path_GREP_found && break 3 done done done IFS=$as_save_IFS if test -z "$ac_cv_path_GREP"; then as_fn_error $? "no acceptable grep could be found in $PATH$PATH_SEPARATOR/usr/xpg4/bin" "$LINENO" 5 fi else ac_cv_path_GREP=$GREP fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_path_GREP" >&5 $as_echo "$ac_cv_path_GREP" >&6; } GREP="$ac_cv_path_GREP" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for egrep" >&5 $as_echo_n "checking for egrep... " >&6; } if ${ac_cv_path_EGREP+:} false; then : $as_echo_n "(cached) " >&6 else if echo a | $GREP -E '(a|b)' >/dev/null 2>&1 then ac_cv_path_EGREP="$GREP -E" else if test -z "$EGREP"; then ac_path_EGREP_found=false # Loop through the user's path and test for each of PROGNAME-LIST as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH$PATH_SEPARATOR/usr/xpg4/bin do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_prog in egrep; do for ac_exec_ext in '' $ac_executable_extensions; do ac_path_EGREP="$as_dir/$ac_prog$ac_exec_ext" { test -f "$ac_path_EGREP" && $as_test_x "$ac_path_EGREP"; } || continue # Check for GNU ac_path_EGREP and select it if it is found. # Check for GNU $ac_path_EGREP case `"$ac_path_EGREP" --version 2>&1` in *GNU*) ac_cv_path_EGREP="$ac_path_EGREP" ac_path_EGREP_found=:;; *) ac_count=0 $as_echo_n 0123456789 >"conftest.in" while : do cat "conftest.in" "conftest.in" >"conftest.tmp" mv "conftest.tmp" "conftest.in" cp "conftest.in" "conftest.nl" $as_echo 'EGREP' >> "conftest.nl" "$ac_path_EGREP" 'EGREP$' < "conftest.nl" >"conftest.out" 2>/dev/null || break diff "conftest.out" "conftest.nl" >/dev/null 2>&1 || break as_fn_arith $ac_count + 1 && ac_count=$as_val if test $ac_count -gt ${ac_path_EGREP_max-0}; then # Best one so far, save it but keep looking for a better one ac_cv_path_EGREP="$ac_path_EGREP" ac_path_EGREP_max=$ac_count fi # 10*(2^10) chars as input seems more than enough test $ac_count -gt 10 && break done rm -f conftest.in conftest.tmp conftest.nl conftest.out;; esac $ac_path_EGREP_found && break 3 done done done IFS=$as_save_IFS if test -z "$ac_cv_path_EGREP"; then as_fn_error $? "no acceptable egrep could be found in $PATH$PATH_SEPARATOR/usr/xpg4/bin" "$LINENO" 5 fi else ac_cv_path_EGREP=$EGREP fi fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_path_EGREP" >&5 $as_echo "$ac_cv_path_EGREP" >&6; } EGREP="$ac_cv_path_EGREP" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for ANSI C header files" >&5 $as_echo_n "checking for ANSI C header files... " >&6; } if ${ac_cv_header_stdc+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ #include <stdlib.h> #include <stdarg.h> #include <string.h> #include <float.h> int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_cv_header_stdc=yes else ac_cv_header_stdc=no fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext if test $ac_cv_header_stdc = yes; then # SunOS 4.x string.h does not declare mem*, contrary to ANSI. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <string.h> _ACEOF if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | $EGREP "memchr" >/dev/null 2>&1; then : else ac_cv_header_stdc=no fi rm -f conftest* fi if test $ac_cv_header_stdc = yes; then # ISC 2.0.2 stdlib.h does not declare free, contrary to ANSI. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <stdlib.h> _ACEOF if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | $EGREP "free" >/dev/null 2>&1; then : else ac_cv_header_stdc=no fi rm -f conftest* fi if test $ac_cv_header_stdc = yes; then # /bin/cc in Irix-4.0.5 gets non-ANSI ctype macros unless using -ansi. if test "$cross_compiling" = yes; then : : else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <ctype.h> #include <stdlib.h> #if ((' ' & 0x0FF) == 0x020) # define ISLOWER(c) ('a' <= (c) && (c) <= 'z') # define TOUPPER(c) (ISLOWER(c) ? 'A' + ((c) - 'a') : (c)) #else # define ISLOWER(c) \ (('a' <= (c) && (c) <= 'i') \ || ('j' <= (c) && (c) <= 'r') \ || ('s' <= (c) && (c) <= 'z')) # define TOUPPER(c) (ISLOWER(c) ? 
((c) | 0x40) : (c)) #endif #define XOR(e, f) (((e) && !(f)) || (!(e) && (f))) int main () { int i; for (i = 0; i < 256; i++) if (XOR (islower (i), ISLOWER (i)) || toupper (i) != TOUPPER (i)) return 2; return 0; } _ACEOF if ac_fn_c_try_run "$LINENO"; then : else ac_cv_header_stdc=no fi rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \ conftest.$ac_objext conftest.beam conftest.$ac_ext fi fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_header_stdc" >&5 $as_echo "$ac_cv_header_stdc" >&6; } if test $ac_cv_header_stdc = yes; then $as_echo "#define STDC_HEADERS 1" >>confdefs.h fi for ac_func in memcpy memmove do : as_ac_var=`$as_echo "ac_cv_func_$ac_func" | $as_tr_sh` ac_fn_c_check_func "$LINENO" "$ac_func" "$as_ac_var" if eval test \"x\$"$as_ac_var"\" = x"yes"; then : cat >>confdefs.h <<_ACEOF #define `$as_echo "HAVE_$ac_func" | $as_tr_cpp` 1 _ACEOF fi done # On IRIX 5.3, sys/types and inttypes.h are conflicting. for ac_header in sys/types.h sys/stat.h stdlib.h string.h memory.h strings.h \ inttypes.h stdint.h unistd.h do : as_ac_Header=`$as_echo "ac_cv_header_$ac_header" | $as_tr_sh` ac_fn_c_check_header_compile "$LINENO" "$ac_header" "$as_ac_Header" "$ac_includes_default " if eval test \"x\$"$as_ac_Header"\" = x"yes"; then : cat >>confdefs.h <<_ACEOF #define `$as_echo "HAVE_$ac_header" | $as_tr_cpp` 1 _ACEOF fi done ac_fn_c_check_type "$LINENO" "uint16_t" "ac_cv_type_uint16_t" "$ac_includes_default" if test "x$ac_cv_type_uint16_t" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_UINT16_T 1 _ACEOF fi # defined in C99 systems ac_fn_c_check_type "$LINENO" "u_int16_t" "ac_cv_type_u_int16_t" "$ac_includes_default" if test "x$ac_cv_type_u_int16_t" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_U_INT16_T 1 _ACEOF fi # defined in BSD-derived systems, and gnu ac_fn_c_check_type "$LINENO" "__uint16" "ac_cv_type___uint16" "$ac_includes_default" if test "x$ac_cv_type___uint16" = xyes; then : cat >>confdefs.h <<_ACEOF #define 
HAVE___UINT16 1 _ACEOF fi # defined in some windows systems (vc7) ac_fn_c_check_type "$LINENO" "long long" "ac_cv_type_long_long" "$ac_includes_default" if test "x$ac_cv_type_long_long" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_LONG_LONG 1 _ACEOF fi # probably defined everywhere, but... # These are 'only' needed for unittests for ac_header in sys/resource.h unistd.h sys/time.h sys/utsname.h do : as_ac_Header=`$as_echo "ac_cv_header_$ac_header" | $as_tr_sh` ac_fn_c_check_header_mongrel "$LINENO" "$ac_header" "$as_ac_Header" "$ac_includes_default" if eval test \"x\$"$as_ac_Header"\" = x"yes"; then : cat >>confdefs.h <<_ACEOF #define `$as_echo "HAVE_$ac_header" | $as_tr_cpp` 1 _ACEOF fi done # If you have google-perftools installed, we can do a bit more testing. # We not only want to set HAVE_MALLOC_EXTENSION_H, we also want to set # a variable to let the Makefile to know to link in tcmalloc. ac_ext=cpp ac_cpp='$CXXCPP $CPPFLAGS' ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_cxx_compiler_gnu ac_ext=cpp ac_cpp='$CXXCPP $CPPFLAGS' ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_cxx_compiler_gnu { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to run the C++ preprocessor" >&5 $as_echo_n "checking how to run the C++ preprocessor... " >&6; } if test -z "$CXXCPP"; then if ${ac_cv_prog_CXXCPP+:} false; then : $as_echo_n "(cached) " >&6 else # Double quotes because CXXCPP needs to be expanded for CXXCPP in "$CXX -E" "/lib/cpp" do ac_preproc_ok=false for ac_cxx_preproc_warn_flag in '' yes do # Use a header file that comes with gcc, so configuring glibc # with a fresh cross-compiler works. # Prefer <limits.h> to <assert.h> if __STDC__ is defined, since # <limits.h> exists even on freestanding compilers. 
# On the NeXT, cc -E runs the code through the compiler's parser, # not just through cpp. "Syntax error" is here to catch this case. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #ifdef __STDC__ # include <limits.h> #else # include <assert.h> #endif Syntax error _ACEOF if ac_fn_cxx_try_cpp "$LINENO"; then : else # Broken: fails on valid input. continue fi rm -f conftest.err conftest.i conftest.$ac_ext # OK, works on sane cases. Now check whether nonexistent headers # can be detected and how. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <ac_nonexistent.h> _ACEOF if ac_fn_cxx_try_cpp "$LINENO"; then : # Broken: success on invalid input. continue else # Passes both tests. ac_preproc_ok=: break fi rm -f conftest.err conftest.i conftest.$ac_ext done # Because of `break', _AC_PREPROC_IFELSE's cleaning code was skipped. rm -f conftest.i conftest.err conftest.$ac_ext if $ac_preproc_ok; then : break fi done ac_cv_prog_CXXCPP=$CXXCPP fi CXXCPP=$ac_cv_prog_CXXCPP else ac_cv_prog_CXXCPP=$CXXCPP fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CXXCPP" >&5 $as_echo "$CXXCPP" >&6; } ac_preproc_ok=false for ac_cxx_preproc_warn_flag in '' yes do # Use a header file that comes with gcc, so configuring glibc # with a fresh cross-compiler works. # Prefer <limits.h> to <assert.h> if __STDC__ is defined, since # <limits.h> exists even on freestanding compilers. # On the NeXT, cc -E runs the code through the compiler's parser, # not just through cpp. "Syntax error" is here to catch this case. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #ifdef __STDC__ # include <limits.h> #else # include <assert.h> #endif Syntax error _ACEOF if ac_fn_cxx_try_cpp "$LINENO"; then : else # Broken: fails on valid input. continue fi rm -f conftest.err conftest.i conftest.$ac_ext # OK, works on sane cases. Now check whether nonexistent headers # can be detected and how. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <ac_nonexistent.h> _ACEOF if ac_fn_cxx_try_cpp "$LINENO"; then : # Broken: success on invalid input. 
continue else # Passes both tests. ac_preproc_ok=: break fi rm -f conftest.err conftest.i conftest.$ac_ext done # Because of `break', _AC_PREPROC_IFELSE's cleaning code was skipped. rm -f conftest.i conftest.err conftest.$ac_ext if $ac_preproc_ok; then : else { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "C++ preprocessor \"$CXXCPP\" fails sanity check See \`config.log' for more details" "$LINENO" 5; } fi ac_ext=cpp ac_cpp='$CXXCPP $CPPFLAGS' ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_cxx_compiler_gnu for ac_header in google/malloc_extension.h do : ac_fn_cxx_check_header_mongrel "$LINENO" "google/malloc_extension.h" "ac_cv_header_google_malloc_extension_h" "$ac_includes_default" if test "x$ac_cv_header_google_malloc_extension_h" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_GOOGLE_MALLOC_EXTENSION_H 1 _ACEOF tcmalloc_libs=-ltcmalloc else tcmalloc_libs= fi done # On some systems, when linking in tcmalloc you also need to link in # pthread. That's a bug somewhere, but we'll work around it for now. tcmalloc_flags="" if test -n "$tcmalloc_libs"; then # Make sure we can run config.sub. $SHELL "$ac_aux_dir/config.sub" sun4 >/dev/null 2>&1 || as_fn_error $? "cannot run $SHELL $ac_aux_dir/config.sub" "$LINENO" 5 { $as_echo "$as_me:${as_lineno-$LINENO}: checking build system type" >&5 $as_echo_n "checking build system type... " >&6; } if ${ac_cv_build+:} false; then : $as_echo_n "(cached) " >&6 else ac_build_alias=$build_alias test "x$ac_build_alias" = x && ac_build_alias=`$SHELL "$ac_aux_dir/config.guess"` test "x$ac_build_alias" = x && as_fn_error $? "cannot guess build type; you must specify one" "$LINENO" 5 ac_cv_build=`$SHELL "$ac_aux_dir/config.sub" $ac_build_alias` || as_fn_error $? 
"$SHELL $ac_aux_dir/config.sub $ac_build_alias failed" "$LINENO" 5 fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_build" >&5 $as_echo "$ac_cv_build" >&6; } case $ac_cv_build in *-*-*) ;; *) as_fn_error $? "invalid value of canonical build" "$LINENO" 5;; esac build=$ac_cv_build ac_save_IFS=$IFS; IFS='-' set x $ac_cv_build shift build_cpu=$1 build_vendor=$2 shift; shift # Remember, the first character of IFS is used to create $*, # except with old shells: build_os=$* IFS=$ac_save_IFS case $build_os in *\ *) build_os=`echo "$build_os" | sed 's/ /-/g'`;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: checking host system type" >&5 $as_echo_n "checking host system type... " >&6; } if ${ac_cv_host+:} false; then : $as_echo_n "(cached) " >&6 else if test "x$host_alias" = x; then ac_cv_host=$ac_cv_build else ac_cv_host=`$SHELL "$ac_aux_dir/config.sub" $host_alias` || as_fn_error $? "$SHELL $ac_aux_dir/config.sub $host_alias failed" "$LINENO" 5 fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_host" >&5 $as_echo "$ac_cv_host" >&6; } case $ac_cv_host in *-*-*) ;; *) as_fn_error $? "invalid value of canonical host" "$LINENO" 5;; esac host=$ac_cv_host ac_save_IFS=$IFS; IFS='-' set x $ac_cv_host shift host_cpu=$1 host_vendor=$2 shift; shift # Remember, the first character of IFS is used to create $*, # except with old shells: host_os=$* IFS=$ac_save_IFS case $host_os in *\ *) host_os=`echo "$host_os" | sed 's/ /-/g'`;; esac ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu acx_pthread_ok=no # We used to check for pthread.h first, but this fails if pthread.h # requires special compiler flags (e.g. on True64 or Sequent). # It gets checked for in the link test anyway. 
# First of all, check if the user has set any of the PTHREAD_LIBS, # etcetera environment variables, and if threads linking works using # them: if test x"$PTHREAD_LIBS$PTHREAD_CFLAGS" != x; then save_CFLAGS="$CFLAGS" CFLAGS="$CFLAGS $PTHREAD_CFLAGS" save_LIBS="$LIBS" LIBS="$PTHREAD_LIBS $LIBS" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for pthread_join in LIBS=$PTHREAD_LIBS with CFLAGS=$PTHREAD_CFLAGS" >&5 $as_echo_n "checking for pthread_join in LIBS=$PTHREAD_LIBS with CFLAGS=$PTHREAD_CFLAGS... " >&6; } cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char pthread_join (); int main () { return pthread_join (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : acx_pthread_ok=yes fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext { $as_echo "$as_me:${as_lineno-$LINENO}: result: $acx_pthread_ok" >&5 $as_echo "$acx_pthread_ok" >&6; } if test x"$acx_pthread_ok" = xno; then PTHREAD_LIBS="" PTHREAD_CFLAGS="" fi LIBS="$save_LIBS" CFLAGS="$save_CFLAGS" fi # We must check for the threads library under a number of different # names; the ordering is very important because some systems # (e.g. DEC) have both -lpthread and -lpthreads, where one of the # libraries is broken (non-POSIX). # Create a list of thread flags to try. Items starting with a "-" are # C compiler flags, and other items are library names, except for "none" # which indicates that we try without any flags at all, and "pthread-config" # which is a program returning the flags for the Pth emulation library. acx_pthread_flags="pthreads none -Kthread -kthread lthread -pthread -pthreads -mthreads pthread --thread-safe -mt pthread-config" # The ordering *is* (sometimes) important. 
Some notes on the # individual items follow: # pthreads: AIX (must check this before -lpthread) # none: in case threads are in libc; should be tried before -Kthread and # other compiler flags to prevent continual compiler warnings # -Kthread: Sequent (threads in libc, but -Kthread needed for pthread.h) # -kthread: FreeBSD kernel threads (preferred to -pthread since SMP-able) # lthread: LinuxThreads port on FreeBSD (also preferred to -pthread) # -pthread: Linux/gcc (kernel threads), BSD/gcc (userland threads) # -pthreads: Solaris/gcc # -mthreads: Mingw32/gcc, Lynx/gcc # -mt: Sun Workshop C (may only link SunOS threads [-lthread], but it # doesn't hurt to check since this sometimes defines pthreads too; # also defines -D_REENTRANT) # ... -mt is also the pthreads flag for HP/aCC # pthread: Linux, etcetera # --thread-safe: KAI C++ # pthread-config: use pthread-config program (for GNU Pth library) case "${host_cpu}-${host_os}" in *solaris*) # On Solaris (at least, for some versions), libc contains stubbed # (non-functional) versions of the pthreads routines, so link-based # tests will erroneously succeed. (We need to link with -pthreads/-mt/ # -lpthread.) (The stubs are missing pthread_cleanup_push, or rather # a function called by this macro, so we could check for that, but # who knows whether they'll stub that too in a future libc.) So, # we'll just look for -pthreads and -lpthread first: acx_pthread_flags="-pthreads pthread -mt -pthread $acx_pthread_flags" ;; esac if test x"$acx_pthread_ok" = xno; then for flag in $acx_pthread_flags; do case $flag in none) { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether pthreads work without any flags" >&5 $as_echo_n "checking whether pthreads work without any flags... " >&6; } ;; -*) { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether pthreads work with $flag" >&5 $as_echo_n "checking whether pthreads work with $flag... 
" >&6; } PTHREAD_CFLAGS="$flag" ;; pthread-config) # Extract the first word of "pthread-config", so it can be a program name with args. set dummy pthread-config; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_acx_pthread_config+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$acx_pthread_config"; then ac_cv_prog_acx_pthread_config="$acx_pthread_config" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_acx_pthread_config="yes" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS test -z "$ac_cv_prog_acx_pthread_config" && ac_cv_prog_acx_pthread_config="no" fi fi acx_pthread_config=$ac_cv_prog_acx_pthread_config if test -n "$acx_pthread_config"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $acx_pthread_config" >&5 $as_echo "$acx_pthread_config" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test x"$acx_pthread_config" = xno; then continue; fi PTHREAD_CFLAGS="`pthread-config --cflags`" PTHREAD_LIBS="`pthread-config --ldflags` `pthread-config --libs`" ;; *) { $as_echo "$as_me:${as_lineno-$LINENO}: checking for the pthreads library -l$flag" >&5 $as_echo_n "checking for the pthreads library -l$flag... " >&6; } PTHREAD_LIBS="-l$flag" ;; esac save_LIBS="$LIBS" save_CFLAGS="$CFLAGS" LIBS="$PTHREAD_LIBS $LIBS" CFLAGS="$CFLAGS $PTHREAD_CFLAGS" # Check for various functions. We must include pthread.h, # since some functions may be macros. (On the Sequent, we # need a special flag -Kthread to make this header compile.) 
# We check for pthread_join because it is in -lpthread on IRIX # while pthread_create is in libc. We check for pthread_attr_init # due to DEC craziness with -lpthreads. We check for # pthread_cleanup_push because it is one of the few pthread # functions on Solaris that doesn't have a non-functional libc stub. # We try pthread_create on general principles. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <pthread.h> int main () { pthread_t th; pthread_join(th, 0); pthread_attr_init(0); pthread_cleanup_push(0, 0); pthread_create(0,0,0,0); pthread_cleanup_pop(0); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : acx_pthread_ok=yes fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS="$save_LIBS" CFLAGS="$save_CFLAGS" { $as_echo "$as_me:${as_lineno-$LINENO}: result: $acx_pthread_ok" >&5 $as_echo "$acx_pthread_ok" >&6; } if test "x$acx_pthread_ok" = xyes; then break; fi PTHREAD_LIBS="" PTHREAD_CFLAGS="" done fi # Various other checks: if test "x$acx_pthread_ok" = xyes; then save_LIBS="$LIBS" LIBS="$PTHREAD_LIBS $LIBS" save_CFLAGS="$CFLAGS" CFLAGS="$CFLAGS $PTHREAD_CFLAGS" # Detect AIX lossage: JOINABLE attribute is called UNDETACHED. { $as_echo "$as_me:${as_lineno-$LINENO}: checking for joinable pthread attribute" >&5 $as_echo_n "checking for joinable pthread attribute... " >&6; } attr_name=unknown for attr in PTHREAD_CREATE_JOINABLE PTHREAD_CREATE_UNDETACHED; do cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ #include <pthread.h> int main () { int attr=$attr; return attr; ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : attr_name=$attr; break fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext done { $as_echo "$as_me:${as_lineno-$LINENO}: result: $attr_name" >&5 $as_echo "$attr_name" >&6; } if test "$attr_name" != PTHREAD_CREATE_JOINABLE; then cat >>confdefs.h <<_ACEOF #define PTHREAD_CREATE_JOINABLE $attr_name _ACEOF fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking if more special flags are required for pthreads" >&5 $as_echo_n "checking if more special flags are required for pthreads... " >&6; } flag=no case "${host_cpu}-${host_os}" in *-aix* | *-freebsd* | *-darwin*) flag="-D_THREAD_SAFE";; *solaris* | *-osf* | *-hpux*) flag="-D_REENTRANT";; esac { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${flag}" >&5 $as_echo "${flag}" >&6; } if test "x$flag" != xno; then PTHREAD_CFLAGS="$flag $PTHREAD_CFLAGS" fi LIBS="$save_LIBS" CFLAGS="$save_CFLAGS" # More AIX lossage: must compile with xlc_r or cc_r if test x"$GCC" != xyes; then for ac_prog in xlc_r cc_r do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_PTHREAD_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$PTHREAD_CC"; then ac_cv_prog_PTHREAD_CC="$PTHREAD_CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_PTHREAD_CC="$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi PTHREAD_CC=$ac_cv_prog_PTHREAD_CC if test -n "$PTHREAD_CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PTHREAD_CC" >&5 $as_echo "$PTHREAD_CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$PTHREAD_CC" && break done test -n "$PTHREAD_CC" || PTHREAD_CC="${CC}" else PTHREAD_CC=$CC fi # The next part tries to detect GCC inconsistency with -shared on some # architectures and systems. The problem is that in certain # configurations, when -shared is specified, GCC "forgets" to # internally use various flags which are still necessary. # # Prepare the flags # save_CFLAGS="$CFLAGS" save_LIBS="$LIBS" save_CC="$CC" # Try with the flags determined by the earlier checks. # # -Wl,-z,defs forces link-time symbol resolution, so that the # linking checks with -shared actually have any value # # FIXME: -fPIC is required for -shared on many architectures, # so we specify it here, but the right way would probably be to # properly detect whether it is actually required. CFLAGS="-shared -fPIC -Wl,-z,defs $CFLAGS $PTHREAD_CFLAGS" LIBS="$PTHREAD_LIBS $LIBS" CC="$PTHREAD_CC" # In order not to create several levels of indentation, we test # the value of "$done" until we find the cure or run out of ideas. done="no" # First, make sure the CFLAGS we added are actually accepted by our # compiler. If not (and OS X's ld, for instance, does not accept -z), # then we can't do this test. if test x"$done" = xno; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to check for GCC pthread/shared inconsistencies" >&5 $as_echo_n "checking whether to check for GCC pthread/shared inconsistencies... 
" >&6; } cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : else done=yes fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext if test "x$done" = xyes ; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } fi fi if test x"$done" = xno; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether -pthread is sufficient with -shared" >&5 $as_echo_n "checking whether -pthread is sufficient with -shared... " >&6; } cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <pthread.h> int main () { pthread_t th; pthread_join(th, 0); pthread_attr_init(0); pthread_cleanup_push(0, 0); pthread_create(0,0,0,0); pthread_cleanup_pop(0); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : done=yes fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext if test "x$done" = xyes; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi # # Linux gcc on some architectures such as mips/mipsel forgets # about -lpthread # if test x"$done" = xno; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether -lpthread fixes that" >&5 $as_echo_n "checking whether -lpthread fixes that... " >&6; } LIBS="-lpthread $PTHREAD_LIBS $save_LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ #include <pthread.h> int main () { pthread_t th; pthread_join(th, 0); pthread_attr_init(0); pthread_cleanup_push(0, 0); pthread_create(0,0,0,0); pthread_cleanup_pop(0); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : done=yes fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext if test "x$done" = xyes; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } PTHREAD_LIBS="-lpthread $PTHREAD_LIBS" else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi # # FreeBSD 4.10 gcc forgets to use -lc_r instead of -lc # if test x"$done" = xno; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether -lc_r fixes that" >&5 $as_echo_n "checking whether -lc_r fixes that... " >&6; } LIBS="-lc_r $PTHREAD_LIBS $save_LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <pthread.h> int main () { pthread_t th; pthread_join(th, 0); pthread_attr_init(0); pthread_cleanup_push(0, 0); pthread_create(0,0,0,0); pthread_cleanup_pop(0); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : done=yes fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext if test "x$done" = xyes; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } PTHREAD_LIBS="-lc_r $PTHREAD_LIBS" else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test x"$done" = xno; then # OK, we have run out of ideas { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Impossible to determine how to use pthreads with shared libraries" >&5 $as_echo "$as_me: WARNING: Impossible to determine how to use pthreads with shared libraries" >&2;} # so it's not safe to assume that we may use pthreads acx_pthread_ok=no fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether what we have so far is sufficient with -nostdlib" >&5 $as_echo_n "checking whether what we have so far is sufficient with -nostdlib... 
" >&6; } CFLAGS="-nostdlib $CFLAGS" # we need c with nostdlib LIBS="$LIBS -lc" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <pthread.h> int main () { pthread_t th; pthread_join(th, 0); pthread_attr_init(0); pthread_cleanup_push(0, 0); pthread_create(0,0,0,0); pthread_cleanup_pop(0); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : done=yes else done=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext if test "x$done" = xyes; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test x"$done" = xno; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether -lpthread saves the day" >&5 $as_echo_n "checking whether -lpthread saves the day... " >&6; } LIBS="-lpthread $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <pthread.h> int main () { pthread_t th; pthread_join(th, 0); pthread_attr_init(0); pthread_cleanup_push(0, 0); pthread_create(0,0,0,0); pthread_cleanup_pop(0); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : done=yes else done=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext if test "x$done" = xyes; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } PTHREAD_LIBS="$PTHREAD_LIBS -lpthread" else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Impossible to determine how to use pthreads with shared libraries and -nostdlib" >&5 $as_echo "$as_me: WARNING: Impossible to determine how to use pthreads with shared libraries and -nostdlib" >&2;} fi fi CFLAGS="$save_CFLAGS" LIBS="$save_LIBS" CC="$save_CC" else PTHREAD_CC="$CC" fi # Finally, execute ACTION-IF-FOUND/ACTION-IF-NOT-FOUND: if test x"$acx_pthread_ok" = xyes; then $as_echo "#define HAVE_PTHREAD 1" >>confdefs.h : else acx_pthread_ok=no fi 
ac_ext=cpp ac_cpp='$CXXCPP $CPPFLAGS' ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_cxx_compiler_gnu tcmalloc_flags="\$(PTHREAD_CFLAGS)" tcmalloc_libs="$tcmalloc_libs \$(PTHREAD_LIBS)" fi # Figure out where hash_map lives and also hash_fun.h (or stl_hash_fun.h). # This also tells us what namespace hash code lives in. { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the compiler implements namespaces" >&5 $as_echo_n "checking whether the compiler implements namespaces... " >&6; } if ${ac_cv_cxx_namespaces+:} false; then : $as_echo_n "(cached) " >&6 else ac_ext=cpp ac_cpp='$CXXCPP $CPPFLAGS' ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_cxx_compiler_gnu cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ namespace Outer { namespace Inner { int i = 0; }} int main () { using namespace Outer::Inner; return i; ; return 0; } _ACEOF if ac_fn_cxx_try_compile "$LINENO"; then : ac_cv_cxx_namespaces=yes else ac_cv_cxx_namespaces=no fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext ac_ext=cpp ac_cpp='$CXXCPP $CPPFLAGS' ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_cxx_compiler_gnu fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_cxx_namespaces" >&5 $as_echo "$ac_cv_cxx_namespaces" >&6; } if test "$ac_cv_cxx_namespaces" = yes; then $as_echo "#define HAVE_NAMESPACES 1" >>confdefs.h fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking the location of hash_map" >&5 $as_echo_n "checking the location of hash_map... 
" >&6; } ac_ext=cpp ac_cpp='$CXXCPP $CPPFLAGS' ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_cxx_compiler_gnu ac_cv_cxx_hash_map="" # First try unordered_map, but not on gcc's before 4.2 -- I've # seen unexplainable unordered_map bugs with -O2 on older gcc's. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #if defined(__GNUC__) && (__GNUC__ < 4 || (__GNUC__ == 4 && __GNUC_MINOR__ < 2)) # error GCC too old for unordered_map #endif int main () { /* no program body necessary */ ; return 0; } _ACEOF if ac_fn_cxx_try_compile "$LINENO"; then : stl_hash_old_gcc=no else stl_hash_old_gcc=yes fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext for location in unordered_map tr1/unordered_map; do for namespace in std std::tr1; do if test -z "$ac_cv_cxx_hash_map" -a "$stl_hash_old_gcc" != yes; then # Some older gcc's have a buggy tr1, so test a bit of code. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <$location> int main () { const ${namespace}::unordered_map<int, int> t; return t.find(5) == t.end(); ; return 0; } _ACEOF if ac_fn_cxx_try_compile "$LINENO"; then : ac_cv_cxx_hash_map="<$location>"; ac_cv_cxx_hash_namespace="$namespace"; ac_cv_cxx_have_unordered_map="yes"; fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi done done # Now try hash_map for location in ext/hash_map hash_map; do for namespace in __gnu_cxx "" std stdext; do if test -z "$ac_cv_cxx_hash_map"; then cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ #include <$location> int main () { ${namespace}::hash_map<int, int> t ; return 0; } _ACEOF if ac_fn_cxx_try_compile "$LINENO"; then : ac_cv_cxx_hash_map="<$location>"; ac_cv_cxx_hash_namespace="$namespace"; ac_cv_cxx_have_unordered_map="no"; fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi done done ac_cv_cxx_hash_set=`echo "$ac_cv_cxx_hash_map" | sed s/map/set/`; if test -n "$ac_cv_cxx_hash_map"; then $as_echo "#define HAVE_HASH_MAP 1" >>confdefs.h $as_echo "#define HAVE_HASH_SET 1" >>confdefs.h cat >>confdefs.h <<_ACEOF #define HASH_MAP_H $ac_cv_cxx_hash_map _ACEOF cat >>confdefs.h <<_ACEOF #define HASH_SET_H $ac_cv_cxx_hash_set _ACEOF cat >>confdefs.h <<_ACEOF #define HASH_NAMESPACE $ac_cv_cxx_hash_namespace _ACEOF if test "$ac_cv_cxx_have_unordered_map" = yes; then $as_echo "#define HAVE_UNORDERED_MAP 1" >>confdefs.h fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_cxx_hash_map" >&5 $as_echo "$ac_cv_cxx_hash_map" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: " >&5 $as_echo "" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: could not find an STL hash_map" >&5 $as_echo "$as_me: WARNING: could not find an STL hash_map" >&2;} fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to include hash_fun directly" >&5 $as_echo_n "checking how to include hash_fun directly... " >&6; } ac_ext=cpp ac_cpp='$CXXCPP $CPPFLAGS' ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_cxx_compiler_gnu ac_cv_cxx_stl_hash_fun="" for location in functional tr1/functional \ ext/hash_fun.h ext/stl_hash_fun.h \ hash_fun.h stl_hash_fun.h \ stl/_hash_fun.h; do if test -z "$ac_cv_cxx_stl_hash_fun"; then cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ #include <$location> int main () { int x = ${ac_cv_cxx_hash_namespace}::hash<int>()(5) ; return 0; } _ACEOF if ac_fn_cxx_try_compile "$LINENO"; then : ac_cv_cxx_stl_hash_fun="<$location>"; fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi done ac_ext=cpp ac_cpp='$CXXCPP $CPPFLAGS' ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_cxx_compiler_gnu cat >>confdefs.h <<_ACEOF #define HASH_FUN_H $ac_cv_cxx_stl_hash_fun _ACEOF cat >>confdefs.h <<_ACEOF #define HASH_NAMESPACE $ac_cv_cxx_hash_namespace _ACEOF { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_cxx_stl_hash_fun" >&5 $as_echo "$ac_cv_cxx_stl_hash_fun" >&6; } # Find out what namespace the user wants our classes to be defined in. # TODO(csilvers): change this to default to sparsehash instead. google_namespace_default=google # Check whether --enable-namespace was given. if test "${enable_namespace+set}" = set; then : enableval=$enable_namespace; case "$enableval" in yes) google_namespace="$google_namespace_default" ;; no) google_namespace="" ;; *) google_namespace="$enableval" ;; esac else google_namespace="$google_namespace_default" fi if test -n "$google_namespace"; then ac_google_namespace="::$google_namespace" ac_google_start_namespace="namespace $google_namespace {" ac_google_end_namespace="}" else ac_google_namespace="" ac_google_start_namespace="" ac_google_end_namespace="" fi cat >>confdefs.h <<_ACEOF #define GOOGLE_NAMESPACE $ac_google_namespace _ACEOF cat >>confdefs.h <<_ACEOF #define _START_GOOGLE_NAMESPACE_ $ac_google_start_namespace _ACEOF cat >>confdefs.h <<_ACEOF #define _END_GOOGLE_NAMESPACE_ $ac_google_end_namespace _ACEOF # In unix-based systems, hash is always defined as hash<> (in namespace # HASH_NAMESPACE.) So we can use a simple AC_DEFINE here. 
On # windows, and possibly on future unix STL implementations, this # macro will evaluate to something different.) $as_echo "#define SPARSEHASH_HASH_NO_NAMESPACE hash" >>confdefs.h # Do *not* define this in terms of SPARSEHASH_HASH_NO_NAMESPACE, because # SPARSEHASH_HASH is exported to sparseconfig.h, but S_H_NO_NAMESPACE isn't. $as_echo "#define SPARSEHASH_HASH HASH_NAMESPACE::hash" >>confdefs.h # Write generated configuration file ac_config_files="$ac_config_files Makefile" cat >confcache <<\_ACEOF # This file is a shell script that caches the results of configure # tests run on this system so they can be shared between configure # scripts and configure runs, see configure's option --config-cache. # It is not useful on other systems. If it contains results you don't # want to keep, you may remove or edit it. # # config.status only pays attention to the cache file if you give it # the --recheck option to rerun configure. # # `ac_cv_env_foo' variables (set or unset) will be overridden when # loading this file, other *unset* `ac_cv_foo' will be assigned the # following values. _ACEOF # The following way of writing the cache mishandles newlines in values, # but we know of no workaround that is simple, portable, and efficient. # So, we kill variables containing newlines. # Ultrix sh set writes to stderr and can't be redirected directly, # and sets the high bit in the cache file unless we assign to the vars. 
( for ac_var in `(set) 2>&1 | sed -n 's/^\([a-zA-Z_][a-zA-Z0-9_]*\)=.*/\1/p'`; do eval ac_val=\$$ac_var case $ac_val in #( *${as_nl}*) case $ac_var in #( *_cv_*) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: cache variable $ac_var contains a newline" >&5 $as_echo "$as_me: WARNING: cache variable $ac_var contains a newline" >&2;} ;; esac case $ac_var in #( _ | IFS | as_nl) ;; #( BASH_ARGV | BASH_SOURCE) eval $ac_var= ;; #( *) { eval $ac_var=; unset $ac_var;} ;; esac ;; esac done (set) 2>&1 | case $as_nl`(ac_space=' '; set) 2>&1` in #( *${as_nl}ac_space=\ *) # `set' does not quote correctly, so add quotes: double-quote # substitution turns \\\\ into \\, and sed turns \\ into \. sed -n \ "s/'/'\\\\''/g; s/^\\([_$as_cr_alnum]*_cv_[_$as_cr_alnum]*\\)=\\(.*\\)/\\1='\\2'/p" ;; #( *) # `set' quotes correctly as required by POSIX, so do not add quotes. sed -n "/^[_$as_cr_alnum]*_cv_[_$as_cr_alnum]*=/p" ;; esac | sort ) | sed ' /^ac_cv_env_/b end t clear :clear s/^\([^=]*\)=\(.*[{}].*\)$/test "${\1+set}" = set || &/ t end s/^\([^=]*\)=\(.*\)$/\1=${\1=\2}/ :end' >>confcache if diff "$cache_file" confcache >/dev/null 2>&1; then :; else if test -w "$cache_file"; then if test "x$cache_file" != "x/dev/null"; then { $as_echo "$as_me:${as_lineno-$LINENO}: updating cache $cache_file" >&5 $as_echo "$as_me: updating cache $cache_file" >&6;} if test ! -f "$cache_file" || test -h "$cache_file"; then cat confcache >"$cache_file" else case $cache_file in #( */* | ?:*) mv -f confcache "$cache_file"$$ && mv -f "$cache_file"$$ "$cache_file" ;; #( *) mv -f confcache "$cache_file" ;; esac fi fi else { $as_echo "$as_me:${as_lineno-$LINENO}: not updating unwritable cache $cache_file" >&5 $as_echo "$as_me: not updating unwritable cache $cache_file" >&6;} fi fi rm -f confcache test "x$prefix" = xNONE && prefix=$ac_default_prefix # Let make expand exec_prefix. 
test "x$exec_prefix" = xNONE && exec_prefix='${prefix}' DEFS=-DHAVE_CONFIG_H ac_libobjs= ac_ltlibobjs= U= for ac_i in : $LIBOBJS; do test "x$ac_i" = x: && continue # 1. Remove the extension, and $U if already installed. ac_script='s/\$U\././;s/\.o$//;s/\.obj$//' ac_i=`$as_echo "$ac_i" | sed "$ac_script"` # 2. Prepend LIBOBJDIR. When used with automake>=1.10 LIBOBJDIR # will be set to the directory where LIBOBJS objects are built. as_fn_append ac_libobjs " \${LIBOBJDIR}$ac_i\$U.$ac_objext" as_fn_append ac_ltlibobjs " \${LIBOBJDIR}$ac_i"'$U.lo' done LIBOBJS=$ac_libobjs LTLIBOBJS=$ac_ltlibobjs if test -n "$EXEEXT"; then am__EXEEXT_TRUE= am__EXEEXT_FALSE='#' else am__EXEEXT_TRUE='#' am__EXEEXT_FALSE= fi if test -z "${AMDEP_TRUE}" && test -z "${AMDEP_FALSE}"; then as_fn_error $? "conditional \"AMDEP\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${am__fastdepCXX_TRUE}" && test -z "${am__fastdepCXX_FALSE}"; then as_fn_error $? "conditional \"am__fastdepCXX\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${am__fastdepCC_TRUE}" && test -z "${am__fastdepCC_FALSE}"; then as_fn_error $? "conditional \"am__fastdepCC\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${GCC_TRUE}" && test -z "${GCC_FALSE}"; then as_fn_error $? "conditional \"GCC\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi : "${CONFIG_STATUS=./config.status}" ac_write_fail=0 ac_clean_files_save=$ac_clean_files ac_clean_files="$ac_clean_files $CONFIG_STATUS" { $as_echo "$as_me:${as_lineno-$LINENO}: creating $CONFIG_STATUS" >&5 $as_echo "$as_me: creating $CONFIG_STATUS" >&6;} as_write_fail=0 cat >$CONFIG_STATUS <<_ASEOF || as_write_fail=1 #! $SHELL # Generated by $as_me. # Run this file to recreate the current configuration. 
# Compiler output produced by configure, useful for debugging # configure, is in config.log if it exists. debug=false ac_cs_recheck=false ac_cs_silent=false SHELL=\${CONFIG_SHELL-$SHELL} export SHELL _ASEOF cat >>$CONFIG_STATUS <<\_ASEOF || as_write_fail=1 ## -------------------- ## ## M4sh Initialization. ## ## -------------------- ## # Be more Bourne compatible DUALCASE=1; export DUALCASE # for MKS sh if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then : emulate sh NULLCMD=: # Pre-4.2 versions of Zsh do word splitting on ${1+"$@"}, which # is contrary to our usage. Disable this feature. alias -g '${1+"$@"}'='"$@"' setopt NO_GLOB_SUBST else case `(set -o) 2>/dev/null` in #( *posix*) : set -o posix ;; #( *) : ;; esac fi as_nl=' ' export as_nl # Printing a long string crashes Solaris 7 /usr/bin/printf. as_echo='\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\' as_echo=$as_echo$as_echo$as_echo$as_echo$as_echo as_echo=$as_echo$as_echo$as_echo$as_echo$as_echo$as_echo # Prefer a ksh shell builtin over an external printf program on Solaris, # but without wasting forks for bash or zsh. if test -z "$BASH_VERSION$ZSH_VERSION" \ && (test "X`print -r -- $as_echo`" = "X$as_echo") 2>/dev/null; then as_echo='print -r --' as_echo_n='print -rn --' elif (test "X`printf %s $as_echo`" = "X$as_echo") 2>/dev/null; then as_echo='printf %s\n' as_echo_n='printf %s' else if test "X`(/usr/ucb/echo -n -n $as_echo) 2>/dev/null`" = "X-n $as_echo"; then as_echo_body='eval /usr/ucb/echo -n "$1$as_nl"' as_echo_n='/usr/ucb/echo -n' else as_echo_body='eval expr "X$1" : "X\\(.*\\)"' as_echo_n_body='eval arg=$1; case $arg in #( *"$as_nl"*) expr "X$arg" : "X\\(.*\\)$as_nl"; arg=`expr "X$arg" : ".*$as_nl\\(.*\\)"`;; esac; expr "X$arg" : "X\\(.*\\)" | tr -d "$as_nl" ' export as_echo_n_body as_echo_n='sh -c $as_echo_n_body as_echo' fi export as_echo_body as_echo='sh -c $as_echo_body as_echo' fi # The user is always right. 
if test "${PATH_SEPARATOR+set}" != set; then PATH_SEPARATOR=: (PATH='/bin;/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 && { (PATH='/bin:/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 || PATH_SEPARATOR=';' } fi # IFS # We need space, tab and new line, in precisely that order. Quoting is # there to prevent editors from complaining about space-tab. # (If _AS_PATH_WALK were called with IFS unset, it would disable word # splitting by setting IFS to empty value.) IFS=" "" $as_nl" # Find who we are. Look in the path if we contain no directory separator. as_myself= case $0 in #(( *[\\/]* ) as_myself=$0 ;; *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. test -r "$as_dir/$0" && as_myself=$as_dir/$0 && break done IFS=$as_save_IFS ;; esac # We did not find ourselves, most probably we were run as `sh COMMAND' # in which case we are not to be found in the path. if test "x$as_myself" = x; then as_myself=$0 fi if test ! -f "$as_myself"; then $as_echo "$as_myself: error: cannot find myself; rerun with an absolute file name" >&2 exit 1 fi # Unset variables that we do not need and which cause bugs (e.g. in # pre-3.0 UWIN ksh). But do not cause bugs in bash 2.01; the "|| exit 1" # suppresses any "Segmentation fault" message there. '((' could # trigger a bug in pdksh 5.2.14. for as_var in BASH_ENV ENV MAIL MAILPATH do eval test x\${$as_var+set} = xset \ && ( (unset $as_var) || exit 1) >/dev/null 2>&1 && unset $as_var || : done PS1='$ ' PS2='> ' PS4='+ ' # NLS nuisances. LC_ALL=C export LC_ALL LANGUAGE=C export LANGUAGE # CDPATH. (unset CDPATH) >/dev/null 2>&1 && unset CDPATH # as_fn_error STATUS ERROR [LINENO LOG_FD] # ---------------------------------------- # Output "`basename $0`: error: ERROR" to stderr. If LINENO and LOG_FD are # provided, also output the error to LOG_FD, referencing LINENO. Then exit the # script with STATUS, using 1 if that was 0. 
as_fn_error () { as_status=$1; test $as_status -eq 0 && as_status=1 if test "$4"; then as_lineno=${as_lineno-"$3"} as_lineno_stack=as_lineno_stack=$as_lineno_stack $as_echo "$as_me:${as_lineno-$LINENO}: error: $2" >&$4 fi $as_echo "$as_me: error: $2" >&2 as_fn_exit $as_status } # as_fn_error # as_fn_set_status STATUS # ----------------------- # Set $? to STATUS, without forking. as_fn_set_status () { return $1 } # as_fn_set_status # as_fn_exit STATUS # ----------------- # Exit the shell with STATUS, even in a "trap 0" or "set -e" context. as_fn_exit () { set +e as_fn_set_status $1 exit $1 } # as_fn_exit # as_fn_unset VAR # --------------- # Portably unset VAR. as_fn_unset () { { eval $1=; unset $1;} } as_unset=as_fn_unset # as_fn_append VAR VALUE # ---------------------- # Append the text in VALUE to the end of the definition contained in VAR. Take # advantage of any shell optimizations that allow amortized linear growth over # repeated appends, instead of the typical quadratic growth present in naive # implementations. if (eval "as_var=1; as_var+=2; test x\$as_var = x12") 2>/dev/null; then : eval 'as_fn_append () { eval $1+=\$2 }' else as_fn_append () { eval $1=\$$1\$2 } fi # as_fn_append # as_fn_arith ARG... # ------------------ # Perform arithmetic evaluation on the ARGs, and store the result in the # global $as_val. Take advantage of shells that can avoid forks. The arguments # must be portable across $(()) and expr. if (eval "test \$(( 1 + 1 )) = 2") 2>/dev/null; then : eval 'as_fn_arith () { as_val=$(( $* )) }' else as_fn_arith () { as_val=`expr "$@" || test $? 
-eq 1` } fi # as_fn_arith if expr a : '\(a\)' >/dev/null 2>&1 && test "X`expr 00001 : '.*\(...\)'`" = X001; then as_expr=expr else as_expr=false fi if (basename -- /) >/dev/null 2>&1 && test "X`basename -- / 2>&1`" = "X/"; then as_basename=basename else as_basename=false fi if (as_dir=`dirname -- /` && test "X$as_dir" = X/) >/dev/null 2>&1; then as_dirname=dirname else as_dirname=false fi as_me=`$as_basename -- "$0" || $as_expr X/"$0" : '.*/\([^/][^/]*\)/*$' \| \ X"$0" : 'X\(//\)$' \| \ X"$0" : 'X\(/\)' \| . 2>/dev/null || $as_echo X/"$0" | sed '/^.*\/\([^/][^/]*\)\/*$/{ s//\1/ q } /^X\/\(\/\/\)$/{ s//\1/ q } /^X\/\(\/\).*/{ s//\1/ q } s/.*/./; q'` # Avoid depending upon Character Ranges. as_cr_letters='abcdefghijklmnopqrstuvwxyz' as_cr_LETTERS='ABCDEFGHIJKLMNOPQRSTUVWXYZ' as_cr_Letters=$as_cr_letters$as_cr_LETTERS as_cr_digits='0123456789' as_cr_alnum=$as_cr_Letters$as_cr_digits ECHO_C= ECHO_N= ECHO_T= case `echo -n x` in #((((( -n*) case `echo 'xy\c'` in *c*) ECHO_T=' ';; # ECHO_T is single tab character. xy) ECHO_C='\c';; *) echo `echo ksh88 bug on AIX 6.1` > /dev/null ECHO_T=' ';; esac;; *) ECHO_N='-n';; esac rm -f conf$$ conf$$.exe conf$$.file if test -d conf$$.dir; then rm -f conf$$.dir/conf$$.file else rm -f conf$$.dir mkdir conf$$.dir 2>/dev/null fi if (echo >conf$$.file) 2>/dev/null; then if ln -s conf$$.file conf$$ 2>/dev/null; then as_ln_s='ln -s' # ... but there are two gotchas: # 1) On MSYS, both `ln -s file dir' and `ln file dir' fail. # 2) DJGPP < 2.04 has no symlinks; `ln -s' creates a wrapper executable. # In both cases, we have to default to `cp -p'. ln -s conf$$.file conf$$.dir 2>/dev/null && test ! -f conf$$.exe || as_ln_s='cp -p' elif ln conf$$.file conf$$ 2>/dev/null; then as_ln_s=ln else as_ln_s='cp -p' fi else as_ln_s='cp -p' fi rm -f conf$$ conf$$.exe conf$$.dir/conf$$.file conf$$.file rmdir conf$$.dir 2>/dev/null # as_fn_mkdir_p # ------------- # Create "$as_dir" as a directory, including parents if necessary. 
as_fn_mkdir_p () { case $as_dir in #( -*) as_dir=./$as_dir;; esac test -d "$as_dir" || eval $as_mkdir_p || { as_dirs= while :; do case $as_dir in #( *\'*) as_qdir=`$as_echo "$as_dir" | sed "s/'/'\\\\\\\\''/g"`;; #'( *) as_qdir=$as_dir;; esac as_dirs="'$as_qdir' $as_dirs" as_dir=`$as_dirname -- "$as_dir" || $as_expr X"$as_dir" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$as_dir" : 'X\(//\)[^/]' \| \ X"$as_dir" : 'X\(//\)$' \| \ X"$as_dir" : 'X\(/\)' \| . 2>/dev/null || $as_echo X"$as_dir" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` test -d "$as_dir" && break done test -z "$as_dirs" || eval "mkdir $as_dirs" } || test -d "$as_dir" || as_fn_error $? "cannot create directory $as_dir" } # as_fn_mkdir_p if mkdir -p . 2>/dev/null; then as_mkdir_p='mkdir -p "$as_dir"' else test -d ./-p && rmdir ./-p as_mkdir_p=false fi if test -x / >/dev/null 2>&1; then as_test_x='test -x' else if ls -dL / >/dev/null 2>&1; then as_ls_L_option=L else as_ls_L_option= fi as_test_x=' eval sh -c '\'' if test -d "$1"; then test -d "$1/."; else case $1 in #( -*)set "./$1";; esac; case `ls -ld'$as_ls_L_option' "$1" 2>/dev/null` in #(( ???[sx]*):;;*)false;;esac;fi '\'' sh ' fi as_executable_p=$as_test_x # Sed expression to map a string onto a valid CPP name. as_tr_cpp="eval sed 'y%*$as_cr_letters%P$as_cr_LETTERS%;s%[^_$as_cr_alnum]%_%g'" # Sed expression to map a string onto a valid variable name. as_tr_sh="eval sed 'y%*+%pp%;s%[^_$as_cr_alnum]%_%g'" exec 6>&1 ## ----------------------------------- ## ## Main body of $CONFIG_STATUS script. ## ## ----------------------------------- ## _ASEOF test $as_write_fail = 0 && chmod +x $CONFIG_STATUS || ac_write_fail=1 cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 # Save the log message, to keep $0 and so on meaningful, and to # report actual input values of CONFIG_FILES etc. instead of their # values after options handling. 
ac_log=" This file was extended by sparsehash $as_me 2.0.2, which was generated by GNU Autoconf 2.68. Invocation command line was CONFIG_FILES = $CONFIG_FILES CONFIG_HEADERS = $CONFIG_HEADERS CONFIG_LINKS = $CONFIG_LINKS CONFIG_COMMANDS = $CONFIG_COMMANDS $ $0 $@ on `(hostname || uname -n) 2>/dev/null | sed 1q` " _ACEOF case $ac_config_files in *" "*) set x $ac_config_files; shift; ac_config_files=$*;; esac case $ac_config_headers in *" "*) set x $ac_config_headers; shift; ac_config_headers=$*;; esac cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 # Files that config.status was made for. config_files="$ac_config_files" config_headers="$ac_config_headers" config_commands="$ac_config_commands" _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 ac_cs_usage="\ \`$as_me' instantiates files and other configuration actions from templates according to the current configuration. Unless the files and actions are specified as TAGs, all are instantiated by default. Usage: $0 [OPTION]... [TAG]... -h, --help print this help, then exit -V, --version print version number and configuration settings, then exit --config print configuration, then exit -q, --quiet, --silent do not print progress messages -d, --debug don't remove temporary files --recheck update $as_me by reconfiguring in the same conditions --file=FILE[:TEMPLATE] instantiate the configuration file FILE --header=FILE[:TEMPLATE] instantiate the configuration header FILE Configuration files: $config_files Configuration headers: $config_headers Configuration commands: $config_commands Report bugs to <google-sparsehash@googlegroups.com>." _ACEOF cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 ac_cs_config="`$as_echo "$ac_configure_args" | sed 's/^ //; s/[\\""\`\$]/\\\\&/g'`" ac_cs_version="\\ sparsehash config.status 2.0.2 configured by $0, generated by GNU Autoconf 2.68, with options \\"\$ac_cs_config\\" Copyright (C) 2010 Free Software Foundation, Inc. 
This config.status script is free software; the Free Software Foundation gives unlimited permission to copy, distribute and modify it." ac_pwd='$ac_pwd' srcdir='$srcdir' INSTALL='$INSTALL' MKDIR_P='$MKDIR_P' AWK='$AWK' test -n "\$AWK" || AWK=awk _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 # The default lists apply if the user does not specify any file. ac_need_defaults=: while test $# != 0 do case $1 in --*=?*) ac_option=`expr "X$1" : 'X\([^=]*\)='` ac_optarg=`expr "X$1" : 'X[^=]*=\(.*\)'` ac_shift=: ;; --*=) ac_option=`expr "X$1" : 'X\([^=]*\)='` ac_optarg= ac_shift=: ;; *) ac_option=$1 ac_optarg=$2 ac_shift=shift ;; esac case $ac_option in # Handling of the options. -recheck | --recheck | --rechec | --reche | --rech | --rec | --re | --r) ac_cs_recheck=: ;; --version | --versio | --versi | --vers | --ver | --ve | --v | -V ) $as_echo "$ac_cs_version"; exit ;; --config | --confi | --conf | --con | --co | --c ) $as_echo "$ac_cs_config"; exit ;; --debug | --debu | --deb | --de | --d | -d ) debug=: ;; --file | --fil | --fi | --f ) $ac_shift case $ac_optarg in *\'*) ac_optarg=`$as_echo "$ac_optarg" | sed "s/'/'\\\\\\\\''/g"` ;; '') as_fn_error $? "missing file argument" ;; esac as_fn_append CONFIG_FILES " '$ac_optarg'" ac_need_defaults=false;; --header | --heade | --head | --hea ) $ac_shift case $ac_optarg in *\'*) ac_optarg=`$as_echo "$ac_optarg" | sed "s/'/'\\\\\\\\''/g"` ;; esac as_fn_append CONFIG_HEADERS " '$ac_optarg'" ac_need_defaults=false;; --he | --h) # Conflict between --help and --header as_fn_error $? "ambiguous option: \`$1' Try \`$0 --help' for more information.";; --help | --hel | -h ) $as_echo "$ac_cs_usage"; exit ;; -q | -quiet | --quiet | --quie | --qui | --qu | --q \ | -silent | --silent | --silen | --sile | --sil | --si | --s) ac_cs_silent=: ;; # This is an error. -*) as_fn_error $? "unrecognized option: \`$1' Try \`$0 --help' for more information." 
;; *) as_fn_append ac_config_targets " $1" ac_need_defaults=false ;; esac shift done ac_configure_extra_args= if $ac_cs_silent; then exec 6>/dev/null ac_configure_extra_args="$ac_configure_extra_args --silent" fi _ACEOF cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 if \$ac_cs_recheck; then set X '$SHELL' '$0' $ac_configure_args \$ac_configure_extra_args --no-create --no-recursion shift \$as_echo "running CONFIG_SHELL=$SHELL \$*" >&6 CONFIG_SHELL='$SHELL' export CONFIG_SHELL exec "\$@" fi _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 exec 5>>config.log { echo sed 'h;s/./-/g;s/^.../## /;s/...$/ ##/;p;x;p;x' <<_ASBOX ## Running $as_me. ## _ASBOX $as_echo "$ac_log" } >&5 _ACEOF cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 # # INIT-COMMANDS # AMDEP_TRUE="$AMDEP_TRUE" ac_aux_dir="$ac_aux_dir" _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 # Handling of arguments. for ac_config_target in $ac_config_targets do case $ac_config_target in "src/config.h") CONFIG_HEADERS="$CONFIG_HEADERS src/config.h" ;; "depfiles") CONFIG_COMMANDS="$CONFIG_COMMANDS depfiles" ;; "Makefile") CONFIG_FILES="$CONFIG_FILES Makefile" ;; *) as_fn_error $? "invalid argument: \`$ac_config_target'" "$LINENO" 5;; esac done # If the user did not use the arguments to specify the items to instantiate, # then the envvar interface is used. Set only those that are not. # We use the long form for the default assignment because of an extremely # bizarre bug on SunOS 4.1.3. if $ac_need_defaults; then test "${CONFIG_FILES+set}" = set || CONFIG_FILES=$config_files test "${CONFIG_HEADERS+set}" = set || CONFIG_HEADERS=$config_headers test "${CONFIG_COMMANDS+set}" = set || CONFIG_COMMANDS=$config_commands fi # Have a temporary directory for convenience. Make it in the build tree # simply because there is no reason against having it here, and in addition, # creating and moving files from /tmp can sometimes cause problems. # Hook for its removal unless debugging. 
# Note that there is a small window in which the directory will not be cleaned: # after its creation but before its name has been assigned to `$tmp'. $debug || { tmp= ac_tmp= trap 'exit_status=$? : "${ac_tmp:=$tmp}" { test ! -d "$ac_tmp" || rm -fr "$ac_tmp"; } && exit $exit_status ' 0 trap 'as_fn_exit 1' 1 2 13 15 } # Create a (secure) tmp directory for tmp files. { tmp=`(umask 077 && mktemp -d "./confXXXXXX") 2>/dev/null` && test -d "$tmp" } || { tmp=./conf$$-$RANDOM (umask 077 && mkdir "$tmp") } || as_fn_error $? "cannot create a temporary directory in ." "$LINENO" 5 ac_tmp=$tmp # Set up the scripts for CONFIG_FILES section. # No need to generate them if there are no CONFIG_FILES. # This happens for instance with `./config.status config.h'. if test -n "$CONFIG_FILES"; then ac_cr=`echo X | tr X '\015'` # On cygwin, bash can eat \r inside `` if the user requested igncr. # But we know of no other shell where ac_cr would be empty at this # point, so we can use a bashism as a fallback. if test "x$ac_cr" = x; then eval ac_cr=\$\'\\r\' fi ac_cs_awk_cr=`$AWK 'BEGIN { print "a\rb" }' /dev/null` if test "$ac_cs_awk_cr" = "a${ac_cr}b"; then ac_cs_awk_cr='\\r' else ac_cs_awk_cr=$ac_cr fi echo 'BEGIN {' >"$ac_tmp/subs1.awk" && _ACEOF { echo "cat >conf$$subs.awk <<_ACEOF" && echo "$ac_subst_vars" | sed 's/.*/&!$&$ac_delim/' && echo "_ACEOF" } >conf$$subs.sh || as_fn_error $? "could not make $CONFIG_STATUS" "$LINENO" 5 ac_delim_num=`echo "$ac_subst_vars" | grep -c '^'` ac_delim='%!_!# ' for ac_last_try in false false false false false :; do . ./conf$$subs.sh || as_fn_error $? "could not make $CONFIG_STATUS" "$LINENO" 5 ac_delim_n=`sed -n "s/.*$ac_delim\$/X/p" conf$$subs.awk | grep -c X` if test $ac_delim_n = $ac_delim_num; then break elif $ac_last_try; then as_fn_error $? "could not make $CONFIG_STATUS" "$LINENO" 5 else ac_delim="$ac_delim!$ac_delim _$ac_delim!! 
" fi done rm -f conf$$subs.sh cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 cat >>"\$ac_tmp/subs1.awk" <<\\_ACAWK && _ACEOF sed -n ' h s/^/S["/; s/!.*/"]=/ p g s/^[^!]*!// :repl t repl s/'"$ac_delim"'$// t delim :nl h s/\(.\{148\}\)..*/\1/ t more1 s/["\\]/\\&/g; s/^/"/; s/$/\\n"\\/ p n b repl :more1 s/["\\]/\\&/g; s/^/"/; s/$/"\\/ p g s/.\{148\}// t nl :delim h s/\(.\{148\}\)..*/\1/ t more2 s/["\\]/\\&/g; s/^/"/; s/$/"/ p b :more2 s/["\\]/\\&/g; s/^/"/; s/$/"\\/ p g s/.\{148\}// t delim ' >$CONFIG_STATUS || ac_write_fail=1 rm -f conf$$subs.awk cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 _ACAWK cat >>"\$ac_tmp/subs1.awk" <<_ACAWK && for (key in S) S_is_set[key] = 1 FS = "" } { line = $ 0 nfields = split(line, field, "@") substed = 0 len = length(field[1]) for (i = 2; i < nfields; i++) { key = field[i] keylen = length(key) if (S_is_set[key]) { value = S[key] line = substr(line, 1, len) "" value "" substr(line, len + keylen + 3) len += length(value) + length(field[++i]) substed = 1 } else len += 1 + keylen } print line } _ACAWK _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 if sed "s/$ac_cr//" < /dev/null > /dev/null 2>&1; then sed "s/$ac_cr\$//; s/$ac_cr/$ac_cs_awk_cr/g" else cat fi < "$ac_tmp/subs1.awk" > "$ac_tmp/subs.awk" \ || as_fn_error $? "could not setup config files machinery" "$LINENO" 5 _ACEOF # VPATH may cause trouble with some makes, so we remove sole $(srcdir), # ${srcdir} and @srcdir@ entries from VPATH if srcdir is ".", strip leading and # trailing colons and then remove the whole line if VPATH becomes empty # (actually we leave an empty line to preserve line numbers). if test "x$srcdir" = x.; then ac_vpsub='/^[ ]*VPATH[ ]*=[ ]*/{ h s/// s/^/:/ s/[ ]*$/:/ s/:\$(srcdir):/:/g s/:\${srcdir}:/:/g s/:@srcdir@:/:/g s/^:*// s/:*$// x s/\(=[ ]*\).*/\1/ G s/\n// s/^[^=]*=[ ]*$// }' fi cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 fi # test -n "$CONFIG_FILES" # Set up the scripts for CONFIG_HEADERS section. 
# No need to generate them if there are no CONFIG_HEADERS. # This happens for instance with `./config.status Makefile'. if test -n "$CONFIG_HEADERS"; then cat >"$ac_tmp/defines.awk" <<\_ACAWK || BEGIN { _ACEOF # Transform confdefs.h into an awk script `defines.awk', embedded as # here-document in config.status, that substitutes the proper values into # config.h.in to produce config.h. # Create a delimiter string that does not exist in confdefs.h, to ease # handling of long lines. ac_delim='%!_!# ' for ac_last_try in false false :; do ac_tt=`sed -n "/$ac_delim/p" confdefs.h` if test -z "$ac_tt"; then break elif $ac_last_try; then as_fn_error $? "could not make $CONFIG_HEADERS" "$LINENO" 5 else ac_delim="$ac_delim!$ac_delim _$ac_delim!! " fi done # For the awk script, D is an array of macro values keyed by name, # likewise P contains macro parameters if any. Preserve backslash # newline sequences. ac_word_re=[_$as_cr_Letters][_$as_cr_alnum]* sed -n ' s/.\{148\}/&'"$ac_delim"'/g t rset :rset s/^[ ]*#[ ]*define[ ][ ]*/ / t def d :def s/\\$// t bsnl s/["\\]/\\&/g s/^ \('"$ac_word_re"'\)\(([^()]*)\)[ ]*\(.*\)/P["\1"]="\2"\ D["\1"]=" \3"/p s/^ \('"$ac_word_re"'\)[ ]*\(.*\)/D["\1"]=" \2"/p d :bsnl s/["\\]/\\&/g s/^ \('"$ac_word_re"'\)\(([^()]*)\)[ ]*\(.*\)/P["\1"]="\2"\ D["\1"]=" \3\\\\\\n"\\/p t cont s/^ \('"$ac_word_re"'\)[ ]*\(.*\)/D["\1"]=" \2\\\\\\n"\\/p t cont d :cont n s/.\{148\}/&'"$ac_delim"'/g t clear :clear s/\\$// t bsnlc s/["\\]/\\&/g; s/^/"/; s/$/"/p d :bsnlc s/["\\]/\\&/g; s/^/"/; s/$/\\\\\\n"\\/p b cont ' >$CONFIG_STATUS || ac_write_fail=1 cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 for (key in D) D_is_set[key] = 1 FS = "" } /^[\t ]*#[\t ]*(define|undef)[\t ]+$ac_word_re([\t (]|\$)/ { line = \$ 0 split(line, arg, " ") if (arg[1] == "#") { defundef = arg[2] mac1 = arg[3] } else { defundef = substr(arg[1], 2) mac1 = arg[2] } split(mac1, mac2, "(") #) macro = mac2[1] prefix = substr(line, 1, index(line, defundef) - 1) if (D_is_set[macro]) { # Preserve the 
white space surrounding the "#". print prefix "define", macro P[macro] D[macro] next } else { # Replace #undef with comments. This is necessary, for example, # in the case of _POSIX_SOURCE, which is predefined and required # on some systems where configure will not decide to define it. if (defundef == "undef") { print "/*", prefix defundef, macro, "*/" next } } } { print } _ACAWK _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 as_fn_error $? "could not setup config headers machinery" "$LINENO" 5 fi # test -n "$CONFIG_HEADERS" eval set X " :F $CONFIG_FILES :H $CONFIG_HEADERS :C $CONFIG_COMMANDS" shift for ac_tag do case $ac_tag in :[FHLC]) ac_mode=$ac_tag; continue;; esac case $ac_mode$ac_tag in :[FHL]*:*);; :L* | :C*:*) as_fn_error $? "invalid tag \`$ac_tag'" "$LINENO" 5;; :[FH]-) ac_tag=-:-;; :[FH]*) ac_tag=$ac_tag:$ac_tag.in;; esac ac_save_IFS=$IFS IFS=: set x $ac_tag IFS=$ac_save_IFS shift ac_file=$1 shift case $ac_mode in :L) ac_source=$1;; :[FH]) ac_file_inputs= for ac_f do case $ac_f in -) ac_f="$ac_tmp/stdin";; *) # Look for the file first in the build tree, then in the source tree # (if the path is not absolute). The absolute path cannot be DOS-style, # because $ac_f cannot contain `:'. test -f "$ac_f" || case $ac_f in [\\/$]*) false;; *) test -f "$srcdir/$ac_f" && ac_f="$srcdir/$ac_f";; esac || as_fn_error 1 "cannot find input file: \`$ac_f'" "$LINENO" 5;; esac case $ac_f in *\'*) ac_f=`$as_echo "$ac_f" | sed "s/'/'\\\\\\\\''/g"`;; esac as_fn_append ac_file_inputs " '$ac_f'" done # Let's still pretend it is `configure' which instantiates (i.e., don't # use $as_me), people would be surprised to read: # /* config.h. Generated by config.status. */ configure_input='Generated from '` $as_echo "$*" | sed 's|^[^:]*/||;s|:[^:]*/|, |g' `' by configure.' if test x"$ac_file" != x-; then configure_input="$ac_file. 
$configure_input" { $as_echo "$as_me:${as_lineno-$LINENO}: creating $ac_file" >&5 $as_echo "$as_me: creating $ac_file" >&6;} fi # Neutralize special characters interpreted by sed in replacement strings. case $configure_input in #( *\&* | *\|* | *\\* ) ac_sed_conf_input=`$as_echo "$configure_input" | sed 's/[\\\\&|]/\\\\&/g'`;; #( *) ac_sed_conf_input=$configure_input;; esac case $ac_tag in *:-:* | *:-) cat >"$ac_tmp/stdin" \ || as_fn_error $? "could not create $ac_file" "$LINENO" 5 ;; esac ;; esac ac_dir=`$as_dirname -- "$ac_file" || $as_expr X"$ac_file" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$ac_file" : 'X\(//\)[^/]' \| \ X"$ac_file" : 'X\(//\)$' \| \ X"$ac_file" : 'X\(/\)' \| . 2>/dev/null || $as_echo X"$ac_file" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` as_dir="$ac_dir"; as_fn_mkdir_p ac_builddir=. case "$ac_dir" in .) ac_dir_suffix= ac_top_builddir_sub=. ac_top_build_prefix= ;; *) ac_dir_suffix=/`$as_echo "$ac_dir" | sed 's|^\.[\\/]||'` # A ".." for each directory in $ac_dir_suffix. ac_top_builddir_sub=`$as_echo "$ac_dir_suffix" | sed 's|/[^\\/]*|/..|g;s|/||'` case $ac_top_builddir_sub in "") ac_top_builddir_sub=. ac_top_build_prefix= ;; *) ac_top_build_prefix=$ac_top_builddir_sub/ ;; esac ;; esac ac_abs_top_builddir=$ac_pwd ac_abs_builddir=$ac_pwd$ac_dir_suffix # for backward compatibility: ac_top_builddir=$ac_top_build_prefix case $srcdir in .) # We are building in place. ac_srcdir=. ac_top_srcdir=$ac_top_builddir_sub ac_abs_top_srcdir=$ac_pwd ;; [\\/]* | ?:[\\/]* ) # Absolute name. ac_srcdir=$srcdir$ac_dir_suffix; ac_top_srcdir=$srcdir ac_abs_top_srcdir=$srcdir ;; *) # Relative name. 
ac_srcdir=$ac_top_build_prefix$srcdir$ac_dir_suffix ac_top_srcdir=$ac_top_build_prefix$srcdir ac_abs_top_srcdir=$ac_pwd/$srcdir ;; esac ac_abs_srcdir=$ac_abs_top_srcdir$ac_dir_suffix case $ac_mode in :F) # # CONFIG_FILE # case $INSTALL in [\\/$]* | ?:[\\/]* ) ac_INSTALL=$INSTALL ;; *) ac_INSTALL=$ac_top_build_prefix$INSTALL ;; esac ac_MKDIR_P=$MKDIR_P case $MKDIR_P in [\\/$]* | ?:[\\/]* ) ;; */*) ac_MKDIR_P=$ac_top_build_prefix$MKDIR_P ;; esac _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 # If the template does not know about datarootdir, expand it. # FIXME: This hack should be removed a few years after 2.60. ac_datarootdir_hack=; ac_datarootdir_seen= ac_sed_dataroot=' /datarootdir/ { p q } /@datadir@/p /@docdir@/p /@infodir@/p /@localedir@/p /@mandir@/p' case `eval "sed -n \"\$ac_sed_dataroot\" $ac_file_inputs"` in *datarootdir*) ac_datarootdir_seen=yes;; *@datadir@*|*@docdir@*|*@infodir@*|*@localedir@*|*@mandir@*) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $ac_file_inputs seems to ignore the --datarootdir setting" >&5 $as_echo "$as_me: WARNING: $ac_file_inputs seems to ignore the --datarootdir setting" >&2;} _ACEOF cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 ac_datarootdir_hack=' s&@datadir@&$datadir&g s&@docdir@&$docdir&g s&@infodir@&$infodir&g s&@localedir@&$localedir&g s&@mandir@&$mandir&g s&\\\${datarootdir}&$datarootdir&g' ;; esac _ACEOF # Neutralize VPATH when `$srcdir' = `.'. # Shell code in configure.ac might set extrasub. # FIXME: do we really want to maintain this feature? 
cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 ac_sed_extra="$ac_vpsub $extrasub _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 :t /@[a-zA-Z_][a-zA-Z_0-9]*@/!b s|@configure_input@|$ac_sed_conf_input|;t t s&@top_builddir@&$ac_top_builddir_sub&;t t s&@top_build_prefix@&$ac_top_build_prefix&;t t s&@srcdir@&$ac_srcdir&;t t s&@abs_srcdir@&$ac_abs_srcdir&;t t s&@top_srcdir@&$ac_top_srcdir&;t t s&@abs_top_srcdir@&$ac_abs_top_srcdir&;t t s&@builddir@&$ac_builddir&;t t s&@abs_builddir@&$ac_abs_builddir&;t t s&@abs_top_builddir@&$ac_abs_top_builddir&;t t s&@INSTALL@&$ac_INSTALL&;t t s&@MKDIR_P@&$ac_MKDIR_P&;t t $ac_datarootdir_hack " eval sed \"\$ac_sed_extra\" "$ac_file_inputs" | $AWK -f "$ac_tmp/subs.awk" \ >$ac_tmp/out || as_fn_error $? "could not create $ac_file" "$LINENO" 5 test -z "$ac_datarootdir_hack$ac_datarootdir_seen" && { ac_out=`sed -n '/\${datarootdir}/p' "$ac_tmp/out"`; test -n "$ac_out"; } && { ac_out=`sed -n '/^[ ]*datarootdir[ ]*:*=/p' \ "$ac_tmp/out"`; test -z "$ac_out"; } && { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $ac_file contains a reference to the variable \`datarootdir' which seems to be undefined. Please make sure it is defined" >&5 $as_echo "$as_me: WARNING: $ac_file contains a reference to the variable \`datarootdir' which seems to be undefined. Please make sure it is defined" >&2;} rm -f "$ac_tmp/stdin" case $ac_file in -) cat "$ac_tmp/out" && rm -f "$ac_tmp/out";; *) rm -f "$ac_file" && mv "$ac_tmp/out" "$ac_file";; esac \ || as_fn_error $? "could not create $ac_file" "$LINENO" 5 ;; :H) # # CONFIG_HEADER # if test x"$ac_file" != x-; then { $as_echo "/* $configure_input */" \ && eval '$AWK -f "$ac_tmp/defines.awk"' "$ac_file_inputs" } >"$ac_tmp/config.h" \ || as_fn_error $? 
"could not create $ac_file" "$LINENO" 5 if diff "$ac_file" "$ac_tmp/config.h" >/dev/null 2>&1; then { $as_echo "$as_me:${as_lineno-$LINENO}: $ac_file is unchanged" >&5 $as_echo "$as_me: $ac_file is unchanged" >&6;} else rm -f "$ac_file" mv "$ac_tmp/config.h" "$ac_file" \ || as_fn_error $? "could not create $ac_file" "$LINENO" 5 fi else $as_echo "/* $configure_input */" \ && eval '$AWK -f "$ac_tmp/defines.awk"' "$ac_file_inputs" \ || as_fn_error $? "could not create -" "$LINENO" 5 fi # Compute "$ac_file"'s index in $config_headers. _am_arg="$ac_file" _am_stamp_count=1 for _am_header in $config_headers :; do case $_am_header in $_am_arg | $_am_arg:* ) break ;; * ) _am_stamp_count=`expr $_am_stamp_count + 1` ;; esac done echo "timestamp for $_am_arg" >`$as_dirname -- "$_am_arg" || $as_expr X"$_am_arg" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$_am_arg" : 'X\(//\)[^/]' \| \ X"$_am_arg" : 'X\(//\)$' \| \ X"$_am_arg" : 'X\(/\)' \| . 2>/dev/null || $as_echo X"$_am_arg" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'`/stamp-h$_am_stamp_count ;; :C) { $as_echo "$as_me:${as_lineno-$LINENO}: executing $ac_file commands" >&5 $as_echo "$as_me: executing $ac_file commands" >&6;} ;; esac case $ac_file$ac_mode in "depfiles":C) test x"$AMDEP_TRUE" != x"" || { # Autoconf 2.62 quotes --file arguments for eval, but not when files # are listed without --file. Let's play safe and only enable the eval # if we detect the quoting. case $CONFIG_FILES in *\'*) eval set x "$CONFIG_FILES" ;; *) set x $CONFIG_FILES ;; esac shift for mf do # Strip MF so we end up with the name of the file. mf=`echo "$mf" | sed -e 's/:.*$//'` # Check whether this is an Automake generated Makefile or not. # We used to match only the files named `Makefile.in', but # some people rename them; so instead we look at the file content. 
# Grep'ing the first line is not enough: some people post-process # each Makefile.in and add a new line on top of each file to say so. # Grep'ing the whole file is not good either: AIX grep has a line # limit of 2048, but all sed's we know understand at least 4000. if sed -n 's,^#.*generated by automake.*,X,p' "$mf" | grep X >/dev/null 2>&1; then dirpart=`$as_dirname -- "$mf" || $as_expr X"$mf" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$mf" : 'X\(//\)[^/]' \| \ X"$mf" : 'X\(//\)$' \| \ X"$mf" : 'X\(/\)' \| . 2>/dev/null || $as_echo X"$mf" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` else continue fi # Extract the definition of DEPDIR, am__include, and am__quote # from the Makefile without running `make'. DEPDIR=`sed -n 's/^DEPDIR = //p' < "$mf"` test -z "$DEPDIR" && continue am__include=`sed -n 's/^am__include = //p' < "$mf"` test -z "$am__include" && continue am__quote=`sed -n 's/^am__quote = //p' < "$mf"` # When using ansi2knr, U may be empty or an underscore; expand it U=`sed -n 's/^U = //p' < "$mf"` # Find all dependency output files; they are included files with # $(DEPDIR) in their names. We invoke sed twice because it is the # simplest approach to changing $(DEPDIR) to its actual value in the # expansion. for file in `sed -n " s/^$am__include $am__quote\(.*(DEPDIR).*\)$am__quote"'$/\1/p' <"$mf" | \ sed -e 's/\$(DEPDIR)/'"$DEPDIR"'/g' -e 's/\$U/'"$U"'/g'`; do # Make sure the directory exists. test -f "$dirpart/$file" && continue fdir=`$as_dirname -- "$file" || $as_expr X"$file" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$file" : 'X\(//\)[^/]' \| \ X"$file" : 'X\(//\)$' \| \ X"$file" : 'X\(/\)' \| . 
2>/dev/null || $as_echo X"$file" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` as_dir=$dirpart/$fdir; as_fn_mkdir_p # echo "creating $dirpart/$file" echo '# dummy' > "$dirpart/$file" done done } ;; esac done # for ac_tag as_fn_exit 0 _ACEOF ac_clean_files=$ac_clean_files_save test $ac_write_fail = 0 || as_fn_error $? "write failure creating $CONFIG_STATUS" "$LINENO" 5 # configure is writing to config.log, and then calls config.status. # config.status does its own redirection, appending to config.log. # Unfortunately, on DOS this fails, as config.log is still kept open # by configure, so config.status won't be able to write to it; its # output is simply discarded. So we exec the FD to /dev/null, # effectively closing config.log, so it can be properly (re)opened and # appended to by config.status. When coming back to configure, we # need to make the FD available again. if test "$no_create" != yes; then ac_cs_success=: ac_config_status_args= test "$silent" = yes && ac_config_status_args="$ac_config_status_args --quiet" exec 5>/dev/null $SHELL $CONFIG_STATUS $ac_config_status_args || ac_cs_success=false exec 5>>config.log # Use ||, not &&, to avoid exiting from the if with $? = 1, which # would make configure fail if this is the last instruction. $ac_cs_success || as_fn_exit 1 fi if test -n "$ac_unrecognized_opts" && test "$enable_option_checking" != no; then { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: unrecognized options: $ac_unrecognized_opts" >&5 $as_echo "$as_me: WARNING: unrecognized options: $ac_unrecognized_opts" >&2;} fi sparsehash-2.0.2/missing0000755000175000017500000002623311721254575012213 00000000000000#! /bin/sh # Common stub for a few missing GNU programs while installing. scriptversion=2009-04-28.21; # UTC # Copyright (C) 1996, 1997, 1999, 2000, 2002, 2003, 2004, 2005, 2006, # 2008, 2009 Free Software Foundation, Inc. 
# Originally by François Pinard , 1996. # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2, or (at your option) # any later version. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # You should have received a copy of the GNU General Public License # along with this program. If not, see . # As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that program. if test $# -eq 0; then echo 1>&2 "Try \`$0 --help' for more information" exit 1 fi run=: sed_output='s/.* --output[ =]\([^ ]*\).*/\1/p' sed_minuso='s/.* -o \([^ ]*\).*/\1/p' # In the cases where this matters, `missing' is being run in the # srcdir already. if test -f configure.ac; then configure_ac=configure.ac else configure_ac=configure.in fi msg="missing on your system" case $1 in --run) # Try to run requested program, and just exit if it succeeds. run= shift "$@" && exit 0 # Exit code 63 means version mismatch. This often happens # when the user tries to use an ancient version of a tool on # a file that requires a minimum version. In this case we # should proceed as if the program had been absent, or # if --run hadn't been passed. if test $? = 63; then run=: msg="probably too old" fi ;; -h|--h|--he|--hel|--help) echo "\ $0 [OPTION]... PROGRAM [ARGUMENT]... Handle \`PROGRAM [ARGUMENT]...' for when PROGRAM is missing, or return an error status if there is no known handling for PROGRAM. 
Options: -h, --help display this help and exit -v, --version output version information and exit --run try to run the given command, and emulate it if it fails Supported PROGRAM values: aclocal touch file \`aclocal.m4' autoconf touch file \`configure' autoheader touch file \`config.h.in' autom4te touch the output file, or create a stub one automake touch all \`Makefile.in' files bison create \`y.tab.[ch]', if possible, from existing .[ch] flex create \`lex.yy.c', if possible, from existing .c help2man touch the output file lex create \`lex.yy.c', if possible, from existing .c makeinfo touch the output file tar try tar, gnutar, gtar, then tar without non-portable flags yacc create \`y.tab.[ch]', if possible, from existing .[ch] Version suffixes to PROGRAM as well as the prefixes \`gnu-', \`gnu', and \`g' are ignored when checking the name. Send bug reports to ." exit $? ;; -v|--v|--ve|--ver|--vers|--versi|--versio|--version) echo "missing $scriptversion (GNU Automake)" exit $? ;; -*) echo 1>&2 "$0: Unknown \`$1' option" echo 1>&2 "Try \`$0 --help' for more information" exit 1 ;; esac # normalize program name to check for. program=`echo "$1" | sed ' s/^gnu-//; t s/^gnu//; t s/^g//; t'` # Now exit if we have it, but it failed. Also exit now if we # don't have it and --version was passed (most likely to detect # the program). This is about non-GNU programs, so use $1 not # $program. case $1 in lex*|yacc*) # Not GNU programs, they don't have --version. ;; tar*) if test -n "$run"; then echo 1>&2 "ERROR: \`tar' requires --run" exit 1 elif test "x$2" = "x--version" || test "x$2" = "x--help"; then exit 1 fi ;; *) if test -z "$run" && ($1 --version) > /dev/null 2>&1; then # We have it, but it failed. exit 1 elif test "x$2" = "x--version" || test "x$2" = "x--help"; then # Could not run --version or --help. This is probably someone # running `$TOOL --version' or `$TOOL --help' to check whether # $TOOL exists and not knowing $TOOL uses missing. 
exit 1 fi ;; esac # If it does not exist, or fails to run (possibly an outdated version), # try to emulate it. case $program in aclocal*) echo 1>&2 "\ WARNING: \`$1' is $msg. You should only need it if you modified \`acinclude.m4' or \`${configure_ac}'. You might want to install the \`Automake' and \`Perl' packages. Grab them from any GNU archive site." touch aclocal.m4 ;; autoconf*) echo 1>&2 "\ WARNING: \`$1' is $msg. You should only need it if you modified \`${configure_ac}'. You might want to install the \`Autoconf' and \`GNU m4' packages. Grab them from any GNU archive site." touch configure ;; autoheader*) echo 1>&2 "\ WARNING: \`$1' is $msg. You should only need it if you modified \`acconfig.h' or \`${configure_ac}'. You might want to install the \`Autoconf' and \`GNU m4' packages. Grab them from any GNU archive site." files=`sed -n 's/^[ ]*A[CM]_CONFIG_HEADER(\([^)]*\)).*/\1/p' ${configure_ac}` test -z "$files" && files="config.h" touch_files= for f in $files; do case $f in *:*) touch_files="$touch_files "`echo "$f" | sed -e 's/^[^:]*://' -e 's/:.*//'`;; *) touch_files="$touch_files $f.in";; esac done touch $touch_files ;; automake*) echo 1>&2 "\ WARNING: \`$1' is $msg. You should only need it if you modified \`Makefile.am', \`acinclude.m4' or \`${configure_ac}'. You might want to install the \`Automake' and \`Perl' packages. Grab them from any GNU archive site." find . -type f -name Makefile.am -print | sed 's/\.am$/.in/' | while read f; do touch "$f"; done ;; autom4te*) echo 1>&2 "\ WARNING: \`$1' is needed, but is $msg. You might have modified some files without having the proper tools for further handling them. You can get \`$1' as part of \`Autoconf' from any GNU archive site." file=`echo "$*" | sed -n "$sed_output"` test -z "$file" && file=`echo "$*" | sed -n "$sed_minuso"` if test -f "$file"; then touch $file else test -z "$file" || exec >$file echo "#! 
/bin/sh" echo "# Created by GNU Automake missing as a replacement of" echo "# $ $@" echo "exit 0" chmod +x $file exit 1 fi ;; bison*|yacc*) echo 1>&2 "\ WARNING: \`$1' $msg. You should only need it if you modified a \`.y' file. You may need the \`Bison' package in order for those modifications to take effect. You can get \`Bison' from any GNU archive site." rm -f y.tab.c y.tab.h if test $# -ne 1; then eval LASTARG="\${$#}" case $LASTARG in *.y) SRCFILE=`echo "$LASTARG" | sed 's/y$/c/'` if test -f "$SRCFILE"; then cp "$SRCFILE" y.tab.c fi SRCFILE=`echo "$LASTARG" | sed 's/y$/h/'` if test -f "$SRCFILE"; then cp "$SRCFILE" y.tab.h fi ;; esac fi if test ! -f y.tab.h; then echo >y.tab.h fi if test ! -f y.tab.c; then echo 'main() { return 0; }' >y.tab.c fi ;; lex*|flex*) echo 1>&2 "\ WARNING: \`$1' is $msg. You should only need it if you modified a \`.l' file. You may need the \`Flex' package in order for those modifications to take effect. You can get \`Flex' from any GNU archive site." rm -f lex.yy.c if test $# -ne 1; then eval LASTARG="\${$#}" case $LASTARG in *.l) SRCFILE=`echo "$LASTARG" | sed 's/l$/c/'` if test -f "$SRCFILE"; then cp "$SRCFILE" lex.yy.c fi ;; esac fi if test ! -f lex.yy.c; then echo 'main() { return 0; }' >lex.yy.c fi ;; help2man*) echo 1>&2 "\ WARNING: \`$1' is $msg. You should only need it if you modified a dependency of a manual page. You may need the \`Help2man' package in order for those modifications to take effect. You can get \`Help2man' from any GNU archive site." file=`echo "$*" | sed -n "$sed_output"` test -z "$file" && file=`echo "$*" | sed -n "$sed_minuso"` if test -f "$file"; then touch $file else test -z "$file" || exec >$file echo ".ab help2man is required to generate this page" exit $? fi ;; makeinfo*) echo 1>&2 "\ WARNING: \`$1' is $msg. You should only need it if you modified a \`.texi' or \`.texinfo' file, or any other file indirectly affecting the aspect of the manual. 
The spurious call might also be the consequence of using a buggy \`make' (AIX, DU, IRIX). You might want to install the \`Texinfo' package or the \`GNU make' package. Grab either from any GNU archive site." # The file to touch is that specified with -o ... file=`echo "$*" | sed -n "$sed_output"` test -z "$file" && file=`echo "$*" | sed -n "$sed_minuso"` if test -z "$file"; then # ... or it is the one specified with @setfilename ... infile=`echo "$*" | sed 's/.* \([^ ]*\) *$/\1/'` file=`sed -n ' /^@setfilename/{ s/.* \([^ ]*\) *$/\1/ p q }' $infile` # ... or it is derived from the source name (dir/f.texi becomes f.info) test -z "$file" && file=`echo "$infile" | sed 's,.*/,,;s,.[^.]*$,,'`.info fi # If the file does not exist, the user really needs makeinfo; # let's fail without touching anything. test -f $file || exit 1 touch $file ;; tar*) shift # We have already tried tar in the generic part. # Look for gnutar/gtar before invocation to avoid ugly error # messages. if (gnutar --version > /dev/null 2>&1); then gnutar "$@" && exit 0 fi if (gtar --version > /dev/null 2>&1); then gtar "$@" && exit 0 fi firstarg="$1" if shift; then case $firstarg in *o*) firstarg=`echo "$firstarg" | sed s/o//` tar "$firstarg" "$@" && exit 0 ;; esac case $firstarg in *h*) firstarg=`echo "$firstarg" | sed s/h//` tar "$firstarg" "$@" && exit 0 ;; esac fi echo 1>&2 "\ WARNING: I can't seem to be able to run \`tar' with the given arguments. You may want to install GNU tar or Free paxutils, or check the command line arguments." exit 1 ;; *) echo 1>&2 "\ WARNING: \`$1' is needed, and is $msg. You might have modified some files without having the proper tools for further handling them. Check the \`README' file, it often tells you about the needed prerequisites for installing this package. You may also peek at any GNU archive site, in case some other package would contain this missing \`$1' program." 
exit 1 ;; esac exit 0 # Local variables: # eval: (add-hook 'write-file-hooks 'time-stamp) # time-stamp-start: "scriptversion=" # time-stamp-format: "%:y-%02m-%02d.%02H" # time-stamp-time-zone: "UTC" # time-stamp-end: "; # UTC" # End: sparsehash-2.0.2/COPYING0000664000175000017500000000270711721252346011643 00000000000000Copyright (c) 2005, Google Inc. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of Google Inc. nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. sparsehash-2.0.2/ChangeLog0000664000175000017500000003041111721550006012345 00000000000000Thu Feb 23 23:47:18 2012 Google Inc. 
* sparsehash: version 2.0.2 * BUGFIX: Fix backwards compatibility for include folders Wed Feb 01 02:57:48 2012 Google Inc. * sparsehash: version 2.0.1 * BUGFIX: Fix path to malloc_extension.h in time_hash_map.cc Tue Jan 31 11:33:04 2012 Google Inc. * sparsehash: version 2.0 * Renamed include directory from google/ to sparsehash/ (csilvers) * Changed the 'official' sparsehash email in setup.py/etc * Renamed google-sparsehash.sln to sparsehash.sln * Changed copyright text to reflect Google's relinquished ownership Tue Dec 20 21:04:04 2011 Google Inc. * sparsehash: version 1.12 release * Add support for serializing/unserializing dense_hash_map/set to disk * New simpler and more flexible serialization API * Be more consistent about clearing on unserialize() even if it fails * Quiet some compiler warnings about unused variables * Add a timing test for iterating (suggested by google code issue 77) * Add offset_to_pos, the opposite of pos_to_offset, to sparsetable * PORTING: Add some missing #includes, needed on some systems * Die at configure-time when g++ isn't installed * Successfully make rpm's even when dpkg is missing * Improve deleted key test in util/gtl/{dense,sparse}hashtable * Update automake to 1.10.1, and autoconf to 2.62 Thu Jun 23 21:12:58 2011 Google Inc. * sparsehash: version 1.11 release * Improve performance on pointer keys by ignoring always-0 low bits * Fix missing $(top_srcdir) in Makefile.am, which broke some compiles * BUGFIX: Fix a crashing typo-bug in swap() * PORTING: Remove support for old compilers that do not use 'std' * Add some new benchmarks to test for a place dense_hash_* does badly * Some cosmetic changes due to a switch to a new releasing tool Thu Jan 20 16:07:39 2011 Google Inc. * sparsehash: version 1.10 release * Follow ExtractKey return type, allowing it to return a reference * PORTING: fix MSVC 10 warnings (constifying result_type, placement-new) * Update from autoconf 2.61 to autoconf 2.65 Fri Sep 24 11:37:50 2010 Google Inc. 
* sparsehash: version 1.9 release * Add is_enum; make all enums PODs by default (romanp) * Make find_or_insert() usable directly (dawidk) * Use zero-memory trick for allocators to reduce space use (guilin) * Fix some compiler warnings (chandlerc, eraman) * BUGFIX: int -> size_type in one function we missed (csilvers) * Added sparsehash.pc, for pkg-config (csilvers) Thu Jul 29 15:01:29 2010 Google Inc. * sparsehash: version 1.8.1 release * Remove -Werror from Makefile: gcc 4.3 gives spurious warnings Thu Jul 29 09:53:26 2010 Google Inc. * sparsehash: version 1.8 release * More support for Allocator, including allocator ctor arg (csilvers) * Repack hashtable vars to reduce container size *more* (giao) * Speed up clear() (csilvers) * Change HT_{OCCUPANCY,SHRINK}_FLT from float to int (csilvers) * Revamp test suite for more complete code & timing coverage (csilvers) * BUGFIX: Enforce max_size for dense/sparse_hashtable (giao, csilvers) * BUGFIX: Raise exception instead of crashing on overflow (csilvers) * BUGFIX: Allow extraneous const in key type (csilvers) * BUGFIX: Allow same functor for both hasher and key_equals (giao) * PORTING: remove is_convertible, which gives AIX cc fits (csilvers) * PORTING: Renamed README.windows to README_windows.txt (csilvers) * Created non-empty NEWS file (csilvers) Wed Mar 31 12:32:03 2010 Google Inc. * sparsehash: version 1.7 release * Add support for Allocator (guilin) * Add libc_allocator_with_realloc as the new default allocator (guilin) * Repack {sparse,dense}hashtable vars to reduce container size (giao) * BUGFIX: operator== no longer requires same table ordering (csilvers) * BUGFIX: fix dense_hash_*(it,it) by requiring empty-key too (csilvers) * PORTING: fix language bugs that gcc allowed (csilvers, chandlerc) * Update from autoconf 2.61 to autoconf 2.64 Fri Jan 8 14:47:55 2010 Google Inc. 
* sparsehash: version 1.6 release * New accessor methods for deleted_key, empty_key (sjackman) * Use explicit hash functions in sparsehash tests (csilvers) * BUGFIX: Cast resize to fix SUNWspro bug (csilvers) * Check for sz overflow in min_size (csilvers) * Speed up clear() for dense and sparse hashtables (jeff) * Avoid shrinking in all cases when min-load is 0 (shaunj, csilvers) * Improve densehashtable code for the deleted key (gpike) * BUGFIX: Fix operator= when the 2 empty-keys differ (andreidam) * BUGFIX: Fix ht copying when empty-key isn't set (andreidam) * PORTING: Use TmpFile() instead of /tmp on MinGW (csilvers) * PORTING: Use filenames that work with Stratus VOS. Tue May 12 14:16:38 2009 Google Inc. * sparsehash: version 1.5.2 release * Fix compile error: not initializing set_key in all constructors Fri May 8 15:23:44 2009 Google Inc. * sparsehash: version 1.5.1 release * Fix broken equal_range() for all the hash-classes (csilvers) Wed May 6 11:28:49 2009 Google Inc. * sparsehash: version 1.5 release * Support the tr1 unordered_map (and unordered_set) API (csilvers) * Store only key for delkey; reduces need for 0-arg c-tor (csilvers) * Prefer unordered_map to hash_map for the timing test (csilvers) * PORTING: update the resource use for 64-bit machines (csilvers) * PORTING: fix MIN/MAX collisions by un-#including windows.h (csilvers) * Updated autoconf version to 2.61 and libtool version to 1.5.26 Wed Jan 28 17:11:31 2009 Google Inc. * sparsehash: version 1.4 release * Allow hashtables to be <32 buckets (csilvers) * Fix initial-sizing bug: was sizing tables too small (csilvers) * Add asserts that clients don't abuse deleted/empty key (csilvers) * Improve determination of 32/64 bit for C code (csilvers) * Small fix for doc files in rpm (csilvers) Thu Nov 6 15:06:09 2008 Google Inc. 
* sparsehash: version 1.3 release * Add an interface to change the parameters for resizing (myl) * Document another potentially good hash function (csilvers) Thu Sep 18 13:53:20 2008 Google Inc. * sparsehash: version 1.2 release * Augment documentation to better describe namespace issues (csilvers) * BUG FIX: replace hash<> with SPARSEHASH_HASH, for windows (csilvers) * Add timing test to unittest to test repeated add+delete (csilvers) * Do better picking a new size when resizing (csilvers) * Use ::google instead of google as a namespace (csilvers) * Improve threading test at config time (csilvers) Mon Feb 11 16:30:11 2008 Google Inc. * sparsehash: version 1.1 release * Fix brown-paper-bag bug in some constructors (rafferty) * Fix problem with variables shadowing member vars, add -Wshadow Thu Nov 29 11:44:38 2007 Google Inc. * sparsehash: version 1.0.2 release * Fix a final reference to hash<> to use SPARSEHASH_HASH<> instead. Wed Nov 14 08:47:48 2007 Google Inc. * sparsehash: version 1.0.1 release :-( * Remove an unnecessary (harmful) "#define hash" in windows' config.h Tue Nov 13 15:15:46 2007 Google Inc. * sparsehash: version 1.0 release! We are now out of beta. * Clean up Makefile awk script to be more readable (csilvers) * Namespace fixes: use fewer #defines, move typedefs into namespace Fri Oct 12 12:35:24 2007 Google Inc. * sparsehash: version 0.9.1 release * Fix Makefile awk script to work on more architectures (csilvers) * Add test to test code in more 'real life' situations (csilvers) Tue Oct 9 14:15:21 2007 Google Inc. * sparsehash: version 0.9 release * More type-hygiene improvements, especially for 64-bit (csilvers) * Some configure improvements to improve portability, utility (austern) * Small bugfix for operator== for dense_hash_map (jeff) Tue Jul 3 12:55:04 2007 Google Inc. * sparsehash: version 0.8 release * Minor type-hygiene improvements: size_t for int, etc. 
(csilvers) * Porting improvements: tests pass on OS X, FreeBSD, Solaris (csilvers) * Full windows port! VS solution provided for all unittests (csilvers) Mon Jun 11 11:33:41 2007 Google Inc. * sparsehash: version 0.7 release * Syntax fixes to better support gcc 4.3 and VC++ 7 (mec, csilvers) * Improved windows/VC++ support (see README.windows) (csilvers) * Config improvements: better tcmalloc support and config.h (csilvers) * More robust with missing hash_map + nix 'trampoline' .h's (csilvers) * Support for STLport's hash_map/hash_fun locations (csilvers) * Add .m4 files to distribution; now all source is there (csilvers) * Tiny modification of shrink-threshold to allow never-shrinking (amc) * Protect timing tests against aggressive optimizers (csilvers) * Extend time_hash_map to test bigger objects (csilvers) * Extend type-trait support to work with const objects (csilvers) * USER VISIBLE: speed up all code by replacing memmove with memcpy (csilvers) Tue Mar 20 17:29:34 2007 Google Inc. * sparsehash: version 0.6 release * Some improvement to type-traits (jyasskin) * Better timing results when google-perftools is installed (sanjay) * Updates and fixes to html documentation and README (csilvers) * A bit more careful about #includes (csilvers) * Fix for typo that broke compilation on some systems (csilvers) * USER VISIBLE: New clear_no_resize() method added to dense_hash_map (uszkoreit) Sat Oct 21 13:47:47 2006 Google Inc. * sparsehash: version 0.5 release * Support uint16_t (SunOS) in addition to u_int16_t (BSD) (csilvers) * Get rid of UNDERSTANDS_ITERATOR_TAGS; everyone understands (csilvers) * Test that empty-key and deleted-key differ (rbayardo) * Fix example docs: strcmp needs to test for NULL (csilvers) Sun Apr 23 22:42:35 2006 Google Inc. * sparsehash: version 0.4 release * Remove POD requirement for keys and values! (austern) * Add tr1-compatible type-traits system to speed up POD ops. (austern) * Fixed const-iterator bug where postfix ++ didn't compile. 
(csilvers) * Fixed iterator comparison bugs where <= was incorrect. (csilvers) * Clean up config.h to keep its #defines from conflicting. (csilvers) * Big documentation sweep and cleanup. (csilvers) * Update documentation to talk more about good hash fns. (csilvers) * Fixes to compile on MSVC (working around some MSVC bugs). (rennie) * Avoid resizing hashtable on operator[] lookups (austern) Thu Nov 3 20:12:31 2005 Google Inc. * sparsehash: version 0.3 release * Quiet compiler warnings on some compilers. (csilvers) * Some documentation fixes: example code for dense_hash_map. (csilvers) * Fix a bug where swap() wasn't swapping delete_key(). (csilvers) * set_deleted_key() and set_empty_key() now take a key only, allowing hash-map values to be forward-declared. (csilvers) * support for std::insert_iterator (and std::inserter). (csilvers) Mon May 2 07:04:46 2005 Google Inc. * sparsehash: version 0.2 release * Preliminary support for msvc++ compilation. (csilvers) * Documentation fixes -- some example code was incomplete! (csilvers) * Minimize size of config.h to avoid other-package conflicts (csilvers) * Contribute a C-based version of sparsehash that served as the inspiration for this code. One day, I hope to clean it up and support it, but for now it's just in experimental/, for playing around with. (csilvers) * Change default namespace from std to google. (csilvers) Fri Jan 14 16:53:32 2005 Google Inc. * sparsehash: initial release: The sparsehash package contains several hash-map implementations, similar in API to SGI's hash_map class, but with different performance characteristics. sparse_hash_map uses very little space overhead: 1-2 bits per entry. dense_hash_map is typically faster than the default SGI STL implementation. This package also includes hash-set analogues of these classes.