pax_global_header00006660000000000000000000000064131477403470014524gustar00rootroot0000000000000052 comment=1a5e259b259130b50607174fc9f9508dc1f2941c the_silver_searcher-2.1.0/000077500000000000000000000000001314774034700155445ustar00rootroot00000000000000the_silver_searcher-2.1.0/.clang-format000066400000000000000000000022321314774034700201160ustar00rootroot00000000000000AlignEscapedNewlinesLeft: false AlignTrailingComments: true AllowAllParametersOfDeclarationOnNextLine: true AllowShortBlocksOnASingleLine: false AllowShortCaseLabelsOnASingleLine: false AllowShortFunctionsOnASingleLine: None AllowShortIfStatementsOnASingleLine: false AllowShortLoopsOnASingleLine: false AlwaysBreakAfterDefinitionReturnType: false AlwaysBreakBeforeMultilineStrings: false BinPackArguments: true BinPackParameters: true BreakBeforeBinaryOperators: None BreakBeforeBraces: Attach BreakBeforeTernaryOperators: true ColumnLimit: 0 ContinuationIndentWidth: 4 Cpp11BracedListStyle: false DerivePointerAlignment: false DisableFormat: false ExperimentalAutoDetectBinPacking: false IndentCaseLabels: true IndentFunctionDeclarationAfterType: false IndentWidth: 4 IndentWrappedFunctionNames: false KeepEmptyLinesAtTheStartOfBlocks: false Language: Cpp MaxEmptyLinesToKeep: 2 PointerAlignment: Right SpaceAfterCStyleCast: false SpaceBeforeAssignmentOperators: true SpaceBeforeParens: ControlStatements SpaceInEmptyParentheses: false SpacesBeforeTrailingComments: 1 SpacesInCStyleCastParentheses: false SpacesInParentheses: false SpacesInSquareBrackets: false UseTab: Never the_silver_searcher-2.1.0/.gitignore000066400000000000000000000005471314774034700175420ustar00rootroot00000000000000*.dSYM *.gcda *.o *.plist .deps .dirstamp .DS_Store aclocal.m4 ag ag.exe autom4te.cache cachegrind.out.* callgrind.out.* clang_output_* compile config.guess config.log config.status config.sub configure depcomp gmon.out install-sh Makefile Makefile.in missing src/config.h* stamp-h1 tests/*.err tests/big/*.err tests/big/big_file.txt the_silver_searcher.spec the_silver_searcher-2.1.0/.travis.yml000066400000000000000000000013241314774034700176550ustar00rootroot00000000000000language: c sudo: false branches: only: - master compiler: - clang - gcc addons: apt: sources: - ubuntu-toolchain-r-test packages: - automake - liblzma-dev - libpcre3-dev - pkg-config - zlib1g-dev env: global: - LLVM_VERSION=3.8.0 - LLVM_PATH=$HOME/clang+llvm - CLANG_FORMAT=$LLVM_PATH/bin/clang-format before_install: - wget http://llvm.org/releases/$LLVM_VERSION/clang+llvm-$LLVM_VERSION-x86_64-linux-gnu-ubuntu-14.04.tar.xz -O $LLVM_PATH.tar.xz - mkdir $LLVM_PATH - tar xf $LLVM_PATH.tar.xz -C $LLVM_PATH --strip-components=1 - export PATH=$HOME/.local/bin:$PATH install: - pip install --user cram script: - ./build.sh && make test the_silver_searcher-2.1.0/CONTRIBUTING.md000066400000000000000000000027371314774034700200060ustar00rootroot00000000000000## Contributing I like when people send pull requests. It validates my existence. If you want to help out, check the [issue list](https://github.com/ggreer/the_silver_searcher/issues?sort=updated&state=open) or search the codebase for `TODO`. Don't worry if you lack experience writing C. If I think a pull request isn't ready to be merged, I'll give feedback in comments. Once everything looks good, I'll comment on your pull request with a cool animated gif and hit the merge button. ### Running the test suite If you contribute, you might want to run the test suite before and after writing some code, just to make sure you did not break anything. 
Adding tests along with your code is nice to have, because it makes regressions less likely to happen. Also, if you think you have found a bug, contributing a failing test case is a good way of making your point and adding value at the same time. The test suite uses [Cram](https://bitheap.org/cram/). You'll need to build ag first, and then you can run the suite from the root of the repository : make test ### Adding filetypes Ag can search files which belong to a certain class for example `ag --html test` searches all files with the extension defined in [lang.c](src/lang.c). If you want to add a new file 'class' to ag please modify [lang.c](src/lang.c) and [list_file_types.t](tests/list_file_types.t). `lang.c` adds the functionality and `list_file_types.t` adds the test case. Without adding a test case the test __will__ fail. the_silver_searcher-2.1.0/LICENSE000066400000000000000000000261361314774034700165610ustar00rootroot00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. 
For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. 
You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) 
The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. the_silver_searcher-2.1.0/Makefile.am000066400000000000000000000017431314774034700176050ustar00rootroot00000000000000ACLOCAL_AMFLAGS = ${ACLOCAL_FLAGS} bin_PROGRAMS = ag ag_SOURCES = src/ignore.c src/ignore.h src/log.c src/log.h src/options.c src/options.h src/print.c src/print_w32.c src/print.h src/scandir.c src/scandir.h src/search.c src/search.h src/lang.c src/lang.h src/util.c src/util.h src/decompress.c src/decompress.h src/uthash.h src/main.c src/zfile.c ag_LDADD = ${PCRE_LIBS} ${LZMA_LIBS} ${ZLIB_LIBS} $(PTHREAD_LIBS) dist_man_MANS = doc/ag.1 bashcompdir = $(pkgdatadir)/completions dist_bashcomp_DATA = ag.bashcomp.sh zshcompdir = $(datadir)/zsh/site-functions dist_zshcomp_DATA = _the_silver_searcher EXTRA_DIST = Makefile.w32 LICENSE NOTICE the_silver_searcher.spec README.md all: @$(MAKE) ag -r test: ag cram -v tests/*.t if HAS_CLANG_FORMAT CLANG_FORMAT=${CLANG_FORMAT} ./format.sh test else @echo "clang-format is not available. Skipped clang-format test." endif test_big: ag cram -v tests/big/*.t test_fail: ag cram -v tests/fail/*.t .PHONY : all clean test test_big test_fail the_silver_searcher-2.1.0/Makefile.w32000066400000000000000000000012341314774034700176160ustar00rootroot00000000000000SED=sed VERSION:=$(shell "$(SED)" -n "s/[^[]*\[\([0-9]\+\.[0-9]\+\.[0-9]\+\)\],/\1/p" configure.ac) CC=gcc RM=/bin/rm SRCS = \ src/decompress.c \ src/ignore.c \ src/lang.c \ src/log.c \ src/main.c \ src/options.c \ src/print.c \ src/scandir.c \ src/search.c \ src/util.c \ src/print_w32.c OBJS = $(subst .c,.o,$(SRCS)) CFLAGS = -O2 -Isrc/win32 -DPACKAGE_VERSION=\"$(VERSION)\" LIBS = -lz -lpthread -lpcre -llzma -lshlwapi TARGET = ag.exe all : $(TARGET) # depend on configure.ac to account for version changes $(TARGET) : $(OBJS) configure.ac $(CC) -o $@ $(OBJS) $(LIBS) .c.o : $(CC) -c $(CFLAGS) -Isrc $< -o $@ clean : $(RM) -f src/*.o $(TARGET) the_silver_searcher-2.1.0/NOTICE000066400000000000000000000000641314774034700164500ustar00rootroot00000000000000The Silver Searcher Copyright 2011-2016 Geoff Greer the_silver_searcher-2.1.0/README.md000066400000000000000000000153561314774034700170350ustar00rootroot00000000000000# The Silver Searcher A code searching tool similar to `ack`, with a focus on speed. [![Build Status](https://travis-ci.org/ggreer/the_silver_searcher.svg?branch=master)](https://travis-ci.org/ggreer/the_silver_searcher) [![Floobits Status](https://floobits.com/ggreer/ag.svg)](https://floobits.com/ggreer/ag/redirect) [![#ag on Freenode](https://img.shields.io/badge/Freenode-%23ag-brightgreen.svg)](https://webchat.freenode.net/?channels=ag) Do you know C? Want to improve ag? [I invite you to pair with me](http://geoff.greer.fm/2014/10/13/help-me-get-to-ag-10/). 
## What's so great about Ag? * It is an order of magnitude faster than `ack`. * It ignores file patterns from your `.gitignore` and `.hgignore`. * If there are files in your source repo you don't want to search, just add their patterns to a `.ignore` file. (\*cough\* `*.min.js` \*cough\*) * The command name is 33% shorter than `ack`, and all keys are on the home row! Ag is quite stable now. Most changes are new features, minor bug fixes, or performance improvements. It's much faster than Ack in my benchmarks: ack test_blah ~/code/ 104.66s user 4.82s system 99% cpu 1:50.03 total ag test_blah ~/code/ 4.67s user 4.58s system 286% cpu 3.227 total Ack and Ag found the same results, but Ag was 34x faster (3.2 seconds vs 110 seconds). My `~/code` directory is about 8GB. Thanks to git/hg/ignore, Ag only searched 700MB of that. There are also [graphs of performance across releases](http://geoff.greer.fm/ag/speed/). ## How is it so fast? * Ag uses [Pthreads](https://en.wikipedia.org/wiki/POSIX_Threads) to take advantage of multiple CPU cores and search files in parallel. * Files are `mmap()`ed instead of read into a buffer. * Literal string searching uses [Boyer-Moore strstr](https://en.wikipedia.org/wiki/Boyer%E2%80%93Moore_string_search_algorithm). * Regex searching uses [PCRE's JIT compiler](http://sljit.sourceforge.net/pcre.html) (if Ag is built with PCRE >=8.21). * Ag calls `pcre_study()` before executing the same regex on every file. * Instead of calling `fnmatch()` on every pattern in your ignore files, non-regex patterns are loaded into arrays and binary searched. I've written several blog posts showing how I've improved performance. These include how I [added pthreads](http://geoff.greer.fm/2012/09/07/the-silver-searcher-adding-pthreads/), [wrote my own `scandir()`](http://geoff.greer.fm/2012/09/03/profiling-ag-writing-my-own-scandir/), [benchmarked every revision to find performance regressions](http://geoff.greer.fm/2012/08/25/the-silver-searcher-benchmarking-revisions/), and profiled with [gprof](http://geoff.greer.fm/2012/02/08/profiling-with-gprof/) and [Valgrind](http://geoff.greer.fm/2012/01/23/making-programs-faster-profiling/). ## Installing ### macOS brew install the_silver_searcher or port install the_silver_searcher ### Linux * Ubuntu >= 13.10 (Saucy) or Debian >= 8 (Jessie) apt-get install silversearcher-ag * Fedora 21 and lower yum install the_silver_searcher * Fedora 22+ dnf install the_silver_searcher * RHEL7+ yum install epel-release.noarch the_silver_searcher * Gentoo emerge -a sys-apps/the_silver_searcher * Arch pacman -S the_silver_searcher * Slackware sbopkg -i the_silver_searcher * openSUSE: zypper install the_silver_searcher * CentOS: yum install the_silver_searcher * SUSE Linux Enterprise: Follow [these simple instructions](https://software.opensuse.org/download.html?project=utilities&package=the_silver_searcher). ### BSD * FreeBSD pkg install the_silver_searcher * OpenBSD/NetBSD pkg_add the_silver_searcher ### Windows * Win32/64 Unofficial daily builds are [available](https://github.com/k-takata/the_silver_searcher-win32). * MSYS2 pacman -S mingw-w64-{i686,x86_64}-ag * Cygwin Run the relevant [`setup-*.exe`](https://cygwin.com/install.html), and select "the\_silver\_searcher" in the "Utils" category. ## Building from source ### Building master 1. 
Install dependencies (Automake, pkg-config, PCRE, LZMA): * macOS: brew install automake pkg-config pcre xz or port install automake pkgconfig pcre xz * Ubuntu/Debian: apt-get install -y automake pkg-config libpcre3-dev zlib1g-dev liblzma-dev * Fedora: yum -y install pkgconfig automake gcc zlib-devel pcre-devel xz-devel * CentOS: yum -y groupinstall "Development Tools" yum -y install pcre-devel xz-devel * openSUSE: zypper source-install --build-deps-only the_silver_searcher * Windows: It's complicated. See [this wiki page](https://github.com/ggreer/the_silver_searcher/wiki/Windows). 2. Run the build script (which just runs aclocal, automake, etc): ./build.sh On Windows (inside an msys/MinGW shell): make -f Makefile.w32 3. Make install: sudo make install ### Building a release tarball GPG-signed releases are available [here](http://geoff.greer.fm/ag). Building release tarballs requires the same dependencies, except for automake and pkg-config. Once you've installed the dependencies, just run: ./configure make make install You may need to use `sudo` or run as root for the make install. ## Editor Integration ### Vim You can use Ag with [ack.vim][] by adding the following line to your `.vimrc`: let g:ackprg = 'ag --nogroup --nocolor --column' or: let g:ackprg = 'ag --vimgrep' Which has the same effect but will report every match on the line. ### Emacs You can use [ag.el][] as an Emacs front-end to Ag. See also: [helm-ag]. [ag.el]: https://github.com/Wilfred/ag.el [helm-ag]: https://github.com/syohex/emacs-helm-ag ### TextMate TextMate users can use Ag with [my fork](https://github.com/ggreer/AckMate) of the popular AckMate plugin, which lets you use both Ack and Ag for searching. If you already have AckMate you just want to replace Ack with Ag, move or delete `"~/Library/Application Support/TextMate/PlugIns/AckMate.tmplugin/Contents/Resources/ackmate_ack"` and run `ln -s /usr/local/bin/ag "~/Library/Application Support/TextMate/PlugIns/AckMate.tmplugin/Contents/Resources/ackmate_ack"` ## Other stuff you might like * [Ack](https://github.com/petdance/ack2) - Better than grep. Without Ack, Ag would not exist. * [ack.vim](https://github.com/mileszs/ack.vim) * [Exuberant Ctags](http://ctags.sourceforge.net/) - Faster than Ag, but it builds an index beforehand. Good for *really* big codebases. * [Git-grep](http://git-scm.com/docs/git-grep) - As fast as Ag but only works on git repos. * [ripgrep](https://github.com/BurntSushi/ripgrep) * [Sack](https://github.com/sampson-chen/sack) - A utility that wraps Ack and Ag. It removes a lot of repetition from searching and opening matching files. the_silver_searcher-2.1.0/_the_silver_searcher000066400000000000000000000115641314774034700216550ustar00rootroot00000000000000#compdef ag # Completion function for zsh local ret=1 local -a args expl # Intentionally avoided many possible mutual exlusions because it is # likely that earlier options come from an alias. In line with this # the following conditionally adds options that assert defaults. 
[[ -n $words[(r)(-[is]|--ignore-case|--case-sensitive)] ]] && args+=( '(-S --smart-case -s -s --ignore-case --case-sensitive)'{-S,--smart-case}'[insensitive match unless pattern includes uppercase]' ) [[ -n $words[(r)--nobreak] ]] && args+=( "(--nobreak)--break[print newlines between matches in different files]" ) [[ -n $words[(r)--nogroup] ]] && args+=( "(--nogroup)--group[don't repeat filename for each match line]" ) _tags normal-options file-types while _tags; do _requested normal-options && _arguments -S -s $args \ '--ackmate[print results in AckMate-parseable format]' \ '(--after -A)'{--after=-,-A+}'[specify lines of trailing context]::lines [2]' \ '(--before -B)'{--before=-,-B+}'[specify lines of leading context]::lines [2]' \ "--nobreak[don't print newlines between matches in different files]" \ '(--count -c)'{--count,-c}'[only print a count of matching lines]' \ '--color[enable color highlighting of output]' \ '(--color-line-number --color-match --color-path)--nocolor[disable color highlighting of output]' \ '--color-line-number=[specify color for line numbers]:color [1;33]' \ '--color-match=[specify color for result match numbers]:color [30;43]' \ '--color-path=[specify color for path names]:color [1;32]' \ '--column[print column numbers in results]' \ '(--context -C)'{--context=-,-C+}'[specify lines of context]::lines' \ '(--debug -D)'{--debug,-D}'[output debug information]' \ '--depth=[specify directory levels to descend when searching]:levels [25]' \ '(--noheading)--nofilename[suppress printing of filenames]' \ '(-f --follow)'{-f,--follow}'[follow symlinks]' \ '(-F --fixed-strings --literal -Q)'{--fixed-strings,-F,--literal,-Q}'[use literal strings]' \ '--nogroup[repeat filename for each match line]' \ '(1 -G --file-search-regex)-g+[print filenames matching a pattern]:regex' \ '(-G --file-search-regex)'{-G+,--file-search-regex=}'[limit search to filenames matching pattern]:regex' \ '(-H --heading --noheading)'{-H,--heading}'[print filename with each match]' \ '(-H --heading --noheading --nofilename)--noheading[suppress printing of filenames]' \ '--hidden[search hidden files (obeying .*ignore files)]' \ {--ignore=,--ignore-dir=}'[ignore files/directories matching pattern]:regex' \ '(-i --ignore-case)'{-i,--ignore-case}'[match case-insensitively]' \ '(-l --files-with-matches)'{-l,--files-with-matches}"[output matching files' names only]" \ '(-L --files-without-matches)'{-L,--files-without-matches}"[output non-matching files' names only]" \ '(--max-count -m)'{--max-count=,-m+}'[stop after specified no of matches in each file]:max number of matches' \ '--numbers[prefix output with line numbers, even for streams]' \ '--nonumbers[suppress printing of line numbers]' \ '(--only-matching -o)'{--only-matching,-o}'[show only matching part of line]' \ '(-p --path-to-ignore)'{-p+,--path-to-ignore=}'[use specified .ignore file]:file:_files' \ '--print-long-lines[print matches on very long lines]' \ "--passthrough[when searching a stream, print all lines even if they don't match]" \ '(-s --case-sensitive)'{-s,--case-sensitive}'[match case]' \ '--silent[suppress all log messages, including errors]' \ '(--stats-only)--stats[print stats (files scanned, time taken, etc.)]' \ '(--stats)--stats-only[print stats and nothing else]' \ '(-U --skip-vcs-ignores)'{-U,--skip-vcs-ignores}'[ignore VCS files (stil obey .ignore)]' \ '(-v --invert-match)'{-v,--invert-match}'[select non-matching lines]' \ '--vimgrep[output results like vim :vimgrep /pattern/g would]' \ '(-w 
--word-regexp)'{-w,--word-regexp}'[force pattern to match only whole words]' \ '(-z --search-zip)'{-z,--search-zip}'[search contents of compressed files]' \ '(-0 --null)'{-0,--null}'[separate filenames with null]' \ ': :_guard "^-*" pattern' \ '*:file:_files' \ '(- :)--list-file-types[list supported file types]' \ '(- :)'{-h,--help}'[display help information]' \ '(- :)'{-V,--version}'[display version information]' \ - '(ignores)' \ '(-a --all-types)'{-a,--all-types}'[search all files]' \ '--search-binary[search binary files for matches]' \ {-t,--all-text}'[search all text files (not including hidden files)]' \ {-u,--unrestricted}'[search all files]' && ret=0 _requested file-types && { ! zstyle -T ":completion:${curcontext}:options" prefix-needed || [[ -prefix - ]] } && _all_labels file-types expl 'file type' \ compadd - ${(M)$(_call_program file-types $words[1] --list-file-types):#--*} && ret=0 (( ret )) || break done return ret the_silver_searcher-2.1.0/ag.bashcomp.sh000066400000000000000000000046531314774034700202720ustar00rootroot00000000000000_ag() { local lngopt shtopt split=false local cur prev COMPREPLY=() cur=$(_get_cword "=") prev="${COMP_WORDS[COMP_CWORD-1]}" _expand || return 0 lngopt=' --ackmate --ackmate-dir-filter --affinity --after --all-text --all-types --before --break --case-sensitive --color --color-line-number --color-match --color-path --color-win-ansi --column --context --count --debug --depth --file-search-regex --filename --files-with-matches --files-without-matches --fixed-strings --follow --group --heading --help --hidden --ignore --ignore-case --ignore-dir --invert-match --line-numbers --list-file-types --literal --match --max-count --no-numbers --no-recurse --noaffinity --nobreak --nocolor --nofilename --nofollow --nogroup --noheading --nonumbers --nopager --norecurse --null --numbers --one-device --only-matching --pager --parallel --passthrough --passthru --path-to-agignore --print-long-lines --print0 --recurse --search-binary --search-files --search-zip --silent --skip-vcs-ignores --smart-case --stats --unrestricted --version --vimgrep --word-regexp --workers ' shtopt=' -a -A -B -C -D -f -F -g -G -h -i -l -L -m -n -p -Q -r -R -s -S -t -u -U -v -V -w -z ' types=$(ag --list-file-types |grep -- '--') # these options require an argument if [[ "${prev}" == -[ABCGgm] ]] ; then return 0 fi _split_longopt && split=true case "${prev}" in --ignore-dir) # directory completion _filedir -d return 0;; --path-to-agignore) # file completion _filedir return 0;; --pager) # command completion COMPREPLY=( $(compgen -c -- "${cur}") ) return 0;; --ackmate-dir-filter|--after|--before|--color-*|--context|--depth\ |--file-search-regex|--ignore|--max-count|--workers) return 0;; esac $split && return 0 case "${cur}" in -*) COMPREPLY=( $(compgen -W \ "${lngopt} ${shtopt} ${types}" -- "${cur}") ) return 0;; *) _filedir return 0;; esac } && # shellcheck disable=SC2086 # shellcheck disable=SC2154,SC2086 complete -F _ag ${nospace} ag the_silver_searcher-2.1.0/autogen.sh000077500000000000000000000007331314774034700175500ustar00rootroot00000000000000#!/bin/sh set -e cd "$(dirname "$0")" AC_SEARCH_OPTS="" # For those of us with pkg-config and other tools in /usr/local PATH=$PATH:/usr/local/bin # This is to make life easier for people who installed pkg-config in /usr/local # but have autoconf/make/etc in /usr/. 
AKA most mac users if [ -d "/usr/local/share/aclocal" ] then AC_SEARCH_OPTS="-I /usr/local/share/aclocal" fi # shellcheck disable=2086 aclocal $AC_SEARCH_OPTS autoconf autoheader automake --add-missing the_silver_searcher-2.1.0/build.sh000077500000000000000000000001171314774034700172010ustar00rootroot00000000000000#!/bin/sh set -e cd "$(dirname "$0")" ./autogen.sh ./configure "$@" make -j4 the_silver_searcher-2.1.0/configure.ac000066400000000000000000000052641314774034700200410ustar00rootroot00000000000000AC_INIT( [the_silver_searcher], [2.1.0], [https://github.com/ggreer/the_silver_searcher/issues], [the_silver_searcher], [https://github.com/ggreer/the_silver_searcher]) AM_INIT_AUTOMAKE([no-define foreign subdir-objects]) AC_PROG_CC AM_PROG_CC_C_O AC_PREREQ([2.59]) AC_PROG_GREP m4_ifdef( [AM_SILENT_RULES], [AM_SILENT_RULES([yes])]) PKG_CHECK_MODULES([PCRE], [libpcre]) m4_include([m4/ax_pthread.m4]) AX_PTHREAD( [AC_CHECK_HEADERS([pthread.h])], [AC_MSG_WARN([No pthread support. Ag will be slower due to running single-threaded.])] ) # Run CFLAGS="-pg" ./configure if you want debug symbols if ! echo "$CFLAGS" | "$GREP" '\(^\|[[[:space:]]]\)-O' > /dev/null; then CFLAGS="$CFLAGS -O2" fi CFLAGS="$CFLAGS $PTHREAD_CFLAGS $PCRE_CFLAGS -Wall -Wextra -Wformat=2 -Wno-format-nonliteral -Wshadow" CFLAGS="$CFLAGS -Wpointer-arith -Wcast-qual -Wmissing-prototypes -Wno-missing-braces -std=gnu89 -D_GNU_SOURCE" LDFLAGS="$LDFLAGS" case $host in *mingw*) AC_CHECK_LIB(shlwapi, main,, AC_MSG_ERROR(libshlwapi missing)) esac LIBS="$PTHREAD_LIBS $LIBS" AC_ARG_ENABLE([zlib], AS_HELP_STRING([--disable-zlib], [Disable zlib compressed search support])) AS_IF([test "x$enable_zlib" != "xno"], [ AC_CHECK_HEADERS([zlib.h]) AC_SEARCH_LIBS([inflate], [zlib, z]) ]) AC_ARG_ENABLE([lzma], AS_HELP_STRING([--disable-lzma], [Disable lzma compressed search support])) AS_IF([test "x$enable_lzma" != "xno"], [ AC_CHECK_HEADERS([lzma.h]) PKG_CHECK_MODULES([LZMA], [liblzma]) ]) AC_CHECK_DECL([PCRE_CONFIG_JIT], [AC_DEFINE([USE_PCRE_JIT], [], [Use PCRE JIT])], [], [#include ]) AC_CHECK_DECL([CPU_ZERO, CPU_SET], [AC_DEFINE([USE_CPU_SET], [], [Use CPU_SET macros])] , [], [#include ]) AC_CHECK_HEADERS([sys/cpuset.h err.h]) AC_CHECK_MEMBER([struct dirent.d_type], [AC_DEFINE([HAVE_DIRENT_DTYPE], [], [Have dirent struct member d_type])], [], [[#include ]]) AC_CHECK_MEMBER([struct dirent.d_namlen], [AC_DEFINE([HAVE_DIRENT_DNAMLEN], [], [Have dirent struct member d_namlen])], [], [[#include ]]) AC_CHECK_FUNCS(fgetln fopencookie getline realpath strlcpy strndup vasprintf madvise posix_fadvise pthread_setaffinity_np pledge) AC_CONFIG_FILES([Makefile the_silver_searcher.spec]) AC_CONFIG_HEADERS([src/config.h]) AC_CHECK_PROGS( [CLANG_FORMAT], [clang-format-3.8 clang-format-3.7 clang-format-3.6 clang-format], [no] ) AM_CONDITIONAL([HAS_CLANG_FORMAT], [test x$CLANG_FORMAT != xno]) AM_COND_IF( [HAS_CLANG_FORMAT], [AC_MSG_NOTICE([clang-format found. 'make test' will detect improperly-formatted files.])], [AC_MSG_WARN([clang-format not found. 'make test' will not detect improperly-formatted files.])] ) AC_OUTPUT the_silver_searcher-2.1.0/doc/000077500000000000000000000000001314774034700163115ustar00rootroot00000000000000the_silver_searcher-2.1.0/doc/ag.1000066400000000000000000000203271314774034700167660ustar00rootroot00000000000000.\" generated with Ronn/v0.7.3 .\" http://github.com/rtomayko/ronn/tree/0.7.3 . .TH "AG" "1" "December 2016" "" "" . .SH "NAME" \fBag\fR \- The Silver Searcher\. Like ack, but faster\. . 
.SH "SYNOPSIS" \fBag\fR [\fIoptions\fR] \fIpattern\fR [\fIpath \.\.\.\fR] . .SH "DESCRIPTION" Recursively search for PATTERN in PATH\. Like grep or ack, but faster\. . .SH "OPTIONS" . .TP \fB\-\-ackmate\fR Output results in a format parseable by AckMate \fIhttps://github\.com/protocool/AckMate\fR\. . .TP \fB\-\-[no]affinity\fR Set thread affinity (if platform supports it)\. Default is true\. . .TP \fB\-a \-\-all\-types\fR Search all files\. This doesn\'t include hidden files, and doesn\'t respect any ignore files\. . .TP \fB\-A \-\-after [LINES]\fR Print lines after match\. If not provided, LINES defaults to 2\. . .TP \fB\-B \-\-before [LINES]\fR Print lines before match\. If not provided, LINES defaults to 2\. . .TP \fB\-\-[no]break\fR Print a newline between matches in different files\. Enabled by default\. . .TP \fB\-c \-\-count\fR Only print the number of matches in each file\. Note: This is the number of matches, \fBnot\fR the number of matching lines\. Pipe output to \fBwc \-l\fR if you want the number of matching lines\. . .TP \fB\-\-[no]color\fR Print color codes in results\. Enabled by default\. . .TP \fB\-\-color\-line\-number\fR Color codes for line numbers\. Default is 1;33\. . .TP \fB\-\-color\-match\fR Color codes for result match numbers\. Default is 30;43\. . .TP \fB\-\-color\-path\fR Color codes for path names\. Default is 1;32\. . .TP \fB\-\-column\fR Print column numbers in results\. . .TP \fB\-C \-\-context [LINES]\fR Print lines before and after matches\. Default is 2\. . .TP \fB\-D \-\-debug\fR Output ridiculous amounts of debugging info\. Not useful unless you\'re actually debugging\. . .TP \fB\-\-depth NUM\fR Search up to NUM directories deep, \-1 for unlimited\. Default is 25\. . .TP \fB\-\-[no]filename\fR Print file names\. Enabled by default, except when searching a single file\. . .TP \fB\-f \-\-[no]follow\fR Follow symlinks\. Default is false\. . .TP \fB\-F \-\-fixed\-strings\fR Alias for \-\-literal for compatibility with grep\. . .TP \fB\-\-[no]group\fR The default, \fB\-\-group\fR, lumps multiple matches in the same file together, and presents them under a single occurrence of the filename\. \fB\-\-nogroup\fR refrains from this, and instead places the filename at the start of each match line\. . .TP \fB\-g PATTERN\fR Print filenames matching PATTERN\. . .TP \fB\-G \-\-file\-search\-regex PATTERN\fR Only search files whose names match PATTERN\. . .TP \fB\-H \-\-[no]heading\fR Print filenames above matching contents\. . .TP \fB\-\-hidden\fR Search hidden files\. This option obeys ignored files\. . .TP \fB\-\-ignore PATTERN\fR Ignore files/directories whose names match this pattern\. Literal file and directory names are also allowed\. . .TP \fB\-\-ignore\-dir NAME\fR Alias for \-\-ignore for compatibility with ack\. . .TP \fB\-i \-\-ignore\-case\fR Match case\-insensitively\. . .TP \fB\-l \-\-files\-with\-matches\fR Only print the names of files containing matches, not the matching lines\. An empty query will print all files that would be searched\. . .TP \fB\-L \-\-files\-without\-matches\fR Only print the names of files that don\'t contain matches\. . .TP \fB\-\-list\-file\-types\fR See \fBFILE TYPES\fR below\. . .TP \fB\-m \-\-max\-count NUM\fR Skip the rest of a file after NUM matches\. Default is 0, which never skips\. . .TP \fB\-\-[no]mmap\fR Toggle use of memory\-mapped I/O\. Defaults to true on platforms where \fBmmap()\fR is faster than \fBread()\fR\. (All but macOS\.) . .TP \fB\-\-[no]multiline\fR Match regexes across newlines\. Enabled by default\. . 
.TP \fB\-n \-\-norecurse\fR Don\'t recurse into directories\. . .TP \fB\-\-[no]numbers\fR Print line numbers\. Default is to omit line numbers when searching streams\. . .TP \fB\-o \-\-only\-matching\fR Print only the matching part of the lines\. . .TP \fB\-\-one\-device\fR When recursing directories, don\'t scan dirs that reside on other storage devices\. This lets you avoid scanning slow network mounts\. This feature is not supported on all platforms\. . .TP \fB\-p \-\-path\-to\-ignore STRING\fR Provide a path to a specific \.ignore file\. . .TP \fB\-\-pager COMMAND\fR Use a pager such as \fBless\fR\. Use \fB\-\-nopager\fR to override\. This option is also ignored if output is piped to another program\. . .TP \fB\-\-parallel\fR Parse the input stream as a search term, not data to search\. This is meant to be used with tools such as GNU parallel\. For example: \fBecho "foo\enbar\enbaz" | parallel "ag {} \."\fR will run 3 instances of ag, searching the current directory for "foo", "bar", and "baz"\. . .TP \fB\-\-print\-long\-lines\fR Print matches on very long lines (> 2k characters by default)\. . .TP \fB\-\-passthrough \-\-passthru\fR When searching a stream, print all lines even if they don\'t match\. . .TP \fB\-Q \-\-literal\fR Do not parse PATTERN as a regular expression\. Try to match it literally\. . .TP \fB\-r \-\-recurse\fR Recurse into directories when searching\. Default is true\. . .TP \fB\-s \-\-case\-sensitive\fR Match case\-sensitively\. . .TP \fB\-S \-\-smart\-case\fR Match case\-sensitively if there are any uppercase letters in PATTERN, case\-insensitively otherwise\. Enabled by default\. . .TP \fB\-\-search\-binary\fR Search binary files for matches\. . .TP \fB\-\-silent\fR Suppress all log messages, including errors\. . .TP \fB\-\-stats\fR Print stats (files scanned, time taken, etc)\. . .TP \fB\-\-stats\-only\fR Print stats (files scanned, time taken, etc) and nothing else\. . .TP \fB\-t \-\-all\-text\fR Search all text files\. This doesn\'t include hidden files\. . .TP \fB\-u \-\-unrestricted\fR Search \fIall\fR files\. This ignores \.ignore, \.gitignore, etc\. It searches binary and hidden files as well\. . .TP \fB\-U \-\-skip\-vcs\-ignores\fR Ignore VCS ignore files (\.gitignore, \.hgignore), but still use \.ignore\. . .TP \fB\-v \-\-invert\-match\fR Match every line \fInot\fR containing the specified pattern\. . .TP \fB\-V \-\-version\fR Print version info\. . .TP \fB\-\-vimgrep\fR Output results in the same form as Vim\'s \fB:vimgrep /pattern/g\fR . .IP Here is a ~/\.vimrc configuration example: . .IP \fBset grepprg=ag\e \-\-vimgrep\e $*\fR \fBset grepformat=%f:%l:%c:%m\fR . .IP Then use \fB:grep\fR to grep for something\. Then use \fB:copen\fR, \fB:cn\fR, \fB:cp\fR, etc\. to navigate through the matches\. . .TP \fB\-w \-\-word\-regexp\fR Only match whole words\. . .TP \fB\-\-workers NUM\fR Use NUM worker threads\. Default is the number of CPU cores, with a max of 8\. . .TP \fB\-z \-\-search\-zip\fR Search contents of compressed files\. Currently, gz and xz are supported\. This option requires that ag is built with lzma and zlib\. . .TP \fB\-0 \-\-null \-\-print0\fR Separate the filenames with \fB\e0\fR, rather than \fB\en\fR: this allows \fBxargs \-0 \fR to correctly process filenames containing spaces or newlines\. . .SH "FILE TYPES" It is possible to restrict the types of files searched\. For example, passing \fB\-\-html\fR will search only files with the extensions \fBhtm\fR, \fBhtml\fR, \fBshtml\fR or \fBxhtml\fR\. 
For a list of supported types, run \fBag \-\-list\-file\-types\fR\. . .SH "IGNORING FILES" By default, ag will ignore files whose names match patterns in \.gitignore, \.hgignore, or \.ignore\. These files can be anywhere in the directories being searched\. Binary files are ignored by default as well\. Finally, ag looks in $HOME/\.agignore for ignore patterns\. . .P If you want to ignore \.gitignore and \.hgignore, but still take \.ignore into account, use \fB\-U\fR\. . .P Use the \fB\-t\fR option to search all text files; \fB\-a\fR to search all files; and \fB\-u\fR to search all, including hidden files\. . .SH "EXAMPLES" \fBag printf\fR: Find matches for "printf" in the current directory\. . .P \fBag foo /bar/\fR: Find matches for "foo" in path /bar/\. . .P \fBag \-\- \-\-foo\fR: Find matches for "\-\-foo" in the current directory\. (As with most UNIX command line utilities, "\-\-" is used to signify that the remaining arguments should not be treated as options\.) . .SH "ABOUT" ag was originally created by Geoff Greer\. More information (and the latest release) can be found at http://geoff\.greer\.fm/ag . .SH "SEE ALSO" grep(1) the_silver_searcher-2.1.0/doc/ag.1.md000066400000000000000000000172441314774034700173710ustar00rootroot00000000000000ag(1) -- The Silver Searcher. Like ack, but faster. ============================================= ## SYNOPSIS `ag` [_options_] _pattern_ [_path ..._] ## DESCRIPTION Recursively search for PATTERN in PATH. Like grep or ack, but faster. ## OPTIONS * `--ackmate`: Output results in a format parseable by [AckMate](https://github.com/protocool/AckMate). * `--[no]affinity`: Set thread affinity (if platform supports it). Default is true. * `-a --all-types`: Search all files. This doesn't include hidden files, and doesn't respect any ignore files. * `-A --after [LINES]`: Print lines after match. If not provided, LINES defaults to 2. * `-B --before [LINES]`: Print lines before match. If not provided, LINES defaults to 2. * `--[no]break`: Print a newline between matches in different files. Enabled by default. * `-c --count`: Only print the number of matches in each file. Note: This is the number of matches, **not** the number of matching lines. Pipe output to `wc -l` if you want the number of matching lines. * `--[no]color`: Print color codes in results. Enabled by default. * `--color-line-number`: Color codes for line numbers. Default is 1;33. * `--color-match`: Color codes for result match numbers. Default is 30;43. * `--color-path`: Color codes for path names. Default is 1;32. * `--column`: Print column numbers in results. * `-C --context [LINES]`: Print lines before and after matches. Default is 2. * `-D --debug`: Output ridiculous amounts of debugging info. Not useful unless you're actually debugging. * `--depth NUM`: Search up to NUM directories deep, -1 for unlimited. Default is 25. * `--[no]filename`: Print file names. Enabled by default, except when searching a single file. * `-f --[no]follow`: Follow symlinks. Default is false. * `-F --fixed-strings`: Alias for --literal for compatibility with grep. * `--[no]group`: The default, `--group`, lumps multiple matches in the same file together, and presents them under a single occurrence of the filename. `--nogroup` refrains from this, and instead places the filename at the start of each match line. * `-g PATTERN`: Print filenames matching PATTERN. * `-G --file-search-regex PATTERN`: Only search files whose names match PATTERN. * `-H --[no]heading`: Print filenames above matching contents. 
* `--hidden`: Search hidden files. This option obeys ignored files. * `--ignore PATTERN`: Ignore files/directories whose names match this pattern. Literal file and directory names are also allowed. * `--ignore-dir NAME`: Alias for --ignore for compatibility with ack. * `-i --ignore-case`: Match case-insensitively. * `-l --files-with-matches`: Only print the names of files containing matches, not the matching lines. An empty query will print all files that would be searched. * `-L --files-without-matches`: Only print the names of files that don't contain matches. * `--list-file-types`: See `FILE TYPES` below. * `-m --max-count NUM`: Skip the rest of a file after NUM matches. Default is 0, which never skips. * `--[no]mmap`: Toggle use of memory-mapped I/O. Defaults to true on platforms where `mmap()` is faster than `read()`. (All but macOS.) * `--[no]multiline`: Match regexes across newlines. Enabled by default. * `-n --norecurse`: Don't recurse into directories. * `--[no]numbers`: Print line numbers. Default is to omit line numbers when searching streams. * `-o --only-matching`: Print only the matching part of the lines. * `--one-device`: When recursing directories, don't scan dirs that reside on other storage devices. This lets you avoid scanning slow network mounts. This feature is not supported on all platforms. * `-p --path-to-ignore STRING`: Provide a path to a specific .ignore file. * `--pager COMMAND`: Use a pager such as `less`. Use `--nopager` to override. This option is also ignored if output is piped to another program. * `--parallel`: Parse the input stream as a search term, not data to search. This is meant to be used with tools such as GNU parallel. For example: `echo "foo\nbar\nbaz" | parallel "ag {} ."` will run 3 instances of ag, searching the current directory for "foo", "bar", and "baz". * `--print-long-lines`: Print matches on very long lines (> 2k characters by default). * `--passthrough --passthru`: When searching a stream, print all lines even if they don't match. * `-Q --literal`: Do not parse PATTERN as a regular expression. Try to match it literally. * `-r --recurse`: Recurse into directories when searching. Default is true. * `-s --case-sensitive`: Match case-sensitively. * `-S --smart-case`: Match case-sensitively if there are any uppercase letters in PATTERN, case-insensitively otherwise. Enabled by default. * `--search-binary`: Search binary files for matches. * `--silent`: Suppress all log messages, including errors. * `--stats`: Print stats (files scanned, time taken, etc). * `--stats-only`: Print stats (files scanned, time taken, etc) and nothing else. * `-t --all-text`: Search all text files. This doesn't include hidden files. * `-u --unrestricted`: Search *all* files. This ignores .ignore, .gitignore, etc. It searches binary and hidden files as well. * `-U --skip-vcs-ignores`: Ignore VCS ignore files (.gitignore, .hgignore), but still use .ignore. * `-v --invert-match`: Match every line *not* containing the specified pattern. * `-V --version`: Print version info. * `--vimgrep`: Output results in the same form as Vim's `:vimgrep /pattern/g` Here is a ~/.vimrc configuration example: `set grepprg=ag\ --vimgrep\ $*` `set grepformat=%f:%l:%c:%m` Then use `:grep` to grep for something. Then use `:copen`, `:cn`, `:cp`, etc. to navigate through the matches. * `-w --word-regexp`: Only match whole words. * `--workers NUM`: Use NUM worker threads. Default is the number of CPU cores, with a max of 8. * `-z --search-zip`: Search contents of compressed files. 
Currently, gz and xz are supported. This option requires that ag is built with lzma and zlib. * `-0 --null --print0`: Separate the filenames with `\0`, rather than `\n`: this allows `xargs -0 ` to correctly process filenames containing spaces or newlines. ## FILE TYPES It is possible to restrict the types of files searched. For example, passing `--html` will search only files with the extensions `htm`, `html`, `shtml` or `xhtml`. For a list of supported types, run `ag --list-file-types`. ## IGNORING FILES By default, ag will ignore files whose names match patterns in .gitignore, .hgignore, or .ignore. These files can be anywhere in the directories being searched. Binary files are ignored by default as well. Finally, ag looks in $HOME/.agignore for ignore patterns. If you want to ignore .gitignore and .hgignore, but still take .ignore into account, use `-U`. Use the `-t` option to search all text files; `-a` to search all files; and `-u` to search all, including hidden files. ## EXAMPLES `ag printf`: Find matches for "printf" in the current directory. `ag foo /bar/`: Find matches for "foo" in path /bar/. `ag -- --foo`: Find matches for "--foo" in the current directory. (As with most UNIX command line utilities, "--" is used to signify that the remaining arguments should not be treated as options.) ## ABOUT ag was originally created by Geoff Greer. More information (and the latest release) can be found at http://geoff.greer.fm/ag ## SEE ALSO grep(1) the_silver_searcher-2.1.0/doc/generate_man.sh000077500000000000000000000004151314774034700212750ustar00rootroot00000000000000#!/bin/sh # ronn is used to turn the markdown into a manpage. # Get ronn at https://github.com/rtomayko/ronn # Alternately, since ronn is a Ruby gem, you can just # `gem install ronn` sed -e 's/\\0/\\\\0/' ag.1.md.tmp ronn -r ag.1.md.tmp rm -f ag.1.md.tmp the_silver_searcher-2.1.0/format.sh000077500000000000000000000024131314774034700173730ustar00rootroot00000000000000#!/bin/bash function usage() { echo "Usage: $0 test|reformat" } if [ $# -eq 0 ] then usage exit 0 fi if [ -z "$CLANG_FORMAT" ] then CLANG_FORMAT=clang-format echo "No CLANG_FORMAT set. Using $CLANG_FORMAT" fi if ! type "$CLANG_FORMAT" &> /dev/null then echo "The command \"$CLANG_FORMAT\" was not found" exit 1 fi SOURCE_FILES=$(git ls-files src/) if [ "$1" == "reformat" ] then echo "Reformatting source files" # shellcheck disable=2086 echo $CLANG_FORMAT -style=file -i $SOURCE_FILES # shellcheck disable=2086 $CLANG_FORMAT -style=file -i $SOURCE_FILES exit 0 elif [ "$1" == "test" ] then # shellcheck disable=2086 RESULT=$($CLANG_FORMAT -style=file -output-replacements-xml $SOURCE_FILES | grep -c ' # Copyright (c) 2011 Daniel Richard G. # # This program is free software: you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by the # Free Software Foundation, either version 3 of the License, or (at your # option) any later version. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General # Public License for more details. # # You should have received a copy of the GNU General Public License along # with this program. If not, see . # # As a special exception, the respective Autoconf Macro's copyright owner # gives unlimited permission to copy, distribute and modify the configure # scripts that are the output of Autoconf when processing the Macro. 
You # need not follow the terms of the GNU General Public License when using # or distributing such scripts, even though portions of the text of the # Macro appear in them. The GNU General Public License (GPL) does govern # all other use of the material that constitutes the Autoconf Macro. # # This special exception to the GPL applies to versions of the Autoconf # Macro released by the Autoconf Archive. When you make and distribute a # modified version of the Autoconf Macro, you may extend this special # exception to the GPL to apply to your modified version as well. #serial 21 AU_ALIAS([ACX_PTHREAD], [AX_PTHREAD]) AC_DEFUN([AX_PTHREAD], [ AC_REQUIRE([AC_CANONICAL_HOST]) AC_LANG_PUSH([C]) ax_pthread_ok=no # We used to check for pthread.h first, but this fails if pthread.h # requires special compiler flags (e.g. on True64 or Sequent). # It gets checked for in the link test anyway. # First of all, check if the user has set any of the PTHREAD_LIBS, # etcetera environment variables, and if threads linking works using # them: if test x"$PTHREAD_LIBS$PTHREAD_CFLAGS" != x; then save_CFLAGS="$CFLAGS" CFLAGS="$CFLAGS $PTHREAD_CFLAGS" save_LIBS="$LIBS" LIBS="$PTHREAD_LIBS $LIBS" AC_MSG_CHECKING([for pthread_join in LIBS=$PTHREAD_LIBS with CFLAGS=$PTHREAD_CFLAGS]) AC_TRY_LINK_FUNC([pthread_join], [ax_pthread_ok=yes]) AC_MSG_RESULT([$ax_pthread_ok]) if test x"$ax_pthread_ok" = xno; then PTHREAD_LIBS="" PTHREAD_CFLAGS="" fi LIBS="$save_LIBS" CFLAGS="$save_CFLAGS" fi # We must check for the threads library under a number of different # names; the ordering is very important because some systems # (e.g. DEC) have both -lpthread and -lpthreads, where one of the # libraries is broken (non-POSIX). # Create a list of thread flags to try. Items starting with a "-" are # C compiler flags, and other items are library names, except for "none" # which indicates that we try without any flags at all, and "pthread-config" # which is a program returning the flags for the Pth emulation library. ax_pthread_flags="pthreads none -Kthread -kthread lthread -pthread -pthreads -mthreads pthread --thread-safe -mt pthread-config" # The ordering *is* (sometimes) important. Some notes on the # individual items follow: # pthreads: AIX (must check this before -lpthread) # none: in case threads are in libc; should be tried before -Kthread and # other compiler flags to prevent continual compiler warnings # -Kthread: Sequent (threads in libc, but -Kthread needed for pthread.h) # -kthread: FreeBSD kernel threads (preferred to -pthread since SMP-able) # lthread: LinuxThreads port on FreeBSD (also preferred to -pthread) # -pthread: Linux/gcc (kernel threads), BSD/gcc (userland threads) # -pthreads: Solaris/gcc # -mthreads: Mingw32/gcc, Lynx/gcc # -mt: Sun Workshop C (may only link SunOS threads [-lthread], but it # doesn't hurt to check since this sometimes defines pthreads too; # also defines -D_REENTRANT) # ... -mt is also the pthreads flag for HP/aCC # pthread: Linux, etcetera # --thread-safe: KAI C++ # pthread-config: use pthread-config program (for GNU Pth library) case ${host_os} in solaris*) # On Solaris (at least, for some versions), libc contains stubbed # (non-functional) versions of the pthreads routines, so link-based # tests will erroneously succeed. (We need to link with -pthreads/-mt/ # -lpthread.) (The stubs are missing pthread_cleanup_push, or rather # a function called by this macro, so we could check for that, but # who knows whether they'll stub that too in a future libc.) 
So, # we'll just look for -pthreads and -lpthread first: ax_pthread_flags="-pthreads pthread -mt -pthread $ax_pthread_flags" ;; darwin*) ax_pthread_flags="-pthread $ax_pthread_flags" ;; esac # Clang doesn't consider unrecognized options an error unless we specify # -Werror. We throw in some extra Clang-specific options to ensure that # this doesn't happen for GCC, which also accepts -Werror. AC_MSG_CHECKING([if compiler needs -Werror to reject unknown flags]) save_CFLAGS="$CFLAGS" ax_pthread_extra_flags="-Werror" CFLAGS="$CFLAGS $ax_pthread_extra_flags -Wunknown-warning-option -Wsizeof-array-argument" AC_COMPILE_IFELSE([AC_LANG_PROGRAM([int foo(void);],[foo()])], [AC_MSG_RESULT([yes])], [ax_pthread_extra_flags= AC_MSG_RESULT([no])]) CFLAGS="$save_CFLAGS" if test x"$ax_pthread_ok" = xno; then for flag in $ax_pthread_flags; do case $flag in none) AC_MSG_CHECKING([whether pthreads work without any flags]) ;; -*) AC_MSG_CHECKING([whether pthreads work with $flag]) PTHREAD_CFLAGS="$flag" ;; pthread-config) AC_CHECK_PROG([ax_pthread_config], [pthread-config], [yes], [no]) if test x"$ax_pthread_config" = xno; then continue; fi PTHREAD_CFLAGS="`pthread-config --cflags`" PTHREAD_LIBS="`pthread-config --ldflags` `pthread-config --libs`" ;; *) AC_MSG_CHECKING([for the pthreads library -l$flag]) PTHREAD_LIBS="-l$flag" ;; esac save_LIBS="$LIBS" save_CFLAGS="$CFLAGS" LIBS="$PTHREAD_LIBS $LIBS" CFLAGS="$CFLAGS $PTHREAD_CFLAGS $ax_pthread_extra_flags" # Check for various functions. We must include pthread.h, # since some functions may be macros. (On the Sequent, we # need a special flag -Kthread to make this header compile.) # We check for pthread_join because it is in -lpthread on IRIX # while pthread_create is in libc. We check for pthread_attr_init # due to DEC craziness with -lpthreads. We check for # pthread_cleanup_push because it is one of the few pthread # functions on Solaris that doesn't have a non-functional libc stub. # We try pthread_create on general principles. AC_LINK_IFELSE([AC_LANG_PROGRAM([#include static void routine(void *a) { a = 0; } static void *start_routine(void *a) { return a; }], [pthread_t th; pthread_attr_t attr; pthread_create(&th, 0, start_routine, 0); pthread_join(th, 0); pthread_attr_init(&attr); pthread_cleanup_push(routine, 0); pthread_cleanup_pop(0) /* ; */])], [ax_pthread_ok=yes], []) LIBS="$save_LIBS" CFLAGS="$save_CFLAGS" AC_MSG_RESULT([$ax_pthread_ok]) if test "x$ax_pthread_ok" = xyes; then break; fi PTHREAD_LIBS="" PTHREAD_CFLAGS="" done fi # Various other checks: if test "x$ax_pthread_ok" = xyes; then save_LIBS="$LIBS" LIBS="$PTHREAD_LIBS $LIBS" save_CFLAGS="$CFLAGS" CFLAGS="$CFLAGS $PTHREAD_CFLAGS" # Detect AIX lossage: JOINABLE attribute is called UNDETACHED. AC_MSG_CHECKING([for joinable pthread attribute]) attr_name=unknown for attr in PTHREAD_CREATE_JOINABLE PTHREAD_CREATE_UNDETACHED; do AC_LINK_IFELSE([AC_LANG_PROGRAM([#include ], [int attr = $attr; return attr /* ; */])], [attr_name=$attr; break], []) done AC_MSG_RESULT([$attr_name]) if test "$attr_name" != PTHREAD_CREATE_JOINABLE; then AC_DEFINE_UNQUOTED([PTHREAD_CREATE_JOINABLE], [$attr_name], [Define to necessary symbol if this constant uses a non-standard name on your system.]) fi AC_MSG_CHECKING([if more special flags are required for pthreads]) flag=no case ${host_os} in aix* | freebsd* | darwin*) flag="-D_THREAD_SAFE";; osf* | hpux*) flag="-D_REENTRANT";; solaris*) if test "$GCC" = "yes"; then flag="-D_REENTRANT" else # TODO: What about Clang on Solaris? 
flag="-mt -D_REENTRANT" fi ;; esac AC_MSG_RESULT([$flag]) if test "x$flag" != xno; then PTHREAD_CFLAGS="$flag $PTHREAD_CFLAGS" fi AC_CACHE_CHECK([for PTHREAD_PRIO_INHERIT], [ax_cv_PTHREAD_PRIO_INHERIT], [ AC_LINK_IFELSE([AC_LANG_PROGRAM([[#include ]], [[int i = PTHREAD_PRIO_INHERIT;]])], [ax_cv_PTHREAD_PRIO_INHERIT=yes], [ax_cv_PTHREAD_PRIO_INHERIT=no]) ]) AS_IF([test "x$ax_cv_PTHREAD_PRIO_INHERIT" = "xyes"], [AC_DEFINE([HAVE_PTHREAD_PRIO_INHERIT], [1], [Have PTHREAD_PRIO_INHERIT.])]) LIBS="$save_LIBS" CFLAGS="$save_CFLAGS" # More AIX lossage: compile with *_r variant if test "x$GCC" != xyes; then case $host_os in aix*) AS_CASE(["x/$CC"], [x*/c89|x*/c89_128|x*/c99|x*/c99_128|x*/cc|x*/cc128|x*/xlc|x*/xlc_v6|x*/xlc128|x*/xlc128_v6], [#handle absolute path differently from PATH based program lookup AS_CASE(["x$CC"], [x/*], [AS_IF([AS_EXECUTABLE_P([${CC}_r])],[PTHREAD_CC="${CC}_r"])], [AC_CHECK_PROGS([PTHREAD_CC],[${CC}_r],[$CC])])]) ;; esac fi fi test -n "$PTHREAD_CC" || PTHREAD_CC="$CC" AC_SUBST([PTHREAD_LIBS]) AC_SUBST([PTHREAD_CFLAGS]) AC_SUBST([PTHREAD_CC]) # Finally, execute ACTION-IF-FOUND/ACTION-IF-NOT-FOUND: if test x"$ax_pthread_ok" = xyes; then ifelse([$1],,[AC_DEFINE([HAVE_PTHREAD],[1],[Define if you have POSIX threads libraries and header files.])],[$1]) : else ax_pthread_ok=no $2 fi AC_LANG_POP ])dnl AX_PTHREAD the_silver_searcher-2.1.0/pgo.sh000077500000000000000000000002471314774034700166730ustar00rootroot00000000000000#!/bin/sh set -e cd "$(dirname "$0")" make clean CFLAGS="$CFLAGS -fprofile-generate" ./build.sh ./ag example .. make clean CFLAGS="$CFLAGS -fprofile-use" ./build.sh the_silver_searcher-2.1.0/sanitize.sh000077500000000000000000000115141314774034700177330ustar00rootroot00000000000000#!/bin/bash # Copyright 2016 Allen Wild # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. AVAILABLE_SANITIZERS=( address thread undefined valgrind ) DEFAULT_SANITIZERS=( address thread undefined ) usage() { cat < and then runs the test suite. Memory leaks or other errors will be printed in ag's output, thus failing the test. Available LLVM sanitizers are: ${AVAILABLE_SANITIZERS[*]} The compile-time sanitizers are supported in clang/llvm >= 3.1 and gcc >= 4.8 for x86_64 Linux only. clang is preferred and will be used, if available. For function names and line numbers in error output traces, llvm-symbolizer needs to be available in PATH or set through ASAN_SYMBOLIZER_PATH. If 'valgrind' is passed as the sanitizer, then ag will be run through valgrind without recompiling. If $(dirname $0)/ag doesn't exist, then it will be built. WARNING: This script will run "make distclean" and "./configure" to recompile ag once per sanitizer (except for valgrind). If you need to pass additional options to ./configure, put them in the CONFIGOPTS environment variable. 
EOF } vrun() { echo "Running: $*" "$@" } die() { echo "Fatal: $*" exit 1 } valid_sanitizer() { for san in "${AVAILABLE_SANITIZERS[@]}"; do if [[ "$1" == "$san" ]]; then return 0 fi done return 1 } run_sanitizer() { sanitizer=$1 if [[ "$sanitizer" == "valgrind" ]]; then run_valgrind return $? fi echo -e "\nCompiling for sanitizer '$sanitizer'" [[ -f Makefile ]] && vrun make distclean vrun ./configure $CONFIGOPTS CC=$SANITIZE_CC \ CFLAGS="-g -O0 -fsanitize=$sanitizer $EXTRA_CFLAGS" if [[ $? != 0 ]]; then echo "ERROR: Failed to configure. Try setting CONFIGOPTS?" return 1 fi vrun make if [[ $? != 0 ]]; then echo "ERROR: failed to build" return 1 fi echo "Testing with sanitizer '$sanitizer'" vrun make test if [[ $? != 0 ]]; then echo "Tests for sanitizer '$sanitizer' FAIL!" echo "Check the above output for failure information" return 2 else echo "Tests for sanitizer '$sanitizer' PASS!" return 0 fi } run_valgrind() { echo "Compiling ag normally for use with valgrind" [[ -f Makefile ]] && vrun make distclean vrun ./configure $CONFIGOPTS if [[ $? != 0 ]]; then echo "ERROR: Failed to configure. Try setting CONFIGOPTS?" return 1 fi vrun make if [[ $? != 0 ]]; then echo "ERROR: failed to build" return 1 fi echo "Running: AGPROG=\"valgrind -q $PWD/ag\" make test" AGPROG="valgrind -q $PWD/ag" make test if [[ $? != 0 ]]; then echo "Valgrind tests FAIL!" return 1 else echo "Valgrind tests PASS!" return 0 fi } #### MAIN #### run_sanitizers=() for opt in "$@"; do if [[ "$opt" == -* ]]; then case opt in -h|--help) usage exit 0 ;; *) echo "Unknown option: '$opt'" usage exit 1 ;; esac else if valid_sanitizer "$opt"; then run_sanitizers+=("$opt") else echo "Invalid Sanitizer: '$opt'" usage exit 1 fi fi done if [[ ${#run_sanitizers[@]} == 0 ]]; then run_sanitizers=(${DEFAULT_SANITIZERS[@]}) fi if [[ -n $CC ]]; then echo "Using CC=$CC" SANITIZE_CC="$CC" elif which clang &>/dev/null; then SANITIZE_CC="clang" else echo "Warning: CC unset and clang not found" fi if [[ -n $CFLAGS ]]; then EXTRA_CFLAGS="$CFLAGS" unset CFLAGS fi if [[ ! -e ./configure ]]; then echo "Warning: ./configure not found. Running autogen" vrun ./autogen.sh || die "autogen.sh failed" fi echo "Running sanitizers: ${run_sanitizers[*]}" failedsan=() for san in "${run_sanitizers[@]}"; do run_sanitizer $san if [[ $? 
!= 0 ]]; then failedsan+=($san) fi done if [[ ${#failedsan[@]} == 0 ]]; then echo "All sanitizers PASSED" exit 0 else echo "The following sanitizers FAILED: ${failedsan[*]}" exit ${#failedsan[@]} fi the_silver_searcher-2.1.0/src/000077500000000000000000000000001314774034700163335ustar00rootroot00000000000000the_silver_searcher-2.1.0/src/decompress.c000066400000000000000000000202121314774034700206400ustar00rootroot00000000000000#include #include #include "decompress.h" #ifdef HAVE_LZMA_H #include /* http://tukaani.org/xz/xz-file-format.txt */ const uint8_t XZ_HEADER_MAGIC[6] = { 0xFD, '7', 'z', 'X', 'Z', 0x00 }; const uint8_t LZMA_HEADER_SOMETIMES[3] = { 0x5D, 0x00, 0x00 }; #endif #ifdef HAVE_ZLIB_H #define ZLIB_CONST 1 #include /* Code in decompress_zlib from * * https://raw.github.com/madler/zlib/master/examples/zpipe.c * * zpipe.c: example of proper use of zlib's inflate() and deflate() * Not copyrighted -- provided to the public domain * Version 1.4 11 December 2005 Mark Adler */ static void *decompress_zlib(const void *buf, const int buf_len, const char *dir_full_path, int *new_buf_len) { int ret = 0; unsigned char *result = NULL; size_t result_size = 0; size_t pagesize = 0; z_stream stream; log_debug("Decompressing zlib file %s", dir_full_path); /* allocate inflate state */ stream.zalloc = Z_NULL; stream.zfree = Z_NULL; stream.opaque = Z_NULL; stream.avail_in = 0; stream.next_in = Z_NULL; /* Add 32 to allow zlib and gzip format detection */ if (inflateInit2(&stream, 32 + 15) != Z_OK) { log_err("Unable to initialize zlib: %s", stream.msg); goto error_out; } stream.avail_in = buf_len; /* Explicitly cast away the const-ness of buf */ stream.next_in = (Bytef *)buf; pagesize = getpagesize(); result_size = ((buf_len + pagesize - 1) & ~(pagesize - 1)); do { do { unsigned char *tmp_result = result; /* Double the buffer size and realloc */ result_size *= 2; result = (unsigned char *)realloc(result, result_size * sizeof(unsigned char)); if (result == NULL) { free(tmp_result); log_err("Unable to allocate %d bytes to decompress file %s", result_size * sizeof(unsigned char), dir_full_path); inflateEnd(&stream); goto error_out; } stream.avail_out = result_size / 2; stream.next_out = &result[stream.total_out]; ret = inflate(&stream, Z_SYNC_FLUSH); log_debug("inflate ret = %d", ret); switch (ret) { case Z_STREAM_ERROR: { log_err("Found stream error while decompressing zlib stream: %s", stream.msg); inflateEnd(&stream); goto error_out; } case Z_NEED_DICT: case Z_DATA_ERROR: case Z_MEM_ERROR: { log_err("Found mem/data error while decompressing zlib stream: %s", stream.msg); inflateEnd(&stream); goto error_out; } } } while (stream.avail_out == 0); } while (ret == Z_OK); *new_buf_len = stream.total_out; inflateEnd(&stream); if (ret == Z_STREAM_END) { return result; } error_out: *new_buf_len = 0; return NULL; } #endif static void *decompress_lzw(const void *buf, const int buf_len, const char *dir_full_path, int *new_buf_len) { (void)buf; (void)buf_len; log_err("LZW (UNIX compress) files not yet supported: %s", dir_full_path); *new_buf_len = 0; return NULL; } static void *decompress_zip(const void *buf, const int buf_len, const char *dir_full_path, int *new_buf_len) { (void)buf; (void)buf_len; log_err("Zip files not yet supported: %s", dir_full_path); *new_buf_len = 0; return NULL; } #ifdef HAVE_LZMA_H static void *decompress_lzma(const void *buf, const int buf_len, const char *dir_full_path, int *new_buf_len) { lzma_stream stream = LZMA_STREAM_INIT; lzma_ret lzrt; unsigned char *result = NULL; size_t 
result_size = 0; size_t pagesize = 0; stream.avail_in = buf_len; stream.next_in = buf; lzrt = lzma_auto_decoder(&stream, -1, 0); if (lzrt != LZMA_OK) { log_err("Unable to initialize lzma_auto_decoder: %d", lzrt); goto error_out; } pagesize = getpagesize(); result_size = ((buf_len + pagesize - 1) & ~(pagesize - 1)); do { do { unsigned char *tmp_result = result; /* Double the buffer size and realloc */ result_size *= 2; result = (unsigned char *)realloc(result, result_size * sizeof(unsigned char)); if (result == NULL) { free(tmp_result); log_err("Unable to allocate %d bytes to decompress file %s", result_size * sizeof(unsigned char), dir_full_path); goto error_out; } stream.avail_out = result_size / 2; stream.next_out = &result[stream.total_out]; lzrt = lzma_code(&stream, LZMA_RUN); log_debug("lzma_code ret = %d", lzrt); switch (lzrt) { case LZMA_OK: case LZMA_STREAM_END: break; default: log_err("Found mem/data error while decompressing xz/lzma stream: %d", lzrt); goto error_out; } } while (stream.avail_out == 0); } while (lzrt == LZMA_OK); *new_buf_len = stream.total_out; if (lzrt == LZMA_STREAM_END) { lzma_end(&stream); return result; } error_out: lzma_end(&stream); *new_buf_len = 0; if (result) { free(result); } return NULL; } #endif /* This function is very hot. It's called on every file when zip is enabled. */ void *decompress(const ag_compression_type zip_type, const void *buf, const int buf_len, const char *dir_full_path, int *new_buf_len) { switch (zip_type) { #ifdef HAVE_ZLIB_H case AG_GZIP: return decompress_zlib(buf, buf_len, dir_full_path, new_buf_len); #endif case AG_COMPRESS: return decompress_lzw(buf, buf_len, dir_full_path, new_buf_len); case AG_ZIP: return decompress_zip(buf, buf_len, dir_full_path, new_buf_len); #ifdef HAVE_LZMA_H case AG_XZ: return decompress_lzma(buf, buf_len, dir_full_path, new_buf_len); #endif case AG_NO_COMPRESSION: log_err("File %s is not compressed", dir_full_path); break; default: log_err("Unsupported compression type: %d", zip_type); } *new_buf_len = 0; return NULL; } /* This function is very hot. It's called on every file. 
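It only examines the first few bytes of buf, comparing them against the magic numbers documented below (2 bytes for gzip/compress, 4 for zip, 6 for xz, 3 for the bare-LZMA heuristic), so the per-file cost is a handful of byte comparisons.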
*/ ag_compression_type is_zipped(const void *buf, const int buf_len) { /* Zip magic numbers * compressed file: { 0x1F, 0x9B } * http://en.wikipedia.org/wiki/Compress * * gzip file: { 0x1F, 0x8B } * http://www.gzip.org/zlib/rfc-gzip.html#file-format * * zip file: { 0x50, 0x4B, 0x03, 0x04 } * http://www.pkware.com/documents/casestudies/APPNOTE.TXT (Section 4.3) */ const unsigned char *buf_c = buf; if (buf_len == 0) return AG_NO_COMPRESSION; /* Check for gzip & compress */ if (buf_len >= 2) { if (buf_c[0] == 0x1F) { if (buf_c[1] == 0x8B) { #ifdef HAVE_ZLIB_H log_debug("Found gzip-based stream"); return AG_GZIP; #endif } else if (buf_c[1] == 0x9B) { log_debug("Found compress-based stream"); return AG_COMPRESS; } } } /* Check for zip */ if (buf_len >= 4) { if (buf_c[0] == 0x50 && buf_c[1] == 0x4B && buf_c[2] == 0x03 && buf_c[3] == 0x04) { log_debug("Found zip-based stream"); return AG_ZIP; } } #ifdef HAVE_LZMA_H if (buf_len >= 6) { if (memcmp(XZ_HEADER_MAGIC, buf_c, 6) == 0) { log_debug("Found xz based stream"); return AG_XZ; } } /* LZMA doesn't really have a header: http://www.mail-archive.com/xz-devel@tukaani.org/msg00003.html */ if (buf_len >= 3) { if (memcmp(LZMA_HEADER_SOMETIMES, buf_c, 3) == 0) { log_debug("Found lzma-based stream"); return AG_XZ; } } #endif return AG_NO_COMPRESSION; } the_silver_searcher-2.1.0/src/decompress.h000066400000000000000000000010511314774034700206450ustar00rootroot00000000000000#ifndef DECOMPRESS_H #define DECOMPRESS_H #include #include "config.h" #include "log.h" #include "options.h" typedef enum { AG_NO_COMPRESSION, AG_GZIP, AG_COMPRESS, AG_ZIP, AG_XZ, } ag_compression_type; ag_compression_type is_zipped(const void *buf, const int buf_len); void *decompress(const ag_compression_type zip_type, const void *buf, const int buf_len, const char *dir_full_path, int *new_buf_len); #if HAVE_FOPENCOOKIE FILE *decompress_open(int fd, const char *mode, ag_compression_type ctype); #endif #endif the_silver_searcher-2.1.0/src/ignore.c000066400000000000000000000266011314774034700177670ustar00rootroot00000000000000#include #include #include #include #include #include #include #include "ignore.h" #include "log.h" #include "options.h" #include "scandir.h" #include "util.h" #ifdef _WIN32 #include #define fnmatch(x, y, z) (!PathMatchSpec(y, x)) #else #include const int fnmatch_flags = FNM_PATHNAME; #endif /* TODO: build a huge-ass list of files we want to ignore by default (build cache stuff, pyc files, etc) */ const char *evil_hardcoded_ignore_files[] = { ".", "..", NULL }; /* Warning: changing the first two strings will break skip_vcs_ignores. 
*/ const char *ignore_pattern_files[] = { ".ignore", ".gitignore", ".git/info/exclude", ".hgignore", NULL }; int is_empty(ignores *ig) { return (ig->extensions_len + ig->names_len + ig->slash_names_len + ig->regexes_len + ig->slash_regexes_len == 0); }; ignores *init_ignore(ignores *parent, const char *dirname, const size_t dirname_len) { ignores *ig = ag_malloc(sizeof(ignores)); ig->extensions = NULL; ig->extensions_len = 0; ig->names = NULL; ig->names_len = 0; ig->slash_names = NULL; ig->slash_names_len = 0; ig->regexes = NULL; ig->regexes_len = 0; ig->invert_regexes = NULL; ig->invert_regexes_len = 0; ig->slash_regexes = NULL; ig->slash_regexes_len = 0; ig->dirname = dirname; ig->dirname_len = dirname_len; if (parent && is_empty(parent) && parent->parent) { ig->parent = parent->parent; } else { ig->parent = parent; } if (parent && parent->abs_path_len > 0) { ag_asprintf(&(ig->abs_path), "%s/%s", parent->abs_path, dirname); ig->abs_path_len = parent->abs_path_len + 1 + dirname_len; } else if (dirname_len == 1 && dirname[0] == '.') { ig->abs_path = ag_malloc(sizeof(char)); ig->abs_path[0] = '\0'; ig->abs_path_len = 0; } else { ag_asprintf(&(ig->abs_path), "%s", dirname); ig->abs_path_len = dirname_len; } return ig; } void cleanup_ignore(ignores *ig) { if (ig == NULL) { return; } free_strings(ig->extensions, ig->extensions_len); free_strings(ig->names, ig->names_len); free_strings(ig->slash_names, ig->slash_names_len); free_strings(ig->regexes, ig->regexes_len); free_strings(ig->invert_regexes, ig->invert_regexes_len); free_strings(ig->slash_regexes, ig->slash_regexes_len); if (ig->abs_path) { free(ig->abs_path); } free(ig); } void add_ignore_pattern(ignores *ig, const char *pattern) { int i; int pattern_len; /* Strip off the leading dot so that matches are more likely. */ if (strncmp(pattern, "./", 2) == 0) { pattern++; } /* Kill trailing whitespace */ for (pattern_len = strlen(pattern); pattern_len > 0; pattern_len--) { if (!isspace(pattern[pattern_len - 1])) { break; } } if (pattern_len == 0) { log_debug("Pattern is empty. Not adding any ignores."); return; } char ***patterns_p; size_t *patterns_len; if (is_fnmatch(pattern)) { if (pattern[0] == '*' && pattern[1] == '.' && strchr(pattern + 2, '.') && !is_fnmatch(pattern + 2)) { patterns_p = &(ig->extensions); patterns_len = &(ig->extensions_len); pattern += 2; pattern_len -= 2; } else if (pattern[0] == '/') { patterns_p = &(ig->slash_regexes); patterns_len = &(ig->slash_regexes_len); pattern++; pattern_len--; } else if (pattern[0] == '!') { patterns_p = &(ig->invert_regexes); patterns_len = &(ig->invert_regexes_len); pattern++; pattern_len--; } else { patterns_p = &(ig->regexes); patterns_len = &(ig->regexes_len); } } else { if (pattern[0] == '/') { patterns_p = &(ig->slash_names); patterns_len = &(ig->slash_names_len); pattern++; pattern_len--; } else { patterns_p = &(ig->names); patterns_len = &(ig->names_len); } } ++*patterns_len; char **patterns; /* a balanced binary tree is best for performance, but I'm lazy */ *patterns_p = patterns = ag_realloc(*patterns_p, (*patterns_len) * sizeof(char *)); /* TODO: de-dupe these patterns */ for (i = *patterns_len - 1; i > 0; i--) { if (strcmp(pattern, patterns[i - 1]) > 0) { break; } patterns[i] = patterns[i - 1]; } patterns[i] = ag_strndup(pattern, pattern_len); log_debug("added ignore pattern %s to %s", pattern, ig == root_ignores ? 
"root ignores" : ig->abs_path); } /* For loading git/hg ignore patterns */ void load_ignore_patterns(ignores *ig, const char *path) { FILE *fp = NULL; fp = fopen(path, "r"); if (fp == NULL) { log_debug("Skipping ignore file %s: not readable", path); return; } log_debug("Loading ignore file %s.", path); char *line = NULL; ssize_t line_len = 0; size_t line_cap = 0; while ((line_len = getline(&line, &line_cap, fp)) > 0) { if (line_len == 0 || line[0] == '\n' || line[0] == '#') { continue; } if (line[line_len - 1] == '\n') { line[line_len - 1] = '\0'; /* kill the \n */ } add_ignore_pattern(ig, line); } free(line); fclose(fp); } static int ackmate_dir_match(const char *dir_name) { if (opts.ackmate_dir_filter == NULL) { return 0; } /* we just care about the match, not where the matches are */ return pcre_exec(opts.ackmate_dir_filter, NULL, dir_name, strlen(dir_name), 0, 0, NULL, 0); } /* This is the hottest code in Ag. 10-15% of all execution time is spent here */ static int path_ignore_search(const ignores *ig, const char *path, const char *filename) { char *temp; size_t i; int match_pos; match_pos = binary_search(filename, ig->names, 0, ig->names_len); if (match_pos >= 0) { log_debug("file %s ignored because name matches static pattern %s", filename, ig->names[match_pos]); return 1; } ag_asprintf(&temp, "%s/%s", path[0] == '.' ? path + 1 : path, filename); if (strncmp(temp, ig->abs_path, ig->abs_path_len) == 0) { char *slash_filename = temp + ig->abs_path_len; if (slash_filename[0] == '/') { slash_filename++; } match_pos = binary_search(slash_filename, ig->names, 0, ig->names_len); if (match_pos >= 0) { log_debug("file %s ignored because name matches static pattern %s", temp, ig->names[match_pos]); free(temp); return 1; } match_pos = binary_search(slash_filename, ig->slash_names, 0, ig->slash_names_len); if (match_pos >= 0) { log_debug("file %s ignored because name matches slash static pattern %s", slash_filename, ig->slash_names[match_pos]); free(temp); return 1; } for (i = 0; i < ig->names_len; i++) { char *pos = strstr(slash_filename, ig->names[i]); if (pos == slash_filename || (pos && *(pos - 1) == '/')) { pos += strlen(ig->names[i]); if (*pos == '\0' || *pos == '/') { log_debug("file %s ignored because path somewhere matches name %s", slash_filename, ig->names[i]); free(temp); return 1; } } log_debug("pattern %s doesn't match path %s", ig->names[i], slash_filename); } for (i = 0; i < ig->slash_regexes_len; i++) { if (fnmatch(ig->slash_regexes[i], slash_filename, fnmatch_flags) == 0) { log_debug("file %s ignored because name matches slash regex pattern %s", slash_filename, ig->slash_regexes[i]); free(temp); return 1; } log_debug("pattern %s doesn't match slash file %s", ig->slash_regexes[i], slash_filename); } } for (i = 0; i < ig->invert_regexes_len; i++) { if (fnmatch(ig->invert_regexes[i], filename, fnmatch_flags) == 0) { log_debug("file %s not ignored because name matches regex pattern !%s", filename, ig->invert_regexes[i]); free(temp); return 0; } log_debug("pattern !%s doesn't match file %s", ig->invert_regexes[i], filename); } for (i = 0; i < ig->regexes_len; i++) { if (fnmatch(ig->regexes[i], filename, fnmatch_flags) == 0) { log_debug("file %s ignored because name matches regex pattern %s", filename, ig->regexes[i]); free(temp); return 1; } log_debug("pattern %s doesn't match file %s", ig->regexes[i], filename); } int rv = ackmate_dir_match(temp); free(temp); return rv; } /* This function is REALLY HOT. 
It gets called for every file */ int filename_filter(const char *path, const struct dirent *dir, void *baton) { const char *filename = dir->d_name; if (!opts.search_hidden_files && filename[0] == '.') { return 0; } size_t i; for (i = 0; evil_hardcoded_ignore_files[i] != NULL; i++) { if (strcmp(filename, evil_hardcoded_ignore_files[i]) == 0) { return 0; } } if (!opts.follow_symlinks && is_symlink(path, dir)) { log_debug("File %s ignored becaused it's a symlink", dir->d_name); return 0; } if (is_named_pipe(path, dir)) { log_debug("%s ignored because it's a named pipe or socket", path); return 0; } if (opts.search_all_files && !opts.path_to_ignore) { return 1; } scandir_baton_t *scandir_baton = (scandir_baton_t *)baton; const char *path_start = scandir_baton->path_start; const char *extension = strchr(filename, '.'); if (extension) { if (extension[1]) { // The dot is not the last character, extension starts at the next one ++extension; } else { // No extension extension = NULL; } } #ifdef HAVE_DIRENT_DNAMLEN size_t filename_len = dir->d_namlen; #else size_t filename_len = 0; #endif if (strncmp(filename, "./", 2) == 0) { #ifndef HAVE_DIRENT_DNAMLEN filename_len = strlen(filename); #endif filename++; filename_len--; } const ignores *ig = scandir_baton->ig; while (ig != NULL) { if (extension) { int match_pos = binary_search(extension, ig->extensions, 0, ig->extensions_len); if (match_pos >= 0) { log_debug("file %s ignored because name matches extension %s", filename, ig->extensions[match_pos]); return 0; } } if (path_ignore_search(ig, path_start, filename)) { return 0; } if (is_directory(path, dir)) { #ifndef HAVE_DIRENT_DNAMLEN if (!filename_len) { filename_len = strlen(filename); } #endif if (filename[filename_len - 1] != '/') { char *temp; ag_asprintf(&temp, "%s/", filename); int rv = path_ignore_search(ig, path_start, temp); free(temp); if (rv) { return 0; } } } ig = ig->parent; } log_debug("%s not ignored", filename); return 1; } the_silver_searcher-2.1.0/src/ignore.h000066400000000000000000000023101314774034700177630ustar00rootroot00000000000000#ifndef IGNORE_H #define IGNORE_H #include #include struct ignores { char **extensions; /* File extensions to ignore */ size_t extensions_len; char **names; /* Non-regex ignore lines. Sorted so we can binary search them. */ size_t names_len; char **slash_names; /* Same but starts with a slash */ size_t slash_names_len; char **regexes; /* For patterns that need fnmatch */ size_t regexes_len; char **invert_regexes; /* For "!" 
patterns */ size_t invert_regexes_len; char **slash_regexes; size_t slash_regexes_len; const char *dirname; size_t dirname_len; char *abs_path; size_t abs_path_len; struct ignores *parent; }; typedef struct ignores ignores; ignores *root_ignores; extern const char *evil_hardcoded_ignore_files[]; extern const char *ignore_pattern_files[]; ignores *init_ignore(ignores *parent, const char *dirname, const size_t dirname_len); void cleanup_ignore(ignores *ig); void add_ignore_pattern(ignores *ig, const char *pattern); void load_ignore_patterns(ignores *ig, const char *path); int filename_filter(const char *path, const struct dirent *dir, void *baton); int is_empty(ignores *ig); #endif the_silver_searcher-2.1.0/src/lang.c000066400000000000000000000144661314774034700174330ustar00rootroot00000000000000#include #include #include "lang.h" #include "util.h" lang_spec_t langs[] = { { "actionscript", { "as", "mxml" } }, { "ada", { "ada", "adb", "ads" } }, { "asciidoc", { "adoc", "ad", "asc", "asciidoc" } }, { "asm", { "asm", "s" } }, { "batch", { "bat", "cmd" } }, { "bitbake", { "bb", "bbappend", "bbclass", "inc" } }, { "bro", { "bro", "bif" } }, { "cc", { "c", "h", "xs" } }, { "cfmx", { "cfc", "cfm", "cfml" } }, { "chpl", { "chpl" } }, { "clojure", { "clj", "cljs", "cljc", "cljx" } }, { "coffee", { "coffee", "cjsx" } }, { "cpp", { "cpp", "cc", "C", "cxx", "m", "hpp", "hh", "h", "H", "hxx", "tpp" } }, { "crystal", { "cr", "ecr" } }, { "csharp", { "cs" } }, { "css", { "css" } }, { "cython", { "pyx", "pxd", "pxi" } }, { "delphi", { "pas", "int", "dfm", "nfm", "dof", "dpk", "dpr", "dproj", "groupproj", "bdsgroup", "bdsproj" } }, { "dot", { "dot", "gv" } }, { "ebuild", { "ebuild", "eclass" } }, { "elisp", { "el" } }, { "elixir", { "ex", "eex", "exs" } }, { "elm", { "elm" } }, { "erlang", { "erl", "hrl" } }, { "factor", { "factor" } }, { "fortran", { "f", "f77", "f90", "f95", "f03", "for", "ftn", "fpp" } }, { "fsharp", { "fs", "fsi", "fsx" } }, { "gettext", { "po", "pot", "mo" } }, { "glsl", { "vert", "tesc", "tese", "geom", "frag", "comp" } }, { "go", { "go" } }, { "groovy", { "groovy", "gtmpl", "gpp", "grunit", "gradle" } }, { "haml", { "haml" } }, { "handlebars", { "hbs" } }, { "haskell", { "hs", "lhs" } }, { "haxe", { "hx" } }, { "hh", { "h" } }, { "html", { "htm", "html", "shtml", "xhtml" } }, { "ini", { "ini" } }, { "ipython", { "ipynb" } }, { "jade", { "jade" } }, { "java", { "java", "properties" } }, { "js", { "es6", "js", "jsx", "vue" } }, { "json", { "json" } }, { "jsp", { "jsp", "jspx", "jhtm", "jhtml", "jspf", "tag", "tagf" } }, { "julia", { "jl" } }, { "kotlin", { "kt" } }, { "less", { "less" } }, { "liquid", { "liquid" } }, { "lisp", { "lisp", "lsp" } }, { "log", { "log" } }, { "lua", { "lua" } }, { "m4", { "m4" } }, { "make", { "Makefiles", "mk", "mak" } }, { "mako", { "mako" } }, { "markdown", { "markdown", "mdown", "mdwn", "mkdn", "mkd", "md" } }, { "mason", { "mas", "mhtml", "mpl", "mtxt" } }, { "matlab", { "m" } }, { "mathematica", { "m", "wl" } }, { "md", { "markdown", "mdown", "mdwn", "mkdn", "mkd", "md" } }, { "mercury", { "m", "moo" } }, { "nim", { "nim" } }, { "nix", { "nix" } }, { "objc", { "m", "h" } }, { "objcpp", { "mm", "h" } }, { "ocaml", { "ml", "mli", "mll", "mly" } }, { "octave", { "m" } }, { "org", { "org" } }, { "parrot", { "pir", "pasm", "pmc", "ops", "pod", "pg", "tg" } }, { "perl", { "pl", "pm", "pm6", "pod", "t" } }, { "php", { "php", "phpt", "php3", "php4", "php5", "phtml" } }, { "pike", { "pike", "pmod" } }, { "plist", { "plist" } }, { "plone", { "pt", "cpt", 
"metadata", "cpy", "py", "xml", "zcml" } }, { "proto", { "proto" } }, { "puppet", { "pp" } }, { "python", { "py" } }, { "qml", { "qml" } }, { "racket", { "rkt", "ss", "scm" } }, { "rake", { "Rakefile" } }, { "restructuredtext", { "rst" } }, { "rs", { "rs" } }, { "r", { "R", "Rmd", "Rnw", "Rtex", "Rrst" } }, { "rdoc", { "rdoc" } }, { "ruby", { "rb", "rhtml", "rjs", "rxml", "erb", "rake", "spec" } }, { "rust", { "rs" } }, { "salt", { "sls" } }, { "sass", { "sass", "scss" } }, { "scala", { "scala" } }, { "scheme", { "scm", "ss" } }, { "shell", { "sh", "bash", "csh", "tcsh", "ksh", "zsh", "fish" } }, { "smalltalk", { "st" } }, { "sml", { "sml", "fun", "mlb", "sig" } }, { "sql", { "sql", "ctl" } }, { "stylus", { "styl" } }, { "swift", { "swift" } }, { "tcl", { "tcl", "itcl", "itk" } }, { "tex", { "tex", "cls", "sty" } }, { "tt", { "tt", "tt2", "ttml" } }, { "toml", { "toml" } }, { "ts", { "ts", "tsx" } }, { "twig", { "twig" } }, { "vala", { "vala", "vapi" } }, { "vb", { "bas", "cls", "frm", "ctl", "vb", "resx" } }, { "velocity", { "vm", "vtl", "vsl" } }, { "verilog", { "v", "vh", "sv" } }, { "vhdl", { "vhd", "vhdl" } }, { "vim", { "vim" } }, { "wix", { "wxi", "wxs" } }, { "wsdl", { "wsdl" } }, { "wadl", { "wadl" } }, { "xml", { "xml", "dtd", "xsl", "xslt", "ent", "tld", "plist" } }, { "yaml", { "yaml", "yml" } } }; size_t get_lang_count() { return sizeof(langs) / sizeof(lang_spec_t); } char *make_lang_regex(char *ext_array, size_t num_exts) { int regex_capacity = 100; char *regex = ag_malloc(regex_capacity); int regex_length = 3; int subsequent = 0; char *extension; size_t i; strcpy(regex, "\\.("); for (i = 0; i < num_exts; ++i) { extension = ext_array + i * SINGLE_EXT_LEN; int extension_length = strlen(extension); while (regex_length + extension_length + 3 + subsequent > regex_capacity) { regex_capacity *= 2; regex = ag_realloc(regex, regex_capacity); } if (subsequent) { regex[regex_length++] = '|'; } else { subsequent = 1; } strcpy(regex + regex_length, extension); regex_length += extension_length; } regex[regex_length++] = ')'; regex[regex_length++] = '$'; regex[regex_length++] = 0; return regex; } size_t combine_file_extensions(size_t *extension_index, size_t len, char **exts) { /* Keep it fixed as 100 for the reason that if you have more than 100 * file types to search, you'd better search all the files. * */ size_t ext_capacity = 100; (*exts) = (char *)ag_malloc(ext_capacity * SINGLE_EXT_LEN); memset((*exts), 0, ext_capacity * SINGLE_EXT_LEN); size_t num_of_extensions = 0; size_t i; for (i = 0; i < len; ++i) { size_t j = 0; const char *ext = langs[extension_index[i]].extensions[j]; do { if (num_of_extensions == ext_capacity) { break; } char *pos = (*exts) + num_of_extensions * SINGLE_EXT_LEN; strncpy(pos, ext, strlen(ext)); ++num_of_extensions; ext = langs[extension_index[i]].extensions[++j]; } while (ext); } return num_of_extensions; } the_silver_searcher-2.1.0/src/lang.h000066400000000000000000000015341314774034700174300ustar00rootroot00000000000000#ifndef LANG_H #define LANG_H #define MAX_EXTENSIONS 12 #define SINGLE_EXT_LEN 20 typedef struct { const char *name; const char *extensions[MAX_EXTENSIONS]; } lang_spec_t; extern lang_spec_t langs[]; /** Return the language count. */ size_t get_lang_count(void); /** Convert a NULL-terminated array of language extensions into a regular expression of the form \.(extension1|extension2...)$ Caller is responsible for freeing the returned string. 
*/ char *make_lang_regex(char *ext_array, size_t num_exts); /** Combine multiple file type extensions into one array. The combined result is returned through *exts*; *exts* is one-dimension array, which can contain up to 100 extensions; The number of extensions that *exts* actually contain is returned. */ size_t combine_file_extensions(size_t *extension_index, size_t len, char **exts); #endif the_silver_searcher-2.1.0/src/log.c000066400000000000000000000031171314774034700172620ustar00rootroot00000000000000#include #include #include "log.h" #include "util.h" static enum log_level log_threshold = LOG_LEVEL_ERR; void set_log_level(enum log_level threshold) { log_threshold = threshold; } void log_debug(const char *fmt, ...) { va_list args; va_start(args, fmt); vplog(LOG_LEVEL_DEBUG, fmt, args); va_end(args); } void log_msg(const char *fmt, ...) { va_list args; va_start(args, fmt); vplog(LOG_LEVEL_MSG, fmt, args); va_end(args); } void log_warn(const char *fmt, ...) { va_list args; va_start(args, fmt); vplog(LOG_LEVEL_WARN, fmt, args); va_end(args); } void log_err(const char *fmt, ...) { va_list args; va_start(args, fmt); vplog(LOG_LEVEL_ERR, fmt, args); va_end(args); } void vplog(const unsigned int level, const char *fmt, va_list args) { if (level < log_threshold) { return; } pthread_mutex_lock(&print_mtx); FILE *stream = out_fd; switch (level) { case LOG_LEVEL_DEBUG: fprintf(stream, "DEBUG: "); break; case LOG_LEVEL_MSG: fprintf(stream, "MSG: "); break; case LOG_LEVEL_WARN: fprintf(stream, "WARN: "); break; case LOG_LEVEL_ERR: stream = stderr; fprintf(stream, "ERR: "); break; } vfprintf(stream, fmt, args); fprintf(stream, "\n"); pthread_mutex_unlock(&print_mtx); } void plog(const unsigned int level, const char *fmt, ...) { va_list args; va_start(args, fmt); vplog(level, fmt, args); va_end(args); } the_silver_searcher-2.1.0/src/log.h000066400000000000000000000011621314774034700172650ustar00rootroot00000000000000#ifndef LOG_H #define LOG_H #include #include "config.h" #ifdef HAVE_PTHREAD_H #include #endif pthread_mutex_t print_mtx; enum log_level { LOG_LEVEL_DEBUG = 10, LOG_LEVEL_MSG = 20, LOG_LEVEL_WARN = 30, LOG_LEVEL_ERR = 40, LOG_LEVEL_NONE = 100 }; void set_log_level(enum log_level threshold); void log_debug(const char *fmt, ...); void log_msg(const char *fmt, ...); void log_warn(const char *fmt, ...); void log_err(const char *fmt, ...); void vplog(const unsigned int level, const char *fmt, va_list args); void plog(const unsigned int level, const char *fmt, ...); #endif the_silver_searcher-2.1.0/src/main.c000066400000000000000000000163011314774034700174240ustar00rootroot00000000000000#include #include #include #include #include #include #include #ifdef _WIN32 #include #endif #include "config.h" #ifdef HAVE_SYS_CPUSET_H #include #endif #ifdef HAVE_PTHREAD_H #include #endif #if defined(HAVE_PTHREAD_SETAFFINITY_NP) && defined(__FreeBSD__) #include #endif #include "log.h" #include "options.h" #include "search.h" #include "util.h" typedef struct { pthread_t thread; int id; } worker_t; int main(int argc, char **argv) { char **base_paths = NULL; char **paths = NULL; int i; int pcre_opts = PCRE_MULTILINE; int study_opts = 0; worker_t *workers = NULL; int workers_len; int num_cores; #ifdef HAVE_PLEDGE if (pledge("stdio rpath proc exec", NULL) == -1) { die("pledge: %s", strerror(errno)); } #endif set_log_level(LOG_LEVEL_WARN); work_queue = NULL; work_queue_tail = NULL; root_ignores = init_ignore(NULL, "", 0); out_fd = stdout; parse_options(argc, argv, &base_paths, &paths); log_debug("PCRE Version: 
%s", pcre_version()); if (opts.stats) { memset(&stats, 0, sizeof(stats)); gettimeofday(&(stats.time_start), NULL); } #ifdef USE_PCRE_JIT int has_jit = 0; pcre_config(PCRE_CONFIG_JIT, &has_jit); if (has_jit) { study_opts |= PCRE_STUDY_JIT_COMPILE; } #endif #ifdef _WIN32 { SYSTEM_INFO si; GetSystemInfo(&si); num_cores = si.dwNumberOfProcessors; } #else num_cores = (int)sysconf(_SC_NPROCESSORS_ONLN); #endif workers_len = num_cores < 8 ? num_cores : 8; if (opts.literal) { workers_len--; } if (opts.workers) { workers_len = opts.workers; } if (workers_len < 1) { workers_len = 1; } log_debug("Using %i workers", workers_len); done_adding_files = FALSE; workers = ag_calloc(workers_len, sizeof(worker_t)); if (pthread_cond_init(&files_ready, NULL)) { die("pthread_cond_init failed!"); } if (pthread_mutex_init(&print_mtx, NULL)) { die("pthread_mutex_init failed!"); } if (opts.stats && pthread_mutex_init(&stats_mtx, NULL)) { die("pthread_mutex_init failed!"); } if (pthread_mutex_init(&work_queue_mtx, NULL)) { die("pthread_mutex_init failed!"); } if (opts.casing == CASE_SMART) { opts.casing = is_lowercase(opts.query) ? CASE_INSENSITIVE : CASE_SENSITIVE; } if (opts.literal) { if (opts.casing == CASE_INSENSITIVE) { /* Search routine needs the query to be lowercase */ char *c = opts.query; for (; *c != '\0'; ++c) { *c = (char)tolower(*c); } } generate_alpha_skip(opts.query, opts.query_len, alpha_skip_lookup, opts.casing == CASE_SENSITIVE); find_skip_lookup = NULL; generate_find_skip(opts.query, opts.query_len, &find_skip_lookup, opts.casing == CASE_SENSITIVE); generate_hash(opts.query, opts.query_len, h_table, opts.casing == CASE_SENSITIVE); if (opts.word_regexp) { init_wordchar_table(); opts.literal_starts_wordchar = is_wordchar(opts.query[0]); opts.literal_ends_wordchar = is_wordchar(opts.query[opts.query_len - 1]); } } else { if (opts.casing == CASE_INSENSITIVE) { pcre_opts |= PCRE_CASELESS; } if (opts.word_regexp) { char *word_regexp_query; ag_asprintf(&word_regexp_query, "\\b(?:%s)\\b", opts.query); free(opts.query); opts.query = word_regexp_query; opts.query_len = strlen(opts.query); } compile_study(&opts.re, &opts.re_extra, opts.query, pcre_opts, study_opts); } if (opts.search_stream) { search_stream(stdin, ""); } else { for (i = 0; i < workers_len; i++) { workers[i].id = i; int rv = pthread_create(&(workers[i].thread), NULL, &search_file_worker, &(workers[i].id)); if (rv != 0) { die("Error in pthread_create(): %s", strerror(rv)); } #if defined(HAVE_PTHREAD_SETAFFINITY_NP) && (defined(USE_CPU_SET) || defined(HAVE_SYS_CPUSET_H)) if (opts.use_thread_affinity) { #ifdef __linux__ cpu_set_t cpu_set; #elif __FreeBSD__ cpuset_t cpu_set; #endif CPU_ZERO(&cpu_set); CPU_SET(i % num_cores, &cpu_set); rv = pthread_setaffinity_np(workers[i].thread, sizeof(cpu_set), &cpu_set); if (rv) { log_err("Error in pthread_setaffinity_np(): %s", strerror(rv)); log_err("Performance may be affected. 
Use --noaffinity to suppress this message."); } else { log_debug("Thread %i set to CPU %i", i, i); } } else { log_debug("Thread affinity disabled."); } #else log_debug("No CPU affinity support."); #endif } #ifdef HAVE_PLEDGE if (pledge("stdio rpath", NULL) == -1) { die("pledge: %s", strerror(errno)); } #endif for (i = 0; paths[i] != NULL; i++) { log_debug("searching path %s for %s", paths[i], opts.query); symhash = NULL; ignores *ig = init_ignore(root_ignores, "", 0); struct stat s = {.st_dev = 0 }; #ifndef _WIN32 /* The device is ignored if opts.one_dev is false, so it's fine * to leave it at the default 0 */ if (opts.one_dev && lstat(paths[i], &s) == -1) { log_err("Failed to get device information for path %s. Skipping...", paths[i]); } #endif search_dir(ig, base_paths[i], paths[i], 0, s.st_dev); cleanup_ignore(ig); } pthread_mutex_lock(&work_queue_mtx); done_adding_files = TRUE; pthread_cond_broadcast(&files_ready); pthread_mutex_unlock(&work_queue_mtx); for (i = 0; i < workers_len; i++) { if (pthread_join(workers[i].thread, NULL)) { die("pthread_join failed!"); } } } if (opts.stats) { gettimeofday(&(stats.time_end), NULL); double time_diff = ((long)stats.time_end.tv_sec * 1000000 + stats.time_end.tv_usec) - ((long)stats.time_start.tv_sec * 1000000 + stats.time_start.tv_usec); time_diff /= 1000000; printf("%ld matches\n%ld files contained matches\n%ld files searched\n%ld bytes searched\n%f seconds\n", stats.total_matches, stats.total_file_matches, stats.total_files, stats.total_bytes, time_diff); pthread_mutex_destroy(&stats_mtx); } if (opts.pager) { pclose(out_fd); } cleanup_options(); pthread_cond_destroy(&files_ready); pthread_mutex_destroy(&work_queue_mtx); pthread_mutex_destroy(&print_mtx); cleanup_ignore(root_ignores); free(workers); for (i = 0; paths[i] != NULL; i++) { free(paths[i]); free(base_paths[i]); } free(base_paths); free(paths); if (find_skip_lookup) { free(find_skip_lookup); } return !opts.match_found; } the_silver_searcher-2.1.0/src/options.c000066400000000000000000000757671314774034700202200ustar00rootroot00000000000000#include #include #include #include #include #include #include #include #include #include "config.h" #include "ignore.h" #include "lang.h" #include "log.h" #include "options.h" #include "print.h" #include "util.h" const char *color_line_number = "\033[1;33m"; /* bold yellow */ const char *color_match = "\033[30;43m"; /* black with yellow background */ const char *color_path = "\033[1;32m"; /* bold green */ /* TODO: try to obey out_fd? 
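(usage() below writes with plain printf, i.e. always to stdout, even when --pager has pointed out_fd at a pager pipe.)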
*/ void usage(void) { printf("\n"); printf("Usage: ag [FILE-TYPE] [OPTIONS] PATTERN [PATH]\n\n"); printf(" Recursively search for PATTERN in PATH.\n"); printf(" Like grep or ack, but faster.\n\n"); printf("Example:\n ag -i foo /bar/\n\n"); printf("\ Output Options:\n\ --ackmate Print results in AckMate-parseable format\n\ -A --after [LINES] Print lines after match (Default: 2)\n\ -B --before [LINES] Print lines before match (Default: 2)\n\ --[no]break Print newlines between matches in different files\n\ (Enabled by default)\n\ -c --count Only print the number of matches in each file.\n\ (This often differs from the number of matching lines)\n\ --[no]color Print color codes in results (Enabled by default)\n\ --color-line-number Color codes for line numbers (Default: 1;33)\n\ --color-match Color codes for result match numbers (Default: 30;43)\n\ --color-path Color codes for path names (Default: 1;32)\n\ "); #ifdef _WIN32 printf("\ --color-win-ansi Use ansi colors on Windows even where we can use native\n\ (pager/pipe colors are ansi regardless) (Default: off)\n\ "); #endif printf("\ --column Print column numbers in results\n\ --[no]filename Print file names (Enabled unless searching a single file)\n\ -H --[no]heading Print file names before each file's matches\n\ (Enabled by default)\n\ -C --context [LINES] Print lines before and after matches (Default: 2)\n\ --[no]group Same as --[no]break --[no]heading\n\ -g --filename-pattern PATTERN\n\ Print filenames matching PATTERN\n\ -l --files-with-matches Only print filenames that contain matches\n\ (don't print the matching lines)\n\ -L --files-without-matches\n\ Only print filenames that don't contain matches\n\ --print-all-files Print headings for all files searched, even those that\n\ don't contain matches\n\ --[no]numbers Print line numbers. 
Default is to omit line numbers\n\ when searching streams\n\ -o --only-matching Prints only the matching part of the lines\n\ --print-long-lines Print matches on very long lines (Default: >2k characters)\n\ --passthrough When searching a stream, print all lines even if they\n\ don't match\n\ --silent Suppress all log messages, including errors\n\ --stats Print stats (files scanned, time taken, etc.)\n\ --stats-only Print stats and nothing else.\n\ (Same as --count when searching a single file)\n\ --vimgrep Print results like vim's :vimgrep /pattern/g would\n\ (it reports every match on the line)\n\ -0 --null --print0 Separate filenames with null (for 'xargs -0')\n\ \n\ Search Options:\n\ -a --all-types Search all files (doesn't include hidden files\n\ or patterns from ignore files)\n\ -D --debug Ridiculous debugging (probably not useful)\n\ --depth NUM Search up to NUM directories deep (Default: 25)\n\ -f --follow Follow symlinks\n\ -F --fixed-strings Alias for --literal for compatibility with grep\n\ -G --file-search-regex PATTERN Limit search to filenames matching PATTERN\n\ --hidden Search hidden files (obeys .*ignore files)\n\ -i --ignore-case Match case insensitively\n\ --ignore PATTERN Ignore files/directories matching PATTERN\n\ (literal file/directory names also allowed)\n\ --ignore-dir NAME Alias for --ignore for compatibility with ack.\n\ -m --max-count NUM Skip the rest of a file after NUM matches (Default: 10,000)\n\ --one-device Don't follow links to other devices.\n\ -p --path-to-ignore STRING\n\ Use .ignore file at STRING\n\ -Q --literal Don't parse PATTERN as a regular expression\n\ -s --case-sensitive Match case sensitively\n\ -S --smart-case Match case insensitively unless PATTERN contains\n\ uppercase characters (Enabled by default)\n\ --search-binary Search binary files for matches\n\ -t --all-text Search all text files (doesn't include hidden files)\n\ -u --unrestricted Search all files (ignore .ignore, .gitignore, etc.;\n\ searches binary and hidden files as well)\n\ -U --skip-vcs-ignores Ignore VCS ignore files\n\ (.gitignore, .hgignore; still obey .ignore)\n\ -v --invert-match\n\ -w --word-regexp Only match whole words\n\ -W --width NUM Truncate match lines after NUM characters\n\ -z --search-zip Search contents of compressed (e.g., gzip) files\n\ \n"); printf("File Types:\n\ The search can be restricted to certain types of files. Example:\n\ ag --html needle\n\ - Searches for 'needle' in files with suffix .htm, .html, .shtml or .xhtml.\n\ \n\ For a list of supported file types run:\n\ ag --list-file-types\n\n\ ag was originally created by Geoff Greer. More information (and the latest release)\n\ can be found at http://geoff.greer.fm/ag\n"); } void print_version(void) { char jit = '-'; char lzma = '-'; char zlib = '-'; #ifdef USE_PCRE_JIT jit = '+'; #endif #ifdef HAVE_LZMA_H lzma = '+'; #endif #ifdef HAVE_ZLIB_H zlib = '+'; #endif printf("ag version %s\n\n", PACKAGE_VERSION); printf("Features:\n"); printf(" %cjit %clzma %czlib\n", jit, lzma, zlib); } void init_options(void) { memset(&opts, 0, sizeof(opts)); opts.casing = CASE_DEFAULT; opts.color = TRUE; opts.color_win_ansi = FALSE; opts.max_matches_per_file = 0; opts.max_search_depth = DEFAULT_MAX_SEARCH_DEPTH; #if defined(__APPLE__) || defined(__MACH__) /* mamp() is slower than normal read() on macos. 
default to off */ opts.mmap = FALSE; #else opts.mmap = TRUE; #endif opts.multiline = TRUE; opts.width = 0; opts.path_sep = '\n'; opts.print_break = TRUE; opts.print_path = PATH_PRINT_DEFAULT; opts.print_all_paths = FALSE; opts.print_line_numbers = TRUE; opts.recurse_dirs = TRUE; opts.color_path = ag_strdup(color_path); opts.color_match = ag_strdup(color_match); opts.color_line_number = ag_strdup(color_line_number); opts.use_thread_affinity = TRUE; } void cleanup_options(void) { free(opts.color_path); free(opts.color_match); free(opts.color_line_number); if (opts.query) { free(opts.query); } pcre_free(opts.re); if (opts.re_extra) { /* Using pcre_free_study on pcre_extra* can segfault on some versions of PCRE */ pcre_free(opts.re_extra); } if (opts.ackmate_dir_filter) { pcre_free(opts.ackmate_dir_filter); } if (opts.ackmate_dir_filter_extra) { pcre_free(opts.ackmate_dir_filter_extra); } if (opts.file_search_regex) { pcre_free(opts.file_search_regex); } if (opts.file_search_regex_extra) { pcre_free(opts.file_search_regex_extra); } } void parse_options(int argc, char **argv, char **base_paths[], char **paths[]) { int ch; size_t i; int path_len = 0; int base_path_len = 0; int useless = 0; int group = 1; int help = 0; int version = 0; int list_file_types = 0; int opt_index = 0; char *num_end; const char *home_dir = getenv("HOME"); char *ignore_file_path = NULL; int accepts_query = 1; int needs_query = 1; struct stat statbuf; int rv; size_t lang_count; size_t lang_num = 0; int has_filetype = 0; size_t longopts_len, full_len; option_t *longopts; char *lang_regex = NULL; size_t *ext_index = NULL; char *extensions = NULL; size_t num_exts = 0; init_options(); option_t base_longopts[] = { { "ackmate", no_argument, &opts.ackmate, 1 }, { "ackmate-dir-filter", required_argument, NULL, 0 }, { "affinity", no_argument, &opts.use_thread_affinity, 1 }, { "after", optional_argument, NULL, 'A' }, { "all-text", no_argument, NULL, 't' }, { "all-types", no_argument, NULL, 'a' }, { "before", optional_argument, NULL, 'B' }, { "break", no_argument, &opts.print_break, 1 }, { "case-sensitive", no_argument, NULL, 's' }, { "color", no_argument, &opts.color, 1 }, { "color-line-number", required_argument, NULL, 0 }, { "color-match", required_argument, NULL, 0 }, { "color-path", required_argument, NULL, 0 }, { "color-win-ansi", no_argument, &opts.color_win_ansi, TRUE }, { "column", no_argument, &opts.column, 1 }, { "context", optional_argument, NULL, 'C' }, { "count", no_argument, NULL, 'c' }, { "debug", no_argument, NULL, 'D' }, { "depth", required_argument, NULL, 0 }, { "filename", no_argument, NULL, 0 }, { "filename-pattern", required_argument, NULL, 'g' }, { "file-search-regex", required_argument, NULL, 'G' }, { "files-with-matches", no_argument, NULL, 'l' }, { "files-without-matches", no_argument, NULL, 'L' }, { "fixed-strings", no_argument, NULL, 'F' }, { "follow", no_argument, &opts.follow_symlinks, 1 }, { "group", no_argument, &group, 1 }, { "heading", no_argument, &opts.print_path, PATH_PRINT_TOP }, { "help", no_argument, NULL, 'h' }, { "hidden", no_argument, &opts.search_hidden_files, 1 }, { "ignore", required_argument, NULL, 0 }, { "ignore-case", no_argument, NULL, 'i' }, { "ignore-dir", required_argument, NULL, 0 }, { "invert-match", no_argument, NULL, 'v' }, /* deprecated for --numbers. Remove eventually. 
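Note that both --line-numbers and --numbers set print_line_numbers to 2 rather than 1; later code treats 2 as an explicit user request and keeps line numbers on even when searching a stream.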
*/ { "line-numbers", no_argument, &opts.print_line_numbers, 2 }, { "list-file-types", no_argument, &list_file_types, 1 }, { "literal", no_argument, NULL, 'Q' }, { "match", no_argument, &useless, 0 }, { "max-count", required_argument, NULL, 'm' }, { "mmap", no_argument, &opts.mmap, TRUE }, { "multiline", no_argument, &opts.multiline, TRUE }, /* Accept both --no-* and --no* forms for convenience/BC */ { "no-affinity", no_argument, &opts.use_thread_affinity, 0 }, { "noaffinity", no_argument, &opts.use_thread_affinity, 0 }, { "no-break", no_argument, &opts.print_break, 0 }, { "nobreak", no_argument, &opts.print_break, 0 }, { "no-color", no_argument, &opts.color, 0 }, { "nocolor", no_argument, &opts.color, 0 }, { "no-filename", no_argument, NULL, 0 }, { "nofilename", no_argument, NULL, 0 }, { "no-follow", no_argument, &opts.follow_symlinks, 0 }, { "nofollow", no_argument, &opts.follow_symlinks, 0 }, { "no-group", no_argument, &group, 0 }, { "nogroup", no_argument, &group, 0 }, { "no-heading", no_argument, &opts.print_path, PATH_PRINT_EACH_LINE }, { "noheading", no_argument, &opts.print_path, PATH_PRINT_EACH_LINE }, { "no-mmap", no_argument, &opts.mmap, FALSE }, { "nommap", no_argument, &opts.mmap, FALSE }, { "no-multiline", no_argument, &opts.multiline, FALSE }, { "nomultiline", no_argument, &opts.multiline, FALSE }, { "no-numbers", no_argument, &opts.print_line_numbers, FALSE }, { "nonumbers", no_argument, &opts.print_line_numbers, FALSE }, { "no-pager", no_argument, NULL, 0 }, { "nopager", no_argument, NULL, 0 }, { "no-recurse", no_argument, NULL, 'n' }, { "norecurse", no_argument, NULL, 'n' }, { "null", no_argument, NULL, '0' }, { "numbers", no_argument, &opts.print_line_numbers, 2 }, { "only-matching", no_argument, NULL, 'o' }, { "one-device", no_argument, &opts.one_dev, 1 }, { "pager", required_argument, NULL, 0 }, { "parallel", no_argument, &opts.parallel, 1 }, { "passthrough", no_argument, &opts.passthrough, 1 }, { "passthru", no_argument, &opts.passthrough, 1 }, { "path-to-ignore", required_argument, NULL, 'p' }, { "print0", no_argument, NULL, '0' }, { "print-all-files", no_argument, NULL, 0 }, { "print-long-lines", no_argument, &opts.print_long_lines, 1 }, { "recurse", no_argument, NULL, 'r' }, { "search-binary", no_argument, &opts.search_binary_files, 1 }, { "search-files", no_argument, &opts.search_stream, 0 }, { "search-zip", no_argument, &opts.search_zip_files, 1 }, { "silent", no_argument, NULL, 0 }, { "skip-vcs-ignores", no_argument, NULL, 'U' }, { "smart-case", no_argument, NULL, 'S' }, { "stats", no_argument, &opts.stats, 1 }, { "stats-only", no_argument, NULL, 0 }, { "unrestricted", no_argument, NULL, 'u' }, { "version", no_argument, &version, 1 }, { "vimgrep", no_argument, &opts.vimgrep, 1 }, { "width", required_argument, NULL, 'W' }, { "word-regexp", no_argument, NULL, 'w' }, { "workers", required_argument, NULL, 0 }, }; lang_count = get_lang_count(); longopts_len = (sizeof(base_longopts) / sizeof(option_t)); full_len = (longopts_len + lang_count + 1); longopts = ag_malloc(full_len * sizeof(option_t)); memcpy(longopts, base_longopts, sizeof(base_longopts)); ext_index = (size_t *)ag_malloc(sizeof(size_t) * lang_count); memset(ext_index, 0, sizeof(size_t) * lang_count); for (i = 0; i < lang_count; i++) { option_t opt = { langs[i].name, no_argument, NULL, 0 }; longopts[i + longopts_len] = opt; } longopts[full_len - 1] = (option_t){ NULL, 0, NULL, 0 }; if (argc < 2) { usage(); cleanup_ignore(root_ignores); cleanup_options(); exit(1); } rv = fstat(fileno(stdin), &statbuf); if (rv 
== 0) { if (S_ISFIFO(statbuf.st_mode) || S_ISREG(statbuf.st_mode)) { opts.search_stream = 1; } } /* If we're not outputting to a terminal. change output to: * turn off colors * print filenames on every line */ if (!isatty(fileno(stdout))) { opts.color = 0; group = 0; /* Don't search the file that stdout is redirected to */ rv = fstat(fileno(stdout), &statbuf); if (rv != 0) { die("Error fstat()ing stdout"); } opts.stdout_inode = statbuf.st_ino; } char *file_search_regex = NULL; while ((ch = getopt_long(argc, argv, "A:aB:C:cDG:g:FfHhiLlm:nop:QRrSsvVtuUwW:z0", longopts, &opt_index)) != -1) { switch (ch) { case 'A': if (optarg) { opts.after = strtol(optarg, &num_end, 10); if (num_end == optarg || *num_end != '\0' || errno == ERANGE) { /* This arg must be the search string instead of the after length */ optind--; opts.after = DEFAULT_AFTER_LEN; } } else { opts.after = DEFAULT_AFTER_LEN; } break; case 'a': opts.search_all_files = 1; opts.search_binary_files = 1; break; case 'B': if (optarg) { opts.before = strtol(optarg, &num_end, 10); if (num_end == optarg || *num_end != '\0' || errno == ERANGE) { /* This arg must be the search string instead of the before length */ optind--; opts.before = DEFAULT_BEFORE_LEN; } } else { opts.before = DEFAULT_BEFORE_LEN; } break; case 'C': if (optarg) { opts.context = strtol(optarg, &num_end, 10); if (num_end == optarg || *num_end != '\0' || errno == ERANGE) { /* This arg must be the search string instead of the context length */ optind--; opts.context = DEFAULT_CONTEXT_LEN; } } else { opts.context = DEFAULT_CONTEXT_LEN; } break; case 'c': opts.print_count = 1; opts.print_filename_only = 1; break; case 'D': set_log_level(LOG_LEVEL_DEBUG); break; case 'f': opts.follow_symlinks = 1; break; case 'g': needs_query = accepts_query = 0; opts.match_files = 1; /* Fall through so regex is built */ case 'G': if (file_search_regex) { log_err("File search regex (-g or -G) already specified."); usage(); exit(1); } file_search_regex = ag_strdup(optarg); break; case 'H': opts.print_path = PATH_PRINT_TOP; break; case 'h': help = 1; break; case 'i': opts.casing = CASE_INSENSITIVE; break; case 'L': opts.invert_match = 1; /* fall through */ case 'l': needs_query = 0; opts.print_filename_only = 1; opts.print_path = PATH_PRINT_TOP; break; case 'm': opts.max_matches_per_file = atoi(optarg); break; case 'n': opts.recurse_dirs = 0; break; case 'p': opts.path_to_ignore = TRUE; load_ignore_patterns(root_ignores, optarg); break; case 'o': opts.only_matching = 1; break; case 'F': case 'Q': opts.literal = 1; break; case 'R': case 'r': opts.recurse_dirs = 1; break; case 'S': opts.casing = CASE_SMART; break; case 's': opts.casing = CASE_SENSITIVE; break; case 't': opts.search_all_files = 1; break; case 'u': opts.search_binary_files = 1; opts.search_all_files = 1; opts.search_hidden_files = 1; break; case 'U': opts.skip_vcs_ignores = 1; break; case 'v': opts.invert_match = 1; /* Color highlighting doesn't make sense when inverting matches */ opts.color = 0; break; case 'V': version = 1; break; case 'w': opts.word_regexp = 1; break; case 'W': opts.width = strtol(optarg, &num_end, 10); if (num_end == optarg || *num_end != '\0' || errno == ERANGE) { die("Invalid width\n"); } break; case 'z': opts.search_zip_files = 1; break; case '0': opts.path_sep = '\0'; break; case 0: /* Long option */ if (strcmp(longopts[opt_index].name, "ackmate-dir-filter") == 0) { compile_study(&opts.ackmate_dir_filter, &opts.ackmate_dir_filter_extra, optarg, 0, 0); break; } else if (strcmp(longopts[opt_index].name, 
"depth") == 0) { opts.max_search_depth = atoi(optarg); break; } else if (strcmp(longopts[opt_index].name, "filename") == 0) { opts.print_path = PATH_PRINT_DEFAULT; opts.print_line_numbers = TRUE; break; } else if (strcmp(longopts[opt_index].name, "ignore-dir") == 0) { add_ignore_pattern(root_ignores, optarg); break; } else if (strcmp(longopts[opt_index].name, "ignore") == 0) { add_ignore_pattern(root_ignores, optarg); break; } else if (strcmp(longopts[opt_index].name, "no-filename") == 0 || strcmp(longopts[opt_index].name, "nofilename") == 0) { opts.print_path = PATH_PRINT_NOTHING; opts.print_line_numbers = FALSE; break; } else if (strcmp(longopts[opt_index].name, "no-pager") == 0 || strcmp(longopts[opt_index].name, "nopager") == 0) { out_fd = stdout; opts.pager = NULL; break; } else if (strcmp(longopts[opt_index].name, "pager") == 0) { opts.pager = optarg; break; } else if (strcmp(longopts[opt_index].name, "print-all-files") == 0) { opts.print_all_paths = TRUE; break; } else if (strcmp(longopts[opt_index].name, "workers") == 0) { opts.workers = atoi(optarg); break; } else if (strcmp(longopts[opt_index].name, "color-line-number") == 0) { free(opts.color_line_number); ag_asprintf(&opts.color_line_number, "\033[%sm", optarg); break; } else if (strcmp(longopts[opt_index].name, "color-match") == 0) { free(opts.color_match); ag_asprintf(&opts.color_match, "\033[%sm", optarg); break; } else if (strcmp(longopts[opt_index].name, "color-path") == 0) { free(opts.color_path); ag_asprintf(&opts.color_path, "\033[%sm", optarg); break; } else if (strcmp(longopts[opt_index].name, "silent") == 0) { set_log_level(LOG_LEVEL_NONE); break; } else if (strcmp(longopts[opt_index].name, "stats-only") == 0) { opts.print_filename_only = 1; opts.print_path = PATH_PRINT_NOTHING; opts.stats = 1; break; } /* Continue to usage if we don't recognize the option */ if (longopts[opt_index].flag != 0) { break; } for (i = 0; i < lang_count; i++) { if (strcmp(longopts[opt_index].name, langs[i].name) == 0) { has_filetype = 1; ext_index[lang_num++] = i; break; } } if (i != lang_count) { break; } log_err("option %s does not take a value", longopts[opt_index].name); default: usage(); exit(1); } } if (opts.casing == CASE_DEFAULT) { opts.casing = CASE_SMART; } if (file_search_regex) { int pcre_opts = 0; if (opts.casing == CASE_INSENSITIVE || (opts.casing == CASE_SMART && is_lowercase(file_search_regex))) { pcre_opts |= PCRE_CASELESS; } if (opts.word_regexp) { char *old_file_search_regex = file_search_regex; ag_asprintf(&file_search_regex, "\\b%s\\b", file_search_regex); free(old_file_search_regex); } compile_study(&opts.file_search_regex, &opts.file_search_regex_extra, file_search_regex, pcre_opts, 0); free(file_search_regex); } if (has_filetype) { num_exts = combine_file_extensions(ext_index, lang_num, &extensions); lang_regex = make_lang_regex(extensions, num_exts); compile_study(&opts.file_search_regex, &opts.file_search_regex_extra, lang_regex, 0, 0); } if (extensions) { free(extensions); } free(ext_index); if (lang_regex) { free(lang_regex); } free(longopts); argc -= optind; argv += optind; if (opts.pager) { out_fd = popen(opts.pager, "w"); if (!out_fd) { perror("Failed to run pager"); exit(1); } } #ifdef HAVE_PLEDGE if (opts.skip_vcs_ignores) { if (pledge("stdio rpath proc", NULL) == -1) { die("pledge: %s", strerror(errno)); } } #endif if (help) { usage(); exit(0); } if (version) { print_version(); exit(0); } if (list_file_types) { size_t lang_index; printf("The following file types are supported:\n"); for (lang_index = 0; 
lang_index < lang_count; lang_index++) { printf(" --%s\n ", langs[lang_index].name); int j; for (j = 0; j < MAX_EXTENSIONS && langs[lang_index].extensions[j]; j++) { printf(" .%s", langs[lang_index].extensions[j]); } printf("\n\n"); } exit(0); } if (needs_query && argc == 0) { log_err("What do you want to search for?"); exit(1); } if (home_dir && !opts.search_all_files) { log_debug("Found user's home dir: %s", home_dir); ag_asprintf(&ignore_file_path, "%s/.agignore", home_dir); load_ignore_patterns(root_ignores, ignore_file_path); free(ignore_file_path); } if (!opts.skip_vcs_ignores) { FILE *gitconfig_file = NULL; size_t buf_len = 0; char *gitconfig_res = NULL; #ifdef _WIN32 gitconfig_file = popen("git config -z --path --get core.excludesfile 2>NUL", "r"); #else gitconfig_file = popen("git config -z --path --get core.excludesfile 2>/dev/null", "r"); #endif if (gitconfig_file != NULL) { do { gitconfig_res = ag_realloc(gitconfig_res, buf_len + 65); buf_len += fread(gitconfig_res + buf_len, 1, 64, gitconfig_file); } while (!feof(gitconfig_file) && buf_len > 0 && buf_len % 64 == 0); gitconfig_res[buf_len] = '\0'; if (buf_len == 0) { free(gitconfig_res); const char *config_home = getenv("XDG_CONFIG_HOME"); if (config_home) { ag_asprintf(&gitconfig_res, "%s/%s", config_home, "git/ignore"); } else { ag_asprintf(&gitconfig_res, "%s/%s", home_dir, ".config/git/ignore"); } } log_debug("global core.excludesfile: %s", gitconfig_res); load_ignore_patterns(root_ignores, gitconfig_res); free(gitconfig_res); pclose(gitconfig_file); } } #ifdef HAVE_PLEDGE if (pledge("stdio rpath proc", NULL) == -1) { die("pledge: %s", strerror(errno)); } #endif if (opts.context > 0) { opts.before = opts.context; opts.after = opts.context; } if (opts.ackmate) { opts.color = 0; opts.print_break = 1; group = 1; opts.search_stream = 0; } if (opts.vimgrep) { opts.color = 0; opts.print_break = 0; group = 1; opts.search_stream = 0; opts.print_path = PATH_PRINT_NOTHING; } if (opts.parallel) { opts.search_stream = 0; } if (!(opts.print_path != PATH_PRINT_DEFAULT || opts.print_break == 0)) { if (group) { opts.print_break = 1; } else { opts.print_path = PATH_PRINT_DEFAULT_EACH_LINE; opts.print_break = 0; } } if (opts.search_stream) { opts.print_break = 0; opts.print_path = PATH_PRINT_NOTHING; if (opts.print_line_numbers != 2) { opts.print_line_numbers = 0; } } if (accepts_query && argc > 0) { if (!needs_query && strlen(argv[0]) == 0) { // use default query opts.query = ag_strdup("."); } else { // use the provided query opts.query = ag_strdup(argv[0]); } argc--; argv++; } else if (!needs_query) { // use default query opts.query = ag_strdup("."); } opts.query_len = strlen(opts.query); log_debug("Query is %s", opts.query); if (opts.query_len == 0) { log_err("Error: No query. 
What do you want to search for?"); exit(1); } if (!is_regex(opts.query)) { opts.literal = 1; } char *path = NULL; char *base_path = NULL; #ifdef PATH_MAX char *tmp = NULL; #endif opts.paths_len = argc; if (argc > 0) { *paths = ag_calloc(sizeof(char *), argc + 1); *base_paths = ag_calloc(sizeof(char *), argc + 1); for (i = 0; i < (size_t)argc; i++) { path = ag_strdup(argv[i]); path_len = strlen(path); /* kill trailing slash */ if (path_len > 1 && path[path_len - 1] == '/') { path[path_len - 1] = '\0'; } (*paths)[i] = path; #ifdef PATH_MAX tmp = ag_malloc(PATH_MAX); base_path = realpath(path, tmp); #else base_path = realpath(path, NULL); #endif if (base_path) { base_path_len = strlen(base_path); /* add trailing slash */ if (base_path_len > 1 && base_path[base_path_len - 1] != '/') { base_path = ag_realloc(base_path, base_path_len + 2); base_path[base_path_len] = '/'; base_path[base_path_len + 1] = '\0'; } } (*base_paths)[i] = base_path; } /* Make sure we search these paths instead of stdin. */ opts.search_stream = 0; } else { path = ag_strdup("."); *paths = ag_malloc(sizeof(char *) * 2); *base_paths = ag_malloc(sizeof(char *) * 2); (*paths)[0] = path; #ifdef PATH_MAX tmp = ag_malloc(PATH_MAX); (*base_paths)[0] = realpath(path, tmp); #else (*base_paths)[0] = realpath(path, NULL); #endif i = 1; } (*paths)[i] = NULL; (*base_paths)[i] = NULL; #ifdef _WIN32 windows_use_ansi(opts.color_win_ansi); #endif } the_silver_searcher-2.1.0/src/options.h000066400000000000000000000051671314774034700202100ustar00rootroot00000000000000#ifndef OPTIONS_H #define OPTIONS_H #include #include #include #define DEFAULT_AFTER_LEN 2 #define DEFAULT_BEFORE_LEN 2 #define DEFAULT_CONTEXT_LEN 2 #define DEFAULT_MAX_SEARCH_DEPTH 25 enum case_behavior { CASE_DEFAULT, /* Changes to CASE_SMART at the end of option parsing */ CASE_SENSITIVE, CASE_INSENSITIVE, CASE_SMART, CASE_SENSITIVE_RETRY_INSENSITIVE /* for future use */ }; enum path_print_behavior { PATH_PRINT_DEFAULT, /* PRINT_TOP if > 1 file being searched, else PRINT_NOTHING */ PATH_PRINT_DEFAULT_EACH_LINE, /* PRINT_EACH_LINE if > 1 file being searched, else PRINT_NOTHING */ PATH_PRINT_TOP, PATH_PRINT_EACH_LINE, PATH_PRINT_NOTHING }; typedef struct { int ackmate; pcre *ackmate_dir_filter; pcre_extra *ackmate_dir_filter_extra; size_t after; size_t before; enum case_behavior casing; const char *file_search_string; int match_files; pcre *file_search_regex; pcre_extra *file_search_regex_extra; int color; char *color_line_number; char *color_match; char *color_path; int color_win_ansi; int column; int context; int follow_symlinks; int invert_match; int literal; int literal_starts_wordchar; int literal_ends_wordchar; size_t max_matches_per_file; int max_search_depth; int mmap; int multiline; int one_dev; int only_matching; char path_sep; int path_to_ignore; int print_break; int print_count; int print_filename_only; int print_path; int print_all_paths; int print_line_numbers; int print_long_lines; /* TODO: support this in print.c */ int passthrough; pcre *re; pcre_extra *re_extra; int recurse_dirs; int search_all_files; int skip_vcs_ignores; int search_binary_files; int search_zip_files; int search_hidden_files; int search_stream; /* true if tail -F blah | ag */ int stats; size_t stream_line_num; /* This should totally not be in here */ int match_found; /* This should totally not be in here */ ino_t stdout_inode; char *query; int query_len; char *pager; int paths_len; int parallel; int use_thread_affinity; int vimgrep; size_t width; int word_regexp; int workers; } cli_options; 
/* global options. parse_options gives it sane values, everything else reads from it */ cli_options opts; typedef struct option option_t; void usage(void); void print_version(void); void init_options(void); void parse_options(int argc, char **argv, char **base_paths[], char **paths[]); void cleanup_options(void); #endif the_silver_searcher-2.1.0/src/print.c000066400000000000000000000337761314774034700176530ustar00rootroot00000000000000#include #include #include #include #include #include "ignore.h" #include "log.h" #include "options.h" #include "print.h" #include "search.h" #include "util.h" #ifdef _WIN32 #define fprintf(...) fprintf_w32(__VA_ARGS__) #endif int first_file_match = 1; const char *color_reset = "\033[0m\033[K"; const char *truncate_marker = " [...]"; __thread struct print_context { size_t line; char **context_prev_lines; size_t prev_line; size_t last_prev_line; size_t prev_line_offset; size_t line_preceding_current_match_offset; size_t lines_since_last_match; size_t last_printed_match; int in_a_match; int printing_a_match; } print_context; void print_init_context(void) { if (print_context.context_prev_lines != NULL) { return; } print_context.context_prev_lines = ag_calloc(sizeof(char *), (opts.before + 1)); print_context.line = 1; print_context.prev_line = 0; print_context.last_prev_line = 0; print_context.prev_line_offset = 0; print_context.line_preceding_current_match_offset = 0; print_context.lines_since_last_match = INT_MAX; print_context.last_printed_match = 0; print_context.in_a_match = FALSE; print_context.printing_a_match = FALSE; } void print_cleanup_context(void) { size_t i; if (print_context.context_prev_lines == NULL) { return; } for (i = 0; i < opts.before; i++) { if (print_context.context_prev_lines[i] != NULL) { free(print_context.context_prev_lines[i]); } } free(print_context.context_prev_lines); print_context.context_prev_lines = NULL; } void print_context_append(const char *line, size_t len) { if (opts.before == 0) { return; } if (print_context.context_prev_lines[print_context.last_prev_line] != NULL) { free(print_context.context_prev_lines[print_context.last_prev_line]); } print_context.context_prev_lines[print_context.last_prev_line] = ag_strndup(line, len); print_context.last_prev_line = (print_context.last_prev_line + 1) % opts.before; } void print_trailing_context(const char *path, const char *buf, size_t n) { char sep = '-'; if (opts.ackmate || opts.vimgrep) { sep = ':'; } if (print_context.lines_since_last_match != 0 && print_context.lines_since_last_match <= opts.after) { if (opts.print_path == PATH_PRINT_EACH_LINE) { print_path(path, ':'); } print_line_number(print_context.line, sep); fwrite(buf, 1, n, out_fd); fputc('\n', out_fd); } print_context.line++; if (!print_context.in_a_match && print_context.lines_since_last_match < INT_MAX) { print_context.lines_since_last_match++; } } void print_path(const char *path, const char sep) { if (opts.print_path == PATH_PRINT_NOTHING && !opts.vimgrep) { return; } path = normalize_path(path); if (opts.ackmate) { fprintf(out_fd, ":%s%c", path, sep); } else if (opts.vimgrep) { fprintf(out_fd, "%s%c", path, sep); } else { if (opts.color) { fprintf(out_fd, "%s%s%s%c", opts.color_path, path, color_reset, sep); } else { fprintf(out_fd, "%s%c", path, sep); } } } void print_path_count(const char *path, const char sep, const size_t count) { if (*path) { print_path(path, ':'); } if (opts.color) { fprintf(out_fd, "%s%lu%s%c", opts.color_line_number, (unsigned long)count, color_reset, sep); } else { fprintf(out_fd, "%lu%c", 
(unsigned long)count, sep); } } void print_line(const char *buf, size_t buf_pos, size_t prev_line_offset) { size_t write_chars = buf_pos - prev_line_offset + 1; if (opts.width > 0 && opts.width < write_chars) { write_chars = opts.width; } fwrite(buf + prev_line_offset, 1, write_chars, out_fd); } void print_binary_file_matches(const char *path) { path = normalize_path(path); print_file_separator(); fprintf(out_fd, "Binary file %s matches.\n", path); } void print_file_matches(const char *path, const char *buf, const size_t buf_len, const match_t matches[], const size_t matches_len) { size_t cur_match = 0; ssize_t lines_to_print = 0; char sep = '-'; size_t i, j; int blanks_between_matches = opts.context || opts.after || opts.before; if (opts.ackmate || opts.vimgrep) { sep = ':'; } print_file_separator(); if (opts.print_path == PATH_PRINT_DEFAULT) { opts.print_path = PATH_PRINT_TOP; } else if (opts.print_path == PATH_PRINT_DEFAULT_EACH_LINE) { opts.print_path = PATH_PRINT_EACH_LINE; } if (opts.print_path == PATH_PRINT_TOP) { if (opts.print_count) { print_path_count(path, opts.path_sep, matches_len); } else { print_path(path, opts.path_sep); } } for (i = 0; i <= buf_len && (cur_match < matches_len || print_context.lines_since_last_match <= opts.after); i++) { if (cur_match < matches_len && i == matches[cur_match].start) { print_context.in_a_match = TRUE; /* We found the start of a match */ if (cur_match > 0 && blanks_between_matches && print_context.lines_since_last_match > (opts.before + opts.after + 1)) { fprintf(out_fd, "--\n"); } if (print_context.lines_since_last_match > 0 && opts.before > 0) { /* TODO: better, but still needs work */ /* print the previous line(s) */ lines_to_print = print_context.lines_since_last_match - (opts.after + 1); if (lines_to_print < 0) { lines_to_print = 0; } else if ((size_t)lines_to_print > opts.before) { lines_to_print = opts.before; } for (j = (opts.before - lines_to_print); j < opts.before; j++) { print_context.prev_line = (print_context.last_prev_line + j) % opts.before; if (print_context.context_prev_lines[print_context.prev_line] != NULL) { if (opts.print_path == PATH_PRINT_EACH_LINE) { print_path(path, ':'); } print_line_number(print_context.line - (opts.before - j), sep); fprintf(out_fd, "%s\n", print_context.context_prev_lines[print_context.prev_line]); } } } print_context.lines_since_last_match = 0; } if (cur_match < matches_len && i == matches[cur_match].end) { /* We found the end of a match. */ cur_match++; print_context.in_a_match = FALSE; } /* We found the end of a line. */ if ((i == buf_len || buf[i] == '\n') && opts.before > 0) { /* We don't want to strcpy the \n */ print_context_append(&buf[print_context.prev_line_offset], i - print_context.prev_line_offset); } if (i == buf_len || buf[i] == '\n') { if (print_context.lines_since_last_match == 0) { if (opts.print_path == PATH_PRINT_EACH_LINE && !opts.search_stream) { print_path(path, ':'); } if (opts.ackmate) { /* print headers for ackmate to parse */ print_line_number(print_context.line, ';'); for (; print_context.last_printed_match < cur_match; print_context.last_printed_match++) { size_t start = matches[print_context.last_printed_match].start - print_context.line_preceding_current_match_offset; fprintf(out_fd, "%lu %lu", start, matches[print_context.last_printed_match].end - matches[print_context.last_printed_match].start); print_context.last_printed_match == cur_match - 1 ? 
fputc(':', out_fd) : fputc(',', out_fd); } print_line(buf, i, print_context.prev_line_offset); } else if (opts.vimgrep) { for (; print_context.last_printed_match < cur_match; print_context.last_printed_match++) { print_path(path, sep); print_line_number(print_context.line, sep); print_column_number(matches, print_context.last_printed_match, print_context.prev_line_offset, sep); print_line(buf, i, print_context.prev_line_offset); } } else { print_line_number(print_context.line, ':'); int printed_match = FALSE; if (opts.column) { print_column_number(matches, print_context.last_printed_match, print_context.prev_line_offset, ':'); } if (print_context.printing_a_match && opts.color) { fprintf(out_fd, "%s", opts.color_match); } for (j = print_context.prev_line_offset; j <= i; j++) { /* close highlight of match term */ if (print_context.last_printed_match < matches_len && j == matches[print_context.last_printed_match].end) { if (opts.color) { fprintf(out_fd, "%s", color_reset); } print_context.printing_a_match = FALSE; print_context.last_printed_match++; printed_match = TRUE; if (opts.only_matching) { fputc('\n', out_fd); } } /* skip remaining characters if truncation width exceeded, needs to be done * before highlight opening */ if (j < buf_len && opts.width > 0 && j - print_context.prev_line_offset >= opts.width) { if (j < i) { fputs(truncate_marker, out_fd); } fputc('\n', out_fd); /* prevent any more characters or highlights */ j = i; print_context.last_printed_match = matches_len; } /* open highlight of match term */ if (print_context.last_printed_match < matches_len && j == matches[print_context.last_printed_match].start) { if (opts.only_matching && printed_match) { if (opts.print_path == PATH_PRINT_EACH_LINE) { print_path(path, ':'); } print_line_number(print_context.line, ':'); if (opts.column) { print_column_number(matches, print_context.last_printed_match, print_context.prev_line_offset, ':'); } } if (opts.color) { fprintf(out_fd, "%s", opts.color_match); } print_context.printing_a_match = TRUE; } /* Don't print the null terminator */ if (j < buf_len) { /* if only_matching is set, print only matches and newlines */ if (!opts.only_matching || print_context.printing_a_match) { if (opts.width == 0 || j - print_context.prev_line_offset < opts.width) { fputc(buf[j], out_fd); } } } } if (print_context.printing_a_match && opts.color) { fprintf(out_fd, "%s", color_reset); } } } if (opts.search_stream) { print_context.last_printed_match = 0; break; } /* print context after matching line */ print_trailing_context(path, &buf[print_context.prev_line_offset], i - print_context.prev_line_offset); print_context.prev_line_offset = i + 1; /* skip the newline */ if (!print_context.in_a_match) { print_context.line_preceding_current_match_offset = i + 1; } /* File doesn't end with a newline. Print one so the output is pretty. 
*/ if (i == buf_len && buf[i - 1] != '\n') { fputc('\n', out_fd); } } } /* Flush output if stdout is not a tty */ if (opts.stdout_inode) { fflush(out_fd); } } void print_line_number(size_t line, const char sep) { if (!opts.print_line_numbers) { return; } if (opts.color) { fprintf(out_fd, "%s%lu%s%c", opts.color_line_number, (unsigned long)line, color_reset, sep); } else { fprintf(out_fd, "%lu%c", (unsigned long)line, sep); } } void print_column_number(const match_t matches[], size_t last_printed_match, size_t prev_line_offset, const char sep) { size_t column = 0; if (prev_line_offset <= matches[last_printed_match].start) { column = (matches[last_printed_match].start - prev_line_offset) + 1; } fprintf(out_fd, "%lu%c", (unsigned long)column, sep); } void print_file_separator(void) { if (first_file_match == 0 && opts.print_break) { fprintf(out_fd, "\n"); } first_file_match = 0; } const char *normalize_path(const char *path) { if (strlen(path) < 3) { return path; } if (path[0] == '.' && path[1] == '/') { return path + 2; } if (path[0] == '/' && path[1] == '/') { return path + 1; } return path; } the_silver_searcher-2.1.0/src/print.h000066400000000000000000000020051314774034700176350ustar00rootroot00000000000000#ifndef PRINT_H #define PRINT_H #include "util.h" void print_init_context(void); void print_cleanup_context(void); void print_context_append(const char *line, size_t len); void print_trailing_context(const char *path, const char *buf, size_t n); void print_path(const char *path, const char sep); void print_path_count(const char *path, const char sep, const size_t count); void print_line(const char *buf, size_t buf_pos, size_t prev_line_offset); void print_binary_file_matches(const char *path); void print_file_matches(const char *path, const char *buf, const size_t buf_len, const match_t matches[], const size_t matches_len); void print_line_number(size_t line, const char sep); void print_column_number(const match_t matches[], size_t last_printed_match, size_t prev_line_offset, const char sep); void print_file_separator(void); const char *normalize_path(const char *path); #ifdef _WIN32 void windows_use_ansi(int use_ansi); int fprintf_w32(FILE *fp, const char *format, ...); #endif #endif the_silver_searcher-2.1.0/src/print_w32.c000066400000000000000000000340231314774034700203300ustar00rootroot00000000000000#ifdef _WIN32 #include "print.h" #include #include #include #include #ifndef FOREGROUND_MASK #define FOREGROUND_MASK (FOREGROUND_RED | FOREGROUND_BLUE | FOREGROUND_GREEN | FOREGROUND_INTENSITY) #endif #ifndef BACKGROUND_MASK #define BACKGROUND_MASK (BACKGROUND_RED | BACKGROUND_BLUE | BACKGROUND_GREEN | BACKGROUND_INTENSITY) #endif #define FG_RGB (FOREGROUND_RED | FOREGROUND_BLUE | FOREGROUND_GREEN) #define BG_RGB (BACKGROUND_RED | BACKGROUND_BLUE | BACKGROUND_GREEN) // BUFSIZ is guarenteed to be "at least 256 bytes" which might // not be enough for us. Use an arbitrary but reasonably big // buffer. win32 colored output will be truncated to this length. #define BUF_SIZE (16 * 1024) // max consecutive ansi sequence values beyond which we're aborting // e.g. this is 3 values: \e[0;1;33m #define MAX_VALUES 8 static int g_use_ansi = 0; void windows_use_ansi(int use_ansi) { g_use_ansi = use_ansi; } int fprintf_w32(FILE *fp, const char *format, ...) 
{ va_list args; char buf[BUF_SIZE] = { 0 }, *ptr = buf; static WORD attr_reset; static BOOL attr_initialized = FALSE; HANDLE stdo = INVALID_HANDLE_VALUE; WORD attr; DWORD written, csize; CONSOLE_CURSOR_INFO cci; CONSOLE_SCREEN_BUFFER_INFO csbi; COORD coord; // if we don't output to screen (tty) e.g. for pager/pipe or // if for other reason we can't access the screen info, of if // the user just prefers ansi, do plain passthrough. BOOL passthrough = g_use_ansi || !isatty(fileno(fp)) || INVALID_HANDLE_VALUE == (stdo = (HANDLE)_get_osfhandle(fileno(fp))) || !GetConsoleScreenBufferInfo(stdo, &csbi); if (passthrough) { int rv; va_start(args, format); rv = vfprintf(fp, format, args); va_end(args); return rv; } va_start(args, format); // truncates to (null terminated) BUF_SIZE if too long. // if too long - vsnprintf will fill count chars without // terminating null. buf is zeroed, so make sure we don't fill it. vsnprintf(buf, BUF_SIZE - 1, format, args); va_end(args); attr = csbi.wAttributes; if (!attr_initialized) { // reset is defined to have all (non color) attributes off attr_reset = attr & (FG_RGB | BG_RGB); attr_initialized = TRUE; } while (*ptr) { if (*ptr == '\033') { unsigned char c; int i, n = 0, m = '\0', v[MAX_VALUES], w, h; for (i = 0; i < MAX_VALUES; i++) v[i] = -1; ptr++; retry: if ((c = *ptr++) == 0) break; if (isdigit(c)) { if (v[n] == -1) v[n] = c - '0'; else v[n] = v[n] * 10 + c - '0'; goto retry; } if (c == '[') { goto retry; } if (c == ';') { if (++n == MAX_VALUES) break; goto retry; } if (c == '>' || c == '?') { m = c; goto retry; } switch (c) { // n is the last occupied index, so we have n+1 values case 'h': if (m == '?') { for (i = 0; i <= n; i++) { switch (v[i]) { case 3: GetConsoleScreenBufferInfo(stdo, &csbi); w = csbi.dwSize.X; h = csbi.srWindow.Bottom - csbi.srWindow.Top + 1; csize = w * (h + 1); coord.X = 0; coord.Y = csbi.srWindow.Top; FillConsoleOutputCharacter(stdo, ' ', csize, coord, &written); FillConsoleOutputAttribute(stdo, csbi.wAttributes, csize, coord, &written); SetConsoleCursorPosition(stdo, csbi.dwCursorPosition); csbi.dwSize.X = 132; SetConsoleScreenBufferSize(stdo, csbi.dwSize); csbi.srWindow.Right = csbi.srWindow.Left + 131; SetConsoleWindowInfo(stdo, TRUE, &csbi.srWindow); break; case 5: attr = ((attr & FOREGROUND_MASK) << 4) | ((attr & BACKGROUND_MASK) >> 4); SetConsoleTextAttribute(stdo, attr); break; case 9: break; case 25: GetConsoleCursorInfo(stdo, &cci); cci.bVisible = TRUE; SetConsoleCursorInfo(stdo, &cci); break; case 47: coord.X = 0; coord.Y = 0; SetConsoleCursorPosition(stdo, coord); break; default: break; } } } else if (m == '>' && v[0] == 5) { GetConsoleCursorInfo(stdo, &cci); cci.bVisible = FALSE; SetConsoleCursorInfo(stdo, &cci); } break; case 'l': if (m == '?') { for (i = 0; i <= n; i++) { switch (v[i]) { case 3: GetConsoleScreenBufferInfo(stdo, &csbi); w = csbi.dwSize.X; h = csbi.srWindow.Bottom - csbi.srWindow.Top + 1; csize = w * (h + 1); coord.X = 0; coord.Y = csbi.srWindow.Top; FillConsoleOutputCharacter(stdo, ' ', csize, coord, &written); FillConsoleOutputAttribute(stdo, csbi.wAttributes, csize, coord, &written); SetConsoleCursorPosition(stdo, csbi.dwCursorPosition); csbi.srWindow.Right = csbi.srWindow.Left + 79; SetConsoleWindowInfo(stdo, TRUE, &csbi.srWindow); csbi.dwSize.X = 80; SetConsoleScreenBufferSize(stdo, csbi.dwSize); break; case 5: attr = ((attr & FOREGROUND_MASK) << 4) | ((attr & BACKGROUND_MASK) >> 4); SetConsoleTextAttribute(stdo, attr); break; case 25: GetConsoleCursorInfo(stdo, &cci); cci.bVisible = FALSE; 
SetConsoleCursorInfo(stdo, &cci); break; default: break; } } } else if (m == '>' && v[0] == 5) { GetConsoleCursorInfo(stdo, &cci); cci.bVisible = TRUE; SetConsoleCursorInfo(stdo, &cci); } break; case 'm': for (i = 0; i <= n; i++) { if (v[i] == -1 || v[i] == 0) attr = attr_reset; else if (v[i] == 1) attr |= FOREGROUND_INTENSITY; else if (v[i] == 4) attr |= FOREGROUND_INTENSITY; else if (v[i] == 5) // blink is typically applied as bg intensity attr |= BACKGROUND_INTENSITY; else if (v[i] == 7) attr = ((attr & FOREGROUND_MASK) << 4) | ((attr & BACKGROUND_MASK) >> 4); else if (v[i] == 10) ; // symbol on else if (v[i] == 11) ; // symbol off else if (v[i] == 22) attr &= ~FOREGROUND_INTENSITY; else if (v[i] == 24) attr &= ~FOREGROUND_INTENSITY; else if (v[i] == 25) attr &= ~BACKGROUND_INTENSITY; else if (v[i] == 27) attr = ((attr & FOREGROUND_MASK) << 4) | ((attr & BACKGROUND_MASK) >> 4); else if (v[i] >= 30 && v[i] <= 37) { attr &= ~FG_RGB; // doesn't affect attributes if ((v[i] - 30) & 1) attr |= FOREGROUND_RED; if ((v[i] - 30) & 2) attr |= FOREGROUND_GREEN; if ((v[i] - 30) & 4) attr |= FOREGROUND_BLUE; } else if (v[i] == 39) // reset fg color and attributes attr = (attr & ~FOREGROUND_MASK) | (attr_reset & FG_RGB); else if (v[i] >= 40 && v[i] <= 47) { attr &= ~BG_RGB; if ((v[i] - 40) & 1) attr |= BACKGROUND_RED; if ((v[i] - 40) & 2) attr |= BACKGROUND_GREEN; if ((v[i] - 40) & 4) attr |= BACKGROUND_BLUE; } else if (v[i] == 49) // reset bg color attr = (attr & ~BACKGROUND_MASK) | (attr_reset & BG_RGB); } SetConsoleTextAttribute(stdo, attr); break; case 'K': GetConsoleScreenBufferInfo(stdo, &csbi); coord = csbi.dwCursorPosition; switch (v[0]) { default: case 0: csize = csbi.dwSize.X - coord.X; break; case 1: csize = coord.X; coord.X = 0; break; case 2: csize = csbi.dwSize.X; coord.X = 0; break; } FillConsoleOutputCharacter(stdo, ' ', csize, coord, &written); FillConsoleOutputAttribute(stdo, csbi.wAttributes, csize, coord, &written); SetConsoleCursorPosition(stdo, csbi.dwCursorPosition); break; case 'J': GetConsoleScreenBufferInfo(stdo, &csbi); w = csbi.dwSize.X; h = csbi.srWindow.Bottom - csbi.srWindow.Top + 1; coord = csbi.dwCursorPosition; switch (v[0]) { default: case 0: csize = w * (h - coord.Y) - coord.X; coord.X = 0; break; case 1: csize = w * coord.Y + coord.X; coord.X = 0; coord.Y = csbi.srWindow.Top; break; case 2: csize = w * (h + 1); coord.X = 0; coord.Y = csbi.srWindow.Top; break; } FillConsoleOutputCharacter(stdo, ' ', csize, coord, &written); FillConsoleOutputAttribute(stdo, csbi.wAttributes, csize, coord, &written); SetConsoleCursorPosition(stdo, csbi.dwCursorPosition); break; case 'H': GetConsoleScreenBufferInfo(stdo, &csbi); coord = csbi.dwCursorPosition; if (v[0] != -1) { if (v[1] != -1) { coord.Y = csbi.srWindow.Top + v[0] - 1; coord.X = v[1] - 1; } else coord.X = v[0] - 1; } else { coord.X = 0; coord.Y = csbi.srWindow.Top; } if (coord.X < csbi.srWindow.Left) coord.X = csbi.srWindow.Left; else if (coord.X > csbi.srWindow.Right) coord.X = csbi.srWindow.Right; if (coord.Y < csbi.srWindow.Top) coord.Y = csbi.srWindow.Top; else if (coord.Y > csbi.srWindow.Bottom) coord.Y = csbi.srWindow.Bottom; SetConsoleCursorPosition(stdo, coord); break; default: break; } } else { putchar(*ptr); ptr++; } } return ptr - buf; } #endif /* _WIN32 */ the_silver_searcher-2.1.0/src/scandir.c000066400000000000000000000032311314774034700201210ustar00rootroot00000000000000#include #include #include "scandir.h" #include "util.h" int ag_scandir(const char *dirname, struct dirent ***namelist, filter_fp filter, 
void *baton) { DIR *dirp = NULL; struct dirent **names = NULL; struct dirent *entry, *d; int names_len = 32; int results_len = 0; dirp = opendir(dirname); if (dirp == NULL) { goto fail; } names = malloc(sizeof(struct dirent *) * names_len); if (names == NULL) { goto fail; } while ((entry = readdir(dirp)) != NULL) { if ((*filter)(dirname, entry, baton) == FALSE) { continue; } if (results_len >= names_len) { struct dirent **tmp_names = names; names_len *= 2; names = realloc(names, sizeof(struct dirent *) * names_len); if (names == NULL) { free(tmp_names); goto fail; } } #if defined(__MINGW32__) || defined(__CYGWIN__) d = malloc(sizeof(struct dirent)); #else d = malloc(entry->d_reclen); #endif if (d == NULL) { goto fail; } #if defined(__MINGW32__) || defined(__CYGWIN__) memcpy(d, entry, sizeof(struct dirent)); #else memcpy(d, entry, entry->d_reclen); #endif names[results_len] = d; results_len++; } closedir(dirp); *namelist = names; return results_len; fail: if (dirp) { closedir(dirp); } if (names != NULL) { int i; for (i = 0; i < results_len; i++) { free(names[i]); } free(names); } return -1; } the_silver_searcher-2.1.0/src/scandir.h000066400000000000000000000006471314774034700201360ustar00rootroot00000000000000#ifndef SCANDIR_H #define SCANDIR_H #include "ignore.h" typedef struct { const ignores *ig; const char *base_path; size_t base_path_len; const char *path_start; } scandir_baton_t; typedef int (*filter_fp)(const char *path, const struct dirent *, void *); int ag_scandir(const char *dirname, struct dirent ***namelist, filter_fp filter, void *baton); #endif the_silver_searcher-2.1.0/src/search.c000066400000000000000000000547241314774034700177600ustar00rootroot00000000000000#include "search.h" #include "print.h" #include "scandir.h" void search_buf(const char *buf, const size_t buf_len, const char *dir_full_path) { int binary = -1; /* 1 = yes, 0 = no, -1 = don't know */ size_t buf_offset = 0; if (opts.search_stream) { binary = 0; } else if (!opts.search_binary_files) { binary = is_binary((const void *)buf, buf_len); if (binary) { log_debug("File %s is binary. Skipping...", dir_full_path); return; } } size_t matches_len = 0; match_t *matches; size_t matches_size; size_t matches_spare; if (opts.invert_match) { /* If we are going to invert the set of matches at the end, we will need * one extra match struct, even if there are no matches at all. So make * sure we have a nonempty array; and make sure we always have spare * capacity for one extra. */ matches_size = 100; matches = ag_malloc(matches_size * sizeof(match_t)); matches_spare = 1; } else { matches_size = 0; matches = NULL; matches_spare = 0; } if (!opts.literal && opts.query_len == 1 && opts.query[0] == '.') { matches_size = 1; matches = matches == NULL ? 
ag_malloc(matches_size * sizeof(match_t)) : matches; matches[0].start = 0; matches[0].end = buf_len; matches_len = 1; } else if (opts.literal) { const char *match_ptr = buf; while (buf_offset < buf_len) { /* hash_strnstr only for little-endian platforms that allow unaligned access */ #if defined(__i386__) || defined(__x86_64__) /* Decide whether to fall back on boyer-moore */ if ((size_t)opts.query_len < 2 * sizeof(uint16_t) - 1 || opts.query_len >= UCHAR_MAX) { match_ptr = boyer_moore_strnstr(match_ptr, opts.query, buf_len - buf_offset, opts.query_len, alpha_skip_lookup, find_skip_lookup, opts.casing == CASE_INSENSITIVE); } else { match_ptr = hash_strnstr(match_ptr, opts.query, buf_len - buf_offset, opts.query_len, h_table, opts.casing == CASE_SENSITIVE); } #else match_ptr = boyer_moore_strnstr(match_ptr, opts.query, buf_len - buf_offset, opts.query_len, alpha_skip_lookup, find_skip_lookup, opts.casing == CASE_INSENSITIVE); #endif if (match_ptr == NULL) { break; } if (opts.word_regexp) { const char *start = match_ptr; const char *end = match_ptr + opts.query_len; /* Check whether both start and end of the match lie on a word * boundary */ if ((start == buf || is_wordchar(*(start - 1)) != opts.literal_starts_wordchar) && (end == buf + buf_len || is_wordchar(*end) != opts.literal_ends_wordchar)) { /* It's a match */ } else { /* It's not a match */ match_ptr += find_skip_lookup[0] - opts.query_len + 1; buf_offset = match_ptr - buf; continue; } } realloc_matches(&matches, &matches_size, matches_len + matches_spare); matches[matches_len].start = match_ptr - buf; matches[matches_len].end = matches[matches_len].start + opts.query_len; buf_offset = matches[matches_len].end; log_debug("Match found. File %s, offset %lu bytes.", dir_full_path, matches[matches_len].start); matches_len++; match_ptr += opts.query_len; if (opts.max_matches_per_file > 0 && matches_len >= opts.max_matches_per_file) { log_err("Too many matches in %s. Skipping the rest of this file.", dir_full_path); break; } } } else { int offset_vector[3]; if (opts.multiline) { while (buf_offset < buf_len && (pcre_exec(opts.re, opts.re_extra, buf, buf_len, buf_offset, 0, offset_vector, 3)) >= 0) { log_debug("Regex match found. File %s, offset %i bytes.", dir_full_path, offset_vector[0]); buf_offset = offset_vector[1]; if (offset_vector[0] == offset_vector[1]) { ++buf_offset; log_debug("Regex match is of length zero. Advancing offset one byte."); } realloc_matches(&matches, &matches_size, matches_len + matches_spare); matches[matches_len].start = offset_vector[0]; matches[matches_len].end = offset_vector[1]; matches_len++; if (opts.max_matches_per_file > 0 && matches_len >= opts.max_matches_per_file) { log_err("Too many matches in %s. Skipping the rest of this file.", dir_full_path); break; } } } else { while (buf_offset < buf_len) { const char *line; size_t line_len = buf_getline(&line, buf, buf_len, buf_offset); if (!line) { break; } size_t line_offset = 0; while (line_offset < line_len) { int rv = pcre_exec(opts.re, opts.re_extra, line, line_len, line_offset, 0, offset_vector, 3); if (rv < 0) { break; } size_t line_to_buf = buf_offset + line_offset; log_debug("Regex match found. File %s, offset %i bytes.", dir_full_path, offset_vector[0]); line_offset = offset_vector[1]; if (offset_vector[0] == offset_vector[1]) { ++line_offset; log_debug("Regex match is of length zero. 
Advancing offset one byte."); } realloc_matches(&matches, &matches_size, matches_len + matches_spare); matches[matches_len].start = offset_vector[0] + line_to_buf; matches[matches_len].end = offset_vector[1] + line_to_buf; matches_len++; if (opts.max_matches_per_file > 0 && matches_len >= opts.max_matches_per_file) { log_err("Too many matches in %s. Skipping the rest of this file.", dir_full_path); goto multiline_done; } } buf_offset += line_len + 1; } } } multiline_done: if (opts.invert_match) { matches_len = invert_matches(buf, buf_len, matches, matches_len); } if (opts.stats) { pthread_mutex_lock(&stats_mtx); stats.total_bytes += buf_len; stats.total_files++; stats.total_matches += matches_len; if (matches_len > 0) { stats.total_file_matches++; } pthread_mutex_unlock(&stats_mtx); } if (matches_len > 0 || opts.print_all_paths) { if (binary == -1 && !opts.print_filename_only) { binary = is_binary((const void *)buf, buf_len); } pthread_mutex_lock(&print_mtx); if (opts.print_filename_only) { /* If the --files-without-matches or -L option is passed we should * not print a matching line. This option currently sets * opts.print_filename_only and opts.invert_match. Unfortunately * setting the latter has the side effect of making matches.len = 1 * on a file-without-matches which is not desired behaviour. See * GitHub issue 206 for the consequences if this behaviour is not * checked. */ if (!opts.invert_match || matches_len < 2) { if (opts.print_count) { print_path_count(dir_full_path, opts.path_sep, (size_t)matches_len); } else { print_path(dir_full_path, opts.path_sep); } } } else if (binary) { print_binary_file_matches(dir_full_path); } else { print_file_matches(dir_full_path, buf, buf_len, matches, matches_len); } pthread_mutex_unlock(&print_mtx); opts.match_found = 1; } else if (opts.search_stream && opts.passthrough) { fprintf(out_fd, "%s", buf); } else { log_debug("No match in %s", dir_full_path); } if (matches_len == 0 && opts.search_stream) { print_context_append(buf, buf_len - 1); } if (matches_size > 0) { free(matches); } } /* TODO: this will only match single lines. 
multi-line regexes silently don't match */ void search_stream(FILE *stream, const char *path) { char *line = NULL; ssize_t line_len = 0; size_t line_cap = 0; size_t i; print_init_context(); for (i = 1; (line_len = getline(&line, &line_cap, stream)) > 0; i++) { opts.stream_line_num = i; search_buf(line, line_len, path); if (line[line_len - 1] == '\n') { line_len--; } print_trailing_context(path, line, line_len); } free(line); print_cleanup_context(); } void search_file(const char *file_full_path) { int fd = -1; off_t f_len = 0; char *buf = NULL; struct stat statbuf; int rv = 0; FILE *fp = NULL; rv = stat(file_full_path, &statbuf); if (rv != 0) { log_err("Skipping %s: Error fstat()ing file.", file_full_path); goto cleanup; } if (opts.stdout_inode != 0 && opts.stdout_inode == statbuf.st_ino) { log_debug("Skipping %s: stdout is redirected to it", file_full_path); goto cleanup; } // handling only regular files and FIFOs if (!S_ISREG(statbuf.st_mode) && !S_ISFIFO(statbuf.st_mode)) { log_err("Skipping %s: Mode %u is not a file.", file_full_path, statbuf.st_mode); goto cleanup; } fd = open(file_full_path, O_RDONLY); if (fd < 0) { /* XXXX: strerror is not thread-safe */ log_err("Skipping %s: Error opening file: %s", file_full_path, strerror(errno)); goto cleanup; } // repeating stat check with file handle to prevent TOCTOU issue rv = fstat(fd, &statbuf); if (rv != 0) { log_err("Skipping %s: Error fstat()ing file.", file_full_path); goto cleanup; } if (opts.stdout_inode != 0 && opts.stdout_inode == statbuf.st_ino) { log_debug("Skipping %s: stdout is redirected to it", file_full_path); goto cleanup; } // handling only regular files and FIFOs if (!S_ISREG(statbuf.st_mode) && !S_ISFIFO(statbuf.st_mode)) { log_err("Skipping %s: Mode %u is not a file.", file_full_path, statbuf.st_mode); goto cleanup; } print_init_context(); if (statbuf.st_mode & S_IFIFO) { log_debug("%s is a named pipe. stream searching", file_full_path); fp = fdopen(fd, "r"); search_stream(fp, file_full_path); fclose(fp); goto cleanup; } f_len = statbuf.st_size; if (f_len == 0) { if (opts.query[0] == '.' 
&& opts.query_len == 1 && !opts.literal && opts.search_all_files) { search_buf(buf, f_len, file_full_path); } else { log_debug("Skipping %s: file is empty.", file_full_path); } goto cleanup; } if (!opts.literal && f_len > INT_MAX) { log_err("Skipping %s: pcre_exec() can't handle files larger than %i bytes.", file_full_path, INT_MAX); goto cleanup; } #ifdef _WIN32 { HANDLE hmmap = CreateFileMapping( (HANDLE)_get_osfhandle(fd), 0, PAGE_READONLY, 0, f_len, NULL); buf = (char *)MapViewOfFile(hmmap, FILE_SHARE_READ, 0, 0, f_len); if (hmmap != NULL) CloseHandle(hmmap); } if (buf == NULL) { FormatMessageA( FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS, NULL, GetLastError(), 0, (void *)&buf, 0, NULL); log_err("File %s failed to load: %s.", file_full_path, buf); LocalFree((void *)buf); goto cleanup; } #else if (opts.mmap) { buf = mmap(0, f_len, PROT_READ, MAP_PRIVATE, fd, 0); if (buf == MAP_FAILED) { log_err("File %s failed to load: %s.", file_full_path, strerror(errno)); goto cleanup; } #if HAVE_MADVISE madvise(buf, f_len, MADV_SEQUENTIAL); #elif HAVE_POSIX_FADVISE posix_fadvise(fd, 0, f_len, POSIX_MADV_SEQUENTIAL); #endif } else { buf = ag_malloc(f_len); size_t bytes_read = read(fd, buf, f_len); if ((off_t)bytes_read != f_len) { die("expected to read %u bytes but read %u", f_len, bytes_read); } } #endif if (opts.search_zip_files) { ag_compression_type zip_type = is_zipped(buf, f_len); if (zip_type != AG_NO_COMPRESSION) { #if HAVE_FOPENCOOKIE log_debug("%s is a compressed file. stream searching", file_full_path); fp = decompress_open(fd, "r", zip_type); search_stream(fp, file_full_path); fclose(fp); #else int _buf_len = (int)f_len; char *_buf = decompress(zip_type, buf, f_len, file_full_path, &_buf_len); if (_buf == NULL || _buf_len == 0) { log_err("Cannot decompress zipped file %s", file_full_path); goto cleanup; } search_buf(_buf, _buf_len, file_full_path); free(_buf); #endif goto cleanup; } } search_buf(buf, f_len, file_full_path); cleanup: print_cleanup_context(); if (buf != NULL) { #ifdef _WIN32 UnmapViewOfFile(buf); #else if (opts.mmap) { if (buf != MAP_FAILED) { munmap(buf, f_len); } } else { free(buf); } #endif } if (fd != -1) { close(fd); } } void *search_file_worker(void *i) { work_queue_t *queue_item; int worker_id = *(int *)i; log_debug("Worker %i started", worker_id); while (TRUE) { pthread_mutex_lock(&work_queue_mtx); while (work_queue == NULL) { if (done_adding_files) { pthread_mutex_unlock(&work_queue_mtx); log_debug("Worker %i finished.", worker_id); pthread_exit(NULL); } pthread_cond_wait(&files_ready, &work_queue_mtx); } queue_item = work_queue; work_queue = work_queue->next; if (work_queue == NULL) { work_queue_tail = NULL; } pthread_mutex_unlock(&work_queue_mtx); search_file(queue_item->path); free(queue_item->path); free(queue_item); } } static int check_symloop_enter(const char *path, dirkey_t *outkey) { #ifdef _WIN32 return SYMLOOP_OK; #else struct stat buf; symdir_t *item_found = NULL; symdir_t *new_item = NULL; memset(outkey, 0, sizeof(dirkey_t)); outkey->dev = 0; outkey->ino = 0; int res = stat(path, &buf); if (res != 0) { log_err("Error stat()ing: %s", path); return SYMLOOP_ERROR; } outkey->dev = buf.st_dev; outkey->ino = buf.st_ino; HASH_FIND(hh, symhash, outkey, sizeof(dirkey_t), item_found); if (item_found) { return SYMLOOP_LOOP; } new_item = (symdir_t *)ag_malloc(sizeof(symdir_t)); memcpy(&new_item->key, outkey, sizeof(dirkey_t)); HASH_ADD(hh, symhash, key, sizeof(dirkey_t), new_item); return SYMLOOP_OK; #endif } static 
int check_symloop_leave(dirkey_t *dirkey) { #ifdef _WIN32 return SYMLOOP_OK; #else symdir_t *item_found = NULL; if (dirkey->dev == 0 && dirkey->ino == 0) { return SYMLOOP_ERROR; } HASH_FIND(hh, symhash, dirkey, sizeof(dirkey_t), item_found); if (!item_found) { log_err("item not found! weird stuff...\n"); return SYMLOOP_ERROR; } HASH_DELETE(hh, symhash, item_found); free(item_found); return SYMLOOP_OK; #endif } /* TODO: Append matches to some data structure instead of just printing them out. * Then ag can have sweet summaries of matches/files scanned/time/etc. */ void search_dir(ignores *ig, const char *base_path, const char *path, const int depth, dev_t original_dev) { struct dirent **dir_list = NULL; struct dirent *dir = NULL; scandir_baton_t scandir_baton; int results = 0; size_t base_path_len = 0; const char *path_start = path; char *dir_full_path = NULL; const char *ignore_file = NULL; int i; int symres; dirkey_t current_dirkey; symres = check_symloop_enter(path, ¤t_dirkey); if (symres == SYMLOOP_LOOP) { log_err("Recursive directory loop: %s", path); return; } /* find .*ignore files to load ignore patterns from */ for (i = 0; opts.skip_vcs_ignores ? (i == 0) : (ignore_pattern_files[i] != NULL); i++) { ignore_file = ignore_pattern_files[i]; ag_asprintf(&dir_full_path, "%s/%s", path, ignore_file); load_ignore_patterns(ig, dir_full_path); free(dir_full_path); dir_full_path = NULL; } /* path_start is the part of path that isn't in base_path * base_path will have a trailing '/' because we put it there in parse_options */ base_path_len = base_path ? strlen(base_path) : 0; for (i = 0; ((size_t)i < base_path_len) && (path[i]) && (base_path[i] == path[i]); i++) { path_start = path + i + 1; } log_debug("search_dir: path is '%s', base_path is '%s', path_start is '%s'", path, base_path, path_start); scandir_baton.ig = ig; scandir_baton.base_path = base_path; scandir_baton.base_path_len = base_path_len; scandir_baton.path_start = path_start; results = ag_scandir(path, &dir_list, &filename_filter, &scandir_baton); if (results == 0) { log_debug("No results found in directory %s", path); goto search_dir_cleanup; } else if (results == -1) { if (errno == ENOTDIR) { /* Not a directory. Probably a file. */ if (depth == 0 && opts.paths_len == 1) { /* If we're only searching one file, don't print the filename header at the top. */ if (opts.print_path == PATH_PRINT_DEFAULT || opts.print_path == PATH_PRINT_DEFAULT_EACH_LINE) { opts.print_path = PATH_PRINT_NOTHING; } /* If we're only searching one file and --only-matching is specified, disable line numbers too. */ if (opts.only_matching && opts.print_path == PATH_PRINT_NOTHING) { opts.print_line_numbers = FALSE; } } search_file(path); } else { log_err("Error opening directory %s: %s", path, strerror(errno)); } goto search_dir_cleanup; } int offset_vector[3]; int rc = 0; work_queue_t *queue_item; for (i = 0; i < results; i++) { queue_item = NULL; dir = dir_list[i]; ag_asprintf(&dir_full_path, "%s/%s", path, dir->d_name); #ifndef _WIN32 if (opts.one_dev) { struct stat s; if (lstat(dir_full_path, &s) != 0) { log_err("Failed to get device information for %s. Skipping...", dir->d_name); goto cleanup; } if (s.st_dev != original_dev) { log_debug("File %s crosses a device boundary (is probably a mount point.) Skipping...", dir->d_name); goto cleanup; } } #endif /* If a link points to a directory then we need to treat it as a directory. 
*/ if (!opts.follow_symlinks && is_symlink(path, dir)) { log_debug("File %s ignored becaused it's a symlink", dir->d_name); goto cleanup; } if (!is_directory(path, dir)) { if (opts.file_search_regex) { rc = pcre_exec(opts.file_search_regex, NULL, dir_full_path, strlen(dir_full_path), 0, 0, offset_vector, 3); if (rc < 0) { /* no match */ log_debug("Skipping %s due to file_search_regex.", dir_full_path); goto cleanup; } else if (opts.match_files) { log_debug("match_files: file_search_regex matched for %s.", dir_full_path); pthread_mutex_lock(&print_mtx); print_path(dir_full_path, opts.path_sep); pthread_mutex_unlock(&print_mtx); opts.match_found = 1; goto cleanup; } } queue_item = ag_malloc(sizeof(work_queue_t)); queue_item->path = dir_full_path; queue_item->next = NULL; pthread_mutex_lock(&work_queue_mtx); if (work_queue_tail == NULL) { work_queue = queue_item; } else { work_queue_tail->next = queue_item; } work_queue_tail = queue_item; pthread_cond_signal(&files_ready); pthread_mutex_unlock(&work_queue_mtx); log_debug("%s added to work queue", dir_full_path); } else if (opts.recurse_dirs) { if (depth < opts.max_search_depth || opts.max_search_depth == -1) { log_debug("Searching dir %s", dir_full_path); ignores *child_ig; #ifdef HAVE_DIRENT_DNAMLEN child_ig = init_ignore(ig, dir->d_name, dir->d_namlen); #else child_ig = init_ignore(ig, dir->d_name, strlen(dir->d_name)); #endif search_dir(child_ig, base_path, dir_full_path, depth + 1, original_dev); cleanup_ignore(child_ig); } else { if (opts.max_search_depth == DEFAULT_MAX_SEARCH_DEPTH) { /* * If the user didn't intentionally specify a particular depth, * this is a warning... */ log_err("Skipping %s. Use the --depth option to search deeper.", dir_full_path); } else { /* ... if they did, let's settle for debug. */ log_debug("Skipping %s. 
Use the --depth option to search deeper.", dir_full_path); } } } cleanup: free(dir); dir = NULL; if (queue_item == NULL) { free(dir_full_path); dir_full_path = NULL; } } search_dir_cleanup: check_symloop_leave(¤t_dirkey); free(dir_list); dir_list = NULL; } the_silver_searcher-2.1.0/src/search.h000066400000000000000000000027611314774034700177570ustar00rootroot00000000000000#ifndef SEARCH_H #define SEARCH_H #include #include #include #include #include #include #include #include #ifdef _WIN32 #include #else #include #endif #include #include #include "config.h" #ifdef HAVE_PTHREAD_H #include #endif #include "decompress.h" #include "ignore.h" #include "log.h" #include "options.h" #include "print.h" #include "uthash.h" #include "util.h" size_t alpha_skip_lookup[256]; size_t *find_skip_lookup; uint8_t h_table[H_SIZE] __attribute__((aligned(64))); struct work_queue_t { char *path; struct work_queue_t *next; }; typedef struct work_queue_t work_queue_t; work_queue_t *work_queue; work_queue_t *work_queue_tail; int done_adding_files; pthread_cond_t files_ready; pthread_mutex_t stats_mtx; pthread_mutex_t work_queue_mtx; /* For symlink loop detection */ #define SYMLOOP_ERROR (-1) #define SYMLOOP_OK (0) #define SYMLOOP_LOOP (1) typedef struct { dev_t dev; ino_t ino; } dirkey_t; typedef struct { dirkey_t key; UT_hash_handle hh; } symdir_t; symdir_t *symhash; void search_buf(const char *buf, const size_t buf_len, const char *dir_full_path); void search_stream(FILE *stream, const char *path); void search_file(const char *file_full_path); void *search_file_worker(void *i); void search_dir(ignores *ig, const char *base_path, const char *path, const int depth, dev_t original_dev); #endif the_silver_searcher-2.1.0/src/uthash.h000066400000000000000000002214061314774034700200050ustar00rootroot00000000000000/* Copyright (c) 2003-2014, Troy D. Hanson http://troydhanson.github.com/uthash/ All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #ifndef UTHASH_H #define UTHASH_H #include /* ptrdiff_t */ #include /* exit() */ #include /* memcmp,strlen */ /* These macros use decltype or the earlier __typeof GNU extension. As decltype is only available in newer compilers (VS2010 or gcc 4.3+ when compiling c++ source) this code uses whatever method is needed or, for VS2008 where neither is available, uses casting workarounds. 
*/ #if defined(_MSC_VER) /* MS compiler */ #if _MSC_VER >= 1600 && defined(__cplusplus) /* VS2010 or newer in C++ mode */ #define DECLTYPE(x) (decltype(x)) #else /* VS2008 or older (or VS2010 in C mode) */ #define NO_DECLTYPE #define DECLTYPE(x) #endif #elif defined(__BORLANDC__) || defined(__LCC__) || defined(__WATCOMC__) #define NO_DECLTYPE #define DECLTYPE(x) #else /* GNU, Sun and other compilers */ #define DECLTYPE(x) (__typeof(x)) #endif #ifdef NO_DECLTYPE #define DECLTYPE_ASSIGN(dst, src) \ do { \ char **_da_dst = (char **)(&(dst)); \ *_da_dst = (char *)(src); \ } while (0) #else #define DECLTYPE_ASSIGN(dst, src) \ do { \ (dst) = DECLTYPE(dst)(src); \ } while (0) #endif /* a number of the hash function use uint32_t which isn't defined on Pre VS2010 */ #if defined(_WIN32) #if defined(_MSC_VER) && _MSC_VER >= 1600 #include #elif defined(__WATCOMC__) #include #else typedef unsigned int uint32_t; typedef unsigned char uint8_t; #endif #else #include #endif #define UTHASH_VERSION 1.9.9 #ifndef uthash_fatal #define uthash_fatal(msg) exit(-1) /* fatal error (out of memory,etc) */ #endif #ifndef uthash_malloc #define uthash_malloc(sz) malloc(sz) /* malloc fcn */ #endif #ifndef uthash_free #define uthash_free(ptr, sz) free(ptr) /* free fcn */ #endif #ifndef uthash_noexpand_fyi #define uthash_noexpand_fyi(tbl) /* can be defined to log noexpand */ #endif #ifndef uthash_expand_fyi #define uthash_expand_fyi(tbl) /* can be defined to log expands */ #endif /* initial number of buckets */ #define HASH_INITIAL_NUM_BUCKETS 32 /* initial number of buckets */ #define HASH_INITIAL_NUM_BUCKETS_LOG2 5 /* lg2 of initial number of buckets */ #define HASH_BKT_CAPACITY_THRESH 10 /* expand when bucket count reaches */ /* calculate the element whose hash handle address is hhe */ #define ELMT_FROM_HH(tbl, hhp) ((void *)(((char *)(hhp)) - ((tbl)->hho))) #define HASH_FIND(hh, head, keyptr, keylen, out) \ do { \ unsigned _hf_bkt, _hf_hashv; \ out = NULL; \ if (head) { \ HASH_FCN(keyptr, keylen, (head)->hh.tbl->num_buckets, _hf_hashv, _hf_bkt); \ if (HASH_BLOOM_TEST((head)->hh.tbl, _hf_hashv)) { \ HASH_FIND_IN_BKT((head)->hh.tbl, hh, (head)->hh.tbl->buckets[_hf_bkt], keyptr, keylen, out); \ } \ } \ } while (0) #ifdef HASH_BLOOM #define HASH_BLOOM_BITLEN (1ULL << HASH_BLOOM) #define HASH_BLOOM_BYTELEN (HASH_BLOOM_BITLEN / 8) + ((HASH_BLOOM_BITLEN % 8) ? 
1 : 0) #define HASH_BLOOM_MAKE(tbl) \ do { \ (tbl)->bloom_nbits = HASH_BLOOM; \ (tbl)->bloom_bv = (uint8_t *)uthash_malloc(HASH_BLOOM_BYTELEN); \ if (!((tbl)->bloom_bv)) { \ uthash_fatal("out of memory"); \ } \ memset((tbl)->bloom_bv, 0, HASH_BLOOM_BYTELEN); \ (tbl)->bloom_sig = HASH_BLOOM_SIGNATURE; \ } while (0) #define HASH_BLOOM_FREE(tbl) \ do { \ uthash_free((tbl)->bloom_bv, HASH_BLOOM_BYTELEN); \ } while (0) #define HASH_BLOOM_BITSET(bv, idx) (bv[(idx) / 8] |= (1U << ((idx) % 8))) #define HASH_BLOOM_BITTEST(bv, idx) (bv[(idx) / 8] & (1U << ((idx) % 8))) #define HASH_BLOOM_ADD(tbl, hashv) \ HASH_BLOOM_BITSET((tbl)->bloom_bv, (hashv & (uint32_t)((1ULL << (tbl)->bloom_nbits) - 1))) #define HASH_BLOOM_TEST(tbl, hashv) \ HASH_BLOOM_BITTEST((tbl)->bloom_bv, (hashv & (uint32_t)((1ULL << (tbl)->bloom_nbits) - 1))) #else #define HASH_BLOOM_MAKE(tbl) #define HASH_BLOOM_FREE(tbl) #define HASH_BLOOM_ADD(tbl, hashv) #define HASH_BLOOM_TEST(tbl, hashv) (1) #define HASH_BLOOM_BYTELEN 0 #endif #define HASH_MAKE_TABLE(hh, head) \ do { \ (head)->hh.tbl = (UT_hash_table *)uthash_malloc(sizeof(UT_hash_table)); \ if (!((head)->hh.tbl)) { \ uthash_fatal("out of memory"); \ } \ memset((head)->hh.tbl, 0, sizeof(UT_hash_table)); \ (head)->hh.tbl->tail = &((head)->hh); \ (head)->hh.tbl->num_buckets = HASH_INITIAL_NUM_BUCKETS; \ (head)->hh.tbl->log2_num_buckets = HASH_INITIAL_NUM_BUCKETS_LOG2; \ (head)->hh.tbl->hho = (char *)(&(head)->hh) - (char *)(head); \ (head)->hh.tbl->buckets = (UT_hash_bucket *)uthash_malloc(HASH_INITIAL_NUM_BUCKETS * sizeof(struct UT_hash_bucket)); \ if (!(head)->hh.tbl->buckets) { \ uthash_fatal("out of memory"); \ } \ memset((head)->hh.tbl->buckets, 0, HASH_INITIAL_NUM_BUCKETS * sizeof(struct UT_hash_bucket)); \ HASH_BLOOM_MAKE((head)->hh.tbl); \ (head)->hh.tbl->signature = HASH_SIGNATURE; \ } while (0) #define HASH_ADD(hh, head, fieldname, keylen_in, add) \ HASH_ADD_KEYPTR(hh, head, &((add)->fieldname), keylen_in, add) #define HASH_REPLACE(hh, head, fieldname, keylen_in, add, replaced) \ do { \ replaced = NULL; \ HASH_FIND(hh, head, &((add)->fieldname), keylen_in, replaced); \ if (replaced != NULL) { \ HASH_DELETE(hh, head, replaced); \ }; \ HASH_ADD(hh, head, fieldname, keylen_in, add); \ } while (0) #define HASH_ADD_KEYPTR(hh, head, keyptr, keylen_in, add) \ do { \ unsigned _ha_bkt; \ (add)->hh.next = NULL; \ (add)->hh.key = (char *)(keyptr); \ (add)->hh.keylen = (unsigned)(keylen_in); \ if (!(head)) { \ head = (add); \ (head)->hh.prev = NULL; \ HASH_MAKE_TABLE(hh, head); \ } else { \ (head)->hh.tbl->tail->next = (add); \ (add)->hh.prev = ELMT_FROM_HH((head)->hh.tbl, (head)->hh.tbl->tail); \ (head)->hh.tbl->tail = &((add)->hh); \ } \ (head)->hh.tbl->num_items++; \ (add)->hh.tbl = (head)->hh.tbl; \ HASH_FCN(keyptr, keylen_in, (head)->hh.tbl->num_buckets, (add)->hh.hashv, _ha_bkt); \ HASH_ADD_TO_BKT((head)->hh.tbl->buckets[_ha_bkt], &(add)->hh); \ HASH_BLOOM_ADD((head)->hh.tbl, (add)->hh.hashv); \ HASH_EMIT_KEY(hh, head, keyptr, keylen_in); \ HASH_FSCK(hh, head); \ } while (0) #define HASH_TO_BKT(hashv, num_bkts, bkt) \ do { \ bkt = ((hashv) & (num_bkts - 1)); \ } while (0) /* delete "delptr" from the hash table. * "the usual" patch-up process for the app-order doubly-linked-list. * The use of _hd_hh_del below deserves special explanation. 
* These used to be expressed using (delptr) but that led to a bug * if someone used the same symbol for the head and deletee, like * HASH_DELETE(hh,users,users); * We want that to work, but by changing the head (users) below * we were forfeiting our ability to further refer to the deletee (users) * in the patch-up process. Solution: use scratch space to * copy the deletee pointer, then the latter references are via that * scratch pointer rather than through the repointed (users) symbol. */ #define HASH_DELETE(hh, head, delptr) \ do { \ unsigned _hd_bkt; \ struct UT_hash_handle *_hd_hh_del; \ if (((delptr)->hh.prev == NULL) && ((delptr)->hh.next == NULL)) { \ uthash_free((head)->hh.tbl->buckets, (head)->hh.tbl->num_buckets * sizeof(struct UT_hash_bucket)); \ HASH_BLOOM_FREE((head)->hh.tbl); \ uthash_free((head)->hh.tbl, sizeof(UT_hash_table)); \ head = NULL; \ } else { \ _hd_hh_del = &((delptr)->hh); \ if ((delptr) == ELMT_FROM_HH((head)->hh.tbl, (head)->hh.tbl->tail)) { \ (head)->hh.tbl->tail = (UT_hash_handle *)((ptrdiff_t)((delptr)->hh.prev) + (head)->hh.tbl->hho); \ } \ if ((delptr)->hh.prev) { \ ((UT_hash_handle *)((ptrdiff_t)((delptr)->hh.prev) + (head)->hh.tbl->hho))->next = (delptr)->hh.next; \ } else { \ DECLTYPE_ASSIGN(head, (delptr)->hh.next); \ } \ if (_hd_hh_del->next) { \ ((UT_hash_handle *)((ptrdiff_t)_hd_hh_del->next + (head)->hh.tbl->hho))->prev = _hd_hh_del->prev; \ } \ HASH_TO_BKT(_hd_hh_del->hashv, (head)->hh.tbl->num_buckets, _hd_bkt); \ HASH_DEL_IN_BKT(hh, (head)->hh.tbl->buckets[_hd_bkt], _hd_hh_del); \ (head)->hh.tbl->num_items--; \ } \ HASH_FSCK(hh, head); \ } while (0) /* convenience forms of HASH_FIND/HASH_ADD/HASH_DEL */ #define HASH_FIND_STR(head, findstr, out) \ HASH_FIND(hh, head, findstr, strlen(findstr), out) #define HASH_ADD_STR(head, strfield, add) \ HASH_ADD(hh, head, strfield[0], strlen(add->strfield), add) #define HASH_REPLACE_STR(head, strfield, add, replaced) \ HASH_REPLACE(hh, head, strfield[0], strlen(add->strfield), add, replaced) #define HASH_FIND_INT(head, findint, out) \ HASH_FIND(hh, head, findint, sizeof(int), out) #define HASH_ADD_INT(head, intfield, add) \ HASH_ADD(hh, head, intfield, sizeof(int), add) #define HASH_REPLACE_INT(head, intfield, add, replaced) \ HASH_REPLACE(hh, head, intfield, sizeof(int), add, replaced) #define HASH_FIND_PTR(head, findptr, out) \ HASH_FIND(hh, head, findptr, sizeof(void *), out) #define HASH_ADD_PTR(head, ptrfield, add) \ HASH_ADD(hh, head, ptrfield, sizeof(void *), add) #define HASH_REPLACE_PTR(head, ptrfield, add, replaced) \ HASH_REPLACE(hh, head, ptrfield, sizeof(void *), add, replaced) #define HASH_DEL(head, delptr) \ HASH_DELETE(hh, head, delptr) /* HASH_FSCK checks hash integrity on every add/delete when HASH_DEBUG is defined. * This is for uthash developer only; it compiles away if HASH_DEBUG isn't defined. */ #ifdef HASH_DEBUG #define HASH_OOPS(...) 
\ do { \ fprintf(stderr, __VA_ARGS__); \ exit(-1); \ } while (0) #define HASH_FSCK(hh, head) \ do { \ unsigned _bkt_i; \ unsigned _count, _bkt_count; \ char *_prev; \ struct UT_hash_handle *_thh; \ if (head) { \ _count = 0; \ for (_bkt_i = 0; _bkt_i < (head)->hh.tbl->num_buckets; _bkt_i++) { \ _bkt_count = 0; \ _thh = (head)->hh.tbl->buckets[_bkt_i].hh_head; \ _prev = NULL; \ while (_thh) { \ if (_prev != (char *)(_thh->hh_prev)) { \ HASH_OOPS("invalid hh_prev %p, actual %p\n", _thh->hh_prev, _prev); \ } \ _bkt_count++; \ _prev = (char *)(_thh); \ _thh = _thh->hh_next; \ } \ _count += _bkt_count; \ if ((head)->hh.tbl->buckets[_bkt_i].count != _bkt_count) { \ HASH_OOPS("invalid bucket count %d, actual %d\n", (head)->hh.tbl->buckets[_bkt_i].count, _bkt_count); \ } \ } \ if (_count != (head)->hh.tbl->num_items) { \ HASH_OOPS("invalid hh item count %d, actual %d\n", (head)->hh.tbl->num_items, _count); \ } \ /* traverse hh in app order; check next/prev integrity, count */ \ _count = 0; \ _prev = NULL; \ _thh = &(head)->hh; \ while (_thh) { \ _count++; \ if (_prev != (char *)(_thh->prev)) { \ HASH_OOPS("invalid prev %p, actual %p\n", _thh->prev, _prev); \ } \ _prev = (char *)ELMT_FROM_HH((head)->hh.tbl, _thh); \ _thh = (_thh->next ? (UT_hash_handle *)((char *)(_thh->next) + (head)->hh.tbl->hho) : NULL); \ } \ if (_count != (head)->hh.tbl->num_items) { \ HASH_OOPS("invalid app item count %d, actual %d\n", (head)->hh.tbl->num_items, _count); \ } \ } \ } while (0) #else #define HASH_FSCK(hh, head) #endif /* When compiled with -DHASH_EMIT_KEYS, length-prefixed keys are emitted to * the descriptor to which this macro is defined for tuning the hash function. * The app can #include to get the prototype for write(2). */ #ifdef HASH_EMIT_KEYS #define HASH_EMIT_KEY(hh, head, keyptr, fieldlen) \ do { \ unsigned _klen = fieldlen; \ write(HASH_EMIT_KEYS, &_klen, sizeof(_klen)); \ write(HASH_EMIT_KEYS, keyptr, fieldlen); \ } while (0) #else #define HASH_EMIT_KEY(hh, head, keyptr, fieldlen) #endif /* default to Jenkin's hash unless overridden e.g. DHASH_FUNCTION=HASH_SAX */ #ifdef HASH_FUNCTION #define HASH_FCN HASH_FUNCTION #else #define HASH_FCN HASH_JEN #endif /* The Bernstein hash function, used in Perl prior to v5.6. Note (x<<5+x)=x*33. 
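 * (Editorial sketch, not part of the upstream header: written as a plain
 * function rather than a macro, the same loop is
 *
 *     unsigned bernstein(const unsigned char *key, unsigned len) {
 *         unsigned h = 0;
 *         while (len--)
 *             h = (h << 5) + h + *key++;   // i.e. h = h * 33 + byte
 *         return h;
 *     }
 *
 * HASH_BER below is exactly this loop followed by the bucket mask
 * bkt = (hashv) & (num_bkts - 1).)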
*/ #define HASH_BER(key, keylen, num_bkts, hashv, bkt) \ do { \ unsigned _hb_keylen = keylen; \ char *_hb_key = (char *)(key); \ (hashv) = 0; \ while (_hb_keylen--) { \ (hashv) = (((hashv) << 5) + (hashv)) + *_hb_key++; \ } \ bkt = (hashv) & (num_bkts - 1); \ } while (0) /* SAX/FNV/OAT/JEN hash functions are macro variants of those listed at * http://eternallyconfuzzled.com/tuts/algorithms/jsw_tut_hashing.aspx */ #define HASH_SAX(key, keylen, num_bkts, hashv, bkt) \ do { \ unsigned _sx_i; \ char *_hs_key = (char *)(key); \ hashv = 0; \ for (_sx_i = 0; _sx_i < keylen; _sx_i++) \ hashv ^= (hashv << 5) + (hashv >> 2) + _hs_key[_sx_i]; \ bkt = hashv & (num_bkts - 1); \ } while (0) /* FNV-1a variation */ #define HASH_FNV(key, keylen, num_bkts, hashv, bkt) \ do { \ unsigned _fn_i; \ char *_hf_key = (char *)(key); \ hashv = 2166136261UL; \ for (_fn_i = 0; _fn_i < keylen; _fn_i++) \ hashv = hashv ^ _hf_key[_fn_i]; \ hashv = hashv * 16777619; \ bkt = hashv & (num_bkts - 1); \ } while (0) #define HASH_OAT(key, keylen, num_bkts, hashv, bkt) \ do { \ unsigned _ho_i; \ char *_ho_key = (char *)(key); \ hashv = 0; \ for (_ho_i = 0; _ho_i < keylen; _ho_i++) { \ hashv += _ho_key[_ho_i]; \ hashv += (hashv << 10); \ hashv ^= (hashv >> 6); \ } \ hashv += (hashv << 3); \ hashv ^= (hashv >> 11); \ hashv += (hashv << 15); \ bkt = hashv & (num_bkts - 1); \ } while (0) #define HASH_JEN_MIX(a, b, c) \ do { \ a -= b; \ a -= c; \ a ^= (c >> 13); \ b -= c; \ b -= a; \ b ^= (a << 8); \ c -= a; \ c -= b; \ c ^= (b >> 13); \ a -= b; \ a -= c; \ a ^= (c >> 12); \ b -= c; \ b -= a; \ b ^= (a << 16); \ c -= a; \ c -= b; \ c ^= (b >> 5); \ a -= b; \ a -= c; \ a ^= (c >> 3); \ b -= c; \ b -= a; \ b ^= (a << 10); \ c -= a; \ c -= b; \ c ^= (b >> 15); \ } while (0) #define HASH_JEN(key, keylen, num_bkts, hashv, bkt) \ do { \ unsigned _hj_i, _hj_j, _hj_k; \ unsigned char *_hj_key = (unsigned char *)(key); \ hashv = 0xfeedbeef; \ _hj_i = _hj_j = 0x9e3779b9; \ _hj_k = (unsigned)(keylen); \ while (_hj_k >= 12) { \ _hj_i += (_hj_key[0] + ((unsigned)_hj_key[1] << 8) + ((unsigned)_hj_key[2] << 16) + ((unsigned)_hj_key[3] << 24)); \ _hj_j += (_hj_key[4] + ((unsigned)_hj_key[5] << 8) + ((unsigned)_hj_key[6] << 16) + ((unsigned)_hj_key[7] << 24)); \ hashv += (_hj_key[8] + ((unsigned)_hj_key[9] << 8) + ((unsigned)_hj_key[10] << 16) + ((unsigned)_hj_key[11] << 24)); \ \ HASH_JEN_MIX(_hj_i, _hj_j, hashv); \ \ _hj_key += 12; \ _hj_k -= 12; \ } \ hashv += keylen; \ switch (_hj_k) { \ case 11: \ hashv += ((unsigned)_hj_key[10] << 24); \ case 10: \ hashv += ((unsigned)_hj_key[9] << 16); \ case 9: \ hashv += ((unsigned)_hj_key[8] << 8); \ case 8: \ _hj_j += ((unsigned)_hj_key[7] << 24); \ case 7: \ _hj_j += ((unsigned)_hj_key[6] << 16); \ case 6: \ _hj_j += ((unsigned)_hj_key[5] << 8); \ case 5: \ _hj_j += _hj_key[4]; \ case 4: \ _hj_i += ((unsigned)_hj_key[3] << 24); \ case 3: \ _hj_i += ((unsigned)_hj_key[2] << 16); \ case 2: \ _hj_i += ((unsigned)_hj_key[1] << 8); \ case 1: \ _hj_i += _hj_key[0]; \ } \ HASH_JEN_MIX(_hj_i, _hj_j, hashv); \ bkt = hashv & (num_bkts - 1); \ } while (0) /* The Paul Hsieh hash function */ #undef get16bits #if (defined(__GNUC__) && defined(__i386__)) || defined(__WATCOMC__) || defined(_MSC_VER) || defined(__BORLANDC__) || defined(__TURBOC__) #define get16bits(d) (*((const uint16_t *)(d))) #endif #if !defined(get16bits) #define get16bits(d) ((((uint32_t)(((const uint8_t *)(d))[1])) << 8) + (uint32_t)(((const uint8_t *)(d))[0])) #endif #define HASH_SFH(key, keylen, num_bkts, hashv, bkt) \ do { \ unsigned char 
*_sfh_key = (unsigned char *)(key); \ uint32_t _sfh_tmp, _sfh_len = keylen; \ \ int _sfh_rem = _sfh_len & 3; \ _sfh_len >>= 2; \ hashv = 0xcafebabe; \ \ /* Main loop */ \ for (; _sfh_len > 0; _sfh_len--) { \ hashv += get16bits(_sfh_key); \ _sfh_tmp = (uint32_t)(get16bits(_sfh_key + 2)) << 11 ^ hashv; \ hashv = (hashv << 16) ^ _sfh_tmp; \ _sfh_key += 2 * sizeof(uint16_t); \ hashv += hashv >> 11; \ } \ \ /* Handle end cases */ \ switch (_sfh_rem) { \ case 3: \ hashv += get16bits(_sfh_key); \ hashv ^= hashv << 16; \ hashv ^= (uint32_t)(_sfh_key[sizeof(uint16_t)] << 18); \ hashv += hashv >> 11; \ break; \ case 2: \ hashv += get16bits(_sfh_key); \ hashv ^= hashv << 11; \ hashv += hashv >> 17; \ break; \ case 1: \ hashv += *_sfh_key; \ hashv ^= hashv << 10; \ hashv += hashv >> 1; \ } \ \ /* Force "avalanching" of final 127 bits */ \ hashv ^= hashv << 3; \ hashv += hashv >> 5; \ hashv ^= hashv << 4; \ hashv += hashv >> 17; \ hashv ^= hashv << 25; \ hashv += hashv >> 6; \ bkt = hashv & (num_bkts - 1); \ } while (0) #ifdef HASH_USING_NO_STRICT_ALIASING /* The MurmurHash exploits some CPU's (x86,x86_64) tolerance for unaligned reads. * For other types of CPU's (e.g. Sparc) an unaligned read causes a bus error. * MurmurHash uses the faster approach only on CPU's where we know it's safe. * * Note the preprocessor built-in defines can be emitted using: * * gcc -m64 -dM -E - < /dev/null (on gcc) * cc -## a.c (where a.c is a simple test file) (Sun Studio) */ #if (defined(__i386__) || defined(__x86_64__) || defined(_M_IX86)) #define MUR_GETBLOCK(p, i) p[i] #else /* non intel */ #define MUR_PLUS0_ALIGNED(p) (((unsigned long)p & 0x3) == 0) #define MUR_PLUS1_ALIGNED(p) (((unsigned long)p & 0x3) == 1) #define MUR_PLUS2_ALIGNED(p) (((unsigned long)p & 0x3) == 2) #define MUR_PLUS3_ALIGNED(p) (((unsigned long)p & 0x3) == 3) #define WP(p) ((uint32_t *)((unsigned long)(p) & ~3UL)) #if (defined(__BIG_ENDIAN__) || defined(SPARC) || defined(__ppc__) || defined(__ppc64__)) #define MUR_THREE_ONE(p) ((((*WP(p)) & 0x00ffffff) << 8) | (((*(WP(p) + 1)) & 0xff000000) >> 24)) #define MUR_TWO_TWO(p) ((((*WP(p)) & 0x0000ffff) << 16) | (((*(WP(p) + 1)) & 0xffff0000) >> 16)) #define MUR_ONE_THREE(p) ((((*WP(p)) & 0x000000ff) << 24) | (((*(WP(p) + 1)) & 0xffffff00) >> 8)) #else /* assume little endian non-intel */ #define MUR_THREE_ONE(p) ((((*WP(p)) & 0xffffff00) >> 8) | (((*(WP(p) + 1)) & 0x000000ff) << 24)) #define MUR_TWO_TWO(p) ((((*WP(p)) & 0xffff0000) >> 16) | (((*(WP(p) + 1)) & 0x0000ffff) << 16)) #define MUR_ONE_THREE(p) ((((*WP(p)) & 0xff000000) >> 24) | (((*(WP(p) + 1)) & 0x00ffffff) << 8)) #endif #define MUR_GETBLOCK(p, i) (MUR_PLUS0_ALIGNED(p) ? ((p)[i]) : (MUR_PLUS1_ALIGNED(p) ? MUR_THREE_ONE(p) : (MUR_PLUS2_ALIGNED(p) ? 
MUR_TWO_TWO(p) : MUR_ONE_THREE(p)))) #endif #define MUR_ROTL32(x, r) (((x) << (r)) | ((x) >> (32 - (r)))) #define MUR_FMIX(_h) \ do { \ _h ^= _h >> 16; \ _h *= 0x85ebca6b; \ _h ^= _h >> 13; \ _h *= 0xc2b2ae35l; \ _h ^= _h >> 16; \ } while (0) #define HASH_MUR(key, keylen, num_bkts, hashv, bkt) \ do { \ const uint8_t *_mur_data = (const uint8_t *)(key); \ const int _mur_nblocks = (keylen) / 4; \ uint32_t _mur_h1 = 0xf88D5353; \ uint32_t _mur_c1 = 0xcc9e2d51; \ uint32_t _mur_c2 = 0x1b873593; \ uint32_t _mur_k1 = 0; \ const uint8_t *_mur_tail; \ const uint32_t *_mur_blocks = (const uint32_t *)(_mur_data + _mur_nblocks * 4); \ int _mur_i; \ for (_mur_i = -_mur_nblocks; _mur_i; _mur_i++) { \ _mur_k1 = MUR_GETBLOCK(_mur_blocks, _mur_i); \ _mur_k1 *= _mur_c1; \ _mur_k1 = MUR_ROTL32(_mur_k1, 15); \ _mur_k1 *= _mur_c2; \ \ _mur_h1 ^= _mur_k1; \ _mur_h1 = MUR_ROTL32(_mur_h1, 13); \ _mur_h1 = _mur_h1 * 5 + 0xe6546b64; \ } \ _mur_tail = (const uint8_t *)(_mur_data + _mur_nblocks * 4); \ _mur_k1 = 0; \ switch (keylen & 3) { \ case 3: \ _mur_k1 ^= _mur_tail[2] << 16; \ case 2: \ _mur_k1 ^= _mur_tail[1] << 8; \ case 1: \ _mur_k1 ^= _mur_tail[0]; \ _mur_k1 *= _mur_c1; \ _mur_k1 = MUR_ROTL32(_mur_k1, 15); \ _mur_k1 *= _mur_c2; \ _mur_h1 ^= _mur_k1; \ } \ _mur_h1 ^= (keylen); \ MUR_FMIX(_mur_h1); \ hashv = _mur_h1; \ bkt = hashv & (num_bkts - 1); \ } while (0) #endif /* HASH_USING_NO_STRICT_ALIASING */ /* key comparison function; return 0 if keys equal */ #define HASH_KEYCMP(a, b, len) memcmp(a, b, len) /* iterate over items in a known bucket to find desired item */ #define HASH_FIND_IN_BKT(tbl, hh, head, keyptr, keylen_in, out) \ do { \ if (head.hh_head) \ DECLTYPE_ASSIGN(out, ELMT_FROM_HH(tbl, head.hh_head)); \ else \ out = NULL; \ while (out) { \ if ((out)->hh.keylen == keylen_in) { \ if ((HASH_KEYCMP((out)->hh.key, keyptr, keylen_in)) == 0) \ break; \ } \ if ((out)->hh.hh_next) \ DECLTYPE_ASSIGN(out, ELMT_FROM_HH(tbl, (out)->hh.hh_next)); \ else \ out = NULL; \ } \ } while (0) /* add an item to a bucket */ #define HASH_ADD_TO_BKT(head, addhh) \ do { \ head.count++; \ (addhh)->hh_next = head.hh_head; \ (addhh)->hh_prev = NULL; \ if (head.hh_head) { \ (head).hh_head->hh_prev = (addhh); \ } \ (head).hh_head = addhh; \ if (head.count >= ((head.expand_mult + 1) * HASH_BKT_CAPACITY_THRESH) && (addhh)->tbl->noexpand != 1) { \ HASH_EXPAND_BUCKETS((addhh)->tbl); \ } \ } while (0) /* remove an item from a given bucket */ #define HASH_DEL_IN_BKT(hh, head, hh_del) \ (head).count--; \ if ((head).hh_head == hh_del) { \ (head).hh_head = hh_del->hh_next; \ } \ if (hh_del->hh_prev) { \ hh_del->hh_prev->hh_next = hh_del->hh_next; \ } \ if (hh_del->hh_next) { \ hh_del->hh_next->hh_prev = hh_del->hh_prev; \ } /* Bucket expansion has the effect of doubling the number of buckets * and redistributing the items into the new buckets. Ideally the * items will distribute more or less evenly into the new buckets * (the extent to which this is true is a measure of the quality of * the hash function as it applies to the key domain). * * With the items distributed into more buckets, the chain length * (item count) in each bucket is reduced. Thus by expanding buckets * the hash keeps a bound on the chain length. This bounded chain * length is the essence of how a hash provides constant time lookup. * * The calculation of tbl->ideal_chain_maxlen below deserves some * explanation. First, keep in mind that we're calculating the ideal * maximum chain length based on the *new* (doubled) bucket count. 
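 * (Worked example added by the editor, not found in the upstream header:
 * suppose the table holds 1000 items and is growing from 256 to 512
 * buckets.  The ideal chain length is then ceil(1000/512) = 2, and the
 * expression used in HASH_EXPAND_BUCKETS below produces the same value as
 * (1000 >> 9) + ((1000 & 511) ? 1 : 0) = 1 + 1 = 2, using only a shift and
 * a mask.  The derivation of that expression follows.)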
* In fractions this is just n/b (n=number of items,b=new num buckets). * Since the ideal chain length is an integer, we want to calculate * ceil(n/b). We don't depend on floating point arithmetic in this * hash, so to calculate ceil(n/b) with integers we could write * * ceil(n/b) = (n/b) + ((n%b)?1:0) * * and in fact a previous version of this hash did just that. * But now we have improved things a bit by recognizing that b is * always a power of two. We keep its base 2 log handy (call it lb), * so now we can write this with a bit shift and logical AND: * * ceil(n/b) = (n>>lb) + ( (n & (b-1)) ? 1:0) * */ #define HASH_EXPAND_BUCKETS(tbl) \ do { \ unsigned _he_bkt; \ unsigned _he_bkt_i; \ struct UT_hash_handle *_he_thh, *_he_hh_nxt; \ UT_hash_bucket *_he_new_buckets, *_he_newbkt; \ _he_new_buckets = (UT_hash_bucket *)uthash_malloc(2 * tbl->num_buckets * sizeof(struct UT_hash_bucket)); \ if (!_he_new_buckets) { \ uthash_fatal("out of memory"); \ } \ memset(_he_new_buckets, 0, 2 * tbl->num_buckets * sizeof(struct UT_hash_bucket)); \ tbl->ideal_chain_maxlen = (tbl->num_items >> (tbl->log2_num_buckets + 1)) + ((tbl->num_items & ((tbl->num_buckets * 2) - 1)) ? 1 : 0); \ tbl->nonideal_items = 0; \ for (_he_bkt_i = 0; _he_bkt_i < tbl->num_buckets; _he_bkt_i++) { \ _he_thh = tbl->buckets[_he_bkt_i].hh_head; \ while (_he_thh) { \ _he_hh_nxt = _he_thh->hh_next; \ HASH_TO_BKT(_he_thh->hashv, tbl->num_buckets * 2, _he_bkt); \ _he_newbkt = &(_he_new_buckets[_he_bkt]); \ if (++(_he_newbkt->count) > tbl->ideal_chain_maxlen) { \ tbl->nonideal_items++; \ _he_newbkt->expand_mult = _he_newbkt->count / tbl->ideal_chain_maxlen; \ } \ _he_thh->hh_prev = NULL; \ _he_thh->hh_next = _he_newbkt->hh_head; \ if (_he_newbkt->hh_head) \ _he_newbkt->hh_head->hh_prev = _he_thh; \ _he_newbkt->hh_head = _he_thh; \ _he_thh = _he_hh_nxt; \ } \ } \ uthash_free(tbl->buckets, tbl->num_buckets * sizeof(struct UT_hash_bucket)); \ tbl->num_buckets *= 2; \ tbl->log2_num_buckets++; \ tbl->buckets = _he_new_buckets; \ tbl->ineff_expands = (tbl->nonideal_items > (tbl->num_items >> 1)) ? (tbl->ineff_expands + 1) : 0; \ if (tbl->ineff_expands > 1) { \ tbl->noexpand = 1; \ uthash_noexpand_fyi(tbl); \ } \ uthash_expand_fyi(tbl); \ } while (0) /* This is an adaptation of Simon Tatham's O(n log(n)) mergesort */ /* Note that HASH_SORT assumes the hash handle name to be hh. * HASH_SRT was added to allow the hash handle name to be passed in. */ #define HASH_SORT(head, cmpfcn) HASH_SRT(hh, head, cmpfcn) #define HASH_SRT(hh, head, cmpfcn) \ do { \ unsigned _hs_i; \ unsigned _hs_looping, _hs_nmerges, _hs_insize, _hs_psize, _hs_qsize; \ struct UT_hash_handle *_hs_p, *_hs_q, *_hs_e, *_hs_list, *_hs_tail; \ if (head) { \ _hs_insize = 1; \ _hs_looping = 1; \ _hs_list = &((head)->hh); \ while (_hs_looping) { \ _hs_p = _hs_list; \ _hs_list = NULL; \ _hs_tail = NULL; \ _hs_nmerges = 0; \ while (_hs_p) { \ _hs_nmerges++; \ _hs_q = _hs_p; \ _hs_psize = 0; \ for (_hs_i = 0; _hs_i < _hs_insize; _hs_i++) { \ _hs_psize++; \ _hs_q = (UT_hash_handle *)((_hs_q->next) ? ((void *)((char *)(_hs_q->next) + (head)->hh.tbl->hho)) : NULL); \ if (!(_hs_q)) \ break; \ } \ _hs_qsize = _hs_insize; \ while ((_hs_psize > 0) || ((_hs_qsize > 0) && _hs_q)) { \ if (_hs_psize == 0) { \ _hs_e = _hs_q; \ _hs_q = (UT_hash_handle *)((_hs_q->next) ? ((void *)((char *)(_hs_q->next) + (head)->hh.tbl->hho)) : NULL); \ _hs_qsize--; \ } else if ((_hs_qsize == 0) || !(_hs_q)) { \ _hs_e = _hs_p; \ if (_hs_p) { \ _hs_p = (UT_hash_handle *)((_hs_p->next) ? 
((void *)((char *)(_hs_p->next) + (head)->hh.tbl->hho)) : NULL); \ } \ _hs_psize--; \ } else if ((cmpfcn(DECLTYPE(head)(ELMT_FROM_HH((head)->hh.tbl, _hs_p)), DECLTYPE(head)(ELMT_FROM_HH((head)->hh.tbl, _hs_q)))) <= 0) { \ _hs_e = _hs_p; \ if (_hs_p) { \ _hs_p = (UT_hash_handle *)((_hs_p->next) ? ((void *)((char *)(_hs_p->next) + (head)->hh.tbl->hho)) : NULL); \ } \ _hs_psize--; \ } else { \ _hs_e = _hs_q; \ _hs_q = (UT_hash_handle *)((_hs_q->next) ? ((void *)((char *)(_hs_q->next) + (head)->hh.tbl->hho)) : NULL); \ _hs_qsize--; \ } \ if (_hs_tail) { \ _hs_tail->next = ((_hs_e) ? ELMT_FROM_HH((head)->hh.tbl, _hs_e) : NULL); \ } else { \ _hs_list = _hs_e; \ } \ if (_hs_e) { \ _hs_e->prev = ((_hs_tail) ? ELMT_FROM_HH((head)->hh.tbl, _hs_tail) : NULL); \ } \ _hs_tail = _hs_e; \ } \ _hs_p = _hs_q; \ } \ if (_hs_tail) { \ _hs_tail->next = NULL; \ } \ if (_hs_nmerges <= 1) { \ _hs_looping = 0; \ (head)->hh.tbl->tail = _hs_tail; \ DECLTYPE_ASSIGN(head, ELMT_FROM_HH((head)->hh.tbl, _hs_list)); \ } \ _hs_insize *= 2; \ } \ HASH_FSCK(hh, head); \ } \ } while (0) /* This function selects items from one hash into another hash. * The end result is that the selected items have dual presence * in both hashes. There is no copy of the items made; rather * they are added into the new hash through a secondary hash * hash handle that must be present in the structure. */ #define HASH_SELECT(hh_dst, dst, hh_src, src, cond) \ do { \ unsigned _src_bkt, _dst_bkt; \ void *_last_elt = NULL, *_elt; \ UT_hash_handle *_src_hh, *_dst_hh, *_last_elt_hh = NULL; \ ptrdiff_t _dst_hho = ((char *)(&(dst)->hh_dst) - (char *)(dst)); \ if (src) { \ for (_src_bkt = 0; _src_bkt < (src)->hh_src.tbl->num_buckets; _src_bkt++) { \ for (_src_hh = (src)->hh_src.tbl->buckets[_src_bkt].hh_head; _src_hh; _src_hh = _src_hh->hh_next) { \ _elt = ELMT_FROM_HH((src)->hh_src.tbl, _src_hh); \ if (cond(_elt)) { \ _dst_hh = (UT_hash_handle *)(((char *)_elt) + _dst_hho); \ _dst_hh->key = _src_hh->key; \ _dst_hh->keylen = _src_hh->keylen; \ _dst_hh->hashv = _src_hh->hashv; \ _dst_hh->prev = _last_elt; \ _dst_hh->next = NULL; \ if (_last_elt_hh) { \ _last_elt_hh->next = _elt; \ } \ if (!dst) { \ DECLTYPE_ASSIGN(dst, _elt); \ HASH_MAKE_TABLE(hh_dst, dst); \ } else { \ _dst_hh->tbl = (dst)->hh_dst.tbl; \ } \ HASH_TO_BKT(_dst_hh->hashv, _dst_hh->tbl->num_buckets, _dst_bkt); \ HASH_ADD_TO_BKT(_dst_hh->tbl->buckets[_dst_bkt], _dst_hh); \ (dst)->hh_dst.tbl->num_items++; \ _last_elt = _elt; \ _last_elt_hh = _dst_hh; \ } \ } \ } \ } \ HASH_FSCK(hh_dst, dst); \ } while (0) #define HASH_CLEAR(hh, head) \ do { \ if (head) { \ uthash_free((head)->hh.tbl->buckets, (head)->hh.tbl->num_buckets * sizeof(struct UT_hash_bucket)); \ HASH_BLOOM_FREE((head)->hh.tbl); \ uthash_free((head)->hh.tbl, sizeof(UT_hash_table)); \ (head) = NULL; \ } \ } while (0) #define HASH_OVERHEAD(hh, head) \ (size_t)((((head)->hh.tbl->num_items * sizeof(UT_hash_handle)) + \ ((head)->hh.tbl->num_buckets * sizeof(UT_hash_bucket)) + \ (sizeof(UT_hash_table)) + \ (HASH_BLOOM_BYTELEN))) #ifdef NO_DECLTYPE #define HASH_ITER(hh, head, el, tmp) for ((el) = (head), (*(char **)(&(tmp))) = (char *)((head) ? (head)->hh.next : NULL); \ el; (el) = (tmp), (*(char **)(&(tmp))) = (char *)((tmp) ? (tmp)->hh.next : NULL)) #else #define HASH_ITER(hh, head, el, tmp) for ((el) = (head), (tmp) = DECLTYPE(el)((head) ? (head)->hh.next : NULL); \ el; (el) = (tmp), (tmp) = DECLTYPE(el)((tmp) ? 
(tmp)->hh.next : NULL)) #endif /* obtain a count of items in the hash */ #define HASH_COUNT(head) HASH_CNT(hh, head) #define HASH_CNT(hh, head) ((head) ? ((head)->hh.tbl->num_items) : 0) typedef struct UT_hash_bucket { struct UT_hash_handle *hh_head; unsigned count; /* expand_mult is normally set to 0. In this situation, the max chain length * threshold is enforced at its default value, HASH_BKT_CAPACITY_THRESH. (If * the bucket's chain exceeds this length, bucket expansion is triggered). * However, setting expand_mult to a non-zero value delays bucket expansion * (that would be triggered by additions to this particular bucket) * until its chain length reaches a *multiple* of HASH_BKT_CAPACITY_THRESH. * (The multiplier is simply expand_mult+1). The whole idea of this * multiplier is to reduce bucket expansions, since they are expensive, in * situations where we know that a particular bucket tends to be overused. * It is better to let its chain length grow to a longer yet-still-bounded * value, than to do an O(n) bucket expansion too often. */ unsigned expand_mult; } UT_hash_bucket; /* random signature used only to find hash tables in external analysis */ #define HASH_SIGNATURE 0xa0111fe1 #define HASH_BLOOM_SIGNATURE 0xb12220f2 typedef struct UT_hash_table { UT_hash_bucket *buckets; unsigned num_buckets, log2_num_buckets; unsigned num_items; struct UT_hash_handle *tail; /* tail hh in app order, for fast append */ ptrdiff_t hho; /* hash handle offset (byte pos of hash handle in element */ /* in an ideal situation (all buckets used equally), no bucket would have * more than ceil(#items/#buckets) items. that's the ideal chain length. */ unsigned ideal_chain_maxlen; /* nonideal_items is the number of items in the hash whose chain position * exceeds the ideal chain maxlen. these items pay the penalty for an uneven * hash distribution; reaching them in a chain traversal takes >ideal steps */ unsigned nonideal_items; /* ineffective expands occur when a bucket doubling was performed, but * afterward, more than half the items in the hash had nonideal chain * positions. If this happens on two consecutive expansions we inhibit any * further expansion, as it's not helping; this happens when the hash * function isn't a good fit for the key domain. When expansion is inhibited * the hash will still work, albeit no longer in constant time. 
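 * (Editor's illustration, not part of the upstream header: an application
 * that wants to be told when this happens can override the notification
 * hook before including uthash.h, for example
 *
 *     #define uthash_noexpand_fyi(tbl) fprintf(stderr, "uthash: bucket expansion inhibited\n")
 *
 * HASH_EXPAND_BUCKETS above invokes uthash_noexpand_fyi(tbl) at the moment
 * it sets noexpand to 1.)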
*/ unsigned ineff_expands, noexpand; uint32_t signature; /* used only to find hash tables in external analysis */ #ifdef HASH_BLOOM uint32_t bloom_sig; /* used only to test bloom exists in external analysis */ uint8_t *bloom_bv; char bloom_nbits; #endif } UT_hash_table; typedef struct UT_hash_handle { struct UT_hash_table *tbl; void *prev; /* prev element in app order */ void *next; /* next element in app order */ struct UT_hash_handle *hh_prev; /* previous hh in bucket order */ struct UT_hash_handle *hh_next; /* next hh in bucket order */ void *key; /* ptr to enclosing struct's key */ unsigned keylen; /* enclosing struct's key len */ unsigned hashv; /* result of hash-fcn(key) */ } UT_hash_handle; #endif /* UTHASH_H */ the_silver_searcher-2.1.0/src/util.c000066400000000000000000000445671314774034700174740ustar00rootroot00000000000000#include #include #include #include #include #include #include "config.h" #include "util.h" #ifdef _WIN32 #include #define flockfile(x) #define funlockfile(x) #define getc_unlocked(x) getc(x) #endif #define CHECK_AND_RETURN(ptr) \ if (ptr == NULL) { \ die("Memory allocation failed."); \ } \ return ptr; void *ag_malloc(size_t size) { void *ptr = malloc(size); CHECK_AND_RETURN(ptr) } void *ag_realloc(void *ptr, size_t size) { void *new_ptr = realloc(ptr, size); CHECK_AND_RETURN(new_ptr) } void *ag_calloc(size_t count, size_t size) { void *ptr = calloc(count, size); CHECK_AND_RETURN(ptr) } char *ag_strdup(const char *s) { char *str = strdup(s); CHECK_AND_RETURN(str) } char *ag_strndup(const char *s, size_t size) { char *str = NULL; #ifdef HAVE_STRNDUP str = strndup(s, size); CHECK_AND_RETURN(str) #else str = (char *)ag_malloc(size + 1); strlcpy(str, s, size + 1); return str; #endif } void free_strings(char **strs, const size_t strs_len) { if (strs == NULL) { return; } size_t i; for (i = 0; i < strs_len; i++) { free(strs[i]); } free(strs); } void generate_alpha_skip(const char *find, size_t f_len, size_t skip_lookup[], const int case_sensitive) { size_t i; for (i = 0; i < 256; i++) { skip_lookup[i] = f_len; } f_len--; for (i = 0; i < f_len; i++) { if (case_sensitive) { skip_lookup[(unsigned char)find[i]] = f_len - i; } else { skip_lookup[(unsigned char)tolower(find[i])] = f_len - i; skip_lookup[(unsigned char)toupper(find[i])] = f_len - i; } } } int is_prefix(const char *s, const size_t s_len, const size_t pos, const int case_sensitive) { size_t i; for (i = 0; pos + i < s_len; i++) { if (case_sensitive) { if (s[i] != s[i + pos]) { return 0; } } else { if (tolower(s[i]) != tolower(s[i + pos])) { return 0; } } } return 1; } size_t suffix_len(const char *s, const size_t s_len, const size_t pos, const int case_sensitive) { size_t i; for (i = 0; i < pos; i++) { if (case_sensitive) { if (s[pos - i] != s[s_len - i - 1]) { break; } } else { if (tolower(s[pos - i]) != tolower(s[s_len - i - 1])) { break; } } } return i; } void generate_find_skip(const char *find, const size_t f_len, size_t **skip_lookup, const int case_sensitive) { size_t i; size_t s_len; size_t *sl = ag_malloc(f_len * sizeof(size_t)); *skip_lookup = sl; size_t last_prefix = f_len; for (i = last_prefix; i > 0; i--) { if (is_prefix(find, f_len, i, case_sensitive)) { last_prefix = i; } sl[i - 1] = last_prefix + (f_len - i); } for (i = 0; i < f_len; i++) { s_len = suffix_len(find, f_len, i, case_sensitive); if (find[i - s_len] != find[f_len - 1 - s_len]) { sl[f_len - 1 - s_len] = f_len - 1 - i + s_len; } } } size_t ag_max(size_t a, size_t b) { if (b > a) { return b; } return a; } void generate_hash(const char 
*find, const size_t f_len, uint8_t *h_table, const int case_sensitive) { int i; for (i = f_len - sizeof(uint16_t); i >= 0; i--) { // Add all 2^sizeof(uint16_t) combinations of capital letters to the hash table int caps_set; for (caps_set = 0; caps_set < (1 << sizeof(uint16_t)); caps_set++) { word_t word; memcpy(&word.as_chars, find + i, sizeof(uint16_t)); int cap_index; // Capitalize the letters whose corresponding bits in caps_set are 1 for (cap_index = 0; caps_set >> cap_index; cap_index++) { if ((caps_set >> cap_index) & 1) word.as_chars[cap_index] -= 'a' - 'A'; } size_t h; // Find next free cell for (h = word.as_word % H_SIZE; h_table[h]; h = (h + 1) % H_SIZE) ; h_table[h] = i + 1; // Don't add capital letters if case sensitive if (case_sensitive) break; } } } /* Boyer-Moore strstr */ const char *boyer_moore_strnstr(const char *s, const char *find, const size_t s_len, const size_t f_len, const size_t alpha_skip_lookup[], const size_t *find_skip_lookup, const int case_insensitive) { ssize_t i; size_t pos = f_len - 1; while (pos < s_len) { for (i = f_len - 1; i >= 0 && (case_insensitive ? tolower(s[pos]) : s[pos]) == find[i]; pos--, i--) { } if (i < 0) { return s + pos + 1; } pos += ag_max(alpha_skip_lookup[(unsigned char)s[pos]], find_skip_lookup[i]); } return NULL; } // Clang's -fsanitize=alignment (included in -fsanitize=undefined) will flag // the intentional unaligned access here, so suppress it for this function NO_SANITIZE_ALIGNMENT const char *hash_strnstr(const char *s, const char *find, const size_t s_len, const size_t f_len, uint8_t *h_table, const int case_sensitive) { if (s_len < f_len) return NULL; // Step through s const size_t step = f_len - sizeof(uint16_t) + 1; size_t s_i = f_len - sizeof(uint16_t); for (; s_i <= s_len - f_len; s_i += step) { size_t h; for (h = *(const uint16_t *)(s + s_i) % H_SIZE; h_table[h]; h = (h + 1) % H_SIZE) { const char *R = s + s_i - (h_table[h] - 1); size_t i; // Check putative match for (i = 0; i < f_len; i++) { if ((case_sensitive ? R[i] : tolower(R[i])) != find[i]) goto next_hash_cell; } return R; // Found next_hash_cell:; } } // Check tail for (s_i = s_i - step + 1; s_i <= s_len - f_len; s_i++) { size_t i; const char *R = s + s_i; for (i = 0; i < f_len; i++) { char s_c = case_sensitive ? R[i] : tolower(R[i]); if (s_c != find[i]) goto next_start; } return R; next_start:; } return NULL; } size_t invert_matches(const char *buf, const size_t buf_len, match_t matches[], size_t matches_len) { size_t i; size_t match_read_index = 0; size_t inverted_match_count = 0; size_t inverted_match_start = 0; size_t last_line_end = 0; int in_inverted_match = TRUE; match_t next_match; log_debug("Inverting %u matches.", matches_len); if (matches_len > 0) { next_match = matches[0]; } else { next_match.start = buf_len + 1; } /* No matches, so the whole buffer is now a match. 
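 * (Editorial note, not in the original source: this is the path taken for
 * inverted matching (ag -v / --invert-match) when the pattern never occurs;
 * the code below reports a single inverted match spanning the whole buffer,
 * e.g. a 10-byte buffer with no matches yields start = 0 and end = 9.)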
*/ if (matches_len == 0) { matches[0].start = 0; matches[0].end = buf_len - 1; return 1; } for (i = 0; i < buf_len; i++) { if (i == next_match.start) { i = next_match.end - 1; match_read_index++; if (match_read_index < matches_len) { next_match = matches[match_read_index]; } if (in_inverted_match && last_line_end > inverted_match_start) { matches[inverted_match_count].start = inverted_match_start; matches[inverted_match_count].end = last_line_end - 1; inverted_match_count++; } in_inverted_match = FALSE; } else if (i == buf_len - 1 && in_inverted_match) { matches[inverted_match_count].start = inverted_match_start; matches[inverted_match_count].end = i; inverted_match_count++; } else if (buf[i] == '\n') { last_line_end = i + 1; if (!in_inverted_match) { inverted_match_start = last_line_end; } in_inverted_match = TRUE; } } for (i = 0; i < matches_len; i++) { log_debug("Inverted match %i start %i end %i.", i, matches[i].start, matches[i].end); } return inverted_match_count; } void realloc_matches(match_t **matches, size_t *matches_size, size_t matches_len) { if (matches_len < *matches_size) { return; } /* TODO: benchmark initial size of matches. 100 may be too small/big */ *matches_size = *matches ? *matches_size * 2 : 100; *matches = ag_realloc(*matches, *matches_size * sizeof(match_t)); } void compile_study(pcre **re, pcre_extra **re_extra, char *q, const int pcre_opts, const int study_opts) { const char *pcre_err = NULL; int pcre_err_offset = 0; *re = pcre_compile(q, pcre_opts, &pcre_err, &pcre_err_offset, NULL); if (*re == NULL) { die("Bad regex! pcre_compile() failed at position %i: %s\nIf you meant to search for a literal string, run ag with -Q", pcre_err_offset, pcre_err); } *re_extra = pcre_study(*re, study_opts, &pcre_err); if (*re_extra == NULL) { log_debug("pcre_study returned nothing useful. Error: %s", pcre_err); } } /* This function is very hot. It's called on every file. */ int is_binary(const void *buf, const size_t buf_len) { size_t suspicious_bytes = 0; size_t total_bytes = buf_len > 512 ? 512 : buf_len; const unsigned char *buf_c = buf; size_t i; if (buf_len == 0) { /* Is an empty file binary? Is it text? */ return 0; } if (buf_len >= 3 && buf_c[0] == 0xEF && buf_c[1] == 0xBB && buf_c[2] == 0xBF) { /* UTF-8 BOM. This isn't binary. */ return 0; } if (buf_len >= 5 && strncmp(buf, "%PDF-", 5) == 0) { /* PDF. This is binary. */ return 1; } for (i = 0; i < total_bytes; i++) { if (buf_c[i] == '\0') { /* NULL char. It's binary */ return 1; } else if ((buf_c[i] < 7 || buf_c[i] > 14) && (buf_c[i] < 32 || buf_c[i] > 127)) { /* UTF-8 detection */ if (buf_c[i] > 193 && buf_c[i] < 224 && i + 1 < total_bytes) { i++; if (buf_c[i] > 127 && buf_c[i] < 192) { continue; } } else if (buf_c[i] > 223 && buf_c[i] < 240 && i + 2 < total_bytes) { i++; if (buf_c[i] > 127 && buf_c[i] < 192 && buf_c[i + 1] > 127 && buf_c[i + 1] < 192) { i++; continue; } } suspicious_bytes++; /* Disk IO is so slow that it's worthwhile to do this calculation after every suspicious byte. */ /* This is true even on a 1.6Ghz Atom with an Intel 320 SSD. 
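 * (Editor's note, not in the original source: the early return below fires
 * once at least 32 bytes have been scanned and (suspicious_bytes * 100) /
 * total_bytes exceeds 10, where total_bytes is the sample size of at most
 * 512 bytes.  As a worked example with a full 512-byte sample, 57 suspicious
 * bytes are needed to declare the buffer binary (57 * 100 / 512 = 11 > 10),
 * while 56 are not (56 * 100 / 512 = 10).)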
*/ /* Read at least 32 bytes before making a decision */ if (i >= 32 && (suspicious_bytes * 100) / total_bytes > 10) { return 1; } } } if ((suspicious_bytes * 100) / total_bytes > 10) { return 1; } return 0; } int is_regex(const char *query) { char regex_chars[] = { '$', '(', ')', '*', '+', '.', '?', '[', '\\', '^', '{', '|', '\0' }; return (strpbrk(query, regex_chars) != NULL); } int is_fnmatch(const char *filename) { char fnmatch_chars[] = { '!', '*', '?', '[', ']', '\0' }; return (strpbrk(filename, fnmatch_chars) != NULL); } int binary_search(const char *needle, char **haystack, int start, int end) { int mid; int rc; if (start == end) { return -1; } mid = start + ((end - start) / 2); rc = strcmp(needle, haystack[mid]); if (rc < 0) { return binary_search(needle, haystack, start, mid); } else if (rc > 0) { return binary_search(needle, haystack, mid + 1, end); } return mid; } static int wordchar_table[256]; void init_wordchar_table(void) { int i; for (i = 0; i < 256; ++i) { char ch = (char)i; wordchar_table[i] = ('a' <= ch && ch <= 'z') || ('A' <= ch && ch <= 'Z') || ('0' <= ch && ch <= '9') || ch == '_'; } } int is_wordchar(char ch) { return wordchar_table[(unsigned char)ch]; } int is_lowercase(const char *s) { int i; for (i = 0; s[i] != '\0'; i++) { if (!isascii(s[i]) || isupper(s[i])) { return FALSE; } } return TRUE; } int is_directory(const char *path, const struct dirent *d) { #ifdef HAVE_DIRENT_DTYPE /* Some filesystems, e.g. ReiserFS, always return a type DT_UNKNOWN from readdir or scandir. */ /* Call stat if we don't find DT_DIR to get the information we need. */ /* Also works for symbolic links to directories. */ if (d->d_type != DT_UNKNOWN && d->d_type != DT_LNK) { return d->d_type == DT_DIR; } #endif char *full_path; struct stat s; ag_asprintf(&full_path, "%s/%s", path, d->d_name); if (stat(full_path, &s) != 0) { free(full_path); return FALSE; } #ifdef _WIN32 int is_dir = GetFileAttributesA(full_path) & FILE_ATTRIBUTE_DIRECTORY; #else int is_dir = S_ISDIR(s.st_mode); #endif free(full_path); return is_dir; } int is_symlink(const char *path, const struct dirent *d) { #ifdef _WIN32 char full_path[MAX_PATH + 1] = { 0 }; sprintf(full_path, "%s\\%s", path, d->d_name); return (GetFileAttributesA(full_path) & FILE_ATTRIBUTE_REPARSE_POINT); #else #ifdef HAVE_DIRENT_DTYPE /* Some filesystems, e.g. ReiserFS, always return a type DT_UNKNOWN from readdir or scandir. */ /* Call lstat if we find DT_UNKNOWN to get the information we need. */ if (d->d_type != DT_UNKNOWN) { return (d->d_type == DT_LNK); } #endif char *full_path; struct stat s; ag_asprintf(&full_path, "%s/%s", path, d->d_name); if (lstat(full_path, &s) != 0) { free(full_path); return FALSE; } free(full_path); return S_ISLNK(s.st_mode); #endif } int is_named_pipe(const char *path, const struct dirent *d) { #ifdef HAVE_DIRENT_DTYPE if (d->d_type != DT_UNKNOWN) { return d->d_type == DT_FIFO || d->d_type == DT_SOCK; } #endif char *full_path; struct stat s; ag_asprintf(&full_path, "%s/%s", path, d->d_name); if (stat(full_path, &s) != 0) { free(full_path); return FALSE; } free(full_path); return S_ISFIFO(s.st_mode) #ifdef S_ISSOCK || S_ISSOCK(s.st_mode) #endif ; } void ag_asprintf(char **ret, const char *fmt, ...) { va_list args; va_start(args, fmt); if (vasprintf(ret, fmt, args) == -1) { die("vasprintf returned -1"); } va_end(args); } void die(const char *fmt, ...) 
{ va_list args; va_start(args, fmt); vplog(LOG_LEVEL_ERR, fmt, args); va_end(args); exit(2); } #ifndef HAVE_FGETLN char *fgetln(FILE *fp, size_t *lenp) { char *buf = NULL; int c, used = 0, len = 0; flockfile(fp); while ((c = getc_unlocked(fp)) != EOF) { if (!buf || len >= used) { size_t nsize; char *newbuf; nsize = used + BUFSIZ; if (!(newbuf = realloc(buf, nsize))) { funlockfile(fp); if (buf) free(buf); return NULL; } buf = newbuf; used = nsize; } buf[len++] = c; if (c == '\n') { break; } } funlockfile(fp); *lenp = len; return buf; } #endif #ifndef HAVE_GETLINE /* * Do it yourself getline() implementation */ ssize_t getline(char **lineptr, size_t *n, FILE *stream) { size_t len = 0; char *srcln = NULL; char *newlnptr = NULL; /* get line, bail on error */ if (!(srcln = fgetln(stream, &len))) { return -1; } if (len >= *n) { /* line is too big for buffer, must realloc */ /* double the buffer, bail on error */ if (!(newlnptr = realloc(*lineptr, len * 2))) { return -1; } *lineptr = newlnptr; *n = len * 2; } memcpy(*lineptr, srcln, len); #ifndef HAVE_FGETLN /* Our own implementation of fgetln() returns a malloc()d buffer that we * must free */ free(srcln); #endif (*lineptr)[len] = '\0'; return len; } #endif ssize_t buf_getline(const char **line, const char *buf, const size_t buf_len, const size_t buf_offset) { const char *cur = buf + buf_offset; ssize_t i; for (i = 0; cur[i] != '\n' && (buf_offset + i < buf_len); i++) { } *line = cur; return i; } #ifndef HAVE_REALPATH /* * realpath() for Windows. Turns slashes into backslashes and calls _fullpath */ char *realpath(const char *path, char *resolved_path) { char *p; char tmp[_MAX_PATH + 1]; strlcpy(tmp, path, sizeof(tmp)); p = tmp; while (*p) { if (*p == '/') { *p = '\\'; } p++; } return _fullpath(resolved_path, tmp, _MAX_PATH); } #endif #ifndef HAVE_STRLCPY size_t strlcpy(char *dst, const char *src, size_t size) { char *d = dst; const char *s = src; size_t n = size; /* Copy as many bytes as will fit */ if (n != 0) { while (--n != 0) { if ((*d++ = *s++) == '\0') { break; } } } /* Not enough room in dst, add NUL and traverse rest of src */ if (n == 0) { if (size != 0) { *d = '\0'; /* NUL-terminate dst */ } while (*s++) { } } return (s - src - 1); /* count does not include NUL */ } #endif #ifndef HAVE_VASPRINTF int vasprintf(char **ret, const char *fmt, va_list args) { int rv; *ret = NULL; va_list args2; /* vsnprintf can destroy args, so we need to copy it for the second call */ #ifdef __va_copy /* non-standard macro, but usually exists */ __va_copy(args2, args); #elif va_copy /* C99 macro. We compile with -std=c89 but you never know */ va_copy(args2, args); #else /* Ancient compiler. This usually works but there are no guarantees. 
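 * (Editor's note, not in the upstream source: this last-resort branch copies
 * the va_list object bytewise with memcpy and therefore assumes va_list is a
 * plain value type on the target ABI; the __va_copy / va_copy branches above
 * are tried first precisely because that assumption is not guaranteed.)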
*/ memcpy(args2, args, sizeof(va_list)); #endif rv = vsnprintf(NULL, 0, fmt, args); va_end(args); if (rv < 0) { return rv; } *ret = malloc(++rv); /* vsnprintf doesn't count \0 */ if (*ret == NULL) { return -1; } rv = vsnprintf(*ret, rv, fmt, args2); va_end(args2); if (rv < 0) { free(*ret); } return rv; } #endif the_silver_searcher-2.1.0/src/util.h000066400000000000000000000067711314774034700174740ustar00rootroot00000000000000#ifndef UTIL_H #define UTIL_H #include #include #include #include #include #include #include "config.h" #include "log.h" #include "options.h" FILE *out_fd; #ifndef TRUE #define TRUE 1 #endif #ifndef FALSE #define FALSE 0 #endif #define H_SIZE (64 * 1024) #ifdef __clang__ #define NO_SANITIZE_ALIGNMENT __attribute__((no_sanitize("alignment"))) #else #define NO_SANITIZE_ALIGNMENT #endif void *ag_malloc(size_t size); void *ag_realloc(void *ptr, size_t size); void *ag_calloc(size_t nelem, size_t elsize); char *ag_strdup(const char *s); char *ag_strndup(const char *s, size_t size); typedef struct { size_t start; /* Byte at which the match starts */ size_t end; /* and where it ends */ } match_t; typedef struct { long total_bytes; long total_files; long total_matches; long total_file_matches; struct timeval time_start; struct timeval time_end; } ag_stats; ag_stats stats; /* Union to translate between chars and words without violating strict aliasing */ typedef union { char as_chars[sizeof(uint16_t)]; uint16_t as_word; } word_t; void free_strings(char **strs, const size_t strs_len); void generate_alpha_skip(const char *find, size_t f_len, size_t skip_lookup[], const int case_sensitive); int is_prefix(const char *s, const size_t s_len, const size_t pos, const int case_sensitive); size_t suffix_len(const char *s, const size_t s_len, const size_t pos, const int case_sensitive); void generate_find_skip(const char *find, const size_t f_len, size_t **skip_lookup, const int case_sensitive); void generate_hash(const char *find, const size_t f_len, uint8_t *H, const int case_sensitive); /* max is already defined on spec-violating compilers such as MinGW */ size_t ag_max(size_t a, size_t b); const char *boyer_moore_strnstr(const char *s, const char *find, const size_t s_len, const size_t f_len, const size_t alpha_skip_lookup[], const size_t *find_skip_lookup, const int case_insensitive); const char *hash_strnstr(const char *s, const char *find, const size_t s_len, const size_t f_len, uint8_t *h_table, const int case_sensitive); size_t invert_matches(const char *buf, const size_t buf_len, match_t matches[], size_t matches_len); void realloc_matches(match_t **matches, size_t *matches_size, size_t matches_len); void compile_study(pcre **re, pcre_extra **re_extra, char *q, const int pcre_opts, const int study_opts); int is_binary(const void *buf, const size_t buf_len); int is_regex(const char *query); int is_fnmatch(const char *filename); int binary_search(const char *needle, char **haystack, int start, int end); void init_wordchar_table(void); int is_wordchar(char ch); int is_lowercase(const char *s); int is_directory(const char *path, const struct dirent *d); int is_symlink(const char *path, const struct dirent *d); int is_named_pipe(const char *path, const struct dirent *d); void die(const char *fmt, ...); void ag_asprintf(char **ret, const char *fmt, ...); ssize_t buf_getline(const char **line, const char *buf, const size_t buf_len, const size_t buf_offset); #ifndef HAVE_FGETLN char *fgetln(FILE *fp, size_t *lenp); #endif #ifndef HAVE_GETLINE ssize_t getline(char **lineptr, size_t *n, 
FILE *stream); #endif #ifndef HAVE_REALPATH char *realpath(const char *path, char *resolved_path); #endif #ifndef HAVE_STRLCPY size_t strlcpy(char *dest, const char *src, size_t size); #endif #ifndef HAVE_VASPRINTF int vasprintf(char **ret, const char *fmt, va_list args); #endif #endif the_silver_searcher-2.1.0/src/win32/000077500000000000000000000000001314774034700172755ustar00rootroot00000000000000the_silver_searcher-2.1.0/src/win32/config.h000066400000000000000000000000531314774034700207110ustar00rootroot00000000000000#define HAVE_LZMA_H #define HAVE_PTHREAD_H the_silver_searcher-2.1.0/src/zfile.c000066400000000000000000000244051314774034700176150ustar00rootroot00000000000000#ifdef __FreeBSD__ #include #endif #include #ifdef __CYGWIN__ typedef _off64_t off64_t; #endif #include #include #include #include #include #include #include #include #include #include "config.h" #ifdef HAVE_ERR_H #include #endif #ifdef HAVE_ZLIB_H #include #endif #ifdef HAVE_LZMA_H #include #endif #include "decompress.h" #if HAVE_FOPENCOOKIE #define min(a, b) ({ \ __typeof (a) _a = (a); \ __typeof (b) _b = (b); \ _a < _b ? _a : _b; }) static cookie_read_function_t zfile_read; static cookie_seek_function_t zfile_seek; static cookie_close_function_t zfile_close; static const cookie_io_functions_t zfile_io = { .read = zfile_read, .write = NULL, .seek = zfile_seek, .close = zfile_close, }; #define KB (1024) struct zfile { FILE *in; // Source FILE stream uint64_t logic_offset, // Logical offset in output (forward seeks) decode_offset, // Where we've decoded to actual_len; uint32_t outbuf_start; ag_compression_type ctype; union { z_stream gz; lzma_stream lzma; } stream; uint8_t inbuf[32 * KB]; uint8_t outbuf[256 * KB]; bool eof; }; #define CAVAIL_IN(c) ((c)->ctype == AG_GZIP ? (c)->stream.gz.avail_in : (c)->stream.lzma.avail_in) #define CNEXT_OUT(c) ((c)->ctype == AG_GZIP ? (c)->stream.gz.next_out : (c)->stream.lzma.next_out) static int zfile_cookie_init(struct zfile *cookie) { #ifdef HAVE_LZMA_H lzma_ret lzrc; #endif int rc; assert(cookie->logic_offset == 0); assert(cookie->decode_offset == 0); cookie->actual_len = 0; switch (cookie->ctype) { #ifdef HAVE_ZLIB_H case AG_GZIP: memset(&cookie->stream.gz, 0, sizeof cookie->stream.gz); rc = inflateInit2(&cookie->stream.gz, 32 + 15); if (rc != Z_OK) { log_err("Unable to initialize zlib: %s", zError(rc)); return EIO; } cookie->stream.gz.next_in = NULL; cookie->stream.gz.avail_in = 0; cookie->stream.gz.next_out = cookie->outbuf; cookie->stream.gz.avail_out = sizeof cookie->outbuf; break; #endif #ifdef HAVE_LZMA_H case AG_XZ: cookie->stream.lzma = (lzma_stream)LZMA_STREAM_INIT; lzrc = lzma_auto_decoder(&cookie->stream.lzma, -1, 0); if (lzrc != LZMA_OK) { log_err("Unable to initialize lzma_auto_decoder: %d", lzrc); return EIO; } cookie->stream.lzma.next_in = NULL; cookie->stream.lzma.avail_in = 0; cookie->stream.lzma.next_out = cookie->outbuf; cookie->stream.lzma.avail_out = sizeof cookie->outbuf; break; #endif default: log_err("Unsupported compression type: %d", cookie->ctype); return EINVAL; } cookie->outbuf_start = 0; cookie->eof = false; return 0; } static void zfile_cookie_cleanup(struct zfile *cookie) { switch (cookie->ctype) { #ifdef HAVE_ZLIB_H case AG_GZIP: inflateEnd(&cookie->stream.gz); break; #endif #ifdef HAVE_LZMA_H case AG_XZ: lzma_end(&cookie->stream.lzma); break; #endif default: /* Compiler false positive - unreachable. */ break; } } /* * Open compressed file 'path' as a (forward-)seekable (and rewindable), * read-only stream. 
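 * (Illustrative usage sketched by the editor, not taken from the original
 * source; a caller that has already detected the compression type might do
 *
 *     int fd = open(path, O_RDONLY);
 *     FILE *f = decompress_open(fd, "r", AG_GZIP);
 *     if (f != NULL) {
 *         ... read decompressed bytes with fread() or getline() ...
 *         fclose(f);   // runs zfile_close(), which also closes fd
 *     }
 *
 * Opening with "w" or "a" in the mode string is rejected with EINVAL, and
 * only forward seeks or a rewind back to offset 0 are supported by
 * zfile_seek() below.)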
*/ FILE * decompress_open(int fd, const char *mode, ag_compression_type ctype) { struct zfile *cookie; FILE *res, *in; int error; cookie = NULL; in = res = NULL; if (strstr(mode, "w") || strstr(mode, "a")) { errno = EINVAL; goto out; } in = fdopen(fd, mode); if (in == NULL) goto out; /* * No validation of compression type is done -- file is assumed to * match input. In Ag, the compression type is already detected, so * that's ok. */ cookie = malloc(sizeof *cookie); if (cookie == NULL) { errno = ENOMEM; goto out; } cookie->in = in; cookie->logic_offset = 0; cookie->decode_offset = 0; cookie->ctype = ctype; error = zfile_cookie_init(cookie); if (error != 0) { errno = error; goto out; } res = fopencookie(cookie, mode, zfile_io); out: if (res == NULL) { if (in != NULL) fclose(in); if (cookie != NULL) free(cookie); } return res; } /* * Return number of bytes into buf, 0 on EOF, -1 on error. Update stream * offset. */ static ssize_t zfile_read(void *cookie_, char *buf, size_t size) { struct zfile *cookie = cookie_; size_t nb, ignorebytes; ssize_t total = 0; lzma_ret lzret; int ret; assert(size <= SSIZE_MAX); if (size == 0) return 0; if (cookie->eof) return 0; ret = Z_OK; lzret = LZMA_OK; ignorebytes = cookie->logic_offset - cookie->decode_offset; assert(ignorebytes == 0); do { size_t inflated; /* Drain output buffer first */ while (CNEXT_OUT(cookie) > &cookie->outbuf[cookie->outbuf_start]) { size_t left = CNEXT_OUT(cookie) - &cookie->outbuf[cookie->outbuf_start]; size_t ignoreskip = min(ignorebytes, left); size_t toread; if (ignoreskip > 0) { ignorebytes -= ignoreskip; left -= ignoreskip; cookie->outbuf_start += ignoreskip; cookie->decode_offset += ignoreskip; } // Ran out of output before we seek()ed up. if (ignorebytes > 0) break; toread = min(left, size); memcpy(buf, &cookie->outbuf[cookie->outbuf_start], toread); buf += toread; size -= toread; left -= toread; cookie->outbuf_start += toread; cookie->decode_offset += toread; cookie->logic_offset += toread; total += toread; if (size == 0) break; } if (size == 0) break; /* * If we have not satisfied read, the output buffer must be * empty. 
*/ assert(cookie->stream.gz.next_out == &cookie->outbuf[cookie->outbuf_start]); if ((cookie->ctype == AG_XZ && lzret == LZMA_STREAM_END) || (cookie->ctype == AG_GZIP && ret == Z_STREAM_END)) { cookie->eof = true; break; } /* Read more input if empty */ if (CAVAIL_IN(cookie) == 0) { nb = fread(cookie->inbuf, 1, sizeof cookie->inbuf, cookie->in); if (ferror(cookie->in)) { warn("error read core"); exit(1); } if (nb == 0 && feof(cookie->in)) { warn("truncated file"); exit(1); } if (cookie->ctype == AG_XZ) { cookie->stream.lzma.avail_in = nb; cookie->stream.lzma.next_in = cookie->inbuf; } else { cookie->stream.gz.avail_in = nb; cookie->stream.gz.next_in = cookie->inbuf; } } /* Reset stream state to beginning of output buffer */ if (cookie->ctype == AG_XZ) { cookie->stream.lzma.next_out = cookie->outbuf; cookie->stream.lzma.avail_out = sizeof cookie->outbuf; } else { cookie->stream.gz.next_out = cookie->outbuf; cookie->stream.gz.avail_out = sizeof cookie->outbuf; } cookie->outbuf_start = 0; if (cookie->ctype == AG_GZIP) { ret = inflate(&cookie->stream.gz, Z_NO_FLUSH); if (ret != Z_OK && ret != Z_STREAM_END) { log_err("Found mem/data error while decompressing zlib stream: %s", zError(ret)); return -1; } } else { lzret = lzma_code(&cookie->stream.lzma, LZMA_RUN); if (lzret != LZMA_OK && lzret != LZMA_STREAM_END) { log_err("Found mem/data error while decompressing xz/lzma stream: %d", lzret); return -1; } } inflated = CNEXT_OUT(cookie) - &cookie->outbuf[0]; cookie->actual_len += inflated; } while (!ferror(cookie->in) && size > 0); assert(total <= SSIZE_MAX); return total; } static int zfile_seek(void *cookie_, off64_t *offset_, int whence) { struct zfile *cookie = cookie_; off64_t new_offset = 0, offset = *offset_; if (whence == SEEK_SET) { new_offset = offset; } else if (whence == SEEK_CUR) { new_offset = (off64_t)cookie->logic_offset + offset; } else { /* SEEK_END not ok */ return -1; } if (new_offset < 0) return -1; /* Backward seeks to anywhere but 0 are not ok */ if (new_offset < (off64_t)cookie->logic_offset && new_offset != 0) { return -1; } if (new_offset == 0) { /* rewind(3) */ cookie->decode_offset = 0; cookie->logic_offset = 0; zfile_cookie_cleanup(cookie); zfile_cookie_init(cookie); } else if ((uint64_t)new_offset > cookie->logic_offset) { /* Emulate forward seek by skipping ... */ char *buf; const size_t bsz = 32 * 1024; buf = malloc(bsz); while ((uint64_t)new_offset > cookie->logic_offset) { size_t diff = min(bsz, (uint64_t)new_offset - cookie->logic_offset); ssize_t err = zfile_read(cookie_, buf, diff); if (err < 0) { free(buf); return -1; } /* Seek past EOF gets positioned at EOF */ if (err == 0) { assert(cookie->eof); new_offset = cookie->logic_offset; break; } } free(buf); } assert(cookie->logic_offset == (uint64_t)new_offset); *offset_ = new_offset; return 0; } static int zfile_close(void *cookie_) { struct zfile *cookie = cookie_; zfile_cookie_cleanup(cookie); fclose(cookie->in); free(cookie); return 0; } #endif /* HAVE_FOPENCOOKIE */ the_silver_searcher-2.1.0/tests/000077500000000000000000000000001314774034700167065ustar00rootroot00000000000000the_silver_searcher-2.1.0/tests/adjacent_matches.t000066400000000000000000000004651314774034700223550ustar00rootroot00000000000000Setup: $ . 
$TESTDIR/setup.sh $ alias ag="$TESTDIR/../ag --noaffinity --workers=1 --parallel --color" $ printf 'blahfoofooblah\n' > ./fooblah.txt Highlights are adjacent: $ ag --no-numbers foo \x1b[1;32mfooblah.txt\x1b[0m\x1b[K:blah\x1b[30;43mfoo\x1b[0m\x1b[K\x1b[30;43mfoo\x1b[0m\x1b[Kblah (esc) the_silver_searcher-2.1.0/tests/bad_path.t000066400000000000000000000003151314774034700206340ustar00rootroot00000000000000Setup: $ . $TESTDIR/setup.sh Complain about nonexistent path: $ ag foo doesnt_exist ERR: Error stat()ing: doesnt_exist ERR: Error opening directory doesnt_exist: No such file or directory [1] the_silver_searcher-2.1.0/tests/big/000077500000000000000000000000001314774034700174475ustar00rootroot00000000000000the_silver_searcher-2.1.0/tests/big/big_file.t000066400000000000000000000012331314774034700213730ustar00rootroot00000000000000Setup and create really big file: $ . $TESTDIR/../setup.sh $ python3 $TESTDIR/create_big_file.py $TESTDIR/big_file.txt Search a big file: $ $TESTDIR/../../ag --nocolor --workers=1 --parallel hello $TESTDIR/big_file.txt 33554432:hello1073741824 67108864:hello2147483648 100663296:hello3221225472 134217728:hello4294967296 167772160:hello5368709120 201326592:hello6442450944 234881024:hello7516192768 268435456:hello Fail to regex search a big file: $ $TESTDIR/../../ag --nocolor --workers=1 --parallel 'hello.*' $TESTDIR/big_file.txt ERR: Skipping */big_file.txt: pcre_exec() can't handle files larger than 2147483647 bytes. (glob) [1] the_silver_searcher-2.1.0/tests/big/create_big_file.py000077500000000000000000000011761314774034700231140ustar00rootroot00000000000000#!/usr/bin/env python # Create an 8GB file of mostly "abcdefghijklmnopqrstuvwxyz01234", # with a few instances of "hello" import sys if len(sys.argv) != 2: print("Usage: %s big_file.txt" % sys.argv[0]) sys.exit(1) big_file = sys.argv[1] def create_big_file(): with open(big_file, "w") as fd: for i in range(1, 2**28): byte = i * 32 if byte % 2**30 == 0: fd.write("hello%s\n" % byte) else: fd.write("abcdefghijklmnopqrstuvwxyz01234\n") fd.write("hello\n") try: fd = open(big_file, "r") except Exception as e: create_big_file() the_silver_searcher-2.1.0/tests/case_sensitivity.t000066400000000000000000000011341314774034700224570ustar00rootroot00000000000000Setup: $ . $TESTDIR/setup.sh $ printf 'Foo\n' >> ./sample $ printf 'bar\n' >> ./sample Smart case by default: $ ag foo sample 1:Foo $ ag FOO sample [1] $ ag 'f.o' sample 1:Foo $ ag Foo sample 1:Foo $ ag 'F.o' sample 1:Foo Case sensitive mode: $ ag -s foo sample [1] $ ag -s FOO sample [1] $ ag -s 'f.o' sample [1] $ ag -s Foo sample 1:Foo $ ag -s 'F.o' sample 1:Foo Case insensitive mode: $ ag fOO -i sample 1:Foo $ ag fOO --ignore-case sample 1:Foo $ ag 'f.o' -i sample 1:Foo Case insensitive file regex $ ag -i -g 'Samp.*' sample the_silver_searcher-2.1.0/tests/color.t000066400000000000000000000011361314774034700202120ustar00rootroot00000000000000Setup. Note that we have to turn --color on manually since ag detects that stdout isn't a tty when running in cram. $ . 
$TESTDIR/setup.sh $ alias ag="$TESTDIR/../ag --noaffinity --workers=1 --parallel --color" $ printf 'foo\n' > ./blah.txt $ printf 'bar\n' >> ./blah.txt Matches should contain colors: $ ag --no-numbers foo blah.txt \x1b[30;43mfoo\x1b[0m\x1b[K (esc) --nocolor should suppress colors: $ ag --nocolor foo blah.txt 1:foo --invert-match should suppress colors: $ ag --invert-match foo blah.txt 2:bar -v is the same as --invert-match $ ag -v foo blah.txt 2:bar the_silver_searcher-2.1.0/tests/column.t000066400000000000000000000005261314774034700203730ustar00rootroot00000000000000Setup: $ . $TESTDIR/setup.sh $ printf "blah\nblah2\n" > blah.txt Ensure column is correct: $ ag --column "blah\nb" blah.txt:1:1:blah blah.txt:2:0:blah2 # Test ackmate output. Not quite right, but at least offsets are in the # ballpark instead of being 9 quintillion $ ag --ackmate "lah\nb" :blah.txt 1;blah 2;1 5:blah2 the_silver_searcher-2.1.0/tests/count.t000066400000000000000000000010451314774034700202230ustar00rootroot00000000000000Setup: $ . $TESTDIR/setup.sh $ unalias ag $ alias ag="$TESTDIR/../ag --noaffinity --nocolor --workers=1" $ printf "blah\n" > blah.txt $ printf "blah2\n" >> blah.txt $ printf "blah_OTHER\n" > other_file.txt $ printf "blah_OTHER\n" >> other_file.txt Count matches: $ ag --count --parallel blah | sort blah.txt:2 other_file.txt:2 Count stream matches: $ printf 'blah blah blah\n' | ag --count blah 3 Count stream matches per line (not very useful since it does not print zero): $ cat blah.txt | ag --count blah 1 1 the_silver_searcher-2.1.0/tests/ds_store_ignore.t000066400000000000000000000003651314774034700222640ustar00rootroot00000000000000Setup. $ . $TESTDIR/setup.sh $ mkdir -p dir0/dir1/dir2 $ printf '*.DS_Store\n' > dir0/.ignore $ printf 'blah\n' > dir0/dir1/dir2/blah.txt $ touch dir0/dir1/.DS_Store Find blah in blah.txt $ ag blah dir0/dir1/dir2/blah.txt:1:blah the_silver_searcher-2.1.0/tests/empty_match.t000066400000000000000000000010331314774034700214020ustar00rootroot00000000000000Setup. $ . $TESTDIR/setup.sh $ touch empty.txt $ printf 'foo\n' > nonempty.txt Zero-length match on an empty file should fail silently with return code 1 $ ag "^" empty.txt [1] A genuine zero-length match should succeed: $ ag "^" nonempty.txt 1:foo Empty files should be listed with --unrestricted --files-with-matches (-ul) $ ag -lu --stats | sed '$d' | sort # Remove the last line about timing which will differ 2 files contained matches 2 files searched 2 matches 4 bytes searched empty.txt nonempty.txt the_silver_searcher-2.1.0/tests/exitcodes.t000066400000000000000000000005561314774034700210700ustar00rootroot00000000000000Setup: $ . $TESTDIR/setup.sh $ printf 'foo\n' > ./exitcodes_test.txt $ printf 'bar\n' >> ./exitcodes_test.txt Normal matching: $ ag foo exitcodes_test.txt 1:foo $ ag zoo exitcodes_test.txt [1] Inverted matching: $ ag -v foo exitcodes_test.txt 2:bar $ ag -v zoo exitcodes_test.txt 1:foo 2:bar $ ag -v "foo|bar" exitcodes_test.txt [1] the_silver_searcher-2.1.0/tests/fail/000077500000000000000000000000001314774034700176215ustar00rootroot00000000000000the_silver_searcher-2.1.0/tests/fail/unicode_case_insensitive.t000066400000000000000000000006051314774034700250500ustar00rootroot00000000000000Setup: $ . 
$TESTDIR/../setup.sh $ printf "hello=你好\n" > test.txt $ printf "hello=你好\n" >> test.txt Normal search: $ $TESTDIR/../../ag --nocolor --workers=1 --parallel 你好 test.txt:1:hello=你好 test.txt:2:hello=你好 Case-insensitive search: $ $TESTDIR/../../ag --nocolor --workers=1 --parallel -i 你好 test.txt:1:hello=你好 test.txt:2:hello=你好 the_silver_searcher-2.1.0/tests/fail/unicode_case_insensitive.t.err000066400000000000000000000007451314774034700256440ustar00rootroot00000000000000Setup: $ . $TESTDIR/../setup.sh $ printf "hello=你好\n" > test.txt $ printf "hello=你好\n" >> test.txt Normal search: $ $TESTDIR/../../ag --nocolor --workers=1 --parallel 你好 test.txt:1:hello=\xe4\xbd\xa0\xe5\xa5\xbd (esc) test.txt:2:hello=\xe4\xbd\xa0\xe5\xa5\xbd (esc) Case-insensitive search: $ $TESTDIR/../../ag --nocolor --workers=1 --parallel -i 你好 test.txt:1:hello=\xe4\xbd\xa0\xe5\xa5\xbd (esc) test.txt:2:hello=\xe4\xbd\xa0\xe5\xa5\xbd (esc) the_silver_searcher-2.1.0/tests/files_with_matches.t000066400000000000000000000007121314774034700227340ustar00rootroot00000000000000Setup: $ . $TESTDIR/setup.sh $ printf 'foo\n' > ./foo.txt $ printf 'bar\n' > ./bar.txt Files with matches: $ ag --files-with-matches foo foo.txt foo.txt $ ag --files-with-matches foo foo.txt bar.txt foo.txt $ ag --files-with-matches foo bar.txt [1] Files without matches: $ ag --files-without-matches bar foo.txt foo.txt $ ag --files-without-matches bar foo.txt bar.txt foo.txt $ ag --files-without-matches bar bar.txt [1] the_silver_searcher-2.1.0/tests/filetype.t000066400000000000000000000013171314774034700207160ustar00rootroot00000000000000Setup: $ . $TESTDIR/setup.sh $ TEST_FILETYPE_EXT1=`ag --list-file-types | grep -E '^[ \t]+\..+' | head -n 1 | awk '{ print $1 }'` $ TEST_FILETYPE_EXT2=`ag --list-file-types | grep -E '^[ \t]+\..+' | tail -n 1 | awk '{ print $1 }'` $ TEST_FILETYPE_DIR=filetype_test $ mkdir $TEST_FILETYPE_DIR $ printf "This is filetype test1.\n" > $TEST_FILETYPE_DIR/test.$TEST_FILETYPE_EXT1 $ printf "This is filetype test2.\n" > $TEST_FILETYPE_DIR/test.$TEST_FILETYPE_EXT2 Match only top file type: $ TEST_FILETYPE_OPTION=`ag --list-file-types | grep -E '^[ \t]+--.+' | head -n 1 | awk '{ print $1 }'` $ ag 'This is filetype test' --nofilename $TEST_FILETYPE_OPTION $TEST_FILETYPE_DIR This is filetype test1. the_silver_searcher-2.1.0/tests/hidden_option.t000066400000000000000000000011771314774034700217240ustar00rootroot00000000000000Setup: $ . $TESTDIR/setup.sh $ mkdir hidden_bug $ cd hidden_bug $ printf "test\n" > a.txt $ git init --quiet $ if [ ! -d .git/info ] ; then mkdir .git/info ; fi $ printf "a.txt\n" > .git/info/exclude $ ag --ignore-dir .git test [1] $ ag --hidden --ignore-dir .git test [1] $ ag -U --ignore-dir .git test a.txt:1:test $ ag --hidden -U --ignore-dir .git test a.txt:1:test $ mkdir -p ./.hidden $ printf 'whatever\n' > ./.hidden/a.txt $ ag whatever [1] $ ag --hidden whatever [1] $ printf "\n" > .git/info/exclude $ ag whatever [1] $ ag --hidden whatever .hidden/a.txt:1:whatever the_silver_searcher-2.1.0/tests/ignore_abs_path.t000066400000000000000000000007711314774034700222240ustar00rootroot00000000000000Setup: $ . $TESTDIR/setup.sh $ mkdir -p ./a/b/c $ printf 'whatever1\n' > ./a/b/c/blah.yml $ printf 'whatever2\n' > ./a/b/foo.yml $ printf '/a/b/foo.yml\n' > ./.ignore Ignore foo.yml but not blah.yml: $ ag whatever . a/b/c/blah.yml:1:whatever1 Dont ignore anything (unrestricted search): $ ag -u whatever . 
| sort a/b/c/blah.yml:1:whatever1 a/b/foo.yml:1:whatever2 Ignore foo.yml given an absolute search path [#448]: $ ag whatever $(pwd) /.*/a/b/c/blah.yml:1:whatever1 (re) the_silver_searcher-2.1.0/tests/ignore_absolute_search_path_with_glob.t000066400000000000000000000005031314774034700266510ustar00rootroot00000000000000Setup: $ . $TESTDIR/setup.sh $ mkdir -p parent/multi-part $ printf 'match1\n' > parent/multi-part/file1.txt $ printf 'parent/multi-*\n' > .ignore # Ignore directory specified by glob: # $ ag match . # [1] # Ignore directory specified by glob with absolute search path (#448): # $ ag match $(pwd) # [1] the_silver_searcher-2.1.0/tests/ignore_backups.t000066400000000000000000000027531314774034700220750ustar00rootroot00000000000000Setup: $ . $TESTDIR/setup.sh $ mkdir -p ./a/b/c $ printf 'whatever1\n' > ./a/b/c/foo.yml $ printf 'whatever2\n' > ./a/b/c/foo.yml~ $ printf 'whatever3\n' > ./a/b/c/.foo.yml.swp $ printf 'whatever4\n' > ./a/b/c/.foo.yml.swo $ printf 'whatever5\n' > ./a/b/foo.yml $ printf 'whatever6\n' > ./a/b/foo.yml~ $ printf 'whatever7\n' > ./a/b/.foo.yml.swp $ printf 'whatever8\n' > ./a/b/.foo.yml.swo $ printf 'whatever9\n' > ./a/foo.yml $ printf 'whatever10\n' > ./a/foo.yml~ $ printf 'whatever11\n' > ./a/.foo.yml.swp $ printf 'whatever12\n' > ./a/.foo.yml.swo $ printf 'whatever13\n' > ./foo.yml $ printf 'whatever14\n' > ./foo.yml~ $ printf 'whatever15\n' > ./.foo.yml.swp $ printf 'whatever16\n' > ./.foo.yml.swo $ printf '*~\n' > ./.ignore $ printf '*.sw[po]\n' >> ./.ignore Ignore all files except foo.yml $ ag whatever . | sort a/b/c/foo.yml:1:whatever1 a/b/foo.yml:1:whatever5 a/foo.yml:1:whatever9 foo.yml:1:whatever13 Dont ignore anything (unrestricted search): $ ag -u whatever . | sort .foo.yml.swo:1:whatever16 .foo.yml.swp:1:whatever15 a/.foo.yml.swo:1:whatever12 a/.foo.yml.swp:1:whatever11 a/b/.foo.yml.swo:1:whatever8 a/b/.foo.yml.swp:1:whatever7 a/b/c/.foo.yml.swo:1:whatever4 a/b/c/.foo.yml.swp:1:whatever3 a/b/c/foo.yml:1:whatever1 a/b/c/foo.yml~:1:whatever2 a/b/foo.yml:1:whatever5 a/b/foo.yml~:1:whatever6 a/foo.yml:1:whatever9 a/foo.yml~:1:whatever10 foo.yml:1:whatever13 foo.yml~:1:whatever14 the_silver_searcher-2.1.0/tests/ignore_examine_parent_ignorefiles.t000066400000000000000000000006611314774034700260260ustar00rootroot00000000000000Setup: $ . $TESTDIR/setup.sh $ mkdir -p subdir $ printf 'match1\n' > subdir/file1.txt $ printf 'file1.txt\n' > .ignore Ignore directory specified by name: $ ag match [1] # Ignore directory specified by name in parent directory when using path (#144): # $ ag match subdir # [1] # Ignore directory specified by name in parent directory when using current directory (#144): # $ cd subdir # $ ag match # [1] the_silver_searcher-2.1.0/tests/ignore_extensions.t000066400000000000000000000022121314774034700226320ustar00rootroot00000000000000Setup: $ . 
$TESTDIR/setup.sh $ printf '*.js\n' > .ignore $ printf '*.test.txt\n' >> .ignore $ printf 'targetA\n' > something.js $ printf 'targetB\n' > aFile.test.txt $ printf 'targetC\n' > aFile.txt $ printf 'targetG\n' > something.min.js $ mkdir -p subdir $ printf 'targetD\n' > subdir/somethingElse.js $ printf 'targetE\n' > subdir/anotherFile.test.txt $ printf 'targetF\n' > subdir/anotherFile.txt $ printf 'targetH\n' > subdir/somethingElse.min.js Ignore patterns with single extension in root directory: $ ag "targetA" [1] Ignore patterns with multiple extensions in root directory: $ ag "targetB" [1] *.js ignores *.min.js in root directory: $ ag "targetG" [1] Do not ignore patterns with partial extensions in root directory: $ ag "targetC" aFile.txt:1:targetC Ignore patterns with single extension in subdirectory: $ ag "targetD" [1] Ignore patterns with multiple extensions in subdirectory: $ ag "targetE" [1] *.js ignores *.min.js in subdirectory: $ ag "targetH" [1] Do not ignore patterns with partial extensions in subdirectory: $ ag "targetF" subdir/anotherFile.txt:1:targetF the_silver_searcher-2.1.0/tests/ignore_gitignore.t000066400000000000000000000005061314774034700224260ustar00rootroot00000000000000Setup: $ . $TESTDIR/setup.sh $ export HOME=$PWD $ printf '[core]\nexcludesfile = ~/.gitignore.global' >> $HOME/.gitconfig $ printf 'PATTERN_MARKER\n' > .gitignore.global Test that the ignore pattern got picked up: $ ag --debug . | grep PATTERN_MARKER DEBUG: added ignore pattern PATTERN_MARKER to root ignores the_silver_searcher-2.1.0/tests/ignore_invert.t000066400000000000000000000004051314774034700217440ustar00rootroot00000000000000Setup: $ . $TESTDIR/setup.sh $ printf 'blah1\n' > ./printme.txt $ printf 'blah2\n' > ./dontprintme.c $ printf '*\n' > ./.ignore $ printf '!*.txt\n' >> ./.ignore Ignore .gitignore patterns but not .ignore patterns: $ ag blah printme.txt:1:blah1 the_silver_searcher-2.1.0/tests/ignore_pattern_in_subdirectory.t000066400000000000000000000004531314774034700254010ustar00rootroot00000000000000Setup: $ . $TESTDIR/setup.sh $ mkdir subdir $ printf 'first\n' > file1.txt $ printf 'second\n' > subdir/file2.txt $ printf '*.txt\n' > .gitignore Ignore file based on extension match: $ ag first [1] Ignore file in subdirectory based on extension match (#442): $ ag second [1] the_silver_searcher-2.1.0/tests/ignore_subdir.t000066400000000000000000000007331314774034700217310ustar00rootroot00000000000000Setup: $ . $TESTDIR/setup.sh $ mkdir -p ./a/b/c $ printf 'whatever1\n' > ./a/b/c/blah.yml $ printf 'whatever2\n' > ./a/b/foo.yml $ printf 'a/b/foo.yml\n' > ./.gitignore # TODO: have this work instead of the above # $ printf 'a/b/*.yml\n' > ./.gitignore Ignore foo.yml but not blah.yml: $ ag whatever . a/b/c/blah.yml:1:whatever1 Dont ignore anything (unrestricted search): $ ag -u whatever . | sort a/b/c/blah.yml:1:whatever1 a/b/foo.yml:1:whatever2 the_silver_searcher-2.1.0/tests/ignore_vcs.t000066400000000000000000000006601314774034700212330ustar00rootroot00000000000000Setup: $ . $TESTDIR/setup.sh $ printf 'whatever1\n' > ./always.txt $ printf 'whatever2\n' > ./git.txt $ printf 'whatever3\n' > ./text.txt $ printf 'git.txt\n' > ./.gitignore $ printf 'text.*\n' > ./.ignore Obey .gitignore and .ignore patterns: $ ag whatever . always.txt:1:whatever1 Ignore .gitignore patterns but not .ignore patterns: $ ag -U whatever . | sort always.txt:1:whatever1 git.txt:1:whatever2 the_silver_searcher-2.1.0/tests/invert_match.t000066400000000000000000000006741314774034700215650ustar00rootroot00000000000000Setup: $ . 
$TESTDIR/setup.sh $ printf 'valid: 1\n' > ./blah.txt $ printf 'some_string\n' >> ./blah.txt $ printf 'valid: 654\n' >> ./blah.txt $ printf 'some_other_string\n' >> ./blah.txt $ printf 'valid: 0\n' >> ./blah.txt $ printf 'valid: 23\n' >> ./blah.txt $ printf 'valid: 0\n' >> ./blah.txt Search for lines not matching "valid: 0" in blah.txt: $ ag -v 'valid: ' blah.txt:2:some_string blah.txt:4:some_other_string the_silver_searcher-2.1.0/tests/is_binary.pdf000066400000000000000000002377271314774034700214020ustar00rootroot00000000000000%PDF-1.5 % 1 0 obj <>>> endobj 2 0 obj <> endobj 3 0 obj <>/ExtGState<>/ProcSet[/PDF/Text/ImageB/ImageC/ImageI] >>/MediaBox[ 0 0 612 792] /Contents 4 0 R/Group<>/Tabs/S/StructParents 0>> endobj 4 0 obj <> stream xM= 0E@ CZ يC񣓓Ptp.McǸOE",1 9 *cPhk|(p XR/?Tnͤ旁>#fuxRkB"Q endstream endobj 5 0 obj <> endobj 6 0 obj <> endobj 7 0 obj <> endobj 8 0 obj <> endobj 9 0 obj <> endobj 16 0 obj <> stream xmQ0c.²m٥TD=RPMJMb !7d" flîXQ,kL- sLxsZp 0p6b/ ?sggDR3ґ\c:Gq6ƫ&0zK*Cm'zQ̩4E"=Ne+$ wć%'6Џ׋RWܪlOCqG^j ުK䭨XZy{Q&}w?γ^.MЛ endstream endobj 17 0 obj [ 226 0 0 0 0 0 0 0 0 0 0 0 0 0 252 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 615 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 479 0 423 525 498 0 0 0 0 0 0 0 0 0 0 0 0 0 0 335] endobj 18 0 obj <> stream x} |E̛4G4IJۤ@/ZPJ E)r E<4uuV]Zu]oyRp>|xgsdX.1Z6zBe¸w3Kzl 1vi*/\3YƔe=gʆoIiKF6Ư1WVJ42eD2cO=k[[gXfUnՇ^blE$/j[WŠ B1iq6[)?-0XԽeHoiX[э2escG1cdK.ow c-KZ-mhQgeIwVo۶|eGPkD~[{sw+`WY:Kh`"mɓ&N_YQ^VZ2[{~kTZthOOj4"PkEŨa9^fiұsaSЌjOփ5^#&|jϤ95lV̓R_H)?KCL(XYN9jRM&ǟ=Af I]qOA uF`kVtT{6wEW}OpSCWV^2R婮T:fsxU4',SUu{ysj>ofM@Ji]Imwf̫ZaFphi>/c\jPӍ=6qأ&m lZyUmILږZX?=c_4Ma/b]8#Τ:),5IwO08&94,y1 gDDҿ^jD]}Z,[ Lт1JTurDF &P Zmxi͒Zu9lg$ڌ/ʫ U&BdVAFUאʼn$^VKNқF5VcY$K3DmFHg(aiMH1 GhSؒZ꼚*w&(+CdM}VtU}T4SfxdV[#oIPß`qFBm"^ kgx֦ 9 &sއjN5fdNZTsW+ ^Fw9>5 5>bgKq8SRlbgHNRb蔢CRMR,UR.iR,ER,Y&)h^:)J@R̓bsFR̒'L)!Eӥ&T)H1YIRLb㥨Br)ʤ(DqRx(bc-()FJQ$E#.0) ȗbC,ERH-E()2Ȑ"]Rx'En)\RJ"EN)H"Ax)pH+ERإIa"J f)LRDJa ^ RhHH`!RKqTVoZR/s)>S)>KJKoR+_HqX?KoIoHR&ūRAWxY/JKJ;)i)~+SR<)Rq)Q)JKJKq=R+=RbH-_K;CR.mR*)vKq7Kq7Jq^뤸VkZR*) ).2).7R\"R\$ŅRl)I%R'V)Hq嵇k.=\^{py嵇k.=\^{py嵇K!?\py.?\py.?\py.?\py.=\^{pym.o;\vpy9:օ;s :RgRG6QLT3hAhm ehM hQ'uPj%Q;WRJ@mDˉQVDA-!j!ZL(\jTQ#QQ=QBTo>%CTKTC4hh&Q5 *DӈM!L4hb94h|9TITpNAeDD%7yX1D(ThpaXQ>2h`j,(ee "H4( J6yQiDn"J%J!J&r%I@DqdtŒ1(Ny6"+,Df3E)@'"U - 8SE1J}Ot;R}Mї/ ՠRD}FyR#DSGD#;ޥ_)wSޟ&[DoA:HZ ~6@,^!D/E*d|Y=CE&-"z CDS((AG(a DGC%=D n"?DwIt^q8mʭD{(o7-D7Dt# DƮV%&I*p% 2ʻZ %w1EDm'JnTDm%pԃ 8@m8&:+68pF T}=;h]ZK&ZEIAn+Frjll%ZJt:iDK^ b"LD%-Aϧ#KCMҋjfSwgы|LjDUX/hz VaZ V,敏͠)d*2hb >R*X*nbb7JqD^bh|1ׂF ("* +A#}h٠TrH.68`{3(ADDDDR"ُLԊ(%9m(>`[#rES;UJEd!2SI$@'Q*%H!D6[]ǬM l____OπOO# |xx{bߢZ\a🁷~:']Y^%2-YlZZ]~iZNs=eYzz²uuG{A<2p=hnw=`^#olwֺ2siZô״u;pp+ bq un2:kkF[;ulWWAKőS]ENs]ص={\j]h ]yl&Y{7mmܻgM&m8c olF"ػηַڷfjH9;ڷjoOѩe|p'WXݩ1w}+XMv(vȞ} w}V·׶wo٢Viž} |{| :{;W[X㛍 g|{g |3VNM}J$体| &,c,ٖND&'K;ϜZ;:5$W2КK%g&^&xfWX_so gsiblqSfV\\F0 j`60 3j`PLS)d`0* (J``40  `80 (`09@6 L H ɀH @,DvX( t@S(k1{(- 5%Og'c#C}=߀w^x x 2{Ey9Yw3o''CcAa!A~>8wwww{ہۀ[=nf&F`p=pp-p p5\\ \\\\ \l\4ncsϱ9?csϱ9?qpgqpgqpgqpgqpgqpgqpgqpgcs}ϱ9>{cs__pc;~I=VM[vv){flnc~({8̬X cGz"X.E*F>a ڂbA]4TZ`'?Hڪ\9UlcM-gNgKY+[!o1ZzOZ[lbPJPӝl5cglCZG:5̜Vdlfs1k[yL߫6vBvя'.F2v9]uq5땪}']5#.zUٓQ}G_>lc[1v1H~vB~%7$B Zp'.H.W+?egVSBj1};v?{+av'clTc#-JQINgسRϫ""{^߳<^xEqrװ8Vj^)azVĦl̂868(+3Gܸ ^VܛTwn>/o5Ǟ;袼#%f#ւG2%߇I=YClQ}f߇I_F́*g莘Ɋï﨟n <_ӓ/c`x~ܤ/U<pY|'[߿@۷ݽ"eGoԞ};8gC#. 
[non-text data omitted: the remainder of the tests/is_binary.pdf binary fixture is not representable as text]
,ȸ|4ߝA(W+.!{zG겿\K#דȍNrM!c~y(Gdy.OgE|UΑoO+*{.s #xS8_F"^`z̗j=2s05Nb{}y^Oԇ}͙L=DV W״~u }cRWJ'u쇩R]jǤN`˟$}j]V>uߥzIQOZߧ2KO_| 37'so*;>O'LF3<~2 t"s Bzʛ;?>ώHPSTW,hq`q4l?|'c݋ bf\},~ +ʙP^¨o ahN"70b fL33PTZ631N?c]ʸBݍq;7x@2'IBf!Ә4/bҧ6F,ۄ7Uǃ5>ِoX>zOՃ>X}b?/H}>2k"A/ z3[n^Ϻ}P߯;CTl:5YB#<@eCog嗹iL5)S T`L5S T`A0 JS%T`D0U"*LJS%T`D0U"*LJS%T`D0U"*LJS%T`D0U"*LJS%T`D0U"*LJS%T`D0U"*LJS%T`D0U"*LJS%T`D0U"*LJS%T`D0U"*LJS%T`D0U"_i!և4{^%zu|u0Iٰk^T 42.6>"!A)gM~;'#$!O}u;K[P *׷MeƿtM-Z+*>o#:޾AH}@HpJ O_Aql- 5ZIKҩ$@)/0IW>ո ZqoNa,lr҂?q<|ޙ> if5o7յmQf*-OlY)V"#S&էeGJSucnL)p0WJ%ھJRL0,%IZWlImefu7zCE:Ws>dY)}>\m'*r޴!OoZ>yO Cs+ǜn0`B)[gVrpT>zǣCbdۻ d>ϷMe}8'6<^Ұ]Б̮RzN{?h[;OhMSV^QGU, 18q+\vp츊?x,&)Sc-Z5=}#Vku}GyhQڠkuoOy#=*8䮺}y&;]YiM/I9~3L^01N*I*'mٿibMፗ%nޒs nh%H^űG/E~."?l)joYwеd2dݞ ^u_)-@=DM]UUqM U}#*U}|"ymhVmaYPNi+% =:qg[ a 0X,ɹ~w}]Mzv(AE:akifᮁ=X(9eЎ u菉dXvDrVxjÝCcܿ׭4ەFm%jkJמKM#F'64(x?O+yXaPܘѽ}6_v&.*ɸ1fK̬P2σC.w2/܎8[.'.0IyLW7.?2r{fiMx3 Ts)z'~X2w; E}pUl,8*[L׈hz$$R0S?oUU0n7q!؆e)]-һVK'|ܫx[XoLzRrs`Nj(J[aұ5߼tZĦRkEn xzU#/yң׮yR˳yi9^Q9v0fřx2Ӳ6x^+#JpzLiܴȢG;[lhzr:ŨRUk{՜y2O3.<2l-zl!yCV5yxߢM³,M;nwX]늇RۇNPL}T;,iSq \!s[eJ@zku,v8 ABKX#@+BN0?`ԾZ΁kc3_컢6qj#Z$r]ӳَmލ۹mӇ͍z\w;e9qɹ]qW2٬6Mnz _~r>ڤq4fvb*7ԌcyY?){nF\~}R?ؼDpۓSwWܐصM̖' -iF*kk.$yt+3q3/y o!<{MޙV|cm{V4yfdn:ިTagȦ54Zuc-i;96bޟ^]H}iyA|U7'0"N|fԛS[>JtVɋBӊuV+rdNҥat:= ~q`Ǡ(}\9 VG%m=ضЎCFJӭ7wxA'Ro3;#l[}||В~{+z -ۨ4z/VF'3~ ]F3(qEF{CL+G?)t6?@8ƣMJlWNסd=Cgl|$sjmm-R'εRl;rI ,w ?2QuᴞG{0s'Wq>=ɍe6?w[F=G׶Hb_;X{7/yWd`zk1mSFi]zC/;^w~.]Pe[~&EVS7۵q㺠G+0j-I'#;7N_5k#a<_vsAڱ걻GWL?75i_-{֜10bS e{W5|a[4٫[-LuWU}^*=C ۖGU19=O֎k/Ǎ< m^DIF MwU ţX: % {QluP3G9`MH#w%aKgSMhE7f.-_K>7@KQ-ez%ς],.=+ןB1j^/ӸQA/<.݈j;ppӉuIڲo_sh{&o 3o>\"#wV]!Kodp5TO/Ǽ8Ug*oM %Ž ?IK޲FE6]"F_r[ W{DJEB8n˩]{|%Lʥjf(U͑il{&& &&D61oZg$drF̬0-˰CF M--0R䉒7i[Hy9t(5ysZy J+ϪgbT~ kqmTK+Ӑ6Q|%r՛k_fn8O1ӆSj$z[z RnɼfK-gm`O٢~㗸ysUB,!wL9Wy͑~Qœ[e%roI?fyEop}Z\iq;ݓ6QZ{C.mN9i^9?gL\if =TfrL05E!{d6 endstream endobj 19 0 obj <<19800F86140BEF41A79E5081DBA819E5>] /Filter/FlateDecode/Length 78>> stream xc` P[^04L1[)u& 3bP 5=b?m endstream endobj xref 0 20 0000000010 65535 f 0000000017 00000 n 0000000125 00000 n 0000000181 00000 n 0000000445 00000 n 0000000656 00000 n 0000000824 00000 n 0000001063 00000 n 0000001116 00000 n 0000001169 00000 n 0000000011 65535 f 0000000012 65535 f 0000000013 65535 f 0000000014 65535 f 0000000015 65535 f 0000000016 65535 f 0000000000 65535 f 0000001775 00000 n 0000001984 00000 n 0000080868 00000 n trailer <<19800F86140BEF41A79E5081DBA819E5>] >> startxref 81145 %%EOF xref 0 0 trailer <<19800F86140BEF41A79E5081DBA819E5>] /Prev 81145/XRefStm 80868>> startxref 81701 %%EOFthe_silver_searcher-2.1.0/tests/is_binary_pdf.t000066400000000000000000000003421314774034700217020ustar00rootroot00000000000000Setup: $ . $TESTDIR/setup.sh $ cp $TESTDIR/is_binary.pdf . PDF files are binary. Do not search them by default: $ ag PDF [1] OK, search binary files $ ag --search-binary PDF Binary file is_binary.pdf matches. the_silver_searcher-2.1.0/tests/line_width.t000066400000000000000000000014211314774034700212170ustar00rootroot00000000000000Setup: $ . $TESTDIR/setup.sh $ printf "12345678901234567890123456789012345678901234567890\n" >> ./blah.txt Truncate to width inside input line length: $ ag -W 20 1 < ./blah.txt blah.txt:1:12345678901234567890 [...] Truncate to width inside input line length, long-form: $ ag --width 20 1 < ./blah.txt blah.txt:1:12345678901234567890 [...] 
Truncate to width outside input line length: $ ag -W 60 1 < ./blah.txt blah.txt:1:12345678901234567890123456789012345678901234567890 Truncate to width one less than input line length: $ ag -W 49 1 < ./blah.txt blah.txt:1:1234567890123456789012345678901234567890123456789 [...] Truncate to width exactly input line length: $ ag -W 50 1 < ./blah.txt blah.txt:1:12345678901234567890123456789012345678901234567890 the_silver_searcher-2.1.0/tests/list_file_types.t000066400000000000000000000104511314774034700222720ustar00rootroot00000000000000Setup: $ . $TESTDIR/setup.sh Language types are output: $ ag --list-file-types The following file types are supported: --actionscript .as .mxml --ada .ada .adb .ads --asciidoc .adoc .ad .asc .asciidoc --asm .asm .s --batch .bat .cmd --bitbake .bb .bbappend .bbclass .inc --bro .bro .bif --cc .c .h .xs --cfmx .cfc .cfm .cfml --chpl .chpl --clojure .clj .cljs .cljc .cljx --coffee .coffee .cjsx --cpp .cpp .cc .C .cxx .m .hpp .hh .h .H .hxx .tpp --crystal .cr .ecr --csharp .cs --css .css --cython .pyx .pxd .pxi --delphi .pas .int .dfm .nfm .dof .dpk .dpr .dproj .groupproj .bdsgroup .bdsproj --dot .dot .gv --ebuild .ebuild .eclass --elisp .el --elixir .ex .eex .exs --elm .elm --erlang .erl .hrl --factor .factor --fortran .f .f77 .f90 .f95 .f03 .for .ftn .fpp --fsharp .fs .fsi .fsx --gettext .po .pot .mo --glsl .vert .tesc .tese .geom .frag .comp --go .go --groovy .groovy .gtmpl .gpp .grunit .gradle --haml .haml --handlebars .hbs --haskell .hs .lhs --haxe .hx --hh .h --html .htm .html .shtml .xhtml --ini .ini --ipython .ipynb --jade .jade --java .java .properties --js .es6 .js .jsx .vue --json .json --jsp .jsp .jspx .jhtm .jhtml .jspf .tag .tagf --julia .jl --kotlin .kt --less .less --liquid .liquid --lisp .lisp .lsp --log .log --lua .lua --m4 .m4 --make .Makefiles .mk .mak --mako .mako --markdown .markdown .mdown .mdwn .mkdn .mkd .md --mason .mas .mhtml .mpl .mtxt --matlab .m --mathematica .m .wl --md .markdown .mdown .mdwn .mkdn .mkd .md --mercury .m .moo --nim .nim --nix .nix --objc .m .h --objcpp .mm .h --ocaml .ml .mli .mll .mly --octave .m --org .org --parrot .pir .pasm .pmc .ops .pod .pg .tg --perl .pl .pm .pm6 .pod .t --php .php .phpt .php3 .php4 .php5 .phtml --pike .pike .pmod --plist .plist --plone .pt .cpt .metadata .cpy .py .xml .zcml --proto .proto --puppet .pp --python .py --qml .qml --racket .rkt .ss .scm --rake .Rakefile --restructuredtext .rst --rs .rs --r .R .Rmd .Rnw .Rtex .Rrst --rdoc .rdoc --ruby .rb .rhtml .rjs .rxml .erb .rake .spec --rust .rs --salt .sls --sass .sass .scss --scala .scala --scheme .scm .ss --shell .sh .bash .csh .tcsh .ksh .zsh .fish --smalltalk .st --sml .sml .fun .mlb .sig --sql .sql .ctl --stylus .styl --swift .swift --tcl .tcl .itcl .itk --tex .tex .cls .sty --tt .tt .tt2 .ttml --toml .toml --ts .ts .tsx --twig .twig --vala .vala .vapi --vb .bas .cls .frm .ctl .vb .resx --velocity .vm .vtl .vsl --verilog .v .vh .sv --vhdl .vhd .vhdl --vim .vim --wix .wxi .wxs --wsdl .wsdl --wadl .wadl --xml .xml .dtd .xsl .xslt .ent .tld .plist --yaml .yaml .yml the_silver_searcher-2.1.0/tests/literal_word_regexp.t000066400000000000000000000013651314774034700231410ustar00rootroot00000000000000Setup: $ . 
$TESTDIR/setup.sh $ echo 'blah abc def' > blah1.txt $ echo 'abc blah def' > blah2.txt $ echo 'abc def blah' > blah3.txt $ echo 'abcblah def' > blah4.txt $ echo 'abc blahdef' >> blah4.txt $ echo 'blahx blah' > blah5.txt $ echo 'abcblah blah blah' > blah6.txt Match a word of the beginning: $ ag -wF --column 'blah' blah1.txt 1:1:blah abc def Match a middle word: $ ag -wF --column 'blah' blah2.txt 1:5:abc blah def Match a last word: $ ag -wF --column 'blah' blah3.txt 1:9:abc def blah No match: $ ag -wF --column 'blah' blah4.txt [1] Match: $ ag -wF --column 'blah' blah5.txt 1:7:blahx blah Case of a word repeating the same part: $ ag -wF --column 'blah blah' blah6.txt 1:9:abcblah blah blah the_silver_searcher-2.1.0/tests/max_count.t000066400000000000000000000016211314774034700210700ustar00rootroot00000000000000Setup: $ . $TESTDIR/setup.sh $ printf "blah\n" > blah.txt $ printf "blah2\n" >> blah.txt $ printf "blah2\n" > blah2.txt $ printf "blah2\n" >> blah2.txt $ printf "blah2\n" >> blah2.txt $ printf "blah2\n" >> blah2.txt $ printf "blah2\n" >> blah2.txt $ printf "blah2\n" >> blah2.txt $ printf "blah2\n" >> blah2.txt $ printf "blah2\n" >> blah2.txt $ printf "blah2\n" >> blah2.txt $ printf "blah2\n" >> blah2.txt # 10 lines Max match of 1: $ ag --max-count 1 blah blah.txt ERR: Too many matches in blah.txt. Skipping the rest of this file. 1:blah Max match of 10, one file: $ ag --count --max-count 10 blah blah2.txt ERR: Too many matches in blah2.txt. Skipping the rest of this file. 10 Max match of 10, multiple files: $ ag --count --max-count 10 blah blah.txt blah2.txt ERR: Too many matches in blah2.txt. Skipping the rest of this file. blah.txt:2 blah2.txt:10 the_silver_searcher-2.1.0/tests/multiline.t000066400000000000000000000005551314774034700211020ustar00rootroot00000000000000Setup: $ . $TESTDIR/setup.sh $ printf 'what\n' > blah.txt $ printf 'ever\n' >> blah.txt $ printf 'whatever\n' >> blah.txt Multiline: $ ag 'wh[^w]+er' . blah.txt:1:what blah.txt:2:ever blah.txt:3:whatever No multiline: $ ag --nomultiline 'wh[^w]+er' . blah.txt:3:whatever Multiline explicit: $ ag '^wh[^w\n]+er$' . blah.txt:3:whatever the_silver_searcher-2.1.0/tests/negated_options.t000066400000000000000000000017311314774034700222570ustar00rootroot00000000000000Setup: $ . "${TESTDIR}/setup.sh" Should accept both --no-